Showing posts with label Java Major Projects. Show all posts
Saturday, December 29, 2012
prakash chalumuri
IEEE Java Project - Detecting and Resolving Firewall Policy Anomalies
Detecting and Resolving Firewall Policy Anomalies
ABSTRACT:
The advent of emerging computing
technologies such as service-oriented architecture and cloud computing has
enabled us to perform business services more efficiently and effectively.
However, we still suffer from unintended security leakages by unauthorized
actions in business services. Firewalls are the most widely deployed security
mechanism to ensure the security of private networks in most businesses and
institutions. The effectiveness of security protection provided by a firewall
mainly depends on the quality of policy configured in the firewall.
Unfortunately, designing and managing firewall policies are often error prone
due to the complex nature of firewall configurations as well as the lack of
systematic analysis mechanisms and tools. In this paper, we present an
innovative policy anomaly management framework for firewalls, adopting a
rule-based segmentation technique to identify policy anomalies and derive
effective anomaly resolutions. In particular, we articulate a grid-based
representation technique, providing an intuitive cognitive sense about policy
anomalies. We also discuss a proof-of-concept implementation of a
visualization-based firewall policy analysis tool called Firewall Anomaly
Management Environment (FAME). In addition, we demonstrate how efficiently our approach
can discover and resolve anomalies in firewall policies through rigorous
experiments.
EXISTING SYSTEM:
Firewall policy management is a
challenging task due to the complexity and interdependency of policy rules.
This is further exacerbated by the continuous evolution of network and system
environments.
The process of configuring a
firewall is tedious and error prone. Therefore, effective mechanisms and tools
for policy management are crucial to the success of firewalls.
Existing policy analysis tools, such as Firewall Policy Advisor and FIREMAN, have been introduced with the goal of detecting policy anomalies. Firewall Policy Advisor only has the capability of detecting pairwise anomalies in firewall rules. FIREMAN can detect anomalies among multiple rules by analyzing the relationships between one rule and the collections of packet spaces derived from all preceding rules.
However, FIREMAN also has limitations
in detecting anomalies. For each firewall rule, FIREMAN only examines all
preceding rules but ignores all subsequent rules when performing anomaly analysis.
In addition, each analysis result from FIREMAN can only show that there is a misconfiguration
between one rule and its preceding rules, but cannot accurately indicate all rules
involved in an anomaly.
PROPOSED SYSTEM:
In this paper, we present a novel
anomaly management framework for firewalls based on a rule-based segmentation technique
to facilitate not only more accurate anomaly detection but also effective
anomaly resolution.
Based on this technique, a network
packet space defined by a firewall policy can be divided into a set of disjoint
packet space segments. Each segment, associated with a unique set of firewall
rules, accurately indicates an overlap relation (either conflicting or redundant)
among those rules.
We also introduce a flexible conflict
resolution method to enable a fine-grained conflict resolution with the help of
several effective resolution strategies with respect to the risk assessment of
protected networks and the intention of policy definition.
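As an illustration of the rule-based segmentation idea above, the following minimal Java sketch splits the packet space at rule boundaries and flags segments covered by more than one rule. The class and rule names are hypothetical, and a one-dimensional port-range model stands in for the full multi-field packet space used by FAME:

```java
import java.util.*;

/** Toy sketch of rule-based segmentation over a 1-D packet space (port numbers).
 *  Illustrative assumption: a single numeric field instead of the full 5-tuple. */
public class Segmentation {
    record Rule(String name, int lo, int hi, boolean allow) {}

    /** Split the space at every rule boundary, then report each disjoint
     *  segment matched by two or more rules as a conflict (actions differ)
     *  or a redundancy (actions agree). */
    static List<String> overlaps(List<Rule> rules) {
        TreeSet<Integer> cuts = new TreeSet<>();
        for (Rule r : rules) { cuts.add(r.lo()); cuts.add(r.hi() + 1); }
        List<String> out = new ArrayList<>();
        Integer[] pts = cuts.toArray(new Integer[0]);
        for (int i = 0; i + 1 < pts.length; i++) {
            int lo = pts[i], hi = pts[i + 1] - 1;
            List<Rule> hit = new ArrayList<>();
            for (Rule r : rules) if (r.lo() <= lo && hi <= r.hi()) hit.add(r);
            if (hit.size() < 2) continue;
            boolean conflict = hit.stream().map(Rule::allow).distinct().count() > 1;
            out.add("[" + lo + "," + hi + "] " + (conflict ? "conflict" : "redundancy"));
        }
        return out;
    }
}
```

In the real framework the segments are computed over all packet filter fields and feed the grid-based visualization; this sketch only shows the disjoint-segment idea.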
System Configuration:-

H/W System Configuration:-

- Processor     : Pentium III
- Speed         : 1.1 GHz
- RAM           : 256 MB (min)
- Hard Disk     : 20 GB
- Floppy Drive  : 1.44 MB
- Keyboard      : Standard Windows Keyboard
- Mouse         : Two- or Three-Button Mouse
- Monitor       : SVGA
S/W System Configuration:-

- Operating System : Windows 95/98/2000/XP
- Front End        : Java
REFERENCE:
Hongxin Hu, Student Member, IEEE, Gail-Joon Ahn, Senior Member, IEEE, and Ketan Kulkarni, "Detecting and Resolving Firewall Policy Anomalies", IEEE Transactions on Dependable and Secure Computing, Vol. 9, No. 3, May/June 2012.
Friday, December 28, 2012
IEEE Java Project - Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs
Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs
ABSTRACT:
The multihop routing in wireless
sensor networks (WSNs) offers little protection against identity deception
through replaying routing information. An adversary can exploit this defect to
launch various harmful or even devastating attacks against the routing protocols,
including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation
is further aggravated by mobile and harsh network conditions. Traditional
cryptographic techniques or efforts at developing trust-aware routing protocols
do not effectively address this severe problem. To secure the WSNs against
adversaries misdirecting the multihop routing, we have designed and implemented
TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight
time synchronization or known geographic information, TARF provides trustworthy
and energy-efficient routes. Most importantly, TARF proves effective against
those harmful attacks developed out of identity deception; the resilience of
TARF is verified through extensive evaluation with both simulation and
empirical experiments on large-scale WSNs under various scenarios including
mobile and RF-shielding network conditions. Further, we have implemented a
low-overhead TARF module in TinyOS; as demonstrated, this implementation can be
incorporated into existing routing protocols with the least effort. Based on
TARF, we also demonstrated a proof-of-concept mobile target detection application
that functions well against an anti-detection mechanism.
EXISTING SYSTEM:
In the existing system, the
multihop routing of WSNs often becomes the target of malicious attacks. An
attacker may tamper with nodes physically, create traffic collisions with seemingly
valid transmission, drop or misdirect messages in routes, or jam the
communication channel by creating radio interference.
PROPOSED SYSTEM:
In the proposed system, to secure
the WSNs against adversaries misdirecting the multihop routing, we have
designed and implemented TARF, a robust trust-aware routing framework for
dynamic WSNs.
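The trust-management idea behind such a framework can be sketched as an exponentially weighted update of a neighbor's trust level from observed delivery outcomes, combined with an energy-aware next-hop choice. The class name, the 0-to-1 trust scale, the smoothing factor, and the score formula below are illustrative assumptions, not TARF's exact definitions:

```java
/** Toy trust table for one sensor node; all parameter choices are
 *  illustrative assumptions, not TARF's published update rules. */
public class TrustTable {
    /** Exponentially weighted update: blend the old trust value with the
     *  latest delivery observation (1.0 = delivered, 0.0 = lost). */
    static double updateTrust(double oldTrust, boolean delivered, double alpha) {
        double observation = delivered ? 1.0 : 0.0;
        return (1 - alpha) * oldTrust + alpha * observation;
    }

    /** Pick the neighbor maximizing trust per unit of energy cost, so a
     *  highly trusted but expensive hop can lose to a cheaper honest one. */
    static int selectNextHop(double[] trust, double[] energyCostPerDelivery) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < trust.length; i++) {
            double score = trust[i] / energyCostPerDelivery[i];
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best;
    }
}
```

A node running this update penalizes an identity-spoofing neighbor whose advertised routes keep losing packets, which is the intuition behind TARF's resilience to sinkhole, wormhole, and Sybil attacks.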
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:

- System        : Pentium IV, 2.4 GHz
- Hard Disk     : 40 GB
- Floppy Drive  : 1.44 MB
- Monitor       : 15" VGA Colour
- Mouse         : Logitech
- RAM           : 512 MB
SOFTWARE REQUIREMENTS:

- Operating System : Windows XP
- Coding Language  : Java
REFERENCE:
Guoxing Zhan, Weisong Shi, and Julia Deng, "Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs", IEEE Transactions on Dependable and Secure Computing, Vol. 9, No. 2, March/April 2012.
IEEE Java Project - Cut Detection in Wireless Sensor Networks
ABSTRACT
A wireless sensor network can get separated into multiple connected
components due to the failure of some of its nodes, which is called a “cut”. In
this article we consider the problem of detecting cuts by the remaining nodes
of a wireless sensor network. We propose an algorithm that allows (i) every
node to detect when the connectivity to a specially designated node has been
lost, and (ii) one or more nodes (that are connected to the special node after
the cut) to detect the occurrence of the cut. The algorithm is distributed and
asynchronous: every node needs to communicate with only those nodes that are
within its communication range. The algorithm is based on the iterative
computation of a fictitious “electrical potential” of the nodes. The
convergence rate of the underlying iterative scheme is independent of the size
and structure of the network.
EXISTING SYSTEM
Wireless Multimedia Sensor Networks (WMSNs) face many challenges, such as the nature of wireless media and multimedia information transmission. Consequently, traditional mechanisms for the network layers are no longer acceptable or applicable for these networks. A wireless sensor network can get separated into multiple connected components due to the failure of some of its nodes, which is called a "cut". Existing cut detection systems are deployed only for wired networks.
Disadvantages

1. Unsuitable for dynamic network reconfiguration.
2. Single-path routing approach.
PROPOSED SYSTEM
Wireless
sensor networks (WSNs) are a promising technology for monitoring large regions
at high spatial and temporal resolution. Failure of a set of nodes will reduce
the number of multi-hop paths in the network. Such failures can cause a subset
of nodes – that have not failed – to become disconnected from the rest,
resulting in a “cut”. Two nodes are said to be disconnected if there is no path
between them. We consider the problem of detecting cuts by the nodes of a
wireless network. We assume that there is a specially designated node in the
network, which we call the source node. Since
a cut may or may not separate a node from the source node, we distinguish
between two distinct outcomes of a cut for a particular node. When a node u is
disconnected from the source, we say that a DOS (Disconnected from Source) event
has occurred for u. When a cut occurs in the network that does not separate a
node u from the source node, we say that CCOS (Connected, but a Cut
Occurred Somewhere) event has occurred for u. By cut detection we mean (i) detection
by each node of a DOS event when it occurs, and (ii) detection of CCOS
events by the nodes close to a cut, and the approximate location of the cut. In
this article we propose a distributed algorithm to detect cuts, named the Distributed Cut Detection (DCD)
algorithm. The algorithm allows each node to detect DOS events and a subset of
nodes to detect CCOS events. The algorithm we propose is distributed and asynchronous:
it involves only local communication between neighboring nodes, and is robust
to temporary communication failure between node pairs. The convergence rate of
the computation is independent of the size and structure of the network.
MODULE DESCRIPTION:
DISTRIBUTED CUT DETECTION:
The algorithm allows each node
to detect DOS events and a subset of nodes to detect CCOS events. The algorithm
we propose is distributed and asynchronous: it involves only local
communication between neighboring nodes, and is robust to temporary communication
failure between node pairs. A key component of the DCD algorithm is a
distributed iterative computational step through which the nodes compute their
(fictitious) electrical potentials. The convergence rate of the computation is
independent of the size and structure of the network.
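A minimal sketch of the fictitious electrical-potential iteration follows. The method name, the pinned source value, and the leak term are simplifying assumptions rather than the paper's exact update rule: the source node is held at a fixed potential, every other node repeatedly averages its neighbors, and nodes cut off from the source see their potential decay toward zero, signalling a DOS event.

```java
/** Toy version of the DCD potential computation on an adjacency matrix.
 *  Assumptions: synchronous rounds and a global view, whereas the real
 *  algorithm is asynchronous and purely local. */
public class PotentialSketch {
    static double[] iterate(boolean[][] adj, int source, int rounds) {
        int n = adj.length;
        double[] v = new double[n];
        v[source] = 1.0; // source pinned at a fixed positive potential
        for (int r = 0; r < rounds; r++) {
            double[] next = new double[n];
            next[source] = 1.0;
            for (int i = 0; i < n; i++) {
                if (i == source) continue;
                double sum = 0; int deg = 0;
                for (int j = 0; j < n; j++) if (adj[i][j]) { sum += v[j]; deg++; }
                // Divide by deg + 1 so components cut off from the source
                // leak potential and decay toward zero over the rounds.
                next[i] = deg == 0 ? 0 : sum / (deg + 1);
            }
            v = next;
        }
        return v;
    }
}
```

A node would flag a DOS event when its own potential falls below a small threshold, while nodes still connected to the source settle at a strictly positive value.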
CUT:
Wireless sensor networks
(WSNs) are a promising technology for
monitoring large regions at high spatial and temporal resolution. In
fact, node failure is expected to be quite common due to the typically limited
energy budget of the nodes that are powered by small batteries. Failure of a
set of nodes will reduce the number of multi-hop paths in the network. Such
failures can cause a subset of nodes – that have not failed – to become
disconnected from the rest, resulting in a “cut”. Two nodes are said to be
disconnected if there is no path between them.
SOURCE NODE:
We
consider the problem of detecting cuts by the nodes of a wireless network. We
assume that there is a specially designated node in the network, which we call
the source node. The source node may be a base station that serves as an
interface between the network and its users. Since a cut may or may not separate
a node from the source node, we distinguish between two distinct outcomes of a
cut for a particular node.
CCOS AND DOS:
When a
node u is disconnected from the source, we say that a DOS (Disconnected
frOm Source) event has occurred for u. When a cut occurs in the network that does
not separate a node u from the source node, we say that CCOS (Connected, but a Cut
Occurred Somewhere) event has occurred for u. By cut detection
we mean (i) detection by each node of a DOS event when it occurs, and (ii)
detection of CCOS events by the nodes close to a cut, and the approximate
location of the cut.
NETWORK SEPARATION:
Failure of a set of
nodes will reduce the number of multi-hop paths in the network. Such failures
can cause a subset of nodes – that have not failed – to become disconnected
from the rest, resulting in a "cut". Because of a cut, some nodes may be separated from the network; as a result, the separated nodes cannot receive data from the source node.
System Configuration:-

H/W System Configuration:-

- Processor     : Pentium III
- Speed         : 1.1 GHz
- RAM           : 256 MB (min)
- Hard Disk     : 20 GB
- Floppy Drive  : 1.44 MB
- Keyboard      : Standard Windows Keyboard
- Mouse         : Two- or Three-Button Mouse
- Monitor       : SVGA

S/W System Configuration:-

- Operating System : Windows XP
- Front End        : Java, RMI, Swing
CONCLUSION
The DCD
algorithm we propose here enables every node of a wireless sensor network to
detect DOS (Disconnected frOm Source) events if they occur. Second, it enables
a subset of nodes that experience CCOS (Connected, but Cut Occurred Somewhere)
events to detect them and estimate the approximate location of the cut in the
form of a list of active nodes that lie at the boundary of the cut/hole. The
DOS and CCOS events are defined with respect to a specially designated source node.
The algorithm is based on ideas from electrical network theory and parallel
iterative solution of linear equations. Numerical simulations, as well as
experimental evaluation on a real WSN system consisting of MicaZ motes, show
that the algorithm works effectively with a large class of graphs of varying
size and structure, without requiring changes in the parameters. For certain
scenarios, the algorithm is assured to detect connection and disconnection to
the source node without error. A key strength of the DCD algorithm is that the
convergence rate of the underlying iterative scheme is quite fast and independent
of the size and structure of the network, which makes detection using this
algorithm quite fast. Application of the DCD algorithm to detect node
separation and re-connection to the source in mobile networks is a topic of
ongoing research.
IEEE Java Project - Clustering with Multi-Viewpoint based Similarity Measure
Clustering with Multi-Viewpoint based Similarity Measure
ABSTRACT:
All clustering methods have to assume some cluster relationship among the
data objects that they are applied on. Similarity between a pair of objects can
be defined either explicitly or implicitly. In this paper, we introduce a novel
multi-viewpoint based similarity measure and two related clustering methods.
The major difference between a traditional dissimilarity/similarity measure and
ours is that the former uses only a single viewpoint, which is the origin,
while the latter utilizes many different viewpoints, which are objects assumed
to not be in the same cluster with the two objects being measured. Using
multiple viewpoints, more informative assessment of similarity could be achieved.
Theoretical analysis and empirical study are conducted to support this claim.
Two criterion functions for document clustering are proposed based on this new
measure. We compare them with several well-known clustering algorithms that use
other popular similarity measures on various document collections to verify the
advantages of our proposal.
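A simplified reading of the measure can be sketched in Java: the pair of documents is evaluated from each viewpoint (an object assumed to lie outside their cluster) by taking the dot product of the two difference vectors, then averaging over viewpoints. The names and the plain dot product are illustrative assumptions, not the paper's exact criterion functions:

```java
/** Toy multi-viewpoint similarity; a sketch, not the paper's exact measure. */
public class MultiViewpoint {
    /** Average, over all viewpoints v, of (a - v) . (b - v): the pair looks
     *  similar if it points the same way when seen from many outside objects,
     *  rather than only from the origin as in plain cosine similarity. */
    static double mvs(double[] a, double[] b, double[][] viewpoints) {
        double total = 0;
        for (double[] v : viewpoints) {
            double dot = 0;
            for (int i = 0; i < a.length; i++) dot += (a[i] - v[i]) * (b[i] - v[i]);
            total += dot;
        }
        return total / viewpoints.length;
    }
}
```

With a single viewpoint at the origin this reduces to the ordinary dot product, which is why the traditional measure is described above as the single-viewpoint special case.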
EXISTING SYSTEMS

- Clustering is one of the most interesting and important topics in data mining. The aim of clustering is to find intrinsic structures in data and organize them into meaningful subgroups for further study and analysis. Many clustering algorithms are published every year.
- Existing systems greedily pick the next frequent item set, which represents the next cluster, to minimize the overlap between the documents that contain both that item set and some remaining item sets.
- In other words, the clustering result depends on the order of picking up the item sets, which in turn depends on the greedy heuristic. This method does not follow a sequential order of selecting clusters; instead, documents are assigned to the best cluster.
PROPOSED SYSTEM

- The main work is to develop a novel hierarchical algorithm for document clustering that provides maximum efficiency and performance.
- It is particularly focused on studying and making use of the cluster-overlapping phenomenon to design cluster-merging criteria: a new way to compute the overlap rate is proposed in order to improve time efficiency and accuracy. Based on the hierarchical clustering method, the Expectation-Maximization (EM) algorithm in a Gaussian mixture model is used to estimate the parameters and to merge two sub-clusters when their overlap is largest.
- Experiments on both public data and document clustering data show that this approach can improve the efficiency of clustering and save computing time.

Given a data set satisfying the distribution of a mixture of Gaussians, the degree of overlap between components affects the number of clusters "perceived" by a human operator or detected by a clustering algorithm. In other words, there may be a significant difference between intuitively defined clusters and the true clusters corresponding to the components in the mixture.
MODULES

- HTML Parser
- Cumulative Document
- Document Similarity
- Clustering
MODULE DESCRIPTION:

HTML Parser

- Parsing is the first step done when the document enters the process state.
- Parsing is defined as the separation or identification of meta tags in an HTML document.
- Here, the raw HTML file is read and parsed through all the nodes in the tree structure.
Cumulative Document

- The cumulative document is the sum of all the documents, containing the meta-tags from all the documents.
- We find the references (to other pages) in the input base document, read those documents, then find references in them, and so on.
- Thus the meta-tags of all the documents are identified, starting from the base document.
Document Similarity

- The similarity between two documents is found by the cosine-similarity measure technique.
- The weights in the cosine similarity are found from the TF-IDF measure between the phrases (meta-tags) of the two documents.
- This is done by computing the term weights involved:
·
TF = C / T
·
IDF = D / DF.
D à quotient of the total number of
documents
DF à number of times each word is found
in the entire corpus
C à quotient of no of times a word
appears in each document
T à total number of words in the document
· TFIDF = TF *
IDF
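The formulas above reduce to a single weight per term; a small Java helper (hypothetical name) makes the arithmetic concrete. Note that it follows this write-up's definition, without the logarithm that many TF-IDF variants apply to the IDF factor:

```java
/** TF-IDF weight of one term, exactly as defined in the write-up above. */
public class TfIdf {
    static double tfidf(int c, int t, int d, int df) {
        double tf = (double) c / t;   // TF = C / T
        double idf = (double) d / df; // IDF = D / DF (no log, per this write-up)
        return tf * idf;
    }
}
```

For example, a word appearing 3 times in a 100-word document, with 10 documents in the corpus and 2 occurrences of the word overall, gets weight (3/100) * (10/2) = 0.15.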
Clustering

- Clustering is a division of data into groups of similar objects.
- Representing the data by fewer clusters necessarily loses certain fine details, but achieves simplification.
- Similar documents are grouped together in a cluster if their cosine similarity measure exceeds a specified threshold.
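The cosine measure used for this grouping can be sketched directly over two TF-IDF weight vectors (hypothetical class name):

```java
/** Cosine similarity between two TF-IDF weight vectors: the dot product
 *  normalized by both vector lengths, giving 1 for identical directions
 *  and 0 for documents sharing no weighted terms. */
public class Cosine {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```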
SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

- System        : Pentium IV, 2.4 GHz
- Hard Disk     : 40 GB
- Floppy Drive  : 1.44 MB
- Monitor       : 15" VGA Colour
- Mouse         : Logitech
- RAM           : 512 MB

SOFTWARE REQUIREMENTS:

- Operating System : Windows XP
- Coding Language  : Java
REFERENCE:
Duc Thang Nguyen, Lihui Chen, and Chee Keong Chan, "Clustering with Multi-Viewpoint based Similarity Measure", IEEE Transactions on Knowledge and Data Engineering, Vol. 24, No. 6, June 2012.
Thursday, December 27, 2012
IEEE Java Project - CLOUD DATA PROTECTION FOR THE MASSES
ABSTRACT
Offering strong data
protection to cloud users while enabling rich applications is a challenging task.
We explore a new cloud platform architecture called Data Protection as a
Service, which dramatically reduces the per-application development effort
required to offer data protection, while still allowing rapid development and
maintenance.
EXISTING SYSTEM
Cloud computing promises lower costs, rapid
scaling, easier maintenance, and service availability anywhere, anytime, a key
challenge is how to ensure and build confidence that the cloud can handle user
data securely. A recent Microsoft survey found that “58 percent of the public
and 86 percent of business leaders are excited about the possibilities of cloud
computing. But more than 90 percent of them are worried about security,
availability, and privacy of their data as it rests in the cloud.”
PROPOSED SYSTEM
We propose a new cloud computing paradigm, data protection
as a service (DPaaS) is a suite of security primitives offered by a
cloud platform, which enforces data security and privacy and offers
evidence of privacy to data owners, even in the presence of potentially
compromised or malicious applications. Such
as secure data using encryption, logging, key management.
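The encryption primitive can be illustrated with the standard `javax.crypto` API. This is a minimal sketch, not the DPaaS design itself: the class and method names are hypothetical, and the JDK's default AES cipher (ECB mode with PKCS5 padding) is used only for brevity; a real deployment would use an authenticated mode and platform-managed keys.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

/** Minimal encrypt/decrypt pair standing in for the DPaaS "secure data
 *  using encryption" primitive; checked exceptions are wrapped to keep
 *  the sketch short. */
public class DataVault {
    static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("AES").generateKey();
        } catch (Exception e) { throw new RuntimeException(e); }
    }
    static byte[] encrypt(SecretKey key, byte[] plain) {
        try {
            Cipher c = Cipher.getInstance("AES"); // default: AES/ECB/PKCS5Padding
            c.init(Cipher.ENCRYPT_MODE, key);
            return c.doFinal(plain);
        } catch (Exception e) { throw new RuntimeException(e); }
    }
    static byte[] decrypt(SecretKey key, byte[] cipher) {
        try {
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.DECRYPT_MODE, key);
            return c.doFinal(cipher);
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

In the DPaaS framing, the key would never be handed to the application directly: the platform performs the encryption, logs each access, and manages the keys on the data owner's behalf.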
MODULE DESCRIPTION:

1. Cloud Computing
2. Trusted Platform Module
3. Third Party Auditor
4. User Module
1. Cloud Computing

Cloud computing is the provision of dynamically scalable and often virtualized resources as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Cloud computing represents a major change in how we store information and run applications: instead of hosting apps and data on an individual desktop computer, everything is hosted in the "cloud", an assemblage of computers and servers accessed via the Internet.
Cloud computing exhibits the following key characteristics:

1. Agility improves with users' ability to re-provision technological infrastructure resources.
2. Multi-tenancy enables sharing of resources and costs across a large pool of users, allowing for utilization and efficiency improvements in systems that are often only 10-20% utilized.
3. Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
4. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
5. Security could improve due to the centralization of data and increased security-focused resources, but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than in traditional systems, in part because providers can devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
6. Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
2. Trusted Platform Module
Trusted
Platform Module (TPM) is both
the name of a published specification
detailing a secure crypto
processor that can store cryptographic keys that
protect information, as well as the general name of implementations of that
specification, often called the "TPM chip" or "TPM Security
Device". The TPM specification is the work of the Trusted Computing
Group.
Disk
encryption is a technology which protects information by converting it into
unreadable code that cannot be deciphered easily by unauthorized people. Disk
encryption uses disk encryption
software or hardware
to encrypt every bit
of data that goes on a disk or disk volume. Disk
encryption prevents unauthorized access to data storage. The term "full
disk encryption" (or whole disk encryption) is often used to
signify that everything on a disk is encrypted, including the programs that can
encrypt bootable operating system partitions. But they must still leave the master boot record
(MBR), and thus part of the disk, unencrypted. There are, however, hardware-based
full disk encryption systems that can truly encrypt the entire boot
disk, including the MBR.
3. Third Party Auditor

In this module, the auditor views all user data, verifies it, and reviews any changes. The auditor views user data directly, without a key; the admin grants the auditor this permission. After the data has been audited, it is stored to the cloud.
4. User Module

Users store large amounts of data in the cloud and access it with a secure key, which the admin provides after the data has been encrypted. The data is encrypted using the TPM. Users store data after the auditor has viewed and verified it; when a user views the data again, the admin notifies the user of any changed data.
System Configuration:-

H/W System Configuration:-

- Processor     : Pentium III
- Speed         : 1.1 GHz
- RAM           : 256 MB (min)
- Hard Disk     : 20 GB
- Floppy Drive  : 1.44 MB
- Keyboard      : Standard Windows Keyboard
- Mouse         : Two- or Three-Button Mouse
- Monitor       : SVGA

S/W System Configuration:-

- Operating System      : Windows 95/98/2000/XP
- Application Server    : Tomcat 5.0/6.x
- Front End             : HTML, Java, JSP
- Scripts               : JavaScript
- Server-side Script    : Java Server Pages
- Database              : MySQL
- Database Connectivity : JDBC
CONCLUSION
As private data moves online, the need to
secure it properly becomes increasingly urgent. The good news is that the same
forces concentrating data in enormous datacenters will also aid in using
collective security expertise more effectively. Adding protections to a single
cloud platform can immediately benefit hundreds of thousands of applications
and, by extension, hundreds of millions of users. While we have focused here on
a particular, albeit popular and privacy-sensitive, class of applications, many
other applications also need solutions.