Showing posts with label Java Projects. Show all posts
Saturday, December 29, 2012
prakash chalumuri
IEEE Java Project - Detecting and Resolving Firewall Policy Anomalies
ABSTRACT:
The advent of emerging computing
technologies such as service-oriented architecture and cloud computing has
enabled us to perform business services more efficiently and effectively.
However, we still suffer from unintended security leakages by unauthorized
actions in business services. Firewalls are the most widely deployed security
mechanism to ensure the security of private networks in most businesses and
institutions. The effectiveness of security protection provided by a firewall
mainly depends on the quality of policy configured in the firewall.
Unfortunately, designing and managing firewall policies are often error prone
due to the complex nature of firewall configurations as well as the lack of
systematic analysis mechanisms and tools. In this paper, we present an
innovative policy anomaly management framework for firewalls, adopting a
rule-based segmentation technique to identify policy anomalies and derive
effective anomaly resolutions. In particular, we articulate a grid-based
representation technique, providing an intuitive cognitive sense about policy
anomaly. We also discuss a proof-of-concept implementation of a
visualization-based firewall policy analysis tool called Firewall Anomaly
Management Environment (FAME). In addition, we demonstrate how efficiently our approach
can discover and resolve anomalies in firewall policies through rigorous
experiments.
EXISTING SYSTEM:
Firewall policy management is a
challenging task due to the complexity and interdependency of policy rules.
This is further exacerbated by the continuous evolution of network and system
environments.
The process of configuring a
firewall is tedious and error prone. Therefore, effective mechanisms and tools
for policy management are crucial to the success of firewalls.
Existing policy analysis tools, such as Firewall Policy Advisor and FIREMAN, have been introduced with the goal of detecting policy anomalies. Firewall Policy Advisor only has the capability of detecting pairwise anomalies in firewall rules. FIREMAN can detect anomalies
among multiple rules by analyzing the relationships between one rule and the
collections of packet spaces derived from all preceding rules.
However, FIREMAN also has limitations
in detecting anomalies. For each firewall rule, FIREMAN only examines all
preceding rules but ignores all subsequent rules when performing anomaly analysis.
In addition, each analysis result from FIREMAN can only show that there is a misconfiguration
between one rule and its preceding rules, but cannot accurately indicate all rules
involved in an anomaly.
PROPOSED SYSTEM:
In this paper, we present a novel
anomaly management framework for firewalls based on a rule-based segmentation technique
to facilitate not only more accurate anomaly detection but also effective
anomaly resolution.
Based on this technique, a network
packet space defined by a firewall policy can be divided into a set of disjoint
packet space segments. Each segment associated with a unique set of firewall
rules accurately indicates an overlap relation (either conflicting or redundant)
among those rules.
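To make the segmentation idea concrete, here is a toy sketch (not the FAME implementation): firewall rules over a single dimension (destination ports) are cut at rule boundaries into disjoint segments, and any segment covered by more than one rule is reported as redundant (same action) or conflicting (different actions). The `Rule` type and the port-only packet space are simplifying assumptions for illustration.

```java
import java.util.*;

public class SegmentDemo {
    // Hypothetical rule: a [lo, hi] destination-port range with an allow/deny action.
    public record Rule(String name, int lo, int hi, boolean allow) {}

    /** Returns one "[lo,hi] kind" string per segment covered by 2+ rules. */
    public static List<String> overlaps(List<Rule> rules) {
        // Collect boundary points; each interval between consecutive
        // boundaries is one disjoint packet-space segment.
        TreeSet<Integer> cuts = new TreeSet<>();
        for (Rule r : rules) { cuts.add(r.lo); cuts.add(r.hi + 1); }
        Integer[] b = cuts.toArray(new Integer[0]);
        List<String> out = new ArrayList<>();
        for (int i = 0; i + 1 < b.length; i++) {
            int lo = b[i], hi = b[i + 1] - 1;
            List<Rule> covering = new ArrayList<>();
            for (Rule r : rules) if (r.lo <= lo && hi <= r.hi) covering.add(r);
            if (covering.size() > 1) {
                // Same action on an overlap => redundancy; different => conflict.
                boolean same = covering.stream().map(Rule::allow).distinct().count() == 1;
                out.add("[" + lo + "," + hi + "] " + (same ? "redundant" : "conflicting"));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(overlaps(List.of(
            new Rule("r1", 0, 100, true),
            new Rule("r2", 50, 150, false)))); // ports 50-100 hit both rules
    }
}
```

Because the segments are disjoint, every rule involved in an anomaly is attached to the segment that exhibits it, which is what lets the approach report all participating rules rather than a single rule pair.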
We also introduce a flexible conflict
resolution method to enable a fine-grained conflict resolution with the help of
several effective resolution strategies with respect to the risk assessment of
protected networks and the intention of policy definition.
System Configuration:-

H/W System Configuration:-

Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA
S/W System Configuration:-

Operating System : Windows 95/98/2000/XP
Front End : Java
REFERENCE:
Hongxin Hu, Student Member, IEEE, Gail-Joon Ahn, Senior Member, IEEE, and Ketan Kulkarni, "Detecting and Resolving Firewall Policy Anomalies", IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 9, NO. 3, MAY/JUNE 2012.
Friday, December 28, 2012
prakash chalumuri
IEEE Java Project - Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs
ABSTRACT:
The multihop routing in wireless
sensor networks (WSNs) offers little protection against identity deception
through replaying routing information. An adversary can exploit this defect to
launch various harmful or even devastating attacks against the routing protocols,
including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation
is further aggravated by mobile and harsh network conditions. Traditional
cryptographic techniques or efforts at developing trust-aware routing protocols
do not effectively address this severe problem. To secure the WSNs against
adversaries misdirecting the multihop routing, we have designed and implemented
TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight
time synchronization or known geographic information, TARF provides trustworthy
and energy-efficient routes. Most importantly, TARF proves effective against
those harmful attacks developed out of identity deception; the resilience of
TARF is verified through extensive evaluation with both simulation and
empirical experiments on large-scale WSNs under various scenarios including
mobile and RF-shielding network conditions. Further, we have implemented a
low-overhead TARF module in TinyOS; as demonstrated, this implementation can be
incorporated into existing routing protocols with the least effort. Based on
TARF, we also demonstrated a proof-of-concept mobile target detection application
that functions well against an anti-detection mechanism.
EXISTING SYSTEM:
In the existing system, the
multihop routing of WSNs often becomes the target of malicious attacks. An
attacker may tamper with nodes physically, create traffic collisions with seemingly
valid transmission, drop or misdirect messages in routes, or jam the
communication channel by creating radio interference.
PROPOSED SYSTEM:
In the proposed system, to secure
the WSNs against adversaries misdirecting the multihop routing, we have
designed and implemented TARF, a robust trust-aware routing framework for
dynamic WSNs.
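TARF's design weighs both a neighbor's trust level and its energy cost when choosing a next hop. The sketch below is a simplified illustration of that idea, not the TinyOS module: each candidate neighbor carries an assumed trust value in (0,1] and an energy cost per delivered packet, and the node forwards to the neighbor minimizing cost divided by trust, so a low-trust (possibly identity-faking) neighbor loses even when it advertises a cheap route.

```java
import java.util.*;

public class TrustRouting {
    // Illustrative neighbor record: trust in (0,1], energy cost per delivery.
    public record Neighbor(String id, double trust, double energyCost) {}

    /** Pick the neighbor with the best energy-per-unit-trust trade-off. */
    public static String nextHop(List<Neighbor> neighbors) {
        return neighbors.stream()
            .min(Comparator.comparingDouble(n -> n.energyCost() / n.trust()))
            .map(Neighbor::id).orElseThrow();
    }

    public static void main(String[] args) {
        // B advertises a cheaper route, but its low trust penalizes it:
        System.out.println(nextHop(List.of(
            new Neighbor("A", 0.9, 10.0),    // 10 / 0.9 ≈ 11.1
            new Neighbor("B", 0.2, 5.0))));  // 5 / 0.2 = 25.0 -> A wins
    }
}
```

In TARF proper, trust levels are updated from observed delivery behavior rather than fixed, which is what lets the framework route around sinkhole, wormhole, and Sybil attackers over time.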
SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15" VGA Colour
Mouse : Logitech
RAM : 512 MB
SOFTWARE REQUIREMENTS:

Operating System : Windows XP
Coding Language : JAVA
REFERENCE:
Guoxing Zhan, Weisong Shi, and Julia Deng, "Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs", IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 9, NO. 2, MARCH/APRIL 2012.
prakash chalumuri
IEEE Java Project - Cut Detection in Wireless Sensor Networks
ABSTRACT
A wireless sensor network can get separated into multiple connected
components due to the failure of some of its nodes, which is called a “cut”. In
this article we consider the problem of detecting cuts by the remaining nodes
of a wireless sensor network. We propose an algorithm that allows (i) every
node to detect when the connectivity to a specially designated node has been
lost, and (ii) one or more nodes (that are connected to the special node after
the cut) to detect the occurrence of the cut. The algorithm is distributed and
asynchronous: every node needs to communicate with only those nodes that are
within its communication range. The algorithm is based on the iterative
computation of a fictitious “electrical potential” of the nodes. The
convergence rate of the underlying iterative scheme is independent of the size
and structure of the network.
EXISTING SYSTEM
Wireless Multimedia Sensor Networks (WMSNs) face many challenges, such as the nature of wireless media and multimedia information transmission. Consequently, traditional mechanisms for network layers are no longer acceptable or applicable to these networks. A wireless sensor network can get separated into multiple connected components due to the failure of some of its nodes, which is called a "cut". Existing cut detection systems were deployed only for wired networks.
Disadvantages

1. Unsuitable for dynamic network reconfiguration.
2. Single path routing approach.
PROPOSED SYSTEM
Wireless sensor networks (WSNs) are a promising technology for monitoring large regions at high spatial and temporal resolution. Failure of a set of nodes will reduce the number of multi-hop paths in the network. Such failures can cause a subset of nodes – that have not failed – to become disconnected from the rest, resulting in a "cut". Two nodes are said to be disconnected if there is no path between them. We consider the problem of detecting cuts by the nodes of a wireless network. We assume that there is a specially designated node in the network, which we call the source node. Since a cut may or may not separate a node from the source node, we distinguish between two distinct outcomes of a cut for a particular node. When a node u is disconnected from the source, we say that a DOS (Disconnected frOm Source) event has occurred for u. When a cut occurs in the network that does not separate a node u from the source node, we say that a CCOS (Connected, but a Cut Occurred Somewhere) event has occurred for u. By cut detection we mean (i) detection by each node of a DOS event when it occurs, and (ii) detection of CCOS events by the nodes close to a cut, along with the approximate location of the cut. In this article we propose a distributed algorithm to detect cuts, named the Distributed Cut Detection (DCD) algorithm. The algorithm allows each node to detect DOS events and a subset of nodes to detect CCOS events. The algorithm we propose is distributed and asynchronous: it involves only local communication between neighboring nodes, and is robust to temporary communication failure between node pairs. The convergence rate of the computation is independent of the size and structure of the network.
MODULE DESCRIPTION:
DISTRIBUTED CUT DETECTION:
The algorithm allows each node
to detect DOS events and a subset of nodes to detect CCOS events. The algorithm
we propose is distributed and asynchronous: it involves only local
communication between neighboring nodes, and is robust to temporary communication
failure between node pairs. A key component of the DCD algorithm is a
distributed iterative computational step through which the nodes compute their
(fictitious) electrical potentials. The convergence rate of the computation is
independent of the size and structure of the network.
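The iteration can be illustrated with a small sketch (a simplification of the paper's scheme, with an illustrative graph, source index, and source "current" s): every node repeatedly sets its state to the sum of its neighbors' states divided by (degree + 1), and the source node adds the constant s. Nodes connected to the source converge to positive potentials, while a node cut off from the source decays to zero, which signals a DOS event.

```java
import java.util.*;

public class PotentialDemo {
    /** Run the averaging iteration and return the nodes' potentials. */
    public static double[] iterate(List<int[]> edges, int n, int source, int steps) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        double s = 100.0;              // illustrative current injected at the source
        double[] x = new double[n];
        for (int t = 0; t < steps; t++) {
            double[] next = new double[n];
            for (int v = 0; v < n; v++) {
                double sum = (v == source) ? s : 0.0;
                for (int u : adj.get(v)) sum += x[u];
                next[v] = sum / (adj.get(v).size() + 1); // local averaging step
            }
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        // Path 0-1-2 with node 0 as source, then the same network after
        // the 1-2 link fails and node 2 is cut off from the source.
        List<int[]> connected = List.of(new int[]{0, 1}, new int[]{1, 2});
        List<int[]> cut = List.of(new int[]{0, 1});
        System.out.println(iterate(connected, 3, 0, 500)[2]); // positive (≈12.5)
        System.out.println(iterate(cut, 3, 0, 500)[2]);       // decays to 0 -> DOS
    }
}
```

Each update uses only a node's immediate neighbors, matching the distributed, asynchronous character of DCD; the detection rule is simply whether a node's steady-state potential stays positive.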
CUT:
Wireless sensor networks
(WSNs) are a promising technology for
monitoring large regions at high spatial and temporal resolution. In
fact, node failure is expected to be quite common due to the typically limited
energy budget of the nodes that are powered by small batteries. Failure of a
set of nodes will reduce the number of multi-hop paths in the network. Such
failures can cause a subset of nodes – that have not failed – to become
disconnected from the rest, resulting in a “cut”. Two nodes are said to be
disconnected if there is no path between them.
SOURCE NODE:
We
consider the problem of detecting cuts by the nodes of a wireless network. We
assume that there is a specially designated node in the network, which we call
the source node. The source node may be a base station that serves as an
interface between the network and its users.Since a cut may or may not separate
a node from the source node, we distinguish between two distinct outcomes of a
cut for a particular node.
CCOS AND DOS:
When a
node u is disconnected from the source, we say that a DOS (Disconnected
frOm Source) event has occurred for u. When a cut occurs in the network that does
not separate a node u from the source node, we say that CCOS (Connected, but a Cut
Occurred Somewhere) event has occurred for u. By cut detection
we mean (i) detection by each node of a DOS event when it occurs, and (ii)
detection of CCOS events by the nodes close to a cut, and the approximate
location of the cut.
NETWORK SEPARATION:
Failure of a set of nodes will reduce the number of multi-hop paths in the network. Such failures can cause a subset of nodes – that have not failed – to become disconnected from the rest, resulting in a "cut". Because of a cut, some nodes may be separated from the network; as a result, the separated nodes cannot receive data from the source node.
System Configuration:-

H/W System Configuration:-

Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:-

Operating System : Windows XP
Front End : Java, RMI, Swing
CONCLUSION
The DCD
algorithm we propose here enables every node of a wireless sensor network to
detect DOS (Disconnected frOm Source) events if they occur. Second, it enables
a subset of nodes that experience CCOS (Connected, but Cut Occurred Somewhere)
events to detect them and estimate the approximate location of the cut in the
form of a list of active nodes that lie at the boundary of the cut/hole. The
DOS and CCOS events are defined with respect to a specially designated source node.
The algorithm is based on ideas from electrical network theory and parallel
iterative solution of linear equations. Numerical simulations, as well as
experimental evaluation on a real WSN system consisting of micaZ motes, show
that the algorithm works effectively with a large class of graphs of varying
size and structure, without requiring changes in the parameters. For certain
scenarios, the algorithm is assured to detect connection and disconnection to
the source node without error. A key strength of the DCD algorithm is that the
convergence rate of the underlying iterative scheme is quite fast and independent
of the size and structure of the network, which makes detection using this
algorithm quite fast. Application of the DCD algorithm to detect node
separation and re-connection to the source in mobile networks is a topic of
ongoing research.
Thursday, December 27, 2012
prakash chalumuri
IEEE Java Project - CLOUD DATA PROTECTION FOR MASSES
ABSTRACT
Offering strong data
protection to cloud users while enabling rich applications is a challenging task.
We explore a new cloud platform architecture called Data Protection as a
Service, which dramatically reduces the per-application development effort
required to offer data protection, while still allowing rapid development and
maintenance.
EXISTING SYSTEM
Cloud computing promises lower costs, rapid scaling, easier maintenance, and service availability anywhere, anytime. A key challenge is how to ensure and build confidence that the cloud can handle user data securely. A recent Microsoft survey found that "58 percent of the public and 86 percent of business leaders are excited about the possibilities of cloud computing. But more than 90 percent of them are worried about security, availability, and privacy of their data as it rests in the cloud."
PROPOSED SYSTEM
We propose a new cloud computing paradigm, data protection as a service (DPaaS): a suite of security primitives offered by a cloud platform, which enforces data security and privacy and offers evidence of privacy to data owners, even in the presence of potentially compromised or malicious applications. These primitives include securing data using encryption, logging, and key management.
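DPaaS itself is a platform architecture rather than a code artifact, but the kind of primitive it builds on can be sketched with the standard Java crypto API: encrypt user data under a platform-held key before it reaches application code, and emit an audit log entry for the operation. The key handling and log format below are illustrative assumptions, not the paper's design.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class DpaasSketch {
    /** AES-GCM encrypt or decrypt, depending on the cipher mode passed in. */
    public static byte[] crypt(int mode, SecretKey key, byte[] iv, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(mode, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey(); // platform-held key
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);                 // fresh nonce per record

        byte[] ct = crypt(Cipher.ENCRYPT_MODE, key, iv, "user record".getBytes(StandardCharsets.UTF_8));
        System.out.println("AUDIT: stored " + ct.length + " encrypted bytes"); // logging hook

        byte[] pt = crypt(Cipher.DECRYPT_MODE, key, iv, ct);
        System.out.println(new String(pt, StandardCharsets.UTF_8));
    }
}
```

Because the key never leaves the platform layer in this sketch, an application that is handed only ciphertext cannot leak the plaintext, which is the property DPaaS aims to provide to every hosted application at once.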
MODULE DESCRIPTION:
1. Cloud Computing
2. Trusted Platform Module
3. Third Party Auditor
4.
User Module
1. Cloud Computing

Cloud computing is the provision of dynamically scalable and often virtualized resources as services over the internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Cloud computing represents a major change in how we store information and run applications. Instead of hosting apps and data on an individual desktop computer, everything is hosted in the "cloud": an assemblage of computers and servers accessed via the Internet.
Cloud computing exhibits the following key characteristics:

1. Agility improves with users' ability to re-provision technological infrastructure resources.
2. Multi-tenancy enables sharing of resources and costs across a large pool of users, allowing utilization and efficiency improvements for systems that are often only 10–20 percent utilized.
3. Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
4. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface.
5. Security could improve due to centralization of data and increased security-focused resources, but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than in other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
6. Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
2. Trusted Platform Module

Trusted Platform Module (TPM) is both the name of a published specification detailing a secure cryptoprocessor that can store cryptographic keys that protect information, and the general name of implementations of that specification, often called the "TPM chip" or "TPM Security Device". The TPM specification is the work of the Trusted Computing Group.
Disk
encryption is a technology which protects information by converting it into
unreadable code that cannot be deciphered easily by unauthorized people. Disk
encryption uses disk encryption
software or hardware
to encrypt every bit
of data that goes on a disk or disk volume. Disk
encryption prevents unauthorized access to data storage. The term "full
disk encryption" (or whole disk encryption) is often used to
signify that everything on a disk is encrypted, including the programs that can
encrypt bootable operating system partitions. But they must still leave the master boot record
(MBR), and thus part of the disk, unencrypted. There are, however, hardware-based
full disk encryption systems that can truly encrypt the entire boot
disk, including the MBR.
3. Third Party Auditor

In this module, the auditor views all user data, verifies it, and tracks changed data. The auditor views user data directly, without a key; the admin grants the auditor this permission. After the data is audited, it is stored to the cloud.
4. User Module

Users store large amounts of data in the cloud and access it using a secure key, which the admin provides after the data is encrypted with the TPM. Users store data after the auditor has viewed and verified it. When a user views the data again, the admin notifies the user of only the changed data.
System Configuration:-

H/W System Configuration:-

Processor - Pentium III
Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:-

Operating System : Windows 95/98/2000/XP
Application Server : Tomcat 5.0/6.x
Front End : HTML, Java, JSP
Scripts : JavaScript
Server-side Script : Java Server Pages
Database : MySQL
Database Connectivity : JDBC
CONCLUSION
As private data moves online, the need to
secure it properly becomes increasingly urgent. The good news is that the same
forces concentrating data in enormous datacenters will also aid in using
collective security expertise more effectively. Adding protections to a single
cloud platform can immediately benefit hundreds of thousands of applications
and, by extension, hundreds of millions of users. While we have focused here on
a particular, albeit popular and privacy-sensitive, class of applications, many other applications also need solutions.
prakash chalumuri
IEEE Java project - Bootstrapping Ontologies for Web Services
Bootstrapping
Ontologies for Web Services
ABSTRACT:
Ontologies have become the de-facto
modeling tool of choice, employed in many applications and prominently in the semantic
web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping,
which aims at automatically generating concepts and their relations in a given
domain, is a promising technique for ontology construction. Bootstrapping an ontology
based on a set of predefined textual sources, such as web services, must address
the problem of multiple, largely unrelated concepts. In this paper, we propose
an ontology bootstrapping process for web services. We exploit the advantage
that web services usually consist of both WSDL and free text descriptors. The
WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse
Document Frequency (TF/IDF) and web context generation. Our proposed ontology
bootstrapping process integrates the results of both methods and applies a
third method to validate the concepts using the service free text descriptor, thereby
offering a more accurate definition of ontologies. We extensively validated our
bootstrapping method using a large repository of real-world web services and
verified the results against existing ontologies. The experimental results
indicate high precision. Furthermore, the recall versus precision comparison of
the results when each method is separately implemented presents the advantage
of our integrated bootstrapping approach.
Architecture:
AIM:
To develop an Ontological bootstrapping which aims
at automatically generating concepts and their relations in a given domain is a
promising technique for ontology construction. Bootstrapping an ontology based
on a set of predefined textual sources, such as Web services, must address the
problem of multiple, largely unrelated concepts.
EXISTING SYSTEM:
Prior work addresses ontology creation and evolution, and in particular schema matching. Many heuristics were proposed for the automatic matching of schemas, and several theoretical models were proposed to represent various aspects of the matching process, such as the representation of mappings between ontologies. However, all the methodologies described require comparison between existing ontologies.
DISADVANTAGES OF EXISTING SYSTEM:

1. Previous work on ontology bootstrapping focused on either a limited domain or expanding an existing ontology.
2. UDDI registries have some major flaws. In particular, UDDI registries either are publicly available and contain many obsolete entries or require registration that limits access. In either case, a registry only stores a limited description of the available services.
PROPOSED SYSTEM:
The ontology bootstrapping process is based on
analyzing a Web service using three different methods, where each method
represents a different perspective of viewing the Web service. As a result, the
process provides a more accurate definition of the ontology and yields better
results. In particular, the Term Frequency/ Inverse Document Frequency (TF/IDF)
method analyzes the Web service from an internal point of view, i.e., what
concept in the text best describes the WSDL document content. The Web Context
Extraction method describes the WSDL document from an external point of view,
i.e., what most common concept represents the answers to the Web search queries
based on the WSDL content. Finally, the Free Text Description Verification
method is used to resolve inconsistencies with the current ontology.
ADVANTAGES
OF PROPOSED SYSTEM:
The web service ontology
bootstrapping process proposed in this paper is based on the advantage that a
web service can be separated into two types of descriptions:
1) The Web Service Description
Language (WSDL) describing “how” the service should be used and
2) A textual description of the web
service in free text describing “what” the service does. This advantage allows
bootstrapping the ontology based on WSDL and verifying the process based on the
web service free text descriptor.
MODULES:

1. Data Extraction
2. Token Extraction
3. Term Frequency/IDF Analysis
4. Web Context Extraction
5. Ontology Evolution
MODULES DESCRIPTION:
Data Extraction:
In this module we develop the data extraction process using Whois. Whois is a web service that allows domain details to be identified based on the domain name. It maintains web services with related operations and services.
Token Extraction:
In this module we develop the token extraction process using WSDL (Web Service Description Language). The extracted token list (shown bolded in the WSDL document) serves as a baseline. These tokens are extracted from the WSDL document of the Whois web service, which is used as an initial step in building the ontology in our example. Additional services will be used later to illustrate the process of expanding the ontology.
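The text does not spell out the tokenizer itself, but a minimal sketch of how WSDL operation names can be turned into a token baseline is to split camelCase identifiers into lowercase word tokens; the name `GetDomainInfoByName` below is a hypothetical Whois-style example, not taken from the paper.

```java
import java.util.ArrayList;
import java.util.List;

public class TokenExtract {
    /** Split a WSDL identifier on camelCase boundaries, underscores, and hyphens. */
    public static List<String> tokens(String wsdlName) {
        List<String> out = new ArrayList<>();
        for (String t : wsdlName.split("(?<=[a-z])(?=[A-Z])|[_\\-]"))
            if (!t.isEmpty()) out.add(t.toLowerCase());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokens("GetDomainInfoByName")); // [get, domain, info, by, name]
    }
}
```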
Term Frequency/IDF Analysis:
Term
Frequency/Inverse Document Frequency analysis is made in this module. TF/IDF is
applied here to the WSDL descriptors. By building an independent corpus for
each document, irrelevant terms are more distinct and can be thrown away with a
higher confidence. To formally define TF/IDF, we start by defining frequency as
the number of occurrences of the token within the document descriptor.
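Following that definition, a minimal TF/IDF sketch over token lists (one list per WSDL descriptor) looks like this; the corpus construction is simplified relative to the paper's per-document corpora.

```java
import java.util.Collections;
import java.util.List;

public class TfIdf {
    /** TF/IDF of a term in one descriptor relative to a corpus of descriptors. */
    public static double score(String term, List<String> doc, List<List<String>> corpus) {
        double tf = Collections.frequency(doc, term);          // occurrences in this descriptor
        long df = corpus.stream().filter(d -> d.contains(term)).count(); // descriptors containing it
        if (df == 0) return 0.0;
        return tf * Math.log((double) corpus.size() / df);     // rare-across-corpus terms score higher
    }

    public static void main(String[] args) {
        List<String> d1 = List.of("domain", "name", "domain");
        List<String> d2 = List.of("address", "name");
        List<List<String>> corpus = List.of(d1, d2);
        // "domain" appears twice in d1 but in only one of two descriptors:
        System.out.println(score("domain", d1, corpus)); // 2 * ln(2)
        // "name" appears everywhere, so its IDF (and score) is zero:
        System.out.println(score("name", d1, corpus));   // 0.0
    }
}
```

Terms like "name" that occur in every descriptor get a zero score, which is exactly how irrelevant terms are "thrown away with a higher confidence".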
Web context extraction:
In this module, we develop the web context
extraction process. Where, the Web pages clustering algorithm is based on the
concise all pairs profiling (CAPP) clustering method. This method approximates
profiling of large classifications. It compares all classes pairwise and then
minimizes the total number of features required to guarantee that each pair of
classes is contrasted by at least one feature.
Ontology Evolution:
Ontology
evolution is the last module where, the descriptor is further validated using
the textual service descriptor. The analysis is based on the advantage that a
Web service can be separated into two descriptions: the WSDL description and a
textual description of the Web service in free text. The WSDL descriptor is
analyzed to extract the context descriptors and possible concepts as described.
CONCLUSION:
In
this project we propose an approach for bootstrapping an ontology based on Web
service descriptions. The approach is based on analyzing Web services from
multiple perspectives and integrating the results. Our approach takes advantage
of the fact that Web services usually consist of both WSDL and free text
descriptors.
SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15" VGA Colour
Mouse : Logitech
RAM : 512 MB

SOFTWARE REQUIREMENTS:

Operating System : Windows XP
Coding Language : J2EE
Database : MySQL
REFERENCE:
Aviv Segev and Quan Z. Sheng, "Bootstrapping Ontologies for Web Services", IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 5, NO. 1, JANUARY-MARCH 2012.