Thursday, December 27, 2012
prakash chalumuri
IEEE Dot Net Project - Automatic Reconfiguration for Large-Scale Reliable Storage Systems
Abstract
Byzantine-fault-tolerant replication enhances the availability and reliability of
Internet services that store critical state and preserve it despite attacks or software errors.
However, existing Byzantine-fault-tolerant storage systems either assume a static set of
replicas, or have limitations in how they handle reconfigurations (e.g., in terms of the
scalability of the solutions or the consistency levels they provide). This can be
problematic in long-lived, large-scale systems where system membership is likely to
change during the system lifetime. In this paper, we present a complete solution for
dynamically changing system membership in a large-scale Byzantine-fault-tolerant
system. We present a service that tracks system membership and periodically notifies
other system nodes of membership changes. The membership service runs mostly
automatically, to avoid human configuration errors; is itself Byzantine fault-tolerant and
reconfigurable; and provides applications with a sequence of consistent views of the
system membership. We demonstrate the utility of this membership service by using it in
a novel distributed hash table called dBQS that provides atomic semantics even across
changes in replica sets. dBQS is interesting in its own right because its storage algorithms
extend existing Byzantine quorum protocols to handle changes in the replica set, and
because it differs from previous DHTs by providing Byzantine fault tolerance and
offering strong semantics. We implemented the membership service and dBQS. Our
results show that the approach works well in practice: the membership service is able to
manage a large system and the cost to change the system membership is low.
Existing System
In the existing system, replication enhances the reliability of Internet services that
store critical data, preserving it despite software errors. However, existing
Byzantine-fault-tolerant storage systems either assume a static set of replicas or
handle reconfiguration with limited scalability and weakened consistency. As a result,
they are poorly suited to long-lived, large-scale systems whose membership changes
over the system lifetime.
The system assumes the existence of cryptographic techniques that an adversary cannot
subvert: a collision-resistant hash function, a public-key cryptography scheme, a
forward-secure signing key, and a proactive threshold signature protocol.
Proposed System
The proposed system has two parts. The first is a membership service (MS) that
tracks and responds to membership changes. The MS works mostly automatically and
requires only minimal human intervention; this way we can reduce manual configuration
errors, which are a major cause of disruption in computer systems. Periodically, the MS
publishes a new system membership; in this way it provides a globally consistent view of
the set of available servers. The choice of strong consistency makes it easier to
implement applications, since it allows clients and servers to make consistent local
decisions about which servers are currently responsible for which parts of the service.
The second part of our solution addresses the problem of how to reconfigure
applications automatically as system membership changes. We present a storage system,
dBQS, which provides Byzantine-fault-tolerant replicated storage with strong consistency.
Modules
1. Reliable Automatic Reconfiguration
2. Tracking Membership Service
3. Byzantine Fault Tolerance
4. Dynamic Replication

Reliable Automatic Reconfiguration
This module provides the abstraction of a globally consistent view of the
system membership. This abstraction simplifies the design of applications that use it,
since it allows different nodes to agree on which servers are responsible for which subset
of the service. It is designed to work at large scale, e.g., tens or hundreds of thousands of
servers. Support for large scale is essential since systems today are already large and we
can expect them to scale further.
It is secure against Byzantine (arbitrary) faults. Handling Byzantine faults is
important because it captures the kinds of complex failure modes that have been reported
for our target deployments.
Tracking Membership Service
The membership service is only part of what is needed for automatic reconfiguration. We
assume nodes are connected by an unreliable asynchronous network like the Internet,
where messages may be lost, corrupted, delayed, duplicated, or delivered out of order.
While we make no synchrony assumptions for the system to meet its safety guarantees, it
is necessary to make partial synchrony assumptions for liveness.
The MS describes membership changes by producing a configuration, which
identifies the set of servers currently in the system, and sending it to all servers. To allow
the configuration to be exchanged among nodes without possibility of forgery, the MS
authenticates it using a signature that can be verified with a well-known public key.
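The sign-then-verify step can be sketched as follows in Java. This is an illustration only: it uses a single RSA key pair and a made-up serialized configuration string, whereas the actual MS authenticates configurations with a threshold signature shared among its replicas.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ConfigVerify {
    public static void main(String[] args) throws Exception {
        // Stand-in for the MS key; the real MS key is a threshold key.
        KeyPair msKey = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // A hypothetical serialized configuration for one epoch.
        byte[] config = "epoch=7;members=s1,s2,s3,s4".getBytes(StandardCharsets.UTF_8);

        // MS side: sign the serialized configuration.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(msKey.getPrivate());
        signer.update(config);
        byte[] sig = signer.sign();

        // Node side: verify with the well-known public key before accepting.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(msKey.getPublic());
        verifier.update(config);
        System.out.println(verifier.verify(sig));  // prints true
    }
}
```

Any node holding only the public key can run the verification half, which is what prevents forged configurations from being exchanged among nodes.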
Byzantine Fault Tolerance
To provide Byzantine fault tolerance for the MS, we implement it
as a group of replicas executing the PBFT state machine replication protocol.
These MS replicas can run on server nodes, but the size of the MS group is small
and independent of the system size. The tracking service provides the following operations:
1. ADD – Takes a certificate signed by the trusted authority describing the node,
and adds the node to the set of system members.
2. REMOVE – Takes a certificate signed by the trusted authority that identifies
the node to be removed, and removes this node from the current set of members.
3. FRESHNESS – Receives a freshness challenge; the reply contains the nonce and
the current epoch number, signed by the MS.
4. PROBE – The MS sends probes to servers periodically. Servers respond with a
simple ack or, when a nonce is sent, by repeating the nonce and signing the
response.
5. NEW EPOCH – Informs nodes of a new epoch, with a certificate vouching for the
configuration; the changes represent the delta in the membership.
Dynamic Replication
To prevent an attacker from predicting future configurations, at each epoch change
the MS must:
1. Choose a random number.
2. Sign the new configuration using the old shares.
3. Carry out a resharing of the MS keys with the new MS members.
4. Discard the old shares.
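The resharing idea behind steps 3 and 4 can be illustrated with a toy n-of-n XOR sharing (an assumption for illustration only; the real system reshares a proactive threshold signing key). Refreshing shares with deltas that XOR to zero changes every share while the combined secret stays the same, so discarded old shares become useless to an attacker:

```java
import java.util.Random;

public class Reshare {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int secret = 0x5A5A5A5A;  // stands in for the MS signing key
        int n = 4;

        // Split: n-1 random shares, last share makes the XOR equal the secret.
        int[] shares = new int[n];
        int acc = secret;
        for (int i = 0; i < n - 1; i++) { shares[i] = rnd.nextInt(); acc ^= shares[i]; }
        shares[n - 1] = acc;

        // Refresh: random deltas forced to XOR to zero; new share = old ^ delta.
        int[] delta = new int[n];
        int dAcc = 0;
        for (int i = 0; i < n - 1; i++) { delta[i] = rnd.nextInt(); dAcc ^= delta[i]; }
        delta[n - 1] = dAcc;  // guarantees XOR of all deltas == 0
        for (int i = 0; i < n; i++) shares[i] ^= delta[i];

        // Recombine: XOR of all refreshed shares still yields the secret.
        int rec = 0;
        for (int s : shares) rec ^= s;
        System.out.println(rec == secret);  // prints true
    }
}
```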
System Configuration
Hardware Requirements
· System : Pentium IV 2.4 GHz
· Hard Disk : 40 GB
· Floppy Drive : 1.44 MB
· Monitor : 15" VGA Color
· Mouse : Logitech
· RAM : 512 MB
Software Requirements
· Operating System : Windows XP
· Coding Language : C#.NET
· Database : SQL Server 2005
IEEE Java Project - Cloud Computing Security: From Single to Multi-Clouds
ABSTRACT:
The use
of cloud computing has increased rapidly in many organizations. Cloud computing
provides many benefits in terms of low cost and accessibility of data. Ensuring
the security of cloud computing is a major factor in the cloud computing
environment, as users often store sensitive information with cloud storage
providers but these providers may be untrusted. Dealing with “single cloud”
providers is predicted to become less popular with customers due to risks of
service availability failure and the possibility of malicious insiders in the
single cloud. A movement towards “multi-clouds”, or in other words, “interclouds”
or “cloud-of-clouds” has emerged recently. This paper surveys recent research
related to single and multi-cloud security and addresses possible solutions. It
is found that the research into the use of multi-cloud providers to maintain
security has received less attention from the research community than has the
use of single clouds. This work aims to promote the use of multi-clouds due to
their ability to reduce security risks that affect the cloud computing user.
ALGORITHM USED:
Secret Sharing Algorithms:
Data stored in the cloud can be
compromised or lost. So, we have to come up with a way to secure those files.
We can encrypt them before storing them in the cloud, which sorts out the
disclosure aspects. However, what if the data is lost due to some catastrophe
befalling the cloud service provider? We could store it on more than one cloud
service and encrypt it before we send it off. Each of them will have the same
file. What if we use an insecure, easily guessable password to protect the
file, or the same one
to protect all files? I have often thought that secret sharing algorithms could
be employed to good effect in these circumstances instead.
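As a minimal sketch of that idea, the following 2-of-3 Shamir-style sharing over a prime field splits a value across three hypothetical cloud providers: any two shares recover it, while a single share alone reveals nothing. The field size and share indices are illustrative assumptions, not the paper's parameters.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class Shamir {
    static final BigInteger P = BigInteger.valueOf(2147483647L); // prime 2^31 - 1

    public static void main(String[] args) {
        BigInteger secret = BigInteger.valueOf(123456789L);
        BigInteger a = new BigInteger(30, new SecureRandom()); // random slope

        // Share for provider i is f(i) = secret + a*i (mod P), a degree-1 polynomial.
        BigInteger[] share = new BigInteger[4];
        for (int i = 1; i <= 3; i++)
            share[i] = secret.add(a.multiply(BigInteger.valueOf(i))).mod(P);

        // Recover from shares 1 and 3 by Lagrange interpolation at x = 0.
        BigInteger x1 = BigInteger.valueOf(1), x3 = BigInteger.valueOf(3);
        BigInteger l1 = x3.multiply(x3.subtract(x1).modInverse(P)).mod(P);
        BigInteger l3 = x1.multiply(x1.subtract(x3).modInverse(P)).mod(P);
        BigInteger rec = share[1].multiply(l1).add(share[3].multiply(l3)).mod(P);

        System.out.println(rec.equals(secret));  // prints true
    }
}
```

Unlike a password-protected copy stored on every cloud, no single compromised provider learns anything about the value here.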
SYSTEM ARCHITECTURE:
EXISTING SYSTEM:
Cloud providers should address
privacy and security issues as a matter of high and urgent priority. Dealing
with “single cloud” providers is becoming less popular with customers due to
potential problems such as service availability failure and the possibility
that there are malicious insiders in the single cloud. In recent years, there
has been a move towards “multi-clouds”, “inter-cloud” or “cloud-of-clouds”.
DISADVANTAGES OF EXISTING SYSTEM:
1. Cloud providers should address
privacy and security issues as a matter of high and urgent priority.
2. Dealing with “single cloud” providers
is becoming less popular with customers due to potential problems such as
service availability failure and the possibility that there are malicious
insiders in the single cloud.
PROPOSED SYSTEM:
This paper focuses on the issues
related to the data security aspect of cloud computing. As data and information
will be shared with a third party, cloud computing users want to avoid an un-trusted
cloud provider. Protecting private and important information, such as credit
card details or a patient’s medical records from attackers or malicious
insiders is of critical importance. In addition, the potential for migration
from a single cloud to a multi-cloud environment is examined and research
related to security issues in single and multi-clouds in cloud computing is
surveyed.
ADVANTAGES OF PROPOSED SYSTEM:
1. Data Integrity
2. Service Availability.
3. The user runs custom
applications using the service provider’s resources
4. Cloud service providers should
ensure the security of their customers’ data and should be responsible if any
security risk affects their customers’ service infrastructure.
MODULES:
1. Data Integrity
2. Data Intrusion
3. Service Availability
4. DepSKy System Model
MODULE DESCRIPTION:
Data Integrity:
One of the most important issues
related to cloud security risks is data integrity. The data stored in the cloud
may suffer from damage during transition operations from or to the cloud
storage provider. Cachinet al. give examples of the risk of attacks from both
inside and outside the cloud provider, such as the recently attacked Red Hat
Linux’s distribution servers.
One of the solutions that they
propose is to use a Byzantine fault-tolerant replication protocol within the
cloud. Hendricks et al. state that this solution can avoid data corruption
caused by some components in the cloud. However, Cachin et al. claim that using
the Byzantine fault tolerant replication protocol within the cloud is
unsuitable due to the fact that the servers belonging to cloud providers use
the same system installations and are physically located in the same place.
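One simple client-side defence against such corruption (a sketch, not any provider's actual API or the surveyed protocol) is to keep a SHA-256 digest at upload time and compare it against the digest of whatever comes back from the cloud:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class IntegrityCheck {
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "patient record #17".getBytes(StandardCharsets.UTF_8);
        byte[] expected = sha256(original);          // stored locally at upload

        byte[] fromCloud = original.clone();         // a later download
        System.out.println(Arrays.equals(expected, sha256(fromCloud))); // true

        fromCloud[0] ^= 1;                           // a single corrupted bit
        System.out.println(Arrays.equals(expected, sha256(fromCloud))); // false
    }
}
```

Even a one-bit change during a transition operation flips the digest comparison, so damage is detected before the data is used.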
Data Intrusion:
According to Garfinkel, another
security risk that may occur with a cloud provider, such as the Amazon cloud
service, is a hacked password or data intrusion. If someone gains access to an
Amazon account password, they will be able to access all of the account’s
instances and resources. Thus the stolen password allows the hacker to erase
all the information inside any virtual machine instance for the stolen user
account, modify it, or even disable its services. Furthermore, there is a possibility
for the user's email (Amazon user name) to be hacked (see for a discussion of
the potential risks of email), and since Amazon allows a lost password to be
reset by email, the hacker may still be able to log in to the account after receiving
the new reset password.
Service Availability:
Another major concern in cloud
services is service availability. Amazon mentions in its licensing agreement
that it is possible that the service might be unavailable from time to time.
The user’s web service may terminate for any reason at any time if any user’s
files break the cloud storage policy. In addition, if any damage occurs to any
Amazon web service and the service fails, Amazon accepts no liability for the
failure. Companies seeking to protect services from such failure need measures
such as backups or the use of multiple providers.
DepSky System Model:
The DepSky system model contains
three parts: readers, writers, and four cloud storage providers, where readers
and writers are the client’s tasks. Bessani et al. explain the difference
between readers and writers for cloud storage. Readers can fail arbitrarily
(for example, they can crash, fail intermittently, or display arbitrary
behavior), whereas writers fail only by crashing.
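The four-provider model can be sketched with in-memory maps standing in for the clouds (an illustrative assumption, not the DepSky protocol itself): a write replicates the object to every provider, and a read takes the value reported by a majority, so one faulty or stale provider cannot mislead a reader.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiCloudRead {
    public static void main(String[] args) {
        // Four hypothetical cloud storage providers, modeled as key-value maps.
        List<Map<String, String>> clouds = new ArrayList<>();
        for (int i = 0; i < 4; i++) clouds.add(new HashMap<>());

        // Writer: replicate the object on all four providers.
        for (Map<String, String> c : clouds) c.put("doc", "v2");

        // One provider is faulty and serves a bogus value.
        clouds.get(2).put("doc", "corrupted");

        // Reader: accept the value reported by a majority of providers.
        Map<String, Integer> votes = new HashMap<>();
        for (Map<String, String> c : clouds)
            votes.merge(c.get("doc"), 1, Integer::sum);
        String result = Collections.max(votes.entrySet(),
                Map.Entry.comparingByValue()).getKey();

        System.out.println(result);  // prints v2
    }
}
```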
System Configuration:-
H/W System Configuration:-
· Processor : Pentium III
· Speed : 1.1 GHz
· RAM : 256 MB (min)
· Hard Disk : 20 GB
· Floppy Drive : 1.44 MB
· Key Board : Standard Windows Keyboard
· Mouse : Two or Three Button Mouse
· Monitor : SVGA
S/W System Configuration:-
· Operating System : Windows 95/98/2000/XP
· Application Server : Tomcat 5.0/6.X
· Front End : HTML, Java, JSP
· Script : JavaScript
· Server Side Script : Java Server Pages
· Database : MySQL
REFERENCE:
Mohammed A. AlZain, Eric Pardede, Ben Soh, James A. Thom, "Cloud Computing
Security: From Single to Multi-Clouds", 2012 45th Hawaii International
Conference on System Sciences, IEEE.
IEEE Dot Net Project - A Secure Intrusion detection system against DDOS attack in Wireless Mobile Ad-hoc Network
ABSTRACT:
A wireless mobile ad-hoc network (MANET) is an emerging technology with great
potential for critical situations like battlefields, and for commercial
applications such as building and traffic surveillance. A MANET is
infrastructureless: no centralized controller exists, and each node has routing
capability. Each device in a MANET is independently free to move in any
direction, and will therefore change its connections to other devices
frequently. One of the major challenges wireless mobile ad-hoc networks face
today is therefore security, because no central controller exists. MANETs are a
kind of wireless ad hoc network that usually has a routable networking
environment on top of a link-layer ad hoc network. Ad hoc networks also
encompass wireless sensor networks, so the problems faced by sensor networks
are also faced by MANETs; moreover, deploying sensor nodes in unattended
environments increases the exposure to various attacks. Among the many security
attacks on MANETs, DDoS (distributed denial of service) is one of the most
serious. Our main aim is to observe the effect of DDoS on routing load, packet
drop rate, and end-to-end delay, all of which increase under attack. Using
these parameters, among others, we build a secure IDS to detect this kind of
attack and block it. In this paper we discuss several attacks on MANETs,
including DDoS, and provide security against the DDoS attack.
EXISTING SYSTEM:
In the existing system, mobile ad-hoc networks are made up of devices (nodes or
terminals) with wireless communication and networking capability, which makes
them able to communicate with each other without the aid of any centralized
system. A MANET is an autonomous system in which nodes are connected by
wireless links and send data to each other. Since there is no centralized
system, routing is done by the nodes themselves. Because of this mobility and
self-routing nature, there are many weaknesses in MANET security. One of the
most serious attacks on an ad hoc network is the DDoS attack. A DDoS attack is
launched by sending a huge number of packets to the target node through the
coordination of a large number of hosts distributed all over the network. At
the victim's side, this large traffic volume consumes the bandwidth and
prevents other important packets from reaching the victim.
PROPOSED SYSTEM:
In the proposed system, to solve the security issues we need an intrusion
detection system (IDS). IDSs can be categorized into two models:
1. Signature-based intrusion detection
2. Anomaly-based intrusion detection
The benefit of the anomaly-based technique is that it can detect an attack
without prior knowledge of that attack. Intrusion attacks are much easier in a
wireless network than in a wired network, and one of the most serious attacks
on an ad hoc network is the DDoS attack.
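A minimal sketch of the anomaly-based idea follows, with made-up traffic numbers and a hypothetical 5x-median threshold rather than the paper's detection rule: the IDS flags any node whose packet rate far exceeds the median rate of its neighbours.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RateDetector {
    public static void main(String[] args) {
        // Packets per second observed by the IDS node for each neighbour.
        Map<String, Integer> pktsPerSec = new LinkedHashMap<>();
        pktsPerSec.put("n1", 40);
        pktsPerSec.put("n2", 55);
        pktsPerSec.put("n3", 4800);  // flooding node
        pktsPerSec.put("n4", 35);

        // Baseline on the median so a single flooder cannot skew it upward.
        List<Integer> rates = new ArrayList<>(pktsPerSec.values());
        Collections.sort(rates);
        int m = rates.size();
        double median = (m % 2 == 1) ? rates.get(m / 2)
                : (rates.get(m / 2 - 1) + rates.get(m / 2)) / 2.0;

        // Block any node sending more than 5x the median rate.
        List<String> blocked = new ArrayList<>();
        for (Map.Entry<String, Integer> e : pktsPerSec.entrySet())
            if (e.getValue() > 5 * median) blocked.add(e.getKey());

        System.out.println(blocked);  // prints [n3]
    }
}
```

No signature of the attack is needed, which is exactly the advantage of the anomaly-based model noted above.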
MODULES:
1. User Registration
2. Upload & Send files to users
3. Attack on Ad-Hoc Network
4. Criteria for Attack detection
5. Simulation Results
MODULES DESCRIPTION:
User Registration:
In this module, a user registers his/her personal details in the database.
Each user has a unique ID, username, password, and digital signature.
Using these details, the user can request files from the server.
Upload & Send Files to Users:
In this module, the server uploads files to the database. After verifying the
user's digital signature, the file is transferred to the correct user via the
mobile ad-hoc network.
Attack on Ad-Hoc Network:
In this module, we see what the attack on the ad-hoc network is.
Distributed Denial of Service (DDoS):
A DDoS attack is a form of DoS attack, but the difference is that a DoS attack
is performed by only one node, whereas a DDoS attack is performed by many nodes
in combination. All nodes simultaneously attack the victim node or network by
sending it huge numbers of packets; this completely consumes the victim's
bandwidth and prevents the victim from receiving important data from the
network.
Criteria for Attack Detection:
In this module, we use multiple nodes and simulate three different cases:
NORMAL, DDoS, and IDS (intrusion detection).
Normal Case:
We set the number of sender and receiver nodes, the transport layer mechanisms
(TCP and UDP), and AODV (ad-hoc on-demand distance vector) as the routing
protocol. After setting all parameters, we simulate the result through our
simulator.
IDS Case:
In the IDS (intrusion detection system) case, we set one node as the IDS node.
That node watches all mobile nodes in its radio range; if any abnormal behavior
appears in the network, it first checks the symptoms of the attack and finds
the attacker node. After finding the attacker node, the IDS blocks it and
removes it from the network, stopping the DDoS attack. In our simulation, we
performed analysis in terms of routing load, UDP traffic, TCP congestion
window, throughput, and an overall summary.
Simulation Results:
In this module, we implement the random waypoint movement model for the
simulation, in which a node starts at a random position, waits for the pause
time, and then moves to another random position with a velocity. We measure:
a. Throughput
b. Packet delivery fraction
c. End to End delay
d. Normalized routing load
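These metrics reduce to simple ratios over counters collected during a run. The sketch below uses made-up sample counters (not simulation output) with the usual definitions: delivery fraction is received over sent, normalized routing load is control packets per delivered data packet, and delay is averaged over delivered packets.

```java
public class Metrics {
    public static void main(String[] args) {
        // Hypothetical counters from one simulation run.
        int sent = 1000, received = 920;        // data packets
        int routingPackets = 300;               // AODV control packets
        double totalDelay = 46.0;               // seconds, summed over packets
        double duration = 100.0;                // seconds of simulated time
        int bitsReceived = 920 * 512 * 8;       // assuming 512-byte packets

        double pdf = 100.0 * received / sent;               // packet delivery fraction (%)
        double avgDelay = totalDelay / received;            // average end-to-end delay (s)
        double nrl = (double) routingPackets / received;    // normalized routing load
        double throughput = bitsReceived / duration / 1000; // kbit/s

        System.out.printf("PDF=%.1f%% delay=%.3fs NRL=%.3f thr=%.1fkbps%n",
                pdf, avgDelay, nrl, throughput);
    }
}
```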
SYSTEM REQUIREMENTS
Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA Colour
• Mouse : Logitech
• RAM : 512 MB
Software Requirements:
• Operating System : Windows XP
• Coding Language : C#.NET
• Tool : Visual Studio 2008
REFERENCE:
Prajeet Sharma, Niresh Sharma, Rajdeep Singh, "A Secure Intrusion detection
system against DDOS attack in Wireless Mobile Ad-hoc Network", International
Journal of Computer Applications (0975 – 8887), Volume 41, No. 21, March 2012.