Thursday, 26 January 2012

On the Information Flow Required for Tracking Control in Networks of Mobile Sensing Agents


On the Information Flow Required for Tracking Control in Networks of Mobile Sensing Agents 

Abstract:
We design controllers that permit mobile agents with distributed or networked sensing capabilities to track (follow) desired trajectories, identify what trajectory information must be distributed to each agent for tracking, and develop methods to minimize the communication needed for the trajectory information distribution.
Existing System:
Almost all work on mobile ad hoc networks relies on simulations, which, in turn, rely on realistic movement models for their credibility. Since there is a total absence of realistic data in the public domain, synthetic models for movement pattern generation must be used and the most widely used models are currently very simplistic, the focus being ease of implementation rather than soundness of foundation. Whilst it would be preferable to have models that better reflect the movement of real users, it is currently impossible to validate any movement model against real data. However, it is lazy to conclude from this that all models are equally likely to be invalid so any will do. We note that movement is strongly affected by the needs of humans to socialize in one form or another. Fortunately, humans are known to associate in particular ways that can be mathematically modeled, and that are likely to bias their movement patterns. Thus, we propose a new mobility model that is founded on social network theory, because this has empirically been shown to be useful as a means of describing human relationships. In particular, the model allows collections of hosts to be grouped together in a way that is based on social relationships among the individuals. This grouping is only then mapped to a topographical space, with topography biased by the strength of social tie. We discuss the implementation of this mobility model and we evaluate emergent properties of the generated networks.
Proposed System:
We focus on the causes of mobility. Starting from established research in sociology, we propose SIMPS (Sociological Interaction Mobility for Population), a mobility model of human crowds with pedestrian motion that builds on recent sociological findings about what drives human interactions:

(i) Each human has specific socialization needs, quantified by a target social interaction level, which corresponds to her personal status (e.g., age and social class).
(ii) Humans make acquaintances in order to meet their social interaction needs. We show that these two components can be translated into a coherent set of behaviors, called sociostation.
Hardware Requirements
  • SYSTEM : Pentium IV 2.4 GHz
  • HARD DISK : 40 GB
  • FLOPPY DRIVE : 1.44 MB
  • MONITOR : 15 VGA colour
  • MOUSE : Logitech
  • RAM : 256 MB
  • KEYBOARD : 110 keys enhanced

Software Requirements

  • Operating system : Windows XP Professional
  • Front End : Java Technology
    REFERENCE:
Liang Chen, Sandip Roy and Ali Saberi, “On the information flow required for tracking control in networks of mobile sensing agents”, IEEE Transactions on Mobile Computing, Vol. 10, No.5, April 2011. 
for more details contact denniscodd site:
http://www.denniscodd.com

A Competitive Study of Cryptography Techniques


A Competitive Study of Cryptography Techniques over Block Cipher 

ABSTRACT:
The complexity of cryptography prevents many people from truly understanding its motivations and, therefore, from practicing security cryptography effectively. A distributed cryptographic process seeks to spread the evaluation of basic cryptographic primitives across a number of nodes in order to reduce the security assumptions placed on individual nodes, which establishes a level of fault tolerance against node compromise. In an increasingly networked and distributed communications environment, there are more and more practical situations where the ability to distribute a computation between a number of unlike network nodes is needed. The reasons are efficiency (separate nodes perform distinct tasks), fault tolerance (if some nodes are unavailable then others can perform the task) and security (the trust required to perform the task is shared between nodes), which different settings weight differently. Hence, this paper aims to describe and review the research that has been done on text encryption and decryption in the block cipher. Moreover, this paper suggests a cryptography model in the block cipher.
EXISTING SYSTEM:
  •   Generally, the use of encryption techniques raises different security issues, most of which concern how to effectively manage the encryption keys so that they are safeguarded throughout their life cycle and protected from unauthorized disclosure and modification.
  •   Several issues in the encryption of information over a block cipher are observed in terms of key management, which is known to be an important concern for the public safety community. Most of these issues address the following:
    o Difficulties in addressing the security issues regarding encryption key management;
    o Lack of suitable detail about the different threats, which leaves decision makers unaware of the importance of key management;
o Difficulties in generating suitable recommendations for establishing proper key management.
PROPOSED SYSTEM:
  •   Consequently, providing a secure and flexible cryptography mechanism raises the need to analyze and compare different encryption algorithms with the aim of enhancing security during the encryption process.
  •   Hence, this paper suggests a cryptography mechanism in the block cipher that manages the keys sequentially.
  •   These keys work dependently to extract and generate the content relation, which is then handled by the key management that helps to communicate and share sensitive information.
  •   In particular, the importance of thorough, consistent key management processes among public safety agencies with interoperable functions cannot be overstated.
  •   This model aims to secure the dissemination, loading, saving, and fault elimination of keys so that encryption implementations remain effective.
  •   There are inherent risks if suitable key management processes are not followed, because of the intricacy of distributing keys to all blocks in a certain fashion.
  •   This risk can be meaningfully mitigated through sufficient key controls and proper education on encryption key management.
HARDWARE REQUIREMENTS
  • SYSTEM : Pentium IV 2.4 GHz
  • HARD DISK : 40 GB
  • MONITOR : 15 VGA colour
  • MOUSE : Logitech
  • RAM : 256 MB
  • KEYBOARD : 110 keys enhanced

SOFTWARE REQUIREMENTS
  • Operating system : Windows XP Professional
  • Front End : JAVA
  • Tool : NetBeans IDE
MODULES:
  •   Homophonic Cryptographic IDE
  •   Encryption Module with Key Generation
  •   Decryption Module
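As an illustration of the Encryption and Decryption modules above, the following is a minimal sketch assuming AES with 128-bit keys as the block cipher and the standard javax.crypto API; the class name and sample message are illustrative, not the paper's actual model.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;

    public class BlockCipherSketch {
        public static void main(String[] args) throws Exception {
            // Key generation: a fresh 128-bit AES key stands in for the managed key.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // Encryption module: encrypt a sample message with the generated key.
            // (The default AES transformation is used here purely for illustration.)
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal("sensitive information".getBytes(StandardCharsets.UTF_8));

            // Decryption module: the same key recovers the original text.
            cipher.init(Cipher.DECRYPT_MODE, key);
            String recovered = new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
            System.out.println(recovered);
        }
    }

In a fuller implementation, the generated key would be handed to the key management component described above rather than kept only in memory.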
    REFERENCE:
    Ashwak M. AL-Abiachi, Faudziah Ahmad and Ku Ruhana, “A Competitive Study of Cryptography Techniques over Block Cipher”, UKSim 13th International Conference on Modelling and Simulation, IEEE, 2011.
    for more details contact denniscodd site:

Denial of Service Attacks in Wireless Networks


Denial of Service Attacks in Wireless Networks: The Case of Jammers 

ABSTRACT:
The shared nature of the medium in wireless networks makes it easy for an adversary to launch a Wireless Denial of Service (WDoS) attack. Recent studies demonstrate that such attacks can be very easily accomplished using off-the-shelf equipment. To give a simple example, a malicious node can continually transmit a radio signal in order to block any legitimate access to the medium and/or interfere with reception. This act is called jamming and the malicious nodes are referred to as jammers. Jamming techniques vary from simple ones based on the continual transmission of interference signals to more sophisticated attacks that aim at exploiting vulnerabilities of the particular protocol used. In this survey, we present a detailed, up-to-date discussion of the jamming attacks recorded in the literature. We also describe various techniques proposed for detecting the presence of jammers. Finally, we survey numerous mechanisms which attempt to protect the network from jamming attacks. We conclude with a summary and by suggesting future directions.
EXISTING SYSTEM
SECURITY is one of the critical attributes of any communication network. Various attacks have been reported over the years. Most of them, however, target wired networks. Wireless networks have only recently been gaining widespread deployment. At the present time, with the advances in technology, wireless networks are becoming more affordable and easier to build. Many metropolitan areas deploy public WMANs for people to use freely. Moreover, the prevalence of WLANs as the basic edge access solution to the Internet is rapidly becoming the reality. However, wireless networks come with an important security flaw: they are much easier to attack than any wired network.
PROPOSED SYSTEM
In this survey paper, we describe some of the most harmful attacks that can be launched by a jammer. We develop one such system to show the effect of the DoS attack.
In our proposed system, the normal client and server process is depicted first; then the attack is launched manually to show how the DoS attack affects the normal client/server process.
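As a minimal sketch of the normal client/server exchange that the demonstration disrupts (the class and method names here are illustrative assumptions, not the project's actual code):

    import java.io.*;
    import java.net.*;

    public class ClientServerDemo {
        // A minimal server: answers one request per connection.
        public static void startServer(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        out.println("reply: " + in.readLine());
                    }
                }
            }
        }

        // A normal client: sends one request and prints the reply.
        public static void runClient(String host, int port, String request) throws IOException {
            try (Socket socket = new Socket(host, port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println(request);
                System.out.println(in.readLine());
            }
        }
    }

In the DoS demonstration, the attacker floods the same port (or the shared medium) so that legitimate clients can no longer complete this exchange.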
HARDWARE REQUIREMENTS
The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. The hardware requirements required for this project are:
20 GB of Hard disk
256 MB RAM                                                  
Pentium 133 MHz or above (Processor)
PCs which are interconnected in a LAN
Network Adapter card configured with an IP address

SOFTWARE REQUIREMENTS
Software Requirements deal with defining software resource requirements and pre-requisites that need to be installed on a computer to provide optimal functioning of an application. These requirements or pre-requisites are generally not included in the software installation package and need to be installed separately before the software is installed. The software requirements that are required for this project are:
Java 1.3 or more
Windows 98 or more
Modules:
Client Application 
DoS attack
File Server
Location Guard 
Normal Client
REFERENCE
Konstantinos Pelechrinis, Marios Iliofotou and Srikanth V. Krishnamurthy, “Denial of Service Attacks in Wireless Networks: The Case of Jammers”, IEEE Communications Surveys & Tutorials, Vol. 13, No. 2, Second Quarter 2011.
for more details contact denniscodd site:
http://www.denniscodd.com

Design of p-Cycles for Full Node Protection


Design of p-Cycles for full node protection in WDM Mesh Networks

ABSTRACT:
We propose a p-cycle expanded protection scheme that can guarantee 100% node protection, in addition to 100% protection against single link failures. While some previous studies had already noted that p-cycles can naturally offer some node protection, we show that, at the expense of some p-cycle overlapping, with very mild impact on bandwidth efficiency, we can guarantee node protection. We propose a design and solution method based on large-scale optimization tools, namely Column Generation (CG), which computes p-cycles offering both link and node protection. Previous models offer a solution where a large number of potential cycles must first be enumerated, leading to very large ILP models which cannot scale. Comparisons are made between our proposed design approach and the work of Grover and Onguetou (2009). Results show that our approach clearly outperforms their design in terms of capacity efficiency and the number of distinct cycles. We also evaluate the extra spare capacity requirement of p-cycles for full node protection compared to the one for link protection only. Results show that p-cycles offering node and link protection require only a slightly larger spare capacity than conventional p-cycles, while the implicit protection against a dual link failure is only marginally affected.
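For context, the enumeration-based, link-protecting p-cycle design that such ILP models build on is commonly written along the following lines (the notation is illustrative, in the spirit of the classical p-cycle spare capacity model, not the formulation of this paper):

    \min \sum_{j \in S} c_j \, s_j
    \text{subject to} \quad \sum_{p \in P} x_{p,i} \, n_p \ge w_i \quad \forall i \in S
    \qquad\qquad\quad\; s_j \ge \sum_{p \in P} \pi_{p,j} \, n_p \quad \forall j \in S
    \qquad\qquad\quad\; n_p \in \mathbb{Z}_{\ge 0} \quad \forall p \in P

Here S is the set of spans, P the pre-enumerated set of candidate cycles, w_i and s_j the working and spare capacities of a span, c_j the cost of a spare channel on span j, n_p the number of unit copies of cycle p, x_{p,i} equals 2 if span i straddles cycle p, 1 if it lies on p and 0 otherwise, and \pi_{p,j} equals 1 if span j lies on cycle p. The size of P is what makes such models hard to scale, and is what Column Generation avoids.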
Existing System:

Algorithms for protection against link failures have traditionally considered single-link failures. However, dual-link failures are becoming increasingly important for two reasons. First, links in the networks share resources such as conduits or ducts, and the failure of such a shared resource results in the failure of multiple links. Second, the average repair time for a failed link is on the order of a few hours to a few days, and this repair time is long enough for a second failure to occur. Although algorithms developed for single-link failure resiliency are shown to cover a good percentage of dual-link failures, these cases often involve links that are far away from each other. Considering the fact
that these algorithms are not developed for dual-link failures, they may serve as an alternative to recover from independent dual-link failures. However, reliance on such approaches may not be preferable when the links close to one another in the network share resources, leading to correlated link failures.
Proposed System:
This paper formally classifies the approaches for providing dual-link failure resiliency. Recovery from a dual-link failure using an extension of link protection for single link failure results in a constraint, referred to as BLME constraint, whose satisfiability allows the network to recover from dual-link failures without the need for broadcasting the failure location to all nodes. The paper develops the necessary theory for deriving the sufficiency condition for a solution to exist, formulates the problem of finding backup paths for links satisfying the BLME constraint as an ILP, and further develops a polynomial time heuristic algorithm. The formulation and heuristic are applied to six different networks and the results are compared. The heuristic is shown to obtain a solution for most scenarios with a high failure recovery guarantee, although such a solution may have longer average hop lengths compared with the optimal values. The paper also establishes the potential benefits of knowing the precise failure location in a four-connected network that has lower installed capacity than a three-connected network for recovering from dual-link failures.
HARDWARE REQUIREMENTS
  • SYSTEM : Pentium IV 2.4 GHz
  • HARD DISK : 40 GB
  • MONITOR : 15 VGA colour
  • MOUSE : Logitech
  • RAM : 256 MB
  • KEYBOARD : 110 keys enhanced

SOFTWARE REQUIREMENTS
  • Operating system : Windows XP Professional
  • Front End : JAVA
  • Tool : NETBEANS IDE

REFERENCE:
Brigitte Jaumard, Honghui Li, “Design of p-Cycles for full node protection in WDM Mesh Networks”, IEEE ICC 2011.


for more details contact denniscodd site:
http://www.denniscodd.com

A new approach for FEC decoding


A New Approach for FEC Decoding Based on the BP Algorithm in LTE and WiMAX Systems 


ABSTRACT:
Many wireless communication systems such as IS-54, enhanced data rates for GSM evolution (EDGE), worldwide interoperability for microwave access (WiMAX) and long term evolution (LTE) have adopted low-density parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward error correcting code (FEC) schemes for data and overhead channels. Therefore, many efficient algorithms have been proposed for decoding these codes. However, the different decoding approaches for these two families of codes usually lead to different hardware architectures. Since these codes work side by side in these new wireless systems, it is a good idea to introduce a universal decoder to handle these two families of codes. The present work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed, the BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all convolutional code classes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, this is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
Existing System:
  •   For analysis purposes, the packet-loss process resulting from the single-multiplexer model was assumed to be independent and, consequently, the simulation results provided show that this simplified analysis considerably overestimates the performance of FEC.
  •   Evaluation of FEC performance over multiple sessions was more complex in existing applications.
  •   Surprisingly, all numerical results given indicate that the resulting residual packet-loss rates with coding are always greater than without coding, i.e., FEC is ineffective in this application.
Increasing the redundant packets added to the data will increase the performance, but it will also make the data larger and can lead to an increase in data loss.
Proposed System:
  •   In this work we evaluate the performance of FEC coding more accurately than previous works.
  •   We reduce the complexity of handling multiple sessions and introduce a simple way to implement it.
  •   We show that the unified approach provides an integrated framework for exploring the trade-offs between the key coding parameters: specifically, interleaving depths, channel coding rates and block lengths.
  •   Thus, by choosing the coding parameters appropriately, we achieve high FEC performance and reduce the time delay for encoding and decoding with interleaving.
System Requirements

Hardware:
  • PROCESSOR : PENTIUM IV 2.6 GHz
  • RAM : 512 MB
  • MONITOR : 15”
  • HARD DISK : 20 GB
  • CD DRIVE : 52X

Software:
  • OPERATING SYSTEM : WINDOWS XP
  • FRONT END : JAVA, SWING
  • TOOLS USED : JFRAME BUILDER

Modules of the Project
  • FEC Encoder
  • Interleaver
  • Implementation of the Queue
  • De-Interleaver
  • FEC Decoder
  • Performance Evaluation
Module Description

1. FEC Encoder:
    FEC is a system of error control for data transmission, where the sender adds redundant data to its messages. This allows the receiver to detect and correct errors (within some bounds) without the need to ask the sender for additional data. In this module we add redundant data to the given input data, known as FEC Encoding.
    The text available in the input text file is converted into binary. The binary conversion is done for each and every character in the input file. Then we add the redundant data for each bit of the binary. After adding we have a block of packets for each character.
    The User Interface design is also done in this module. We use the Swing package available in Java to design the User Interface. Swing is a widget toolkit for Java. It is part of Sun Microsystems' Java Foundation Classes (JFC), an API for providing a graphical user interface (GUI) for Java programs.
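The exact redundancy scheme is not spelled out above; as a minimal sketch of the encoder's idea, assume each character is converted to 8 bits and every bit is simply repeated three times to form the block of packets (class and method names are illustrative):

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;

    public class FecEncoderSketch {
        // Encode each character: convert it to 8 bits (binary conversion),
        // then store 3 redundant copies of every bit in one 24-entry block.
        public static List<int[]> encode(String text) {
            List<int[]> blocks = new ArrayList<>();
            for (byte b : text.getBytes(StandardCharsets.US_ASCII)) {
                int[] block = new int[24];           // 8 data bits x 3 copies
                for (int i = 0; i < 8; i++) {
                    int bit = (b >> (7 - i)) & 1;    // MSB first
                    block[3 * i] = bit;
                    block[3 * i + 1] = bit;          // redundant copy
                    block[3 * i + 2] = bit;          // redundant copy
                }
                blocks.add(block);
            }
            return blocks;
        }
    }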
    2. Interleaver:
    Interleaving is a way of arranging data in a non-contiguous way in order to increase performance. It is used in data transmission to protect against burst errors. In this module we arrange (shuffle) the data to avoid burst errors, which is useful to increase the performance of the FEC coding.
    This module gets its input as blocks of bits from the FEC Encoder. In this module we shuffle the bits inside a single block in order to convert burst errors into random errors. This shuffling process is done for each and every block that comes from the FEC Encoder. Then we create a socket connection to transfer the blocks from the Source to the
Queue. This connection is created by using the ServerSocket and Socket classes available in Java.
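A minimal sketch of such a block interleaver, assuming a simple row/column pattern (write row-by-row, read column-by-column); the class name and the depth parameter are illustrative:

    public class InterleaverSketch {
        // Write the block row-by-row into a matrix with `depth` rows,
        // then read it column-by-column, spreading a burst of losses
        // across bits that were originally adjacent.
        public static int[] interleave(int[] block, int depth) {
            int width = block.length / depth;    // assumes depth divides the block length
            int[] out = new int[block.length];
            int k = 0;
            for (int col = 0; col < width; col++) {
                for (int row = 0; row < depth; row++) {
                    out[k++] = block[row * width + col];
                }
            }
            return out;
        }
    }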
3. Implementation of the Queue:
In this module we receive the data from the Source system. This data is the blocks after the FEC Encoding and Interleaving processes are done. These blocks come from the Source system through ServerSocket and Socket, two classes available in Java that are used to create a connection between two systems in a network for data transmission. After we receive the packets from the Source, we create packet loss. Packet loss is a process of deleting packets randomly. After creating the loss, we send the remaining blocks to the Destination through the socket connection.
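The loss model is described only informally above; as a minimal sketch, assume each packet (bit) in a block is dropped independently with a fixed probability and marked as an erasure (the value -1 and the class name are illustrative):

    import java.util.Random;

    public class QueueSketch {
        // Emulate the lossy queue: each entry of the block is dropped with
        // probability lossProbability and marked as an erasure (-1).
        public static int[] applyRandomLoss(int[] block, double lossProbability, Random random) {
            int[] out = block.clone();
            for (int i = 0; i < out.length; i++) {
                if (random.nextDouble() < lossProbability) {
                    out[i] = -1;    // lost packet
                }
            }
            return out;
        }
    }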
4. De-Interleaver:
This module receives the blocks of data from the Queue through the socket connection. These blocks are the packets remaining after the loss in the Queue. In this module we rearrange the data packets inside a block into the order they were in before interleaving. This process of interleaving and de-interleaving is done to convert burst errors into random errors. After de-interleaving, the blocks are arranged in the original order. Then the data blocks are sent to the FEC Decoder.
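A minimal sketch of the corresponding de-interleaver, simply inverting the row/column pattern assumed in the interleaver sketch above:

    public class DeInterleaverSketch {
        // Invert the row/column interleaving: the k-th received entry goes
        // back to its original position row * width + col.
        public static int[] deinterleave(int[] block, int depth) {
            int width = block.length / depth;
            int[] out = new int[block.length];
            int k = 0;
            for (int col = 0; col < width; col++) {
                for (int row = 0; row < depth; row++) {
                    out[row * width + col] = block[k++];
                }
            }
            return out;
        }
    }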
5. FEC Decoder:
This module gets its input from the De-Interleaver. The received packets are processed to recover the original bits from them. Thus we recover the original bits of a character in this module. After retrieving the original bits, we convert them to characters and write them into a text file.
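Continuing the illustrative 3x-repetition assumption from the encoder sketch, the decoder can take a majority vote over the surviving copies of each bit (erasures, marked -1, are skipped):

    import java.util.List;

    public class FecDecoderSketch {
        // Recover one character per block: majority vote over the 3 copies of
        // each bit, ignoring entries marked as lost (-1).
        public static String decode(List<int[]> blocks) {
            StringBuilder text = new StringBuilder();
            for (int[] block : blocks) {
                int value = 0;
                for (int i = 0; i < 8; i++) {
                    int ones = 0, votes = 0;
                    for (int copy = 0; copy < 3; copy++) {
                        int bit = block[3 * i + copy];
                        if (bit >= 0) { votes++; ones += bit; }
                    }
                    int bit = (votes > 0 && 2 * ones > votes) ? 1 : 0;
                    value = (value << 1) | bit;
                }
                text.append((char) value);
            }
            return text.toString();
        }
    }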
6. Performance Evaluation:
In this module we calculate the overall performance of FEC coding in recovering the packet losses. After retrieving the original bits, we convert them to characters and write them
inside a text file. The performance is calculated using coding parameters such as the coding rate, interleaving depth, block length and several others. First we calculate the amount of packet loss, and from it we use various formulas to calculate the overall performance of Forward Error Correction in recovering the network packet losses.
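The exact formulas are not listed above; as a simple example of the kind of figures such an evaluation can report, one can compare the raw packet-loss rate in the queue with the fraction of characters recovered correctly (the class name and metric choice are illustrative assumptions):

    public class PerformanceSketch {
        // Report the observed packet-loss rate and the character recovery rate.
        public static void report(String original, String recovered, int lostPackets, int totalPackets) {
            int correct = 0;
            int length = Math.min(original.length(), recovered.length());
            for (int i = 0; i < length; i++) {
                if (original.charAt(i) == recovered.charAt(i)) correct++;
            }
            double lossRate = totalPackets == 0 ? 0.0 : (double) lostPackets / totalPackets;
            double recoveryRate = original.isEmpty() ? 1.0 : (double) correct / original.length();
            System.out.printf("packet loss rate = %.3f, character recovery rate = %.3f%n",
                    lossRate, recoveryRate);
        }
    }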
Given input and expected output for each module

Module-1
Given Input:
Text file.

Expected Output:
Packets coded with redundant data.

Module-2
Given Input:
Packets coded with redundant data.

Expected Output:
Packets shuffled inside every block of data.

Module-3
Given Input:
Shuffled packets
Expected Output:
Packets passed successfully (other than lost packets)

Module-4
Given Input:
Packets from the queue
Expected Output:
Packets re-ordered as they were before Interleaving

Module-5
Given Input:
Packets from De-Interleaver

Expected Output:
Original packets

Module-6
Given Input:
All the parameters used in FEC Coding

Expected Output:
Output file and the calculations result

REFERENCE:
Ahmed Refaey, Sebastien Roy and Paul Fortier, “A New Approach for FEC Decoding Based on the BP Algorithm in LTE and WiMAX Systems”, IEEE Conference 2011.
for more details contact denniscodd site:
http://www.denniscodd.com