Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com Volume 2, Issue 5, September October 2013 ISSN 2278-6856
Studies on the advancements of Neural Networks and Neural Network Based Cryptanalytic Works
SambasivaRao Baragada, Member IEEE and P. Satyanarayana Reddy, Member IEEE
Government Degree College, Khairatabad, Hyderabad, India.
An artificial neural network (ANN) mimics some features of a real nervous system; it consists of a collection of basic computing units called neurons. Such a model bears a strong resemblance to the axons and dendrites of a nervous system. Robustness, flexibility, and collective computation are the attractive features of this model, owing to its self-organizing and adaptive nature. The model can also deal with a variety of data situations and can be more user-friendly than traditional approaches. The nodes of these networks behave like differential equations. The connections between nodes can be either inter-layer (between layers) or intra-layer (within the same layer). The activation value fed to the nodes of each successive layer is computed from the connection weights and the outputs of the previous layer, and is then passed through a non-linear function. A hard-limiting non-linearity is used when the vectors are binary or bipolar, and a squashing function is chosen when the vectors are analog. Popular squashing functions are the sigmoid (0 to 1), tanh (-1 to +1), Gaussian, logarithmic, and exponential functions.
A network can be either discrete or analog. A neuron of a discrete network takes one of two states, whereas a neuron of an analog network produces a continuous output. A discrete network is synchronous when the state of every neuron is updated at once, and asynchronous when only one neuron is updated in a given time period. A feed-forward network passes input to the next layer through a set of connection strengths, or weights, with no closed chain of dependence among neural states; closing such a chain makes it a feedback network. When the output of the network depends only upon the current input, the network is static (no memory). If the output depends upon past inputs or outputs, the network is dynamic (recurrent). If the interconnections among neurons change with time, the network is adaptive; otherwise it is non-adaptive.
The updating of connection strengths can follow fixed-weight association methods, supervised methods, or unsupervised methods. In fixed-weight association networks the weights are pre-computed. Supervised methods use both the input and the desired output to update the weights, whereas unsupervised methods use only the input. A complete pattern recognition system is composed of the instantiation space, the selection of patterns, and the training and testing of the network.
The development of artificial neural networks was first reported in the early forties by McCulloch and Pitts [1]. A neuron is said to fire when the sum of its excitatory inputs reaches its threshold, and this state persists as long as the neuron receives no inhibitory input. The McCulloch-Pitts model can be used to construct a network capable of computing any logical function. Rosenblatt found the McCulloch-Pitts model [1] biologically implausible. To overcome its deficiencies, he proposed a new model, the perceptron, which could learn and generalize. He further investigated several mathematical models, including competitive learning (self-organization) and forced learning, which is somewhat similar to reinforcement learning. In addition to these two types of learning, the concept of supervised learning was developed and incorporated in the adaptive linear element (ADALINE) model, introduced by Widrow et al. [2]. The ADALINE is a single linear neuron that descends the gradient of the error under supervised learning, and it is useful for discriminating patterns that are linearly separable.
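As a concrete illustration of the supervised, error-gradient learning described above, the following is a minimal sketch of an ADALINE trained with the Widrow-Hoff (LMS) rule on a linearly separable problem, the bipolar AND function. The learning rate and epoch count are illustrative choices, not values from the paper.

```python
# Minimal ADALINE sketch: a single linear neuron trained with the
# Widrow-Hoff (LMS) rule on the linearly separable AND function.
# Inputs and targets are bipolar (-1/+1); a hard limiter gives the class.

def train_adaline(samples, lr=0.05, epochs=100):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            y = w[0] * x[0] + w[1] * x[1] + b   # linear activation
            err = target - y                     # error before the limiter
            w[0] += lr * err * x[0]              # descend the error gradient
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    # Hard-limiting non-linearity, suitable for bipolar vectors
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Bipolar AND: output +1 only when both inputs are +1
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_adaline(data)
print([predict(w, b, x) for x, _ in data])  # expected: [-1, -1, -1, 1]
```

Note that the weight update uses the raw linear output rather than the hard-limited one; this is what distinguishes the ADALINE's gradient descent from Rosenblatt's perceptron rule.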
The concept of multilayer ADALINEs, or multilayer networks, was developed for patterns that are not linearly separable. The training of the multilayer network was first explained by Werbos [3] as the backpropagation algorithm (BPA) in his Ph.D. dissertation, but his work did not become popular at the time. Rumelhart [4] and his group published Parallel Distributed Processing, a two-volume collection of studies on a broad variety of neural network configurations. Through these books, the concept of backpropagation reached a wide audience.
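The multilayer training idea above can be sketched with a minimal backpropagation loop. The following is an illustrative implementation, not the algorithm as any cited author stated it: sigmoid units throughout, with layer sizes, learning rate, and epoch count chosen by hand, trained on XOR, the standard example of a pattern that is not linearly separable.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation on XOR, the
# classic pattern that a single linear neuron cannot separate.
# Layer sizes, learning rate and epoch count are illustrative choices.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # forward pass through both layers of squashing functions
    H = sigmoid(X @ W1 + b1)          # hidden activations
    Y = sigmoid(H @ W2 + b2)          # network output
    # backward pass: propagate the output error toward the input layer
    dY = (Y - T) * Y * (1 - Y)        # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer delta
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(np.round(Y.ravel(), 2))   # should approach [0, 1, 1, 0]
```

The hidden-layer delta is the point of the algorithm: the credit for the output error is distributed back through the weights `W2`, which is exactly what single-layer rules such as the LMS cannot do.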
4. SUMMARY
Neural networks and their advancements are presented in the first part of the paper. The article then reviews some notable cryptanalysis works using neural networks from the last 10 years. A paradigm shift toward neural networks can be clearly observed in the proposals of various researchers. Chengqing Li, Culbrk, et al. worked rigorously to investigate the vulnerabilities of various widely used encryption schemes. It is clear that these neural-network-based cryptanalysis attacks are not confined to a few schemes and are continuously being improved.
ACKNOWLEDGEMENTS
The authors wholeheartedly express their thanks to the Principal and staff of Government Degree College, Khairatabad, Andhra Pradesh, India for their continuous support in pursuing this research work.
REFERENCES
[1] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington, D.C.: Spartan Books, 1962.
[2] B. Widrow, "An Adaptive Adaline Neuron Using Chemical Memistors," Stanford Electronics Laboratories Technical Report 1553-2, October 1960.
[3] P. J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, Ph.D. thesis, Harvard University, 1974.
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pp. 318-362, 1986.
[5] W. Y. Huang and R. P. Lippmann, "Comparisons between neural net and conventional classifiers," 1st IEEE International Conference on Neural Networks, San Diego, Vol. IV, pp. 485-493, June 1987.
[6] J. Sietsma and R. J. F. Dow, "Creating artificial neural networks that generalize," Neural Networks, Vol. 4, No. 1, pp. 67-79, January 1991.
[7] D. L. Chester, "Why Two Hidden Layers are Better than One," International Joint Conference on Neural Networks,