Collective Neurodynamic Optimization Technology For Distributed Big Data Processing
With the development of artificial intelligence, especially in big data, machine learning, and related areas, the size and complexity of modern datasets are growing explosively. To handle problems involving such large-scale datasets, distributed/decentralized computing frameworks have been proposed and are now well established.
There are two main motivations for distributed computing. The first is that the data itself is stored in a distributed fashion, often for reasons of storage capacity and security; in this case, the data must also be processed in a distributed manner. The second is that the dataset is so large that it is difficult to process with a centralized method.
Neurodynamic system: A parallel computing model for real-time optimization
In the 1980s, John J. Hopfield and David W. Tank first proposed recurrent neural networks, which simulate the brain's information-processing function, to solve optimization problems in real time via circuit implementation. Since then, a large body of research on this topic, later called neurodynamic optimization, has developed significantly, especially the work of Jun Wang (winner of the 2014 IEEE Neural Networks Pioneer Award) and his research team. Inspired by biological neural networks, neurodynamic optimization aims to design recurrent neural networks for real-time engineering optimization, which can be implemented in both software and hardware.
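The core idea can be illustrated with a minimal sketch (not any specific published model): a neurodynamic system evolves its state along the negative gradient of a convex objective, so the continuous-time dynamics settle at the minimizer. Here the flow dx/dt = -∇f(x) is simulated with simple Euler steps for an illustrative quadratic objective chosen for this example.

```python
import numpy as np

# Toy convex problem: minimize f(x) = 0.5 * x^T Q x - b^T x
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # positive definite, so f is strictly convex
b = np.array([1.0, 1.0])

def grad_f(x):
    return Q @ x - b

# Neurodynamic gradient flow dx/dt = -grad f(x), simulated with Euler steps.
# In a circuit implementation, this evolution happens in continuous time.
x = np.zeros(2)
dt = 0.01
for _ in range(5000):
    x = x - dt * grad_f(x)

x_star = np.linalg.solve(Q, b)  # analytic minimizer, for comparison
print(x, x_star)                # the flow converges to x_star = [0.2, 0.4]
```

The appeal of a hardware realization is that all state variables evolve in parallel, so convergence time is governed by the circuit's time constants rather than by dataset size.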
Multi-agent system for large-scale distributed optimization
Recently, distributed optimization has found a variety of applications in science and engineering, such as source localization, power grid control, and distributed data regression. For big data and large-scale optimization, distributed optimization based on multi-agent systems has become a hot topic in engineering and has attracted in-depth investigation. The goal of distributed optimization is to minimize (or maximize) a sum of local objective functions, subject to local constraints that generally take the form of equalities and inequalities. Each local objective function and its constraints are usually accessible only to the corresponding agent, and all the agents cooperate to solve the optimization problem while reaching consensus.
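To make the problem structure concrete, here is a small hypothetical setup (the values a_i and the quadratic form of f_i are invented for illustration): each agent i privately holds a local objective f_i, and the global task is to minimize their sum, even though no single agent can evaluate that sum directly.

```python
import numpy as np

# Hypothetical setup: each of n agents privately holds f_i(x) = 0.5 * (x - a_i)^2.
# The global problem is: minimize sum_i f_i(x).
a = np.array([1.0, 4.0, 7.0, 10.0])  # a_i is known only to agent i

def global_objective(x):
    # No single agent can evaluate this sum; it is shown here only as a reference.
    return 0.5 * np.sum((x - a) ** 2)

# For this quadratic case the centralized minimizer is the mean of the a_i.
# A distributed method must reach this point through local computation and
# neighbor-to-neighbor communication alone.
x_star = a.mean()
print(x_star)  # 5.5
```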
Collective neurodynamic system for distributed optimization
A neurodynamic system, as a parallel computing unit, delivers significant performance for real-time optimization. If we combine many such units into a collective neurodynamic system, i.e., a system with multiple interconnected neurodynamic units described as recurrent neural networks (RNNs), it can be used to solve large-scale distributed optimization problems.
From the viewpoint of a multi-agent system, each neurodynamic unit in the collective system can be regarded as an agent with an RNN architecture. Each local objective function is then optimized individually by an RNN, in consensus with the others. Recently, building on our earlier research on neurodynamic optimization, we constructed collective neurodynamic systems to solve large-scale optimization problems in a distributed manner.
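A minimal sketch of this idea (a generic primal-dual consensus flow, not the authors' exact model): one simple continuous-time unit per agent, coupled over a ring-shaped communication graph via its Laplacian. Each agent holds the toy objective f_i(x) = 0.5·(x - a_i)², and the coupled dynamics drive all local estimates to the minimizer of the sum, the mean of the a_i.

```python
import numpy as np

a = np.array([1.0, 4.0, 7.0, 10.0])  # a_i is private to agent i
n = len(a)

# Laplacian of a 4-node ring graph: the communication topology.
# Agent i only ever uses states of its two neighbors.
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

x = np.zeros(n)  # each agent's local estimate of the solution
y = np.zeros(n)  # auxiliary (dual) state that enforces exact consensus

dt = 0.01
for _ in range(20000):
    grad = x - a                   # local gradients, computed in parallel
    x_dot = -grad - L @ x - L @ y  # primal flow: descend + disagreement feedback
    y_dot = L @ x                  # integral action on the disagreement
    x += dt * x_dot
    y += dt * y_dot

print(x)  # all entries approach mean(a) = 5.5
```

At equilibrium, L·x = 0 forces consensus (all x_i equal), and summing the primal equations over the agents shows the common value must zero the sum of the local gradients, i.e., it solves the global problem.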
In contrast to other distributed optimization methods, the collective neurodynamic system consists of RNNs, which can even be heterogeneous, and its dynamic behavior can be analyzed readily. We believe that collective systems will play an important role in big data processing, and we hope this research leads to more effective algorithms for distributed optimization with big data.
These studies, "Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays" and "A collective neurodynamic approach to distributed constrained optimization," were recently published in the journals Complexity and IEEE Transactions on Neural Networks and Learning Systems, respectively.