Artificial sleep helps AI learn new tasks without forgetting old ones
Artificial intelligences can learn and retain knowledge across multiple tasks with the help of rest periods that resemble sleep, which in humans helps consolidate information acquired while awake.
There is strong interest right now in borrowing ideas from neuroscience and biology to improve existing machine learning methods, and sleep is a notable example, says Maxim Bazhenov at the University of California, San Diego.
Many AIs share a limitation: they can master a single set of well-defined tasks, but cannot acquire additional knowledge later without losing everything they learned before, a problem known as catastrophic forgetting. The challenge is to design systems capable of lifelong learning, which would let them pick up the knowledge and skills needed to adapt to and handle future problems, says Pavel Sanda at the Czech Academy of Sciences.
Bazhenov, Sanda and their colleagues trained a spiking neural network, an artificial network of interconnected neurons designed to mimic the structure of the human brain, to perform two distinct tasks without overwriting the connections learned for the first task. They achieved this by interspersing focused training sessions with sleep-like periods.
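To make the idea of a spiking network concrete, here is a minimal sketch of its basic unit, a standard leaky integrate-and-fire neuron. This is a generic textbook model, not necessarily the exact neuron model the researchers used, and the parameter values are purely illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage decays
    toward rest, integrates incoming current, and emits a discrete
    spike when it crosses threshold. This is the basic unit a spiking
    network is built from. (Textbook model, illustrative parameters.)"""
    v = v_rest
    spike_train = []
    for current in input_current:
        v += dt / tau * (v_rest - v) + current  # leak + integrate
        if v >= v_thresh:                       # threshold crossed
            spike_train.append(1)
            v = v_reset                         # reset after a spike
        else:
            spike_train.append(0)
    return spike_train
```

Driving it with a steady current just above threshold, for example lif_neuron(np.full(100, 0.08)), produces a regular train of spikes rather than a continuous output value, which is what distinguishes spiking networks from conventional ones.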
The researchers simulated sleep in the network by activating its artificial neurons with a noisy pattern of input. They also ensured that this noise-driven activity during sleep roughly matched the pattern of neural activity seen during the training sessions, a way of reactivating and reinforcing the connections formed while learning both tasks.
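The sketch below illustrates one way such a sleep phase could work, using crude binary spikes and a simple Hebbian update in place of the more detailed plasticity rules a real spiking network would use. The names (sleep_phase, hebbian_update) and all parameters here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 100, 20                          # layer sizes (illustrative)
W = rng.normal(0.0, 0.1, (N_OUT, N_IN))       # input-to-output weights

def spikes(drive, threshold=0.5):
    """Crude binary spiking: a neuron fires if its drive crosses threshold."""
    return (drive > threshold).astype(float)

def hebbian_update(W, pre, post, lr=0.01):
    """Strengthen connections between co-active pre/post neurons and
    weakly decay the rest (a crude stand-in for spike-timing plasticity)."""
    W += lr * (np.outer(post, pre) - 0.001 * W)
    return W

def sleep_phase(W, train_inputs, steps=500, noise_scale=1.0):
    """Sleep: drive the network with noise whose statistics roughly match
    the average input activity seen during training, so spontaneous
    spiking replays, and thereby reinforces, task-related connections."""
    mean_act = train_inputs.mean(axis=0)       # per-input mean firing rate
    for _ in range(steps):
        noisy_in = spikes(noise_scale * mean_act * rng.random(N_IN), 0.3)
        out = spikes(W @ noisy_in)
        W = hebbian_update(W, noisy_in, out)   # unsupervised replay
    return W
```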
The scientists first tried training the network on one task, then on the second, adding a sleep period only at the end. They quickly found that this sequence erased the connections the network had learned during the first task.
Follow-up experiments showed that what mattered was rapidly alternating between training and sleep while the AI was learning the second task, says Erik Delanois at the University of California, San Diego. This consolidated the connections from the first task that would otherwise have been forgotten.
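Continuing the sketch above (and reusing its spikes and sleep_phase helpers), the schedule that worked might look roughly like the following, where the key point is that sleep phases are interleaved with task-2 training rather than appended once at the end. The supervised rule and the choice of replay statistics here are toy stand-ins, not the study's actual procedure.

```python
def supervised_step(W, x, target, lr=0.05):
    """Toy supervised update: nudge output spikes toward a target pattern."""
    out = spikes(W @ x)
    W += lr * np.outer(target - out, x)
    return W

def train_with_interleaved_sleep(W, task1_data, task2_data,
                                 rounds=10, sleep_steps=200):
    """Learn task 1 fully, then alternate short bursts of task-2 training
    with sleep phases, so noisy replay during sleep keeps reinforcing the
    task-1 connections instead of letting them be overwritten."""
    for x, t in task1_data:                    # learn task 1 first
        W = supervised_step(W, x, t)
    # Assumption: replay noise statistics drawn from inputs of both tasks.
    replay_inputs = np.array([x for x, _ in task1_data + task2_data])
    for _ in range(rounds):
        for x, t in task2_data:                # awake: train on task 2
            W = supervised_step(W, x, t)
        W = sleep_phase(W, replay_inputs, steps=sleep_steps)  # sleep
    return W
```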
In their experiments, the team showed that training a spiking neural network in this way let an artificial intelligence agent learn two distinct foraging patterns, hunting for simulated food particles while avoiding dangerous ones.
The goal of lifelong learning AI is to be able to combine diverse experiences and apply that knowledge in unfamiliar situations, much as animals and humans do, says Hava Siegelmann at the University of Massachusetts Amherst.
Spiking neural networks, with their intricate, biologically inspired architecture, have not yet proved practical for widespread use because they are notoriously hard to train, Siegelmann says. To further establish the approach, it will be important to demonstrate it on increasingly complex tasks and on the conventional artificial neural networks that technology companies routinely use.
One notable advantage of spiking neural networks is that they are far more energy efficient than other neural network architectures. People are likely to move towards spiking network technology in the coming decade or so, says Ryan Golden at the University of California, San Diego, so it is good to figure out these problems early.
Submitted by:
Manu Sharma