
Coupled nano-oscillators capable of recognizing vowels according to a learning rule

Physicists have succeeded in fabricating a network of four coupled nano-oscillators capable of recognizing spoken vowels by tuning their frequencies according to an automatic real-time learning rule. They show that the high experimental recognition rates result from the oscillators' exceptional ability to synchronize.
In contrast with the flagship algorithms of artificial intelligence, which rely on artificial neural networks, physicists are working on physical components inspired by biological neurons. Each of these nanoscale components plays the role of a nano-neuron capable of solving complex problems by exploiting the synchronization of its magnetic oscillations.

The component studied by researchers from the Unité Mixte de Physique CNRS/Thales, the Centre de Nanosciences et de Nanotechnologies (C2N, CNRS/UPSUD) and the AIST in Japan is composed of magnetic and non-magnetic layers structured at the nanoscale. A year ago, a study by the same authors showed that a single one of these components could behave like an artificial neuron and recognize spoken digits with state-of-the-art recognition rates. That single component realized an entire neural network by itself, successively performing the work of each neuron.

Dynamic couplings between several components could then play the role of synaptic communication between neurons. A major challenge in implementing these models with nano-devices, however, is to achieve learning, which requires finely controlling and tuning the coupled oscillations of the components. The dynamic characteristics of nanodevices can indeed be difficult to control, and are subject to noise and variability. It is this challenge of finely adjusting the oscillations that has been taken up in this new work. The researchers show that the exceptional tunability of spintronic nano-oscillators, that is, the ability to control their frequency widely and precisely through the electric current and the magnetic field, can solve this problem. They successfully trained a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning the oscillators' frequencies according to an automatic real-time learning rule, and they show that the high experimental recognition rates result from the oscillators' exceptional ability to synchronize. Their work is published in Nature.
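The synchronization mechanism behind this scheme can be illustrated with the classic Adler model of injection locking (a standard textbook description, not the authors' own device model): an oscillator driven by an external tone phase-locks only when the frequency mismatch falls within a locking range set by the coupling strength. A minimal sketch, with illustrative parameter values:

```python
import math

# Adler phase equation for an injection-locked oscillator:
#   dphi/dt = 2*pi*delta_f - eps * sin(phi)
# delta_f: detuning between the external tone and the oscillator's
# free-running frequency; eps: coupling strength. A fixed point
# (phase locking) exists iff |2*pi*delta_f| <= eps.

def adler_locked(delta_f, eps, t_end=200.0, dt=1e-3):
    """Integrate the phase difference and report whether it settles
    to a constant (locked) or keeps drifting (unlocked)."""
    phi = 0.0
    for _ in range(int(t_end / dt)):
        phi += (2 * math.pi * delta_f - eps * math.sin(phi)) * dt
    # At a stable fixed point the phase velocity vanishes.
    dphi = 2 * math.pi * delta_f - eps * math.sin(phi)
    return abs(dphi) < 1e-3
```

For a coupling `eps = 2.0` (reduced units), a small detuning of 0.1 locks while a detuning of 1.0 does not; widening this locking range by increasing the drive amplitude is what the high-amplitude microwave inputs accomplish in the experiment.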

The four neurons are experimentally implemented with four spin-transfer nano-oscillators: circular magnetic tunnel junctions, 375 nm in diameter, with an iron-boron free layer whose ground state is a magnetic vortex. Symmetrical neural interconnections are implemented by electrically connecting the four oscillators with millimeter-scale wires: in this configuration, the microwave current generated by each oscillator propagates through the microwave electrical loop and in turn influences the dynamics, in particular the frequency, of the other oscillators. The oscillators are thus coupled. The sum of all the microwave emissions is detected by a spectrum analyzer. With this network of neurons, the researchers recognized vowels pronounced by different speakers. The audio signal of each vowel is transformed by Fourier analysis into two frequencies, accelerated a hundred thousand times, and then applied to the nano-oscillators through an antenna as high-amplitude microwave signals capable of synchronizing the oscillators. The vowels are correctly recognized and classified if each vowel leads to a specific synchronization configuration regardless of the speaker: for example, for the vowel "ih" a single oscillator synchronizes, while for the vowel "ah" two oscillators synchronize. This behavior is not innate: the network must be trained to achieve it. To do so, the researchers gradually modified the frequency of each oscillator by adjusting the DC current flowing through it according to a learning rule.
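The training loop described above can be caricatured in a few lines. The sketch below is a hypothetical toy model, not the authors' actual learning rule: each "oscillator" locks to an input tone when it lies within an assumed fixed locking range, and learning nudges the oscillator frequencies (standing in for the DC-current adjustment in the experiment) until the target synchronization pattern appears:

```python
LOCK_RANGE = 5.0  # illustrative locking range, arbitrary frequency units

def sync_pattern(osc_freqs, inputs):
    """Which oscillators lock to at least one of the input tones."""
    return [any(abs(f - fi) <= LOCK_RANGE for fi in inputs) for f in osc_freqs]

def train_step(osc_freqs, inputs, target, lr=0.2):
    """Hypothetical rule: pull an oscillator toward its nearest input
    tone if it should synchronize but does not, and push it away if
    it synchronizes but should not."""
    out = sync_pattern(osc_freqs, inputs)
    new = list(osc_freqs)
    for i, f in enumerate(osc_freqs):
        nearest = min(inputs, key=lambda fi: abs(fi - f))
        if target[i] and not out[i]:
            new[i] = f + lr * (nearest - f)   # move into the locking range
        elif out[i] and not target[i]:
            new[i] = f - lr * (nearest - f)   # move out of the locking range
    return new

# One vowel encoded as two tones; target: oscillators 1 and 3 must lock.
freqs = [300.0, 320.0, 340.0, 360.0]
inputs = (310.0, 352.0)
target = [False, True, False, True]
for _ in range(50):
    freqs = train_step(freqs, inputs, target)
```

After training, `sync_pattern(freqs, inputs)` matches the target pattern; the real experiment plays the same game with four coupled junctions and a spectrum analyzer instead of a list of floats.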

These results demonstrate that non-trivial classification tasks can be performed with small physical neural networks endowed with nonlinear dynamic characteristics: here, oscillations and synchronization. This demonstration of real-time learning with a set of four spin-transfer-torque nano-oscillators is an important step for spintronics-based neuromorphic computing. Future research will consist of coupling a larger number of components together.

Reference: Vowel recognition with four coupled spin-torque nano-oscillators, Miguel Romera, Philippe Talatchian et al., Nature (2018).

Contact: Julie Grollier, CNRS Senior Researcher at the Unité Mixte de Physique CNRS/Thales, and Damien Querlioz, CNRS Researcher at the Centre de Nanosciences et de Nanotechnologies (C2N)

Approach for pattern classification with coupled spin-torque nano-oscillators. (a) Schematic of the emulated neural network. (b) Schematic of the experimental set-up, with four spin-torque nano-oscillators electrically connected in series and coupled through their own emitted microwave currents. Two microwave signals encoding information in their frequencies fA and fB are applied as inputs to the system through a strip line, which translates them into two microwave fields. The total microwave output of the oscillator network is recorded with a spectrum analyzer. (c) Microwave output emitted by the network of four oscillators without (light blue) and with (dark blue) the two microwave signals applied to the system. The two curves have been shifted vertically for clarity. The four peaks in the light blue curve correspond to the emissions of the four oscillators. The two narrow red peaks in the dark blue curve correspond to the external microwave signals with frequencies fA and fB. (d-e) Learning to classify patterns by tuning the frequencies of the oscillators: experimental synchronization map as a function of the frequencies of the external signals (d) before and (e) after training. The colored dots represent the inputs applied to the oscillatory network, i.e. vowels pronounced by different speakers; different vowels are shown in different colors. Videos are provided on the web: full movie (3'30"); short movie (20").
Copyright UMPHY/C2N - CNRS/Thales/UPSud