#differentialequations

waynerad@pluspora.com

"A type of neural network that learns on the job, not just during its training phase" has been developed. "These flexible algorithms, dubbed 'liquid' networks, change their underlying equations to continuously adapt to new data inputs."

"Ramin Hasani, the study's lead author, points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society."

"Hasani drew inspiration directly from the microscopic nematode, C. elegans. 'It only has 302 neurons in its nervous system,' he says, 'yet it can generate unexpectedly complex dynamics.' Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations."

"This flexibility is key. Most neural networks' behavior is fixed after the training phase, which means they're bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his 'liquid' network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car."

"There's another advantage of the network's flexibility, he adds: 'It's more interpretable.'"

"Thanks to Hasani's small number of highly expressive neurons, it's easier to peer into the 'black box' of the network's decision making and diagnose why the network made a certain characterization."

"Hasani's network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets, ranging from atmospheric chemistry to traffic patterns. 'In many applications, we see the performance is reliably high,' he says. Plus, the network's small size meant it completed the tests without a steep computing cost."

Unfortunately, I'm not going to be able to explain much of how this system works, since I don't understand it well enough myself. In simple terms, it combines two ideas: an ordinary differential equation solver and a recurrent neural network. A recurrent neural network is a neural network that takes part of its output and feeds it back around as input for the next time step. This is a simple way to get a neural network to "remember" things it has seen in the past. During training, the backpropagation step changes the weights on the neural connections by going backwards through the network; for a recurrent neural network, this involves going backwards through that output-to-input circuit as well, a procedure known as "backpropagation through time".
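
To make that concrete, here's a minimal sketch of a plain recurrent network in PyTorch. This is just the generic idea, not Hasani's architecture, and all the names and sizes are made up for illustration: the hidden state is fed back in at each step, and calling backward() through the unrolled loop is exactly "backpropagation through time".

```python
import torch
import torch.nn as nn

# Minimal sketch of a plain recurrent network (not Hasani's architecture):
# the hidden state h is fed back as input at the next time step, and the
# backward pass through the unrolled loop is "backpropagation through time".
class TinyRNN(nn.Module):
    def __init__(self, input_size=4, hidden_size=8, output_size=1):
        super().__init__()
        self.in_to_hidden = nn.Linear(input_size + hidden_size, hidden_size)
        self.hidden_to_out = nn.Linear(hidden_size, output_size)

    def forward(self, sequence):
        # sequence: (time_steps, input_size)
        h = torch.zeros(self.in_to_hidden.out_features)
        for x_t in sequence:
            # concatenate the current input with the fed-back hidden state
            h = torch.tanh(self.in_to_hidden(torch.cat([x_t, h])))
        return self.hidden_to_out(h)

model = TinyRNN()
sequence = torch.randn(10, 4)   # 10 time steps of 4 features each
target = torch.randn(1)
loss = nn.functional.mse_loss(model(sequence), target)
loss.backward()                 # gradients flow back through all 10 steps
```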

The ordinary differential equation part, on the other hand, represents the system as a set of outputs that change over time, with the relationship between each output and its time derivative defined by an equation whose parameters are themselves learned by a neural network. The modification made here, which seems to make all the difference, is an additional term with a time constant that controls how quickly the ordinary differential equation solver moves the whole system toward an equilibrium state.
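
My best attempt at writing that update out in plain NumPy, based on my reading of the liquid time-constant equation in the paper. All the names, sizes, and the step size are assumptions for illustration, not the authors' code: a small trained network produces a "gate" value that both pushes the state toward equilibrium targets A and, combined with the time constant tau, sets how quickly it relaxes there.

```python
import numpy as np

# Rough sketch of a liquid time-constant update (my reading of the published
# equation, not the authors' code):
#   dx/dt = -(1/tau + gate(x, inputs)) * x + gate(x, inputs) * A
# where gate() is a small neural network whose parameters are what training
# adjusts, tau is a fixed time constant, and A are equilibrium targets.
def gate(x, inputs, W, U, b):
    # positive-valued "neural network" term, sigmoid keeps it in (0, 1)
    return 1.0 / (1.0 + np.exp(-(W @ x + U @ inputs + b)))

def liquid_step(x, inputs, params, tau=1.0, dt=0.1):
    W, U, b, A = params
    g = gate(x, inputs, W, U, b)
    # fused (semi-implicit) Euler step toward the equilibrium set by g and A;
    # larger g or smaller tau means faster relaxation
    return (x + dt * g * A) / (1.0 + dt * (1.0 / tau + g))

rng = np.random.default_rng(0)
n, m = 8, 4                              # state size, input size (made up)
params = (rng.normal(size=(n, n)),       # W
          rng.normal(size=(n, m)),       # U
          rng.normal(size=n),            # b
          rng.normal(size=n))            # A (equilibrium targets)
x = np.zeros(n)
for t in range(50):
    x = liquid_step(x, rng.normal(size=m), params)
```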

How exactly this leads to the "liquidity" that lets the system "learn on the job", I haven't figured out. As for the greater "interpretability", that sounds like it is simply a function of having fewer neurons.

"Liquid" machine-learning system adapts to changing conditions

#solidstatelife #ai #liquidnetworks #recurrentnetworks #timeseries #differentialequations