How do neural networks learn? A mathematical formula explains how they detect relevant patterns

Researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions. The team presented their findings in the March 7 issue of the journal Science.
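The brief does not spell out the formula itself. One classical statistic of this kind is the average gradient outer product, which summarizes the input directions a trained predictor is most sensitive to; the minimal numpy sketch below uses a toy predictor and illustrative dimensions (not the paper's code or data) to show how such a statistic can recover the "features" a model relies on.

```python
import numpy as np

# Minimal sketch (not the paper's code): the average gradient outer product (AGOP)
# of a predictor f over sampled inputs. Directions with large AGOP eigenvalues are
# the input "features" the predictor is most sensitive to.

rng = np.random.default_rng(0)

def f(x):
    # Toy predictor that only depends on the first two coordinates.
    return np.tanh(x[0] + 2.0 * x[1])

def gradient(func, x, eps=1e-5):
    # Central finite differences, to keep the sketch dependency-free.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (func(x + e) - func(x - e)) / (2 * eps)
    return g

d, n = 10, 500
X = rng.normal(size=(n, d))
agop = np.zeros((d, d))
for x in X:
    g = gradient(f, x)
    agop += np.outer(g, g)
agop /= n

eigvals, eigvecs = np.linalg.eigh(agop)
print("top eigenvalue:", eigvals[-1])
print("top direction (aligns with x0 + 2*x1):", np.round(eigvecs[:, -1], 2))
```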

Mind to molecules: Does brain’s electrical encoding of information ‘tune’ sub-cellular structure?

A new paper by researchers at MIT, City, University of London, and Johns Hopkins University posits that the electrical fields of the network influence the physical configuration of neurons’ sub-cellular components to optimize network stability and efficiency, a hypothesis the authors call “Cytoelectric Coupling.”

Two brain networks are activated while reading, study finds

When a person reads a sentence, two distinct networks in the brain are activated, working together to integrate the meanings of the individual words to obtain more complex, higher-order meaning, according to a study at UTHealth Houston.

Nanoengineers Develop a Predictive Database for Materials

Nanoengineers at the University of California San Diego’s Jacobs School of Engineering have developed an AI algorithm that predicts the structure and dynamic properties of any material—whether existing or new—almost instantaneously. Known as M3GNet, the algorithm was used to develop matterverse.ai, a database of more than 31 million yet-to-be-synthesized materials with properties predicted by machine learning algorithms. Matterverse.ai facilitates the discovery of new technological materials with exceptional properties.
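For readers who want to try the model itself, the team released an open-source m3gnet Python package alongside the database. The sketch below assumes that package's documented Relaxer interface (as described in its README) and uses pymatgen to build a toy structure; the lattice constant and result keys are assumptions, not output from matterverse.ai.

```python
from pymatgen.core import Structure, Lattice
from m3gnet.models import Relaxer  # assumes the open-source m3gnet package is installed

# Illustrative body-centered-cubic Mo cell with a deliberately stretched lattice constant.
mo = Structure(Lattice.cubic(3.3), ["Mo", "Mo"], [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])

relaxer = Relaxer()          # loads the pre-trained universal interatomic potential
result = relaxer.relax(mo)   # relaxes the structure using the ML potential

print(result["final_structure"])           # relaxed geometry
print(result["trajectory"].energies[-1])   # predicted final energy (eV)
```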

A new neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today’s compute platforms

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms. The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server.
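The release does not detail the circuit, but the core compute-in-memory idea can be sketched: weights live as device conductances in a crossbar, inputs are applied as voltages, and each output current is the analog sum along a column. The numpy simulation below is a rough illustration with made-up sizes and quantization, not the NeuRRAM design.

```python
import numpy as np

# Minimal sketch (not the NeuRRAM circuit): a resistive crossbar computing a
# neural-network layer's matrix-vector product "in memory". Weights are stored as
# conductances; inputs are applied as voltages; each column current is the analog
# sum of voltage * conductance (Ohm's and Kirchhoff's laws).

rng = np.random.default_rng(1)

def to_conductance(w, levels=16, g_max=1.0):
    # Map signed weights onto a limited number of conductance levels per device,
    # using a differential pair of columns for positive and negative parts.
    scale = np.abs(w).max()
    q = np.round(np.clip(w / scale, -1, 1) * (levels - 1)) / (levels - 1)
    g_pos = np.where(q > 0, q, 0.0) * g_max
    g_neg = np.where(q < 0, -q, 0.0) * g_max
    return g_pos, g_neg, scale

W = rng.normal(size=(8, 16))   # one layer's weights
x = rng.normal(size=16)        # input activations, applied as voltages

g_pos, g_neg, scale = to_conductance(W)
i_out = g_pos @ x - g_neg @ x  # column currents, read out by on-chip ADCs
y_analog = i_out * scale
print("max error vs digital matmul:", np.max(np.abs(y_analog - W @ x)))
```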

Machine Learning Reveals Hidden Components of X-Ray Pulses

Ultrafast pulses from X-ray lasers reveal how atoms move at femtosecond timescales, but measuring the properties of the pulses is challenging. A new approach trains neural networks to analyze the pulses. Starting from low-resolution measurements, the neural networks reveal finer details with each pulse, and they can analyze pulses millions of times faster than previous methods.
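As a rough illustration of the approach (not the published model), a small network can be trained on synthetic data to map coarse, noisy detector readings to a finer-grained pulse reconstruction. The Gaussian pulse model, resolutions, and architecture below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch: learn a mapping from a low-resolution, noisy measurement of a
# pulse to a finer reconstruction, using synthetic Gaussian pulses as stand-ins
# for X-ray pulse power profiles.

torch.manual_seed(0)
n_low, n_high, n_train = 16, 128, 4096

def make_pulses(n):
    t = torch.linspace(0.0, 1.0, n_high)
    center = 0.3 + 0.4 * torch.rand(n, 1)
    width = 0.02 + 0.05 * torch.rand(n, 1)
    high = torch.exp(-0.5 * ((t - center) / width) ** 2)        # "true" fine profile
    low = high.reshape(n, n_low, n_high // n_low).mean(dim=2)   # coarse detector bins
    low = low + 0.01 * torch.randn_like(low)                    # measurement noise
    return low, high

model = nn.Sequential(nn.Linear(n_low, 256), nn.ReLU(), nn.Linear(256, n_high))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

low, high = make_pulses(n_train)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(low), high)
    loss.backward()
    opt.step()

test_low, test_high = make_pulses(8)
with torch.no_grad():
    err = nn.functional.mse_loss(model(test_low), test_high)
print(f"reconstruction MSE on held-out pulses: {err.item():.5f}")
```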

School of Physics Uses Moths and Origami Structures for Innovative Defense Research

Georgia Tech has received two Department of Defense (DoD) 2022 Multidisciplinary University Research Initiative (MURI) awards totaling almost $14 million. The highly competitive government program supports interdisciplinary teams of investigators developing innovative solutions in DoD interest areas. This year, the DoD awarded $195 million to 28 research teams across the country.

Contrary to expectations, study finds primate neurons have fewer synapses than mouse neurons in visual cortex

A UChicago and Argonne National Laboratory study analyzing over 15,000 individual synapses in macaques and mice found that neurons in the primate visual cortex have two to five times fewer synapses than their counterparts in mice, a difference that may be due to the metabolic cost of maintaining synapses.

Evolution Sets the Stage for More Powerful Spiking Neural Networks

Spiking neural networks (SNNs) closely replicate the structure of the human brain, making them an important step on the road to developing artificial intelligence. Researchers recently advanced a key technique for training SNNs using an evolutionary approach. This approach involves recognizing and making use of the different strengths of individual elements of the SNN.
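As a toy illustration of evolutionary SNN training (not the paper's algorithm), the sketch below evolves the weights of a tiny leaky integrate-and-fire network on a synthetic task, sidestepping the non-differentiable spike function; the network size, task, and mutation scheme are all assumptions for demonstration.

```python
import numpy as np

# Minimal sketch: a (1, lambda) evolutionary strategy optimizing the weights of a
# small leaky integrate-and-fire (LIF) spiking network with a rate-coded readout.

rng = np.random.default_rng(0)
n_in, n_hidden, t_steps = 4, 8, 50

def run_snn(weights, x):
    w_in = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    w_out = weights[n_in * n_hidden :]
    v = np.zeros(n_hidden)
    out = 0.0
    for _ in range(t_steps):
        v = 0.9 * v + x @ w_in             # leaky integration of input current
        spikes = (v > 1.0).astype(float)   # threshold nonlinearity
        v = np.where(spikes > 0, 0.0, v)   # reset neurons that fired
        out += spikes @ w_out
    return out / t_steps                   # rate-coded output

def fitness(weights, X, y):
    preds = np.array([run_snn(w := weights, x) for x in X])
    return -np.mean((preds - y) ** 2)      # higher is better

# Toy task: the output rate should track the mean of the input vector.
X = rng.uniform(0, 1, size=(32, n_in))
y = X.mean(axis=1)

dim = n_in * n_hidden + n_hidden
parent = rng.normal(scale=0.1, size=dim)
for gen in range(200):
    children = parent + 0.05 * rng.normal(size=(20, dim))   # mutate
    scores = [fitness(c, X, y) for c in children]
    parent = children[int(np.argmax(scores))]               # select the best
print("final fitness (negative MSE):", fitness(parent, X, y))
```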

Developing Smarter, Faster Machine Intelligence with Light

Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the…

New Machine Learning-Based Model More Accurately Predicts Liver Transplant Waitlist Mortality

A new study presented this week at The Liver Meeting Digital Experience® – held by the American Association for the Study of Liver Diseases – found that a neural network, a type of machine learning algorithm, predicts waitlist mortality in liver transplantation more accurately than the older Model for End-Stage Liver Disease (MELD) score. This advancement could lead to more equitable organ allocation systems and even reduce death rates on the liver transplant waitlist.
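For context on the baseline, the classic MELD score is a fixed log-linear formula over just three lab values. The sketch below implements that formula with its commonly published coefficients and clamping rules (the example inputs are made up), which makes clear how little of the waitlist record it uses compared with a learned model.

```python
import math

# The classic (pre-MELD-Na) MELD score: a fixed log-linear formula over three labs,
# the baseline the study's neural network is reported to outperform.

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    # Standard clamping: values below 1.0 are set to 1.0; creatinine is capped at 4.0.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = 3.78 * math.log(bili) + 11.2 * math.log(inr) + 9.57 * math.log(creat) + 6.43
    return int(min(max(round(score), 6), 40))   # reported on a 6-40 scale

print(meld(bilirubin_mg_dl=2.5, inr=1.8, creatinine_mg_dl=1.2))  # -> 18
```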

Recipe for Neuromorphic Processing Systems?

The field of “brain-mimicking” neuromorphic electronics shows great potential for basic research and commercial applications, and researchers in Germany and Switzerland recently explored the possibility of reproducing the physics of real neural circuits by using the physics of silicon. In Applied Physics Letters, they present their work to understand neural processing systems, as well as a recipe for reproducing these computing principles in mixed-signal analog/digital electronics and novel materials.

Applying Deep Learning to Automate UAV‐Based Detection of Scatterable Landmines

Recent advances in unmanned aerial vehicle (UAV)-based remote sensing utilizing lightweight multispectral and thermal infrared sensors allow for rapid wide-area landmine contamination detection and mapping surveys. We present results of a study focused on developing and testing an automated technique of…
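As a rough sketch of the kind of model such a pipeline might use (not the study's network), a small CNN can classify co-registered multispectral and thermal image tiles cut from a survey orthomosaic; the band count, tile size, and architecture below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch: a small CNN that labels multispectral + thermal UAV tiles as
# "suspected scatterable mine" vs "background".

n_bands, tile = 6, 64   # e.g. 5 multispectral bands + 1 thermal IR band (assumed)

model = nn.Sequential(
    nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (tile // 4) ** 2, 64), nn.ReLU(),
    nn.Linear(64, 2),   # logits: background vs suspected mine
)

# Dummy batch standing in for tiles cut from an orthomosaic survey.
tiles = torch.randn(8, n_bands, tile, tile)
labels = torch.randint(0, 2, (8,))
logits = model(tiles)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
print("batch loss:", loss.item())
```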

ORNL researchers develop ‘multitasking’ AI tool to extract cancer data in record time

To better leverage cancer data for research, scientists at ORNL are developing an artificial intelligence (AI)-based natural language processing tool to improve information extraction from textual pathology reports. In a first for cancer pathology reports, the team developed a multitask convolutional neural network (CNN)—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
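A minimal sketch of the general multitask text-CNN pattern (not ORNL's model): a shared convolutional encoder runs over the embedded report tokens, the two-dimensional numerical representation mentioned above, and one classification head per pathology field shares that encoder. The vocabulary size, field names, and class counts are assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of a multitask text CNN: shared encoder, one head per report field.

vocab_size, embed_dim, max_len = 5000, 64, 300
tasks = {"site": 25, "histology": 50, "grade": 4}   # hypothetical label spaces

class MultitaskTextCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.heads = nn.ModuleDict({t: nn.Linear(128, n) for t, n in tasks.items()})

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        h = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over the report
        return {t: head(h) for t, head in self.heads.items()}

model = MultitaskTextCNN()
reports = torch.randint(0, vocab_size, (4, max_len))     # dummy tokenized reports
outputs = model(reports)
loss = sum(
    nn.functional.cross_entropy(outputs[t], torch.randint(0, n, (4,)))
    for t, n in tasks.items()
)
loss.backward()
print({t: tuple(o.shape) for t, o in outputs.items()})
```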

Researchers from TU Delft discover real Van Gogh using artificial intelligence

What did Vincent van Gogh actually paint and draw? Paintings and drawings fade, so researchers from TU Delft are using deep learning to digitally reconstruct works of art and discover what they really looked like. ‘What we see today is not the painting or drawing as it originally was,’ says researcher Jan van der Lubbe.