The inner crust of a neutron star is characterized by the presence of a neutron superfluid. To accurately predict the properties of neutron matter in this state, researchers make theoretical calculations that typically assume that neutrons form “Cooper pairs.” This study used artificial neural networks to make accurate predictions without relying on this assumption.
Enhancing MRI with AI to Improve Diagnosis of Brain Disorders
Researchers from UC San Francisco have developed a machine learning algorithm that enhances 3T MRIs by synthesizing images that approximate real 7T scans. Their model rendered pathological tissue with greater fidelity, offering clearer clinical insights, and represents a new step toward evaluating clinical applications of synthetic 7T MRI models.
Flexible Circuits Made with Silk and Graphene on the Horizon
Ultra-thin layers of silk deposited on graphene in perfect alignment represent a key advance for the control needed in microelectronics and advanced neural network development.
Psilocybin generates psychedelic experience by disrupting brain network
Researchers at Washington University School of Medicine in St. Louis report that psilocybin, the active ingredient in magic mushrooms, destabilizes a critical network of brain areas involved in introspective thinking. The findings provide a neurobiological explanation for the drug’s mind-bending effects.
Acoustic radiation and scattering: a new era with BINNs technology
A new method called Boundary Integrated Neural Networks (BINNs) has been developed for analyzing acoustic radiation and scattering.
Unlocking cryptocurrency profits: AI-powered trading strategies tame market swings
In the rapidly evolving world of cryptocurrency, volatility management remains a crucial challenge. Researchers have now developed a novel approach that integrates Exponential Generalized Autoregressive Conditional Heteroskedasticity (EGARCH) with genetic algorithms and neural networks to enhance the precision of trading decisions in this volatile market.
NJIT Researcher: Neural Networks Can Mediate Between Download Size and Quality
The battle between application data requirements and available network bandwidth has raged throughout the Information Age, but now it appears that a truce is within reach, based on new research from NJIT Associate Professor Jacob Chakareski.
Berkeley Lab Scientists Invent Novel Microdevice Array for Energy Efficient Optical Computing
Scientists at Lawrence Berkeley National Laboratory have developed a breakthrough technology that could greatly advance the field of optical computing and devices utilizing all-optical control.
Researchers Reveal Roadmap for AI Innovation in Brain and Language Learning
A new study co-led by Georgia Institute of Technology’s Anna (Anya) Ivanova uncovers the relationship between language and thought in artificial intelligence models like ChatGPT, leveraging cognitive neuroscience research on the human brain. The results are a roadmap to developing new AIs — and to better understanding how we think and communicate.
Introducing FloorLocator: a game-changer in indoor navigation technology
Researchers have developed FloorLocator, a breakthrough in indoor navigation technology, which combines the high efficiency of Spiking Neural Networks (SNNs) with the advanced learning capabilities of Graph Neural Networks (GNNs).
How do neural networks learn? A mathematical formula explains how they detect relevant patterns
Researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions. The team presented their findings in the March 7 issue of the journal Science.
UTSW team’s new AI method may lead to ‘automated scientists’
UT Southwestern Medical Center researchers have developed an artificial intelligence (AI) method that writes its own algorithms and may one day operate as an “automated scientist” to extract the meaning behind complex datasets.
How Does the Brain Make Decisions?
Mouse study provides insights into communication between neurons during decision-making
World’s largest childhood trauma study uncovers brain rewiring
The world’s largest brain study of childhood trauma has revealed how it affects development and rewires vital pathways.
AI learns through the eyes and ears of a child
AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input—much more than children receive when learning how to understand and speak a language.
New parallel hybrid network achieves better performance through quantum-classical collaboration
Building efficient quantum neural networks is a promising direction for research at the intersection of quantum computing and machine learning.
Study: Deep neural networks don’t see the world the way we do
Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.
Novel information on the neural origins of speech and singing
Contrary to what was previously thought, speech production and singing are supported by the same circuitry in the brain. The observations in a new study can help develop increasingly effective rehabilitation methods for patients with aphasia.
Mathematical theory predicts self-organized learning in real neurons
An international collaboration between researchers at the RIKEN Center for Brain Science (CBS) in Japan, the University of Tokyo, and University College London has demonstrated that self-organization of neurons as they “learn” follows a mathematical theory called the free energy principle.
The digital dark matter clouding AI
Artificial intelligence has entered our daily lives. First, it was ChatGPT. Now, it’s AI-generated pizza and beer commercials. While we can’t trust AI to be perfect, it turns out that sometimes we can’t trust ourselves with AI either.
Mind to molecules: Does brain’s electrical encoding of information ‘tune’ sub-cellular structure?
A new paper by researchers at MIT, City, University of London, and Johns Hopkins University posits that the electrical fields of the network influence the physical configuration of neurons’ sub-cellular components to optimize network stability and efficiency, a hypothesis the authors call “Cytoelectric Coupling.”
A neural coordination strategy for attachment and detachment of a climbing robot inspired by gecko locomotion
In a research article, scientists at the Nanjing University of Aeronautics and Astronautics present a neural control algorithm that coordinates the adhesive toes and limbs of the climbing robot.
Two brain networks are activated while reading, study finds
When a person reads a sentence, two distinct networks in the brain are activated, working together to integrate the meanings of the individual words to obtain more complex, higher-order meaning, according to a study at UTHealth Houston.
Neural network learns how to identify chromatid cohesion defects
Scientists from Tokyo Metropolitan University have used machine learning to automate the identification of defects in sister chromatid cohesion.
New study deepens understanding of the regulation of circadian rhythms in the mammalian central clock
Circadian rhythms are inherent cycles of approximately 24 hours that regulate various biological processes, such as sleep and wakefulness.
Study Evaluates Neural Network Involvement in PTSD
Article title: “The brain landscape of the two-hit model of posttraumatic stress disorder.” Authors: Lisa M. James, Brian E. Engdahl, Peka Christova, Scott M. Lewis, and Apostolos P. Georgopoulos. From the authors: “The present study provides a novel contribution by…
Holding information in mind may mean storing it among synapses
Between the time you read the Wi-Fi password off the café’s menu board and the time you can get back to your laptop to enter it, you have to hold it in mind.
5th HK Tech Forum investigates quantum physics and complex systems
Leading academic and industry researchers in the rapidly developing fields of quantum computation, quantum physics, and related areas gathered at the HK Tech Forum on Quantum Physics and Complex Systems hosted by the Hong Kong Institute for Advanced Study at City University of Hong Kong (CityU) from 7 to 9 December.
Nanoengineers Develop a Predictive Database for Materials
Nanoengineers at the University of California San Diego’s Jacobs School of Engineering have developed an AI algorithm that predicts the structure and dynamic properties of any material—whether existing or new—almost instantaneously. Known as M3GNet, the algorithm was used to develop matterverse.ai, a database of more than 31 million yet-to-be-synthesized materials with properties predicted by machine learning algorithms. Matterverse.ai facilitates the discovery of new technological materials with exceptional properties.
How artificial intelligence can explain its decisions
Artificial intelligence (AI) can be trained to recognise whether a tissue image contains a tumour.
A new neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today’s compute platforms
An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms. The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server.
Machine Learning Reveals Hidden Components of X-Ray Pulses
Ultrafast pulses from X-ray lasers reveal how atoms move at femtosecond timescales, but measuring the properties of the pulses is challenging. A new approach trains neural networks to analyze the pulses. Starting from low-resolution measurements, the neural networks reveal finer details with each pulse, and they can analyze pulses millions of times faster than previous methods.
Capturing Cortical Connectivity Close-Up
The brain is made up of a complex series of networks—signals are constantly bouncing between those networks to allow us to experience the world and move through it effectively.
School of Physics Uses Moths and Origami Structures for Innovative Defense Research
Georgia Tech has received two Department of Defense (DoD) 2022 Multidisciplinary University Research Initiative (MURI) awards totaling almost $14 million. The highly competitive government program supports interdisciplinary teams of investigators developing innovative solutions in DoD interest areas. This year, the DoD awarded $195 million to 28 research teams across the country.
Size Matters for Bee ‘Superorganism’ Colonies
Researchers studying honey bees have found that colony size matters in determining how members make decisions in the face of dynamic survival conditions. Large, established colonies are less likely to take chances while smaller colonies are much more willing to take risks.
Researchers study recurrent neural network structure in the brain
Two University of Wyoming researchers decided to pick each other’s brain, so to speak.
Contrary to expectations, study finds primate neurons have fewer synapses than mouse neurons in visual cortex
A UChicago and Argonne National Laboratory study analyzing over 15,000 individual synapses in macaques and mice found that primate neurons have two to five times fewer synapses in the visual cortex than mouse neurons – and the difference may be due to the metabolic cost of maintaining synapses.
ORNL licenses revolutionary AI system to General Motors for automotive use
The Department of Energy’s Oak Ridge National Laboratory has licensed its award-winning artificial intelligence software system, the Multinode Evolutionary Neural Networks for Deep Learning, to General Motors for use in vehicle technology and design.
Evolution Sets the Stage for More Powerful Spiking Neural Networks
Spiking neural networks (SNNs) closely replicate the structure of the human brain, making them an important step on the road to developing artificial intelligence. Researchers recently advanced a key technique for training SNNs using an evolutionary approach. This approach involves recognizing and making use of the different strengths of individual elements of the SNN.
Researchers Hunt for New Particles in Particle Collider Data
Berkeley Lab researchers participated in a study that used machine learning to scan for new particles in three years of particle-collision data from CERN’s ATLAS detector.
Developing Smarter, Faster Machine Intelligence with Light
Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the…
New Machine Learning-Based Model More Accurately Predicts Liver Transplant Waitlist Mortality
A new study presented this week at The Liver Meeting Digital Experience® – held by the American Association for the Study of Liver Diseases – found that neural networks, a type of machine learning algorithm, predict waitlist mortality in liver transplantation more accurately than the older Model for End-Stage Liver Disease (MELD) score. This advancement could lead to the development of more equitable organ allocation systems and even reduce liver transplant waitlist death rates for patients.
Recipe for Neuromorphic Processing Systems?
The field of “brain-mimicking” neuromorphic electronics shows great potential for basic research and commercial applications, and researchers in Germany and Switzerland recently explored the possibility of reproducing the physics of real neural circuits by using the physics of silicon. In Applied Physics Letters, they present their work to understand neural processing systems, as well as a recipe to reproduce these computing principles in mixed signal analog/digital electronics and novel materials.
Applying Deep Learning to Automate UAV‐Based Detection of Scatterable Landmines
Recent advances in unmanned‐aerial‐vehicle‐ (UAV‐) based remote sensing utilizing lightweight multispectral and thermal infrared sensors allow for rapid wide‐area landmine contamination detection and mapping surveys. We present results of a study focused on developing and testing an automated technique of…
ORNL researchers develop ‘multitasking’ AI tool to extract cancer data in record time
To better leverage cancer data for research, scientists at ORNL are developing an artificial intelligence (AI)-based natural language processing tool to improve information extraction from textual pathology reports. In a first for cancer pathology reports, the team developed a multitask convolutional neural network (CNN)—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
Lasers Learn to Accurately Spot Space Junk
Scientists have developed space junk identification systems, but it has proven tricky to pinpoint the swift, small specks of space litter. A unique set of algorithms for laser ranging telescopes, described in the Journal of Laser Applications, by AIP Publishing, has significantly improved the success rate of space debris detection.
Researchers from TU Delft discover real Van Gogh using artificial intelligence
What did Vincent van Gogh actually paint and draw? Paintings and drawings fade, so researchers from TU Delft are using deep learning to digitally reconstruct works of art and discover what they really looked like. ‘What we see today is not the painting or drawing as it originally was,’ says researcher Jan van der Lubbe.