A team of NIH microscopists and computer scientists used a type of artificial intelligence called a neural network to obtain clearer pictures of cells at work even with extremely low, cell-friendly light levels.
Neural network training could one day require less computing power and hardware, thanks to a new nanodevice that can run neural network computations using 100 to 1000 times less energy and area than existing CMOS-based hardware.
Berkeley Lab researchers participated in a study that used machine learning to scan for new particles in three years of particle-collision data from CERN’s ATLAS detector.
A University of Washington-led team has come up with a system that could accelerate AI computation and reduce its energy consumption: an optical computing core prototype that uses phase-change material.
Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech venture startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing large amounts of information, on the…
PNNL’s new Smart Power Grid Simulator, or Smart-PGsim, combines high-performance computing and artificial intelligence to optimize power grid simulations without sacrificing accuracy.
PNNL researchers and university collaborators have developed a system to ferret out questionable use of high-performance computing (HPC) systems.
Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications. A paper in the journal Applied Physics Reviews proposes a new approach to performing the computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs matrix multiplications in parallel, improving the speed and efficiency of current deep learning paradigms.
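The operation such a tensor core parallelizes is ordinary matrix multiplication, the workhorse of every dense neural-network layer. A minimal plain-Python sketch (illustrative only; the function name and toy values here are assumptions, not the paper's API) shows why the work parallelizes so well: each output element is an independent dot product.

```python
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n).

    Every entry of the result is an independent dot product,
    which is what lets a tensor core (photonic or electronic)
    compute them all in parallel.
    """
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

# Toy dense layer: outputs = weights @ inputs
weights = [[1, 0], [0, 2]]
inputs = [[3], [4]]
print(matmul(weights, inputs))  # [[3], [8]]
```

In the photonic version, each multiply-accumulate would be carried out by light passing through the optical elements rather than by the loop shown here; the arithmetic is the same.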
When it fires, a neuron consumes significantly more energy than an equivalent computer operation. And yet, a network of coupled neurons can continuously learn, sense and perform complex tasks at energy levels that are currently unattainable for even state-of-the-art processors. What does a neuron do to save energy that a contemporary computer processing unit doesn't? Computer modelling by researchers at Washington University in St. Louis…
Ever wonder why your smartphone can do facial recognition, but your smartwatch can't? UD's Chengmo Yang is researching ways to support neural networks in low-power embedded systems by using emerging memory devices that can retain information even when powered off, while also minimizing errors.