Q&A With Vascular Surgeon Elizabeth Chou, MD

After 11 years spent in medical school, residency and fellowships, Elizabeth Chou, MD, a vascular surgeon who recently joined the Smidt Heart Institute at Cedars-Sinai, has earned her dream career. And she has no plans to stop there. She’s on a path toward ensuring women in vascular surgery are represented, both as incoming physicians and as patients.

Mount Sinai Researchers Use Artificial Intelligence to Uncover the Cellular Origins of Alzheimer’s Disease and Other Cognitive Disorders

Deep learning models represent “an entirely new paradigm for studying dementia”

Researchers combine data science and machine learning techniques to improve traditional MRI image reconstruction

University of Minnesota Twin Cities researchers have found a way to improve the performance of traditional Magnetic Resonance Imaging (MRI) reconstruction techniques, allowing for faster MRIs without relying on the use of newer deep learning methods.

Rensselaer Researchers to Address Big Data Challenges

Dr. Yangyang Xu, assistant professor of mathematical sciences at Rensselaer Polytechnic Institute, has received a $250,000 grant from the National Science Foundation (NSF) to research challenges associated with distributed big data in machine learning. Machine learning algorithms allow computers to make decisions, predictions, and recommendations on the basis of input training data without being explicitly told what information to look for in the data.

DeepSqueak Tool Identifies Marine Mammal Calls #ASA182

As the size and number of acoustic datasets increase, accurately and quickly matching bioacoustic signals to their corresponding sources becomes more challenging and important. This is especially difficult in noisy, natural acoustic environments. At the 182nd ASA Meeting, Elizabeth Ferguson, from Ocean Science Analytics, will describe how DeepSqueak, a deep learning tool, can classify underwater acoustic signals. It uses deep neural network image recognition and classification methods to determine the important features within spectrograms, and then matches those features to specific sources.
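
The following minimal sketch (in Python, using PyTorch and SciPy) illustrates the general pattern described above, not DeepSqueak itself: an audio clip is converted to a spectrogram, which is then classified by a small convolutional network. The call types, network sizes, and random test clip are illustrative assumptions.

```python
# A minimal sketch of spectrogram-based call classification (not DeepSqueak's
# actual code): convert an audio clip to a spectrogram, then classify it with
# a small convolutional network. Labels and sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

CALL_TYPES = ["humpback", "dolphin", "noise"]   # hypothetical source labels

class CallClassifier(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to one value per channel
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):                                # x: (batch, 1, freq, time)
        return self.classify(self.features(x).flatten(1))

def audio_to_spectrogram(signal: np.ndarray, fs: float) -> torch.Tensor:
    """Turn a 1-D audio signal into a log-scaled spectrogram 'image'."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256)
    return torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]  # (1, 1, F, T)

if __name__ == "__main__":
    fs = 48_000
    clip = np.random.randn(fs)                           # stand-in for a 1-second recording
    model = CallClassifier(len(CALL_TYPES))
    logits = model(audio_to_spectrogram(clip, fs))
    print("predicted call type:", CALL_TYPES[int(logits.argmax())])
```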

AF2Complex: Researchers Leverage Deep Learning to Predict Physical Interactions of Protein Complexes

Proteins are the molecular machinery that makes life possible, and researchers have long been interested in a key trait of protein function: their three-dimensional structure. A new study by Georgia Tech and Oak Ridge National Laboratory details a computational tool able to predict the structure of protein complexes – and lends new insights into the biomolecular mechanisms of their function.

Novel Tag Provides First Detailed Look into Goliath Grouper Behavior

A study is the first to reveal the detailed behavior of massive goliath groupers. Until now, no studies have documented their fine-scale behavior; what is known about them has been learned from divers, underwater video footage, and observing them in captivity. Using a multi-sensor tag with a three-axis accelerometer, gyroscope and magnetometer as well as a temperature, pressure and light sensor, a video camera and a hydrophone, researchers show how this species navigates through complex artificial reef environments, how it maintains itself in high-current areas, and how much time it spends in different cracks and crevices – none of which would be possible without the tag.

Novel Model Predicts COVID-19 Outbreak Two Weeks Ahead of Time

People’s social behavior, reflected in their mobility data, is providing scientists with a way to forecast the spread of COVID-19 nationwide at the county level. Researchers have developed the first data-driven deep learning model with the potential to predict an outbreak in COVID-19 cases two weeks in advance. Feeding the mobility data to epidemiological forecasting models helps estimate COVID-19 growth and evaluate the effects of government policies, such as mask mandates, on the spread of COVID-19.
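
As a rough illustration of the setup described above (not the authors' deep learning model), the sketch below combines a county's recent case counts and mobility values into a single feature vector and fits a plain least-squares forecaster for the case count 14 days ahead; all data are synthetic.

```python
# A minimal sketch of mobility-informed case forecasting, not the published
# model: 14 days of cases and mobility form one feature vector, and a linear
# forecaster predicts the case count 14 days later. Data are synthetic.
import numpy as np

LAG, HORIZON = 14, 14          # days of history used, days ahead to predict

def make_dataset(cases: np.ndarray, mobility: np.ndarray):
    """Build (features, target) pairs: 14 days of history -> cases 14 days later."""
    X, y = [], []
    for t in range(LAG, len(cases) - HORIZON):
        X.append(np.concatenate([cases[t - LAG:t], mobility[t - LAG:t]]))
        y.append(cases[t + HORIZON])
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    days = np.arange(200)
    mobility = 1.0 - 0.4 * (days > 60)                    # synthetic drop after a policy change
    cases = 50 * np.exp(0.02 * days * mobility) + rng.normal(0, 5, len(days))

    X, y = make_dataset(cases, mobility)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)          # fit the linear forecaster
    latest = np.concatenate([cases[-LAG:], mobility[-LAG:]])
    print(f"forecast for 14 days ahead: {latest @ coef:.0f} cases")
```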

NSF makes $20 Million investment in Optimization-focused AI Research Institute led by UC San Diego

The National Science Foundation (NSF) announced today an investment of $220 million to establish 11 artificial intelligence (AI) institutes, each receiving $20 million over five years. One of these, the Institute for Learning-enabled Optimization at Scale (TILOS), will be led by the University of California San Diego.

Now in 3D: Deep learning techniques help visualize X-ray data in three dimensions

A team of Argonne scientists has leveraged artificial intelligence to train computers to keep up with the massive amounts of X-ray data taken at the Advanced Photon Source.

Helping companies use high-performance computing to improve U.S. manufacturing

Argonne is helping U.S. companies solve pressing manufacturing challenges through an innovative program that provides access to Argonne’s world-class computing resources and technical expertise.

ORNL’s superb materials expertise, data and AI tools propel progress

At the Department of Energy’s Oak Ridge National Laboratory, scientists use artificial intelligence, or AI, to accelerate the discovery and development of materials for energy and information technologies.

Virtual Argonne workshop provides guidance on using AI and supercomputing tools for science

The Argonne Leadership Computing Facility continues its efforts to build a community of scientists who can employ AI and data-intensive analysis at a scale that requires DOE supercomputers.

The AI-driven initiative that’s hastening the discovery of drugs to treat COVID-19

Ten organizations have created a pipeline of artificial intelligence and simulation tools to narrow the search for drug candidates that can inhibit SARS-CoV-2.

Automatic deep-learning, artificial-intelligence clinical tool that can measure the volume of cerebral ventricles on MRIs in children

Researchers from multiple institutions in North America have developed a fully automated, deep-learning (DL), artificial-intelligence clinical tool that can measure the volume of cerebral ventricles on magnetic resonance images (MRIs) in children within about 25 minutes.
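
The sketch below illustrates only the final step such a tool performs, not the tool itself: once a segmentation model has labeled the ventricle voxels, the volume follows from the voxel count and the scan's voxel spacing. The mask and spacing are made-up placeholders.

```python
# A minimal sketch of volume measurement from a segmentation mask, not the
# clinical tool: the volume is the voxel count scaled by the voxel size.
# The mask and voxel spacing below are synthetic stand-ins.
import numpy as np

def ventricle_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of the segmented region in millilitres (1 mL = 1000 mm^3)."""
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_volume_mm3 / 1000.0

if __name__ == "__main__":
    mask = np.zeros((256, 256, 180), dtype=bool)   # stand-in for a model's output
    mask[100:140, 110:150, 60:120] = True          # fake "ventricle" region
    print(f"{ventricle_volume_ml(mask, (1.0, 1.0, 1.0)):.1f} mL")
```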

Building a better traffic forecasting model

Researchers from Argonne have developed a new way to accurately forecast traffic and demonstrated that it works using the California highway system, the busiest in the United States, as their model.

Creating the software that will unlock the power of exascale

Researchers nationwide are building the software and applications that will run on some of the world’s fastest supercomputers. Among them are members of DOE’s Exascale Computing Project, who recently published a paper highlighting their progress so far.

New Artificial Intelligence Platform Uses Deep Learning to Diagnose Dystonia with High Accuracy in Less Than One Second

Researchers at Mass Eye and Ear have developed a unique diagnostic tool called DystoniaNet that uses artificial intelligence to detect dystonia from MRI scans in 0.36 seconds. DystoniaNet is the first technology of its kind to provide an objective diagnosis of the disorder. In a new study of 612 brain MRI scans, the platform diagnosed dystonia with 98.8 percent accuracy.

Algorithm Created By “Deep Learning” Identifies Potential Therapeutic Targets Throughout Genome

A team of researchers has developed a machine learning algorithm that helps predict sites of DNA methylation – a process that can change the activity of DNA without changing its overall structure – and could identify disease-causing mechanisms that would otherwise be missed by conventional screening methods.
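
A minimal sketch of the general sequence-based prediction setup, not the authors' algorithm: a DNA window around a candidate site is one-hot encoded and scored by a small convolutional network that outputs a methylation probability. The window length and architecture are assumptions.

```python
# A minimal sketch of sequence-based methylation prediction (illustrative
# only): one-hot encode a DNA window and score it with a small untrained
# convolutional network that outputs a methylation probability.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (1, 4, length) tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).T.float()[None]

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),           # probability the site is methylated
)

if __name__ == "__main__":
    window = "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT"  # stand-in 40-bp window
    p = model(one_hot(window))
    print(f"predicted methylation probability: {p.item():.2f}")
```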

Deep learning algorithm identifies tumor subtypes based on routine histological images

Researchers at the University of Chicago Medicine Comprehensive Cancer Center, working with colleagues in Europe, created a deep learning algorithm that can infer molecular alterations directly from routine histology images across multiple common tumor types. The findings were published July 27 in Nature Cancer.

National Science Foundation Awards $5 Million to Develop Innovative AI Resource

The NSF has awarded the San Diego Supercomputer Center (SDSC) at UC San Diego a $5 million grant to develop a high-performance resource for conducting artificial intelligence (AI) research across a wide swath of science and engineering domains.

Calibrated approach to AI and deep learning models could more reliably diagnose and treat disease

In a recent preprint (available through Cornell University’s open access website arXiv), a team led by a Lawrence Livermore National Laboratory computer scientist proposes a novel deep learning approach aimed at improving the reliability of classifier models designed for predicting disease types from diagnostic images, with an additional goal of enabling interpretability by a medical expert without sacrificing accuracy. The approach uses a concept called confidence calibration, which systematically adjusts the model’s predictions to match the human expert’s expectations in the real world.
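
For readers unfamiliar with the term, the toy example below shows the standard baseline form of confidence calibration, temperature scaling; the LLNL team's approach is its own method, and this sketch only illustrates what adjusting a model's confidence means in practice. The logits and labels are synthetic.

```python
# A toy illustration of confidence calibration via temperature scaling, the
# standard baseline technique (not the LLNL approach). Synthetic data only.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, temperature):
    """Negative log-likelihood of the true labels at a given temperature."""
    probs = softmax(logits / temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Pick the temperature that best calibrates the model on held-out data."""
    temps = np.linspace(0.5, 5.0, 200)
    return temps[np.argmin([nll(logits, labels, t) for t in temps])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=500)
    logits = rng.normal(0, 1, size=(500, 3))
    logits[np.arange(500), labels] += 1.5          # overconfident "correct" logits
    t = fit_temperature(logits, labels)
    print(f"fitted temperature: {t:.2f}")
    print("mean max confidence before:", softmax(logits).max(axis=1).mean().round(2))
    print("mean max confidence after: ", softmax(logits / t).max(axis=1).mean().round(2))
```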

Researchers use drones, machine learning to detect dangerous ‘butterfly’ landmines

With advanced machine learning, drones could be used to detect dangerous “butterfly” landmines in remote regions of post-conflict countries, according to research from Binghamton University, State University of New York.

U.S. Department of Energy’s INCITE program seeks proposals for 2021

The INCITE program is now seeking proposals for high-impact, computationally intensive research projects that require the power and scale of DOE’s leadership-class supercomputers.

Capturing 3D microstructures in real time

Argonne researchers have invented a machine-learning based algorithm for quantitatively characterizing material microstructure in three dimensions and in real time. This algorithm applies to most structural materials of interest to industry.

Applying Deep Learning to Automate UAV-Based Detection of Scatterable Landmines

Recent advances in unmanned aerial vehicle (UAV)-based remote sensing utilizing lightweight multispectral and thermal infrared sensors allow for rapid wide-area landmine contamination detection and mapping surveys. We present results of a study focused on developing and testing an automated technique of…
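
As a rough sketch of one common way such surveys are automated (not the authors' pipeline), the code below slides a window over a thermal image and flags patches that a classifier scores above a threshold; the classifier here is an untrained placeholder, and the patch size and threshold are assumptions.

```python
# A minimal sketch of patch-based detection over a UAV thermal frame
# (illustrative only): score each sliding-window patch with a classifier and
# keep the locations above a threshold. The classifier is an untrained stand-in.
import numpy as np
import torch
import torch.nn as nn

PATCH, STRIDE, THRESHOLD = 32, 16, 0.9

classifier = nn.Sequential(                     # placeholder for a trained patch CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)

def detect(thermal: np.ndarray):
    """Return (row, col) corners of patches the classifier flags as suspicious."""
    hits = []
    for r in range(0, thermal.shape[0] - PATCH + 1, STRIDE):
        for c in range(0, thermal.shape[1] - PATCH + 1, STRIDE):
            patch = torch.tensor(thermal[r:r + PATCH, c:c + PATCH],
                                 dtype=torch.float32)[None, None]
            if classifier(patch).item() > THRESHOLD:
                hits.append((r, c))
    return hits

if __name__ == "__main__":
    fake_thermal = np.random.rand(256, 256)      # stand-in for a UAV thermal frame
    print(f"{len(detect(fake_thermal))} candidate locations flagged")
```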

Computer Scientist Develops the Art of Artificial Intelligence

Dr. Kang Zhang uses artificial intelligence (AI) to teach computers to create illustrations in the style of the famous masters: Jackson Pollock and his paint splatters or Joan Miró and his curved shapes and sharp lines. The process involves feeding computers examples of colors, abstract shapes and layouts so they can learn to produce their own versions of masterpieces.

ORNL researchers develop ‘multitasking’ AI tool to extract cancer data in record time

To better leverage cancer data for research, scientists at ORNL are developing an artificial intelligence (AI)-based natural language processing tool to improve information extraction from textual pathology reports. In a first for cancer pathology reports, the team developed a multitask convolutional neural network (CNN)—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
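
The sketch below shows the general shape of a multitask text CNN in the spirit of that description, not ORNL's model: a tokenized report is embedded as a two-dimensional matrix, shared convolutions extract features, and separate heads predict several report attributes at once. Task names and sizes are illustrative.

```python
# A minimal sketch of a multitask text CNN (illustrative, not ORNL's model):
# shared convolutions over word embeddings feed two task-specific heads.
import torch
import torch.nn as nn

class MultitaskTextCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, n_sites=10, n_grades=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
        self.site_head = nn.Linear(128, n_sites)     # e.g. primary cancer site
        self.grade_head = nn.Linear(128, n_grades)   # e.g. histologic grade

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # max-pool over the sequence
        return self.site_head(x), self.grade_head(x)

if __name__ == "__main__":
    model = MultitaskTextCNN()
    fake_report = torch.randint(0, 5000, (1, 300))   # stand-in for a tokenized report
    site_logits, grade_logits = model(fake_report)
    print(site_logits.shape, grade_logits.shape)     # torch.Size([1, 10]) torch.Size([1, 4])
```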

LLNL computer scientists explore deep learning to improve efficiency of ride-hailing and autonomous electric vehicles

Computer scientists at Lawrence Livermore National Laboratory are preparing for the future of commuter traffic by applying deep reinforcement learning – the same kind of goal-driven algorithms that have defeated video game experts and world champions in the strategy game Go – to determine the most efficient strategy for charging and driving electric vehicles used for ride-sharing services.
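
The toy example below illustrates the underlying reinforcement learning idea with tabular Q-learning rather than the deep networks LLNL uses: a simulated vehicle repeatedly chooses between driving (earning a fare while draining the battery) and charging, and learns a charging policy from the rewards. The environment and numbers are entirely invented.

```python
# A toy Q-learning illustration of the charge-or-drive decision (not LLNL's
# system): the agent learns, from reward alone, when to recharge. All
# dynamics, rewards, and battery levels are made up for illustration.
import numpy as np

N_LEVELS = 11                      # battery state: 0 (empty) .. 10 (full)
ACTIONS = ["drive", "charge"]
rng = np.random.default_rng(0)

def step(level, action):
    """Return (next_level, reward) under the toy dynamics."""
    if action == 0:                                    # drive
        if level == 0:
            return 0, -10.0                            # stranded with an empty battery
        return level - 1, 3.0                          # one fare earned, one unit of charge used
    return min(level + 2, N_LEVELS - 1), -1.0          # charge: small cost, battery rises

q = np.zeros((N_LEVELS, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

level = N_LEVELS - 1
for _ in range(50_000):                                # Q-learning updates
    action = rng.integers(2) if rng.random() < epsilon else int(q[level].argmax())
    nxt, reward = step(level, action)
    q[level, action] += alpha * (reward + gamma * q[nxt].max() - q[level, action])
    level = nxt

policy = [ACTIONS[int(a)] for a in q.argmax(axis=1)]
print("learned policy by battery level:", dict(enumerate(policy)))
```

In a realistic setting the state (vehicle location, rider demand, electricity prices) is far too large for a lookup table, which is why deep networks stand in for the table in the work described above.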