The work, led by researchers at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington, is believed to be the first to show that robots loaded with an accepted and widely used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.
Valhalla, a Python-based performance modeling framework developed at Sandia National Laboratories, uses high-performance computing to build preliminary satellite designs based on mission requirements and then runs those designs through thousands of simulations.
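The framework itself is not shown here, but the design-space pattern the story describes — generate candidate designs from mission requirements, then stress each one with many randomized simulations — can be sketched in plain Python. Everything below (the Design fields, the toy power model, the reliability threshold) is a hypothetical illustration, not Valhalla's API.

```python
# Hypothetical sketch (not Valhalla's API): score candidate satellite designs
# by running many randomized mission simulations per design in parallel.
import random
from dataclasses import dataclass
from multiprocessing import Pool

@dataclass
class Design:
    panel_area_m2: float   # solar panel area (made-up parameter)
    battery_wh: float      # battery capacity (made-up parameter)

def simulate_mission(args):
    """One randomized mission run; returns True if the design survives."""
    design, seed = args
    rng = random.Random(seed)
    # Toy power model: generation varies with eclipse fraction, load is noisy.
    eclipse_frac = rng.uniform(0.3, 0.45)
    generated = design.panel_area_m2 * 300 * (1 - eclipse_frac)   # W, averaged
    load = rng.gauss(450, 40)                                     # W, averaged
    margin_wh = design.battery_wh + (generated - load) * 24       # 1-day budget
    return margin_wh > 0

def score(design, n_runs=1000):
    """Fraction of simulated missions the design survives."""
    with Pool() as pool:
        results = pool.map(simulate_mission, [(design, s) for s in range(n_runs)])
    return sum(results) / n_runs

if __name__ == "__main__":
    candidates = [Design(panel_area_m2=a, battery_wh=b)
                  for a in (1.5, 2.0, 2.5) for b in (500, 1000)]
    best = max(candidates, key=score)
    print("best candidate:", best, "reliability:", score(best))
```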
A new research centre that focuses on next-generation artificial intelligence (AI) technology will develop the high-calibre expertise Australia needs to compete in the coming machine learning-enabled global economy.
The scientific computing and networking leadership of the U.S. Department of Energy’s (DOE’s) national laboratories will be on display at SC21, the International Conference for High-Performance Computing, Networking, Storage and Analysis. The conference takes place Nov. 14-19 in St. Louis via a combination of on-site and online resources.
Scientists at Sandia National Laboratories are creating a concept for a new kind of computer for solving complex probability problems that involve random chance.
Analog photonic solutions offer unique opportunities to address complex computational tasks with unprecedented performance in terms of energy dissipation and speeds, overcoming current limitations of modern computing architectures based on electron flows and digital approaches. In a new study published…
Scientists at PPPL have transferred a technique from one realm of plasma physics to another to enable the more efficient design of powerful magnets for doughnut-shaped fusion facilities known as tokamaks.
The U.S. Department of Energy (DOE) announced $15.1 million for three collaborative research projects, at five universities, to advance the development of a flexible multi-tiered data and computational infrastructure to support a diverse collection of on-demand scientific data processing tasks and computationally intensive simulations.
Scientists at Berkeley Lab and UC Berkeley have created an ultrathin magnet that operates at room temperature. The ultrathin magnet could lead to new applications in computing and electronics – such as spintronic memory devices – and new tools for the study of quantum physics.
Scientists demonstrated a new way of observing atoms as they move in a tiny quantum electronic switch as it operates. Along the way, they discovered a new material state that could pave the way for faster, more energy-efficient computing.
Computers play an integral role in nearly every discipline of research today, giving scientists the ability to discover new drugs, develop new materials, forecast the impacts of climate change, and solve some of today’s most challenging problems.
Using complementary computing calculations and neutron scattering techniques, researchers from the Department of Energy’s Oak Ridge and Lawrence Berkeley national laboratories and the University of California, Berkeley, discovered the existence of an elusive type of spin dynamics in a quantum mechanical system.
The Coalition for Academic Scientific Computation, or CASC — a network of more than 90 research computing-focused organizations from academic institutions, national labs, and research centers around the U.S. — hopes to continue to bring this critical computing work to the forefront through its activities. Newly elected leadership and recent changes within CASC are helping the nonprofit organization excel in this mission.
The prodigious amount of data produced at the Large Hadron Collider presents a major challenge for data analysis. Coffea, a Python package developed by Fermilab researchers, speeds up computation and helps scientists work more efficiently. Around a dozen international LHC research groups now use Coffea, which draws on big data techniques used outside physics.
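Coffea's speedup comes from the columnar, array-at-a-time style of analysis common in big data work. A minimal sketch of that style using the awkward-array library, which Coffea builds on, is below; the event content is invented for illustration and this is not Coffea's own interface.

```python
# Illustration of the columnar analysis style Coffea builds on
# (awkward-array shown here; the event content below is made up, not CMS data).
import awkward as ak

# Jagged array: each inner list holds the muon transverse momenta of one event.
muon_pt = ak.Array([
    [41.2, 23.7],        # event 0
    [],                  # event 1 (no muons)
    [65.0, 33.1, 12.4],  # event 2
])

# Whole-array operations replace an explicit loop over events.
high_pt     = muon_pt[muon_pt > 30.0]           # per-event selection
n_high_pt   = ak.num(high_pt)                   # multiplicity per event
dimuon_evts = muon_pt[ak.num(muon_pt) >= 2]     # events with at least two muons
leading_pt  = ak.max(muon_pt, axis=1)           # leading muon pt (None if empty)

print(n_high_pt.tolist())           # [1, 0, 2]
print(ak.num(dimuon_evts).tolist())
print(leading_pt.tolist())
```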
The U.S. Air Force and the Department of Energy’s Oak Ridge National Laboratory launched a new high-performance weather forecasting computer system that will provide a platform for some of the most advanced weather modeling in the world.
A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.
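The general surrogate-modeling idea — run the expensive calculation a limited number of times, train a model on those results, then query the model instead — can be sketched with scikit-learn. The descriptors, network size, and stand-in function below are assumptions for illustration, not Sandia's actual workflow.

```python
# Minimal surrogate-model sketch (illustrative only; not Sandia's workflow):
# train a regressor on a few expensive calculations, then use it as a fast stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def expensive_calculation(x):
    """Stand-in for a costly materials-science calculation."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

X = rng.uniform(-1, 1, size=(500, 3))   # descriptors of 500 configurations
y = expensive_calculation(X)            # "ground truth" from the slow code

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# The trained model now predicts in microseconds what the full calculation
# would take far longer to produce, at some cost in accuracy.
print("R^2 on held-out configurations:", surrogate.score(X_test, y_test))
```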
Applying his passions for science and art, Nikhil Tiwale—a postdoc at Brookhaven Lab’s Center for Functional Nanomaterials—is fabricating new microelectronics components.
Four Rutgers professors have been named fellows of the American Association for the Advancement of Science (AAAS), an honor given to AAAS members by their peers. They join 485 other new AAAS fellows as a result of their scientifically or socially distinguished efforts to advance science or its applications. A virtual induction ceremony is scheduled for Feb. 13, 2021.
PNNL’s new Smart Power Grid Simulator, or Smart-PGsim, combines high-performance computing and artificial intelligence to optimize power grid simulations without sacrificing accuracy.
Globus, a leading research data management service, reached a huge milestone by breaking the exabyte barrier. While it took over 2,000 days for the service to transfer the first 200 petabytes (PB) of data, the last 200PB were moved in just 247 days. This rapidly accelerating growth is reflected by the more than 150,000 registered users who have now transferred over 120 billion files using Globus.
Three Fermilab scientists have been selected 2020 fellows of the American Physical Society, a distinction awarded each year to no more than one-half of 1 percent of current APS members by their peers.
Fermilab scientists have implemented a cloud-based machine learning framework to handle data from the CMS experiment at the Large Hadron Collider. Now they can begin to use graph neural networks to boost their pattern recognition abilities in the search for new particles.
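As a rough illustration of how a graph neural network can be applied to track finding, the sketch below (PyTorch Geometric) treats detector hits as graph nodes and candidate hit-to-hit connections as edges, then scores each edge. The architecture, features, and data are invented for the example and are not the CMS pipeline.

```python
# Toy graph neural network sketch (PyTorch Geometric; not the CMS framework):
# detector hits become graph nodes, candidate hit-to-hit connections become edges,
# and the network learns which edges belong to real particle tracks.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class EdgeClassifier(torch.nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.edge_mlp = torch.nn.Linear(2 * hidden, 1)

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = torch.relu(self.conv2(h, data.edge_index))
        src, dst = data.edge_index                      # endpoints of each edge
        edge_feat = torch.cat([h[src], h[dst]], dim=1)  # pair the two node embeddings
        return torch.sigmoid(self.edge_mlp(edge_feat)).squeeze(-1)

# Five fake hits (x, y, z) and four candidate connections between them.
hits = torch.rand(5, 3)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
scores = EdgeClassifier()(Data(x=hits, edge_index=edges))
print(scores)  # probability that each candidate edge is part of a track
```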
An international group of researchers has developed a technique that forecasts how tokamaks might respond to unwanted magnetic errors. These forecasts could help engineers design fusion facilities that create a virtually inexhaustible supply of safe and clean fusion energy to generate electricity.
A new design facility, located at the University of Adelaide, will offer organisations and individuals a space in which to push the boundaries of technical, industrial and business innovation.
Today, the White House Office of Science and Technology Policy, the National Science Foundation (NSF), and the U.S. Department of Energy (DOE) announced over $1 billion in awards for the establishment of 12 new artificial intelligence (AI) and quantum information science (QIS) research institutes nationwide.
Researchers led by PPPL have upgraded a key computer code for calculating forces acting on magnetically confined plasma in fusion energy experiments. The upgrade will help scientists further improve the design of breakfast-cruller-shaped facilities known as stellarators.
The Department of Energy is supporting the development of both conventional exascale supercomputers and quantum computers. Each provides benefits that could transform scientific research.
Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications. A paper in the journal Applied Physics Reviews proposes a new approach to perform computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs multiplications of matrices in parallel, improving speed and efficiency of current deep learning paradigms.
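The operation such a photonic tensor core accelerates is ordinary matrix multiplication, which dominates the cost of a dense neural-network layer. A NumPy illustration of that reduction follows; the optics themselves are not modeled here.

```python
# Why matrix multiplication is the bottleneck worth accelerating (NumPy illustration;
# the photonic hardware itself is not modeled here).
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 128, 784, 256

x = rng.standard_normal((batch, n_in))   # a batch of inputs
W = rng.standard_normal((n_in, n_out))   # layer weights
b = rng.standard_normal(n_out)           # biases

# A fully connected layer is one matrix-matrix product plus elementwise work.
y = np.maximum(x @ W + b, 0.0)           # ReLU(xW + b)

# The x @ W product alone costs roughly 2 * batch * n_in * n_out operations;
# performing those multiply-accumulates in parallel is the tensor core's job.
print(y.shape, 2 * batch * n_in * n_out, "multiply-accumulate ops")
```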
To meet the needs of tomorrow’s supercomputers, the National Nuclear Security Administration’s (NNSA’s) Lawrence Livermore National Laboratory (LLNL) has broken ground on its Exascale Computing Facility Modernization (ECFM) project, which will substantially upgrade the mechanical and electrical capabilities of the Livermore Computing Center.
During an internship at Brookhaven National Laboratory, Juliette Stecenko is using modern supercomputers and quantum computing platforms to perform astronomy simulations that may help us better understand where we came from.
The Deep Underground Neutrino Experiment will collect massive amounts of data from star-born and terrestrial neutrinos. A worldwide network of computers will provide the infrastructure to help analyze it. Using artificial intelligence and machine learning, scientists write software to mine the data.
Fermilab, Brookhaven, and Open Science Grid dedicate computational power to COVID-19 research.
Scientists and engineers at Fermilab and Brookhaven are uniting with other organizations in the Open Science Grid to help fight COVID-19 by dedicating considerable computational power to researchers studying how to combat the disease.
Timothy M. VanReken is a program director for the Established Program to Stimulate Competitive Research (EPSCoR), part of the Office of Integrative Activities at the National Science Foundation.
This article is part of a continuing profile series on the directors of the Department of Energy (DOE) Office of Science User Facilities. Michael E. Papka is the director of the Argonne Leadership Computing Facility.
The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics. The pressing question for many researchers is how to handle this technological revolution. Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.
The field of “brain-mimicking” neuromorphic electronics shows great potential for basic research and commercial applications, and researchers in Germany and Switzerland recently explored the possibility of reproducing the physics of real neural circuits by using the physics of silicon. In Applied Physics Letters, they present their work to understand neural processing systems, as well as a recipe to reproduce these computing principles in mixed signal analog/digital electronics and novel materials.
Lawrence Livermore National Laboratory (LLNL), Hewlett Packard Enterprise (HPE) and Advanced Micro Devices, Inc. (AMD) today announced the selection of AMD as the node supplier for El Capitan, projected to be the world’s most powerful supercomputer when it is fully deployed in 2023.
To better leverage cancer data for research, scientists at ORNL are developing an artificial intelligence (AI)-based natural language processing tool to improve information extraction from textual pathology reports. In a first for cancer pathology reports, the team developed a multitask convolutional neural network (CNN)—a deep learning model that learns to perform tasks, such as identifying key words in a body of text, by processing language as a two-dimensional numerical dataset.
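A schematic of what a multitask text CNN looks like in code is shown below (PyTorch). The vocabulary size, filter counts, and the two task heads are illustrative assumptions, not the ORNL model or its label set.

```python
# Schematic multitask text CNN (PyTorch; architecture and label names are
# illustrative assumptions, not the ORNL model).
import torch
import torch.nn as nn

class MultitaskTextCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, n_filters=128,
                 n_site_classes=25, n_histology_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Treat the embedded report as a 2-D numerical array and convolve over it.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=5)
        self.site_head = nn.Linear(n_filters, n_site_classes)            # task 1
        self.histology_head = nn.Linear(n_filters, n_histology_classes)  # task 2

    def forward(self, token_ids):
        h = self.embed(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(h))
        h = h.max(dim=2).values                     # max-pool over the report
        return self.site_head(h), self.histology_head(h)

reports = torch.randint(0, 5000, (8, 300))  # 8 tokenized reports, 300 tokens each
site_logits, histology_logits = MultitaskTextCNN()(reports)
print(site_logits.shape, histology_logits.shape)
```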
Using advanced computational methods to find working designs, researchers created six protein pairs in cells.
Using supercomputer simulations and a large dataset of materials, scientists found a connection between distortions in the material’s atomic structure and the amount of energy required to separate a proton from the material.
The Big Questions series features perspectives from the five recipients of the Department of Energy Office of Science’s 2019 Distinguished Scientists Fellows Award describing their research and what they plan to do with the award. Ian Foster is the director of Argonne National Laboratory’s Data Science and Learning Division.
Since the invention of the transistor in 1947, computing development has seen a consistent doubling of the number of transistors that can fit on a chip. But that trend, Moore's Law, may reach its limit as components approach submolecular sizes and encounter problems such as thermal noise, making further scaling impossible. In this week's Applied Physics Reviews, researchers present an examination of the computing landscape, focusing on functions needed to advance brain-inspired neuromorphic computing.
Computer storage devices often use magnetic materials printed on very thin films. In this study, researchers rotated cobalt-iron alloy thin films relative to an applied magnetic field. Unexpectedly, depending on the rotation angle, a sizeable change – up to 400% – was seen in how well the material holds on to energy.
Northern Arizona University assistant professor Fatemeh Afghah is one of 40 recipients of the grant, given to foster creative basic research in science and engineering, enhance career development and provide opportunities for engineers to address military challenges in science and engineering.
By blasting a frustrated mixture of materials with quick pulses of laser light, researchers transformed a superlattice into a supercrystal, a rare, repeating, three-dimensional structure much larger than an ordinary crystal. Using machine learning techniques, they studied the underlying structure of this sample at the nanoscale level before and after applying the laser pulse treatment.
A team of researchers from the National University of Singapore (NUS) has put Singapore on the global map of Artificial Intelligence (AI) and big data analytics. Their open-source project, called Apache SINGA, "graduated" from the Apache Incubator on 16 October 2019 and is now Southeast Asia's first Top-Level Project (TLP) under the Apache Software Foundation, the world's largest open-source software community.
The University of Utah’s School of Computing, which is under the College of Engineering, has developed a new bachelor of science degree in data science that addresses all aspects of compiling, organizing and analyzing data. The university is one of only a handful in America with an undergraduate degree in the discipline.
Scientists from DOE’s Pacific Northwest National Laboratory, DOE’s Sandia National Laboratories, and the Georgia Institute of Technology will collaborate on solutions to some of the most challenging problems in AI today, thanks to $5.5 million in funding from DOE.
Since that first computer more than 30 years ago, ESnet has expanded to connect more than 40 major research institutions at speeds 15,000 times faster than a home network. From acting as an early adopter of protocols that now run the internet to making today’s scientific discoveries possible, ESnet is the big player in the internet you’ve probably never heard of.