Breaking Barriers: Study Uses AI to Interpret American Sign Language in Real-time

A first-of-its-kind study uses computer vision to recognize American Sign Language (ASL) alphabet gestures. Researchers developed a custom dataset of 29,820 static images of ASL hand gestures, each annotated with 21 key landmarks on the hand that provide detailed spatial information about its structure and position. Their method combines MediaPipe with a YOLOv8 deep learning model they trained, with hyperparameters fine-tuned for the best accuracy, an approach that had not been explored in previous research.
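To illustrate the landmark-based representation described above, here is a minimal sketch of how 21 (x, y) hand landmarks might be turned into a translation- and scale-invariant feature vector before classification. The function name and the example coordinates are hypothetical; the study's actual pipeline (MediaPipe landmark extraction feeding a YOLOv8 model) may preprocess differently.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Convert 21 (x, y) hand landmarks into a translation- and
    scale-invariant feature vector (a common preprocessing idea;
    not necessarily the study's exact method)."""
    pts = np.asarray(landmarks, dtype=float)  # shape (21, 2)
    pts = pts - pts[0]             # translate so landmark 0 (the wrist) is the origin
    scale = np.abs(pts).max()      # normalize by the largest coordinate magnitude
    if scale > 0:
        pts = pts / scale
    return pts.flatten()           # 42-dimensional feature vector

# Hypothetical landmark coordinates for illustration only
example = [(0.5 + 0.01 * i, 0.4 + 0.005 * i) for i in range(21)]
features = landmarks_to_features(example)
print(features.shape)  # (42,)
```

Normalizing this way means the classifier sees the hand's shape rather than where it sits in the frame or how large it appears, which is what makes static-image gesture recognition tractable.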

The best AI strategy to recognize multiple objects in one image

Image classification is one of AI’s most common tasks, where a system is required to recognize an object from a given image. Yet real life requires us to recognize not a single standalone object but rather multiple objects appearing together in a given image.

This reality raises the question: what is the best strategy to tackle multi-object classification? The common approach is to detect each object individually and then classify them. But new research challenges this customary approach to multi-object classification tasks.

In an article published today in Physica A, researchers from Bar-Ilan University in Israel show how classifying objects together, through a process known as Multi-Label Classification (MLC), can surpass the common detection-based classification.
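The core distinction can be sketched in a few lines. In Multi-Label Classification, each class gets its own independent sigmoid score, so any subset of labels can be active for one image at once; this is the general MLC idea rather than the paper's specific model, and the class names and logits below are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_label_predict(logits, threshold=0.5):
    """Multi-label classification: an independent sigmoid per class,
    so several labels can fire simultaneously (unlike softmax,
    which forces exactly one winner per image)."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return (probs >= threshold).astype(int)

# Hypothetical logits for classes [cat, dog, car, tree]
logits = np.array([2.1, -0.7, 1.3, -3.0])
print(multi_label_predict(logits))  # [1 0 1 0]
```

The contrast with detection-based classification is that a single shared model scores all labels jointly from the whole image, rather than first localizing each object and classifying the crops one by one.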

New algorithms help four-legged robots run in the wild

A new system of algorithms developed by UC San Diego engineers enables four-legged robots to walk and run on challenging terrain while avoiding both static and moving obstacles. The work brings researchers a step closer to building robots that can perform search and rescue missions or collect information in places that are too dangerous or difficult for humans.

AI Learns to Predict Human Behavior from Videos

A new Columbia Engineering study unveils a computer vision technique for giving machines a more intuitive sense of what will happen next by leveraging higher-level associations between people, animals, and objects. “Our algorithm is a step toward machines being able to make better predictions about human behavior, and thus better coordinate their actions with ours,” said Computer Science Professor Carl Vondrick. “Our results open a number of possibilities for human-robot collaboration, autonomous vehicles, and assistive technology.”

AI software enables real-time 3D printing quality assessment

Oak Ridge National Laboratory researchers have developed artificial intelligence software for powder bed 3D printers that assesses the quality of parts in real time, without the need for expensive characterization equipment.

Research reflects how AI sees through the looking glass

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards – findings with implications for training machine learning models and detecting faked images.