Since the 1970s, when early autonomous underwater vehicles (AUVs) were developed at the Massachusetts Institute of Technology, scientists there have tackled various barriers to the design of robots that can travel autonomously in the deep ocean. Part two of the four-part series examines how advanced mathematical techniques enable AUVs to survey large, complex, and cluttered seascapes.
Since the 1970s, when early autonomous underwater vehicles (AUVs) were developed at the Massachusetts Institute of Technology (MIT), Institute scientists have tackled various barriers to building robots that can travel autonomously in the deep ocean. This four-part series examines current MIT efforts to refine AUVs’ artificial intelligence, navigation, stability, and tenacity.
John McCarthy, a pioneer in artificial intelligence and creator of Lisp, the computer programming language long favored in that field, died this week at age 84. He coined the term "artificial intelligence" in a 1955 research proposal and went on to found influential laboratories at both Stanford University and the Massachusetts Institute of Technology.
First it was chess. Then it was Jeopardy. Now computers are at it again, but this time they are trying to automate the scientific process itself. An interdisciplinary team of scientists at Vanderbilt University, Cornell University, and CFD Research Corporation Inc. has taken a major step toward this goal by demonstrating that a computer can analyze raw experimental data from a biological system and derive the basic mathematical equations that describe the way the system operates.
Robots for everyone. That's James McLurkin's dream, and as the director of a Rice University robotics laboratory, he's creating an inexpensive and sophisticated robot called the "R-one" to make the dream a reality.
Since 2000, R&D Magazine has annually honored an individual whose research has greatly contributed to the advance of high technology, and whose achievements have helped change society. In 2011, for the first time, the editors recognize the teamwork involved in making possible the most advanced computer-supported intelligence system yet: Watson.
University of Texas at Austin researchers have discovered how to extract and use information in an individual image to determine how far objects are from the focus distance, a feat accomplished only by human and animal visual systems until now.
By combining two innovative algorithms developed at the Massachusetts Institute of Technology, researchers have built a new robotic motion-planning system that calculates much more efficient trajectories through free space. This will allow robots to execute tasks more efficiently and move more predictably.
University of Exeter engineers have pioneered new methods for detecting leaky pipes and identifying flood risks with technologies normally used for computer game graphics and artificial intelligence. These techniques could help to identify water supply and flooding problems more quickly than ever before, potentially saving people from the traumatic experience of flooding or not having water on tap.
Ground controllers powered up Robonaut on Monday for the first time since it was delivered to the International Space Station in February. The test involved sending power to all of Robonaut's systems. The robot was not commanded to move; that will happen next week. It is, however, tweeting now.
Computer scientists working in artificial intelligence have made an important advance that blends computer vision, machine learning, and automated planning into a new system that may improve everything from factory efficiency to airport operations to nursing care. It is based on watching football.
Today, IBM researchers unveiled the company’s first neurosynaptic computing chips, which are designed to emulate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry.
A robot in a University of Michigan lab can run like a human—a feat that represents the height of agility and efficiency for a two-legged machine. With a peak pace of 6.8 miles per hour, MABEL is believed to be the world's fastest bipedal robot with knees.
The editors of R&D Magazine have opened the nominations for the 2012 R&D 100 Awards competition, which will celebrate the 50th anniversary of the awards. If your organization introduced a new product this year, or is planning to, you can begin the entry process now.
Researchers at Caltech have taken a major step toward creating artificial intelligence—not in a robot or a silicon chip, but in a test tube. The researchers are the first to have made an artificial neural network out of DNA, creating a circuit of interacting molecules that can recall memories based on incomplete patterns, just as a brain can.
In a nod to Sandia National Laboratories' contributions to the field of robotics, the Smithsonian Institution has obtained nine of Sandia's historically significant robots for its permanent collection at the National Museum of American History.
Researchers at the Stanford School of Engineering have made a nanoelectronic synapse that might drive a new class of microchips that can learn, adapt, and make probability-based decisions in complex environments. The device emulates synaptic plasticity using phase-change material, and makes a leap past two-state transistors by demonstrating the ability to convey at least 100 values from each synapse.
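The article reports no implementation details, so the following is only an illustrative toy model of the capability it describes: unlike a two-state transistor, the device can reportedly hold at least 100 distinct values per synapse. This hypothetical Python sketch contrasts a binary weight with one quantized to 100 levels.

```python
def quantize(weight, levels):
    """Map a continuous weight in [0, 1] to one of `levels` discrete states."""
    step = 1.0 / (levels - 1)
    return round(weight / step) * step

continuous = 0.637
binary_synapse = quantize(continuous, 2)     # two-state transistor: 0 or 1
analog_synapse = quantize(continuous, 100)   # 100-level synapse, per the report

print(binary_synapse)  # 1.0 -- all gradation lost
print(analog_synapse)  # ~0.636 -- fine-grained value preserved
```

The point of the toy model: with many stable conductance levels per device, a single synapse can store a near-continuous weight, which is what makes learning and probability-based decisions plausible in hardware.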
MIT mechanical engineers are working to develop a new intelligent transportation system (ITS) algorithm that takes into account models of human driving behavior to warn drivers of potential collisions, and ultimately takes control of the vehicle to prevent a crash.
Millions of Americans have implantable medical devices, most of which have wireless connections so that doctors can monitor patients' vital signs or revise treatment programs. Recent research has shown that this leaves the devices vulnerable to attack. Now, researchers from MIT and UMass have developed a new system for preventing such attacks.
Digitally mimicking the photographic blur caused by moving objects is surprisingly hard, but new research offers ways to make it easier.
New algorithms make it easier to write rules for distributed-computing systems, such as networks of sensors, servers, or robots.
Imagine a robot able to retrieve a pile of laundry from the back of a cluttered closet, deliver it to a washing machine, start the cycle, and then zip off to the kitchen to start preparing dinner. This may have been a domestic dream a half-century ago, when the fields of robotics and artificial intelligence first captured public imagination. However, it quickly became clear that even "simple" human actions are extremely difficult to replicate in robots. Now, MIT computer scientists are tackling the problem with a hierarchical, progressive algorithm that has the potential to greatly reduce the computational cost associated with performing complex actions.
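The blurb above does not describe the MIT algorithm itself, but the general idea of hierarchical planning can be sketched in a few lines: plan abstract steps first, then refine each step independently, so the planner never searches the full combined action space at once. All task names and decompositions below are hypothetical.

```python
# Hypothetical task decompositions: abstract task -> ordered subtasks.
METHODS = {
    "do_laundry": ["fetch_pile", "load_machine", "start_cycle"],
    "fetch_pile": ["navigate_to_closet", "grasp_pile", "navigate_to_machine"],
}

def plan(task):
    """Recursively expand abstract tasks into a flat list of primitive actions."""
    if task not in METHODS:  # no decomposition: treat as a primitive action
        return [task]
    steps = []
    for subtask in METHODS[task]:
        steps.extend(plan(subtask))
    return steps

print(plan("do_laundry"))
# ['navigate_to_closet', 'grasp_pile', 'navigate_to_machine',
#  'load_machine', 'start_cycle']
```

Because each abstract step is refined in isolation, the search cost grows with the size of each subproblem rather than with the product of all of them, which is the kind of computational saving the news item alludes to.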
IBM is still perhaps two years from marketing a medical Watson, but Columbia Univ. medical school professor Dr. Herbert Chase, who is working with the company to adapt the computer for medical tasks, says its ability to understand plain language and access medical history and symptoms might mean quicker diagnoses and treatments.
Learning how to program a computer to display the words "Hello World" once may have excited students, but that hoary chestnut of a lesson doesn’t cut it in a world of videogames, smartphones, and Twitter. One option to take its place and engage a new generation of students in computer programming is a Carnegie Mellon Univ.-developed robot called Finch.