For the first time, scientists report the development of a stretchable “electronic skin” closely modeled after our own that can detect not just pressure, but also what direction it’s coming from.
Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples. In a recent paper, Massachusetts Institute of Technology researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions.
Researchers from the Queen Mary Univ. of London gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.
North Carolina State Univ. researchers have developed technology that allows cyborg cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound.
Scientists at IBM and leading global energy company Repsol S.A. announced this week the world’s first research collaboration to leverage cognitive technologies that will help transform the oil and gas industry. IBM and Repsol are jointly developing two prototype cognitive applications specifically designed to augment Repsol’s strategic decision making in the optimization of oil reservoir production and in the acquisition of new oil fields.
As transistors get smaller, they also grow less reliable. Increasing their operating voltage can help, but that means a corresponding increase in power consumption. With information technology consuming a steadily growing fraction of the world’s energy supplies, some researchers and hardware manufacturers are exploring the possibility of simply letting chips botch the occasional computation.
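The idea of tolerating occasional hardware errors can be illustrated with a toy simulation. The sketch below (generic, not any chip maker's actual scheme, and with made-up error rates) perturbs a small fraction of the additions in a large averaging computation, standing in for an unreliable arithmetic unit, and shows that the final result barely drifts:

```python
import random

def approx_mean(values, error_rate=0.001):
    """Average a list, but let each addition occasionally 'botch' its
    operand -- a stand-in for unreliable low-voltage hardware."""
    total = 0.0
    for v in values:
        if random.random() < error_rate:
            v *= random.uniform(0.9, 1.1)  # rare, small arithmetic error
        total += v
    return total / len(values)

random.seed(0)
data = [random.gauss(100.0, 15.0) for _ in range(100_000)]
exact = sum(data) / len(data)
approx = approx_mean(data)
print(abs(approx - exact) / abs(exact))  # relative error stays tiny
```

Error-tolerant workloads like averaging, image processing, or machine learning absorb such faults gracefully; an exact computation such as a bank-balance update would not, which is why this tradeoff only pays off for some applications.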
Inside Massachusetts Institute of Technology’s Building 41, a small, Roomba-like robot is trying to decide where to go. As the robot considers its options, its “thoughts” are projected on the ground in the form of different-colored dots and lines. This new visualization system, called “measurable virtual reality”, combines projectors with motion-capture technology and animation software to project a robot’s intentions in real time.
Massachusetts Institute of Technology researchers unveiled an oval-shaped submersible robot, a little smaller than a football, with a flattened panel on one side that can slide along an underwater surface to perform ultrasound scans. Originally designed to look for cracks in nuclear reactors’ water tanks, the robot could also inspect ships for the false hulls and propeller shafts that smugglers frequently use to hide contraband.
Washington State Univ. professor Rich Lamb has figured out a dramatically easier and more cost-effective way to do research on science curriculum in the classroom, and it could include playing video games. Called “computational modeling,” it involves a computer “learning” student behavior and then “thinking” as students would. Lamb, who teaches science education, says the process could revolutionize the way educational research is done.
Objects in space tend to spin—and spin in a way that’s totally different from the way they spin on Earth. Understanding how objects are spinning, where their centers of mass are, and how their mass is distributed is crucial to any number of actual or potential space missions, from cleaning up debris in the geosynchronous orbit favored by communications satellites to landing a demolition crew on a comet.
Proofs are the key method of mathematics. Until now, it has mainly been humans who have verified whether proofs are correct. This could change, says Russian mathematician Vladimir Voevodsky, who points to evidence that, in the near future, computers rather than humans could reliably verify whether a mathematical proof is correct.
Advances in artificial intelligence and robotics mean that machines will soon be able to do many of the tasks of today's workers. But David Hummels, a professor of economics at Purdue Univ., says humans still have a unique advantage that machines may never be able to emulate: our ability to respond to other humans.
In the near future, the package that you ordered online may be deposited at your doorstep by a drone: Last December, online retailer Amazon announced plans to explore drone-based delivery, suggesting that fleets of flying robots might serve as autonomous messengers that shuttle packages to customers within 30 minutes of an order.
It makes sense that the credit for science papers with multiple authors should go to the authors who perform the bulk of the research, yet that’s not always the case. Now a new algorithm developed at Northeastern’s Center for Complex Network Research helps shed light on how to properly allocate credit.
In the age of big data, visualization tools are vital. With a single glance at a graphic display, a human being can recognize patterns that a computer might fail to find even after hours of analysis. But what if there are aberrations in the patterns? Or what if there’s just a suggestion of a visual pattern that’s not distinct enough to justify any strong inferences? Or what if the pattern is clear, but not what was to be expected?
Visual impairment comes in many forms, and it's on the rise in America. A Univ. of Cincinnati experiment aimed at this diverse and growing population could spark development of advanced tools to help all the aging baby boomers, injured veterans, diabetics and white-cane-wielding pedestrians navigate the blurred edges of everyday life.
Much artificial intelligence research addresses the problem of making predictions based on large data sets. An obvious example is the recommendation engines at retail sites like Amazon and Netflix. But some types of data are harder to collect than online click histories. And in other applications there may just not be enough time to crunch all the available data.
Twisting a screwdriver, removing a bottle cap and peeling a banana are just a few simple tasks that are tricky to pull off single-handedly. Now a new wrist-mounted robot can provide a helping hand—or rather, fingers. Researchers at Massachusetts Institute of Technology have developed a robot that enhances the grasping motion of the human hand.
The next big thing in aviation may be really small. With some no bigger than a hummingbird, the hottest things at this week's Farnborough International Airshow are tiny compared with the titans of the sky, such as the Airbus A380 or the Boeing Dreamliner.
Fully automated "deep learning" by computers greatly improves the odds of discovering particles such as the Higgs boson, according to a recent study. In fact, this approach outperforms even veteran physicists, whose standard practice consists of developing mathematical formulas by hand to apply to data. New machine learning methods are rendering that hand-crafted approach unnecessary.
Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. It’s also the technique that autonomous robots typically use to build models of their environments. That type of model-building gets complicated, however, in cases in which clusters of robots work as teams.
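The core idea—learning a pattern from labeled training data and then applying it to new inputs—can be shown in a few lines. The sketch below is a generic nearest-centroid classifier on hypothetical 2-D sensor readings, not the robots' actual model-building code:

```python
# Minimal illustration of machine learning as pattern-finding:
# train a nearest-centroid classifier from labeled 2-D examples,
# then classify unseen points by their closest learned centroid.

def train(samples):
    """samples: list of ((x, y), label). Returns label -> centroid."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label whose learned centroid is nearest."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2
                               + (centroids[lab][1] - py) ** 2)

# Hypothetical labeled readings a single robot might collect.
training = [((0.1, 0.2), "wall"), ((0.0, 0.3), "wall"),
            ((5.1, 4.9), "door"), ((4.8, 5.2), "door")]
model = train(training)
print(predict(model, (0.2, 0.1)))  # -> wall
print(predict(model, (5.0, 5.0)))  # -> door
```

The team-of-robots complication the article mentions arises because each robot learns its own `model` from its own observations, and those separately trained models must then be merged into one consistent map.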
This growing season, crop researchers at the Univ. of Illinois are experimenting with the use of drones—unmanned aerial vehicles—on the university’s South Farms. Dennis Bowman, a crop sciences educator with U. of I. Extension, is using two drones to take aerial pictures of crops growing in research plots on the farms.
One of the reasons we don’t yet have self-driving cars and miniature helicopters delivering online purchases is that autonomous vehicles tend not to perform well under pressure. A system that can flawlessly parallel park at 5 mph may have trouble avoiding obstacles at 35 mph. Part of the problem is the time it takes to produce and interpret camera data.
Automated guided vehicles—or AGVs—are robotic versions of draft animals, hauling heavy loads and navigating their way in factories, distribution centers, ports and other facilities. These modern beasts of burden are evolving so rapidly in capabilities and electronic intelligence that the need for the equivalent of standardized performance testing has become a priority for the fast-growing AGV industry and its customers.
Researchers are working on a new algorithm that could make re-identification much easier for computers by identifying the major orientations in 3-D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.