Automated guided vehicles—or AGVs—are robotic versions of draft animals, hauling heavy loads and navigating their way through factories, distribution centers, ports and other facilities. These modern beasts of burden are evolving so rapidly in capabilities and electronic intelligence that the need for the equivalent of standardized performance testing has become a priority for the fast-growing AGV industry and its customers.
Researchers are working on a new algorithm that could make re-identification much easier for computers by identifying the major orientations in 3-D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.
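To picture the mechanics of such an approach, here is a minimal sketch—not the researchers' algorithm—that assumes surface normals have already been extracted from depth data and clusters them around three roughly orthogonal "Manhattan world" axes:

```python
# A minimal sketch (not the published algorithm): estimate dominant
# orientations in a 3-D scene by clustering surface normals. Assumes a
# "Manhattan world" of three roughly orthogonal directions and that
# normals have already been computed from depth data.
import numpy as np

def dominant_orientations(normals, iters=20):
    """Cluster unit normals around 3 axes, ignoring sign."""
    axes = np.eye(3)                      # initial guess: coordinate axes
    for _ in range(iters):
        # Assign each normal to the axis it is most parallel to,
        # ignoring sign (a wall's normal may point either way).
        sims = np.abs(normals @ axes.T)   # |cosine| similarity
        labels = sims.argmax(axis=1)
        for k in range(3):
            members = normals[labels == k]
            if len(members) == 0:
                continue
            # Cluster's principal direction = top eigenvector of the
            # outer-product sum (this handles the sign ambiguity).
            _, vecs = np.linalg.eigh(members.T @ members)
            axes[k] = vecs[:, -1]
    return axes

# Toy usage: noisy normals drawn around the x, y and z axes.
rng = np.random.default_rng(0)
true = np.repeat(np.eye(3), 100, axis=0)
noisy = true + 0.05 * rng.standard_normal(true.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(np.round(dominant_orientations(noisy), 2))  # ~identity, up to sign
```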
Although neural networks have been used in the past to solve pattern-recognition problems such as speech and image recognition, they have usually been implemented in software on conventional computers. Researchers in Belgium have now built a small neural network directly in hardware, using a silicon photonics chip. The chip is made with the same technology as traditional computer chips but uses light rather than electricity as the information carrier.
Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera's field of view, overlaying lines of text that describe items in the environment. The innovation could find applications in "augmented reality" technologies like Google Glass, facial recognition systems and robotic cars that drive themselves.
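As a rough sketch of the general idea—not the research prototype—an off-the-shelf image classifier can already label a camera frame and overlay the result as text. The model choice and overlay style below are illustrative assumptions; the code assumes torchvision 0.13 or later and Pillow are installed.

```python
# A hedged sketch of the general idea (not the research prototype):
# classify what a camera frame shows with an off-the-shelf model, then
# overlay the top label as text on the image.
import torch
from PIL import Image, ImageDraw
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

weights = MobileNet_V2_Weights.DEFAULT
model = mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()

def annotate(frame: Image.Image) -> Image.Image:
    """Return the frame with its top-1 class name drawn in the corner."""
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    label = weights.meta["categories"][logits.argmax().item()]
    out = frame.copy()
    ImageDraw.Draw(out).text((10, 10), label, fill="red")
    return out

# Usage: annotate(Image.open("photo.jpg")).save("annotated.jpg")
```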
Soft robots have become a sufficiently popular research topic that they now have their own journal, Soft Robotics. In the first issue of that journal, Massachusetts Institute of Technology researchers report the first self-contained autonomous soft robot capable of rapid body motion: a “fish” that can execute an escape maneuver, convulsing its body to change direction in just a fraction of a second, or almost as quickly as a real fish can.
Driving behavior is a key factor that is often insufficiently accounted for in computational models that gauge the dynamic characteristics of vehicles. Researchers in Germany have developed a new driving simulator designed to make the “human factor” more calculable for vehicle engineers.
Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder. As a consequence, engineers designing control programs for multiagent systems have restricted themselves to special cases. Until now.
Even scientists are fond of thinking of the human brain as a computer, following sets of rules. But if the brain is like a computer, why do brains make mistakes that computers don't? Recent research shows that our brains stumble on even the simplest rule-based calculations, because humans get caught up in contextual information, even when the rules are as clear-cut as separating even numbers from odd.
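By contrast, a computer applies such a rule identically every time, with no pull from context. A trivial illustration in Python:

```python
# For contrast: a computer applies the even/odd rule the same way every
# time, with no interference from context such as how a number "looks".
def separate_parity(numbers):
    evens = [n for n in numbers if n % 2 == 0]
    odds = [n for n in numbers if n % 2 != 0]
    return evens, odds

print(separate_parity([3, 8, 14, 27, 101, 2048]))
# -> ([8, 14, 2048], [3, 27, 101])
```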
Researchers at the Wyss Institute for Biologically Inspired Engineering and Harvard Univ. have recently shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions. These algorithms use a technique called “message passing inference on factor graphs” and are a mathematical coupling of ideas from graph theory and probability.
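To give a flavor of what message passing on a factor graph involves—the factors and probabilities below are invented for illustration, not taken from the paper—here is sum-product message passing on a two-variable graph:

```python
# A minimal sketch of sum-product message passing on a two-variable
# factor graph (variables A, B; all numbers here are illustrative).
# Messages are vectors indexed by a variable's states.
import numpy as np

prior_A = np.array([0.9, 0.1])        # factor f1(A): P(A=0), P(A=1)
prior_B = np.array([0.5, 0.5])        # factor f3(B)
compat = np.array([[0.8, 0.2],        # factor f2(A, B): rows = A, cols = B
                   [0.3, 0.7]])

# Message from the A side through f2 to B: sum over A's states.
msg_to_B = (prior_A[:, None] * compat).sum(axis=0)

# B's marginal is the product of all incoming messages, normalized.
marginal_B = msg_to_B * prior_B
marginal_B /= marginal_B.sum()
print(marginal_B)    # [0.75 0.25]
```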
Hipster, surfer or biker? Computers may soon be able to tell the difference: Scientists in California are developing an algorithm that uses group pictures to determine to which of these groups, or urban tribes, you belong. So far, the algorithm is 48% accurate on average, much better than chance but not yet at the level of humans.
The information and communications technologies (ICT) industry, and the significant level of R&D that supports it, is driven by constant change in consumer preferences, market demand and technological evolution. The ICT industry is the largest private-sector R&D investor in the U.S., performing nearly one-third of the total.
Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean. The system at Carnegie Mellon Univ. is called NEIL, short for Never Ending Image Learner. In mid-July, it began searching the Internet for images continuously and, in tiny steps, is deciding for itself how those images relate to each other.
If you think that, with the release of every new i-device, the world is getting closer to thought-controlled smart tech and robotic personal assistants, you might be right. And thanks in part to work led by the Univ. of Cincinnati's Anca Ralescu, we may be even closer than you realize.
Much artificial intelligence research is concerned with finding statistical correlations between variables. As the number of variables grows, calculating their aggregate statistics becomes dauntingly complex. But that calculation can be drastically simplified if you know something about the structure of the data.
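A toy example makes the point concrete. If n binary variables form a Markov chain—the structure and numbers below are invented for illustration—the joint distribution factorizes, and a marginal that would naively require summing over 2^n configurations falls out of n-1 small matrix products:

```python
# Toy illustration (invented example): with n binary variables forming a
# Markov chain, the joint factorizes as P(x1) * P(x2|x1) * ... * P(xn|xn-1),
# so the marginal of the last variable costs O(n) instead of O(2^n).
import numpy as np

n = 20
rng = np.random.default_rng(1)
p1 = np.array([0.6, 0.4])                       # P(x1)
trans = rng.dirichlet([1, 1], size=(n - 1, 2))  # trans[i][a] = P(next=b | current=a)

# Exploit the chain structure: push the distribution forward step by step.
belief = p1
for T in trans:
    belief = belief @ T      # marginalize out the previous variable
print(belief)                # P(x_n), computed with n-1 tiny matrix products
```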
The Georgia Institute of Technology has announced the launch of its Institute for Robotics and Intelligent Machines (IRIM), the newest of Georgia Tech’s 10 Interdisciplinary Research Institutes. IRIM brings together robotics researchers from across campus—spanning colleges, departments and individual labs—to support and connect research initiatives, enhance educational programs and foster advances for the National Robotics Initiative.
As transistors get smaller, they also become less reliable. So far, computer-chip designers have been able to work around that problem, but in the future, it could mean that computers stop improving at the rate we’ve come to expect. A third possibility, which some researchers have begun to float, is that we could simply let our computers make more mistakes.
IBM Research has released details on Watson-related cognitive technologies that are expected to help physicians make more informed and accurate decisions faster and to cull new insights from electronic medical records (EMRs). The new computing capabilities allow for a more natural interaction between physicians, data and EMRs.
Object recognition is one of the most widely studied problems in computer vision. But a robot that manipulates objects in the world needs to do more than just recognize them; it also needs to understand their orientation. Is that mug right-side up or upside-down? And which direction is its handle facing? To improve robots’ ability to gauge object orientation, a team is exploiting a statistical construct called the Bingham distribution.
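The Bingham distribution suits orientation because it is antipodally symmetric: a unit quaternion q and its negation -q describe the same rotation, and the density assigns them equal probability. A minimal sketch with invented parameters:

```python
# A minimal sketch of the Bingham distribution (parameters invented for
# illustration): an antipodally symmetric density on the unit sphere,
# p(q) proportional to exp(q^T M diag(Z) M^T q). Because p(q) = p(-q),
# it fits unit quaternions, where q and -q encode the same orientation.
import numpy as np

def bingham_unnormalized(q, M, Z):
    """Unnormalized Bingham density at unit quaternion q (shape (4,))."""
    q = q / np.linalg.norm(q)
    return np.exp(q @ M @ np.diag(Z) @ M.T @ q)

M = np.eye(4)                              # orthogonal: principal directions
Z = np.array([-10.0, -10.0, -10.0, 0.0])   # concentrations (<= 0 by convention)

q_mode = np.array([0.0, 0.0, 0.0, 1.0])    # aligned with the 0-eigenvalue axis
print(bingham_unnormalized(q_mode, M, Z))   # 1.0  (the mode)
print(bingham_unnormalized(-q_mode, M, Z))  # 1.0  (antipodal symmetry)
```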
Computing systems like IBM Research’s Watson have been engineered to learn, reason and help human experts make complex decisions involving extraordinary volumes of fast-moving data. To advance the development and deployment of these cognitive computing systems, IBM has announced a collaborative research initiative with four top universities.
Siri and Watson may seem brainy in certain situations, but to build truly smart, world-changing machines, researchers must understand how human intelligence emerges from brain activity. To help encourage progress in this field, the National Science Foundation (NSF) recently awarded $25 million to establish a Center for Brains, Minds and Machines at the Massachusetts Institute of Technology (MIT).
In complex crisis situations, teams of experts must often make difficult decisions within a narrow time frame. However, the sheer volume of information and the complexity of distributed cognition can hamper the quality and timeliness of decision-making by human teams and lead to catastrophic consequences. A Penn State Univ. team has devised a system that merges human and computer intelligence to support decision-making.
The human brain has 100 billion neurons, connected to each other in networks that allow us to interpret the world around us, plan for the future and control our actions and movements. Massachusetts Institute of Technology neuroscientist Sebastian Seung wants to map those networks, creating a wiring diagram of the brain that could help scientists learn how we each become our unique selves.
Now that the Internet’s basic protocols are more than 30 years old, network scientists are increasingly turning their attention to ad hoc networks where unsolved problems still abound. Most theoretical analyses of ad hoc networks have assumed that the communications links within the network are stable. But that often isn’t the case with real-world wireless devices.
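A toy simulation—all parameters invented, and assuming the networkx package—shows what the stable-link assumption hides: even a well-connected network starts fragmenting once individual links fail at realistic rates.

```python
# Toy simulation (all parameters invented): how often does a network stay
# connected when each link independently fails? This is the question a
# stable-link assumption sweeps aside for real-world wireless devices.
import random
import networkx as nx   # assumes the networkx package is installed

random.seed(0)
base = nx.connected_watts_strogatz_graph(50, 4, 0.1, seed=0)  # 50-node network

def connected_after_churn(g, drop_prob):
    """Remove each edge with probability drop_prob; is the result connected?"""
    g2 = g.copy()
    g2.remove_edges_from([e for e in list(g2.edges) if random.random() < drop_prob])
    return nx.is_connected(g2)

for p in (0.0, 0.2, 0.4):
    rate = sum(connected_after_churn(base, p) for _ in range(200)) / 200
    print(f"link-failure probability {p:.1f}: connected in {rate:.0%} of trials")
```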
Debi Green is trying to book a vacation, but she's having a hard time getting the words out. Even though it's been nearly nine years since she suffered a stroke, language sometimes fails her. Luckily, the computerized travel agent has all the time in the world. It's an avatar being tested at Temple Univ. in Philadelphia, where researchers are working to develop a virtual speech therapist.
In a pair of recent papers, researchers at Massachusetts Institute of Technology have demonstrated that, for a few specific tasks, it’s possible to write computer programs using ordinary language rather than special-purpose programming languages. The work may be of some help to programmers, and it could let non-programmers manipulate common types of files in ways that previously required familiarity with programming languages.
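As a deliberately tiny toy—not MIT’s system—the sketch below hard-codes a few English command patterns onto file operations, just to show the flavor of mapping ordinary language to program behavior:

```python
# A deliberately tiny toy (not MIT's system): map a fixed set of plain-
# English commands onto file operations. Real natural-language programming
# handles far more variation; this only shows the flavor of the mapping.
import glob
import os
import re

def run_command(text: str) -> None:
    m = re.match(r"delete all (\.\w+) files", text.lower())
    if m:
        for path in glob.glob(f"*{m.group(1)}"):
            os.remove(path)
            print(f"removed {path}")
        return
    m = re.match(r"rename (\S+) to (\S+)", text.lower())
    if m:
        os.rename(m.group(1), m.group(2))
        return
    print(f"Sorry, I don't understand: {text!r}")

# Usage:
# run_command("delete all .tmp files")
# run_command("rename draft.txt to final.txt")
```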