This growing season, crop researchers at the Univ. of Illinois are experimenting with the use of drones—unmanned aerial vehicles—on the university’s South Farms. Dennis Bowman, a crop sciences educator with U. of I. Extension, is using two drones to take aerial pictures of crops growing in research plots on the farms.
One of the reasons we don’t yet have self-driving cars and miniature helicopters delivering online purchases is that autonomous vehicles tend not to perform well under pressure. A system that can flawlessly parallel park at 5 mph may have trouble avoiding obstacles at 35 mph. Part of the problem is the time it takes to produce and interpret camera data.
Automated guided vehicles—or AGVs—are robotic versions of draft animals, hauling heavy loads and navigating their way in factories, distribution centers, ports and other facilities. These modern beasts of burden are evolving so rapidly in capabilities and electronic intelligence that the need for the equivalent of standardized performance testing has become a priority for the fast-growing AGV industry and its customers.
Researchers are working on a new algorithm that could make re-identification much easier for computers by identifying the major orientations in 3-D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.
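The report doesn't describe how the algorithm works. As a rough, hypothetical illustration of what "identifying the major orientations" in a 3-D scene can involve, the sketch below greedily clusters surface normals into dominant directions; the function name, angle threshold and greedy strategy are our illustrative assumptions, not the researchers' method:

```python
import math

def dominant_orientations(normals, angle_thresh_deg=15.0):
    """Greedily group unit surface normals into dominant scene directions.

    normals: iterable of (x, y, z) unit vectors sampled from a 3-D scene.
    Returns representative unit vectors, most-populated direction first.
    """
    cos_thresh = math.cos(math.radians(angle_thresh_deg))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    remaining = list(normals)
    clusters = []
    while remaining:
        seed = remaining[0]
        # Treat n and -n as one orientation: opposite faces of a wall
        # or tabletop share a single dominant axis.
        members = [n for n in remaining if abs(dot(n, seed)) >= cos_thresh]
        remaining = [n for n in remaining if abs(dot(n, seed)) < cos_thresh]
        # Flip members to agree with the seed, then average and renormalize.
        acc = [0.0, 0.0, 0.0]
        for n in members:
            s = 1.0 if dot(n, seed) >= 0 else -1.0
            acc = [a + s * x for a, x in zip(acc, n)]
        norm = math.sqrt(dot(acc, acc))
        clusters.append((len(members), tuple(a / norm for a in acc)))
    clusters.sort(key=lambda c: -c[0])
    return [rep for _, rep in clusters]
```

In a typical indoor scene this kind of grouping collapses thousands of measured normals down to a handful of axes (floor, walls), which is what makes both re-identification and scene understanding cheaper.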
Although neural networks have been used in the past to solve pattern recognition problems such as speech and image recognition, it was usually in software on a conventional computer. Researchers in Belgium have manufactured such a small neural network in hardware, using a silicon photonics chip. This chip is made using the same technology as traditional computer chips but uses light instead of electricity as information carrier.
Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera's field of view, overlaying lines of text that describe items in the environment. The innovation could find applications in "augmented reality" technologies like Google Glass, facial recognition systems and robotic cars that drive themselves.
Soft robots have become a sufficiently popular research topic that they now have their own journal, Soft Robotics. In the first issue of that journal, Massachusetts Institute of Technology researchers report the first self-contained autonomous soft robot capable of rapid body motion: a “fish” that can execute an escape maneuver, convulsing its body to change direction in just a fraction of a second, or almost as quickly as a real fish can.
Driving behavior is a key factor that is often insufficiently accounted for in computational models that gauge the dynamic characteristics of vehicles. Researchers in Germany have developed a new driving simulator designed to make the “human factor” more calculable for vehicle engineers.
Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder. As a consequence, engineers designing control programs for multiagent systems have restricted themselves to special cases. Until now.
IBM is investing over $1 billion to give its Watson supercomputer its own business division and a new home in the heart of New York City. The Armonk, N.Y.-based computing company said the new business unit will be dedicated to the development and commercialization of the project that first gained fame by defeating a pair of "Jeopardy!" champions, including 74-time winner Ken Jennings, in 2011.
Even scientists are fond of thinking of the human brain as a computer, following sets of rules. But if the brain is like a computer, why do brains make mistakes that computers don't? Recent research shows that our brains stumble on even the simplest rule-based calculations, because humans get caught up in contextual information, even when the rules are as clear-cut as separating even numbers from odd.
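The even/odd task makes the contrast concrete: for a machine, parity is a single context-free rule. A minimal illustration (the specific numbers are ours, not the study's stimuli):

```python
def is_even(n):
    # A computer applies the parity rule mechanically: only divisibility
    # by 2 matters, never how the digits "look" in context.
    return n % 2 == 0

print(is_even(798))  # True, even though most of its digits are odd
print(is_even(365))  # False
```

Humans applying the same rule are slowed by exactly the contextual cues the code ignores.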
The CEO of FedEx doesn't see drones taking over the package delivery business anytime soon. Fred Smith says FedEx has several drone studies underway. But the idea of delivering items by drone is "almost amusing," Smith said on a conference call on Wednesday after the company reported financial results.
Researchers at the Wyss Institute for Biologically Inspired Engineering and Harvard Univ. have recently shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions. These algorithms use a technique called “message passing inference on factor graphs” and are a mathematical coupling of ideas from graph theory and probability.
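To give a sense of the technique, here is a minimal, conventional-software example of sum-product message passing on a two-variable factor graph; the probability tables are illustrative, and this is generic belief propagation, not the researchers' chemical implementation:

```python
def normalize(msg):
    s = sum(msg)
    return [m / s for m in msg]

# Factor graph for P(A, B) proportional to prior(A) * pair(A, B), binary A and B.
prior_a = [0.7, 0.3]                 # factor attached to variable A
pair = [[0.9, 0.1],                  # pair[a][b]: compatibility of (A=a, B=b)
        [0.2, 0.8]]

# Message from variable A to the pairwise factor: the product of A's other
# incoming factor messages (here, just the prior).
msg_a_to_f = prior_a

# Message from the pairwise factor to B: sum out A against the factor table.
msg_f_to_b = [sum(msg_a_to_f[a] * pair[a][b] for a in range(2))
              for b in range(2)]

# B's marginal belief is the normalized product of its incoming messages.
belief_b = normalize(msg_f_to_b)
```

Here the marginal works out to [0.69, 0.31]; on larger graphs the same purely local messages let the network compute posteriors without ever enumerating the full joint distribution, which is what makes the scheme plausible to realize in chemistry.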
Hipster, surfer or biker? Computers may soon be able to tell the difference: Scientists in California are developing an algorithm that uses group pictures to determine to which of these groups, or urban tribes, you belong. So far, the algorithm is 48% accurate on average, much better than chance but not yet at the level of humans.
The information and communications technologies (ICT) industry, and the significant level of R&D that supports it, is driven by constant change in consumer preferences, market demand and technological evolution. The ICT industry is the largest private-sector R&D investor in the U.S., performing nearly one-third of the total.
Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean. The system at Carnegie Mellon Univ. is called NEIL, short for Never Ending Image Learning. In mid-July, it began searching the Internet for images continuously and, in tiny steps, is deciding for itself how those images relate to each other.
If you think with the release of every new i-device the world is getting closer to thought-controlled smart tech and robotic personal assistants, you might be right. And thanks in part to work led by the Univ. of Cincinnati's Anca Ralescu, we may be even closer than you realize.
Much artificial intelligence research is concerned with finding statistical correlations between variables. As the number of variables grows, calculating their aggregate statistics becomes dauntingly complex. But that calculation can be drastically simplified if you know something about the structure of the data.
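A small example of the simplification that structure buys: if three binary variables form a chain X → Y → Z, the joint distribution factorizes into local tables, so marginals can be computed one variable at a time instead of summing over the exponentially large joint (the probability tables below are illustrative):

```python
from itertools import product

# Chain X -> Y -> Z of binary variables: the joint factorizes as
# P(x, y, z) = P(x) * P(y|x) * P(z|y), so only small local tables are
# needed rather than one 2**3-entry joint table.
p_x = [0.6, 0.4]
p_y_given_x = [[0.8, 0.2], [0.3, 0.7]]
p_z_given_y = [[0.9, 0.1], [0.5, 0.5]]

# Structured route: marginalize one variable at a time
# (cost grows linearly with chain length).
p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(2)) for y in range(2)]
p_z_structured = [sum(p_y[y] * p_z_given_y[y][z] for y in range(2))
                  for z in range(2)]

# Brute-force route: enumerate the full joint
# (cost doubles with every added variable).
p_z_brute = [0.0, 0.0]
for x, y, z in product(range(2), repeat=3):
    p_z_brute[z] += p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]
```

Both routes give the same answer, but only the structured one remains tractable as the number of variables grows.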
The Georgia Institute of Technology has announced the launch of its Institute for Robotics and Intelligent Machines (IRIM), the newest of Georgia Tech’s 10 Interdisciplinary Research Institutes. IRIM brings together robotics researchers from across campus—spanning colleges, departments and individual labs—to support and connect research initiatives, enhance educational programs and foster advances for the National Robotics Initiative.
As transistors get smaller, they also become less reliable. So far, computer-chip designers have been able to work around that problem, but in the future, it could mean that computers stop improving at the rate we’ve come to expect. One possibility, which some researchers have begun to float, is that we could simply let our computers make more mistakes.
Details have been released by IBM Research on Watson-related cognitive technologies that are expected to help physicians make more informed and accurate decisions faster and to cull new insights from electronic medical records (EMR). The new computing capabilities allow for a more natural interaction between physicians, data and EMRs.
Object recognition is one of the most widely studied problems in computer vision. But a robot that manipulates objects in the world needs to do more than just recognize them; it also needs to understand their orientation. Is that mug right-side up or upside-down? And which direction is its handle facing? To improve robots’ ability to gauge object orientation, a team is exploiting a statistical construct called the Bingham distribution.
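The Bingham distribution is a probability distribution over unit quaternions whose defining property is antipodal symmetry: the quaternions q and −q encode the same 3-D rotation, so they must receive the same density. A minimal sketch of the unnormalized density (the parameters are illustrative, and this is the textbook form, not the team's code):

```python
import math

def bingham_unnorm(q, axes, lambdas):
    """Unnormalized Bingham density at a unit quaternion q.

    axes: mutually orthogonal unit 4-vectors defining the distribution's frame.
    lambdas: non-positive concentration parameters, one per axis.
    The squared dot products make the density antipodally symmetric,
    matching the fact that q and -q are the same rotation.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return math.exp(sum(l * dot(q, v) ** 2 for v, l in zip(axes, lambdas)))
```

That built-in symmetry is one reason the distribution suits orientation estimates for objects like mugs: a pose hypothesis and its quaternion negation are automatically treated as one and the same.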
Computing systems like IBM Research’s Watson have been engineered to learn, reason and help human experts make complex decisions involving extraordinary volumes of fast-moving data. To advance the development and deployment of these cognitive computing systems, IBM has announced a collaborative research initiative with four top universities.
Siri and Watson may seem brainy in certain situations, but to build truly smart, world-changing machines, researchers must understand how human intelligence emerges from brain activity. To help encourage progress in this field, the National Science Foundation (NSF) recently awarded $25 million to establish a Center for Brains, Minds and Machines at the Massachusetts Institute of Technology (MIT).
In complex crisis situations, teams of experts must often make difficult decisions within a narrow time frame. However, the sheer volume of information and the complexity of distributed cognition can hamper the quality and timeliness of decision-making by human teams and lead to catastrophic consequences. A Penn State Univ. team has devised a system that merges human and computer intelligence to support decision-making.