Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder. As a consequence, engineers designing control programs for multiagent systems have restricted themselves to special cases. Until now.
IBM is investing over $1 billion to give its...
Even scientists are fond of thinking of the human...
Researchers at the Wyss Institute for Biologically Inspired Engineering and Harvard Univ. have recently shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions. These algorithms use a technique called “message passing inference on factor graphs” and are a mathematical coupling of ideas from graph theory and probability.
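The chemical implementation is beyond a short sketch, but the sum-product message-passing idea itself fits in a few lines. Below is a toy factor graph over two binary variables (the factor values are invented for illustration), with a brute-force check that the passed messages yield the correct marginal:

```python
import numpy as np

# Toy factor graph over binary variables X and Y:
#   f1(X) is a prior-like factor on X; f2(X, Y) couples X and Y.
f1 = np.array([0.6, 0.4])                 # f1[x]
f2 = np.array([[0.9, 0.1],                # f2[x, y]
               [0.2, 0.8]])

# Sum-product message passing: the message from factor f2 to
# variable Y sums out X, weighted by the incoming message from f1.
msg_f1_to_x = f1
msg_f2_to_y = (msg_f1_to_x[:, None] * f2).sum(axis=0)
marginal_y = msg_f2_to_y / msg_f2_to_y.sum()

# Brute-force check: enumerate the full joint and marginalize.
joint = f1[:, None] * f2
brute = joint.sum(axis=0) / joint.sum()
print(np.allclose(marginal_y, brute))  # True
```

On larger tree-structured graphs the same local message rule avoids enumerating the exponential joint, which is the point of the technique.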
Hipster, surfer or biker? Computers may soon be able to tell the difference: Scientists in California are developing an algorithm that uses group pictures to determine to which of these groups, or urban tribes, you belong. So far, the algorithm is 48% accurate on average, much better than chance but not yet at the level of humans.
The information and communications technologies (ICT) industry, and the significant level of R&D that supports it, is driven by constant change in consumer preferences, market demand and technological evolution. The ICT industry is the largest private-sector R&D investor in the U.S., performing nearly one-third of the total.
Researchers are trying to plant a digital seed for artificial intelligence by letting a massive computer system browse millions of pictures and decide for itself what they all mean. The system at Carnegie Mellon Univ. is called NEIL, short for Never Ending Image Learner. In mid-July, it began searching the Internet for images continuously and, in tiny steps, is deciding for itself how those images relate to each other.
If you think with the release of every new i-device the world is getting closer to thought-controlled smart tech and robotic personal assistants, you might be right. And thanks in part to work led by the Univ. of Cincinnati's Anca Ralescu, we may be even closer than you realize.
Much artificial intelligence research is concerned with finding statistical correlations between variables. As the number of variables grows, calculating their aggregate statistics becomes dauntingly complex. But that calculation can be drastically simplified if you know something about the structure of the data.
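The “structure” the item alludes to is typically conditional independence, as in graphical models. A minimal parameter-counting sketch (using a hypothetical chain-structured model over binary variables) shows why it simplifies things so drastically:

```python
# A chain-structured model X1 -> X2 -> ... -> Xn lets the joint
# distribution factor as p(x1) * prod_i p(x_i | x_{i-1}).
# Storing that factorization takes O(n) numbers, versus the
# 2**n - 1 entries a full joint table over n binary variables needs.

def chain_parameters(n):
    # 1 number for p(X1=1), plus 2 per later variable:
    # p(Xi=1 | X_{i-1}=0) and p(Xi=1 | X_{i-1}=1).
    return 1 + 2 * (n - 1)

def full_joint_parameters(n):
    return 2**n - 1

for n in (3, 10, 30):
    print(n, chain_parameters(n), full_joint_parameters(n))
```

At n = 30 the chain needs 59 parameters while the unstructured joint needs over a billion, which is the gap exploiting structure buys you.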
The Georgia Institute of Technology has announced the launch of its Institute for Robotics and Intelligent Machines (IRIM), the newest of Georgia Tech’s 10 Interdisciplinary Research Institutes. IRIM brings together robotics researchers from across campus—spanning colleges, departments and individual labs—to support and connect research initiatives, enhance educational programs and foster advances for the National Robotics Initiative.
As transistors get smaller, they also become less reliable. So far, computer-chip designers have been able to work around that problem, but in the future, it could mean that computers stop improving at the rate we’ve come to expect. Another possibility, which some researchers have begun to float, is that we could simply let our computers make more mistakes.
Details have been released by IBM Research on Watson-related cognitive technologies that are expected to help physicians make more informed and accurate decisions faster and to cull new insights from electronic medical records (EMR). The new computing capabilities allow for a more natural interaction between physicians, data and EMRs.
Object recognition is one of the most widely studied problems in computer vision. But a robot that manipulates objects in the world needs to do more than just recognize them; it also needs to understand their orientation. Is that mug right-side up or upside-down? And which direction is its handle facing? To improve robots’ ability to gauge object orientation, a team is exploiting a statistical construct called the Bingham distribution.
Computing systems like IBM Research’s Watson have been engineered to learn, reason and help human experts make complex decisions involving extraordinary volumes of fast-moving data. To advance the development and deployment of these cognitive computing systems, IBM has announced a collaborative research initiative with four top universities.
Siri and Watson may seem brainy in certain situations, but to build truly smart, world-changing machines, researchers must understand how human intelligence emerges from brain activity. To help encourage progress in this field, the National Science Foundation (NSF) recently awarded $25 million to establish a Center for Brains, Minds and Machines at the Massachusetts Institute of Technology (MIT).
In complex crisis situations, teams of experts must often make difficult decisions within a narrow time frame. However, the sheer volume of information and the complexity of distributed cognition can hamper the quality and timeliness of decision-making by human teams and lead to catastrophic consequences. A Penn State Univ. team has devised a system that merges human and computer intelligence to support decision-making.
The human brain has 100 billion neurons, connected to each other in networks that allow us to interpret the world around us, plan for the future and control our actions and movements. Massachusetts Institute of Technology neuroscientist Sebastian Seung wants to map those networks, creating a wiring diagram of the brain that could help scientists learn how we each become our unique selves.
Now that the Internet’s basic protocols are more than 30 years old, network scientists are increasingly turning their attention to ad hoc networks where unsolved problems still abound. Most theoretical analyses of ad hoc networks have assumed that the communications links within the network are stable. But that often isn’t the case with real-world wireless devices.
Debi Green is trying to book a vacation, but she's having a hard time getting the words out. Even though it's been nearly nine years since she suffered a stroke, language sometimes fails her. Luckily, the computerized travel agent has all the time in the world. It's an avatar being tested at Temple Univ. in Philadelphia, where researchers are working to develop a virtual speech therapist.
In a pair of recent papers, researchers at Massachusetts Institute of Technology have demonstrated that, for a few specific tasks, it’s possible to write computer programs using ordinary language rather than special-purpose programming languages. The work may be of some help to programmers, and it could let non-programmers manipulate common types of files in ways that previously required familiarity with programming languages.
Honda's robotics technology, although among the most advanced for mobility, has come under fire as lacking practical applications and being little more than an expensive toy. The latest example is its walking, talking interactive Asimo robot, which is now acting as a museum guide in Tokyo. In addition to glitches that have interrupted its operation, it lacks voice recognition.
Each summer, power grids are pushed to their limits. A single failure in the system can cause power outages throughout a neighborhood or across towns. To help prevent smaller incidents from snowballing into massive power failures, researchers devised an algorithm that identifies the most dangerous pairs of failures among the millions possible in a power grid.
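The item doesn’t describe the researchers’ algorithm itself; as a point of comparison, here is the brute-force version of the screening problem it speeds up, on an invented four-bus network. Every pair of simultaneous line failures is tried, and a pair is flagged as dangerous if (as a crude proxy) its loss disconnects the grid:

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Depth-first search connectivity check on an undirected graph."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(nodes)

# Hypothetical grid: buses A-D, five transmission lines.
nodes = {"A", "B", "C", "D"}
lines = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D")]

# Screen every pair of simultaneous line failures.
dangerous = [
    pair for pair in combinations(lines, 2)
    if not is_connected(nodes, [l for l in lines if l not in pair])
]
print(dangerous)  # the two pairs that each isolate a bus
```

On a real grid with millions of candidate pairs (and power-flow physics rather than mere connectivity), this enumeration is exactly what becomes infeasible, hence the need for the researchers’ faster screening method.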
The world's first space conversation experiment between a robot and humans is ready to be launched. Developers from the Kirobo project, whose name combines "kibo," the Japanese word for hope, with "robot," gathered in Tokyo Wednesday to demonstrate the humanoid robot's ability to talk.
Researchers at Massachusetts Institute of Technology have developed a new algorithm that can accurately measure the heart rates of people depicted in ordinary digital video by analyzing imperceptibly small head movements that accompany the rush of blood caused by the heart’s contractions.
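The MIT pipeline tracks facial feature points and decomposes their trajectories; the sketch below keeps only the core frequency-domain idea, applied to a synthetic head-displacement trace (sampling rate, amplitudes, and the 0.75–2 Hz cardiac band are all illustrative assumptions):

```python
import numpy as np

fs = 30.0                        # assumed video frame rate, Hz
t = np.arange(0, 20, 1 / fs)     # 20 seconds of "head position"

# Synthetic vertical head displacement: a 1.2 Hz cardiac component
# (72 bpm) buried in slow postural drift and sensor noise.
rng = np.random.default_rng(0)
signal = (0.05 * np.sin(2 * np.pi * 1.2 * t)
          + 0.5 * np.sin(2 * np.pi * 0.1 * t)
          + 0.02 * rng.standard_normal(t.size))

# Keep only the plausible cardiac band (0.75-2 Hz, i.e. 45-120 bpm)
# and read the heart rate off the dominant spectral peak.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(signal))
band = (freqs >= 0.75) & (freqs <= 2.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(bpm))  # 72
```

Restricting attention to the cardiac frequency band is what lets the tiny pulse-driven motion stand out against much larger voluntary head movements.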
A team of researchers has developed a new encryption scheme, known as a functional-encryption scheme, that solves a major problem with homomorphic encryption. The scheme would let the cloud server run a single, specified computation on the homomorphically encrypted result, without being able to extract any other information about it.
Germany's defense minister on Wednesday admitted mistakes were made in the handling of a program to develop unmanned surveillance drones and announced tougher oversight procedures for all armament projects. Opposition parties say Thomas de Maiziere wasted public funds by canceling the botched 600 million euro ($800 million) program too late, but he rejected calls for his resignation.
Reinforcement learning is a technique in which a computer system learns how best to solve some problem through trial-and-error. Classic applications of reinforcement learning involve problems like robot navigation and automated surveillance. Now, researchers have developed a new reinforcement-learning algorithm that, for many problems, allows computer systems to find solutions much more efficiently than previous algorithms did.
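The item doesn’t specify the new algorithm; as background for the trial-and-error loop it describes, here is classic tabular Q-learning on a hypothetical five-state corridor, where the agent is rewarded only for reaching the far end:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, actions
# left (-1) / right (+1), reward 1 only on reaching state 4.
random.seed(0)
n_states, actions = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda a: q[(s, a)]))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should be "always move right".
policy = [max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)]
print(policy)  # [1, 1, 1, 1]
```

The inefficiency the article alludes to is visible even here: many episodes are spent rediscovering the same path, which is what smarter exploration strategies aim to cut down.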
Researchers from North Carolina State University have developed a software algorithm that detects and isolates cyberattacks on networked control systems—which are used to coordinate transportation, power, and other infrastructure across the United States.