During this week’s Intel Developer Forum, new Intel CEO Brian Krzanich announced a number of near-term changes to the company’s product line, including new LTE and 14-nm products and a lower-power product family, called Quark, aimed at future wearable electronic devices.
Your smartphone snapshots could be instantly converted into professional-looking photographs...
Qubit-based computing that exploits spooky quantum effects like entanglement and superposition...
On Tuesday IBM introduced a new line of mainframe computers that the company calls its most powerful and technologically advanced ever. The zEnterprise EC12 mainframe server is designed to help users securely and quickly sift through massive amounts of data. IBM says the 5.5-GHz microprocessor that powers the mainframe is the fastest chip in the world.
A research team at the University of California, Santa Barbara, has designed and fabricated a quantum processor capable of factoring a composite number (in this case 15) into its constituent prime factors, 3 and 5. Although 15 is modest compared to, say, a 600-digit number, the processor ran its factoring algorithm correctly about half the time, matching theoretical predictions and marking a milestone on the trail toward a more powerful quantum computer.
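The heart of this factoring approach is order finding: pick a base coprime to N, find the period r of its powers mod N, and derive factors from the half-period. The sketch below is a textbook classical illustration of that pipeline for N = 15 with the assumed base a = 7, not the UCSB team's implementation; a quantum processor replaces only the brute-force period search.

```python
from math import gcd

def factor_via_order(N, a):
    # Find the smallest r > 0 with a^r ≡ 1 (mod N) by brute force.
    # This period search is the step a quantum computer can do
    # exponentially faster; everything else is classical bookkeeping.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 != 0:
        return None                      # odd period: retry with another base
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p > 1 and q > 1 and p * q == N:
        return p, q
    return None

print(factor_via_order(15, 7))   # (3, 5): the period of 7^x mod 15 is 4
```

For N = 15 and a = 7 the powers cycle 7, 4, 13, 1, so r = 4 and the factors fall out of gcd(4 ± 1, 15).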
Continued miniaturization and increased component density in today’s electronics have pushed heat generation and power dissipation to unprecedented levels. Current cooling technology is keeping pace, but at the cost of greatly adding to the size and weight of electronics. As a solution, DARPA is pursuing a new thermal management strategy that places microfluidic cooling inside the chip substrate.
In a project that challenges the notion that the best chip is the most accurate one, a research team this week unveiled its prototype “inexact” computer chip. By allowing the chip to make a few mistakes, the developers were able to slash its power consumption dramatically. The result is a chip at least 15 times more efficient than today’s technology.
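The trade-off behind inexact hardware can be emulated in software: deliberately discard low-order bits of each operand, as a pruned adder circuit effectively would, and check how much accuracy is lost. The function names and the 8-bit truncation below are illustrative assumptions, not the team's actual circuit design.

```python
def truncate_low_bits(x, bits=8):
    # Zero out the low-order bits, mimicking an adder whose
    # least-significant logic has been pruned away to save power.
    return x & ~((1 << bits) - 1)

def inexact_sum(values, bits=8):
    return sum(truncate_low_bits(v, bits) for v in values)

values = list(range(1000, 2000))
exact = sum(values)
approx = inexact_sum(values)
error = abs(exact - approx) / exact
print(f"relative error: {error:.4f}")   # error stays small while the adder logic shrinks
```

Accepting a bounded error like this is what lets the hardware designers remove circuitry outright rather than merely idle it.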
Multicore chips are common, but chips of the future are likely to have hundreds or even thousands of cores. Software simulations will work up to a point, but testing designs will require hardware models built on programmable chips that won’t get bogged down by resource requests. A new system to improve the efficiency of such models has been developed by Massachusetts Institute of Technology computer scientists.
The international Square Kilometre Array (SKA) will be the world’s largest and most sensitive radio telescope when it is built, and will require the processing power of several million of today’s fastest computers to collect the exabytes of data it will generate. IBM and the Netherlands Institute for Radio Astronomy (ASTRON) are embarking on a five-year project to solve this data collection problem.
For the last decade or so, computer chip manufacturers have been increasing the speed of their chips by giving them extra processing units, or “cores.” But more cores means greater risks if new designs don’t work. A new software-simulation system promises much more accurate evaluation of promising—but potentially fault-ridden—multicore-chip designs.
Following on the news that the Japanese K computer topped other high-performance computers at the SC11 conference, the National Nuclear Security Administration’s IBM Blue Gene/Q prototype has topped the Graph500, an increasingly competitive ranking that stresses supercomputer performance on “big data” scaling problems rather than purely arithmetic computations.
After topping both the June and November 2011 TOP500 lists of the fastest computers, RIKEN and Fujitsu’s “K” computer has bolstered its status as an all-around performer by ranking at the top in all four benchmarks of the 2011 HPC Challenge Awards at SC11 in Seattle.
The University of Tennessee's National Institute for Computational Sciences announced at the SC11 conference that it has entered a multi-year strategic engagement with Intel Corporation to pursue the development of next-generation, high-performance computing solutions based on Intel’s Many Integrated Core architecture.
Researchers from North Carolina State University are developing a 3D central processing unit (CPU) with the goal of boosting energy efficiency by 15 to 25%. The work is being done under a $1.5 million grant from the Intel Corporation.
The PC is still the backbone of the digital world, powering e-commerce and social networking and selling more than a million units per day. But worldwide sales have slowed in recent years, and the industry is looking to foreign markets and handheld gadgets to shore up its profit margins.
MIT researchers show how to make e-beam lithography, commonly used to prototype computer chips, more practical as a mass-production technique.
Researchers from North Carolina State University have developed two new techniques, building on common efficiency strategies such as prefetching and bandwidth allocation, that help maximize the performance of multicore computer chips by letting them retrieve data more efficiently, boosting chip performance by 10 to 40%.
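Prefetching, one of the strategies mentioned, predicts future memory accesses and fetches them before the core asks. The toy simulation below illustrates the general idea with a simple stride predictor; it is a conceptual sketch under assumed names, not NC State's actual technique.

```python
def simulate(addresses, prefetch=True):
    # Count cache hits over an access stream. With prefetching on, once
    # two accesses establish a stride, the next predicted address is
    # pulled into the cache early, turning would-be misses into hits.
    cache, hits = set(), 0
    last, stride = None, None
    for addr in addresses:
        if addr in cache:
            hits += 1
        else:
            cache.add(addr)
        if prefetch and last is not None:
            if addr - last == stride:
                cache.add(addr + stride)   # fetch the next line early
            stride = addr - last
        last = addr
    return hits

seq = list(range(64))
print(simulate(seq, prefetch=False), simulate(seq, prefetch=True))  # 0 61
```

A sequential scan gets no hits from a cold cache alone, but the stride predictor converts nearly every access after the warm-up into a hit.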
Norwegian company Novelda has recently developed silicon chips that measure just 2 x 2 mm but contain nearly two million transistors and 512 radars that simultaneously sense and transmit information. Unlike conventional radar devices, which must be placed several meters away from the object to be measured, Novelda's can sit directly on the object.
Maligned as too power-hungry for small mobile devices, Intel’s chip technology is in the midst of a realignment as the company tries to elbow into a mobile market dominated by lower-power processors from companies such as Qualcomm and Texas Instruments. The new chip is slated for release later this year.
An international team of computing experts from the United States, Switzerland, and Singapore has created a breakthrough technique for doubling the efficiency of computer chips by trimming away the portions that are rarely used.
Engineering researchers at the Univ. of Michigan have found a way to improve the performance of ferroelectric materials, which have the potential to make memory devices with more storage capacity than magnetic hard drives and faster write speed and longer lifetimes than flash memory.
In an effort to make it easier to build inexpensive, next-generation silicon-based electro-optical chips, which allow computers to move information with light and electricity, a research team is developing design tools and using commercial nanofabrication tools.
Over the week, the R&D Daily has been highlighting MIT's Project Angstrom, an ambitious initiative to create tomorrow’s computing systems from the ground up by developing new hardware, a new operating system, and a sophisticated programming language to take advantage of multicore chips. In the last part of this series, MIT's Larry Hardesty discusses how programmers will need software development systems that let them put multicore chips to work in fundamentally new ways.
An Austrian research group led by physicist Rainer Blatt suggests a fundamentally novel architecture for quantum computation. They have experimentally demonstrated quantum antennae, which enable the exchange of quantum information between two separate memory cells located on a computer chip. This offers new opportunities to build practical quantum computers.
Today's consumers expect mobile devices that are increasingly small yet ever more powerful. They want all the bells and whistles, but these features consume energy. To promote energy-efficient multitasking, a Harvard Univ. grad student has developed and demonstrated a new device with the potential to reduce the power consumption of modern processing chips.
Over the week, the R&D Daily has been highlighting MIT's Project Angstrom, an ambitious initiative to create tomorrow’s computing systems from the ground up by developing new hardware and a new operating system to take advantage of multicore chips. MIT's Larry Hardesty continues discussing the development of new algorithms. Today, efforts to adapt Fourier transforms for everyday computing use are described.
At its most fundamental, computer science is about the search for better algorithms. But most new algorithms are designed to run on serial computers, which process instructions one after another. Retooling them to run on parallel processors is rarely simple.
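Summation is a minimal example of why that retooling is hard: a serial loop chains each step to the previous one, while a parallel-friendly version must restructure the work into independent pieces. The sketch below is a conceptual illustration (executed serially here), not code from the article.

```python
def serial_sum(values):
    # Each iteration depends on the previous total, so the steps
    # cannot overlap: this maps naturally onto one core.
    total = 0
    for v in values:
        total += v
    return total

def tree_sum(values):
    # Pairwise reduction: every pass halves the list, and all pairs
    # within a pass are independent, so n cores could finish the sum
    # in O(log n) passes instead of n serial steps.
    values = list(values)
    while len(values) > 1:
        if len(values) % 2:
            values.append(0)            # pad so the pairs come out even
        values = [values[i] + values[i + 1]
                  for i in range(0, len(values), 2)]
    return values[0]

data = list(range(100))
print(serial_sum(data), tree_sum(data))   # both print 4950
```

The two functions compute the same answer, but only the second exposes the independence a parallel processor needs, which is exactly the restructuring most serial algorithms resist.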
Project Angstrom, an ambitious initiative to create tomorrow’s computing systems from the ground up, funded by the U.S. Defense Department and drawing on the work of 19 MIT researchers, is concerned with multicore computing at all levels, from chip architecture up to the design of programming languages. But at its heart is the development of a new operating system.