
Google Plans to Demonstrate the Supremacy of Quantum Computing

Quantum computers have long held the promise of performing certain calculations that are impossible—or at least, entirely impractical—for even the most powerful conventional computers to perform. Now, researchers at a Google laboratory in Goleta, Calif., may finally be on the cusp of proving it, using the same kinds of quantum bits, or qubits, that one day could make up large-scale quantum machines.

By the end of this year, the team aims to increase the number of superconducting qubits it builds on integrated circuits to create a 7-by-7 array. With this quantum IC, the Google researchers aim to perform operations at the edge of what’s possible with even the best supercomputers, and so demonstrate “quantum supremacy.”

“We’ve been talking about, for many years now, how a quantum processor could be powerful because of the way that quantum mechanics works, but we want to specifically demonstrate it,” says team member John Martinis, a professor at the University of California, Santa Barbara, who joined Google in 2014.

A system size of 49 superconducting qubits is still far away from what physicists think will be needed to perform the sorts of computations that have long motivated quantum computing research. One of those is Shor’s algorithm, a computational scheme that would enable a quantum computer to quickly factor very large numbers and thus crack one of the foundational components of modern cryptography. In a recent commentary in Nature, Martinis and colleagues estimated that a 100-million-qubit system would be needed to factor a 2,000-bit number—a not-uncommon public key length—in one day. Most of those qubits would be used to create the special quantum states that would be needed to perform the computation and to correct errors, creating a mere thousand or so stable “logical qubits” from thousands of less stable physical components, Martinis says.

There will be no such extra infrastructure in this 49-qubit system, which means a different computation must be performed to establish supremacy. To demonstrate the chip’s superiority over conventional computers, the Google team will execute operations on the array that will cause it to evolve chaotically and produce what looks like a random output. Classical machines can simulate this output for smaller systems. In April, for example, Lawrence Berkeley National Laboratory reported that its 29-petaflop supercomputer, Cori, had simulated the output of 45 qubits. But 49 qubits would push—if not exceed—the limits of conventional supercomputers.
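To get a feel for why those last few qubits matter so much, consider a rough back-of-the-envelope sketch (an illustration, not the Google team's actual benchmark): a brute-force classical simulation must store one complex amplitude for every possible state of the array, so the memory required doubles with each added qubit.

```python
# Rough sketch (not the Google team's benchmark): memory a brute-force
# classical simulation needs to hold the full quantum state of n qubits,
# assuming one 16-byte double-precision complex amplitude per basis state.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude  # 2**n amplitudes in total

for n in (45, 49):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:,.0f} TiB")

# 45 qubits:   512 TiB -- within reach of a large supercomputer
# 49 qubits: 8,192 TiB -- about 8 PiB, beyond any single machine's memory
```

Real simulations use clever partitioning tricks to stretch these limits, but the exponential trend is what makes 49 qubits such a punishing target for classical hardware.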

This computation does not as yet have a clear practical application. But Martinis says there are reasons beyond demonstrating quantum supremacy to pursue this approach. The qubits used to make the 49-qubit array can also be used to make larger “universal” quantum systems with error correction, the sort that could do things like decryption, so the chip should provide useful validation data.

Brain Computers: Bad at Math, Good at Everything Else

Painful exercises in basic arithmetic are a vivid part of our elementary school memories. A multiplication like 3,752 × 6,901 carried out with just pencil and paper for assistance may well take up to a minute. Of course, today, with a cellphone always at hand, we can quickly check that the result of our little exercise is 25,892,552. Indeed, the processors in modern cellphones can together carry out more than 100 billion such operations per second. What’s more, the chips consume just a few watts of power, making them vastly more efficient than our slow brains, which consume about 20 watts and need significantly more time to achieve the same result.

Of course, the brain didn’t evolve to perform arithmetic. So it does that rather badly. But it excels at processing a continuous stream of information from our surroundings. And it acts on that information—sometimes far more rapidly than we’re aware of. No matter how much energy a conventional computer consumes, it will struggle with feats the brain finds easy, such as understanding language and running up a flight of stairs.

If we could create machines with the computational capabilities and energy efficiency of the brain, it would be a game changer. Robots would be able to move masterfully through the physical world and communicate with us in plain language. Large-scale systems could rapidly harvest large volumes of data from business, science, medicine, or government to detect novel patterns, discover causal relationships, or make predictions. Intelligent mobile applications like Siri or Cortana would rely less on the cloud. The same technology could also lead to low-power devices that can support our senses, deliver drugs, and emulate nerve signals to compensate for organ damage or paralysis.

But isn’t it much too early for such a bold attempt? Isn’t our knowledge of the brain far too limited to begin building technologies based on its operation? I believe that emulating even very basic features of neural circuits could give many commercially relevant applications a remarkable boost. How faithfully computers will have to mimic biological detail to approach the brain’s level of performance remains an open question. But today’s brain-inspired, or neuromorphic, systems will be important research tools for answering it.

A key feature of conventional computers is the physical separation of memory, which stores data and instructions, from logic, which processes that information. The brain holds no such distinction. Computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). Most of what the brain does is determined by those connections and by the manner in which each neuron responds to incoming signals from other neurons.

When we talk about the extraordinary capabilities of the human brain, we are usually referring to just the latest addition in the long evolutionary process that constructed it: the neocortex. This thin, highly folded layer forms the outer shell of our brains and carries out a diverse set of tasks that includes processing sensory inputs, motor control, memory, and learning. This great range of abilities is accomplished with a rather uniform structure: six horizontal layers and a million 500-micrometer-wide vertical columns all built from neurons, which integrate and distribute electrically coded information along tendrils that extend from them—the dendrites and axons.

Like all the cells in the human body, a neuron normally has an electric potential of about –70 millivolts between its interior and exterior. This membrane voltage changes when a neuron receives signals from other neurons connected to it. And if the membrane voltage rises to a critical threshold, it forms a voltage pulse, or spike, with a duration of a few milliseconds and a value of about 40 mV. This spike propagates along the neuron’s axon until it reaches a synapse, the complex biochemical structure that connects the axon of one neuron to a dendrite of another. If the spike meets certain criteria, the synapse transforms it into another voltage pulse that travels down the branching dendrite structure of the receiving neuron and contributes either positively or negatively to its cell membrane voltage.
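That integrate-to-threshold-and-fire behavior can be caricatured with a leaky integrate-and-fire model. The short sketch below is only an illustration, with assumed parameter values (a −70 mV resting potential, a −55 mV firing threshold, a 10-millisecond membrane time constant); real neurons and synapses are far richer.

```python
# Toy leaky integrate-and-fire neuron (illustrative parameters, not biology).
# The membrane voltage leaks back toward rest, input pushes it up, and
# crossing the firing threshold produces a spike followed by a reset.
V_REST, V_THRESH = -70.0, -55.0   # resting potential and threshold, millivolts
TAU_M, DT = 10.0, 0.1             # membrane time constant and time step, ms

def simulate(input_mv, steps=1000):
    v, spike_times = V_REST, []
    for step in range(steps):
        v += DT / TAU_M * ((V_REST - v) + input_mv)   # leak plus synaptic drive
        if v >= V_THRESH:                  # threshold crossed: emit a spike
            spike_times.append(step * DT)  # (the millisecond pulse itself is not modeled)
            v = V_REST                     # reset after the pulse
    return spike_times

print(simulate(input_mv=20.0)[:3])  # first spike times in ms, roughly every 14 ms
```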

Connectivity is a crucial feature of the brain. The pyramidal cell, for example—a particularly important kind of cell in the human neocortex—contains about 30,000 synapses and so 30,000 inputs from other neurons. And the brain is constantly adapting. Neuron and synapse properties—and even the network structure itself—are always changing, driven mostly by sensory input and feedback from the environment.

General-purpose computers these days are digital rather than analog, but the brain is not as easy to categorize. Neurons accumulate electric charge just as capacitors in electronic circuits do. That is clearly an analog process. But the brain also uses spikes as units of information, and these are fundamentally binary: At any one place and time, there is either a spike or there is not. Electronically speaking, the brain is a mixed-signal system, with local analog computing and binary-spike communication. This mix of analog and digital helps the brain overcome transmission losses. Because the spike essentially has a value of either 0 or 1, it can travel a long distance without losing that basic information; it is also regenerated when it reaches the next neuron in the network.

Another crucial difference between brains and computers is that the brain accomplishes all its information processing without a central clock to synchronize it. Although we observe synchronization events—brain waves—they are self-organized, emergent products of neural networks. Interestingly, modern computing has started to adopt brainlike asynchronicity, to help speed up computation by performing operations in parallel. But the degree and the purpose of parallelism in the two systems are vastly different.

The Benefits of Building an Artificial Brain

In the mid-1940s, a few brilliant people drew up the basic blueprints of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which operated on stored instructions and data. Later inventions—transistors, integrated circuits, solid-state memory—would supercharge this concept into the greatest tool ever created by humankind.

So here we are, with machines that can churn through tens of quadrillions of operations per second. We have voice-recognition-enabled assistants in our phones and homes. Computers routinely thrash us in our ancient games. And yet we still don’t have what we want: machines that can communicate easily with us, understand and anticipate our needs deeply and unerringly, and reliably navigate our world.

Now, as Moore’s Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing’s future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own.

In this issue, we consider the advent of neuromorphic computing and its prospects for ushering in a new age of truly intelligent machines. It is already a sprawling enterprise, being propelled in part by massive research initiatives in the United States and Europe aimed at plumbing the workings of the human brain. Parallel engineering efforts are now applying some of that knowledge to the creation of software and specialized hardware that “learn”—that is, get more adept—by repeated exposure to computational challenges.

Brute speed and clever algorithms have already produced machines capable of equaling or besting us at activities we’ve long thought of as deeply human: not just poker and Go but also stock picking, language translation, facial recognition, drug discovery and design, and the diagnosis of several specific diseases. Pretty soon, speech recognition, driving, and flying will be on that list, too.

The emergence of special-purpose hardware, such as IBM’s TrueNorth chips and the University of Manchester’s SpiNNaker, will eventually make the list longer. And yet, our intuition (which for now remains uniquely ours) tells us that even then we’ll be no closer to machines that can, through learning, become capable of making their way in our world in an engaging and yet largely independent way.

To produce such a machine, we will have to give it common sense. If you act erratically, for example, this machine will recall that you’re going through a divorce and subtly change the way it deals with you. If it’s trying to deliver a package and gets no answer at your door, but hears a small engine whining in your backyard, it will come around to see if there’s a person (or machine) back there willing to accept the package. Such a machine will be able to watch a motion picture, decide how good it is, and write an astute, insightful review of it.

But will this machine actually enjoy the movie? And, just as important, will we be able to know if it does? Here we come inevitably to the looming great challenge, and great puzzle, of this coming epoch: machine consciousness. Machines probably won’t need consciousness to outperform us in almost every measurable way. Nevertheless, deep down we will surely regard them with a kind of disdain if they don’t have it.

Trying to create consciousness may turn out to be the way we finally begin to understand this most deeply mysterious and precious of all human attributes. We don’t understand how conscious experience arises or its purpose in human beings—why we delight in the sight of a sunset, why we are stirred by the Eroica symphony, why we fall in love. And yet, consciousness is the most remarkable thing the universe has ever created. If we, too, manage to create it, it would be humankind’s supreme technological achievement, a kind of miracle that would fundamentally alter our relationship with our machines, our image of ourselves, and the future of our civilization.

We Could Build an Artificial Brain Right Now

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.
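The arithmetic behind "data center to palm of your hand" is simple, using the factors quoted above; the megawatt figure below is an assumption for illustration, not a measurement.

```python
# Illustrative arithmetic only: scale a data-center-class workload by the
# hoped-for neuromorphic efficiency gain quoted above.
data_center_watts = 1e6    # assumed megawatt-class data-center workload
neuromorphic_gain = 1e8    # a 100-million-fold improvement in energy efficiency

print(f"{data_center_watts / neuromorphic_gain * 1e3:.0f} mW")  # ~10 mW: handheld territory
```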

The ultimate brainlike machine will be one in which we build analogues for all the essential functional components of the brain: the synapses, which connect neurons and allow them to receive and respond to signals; the dendrites, which combine and perform local computations on those incoming signals; and the core, or soma, region of each neuron, which integrates inputs from the dendrites and transmits its output on the axon.

Simple versions of all these basic components have already been implemented in silicon. The starting point for such work is the same metal-oxide-semiconductor field-effect transistor, or MOSFET, that is used by the billions to build the logic circuitry in modern digital processors.

These devices have a lot in common with neurons. Neurons operate using voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels in which ions move between the interior and exterior of the cell—a smooth, analog process that involves a steady buildup or decline instead of a simple on-off operation.

MOSFETs are also voltage controlled and operate by the movement of individual units of charge. And when MOSFETs are operated in the “subthreshold” mode, below the voltage threshold used to digitally switch between on and off, the amount of current flowing through the device is very small—less than a thousandth of what is seen in the typical switching of digital logic gates.
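That smooth, exponential behavior can be written down compactly. The sketch below uses the standard weak-inversion model of subthreshold drain current; the parameter values are assumed, generic figures rather than numbers from any particular process.

```python
import math

# Simplified weak-inversion (subthreshold) MOSFET model:
#   I_D ~ I0 * exp(V_GS / (n * U_T))
# Current rises exponentially with gate voltage instead of switching on/off.
# I0 and N_SLOPE are assumed, illustrative values, not tied to a real process.
U_T = 0.0259       # thermal voltage kT/q at room temperature, volts
N_SLOPE = 1.5      # subthreshold slope factor (typically 1.2 to 1.6)
I0 = 1e-12         # leakage prefactor in amperes (device-dependent assumption)

def subthreshold_current(v_gs):
    return I0 * math.exp(v_gs / (N_SLOPE * U_T))

for v in (0.10, 0.20, 0.30):
    print(f"V_GS = {v:.2f} V -> I_D = {subthreshold_current(v):.2e} A")
# Every ~90 mV of additional gate voltage raises the current about tenfold,
# and the currents stay far below those of a digitally switched gate.
```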

The notion that subthreshold transistor physics could be used to build brainlike circuitry originated with Carver Mead of Caltech, who helped revolutionize the field of very-large-scale integration (VLSI) circuit design in the 1970s. Mead pointed out that chip designers fail to take advantage of a lot of interesting behavior—and thus information—when they use transistors only for digital logic. The process, he wrote in 1990, essentially involves “taking all the beautiful physics that is built into…transistors, mashing it down to a 1 or 0, and then painfully building it back up with AND and OR gates to reinvent the multiply.” A more “physical” or “physics-based” computer could execute more computations per unit energy than its digital counterpart. Mead predicted such a computer would take up significantly less space as well.

In the intervening years, neuromorphic engineers have made all the basic building blocks of the brain out of silicon with a great deal of biological fidelity. The neuron’s dendrite, axon, and soma components can all be fabricated from standard transistors and other circuit elements. In 2005, for example, Ethan Farquhar, then a Ph.D. candidate, and I created a neuron circuit using a set of six MOSFETs and a handful of capacitors. Our model generated electrical pulses that very closely matched those in the soma part of a squid neuron, a long-standing experimental subject. What’s more, our circuit accomplished this feat with similar current levels and energy consumption to those in the squid’s brain. If we had instead used analog circuits to model the equations neuroscientists have developed to describe that behavior, we would have needed on the order of 10 times as many transistors. Performing those calculations with a digital computer would require even more space.
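The equations in question are, presumably, the classic Hodgkin-Huxley model, which was itself derived from measurements on the squid giant axon. The compact numerical sketch below (textbook parameters, simple Euler integration) shows the kind of calculation the six-transistor circuit sidesteps by letting device physics do the work directly.

```python
import math

# Compact Hodgkin-Huxley model of the squid giant axon (textbook parameters,
# simple Euler integration). Shown only to illustrate the equations a digital
# or equation-solving analog implementation would have to grind through.
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximum conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

def rates(v):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, dt=0.01, t_max=50.0):
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting-state values
    trace = []
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        trace.append(v)
    return trace

print(f"peak membrane voltage: {max(simulate()):.1f} mV")  # spikes to roughly +40 mV
```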