Category Archives: 99 Computer Hardware

U.S. Slips in New Top500 Supercomputer Ranking

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having enjoyed that first-place ranking since June 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China’s National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still enjoys the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy’s Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

This is the first time since 1996 that a U.S. supercomputer has failed to take one of the first three slots on the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped Titan from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all the creations of U.S. companies.

Even the second-ranking Tianhe-2 is based on Intel processors and co-processors. Only the TaihuLight is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest-ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold—that is, to have a computer that can perform 10^18 floating-point operations per second. China hopes to do so by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing’s for sure: It’ll be a horse race worth watching.

Global Race Towards Exascale Will Drive Supercomputing

For the first time in 21 years, the United States no longer claims even the bronze medal. With this week’s release of the latest Top500 supercomputer ranking, the top three fastest supercomputers in the world are now run by China (with both the first- and second-place finishers) and Switzerland. And while the supercomputer horse race is spectacle enough unto itself, a new report on the supercomputer industry highlights broader trends behind both the latest and the last few years of Top500 rankings.

The report, commissioned last year by the Japanese national science agency Riken, outlines a worldwide race toward exascale computers in which the U.S. sees R&D spending and supercomputer talent pools shrink, Europe jumps into the breach with increased funding, and China pushes hard to become the new global leader, despite a still small user and industry base ready to use the world’s most powerful supercomputers.

Steve Conway, report co-author and senior vice president of research at Hyperion, says the industry trend in high-performance computing is toward laying groundwork for pervasive AI and big data applications like autonomous cars and machine learning. And unlike more specialized supercomputer applications from years past, the workloads of tomorrow’s supercomputers will likely be mainstream and even consumer-facing applications.

“Ten years ago the rationale for spending on supercomputers was primarily two things: national security and scientific leadership, and I think there are a lot of people who still think that supercomputers are limited to problems like will a proton go left or right,” he says. “But in fact, there’s been strong recognition [of the connections] between supercomputing leadership and industrial leadership.”

“With the rise of big data, high-performance computing has moved to the forefront of research in things like autonomous vehicle design, precision medicine, deep learning, and AI,” Conway says. “And you don’t have to ask supercomputing companies if this is true. Ask Google and Baidu. There’s a reason why Facebook has already bought 26 supercomputers.”

As the 72-page Hyperion report notes, “IDC believes that countries that fail to fund development of these future leadership-class supercomputers run a high risk of falling behind other highly developed countries in scientific innovation, with later harmful consequences for their national economies.” (Since authoring the report in 2016 as part of the industry research group IDC, its authors this year formed the spin-off research firm Hyperion.)

Conway says that solutions to problems plaguing HPC systems today will be found in consumer electronics and industry applications of the future. So while operating massively parallel computers with multiple millions of cores may today only be a problem facing the world’s fastest and second-fastest supercomputers—China’s Sunway TaihuLight and Tianhe-2, running on 10.6 million and 3.1 million cores, respectively—that fact won’t hold true forever. And because China is the only country tackling this problem now, it is more likely to develop the relevant technology first, technology the world will want when cloud computing with multiple millions of cores approaches the mainstream.

The same logic applies to optimizing the ultra-fast data rates that today’s top HPC systems use and minimizing the megawatt electricity budgets they consume. And as the world’s supercomputers approach the exascale, that is, the 1 exaflop or 1000 petaflop mark, new challenges will no doubt arise too.

So, for instance, the report says that rapid shutdown and power-up of cores not in use will be one trick supercomputer designers use to trim back some of their systems’ massive power budgets. High storage density—in the 100-petabyte range—will also become paramount to house the big data sets these supercomputers consume.

“You could build an exascale system today,” Conway says. “But it would take well over 100 megawatts, which nobody’s going to supply, because that’s over a 100 million dollar electricity bill. So it has to get the electricity usage under control. Everybody’s trying to get it in the 20 to 30 megawatts range. And it has to be dense. Much denser than any computing today. It’s got to fit inside some kind of building. You don’t want the building to be 10 miles long. And also the denser the machine, the faster the machine is going to be too.”
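Conway’s numbers are easy to sanity-check. A back-of-the-envelope calculation (the $0.10-per-kilowatt-hour electricity price below is an assumption, not a figure from the report) shows how a 100-megawatt machine lands in the neighborhood of a US $100 million annual power bill:

```python
# Rough annual electricity cost for a hypothetical 100 MW exascale system.
# The price per kilowatt-hour is an assumed round number, not a quoted rate.
power_mw = 100                      # sustained power draw in megawatts
hours_per_year = 24 * 365           # 8,760 hours
price_per_kwh = 0.10                # assumed cost in dollars per kilowatt-hour

energy_kwh = power_mw * 1000 * hours_per_year    # kilowatt-hours consumed per year
annual_cost = energy_kwh * price_per_kwh

print(f"{energy_kwh:.2e} kWh/year -> about ${annual_cost / 1e6:.0f} million per year")
# ~$88 million per year at $0.10/kWh; slightly higher rates push the bill past
# $100 million, consistent with Conway's estimate.
```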

Conway predicts that these and other challenges will be surmounted, and the first exaflop supercomputers will appear on the Top500 list around 2021, while exaflop supercomputing could become commonplace by 2023.

The Human Brain Project Reboots: A Search Engine for the Brain Is in Sight

The human brain is smaller than you might expect: One of them, dripping with formaldehyde, fits in a single gloved hand of a lab supervisor here at the Jülich Research Center, in Germany.

Soon, this rubbery organ will be frozen solid, coated in glue, and then sliced into several thousand wispy slivers, each just 60 micrometers thick. A custom apparatus will scan those sections using 3D polarized light imaging (3D-PLI) to measure the spatial orientation of nerve fibers at the micrometer level. The scans will be gathered into a colorful 3D digital reconstruction depicting the direction of individual nerve fibers on larger scales—roughly 40 gigabytes of data for a single slice and up to a few petabytes for the entire brain. And this brain is just one of several to be scanned.

Neuroscientists hope that by combining and exploring data gathered with this and other new instruments they’ll be able to answer fundamental questions about the brain. The quest is one of the final frontiers—and one of the greatest challenges—in science.

Imagine being able to explore the brain the way you explore a website. You might search for the corpus callosum—the stalk that connects the brain’s two hemispheres—and then flip through individual nerve fibers in it. Next, you might view networks of cells as they light up during a verbal memory test, or scroll through protein receptors embedded in the tissue.

Right now, neuroscientists can’t do that. They lack the hardware to store and access the avalanche of brain data being produced around the world. They lack the software to bridge the gaps from genes, molecules, and cells to networks, connectivity, and human behavior.

“We don’t have the faintest idea of the molecular basis for diseases like Alzheimer’s or schizophrenia or others. That’s why there are no cures,” says Paolo Carloni, director of the Institute for Computational Biomedicine at Jülich. “To make a big difference, we have to dissect [the brain] into little pieces and build it up again.”

That’s why there’s no choice but to move from small-scale investigations to large, collaborative efforts. “The brain is too complex to sit in your office and solve it alone,” says neuroscientist Katrin Amunts, who coleads the 3D-PLI project at Jülich. Neuroscientists need to make the same transition that physicists and geneticists once did—from solo practitioners to consortia—and that transformation won’t be easy.

Chip Hall of Fame: Western Digital WD1402A UART

Gordon Bell is famous for launching the PDP series of minicomputers at Digital Equipment Corp. in the 1960s. These ushered in the era of networked and interactive computing that would come to full flower with the introduction of the personal computer in the 1970s. But while minicomputers as a distinct class now belong to the history books, Bell also invented a lesser known but no less significant piece of technology that’s still in action all over the world: The universal asynchronous receiver/transmitter, or UART.

UARTs are used to let two digital devices communicate with each other by sending bits one at a time over a serial interface without bothering the device’s primary processor with the details.
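The core job is simple enough to sketch. Here, purely as an illustration of the framing a UART performs (and not the logic of any particular chip), is the common 8-N-1 format: one start bit, eight data bits sent least-significant bit first, and one stop bit:

```python
def uart_frame(byte):
    """Frame one byte for 8-N-1 transmission: a start bit (0), eight data bits
    sent least-significant bit first, and a stop bit (1)."""
    bits = [0]                                    # start bit pulls the line low
    bits += [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits += [1]                                   # stop bit returns the line high
    return bits

def uart_unframe(bits):
    """Recover the byte from a 10-bit frame, discarding the start and stop bits."""
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = uart_frame(ord('A'))        # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert uart_unframe(frame) == ord('A')
print(frame)
```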

Today, more sophisticated serial setups are available, such as the ubiquitous USB standard, but for a time UARTs ruled supreme as the way to, for example, connect modems to PCs. And the simple UART still has its place, not least as the communication method of last resort with a lot of modern network equipment.

The UART was invented because of Bell’s own need to connect a Teletype to a PDP-1, a task that required converting parallel signals into serial signals. He cooked up a circuit that used some 50 discrete components. The idea proved popular and Western Digital, a small company making calculator chips, offered to create a single-chip version of the UART. Western Digital founder Al Phillips still remembers when his vice president of engineering showed him the Rubylith sheets with the design, ready for fabrication. “I looked at it for a minute and spotted an open circuit,” Phillips says. “The VP got hysterical.” Western Digital introduced the WD1402A around 1971, and other versions soon followed.

Rigetti Launches Full-Stack Quantum Computing Service and Quantum IC Fab

Much of the ongoing quantum computing battle among tech giants such as Google and IBM has focused on developing the hardware necessary to solve impossible classical computing problems. A Berkeley-based startup looks to beat those larger rivals with a one-two combo: a fab lab designed for speedy creation of better quantum circuits and a quantum computing cloud service that provides early hands-on experience with writing and testing software.

Rigetti Computing recently unveiled its Fab-1 facility, which will enable its engineers to rapidly build new generations of quantum computing hardware based on quantum bits, or qubits. The facility can spit out entirely new designs for 3D-integrated quantum circuits within about two weeks—much faster than the months usually required for academic research teams to design and build new quantum computing chips. It’s not so much a quantum computing chip factory as it is a rapid prototyping facility for experimental designs.

“We’re fairly confident it’s the only dedicated quantum computing fab in the world,” says Andrew Bestwick, director of engineering at Rigetti Computing. “By the standards of industry, it’s still quite small and the volume is low, but it’s designed for extremely high-quality manufacturing of these quantum circuits that emphasizes speed and flexibility.”

But Rigetti is not betting on faster hardware innovation alone. It has also announced its Forest 1.0 service that enables developers to begin writing quantum software applications and simulating them on a 30-qubit quantum virtual machine. Forest 1.0 is based on Quil—a custom instruction language for hybrid quantum/classical computing—and open-source python tools intended for building and running Quil programs.
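Rigetti has not detailed the internals of its virtual machine, but the basic idea behind any quantum virtual machine can be sketched in a few lines of NumPy: keep the full vector of 2^n complex amplitudes and apply gates as matrix operations. The sketch below is generic textbook simulation, not Rigetti’s Forest or Quil implementation, and is offered only to show why roughly 30 qubits is where classical simulation starts to strain memory:

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state vector
    by building the full 2**n x 2**n operator as a Kronecker product."""
    factors = [np.eye(2)] * n
    factors[target] = gate          # qubit 0 is the leftmost tensor factor here
    full = factors[0]
    for f in factors[1:]:
        full = np.kron(full, f)
    return full @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in |00>
state = apply_gate(state, H, 0, n)              # put qubit 0 into superposition
print(np.round(np.abs(state)**2, 3))            # measurement probabilities: [0.5, 0, 0.5, 0]

# The catch is memory: a 30-qubit state vector holds 2**30 complex amplitudes,
# roughly 17 GB at double precision, which is why ~30 qubits is a practical
# ceiling for simulating a quantum machine on ordinary classical hardware.
```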

By signing up for the service, both quantum computing researchers and scientists in other fields will get the chance to begin practicing how to write and test applications that will run on future quantum computers. And it’s likely that Rigetti hopes such researchers from various academic labs or companies could end up becoming official customers.

“We’re a full stack quantum computing company,” says Madhav Thattai, Rigetti’s chief strategy officer. “That means we do everything from design and fabrication of quantum chips to packaging the architecture needed to control the chips, and then building the software so that people can write algorithms and program the system.”

Much still has to be done before quantum computing becomes a practical tool for researchers and companies. Rigetti’s approach to universal quantum computing uses silicon-based superconducting qubits that can take advantage of semiconductor manufacturing techniques common in today’s computer industry. That means engineers can more easily produce the larger arrays of qubits necessary to prove that quantum computing can outperform classical computing—a benchmark that has yet to be reached.

Google researchers hope to demonstrate such “quantum supremacy” over classical computing with a 49-qubit chip by the end of 2017. If they succeed, it would be an “incredibly exciting scientific achievement,” Bestwick says. Rigetti Computing is currently working on scaling up from 8-qubit chips.

But even that huge step forward in demonstrating the advantages of quantum computing would not result in a quantum computer that is a practical problem-solving tool. Many researchers believe that practical quantum computing requires systems to correct the quantum errors that can arise in fragile qubits. Error correction will almost certainly be necessary to achieve the future promise of 100-million-qubit systems that could perform tasks that are currently impractical, such as cracking modern cryptography keys.

Though quantum computing may seem to demand a far-off focus, Rigetti Computing is complementing its long-term strategy with a near-term strategy that can serve clients long before more capable quantum computers arrive. The quantum computing cloud service is one example of that. The startup also believes a hybrid system that combines classical computing architecture with quantum computing chips can solve many practical problems in the short term, especially in the fields of machine learning and chemistry. What’s more, says Rigetti, such hybrid classical/quantum computers can perform well even without error correction.

“We’ve uncovered a whole new class of problems that can be solved by the hybrid model,” Bestwick says. “There is still a large role for classical computing to own the shell of the problem, but we can offload parts of the problem that the quantum computing resource can handle.”
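That hybrid division of labor typically takes the shape of a variational loop: a classical optimizer proposes circuit parameters, the quantum resource evaluates a cost, and the classical side updates the parameters. Below is a heavily simplified sketch with the quantum step stubbed out by a toy cost function; the function names and the random-search optimizer are illustrative assumptions, not Rigetti’s API:

```python
import random

def quantum_expectation(params):
    """Stand-in for the part that would run on a quantum processor: prepare a
    parameterized circuit and return a measured expectation value. Here it is
    replaced by a toy cost function so the whole loop runs classically."""
    return sum((p - 0.5) ** 2 for p in params)

def hybrid_optimize(n_params=4, iterations=200, step=0.05):
    """Classical outer loop: simple random local search over the parameters,
    calling the (stubbed) quantum evaluation at every step."""
    params = [random.uniform(0, 1) for _ in range(n_params)]
    best = quantum_expectation(params)
    for _ in range(iterations):
        candidate = [p + random.uniform(-step, step) for p in params]
        cost = quantum_expectation(candidate)    # the offloaded "quantum" part
        if cost < best:
            params, best = candidate, cost
    return params, best

params, cost = hybrid_optimize()
print(f"best cost {cost:.4f} at params {[round(p, 2) for p in params]}")
```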

There is another tall hurdle that must be overcome before we’ll be able to build the quantum computing future: There are not many people in the world qualified to build a full-stack quantum computer. But Rigetti Computing is focused on being a full-stack quantum computing company that’s attractive to talented researchers and engineers who want to work at a company that is trying to take this field beyond the academic lab to solve real-world problems.

Much of Rigetti’s strategy here revolves around its Junior Quantum Engineer Program, which helps recruit and train the next generation of quantum computing engineers. The program, says Thattai, selects some of the “best undergraduates in applied physics, engineering, and computer science” to learn how to build full-stack quantum computing in the most hands-on experience possible. It’s a way to ensure that the company continues to feed the talent pipeline for the future industry.

On the client side, Rigetti is not yet ready to name its main customers. But it did confirm that it has partnered with NASA to develop potential quantum computing applications. Venture capital firms seem impressed by the startup’s near-term and long-term strategies as well, given news earlier this year that Rigetti had raised $64 million in series A and B funding led by Andreessen Horowitz and Vy Capital.

Whether it’s clients or investors, Rigetti has sought out like-minded people who believe in the startup’s model of preparing for the quantum computing future beyond waiting on the hardware.

“Those people know that when the technology crosses the precipice of being beyond what classical computing can do, it will flip very, very quickly in one generation,” Thattai says. “The winners and losers in various industries will be decided by who took advantage of quantum computing systems early.”

Qudits: The Real Future of Quantum Computing?

Instead of creating quantum computers based on qubits, which can each adopt only two possible states, scientists have now developed a microchip that can generate “qudits” that can each assume 10 or more states, potentially opening up a new way of creating incredibly powerful quantum computers, a new study finds.

Classical computers switch transistors either on or off to symbolize data as ones and zeroes. In contrast, quantum computers use quantum bits, or qubits, which, because of the bizarre nature of quantum physics, can be in a state of superposition where they simultaneously act as both 1 and 0.

The superpositions that qubits can adopt let them each help perform two calculations at once. If two qubits are quantum-mechanically linked, or entangled, they can help perform four calculations simultaneously; three qubits, eight calculations; and so on. As a result, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the known universe, solving certain problems much faster than classical computers. However, superpositions are extraordinarily fragile, making it difficult to work with multiple qubits.

Most attempts at building practical quantum computers rely on particles that serve as qubits. However, scientists have long known that they could, in principle, use qudits, each of which can occupy more than two states. A quantum computer with two 32-state qudits, for example, would be able to perform as many operations as 10 qubits while skipping the challenges inherent in working with 10 qubits together.
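The arithmetic behind that equivalence is straightforward: n qubits span 2^n basis states, while n d-level qudits span d^n, and 32^2 = 2^10 = 1,024. A quick check, including the 300-qubit comparison mentioned above:

```python
# State-space sizes: n qubits span 2**n basis states; n d-level qudits span d**n.
assert 32 ** 2 == 2 ** 10 == 1024     # two 32-state qudits match ten qubits

atoms_in_universe = 10 ** 80          # commonly cited rough estimate
print(2 ** 300 > atoms_in_universe)   # True: 300 qubits span ~2 x 10**90 states
```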

An Early Door to Cyberspace: The Computer Memory Terminal

COMMUNITY MEMORY is the name we give to this experimental information service. It is an attempt to harness the power of the computer in the service of the community. We hope to do this by providing a sort of super bulletin board where people can post notices of all sorts and can find the notices posted by others rapidly.

We are Loving Grace Cybernetics, a group of Berkeley people operating out of Resource One Inc., a non-profit collective located in Project One in S.F. Resource One grew out of the San Francisco Switchboard and has managed to obtain control of a computer (XDS 940) for use in communications.

Pictured above is one of the Community Memory teletype terminals. The first was installed at Leopold’s Records, a student-run record store in Berkeley. The terminal connected by modem to a time-sharing computer in San Francisco, which hosted the electronic bulletin-board system. Users could exchange brief messages about a wide range of topics: apartment listings, music lessons, even where to find a decent bagel. Reading the bulletin board was free, but posting a listing cost a quarter, payable by the coin-op mechanism. The terminals offered many users their first interaction with a computer.

Among the volunteers who made up Loving Grace Cybernetics and Resource One was Lee Felsenstein, who would go on to help establish the Homebrew Computer Club and who played a number of other pioneering roles in the nascent personal computing industry. For Felsenstein, Community Memory was important for, among other things, opening “the door to cyberspace.”

The Community Memory project continued into the 1980s, when the terminal pictured here was created, and eventually evolved to include the idea of creating a national network of terminals and resources. [For more on the history of bulletin-board systems, see “Social Media’s Dial-Up Ancestor: The Bulletin Board,” in IEEE Spectrum.] But the underlying purpose remained unchanged: to increase the accessibility of computing as a means for communication and information exchange.

To learn more about the Community Memory project, see the Computer History Museum’s extensive collection on the topic.

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

Ordinary Computer Users Can Keep Their Quantum Computations Secret

You may not need a quantum computer of your own to securely use quantum computing in the future. For the first time, researchers have shown how even ordinary classical computer users could remotely access quantum computing resources online while keeping their quantum computations securely hidden from the quantum computer itself.

Tech giants such as Google and IBM are racing to build universal quantum computers that could someday analyze millions of possible solutions much faster than today’s most powerful classical supercomputers. Such companies have also begun offering online access to their early quantum processors as a glimpse of how anyone could tap the power of cloud-based quantum computing. Until recently, most researchers believed that there was no way for remote users to securely hide their quantum computations from prying eyes unless they too possessed quantum computers. That assumption is now being challenged by researchers in Singapore and Australia through a new paper published in the 11 July issue of the journal Physical Review X.

“Frankly, I think we are all quite surprised that this is possible,” says Joseph Fitzsimons, a theoretical physicist at the Centre for Quantum Technologies at the National University of Singapore and principal investigator on the study. “There had been a number of results showing that it was unlikely for a classical user to be able to hide [delegated quantum computations] perfectly, and I think many of us in the field had interpreted this as evidence that nothing useful could be hidden.”

The technique for helping classical computer users hide their quantum computations relies upon a particular approach known as measurement-based quantum computing. Quantum computing’s main promise relies upon leveraging quantum bits (qubits) of information that can exist as both 1s and 0s simultaneously—unlike classical computing bits that exist as either 1 or 0. That means qubits can simultaneously represent and process many more states of information than classical computing bits.

In measurement-based quantum computing, a quantum computer puts all its qubits into a particular state of quantum entanglement so that any changes to a single qubit affect all the qubits. Next, qubits are individually measured one by one in a certain order that specifies the program being run on the quantum computer. A remote user can provide step-by-step instructions for each qubit’s measurement that encode both the input data and the program being run. Crucially, each measurement depends on the outcome of previous measurements.
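That step-wise dependence is the crucial ingredient. The toy sketch below (which uses random placeholder outcomes rather than simulating any quantum state) is only meant to show the adaptive structure: each measurement setting is adjusted according to the outcomes that came before it.

```python
import random

def measurement_based_run(program_angles):
    """Toy illustration of the adaptive structure of measurement-based quantum
    computing: qubits are measured one at a time, and each measurement setting
    is corrected using the outcomes of earlier measurements. No quantum state
    is simulated here; the outcomes are random placeholders."""
    outcomes = []
    for step, phi in enumerate(program_angles):
        # Adapt the measurement angle: flip its sign if the previous outcome was 1.
        adapted = -phi if outcomes and outcomes[-1] == 1 else phi
        outcome = random.randint(0, 1)           # stand-in for a real measurement
        print(f"step {step}: measure at angle {adapted:+.3f} rad -> outcome {outcome}")
        outcomes.append(outcome)
    return outcomes

measurement_based_run([0.0, 0.785, 1.571])       # the angle sequence encodes the "program"
```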

Fitzsimons and his colleagues figured out how to exploit this step-wise approach to quantum computing and achieve a new form of “blind quantum computation” security. They showed how remote users relying on classical computers can hide the meaning behind each step of the measurement sequence from the quantum computer performing the computation. That means the owner of the quantum computer cannot tell the role of each measurement step and which qubits were used for inputs, operations, or outputs.

The finding runs counter to previous assumptions that it was impossible to guarantee data privacy for users relying on ordinary classical computers to remotely access quantum computers. But Fitzsimons says that early feedback to the group’s work has been “very positive” because the proposed security mechanism—described as the “flow ambiguity effect”—is fairly straightforward.

Low-Cost Smart Glove Translates the Sign Language Alphabet

Researchers have made a low-cost smart glove that can translate the American Sign Language alphabet into text and send the messages via Bluetooth to a smartphone or computer. The glove can also be used to control a virtual hand.

While it could aid the deaf community, its developers say the smart glove could prove especially valuable for virtual and augmented reality, remote surgery, and defense uses like controlling bomb-defusing robots.

This isn’t the first gesture-tracking glove. There are companies pursuing similar devices that recognize gestures for computer control, à la the 2002 film Minority Report. Some researchers have also specifically developed gloves that convert sign language into text or audible speech.

What’s different about the new glove is its use of extremely low-cost, pliable materials, says developer Darren Lipomi, a nanoengineering professor at the University of California, San Diego. The components in the system, reported in the journal PLOS ONE, cost less than US $100 in total, Lipomi says. And unlike other gesture-recognizing gloves, which use MEMS sensors made of brittle materials, the soft, stretchable materials in Lipomi’s glove should make it more robust.

The key components of the new glove are flexible strain sensors made of a rubbery polymer. Lipomi and his team make the sensors by cutting narrow strips from a super-thin film of the polymer and coating them with conductive carbon paint.

Then they use a stretchy glue to attach nine sensors on the knuckles of an athletic leather glove, two on each finger and one on the thumb. Thin, stainless steel threads connect each sensor to a circuit board attached at the wrist. The board also has an accelerometer and a Bluetooth transmitter.
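Turning those nine analog readings into letters is, at heart, a pattern-matching step. The sketch below shows one simple way it could be done, by thresholding each knuckle sensor into bent or straight and looking the resulting nine-bit code up in a table; the threshold and the two-entry letter table are illustrative assumptions, not the encoding from the PLOS ONE paper:

```python
def knuckle_code(sensor_readings, threshold=0.5):
    """Convert nine strain-sensor readings into a binary string:
    '1' for a bent knuckle (high strain), '0' for a straight one."""
    return ''.join('1' if r > threshold else '0' for r in sensor_readings)

# Hypothetical lookup table from nine-knuckle codes to letters (illustrative only).
LETTER_TABLE = {
    '111111111': 'A',   # fist-like pose: every knuckle bent
    '000000000': 'B',   # flat-hand pose: every knuckle straight
}

def decode_letter(sensor_readings):
    """Map a hand pose to a letter, or '?' if the pattern is unrecognized."""
    return LETTER_TABLE.get(knuckle_code(sensor_readings), '?')

print(decode_letter([0.9] * 9))   # -> 'A'
print(decode_letter([0.1] * 9))   # -> 'B'
```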

Complex Biological Computer Commands Living Cells

Researchers have developed a biological computer that functions inside living bacterial cells and tells them what to do, according to a report published today in Nature. Composed of ribonucleic acid, or RNA, the new “ribocomputer” can survive in the bacterium E. coli and respond to a dozen inputs, making it the most complex biological computer to date.

“We’ve developed a way to control how cells behave,” says Alexander Green, an engineer at The Biodesign Institute at Arizona State University, who developed the technology with colleagues at Harvard’s Wyss Institute for Biologically Inspired Engineering. The cells go about their normal business, replicating and sensing what’s going on in their environments, “but they’ve also got this layer of computational machinery that we’ve instructed them to synthesize,” he says.

The biological circuit works just like a digital one: It receives an input and makes a logic-based decision, using AND, OR, and NOT operations. But instead of the inputs and outputs being voltage signals, they are the presence or absence of specific chemicals or proteins.

The process begins with the design of a DNA strand that codes for all the logic the system will need. The researchers insert the synthesized DNA into E. coli bacteria as part of a plasmid—a ring of DNA that can replicate as it floats around in the cell.

The DNA serves as a template for the biological computer’s machinery. The cell’s molecular machinery translates the DNA into RNA, essentially copying the DNA code onto a different molecule for use by the cell. RNA links up with a cell’s ribosome and instructs it to produce a protein specified in the RNA’s code.

Here’s where the system behaves like a computer, rather than just a genetically engineered organism: The RNA only does its job when it receives an input that activates it. That’s because the engineered RNA contains codes not just for a protein, but also for logic functions. The logic portions must receive the right inputs in order to activate the RNA in a way that allows the ribosome to use it to produce the circuit’s output—in this case a protein that glows.
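Stripped of the biochemistry, the decision the ribocomputer makes is ordinary Boolean logic over which trigger molecules are present. The toy model below invents a small two-gate circuit for illustration; it is not the twelve-input circuit reported in the Nature paper:

```python
def ribocircuit(input_a, input_b, input_c):
    """Toy model of a ribocomputer decision: the reporter protein is produced
    only when (A AND B) OR (NOT C) is true. Each input is a boolean standing
    for the presence (True) or absence (False) of a trigger molecule."""
    gate_and = input_a and input_b
    gate_not = not input_c
    return gate_and or gate_not      # True means the cell makes the glowing protein

# The cell "glows" only for input combinations that satisfy the logic.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            print(f"A={a!s:5} B={b!s:5} C={c!s:5} -> glow={ribocircuit(a, b, c)}")
```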