
Opinion: Raspberry Pi Merger With CoderDojo Not All Sweetness and Light

This past Friday, the Raspberry Pi Foundation and the CoderDojo Foundation became one. The Raspberry Pi Foundation described it as “a merger that will give many more young people all over the world new opportunities to learn how to be creative with technology.” Maybe. Or maybe not. Before I describe why I’m a bit skeptical, let me first take a moment to explain more about what these two entities are.

The Raspberry Pi Foundation is a charitable organization created in the U.K. in 2009. Its one-liner mission statement says it works to “put the power of digital making into the hands of people all over the world.” In addition to designing and manufacturing an amazingly popular line of inexpensive single-board computers—the Raspberry Pi—the Foundation has also worked very hard at providing educational resources.

The CoderDojo Foundation is an outgrowth of a volunteer-led, community-based programming club established in Cork, Ireland, in 2011. That model was later cloned in many other places and can now be found in 63 countries, where local coding clubs operate under the CoderDojo banner.

So both organizations clearly share a keen interest in having young people learn about computers and coding. Indeed, the Raspberry Pi Foundation had earlier merged with Code Club, yet another U.K. organization dedicated to helping young people learn to program computers. With all this solidarity of purpose, it would seem only natural for such entities to team up, or so you might think. Curmudgeon as I am, though, I’d like to share a different viewpoint.

The issue is that, well, I don’t think that the Raspberry Pi is a particularly good vehicle to teach young folks to code. I know that statement will be considered blasphemy in some circles, but I stand by it.

The problem is that for students just getting exposed to coding, the Raspberry Pi is too complicated to use as a teaching tool and too limited to use as a practical tool. If you want to learn physical computing so that you can build something that interacts with sensors and actuators, better to use an 8-bit Arduino. And if you want to learn how to write software, better to do your coding on a normal laptop.

That’s not to say that the Raspberry Pi isn’t a cool gizmo or that some young hackers won’t benefit from using one to build projects—surely that’s true. It’s just not the right place to start in general. Kids are overwhelmingly used to working in macOS or Windows. Do they really need to switch to Linux to learn to code? Of course not. And that switch just adds a thick layer of complication and expense.

My opinions here are mostly shaped by my (albeit limited) experiences trying to help young folks learn to code, which I’ve been doing during the summer for the past few years as the organizer of a local CoderDojo workshop. I’ve brought in a Raspberry Pi on occasion and shown kids some interesting things you can do with one, for example, turning a Kindle into a cycling computer. But the functionality of the Raspberry Pi doesn’t impress these kids, who just compare it with their smartphones. And the inner workings of the RasPi are as inaccessible to them as the inner workings of their smartphones. So it’s not like you can use a RasPi to help them grasp the basics of digital electronics.

The one experience I had using the Raspberry Pi to teach coding was disastrous. While there were multiple reasons for things not going well, one was that the organizer wanted to have the kids “build their own computers,” which amounted to putting a Raspberry Pi into a case and attaching it to a diminutive keyboard and screen. Yes, the kids figured out how to do that quickly enough, but doing so provided them with a computer that was ill suited for much of anything, especially for learning to code.

So I worry that the recent merger just glosses over the fact that teaching kids to code and putting awesome single-board computers into the hands of makers are really two different exercises. I’m sure Eben Upton and lots of professional educators will disagree with me. But as I see things, channeling fledgling coders into using a Raspberry Pi to learn to program computers is counterproductive, despite surface indications that this is what we should be doing. And to my mind, the recent merger only promises to spread the misperception.

In the Future, Machines Will Borrow Our Brain’s Best Tricks

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed testing whether machines can think by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, drawing 20 megawatts, can’t come close to matching the brain.
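For those who like to check the arithmetic, here is that comparison as a minimal Python sketch, using only the round figures quoted above:

```python
# Back-of-the-envelope comparison of the brain's power budget with a
# top supercomputer's, using the round figures quoted in the text.

basal_metabolism_w = 100   # average human basal metabolism, in watts
brain_fraction = 0.20      # share of metabolism consumed by the brain
supercomputer_w = 20e6     # power draw of a top supercomputer, in watts

brain_w = basal_metabolism_w * brain_fraction
print(f"Brain power budget: {brain_w:.0f} W")                           # 20 W
print(f"Supercomputer/brain ratio: {supercomputer_w / brain_w:,.0f}x")  # 1,000,000x
```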

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.
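To make the idea of a balancing feedback loop concrete, here is a minimal sketch; the set point and gain are invented for illustration, not drawn from neuroscience data. An activity level that starts far from its target gets nudged back toward it at every step:

```python
# A toy negative-feedback loop of the kind described above: activity
# above the set point is damped, activity below it is boosted. The
# numbers are illustrative only.

set_point = 1.0    # target activity level
gain = 0.5         # strength of the corrective feedback
activity = 5.0     # start far from equilibrium

for step in range(10):
    error = activity - set_point
    activity -= gain * error   # feedback pushes activity toward the set point
    print(f"step {step}: activity = {activity:.3f}")
```

After a handful of steps the activity settles at the set point, which is the essence of how such loops keep a system in balance.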

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types with complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.
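The charge-and-spike behavior described here is often abstracted as a leaky integrate-and-fire model. The sketch below is that standard textbook abstraction, not a model from any of the research efforts mentioned, and all of its constants are arbitrary:

```python
# Leaky integrate-and-fire sketch: input current charges the membrane,
# a leak drains it, and crossing the threshold produces a spike.

v = 0.0          # membrane potential, arbitrary units
threshold = 1.0  # potential at which a spike fires
leak = 0.1       # fraction of charge lost per time step
current = 0.15   # steady input current

for t in range(40):
    v += current - leak * v   # input charges the membrane; the leak drains it
    if v >= threshold:
        print(f"t={t}: spike!")
        v = 0.0               # reset after the spike
```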

U.S. Slips in New TOP500 Supercomputer Ranking

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

Tapwrit was the second favorite at Belmont, and Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having enjoyed that first-place ranking since June 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China. Tianhe-2, capable of almost 34 petaflops, was developed by China’s National University of Defense Technology (NUDT), is deployed at the National Supercomputer Center in Guangzhou, and still enjoys the number-two position on the list.

More of a surprise, and perhaps more of a disappointment for some, is that the highest-ranking U.S. contender, the Department of Energy’s Titan supercomputer (17.6 petaflops) housed at Oak Ridge National Laboratory, was edged out of the third position by an upgraded Swiss supercomputer called Piz Daint (19.6 petaflops), installed at the Swiss National Supercomputing Center, part of the Swiss Federal Institute of Technology (ETH) in Zurich.

Not since 1996 has the United States been absent from the first three slots of the TOP500 list. But before we go too far in lamenting the sunset of U.S. supercomputing prowess, we should pause for a moment to consider that the computer that bumped Titan from the number-three position was built by Cray and is stuffed with Intel processors and NVIDIA GPUs, all creations of U.S. companies.

Even the second-ranking Tianhe-2 is based on Intel processors and coprocessors. It’s only the TaihuLight that is truly a Chinese machine, being based on the SW26010, a 260-core processor designed by the National High Performance Integrated Circuit Design Center in Shanghai. And U.S. supercomputers hold five of the 10 highest-ranking positions on the new TOP500 list.

Still, national rivalries seem to have locked the United States into a supercomputer arms race with China, with both nations vying to be the first to reach the exascale threshold—that is, to have a computer that can perform 10¹⁸ floating-point operations per second. China hopes to do so by amassing largely conventional hardware and is slated to have a prototype system ready around the end of this year. The United States, on the other hand, is looking to tackle the problems that come with scaling to that level using novel approaches, which require more research before even a prototype machine can be built. Just last week, the U.S. Department of Energy announced that it was awarding Advanced Micro Devices, Cray, Hewlett Packard, IBM, Intel, and NVIDIA US $258 million to support research toward building an exascale supercomputer. Who will get there first is, of course, up for grabs. But one thing’s for sure: It’ll be a horse race worth watching.
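To put that threshold in perspective, here is a quick Python sketch of the gap between the current TOP500 leader and exascale, using the 93-petaflop benchmark figure quoted earlier:

```python
# Distance from the Sunway TaihuLight's benchmarked speed to the
# exascale threshold of 10**18 floating-point operations per second.

exaflops = 1e18      # one exaflop, in flops
taihulight = 93e15   # TaihuLight's roughly 93 petaflops

print(f"TaihuLight sits at {taihulight / exaflops:.0%} of an exaflop")
print(f"A further {exaflops / taihulight:.1f}x speedup is needed")
```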

Global Race Toward Exascale Will Drive Supercomputing, AI to Masses

For the first time in 21 years, the United States can no longer claim even the bronze medal. With this week’s release of the latest TOP500 supercomputer ranking, the three fastest supercomputers in the world are now run by China (with both the first- and second-place finishers) and Switzerland. And while the supercomputer horse race is spectacle enough unto itself, a new report on the supercomputer industry highlights broader trends behind both the latest and the last few years of TOP500 rankings.

The report, commissioned last year by the Japanese national science agency Riken, outlines a worldwide race toward exascale computers in which the U.S. sees R&D spending and supercomputer talent pools shrink, Europe jumps into the breach with increased funding, and China pushes hard to become the new global leader, despite a still small user and industry base ready to use the world’s most powerful supercomputers.

Steve Conway, report co-author and senior vice president of research at Hyperion, says the industry trend in high-performance computing is toward laying groundwork for pervasive AI and big data applications like autonomous cars and machine learning. And unlike more specialized supercomputer applications from years past, the workloads of tomorrow’s supercomputers will likely be mainstream and even consumer-facing applications.

“Ten years ago the rationale for spending on supercomputers was primarily two things: national security and scientific leadership, and I think there are a lot of people who still think that supercomputers are limited to problems like will a proton go left or right,” he says. “But in fact, there’s been strong recognition [of the connections] between supercomputing leadership and industrial leadership.”

“With the rise of big data, high-performance computing has moved to the forefront of research in things like autonomous vehicle design, precision medicine, deep learning, and AI,” Conway says. “And you don’t have to ask supercomputing companies if this is true. Ask Google and Baidu. There’s a reason why Facebook has already bought 26 supercomputers.”

As the 72-page Hyperion report notes, “IDC believes that countries that fail to fund development of these future leadership-class supercomputers run a high risk of falling behind other highly developed countries in scientific innovation, with later harmful consequences for their national economies.” (Its authors wrote the report in 2016 as part of the industry research group IDC; this year they formed the spin-off research firm Hyperion.)

Conway says that solutions to problems plaguing HPC systems today will be found in the consumer electronics and industry applications of the future. So while operating massively parallel computers with millions of cores may today be a problem facing only the world’s fastest and second-fastest supercomputers—China’s Sunway TaihuLight and Tianhe-2, running on 10.6 million and 3.1 million cores, respectively—that fact won’t hold true forever. And because China is the only country tackling this problem now, it is more likely to develop the technology first: technology that the world will want when cloud computing with millions of cores approaches the mainstream.

The same logic applies to optimizing the ultra-fast data rates that today’s top HPC systems use and to trimming the megawatts of electricity they consume. And as the world’s supercomputers approach the exascale, that is, the 1-exaflop or 1,000-petaflop mark, new challenges will no doubt arise too.

So, for instance, the report says that rapid shutdown and power-up of cores not in use will be one trick supercomputer designers use to trim back some of their systems’ massive power budgets. High storage density, in the 100-petabyte range, will also become paramount to house the big datasets that these supercomputers consume.

“You could build an exascale system today,” Conway says. “But it would take well over 100 megawatts, which nobody’s going to supply, because that’s over a 100 million dollar electricity bill. So it has to get the electricity usage under control. Everybody’s trying to get it in the 20 to 30 megawatts range. And it has to be dense. Much denser than any computing today. It’s got to fit inside some kind of building. You don’t want the building to be 10 miles long. And also the denser the machine, the faster the machine is going to be too.”
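Conway’s numbers are easy to sanity-check. The sketch below assumes an industrial electricity rate of roughly $0.10 per kilowatt-hour; that rate is an assumption for illustration, while the 100-megawatt figure and the 20-to-30-megawatt goal come from the interview:

```python
# Rough annual electricity cost for an always-on machine at the power
# levels Conway mentions. The $0.10/kWh rate is an assumed round
# figure; actual industrial rates vary by region and contract.

RATE_USD_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365

def annual_bill(megawatts):
    """Annual electricity cost, in U.S. dollars, for a constant load."""
    return megawatts * 1_000 * HOURS_PER_YEAR * RATE_USD_PER_KWH

print(f"100 MW machine: ${annual_bill(100) / 1e6:.0f} million per year")  # ~$88 million
print(f" 25 MW machine: ${annual_bill(25) / 1e6:.0f} million per year")   # ~$22 million
```

At that assumed rate, a 100-megawatt system lands in the neighborhood of the $100 million annual bill Conway describes, while hitting the 20-to-30-megawatt target cuts the cost by roughly three-quarters.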

Conway predicts that these and other challenges will be surmounted, and the first exaflop supercomputers will appear on the Top500 list around 2021, while exaflop supercomputing could become commonplace by 2023.