Basic Components of a Sonobuoy

A sonobuoy is a device that collects and transmits data from the world’s oceans. The technology dates back to the First World War, when it was developed by the military to track the movements of passing vessels. Today, a wide range of industries use the device’s data-gathering and transmitting capabilities, from military groups, to oil and gas exploration, to scientific organizations. Sonobuoys are used to monitor many kinds of ocean activity. The movement patterns of marine life, fluctuating ocean temperatures, shifting currents, ship traffic, and seafloor terrain are just a few of the many things a sonobuoy can be used to monitor or track.

1. Flotation device

Every sonobuoy has some form of flotation device. This allows it to be deployed at sea, where it can float for a period of time, recording and transmitting data without sinking to the bottom of the ocean. The top of the buoy floats while the bottom is submerged so that it can gather information. In many cases, the other components are connected by cables that hang freely below the float. While the idea is simple, a piece of equipment attached to a float, the finished product can be highly technical.

2. Transmitter

The transmitter is the part of the sonobuoy that sends the data it gathers to another computer for storage and processing. It is a critical component, because many sonobuoys are lost in harsh conditions, making recovery difficult or impossible. By transmitting the data as it is collected, the team processing it is far more likely to get the information they need.

3. Sonar equipment

Not all sonobuoys carry sonar equipment, but it is a common component. Those that have it are called active; those that do not are called passive. An active sonobuoy emits a ping and then measures the time it takes for the sound waves to bounce off an object and return. This helps the operator characterize the environment into which the sonobuoy was deployed. The technique has produced detailed maps of previously unknown parts of the ocean.
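For illustration only (this calculation does not appear in the original text), the core of an active ping measurement is a simple time-of-flight estimate. The sketch below assumes a nominal speed of sound in seawater of about 1,500 m/s; in practice that value varies with temperature, salinity, and depth.

```python
# Illustrative sketch: estimating range from an active sonar ping.
# Assumes a nominal sound speed of ~1500 m/s in seawater (an approximation;
# the real value varies with temperature, salinity, and depth).

SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second

def range_from_ping(round_trip_seconds: float) -> float:
    """Return the one-way distance (metres) to the object that reflected the ping.

    The sound travels out and back, so the one-way range is half the
    round-trip distance covered.
    """
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2.0

# Example: an echo received 0.4 s after the ping implies a target ~300 m away.
print(range_from_ping(0.4))  # 300.0
```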

4. Rugged construction

Sonobuoys are used in one of the harshest and most difficult environments on earth: the middle of the ocean. Severe weather, animal encounters, corrosion, salt, and violent motion are all routine events for the average buoy. They have to withstand the elements long enough to gather the data for which they were deployed. It is important to make sure the product you purchase has been thoroughly tested for durability and reliability. The quality of the data collected depends on the quality of the tracking equipment.

Hiring Computer Troubleshooters

There are many providers of computer repair services, but you should hire the best IT experts to fix the problems affecting your business computers. There are several things to look for when hiring technicians to resolve your IT issues.

They include:

Knowledge: The best technicians to repair your business computers should be knowledgeable professionals who have undergone formal training and gained substantial experience fixing a wide range of computer problems.

Concern for your business: Hire experts who care about the success of your business. The best technicians understand the impact a failure of your IT infrastructure can have, and they should therefore fix your computer problems efficiently and effectively.

Dependability: Look for experts you can rely on to fix any computer problem. This also means the technicians should be available whenever you need them, respond as quickly as promised, and resolve the issue professionally.

Comprehensive solutions: Many interconnected threats face your business IT infrastructure, including viruses and malware that can interfere with its proper operation. The best IT experts should provide comprehensive solutions to these problems.

It is very important to consider the professional qualifications of the IT expert you hire to fix your business computer problems. Only IT specialists with proper professional training can provide the best small-business IT support, so check a technician’s level of training before engaging their services.

Also consider a technician’s experience before hiring them. The best local technician to fix your computer problems should be experienced in resolving many different kinds of issues. A specialist who has been offering computer repair services in your area will also have built a solid reputation. When you hire a reputable expert, you can be confident of quality service, because there are few problems a trained and experienced professional cannot fix.

The Advantages of Electronic Recycling

The world today is full of devices such as computers, TVs, mobile phones, and tablets, and when they are no longer usable, something must be done with them. According to the United Nations Environment Programme, roughly forty-nine million metric tons of electronic waste are generated worldwide every year, of which roughly three million metric tons are produced in the United States alone. When a person or business chooses electronic recycling, they enable these devices to be reused, turning waste material into new products. Electronic recycling has many advantages.

It makes for a cleaner environment. Most electronic waste is dumped or sent to landfills, which are becoming increasingly scarce. Dumping takes up a great deal of space and spreads harmful toxins by contaminating the groundwater, creating a hazardous situation for humans, animals, and plant life. Recycling your electronics saves space in landfills and prevents other areas of land from becoming dumping grounds. It also keeps those toxins from causing environmental pollution.

There are also health benefits to electronic recycling. Electronic products are made from various gases and plastics along with harmful elements such as lead. When individuals and businesses simply dump their devices, these chemicals are released into the air and the soil. They pose a threat not only to the health of the people doing the dumping but also to others who live near the dumping grounds and landfills.

Many devices, particularly computer hardware, contain numerous components that can be reused, some without any processing at all. Some large companies even run their own recycling facilities, where the reusable material is put to good use after it is processed and the rest is disposed of properly; Apple is one such company. As a result, the recovered parts do not have to be manufactured again, saving the energy and resources needed to make new ones.

One of the greatest advantages of electronic recycling is that if products are recycled, companies will not have to manufacture most of the components in new devices from scratch. The downside is that this could cost some jobs. But recycling devices reduces production costs, making electronics more affordable for everyone, which ultimately benefits the economy.

To enjoy these advantages, take your devices to legitimate recycling centers run by professionals who know how to recycle them using the proper safety measures. The recycling center should be government approved.

Fujitsu Liquid Immersion Not All Hot Air When It Comes to Cooling Data Centers

Given the prodigious heat generated by the trillions of transistors switching on and off 24 hours a day in data centers, air conditioning has become a major operating expense. Consequently, engineers have come up with several imaginative ways to ameliorate such costs, which can amount to a third or more of data center operations.

One favored method is to set up hot and cold aisles of moving air through a center to achieve maximum cooling efficiency. Meanwhile, Facebook has chosen to set up a data center in Lulea, northern Sweden on the fringe of the Arctic Circle to take advantage of the natural cold conditions there; and Microsoft engineers have seriously proposed putting server farms under water.

Fujitsu, on the other hand, is preparing to launch a less exotic solution: a liquid immersion cooling system it says will usher in a “next generation of ultra-dense data centers.”

Though not the first company to come up with the idea, the Japanese computer giant says it’s used its long experience in the field to come up with a design that accommodates both easy maintenance and standard servers. Maintenance is as straightforward to perform as on air-cooled systems, for it does not require gloves, protective clothing or special training, while cables are readily accessible.

Given that liquids are denser than air, Fujitsu says that immersing servers in its new system’s bath of inert fluid greatly improves the cooling process and eliminates the need for server fans. This, in turn, results in a cooling system consuming 40 percent less power compared to that of data centers relying on traditional air-cooling technology. An added bonus is that the fanless operation is virtually silent.
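As a rough back-of-the-envelope check of what that 40 percent figure could mean for a whole facility, the sketch below combines it with the article’s earlier observation that cooling can amount to a third or more of data center operations. The 1,000-kilowatt IT load is an arbitrary assumption for illustration; none of these numbers are Fujitsu’s beyond the 40 percent claim.

```python
# Back-of-the-envelope sketch: effect of a 40% cut in cooling power on total
# facility power. Assumes cooling is one third of total facility power
# (consistent with the article's "a third or more" remark) and a 1 MW IT load
# chosen purely for illustration.

it_load_kw = 1000.0
air_cooling_kw = it_load_kw * 0.5            # cooling = 1/3 of total facility power
immersion_cooling_kw = air_cooling_kw * 0.6  # 40% less cooling power (article claim)

total_air = it_load_kw + air_cooling_kw
total_immersion = it_load_kw + immersion_cooling_kw

print(f"Air-cooled:       {total_air:.0f} kW total (PUE ~ {total_air / it_load_kw:.2f})")
print(f"Immersion-cooled: {total_immersion:.0f} kW total (PUE ~ {total_immersion / it_load_kw:.2f})")
# Prints roughly 1500 kW (PUE ~1.50) versus 1300 kW (PUE ~1.30).
```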

“It also reduces the floor space needed by 50 percent,” says Takashi Yamamoto, Vice President, Mechanical & Thermal Engineering Div., Advanced System R&D Unit, Fujitsu. Yamamoto showed off a demonstration system at the company’s annual technology forum held in Tokyo this week.

A cooling bath measures 90 cm x 72 cm x 81 cm (width x depth x height), while the rack it fits into measures 110 cm x 78 cm x 175 cm. The coolant used is an electrically insulating fluorocarbon fluid manufactured by 3M called Fluorinert.

A bath has a horizontal 16-rack unit space. Two baths in their racks can be stacked vertically one on top of the other, and dedicated racks holding eight baths in two rows of four are available.

“There is no limitation on the number of stacks that can be used,” says Yamamoto. He also points out that a bath’s dimensions are compatible with regular 48-centimeter rack-width specifications. So any air-cooled rack-mountable servers can be used, as long as they meet the depth requirements of the bath and unnecessary devices like fans are removed.

The scheme employs a closed-bath, single-phase system in which the servers are directly submerged in the dielectric fluid. A lid covers the bath to prevent evaporation.

A coolant distribution unit (CDU) incorporates a pump, a heat exchanger, and a monitoring module. The fluid captures the heat generated by the servers’ HDDs, SSDs, or other devices, and transfers it via the CDU to the heat exchanger, where it is expelled outside the data center by means of a water loop and cooling tower or chilling unit. The fluid is then pumped back into the bath after filtering. The monitoring system warns maintenance engineers of any abnormal conditions via a network.
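As a purely hypothetical illustration of the kind of check such a monitoring module might perform, the sketch below flags out-of-range coolant temperature and pump flow readings. The thresholds, parameter names, and alerting mechanism are my assumptions, not details of Fujitsu’s design.

```python
# Hypothetical CDU monitoring check: thresholds and names are assumptions for
# illustration only, not Fujitsu specifications.

import logging

COOLANT_TEMP_LIMIT_C = 45.0  # assumed upper limit on bath coolant temperature
FLOW_RATE_MIN_LPM = 20.0     # assumed minimum acceptable pump flow, litres/minute

def check_cdu(coolant_temp_c: float, flow_rate_lpm: float) -> bool:
    """Log a warning (e.g., for delivery over the network) if readings look abnormal."""
    ok = True
    if coolant_temp_c > COOLANT_TEMP_LIMIT_C:
        logging.warning("Coolant temperature high: %.1f C", coolant_temp_c)
        ok = False
    if flow_rate_lpm < FLOW_RATE_MIN_LPM:
        logging.warning("Pump flow rate low: %.1f L/min", flow_rate_lpm)
        ok = False
    return ok

check_cdu(coolant_temp_c=47.2, flow_rate_lpm=18.5)  # would emit two warnings
```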

Yamamoto says because the fluid protects the servers, the system can be deployed anywhere, no matter how harsh the conditions or environment may be.

No word on a price tag yet, but Yamamoto reveals some companies are already evaluating the system. Fujitsu expects to ship the product later this year.

Editor’s note: This post was updated 19 May 2017 to correctly attribute quotes from Takashi Yamamoto, Vice President, Mechanical & Thermal Engineering Div., Advanced System R&D Unit, Fujitsu. Previously, Yamamoto’s quotes were incorrectly attributed to Ippei Takami, chief designer in Fujitsu’s Design Strategy Division in Kawasaki near Tokyo. Spectrum regrets the error.

Google Plans to Demonstrate the Supremacy of Quantum Computing

Quantum computers have long held the promise of performing certain calculations that are impossible—or at least, entirely impractical—for even the most powerful conventional computers to perform. Now, researchers at a Google laboratory in Goleta, Calif., may finally be on the cusp of proving it, using the same kinds of quantum bits, or qubits, that one day could make up large-scale quantum machines.

By the end of this year, the team aims to increase the number of superconducting qubits it builds on integrated circuits to create a 7-by-7 array. With this quantum IC, the Google researchers aim to perform operations at the edge of what’s possible with even the best supercomputers, and so demonstrate “quantum supremacy.”

“We’ve been talking about, for many years now, how a quantum processor could be powerful because of the way that quantum mechanics works, but we want to specifically demonstrate it,” says team member John Martinis, a professor at the University of California, Santa Barbara, who joined Google in 2014.

A system size of 49 superconducting qubits is still far away from what physicists think will be needed to perform the sorts of computations that have long motivated quantum computing research. One of those is Shor’s algorithm, a computational scheme that would enable a quantum computer to quickly factor very large numbers and thus crack one of the foundational components of modern cryptography. In a recent commentary in Nature, Martinis and colleagues estimated that a 100-million-qubit system would be needed to factor a 2,000-bit number—a not-uncommon public key length—in one day. Most of those qubits would be used to create the special quantum states that would be needed to perform the computation and to correct errors, creating a mere thousand or so stable “logical qubits” from thousands of less stable physical components, Martinis says.

There will be no such extra infrastructure in this 49-qubit system, which means a different computation must be performed to establish supremacy. To demonstrate the chip’s superiority over conventional computers, the Google team will execute operations on the array that will cause it to evolve chaotically and produce what looks like a random output. Classical machines can simulate this output for smaller systems. In April, for example, Lawrence Berkeley National Laboratory reported that its 29-petaflop supercomputer, Cori, had simulated the output of 45 qubits. But 49 qubits would push—if not exceed—the limits of conventional supercomputers.
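One way to see why 49 qubits sits near the edge of classical simulation is to count the memory a brute-force state-vector simulation would need: 2^n complex amplitudes for n qubits. The sketch below assumes double-precision complex numbers (16 bytes each); real supercomputer simulations such as the Cori run use cleverer techniques, so treat these figures as rough upper bounds rather than a description of that work.

```python
# Memory needed to hold a full n-qubit state vector, assuming one
# double-precision complex amplitude (16 bytes) per basis state.
# Rough upper bound; practical simulators use tricks to reduce this.

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (45, 49):
    tib = state_vector_bytes(n) / 2**40  # convert bytes to tebibytes
    print(f"{n} qubits: {tib:,.0f} TiB")

# 45 qubits: 512 TiB; 49 qubits: 8,192 TiB (8 PiB) -- beyond the memory of any
# single machine and straining even the largest supercomputers.
```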

This computation does not as yet have a clear practical application. But Martinis says there are reasons beyond demonstrating quantum supremacy to pursue this approach. The qubits used to make the 49-qubit array can also be used to make larger “universal” quantum systems with error correction, the sort that could do things like decryption, so the chip should provide useful validation data.

Brain Computers: Bad at Math, Good at Everything Else

Painful exercises in basic arithmetic are a vivid part of our elementary school memories. A multiplication like 3,752 × 6,901 carried out with just pencil and paper for assistance may well take up to a minute. Of course, today, with a cellphone always at hand, we can quickly check that the result of our little exercise is 25,892,552. Indeed, the processors in modern cellphones can together carry out more than 100 billion such operations per second. What’s more, the chips consume just a few watts of power, making them vastly more efficient than our slow brains, which consume about 20 watts and need significantly more time to achieve the same result.

Of course, the brain didn’t evolve to perform arithmetic. So it does that rather badly. But it excels at processing a continuous stream of information from our surroundings. And it acts on that information—sometimes far more rapidly than we’re aware of. No matter how much energy a conventional computer consumes, it will struggle with feats the brain finds easy, such as understanding language and running up a flight of stairs.

If we could create machines with the computational capabilities and energy efficiency of the brain, it would be a game changer. Robots would be able to move masterfully through the physical world and communicate with us in plain language. Large-scale systems could rapidly harvest large volumes of data from business, science, medicine, or government to detect novel patterns, discover causal relationships, or make predictions. Intelligent mobile applications like Siri or Cortana would rely less on the cloud. The same technology could also lead to low-power devices that can support our senses, deliver drugs, and emulate nerve signals to compensate for organ damage or paralysis.

But isn’t it much too early for such a bold attempt? Isn’t our knowledge of the brain far too limited to begin building technologies based on its operation? I believe that emulating even very basic features of neural circuits could give many commercially relevant applications a remarkable boost. How faithfully computers will have to mimic biological detail to approach the brain’s level of performance remains an open question. But today’s brain-inspired, or neuromorphic, systems will be important research tools for answering it.

A key feature of conventional computers is the physical separation of memory, which stores data and instructions, from logic, which processes that information. The brain holds no such distinction. Computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). Most of what the brain does is determined by those connections and by the manner in which each neuron responds to incoming signals from other neurons.

When we talk about the extraordinary capabilities of the human brain, we are usually referring to just the latest addition in the long evolutionary process that constructed it: the neocortex. This thin, highly folded layer forms the outer shell of our brains and carries out a diverse set of tasks that includes processing sensory inputs, motor control, memory, and learning. This great range of abilities is accomplished with a rather uniform structure: six horizontal layers and a million 500-micrometer-wide vertical columns all built from neurons, which integrate and distribute electrically coded information along tendrils that extend from them—the dendrites and axons.

Like all the cells in the human body, a neuron normally has an electric potential of about –70 millivolts between its interior and exterior. This membrane voltage changes when a neuron receives signals from other neurons connected to it. And if the membrane voltage rises to a critical threshold, it forms a voltage pulse, or spike, with a duration of a few milliseconds and a value of about 40 mV. This spike propagates along the neuron’s axon until it reaches a synapse, the complex biochemical structure that connects the axon of one neuron to a dendrite of another. If the spike meets certain criteria, the synapse transforms it into another voltage pulse that travels down the branching dendrite structure of the receiving neuron and contributes either positively or negatively to its cell membrane voltage.
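The behavior described above is, in essence, what simple “integrate and fire” neuron models capture. The sketch below is a minimal leaky integrate-and-fire simulation using the roughly –70 millivolt resting potential mentioned in the text; the threshold, time constant, and input drive are illustrative assumptions rather than physiological measurements.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, is pushed up by input, and a spike is recorded (and the voltage reset)
# when it crosses a threshold. Parameter values are illustrative assumptions.

V_REST = -70.0       # mV, resting membrane potential (as in the text)
V_THRESHOLD = -55.0  # mV, assumed firing threshold
TAU_MS = 20.0        # ms, assumed membrane time constant
DT_MS = 1.0          # ms, simulation time step

def simulate(input_mv_per_ms: float, steps: int = 100) -> list[int]:
    v = V_REST
    spike_times = []
    for t in range(steps):
        # Leak toward rest plus the drive from incoming signals.
        v += DT_MS * ((V_REST - v) / TAU_MS + input_mv_per_ms)
        if v >= V_THRESHOLD:
            spike_times.append(t)  # record a spike...
            v = V_REST             # ...and reset the membrane voltage
    return spike_times

print(simulate(input_mv_per_ms=1.0))  # times (ms) at which the model neuron fires
```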

Connectivity is a crucial feature of the brain. The pyramidal cell, for example—a particularly important kind of cell in the human neocortex—contains about 30,000 synapses and so 30,000 inputs from other neurons. And the brain is constantly adapting. Neuron and synapse properties—and even the network structure itself—are always changing, driven mostly by sensory input and feedback from the environment.

General-purpose computers these days are digital rather than analog, but the brain is not as easy to categorize. Neurons accumulate electric charge just as capacitors in electronic circuits do. That is clearly an analog process. But the brain also uses spikes as units of information, and these are fundamentally binary: At any one place and time, there is either a spike or there is not. Electronically speaking, the brain is a mixed-signal system, with local analog computing and binary-spike communication. This mix of analog and digital helps the brain overcome transmission losses. Because the spike essentially has a value of either 0 or 1, it can travel a long distance without losing that basic information; it is also regenerated when it reaches the next neuron in the network.

Another crucial difference between brains and computers is that the brain accomplishes all its information processing without a central clock to synchronize it. Although we observe synchronization events—brain waves—they are self-organized, emergent products of neural networks. Interestingly, modern computing has started to adopt brainlike asynchronicity, to help speed up computation by performing operations in parallel. But the degree and the purpose of parallelism in the two systems are vastly different.

The Benefits of Building an Artificial Brain

In the mid-1940s, a few brilliant people drew up the basic blueprints of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which operated on stored instructions and data. Later inventions—transistors, integrated circuits, solid-state memory—would supercharge this concept into the greatest tool ever created by humankind.

So here we are, with machines that can churn through tens of quadrillions of operations per second. We have voice-recognition-enabled assistants in our phones and homes. Computers routinely thrash us in our ancient games. And yet we still don’t have what we want: machines that can communicate easily with us, understand and anticipate our needs deeply and unerringly, and reliably navigate our world.

Now, as Moore’s Law seems to be starting some sort of long goodbye, a couple of themes are dominating discussions of computing’s future. One centers on quantum computers and stupendous feats of decryption, genome analysis, and drug development. The other, more interesting vision is of machines that have something like human cognition. They will be our intellectual partners in solving some of the great medical, technical, and scientific problems confronting humanity. And their thinking may share some of the fantastic and maddening beauty, unpredictability, irrationality, intuition, obsessiveness, and creative ferment of our own.

In this issue, we consider the advent of neuromorphic computing and its prospects for ushering in a new age of truly intelligent machines. It is already a sprawling enterprise, being propelled in part by massive research initiatives in the United States and Europe aimed at plumbing the workings of the human brain. Parallel engineering efforts are now applying some of that knowledge to the creation of software and specialized hardware that “learn”—that is, get more adept—by repeated exposure to computational challenges.

Brute speed and clever algorithms have already produced machines capable of equaling or besting us at activities we’ve long thought of as deeply human: not just poker and Go but also stock picking, language translation, facial recognition, drug discovery and design, and the diagnosis of several specific diseases. Pretty soon, speech recognition, driving, and flying will be on that list, too.

The emergence of special-purpose hardware, such as IBM’s TrueNorth chips and the University of Manchester’s SpiNNaker, will eventually make the list longer. And yet, our intuition (which for now remains uniquely ours) tells us that even then we’ll be no closer to machines that can, through learning, become capable of making their way in our world in an engaging and yet largely independent way.

To produce such a machine we will have to give it common sense. If you act erratically, for example, this machine will recall that you’re going through a divorce and subtly change the way it deals with you. If it’s trying to deliver a package and gets no answer at your door, but hears a small engine whining in your backyard, it will come around to see if there’s a person (or machine) back there willing to accept the package. Such a machine will be able to watch a motion picture, then decide how good it is and write an astute and insightful review of the movie.

But will this machine actually enjoy the movie? And, just as important, will we be able to know if it does? Here we come inevitably to the looming great challenge, and great puzzle, of this coming epoch: machine consciousness. Machines probably won’t need consciousness to outperform us in almost every measurable way. Nevertheless, deep down we will surely regard them with a kind of disdain if they don’t have it.

Trying to create consciousness may turn out to be the way we finally begin to understand this most deeply mysterious and precious of all human attributes. We don’t understand how conscious experience arises or its purpose in human beings—why we delight in the sight of a sunset, why we are stirred by the Eroica symphony, why we fall in love. And yet, consciousness is the most remarkable thing the universe has ever created. If we, too, manage to create it, it would be humankind’s supreme technological achievement, a kind of miracle that would fundamentally alter our relationship with our machines, our image of ourselves, and the future of our civilization.

We Could Build an Artificial Brain Right Now

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.
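To make the scale of that claim concrete, here is a rough calculation of my own; the 1-megawatt figure for a data-center-scale workload is an assumption chosen only for illustration, while the improvement factor is the one cited above.

```python
# Rough scale check of the efficiency factor cited in the text.
# The 1 MW data-center workload is an illustrative assumption.

neuromorphic_gain = 100_000_000  # cited potential further gain from brainlike chips
data_center_watts = 1_000_000    # assumed 1 MW workload

handheld_watts = data_center_watts / neuromorphic_gain
print(f"{data_center_watts:,} W -> {handheld_watts * 1000:.0f} mW")
# 1,000,000 W -> 10 mW: a data-center-scale power budget shrunk to something
# that fits comfortably in a handheld device.
```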

The ultimate brainlike machine will be one in which we build analogues for all the essential functional components of the brain: the synapses, which connect neurons and allow them to receive and respond to signals; the dendrites, which combine and perform local computations on those incoming signals; and the core, or soma, region of each neuron, which integrates inputs from the dendrites and transmits its output on the axon.

Simple versions of all these basic components have already been implemented in silicon. The starting point for such work is the same metal-oxide-semiconductor field-effect transistor, or MOSFET, that is used by the billions to build the logic circuitry in modern digital processors.

These devices have a lot in common with neurons. Neurons operate using voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels in which ions move between the interior and exterior of the cell—a smooth, analog process that involves a steady buildup or decline instead of a simple on-off operation.

MOSFETs are also voltage controlled and operate by the movement of individual units of charge. And when MOSFETs are operated in the “subthreshold” mode, below the voltage threshold used to digitally switch between on and off, the amount of current flowing through the device is very small—less than a thousandth of what is seen in the typical switching of digital logic gates.
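The reason those subthreshold currents are so small is that, below threshold, drain current depends exponentially on gate voltage. The sketch below uses the standard textbook subthreshold relation I_D ≈ I0 * exp(Vgs / (n * V_T)); the prefactor, slope factor, and voltages are generic assumptions, not values for any particular transistor or process.

```python
# Textbook subthreshold MOSFET model: I_D ~ I0 * exp(Vgs / (n * V_T)).
# Parameter values below are generic illustrative assumptions.

import math

I0 = 1e-12         # A, assumed leakage prefactor
N_SLOPE = 1.5      # dimensionless subthreshold slope factor (typically 1-2)
V_THERMAL = 0.026  # V, thermal voltage kT/q at room temperature

def subthreshold_current(vgs: float) -> float:
    """Approximate drain current (amperes) for a MOSFET biased below threshold."""
    return I0 * math.exp(vgs / (N_SLOPE * V_THERMAL))

for vgs in (0.1, 0.2, 0.3):
    print(f"Vgs = {vgs:.1f} V -> I_D ~ {subthreshold_current(vgs):.1e} A")
# Currents in the picoamp-to-nanoamp range, far below the currents involved in
# switching digital logic gates.
```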

The notion that subthreshold transistor physics could be used to build brainlike circuitry originated with Carver Mead of Caltech, who helped revolutionize the field of very-large-scale circuit design in the 1970s. Mead pointed out that chip designers fail to take advantage of a lot of interesting behavior—and thus information—when they use transistors only for digital logic. The process, he wrote in 1990, essentially involves “taking all the beautiful physics that is built into…transistors, mashing it down to a 1 or 0, and then painfully building it back up with AND and OR gates to reinvent the multiply.” A more “physical” or “physics-based” computer could execute more computations per unit energy than its digital counterpart. Mead predicted such a computer would take up significantly less space as well.

In the intervening years, neuromorphic engineers have made all the basic building blocks of the brain out of silicon with a great deal of biological fidelity. The neuron’s dendrite, axon, and soma components can all be fabricated from standard transistors and other circuit elements. In 2005, for example, Ethan Farquhar, then a Ph.D. candidate, and I created a neuron circuit using a set of six MOSFETs and a handful of capacitors. Our model generated electrical pulses that very closely matched those in the soma part of a squid neuron, a long-standing experimental subject. What’s more, our circuit accomplished this feat with similar current levels and energy consumption to those in the squid’s brain. If we had instead used analog circuits to model the equations neuroscientists have developed to describe that behavior, we’d need on the order of 10 times as many transistors. Performing those calculations with a digital computer would require even more space.

Opinion: Raspberry Pi Merger With CoderDojo Not All Good News

This past Friday, the Raspberry Pi Foundation and the CoderDojo Foundation became one. The Raspberry Pi Foundation described it as “a merger that will give many more young people all over the world new opportunities to learn how to be creative with technology.” Maybe. Or maybe not. Before I describe why I’m a bit skeptical, let me first take a moment to explain more about what these two entities are.

The Raspberry Pi Foundation is a charitable organization created in the U.K. in 2009. Its one-liner mission statement says it works to “put the power of digital making into the hands of people all over the world.” In addition to designing and manufacturing an amazingly popular line of inexpensive single-board computers—the Raspberry Pi—the Foundation has also worked very hard at providing educational resources.

The CoderDojo Foundation is an outgrowth of a volunteer-led, community-based programming club established in Cork, Ireland in 2011. That model was later cloned in many other places and can now be found in 63 countries, where local coding clubs operate under the CoderDojo banner.

So both organizations clearly share a keen interest in having young people learn about computers and coding. Indeed, the Raspberry Pi Foundation had earlier merged with Code Club, yet another U.K. organization dedicated to helping young people learn to program computers. With all this solidarity of purpose, it would seem only natural for such entities to team up, or so you might think. Curmudgeon as I am, though, I’d like to share a different viewpoint.

The issue is that, well, I don’t think that the Raspberry Pi is a particularly good vehicle to teach young folks to code. I know that statement will be considered blasphemy in some circles, but I stand by it.

The problem is that for students just getting exposed to coding, the Raspberry Pi is too complicated to use as a teaching tool and too limited to use as a practical tool. If you want to learn physical computing so that you can build something that interacts with sensors and actuators, better to use an 8-bit Arduino. And if you want to learn how to write software, better to do your coding on a normal laptop.

That’s not to say that the Raspberry Pi isn’t a cool gizmo or that some young hackers won’t benefit from using one to build projects—surely that’s true. It’s just not the right place to start in general. Kids are overwhelmingly used to working in OS X or Windows. Do they really need to switch to Linux to learn to code? Of course not. And that just adds a thick layer of complication and expense.

My opinions here are mostly shaped by my (albeit limited) experiences trying to help young folks learn to code, which I’ve been doing during the summer for the past few years as the organizer of a local CoderDojo workshop. I’ve brought in a Raspberry Pi on occasion and shown kids some interesting things you can do with one, for example, turning a Kindle into a cycling computer. But the functionality of the Raspberry Pi doesn’t impress these kids, who just compare it with their smartphones. And the inner workings of the RasPi are as inaccessible to them as the inner workings of their smartphones. So it’s not like you can use a RasPi to help them grasp the basics of digital electronics.

The one experience I had using the Raspberry Pi to teach coding was disastrous. While there were multiple reasons for things not going well, one was that the organizer wanted to have the kids “build their own computers,” which amounted to putting a Raspberry Pi into a case and attaching it to a diminutive keyboard and screen. Yes, kids figured out how to do that quickly enough, but that provided them with a computer that was ill suited for much of anything, especially for learning coding.

So I worry that the recent merger just glosses over the fact that teaching kids to code and putting awesome single-board computers into the hands of makers are really two different exercises. I’m sure Eben Upton and lots of professional educators will disagree with me. But as I see things, channeling fledgling coders into using a Raspberry Pi to learn to program computers is counterproductive, despite surface indications that this is what we should be doing. And to my mind, the recent merger only promises to spread the misperception.

In the Future, Machines Will Borrow Our Brain’s Best Tricks

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed to test whether machines can think, by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, running at 20 million W, can’t come close to matching the brain.

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types having complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.