
How does a Quantum Computer work?

In a previous post we explained what quantum mechanics is. By now you probably also know that Quantum Computers somehow harness the power of quantum mechanics to perform calculations and process data. You might even know that Quantum Computers promise more computing power than any classic (super)computer ever will, and that they are the key to solving many important problems. But this probably still leaves you with one significant question: how does a Quantum Computer actually work?

Well, the first answer that comes to mind is “it’s complicated”. Once we get that out of the way, we can usually add “it’s very complicated”. At this very moment your mind might be taking a few steps back and starting to wonder: “do I even need to understand this? why should I bother? is my classic computer not enough? and come to think of it, how does a classic computer actually work?”.

This brief post on Quantum Computers tries to answer those questions and alleviate your doubts. We start with a quick reminder of how a classic computer works, followed by an explanation of why classic computers are not enough for our needs. Finally, somewhat by way of comparison with traditional computers, we explain how Quantum Computers work.

How do classic computers work?

Modern computers are, all in all, simple machines. Each computer is composed of a main memory, which stores and represents data; an arithmetic unit, which processes the data; and a control unit, which provides a control mechanism for the whole system.

The processing power of a computer is provided by its chips, which are composed of modules. Those modules contain logic gates, and the logic gates in turn are composed of many transistors.

The transistor is the most basic data processor (or calculator) in a computer. It is nothing more than an on/off switch which can either let through or block the information coming in.

That information is – you guessed it – made of bits, which are either a 0 or a 1. By combining several bits we are able to represent more complicated information. Transistors are combined to create logic gates – such as AND,[1] OR,[2] NAND[3] or XOR[4] gates – each of which turns its inputs into a single output. Several logic gates can be combined into modules that can add or multiply numbers. Once a computer can multiply, it is able to perform any kind of operation.
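To see how that stacking works, here is a minimal sketch in Python – an illustration of the logic, not of how real hardware is built. A few gates are modelled as functions on bits and combined into a half-adder, the simplest module that adds two one-bit numbers:

```python
# Basic logic gates modelled as functions on bits (0 or 1).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def NAND(a, b): return 1 - (a & b)

# A half-adder combines two gates to add two one-bit numbers:
# XOR produces the sum bit, AND produces the carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10, i.e. sum 0, carry 1
```

Chain enough of these adders together and you can add numbers of any width; from there, multiplication and everything else follows.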

So at its very heart a computer is a combination of many simple calculators that answer simple math questions, like additions or multiplications. A large combination of those calculators gives enough power to run advanced scientific algorithms, play video games or simply create spreadsheets.

So what exactly is wrong with this?

All modern computers rely on the same principles as those highlighted above. Whether you take your everyday laptop or the most powerful supercomputers in the world, the underlying principles are still the same.

In fact, those principles got us very far. For the past sixty years we have been making exponential advances in computing power: our computers keep getting smaller while their power keeps increasing, and we will continue along that path. Currently, the two most powerful computers in the world are Chinese: the Sunway TaihuLight (93 PFLOPS) and the Tianhe-2 (33 PFLOPS). The US is planning to build an exascale computer by 2020, able to perform a billion billion calculations per second. We need such powerful computers to perform (as best we can) many complex calculations, such as weather predictions, nuclear simulations, AI and so on.

Tianhe-2 supercomputer. Photo credits: nextbigfuture.com

So what exactly is wrong here? Well, with the computers we are building nowadays, we are slowly reaching the limits of how powerful they can be. You might be wondering why that is. There are three good reasons.

First of all, it’s not practical. Take the Tianhe-2, for example: it cost about $390m, it takes up 720 m² of space and consumes about 24 MW of energy (enough to power 20,000 houses). So it is becoming very costly and very impractical to build such powerful computers. By itself, however, this is not a complete barrier. After all, we invest in other buildings that consume far more energy and space, and eventually there will always be someone who will pay the necessary price.

The true limits of classic computers come from the next two reasons.

The second reason for the decline of classic supercomputers is related to something called Moore’s Law. Gordon Moore was a Silicon Valley pioneer in the 60’s and 70’s and one of the co-founders of Intel. In 1965 he wrote a paper in which he predicted that the number of transistors on a computer circuit doubles roughly every year. So if you take a square inch of a computer’s board, today you will find twice the number of transistors you had there last year.

This leads, however, to a problem: there are physical limits to how small transistors can actually get. Today a typical transistor is about 14 nanometers across. That is about 500 times smaller than a red blood cell, which is roughly 7 µm (millionths of a meter) in diameter. In case the size of a red blood cell doesn’t tell you much, let’s try a human hair. A human hair is anywhere between 17 and 181 µm in diameter, making it somewhere between 2 and 25 times bigger than a red blood cell and approximately 1,000 to 13,000 times bigger than a transistor. If a transistor shrinks to the size of a few atoms, it will not be able to work properly. After all, transistors are on/off switches that let an electric current through. Electricity is a flow of electrons, and if transistors get too small, those electrons can “squeeze in” through a transistor – by quantum tunnelling – even when they are supposed to be blocked.
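To put those sizes side by side, here is a quick back-of-the-envelope check of the ratios quoted above, using the figures from this paragraph converted to nanometers:

```python
# Rough sizes from the text, all in nanometers.
transistor = 14                        # 14 nm
red_blood_cell = 7_000                 # 7 µm
hair_min, hair_max = 17_000, 181_000   # 17–181 µm

print(red_blood_cell / transistor)     # 500.0: red blood cell vs transistor
print(hair_min / transistor)           # ~1,214: thinnest hair vs transistor
print(hair_max / transistor)           # ~12,929: thickest hair vs transistor
```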

The third reason is related to Amdahl’s Law. Gene Amdahl – another Valley guy from the 60’s who used to work for IBM – designed a formula for calculating the power of a computer architecture that relies on parallel computing. In layman’s terms: if you bundle many computers (or processors) together, how much power will you get? You would think that the more processors there are, the more power there will be. But that is true only to a certain extent, because the parts of a program that cannot be parallelised put a hard cap on the overall speedup.

Amdahl's Law graph. Credits: Wikipedia.

Eventually your system will reach its peak and won’t be able to perform any better, no matter how many processors you add. The Tianhe-2, for instance, has about 3,100,000 processing cores.
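The formula itself is simple: if a fraction p of a program can run in parallel, the maximum speedup on n processors is 1 / ((1 − p) + p / n). A short Python sketch shows how quickly the returns diminish:

```python
def amdahl_speedup(p, n):
    """Maximum speedup when a fraction p of the work runs on n processors."""
    return 1 / ((1 - p) + p / n)

# Even with 95% of the work parallelised, the speedup flattens out near 20x:
for n in (10, 100, 1_000, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 10 -> 6.9, 100 -> 16.8, 1000 -> 19.6, 1000000 -> 20.0
```

However many cores you add, the serial 5% caps the speedup at 1 / (1 − 0.95) = 20 times.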

In short, computers relying on the classic rules of computing are reaching their limits. We will not be able to build ever more powerful computers by applying the same technology as we do today. Despite our best efforts and the resources we put into this, there are limits which we cannot bypass. This means that some problems – such as molecule simulation, weather prediction or optimisation problems – will never be solved on classic computers.

Enter the Quantum Computer

Just like classic computers use bits (0 or 1), Quantum Computers use quantum bits, or qubits, which can also take a 0 or 1 value.

A qubit can be made from a photon or an electron. All electrons have magnetic fields, so they act like tiny bar magnets (or like compasses). That property of an electron is called spin. Once you place an electron in a magnetic field it will align with that field, similar to a compass needle pointing north due to the magnetic field of the earth. This state, where the electron is aligned with the magnetic field, requires little to no energy, so it’s called the “0 state” or “spin down”. It is possible to put the electron in a “1 state”, or “spin up”, by applying some energy to it – similarly to how, by pushing the compass needle with your finger, you could make the north arrow point south.

In principle the qubit works similarly to a classic bit: it can be a 0 or a 1. But qubits obey the principles of quantum physics, such as superposition. This means that a qubit can be in both states at once – it can be both a 0 and a 1. Since a quantum particle can exist in two or more states at the same time, the qubit in fact exists as a combination of multiple states, each corresponding to a different possible measurement outcome. And this property has huge implications for the power of Quantum Computers once two or more qubits are put together.
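To make that concrete, here is a minimal sketch in Python with NumPy of how a qubit’s state can be written down – an illustration of the math, not of real quantum hardware:

```python
import numpy as np

# The two basis states, |0> and |1>, as vectors.
zero = np.array([1, 0])
one  = np.array([0, 1])

# An equal superposition: the qubit is "both 0 and 1" until measured.
qubit = (zero + one) / np.sqrt(2)

# Squaring the coefficients gives the probability of each outcome.
print(np.abs(qubit) ** 2)  # [0.5 0.5] -> 50% chance of 0, 50% chance of 1
```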

Combining two classic bits means you can have four different combinations, but in the end what you get is just two pieces of information – one from each bit.

With two qubits we also have four possible combinations, but the principle of superposition allows the qubits to be in all four of those states at once, each with its own coefficient. This means that out of two qubits we get four different coefficients and, eventually, four bits of information.

By way of comparison: where two classic bits give us two bits of information, two qubits give us four. With three qubits, three spins, we get eight; with four qubits we reach sixteen.

In fact, the rule is that N qubits can hold 2^N classical bits’ worth of information. And once you reach 300 qubits, your Quantum Computer holds 2^300 coefficients – more than the estimated number of particles in the observable universe.
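You can see this exponential growth by extending the single-qubit sketch above: combining qubit states is mathematically a Kronecker product, so every extra qubit doubles the number of coefficients the computer carries around:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # one qubit in equal superposition

state = plus
for n in range(2, 6):
    state = np.kron(state, plus)       # attach one more qubit
    print(n, "qubits ->", state.size, "coefficients")
# 2 qubits -> 4, 3 -> 8, 4 -> 16, 5 -> 32: doubling each time
```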

This property of qubits means that with every additional qubit the computing power of a Quantum Computer doubles. Just 20 qubits can already store over a million combinations, and with 60 qubits you would achieve more power than the most powerful (super)computers we have today.

Conclusion

Quantum Computers are a complicated area of science – complicated to explain and even more complicated to implement. But they will lead to significant breakthroughs in many areas, such as optimisation, molecule simulation or cryptography.

[1] An AND gate is a basic digital logic gate that implements logical conjunction. A HIGH output (1) results only if both inputs to the AND gate are HIGH (1). If neither or only one input is HIGH, a LOW output results.

[2] The OR gate is a digital logic gate that implements logical disjunction. A HIGH output (1) results if one or both of the inputs to the gate are HIGH (1). If neither input is HIGH, a LOW output (0) results. In another sense, the OR function effectively finds the maximum of two binary digits, just as the complementary AND function finds the minimum.

[3] A NAND gate (negative-AND) is a logic gate which produces an output that is false only if all its inputs are true; thus its output is the complement of that of the AND gate.

[4] An XOR gate implements an exclusive or: it gives a true (1/HIGH) output when the number of true inputs is odd, that is, if one, and only one, of its two inputs is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function: the output is true if the inputs are not alike and false otherwise. A way to remember XOR is “one or the other but not both”.
