This company sold Google a quantum computer. Here’s how it works.

January 10

D-Wave officials with one of their computers. (Steve Jurvetson)

Last week I did an interview with Scott Aaronson, an MIT computer scientist and a leading expert on quantum computing. Aaronson had harsh words for D-Wave, a leading quantum computing vendor that Aaronson considers "the leaders on generating hype."

I thought it would be interesting to hear from D-Wave, a start-up that has raised more than $130 million in venture capital and employs over 100 people. So I interviewed Jeremy Hilton, D-Wave's vice president of processor development. He has been at the company since 2000. We spoke on the phone on Tuesday. The transcript has been edited for length and clarity.

Timothy B. Lee: Let's start at the beginning. Can you explain what D-Wave's products do and why they're useful?

Jeremy Hilton: Let me give you a bit of history of why D-Wave ended up going down the path it went down. We were founded back in 1999. Up until 2005, we were really focused on surveying the field of quantum computing. We asked, "Of all the different ways to realize a quantum computer, which technology would best be able to scale up to a computationally relevant size?"

That was a fascinating exercise. It showed a lot of things. It showed some of the massive challenges in implementing the conventional model for building a quantum computer. That model is called Gate Model Quantum Computing. [Editor's note: Scott Aaronson explained that model in a recent Switch interview.] And when people describe quantum computing, they are typically thinking of this gate model approach. It's an intuitive way to describe quantum computing because it extends our current understanding of how computers work.

But in the early 2000s, some bright researchers from MIT published an alternative model called adiabatic quantum computing. There are a number of differences between gate and adiabatic QC. It's been proven in the literature—not by D-Wave—that those are equivalent models of quantum computation. That being said, what D-Wave built is not universal quantum computing.

So why did you decide to go with the adiabatic approach?

When the original models were proposed at MIT and this adiabatic approach was born, follow-on researchers at MIT extended it into a physical implementation in superconducting qubits. At the time, in 2004, D-Wave, and in particular Geordie Rose, our founder and now CTO, was looking at this work. They were trying to figure out how D-Wave could start building hardware and do something useful in a shorter time frame than is possible with the gate model approach. The [adiabatic] algorithms that had been published were strongly connected to [solving] optimization problems. That gets you into the space of solving a complexity class called NP-hard.

There are a lot of things that differentiate gate model and adiabatic quantum computing (AQC). One of the key things is the role of decoherence. One of the criticisms we often hear from the academic community is that basically you sink or swim based on your decoherence time. [i.e. the amount of time, typically a small fraction of a second, that a quantum computing device can maintain its quantum characteristics.] [Academics believe that] if your qubit doesn't pass a high threshold, you can't get started. That's why over the last 15 years, that community has focused on building better qubits. [They're still working with] two or three qubits, but decoherence times have really improved.

In 2005, when we were starting to think about this AQC model, we noticed something different. The states that you care about are always the ground state of your system of qubits. That means you have some inherent robustness and immunity from noise and things like decoherence. That means we could start building AQC processors and use them to solve problems that have practical applications we care about [without building the more robust qubits needed for gate-model quantum computing].

Can you be more concrete about this? What kind of hardware is inside a D-Wave device and how does it work?

In concrete hardware terms, all of these models [of quantum computing] can be implemented in superconducting hardware, ion-trap hardware, and many other types of hardware. [There are] implementations of AQC in ion trap configurations. That's all possible.

D-Wave has focused on the superconducting side of things to benefit from the infrastructural advances the semiconductor industry has made. The fabrication of superconductors is all [mature] semiconductor technology. We fabricate [our chips] at Cypress Semiconductor. We don't have exotic tools to make those devices. That was an important aspect for D-Wave, because we want to scale up to a high level. If all those problems have already been solved, we'll be able to take advantage more quickly. [If we had used] ion trap technology, new technologies would have needed to be developed to scale up.

D-Wave's processor is custom-designed to solve something called the Ising spin glass in a transverse magnetic field. That problem can be mapped to other problems that are very interesting. There are problems that are known in logistics and scheduling, things like satisfiability. These are mathematical challenges that represent problems people have. The traveling salesman problem is the classic example of this type of optimization problem. From there, the spectrum of potential applications is quite a lot larger.
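[Editor's note: to make the Ising-model idea concrete, here is a minimal Python sketch, not D-Wave code, of the kind of energy function Hilton is describing. The biases and couplings below are made up for illustration; the hardware's job is to find the lowest-energy assignment when there are far too many spins to enumerate.]

```python
# Illustrative Ising energy function: the biases (h) and couplings (J) below
# are invented for the example; a real instance would encode a problem like
# scheduling or satisfiability in these numbers.
import itertools

h = {0: 0.5, 1: -0.3, 2: 0.2}                  # per-spin biases
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.4}   # pairwise couplings

def energy(spins):
    """E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j, with each s_i = +1 or -1."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# With three spins we can simply enumerate all 2**3 assignments and pick the
# lowest-energy one; the point of the hardware is to do this search when the
# number of spins is in the hundreds and enumeration is hopeless.
best = min(itertools.product([-1, 1], repeat=3), key=energy)
print(best, energy(best))
```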

Google is one of your partners. How has Google put D-Wave's technology to use?

In 2009, we did some work with Google to develop a car classifying [system]. It did object identification in image search. The idea was to be able to use our 128-qubit processor to train an algorithm to identify whether a car was in a picture or not. These kinds of questions seem a bit odd to us, because as humans it's very easy to determine if there's a car in a picture. But a machine looks at it in terms of pixels rather than objects. A lot of machine learning is about how we get machines to think of pictures as objects instead of raw data.

So this work that we did with Google, which was later published, was really about taking their state-of-the-art conventional algorithms, adapting them, and seeing how they would perform [with quantum hardware]. We ran the training [e.g. had the computer analyze example data] for a couple of weeks on the hardware. The outcome was a "trained" piece of software that no longer needed to be on the quantum processor. And it was quite good at identifying cars—comparable or slightly better than what they were able to do [with conventional computers]. It's a great illustration for us of the potential of the technology and how it could be used.

The algorithm looks for features of the image. The identification and evaluation of those features turns into an optimization type of problem. That optimization problem was adapted to run on our hardware. We had to change the problem around slightly so it could run on our hardware, casting it in a form that's actually a little harder for people to solve [with conventional computers].
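[Editor's note: the published Google/D-Wave work selected a small set of image features whose combined vote best matched the training labels. Here is a hedged Python sketch of that general idea, not the actual code; the weak-detector outputs, labels and penalty below are random stand-ins.]

```python
# Toy version of "pick the best subset of weak detectors": the data here is
# random filler, and the brute-force search stands in for the hardware.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_features, n_images = 6, 40
votes = rng.choice([-1, 1], size=(n_images, n_features))   # invented weak-detector outputs
labels = rng.choice([-1, 1], size=n_images)                 # invented car / no-car labels
penalty = 0.05                                              # cost per feature used (assumed)

def loss(weights):
    """Error of the combined vote plus a penalty for every feature switched on."""
    combined = np.sign(votes @ weights + 1e-9)
    return float(np.mean((combined - labels) ** 2) + penalty * weights.sum())

# Six features means only 2**6 candidate subsets, so we can enumerate them here;
# the optimization hardware is aimed at the case where that is no longer possible.
best = min(itertools.product([0, 1], repeat=n_features),
           key=lambda w: loss(np.array(w)))
print("selected features:", best)
```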

Even the fact that it was in the ballpark of the conventional algorithms in the field was very exciting. We didn't officially release the 128-qubit product until 2011. This was on a very early version of the technology.

I thought the point of quantum computing was that it could solve problems dramatically faster than conventional computers. Why should people be impressed if your hardware only produced results that are "comparable or slightly better" than conventional methods?

An important element of D-Wave's technology is our roadmap. We're continuously improving and building bigger processors. Recently we released a 512-qubit processor. If we look at the performance roadmap of that hardware, comparing performance in solving these problems to the state of the art, we saw a 300,000x improvement in performance between the 128-qubit and 512-qubit processors. That kind of performance gain is really unprecedented.

That 512-qubit processor was able to meet and match the state of the art in classical algorithms and computers. It's very exciting that we achieved that. So when we look at that, we say: let's continue to make these things bigger and more powerful.

Right now, we have a 1,000-qubit processor in our lab. We plan to release it later in 2014. The major thing that's changing, aside from some of the design details, is the scale of the problem you can represent, going from a 500-variable graph to a 1,000-variable graph. The complexity of that is growing tremendously. [It leads to an] unimaginable exponential blowup of the number of solutions. That scale of problem is getting that much harder for classical algorithms to solve.
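[Editor's note: with n binary variables there are 2^n candidate assignments, so going from 500 variables to 1,000 multiplies the size of the search space by a factor of 2^500, roughly 3 × 10^150.]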

[After that] we're planning to release a 2,000-qubit processor design. That's pushing into territory where we're tackling problems that are very difficult for people to solve [with conventional methods]. The [academic] community is still working on getting a few qubits to work at the scale they're trying to work at.

We're at a point where we see that our current product is matching the performance of state-of-the-art classical computers. Over the next few years, we should surpass them. The ideal is to get into a space that is fundamentally intractable with classical machines. In the short term all we focus on is showing some scaling advantage and being able to pull away from that classical state of the art.

Can you help me gain an intuition for how this works under the hood? What does a D-Wave quantum computer do that makes it so much faster than a conventional one?

Let me give you a metaphor. An optimization problem can be represented by hills and valleys. The best answer is the lowest point in that energy landscape. The challenge in any algorithm is to explore that energy landscape in some way, to try to find that minimum. The reason these problems become so difficult is that the state space is enormous. Even at 512 qubits, with 512 variables, that's 2^512 states. Each one of those states is an energy in your problem. You're trying to find the one state that has the lowest energy, the one bit string that represents the lowest energy in your problem. Classical algorithms drop themselves in that energy landscape and run downhill. More sophisticated algorithms have some way of running uphill. But you can boil it down to running downhill as fast as possible.
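[Editor's note: here is a small Python sketch of the "drop in and run downhill" strategy Hilton describes: greedy single-spin flips on a random Ising instance. The random couplings and the simple move rule are illustrative assumptions, not any particular production algorithm.]

```python
# Greedy "run downhill" search on a small random Ising energy landscape.
# The instance is random filler; the point is the behavior: the search stops
# at the first valley where no single flip lowers the energy.
import random

random.seed(1)
n = 20
J = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

s = [random.choice([-1, 1]) for _ in range(n)]   # drop in at a random starting point
current = energy(s)

improved = True
while improved:
    improved = False
    for i in range(n):
        s[i] = -s[i]                             # try flipping one spin
        e = energy(s)
        if e < current:                          # keep the flip only if it goes downhill
            current, improved = e, True
        else:
            s[i] = -s[i]                         # otherwise undo it
print("stuck at local minimum with energy", current)
```

Once every single flip points uphill, this search is stuck in whatever valley it landed in, which is exactly the limitation the quantum-tunneling picture Hilton describes next is meant to get around.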

With quantum computing, you have the same energy landscape picture, but you start off in a superposition of all the possible states, effectively expanding the energy landscape. At any given point in time, you can use tunneling to explore different valleys. With entanglement, it's not fully known what the role would be. If you're in a superposition, entanglement allows those valleys to interact and interfere in a way that allows the system to find its lowest-energy configuration.

We're trying to exploit the way nature works, where nature is trying to find its lowest-energy configuration, and bring quantum mechanical effects to bear to help explore that energy landscape.

One of the most frequently mentioned quantum computing algorithms is Shor's algorithm, which provides an efficient way to factor large numbers. Can you implement Shor's algorithm on D-Wave's hardware?

D-Wave hasn't focused on factoring much. For very clear reasons, the general field of QC galvanized around Shor's algorithm. It's an application that captures the imagination. Talk about being able to crack public key cryptography gets people's attention.

There are algorithms related to the factoring problem that can be run on our hardware and we've done some basic work along those lines. It's simply not a particularly interesting market segment for a business. We've focused our energy more on problems that can connect to things like machine learning, financial modeling, logistics and scheduling. They clearly relate to major business challenges.
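[Editor's note: one generic, textbook way to relate factoring to optimization, not necessarily D-Wave's approach, is to search for bit patterns p and q that minimize (p*q - N)^2, which is zero exactly when p*q = N. A toy Python sketch:]

```python
# Toy "factoring as optimization": minimize (p*q - N)**2 over small bit patterns.
# The bit width and the brute-force search are assumptions for the example.
import itertools

N = 35
BITS = 3   # each candidate factor is encoded in 3 bits (odd values 3..17 below)

def decode(bits):
    """Map a bit pattern to an odd integer greater than 1."""
    return 3 + 2 * sum(b << i for i, b in enumerate(bits))

def cost(p_bits, q_bits):
    p, q = decode(p_bits), decode(q_bits)
    return (p * q - N) ** 2, p, q

best = min(
    (cost(pb, qb)
     for pb in itertools.product([0, 1], repeat=BITS)
     for qb in itertools.product([0, 1], repeat=BITS)),
    key=lambda t: t[0],
)
print("p =", best[1], "q =", best[2], "cost =", best[0])   # cost 0 means p*q == N
```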

Why do you think some people have formed a perception that D-Wave is a secretive company?

That's a historic effect. In 2007 and really prior to 2009, we were working out a lot of the challenges associated with building a scalable technology and seeing if we could make this work. Our focus wasn't on publication. In 2009, once we got it working, we started publishing at an academic level. We had 22 scientific papers. [We released a] detailed blueprint of our technology. We've focused on having a level of transparency that's unprecedented. If people want to see our lab, we invite them and give them a tour. We make sure our scientists show them the technology.
