D-Wave 2000Q hands-on: Steep learning curve for quantum computing

By Chris Lee

Algorithms, a complicated work in progress.
Editor's note: I realize that I do not correctly calculate the Bragg transmission in either the classical or the quantum case; however, it is close enough to get an idea of the differences between programming a classical and a quantum computer.

Time: non-specific 2018. Location: a slightly decrepit Slack channel.

"You know Python?"

Questions from John Timmer, the Ars Technica Ruler of All Things Science, are sometimes unexpected. If Slack could infuse letters with caution, my "Yes" would have dripped with it.

It turns out that D-Wave was unleashing its quantum optimizer (the company had just announced a new version) on the world via an application programming interface (API). Ars was being invited to try it out, but you needed to know some Python. I was up for that.

I had envisioned D-Wave producing some fantastic API that could take my trademark code-that-makes-coders-weep and turn it into quantum-optimizer-ready lines. Alas, when I got my hands on the quantum optimizer, that was not the case. I quickly found myself buried in documentation trying to figure out what exactly I was supposed to do.

I think D-Wave's press officer had in mind someone who knew enough to be able to run the pre-written examples. But I'm kind of stubborn. I had come up with three or four possible problems that I wanted to test out. I wanted to find out: could I master the process of solving those problems on the D-Wave computer? How easy is it to make the conceptual leap from classical programming to working with a quantum annealer? Were any of my problems even suitable for the machine?

To give away the stunning conclusion, the answers are: maybe not quite "master," difficult, and #notallproblems.

Choosing something to code

Despite what you may or may not think of me, I'm what you might call a practical programmer. Essentially, anyone skilled in the art of programming would wince (and quite possibly commit murder) at the sight of my Python.

But I can come up with problems that require code to solve. What I want is something, for instance, that calculates the electric fields due to a set of electrodes. Something that finds the ground state of a helium atom. Or something that calculates the growth of light intensity as a laser starts up. These are the sorts of tasks that interest me most. Going in, I had no idea if the D-Wave architecture could solve these issues.

I chose two problems that I thought might work: finding members of the Mandelbrot set and calculating the potential contours due to a set of electrodes. These also had the benefit of being problems that I could quickly solve using classical code to compare answers. But I quickly ran into trouble trying to figure out how to run these on the D-Wave machine. You need a huge shift in the way you think about problems, and I am a very straightforward thinker.

For instance, one issue I struggled with is that you are really dealing with raw binary numbers (even if they are expressed as qubits rather than bits). That means that there are, effectively, no types. Almost all of my programming experience is in solving physics problems that rely on readily available floating-point numerical types.

This forces you to think about the problem in a different way: the answer should be expressible as a binary number (preferably true or false), while all the physics (e.g., all the floating-point numbers) should be held in the coupling between qubits. I could not for the life of me figure out how to do that for either of my problems. While buried in teaching, I let the problem simmer (or possibly curdle).
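To make that concrete, here is a toy sketch (my own illustration, not D-Wave code) of the shape such a problem takes: the binary variables carry the answer, while the real-valued couplings carry the physics. The numbers in the dictionary are made up, and a brute-force minimizer stands in for the annealer.

```python
from itertools import product

# Toy QUBO (quadratic unconstrained binary optimization) problem.
# All the real-valued "physics" lives in this dictionary of biases
# (diagonal entries) and couplings (off-diagonal entries), while the
# answer is just the bit pattern that minimizes the energy.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 0.5}

def energy(bits, Q):
    # E(b) = sum over pairs (i, j) of Q[i, j] * b_i * b_j
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

# A quantum annealer searches for this minimum physically;
# for two bits we can simply try all four patterns.
best = min(product([0, 1], repeat=2), key=lambda b: energy(b, Q))
```

For two bits the search space is trivial; the annealer's job is to do the same minimization when brute force is impossible.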

After about six months, I finally hit on a problem that I was familiar with and that I might be able to solve using D-Wave's computer. Light transmission through a Bragg grating can be expressed as a binary problem: does the photon exit the filter or not? All the physics is in the coupling between qubits, while the answer is read out from the energy of the solution.

Bragg gratings

A 1D Bragg grating is a layered material. Each interface between two layers reflects a small amount of light. The total transmission through the whole structure is determined by the spacing between the interfaces. To get light through, we need the waves from different interfaces to add up in phase. The transmission spectrum of a perfect Bragg grating with 50 layers with 0.1 percent reflectivity at the interfaces is shown below.

The amount of light that passes through a Bragg filter. Only a very small band of wavelengths can pass; the rest are absorbed or reflected.

Here is the code to generate the data for that graph.

import numpy as np

# sweep wavelengths around the design wavelength ld_center
ld = np.linspace(ld_center - 3e-9, ld_center + 3e-9, num_ld)
k = 2 * np.pi / ld         # wavenumber at each wavelength
T = np.ones(ld.shape)      # transmission, starting at unity
for j in range(num_layers):
    # each interface passes (1 - A) of the power, weighted by how close
    # the accumulated phase is to a multiple of pi
    T = T * (1 - A) * np.cos(j * k * layer_sep) ** 2

Here, we explicitly calculate the relative contribution of each interface in terms of the optical power that it will contribute to the next interface. Constructive and destructive interference is taken into account by reducing or increasing the contribution of the interface, depending on how close the layer spacing is to a half wavelength.

This is a necessary hack, because the couplings between qubits are only real-valued numbers, not complex numbers (the physics is best expressed as a complex number that contains the amplitude and phase of the light). Nevertheless, the output of the classical code "looks" approximately correct—the lack of sidebands is concerning and shows the model is incomplete, but that's not important for now.

The missing part of the model in the classical code is that there is no test for self-consistency. I've calculated the result based on an assumption about the way the wave will propagate. Even if that assumption is wrong, the result of the calculation ends up being the same—while the equations are based on physics, there's no way within the code to ensure they get the physics right.



In the D-Wave system, we need to create a string of qubits, each of which represents the light intensity at an interface. Each qubit is coupled to its neighbors with a strength that represents the transmission from one interface to the next, taking into account the distance between interfaces. If the distance between the two interfaces is a half wavelength, then light can resonate between the two interfaces and be transmitted. Conversely, if the distance is a quarter wavelength, destructive interference is complete, and the coupling should be a minimum.

The per-interface transmission is set to 99.9 percent, so the coupling between the 0th and 1st qubits is:

J₁ = 0.999 cos²(2πd/λ)

Between the 1st and 2nd, it is:

J₂ = [0.999 cos²(2πd/λ)]²

And the coupling between the 2nd and 3rd is:

J₃ = [0.999 cos²(2πd/λ)]³

In these formulas, d is the physical separation between layers and λ is the wavelength of light. If d/λ = 0.5, then the cosine is unity and we get, hopefully, perfect transmission.

In the D-Wave system, this means that the coupling between the (i−1)th and ith qubits should be:

Jᵢ = [0.999 cos²(2πd/λ)]ⁱ

In the above equation, the power of i takes into account the influence of the upstream qubits and simplifies the coupling scheme.
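In code, that list of couplings might be generated like this (my own sketch; the names J_vals, layer_sep, and num_layers echo the other snippets in this piece, and the specific wavelength is an assumption):

```python
import numpy as np

A = 0.001            # 0.1 percent loss per interface
ld = 500e-9          # probe wavelength (assumed)
layer_sep = 250e-9   # layer spacing: half a wavelength at 500 nm
num_layers = 50

# base coupling between one pair of adjacent interfaces
J = (1 - A) * np.cos(2 * np.pi * layer_sep / ld) ** 2

# the coupling between qubit i-1 and qubit i is J raised to the power i,
# folding in the effect of all the upstream interfaces
J_vals = [J ** i for i in range(1, num_layers)]
```

At exactly half-wavelength spacing the cosine squared is unity, so each successive coupling is just another factor of the 99.9-percent interface transmission.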



Now that we know how to couple our qubits, we need to look at the physical connectivity between the qubits to determine how to arrange the problem. This is called a minor embedding.

We have to give the computer a list of coupling strengths between each qubit. We also need to have biases, which weight the importance of each qubit. In our case, all qubits are equally important, so that is set to -1 for all qubits (I took that number from a standard example). The biases and couplings have to be associated with physical qubits and links between qubits. For big problems, finding your own mapping would be very time-consuming.

The code to generate the couplings and biases is straightforward.

# inside nested loops over qubit indices j and k:
# qubit labels (q[j], q[k]) are used as keys in the
# dictionary of couplings
linear.update({(q[j], q[j]): -1})  # biases are self-to-self
# nearest neighbors have a non-zero coupling
if j - k == -1:
    quad.update({(q[j], q[k]): (J_vals[j]) ** k})
# all other couplings are zero
else:
    quad.update({(q[j], q[k]): 0})

Luckily, the D-Wave API will attempt to find a minor embedding for you. I found that the minor embedding worked for a 50-layer filter but could not handle a 100-layer filter. This seems a bit strange to me, because with 2,000 qubits and a chain length of one (we do not need to combine multiple physical qubits to create single logical qubits), it should have been able to embed the larger problem. On reflection, I think the failure is because of the many zero-strength couplings that I specified.

An alternative might be to simply not specify the zero-strength couplings. This frees the algorithm to find more embeddings where qubits have no connection at all. I did not try this, but it seems the logical next step.
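A sketch of what that sparser formulation might look like (again my own guess, not something I ran on the machine): couplings that would be zero are simply never entered into the dictionary, so the embedder never has to honor them.

```python
num_qubits = 50
# placeholder couplings standing in for the wavelength-dependent J values
J_vals = [0.999 ** i for i in range(1, num_qubits)]

# build the QUBO dictionary with only the entries that matter:
# one bias per qubit, one coupling per nearest-neighbor pair
Q_sparse = {}
for j in range(num_qubits):
    Q_sparse[(j, j)] = -1              # biases are self-to-self
for j in range(num_qubits - 1):
    Q_sparse[(j, j + 1)] = J_vals[j]   # nearest neighbors only
```

For a 50-qubit chain this dictionary has 99 entries instead of 2,500, which should give the embedding algorithm far fewer constraints to satisfy.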

In any case, setting the D-Wave machine in motion is very simple:

response = EmbeddingComposite(DWaveSampler()).sample_qubo(Q_auto, num_reads=num_runs) 

Learning to speak qubit

Now, the difference between my cobbled-together classical algorithm and the D-Wave solution is that the D-Wave computer does not directly return an understandable answer. The result comes in multiple parts. You get a list of energies, and the lowest should be the solution. For multiple runs (say 1,000), you get the number of times each energy was obtained. And, for each run, you get the "answer," which is the value of the qubits. It was not immediately obvious to me which, if any, of these could be interpreted as the filter transmission.

In the end, I decided that the minimum energy of the solution was the best representative, since this (kind of) represented the amount of energy stored in the filter. So, a solution with a higher energy represents a larger filter transmission, as shown below. Turning that into an actual transmission is left as an exercise for someone who knows what they are doing.
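In plain Python, picking out that representative looks something like this (the records below are made-up stand-ins for what the sampler returns: one bit pattern per distinct solution, its energy, and how many of the reads landed on it):

```python
# hypothetical results: (qubit values, energy, number of occurrences)
records = [
    ((1, 0, 1, 0), -2.7, 412),
    ((1, 1, 0, 0), -2.1, 388),
    ((0, 1, 1, 1), -1.4, 200),
]

# take the lowest-energy record as "the" solution for this wavelength
best_bits, min_energy, count = min(records, key=lambda r: r[1])
```

Repeating this per wavelength gives one minimum energy per point, which is what is plotted in the next figure.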

The minimum energy of the D-Wave computer's solution for each wavelength. Higher energy corresponds to more transmission.

Beyond that, it is possible to get more physical insight by examining the qubit values for the lowest energy solutions. Below, you can see the bit values for solutions corresponding to the peak of the transmission curve (500nm), the 50-percent transmission point (500.6nm), and the 5-percent transmission point (501.4nm).

The value of each qubit for different solutions. The top row corresponds to a wavelength with just 5-percent transmission, the middle row to 50-percent transmission, and the bottom row to 100-percent transmission. The qubit pattern becomes more organized as the transmission increases.

At the edge of the transmission peak, the qubits tend to cluster into groups of ones. At the peak, however, the qubits alternate between ones and zeros. This latter solution is a binary picture of how the intensity of the light inside a Bragg filter varies. In other words, the D-Wave solution also directly represents the physics, unlike my classical code.

That is where the real power of annealing lies. Yes, I can get the filter transmission curve from a classical calculation quite nicely. But obtaining a picture of what is going on internally would have required a more sophisticated script. In the quantum annealing approach, however, you get that for free. I think that is pretty cool.

Another advantage is that the annealing process ensures that the solution is self-consistent within the bounds of my decisions about coupling strengths. This means that the quantum computing solution is probably more reliable than that obtained from my classical code.

Reflecting on quantum coding

The hardest part of programming the D-Wave machine is that it requires a different way of thinking about problems. I am used to minimization problems in terms of curve fitting to data, for instance. But I found it quite a challenge to change the way I conceive of a physical problem so that I could even begin to write the code. It doesn't help that most of the examples provided by D-Wave are (to me) abstract and not readily adaptable to physical problems. That, however, will change as a larger variety of users release code.

Likewise, figuring out what the answer means can be a challenge, especially when, like me, you jump in with your brain switched off.

Overall, I like the simplicity of the API, and I love the way you get extra insight for free. I can see myself using the D-Wave machine on real problems in the near future.