*By Nathan Killoran and Josh Izaac*

Last year, we wrote about QML 1.0, our vision of the road ahead for quantum machine learning. In the months since then, we’ve been working furiously to bring that vision to fruition. Today, we can finally announce that **PennyLane**, our general-purpose library for quantum computing and machine learning, now integrates with **PyTorch**, **TensorFlow**, **Strawberry Fields**, **Forest** (Rigetti), and **Qiskit** (IBM). PennyLane is the first software in the world to bring together the best of quantum computing with the best of machine learning. Now anyone can build and train hybrid computations across CPUs, GPUs, and QPUs¹!

The links between quantum computing and machine learning go back several years. However, many of the “textbook” QML algorithms require fault-tolerant quantum computers. In the current era of noisy intermediate-scale quantum (NISQ) devices, a different set of algorithms, tools, and strategies has begun to take shape. With this in mind, it’s worthwhile to step back, present the key ideas for machine learning in the NISQ era, and survey the current state of play.

Taken most literally, *quantum neural networks*, or *QNNs*, are quantum circuits or algorithms which closely mimic the structure of classical neurons and neural networks, while extending and generalizing them with powerful quantum properties. We recently proposed a QNN which naturally takes advantage of photonics, and there are a number of other proposals in the literature.

However, deep learning has moved beyond the original “vanilla” neural network, with specialized architectures such as ConvNets, LSTMs, ResNets, and GANs rising to prominence. What links these different models together is not the (perhaps dated) notion of an artificial neuron, but rather a broader idea which we will call *differentiable computing*.

From this perspective, computation is carried out using continuous functions with known derivatives (or gradients). These functions can have adjustable parameters, and the gradients tell us how to adjust (or *train*) these parameters so that the functions output some desired values. Modern deep learning software libraries, like TensorFlow or PyTorch, are capable of *automatic differentiation*, making gradient-based optimization and training of deep networks near-effortless for the user.
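To make the idea of differentiable computing concrete, here is a toy forward-mode automatic differentiator built from *dual numbers*. This is a from-scratch sketch for illustration only (the `Dual` class is our own invention, not the machinery TensorFlow or PyTorch actually use); it shows how exact derivatives can be propagated through a computation alongside the values themselves, with no finite differences.

```python
import math

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# f(x) = x * sin(x); seeding der=1.0 means "differentiate w.r.t. x"
x = Dual(2.0, 1.0)
y = x * sin(x)
print(y.val)  # f(2) = 2*sin(2)
print(y.der)  # f'(2) = sin(2) + 2*cos(2), exact, not approximated
```

Deep learning libraries use the reverse-mode counterpart of this trick (backpropagation), which computes gradients with respect to millions of parameters in a single backward pass.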

> “A quantum neural network is any quantum circuit with trainable continuous parameters.”

In the NISQ era, quantum computing is increasingly — and productively — being viewed as a form of differentiable computing. After all, quantum computing is just linear algebra in very high-dimensional vector spaces², with quantum gates having the form of matrix multiplications. The parameters of these gates, associated with the strength of the gate or the length of time it is active, are analogous to the parameters of a deep learning model. With this in mind, we can extend our earlier definition. A ‘quantum neural network’ is any quantum circuit with trainable continuous parameters.
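The “gates are parameterized matrices” picture can be written out in a few lines of plain NumPy. The following is a hypothetical toy (not the PennyLane API): a single rotation gate RX(θ) applied to |0⟩, measured in the Pauli-Z basis. The expectation value is cos(θ), and the *parameter-shift rule* recovers its exact gradient from just two circuit evaluations — the same mechanism PennyLane uses on real quantum hardware.

```python
import numpy as np

def rx(theta):
    # RX gate: a 2x2 unitary whose entries depend continuously on theta
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

Z = np.array([[1, 0], [0, -1]])          # Pauli-Z observable
ket0 = np.array([1, 0], dtype=complex)   # initial state |0>

def expval(theta):
    # Circuit output <psi|Z|psi>; analytically equal to cos(theta)
    psi = rx(theta) @ ket0
    return np.real(np.conj(psi) @ Z @ psi)

def gradient(theta):
    # Parameter-shift rule: exact derivative from two shifted evaluations
    return 0.5 * (expval(theta + np.pi / 2) - expval(theta - np.pi / 2))

theta = 0.4
print(expval(theta))    # cos(0.4)
print(gradient(theta))  # -sin(0.4)
```

Because the gradient comes from evaluating the *same* circuit at shifted parameter values, it can be computed on a quantum device that only exposes expectation values — which is exactly what makes gate parameters trainable like neural-network weights.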

This viewpoint of quantum computation also goes by a more technical name in the scientific literature: *variational (quantum) circuits*³. Such circuits were initially proposed for chemistry problems under the name *variational quantum eigensolvers* (VQE), but have lately been extended into full-fledged, general-purpose quantum machine learning algorithms.
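A minimal VQE-style loop can be sketched in NumPy, under the same toy single-qubit setup as above (RX gate, Pauli-Z measurement — an illustration, not a production eigensolver): gradient descent on the gate parameter minimizes the expectation value of the “Hamiltonian” H = Z, driving the circuit toward the ground state with energy −1.

```python
import numpy as np

def expval(theta):
    # <0| RX(theta)^dag Z RX(theta) |0>, analytically cos(theta)
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    psi = rx @ np.array([1, 0], dtype=complex)
    Z = np.array([[1, 0], [0, -1]])
    return np.real(np.conj(psi) @ Z @ psi)

def gradient(theta):
    # Parameter-shift rule: exact gradient from two circuit runs
    return 0.5 * (expval(theta + np.pi / 2) - expval(theta - np.pi / 2))

theta, lr = 0.1, 0.4
for _ in range(200):
    theta -= lr * gradient(theta)  # vanilla gradient descent

print(expval(theta))  # approaches -1.0, the lowest eigenvalue of Z
```

The training loop is structurally identical to fitting a neural network: evaluate a cost, take its gradient, update the parameters — only here the cost is measured on a (simulated) quantum circuit.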

Unlike the more familiar quantum algorithms due to pioneers like Shor and Grover, the variational circuits paradigm has only been established in the past few years — largely because the NISQ era forced a rethink of quantum algorithms, but also due to the increasing cross-fertilization between quantum computing and machine learning.