Advances in quantum machine learning in 2016 and early 2017

Posted on 13 May 2017

It has been nearly a year and a half since I last tried to digest the flood of QML papers. My backlog exceeded a hundred unread papers again, so I could not postpone processing them anymore. I gave up any attempt at being comprehensive -- it would have been futile.

The terminology is evolving. QML used to mean quantum-enhanced machine learning, that is, learning algorithms that use a quantum protocol to gain an improvement. The meaning is expanding: it still includes quantum-enhanced learning algorithms, but also a new application area of learning theory in many-body physics, and now we can also safely lump a whole range of applications of classical ML in quantum physics under this label. Perhaps the most accurate definition of QML is this: as long as the work has an AI/ML-ish element plus something vaguely quantum, it belongs here. I am perfectly content with this definition, and I reviewed the latest advances with this broad view of the field. Below I summarize my much-biased findings.

No shortage of events

There have been at least three workshops over the last year and a half: one in Belgium, one in South Africa, and one in Canada. A workshop on machine learning and quantum many-body physics is coming up soon in China, as is a quantum machine learning workshop in Canada. South Africa also hosted a summer school, which was the best scientific event I have ever attended. Among regular conferences, NIPS had at least two QML-related papers.

The topic is slowly moving beyond purely academic interest. Toronto has a relevant Meetup group. The Creative Destruction Lab in Toronto is busy organizing the first cohort of a startup incubator focusing on QML technologies -- I happen to be involved with this one. I am also organizing a reading group at my institute, which is sufficiently open to attract the occasional third-party contribution. The site quantummachinelearning.org was revised to act as a gateway to further information. The revision was so successful that we now exceed a hundred visitors in a good month. We also set up a LinkedIn group, which does not do much yet.

No shortage of reviews

Apart from a book, an introduction, and an overview paper from earlier, we now have a fairly comprehensive laundry list of QML papers (Biamonte et al., 2016) and a survey on the theoretical foundations of quantum learning (Arunachalam & de Wolf, 2017). Add to this a revised Wikipedia article and another introduction (Schuld & Petruccione, 2016), and you will realize you are spoilt for choice if you want to get your head around the topic.

Machine learning and quantum many-body physics

Easily the most interesting development is that the prehistoric connection between many-body physics and machine learning has been revived, producing a series of high-profile papers. Most notably, representing quantum states by Boltzmann machines led to a Science paper (Carleo & Troyer, 2017). Later it was shown that the ansatz holds up well for long-range entanglement, and given how compact this representation is, the implications are far-reaching (Deng et al., 2016; Deng et al., 2017). Another study proved a form of no-flattening theorem, showing that shallow Boltzmann machines have their limits (Gao & Duan, 2017).
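
To make the ansatz concrete, here is a minimal numpy sketch of the RBM representation of a quantum state. The closed form below follows from tracing out the hidden units analytically; the toy sizes and random weights are my own, and the actual papers optimize these parameters variationally:

```python
import numpy as np

# Minimal sketch of the restricted Boltzmann machine (RBM) ansatz for a
# quantum state, in the spirit of Carleo & Troyer. Tracing out the hidden
# units analytically gives a closed form for the (unnormalized) amplitude
# of a spin configuration s in {-1, +1}^n.
def rbm_amplitude(s, a, b, W):
    """psi(s) = exp(a.s) * prod_j 2*cosh(b_j + sum_i W_ji s_i)."""
    theta = b + W @ s  # effective field on each hidden unit
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

# Toy example: 4 visible spins, 8 hidden units, random complex weights.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8
a = rng.normal(size=n_visible) + 1j * rng.normal(size=n_visible)
b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
W = 0.1 * (rng.normal(size=(n_hidden, n_visible))
           + 1j * rng.normal(size=(n_hidden, n_visible)))

print(rbm_amplitude(np.array([1, -1, 1, 1]), a, b, W))
```

The compactness is apparent: \(n_\mathrm{v} + n_\mathrm{h} + n_\mathrm{v} n_\mathrm{h}\) parameters describe a state living in a \(2^{n_\mathrm{v}}\)-dimensional Hilbert space.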

A parallel line of work studied classical (Carrasquilla & Melko, 2017) and quantum phase transitions (Broecker et al., 2016) in a supervised learning scenario. This line of thought was extended to unsupervised settings, where the transition point is not known a priori (Nieuwenburg et al., 2017; Wan et al., 2016, 2017). Boltzmann machines come in handy for many-body state tomography (Kieferova & Wiebe, 2016; Torlai et al., 2017). A marginally related new application is classifying separable and entangled states: one approach works via old-school feature engineering of the state space followed by bagging (Lu et al., 2017), another via shallow neural networks deployed on correlations (Ma & Yung, 2017). Machine learning is becoming a standard tool for tackling fundamental questions in physics.

If states can be represented by learning methods, and learning algorithms are so successful on a range of physics problems, we may wonder whether there is a more profound connection. Initial signs indicate an affirmative answer. Tensor networks and deep learning have some natural similarities: we can map restricted Boltzmann machines to tensor network states (Chen et al., 2017), and we can establish an equivalence between convolutional arithmetic circuits and tensor networks (Cohen et al., 2017).

More annealing

We can never get enough of quantum annealing. Articles on the topic keep multiplying in great numbers, but I perceive a clear trend emerging, at least as far as D-Wave's commercial architecture is concerned: attention has shifted from finding the optimum of discrete optimization problems to sampling. First of all, simulated quantum annealing can be exponentially faster than ordinary simulated annealing (Crosson & Harrow, 2016), and the famous \(10^8\)-fold speedup that quantum annealing achieved on a hand-crafted problem erodes in more realistic scenarios (Mandrà et al., 2016; King et al., 2017). A negative result on boosting also emerged (Dulny III & Kim, 2016). So the paradigm shift is welcome.

On the sampling side, much has been happening. First of all, we can train a truly quantum Boltzmann machine by adding a transverse field term to the Hamiltonian, although the impact on learning is inconclusive (Amin et al., 2016). This already has an extension to reinforcement learning (Crawford et al., 2016). Effective temperature estimation remains a problem (Raymond et al., 2016), but we can train fully visible Boltzmann machines (Korenkevych et al., 2016) and, in fact, arbitrarily connected probabilistic graphical models (Benedetti et al., 2016). This leaves me with hope that our proposal for a quantum-enhanced Markov logic network can be efficiently embedded in the Chimera graph (Wittek & Gogolin, 2017).
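
As a toy illustration of what training a fully visible Boltzmann machine involves, here is a sketch where the gradient is the gap between data and model correlations. On hardware, the model expectations would come from annealer samples at some effective temperature; with only a few spins, I enumerate them exactly instead (the names and toy data are mine, not from the cited papers):

```python
import numpy as np
from itertools import product

def model_correlations(J, beta=1.0):
    """Exact <s_i s_j> under p(s) ~ exp(-beta * E(s)), E(s) = -s.J.s / 2."""
    n = J.shape[0]
    states = np.array(list(product([-1, 1], repeat=n)))
    energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(-beta * energies)
    p /= p.sum()
    return np.einsum('s,si,sj->ij', p, states, states)

rng = np.random.default_rng(1)
n = 4
data = rng.choice([-1, 1], size=(100, n))   # toy "training set"
data_corr = data.T @ data / len(data)

J = np.zeros((n, n))
for _ in range(200):
    # Log-likelihood gradient: data correlations minus model correlations.
    J += 0.05 * (data_corr - model_correlations(J))
    np.fill_diagonal(J, 0.0)                # no self-couplings
```

The effective-temperature problem mentioned above enters exactly here: if the annealer samples at an unknown \(\beta\), the model correlations are estimated at the wrong temperature and the gradient is biased.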

A twist on the learning-by-annealing paradigm is learning to anneal, which is the reverse problem: we specify the target state we want at the end of the anneal, and we learn the weights of the spin system that produce it (Behrman et al., 2016).
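
Here is a deliberately crude caricature of this reverse problem, not the scheme of Behrman et al.: given a target two-qubit state, tune the couplings of a small spin Hamiltonian by finite-difference gradient descent so that its ground state approaches the target:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def hamiltonian(w):
    """H = J Z1 Z2 + h1 X1 + h2 X2 for w = (J, h1, h2)."""
    J, h1, h2 = w
    return J * np.kron(Z, Z) + h1 * np.kron(X, I2) + h2 * np.kron(I2, X)

def infidelity(w, target):
    _, vecs = np.linalg.eigh(hamiltonian(w))
    return 1 - abs(target @ vecs[:, 0])**2  # overlap with the ground state

target = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # a Bell state
w = np.array([-0.5, 0.2, 0.2])
eps = 1e-5
for _ in range(300):
    base = infidelity(w, target)
    g = np.array([(infidelity(w + eps * e, target) - base) / eps
                  for e in np.eye(3)])
    w -= 0.1 * g                            # gradient descent on the weights
print(w, infidelity(w, target))
```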

Finally, something of purely theoretical interest: we should consider open quantum systems in the adiabatic framework (Wild et al., 2016). I wonder why there are not more studies on machine learning and open quantum systems.

Quantum neural networks that make more sense

I am negatively biased against any paper that discusses quantum neural networks, but a few papers passed my filter recently. A cool blog post gives more details. A highlight is a quantum photonic neural network with applications in physics, where the backpropagation phase is fully classical, though the number of ancilla systems might blow up (Wan et al., 2016). Similar applications show up in (Romero et al., 2016).

A NIPS poster studied a quantum perceptron and proved rigorous bounds on its generalization performance (Wiebe et al., 2016), which I believe is a step in the right direction.

It has always been clear that whatever quantum computer or quantum machine learning technology we build, it will work hand in hand with traditional computers. A new paper adds neuromorphic hardware to the mix, and argues that the combination of technologies enables building complex deep learning networks (Potok et al., 2017).

HHL plateau

We are really running out of options for deploying the HHL algorithm for an exponential speedup. We have seen a quantum singular value decomposition protocol for nonsparse low-rank matrices (Rebentrost et al., 2016), quantum discriminant analysis (Cong & Duan, 2016), data fitting for prediction (Schuld et al., 2016), and recommendation systems (Kerenidis & Prakash, 2016).

I found a new iterative algorithm interesting. It uses Newton's method (a second-order numerical algorithm) for gradient descent in unconstrained polynomial optimization problems (Rebentrost et al., 2016). Technically, there is one constraint, a normalization, but it is not something the user chooses. It is a curious algorithm that uses HHL as a subroutine, and it needs more and more initial resources as the number of iterations grows. The iterative idea has already been improved upon (Kerenidis & Prakash, 2017).
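
For reference, the classical routine being accelerated looks as follows. The quantum algorithm replaces the linear solve in the update step with an HHL-like subroutine; this toy version just calls numpy on a small quartic objective of my own choosing:

```python
import numpy as np

# Newton's method on an unconstrained polynomial objective
# f(x, y) = x^4 + y^4 + (x - 1)^2 + (y + 2)^2.
def grad(x):
    return np.array([4 * x[0]**3 + 2 * (x[0] - 1),
                     4 * x[1]**3 + 2 * (x[1] + 2)])

def hess(x):
    return np.array([[12 * x[0]**2 + 2, 0.0],
                     [0.0, 12 * x[1]**2 + 2]])

x = np.array([2.0, 2.0])
for _ in range(10):
    # The quantum version solves this linear system with HHL.
    x = x - np.linalg.solve(hess(x), grad(x))
print(x)
```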

With all these tempting applications of HHL, it would be great to see more implementations. Continuous-variable systems seem to be a good candidate for a scalable one (Lau et al., 2017).

Reinforcement learning

Quantum control sees more and more instances of reinforcement learning, even though the optimization landscape is almost always trap-free (Russell et al., 2016). Example applications include designing dynamic decoupling sequences with recurrent neural networks (August & Ni, 2017), predicting qubit decoherence (Mavadia et al., 2017), and stepwise modification of quantum channel properties (Clausen & Briegel, 2016).

Reinforcement learning is the transition point between classical and quantum learning protocols. The system under control is quantum, so it is worth considering the scenario where the reinforcement learning protocol itself is quantum and acts directly on the system without extracting classical information, or extracting less of it. An elementary reinforcement learning protocol on superconducting qubits, still involving classical information, has been considered (Lamata, 2017), and bounds on what can be achieved by fully quantum agents have been established (Dunjko et al., 2016).

Quantum machine learning at your fingertips

The zoo is expanding: more and more quantum technologies are maturing to the point where they become suitable for machine learning. One of the most fun papers I read on QML is a recent experiment using the IBM Quantum Experience to implement a distance measure (Schuld et al., 2017). The starting point of this paper is that we have been looking at the problem the wrong way: we have tried to twist existing learning algorithms into some quantum-friendly form, whereas we could just come up with new learning algorithms based on what a particular quantum hardware platform can do. In other superconducting news, there is a proposal for small-scale reinforcement learning that should be feasible with near-future technology (Lamata, 2017).
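
The circuit in the paper exploits interference on the quantum hardware; as a stand-in, here is a tiny numpy simulation of the closely related swap test, which reads a distance out of the measurement statistics of a single ancilla (the vectors and names are mine, purely for illustration):

```python
import numpy as np

def swap_test_overlap(a, b, shots=10000, seed=2):
    """Estimate |<a|b>|^2: the ancilla reads 0 with probability (1 + |<a|b>|^2) / 2."""
    rng = np.random.default_rng(seed)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    p0 = 0.5 * (1 + abs(np.vdot(a, b))**2)   # ideal ancilla statistics
    counts0 = rng.binomial(shots, p0)         # simulated measurement shots
    return 2 * counts0 / shots - 1

a = np.array([1.0, 2.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 1.0, 0.0])
overlap = swap_test_overlap(a, b)
# For real, nonnegative overlaps, ||a - b||^2 = 2 - 2<a|b> on unit vectors.
distance = np.sqrt(max(0.0, 2 - 2 * np.sqrt(max(0.0, overlap))))
print(overlap, distance)
```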

Optical systems are also coming up. I mentioned the continuous-variable version of HHL (Lau et al., 2017), for which hardware is being built by a startup. Coherent states can calculate the radial basis function (RBF) kernel and generalizations thereof in a quantum optical system (Chatterjee & Yu, 2016), and another photonic setup has been demonstrated on decision problems (Naruse et al., 2016).
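
The coherent-state kernel falls out of a textbook identity: the overlap of two coherent states is

\[
\langle \alpha | \beta \rangle = e^{-\frac{1}{2}|\alpha|^2 - \frac{1}{2}|\beta|^2 + \bar{\alpha}\beta},
\qquad
|\langle \alpha | \beta \rangle|^2 = e^{-|\alpha - \beta|^2},
\]

so encoding two data points in the amplitudes \(\alpha\) and \(\beta\) and measuring the overlap of the corresponding states evaluates a Gaussian kernel at a fixed bandwidth.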

Quantum learning theory

A great survey paper (Arunachalam & de Wolf, 2017) achieved a conceptual clarification: quantum learning theory is to quantum-enhanced machine learning what statistical learning theory is to machine learning. A handful of papers have asked the theoretical question whether quantum resources enable us to learn anything that we cannot learn with classical resources, and this field of inquiry now has a name. The authors of the survey also have results on optimal quantum sample complexity, which turns out to differ from the classical case only by a constant factor (Arunachalam & de Wolf, 2016). Given that states can be entangled, it is not even clear what a learning problem is: this was rigorously defined for supervised learning, establishing a clear separation of training and testing phases even in the quantum case (Monràs et al., 2017), and also for pattern recognition (Holik et al., 2016). A connection between quantum learning and cryptography has also been made (Grilo & Kerenidis, 2017).

Tags: Quantum machine learning, Machine learning, Quantum information theory
