More on quantum learning of unitaries, process tomography, and classical regression

AQIS just concluded, and I presented a poster on transductive and active learning in the quantum learning of unitaries (Wittek, 2014). We had some good discussions on the topic, particularly on the differences between process tomography and the learning of unitary transformations, and on whether this whole idea of comparing them to classical regression analysis makes any sense. This entry summarizes some of the points made.

One of the referees of the extended abstract wrote:

[I]t is essential to avoid the confusion between "learning" and "tomography". In tomography, one tries to infer a classical description of the unknown gate. In learning, the goal is to simulate the application of the gate on a new input state, without necessarily having a classical description. In general, gate tomography (or, more precisely, gate estimation) is a suboptimal strategy for learning. The fact that, in the presence of symmetry, estimation is sufficient to achieve the optimal performance of gate learning is a highly non-trivial result.

This is a crucial distinction that I was not aware of. So in the case of process tomography, we have an explicit, classical description of the estimated transformation \hat{U} that we can use on an arbitrary number of states in the future. In classical learning, this is a pure case of induction: based on a finite set of N training instances, we infer a function, which we then deploy on data instances not present in the training set.
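To make the inductive picture concrete, here is a minimal numerical sketch of estimation-style learning for a single-qubit phase gate. Everything in it (the gate family, the probe states, the sample size) is an illustrative assumption, not the protocol from the poster: measurement statistics on a fixed probe yield a classical estimate \hat{\theta}, and the reconstructed gate can then be applied to any number of future states.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical unknown process: a single-qubit phase gate
# U = diag(1, exp(i*theta)); theta is hidden from the learner.
theta_true = 1.234
U = np.diag([1.0, np.exp(1j * theta_true)])

plus = np.array([1.0, 1.0]) / np.sqrt(2)       # |+> probe
plus_i = np.array([1.0, 1j]) / np.sqrt(2)      # |+i> probe
N = 10_000                                     # uses of the black box

# Estimation: measuring U|+> in the X basis gives P(+) = (1 + cos theta)/2,
# and in the Y basis P(+i) = (1 + sin theta)/2, resolving theta uniquely.
out = U @ plus
p_x = abs(np.vdot(plus, out)) ** 2
p_y = abs(np.vdot(plus_i, out)) ** 2
cos_est = 2 * rng.binomial(N // 2, p_x) / (N // 2) - 1
sin_est = 2 * rng.binomial(N // 2, p_y) / (N // 2) - 1
theta_hat = np.arctan2(sin_est, cos_est)

# Induction: the classical description U_hat is now reusable on
# arbitrarily many new input states.
U_hat = np.diag([1.0, np.exp(1j * theta_hat)])
print(f"true theta = {theta_true:.4f}, estimate = {theta_hat:.4f}")
```

The point of the sketch is only the division of labour: all N uses of the gate are spent on estimation, and what is carried forward to future inputs is purely classical.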

The poster did not concern this case; it discussed only the suboptimal coherent and the optimal incoherent strategies, and how they relate to transduction and induction. The interesting thing about the incoherent strategy is that we perform an optimal POVM measurement, so we actually learn classical information about the unitary, albeit not as much as in the case of process tomography.
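For readers who have not met POVMs in code, the following sketch shows how the outcome statistics of a measure-and-prepare step arise numerically. The three-element trine measurement below is a generic example chosen for simplicity; it is not the optimal covariant POVM of Bisio et al. (2010).

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic three-outcome POVM on a qubit: trine elements
# E_k = (2/3) |psi_k><psi_k| with Bloch angles 0, 2*pi/3, 4*pi/3.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(a / 2), np.sin(a / 2)]) for a in angles]
povm = [(2 / 3) * np.outer(s, s.conj()) for s in states]
assert np.allclose(sum(povm), np.eye(2))   # completeness: sum_k E_k = I

# Outcome probabilities for the input state rho = |0><0| are tr(E_k rho).
rho = np.array([[1, 0], [0, 0]], dtype=complex)
probs = np.real([np.trace(E @ rho) for E in povm])
outcome = rng.choice(3, p=probs / probs.sum())  # one sampled result
print(np.round(probs, 3), outcome)
```

Each outcome carries partial classical information about the measured system, which is the sense in which the incoherent strategy learns something about the unitary without amounting to full tomography.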

In classical regression, we have N training instances, each with a real-valued label: (\mathbf{x}_1, y_1),\ldots, (\mathbf{x}_N, y_N) . In the quantum learning scenario, we instead have N uses of a black box at our disposal. To match this in the classical case, we would need the original function f that generates the training instances: (\mathbf{x}_1, f(\mathbf{x}_1)),\ldots, (\mathbf{x}_N, f(\mathbf{x}_N)) . This access model would not make much difference to classical learning algorithms. In the quantum case, there is an optimal input state that reveals the most about the unitary in question (provided some symmetry, as pointed out by the referee). Furthermore, this optimal input state should be used in parallel, that is, by applying the N-fold tensor product of the unitary to the state (Bisio et al., 2010). Apparently, this theoretical result may not translate well to an implementation: a sequential approach is more feasible. In this case, each subsequent optimal state would depend on what the previous states revealed of the process. To spice things up, this sequence of optimal states could be augmented by classical machine learning controlling the parameters of the estimation (Hentschel & Sanders, 2010).
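For comparison, here is a minimal classical counterpart; the target function f and every parameter below are made up for illustration. Whether we are handed fixed pairs or the oracle f itself, the fitting step is identical; the only extra freedom the oracle gives is choosing where to query, which is the classical analogue of choosing an optimal probe state.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Hypothetical black-box function standing in for the unitary."""
    return np.sin(3 * x)

# N queries to the oracle -- the classical analogue of N uses of the gate.
# With the oracle in hand, we are free to pick the query points x_i.
N = 20
x = rng.uniform(-1, 1, N)
y = f(x)

# Ordinary least squares on a polynomial feature map.
degree = 5
X = np.vander(x, degree + 1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Induction: deploy the inferred function on unseen inputs.
x_new = np.linspace(-1, 1, 5)
residuals = np.vander(x_new, degree + 1) @ w - f(x_new)
print(np.round(residuals, 3))
```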

The next question is what the input and output data might be. I believe it is a clear case of quantum input and output, a distinction I like to make. I find it useful to separate this class of algorithms from ones that operate on classical data while still offering a speedup, like Grover's search on classical databases and its variants. Yes, we can argue that at some point the quantum states have to be initialized from classical data, and at that point we introduce at least linear computational complexity. At the other end of the pipeline, sooner or later we will want classical information back, which implies state tomography of the output states with all its problems. As one visitor to the poster pointed out, quantum machine learning is at a severe disadvantage compared to classical algorithms, and this is one of the reasons. Yet we can picture learning processes where several quantum learners are chained together, or where a learner aids a quintessentially quantum procedure, obviating the need for a transition to the classical domain. So I maintain that it makes sense to talk about quantum input and output data.

References

Bisio, A.; Chiribella, G.; D'Ariano, G.; Facchini, S. & Perinotti, P. Optimal quantum learning of a unitary transformation. Physical Review A, 2010, 81, 032324.
Hentschel, A. & Sanders, B. C. Machine learning for precise quantum measurement. Physical Review Letters, 2010, 104, 063603.
Wittek, P. Transduction and Active Learning in the Quantum Learning of Unitary Transformations. Poster Session at AQIS-14, 14th Asian Quantum Information Science Conference, 2014, Kyoto, Japan.
