
Friday 15 March 2019

Machine learning is implemented on an IBM quantum processor

14 Mar 2019 Hamish Johnston

Machine-learning algorithms have been run on a quantum computer by physicists at IBM. Although the proof-of-concept demonstration did not involve practical tasks, the team hopes that scaling up the algorithms to run on larger quantum systems could give machine learning a boost.

Machine learning is a type of artificial intelligence that involves a computer working out how to do a task by analyzing large numbers of examples of the task being done. A typical task could be to tell the difference between photographs of cats and dogs. The machine-learning system would be “trained” by inputting lots of images of cats and dogs, and the system would create a mathematical model that has a clear boundary between cats and dogs.

Many machine-learning algorithms are “kernel methods”, which determine similarities between patterns. The strategy is to transform the data – the pixels in a digital image, for example – into a higher-dimensional representation that has clear boundaries between classification types. All images of cats, for example, would reside in one region of this higher-dimensional space, whereas all images of dogs would reside in another.
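To make the kernel idea concrete, here is a minimal classical sketch of that intuition. It uses scikit-learn, which is not mentioned in the article and is assumed purely for illustration: points that no straight line can separate in their original space become easy to classify once a kernel implicitly maps them into a higher-dimensional space.

from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two intertwined classes (stand-ins for "cats" and "dogs"): no straight line
# separates them in the original two-dimensional space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear classifier works directly in the original space...
linear_clf = SVC(kernel="linear").fit(X_train, y_train)

# ...while the RBF kernel implicitly maps each point into a higher-dimensional
# representation where a clear boundary between the two classes exists.
rbf_clf = SVC(kernel="rbf").fit(X_train, y_train)

print("linear kernel accuracy:", linear_clf.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_clf.score(X_test, y_test))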
Size restrictions

A challenge for those using this method is that computational limitations restrict the size of the higher-dimensional representation – which in turn limits how detailed the classification can be. A system could distinguish the pointy ears of a cat, for example, but not be able to discern more subtle aspects of a cat’s body shape that would be obvious to a human.

The answer could be to use quantum computers, which – at least in principle – are much more efficient than conventional computers at performing calculations in very large representation spaces. In February, Maria Schuld and Nathan Killoran published a paper in Physical Review Letters that describes two approaches for using quantum computers in machine learning. Schuld and Killoran work for Xanadu, a Toronto-based company that builds optical quantum-computing chips and designs software for quantum computers.

Working independently, Kristan Temme and physicists at IBM have proposed similar strategies and have implemented them using a very basic quantum computer. Temme and colleagues describe their work in Nature.
Hardware add-on

One strategy involves using a quantum computer as a hardware add-on to a conventional machine learning system. In this scenario, data – images of cats and dogs, for example – are sent to a quantum computer to be classified. Similarity data are then returned to the conventional system, which performs machine learning.
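As a rough software sketch of this add-on idea – with the quantum part only simulated in NumPy, and with a toy feature map assumed for illustration rather than taken from the IBM experiment – each data point is encoded into a two-qubit state, the similarity between two points is the squared overlap of their states, and the resulting kernel matrix is handed back to an ordinary classical support-vector machine.

import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

def feature_state(x):
    """Encode a 2D point as a 2-qubit state (4 amplitudes) via simple rotations."""
    def qubit(theta):
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])
    # One rotation per feature; the tensor product gives a 4-dimensional state.
    return np.kron(qubit(x[0]), qubit(x[1]))

def quantum_kernel(A, B):
    """Similarity matrix of squared overlaps |<phi(a)|phi(b)>|^2."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

X, y = make_circles(n_samples=100, factor=0.3, noise=0.05, random_state=0)
K_train = quantum_kernel(X, X)   # the part a quantum processor would supply

# The conventional machine-learning step: an SVM trained on the returned kernel.
clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))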

The second strategy involves performing the learning on a quantum computer, with the assistance of a classical computer.
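A hedged sketch of that hybrid loop: a single simulated qubit stands in for the quantum processor, and a classical computer adjusts its one trainable rotation angle by finite-difference gradient descent. The encoding, data and optimizer here are invented purely for illustration and are not those used by the IBM team.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=50)      # toy one-dimensional data
y = (X > 0).astype(float)            # label is 1 when x is positive

def predict(x, theta):
    """Probability of measuring |1> after encoding x and a trainable rotation."""
    angle = x * np.pi / 2 + theta    # data encoding plus variational parameter
    return np.sin(angle / 2) ** 2

def loss(theta):
    return np.mean((predict(X, theta) - y) ** 2)

theta, lr, eps = 0.0, 0.5, 1e-4
for step in range(200):
    # The classical computer's contribution: estimate the gradient and update.
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

accuracy = np.mean((predict(X, theta) > 0.5) == (y == 1))
print(f"trained theta = {theta:.3f}, training accuracy = {accuracy:.2f}")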

Quantum computers are still at a very early stage of development, so the IBM demonstrations were very basic. The team used two quantum bits (qubits) of IBM’s smallest commercial quantum computer – which has five superconducting qubits. This meant that the quantum representation space only contained four dimensions.
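That dimension count follows from how qubit state spaces grow, a standard fact rather than something spelled out in the article: an n-qubit register spans a 2^n-dimensional space,

\[ \dim = 2^{n}, \qquad 2^{2} = 4 \ \text{(two qubits)}, \qquad 2^{20} = 1\,048\,576 \approx 10^{6} \ \text{(20 qubits)}, \]

which is why the 20-qubit machines mentioned below would already give a roughly million-dimensional representation space.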

While the IBM experiments were successful as a proof of concept, Schuld points out that it is not clear whether scaled-up versions would provide meaningful measures of similarity in practical applications such as learning how to classify pictures of animals.

Temme told Physics World that the team is now working on scaling up their implementations so that they can run on more qubits. IBM, for example, has commercial quantum computers with as many as 20 qubits. He also says that the team is trying to understand what sorts of data sets would benefit most from a quantum approach.


physicsworld.com 15/3/2019
