
Wednesday, 31 January 2024

Machine learning takes the hassle out of cold-atom experiments

31 Jan 2024 Margaret Harris


Automatic adjustments: A view into the vacuum chamber containing the Tübingen group's rubidium magneto-optical trap (MOT). The frequency of the MOT lasers is controlled by a reinforcement learning agent. (Courtesy: Malte Reinschmidt)

Cold atoms solve many problems in quantum technology. Want a quantum computer? You can make one from an array of ultracold atoms. Need a quantum repeater for a secure communications network? Cold atoms have you covered. How about a quantum simulator for complicated condensed-matter problems? Yep, cold atoms can do that, too.

The downside is that doing any of these things requires approximately two Nobel Prizes’ worth of experimental apparatus. Worse, the tiniest sources of upset – a change in laboratory temperature, a stray magnetic field (cold atoms also make excellent quantum magnetometers), even a slammed door – can unsettle the complicated arrays of lasers, optics, magnetic coils and electronics that make cold-atom physics possible.


To cope with this complexity, cold-atom physicists have begun exploring ways of using machine learning to augment their experiments. In 2018, for example, a team at the Australian National University developed a machine-optimized routine for loading atoms into the magneto-optical traps (MOTs) that form the starting point for cold-atom experiments. In 2019, a group at RIKEN in Japan applied this principle to a later stage of the cooling process, using machine learning to identify new and effective ways of cooling atoms to temperatures a fraction of a degree above absolute zero, where they enter a quantum state known as a Bose-Einstein condensate (BEC).
Let the machine do it

In the latest development in this trend, two independent teams of physicists have shown that a form of machine learning known as reinforcement learning can help cold-atom systems handle disruptions.

“In our laboratory, we found that our BEC-producing system was fairly unstable, such that we only had the ability to produce BECs of reasonable quality for a few hours out of the day,” explains Nick Milson, a PhD student at the University of Alberta in Canada, who led one of the projects. Optimizing this system by hand proved challenging: “You have a procedure underpinned by complicated and generally intractable physics, and this is compounded by an experimental apparatus which is naturally going to have some degree of imperfection,” Milson says. “This is why many groups have tackled the problem with machine learning, and why we turn to reinforcement learning to tackle the problem of building a consistent and reactive controller.”

Reinforcement learning (RL) works differently from other machine learning strategies that take in labelled or unlabelled input data and use it to predict outputs. Instead, RL aims to optimize a process by reinforcing desirable outcomes and punishing poor ones.
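To make that loop concrete, here is a minimal sketch in Python of the reinforce-and-punish cycle described above. It is not either group's code: the "experiment" is a toy function with a hidden optimum, and the agent is a simple epsilon-greedy learner that gradually favours the control setting earning the highest average reward.

```python
# Minimal sketch of the reinforcement-learning loop: act, observe a reward,
# and shift future choices towards the actions that were rewarded.
import random

def run_experiment(setting):
    """Toy stand-in for one experimental cycle: reward peaks at setting = 0.3."""
    noise = random.gauss(0, 0.05)
    return -(setting - 0.3) ** 2 + noise

settings = [i / 10 for i in range(11)]      # candidate control values
value = {s: 0.0 for s in settings}          # running reward estimates
counts = {s: 0 for s in settings}
epsilon = 0.1                               # exploration rate

for cycle in range(500):
    # Explore occasionally, otherwise exploit the best-known setting
    if random.random() < epsilon:
        s = random.choice(settings)
    else:
        s = max(settings, key=lambda k: value[k])
    reward = run_experiment(s)                   # desirable outcome = high reward
    counts[s] += 1
    value[s] += (reward - value[s]) / counts[s]  # incremental average update

print("best setting found:", max(settings, key=lambda k: value[k]))
```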

In their study, Milson and colleagues allowed an RL agent called an actor-critic neural network to adjust 30 parameters in their apparatus for creating BECs of rubidium atoms. They also supplied the agent with 30 environmental parameters sensed during the previous BEC-creation cycle. “One may think of the actor as the decision-maker, trying to figure out how to act in response to different environmental stimuli,” Milson explains. “The critic is trying to figure out how well the actions of the actor are going to perform. Its job is essentially to provide feedback to the actor by assessing the ‘goodness’ or ‘badness’ of potential actions taken.”
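The division of labour between actor and critic can be sketched as two small neural networks, shown here in PyTorch. The 30-dimensional observation (environmental sensors) and 30-dimensional action (control parameters) follow the numbers quoted above, but the layer sizes, activations and everything else are illustrative assumptions, not the Alberta group's architecture.

```python
# Sketch of an actor-critic pair: the actor proposes control parameters from
# sensed conditions; the critic scores how good those choices look.
import torch
import torch.nn as nn

N_OBS, N_ACT = 30, 30

# Actor: maps environmental readings to control parameters, squashed to
# [-1, 1] so each output can be rescaled to its real experimental range.
actor = nn.Sequential(
    nn.Linear(N_OBS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_ACT), nn.Tanh(),
)

# Critic: takes a (state, action) pair and estimates its value; its feedback
# is what steers the actor towards better decisions.
critic = nn.Sequential(
    nn.Linear(N_OBS + N_ACT, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

obs = torch.randn(1, N_OBS)                       # one cycle's sensor readings
action = actor(obs)                               # proposed control parameters
score = critic(torch.cat([obs, action], dim=-1))  # critic's assessment
print(action.shape, score.item())
```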

After training their RL agent on data from previous experimental runs, the Alberta physicists found that the RL-guided controller consistently outperformed humans at loading rubidium atoms into a magnetic trap. The main drawback, Milson says, was the time required to collect training data. “If we could introduce a non-destructive imaging technique like fluorescence-based imaging, we could essentially have the system collecting data all the time, no matter who was currently using the system, or for what purpose,” he tells Physics World.
Step by step

In separate work, physicists led by Valentin Volchkov of the Max Planck Institute for Intelligent Systems and the University of Tübingen, Germany, together with his Tübingen colleague Andreas Günther, took a different approach. Instead of training their RL agent to optimize dozens of experimental parameters, they focused on just two: the magnetic field gradient of the MOT, and the frequency of the laser light used to cool and trap rubidium atoms in it.

The optimum value of the laser frequency is generally one that produces the greatest number of atoms N at the lowest temperature T. However, this optimum value changes as the temperature drops due to interactions between the atoms and the laser light. The Tübingen team therefore allowed their RL agent to adjust parameters at 25 sequential time steps during a 1.5-second-long MOT loading cycle, and “rewarded” it for getting as close as possible to the desired value of N/T at the end, as measured by fluorescence imaging.
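The shape of that episode can be sketched as follows: 25 sequential actions (a magnetic-field gradient and a laser frequency) over one loading cycle, with a single reward at the end based on the measured atom number N and temperature T. The "MOT" below is a deliberately fake toy model and the target value of N/T is hypothetical; only the episode-and-reward structure follows the description above.

```python
# Sketch of one 25-step MOT loading episode with a sparse end-of-cycle reward.
import random

N_STEPS = 25
TARGET_N_OVER_T = 1e9   # hypothetical desired N/T; the real target is set by the experiment

def simulate_mot(schedule):
    """Toy stand-in for one ~1.5 s loading cycle; returns (N, T) from 'imaging'."""
    n_atoms, temperature = 1e6, 1e-3            # arbitrary starting values
    for gradient, detuning in schedule:
        n_atoms *= 1.0 + 0.01 * gradient        # fake loading response
        temperature *= 1.0 - 0.01 * detuning    # fake cooling response
    return n_atoms * random.uniform(0.95, 1.05), max(temperature, 1e-6)

def reward(n_atoms, temperature):
    # Larger (less negative) the closer the measured N/T gets to the target.
    return -abs(n_atoms / temperature - TARGET_N_OVER_T)

# One episode: an RL agent would propose these 25 (gradient, frequency) pairs;
# here they are random placeholders.
schedule = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(N_STEPS)]
N, T = simulate_mot(schedule)
print(f"end-of-cycle reward: {reward(N, T):.3g}")
```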

While the RL agent did not come up with any previously unknown strategies for cooling atoms in the MOT – “a quite boring result”, Volchkov jokes – it did make the experimental apparatus more robust. “If there is some perturbation at the time scale of our sampling, then the agent should be able to react to it if it’s trained accordingly,” he says. Such automatic adjustments, he adds, will be vital for creating portable quantum devices that “cannot have PhD students tending them 24-7”.
A tool for complex systems

Volchkov thinks RL could also have wider applications in cold-atom physics. “I firmly believe that reinforcement learning has the potential to yield new modes of operations and counter-intuitive control sequences when applied to the control of ultracold quantum gas experiments with sufficient degrees of freedom,” he tells Physics World. “This is especially relevant for more complex atomic species and molecules. Eventually, analysing these new modes of control might shed light on physical principles governing more exotic ultracold gases.”



Milson is similarly enthusiastic about the technique’s potential. “The use-cases are probably endless, spanning all areas of atomic physics,” he says. “From optimization of loading atoms into optical tweezers, to designing protocols in quantum memory for optimal storage and retrieval of quantum information, machine learning seems very well suited to these complicated, many-body scenarios found in atomic and quantum physics.”

The Alberta team’s work is published in Machine Learning: Science and Technology. The Tübingen team’s work appears in an arXiv preprint. This article was amended on 31 January 2024 to clarify Valentin Volchkov’s affiliations and details of the Tübingen experiment.

Margaret Harris is an online editor of Physics World

FROM PHYSICSWORLD.COM   1/2/2024
