Machine learning owes much of its power to two scientists whose breakthrough ideas were drawn from physics rather than computer science.

The Nobel Prize in Physics for 2024 has been awarded to John J Hopfield and Geoffrey E Hinton for their pioneering work on machine learning using artificial neural networks, a technological breakthrough that is now at the core of modern artificial intelligence (AI). 

The Royal Swedish Academy of Sciences has honoured their “foundational discoveries and inventions” which have had a profound impact on how machines learn to process and interpret data.

John Hopfield, a physicist from Princeton University, is credited with developing a form of associative memory in 1982. 

This structure, called the Hopfield network, can store and reconstruct patterns in data, such as images, even when they are incomplete or distorted. 

Hopfield’s approach was inspired by physics, specifically the behaviour of atomic spins in magnetic materials. 

In these systems, atoms align their spins to minimise the overall energy, forming stable magnetic domains. In much the same way, Hopfield’s network adjusts the values of its nodes, which play the role of neurons, to minimise an overall energy, allowing it to “recover” stored images or patterns.

This idea laid the groundwork for using physics-based concepts to design computational models capable of learning and memory.
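
To make the energy-minimisation idea concrete, here is a minimal Hopfield-style associative memory sketched in Python with NumPy. It illustrates the general principle rather than reproducing the laureates’ original formulation; the eight-node pattern and function names are invented for this example.

import numpy as np

# Minimal Hopfield-style associative memory (illustrative sketch only).
# Patterns are vectors of +1/-1; weights come from a Hebbian outer-product rule.

def train(patterns):
    """Build a weight matrix that stores the given +1/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def energy(W, state):
    """Hopfield energy; asynchronous updates never increase this value."""
    return -0.5 * state @ W @ state

def recall(W, state, steps=10):
    """Repeatedly update nodes until the state settles into an energy minimum."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-node pattern and recover it from a corrupted copy.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                     # flip two "pixels"
recovered = recall(W, noisy)
print(energy(W, noisy), ">=", energy(W, recovered))
print(recovered)                    # typically matches the stored pattern

Because each update can only lower the network’s energy or leave it unchanged, the corrupted input slides downhill into the nearest stored pattern, which is exactly the “recovery” behaviour described above.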

Geoffrey Hinton, based at the University of Toronto, built upon Hopfield’s framework by creating the Boltzmann machine in 1985. 

This network introduced a new way of learning from data, using statistical physics to detect patterns and generate new ones. 

The Boltzmann machine, named after the physicist Ludwig Boltzmann, who studied thermodynamics and statistical mechanics, learns by sampling possible configurations and adjusting its connections based on the probabilities with which different states occur.

This approach allowed the network to recognise and classify images, a significant step toward the sophisticated image recognition systems used in AI today.
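
The sketch below, in Python with NumPy, gives a sense of that learning rule. It uses a restricted Boltzmann machine trained with a one-step contrastive-divergence update, a simplified variant that Hinton later popularised, rather than the full 1985 algorithm; the tiny binary “images” and all names are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Toy restricted Boltzmann machine trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        """Nudge weights toward the data's statistics and away from the model's own samples."""
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Train on two binary "images" so the model assigns them high probability.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
rbm = TinyRBM(n_visible=4, n_hidden=2)
for _ in range(2000):
    rbm.cd1_step(data)
print(rbm.sample_v(rbm.sample_h(data)[0])[0].round(2))  # reconstructions should be close to the data

After training, patterns projected into the hidden units and back tend to reproduce the training data, because the connection strengths have been pushed towards the statistics of the data and away from the machine’s own random configurations.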

Artificial neural networks, which form the backbone of machine learning, are inspired by the structure of the human brain. 

In these networks, nodes represent neurons, and connections between them mimic synapses. 

Through training, the connections between nodes are strengthened or weakened, enabling the network to learn from examples rather than following explicit instructions. 

This flexible approach allows AI systems to perform tasks like identifying objects in images or understanding natural language.
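
As a concrete, toy-sized example of connections being strengthened and weakened during training, the sketch below (Python with NumPy, all variable names invented for illustration) teaches a two-layer network the XOR function purely from four input-output examples, with no explicit rule programmed in.

import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network learning XOR from examples (illustrative sketch).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden connections
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output connections

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for _ in range(5000):
    # Forward pass: each layer weights and combines the signals from the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust (strengthen or weaken) connections to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # usually approaches [0, 1, 1, 0] as the connections are tuned

Each pass nudges the connection weights in the direction that reduces the prediction error, which is the sense in which the network “learns from examples” rather than following instructions.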

“The laureates’ work has already been of the greatest benefit,” says Ellen Moons, Chair of the Nobel Committee for Physics. 

“In physics, we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties.”

While AI applications such as language translation or image recognition have become common in everyday life, the work of Hopfield and Hinton has also been instrumental in scientific research. 

For example, artificial neural networks have helped analyse the massive datasets involved in particle physics, including the search for the Higgs boson, and are used in astronomy to find exoplanets. 

More recently, AI has been applied in materials science, predicting molecular structures and helping develop new materials with tailored properties, such as those used in next-generation solar cells.

The development of machine learning over the past few decades, particularly since the 2010s, has been enabled by massive increases in computing power and the availability of large datasets for training networks. 

What began with Hopfield’s network of roughly 30 nodes in the 1980s, kept small by the limited computing power then available, has grown into modern networks with trillions of parameters, such as the large language models used in today’s AI systems.

Hinton’s work, particularly his role in the resurgence of interest in neural networks during the 2000s, helped pave the way for the current deep learning revolution. 

His innovations, including the development of more efficient training methods for deep networks, have led to widespread AI applications in areas ranging from healthcare to autonomous vehicles.

Although computers do not “think” in the way humans do, the advances made by Hopfield and Hinton have allowed machines to mimic some brain-like functions, such as memory and pattern recognition. 

Their work has reshaped not only computer science but also the broader scientific landscape, where machine learning is now a powerful tool for solving complex problems.

Thanks to their discoveries, the world is in an era where AI can assist with everything from diagnosing diseases to designing new technologies.


CareerSpot News