TURNING an old PC into a supercomputer worth tens of millions of pounds is now possible, according to a new study.

Supercomputers are thousands of times more powerful than everyday desktops.

But because they are so expensive, only a few large organisations and government departments can afford them.

This has been a major barrier for researchers studying complex things such as the human brain and mental disorders.

Now, scientists at the University of Sussex have come up with a much cheaper option.


Their technology gives basic computers the power to perform extremely complicated calculations and could benefit scientists around the world, the researchers say.

Research Fellow in Computer Science Dr James Knight said: “I think the main benefit of our research is one of accessibility.


“Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose.

“This is quite a high barrier for entry which is potentially holding back a lot of significant research.”

The researchers made their discovery using graphics processing units (GPUs), which require around ten times less energy to run such mind-boggling simulations.

A GPU is a specialised electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images, and is found in many devices, including mobile phones.

An American researcher called Eugene Izhikevich pioneered a similar method for large-scale brain simulation in 2006.

But back then, computers were too slow for it to become widely applicable.

The researchers applied Izhikevich’s technique to a modern GPU, 2,000 times more powerful than the one he used 15 years ago.
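Izhikevich’s model is compact enough to reduce each neuron to two simple update equations per timestep, which is what makes it such a good fit for the thousands of parallel cores on a modern GPU. The snippet below is a minimal illustrative sketch in plain Python of a single such neuron, using textbook “regular spiking” parameters rather than anything taken from the Sussex study; the real simulations run millions of these updates in parallel.

```python
# Izhikevich (2003) neuron model, integrated with simple Euler steps.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # textbook "regular spiking" parameters
dt = 0.5                             # timestep in milliseconds
v, u = c, b * c                      # membrane potential (mV) and recovery variable
I = 10.0                             # constant injected current (illustrative value)

spike_times = []
for step in range(int(1000 / dt)):   # simulate one second of activity
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike threshold reached
        spike_times.append(step * dt)
        v, u = c, u + d              # reset after the spike

print(f"{len(spike_times)} spikes in one simulated second")
```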

They were able to produce a “cutting-edge” model of a monkey’s visual cortex, the brain region responsible for processing visual information.

Until now, mapping the macaque’s neurones and synapses could only be done with a supercomputer.


But the model was up and running in six minutes and capable of processing information 35 per cent faster than a supercomputer, the researchers found.

It took just 7.7 minutes or 8.4 minutes, depending on whether the animal was resting or active, to simulate each second of brain activity.

Thomas Nowotny, Professor of Informatics in the School of Engineering and Informatics, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains.

“However, even small mammals such as mice have 10¹² synaptic connections, meaning that simulations require several terabytes of data, an unrealistic memory requirement for a single desktop machine.”
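To put that figure in context, even the most compact representation of those connections quickly outgrows desktop memory. A rough back-of-the-envelope calculation, assuming purely for illustration that each synapse is stored as a single four-byte weight:

```python
synapses = 10**12        # synaptic connections in a small mammal such as a mouse
bytes_per_synapse = 4    # illustrative assumption: one 32-bit weight per synapse

total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e12:.1f} TB")   # prints 4.0 TB, before indices or delays
```

In practice each connection also needs target indices and delays, pushing the total towards the “several terabytes” Professor Nowotny describes.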


In comparison, when IBM’s Blue Gene/Q supercomputer was tested in 2018, it was ready in five minutes but took 12 minutes to simulate each second of brain activity.

The computer-boosting technology could also improve artificial intelligence, the researchers say.

Dr Knight said: “Our hope for our own research now is to apply these techniques to brain-inspired machine learning so we can help solve problems that biological brains excel at, but which are currently beyond simulations.”

Professor Nowotny added: “This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

The researchers are now exploring ways of cutting down the GPU’s processing time even further.

Dr Knight said: “As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe there is potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity.

“Key components could be implemented directly in hardware, which could lead to even more significant improvements in compute time.”
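In outline, procedural connectivity means regenerating a neuron’s synaptic connections from a deterministic random seed whenever they are needed, rather than storing the full connection matrix in memory. The sketch below, in Python and with illustrative names that are not drawn from the researchers’ code, shows the basic idea.

```python
import numpy as np

def outgoing_targets(pre_neuron_id, n_post, prob, base_seed=1234):
    """Rebuild one neuron's synapse list on demand.

    Re-seeding the generator from the neuron's ID means the same
    connectivity is reproduced every time without ever being stored,
    trading memory for cheap, highly parallel computation.
    """
    rng = np.random.default_rng(base_seed + pre_neuron_id)
    return np.flatnonzero(rng.random(n_post) < prob)

# When neuron 42 spikes, regenerate its targets on the fly instead of
# looking them up in a stored, terabyte-scale connection matrix.
targets = outgoing_targets(42, n_post=100_000, prob=0.01)
```

On a GPU this trade of memory for computation pays off because thousands of threads can regenerate connections in parallel, which is what lets such a large model fit on a single workstation.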

The researchers' findings were published in the journal Nature Computational Science.