There’s been a lot of talk about quantum computers being able to solve far more complex problems than conventional supercomputers. The authors of a new paper say they’re on the path to showing an optical computer can do so, too.

The idea of using light to carry out computing has a long pedigree, and it has gained traction in recent years with the advent of silicon photonics, which makes it possible to build optical circuits using the same underlying technology used for electronics. The technology shows particular promise for accelerating deep learning, and is being actively pursued by Intel and a number of startups.

Now Chinese researchers have put a photonic chip to work tackling a fiendishly complex computer science challenge called the subset sum problem. It has some potential applications in cryptography and resource allocation, but primarily it’s used as a benchmark to test the limits of computing.

Essentially, the task is to work out whether any subset of a given selection of numbers adds up to a chosen target number. The problem is NP-complete, which in practice means the time known algorithms need to solve it grows exponentially as the selection of numbers gets bigger, making it fundamentally hard to crack large instances in a reasonable time using conventional computing approaches.
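To make the problem concrete, here is a minimal brute-force sketch in Python (the numbers and target are illustrative, not taken from the paper):

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Brute-force check: does any subset of `numbers` sum to `target`?"""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False

# A set of n numbers has 2**n subsets, so the search space doubles
# with every number added -- the root of the problem's difficulty.
print(subset_sum([2, 5, 9, 13], 16))  # True: 2 + 5 + 9 = 16
print(subset_sum([2, 5, 9, 13], 6))   # False: no subset sums to 6
```

This exhaustive search is exactly the approach that becomes hopeless as the set grows, which is what makes the problem a useful benchmark.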

However, optical computers work very differently from standard ones, and the device built by the researchers was able to solve the problem in a way that suggests future versions could outpace even the fastest supercomputers. They even say it could be a step on the way to “photonic supremacy,” mimicking the term “quantum supremacy” used to denote the point at which quantum computers outperform classical ones.

The chip the researchers designed is quite different from a conventional processor, though, and doesn't rely on silicon photonics. While most chips can be reprogrammed, each of the ones built by the researchers can solve only a single instance of the subset sum problem. A laser was used to etch the task into a special glass, creating a network of waveguides that channel photons through the processor, along with a series of junctions where the light beams split, pass each other, or converge.

They used a laser and a series of lenses and mirrors to shoot a beam of light into one end of the processor, and a light detector picked up the output as it emerged from the other side. The network of channels is designed to split the light into many different beams that explore all possible combinations of numbers in parallel.

The team created two chips designed to solve the problem for sets of three and four numbers, and showed the devices could handle both instances easily and efficiently. Problems that small aren't especially tough; you could probably do them on the back of an envelope, and conventional chips can work them out in fractions of a nanosecond.

However, the researchers say their approach could be scaled up fairly simply to much bigger instances of the problem, and that's where things get interesting. With their approach, computing time is simply a function of the speed of light and the longest path through the network. The former is fixed, and the latter grows only gradually with problem size, so their calculations suggest computing time shouldn't change much even when scaling up to far bigger problems.

Conventional chips have to do a brute-force search of every possible combination of numbers, which expands rapidly as the problem gets bigger. The group’s calculations suggest that their chip would surpass a state-of-the-art Intel i7 CPU at a problem size of just six, which they think they should be able to demonstrate in their next experiment. Their estimates also predict their approach would overtake the world’s most powerful supercomputer, Summit, at a problem size of just 28.
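The lopsidedness of that comparison comes from how fast a brute-force search space grows. A quick back-of-the-envelope sketch (the problem sizes match those quoted above; the framing of the optical side as "longest path" is the paper's argument, not computed here):

```python
# A brute-force search over n numbers must, in the worst case,
# examine all 2**n subsets. The article's crossover points:
#   n = 6  -> where the chip is predicted to beat an Intel i7
#   n = 28 -> where it is predicted to beat the Summit supercomputer
for n in (6, 28, 50):
    print(f"n = {n:2d}: {2**n:,} subsets to check by brute force")
```

Even at the modest size of 28, the exhaustive search already covers over 268 million subsets, while the optical chip's runtime, by the researchers' argument, tracks only its longest waveguide path.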

Obviously, the proof is in the pudding, and until they’ve built much larger chips it’s hard to predict if there might be unforeseen roadblocks. The fact that each chip is bespoke for a particular problem would seem to make it impractical for most applications.

While there is some prospect of mapping real-world problems onto subset problems that could be solved in this way, it’s likely any practical application would use an alternative chip design. But the researchers say it’s a great demonstration of the potential for photonic approaches to vastly outstrip conventional computers at some problems.

I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.
