Artificial intelligence develops all by itself

19. April 2020 · By Horst Buchwald


 

New York, 19.4.2020

 

Researchers have developed software that borrows concepts from Darwinian evolution, including “survival of the fittest”, to create AI programs that improve one generation at a time without human intervention. The program replicated decades of AI research in a matter of days, and its developers believe that one day it could discover new approaches to AI.

 

“While most people were still taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved in the work. “This is one of those works that could set in motion a lot of future research.”

 

Creating an AI algorithm takes time. Take neural networks, a common form of machine learning used for tasks such as language translation and driving cars. These networks loosely mimic the structure of the brain and learn from training data by adjusting the strength of the connections between artificial neurons. Smaller subcircuits of neurons perform specific tasks – for example, recognizing traffic signs – and researchers can spend months working out how to wire them up so that they work together seamlessly.

 

In recent years, scientists have accelerated the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.

 

So Quoc Le, a computer scientist at Google, and his colleagues developed a program called AutoML-Zero, which could create AI programs with virtually no human input, using only basic mathematical concepts that a high school student would know. “Our ultimate goal is to actually develop novel concepts of machine learning that even researchers couldn’t find,” he says.

 

The program discovers algorithms with a loose approximation of evolution. It starts by creating a population of 100 algorithm candidates by randomly combining mathematical operations. Then it tests them on a simple task, such as an image recognition problem where it has to decide whether an image shows a cat or a truck.
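A toy sketch of that first step, randomly combining basic math operations into candidate programs, might look like the following (the instruction set, program length, and memory layout here are illustrative assumptions, not AutoML-Zero’s actual choices):

```python
import random

# Toy instruction set of basic math operations (an assumption; the real
# system draws from a much larger set of elementary ops over typed memory).
OPS = ["add", "sub", "mul", "max", "min"]

def random_program(length=8, n_vars=4):
    """Build one candidate: a random straight-line program where each
    instruction applies an op to two variable slots and stores the
    result in a third."""
    return [(random.choice(OPS),
             random.randrange(n_vars),   # first operand slot
             random.randrange(n_vars),   # second operand slot
             random.randrange(n_vars))   # destination slot
            for _ in range(length)]

# A starting population of 100 random candidates, as described above.
population = [random_program() for _ in range(100)]
```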

 

In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting part of their code to create slight variations of the best algorithms. These “children” are added to the population while the oldest programs are removed, and the cycle repeats.
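The copy, mutate, and cull cycle described above can be sketched end to end as a tiny evolutionary loop (a hedged toy version: the program encoding, the fitness task of approximating f(x) = 2x, and all parameters are assumptions made for illustration, not the paper’s setup):

```python
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_program(length=6, n_vars=4):
    """One candidate: a random straight-line program over variable slots."""
    return [(random.choice(list(OPS)),
             random.randrange(n_vars),
             random.randrange(n_vars),
             random.randrange(n_vars)) for _ in range(length)]

def run(program, x, n_vars=4):
    mem = [x] + [0.0] * (n_vars - 1)        # slot 0 holds the input
    for op, a, b, dst in program:
        mem[dst] = OPS[op](mem[a], mem[b])
    return mem[-1]                          # last slot is the output

def fitness(program):
    # Toy stand-in for the cat-vs-truck task: reward programs that
    # approximate f(x) = 2x on a few sample inputs (an assumption).
    return -sum(abs(run(program, x) - 2 * x) for x in (1.0, 2.0, 3.0))

def mutate(program):
    """Copy a parent and randomly replace one instruction."""
    child = list(program)
    child[random.randrange(len(child))] = random_program(length=1)[0]
    return child

population = [random_program() for _ in range(100)]
for _ in range(200):                        # evolutionary cycles
    sample = random.sample(population, 10)  # compare a few candidates...
    parent = max(sample, key=fitness)       # ...keep the top performer,
    population.append(mutate(parent))       # add its mutated child,
    population.pop(0)                       # and retire the oldest.
```

Retiring the oldest rather than the worst candidate is the variant known as regularized evolution, which keeps the population from stagnating on early winners.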

 

The system creates thousands of these populations at once, allowing it to run tens of thousands of algorithms per second until it finds a good solution. The program also uses tricks to speed up the search, such as occasionally exchanging algorithms between populations to avoid evolutionary dead ends, and automatically sorting out duplicate algorithms.
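The duplicate-removal trick can be illustrated by fingerprinting each candidate’s behavior on a few probe inputs rather than comparing its code; in this sketch candidates are plain Python functions, which is an assumption for brevity:

```python
def evaluate(program, x):
    # Stand-in evaluator: a "program" here is just a Python function
    # (an assumption; the real system interprets instruction lists).
    return program(x)

def signature(program, probes=(1.0, 2.0, 3.0)):
    """Fingerprint a candidate by its outputs on a few probe inputs;
    candidates with identical fingerprints behave identically on the
    probes and can be treated as duplicates."""
    return tuple(round(evaluate(program, x), 6) for x in probes)

candidates = [lambda x: x + x,   # behaves the same as 2 * x
              lambda x: 2 * x,
              lambda x: x * x]
seen, unique = set(), []
for prog in candidates:
    sig = signature(prog)
    if sig not in seen:          # keep only behaviorally new programs
        seen.add(sig)
        unique.append(prog)
```

Here `x + x` and `2 * x` collide on every probe, so only two behaviorally distinct candidates survive.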

 

In a preprint paper published last month on arXiv, the researchers show that the approach can stumble across a number of classical machine learning techniques, including neural networks. The solutions are simple compared with today’s most advanced algorithms, Le admits, but he says the paper is a proof of principle, and he is optimistic that the approach can be scaled up to much more complex AIs.

 

Nevertheless, Joaquin Vanschoren, a computer scientist at Eindhoven University of Technology, says it will be a while before the approach can compete with the state of the art. One way to improve the program, Vanschoren says, would be not to start it from scratch, but to seed it with some of the tricks and techniques people have already discovered. “We can prime the pump with learned machine learning concepts.”

 

This is something Le wants to work on. Concentrating on smaller problems instead of whole algorithms is also promising, he adds. His group published another paper on arXiv on April 6, which used a similar approach to redesign a popular ready-made component used in many neural networks.

 

But Le also believes that expanding the library of mathematical operations and giving the program even more computing resources could enable it to discover entirely new AI capabilities. “This is a direction we really want to go in,” he says. “To discover something really fundamental that would otherwise take people a long time to find.”
