Robotic – News: Hunting down criminals with three AI technologies / learning to touch like babies / the first decentralized algorithm for autonomous driving
New York, 11.3.2020
Almost daily, robot researchers report extraordinary progress. Although humans remain the role model, robots are increasingly learning abilities that clearly surpass our own. Here are some examples from this week.
Scientists at Dongguk University have constructed an innovative police system called Googi, which is the result of combining three AI technologies: virtual reality, robotics and Big Data. The result: more accurate crime prediction.
The team of scientists from Dongguk University, led by Prof. Joong-Yeon Lim, was prompted to take this step after discovering the sophistication with which criminals use modern technologies.
The first research area focuses on virtual reality, which is used to reconstruct crime scenes through simulation.
The second area studies how robotic devices can detect and respond to crimes when they occur. This technology is used in forensic dentistry to identify victims and suspects.
The third involves large amounts of data: offline and online information is analyzed to predict and prevent crime. Virtual reality data, information provided by citizens, and forensic data collected by robots are analyzed for crime prediction, and the results are delivered to local authorities.
By combining these three modes, the team has developed an algorithm that uses information from crime scenes, local communities, and forensics to enable the police to identify crime hotspots in real time.
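The article does not describe the algorithm's internals, but the data-fusion idea can be sketched as a simple weighted combination of the three signal sources. Everything below — the weights, the district names, and the scores — is invented for illustration, not taken from the Dongguk team's system.

```python
# Hypothetical hotspot scoring: fuse three per-district risk signals
# (VR crime-scene reconstructions, citizen reports, robot-collected
# forensics) into one score. Weights and data are assumptions.

WEIGHTS = {"vr_sim": 0.3, "citizen_reports": 0.4, "forensics": 0.3}

districts = {
    "north": {"vr_sim": 0.2, "citizen_reports": 0.7, "forensics": 0.5},
    "south": {"vr_sim": 0.1, "citizen_reports": 0.2, "forensics": 0.1},
}

def hotspot_score(signals):
    """Weighted sum of the three normalized risk signals."""
    return sum(WEIGHTS[key] * value for key, value in signals.items())

# Rank districts from highest to lowest combined risk.
ranked = sorted(districts, key=lambda d: hotspot_score(districts[d]),
                reverse=True)
print(ranked)
```

In a real deployment the weights would be learned from historical crime data rather than fixed by hand; the sketch only shows how heterogeneous sources can feed one ranking.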
The Googi system also encourages citizen participation. As a result, the communities involved become more secure. Having successfully built a prototype at local level, the research team is now trying to take the technology to a global level in 2022.
A tactile robot finger without blind spots
Researchers at Columbia Engineering have introduced a new type of robotic finger with tactile sensitivity. It can locate the touch with very high precision over a large, multi-curved surface, similar to a human finger.
Until now, it has been difficult to integrate touch sensors into robot fingers with existing methods. A team from Columbia Engineering has taken a new approach: the researchers use light to sense touch, and they structured the data so that it can be processed by machine learning algorithms. Because there are so many partially overlapping signals, the data is too complex for humans to interpret directly. The result is a fully integrated, low-wire-count sensorized robotic finger, built with accessible manufacturing methods and designed for easy integration into dexterous hands.
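The mapping from overlapping light signals to a contact location is a learned lookup, which can be sketched with a toy nearest-neighbor model. The calibration readings and contact points below are invented, and the real system uses far richer data and a trained neural model; this only illustrates the principle of predicting touch position from raw sensor signatures.

```python
import math

# Hypothetical calibration data: each entry pairs a vector of raw
# light-sensor readings (overlapping, hard to interpret by eye)
# with the known (x, y) contact point that produced it.
calibration = [
    ([0.9, 0.1, 0.2], (0.0, 0.0)),
    ([0.2, 0.8, 0.3], (1.0, 0.5)),
    ([0.1, 0.3, 0.9], (0.5, 1.0)),
]

def locate_touch(readings):
    """Return the calibrated contact point whose sensor signature
    is closest (Euclidean distance) to the new readings."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, point = min(calibration, key=lambda pair: dist(pair[0], readings))
    return point

print(locate_touch([0.85, 0.15, 0.25]))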
“Skilled robotic manipulation is now needed in areas such as manufacturing and logistics, and is one of the technologies that will be needed in the longer term to enable personal robotic assistance in other areas, such as healthcare or services,” adds project leader Ciocarlie.
Robots could operate in huge swarms – but what for?
If we want a group of robots to work together in a confined space, we need innovative solutions. When will that matter? Soon, because autonomous vehicles behave much like swarming robots. Scientists at Northwestern University may have found a solution.
Humans are quick decision makers: they see a strange situation on the road and quickly find a way around it. Autonomous cars are not yet that smart. In many cases they react in confusion and cause traffic jams. Autonomous cars are not controlled from a central control panel, which means each individual car has to make decisions on its own.
Scientists have now developed the first decentralized algorithm with a collision-free, non-blocking guarantee. They tested it in simulation with 1,024 robots and on a swarm of 100 real robots in the laboratory. These small robots were able to operate in close proximity to each other without crashing or causing deadlocks. They formed given shapes in less than a minute and worked efficiently and smoothly.
Creating shapes is not the same as driving on the road, but it is not far from it. Both ask the robots to work together in a pattern: in this case a shape, on the road a tight, efficient flow of traffic. It would not be too difficult to achieve this with some kind of central control unit, but it would not be realistic. In practice, central swarm control could be a problem in itself, as it would become a single point of complete failure. Michael Rubenstein, the study's director, said: "In a decentralized system, there is no leader who tells all the other robots what to do. Each robot makes its own decisions. If one robot in a swarm fails, the swarm can still complete the task."
Each robot senses its environment. It detects an empty area and communicates with three or four surrounding robots to determine whether that area will remain free in the next few seconds. Only then does it occupy the area, without causing a collision or deadlock.
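The local rule described above can be sketched on a grid: each robot claims its desired neighboring cell only if the cell is free and no other robot has already claimed it. This is a minimal illustration, not the Northwestern algorithm; the tie-breaking by robot id and the synchronous step are simplifying assumptions.

```python
# Minimal sketch of decentralized cell-claiming on a grid.
# robots: dict id -> (current_cell, desired_cell).
# Lower ids win contested cells, a stand-in for whatever
# local negotiation the real algorithm performs.

def step(robots):
    occupied = {cell for cell, _ in robots.values()}
    claims = {}
    for rid, (cell, target) in sorted(robots.items()):
        # Claim the target only if it is empty and unclaimed; else wait.
        if target not in occupied and target not in claims:
            claims[target] = rid
    # Robots that won their claim move; everyone else stays put,
    # so two robots never enter the same cell (no collision).
    return {rid: (target if claims.get(target) == rid else cell)
            for rid, (cell, target) in robots.items()}

robots = {
    1: ((0, 0), (0, 1)),   # wants the cell to its right
    2: ((1, 1), (0, 1)),   # wants the same cell -> loses to robot 1, waits
    3: ((2, 2), (2, 3)),   # uncontested move
}
print(step(robots))
```

Because a blocked robot simply waits one step and retries, progress is made as long as some robot can move — the same intuition behind the paper's collision-free, deadlock-free guarantee, though the proof there covers the full asynchronous setting.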
This study is not limited to autonomous vehicles. Warehouses could benefit greatly from swarming robots. They could quickly and autonomously pick up goods and deliver them to pickup locations within the facility. Factories could also use swarms of robots – these tiny workers could deliver parts or pick up finished products. This would increase efficiency and reduce costs.