Showing posts with label Robot. Show all posts

Friday, August 6, 2021

Engineers who don't play games can't do autopilot well

Human beings may one day "hand over" their lives to autonomous driving, yet few people know that many companies are entrusting this life-and-death technology to a group of "game-playing" engineers.

This is not a joke. 

"There are even people in the industry using the GTA 5 game engine to do research and development related to autonomous driving." A person engaged in research and development of autonomous driving technology said to Pinwan . GTA 5 is a very popular open world adventure video game. The content involves violence, gangsters, gun battles, etc. Of course, it also includes grabbing a walking car and then rampaging in the virtual world. 

At first glance, it is hard to imagine any connection between such a "crude" game and "safety-first" autonomous driving.

In fact, what engineers are after is the role of GTA 5 as a ready-made "simulation platform". 

Simply put, simulation-platform testing means testing the autonomous driving system in a virtual world that mimics real roads, so that its code can be improved faster and at lower cost.

Achieving autonomous driving in the broad sense is no less difficult than realizing strong artificial intelligence. Although the field still looks hot, companies today remain a long way from commercially deploying L4 systems and turning a profit, and safety is one of the most important reasons.

In March 2018, an Uber self-driving car struck and killed a pedestrian, which directly caused Uber's road-test permit to be revoked and led regulators in many countries to treat test vehicles more strictly. On the other hand, large amounts of real-vehicle scene data are an essential condition for the continuous "evolution" of autonomous driving. In 2016, the RAND Corporation pointed out that an autonomous driving system would need to be tested over 11 billion miles to be ready for mass-production deployment. This means that even a fleet of 100 test vehicles running 24/7 at an average speed of 25 miles per hour (40 km/h) would take about 500 years. So a paradox appears: regulators believe a car must be safe enough before it can go on the road, but technically, autonomous driving must log more road miles and collect more real data in order to become safer.
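The roughly-500-year figure can be sanity-checked with a few lines of arithmetic. The sketch below is purely illustrative; the fleet size, speed, and uptime are the figures quoted above:

```python
# Sanity-checking the "about 500 years" figure quoted above.
TARGET_MILES = 11_000_000_000      # miles RAND estimated are needed
FLEET_SIZE = 100                   # test vehicles
SPEED_MPH = 25                     # average speed (about 40 km/h)
HOURS_PER_YEAR = 24 * 365          # 7x24 operation

miles_per_year = FLEET_SIZE * SPEED_MPH * HOURS_PER_YEAR   # 21,900,000
years = TARGET_MILES / miles_per_year
print(round(years))  # prints 502, i.e. "about 500 years"
```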

Therefore, practitioners have turned their attention, one after another, to simulation platforms.

Similar to actual road testing, simulation testing of autonomous driving also needs to absorb large amounts of scene data to accelerate algorithm iteration. Judging from published test data, Waymo, an early entrant formally established in 2009, had by January 2020 driven 20 million miles on real roads and 10 billion miles in virtual simulation: two entirely different orders of magnitude.

Simulation testing also reduces companies' R&D and operating costs. The gap between 20 million miles and 11 billion miles is enormous, and almost no company could afford the time and capital cost of covering it with real-vehicle testing alone. Waymo burned roughly US$1 billion a year on its Robotaxi (self-driving taxi) project, and installing lidar alone cost US$75,000 per vehicle. Studies have shown that large-scale intelligent simulation systems cut testing cost to only about 1% of road testing while multiplying the effective test mileage thousands of times over.

Since the Uber incident, countries have become vigilant about companies' autonomous driving road tests, and the management of real-vehicle testing on public roads has grown more and more stringent. Road tests are confined to licensed test sites and designated roads, and because road capacity is limited, companies often have to queue. These objective constraints have slowed the accumulation of data from real-vehicle testing.

At the same time, fixed sites and designated roads also make the scenes in real-vehicle tests relatively limited, unable to cover various special road conditions, in other words, long-tail scenarios. Long-tail scenarios are usually understood as all the sudden, low-probability, unpredictable situations: intersections with broken traffic lights, drunk drivers, extreme weather, and so on. On a simulation platform, to exhaust the scenarios an autonomous driving system may encounter and ensure its safety and reliability, practitioners need to simulate and test long-tail scenarios far more extensively.

As the above suggests, the industry has strong demand for simulation testing of autonomous driving. So what exactly is it?

The team of Professor Henry Liu of the University of Michigan's intelligent and connected transportation research center once told Pinwan: "In simple terms, a simulation test is like building a game based on the real world and letting an autonomous car keep driving in that virtual world." You could even say that, to some extent, data from players driving in GTA 5 can be put to use for testing.

This is indeed the case: using game engines such as Unity and Unreal Engine (UE4) as the basis for autonomous-driving simulation platforms has become the choice of many companies. Simulation platforms built on UE4 include the open-source AirSim and CARLA as well as Tencent's TAD Sim. TAD Sim draws on Tencent's accumulated game technology, using in-game scene restoration, 3D reconstruction, physics engines, MMO synchronization, agent AI and other techniques to improve the fidelity and efficiency of simulation testing. Baidu's Apollo platform chose to cooperate with Unity to build its full-stack open-source autonomous driving software platform.

The main reason manufacturers choose game engines as simulation platforms is that they enable a full-stack closed-loop simulation, especially of the perception module: the engine can reconstruct a three-dimensional environment and, within it, simulate input signals such as cameras and lidar.
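As an illustration of what "sensor simulation" means, here is a minimal, self-contained sketch (not any vendor's actual code) of a 2D "lidar" casting rays into a scene whose obstacle positions the simulator knows exactly; the obstacle layout and beam count are invented for the example:

```python
import math

# Assumed scene: two circular obstacles, each as (center, radius), in metres.
OBSTACLES = [((4.0, 0.0), 1.0), ((0.0, 6.0), 0.5)]
MAX_RANGE = 20.0  # readings are capped at the sensor's maximum range

def ray_circle(origin, angle, center, radius):
    """Distance along the ray to a circle, or None if the ray misses it."""
    ox, oy = origin
    cx, cy = center
    dx, dy = math.cos(angle), math.sin(angle)
    fx, fy = ox - cx, oy - cy
    # Solve |origin + t*dir - center|^2 = radius^2 for the nearest t >= 0.
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None

def scan(origin=(0.0, 0.0), n_beams=8):
    """One simulated sweep: nearest hit per beam, capped at MAX_RANGE."""
    ranges = []
    for i in range(n_beams):
        angle = 2 * math.pi * i / n_beams
        hits = [d for obs in OBSTACLES
                if (d := ray_circle(origin, angle, *obs)) is not None]
        ranges.append(min(hits, default=MAX_RANGE))
    return ranges

print([round(r, 2) for r in scan()])
# → [3.0, 20.0, 5.5, 20.0, 20.0, 20.0, 20.0, 20.0]
```

A real platform does the same thing with millions of rays per second against full 3D meshes, which is exactly the workload game engines are optimized for.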

Simulation amounts to reconstructing the real world. For perception-algorithm training, the simulation system carries the ground truth of every scene element and can automatically generate varied weather and road conditions without manual labeling, ensuring coverage. Ground truth here means the objective attributes and values of all objects: what the human eye or an autonomous-driving sensor perceives is an observation, whereas ground truth is an object's absolutely objective attribute, independent of any observer. Because every element is generated by the simulation system itself, the system knows the objective value of everything in the scene without any observation, and the labels the perception system needs can be produced directly from this ground truth.

Traditionally, perception training requires manual labeling: for example, drawing a bounding box around a little girl riding a bicycle. The labor cost of third-party labeling can run to a billion US dollars a year, which is one reason Waymo uses large numbers of virtual tests to complete algorithm development and regression testing.
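The auto-labeling idea can be sketched in a few lines: because a simulator knows each object's true 3D pose, a 2D bounding-box label falls out of simple camera projection, with no human annotator involved. Everything below (the pinhole camera parameters, the scene) is an invented illustration, not any company's real pipeline:

```python
# A simulator knows every object's true 3D state ("ground truth"), so 2D
# bounding-box labels can be generated by projection instead of by hand.

FOCAL = 800.0          # pinhole focal length in pixels (assumed)
CX, CY = 640.0, 360.0  # principal point for a 1280x720 virtual camera

def project(point_3d):
    """Project a camera-frame point (x right, y down, z forward) to pixels."""
    x, y, z = point_3d
    return (FOCAL * x / z + CX, FOCAL * y / z + CY)

def auto_label(center, half_extent):
    """Return a 2D box (xmin, ymin, xmax, ymax) from a ground-truth 3D box."""
    cx, cy, cz = center
    hx, hy, hz = half_extent
    # Enumerate the 8 corners of the axis-aligned 3D box and project them.
    corners = [(cx + sx * hx, cy + sy * hy, cz + sz * hz)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    pts = [project(c) for c in corners]
    us = [u for u, _ in pts]
    vs = [v for _, v in pts]
    return (min(us), min(vs), max(us), max(vs))

# A "cyclist" 10 m ahead and slightly left: the label comes for free.
box = auto_label(center=(-1.0, 0.2, 10.0), half_extent=(0.4, 0.9, 1.0))
print([round(v, 1) for v in box])  # → [515.6, 297.8, 596.4, 457.8]
```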

When an engineer adjusts an algorithm, a run on the simulation platform may take only a few minutes, whereas a road test might require booking the autonomous fleet half a day or a day in advance. And as long as computing power permits, a simulation platform can run highly concurrent tests, 1,000 or even 2,000 vehicles at the same time.
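The high-concurrency idea is plain process-level parallelism: each scenario is an independent simulation, so thousands can run at once given enough compute. A toy sketch follows; the "scenario" here is an invented car-following check, not a real simulator:

```python
from concurrent.futures import ProcessPoolExecutor

def run_scenario(seed):
    """Stand-in for one simulated drive: does the (toy) planner keep a
    safe following distance for the whole run? Deterministic per seed."""
    distance = 30.0  # metres to the lead vehicle
    state = seed
    for _ in range(1000):
        state = (state * 1103515245 + 12345) % (2 ** 31)  # toy PRNG
        lead_brake = (state % 100) / 100.0                # lead-car braking, 0..0.99
        distance += 1.5 - 2.0 * lead_brake                # toy closing dynamics
        if distance <= 0.0:
            return False  # collision
        distance = min(distance + 0.5, 50.0)              # planner backs off
    return True  # with these toy dynamics the margin always suffices

if __name__ == "__main__":
    # Fan 200 independent scenarios out across all CPU cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, range(200)))
    print(f"{sum(results)}/{len(results)} scenarios passed")
```

Swap `range(200)` for thousands of seeds (or scenario descriptions) and a cluster scheduler for the local pool, and this is the shape of the "1,000 vehicles at once" workflow the paragraph describes.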

In summary, the core capabilities of autonomous-driving simulation testing include: geometric restoration of the scene (3D scene simulation plus sensor simulation); restoration of scene logic (decision and planning simulation); physical restoration of the scene (control plus vehicle-dynamics simulation); and high-concurrency cloud simulation.

Therefore, autonomous driving companies new and old are recruiting versatile talent with backgrounds in engines such as Unity and Unreal. An industry insider told Pinwan: "For one thing, the simulation environment needs to be rendered ever more realistically; for another, these people can optimize on top of the engine to reduce the cost of the whole simulation test."

Recruitment requirements for simulation engineers of an autonomous driving company 

Autonomous driving companies are also looking for ways to improve the efficiency of simulation tests. Improving the fidelity of virtual scenes is usually considered a reasonable way, but it is not that easy. 

Professor Henry Liu said: "Because this environment is built on a mathematical model, if we want the computed results to be closer to the real world, the model becomes more complicated and computation slower." Building a highly realistic virtual world may be no less difficult than realizing autonomous driving itself. Simulation testing is not perfect, and its results have limitations; it cannot solve every problem of autonomous driving for companies. Although Waymo's 20 million real-world miles and more than 15 billion simulated miles as of 2020 would crush most latecomers in the industry, 20 million miles are a drop in the bucket on the road to autonomous driving. Data is not omnipotent, and there is no evidence that simulation can fully reproduce the complexity of the real world.

At present, all data-driven methods have scenarios in which they fail. First, there is uncertainty in the data itself: for many occluded objects, even human labels carry a great deal of uncertainty. Second, because the models are highly complex, it is difficult for a virtual engine to represent them all, especially in European and American scenes.

Unlike the human eye, an algorithm finds it much harder to recognize images: once a few key elements in an image change subtly, the algorithm's output can differ enormously, in cases where human eyes would never be fooled. Massive data is therefore not a sufficient condition for L4 or even L5 autonomy, and hundreds of millions of kilometers of "safe driving" on a virtual simulation platform cannot be expected to prove absolute safety on real roads.

A professional told Pinwan: "Although using game engines for simulation testing has solved some autonomous-driving problems to a certain extent, in the final analysis its focus is still testing. Its main purpose is to keep the algorithm from repeating mistakes it has already made, or to test in advance the scenarios engineers can think of. It is more about ensuring the correctness of the whole system's logic."

This article is from the WeChat public account "Pinwancool" (ID: pinwancool), author: Hong Yuhan; republished by 36氪 with authorization.

Tuesday, June 15, 2021

The most human-like "robot" in the world is itself a human

If robots had a choice, I wonder whether they would want to be made by humans. After all, although robots carry heavy loads, conduct surveys, serve and wash dishes, and try to take over humanity's tedious chores, they are dismissed as clumsy and inflexible. And if one is too flexible and clever, it is judged frightening: some people fear being attacked by a robot in the very next second.

Of course, if robots really could choose, that would be even more frightening.

In this era when robots grow stronger and make people uneasy, there are people who hope humans can help robots grow stronger, determined to use robots to eliminate loneliness: not only the loneliness of the humans being served, but also the loneliness of the people behind the robots.

In this robot restaurant, the waiters are all human

Perhaps because of its serious aging problem, Japan's attitude toward robots is much more relaxed. Although not quite every household can accept a robot and treat it as a family member, Japan's acceptance of robots is far higher than that of other countries.

An obvious example: other countries' film and television works about robots are "Love, Death & Robots" and "I, Robot," while Japan's are "My Robot Girlfriend" and "Uncle Robot." The former are keen to imagine robots threatening humans in a future world, while the latter strive to reconstruct the relationship between humans and robots: the body may be a machine, but the emotions belong to humans.

The uncle disguised as a robot went to the toilet and shocked the people around him. Picture from: "Uncle Robot"

The cafe that opened in Tokyo this June is the best example of "machine body, human heart." It is the product of the Avatar Robot Cafe DAWN ver.β project; we can simply call it the Avatar Cafe.

One feature of the Avatar Cafe is its robot waiters, but unlike other robot cafes and restaurants, its waiters are all controlled by humans.

Robots normally have their own operating systems: humans issue commands and the robots complete the tasks, often with one person controlling several robots through the system. What makes the Avatar Cafe different is that behind each robot is a real person, the robot's dedicated operator, called a "pilot."

The person lying in the hospital bed is the "pilot" behind the robot

An important reason why these "pilots" need to provide services through robots is that they cannot provide coffee services on site. They come from all over Japan. Most of them suffer from diseases such as amyotrophic lateral sclerosis and spinal muscular atrophy. They have limited mobility and are disabled people who have difficulty getting out of their homes.

With the Avatar Robot Cafe, "pilots" can work from home: operating the robots remotely from home or a hospital bed, they can serve customers in the cafe in real time. One ALS patient working at the cafe has even worked as a barista; he can enter data into a computer so that coffee is made to each customer's preferences.

The robots, called OriHime-D, are 120 cm tall; "pilots" control them through a computer operated by eye movements and fingertips. Each robot has a built-in speaker through which the "pilot" can talk with guests, and pilots who cannot speak can use a synthetic voice service.

Promotional poster of Avatar Cafe

Through OriHime-D, people with limited mobility can see guests' movements and expressions from home and respond with a human touch. They can even convey their feelings through the emoji on the robot's "face" and its body movements, communicating physically with customers.

Before this coffee shop opened, the founder of robot maker Ory Laboratories had long dreamed of using his robots to help people with disabilities work. Even before the Avatar Cafe had raised enough crowdfunding, he was already promoting the mobile robot OriHime-D into more restaurants.

Cafe picture

A reporter from Japan's Mainichi Shimbun once visited a local burger restaurant to experience OriHime-D's service. The pilot behind it was a 24-year-old woman with spinal muscular atrophy. She said she often serves elderly customers who have not adapted to the mobile-internet era, introducing them to ordering and payment in ways suited to their habits.

When elderly customers feel lost in the cashless shop, it is the person behind the machine who helps. If someone cannot complete the self-service payment, OriHime-D's eyes light up green, meaning the pilot is stepping in: "You can try turning the barcode to the right to make it easier for the machine to read."

Robots providing services at the cash register

She also makes product recommendations from a personal perspective, whether based on age and gender or on the season; all of these are subjective, human suggestions. From this perspective, OriHime-D is a robot that "does not need" artificial intelligence, because the pilot behind it takes its place.

"Replace" robots with humans and become rigidly needed

Robots were born to do high-risk jobs or reach places humans cannot; their original purpose was to help humans.

Most robot applications we see belong to heavy industry: carrying heavy objects, cutting materials, assembling components. The most common around us are service robots in restaurants and sweeping robots at home.

As a robot, OriHime-D also takes on the mission of doing what humans cannot. It is hard to make a single person a spokesperson for people with disabilities; at the very least, in terms of cost control it is not a cost-effective business.

The robot's existence makes it possible for these people to be employed. Even if some cases serve mainly as marketing, it is still a great help to people who cannot leave home. Kentaro Yoshifuji, the initiator of the Avatar Cafe project and creator of the robot, said:

Of course, there may be failures at the beginning, but all we have to do is find the cause and improve. My ultimate goal is to create a hopeful society in which people who use wheelchairs or are bedridden due to illness can work and serve others. I want to use this cafe as an opportunity to make working through avatar robots an option for society.

OriHime-D in service

As a child, Kentaro Yoshifuji was often unable to attend school because of illness, and found it hard to reintegrate after recovering; being unable to communicate with others was his long-standing experience. These experiences made him hope to "eliminate loneliness." In college he proposed the idea of communicating with people through robots, but the deeper his understanding of artificial intelligence grew, the more he felt he had been looking in the wrong direction.

In his view, it must be humans, not computer programs, that reduce other people's loneliness. So he began connecting lonely, housebound people with ordinary people, letting them communicate and create new memories.

In this process, the people behind the robots can communicate with others: they no longer need to be trapped in a room of a few square meters, but can "board" a robot to go to work and serve. Those being served can also talk with real people; they do not have to face artificial intelligence and canned answer templates, because behind the robots are real human attendants.

Looks like a robot waiter, but actually humans are providing services

China has similar public-welfare platforms for the visually impaired. When a visually impaired person runs into difficulty, they can upload a photo or similar to the platform for help, and a sighted volunteer can lend their eyes and voice for one or two minutes. In this link, visually impaired people rely not on automatic recognition algorithms but on connection, help, and communication between people.

This kind of person-to-person communication is probably more important than you think. In "Anyone Afraid of Being Killed by Robots, Only Japanese" we noted that Japanese nursing homes buy certain robots to accompany the elderly. That is true, but beyond that, what elderly Japanese still need is human love. Robotics scholar Marketta Niemelä said:

When I was investigating in Japan, I found that Japanese elderly-care institutions rarely purchase robotic equipment in large quantities. On the contrary, people need more care from human beings.

Until humorous and charming artificial-intelligence systems like the one in "Her" exist, the world may need more OriHime-Ds to help humans communicate with one another, helping both the server and the served.

The male protagonist in the movie is in love with artificial intelligence through headphones. Picture from: "Her"

Return to the essence of robots and serve humanity

Among the people who meet the criteria for using OriHime-D there is one famous figure: Hawking. Also an ALS sufferer, Hawking had working tools built by the world's top technology companies: Apple provided facial recognition and eye tracking for his wheelchair, Microsoft provided the computer system, and Intel provided an assistive context-aware toolkit.

But Hawking was only one person in this predicament: onlookers marveled at his brilliant mind, and clever people supplied tools to help him continue his research. Others in the same situation are routinely overlooked. Even with fundraisers like the Ice Bucket Challenge, the neglected still find it hard to get practical help, or to prove their worth.

The technical support behind Hawking's wheelchair came from major technology companies

Robots like OriHime-D may not be as powerful as Hawking's wheelchair, but they bring hope to many more patients.

Through these seemingly clumsy robots, they can "go out" to work and communicate with people normally. These early projects may be selling points for commercial stores, like the small "Bear Claw Coffee" shops, but if they persist, this could become a normal part of the future.

Paralyzed runner Adam Gorlitsky can take on a marathon with the help of an exoskeleton; paralyzed patients can use brain-computer interfaces to compensate for their bodies' impairments; visually impaired people can quickly adapt to the internet through assistive features such as screen narration.

Paralyzed runner Adam Gorlitsky broke the world record with the help of exoskeleton

Whether robots or exoskeletons, these technologies were created from the start to help humans. As society progresses, similar technologies may help more overlooked niche groups, giving them more opportunities and a better future.

While Boston Dynamics' robot dogs and Virgin Galactic probe the boundaries of commercial technology and space, there are still people using robots to serve the neglected, hoping to help those who cannot move build more connections. While people worry about the negative effects of technological development, robots like OriHime-D still bring peace of mind and hope.
