Friday, August 6, 2021

Engineers who don't play games can't do autonomous driving well

In the future, human beings are likely to "hand over" their lives to autonomous driving, yet few people know that many companies are entrusting this life-or-death technology to a group of "game-playing" engineers.

This is not a joke. 

"There are even people in the industry using the GTA 5 game engine to do research and development related to autonomous driving." A person engaged in research and development of autonomous driving technology said to Pinwan . GTA 5 is a very popular open world adventure video game. The content involves violence, gangsters, gun battles, etc. Of course, it also includes grabbing a walking car and then rampaging in the virtual world. 

At first glance, it is hard to imagine what such a "crude" game has to do with "safety-first" autonomous driving.

In fact, what engineers are after is GTA 5's role as a ready-made "simulation platform".

Simply put, simulation-platform testing means testing the autonomous driving system in a virtual world that mimics real roads, so that the autonomous driving software can be improved faster and at lower cost.

Achieving autonomous driving in the broad sense is no less difficult than achieving strong artificial intelligence. Although the field still looks very hot, companies today remain a long way from commercially deploying L4-level systems and turning a profit, and safety is one of the most important reasons.

In March 2018, an Uber self-driving car struck and killed a pedestrian, which directly led to the revocation of its road test permit and prompted regulators in many countries to treat test vehicles on public roads more strictly. On the other hand, a large amount of real-world driving data is one of the essential conditions for autonomous driving to keep "evolving". In 2016, the RAND Corporation pointed out that an autonomous driving system would need to be tested for 11 billion miles to meet the conditions for mass-production deployment. This means that even a fleet of 100 test vehicles, driving 24 hours a day, 7 days a week at an average speed of 25 miles (40 kilometers) per hour, would need about 500 years to finish. A paradox thus appears: regulators believe a car must be safe enough before it can go on the road, but technically, autonomous driving must rely on more road driving to collect more real data and become safer.
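As a sanity check on that figure, here is a minimal back-of-the-envelope calculation in Python using only the numbers quoted above (fleet size, speed and the RAND mileage target); it is illustrative arithmetic, not anything from the original study:

```python
# Rough check of the "about 500 years" estimate using the figures quoted above.
TARGET_MILES = 11_000_000_000      # miles RAND estimates are needed for mass production
FLEET_SIZE = 100                   # test vehicles
AVG_SPEED_MPH = 25                 # average speed in miles per hour
HOURS_PER_YEAR = 24 * 365          # driving 7x24 with no downtime

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * HOURS_PER_YEAR   # ~21.9 million miles
years_needed = TARGET_MILES / miles_per_year
print(f"{years_needed:.0f} years")                             # ~502 years
```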

Therefore, practitioners have turned their attention to simulation platforms one after another.

Like real-road testing, autonomous driving simulation testing also needs to absorb a large amount of scenario data to accelerate the iteration of the algorithms. Judging from published test data, Waymo, one of the earliest entrants into autonomous driving research, was formally established in 2009; as of January 2020 it had driven 20 million miles on real roads and 10 billion miles in virtual simulation tests. Those are two completely different orders of magnitude.

Simulation testing also reduces companies' R&D and operating costs. Neither 20 million miles nor 11 billion miles is a small distance; if it all had to be driven with real vehicles, almost no company could afford the time and capital. Waymo was burning roughly US$1 billion a year on its Robotaxi (self-driving taxi) project, and the installation cost of the lidar alone reached US$75,000. Studies have shown that large-scale intelligent simulation systems greatly reduce the cost of real-vehicle testing: the cost is only about 1% of that of road testing, while the mileage covered can be expanded to thousands of times that of actual road tests.

Since the Uber incident, countries have become vigilant about companies' autonomous driving road tests, and the management of real-vehicle tests on public roads has grown increasingly strict: test licenses are issued to applicants only for designated test sites and roads. Because the capacity of these sites and roads is limited, companies often have to queue up. These objective restrictions have slowed the accumulation of data through real-vehicle testing.

At the same time, fixed sites and designated roads also make the scenarios covered by real-vehicle testing relatively limited, which cannot satisfy the testing requirements of various special road conditions, in other words, the long-tail scenarios. Long-tail scenarios are usually understood as all the sudden, low-probability, unpredictable situations, such as intersections with broken traffic lights, drunk drivers, extreme weather, and so on. On a simulation platform, in order to exhaust the various scenarios the autonomous driving system may encounter and ensure its safety and reliability, practitioners need to run many more simulations and tests on long-tail scenarios.
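To make the idea of long-tail coverage concrete, here is a small illustrative sketch of how a test team might sample rare scenario combinations for a simulator; the parameter names and ranges are invented for this example and do not come from any particular platform:

```python
import random

# Hypothetical long-tail scenario description: each field is a knob the
# simulator can vary. None of these names come from a real platform.
SCENARIO_SPACE = {
    "weather":       ["clear", "heavy_rain", "dense_fog", "snow"],
    "traffic_light": ["normal", "stuck_red", "all_dark"],      # faulty signals
    "rogue_actor":   ["none", "jaywalker", "drunk_driver"],
    "time_of_day":   ["noon", "dusk", "night"],
}

def sample_long_tail_scenario(rng: random.Random) -> dict:
    """Sample one scenario, deliberately over-weighting the rare combinations."""
    scenario = {k: rng.choice(v) for k, v in SCENARIO_SPACE.items()}
    # Bias toward long-tail cases: re-roll until at least one abnormal element appears.
    while scenario["traffic_light"] == "normal" and scenario["rogue_actor"] == "none":
        scenario = {k: rng.choice(v) for k, v in SCENARIO_SPACE.items()}
    return scenario

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(5):
        print(sample_long_tail_scenario(rng))  # each dict would be fed to the simulator
```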

The above makes clear that the industry has a great deal of demand for autonomous driving simulation testing. So what exactly is it?

The team of Professor Henry Liu at the Intelligent and Connected Transportation Research Center of the University of Michigan once told Pinwan: "In simple terms, a simulation test is like building a game based on the real world and letting an autonomous car keep driving in that virtual world." You could even say that, to some extent, the data from players driving in GTA 5 can be used for testing.

That is indeed the case, and using game engines such as Unity and Unreal Engine 4 (UE4) as the basis for autonomous driving virtual simulation platforms has become the choice of many companies. Simulation platforms built on UE4 include the open-source AirSim and Carla as well as Tencent's TAD Sim. TAD Sim draws on Tencent's own accumulation in gaming, using in-game scene restoration, 3D reconstruction, physics engines, MMO synchronization, agent AI and other technologies to improve the fidelity and efficiency of simulation testing. Baidu's Apollo platform chose to cooperate with Unity to build its full-stack open-source autonomous driving software platform.
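To give a feel for what building on such a platform looks like, below is a minimal sketch using the Python client of the open-source Carla simulator mentioned above. It assumes a Carla server is already running locally on the default port, and the blueprint names follow Carla's conventions; treat it as an illustration rather than a production test harness:

```python
import carla  # Carla's Python client; requires a running Carla server

# Connect to a locally running Carla simulator (default port 2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle at one of the map's predefined spawn points.
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)  # hand control to the built-in traffic manager

# Attach a simulated RGB camera, the kind of sensor input discussed below.
camera_bp = blueprint_library.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```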

The main reason autonomous driving companies choose game engines as simulation platforms is that they enable full-stack, closed-loop simulation, especially of the perception module: a three-dimensional environment can be reconstructed, and various input signals such as cameras and lidar can be simulated within it.

Simulation amounts to rebuilding the real world. When training perception algorithms, the simulation system comes with the ground truth of every scene element and can automatically generate all kinds of weather and road conditions without manual labeling, ensuring coverage. Ground truth here means the objective attributes and values of all objects: what the human eye or an autonomous driving sensor sees is an observation, whereas ground truth is the absolute, observer-independent property of an object. Because every element is generated by the simulation system itself, the system knows the objective values of everything in the scene without any observation, and the labels the perception system needs can be produced directly from the simulator's ground truth.
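What "labels for free from ground truth" means can be sketched in a few lines. The simulator interface below is hypothetical (the class and field names are invented for illustration); the point is only that every object's class and box are known by construction, so no human annotator is involved:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundTruthActor:
    """What a simulator knows about every object it spawned, by construction."""
    class_name: str   # e.g. "pedestrian", "cyclist", "car"
    bbox_2d: tuple    # (x_min, y_min, x_max, y_max) in image pixels
    occluded: bool

def auto_label_frame(actors: List[GroundTruthActor]) -> List[dict]:
    """Turn simulator ground truth into training labels, no human annotator needed."""
    return [
        {"label": a.class_name, "box": a.bbox_2d}
        for a in actors
        if not a.occluded  # simple filter; a real pipeline would also keep partial occlusions
    ]

# Example: one simulated frame containing a cyclist and an occluded car.
frame_actors = [
    GroundTruthActor("cyclist", (120, 200, 180, 320), occluded=False),
    GroundTruthActor("car", (400, 220, 620, 380), occluded=True),
]
print(auto_label_frame(frame_actors))  # [{'label': 'cyclist', 'box': (120, 200, 180, 320)}]
```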

Traditionally, perception training requires manual labeling: for example, when a little girl rides past on a bicycle, someone has to draw a bounding box around her. The labor cost of third-party labeling runs to about US$1 billion a year. Waymo uses a large number of virtual tests to complete algorithm development and regression testing.

When an engineer adjusts the algorithm, a run on the simulation test platform may take only a few minutes, whereas a road test may require booking the autonomous driving fleet half a day or a day in advance. On a simulation platform, as long as computing power permits, highly concurrent tests with 1,000 or 2,000 vehicles can be run at the same time.
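At the orchestration level, such high-concurrency testing can be sketched with nothing more than a process pool; `run_scenario` below is a stand-in for whatever per-scenario entry point a real simulation platform exposes:

```python
from multiprocessing import Pool

def run_scenario(scenario_id: int) -> dict:
    """Placeholder for one simulated drive; a real platform would launch a sim instance here."""
    # ... connect to a simulator worker, replay the scenario, score the drive ...
    return {"scenario": scenario_id, "collisions": 0, "disengagements": 0}

if __name__ == "__main__":
    # 2,000 virtual vehicles "driving" concurrently, limited only by available compute.
    with Pool(processes=32) as pool:              # 32 simulator workers on this machine
        results = pool.map(run_scenario, range(2000))
    failed = [r for r in results if r["collisions"] or r["disengagements"]]
    print(f"{len(failed)} scenarios need engineer review")
```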

In summary, the core capabilities of autonomous driving simulation testing include geometric restoration of scenes (3D scene simulation plus sensor simulation), restoration of scene logic (decision and planning simulation), physical restoration of scenes (control plus vehicle dynamics simulation), and the advantages of high concurrency and cloud-based simulation.

Therefore, both new and established autonomous driving companies are recruiting hybrid talent with backgrounds in game engines such as Unity and Unreal. One industry insider told Pinwan: "One reason is that the simulation environment needs to be rendered ever more realistically; the other is that these people can make optimizations on top of the engine to reduce the cost of the whole simulation test."

Recruitment requirements for simulation engineers at an autonomous driving company

Autonomous driving companies are also looking for ways to improve the efficiency of simulation tests. Improving the fidelity of virtual scenes is usually considered a reasonable approach, but it is not that easy.

Professor Henry Liu said: "Because this environment is built on mathematical models, if we want the calculation results to be closer to the real world, the models become more complicated and the calculations slower." Creating a highly realistic virtual world may be no less difficult than realizing autonomous driving itself. Simulation testing is not perfect, and its results have certain limitations; it does not solve all of a company's autonomous driving problems. Although Waymo's 20 million miles of real autonomous driving and more than 15 billion miles of simulated driving as of 2020 are enough to dwarf most latecomers in the industry, 20 million miles is still a drop in the bucket for achieving autonomous driving. Data is not omnipotent, and there is no evidence that autonomous driving simulation can fully reproduce the complexity of the real world.

At present, all data-driven methods will always have failure scenarios. First, there is the uncertainty of the data itself: for many occluded objects, even human labelers are highly uncertain. Second, because the models are highly complex, it is difficult to recreate every kind of model in a virtual engine, especially European and American scenes.

Unlike the human eye, an algorithm finds it far harder to recognize images: once a few key elements in an image change subtly, the algorithm's recognition results may differ enormously, to say nothing of cases where even human eyes would misjudge. Therefore, massive data is not a sufficient condition for realizing L4 or even L5 autonomous driving, and one cannot expect to prove absolute safety on real roads through hundreds of millions of kilometers of "safe driving" on a virtual simulation platform.

A professional told Pinwan: "Although using game engines for simulation testing has, to a certain extent, solved some of the problems of autonomous driving, in the final analysis its focus is still on testing. Its main purpose is to prevent the autonomous driving algorithm from possibly repeating mistakes that have already been made, or to test in advance the scenarios engineers can think of. It is more about ensuring the correctness of the logic of the whole system."

This article is from the WeChat public account "Pinwancool" (ID: pinwancool), author: Hong Yuhan, republished with authorization by 36氪 (36Kr).

Thursday, June 17, 2021

Scientists publish a mind-bending article in Nature: after artificial intelligence, the rise of "smart matter" computing?

Artificial intelligence (AI) is no longer a new concept. We know it was inspired by the human brain and its neural networks; the brain is particularly good at computationally intensive cognitive tasks such as pattern recognition and classification.

One long-term goal for AI is decentralized neuromorphic computing: relying on a distributed network of cores to mimic the brain's massively parallel computation and thereby achieve a nature-inspired way of processing information. By gradually turning interconnected computing blocks into a continuous computing fabric, one can conceive of advanced forms of matter with the basic characteristics of intelligence. Such "smart matter" could learn and process information in a non-localized way, receive and respond to external stimuli and interactions with the environment, and autonomously adjust its own structure in order to distribute and store information sensibly. Does this broaden your notion of the word "intelligence" once again?

On June 17, a team of scientists from the University of Münster in Germany and the University of Twente in the Netherlands published an overview of "smart matter" in Nature. They reviewed and analyzed the industry's progress in realizing intelligent matter with molecular systems, soft matter or solid-state materials, as well as practical applications in soft robots, adaptive artificial skin and distributed neuromorphic computing.

Although the smart matter described in the paper does not display the kind of intelligence the public is familiar with (such as recognition or language ability), its functions already go far beyond those of static matter, and its potential applications are inspiring.

How to understand smart matter?

In general, intelligence can be understood as the ability to perceive information and retain it as knowledge in order to behave adaptively in a constantly changing environment. Although there is no exact definition of intelligent matter, the researchers believe that the concept of "intelligence" must include at least two main characteristics: first, the ability to learn; second, the ability to adapt to the environment. So far, these two abilities exist mostly in living organisms.

With the spread of AI technology, people are stepping up efforts to realize machine learning and adaptation in increasingly complex systems that integrate various functional components. Beyond these functional architectures, it is worth noting that artificially synthesized matter itself also shows many characteristics of intelligence, which may give rise to a brand-new conception of AI.

Because advanced AI applications generally need to process large amounts of data, regulating the behavior of intelligent matter in a centralized way is very challenging, especially when using traditional computers based on the von Neumann architecture for centralized information processing: their limits are quickly reached, and shuttling data between memory and processor not only greatly reduces computing speed but also consumes a great deal of power.

Therefore, new methods and computing paradigms need to be implemented directly at the material level, so that smart matter itself can interact with the environment, self-regulate its behavior, and even learn from the input data it receives.

For the development and design of smart matter, inspiration from nature is very useful. The macroscopic functions of natural materials arise from complex internal structures and from interactions among molecular, nanoscale and macroscale building blocks. In artificial matter, combining bottom-up and top-down methods can endow an architecture with a variety of novel characteristics and functions.

The researchers believe that the intelligence of artificial matter can be defined hierarchically. For example, smart matter is realized by combining four key functional elements: (1) sensors that interact with the environment and receive input and feedback; (2) actuators that respond to input signals and adjust the material's properties; (3) a memory that stores information for the long term; (4) a communication network that processes feedback.

Ideally, these elements form a continuous functional fabric that does not require a centralized processing unit but instead provides local, distributed information processing.
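As a purely illustrative way to picture those four elements working without a central processor, here is a toy sketch in which each unit senses, remembers, talks only to its neighbors, and actuates from local information; the class and the update rule are inventions for this example, not anything proposed in the Nature paper:

```python
from collections import deque

class SmartMatterUnit:
    """Toy model of one locally coupled unit: sensor, actuator, memory, network link."""

    def __init__(self, neighbors=None, memory_size=10):
        self.neighbors = neighbors or []          # communication network (local links only)
        self.memory = deque(maxlen=memory_size)   # long-term storage of past stimuli
        self.state = 0.0                          # actuator output, e.g. stiffness or color

    def sense(self, stimulus: float) -> None:
        """Sensor: receive an external stimulus and remember it."""
        self.memory.append(stimulus)

    def actuate(self) -> float:
        """Actuator: adjust state from local memory plus neighbors' states (no central unit)."""
        local = sum(self.memory) / len(self.memory) if self.memory else 0.0
        neighborhood = sum(n.state for n in self.neighbors) / max(len(self.neighbors), 1)
        self.state = 0.5 * local + 0.5 * neighborhood
        return self.state

# A tiny chain of three coupled units responding to a stimulus at one end.
a, b, c = SmartMatterUnit(), SmartMatterUnit(), SmartMatterUnit()
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.sense(1.0)
for unit in (a, b, c):
    unit.actuate()
print(a.state, b.state, c.state)  # the stimulus spreads through purely local interactions
```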

Figure|Structural matter is static and cannot change its properties after synthesis, such as pure silicon; an octopus tentacle, with embedded sensors, actuators and a nervous system, represents smart matter (Source: Nature)

The most basic level is structural matter, which may contain a highly complex but static structure; although widely used, its properties cannot be changed after synthesis. At a more advanced level, responsive matter can change its characteristics (shape, color, hardness and so on) in response to external stimuli such as light, current or force.

At present, scientists are working to explore adaptive matter, which has the inherent ability to handle internal and external feedback and can therefore respond to different environments and stimuli. This definition is similar to "life-like materials", that is, synthetic materials inspired by living things and living matter.

The researchers believe that going beyond adaptive matter will ultimately lead to smart matter, which combines all four functional elements (sensors, actuators, a network and long-term memory) and shows the highest level of complexity and functionality.

Which kinds of matter are on the path to intelligence?

In the paper, the researchers trace the development of smart matter and give examples of systems at different levels of complexity, thereby showing how intelligent matter may develop.

The first category is cluster-based self-organizing matter (such as nanoparticle assemblies and molecular materials).

A prominent form of complex behavior relies on the collective interaction of a swarm, that is, a large number of individuals in a group. In such a system, many individually responsive entities organize and communicate in particular ways, realizing large-scale adaptive phenomena and forming collective modes of protection. In nature, this behavior is usually observed in insect colonies, fish schools, bird flocks and even mammal populations.

When this concept is applied to building blocks at the microscopic scale, this form of basic intelligence becomes especially interesting for realizing intelligent matter. Swarm robotics, for example, works with a large group of small robots: each robot is about one centimeter tall and has limited capabilities, yet together they can arrange themselves into complex, predefined shapes.

Similar logic still applies when considering swarm behavior at the nanoscale, for example with nanoparticle assemblies. In self-assembling material systems, local communication between weakly coupled, highly dynamic components takes place through particle interactions.

Based on chain formation, repulsive fluidic and attractive magnetic interactions between the constituent nanoparticles, and depending on the initial shape, micro-swarms can perform reversible anisotropic deformation, controlled splitting and merging with high modal stability, and navigated motion. However, this shape adaptation relies on external programming input, magnetic-field control and so on, so the particles themselves do not show intelligent behavior.

Figure|Adaptive group behavior and colloidal clusters (Source: Nature)

Interesting adaptive behaviors are also found in synthetic molecular systems, where feedback comes from interactions between reaction networks and coupled molecules. In addition, the transmission of information about the size of self-replicating molecules from ancestor to offspring replicators can be observed, a behavior somewhat similar to what is the norm in biology.

However, the lack of memory in this type of matter prevents it from learning from past events.

The second category is realized in soft matter (such as responsive soft matter, soft matter with embedded memory, and adaptive soft matter).

In biological systems, softness, elasticity and flexibility are notable features. Soft-bodied animals such as molluscs can deform continuously in crowded environments and thus move smoothly. Natural skin also exhibits remarkable characteristics of basic intelligence, including the tactile sensing of force, pressure, shape, texture and temperature, tactile memory and even self-healing.

The goal of soft robotics is to bring these characteristics into soft matter. Soft robots can imitate biological movement by adjusting their shape, grip and touch. Compared with rigid materials, the compliance of soft materials greatly reduces the risk of injury when they come into contact with humans or other fragile objects.

Figure|Responsive soft matter and soft matter with embedded memory function (Source: Nature)

Soft matter includes responsive soft matter, whose most common actuation is a change in shape or softness in response to an input.

A typical example is a self-contained artificial muscle made of a silicone rubber matrix with embedded ethanol microbubbles; its actuation relies on the liquid-to-vapor phase transition when the material is heated. This responsive artificial muscle can repeatedly lift a weight of more than 6 kilograms.

Another case is a doubly cross-linked responsive hydrogel driven by DNA hybridization: with the help of an external DNA trigger, the volume contraction of the material is locally controlled to imitate the gestures of a human hand. There is also an artificial skin developed using the triboelectric effect, which can actively sense the proximity, contact, pressure and humidity of a touched object without an external power source, generating an electrical response autonomously.

Other scientists have used the ion gradients between micro-compartments of polyacrylamide hydrogel separated by cation- and anion-selective hydrogel membranes to create an "artificial eel": a retractable stacked or folded geometry simultaneously activates thousands of gel compartments connected in series, generating a voltage of 110 V. Unlike typical batteries, these systems are soft, flexible, transparent and potentially biocompatible.

Soft matter with embedded memory combines material memory with sensing. Some scientists have demonstrated this concept in a hybrid mechanical material in which resistive switching devices serve as storage elements on islands of rigid polymer photoresist (SU-8) embedded in stretchable polydimethylsiloxane (PDMS); microcracks in a gold film evaporated onto the PDMS act simultaneously as electrodes and strain sensors. Such a motion-memory device can detect the movement of human limbs from changes in strain and then store that information.

In addition, self-healing is an important property of soft materials, allowing a material to quickly recover its original properties after being disturbed or bent, and serving as a way of erasing the memory of past trauma. One research team has reported an organic thin-film transistor made of a stretchable semiconducting polymer that works normally even when folded, twisted and stretched on a moving human limb; after treatment with a special solvent and heat, the polymer can repair itself, almost completely restoring its field-effect mobility.

Information processing usually involves counting, which requires both a sensing capability and a storage unit that holds the latest value. One research team has proposed a design concept for counting matter based on cascaded biochemical reactions: depending on the number of light pulses detected, it releases specific output molecules or enzymes, thereby carrying out an actual counting process.

Beyond sensing and actuation, adaptive soft matter also includes a precisely tailored chemo-mechanical feedback loop. One way to realize adaptive soft matter is a model system of autonomous particle motion proposed by scientists, which elegantly couples sensing and actuation through a reaction network: for example, a material that regulates the growth and shrinkage of oxygen bubbles inside a capsule, antagonistically adjusting its effective buoyancy and achieving enzyme-driven oscillating vertical motion of the colloid in water.

The third category is realized in solid-state materials (such as neuromorphic materials and distributed neuromorphic systems).

At present, information-processing technology in solid-state materials is far more advanced; the core of a conventional computer, for example, is built from physical devices such as transistors on chips. Unconventional computing goes beyond the standard computing model, and biology in particular can be regarded as an unconventional computing system.

Programmable, highly interconnected networks are particularly suited to performing computational tasks, and brain-inspired or neuromorphic hardware aims to provide a physical implementation. Although the semiconductor industry's top-down manufacturing with mature semiconductor materials has already enabled neuromorphic hardware (such as Google's tensor processing unit), bottom-up approaches using nanomaterials may provide new routes to unconventional and efficient computation.

The researchers believe that combining the material realizations described above in hybrid approaches may ultimately lead to smart matter.

For example, the use of phase-change materials to build neuromorphic computing systems has become a key enabler of brain-inspired hardware, allowing artificial neurons and synapses to be implemented in artificial neural networks; their programmability between amorphous and crystalline states via Joule heating provides fast, accessible, non-volatile memory at room temperature.

The memory behavior of phase-change materials further makes them suitable for brain-inspired computing, where they usually embody synaptic weights or nonlinear activation functions. In addition, two-dimensional (2D) materials such as graphene, molybdenum disulfide, tungsten diselenide and hexagonal boron nitride have also appeared in experimental neuromorphic devices, allowing the design of compact artificial neural networks.
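A rough way to picture how such devices "embody synaptic weights" is the idealized crossbar below: each cell's conductance stores one weight, and Ohm's and Kirchhoff's laws perform the multiply-accumulate in place. The numbers and the ideal-device assumption are illustrative only, not taken from the paper:

```python
import numpy as np

# Idealized crossbar: each cell's conductance G[i, j] (in siemens) stores one synaptic weight.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [0.2e-6, 1.5e-6, 1.0e-6]])

# Input activations applied as voltages on the rows (in volts).
v_in = np.array([0.3, 0.7])

# Ohm's law per cell and Kirchhoff's current law per column give the
# matrix-vector product "for free" as the summed column currents.
i_out = v_in @ G          # amperes flowing out of each column
print(i_out)              # the weighted sums a neuron layer would compute
```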

A recent study showed that nonlinear classification and feature extraction can be performed on a disordered network of boron dopant atoms in silicon at a temperature of 77 K. Many other results show that deep-neural-network models of nanoelectronic devices can be used to tune the devices efficiently by gradient descent to complete various classification tasks, instead of realizing the function through artificial evolution.

These works reveal the potential for efficient calculations at the nanometer scale using the inherent physical properties of matter.

Figure|Neuromorphic materials and systems (Source: Nature)

It is worth noting that in neuromorphic systems, information processing and memory are co-located, which is fundamentally different from the traditional von Neumann architecture. One promising line of research is the optical neural network, because light can compute by interacting with matter or interfering with itself without predefined paths; moreover, such models allow data to be processed at the speed of light (in the medium), with extremely low power consumption compared with electronic devices.

As light propagates through different diffractive layers, the information is processed along the way, similar to how data is preprocessed in human skin before being transmitted to the brain through the nervous system.

In addition, the researchers argue that every material reservoir comes with its own physics, and that material learning can be used to let a good reservoir emerge from the system rather than designing the material matrix into a good reservoir by hand.

Looking ahead to the development path

So, what are the challenges in the future?

The researchers believe that the difficulty lies in developing effective methods for fabricating, scaling up and controlling smart matter.

Smart matter must contain dynamic materials with considerable conformational freedom, mobility and exchange of nanoscale components. This means the interactions between nanoscale components must be weak enough to be manipulated by external stimuli. At the same time, such matter must exhibit a certain degree of internal organization of its nanoscale components so that feedback and long-term memory elements can be embedded, and, to fully receive and transmit external input, it needs addressability with spatial and temporal precision. These requirements are to a large extent contradictory and may not be compatible.

Clearly, different key elements of smart matter are easier to realize in different material types, but the researchers hope that hybrid solutions can resolve the incompatibility problem.

So, what will the road map to smart matter look like? They have an idea.

First, demonstrators and design rules are needed to develop adaptive matter with inherent feedback pathways; by integrating nanoscale building blocks, the self-assembly and top-down fabrication of nanostructures can be made reconfigurable and adaptive;

Then, starting from adaptive matter that can handle feedback, the field must develop matter with learning capabilities ("learning matter"), enhanced by embedded memory functions, material-based learning algorithms and sensory interfaces;

Finally, learning matter must be developed into truly intelligent matter that receives input from the environment through sensory interfaces, produces the required responses through embedded memory and artificial networks, and reacts to external stimuli through embedded sensors.

Therefore, the development of smart materials will require coordinated, interdisciplinary and long-term research efforts.

Ultimately, considering that overall performance is the collective response of components and connections, complete system-level demonstrations will be necessary to accelerate the adoption of smart matter. A variety of technological applications of smart matter are foreseeable, and collaborative integration with existing AI and neuromorphic hardware will be particularly attractive. In this regard, applications in the life sciences and in biological cybernetic organisms will also require biocompatible implementations.

Reference

https://www.nature.com/articles/s41586-021-03453-y
