
    AGV's third generation technology

    Date: 2019-10-12 09:38:51


     
    The highlight is the third generation of navigation technology. Machine vision and lidar are currently hot in the capital market: as of October 2018, nearly ten machine-vision companies had each completed financing rounds in the tens of millions, and a few days earlier the lidar company Sagitar Juchuang closed the industry's largest single round, 300 million, invested by Cainiao, SAIC and BAIC.
     
    SLAM (Simultaneous Localization And Mapping) is crucial to a robot's, or any other agent's, ability to move and interact with its environment, because it provides the foundation of that ability: knowing where you are and what your surroundings look like, so you can decide how to act next. It is fair to say that every agent with a meaningful degree of autonomous mobility carries some form of SLAM system.
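The two halves of SLAM, localizing while mapping, can be illustrated with a deliberately tiny sketch. The 1-D world, landmark positions, noise levels and the simple blending weights below are all illustrative assumptions, not any real SLAM algorithm: the robot dead-reckons from noisy odometry (predict), adds newly seen landmarks to its map, and corrects its pose against landmarks it has already mapped (update).

```python
import random

random.seed(0)

# Toy 1-D "SLAM" loop: dead-reckon the pose from noisy odometry while
# estimating landmark positions from noisy range readings. All numbers
# and weights are illustrative, not a production filter.
true_landmarks = [2.0, 5.0, 9.0]   # ground truth (unknown to the robot)
est_landmarks = {}                 # the "map" being built
pose = 0.0                         # estimated robot position
true_pose = 0.0

for step in range(20):
    # predict: apply odometry (commanded move 0.5, with noise)
    true_pose += 0.5
    pose += 0.5 + random.gauss(0, 0.05)

    # observe: noisy range to each landmark within sensor reach
    for i, lm in enumerate(true_landmarks):
        r = lm - true_pose
        if 0 < r < 3.0:                        # forward-facing sensor
            meas = r + random.gauss(0, 0.02)
            if i in est_landmarks:
                # update: correct pose against the mapped landmark
                innovation = (est_landmarks[i] - pose) - meas
                pose += 0.5 * innovation       # simple blend, not a full filter
                # refine the landmark with the corrected pose
                est_landmarks[i] = 0.9 * est_landmarks[i] + 0.1 * (pose + meas)
            else:
                # first sighting: add the landmark to the map
                est_landmarks[i] = pose + meas

print(f"final pose estimate: {pose:.2f} (true {true_pose:.2f})")
for i, x in sorted(est_landmarks.items()):
    print(f"landmark {i}: est {x:.2f} (true {true_landmarks[i]:.2f})")
```

Even this crude loop shows the circular dependency that makes SLAM hard: the map is built from the pose estimate, and the pose is corrected from the map.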
     
    Among the various SLAM-based navigation approaches, laser SLAM (based on lidar) and visual SLAM, or VSLAM (based on machine vision), are the two most actively studied and the most likely to be deployed at scale; together they essentially represent the direction of third-generation AGV navigation technology.
     
    Of the two, laser SLAM is currently the more widely used. It grew out of early positioning methods based on ranging (such as ultrasonic and single-point infrared rangefinders). Lidar ranging is comparatively accurate, its error model is simple, and it runs stably in any environment short of direct sunlight. Because the sensor feedback itself carries direct geometric relationships, path planning and navigation built on it are intuitive. The theory of laser SLAM is also relatively mature, and commercial products are more plentiful.
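The "direct geometric relationships" point can be made concrete: a lidar scan is just a set of (angle, range) pairs, and projecting it into Cartesian points for a planner is a one-line trigonometric mapping. The scan values and pose below are made-up illustrations.

```python
import math

def scan_to_points(scan, pose_x=0.0, pose_y=0.0, pose_theta=0.0):
    """Project polar (angle, range) lidar readings into world-frame (x, y) points."""
    points = []
    for angle, r in scan:
        a = pose_theta + angle          # beam direction in the world frame
        points.append((pose_x + r * math.cos(a),
                       pose_y + r * math.sin(a)))
    return points

# Three illustrative beams: 45 degrees left, straight ahead, 45 degrees right.
scan = [(-math.pi / 4, 2.0), (0.0, 1.5), (math.pi / 4, 2.0)]
for x, y in scan_to_points(scan, pose_x=1.0):
    print(f"({x:.2f}, {y:.2f})")
```

This directness is why occupancy grids and path planners consume lidar output almost as-is, whereas camera pixels must first be turned into geometry.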
     
    VSLAM, by contrast, can extract massive, redundant texture information from the environment and therefore has very strong scene-recognition ability. Early visual SLAM was built on filtering theory, and its nonlinear error model and heavy computational load were obstacles to practical deployment. In recent years, with progress in sparse nonlinear optimization theory (bundle adjustment) as well as in camera technology and computing performance, visual SLAM running in real time is no longer a dream.
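The core idea of bundle adjustment is minimizing reprojection error with a nonlinear least-squares solver. Real BA jointly optimizes many camera poses and 3-D points over a sparse graph; the sketch below shrinks this to one point seen by two fixed cameras, under assumed toy conditions (pinhole model with focal length 1, cameras at x = 0 and x = 1 looking along z, noise-free observations), and refines the point with a hand-rolled 2x2 Gauss-Newton step.

```python
# Toy bundle-adjustment step: refine a single point (x, z) so that its
# pinhole reprojections u_i = (x - c_i) / z match the observations.
cams = [0.0, 1.0]        # camera x-positions (poses held fixed here)
obs = [0.25, -0.25]      # observed image coordinate u for each camera

x, z = 0.0, 1.0          # deliberately poor initial guess for the point
for _ in range(10):
    # residuals and Jacobian rows (du/dx, du/dz) for each camera
    r = [(x - c) / z - u for c, u in zip(cams, obs)]
    J = [(1.0 / z, -(x - c) / z ** 2) for c in cams]
    # normal equations J^T J d = -J^T r, solved in closed form (2x2)
    a = sum(jx * jx for jx, _ in J)
    b = sum(jx * jz for jx, jz in J)
    d = sum(jz * jz for _, jz in J)
    gx = sum(jx * ri for (jx, _), ri in zip(J, r))
    gz = sum(jz * ri for (_, jz), ri in zip(J, r))
    det = a * d - b * b
    dx = (-d * gx + b * gz) / det
    dz = (b * gx - a * gz) / det
    x, z = x + dx, z + dz

print(f"refined point: ({x:.3f}, {z:.3f})")
```

The sparsity the text mentions is what makes this tractable at scale: each observation touches only one camera and one point, so the normal equations stay mostly empty and can be solved far faster than a dense system.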