
    AGV's third generation technology

    Release date: 2019-10-12 09:38:51

    AGV's Third Generation Technology: Reaching the Same Goal


    The highlight is the third generation. At present, machine vision and lidar of all kinds are hot in the capital market. As of October 2018, nearly ten machine vision companies had each completed financing in the tens of millions of yuan. A few days ago, the lidar maker Sagitar Juchuang (RoboSense) closed the industry's largest single financing round, 300 million yuan, invested by Cainiao, SAIC and BAIC.


    SLAM (Simultaneous Localization and Mapping) means positioning and map construction carried out at the same time. SLAM technology is critical to the action and interaction capabilities of robots and other agents because it provides the foundation of those capabilities: knowing where you are and what the surrounding environment looks like in order to decide how to act next. It can be said that every agent with some capacity for autonomous action carries some form of SLAM system.
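
    As a rough illustration, the Python sketch below alternates a localization step (predicting the pose from odometry) with a mapping step (writing observed obstacles into an occupancy grid), the two ingredients that SLAM couples together. It is a minimal sketch under assumed conventions, not any particular AGV's implementation: the motion model, the 1 m grid resolution and the scan format are illustrative assumptions, and a real SLAM system would also correct the pose against the map rather than rely on odometry alone.

    import numpy as np

    def motion_update(pose, control):
        """Predict the new pose (x, y, heading) from odometry (assumed simple motion model)."""
        x, y, theta = pose
        dist, dtheta = control
        return np.array([x + dist * np.cos(theta),
                         y + dist * np.sin(theta),
                         theta + dtheta])

    def map_update(grid, pose, scan_xy):
        """Mark observed obstacle cells in a coarse occupancy grid (toy 1 m cells)."""
        x, y, theta = pose
        for sx, sy in scan_xy:                        # scan points in the robot frame
            wx = x + sx * np.cos(theta) - sy * np.sin(theta)
            wy = y + sx * np.sin(theta) + sy * np.cos(theta)
            gx, gy = int(round(wx)), int(round(wy))
            if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
                grid[gx, gy] = 1.0
        return grid

    # Toy run: the agent alternates between estimating where it is and extending
    # its map -- simultaneous localization and mapping in miniature (no pose correction here).
    pose = np.array([0.0, 0.0, 0.0])
    grid = np.zeros((20, 20))
    for control, scan in [((1.0, 0.0), [(2.0, 1.0)]), ((1.0, 0.1), [(3.0, -1.0)])]:
        pose = motion_update(pose, control)   # localization step (prediction)
        grid = map_update(grid, pose, scan)   # mapping step (update)
    print(pose, grid.sum())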


    Among the various SLAM-based navigation approaches, laser SLAM based on lidar and visual SLAM (VSLAM) based on machine vision are the two most actively researched and the most likely to reach large-scale deployment; together they essentially represent the development direction of third-generation AGV navigation technology.


    Of these two SLAM navigation methods, laser SLAM is currently the more widely used. Laser SLAM grew out of early ranging-based positioning methods (such as ultrasonic and infrared single-point ranging). Lidar ranging is accurate, its error model is simple, and it runs stably in any environment short of direct sunlight. The feedback it returns already contains direct geometric relationships, which makes the robot's path planning and navigation intuitive. Laser SLAM theory is relatively mature, and the products that have reached the market are more abundant.
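
    To see why lidar feedback carries "direct geometric relationships", the short Python snippet below turns a simulated scan of (range, bearing) readings into world-frame x/y points with a plain polar-to-Cartesian transform; such points can feed straight into grid mapping or scan matching. The beam angles, pose format and numbers are illustrative assumptions, not data from any specific sensor.

    import numpy as np

    def scan_to_points(ranges, angles, pose):
        """Convert lidar (range, bearing) readings into world-frame x/y points.

        pose = (x, y, heading); the mapping is a plain polar-to-Cartesian transform,
        which is why laser feedback drops so directly onto a metric map.
        """
        x, y, theta = pose
        px = x + ranges * np.cos(theta + angles)
        py = y + ranges * np.sin(theta + angles)
        return np.stack([px, py], axis=1)

    # One simulated scan: three beams at -30, 0 and +30 degrees (made-up values).
    ranges = np.array([4.2, 3.8, 5.1])             # metres
    angles = np.deg2rad([-30.0, 0.0, 30.0])        # beam bearings relative to the sensor
    points = scan_to_points(ranges, angles, pose=(1.0, 2.0, np.deg2rad(90)))
    print(points)   # world-frame obstacle points, ready for mapping or scan matching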


    VSLAM can extract massive, redundant texture information from the environment and has strong scene-recognition capabilities. Early visual SLAM was built on filtering theory, and its non-linear error model and huge computational load were obstacles to practical deployment. In recent years, with advances in sparse non-linear optimization theory (bundle adjustment), camera technology and computing performance, visual SLAM that runs in real time is no longer a dream.
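
    The bundle adjustment mentioned above amounts to minimising reprojection error with sparse non-linear least squares. The toy Python sketch below refines three 3D points observed by two cameras using scipy's least_squares; to stay short it holds the camera poses fixed and ignores rotation, which a real VSLAM back end would not, and the focal length and measurements are made-up values.

    import numpy as np
    from scipy.optimize import least_squares

    F = 500.0  # hypothetical focal length, in pixels

    def project(points3d, cam_t):
        """Pinhole projection of 3D points for a camera at translation cam_t (no rotation)."""
        p = points3d - cam_t                  # express points in the camera frame
        return F * p[:, :2] / p[:, 2:3]       # perspective divide -> pixel coordinates

    def residuals(params, n_pts, observations, cam_ts):
        """Stack the reprojection errors of every point in every camera."""
        points3d = params.reshape(n_pts, 3)
        errs = [(project(points3d, t) - obs).ravel() for t, obs in zip(cam_ts, observations)]
        return np.concatenate(errs)

    # Two known camera positions observing three unknown 3D points (synthetic data).
    cams = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
    true_pts = np.array([[0.5, 0.2, 5.0], [-0.3, 0.1, 6.0], [0.0, -0.4, 4.0]])
    observations = [project(true_pts, c) + np.random.normal(0, 0.5, (3, 2)) for c in cams]

    x0 = (true_pts + np.random.normal(0, 0.2, true_pts.shape)).ravel()  # noisy initial guess
    sol = least_squares(residuals, x0, args=(3, observations, cams))    # non-linear refinement
    print(sol.x.reshape(3, 3))   # refined 3D points, close to true_pts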