According to the World Health Organization's Global Status Report on Road Safety 2018, approximately 1.35 million people die in traffic accidents each year — around 3,700 people every day. The vast majority of these injuries and deaths are caused by human error. As one of the most important technologies of the 21st century, we believe self-driving cars will effectively eliminate accidents caused by human error.
LiDAR is an indispensable part of the environmental perception system of autonomous vehicles. It enables robots and vehicles to achieve perception capabilities superior to those of humans, and it underpins the safety of future mobility.
In most current autonomous driving solutions, the limited vertical field of view (FOV) of the LiDAR and its roof-top installation leave blind-spot areas around the vehicle body that the LiDAR cannot scan. This can result in a large number of undetected dangerous corner cases and objects (such as pets and children). Today we will introduce three common LiDAR solutions for near-field blind-spot detection.
Image link: https://drive.google.com/open?id=1mJhFzGjfZZ4UN8AEkDymsOAJat41OPt9
Source from the Internet
Image link: https://drive.google.com/open?id=1duPcJdL0eiSFscmzHhuhneFMkduBIfIx
Vehicle roof-top installation of the LiDAR (the red highlights the area that can be detected while the yellow indicates the undetectable blind-spot zone)
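To see why a roof-mounted LiDAR leaves a blind ring around the vehicle, the geometry can be sketched in a few lines of Python. The mounting height and lower FOV angle below are illustrative assumptions, not specifications of any particular sensor:

```python
import math

def blind_zone_radius(mount_height_m: float, lower_fov_deg: float) -> float:
    """Radius of the undetectable ring around the vehicle for a
    roof-mounted LiDAR whose lowest beam points lower_fov_deg
    below the horizontal."""
    return mount_height_m / math.tan(math.radians(lower_fov_deg))

# Hypothetical setup: sensor mounted 1.9 m up, lowest beam 15° below horizontal
r = blind_zone_radius(1.9, 15.0)
print(f"Blind-zone radius: {r:.2f} m")  # ≈ 7.09 m
```

With these assumed numbers, nothing closer than about seven meters to the vehicle body is ever hit by a beam, which is exactly the yellow zone in the figure above.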
Plan A: Fusion of Primary and Auxiliary LiDARs
In this scheme, a primary LiDAR on the roof handles long-range perception, while auxiliary LiDARs mounted lower on the vehicle body cover part of the near-field area that the primary LiDAR misses.
Image link: https://drive.google.com/open?id=1RFLlelyr41XIi28EUqxkGRcYBlOqVrAF
(The red highlights the detectable area of primary LiDAR and the green indicates the detectable area of the auxiliary LiDAR)
However, the auxiliary LiDAR is not specifically designed for blind-spot detection. Its vertical FOV is usually between 30° and 40°, so small blind areas remain along the two sides of the vehicle body.
In addition, auxiliary LiDARs contribute very little to detecting the blind areas directly below the front and rear of the vehicle.
Image link: https://drive.google.com/open?id=1H4sJ4MtXwjQ1N1Ji62YfJJD5WiwqNfjj
(The red highlights the detectable area of the primary LiDAR, the green indicates the detectable area of the auxiliary LiDAR, and the yellow shows the undetectable blind spots area)
Plan B: Add As Many LiDARs As Possible
Simply add a LiDAR wherever there is a blind spot: increasing the number of LiDARs reduces the blind spots. The installation scheme varies with the vehicle model.
However, because most LiDARs have a vertical FOV of only 30°~40°, completely eliminating the near-field blind zone requires a large number of LiDARs, resulting in extremely high cost and low efficiency. In addition, a vehicle covered with LiDARs is an eyesore.
Image link: https://drive.google.com/open?id=13-pVklW5RKnmyJKotxACVlud6i-SOQK0
Source: https://thelastdriverlicenseholder.com/
Plan C: Specialized LiDAR for Full Blind-Spot Coverage
RS-Bpearl is a new type of short-range LiDAR designed specifically for near-field blind-spot detection. Equipped with RoboSense's innovative signal-processing technology, RS-Bpearl can detect objects within a few centimeters; combined with its approximately 360° × 90° super-wide field of view, it can effectively cover the blind spots around the vehicle.
Image link: https://drive.google.com/open?id=1xyuBZgQYT0rhpzWWBCwu9EcjRn0pzCbD
Image link: https://drive.google.com/open?id=15re4u1tWnkQlmRfDj16FsE6vZywiAOfB
Image link: https://drive.google.com/open?id=1QOcXt8mJBGIkVOy83NITD9ZhMV3fpeR7
Image link: https://drive.google.com/open?id=128ZxdDJp6g9thVU3aLH0jTzmYfqzxk-h
Image link: https://drive.google.com/open?id=1VnsBtYPE7Ss7VmHALfQDapH_2Kca4MBd
Image link: https://drive.google.com/open?id=199WLixpHdpM0mb7SZfSK3cJNCutsu8De
(The red highlights the detectable area of the primary LiDAR, the green indicates the detectable area of the RS-Bpearl. The blind-spot zone is fully covered.)
RS-Bpearl's super-wide hemispheric FOV of approximately 360° × 90° also captures height information in particular scenarios, such as bridges, tunnels and culverts, further improving autonomous driving decision-making and safety.
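As a rough sketch of how height information falls out of a hemispheric FOV: each return's measured range and elevation angle convert directly to a height above ground. The sensor height and measurement values below are hypothetical:

```python
import math

def point_height(range_m: float, elevation_deg: float,
                 sensor_height_m: float = 0.0) -> float:
    """Height above ground of a LiDAR return, given its measured
    range and the beam's elevation angle above the horizontal."""
    return sensor_height_m + range_m * math.sin(math.radians(elevation_deg))

# Hypothetical return from a tunnel ceiling: 4.0 m range at 60° elevation,
# sensor mounted 0.5 m above the ground
h = point_height(4.0, 60.0, 0.5)
print(f"Ceiling clearance: {h:.2f} m")  # ≈ 3.96 m
```

A conventional narrow-vertical-FOV LiDAR never fires beams steeply enough upward to make this measurement; a 90° hemispheric FOV does.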
Image link: https://drive.google.com/open?id=1cHRDId-8rDfYP1XOlqQzaq9MoGcL4Ynt
RS-Bpearl point cloud images in multiple scenarios
Image link: https://drive.google.com/open?id=1TnTUGyDmm1Q7k4frywmfhNpbo5IZzlO6
RS-Bpearl point cloud image of speed bumps
Image link: https://drive.google.com/open?id=1iQ0rMjuQKmo1lc62JQ8xnUjsSGGK7nVl
RS-Bpearl point cloud image of a car crossing a bridge
Image link: https://drive.google.com/open?id=1JSD330MfU3h3wHivVn-KMlpqLQdv03N_
RS-Bpearl point cloud image of a car passing through a tunnel
Currently, the minimum detection distance of LiDARs available on the market generally ranges from 20 cm to 50 cm, which means that, when installed on an autonomous vehicle, they cannot guarantee complete detection of obstacles near the vehicle body.
With a minimum detection range of less than 5 cm, RS-Bpearl can precisely identify objects around the vehicle body and help the vehicle handle corner cases such as detecting pets and children, or navigating narrow lanes and dense traffic. It thereby achieves zero blind spots in the sensing zone, ensuring the safety of autonomous driving.
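The effect of the minimum detection range can be illustrated with a toy filter: any return closer than the sensor's minimum range is simply lost. The 20 cm and 5 cm thresholds follow the figures above; the sample distances are made up:

```python
def visible_points(distances_m, min_range_m):
    """Keep only returns at or beyond the sensor's minimum detection
    range; anything closer falls in the near-field dead zone."""
    return [d for d in distances_m if d >= min_range_m]

# Hypothetical near-field obstacle returns (metres)
returns = [0.03, 0.08, 0.15, 0.40, 1.20]

conventional = visible_points(returns, 0.20)  # 20 cm minimum range
bpearl       = visible_points(returns, 0.05)  # <5 cm minimum range
print(conventional)  # [0.4, 1.2]
print(bpearl)        # [0.08, 0.15, 0.4, 1.2]
```

In this toy scenario the 20 cm sensor silently drops the three closest obstacles, while the 5 cm sensor misses only the one inside 5 cm.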
Image link: https://drive.google.com/open?id=1seOT21U6d_P0ecY3xV0wHbN5EpmLA5FT
5 cm is about the length of a Shift key
Image link: https://drive.google.com/open?id=1Sp3GWgaAf8lfhXITEmvTiHtX6sowjfov
RS-Bpearl point cloud image of a traffic roadblock beside the car
Image link: https://drive.google.com/open?id=1easfJFd0A7x-2NnMlFi3WLFm5xP-mH4V
RS-Bpearl point cloud image of a vehicle passing by
Image link: https://drive.google.com/open?id=18H16DEoABfEMyemUS0RwHYv8YUwKlQhj
Image of RoboSense RS-Bpearl (φ100 mm × H111 mm)
The compact size of the RoboSense RS-Bpearl (φ100 mm × H111 mm) and its top-mounted hemispherical optical window allow the non-optical part of the product to be completely embedded in the vehicle body. In addition, the innovative modular design of the RS-Bpearl dramatically reduces costs while making the product more flexible, compact and customizable.
Laser Lines: 32
Laser Wavelength: 905 nm
Points Per Second: 576,000 pts/s (single return mode); 1,152,000 pts/s (dual return mode)
Weight (without cabling): ~0.92 kg
Dimensions: φ100 mm × H111 mm
Operating Temperature: -30°C ~ +60°C
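From the figures above, one can derive an approximate horizontal angular resolution. The 10 Hz scan rate below is an assumption for illustration — it is not part of the listed specification:

```python
LASER_LINES = 32
POINTS_PER_SEC_SINGLE = 576_000  # single return mode, from the spec above

# Points each laser line produces per second
points_per_line_per_sec = POINTS_PER_SEC_SINGLE / LASER_LINES  # 18,000

# Assumed scan rate (not stated in the spec): 10 revolutions per second
scan_rate_hz = 10
points_per_rev = points_per_line_per_sec / scan_rate_hz   # 1,800 per revolution
horizontal_resolution_deg = 360 / points_per_rev          # 0.2°
print(f"Horizontal resolution: {horizontal_resolution_deg}°")
```

At the assumed 10 Hz, each of the 32 lines lays down 1,800 points per revolution, i.e. one point every 0.2° of azimuth; dual return mode doubles the point count without changing the angular spacing.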
Four RS-Bpearls embedded sideways around the vehicle each provide a hemispherical scanning area relative to the vehicle; together they deliver a complete 360° surround view and full coverage of the sensing area, with zero blind spots in the vehicle's driving space.
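A minimal sketch of how the four sensors' returns can be merged into one vehicle frame: each sensor's points are rotated by its mounting yaw and shifted by its mounting offset. All extrinsic values below are hypothetical, chosen only to illustrate the transform:

```python
import math

def to_vehicle_frame(point, yaw_deg, offset):
    """Rotate a sensor-frame point (x, y, z) by the sensor's mounting yaw
    and translate by its mounting offset, yielding vehicle-frame coords."""
    x, y, z = point
    ox, oy, oz = offset
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return (c * x - s * y + ox, s * x + c * y + oy, z + oz)

# Hypothetical extrinsics: one sensor on each side of the vehicle,
# as (yaw in degrees, (x, y, z) offset in metres)
sensors = {
    "front": (0.0,   (2.0, 0.0, 0.5)),
    "right": (-90.0, (0.0, -1.0, 0.5)),
    "rear":  (180.0, (-2.0, 0.0, 0.5)),
    "left":  (90.0,  (0.0, 1.0, 0.5)),
}

# A point 3 m straight ahead of the right-side sensor
p = to_vehicle_frame((3.0, 0.0, 0.0), *sensors["right"])
print(tuple(round(v, 2) for v in p))  # (0.0, -4.0, 0.5)
```

Applying the same transform to every point from every sensor yields one combined point cloud covering the full 360° around the vehicle.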
Image link: https://drive.google.com/open?id=1kqxko7MlR1MxtjWwlAGjUnVAnNEJGj51
For more information about RS-Bpearl, please visit https://www.robosense.ai/rslidar/RS-Bpearl
About RoboSense:
Founded in 2014, RoboSense (Suteng Innovation Technology Co., Ltd.) is the leading provider of Smart LiDAR Sensor Systems, incorporating LiDAR sensors, AI algorithms and IC chipsets to transform conventional 3D LiDAR sensors into full data analysis and comprehension systems. The company's mission is to combine outstanding hardware and artificial intelligence capabilities into smart solutions that enable robots (including vehicles) to achieve perception capabilities superior to those of humans.
RoboSense has attracted an all-star team from leading corporations and institutions around the world: more than 500 employees in six global locations — Shenzhen, Beijing, Shanghai, Suzhou, Stuttgart and Silicon Valley — support the company's fast-growing innovation and development. As of 2019, RoboSense holds more than 500 patents globally.
Market-oriented, the company provides customers with a range of Smart LiDAR perception system solutions, including MEMS and mechanical LiDAR hardware, fusion hardware units, and AI-based fusion systems.
Having garnered the AutoSens Award, the Audi Innovation Lab Champion title and the CES Innovation Award twice, RoboSense has laid a solid foundation for market success. To date, RoboSense LiDAR systems have been widely applied to future mobility — including autonomous passenger cars, RoboTaxis, RoboTrucks, automated logistics vehicles, autonomous buses and intelligent roads — by domestic and international autonomous driving technology companies, OEMs and Tier 1 suppliers.