# Imaging with Lasers



## griz (Jan 9, 2006)

I've gone plumb dumb on this embedded electronics stuff. I was going to build a small autonomous bot, but hey, this is Texas, so why not go all in? I'm modifying my mobility scooter into a voice-controlled, autonomous-driving, picture-taking platform. Just think how nice it would be if you were seated looking through your viewfinder trying to line up the perfect shot, and you could just say "move forward" and have the scene pan as you watch. Or drive along, avoiding any people or obstacles, while you look for shots. The mapping that lets it do all this is done with a small laser time-of-flight detector and a 3-D camera. It's called SLAM: simultaneous localization and mapping. There's some really exotic math involved, but there are libraries out there for the math-challenged like me. I have the laser part working now. It's a PulsedLight LIDAR-Lite unit on a 2-axis turret.
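The basic idea behind a lidar sweep is simple: the turret reports an angle, the LIDAR-Lite reports a range, and each pair becomes a point on the map. A minimal sketch (the `sweep_to_points` helper and the sample readings are hypothetical, not from the actual rig):

```python
import math

def sweep_to_points(readings):
    """Convert (angle_deg, range_cm) pairs from one turret sweep
    into 2-D (x, y) points in the robot's frame (x forward, y left)."""
    points = []
    for angle_deg, range_cm in readings:
        theta = math.radians(angle_deg)
        points.append((range_cm * math.cos(theta),
                       range_cm * math.sin(theta)))
    return points

# One made-up sweep: straight ahead, 45 degrees left, 90 degrees left
scan = [(0.0, 200.0), (45.0, 283.0), (90.0, 150.0)]
for x, y in sweep_to_points(scan):
    print(f"x={x:.1f} cm, y={y:.1f} cm")
```

Collect enough of these sweeps and you have the raw material a SLAM library turns into a map.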










It produces maps from scans that look like this.










This is a single-slice scan, and only one sweep, so there are lots of outliers. The whole 3-D map is built up from many of these slices. Now that I have the 2-axis rig up and running, I'm modifying my code to do the whole thing. There's an open-source operating system for robot construction called ROS, and I'm doing everything within that framework. It really makes things a lot easier, since you don't have to code all the underlying processes and services. It's all there for you, and most sensors already have drivers.

I'm also using a Bosch 9-degree-of-freedom inertial measurement unit (IMU). This is a cool device that will show you the orientation, heading, and all that for any object it's attached to.










The box at top left shows the current orientation. Move the device and that changes. It also calculates acceleration and heading.
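A fused 9-DOF IMU typically reports orientation as a quaternion, and getting heading and the rest out of it is a standard conversion. A sketch of that math (the function is illustrative, not the driver's actual API):

```python
import math

def quat_to_euler(w, x, y, z):
    """Convert a unit quaternion, as a fused IMU might report,
    to (roll, pitch, yaw) in degrees. Yaw is the heading."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp to guard against tiny numerical overshoot past +/-1
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))

# Identity quaternion means no rotation at all
print(quat_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```

That yaw value is what drives the heading readout in the demo window.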

Fusing the data from the lidar, the IMU, and one other item makes it possible to calculate the robot's pose and motion, and, with the scans pointing out objects, to drive autonomously. It has a GPS as well, so you could ask it to take you to food and such with the help of a preloaded map. Program in all the locations for first aid stations, and even if you were partially disabled it could get you to help. Race tracks are probably one of the best places to be if you have heart problems: a medical team on every corner and plenty of ambulances and helis.

The last piece of this puzzle is stereo vision. I've been using a couple of Microsoft LifeCam webcams to do it so far, but a few days back I read about the new Zed camera. It's more expensive, but knowing now how long it takes to make this work from scratch, it's a bargain. So one of those will be here in a few days and I'll be ready to start testing. Hopefully by the November races at COTA I'll have it to a point where it can start learning objects while I drive around. I was hoping to have it done for the Le Mans race coming up. Can't wait for that. The e-mail from the FIA came the other day with instructions on how to pick up my VIP passes. I haven't touched my cameras in ages. I'm taking the 7D Mark II and EF 400 into Precision for a cleaning and adjustment on the way to Houston next week, so they'll be all tuned up for the event.
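At its simplest, pose fusion is dead reckoning: take the heading from the IMU and the distance traveled from wheel encoders, and advance the position estimate each step. This toy sketch is a stand-in for the real ROS fusion stack (all names and numbers here are made up for illustration):

```python
import math

def update_pose(x, y, heading_deg, distance_cm):
    """Advance the pose by an encoder distance along the IMU heading.
    x forward, y left; a bare-bones dead-reckoning step."""
    h = math.radians(heading_deg)
    return (x + distance_cm * math.cos(h),
            y + distance_cm * math.sin(h))

# Drive 100 cm straight, then turn left and drive 50 cm
pose = (0.0, 0.0)
for heading, dist in [(0.0, 100.0), (90.0, 50.0)]:
    pose = update_pose(pose[0], pose[1], heading, dist)
print(pose)
```

The real fusion (e.g. a Kalman filter) also weighs each sensor's noise, which is exactly the "exotic math" the libraries handle for you.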

Check out this site for the Zed camera. Pretty sweet system. Putting all the hard work on silicon is the way to go.

stereolabs.com

Check out the video and you'll see how I'm going to use it. I just wish I had more energy; I could have this done already. But I'll plod along. The mental stimulation I've gotten from it has made me feel pretty good lately. The computer powering all this is a cluster of Nvidia Jetson TK1 boards: a 4-core ARM processor with a big GPU grafted onto the side. I have four of them in a cluster, which works out to 4 teraflops of computing power, 4x the fastest computer on the planet back when I was working. A more loaded-up version of this is in the Google cars, I've heard, and Audi uses them for their cockpit. Five inches square.

Griz


----------



## 47741 (Jan 5, 2010)

Sounds impressive.


----------



## griz (Jan 9, 2006)

*check out this camera*

I just ordered a Zed camera. It's a stereo camera with point clouds and all that built into its processor: instant 3-D vision for the bot. Now I'll just be using the lidar as a scanner for the areas the Zed can't see. I just finished getting the GPS chip installed and hooked up, so that's all the sensors integrated now except the Zed. I'm probably going to get an iRobot Create so I can test in the house. They are reconditioned Roomba vacuum units with the vacuum taken out and a different top plate, so you can use one as the base for a bot. It already has the motors, encoders, and controller card, so it's easy to get working. Put a couple of shelves on it and it will make a nice test bed. The website for the Zed is stereolabs.com; check out the video with the two drones. I've been trying to do this with a pair of webcams, but the point clouds have lots of holes. The ones from the Zed look very good compared to what I'm getting now. It puts out an image with the distance of each pixel from the camera encoded where the color data would be. So while it looks black and white, kind of ghostly, all the info is there to use for navigation and obstacle avoidance. I'm learning a ton about image processing along with this. I'm sure that will come in handy with my photography; it never hurts to know what's going on under the hood.
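Once every pixel carries a distance, obstacle avoidance gets pretty direct: scan the depth values in front of the bot and stop if anything is too close. A minimal sketch of that idea (the helper, the stop threshold, and the convention that zero means a "hole" with no depth are my assumptions, not the Zed SDK):

```python
def nearest_obstacle(depth_row, stop_cm=50.0):
    """Given one row of a depth image (distance per pixel, in cm),
    return the closest valid reading and whether the bot should stop.
    Zeros mark holes (pixels with no depth) and are skipped."""
    valid = [d for d in depth_row if d > 0]
    if not valid:
        return None, False  # nothing measurable in view
    closest = min(valid)
    return closest, closest < stop_cm

# A made-up row: two holes, one nearby object at 44 cm
row = [0.0, 312.0, 44.0, 0.0, 180.0]
print(nearest_obstacle(row))  # (44.0, True) -> stop
```

The fewer holes the camera leaves, the fewer pixels this kind of check has to skip, which is exactly why the Zed's cleaner output matters.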

Griz


----------



## Alstang1 (Aug 18, 2015)

Wow. That's some way impressive shi.....stuff there. Don't I feel dumb now. 


Al Supak's iPhone 6 plus using Tapatalk


----------



## griz (Jan 9, 2006)

*Not that hard to do*

The Robot Operating System (ROS) has drivers for all the stuff I'm using, so getting it up and running isn't that difficult. The research involved in finding the right parts is the big hill to climb. The Zed camera really makes it much easier to get the mapping going: instead of you having to code all that, it just sends you a picture with the depth info included in every pixel, with the color information captured in a separate image. It's doable with a pair of webcams, but the results are nowhere near as good: lots of holes and such that the software in the Zed takes care of for you. And it's quick. The best I could manage with my setup was almost 100 ms per frame; you need to be around 50 ms for a driving vehicle. With the Zed and the computer I'm using, that value is around 30 ms, so there's plenty of headroom. Putting all that hard math on the chip really helps, and it also uses the CUDA cores that are on the Jetson computer I'm using. You just have to look for sensors that give you cooked readings over serial or I2C; then it's a lot easier. You pay a little more, but the most expensive sensor I've bought so far was 40 bucks, so they aren't that expensive. In five years there will be cameras like the Zed for under 100 bucks; they just came out in May, so the price is still high. The Zed, the RPLidar, and the board to run it came in at about 1000 bucks, but it's all top-of-the-line stuff, easy to integrate and program.
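For anyone curious what the webcam version is actually computing: stereo depth comes from the classic triangulation formula Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity (how far a feature shifts between the two images). A sketch with made-up webcam numbers:

```python
def depth_from_disparity(focal_px, baseline_cm, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.
    focal_px: focal length in pixels, baseline_cm: camera spacing,
    disparity_px: pixel shift of the same feature between images."""
    if disparity_px <= 0:
        # Unmatched feature, or one at infinity: this is a "hole"
        return float('inf')
    return focal_px * baseline_cm / disparity_px

# Hypothetical webcam rig: 700 px focal length, 10 cm baseline
print(depth_from_disparity(700.0, 10.0, 35.0))  # 200.0 cm
```

The hard, slow part is finding the matching feature in both images for every pixel; that matching is what the Zed's hardware does on-chip, and why unmatched pixels show up as holes in a homemade rig.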

Griz


----------

