3D LIDAR Point Cloud Scanner
When I took the Robots, Sensors, and Actuators course at Hopkins, we had a project where we could build whatever we wanted as long as it incorporated the material we had learned throughout the semester. My friend Will and I decided to build a device that scans a room and plots it as a 3D point cloud so the room can be visualized on a computer.
LIDAR-Lite V2 (it was $39 on Amazon compared to $130 for the V3)
NEMA 17 Stepper Motor
Pololu A4988 Stepper Motor Driver
Hobby Servo Motor
Nokia 5110 LCD Screen
It's not perfect, but it's not a bad proof-of-concept. I have some ideas to improve this design at the bottom of this page.
How It Works:
The scanner works in spherical coordinates, meaning it uses theta, phi, and r rather than X, Y, and Z. First, the user sets the constraints for the scan with the three potentiometers: the minimum and maximum phi, as well as the theta that the scanner sweeps.
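Reading a potentiometer and turning it into an angle is just a linear rescale. Here's a minimal sketch of that mapping; the 0-1023 input range is the Uno's 10-bit ADC, but the output ranges are illustrative, not necessarily the ones our code used:

```cpp
// Hypothetical sketch of the potentiometer-to-parameter mapping.
// Same arithmetic as Arduino's built-in map() function.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

int potToPhi(int raw)   { return (int)mapRange(raw, 0, 1023, 0, 90); }   // phi limit in degrees
int potToTheta(int raw) { return (int)mapRange(raw, 0, 1023, 0, 360); }  // theta sweep in degrees
```

On the Arduino itself, `raw` would come from analogRead() on the corresponding potentiometer pin.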
After the user presses the start button, the scanner uses its LIDAR as a distance sensor to measure r (the radius). The Arduino converts the spherical coordinates to Cartesian coordinates and prints them as space-delimited strings over serial. A program called PuTTY reads from serial and saves the values in a .txt file. We found an open-source point cloud viewer called Displaz, which we used to display the coordinates visually.
Since we were using an Arduino, the code is written in the Arduino language, which is essentially C/C++.
First, we have to include our libraries, define our pins, declare our variables, and initialize our objects. One interesting trend I've noticed is that people tend to declare global variables when programming Arduinos, even though that's usually considered bad practice when writing software for a PC. I'm not entirely sure why, but my guess is memory management: static allocation keeps RAM usage predictable on a microcontroller with only a couple kilobytes of SRAM.
This code also converts spherical coordinates to Cartesian, and the user is able to define the degrees (theta and phi) for the motors to sweep and scan. That's why there are pins labelled PHI_MIN, PHI_MAX, and STEP_GOAL (which is converted to theta later on), and it's also why there are global variables called phi, phiMin, phiMax, and theta.
We designed the code as a finite state machine, so it uses an enum variable to store the current state and a switch-case statement in the void loop as the program's overarching control scheme.
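In outline, the control scheme looks like this. The state names and transitions here are approximations of what our sketch did, with the hardware calls stubbed out so the logic stands alone:

```cpp
// A minimal model of the finite state machine control scheme.
enum State { STATE_SETUP, STATE_SCANNING, STATE_RESETTING };

State nextState(State s, bool buttonPressed, bool scanDone) {
    switch (s) {
        case STATE_SETUP:       // reading pots, waiting for the button
            return buttonPressed ? STATE_SCANNING : STATE_SETUP;
        case STATE_SCANNING:    // sweeping the motors and taking readings
            if (buttonPressed || scanDone) return STATE_RESETTING;
            return STATE_SCANNING;
        case STATE_RESETTING:   // homing both motors, then back to setup
            return STATE_SETUP;
    }
    return s;
}
```

In the real sketch, each case also calls the function that does the work for that state (scan(), resetMotor(), and so on) before the next pass through the loop.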
Next comes the setup. Here, I initialized the LIDARLite object called myLidarLite, set the baud rate to 115200 bits per second, initialized the motors, set up the Nokia LCD display, and defined the global variables.
One odd thing we noticed was that the stepper motor stepped much more consistently after it had moved a bit first, so we make it rotate counter-clockwise 25 steps in the setup; the resetMotor() function then returns it to its initial position in the void loop.
The void loop is really where you can see the finite state machine control scheme. It's a very simple switch-statement and we modularized the code by breaking it out into different functions.
The toggleOp() interrupt service routine runs when the momentary switch is pressed. We used a simple debouncing solution: check whether 50 ms have elapsed since the last time the button was pressed. The button lets the user start the scan, or halt it mid-scan and reset the scanner to its initial position.
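The debounce check boils down to a few lines. Here's the idea with the current time passed in as a parameter so it can run off-hardware (on the Arduino this would be millis()):

```cpp
// Time-based debounce: ignore a press if fewer than 50 ms have passed
// since the last accepted one.
const unsigned long DEBOUNCE_MS = 50;
unsigned long lastPress = 0;

bool acceptPress(unsigned long nowMs) {
    if (nowMs - lastPress < DEBOUNCE_MS) return false;  // too soon: treat as switch bounce
    lastPress = nowMs;                                  // accept and remember the time
    return true;
}
```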
The servoWrite(int angle) function moves the servo to the angle given in the parameter. There's a PHI_OFFSET constant because phi = 0 did not make the motor point directly up due to the way it was mounted; adding PHI_OFFSET to angle makes the write() command move the motor to the actual angle position. There's also a 10 ms delay, because the Arduino does not wait for the servo to reach the position before executing the next commands. Since the servo angle is only incremented 1 degree at a time in the scan() method, 10 ms was enough time for the servo to get there.
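The offset correction itself is one line; PHI_OFFSET here is a made-up value, since the real one was whatever calibration made phi = 0 point the LIDAR straight up:

```cpp
// Mounting-offset correction from servoWrite(); the constant is illustrative.
const int PHI_OFFSET = 12;  // degrees, hypothetical value

int commandedAngle(int phi) {
    return phi + PHI_OFFSET;  // the value actually passed to the servo's write()
}
```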
The scan() method moves both motors such that the servo makes a full sweep from phiMin to phiMax (or phiMax to phiMin), the stepper motor advances one degree, and then the servo reverses direction and makes another full sweep. The scan() method also checks whether the scan is complete (i.e., it has covered the area the user told it to sweep); if so, the program enters the RESETTING state.
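That back-and-forth pattern can be sketched as a generator of the (theta, phi) positions a scan visits. This is a hardware-free approximation of scan()'s traversal order, not our actual function:

```cpp
#include <vector>
#include <utility>

// Back-and-forth sweep: the servo runs phiMin..phiMax, the stepper
// advances one degree of theta, then the servo runs back phiMax..phiMin,
// until theta reaches its goal.
std::vector<std::pair<int,int>> sweepPattern(int phiMin, int phiMax, int thetaGoal) {
    std::vector<std::pair<int,int>> points;  // (theta, phi) pairs in visit order
    int dir = 1;
    for (int theta = 0; theta < thetaGoal; ++theta) {
        if (dir == 1)
            for (int phi = phiMin; phi <= phiMax; ++phi) points.push_back({theta, phi});
        else
            for (int phi = phiMax; phi >= phiMin; --phi) points.push_back({theta, phi});
        dir = -dir;  // reverse the servo's direction for the next column
    }
    return points;
}
```

Sweeping back and forth (instead of snapping the servo back to phiMin each column) avoids a large, slow servo move between columns.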
The resetMotor() function moves the servo so the LIDAR points directly upwards (phi = 0) and returns the stepper motor to wherever the scan started.
The writeOutput() function converts the spherical coordinates to Cartesian coordinates and prints them as a space-delimited string over serial, in centimeters. According to the Arduino reference, Serial.print() defaults to printing two decimal places for floats; passing 4 as the second argument makes it print four.
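The math inside writeOutput() is the standard physics-convention conversion, with phi measured from the +Z axis and theta in the XY plane. A self-contained version (printPoint() stands in for the Serial.print(x, 4) calls):

```cpp
#include <cstdio>
#include <cmath>

const double DEG2RAD = 3.14159265358979323846 / 180.0;

// Spherical (r, theta, phi) -> Cartesian (x, y, z), r in centimeters
void toCartesian(double r, double thetaDeg, double phiDeg,
                 double &x, double &y, double &z) {
    double t = thetaDeg * DEG2RAD;
    double p = phiDeg * DEG2RAD;
    x = r * sin(p) * cos(t);
    y = r * sin(p) * sin(t);
    z = r * cos(p);
}

// Space-delimited output with four decimal places, like Serial.print(x, 4)
void printPoint(double x, double y, double z) {
    printf("%.4f %.4f %.4f\n", x, y, z);
}
```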
The updateDisplay() function makes the Nokia LCD show different things depending on the program's current state. In the SCANNING state, it shows the percentage of the scan completed so far. In the RESETTING state, it displays "Resetting...". In the SETUP state, it displays the current scan parameters read from the potentiometers so the user can set them to whatever they want.
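Separating "what text to show" from "how to draw it on the Nokia LCD" makes the display logic easy to reason about. A sketch of that per-state text (the exact wording on our LCD may have differed slightly):

```cpp
#include <string>

enum DisplayState { DS_SETUP, DS_SCANNING, DS_RESETTING };

// Returns the text updateDisplay() would render for the given state.
std::string displayText(DisplayState s, int percent,
                        int phiMin, int phiMax, int theta) {
    switch (s) {
        case DS_SCANNING:
            return "Scanning: " + std::to_string(percent) + "%";
        case DS_RESETTING:
            return "Resetting...";
        case DS_SETUP:
            return "Phi min: " + std::to_string(phiMin) +
                   " Phi max: " + std::to_string(phiMax) +
                   " Theta: " + std::to_string(theta);
    }
    return "";
}
```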
Note: this circuit diagram is a little out-of-date. Originally, we were going to use a TFMini, but we had a lot of issues with that sensor so we switched it out for the LIDAR-Lite V2.
Also we drew the circuit rather than using a circuit CAD program because we were lazy...
We also swapped the Arduino Mega out for an Arduino Uno.
This was the only part I needed to 3D print to get it to work. It mounts the servo motor to the stepper motor.
How to Use/Workflow:
Download PuTTY (https://www.putty.org/) and go to session > logging and check “printable output” then change the log file name to anything with a *.txt extension.
Download and install displaz (https://github.com/c42f/displaz), the program we use for viewing point clouds.
Connect the Arduino to your computer via USB and connect the 12V and 5V power supplies to their respective terminals on the apparatus.
In PuTTY’s Session menu, make sure the “connection type” is Serial, the correct serial line is entered, and the speed is set to 115200 baud.
Upon bootup, the screen will display “Phi min: … Phi max: … Theta: …”. Use the three potentiometers to change these parameters: the far left potentiometer changes phi min, the middle one changes phi max, and the far right one changes theta. We used the standard spherical coordinate convention, where phi is measured from the Z axis down toward the XY plane.
Press the button to begin scanning.
Once the button is pressed, the screen will display a progress status to indicate the percentage of the scan that has been completed.
Upon completing the scan, the motors will reset back to the original starting position.
Close the PuTTY terminal. The data will now be saved in a text file.
Open up the scanned data and remove the PuTTY session header such that the text file only has X Y Z plot data.
Launch displaz and File > Add the data text file. The points should now be plotted in the viewer of the program.
There are two major issues with this design, but they come with easy-to-implement solutions:
It's slow. The scan you see on this page took about four minutes; it would be nice to scan a room in under ten seconds or so. One way to get there would be to use a brushless motor with an encoder on both axes: the LIDAR-Lite V3 has an update rate of 650 Hz, so the limiting factor right now is the motors, not the sensor. Another way to build a much faster scanner would be to use a Microsoft Kinect. I've been playing with a Kinect V2 in my squat form correction project, FormCheck. It's fairly accurate at depth sensing up to 8 meters (26 feet), which is good enough for most rooms, and it's fast enough to produce a depth map within a second. Two reasons IR time-of-flight sensors like the Kinect's aren't used in technologies like self-driving cars are that their range is much shorter than LIDAR's (which reaches 50-100 m depending on how reflective the target is) and that sunlight can degrade their accuracy (though I believe some IR sensors use wavelengths where sunlight is relatively weak).
It's not very precise. Some points on the walls that should be very close to each other look like they're a foot apart in the point cloud. It's not the LIDAR's fault: the specifications say it's accurate to within +/-1 inch in optimal conditions. I think the inaccuracy comes from the servo, because hobby servo motors aren't built for precise positioning. A second stepper motor, a brushless motor with an encoder, or a solid-state 2D LIDAR (expensive) would do better.