
YOLObike: Blind Spot Detection For Bikes - HopHacks 1st Place Winner

Background:

One of the first friends I made at Hopkins was Thomas Keady, who majored in Electrical Engineering and ultimately got his master's in Robotics Engineering at Hopkins. We promised each other that we would eventually do a hackathon or two together, but we kept pushing it off because we're both very busy people. By early 2019, Thomas was graduating that May, so we knew we had to do a hackathon that semester. We both signed up for HopHacks Spring 2019.


Both of us wanted to work on something meaningful and tech-intensive (both hardware and software), but we didn't have any particular ideas in mind.


One early morning, I was riding my electric scooter as quickly as possible through the Baltimore streets because I was running late for class. As anyone from the area knows, many of the drivers here are pretty reckless and couldn't care less about the safety of anyone around them. So as I rode down the road, I always had a lingering sense of anxiety in the back of my head that a driver would try to pass me but cut too close and accidentally hit me. Or I might wander a bit too far into the middle of the lane, directly into the path of a car. But every time I looked behind me for cars, I took my eyes off the road in front of me, where there could be a car, pedestrian, or pothole that I wouldn't see. Although the idea sprouted from riding an electric scooter, we ultimately decided to build a solution for bikes since there are more bikes on the roads.

With blind spot detection technology becoming pervasive in automobiles, we wondered: "why hasn't anyone created a decent similar solution for bicycles?" On urban roads, bicyclists are at a far higher risk of being injured or killed in an automobile collision. So we wanted to build a system that lets bicyclists know where the cars behind them are without having to look back. With the problem defined, on the first night of HopHacks we started brainstorming ways to build a prototype... within 36 hours (we each got about 2-3 hours of sleep each night).

One of the hardest things about pulling off a hardware project during a hackathon is that we're limited in the materials we can use (especially because neither of us has a car on campus). Until the day Amazon starts delivering by drone, we're stuck with two-day shipping. Luckily, Thomas and I have both done a bunch of past projects for fun and for our courses, so we had more material to work with than most people, but we didn't have everything and had to make a couple of compromises. Another issue was that although 3D printing is cool, it's also very slow: the prints used in this project took about 24 hours, so there was no room for CAD mistakes. Either way, this project was definitely a success given the limited materials and time we had.

Video:

(In this demo, the object detection algorithm is set to detect people because we didn't want to try our prototype on a busy street yet, and because the HopHacks demos are indoors.)

How Big of an Issue Is This?

To make sure we weren't trying to solve a problem that doesn't actually exist, we did some quick research. According to the US National Highway Traffic Safety Administration, 840 bicyclists died in automobile collisions in the USA in 2016. Further, according to the Centers for Disease Control and Prevention, another 467,000 bicyclists were injured in the USA in 2015. These numbers were much greater than we imagined, so we knew this was definitely an issue.

Of these fatal automobile-bicyclist collisions, an astounding 40% occur when the bicyclist gets rear-ended by a car.

What YOLObike Does:

The image above is a picture of the dashboard that goes on the front of the bike so that the rider knows where the cars behind them are. There's a small extrusion at the top that represents the bike, and the LEDs surrounding it represent nearby cars. The closer a lit LED is to the extrusion, the closer the car, and each LED lights up in the direction of its car, so the rider can tell where the cars are relative to them. It's basically one of those radars they show in military movies, except it filters out everything except cars:

However, we couldn't simply use radar or sonar alone because a simple implementation of either (to my knowledge) cannot differentiate between objects (e.g., a car versus a bus stop). They can detect roughly how big objects are (with probably pretty bad precision), but they can't tell a stationary car apart from a car-sized object like a bus stop. That's why radar works so well for airplanes: there's nothing airplane-sized flying in our skies except airplanes.

How YOLObike Works:

On the rear of the bike, there's a camera and a LIDAR distance sensor mounted to a servo motor. The camera is used with an object detection algorithm called MobileNet-SSD. We originally called this project YOLObike because we were planning on using TinyYOLOv3 (and with our lack of sleep it sounded like a good name at the time), but we couldn't get it to work on the Raspberry Pi with the NCS2 in the short amount of time we had. Shoutout to the GitHub user Kodamap for posting some quick instructions on how to get MobileNet-SSD running on the RPi with the NCS2.


So, the camera is used to detect cars (in our demo, we set it to detect people because we didn't want to try our prototype on busy roads yet), and if there's a car near the center of the frame, the LIDAR measures the distance to it and the corresponding LED lights up. After that, the servo motor rotates five degrees and the entire process repeats.
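
To make that loop concrete, here is a minimal Arduino-style sketch of the cycle. The pin assignments, scan limits, and the lightLed() helper are my own illustrative assumptions, not the actual firmware (the real sketch is linked in the Code section below); carInFrame(), the serial query to the Pi, is sketched after the dashboard discussion further down.

```cpp
#include <Servo.h>
#include <LIDARLite.h>   // the LIDARLite Arduino library mentioned in the Code section

Servo scanServo;         // sweeps the camera + LIDAR behind the bike
LIDARLite lidar;
int angle = 0;           // current scan angle in degrees

bool carInFrame();       // asks the Pi over serial whether a car is centered in frame (sketched below)

// Hypothetical dashboard helper: the real mapping of (angle, distance) onto the
// 15 dashboard LEDs depends on the wiring, which isn't reproduced here.
void lightLed(int angleDeg, int distanceCm) {
  int led = map(angleDeg, 0, 180, 0, 14);  // pick the LED pointing toward the detected car
  // digitalWrite() the matching pin; nearer cars light LEDs closer to the bike marker
}

void setup() {
  Serial.begin(115200);  // serial link to the Raspberry Pi (baud rate assumed)
  scanServo.attach(9);   // assumed servo pin
  lidar.begin(0, true);  // default configuration, fast I2C
}

void loop() {
  scanServo.write(angle);
  delay(50);                             // let the servo settle on the new angle
  if (carInFrame()) {                    // is a car centered in the camera frame?
    int distanceCm = lidar.distance();   // range to whatever the camera is looking at
    lightLed(angle, distanceCm);
  }
  angle += 5;                            // step five degrees and repeat
  if (angle > 180) angle = 0;
}
```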

One huge challenge of this project was making it portable, since it's meant to be used on a bike. Everything had to run on batteries, which meant we couldn't use power-hungry computers and had to run the computer vision on just a Raspberry Pi.


For reference, and for people who are unfamiliar with the Raspberry Pi 3B+, an old Samsung Galaxy S6 phone has far more compute power. The Raspberry Pi has a quad-core CPU running at 1.4GHz with 1GB of RAM, while the Samsung Galaxy S6 has an octa-core CPU running at 2.1GHz with 3GB of RAM.


Further, a lot of the machine learning that people think is running on their phones is actually run in a data center somewhere on powerful servers. We didn't have the luxury of running our object detection algorithm in the cloud because a slow connection would be detrimental to the functionality, and a biker could easily ride into a tunnel or onto a road with no mobile connection at all.


Luckily, my partner Thomas had an Intel Neural Compute Stick 2 that we could plug into the Raspberry Pi's USB port so that most of the computation would happen on the stick's Vision Processing Unit instead of the Pi. Without the NCS2, we were only getting 0.5 fps (and that's with a cut-down version of the object detection algorithm). With the NCS2, we got 5 fps! I think any engineer would cheer for a 10x increase in framerate.


To those with keen eyesight, you may have noticed that there are two battery packs: a powerbank and four AAs. Ideally we would use only one, but the motor was generating so much electrical noise that delicate electronics like our Raspberry Pi would shut down as soon as the motor started rotating. We tried fixing this with capacitors, but we ended up just using a second power source.

On the front of the bike, we had the LED dashboard, which contained an Arduino microcontroller, and a potentiometer to sense the rotation of the handlebars. What may seem surprising is that we used the Arduino as the "brains" of the system even though it has significantly less compute power than the RPi. We figured we could "blackbox" the RPi-camera setup so that it simply acted as a sensor that reports whether there's a car directly in front of the camera. Whenever the Arduino wanted to know if a car was there, it would send the RPi a byte over serial, and the RPi would check and reply with a 1 if a car was detected and a 0 if not.
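
Here's what that query could look like on the Arduino side, filling in the carInFrame() helper used in the scan-loop sketch above. The query byte, timeout, and reply encoding are illustrative assumptions; the actual protocol lives in the linked Arduino sketch.

```cpp
// Arduino-side sketch of the "is there a car?" query to the Raspberry Pi.
// The '?' query byte and the 200 ms timeout are assumed values, not the real firmware's.
bool carInFrame() {
  while (Serial.available()) Serial.read();      // discard any stale bytes from the Pi
  Serial.write('?');                             // ask the Pi to check its latest frame
  unsigned long start = millis();
  while (!Serial.available()) {                  // wait for the one-byte answer
    if (millis() - start > 200) return false;    // treat a slow or missing reply as "no car"
  }
  return Serial.read() == 1;                     // Pi replies 1 = car detected, 0 = none
                                                 // (assuming it sends a raw 1/0 byte)
}
```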


The Arduino was connected via a USB cable to the RPi. It was also connected to all 15 LEDs, the motor, and the potentiometer. We fully maxed out every single GPIO pin on the Arduino Nano.


We wanted to know the angle of the handlebars because we thought it would be more effective for the bike to scan for cars only in the direction the bike is turning. Say the rider is merging or turning left; the only cars that could hit them from behind are the ones on the left side. So if we can detect when the rider is making a turn, we can scan only in the direction of the turn and effectively double the rate at which we cover that area. We measured the handlebar angle with the potentiometer on a 3D-printed mount, and I programmed some logic into the Arduino to change the scanning range, roughly sketched below.
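
A rough sketch of how that logic could look. The pot pin, the +/-15 degree dead zone, and which half of the sweep counts as "left" are all assumptions that depend on how the pot and servo are mounted, so treat this as illustration rather than the actual Arduino code.

```cpp
const int POT_PIN = A0;   // assumed potentiometer pin on the handlebar mount

// Narrow the servo sweep toward the side the rider is turning toward;
// ride straight and the full range behind the bike gets scanned.
void updateScanRange(int &minAngle, int &maxAngle) {
  int handlebarDeg = map(analogRead(POT_PIN), 0, 1023, -90, 90);  // left turn = negative
  if (handlebarDeg < -15) {            // turning left: only sweep the left half
    minAngle = 90;  maxAngle = 180;
  } else if (handlebarDeg > 15) {      // turning right: only sweep the right half
    minAngle = 0;   maxAngle = 90;
  } else {                             // riding straight: sweep the full range
    minAngle = 0;   maxAngle = 180;
  }
}
```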

Code:

The code is split into two parts, one for the Arduino and one for the Raspberry Pi. The Arduino code is located here - https://github.com/tchanxx/HophacksS2019/blob/master/Arduino%20Code/arduinoHopHacks/arduinoHopHacks.ino

(You will also have to install the LIDARLite package. I used a LIDAR-Lite v2 instead of a v3 because it was a third of the price.)


The Raspberry Pi code is located here - https://github.com/tchanxx/HophacksS2019/tree/master/piSide (On the Raspberry Pi, after you git clone the repo, change into the piSide directory and run "sh myscript.sh", which launches object_detection_demo_ssd_async2.py, the file containing the code that detects objects and communicates with the Arduino. You can also uncomment any of the lines in myscript.sh to have the algorithm use a different neural net trained on cars, pedestrians, or bikes.)

CAD Models:

As mentioned earlier, 3D printing is cool but very slow: the prints used in this project took about 24 hours, so there was no room for CAD mistakes because there was no way we could do a reprint. Luckily, most of the CAD was okay (I had to do a little Dremeling to get it to fit the bike a bit better).

Arduino Holder & Bike Mount

LED Dashboard

Potentiometer Bike Mount

You can download any of these CAD models at my GitHub - https://github.com/tchanxx/HophacksS2019/tree/master/CAD

Future Improvements:

As is typical with any project built in 36 hours, there's a lot that can be improved. In particular, I would like the next version (if we continue this project) to have:

  • A stationary wide-view camera. This would make the object detection algorithm much more accurate because the motor would no longer rotate the camera, so it wouldn't have to deal with as many shaky frames.

  • A faster LIDAR like the ones they use in self-driving cars and advanced Roombas. Since we wouldn't be limited by the speed at which the camera can identify cars, we could quickly and continuously measure the distances of all objects behind the bike.

  • A faster embedded computer. A huge bottleneck of this project was how fast the computer could identify cars: the Raspberry Pi alone could only manage a measly 0.5 fps, and even with the NCS2 we were only getting 5 fps. At the time of writing, Nvidia recently announced a product called the Jetson Nano, which promises to run TinyYOLOv3 at 25 fps and would be an incredible increase in speed.

Conclusion:

Thomas and I won first place at HopHacks and received $1,024! It was a great experience for us and I would love to do it again (but maybe get a bit more sleep next time).

Full Code - https://github.com/tchanxx/HophacksS2019