
Miniature Self-Driving Car

A miniature self-driving car (Duckietown robot) was developed to navigate a miniature town with traffic signals, stop signs, and street signs, and to demonstrate lane following and adaptive cruise control.

Overview:


The software running the robot during the final demo was split across the Arduino and the Pi. The Pi was responsible for user interaction, path planning, running the finite state machine, setting the target velocity, and vision-based lane following. The Arduino was responsible for setting motor PWM, obstacle detection, and encoder-based PD control.


The Pi ran three threads: the main thread, the vision thread, and the finite state machine thread. The main thread listened for user commands, such as adding destinations to the route or shutting the robot down. The vision thread continuously captured new images from the camera. The finite state machine thread performed most of the work: it looped until shutdown, initiating and performing actions to transition between states.
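A minimal sketch of how this thread layout might look in Python is below; the method and attribute names on the robot object are illustrative, not the project's actual code.

import threading

def vision_loop(robot):
    # Continuously capture frames and hand the latest one to the robot object.
    while not robot.shutdown_requested:
        robot.latest_frame = robot.camera.capture()

def fsm_loop(robot):
    # Drive the finite state machine until shutdown.
    while not robot.shutdown_requested:
        robot.step_state_machine()

def main(robot):
    threading.Thread(target=vision_loop, args=(robot,), daemon=True).start()
    threading.Thread(target=fsm_loop, args=(robot,), daemon=True).start()
    # Main thread: listen for user commands such as new destinations or shutdown.
    while not robot.shutdown_requested:
        cmd = input("> ").strip()
        if cmd == "shutdown":
            robot.shutdown_requested = True
        elif cmd.startswith("goto "):
            robot.add_destination(cmd.split(" ", 1)[1])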

Both the Pi and the Arduino maintained their own “robot” class containing the relevant methods and state. The Pi robot was high level and exposed commands such as “shutdown” or “drive straight.” These commands were passed down to the lower-level Arduino robot, which handled setting the motor speeds and stored state such as the number of encoder segments and the circumference of the wheels. The Arduino was also responsible for avoiding collisions, for odometry, and for executing all non-lane-following actions such as “drive straight” or “turn right or left.”
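The sketch below shows one plausible way the Pi-side robot class could relay a blocking command to the Arduino over serial; the command strings and the reply protocol here are assumptions, not the project's actual interface.

import serial

class PiRobot:
    def __init__(self, port="/dev/ttyACM0", baud=115200):
        self.link = serial.Serial(port, baud, timeout=0.1)

    def send_blocking(self, command):
        # Send a command such as "DRIVE_STRAIGHT 0.50 0.10" and block until
        # the Arduino reports that the action has finished.
        self.link.write((command + "\n").encode())
        while True:
            reply = self.link.readline().decode().strip()
            if reply == "DONE":
                return

    def drive_straight(self, distance_m, velocity=0.1):
        self.send_blocking("DRIVE_STRAIGHT %.2f %.2f" % (distance_m, velocity))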


PD Control Implementation:


It was decided to use ΔS (change in displacement) as a proxy for ΔV (change in velocity) when setting up the odometry controller. Basing the controller on velocity led to unstable behaviour: computing the instantaneous velocity at every interrupt required dividing by a very small time step, which amplified errors and produced large, erratic adjustments to the PWM signal. Scaling this value with appropriate gains was possible, but the desired behaviour was never obtained. Since Δt is the same for the left and right wheels, ΔS was used instead to set up the controller. The relationship factor C between Sl and Sr was set according to the three cases below, and the termination (exit) criterion was reaching the reference distance values for the left and right wheels.

The objective of the PD controller was to reduce the error in the relative distances travelled by the two wheels, for three different cases:

1) Straight path: (Sref = 0.5m; C = 1)

2) Left turn: (Sref = 0.75m; C = 1.75)

3) Right turn: (Sref = 0.35m; C = -1)

Odometry control was used to handle intersections, where the camera could not see the lanes. Appropriate states were identified for switching to odometry at intersections, and after the exit state the vision controller took over again to continue lane following. The following governing equations were used for encoder PD control:

err = (Sr - Sl);

where Sl and Sr are the distances travelled by the left and right wheels respectively,

ΔS = K*err + B*err_dot;

where err_dot = err(t) - err(t-1)

Finally, the actual velocities were set based on the open-loop relationship between the linear velocity of each wheel and the corresponding PWM signal, obtained from the open-loop testing carried out earlier.

Vl = Vl + ΔS;

Vr = Vr - ΔS;
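Putting the equations together, one PD update step might look like the Python illustration below; the actual controller ran on the Arduino, and the gains shown are placeholders rather than the values we used.

K = 0.5   # proportional gain (placeholder value)
B = 0.1   # derivative gain (placeholder value)

def pd_step(S_l, S_r, prev_err, v_l, v_r):
    # err compares the distances travelled by the two wheels.
    err = S_r - S_l
    err_dot = err - prev_err
    delta_S = K * err + B * err_dot
    # The correction is applied symmetrically to the wheel velocity commands,
    # which are then mapped to PWM via the open-loop calibration.
    return v_l + delta_S, v_r - delta_S, err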

For the encoder-based PD control, we observed that if the robot initially veered to either side, the controller could not correct the change in heading, and the robot kept driving in that direction. We thought that more encoder resolution, i.e. using 16 segments instead of 8, would solve this issue. However, it made the placement of the encoders much harder, since one has to sit at the start of a segment and the other at the center of another segment, and even minor perturbations in encoder placement threw off the counts. We therefore reverted to the original 8-segment encoders.


Adaptive cruise control:


Various conditions were set using the PING sensor based on the distance between the robot and a potential obstacle. The reference velocity was set to roughly 0.1 m/s everywhere except the straightaways, where it was 0.2 m/s. If the nearest obstacle detected by the PING sensor was beyond 40 cm, the robot held its reference velocity. If the obstacle was within 40 cm, the velocity was scaled using a non-linear relation between the current velocity and the distance to the obstacle. If the distance dropped below 15 cm, the robot braked.
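A rough sketch of this logic is below, written in Python for illustration even though it ran on the Arduino; the exact non-linear scaling curve is not documented here, so the quadratic ramp is only an assumed stand-in.

def cruise_velocity(v_ref, obstacle_dist_cm):
    if obstacle_dist_cm < 15.0:
        return 0.0                       # too close: brake
    if obstacle_dist_cm > 40.0:
        return v_ref                     # clear road: hold reference velocity
    # Between 15 cm and 40 cm, scale the velocity non-linearly with distance.
    frac = (obstacle_dist_cm - 15.0) / (40.0 - 15.0)
    return v_ref * frac ** 2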


Finite State Machine:


The finite state machine loops through this sequence (a rough code sketch follows the list):

● Use path planning to get next state and a sequence of actions for getting to that state

● For each action, check whether it is safe; wait until it is, then execute it

● Update state. If relevant, update the robot’s odometry based on the state
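A minimal Python sketch of that loop, with illustrative method names rather than the project's actual ones, might look like:

def run_state_machine(robot):
    while not robot.shutdown_requested:
        next_state, actions = robot.plan_next_state()
        for action in actions:
            while not robot.action_is_safe(action):
                pass                       # wait until the action is safe
            robot.execute(action)          # blocks until the action completes
        robot.state = next_state
        robot.update_odometry_for_state(next_state)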

The actions that can be performed are “laneFollowToStop,” “laneFollowToLoc,” “driveStraight,” “turnLeft,” and “turnRight.” Each action blocks until it completes. For Arduino-implemented actions, the Pi sends a command to the Arduino and then waits for the Arduino's response indicating that the action has finished. Each action accepts a “velocity” argument, the velocity at which the action should be executed. The “laneFollowToLoc” action also takes a “heading” argument: the robot performs visual lane following until its heading is close to the desired heading. This is what let us transition into the straightaway speedup section; for example, when transitioning from state 10 to state 7, the robot waits until the heading is near π. This worked consistently when first implemented, but shortly before the demo it stopped working, since the angles reported by the robot no longer lined up with what we expected. We tried resetting the angle to the expected value every time the robot entered a new state, but that did not help either, and since there was no time to debug, we simply disabled the straightaway speedup section.
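As an illustration, the heading-based exit condition of “laneFollowToLoc” could be sketched roughly as follows; the tolerance value and helper names are assumptions, not project code.

import math

def lane_follow_to_loc(robot, target_heading, velocity, tol=0.2):
    robot.set_velocity(velocity)
    while True:
        robot.lane_follow_step()
        # Wrap the heading error into [-pi, pi) before comparing to the tolerance.
        err = (robot.heading - target_heading + math.pi) % (2 * math.pi) - math.pi
        if abs(err) < tol:
            break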

Another challenge we had to overcome with state transitioning was the seam between tiles. When performing right or left turns, the slower wheel would often get stuck on the boundary between tiles, making the robot drastically over-turn. We fixed this by increasing the velocity at which turns are performed. Also, when exiting a straight drive through an intersection and entering lane following, the robot would sometimes see the red stop sign of the other lane. We fixed this by having it ignore stop signs for the first 2 seconds of lane following. Finally, one issue we did not have a chance to fix was turning while ping is enabled: sometimes the robot would detect a vehicle in the other lane while turning and stop to avoid it. We could fix this by disabling ping while turning, but then we would risk rear-ending a slow-turning vehicle. More time would be needed to figure out how to differentiate vehicles ahead of us from vehicles in nearby lanes.


Path planning:


For path planning, our robot was fully capable of being given a set of locations, planning a path, getting the list of actions associated with that path, and then executing those actions. To avoid hard-coding values, I created a JSON file whose format represents the track and the actions that need to be taken. We parsed this JSON and built a map from the information. Once this was done, we ran Dijkstra's algorithm to plan the optimal path for the robot to take. Lastly, we converted this path into the list of actions for the finite state machine to execute.
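A simplified sketch of that pipeline is below; the JSON schema shown is an assumption about the format, not the exact one we used.

import heapq, json

def load_map(path):
    # Assumed format: {"edges": [{"from": "A", "to": "B",
    #                             "cost": 1, "action": "turnLeft"}, ...]}
    with open(path) as f:
        data = json.load(f)
    graph = {}
    for e in data["edges"]:
        graph.setdefault(e["from"], []).append((e["to"], e["cost"], e["action"]))
    return graph

def dijkstra_actions(graph, start, goal):
    # Returns the list of actions along the cheapest path from start to goal.
    pq = [(0, start, [])]
    seen = set()
    while pq:
        cost, node, actions = heapq.heappop(pq)
        if node == goal:
            return actions
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight, act in graph.get(node, []):
            heapq.heappush(pq, (cost + weight, nxt, actions + [act]))
    return None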


Vision:


We had lane following working previously, but it wasn't flawless. Glare caused our robot to mistake yellow lines for white, and we could only load and process a new image about once per second, which was too slow.

To fix the glare issue, a few things changed. First, we experimented with the HSV color model, but in the end we switched to the HSL color model, which is slightly different from HSV, and this reduced the influence glare had on our image recognition. In addition, we ordered a polarized lens, which further reduced the amount of glare.
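For illustration, a lane-color threshold in HSL (OpenCV's HLS channel ordering) might look like the sketch below; whether we used OpenCV for this step, and the exact threshold values, are assumptions.

import cv2
import numpy as np

def lane_masks(bgr_image):
    hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS)
    # Yellow lane: a hue band with moderate lightness (placeholder bounds).
    yellow = cv2.inRange(hls,
                         np.array([15, 60, 80], dtype=np.uint8),
                         np.array([35, 200, 255], dtype=np.uint8))
    # White lane: high lightness, low saturation (placeholder bounds).
    white = cv2.inRange(hls,
                        np.array([0, 200, 0], dtype=np.uint8),
                        np.array([179, 255, 60], dtype=np.uint8))
    return yellow, white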

Our other issue was that we needed to process more frames per second. To do this, I timed how long each piece of code took to run and then tried to bring the run times down. Initially, loading the image itself took about 0.5 seconds, and the image processing took about 0.7 seconds. To load the image faster, I switched the camera to capture through the video port, which brought the load time down to about 0.1 seconds. To process images faster, I added some optimizations so that we ran fewer calculations and processed fewer pixels than before: we processed every 10th pixel along the x and y axes, i.e. only 1 pixel for every 100 in the image, which brought the processing time down to about 0.2 seconds. With these changes we were able to process about 3-4 frames per second, compared to the 1 we were doing before.
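The two speedups could be sketched as follows using the standard picamera API; the resolution and the processing step size are illustrative.

import picamera
import picamera.array

def capture_and_scan(step=10):
    with picamera.PiCamera(resolution=(320, 240)) as cam:
        with picamera.array.PiRGBArray(cam) as raw:
            # use_video_port=True trades a little image quality for a much
            # faster capture than the still port.
            cam.capture(raw, format="bgr", use_video_port=True)
            frame = raw.array
    # Downsample by slicing: keep 1 pixel of every `step` along x and y.
    sparse = frame[::step, ::step]
    return sparse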

PEOPLE

Collaborators

Jake Valladeres, Timothy McNamara

