Team Status Report 08/05/21

This week we mainly worked out the kinks that came up during integration. We all worked together to iron out some of the path planning issues we were seeing when testing the lead vehicle’s navigation system. Part of the reason for the inconsistency was the update frequency of both the path planning node and the localisation node. After decreasing the frequency of updates we observed much better performance.
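For reference, a minimal sketch of how a node’s update rate can be capped in rospy; the topic name, message type, and 5 Hz value here are illustrative rather than our exact configuration:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def planner_node():
    rospy.init_node('path_planner')
    pub = rospy.Publisher('/planned_pose', PoseStamped, queue_size=1)
    rate = rospy.Rate(5)  # illustrative: publish at 5 Hz instead of as fast as possible
    while not rospy.is_shutdown():
        pose = PoseStamped()
        pose.header.stamp = rospy.Time.now()
        # ... fill in the next waypoint from the current plan ...
        pub.publish(pose)
        rate.sleep()

if __name__ == '__main__':
    planner_node()
```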

We also tweaked the weight distribution of the cars to give them smoother motion and less drift overall. Differences in material and other irregularities in the optical encoders and wheel dynamics made the vehicles behave slightly differently, and it took a bit of calibration to get them moving in a reasonably similar manner. With more time we would have liked to achieve identical movement by implementing better feedback and perhaps passing more specific data between the cars; due to time constraints, however, we settled for small hacks that make the motion fairly similar.

Joel also fixed some issues in the communication layer to allow for more consistent messaging between the vehicles.

The rest of the week was dedicated to testing the system as a whole to iron out any remaining issues. Although the vehicles are a little finicky and their behaviour somewhat unpredictable, due to subtle changes in the positions of the motor axes, dirt accumulating in the wheels and on the terrain, and small perturbations in the vehicles themselves, we were able to have our two cars successfully navigate a simple track with 5 obstacles. Since the drift between the two vehicles became too significant after 5m, we ended up shortening our demo video to a track 5m in length.

Jeffrey’s Status Report 08/05/21

This week I focused on getting the project ready for the demo. We fixed the following car not driving straight by adding weights to counterbalance the weight distribution. This reduced drift and led to more accurate localisation and odometry data, which helped the movement of the following vehicle. We also ironed out some of the kinks in the lead vehicle’s path planning. To accomplish this we had to fix some issues with the object detection: mainly, we now keep a history of depth values for each detected object, and if the object detection reports a depth of 0 we simply reuse the previously detected depth as an approximation. This makes the object detection output much smoother and gives more consistent obstacles for path planning. We also worked on integrating the lead and following vehicles together, but ran into an issue with the following vehicle’s odometry that turned out to be caused by a faulty optocoupler. In the next few days we have to finish filming the demo of the lead and following vehicles working together.
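A rough sketch of the depth fallback is below; the class and parameter names are illustrative, and our actual node keeps this state inside the detection callback:

```python
from collections import deque

class DepthFilter:
    """Fall back to the last valid depth when the camera reports 0."""

    def __init__(self, history_len=5):
        self.history = deque(maxlen=history_len)

    def filter(self, raw_depth_m):
        if raw_depth_m > 0:
            self.history.append(raw_depth_m)
            return raw_depth_m
        # A depth of 0 means the sensor had no reading for this region,
        # so reuse the most recent valid value as an approximation.
        if self.history:
            return self.history[-1]
        return None  # no valid depth seen yet
```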

Jeffrey’s Status Report 1/5/21

This week I mostly worked with Joel and Fausto on integration. We amended our path planning algorithm to rely on A*, which is rerun every time an obstacle is found. Navigation therefore depends on localisation data and on sending discrete x, y tile locations that the car must reach; these locations are dynamically updated each time A* is rerun after a new obstacle is found. After making these changes we gave the car a straight path, but found significant drift in the car’s odometry data. We therefore implemented a PD controller so the wheels get feedback and the wheel velocities stay fairly consistent. We also added ramp-up and ramp-down periods to help with odometry, so the wheel velocities step toward the target over a number of control ticks instead of jumping there immediately. We saw better odometry data after these changes, and the car was able to drive straight. We then started integrating object detection and path planning and had some difficulties moving around the object; the tricky part is figuring out the next location to go to in order to change direction. This still needs some tinkering, as the vehicle occasionally fails to plan around the object, but overall it is performing well.
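A simplified sketch of the wheel-velocity loop is below; the gains and ramp step are placeholders rather than our tuned values:

```python
class WheelController:
    """PD control on wheel velocity with a ramped setpoint."""

    def __init__(self, kp=0.8, kd=0.05, ramp_step=0.02):
        self.kp = kp                # placeholder gains, not our tuned values
        self.kd = kd
        self.ramp_step = ramp_step  # max change in setpoint per control tick
        self.setpoint = 0.0
        self.prev_error = 0.0

    def update(self, target_velocity, measured_velocity, dt):
        # Ramp the setpoint toward the target instead of jumping to it,
        # which smooths out the start and stop of each move.
        delta = target_velocity - self.setpoint
        delta = max(-self.ramp_step, min(self.ramp_step, delta))
        self.setpoint += delta

        # PD term on the error between the ramped setpoint and the encoder reading.
        error = self.setpoint - measured_velocity
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative
```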

Next week we plan to continue fine tuning the path planning and object detection of the lead car, as well as begin testing the following car. Our current ideas are either to send object locations and have the follow car run A* as well, or to have the lead car send the follow car the same landmarks it follows along its path. We expect the latter to have difficulties because of the differences in drift between the two cars, but further experimentation is needed.

Jeffrey’s Status Report 24/4/21

This week we focused on finalising the object detection algorithm and porting it into ROS for integration. We originally ran into an issue where the network we were using was too big and consumed too much power and memory on the Jetson, causing the frame rate to be too low to be useful. Upon further research, we realised that neural networks on the Jetson are meant to be run with TensorRT rather than vanilla TensorFlow. Primed with this information, we went back to the Jetson development pages to look at example networks built on TensorRT, and converted the object detection module to TensorRT following the Jetson documentation. We then ran it on the Jetson Nano with the camera mounted on the car to check that the object detection was working. We found that Gatorade bottles worked consistently, in that the algorithm could routinely detect and plan around them. I also worked on wrapping the object detection algorithm in a ROS node so it can integrate and communicate with the other nodes.
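Our node roughly follows the stock TensorRT detection example from the jetson-inference project; a minimal version looks something like the following, with the camera URI and detection threshold as illustrative values:

```python
import jetson.inference
import jetson.utils

# detectNet loads (or builds on first use) a TensorRT engine for the network.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # illustrative camera URI
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)   # bounding boxes + class IDs + confidences
    display.Render(img)
    display.SetStatus("{:.0f} FPS".format(net.GetNetworkFPS()))
```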

In the next few days, we will begin real-time testing of the car’s autonomous navigation capabilities, primarily its ability to navigate around obstacles, and we will also begin testing the following capabilities of the following car. The following car will rely on a map sent by the lead car to trace the lead car’s path through the course.

Team Status Report 10/4/21

This week our team focused on perfecting and finalising the individual subsystems to make integration go more smoothly. Joel finalised and updated the actual vehicle specifications and characteristics, and amended the odometry functionality of the vehicles. Construction of the remaining vehicles will be completed at a later date once integration begins. Joel and Fausto have both begun porting their modules over to ROS and developing a broadcast system to make communication between the subnodes easier. Fausto also began preparing the communication protocol and establishing limits on how far apart the vehicles can be before the connection breaks. He found that the connection is extremely robust to distance; once the cars establish a connection at the beginning, they are highly unlikely to lose it. Additionally, Fausto has begun researching ROS and figuring out the best way to design his module so it integrates seamlessly on the ROS platform. This will eventually allow for a more fluid integration, as he and Joel can then help integrate the object detection node into ROS next week.

On the object detection front, Jeffrey began finalising the pipeline and integrating the Intel RealSense camera with object detection. Rather than using RGB-D image values, only RGB will be fed to MobileNet v2, and the resulting bounding box, with additional padding, will be redrawn onto the depth map to extract the distance of the object from the car and determine the angle at which to turn the wheels to avoid the obstacle. The depth map with the bounding box will also be broadcast to the localisation node to further develop the map of the environment.
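As a rough illustration of the broadcast pattern between subnodes, a subscriber on the localisation side could look like the sketch below; the topic name and the stock vision_msgs message type are stand-ins for whatever interface we finalise:

```python
#!/usr/bin/env python
import rospy
from vision_msgs.msg import Detection2DArray

def detection_callback(msg):
    # Each detection carries a bounding box; the localisation node folds
    # these into its map of the environment.
    for det in msg.detections:
        rospy.loginfo("object at bbox centre (%.1f, %.1f)",
                      det.bbox.center.x, det.bbox.center.y)

def localisation_node():
    rospy.init_node('localisation')
    rospy.Subscriber('/object_detections', Detection2DArray, detection_callback)
    rospy.spin()

if __name__ == '__main__':
    localisation_node()
```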

Looking ahead, we want to begin integration next week and hopefully have an integrated product for the interim demos. Even if integration inevitably runs into issues, our individual subsystems are ready to demo and are essentially finalised. This makes integration our main priority, and we will also begin further developing our communication protocol next week.

Jeffrey’s Status Report 10/4/21

This week I finalised and began putting together the Python script that does all the object detection. I also did more robust testing of the distance at which an object can be consistently detected. I found that, using a small cardboard box, the laptop camera could routinely detect the object at ~40cm away, which is well within our requirements. This was done with OpenCV and a pretrained MobileNet v2 running on the laptop camera. There are issues with latency, but I believe these come from the laptop camera rather than from the model itself. Once an object is detected, I draw the bounding box onto the image with additional padding to ensure the entire object is safely enclosed.

Furthermore, I began integration with the Intel RealSense camera. There were originally a lot of issues with the RealSense documentation, but the object detection system should now be fully functioning. The pipeline works as follows: the module takes the raw RGB feed from the Intel RealSense, passes it through MobileNet v2 to extract the bounding box, and redraws the bounding box onto the depth map. From there we can approximate the width of the object from the edges of the bounding box and use arctan to find the angle at which to turn the car to avoid the obstacle. We can also use the depth map to update the map of the course and facilitate localisation of the car.
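In code, the geometry boils down to something like the sketch below; the padding and camera intrinsics are placeholder values that would come from the RealSense in practice:

```python
import numpy as np

def avoidance_angle(bbox, depth_map, pad=10, fx=615.0, cx=320.0):
    """Approximate steering angle (degrees) to clear a detected object.

    bbox      -- (x_min, y_min, x_max, y_max) in pixels from MobileNet v2
    depth_map -- depth image (metres) aligned to the RGB frame
    fx, cx    -- camera intrinsics; placeholders, read from the camera in practice
    """
    x_min, y_min, x_max, y_max = bbox

    # Median depth inside the box approximates the object's distance.
    region = depth_map[y_min:y_max, x_min:x_max]
    depth = float(np.median(region[region > 0]))

    # Pad the box so the whole obstacle is safely enclosed, then take the
    # edge farther from the image centre as the point to steer past.
    left, right = x_min - pad, x_max + pad
    edge_px = right - cx if abs(right - cx) > abs(left - cx) else left - cx

    # Pinhole model converts the pixel offset to metres at the object's depth;
    # arctan of (horizontal distance / depth) then gives the steering angle.
    horizontal_m = (edge_px / fx) * depth
    return np.degrees(np.arctan2(horizontal_m, depth))
```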

For next week, we want to begin integration: porting my module over to ROS and integrating it with the car. We also need to work out the optimal angle at which to mount the RealSense camera to maximise object detection.

Jeffrey’s Status Report 3/4/21

This week I focused on fine tuning the object detection algorithm and began writing the planning algorithm. I had a little trouble downloading all the drivers for the Intel RealSense camera, but managed to get everything installed properly by the middle of the week. After that I experimented with extracting the RGB frames and using OpenCV to process them into something MobileNet v2 can use for object classification and detection. I then tested the robustness of the Intel camera by moving it around the room to check for jitter in the feed and to determine the fastest rate at which we can sample from the camera while still getting clear images. I found that we can sample faster than MobileNet v2 can process an image, which makes our lives easier going forward. Starting tomorrow, I will hook the Intel camera up to MobileNet v2 to see what the real-time object detection looks like. After that we can start integrating and I can begin determining heuristic values for the planning algorithm.
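The capture loop I used to sanity-check the feed is roughly the following; the 640x480 resolution and 30 fps stream settings are just the values I happened to test with:

```python
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        # Convert the RealSense frame into a NumPy array that OpenCV
        # (and the MobileNet v2 preprocessing) can work with.
        image = np.asanyarray(color_frame.get_data())
        cv2.imshow('RealSense RGB', image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()
```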

Additionally, we plan to meet up to decide what the obstacles will look like and to record realistic footage from the car to further test the robustness of the object detection algorithm.

Honestly what does it even matter, you work so hard only to have Jalen Suggs hit a half court shot to win the game.


Jeffrey’s Status Report 27/3/21

This week I mainly focused on the planning aspect of the project. I had to figure out how to use the bounding boxes produced by the object detection algorithm to decide what angle to set the wheels to so that the car avoids the obstacle. The issue is that the bounding box around the object does not itself contain depth information, so we don’t know how far away the object is. We could assume the object is a short distance from the car, but this would lead to wide turns around obstacles and could hurt planning if the course is dense with obstacles. We therefore needed a way to approximate the horizontal distance necessary to avoid the obstacle. Once that is determined, we can use the depth map from the Intel RealSense to form a right-angled triangle with the depth and the horizontal distance as the legs; the angle between the hypotenuse and the depth leg is the angle needed to turn the wheels.

In the diagram above, I let D1 and D2 be the distances from the centre of the image to the edges of the bounding box. If we scale these distances linearly, i.e. apply some affine transformation to them, we can approximate the horizontal distance needed. This works because the planning algorithm only accounts for the left/rightmost bounding box when making decisions, and since the closest obstacle will be detected at a range of about 0.2m, we can choose a transformation such that the horizontal distance is not too far off the true distance and the vehicle never collides with the obstacle.
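A small sketch of that scaling idea is below; the scale and offset constants are hypothetical and would be fit by calibrating against obstacles at known positions:

```python
import math

# Hypothetical calibration constants for the affine transform; in practice
# they would be fit from measurements of obstacles at known positions.
SCALE = 0.0015   # metres of horizontal clearance per pixel of D1/D2
OFFSET = 0.05    # extra clearance in metres so we always err on the wide side

def steering_angle(d_pixels, depth_m):
    """Wheel angle (degrees) given the pixel distance from the image centre
    to the bounding-box edge (D1 or D2) and the obstacle depth in metres."""
    horizontal_m = SCALE * d_pixels + OFFSET   # affine approximation of the leg
    return math.degrees(math.atan2(horizontal_m, depth_m))
```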

For next week I will go back to object detection, continue experimenting with MobileNet on my laptop, and try to integrate MobileNet with the Intel RealSense camera.

Team Status Report 14/3/21

For this week, we mostly reviewed our feedback from the design review presentation, adjusted our scope and design slightly, and started implementing small parts. We finalised the CAD model for our RC cars, and since the parts arrived late this week, we will begin assembly this weekend and into next week. After that we will begin testing our RC car design against the metrics we outlined in our design and proposal presentations. On the object detection front, we have multiple candidate algorithms that we will implement and experiment with locally first, with the aim of porting them onto the Jetson hardware once it arrives. We will also start experimenting with and developing the communication protocol for the V2V communication. We still need to work out how to incorporate the IMU data into the communication and exactly how we will lead the following car to ensure it stays on track and doesn’t drift into any of the obstacles.

Moving forward, we hope to have the cars assembled and begin testing by the end of next week, as well as to iron out the details of the object detection algorithm so we can focus on a control algorithm for planning and on figuring out some of the convoy mechanics.

Jeffrey’s Status Report 6/3/21

This week I mainly focused on narrowing down the object detection algorithms and figuring out what is possible given our limited compute power. I brushed up on CNNs, including some of the state-of-the-art object detection algorithms, and investigated the possibility of using RGB-D information as part of the object detection. This would mean using depth as well as the image to make detection decisions, rather than using the depth information only for planning. This led me to read about a Faster R-CNN variant that uses RGB-D information for state-of-the-art object detection, and about how it can be used in conjunction with VGG16. The authors of the paper reported a processing rate of around 5 fps on COCO datasets using a standard GPU. This is definitely one algorithm I will look into further; however, it depends on the quality of the data we can get from the depth camera. Since they used data from a high-quality depth camera such as the Intel RealSense, the point clouds they could generate from the RGB-D information will be much higher quality than what we can generate with a makeshift PS4 camera. More experimentation will be needed to see whether such algorithms can work even if the RGB-D information isn’t as rich. Looking into VGG16, it seems like a lightweight and accurate option for our purposes. To get around the network being trained on real objects, we plan to print out pictures and paste them onto our obstacles, so there is no need to generate a new dataset and retrain the network from scratch. We can simply freeze most of the weights and fine tune the network until it gives the desired precision and recall.
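As a sketch of the freeze-and-tune idea, using tf.keras’s stock VGG16 purely for illustration (the classification head and layer choices are placeholders; a real detector would also need a box-regression head):

```python
import tensorflow as tf

# Load an ImageNet-pretrained VGG16 backbone and freeze its convolutional weights.
base = tf.keras.applications.VGG16(input_shape=(224, 224, 3),
                                   include_top=False,
                                   weights='imagenet')
base.trainable = False  # freeze most of the network

# Small trainable head tuned on our printed-picture obstacles.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),  # obstacle vs. background
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```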

Moving forward, I want to start generating RGB-D data from the PS4 camera we bought this week and begin testing object detection algorithms to see what works best and whether we need to rethink our approach.