Jeffrey’s Status Report for 04/24/2021

This week we focused on finalising the object detection algorithm and porting it into ROS for integration purposes. We originally ran into an issue where the network we were using was too big, consuming too much power and memory on the Jetson and causing the frame rate to be too low to be useful. Upon further research, we realised that neural networks on the Jetson are meant to be deployed with TensorRT rather than vanilla TensorFlow. Armed with this information, we went back to the Jetson development page to look at examples of neural networks using TensorRT, and converted the object detection module to TensorRT by following that documentation. From there, we ran it on the Jetson Nano with the camera mounted on the car to see if the object detection algorithm was working. We found that Gatorade bottles worked consistently, in that the algorithm could routinely detect and plan around them. I also worked on converting the object detection algorithm into ROS so it can integrate and communicate with the other nodes.
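
For reference, the Jetson examples expose TensorRT-accelerated detection through NVIDIA’s jetson-inference library; a minimal sketch of that pattern (not our exact code) is below. The model name, detection threshold, and camera URI are illustrative placeholders:

```python
# Minimal sketch of TensorRT-accelerated detection via NVIDIA's
# jetson-inference examples; model, threshold, and camera URI are
# placeholders, not our exact configuration.
import jetson.inference
import jetson.utils

# detectNet loads a detection model and builds/caches a TensorRT engine.
net = jetson.inference.detectNet("ssd-mobilenet-v1", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")  # or "/dev/video0" for USB

while True:
    img = camera.Capture()
    detections = net.Detect(img)  # TensorRT inference on the GPU
    for d in detections:
        print(d.ClassID, d.Confidence, d.Left, d.Top, d.Right, d.Bottom)
```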

In the next few days, we will begin real-time testing of the car’s autonomous navigation capabilities, primarily its ability to navigate around obstacles, and we will also begin testing the follower car’s ability to trail the lead car. The follower car will rely on a map that the lead car sends in order to trace the lead car’s path through the course.

Team Status Report for 04/24/2021

This week our team focused on preparing our subcomponents so we could integrate them together properly. In regards to the vehicle mechanics, we switched from standard wheels to mecanum wheels, which allow the vehicle to maneuver around the course more easily and maintain more accurate localization information. Additionally, accommodations were made to the body of the car in order to support mounting the camera on the vehicle. This week we ran into issues with running our object detection program on the Nvidia Jetson: the network we were using was too big and consumed too much power and memory, so the frame rate was too low for our usage. We therefore moved to a TensorRT base for the neural network inference and switched to MobileNet v1, since it was too difficult to get v2 to load. This solved the issue; we now steadily hit around 30 FPS and no longer run into resource issues. Finally, throughout the week we focused on migrating our code into ROS. Looking forward to next week, we will fully integrate our systems.

Joel’s Status Report for 04/24/2021

This week I focused on completing the vehicle mechanics and pose estimation portion of the project. With the new mecanum wheels, the vehicles are able to move in any direction without angular rotation around the z-axis. This makes tracking the position of the vehicle with wheel encoders and IMU sensors more reliable due to less overall slipping of the wheels. The code for this was translated into ROS code and integrated to publish the information.
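
For context, the standard inverse-kinematics model for mecanum wheels maps a desired body velocity to the four wheel speeds. The sketch below uses placeholder chassis constants (r, lx, ly) rather than our actual measurements:

```python
# Standard mecanum-wheel inverse kinematics; the chassis constants
# below are placeholders, not our vehicle's actual dimensions.
import numpy as np

r = 0.03   # wheel radius in meters (placeholder)
lx = 0.08  # half the wheelbase length (placeholder)
ly = 0.07  # half the track width (placeholder)

def wheel_speeds(vx, vy, wz):
    """Return [front-left, front-right, rear-left, rear-right] in rad/s."""
    k = lx + ly
    return np.array([
        vx - vy - k * wz,  # front-left
        vx + vy + k * wz,  # front-right
        vx + vy - k * wz,  # rear-left
        vx - vy + k * wz,  # rear-right
    ]) / r

# Example: strafe sideways at 0.2 m/s with no rotation around the z-axis.
print(wheel_speeds(0.0, 0.2, 0.0))
```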

This week I also helped fix the issues with object detection. This involved moving to a TensorRT base for the neural network inference and switching to MobileNet v1 due to difficulty getting v2 to load. This gave us a much higher FPS (around 30) and drastically reduced the resource usage of the inference model. Further, I added the camera mount and made it adjustable so that the angle of the camera can be modified if needed.

Lastly, we are tidying up the last couple of path planning loose ends. Namely, the original idea may be a bit unstable due to jitter between frames of the object detection algorithm. Therefore, I worked on a motion planning algorithm grounded in A* to fall back on if we are unable to smooth the output of the object detection. This planning does not require too much time (around 0.1 seconds), so the solution should be functional if needed.
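
A minimal grid-based A* sketch in the spirit of that fallback planner is below; the occupancy grid and coordinate format are simplified relative to what the real planner would consume:

```python
# Minimal grid-based A* sketch (simplified relative to the real planner).
# grid is a 2D list where 0 = free cell and 1 = obstacle.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1,
                                          (r, c), path + [(r, c)]))
    return None  # no path exists

# Example: route around a wall in a 3x3 grid.
print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```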

This next week will be spent tying up any loose ends and fully integrating the systems so they operate fully independently.

Fausto’s Status Update [04/24/2021]

This week I worked on converting the Bluetooth communication code between Autovots into ROS code. I created new packages specific to client-side and server-side communication and tested the ROS nodes with two Jetson Nano boards. Previously I had gotten the Bluetooth connection set up and enabled communication across boards, but this week I integrated the communication code into ROS in order to begin connecting it with the other components of our project (such as object detection and planning). At first, I sent arbitrary coordinates across Autovots and made sure connections were being set up properly and data was being sent accurately. Then, after debugging ROS mechanics, I created MapUpdate messages and made the server-side node a subscriber to a ‘map_update’ topic, which calls a function to send map updates to the client. The client then receives the data through Bluetooth and publishes the newly acquired map information to another ‘map_update’ topic.
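
For illustration, the server side boils down to the pattern sketched below. The MapUpdate fields, the use of PyBluez for the RFCOMM socket, and the JSON packing are assumptions made for the sketch, not our exact definitions:

```python
#!/usr/bin/env python
# Sketch of the server-side node: subscribe to 'map_update' and forward
# each message over a Bluetooth RFCOMM socket. The MapUpdate fields, the
# PyBluez library, and the JSON packing are assumptions for this sketch.
import json
import bluetooth  # PyBluez (assumed; the actual Bluetooth stack may differ)
import rospy
from autovot_msgs.msg import MapUpdate  # hypothetical package/message name

def main():
    rospy.init_node("bt_map_server")

    # Wait for the follower car to connect over RFCOMM.
    server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    server.bind(("", bluetooth.PORT_ANY))
    server.listen(1)
    client_sock, addr = server.accept()
    rospy.loginfo("Follower connected from %s", addr)

    def on_map_update(msg):
        # Serialize and send; the client node republishes on its side.
        client_sock.send(json.dumps({"x": msg.x, "y": msg.y}).encode())

    rospy.Subscriber("map_update", MapUpdate, on_map_update)
    rospy.spin()

if __name__ == "__main__":
    main()
```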

Team Status Report for 04/10/2021

This week our team focused on perfecting and finalising the individual subsystems to make integration go smoother. Joel finalised and updated the actual vehicle specifications and characteristics, and amended the odometry functionality of the vehicles. Construction of the remaining vehicles will be completed at a later date once integration begins. Joel and Fausto have both begun porting their modules over to ROS and developing a broadcast system to make communication between the subnodes easier.

Fausto also began preparing the communication protocol and establishing limits on how far apart the vehicles can be before the connection breaks. He found that the connection is extremely robust to distance: once the cars establish a connection at the beginning, that connection is highly unlikely to be broken. Additionally, Fausto has begun researching more about ROS and figuring out the best way to design his module so that it integrates seamlessly on the ROS platform. This will eventually allow for a more fluid integration, as he can help integrate the object detection node into the ROS platform next week.

On the object detection front, Jeffrey began finalising the pipeline and integrating the Intel RealSense camera with object detection. Rather than using RGB-D image values, only RGB will be inputted to MobileNet v2, and the resulting bounding box, with additional padding, will be redrawn onto the depth map to extract the distance of the object from the car and determine the angle at which to turn the wheels in order to avoid the obstacle. The depth map with the bounding box will also be broadcast to the localisation node to further develop the map of the environment.

Looking ahead, we want to begin integration next week and hopefully have an integrated product for the interim demos. In case integration runs into issues, our subsystems are ready to demo on their own and are essentially finalised. This makes integration our main priority, and we will also begin to further develop our communication protocol next week.

Joel’s Status Report for 04/10/2021

The focus for this week was to complete the design of the vehicles with the added battery holder and cutouts for the optocoupler IR units used for odometry. With this design completed, I was able to make a final vehicle model out of acrylic, which will be used in the actual task. I decided to delay the fabrication of the other vehicles in order to test the current one for potential design oversights, as well as to work on other components of the project. Since the vehicles don’t take too long to make, this should not be an issue moving forward.

Another thing I worked on this week was writing the wheel encoder code for both the Arduino Pro Micro and ROS. As of right now we are computing the RPM for both the left and right rear wheels. The RPM calculation is done on the Arduino and sent over serial to the Jetson, where a ROS node publishes the data to a topic. We may consider expanding to 4 active encoders instead of 2 for better accuracy, but we have to be wary of the I/O requirements and a potential slowdown in sampling speed on the Arduino.
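
A sketch of what the Jetson-side node boils down to is below; the serial port, baud rate, line format ("left_rpm,right_rpm"), and topic name are assumptions for the sketch:

```python
#!/usr/bin/env python
# Sketch of the Jetson-side encoder node: read RPM pairs sent by the
# Arduino over serial and publish them to a ROS topic. Port, baud rate,
# line format, and topic name are assumptions, not our exact values.
import rospy
import serial
from std_msgs.msg import Float32MultiArray

def main():
    rospy.init_node("wheel_encoder_node")
    pub = rospy.Publisher("wheel_rpm", Float32MultiArray, queue_size=10)
    port = serial.Serial("/dev/ttyACM0", 115200, timeout=1.0)

    while not rospy.is_shutdown():
        line = port.readline().decode(errors="ignore").strip()
        try:
            left_rpm, right_rpm = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip empty or malformed lines
        pub.publish(Float32MultiArray(data=[left_rpm, right_rpm]))

if __name__ == "__main__":
    main()
```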

Lastly, I worked on preparing more ROS code for the RC control portion of the project. This involves sending commands to the Arduino Uno motor control system over serial after receiving update messages. This component works now; however, we still need to integrate the IMU for orientation data to create a feedback loop that adjusts the car’s motion to match the desired path.
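
That feedback loop is essentially a heading controller. A minimal proportional version is sketched below; the gain, units, and command mapping are illustrative and would need tuning on the track:

```python
# Minimal proportional heading-correction sketch for the planned feedback
# loop: compare IMU yaw against the desired heading and produce a steering
# adjustment. The gain and angle units are illustrative, not tuned values.
import math

KP = 1.5  # proportional gain (placeholder; would be tuned on the track)

def heading_correction(desired_yaw, measured_yaw):
    """Return a steering adjustment from the wrapped heading error (rad)."""
    error = math.atan2(math.sin(desired_yaw - measured_yaw),
                       math.cos(desired_yaw - measured_yaw))
    return KP * error

# Example: the car has drifted 10 degrees left of the desired path.
print(heading_correction(0.0, math.radians(-10)))
```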

Moving forward, we need to develop the localization and path planning in a ROS environment and connect the various systems together. Most of the localization/path planning work will involve input from the object detection portion of the project. I will be working with Jeffrey to design and implement this portion of the project.

By the end of next week we should have completed our subsystem demos and have most of the core functionality complete. The core task moving forward will be integration and tuning.

Fausto’s Status Update [04/10/2021]

This week I improved the Bluetooth communication protocol by having it constantly look for a connection between Autovots. Now, if one device is not found immediately (for example, if the device is out of range), the connection will be formed as soon as the device is discovered. Additionally, I performed some stress tests and found that once a Bluetooth connection was formed, it was not easily broken (even when we moved far away or behind walls). For the purposes of our project, we are comfortable knowing that the Bluetooth connection will stay up throughout the drive on the track (since the connection would only break at distances more than twice our track length). Also, a connection is still formed between moving Autovots within a range of 4 meters, comfortably (in open spaces).
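
The retry behavior amounts to scanning until the target adapter appears, then connecting. A rough sketch, assuming PyBluez (the actual library, device name, and port may differ):

```python
# Sketch of the client-side retry loop: scan until the lead car's adapter
# is discovered, then connect over RFCOMM. PyBluez is assumed; the target
# name and port are hypothetical placeholders.
import time
import bluetooth

TARGET_NAME = "autovot-lead"  # hypothetical adapter name
PORT = 1

def connect_when_discovered():
    while True:
        for addr, name in bluetooth.discover_devices(lookup_names=True):
            if name == TARGET_NAME:
                sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                sock.connect((addr, PORT))
                return sock
        time.sleep(1.0)  # device not in range yet; scan again

sock = connect_when_discovered()
```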

Additionally, I have begun researching ROS in depth, working through tutorials, and I hope to integrate communication within ROS soon. I have also developed a rough sketch and description of what our ROS architecture will look like (note that this is the first sketch, so it is brief and subject to modification).

Jeffrey’s Status Report for 04/10/2021

This week I finalised and began compiling together the Python script that does all the object detection. I also did more robust testing of the distance at which an object could be consistently detected. I found that, using a small cardboard box, the laptop camera could routinely detect the object at ~40 cm away, which is well within our requirements. This was done with OpenCV and a pretrained MobileNet v2 running on the laptop camera feed. There are issues with latency; however, I believe this has to do with the laptop camera and is not a bottleneck in the actual model. Once the object was detected, I found a way to draw the bounding box onto the image with additional padding to ensure the entire object is safely encompassed by the bounding box.

Furthermore, I also began integration with the Intel RealSense camera. There were originally a lot of issues with the RealSense documentation, but the object detection system should be fully functioning as of now. The specifications are as follows: the module takes in the raw RGB camera feed from the Intel RealSense, passes it through MobileNet v2 to extract the bounding box, and redraws the bounding box information on the depth map. From there we can approximate the width of the object by looking at the edges of the bounding box, and use arctan to find the angle at which to move the car to avoid the obstacle. We can also use the depth map to update the map of the course in order to facilitate localisation of the car.
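
To make the arctan step concrete, a simplified version of the angle computation is sketched below; the field of view, image width, and clearance margin are placeholders rather than calibrated values:

```python
# Simplified sketch of the avoidance-angle computation: project the padded
# bounding-box edge into a lateral offset using the camera's horizontal
# FOV, then take arctan of offset over depth. FOV, image width, and the
# clearance margin are placeholders, not calibrated values.
import math

HFOV = math.radians(69.0)  # assumed horizontal field of view
IMG_WIDTH = 640            # image width in pixels (placeholder)
MARGIN = 0.10              # extra lateral clearance in meters (placeholder)

def avoidance_angle(bbox_edge_px, depth_m):
    """Angle (radians) to steer so the car clears the bounding-box edge."""
    # Pixel offset of the box edge from the image center.
    offset_px = bbox_edge_px - IMG_WIDTH / 2.0
    # Approximate lateral distance of that edge at the object's depth.
    lateral_m = depth_m * math.tan(offset_px / IMG_WIDTH * HFOV)
    # Steer past the edge plus a safety margin.
    return math.atan2(lateral_m + MARGIN, depth_m)

# Example: box edge at pixel 420, object 0.8 m ahead.
print(math.degrees(avoidance_angle(420, 0.8)))
```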

For next week, we want to begin integration, porting my module over to ROS and integrating it with the car. There is still work to be done to figure out the optimal angle at which to mount the RealSense camera to maximise object detection.

Team Status Report for 04/03/2021

This week our team made great progress on the individual components of our project. Many of our hardware parts arrived, so we were able to do a lot of hands-on work with them; time was devoted to installing the proper drivers and setting up the new hardware. Soon we will enter the integration stage of our project, but next week we will be tying up loose ends in the specific project areas we each worked on (detection, mechanics, communication).

In regards to our detection system for the lead car, we experimented with the Intel RealSense camera (previously we had been testing our detection algorithm using OpenCV and our laptop camera). We were able to extract RGB values and use them to classify the objects detected. Additionally, we tested robustness, namely how moving the camera around affects the image feed and the processing speed of MobileNet v2, and the observed behavior is meeting our expectations.
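
For reference, pulling RGB frames from the RealSense through the pyrealsense2 wrapper follows this general pattern (the stream resolution, format, and frame rate below are illustrative, not necessarily what we settled on):

```python
# Sketch of grabbing RGB frames from the Intel RealSense via pyrealsense2;
# stream resolution, format, and frame rate are illustrative choices.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    color_frame = frames.get_color_frame()
    # Convert to a numpy array that can be fed to MobileNet v2 / OpenCV.
    img = np.asanyarray(color_frame.get_data())
    print(img.shape)  # (480, 640, 3)
finally:
    pipeline.stop()
```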

In regards to our communication architecture, we were pleased to find that after obtaining Bluetooth adapters that integrated successfully with the Jetson Nanos, a connection was established easily. We found that a stable connection is maintained within 3 m and that once a connection was formed it did not break (we still need to verify this holds when both Jetsons are moving, with further tests in the upcoming week). We found useful information about the Bluetooth connection and will work on making the Bluetooth software between boards as robust as possible.

In regards to the vehicle mechanics, we received the new batteries and the UBEC device to power the Jetson, and we are pleased that they last more than an hour under a stress test! Additionally, a new CAD design was made to accommodate the new hardware components. The IMU component we are using gives us a sub-optimal sense of displacement and localization, so we will be adding optical wheel encoders to the 2 rear wheels to improve our vehicle localization. The drive system is almost done, along with refined code for the vehicle (the serial communication now works such that the Jetson can signal the vehicle motors on how to move).

In the upcoming week we are looking to ensure our specific project areas are functioning as expected and are robust, in order to begin integrating components.


Joel’s Status Report for 04/03/2021

This week was focused on completing the drive system for the vehicle. This involved writing some more refined code for the Arduino and Jetson interface. The serial communication for this portion of the project now works, so the Jetson can tell the vehicle how to move.

This week I also worked on integrating the IMU into the system for localization, as well as coming up with a way of doing odometry. As far as the IMU goes, I have been able to develop code that provides displacement; however, as anticipated, there is considerable noise in the system. I used a few techniques to mitigate this noise and found that, with a sampling rate of 20 Hz, the IMU would be about 20 cm off within 1 minute of running. This is somewhat reasonable but not optimal, so we turn to odometry for slightly better results. In particular, we want to integrate an optical wheel encoder on each of the two rear wheels of the vehicle to determine its displacement. By measuring the RPM we should have a good idea of displacement, and since this encoding method only provides speed and not direction, we can rely on the IMU gyroscope data to provide the direction.
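
The combined scheme reduces to standard dead reckoning: encoder RPM supplies the speed magnitude, the gyro supplies heading, and both are integrated each sample. A sketch with placeholder constants:

```python
# Dead-reckoning sketch for the planned odometry: encoder RPM gives the
# speed magnitude, the IMU gyro gives heading, and both are integrated at
# a fixed rate. Wheel radius and the 20 Hz step are placeholder constants.
import math

WHEEL_RADIUS = 0.03  # meters (placeholder)
DT = 0.05            # 20 Hz sampling period

def step(state, rpm, gyro_z):
    """Advance (x, y, theta) by one sample of encoder RPM and gyro rate."""
    x, y, theta = state
    v = rpm * 2.0 * math.pi / 60.0 * WHEEL_RADIUS  # RPM -> m/s
    theta += gyro_z * DT                           # gyro supplies direction
    x += v * math.cos(theta) * DT
    y += v * math.sin(theta) * DT
    return (x, y, theta)

state = (0.0, 0.0, 0.0)
state = step(state, rpm=120.0, gyro_z=0.1)
print(state)
```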

The new batteries also came this week, and we were able to run a stress test on the system. This included using the UBEC device to power the Jetson. For the stress test I ran the motors at full speed while running some high-intensity GPU benchmarks on the Jetson; the battery was able to last longer than 1 hour while doing this. This solution worked well, so we can reliably use just one battery for our vehicle.

With the addition of the new batteries and the planned odometry, I also had to update the CAD model to accommodate the design. I designed a holder to keep the batteries in place, and I still need to add mounts and cutouts for the optocoupler encoder devices.

Lastly, I took the car for a test run on the track with some boxes to see how the driving would work in the track environment. I found that the turns are a little slow due to the higher resistance of the track surface. We may have to account for this by turning early in the path planning.

Next week we will try to have the vehicles done, including the camera mounts and all the design portions touched on above. I also want to transition fully from the vehicle mechanics to the communication and path planning/localization.