Jeffrey’s Status Report 3/4/21

This week I focused on fine-tuning the object detection algorithm and began writing the planning algorithm. I had a little trouble downloading all the drivers for the Intel RealSense camera, but managed to get everything installed properly by the middle of the week. After that I experimented with extracting the RGB frame values and using OpenCV to process them into something that MobileNet v2 can use to make object classification and detection decisions. I then tested the robustness of the Intel camera by moving it around the room to check for jitter in the feed and to determine the fastest rate at which we can sample from the camera while still maintaining clear images. I found that we can sample faster than MobileNet v2 can process an image, which makes our lives easier going forward. Starting tomorrow, I will hook the Intel camera up to MobileNet v2 to see what the real-time object detection looks like. After that we can start integrating, and I can begin determining heuristic values for the planning algorithm.
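As a rough sketch of the frame-preprocessing step described above (the exact resize method and normalization are my assumptions; MobileNet v2 conventionally takes 224×224 RGB inputs scaled to [-1, 1], and a real pipeline would use cv2.resize on the RealSense frames):

```python
import numpy as np

def preprocess_for_mobilenet(frame_bgr, size=224):
    """Resize a BGR camera frame and scale it into MobileNet v2's input range."""
    h, w, _ = frame_bgr.shape
    # Nearest-neighbour resize via index selection; the real pipeline
    # would use cv2.resize on the RealSense RGB frame instead
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = frame_bgr[ys][:, xs]
    # BGR -> RGB, then scale pixel values from [0, 255] to [-1, 1]
    rgb = resized[..., ::-1].astype(np.float32)
    return rgb / 127.5 - 1.0
```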

Additionally, we plan to meet up to figure out what the obstacles will look like and generate realistic feeds from the car to further determine the robustness of the object detection algorithm.

Honestly what does it even matter, you work so hard only to have Jalen Suggs hit a half court shot to win the game.


Team Status Report for 03/27/2021

This week everyone on the team was able to get the respective parts needed for their portion of the project. Last week we discovered that we would have power issues given our previous specs, so this week was focused on correcting some of those issues. With some research and testing we settled on 2S 7.4V LiPo batteries for our unified power delivery system. We also switched from L298N bridges to the TB6612FNG for our motor controller, giving us a much more effective motor control design.

For vehicle mechanics, we are mostly done with the mockup vehicle and have a few more systems to finalize before we make the real vehicles. Next week we are planning to go to the track with the mock vehicle to test potential fixes to our current track parameters and to do some odometry/localization testing.

For object detection, we have started to work with MobileNetV2 on laptops with small-scale examples to simulate what we will see on the track. For now we need to focus on meeting our specs for object detection, and moving forward we will gather more realistic data to tune the model with. For performance, we are considering a few options to ensure we meet our targets; for example, we may reduce the number of floating-point bits from 32 to 16.
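To illustrate the FP32→FP16 idea (a minimal sketch, not the team's actual deployment path; on the Jetson this would typically be done through TensorRT, and the weight scale here is an assumed, typical conv-weight distribution):

```python
import numpy as np

# Simulate casting a layer's FP32 weights to FP16 and measure the worst-case
# rounding error. The 0.05 standard deviation is an assumed, typical scale
# for convolution weights; real model weights would be loaded instead.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

# Halved memory footprint, small per-weight rounding error
max_err = float(np.abs(weights_fp32.astype(np.float64)
                       - weights_fp16.astype(np.float64)).max())
```

For weights at this scale the worst-case rounding error stays well below the weight magnitudes, which is why FP16 inference usually costs little accuracy while halving memory traffic.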

For communication, we have started framing the Bluetooth comms architecture and are able to send bytes between the Jetson and another computer. Moving forward we need to scale the comms up to handle more than one connection and keep connections alive for a more realistic testing scenario. Over the next couple of weeks we hope to be able to test Jetson-to-Jetson connections.
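Since RFCOMM delivers an unstructured byte stream, one piece that scaling up will likely need is message framing. A minimal sketch (the 2-byte length prefix is my assumption, not the team's chosen wire format):

```python
import struct

def frame_message(payload: bytes) -> bytes:
    # Prefix each message with a 2-byte big-endian length so the receiver
    # can recover message boundaries from the raw Bluetooth byte stream
    return struct.pack(">H", len(payload)) + payload

def parse_frames(buf: bytes):
    """Split a receive buffer into complete messages plus leftover bytes."""
    messages = []
    while len(buf) >= 2:
        (n,) = struct.unpack(">H", buf[:2])
        if len(buf) < 2 + n:
            break  # wait for the rest of this frame to arrive
        messages.append(buf[2:2 + n])
        buf = buf[2 + n:]
    return messages, buf
```

The same parser works regardless of how the stream is chunked by the radio, which matters once more than one connection is live.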


Joel’s Status Report for 03/27/2021

This week I worked on getting the main mechanics of the car completed. This included a lot of testing of the power systems and the motor response. One thing I found was that our original power delivery system with a 9V battery would not be enough to drive the 4 motors. This led me to research some solutions that could help alleviate these issues; namely, we had to address the efficiency of the motor shields as well as the need for higher battery capacity.

The L298N drivers for the motors draw up to 2A of current at max capacity with a voltage drop of about 3.6V, which means the driver is only about 50-60% efficient. For this issue I decided to move to a new solution using the TB6612FNG-based Adafruit motor controller. This MOSFET-based design is much more efficient, at around 90-95%. This means the motors see much higher voltages overall, and it also alleviates some of the power draw issues.
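The efficiency comparison above can be sanity-checked with a little arithmetic (the TB6612FNG on-resistance and per-motor current below are my assumptions for illustration; the 3.6V L298N drop is the figure from the report):

```python
V_BATT = 7.4          # volts, 2S LiPo nominal voltage
L298N_DROP = 3.6      # volts, total drop across the L298N's BJT bridge at high current
TB6612_RDSON = 0.5    # ohms, assumed combined MOSFET on-resistance (illustrative)
I_MOTOR = 1.0         # amps per motor, assumed load for the comparison

# Fraction of the battery voltage that actually reaches the motor
eff_l298n = (V_BATT - L298N_DROP) / V_BATT
eff_tb6612 = (V_BATT - I_MOTOR * TB6612_RDSON) / V_BATT
```

With a fixed ~3.6V drop, the L298N wastes roughly half the pack voltage at 7.4V, while a MOSFET bridge's loss scales with current and stays small, matching the 50-60% vs 90-95% figures in the report.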

I also looked into new battery solutions for the vehicle. Both the motors and the compute system were too power hungry for our original specifications. For this task we needed a solution that would provide high capacity, a high discharge rate, and relatively low weight and volume. This led us to choose 2S 7.4V LiPo batteries with 2200mAh capacity. With this we are able to provide sufficient voltage to the motors as well as the Jetson Nano from a single battery. Notably, as the Jetson requires 5V, we also needed a universal battery eliminator circuit (UBEC) to step down to 5V. This solution is yet to be tested, but based on our research we are confident it should suffice for the project.
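A back-of-the-envelope runtime estimate for this pack (the Jetson and motor power draws and the UBEC efficiency below are my assumptions, not measured values from the report):

```python
V_NOMINAL = 7.4      # volts, 2S LiPo nominal voltage
CAPACITY_AH = 2.2    # 2200 mAh pack capacity
JETSON_W = 10.0      # watts, assumed worst-case Jetson Nano draw
MOTOR_W = 8.0        # watts, assumed combined draw of the four motors
UBEC_EFF = 0.85      # assumed UBEC conversion efficiency for the 5V rail

pack_wh = V_NOMINAL * CAPACITY_AH                      # stored energy, ~16.3 Wh
# Jetson power is drawn through the UBEC, so divide by its efficiency
runtime_h = pack_wh / (JETSON_W / UBEC_EFF + MOTOR_W)
runtime_min = runtime_h * 60
```

Under these assumptions the pack gives somewhere under an hour of combined drive-and-compute time, which is worth verifying once the UBEC solution is tested.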

Lastly, I was able to build a ‘development’ vehicle to let us begin testing the track parameters and anything else that may require a mock vehicle, such as gathering data for object detection. The vehicle is able to move with all 4 motors, but the Jetson-to-Arduino command system is still in development. Below is a quick look at what the vehicle looks like:

For next week I need to finish developing the control system to work with the Jetson. Once this is done and I have tested the power delivery, we should be good to go with making the final vehicles with acrylic.


Fausto’s Status Update [03/27/2021]

This week I received the NVIDIA Jetson Nano 2GB, set it up, and familiarized myself with the Jetson Nano working environment. In regards to communication, each Jetson will be discoverable to other Bluetooth devices with a naming heuristic starting with “autovot” followed by a number from 1 to 3. After a connection is formed between the lead car and trailing cars, the lead car will be able to convey information which the other cars can use for localization, so they can avoid obstacles. Originally we were planning on sending full instructions on how to maneuver around obstacles, but now we will convey information to trailing cars so they can develop a map which they can use to plan a path around obstacles. An action item we have is testing Bluetooth reliability while the cars are moving, in order to ensure trailing cars can reliably receive information from the lead car.
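The "autovot" naming heuristic can be captured as a tiny filter applied to discovered device names (a sketch under the assumption that the names are exactly "autovot" plus the digit, with no separator):

```python
import re

# Convoy members advertise as "autovot<N>" with N in 1..3 (e.g. "autovot1")
CONVOY_NAME = re.compile(r"^autovot([1-3])$")

def convoy_id(device_name):
    """Return the car number if a discovered Bluetooth name matches the
    naming heuristic, else None (device is not part of our convoy)."""
    m = CONVOY_NAME.match(device_name)
    return int(m.group(1)) if m else None
```

During discovery, any device whose name yields None would simply be ignored when forming the lead/trailing connections.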

Jeffrey’s Status Report 27/3/21

This week I mainly focused on the planning aspect of the project. I had to figure out how to use the bounding boxes created by the object detection algorithm to determine what angle to set the wheels to so that the car would avoid the obstacle. The issue is that the bounding box around the object does not contain depth information, so we don’t know how far away the object is. We could assume that the object is a short distance from the car, but this would lead to wide turns around the obstacles and may hurt planning if the course is dense with obstacles. Therefore we needed some way to approximate the horizontal distance necessary to avoid the obstacle. Once this is determined, we can use the depth map from the Intel RealSense to create a right-angled triangle with depth and horizontal distance as the legs; the angle between the hypotenuse and the depth leg is the angle necessary to turn the wheels.

In the diagram above, I let D1 and D2 be the distances from the centre of the image to the edges of the bounding box. I figured that if we scale these distances linearly, i.e. apply some affine transformation to them, we can approximate the horizontal distance necessary. This works since the planning algorithm will only account for the left/rightmost bounding box when making decisions, and since the closest obstacle will be detected at a range of around 0.2m, we can approximate a transformation such that the horizontal distance will not be too far off the true distance and the vehicle will never collide with the obstacle.
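The triangle construction above boils down to an arctangent plus the affine pixel-to-metres approximation. A minimal sketch (the scale and offset constants are placeholder values to be tuned on the track, not numbers from the report):

```python
import math

def steering_angle_deg(depth_m, horizontal_m):
    # Right-angled triangle: legs are the forward depth (from the RealSense
    # depth map) and the horizontal clearance; the wheel angle is the angle
    # between the hypotenuse and the depth leg
    return math.degrees(math.atan2(horizontal_m, depth_m))

def horizontal_clearance_m(d_pixels, scale=0.001, offset=0.05):
    # Affine approximation from the bounding-box pixel distance (D1 or D2)
    # to metres; scale and offset are hypothetical values to be tuned
    return scale * d_pixels + offset
```

For example, an obstacle 1m ahead needing 1m of lateral clearance gives a 45° wheel angle, and the angle grows as the obstacle gets closer, matching the intuition that near obstacles force sharper turns.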

For next week I will go back to object detection, continue experimenting with MobileNet on my laptop, and try to integrate MobileNet with the Intel RealSense camera.

Joel’s Status Update for 03/13/2021

This week I focused on getting the work done for the mechanical system of the vehicle. I was also able to finalize the parts order and complete the CAD modeling of the vehicle systems. Most of the other work this week was on getting the system to allow the car to move. This meant writing a good deal of the Arduino code and testing the circuits for the motors and the motor shield. As it stands, the code is able to control all 4 motors with a discrete space defining the direction of the vehicle. In essence, the system controls the motors based on an angle and velocity provided by an external system (the compute system) over serial communication. The angles map to various controls that direct the car in the direction of the angle from the horizontal. This is done by specifying a byte over serial in [0, 127] which maps to an angle on the unit circle. The velocity is currently fixed, but in the future I would like to add the ability to specify the velocity.
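A host-side sketch of the byte encoding, assuming the [0, 127] range maps uniformly onto [0°, 360°) (the report doesn't specify the exact mapping, so this quantization is an assumption; on the Jetson the byte would then be written to the Arduino over serial, e.g. with pyserial):

```python
def angle_to_byte(angle_deg):
    # Quantize an angle on the unit circle to a command byte in [0, 127];
    # with 128 steps each command step covers 360/128 = 2.8125 degrees
    return int(round((angle_deg % 360.0) / 360.0 * 128)) % 128

def byte_to_angle(b):
    # Inverse mapping used by the receiving side to recover the angle
    return (b % 128) * 360.0 / 128.0
```

The 2.8° resolution seems fine for steering commands; adding a second byte for velocity would extend the same scheme once variable speed is supported.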

Since the parts only arrived today I was unable to start the building process; however, I will attempt to get at least one car built by the end of Monday this coming week.

As far as next week’s plans, I would like to test my implementation of the vehicle control system with an actual built car and tweak as necessary. I think it would also be a good idea to re-parametrize our final testing metrics/specifications based on how the car performs on a simple track test. In conjunction with this, I would also like to start development on the wireless communication platform; more specifically, developing a framework to allow the various portions of the project to coordinate in a ROS environment.



Team Status Report 14/3/21

For this week, we mostly reviewed our feedback from the design review presentation, adjusted our scope and design a little, and started implementing small parts. We finalised the CAD model for our RC cars, and the parts arrived late this week, so we will begin assembly this weekend and into next week. After that we will begin testing our RC car design using the metrics we outlined in our design and proposal presentations. On the object detection front, we have multiple candidate algorithms that we will implement and experiment with locally first, and we hope to port them onto the Jetson hardware once it arrives. We will also start experimenting with and developing the communication protocols for the V2V communication. We still need to work out how to incorporate the IMU data into the communication, and how exactly we will lead the following car to ensure it stays on track and doesn’t drift into any of the obstacles.

Moving forward, we hope to have the cars assembled and begin testing by the end of next week, as well as tinkering and ironing out the details in the object detection algorithm so we can focus on a control algorithm for planning and figuring out some of the convoy mechanics.

Fausto’s Status Update [03/13/2021]

This week was a bit hectic for me, so I did not accomplish as much as I had originally hoped. I was expecting to have tested basic Bluetooth communication between 2 different devices, but I currently just have boilerplate code set up. Fortunately, most of our hardware has arrived, so next week we can all work on our physical cars. I will continue working to establish a Bluetooth connection between 2 devices, and once we have our RC cars moving, we can think about how to send instructions from the lead car to the trailing cars.

Team Update for 03/06/2021

This week, our team made great progress in narrowing down technical design choices for our project. Last week we decided to replace the LIDAR sensor on our lead car with a PS4 camera to simplify the object detection, and this week our team acquired the camera and will soon set it up for use on our RC car. Additionally, in regards to the hardware for our RC cars, the CAD work has just about been finished, so we will be able to construct the car soon. We have also decided how the motor system will be set up for the car’s turning mechanism; we will use two L298N H-bridges to control the front and rear motor pairs, which will hopefully give us the agility in turning that we want.

We will be constructing up to 3 cars (the lead car being equipped with a camera for object detection). The lead car plays a huge role in our project, so we will begin working on object detection as soon as possible. This week we read up on an R-CNN algorithm that uses RGB-D information for state-of-the-art object detection, so we will experiment with that. This means we will want to set up the PS4 camera next week to see what we’re working with.

This week consisted of a lot of research for our team, and served as an awesome week for feeling out the scope of our project and getting more comfortable with the technologies we will be working on. We worked hard to come up with metrics we want to hit for our project and worked on the upcoming presentation. As we lay the groundwork for our project, we will also start thinking more deeply on tunable/modifiable design choices (for example, what type of information to send between cars: raw data, instructions, etc.).

Fausto’s Status Update [03-06-2021]

This week, I read up on Bluetooth technology so that next week I can begin programming communication between Bluetooth devices. I mainly read a PDF from MIT on Bluetooth programming (https://people.csail.mit.edu/rudolph/Teaching/Articles/BTBook.pdf) which contained a lot of interesting details about how Bluetooth works and some intricacies of the technology. The PDF details how to create sockets so that data can be transmitted, and includes information on piconets (ad hoc networks of up to 8 active devices, where one acts as a master and the others act as slaves). One thing I didn’t know about Bluetooth is that an actively communicating Bluetooth device changes channels every 625 microseconds, so devices within a system have to sync their channel changes (I am sure the Bluetooth hardware handles this, but it was interesting to learn about). Next steps are: setting up basic communications between Bluetooth devices, and getting a better idea of what information we want to transmit between our RC cars.