Fausto’s Status Update [04/03/2021]

This week, my team and I met in person to work on Jetson-to-Jetson communication. One Jetson acted as the master node and the other acted as a trailing node (i.e., the RC car that will follow the lead car). We made the cars discoverable via Bluetooth and set up a pluggable Bluetooth adapter for the trailing node. We had ordered two different adapters online; one of them did not work with the Linux system on the car, so we will have to return it, but the other one works, so we can simply order another unit of the working adapter. After making the boards discoverable, we practiced by sending brief messages (like “Hello world”) and also tried sending streams of data to the lead node.
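A minimal sketch of this link, assuming PyBluez (the Python `bluetooth` module) over an RFCOMM socket; the lead node's MAC address and the port number below are placeholders, and our actual stack may differ:

```python
# Sketch of the Jetson-to-Jetson Bluetooth link, assuming PyBluez over RFCOMM.
import bluetooth

LEAD_MAC = "AA:BB:CC:DD:EE:FF"  # placeholder address of the lead Jetson
PORT = 1                        # placeholder RFCOMM channel

def run_lead():
    """Lead (master) node: wait for the trailing car to connect."""
    server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    server.bind(("", PORT))
    server.listen(1)
    client, addr = server.accept()          # blocks until the trailer connects
    print("Connected:", addr)
    print("Received:", client.recv(1024).decode())
    client.close()
    server.close()

def run_trailer():
    """Trailing node: connect to the lead car and send a test message."""
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((LEAD_MAC, PORT))
    sock.send(b"Hello world")
    sock.close()
```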

We found that a minimum delay of 8 ms between messages was needed to keep the communication from crashing. This is perfectly fine for us, since we will not be sending messages in such rapid succession; as of now, we plan to send a message roughly every 250 ms. Additionally, with our current setup, communication was reliable within about 3 m; beyond that range, a connection was not guaranteed to be established. As of now, the messages we send are in JSON format, and we use a dictionary to encode and decode the fields. In the coming weeks I will work on reliability and make the communication process more robust by adding error handling and retry logic for when a device is not found.
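A sketch of what the sending loop could look like under this plan: JSON-encoded dictionaries every 250 ms, with simple retry logic if the lead node is not found. This again assumes PyBluez, and the message fields (speed, steering) are hypothetical placeholders, not our final schema:

```python
# Sketch of the trailing node's message loop: JSON dicts every 250 ms,
# with retries on connect instead of crashing if the lead node is away.
import json
import time
import bluetooth

LEAD_MAC = "AA:BB:CC:DD:EE:FF"  # placeholder lead-node address
PORT = 1
SEND_INTERVAL = 0.250           # 250 ms between messages, well above the 8 ms floor
MAX_RETRIES = 5

def connect_with_retry():
    """Try the connection a few times before giving up."""
    for attempt in range(MAX_RETRIES):
        try:
            sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
            sock.connect((LEAD_MAC, PORT))
            return sock
        except bluetooth.BluetoothError:
            time.sleep(1.0)     # back off before retrying
    raise RuntimeError("Lead node not found after %d attempts" % MAX_RETRIES)

sock = connect_with_retry()
while True:
    # Hypothetical fields; the real schema is still being decided.
    message = {"speed": 0.5, "steering": 0.0, "timestamp": time.time()}
    sock.send(json.dumps(message).encode())  # dict -> JSON bytes on the wire
    time.sleep(SEND_INTERVAL)
```

On the receiving side, the lead node would decode each payload back into a dictionary with `json.loads(data.decode())`.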

Jeffrey’s Status Report [3/4/21]

This week I focused on fine-tuning the object detection algorithm and began writing the planning algorithm. I had a little trouble downloading all the drivers for the Intel RealSense camera, but managed to get everything installed properly by the middle of the week. After that, I experimented with extracting the RGB footage and using OpenCV to process it into something that MobileNet v2 can use for object classification and detection. I then tested the robustness of the Intel camera by moving it around the room and checking for jitter in the feed, in order to determine the fastest rate at which we can sample from the camera while still getting clear images. I found that we can sample faster than MobileNet v2 can process a frame, which makes our lives easier going forward. Starting tomorrow, I will hook the Intel camera up to MobileNet v2 to see what real-time object detection looks like. After that we can start integrating, and I can begin determining heuristic values for the planning algorithm.
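A rough sketch of the capture-and-preprocess path, assuming the pyrealsense2 and OpenCV Python bindings. The 224x224 input size and [-1, 1] pixel scaling follow MobileNet v2's standard classifier preprocessing; a detection variant (e.g., SSD MobileNet v2) may expect a different input size:

```python
# Sketch: pull RGB frames from the RealSense and preprocess for MobileNet v2.
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # color stream, 30 fps
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        bgr = np.asanyarray(color_frame.get_data())       # raw BGR frame from the camera
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
        resized = cv2.resize(rgb, (224, 224))             # MobileNet v2 classifier input size
        batch = resized.astype(np.float32) / 127.5 - 1.0  # scale pixels to [-1, 1]
        batch = np.expand_dims(batch, axis=0)             # add batch dimension
        # `batch` is now ready to feed to the MobileNet v2 model
finally:
    pipeline.stop()
```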

Additionally, we plan to meet up to figure out what the obstacles will look like and generate realistic feeds from the car to further determine the robustness of the object detection algorithm.

Honestly, what does it even matter? You work so hard, only to have Jalen Suggs hit a half-court shot to win the game.