Alejandro’s Status Report for 4/12

This week I mainly focused on getting the path sequencing to work with the motors. I restructured the Arduino code around a switch-case state machine with four main states. There is still a bug: when the motors are told to move to a specific trash-bin coordinate, they move to the origin instead. Additionally, I implemented the button and speed-control frontend and backend on the web app portion of our system, and I got the web app to receive the bounding box coordinates from the Jetson. Now I just need to ensure that the Arduino can take the data relayed from the Jetson and execute it correctly.
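The switch-case control flow can be sketched roughly as below. This is a minimal illustration, not the actual sketch: the state names and transition conditions are hypothetical stand-ins for the four states described above.

```cpp
#include <cstdint>

// Hypothetical state names -- the real sketch's four states may differ.
enum State : uint8_t { IDLE, MOVE_TO_OBJECT, PICK_UP, MOVE_TO_BIN };

State state = IDLE;

// One pass of the control loop: advance the state machine based on
// whether a command arrived and whether the gantry reached its target.
// Returns the state after the transition so callers can observe progress.
State stepStateMachine(bool commandReceived, bool atTarget) {
  switch (state) {
    case IDLE:
      if (commandReceived) state = MOVE_TO_OBJECT;
      break;
    case MOVE_TO_OBJECT:
      if (atTarget) state = PICK_UP;
      break;
    case PICK_UP:
      state = MOVE_TO_BIN;     // once the object is gripped, head to the bin
      break;
    case MOVE_TO_BIN:
      if (atTarget) state = IDLE;  // drop-off complete, wait for next command
      break;
  }
  return state;
}
```

On the Arduino, `stepStateMachine` would be called once per `loop()` iteration so the motors keep receiving commands between transitions.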

As for verification of the measurements, I intend to run the pick-up and drop sequence without suction and evaluate it on time. If the gantry can reach a location and complete the entire pick-up and drop-off execution (excluding the suction) within 15 seconds, then I know that the path sequencing component works as per the design requirements on my end. Additionally, I will test the latency of the web app video feed by recording a timestamp on the Jetson when it sends data to my server and another timestamp when the server receives it. Taking the difference of the two times gives the latency, which I will compare against the use case requirement.
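The latency analysis described above reduces to a timestamp difference. A minimal sketch, assuming both clocks report milliseconds since the same epoch (i.e., the Jetson and server clocks are synchronized, e.g. via NTP); the 200 ms bound is a placeholder, since the exact use case number isn't restated here:

```cpp
#include <cstdint>

// Latency is the receive timestamp minus the send timestamp, assuming
// both are milliseconds since the same epoch on synchronized clocks.
int64_t latencyMs(int64_t sentMs, int64_t receivedMs) {
  return receivedMs - sentMs;
}

// Check one sample against a latency bound (200 ms is a placeholder,
// not the project's actual requirement).
bool meetsRequirement(int64_t sentMs, int64_t receivedMs, int64_t boundMs) {
  return latencyMs(sentMs, receivedMs) <= boundMs;
}
```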

My plans for the future are to fix the path sequencing code and finalize the canvas drawing on the front end of the web app. I also plan to code the controls for the conveyor belt and to verify that the Arduino receives the commands the Jetson relays from the web app interface.

I am currently on schedule.

Alejandro’s Status Report for 3/29

I spent the majority of the week figuring out how to increase the speed of the motors controlled by the Arduino and fixing the vibration issue that occurred with them. The gantry's movement in the xy plane involved a considerable amount of vibration and was slow. To fix this, I modified the C++ code for the Arduino so that step commands were sent to the motors more frequently, which keeps the stepper motors from momentarily stopping after each step of their movement. Additionally, I started implementing the code to control the Z axis, in this case the end effector motor. We don't have a third cable to control that motor, so I have to wait for it to arrive before I can verify that it works in conjunction with the other two motors of the gantry system.
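The fix above amounts to non-blocking stepping: instead of delaying between steps, check elapsed time every loop iteration and pulse only when the interval has passed, so the pulse train stays continuous. A sketch of that idea (the struct and names are illustrative; on the Arduino, `nowUs` would come from `micros()` and the step action would toggle the STEP pin):

```cpp
#include <cstdint>

// Emits a step whenever the step interval has elapsed, rather than
// blocking with delay() between steps. Calling update() on every loop()
// pass keeps the pulse train continuous, so the stepper never
// decelerates to a stop between steps (the source of the vibration).
struct StepperDriver {
  uint32_t stepIntervalUs;    // time between step pulses (sets the speed)
  uint32_t lastStepUs = 0;
  long stepsTaken = 0;

  // nowUs would come from micros() on the Arduino.
  bool update(uint32_t nowUs) {
    if (nowUs - lastStepUs >= stepIntervalUs) {
      lastStepUs = nowUs;
      ++stepsTaken;           // real code would toggle the STEP pin here
      return true;
    }
    return false;
  }
};
```

The unsigned subtraction `nowUs - lastStepUs` also behaves correctly across `micros()` rollover.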

My progress is currently on schedule.

For next week, I plan on finishing the code to execute a complete pick-up and drop sequence for specific trash types. I also plan on programming the conveyor belt to move once it is built.

Team Status Report for 3/22

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk to our project is getting the machine learning model to run at close to real-time speed. To manage this, we plan to make a thorough analysis of the speed and accuracy tradeoffs until we reach the best balance between the two. The next greatest risk is the gantry system failing to move fast enough to handle the objects moving along the conveyor belt. As a contingency, the web app will have a speed-control feature to slow the conveyor belt down when too many objects are being passed through for sorting. This would allow our robot enough time to sort all of the items being passed through without having to skip any.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There are no current changes made to the existing design of the system.

Provide an updated schedule if changes have occurred.

There are no schedule changes as of now.

Alejandro’s Status Report for 3/22

I spent the majority of the week programming the Arduino to control the stepper motors that drive the gantry's xy movement. I spent a significant amount of time figuring out how to command each stepper motor to reach a specific spot on the xy coordinate plane; I can now compute a target location on the xy plane and move the end effector to it. The C++ code lives in a single Arduino file that controls all of the movement and stepper motors. Additionally, I fixed the issue with the web app streaming by moving the server to run in PowerShell, avoiding the port confusion and firewall issues that came from running it under WSL on my Windows machine.

My progress is currently on schedule.

For next week, I plan on writing the code to translate the coordinates received from the Jetson into the values needed for the gantry's motors, which is nontrivial due to its unique pulley configuration. Additionally, I'll work on controlling the end effector once it has physically been added to the system.
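If the pulley configuration is CoreXY-style (an assumption -- the report doesn't name the belt routing), both motors contribute to motion on both axes, and the translation is a sum/difference of the axis displacements. A sketch with a placeholder calibration constant:

```cpp
#include <cmath>

// Assuming a CoreXY-style belt routing (hypothetical -- the actual pulley
// configuration may differ), the two motors move as:
//   motor A steps = x + y,   motor B steps = x - y   (in step units).
const double STEPS_PER_MM = 80.0;  // placeholder calibration value

struct MotorSteps { long a, b; };

MotorSteps xyToSteps(double xMm, double yMm) {
  double x = xMm * STEPS_PER_MM;
  double y = yMm * STEPS_PER_MM;
  return { std::lround(x + y), std::lround(x - y) };
}
```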

Ethan’s Status Report for 3/22

This week, I started full dataset training for the YOLOv8 OBB model. I implemented the loss function with three tunable hyperparameters that weight the bounding box regression loss, the classification loss, and the angle loss. The intent was to add transparency to the unified loss calculation (the sum of the three losses above): if one loss runs too high, I can adjust how heavily it is penalized. This should let us better control the model's convergence and enable a checkpointing scheme that saves the best model for each of the four loss types (unified, regression, classification, and angle) to retrain from later. While the model trained, I started creating a Docker image for the Jetson to make porting over the model easier.
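The unified loss described above is a weighted sum of the three components. A sketch of that arithmetic (the training code itself is presumably not C++; the struct and names here are illustrative only):

```cpp
// Three tunable weights scale the bounding-box regression,
// classification, and angle losses before summing into the unified loss.
// Raising a weight penalizes that component's error more heavily.
struct LossWeights { double box, cls, angle; };

double unifiedLoss(double boxLoss, double clsLoss, double angleLoss,
                   const LossWeights& w) {
  return w.box * boxLoss + w.cls * clsLoss + w.angle * angleLoss;
}
```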

Next week, I will work on visualization code that plots the predicted bounding box and class on static images in a clean manner (the core logic of this will be scaled out later for the Jetson).

Currently, I am on schedule.

Alejandro’s Status Report for 3/15

I spent the majority of this week working on reconnecting the Jetson to the web server, since it no longer connects to my WSL server.js instance. I spent a considerable amount of time trying different ports and connection methods. First, I tried resetting the firewall privileges on my laptop, but that did not work. I then attempted to change the port forwarding on my device to point at the WSL port, but that also failed. The Jetson can ping my laptop but no longer connects to the server. If I am unable to get the WebSocket working again as I had it before, we may have to shift to a tethered wired connection for the time being. Additionally, the canvas drawing feature is implemented, and the gantry xy axis is now fully assembled.

In light of this obstacle, I plan to try once more to get the WebSocket connection working; if that fails, I will shift to a wired connection. I will also test the canvas drawing with some test values next week.

I am currently on schedule aside from the minor hiccup.


Ethan’s Status Report for 3/1

The majority of effort this week was spent finding the bugs in our YOLO codebase that made last week's results look so poor. I discovered the culprit was the loss function: I had been using a more naive approach that combined a weighted mean-square loss for the bounding boxes with a weighted cross-entropy loss for classification. After reading a couple of Medium articles about YOLO, I realized I had been implementing an entirely different loss function. Once I fixed that, I also added a bit more training infrastructure to make training analysis easier: curve plotting for each loss. By monitoring the losses, I can identify parts of the model that need tuning in future runs.

Next week, I plan on getting detection working on toy images.

Currently I am on schedule.

Alejandro’s Status Report for 3/1

I spent the majority of the week working on the assembly of the x-y axis of the gantry system. The assembly was time-consuming and laborious, and it still requires a few more steps that will take a considerable amount of time; I will finish it the week I get back from Spring break. My progress is slightly behind schedule, so I will prioritize the assembly of the gantry's xy axis in the following week of work. Additionally, I experimented with drawing on a canvas over the camera feed, but I need more time to solidify the feature, which I will finish promptly after assembling the gantry. So next week I intend to complete the gantry xy axis and finish the implementation of my bounding box HTML feature.

Alejandro’s Status Report for 2/22

I successfully established the WebSocket connection between the Jetson Orin Nano and my laptop's localhost server. I then configured my server.js to stream the video served over the WebSocket connection to my web app, and I had to change the index.html code to properly display the streamed video. After several hours of debugging, I finally got the web app to stream video. Now, the web app displays the feed from the camera connected to the Jetson Orin Nano. Here's an image of the video feed (showing me) displayed on the web app.

I am currently slightly behind in that my team had planned to assemble the xy axis of the gantry system this week; however, my team ordered two 3 ft rods when we expected two 6 ft rods. The other two 3 ft steel rods should arrive next week. To make up for this delay, I'll take a day next week to assemble the xy axis of the gantry system.

Next week, I also need to implement box shading in the web app's HTML to highlight identified objects on the camera feed. And as previously stated, I will assemble the gantry system; at the very least, I'll assemble half of it in case the other two steel rods do not arrive in time.

[Chart: Bytes sent over WebSocket between the Jetson and the server]

Ethan’s Status Report for 2/22

This week, I was able to finish the training infrastructure for the YOLOv8 OBB model. Now that I can train the model, I need a verification strategy to determine that the model was implemented correctly before I do full batch training. I decided to train the model on a single basic image (I am defining basic as an image where the object is close up and on top of a distinct background). After training on this image for a significant number of epochs, I found that the detected bounding box was completely off. Currently, I believe something went wrong in the model's OBB detection head, and I spent the majority of my time this week trying to verify this assumption.

Next week, I plan on getting detection working on this toy image and hopefully training using the entire dataset and analyzing the results from there.

Currently, I am on schedule.