Aichen’s Status Report for 3/25/2023

This week, the vast majority of my effort went into setting up and debugging the camera. After getting inference running on the Jetson, we started setting up the camera (a MIPI camera connected to the Jetson). The camera could be “found” by the system, but we could not capture from it, whether through command-line tools or through the cv2 module. After the first camera got burned out, we switched to the backup camera, which ran into the same trouble. After a few hours of debugging, we decided to pursue different routes. First, I am currently setting up the backup Jetson Nano to see if it works with a USB camera. Second, if that works, we will also try it with the MIPI camera. If that works too, the problem likely lies with the Jetson Xavier, and we would have to set up the YOLO environment on the Jetson Nano.
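For context, below is roughly the kind of capture sanity check we ran through the cv2 module. This is a minimal sketch, not our actual debugging script: the GStreamer pipeline shown is the usual way to open a MIPI/CSI camera on a Jetson, and the resolution and framerate values are just illustrative.

```python
import cv2

# Standard GStreamer pipeline for a CSI camera on Jetson; parameters are illustrative.
GST_PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("opened:", cap.isOpened(), "captured:", ok)  # in our failing setup, capture never succeeds
cap.release()
```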

Schedule-wise, the camera issues are a blocker for me, but we will do the best we can to get the camera subsystem functioning.

There are some code updates in the GitHub repo (same link as posted before).

Besides that, I worked with Vasudha to get the “communication” between the Jetson and the Arduino working. I wrote a Python script (which will later run on the Jetson) that sends serial data to the Arduino. After the whole team’s work, our hardware circuit can now behave based on the serial input it receives.
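As a sketch of the Python sender side (the device path and baud rate here are assumptions; on a Jetson the Arduino typically enumerates as a USB serial device such as /dev/ttyACM0):

```python
import serial  # pyserial
import time

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port and baud rate
time.sleep(2)  # give the Arduino a moment to reset after the port opens

ser.write(b"1")  # e.g. "1" = open the door, "0" = stay closed
ser.close()
```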

Vasudha’s Status Report for 03/18/23

Last week, I focused more on helping out with the software side and worked on debugging our 1305 GPU setup and code. Since a lot of libraries were missing and our code was in Jupyter notebook format rather than a Python file, I went through installing what was missing, fixing dependencies, and trying to get the code to run. This setup is a backup plan to the Google Colab setup that Aichen and Ting are working on, and since the hardware components have arrived, I will be focusing more on the hardware setup to catch up with our schedule. However, I am still working on the software debugging, and hopefully we will figure out one of these solutions before the end of next week so that we can then migrate the code to the Jetson.

On the hardware side, some of our parts finally arrived. I started setting up the Arduino and its respective components, and hope to have the basic design fully set up and operating before integrating the mechanical aspect. After attempting to connect and operate the NeoPixels, I found that the type we purchased does not match the simulated component: it uses SPI communication, and therefore requires a different library and an extra pin connection for the clock. As a result, the corresponding simulated program did not work as intended and will need to be modified.

In terms of schedule, while we are quite behind on hardware due to the delayed parts, we caught up a little on the software side, and we plan to devote more time to the hardware this week.

As mentioned, next week I plan on finishing the hardware component assembly and the implementation of the basic design, and on finishing ordering parts for our final mechanical design (which we modified once again after discussion with the professor).


Ting’s Status Report for 3/18/23

This week I got the training to run on GCP. I created a non-CMU account and received $300 of free GCP credit, which I used to train for 10 and then 20 epochs. The code runs much faster, with 10 epochs taking only 5 minutes. The training accuracy after 20 epochs is over 90% for all four types of drinking waste.
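For reference, this is roughly how a training run is kicked off: a minimal sketch assuming the standard ultralytics/yolov5 checkout, run from the repo root, where the dataset YAML name is a placeholder for our drinking-waste config.

```python
import train  # yolov5's train.py, importable from inside the yolov5 repo

train.run(
    data="drinking_waste.yaml",  # placeholder name for our dataset config
    weights="yolov5s.pt",        # start from the pretrained small model
    epochs=20,
    imgsz=640,
)
```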

I also started setting up the Jetson. I worked on setting up an instance to use with Colab, and was able to connect Colab to the local GPU. But after some roadblocks when I tried to run inference on a random picture of a bottle, and a conversation with Prof. Tamal, we realized that we can just run inference from the terminal, which we will try next week. We also connected the camera to the Jetson, and will work on the code to feed snapshots from the live camera stream into the ML inference.
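A sketch of what script-based inference could look like, using YOLOv5’s torch.hub interface (the weights path and test image here are placeholders):

```python
import torch

# Load our custom-trained weights through YOLOv5's hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
results = model("bottle.jpg")  # placeholder test image
results.print()                # prints class labels and confidence values
```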

I believe that in terms of the software and hardware coding, we are on track. We are slightly behind on the Jetson and camera integration; having it working this week would have been better. We are definitely behind on the mechanical portion, as our materials have not arrived yet. We plan to digitize a detailed sketch-up with measurements to send to Prof. Fedder.

Figure: Our accuracy is over 90% for all four types of drinking waste after training for 20 epochs.

Figure: An example of YOLOv5 labeling items and giving confidence values; they are mostly above 90%, which is past our agreed threshold.

Aichen’s Status Report for 3/18/2023

This week, after finishing the ethics assignment, I started setting up the Jetson. There were a few hiccups, such as a missing micro SD card reader and the very long download of the “image” (basically the OS for the Jetson). However, my team and the staff have all been very helpful: Ting and I fully set up the Jetson by Thursday, and we were able to run inference on it on Friday.

I also helped with debugging as we migrated the code to GPU on both Google Colab and the HH machines. Both now work, and the training results proved to exceed expectations.

On Friday, we connected the CSI camera to the Jetson, and I am currently working on capturing images with the CSI camera and integrating them with the detection code I wrote earlier. On Monday, we will test this part on the Jetson directly. If everything stays on track, the Jetson will be fully integrated with the software subsystem by the interim demo, which is about two weeks away.

For working with the Jetson’s camera, I am planning to use the NanoCamera package from PyPI, implemented in Python 3. I believe this is the best choice, as our detection and classification code is all written in Python and the Jetson naturally supports Python. Here’s a link to the module: https://pypi.org/project/nanocamera/
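A minimal sketch of reading frames with NanoCamera, based on the module’s documented interface (the resolution and framerate values are illustrative):

```python
import nanocamera as nano

camera = nano.Camera(flip=0, width=640, height=480, fps=30)  # CSI camera by default
while camera.isReady():
    frame = camera.read()  # a numpy BGR frame, ready to pass to the detection code
    # ... run detection & classification on frame ...
    break                  # single-frame smoke test
camera.release()
```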


Team Status Report for 3/18/2023

This week, we started by getting YOLO to run on GPU on both Google Colab and the HH machines. Alongside that, we also started setting up the Jetson. Once the YOLO code ran on Google Colab, we trained for 20 epochs; the results are shown in the images at the end of this report. In short, the model exceeded an accuracy of 90% for all four types without fine-tuning. Using the weights learned during training, we did test runs of inference on the Jetson, which also ran successfully to completion. The reason we decided to pursue both tracks is that GCP is quite expensive: 10 epochs cost $40. Fortunately, we were able to reach a decent accuracy for now with 20 epochs.

Of the images shown at the end of this report, the first one shows a sample result where an item is detected and classified. The text shows the classification result, and the number is the confidence level of the classification. Our code can also run on the HH machines now, but because it is already working on the Jetson, we decided to pause that route for now.

Our biggest risk still lies in our limited experience with mechanical design. However, we have communicated this concern to the staff in our meetings, and we have professors and TAs who can help us along the way, especially with the woodwork. For now, we are working on design graphics with specific measurements and will share them with the staff soon. Besides that, no design changes were made.

Due to delayed shipping of parts, we are slightly behind on building the hardware. Besides that, the software system and its integration with the Jetson are going fine. We plan to finish the implementation of detection and classification on the Jetson within two weeks, and to make sure basic communication between the Jetson and the Arduino works by the interim demo. For communication, the Arduino needs to receive, in real time, a boolean value that the Jetson sends; the Serial module (same name on the Arduino and in Python) will be used for that.

Vasudha’s Status Report for 03/11/23

This past week, I mainly worked on finishing the report and once again tuning the mechanical design based on feedback from the TA and professor. For the report, I modified and consolidated our diagrams according to feedback to show both our use-case requirements and design requirements, and created tables to organize some of the information in the report. The updated design can be found here: https://drive.google.com/file/d/15j3AFNqRnFr1QK4KOhniSD_vvZ2-SMlD/view?usp=sharing

I also worked on defining the abstract, the design architecture, and the information related to the materials, hardware, and mechanical aspects of the project, and helped with formatting to make sure all parts of the report flowed together.

Earlier in the week, Ting and I met with Professor Mukherjee to set up the environment we need to work on our project on the 1305 GPU machines. Using XQuartz, we set up PyTorch with Miniconda and created a shared AFS folder to let all of our members access the project files. This setup process is something we had to learn in order to use the GPUs to train our model.

The mechanical design and building aspect is something we will need to keep learning about in order to properly execute the bin itself. During class, I worked with Ting to finalize the mechanical measurements for the bin, given the newer setup with an axle. After feedback during Wednesday’s meeting, we found that the design we had come up with (all acrylic, with a clamp holding the dowel on the lid frame and the other end glued/drilled into the door) was mechanically weak due to the thinness of the material and the limited space we were attempting to drill into (i.e., height and depth). We then decided to redesign the bin with as little change as possible, to avoid having to switch materials: this time with a wooden lid frame with support legs, while keeping the bin and the acrylic door. This way we can drill into the frame more securely and avoid resting the weight of the entire frame on the plastic bin below. I then looked into the new materials we need for this implementation.

In terms of progress, we are behind on the mechanical and hardware portions since our ordered parts have yet to arrive. However, we have simulated the hardware and hope that the parts will arrive by the end of spring break so that we can begin building.

The week after spring break, I hope to have started the mechanical and hardware building, and to finish ordering any remaining parts for the new design so that we can catch up to our original schedule.

Ting’s Status Report for 3/11/23

This week we got the YOLO code to successfully run through, and we were able to start training. This is a great development for us, as we had been working to decipher the file structure to get the code to run for a couple of weeks. Training was very slow, though, since it was running on CPU, with one epoch taking almost 30 minutes. We will use the GPUs on the ECE machines to run the code; Vasudha and I met with Prof. Mukherjee on 3/2 to get FastX set up to be able to do that. If that does not work, I will use my leftover GCP credits to train. We are on track in terms of the software, and the next step after break is to start mass training, getting in as many epochs as we can within whatever GPU limits we have.

We further improved the mechanical design of our structure, discussing with the professor and TA a wooden frame that would encase the bin. As we wrote the design document, we decided to have two designs, one of which is a backup: if the wooden frame design is too hard to woodwork, we will have the lid overhang the rest of the structure, serving as handles on the sides. We spent most of the week working on the design document. We are slightly behind on the mechanical aspect; hopefully the materials we ordered will arrive by the time we are back from break, and we will start construction when the team returns. New tools for us include the GPUs on the ECE machines and learning how to run our Python Jupyter notebook on them.

Team Status Report for 3/11/23

Great news during the last week before spring break: the model can finally run through training! On Friday, Ting and Vasudha met with Tamal to set up the environment on the HH 1305 machines so we can run YOLO on GPU, which will make training a lot faster. Coming back from spring break, we will run training for 150 epochs (or more) and start fine-tuning with the additional datasets we have collected.

We have also decided to switch from the Jetson Nano to the Jetson Xavier NX after studying their capabilities. We determined that the Xavier’s computational power, measured in FPS (frames per second) on the GPU, is more suitable for running a CV model as complicated as YOLO.

We have also finalized our mechanical design (mostly the material and structure of the lid and the swinging door) after discussion at the staff meeting. Beyond that, most of our work went into finishing the design report. We pinned down details of the core components mentioned in the design presentation, and did more research to justify our design decisions. Overall, the report is an all-inclusive overview of our design and experimentation process over the first half of this semester.

Our biggest risk is once again the mechanical construction of the lid, due to our lack of knowledge in the area. To mitigate the risk, we have spent three hours a week researching and redesigning to improve our design. This past week, after talking to the TA and professor, we updated the design once again, this time with a wooden lid frame supported by four legs, from which the trash can can be easily removed for emptying. Here is a picture of our latest design: https://drive.google.com/file/d/1kVl_is8CKyrQoq8ftapcW93fXAPjJfYD/view?usp=drivesdk

As contingency plans, we have kept several backup designs that can be adapted if our main bin design fails. Regarding construction itself, we found that the TechSpark workers can possibly help us cut the materials we need, lowering the overall risk as well. With our model now able to run through training, we are on track in terms of software, but still a little behind mechanically due to the materials not yet arriving, as well as our recent changes in design.

Another risk is not being able to get the code running on the 1305 machines. We are mitigating this by setting up GCP credits.

Next week, our goal is to train for as many epochs as we can on the 1305 machines. We also plan to start constructing our structure.


Q: As you’ve now established a set of subsystems necessary to implement your project, what new tools have your team determined will be necessary for you to learn to be able to accomplish these tasks?

  1. Right after the break, we will start setting up the Jetson and deploy our detection & classification code to it. So working with the Jetson and the CSI camera attached to it will be a major task.
  2. We will also migrate the ML model from Google Colab to run on the HH 1305 ECE machines. We will use FastX as our new tool. 
  3. Although not immediately, once integration with the Jetson is on track, we will also start some mechanical work, such as laser cutting, to build the mechanical section of Dr. Green.

Aichen’s Status Report for 3/11/23

Good news before spring break! After I wrote scripts to process the labeling files as Ting and I had discussed, our model is finally able to run through the whole training process! Using Google Colab without GPU, training a single epoch took more than half an hour (while the default number of epochs is 150). Therefore, we have worked on using the HH 1305 machines as well as GCP credits to accelerate the training process. Once we are running on GPU, training will be done quickly, and we will use backup datasets to practice fine-tuning.
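To give a flavor of what the label-processing scripts do, here is a sketch of the core conversion. The input format shown (“class xmin ymin xmax ymax” in pixels) is an assumption for illustration; YOLO expects one normalized “class x_center y_center width height” line per object.

```python
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate bounding box to YOLO's normalized format."""
    x_c = (xmin + xmax) / 2 / img_w
    y_c = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

print(to_yolo(100, 50, 300, 450, 640, 480))  # -> (0.3125, 0.5208..., 0.3125, 0.8333...)
```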

As we were starting to set up the Jetson, we realized that the Jetson Xavier NX (paired with a microSD memory card) is a better choice in terms of computing power than the Jetson Nano we had chosen before. After brief research, we decided to switch. After spring break, we will set up the Jetson and camera, and deploy the image capture and (change) detection code to the Jetson.

Besides that, I spent most of my time this week on the design doc. Alas, simply moving content from a Google Doc to the LaTeX version took a long time. On Friday (Mar 3rd), as I write this, I have just finished two hours of proofreading and reformatting, and we are finally ready to submit.

In the coming weeks, setting up the Jetson and integrating what we have so far will be a major task. As none of us has worked with a Jetson extensively before, there will be challenges. Either way, I am excited to bring our work from computers into reality.


Link to code (updates are mostly to the scripts organizing training data):

https://github.com/AichenYao/capstone-scripts


Vasudha’s Status Report for 02/25/23

This past week, I worked on finishing the slides for the design presentation, adding last-minute information regarding the hardware and mechanical design that I had been working on, and helped Ting practice (since she was the speaker for this phase). After this, I updated our materials list with the specifics needed after some design changes (e.g., a NeoPixel strip instead of a single pixel, the dowel structure, etc.). I also took time this week to look into the design report, taking the presentation feedback into consideration to better define the design and update the diagrams accordingly. I looked into fine-tuning the support for the swing door after realizing that our last plan had the door held up and controlled solely by a single servo. After looking into potential axle setups and talking to our TA about how we could implement this, we decided to go with a dowel supporting the ends of the door, keeping other designs in mind in case this support is not enough (e.g., gluing an axle to the bottom of the door and having it rest on a loop connected to each side of the main frame, or getting a thicker lid material so that we can drill further into it). I updated the circuit simulation to reflect the hardware component changes. Additionally, I planned out a more detailed schematic for the mechanical build, drawing out where connections with the hardware would need to be made and then looking up possible implementations (screw placement, frame dimensions, placement, etc.).

As mentioned last week, my progress is now slightly behind on the hardware/mechanical side due to the fact that our materials order was placed quite late. To account for this, I spent more time on simulation, on improving our current design, and on the design report, to save time and be more prepared for when the parts actually arrive. With these proactive steps, hopefully the time spent on building will be reduced so that we can still stay on track.

As for actions next week, since our team has been struggling on the software side, I plan to help debug the model setup so that progress can be made while we wait for materials to arrive. Additionally, I plan to finish the report early in the week so that more time can be spent on the technical end before spring break.