Aichen’s Status Report for 04/08/2023

For the first half of this week, I worked on integrating and testing the Jetson (and camera) with the Arduino. By the demo, feeding real-time pictures to the YOLO model, parsing the classification results, and sending them to the Arduino all ran smoothly. At this point, the integration between software and hardware is largely done.
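As an illustration of the parsing step, here is a minimal sketch that reads the label files detect.py writes when run with --save-txt; the directory layout and "newest file" logic here are my own illustrative choices, not our exact code:

```python
from pathlib import Path
from typing import Optional

def parse_latest_label(labels_dir: str = "runs/detect/exp/labels") -> Optional[int]:
    """Return the class id from the newest YOLOv5 label file, or None."""
    files = sorted(Path(labels_dir).glob("*.txt"), key=lambda p: p.stat().st_mtime)
    if not files:
        return None                       # the "no detection" case
    first_line = files[-1].read_text().splitlines()[0]
    return int(first_line.split()[0])     # YOLO label: class x y w h [conf]
```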

One major issue we found is that, due to the limited variety of our current training datasets, the model can barely detect or classify any item whose shape is not close to a bottle's. Since we are getting far more "no detection" results than we expected, the major next step is to train on datasets containing items of all the other types.

After the demo, I also integrated the "FSM" implementation so that YOLO only runs when a significant change happens under the camera. This change saves resources and avoids unexpected classifications (i.e., when the platform is not "ready"). When I tested on my laptop, the code worked fine, but there were still some capture issues when I tried it on the Jetson on Friday, which might be due to the USB camera. Ting, Samuel, and I thought through a number of other possibilities, and my best guess now is that the USB camera needs a longer turnaround time between two captures, so I will use a wait loop before each capture instead of the fixed sleep time I use now. This was not an issue before because each capture was followed by an inference call, which takes much longer than the current sleep times. More debugging will be done on Sunday and Monday.
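To make the wait-loop idea concrete, here is a minimal sketch of what I have in mind, assuming OpenCV and the USB camera at index 0; the timeout values and function name are illustrative, not our final code:

```python
import time
import cv2

def capture_frame(cap, timeout_s=5.0, poll_interval_s=0.1):
    """Poll the camera until it returns a valid frame, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ok, frame = cap.read()
        if ok and frame is not None:
            return frame
        time.sleep(poll_interval_s)   # give the camera time to recover
    return None                       # camera never became ready

cap = cv2.VideoCapture(0)             # USB camera at index 0 (assumption)
frame = capture_frame(cap)
```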

By next week, the FSM part should be fully integrated, and I will also help Ting train the model on a new dataset. That training will hopefully also be done within the next week.

Team Status Report for 4/8/2023

This week, our main tasks were prepping a basic setup for our demo, starting the mechanical build, further training our model, and improving our detection algorithm.


For our demo, we presented the overhead camera capturing images, our model detecting and classifying the objects in each image, and then sending a value that the hardware circuit acted upon. The hardware circuit included all the parts for the baseline: the NeoPixel strip, speaker, and servos. Two issues came up during the demo: the Arduino controlling the hardware kept switching ports, stopping the process midway, and the model did not detect anything that was not "recyclable" according to the model. Our plan for the next few weeks is to fix these two major issues by writing a script for the port-switching issue and by further training our model on a trash dataset so that we can detect both recyclables and trash. On the side, we will start the mechanical build so that everything is hopefully built by the deadline.
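One way the port-switching script could work is to scan for the Arduino's port at startup instead of hard-coding a device path. A sketch using pyserial, where the matching heuristics are assumptions on our part:

```python
import serial
import serial.tools.list_ports

def find_arduino_port() -> str:
    """Return the device path of the first port that looks like an Arduino."""
    for port in serial.tools.list_ports.comports():
        if "Arduino" in (port.description or "") or "ACM" in port.device:
            return port.device
    raise RuntimeError("no Arduino serial port found")

ser = serial.Serial(find_arduino_port(), baudrate=9600, timeout=1)
```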


Our biggest risk is once again the mechanical portion. This is mainly because all of the other requirements for our baseline already have at least a basic implementation running, while the mechanical side has yet to be physically built. However, we started working on it this past week once all the parts arrived, and our plan for managing this risk is to develop the remaining parts in parallel so that our overall integrated system operates better than it did during our interim demo.


In regards to our schedule, we realized that we need to train our model more and improve our detection algorithm, so we will allocate more time for that this week. The hardware, however, is pretty much on track. We are also on track for the mechanical build as per our latest schedule, but will need to make significant progress in the next week or two to have a working system.


Next week, our plan is to have our model detect both recyclables and waste, and to have the basic mechanical frame start taking shape so that we can begin integrating the hardware into it the following week. The FSM stage that gates YOLO inference is under testing now and should be working by next week.


While we completed some basic tests and integration for the interim demo, as we enter the verification and validation phase of the project we plan to run more individual tests on each of our subsystems and then test the overall integration.

Team Status Report 4/1/23

Our biggest risk from last week, being unable to set up the MIPI camera, was resolved by switching to a USB camera: after some cross-debugging, we decided to use an external USB camera instead. Since the switch on Sunday, we have been able to take pictures and run YOLO inference on them in real time. Our software progress on integrating camera capture with triggering detect.py (inference) has been pretty smooth, and we are almost caught up with the schedule. With most of the mechanical parts here, we were able to take some physical measurements to determine the height and angle at which the camera should be placed. The mechanical build is now our biggest risk: despite our rigorous planning, issues could still come up during implementation that force us to tweak the design as we go.
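For context, the capture-and-trigger integration amounts to something like the following sketch, assuming a local YOLOv5 checkout; the camera index, file names, and weights path are illustrative:

```python
import subprocess
import cv2

cap = cv2.VideoCapture(0)                 # the external USB camera
ok, frame = cap.read()
if ok:
    cv2.imwrite("capture.jpg", frame)     # persist the frame for inference
    subprocess.run(
        ["python", "detect.py",
         "--source", "capture.jpg",       # run inference on the saved frame
         "--weights", "best.pt"],         # fine-tuned weights (illustrative name)
        check=True,
    )
cap.release()
```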

One example of such a situation came up this past week, after the acrylic piece arrived. After comparing it to the plywood in TechSpark and the wood pieces the professor provided, we decided to go with the acrylic, as its size and thickness were a better fit for our bin. To reduce risk, we will test on scrap pieces before cutting into our acrylic piece, and if something still goes wrong in this process, our backup plan is to glue the TechSpark plywood pieces together to make them thicker and go with that.

Updated Design:

https://drive.google.com/file/d/1wiW573DBv5wk_PY8Wq1K_Vxar4fNopX1/view?usp=sharing

As of now, since the materials from our original order will be used, we do not have any additional costs except for the axle, shaft coupler, and clamps mentioned in previous updates. A big thanks to the professor for giving us some extra materials for our frame!


In terms of design, switching to a USB camera did not change any of the other components involved.

We talked to TechSpark workers, who said they could CAD up the design for us if we gave them a rough sketch, and that they could also help us cut the pieces using the laser cutters. Next week, we will work with them to start CADing the swinging-lid design.


Here is a picture of our new camera running the detection program. Next week we'll have to fine-tune the model some more, since the object shown here is obviously not a glass bottle.

For the demo, we will show the hardware components (servos, lights, speaker) as well as the webcam classification of a sample bottle.


Vasudha’s Status Report for 04/01/23

This week, I focused on making sure the integration between the Jetson and Arduino worked, and on finalizing the mechanical design so that we can officially start building this coming week. As of last week, I was only able to get one value sent from the Jetson and received by the Arduino. The Arduino responded correctly to the specific value it received; however, when testing a string of inputs from the Jetson, the Arduino seemed to only output the result of the first input, which pointed to some kind of buffer issue. I was eventually able to fix this, but the transmit-and-receive time is still a little too slow, which is something I continued working on for the rest of the week.

On the mechanical side, a few things happened. The acrylic piece we ordered for our previous door design arrived, the professor provided us with wood pieces for our frame, and I talked to people at TechSpark about purchasing plywood from them. The plywood pieces at TechSpark ended up being too small and thin for our purpose, so the acrylic looked like the better option, with some tweaking of the wooden pieces. With all of this in mind, I once again modified the mechanical design to use the acrylic piece as the lid and door, with wood pieces on each side for the axle to attach onto using clamps. The overall acrylic lid will be connected to the wooden frame. I made a few drawings (shown in the team report) and decided on measurements that we can provide to TechSpark next week so that we can start cutting our pieces and begin building.
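A rough sketch of the newline-delimited scheme that resolved the buffering problem, on the Jetson side; the port name, baud rate, and acknowledgment step are illustrative, and on the Arduino side this would pair with something like Serial.readStringUntil('\n'):

```python
import serial

ser = serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=2)  # illustrative port

def send_classification(code: int) -> str:
    """Send one integer code, newline-terminated, and wait for the reply."""
    ser.write(f"{code}\n".encode("ascii"))
    ser.flush()                                       # push the bytes out now
    return ser.readline().decode("ascii").strip()     # Arduino's acknowledgment
```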

We are on schedule and prepared for the interim demos this coming week, with the software and hardware subsystems operating with some success (though getting actual model output values, rather than a scripted value, sent from the Jetson still needs to be implemented). While there are a few issues with our implementation that we will need to go back and resolve (e.g., further fine-tuning, and getting outputs from the model in a format that can be sent to the Arduino), the individual parts are all showing some form of operation, which we hope to improve upon over the next few weeks as we continue to integrate.

My next steps for the following week are to successfully complete the demo, start the mechanical build, and help resolve the format issue regarding the model output and the Jetson-Arduino integration.

Ting’s Status Report 4/1/23

This week, after the CSI camera struggles, we were able to start the camera webstream using the USB camera. We were also able to run the YOLOv5 detection script with the webcam stream as input, where the script does the detection and draws bounding boxes in real time. Prof. Fedder brought in the wood that he cut for us, so we were able to test the camera height against the wood backing and see how big the platform would have to be for an appropriate field of view. We found that after the script identifies an object, it saves an image of the captured object with the bounding box and label. It turns out we just had to add a --save-txt flag to have the results saved, but that only saves the label and not the confidence value. On further inspection, there is also a confidence-threshold parameter we can set, so anything that classifies below that threshold doesn't get a bounding box at all.

We are pretty much on schedule in terms of the software; we just have to train the model further next week, since right now it classifies a plastic hand-sanitizer bottle as glass with 0.85 confidence. We will work with the TechSpark workers next week or after the interim demo to CAD up our lid design, and they also said they could help us use the laser cutter.
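For reference, the invocation this implies looks roughly like the sketch below. The flags are YOLOv5's, the weights path is an illustrative name, and --save-conf is, if I remember the YOLOv5 options correctly, the extra flag that records the missing confidence values in the saved label files:

```python
import subprocess

subprocess.run(
    ["python", "detect.py",
     "--weights", "best.pt",     # our fine-tuned checkpoint (illustrative)
     "--source", "0",            # webcam stream as the input
     "--conf-thres", "0.5",      # no bounding box below this confidence
     "--save-txt",               # save labels under runs/detect/exp*/labels
     "--save-conf"],             # also record each detection's confidence
    check=True,
)
```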