Vasudha’s Status Report for 04/08/23

This week, I spent time prepping for the interim demo and starting mechanical construction. For the demo, I focused on getting the hardware system ready. The main circuit was constructed and operational a few weeks back, so I spent my time making sure the integration of the Jetson and Arduino worked. I wrote a script for the Jetson to take the output of the model and send it to the Arduino, replacing the hard-coded test script I was using initially. As the demo neared, I began having issues with the Arduino port switching when multiple model outputs were sent in a stream (i.e., when trying to sort multiple rounds of waste). I spent some time researching and attempting to fix the issue, and will continue later this weekend. On the mechanical side, I finalized the remaining parts that we needed and put an order in early in the week. I then used the Techspark laser cutter to cut our acrylic frame and door. Once the ordered parts arrived, I assembled our axle structure and aligned it to the servo so it turned the right amount in the right direction.
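As a rough illustration, the Jetson-side send script can be as small as the sketch below. This is a minimal sketch rather than our exact code: the port name, baud rate, and class-to-code mapping are all assumptions.

```python
import serial  # pyserial

# Hypothetical mapping from model class labels to the one-byte codes
# the Arduino sketch expects; the real labels and codes may differ.
CLASS_TO_CODE = {"recyclable": b"1", "waste": b"0"}

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port and baud assumed

def send_result(label: str) -> None:
    """Send one model classification to the Arduino over serial."""
    ser.write(CLASS_TO_CODE[label])
    ser.flush()
```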

My progress overall aligns with the newer schedule. However, I might need to spend a little more time on the Jetson-Arduino integration: it currently works for one iteration before the port resets, and it needs to work reliably for multiple inputs. Mechanical construction has also begun, so I am on track there as well.

Next week, I plan to finish constructing a good portion of the main frame (making holes in the acrylic for the screws and hardware cables, and connecting the door to the frame) so that we can start integrating the hardware, and to see if I can solve the port issue.

In terms of testing, I have been conducting tests as each part of the hardware subsystem was built, mainly around operation accuracy (one of our most important requirements). I integrated each component (neopixels, speaker, and servos) one at a time, making sure it displayed the correct color, produced the right sound, or turned in the right direction. After this, I tested multiple components at once, first verifying that the color, sound, and directions were correct, then adding in delays, specifically for the servos, so that the correct lock unlocks before the main servo turns (so the door doesn't get stuck) and the lock closes after the door returns to place. After confirming that the combined component logic was correct for each model value the Arduino would receive (fulfilling the requirement that the correct alerts and door movement are produced), I started working on Jetson integration. First I tested that a correct hardcoded value was being sent over serial, then multiple values, then actual integration with the output of the model, as sketched below. As I work on the mechanical door, I will have to re-test and tune the angle and direction of the servo with the additional parts attached to it. Additionally, once the door works, I will increase or reduce the current delays so that the door operates properly and as fast as possible without getting stuck on the locks (another requirement).
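For reference, the staged serial tests can be scripted along the lines below. This is only a sketch: the byte codes and the 5-second settle time are placeholders, not our tuned values.

```python
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port and baud assumed

# Stage 1: a single hardcoded value, while watching that the light color,
# sound, and servo direction all match the class that was sent.
ser.write(b"1")
time.sleep(5)  # let the unlock -> turn -> return -> lock sequence finish

# Stage 2: a stream of values, one per sorting round.
for code in (b"1", b"0", b"1"):
    ser.write(code)
    time.sleep(5)  # settle time between rounds, to be tuned down later
```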

Going forward, once the system works accurately, I plan on measuring individual operation times and attempting to reduce them so that the overall system operation time goes down, since having our system complete each round in less than 5 seconds was one of our defined requirements. I will make sure the door operation takes less than 2-3 seconds, and that the overall circuit begins to operate within one second of model output.
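A rough timing harness might look like the following. It assumes the Arduino prints a line when a round completes, which is an assumption about the firmware rather than something it currently does.

```python
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=10)  # port and baud assumed

start = time.perf_counter()
ser.write(b"1")        # simulated model output for one round
done = ser.readline()  # assumes the Arduino reports completion with a line
elapsed = time.perf_counter() - start
print(f"round took {elapsed:.2f}s (budget: 5s total, 2-3s for the door)")
```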

Aichen’s Status Report for 04/08/2023

For the first half of this week, I worked on integration and testing of the Jetson (and camera) with the Arduino. By the demo, feeding real-time pictures to the YOLO model, parsing the classification results, and sending those to the Arduino could all run smoothly. So far, the integration between software and hardware is pretty much done.

One major issue we've found is that, due to the limited variety of our current training datasets, the model can barely detect or classify any item whose shape is not close to a bottle's. As we are getting far more "no detections" than we expected, the major next step is to train with more datasets containing items of all the other types.

After the demo, I also integrated the "FSM" implementation so that YOLO is only run when a significant change happens under the camera. This change saves resources and avoids unexpected classifications (i.e., when the platform is not "ready"). When I tested on my laptop, the code worked fine, but there were still some capture issues when I tried it on the Jetson on Friday, which might be due to the USB camera. Ting, Samuel, and I have thought through a lot of other possibilities, and my best guess now is that the USB camera needs a longer turnaround time between two captures, so I plan to use a wait loop before each capture instead of the fixed sleep time I use now. Previously this was not an issue, because each capture was followed by an inference call, which takes far longer than the current sleep times. More debugging will be done on Sunday and Monday.
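Concretely, the wait loop I have in mind is roughly the following. This is a sketch assuming OpenCV capture; the timeout and poll interval are guesses that would need tuning on the Jetson.

```python
import time
import cv2

def capture_when_ready(cap: cv2.VideoCapture, timeout: float = 5.0, poll: float = 0.1):
    """Poll the camera until it returns a valid frame, instead of
    sleeping a fixed amount of time between captures."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        ret, frame = cap.read()
        if ret and frame is not None:
            return frame
        time.sleep(poll)
    raise TimeoutError("camera produced no frame before the timeout")
```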

By next week, the FSM part should be fully integrated, and I will also help Ting with training on a new dataset. Training on the new dataset will hopefully be done within next week too.

Team Status Report for 4/8/2023

This week, the main tasks we focused on were prepping a basic setup for our demo, starting the mechanical build, further training our model, and improving our detection algorithm.


For our demo, we presented the overhead camera capturing images, our model detecting and classifying the objects in the image, and the resulting value being sent to the hardware circuit, which then acted on it. The hardware circuit included all the parts for the baseline: the neopixel strip, speaker, and servos. The issues we had during the demo were the Arduino controlling the hardware repeatedly switching ports, which stopped the process midway, and the model not detecting anything that it did not consider "recyclable." Our plan for the next few weeks is to fix these two major issues by writing a script to handle the port switching (sketched below) and by further training our model on a trash dataset so that it can detect both recyclables and waste. On the side, we will continue the mechanical build so that everything is hopefully built by the deadline.
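One plausible shape for the port-switching script is a helper that scans the available serial ports and reconnects to whichever one looks like the Arduino, rather than hardcoding a device name. The sketch below uses pyserial's port listing; the matching strings and baud rate are assumptions, not our final fix.

```python
import serial
from serial.tools import list_ports

def find_arduino_port() -> str:
    """Return the device path of the first port that looks like an Arduino."""
    for port in list_ports.comports():
        text = f"{port.description or ''} {port.manufacturer or ''}"
        if "Arduino" in text or "ttyACM" in port.device:
            return port.device
    raise RuntimeError("no Arduino-like serial port found")

ser = serial.Serial(find_arduino_port(), 9600, timeout=1)  # baud assumed
```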


Our biggest risk is once again the mechanical portion. This is mainly because all of the other baseline requirements already have at least a basic implementation running, while the mechanical aspect has yet to be physically built. However, we started working on this late this past week once all the parts arrived, and our plan to deal with this risk is to develop the remaining parts in parallel so that our overall integrated system operates better than it did during our interim demo.


In terms of our schedule, we realized that we needed to train our model more and improve our detection algorithm, so we will need to allocate more time for that this week. The hardware, however, is pretty much on track, and the mechanical build is on track as per our latest schedule, though we will need to make significant progress in the next one or two weeks to have a working system.


Next week, our plan is to have our model detect both recyclables and waste, and to have the basic mechanical frame start taking shape so that we can begin integrating the hardware with it the following week. Integration of the FSM part, which runs before YOLO inference, is under testing now and will be working by next week.


While we did complete some basic tests and integration for the interim demo, as we enter the verification and validation phase of the project we plan to run more individual tests on each of our subsystems, and then test the overall integration.

Team Status Report for 4/1/23

Our biggest risk from last week, being unable to set up the MIPI camera, was resolved by switching to an external USB camera after some cross debugging. Since the switch on Sunday, we have been able to take pictures and run YOLO inference on them in real time. Our software progress with respect to integrating camera capture and triggering detect.py (inference) has been pretty smooth, and we are almost caught up with the schedule. With most of the mechanical parts here, we were able to take some physical measurements to determine the height and angle at which the camera should be placed. The mechanical build is now our biggest risk: despite our rigorous planning, issues could occur during implementation that force us to tweak our design as we go.

One example of such a situation came up this past week after the acrylic piece arrived. After comparing it to the plywood in Techspark and the wood pieces the professor provided us with, we decided to go with the acrylic, as it was a better fit in size and thickness for our bin. To reduce risk, we will test on scrap pieces before cutting into our acrylic, and if something still goes wrong in this process, our backup plan is to glue together the plywood pieces in Techspark to make them thicker and go with that.

Updated Design:

https://drive.google.com/file/d/1wiW573DBv5wk_PY8Wq1K_Vxar4fNopX1/view?usp=sharing

As of now, since the materials from our original order will be used, we do not have any additional costs beyond the axle, shaft coupler, and clamps mentioned in previous updates. A big thanks to the professor for giving us some extra materials for our frame!


In terms of design, switching to a USB camera did not require changes to any of the other components involved.

We talked to Techspark workers, and they said they could CAD up the design for us if we gave them a rough sketch, and that they could also help us cut it using the laser cutters. Next week, we will work with them to start CADing up the swinging lid design.


Here is a picture of our new camera running the detection program. Next week we'll have to fine-tune the model some more, since the object shown is obviously not a glass bottle.

For the demo we will show the hardware components (servos, lights, speaker) as well as the webcam classification of a sample bottle.


Vasudha’s Status Report for 04/01/23

This week, I focused on making sure the integration between the Jetson and Arduino worked and on finalizing the mechanical design so that we can officially start building this following week. As of last week, I was only able to get one value sent from the Jetson and received by the Arduino. The Arduino was responding correctly to the specific value it received; however, when testing a string of inputs from the Jetson, the Arduino seemed to only output the result of the first input, which looked like a buffer issue. Eventually I was able to fix this; however, the transmit-and-receive time is a little too slow, which is something I continued to work on improving for the rest of the week. On the mechanical side, a few things happened: the acrylic piece that we had ordered for our previous door design arrived, the professor provided us with wood pieces for our frame, and I talked to people at Techspark about purchasing plywood from them. The plywood pieces at Techspark ended up being too small and thin for our purpose, so the acrylic looked like the better option, with some tweaking of the wooden pieces. With all of this in mind, I once again modified the mechanical design to use the acrylic piece as the lid and door, with wood pieces on each side to create a mount for the axle, which attaches using clamps. The overall acrylic lid will be connected to the wooden frame. I made a few drawings (shown in the team report) and decided on measurements that we can provide to Techspark next week so that we can start cutting out our pieces and start building.
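One common way to handle this kind of serial buffer issue (not necessarily the exact fix applied here) is to frame each message with a newline so the Arduino reads one complete value at a time, e.g. with Serial.readStringUntil('\n') on its side. A minimal Python-side sketch, with an assumed port and baud rate:

```python
import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port and baud assumed

def send_value(value: int) -> None:
    # Terminate each message with a newline so the receiver can detect
    # message boundaries instead of reading a bare, unframed byte stream.
    ser.write(f"{value}\n".encode())
    ser.flush()

for v in (1, 0, 1):  # a short stream of model outputs
    send_value(v)
```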

We are on schedule and prepared for the interim demos this following week, with the software and hardware subsystems operating somewhat successfully (though sending actual model output values to the Arduino, rather than values from a hard-coded test script on the Jetson, still needs to be implemented). While there are a few issues with our implementation that we will need to go back and resolve (e.g., further fine-tuning, and getting outputs from the model in a format that can be sent to the Arduino), the individual parts are showing some form of operation, which we hope to improve upon over the next few weeks as we continue to integrate.

My next steps for the following week are to successfully complete the demo, start the mechanical build, and help resolve the format issue with the model output and the Jetson+Arduino integration.

Ting’s Status Report 4/1/23

This week, after the CSI camera struggles, we were able to successfully start the camera stream using the USB camera. We were also able to run the YOLOv5 detection script with the webcam stream as the input, with the script doing the detection and drawing bounding boxes in real time. Prof. Fedder brought in the wood that he cut for us, so we were able to test the camera height against the wood backing and see how big the platform would have to be for an appropriate field of view. We found that after the script identifies an object, it saves an image of the captured object with the bounding box and label. It turns out we just had to add a --save-txt flag to have the results saved, but that only saved the label and not the confidence value. Upon further inspection, there is also a confidence threshold parameter we can set, so anything that classifies under that threshold doesn't get a bounding box at all. We are pretty much on schedule in terms of the software; we just have to train the model further next week, since right now it classifies a plastic hand-sanitizer bottle as glass with 0.85 confidence. We will work with Techspark workers next week or after the interim demo to CAD up our lid design, and they also said that they could help us use the laser cutter.
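For reference, an invocation with these flags might look like the sketch below (wrapped in Python for consistency with our other scripts). The weights filename and webcam index are assumptions; --save-conf is the YOLOv5 flag that also writes the confidence value into the saved txt files.

```python
import subprocess

# Run YOLOv5 detection on the webcam stream with per-image label files saved.
subprocess.run([
    "python", "detect.py",
    "--weights", "best.pt",   # placeholder name for our trained weights
    "--source", "0",          # webcam index 0
    "--conf-thres", "0.85",   # detections below this get no bounding box
    "--save-txt",             # save label files alongside the images
    "--save-conf",            # include the confidence in each label line
], check=True)
```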

Aichen’s Status Report for 04/01/2023

After some cross debugging (setting up the backup Jetson and trying different MIPI/USB cameras), we have finally sorted out the camera issue. On Sunday I was able to get the USB camera working and taking videos and images. On Monday the whole team double-checked, and we decided to switch to USB cameras for now. During labs, we took measurements to determine the height and angle of the camera. Ting and I also worked on integrating the camera code with the YOLO inference code (detect.py). We are able to capture images and trigger detect.py from the script that captures the camera images. We have done initial rounds of testing, and the results make sense. We are working on writing the image results to text files so that we can parse the detection results in code and pipeline them to the Arduino; we can already find the classifications and confidence values and direct them to text files. Right now, as I am writing on Friday, we are still working on the logic of outputting a single result when multiple items are identified. Overall, progress is much smoother this week compared to last. We have also done initial mechanical measurements to better prepare for the interim demo.
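The single-result logic could reduce a saved label file to its highest-confidence detection, roughly as sketched below. This assumes the YOLOv5 --save-conf label format, where each line ends with the confidence value.

```python
from pathlib import Path

def best_detection(label_file: Path):
    """Return (class_id, confidence) for the most confident detection,
    or None if the file contains no detections."""
    best = None
    for line in label_file.read_text().splitlines():
        parts = line.split()  # class x_center y_center width height confidence
        cls, conf = int(parts[0]), float(parts[-1])
        if best is None or conf > best[1]:
            best = (cls, conf)
    return best
```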

Ting’s Status Report for 3/25/23

This week, we were able to successfully migrate our scripts that were running on Colab to run on the Jetson. We tested with some random water bottle images, and the results were okay, but not great; they would still serve the purpose of our classification, since our threshold is 0.85. We spent a lot of time after this on getting the camera set up. It did not go as well as expected: we kept getting a black screen when we booted up the camera stream. We tried multiple different commands, but even though the Jetson recognized the camera, none of the commands could get the stream to show up. I also tried running a script using OpenCV (cv2), but that wasn't working either. I met with Samuel to debug, but he thought that we would be better off shifting tracks and using a USB camera instead, which we will try next week. We are on schedule with the hardware; the servos, lights, and speaker are all coded up and linked to the Arduino. We drew out specific dimensions for the mechanical part, and Prof. Fedder will cut the wood for us. If we get the camera stream working next week and get it integrated with the YOLOv5 detect script, we will be on track with the software.
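Next week's USB camera attempt can start from a minimal OpenCV sanity check like the one below; this is a sketch, and camera index 0 is an assumption about how the USB camera enumerates.

```python
import cv2

cap = cv2.VideoCapture(0)  # index 0 is usually the first USB camera
if not cap.isOpened():
    raise RuntimeError("camera failed to open")
ret, frame = cap.read()
if not ret or frame is None:
    raise RuntimeError("no frame returned (the black-screen symptom)")
cv2.imwrite("test_frame.jpg", frame)  # inspect the saved frame by eye
cap.release()
```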

Team Status Report for 3/25/2023

This week our team got together to discuss the ethics of our project with other teams. The biggest ethical issue our team came up with was that if users were to hack the Jetson, they could alter the built-in rules, which could misinform children as well as contaminate recycling. During the discussion, the only concern that was brought up was that our idea might simply not be effective, giving children a false sense of hope when the outlook of recycling is so bleak. Other than ethics, we worked on the hardware and the Jetson camera integration.


On the hardware side, we got the physical circuit built, working, and able to accept serial input from a USB-connected computer (a setup that we will migrate to the Jetson).

In terms of mechanical design, a major change we made after talking to the TA and professor was deciding to make the entire frame and door out of wood, to avoid the structural-integrity risk that came with the acrylic. We also decided to use a flexible shaft coupling between the servo and the axle for better movement.

Aichen and Ting struggled to get the camera working with the Jetson. We got the Jetson to recognize that the camera was present, but when the streaming screen popped up, it would just be black and there would be errors. Ting worked with Samuel to debug, but they found it would be better to just try a USB camera instead. Samuel lent us the camera that his team used for their capstone, and Aichen is testing it with the Jetson to see if this is the correct path to go down.

Our biggest risk now lies in the camera subsystem. Without the camera working and able to capture, we cannot test our detection and YOLO classification algorithm on real-time pictures. Because we are still not sure what the root of the problem is (a broken Jetson or an unsuitable camera type), debugging and setting up the camera may take some time. Another risk lies in the sizing of the inner door; however, this part should not be too concerning, as we can always start with extra space and make adjustments after experimentation.

On the bright side, the hardware work is caught up: the whole hardware circuit, as well as the (serial) communication between the Jetson and Arduino, is done. We are a little behind on mechanics, as there are still more parts to be ordered, and it is hard to finalize measurements without physical experimentation. The ML system alone is on track, as training seemed to return great results and the inference code runs on the Jetson as expected; soon we will start fine-tuning. As mentioned in the risks, the camera is taking us much longer than expected, and it is hard to do live testing of detection and classification without it set up well.

Looking ahead, our goal for this next week is to catch up as much as we can on our schedule so that we are prepared for the interim demo. For the demo itself, at a minimum we hope to have the Jetson classification and hardware operation working; however, our goal by this time is to have everything besides the mechanical portion somewhat integrated and working together. This is essentially to buy time for the remaining mechanical parts to be ordered and arrive, after which we can spend the final three weeks fine-tuning and connecting our hardware to the mechanical parts to the best of our abilities.

Vasudha’s Status Report for 03/25/23

This week, in addition to the ethics assignment and lecture, I focused on setting up the hardware circuit and helping debug the camera setup. For the hardware circuit, I initially attempted to set up the circuit according to my simulated MVP design. However, I soon found that some of the parts were not exactly the same as the ones in the simulation, and therefore could not be connected the same way or use the same libraries. A big example of this was the neopixel strip, which was actually the "DotStar" version that uses SPI and therefore needs separate clock and data lines. I had to move the strip to pins 11 and 13, as those are the pins on the Arduino Uno that carry the MOSI and SCK SPI signals. Besides this, the speaker and the servos could be connected as per the simulation. After some debugging, I was able to successfully connect and operate the neopixel strip, servos, and speaker. After this, I added code to have the Arduino accept and use serial input, connected the Arduino to my computer over USB, and ran a Python script to see if the Arduino would accept the input and correctly control the components based on it, which, after some debugging, worked as well. The next step in this process will be running the input script on the Jetson itself to make sure the connection between it and the Arduino works the same way.

After setting up the hardware subsystem, I helped Ting and Aichen debug the CSI camera connection. Although the device could recognize that a camera was connected, none of the multiple streaming commands we installed and tried seemed to work. The errors produced by these commands seemed to indicate that something might be wrong with the camera-Jetson connection itself, so we will try using the camera on the Nano to see whether the issue is with the camera or the Jetson port, and temporarily use a USB camera in case the CSI one does not end up working.

I also worked on further fine-tuning our mechanical design and specifying measurements for the different parts of the frame and door.

Schedule-wise, we have now caught up in terms of hardware. We are behind on the mechanical side due to the constant revision of the plan and not having all the parts we need for the new design. We are also a little behind on the software side due to unexpected issues with the camera. Therefore, our plan is to focus our time on fixing and fine-tuning the software and the camera+Jetson subsystem so that we have that working with the hardware in time for the interim demo, and then focus more time on the mechanical portion once those parts arrive.

This next week, I plan on testing the Jetson+Arduino integration and helping out with the camera/software debugging so that we get back on track and have everything ready for the demo the following week.