Team Status Report for 4/26

We finally have a decent replacement camera: not the Camera Module 3, which still has not arrived, but an Arducam IMX519 Raspberry Pi camera that meets all of our requirements except the IR functionality, which won’t affect our demo. At this point, the biggest risk is the time crunch of finishing all deliverables on time.

No changes were made to the existing design of the system other than the use of this new camera. The swap was necessary because our Raspberry Pi Camera Module 3 still has not arrived and our demo and final deliverables are coming up. The costs are minimal: the camera is easy to swap into our system pipeline, and it was inexpensive.

Our next steps are to integrate the new camera and fully test its capabilities. Beyond that, work is still needed on our poster and final deliverables.

Testing we have done:

Unit tests:

  • Cooling and power supply
    • We ran the Raspberry Pi for 55 minutes with the full implementation
    • The temperature stayed below 75 °C, avoiding thermal throttling
    • The power supply kept our device stable the whole time
    • Both met our requirements
  • ML Models
    • We evaluated the model pipeline on ideal-quality images at different distances (5 m, 10 m) and weather/lighting conditions (day, night, rain, night+rain)
    • We measured 98.6% precision and 79.1% recall, meeting our 90% precision requirement but not our 85% recall requirement
    • We will adjust the confidence cutoff to trade some precision for recall and meet both requirements
  • GPS Accuracy:
    • We took GPS readings at 10 outdoor locations with known true coordinates (e.g., the CMU flagpole) and compared the readings against those coordinates
    • All readings were within 31 m, well within our 200 m requirement
  • Edge-Cloud Communication:
    • We ran 15 simulated matches on the Raspberry Pi; each should appear properly in Supabase, adding 15 updates to the Edge database
    • This should succeed 100% of the time, which is what we observed in our testing
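The GPS accuracy check above boils down to computing the great-circle distance between each reading and the surveyed true position. A minimal sketch of that comparison (the coordinate pairs below are hypothetical placeholders, not our actual survey points):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (true, measured) coordinate pairs -- illustrative values only
readings = [
    ((40.4443, -79.9427), (40.4445, -79.9429)),
    ((40.4428, -79.9570), (40.4427, -79.9572)),
]

errors = [haversine_m(*t, *m) for t, m in readings]
assert all(e <= 200 for e in errors)  # our 200 m requirement
```

In practice the true coordinates would come from a map survey and the measured ones from the GPS module's output.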

 

System tests:

  • Timing requirement:
    • We ran the Raspberry Pi for 55 minutes with the full implementation
    • All images were processed within our 40-second requirement
  • Full system performance test:
    • We evaluated the model pipeline on images at different distances (5 m, 10 m) and weather/lighting conditions (day, night, rain, night+rain), captured at our camera’s quality (the old webcam at the time)
    • We observed 64.7% precision and 23.6% recall, well below our 90% and 85% targets respectively
    • We have a new camera to test, which should have much better results
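The precision-for-recall trade we plan to tune amounts to sweeping the detection confidence cutoff: lowering it keeps more detections, raising recall at some cost in precision. A minimal illustration with made-up scores (not our real model outputs):

```python
# Each detection: (confidence, matches_ground_truth). Scores are made up.
detections = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
              (0.60, True), (0.45, False), (0.30, True)]
n_ground_truth = 6  # total true plates in the test set (hypothetical)

def precision_recall(dets, n_gt, cutoff):
    """Precision and recall when only detections at or above cutoff are kept."""
    kept = [tp for conf, tp in dets if conf >= cutoff]
    if not kept:
        return 0.0, 0.0
    tp = sum(kept)  # True counts as 1
    return tp / len(kept), tp / n_gt

for cutoff in (0.9, 0.7, 0.5, 0.25):
    p, r = precision_recall(detections, n_ground_truth, cutoff)
    print(f"cutoff={cutoff:.2f}  precision={p:.2f}  recall={r:.2f}")
```

On real model outputs the same sweep would let us pick the cutoff that meets both the 90% precision and 85% recall targets, if one exists.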

 

Tzen-Chuen’s Status Report for 4/26

Last report!

Having tested our stand-in camera last week, the next task is testing the new replacement camera. This past week hasn’t seen much work on my end, as I’ve been swamped with other coursework, but since our system is effectively complete, that was not a major setback.

On top of the new camera work, there is also final deliverable work to be done, such as the poster and demo preparation. We’ll need to collect a good representative video to showcase our device working.

Progress is on track.

Tzen-Chuen’s Status Report for 4/19

The final week is coming up, so it’s more of the same: putting the finishing touches on our system. On my end this means in-person testing of the camera, and to that end I’ve created Python scripts for on-boot recording and image capture. The major threat to overall success is the replacement camera not arriving; it should have arrived last week, yet is still not here. Contingencies include using our MVP camera with no IR capability, or using the Arducam.
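One common way to launch a capture script on boot is a systemd unit. A sketch, with hypothetical paths (our actual script location and user may differ):

```ini
# /etc/systemd/system/dashcam-capture.service  (hypothetical paths)
[Unit]
Description=Dashcam image capture on boot
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/pi/dashcam/capture.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

After `sudo systemctl enable dashcam-capture.service`, the script starts on every boot, and `Restart=on-failure` relaunches it if it crashes mid-drive.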

Tomorrow we will go out into the real world and get our tests done, along with presentation work. Progress is on track; aside from the camera there are no concerns.

What I’ve needed to learn for this capstone project was the art of device selection, and a shift in mentality toward starting work first and asking questions later. Recognizing that the only way to solidify knowledge gained from reading articles on the internet is to simply start building something made everything much clearer.

Tzen-Chuen’s Status Report for 4/12

Post-Carnival and post-demo day, we are in the final stretch. There are no major threats to the overall success of the project anymore, as we have our MVP and are now only iterating and testing full capabilities. Feedback from demo day was encouraging: it was all positives and suggestions on how to proceed for the final showcase.

This week I focused on the final steps in making our system into a true dashcam, meaning the adapters and code for video capture. I also drafted a testing plan, which includes taking our system into a vehicle and capturing footage for the showcase. Next is actually getting that footage, which will happen next week when the newly ordered parts are picked up. Once I have that footage, along with images at the distances specified in the design document, I will run our model on them and measure/validate performance.

Progress is on track, and once again there are no concerns.

Team Status Report for 3/29

The biggest risk currently is that the main camera module we purchased appears to be broken, which means we need to get a replacement working as soon as possible. Our backup camera is detected, but it behaves erratically, taking extremely blurry photos of a single solid color; the images are completely unusable. We currently plan to use a simple USB webcam temporarily for the demo while we obtain a working replacement.

We have made no changes to our design, and nothing major to our schedule, since we expect that integrating the camera into our code will be quick once we have a working unit. The only schedule change was working around the faulty cameras, and once that problem is addressed we should be back on track.

Outside of the camera issues, we have gotten the rest of the pipeline operational and tested, meaning it is ready for the demo and MVP.

Tzen-Chuen’s Status Report for 3/29

With the demo on Monday, we have all systems working right now besides the camera, which is alarming to say the least. The Camera Module 3 that we originally intended to use was almost fully programmed, with pictures being taken and everything. Our backup camera is detected, but the problem is that it is a much more advanced camera and needs a lot of configuration.

So right now we are working on our third alternative: a USB webcam acting as a substitute, which should need much less configuration than the Arducam, our first backup. While hairy, we are on track to have a successful demo.

Tzen-Chuen’s Status Report for 3/22

Ideally I would have been finishing the integration of the UI, Raspberry Pi, and physical components, but I’ve been stuck on getting the main camera program working. The issue is that the camera appears to be plugged in but is not detected by the computer. This is strange, as it was working just fine before and we were able to take pictures with it.

Aside from the camera, much of this week’s work went to the ethics assignment, which was a really unexpected time sink. Currently I need to solve the camera connection issue as soon as possible and get back to other parts of the project where I’m needed, like the UI.

Progress is currently behind schedule because of the camera hiccups and the ethics work, but with a bit of extra elbow grease we should be back on track next week. (Hopefully.)

Tzen-Chuen’s Status Report for 3/15

This week was dedicated to wiring all the systems together. I worked on adapting the OCR models into functions that can be called from a main routine on the Raspberry Pi. The Supabase database is created and linked to the Pi as well, and work has begun on the front-end implementation.

In addition to adapting the OCR models, I also began coding the camera to take images in our program, using the picamera2 library. An unforeseen obstacle is the sheer depth of configuration available in the camera settings. I still need to work out what the optimal settings are for us, and whether there should be some form of dynamic camera configuration for different conditions.
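The dynamic-configuration idea could start as a simple lookup from capture condition to camera controls. A minimal sketch; the keys mirror picamera2-style control names, but the numeric values are placeholders, not tuned settings:

```python
# Map capture conditions to camera control overrides.
# Values are illustrative placeholders, not tuned settings.
PROFILES = {
    "day":   {"ExposureTime": 2000,  "AnalogueGain": 1.0},
    "night": {"ExposureTime": 30000, "AnalogueGain": 8.0},
    "rain":  {"ExposureTime": 4000,  "AnalogueGain": 2.0},
}

def controls_for(condition: str) -> dict:
    """Return control overrides for a condition, defaulting to 'day'."""
    return PROFILES.get(condition, PROFILES["day"])

# On real hardware these would be applied with picam2.set_controls(...)
print(controls_for("night"))
```

Keeping the profiles in one table makes it easy to retune per condition later, or to pick a profile automatically from a light sensor or the time of day.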

The next step is to determine the best way to host the UI front end and begin significant work on that portion. Next week I will work on the UI and test our current code on the Pi and camera. Progress is on schedule; no concerns on my end.

Tzen-Chuen’s Status Report for 3/8

This week I configured the Raspberry Pi. This included installing a new headless OS for improved performance and configuring the IPv4 address to allow remote SSH, which lets us set the Raspberry Pi up on CMU Wi-Fi and SSH into it to program it from wherever we are. The GitHub repo is now installed on it as well, and we are progressing toward basic integration.

I didn’t get to creating a table and linking Supabase to a user interface yet, but I have been looking into setting it up on a GitHub domain. Also, instead of zero cameras we now have two, plus a cable to test the other (similar) group’s feedback that we may not have the requisite resolution.

Another significant part of this past week’s work was the design report. I handled the Introduction and Project Management sections, along with the Testing and Verification sections. While the design report seemed simple on its surface, actually putting every idea down in writing had a very clarifying effect on the overall direction of the project.

Progress is on schedule, and now that all major components are here, it should go even more smoothly.

Tzen-Chuen’s Status Report for 2/22

This week had a spanner thrown into the plans. On top of a very important EPP presentation that took more time out of my week than usual, the design presentation unveiled new considerations that need to be looked into.

The work that went into the design presentation helped the team straighten out the exact direction we want to head in, and the post-presentation feedback from Professor Brumley was also extremely helpful. To turn that feedback into something tangible, I’ve been tinkering with Supabase and lovable.ai. Another group raised a possible issue with our selected camera, so I’ve been researching camera resolution, field of view, and how they relate to clarity at a distance.

What’s worrying is that I haven’t received an email about our camera being delivered (suitable or not). And although the Raspberry Pi repository hasn’t been fully fleshed out yet, I have a more complete picture of what needs to be done. Next week I should be freer to steam ahead and catch back up.