Team Status Report for 4/12

During demos, we verified that we have a working MVP. The main risk is confirming that the replacement camera we purchased works properly with the system; testing will begin as soon as it arrives. This is a small risk, however, since the previous camera already functioned and our backup is sufficient for demonstrating functionality.

There are no changes to the design, as we are in the final stages of the project. Only testing and full SageMaker integration remain.

On the validation side, we plan to test the fully integrated system once we receive and install the replacement camera. We’ll run tests using real-world driving conditions and a portable battery setup to simulate actual usage. We will also test in various lighting conditions. 

More specifically, while we have not yet run comprehensive tests, our initial testing of the timing requirement and the database matching meets our targets of 40 seconds and 100% accuracy, respectively. To test these more comprehensively, we will run 30 simulated matches with different images to make sure all of them stay within the timing and matching requirements. Once we receive the camera we will use in our final implementation, we will take images at varying distances and in varying weather and lighting conditions, and test the precision and recall of the whole pipeline. These images will also be run through platerecognizer.com, a commercial model, to determine whether it is our models or the camera that needs improvement. The details of these tests are the same as in our design report. Finally, we will either run the system in an actual car or record a video of a drive with the camera and feed that video into the pipeline to simulate normal use, making sure it detects all the matches in the video.
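
To make the timing/matching test concrete, below is a minimal sketch of such a harness. It assumes a hypothetical run_pipeline(image_path) entry point that returns the matched plate string (or None); the names and paths are placeholders rather than our actual code.

    import time
    from pathlib import Path

    TIMING_LIMIT_S = 40  # end-to-end requirement from the design report

    def run_simulated_matches(cases):
        """cases: iterable of (image_path, expected_plate) pairs."""
        failures = []
        for image_path, expected in cases:
            start = time.monotonic()
            result = run_pipeline(image_path)  # hypothetical pipeline entry point
            elapsed = time.monotonic() - start
            if elapsed > TIMING_LIMIT_S or result != expected:
                failures.append((image_path, expected, result, elapsed))
        return failures

    # e.g., 30 labeled test images whose filenames encode the expected plate:
    # cases = [(p, p.stem.upper()) for p in Path("test_images").glob("*.jpg")]
    # assert not run_simulated_matches(cases)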

In the last two weeks, we finalized our MVP and are working on additional features as well as testing.

Team Status Report for 3/29

The biggest risk currently is that the main camera module we purchased appears to be broken, which means we need to get a replacement working as soon as possible. Our backup camera is detected, but it behaves erratically, taking extremely blurry photos of a single solid color; the images are completely unusable. We are currently planning to use a simple USB webcam temporarily for the demo while we get a working replacement.
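
For reference, the kind of sanity check we can run with the temporary webcam is sketched below: grab one frame with OpenCV and save it for inspection. The device index and output path are assumptions.

    import cv2

    cap = cv2.VideoCapture(0)  # USB webcam, usually /dev/video0 on the Pi
    if not cap.isOpened():
        raise RuntimeError("webcam not detected")

    ok, frame = cap.read()     # capture a single frame
    cap.release()
    if not ok:
        raise RuntimeError("webcam detected but frame capture failed")

    cv2.imwrite("test_frame.jpg", frame)  # inspect this image for blur/color issues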

We have made no changes to our design, and nothing major to our schedule, since we expect that getting the camera working in our code will be relatively quick once we have a functioning one. The only schedule change was working around the faulty cameras, and once that problem is addressed we should be back on track.

Outside of the camera issues, we have gotten the rest of the pipeline operational and tested, meaning it is ready for the demo and MVP.

Team Status Report for 3/22

A significant risk that could currently jeopardize the success of the project is the camera and the Pi. We were able to take images, testing baseline functionality, but when we moved our setup, the camera stopped being detected. Though this can probably be resolved, the impact on our timeline will still be felt. We need to diagnose soon whether the camera, the Pi, or the cable is the issue, and order replacements from inventory or the web.

A minor change was made to the system design by replacing the HTTP POST Supabase edge function with database event triggers. This change was necessary because the Raspberry Pi already inserts data directly into the possible_matches table, though it may result in more complexity than the edge function method.
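
For context, the Pi-side insert that the trigger now reacts to looks roughly like the sketch below, using the supabase-py client; the URL, key, and column names are illustrative assumptions rather than our actual schema.

    from supabase import create_client

    supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

    def report_possible_match(plate: str, confidence: float, image_url: str | None):
        # Inserting the row is all the Pi does; the event trigger on
        # possible_matches handles any downstream notification.
        supabase.table("possible_matches").insert({
            "plate": plate,          # assumed column names, for illustration
            "confidence": confidence,
            "image_url": image_url,
        }).execute()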

Right now, our locally run code is finalized apart from the camera problems and can be found here. We also have a working prototype of the website for law enforcement to view matches, which can also be found here.

Team Status Report for 3/15

From the feedback received on our design report, our current risks are figuring out the cloud implementation in more detail and having something we can use to integrate with the edge compute part. Additionally, we need to test the camera as soon as possible to see whether it matches what we need. To that end, we have a barebones, minimal, but still usable cloud implementation for our case, and Tzen-Chuen will be hooking up the camera in the next couple of days.

Currently, there are no changes to the existing design. We forecast changes next week, however, as the major components should be integrated and we will begin testing our MVP, likely learning what could be improved. The current schedule is MVP testing next week, then working on new or replacement components and making the requisite changes the week after.

Right now, we have the locally run code mostly finished, and it can be found here. We have also made large strides on the cloud side, with databases set up.

Team Status Report for 3/8/25

The most significant risk remains unchanged: a delayed MVP. We are mitigating it by ordering spare components from ECE inventory to substitute for our desired parts. In terms of contingency plans, there are none, as the MVP is a crucial step that cannot be circumvented. However, we are testing the parts of our design as they are finished, so we are confident they will work as we assemble them into an MVP.

We made a few changes to the system design. We updated the cloud reliability target from 95% to 97% to reduce downtime risks and ensure timely database lookups for license plate matching; while AWS’s baseline uptime guarantee is closer to 95%, published statistics for actual server uptime on AWS support this target, and it shouldn’t change our costs. We also refined the edge-to-cloud processing pipeline to improve accuracy and efficiency. Both high- and low-confidence detections are sent to the cloud, but low-confidence results also include an image for additional verification using more complex models. This ensures that uncertain detections receive extra processing while keeping the system responsive and scalable.
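
A minimal sketch of that confidence-based routing is below; the threshold value and payload fields are assumptions for illustration, not our final values.

    import base64

    CONFIDENCE_THRESHOLD = 0.8  # placeholder cutoff between high and low confidence

    def build_cloud_payload(plate: str, confidence: float, image_bytes: bytes) -> dict:
        payload = {"plate": plate, "confidence": confidence}
        if confidence < CONFIDENCE_THRESHOLD:
            # Low-confidence detections also carry the image so the cloud
            # can re-verify them with more complex models.
            payload["image"] = base64.b64encode(image_bytes).decode("ascii")
        return payload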

These changes will not significantly alter the current schedule; if anything, relaxing the accuracy burden on the edge model (since low-confidence detections are re-verified in the cloud) will make training easier and potentially quicker.

In addition, we have written the code for running the ML models on the Raspberry Pi, and it can be found here.

Part A (Richard):

Our design should make the world a safer place with regard to child kidnapping. Our device, if deployed at scale, will be able to locate the cars of suspected kidnappers using other cars on the road quickly and effectively, allowing law enforcement to act as fast as possible. While we currently only plan on using the device with Amber Alerts, a US system, the design should largely work in other countries. The car and license plate detection models are not trained on the cars and plates of any specific country, and PaddleOCR supports over 80 languages if needed for foreign plates. This means that if other countries have a system similar to Amber Alerts, they can use our design as well. Our device may also motivate countries that do not have such a system to start one in order to use our design and better find suspected kidnappers.

Part B (Tzen-Chuen):

CALL sits at a conflicting cross-section of cultural values. Generally, our device seeks to protect children, a universal human priority. It accomplishes this through a distributed surveillance network, akin to the saying “it takes a village to raise a child.” By enabling a safer, more vigilant nation, we are responsive to a value shared across global culture.

In terms of traditional “American values,” CALL presents a privacy problem. While privacy is not explicitly a constitutional right, it is implied in the Fourth Amendment, and a widespread surveillance network is bound to raise concerns among the general public. We attempt to mitigate this concern by only sending license plate matches that meet a certain confidence level to the cloud, and never to end users. In this way we balance the shared cultural understanding of child protection with the American tradition of privacy.

Part C (Eric):

Our solution minimizes environmental impact by leveraging edge computing, which reduces reliance on energy-intensive cloud processing, lowering power consumption and data transmission demands depending on the confidence of the edge model output. The system runs on a vehicle’s 12V power source, eliminating the need for extra batteries and reducing electronic waste. Additionally, its modular design makes it easy to repair and update, extending its lifespan compared to full replacements. These considerations ensure efficient operation while reducing the system’s environmental footprint.

Team Status Report for 2/22/25

Currently, the most important risk that could jeopardize the success of our project is the MVP being delayed for any reason, as getting the MVP off the ground and tested will reveal any weak points we need to address. A delayed MVP will likely mean we are time-crunched when trying to iterate.

We made a modification to the timing requirements based on further research into the Amber Alert use case after receiving feedback. Initially, the system was designed with a 60-second processing requirement, which aligns with the average lane-change frequency on highways (once every 2.71 miles). However, after analyzing worst-case merging scenarios, which require about 20 seconds, we found that 40 seconds is a more appropriate MVP constraint as a middle ground between these two cases; once achieved, we will continue targeting the worst-case timing requirement. This better ensures timely license plate detection before a vehicle potentially exits the field of view. It has no direct costs, but it may affect our processor requirements, depending on how long model inference takes.

Another change we are making is moving to Supabase for our backend server, as it presents a much more user-friendly interface for our target users (law enforcement, Amber Alert) and is easier to set up.

Our schedule has not changed.

In addition, we have worked on our camera-to-OCR pipeline and have made two versions of the code we will use: version 1, version 2.

Team Status Report for 2/15/25

The most significant risk is that the edge compute solution may not deliver enough performance (precision and recall) to meet our MVP requirements. The contingency plan is a two-phase approach: if more accuracy is needed than the edge-compute Raspberry Pi can provide, we send the image to the cloud, where a more sophisticated model can give us better results.

A change we made to the existing design is that we are now using a Raspberry Pi 4 rather than a 5. This change was made because all the Raspberry Pi 5s available in storage were claimed very quickly, and since we wanted to test our software on actual hardware as soon as possible, we took a Raspberry Pi 4 instead. While it is unfortunate that we are unable to use the most powerful hardware available, this should not affect our ability to create an MVP or final device, since the process for loading the models onto these devices is nearly identical. When we run our model, if its performance is within the order of magnitude expected of a compact processor, we can spend our currently plentiful remaining budget on the more powerful Raspberry Pi 5.

We have trained the model we will most likely use for our MVP: a YOLOv11n model trained on an open-source license plate detection dataset for 400 epochs. It can be found here. We have also looked into existing OCR methods and chosen PaddleOCR, which we are currently experimenting with.
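
To illustrate how the pieces fit together, here is a rough sketch of the detect-then-read flow: YOLOv11n finds plate boxes and PaddleOCR reads each crop. The weight path is a placeholder, and PaddleOCR’s result format varies slightly between versions, so the parsing shown is an assumption.

    import cv2
    from paddleocr import PaddleOCR
    from ultralytics import YOLO

    detector = YOLO("plate_yolo11n.pt")  # placeholder path to our trained weights
    reader = PaddleOCR(lang="en")

    def read_plates(image_path: str) -> list[str]:
        image = cv2.imread(image_path)
        texts = []
        for box in detector(image)[0].boxes.xyxy:  # plate boxes for this image
            x1, y1, x2, y2 = map(int, box.tolist())
            crop = image[y1:y2, x1:x2]
            for line in reader.ocr(crop)[0] or []:  # [box, (text, score)] entries
                texts.append(line[1][0])
        return texts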

Aside from the model, the RPi 4 is currently being set up, with a GitHub repo to be populated by next week. The camera module is also expected to arrive next week.

Part A written by Richard Sbaschnig:


Our device aims to improve public safety by detecting license plates listed in active Amber Alerts from a dashcam. Since these alerts are sent to identify suspected kidnappers of children, increasing the search coverage of Amber Alerts with our device will let law enforcement find these vehicles, and catch kidnappers, sooner. It should also have a deterrent effect: would-be kidnappers would be less inclined to act knowing that devices all around them can identify their car and notify the police automatically.

Part B written by Tzen-Chuen:

Our device’s social considerations do not appear as an obvious point of concern. The different groups that will interact through our device are the manufacturer, the consumer, and the potential child abduction victim. The main point of contention may be between the consumer/end user and the manufacturer, as the manufacturer may install our device without the end user being aware of it, but we believe this can be mitigated through an explicit opt-in system.


Part C written by Eric:

Our license plate recognition system is designed with affordability as one of its focuses, utilizing low-cost Raspberry Pis and camera modules. This provides a more accessible alternative to expensive surveillance systems, ensuring that even communities with limited resources can use our system. It can be especially beneficial in rural areas with little existing infrastructure, since our device is mounted as a dash cam, allowing for wider reach and greater impact.

Team Status Report for 2/8/25

The most significant risk is not getting the edge-compute model working well enough in time and then not having enough time to switch our integration strategy to run license plate recognition in the cloud. As such, we are looking into both edge-compute models and models we could run on the cloud, and are considering how we would integrate each so that any necessary transition can be made without too much trouble.

The design was not solidified before this week, but the fundamental requirements have been selected, namely image recognition latency and plate detection range. These “changes” are necessary as we need concrete and realistic goals to work towards while building our design. The costs that this change incurs are minimal, as the design was not formalized previously. 

Since nothing has changed from our plans beyond the design approach solidifying, we have not made any changes to the schedule. However, we are looking into how we can build an MVP as early as possible so that testing begins early and any major changes happen earlier in the process.

While investigating models for license plate detection, we have made a Jupyter notebook for training YOLOv11, linked here.
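
The core of that notebook amounts to a few lines with the ultralytics API, sketched below; the dataset YAML path and image size are placeholders.

    from ultralytics import YOLO

    # Start from the pretrained YOLOv11 nano checkpoint and fine-tune it
    # on a license plate dataset described by a dataset YAML file.
    model = YOLO("yolo11n.pt")
    model.train(
        data="license_plates.yaml",  # placeholder dataset config
        epochs=400,                  # the run length later used for our MVP model
        imgsz=640,
    )
    metrics = model.val()            # precision/recall on the validation split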