Summary

This week, the team made progress across multiple aspects of the project. On the frontend, we revamped the user interface for responsive design, while backend API endpoints were implemented to support dynamic updates. Ongoing testing ensures cross-device compatibility and seamless integration between components. Speech recognition also saw further progress: the team continued developing the DeepSpeech-based module, training it on custom datasets and addressing noise-handling challenges, and backend integration is nearing completion.

We are also working on the CAD design for the Raspberry Pi camera holder, focusing on wall-mount compatibility and addressing alignment and stability issues through simulation and prototyping.

In addition, the team worked on optimizing memory use and reducing unnecessary processing, so that the system minimizes inferences on unchanged or empty rooms and limits image storage to prevent memory overload. A "sample trace" was recorded using a Raspberry Pi camera, capturing images every 5 seconds over 3-4 hours. The trace includes periods of activity and inactivity, simulating realistic scenarios for testing the algorithms that filter irrelevant images and avoid redundant inferences. There has also been further progress on integrating Grounding DINO, creating vision model validation scripts, and adding new training data and modifying existing training data for better model robustness.
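
As a rough illustration of the redundant-inference filtering described above, the sketch below gates inference on a simple frame-difference threshold. This is a minimal sketch, not the team's actual implementation: capture_frame() and run_inference() are hypothetical stand-ins for the Raspberry Pi camera capture call and the detection model, and the threshold value is illustrative. Only the 5-second cadence mirrors the trace-collection setup described in the summary.

```python
import time
import numpy as np

CAPTURE_INTERVAL_S = 5   # matches the 5-second sample-trace cadence
DIFF_THRESHOLD = 8.0     # mean per-pixel difference below which a frame counts as "unchanged" (tunable)

def capture_frame() -> np.ndarray:
    """Hypothetical stand-in for the Raspberry Pi camera capture call."""
    raise NotImplementedError

def run_inference(frame: np.ndarray) -> None:
    """Hypothetical stand-in for the object-detection model call."""
    raise NotImplementedError

def trace_loop(duration_s: int = 3 * 60 * 60) -> None:
    """Capture a frame every few seconds and only run inference when the scene has changed."""
    previous = None
    start = time.time()
    while time.time() - start < duration_s:
        frame = capture_frame().astype(np.float32)
        if previous is not None:
            mean_diff = float(np.mean(np.abs(frame - previous)))
            if mean_diff < DIFF_THRESHOLD:
                # Unchanged or empty room: skip inference and do not store the image.
                previous = frame
                time.sleep(CAPTURE_INTERVAL_S)
                continue
        run_inference(frame)
        previous = frame
        time.sleep(CAPTURE_INTERVAL_S)
```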


What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The biggest risk is that our ML model will not be robust to minor variations in object type. Given the limited test dataset and the somewhat poor quality of the COCO data, which contains many false negatives for the newly introduced object categories, this remains an issue. Both aspects of the problem are being worked on, however: COCO photos identified as containing the new categories are being modified, and more data is continuously being added.

A failsafe for better generalization is also being created using Grounding DINO, which can achieve YOLO-level accuracy on COCO without being pretrained on it. A final determination on Grounding DINO will be made during the next week; otherwise, the additional data and the COCO modifications should prove helpful.
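
For reference, zero-shot detection with Grounding DINO can be exercised roughly as in the sketch below. It assumes the Hugging Face transformers interface for Grounding DINO (AutoProcessor / AutoModelForZeroShotObjectDetection); the checkpoint name, image path, text prompt, and thresholds are illustrative, not the team's actual configuration.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

# Illustrative checkpoint; the team's actual choice of Grounding DINO weights may differ.
model_id = "IDEA-Research/grounding-dino-tiny"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

image = Image.open("sample_frame.jpg")          # placeholder path, not a project file
# Grounding DINO takes free-text category prompts, lowercased and separated by periods.
text = "a backpack. a water bottle. a laptop."  # illustrative categories

inputs = processor(images=image, text=text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,      # illustrative thresholds
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["boxes"], results[0]["labels"], results[0]["scores"])
```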


Provide an updated schedule if changes have occurred.

No significant changes to the schedule, other than a deadline to determine the viability of Grounding DINO by the end of the week. Most other parts of the project are able to progress independently.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Although no change to the design has officially been made, a determination on Grounding DINO's inclusion in the final model will be made within the next 3 days. On one hand it is accurate; on the other, it is slow to run.
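
Since the inclusion decision hinges largely on speed, a simple per-image latency measurement like the one sketched below can make the accuracy-versus-speed trade-off concrete. Only the timing harness is shown; the detect callable is a hypothetical wrapper around whichever model is being timed (YOLO or Grounding DINO).

```python
import statistics
import time

def benchmark(detect, images, warmup=3):
    """Time a detection callable over a list of images and report latency statistics."""
    for img in images[:warmup]:          # warm-up runs are excluded from the stats
        detect(img)
    latencies = []
    for img in images:
        start = time.perf_counter()
        detect(img)
        latencies.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": sorted(latencies)[int(0.95 * len(latencies)) - 1],
    }
```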

This is also the place to put some photos of your progress or to brag about a component you got working.


Initial Prints

Sample Stress Test (It fails the stress test)


Validation Testing

End-to-End Functionality Testing

Verify that the system can detect objects, store data in the database, and return results seamlessly and correctly when queried through the front-end.

Test the flow of data from camera capture to object detection, database storage, and user-facing query results.
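
A smoke test of that flow might look roughly like the sketch below. The imported names are hypothetical placeholders for the real subsystem entry points, and the assumed detection record shape (a dict with a "label" key) is an illustration, not the project's actual data model.

```python
# Hypothetical subsystem entry points; names are placeholders for the real project code.
from pipeline import capture_image, run_object_detection, store_detections, answer_query  # assumed module

def test_capture_to_query_flow():
    """Smoke test of the full path: camera capture -> detection -> database -> user query."""
    frame = capture_image()
    detections = run_object_detection(frame)
    store_detections(detections)
    response = answer_query("what objects were seen most recently?")
    # Every detected label should be mentioned somewhere in the front-end answer.
    assert all(d["label"] in response for d in detections)
```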

Performance Validation

Ensure response times meet specifications (e.g., <5 seconds for object detection, <20 seconds for query responses); see the timing sketch below.

Test under load conditions, simulating multiple queries and detections simultaneously.
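
The two response-time targets above could be checked with simple timing assertions, for example in pytest style as sketched below. The pipeline imports are hypothetical placeholders, not the real API; only the latency thresholds come from the specification.

```python
import time

# Hypothetical entry points into the pipeline; names are placeholders, not the real API.
from pipeline import run_object_detection, answer_query, load_test_image  # assumed module

def test_detection_latency():
    frame = load_test_image()                     # placeholder helper returning a known test frame
    start = time.perf_counter()
    run_object_detection(frame)
    assert time.perf_counter() - start < 5.0      # spec: < 5 s per detection

def test_query_latency():
    start = time.perf_counter()
    answer_query("where did you last see my keys?")
    assert time.perf_counter() - start < 20.0     # spec: < 20 s per query response
```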

Integration Validation

Validate interactions between the SQLite database, backend API, front-end interface, and speech recognition system.

Test data integrity across subsystems to confirm no data is lost or corrupted during processing.
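
One way to check that nothing is lost or corrupted on the storage side is a round-trip test against the SQLite store, roughly as below. The table name, columns, and record shape are assumptions for illustration, not the project's actual schema.

```python
import sqlite3

def roundtrip_check(db_path: str, detections: list) -> bool:
    """Insert detection records into SQLite and read them back to confirm nothing was lost.

    `detections` is a list of dicts with 'label', 'confidence', and 'timestamp' keys
    (an assumed shape, not the project's actual schema).
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS detections (label TEXT, confidence REAL, timestamp TEXT)"
    )
    conn.executemany(
        "INSERT INTO detections VALUES (:label, :confidence, :timestamp)", detections
    )
    conn.commit()
    stored = conn.execute("SELECT label, confidence, timestamp FROM detections").fetchall()
    conn.close()
    expected = {(d["label"], d["confidence"], d["timestamp"]) for d in detections}
    # Every record we wrote should come back unchanged.
    return expected.issubset(set(stored))
```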

Environmental Validation

Test system performance in varying lighting, network connectivity, and environmental conditions to ensure robustness.

User Experience Validation

Conduct usability testing to confirm the interface is intuitive and meets user expectations, including accessibility standards.

Model-Only Functionality Validation

Extensive effort has been put into ensuring that the model can be effectively validated. By running a single script with a model as its only required parameter, one can see how well that model performs on any dataset supplied to it.
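
Such a script might look roughly like the following: a small CLI that takes a model and a dataset path and prints the resulting metrics. The argument names, the load_model/load_dataset/evaluate helpers, and the metrics shown are hypothetical placeholders, not the team's actual validation script.

```python
import argparse

# Hypothetical helpers; stand-ins for the project's actual loading and evaluation code.
from validation import load_model, load_dataset, evaluate  # assumed module

def main() -> None:
    parser = argparse.ArgumentParser(description="Evaluate a detection model on a dataset.")
    parser.add_argument("model", help="Path to the model weights to validate")
    parser.add_argument("--dataset", default="data/val", help="Dataset to evaluate against")
    args = parser.parse_args()

    model = load_model(args.model)
    dataset = load_dataset(args.dataset)
    metrics = evaluate(model, dataset)   # e.g. mAP, per-class precision/recall
    for name, value in metrics.items():
        print(f"{name}: {value:.3f}")

if __name__ == "__main__":
    main()
```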
