Feb 10th 2024 Status Report — Surya Chandramouleeswaran

This week, I helped scope out the hardware tools needed to interface with our imaging system, the first component of interest for our product. I also helped develop a plan for our team to have this component finished and fully tested in about six weeks.

The first aspect of this involved researching existing cameras that meet the frame-rate and pixel-resolution benchmarks we need for our OCR technology. Last week, during our weekly meeting, Prof. Savvides was kind enough to lend me a camera for the image-recognition pipeline, suggesting that a simple desk camera would suffice for our application since the product will be held stationary. Prof. Savvides also offered his expertise in making the detection algorithm as efficient as possible; I plan to work with him and my teammates to build this processing pipeline.
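As a concrete starting point for that camera survey, a short script along these lines could query what a candidate camera actually reports for resolution and frame rate. The threshold numbers below are illustrative placeholders, not our final benchmarks:

```python
# Sketch: check whether a connected camera meets our resolution and
# frame-rate benchmarks. The minimums used here are placeholders to
# be replaced with the spec we settle on for the OCR pipeline.

def meets_benchmarks(width, height, fps, min_w=1280, min_h=720, min_fps=30):
    """Return True if the reported camera specs clear our minimums."""
    return width >= min_w and height >= min_h and fps >= min_fps

def check_camera(index=0):
    """Query a camera via OpenCV and compare it against the benchmarks."""
    import cv2  # pip install opencv-python
    cap = cv2.VideoCapture(index)
    try:
        w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
        h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
        fps = cap.get(cv2.CAP_PROP_FPS)
        return meets_benchmarks(w, h, fps)
    finally:
        cap.release()
```

Some webcams report 0 for `CAP_PROP_FPS`, so measuring the frame interval directly over a few captured frames is a useful fallback.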

Additionally, the layout of our apparatus, as well as details of the interconnect, were finalized; as the primary person in charge of hardware in this group, I took the initiative on this front. For computing reasons, and after prolonged discussion with the software-inclined members of our team, I decided that running the detection algorithms on an Arduino would not be feasible. While we will certainly order Arduinos (RS232 boards are rather cheap given the budget constraints of the class), I envision that offloading the computational processes to an external computer may be the best course of action in the beginning. After speaking with Prof. Savvides and our TA Neha last week, we agreed that it is best to test the performance of such tools before dismissing them.
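If we do offload the heavy computation, the external computer still needs a clean way to exchange commands with the Arduino over the serial (RS232/USB) link. A minimal sketch is below; the newline-delimited message format and the port name are assumptions for illustration, not a finalized protocol:

```python
# Sketch of the serial link between the external computer (running the
# detection algorithm) and the Arduino. The newline-delimited framing
# and the port name are assumptions, not a finalized design.

def frame_command(cmd: str) -> bytes:
    """Encode one command per line for the serial link."""
    return (cmd.strip() + "\n").encode("ascii")

def parse_reply(raw: bytes) -> str:
    """Decode a newline-terminated reply from the Arduino."""
    return raw.decode("ascii", errors="replace").strip()

def send_command(cmd: str, port="/dev/ttyUSB0", baud=9600) -> str:
    """Send one command over the wire and return the reply (needs pyserial)."""
    import serial  # pip install pyserial
    with serial.Serial(port, baud, timeout=2) as ser:
        ser.write(frame_command(cmd))
        return parse_reply(ser.readline())
```

Keeping the framing helpers separate from the I/O makes them easy to unit-test before any hardware arrives.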


We are on schedule. We need to start building the recognition system and integrating it with the camera. I look forward to working with Steven on building some helper functions for our algorithm through Tesseract. For now, we would like to test classification on a MacBook's built-in camera, then integrate with the web camera that Professor Savvides provided.
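A first pass at that laptop-camera test could look something like the loop below. OpenCV and the pytesseract wrapper around Tesseract are the tools we expect to use, though the exact pipeline is still open; the text-cleaning helper is our own assumption about how to handle noisy output from blurry frames:

```python
# Sketch of the laptop-camera OCR test loop. cv2/pytesseract are the
# expected tools; the cleaning heuristic is an assumption to be tuned.
import re

def clean_ocr_text(raw: str) -> str:
    """Collapse whitespace and drop stray non-alphanumeric noise
    that Tesseract tends to emit on blurry frames."""
    return re.sub(r"[^A-Za-z0-9 ]", "", re.sub(r"\s+", " ", raw)).strip()

def run_ocr_loop():
    """Grab frames from the built-in camera and print recognized text."""
    import cv2            # pip install opencv-python
    import pytesseract    # pip install pytesseract (needs the tesseract binary)
    cap = cv2.VideoCapture(0)  # 0 = default built-in camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = clean_ocr_text(pytesseract.image_to_string(gray))
            if text:
                print(text)
    finally:
        cap.release()
```

Starting with the built-in camera lets us validate the Tesseract side independently before swapping in the borrowed web camera.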


One of the design questions that has arisen is whether we should implement an auto-capture feature on the camera (where the photo is captured once the camera deems the frame in focus). For training purposes, it is best if the image set is fairly standardized. This weekend I will brainstorm approaches to ensuring that the image-classification algorithm receives quality images, with consistent lighting and legible text.
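One common sharpness heuristic we could try for that auto-capture decision is the variance of the image Laplacian: a sharp frame has strong local intensity changes (high variance), while a blurry frame does not. A pure-Python sketch, with a placeholder threshold that would need tuning on real frames:

```python
# Candidate focus metric for the auto-capture feature: variance of the
# discrete image Laplacian. Pure-Python sketch; the capture threshold
# is a placeholder to be tuned against real camera frames.

def laplacian_variance(img):
    """img: 2D list of grayscale values. Returns the Laplacian variance."""
    h, w = len(img), len(img[0])
    lap = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor discrete Laplacian at (x, y)
            lap.append(img[y-1][x] + img[y+1][x] + img[y][x-1]
                       + img[y][x+1] - 4 * img[y][x])
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def should_capture(img, threshold=100.0):
    """Auto-capture trigger: fire when the frame looks sharp enough."""
    return laplacian_variance(img) > threshold
```

In the real pipeline this would run on downsampled frames (e.g. via OpenCV) rather than Python lists, but the metric is the same.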


In the next week, I plan to work with Steven on scaffolding the image recognition system and understanding what is expected of the camera the system will communicate with. I will begin testing on our built-in computer cameras, and I expect to work on some form of an autofocus/auto-capture feature to ensure the training process for our classification neural network goes smoothly.


Attached please find a copy of our proposed design sketch:

Design Sketch
