Samuel’s Status Report – 19 Feb 22

This week, we worked on the design review slides, and as part of the process we finalized our designs for the attachment system, CV algorithms, UI, and backend. Notably, I contributed a new scanner system design, suggesting that the camera scan overhead onto the platform rather than face forward as originally designed. This should allow for a more intuitive and less intrusive scanning process.

As the member in charge of the CV algorithm, I wrapped up research on the various classification algorithms we could use. I decided to go with a ResNet-based CNN instead of traditional SURF/SIFT methods because of its better accuracy and performance. I modified the code of this tutorial to train a classifier, and was able to successfully train a model that achieved 98% accuracy after 10 epochs.

However, it remains to be seen whether the classifier will work well on validation data (i.e., checking for overfitting), and especially whether it will work on real-world data from our actual setup. Next week, I will be working on the C++ PyTorch code to run the trained network, optimized for the Jetson. I will also begin working on a basic webcam setup (the webcam just arrived this Wednesday!) and collect real-world images that I can use for testing.

Oliver’s Status Report – 12 Feb 22

This week, I focused on back-end architecture and API design, and also developed our testing strategy, task distribution, and task schedule (Gantt chart) to guide us in the journey ahead.

Before beginning any back-end development, it is vital to plan out details such as the technology stack, the database system, and the specifications for API endpoints. The entire system will consist of up to four discrete components, including the back-end. Hence, an exhaustive list of API endpoints, their functionality, and their inputs and outputs is important not just in guiding the design of the API endpoints and the back-end architecture, but also in guiding the technical direction of all the discrete components, serving as a sort of technical “glue”. It is much easier to come up with a definitive list of use cases and develop around that structure than it is to add arbitrary use cases as they arise during development. The architecture I have developed is linked here. Together with the API, I have also been exploring technology stacks such as Node.js with Express and SQLite, selecting the stack that will serve our use case best, considering factors such as scalability and ease of integration with our other system components.
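To illustrate the idea of pinning endpoints down as data before writing any server code (these routes are hypothetical examples, not our finalized list, which lives in the linked architecture document):

```python
# Hypothetical endpoint table: each route declares its inputs and outputs
# up front, so every component can develop against the same contract.
API_ENDPOINTS = {
    "POST /items": {
        "in": {"name": "str", "expiry": "date"},
        "out": {"id": "int"},
    },
    "GET /items": {
        "in": {},
        "out": {"items": "list"},
    },
    "DELETE /items/<id>": {
        "in": {"id": "int"},
        "out": {"ok": "bool"},
    },
}

def spec_is_complete(spec):
    """Check that every endpoint declares both its inputs and its outputs."""
    return all({"in", "out"} <= set(entry) for entry in spec.values())
```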

I have also worked on the task schedule, including the task distribution. The Gantt chart is a visual “waterfall” breakdown of our tasks and phases of development. It enables us to keep track of whether we are on schedule and of any upcoming tasks, at every stage of the project. Currently, we are on schedule and I see no significant roadblocks in the immediate future – let’s hope this keeps up!

Our current Gantt Chart

Alex’s Status Report – 12 Feb 22

This week, I mainly worked on the presentation (I was the presenter), and the UI / Front end.

For the presentation: I created much of the content, such as the use case requirements and the solution approach. I practiced presenting and delivered the presentation to the class. It went very well, and we are set to begin working on our project.

I started off the UI by designing our logo (shown as our website favicon), and set up basic Bootstrap for the website.

Next week, I will create the basic calendar interface and the popup for correcting CV detections. Hopefully, when the API endpoints are created, I can start working on basic requests.

Samuel’s Status Report – 12 Feb 22

This week, we finalized the idea for our project, and successfully ironed out some issues.

Notably, I am in charge of the CV system; we were able to find a dataset of many fruits and vegetables (the Fruits360 dataset) which we could use to train our CNN classifier. It is a fairly extensive dataset, with 90,483 images across 131 classes of fruits and vegetables. Following a TA’s suggestion, we were also able to find a ResNet-based CNN classifier which we could potentially use for our project, and which I am currently trying to implement.

However, Prof. Mario observed that the dataset images have a white background, which implies that a classifier trained on these images might not be able to detect fruits against an arbitrary background. With this in mind, I came up with the idea of a platform with a white screen that fruit can be placed on, allowing us not only to easily detect and segment the fruit from the (white) background, but also to keep using the extensive dataset.
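A minimal sketch of why the white platform helps: near-white pixels can be thresholded away to isolate the fruit (the threshold value of 240 is an assumption to be tuned on real images):

```python
import numpy as np

def segment_on_white(img, thresh=240):
    """Return a boolean mask that is True where a pixel is NOT near-white.

    img: H x W x 3 uint8 array; a pixel counts as background only if
    all three channels are at or above the brightness threshold.
    """
    background = np.all(img >= thresh, axis=-1)
    return ~background

# toy image: white canvas with a dark 4x4 "fruit" square in the middle
img = np.full((10, 10, 3), 255, dtype=np.uint8)
img[3:7, 3:7] = (180, 40, 40)
mask = segment_on_white(img)
```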

I am also fairly proud of my contribution to the “Use Case Requirements” part of the proposal presentation, where we considered our product’s speed, accuracy, and cost metrics from the perspective of tangible monetary cost.