Team Status Report for 3/16

The main risk to our project is the projector mount. If it is not designed properly with the correct material, the projector could fall and break, which would compromise the entire system – especially given our budget. To prevent this, we have to build redundancy into the mount design to ensure the projector's safety. Additionally, we have to design hard stops that indicate our presets, making the user experience more intuitive.

Slight changes were made to the mounting design after we received a larger tripod.

No changes to schedule. We are on track for integration.

Tahaseen’s Status Report for 03/16

This week, I talked to Marios and acquired a much larger tripod that is perfect for mounting the projector. Since the new tripod is sturdier, the original one will be used to mount the camera + AGX and the new one will hold the projector. This is a positive change from our previous system, though it will require new brainstorming on mounting designs. Additionally, I outlined the projector calibration procedure and have begun implementing the script for it. Currently the procedure is: (1) inform the user that calibration is taking place, (2) detect the table outline through user-placed markers, and (3) verify that the projected calibration grid matches an unwarped version, then either re-image in software or prompt the user to adjust the projector to a preset height/angle. Caroline and I also discussed the pros and cons of using the homography support that comes with the Flutter UI versus a Python script that would require frame-by-frame input.
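The three-step procedure can be sketched as a small control flow. This is only a sketch of the logic, not the real script: `detect_table`, `project_grid`, and `grid_error` are hypothetical stand-ins for the actual CV routines, and the 5-pixel tolerance is a made-up threshold.

```python
def run_calibration(detect_table, project_grid, grid_error, max_error_px=5.0):
    """Sketch of the calibration procedure. The three callables are
    hypothetical stand-ins: detect_table finds the table outline from
    user-placed markers, project_grid displays the calibration grid over
    that outline, and grid_error measures how far the captured grid
    deviates from the ideal, unwarped grid (in pixels)."""
    print("Calibration in progress...")   # step (1): inform the user
    outline = detect_table()              # step (2): table outline via markers
    project_grid(outline)                 # step (3): show the grid...
    if grid_error() <= max_error_px:      # ...and check it matches the ideal
        return "calibrated"
    # Otherwise, re-image in software or ask the user to adjust the mount.
    return "adjust-projector"
```

The return value would drive the UI: either finish calibration or show the prompt asking the user to move the projector to a preset height/angle.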

I am on track. This week, I plan to schedule large blocks of time to implement a working calibration script and to design a hardware mount with our new materials. Our team is on track for integration before the interim demo.

Sumayya’s Status Report – 3/16

Progress Update:

I implemented the SIFT object tracker this weekend using a guide from Siromer. The tracking was accurate but quite slow, as shown in the video below. I applied the algorithm on every other frame, but this did not improve the speed. I need to do further research on how to improve performance.
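Running the detector on every other frame amounts to a simple stride loop. A minimal sketch of that structure, where `detect` is a hypothetical stand-in for the SIFT match-and-localize step — it halves the number of detector calls, but the total runtime is still dominated by the per-call cost of SIFT itself, which may be why the skipping didn't help:

```python
def track_with_stride(frames, detect, stride=2):
    """Run the expensive detector only on every `stride`-th frame and
    reuse the last result in between. With stride=2 this halves the
    number of detector invocations."""
    results, last = [], None
    for i, frame in enumerate(frames):
        if i % stride == 0 or last is None:
            last = detect(frame)   # expensive: SIFT keypoints + matching
        results.append(last)
    return results
```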

Given the time constraints, I will be pausing my work on object tracking, as I have two reliable trackers so far (CSRT and SIFT). I will focus on object re-identification logic that takes images of items in the Ingredients Grid and correlates each label to its ingredient.

https://drive.google.com/file/d/1vGplh-lNeOkQUPdKc1fmZiyRzDx353vl/view?usp=share_link

Schedule:

On Track!

Next Week Plans:

  • Object Re-identification / Labeling
    • Be able to identify Ingredients Grid
    • Be able to read from the Labels JSON file to identify the label of each cell
    • Take images of occupied cells and assign label to ingredient
    • Make this process modular so that new ingredients can be identified later in the recipe
  • Create JSON file template
  • Do gesture recognition on a region of interest (to mimic projected buttons)
  • Once the above steps are done, create a unit function that can track a specified object given its label.
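Since the Labels JSON template isn't written yet, here is a minimal sketch of what the label lookup could look like, assuming a hypothetical format that maps "row,col" grid cells to ingredient names — the field names and layout are placeholders, not the final template:

```python
import json

def parse_cell_labels(text):
    """Parse a hypothetical Labels JSON like
    {"cells": {"0,0": "flour", "1,2": "eggs"}} into a dict keyed by
    (row, col) tuples, so occupied grid cells can be matched to labels."""
    data = json.loads(text)
    labels = {}
    for key, name in data["cells"].items():
        row, col = (int(part) for part in key.split(","))
        labels[(row, col)] = name
    return labels

def label_occupied_cells(occupied, labels):
    """Assign a label to each occupied cell; unknown cells map to None,
    which would flag a new ingredient to identify later in the recipe."""
    return {cell: labels.get(cell) for cell in occupied}
```

Keeping the lookup keyed by (row, col) tuples keeps the process modular: new ingredients added mid-recipe just become new entries in the dict.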

Caroline’s Status Report for 03/16

I finished the majority of the projector UI work and worked on the backend integration of the voice commands and interface. The projector UI connects to a socket, and whenever a voice command such as “play video” is spoken, it goes into a queue that is processed by the socket server, then sent to the interface that plays the video. An example is here: https://drive.google.com/file/d/1TG8wGAeKivAeXD9qwv5chu-26XNts_kV/view?usp=drive_link. The voice-command listener and the socket server are both Python processes launched from one script. There is also support for changing the color of the boxes depending on whether the ingredients are for the current recipe, incorrectly picked up, or not being used.
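The flow described above (voice command → queue → socket → interface) is a producer/consumer pattern. A minimal sketch of the consumer side, where `send` is a hypothetical stand-in for the real socket write to the Flutter UI and the queued strings stand in for recognized commands:

```python
import queue
import threading

def command_pump(cmd_queue, send, stop):
    """Drain spoken commands from the queue and forward each one to the
    UI (via the `send` callable, standing in for the socket write).
    Keeps running until stop is set AND the queue is drained."""
    while not stop.is_set() or not cmd_queue.empty():
        try:
            cmd = cmd_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        send(cmd)   # in the real system: write over the socket to the UI

# Demo: the voice-recognition process would be the producer.
sent = []
commands = queue.Queue()
stop = threading.Event()
worker = threading.Thread(target=command_pump, args=(commands, sent.append, stop))
worker.start()
commands.put("play video")
commands.put("next step")
stop.set()
worker.join()
```

Draining the queue before exiting means a command spoken just before shutdown still reaches the interface.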

Status: on schedule

Next week, I will work on warping the image in Flutter with the homography Tahaseen gave me. I will also start working on the recipe web app.

Tahaseen’s Status Report for 03/09

This week, I placed the order for a stronger projector and received it. I had been testing with a smaller Panasonic projector, but after testing in different light modes, I found that its brightness was insufficient to meet the requirements of the project. Afterwards, I helped my team set up our coding workspace in Bitbucket so that we can track our progress by issue using Jira and regularly merge with each other to avoid conflicting code. This way, integrating everything at the end will be more seamless and we can avoid downstream issues. There was a shuffle in task assignment, so I will no longer be setting up the backend of the user site. Instead, I will focus on making sure the projector correctly and intelligently calibrates to the workspace table. To that end, I researched potential steps for the projector calibration mode by reviewing existing projector calibration methods. After reviewing various resources (these two were particularly helpful: https://us.seenebula.com/blogs/buying-guides/projector-calibration and https://simplehomecinema.com/2022/07/06/what-you-need-to-calibrate-your-tv-or-projector/), I gained more insight into what needs to be calibrated for a satisfactory user experience.

I am currently on track with my tasks. Next week, I will have a barebones setup for a calibrator that I can display through the projector. Additionally, I need to think of an alternative mount for the larger projector on the tripod.

Team Status Report for 3/9

The most significant risks that could jeopardize the success of the project are the processing speed of our object-tracking algorithm and the calibration process of the projector. These have been risks throughout the semester, and we plan to manage them with extensive testing. The AGX should provide sufficient computing power for all CV tasks. We are working with multiple professors to make sure the warp logic/math is correct. The projector we are using was recently delivered, so we plan to test the logic soon.

No changes made to existing design of the system.

No changes to schedule.

A was written by Caroline, B was written by Sumayya, and C was written by Tahaseen.

Global Factors:

TableCast is a product that anyone can use, given that they have a table, outlets, and cooking equipment. Ease of use is an important factor that we considered during our design process. Even for someone without a background in technology, we designed a system that anyone can set up with our step-by-step instructions. For example, a server launches automatically after the device boots, so a user does not have to log into the AGX and set it up manually. Instead, all they have to do is type a link into their browser, which most people, regardless of tech background, can do. Additionally, we will have an intuitive, visually guided calibration step that anyone can follow along with. TableCast is a product anyone from any background can use to improve their skills in the kitchen.

Cultural Factors:

The goal of TableCast in a kitchen environment is to make cooking easier. It encourages independence while also fostering a better sense of community. Our design allows users to easily follow recipes even if they have never made the dish before. We strongly believe in empowering individuals to make their own meals, especially those who are afraid of making mistakes in the kitchen. With TableCast, users can feel more confident in their abilities and improve their quality of life. Consequently, users are more likely to share what they made with their loved ones and be active in community gatherings such as potlucks and picnics.

Environmental Factors:

Our product, TableCast, does not directly harm or benefit the natural environment. However, there are several long-term benefits to using our product rather than traditional paper cookbooks and/or expensive electronics in the kitchen. Reducing reliance on paper cookbooks lowers paper demand, which supports reforestation efforts. Given the chaotic and messy nature of a kitchen, electronics can easily become damaged, requiring them to be replaced. The resource and production pipeline of consumer electronics like cell phones and tablets is notorious for being noxious and wasteful. By keeping these devices out of a risky environment like the kitchen, their longevity is extended, reducing the need for regular replacements.



Sumayya’s Status Report – 3/9

Progress Update:

I completed the Swipe Gesture Recognition as planned in the last report. I will need to test it in a kitchen environment with our UI to further refine the gesture. This will be done once our UI is programmed.

I researched algorithms for object tracking along with some suggestions from Prof. Marios. I will be using the SIFT algorithm to track objects using their features. Since the object will be in a known location in the “first frame”, we will have an easy reference image to start tracking. In my research, I found a video by Shree K. Nayar explaining how feature tracking works. I plan to use this as a basis for my implementation.

Before attempting Nayar’s implementation, I tried some tracker libraries that already exist in OpenCV. Specifically, I used KCF (Kernelized Correlation Filter) to track a couple of objects, following the example by Khwab Kalra. I found that the KCF algorithm works great for easily identifiable, slow-moving objects such as a car on a road, but it struggled to track a mouse moving across a screen. I’m not sure why this is the case yet and have much more testing to do with various example videos. OpenCV has 8 different trackers, all with different specializations. I will test each of them next week to see which works best. If they are not robust enough, I plan to use Nayar’s implementation with SIFT.
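Since all 8 OpenCV trackers share the same init/update interface, a small harness makes the side-by-side comparison easy. A sketch — the `make_tracker` factory would be a real constructor such as `cv2.TrackerKCF_create`, and the `ConstantTracker` stub below is purely illustrative:

```python
def evaluate_tracker(frames, init_bbox, make_tracker):
    """Run one tracker over a frame sequence using the OpenCV-style
    interface (init once on the first frame, then update per frame).
    Returns the box for each frame, or None where tracking failed."""
    tracker = make_tracker()
    tracker.init(frames[0], init_bbox)
    boxes = [init_bbox]
    for frame in frames[1:]:
        ok, box = tracker.update(frame)
        boxes.append(box if ok else None)
    return boxes

class ConstantTracker:
    """Illustrative stub mimicking the interface; it always reports the
    initial box. Swap in cv2.TrackerKCF_create etc. for real runs."""
    def init(self, frame, bbox):
        self.bbox = bbox
    def update(self, frame):
        return True, self.bbox

boxes = evaluate_tracker([0, 1, 2], (10, 20, 50, 40), ConstantTracker)
```

Running the same harness over the same clips with each of the 8 constructors gives per-frame boxes that can be compared against ground truth.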

Link to Car tracking using KCF: https://drive.google.com/file/d/1zUjeYSGWuIXzmaCbMIEO1zvzxUYZY6Lv/view?usp=sharing

Link to Mouse tracking using KCF: https://drive.google.com/file/d/1zUjeYSGWuIXzmaCbMIEO1zvzxUYZY6Lv/view?usp=sharing

As for the AGX, I have received confirmation from Prof. Marios’s grad students that it has been flashed and is ready to use.

Schedule Status:

On track!

Next Week Plans:

  • Implement and evaluate the 8 OpenCV trackers by next weekend; select the one that works best for this project.
  • Implement SIFT for object tracking if the OpenCV trackers are not robust enough.
  • If the above steps are complete, run the chosen algorithm on a real-time video stream.

Caroline’s Status Report for 03/09

I finished the Flutter UI. All the elements are placed in the correct spots, and the dynamic elements are working (videos, timers, buttons). I have been working on integrating it with the backend. I was able to use Flask to run a server locally and access the program as a website. Now, I am working on getting the sockets to work between the UI and a Python server. I have been having some difficulty with this step due to unclear documentation, but I am still on track.

On schedule.

To Do: Finish implementing the sockets and test with voice commands.

Tahaseen’s Status Report for 02/24/2024

This week, I calculated the homographies for the projector and was able to connect to it and project a warped image onto the table. It took a long time to tune and test the warping because I had several errors in my math. The angles of the projection still need to be adjusted, but the overall image roughly fits the table. The luminosity is acceptable, though a better projector would give better lighting; we can probably upgrade, but the current one works for initial testing. I also created an initial mockup with my team members after checking in and making sure we were all on the same page.
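For reference, the homography math that took tuning can be checked against the standard Direct Linear Transform construction from four point correspondences. A numpy-only sketch — the corner coordinates below are made up for illustration, not our measured values:

```python
import numpy as np

def find_homography(src, dst):
    """Direct Linear Transform: solve for the 3x3 H with dst ~ H @ src
    (in homogeneous coordinates) from N >= 4 point correspondences.
    Each correspondence contributes two linear constraints on H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Solution: right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Project (N, 2) points through H, dividing out the scale factor."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

# Made-up example: camera-image corners of the table -> flat table plane (cm).
image_corners = [(102, 80), (540, 95), (510, 400), (130, 390)]
table_corners = [(0, 0), (90, 0), (90, 60), (0, 60)]
H = find_homography(image_corners, table_corners)
```

With exactly four correspondences the solve is exact, so round-tripping the corner points through `apply_homography` is a quick sanity check on the math.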

I am currently on track with my tasks. Next week, I will test the projector under varying lighting conditions and on bigger counters. Additionally, I will set up the server for the UI.

Team Status Report for 2/24

The most significant risk is still mounting the projector. We still plan on placing the projector on a tripod, angled downwards at the table. We are currently looking for stable tripods and secure attachment options to manage this risk. Potential options include making a CAD model and 3D-printing attachments, as we are doing for the camera case, or buying off-the-shelf options. We are still testing the homography to see whether the warped image is high enough quality to rely on this method entirely. Our contingency plan remains placing the projector at a 90-degree angle if this does not work. There have not been any major design changes since last week, and there are no changes to the schedule. We will plan system integration this week and implement it after spring break.