Caroline’s Status Report for 04/27

This week, I continued to work on integration. We switched to using my laptop for the project, so I was mostly working on system integration. The calibration script has been added to the pipeline so that the program waits for the user to complete the interactive part of table calibration. Before, it was assumed that the calibration would happen without user interaction, but now, when the user performs a new calibration, the web app shows a loading screen -> prompts the user to move the red dots on the table -> the user confirms in the web app that this is done -> the web app shows a calibration loading screen until calibration is complete. I have also been working on fine-tuning the voice commands to work more consistently when everything is running together. The module works very well independently, but it still does not recognize commands reliably enough when running alongside the rest of the system.
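The calibration hand-off above is essentially a linear state flow. The sketch below is illustrative only; the step names and `next_step` function are hypothetical stand-ins, not the actual project code:

```python
# Hypothetical sketch of the web app / backend calibration hand-off.
# Step names are illustrative, not the real project's identifiers.
CALIBRATION_STEPS = [
    "loading",        # web app shows a loading screen
    "awaiting_user",  # prompt: move the red dots on the table
    "confirmed",      # user confirms placement in the web app
    "calibrating",    # backend runs the calibration script
    "done",
]

def next_step(current: str) -> str:
    """Advance to the next calibration step, staying at 'done' at the end."""
    i = CALIBRATION_STEPS.index(current)
    return CALIBRATION_STEPS[min(i + 1, len(CALIBRATION_STEPS) - 1)]
```

In the real pipeline the transition out of `awaiting_user` is driven by the user's confirmation event rather than a direct call.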

The voice commands are a bit behind schedule; I had hoped to be finished by now, but I don’t think this will take much longer.

To Do:
– improve voice command accuracy
– integration and recipe processing
– polish web app (appearance-wise only)

Team Status Report for 4/20

Right now, our main risk is meeting our user requirement of allowing this device to work on any table. We are tuning the configuration to the table and lighting setup of the conference room we are using, so it is unknown how well our system will run on other tables. For now, we are focused on getting our initial configuration to work, and then we will focus on generalizing to other tables. If we do not have enough time to do this, we will reduce the scope of the system, possibly still supporting different tables but limiting them to tables more similar to the one in the conference room. We will also have to tune to the room and table where we are doing the final showcase, which will be the next step.

No changes made to system.

Implementing the software on the AGX was pushed back due to hardware constraints. No other schedule changes.

Caroline’s Status Report for 04/20

Last week, I worked on changing the voice module to use the GPU instead of the CPU. This involved configuring CUDA on my device and changing function calls in Python. I did see better results when the system was running, but it still needs to be tested with a fully integrated system. I also focused on moving our design to the AGX. I ran into many issues while installing it this week, but I have been making progress. First, I realized that there was only 32 GB of storage available on the device given to us, so we had to buy an SSD to continue using it. I had attempted to make progress with just the 32 GB, but I used so much storage that the device would no longer boot, and I had to reflash it. I then got the SSD on Friday and started moving our design to it. I had a lot of trouble getting CUDA to work, but it runs as of Saturday morning. CUDA worked easily on my own laptop when I tested it this week, but it was very difficult on the Jetson. I am almost done working through compatibility issues with our code on the AGX as well.

I am a bit behind schedule. I was hoping to have finished installing everything on the AGX earlier in the week, but it was pushed back due to the hardware issue. By the end of Saturday, I expect our code to run fully on the AGX.

This coming week, I plan to tweak my Flutter UI apps to work smoothly with the overall system. The UI is already mostly implemented, but it will change slightly as we integrate further.

Caroline’s Status Report for 04/06

I continued to work on both the web application and the projector interface. Both components are mostly functional, but they need some more work. For example, the web application currently works with hard-coded data, but I am working on making recipe data load dynamically from files on the device. I was also working on integrating the calibration step into the user flow. For the demo, we assumed that an old calibration was already saved, so now I am giving the user the option to start a new calibration and wait for the script to finish in the backend.

I am on schedule.

This upcoming week, I will install the WiFi card on the AGX and make sure that the network interface is set up properly. I will also try to reduce the latency in the running processes that we noticed during the demo. I want to focus on these things this coming week and then work on the rest of the UI if I have time.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

One module that I am in charge of is the voice commands module. In the design review, we outlined tests that I was planning to run to verify that the voice commands work. One aspect is the latency of the voice commands. During the demo, I noticed higher latency than expected, so we will run tests where we say each command and measure how long it takes for the command to be recognized in the script. It should only take 2-3 seconds to register a command. This is important to test to figure out whether the latency is a problem within the voice command module itself or only arises when it is integrated. Another test is checking the accuracy of the commands themselves. We will say commands and count how many are correctly identified. Additionally, we will repeat this with people talking in the background to see the accuracy in that environment.
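A minimal harness for these latency and accuracy checks might look like the following sketch; `speak_command` and `recognize` are hypothetical stand-ins for the real voice module, not its actual API:

```python
import time

COMMAND_LATENCY_BUDGET = 3.0  # seconds; the design target is 2-3 s

def measure_latency(speak_command, recognize):
    """Return (recognized_text, seconds) for one spoken test command.

    `speak_command` plays/speaks the command; `recognize` blocks until the
    voice module returns text. Both are stand-ins for the real module.
    """
    start = time.monotonic()
    speak_command()
    result = recognize()
    return result, time.monotonic() - start

def accuracy(expected, recognized):
    """Fraction of test commands the module identified correctly."""
    correct = sum(e == r for e, r in zip(expected, recognized))
    return correct / len(expected)
```

Running the same command list with background conversation and comparing the two `accuracy` numbers would quantify the noisy-environment case.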

Another module that I am in charge of is the web application. I will make sure to test each possible path of navigation and make sure that there are no errors while a user traverses the website. I will also make sure to test the display on different devices by physically testing and simulating to make sure that the interface spacing and size of components is styled accurately on different screen sizes.

To verify the projector interface, I will make sure that all the required elements are there and function correctly together. I will make sure that different components work simultaneously, such as the video and the timer.

Caroline’s Status Report for 03/30

I worked on the backend integration. I was able to build a web page for the web app user interface, launch it on a server using Python, and create a socket that connects the user interface interactions with the rest of the system. I also switched from a queue to a pub/sub model, which is an easier and more efficient way to send messages because multiple modules need to see values published by another module. I used ZeroMQ in Python to do this and successfully implemented it in the web app user interface. Additionally, I have been working with my team on integrating the system at large.
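A minimal sketch of the ZeroMQ pub/sub wiring described above, assuming pyzmq; the port and topic name are illustrative, not the project's actual values:

```python
import time
import zmq

ctx = zmq.Context.instance()

# Publisher side, e.g. the module pushing UI events to the rest of the system.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")  # port chosen for illustration

# Subscriber side; any number of modules can connect and filter by topic,
# which is what makes pub/sub a better fit than a single queue here.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "ui")

time.sleep(0.5)  # let the subscription propagate (ZeroMQ "slow joiner")

pub.send_string("ui start_recipe")
topic, event = sub.recv_string().split(" ", 1)
```

Adding a second SUB socket with the same topic filter would let, say, the projector interface and the voice module both react to the same published event.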

On schedule.

Next week, I will continue to integrate with my team by implementing the pub/sub model in other modules and also polishing the UI for demo day.

Team Status Report for 3/23

There are two significant risks. One is the projector, which has been an ongoing risk: knocking over or dropping the projector is still a concern because we have not finished this portion of the project. However, we have figured out a way to somewhat mitigate this risk, which is discussed in the system design changes paragraph. Another risk is the object reidentification. It might not work as well once implemented due to lighting conditions. Right now, the implementation uses the UI wireframe, which has fewer uncertainties than a real kitchen. We will have to do testing to see how much of a problem this is, if it is one at all. To manage this, we will experiment with different lighting conditions.

One change that we have made is simplifying the projector mount. Instead of having the user set different angles during calibration, we will fit the projection to any angle within reason (the projector must still be pointed at the table). We are doing this by computing a homography between a sample image taken by the camera and the image on the screen. Now, we do not need to make the projector mount rotate up and down. This change will make the mount cost less because we no longer need a rotation mechanism.
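Applying a homography works roughly as in the numpy sketch below; it maps a point through a given 3x3 matrix H with the perspective divide. In practice H would be estimated from camera/screen point correspondences (e.g. with OpenCV's `cv2.findHomography`); the matrices here are only illustrative:

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 homography, with perspective divide."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Example: a pure-translation homography shifts every point by (5, -2).
H_shift = np.array([[1.0, 0.0, 5.0],
                    [0.0, 1.0, -2.0],
                    [0.0, 0.0, 1.0]])
```

Warping the full projected frame is the same operation applied per pixel (or via a GPU texture warp), which is why the mount no longer needs a rotation mechanism.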

No changes to schedule. We are on track.

Caroline’s Status Report for 03/23

I completed the user interface for the web application where the user chooses the recipe. This involved designing the user interface on Figma and then implementing it in flutter.

On schedule.

Next week, I will work on connecting this UI to the backend and also work on the network interface on the AGX.

Caroline’s Status Report for 03/16

I finished the majority of the projector UI work and worked on the backend integration of the voice commands and interface. The projector UI connects to a socket, and whenever a voice command such as “play video” is spoken, it goes into a queue that is processed by the socket and then sent to the interface, which plays the video. An example is here: https://drive.google.com/file/d/1TG8wGAeKivAeXD9qwv5chu-26XNts_kV/view?usp=drive_link. The voice command and server socket are both Python processes launched from one script. There is also support for changing the color of the boxes depending on whether the ingredients are part of the current recipe, incorrectly picked up, or not being used.
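The queue-based flow above can be sketched like this; the function names and return values are illustrative stand-ins, not the actual project code:

```python
import queue

commands = queue.Queue()

def on_voice_command(text):
    """Called by the voice process when a command such as 'play video' is heard."""
    commands.put(text)

def dispatch_next():
    """Pop one command off the queue and route it to the projector UI (stubbed)."""
    cmd = commands.get_nowait()
    if cmd == "play video":
        return "video_started"  # in the real system, a socket message to the UI
    return "ignored"
```

In the real system the dispatch loop runs continuously in the socket-server process rather than being called one command at a time.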

Status: on schedule

Next week, I will work on warping the image in flutter with the homography Tahaseen gave me. I will also start working on the recipe web app.

Caroline’s Status Report for 03/09

I finished the Flutter UI. All the elements are placed in the correct spots, and the dynamic elements are working (videos, timers, buttons). I have also been working on integrating with the backend. I was able to use Flask to run a server locally and access the program as a website. Now, I am working on getting the sockets to work between the UI and a Python server. I have been having some difficulty with this step due to unclear documentation, but I am still on track.
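Serving the UI locally with Flask can be as small as the sketch below; the route and return value are placeholders for the real page, not the project's actual code:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder for the built Flutter page the real server returns.
    return "recipe UI"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # serve locally for testing
```

Flask's built-in `test_client()` makes it easy to exercise each route without starting the server, which helps when checking every navigation path later.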

On schedule.

To Do: Finish implementing the sockets and test with voice commands.

Team Status Report for 2/24

The most significant risk is still mounting the projector. We still plan on placing the projector on a tripod angled downwards at the table. We are currently looking for stable tripods and secure attachment options to manage this risk. Potential options include making a CAD model and 3D printing attachments, as we are doing for the camera case, or buying off-the-shelf options. We are still testing the homography and checking whether the warped image is high enough quality to rely on this method entirely. Our contingency plan is still placing the projector at a 90-degree angle if this does not work. There have not been any major design changes since last week. There are no changes to the schedule. We will be planning system integration this week and will implement it after spring break.