Rebecca’s Status Report for April 12, 2025

Report

I corrected the soldering issue on the I2C pins as best I could. The connection is still inexplicably intermittent, but it's consistent enough that sinking more time into it isn't worth the cost; there are better things I could be doing. The software will have to account for the fact that an I2C message may not go through on the first try, by reattempting it until the correct acknowledgement comes back. Since the amount of data being sent is minimal, this seems to me like a fairly low-cost workaround.
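
Sketched below is roughly what that retry wrapper could look like in Python with the smbus2 library; the address, command byte, and retry budget are placeholders rather than our actual values:

    import time
    from smbus2 import SMBus

    SLAVE_ADDR = 0x42   # placeholder address
    MAX_TRIES = 5       # placeholder retry budget

    def send_with_retry(bus, value):
        """Write one byte, retrying until the slave acks or we give up."""
        for _ in range(MAX_TRIES):
            try:
                bus.write_byte(SLAVE_ADDR, value)
                return True          # write was acknowledged
            except OSError:
                time.sleep(0.01)     # flaky joint: back off briefly, retry
        return False

    with SMBus(1) as bus:            # I2C bus 1 on a Raspberry Pi
        if not send_with_retry(bus, 0x01):
            print("message was never acknowledged")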

Additionally, since the first 3D print of the headset, I've reworked the CAD into three parts (the front and the two sidebars) that will all print flat. This is simpler, lower-cost, and won't require cutting away so much support material afterward, which was a major problem in the initial design. I plan to attach the parts after printing with a small amount of short-work-time epoxy resin at the joints. Since the display came in the week of Carnival, I'm also working on the mount for it and the battery, as well as the power management board. I worry that mounting the battery at the front of the headset will make the device very front-heavy, but I believe this is the best of the options I considered, which are as follows:

  • Mounting to the front-middle, over the brow. This is front-heavy, and also places the battery fairly close to the user's face; however, I don't expect the battery to get hotter than somewhat warm, so I don't think that's a problem.
  • Mounting to one of the sides, which would make the device side-heavy. I believe that side-heaviness (asymmetry across the human body's axis of symmetry) is harder to deal with and more uncomfortable for the user than front-heaviness.
  • Mounting at the rear of the head, for instance from an elastic that attaches to the headset's sidebars. The battery is light enough that acquiring an elastic strong enough to support it would be possible. However, this demands that the power lines between the battery and the devices on the headset be very long and constantly flexing, which is a major risk. The only mitigation I could come up with was making the stiff plastic go all the way around the head, but that severely constrains usability.

So to the front of the frame it goes. If it proves more of a difficulty than I expect, I will reconsider this decision.

Progress Schedule

My remaining tasks, roughly in order of operation:

  • Finish designing the mount for the display on the CAD.
  • Print the CAD parts and epoxy them together.
  • Solder the display wires to the corresponding Rasppi and run its AV output.
  • Purchase machine screws and nuts to attach the devices onto the frame.
    • I need #3-48 (0.099″ diameter) screws, either 3/8″ or 1/2″ long, and nuts of the same gauge and thread pitch.
  • Purchase car HUD mirror material for the display setup.
  • Cut clear acrylic to the shape of my lenses.
  • Solder together the power supply (charging board & battery).
  • Solder the Rasppis’ power inputs to the power supply.
  • Mount the Rasppis, camera, display, and power supply to the headset.

Next Week’s Deliverables

Pretty much everything listed above has to get done this week: as much of it as absolutely possible.

Also, the slides and preparation for the final presentation, which I’m going to be giving for my group.

New Tools & Knowledge

I've learned how to use a great deal of new tools this semester. I'd never worked with a Raspberry Pi of any sort before, only much more barebones Arduinos, so just about all of the libraries we used to write the software were also new to me. I learned about these through the official documentation, of course, but also through forum posts from the Raspberry Pi community, FAQs, and in a few cases previous 18500 groups' writeups. I've used I2C before, but only with higher-level libraries than I could use this time, since I had to manually set up one of my Raspberry Pis as an I2C slave device.

Also, my previous knowledge of 3D printing was minimal, though I've worked with SolidWorks and Onshape before (I'm using Onshape for this project). I learned a lot about how printing works, how the tools work, the tenets of printable design, and so on, partly through trial and error but also from some of my mechanical-engineer friends who do a great deal of 3D printing for their coursework.

Charvi’s Status Report for 4/12/25

The week of Carnival (right before the interim demo), I coded the I2C integration with the display renderer, which has the display Pi receiving and processing signals from the gesture-recognition output of the other Pi. We discussed this during the interim demo, but at that point there were metallic connection issues, so it did not work. As of this Thursday those issues have been fixed by Rebecca, and the Pis were handed off to me to debug an additional issue with the display Pi not showing up as an I2C slave, probably due to a minor issue in the script. I have the Pis now and will debug this issue tomorrow; in the meantime, I have been working on the WiFi connection between the display Pi and the webapp, which I discuss later in this report.

Right after the interim demo and at the beginning of this week, Diya and I worked on writing up the features, details, and user and technical requirements for the analytics feature. We discussed these with Tamal and Josh, and are continuing to refine them.

This week, my primary work was getting the WiFi connection between the display Pi and the webapp working. Essentially, we want the user to click a button on the webapp that sends information about a particular recipe, such as steps and ingredients, to the glasses, which then display it. After the cooking session is done, the glasses send stats about that session (time spent, steps flagged) back to the webapp.

Originally, we did not do enough research on how exactly to do this, and planned to use Bluetooth. Then, realizing this was not feasible for an AWS-deployed webapp (our original plan), we switched to possibly deploying the server on the Pi itself. Admittedly, we should have done a lot more research well before this point, but this is the situation we were in.

Before starting on this task, I did several hours of research on how best to do this, as I had never worked with a Pi before, had rarely ever worked with networking, and had certainly never deployed a server on a Pi. I quickly realized that hosting the server on the Pi was not very feasible, and also not very necessary or scalable, and that it would be a better idea to simply have the Pi and the webapp communicate over a websocket connection.

Implementing this involved setting up Daphne and Channels to handle websocket requests on the webapp, setting up the websocket routing and consumer handlers, and adding the feature that sends information over to the Pi. On the Pi side, it involved sending an initial connection request to the webapp on boot, setting up a listener in the display script, and listening for and displaying whatever information is sent. This fully works: the Pi and the webapp are now connected and can send and receive info from one another over WiFi.
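
As a minimal sketch of the two sides, assuming Django Channels on the webapp and the third-party websockets package on the Pi (the consumer class, group name, and URL below are placeholders rather than our actual names):

    # Webapp side: a Channels consumer the Pi connects to on boot.
    import json
    from channels.generic.websocket import AsyncWebsocketConsumer

    class GlassesConsumer(AsyncWebsocketConsumer):
        async def connect(self):
            await self.channel_layer.group_add("glasses", self.channel_name)
            await self.accept()

        async def disconnect(self, code):
            await self.channel_layer.group_discard("glasses", self.channel_name)

        async def receive(self, text_data=None, bytes_data=None):
            stats = json.loads(text_data)  # session stats sent back by the Pi
            ...                            # store them for the analytics feature

        async def send_recipe(self, event):
            # Reached via channel_layer.group_send(..., {"type": "send.recipe", ...})
            # when the "send to glasses" button is clicked on the webapp.
            await self.send(text_data=json.dumps(event["recipe"]))

    # Pi side: connect once on boot and hand incoming recipes to the display.
    import asyncio
    import json
    import websockets

    async def listen():
        async with websockets.connect("ws://<webapp-host>/ws/glasses/") as ws:
            async for message in ws:
                recipe = json.loads(message)
                ...                        # pass steps/ingredients to the renderer

    asyncio.run(listen())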

While doing this, I realized that something happened to our webapp, probably because of merge conflicts, that has left Diya and me with very different model definitions (my guess is that when Diya started working on the webapp frontend, some of the models I made did not get pushed or pulled, and she wrote additional definitions for those models that do not match the old database). This has left our database in a messed-up state, with a lot of recipes having mismatched IDs and field names. I am meeting with Diya tomorrow to sort this out; hopefully it should not be too bad. I have also noticed that my local git clone is not letting me pull (a "repo does not exist" error), which I hope is just a local issue. I can probably delete the clone off of my computer and re-clone, but I am waiting on confirmation from my teammates that this is a me issue and not a repo issue.

This coming week, we have a lot of work to do. For me personally, once the database issue is sorted out, it will be relatively easy to display the info that the webapp sends to the glasses and vice versa, so we can demo this on Monday. Then I will figure out the I2C bug. After that, I will see where else the team needs help; I assume this will be on the analytics feature on the webapp, so that will be my task for the next week, along with the random things that will inevitably come up here and there during integration.

I feel like I have done a lot of work this week, especially on the WiFi connection, so I feel on track in that sense, but I definitely have a lot of work to do in these coming weeks!

Diya’s Status Report for 04/12/2025

I have worked on the following this week:

  1. I've been ironing out the design details for the post-cooking analytics feature, based on concerns raised during our last meeting, especially around how we detect when a step is completed and how to compute time per step. To reduce noise from accidental flicks, we already debounce each gesture using a timer: only gestures that persist for a minimum duration (e.g., more than 300ms) are treated as intentional. If the user moves to the next step and then quickly goes back, that is a strong signal that they skipped accidentally or were just reviewing the steps; in these cases, the step won't be marked as completed unless they revisit it and spend a reasonable amount of time on it. I'll implement logic that checks whether a user advanced and did not return within a short window, treating that as a strong indicator the step was read and completed. There are obviously still edge cases to consider, for example:
    1. Time spent is low, but the user might still be genuinely done. To address this, I was thinking of tracking per-user average dwell time. If a user consistently spends less time but doesn't flag confusion or go back on steps, we mark them as 'advanced'. If a user shows a gesture like a thumbs up, or never flags a step, we treat that as implicit confidence even with a short duration.
    2. Frequent back-and-forth or double-checking. User behavior might seem erratic even though they are genuinely following instructions. My plan is not to log a step as completed until the user either a) proceeds linearly and spends the threshold time, or b) returns and spends more time. If a user elaborates or flags a step before skipping it, we lower the confidence score but still log it as visited.
    3. The user pauses cooking mid-step, for example while using an oven, so a long time spent doesn't always mean engagement. As we gather more data from a user, we plan to develop a more personalized model that combines gesture recognition, time metrics, and NLP analysis of flagged content.
  2. I've been working on integrating gesture recognition using the Pi camera and MediaPipe. The gesture classification pipeline runs entirely on the Pi: each frame from the live video feed is passed through the MediaPipe model, which classifies gestures locally. Once a gesture is recognized, a debounce timer ensures it isn't falsely triggered. Valid gestures are mapped to predefined byte signals, and I'm implementing the I2C communication such that this Pi (acting as the I2C master) writes the appropriate byte to the bus. The second Pi (the I2C slave) reads the signal and triggers the corresponding action, like "show ingredients", "next step", or "previous step"; a sketch of this pipeline follows this list. This was very new to me, since I had never written I2C communication before, and it still has to be tested.
  3. I'm also helping Charvi with debugging the web app's integration on the Pi. Currently, we're facing issues where some images aren't loading correctly, as well as a lot of git merge conflicts. I'll be helping primarily with this tomorrow.
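
A rough sketch of what that master-side pipeline might look like, assuming Python with the smbus2 library (the gesture names, byte values, and slave address below are illustrative placeholders, not our real mapping):

    import time
    from smbus2 import SMBus

    GESTURE_BYTES = {           # placeholder mapping
        "thumbs_up": 0x01,      # next step
        "thumbs_down": 0x02,    # previous step
        "open_palm": 0x03,      # show ingredients
    }
    DEBOUNCE_S = 0.3            # a gesture must persist ~300ms to count

    class GestureSender:
        def __init__(self, bus, addr=0x42):   # 0x42 is a placeholder address
            self.bus, self.addr = bus, addr
            self.candidate, self.since = None, 0.0

        def on_frame(self, gesture):
            """Call with the MediaPipe classification of each camera frame."""
            now = time.monotonic()
            if gesture != self.candidate:
                # New candidate gesture: restart the debounce timer.
                self.candidate, self.since = gesture, now
            elif gesture in GESTURE_BYTES and now - self.since >= DEBOUNCE_S:
                # Held long enough to be intentional: signal the display Pi.
                self.bus.write_byte(self.addr, GESTURE_BYTES[gesture])
                self.candidate = None   # re-arm so it doesn't fire every frame

    # usage: with SMBus(1) as bus, feed GestureSender(bus).on_frame(label)
    # with each frame's classifier output.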

Team Status Report for March 29, 2025

Project Risks & Mitigation Strategies

Our most significant risk, the arrival of the display, is officially behind us. The main open item we're keeping an eye on now is the battery placement, discussed under the design changes below.

Changes to System Design

We may have to change how the battery is mounted on the frame from what we originally drafted, since we've realized that the drafted position would likely make the headset untenably front-heavy. I haven't figured out where it's going to go instead, but I'm working on it.

After receiving feedback during our last check-in, Diya is going to contribute to the CAD design for the smart glasses to support Rebecca’s work on the hardware. The first printable version is complete, and we can move forward with integration in earnest.

Further changes to individual contribution plans are discussed in our individual reports.

Schedule Progress

We always knew this upcoming week was going to be functionally unusable, and we've done our best to work around it. There's a bit of work that has to be done tomorrow to prep for the demo, but besides that, the week is looking very light.

Next Steps:

  • Begin testing the integration of all systems
  • Finalize and connect the recipe database to the display
  • Continue refining both software and hardware components after the interim demo
  • (More immediately) Meet briefly to create a script / plan for our interim demo

 

Rebecca’s Status Report for March 29, 2025

Report

I soldered pins into the I2C GPIOs on the Rasppi boards to make accessing them simpler. With a steadier metallic connection I was able to test the Python version of the I2C library and got it working as well, which makes wrapping each board's core code in the necessary communication harness much simpler, since it's all in the same language (and I also have an example of it working event-based, instead of polling on every loop, so I don't have to fight through figuring out event-based code in C++). I measured the current draw of each of the Rasppis running its core code, so I know how heavy a battery I need to purchase, and it actually turned out to be a little less than I expected. 1200mAh should do it; I've put in an order for a 3.7V LiPo that size (I think? This week has been a little hazy in general. If not, I'll do it tomorrow; either way it should get ordered early next week), and I have a 1200mAh LiPo on hand from a personal project that I can wire things to on a temporary basis before it arrives.

Also, the display arrived! As I understand it, it arrived on time (March 22), but I didn't actually get the ticket from receiving until much later in the week, since the original one went to the wrong email (one attached to my name but on the wrong domain, and which I thought had been closed. Oops). But I have it now. It's here! I feel so much less terrified of this thing now that it's here! I still need to get my hands on a reflective surface; I'll probably order little mirror tiles and cut them to size, or a reflective tape that I can stick to the same sort of acrylic plastic I'm going to cut the lenses out of. I'm going to see what's cheaper/faster on Amazon.

I modified the draft of the CAD so I'll be able to mount the Rasppis and the camera to it for the interim demo. I ran out of time to do anything else; the remaining tasks are more complicated, and the display came too late in the week for me to fight with it this week.

Progress Schedule

Things are getting done. I don’t know. It’s Carnival week and I am coming apart at the seams. I will reevaluate after next Sunday.

Next Week’s Deliverables

Interim demo. That's it. I plan on printing the headset and attaching the boards I have tomorrow, and then I'll have the wire lengths to connect the I2C pins. It's gonna get done.

 

Charvi’s Status Report for 3/29/25

This week, our team got some feedback about a lack of complexity, specifically on my end, as the webapp wasn't complex enough. As a result, we reshuffled some responsibilities and assigned new ones. I finished what I planned to do; here is a summary:

I completely developed the pygame display that will be shown on our glasses, including all the instructions (go forward, go back, show ingredients, elaborate steps). This information will be displayed to the user at all times as a HUD, so I took extra care to provide text wrapping, intuitive controls (e.g., locking progression on steps while ingredients are up to reduce input confusion, and making sure "back" returns to the beginning of the previous step and not the wrapped text), and easy-to-edit variables for line count and textbox sizes in case things change when the program is rendered on the actual display. I also added a progress bar at the bottom of the glasses to show how many steps have been completed. I used arbitrary keyboard inputs as placeholders, which Diya then attached to the gesture recognition signals, so the display output is now fully hooked up to gesture recognition.
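
The wrapping logic looks roughly like this sketch (illustrative rather than the exact display code; the function name and width handling are placeholders):

    import pygame

    def wrap_text(text, font, max_width):
        """Greedily pack words into lines no wider than max_width pixels."""
        lines, current = [], ""
        for word in text.split():
            trial = (current + " " + word).strip()
            if font.size(trial)[0] <= max_width:   # rendered width in pixels
                current = trial
            else:
                lines.append(current)
                current = word
        if current:
            lines.append(current)
        return lines

Each returned line can then be rendered and blitted separately, with the number of visible lines capped by those easy-to-edit variables.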

Tomorrow, I need to add the functionality for recipe experience levels (exp gained by finishing recipes) and display them on the profiles on the webapp. Diya is currently experiencing some git issues that have temporarily slowed progress, as the changes on her branch are not merging with the others (something to do with virtual environment issues), but we are resolving that, and then I will implement this functionality.

We also discussed what we can implement after the demo, and had a discussion about what exactly we want to do with the networking feature.

I had the idea that, to integrate the glasses and the webapp's networking features more, we should have a feature that allows the user to pick one other person to cook with before starting the recipe; once it starts, the user should be able to see that person's progress on the recipe live on the glasses. This will require some lower-level work with WiFi connections and networking, and will also require some work on the Arduino and I2C, so I'm excited to work on this after the interim demo.

EDIT: Upon discussion today (Sunday, March 30th), we have decided that an analytics feature may be more useful and better focused on the user requirements and use case. More on this in the next report.

In addition, if the display hardware is able to handle the current display program, Diya and I thought it would also be cool to show the recipe selection on the glasses. I will be in charge of this if that happens, which will be another task for after the interim demo.

Diya’s Status Report for 29 March 2025

After getting feedback to increase the complexity of my contributions beyond gesture recognition, I've significantly expanded my role across both the software and hardware components of the project:

  • Hardware/CAD:
    • I supported Rebecca by taking over the CAD design for the smart glasses. Although this is my first time working with CAD, I've been proactive in learning and contributing to the hardware side of the project.
  • Frontend Development:
    • I added TailwindCSS and JavaScript to enhance the styling of our web app interface.
    • I also redesigned the frontend structure, since the original wireframes didn't align with the actual website architecture. I restructured and implemented a layout that better suits our tech stack and user experience goals.
  • Integration Work:
    • I successfully integrated the gesture recognition system with Charvi's display functionality. This now allows for seamless communication between hand gestures and what is shown on the glasses.

I plan to integrate the recipe database with the Pygame-based display, enabling users to view and interact with individual recipes on the smart glasses.

This past week, I definitely went beyond the expected 12 hours of work. I’m feeling confident about our current progress and believe we’re in a strong position for the interim demo. I’ve taken initiative to broaden my scope and contribute to areas outside my original domain.  

Team Status Report for March 22, 2025

Project Risks and Mitigation Strategies

Everything, as it sort of always does, takes longer than you think it will. With the interim demo and Carnival both fast approaching, the sheer amount of time we can possibly sink into this project is becoming very limited, and we're beginning to really consider which parts are the most important to hit as soon as possible, and what can be put off until after the demo or potentially (unfortunately) pitched, without damaging the meat of the project.

Changes to System Design

We have not yet committed to it, but we've begun seriously discussing (as we recognized some time ago might be necessary) dropping Bluetooth communication in favor of WiFi. We were worried about how power-hungry WiFi is, but since the headset needs to be in contact with the web app at only a few particular points, we may be able to mitigate this. WiFi is simpler both for the hardware, as it's already set up on the Rasppis and reduces the number of different things we need to figure out, and for the software, since we know our server host cooperates better with WiFi than with Bluetooth. This may become a system design change soon.

Schedule Progress

Independent schedule progress is addressed in our individual reports. While the order of some tasks has shuffled, our work is roughly on track.

Web scraping works now, and Diya and Charvi will work together to integrate this into the webapp in the coming week.

Rebecca’s Status Report for March 22, 2025

Report

I've got HDMI output working from the Rasppi without a camera. As per the usual, everything that could go wrong did go wrong, and I spent an unfair amount of time troubleshooting. The display was meant to arrive today (March 22), so in theory, if it did, I'll get word about it from the receiving office on Monday. I've got access to a device that takes AV input, so if the display isn't here by then, I'll put in an order for an AV cable, cut an end off it, and solder the free wires directly to the Rasppi's test pads. Then, when I need to hook it up to the final output, I can just cut the cable again to get the necessary length and bare the other end of the wires. I might end up with a little more insulation than I was expecting, but really, I can't imagine it'll be anything more than marginal.

I've been working today (and will return to it after I finish this writeup) on getting the Rasppis to talk to each other over I2C. In theory it's possible, but since their default settings are strongly weighted toward being I2C masters, getting one to act as a slave is proving inconvenient (as per, again, the usual), though every document and forum post I've found more recent than 2013 holds that the hardware is capable of it and that the software exists to make it happen. Worst case, I resort to using the GPIOs as GPIOs and manually running a barebones protocol for the communication, which I think should be fine, considering we are not running more than, like, a single byte a handful of times a second across the line.

Edit, two hours later: it works!!

Currently the slave device is running C code consisting of an infinite loop that constantly monitors for messages. I'd like to swap this out for Python (for language consistency) that does event monitoring instead, to reduce the loaded power consumption. The wires between my two boards are NOT soldered in right now, which feels… suboptimal, but hey, whatever works. Or works sometimes, I guess.
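
The event-based Python version I have in mind would look something like this sketch, using pigpio's BSC peripheral support (the slave address is a placeholder, and the pigpiod daemon has to be running):

    import pigpio

    I2C_ADDR = 0x13                  # placeholder slave address

    pi = pigpio.pi()                 # connects to the pigpiod daemon

    def on_bsc_event(event_id, tick):
        # Fires only when the master transfers data, so there is no busy loop.
        status, count, data = pi.bsc_i2c(I2C_ADDR)
        if count:
            print("received byte:", data[0])   # dispatch to the core code here

    cb = pi.event_callback(pigpio.EVENT_BSC, on_bsc_event)
    pi.bsc_i2c(I2C_ADDR)             # configure the BSC peripheral as an I2C slave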

Progress Schedule

I’ll do my best to get them talking to each other tonight; if I can’t, the display arriving becomes my real hard deadline. They are talking.

I also really actually need to order the power supply this week. It is still very much on my radar.

Next Week’s Deliverables

If I can catch just a teeny tiny bit of luck, at least one of my displays will have actually arrived this weekend and I can pry it apart next week. Then the power supply will be the only thing I still have to order, and I can put all of the things together, if only powered by my laptop.

Charvi’s Status Report for 3/22/25

This week, I worked on the webapp further.

I have finished the functionality for the recipe page, including adding reviews and displaying information about the recipe, like ingredients and steps.

I have also finished the functionality for the recipe selection page, including filtering with tags.

It does not look great, but the functionality works as it should and is pretty complete.

So far I have been testing by manually inputting recipes in a form, which includes inputting a name, steps, ingredients, and a tag. This should be pretty easy to migrate to real recipes: I have set up the models so that reading the JSON file Diya has scraped from the web into models should let the data propagate through the whole existing webapp, though I am sure there will have to be some adjustment.
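
The migration script I have in mind would look something like this sketch (the app name, model fields, and JSON keys are guesses for illustration, not our real schema):

    import json
    from recipes.models import Recipe   # assumed app and model names

    def import_recipes(path):
        """Load the scraped JSON into Recipe rows, updating duplicates."""
        with open(path) as f:
            scraped = json.load(f)
        for entry in scraped:
            Recipe.objects.update_or_create(
                name=entry["name"],
                defaults={
                    "steps": entry["steps"],
                    "ingredients": entry["ingredients"],
                    "tag": entry.get("tag", ""),
                },
            )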

I have accomplished the goals I set for myself last week, so I would say I am on track.

The next step is to spend some time finishing the recipe running page and adding a score bar (for the player level) to the profile page that updates with completed recipes; this should be quick. I aim to finish this page tomorrow, so I can stay on track to complete the remaining integration tasks. Then, Diya and I will work on integrating the actual web-scraped recipe database into the webapp, which will probably take some time, as there will be a few details to adjust here and there. Once that is done, I can remove all the placeholder data. And lastly, the entire team needs to figure out how to connect this application to the glasses, including getting text to display on them, which the three of us will figure out together. Once that is done, the webapp portion should be good to go for the interim demo. If there is extra time, Diya and/or I can work on making the webapp look good (though I will probably spend some time rearranging things to make the website more legible anyway).