Grace Liu’s Status Report for March 23rd, 2024

This week, I made significant progress on frontend design and usability. I used Bootstrap to build the formatted input and output design; Bootstrap is particularly well suited to responsive designs, while plain CSS (which we also use in our web application) is better for layouts that require more flexibility. I also worked on securing database access. To do so, I sanitized user inputs and outputs to prevent SQL injection and executable JavaScript attacks. It is important to validate and catch any potentially malicious user input, and to encode the output so that malicious data entered by users cannot trigger questionable behavior in the web browser. This was motivated primarily by the ethics discussions, in which privacy was of utmost importance for our team; we now know how big a responsibility it is in regards to our capstone project.
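
To make this concrete, here is a minimal sketch of the validate-then-encode pattern, assuming a Django backend; the Post model, the create_post view, and the 500-character cap are illustrative stand-ins rather than our exact code:

from django.http import HttpResponseBadRequest, JsonResponse
from django.utils.html import escape

from .models import Post  # hypothetical model for a user post

MAX_POST_LENGTH = 500  # assumed cap on post length


def create_post(request):
    body = request.POST.get("body", "")

    # Validate on the way in: reject anything outside the expected shape
    # before it touches the database.
    if not body or len(body) > MAX_POST_LENGTH:
        return HttpResponseBadRequest("Invalid post body")

    # The Django ORM parameterizes queries internally, which prevents SQL
    # injection; never build SQL strings by hand from user input.
    post = Post.objects.create(body=body)

    # Encode on the way out: escape() neutralizes embedded HTML/JS so a
    # malicious post cannot execute in another user's browser.
    return JsonResponse({"id": post.id, "body": escape(post.body)})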

Another thing I looked into was web sockets and improving their speed, efficiency, and syncing functionality. There was a big issue with a loop that caused extremely slow access times to the database. Going back to the previous discussion, security concerns that arise with this API include cross-site scripting, cross-site request forgery, and injection attacks. Input validation is a huge part of preventing these types of attacks. I defined data types for the expected input structure so there are constraints on user input messages, for instance on our globals page where users can make posts. I also added a lot of data to the database to simulate a real-world use case, including images of all types that I gathered and gave to Steven for his ML training dataset the previous week. This amounted to around a couple thousand data points and values for the modeling.
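
To give a sketch of what these constraints look like in practice, the function below validates an incoming socket message against an expected structure; the field names and limits here are assumptions for illustration, not our exact schema:

import json

# Expected field names and types are assumptions for illustration
EXPECTED_FIELDS = {"type": str, "post_text": str, "image_count": int}
MAX_TEXT_LENGTH = 500


def validate_message(raw: str) -> dict:
    """Parse a raw socket message and enforce structure; raise on failure."""
    msg = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(msg.get(field), expected_type):
            raise ValueError(f"field {field!r} is missing or the wrong type")
    if len(msg["post_text"]) > MAX_TEXT_LENGTH:
        raise ValueError("post text exceeds the length limit")
    return msg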

Likewise, I worked with Steven on figuring out how to integrate the ChatGPT API into our web framework. We made a dummy prototype page to test whether we could easily interface with it using basic inputs and outputs, while also considering the user experience and interface design. Communication between users and ChatGPT on our framework is now reasonably smooth, and we are considering more engaging interactions. The next step is to allow ChatGPT to recognize and analyze images, which we will experiment with next week. We believe this experimentation propels our project towards even more functionality, and we are excited to see how it pans out during our interim demo.
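
The call behind the prototype page has roughly the following shape, assuming the official openai Python package (1.x); the model choice and system prompt here are illustrative:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_chatgpt(user_text: str) -> str:
    """Send one user message and return ChatGPT's reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a friendly nutrition assistant."},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content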

Team Status Report 03/23/24

As a team, we enjoyed the focus on ethics this week. We enjoyed the pod discussions and began to consider things we would never have thought of otherwise. A big issue we identified with our product involves both body image concerns and privacy. We want our product to promote a healthy lifestyle, but we do not want our users to develop eating disorders or other personal issues. Likewise, we do not want private data on the consumption habits of our users to be leaked to other users, or to companies that participate in targeted advertising. As a team, we discussed fixes to these issues, including secure database design and user-friendly, encouraging dialogue on the front end. Although the formal ethics discussions are done, we plan to keep these principles in mind as we further our design.

We found it interesting that, in the pod discussion, two other groups had products of similar functionality to ours. While one group plans to project ingredients and recipes onto a tabletop using a projector at a calculated angle, the other is using AI to generate recipes in a phone app from ingredients added to a food inventory. We were able to clearly state that the MVP for our system would only classify canned foods and certain types of fruits, as opposed to cooked foods or ingredients in a bowl. The biggest difference between our product and these is that ours is a calorie tracking device focused on physical wellness, so more ethical concerns arise with this focal point. This will definitely be a greater consideration of ours while working on user interface integration and user experience in our web application.

The parts for scale integration have steadily come in through the ECE delivery center, so Surya began work on understanding the hardware layout of the scale and assessing the two main approaches to writing scale measurements to the database. He is waiting on the RPis to assess the camera approach to reading scale values. Another option would be to start on this with the Arduino chip that already came in the mail, but Arduinos are typically not great choices for image processing because of limited on-board RAM and limited compatibility with cameras (RPis are fantastic for such applications, and several resources exist online for subsequent support). Additionally, he plans on working with Steven and Grace to sculpt a presentation strategy for the rapidly approaching interim demo.

 

In the meantime, he has also learned more about how the load cell amplifier works and the wiring topology. An important thing to do when working with a functional scale is to ensure that the correct wires are snipped and soldered; a wiring schematic can be found below for the reader’s convenience:

Load cell wiring, Wheatstone bridge formation

 

Steven did a lot of work patching up the ML infrastructure of the project, optimizing the accuracy of its various components. The first optimization was to the classification algorithm for canned foods and fruit, using the AdaBoost algorithm to combine multiple decision boundaries. The second involved classification within the groups of fruit using k-nearest neighbors. This was combined with GoogLeNet and OpenCV to produce better results more specific to our project. Lastly, the ChatGPT API does not need optimization, but Steven worked on integrating it into the front end. He plans to work alongside Grace in the upcoming weeks to test the basic functionality and syntax of the API to perform label reading and classification, if needed.

Steven Zeng Status Report 03/23/24

This week I was on track with the schedule and achieved results to analyze. I first want to discuss the work on implementing a k-nearest neighbors algorithm. The highest-accuracy run turned out to be k = 5 using 500 training samples. The image below represents an example of the classification accuracy and results from our first tests using k = 3 and 100 training samples.

However, I was able to boost accuracy by introducing more samples and tuning k to 5. The resulting graph is below:

The accuracy, in combination with the GoogLeNet algorithm, was sufficient to produce results that satisfy the ideal confusion matrix discussed in our design report. The next issue I have to patch involves latency, because this approach took a relatively long time when I ran it locally on my computer. I hope to remove redundancy in the code to speed the process up. It is a positive sign that the accuracy results were sufficient; now I need to focus on computational efficiency. I will look into methods to optimize the computations that incorporate matrix properties to speed up the algorithm.
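
One standard matrix-based trick I am considering is computing all pairwise distances with a single matrix product instead of a per-sample Python loop; here is a minimal NumPy sketch, assuming non-negative integer class labels:

import numpy as np


def knn_predict(train_X, train_y, test_X, k=5):
    """Classify each test row by majority vote of its k nearest neighbors."""
    # All pairwise squared distances at once via
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b  (one matrix product, no loops)
    d2 = (
        np.sum(test_X ** 2, axis=1)[:, None]
        + np.sum(train_X ** 2, axis=1)[None, :]
        - 2.0 * test_X @ train_X.T
    )
    # Indices of the k smallest distances per test row
    nearest = np.argpartition(d2, k, axis=1)[:, :k]
    # Majority vote among the neighbors' integer labels
    return np.array([np.bincount(train_y[row]).argmax() for row in nearest])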

 

The next area I worked on was the AdaBoost algorithm. The features I considered were size (total area), color (scaling of RGB values), and character count (the amount of characters/text on the product). This creates a relatively simple 3-D feature space. However, I still need to work on parsing these values from images; for the sake of the algorithm, I hard-coded values for various images to test. The algorithm improved accuracy over a single soft-margin SVM decision boundary. This is a good sign, and the next step is to see it work on images taken with my MacBook camera. Extracting the features from the images will be the next challenge. I am reading articles on extracting features from images (i.e., size, color, and character count); I expect to use some sort of Python library to compute these values.
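
As a sketch of the setup, the snippet below trains scikit-learn's AdaBoost (which boosts depth-1 decision stumps by default) on hard-coded (size, color, character count) triples; the feature values are placeholders, not our measured data:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hard-coded (size, color, character count) features for a few images;
# these numbers are placeholders, not our measured values.
X = np.array([
    [0.42, 0.71, 2],    # fruit examples: small area, vivid color, little text
    [0.38, 0.65, 0],
    [0.55, 0.30, 42],   # canned-food examples: duller color, lots of label text
    [0.60, 0.28, 57],
])
y = np.array([0, 0, 1, 1])  # 0 = fruit, 1 = canned food

# AdaBoost combines many weak decision boundaries, re-weighting
# misclassified points after each boosting round.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

print(clf.predict([[0.52, 0.33, 38]]))  # expect class 1 (canned food)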

 

The last portion of my work this week involved the ChatGPT API. I researched the pricing model to determine the most efficient plan that minimizes cost. Likewise, I am still trying to understand how best to incorporate the API into the website design. I watched several videos, but this video (https://www.youtube.com/watch?v=_gQITRGs4y0) provided especially good guidance for moving forward with the product. I wrote some code changes to the repository; however, I have yet to test them. There are several syntax issues that I need to patch up, but the overall design and structure is mostly laid out already. I hope to test these prompts locally and compute their corresponding accuracy and latency values next week.

Status Report March 23, 2024 – Surya Chandramouleeswaran

With the parts needed for scale integration coming in this week, I had a good opportunity to get my hands dirty with this component of our design. The parts include the Arduino Mega, two ArduCam cameras, a physical scale, the ESP WiFi chip, and HX711 amplifiers. I am still waiting on the RPis to integrate with the ArduCam cameras; to recap from last week, I plan on trying two different implementations and seeing which method works better.

For now, I will emphasize our original implementation plan: remove the casing on the scale, disconnect the load cell wiring from the original Wheatstone arrangement, and run the four load cell wires into the amplifier, which then feeds into the Arduino and subsequently the ESP. From there, the ESP would just need to generate HTTP requests as scripted in the code I wrote last week. The main factor I am wary of is damaging sensitive equipment in the process of rewiring the scale. I plan on snipping the wires from the PCB in the scale as opposed to desoldering them, because the last thing I want is frayed copper at the ends of these connections. I hope Techspark has some heat-shrink tubing that I can keep around the connections I need to resolder, so that the bare ends are somewhat protected. If any of these connections are compromised, the scale cannot be used, so that is a consideration I keep in mind while working on this. Here is an example of the material (in black):

Figuring out the wiring between the amplifier and the load cells was also a bit nonintuitive for me. Here is a sample diagram that I plan on using to ensure the reconnections go to the right places. To keep things simple, red will match with red and blue with blue (these are the wires that “talk” to each other to get the aggregate weight from all four load cells), and I’ll send the white wires to the amplifier. I adapted this from the documentation on the load cells:
Load Sensors Wired in Wheatstone Bridge Configuration

I have slowed down work on our web application; I think it is at a point where all the basic functionality is present, and as a group, we will have to build the rest of the application around the hardware we implement. One theme readers may observe across the group this week is a shift towards hardware implementation now that the backend frameworks are in place.

In parallel, I will have conversations with the group to build an idea of what we want to show off at the upcoming interim demo. Overall, we are on track with our schedule, but we recognize that the next few weeks are essential to the completion of this project.

Grace Liu’s Status Report for March 16th, 2024

This week, our group really focused on understanding each component of the ethics assignment and how our project applies in real-world scenarios. Of the two case studies, I thought “Do Artifacts Have Politics?” by Langdon Winner was especially interesting, since it takes a perspective that most people wouldn’t consider when merely looking at a piece of technology. It is very eye-opening to see how something like a bridge design on Long Island carried such symbolic meaning, truly reflecting the creator’s political viewpoints and opinions. While that example concerns inventions designed to settle societal affairs (keeping the lower classes from using recreational resources), another category of inventions consists of those that are part of political relationships. I liked taking a technology that is hot right now and applying these concepts to it, since it really helped me recognize the design and ethics components behind each step of the design process. In terms of considerations for public health, public safety, and public welfare, the added perspectives from the case studies will definitely inspire us to pay more attention to these details to ensure a safe and friendly product for our users.

In terms of the web application, changes were added to include public posts for users to interact with each other. This was inspired by our public welfare discussion and the negative effects such applications may have on users’ mental health and body image. The globals page attempts to promote an environment that is inclusive and encourages positive self-image, where users can voluntarily choose to post their food consumption and add a caption as well. While for the MVP this feature would only be usable by those who share the product in the same household, where all the food inventory is gathered, we envision a lot more potential on a global scale for it to become something more impactful, making our product something beyond a food tracking tool.

From the previous week, while OAuth was set up to work properly, some debugging on the registration page had to be done to ensure all the information rendered properly on the profile page. An issue that emerged when displaying the uploaded profile picture is that we have to take into consideration the file size and file format compatibility of the image users choose to upload. A large image could consume too much bandwidth, so we would either have to limit file sizes or use a content delivery network (CDN) to improve our website’s performance and speed. The CDN approach could be particularly beneficial since files can be uploaded quickly from any part of the world, which matters because we want our application to be used on a global scale for more user interaction and positivity promotion. I would still like to do a bit more research on this approach, since it can be more costly than alternatives such as using the web server’s file system or cloud storage, which would also come at an additional cost.
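
For the file size and format checks themselves, a small server-side validation layer should be enough; below is a sketch assuming Django forms, with the 2 MB cap and the allowed formats as placeholder choices:

from django import forms

MAX_UPLOAD_BYTES = 2 * 1024 * 1024          # assumed 2 MB cap
ALLOWED_TYPES = {"image/jpeg", "image/png"}  # assumed allowed formats


class ProfilePictureForm(forms.Form):
    picture = forms.ImageField()  # already rejects files that aren't images

    def clean_picture(self):
        img = self.cleaned_data["picture"]
        if img.size > MAX_UPLOAD_BYTES:
            raise forms.ValidationError("Image exceeds the 2 MB limit.")
        if getattr(img, "content_type", None) not in ALLOWED_TYPES:
            raise forms.ValidationError("Only JPEG and PNG are supported.")
        return img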

I look forward to seeing more of our project fleshed out as Surya gathers the physical components and more progress is made on the ML algorithm with the data Steven and I collected last week. I envision a user-friendly product that will truly be an application beyond mere calorie tracking and food inventory.

Team Status Report — March 16th

With the interim demo coming up on April 1st, coming out of spring break with a clear plan of attack was important for our group. We spent this week planning out the main features we would like to highlight for the interim demo; furthermore, recognizing that the interim demo is a presentation of an intentionally incomplete product, we spent some time deciding how the features we choose to demonstrate will interact with one another.

Surya continued work on scale integration and helped Grace tidy up the web application. In addition, he developed potential approaches to scale integration through a block diagram as seen below:

Scale integration can be done either by hacking into a scale and forwarding values through a microcontroller, or by running OCR on the scale’s digital reading panel. Evaluating the first option requires a strong understanding of scale hardware before the scale can be opened up. The main aspects he focused on were the difference between strain gauges and load cells, how and why they are configured in a Wheatstone arrangement, and the rewiring necessary to forward the load cell measurement through an amplifier and into a microcontroller such as an Arduino.

Given the delicacy of this process and the surprising nuance of the hardware behind these scales, he discerned that an OCR model reading the scale’s segmented display panel may be cleaner to implement for an MVP. This, however, presents its own challenges. Images require more memory to deal with than a stream of bits representing scale readings, so latency becomes a pronounced issue. Furthermore, the accuracy of digit classification is an additional factor that is simply not a problem when one is forwarding bits to and from components. The counterargument in favor of this approach is the reduced potential for damage; for the sake of the MVP, with an operational scale as the first priority, this is a significant consideration to keep in mind.

In either case, because both options are largely cost-effective, Surya began ordering parts common to both approaches and plans to get his hands dirty with each over the next two weeks to see which method works better for the application. He encourages readers to view his weekly status report for more details on the above design plan for scale integration.

Steven made significant progress in testing various ML methodologies. He determined that soft-margin SVMs were not effective enough to include in our implementation. However, the SVMs provided nice decision boundaries that we plan to use for our backup option: the AdaBoost algorithm. This algorithm establishes weights for varying decision boundaries, so it takes multiple boundaries into account. He researched the math and coded up preliminary functions to compute the weights and train the model.

Steven also shifted a lot of focus to GoogLeNet and to a k-nearest neighbors approach for boosting the accuracy of the algorithm in classifying different fruits. He plans to work on testing and validation next week. We hope to have all of this tested and the results analyzed in no more than two weeks. Another goal for next week is to integrate the GoogLeNet classification algorithm directly into the website without modifications, to test compatibility as well as preliminary accuracies and latencies.

Regarding progress on integration, Steven did research on ChatGPT-4. We are currently hesitant to purchase the API, to save some money; however, Steven wrote the pseudocode for integrating it into the front end. Likewise, he looked into creating formatted responses given images and formatted inputs. Steven will also begin shifting focus away from local testing with his MacBook camera and work closely with Surya and Grace to take Arduino or RPi images and/or images from the website.

Grace was able to take this week’s ethics assignment and apply it to public safety and public welfare considerations. We realized food tracking applications could induce negative side effects for users with high self-discipline, so an additional globals page could help promote a body-positive environment that shows users what others are up to as well. A caption can also be added so our web application can be used more like a social platform; of course, this is optional, and users can always choose to opt out. With this potentially operating on a global scale, she wants to consider larger options for file uploads. One option, instead of using the web application’s file system, is a content delivery network, since CDN servers are scattered around the world. This would definitely help improve the speed and performance of our web application in the long run.

March 16th 2024 Status Report — Surya Chandramouleeswaran

Coming off break and getting back into the swing of things, this week offered an opportunity for us, as a team and individually, to reflect on our progress toward our NutrientMatch MVP. Whereas we entered the project work period with an ambitious idea but largely uninformed of implementation strategies, we are in a much better space now, having taken a deep dive into building the stages of the design. For me, this was a week where my preconceived notions about integrating the scale into the project were challenged, and I considered alternative options both as risk mitigation and to complete our MVP. I’d like to dedicate this week’s blog post to explaining what I learned and the subsequent approaches.

 

Our first idea for integrating the scale with a remote database (and subsequent display on a custom website) involved a physical scale, signal amplifiers, microcontrollers and logic, and a WiFi chip that would make requests to a backend engine for website rendering. The first design consideration was whether to build a custom scale or hack into an existing one. We favored the latter, as scales built for the market today are more complex than one would think, with several nuanced electronic components that ensure the measurements are accurate. The downside to this approach, however, is the lack of customizability given that the scale is prebuilt, and the fact that we would need to exercise extreme caution so as not to cut the wrong wires or solder components to the wrong sections of the scale.

 

The next step involved rewiring the load cells to run through an amplifier. A load cell is simply a component that translates experienced mechanical force into a signal to be interpreted digitally. When running these signals to an Arduino for display, the signals themselves are typically low in amplitude compared to the resolution of the microcontroller’s ADC input; hence the need for a load cell amplifier.
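
To make the mismatch concrete, here is a back-of-the-envelope calculation; the 2 mV/V sensitivity and 5 V excitation are typical datasheet figures, not measurements from our scale:

# Typical full-bridge load cell: 2 mV/V sensitivity at 5 V excitation
full_scale_volts = 0.002 * 5        # = 0.010 V (10 mV) at maximum load
adc_step_volts = 5 / 2 ** 10        # Arduino 10-bit ADC: ~4.9 mV per step

print(full_scale_volts / adc_step_volts)  # ~2 steps across the whole range
# Two counts of resolution is useless, which is why the HX711 pairs a
# programmable-gain amplifier (up to 128x) with a 24-bit ADC.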

 

For testing, a nice thing to do would be to wire the load cell amplifier outputs to a 7-segment display, just to get an idea of the numbers we are dealing with. In the actual implementation, the next step is to send these signals to the ESP32 chip. This is where we will develop a significant component of the logic controlling when the load cells are sampled. Furthermore, the ESP32 script needs to generate an HTTP request wirelessly to our cloud-hosted web application so it can write in the measured weight when certain conditions are met: the weight is stable, the website hasn’t been updated yet, and so on. Find below some of the logic I wrote for the ESP implementing the above concepts in detail. Some of it is in pseudocode, for simplicity:

 

// Assumes globals declared earlier in the sketch: wifiMulti (WiFiMulti),
// LoadCell (HX711_ADC), address (website base URL), scaleTick (unsigned long)
void loop() 
{
  static float history[4];
  static float ave = 0;
  static bool stable = false;
  static bool webUpdated = false;
  static float weightData = 0.0;

  // Decide whether it is time to send a request to the server
  if (stable && !webUpdated)
  {
    // Only try if the wireless network is connected;
    // WL_CONNECTED is a status constant from the WiFi library
    if((wifiMulti.run() == WL_CONNECTED)) 
    {
      HTTPClient http;

      Serial.print("[HTTP] begin...\n");
      Serial.println(weightData);

      // Build the full request URL; `address` (where the website is hosted)
      // is defined earlier

      String weight = String(weightData, 1);
      String fullAddress = String(address + weight);
      http.begin(fullAddress);

      Serial.print("[HTTP] GET...\n");

      // start connection and send HTTP header
      int httpCode = http.GET();
     
      if(httpCode > 0) 
      {
        // HTTP header has been sent and the server response header handled
        Serial.printf("[HTTP] GET... code: %d\n", httpCode);

        // file found at server, response 200
        if(httpCode == HTTP_CODE_OK) 
        {
          // clear the stable flag as the data is no longer valid
          stable = false;
          webUpdated = true;
          Serial.println("Web updated");
        }
      } 
      else 
      {
        // Debugging case: print the HTTP error
        Serial.printf("[HTTP] GET... failed, error: %s\n", http.errorToString(httpCode).c_str());
      }

      http.end();
    }
  }

  // Check the weight measurement every 250 ms (this interval is arbitrary)
  if (millis() - scaleTick > 250)
  {
    scaleTick = millis();
    // Read from the HX711; update() must be called before using the data
    LoadCell.update();
   
    weightData = abs(LoadCell.getData());
    // Running average over the last four readings
    history[3] = history[2];
    history[2] = history[1];
    history[1] = history[0];
    history[0] = weightData;
    ave = (history[0] + history[1] + history[2] + history[3])/4;

    // Logic to control when to write in the weight measurement 
    if ((abs(ave - weightData) < 0.1) && 
        (ave > 30) && 
        !webUpdated)
    {
      stable = true;
    }

    // IF we've updated the website AND
    //  the average weight is close to zero, clear the website updated flag
    //  so we are ready for the next weight reading
    if (webUpdated && ave < 1)
    {
      webUpdated = false;
    }

    Serial.print("Load_cell output val: ");
    Serial.println(weightData);

    // Write in the scale reading here!
  }
}

The web application handles incoming HTTP requests from here on out, so there is no more to implement on the scale side of things.
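
That handler can be quite small on the web application side; here is a minimal sketch assuming Django, where the weight query parameter and the ScaleReading model are illustrative stand-ins for whatever the ESP’s address string actually encodes:

from django.http import HttpResponse, HttpResponseBadRequest

from .models import ScaleReading  # hypothetical model with a `grams` field


def record_weight(request):
    """Accept the ESP's GET request and persist the measured weight."""
    raw = request.GET.get("weight")
    try:
        grams = float(raw)
    except (TypeError, ValueError):
        return HttpResponseBadRequest("missing or malformed weight")
    ScaleReading.objects.create(grams=grams)
    # Any HTTP 200 response is what sets webUpdated = true on the ESP side
    return HttpResponse("OK")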

Now, this plan works very well in theory, but some of the things that concern us mainly have to do with uncertainties on the scale end. Hacking a scale requires a strong understanding of how the scale is built to begin with, so we can avoid damaging existing wiring and components. So I began to think about alternative options for scale integration that wouldn’t involve the internals of the scale electronics. A less risky, but equally stimulating, implementation idea would be to hook up an RPi camera and physically read the digital scale reading; this would also require talking to an external sensor that tells the camera to only “sample” and take pictures when the door of our apparatus is closed. The module would forward the image to an ML algorithm that detects the measured weight and writes it to the database.
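
Roughly, the capture loop might look like the sketch below; the door-switch GPIO pin, the endpoint URL, and the classify_digits stub are all assumptions standing in for pieces that are not built yet:

import cv2
import requests
from gpiozero import Button

DOOR_SWITCH = Button(17)  # assumed GPIO pin for the door sensor
ENDPOINT = "http://<our-site>/scale?weight="  # placeholder URL

camera = cv2.VideoCapture(0)


def classify_digits(frame) -> float:
    """Stub for the digit-recognition model described above."""
    raise NotImplementedError


while True:
    DOOR_SWITCH.wait_for_press()   # only sample once the door is closed
    ok, frame = camera.read()
    if not ok:
        continue
    weight = classify_digits(frame)
    requests.get(ENDPOINT + f"{weight:.1f}")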

The drawbacks of this approach are the increased probability of measurement inaccuracy, and the fact that it would be harder to control the logic for when to capture the images. Also, images are more memory-intensive than streams of data, so the RPi would need a decent amount of onboard memory to handle the image capture. In either case, we have plenty of money under budget to implement both (highly cost-effective) ideas, so we will try both in parallel and see which approach makes more sense.

I envision the next two weeks, in advance of our demo, to be an important time for our project development; I’m excited to get to work on the scale once our parts come in!

Steven Zeng Status Report 03/16/24

This week I continued my progress on developing the ML backbone of this project. The first set of tasks I completed involved the soft-margin SVMs. I fine-tuned them to the best of my ability, but the accuracy values are not at the level we would like. As a result, I will experiment with more decision boundaries using the AdaBoost algorithm. This assigns different weights to a set of unique decision boundaries to improve classification accuracy over the SVM formulation; AdaBoost allows the classifier to learn from misclassifications by adjusting the weights of the various decision boundaries. I did a lot of research on the mathematics and rationale behind this algorithm, so next week I hope to implement a rough version of it and analyze its effect on classification between canned foods and fruits.

Next, I looked more into the GoogLeNet model. Thanks to feedback from Professor Marios and Neha, I decided to steer away from fine-tuning the last layer. Instead, the plan is to experiment with k-nearest neighbors to classify the real data using the training data. I created a design tradeoff report on the usage of k-nearest neighbors. I began coding up the algorithm in Google Colab, and I will compile a testing dataset of bananas, oranges, and apples at the same time. The plan right now is to start with k = 5; however, if time permits, I plan to use 5-fold cross-validation to experiment with k = 3, 5, 10, 20, and 40.
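
The validation sweep itself is only a few lines with scikit-learn; in this sketch, X stands for the GoogLeNet feature vectors and y for the fruit labels, both assumed already loaded:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# X: GoogLeNet feature vectors, y: fruit labels (assumed already loaded)
for k in (3, 5, 10, 20, 40):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print(f"k={k:2d}: mean accuracy {scores.mean():.3f}")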

The last thing I did this week involved familiarizing myself with the ChatGPT-4 API. I tested it locally on our website using localhost, and it worked quite well. The next step is to format the inputs to the API and get formatted outputs back to store in the database. The computational power and accuracy are quite remarkable, and I plan to experiment more with all of its features. The goal is to have sufficient knowledge of this API in the event that the other ML models and algorithms we plan to employ miss their accuracy benchmarks.

Next week, I plan to work closely on the website design to integrate the various features I have been testing. Currently, all the testing has been done using my MacBook camera, locally on my computer in a contained environment; as a result, there were not many errors or difficulties in running tests and experiments. Likewise, I hope to conduct latency tests on the website using dummy data and values. The goal is to be able to retrieve values from the database and feed them as input to our ML models to produce responses. I also plan to work with Surya to figure out the camera design of our system and how to accept images from the Arduino.

Grace Liu’s Status Report for March 9th, 2024

The week prior to spring break, our group spent most of our effort producing our design report, since it is a significant part of our capstone. Because most of my work draws on knowledge from the Web Application Development course, I spent a lot of time combing through old lectures for React and JavaScript help, since it had been a while since I took the class. The mockups added to the report for a registration page, a login page, and an inventory/main page can be seen below; they have served as inspiration for the actual frontend designs:

The rendering between these pages has been completed in view.py, along with the logout button functionality. Users have the option of logging in with Google OAuth rather than creating a separate account. Many benefits come with this service: a token-based authentication system eliminates users sharing their passwords with third-party applications, and it provides a more seamless, one-click user experience. While our product is fairly self-explanatory to use, we still want our users to have as simple a user experience as possible. This was set up in the Google Developer Console, where we configured how the OAuth consent screen will be displayed to users. During further testing, we will verify that the authentication flow works as smoothly as possible with the best security measures.
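
As a sketch of the wiring involved, the settings below show one common way to hook Google OAuth into a Django project via the django-allauth package; whether our final code uses allauth specifically is an open choice, and the credentials are placeholders:

INSTALLED_APPS += [
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.google",
]

SOCIALACCOUNT_PROVIDERS = {
    "google": {
        # Client ID and secret come from the Google Developer Console
        "APP": {"client_id": "<from console>", "secret": "<from console>"},
        "SCOPE": ["profile", "email"],
    }
}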

Since most of the frontend application has been completed, I was able to take some time to start setting up the database retrieval process on our website using Django. After evaluating and comparing different databases in our design report, we settled on MySQL for many reasons; it ultimately boiled down to its reliability in handling large datasets and its high compatibility with web application and Arduino programming. The structure of our database will look similar to the personal inventory page pictured above, but may be subject to slight changes depending on how we want the item status to be displayed.
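
Pointing Django at MySQL is a standard configuration block in settings.py; the database name and credentials below are placeholders:

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "nutrientmatch",   # placeholder database name
        "USER": "webapp",          # placeholder credentials
        "PASSWORD": "<secret>",
        "HOST": "localhost",
        "PORT": "3306",
    }
}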

I also assisted Steven in gathering data for his ML testing. Specifically, this included feature data to help distinguish the packaging of fruits from that of canned foods, label data for training the SVM classifier, and some other relevant information used during the labeling of each food item. For the feature data, attributes such as visual appearance and material composition were collected to capture the differences in packaging between fruits and canned foods. Additionally, the labeling process involved careful examination of the packaging and verification against established criteria to ensure sufficient accuracy. This comprehensive dataset serves as a strong foundation for Steven’s ML testing and will enable him to develop an effective classification system.

Team Status Report 03/02/24 and 03/09/24

In addition to addressing how our product meets market needs with respect to public health, safety, and welfare, as well as social and economic factors related to the production, distribution, and consumption of goods and services, our group would like to explore public concerns from a few different perspectives. Part A was written by Steven, Part B by Surya, and Part C by Grace.

Part A: With consideration of global factors, NutrientMatch can be used by just about anyone across the globe, whether their wellness goal is to consume more calories or to better track the types of foods they consume daily. Among younger adults, as fitness becomes a bigger trend with the emergence of technology and the expansion of gyms, more people are aware of their nutrient intake. Our product is heavily targeted towards this group, since they tend to live with roommates rather than family members; with this in mind, it is easy to get foods mixed up when groceries are purchased separately. NutrientMatch tracks each user’s food inventory once they are logged in to avoid this confusion. On the other hand, family members may also want to use this product to track their own intake. Our product can likewise be used by those who are not as tech-savvy as others. The usage process is easy: after each food item is either scanned or classified and weighed, the data is automatically forwarded to the cloud database, which makes backend calculations that are displayed on the webpage for users to see. Hence, while NutrientMatch is not a necessity in people’s lives, the increasing trend of physical wellness makes it a desirable product overall.

Part B: Cultural considerations in the design process are crucial to making NutrientMatch an accessible and adaptable product around the globe. The most obvious example that comes to mind is the acknowledgment of dietary preferences in cultural and religious contexts, including but not limited to plant-based diets and veganism, Halal foods in Arabic culture, and Kosher foods as delineated by the Jewish faith. To that end, we will dedicate significant resources to training a recognition model that performs similarly across a variety of testing inputs sampled across these cultural lines, so as not to perpetuate any implicit biases in the recognition process.

Another vital cultural aspect to consider is the idea of nutritional makeup and body image. Various cultures around the world prioritize specific eating lifestyles, with different preferences for caloric makeup and correspondingly different ideas of how the ideal physique should be maintained. While some groups may prefer thinness, with an increased focus on portion control and calorie counting, in other cultures food is synonymous with hospitality and status. Despite these seemingly disparate perspectives on the notion of “ideal” nutrition, NutrientMatch should perform similarly and without bias in all scenarios. Our product will be carefully designed to avoid reinforcing specific nutritional ideals; as above, our recognition model and recommendation algorithms will perform as intended irrespective of the actual nutritional habits of a given user.

Finally, the notion of privacy and the communication and integrity of data is an issue that crosses cultural lines as well. Some cultures view technology and its role in human life differently than others. In developing NutrientMatch, we seek to build trust with our customer base by demonstrating a commitment to the privacy and confidentiality of data; this will help us garner popularity and positive standing even among cultures that may show initial skepticism towards such technology. Stored credentials in our MySQL database will be hashed and sensitive data encrypted, and the HTTPS protocol used to interact with the backend will prevent hackers from intercepting data, thanks to an SSL certificate authorization process that checks the server and domain name against what is listed on the certificate.

Part C: Lastly, there is a strong connection between the previously analyzed economic factors and environmental considerations. One of our greatest concerns when researching the product’s use case requirements was the high amount of food that spoils in the United States. This not only directly impacts the economy, as people waste their groceries, but also adds more food to landfills. Landfills take up a lot of space and need to be managed periodically; in addition, they produce odors and methane gas that directly contribute to global warming and degrade the air we breathe. It is important to use all the resources we have effectively as the global population continues to increase, and NutrientMatch can help users do so by tracking every item entering their food inventory system. This reduces the stress of remembering the groceries they bought and can also help them manage what to buy next or what types of foods they currently lack. Besides personal health, the environment remains one of our greatest concerns, and we hope NutrientMatch encourages users to consume and productively use all the ingredients in their home before disposing of them.

Shifting to a discussion of our team’s progress over these past two weeks, we were successful in a lot of the software design, from ML to web app development. First, Steven progressed substantially on the ML design. He finalized tests and validation for the soft-margin SVM formulation to distinguish between fruit and canned foods; however, there are still accuracy issues that will need to be fixed in the upcoming weeks. Likewise, Steven made substantial progress in researching ChatGPT-4’s capabilities and the API for integrating its label-reading abilities into our web app design. Everything is on, if not ahead of, schedule on the ML side of our project, which will give us extra flexibility in fine-tuning and maximizing the results. We hope to gather statistics to support our design choices shortly.

Additionally, Grace helped Steven gather data for the ML testing portion, including feature data to help distinguish the packaging of fruits from that of canned foods, label data for training the SVM classifier, and other relevant information used during the labeling of each food item. Given the known accuracy issues, some further fine-tuning may still be needed. Regarding individual progress, Grace completed mockups for the design report and used them to calibrate the front-end designs. Rendering between the user login, user registration, user home, and logout pages has been completed. More progress is being made on the web application’s database retrieval with MySQL and on setting up the tables that define what information is displayed to users. More testing will be needed to handle database requests and deployment to the server environment.

Surya focused on advancing the web application ahead of a functional demonstration of the basic web app features at the next staff meeting on Monday, March 11. To reiterate, the demo will focus on site navigation, front-end development, user interaction, and view routing. Particular accomplishments include a refinement of the OAuth feature for registering new users through their email addresses, as well as weight-tracking visuals using the Chart.js framework. Looking ahead, the team plans to develop more user interaction features, including progress posting, friend networks, and a WebSocket-based chat service.

Surya also continued research on the scale integration process and will order parts this week. Despite initial hesitancy due to unfamiliarity with market options, he compiled a list of microcontrollers, considering factors such as power consumption, pin layouts, and opportunities for streamlined design through custom SoC options. This constitutes a fascinating stretch goal that we look to tackle in early April, but for now we would like to proceed as planned with our simple yet effective MVP approach using ESP protocols.

Although the progress schedule remains largely unchanged, we understand this is the point of the semester where we must make significant progress toward our product, with the interim demonstration arriving soon.