Team Status Report for 10/26

Project Risks and Management

The most significant risk currently is achieving real-time emotion recognition accuracy on the Nvidia Jetson without overloading the hardware or draining battery life excessively. To manage this, Noah is testing different facial recognition models to strike a balance between speed/complexity and accuracy. Noah has begun working on a custom model based on ResNet with a few custom feature extraction layers, aiming to optimize performance.

Another risk involves ensuring reliable integration between our hardware components, particularly for haptic feedback on the bracelet. Kapil is managing this by running initial tests on a breadboard setup to ensure all components communicate smoothly with the microcontroller.

Design Changes

We’ve moved away from using the Haar-Cascade facial recognition model, opting instead for a custom ResNet-based model. This change was necessary as Haar-Cascade, while lightweight, wasn’t providing the reliability needed for consistent emotion detection. The cost here involves additional training time, but Noah has addressed this by setting up an AWS instance for faster model training.
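
For illustration, a minimal sketch of what such a ResNet-based model could look like; the torchvision backbone, layer sizes, and six-class head are assumptions, not the final architecture:

```python
# Hypothetical sketch of a ResNet-based emotion classifier -- not the
# team's exact architecture. Assumes torchvision's resnet18 backbone,
# single-channel (grayscale) input, and six emotion classes.
import torch
import torch.nn as nn
from torchvision import models

class EmotionNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Adapt the first conv layer to grayscale input.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # strip the ImageNet head
        self.backbone = backbone
        # A small custom head standing in for the "custom feature
        # extraction layers" mentioned above.
        self.head = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))
```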

For hardware, Kapil is experimenting with two Neopixel configurations to optimize power consumption for the bracelet’s display. Testing both options allows us to select the most efficient display with minimal impact on battery life.

Updated Schedule

Our schedule is on track, with components like the website and the computer vision model ahead of schedule.

Progress Highlights

  • Model Development: Noah has enhanced image preprocessing, improving our model’s resistance to overfitting. Preliminary testing of the ResNet-based model shows promising results for accuracy and efficiency.
  • Website Interface: Mason has made significant strides in developing an intuitive layout with interactive features.
  • Hardware Setup: Kapil received all necessary hardware components and is now running integration tests on the breadboard. He’s also drafting a 3D enclosure design, ensuring the secure placement of components for final assembly.

Photos and Updates

Adafruit code for individual components:

Training of the new facial recognition model based on ResNet:

Website Initial Designs:

Noah’s Status Report for 10/26

Realistically, this was a very busy week for me, which meant that I didn’t make much progress on the ML component of our project. Knowing that I wouldn’t have much time this past week, I front-loaded a lot of work during fall break, so I am still ahead of schedule. These are some of the minor things I did:

  • Significantly improved image preprocessing with more transformations which have kept our model from overfitting.
  • Testing a transition away from the Haar-Cascade facial recognition model.
    • I realized that, while lightweight, this model is not very accurate or reliable.
    • I have been working on creating our own model using ResNet as well as multiple custom components that I have built on top of it.
  • Set up AWS instances to train our model much faster and more efficiently.

I am still ahead of schedule given that we have a usable emotion recognition model well before the deadline.

Goals for next week:

  • Continue to tune the model’s hyperparameters to obtain the best accuracy possible.
  • Download the model to my local machine, which should allow me to integrate it with OpenCV and test the recognition capabilities in real time.
    • Then, download our model onto the Nvidia Jetson to ensure that it runs in real time as quickly as we need.
  • After this, I want to experiment with some other facial recognition models and bounding boxes that might make our system more versatile.


Mason’s Status Report for 10/26

1. What did you personally accomplish this week on the project?

This week, I made substantial progress on our application by setting up an EC2 instance and deploying the application on it. This involved handling critical tasks like configuring the server, running application migrations, and integrating various components to ensure the system functions cohesively.

I also refined the application mocks, which are now available for review in the team report, and consolidated the app to be more lightweight and efficient to run. The application is almost fully operational, and I am close to making it accessible via a URL, enabling team members and future users to access it directly.

2. Is your progress on schedule or behind?

My progress is on schedule. Deploying the application to EC2 and reaching this stage without major issues is a significant milestone, and I am pleased with the current pace.

3. What deliverables do you hope to complete in the next week?

Next week, my primary focus will be (a) getting the application fully accessible via a web domain, and (b) setting up the API call from the NVIDIA Jetson to the server, which will control the server’s output based on input from the Jetson device. Additionally, I plan to establish the Jetson’s TCP connection and enhance the web interface by incorporating features like bar charts to display confidence levels and a stopwatch to track time elapsed per detected emotion.
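
As a rough sketch of the planned Jetson-to-server API call (the endpoint URL, payload fields, and use of the requests library are all assumptions, not the final design):

```python
# Hypothetical sketch of the Jetson-to-server API call -- endpoint,
# payload fields, and the requests library are assumptions.
import requests

SERVER_URL = "http://example.com/api/emotion"  # placeholder URL

def report_emotion(emotion: str, confidence: float) -> None:
    """POST one detection result from the Jetson to the web server."""
    payload = {"emotion": emotion, "confidence": confidence}
    resp = requests.post(SERVER_URL, json=payload, timeout=2.0)
    resp.raise_for_status()

report_emotion("happiness", 0.82)
```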

Kapil’s Status Report for 10/26

1. What did you personally accomplish this week on the project?

  • Received all of the hardware components needed for the project. With the parts now in hand, I started working on integrating them to verify that they function as intended.
  • Participated in two team meetings to align our progress and refine the action items with my group members.
  • I began developing the Adafruit code to control the individual components, focusing on setting up the communication protocols between the microcontroller and the other hardware elements (a rough bring-up sketch follows this list).

  • In parallel, I reviewed some preliminary designs for the 3D printed enclosure. My goal was to identify any structural issues early and ensure that the design can securely house all components while allowing sufficient space for wiring and connections. To facilitate this, I started exploring software that would allow me to edit and customize the enclosure design. This involved downloading and experimenting with CAD tools to see which ones offer the most flexibility for making the necessary design adjustments.
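
A minimal bring-up sketch of the kind of per-component Adafruit code described above, assuming CircuitPython; the pin choices, pixel count, and the DRV2605 haptic driver are assumptions, not the actual wiring:

```python
# Hypothetical CircuitPython sketch of per-component bring-up on the
# Feather -- pin choices, pixel count, and the DRV2605 haptic driver
# are assumptions about the hardware.
import time
import board
import busio
import neopixel
import adafruit_drv2605

# NeoPixel strip/ring for the visual display.
pixels = neopixel.NeoPixel(board.D5, 8, brightness=0.2)

# Haptic motor driver over I2C.
i2c = busio.I2C(board.SCL, board.SDA)
drv = adafruit_drv2605.DRV2605(i2c)

# Simple smoke test: light the pixels green, then play one buzz.
pixels.fill((0, 255, 0))
drv.sequence[0] = adafruit_drv2605.Effect(1)  # "strong click" effect
drv.play()
time.sleep(1)
pixels.fill((0, 0, 0))
```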

2. Is your progress on schedule or behind?

  • My progress is on schedule. Receiving all the components and beginning the coding process was a critical milestone, and I’m pleased to have reached it without any delays.

3. What deliverables do you hope to complete in the next week?

  • By next week, I want a working circuit on a breadboard with the different Adafruit components.  This will involve conducting integration tests to confirm that each part works seamlessly with the microcontroller and making adjustments based on the test results.
  • I also aim to complete a final version of the 3D printed enclosure design in 3 weeks.

Noah’s Status Report for 10/20

These past 2 weeks, I focused heavily on advancing the facial recognition and emotion detection components of the project. Given that we decided to create our own emotion detection model, I wanted to get a head start on this task to ensure that we could reach accuracy numbers high enough for our project to be successful. My primary accomplishments over these two weeks included:

  • Leveraged the Haar-Cascade facial recognition model as a lightweight solution for detecting faces in real time
    • Integrated it with OpenCV to allow for real-time processing and emotion recognition in the near future
  • Created the first iterations of the emotion detection model
    • Started testing with the FER-2013 dataset to classify emotions based on facial features.
    • Created a training loop using PyTorch, learning about and implementing features like cyclic learning rates and gradient clipping to stabilize training and prevent overfitting (see the sketch after this list).
      • The model is starting to show improvements, reaching a test accuracy of over 65%. This is already in the acceptable range for our project; however, I think we have enough time to improve the model to 70%.
      • The model is still pretty lightweight, using 5 convolution layers; however, I am considering simplifying it a little further to keep it very lightweight.
  • Significantly improved image preprocessing with various transformations which have kept our model from overfitting.
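
For reference, a minimal sketch of a PyTorch training loop with cyclic learning rates and gradient clipping as described above; the optimizer choice, learning-rate range, and clip value are assumptions:

```python
# Hypothetical training-loop sketch showing cyclic learning rates and
# gradient clipping -- optimizer, LR range, and clip value are assumed.
import torch
import torch.nn as nn

def train_epoch(model, loader, device="cuda"):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CyclicLR(
        optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=200)
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        # Clip gradients to stabilize training.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        scheduler.step()  # CyclicLR steps once per batch
```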

I am now ahead of schedule given that we have a usable emotion recognition model well before the deadline.

Goals for next week:

  • Continue to tune the model’s hyperparameters to obtain the best accuracy possible.
  • Download the model to my local machine, which should allow me to integrate it with OpenCV and test the recognition capabilities in real time.
    • Then, download our model onto the Nvidia Jetson to ensure that it runs in real time as quickly as we need.
  • After this, I want to experiment with some other facial recognition models and bounding boxes that might make our system more versatile.


Team Status Report for 10/20

Part A: Global Factors by Mason

The EmotiSense bracelet is developed to help neurodivergent individuals, especially those with autism, better recognize and respond to emotional signals. Autism affects people across the globe, creating challenges in social interactions that are not bound by geographic or cultural lines. The bracelet’s real-time feedback, through simple visual and haptic signals, supports users in understanding the emotions of those they interact with. This tool is particularly valuable because it translates complex emotional cues into clear, intuitive signals, making social interactions more accessible for users.

EmotiSense is designed with global accessibility in mind. It uses components like the Adafruit Feather microcontroller and programming environments such as Python and OpenCV, which are globally available and widely supported. This ensures that the technology can be implemented and maintained anywhere in the world, including in places with limited access to specialized educational resources or psychological support. By improving emotional communication, EmotiSense aims to enhance everyday social interactions for neurodivergent individuals, fostering greater inclusion and improving life quality across diverse communities.

Part B: Cultural Factors by Kapil

Across cultures, emotions are expressed and, more importantly, interpreted in different ways. For instance, in some cultures emotional expressions are more subdued, while in others they are more pronounced. Despite this, we want EmotiSense to recognize emotions without introducing biases based on cultural differences. To achieve this, we are designing the machine learning model to focus on universal emotional cues rather than culture-specific markers.

One approach we are taking to ensure that the model does not learn differences in emotions across cultures or races is by converting RGB images to grayscale. This eliminates any potential for the model to detect skin tone or other race-related features, which could introduce unintended biases in emotion recognition. By focusing purely on the structural and movement-based aspects of facial expressions, EmotiSense remains a culturally neutral tool that enhances emotional understanding while preventing the reinforcement of stereotypes or cultural biases.
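
A minimal sketch of this grayscale step, using OpenCV; the 48x48 resize (FER-2013’s native resolution) is an assumption about the rest of the pipeline:

```python
# Minimal sketch of the grayscale preprocessing step described above.
import cv2

def preprocess(frame):
    # Drop color channels so skin-tone cues never reach the model.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (48, 48))  # FER-2013-sized input (assumed)
```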

Another way to prevent cultural or racial biases in the EmotiSense emotion recognition model is by ensuring that the training dataset is diverse and well-balanced across different cultural, ethnic, and racial groups. This way we can reduce the likelihood that the model will learn or favor emotional expressions from a specific group.

Part C: Environmental Factors by Noah

While environmental factors aren’t a primary concern in our design of EmotiSense, ensuring that we design a product that is both energy-efficient and sustainable is crucial. Since our system involves real-time emotion recognition, the bracelet and the Jetson need to run efficiently for extended periods without excessive energy consumption. We’re focusing on optimizing battery life to ensure that the bracelet can last for at least four hours of continuous use. Meeting this target while keeping the bracelet lightweight requires the design to be energy-efficient throughout.

Additionally, a primary concern is ensuring that our machine-learning model does not use excessive energy per prediction. By keeping the model lightweight and running it on an efficient machine, the Nvidia Jetson, we minimize lengthy computation.

Project Risks and Management: A new challenge for EmotiSense that we have identified is ensuring the emotion detection is accurate without draining the battery quickly. We’re tackling this by fine-tuning our model to be more efficient and focusing on battery management. If we find the system uses too much power, we’ll switch to more efficient data protocols as a backup plan.

Design Changes: After receiving some insightful feedback, we simplified the emotion recognition model and tweaked the hardware to enhance system response and conserve power. We did this by buying two Neopixel options for our bracelet, so we can test the power consumption and decide which display works best for our project. These adjustments have slightly shifted our design, but the key components are the same as in our report.

Updated Schedule: Website deployment will be handled this week. Testing and enhancement of the model has begun, interleaved with other tasks.

Progress Highlights: We’ve successfully incorporated the Haar-Cascade model for facial recognition, which has significantly lightened the load on our system. Early tests show that we’re achieving over 65% accuracy with the FER-2013 dataset, which is a great start and puts us ahead of our schedule. We’ve also made significant improvements to the web app’s interface, enhancing its responsiveness and user interaction for real-time feedback. We have also received our parts for the bracelet and are beginning work on the physical implementation.

Kapil’s Status Report for 10/20

1. What did you personally accomplish this week on the project?

  • Last week, I completed my initial circuit design and sent it to three faculty members and one TA for feedback. After receiving extensive feedback, I made several modifications to the design, ensuring that all components can draw sufficient current to operate properly.

  • I also created a more visual version of the design to clearly explain how the different components will communicate with each other, making it easier to understand for both the team and external reviewers.
  • After finalizing the design, I placed the order for all necessary parts, and the TA approved the order form.
  • In addition, we explored the haptic feedback system, considering whether it will correspond to specific emotions (e.g., vibration for happiness vs. sadness) or whether it will reflect the confidence/strength of the detected emotion.

2. Is your progress on schedule or behind?

  • My progress is on schedule. The circuit design has been finalized and approved, and the parts have been ordered. I’m currently awaiting their arrival to begin assembly and testing.

3. What deliverables do you hope to complete in the next week?

  • Receive the ordered parts and begin initial assembly and testing of the circuit.
  • Finalize the decision on how the haptic feedback will function—whether it will correspond to specific emotions or reflect the confidence level of the emotion detected.
  • Begin prototyping the 3D printed enclosure for the bracelet, ensuring it fits all the necessary components.


Mason’s Status Report for 10/20

This week, I focused on significant front-end enhancements for the EmotiSense web app. These updates improve user interaction and provide better feedback on emotion recognition results. Specific updates are:

  • Added individual views for each of the six emotions (happiness, sadness, surprise, anger, fear, and neutral), allowing users to see a dynamic visual representation for each detected emotion.
  • Implemented a display for the confidence score alongside the emotion, which helps users gauge the accuracy of the detection.
  • Created logic to track and display the time elapsed from the last emotion transition, offering insight into how quickly emotions change during a session.
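
For illustration, the transition-timing logic could look something like the following sketch (hypothetical, not the actual implementation):

```python
# Hypothetical illustration of the transition-timing logic described
# above -- not the app's actual implementation.
import time

class EmotionTimer:
    """Tracks how long the currently detected emotion has persisted."""

    def __init__(self):
        self.current = None
        self.since = time.monotonic()

    def update(self, emotion: str) -> float:
        """Record a detection; return seconds since the last transition."""
        if emotion != self.current:
            self.current = emotion
            self.since = time.monotonic()
        return time.monotonic() - self.since
```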

Additionally, I spent time researching how to make API calls directly from the Nvidia Jetson Xavier AGX. This research focused on how the Jetson communicates with cloud services and how it handles real-time emotion data processing efficiently. Key learnings included optimizing the use of TCP/IP protocols and managing data transmission with minimal latency.
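
A minimal sketch of what a TCP send from the Jetson could look like; the host, port, and newline-delimited JSON framing are all assumptions:

```python
# Hypothetical sketch of a minimal TCP send from the Jetson -- host,
# port, and newline-delimited JSON framing are assumptions.
import json
import socket

def send_detection(emotion: str, confidence: float,
                   host: str = "192.0.2.10", port: int = 9000) -> None:
    msg = json.dumps({"emotion": emotion, "confidence": confidence}) + "\n"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg.encode("utf-8"))
```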

Is your progress on schedule or behind?

My progress is on schedule. The front-end enhancements have been successfully implemented, and I have made significant progress in understanding how to integrate the Nvidia Jetson for real-time data transmission.

What deliverables do you hope to complete in the next week?

  • Finalize API integration design to enable real-time data processing between the Nvidia Jetson Xavier AGX and the cloud-based web app.
  • Conduct latency testing to ensure the system can handle real-time emotion data efficiently.
  • Further enhance the user interface with responsive design elements and user feedback on system performance.

Kapil’s Status Report for 10/5

1. What did you personally accomplish this week on the project?

  • This week, I prepared and presented the design presentation to the team and class, where I detailed the bracelet design and its components.
  • Based on feedback from the presentation, I made changes to the bracelet’s haptic feedback system. Initially, we planned for binary (on/off) feedback, but I adjusted the design to include a microcontroller that allows the vibration intensity to ramp up and down, offering more dynamic feedback based on different emotional states (a rough ramp sketch follows this list).
  • I finalized the exact Adafruit Feather model (Assembled Adafruit HUZZAH32 – ESP32 Feather Board – with Stacking Headers)  that we’ll use, ensuring it has the required functionality for our project.
  • I sent my final circuit design to the group TA and two faculty members for review before ordering the necessary components.
  • After further consideration of the design, I concluded that we do not need to create a PCB for this phase of the project. Instead, we will 3D print an enclosure and solder the components together to demonstrate the project effectively. My design inspiration came from researching other similar projects. The most relevant is the following:
    • https://learn.adafruit.com/ble-vibration-bracelet
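
A rough sketch of the vibration ramp mentioned above, assuming CircuitPython’s pwmio; the motor pin and timing are assumptions:

```python
# Hypothetical CircuitPython sketch of ramping vibration intensity up
# and down with PWM -- the motor pin and timing are assumptions.
import time
import board
import pwmio

motor = pwmio.PWMOut(board.D9, frequency=5000)

def ramp(steps: int = 20, dwell: float = 0.05) -> None:
    """Ramp intensity from off to full and back down."""
    for duty in list(range(0, steps + 1)) + list(range(steps, -1, -1)):
        motor.duty_cycle = int(65535 * duty / steps)  # 16-bit duty cycle
        time.sleep(dwell)

ramp()
```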

2. Is your progress on schedule or behind?

  • My progress is on schedule. I have made significant design adjustments based on feedback, finalized the necessary components, and completed the design review process.

3. What deliverables do you hope to complete in the next week?

  • I plan to place the order for the components after receiving feedback from the TA and faculty.
  • Begin working on the 3D printed enclosure for the bracelet.

Noah’s Status Report for 10/5

My goals before the October break are to make major strides on the feature recognition and image preprocessing components of the project. Having changed our project goals to build our model from scratch, I am putting additional effort into making sure our facial recognition component is fully fleshed out. Here are some of the things I did this week toward that goal:

  • Created a facial recognition architecture that runs quickly
    • Draws a bounding box around the face of the conversational partner
    • Currently exploring isolating the eyebrows, lips, and eyes of the partner, as these are seen as the most indicative of emotion
      • I will likely try both this programmed-in feature extraction and letting the model learn these patterns by itself.
  • Connected the camera to my PC, which, with the help of OpenCV, enabled me to process video in real time (see the sketch after this list)
    • This allowed an initial attempt at drawing the bounding box, although it was not very reliable; I believe my image preprocessing mishandles the size and resolution of the images
      • Fixing this is my next goal for early this week
  • Looked into both the RFB-320 facial recognition model and the VGG13 emotion recognition model
    • I believe I will be creating variants of these models
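
For reference, a minimal sketch of the real-time bounding-box step using OpenCV’s bundled Haar cascade; the camera index and detection parameters are assumptions:

```python
# Minimal sketch of real-time face bounding boxes with OpenCV's
# bundled Haar cascade; camera index and parameters are assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a green box around each detected face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```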

Goals for next week:

  • Potentially submit an order for a more reliable Logitech camera, as the current one has been giving me issues
  • Keep working on the facial recognition and preprocessing to ensure the emotion recognition model can be built properly on top of them