Kapil’s Status Report for 11/2

1. What did you personally accomplish this week on the project?

  • This week, I assembled the preliminary circuit for the bracelet, bringing the main components together to start testing them as a system. I got the haptic feedback motor operating with several distinct vibration modes, confirming that it can provide the varied feedback the design requires (a minimal sketch of the mode-switching code appears after this list).

  • During testing, I identified an issue with the NeoPixel’s power supply. The NeoPixel runs on 5V and expects its data line to be near that level, but the Feather’s logic pins output only 3.3V. Although we initially thought this voltage difference wouldn’t affect performance, I observed unexpected behavior in the NeoPixel’s operation. To address this, I ordered a part specifically designed to handle the voltage mismatch (linked below), which should give the NeoPixel consistent behavior.

https://www.adafruit.com/product/2945

  • While waiting for this part, I’m preparing to start working on the 3D printed enclosure, finalizing the design so it will be ready for printing. I also plan to begin establishing the communication protocols between the Adafruit Feather and the Jetson, which will be crucial for our system’s data flow.
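
Here is the sketch referenced above: a minimal, CircuitPython-style example of the kind of mode switching I tested. The pin assignment, PWM frequency, and duty-cycle values are illustrative assumptions, not our final settings.

```python
# Illustrative sketch (not our final firmware): drive the vibration motor
# through a transistor on one PWM-capable Feather pin and switch between
# two feedback modes. Pin choice and timing values are assumptions.
import time
import board
import pwmio

motor = pwmio.PWMOut(board.D12, frequency=5000, duty_cycle=0)  # hypothetical pin

def pulse(strength, on_s, off_s, repeats):
    """Buzz at a given strength (0.0 to 1.0) in an on/off pattern."""
    for _ in range(repeats):
        motor.duty_cycle = int(strength * 65535)
        time.sleep(on_s)
        motor.duty_cycle = 0
        time.sleep(off_s)

def ramp(peak, duration_s, steps=50):
    """Ramp intensity up to a peak and back down for smoother feedback."""
    for i in list(range(steps + 1)) + list(range(steps, -1, -1)):
        motor.duty_cycle = int(peak * 65535 * i / steps)
        time.sleep(duration_s / (2 * steps))

pulse(0.8, 0.15, 0.1, 2)  # short double-buzz
ramp(1.0, 1.5)            # slow swell and fade
```

Distinct patterns like these are what will let the bracelet map different emotions (or confidence levels) to different feelings on the wrist.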

2. Is your progress on schedule or behind?

  • My progress is on schedule, despite the NeoPixel power issue. The haptic motor testing was successful, and I’ve already ordered the part needed to resolve the NeoPixel issue. I’ll stay productive by focusing on the enclosure design and the Feather-Jetson communication setup while waiting for the part to arrive.

3. What deliverables do you hope to complete in the next week?

  • Next week, I plan to finalize the 3D printed enclosure design and begin printing, ensuring the bracelet housing is ready for the final assembly.
  • I’ll also aim to complete the initial setup for communication between the Feather and the Jetson, which will allow data transmission and control commands to be integrated into the bracelet (one candidate serial scheme is sketched after this list).
  • Once the new part arrives, I’ll revisit the NeoPixel setup to confirm that the circuit operates smoothly with the correct voltage levels.
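
Nothing about the Feather-Jetson link is finalized, but here is one candidate scheme, sketched from the Jetson side: a plain USB serial line where each message is a newline-terminated label/confidence pair. The port name, baud rate, message format, and the use of pyserial are all assumptions at this point.

```python
# Hypothetical Jetson-side sketch: push each detected emotion and its
# confidence to the Feather over USB serial. Port, baud rate, and the
# message format are placeholders until we settle the protocol.
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def send_emotion(label: str, confidence: float) -> None:
    """Send one line like 'happy,0.87' terminated by a newline."""
    ser.write(f"{label},{confidence:.2f}\n".encode("ascii"))

send_emotion("happy", 0.87)
```

On the Feather side, the firmware would read lines and split on the comma to pick a display color and a vibration mode.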

Team Status Report for 10/26

Project Risks and Management

The most significant risk currently is achieving real-time emotion recognition accuracy on the Nvidia Jetson without overloading the hardware or excessively draining the battery. To manage this, Noah is testing different facial recognition models to strike a balance between speed/complexity and accuracy. He has begun working on a custom model based on ResNet with a few custom feature extraction layers, with the aim of optimizing performance.
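
To make the shape of that approach concrete, below is a rough PyTorch sketch: a ResNet backbone with a small custom head. The choice of resnet18, the layer sizes, and the single-channel 48x48 input (matching FER-2013) are illustrative assumptions, not Noah's exact architecture.

```python
# Rough sketch of a ResNet-based emotion classifier; layer sizes and the
# resnet18 backbone are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

class EmotionNet(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # FER-2013 images are 48x48 grayscale, so accept one input channel.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        # Small custom head on top of the backbone's features.
        self.head = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

model = EmotionNet()
print(model(torch.randn(1, 1, 48, 48)).shape)  # torch.Size([1, 7])
```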

Another risk involves ensuring reliable integration between our hardware components, particularly for haptic feedback on the bracelet. Kapil is managing this by running initial tests on a breadboard setup to ensure all components communicate smoothly with the microcontroller.

Design Changes

We’ve moved away from using the Haar-Cascade facial recognition model, opting instead for a custom ResNet-based model. This change was necessary as Haar-Cascade, while lightweight, wasn’t providing the reliability needed for consistent emotion detection. The cost here involves additional training time, but Noah has addressed this by setting up an AWS instance for faster model training.

For hardware, Kapil is experimenting with two NeoPixel configurations to optimize power consumption for the bracelet’s display. Testing both options lets us select the more efficient display with minimal impact on battery life.

Updated Schedule

Our schedule is on track, with some components, such as the website and the computer vision model, ahead of schedule.

Progress Highlights

  • Model Development: Noah has enhanced image preprocessing, improving our model’s resistance to overfitting. Preliminary testing of the ResNet-based model shows promising results for accuracy and efficiency.
  • Website Interface: Mason has made significant strides in developing an intuitive layout with interactive features.
  • Hardware Setup: Kapil received all necessary hardware components and is now running integration tests on the breadboard. He’s also drafting a 3D enclosure design, ensuring the secure placement of components for final assembly.

Photos and Updates

Adafruit code for individual components:

Training of the new facial recognition model based on ResNet:

Website Initial Designs:

Noah’s Status Report for 10/26

Realistically, this was a very busy week for me, which meant that I didn’t make much progress on the ML component of our project. Knowing that I wouldn’t have much time this past week, I front-loaded a lot of work during the fall break, so I am still ahead of schedule. These are some of the minor things I did:

  • Significantly improved image preprocessing with more transformations, which have kept our model from overfitting (a sketch of this kind of augmentation pipeline appears after this list).
  • Testing a transition away from the Haar-Cascade facial recognition model.
    • I realized that, while lightweight, this model is not very accurate or reliable.
    • I have been working on creating our own model using ResNet, along with multiple components that I have built on top of it.
  • Set up AWS instances to train our model much more quickly and efficiently.
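
Here is the sketch referenced above: an illustrative torchvision pipeline of the kind described. The specific transforms and parameters are examples, not the exact set in use.

```python
# Example augmentation pipeline for 48x48 FER-2013 faces; the transform
# choices and parameters here are illustrative.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # FER-2013 is grayscale
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(10),
    transforms.RandomResizedCrop(48, scale=(0.8, 1.0)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
```

Random flips, small rotations, and crops give the model a slightly different view of each face every epoch, which is what holds the overfitting back.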

I am still ahead of schedule, given that we have a usable emotion recognition model well before the deadline.

Goals for next week:

  • Continue to tune the model’s hyperparameters to obtain the best accuracy possible.
  • Download the model to my local machine, which should allow me to integrate it with OpenCV and test the recognition capabilities in real time.
    • Then, download our model onto the Nvidia Jetson to ensure that it runs in real time as quickly as we need.
  • After this, I want to experiment with some other facial recognition models and bounding boxes that might make our system more versatile.

 

Kapil’s Status Report for 10/26

1. What did you personally accomplish this week on the project?

  • Received all of the hardware components needed for the project. With the parts now in hand, I started integrating them to verify that they function as intended.
  • Participated in two team meetings to align our progress and refine the action items with my group members.
  • I began developing the Adafruit code to control the individual components, focusing on the setup of communication protocols between the microcontroller and the other hardware elements.

  • In parallel, I reviewed some preliminary designs for the 3D printed enclosure. My goal was to identify any structural issues early and ensure that the design can securely house all components while allowing sufficient space for wiring and connections. To facilitate this, I started exploring software that would allow me to edit and customize the enclosure design. This involved downloading and experimenting with CAD tools to see which ones offer the most flexibility for making the necessary design adjustments.

2. Is your progress on schedule or behind?

  • My progress is on schedule. Receiving all the components and beginning the coding process was a critical milestone, and I’m pleased to have reached it without any delays.

3. What deliverables do you hope to complete in the next week?

  • By next week, I want a working circuit on a breadboard with the different Adafruit components. This will involve conducting integration tests to confirm that each part works seamlessly with the microcontroller and making adjustments based on the test results.
  • I also aim to complete a final version of the 3D printed enclosure design in 3 weeks.

Noah’s Status Report for 10/20

These past two weeks, I focused heavily on advancing the facial recognition and emotion detection components of the project. Given that we decided to create our own emotion detection model, I wanted to get a head start on this task to ensure that we can reach accuracy numbers high enough for our project to be successful. My primary accomplishments over these two weeks included:

  • Leveraged the Haar-Cascade facial recognition model to act as a lightweight solution to detect faces in real-time
    • Integrated it with OpenCV to allow for real-time processing, with emotion recognition to follow in the near future
  • Created the first iterations of the emotion detection model
    • Started testing with the FER-2013 dataset to classify emotions based on facial features.
    • Created a training loop using PyTorch, learning about and implementing features like cyclic learning rates and gradient clipping to stabilize training and prevent overfitting (a simplified sketch of such a loop appears after this list).
      • The model is starting to show improvements, reaching a test accuracy of over 65%. This is already within the acceptable range for our project; however, I think we have enough time to improve the model to 70%.
      • The model is still fairly lightweight, using five convolutional layers; however, I am considering simplifying it a bit further to keep it very lightweight.
  • Significantly improved image preprocessing with various transformations which have kept our model from overfitting.
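
Here is the simplified sketch referenced above. The hyperparameters are placeholders, not the values that produced the 65% test accuracy.

```python
# Simplified PyTorch training loop with a cyclic learning rate and
# gradient clipping; hyperparameters are placeholders.
import torch
import torch.nn as nn

def train(model, loader, epochs=30, device="cuda"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CyclicLR(
        optimizer, base_lr=1e-4, max_lr=0.01,
        step_size_up=len(loader) * 2)
    for _ in range(epochs):
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            # Clip gradients so occasional large updates cannot
            # destabilize training.
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
            scheduler.step()  # cyclic LR advances every batch
```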

I am now ahead of schedule, given that we have a usable emotion recognition model well before the deadline.

Goals for next week:

  • Continue to tune the model’s hyperparameters to obtain the best accuracy possible.
  • Download the model to my local machine, which should allow me to integrate it with OpenCV and test the recognition capabilities in real time.
    • Then, download our model onto the Nvidia Jetson to ensure that it runs in real time as quickly as we need.
  • After this, I want to experiment with some other facial recognition models and bounding boxes that might make our system more versatile.

 

Team Status Report for 10/20

Part A: Global Factors by Mason

The EmotiSense bracelet is developed to help neurodivergent individuals, especially those with autism, better recognize and respond to emotional signals. Autism affects people across the globe, creating challenges in social interactions that are not bound by geographic or cultural lines. The bracelet’s real-time feedback, through simple visual and haptic signals, supports users in understanding the emotions of those they interact with. This tool is particularly valuable because it translates complex emotional cues into clear, intuitive signals, making social interactions more accessible for users.

EmotiSense is designed with global accessibility in mind. It uses components like the Adafruit Feather microcontroller and programming environments such as Python and OpenCV, which are globally available and widely supported. This ensures that the technology can be implemented and maintained anywhere in the world, including in places with limited access to specialized educational resources or psychological support. By improving emotional communication, EmotiSense aims to enhance everyday social interactions for neurodivergent individuals, fostering greater inclusion and improving life quality across diverse communities.

Part B: Cultural Factors by Kapil

Across cultures, emotions are expressed and, more importantly, interpreted in different ways. For instance, in some cultures emotional expressions are more subdued, while in others they are more pronounced. Despite this, we want EmotiSense to recognize emotions without introducing biases based on cultural differences. To achieve this, we are designing the machine learning model to focus on universal emotional cues rather than culture-specific markers.

One approach we are taking to keep the model from learning differences in emotional expression across cultures or races is converting RGB images to grayscale. This removes the most direct channel through which the model could pick up skin tone or other race-related features, which could introduce unintended biases in emotion recognition. By focusing purely on the structural and movement-based aspects of facial expressions, EmotiSense remains a culturally neutral tool that enhances emotional understanding while avoiding the reinforcement of stereotypes or cultural biases.

Another way to prevent cultural or racial biases in the EmotiSense emotion recognition model is by ensuring that the training dataset is diverse and well-balanced across different cultural, ethnic, and racial groups. This way we can reduce the likelihood that the model will learn or favor emotional expressions from a specific group.

Part C: Environmental Factors by Noah

While environmental factors aren’t a primary concern in our design of EmotiSense, designing a product that is both energy-efficient and sustainable is still important. Since our system involves real-time emotion recognition, the bracelet and the Jetson need to run efficiently for extended periods without excessive energy consumption. We’re focusing on optimizing battery life to ensure that the bracelet can last for at least four hours of continuous use. Because a lightweight bracelet can only carry a small battery, energy efficiency is a built-in design requirement.
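
As a rough sanity check on the four-hour target (the numbers here are assumptions for illustration, not measurements): with a small 500 mAh LiPo and an average draw of about 120 mA, say roughly 50 mA for the ESP32 Feather, 40 mA for a dimmed NeoPixel, and a low-duty-cycle vibration motor, the runtime works out to 500 mAh / 120 mA ≈ 4.2 hours. This suggests the target is reachable if we keep the average draw near that level.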

Additionally, a primary concern is ensuring that our machine-learning model does not use excessive energy for its predictions. By keeping the model lightweight and running it on a power-efficient device, the Nvidia Jetson, we minimize the energy spent on lengthy computations.

Project Risks and Management: A new challenge for EmotiSense that we have identified is ensuring the emotion detection is accurate without draining the battery quickly. We’re tackling this by fine-tuning our model to be more efficient and focusing on battery management. If we find the system uses too much power, we’ll switch to more efficient data protocols as a backup plan.

Design Changes: After receiving some insightful feedback, we simplified the emotion recognition model and tweaked the hardware to improve system response and conserve power. We did this by buying two NeoPixel options for our bracelet, so we can test the power consumption and decide which display works best for our project. These adjustments have slightly shifted our design, but the key components remain the same as in our report.

Updated Schedule: Website deployment will be handled this week. Testing and enhancement of the model has begun, interleaved with other tasks.

Progress Highlights: We’ve successfully incorporated the Haar-Cascade model for facial recognition, which has significantly lightened the load on our system. Early tests show that we’re achieving over 65% accuracy with the FER-2013 dataset, which is a great start and puts us ahead of our schedule. We’ve also made significant improvements to the web app’s interface, enhancing its responsiveness and user interaction for real-time feedback. We have also received our parts for the bracelet and are beginning work on the physical implementation.

Kapil’s Status Report for 10/20

1. What did you personally accomplish this week on the project?

  • Last week, I completed my initial circuit design and sent it to three faculty members and one TA for feedback. After receiving extensive feedback, I made several modifications to the design, ensuring that every component receives enough current to operate properly.

  • I also created a more visual version of the design to clearly explain how the different components will communicate with each other, making it easier to understand for both the team and external reviewers.
  • After finalizing the design, I placed the order for all necessary parts, and the TA approved the order form.
  • In addition, we explored the haptic feedback system, considering whether it will correspond to specific emotions (e.g., vibration for happiness vs. sadness) or whether it will reflect the confidence/strength of the detected emotion.

2. Is your progress on schedule or behind?

  • My progress is on schedule. The circuit design has been finalized and approved, and the parts have been ordered. I’m currently awaiting their arrival to begin assembly and testing.

3. What deliverables do you hope to complete in the next week?

  • Receive the ordered parts and begin initial assembly and testing of the circuit.
  • Finalize the decision on how the haptic feedback will function—whether it will correspond to specific emotions or reflect the confidence level of the emotion detected.
  • Begin prototyping the 3D printed enclosure for the bracelet, ensuring it fits all the necessary components.

 

Kapil’s Status Report for 10/5

1. What did you personally accomplish this week on the project?

  • This week, I prepared and presented the design presentation to the team and class, where I detailed the bracelet design and its components.
  • Based on feedback from the presentation, I made changes to the bracelet’s haptic feedback system. Initially, we planned for binary (on/off) feedback, but I adjusted the design to include a microcontroller that allows the vibration intensity to ramp up and down, offering more dynamic feedback based on different emotional states.
  • I finalized the exact Adafruit Feather model (Assembled Adafruit HUZZAH32 – ESP32 Feather Board – with Stacking Headers) that we’ll use, ensuring it has the required functionality for our project.
  • I sent my final circuit design to the group TA and two faculty members for review before ordering the necessary components.
  • After further consideration of the design, I concluded that we do not need to create a PCB for this phase of the project. Instead, we will 3D print an enclosure and solder the components together to demonstrate the project effectively. My design inspiration came from researching other similar projects. The most relevant is the following:
    • https://learn.adafruit.com/ble-vibration-bracelet

2. Is your progress on schedule or behind?

  • My progress is on schedule. I have made significant design adjustments based on feedback, finalized the necessary components, and completed the design review process.

3. What deliverables do you hope to complete in the next week?

  • I plan to place the order for the components after receiving feedback from the TA and faculty.
  • Begin working on the 3D printed enclosure for the bracelet.

Noah’s Status Report for 10/5

My goals before the October break are to make major strides on the feature recognition and image preprocessing components of the project. Having changed our project goals to build our model from scratch, I am putting additional effort into making sure our facial recognition component is fully fleshed out. Here are some of the things I did this week toward that goal:

  • Created a facial recognition architecture designed to run quickly
    • Draws a bounding box around the face of the conversational partner
    • Currently working on isolating the eyebrows, lips, and eyes of the partner, as these are seen to be the most indicative of emotion
      • I will likely try both this programmed-in feature recognition and getting the model to recognize these patterns by itself.
  • Connected the camera to my PC, which, with the help of OpenCV, enabled me to process video in real time (a minimal capture-and-detect sketch appears after this list)
    • This allowed an initial attempt at drawing the bounding box, although it was not very reliable; I believe my image preprocessing is mishandling the size and resolution of the images
      • This is my next goal for early this week
  • Looked into both the RFB-320 facial recognition model and the VGG13 emotion recognition model
    • I believe I will be creating variants of these models
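
Here is the minimal capture-and-detect sketch referenced above; the cascade file, scale factor, and min-neighbors values are typical OpenCV defaults rather than tuned settings.

```python
# Minimal real-time face detection loop with OpenCV's Haar cascade;
# parameter values are common defaults, not tuned for our setup.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection quality depends heavily on frame size and resolution,
    # which is where I suspect my preprocessing is going wrong.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3,
                                                 minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```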

Goals for next week:

  • Potentially submit an order for a more reliable Logitech camera, as the current one has been giving me issues
  • Keep working on the facial recognition and preprocessing to ensure the emotion recognition model can grow properly on top of that

Team Status Report for 10/5

1. What are the most significant risks that could jeopardize the success of the project?

  • Facial Recognition and Image Preprocessing: Noah is working on developing the facial recognition and image preprocessing components from scratch, which presents a risk if model performance doesn’t meet expectations or if integration issues arise with the Jetson or other hardware. However, Noah is mitigating this by researching existing models (RFB-320 and VGG13) and refining preprocessing techniques.
  • Website Deployment and User Experience Testing: Mason’s deployment of the website is slightly delayed due to a busy week. If this stretches out, it could impact our ability to conduct timely user experience testing, but Mason plans to have it completed before the break to refocus on the UI and Jetson integration.
  • Component Delivery Delays: There is a risk that delays in ordering and receiving key hardware components (such as the microcontroller and potentially a new camera for Noah’s work) could affect the project timeline. We are mitigating this by ensuring orders are placed promptly and making use of simulation tools or placeholder hardware in the meantime.

2. Were any changes made to the existing design of the system?

  • Haptic Feedback System: Based on feedback from our design presentation, we decided to move away from binary on/off vibration feedback in the bracelet. Instead, the haptic feedback will now dynamically adjust its intensity. This required the addition of a microcontroller, but it improves the overall functionality of the bracelet.
  • Facial Recognition Model: Noah is creating a custom facial recognition model instead of using a pre-built model, as our project goals shifted to developing this from scratch. This adjustment will give us more flexibility and control over the system’s performance, but also adds additional development time.
  • Website User System and Database: Mason has made progress on the user system and basic UI elements but is slightly behind due to other commitments. No structural changes have been made to the overall website design, and deployment is still on track.

3. Provide an updated schedule if changes have occurred:

  • Bracelet Component and PCB Changes: The decision to remove the PCB from the bracelet and instead use a 3D printed enclosure has been made. This simplifies the next steps and focuses more on the mechanical assembly of the bracelet.
  • Website Deployment: Mason’s deployment of the web app is scheduled to be completed before the October break, and the UI/Jetson configuration work will continue afterward.

Photos and Documentation:

  • The team is awaiting final feedback from TA and faculty on the circuit design for the bracelet before ordering components.
  • Noah’s bounding box and preprocessing work on facial recognition will need further refinement, but initial results are available for review.