Team Status Report for 10/20

Part A: Global Factors by Mason

The EmotiSense bracelet is designed to help neurodivergent individuals, especially those with autism, better recognize and respond to emotional signals. Autism affects people across the globe, creating challenges in social interactions that are not bound by geographic or cultural lines. The bracelet's real-time feedback, through simple visual and haptic signals, supports users in understanding the emotions of those they interact with. This tool is particularly valuable because it translates complex emotional cues into clear, intuitive signals, making social interactions more accessible for users.

EmotiSense is designed with global accessibility in mind. It uses components like the Adafruit Feather microcontroller and programming environments such as Python and OpenCV, which are globally available and widely supported. This ensures that the technology can be implemented and maintained anywhere in the world, including in places with limited access to specialized educational resources or psychological support. By improving emotional communication, EmotiSense aims to enhance everyday social interactions for neurodivergent individuals, fostering greater inclusion and improving life quality across diverse communities.

Part B: Cultural Factors by Kapil

Across cultures, emotions are expressed and, more importantly, interpreted in different ways. For instance, some cultures favor subdued emotional expression, while others favor more pronounced displays. Despite this, we want EmotiSense to recognize emotions without introducing biases based on cultural differences. To achieve this, we are designing the machine learning model to focus on universal emotional cues rather than culture-specific markers.

One approach we are taking to keep the model from learning differences in emotional expression across cultures or races is converting RGB images to grayscale. This removes color information, so the model cannot key on skin tone or other color-based, race-related features that could introduce unintended biases into emotion recognition. By focusing purely on the structural and movement-based aspects of facial expressions, EmotiSense remains a culturally neutral tool that enhances emotional understanding while avoiding the reinforcement of stereotypes or cultural biases.

Another way to prevent cultural or racial biases in the EmotiSense emotion recognition model is by ensuring that the training dataset is diverse and well-balanced across different cultural, ethnic, and racial groups. This way we can reduce the likelihood that the model will learn or favor emotional expressions from a specific group.
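One hedged sketch of what such a balance check might look like (the group labels, counts, and tolerance here are hypothetical, not our actual dataset):

```python
from collections import Counter

def flag_imbalanced_groups(group_labels, tolerance=0.2):
    """Return groups whose share of the dataset deviates from a
    uniform split by more than `tolerance`."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    expected_share = 1.0 / len(counts)
    return {group: count / total
            for group, count in counts.items()
            if abs(count / total - expected_share) > tolerance}

# Hypothetical labels: an even three-way split triggers no flags.
labels = ["group_a"] * 100 + ["group_b"] * 100 + ["group_c"] * 100
print(flag_imbalanced_groups(labels))  # {} -> no group over- or under-represented
```

A check like this could run whenever the training set changes, so any group drifting out of balance is caught before retraining.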

Part C: Environmental Factors by Noah

While environmental factors aren’t a primary concern in our design of EmotiSense, designing a product that is both energy-efficient and sustainable is still crucial. Since our system involves real-time emotion recognition, the bracelet and the Jetson need to run efficiently for extended periods without excessive energy consumption. We’re focusing on optimizing battery life so that the bracelet can last for at least four hours of continuous use. Achieving this runtime while keeping the bracelet lightweight requires an energy-efficient design throughout.

Additionally, a primary concern is ensuring that our machine-learning model does not consume excessive energy during prediction. By running a lightweight model on an efficient platform, the Nvidia Jetson, we minimize lengthy computations and the power they draw.
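One simple power-saving tactic consistent with this goal is to run inference only on every Nth camera frame rather than on all of them; the sketch below (the interval and function name are assumptions for illustration, not our final pipeline) shows the idea:

```python
def frames_to_process(total_frames: int, interval: int = 5):
    """Return the indices of frames that would actually be sent to the model."""
    return [i for i in range(total_frames) if i % interval == 0]

# At 30 fps, processing every 5th frame cuts inference load by 80%
# while still refreshing the prediction six times per second.
processed = frames_to_process(150, interval=5)
print(len(processed))  # 30 of 150 frames reach the model
```

Since facial expressions change over hundreds of milliseconds, skipping intermediate frames trades almost no responsiveness for a large reduction in Jetson compute and battery drain.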

Project Risks and Management: A new challenge for EmotiSense that we have identified is ensuring the emotion detection is accurate without draining the battery quickly. We’re tackling this by fine-tuning our model to be more efficient and focusing on battery management. If we find the system uses too much power, we’ll switch to more efficient data protocols as a backup plan.

Design Changes: After receiving some insightful feedback, we simplified the emotion recognition model and tweaked the hardware to enhance system response and conserve power. We did this by buying two Neopixel options for our bracelet, so we can test the power consumption and decide which display works best for our project. These adjustments have slightly shifted our design, but the key components are the same as in our report.

Updated Schedule: Website deployment will be handled this week. Model testing and enhancement have begun and will continue intermittently alongside other tasks.

Progress Highlights: We’ve successfully incorporated the Haar cascade classifier for face detection, which has significantly lightened the load on our system. Early tests show that we’re achieving over 65% accuracy on the FER-2013 dataset, which is a great start and puts us ahead of schedule. We’ve also made significant improvements to the web app’s interface, enhancing its responsiveness and user interaction for real-time feedback. We have also received our parts for the bracelet and are beginning work on the physical implementation.
