Team Status Report for 10/20

Part A: Global Factors by Mason

The EmotiSense bracelet is designed to help neurodivergent individuals, especially those with autism, better recognize and respond to emotional signals. Autism affects people across the globe, creating challenges in social interaction that are not bound by geographic or cultural lines. The bracelet’s real-time feedback, delivered through simple visual and haptic signals, supports users in understanding the emotions of those they interact with. The tool is particularly valuable because it translates complex emotional cues into clear, intuitive signals, making social interactions more accessible.

EmotiSense is designed with global accessibility in mind. It uses components like the Adafruit Feather microcontroller and widely available, well-supported software such as Python and OpenCV. This ensures that the technology can be implemented and maintained anywhere in the world, including places with limited access to specialized educational resources or psychological support. By improving emotional communication, EmotiSense aims to enhance everyday social interactions for neurodivergent individuals, fostering greater inclusion and improving quality of life across diverse communities.

Part B: Cultural Factors by Kapil

Across cultures, emotions are expressed, and more importantly interpreted, in different ways. For instance, in some cultures emotional expressions are more subdued, while in others they are more pronounced. Despite this, we want EmotiSense to recognize emotions without introducing biases based on cultural differences. To achieve this, we are designing the machine learning model to focus on universal emotional cues rather than culture-specific markers.

One approach we are taking to keep the model from learning differences in emotional expression across cultures or races is converting RGB images to grayscale. Removing color information reduces the model’s ability to pick up on skin tone or other race-related features, which could otherwise introduce unintended biases into emotion recognition. By focusing on the structural and movement-based aspects of facial expressions, EmotiSense remains a culturally neutral tool that enhances emotional understanding without reinforcing stereotypes or cultural biases.
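
A minimal sketch of this conversion using OpenCV (the file name is illustrative; in the live system the frame would come from the camera feed):

```python
import cv2

# Load a frame; in the live system this would come from the camera feed.
frame = cv2.imread("face.jpg")

# Collapse the three color channels into a single luminance channel so the
# model never receives color information such as skin tone.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```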

Another way to prevent cultural or racial biases in the EmotiSense emotion recognition model is to ensure that the training dataset is diverse and well balanced across different cultural, ethnic, and racial groups. This reduces the likelihood that the model will learn to favor the emotional expressions of a specific group.

Part C: Environmental Factors by Noah

While environmental factors aren’t a primary concern in our design of EmotiSense, it is still important that the product be energy-efficient and sustainable. Since our system performs real-time emotion recognition, the bracelet and the Jetson need to run efficiently for extended periods without excessive energy consumption. We’re focusing on optimizing battery life so that the bracelet lasts at least four hours of continuous use; meeting that target while keeping the bracelet lightweight forces an energy-efficient design throughout.

Additionally, a primary concern is ensuring that our machine-learning model does not use excessive energy when making predictions. By running a lightweight model on an efficient platform, the Nvidia Jetson, we minimize lengthy computation and the power it consumes.
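
As a rough sketch of how we might sanity-check per-frame compute cost on the Jetson (the `mean_inference_ms` helper and the 48x48 grayscale input shape are assumptions for illustration, not our final benchmark):

```python
import time
import torch

def mean_inference_ms(model, device, n_frames=100):
    """Rough average per-frame latency for a single 48x48 grayscale input."""
    model.eval()
    x = torch.randn(1, 1, 48, 48, device=device)  # dummy input frame
    if device.type == "cuda":
        torch.cuda.synchronize()  # make sure prior GPU work is finished
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n_frames):
            model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return (time.perf_counter() - start) / n_frames * 1000
```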

Project Risks and Management: A new challenge for EmotiSense that we have identified is ensuring the emotion detection is accurate without draining the battery quickly. We’re tackling this by fine-tuning our model to be more efficient and focusing on battery management. If we find the system uses too much power, we’ll switch to more efficient data protocols as a backup plan.

Design Changes: After receiving some insightful feedback, we simplified the emotion recognition model and tweaked the hardware to enhance system response and conserve power. Specifically, we bought two NeoPixel options for our bracelet so we can test their power consumption and decide which display works best for our project. These adjustments have slightly shifted our design, but the key components are the same as in our report.

Updated Schedule: Website deployment will be handled this week. Testing and enhancement of the model has begun, interleaved with other tasks.

Progress Highlights: We’ve successfully incorporated a Haar cascade classifier for face detection, which has significantly lightened the load on our system. Early tests show over 65% accuracy on the FER-2013 dataset, a strong start that puts us ahead of schedule. We’ve also made significant improvements to the web app’s interface, enhancing its responsiveness and user interaction for real-time feedback. Finally, our bracelet parts have arrived, and we are beginning work on the physical implementation.
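
A minimal sketch of the kind of detection loop this enables, using the Haar cascade file that ships with OpenCV (the camera index, detection parameters, and window handling are illustrative):

```python
import cv2

# OpenCV bundles pretrained Haar cascade files; this is the standard frontal-face one.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam; the device index may differ
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) box per detected face.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```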

Noah’s Status Report for 10/5

My goals before the October break are to make major strides on the face detection and image preprocessing components of the project. Having changed our project goals to build our model from scratch, I am putting additional effort into making sure our face detection component is fully fleshed out. Here are some of the things I did this week toward this goal:

  • Created a face detection pipeline designed to run quickly
    • Draws a bounding box around the face of the conversational partner
    • Currently working on isolating the partner’s eyebrows, lips, and eyes, as these are considered the most indicative of emotion
      • We will likely try both this programmed-in feature extraction and letting the model learn these patterns on its own
  • Connected the camera to my PC, which, with the help of OpenCV, enabled me to process video in real time
    • This allowed an initial attempt at drawing the bounding box, though it was not very reliable; I believe my image preprocessing mishandles the size and resolution of the images (see the preprocessing sketch after this list)
      • Fixing this is my next goal for early this week
  • Looked into both the RFB-320 face detection model and the VGG13 emotion recognition model
    • I believe I will be creating variants of these models
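
Here is a sketch of the preprocessing step I have in mind (the 48x48 grayscale target matches FER-style datasets; our final input size is still an open question):

```python
import cv2
import numpy as np

def preprocess_face(frame, box, size=48):
    """Crop a detected face and normalize it for the emotion model."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]                 # crop to the bounding box
    face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)  # drop color information
    face = cv2.resize(face, (size, size), interpolation=cv2.INTER_AREA)
    face = face.astype(np.float32) / 255.0         # scale pixels to [0, 1]
    return face[np.newaxis, np.newaxis, :, :]      # shape (1, 1, 48, 48) for a CNN
```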

Goals for next week:

  • Potentially submit an order for a more reliable Logitech camera as this one has been giving me issues
  • Keep working on face detection and preprocessing to ensure the emotion recognition model can be built properly on top of them

Noah’s Status Report for 9/28

For me, this week was mainly focused on fleshing out the computer vision component of our project. Originally, we had planned to utilize an existing model, modify it, and integrate it into our system. However, after further discussion and research, we decided we would learn more by creating our own model. This will let us showcase our prowess while still achieving viable results. Here is a bulleted list of the tasks completed:

  • Conducted research on existing computer vision models
    • Noted the methods varying projects attempted and what level of success was achieved with each
    • Chose to build a convolutional neural network in PyTorch (see the sketch after this list)
      • This will allow us to run our model with CUDA on the Jetson so that processing happens in real time
  • Created the final design presentation
    • Conducted research on existing facial recognition models to understand the limits of our project in terms of latency and accuracy
    • Set an accuracy goal of 70-75%
  • Led a discussion with my teammates regarding my model choices, allowing for feedback
  • Updated my part of the schedule to reflect the start of my work on our computer vision component
  • Attended 2 team meetings to work together with teammates
  • Found 2 existing datasets that we might use for our machine-learning model
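
For illustration, here is a minimal sketch of the kind of CNN we have in mind (the layer sizes, the seven emotion classes, and the 48x48 grayscale input are assumptions, not final design choices):

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Small CNN for 48x48 grayscale face crops with 7 emotion classes."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Linear(128 * 6 * 6, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# CUDA is used when available, e.g. on the Jetson.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = EmotionCNN().to(device)
```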

I’d say that we are on schedule.

Goals for next week:

  • Submit orders for our hardware components to ensure they come on time
  • Get more feedback from our peers during the design presentation and make reasonable changes if any problems are noticed

Team Status Report for 9/28

1. What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk at this stage is developing an emotion recognition model that is accurate enough to be useful. Top-of-the-line models are currently nearing an 80% accuracy rate, which is still fairly low. Coming within reach of this metric will be crucial to ensuring our use cases are met, and the concern has grown now that we will be making our own custom model. This risk is being managed by keeping backup, pre-existing models whose predictions can be averaged with our model’s in case our base accuracy is too low.
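
A rough sketch of that fallback, averaging class probabilities across models (the `ensemble_predict` helper and equal weighting are hypothetical; real weights would be tuned on validation data):

```python
import torch

def ensemble_predict(models, x):
    """Average the softmax outputs of several models and pick the top class."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
```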

Additionally, user testing has become a concern, as we do not want to trigger the need for IRB review. This prevents us from doing user testing targeted at autistic individuals; however, we can still conduct tests with the general population.

2. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The main change was cited above: we will be making our own computer vision and emotion recognition models. The goal is primarily to demonstrate our prowess in computer vision rather than leaning too heavily on existing components. This constrains Noah’s time somewhat; however, Mason will be picking up the camera and website integration so that Noah can focus fully on the model.

3. Provide an updated schedule if changes have occurred.

No schedule changes at this time.

4. This is also the place to put some photos of your progress or to brag about the component you got working.

The physical updates are included in our design proposal.

Additionally, here is the dataset we will use to train the first version of the computer vision model (https://www.kaggle.com/datasets/jonathanoheix/face-expression-recognition-dataset).
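
Assuming the dataset keeps its per-emotion subfolder layout, loading it might look roughly like this with torchvision (paths and batch size are illustrative):

```python
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

# Grayscale + 48x48 mirrors the preprocessing discussed elsewhere in this report.
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((48, 48)),
    transforms.ToTensor(),  # scales pixel values to [0, 1]
])
train_set = torchvision.datasets.ImageFolder("images/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
```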

Part A (Noah Champagne):

Because the intended audience of EmotiSense is a particularly marginalized group of people, it is our utmost concern that the product deliver true and accurate readings of emotions. We believe EmotiSense can greatly increase neurodivergent people’s ability to engage socially, and we want to help facilitate that benefit to the best of our abilities. At the same time, we care deeply about the safety of our users and have duly considered that a false reading of a situation could discourage them or bring about harm. As such, we have designed the system to offer an emotional reading only once a very high confidence level has been reached, helping to protect the wellbeing of those who use the product.
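
A minimal sketch of such a confidence gate (the 0.9 cutoff is purely illustrative; the real threshold would be tuned during testing):

```python
import torch

CONFIDENCE_THRESHOLD = 0.9  # illustrative value, to be tuned empirically

def confident_emotion(logits):
    """Return the predicted emotion index only when confidence is high enough."""
    probs = torch.softmax(logits, dim=-1)
    conf, label = probs.max(dim=-1)
    return label.item() if conf.item() >= CONFIDENCE_THRESHOLD else None
```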

Additionally, our product serves to increase the welfare of a marginalized group that faces greater challenges than the general population. Providing this product helps close the gap in conversational understanding between those with neurodivergence and those without, giving this group more equity in terms of welfare.

Part B (Kapil Krishna):

EmotiSense is highly sensitive to the needs of neurodivergent communities. We want to emphasize the importance of being sensitive to the ways neurodivergent communities interact and support one another. The design of EmotiSense aims to respect this fact, simply enhancing self-awareness and social interaction rather than combating neurodivergence itself. Additionally, EmotiSense aims to support those who interact with neurodivergent individuals by providing a tool and strategy to facilitate better communication without compromising autonomy. Lastly, we observe that there has been increasing advocacy for technology that empowers people with disabilities; EmotiSense aims to align with this trend by reducing social barriers and creating more inclusivity and awareness. With regard to economic considerations, we recognize that affordability and availability of the device are key to creating the greatest possible impact.

Part C (Mason Stark):

Although our primary aims fit better into the previous two categories, EmotiSense can still serve significant economic use cases. One such use case is customer satisfaction. Many businesses try to gather customer satisfaction data by polling customers online, or even with physical polling systems (think of those boxes with frown and smile faces in airport bathrooms). However, these systems are often underutilized and subject to significant sampling biases. Capturing customers’ emotions in real time could help businesses uncover more accurate satisfaction data, which they can then leverage to improve their operations and profitability. Specific businesses where EmotiSense could be deployed include order-at-the-counter restaurants, banks, and customer service desks.

Mason’s Status Report for 9/21

The team’s focus this week was primarily delivering the presentation. We also made significant decisions about deadlines and goals for the coming weeks. Here’s some of what I worked on:

  • Making the proposal presentation and preparing the delivery
    • Researched industry standards for latency and speed with REST APIs
    • Researched wired communication speeds
    • Identified model accuracy guidance
  • Met up with and brainstormed together with teammates
  • Built on TA and instructor feedback
  • Gave detailed feedback on other presentations
  • Researched AJAX implementations

Upcoming goals for the week:

  • Finalize and order any hardware needed
  • Work on website, get early version running locally
  • Get more TA and professor feedback during the week
  • Identify our wired communication protocol from camera to Jetson

Kapil’s Status Report for 9/21

Progress

  • This week, I began the preliminary design for the PCB, focusing on the use-case requirements and which hardware best meets them. I also narrowed down how the individual components of the embedded device will communicate with each other, and I communicated budget requirements with the team to ensure we are choosing the best hardware for the application.
  • I reviewed design tools like SolidWorks to assess which would best suit our project. I also researched the pin configurations and layout considerations needed for our facial recognition system.
  • Additionally, I attended all team meetings and participated in discussions regarding hardware latency and accuracy goals.
  • I also supported the team during our proposal presentation by giving input on how hardware and design choices will impact latency and overall performance.

Schedule

  • Currently, my progress is on schedule. The PCB design phase will officially begin next week according to the Gantt chart, but I’m feeling more confident given the initial research I conducted this past week. However, I anticipate that the next phase, involving component orders and integration testing, will need close attention to stay on track.

Upcoming Deliverables

  • Finalize the component choices for the PCB.
  • Complete the initial PCB layout and design considerations.
  • Submit component orders once the design and budget are confirmed.

Team Status Report for 9/21

1. What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Primary risks include model accuracy, environment control, and speed/user experience. We plan to manage accuracy by experimenting with a few models and by using a composition of models if needed. Environment control is something we will be working on when prototyping the hardware and our physical system; we will create a protocol for keeping the environment consistent. User experience and speed will be monitored as we get into the later stages of development.

2. Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes at this moment.

3. Provide an updated schedule if changes have occurred.

No schedule changes at this time.

4. This is also the place to put some photos of your progress or to brag about a component you got working.

We don’t have many physical updates at this time, but they will certainly be included in the next report. The majority of our work this week centered on preparing for and rehearsing our presentation.

Noah’s Status Report for 9/21

This week was mainly focused on our proposal presentation, finalizing our respective roles within the project, and getting a start on the computer vision component of our project. Here is a bulleted list of the tasks completed:

  • Created the proposal presentation
    • Conducted research on existing facial recognition models to understand the limits of our project in terms of latency and accuracy
    • Finalized our hardware usage and found latency bounds on our wired and wireless connections
      • Set an accuracy goal of 70-75%
  • Gave the presentation to the class and updated our goals based on responses from the professors, TAs, and classmates
  • Finalized the schedule and division of work for the project
  • Attended 3 team meetings to work together with teammates
  • Attended proposal meetings for other teams and gave feedback
  • Found 2 existing datasets that we might use for our machine-learning model
  • Found an existing facial recognition model that we may base ours off of

I’d say that we are on schedule.

Goals for next week:

  • Submit orders for our hardware components to ensure they come on time
  • Finalize which facial recognition models we will use and what additions, modifications, and integrations will be used to make it our own
  • Get more feedback from our peers and make reasonable changes if any problems are noticed