Mason’s Status Report for 11/30

This week, I focused on preparation for the final presentation, testing, and output control ideation. Here’s what I accomplished:

This week’s progress:

  • Testing
    • Ran API latency tests using Python test automation.
    • Found that the API speed is well within our latency requirement (in good network conditions).
    • Ran latency tests on the integrated system, capturing the time between image capture and system output.
    • Found that our overall system is also well within our latency requirement of 350 ms; in fact, it operates consistently under 200 ms.
    • Wrote the feedback form for next week’s user testing.
  • Final presentation
    • Worked on our final presentation, particularly the testing, tradeoff, and system solution sections.
    • Wrote notes and rehearsed the presentation in the lead-up to class.
  • Control and confidence-score ideation
    • To improve the efficacy of our system’s confidence score, I decided to integrate a control system.
    • I plan to use a Kalman filter to smooth the displayed system output, accounting for the noise present in the model’s predictions (a rough sketch follows this list).
    • Using the model’s output probability weights, I will produce a noise-adjusted likelihood estimate.
    • I will implement this with NumPy on the Jetson side of the system and update the API in conjunction with it.
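
To make this concrete, below is a minimal 1-D Kalman filter sketch of the kind of smoothing I have in mind; the process-noise and measurement-noise variances (q and r) are placeholder assumptions to be tuned against real system output:

```python
import numpy as np

def kalman_smooth(confidences, q=1e-3, r=5e-2):
    """Smooth a stream of raw confidence scores with a 1-D Kalman filter.

    q: assumed process-noise variance (how fast the true confidence drifts).
    r: assumed measurement-noise variance (how noisy each model output is).
    """
    x = confidences[0]  # state estimate, initialized to the first reading
    p = 1.0             # variance of the estimate
    smoothed = []
    for z in confidences:
        p += q           # predict: confidence drifts slowly between frames
        k = p / (p + r)  # Kalman gain: how much to trust the new reading
        x += k * (z - x) # update the estimate toward the measurement
        p *= 1 - k       # shrink the estimate variance after the update
        smoothed.append(x)
    return np.array(smoothed)

# Example: noisy confidence readings fluctuating around 0.8
raw = 0.8 + 0.1 * np.random.randn(50)
print(kalman_smooth(raw)[-5:])
```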

Goals for Next Week:

  • Implement the Kalman filter on the Jetson.
  • Assist Kapil with BLE Jetson integration for the Adafruit bracelet.
  • Continue user testing and integrate feedback-driven changes into the UI.
  • Work on poster and finalize system presentation for final demo.

I’d say we’re on track for the final demo and final assignments.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

For the web app development, I’ve had to learn how to better optimize UIs for low latency, reading the jQuery docs to figure out how best to integrate AJAX to dynamically update the system output. I had to make a lightweight interface and build an API that runs fast enough for real-time feedback, for which I read the Requests (“HTTP for Humans”) docs. I also read some forum posts on fast APIs in Python and watched a couple of development videos of people building fast APIs for their own Python applications. For the system integration and Jetson configuration, I watched multiple videos on flashing the OS and read forum posts and docs from NVIDIA. I also consulted LLMs on how to handle bugs with our system integrations and communication protocols. The main strategies and resources I used were forums (Stack Overflow, NVIDIA), docs (Requests, jQuery, NVIDIA), and YouTube videos (independent content creators).
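
As a concrete example of the kind of Python test automation mentioned in the testing section above, here is a minimal latency-probe sketch using Requests; the URL and route are placeholders, not our actual endpoint:

```python
import time

import requests

# Placeholder endpoint for illustration; the real API route differs.
URL = "http://jetson.local:5000/emotion"

def measure_latency(n=20):
    """Time n GET requests and report round-trip statistics in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(URL, timeout=1.0)
        samples.append((time.perf_counter() - start) * 1000)
    print(f"avg: {sum(samples) / len(samples):.1f} ms, max: {max(samples):.1f} ms")

measure_latency()
```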

Images:

Kapil’s Status Report for 11/30

1. What did you personally accomplish this week on the project?

  • For the demo, I integrated the bracelet with the NVIDIA Jetson, ensuring that the entire setup functions as intended.
  • Inquired at TechSpark to gather more information about the 3D printing process and requirements
    • The base of the bracelet can be printed at TechSpark; however, the flexible straps require external printing.
  • Downloaded the new Bluetooth Low Energy libraries onto the Adafruit Feather
    • CircuitPython is not compatible with these libraries and requires an alternative method for pushing code. I am exploring the best approach to resolve this issue.
  • Tested and validated UART communication latency (a rough timing sketch follows this list).
    • Measured latency was approximately 30 ms, significantly below the 150 ms requirement.
  • Worked on the final presentation for the project
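
For reference, here is a minimal sketch of the kind of round-trip timing such a test involves, assuming pyserial on the Jetson, a /dev/ttyTHS1 UART device, and Feather firmware that echoes each byte back (all assumptions for illustration):

```python
import time

import serial  # pyserial

# Assumed device path and baud rate; echo firmware on the Feather is assumed.
port = serial.Serial("/dev/ttyTHS1", baudrate=115200, timeout=1.0)

samples = []
for _ in range(100):
    start = time.perf_counter()
    port.write(b"\x01")  # one-byte probe to the Feather
    port.read(1)         # block until the Feather echoes it back
    samples.append((time.perf_counter() - start) * 1000)

print(f"avg round-trip: {sum(samples) / len(samples):.2f} ms")
port.close()
```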

2. Is your progress on schedule or behind?

  • My progress is on schedule. The successful demo and latency validation ensure that the system is performing as required. I’ve also laid the groundwork for the remaining tasks, including Bluetooth integration and 3D printing.

3. What deliverables do you hope to complete in the next week?

  • Implement Bluetooth Low Energy (BLE) integration to replace UART communication, ensuring seamless wireless connectivity.
  • Begin printing the 3D enclosure, ensuring it securely houses all components while maintaining usability.

Noah’s Status Report for 11/30

Given that integration into our entire system went well, I spent the last week tidying things up and getting some user reviews. The following details some of the tasks I did this past week and over the break:

  • We decided to reduce the emotion count from 6 to 4 because surprise and disgust had much lower accuracy than the other emotions. Additionally, replicating and presenting a disgusted or surprised face is difficult, and these expressions come up rarely over the course of a typical conversation.
  • Increased the threshold required to present an emotion (see the sketch after this list).
    • Assume a neutral state if we are not confident in the detected emotion.
  • Added a function to output a new state indicating if a face is even present within the bounds.
      • Updated the website to indicate if no face is found.
      • Also helps for debugging purposes.
  • Did some more research on how to move our model to ONNX and TensorFlow, which might speed up computation and allow us to load a more complex model.
  • During the break, I spoke to a few of my autistic brother’s teachers from my high school about what they thought of the project and what could be improved.
    • Overall, they really liked and appreciated the project, especially its easy integration with the iPads already present in the school.
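
Here is a minimal sketch of the thresholding logic described above; the label set and the 0.6 cutoff are illustrative assumptions, not the tuned values:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "fearful"]  # assumed 4-emotion label set
THRESHOLD = 0.6  # placeholder cutoff; the real value was tuned empirically

def classify(probs, face_present):
    """Map softmax probabilities to a displayed state.

    Falls back to "neutral" when the model is not confident, and reports
    "no_face" when no face was detected within the frame bounds.
    """
    if not face_present:
        return "no_face"
    probs = np.asarray(probs)
    best = int(np.argmax(probs))
    return EMOTIONS[best] if probs[best] >= THRESHOLD else "neutral"

print(classify([0.3, 0.3, 0.2, 0.2], True))  # -> neutral
print(classify([0.7, 0.1, 0.1, 0.1], True))  # -> happy
```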

I am ahead of schedule, and my part of the project is done for the most part. I’ve hit MVP and will continue making slight improvements where I see fit.

Goals for next week:

  • Add averaging of the detected emotions over the past 3 seconds, which should increase the confidence in our predictions.
  • Keep looking into compiling the CV model to ONNX (an open model format we can run efficiently on the Jetson), which would lower latency and allow us to use our slightly better model.
  • Help Mason conduct some more user tests and get feedback so that we can make small changes.
  • Write the final report and get ready for the final presentation.

Additional Question: What new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?
  • The main tools I needed to learn for this project were PyTorch, CUDA, ONNX, and the Jetson.
    • I already had a lot of experience with PyTorch; however, each new model I build teaches me new things. For emotion recognition models, I watched a few YouTube videos to get a sense of how existing models work.
      • Additionally, I read a few research papers to get a sense of what top-of-the-line models are doing.
    • For CUDA and the Jetson, I relied mostly on the existing NVIDIA and Jetson documentation, as well as ChatGPT, to help me pick compatible software.
      • Public forums are also super helpful here.
    • Learning ONNX has been much more difficult, and I’ve mainly relied on the existing documentation. I imagine that ChatGPT and YouTube might be pretty helpful here going forward (a minimal export sketch follows).
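
As an illustration of the ONNX workflow, here is a minimal sketch of exporting a PyTorch model and running it with ONNX Runtime; the stand-in model and the 48x48 grayscale input shape are assumptions for illustration, not our actual network:

```python
import torch
import torch.nn as nn
import onnxruntime as ort

# Stand-in for the trained classifier; the real model loads from a checkpoint.
model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 4))
model.eval()

# Export the model graph to ONNX using a dummy input of the assumed shape.
dummy = torch.randn(1, 1, 48, 48)
torch.onnx.export(model, dummy, "emotion.onnx",
                  input_names=["input"], output_names=["logits"])

# Run the exported graph with ONNX Runtime.
session = ort.InferenceSession("emotion.onnx",
                               providers=["CPUExecutionProvider"])
logits = session.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 4)
```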

Team Status Report 11/30

  1. What are the most significant risks that could jeopardize the success of the project?

  • BLE Integration: Transitioning from UART to BLE has been challenging. While UART meets latency requirements and works well for now, our BLE implementation has revealed issues with maintaining a stable connection. Finishing this in time for the demo is critical to reaching our fully fledged system.

  • Limited User Testing: Due to time constraints, the scope of user testing has been narrower than anticipated. This limits the amount of user feedback we can incorporate before the final demo. However, we are fairly satisfied with the system given the feedback we have received so far. The bracelet will be a major focus of the remaining testing, though.
  2. Were any changes made to the existing design of the system?
  • We planned to add a Kalman filter to smooth the emotion detection output and improve the reliability of the confidence score. This will reduce the impact of noise and fluctuations in the model’s predictions. If this does not work, we will instead use a rolling average (a short sketch follows this list).
  • Updated the web interface to indicate when no face is detected, improving user experience and accuracy of the output.
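
A minimal sketch of the rolling-average fallback, assuming roughly 30 predictions per second (the window size would be tuned to match the real frame rate):

```python
from collections import deque

import numpy as np

WINDOW = 90  # ~3 seconds at an assumed 30 predictions per second
history = deque(maxlen=WINDOW)

def rolling_average(probs):
    """Blend the newest probability vector with the recent history."""
    history.append(np.asarray(probs, dtype=float))
    return np.mean(history, axis=0)

# Each frame's softmax output is smoothed against the last WINDOW frames:
print(rolling_average([0.7, 0.1, 0.1, 0.1]))
```
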
  3. Updated schedule if changes have occurred:

We remain on schedule outside of bracelet delays and have addressed challenges effectively to ensure a successful demo. Final user testing and BLE integration are prioritized for the upcoming week.

Testing results:

 

Model output from the Jetson to both the Adafruit and the web server:

Team’s Status Report 11/16

1. What are the most significant risks that could jeopardize the success of the project?

  • Jetson Latency: Once the model hit our accuracy threshold of 70%, we had planned to continue improving it as time allowed. However, upon system integration, we recognized that the Jetson cannot handle a larger model due to latency concerns. This is fine; however, it does limit the potential of our project while running on this Jetson model.
  • Campus network connectivity: We identified that the Adafruit Feather connects reliably on most WiFi networks but encounters issues on CMU’s network due to its security. We still need to tackle this issue, as we are currently relying on a wired connection between the Jetson and bracelet.

2. Were any changes made to the existing design of the system?

  • We have decided to remove surprise and disgust from the emotion detection model, as replicating a disgusted or surprised face has proven to be too much of a challenge. We had considered this at the beginning of the project, given that these emotions were the two least important; however, it was only with user testing that we recognized they are too inaccurate.

3. Provide an updated schedule if changes have occurred:

We remain on schedule and were able to overcome our challenges, leaving us ready for a successful demo.

Documentation:

  • We don’t have pictures of our full system running; however, this can be seen in our demos this week. We will include pictures next week when we have cleaned up the system and integration.

 

Model output coming from the Jetson to both the Adafruit and the web server:

Model output running on Jetson (shown via SSH):

Noah’s Status Report for 11/16

This week, I focused on getting the computer vision model running on the Jetson and integrating the webcam instead of my local computer’s native components. It went pretty well, and here is where I am now.

  • Used SSH to load my model and the facial recognition model onto the Jetson
  • Configured the Jetson with a virtual environment that would allow my model to run. This was the complicated part of integration.
    • The Jetson’s native software is slightly old, so finding compatible packages is quite the challenge.
    • Sourced the appropriate PyTorch, Torchvision, and OpenCV packages.
  • Transitioned the model to run on the Jetson’s GPU (a quick verification sketch follows this list).
    • This required a lot of configuration on the Jetson, including downloading the correct CUDA drivers and ensuring compatibility with our chosen version of PyTorch.
  • Worked on the output of the model so that it would send requests to both the web server and bracelet with proper formatting.
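
For reference, a minimal sketch of the kind of sanity check that confirms the GPU setup; the tiny stand-in module is illustrative, not our actual model:

```python
import torch

# Confirm the PyTorch build can see the Jetson's GPU through CUDA.
print(torch.__version__, torch.cuda.is_available())

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in module; the real emotion model loads from a checkpoint.
model = torch.nn.Linear(10, 4).to(device)
x = torch.randn(1, 10, device=device)
print(model(x).device)  # reports cuda:0 when running on the GPU
```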

I am ahead of schedule, and my part of the project is done for the most part. I’ve hit MVP and will be making slight improvements where I see fit.

Goals for next week:

  • Calibrate the model such that it assumes a neutral state when it is not confident in the detected emotion.
  • Add averaging of the detected emotions over the past 3 seconds which should increase the confidence in our predictions.
  • Add an additional output to indicate if a face is even present.
  • Look into compiling the CV model to ONNX (an open model format we can run efficiently on the Jetson) so that there will be lower latency.

Mason’s Status Report for 11/16

This week, I focused on integration tasks to prepare the system for the demo. Here’s what I accomplished:

This week’s progress:

  • Jetson and Adafruit UART Integration
    • Worked with Kapil to implement UART communication between the Jetson and the Adafruit.
    • Helped write, run, and debug the code required to communicate via the Jetson’s UART pins.
    • Created and executed test scripts for this communication, eventually achieving functionality with the output of the model.
  • Model Deployment on Jetson
    • Resolved compatibility challenges related to leveraging the Jetson’s GPU for efficient model execution. 
    • Successfully installed the necessary packages and verified the model running in real-time using a webcam.
  • System Integration
    • Made changes to the model to integrate it with the API and UART communication, ensuring smooth output transmission.
    • Finalized the Jetson setup for the demo: the model now runs locally on the Jetson and transmits outputs as specified in the project write-up.

Goals for Next Week:

  • Collaborate with Kapil on Bluetooth integration between the Jetson and the bracelet/Adafruit.
  • Work with Noah to improve the model’s efficiency and reduce latency.
  • Conduct tests for API latency to ensure real-time responsiveness.
  • Begin user and user experience testing to evaluate the system’s performance and usability.

I’d say we’re on track and I feel good about the state of our project going into the demo on Monday.

UART (bracelet) working alongside API (iPad):

I forgot to capture photos of the model on Jetson/whole system in action.

Kapil’s Status Report for 11/16

1. What did you personally accomplish this week on the project?

  • Received the NeoPixel and soldered header pins on.
  • Connected the NeoPixel to the Adafruit Feather.
    • Initially, it did not work, but after debugging a physical connection issue, I successfully resolved the problem and got it operational.
  • Attempted to connect Adafruit Feather to the WiFi
    • Unfortunately, due to persistent connectivity issues, I was unable to establish a connection. To work around this, I identified a program that allows me to push code to the Feather via a wired cable, enabling continued progress without relying on WiFi connectivity.
  • Attended two group meetings focusing on integrating the Jetson.
    • First meeting: encountered issues with securing an SSH connection
    • Second meeting: Successfully established the entire pipeline. The Jetson now communicates with both the iPad display and the embedded bracelet via UART, achieving the intended functionality for the demo.

2. Is your progress on schedule or behind?

  • My progress is on schedule. Despite the WiFi connectivity setbacks, I was able to get the NeoPixel working and successfully integrated the Jetson pipeline. The system is now demo-ready with UART communication functioning as intended.

3. What deliverables do you hope to complete in the next week?

  • Post-demo, my focus will shift to implementing Bluetooth Low Energy (BLE) to replace UART for wireless communication.
  • Conduct timing analysis to ensure the system meets its real-time performance requirements
  • Begin finalizing the 3D-printed enclosure and organizing the circuit for a more polished appearance.

4. Tests conducted and planned for verification and validation:

  • Verification Tests (Subsystem-level):
    • The embedded bracelet must achieve a latency under 150 ms. As outlined in the Design Report, I will be using an oscilloscope for this. I will measure two latencies: one for UART (wired communication) and one for BLE (wireless).
      • Testing BLE latency against the UART baseline will ensure that BLE does not compromise system responsiveness.
    • For the user testing portion, I have two goals:
      • Feedback Recognition Accuracy: At least 90% of participants should correctly identify the type of feedback (haptic or visual) associated with specific emotions within 3 seconds of actuation.
      • Error Rate: The system must maintain an error rate of less than 10% for incorrectly signaling emotions, ensuring reliability.

Team’s Status Report 11/9

1. What are the most significant risks that could jeopardize the success of the project?

  • Model Accuracy: While the model achieved 70% accuracy, meeting our threshold, real-time performance with facial recognition is inconsistent. It may require additional training data to improve reliability with variable lighting conditions.
  • Campus network connectivity: We identified that the Adafruit Feather connects reliably on most WiFi networks but encounters issues on CMU’s network due to its security. We will need to tackle this in order to get the Jetson and Adafruit communication working.

2. Were any changes made to the existing design of the system?

  • AJAX Polling and API Inclusion for Real-time Updates: We implemented AJAX polling on the website, allowing for continuous updates from the Jetson API. This feature significantly enhances the user experience by dynamically displaying real-time data on the website interface (a rough server-side sketch follows this list).
  • Jetson Ethernet: We have decided to use Ethernet to connect the Jetson to the CMU network.
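
For illustration, a minimal sketch of the server side that such an AJAX poll hits, assuming a Flask-style API; the route name and response fields are placeholders, not our actual endpoint:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative shared state; in our system the model process updates this.
latest = {"emotion": "neutral", "confidence": 0.0}

@app.route("/emotion")
def emotion():
    """Endpoint the website's AJAX poll requests for the latest prediction."""
    return jsonify(latest)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```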

3. Provide an updated schedule if changes have occurred:

The team is close to being on schedule, though some areas require focused attention this week to stay on track for the upcoming demo. We need to get everything up and running in order to transition fully to testing and enhancement following the demo.

Photos and Documentation:

  • Jetson-to-website integration showcasing successful data transmission and dynamic updates. emotisense@ubuntu is the SSH session into the Jetson. The top-right terminal is the Jetson; the left and bottom-right are the website UI and request history, respectively. I also have a video of running a test file on the Jetson, but embedded video doesn’t work for this post.

Noah’s Status Report for 11/9

This week was mostly focused on getting prepared for integration of my CV component into the larger system as a whole. Here are some of the tasks we completed this week:

  • Spent some more time doing a randomized grid search to determine the best hyperparameters for our model (a rough sketch of the search loop follows this list).
    • Improved the model slightly, up to our goal of 70% accuracy; however, it doesn’t translate super well to real-time facial recognition.
    • Might need to work on some calibration or use a different dataset.
  • Conducted research on the capabilities of the Jetson we chose and how to load our model onto it so that it properly utilizes the GPU.
    • I have a pretty good idea of how this is done and will work on it this week once we are done configuring the SSH on the Jetson.
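
A minimal sketch of a randomized grid search of this kind; the search space and the stubbed training call are illustrative assumptions, not our real ranges or training loop:

```python
import random

# Illustrative hyperparameter grid; our real ranges differed.
space = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128],
    "dropout": [0.2, 0.3, 0.5],
}

def train_and_evaluate(cfg):
    # Stub standing in for training the CNN and returning validation accuracy.
    return random.random()

def sample_config():
    """Draw one random configuration from the grid."""
    return {k: random.choice(v) for k, v in space.items()}

best_acc, best_cfg = 0.0, None
for _ in range(20):
    cfg = sample_config()
    acc = train_and_evaluate(cfg)
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print(best_cfg, best_acc)
```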

I am on schedule and ready to continue working on system integration this upcoming week!

Goals for next week:

  • Start testing my model using a different dataset that more closely mimics the resolution we can get from our webcam.
    • If this isn’t promising, we might add some calibration for each individual person’s face.
  • Download the model to the Jetson and use the webcam / Jetson to run the model instead of my computer
    • This will fulfill my portion of the system integration
    • I would like to transmit the output from my model to Mason’s website in order to ensure that we are getting reasonable metrics.