Sanjana’s Status Report for 9/30

This week, I worked on finalizing the tech stack for our display and on how integration with the different APIs would work. I collaborated with my team members to refine our scope and identify the updated constraints and use case of SoundSync. We also spent a lot of time understanding and planning the different override conditions for our system. For example, if the eye tracking indicates we are elsewhere on the page while the audio indicates we are nearing the last few beats of the last line, we turn the page according to our audio alignment and ignore the misleading eye tracking data. Finally, a lot of my time went into preparing the slides and information for the design review.
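
To make that override concrete, here is a minimal Python sketch of the decision logic; the function name, parameters, and beat threshold are placeholders for illustration, not our final implementation.

```python
# Hypothetical sketch of the page-turn override described above; names and
# thresholds are placeholders, not a committed design.

def should_turn_page(audio_beats_remaining, eye_gaze_on_last_line,
                     beat_threshold=3):
    """Decide whether to turn the page for the current frame of data.

    audio_beats_remaining: beats left in the last line, per audio alignment.
    eye_gaze_on_last_line: True if eye tracking places the gaze on the last line.
    """
    # Override condition: audio alignment says we are nearing the end of the
    # last line, so trust it and ignore conflicting eye-tracking data.
    if audio_beats_remaining <= beat_threshold:
        return True
    # Otherwise fall back to the eye-tracking signal.
    return eye_gaze_on_last_line
```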

To build SoundSync, we are going to deploy a variety of engineering, mathematical, and scientific principles. Our main technologies are centered around machine learning, signal processing, web development, and eye tracking. I learned these concepts in a series of courses I took over the past three years at Carnegie Mellon. While none of the courses below focus on eye tracking or its related APIs, I gained experience using APIs and integrating and interfacing with peripherals through 17-437 and 18-349, and 18-453: XR Systems, a course I'm taking this semester, covered several of the challenges involved in eye tracking. The most notable courses are:

  1. 10-601 Introduction to Machine Learning: This course provided the fundamentals to explain the math underlying various ML models that we’ll be using.
  2. 18-290 Signals and Systems: This course introduced key concepts like convolution, filtering, sampling, and Fourier transforms. We'll be utilizing these concepts when filtering and sampling audio. Fourier transforms will also be an integral part of the feature extraction for our audio alignment process, which uses the Dynamic Time Warping algorithm (see the sketch after this list).
  3. 18-349 Introduction to Embedded Systems: This course teaches the engineering principles behind embedded real-time systems and covers the integrated hardware and software aspects of embedded processor architectures. I learned how to read through documentation and how to interface with an MCU, skills directly applicable to SoundSync because of the peripherals and board we are using.
  4. 17-437 Web Application Development: This course introduces the fundamental architectural elements of programming websites that produce content dynamically, using the Django framework for Python and Java Servlets.
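
As a first pass at the alignment step mentioned in item 2, here is a minimal sketch of DTW-based audio alignment using librosa's STFT-based chroma features and its DTW routine; the file names, hop length, and distance metric are assumptions rather than our final pipeline.

```python
# Minimal sketch of aligning a performance recording to a reference recording
# with Dynamic Time Warping, using librosa. File names and parameters are
# placeholders, not our final pipeline.
import librosa

# Load both recordings (librosa resamples to a common rate by default).
ref, sr = librosa.load("reference.wav")
live, _ = librosa.load("performance.wav", sr=sr)

# STFT-based chroma features give a pitch-class representation that is
# robust to timbre differences between recordings.
ref_chroma = librosa.feature.chroma_stft(y=ref, sr=sr, hop_length=512)
live_chroma = librosa.feature.chroma_stft(y=live, sr=sr, hop_length=512)

# DTW finds the lowest-cost warping path between the two feature sequences;
# wp maps frames of the performance onto frames of the reference recording.
cost, wp = librosa.sequence.dtw(X=ref_chroma, Y=live_chroma, metric="cosine")
print("Alignment path length:", len(wp))
```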

My progress is on schedule. Due to changes we made in the tech stack, I have not begun implementing a React webpage and am instead familiarizing myself with Python to build our display UI. By next week, I will have some pseudocode for the display or for eye-tracking data collection and filtering; a rough sketch of the filtering idea is below. We also intend to order parts in the upcoming week.
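
As a starting point for that pseudocode, here is a sketch of the kind of gaze-sample smoothing we might use; the data format, class name, and window size are assumptions, not a committed design.

```python
# Hypothetical sketch of gaze-sample smoothing for the eye-tracking pipeline;
# the class name, sample format, and window size are placeholders.
from collections import deque

class GazeSmoother:
    """Smooths raw (x, y) gaze samples with a simple moving average."""

    def __init__(self, window_size=5):
        # Keep only the most recent window_size samples.
        self.samples = deque(maxlen=window_size)

    def add_sample(self, x, y):
        self.samples.append((x, y))

    def smoothed(self):
        # Average the buffered samples to damp eye-tracker jitter.
        n = len(self.samples)
        if n == 0:
            return None
        sx = sum(x for x, _ in self.samples)
        sy = sum(y for _, y in self.samples)
        return (sx / n, sy / n)
```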
