Team Status Report 10/5/2024

After the design review, our team sat down and discussed whether we should prune some of our features, following Professor Tamal's advice. As a result, we have made some changes to our design: we have decided to remove two features, the ultrasonic sensor and the photoresistor, from our robot. These changes address the significant risk of implementing too many features and not having enough time to test them and properly integrate them with each other; removing them also frees up slack time to address issues. We also discussed further specifics regarding the microphones our robot will have. One potential risk to mitigate concerns the speaker and motor servo design. We plan to start our implementation with a speaker that can fit in the robot body, along with motor servos that provide left/right translation coupled with up/down z-axis translation. We would fall back on this plan if the robot is unable to perform a more swivel-type motion, so that the robot can still maintain interactivity without its movements being too difficult to program.

Changes to existing design: Removal of Ultrasonic Sensor and Photoresistor

The purpose of the ultrasonic sensor was to sense the user's presence and decide whether to keep the timer running during a study session. However, this use case clashes with the text-to-speech (TTS) use case: if the user is using TTS and leaves the desk area to wander around, the sensor would pause the timer, the study session, and the TTS even though the user did not intend for them to be paused. Even if it were possible to let the user continue listening to the generated speech, it would still prevent the user from walking around while studying. Removing the sensor allows for more flexible study styles among users. We will replace it with a timer pause/play button on the robot's leg: if the user needs to step away from the desk quickly, they can press the button to pause the timer, and they can also walk around or fidget while studying. Furthermore, this resolves the issue of having to add extra features, such as an alert asking the user if they are still there, because in practice the sensor could eventually stop detecting a user who sits very still.
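The pause/play button logic described above could be sketched as a small toggle with debouncing. This is a minimal sketch, assuming the physical button fires a press event (e.g. via a GPIO interrupt on the Raspberry Pi); the class name and debounce window are illustrative, not part of the finalized design.

```python
import time


class TimerPauseButton:
    """Toggles a study-session timer between running and paused.

    Sketch of the pause/play button on the robot's leg: each physical
    press flips the paused state, and a short debounce window ignores
    the spurious repeat presses a real switch produces.
    """

    def __init__(self, debounce_s=0.2, clock=time.monotonic):
        self.debounce_s = debounce_s
        self.clock = clock          # injectable clock, useful for testing
        self.paused = False
        self._last_press = float("-inf")

    def on_press(self):
        """Handle one press event; returns the new paused state."""
        now = self.clock()
        if now - self._last_press < self.debounce_s:
            return self.paused      # within debounce window: ignore bounce
        self._last_press = now
        self.paused = not self.paused
        return self.paused
```

On the robot, `on_press` would additionally notify the WebApp so the session state stays in sync.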

As for the photoresistor, its use case was that when the robot is already turned on but in an idle state, and the user turns a light on, the robot should be able to “wake up” and greet the user. We felt that this use case was too niche; although a nice perk, it is not integral to the design of the robot. Fundamentally, the robot is meant to help a student study, and also to provide entertainment when the student is tired or needs a break, so this feature did not feel crucial to include in our project. We believe it is more beneficial for us to remove it and focus on making our other features better instead.

Changes to existing design: Addition of an idle state 

An additional feature our team devised is a sleep state for the robot, to conserve power and prevent the Raspberry Pi from overheating. If the user leaves in the middle of a study session or doesn't return after a break reminder, the robot will enter the sleep state after 10 minutes of inactivity, at which point the robot's LCD display will show a sleeping-face screensaver. We believe a sleep state is useful both to save power and to pause all processes; if the user chooses to return to a study session, the robot can wake up on command and immediately resume processes such as study timers and interactive games.
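The inactivity rule above can be sketched as a small monitor that the robot's main loop polls. This is an illustrative sketch, not the finalized implementation: the names are hypothetical, and the real robot would also switch the display to the screensaver and pause timers when the state flips.

```python
import time

IDLE_TIMEOUT_S = 10 * 60  # enter sleep after 10 minutes of inactivity


class SleepStateMonitor:
    """Tracks the last user interaction and reports when the robot
    should be in its sleep state."""

    def __init__(self, timeout_s=IDLE_TIMEOUT_S, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock              # injectable clock for testing
        self.asleep = False
        self._last_activity = clock()

    def record_activity(self):
        """Call on any user input (button press, voice command, app event).
        Any activity wakes the robot so it can resume paused processes."""
        self._last_activity = self.clock()
        self.asleep = False

    def poll(self):
        """Call periodically from the main loop; returns True while asleep."""
        if not self.asleep and self.clock() - self._last_activity >= self.timeout_s:
            self.asleep = True
        return self.asleep
```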

Specification of existing design: Microphones

We have decided to use two pairs of compact ½" cardioid condenser microphones, each placed at a corner of the robot to pick up sound within a 3-foot radius. This will not incur additional cost, as the microphones will be borrowed from the ECE department.

Update to schedule: 

We removed the testing and integration tasks for the ultrasonic sensor and photoresistor, which frees up integration time for all other components. Otherwise, everything remains the same.

Shannon’s Weekly Report 9/28/2024

This week, I focused on narrowing down the specifics of the robot and the WebApp with my team. We wanted a clear idea of exactly what our robot and our WebApp will look like. We discussed our robot dimensions in depth and concluded that the robot should be roughly 12-13 inches tall to sit at eye level on a desk. Since the LCD display will be around 5 inches, the base will be about 7 inches tall. We also discussed the feet dimensions, which came out to 2.5 inches wide to accommodate the 3 rock-paper-scissors buttons and 1 inch tall to account for the buttons sticking out. Then, I led the discussion on what the WebApp should look like, what pages we should have, and what each page should do. We decided on 4 main pages:

  • a Home page displaying the most recent study sessions and todo lists,
  • a Timer page that allows the user to set timers for tasks and a stopwatch to time how long tasks take,
  • a Focus Time/Study Session page where the user can start, pause, and end a study session, and view statistics/analyze their study sessions,
  • a Rock-Paper-Scissors page, where the user can start a game with the robot.

Following our discussion, I have started working on the Timer page for our WebApp. I have finished the basic timer and stopwatch features, so now a user can start a timer, and they can start and stop a stopwatch. Attached is a screenshot of this. I also plan on adding a feature where the previous timer and stopwatch timings are recorded with tags the user can add to the previous activity.
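The planned feature of recording past timer and stopwatch runs with user-added tags could be sketched as follows. The names here are hypothetical; in the actual WebApp these records would likely live in a Django model rather than an in-memory list.

```python
from dataclasses import dataclass, field


@dataclass
class ActivityRecord:
    """One finished timer or stopwatch run, with optional user tags."""
    kind: str                 # "timer" or "stopwatch"
    duration_s: float         # elapsed time in seconds
    tags: list = field(default_factory=list)


class ActivityLog:
    """Keeps the history of previous timings so the user can review
    how long past activities took, filtered by tag."""

    def __init__(self):
        self.records = []

    def record(self, kind, duration_s, tags=()):
        rec = ActivityRecord(kind, duration_s, list(tags))
        self.records.append(rec)
        return rec

    def with_tag(self, tag):
        """All past timings the user labeled with a given tag."""
        return [r for r in self.records if tag in r.tags]
```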


According to the Gantt chart, I am on target. 

In the next week, I’ll be working on:

  • Completing the Timer Page
  • Coding up the Focus Time/Study Session Page
  • Fully finalizing a plan on how to integrate the robot with the WebApp

Team’s Status Report for 9/21/2024

The most significant risk right now is the use of the MEMS microphones. We have some concerns that triangulation may be difficult given our robot's small form factor. We plan to mitigate this risk by adding a directional microphone. Performing audio analysis on a MEMS array requires careful, precise array design to achieve good accuracy. A directional microphone will help identify the general source of the audio, and pairing it with the MEMS microphone array will allow better audio localization and recognition.

An array of three MEMS microphones is sufficient to perform 2D-plane triangulation along the X and Z axes of the robot head. The microphones will be placed on the back of the head, on the left, and on the right. Performing signal analysis on the input from each microphone will allow us to identify the source of the audio with a margin of error of about 5 degrees.
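The core of the triangulation above is converting a time-difference of arrival (TDOA) between a microphone pair into an angle. The sketch below shows the standard far-field approximation for a single pair; it is a simplification under assumed values (speed of sound, mic spacing), and the full design would combine the pairwise delays from the back/left/right microphones into one X-Z plane estimate.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at room temperature


def bearing_from_tdoa(delta_t, mic_spacing):
    """Far-field angle of arrival, in radians, from the time-difference
    of arrival between one microphone pair.

    delta_t: arrival-time difference in seconds (positive means the
             sound reached the reference mic first).
    mic_spacing: distance between the two microphones in meters.
    """
    x = SPEED_OF_SOUND * delta_t / mic_spacing
    x = max(-1.0, min(1.0, x))  # clamp measurement noise into asin's domain
    return math.asin(x)
```

For example, with mics 10 cm apart, a sound arriving head-on gives `delta_t = 0` and a bearing of 0; larger delays swing the estimate toward ±90°.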

We believe this combination of changes to the microphone system will improve its accuracy. As an additional layer of risk mitigation, we plan to reduce the activation distance of the audible cues, which will ensure that the microphones can accurately pinpoint the source location. To that end, our goal is for the audible cues to activate once the student is at the desk, within 3 feet of the robot.

Based on the feedback, we made slight changes to the Gantt chart to account for time to identify the specific parts we need to purchase for our robot. We looked at the purchasing PDF to prepare a purchase request form from the parts inventory, and we are narrowing down the list of parts we will need. This gives us a clearer picture of exactly which components our robot will have and for what purpose.

Shannon’s Status Report for 9/21/2024

This week, I focused on researching various components for our proposal presentation. To better define our use case, I looked into several research papers with a similar end goal to ours, and found good support that there is a need for our robot (because it can fulfill the psychological needs of students, motivate learners, and improve their learning output). I also properly scoped out the 6 main features for our robot, including the 3 features for studying and the 3 features for interaction, and what problems each feature addresses with regard to our use case. I also worked on more concretely defining our technical challenges and coming up with possible risk mitigation strategies, doing some simple research into which options were available and most appropriate for our project. For example, for fast real-time communication between the app and the robot, there were a few options, including a REST API with polling (too high latency), but I ultimately decided to use WebSockets because it is a commonly used lightweight protocol well suited to our project.
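Whatever transport carries the app-robot messages, both ends need an agreed frame format. A minimal sketch of JSON framing for the WebSocket channel is below; the event names and fields are illustrative placeholders, not a finalized protocol.

```python
import json


def make_event(event_type, payload):
    """Serialize one app-to-robot event as a JSON text frame for the
    WebSocket channel (e.g. pausing a session, starting a game)."""
    return json.dumps({"type": event_type, "payload": payload})


def parse_event(raw):
    """Decode a received frame back into (event_type, payload)."""
    msg = json.loads(raw)
    return msg["type"], msg["payload"]
```

Keeping every message in one `{"type": ..., "payload": ...}` shape lets the robot dispatch on `type` without per-feature parsing code.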

I have also started creating the frontend pages for our WebApp using Django. I have not fully fleshed out all the pages (timer pages, study session pages) and will work on that starting next week, but for now there is basic navigation (login, logout, register, and a home page). I will discuss further with my team to settle on a design for the wireframes, and then I will update the frontend views for our WebApp accordingly.

According to the Gantt chart, I am on target. 

In the next week, I’ll be working on:

  1. Design Presentation Slides
  2. Finalizing our parts order list with my team
  3. Finishing up the WebApp frontend pages
  4. Starting work on timers, focus times, and todo list features on the WebApp