Team Status Report 10/5/2024

After the design review, our team discussed whether we should prune some of our features, following Professor Tamal’s advice. As a result, we have made some changes to our design: we have decided to remove two features, the ultrasonic sensor and the photoresistor, from our robot design. These changes address the significant risk of implementing too many features and not having enough time to test them and properly integrate them with each other. Removing them also provides more slack time to address issues. We also discussed further specifics regarding the microphones we will have on our robot. One potential risk to mitigate is the speaker and motor servo design. We plan to start our implementation with a speaker that can fit in the robot body, along with motor servos that provide left/right translation coupled with up/down z-axis translation. We would fall back to this plan if the robot is unable to perform a more swivel-type motion, so that our robot can still maintain interactivity without its movements being too difficult to program.

Changes to existing design: Removal of Ultrasonic Sensor and Photoresistor

The purpose of the ultrasonic sensor was to sense the user’s presence to decide whether to keep the timer running during a study session. However, this use case clashes with the text-to-speech (TTS) use case: if the user is using TTS and leaves the desk area to wander around, the sensor would pause the study session and the TTS even though the user did not intend for them to be paused. Even if it were possible to allow the user to continue listening to the generated speech, it would limit the user from walking around while studying. Removing the sensor allows for a more flexible study style among users. We will replace it with a timer pause/play button on the robot’s leg. If users need to quickly get away from the desk, they can click the button to pause the timer, and they can also walk around or fidget while studying. Furthermore, this resolves the issue of having to add additional features, like an alert asking the user if they are still there, because in practice the sensor could eventually stop noticing a user who sits very still.
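
As a rough illustration of the pause/play button logic, here is a minimal sketch using the gpiozero library on the Raspberry Pi (the GPIO pin number and the timer hook are placeholders, since the wiring and timer code are not finalized):

```python
from signal import pause

from gpiozero import Button

PAUSE_BUTTON_PIN = 17  # hypothetical GPIO pin for the leg button

timer_running = True

def toggle_timer():
    # Placeholder: the real version would pause/resume the study timer
    # and notify the WebApp of the state change.
    global timer_running
    timer_running = not timer_running
    print("Timer running:", timer_running)

button = Button(PAUSE_BUTTON_PIN)
button.when_pressed = toggle_timer

pause()  # keep the script alive, waiting for presses
```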

As for the photoresistor, the use case was for when the robot is already turned on but has gone into an idle state: if the user turns a light on, the robot should be able to “wake up” and greet the user. We felt that this use case was too niche; although a nice perk to have, it is not integral to the design of the robot. Fundamentally, the robot is meant to help a student study, while also providing entertainment when the student is tired or needs a break. Thus, this feature did not feel crucial to include in our project. We believe it is more beneficial to remove it and focus on making our other features better instead.

Changes to existing design: Addition of a sleep state

An additional feature that our team devised is a sleep state for the robot, to conserve power and prevent the Raspberry Pi from overheating. If the user leaves in the middle of a study session or doesn’t return after a break reminder, the robot will enter the sleep state after 10 minutes of inactivity, at which point the robot’s DCI display will show a sleeping-face screensaver. We believe that a sleep state is useful both to save power and to pause all processes; if the user chooses to return to a study session, the robot will wake up on command and immediately resume processes such as study timers and interactive games.
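
A minimal sketch of the inactivity logic, assuming a polling loop on the robot (the 10-minute threshold is from our design; the function names are placeholders):

```python
import time

SLEEP_TIMEOUT_S = 10 * 60  # enter the sleep state after 10 minutes of inactivity

last_activity = time.monotonic()
asleep = False

def enter_sleep():
    print("sleeping: show the sleeping-face screensaver, pause all processes")

def wake_up():
    print("waking: resume study timers and interactive games")

def on_user_activity():
    # Called from any input: button press, voice trigger, or WebApp message.
    global last_activity, asleep
    last_activity = time.monotonic()
    if asleep:
        asleep = False
        wake_up()

def check_idle():
    # Polled periodically from the robot's main loop.
    global asleep
    if not asleep and time.monotonic() - last_activity > SLEEP_TIMEOUT_S:
        asleep = True
        enter_sleep()
```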

Specification of existing design: Microphones

We have decided that we will be using two pairs of compact ½” cardioid condenser microphones, placed at the corners of the robot to pick up sound within a 3-foot radius. This will not incur additional costs, as they will be borrowed from the ECE department.

Update to schedule: 

We removed the testing and integration tasks for the ultrasonic sensor and photoresistor, which frees up integration time for all other components. Otherwise, everything remains the same.

Jeffrey’s Status Report for 10/05/2024

For this week, I am on track with the Gantt schedule. I will start prioritizing the coding of certain functionalities, such as the speaker output from the robot (which we plan to drive via TTS) and the motor servos that control left/right movement and up/down translation. I also placed an additional order for microphones, but since the ECE inventory might be out of stock, we will look for an alternative on the market.

Mahlet and I have a task on our Gantt chart to have the robot base complete by this week. We are behind schedule on this task, but plan to make time after our midterms to meet at TechSpark. There, we will focus on building the acrylic base, ensuring that the body compartment has space to store the speaker and other components such as the RPi 5. This will segue into our next task: ensuring that the servo motors mounted on the robot body (connected to the DCI display) are able to function properly.

To prepare for those tasks, next week I will focus on creating the logic to connect the speakers, as well as the motor servos. Over the past week, I researched the right kind of speaker that would fit within the body compartment while also being volume-controlled, so the user can adjust the settings as they see fit. Over the course of this week, I will look into specific speakers that our team can acquire, since the ECE inventory didn’t have the types we were looking for. Furthermore, we will order the servo motors this week, so that next weekend, once we have the base complete, we can add them to the build and work on integrating them into the robot body.
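
As a starting point for the servo logic, here is a minimal sketch using gpiozero’s AngularServo on the Raspberry Pi; the GPIO pins and angle ranges are assumptions until we pick the specific servos:

```python
from time import sleep

from gpiozero import AngularServo

# Hypothetical GPIO pins; actual wiring depends on the servos we order.
PAN_PIN = 18   # left/right
TILT_PIN = 19  # up/down

pan = AngularServo(PAN_PIN, min_angle=-90, max_angle=90)
tilt = AngularServo(TILT_PIN, min_angle=-45, max_angle=45)

def look_at(pan_deg, tilt_deg):
    # Clamp to the servos' assumed mechanical range before commanding them.
    pan.angle = max(-90, min(90, pan_deg))
    tilt.angle = max(-45, min(45, tilt_deg))

# Simple sweep to sanity-check the mounting once the base is built.
for angle in (-45, 0, 45, 0):
    look_at(angle, 0)
    sleep(0.5)
```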

Shannon’s Status Report for 9/28/2024

This week, I focused on narrowing down the specifics of the Robot and the WebApp with my team. We wanted a clear idea of exactly what our robot and our WebApp will look like. We discussed in depth what our robot dimensions should be and concluded that the robot should be roughly 12-13 inches in height to account for eye level on a desk. Since the LCD display will be around 5 inches, the base will have a height of about 7 inches. We also discussed the feet dimensions, which came out to 2.5 inches wide, to account for the 3 rock-paper-scissors buttons, and 1 inch in height, to account for the buttons sticking out. Then, I led the discussion around what the WebApp should look like, what pages we should have, and what each page should do. We decided on 4 main pages:

  • a Home page displaying the most recent study sessions and todo lists,
  • a Timer page that allows the user to set timers for tasks and a stopwatch to time how long they take to do tasks,  
  • a Focus Time/Study Session page where the user can start, pause, and end a study session, and view statistics/analyze their study sessions,
  • a Rock-Paper-Scissors page, where the user can start a game with the robot.

Following our discussion, I have started working on the Timer page for our WebApp. I have finished the basic timer and stopwatch features, so a user can now start a timer, and start and stop a stopwatch. Attached is a screenshot of this. I also plan on adding a feature where previous timer and stopwatch timings are recorded, with tags the user can add to each past activity.
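
As a sketch of how the planned recording-with-tags feature could be modeled in Django (model and field names are hypothetical; the actual schema may change):

```python
from django.conf import settings
from django.db import models


class RecordedTiming(models.Model):
    """One finished timer or stopwatch run, with an optional user-supplied tag."""

    TIMER = "timer"
    STOPWATCH = "stopwatch"
    KIND_CHOICES = [(TIMER, "Timer"), (STOPWATCH, "Stopwatch")]

    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    kind = models.CharField(max_length=10, choices=KIND_CHOICES)
    duration = models.DurationField()
    tag = models.CharField(max_length=50, blank=True)  # e.g. "15-112 homework"
    created_at = models.DateTimeField(auto_now_add=True)
```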


According to the Gantt chart, I am on target. 

In the next week, I’ll be working on:

  • Completing the Timer Page
  • Coding up the Focus Time/Study Session Page
  • Fully finalizing a plan on how to integrate the robot with the WebApp

Team’s Status Report for 9/28/2024

The most significant risk for our team right now is the integration of the robot with the WebApp. We are thinking of using WebSockets because of their low-latency, full-duplex communication, which would allow better real-time communication between the robot and the WebApp. However, a key challenge is that none of us have prior experience with using WebSockets in this specific context, creating uncertainty around implementation and potential delays. To manage this risk, we plan on scheduling dedicated time for learning WebSocket integration and seeking advice from mentors or peers who have used WebSockets. As for our contingency plans, we could switch to standard HTTP-based communication using REST APIs over WiFi (though this might introduce higher latency), or use a physical Ethernet connection to reduce the risk of network disruptions (though this would reduce flexibility in robot placement and mobility).
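
To make the plan concrete, below is a minimal sketch of a robot-side WebSocket endpoint using the third-party Python websockets library. This is an assumption rather than a committed design: we have not chosen a library yet, the "timer_pause" message type is made up, and the single-argument handler signature requires websockets 10 or newer.

```python
import asyncio
import json

import websockets  # third-party library: pip install websockets


async def handle_client(websocket):
    # Each message from the WebApp is assumed to be a small JSON command.
    async for raw in websocket:
        command = json.loads(raw)
        if command.get("type") == "timer_pause":
            # Placeholder: pause the on-robot study timer here.
            await websocket.send(json.dumps({"type": "ack", "of": "timer_pause"}))


async def main():
    # Listen on all interfaces so the WebApp can reach the robot over WiFi.
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()  # run until the process is stopped


if __name__ == "__main__":
    asyncio.run(main())
```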

Another possible challenge is integrating the DCI display with the Raspberry Pi. We need to ensure a reasonable frames-per-second value along with smooth facial transitions, such as blinking and smiling, for human-like interaction with the bot. To implement this, we will use Python graphics libraries like Pygame for simple 2D rendering, or Kivy for a more advanced interface. To maximize the lifespan of the screen, we will use a screensaver and sleep mode during idle moments, or periodically make slight changes to the content displayed on the screen. This can be done during timer countdowns and is generally not a concern if the user is not using the bot.
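
As a rough proof-of-concept of the Pygame route (a sketch only; the face design, display resolution, and frame rate are placeholders):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 480))  # placeholder display resolution
clock = pygame.time.Clock()

frame = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    # Blink for ~6 frames out of every 90 (~0.2 s every 3 s at 30 FPS).
    blinking = frame % 90 < 6
    for x in (300, 500):  # left and right eye
        if blinking:
            pygame.draw.line(screen, (255, 255, 255), (x - 40, 240), (x + 40, 240), 8)
        else:
            pygame.draw.circle(screen, (255, 255, 255), (x, 240), 40)

    pygame.display.flip()
    frame += 1
    clock.tick(30)  # cap at 30 FPS to keep CPU load reasonable

pygame.quit()
```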

We also got together as a group to update our system specification diagram, which we included in our design proposal.

We decided to allocate specific time to the WebSockets integration of the robot next week.

Part A was written by Mahlet Mesfin

Our StudyBuddyBot is a study companion meant to motivate students to study, track their study habits, and provide relaxation when they take breaks between study sessions. This lets students have a good experience while being productive, which induces psychological satisfaction. The bot can guide students to follow optimal study schedules (such as the Pomodoro technique), ensuring a well-balanced approach to work and rest. This will help prevent overworking, leading overall to better mental and emotional health. In addition, we will be incorporating reminders for the user to take breaks at reasonable intervals, reducing fatigue and eye strain.

The game feature of the StudyBuddyBot allows for a short but fun experience during these study sessions, timed so that it doesn’t cause prolonged distraction. This will also help foster a sense of companionship and reduce the feeling of isolation for those who can’t focus well in the presence of other individuals, boosting their well-being through emotional support.

Part B was written by Shannon Yang

The StudyBuddyBot will improve productivity and the well-being of students in academic environments. It will serve as a structured study companion that can help students. In situations where students have limited access to in-person interaction due to cultural factors, the robot is able to simulate a studying environment with a friend. The features for interaction that the robot has can also help to bridge gaps in the social and emotional support systems that students may lack from their surroundings. Some of the robot’s features could also be used to cater to specific cultural or social preferences (for example, setting prayer time reminders for those who observe religious practices). By incorporating both study assistance and social engagement, the robot aligns with the growing trend of technology being used to support mental health and productivity, acknowledging the cultural and social importance of companionship in learning and promoting greater work-life balance. 

Part C was written by Jeffrey Jehng

With the StudyBuddyBot, we want to use cost-effective components to balance affordability with quality. By implementing a modular design, we can have a scalable distribution in the future and ensure durability of the final product. 

An example of our final product use-case could be in a school setting, where administration/students may have a limited budget for these educational tools. By designing the StudyBuddyBot with affordable components and integration with a companion web-app to decrease the need for high-performance hardware, we can focus on developing key functionalities such as robot interaction and features to motivate student studying. The emphasis on affordable components under our $600 budget can make our design a cost-effective solution to assist schools in integrating advanced technology into the classroom.   

Jeffrey’s Status Report for 9/28/2024

For this week, I spent one meeting session with Mahlet and Shannon working out our system specification design on paper before I transcribed it into a diagram for the Design Presentation. We broke our StudyBuddyBot down into software and hardware components: the companion web application for the software, and the Raspberry Pi 5 (8 GB) for the hardware. Within the hardware and software components, we broke down the inputs and outputs and described the tasks we have to complete in green. The boxes in purple, combined with the Raspberry Pi 5 in green, are components that we will buy.

Earlier in the week, I spent some time following my Gantt Chart Schedule to decide on the right speaker and microphone components to use. With the approval of my teammates, I was able to place orders through the Capstone Inventory for those parts, along with the Raspberry Pi we plan to use.


Afterwards, I started looking into the logic for coding certain features, which I will continue doing into next week. Once we have our components, our goal is to start combining them with our Raspberry Pi and ensuring that our features and functionalities work as planned.


Since I am currently on schedule, I can devote more time in the upcoming week to understanding the specification sheets of our components and doing analysis/simulation of our design, to ensure that the features we plan for our StudyBuddyBot will be feasible and work.

Mahlet’s Status Report for 9/21/2024

This week, I worked on some of the feedback we got from the proposal presentation regarding the audio triangulation, the robot motion, the choice of microphones, and the justification for using 3 microphones. According to my research, the use of three microphones is justified because it allows for 2D audio localization when paired with a directional microphone. The three microphones are going to be MEMS microphones. The triangulation technique requires very accurate time measurements, and using MEMS microphones might introduce some timing delays that affect the precision of the localization. However, since a directional microphone can give us a sense of the general origin of a sound, aligning and processing the signals coming through both of these will help us get a more precise estimate, with a target margin of error of 5 degrees.
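
To make the triangulation math concrete, here is a minimal sketch of the time-difference-of-arrival (TDOA) building block for a single microphone pair; the real system would combine pairwise estimates from all three MEMS microphones with the directional microphone’s coarse estimate:

```python
import numpy as np


def estimate_tdoa(sig_a, sig_b, sample_rate):
    # Cross-correlate the two microphone signals; the lag of the correlation
    # peak gives the sample offset (time difference of arrival) between them.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    return lag_samples / sample_rate


def angle_from_tdoa(tdoa, mic_spacing_m, speed_of_sound=343.0):
    # Far-field approximation: sin(theta) = c * tdoa / d.
    # Clip to [-1, 1] so noisy estimates don't crash arcsin.
    sin_theta = np.clip(speed_of_sound * tdoa / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```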

In light of the proposal update, I have made slight modifications to my goals for the next few weeks on the Gantt chart. I will be working on identifying specific components for the purposes mentioned above, and I swapped the deadlines for the robot neck logic and the triangulation math based on priority. I am on track based on the Gantt chart.

During next week, I will be working on identifying good directional microphones to integrate with the MEMS microphones, and on identifying the motors for the neck of the robot. I will also do more research on allowing audio triggers only within a certain radius of the robot. Once I identify the servos I will be using, I will work on the audio triangulation method. I will also be working on the bill of materials (BOM) with my team to finalize the parts list.

Jeffrey’s Status Report for 09/21/2024

Updates to Proposal Presentation from Abstract:

This week I focused on the Proposal Presentation, working on the slides and the script. For research, I looked into the ethical implications of our project, as well as similar literature that presented an interactive study-buddy robot, such as: https://www.jongwon.net/Interactive-Desktop-Study-Buddy-Robot-Stubie.pdf

From there, I did some research on human-robot interaction ethics: the rules and principles that govern how robots and technology should operate in this domain in a way that is ethically fair to humans.

Acquiring Parts and Considering Design Choices:

After the presentation finished on Monday, I focused on looking into parts for the motor servos that control the robot, and into the speaker system for the TTS application. Given that our robot has a small form factor, we want a compact but powerful speaker that can deliver ample volume in a confined environment.

I looked into small Arduino speakers ranging from $3.00 to $10.00 that would be compact enough to fit into the back or front of our design. I also had to consider the logic behind the robot responding to audible cues, which is my first task according to the Gantt schedule.

Future Goals:

I am currently on track, and I spent time at the end of the week catching up with my team members to consider overall designs together, such as the microphone system that we plan to use.

In the upcoming week, I will focus on finalizing the logic for both the robot taking in audible cues via the microphone system and outputting audio via the speaker system. I will also start working with Mahlet on planning the build of our robot motor system and thinking about the most effective parts to acquire, so that our compact robot will be capable of movement along the X and Y axes at the neck. We decided that movement along the Z-axis is unnecessary and would add complications to how we utilize the motors connecting the body to the head/display of the robot.


Team’s Status Report for 9/21/2024

The most significant risk right now is the usage of the MEMS microphones. We have some concerns that it may be difficult to triangulate well given our robot’s small form factor. We plan on mitigating this risk by adding a directional microphone. Performing audio analysis on the MEMS array requires careful and precise array design to achieve good accuracy. Using a directional microphone will help identify the general source of the audio, and pairing that with an array of MEMS microphones will allow for better audio localization and recognition.

An array of three MEMS microphones is sufficient to perform a 2D triangulation in the plane spanned by the X and Z axes of the robot head. Microphones will be placed on the back of the head, on the left, and on the right. Performing signal analysis on the input from each microphone will allow us to identify the source of the audio with a margin of error of about 5 degrees.

We believe that this combination of changes to the microphone system will improve the accuracy of the system. As an additional layer of risk mitigation, we plan to reduce the activation distance of the audible cues, which will ensure that the microphones can accurately pinpoint the location. To that end, our goal is for the audible cues to activate once the student is at the desk, within 3 feet of the robot.

Based on the feedback, we have made slight changes to the Gantt chart to account for time to identify the specific parts that we need to purchase for our robot. We looked at the purchasing PDF to prepare a purchase request form for the parts inventory, and we are narrowing down the list of parts that we will need. This will give us a clearer picture of exactly what components our robot will have and for what purpose.

Shannon’s Status Report for 9/21/2024

This week, I focused on researching various components for our proposal presentation. To better define our use case, I looked into a few research papers with a similar end goal to ours, and found good support that there is a need for our robot (because it can fulfill the psychological needs of students, motivate learners, and improve their learning output). I also properly scoped out the 6 main features for our robot, including the 3 features for studying and the 3 features for interaction, and what problems each feature addresses with regards to our use case. I also worked on more concretely defining our technical challenges and coming up with possible risk mitigation strategies, by doing some simple research and seeing what options were available and appropriate for our project. For example, for the issue of having fast real-time communication between the app and the robot, there were a few options, including using a REST API with polling (too high latency), but I ultimately decided to implement WebSockets because it is a commonly used lightweight protocol suited to our project.

I have also started creating the frontend pages for our WebApp using Django. I have not fully fleshed out all the pages (timer pages, study session pages) and will work on that starting next week, but for now there is some basic navigation (login, logout, register, and a home page). I will discuss further with my team to settle on a design for the wireframes, and then I will update the frontend views for our WebApp accordingly.
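
For reference, here is a minimal sketch of how the basic navigation can be wired up in a Django URLconf (the URL names and the views module are assumptions; Django’s built-in auth views cover login and logout):

```python
# urls.py -- sketch of the basic navigation routes
from django.contrib.auth import views as auth_views
from django.urls import path

from . import views  # assumed app module with home/register views

urlpatterns = [
    path("", views.home, name="home"),
    path("login/", auth_views.LoginView.as_view(), name="login"),
    path("logout/", auth_views.LogoutView.as_view(), name="logout"),
    path("register/", views.register, name="register"),
]
```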

According to the Gantt chart, I am on target. 

In the next week, I’ll be working on:

  1. Design Presentation Slides
  2. Finalizing our parts ordering list with my team
  3. Finishing up the WebApp frontend pages
  4. Starting work on timers, focus times, and todo list features on the WebApp