Jeffrey’s Status Report for 12/07/2024

After finishing the final presentation slides earlier this week, the team got together to plan a schedule for the final week, breaking down the remaining tasks we had to complete. My priorities were finishing the Study Session feature with Shannon, as well as the RPS (Rock Paper Scissors) game.

While the study session took longer than anticipated, we were able to get the study session code (in DSIdisplay.py, along with studysession_start.html, studysession_inprog.html, and studysession_end.html) to contain all the logic needed for the expected behavior. We abided by the choices in our design report and ensured that all features were implemented. All that is left is to verify the study session's integration with other features, and to push our Web App to AWS servers. For validation and testing, we verified individual components, such as the duration being sent to the RPi, and actions such as pause/resume triggering the appropriate HTML redirection. For instance, we see "Study Session in Progress" on the Web App, and when we press pause on the DSI display via the touchscreen, the Web App instantly changes to "Study Session on Break". This kind of low-latency behavior is exactly what we expected. Similarly, for the end of a study session, we address multiple cases: either the session is ended early (from the Web App), in which case we need to stop the timer completely, or the session ends upon reaching the goal duration. In the latter case, we handle two additional cases: 1) the student chooses to stop studying, and we return to the default home screen, or 2) the student continues studying, in which case there is no longer a target goal duration; the student can study as they please, but the timer keeps running (from where it left off) so they can track their total study time while retaining pause/resume functionality.
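As an illustration of the pause/resume synchronization described above, here is a minimal sketch of what the RPi-side event emission could look like. It assumes a python-socketio client on the RPi connected to the Web App's Socket.IO server; the event names (session_paused, session_resumed) and the server address are placeholders meant to show the flow, not our exact implementation.

import socketio

# Placeholder address for the Web App's Socket.IO server.
sio = socketio.Client()
sio.connect('http://<web-app-host>:8000')

def on_pause_pressed():
    # Called by the DSI display's touchscreen handler when pause is hit;
    # the Web App listens for this event and redirects to the "on break" page.
    sio.emit('session_paused', {'source': 'dsi_display'})

def on_resume_pressed():
    # Mirror event; the Web App redirects back to the "in progress" page.
    sio.emit('session_resumed', {'source': 'dsi_display'})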

Furthermore, on Thursday and Friday, I worked on the RPS display and was able to finish our contingency plan, where the user can take a break and play multiple rounds of RPS by sending the number of rounds to play from the Web App. We can play that number of rounds, and after the final round, the DSI display redirects back to the home screen. I also handle cases such as when no input is pressed during a round. All code written is linked in the first 15 pages of the Google Doc: https://docs.google.com/document/d/17t1l_ZAiQ-rBkdr-X1iHHmmFroEZomcFpkQHONkKzW4/edit?tab=t.0

In the upcoming week, I will keep working on keypad button integration as we refactor our code from the software/touchscreen implementation to relying on the hardware components. I plan to do this with evdev and the USB-C keypad we purchased (listed in our BOM). I have also broken down our frame states, and with Shannon, drew out the expected behavior of the screen for the RPS feature. We plan to use both keypads for RPS and for pausing/resuming a study session.

I have also done a lot of system testing, both for individual components and for integration. Overall, the testing helped me uncover a lot of issues to debug on the software side. In terms of design changes, there is nothing drastic compared to what we wrote in the design report; in fact, I believe we have improved the UI logic and the handling of various inputs. For the WebSocket validation and hardware abstraction, we are still integrating those final aspects, but with thorough unit testing we should be able to combine them without running into major issues.

Team’s Status Report for 12/07/2024

 

Currently, the most significant risk for audio localization is noise introduced through the USB port connection on the Raspberry Pi. When tested on a local computer, the signals are in the expected range, with numerically and visually distinct outputs for different audio input intensities. On the RPi, however, every signal is already amplified, and louder audio is indistinguishable from regular audio input given the resolution range of the Arduino Uno. Lowering the voltage supplied to the microphones to the minimum viable level does not solve this issue either. The contingency plan is to perform the audio processing on the computer that hosts the web app; this would be the system change if we cannot find another solution. No financial cost is incurred by this change.

 

Another item currently being worked on is hardware integration of the keypad with the RPi5. I (Jeffrey) am behind on this, but am working to incorporate it with the existing software code that controls the RPS game and logic. We have already adopted the contingency plan, since the current RPS game and logic work with the touchscreen display, so we have that ready in case integration with the hardware proves more difficult. We haven't made significant changes to the schedule/block diagram, and have built in slack time tonight and Sunday to ensure that we can continue working on system integration while getting our poster and video demo in by Tuesday/Wednesday night.

 

Schedule Changes: 

Our final two-week schedule is included in the following spreadsheet.

https://docs.google.com/spreadsheets/d/1LDyzVmJhB0gaUmDfdhDsG_PiFwMn-f3uwn-MvJZU7K4/edit?usp=sharing

 

TTS unit testing:

For the text-to-speech feature, we tested various word lengths to find an appropriate default string length that avoids high latency when using the feature. This included one sentence (less than 10 words, ~0.5 s), 25 words (~2 s), 50 words (~2.5-3 s), 100 words (~6-8 s), and a whole page of text (~30 s). We also tried jotting down notes as the text was read, and 50 words was determined to be the ideal length in terms of latency and how much one could note down as text is read continuously. During this testing, we also noted the accuracy of the text being read to us; it was accurate almost all of the time, with the exception of certain words that are read slightly oddly (e.g. "spacetime" is read as "spa-cet-ime").
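For reference, the per-length timings above could be reproduced with a simple harness like the sketch below. It assumes pyttsx3 as the TTS engine purely for illustration; our actual TTS library and invocation may differ.

import time
import pyttsx3

def measure_tts_latency(text: str) -> float:
    """Return the wall-clock time (seconds) to speak the given text."""
    engine = pyttsx3.init()
    start = time.perf_counter()
    engine.say(text)
    engine.runAndWait()          # blocks until speech finishes
    return time.perf_counter() - start

# Hypothetical test strings of increasing length (10, 25, 50, 100 words).
samples = {
    "10 words": "word " * 10,
    "25 words": "word " * 25,
    "50 words": "word " * 50,
    "100 words": "word " * 100,
}

for label, text in samples.items():
    print(f"{label}: {measure_tts_latency(text):.2f} s")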

 

Study Session unit testing:
We tested this by creating multiple study sessions, and then testing out the various actions that can occur. We tried:

  • Creating a Standard Study Session on the WebApp and seeing if the robot display timer starts counting up
  • Pausing on the robot display and seeing if the WebApp page switches to Study Session on Break
  • Resuming on the robot display and seeing if the WebApp page switches to Study Session in Progress
  • Ending the Study Session (before the user-set goal duration) on the WebApp and seeing if the robot display timer stops and reverts back to the default display
  • Letting the Study Session run until the user-set goal duration is reached and seeing if the pop-up asking the user if they would like to continue or to end the session appears
  • If the user clicks OK to continue, the robot display continues to count up
  • If the user clicks Cancel to stop, the robot display reverts back to default display, the WebApp shows an End Session screen displaying session duration information.
  • Creating a Pomodoro Study Session with break and study intervals set on the WebApp, and seeing if the robot display starts counting up
  • Waiting until the study interval is reached, and seeing if the break reminder audio “It’s time to take a break!” is played
  • Waiting until the break interval is reached, and seeing if the study reminder audio “It’s time to continue studying!” is played

 

TTS + Study Session (Studying features) system testing:

  • Create a Standard Study Session and test that while the session is in progress (not paused), the user can use the text-to-speech feature with no issues → the user can use TTS, return to the study session, and if the goal duration is reached while the user is using TTS, the pop-up still appears when they return
  • Test that using the text-to-speech feature while no Study Session is ongoing is not allowed (if no study session has been created → redirects to the create-new-study-session page; if a study session has been created but is paused → redirects back to the current study session); a sketch of this guard logic follows the list
  • Test that using text-to-speech feature during a Pomodoro Study Session is not allowed 
    • This was a design change as the break reminder sent during a Pomodoro Study Session would interfere with the audio being played while the user is using the text-to-speech feature
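As referenced above, a minimal sketch of the redirect guard could look like the following. It assumes a Django view (our Web App uses views.py) with hypothetical model and URL names (StudySession, 'create_study_session', 'current_study_session'); it only illustrates the checks, not our exact code.

from django.shortcuts import redirect, render
from .models import StudySession   # hypothetical model name

def tts_page(request):
    # Hypothetical guard: only allow TTS during an active Standard Study Session.
    session = StudySession.objects.filter(user=request.user, ended=False).first()

    if session is None:
        return redirect('create_study_session')    # no session yet -> create one first
    if session.is_paused or session.is_pomodoro:
        return redirect('current_study_session')   # paused or Pomodoro -> back to the session page

    return render(request, 'tts.html')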

 

Audio unit testing: 

For the audio localization simulation, an array of true source angles from 0 to 180 degrees was fed into the program and the estimated angles were recorded. The general trend is consistent with our expectation and our goal of a 5-degree margin of error. The mean error based on the output was 2.00 degrees, computed as the mean absolute error between the true angles and the estimated angles. This mean error provides a measure of the accuracy of the angle estimation: the lower the mean error, the higher the accuracy of the results.

The total time to compute one audio cue is 0.0137 seconds, measured using the time library in Python by recording the start and end times of the computation.
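A minimal sketch of how these two numbers (mean absolute error and per-cue compute time) could be measured is shown below; estimate_angle is a placeholder for our localization routine and the sweep values are illustrative.

import time
import numpy as np

def evaluate_localization(true_angles, estimate_angle):
    """Return (mean absolute error in degrees, average compute time per cue in seconds)."""
    estimates, times = [], []
    for angle in true_angles:
        start = time.perf_counter()
        # estimate_angle is a placeholder: in the simulation it would generate
        # the microphone signals for this true angle and run the localization.
        estimates.append(estimate_angle(angle))
        times.append(time.perf_counter() - start)
    mae = float(np.mean(np.abs(np.asarray(true_angles) - np.asarray(estimates))))
    return mae, float(np.mean(times))

# Example sweep: true source angles from 0 to 180 degrees in 5-degree steps.
# mae, avg_time = evaluate_localization(range(0, 181, 5), estimate_angle)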

 

Audio + Neck rotation testing: 

From audio testing on the microphones, the signal outputs show that a clap lands above a threshold of 600 given the Arduino Uno's ADC pin resolution; this value is used to detect a clap cue. This input is sent to the RPi, and we have observed inconsistent serial communication latency, which is still undergoing more testing: sending the angle computation from the RPi to the Arduino to change the stepper motor's position occasionally takes approximately 20 to 35 seconds, but the most frequently occurring latency is 0.2 seconds or less.

The time it takes for the stepper motor to go from 0 to 180 degrees (our full range of motion) is 0.95 seconds. This value is in line with our expectation of a response in under 3 seconds, assuming a latency of 0.2 seconds.
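For context, the RPi-to-Arduino path described here could be exercised and timed with a short pyserial sketch like the one below; the port name, baud rate, and message format are assumptions for illustration only.

import time
import serial

# Assumed port and baud rate; adjust to the actual Arduino connection.
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=2)

def send_angle(angle_deg: int) -> float:
    """Send a target angle to the Arduino and return the round-trip time in seconds."""
    start = time.perf_counter()
    arduino.write(f"{angle_deg}\n".encode())   # e.g. "90\n"
    arduino.readline()                         # wait for the Arduino's acknowledgement line
    return time.perf_counter() - start

print(f"Round trip: {send_angle(90):.3f} s")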

Accuracy testing for audio localization using the microphones is still in progress; due to the latency issue, enough data hasn't been collected to finalize this. It will be thoroughly addressed in the final report.

 

RPS logic unit testing:

I tested individual functions such as the winner-determination logic, and ensured register_user_choice() printed out that the correct input was processed.

I also tested play_rps_sequence to display the correct Tkinter graphics and verified that the sequence continued regardless of user input timing. For the number of rounds, I haven't been able to test the rounds being sent from the Web App to the RPi5, but with a hardcoded number of rounds I've verified that the game follows the expected behavior and terminates once the number of rounds is reached, including rounds where no input from the user is registered.

Furthermore, I had to verify frame transitions, where we call methods to move from state to state depending on the inputs received. For example, if we are in the RPS_confirm state, we want to transition to a display that shows the RPS sequence once OK is pressed (in this case, OK is an UP arrow key press on the keypad).
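A minimal sketch of this kind of frame-transition handling is shown below. It assumes a Tkinter app with named frames and a hypothetical show_frame helper; the state names mirror the ones above but the structure is illustrative rather than our exact code.

import tkinter as tk

class RPSDisplay(tk.Tk):
    def __init__(self):
        super().__init__()
        self.state = "rps_confirm"
        self.frames = {
            "rps_confirm": tk.Frame(self),   # "Press OK to start" screen
            "rps_sequence": tk.Frame(self),  # rock/paper/scissors countdown screen
            "rps_results": tk.Frame(self),   # round result screen
        }
        self.show_frame("rps_confirm")

    def show_frame(self, name):
        # Hide every frame, then raise the requested one and record the state.
        for frame in self.frames.values():
            frame.pack_forget()
        self.frames[name].pack(fill="both", expand=True)
        self.state = name

    def on_ok_pressed(self):
        # The UP arrow acts as OK; its meaning depends on the current state.
        if self.state == "rps_confirm":
            self.show_frame("rps_sequence")
        elif self.state == "rps_results":
            self.show_frame("rps_confirm")   # proceed to the next round

if __name__ == "__main__":
    RPSDisplay().mainloop()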

 

Finally, I had to handle WebSocket communication to verify that information sent from the Web App could be received on the RPi5 (in the case of handle_set_duration). The next step is ensuring that I can properly parse the message (in the case of handle_set_rounds) for accurate retrieval of the information sent from the Web App.
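As a sketch of the message parsing that still needs to be verified, the handler below shows one way handle_set_rounds could pull the round count out of an incoming Socket.IO message; the payload shape ({'rounds': ...}) and the python-socketio server setup are assumptions for illustration.

import socketio

sio = socketio.Server()

@sio.event
def handle_set_rounds(sid, data):
    """Parse the number of RPS rounds sent from the Web App."""
    try:
        rounds = int(data.get('rounds', 1))   # assumed key name in the JSON payload
    except (TypeError, ValueError):
        rounds = 1                            # fall back to a single round on bad input
    print(f"Web App requested {rounds} round(s) of RPS")
    return rounds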

 

For overall system tests, I've been working on the RPS game flow, testing that 3 rounds can be played and that the game produces the expected behavior. I found some cases of premature termination of the sequence display or missed inputs, but was able to fix them through iterative testing until the game worked fully as expected using the touchscreen buttons. The next step is eliminating the touchscreen aspect and transitioning the code to use keypad inputs with evdev (the Linux input-event interface, which the RPi5 supports). Web App integration also still needs to be worked on for the RPS game. For the study session, Shannon and I have worked on those aspects and ensured that the study session behaves fully as expected.

I also have to start doing overall system tests on the hardware, which in this case means the keypad presses. I want to verify that the evdev library can properly detect hardware input on the RPi5 and translate it into left → "rock", down → "paper", right → "scissors". For the up key, we expect different behavior depending on self.state: in the rps_confirm state, an up press acts as a call to start_rps_game; on the results screen, an up press proceeds to the next round; and in the last round of a set of RPS games, the up arrow acts as a return-home button, with the statistics of those rounds being sent to the Web App.
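A minimal sketch of the evdev key mapping described above is included below, assuming the keypad shows up as an input device under /dev/input; the device path and the stubbed game hooks are placeholders for our existing RPS code.

from evdev import InputDevice, ecodes

def register_user_choice(choice):
    # Stand-in for the existing RPS game hook.
    print(f"user chose {choice}")

def handle_up_press():
    # UP is context-dependent (confirm, next round, or return home);
    # the real code dispatches based on the display's current state.
    print("OK / confirm pressed")

# Placeholder device path; find the real one with evdev.list_devices().
keypad = InputDevice('/dev/input/event0')

KEY_TO_CHOICE = {
    ecodes.KEY_LEFT: 'rock',
    ecodes.KEY_DOWN: 'paper',
    ecodes.KEY_RIGHT: 'scissors',
}

for event in keypad.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:   # key-down events only
        if event.code in KEY_TO_CHOICE:
            register_user_choice(KEY_TO_CHOICE[event.code])
        elif event.code == ecodes.KEY_UP:
            handle_up_press()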

Jeffrey’s Status Report for 11/30/2024

This week, I focused on the final presentation slides and Web App/DSI display integration.

 

I focused on implementing and debugging the communication between the DSI display and the web app using WebSockets. This involved ensuring real-time synchronization of start, pause, and end session actions across both interfaces. While I am still working on the bilateral communication, I am happy to say that the Web App to RPi5 connection is working very well: we can input the dynamic session parameters (name, duration, etc.) and have them sent from the Web App to the RPi5, with HTML redirection on the Web App side as well.

I also worked to verify that events emitted from the DSI display (e.g., end session) triggered appropriate changes on the web app. This required adding debugging tools like console.log and real-time WebSocket monitoring.

 

I will attach screenshots showing WebSocket debugging and a synchronized end session.

I am currently making progress, but will dedicate additional hours tonight and Sunday to end-to-end testing of all SBB components.

I also want to finalize RPS game integration and real-time user interaction using the DSI display and WebSockets. I have existing code that ensures the RPS logic is sound, so we just need to integrate this with the display. I also created code this past week that allows the button inputs from the keypad to be processed. We chose the left arrow to be rock, the down arrow to be paper, the right arrow to be scissors, and the up arrow to be the input selection.

 

One risk that could jeopardize the success of the project is WebSocket synchronization issues, so we are working to ensure that we have both inputs sent from the RPi5 to the Web App (currently working) and HTML redirection on the Web App side (work in progress). After discussing with Mahlet today, we realized we should implement more changes in views.py. If I can work with Shannon tomorrow, I am confident we can have redirection working by the middle of next week, which would fulfill the second part of the bilateral communication we want (from the RPi5 to the Web App).

 

We also hope to have our system ready so we can run user survey feedback/tests to see if the SBB is actually helpful in both interaction and productivity when studying.

I have used tools like event listeners and logging to trace and debug issues, and have gotten more familiar with HTML and JavaScript, as well as with incorporating socket events in Python to trigger the appropriate responses. I have also added error handling and reconnection logic to improve reliability.

 

Jeffrey’s Weekly Report for 11/16/2024

One portion of the project I am working on is the GPIO inputs → Raspberry Pi 5 → Web App connection/pipeline. To test the GPIO, we connect the arrow keys via wires to GPIO pins and verify the corresponding output on the terminal. Once that is working, we can ensure that signals sent via GPIO can be processed by the Raspberry Pi. From there, we use Socket.IO for our Web App (acting as client) to listen for messages sent from the RPi5 (acting as server). Our goal is to validate that the arrow keypad increases the SBB's interactivity with the user. In this case, we test that the robot can seamlessly transition between states, such as break screen to home screen, or play games of rock, paper, scissors with the user. Our main validation goal is survey feedback, to see whether users engaging with the SBB say it made a difference to their study session compared to a group of users studying normally. Another goal is to test Web App latency, to ensure that communication between the SBB and the Web App is <250 ms, so users can easily set up their study sessions to promote productivity. For the display, we want users to be able to interact with the timer; our validation goal is that the timer is displayed clearly on the DSI display and that users can easily input their study duration.

In summary, our goals are categorized as such:

  1. Ensure Web App (client) can communicate with SBB (server) via WebSockets and accurate data can be received on both ends to display correct information (e.g. WebApp can display study session being paused/resumed, and SBB can display correct information: timer stopping/resuming, synced together)
  2. We also desire seamless communication between subsystems with minimal latency (<250ms per our design report).

In the Google Doc below (pages 1 to 8), I document some of the work I've done this week, with most of my work being in the actual GitHub repo.

https://docs.google.com/document/d/17t1l_ZAiQ-rBkdr-X1iHHmmFroEZomcFpkQHONkKzW4/edit?usp=sharing

We want to verify that the RPi GPIO inputs are wired correctly and can be processed on the RPi side. Furthermore, we want to test the Socket.IO connections, such that submissions from the Web App are valid and processable by the RPi via emit and listen for specific signals such as "break" or "resume". I am currently able to verify timer functionality: it properly ticks up, and when the duration is entered, the timer starts from 00:00:00 and stops when the goal duration is reached. To adapt this, we want the DSI display to enter a break or default home state depending on whether the user pauses the timer or decides to end a session early.
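To check these emit/listen signals against our <250 ms latency goal, a simple round-trip measurement like the sketch below could be used. It assumes a python-socketio client on the RPi and a Web App server that acknowledges a hypothetical 'ping_check' event; the event name and address are placeholders.

import time
import socketio

sio = socketio.Client()
sio.connect('http://<web-app-host>:8000')   # placeholder address

def round_trip_ms() -> float:
    """Emit an event and wait for the server's acknowledgement, returning latency in ms."""
    start = time.perf_counter()
    sio.call('ping_check', {'sent_at': start})   # call() blocks until the server acks
    return (time.perf_counter() - start) * 1000

samples = [round_trip_ms() for _ in range(20)]
print(f"avg latency: {sum(samples) / len(samples):.1f} ms (goal < 250 ms)")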

 

For validation, we want to simulate a full study session for the user and use feedback surveys to gauge how students feel about their studying when they use the SBB versus studying without any interactive aid.

 

I am currently on track once I finish the Web App to RPi communication, which I plan to work on on Sunday. In terms of future goals, I want to set up Rock Paper Scissors, adapting the current HTML/JS into a form where we can communicate button inputs to the RPi5, which can then send that information independently to either the DSI display (to show sprites) or the Web App (to show game history).

Jeffrey’s Status Report for 11/09/2024

This week, I've been working on system integration, in particular between the RPi 5 and the DSI display. Over the course of the week, we have done a lot of work connecting the RPi5 to the DSI display and ensuring that we can show the timer ticking up, as well as the break and home screens. Currently, we have the touchscreen working, which is our mitigation plan: the user is able to pause, reset, and continue a study session straight from the DSI display. Our next goal is to have that information recorded by the Web App. Via sockets, we will be able to pause on the DSI display, or, in the future, with the GPIO buttons, so that we can take in button inputs, directly pause or resume the study session, and have the Web App process the inputs with low latency.

Below are the pictures of the touch screen display working:

https://docs.google.com/document/d/1qfM2qQyuzhxMKzobmMc1_AZmue12hhRD4ch7_ocTri8/edit?usp=sharing

The three photos from top to bottom show:

  1. DSI display connection via ribbon connector
  2. Break time screen
  3. Timer counting up combined with the option to pause/resume/end study session

Furthermore, this past week we also built the robot base using laser cutting, and I have worked on Web App processing. As Shannon has set up the Web App, my goal is to parse data from the JSON so we can take in inputs from the Web App and send them to the RPi5. For instance, we want the user to be able to input a duration for the study session; we have a function that parses the duration from the JSON dictionary and sends it to the timer function, so the timer can count up for the study session.

 

In the upcoming week, my goal is to work on integrating the GPIO inputs with the RPi5 and Web App. Our goal is for inputs to be sent from GPIO button presses to the RPi5, and then that information can be sent to the Web App via sockets, so we can store the information. For instance, we want to be able to have button presses for rock/paper/scissors options, and for the Web App to record the win/loss/tie accordingly, as well as decreasing the number of rounds.

Jeffrey’s Weekly Status Report for 11/02/2024

For this week, I focused on code for certain functionalities. Since we had just gotten the robot base laser cut, we haven't been able to piece together the base to test properties such as motor movements. However, I have written code that will move the motors left or right based on the microphone readings:

Servo Motor Code

With the current code, our goal is to have the single servo motor rotate between -90 and 90 degrees. Once we can test with the physical servo motors connected to the DSI display, we can increase the mapping range so that a sound source is tracked appropriately. Once we have the code for pinpointing the sound based on the microphones, we can be more precise about how we rotate the DSI display. Furthermore, we also need to test physical sound inputs over varying time delays, to see if there are latency issues between when the robot hears the sound and when the DSI display rotates.
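Since the servo code itself is linked rather than shown here, below is a minimal sketch of the kind of angle-to-servo mapping involved, using RPi.GPIO software PWM on an assumed pin; the pin number and duty-cycle calibration are illustrative and would need tuning on the real hardware.

import RPi.GPIO as GPIO
import time

SERVO_PIN = 18  # assumed GPIO pin for the servo signal wire

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz servo signal
pwm.start(7.5)                  # roughly centered (0 degrees)

def rotate_to(angle_deg: float) -> None:
    """Map an angle in [-90, 90] degrees to an approximate 5-10% duty cycle."""
    duty = 7.5 + (angle_deg / 90.0) * 2.5
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.5)             # give the servo time to reach the position

rotate_to(45)    # example: turn toward a sound detected at +45 degrees
pwm.stop()
GPIO.cleanup()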

 

DSI Display Code

For the DSI display, we currently have code that helps us manage transitions between the possible display screen states. For instance, we need to handle transitions from the study screen to the break screen, or from the study screen to the RPS screen, etc. Currently, we have code for three screen options: study screen, break screen, and home screen. The next update I have to make is the screens for the RPS game. This would include win, lose, and tie screens, as well as the appropriate robot celebration face to reflect the result of that round, i.e. a happy face if the user wins, neutral if it's a tie, and a sad face if the player loses.

 

In the upcoming week, we plan to focus on assembling the robot base, to ensure that we can start placing parts in the base, such as the speakers for TTS, the microphones, and the servo motors. I am slightly behind schedule on revising the RPS logic from last week; I still have to edit some parts and focus on integration with the web app, since the web app determines how many rounds to play. By completing the logic, we can focus on the HTML display for all the possible screen options. From here, we can start testing to ensure that the latency of the DSI display is sufficient for transitioning between different screens.

Jeffrey’s Status Report for 10/26/2024

This week was focused on the ethics assignment as well as the weekly meeting, where we were able to discuss our ideas in more depth with Professor Bain and Ella. Our goal was to figure out ways to connect the microphones and WebSockets to the RPi.

 

On Thursday, the group met up and we worked on implementing the WebSockets. Our goal was to set up the RPi for the first time and write some Python code that would enable the web app to act as a client and the RPi to act as a server, sending messages to the client, with the client confirming that a message has been received (this was tested by having the web app change its text when a message arrived from the server). The next step is to have the server serve information to the web app, and have the web app store information from the RPi (information such as study sessions completed or games won/lost in RPS).

Since we are still waiting for parts, we are a bit behind on the actual construction of the robot. Mahlet and I still have to build the robot base in acrylic at Techspark. We plan to first create the base of the robot and then drill holes as needed to add internal components such as the microphones/speakers. We want to have the robot built so we can ensure that all the components we want to use fit within the base. I have also been working on fine-tuning the GPIO pin logic. Using the GPIO library, button presses can be read by the RPi and processed accordingly.

 

The biggest upcoming goal is testing the GPIO library with buttons wired directly to the RPi. I will also work on the RPS logic that was intended to be completed last week. My primary goal there is to make sure the algorithm can randomly select an option out of R/P/S and then, depending on win/loss/tie, output the result accordingly.

Jeffrey’s Status Report for 10/19/2024

Since last week, my main focus was finishing the design report and preparing for the week 6 tasks on the Gantt chart. Over Fall Break, I worked on preparing for three tasks: the servo motors, the speakers, and the GPIO pins that connect the RPi to the buttons on the robot base.

For the GPIO pins, I plan to use Python with the GPIO library and have written preliminary code:

import RPi.GPIO as GPIO
import time

# Use Broadcom pin-numbering scheme
GPIO.setmode(GPIO.BCM)

# Set up the GPIO pin (e.g., pin 17) as input with internal pull-down resistor
button_pin = 17
GPIO.setup(button_pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def button_callback(channel):
    print("Button was pressed! 'X' has been selected.")

# Add an event detection for button press
GPIO.add_event_detect(button_pin, GPIO.RISING, callback=button_callback, bouncetime=200)

try:
    # Keep the program running to detect button presses
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    # Clean up the GPIO setup on exit
    GPIO.cleanup()

For testing and validation, I will look into ways to ensure that the latency of button inputs is under 250 ms, and also look into methods to test debouncing, to ensure that multiple unintended button presses aren't triggering unintended inputs.

 

For the speakers, Mahlet put in the order for a USB speaker. This ties into one of my tasks: hello/goodbye greetings when the robot is powered on or off. From button presses, the GPIO pins can act as an input to the RPi and call the Python TTS library functions to trigger the voice greetings. Since the speaker is USB, wiring it to the RPi should be trivial, but we want to ensure that latency won't be an issue and that the RPi can take in inputs that produce the corresponding correct output from the speakers.

 

Finally, we have the servo motors, which come with a bracket mount. I looked into the specs of the servo motors we bought: at 1.6 inches, we would need a <5 inch bracket mount that we can then connect the DSI display to.

 

For the upcoming week, my goal is to simulate the rotation in Matlab, to ensure that the X and Y axis rotation is achievable. Furthermore, Mahlet and I will be meeting in Techspark early next week to work on the acrylic base. Once we have the base, we can more easily measure dimensions for the bracket mounts and buttons, which will segue into testing the features we have implemented.

I am currently behind on the Gantt chart task for testing the speakers, since we haven't acquired them yet. I have prepared the Python code necessary to test 2-3 fixed phrases.

Jeffrey’s Status Report For 10/05/2024

For this week, I am currently on track with the Gantt schedule. I will start prioritizing the coding of certain functionalities, such as the speaker output from the robot that we plan to drive via TTS, and the servo motors that control horizontal/vertical rotation and up/down translation. I also placed an additional order for microphones, but since the ECE inventory might be out, we will look for an alternative on the market.

Mahlet and I have a task on our Gantt chart to have the robot base complete by this week. We are behind schedule on this task, but plan to make time after our midterms to meet at Techspark. There, we will focus on building the acrylic base, ensuring that the body compartment has space to store the speaker and other components such as the RPi 5. This will segue into our next task, which is ensuring that the servo motors mounted on the robot body (connected to the DSI display) function properly.

To prepare for those tasks, next week I will focus on creating the logic to connect the speakers, as well as for the servo motors. This past week, I researched the right kind of speaker that would fit within the body compartment while also being volume-controlled, so the user can alter the settings as necessary. Over the course of this week, I will look into specific speakers that our team can acquire, since the ECE inventory didn't have the types we were looking for. Furthermore, we will order the servo motors this week, so next weekend, once we have the base complete, we can add them to the build and work on integrating them into the robot body.

Jeffrey’s Status Report for 9/28/2024

For this week, I spent one meeting session with Mahlet and Shannon working on our system specification design on paper before I transcribed it into a diagram for the Design Presentation. We broke down our StudyBuddyBot into software and hardware components, which consist of the Web Application companion app for the software and the Raspberry Pi 5 8GB for the hardware. Within the hardware and software components, we broke down the inputs and outputs and described the tasks we have to complete in green. The boxes in purple, combined with the Raspberry Pi 5 in green, are components that we will buy.

Earlier in the week, I spent some time following my Gantt Chart Schedule to decide on the right speaker and microphone components to use. With the approval of my teammates, I was able to place orders through the Capstone Inventory for those parts, along with the Raspberry Pi we plan to use.

 

Afterwards, I started looking into the logic for coding certain features, which I will continue doing into the next week, since once we have our components, our goal is to start working on combining those components with our Raspberry Pi and ensuring that our features/functionalities work as planned.

 

Since I am currently on schedule, I can devote more time in the upcoming week to understanding the specification sheets of our components and doing analysis/simulation on our design, to ensure that the features we plan for our StudyBuddyBot will be feasible and work.