Jeffrey’s Status Report for 12/07/2024

After finishing the final presentation slides earlier this week, the team got together to plan a final-week schedule. There, we broke down the remaining tasks we had to complete. My priorities were finishing the Study Session feature with Shannon, as well as the RPS (rock, paper, scissors) feature.

While the study session took longer than anticipated, we were able to make sure that the study session code (in DSIdisplay.py, studysession_start.html, studysession_inprog.html, and studysession_end.html) contains all the necessary logic for the expected behavior. We abided by the choices in our design report and ensured that all features were implemented. All that is left is to verify the study session's integration with other features, as well as pushing our Web App to AWS servers.

For validation and testing, we worked on verifying individual components, such as the duration being sent to the RPi, or pause/resume actions triggering the appropriate HTML redirection. For instance, we see “study session in progress” on the Web App, and when we press pause on the DSI display via the touchscreen, we see the Web App instantly change to “study session on break”. This kind of low-latency behavior is exactly what we expected.

Similarly, for the study session end, we address multiple cases: either a session is ended early (from the Web App), in which case we need to handle the timer stopping completely, or a study session ends upon reaching the desired goal duration. In the latter case, we had to handle two additional cases: 1) the student chooses to stop studying, and we go back to the default home screen, or 2) the student continues studying, in which case there is no longer a target goal duration and the student can study as they please, but we keep the timer running (from where they left off) so they can track their total study time while retaining pause/resume functionality.
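
To make the end-of-session branching concrete, here is a minimal sketch of the state logic (class and method names are illustrative, not the exact ones in DSIdisplay.py):

    class StudySession:
        """Minimal model of the end-of-session cases described above."""

        def __init__(self, goal_seconds):
            self.goal_seconds = goal_seconds  # becomes None if the user studies past the goal
            self.elapsed = 0
            self.paused = False
            self.state = "IN_PROGRESS"

        def tick(self):
            """Called once per second while a session is active."""
            if self.state == "IN_PROGRESS" and not self.paused:
                self.elapsed += 1
                if self.goal_seconds is not None and self.elapsed >= self.goal_seconds:
                    self.state = "GOAL_REACHED"  # ask: stop studying, or keep going?

        def pause(self):
            self.paused = True    # Web App shows "study session on break"

        def resume(self):
            self.paused = False   # Web App shows "study session in progress"

        def end_early(self):
            # Ended from the Web App before the goal: stop the timer completely
            self.state = "ENDED"

        def answer_goal_prompt(self, keep_studying):
            if keep_studying:
                # No more target duration; timer keeps running from where it left off
                self.goal_seconds = None
                self.state = "IN_PROGRESS"
            else:
                self.state = "ENDED"  # back to the default home screen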

Furthermore, on Thursday and Friday, I worked on the RPS display and was able to finish our contingency plan, where the user can take a break and play multiple rounds of RPS by sending a number of rounds to play from the Web App. We are able to play X rounds, and after the final round, the DSI display redirects back to home. I also handle cases such as when no inputs are pressed. All code written is linked in the first 15 pages of the Google Doc: https://docs.google.com/document/d/17t1l_ZAiQ-rBkdr-X1iHHmmFroEZomcFpkQHONkKzW4/edit?tab=t.0
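
A rough sketch of the multi-round flow (function names are placeholders; the real code is in the linked doc):

    import random

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}

    def play_rps_session(num_rounds, get_user_move, show_screen):
        """Play the number of rounds sent from the Web App, then redirect home.

        get_user_move() returns a move string, or None if no input was pressed.
        """
        for _ in range(num_rounds):
            user = get_user_move()
            if user is None:
                show_screen("no_input")  # handle the no-inputs-pressed case
                continue
            robot = random.choice(MOVES)
            if user == robot:
                show_screen("tie")
            elif (user, robot) in BEATS:
                show_screen("win")
            else:
                show_screen("lose")
        show_screen("home")  # after the final round, redirect back to home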

In the upcoming week, I will keep working on keypad button integration, as we refactor our code from the software/touchscreen approach to relying only on the hardware components. I plan to do this with evdev and the USB-C keypad we purchased (listed in our BOM). I have also broken down our frame states, and with Shannon, we drew out the expected behavior of our screen for the RPS feature. We plan to use both keypads for RPS and for pausing/resuming study sessions.
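
As a starting point for the evdev work, a minimal read loop might look like this (the /dev/input path is a placeholder; the actual event node depends on how the USB keypad enumerates):

    from evdev import InputDevice, categorize, ecodes

    # Placeholder device node; find the real one by listing /dev/input/event*
    keypad = InputDevice("/dev/input/event0")

    for event in keypad.read_loop():
        if event.type == ecodes.EV_KEY:
            key = categorize(event)
            if key.keystate == key.key_down:    # act on press, not release
                print("Pressed:", key.keycode)  # e.g., 'KEY_LEFT', 'KEY_UP'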

I have also done a lot of system testing, both individual and integration. Overall, the testing helped me uncover a lot of issues to debug on the software side. In terms of design changes, there aren’t any drastic changes from what we wrote up in the design report; in fact, I believe we have been able to make improvements to the UI logic and the handling of various inputs. For the WebSocket validation and hardware abstraction, we are still working on integrating those final aspects, but with thorough unit testing, we should be able to combine them without running into major issues.

Jeffrey’s Status Report for 11/30/2024

This week, I was focused on the final presentation slides and Web App/DSI display integration.


I focused on implementing and debugging the communication between the DSI display and the web app using WebSockets. This involved ensuring real-time synchronization of start, pause, and end session actions across both interfaces. While I am still working on the bilateral communication, I am happy to say that the Web App to RPi5 connection is working very well: we are able to input the dynamic session parameters (name, duration, etc.) and have them sent from the Web App to the RPi5, with HTML redirection on the Web App side as well.
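
For reference, the RPi5-side handler is conceptually along these lines (a sketch assuming the python-socketio package; our actual event and field names may differ):

    import eventlet
    import socketio

    sio = socketio.Server(cors_allowed_origins="*")
    app = socketio.WSGIApp(sio)

    @sio.event
    def start_session(sid, data):
        # data comes from the Web App form, e.g. {"name": "...", "duration": 3600}
        print("Starting session:", data["name"], data["duration"])
        sio.emit("session_started", {"ok": True}, to=sid)  # confirm back to the Web App

    if __name__ == "__main__":
        eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 5000)), app)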

I also worked to verify that events emitted from the DSI display (e.g., end session) triggered the appropriate changes on the web app. This required adding debugging tools like console.log statements and real-time WebSocket monitoring.


I will attach screenshots showing WebSocket debugging and a synchronized end session.

I am currently on track, but will dedicate additional hours tonight and Sunday to end-to-end testing of all SBB components.

I also want to finalize the RPS game integration and real-time user interaction using the DSI display and WebSockets. I have existing code that ensures the RPS logic is sound, so we just need to integrate it with the display. This past week, I also wrote code that allows button inputs from the keypad to be processed. We chose the left arrow to be rock, the down arrow to be paper, the right arrow to be scissors, and the up arrow to confirm the selection.
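
The mapping itself is small; a sketch (keycode names assume a standard arrow keypad read via evdev):

    # Arrow keys -> RPS moves, per the scheme above
    KEY_TO_MOVE = {"KEY_LEFT": "rock", "KEY_DOWN": "paper", "KEY_RIGHT": "scissors"}
    SELECT_KEY = "KEY_UP"

    def process_keypress(keycode, current_choice):
        """Return (choice, confirmed): highlight a new move or confirm the current one."""
        if keycode in KEY_TO_MOVE:
            return KEY_TO_MOVE[keycode], False
        if keycode == SELECT_KEY and current_choice is not None:
            return current_choice, True   # up arrow locks in the selection
        return current_choice, False      # ignore any other key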


One risk that could jeopardize the success of the project is WebSocket synchronization issues, so we are working to ensure that we have both inputs sent from the RPi5 to the Web App (currently working) and HTML redirection on the Web App side (work in progress). After discussing with Mahlet today, we realized we should implement more changes in views.py. If I can work with Shannon tomorrow, I am confident we can have redirection working by the middle of next week, which would fulfill the second half of the bilateral communication we want (from RPi5 to Web App).
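
On the RPi5 side, that second direction boils down to an emit like the following (event and field names are illustrative); the Web App’s JavaScript then listens for the event and performs the redirect:

    def notify_session_ended(sio, session_id):
        # Tell the Web App the session ended on the device, so the browser can
        # redirect (handled in the Web App's JS, with views.py serving the page)
        sio.emit("session_ended", {
            "session_id": session_id,
            "redirect": "/studysession/end/",  # illustrative URL
        })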


We also hope to have our system ready so we can run user surveys/tests to see whether the SBB is actually helpful for both interaction and productivity when studying.

I have used tools like event listeners and logging to trace and debug issues, and I have gotten more familiar with HTML and JavaScript, as well as with incorporating socket events in Python to trigger the appropriate responses. I have added robust error handling and reconnection logic to improve reliability.


Jeffrey’s Weekly Report for 11/16/2024

One portion of the project that I am working on is the GPIO inputs -> Raspberry Pi 5 -> Web App connection/pipeline. To test the GPIO, we connect the arrow keys via wires to GPIO pins and verify the corresponding output on the terminal. When that is working, we can be confident that signals sent via GPIO can be processed by the Raspberry Pi. From there, we use Socket.IO for our Web App (acting as the client), which listens for messages sent from the RPi5 (acting as the server).

Our goal is to validate that the arrow keypad increases the SBB's interactivity with the user. In this case, we test that the robot can seamlessly transition between states, such as break screen to home screen, or play games of rock, paper, scissors with the user. Our main goal in validation is survey feedback: to see if users engaging with the SBB say that it made a difference to their study session, compared to a group of users who are studying normally. Another goal is to test Web App latency, to ensure communication between the SBB and the Web App is <250 ms. That way, users can easily set up their study sessions to promote productivity. For the display, we want users to be able to interact with the timer. Our validation goal is that the timer is displayed clearly on the DSI display and users can easily input their study duration.

In summary, our goals are categorized as such:

  1. Ensure the Web App (client) can communicate with the SBB (server) via WebSockets and that accurate data is received on both ends to display the correct information (e.g., the Web App can display a study session being paused/resumed, and the SBB can display the correct information: the timer stopping/resuming, synced together).
  2. We also desire seamless communication between subsystems with minimal latency (<250 ms per our design report); a rough latency-check sketch follows this list.
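
For goal 2, a round-trip check over the same socket can tell us whether we are within the 250 ms budget (a sketch assuming the python-socketio client; the server would need to echo a "latency_pong" event back):

    import time
    import socketio

    sio = socketio.Client()
    sent = {}

    @sio.on("latency_pong")
    def on_pong(data):
        rtt_ms = (time.monotonic() - sent["t"]) * 1000.0
        print(f"round trip: {rtt_ms:.1f} ms ({'within' if rtt_ms < 250 else 'over'} budget)")

    sio.connect("http://raspberrypi.local:5000")  # placeholder server address
    sent["t"] = time.monotonic()
    sio.emit("latency_ping")  # server echoes this back as "latency_pong"
    sio.sleep(1)              # give the reply time to arrive
    sio.disconnect()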

In the Google Doc below (starting from page 1 to page 8), I document some of the work I’ve done this week, with most of my work being on the actual GitHub repo.

https://docs.google.com/document/d/17t1l_ZAiQ-rBkdr-X1iHHmmFroEZomcFpkQHONkKzW4/edit?usp=sharing

We want to verify that the RPi/GPIO inputs are wired correctly and can be processed on the RPi side. Furthermore, we want to test the Socket.IO connections, such that the submissions from the Web App are valid and processable by the RPi, via emit and listen calls for specific signals such as “break” or “resume”. I am currently able to verify timer functionality: it properly ticks up, and when the duration is entered, the timer starts from 00:00:00 and stops when the goal duration is reached. To adapt this, we want the DSI display to enter a break or default home state depending on whether the user pauses the timer or ends a session early.
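
The timer behavior can be modeled as a simple pausable count-up (a sketch; the real implementation lives in our display code):

    import time

    class CountUpTimer:
        """Counts up from 00:00:00; supports pause/resume and a goal duration."""

        def __init__(self, goal_seconds):
            self.goal_seconds = goal_seconds
            self.elapsed = 0.0
            self.running = False
            self._last = None

        def start(self):
            self.running = True
            self._last = time.monotonic()

        def pause(self):
            self._accumulate()
            self.running = False

        resume = start  # resuming just restarts the reference clock

        def _accumulate(self):
            if self.running:
                now = time.monotonic()
                self.elapsed += now - self._last
                self._last = now

        def reached_goal(self):
            self._accumulate()
            return self.elapsed >= self.goal_seconds

        def display(self):
            """Format as HH:MM:SS for the DSI display."""
            self._accumulate()
            s = int(self.elapsed)
            return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}"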


For validation, we want to simulate a full study session for the user and use feedback surveys to gauge how students feel about their studying when they use the SBB versus without any interactive aid.


I am currently on track once I finish the Web App to RPi communication, which I plan to work on on Sunday. In terms of future goals, I want to set up Rock Paper Scissors, which means adapting the current HTML/JS into a form where we can communicate button inputs to the RPi5, which can then send that information independently to either the DSI display (to show sprites) or the Web App (to show game history).

Jeffrey’s Status Report for 11/09/2024

For this week, I’ve been working on system integration, in particular between the RPi 5 and the DSI display. Over the course of the week, we have done a lot of work on connecting the RPi5 to the DSI display and ensuring that we can show the timer ticking up, as well as the break and home screens. Currently, we have the touch screen working, which is our mitigation plan: the user is able to pause, reset, and continue a study session straight from the DSI display. Our next goal is to have that information recorded by the Web App. Via sockets, we will be able to pause from the DSI display or, in the future, with the GPIO buttons. So we will be able to take in button inputs, directly pause or resume the study session, and have the Web App process the inputs with low latency.

Below are the pictures of the touch screen display working:

https://docs.google.com/document/d/1qfM2qQyuzhxMKzobmMc1_AZmue12hhRD4ch7_ocTri8/edit?usp=sharing

The three photos from top to bottom show:

  1. DSI display connection via ribbon connector
  2. Break time screen
  3. Timer counting up combined with the option to pause/resume/end study session

Furthermore, this past week, we also built the robot base using laser cutting, and I worked on Web App processing. As Shannon has set up the Web App, my goal is to parse data from the JSON so we can take in inputs from the Web App and send them to the RPi5. For instance, we want the user to be able to input a duration for the study session, and we have a function that can parse the duration from the JSON dictionary and send it to the timer function, so the timer can count up for the study session.
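
The parsing step itself is small; a sketch (the field name and format are illustrative of our JSON payload):

    import json

    def parse_duration_seconds(payload):
        """Extract the study-session duration, in seconds, from the Web App's JSON."""
        data = json.loads(payload)
        h, m, s = (int(part) for part in data["duration"].split(":"))
        return h * 3600 + m * 60 + s

    # Example: a 90-minute session -> 5400 seconds fed to the timer function
    seconds = parse_duration_seconds('{"duration": "01:30:00"}')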


In the upcoming week, my goal is to work on integrating the GPIO inputs with the RPi5 and Web App. Our goal is for inputs to be sent from GPIO button presses to the RPi5, and then for that information to be sent to the Web App via sockets, so we can store it. For instance, we want to have button presses for the rock/paper/scissors options, and for the Web App to record the win/loss/tie accordingly, as well as decrement the number of remaining rounds.

Jeffrey’s Weekly Status Report for 11/02/2024

For this week, I was focused on code for certain functionalities. Since we had just gotten the robot base laser cut, we haven’t been able to piece together the robot base to test properties such as motor movements. However, I have currently written code that will move the motors left or right based on the microphone readings, as such:

Servo Motor Code

With the current code, our goal is to have the one servo motor rotate between -90 and 90 degrees. Once we are able to test with the physical servo motors connected to the DSI display, we can tune the mapping from a sound's direction to the display's rotation appropriately. Once we have the code for pinpointing the sound based on the microphones, we can be more precise in how we rotate the DSI display. Furthermore, we also need to test physical sound inputs over varying time delays, to see if there are latency issues between when the robot hears a sound and when the DSI display rotates.
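
For context, one way to drive that sweep on the RPi is gpiozero’s AngularServo (a sketch; the GPIO pin and the sound-direction input are placeholders):

    from gpiozero import AngularServo
    from time import sleep

    # Placeholder pin; min/max match our intended -90..90 degree range
    servo = AngularServo(17, min_angle=-90, max_angle=90)

    def point_toward(sound_angle_deg):
        """Rotate the display toward the estimated sound direction."""
        servo.angle = max(-90, min(90, sound_angle_deg))  # clamp to physical range

    point_toward(45)  # e.g., a sound estimated 45 degrees to the right
    sleep(1)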


DSI Display Code

For the DSI display, we currently have code that helps us manage transitions between the possible display screen states. For instance, we need to handle transitions from the study screen to the break screen, or from the study screen to the RPS screen, etc. Currently, we have code that lets us work with three screen options: study screen, break screen, and home screen. The next updates I have to make are the screens for the RPS game. This includes win, lose, and tie screens, as well as the appropriate robot face to reflect the result of each round, i.e., a happy face if the user wins, neutral for a tie, and a sad face if the user loses.
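
Conceptually, the transition handling is a small table like the following (a sketch; the screen names are ours, the structure is illustrative and already includes the planned RPS screens):

    # Allowed screen transitions for the DSI display
    TRANSITIONS = {
        "home":  {"study", "rps"},
        "study": {"break", "home"},
        "break": {"study", "home"},
        "rps":   {"rps_win", "rps_lose", "rps_tie", "home"},
        "rps_win":  {"rps", "home"},
        "rps_lose": {"rps", "home"},
        "rps_tie":  {"rps", "home"},
    }

    def change_screen(current, target):
        if target not in TRANSITIONS.get(current, set()):
            raise ValueError(f"illegal transition: {current} -> {target}")
        return target  # caller renders the corresponding HTML screen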


In the upcoming week, we plan to focus on assembling the robot base, to ensure that we can start putting parts in it, such as the speakers for TTS, the microphones, and the servo motors. I am slightly behind schedule on revising the RPS logic from last week: I still have to edit some parts and focus on integration with the web app, since the web app determines how many rounds to play. By completing the logic, we can focus on the HTML display for all the possible screen options. From here, we can start testing to ensure that the latency of the DSI display is sufficient for transitioning between different screens.

Jeffrey’s Status Report for 10/26/2024

This week was focused on the ethics assignment as well as the weekly meeting, where we were able to discuss our ideas more in depth with Professor Bain and Ella. Our goal was to figure out ways to connect the microphone and WebSockets to the RPi.


On Thursday, the group met up and we worked on implementing the WebSockets. Our goal was to set up the RPi for the first time and write some Python code that would enable the web app to act as a client and the RPi to act as a server: the server sends messages to the client, and the client confirms that a message has been received (this was tested by having the web app change its text when a message arrived from the server). The next step is to have the server serve information to the web app, and to have the web app store information from the RPi (information such as study sessions completed or games won/lost in RPS).

Since we are still waiting for parts, we are a bit behind on the actual construction of the robot. Mahlet and I still have to build the robot base in acrylic at Techspark. We plan to create the base of the robot and then drill holes as needed to add internal components such as the microphones/speakers. We want to have the robot built so we can ensure that all the components we want to use fit within the base. I have also been working on fine-tuning the GPIO pin logic. By using the GPIO library, button presses can be read by the RPi and processed accordingly.


The biggest upcoming goal is testing the GPIO library with buttons wired directly to the RPi. I will also work on the RPS logic that was intended to be completed last week. My primary goal there is to make sure that the algorithm can randomly select an option out of R/P/S and then, depending on win/loss/tie, output the result accordingly.

Jeffrey’s Status Report for 10/19/2024

Since last week, my main focus was finishing the design report and preparing for the week 6 tasks on the Gantt Chart. Over Fall Break, I prepared for three tasks: the servo motors, the speakers, and the GPIO pins that connect the RPi to the buttons on the robot base.

For the GPIO pins, I plan to use Python with the GPIO library and have written preliminary code:

    import RPi.GPIO as GPIO
    import time

    # Use Broadcom pin-numbering scheme
    GPIO.setmode(GPIO.BCM)

    # Set up the GPIO pin (e.g., pin 17) as input with an internal pull-down resistor
    button_pin = 17
    GPIO.setup(button_pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

    def button_callback(channel):
        print("Button was pressed! 'X' has been selected.")

    # Add event detection for button presses (200 ms software debounce)
    GPIO.add_event_detect(button_pin, GPIO.RISING, callback=button_callback, bouncetime=200)

    try:
        # Keep the program running to detect button presses
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        # Clean up the GPIO setup on exit
        GPIO.cleanup()

For testing and validation, I will look into ways to ensure that the latency of button inputs is under 250 ms. I will also look into methods to test debouncing, to ensure that switch bounce isn’t triggering multiple unintended button presses.


For the speakers, Mahlet put in the order for a USB speaker. This goes along with one of my tasks: hello/goodbye greetings when the robot is powered on or off. From button presses, the GPIO pins can act as an input to the RPi and call the Python TTS library functions to trigger the voice greetings. Since it is USB, the wiring of the speaker to the RPi should be trivial, but we want to ensure that latency won’t be an issue, and that the RPi can take in inputs that cause the corresponding correct output from the speakers.
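
A minimal greeting sketch, assuming the pyttsx3 library for offline TTS (the library we end up using may differ):

    import pyttsx3

    engine = pyttsx3.init()

    def greet(power_on):
        phrase = "Hello! Ready to study?" if power_on else "Goodbye! Great work today."
        engine.say(phrase)
        engine.runAndWait()  # blocks until the phrase finishes playing

    greet(True)  # e.g., triggered by the power-on button press via GPIO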


Finally, we have the servo motors, which come with a bracket mount. I looked into the specs of the servo motors we bought, which are 1.6 inches. So we would need a bracket mount under 5 inches that we can then connect the DSI display to.


For the upcoming week, my goal is to simulate the rotation in MATLAB, to ensure that the X and Y axis rotation is achievable. Furthermore, Mahlet and I will be meeting in Techspark early next week to work on the acrylic base. Once we have the base, we can more easily measure dimensions for the bracket mounts and buttons, which will then segue into us being able to test the features we have implemented.

I am currently behind on the Gantt Chart for testing the speakers, since we haven’t acquired them yet. I have prepared the Python code necessary to test 2-3 fixed phrases.

Jeffrey’s Status Report For 10/05/2024

For this week, I am currently on track with the Gantt schedule. I will start prioritizing coding certain functionalities, such as the speaker output from the robot that we plan to drive via TTS, and the servo motors that control horizontal/vertical movement and up/down translation. I also placed an additional order for microphones, but since the ECE inventory might be out, we will look for an alternative on the market.

Mahlet and I have a task on our Gantt Chart to have the robot base complete by this week. We are behind schedule on this task, but plan to make time after our midterms to meet at Techspark. There, we will focus on building the acrylic base, ensuring that the body compartment has space to store the speaker and other components such as the RPi 5. This will segue into our next task, which is ensuring that the servo motors mounted on the robot body (connected to the DSI display) are able to function properly.

To prepare for those tasks, next week I will focus on creating the logic to connect the speakers, as well as the servo motors. This past week, I researched the right kind of speaker that would fit within the body compartment but also be volume-controlled, so the user can adjust the settings as necessary. Over the course of this week, I will look into specific speakers that our team can acquire, since the ECE inventory didn’t have the types we were looking for. Furthermore, for the servo motors, we will order them this week, so next weekend, once we have the base complete, we can add the servo motors to the build and work on integrating them into the robot body.

Jeffrey’s Status Report for 9/28/2024

For this week, I spent one meeting session with Mahlet and Shannon working on our system specification design on paper before I transcribed it into a diagram for the Design Presentation. We broke our StudyBuddyBot down into software and hardware components, which consist of the Web Application Companion App for the software and the Raspberry Pi 5 8GB for the hardware. Within the hardware and software components, we broke down the inputs and outputs and described the tasks we had to complete in green. The boxes in purple, combined with the Raspberry Pi 5 in green, are components that we will buy.

Earlier in the week, I spent some time following my Gantt Chart Schedule to decide on the right speaker and microphone components to use. With the approval of my teammates, I was able to place orders through the Capstone Inventory for those parts, along with the Raspberry Pi we plan to use.


Afterwards, I started looking into the logic for coding certain features, which I will continue doing next week, since once we have our components, our goal is to start combining them with our Raspberry Pi and ensuring that our features/functionalities work as planned.


Since I am currently on schedule, I can devote more time in the upcoming week to understanding the specification sheets of our components and doing analysis/simulation on our design, to ensure that the features we plan for our StudyBuddyBot are feasible and will work.

Jeffrey’s Status Report for 09/21/2024

Updates to Proposal Presentation from Abstract:

This week I was focused on the Proposal Presentation, working on the slides and the script for the presentation. For research, I looked into the ethical implications of our project. I looked into similar literature that presented an interactive study buddy robot, such as: https://www.jongwon.net/Interactive-Desktop-Study-Buddy-Robot-Stubie.pdf

From here, I did some research on human-robot interaction ethics and the rules and principles that govern how robots and technology should operate in this domain in a way that is ethically fair to humans.

Acquiring Parts and Considering Design Choices:

After the presentation was finished on Monday, I focused on looking into parts for the servo motors that control the robot and the speaker system for the TTS application. Given that our robot has a small form factor, we want a compact but powerful speaker that can deliver ample volume in a confined environment.

I was looking into small Arduino speakers ranging from $3.00 to $10.00 that would be compact enough to fit into the back or front of our design. Furthermore, I also had to consider the logic behind the robot responding to audible cues, which is my first task according to the Gantt Schedule.

Future Goals:

I am currently on track and spent time at the end of the week catching up with my team members to consider overall designs together, such as the microphone system that we plan to use.

In the upcoming week, I will focus on finalizing the logic for both the robot taking in audible cues via the microphone system and outputting audio via our speaker system. I will also start working with Mahlet on planning the building of our robot motor system and thinking about the most effective parts to acquire so our compact robot will be capable of movement along the X and Y axes, attached to the neck. We decided that movement along the Z-axis is unnecessary and would add complications to how we utilize the motors connecting the body to the head/display of the robot.