Akintayo Status Report for April 19th, 2025

WORK ACCOMPLISHED:

This week, I worked with other members of the group to integrate the Bluetooth audio earpiece with the Raspberry Pi. Instead of relying on a separate microphone and speaker, the user can now give the destination through voice commands and hear the navigation instructions on one modular device, satisfying the requirement that the system be hands-free. Teammates also provided a more efficient and accurate speech-to-text endpoint, and I did the work needed to integrate this update into the existing navigation code. Additionally, we have begun testing the individual subsystems; I have been testing the accuracy of the speech-to-text framework for extracting the user's destination.
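The accuracy testing for destination extraction can be sketched roughly as follows. This is a minimal illustration, not our actual harness: the transcripts and destinations below are made-up placeholders standing in for real speech-to-text output.

```python
# Rough sketch of the destination-accuracy check: compare each
# speech-to-text transcript against the expected destination after
# normalizing case and punctuation. The sample dict stands in for
# real speech-to-text output; it is not project data.
import string

def normalize(text):
    """Lowercase and strip punctuation so 'Tepper!' matches 'tepper'."""
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

def destination_accuracy(expected_to_transcript):
    """Fraction of transcripts that contain the expected destination."""
    hits = sum(
        1 for expected, transcript in expected_to_transcript.items()
        if normalize(expected) in normalize(transcript)
    )
    return hits / len(expected_to_transcript)

# Placeholder transcripts, not our real test set:
sample = {
    "Phipps Conservatory": "ride take me to phipps conservatory",
    "Tepper": "take me to tapper",  # a miss: homophone-style error
}
print(destination_accuracy(sample))  # 0.5 on this toy sample
```

Substring matching after normalization keeps the check forgiving about filler words ("ride take me to ...") while still catching misrecognized destination names.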

PROGRESS:

The rest of the team and I made good progress this past week, so we are on schedule.

NEXT WEEK DELIVERABLES:

This upcoming week, I will primarily focus on testing the audio feedback system for the navigation instructions: verifying that the appropriate navigation instruction is produced for a user's GPS coordinates, and checking these outputs against actual Google Maps routes.

NEW LEARNING:

One new skill I developed during this project is the ability to analyze API documentation, particularly for the Google Maps Routes and Geocoding APIs. I learned how to navigate complex documentation spanning different API endpoints, identify the endpoints and parameters relevant to my use case, and use the APIs efficiently in my code. This skill is valuable because it is often unnecessary to reinvent the wheel when existing technologies can be used efficiently.

Akintayo Status Report for April 12th, 2025

WORK ACCOMPLISHED:

Over the past two weeks, I have been collaborating with members of the team to integrate the navigation and audio components into the system as a whole. We were also able to begin testing the audio input together with GPS and navigation on a test route from Porter Hall to Phipps Conservatory. I have also worked on enabling the system to relay the navigation instructions to the user via a speaker.

PROGRESS:

I am slightly behind with regard to testing the subsystem and building out the audio feedback components. I will be working on catching up this week.

NEXT WEEK’S DELIVERABLES:

For the upcoming week, we will be testing the GPS and navigation using sample GPS coordinates and real-time GPS coordinates while riding the bike. Additionally, there will be extensive testing of the audio aspect of the system for recognizing the user’s destination from their voice command. 

For the upcoming week, I will be primarily working on the audio feedback portion where the user can receive the navigation instruction via audio on a speaker and potentially a bluetooth headset. Additionally, validation and testing will be done to ensure that the navigation and audio feedback systems work as expected and provide the navigation instructions accurately and in a timely manner in accordance with the user design requirements.

TESTING:
1. For recognizing the destination from voice commands:
test with 5 different voices (from different people) across 20 different destinations in the Pittsburgh area, and check accuracy against the output of the speech-to-text system.
2. For accurate navigation instructions:
test multiple GPS coordinates on 10 different routes and check that the generated navigation instructions match the actual turns on a map. Additionally, I will test that the audio for the instructions works and returns the audio command with low latency.
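The latency half of test 2 can be sketched as a simple timing harness. This is an illustration only: synthesize_instruction is a stand-in stub for our real text-to-speech call, and the 1-second budget is a placeholder, not a requirement from our spec.

```python
# Illustrative latency check for the audio-instruction path. The
# synthesize_instruction function is a stub standing in for the real
# text-to-speech step; the 1.0 s budget is a placeholder value.
import time

def synthesize_instruction(text):
    # Placeholder for the real text-to-speech synthesis step.
    return b"fake-audio-bytes for: " + text.encode()

def measure_latency(text, budget_s=1.0):
    """Time one synthesis call and flag whether it meets the budget."""
    start = time.perf_counter()
    audio = synthesize_instruction(text)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_s, audio

elapsed, ok, _ = measure_latency("Turn left onto Forbes Avenue")
print(f"latency: {elapsed*1000:.1f} ms, within budget: {ok}")
```

Swapping the stub for the real synthesis call turns this into a per-instruction latency log that can be averaged across the 10 test routes.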

Akintayo Status Report for March 29th, 2025

WORK ACCOMPLISHED:

This week, I completed integrating the navigation logic onto the Raspberry Pi 4 and began work to join the GPS and navigation subsystems together. I also began testing different locations for the speech recognition part of the project.

PROGRESS:

I am slightly behind with tasks; I would have liked to begin integration of the navigation and GPS functionalities by now.

NEXT WEEK’S DELIVERABLES:

For next week, I will be working on my individual subsystem in preparation for the interim demo and doing more testing of my subsystem before integration.

 

Team Status Report for March 29th, 2025

This week, we made good progress on getting more continuous and accurate GPS data. We received an external GPS unit, the Adafruit GPS breakout. Connecting an external GPS to our Notecard allows us to collect GPS data while simultaneously maintaining a cellular connection. This is very useful: in areas without a good Wi-Fi signal, we can depend on the Notecard's cellular data for API requests. It also gives us a more continuous stream of GPS data, which makes our location a bit more precise. I was also able to get the logic working that lets the GPS run continuously and update a file that the navigation system reads from, giving us a semi-working version of the navigation system.

For the haptic feedback subsystem, code was written for the sensor script so it can send a signal to the HC-05 when an object is detected within a certain range. We have not been able to test it due to a roadblock in configuring the HC-05 Bluetooth modules. We are facing issues establishing a connection between the two HC-05 modules because we have been unable to get a response from them individually using AT commands on the Arduino Micro. The AT commands are needed to sync the modules and dictate which one is the "master". We have tried various solutions found online but will now pivot to configuring the HC-05 with the RPi 4 instead, as this seems to be a common issue with the Arduino Micro.

Additionally, we completed integrating the navigation logic onto the Raspberry Pi 4 and began work to join the GPS and navigation subsystems together. We also began testing different locations for the speech recognition part of the project.

RISK:

Regarding the GPS subsystem, there is a risk of the GPS not having a clear view of the sky if we enclose it in some sort of box, so we might need an external antenna with a clear view of the sky to get the most accurate GPS data.

Considering the issues we have had configuring the Bluetooth network, there is a major risk to the integrity of the system if objects are detected by the sensor but users are not warned through the vibration of the wristband. It is important to establish this communication between systems using the HC-05 as soon as possible in order to meet the system requirements.

NEXT WEEK DELIVERABLES: 

We are primarily focused on completing our 3 different subsystems in preparation for the interim demos. Then, the rest of the week will be spent testing the individual subsystems before beginning integration.

Akintayo’s Status Report for March 22nd, 2025

WORK ACCOMPLISHED:

This week, I primarily worked on the navigation generation aspect of the project. Essentially, I worked on the code for suggesting the next direction instruction based on the user's current location. Since we will begin integrating two distinct subsystems, one potential risk is how the subsystems talk to each other. For now, the tentative solution is that the GPS subsystem will periodically write the user's GPS location to a text file, and the navigation subsystem will read that GPS location from the file. One issue that may arise is the timing between the two processes and how outdated the data may become by the time the navigation subsystem reads it. Additionally, the accuracy of the GPS data will affect the functionality of the navigation system.
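One way to bound the staleness issue in the file handoff described above is to write a timestamp alongside each fix so the reader can reject outdated data. A minimal sketch follows; the file name and the 2-second threshold are illustrative choices, not our actual configuration.

```python
# Sketch of the GPS-to-navigation file handoff with a timestamp so
# the reader can reject stale fixes. The path and 2-second threshold
# are illustrative, not the project's actual values.
import time

GPS_FILE = "gps_fix.txt"   # hypothetical shared file
MAX_AGE_S = 2.0            # hypothetical staleness threshold

def write_fix(lat, lon, path=GPS_FILE):
    """GPS side: overwrite the file with 'timestamp,lat,lon'."""
    with open(path, "w") as f:
        f.write(f"{time.time()},{lat},{lon}")

def read_fix(path=GPS_FILE, max_age_s=MAX_AGE_S):
    """Navigation side: return (lat, lon), or None if the fix is stale."""
    with open(path) as f:
        ts, lat, lon = f.read().split(",")
    if time.time() - float(ts) > max_age_s:
        return None  # data too old to navigate on
    return float(lat), float(lon)

write_fix(40.4435, -79.9450)  # roughly Porter Hall
print(read_fix())
```

In practice the writer should also write to a temporary file and rename it into place (os.replace) so the reader never sees a half-written line.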

PROGRESS:

I am slightly behind with tasks; I would have liked to begin integration of the navigation and GPS functionalities by now.

NEXT WEEK’S DELIVERABLES:

For next week, I will be collaborating with other members of the team in integration of the navigation subsystem and the GPS subsystems in order to have a fully functional system that tracks the user’s GPS location in relation to the route for their journey.

 

Team Status Report for March 22nd, 2025

At this point in the project, we're heavily focused on working through our individual parts. We have been working on the navigation piece and trying to integrate it with the Raspberry Pi. We were able to get the GPS working and are receiving GPS data such as longitude and latitude; however, the results have not been as accurate as we wanted. So we decided to move forward with triangulation as our primary method for determining where the user is, which has proved to be more accurate. That said, we need to find a way to integrate the API for requesting triangulation data with our Raspberry Pi.

Additionally, some progress was made on the haptic feedback for the wristband system. We were able to set up a circuit on the mini breadboards, which allows the ERM motor to vibrate from a script on the Arduino Micro. Time was spent learning how to use the HC-05 Bluetooth module to send data that the Arduino can use to dictate when the motor should vibrate. We are currently working on adding code to the sensor script so it can send a signal to the HC-05 when an object is detected within a certain range.

This week, we also worked on the navigation generation aspect of the project. Essentially, we worked on the code for suggesting the next direction instruction based on the user’s current location.

RISK:

In relation to the accuracy of the current GPS system, if the triangulation alternative does not yield accurate enough data, we might have a hard time determining when a user is heading down the wrong path. As a result, we might have to consider other GPS systems.

To ensure the safety of our system, it is very important to establish the communication between the haptic feedback on the wristband and the object detection from the sensor on the bike.  Otherwise, there’s a major risk if objects are detected by the sensor but users aren’t warned through the vibration of the wristband. 

Since we will begin integrating two distinct subsystems, one potential risk is how the subsystems talk to each other. For now, the tentative solution is that the GPS subsystem will periodically write the user's GPS location to a text file, and the navigation subsystem will read that GPS location from the file. One issue that may arise is the timing between the two processes and how outdated the data may become by the time the navigation subsystem reads it. Additionally, the accuracy of the GPS data will affect the functionality of the navigation system.

NEXT WEEK DELIVERABLES:

For next week, we will collaborate to begin integrating the navigation and GPS subsystems in order to have a fully functional system that tracks the user's GPS location relative to the route for their journey and suggests appropriate navigation instructions.

In relation to the haptic feedback system, we wanted to be able to send data to the motor circuit from a Python script by now; we aim to have this done later today. We also may still need to find a better sensor, but we want to make sure we have basic functionality of the blind-spot detection subsystem before spending more time improving accuracy.

Akintayo Status Report for March 15th, 2025

WORK ACCOMPLISHED:

This week, a lot of time was spent testing the capabilities of different Google Speech-to-Text AI models for extracting the destination of a journey from the user's voice commands. After testing the different models, the decision was made to use the Chirp 2 model with model adaptation. Model adaptation is important because it improves the accuracy of the recognition system. During testing, I noticed that the system struggles with words that sound very similar, such as "weather" and "whether". With model adaptation, I can set a "boost value" for a phrase such as "weather" so that the system is optimized for identifying specific phrases.
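The effect of a boost value can be illustrated with a toy rescoring example. To be clear, this is not the Google Speech-to-Text API (real boosts are configured on the service's phrase sets); it is only a dependency-free sketch of the idea that boosted phrases win ties against similar-sounding alternatives.

```python
# Toy illustration of the boost idea behind model adaptation: bias
# the score of hypotheses containing boosted phrases. This is NOT
# the Google Speech-to-Text API, just a sketch of the concept; the
# phrases and boost values below are made up.
BOOSTS = {"phipps conservatory": 10.0, "tepper": 10.0}

def rescore(hypotheses, boosts=BOOSTS):
    """hypotheses: list of (text, base_score). Returns the best text."""
    def score(item):
        text, base = item
        bonus = sum(b for phrase, b in boosts.items() if phrase in text.lower())
        return base + bonus
    return max(hypotheses, key=score)[0]

# Homophone case: the boosted destination wins despite a lower base score.
print(rescore([("take me to tapper", 0.9), ("take me to Tepper", 0.8)]))
```

The service applies this bias inside recognition rather than as a post-hoc rescoring step, but the observable behavior (boosted destination names beating homophones) is the same.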

Additionally, the logic for navigation suggestions was developed further, and we have written some code that uses an R-tree index for identifying the appropriate navigation instruction based on the user's real-time GPS location.

Snippet of Navigation code using R-tree algorithm:

Sample output:
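In outline, the lookup works like this. The sketch below is a minimal, dependency-free illustration: a linear nearest-point scan stands in for the R-tree index, and the route waypoints and instructions are hypothetical, not our actual route data.

```python
# Illustrative nearest-instruction lookup. A linear scan over route
# waypoints stands in for the R-tree spatial index so this sketch has
# no dependencies; the actual code queries an R-tree instead. The
# route below is hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical route: (lat, lon, instruction at that waypoint).
ROUTE = [
    (40.4435, -79.9450, "Head east on Forbes Avenue"),
    (40.4420, -79.9400, "Turn right onto Schenley Drive"),
    (40.4390, -79.9380, "Arrive at Phipps Conservatory"),
]

def next_instruction(lat, lon, route=ROUTE):
    """Return the instruction at the waypoint nearest the current fix."""
    return min(route, key=lambda wp: haversine_m(lat, lon, wp[0], wp[1]))[2]

print(next_instruction(40.4434, -79.9449))  # near the first waypoint
```

The linear scan is O(n) per fix; the point of the R-tree is to make this lookup logarithmic as routes grow to hundreds of waypoints.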

PROGRESS:

I am currently on schedule with my work.

NEXT WEEK DELIVERABLES:

For next week, we will work on building out the navigation system and handling the case where the user is completely off the path. We will also work on fabrication and 3D printing for the bike mount, and start looking at how to convert the navigation text to audio.

Akintayo’s Status Report for March 8th, 2025

WORK ACCOMPLISHED

Over the past two weeks, I was able to accomplish a number of important tasks as well as some deliverables that were overdue.

Firstly, I was able to complete the integration of the USB Microphone with the Raspberry Pi, so I was able to successfully record sample audio of a user giving the location for their journey.

After obtaining sample audio, I began testing the Google Speech-to-Text AI framework and ran some tests to determine which models work best for extracting the required text.

Log of different tests with different models

Sample transcript from audio (actual audio is myself saying: “Ride take me to Tepper”)

I also worked on some of the logic for translating the destination text from the speech-to-text endpoint into longitude and latitude using the Google Maps Geocoding API.

Code Snippet:

Output for Destination of “Tepper School of Business”:
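The geocoding step can be sketched as follows. The request is only constructed (not sent), and the sample response is a hand-written example of the Geocoding API's JSON shape, not real API output; the coordinates and key are placeholders.

```python
# Sketch of the destination-to-coordinates step using the Google Maps
# Geocoding API. The request URL is only built, not sent, and the
# sample response below is a hand-written example of the API's JSON
# shape with placeholder coordinates.
from urllib.parse import urlencode

def geocode_url(address, api_key):
    base = "https://maps.googleapis.com/maps/api/geocode/json"
    return f"{base}?{urlencode({'address': address, 'key': api_key})}"

def extract_location(response):
    """Pull (lat, lng) out of a Geocoding API JSON response."""
    if response.get("status") != "OK" or not response.get("results"):
        return None  # e.g. ZERO_RESULTS for an unrecognized destination
    loc = response["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

sample_response = {  # example of the response shape, not real output
    "status": "OK",
    "results": [{"geometry": {"location": {"lat": 40.4410, "lng": -79.9422}}}],
}
print(geocode_url("Tepper School of Business", "YOUR_KEY"))
print(extract_location(sample_response))
```

Checking the status field before indexing into results matters here: a misheard destination can produce ZERO_RESULTS, which the navigation code should treat as "ask the user again" rather than crash.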

PROGRESS:

I am on schedule, as I made good progress this week. I probably need to speed up a bit to ensure I have ample time to join the different components of my feature together.

NEXT WEEK DELIVERABLES:

I will primarily continue testing to find the most appropriate model for speech-to-text transcription. I will also start working on the logic for navigation suggestions using an R-tree algorithm.

 

Akintayo’s Status Report for February 22nd, 2025

WORK ACCOMPLISHED:

This week, I tried to set up the Raspberry Pi 4, but I realized I would need a micro SD card reader; since I was missing that device, I was unable to move forward. I also worked more with the Google Maps API.

Additionally, I decided to modify the system design by removing the web server and localizing the navigation and audio system on the Raspberry Pi instead. This drastically reduces the system's latency.

PROGRESS:

Due to some issues I faced, I'm currently behind schedule: I had expected to finish working out how to record audio files on the Raspberry Pi and to begin integrating the Google Speech-to-Text AI.

NEXT WEEK’S DELIVERABLES:

I will mostly try to catch up on last week's deliverables. I will work on recording audio files on the Raspberry Pi and sending them to the navigation endpoint. I will also begin integrating the Google Speech-to-Text AI.

Akintayo’s Status Report for February 15, 2025

WORK ACCOMPLISHED:

This week, I primarily worked on designing the workflow for using the user’s voice commands to extract the destination for the trip and also began thinking about the relevant data that will be required for the Google Maps API call.

Google Maps API url

(Cleaned) API response  with locations and navigation instructions
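The kind of data the Maps API call needs can be sketched by building the request URL. This is an assumption-laden illustration: it uses the Directions-style endpoint and parameter names as one plausible shape, the URL is only constructed (never sent), and the key is a placeholder.

```python
# Sketch of the parameters a Google Maps routing request needs:
# origin, destination, travel mode, and an API key. Uses the
# Directions-style endpoint as one plausible shape; the URL is only
# built, not sent, and the key is a placeholder.
from urllib.parse import urlencode

def directions_url(origin, destination, api_key, mode="bicycling"):
    base = "https://maps.googleapis.com/maps/api/directions/json"
    params = {
        "origin": origin,            # "lat,lng" or a street address
        "destination": destination,
        "mode": mode,                # cycling routes for the bike
        "key": api_key,
    }
    return f"{base}?{urlencode(params)}"

print(directions_url("40.4435,-79.9450", "Phipps Conservatory", "YOUR_KEY"))
```

The response's navigation instructions then become the text that the speaker-side code reads out to the rider.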

Additionally, I decided to change the microphone used in the system from a MEMS omnidirectional microphone to a standard USB microphone. The main reasoning was that the USB microphone is easier to configure and has better sound quality than the initial choice.

PROGRESS:

I am on schedule at the moment.

NEXT WEEK DELIVERABLES:

For the upcoming week, I will be working on how to record audio files from the Raspberry Pi and sending it to the Navigation endpoint. I will also begin to work on integrating the Google Speech-to-Text AI.