Emmanuel’s Status Report for March 15th, 2025

WORK ACCOMPLISHED:

This week I spent time creating a script to drive and test the ultrasonic sensors, along with completing the ethics report assignment.

A significant portion of my time was dedicated to creating a script for the ultrasonic sensors and researching the proper configuration settings for the RPi4 when using the Rx and Tx pins with different interfaces. The script allowed me to test the sensors’ object-detection capabilities by printing out the distance of any object that’s detected. I did some basic tests indoors, outdoors, while the sensors were stationary, and while they were moving, but I still need to conduct stronger field-of-view tests to gain a better understanding of their accuracy. Some risk concerns from testing are covered in our team status report. I also focused on completing the ethics report assignment. This involved delving into ethical considerations and principles relevant to the field, which not only enhanced my understanding of the subject but also allowed me to reflect on the broader implications of our project.
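Since the sensor reports over the Pi’s UART, the distance-printing step of the script can be sketched roughly as below. This is a minimal sketch assuming an A02YYUW-style 4-byte serial frame (0xFF header, 16-bit distance in millimeters, additive checksum); our actual sensor’s frame format may differ.

```python
def parse_distance_frame(frame: bytes):
    """Parse one 4-byte frame: 0xFF header, distance high/low bytes, checksum.

    Returns the distance in millimeters, or None if the frame is invalid.
    (Frame layout is an assumption based on A02YYUW-style sensors.)
    """
    if len(frame) != 4 or frame[0] != 0xFF:
        return None
    # Additive checksum over the first three bytes, truncated to one byte.
    if (frame[0] + frame[1] + frame[2]) & 0xFF != frame[3]:
        return None
    return (frame[1] << 8) | frame[2]

# Example frame: header 0xFF, distance bytes 0x07 0xA1 -> 0x07A1 = 1953 mm
frame = bytes([0xFF, 0x07, 0xA1, (0xFF + 0x07 + 0xA1) & 0xFF])
print(parse_distance_frame(frame))
```

In the real script this parser would sit in a loop that reads four bytes at a time from the serial port and prints each decoded distance.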

PROGRESS:

I’m still slightly behind on tasks: I wanted to have the basic circuit for the wristband built by now, but I think I can make up time next week. The Bluetooth module also needs more exploration.

NEXT WEEK’S DELIVERABLES:

Next week, I aim to do more sensor testing, make a decision on whether to order a different sensor, and set up the circuit for the wristband, with a stretch goal of transmitting data to it through the Bluetooth module.

Akintayo’s Status Report for March 15th, 2025

WORK ACCOMPLISHED:

This week, a lot of time was spent testing the capabilities of different Google Speech-to-Text models for extracting the destination of a journey from the user’s voice commands. After testing the different models, the decision was made to use the Chirp 2 model with model adaptation. Model adaptation is very important, as it improves the accuracy of the recognition system. When testing, I noticed that the system struggles with words that sound very similar, such as “weather” and “whether”. With model adaptation, I can set a “boost value” for a phrase such as “weather” so that the system is optimized for identifying specific phrases.
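As a sketch of the idea, a recognition request can carry a list of boosted phrases. The field names below follow the v1 REST API’s speechContexts shape and are an assumption; model adaptation for Chirp 2 on the v2 API uses a different (phrase-set) structure, so treat this as illustrative only.

```python
# Illustrative only: field names follow the v1 REST API's speechContexts;
# Chirp 2 / v2 model adaptation uses a different (phrase-set) shape.
def build_recognition_config(boosted_phrases, boost=10.0, language="en-US"):
    """Build a recognition config dict biased toward the given phrases."""
    return {
        "languageCode": language,
        "model": "chirp_2",
        "speechContexts": [
            {"phrases": list(boosted_phrases), "boost": boost},
        ],
    }

# Bias recognition toward the words our commands actually use.
config = build_recognition_config(["weather", "Tepper"], boost=15.0)
print(config["speechContexts"])
```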

Additionally, the logic for navigation suggestions was developed further, and we have some working code that uses an R-tree index to identify the appropriate navigation instruction based on the user’s real-time GPS location.

Snippet of Navigation code using R-tree algorithm:

Sample output:
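Independently of the snippet above, the idea behind the lookup can be sketched with hypothetical route data. For brevity this uses a linear scan over segment bounding boxes where the real code would query the R-tree index; the segments and coordinates below are made up.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    bbox: tuple  # (min_lon, min_lat, max_lon, max_lat)
    instruction: str

def bbox_distance_sq(bbox, lon, lat):
    """Squared distance from a point to a bounding box (0 if inside)."""
    min_lon, min_lat, max_lon, max_lat = bbox
    dx = max(min_lon - lon, 0.0, lon - max_lon)
    dy = max(min_lat - lat, 0.0, lat - max_lat)
    return dx * dx + dy * dy

def nearest_instruction(segments, lon, lat):
    """Return the instruction of the route segment closest to the GPS fix.

    A real R-tree replaces this linear scan with an indexed nearest query.
    """
    return min(segments, key=lambda s: bbox_distance_sq(s.bbox, lon, lat)).instruction

route = [  # hypothetical segments near campus
    Segment((-79.945, 40.442, -79.940, 40.444), "Continue on Forbes Ave"),
    Segment((-79.940, 40.443, -79.938, 40.446), "Turn left onto Morewood Ave"),
]
print(nearest_instruction(route, -79.9435, 40.4430))
```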

PROGRESS:

I am currently on track with my work.

NEXT WEEK DELIVERABLES:

For next week, we will work on building out the navigation system and handling the case where the user is completely off path. We will also work on fabrication and 3D printing for the bike mount, and start looking at how to convert the navigation text to audio.

Forever’s Status Report for March 15th, 2025

WORK ACCOMPLISHED:

This week my primary focus was on ensuring the GPS information was being read properly, in a way that we can use for our navigation system. I tested in different environments, indoors and outdoors. Testing surfaced a lot of issues, since a stable connection is needed for the GPS information to be read properly. I noticed that the most accurate location information came when triangulation was used. I also spent time on the ethics report and took part in the team discussion surrounding the assignment.

Longitude and latitude information being displayed on the Notehub map.

Testing code to view location, time, and connection information

Sample output when the device isn’t moving (GPS isn’t showing).
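A sketch of the fix-extraction step: the Notecard’s `card.location` request returns latitude/longitude fields only when a fix is available, so stationary readings can come back without them. The sample responses below are illustrative, not captured from our device.

```python
def extract_fix(response: dict):
    """Return (lat, lon, mode) from a card.location-style response dict,
    or None when no fix is present (e.g., the device is stationary)."""
    if "lat" not in response or "lon" not in response:
        return None
    return response["lat"], response["lon"], response.get("mode", "unknown")

# Illustrative responses (values are placeholders, not real device output)
moving = {"status": "GPS updated", "mode": "periodic", "lat": 40.4433, "lon": -79.9436}
stationary = {"status": "GPS inactive", "mode": "periodic"}

print(extract_fix(moving))      # fix available
print(extract_fix(stationary))  # None: no lat/lon fields in the response
```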

 

PROGRESS:

This week I accomplished the tasks I was supposed to, so I am currently on track.

NEXT WEEK’S DELIVERABLES:

Next week I hope to clean up any loose ends with the GPS information and format it in a way to be used by our navigation system. I also plan to help integrate with the rest of our systems.

Team Status Report for March 15th, 2025

At the beginning of the week we spent time discussing the ethics assignment. Ethics is an important factor in our project, and we wanted to make sure we were all on the same page. The majority of our time this week was spent developing the three separate areas of our project. We started off working on our object detection: we had to configure the RPi to use the RX and TX pins for the sensor, and a script was created to measure the distance between an object and the sensor. We tested this script in different scenarios, with the sensor moving as well as the object moving back and forth. However, we’re planning to test the field of view more to see whether the accuracy of the sensor is enough for our project.

Toward the latter part of the week we spent time on the navigation part of the project. We tested the capabilities of different Google Speech-to-Text models for extracting the destination of a journey from the user’s voice commands, and ended up choosing the Chirp 2 model due to its accurate speech recognition. We also worked on the GPS tracking, as it was something we wanted to have working by this week (see below: sample output and code). We were able to do some testing outside and received longitude and latitude measurements to be used for our navigation system (see below: map image capture).

In terms of risks that we’re looking at after this week, we’re considering a couple of factors. Since we’re unable to change our distance sensors’ range, we believe there’s a possibility that our sensors are detecting objects outside of the range we care about. This would slow down our processing time, so we’re currently testing to ensure that this isn’t the case. Another risk we’re worried about is the accuracy of the GPS longitude and latitude information. As we were testing this week, we saw longitude and latitude information being sent; however, depending on the mode (triangulation or GPS location), the location being displayed wasn’t as accurate as we wanted it to be. We’re doing continuous testing to see if this is only an issue in certain areas, and are considering using a different GPS system or better antennas.

Code and sample output for the speech-to-text system

Longitude and latitude information being sent to Notehub and displayed on the Notehub map.

NEXT WEEK DELIVERABLES:

Mini-integration of all of our parts to see how they work together. We’re getting to the latter end of our project, so we’re attempting a mini integration to see how the pieces fit together, in case we need to change things up. We are also considering ordering new parts, so we are researching different types of distance sensors as well as new GPS modules.

Team Status Report for March 8th, 2025

Our current progress on the Rid3 device is going well. The main tasks for this week were to complete the setup for the Raspberry Pi, finish setup for the Blues starter kit, and get basic object detection with the sensors. We were able to accomplish most of these goals since our last progress report. We have the Raspberry Pi set up and have made it compatible with the Blues starter kit, using SSH for programming on the Pi. In addition, we worked on configuration for converting speech to text using the Google Speech API and were able to see sample outputs. We also ordered materials for a new bicycle mount and are designing a component that is compatible with the mount (see below).

One of the risks that we are currently facing is that the GPS information is not being sent properly through to the Raspberry Pi. This could be a problem, as we need accurate GPS data for the proper directions to be sent.

NEXT WEEK DELIVERABLES:

Continue testing speech-to-text translation and begin implementation of the R-tree algorithm. Set up the circuit for the wristband and establish basic object detection with the sensors. Fix the GPS issues and store GPS data to be used by the RPi.


Component compatible with the mount.

Sample output for the Google Speech API

ADDITIONAL QUESTIONS:

Part A: … with consideration of global factors. Global factors are world-wide contexts and factors, rather than only local ones. They do not necessarily represent geographic concerns. Global factors do not need to concern every single person in the entire world. Rather, these factors affect people outside of Pittsburgh, or those who are not in an academic environment, or those who are not technologically savvy, etc.

There is a global need for increased safety and accessibility in urban mobility. Our device, Rid3, can help meet this need for bicyclists’ safety by enabling phone-less navigation in increasingly crowded urban settings. Bike lanes and road support for micro-mobility are being expanded in many major cities worldwide; however, riders are often still at risk due to poor visibility, car blind spots, and distractions from checking navigation devices. By fusing voice navigation with a haptic-feedback wristband, this technology lowers the risk of accidents and increases traffic safety, enabling cyclists to receive crucial blind-spot alerts and clear directions without taking their eyes off the road. This solution is particularly impactful in regions where cycling infrastructure is still developing or where road conditions are less predictable. Additionally, our device is intended to work in various climates beyond what typically occurs in Pittsburgh, such as high-dust environments. The device is also simple and intuitive to use, to promote adoption by all people regardless of technological expertise. By enhancing safety in diverse environments, the product contributes to broader global efforts to promote sustainable transportation, reduce urban congestion, and improve public health. Part A was written by Emmanuel.

Part B: … with consideration of cultural factors. Cultural factors encompass the set of beliefs, moral values, traditions, language, and laws (or rules of behavior) held in common by a nation, a community, or other defined group of people.

Within the context of our project, the main cultural factor to consider is that different countries have different road laws, so it is important that Rid3 devices adhere to these cultural norms. Specifically, it is important that our device functions in a way that is intuitive to a biker on the road. Consequently, it is essential that the audio feedback for navigation instructions adheres to road safety rules, and research will have to be done to ensure that the system’s feedback follows the rules of the region. Part B was written by Akintayo.

Part C : … with consideration of environmental factors. Environmental factors are concerned with the environment as it relates to living organisms and natural resources.

Environmental factors play an important role in our project. We want to make sure that we’re using resources that do not pollute and are safe for the environment. The device is battery powered and does not release any toxins into the air when it is running, so the environmental concerns are limited. One thing to take note of is the potential for coming across animals in the environment: if we detect an animal in the rider’s blind spot, we want to notify the user so that they don’t hit it. Part C was written by Forever.

Akintayo’s Status Report for March 8th, 2025

WORK ACCOMPLISHED

Over the past two weeks, I was able to accomplish a number of important tasks as well as some deliverables that were overdue.

Firstly, I completed the integration of the USB microphone with the Raspberry Pi, so I was able to successfully record sample audio of a user giving the destination for their journey.

After obtaining a sample recording, I began to test the Google speech-to-text AI framework and ran some tests to see which models work best for extracting the required text.

Log of different tests with different models

Sample transcript from audio (actual audio is myself saying: “Ride take me to Tepper”)

I also worked on some of the logic for translating the destination text from the speech-to-text endpoint into latitude and longitude coordinates using the Google Maps Geocoding API.

Code Snippet:

Output for Destination of “Tepper School of Business”:
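The extraction step boils down to pulling `results[0].geometry.location` out of the API’s JSON response. The nesting below matches the Geocoding API’s documented response shape, but the sample values are placeholders, not our actual output for Tepper.

```python
def extract_coords(geocode_response: dict):
    """Pull (lat, lng) from a Google Maps Geocoding API JSON response.

    Returns None when the request produced no usable result.
    """
    if geocode_response.get("status") != "OK" or not geocode_response.get("results"):
        return None
    location = geocode_response["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]

# Illustrative response shape (placeholder values, not real API output)
sample = {
    "status": "OK",
    "results": [{"geometry": {"location": {"lat": 40.4414, "lng": -79.9422}}}],
}
print(extract_coords(sample))
```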

PROGRESS:

I am on schedule as I made some good progress this week. I probably need to speed up a bit to ensure I have ample time to join the different components of my feature together.

NEXT WEEK DELIVERABLES:

I will primarily continue testing to find the appropriate model for speech-to-text translation. I will also start working on the logic for navigation suggestions using an R-tree algorithm.

 

Forever’s Status Report for March 8th, 2025

WORK ACCOMPLISHED:

This week I wanted to successfully run the Blues starter kit setup process and collect GPS longitude and latitude data. I also wanted to help program the Raspberry Pi to be compatible with the other devices we are using, such as the sensors and Bluetooth modules. I was able to accomplish most of these tasks, with a few setbacks. I worked on making the Raspberry Pi compatible with our devices by enabling SSH into the system, so that we can have wireless connections between the devices. I also installed a virtual view system that allows us to see what is happening on the Raspberry Pi while we’re programming on it. I was also able to install the Blues starter kit Notecard CLI, which allows for programming with the Raspberry Pi and the Notecard. I am currently able to read the GPS data; however, for some reason the GPS kept turning off. This is something I hope to figure out by the next progress report.

PROGRESS:

This week I accomplished the tasks I was supposed to, except that I was not able to fully and reliably read the GPS data.

NEXT WEEK’S DELIVERABLES:

Next week I hope to fix the issue with the GPS and start working on storing that GPS information in order for it to be used for the Navigation system.

Emmanuel’s Status Report for March 8th, 2025

WORK ACCOMPLISHED:

This week I spent time working on our team’s design report, refining the device-encasing mount design, and getting set up with the RPi4 to use the sensors.

Refining our design through working on the report took a significant amount of time. It led me to do more research on other sensors, like the Doppler radar sensor. This might be used in the future if I struggle to develop an algorithm that allows us to differentiate incoming versus stationary objects with the ultrasonic sensors. Creating a compatible NiteRider bike mount for the device encasing seems more complicated than creating one for the GoPro bike mount, so I decided to switch and order a GoPro mount, since my AutoCAD skills aren’t super strong. Lastly, I set up our RPi4 with my laptop and ran some test scripts in Thonny in preparation for programming the sensors.
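One possible shape for that incoming-versus-stationary differentiation, sketched with made-up readings: classify an object by the trend across successive distance samples, with a threshold to absorb sensor jitter. The threshold value here is an arbitrary placeholder, not a tuned parameter.

```python
def classify_object(distances_mm, threshold_mm=50):
    """Classify an object as 'approaching', 'receding', or 'stationary'
    from successive ultrasonic distance readings (newest sample last).

    threshold_mm is a placeholder; a real value would come from testing.
    """
    if len(distances_mm) < 2:
        return "stationary"
    delta = distances_mm[-1] - distances_mm[0]
    if delta <= -threshold_mm:
        return "approaching"  # distance shrinking over the window
    if delta >= threshold_mm:
        return "receding"     # distance growing over the window
    return "stationary"       # change within the jitter band

print(classify_object([2000, 1800, 1500]))  # shrinking -> approaching
print(classify_object([1500, 1510, 1495]))  # small jitter -> stationary
```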

PROGRESS:

I’m currently slightly behind on tasks: I wanted to have the basic distance-detection script for the sensors working by now, but I think I can make up time next week. We found out the parts for the wristband were lost in delivery, so we had them reordered and will pick them up after spring break.

NEXT WEEK’S DELIVERABLES:

Next week, I aim to establish basic object and distance detection functionality with the sensors and set up the circuit for the wristband.

Akintayo’s Status Report for February 22nd, 2025

WORK ACCOMPLISHED:

This week, I tried to work on setting up the Raspberry Pi 4, but I realized I would require a micro SD card reader; since I was missing that device, I was unable to move forward. I also worked more on the Google Maps API.

Additionally, I decided to modify the design of the system by removing the web server and localizing the navigation and audio system to the Raspberry Pi instead. This drastically reduces the latency of our system.

PROGRESS:

Due to some issues I faced, I’m currently behind schedule as I had expected to finish up with how to record audio files from the Raspberry Pi and also begin to work on integrating the Google Speech-to-Text AI.

NEXT WEEK’S DELIVERABLES:

I will mostly try to catch up on last week’s deliverables. So, I will work on how to record audio files from the Raspberry Pi and send them to the navigation endpoint. I will also begin to work on integrating the Google Speech-to-Text AI.

Emmanuel’s Status Report for February 22nd, 2025

WORK ACCOMPLISHED:

This week I spent time familiarizing myself with CMU’s 3D printing process and AutoCAD in order to help tweak our device-encasing design. Additionally, I took time to look at different bikes around campus to get a better understanding of how our device will be attached.

Through my time exploring, and even riding a bike during the city’s busy periods, I realized a velcro strap would be unstable for securing our device. Additionally, our encasing’s protrusions will be difficult to design in a way that keeps our sensors secure and in place when hitting bumps while riding. I researched existing bike mounts that can clamp to the bike seat shaft, and we aim to pivot so our encasing can clip into one of those mounts (specifically the NiteRider design). Working in AutoCAD for the first time in years took longer than expected, but a rough idea of our new bike encasing (newer than the one in the team status report) is below. Edits have yet to be made for the sensor holders (protrusions) because we just got the sensors at the end of the week.

PROGRESS:

I’m currently on schedule with my tasks. I’m still waiting on the mini breadboard and the vibration motors; hopefully they arrive next week.

NEXT WEEK’S DELIVERABLES:

Next week, I aim to establish basic object and distance detection functionality with the sensors, submit the order for the bike mount, and complete the written design report.