Team Status Report 12/10/2022

This week we spent time preparing our final presentation. This entailed fine-tuning our path-tracking algorithms and gathering data on our use-case requirements to see whether we met them. We also spent time deciding exactly what we want to show at the demo: the device giving feedback on a pre-configured route outside of Hamerschlag. We also want to show some slides on a laptop with route data to give a visual representation of how the device operates.

Next week, after the demo, we plan on incorporating our snap-to-sidewalk feature, which will allow us to re-route on the go for our final video/report.

Team Status Report 12/3/2022

This week we spent time refining our project, implementing our state transitions, and gathering test data (battery life, test routes). Additionally, we are rescoping our project to the Pittsburgh area. This lets us specify which unfriendly crosswalks we will be avoiding, instead of tailoring our algorithm to a more general case.

This upcoming week, we will be preparing for our final presentation, and doing testing and fine-tuning for our project. We expect both team members to present, with each presenting around half of the content.

Team status report 11/19/2022

A hard problem we are facing this week is coming up with a heuristic to determine whether a crosswalk can be avoided. If a route has multiple blind-unfriendly crosswalks, choosing whether to avoid one can have consequences for subsequent unfriendly crosswalks as well. Currently, we think a flat-rate distance may be the right option (i.e., avoiding blind-unfriendly crosswalks should not increase trip length by more than X meters in total). If we instead budget per crosswalk (avoiding each blind-unfriendly crosswalk should not increase trip length by more than X meters or X percent), the aggregate trip length could become very inflated, even if no individual crosswalk increases trip length by much.
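The flat-rate idea can be sketched as a simple route-selection rule. This is a minimal illustration, not our actual implementation; the names (`choose_route`, `MAX_EXTRA_METERS`) and the budget value are assumptions for the example.

```python
MAX_EXTRA_METERS = 400  # assumed total detour budget, still to be tuned

def choose_route(baseline_m, candidates):
    """Pick the candidate with the fewest blind-unfriendly crosswalks whose
    total extra distance over the baseline stays within the flat budget.

    Each candidate is a (length_in_meters, unfriendly_crosswalk_count) pair."""
    within_budget = [c for c in candidates
                     if c[0] - baseline_m <= MAX_EXTRA_METERS]
    if not within_budget:
        # no avoidance route fits the budget; fall back to the shortest route
        return min(candidates, key=lambda c: c[0])
    # fewest unfriendly crosswalks first, ties broken by shorter distance
    return min(within_budget, key=lambda c: (c[1], c[0]))
```

Because the budget applies to the whole trip rather than per crosswalk, a route that makes several small detours cannot quietly accumulate a large total increase.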

Next week we plan on doing more integration and testing as a group, as we need to make sure that the software and hardware components are working together well, and that we are covering edge cases properly. We hope to be able to run a full cached route with feedback by the end of next week.

Team Status Report 11/12/2022

This week each of our team members spent most of their time polishing up their portions of the project and now plan to spend most of the remaining time collaborating and working on the routing/feedback features of the device. Colin finished most of the system level work to be able to output audio and interface with the cellular data network while on the go. Zach worked more on the routing/feedback functionality to be able to give good user feedback.

One of the hardest problems our team is facing right now is that we only have to avoid blind-unfriendly intersections when the user has to cross a crosswalk; otherwise, the user can simply continue on the sidewalk and turn down a different street. The APIs we are using make it hard to incorporate that distinction into our system, so we are looking into routing libraries that could help with this problem. As a last resort, our system would still be functionally correct if we avoided routing the user to these intersections even when no crossing is required; it would just create a more convoluted route for the user.

Next week we plan to incorporate the feedback into the physical device and run some simulations to test its routing capability; hopefully we will end up with a device that we can walk along a route while getting feedback in real time.

Team Status Report 11/05/2022

This week our team was preparing for the interim demo that will be coming up next week. We built a physical device that contains all of the necessary hardware which will be capable of meeting the use case requirements. The device is able to generate geo-location data (latitude, longitude, and heading) along with a timestamp for each of the points. This data will be transmitted to the back-end routing thread to perform analysis on where the user currently is and where they need to go next.
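The data handed to the routing thread can be pictured as a small timestamped record. This is an illustrative sketch only; the field names and the queue-based hand-off are assumptions, not the exact structures in our code.

```python
import time
from dataclasses import dataclass
from queue import Queue

@dataclass
class GeoSample:
    latitude: float   # degrees
    longitude: float  # degrees
    heading: float    # degrees clockwise from true north
    timestamp: float  # seconds since the epoch

samples = Queue()  # consumed by the back-end routing thread

def publish(lat, lon, heading):
    """Tag a GPS/IMU fix with the current time and hand it to the router."""
    samples.put(GeoSample(lat, lon, heading, time.time()))
```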

For the demo next week we will show the physical device along with a visualization of a path that we walked with the device along some sidewalks. We also plan to show the device's ability to generate a route that contains no blind-unfriendly crosswalks, based on the device's current coordinates as the starting location.

Team Status Report 10/29/2022

This week each of us focused on getting our preliminary setup down so that we can start to collaborate much more this upcoming week. Zach worked on interfacing with the HERE API, and Colin worked on setting up the RPi along with all of the peripherals we will be using. Our goal is to walk some routes next week and gather location data along the way, so that we have real data with which to test the HERE API. We decided that our first goal will be for the device to correctly navigate routes without re-routing. This offers three advantages. First, it is easier to develop without having to worry about re-routing. Second, we can develop it quickly: re-routing is the only feature that requires talking to the internet, so we do not need to spend time on wireless communication yet. Because we can cache the route ahead of time, we can connect the device to a WiFi network, download the route data from the HERE API, and then run the route offline. Third, it mitigates risk: if we cannot get wireless communication working over the cellular data network, we can reduce the scope of the project so that the device only needs an internet connection at the start of the route, e.g. the user loads the route into the device before leaving their house.
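The download-then-go-offline workflow could look something like the sketch below. The cache location and the shape of the route object are assumptions for illustration; this is not our actual HERE API wrapper.

```python
import json
from pathlib import Path
from typing import Optional

# assumed cache location on the RPi; parameterized so it can be overridden
DEFAULT_CACHE = Path("route_cache.json")

def save_route(route, path=DEFAULT_CACHE):
    """Store a route downloaded over WiFi so navigation can run offline."""
    path.write_text(json.dumps(route))

def load_cached_route(path=DEFAULT_CACHE) -> Optional[dict]:
    """Return the cached route, or None if nothing has been downloaded."""
    return json.loads(path.read_text()) if path.exists() else None
```

At startup the device would call `load_cached_route()` and, if it gets `None`, prompt the user to connect to WiFi and download a route first.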

With the interim demo coming up in a week or two, we would like to have a working example of the main functionality: navigating based on the GPS/IMU output. This is what is driving our decision not to focus on re-routing for now; hopefully after next week we will have a good demo.

Team Status Report 10/08/2022

Some risks could jeopardize the success of our project. We have to make sure that when our device is mounted on a user, it does not move or tilt significantly with respect to the user, as that could affect the coordinates given by our IMU/GPS. As such, we need to design our device so that it is stable on the user's body. Another problem, which we discussed during the presentation, is detecting whether a person is deviating from the route the device has given them. A significant challenge here is that just because someone's distance from their next checkpoint is increasing does not necessarily mean they are deviating from the path (i.e., the road could be curved, etc.). There are several ways we could approach this issue; one is to check whether a person's distance to their next checkpoint has increased over the past n epochs. If it has, we assume they are deviating and reroute them.
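The n-epoch idea can be sketched as a small detector. This is a minimal sketch under the assumption that "increased over the past n epochs" means strictly increasing at every epoch; the class and field names are illustrative, not our actual code.

```python
from collections import deque

N_EPOCHS = 5  # assumed number of consecutive increases before we reroute

class DeviationDetector:
    def __init__(self, n=N_EPOCHS):
        self.n = n
        # keep the last n+1 distances to the next checkpoint
        self.history = deque(maxlen=n + 1)

    def update(self, distance_m):
        """Record the latest distance; return True once it has strictly
        increased for the past n epochs, suggesting the user is off route."""
        self.history.append(distance_m)
        if len(self.history) < self.n + 1:
            return False  # not enough samples yet
        pairs = zip(self.history, list(self.history)[1:])
        return all(later > earlier for earlier, later in pairs)
```

Requiring n consecutive increases (rather than a single one) is what protects against the curved-road false positive: on a curve the distance usually oscillates rather than growing monotonically.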

Although we haven’t made any explicit changes in our design, we are thinking about adding an input system (probably auditory) that will prompt the user to give directions to where they want to go. This will make our system more complete and give a more well-rounded user experience. We will discuss this in the upcoming week and change our design accordingly.

Team Status Report for 10/1

This week Eshita decided to drop the class due to an overwhelming workload. The two of us remaining thought about continuing the project in the same direction but realized that we lacked the machine learning knowledge to proceed confidently. We thought about how we could still aid the visually impaired without machine learning and decided to pivot to directions and navigation instead. Our new project will tell a user how to get from point A to point B while avoiding blind-unfriendly crosswalks.

The system will be composed of two Raspberry Pis communicating with each other over a wired network. The front-end Pi will gather location data using a GPS and an IMU and will send that data to the back-end Pi. The back-end Pi will take the location data and interface with the Google Directions and Roads APIs via a cellular data chip to determine where the user should go next to reach the destination. Information such as the distance to the next intersection, the direction to turn at the intersection, and the ETA will be periodically reported via a speaker and a Text-To-Speech engine running on the front-end Pi.
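The periodic spoken update could be assembled along the lines below. The sentence template and parameter names are assumptions for illustration; the final phrasing handed to the Text-To-Speech engine is still undecided.

```python
def format_update(distance_m, turn, eta_min):
    """Build one periodic status sentence for the TTS engine on the
    front-end Pi: distance to the next intersection, turn direction, ETA."""
    return (f"In {distance_m} meters, turn {turn} at the intersection. "
            f"Estimated arrival in {eta_min} minutes.")
```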

Our team is behind schedule at this point, considering that we have to restart most of the research and design; however, we are working hard to catch up. The skills needed for the project suit our areas of specialization well, and we should be able to dedicate most of our time to development rather than research.

Team Status Report for 9/24

At the moment, the most significant risk that could jeopardize the success of the project is the accuracy of our object detection algorithms. We do not want to tell a blind person to cross the road when they are not supposed to. We are currently looking into options to mitigate this risk; one is to reduce the scope of the project to just crosswalk detection or crossing-sign detection, allowing us to focus more time on one of the algorithms and hopefully make it better. We should also keep the false positive rate below 1%; the metric we are considering is 0.5% for whichever detection application we pick, and for both if we pursue both. The design of the system is unchanged, but we are looking into how to get the false positive rate as close to 0% as possible.