Zachary’s status report 10/29/2022

This week I finished implementing an initial iteration of routing and caching. I implemented my cache as a list of lists, where each inner list holds an action (the verbal feedback from the HERE API) and the coordinates of the next waypoint. I was also able to successfully avoid certain intersections using the HERE API. The HERE API avoids areas by having the user draw bounding boxes around the areas they want to avoid. For now, I’m drawing a 20 m square around the intersection that needs to be avoided (I had to use a lat/lng-to-meters conversion formula). I think this may cause issues like avoiding the entire street instead of just the intersection, but I will continue to look at that over the next few weeks.
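
To make the conversion concrete, here is a minimal sketch of the bounding-box calculation; the intersection coordinates are hypothetical, and the exact avoid[areas] parameter format should be double-checked against the HERE Routing v8 documentation:

```python
import math

def avoid_bbox(lat, lng, half_side_m=10.0):
    """Return (west, south, east, north) for a ~20 m square centered on an
    intersection, using the small-angle lat/lng-to-meters approximation."""
    meters_per_deg_lat = 111_320.0                                # roughly constant
    meters_per_deg_lng = 111_320.0 * math.cos(math.radians(lat))  # shrinks with latitude
    dlat = half_side_m / meters_per_deg_lat
    dlng = half_side_m / meters_per_deg_lng
    return (lng - dlng, lat - dlat, lng + dlng, lat + dlat)

west, south, east, north = avoid_bbox(40.4433, -79.9436)   # hypothetical intersection
avoid_param = f"bbox:{west},{south},{east},{north}"         # passed as avoid[areas]; verify format
```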

HERE also has a map widget for Jupyter Notebook that lets users visualize the routes they are drawing. Unfortunately, even after spending several hours on it, only a gray screen would appear when I tried to show the widget. I think this would be a good testing tool, so I will keep working on it and hopefully be able to display the widget.

Next week I want to create a communication protocol with Colin’s frontend code (integration), finish setting up the HERE map widget on Jupyter, and start implementing request processing from the frontend.

Colin’s Status Report for 10/29/2022

This week, all of the parts we ordered came in, and I was able to connect all of them to the RPi. I mainly worked on interfacing with the GPS/IMU. Zach and I think it would be a good idea to have something running by the end of this week that allows us to gather location data while walking around. That way we can walk some routes and save the data, so we can simulate walking the route later for development purposes without having to physically walk the route whenever we want to test.
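
As a sketch of the record-and-replay idea, assuming some read_fix() wrapper around the GPS/IMU driver that returns (lat, lng, heading) — that wrapper is a placeholder here:

```python
import csv
import time

def log_fixes(read_fix, path="walk_log.csv", period_s=1.0):
    """Append timestamped fixes to a CSV so a walk can be replayed later.
    read_fix is a placeholder for whatever ends up wrapping the GPS/IMU driver."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            lat, lng, heading = read_fix()
            writer.writerow([time.time(), lat, lng, heading])
            f.flush()
            time.sleep(period_s)

def replay_fixes(path="walk_log.csv", speedup=1.0):
    """Yield the logged fixes with their original pacing, for offline development."""
    with open(path, newline="") as f:
        rows = [tuple(map(float, row)) for row in csv.reader(f)]
    for (t0, lat, lng, heading), nxt in zip(rows, rows[1:] + rows[-1:]):
        yield lat, lng, heading
        time.sleep(max(0.0, (nxt[0] - t0) / speedup))
```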

This brings up the need for some mobile system that includes a power supply and a case to protect all of the components. I should be able to put together some sort of case quickly; however, I still need to order the battery. I did some research on batteries for the design report and found this 26,800 mAh battery: https://www.amazon.com/Charmast-26800mAh-Portable-Li-Polymer-Compatible/dp/B07P5ZP943. The battery outputs 5 V, which is the voltage the RPi runs at, and since our design constraint for battery life is 16 hours, this gives us a maximum of 8.375 W. The RPi uses about 5 W at 100% CPU load, and the peripherals will use less than 2 W combined, giving us about 7 W of power being used. This estimate is on the high end, and we may be able to go much lower with careful attention to minimizing power consumption. This battery gives us an estimated 1.375 W of slack, which is a comfortable amount, especially because I do not want to rely on the battery performing properly at less than 5% charge.
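
For reference, the budget works out as follows (all numbers are the estimates above):

```python
# Rough power budget using the numbers above (all estimates).
capacity_wh = 26.8 * 5.0        # 26,800 mAh at 5 V  ->  134 Wh
budget_w = capacity_wh / 16     # 16-hour battery-life target  ->  8.375 W
load_w = 5.0 + 2.0              # RPi at 100% CPU + peripherals (high-end guess)
slack_w = budget_w - load_w     # ~1.375 W of headroom
print(f"budget {budget_w:.3f} W, load {load_w:.1f} W, slack {slack_w:.3f} W")
```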

My goals and schedule have shifted somewhat from our initial plan: with the need to get real data quickly, I am now focusing more on the construction of the device and the battery. I aim to receive a battery next week and perform some route walks while gathering location data along the way so that we can do testing and development. I will be somewhat ahead of schedule on the battery and device construction; however, I will fall slightly behind schedule on the wireless communication aspect of the device in order to focus on location data collection.

Team Status Report 10/29/2022

This week each of us focused on getting our preliminary setup done so that we can start to collaborate much more this upcoming week. Zach worked on interfacing with the HERE API, and Colin worked on setting up the RPi along with all of the peripherals we will be using. Our goal is to walk some routes next week and gather location data along the way, so that we have real data to test the HERE API with. We decided that our first goal will be for the device to correctly navigate routes without re-routing. This offers us three advantages. First, it will be easier to develop without having to worry about re-routing at the start. Second, we can develop this quickly because re-routing is the only reason we would need to talk to the internet, so we do not need to spend time on the wireless communication aspect yet; since we can cache the route ahead of time, we can connect the device to a WiFi network, download the route data from the HERE API, and then test the route offline. Third, it mitigates risk: if we cannot get wireless communication working over the cellular data network, we can reduce the scope of the project to only requiring an internet connection at the beginning of the route, for example by having the user load the route into the device before they leave their house.
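
A rough sketch of that download-then-go-offline workflow, assuming the HERE Routing v8 endpoint and a hypothetical API key (parameter names should be verified against the HERE docs):

```python
import json
import requests

ROUTE_CACHE = "cached_route.json"

def download_route(origin, destination, api_key):
    """While on WiFi: fetch a pedestrian route from HERE and save it to disk."""
    resp = requests.get(
        "https://router.hereapi.com/v8/routes",
        params={
            "transportMode": "pedestrian",
            "origin": f"{origin[0]},{origin[1]}",
            "destination": f"{destination[0]},{destination[1]}",
            "return": "polyline,actions,instructions",
            "apiKey": api_key,      # hypothetical key
        },
        timeout=10,
    )
    resp.raise_for_status()
    with open(ROUTE_CACHE, "w") as f:
        json.dump(resp.json(), f)

def load_cached_route():
    """Later, offline: read the saved route back for navigation/testing."""
    with open(ROUTE_CACHE) as f:
        return json.load(f)
```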

With the interim demo coming up in a week or two, we would like to have some sort of working example of the main functionality, which is navigating based on the GPS/IMU output. This is what is driving our decision not to focus on re-routing for now, and hopefully after next week we will have a good demo.

Zachary’s status report for 10/22/2022

This week I spent most of my time on the design report. Aside from that, I read documentation on the HERE API (which we decided to pivot to from the Google Maps API), obtained an API key for it, and was able to start sending requests to it and receiving responses.

Due to having a busy week and working on the design report, I wasn’t able to spend as much time on implementation. Next week, I hope to finish the route planning and cache implementation. I will also work with Colin on communication between our frontend and backend threads.

Colin’s Status Report for 10/22/2022

This week I accomplished everything that I wanted to based on my prior status report. I wanted to get all of our hardware ordered (except for the battery) so we could begin to experiment with the equipment, as well as work out a framework for the software to run on the Raspberry Pi. Zach and I also spent a lot of time on the design review report.

Most of my time this week was spent developing the software to run on the RPi. My main goal was to come up with a system where all three of our main threads could run while communicating with each other. I decided to add a fourth thread to the system: a controlling thread. This thread tells the other three threads when to run and handles the data communication. Since we will be using a single process and asyncio in Python to run the threads, we do not have to worry about concurrency issues when communicating data, since only one thread will be running at a time. The controlling thread will initially tell the location thread to gather location data and put that data into a buffer. Then the second thread, which interprets the location data and communicates with the APIs, will be called. That thread will take the location data from the buffer and use it to determine what feedback to give to the user. The feedback will then be put into a separate buffer and handed to the third thread by the controlling thread. The third thread will run the text-to-speech engine and output to the 3.5mm audio jack for the user to listen to. This process will continue until the user has reached their destination.
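
A bare-bones sketch of that control flow with asyncio; the fix, the cue, and the destination check are all placeholders:

```python
import asyncio

async def get_location(loc_buf):
    """Placeholder: would read the GPS/IMU and append a fix to the buffer."""
    loc_buf.append((40.4433, -79.9436, 0.0))   # hypothetical (lat, lng, heading)

async def plan_feedback(loc_buf, feedback_buf):
    """Placeholder: would compare the fix against the cached route and pick a cue."""
    lat, lng, heading = loc_buf.pop()
    feedback_buf.append("Continue straight for 50 meters")   # hypothetical cue

async def speak(feedback_buf):
    """Placeholder: would hand the cue to the text-to-speech thread."""
    print("TTS:", feedback_buf.pop())

async def controller():
    loc_buf, feedback_buf = [], []
    at_destination = False
    while not at_destination:
        await get_location(loc_buf)                  # 1: gather a location fix
        await plan_feedback(loc_buf, feedback_buf)   # 2: interpret it / talk to the API
        await speak(feedback_buf)                    # 3: audio output
        at_destination = True                        # real destination check goes here
        await asyncio.sleep(1.0)                     # pacing between epochs

asyncio.run(controller())
```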

Next week I would like to hook up the components that have been ordered into the physical system. I would like to communicate with the components through the software and be able to gather the location data needed to know where the user is. I would also like to hook the software up to the Blues Wireless Notecard API to be able to communicate with our external directions API. If I can get all of this done next week, I will be on schedule and able to begin collaborating with Zach to start getting the skeleton of our entire system working. We would hopefully have the hardware supplying data to the directions thread, which would allow us to work out the next steps toward a fully functional system.

Colin’s Status Report for 10/8

This week I worked on figuring out the compatibility of the parts that we want to use in the project. Initially, we thought about using two RPis, dedicating one to the front end and one to the back end. The only reason for this would be to make the GPS and cellular data card easier to hook up to the system. However, the increased development time and complexity of two RPis communicating with each other is not worth it. I did some research on the data card and the GPS and have determined that we can hook both of them up to one RPi. Since we aren’t going to be anywhere near the computational limits of the RPi, this seems like the most logical route to take. The GPS unit has the ability to change its I2C address, so we can run both the GPS unit and the cellular data card on the same I2C lines. An alternative, if problems arise, would be to communicate with the cellular data card via I2C and the GPS via UART.
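
A quick presence check for the two devices on the shared I2C bus might look like the sketch below; the addresses are the usual defaults for a u-blox GPS (0x42) and the Blues Notecard (0x17), but should be confirmed against the datasheets:

```python
from smbus2 import SMBus

GPS_ADDR = 0x42        # u-blox modules default to 0x42; ours may be reassigned
NOTECARD_ADDR = 0x17   # Blues Notecard default; confirm against the datasheet

with SMBus(1) as bus:  # I2C bus 1 on the RPi 40-pin header
    for name, addr in [("GPS", GPS_ADDR), ("Notecard", NOTECARD_ADDR)]:
        try:
            bus.read_byte(addr)   # simple presence check
            print(f"{name} responded at 0x{addr:02x}")
        except OSError:
            print(f"no response from {name} at 0x{addr:02x}")
```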

I also did research on the environment we will be running our tasks in. I originally contemplated scheduling three separate Python processes on the RPi kernel: one for GPS data gathering and filtering, another for the backend, and another for the audio output. However, the communication of data to and from each process is not simple. An easier way to do this is a single Python process using asyncio to perform cooperative multitasking across the three threads. Since we are not bound by computation power, we do not need to utilize all of the cores on the RPi, and this approach makes data communication between the threads much simpler. We also do not have very hard RTOS requirements, so we do not need preemption if we work out a simple cooperative scheduling scheme. Any development time taken away from the environment and put towards the main direction algorithm of the project will be very useful for us.

I am doing okay in terms of the schedule. I accomplished a lot of what I wanted to in terms of figuring out exactly which parts to use and how they will communicate with each other. I have ordered the RPi from the ECE inventory and have figured out what we will be running on it in terms of software. Something I would have liked to get done was to actually receive the RPi last week; however, I was not able to, and I will be doing so Monday morning.

Next week I need to get a few things done. The first is to set up Python on the RPi and start on the framework for all of our threads to communicate with each other. The most important goal for next week is to order all of the extra parts that we will need for the project: the GPS/IMU, the cellular data card, antennas for both of those parts, and some wires to hook the devices up to the RPi.

Team Status Report 10/08/2022

Some risks could jeopardize the success of our project. We have to make sure that when our device is mounted on a user, it does not move or tilt significantly with respect to the user, as that could affect the coordinates given by our IMU/GPS. As such, we need to come up with a way to design the device so that it is stable on the user’s body. Another problem is something we discussed during the presentation: detecting whether a person is deviating from the route that the device has given them. A significant challenge here is that just because someone’s distance from their next checkpoint is increasing does not necessarily mean that they are deviating from the path (e.g., the road could be curved). There are several ways we could approach this issue; one is to detect whether a person’s distance to their next checkpoint has increased over the past n epochs. If it has, we assume they are deviating and reroute them.
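
One possible shape for that heuristic, with the epoch count left as a tunable assumption:

```python
import math

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two lat/lng points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class DeviationDetector:
    """Flag a deviation if the distance to the next checkpoint grows n epochs in a row."""
    def __init__(self, n=5):          # n is a guess; would be tuned from walk data
        self.n = n
        self.prev = None
        self.growing = 0

    def update(self, user, checkpoint):
        d = haversine_m(*user, *checkpoint)
        if self.prev is not None and d > self.prev:
            self.growing += 1
        else:
            self.growing = 0
        self.prev = d
        return self.growing >= self.n   # True -> assume deviation, trigger reroute
```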

Although we haven’t made any explicit changes to our design, we are thinking about adding an input system (probably auditory) that will prompt the user for where they want to go. This will make our system more complete and give a more well-rounded user experience. We will discuss this in the upcoming week and change our design accordingly.

Zachary’s status report for 10/08/2022

This week I worked on setting up a Google Cloud account and gaining access to the Google Maps APIs. After doing that, I was able to play around with the API and read more documentation to gain insight into how it works. Initially, I thought we would need another API (maybe Overpass) to be able to geocode intersections; however, I found that Google Maps actually has this capability, which is great. I also did research on communication latency with the Google Maps API. I expect the average to be around ~200 ms, with a maximum of ~500 ms. While this is fine, I want to implement a cache so we do not have to repeatedly ping Google Maps (hypothetically this is also a great cost-saving method).
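
The cache could be as simple as memoizing responses by origin/destination pair; the function body below is a placeholder for the real API call:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_directions(origin, destination):
    """Memoize directions by (origin, destination) so repeated requests
    don't ping the API again; the body stands in for the real call."""
    return {"origin": origin, "destination": destination, "steps": []}  # placeholder response

# The second call with the same pair is served from the cache, no network round trip.
get_directions("40.4433,-79.9436", "40.4443,-79.9532")   # hypothetical coordinates
get_directions("40.4433,-79.9436", "40.4443,-79.9532")
```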

Additionally, I worked on the design presentation slides that Colin presented on Wednesday. One particular piece of feedback I thought was important is that we currently have no form of input to our system. While Colin and I had discussed this before the presentation, we did not have time to come up with a solution since we were short on time due to pivoting. However, I think an audio input with an on/off switch (i.e., a button) could be a viable way of approaching this problem. I will talk more with Colin about this.

Referencing our Gantt chart, I will start implementing the backend this week. I think we are still slightly behind schedule given our late pivot, but hopefully we will be able to catch back up over the next two weeks. I will also be writing the design report with Colin.

Zachary’s status report for 10/1/2022

Due to Eshita dropping the course, Colin and I have decided to quickly pivot our project after meeting with Prof. Mukherjee yesterday. As our team status report indicates, we’ve pivoted towards a route planning project that helps visually impaired people navigate from point A to point B while helping them avoid “unfriendly” crosswalks.

Because Eshita only told us she was dropping yesterday, I have only had time today to do research on our new project. Since a lot of the focus of this project will be on route planning and the identification of crosswalks, I searched through potential APIs that could be useful for this. In particular, I spent a lot of time going through the Google Maps API and looking through its capabilities.

In addition, the identification of street intersections/crosswalks is my biggest concern right now. As far as I know, the Google Maps API does not have the capability to give information like “the nearest intersection to point X”, or whether a given coordinate is an intersection. A potential solution I’ve found so far is the Overpass API, which can give the location of an intersection given two street names as input.
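
A hedged sketch of that lookup through the public Overpass endpoint (the street names are hypothetical, and in practice the query would also be restricted to a bounding box or area):

```python
import requests

def find_intersection(street_a, street_b):
    """Ask Overpass for nodes shared by two named streets, i.e. their intersection(s)."""
    query = f"""
    [out:json];
    way["name"="{street_a}"];
    node(w)->.a;
    way["name"="{street_b}"];
    node(w)->.b;
    node.a.b;
    out;
    """
    resp = requests.post(
        "https://overpass-api.de/api/interpreter",
        data={"data": query},
        timeout=30,
    )
    resp.raise_for_status()
    return [(el["lat"], el["lon"]) for el in resp.json()["elements"]]

# e.g. find_intersection("Forbes Avenue", "Morewood Avenue")   # hypothetical streets
```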

Due to this unforeseen circumstance, I am currently behind schedule. However, Colin and I are prepared to work hard in the upcoming weeks to get back on track. Next week, I want to read more about the Google Maps and Overpass APIs, start interacting with them, and talk more with Colin so we can flesh out the details of the design.

Colin’s Status Report for 10/1

This week our team altered our project to now provide directions along blind-friendly routes to aid the visually impaired. Due to Eshita dropping the class, Zach and I lack the machine learning knowledge to be able to proceed with the prior design.

I will now be focusing on the front end of our system. I will be using a Raspberry Pi to gather data from a GPS unit to determine the user’s location. The SparkFun GPS-RTK Dead Reckoning pHAT board appears to be a good unit for the project. It attaches to an RPi4 through the 40-pin header and is easily interfaced with over I2C. The unit contains a GPS receiver and an IMU to provide more accurate position readings when GPS signal is lost. It has a heading accuracy of within 0.2 degrees; however, it does not contain a magnetometer. It achieves this accuracy by relying on GPS movement combined with accelerometer readings. This may be a potential problem for us, given that our user may be standing still for a long period of time, and the heading reading is prone to drift when the user is not moving in a direction. A solution would be to add a secondary board with a magnetometer to tell direction; however, this may not be necessary and would significantly increase the complexity of the unit, because we would no longer be able to use the pHAT 40-pin connector for the GPS alone and would have to connect both boards to the RPi, sharing the header.

I will also be taking commands from the back-end Pi to give directions to the user via audio. I will be using a text-to-speech engine to tell the user where to go and to give various status updates from the back-end Pi. The RPi4 comes with a 3.5mm audio jack capable of outputting audio to a wired earbud, from which the user will be able to hear the directions.
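
As one possible approach, assuming the pyttsx3 engine (which drives eSpeak on the Pi and plays through the default audio output, i.e. the 3.5mm jack):

```python
import pyttsx3

engine = pyttsx3.init()          # uses eSpeak on the RPi by default
engine.setProperty("rate", 150)  # slow the speech down a bit for clarity

def speak(instruction):
    """Queue a direction and block until it has been spoken."""
    engine.say(instruction)
    engine.runAndWait()

speak("In 20 meters, turn left onto Forbes Avenue")   # hypothetical cue
```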

I am currently behind schedule given that our team is re-designing the project, however I am very happy about the new direction of the project. In the past day we have been focusing heavily on the project and will continue to do so in order to have a good design done by Sunday for the design review.