This week I spent time testing, and finding and fixing bugs in, our state machine code. I also spent time with Colin preparing what we plan to show for the demo. We will be doing an integrated test run on our fake route tomorrow to prepare for our demo on Monday, and we also plan a separate fake route to show off our rerouting capability if time (and space) allows. After the demo, the remaining week will be spent working on the final video and final report.
Zachary’s status report 12/3/2022
This week I spent most of my time implementing the state transitions for routing the user, and testing that implementation. The state transitions are working correctly on our default test case of Forbes-Morewood to Craig-Fifth, but I will spend more time next week testing to make sure they are correct on other routes. Tomorrow, I will be working with Colin on the final presentation, and I will spend the next week performing testing, polishing our project, and getting ready for the final demo.
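As a rough illustration of the kind of state machine described above (the actual states and transition conditions in our code may differ; these names are my own illustrative guesses):

```python
# Illustrative sketch only: a minimal trip state machine of the kind
# described above. State names and transition inputs are hypothetical.
from enum import Enum, auto

class TripState(Enum):
    ORIENTING = auto()   # pointing the user in the right starting direction
    ROUTING = auto()     # walking the planned route
    ARRIVED = auto()     # trip complete

def next_state(state, at_destination, facing_route):
    """Advance the trip state machine for one position/heading update."""
    if state is TripState.ORIENTING and facing_route:
        return TripState.ROUTING
    if state is TripState.ROUTING and at_destination:
        return TripState.ARRIVED
    return state  # no transition condition met; stay in place
```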
Zachary’s status report 11/19/2022
This week I worked on finishing up the code for directing users (verbal commands), and on outputting that feedback using a text-to-speech API (pyttsx3). I also wrote code to orient users at the start of the trip. This helps users know which direction they need to rotate to begin the trip (left, slight left, right, slight right, turn around, no action). Tomorrow, I will meet with Colin to do integration and testing for our mini-demo on Monday.
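The orientation step above can be sketched as bucketing the angle between the user's heading and the bearing toward the first waypoint. The function name and the angle thresholds (15°, 60°, 150°) are assumptions for illustration, not the values in our actual code:

```python
# Hypothetical sketch: map the difference between the user's compass
# heading and the bearing to the first waypoint onto a spoken command.
# Thresholds are illustrative assumptions.

def orient_instruction(heading_deg: float, bearing_deg: float) -> str:
    """Return an initial-orientation command (both angles in degrees)."""
    # Signed difference normalized to (-180, 180]
    diff = (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= 15:
        return "no action"
    if abs(diff) >= 150:
        return "turn around"
    if diff > 0:
        return "slight right" if diff <= 60 else "right"
    return "slight left" if diff >= -60 else "left"
```

The resulting string could then be fed straight into the pyttsx3 engine for spoken output.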
Next week I want to start implementing two of the more advanced features: rerouting, and a distance-based heuristic for whether to avoid an unfriendly crosswalk. I will also start doing testing by writing a suite of unit tests for my code.
Zachary’s status report 11/12/2022
This week I spent some time setting up a basic communication protocol with Colin for the interim demo. We were able to get good feedback from the instructors about what issues need to be solved, which I'll talk about at the end. I also continued working on the request response for the backend, particularly returning a string to indicate how far the user has left until the next action (i.e. a turn). If the next action is <5m away, I instruct the user to prepare for it. Once their distance from the previous checkpoint is >5m, and their distance to the next checkpoint is decreasing, I increment my cache index, indicating that the user is on the next portion of the trip.
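The checkpoint-advance rule above could be sketched roughly as follows. This is a simplification of the real logic (function and variable names are mine, and the 5 m threshold is the value mentioned in the text):

```python
# Illustrative sketch of the checkpoint-advance rule: within 5 m of the
# next checkpoint, tell the user to prepare; once the user is more than
# 5 m past the previous checkpoint AND closing in on the next one,
# advance the cache index. Names and structure are assumptions.

PREPARE_RADIUS_M = 5.0  # threshold from the text

def update_progress(idx, dist_prev_m, dist_next_m, prev_dist_next_m):
    """Return (new_index, feedback) for one position update from the frontend."""
    feedback = None
    if dist_next_m < PREPARE_RADIUS_M:
        feedback = "prepare for next action"
    closing = dist_next_m < prev_dist_next_m  # distance to next is decreasing
    if dist_prev_m > PREPARE_RADIUS_M and closing:
        idx += 1  # user is on the next portion of the trip
    return idx, feedback
```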
A critical piece of feedback we got this week was: how do we know which side of the street we're on, and whether there is a crosswalk when we turn? I have been reading through the HERE API documentation to address this problem, but have not found a feasible solution so far, as most relevant features of the API are focused on car routing rather than pedestrian routing. Another alternative I've been considering is reviewing other APIs, such as Google Maps, to see if they have functionality that can solve this issue. In the worst case, if we are unable to solve this, the most brute-force way to handle it is to assume that at any checkpoint the user may have to cross a crosswalk. That is, we would tell the user that they may have to cross an intersection, and to use their walking cane/other senses to determine whether they actually need to.
Zachary’s status report 11/5/2022
This week I began writing code to handle requests from the frontend while a person is on the trip. Specifically, I use an index to refer to the location in the cache where the next checkpoint is for the individual, and I take the coordinates given by the frontend to calculate the Haversine distance, which converts a pair of coordinates into a distance in meters. This is not completely done, because I still need to account for when I can increment the index, handle edge cases like telling the user that they need to wait for a (blind-friendly) crosswalk, and come up with textual feedback to give back to the frontend. I was also able to debug the Jupyter notebook app and display the routes I am planning for the user. I am also seeing that, given a blind-unfriendly crosswalk, the route successfully avoids that crosswalk, which is good.
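For reference, the Haversine distance mentioned above is the standard great-circle distance between two latitude/longitude points; a minimal sketch (using the mean Earth radius) looks like this:

```python
import math

# Standard haversine great-circle distance: converts two (lat, lng)
# points in degrees into a distance in meters, as described above.
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lng1, lat2, lng2):
    """Distance in meters between two (lat, lng) points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```

One degree of longitude at the equator comes out to roughly 111 km, which is a handy sanity check.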
Tomorrow, I will work with Colin to implement a basic communication protocol between frontend and backend, for the interim demo on Monday.
Next week, I want to finish the request handling module and work more with Colin to develop the communication protocol between frontend and backend.
Zachary’s status report 10/29/2022
This week I finished implementing a first iteration of initial routing and caching. I implemented my cache as a list of lists, where each inner list describes an action: the verbal feedback from the HERE API and the coordinates of the next waypoint. I was also able to successfully avoid certain intersections using the HERE API. The API avoids areas by having the user draw bounding boxes around the areas they want to avoid. For now, I'm drawing a 20m square around the intersection that needs to be avoided (I had to use a lat/lng-to-meters conversion formula). I think this may cause issues like avoiding the entire street instead of just the intersection, but I will continue to look at that over the next few weeks.
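The 20 m avoidance square can be sketched by converting half the side length into degrees of latitude and longitude at the intersection's latitude. This is an illustrative sketch, not our exact code, and the (west, south, east, north) ordering of the returned box is an assumption that should be checked against the HERE docs:

```python
import math

# Sketch: build a ~20 m bounding box around an intersection for the
# avoid-areas feature. The coordinate ordering and the 111,320 m-per-degree
# approximation are assumptions for illustration.

def avoid_box(lat, lng, side_m=20.0):
    """Return (west, south, east, north) bounds of a square around (lat, lng)."""
    half = side_m / 2.0
    dlat = half / 111_320.0  # meters per degree of latitude (approx.)
    # A degree of longitude shrinks with the cosine of the latitude.
    dlng = half / (111_320.0 * math.cos(math.radians(lat)))
    return (lng - dlng, lat - dlat, lng + dlng, lat + dlat)
```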
HERE also has a Jupyter Notebook map widget that lets users visualize the routes they are drawing. Unfortunately, even after spending several hours on it, only a gray screen pops up when I try to show the widget. I think this is a good testing tool, so I will keep working on it and hopefully be able to display the widget.
Next week I want to create a communication protocol with Colin’s frontend code (integration), finish setting up the HERE map widget on Jupyter, and start implementing request processing from the frontend.
Zachary’s status report for 10/22/2022
This week I spent most of my time on the design report. Aside from that, I read documentation on the HERE API (which we decided to pivot to from the Google Maps API), obtained an API key for it, and was able to start sending and receiving requests.
Due to having a busy week and working on the design report, I wasn’t able to spend as much time on implementation. Next week, I hope to finish the route planning and cache implementation. I will also work with Colin on communication between our frontend and backend threads.
Zachary’s status report for 10/08/2022
This week I worked on setting up a Google Cloud account and gaining access to the Google Maps APIs. After doing that, I was able to play around with the API and read more documentation to gain insight into how it works. Initially, I thought that we would need another API (maybe Overpass) to be able to geocode intersections; however, I figured out that Google Maps actually has this capability, which is great. I also did research on communication latency with the Google Maps API. I expect the average to be around 200ms, with a maximum of ~500ms. While this is fine, I want to implement a cache so we do not have to repeatedly ping Google Maps (hypothetically, this is also a great cost-saving method).
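The caching idea above amounts to memoizing routes by origin/destination pair so repeat requests skip the network round trip. A minimal sketch, where `fetch_route` is a hypothetical stand-in for the actual Maps API call:

```python
# Hypothetical caching layer: avoid re-pinging the routing API for a
# route we have already fetched. `fetch_route` stands in for the real
# API request and is an assumption, not an actual client function.

_route_cache = {}

def get_route(origin, dest, fetch_route):
    """Return a cached route, fetching it once on the first request."""
    key = (origin, dest)
    if key not in _route_cache:
        _route_cache[key] = fetch_route(origin, dest)  # ~200-500 ms network call
    return _route_cache[key]  # subsequent hits skip the network entirely
```

Besides latency, every cache hit is one fewer billed API request, which is where the cost savings come from.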
Additionally, I worked on the design presentation slides that Colin presented on Wednesday. One particular point of feedback that I thought was important was that we currently have no form of input to our system. While Colin and I had discussed this before our presentation, we had not come up with a solution, since we were short on time due to pivoting. However, I think an audio input with an on-off switch (i.e. a button) could be a viable way of approaching this problem. I will talk more with Colin about this.
Referencing our Gantt chart, I will start implementing the backend this week. I think we are still slightly behind schedule given our late pivot, but hopefully we will be able to catch back up over the next two weeks. I will also be writing the design report with Colin.
Zachary’s status report for 10/1/2022
Due to Eshita dropping the course, Colin and I have decided to quickly pivot on our project after meeting with Prof. Mukherjee yesterday. As our team status report indicates, we’ve pivoted towards a route planning project, which helps visually impaired people navigate from point A to point B, while helping them avoid “unfriendly” crosswalks.
Because Eshita only told us she was dropping yesterday, I have only had time today to do research on our new project. Since a lot of the focus of this project will be on route planning and identification of crosswalks, I searched through potential APIs that could be useful for this. In particular, I spent a lot of time going through the Google Maps API and looking through its capabilities.
In addition, the identification of street intersections/crosswalks is my biggest concern right now. As far as I know, the Google Maps API does not have the capability to give information on things like "the nearest intersection from point X", or whether a coordinate is an intersection or not. A potential solution I've found so far is the Overpass API, which can give the location of an intersection, given two street names as input.
Due to this unforeseen circumstance, I am currently behind schedule. However, Colin and I are prepared to work hard in the upcoming weeks to get back on track. For next week, I want to read more into the Google Maps and Overpass APIs and start interacting with them, and also talk more with Colin so we can flesh out the details of the design.
Zachary’s status report for 9/24
This week, I was mainly focused on editing the slides and preparing for the presentation, which I gave on Monday. I appreciated the feedback and questions that we received from the instructors and classmates, particularly the pushback on the false positive rate, which I feel is a valid concern. As an aside, I felt that some of the feedback saying that I did not know the material well, or that I was underprepared, was unwarranted, as I had spent a substantial amount of time on the presentation. However, perhaps due to being too soft-spoken and having technical difficulties during the presentation, I was not able to reflect that.
Additionally, I spent a bit of time researching object detection algorithms for implementing the walk sign detection.
I am currently on schedule, as our team has put aside time in our schedule for the first four weeks specifically to do research and flesh out our design (as well as prepare for presentations) before we start implementing.
Since I have limited experience with ML, I really want to get a head start on the material and implementation. In the upcoming week, I will be doing more research, as well as working with Eshita to find/create a dataset of walk signs for the ML model that I will implement. Additionally, I hope to set up a GitHub repo and start writing some code, if possible. Lastly, I will talk with my teammates to see if AWS may be needed for model training and, if so, talk with the TAs and professor to set that up.