Colin’s Status Report for 10/22/2022

This week I accomplished everything that I set out to do in my prior status report. I wanted to get all of our hardware ordered (except for the battery) so we could begin experimenting with the equipment, and to work out a framework for the software to run on the Raspberry Pi. Zach and I also spent a lot of time on the design review report.

Most of my time this week was spent developing the software to run on the RPi. My main goal was to come up with a system where all three of our main threads could run while communicating with each other. I decided to add a fourth thread to the system, a controlling thread that tells the other three threads when to run and handles the data communication. Since we will be using a single process with asyncio in Python to run the threads, we do not have to worry about concurrency issues when communicating data, because only one thread runs at a time. The controlling thread will first tell the location thread to gather location data and put that data into a buffer. It will then call the second thread, which interprets the location data and communicates with the APIs; that thread takes the location data from the buffer and uses it to determine what feedback to give to the user. The feedback is placed into a separate buffer, which the controlling thread hands to the third thread. The third thread runs the text-to-speech engine and outputs to the 3.5mm audio jack for the user to listen to. This process will continue until the user has reached their destination.
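A rough sketch of what this control loop could look like is below. The GPS, directions, and text-to-speech calls are only stand-ins (hypothetical placeholders); only the asyncio structure and the buffers are meant to be representative.

import asyncio

# Minimal sketch of the four-thread layout described above. The GPS,
# directions, and text-to-speech calls are stand-ins (hypothetical);
# only the control flow and the buffers are meant to be representative.

async def gather_location():
    await asyncio.sleep(0.1)              # stands in for polling the GPS/IMU
    return {"lat": 40.4433, "lon": -79.9436}

async def interpret_location(location):
    await asyncio.sleep(0.1)              # stands in for the directions API call
    return "In 200 feet, turn left."

async def speak(feedback):
    await asyncio.sleep(0.1)              # stands in for the text-to-speech engine
    print(feedback)

async def controller():
    location_buffer = asyncio.Queue()
    feedback_buffer = asyncio.Queue()
    for _ in range(3):                    # would loop until the destination is reached
        await location_buffer.put(await gather_location())
        await feedback_buffer.put(await interpret_location(await location_buffer.get()))
        await speak(await feedback_buffer.get())

asyncio.run(controller())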

Next week I would like to hook the components we have ordered into the physical system. I would like to communicate with the components through the software and gather the location data needed to know where the user is. I would also like to hook the software up to the Blues Wireless Notecard API so that we can communicate with our external directions API. If I can get all of this done next week, I will be on schedule and can begin collaborating with Zach to get the skeleton of our entire system working. We would then have the hardware supplying data to the directions thread, which would let us work out the next steps toward a fully functional system.

Colin’s Status Report for 10/8

This week I worked on figuring out the compatibility of the parts that we want to use on the project. Initially, we thought about using two RPis, dedicating one to the front end and one to the back end. The only reason for this would be to make the GPS and cellular data card easier to hook up to the system. However, the increased development time and complexity of two RPis communicating with each other is not worth it. I did some research on the data card and the GPS and determined that we can hook both of them up to one RPi. Since we aren't going to be anywhere near the computational limits of the RPi, a single board seems to be the most logical route to take. The GPS unit has the ability to change its I2C address, so we can run both the GPS unit and the cellular data card on the same I2C lines. An alternative, if problems arise, would be to communicate with the cellular data card via I2C and the GPS via UART.
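As a quick sanity check, something like the following could confirm that both devices answer on the same I2C bus. The addresses are assumptions: 0x42 is the usual u-blox GPS default, and the data card's address would need to be confirmed against its documentation.

from smbus2 import SMBus

# Quick check that both devices answer on the same I2C bus (bus 1 on the RPi).
# Both addresses below are assumptions and should be verified.

GPS_ADDR = 0x42
DATA_CARD_ADDR = 0x17

with SMBus(1) as bus:
    for name, addr in [("GPS", GPS_ADDR), ("cellular data card", DATA_CARD_ADDR)]:
        try:
            bus.read_byte(addr)           # a bare read is enough to see if the device ACKs
            print(f"{name} responded at 0x{addr:02x}")
        except OSError:
            print(f"no response from {name} at 0x{addr:02x}")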

I also did research on the environment we will be running our tasks in. I originally contemplated scheduling three separate Python processes on the RPi: one for GPS data gathering and filtering, another for the backend, and another for the audio output. However, communicating data to and from each process is not simple. An easier approach is a single Python process using asyncio to perform cooperative multitasking across the three threads. Since we are not bound by computational power, we do not need to utilize all of the cores on the RPi, and this makes data communication between the threads much simpler. We also do not have hard RTOS requirements, so we do not need preemption as long as we work out a simple cooperative scheduling scheme. Any development time taken away from the environment and put towards the main directions algorithm of the project will be very useful for us.
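A minimal sketch of the cooperative-multitasking idea is below, with placeholder tasks and data. The point is that the three coroutines can share plain Python state with no locks, because only one of them runs at a time and control only changes hands at the explicit await points.

import asyncio

# Three coroutines in one process sharing plain Python state, no locks needed.
# The values written are placeholders; only the scheduling idea is illustrated.

shared = {"fix": None, "instruction": None}

async def gps_task():
    while shared["instruction"] is None:
        shared["fix"] = (40.4433, -79.9436)   # placeholder GPS fix
        await asyncio.sleep(0.1)              # yield to the other tasks

async def backend_task():
    while shared["fix"] is None:
        await asyncio.sleep(0.1)
    shared["instruction"] = "Continue straight for one block."

async def audio_task():
    while shared["instruction"] is None:
        await asyncio.sleep(0.1)
    print(shared["instruction"])

async def main():
    await asyncio.gather(gps_task(), backend_task(), audio_task())

asyncio.run(main())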

I am doing okay with respect to the schedule. I accomplished a lot of what I wanted to in figuring out exactly what parts to use and how they will communicate with each other. I have ordered the RPi from the ECE inventory and have figured out what we will be running on it in terms of software. I would have liked to actually receive the RPi last week, but I was not able to; I will be picking it up Monday morning.

Next week I need to get a few things done. The first is to set up Python on the RPi and start on the framework that lets all of our threads communicate with each other. The most important goal for next week is to order all of the remaining parts that we will need for the project: the GPS/IMU, the cellular data card, antennas for both of those parts, and wires to hook the devices up to the RPi.

Colin’s Status Report for 10/1

This week our team altered our project to now provide directions along blind-friendly routes to aid the visually impaired. Due to Eshita dropping the class, Zach and I lack the machine learning knowledge to be able to proceed with the prior design.

I will now be focusing on the front end of our system. I will be using a Raspberry Pi to gather data from a GPS unit to determine the user's location. The SparkFun GPS-RTK Dead Reckoning pHAT board appears to be a good unit for the project. The unit attaches to an RPi4 through the 40-pin header and is easily interfaced with over I2C. It contains a GPS receiver and an IMU to provide more accurate position readings when GPS signal is lost. The unit has a heading accuracy of within 0.2 degrees; however, it does not contain a magnetometer. It achieves this accuracy by relying on GPS movement combined with accelerometer readings. This may be a problem for us given that our user may be standing still for a long period of time, and the heading reading will be prone to drift while the user is not moving in a direction. A solution would be to add a secondary board with a magnetometer to tell direction; however, this may not be necessary and would significantly increase the complexity of the unit, because we would no longer be able to dedicate the pHAT 40-pin connector to the GPS and would have to connect both boards to the RPi, sharing the header.

I will also be taking commands from the back-end Pi to give directions to the user via audio. I will be using a text-to-speech engine to tell the user where to go and to give various status updates sent from the back-end Pi. The RPi4 comes with a 3.5mm audio jack capable of outputting audio to a wired earbud, through which the user will hear the directions.
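A minimal sketch of the audio output is below, assuming pyttsx3 as the engine; that choice is just an example and not final. Audio goes to the system's default playback device, which on the RPi4 can be pointed at the 3.5mm jack.

import pyttsx3

# Example only: pyttsx3 is one candidate text-to-speech engine, not a final choice.
engine = pyttsx3.init()
engine.setProperty("rate", 150)   # speaking rate in words per minute (placeholder value)
engine.say("In 200 feet, turn left at the crosswalk.")
engine.runAndWait()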

I am currently behind schedule given that our team is re-designing the project; however, I am very happy about the project's new direction. Over the past day we have been focusing heavily on the project, and we will continue to do so in order to have a good design done by Sunday for the design review.

Team Status Report for 10/1

This week Eshita decided to drop the class due to an overwhelming workload. The two of us remaining thought about continuing the project in the same direction but realized that we lacked the machine learning knowledge to confidently proceed. We thought about how we could still aid the visually impaired without the use of machine learning and decided to move towards directions and navigation instead. Our new project will tell a user how to get from point A to point B while avoiding crosswalks that are not blind-friendly.

The system will be composed of two Raspberry Pis communicating with each other over a wired network. The front-end Pi will gather location data using a GPS and IMU and will communicate that data to the back-end Pi. The back-end Pi will take the location data and interface with the Google Directions and Roads APIs via a cellular data chip to determine where the user should go next to reach the destination. Information such as the distance to the next intersection, the direction to turn at the intersection, and the ETA will be periodically reported via a speaker and a text-to-speech engine running on the front-end Pi.
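A rough sketch of the back-end directions request is below. The coordinates and API key are placeholders, and the real system would send the request over the cellular data chip rather than a normal network connection.

import requests

# Placeholder origin/destination and API key; only the request shape is illustrated.
params = {
    "origin": "40.4433,-79.9436",        # current GPS fix (placeholder)
    "destination": "40.4406,-79.9959",   # requested destination (placeholder)
    "mode": "walking",
    "key": "YOUR_API_KEY",               # placeholder
}
resp = requests.get("https://maps.googleapis.com/maps/api/directions/json",
                    params=params, timeout=10)
steps = resp.json()["routes"][0]["legs"][0]["steps"]
for step in steps:
    print(step["distance"]["text"], "-", step["html_instructions"])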

Our team is behind schedule at this point, considering that we have to restart most of the research and design; however, we are working hard to catch up to where we should be. The skills needed for the project suit our areas of specialization very well, and we should be able to dedicate most of our time towards development as opposed to research.

Team Status Report for 9/24

At the moment the most significant risk that could jeopardize the success of the project is the accuracy of our object detection algorithms. We do not want to tell a blind person to cross the road when they are not supposed to. We are currently looking into options to mitigate this risk; one option may be to reduce the scope of the project to just crosswalk detection or crossing-sign detection, which would let us focus more time on one of the algorithms and hopefully make it better. We should also focus on keeping the false positive rate under 1%; the metric we are considering is closer to 0.5% for whichever detection application we pick, and for both if we pursue both. The design of the system is unchanged, but we are looking into how to get the false positive rate as close to 0% as possible.

Colin’s Status Report for 9/24

This week I did research on all aspects of the hardware for the project. I wanted to tie all of the components together at a high level and see how they would interact in the project. In particular, I have decided to go with a BMA180 accelerometer to feed data into the Jetson to determine whether the user is stationary or walking. I can use a Python library for this particular accelerometer to get the data, or I can write a small C program to gather the data and run some algorithms to determine the state of the user. I figured it would be nice to easily gather the data using Python, given that we will be using Python for the object detection portion of the project and that the accelerometer data must be communicated to that portion; doing both in the same Python code would significantly improve both robustness and development speed. I have also been looking into cameras that can stream data to the Jetson, and I believe that the SainSmart IMX219 would work well with the Jetson Nano, which is what we plan on using.

Currently, I am on track according to the schedule, given that for now all of us are working towards the design proposal and the work that I have done this week relates to the hardware side of the design.

My primary goal next week is to look into options for the audio communication to the user: whether or not they should cross the street and what direction to go. I would also like to receive a Jetson Nano within the next week and start to install Python/OpenCV on it. When installing Python, I would also like to look into the option of building a multi-threaded Python program that can get the accelerometer data at regular intervals and communicate that data to the thread that decides whether to run the walk sign detection or the crosswalk detection.
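A first sketch of the stationary-versus-walking check mentioned above might look like the following. The BMA180 read is stubbed out as a hypothetical read_acceleration() helper returning g's on each axis, and the 0.05 g threshold is a guess that would need to be tuned against real data.

import math
import statistics
import time

def read_acceleration():
    # Hypothetical stand-in for reading the BMA180 over I2C: ax, ay, az in g.
    return (0.01, -0.02, 1.00)

def is_walking(samples=50, interval=0.02, threshold=0.05):
    magnitudes = []
    for _ in range(samples):
        ax, ay, az = read_acceleration()
        magnitudes.append(math.sqrt(ax * ax + ay * ay + az * az))
        time.sleep(interval)
    # Walking shows up as variation in the magnitude; standing still does not.
    return statistics.pstdev(magnitudes) > threshold

print("walking" if is_walking() else "stationary")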