This week, I did more research on the implementation details of our project. I proposed switching the camera from being mounted on the person to being mounted on the ceiling. After digging into SLAM algorithms, we realized it may be difficult to mount a camera on a person and use it as a perception tool to guide them toward the target. Thus, we decided to potentially move the camera to the ceiling, where both the user and the target are in view. From this single point of view, we can run a path planning algorithm that guides the person toward the target through haptic sensors. Our use case shifts slightly: instead of having a blind person navigate unfamiliar indoor environments, they would navigate an indoor setting that is blind-friendly, such as their office or home, where a one-time camera setup and some pre-configuration would be necessary. After this, the blind person would wear a haptic vibrating belt that, together with voice-assisted commands, guides them to their target.

We also tested an online image classification library on a top-down view image to see whether existing libraries cover our use case. The library we found was able to detect objects from a bird's-eye view quite well, so we believe this approach should be feasible.
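To make the path planning idea concrete, below is a minimal sketch of what that step could look like, assuming the ceiling camera view is discretized into a 2D occupancy grid (0 = free, 1 = obstacle) and the user and target cells come from detections in the frame. We have not committed to a specific algorithm yet; this uses plain A* with a Manhattan heuristic, and the grid, start, and goal values are illustrative placeholders.

```python
# Sketch: grid-based path planning from the overhead view (assumptions noted above).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance heuristic for 4-connected movement.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from = {}
    g_score = {start: 0}
    while open_set:
        _, g, current = heapq.heappop(open_set)
        if current == goal:
            # Reconstruct the path by walking back through predecessors.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (current[0] + dr, current[1] + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                tentative = g + 1
                if tentative < g_score.get(nbr, float("inf")):
                    came_from[nbr] = current
                    g_score[nbr] = tentative
                    heapq.heappush(open_set, (tentative + h(nbr), tentative, nbr))
    return None  # no path found

# Toy 5x5 map: user at top-left, target at bottom-right, one wall in the middle.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(astar(grid, (0, 0), (4, 4)))
```

Each cell-to-cell step along the returned path could then be translated into a vibration pattern on the belt (e.g., left/right/forward cues), which is the part we still need to design.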
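For reference, the detection test looked roughly like the sketch below. It does not name the exact library we tried; it simply illustrates the kind of off-the-shelf experiment described above using a pretrained YOLO model from the ultralytics package, with "overhead.jpg" as a placeholder for a bird's-eye-view frame.

```python
# Illustrative stand-in for the top-down object detection test (library and image are assumptions).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained model, downloaded on first use
results = model("overhead.jpg")   # run detection on a single top-down image

for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    print(f"{cls_name}: confidence {float(box.conf):.2f}, box {box.xyxy.tolist()}")
```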
Our progress is slightly behind schedule, since we have shifted the implementation details quite a bit after realizing the original solution may not be feasible under the given requirements. To catch up to the project schedule, we plan to iron out the specific implementation details of our project by next week. Once we know which toolchains, libraries, and technology stack we are using for each component, we can think about how they all fit together. This is our target for next week. We would also like to order all materials by early next week, so we can start working with each of them soon.