Xiaoran Lin’s weekly update 4/16

This week, I have mostly been doing preparation work for finalizing our project in the final week. I ordered some new parts to help rebuild the belt, and I am currently waiting on them to arrive before I reroute the circuit. In the meantime, I tested the individual parts and weeded out the broken components (since vibration motor disks and sensors are quite cheap, they can be quite unreliable). Basic testing confirms that the ultrasonic sensors and vibration motors work as expected, so I only need to wait on a few components to finish assembling the final form of the belt next week.

Apart from the belt, I am also looking into developing a visualization system for the belt’s functionality. The basic idea is to feed the processed information from the Raspberry Pi to another PC through USB. I hope to include both the video feed from the depth camera and the information from the sensors. The visuals would show how the belt recognizes risks and produces warnings, so that observers can understand what the belt is doing without having to put it on. This would make the belt much easier to demonstrate.
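
One possible shape for this (just a sketch; the socket transport, port, and message format below are assumptions I am still evaluating, not a settled design) would be for the Pi to push JPEG-compressed frames plus a small JSON packet of sensor readings, with the PC side decoding and overlaying them in OpenCV:

# PC-side visualization sketch (hypothetical): receive length-prefixed JPEG frames
# and a JSON sensor packet from the Raspberry Pi, then overlay the readings.
import json
import socket
import struct

import cv2
import numpy as np

HOST, PORT = "0.0.0.0", 5000  # placeholder transport; USB networking vs. serial is still TBD

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

with socket.socket() as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        # Each message: 4-byte JPEG length, JPEG bytes, 4-byte JSON length, JSON bytes.
        (jpeg_len,) = struct.unpack(">I", recv_exact(conn, 4))
        frame = cv2.imdecode(np.frombuffer(recv_exact(conn, jpeg_len), np.uint8), cv2.IMREAD_COLOR)
        (meta_len,) = struct.unpack(">I", recv_exact(conn, 4))
        sensors = json.loads(recv_exact(conn, meta_len))  # e.g. {"ultrasonic_cm": [...], "warning": "left"}
        cv2.putText(frame, f"warning: {sensors.get('warning', 'none')}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.imshow("belt demo", frame)
        if cv2.waitKey(1) == ord("q"):
            break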

Next week, I will look to finish the two points above and transition to testing before the weekend.

Xiaoran Lin’s Report 4/11

This week, I focused on adjusting the physical model for the belt and fixing the Arduino interface. Following last week’s progress, I have completed a working prototype that includes most of the components we need other than the battery and depth camera. Furthermore, I adjusted part of the Arduino interface to allow for a faster response rate from the ultrasonic sensors.

Later in the week I did not make much progress because of Carnival, but in the following week I plan to reroute most of my circuits and replace some of the parts that I found to be malfunctioning. I will also add an attachment mechanism for the depth camera as well as the battery.

Ning Cao’s status report 04/10/2022

This week I mainly worked on incorporating different nodes of the Python DepthAI package to work with our new algorithm for the depth camera.

Our depth camera pipeline now includes a feature tracker that finds and tracks image features across RGB camera frames. I also successfully aligned the RGB frames with the depth frames, so for each feature generated by the feature tracker we now know both its position in the frame and its distance in the corresponding depth frame (available for comparison with the baseline if necessary), across different frames.
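
The pipeline wiring currently looks roughly like the sketch below (node and queue names follow the depthai Python examples as I remember them; whether the feature tracker takes the color stream directly, and the exact stream formats and resolutions, still need to be double-checked):

# Sketch of the DepthAI pipeline described above, not final code.
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera feeds the feature tracker.
cam = pipeline.create(dai.node.ColorCamera)
cam.setBoardSocket(dai.CameraBoardSocket.RGB)

# Stereo pair -> depth, aligned to the RGB camera so pixel coordinates match.
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

tracker = pipeline.create(dai.node.FeatureTracker)
cam.video.link(tracker.inputImage)  # NB: confirm the tracker accepts this stream format

# Stream tracked features and depth back to the host.
xout_feat = pipeline.create(dai.node.XLinkOut)
xout_feat.setStreamName("features")
tracker.outputFeatures.link(xout_feat.input)

xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    q_feat = device.getOutputQueue("features", maxSize=4, blocking=False)
    q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)
    while True:
        depth_frame = q_depth.get().getFrame()      # depth in millimeters
        for f in q_feat.get().trackedFeatures:      # features keep their id across frames
            x, y = int(f.position.x), int(f.position.y)
            dist_mm = depth_frame[y, x]             # distance at the tracked feature
            # compare dist_mm against the baseline value at (x, y) if needed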

I’ve also tested the performance of the feature tracker with Kelton, and we found that it performs poorly when the camera is shaken, in which case we may lose track of the features. We are considering adding an IMU node to our depth camera code, but more testing is required to decide.

Kelton’s Status Report 04/10/2022

This week I mostly worked with Ning on modeling the depth image to rate the threat levels of obstacles. My first attempt assumed that, when compared against the baseline surface used for calibration, the mean squared error of a depth image containing obstacles would differ enough from that of an obstacle-free extension of the baseline surface to support thresholding. This assumption proved to be incorrect.

Subsequently, I discussed with Ning and we agreed on using Oak-D’s feature tracker module to narrow the range of pixels used for computing the mean squared error. We observed that the tracked features are fairly stable over time but can vanish when the camera is shaken. To smooth out these outliers, we plan to take the average of several historical frames (those within one tenth of a second, assuming 60 FPS), as sketched below.
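
A minimal sketch of that smoothing step (the helper below is hypothetical; feature ids and per-feature depth values come from the tracker output Ning described):

from collections import defaultdict, deque

HISTORY = 6  # roughly one tenth of a second of frames at 60 FPS

# Per-feature rolling history of depth samples, keyed by the tracker's feature id.
history = defaultdict(lambda: deque(maxlen=HISTORY))

def smoothed_depth(feature_id, depth_mm):
    """Average the last few depth readings for one tracked feature to damp
    single-frame glitches (e.g., when the camera is shaken)."""
    history[feature_id].append(depth_mm)
    return sum(history[feature_id]) / len(history[feature_id])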

I also proposed adding a Stop mode where all six coin motors vibrate in cases where the user cannot simply walk around the obstacle, e.g., when facing a staircase or a wall. To build this feature, I searched online for neural net models that can classify stairs and managed to feed a depth image into an example TensorFlow Lite classification model.
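
The experiment so far has roughly the following shape (a sketch only: the model file is a stand-in example classifier rather than a trained stair detector, and the preprocessing and input dtype depend on whatever model we end up using):

# Feed a depth frame into a TensorFlow Lite classification model (exploratory sketch).
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite  # or tensorflow.lite on a desktop

interpreter = tflite.Interpreter(model_path="classifier.tflite")  # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(depth_frame_mm):
    """Resize/normalize a depth frame to the model's input shape and run inference."""
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(depth_frame_mm.astype(np.float32), (w, h))
    img = np.stack([img] * 3, axis=-1)               # fake 3 channels for an RGB-trained model
    img = (img / max(float(img.max()), 1.0))[np.newaxis, ...]  # crude normalization to [0, 1]
    interpreter.set_tensor(inp["index"], img.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]   # class scores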

Team Status Report 04/10/2022

This week we mostly worked on modeling the depth image. At first, the idea was to compute the mean squared error between the baseline ground surface and the incoming image stream, placing a higher weight on closer data as illustrated below. However, we discovered that the errors were relatively unchanged between a baseline surface and one with an obstacle.
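
For reference, a minimal sketch of that weighting scheme (the linear near-field weight profile here is only an illustration, not the exact weights we used):

import numpy as np

def weighted_mse(frame_mm, baseline_mm):
    """Mean squared error between an incoming depth frame and the baseline,
    with rows near the bottom of the frame (closer ground) weighted more."""
    rows = frame_mm.shape[0]
    w = np.linspace(0.2, 1.0, rows)[:, None]      # hypothetical near-field emphasis
    err = (frame_mm.astype(np.float32) - baseline_mm.astype(np.float32)) ** 2
    return float((w * err).sum() / (w.sum() * frame_mm.shape[1]))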

Now, the updated plan is to use Oak-D’s feature tracker module to pinpoint significant pixels and compare them with their counterparts in the baseline. Since these feature pixels represent a qualitative difference from the baseline surface, we reason that the resulting mean squared error will be separable from the baseline’s. In terms of implementation, the RGB camera at the center will be used to align features detected from the left and right stereo cameras through Oak-D’s built-in alignment functionality.

Additionally, we are considering adding a Stop mode where all six coin motors vibrate in cases where the user needs more than a directional cue and cannot simply walk around an obstacle, e.g., when facing a staircase or a wall. To achieve this, we have looked into classic computer vision algorithms and neural net models that can classify objects. This feature would also make more sense with audio feedback, which we would most likely leave as future work.

Kelton’s Status Report 04/02/2022

This week I brainstormed with Ning on how exactly the depth camera will pass data to the Raspberry Pi and how the initial calibration will determine a legitimate starting path. For now, we will use the center vector of the whole frame as the baseline “ground truth” to avoid noise from adjacent objects.

I also worked on belt assembly with Alex and added the feature of running the control script upon system boot of the Raspberry Pi, i.e., when the power switch is turned on.

Team Status Report 04/02/2022

This week, we assembled most of our parts onto the belt and implemented an algorithm for depth camera calibration.

For belt assembly, we have successfully mounted ultrasonic sensors, vibration motors, and the Arduino onto the belt. A photo of the current status of the belt is shown below. For more details, check out Alex’s status report.

For depth camera calibration, we have successfully found a way to sample only one pixel column of a single depth camera frame and construct a sufficiently accurate ground truth model. A graph of the modeling (orange line) of depth information (blue line) is shown below. For more details, check out Ning’s status report.

Next week we plan to finish assembling the belt and complete implementing the obstacle detection algorithm for the depth camera.

Ning Cao’s status report 04/02/2022

This week I worked with Kelton on the obstacle detection logic for the depth camera, specifically calibration. We are adopting the strategy of calibrating the surroundings upon system boot and constructing a “ground truth” frame against which we compare real-time depth information. We assume that the ground in front of the user is flat during calibration, and we found a way to model the depth information with linear regression (see the charts in the team report). For simplicity, and to avoid interference from obstacles on the side, we only take one pixel column in the middle of the frame and assume that it contains the correct information about a flat ground. We find that the reciprocal of the samples can be modeled by simple Ridge regression. We observe that the samples at the top of the frame are sometimes unstable, so we assign a smaller weight to those samples. Please note that this is only based on one pixel column; the construction of the entire “ground truth” frame should be completed by tomorrow.
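
The per-column fit is roughly the following shape (a sketch only; the regularization strength and the exact weight profile are still being tuned, and the variable names are placeholders):

import numpy as np
from sklearn.linear_model import Ridge

def fit_column(depth_column_mm):
    """Fit 1/depth against row index for the center pixel column, down-weighting
    the noisier samples near the top of the frame."""
    rows = np.arange(len(depth_column_mm)).reshape(-1, 1)
    y = 1.0 / np.clip(depth_column_mm.astype(np.float32), 1, None)   # reciprocal of depth
    weights = np.linspace(0.3, 1.0, len(depth_column_mm))            # hypothetical: top rows trusted less
    model = Ridge(alpha=1.0).fit(rows, y, sample_weight=weights)
    return model  # 1.0 / model.predict(rows) gives the fitted "ground truth" depth column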

Next week I will focus on:

  • finishing assembly of the physical belt with Alex; and
  • implementing obstacle detection & threat classification based on the “ground truth” frame with Kelton.

Xiaoran Lin’s Report 4/2

This week, I primarily focused on the physical prototype for our belt. We wanted to put a model of the belt together before conducting more software testing. Using myself as a test user, I planned out the placement of our components on the belt: mainly the ultrasonic sensors, vibration units, Arduino board, breadboard, Raspberry Pi, depth camera, and battery. Overall, our sketch seems to be valid. So far we have attached all of the sensors as well as the vibration units, the Arduino board, and the breadboard. A rough image of what the belt will look like is included in the team report.

I will try to finish up the connection on Sunday so we can demo our current progress on Monday.

A minor Arduino adjustment this week: I changed the rate at which the Arduino interface sends signals to the Raspberry Pi. Instead of sending data for each individual sensor reading, we now send one update per cycle of all 6 sensors.
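
On the Raspberry Pi side, reading one cycle would look roughly like the sketch below (the serial port, baud rate, and comma-separated message format are placeholders, not the finalized protocol):

import serial  # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # placeholder port/baud

def read_cycle():
    """Read one line from the Arduino containing all six ultrasonic readings (cm)."""
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        return None
    parts = line.split(",")
    return [float(p) for p in parts] if len(parts) == 6 else None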

Moving forward, we will primarily focus on software testing. We are also considering improving the physical model to hide more of the circuitry. The exact way to do this is still under discussion, but it will likely be decided after our physical prototype is complete.