Ning Cao’s status report 04/17/2022

This week I primarily worked on setting up the pipeline of the depth camera. The diagram below illustrates the pipeline.

I also added a column index-based lower boundary on the depth map due to the nature of stereo vision.
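A minimal sketch of what this boundary could look like, assuming the depth map arrives as a 2D NumPy array and treating the exact cutoff column as a placeholder:

```python
import numpy as np

# Hypothetical cutoff: columns to the left of this index have no valid stereo
# overlap, so their depth values are not trustworthy. The real value depends
# on the stereo baseline and the configured minimum depth.
MIN_VALID_COL = 40

def mask_invalid_columns(depth_frame: np.ndarray) -> np.ndarray:
    """Zero out columns where stereo matching cannot produce reliable depth."""
    masked = depth_frame.copy()
    masked[:, :MIN_VALID_COL] = 0  # 0 is treated as "no depth data"
    return masked
```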

Next week I will be working with the team on the belt assembly and with Kelton on depth camera testing.

Team status report 4/16

This week, our team focused on making sure that the core algorithm running on the Raspberry Pi functions as expected and correctly evaluates incoming risks. Our main effort went into processing the information from the depth camera. For more detailed information on this, check Kelton’s blog post.

The other aspect is the information from the ultrasonic sensors. We tested our machine in a regular lab environment to see how well it would respond, and so far the readings from the ultrasonic sensors seem quite stable. We did not make many changes to the core algorithm that processes the sensor data, only a few minor bug fixes here and there. Overall, the prototype belt has met our minimum expectations in a select set of settings. We have yet to fully test its functionality, which we hope to focus on next week.

For the physical belt assembly, we are currently waiting on new materials such as foam casing for the ultrasonic sensors, shorter female jumpers for rerouting the circuits, and tape to cover exposed wires. We will finalize our assembly next week.

In our final week before the presentation, we will rebuild the physical belt one last time to make sure there are no component failures and everything is well organized. Then we will test the belt and make minor adjustments as needed. We will also add a new feature to visualize the belt’s functionality so we can better display our product. Our team has agreed to finish other deliverables over the weekend so that we can focus on testing throughout next week. So far we expect to meet our goals before the presentation.

Kelton’s Status Report 04/16/2022

This week I implemented the new modeling method of using the narrower range of feature pixels from Oak-D’s feature tracker module to compute the mean squared error against the baseline. Additionally, the non-overlapping region of the frame between the two stereo cameras’ fields of view is now filtered out based on a depth threshold.
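A rough sketch of this computation, where the depth threshold, the coordinate convention, and the function names are assumptions rather than our exact code:

```python
import numpy as np

# Depths beyond this value (in millimeters) are assumed to fall in the
# non-overlapping region of the two stereo fields of view, or to be invalid.
MAX_VALID_DEPTH_MM = 4000

def feature_mse(depth_frame, baseline_frame, features):
    """features: iterable of (row, col) pixel coordinates from the tracker."""
    errors = []
    for r, c in features:
        d = float(depth_frame[r, c])
        b = float(baseline_frame[r, c])
        if 0 < d <= MAX_VALID_DEPTH_MM and b > 0:  # filter non-overlap / invalid pixels
            errors.append((d - b) ** 2)
    return float(np.mean(errors)) if errors else 0.0
```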

Another minor update in the data processing pipeline is that the parsing logic in the Raspberry Pi main script now matches the new data format, in which multiple ultrasonic sensor readings arrive in a single message line.
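For illustration, a hedged sketch of that parsing, assuming a simple comma-separated line of distances (the real message format may differ):

```python
def parse_ultrasonic_line(line):
    """Parse one message line carrying several ultrasonic readings, e.g. "102,98,250"."""
    readings = []
    for field in line.strip().split(","):
        try:
            readings.append(float(field))
        except ValueError:
            readings.append(float("nan"))  # keep positions aligned when a field is garbled
    return readings

# Example: parse_ultrasonic_line("102,98,250") -> [102.0, 98.0, 250.0]
```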

Next week, I will test whether the new mean squared error is interpretable and significant enough to detect ground-level obstacles from the depth image. Meanwhile, I will explore the professor’s suggestion of retrieving derivatives of pixel depths across time to estimate obstacle speed.
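The derivative idea could end up looking something like the finite-difference sketch below; the frame rate and the per-feature bookkeeping are assumptions, since this is still unexplored:

```python
def approach_speed_mm_per_s(prev_depth_mm, curr_depth_mm, fps=30.0):
    """Finite-difference estimate of how fast a tracked point is approaching.

    A positive result means the obstacle got closer between two consecutive
    frames captured at the given frame rate.
    """
    return (prev_depth_mm - curr_depth_mm) * fps
```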

Xiaoran Lin’s weekly update 4/16

This week, I have mostly been doing preparation work for finalizing our project in the final week. I have ordered some new parts to help rebuild the belt, and I am still waiting on them to arrive before I reroute the circuit. In the meantime, I have tested the individual parts and ruled out components that are broken (since vibration motor disks and sensors are quite cheap, they can be quite unreliable). I have done basic testing to make sure that the ultrasonic sensors and vibration motors work as expected, and I only need to wait on a few components to finish assembling the final form of the belt next week.

Apart from the belt, I am also looking into developing a visualization system for the belt’s functionality. The basic idea is to feed the processed information from the Raspberry Pi to another PC through USB. I hope to include both the video feed from the depth camera and the information from the sensors. The visuals would indicate how the belt produces effective warnings and recognizes risks, so that observers can understand the belt without having to put it on. This would help us demonstrate the belt much better.
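As a rough sketch of the Raspberry Pi side of this feed, assuming the USB link shows up as a serial port and that the data is packaged as newline-delimited JSON (both assumptions, not a finalized design):

```python
import json
import serial  # pyserial; assumes the USB connection appears as a serial device

# Hypothetical port name for the Pi's USB gadget serial interface.
link = serial.Serial("/dev/ttyGS0", baudrate=115200)

def send_frame_summary(ultrasonic_cm, threat_level):
    """Send one summary record to the visualization PC."""
    message = {
        "ultrasonic_cm": ultrasonic_cm,  # list of distances from the sensors
        "threat_level": threat_level,    # e.g. 0 = clear, 1 = warn, 2 = stop
    }
    link.write((json.dumps(message) + "\n").encode("utf-8"))
```

The camera video feed is heavier and would likely need its own stream; this sketch only covers the low-rate sensor and decision data.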

Next week, I will look to finish the two items above and transition to testing before the weekend.

Xiaoran Lin’s Report 4/11

For this week, I focused on adjusting the physical model of the belt and fixing the Arduino interface. Following last week’s progress, I have completed a working prototype that contains most of the components we need other than the battery and depth camera. Furthermore, I have adjusted part of the Arduino interface to allow a faster response rate from the ultrasonic sensors.

Later in the week I did not make much progress because of Carnival, but in the following week I plan to reroute most of my circuits and replace some of the parts that I found to be malfunctioning. I will also add an attachment mechanism for the depth camera as well as the battery.

Ning Cao’s status report 04/10/2022

This week I mainly worked on incorporating different nodes of the Python DepthAI package to work with our new algorithm for the depth camera.

Our depth camera pipeline now includes a feature tracker that finds and tracks image features in the RGB camera frames, and I successfully aligned the RGB frames with the depth frames. As a result, for each feature generated by the feature tracker we know both its position in the frame and its distance in the corresponding depth frame (available for comparison with the baseline if necessary) across different frames.
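A simplified sketch of how such a pipeline can be wired up with DepthAI is shown below. The node options are stripped down, and whether the RGB stream can feed the feature tracker directly may depend on frame-type configuration (the official examples feed mono frames), so treat this as an outline rather than our exact code:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Stereo pair for depth, aligned to the RGB camera so pixel coordinates match.
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Feature tracker fed from the color camera (may need frame-type adjustments).
color = pipeline.create(dai.node.ColorCamera)
tracker = pipeline.create(dai.node.FeatureTracker)
color.video.link(tracker.inputImage)

xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)
xout_feat = pipeline.create(dai.node.XLinkOut)
xout_feat.setStreamName("features")
tracker.outputFeatures.link(xout_feat.input)

with dai.Device(pipeline) as device:
    depth_q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    feat_q = device.getOutputQueue("features", maxSize=4, blocking=False)
    depth_frame = depth_q.get().getFrame()       # 2D array of depth values (mm)
    for f in feat_q.get().trackedFeatures:       # tracked features with ids
        x, y = int(f.position.x), int(f.position.y)
        print(f.id, depth_frame[y, x])           # depth at the feature pixel
```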

I’ve also tested the performance of the feature tracker with Kelton, and we found that it performs poorly when the camera is shaken, in which case we may lose track of the features. We are considering adding an IMU node to our depth camera code, but more testing is required before we decide.

Kelton’s Status Report 04/10/2022

This week I mostly worked with Ning on modeling the depth image to rate the threat levels of obstacles. My first attempt was based on the assumption that, when compared against the same baseline surface used for calibration, the mean squared error generated from a depth image with obstacles and from an extension of the baseline surface without obstacles would differ enough to enable thresholding. This assumption proved to be incorrect.

Subsequently, I discussed with Ning and we agreed on using Oak-D’s feature tracker module to narrow the range of pixels used for computing the mean squared error. We observed that the tracked features are fairly stable across time but can vanish when the camera shakes. To smooth out these outliers, we plan to take the average of several historical frames (taken within one tenth of a second, assuming 60 FPS).
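A minimal sketch of that smoothing, assuming roughly six frames of history at 60 FPS:

```python
from collections import deque
import numpy as np

# Keep the last ~0.1 s of per-frame error values (6 frames at 60 FPS) so a
# single frame where the tracked features vanish does not spike the output.
HISTORY_LEN = 6
history = deque(maxlen=HISTORY_LEN)

def smoothed_error(new_error):
    history.append(new_error)
    return float(np.mean(history))
```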

I also proposed adding a Stop mode in which all six coin motors vibrate in cases where the user cannot simply walk around the obstacle, e.g., when facing a staircase or a wall. To build this feature, I searched online for a neural network model that can classify stairs and managed to feed a depth image into an example TensorFlow Lite classification model.
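A hedged sketch of pushing a depth frame through a TensorFlow Lite classifier; the model file, the input handling, and the meaning of the output scores are placeholders, and the preprocessing here (crude cropping and channel duplication) is only for illustration:

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # or tf.lite on a full TensorFlow install

interpreter = tflite.Interpreter(model_path="classifier.tflite")  # placeholder model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_depth_frame(depth_frame_mm):
    """Return the model's class scores for one depth frame."""
    h, w = input_details[0]["shape"][1:3]
    # Crude preprocessing: crop to the model's input size, normalize to 0-1,
    # and duplicate the single depth channel so it fits an RGB-trained model.
    img = depth_frame_mm[:h, :w].astype(np.float32) / max(depth_frame_mm.max(), 1)
    img = np.repeat(img[..., None], 3, axis=-1)[None, ...]
    interpreter.set_tensor(input_details[0]["index"], img)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])[0]
```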

Team Status Report 04/10/2022

This week we mostly worked on modeling the depth image. At first, the idea was to compute the mean squared error between the baseline ground surface and the incoming image stream, placing a higher weight on closer data as illustrated below. However, we discovered that the errors were relatively unchanged between a baseline surface and one with an obstacle.
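For reference, the discarded weighted-error idea looked roughly like the sketch below; the linear weighting toward the bottom (closer) rows is an assumption about the exact weights we used:

```python
import numpy as np

def weighted_mse(baseline_col, depth_col):
    """MSE between a baseline ground profile and an incoming depth column,
    weighting rows nearer to the user more heavily."""
    n = len(baseline_col)
    weights = np.linspace(1.0, 2.0, n)  # bottom rows (closer ground) weighted more
    diff = np.asarray(depth_col, dtype=float) - np.asarray(baseline_col, dtype=float)
    return float(np.sum(weights * diff ** 2) / np.sum(weights))
```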

Now, the updated plan is to use Oak-D’s feature tracker module to pinpoint significant pixels and compare them with their counterparts in the baseline. Since these feature pixels represent qualitative differences from the baseline surface, we reason that the resulting mean squared error will be separable from the baseline. In terms of implementation, the RGB camera at the center will be used to align features detected from the left and right stereo cameras through Oak-D’s built-in alignment functionality.

Additionally, we are considering adding a Stop mode in which all six coin motors vibrate in cases where the user could use some help and cannot simply walk around an obstacle, e.g., when facing a staircase or a wall. To achieve this, we have looked into classic computer vision algorithms and neural network models that can classify objects. This feature would also make more sense with audio feedback, which we will most likely leave as future work.

Kelton’s Status Report 04/02/2022

This week I brainstormed with Ning on how exactly the depth camera will pass data to the Raspberry Pi and how the initial calibration will determine a legitimate starting path. For now, we will use the center vector of the whole frame as the baseline “ground truth” to avoid noise from adjacent objects.
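As a tiny sketch of that calibration choice, assuming the depth frame is a 2D array indexed as rows by columns:

```python
import numpy as np

def baseline_from_frame(depth_frame: np.ndarray) -> np.ndarray:
    """Take the center pixel column of a depth frame as the baseline profile."""
    center_col = depth_frame.shape[1] // 2
    return depth_frame[:, center_col].copy()
```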

I also worked on belt assembly with Alex and added the feature of running the control script upon system boot of the Raspberry Pi, i.e., when the power switch is turned on.

Team Status Report 04/02/2022

This week, we assembled most of our parts onto the belt and implemented an algorithm for depth camera calibration.

For belt assembly, we have successfully mounted ultrasonic sensors, vibration motors, and the Arduino onto the belt. A photo of the current status of the belt is shown below. For more details, check out Alex’s status report.

For depth camera calibration, we have successfully found a way to sample only one pixel column of a single depth camera frame and construct a sufficiently accurate ground-truth model. A graph of the model (orange line) against the depth information (blue line) is shown below. For more details, check out Ning’s status report.
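A hypothetical sketch of the modeling step, assuming a low-degree polynomial fit to the sampled column; the report does not specify the actual model, so the degree here is a placeholder:

```python
import numpy as np

def fit_ground_profile(depth_col, degree=2):
    """Fit a smooth curve (orange line) to one sampled depth column (blue line)."""
    rows = np.arange(len(depth_col))
    valid = np.asarray(depth_col) > 0            # ignore pixels with no depth data
    coeffs = np.polyfit(rows[valid], np.asarray(depth_col)[valid], degree)
    return np.polyval(coeffs, rows)              # modeled depth for every row
```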

Next week we plan to finish assembling the belt and complete implementing the obstacle detection algorithm for the depth camera.