Ning Cao’s status report 04/30/2022

This week my work primarily focused on testing. During testing, we found our system to be less robust than expected, so I worked with Kelton to improve its robustness. I also helped develop the testing scenarios. The planned scenarios include:

  • Stationary obstacles: We will place multiple objects of varying heights (chairs, boxes, etc.) and lead the blindfolded user wearing the belt (hereafter “the user”) to the site. We will then ask the user to tell us the direction of each obstacle.
  • Moving obstacles: We will ask the user to stay still while team members walk in front of them, acting as moving obstacles. We will then ask the user to report the direction in which each obstacle is moving.
  • Mixed scenarios: We will ask the user to walk down a hallway with obstacles we have placed in advance. We will time the walk and record any collisions.

Ning Cao’s status report 04/23/2022

This week I mainly focused on working with Kelton on condensing the 720p depth map into two threat-level numbers. We eventually settled on hard-coded distance thresholds to classify threats.

Below is a showcase of our work. From a single frame, we use hard thresholds to generate masks that signal a threat of at least level 1 (bottom center), 2 (bottom left), or 3 (top right). In these masks, the yellow regions are where the threat lies. Adding all three masks together and passing the sum through a validity mask (top center; yellow regions are valid) gives the frame simplified into four threat levels (bottom right; the brighter the color, the higher the threat level).
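
A minimal sketch of this thresholding step, assuming the depth frame arrives as a uint16 NumPy array in millimeters; the cutoff distances here are placeholders rather than our tuned values:

```python
import numpy as np

# Placeholder cutoffs in millimeters; the real thresholds are hand-tuned.
T1, T2, T3 = 2500, 1500, 800

def classify_threats(depth_mm: np.ndarray) -> np.ndarray:
    """Condense a depth frame into per-pixel threat levels 0-3 by summing masks."""
    valid = depth_mm > 0          # a reading of 0 means "no data"
    m1 = depth_mm < T1            # threat of at least level 1
    m2 = depth_mm < T2            # threat of at least level 2
    m3 = depth_mm < T3            # threat of at least level 3
    return (m1.astype(np.uint8) + m2 + m3) * valid
```

The two condensed threat-level numbers could then be derived from this array, for example by taking the maximum level over each half of the frame.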

We do notice that the ground is classified as threat level 1 or 2; we have an idea for filtering it out using the baseline matrix we generated in previous weeks (a sketch follows below). Hopefully we can show the result in our presentation next Monday.
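
The ground filter is still only an idea, but a minimal sketch of it could look like the following, assuming the baseline matrix is a per-pixel flat-ground depth map and using a placeholder 10% tolerance:

```python
import numpy as np

GROUND_TOLERANCE = 0.10   # placeholder: within 10% of the baseline counts as ground

def remove_ground(depth_mm: np.ndarray, baseline_mm: np.ndarray) -> np.ndarray:
    """Zero out pixels whose depth matches the calibrated flat-ground baseline,
    so the ground no longer shows up as a level 1 or 2 threat."""
    diff = np.abs(depth_mm.astype(np.int32) - baseline_mm.astype(np.int32))
    is_ground = diff <= GROUND_TOLERANCE * baseline_mm
    return np.where(is_ground, 0, depth_mm)
```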

Team status report 04/23/2022

This week, our work focused on the depth-camera algorithm and the assembly of the belt. We will attach the depth camera to the belt and test the whole system on Sunday.

Below is a showcase of our work on the depth camera. We successfully separated the depth information into 4 threat levels (lv. 0 to 3). The frame on the bottom right is the processed frame, in which the yellow area (a chair in the vicinity) is correctly identified as a level 3 threat. We will filter out the ground (green part) on Sunday before testing.

Below are showcases of our sponge-protected ultrasonic sensors and the partly assembled belt. We expect the sponge to serve as protection as well as cushioning that keeps the vibration of one motor from propagating through the entire belt. The center of the belt, currently covered with a piece of yellow paper, is where the depth camera will be mounted.

Ning Cao’s status report 04/17/2022

This week I primarily worked on setting up the pipeline of the depth camera. The diagram below illustrates the pipeline.
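
For reference, here is a minimal sketch of what the DepthAI side of such a pipeline typically looks like (mono cameras feeding a StereoDepth node, with the depth stream sent back to the host); this is a simplified stand-in for the diagram, not the exact node graph we use:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Two mono cameras feed the stereo depth node.
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Send the depth frames back to the host over XLink.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    depth_frame = q.get().getFrame()   # uint16 NumPy array, depth in millimeters
```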

I also added a column-index-based lower boundary on the depth map, which is necessary due to the nature of stereo vision.

Next week I will be working with the team on the belt assembly and with Kelton on depth camera testing.

Ning Cao’s status report 04/10/2022

This week I mainly worked on incorporating different nodes of the DepthAI Python package to work with our new algorithm for the depth camera.

Our depth camera pipeline now includes a feature tracker that finds and tracks image features in the RGB camera frames. I also aligned the RGB frames with the depth frames, so for each feature produced by the tracker we know both its position in the frame and the distance reading at that position in the corresponding depth frame (available for comparison with the baseline if necessary), and we can follow both across frames (a sketch of the lookup is below).
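
A minimal sketch of that lookup, assuming the depth frame has been aligned to the RGB frame and is a NumPy array in millimeters (the helper name is mine, not part of DepthAI):

```python
def feature_depths(tracked_features, depth_mm):
    """Map each DepthAI TrackedFeature id to the depth reading (mm) at its pixel
    position, so the same feature can be compared against the baseline and
    followed across frames."""
    h, w = depth_mm.shape
    depths = {}
    for f in tracked_features:              # items from a TrackedFeatures message
        x = min(int(f.position.x), w - 1)   # clamp to the frame in case of rounding
        y = min(int(f.position.y), h - 1)
        depths[f.id] = int(depth_mm[y, x])
    return depths
```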

I’ve also tested the performance of the feature tracker with Kelton, and we found that it performs poorly when the camera is shaken, in which case we may lose track of the features. We are considering adding an IMU node to our depth camera code, but more testing is required before we decide.

Team Status Report 04/02/2022

This week, we mounted most of our parts on the belt and implemented an algorithm for depth camera calibration.

For belt assembly, we have successfully mounted ultrasonic sensors, vibration motors, and the Arduino onto the belt. A photo of the current status of the belt is shown below. For more details, check out Alex’s status report.

For depth camera calibration, we have successfully found a way to sample only one pixel column of a single depth camera frame and construct a sufficiently accurate ground truth model. A graph of the model (orange line) fitted to the depth information (blue line) is shown below. For more details, check out Ning’s status report.

Next week we plan to finish assembling the belt and complete implementing the obstacle detection algorithm for the depth camera.

Ning Cao’s status report 04/02/2022

This week I worked with Kelton on the object detection logic for the depth camera, specifically calibration. We are adopting the strategy of calibrating against the surroundings upon system boot and constructing a “ground truth” frame against which real-time depth information is compared. We assume that the ground in front of the user is flat during calibration, and we found a way to model the depth information with linear regression (see the charts in the team report). For simplicity, and to avoid interference from obstacles on the side, we only take one pixel column in the middle of the frame and assume that it contains correct information about a flat ground. We found that the reciprocal of the samples can be modeled by simple Ridge regression. We also observed that the samples at the top of the frame are sometimes unstable, so we assign a smaller weight to those samples (a sketch of the fit is below). Please note that this is based on only one pixel column; the construction of the entire “ground truth” frame should be completed by tomorrow.
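
A minimal sketch of the column fit, assuming the sampled column is a NumPy array of depth readings in millimeters; the weights and regularization strength here are placeholders, not our tuned values:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_ground_column(column_mm: np.ndarray) -> np.ndarray:
    """Model 1/depth of the middle pixel column (flat-ground assumption) with
    Ridge regression, down-weighting the unstable samples near the top."""
    rows = np.arange(len(column_mm))
    valid = column_mm > 0                          # drop missing readings
    X = rows[valid].reshape(-1, 1)
    y = 1.0 / column_mm[valid]                     # reciprocal is roughly linear in row index
    weights = np.linspace(0.2, 1.0, len(y))        # placeholder: lighter weight near the top
    model = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
    recon = model.predict(rows.reshape(-1, 1))
    return 1.0 / np.clip(recon, 1e-6, None)        # back to depth (mm) for every row
```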

Next week I will focus on:

  • finishing assembly of the physical belt with Alex; and
  • implementation of obstacle detection & threat classification based on the “ground truth” frame with Kelton.

Ning Cao’s status report 03/26/2022

This week I explored different ways of collecting and processing data. I found a way to calculate depth from the disparity between the stereo cameras and wrote sample code that successfully generates a depth map. I tried combining stereo depth with a YOLO neural network, but found the object classification to be rather unstable. After a discussion with the team about how our system should interpret the depth map, we decided to split the depth map into horizontal layers, each with its own warning thresholds to tune (a sketch is below).
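
As a reference for the depth calculation and the layer split, here is a minimal sketch; the focal length and baseline values are placeholders, not the OAK-D’s calibrated intrinsics:

```python
import numpy as np

FOCAL_PX = 440.0      # placeholder focal length in pixels
BASELINE_MM = 75.0    # placeholder stereo baseline in millimeters

def disparity_to_depth(disparity_px: np.ndarray) -> np.ndarray:
    """depth = focal_length * baseline / disparity; zero disparity means no match."""
    disp = np.where(disparity_px > 0, disparity_px, np.nan)
    return FOCAL_PX * BASELINE_MM / disp

def split_layers(depth_mm: np.ndarray, n_layers: int = 3):
    """Split the depth map into horizontal layers so each gets its own thresholds."""
    return np.array_split(depth_mm, n_layers, axis=0)
```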

Next week my work will be primarily focused on tuning the number of horizontal layers and warning thresholds through extensive testing.

Ning Cao’s status report 03/19/2022

This week I gained a deeper understanding of the DepthAI Python API. I successfully implemented code with the stereo depth camera and the edge detector enabled (a sketch is below). I have also read a paper on ground-level obstacle detection, but I have not yet integrated its algorithm into the system.
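
A minimal sketch of how the edge detector can be attached to a stereo pipeline; the node names are DepthAI’s, but the wiring here is an illustrative guess rather than our exact graph:

```python
import depthai as dai

pipeline = dai.Pipeline()

# Stereo pair for depth.
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
left.out.link(stereo.left)
right.out.link(stereo.right)

# Edge detector running on the rectified left stream.
edge = pipeline.create(dai.node.EdgeDetector)
stereo.rectifiedLeft.link(edge.inputImage)

# Stream the edge image back to the host.
xout_edge = pipeline.create(dai.node.XLinkOut)
xout_edge.setStreamName("edge")
edge.outputImage.link(xout_edge.input)
```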

Next week I will explore other capabilities of the stereo depth camera and work on integrating obstacle detection and threat classification.

 

Ning Cao’s status report 02/26/2022

This week, I focused on integrating the OAK-D depth camera with the Raspberry Pi. We successfully connected the Raspberry Pi to CMU-DEVICE and installed the proper environment to run sample scripts from the OAK-D documentation.

Next week my work will focus on finding an appropriate NN model for the OAK-D camera, as well as exploring whether the system needs to run in headless/standalone mode.