Ning Cao’s status report 03/26/2022

This week I explored different ways of collecting and processing data. I found a way to calculate depth from the disparity between the stereo cameras and wrote working sample code that generates a depth map. I also tried combining stereo depth with a YOLO neural network, but I found the object classification to be rather unstable. After a discussion with the team about how our system should interpret the depth map, we decided to split the depth map into horizontal layers and tune the warning threshold of each layer separately.
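As a rough sketch of that layered interpretation, the snippet below splits a depth frame into horizontal bands and flags any band whose closest reading is nearer than that band's warning threshold. The layer count and threshold values here are placeholders for the tuning planned next week, not final numbers.

    import numpy as np

    def check_layers(depth_mm, num_layers=3, thresholds_mm=(800, 1200, 2000)):
        """Split a depth frame into horizontal layers and flag any layer whose
        nearest obstacle is closer than that layer's warning threshold.
        num_layers and thresholds_mm are placeholder values to be tuned."""
        rows_per_layer = depth_mm.shape[0] // num_layers
        warnings = []
        for i in range(num_layers):
            layer = depth_mm[i * rows_per_layer:(i + 1) * rows_per_layer, :]
            valid = layer[layer > 0]          # 0 marks pixels with no disparity
            nearest = valid.min() if valid.size else np.inf
            warnings.append(bool(nearest < thresholds_mm[i]))
        return warnings

    # Example: a synthetic 400x640 depth frame with a close object near the bottom.
    frame = np.full((400, 640), 3000, dtype=np.uint16)
    frame[350:, 300:340] = 600
    print(check_layers(frame))                # -> [False, False, True]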

Next week my work will be primarily focused on tuning the number of horizontal layers and warning thresholds through extensive testing.

Kelton’s status report 3/26/2022

This week I added logic to alert for only a fixed duration, rather than indefinitely, when an obstacle is stationary, and to send control messages from the Raspberry Pi to the Arduino based on Alex’s encoding of multiple vibrator controls in one line. Another update, following Alex’s suggestion, is that messages are now sent only when a vibrator’s intended intensity changes, which makes the serial communication more efficient. Meanwhile, for processing depth camera data, I aligned with Ning on how to start feeding arrays of depth data, each associated with a different vibrator, into the Raspberry Pi alert model.
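A minimal sketch of those two pieces of logic is below. The command string follows the modifyVibrator format from our Arduino API, but the duration constant and function names are placeholders rather than the actual implementation.

    import time

    ALERT_DURATION_S = 2.0   # placeholder: how long a stationary obstacle alerts
    _last_sent = {}          # vibrator number -> last intensity actually sent
    _alert_started = {}      # vibrator number -> when the current alert began

    def update_vibrator(num, level, send):
        """Decide what, if anything, to send for one vibrator.
        Sends a message only when the intended intensity changes, and silences
        a persistent (stationary) alert after ALERT_DURATION_S. `send` is
        whatever writes one command line to the Arduino over serial."""
        now = time.monotonic()
        if level > 0:
            _alert_started.setdefault(num, now)
            if now - _alert_started[num] > ALERT_DURATION_S:
                level = 0                      # alerted long enough; go quiet
        else:
            _alert_started.pop(num, None)      # obstacle gone; reset the timer
        if _last_sent.get(num) != level:       # change-only serial traffic
            _last_sent[num] = level
            send(f"modifyVibrator {num} {level}")

    # Example with print standing in for the serial write:
    update_vibrator(2, 3, print)   # -> modifyVibrator 2 3
    update_vibrator(2, 3, print)   # no output; intensity unchanged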

I also broke two of the vibrators because their wires are too thin. This experience led me to push for casing all components onto the belt so that the system can be tested consistently and safely. Consequently, I got a mini breadboard from TechSpark and put in purchase requests for more ultrasonic sensors (we now have only 5 instead of 6), new vibration motors, and an acrylic Arduino case.

Next week, I plan to incorporate depth camera data into the model and hopefully start testing the system in a set environment to gauge the accuracy of our sensing.

Xiaoran Lin’s Status Report 3/26

This week, I mainly focused on testing the system together with Kelton and making sure that the connections between the Arduino and the Raspberry Pi were working as expected. So far, the interface seems fine, and we need to move on to more testing to make sure that our threat detection is working correctly. I also started making physical models and developed circuit connection schemes that we will carry forward next week. Moving on, we will first assemble the physical belt and conduct software testing from that point onward.

 

Team Status Report 3/26

This week, we focused on testing the multi-sensor and vibration system, made progress on how the depth camera identifies ground-level threats, and started making physical parts for assembling the belt.

For the sensor and vibration system, we have finished developing both the Arduino interface that communicates with the sensors and vibration units and the Python interface on the Raspberry Pi side. After some basic testing, the serial communication works quite well, and we were able to operate the system with multiple sensors and vibrators active at the same time. We have also developed a basic threat-determination model in our Python code and conducted some testing to ensure that the system operates as expected.

In the testing process, we also realized that our vibration unit cables are extremely thin and fragile and may not satisfy the requirements of a wearable device. We therefore decided to order slightly more durable replacement vibration units. We have also started to develop protective cases for our Arduino board and breadboard circuits. Since we now have a finalized circuit connection model, we can simply build the circuits and continue our software testing without ever having to modify them. We plan to purchase an existing Arduino protective case from Amazon and use laser-cut acrylic boards to build a custom case for our breadboard.

For detailed updates on the depth camera development, please see Ning’s report.

 

 

Ning Cao’s status report 03/19/2022

This week I gained a deeper understanding of the DepthAI Python API. I have successfully implemented code with the stereo depth camera and the edge detector enabled. I have also read a paper on ground-level obstacle detection, but I have not yet integrated its algorithm into the system.
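For reference, a condensed pipeline of that kind, adapted from the Luxonis DepthAI examples, is sketched below; the stream names, queue sizes, and the choice to feed the edge detector from the left mono camera are illustrative assumptions rather than our final configuration.

    import depthai as dai

    pipeline = dai.Pipeline()

    # Left and right mono cameras feed the stereo depth node.
    mono_left = pipeline.create(dai.node.MonoCamera)
    mono_right = pipeline.create(dai.node.MonoCamera)
    mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

    stereo = pipeline.create(dai.node.StereoDepth)
    mono_left.out.link(stereo.left)
    mono_right.out.link(stereo.right)

    # On-device edge detector, here fed from the left mono stream.
    edge = pipeline.create(dai.node.EdgeDetector)
    mono_left.out.link(edge.inputImage)

    # Stream both results back to the Raspberry Pi.
    xout_depth = pipeline.create(dai.node.XLinkOut)
    xout_depth.setStreamName("depth")
    stereo.depth.link(xout_depth.input)

    xout_edge = pipeline.create(dai.node.XLinkOut)
    xout_edge.setStreamName("edges")
    edge.outputImage.link(xout_edge.input)

    with dai.Device(pipeline) as device:
        depth_q = device.getOutputQueue("depth", maxSize=4, blocking=False)
        edge_q = device.getOutputQueue("edges", maxSize=4, blocking=False)
        depth_frame = depth_q.get().getFrame()   # uint16 depth in millimeters
        edge_frame = edge_q.get().getFrame()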

Next week I will explore other capabilities of the stereo depth camera and work on integrating obstacle detection and threat classification.

 

Xiaoran Lin’s Status Report 3/19

This week, I focused on finishing up the Arduino interface. I remodeled our pin assignment, since I found a way to use the available analog pins to drive the trigger pins. As a result, we now have 18 usable pins in total, which satisfies our requirement. Furthermore, I finished the Arduino interface and tested it with 3 sensors and vibrators operating at the same time. I have concluded our work on the Arduino for now; next week, I will focus on the Python script on the Raspberry Pi.

Kelton’s Status Report 03/19/2022

This week I reviewed the comments on our design report and prototyped the obstacle alert model based on ultrasonic distance and speed. The model has 3 alert levels (high, medium, low) that correspond to vibration intensities, triggered by the conditions speed > 3 m/s or distance <= 1 m, speed > 2 m/s or distance <= 2 m, and speed > 1 m/s or distance <= 4.5 m, respectively. The model was further tested with multiple ultrasonic sensors and vibration motors.
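A compact sketch of that decision logic is below; the cutoffs are the ones listed above, and the mapping of level numbers to intensities (3 = high, 0 = no alert) is an assumption for illustration.

    def alert_level(distance_m, speed_mps):
        """Map an obstacle's distance and approach speed to a vibration level.
        Levels: 3 = high, 2 = medium, 1 = low, 0 = no alert (assumed mapping)."""
        if speed_mps > 3 or distance_m <= 1:
            return 3
        if speed_mps > 2 or distance_m <= 2:
            return 2
        if speed_mps > 1 or distance_m <= 4.5:
            return 1
        return 0

    print(alert_level(distance_m=0.8, speed_mps=0.5))   # -> 3 (high)
    print(alert_level(distance_m=4.0, speed_mps=0.5))   # -> 1 (low)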

Team Status Report 03/19/2022

For Arduino and Raspberry Pi integration, an API has been established with the commands activateSensor, modifyVibrator <num 1-6><level 0-3>, deactivateVibrator <num 1-6>, and deactivateSensor.
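For illustration, the Raspberry Pi side of that API might be exercised over pyserial roughly as follows; the port, baud rate, and line-based formatting are assumptions, and the actual argument encoding follows Alex’s scheme.

    import serial  # pyserial

    # Port and baud rate are placeholders for the actual Arduino connection.
    ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

    def send(cmd):
        ser.write((cmd + "\n").encode())

    send("activateSensor")          # start ultrasonic ranging
    send("modifyVibrator 2 3")      # vibrator 2 (of 1-6) to level 3 (of 0-3)
    send("deactivateVibrator 2")    # stop vibrator 2
    send("deactivateSensor")        # stop ranging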

For control logic, a primitive alert model based on an obstacle’s ultrasonic distance and speed has been developed. The model has 3 alert levels (high, medium, low) that correspond to vibration intensities, triggered by the conditions speed > 3 m/s or distance <= 1 m, speed > 2 m/s or distance <= 2 m, and speed > 1 m/s or distance <= 4.5 m, respectively.

The above integration and control logic have been tested to work with multiple ultrasonic sensors and vibration motors.

 

Kelton Zhang’s Status Report for 03/05/2022

This week I mostly worked on the design report and on integrating the Raspberry Pi with depth camera imaging.

For the design report, I wrote the abstract, introduction, use-case and design requirements, bill of materials, schedule, and related work sections. While writing the report, we realized that we need to order a belt and more vibration motors, and I put in the request.

For depth camera sensing, I looked into OAK-D’s edge detection and stereo depth examples with Ning and tested their effects after integrating them into our Raspberry Pi main program. Our initial approach will be to forward the stereo-depth-processed frames, which show depth differences without noisy features like printed text on surfaces, to the edge detection module. The implementation will be left until after spring break.
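To illustrate why that helps, a host-side stand-in (not our planned on-device implementation) could normalize a depth frame and run a standard edge filter on it, so the detected edges follow actual depth discontinuities rather than surface texture; the Canny thresholds below are placeholders.

    import cv2
    import numpy as np

    def depth_edges(depth_mm):
        """Edge-detect a uint16 depth frame so contours follow depth
        discontinuities instead of printed text or other surface texture."""
        # Scale 16-bit depth down to 8 bits for OpenCV's Canny detector.
        depth_8u = cv2.normalize(depth_mm, None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.Canny(depth_8u, 50, 150)   # placeholder thresholds

    # Example with a synthetic frame: a near object in front of a far background.
    frame = np.full((400, 640), 3000, dtype=np.uint16)
    frame[200:300, 200:300] = 800
    edges = depth_edges(frame)                # white pixels outline the object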

 

After spring break, I will continue working on Raspberry Pi integration and on the design of the sensor data model.