Ging’s Weekly Report 12/9

This week, we gave our final presentation and did peer reviews. I also edited the final poster. In addition, I conducted more user testing in Scaife and recorded several videos demonstrating our product’s capabilities. I also started writing our final report.

Our progress is on schedule. What remains is the final demo, the final video, and the final report; we are not behind.

Next week, we plan to submit the final video on Monday, then finish the remaining parts of our final report and submit it by Friday.

Ging’s Weekly Report 12/2

This week, I did software accuracy testing for the deep learning model: I computed its mIoU and mAP and plotted its loss curve. I also participated in our team’s usability testing on Wednesday, where we debugged the system by recording a trace folder of the test objects and pinpointing specific bugs. On Friday, we conducted real-world user testing and recorded a demo video.
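
The core of the mIoU computation is a per-box intersection-over-union between predictions and ground truth. This is a minimal sketch of that step, assuming boxes in [xmin, ymin, xmax, ymax] format (mAP came from a standard evaluation script rather than this snippet):

import torch
from torchvision.ops import box_iou

def mean_iou(pred_boxes, gt_boxes):
    # pred_boxes, gt_boxes: float tensors of shape (N, 4) and (M, 4),
    # each row in [xmin, ymin, xmax, ymax] pixel coordinates
    if len(pred_boxes) == 0 or len(gt_boxes) == 0:
        return 0.0
    ious = box_iou(gt_boxes, pred_boxes)   # (M, N) pairwise IoU matrix
    best = ious.max(dim=1).values          # best-matching prediction per ground truth
    return best.mean().item()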

My progress is on schedule. We will submit the final presentation this weekend and give the final demo next week.

Next week, we will keep fixing small bugs. The main task is to make sure everything is ready for the demo.

Ging’s Weekly Report 11/18

This week, I refined the model to add more labels to the output. The model was not very accurate because the dataset is small and not class-balanced. I tried focal loss to deal with the class imbalance, but the results still did not improve much. I also helped my teammates construct the stand that holds the sensors, and we tested in real-world scenarios. We modified the scripts and the model based on the results, which were reasonable. Pictures of us testing can be found in our team’s weekly report.
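
For reference, this is a minimal sketch of the focal loss I tried, in PyTorch; the formulation is the standard one, and the alpha/gamma values shown are the common defaults rather than a tuned choice:

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits: (N, C) raw class scores; targets: (N,) integer class labels
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample -log(p_t)
    p_t = torch.exp(-ce)                                     # probability of the true class
    # (1 - p_t)^gamma down-weights easy examples, focusing training on rare/hard ones
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()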

We are on schedule, and no changes need to be made to it. Next week, I hope to test the modified system and iterate on further refinements so that I am ready for the demo.

Ging’s Status Report 11/11

This week I completed the model for object detection. I tested Faster R-CNN and YOLO to compare their tradeoffs and integrated YOLO into the inference pipeline. I also processed the dataset into a well-formatted, trainable form.
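
As an illustration of the integration, this is roughly how the pipeline calls the detector, sketched with the public YOLOv5 torch.hub API; the exact checkpoint and post-processing in our pipeline differ:

import torch

# load the model once at startup (pre-trained weights shown for illustration)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect(frame):
    # frame: an RGB image as a numpy array
    results = model(frame)
    # each row: [xmin, ymin, xmax, ymax, confidence, class]
    return results.xyxy[0].tolist()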

I’m on schedule, and no schedule updates are needed. Next week, I want to refine the models for higher accuracy and start testing and validation.

For testing the model, I plan to run an accuracy test. We will simulate the environment of a blind user and record the number of catches/misses of the system. We will also weight these catches/misses by the bounding-box mismatch, then use the weighted average as the metric for whether the design meets the requirement. We will also measure the weight of the system to make sure it doesn’t exceed the 5 lb requirement.
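
A hypothetical sketch of that weighted metric, assuming each catch is weighted by its bounding-box IoU and each miss contributes zero (the exact weighting scheme is still to be finalized):

def weighted_accuracy(catch_ious, total_trials):
    # catch_ious: IoU of each detection that matched a real object;
    # misses implicitly score 0, so the metric is sum(IoU) / trials
    return sum(catch_ious) / total_trials if total_trials else 0.0

# example: 8 catches with varying box overlap out of 10 trials
print(weighted_accuracy([0.9, 0.8, 0.85, 0.7, 0.95, 0.6, 0.75, 0.9], 10))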

Ging’s Status Report 11/04

This week, I wrote a program to convert the training set from the YOLOv5 format to the custom format we use for training. I also finished training our object detection model; it is a custom model with a ResNet backbone and a structure similar to, but smaller than, YOLO’s. I will compare its inference accuracy against a pre-trained model’s and pick the better one.
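
A minimal sketch of the conversion step: YOLOv5 stores one "class cx cy w h" line per object, normalized to [0, 1], and the corner-coordinate output below stands in for our custom format, which I’m not reproducing exactly here:

def yolo_to_corners(label_line, img_w, img_h):
    # YOLOv5 label format: "class cx cy w h", all normalized to [0, 1]
    cls, cx, cy, w, h = label_line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

print(yolo_to_corners("0 0.5 0.5 0.2 0.4", 640, 480))  # (0, 256.0, 144.0, 384.0, 336.0)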

My progress is on schedule and should produce a roughly workable prototype for next week’s interim demo. There is no change to our schedule.

Next week, we will do the interim demo and identify missing details through real-world testing.

Ging’s Status Report for 10/28

This week I built the model for object detection. I explored the differences between one-stage and two-stage models, then built a two-stage model and started some initial training. I learned about anchors, the NMS algorithm, and the Jaccard index, and the outcome was fruitful: the model works pretty well. I’m on track this week, and the plan doesn’t need to be modified.
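
To summarize what I learned, this is a minimal sketch of NMS using the Jaccard index (IoU); production implementations are vectorized, but the logic is the same:

def jaccard(a, b):
    # Jaccard index (IoU) of two boxes in [xmin, ymin, xmax, ymax] format
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # greedily keep the highest-scoring box and drop boxes that overlap it too much
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if jaccard(boxes[best], boxes[i]) < iou_thresh]
    return keep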

Another thing we’ve been doing is collecting data. After Wednesday’s meeting, Professor Tamal suggested switching to a small dataset and adding post-processing. So next week I will explore how to post-process the dataset and ask my teammates to collect data with me.

Attached is the training loss report for one epoch showing this week’s work.

# Loss output
# Epoch:1/1 || Epochiter: 4801/6440 || Iter: 4801/6440 || Loc: 0.9353 Cla: 1.5471 Landm: 2.1233 || LR: 0.00100000 || Batchtime: 0.6921 s || ETA: 0:18:55
# Epoch:1/1 || Epochiter: 5101/6440 || Iter: 5101/6440 || Loc: 3.4078 Cla: 3.8303 Landm: 18.6551 || LR: 0.00100000 || Batchtime: 0.6789 s || ETA: 0:15:09
# Epoch:1/1 || Epochiter: 5401/6440 || Iter: 5401/6440 || Loc: 0.6965 Cla: 1.4381 Landm: 1.0676 || LR: 0.00100000 || Batchtime: 0.6735 s || ETA: 0:11:40
# Epoch:1/1 || Epochiter: 5701/6440 || Iter: 5701/6440 || Loc: 0.7189 Cla: 1.0520 Landm: 1.5581 || LR: 0.00100000 || Batchtime: 0.6454 s || ETA: 0:07:57
# Epoch:1/1 || Epochiter: 6001/6440 || Iter: 6001/6440 || Loc: 2.2662 Cla: 2.8337 Landm: 7.2278 || LR: 0.00100000 || Batchtime: 0.8076 s || ETA: 0:05:55
# Epoch:1/1 || Epochiter: 6301/6440 || Iter: 6301/6440 || Loc: 2.2961 Cla: 3.4013 Landm: 3.2449 || LR: 0.00100000 || Batchtime: 0.6540 s || ETA: 0:01:31
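
For context, the Loc, Cla, and Landm columns are the localization (box regression), classification, and landmark terms of a multi-task loss; the quantity being minimized is a weighted sum along these lines (the weights below are illustrative, not the exact ones in the training script):

def total_loss(loc, cla, landm, w_loc=2.0, w_cla=1.0, w_landm=1.0):
    # multi-task detection loss: weighted sum of box regression,
    # classification, and landmark regression terms
    return w_loc * loc + w_cla * cla + w_landm * landm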

Ging’s Status Report 10/21

This week I explored deep learning models for object detection. I looked at the code for the backbone, neck, and head, and surveyed which family of models I should use.
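
To show the structure I studied, here is a minimal sketch of how a backbone, neck, and head compose into a detector; the layers below are toy stand-ins (assuming a ResNet-style backbone and an FPN-style neck, both among the options I’m comparing), not the actual code:

import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    # backbone extracts features, neck fuses/projects them, head predicts boxes + classes
    def __init__(self, num_classes=2, num_anchors=3):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for ResNet stages
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.neck = nn.Conv2d(64, 64, 1)        # stand-in for an FPN lateral layer
        # per anchor: 4 box offsets plus num_classes scores
        self.head = nn.Conv2d(64, num_anchors * (4 + num_classes), 3, padding=1)

    def forward(self, x):
        return self.head(self.neck(self.backbone(x)))

out = TinyDetector()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 18, 16, 16])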

I also looked at how objects are identified using bounding boxes, and I ran a facial recognition demo to show how human faces are detected along with facial landmarks.
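
The demo’s visualization step amounts to drawing the detected box and landmark points on the frame; here is a minimal OpenCV sketch (the box and landmark coordinates are made-up placeholders, not the demo’s actual outputs):

import cv2
import numpy as np

frame = np.zeros((240, 320, 3), dtype=np.uint8)   # placeholder image
box = (80, 60, 200, 180)                          # xmin, ymin, xmax, ymax
landmarks = [(110, 100), (170, 100), (140, 130), (120, 160), (160, 160)]

cv2.rectangle(frame, box[:2], box[2:], (0, 255, 0), 2)   # face bounding box
for x, y in landmarks:
    cv2.circle(frame, (x, y), 2, (0, 0, 255), -1)        # landmark dots
cv2.imwrite("demo.jpg", frame)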

The project is roughly on schedule. Next week, I hope to achieve a workable version of the object detection network.

Individual question:

ABET #7 says: An ability to acquire and apply new knowledge as needed, using appropriate learning strategies

As you’ve now established a set of sub-systems necessary to implement your project, what new tools are you looking into learning so you are able to accomplish your planned tasks?

Personally, I will learn how object detection models work and the fine-tuning tradeoffs among different CNN backbones, FPNs, and heads. Also, to train the model, I will learn how to work with Jupyter notebooks.

Ging’s Status Report for 10/07

This week, I practiced and gave the design review presentation, and I completed peer reviews of other design reviews. Personally, I learned a lot from other groups’ designs, such as detailed hardware designs and concrete PCB diagrams. I also managed to finish setup on the ece-cluster machine. It took a lot of time because I had trouble using pip in my AFS space, as my disk quota was entirely filled; attached is a screenshot of my correspondence with ECE ITS (Chad) about solving this problem. My schedule is on time. Next week, I hope to deploy the model and start training.

Ging’s Status Report for 09/30

This week, I prepared the slides for the design review and practiced presenting them. I also found a dataset for our training; here is the link to the dataset I found and tried: http://gibsonenv.stanford.edu/method/. The slides are in the design review tab.

My progress is on schedule. Good work!

Next week, I hope to give the presentation and do the peer reviews. I also hope to start collecting data and start training.

The ECE courses that covered this design are 18-794 Computer Vision, 18-213 Computer Systems, and 17-313 Software Engineering. They are helpful for building a deep learning network and managing the project well.

Ging’s Status Report for 09/23

This week we spent lecture time watching other teams’ project proposal presentations and doing peer reviews. I learned a lot from others’ presentations and feedback. I realized that we should specify which features the detection system measures and lay out the accuracy details in a 2×2 grid. This is the training example I found on GitHub: https://github.com/devendrachaplot/Neural-SLAM.
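
For concreteness, I read that 2×2 grid as a tally of detection outcomes like the one below ("alerts"/"silent" are hypothetical names for the system’s two responses, not settled terminology):

                   object present    object absent
  system alerts    true positive     false positive
  system silent    false negative    true negative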

This week I also did some research on object classification and image classification models to familiarize myself with computer vision.

I’m on schedule.

Next week I will start implementing the SLAM model.