Iris’s Status Report for 12/5

This week, we met up and finished the presentation slides in preparation for our final presentation on Monday. In addition, we filmed parts of our demo video (which will appear as gifs in our presentation) in Gates and are now editing it together. Minji and I are currently working on implementing the database and details page for the IoT stack and should have it completed by today. We will also rehearse for our final presentation tomorrow and clean up the comments Cindy left on our slides.

We are on track to finish this project! We have almost everything completed and everything physical works as expected.

Iris’s status report for 11/21

This week, Jiamin and I successfully got the RFID string onto the Jetson Nano. We are no longer intermittently losing random bytes when transferring the data over I2C from the Nucleo to the Jetson. Now, when an RFID tag is scanned, the transfer works, and if a byte is lost, the system tells the user to rescan the tag to get the full string. We also made a design change to our model: we will use red and green LEDs to indicate whether an RFID scan succeeded or failed, and we will incorporate the LEDs into the enclosure we are creating. Tomorrow, we will all be meeting at HH to start on the IoT app that Cindy linked to us and hopefully get it working.
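The rescan check can be sketched roughly as below. This is a minimal illustration, not our actual firmware: the I2C read itself is stubbed out (on the Jetson one would read from the Nucleo with a library such as smbus2), and the function name and 16-byte length constant are assumptions based on our setup.

```python
EXPECTED_LEN = 16  # full RFID string length (assumption from our setup)

def check_rfid_payload(payload: bytes) -> str:
    """Return the decoded RFID string, or ask the user to rescan."""
    if len(payload) != EXPECTED_LEN:
        return "Byte lost in transfer - please rescan your RFID tag"
    return payload.decode("ascii")

# Simulated reads: one complete transfer, one with a dropped byte.
print(check_rfid_payload(b"0123456789ABCDEF"))
print(check_rfid_payload(b"0123456789ABCDE"))
```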

I am on schedule for our project, and will be working hard this next week to get all the components integrated and smoothly running.

Iris’s status report for 11/14

This week we presented the demo to Cindy and Vyas and got feedback on it! We met with TechSpark and discussed what we need to build to encapsulate our hardware and make it a cohesive product. Our design picture is attached in the team status report. I wasn't able to work on much this week since I had a lot of assignments due, but I am on track with our Gantt chart. My next step will be to send an average of the temperature data plus the RFID data from the microcontroller to the cloud, which I am aiming to achieve by next week. The RFID data is still a little buggy, however (we sometimes miss 1 or 2 bytes of the 16-byte sequence we should be receiving), so some debugging will be needed on that side.
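The averaging step mentioned above could look something like this. It's a rough sketch under my own assumptions: the window size and function name are placeholders, and the actual upload to the cloud is out of scope here.

```python
from statistics import mean

WINDOW_SIZE = 10  # placeholder: number of readings averaged per upload

def average_for_upload(readings: list[float]) -> float:
    """Average the most recent window of temperature readings."""
    window = readings[-WINDOW_SIZE:]
    return mean(window)

samples = [36.4, 36.5, 36.6, 36.5, 36.7]
print(round(average_for_upload(samples), 2))  # → 36.54
```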

Iris’s status report for 11/7

So, I have decided to give up on OpenCV and Haar classifiers for the algorithm and will instead use YOLOv3 with Darknet, a lightweight real-time object detection framework. I couldn't get the FPS I wanted with OpenCV (and after much digging, figured out that on the Jetson Nano it is almost impossible to get good results, since OpenCV's DNN module does not utilize the GPU), and the accuracy of the Haar classifiers was lacking. I am now working in YOLOv3 and training the model on our datasets. For the demo on Monday, I may show the Haar classifier version of the algorithm if I am not done with this part by then.
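To illustrate the kind of post-processing YOLOv3 output needs, here is a toy confidence-thresholding sketch. The (label, confidence, box) tuples and the 0.5 threshold are fabricated for the example; the real detection runs in Darknet.

```python
CONF_THRESHOLD = 0.5  # assumption; tuned per model in practice

def filter_detections(detections):
    """Drop detections whose confidence is below the threshold."""
    return [d for d in detections if d[1] >= CONF_THRESHOLD]

raw = [
    ("mask", 0.91, (40, 30, 120, 120)),
    ("no_mask", 0.32, (200, 50, 80, 90)),   # below threshold, filtered out
    ("face", 0.58, (10, 10, 60, 60)),
]
kept = filter_detections(raw)
print([d[0] for d in kept])  # → ['mask', 'face']
```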

However, this week Minji, Jiamin, and I went to the labs and debugged our temperature sensor! After 4 hours, we managed to get it working and measuring temperature on the Nano (very annoying, i2cdetect is super buggy on the Nano). The sensor just wasn't being detected by the Nano, and we tried debugging it with an Arduino and a Raspberry Pi. We got it in the end, and the video will be posted in our team status report.

I am a little behind on the facial detection algorithm since I faced many roadblocks, but I am certain that YOLOv3 is the way to go. I will be catching up this week since I don't have many assignments and will be working lots with the team.

 

Update: YOLOv3 works!

Iris’s Status Report for 10/31

This week was a lot of debugging and time spent in the ECE lab. I spent about 12 hours in the lab on Thursday trying to fix package compatibility issues and download the right OpenCV/TensorFlow packages. I was trying to use pretrained OpenCV models for the mask detection algorithm, which ran fine on the computer I was testing on, but they slowed the Jetson Nano down to a really low FPS, to the point where it's unusable. I dug deeper and found that the module I'm using, OpenCV's DNN, doesn't take advantage of the GPU and only uses the CPU, which takes a very long time since we are not utilizing CUDA. I restarted my algorithm and got a very basic iteration working, but its accuracy is very low since the classifiers aren't as strong. I will look into it more this week.
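To make "low FPS" concrete, one way to measure it is to average over a window of frame timestamps. This is a sketch with injected timestamps so the math is checkable; on the Nano the timestamps would come from time.perf_counter() around each processed frame.

```python
def fps_from_timestamps(timestamps: list[float]) -> float:
    """Average frames per second over a list of frame timestamps (seconds)."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# Five frames spaced 0.5 s apart -> 2 FPS (illustrative of a low frame rate).
stamps = [0.0, 0.5, 1.0, 1.5, 2.0]
print(fps_from_timestamps(stamps))  # → 2.0
```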

I am slightly behind schedule, but as mentioned, I will be working on it a lot more. I aim to have a face+mask algorithm with roughly 75% accuracy, plus temperature sensing, working by demo week. I will also be moving on to the temperature sensor this week.

Iris’s Status Report for 10/24

This week has been kind of slow on progress since I had a midterm and several large assignments due, but I did manage to get a video feed working with the Raspberry Pi Camera Module V2 and the Jetson Nano! I ran into some issues trying to connect the two, but figured it out through some extensive googling. I'm still working through some bugs in the face and mask detection algorithm, but I think it is coming along okay. We got feedback from the second design presentation and incorporated it into our current design (thank you all).
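For reference, getting the Pi Camera V2 feed into OpenCV on the Nano typically goes through a GStreamer pipeline. Below is a sketch that only builds the pipeline string (so it stays hardware-free); the element names follow NVIDIA's nvarguscamerasrc convention, the resolution and frame-rate defaults are my own assumptions, and the actual cv2.VideoCapture call is left as a comment.

```python
def csi_pipeline(width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for the Nano's CSI camera (sketch)."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )

pipeline = csi_pipeline()
print(pipeline)
# On the Nano one would then open the feed with:
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```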

I am slightly behind schedule right now, but I will be sure to catch up in the next week since I don't have that many deadlines. I will be aiming to fix the bugs in the algorithm and get some temperature-sensing data on the Jetson Nano.

Iris’s status update for 10/17

This week was very productive. I spent the majority of my time on the design document due Monday since it was the most pressing deadline. We included many high-quality diagrams and fleshed out our entire design. We took Vyas's suggestions into account and clarified in the design document that we weren't worried about packet loss but rather network loss (thank you Vyas!). For my part of the project, I started coding the facial detection algorithm using OpenCV and NumPy and will be trying to upload it onto the Jetson tomorrow.

I am still on schedule for the project. Next week my goal is to get the full video feed working on the RPi camera since we have all the hardware parts now. Good thing there are no more deadlines like reports or presentations, so we can really get to grinding.

Iris’s Status Report for 10/10

This week, Minji and I managed to get the WiFi antenna installed on the Jetson Nano. We found out we bought the wrong SD card, though, so we had to make an emergency purchase that will hopefully arrive today.

I've been working on the facial and mask detection algorithm and figuring out which training sets to use. I might need to create or supplement my own training set for the mask detection algorithm, so I will need to take pictures of people with and without masks on later. I've also been helping Minji research IoT design, and we've decided to move from AWS Greengrass to Microsoft Azure IoT Edge since it is better documented for use with the Nano.
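If I do build my own training set, one simple convention is to label images by their parent folder name (mask/ vs no_mask/). Here is a sketch under that assumption; the folder layout and function name are placeholders, and the example builds a tiny fake dataset in a temp directory just to show the idea.

```python
from pathlib import Path
import tempfile

def label_dataset(root: Path) -> list[tuple[str, str]]:
    """Pair each image path with a label taken from its parent folder name."""
    return sorted((str(p), p.parent.name) for p in root.glob("*/*.jpg"))

# Build a tiny fake dataset layout: root/mask/*.jpg and root/no_mask/*.jpg.
root = Path(tempfile.mkdtemp())
for folder, names in [("mask", ["a.jpg"]), ("no_mask", ["b.jpg"])]:
    d = root / folder
    d.mkdir()
    for n in names:
        (d / n).touch()

print([label for _, label in label_dataset(root)])  # → ['mask', 'no_mask']
```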

Most of this week also went into the design presentation slides and figuring out the exact specifications of each part of our project. My progress is still pretty much on schedule; we did shift the Gantt chart back a little since some parts have not arrived yet and we cannot test without them (I can't test the video feed until the microSD card comes). I will be focusing on that for the next week and making sure we can display the live video feed on the monitor.

 

Iris’s Status Report for 10/3

We received one of our shipments today! The Jetson Nano came in, so I can start getting familiar with the kit and experimenting with different libraries (OpenCV) to test out some features. I am trying to attach the WiFi antennas to the Jetson Nano and will see if we can get WiFi working on it. We are missing some of the items in the order, however, so I will need to wait for the Raspberry Pi Camera Module V2 to come in before I can see how everything works together. I'm also trying to write some rough skeleton code for the mask and facial detection algorithm; I found some good training sets online that I will probably use later on.

I am pretty on track with our proposed schedule; I just need to wait for the last few parts to come in. In the next week, I hope to be able to display a live video feed on a monitor and start making significant progress on the mask and facial detection algorithms.