Brandon’s Status Report for 4/29/2023

At the beginning of the week, we did testing on the system for the final presentation. One test relevant to the web application measured the notification speed to the user after the CV detects a pet entering the forbidden zone. We learned that this speed is limited by the polling rate, i.e. the rate of GET requests from the frontend to the backend that check whether a notification should be displayed to the user. We also updated the block diagram and prepared a demo of the complete solution for the final presentation.

In terms of technical implementation, I worked on the live video feed. My initial iteration displayed an image of the room at one frame per second: the CV would send a byte array to the backend, which would be transformed into an image, and the frontend would poll the backend every second using the useInterval React hook and display the image. Though this was easier implementation-wise and the live video feed would be considered “functioning”, we wanted to display more frames per second. Hence, we learned about the StreamingHttpResponse object in Django. Essentially, this object streams from a constantly updating byte array, so the browser can display each new frame as it arrives. We were able to show more frames per second of the room, but there are some issues with loading the images quickly, which can lead to image flickering. We hope to reduce the flickering so that users can view the live video feed smoothly.
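Roughly, the streaming view works like the sketch below (simplified placeholders, not our exact code), assuming the CV upload endpoint keeps overwriting an in-memory latest_frame buffer with the newest JPEG bytes:

```python
# views.py -- simplified sketch of the multipart streaming approach
import time

from django.http import StreamingHttpResponse

# Assumed: the CV upload view overwrites this module-level buffer with the
# latest JPEG-encoded frame bytes.
latest_frame = b""

def frame_generator():
    """Yield the most recent frame as a multipart/x-mixed-replace chunk."""
    while True:
        if latest_frame:
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + latest_frame + b"\r\n")
        time.sleep(0.05)  # caps the yield rate; the real limit is how fast frames arrive

def live_feed(request):
    # The browser keeps this response open and swaps in each new JPEG as it arrives.
    return StreamingHttpResponse(
        frame_generator(),
        content_type="multipart/x-mixed-replace; boundary=frame",
    )
```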

As of right now, I am behind schedule. Before integration and testing on Tuesday, I will be finishing the live video feed, finishing the authentication process, making the website user-friendly and intuitive, and deploying the frontend.

Next week, I hope to finish the individual tasks for the web application mentioned above. We also look to do integration and testing before the final poster and report.

In addition to the system tests we have done, I have written unit tests for each part of the web application (choosing forbidden zones, displaying a heat map of the pet activity logs, and displaying a notification to the user when a pet enters a forbidden zone, to name a few) so that system integration is as smooth as possible.
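As an example of the style of these tests, the forbidden-zone one is shaped roughly like the sketch below; the endpoint URL, payload shape, and status codes are illustrative assumptions rather than the project's actual code.

```python
# tests.py -- illustrative unit test for a forbidden-zone endpoint
# (URL, payload shape, and response format are assumptions, not the real project code)
from django.test import TestCase

class ForbiddenZoneTests(TestCase):
    def test_forbidden_zone_round_trip(self):
        # POST a sample zone, then confirm a GET returns the same cells.
        zone = {"cells": [[0, 0], [0, 1], [1, 1]]}
        post = self.client.post("/api/forbidden-zones/", zone,
                                content_type="application/json")
        self.assertEqual(post.status_code, 200)

        get = self.client.get("/api/forbidden-zones/")
        self.assertEqual(get.status_code, 200)
        self.assertEqual(get.json().get("cells"), zone["cells"])
```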

Rebecca’s Status Report for 4/29/2023

The very beginning of this week was spent working on the final presentation slides on Sunday. I didn’t do anything for capstone during the middle of the week (other than attending presentations), as I had many assignments due in other classes that took priority. At the end of the week, I met up with Brandon and we put some time into getting the live video feed working. It has a reasonable frame rate now, although it often goes white in between frames, which creates an awful strobing effect. We’re still looking for ways to improve this.

I’m fairly on schedule at this point, other than the RPi. I would have liked to have begun integration testing with the RPi by now, so this is a bit behind schedule, but it does at least seem to function OK on simple examples. The plan is to be 100% ready for integration by tomorrow and Monday, so that we can integrate on Tuesday.

This coming week will therefore start with lots of time working with the RPi. Hopefully Brandon and I can integrate on Tuesday. The goal is to at least re-do the tests we’ve already done on laptops, now on the RPi, in time for the poster on Wednesday. The rest of the week will then be spent debugging the last features (mostly live video) and running longer tests under more realistic conditions.

Team’s Status Report for 4/29/2023

Risks and Mitigation

We have not yet tried to get our whole project functioning on an RPi. So far the RPi seems to be working as intended, but we have only tried it separately from the project and have not yet integrated it. This will be attempted starting this coming week, ASAP. If we run into issues we cannot solve, we will reach out, and in the worst case the ‘final’ project will be run on a laptop. We will also need to redo some tests once the code is running on the RPi, which could cause previously successful tests to fail. We will handle any such cases the way we would handle a failed test on any platform: by altering/refining the code to improve efficiency.

Project Changes

There are no changes at this time.

Schedule Changes

A few of our remaining features are a bit behind schedule, as the last week of classes was quite busy – we’d like to improve the live video (very low frame rate), log-in is in progress, and we haven’t had a chance to integrate with the RPi. It is our goal to mostly wrap up these loose ends by the end of this weekend/very beginning of this week, so we can move more exclusively into testing for the week leading up to the live demo.

Testing 

We tested the notification speed from the pet entering the forbidden zone on camera to the CV detecting that the pet has entered the forbidden zone. This was done via slow-motion video, marking the points in time when a ‘pet’ entered a forbidden square in real life versus when the CV detected this (marked by the camera window closing). We wanted this to be under one second, and our results showed that the detection speed was ~0.625 seconds.

Then, we tested the notification speed from the pet entering the forbidden zone to the web app displaying a notification to the user that a pet has entered the forbidden zone. Once again, this was done via slow-motion video, marking when the pet was detected by the CV and when the notification first appeared on the web app. The goal was for this to occur in under 10 seconds, and the resulting data showed that it occurred in ~1.125 seconds. We found that the time from the pet entering the forbidden zone to the web app displaying the notification is limited mostly by the rate at which the frontend polls the backend to check whether a notification should be created.

Next, we tested the accuracy of the CV tracking by sampling certain frames and marking where a ‘human’ thinks the pet is versus where the CV thinks the pet is. We wanted this to be within 1 ft, and using the difference in the centers of the bounding boxes as our metric, we were usually within 3 to 6 inches (measured by knowing the real-world distance between two spots in the frame, e.g. the distance from the cat’s head to its back).
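To make the metric concrete, the error is the pixel distance between the two box centers, scaled to inches using a reference length visible in the frame; the numbers below are made up for illustration.

```python
# Illustrative accuracy calculation: distance between the human-labeled box center
# and the CV box center, converted to inches via a known reference length in frame.
import math

def center(box):
    x, y, w, h = box  # (x, y, width, height) in pixels
    return (x + w / 2, y + h / 2)

def center_error_inches(human_box, cv_box, ref_pixels, ref_inches):
    (hx, hy), (cx, cy) = center(human_box), center(cv_box)
    error_px = math.hypot(hx - cx, hy - cy)
    return error_px * (ref_inches / ref_pixels)  # pixels-to-inches scale from the reference

# Made-up example: a 120 px head-to-back span that is ~14 in in real life,
# with a ~34 px center offset, works out to roughly 3.9 in of error.
print(center_error_inches((100, 80, 60, 40), (130, 95, 60, 40), 120, 14))
```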

Lastly, we measured the time between when an animal enters the room and when it is picked up by the CV motion detection (also with slow-motion video). This took ~0.75 seconds on average, well under our goal of 5 seconds.


Note: Many of these tests (any pertaining to tracking speed/frame rate) will be repeated when we switch platforms from a laptop to an RPi. Our hope is that the laptop numbers leave enough wiggle room that any slowdowns on the RPi will not push us outside our targets. If needed, we will seek out ways to make our design more efficient.



Rebecca’s Status Report for 4/22/2023

Since the last post, I’ve spent much of my time for this class working with Brandon to integrate the CV and Web App portions. We’ve more or less finished heat maps and zone notifications, using requests as our form of communication. We’ve also begun work on getting a live video feed working, although that isn’t fully there yet. Outside of this, I’ve begun working on the final presentation, and I have spent a bit of time working with extremely simple examples on the Raspberry Pi.

I am slightly behind where I had hoped to be with the RPi at this point, mostly because I would like to prove that it works with the camera and simple CV code before trying to integrate it. I hope to accomplish this by the end of the weekend, which would put us on track to start including the RPi in the beginning-to-middle of this week.

In the coming week, I intend to 1) spend more time on the final presentation; 2) work on getting the system working on the RPi rather than laptop(s); 3) work with Brandon to get live video feed up and running. Any time left at the end of the week will hopefully be dedicated to running fuller/more situationally correct tests on the integrated system.

Team Status Report for 4/22/2023

Risks and Mitigation:

We may have a hard time getting a reasonable frame rate on our live video stream if we cannot find an efficient enough way to send the images. We plan to get it as fast as possible by compressing the data as much as is reasonable, and we feel that a somewhat choppy frame rate (a few frames per second) would still be acceptable for the project; the most important thing is that the user can get an idea of what their pet is up to. We are still in the fairly early stages of getting this feature up and running, so the exact frame rate we’ll end up with is still TBD.
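One likely direction for the compression is downscaling and JPEG-encoding each frame on the CV side before sending it; a rough sketch follows, where the resolution and quality numbers are placeholders rather than settled values.

```python
# Sketch of per-frame compression on the CV side before sending to the web app
# (the max width and JPEG quality below are placeholder values).
import cv2

def compress_frame(frame, max_width=640, jpeg_quality=60):
    """Downscale and JPEG-encode a BGR frame, returning the compressed bytes."""
    h, w = frame.shape[:2]
    if w > max_width:
        scale = max_width / w
        frame = cv2.resize(frame, (max_width, int(h * scale)))
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return buf.tobytes()
```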

We haven’t had a chance to hook the camera up to the RPi yet (ideally this will be tested by the end of the weekend), so there’s still a risk of mechanical failure there. We will reach out ASAP if we are unable to troubleshoot the issue, and if it ends up being a flaw with the camera itself, we will order a new one before the end of the week. (The backup to the backup, if it all goes up in flames, would be for the ‘final’ version of the project to remain on laptops.)

Project Changes:

There aren’t really any changes from the last status report. We will still proceed with a more separated project: we will integrate the CV and Web App with an RPi as the primary hardware, while Max works with the Jetson for the ML side. There will be no changes in cost, as the materials come from the ECE inventory.

Schedule Changes:

We plan to do heavier integration – particularly, including the RPi – with more realistic testing in the upcoming days, and we hope to finish the remaining individual tasks for each system of the project this week. This should put us on track for the final demo.

Brandon’s Status Report for 4/22/2023

I finished the frontend of the heat map for the pet activity logs. I switched the heat map library to jsheatmap, as it is compatible with TypeScript and has documentation that was easy to understand and implement with. I also confirmed a data format between the CV and the web application for the pet activity logs: the CV sends an array containing the number of frames a pet stayed in each cell of the room, ordered from the top left to the bottom right of the room. This array is sent to the web application, which keeps a running tally of how long the pet has spent in each cell. We have also integrated this task between the CV and the web application.
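As a sketch of how the backend side of this format can work, the web application can keep a flat per-cell tally that each CV POST adds to; the endpoint names and grid size below are illustrative assumptions, not the exact project code.

```python
# views.py -- sketch of the activity-log endpoints (names and grid size are assumptions)
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

GRID_CELLS = 8 * 8                    # assumed grid resolution
activity_totals = [0] * GRID_CELLS    # running tally, top left to bottom right

@csrf_exempt
def post_activity_log(request):
    """CV POSTs an array of per-cell frame counts; add it to the running tally."""
    counts = json.loads(request.body)
    for i, frames in enumerate(counts[:GRID_CELLS]):
        activity_totals[i] += frames
    return JsonResponse({"ok": True})

def get_activity_log(request):
    """Frontend GETs the tally to render the heat map."""
    return JsonResponse({"cells": activity_totals})
```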

To implement notifications for a pet entering the forbidden zone, I had to learn about polling: the frontend makes a request to the backend every few seconds to check a notification flag, which the CV sets to true by making a POST request whenever the pet moves into the forbidden zone. Once the notification flag is true, a popup is created on the frontend to notify the user that the pet has entered the forbidden zone. After trying a few methods of polling, I found a custom React hook called useInterval that was able to do this. We have successfully tested sending a notification from the CV to the web application.
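For context, the backend half of this polling loop can be as small as a pair of flag views like the sketch below; the names are illustrative, and a real version would track the flag per user/pet rather than in a module-level variable.

```python
# views.py -- sketch of the notification flag the frontend polls (names are illustrative)
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

notification_flag = False  # set by the CV, cleared once the frontend has been told

@csrf_exempt
def trigger_notification(request):
    """CV POSTs here when the pet enters a forbidden zone."""
    global notification_flag
    notification_flag = True
    return JsonResponse({"ok": True})

def poll_notification(request):
    """Frontend GETs this every few seconds (via useInterval) and pops up if true."""
    global notification_flag
    flag, notification_flag = notification_flag, False  # read and clear in one step
    return JsonResponse({"notify": flag})
```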

The last task I have started but not finished is displaying a live video feed of the room to the user. We based our implementation on a previous one that sends a constant loop of byte arrays, each representing an image taken from the CV, to the web application locally, but we ran into problems with encoding and decoding the arrays, especially during GET and POST requests.
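One common way to keep raw bytes intact through a JSON request body is to base64-encode them; a minimal sketch of that round trip is below, with a made-up endpoint URL.

```python
# Sketch of base64 round-tripping a frame's JPEG bytes through a JSON POST body
# (the endpoint URL is made up for illustration).
import base64

import requests

def send_frame(jpeg_bytes, url="http://localhost:8000/api/frame/"):
    """Encode the bytes as base64 text so they survive JSON serialization."""
    payload = {"frame": base64.b64encode(jpeg_bytes).decode("ascii")}
    requests.post(url, json=payload, timeout=5)

def decode_frame(payload):
    """On the receiving side, recover the original JPEG bytes."""
    return base64.b64decode(payload["frame"])
```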

I am slightly behind schedule, as I should have finished displaying the live video feed. I should be able to finish this task next week, as it is a matter of encoding and decoding the images correctly while the data is passed between GET and POST requests.

In addition to finishing the live video feed of the room, I hope to add authentication using Google OAuth 2.0, and I hope to clean up the user flow of the website and make it more intuitive, appealing, and user-friendly. Lastly, I look to deploy the frontend after all tasks are finished.

The above picture uses fake data, but it is an accurate depiction of the heat map.

The above picture is what is displayed on the frontend when a pet enters the forbidden zone.

This is code currently in progress for displaying live video feed.

Brandon’s Status Report for 4/8/2023

This week, I focused on integration and creating the user flow for the website to present at the interim demo. We integrated the CV and Web App by communicating through sockets, which took some time because we had no prior experience with sockets and had to learn how to implement them and how to send data between the server and client. Individually, I made sure the forbidden zone data was formatted in a way that was easy for the CV to use. I also received a picture from the CV that was used for creating forbidden zones, and I printed a notification for when something enters the forbidden zones. Other than integration, I wanted to show the user flow of the website for the interim demo. Essentially, a user logs in and can create a pet model, which is chosen on the dashboard page, for each of his/her pets. The user first uploads pet images and then chooses the forbidden zones for that pet to complete the pet model. After the interim demo, I successfully deployed the backend of the web application, which will allow us to move communication from sockets to GET and POST requests. I explored deploying through Amazon EC2 and then Amazon Elastic Beanstalk after running into some issues with EC2, but I was eventually able to run an EC2 instance of the backend.
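Conceptually, the socket handoff was along the lines of the sketch below; the port and message framing are illustrative rather than our exact protocol.

```python
# Minimal sketch of a socket handoff between the CV side and the web app
# (port number and message framing are illustrative).
import json
import socket

def send_zone_event(zone_id, host="localhost", port=9000):
    """CV side: tell the web app that a pet entered forbidden zone `zone_id`."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps({"event": "forbidden_zone", "zone": zone_id}).encode())

def receive_one_event(port=9000):
    """Web app side: accept one connection and return the decoded event."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            return json.loads(conn.recv(4096).decode())
```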

As of the updated Gantt chart for the interim demo, my progress is on schedule.

I will be looking into finishing the frontend of the heat map of the pet activity logs and confirming a data format with the CV for those logs. In terms of integration, we hope to run the same tests as we did for the interim demo but using GET and POST requests instead of sockets. I will also start looking into displaying a live video feed of the room on the website and displaying a notification on the website if a pet enters the forbidden zone.

In terms of testing on the web application, I have verified that uploading pet images and entering forbidden zones work by entering sample data and making sure it shows up on the backend. I did this by running “python manage.py shell”, a Django command for inspecting what is in the database, and by making GET requests to see whether I get back the sample pet images and forbidden zone data. We also did integration tests between the Web App and CV, as described above, for the interim demo. In the next few weeks, I plan to get around ten participants to test the web application and see whether the use-case requirement that 95% of people can finish the tasks on the web application without external help is satisfied.
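For reference, a spot check inside “python manage.py shell” looks roughly like the snippet below, where the app, model, and field names are assumptions for illustration.

```python
# Example of a quick check run inside `python manage.py shell`
# (app, model, and field names are assumptions for illustration).
from pets.models import PetImage, ForbiddenZone

print(PetImage.objects.count())               # did the uploaded sample images land?
print(ForbiddenZone.objects.values("cells"))  # do the stored zones match what was entered?
```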

The above picture shows the backend deployed at 54.173.23.127.

Team’s Status Report for 4/8/2023

Risks and Mitigation

Our primary ‘risk’ currently – although it’s pretty much already happened – is that we will not be able to fully integrate the ML into our project due to unavoidable personal setbacks. The mitigation plan for this is to keep the ML separate for now and focus on integrating the CV and Web App. If things go particularly smoothly we may be able to look into integrating the ML in the last week or two of the project, but for now we will operate on the assumption that this may not happen and act accordingly. 

Project Changes (why & cost)

There aren’t really any changes from what was discussed in the last blog post. As we are proceeding with a more separated project for the time being, we will develop the CV+Web App side of the project with an RPi for the primary hardware. Max will work with the Jetson for the ML side. As these parts are all coming from inventory, this will not affect the budget for our project.

Schedule Changes

As discussed at the interim demo, the primary change to our schedule is the order in which we will focus on integration. Rather than starting with CV+ML as planned, we will be focusing very heavily on integrating CV and Web App. This will allow us to figure out the web app/hardware interactions separately from the ML while that is worked on separately.


Max’s Status Report for 4/8/2023

After meeting with my team, we have decided to leave the ML integration into the system as a later issue that may not be fully realized due to my ongoing health issues. I have been set back, but after reorganizing our group’s schedule, I have reset my goal for this week to finishing the current base ML model. I am still working on it, and am therefore on track with the new schedule. I have also been working to mitigate the risks of incorporating it with the Jetson by working on the Jetson Nano and starting setup while training my model.

Rebecca’s Status Report for 4/8/2023

The beginning of this week was a lot of time spent starting to integrate between CV and the Web App. Brandon and I worked to find a communication protocol (currently using sockets, later switching to requests) that would enable our pieces to send info, then got the notification pipeline working pretty much in full, though on a limited scale. We are now able to send an image from the camera to the web app, choose forbidden zones on that image, and relay that info back to the CV tracking. When an overlap into the forbidden zone is detected, a notification is sent back to the web app. Starting next week, we will be working on refining the communication (switching to requests) and getting activity logs roughly operational. After this, we should hopefully be able to flesh out the full functionality! I also aim to spend a bit of time tonight working with the RPi, so that hopefully we can get the project running on this platform around the end of next week or slightly thereafter.
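For context, the forbidden-zone check itself reduces to a rectangle-intersection test along these lines; the rectangle format is an assumption for illustration rather than the exact tracking code.

```python
# Sketch of the overlap check between the tracked pet's bounding box and a
# forbidden zone, both as (x, y, w, h) rectangles in pixels (illustrative format).
def rects_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def pet_in_forbidden_zone(pet_box, forbidden_zones):
    """Return True if the pet's box overlaps any user-selected forbidden rectangle."""
    return any(rects_overlap(pet_box, zone) for zone in forbidden_zones)
```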

With regards to our updated schedule, my progress is in line with what we have planned.

As already slightly mentioned above, the goals for next week will be to accomplish much more integration between the CV and Web App – change to using requests as our more official communication method, and get activity logs roughly working. If we have spare time, we may also begin to look into what it would take to send live video feed. Individually, I also hope to be able to run my tracking code on the RPi (the version without web app integration) by the end of the week (i.e. before Carnival).