Team Status Report for 3/18/2023

  • Risks and mitigation

At this point, our primary risk concerns the ML and our ability to run it. Since this will be the most computationally intensive part of our project, our main concern has been making sure we have enough computing power to run the algorithm within the time requirements we set for ourselves. We are currently experimenting with/considering three different methods of running our ML:

1. Running identification on the Pi. This seems the riskiest, and is something we hope to firmly rule in or out by the end of this weekend or the beginning of next week.
2. Running identification on the web server, i.e. completely off of the hardware. This is the largest unknown, so we will be doing much more research on it in the next few days, but it would have the advantage of significantly reducing the complexity of what runs on our hardware (and would drive the theoretical consumer cost down even further).
3. Running identification on a Jetson Nano. This is the 'safest' option, so we intend to be prepared for it by getting a Jetson from inventory ASAP and learning its setup. However, it adds some cost to the project, as well as a significant amount of theoretical consumer cost, which is why we still intend to look into options 1 and 2.

We hope that pursuing multiple options will mitigate the risk that one or several of them proves infeasible. The primary work required to evaluate each involves setting up the platform and having an operational CNN.

  • Project Changes

We are considering where to run the ML, as detailed above. If we find that we can run it on the Pi, then nothing changes. If we end up running it on the server, we will need to change the general structure of how the Pi communicates with the web app, but otherwise this shouldn't be a large change to the project as a whole. If we find that our only choice is to run it on a Jetson, this will be the largest change to the project – we can get a Nano from inventory without using our budget, but we would probably also have to acquire a wifi adapter of some sort, which would require a bit of budget. The changes we are considering would mostly affect communication (if we host an instance of the ML separate from the hardware) and, obviously, our hardware.
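If we do end up hosting the ML on the server, the Pi-to-server hop could be as simple as POSTing encoded frames. A rough sketch of what the payload handling might look like (the field names and helper functions here are hypothetical placeholders, not part of our current design):

```python
import base64
import json
import time


def build_frame_payload(frame_bytes: bytes, camera_id: str) -> str:
    """Package a captured frame as a JSON payload the Pi could POST
    to a (hypothetical) classification endpoint on the web server."""
    return json.dumps({
        "camera_id": camera_id,
        "timestamp": time.time(),
        # JPEG bytes are base64-encoded so they survive JSON transport.
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
    })


def decode_frame_payload(payload: str) -> bytes:
    """Server side: recover the raw frame bytes from the payload."""
    return base64.b64decode(json.loads(payload)["frame"])
```

The real version would sit behind whatever HTTP client/server pair we settle on; this only illustrates the round trip of frame data.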

  • Schedule Changes

Trying to get the CV/ML code to run on hardware was always scheduled for this week, so this stays as intended. Additional research will be needed at the beginning of the week to narrow down which options are feasible. We also intend to acquire a Nano to experiment with ASAP.

Max’s Status Report for 3/18/2023

This week I have been working on a simplified version of our final ML classifier in order to start performing preliminary speed and efficiency tests for the CNN. Our group is still bouncing around a few options for where to host the CNN: on a Pi, on a Jetson, or externally. Since we are still deciding, this has put some extra pressure on my portion of the project to produce testing results; although I was originally on track, I am a little behind due to our team's decision to explore potentially different implementations of the system. However, I am now much more accustomed to InceptionV4 and TensorFlow, and I have a better understanding of the transfer layers I will be implementing, but I have focused on creating a classifier without extensive transfer layers in order to start platform testing.
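To keep the speed comparisons consistent across platforms, the tests can all run through one small timing harness. A sketch of what that could look like (the `classify` callable is a stand-in for the actual model wrapper, not our real code):

```python
import statistics
import time


def benchmark(classify, frames, warmup=3):
    """Time a classifier callable over a list of frames and report
    per-frame latency stats. A few warm-up runs are done first so
    caches and lazy initialization don't skew the numbers."""
    for frame in frames[:warmup]:
        classify(frame)
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        classify(frame)
        latencies.append(time.perf_counter() - start)
    mean = statistics.mean(latencies)
    return {"mean_s": mean, "max_s": max(latencies), "fps": 1.0 / mean}
```

Running the same harness on the Pi, the Jetson, and the server would give directly comparable mean latency and effective-FPS numbers.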

Brandon’s Status Report for 3/4/2023

During this week, I spent most of my time working on the design review document with my team. For example, we figured out the details for the testing, verification, and validation as this was one of the points that wasn’t fully addressed in the presentation. As well, I fixed the block diagram for the web application side so that it aligns more with what a block diagram should look like. In addition, I added a flow diagram of the user experience on the web application into the design review document. I have looked into how I can allow users to choose forbidden zones on the web application, but I have not implemented this yet.

I am currently behind schedule, but I hope to catch up this week by implementing the frontend and backend for a user choosing forbidden zones.

I hope to fully implement choosing forbidden zones on the frontend and storing this data on the backend. As well, I hope to start looking at how to display pet activity logs on the frontend using data that would be given by the Raspberry Pi.

Max’s Status Report for 3/4/2023

Our group was finishing up our design report this week. However, since our design choices and implementation for my ML portion of the project have not changed much since the start, while I was primarily finishing up the design report with my teammates, I was also able to continue to implement my InceptionV4 architecture.

As for schedule, I am on track with where I wanted to be. This week I continued my work on the InceptionV4 architecture and I am on track for that task of making a simple classifier. I was also able to better flesh out the additional transfer layers that will be required.

Team Status Report for 3/4/2023

As per usual in these weekly reports, our biggest risks at the moment pertain to the hardware. The most imminent risk is that we have yet to really dive into setting up the rPi (other than reading internet tutorials/projects). As none of us have used an rPi before, there is a chance this will not go smoothly. Our plan to mitigate this risk is to begin working with the rPi early this week (i.e. trying to run tutorial code), and if we are unable to accomplish what we need with it by the end of the week, we will reach out to course staff. The other, broader hardware risk is that we will find an rPi's computational abilities insufficient for our project. Again, the plan is to begin trying out our algorithms on the rPi sooner rather than later, hopefully starting the week after next (3/20). If our code runs infeasibly slowly on the rPi, our contingency plan is to use a Jetson Nano.

At the moment, no major changes have been made to the existing design. We are considering acquiring a more secure domain for our web app, which would cost around $15, but otherwise shouldn’t affect the implementation or functionality of the project.

Our schedule has not changed at this time.

In terms of tools that we will need to learn for our project, nothing super new or unexpected has cropped up beyond what we’ve been assuming since the beginning of the project. We will all need to learn how to work with the rPi, including how to set up an Apache server to communicate with our web app. Rebecca will be learning how to work with OpenCV beyond what she’s previously done for classes, Max will be learning about working with Inception-v4, and Brandon will be learning more about Django and React.

Rebecca’s Status Report for 3/4/2023

The week before spring break, pretty much all of the time (and energy) I had allotted for capstone went into the design review report. Other than playing around with the different built-in tracking algorithms a bit more, I did not work directly on the project. In the time since, over break, I have done some research into the concept of motion detection with OpenCV, and found several relevant projects that I intend to use as references. I also found more resources regarding running OpenCV on a Raspberry Pi specifically that I hope will be useful to get that up and running.

Per our schedule, my progress at this point is slightly behind. In an ideal world I would have started working with the rPi over break, but I simply did not have the energy. The only task I am 'behind' on at this point is learning basic rPi setup – 'hello world'-type work and getting tutorial code to run without alteration. To get back on track, I intend to begin working with the rPi early this coming week. If I am unable to get it running reasonably well by Wednesday, I will ask my group mates for help (we can talk about it during mandatory lab time on Wednesday morning). And if we're still struggling by Thursday or Friday, I will reach out to course staff, probably Kaashvi. So long as we can begin trying to get our code running on the rPi by the end of the week, we will be on track.

In the next few days, I need to start trying to set up the Raspberry Pi. In particular, I will be looking to successfully run some sort of tutorial code by Wednesday. I also aim to get a working 'rough draft' of the motion detection + tracking by the end of the week, as per the schedule. I feel I have a good grasp on what tracking and motion detection in OpenCV look like separately, so I believe it should be quite doable to combine these pieces with a few hours of work. I hope to start getting this code (tracking + detection) working on the rPi by the end of the week.
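As a sanity check on the approach, the frame-differencing core of motion detection can be sketched without OpenCV at all. This is a NumPy-only toy version (thresholds and names are my own placeholders); the real implementation would use `cv2.absdiff` plus thresholding and contour finding:

```python
import numpy as np


def detect_motion(prev_frame, curr_frame, threshold=25, min_pixels=50):
    """Frame-differencing motion detection on grayscale frames
    (2-D uint8 arrays). Returns the bounding box (x0, y0, x1, y1)
    of changed pixels, or None if too few pixels changed."""
    # Widen to int16 so the subtraction can't wrap around at 0/255.
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    mask = diff > threshold
    if mask.sum() < min_pixels:
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The bounding box from a step like this is what would seed the tracker, which then follows the region between detections.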

Max’s Status Report for 2/25/2023

Our group is finishing up some final design implementation choices. The current options do not affect me, so I have been free to continue my work without any hardware hiccups due to changing our main hardware. As such, this week has been primarily about implementing the InceptionV4 architecture and setting up a functional dog/cat breed classifier. I am still working on this, but the work is on track.

As for schedule, I am back on track with where I wanted to be. Primary research is completed and implementation is starting. This week I was able to start implementing the InceptionV4 architecture and I am on track for that task.

Brandon’s Status Report for 2/25/2023

I have set up the web application using React and Django and learned how to get and send data between them: Axios can send GET and POST requests between React and Django, and the Django REST framework can take data from POST requests and create objects based on the Django models to store in the database. Though I have some experience with React and Django, I had no experience with Axios or the Django REST framework, and it took a few hours of learning and debugging to understand the full capabilities of these tools. After learning them, I implemented image uploads for pet classification. Specifically, I designed a frontend page that allows users to upload images, and a Django model that stores these images and their type (png, jpeg, etc.). After the user uploads an image, Axios sends a POST request with the image to the Django REST framework, which creates an object based on the Django model above and stores the image in the database.
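Since the model stores each image's type (png, jpeg, etc.), one lightweight way to derive that server-side is to sniff the file's magic bytes before saving. A framework-free sketch of the idea (the function and constant names are my own placeholders, not our actual code):

```python
# First bytes of the formats we accept; anything else is rejected.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
}


def image_type(data: bytes):
    """Return 'png' or 'jpeg' if the bytes look like that format,
    else None. Checking magic bytes is more reliable than trusting
    the filename extension the client sends."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return None
```

In the real app this check would live in the Django serializer's validation step, before the object is created.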

Based on my schedule, I am currently behind on tasks for the web application side. The plan is to push my tasks back a week, as I have other work commitments this week and have been having technical issues with my computer recently. No major tasks that other people are relying on depend on me finishing right now, so pushing tasks back a week is okay.

The task I hope to work on this week is testing our ideas for users choosing forbidden zones for a pet. I want to test both the frontend – users choosing spots that designate forbidden zones on a grid overlaid on a 2D room image – and the backend – storing this data in an array-like structure. Though I am pushing tasks back by a week, I believe this is important to do before spring break: it is a core task of the project that other teammates will eventually rely on, and it is important to see what changes I will have to make based on testing our implementation ideas this week.
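The array-like backend structure could look something like the following toy sketch (class and method names are placeholders; the real storage would be a Django model field):

```python
class ForbiddenZoneGrid:
    """The room image is divided into rows x cols cells, each marked
    allowed or forbidden – a minimal stand-in for the backend store."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.cells = [[False] * cols for _ in range(rows)]

    def mark(self, row, col, forbidden=True):
        """Called when the user toggles a cell on the frontend grid."""
        self.cells[row][col] = forbidden

    def is_forbidden(self, x, y, img_w, img_h):
        """Map a pixel position reported by the tracker into a grid
        cell and look up whether that cell is forbidden."""
        col = min(int(x / img_w * self.cols), self.cols - 1)
        row = min(int(y / img_h * self.rows), self.rows - 1)
        return self.cells[row][col]
```

The same cell-index mapping would let the Raspberry Pi side check a tracked pet's position against the zones the user saved.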


Team Status Report for 2/25/2023

Our biggest risks at the moment still pertain to technology use. Specifically, we'll need to learn a system that none of us have worked with before, and we're still slightly uncertain as to whether an rPi will have sufficient computational power for what we need. However, at this point the next move is simply to commit to trying it out, as this is the hardware we'd prefer for the project (over a more specialized but also more expensive piece of hardware like a Jetson).

The only change to the project that we're considering at this point is to include a speaker with the system, along with the option for the user to request a deterrent sound (e.g. a dog whistle). As a use case, we think that reporting bad behavior would be more useful if there were also a way to discourage said behavior, especially if the animal is doing something potentially dangerous. Otherwise, the user will know that something is happening but be completely unable to stop it, which seems less than ideal. In terms of added costs, this would require us to include a small, inexpensive speaker to interface with the Raspberry Pi – probably something we can get from the inventory.

Regarding our schedule: since there has still been some debate about whether an rPi would be sufficient for our needs, this hardware hasn't been requested from inventory yet, which we had hoped to do by the end of this past week. However, this should not be a major setback, as everything we are doing is in software and can still be developed separately from the rPi. The intention is to get the rPi before spring break, and potentially use that time to catch up on rPi 101 if we don't have enough time to do so before break. On the web application side, Brandon will be pushing his tasks back a week due to other work commitments and technical issues with his computer. This week he hopes to verify that our ideas for implementing user-chosen forbidden zones actually work, which we believe is a core task of the project that should be tested before spring break.

Regarding major achievements, on the Web Application side, Brandon has set up React integrated with Django. He had no experience with Axios and the Django REST framework, which are tools to send data between React and Django, so he took a few hours to learn how to use them. After learning these tools, he implemented allowing the user to upload images for pet classification.

Regarding any adjustments to team work assignment, the potential addition of the speaker will require Brandon to figure out how to communicate a ‘send sound request’ through the web app to the rPi. In terms of receiving that on the rPi side (and figuring out how to incorporate an rPi with a speaker), Rebecca and/or Max will be primarily responsible. 

Brandon’s Status Report for 2/18/2023

I worked on setting up React integrated with Django. I have previously used each framework by itself, but never together, so it took some time and debugging to set this up. I looked at my previous implementation of a user uploading images in Django and thought about how to replicate this with React and Django, but I have not implemented it yet. I also worked on the Design Review Presentation, specifically the solution approach as it applies to the web application side of the project. I chose React and Django because of my previous experience with them and because they are leading frontend and backend frameworks, respectively. One safety concern relates to user privacy: since users will be able to see a live video feed of their room, a malicious user who gained access could see another user's feed, which would be an invasion of privacy. However, Django supplies security protection tools that can address these concerns. I have made minor adjustments to the web application block diagram from last week, and I will be finishing up the other points of the Design Review Presentation with my team within the next day.

Currently, I am behind schedule due to tests and homework in other classes. I will be working on how to use the React and Django frameworks together, and I will implement users being able to upload pet images to the website to catch up on the schedule.

After I finish working on the tasks that I am behind on, I will be working on one of the core features of the website, which is allowing users to choose forbidden zones for a pet and figuring out how to store this data into the database.

Web Application Development (17-437) covers the engineering principles related to my part of the design process, specifically working with the backend using Django. I have basic experience with React, and I will be learning more about developing with React in the upcoming weeks.