Rebecca’s Status Report for 4/29/2023

The very beginning of this week was spent working on the final presentation slides on Sunday. I didn’t do anything for capstone during the middle of the week (other than attending presentations), as I had many assignments due in other classes that took priority. At the end of the week, I met up with Brandon and we put some time into getting the live video feed working – it has a reasonable framerate now, although it often goes white in between frames, which creates an awful strobing effect. We’re still looking for ways to improve this.
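
One common cause of this kind of flashing is delivering each frame as a separate image load instead of as one continuous multipart MJPEG stream, which lets the browser swap frames in place. If we go that route, the framing might look roughly like this sketch (the boundary name and Flask usage are assumptions on my part, not our actual code):

```python
# Sketch: wrap JPEG frames in multipart chunks for an MJPEG stream, so the
# browser replaces each frame in place instead of clearing between loads.
# Assumes `frames` yields JPEG-encoded bytes (e.g. from cv2.imencode).

BOUNDARY = b"frame"

def mjpeg_chunks(frames):
    """Wrap each JPEG frame in a multipart chunk for an MJPEG stream."""
    for jpeg in frames:
        yield (b"--" + BOUNDARY + b"\r\n"
               b"Content-Type: image/jpeg\r\n"
               b"Content-Length: " + str(len(jpeg)).encode() + b"\r\n\r\n"
               + jpeg + b"\r\n")

# In a Flask app, this generator could back the video route, e.g.:
# return Response(mjpeg_chunks(camera_frames()),
#                 mimetype="multipart/x-mixed-replace; boundary=frame")
```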

I’m fairly on schedule at this point, other than the RPi. I would have liked to begin integration testing with the RPi by now, so this is a bit behind schedule, but it does at least seem to function OK on simple examples. The plan is to get 100% ready for integration tomorrow and Monday, so that we can integrate on Tuesday.

This coming week will therefore start with lots of time working with the RPi. Hopefully Brandon and I can integrate on Tuesday. The goal is to at least re-do the tests we’ve already done on laptops, now on the RPi, in time for the poster on Wednesday. Then the rest of the week will be spent debugging the last features (mostly live video) and performing longer tests under more realistic conditions.

Rebecca’s Status Report for 4/22/2023

Since my last post, I’ve spent much of my time for this class working with Brandon to integrate the CV & Web App portions. We’ve more or less finished heat maps and zone notifications, using requests as our form of communication. We’ve also begun work on getting a live video feed working, although that isn’t fully there yet. Outside of this, I’ve begun working on the final presentation, and I have spent a bit of time working through extremely simple examples on the Raspberry Pi.
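
As an illustration of the request-based messaging, a zone notification might be assembled and sent roughly like this (the endpoint path and field names here are my own invention, not our actual API):

```python
import json
import urllib.request

# Hypothetical sketch of the request-based messaging described above.
# The endpoint path and payload fields are assumptions, not the real API.

def make_zone_alert(camera_id, zone_id, timestamp):
    """Build the JSON payload for a forbidden-zone notification."""
    return {"camera": camera_id, "zone": zone_id, "time": timestamp}

def send_zone_alert(base_url, payload):
    """POST the alert to the web app as JSON."""
    req = urllib.request.Request(
        base_url + "/api/zone-alert",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # raises if the web app is unreachable
```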

I am slightly behind where I had hoped to be with the RPi at this point, mostly because I would like to prove that it works with the camera and simple CV code before trying to integrate it into the full system. I hope to accomplish this by the end of this weekend, which would put us on track to start including the RPi in the beginning-to-middle of this week.

In the coming week, I intend to 1) spend more time on the final presentation; 2) work on getting the system running on the RPi rather than our laptops; 3) work with Brandon to get the live video feed up and running. Any time left at the end of the week will hopefully be dedicated to running fuller, more realistic tests on the integrated system.

Rebecca’s Status Report for 4/8/2023

The beginning of this week went into starting to integrate the CV and the Web App. Brandon and I settled on a communication protocol (currently sockets, later switching to requests) that lets our pieces exchange information, then got the notification pipeline working pretty much in full, though on a limited scale. We are now able to send an image from the camera to the web app, choose forbidden zones on that image, and relay that info back to the CV tracking; when an overlap with a forbidden zone is detected, a notification is sent back to the web app. Starting next week, we will work on refining the communication (switching to requests) and getting activity logs roughly operational. After this, we should hopefully be able to flesh out the full functionality! I also aim to spend a bit of time tonight working with the RPi, so that we can hopefully get the project running on that platform around the end of next week or slightly thereafter.
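
For the curious, the overlap check at the heart of that pipeline boils down to a rectangle-intersection test, roughly like this sketch (the (x, y, w, h) box format matches what OpenCV trackers return; the function name is my own):

```python
# Illustrative sketch of the forbidden-zone overlap test described above.
# Boxes and zones are (x, y, w, h) tuples, as returned by OpenCV trackers.

def boxes_overlap(box, zone):
    """Return True if a tracked bounding box intersects a forbidden zone."""
    bx, by, bw, bh = box
    zx, zy, zw, zh = zone
    # Two axis-aligned rectangles overlap iff they overlap on both axes.
    return (bx < zx + zw and zx < bx + bw and
            by < zy + zh and zy < by + bh)
```

When this returns True for a tracked person, the CV side fires off the notification request to the web app.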

With regard to our updated schedule, my progress is in line with what we have planned.

As mentioned above, the goals for next week are to accomplish much more integration between the CV and Web App – switch to requests as our more official communication method, and get activity logs roughly working. If we have spare time, we may also begin to look into what it would take to send a live video feed. Individually, I also hope to be able to run my tracking code (the version without web app integration) on the RPi by the end of the week (i.e. before Carnival).

Rebecca’s Status Report for 4/1/2023

This has not been my most productive week to date: I tend to do a lot of my work on Saturdays, but I was quite busy today with personal business, so most of my work time will be offloaded to Sunday and I don’t have major progress to report in this post. However, I’m content with the status of my portion. There’s one small bug I’m still trying to work out in the CV code, where sometimes when multiple bounding boxes overlap, the smaller bounding box is favored. Other than resolving this, I would say that the tracking and motion detection are working exactly as I had planned. I had hoped to post a video, but my computer kept crashing when I tried to edit it and I didn’t have time before midnight to figure that out; perhaps I will edit this post to add it later. Beyond some tweaking of the core CV code, I have spent this week preparing to start integration for the interim demo. Specifically, I’ve written rough drafts of the code where the CV will connect to the other pieces (i.e. forbidden zones and activity tracking), and done a bit of research into how we will connect the web app to the CV when they’re hosted on two separate computers (likely sockets).
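
One possible fix I’m considering for the overlap bug is to resolve overlaps explicitly in favor of the larger box, sketched here in pure Python (the box format and names are illustrative, not the actual tracking code):

```python
# Hypothetical fix sketch for the overlap bug described above: when two
# detections overlap, keep the larger box instead of the smaller one.
# Boxes are (x, y, w, h) tuples; the real tracking code may differ.

def area(box):
    _, _, w, h = box
    return w * h

def overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def prefer_larger(boxes):
    """Drop any box that overlaps a strictly larger one."""
    kept = []
    # Visit boxes from largest to smallest, so big boxes claim their
    # region first and smaller overlapping boxes get discarded.
    for box in sorted(boxes, key=area, reverse=True):
        if not any(overlap(box, k) for k in kept):
            kept.append(box)
    return kept
```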

Other than that one last stubborn little bug, I would say that the core CV is done, which is nice and on schedule. However, we are moderately behind on being able to really dive into integration, including as it relates to the CV. The plan to remedy this is that 1) integration will be our primary focus from here on out, and 2) we will begin meeting in person, starting tomorrow.

In the next week, it will be full steam ahead on integration. First and foremost, the goal will be to get the CV communicating with the Web App for the interim demo on Wednesday. Depending on how Max is doing, we may try to incorporate the ML as well, though likely still keeping everything strictly on laptops for the time being. In the latter half of the week (once we can hopefully establish exactly how we will be moving forward with the project), I should be able to turn my focus back to the hardware we’ll be using, as this has slightly fallen on the back burner due to the uncertainty around our project.

Rebecca’s Status Report for 3/25/2023

The CV code is pretty much done. All that is left is a bit more tweaking/debugging, and of course integration with the ML (which will be more of a team activity, so that will happen when Max is ready).

My progress on the CV is pretty much on track, other than the little bit of tweaking I’d still like to do. However, we are fairly behind on learning to work with the hardware, so that will be almost entirely the focus for this upcoming week. I plan to work on basic setup for the Jetson, and hopefully we can accomplish enough this week to be back on schedule for fuller integration next week.

In the next week, I intend to focus on setting up the Jetson. I have found a nice step-by-step guide for general setup of the developer kit, which I hope will be doable with minimal struggles. I aspire to start on this early in the week (whenever we get the Jetson), so that we can discuss any major issues in mandatory lab on Wednesday. I also intend to finish the refinements to the CV, hopefully tonight or tomorrow. Then in the later part of the week, we can look at hooking the camera up to the Jetson.

Rebecca’s Status Report for 3/18/2023

Aside from working on the ethics assignment, I’ve spent this week on basic RPi setup and on fleshing out the CV algorithm. Basic RPi setup seems to have been successful, but I have yet to run CV-ish examples on it, as we unfortunately have yet to get a camera for the RPi. I have most of a ‘rough draft’ of the CV algorithm for our project, but am still debugging and filling in a couple of blanks – mostly where the different pieces (i.e. motion detection versus tracking versus overlap) meet up. I hope to have a full working draft by the end of tomorrow (Sunday 3/19).

My progress is slightly behind – I haven’t been able to work with the Pi as fully as I had hoped by this point, due to lacking a bit of necessary hardware, and the draft of the overall CV isn’t quite complete. However, if I can finish the CV draft by tomorrow, this will put me at a good point. With regard to the Pi, it’s likely that we’ll go ahead with the testing we had planned for this week regardless, so barring any significant problems when we try to do this, that should also put me back on track with the hardware. We also intend to explore other routes for running the ML, so it’s likely that I’ll be doing additional work this week researching/experimenting with those.

In the next week, I plan to have a fully working draft of the CV portion that I can run on my laptop (by early in the week, so I can use the latter half of the week to refine it a bit). I also hope that we will be able to begin some integration testing of the ML algorithm on the Pi or other platforms, though this will also depend on where my teammates are at.

Rebecca’s Status Report for 3/4/2023

The week before spring break, pretty much all of the time (and energy) I had allotted for capstone went into the design review report. Other than playing around with the different built-in tracking algorithms a bit more, I did not work directly on the project. In the time since, over break, I have done some research into the concept of motion detection with OpenCV, and found several relevant projects that I intend to use as references. I also found more resources regarding running OpenCV on a Raspberry Pi specifically that I hope will be useful to get that up and running.

Per our schedule, I would say that my progress at this point is slightly behind. In an ideal world I would have started working with the rPi over break, but I simply did not have the energy. The only tasks that I am ‘behind’ on at this point are learning the basic setup of an rPi – ‘hello world’ type stuff and getting tutorial code to run without alteration. To get back on track, I intend to begin working with the rPi early this coming week. If I am unable to get it running reasonably well by Wednesday, I will ask my groupmates for aid (we can talk about it during mandatory lab time on Wednesday morning). And if we’re still struggling by Thursday or Friday, I will reach out to course staff, probably Kaashvi. So long as we can begin trying to get our code to run on the rPi by the end of the week, we will be on track.

In the next few days, I need to start setting up the Raspberry Pi. In particular, I will be looking to successfully run some sort of tutorial code by Wednesday. I also aim to have a working ‘rough draft’ of the motion detection + tracking by the end of the week, as per the schedule. I feel I have a good grasp on what tracking and motion detection look like separately in OpenCV, so I believe it should be quite doable to combine the pieces with a few hours of work. I hope to start getting this code (tracking + detection) working on the rPi by the end of the week.
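
To illustrate the motion-detection half of that plan, a bare-bones frame-differencing pass looks roughly like the following (written in pure Python for clarity; the real version would use OpenCV primitives like cv2.absdiff, and the resulting box would seed the tracker):

```python
# Minimal frame-differencing sketch of the motion-detection step described
# above, in pure Python for illustration only. The real code would use
# OpenCV (e.g. cv2.absdiff plus a background subtractor) and hand the
# resulting box to a tracker such as cv2.TrackerKCF.

def motion_bbox(prev, curr, thresh=25):
    """Return the bounding box (x, y, w, h) of pixels that changed
    between two grayscale frames (lists of rows), or None."""
    xs, ys = [], []
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if abs(p - c) > thresh:  # pixel changed more than the threshold
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no motion detected
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
```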

Rebecca’s Status Report for 2/18/2023

This week has been sort of a whirlwind of actually finalizing our design plans. For me, that mostly looked like plenty of research. Near the beginning of the week, I spent a while searching for ways that we might configure this project to do the vision and ML algorithms on an FPGA (the best verdict I found, by the way, was to simplify the camera connection as much as possible by using a SPI connection). However, this route was deemed infeasible at the faculty meeting on Wednesday, so we went back to the original plan of using a Raspberry Pi. As such, I’ve spent the rest of the week seeking information/tutorials on how to use rPis in ways that are relevant to our project. I’ve also been working on the Design Review presentation.

Our schedule for this week was still primarily research on my end, so I would say that I’m still on schedule at this time. The intention is to acquire the hardware and begin the actual implementation process this coming week. In particular, my deliverables/goals for next week are to start working with the Raspberry Pi – something like being able to run a generic ‘Hello World!’ – and to start exploring the OpenCV library (on my laptop, not the rPi) to get a working video tracker in Python.

Regarding engineering courses that pertain to my side of the design (computer vision and being the primary person getting the rPi working), for the most part there aren’t any. I have some exposure to OpenCV through my Computer Vision course (which is technically in robotics, not engineering), but even that involved much more learning of the actual computer vision algorithms than learning how to use OpenCV, as one would expect. No other engineering courses I’ve taken are even remotely relevant. To compensate for my lack of background with rPis, I’ve been reading a lot of internet tutorials over the last few days.

Rebecca’s Status Report for 2/11/2023

In the last week I wrote our group’s proposal presentation (see the proposal section of the blog) and did some implementation-related research. We also met as a group a few times, spending a few hours figuring out more specifics of our design – in particular, where the boundaries between our roles will lie and how they will interface – and also discussing possible alterations to our proposal. At this point, our next major steps are to nail down our design as specifically as possible and put together our design presentation. Regarding scheduling, this point in our schedule is still dedicated primarily to research, and I’m happy with my progress on that front. I’ve had some success figuring out possible ways to work with components we’re unfamiliar with.

Seeing other groups’ proposal presentations this past week was very illuminating. In light of the fact that several other groups have proposed to do some amount of video processing on an FPGA – something we had originally been interested in but were discouraged from early on – we are reconsidering this as an option and researching its feasibility for our project specifically. In particular, the concern is how we’ll get data onto and off of the FPGA, especially image/video information. Since we still need to be able to communicate over the internet with our web app, we are currently assuming that we will still need something like a Raspberry Pi to pass messages back and forth. With regards to communication between the FPGA and rPi, I’ve found an internet project that could be promising for passing wired signals between the two. It focuses on simple signals, but I believe the system it describes could be expanded. It also talks about doing this in parallel, which will definitely be something for us to take advantage of in order to achieve the highest possible frame rate. This is something I will look into further this week.
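
To make the chunked-transfer idea a bit more concrete, the framing on the rPi side might look something like this sketch (the chunk size and zero-padding are pure assumptions on my part; the actual wire transfer would go through something like the spidev library’s xfer2, which isn’t shown):

```python
# Hypothetical sketch of framing image bytes into fixed-size transfers for
# a wired FPGA link such as SPI. Chunk size and padding are assumptions.
# On a Raspberry Pi, each chunk could then be sent with the spidev
# library (spi.xfer2(list(chunk))), not shown here.

CHUNK = 64  # bytes per transfer, assumed

def frame_chunks(data, chunk_size=CHUNK):
    """Split a byte string into zero-padded fixed-size chunks."""
    chunks = []
    for i in range(0, len(data), chunk_size):
        part = data[i:i + chunk_size]
        # Pad the final partial chunk so every transfer is the same length.
        chunks.append(part.ljust(chunk_size, b"\x00"))
    return chunks
```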