
Zina’s Status Report for 4/20/24

Since my last status report, I made a lot of progress on my portion of the project, but also encountered some unexpected challenges. I tested out the PCBs that we received by hooking up some LEDs to a breadboarded circuit and ran the Arduino code 

Zina’s Status Report for 3/30/24

This week, we received the parts we ordered from DigiKey for our Traffic Light Circuit, and I placed the order for our custom PCB after making a couple of small adjustments to the Arduino pin assignments and silkscreen text placements. The fabricated PCB should be arriving 

Ankita’s Status Report for 3/30/24

Work Done

Due to the amount of time needed to tag positive and negative images for the Haar classifier (last week it took 4+ hours to tag ~60 images, and to train a better classifier I would probably want 200-300), I thought it would be more efficient to look into other, more accurate pretrained classifiers available online. I found a few; the best-performing one so far uses the YOLOv4 object detection model. Below is a screenshot of how it performs on one clip of the Fifth and Craig intersection that Zina took last week:

It is evidently fairly accurate; however, as expected, it is also quite a bit slower. After timing it, detection takes 5-6 seconds per captured frame, and more than 10 seconds when there are many cars (the frame in the screenshot above took 11 seconds).
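
For reference, the detection step with a Darknet-format YOLOv4 model looks roughly like the sketch below. This is a minimal version assuming OpenCV’s DNN module and the standard yolov4.cfg / yolov4.weights / coco.names files; the exact pretrained model and file names may differ from what I’m actually running.

    import time
    import cv2

    # Assumed Darknet-format files for the pretrained YOLOv4 model
    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    with open("coco.names") as f:
        class_names = [line.strip() for line in f]

    frame = cv2.imread("intersection_frame.jpg")  # one captured frame (placeholder)

    start = time.time()
    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    elapsed = time.time() - start

    # Count only the detections that COCO labels as vehicles
    vehicle_labels = {"car", "bus", "truck", "motorbike"}
    num_vehicles = sum(1 for cid in class_ids if class_names[int(cid)] in vehicle_labels)
    print(f"Detected {num_vehicles} vehicles in {elapsed:.1f} s")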

I’ll continue to remove unnecessary computations (the screenshot above initially took 15 seconds), but I’m also looking into other models with faster inference, since this delay is simply too long for our project requirements.

We also ordered the new camera, and it arrived yesterday. I will try to access the stream via RTSP today or tomorrow on my apartment WiFi, since the camera can only connect to the same WiFi network that its paired phone is connected to. I was unable to get it to connect to my phone’s hotspot using only my phone, so I will work with one of my teammates on Monday to set the camera up on their phone, which should hopefully let it connect to my WiFi hotspot.
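
Once the camera is reachable on the same network, pulling frames over RTSP should just be a matter of opening the stream URL with OpenCV. A minimal sketch is below; the URL, credentials, and path are placeholders that depend on the camera’s settings.

    import cv2

    # Placeholder RTSP URL; the actual address, credentials, and stream path
    # depend on the camera model and its IP on the shared network.
    RTSP_URL = "rtsp://user:password@192.168.1.50:554/stream1"

    cap = cv2.VideoCapture(RTSP_URL)
    if not cap.isOpened():
        raise RuntimeError("Could not open RTSP stream; check URL, credentials, and network")

    ok, frame = cap.read()
    if ok:
        cv2.imwrite("rtsp_test_frame.jpg", frame)  # sanity check: save one frame to disk
    cap.release()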

Schedule

We have the interim demo coming up, and I will have a faster object detection model ready by then. Due to uncertainties regarding the camera, however, I’m unsure whether I will be able to have the model running on a live camera feed in time.

We also need to start integration, so Kaitlyn and I plan to work together next week to get the SUMO simulation set up on the RPi. I plan to get the object detection algorithm running on the RPi as well, which will be a bit of a task since so far I’ve only gotten it running on Google Colab.
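
As a starting point for the RPi setup, launching SUMO headlessly and stepping the simulation from Python via TraCI should look roughly like the sketch below (the config file name is a placeholder; the traci module ships with the SUMO installation).

    import traci  # provided with SUMO (SUMO_HOME/tools) or via `pip install traci`

    # Placeholder config name; the headless "sumo" binary is used instead of
    # "sumo-gui" since the RPi run does not need a display.
    traci.start(["sumo", "-c", "fifth_and_craig.sumocfg"])

    for _ in range(100):        # advance the simulation 100 steps
        traci.simulationStep()

    print("vehicles currently in the simulation:", traci.vehicle.getIDCount())
    traci.close()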

Deliverables

By the end of next week, I will:

  • Get the SUMO simulation running on the RPi
  • Reduce the delay of the object detection model
  • Attempt to access the video feed from the IP camera through RTSP

 

Team Status Report for 3/23/24

Potential Risks and Mitigation Strategies

We had some setbacks with the object detection model and wireless camera setup this week. For object detection, the original plan was to use Haar cascade classifiers to identify the number of traffic objects (cars, buses, pedestrians, etc.), but some

Zina’s Status Report for 3/23/24

This was a productive week for me, as I was able to catch up on the things that I was a bit behind on. The biggest accomplishment of the week was completing the PCB layout. There are a couple of silkscreen labels that I want 

Ankita’s Status Report for 3/23/24

Work Done

I finished tagging the positive and negative images needed to train the vehicle classifier from traffic camera footage, as well as from the footage Zina retrieved for me from the Fifth and Craig intersection. Unfortunately, the commands I would use to train the classifier have since been deprecated, so I ended up using Anaconda to set up a virtual environment and installed an older version of OpenCV through it so I could run them. I was ultimately able to train a classifier, but it has pretty terrible accuracy, so I think I need to revisit how I’m collecting the data. Most of the tutorials I found online use far more samples than I’ve amassed (on the order of hundreds rather than the ~60 we currently have), so we either need to go back to the drawing board on data collection or switch to a more accurate model (YOLOv4). Tagging the images also takes a lot of time (for the positive images, the bounding box coordinates of each object in the frame need to be specified), so this may take longer than I initially anticipated.
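
(The deprecated commands in question are presumably the opencv_createsamples and opencv_traincascade tools, which only run with older OpenCV releases; hence the older OpenCV in the conda environment.) Once a cascade is trained, using it to count cars on a frame is only a few lines. A rough sketch, assuming a hypothetical cars.xml produced by the training step:

    import cv2

    # Hypothetical output file from the cascade training step
    cascade = cv2.CascadeClassifier("cars.xml")

    frame = cv2.imread("intersection_frame.jpg")  # placeholder frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns one bounding box per detected object;
    # scaleFactor and minNeighbors would need tuning against our footage.
    cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    print(f"Detected {len(cars)} cars")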

I tried to access the IP camera video feed from the Raspberry Pi using RTSP, but after some difficulties I found out that Reolink’s battery-powered cameras don’t actually support RTSP streaming (source). This means that we won’t be able to access the stream outside of Reolink’s app, so I looked into alternative wireless cameras that do support RTSP streaming and found two options: Amcrest and MubView. If these don’t work, we will probably resort to using prerecorded footage.

Schedule

I need more time to get the car detection model trained. For the pedestrian detection model, I will use a pretrained classifier (even if its accuracy does not necessarily meet our metrics), because all we need to know is whether there are pedestrians at the intersection.
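
One candidate for the pretrained pedestrian classifier is OpenCV’s built-in HOG + linear SVM people detector, which gives a quick presence/absence signal without any training. A minimal sketch (the frame source is a placeholder):

    import cv2

    # OpenCV's built-in pretrained people detector (HOG features + linear SVM)
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("crosswalk_frame.jpg")  # placeholder frame
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

    # We only need a yes/no signal for pedestrians at the intersection
    print("pedestrians present:", len(boxes) > 0)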

We also need to wait for the new camera to come in, which adds some delay to our integration as well.

Deliverables

By the end of next week, I will:

  • Train a (hopefully) better vehicle classifier after amassing more positive/negative samples and tagging them
  • Order the new camera ASAP

 

Zina’s Status Report for 3/16/24

This week I was focused on doing the layout for our custom PCB that connects the Arduino Uno to the 12 traffic-light-simulation LEDs via a Texas Instruments LED driver. This is my first time doing PCB layout myself, so it was a bit challenging at first

Ankita’s Status Report for 3/16/24

Work Done

I started tagging the positive and negative images needed to train the vehicle classifier from traffic camera footage — I’m still waiting on some footage from the Fifth and Craig intersection from Zina to add more images to the training dataset and then

Team Status Report for 3/16/24

Potential Risks and Mitigation Strategies

The biggest thing we are uncertain about right now is whether the videos we are taking at the actual intersection we want to model (Fifth and Craig) will be sufficient to train the model. It could be challenging to mount the GoPro camera high enough to see all of the cars on one side of the intersection. We are working on getting a good set of these videos to use for training the CV object detection model, but it is certainly possible that it will simply be too hard to get the right angle. In that case, we will train the model using only videos found online, which is not ideal but should suffice.

Changes to System Design

There haven’t been any changes to the design this week. Mostly we are just working on implementing the planned aspects right now, and we will further evaluate our design choices as we get closer to the interim demo.

Overall Takeaways and Progress

  • The simulation is almost set up so that we can begin testing our optimization algorithms when the time comes
  • We managed to get the IP camera connected to the RPi, which is a good sign for our overall system integration
  • PCB layout is almost complete and we will have it ordered by this coming Tuesday

Zina’s Status Report for 3/9/24

Given that the Design Report was due this week, we had to lock in a lot of the details that we were uncertain about up until now. The process of writing the report was very helpful and made us think critically about the more challenging