Weekly Status Reports

Ankita’s Status Report for 2/24/24

Work Done

This week, I prepared for and gave the design review presentation for my group. I also made some progress on the car detection code, but I realized that we will probably need to train our own Haar cascade, since the ones I found…

Kaitlyn’s Status Report for 2/24/24

Work Done

I finished working on functions to call the APIs and set everything up for both TomTom and HERE. I also set up the SUMO simulation software on my laptop. This took a really long time because I was originally using my MacBook to install it,…

Zina’s Status Report for 2/17/24

This week we were still in a very preliminary phase of researching/purchasing/planning before we could begin work on our implementation. We settled on which IP camera to order, ordered it, and reserved an RPi. I also did some research on Addressable LED Arduino projects and will be placing an order for Addressable LEDs in the next few days. Lastly, I worked with my teammates to finalize our design plans regarding the overall testing approach. With these ideas in mind, we worked on our Design Review presentation to summarize our current plans for the implementation and testing approaches.

In the next week, Ankita and I will get the RPi hooked up to the IP camera over WiFi to make sure that we can receive/process video footage in real time. We will then test the camera for things like image quality, data transfer latency to the RPi, and battery life. I will also begin working on the Arduino code that can take in incoming information packets (simulated for now, but eventually from the RPi) and translate them into corresponding changes for the Addressable LEDs. Before I can actually begin coding, though, I need to determine a rigorous set of rules regarding what should be legal/illegal for our light control system to do. Once the code is working, I will breadboard the Arduino with some standard LEDs to simulate a four-way intersection for testing.
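
To pin those rules down before writing the actual Arduino code, one option is to prototype the legality check in Python first. The sketch below is only illustrative, with a simplified rule set and placeholder phase names that are not our final design; it just rejects any light state in which two crossing directions are green at the same time.

# Hypothetical prototype of the "legal/illegal light state" check.
# Directions that cross each other and therefore must never be green together.
CONFLICTS = {
    frozenset({"north_south", "east_west"}),
}

def is_legal(green_directions):
    """Return False if any two conflicting directions are green simultaneously."""
    for pair in CONFLICTS:
        if pair <= set(green_directions):
            return False
    return True

print(is_legal({"north_south"}))               # True: only one axis is green
print(is_legal({"north_south", "east_west"}))  # False: crossing axes both green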

Ankita’s Status Report for 2/17/24

Work Done

This week, I contributed to the design review presentation with the rest of my group members (the hardware implementation plan, testing approaches, and system specification/block diagram). I also tried to set up the Raspberry Pi and IP camera (unfortunately, we’re waiting on the…

Team Status Report for 2/17/24

Potential Risks and Mitigation Strategies

At this time, we feel more confident in our solution, and we were able to finalize most of our solution approach, including the specific hardware and software we are using. Earlier in the week we were hesitant on how we can…

Kaitlyn’s Status Report for 2/17/24

Work Done

This week I worked on the Design Review presentation and did additional research on our solution design, specifically on the APIs we will be using and on the optimization algorithm.

I have finalized the APIs we will be using: the TomTom Traffic API and the HERE Traffic API. I chose these APIs because they update frequently (every 30 seconds and every minute, respectively) and they are cost-efficient. With our current usage plans, we will be able to operate under the free plans for both APIs. Both APIs provide similar data; however, the HERE API offers more data on specific lanes and the TomTom API offers data at a faster rate, so we will try to incorporate both APIs and average the data for the best results.
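
As a rough sketch of what the combined fetch might look like, here is some illustrative Python. The endpoint URLs, response fields, and key placeholders below are assumptions for illustration rather than our finished call code, and the "averaging" is just a simple mean of the two reported speeds.

import requests

TOMTOM_KEY = "YOUR_TOMTOM_KEY"  # placeholder
HERE_KEY = "YOUR_HERE_KEY"      # placeholder

def tomtom_speed(lat, lon):
    # Assumed TomTom flow-segment endpoint; returns the current speed near a point.
    url = ("https://api.tomtom.com/traffic/services/4/flowSegmentData/absolute/10/json"
           f"?point={lat},{lon}&key={TOMTOM_KEY}")
    data = requests.get(url, timeout=10).json()
    return data["flowSegmentData"]["currentSpeed"]

def here_speed(lat, lon):
    # Assumed HERE traffic flow endpoint; returns the speed of the first reported segment.
    url = ("https://data.traffic.hereapi.com/v7/flow"
           f"?in=circle:{lat},{lon};r=100&locationReferencing=shape&apiKey={HERE_KEY}")
    data = requests.get(url, timeout=10).json()
    return data["results"][0]["currentFlow"]["speed"]

def combined_speed(lat, lon):
    # Average the two sources, per the plan above.
    return (tomtom_speed(lat, lon) + here_speed(lat, lon)) / 2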

I also finalized the type of machine learning algorithm we will be using for our solution and researched existing papers. I found a paper that approaches the same problem we are trying to solve; it uses Q-learning and an application called SUMO for testing.

I decided that Q-learning would be a good algorithm to use because we can measure future times and use them as the reward function relatively easily. I am also more familiar with Q-learning than with other algorithms, since it was taught in Intro to ML, although I will still have to learn how Deep Q-Learning differs from what was covered in class.
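
For reference, here is a minimal sketch of the tabular Q-learning update we would be building on. The state/action encoding and the constants are placeholders for illustration, not our final design.

import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state, actions):
    # Epsilon-greedy: explore occasionally, otherwise take the best-known action.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])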

SUMO is a great application for our purposes: we originally planned to create a rudimentary simulation ourselves, but now we can use a much more developed one. I looked into the application, and it has a tutorial for importing a road network from a map, so I think we can simulate Pittsburgh, or at least the area near CMU, pretty easily. Another great feature of SUMO is the ability to pull simulation data into Python using TraCI, which is also perfect for our ML model, since we can use an online model.
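
As a rough sketch of what that loop might look like (assuming SUMO and its traci Python package are installed; the config file name and traffic light ID below are placeholders):

import traci

# Launch SUMO headless with a placeholder config for the intersection network.
traci.start(["sumo", "-c", "intersection.sumocfg"])

step = 0
while traci.simulation.getMinExpectedNumber() > 0 and step < 1000:
    traci.simulationStep()  # advance the simulation by one step

    # Example reads that could feed the ML model each step.
    vehicle_ids = traci.vehicle.getIDList()
    phase = traci.trafficlight.getRedYellowGreenState("tl_fifth_craig")  # placeholder ID

    # ... compute a reward from wait times and choose the next phase here ...
    step += 1

traci.close()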

I started setting up our repo and installed a package manager for Python called Poetry, because I have found it difficult to work on Python code without a package manager in the past. I also started the code for making the API calls and will be finishing it up tomorrow.

Schedule

I made changes to the schedule since we no longer need training data and will instead be using SUMO for simulation purposes. I added a task for creating the SUMO simulation and shifted the optimization algorithm back a week, since it seems more complex than I originally anticipated and I now have to create the simulation myself rather than relying on external training data. That time is recovered elsewhere: we no longer have to design our own application for simulating traffic, and we can use SUMO for demonstration purposes as well, a task we had originally allotted two weeks for.

Tasks This Week

  • Finish traffic API methods
  • Configure SUMO to simulate the Fifth and Craig intersection and nearby roads so we can begin creating training data
  • Build optimization algorithm infrastructure
Zina’s Status Report for 2/10/24

The most important task I accomplished this week was giving my team’s proposal presentation to Section D on Monday. Last weekend, Ankita, Kaitlyn, and I put a lot of effort into making sure we covered all of the necessary components for the proposal. This process…

Team Status Report for 2/10/24

Potential Risks and Mitigation Strategies

The main risk we currently foresee is being unable to get the IP cameras set up at an actual intersection to send data to the Raspberry Pi. We have a plan in mind for this (detailed in Ankita’s Status Report),…

Ankita’s Status Report for 2/10/24

Work Done

This week, I helped out with the proposal presentation slides and did some implementation planning and parts research, particularly for the camera setup. I made the solution approach and the testing, verification, and metrics slides (with input from my team members to make sure we were all on the same page). Below is the block diagram I developed for our system.

For the rest of this week, I’ve been looking into how we’re going to connect the IP cameras (which will ideally be at each intersection) to the RPi. CMU’s WiFi is notoriously difficult to connect to with external devices; connecting an RPi has been done before, but connecting the IP cameras will be a particular pain, if it is possible at all. Still, we have a few options. We can potentially connect the RPi to CMU WiFi and use it as a hotspot of sorts that the cameras then connect to (however, this would probably require us to test our system at an intersection other than Fifth and Craig). We can also purchase a mobile hotspot and connect both the RPi and the cameras to that.

I also looked into BLE camera setups and couldn’t find any substantive projects that were similar in scope to ours. These cameras cannot stream video, and furthermore require wired connections to BLE microcontrollers like the Arduino Nano 33 BLE or ESP32. If we can’t get the IP camera setup to work (the idea is to order one IP camera to start with to see how the setup goes), we will probably default to wired cameras (standard Raspberry Pi cameras) for one or two sides of the intersection and simulate the other sides for demo purposes.

Schedule

Progress is mostly on schedule: I did quite a bit of research on the different kinds of cameras we can use and thought about different ways of putting our whole system together. Before deciding on a camera implementation, however, I want to meet with Prof. Sullivan and Mukundh to discuss the feasibility of what we have in mind. In that sense, I am slightly behind schedule, as I was supposed to finish up camera research by Monday.

Tasks this Week

  • Decide on a camera implementation and get some ordered so we can start setting things up.
  • Get my hands on a Raspberry Pi 4 (hopefully there are some in the course inventory) and boot it up, then see if I can host a hotspot on the Pi itself for other devices to connect to.
  • Start writing the object detection algorithm for cars and pedestrians and test it on old traffic camera footage.
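
For the detection task in the last item, a rough sketch of the OpenCV-based loop I have in mind is below. The cascade file and video path are placeholders, and whether we use a pretrained Haar cascade or train our own is still an open question.

import cv2

# Placeholder paths: a Haar cascade trained for cars and some archived traffic footage.
cascade = cv2.CascadeClassifier("cars.xml")
video = cv2.VideoCapture("traffic_footage.mp4")

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns bounding boxes (x, y, w, h) for candidate cars.
    cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in cars:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video.release()
cv2.destroyAllWindows()
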
Kaitlyn’s Status Report for 2/10/24

Work Done

At the beginning of the week I helped finalize parts of the Proposal Presentation slides. This week I set up the GitHub and started researching the traffic API that we will use. I looked into the TomTom traffic API and set up a…