12/7/24
This week, we spent the majority of our time mitigating any risks that could jeopardize our project this close to the demo. This meant testing that all of our demo equipment was working, which included our RC car, traffic lights, and battery pack. Additionally, we set up SSH so that we can control the Jetson while it is attached to the RC car without needing to connect it to a monitor. As such, no other contingency plans needed to be made, and no other changes were made to the system. Furthermore, all system features were lightly tested to ensure that they would work within these environments.
Our radar tests involved setting it up in an open room, lowering the detection range to 3-5 m and the departing distance to 1 m. The unit testing involved walking away from the radar and confirming that it was able to comfortably transition through each step of the state logic, from detected to departing to departed. As of right now, very little testing has been done in the full car scenario, which involves increasing the detection range to 4-10 m and the departed distance to 3 m. However, as a result of this initial testing, we implemented a sliding-window mechanism to help filter out much of the noise we saw being captured while the radar was running.
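A minimal sketch of what that sliding-window filtering can look like (the window size, the mean aggregation, and the example readings are illustrative placeholders, not our actual radar code):

    from collections import deque

    WINDOW_SIZE = 5  # number of recent readings to average (placeholder value)

    class RangeSmoother:
        """Smooth noisy radar range readings with a sliding-window average."""
        def __init__(self, size=WINDOW_SIZE):
            self.window = deque(maxlen=size)

        def update(self, reading_m):
            self.window.append(reading_m)
            return sum(self.window) / len(self.window)

    # Feed each raw reading through the smoother before the departure state logic.
    smoother = RangeSmoother()
    for raw in [4.1, 4.0, 9.7, 4.2, 4.1]:  # 9.7 is a noise spike
        filtered = smoother.update(raw)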
So far, our lane detection testing has mainly involved running the algorithm on video obtained during our driving excursions. Upon initial testing, it quickly became apparent that the system needed many improvements before it reached a viable state. The first improvements were slope filtering so that only mostly-vertical lines are kept, a better understanding of how camera placement within the car affects region-of-interest selection, and a second video focused on capturing more of the lane and fewer outside features (such as the car hood). This second video, however, was darker, so colors did not show up distinctly and edges were less likely to be detected. At this point, the system was more or less redesigned from scratch. The new lane detection system includes several improvements: bilateral filtering instead of Gaussian blur to smooth out noise (as it better preserves edges), and two passes of Canny edge detection with different aperture sizes to ensure that more edges are detected. To address the low light, color enhancement was added as an initial step so that colors are easier to detect, and HSV filtering was included to help distinguish between white and yellow lane markers. Additionally, Euclidean distance calculations were added to distinguish dashed white lines from solid white lines. Finally, all of these features feed into the logic for determining the car's position, using thresholds on the different features to determine the lane.
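To make the pipeline above concrete, here is a rough sketch of the main stages in OpenCV; all kernel sizes, thresholds, and HSV bounds are illustrative placeholders rather than our tuned values, and the color enhancement and dashed/solid classification steps are omitted:

    import cv2
    import numpy as np

    def detect_lane_lines(frame):
        """Sketch of the pipeline: smooth, isolate lane colors, find edges,
        then keep only mostly-vertical Hough lines. Constants are illustrative."""
        # Bilateral filter smooths noise while preserving edges better than Gaussian blur.
        smoothed = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

        # HSV masks for white and yellow lane markers (bounds are placeholder values).
        hsv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2HSV)
        white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
        yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
        mask = cv2.bitwise_or(white, yellow)

        # Two Canny passes with different aperture sizes, combined so more edges survive.
        edges = cv2.bitwise_or(cv2.Canny(mask, 50, 150, apertureSize=3),
                               cv2.Canny(mask, 50, 150, apertureSize=5))

        # Hough transform, then slope filtering to drop near-horizontal lines.
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=40, maxLineGap=20)
        kept = []
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                slope = abs((y2 - y1) / (x2 - x1 + 1e-6))
                if slope > 0.5:  # keep only mostly-vertical candidates
                    kept.append((x1, y1, x2, y2))
        return kept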
Our traffic light detection work revolved around training different YOLOv8 models and measuring their accuracy. Additionally, we gathered footage from the internet to more accurately represent the types of images we may come across. However, regardless of the YOLO model employed, many failed to determine the color of the light in real images, and they also failed on frames taken from our own driving footage. The big problem was that the color was often misdiagnosed because of the hues in the actual image. For example, one image was classified as yellow even though the red light was the one that was on, and it was easy to see where the confusion arose: the lit light genuinely looked more yellow than red, and it was only our intuition that traffic lights go red, yellow, green from top to bottom that told us the classification was wrong. We realized that we could still use the bounding boxes generated by the YOLO model, but divide each box into three separate parts and use light intensities to determine which light was actually on. We theorized that regardless of what the color looks like on video, the light that is actually on will have the highest intensity compared to the other two. In this way, we were able to make the system more robust to camera, lighting, and other quality issues.
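A minimal sketch of that intensity-based check, assuming a vertical traffic light with red on top and a YOLO box given as pixel coordinates (the exact cropping and any color weighting are simplified here):

    import cv2
    import numpy as np

    def classify_light(frame, box):
        """Given a YOLO bounding box (x1, y1, x2, y2) around a vertical traffic light,
        split it into thirds and pick the third with the highest mean brightness.
        Assumes red on top, yellow in the middle, green on the bottom."""
        x1, y1, x2, y2 = box
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
        h = crop.shape[0] // 3
        thirds = [crop[0:h], crop[h:2 * h], crop[2 * h:]]
        intensities = [float(np.mean(t)) for t in thirds]
        return ("red", "yellow", "green")[int(np.argmax(intensities))]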
One thing to note is that due to the constant iteration of our systems, the many issues with building our demo environment, and delays in getting parts such as the battery pack, we have not yet done full unit testing for the purpose of gathering metrics and comparing them against our design requirements. However, throughout this week we completed all the steps necessary to start. Our current plan is to gather full testing metrics for both lane detection and traffic light detection tomorrow. Additionally, we plan to do integration testing of the LD + TLD system, which involves using a speaker to signal when a red-to-green light transition is detected and keeping metrics on true and false positives.
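As a rough illustration of how that integration test could flag transitions and score them, here is a toy sketch; the speaker trigger is only indicated as a comment, and the per-frame labels and ground-truth indices are made up:

    def detect_red_to_green(labels):
        """Yield indices where a red-to-green transition occurs in a sequence of
        per-frame light labels ('red', 'yellow', 'green', or None for no light)."""
        prev = None
        for i, label in enumerate(labels):
            if prev == "red" and label == "green":
                yield i  # this is where the speaker would be triggered
            if label is not None:
                prev = label

    # Toy scoring against hand-labeled ground truth (frame indices of real transitions).
    labels = ["red", "red", None, "green", "green"]
    detections = set(detect_red_to_green(labels))
    ground_truth = {3}
    true_positives = len(detections & ground_truth)
    false_positives = len(detections - ground_truth)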
11/30/24
As we’ve worked through debugging the lane detection algorithm, we found it especially helpful to go back through the OpenCV tutorial documents for many of the image processing functions, their parameters, how exactly they work, and when they should be used. Additionally, we spent time reading about specific color-filtering techniques such as HSV thresholding and about how to enhance image brightness as we experimented with the algorithm.
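For example, one way to enhance brightness before HSV filtering is to equalize the value channel of the image; this is just one possibility we read about, not necessarily the exact method we settled on:

    import cv2

    def enhance_brightness(frame_bgr):
        """Brighten dark frames by applying CLAHE to the V channel in HSV space."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.cvtColor(cv2.merge((h, s, clahe.apply(v))), cv2.COLOR_HSV2BGR)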
As we have been debugging the forward car departure feature, we have found it especially helpful to identify the most useful parts of the radar's API by communicating with a sales associate for the radar. The API documentation is very long, and being able to speak with someone who understands the product was very useful for learning how to use the radar more effectively.
Currently, our project has all of its features semi-working, but we need to focus all efforts on strengthening them for the demo. There aren’t any significant risks aside from the fact that our three features (lane detection, traffic light detection, and forward car departure) are not as robust as we want them to be. For lane detection, Ankit has mostly been working on strengthening the filtering algorithm. For traffic light detection, Eunice is working on more accurate detection of small traffic lights. For forward car departure, Emily is analyzing the noise pattern and seeing if there are more ways to build software around the noisy hardware to gather clearer radar readings. We are slowly chipping away at these challenges in hopes of forming a more robust product for the final demo.
11/16/24
This week we haven’t discovered any new risks to our project. We communicated with team C6 to take a look at their antenna, and since we have the exact same GPS, we put in an order for the same antenna that they are using so we can gather speed measurements for our car. Aside from that, we have been debugging our code for lane detection, traffic light detection, and forward car departure. We have also been working on ordering materials so we can power the Jetson in a car and with a battery pack when we are demoing with RC cars. We are finalizing those details and will make orders as soon as possible for those.
To address the additional prompt for this week’s status update, we have been testing our LD and TLD on images taken from dash cams from the internet as well as data we have gathered ourselves. We drove around Fifth, Forbes, and Craig to gather some preliminary testing data with our own camera to make sure that our detection works with our specific hardware. We have been feeding our LD and TLD algorithms images and videos and outputting annotations onto the images. We can manually check whether these correspond to test passes: for LD, by looking at whether the lane demarcations are properly annotated, and for TLD, by checking whether we effectively detect a change from a red to a green light (we haven’t tested our timing requirements yet, as we are still working on getting the basic functionality working). For FCD, we have been setting up our radar in 1207 and walking away from it to imitate a departing car, monitoring the boolean outputs of our code to see whether a forward car departure has been detected. For FCD as well, we haven’t tested our timing requirements yet because we are making sure that our basic functionality works first.
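A tiny sketch of the state logic behind that boolean output, using made-up thresholds in the spirit of the walking test (the real code reads ranges from the radar API rather than a hard-coded list):

    def update_fcd_state(state, distance_m, baseline_m, departed_delta=1.0):
        """Toy state machine for the walking test: DETECTED -> DEPARTING -> DEPARTED.
        distance_m is the smoothed radar range; departed_delta is the extra distance
        beyond the baseline (about 1 m in our room tests) that counts as departed."""
        if state == "DETECTED" and distance_m > baseline_m:
            return "DEPARTING"
        if state == "DEPARTING" and distance_m > baseline_m + departed_delta:
            return "DEPARTED"
        return state

    # Departure is "detected" (the boolean we log) once the state reaches DEPARTED.
    state = "DETECTED"
    for reading in [3.2, 3.5, 3.9, 4.6]:
        state = update_fcd_state(state, reading, baseline_m=3.2)
    departure_detected = (state == "DEPARTED")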
11/9/24
We did not realize that our GPS did not come with an antenna, so we currently cannot gather any GPS data. We are working on obtaining an antenna so we can gather data! After we can do that, we can incorporate this data into our existing development, as this integration is very straightforward and doesn’t add complexity to our logic. Aside from that, we are speeding through our development phase. The vast majority of our code for lane detection and traffic light detection is complete but untested, and we are in the process of testing our forward car departure feature. We have started doing as many tasks in parallel as possible (which is why we have already started developing and testing our forward car departure feature) due to the number of setbacks we’ve experienced throughout this process. We are really proud of ourselves as a team for continuously finding areas of productivity in one area if we become blocked in another!
We are considering descoping forward car collision detection because of the amount of time we have lost due to setbacks (the Jetson breaking for two weeks, having to reorder the camera, and now a GPS that doesn’t work because it did not come with an antenna).
11/2/24
This past week we spent more time trying to debug why our Jetson Orin was not working, without success, so our mitigation was returning the Jetson Orin and switching to a Jetson Nano, which we now have in our possession and are setting up. We also discovered this past week that our camera does not do real-time data streaming, a feature we need for the success of our project; hence, we returned our camera and ordered a cheaper camera that has real-time data streaming, along with a GPS (together these cost less than our original camera with built-in GPS, so we are still within budget). Moving forward, we will have to synchronize data across three devices instead of two, but synchronizing between two devices will help us synchronize a third device much more easily. Aside from this set of identified and mitigated risks, the only other risk our project has at this point is if one of our devices breaks again. In that case, we will order a new one and proceed as planned. We are all proud of how swiftly we made decisions in the face of realized risks this past week.
For design changes, instead of having the camera have a built-in GPS, we are moving forward with a separate camera and GPS, but aside from that design change, nothing else has changed.
10/26/2024
This week we finally gathered all of our materials in one place and were ready to start doing data extraction and synchronization on our Jetson; however, late in the week our Jetson stopped working so we have been trying to get it up and running during the remainder of the week. We are confident we will be able to successfully debug; however, we will not be able to move forward with development until we have completed this debugging. So far we have tried running the Jetson on many different monitors and reflashing the same SD card. Currently, we are reflashing a different SD card and will move on from there depending on what happens.
No changes have been made to the existing design of the system.
We’ve pushed our development back one week, and we will be developing lane detection and forward car departure detection next week. We are hoping to finish that early enough to begin working on traffic light detection and forward car collision detection before the week ends, and then we will finish the week with testing. Our schedule will proceed as planned afterwards.
10/20/2024
At this moment, there are no significant risks that could jeopardize the success of the project. We have established that our Jetson Orin is working and have found a way to power our camera as well as send data. We must start further implementation to gauge future risks. Our design requirements were rewritten from our previous design presentation, accounting for latency, transferring data, and processing data. We’ve also done more research about the exact software and design trade-offs we’ve made. While these are not exact changes to the design of our system, they further clarify what our goal is in terms of design. Our schedule remains mostly the same, except we have allotted more time for developing our core features.
Globally, our product solution would allow more people to have access to safety features. However, because our product is based on Pittsburgh roads and US traffic lights, it is uncertain whether it would work on other streets without adjustments. Regardless, any vehicle in a developed area could benefit from the features that come with our product, especially since there are so many drivers across the globe. In parts of the world that have roads and traffic lights, our product could make an impact because it is an accessory to any car. In these developed areas, people have access to smartphones and can use our product. However, our solution falls short in areas that do not have the same development and do not have access to vehicles or smartphones.
Eunice wrote part A.
Concerning cultural factors, our product solution helps to uphold the value of safety accessibility across socioeconomic boundaries in our society. Accessibility is an important cultural need in an increasingly diverse society. Our product solution helps fill this need by increasing the accessibility of car safety, implementing high-end, complex safety features at a significantly lower price. Hence, the majority of people can use the same safety features at an affordable price instead of these features only being available to upper-middle and upper-class socioeconomic citizens.
Emily wrote part B.
Our product solution has few considerations when it comes to environmental factors. The device sits firmly inside a car and communicates data wirelessly to a nearby phone using Bluetooth. Because of how the device operates and is used, it does not directly interact with the environment or other natural factors. The only environmental factors are the natural resources used to build the device, from the wires and cables to the Jetson, dashcam, and radar and their relevant electrical components.
Ankit wrote part C.
10/5/2024
Some of the risks of the project involve the time constraints on each of our use-case requirements. As a result, we as a team went back and reevaluated the priority of each use case, deciding to focus on traffic light detection and lane detection first, followed by forward car departure, and finally forward collision detection. In addition, we reevaluated our initial testing plans for the forward collision warning. Instead of doing any integration testing with the RC cars, we will only do rigorous unit testing of the system, along with mathematical simulations, as part of our demo and testing plans. This was done because, as of right now, we have no way to protect our components from damage during testing, and we will not be able to procure new ones should anything happen to them. Otherwise, no changes were made to the solution approach, implementation, or system specification, and no additional costs have come up throughout this week. As of right now, we are on schedule. Our plans for the upcoming week involve procuring the rest of our primary equipment, finalizing our power supply plans for the radar and Jetson, and writing our design report; any remaining time will be spent starting to integrate our sensors with the Jetson and testing the data transmission.
9/28/2024
Risks that could jeopardize the success of the project include the NVIDIA Jetson not working. We are managing this by doing preliminary testing before jumping into our actual project implementation. If our NVIDIA Jetson does not work, we will try another one that is functioning or find other pieces of hardware within the ECE inventory because we don’t have the budget to purchase similar hardware. This is the only risk for now since this is the only one of our devices that has shipped, but as the devices come in we will do preliminary testing to make sure the devices are actually functional before we start implementing.
We realized that we wouldn’t be able to use a LiDAR sensor because we would have to place it outside the car, so we switched to a radar sensor instead. Radar sensors can see through glass, and we decided to use a 24GHz sensor instead of a 77GHz sensor because 24GHz sensors are more resistant to noise, which suits our use case of detecting cars through the glass windshield and in other potential noise such as rain. We were able to find a radar sensor that is just over $200, which is similar in price to the LiDAR sensors we were looking at, so this doesn’t change our budget, and we are still under budget.
We rearranged our development schedule to be more detailed and designed it such that each team member works together on a feature with another team member while also having the responsibility of a separate feature individually. In this way, we all can have ownership of a feature with another team member and by ourselves for a nice balance of individual management and collaboration.
A:
We believe that our product helps to create more accessibility for public safety. All of the car safety features that our product aims to implement are found in luxury cars that most drivers cannot afford, and our product aims to create a more affordable provider for these same safety features, increasing public safety by decreasing car collision risks for a much wider socioeconomic demographic.
- A was written by Emily Szabo.
B:
We believe that our product does not address a need specific to any one social group. This is because our product is mainly focused on providing tools for a safer driving experience at a more affordable price. Thus, all social groups can and will benefit equally from the success of our product. After all, drivers, pedestrians, bikers, and truck drivers all benefit from safer roads.
- B was written by Ankit Lenka.
C:
We believe that our product considers economic factors. The product focuses on delivering safety features found in luxury cars and making them affordable for those unable to purchase a new car. Our product costs less than $600, which includes our features built on top of a functioning dash cam. There is a large price difference between purchasing a car with these features and benefits and purchasing an add-on like our product. We believe that people should not have to purchase an expensive car for safety, so our product is a more cost-effective solution.
- C was written by Eunice Lee.
9/21/2024
This week, our team completed and presented our proposal. One risk that surfaced during our presentation was the logistics of using a LiDAR sensor inside the car. The LiDAR sensor may not work accurately due to the front windshield glass. To manage this risk, we are looking into either placing the sensor outside of the car or finding an alternative sensor to detect both the speed and distance of obstacles. On a similar note, we must decide if we want to use a more expensive dash camera with built-in sensors and functionality or a cheaper, simpler camera that requires us to build a contraption to hold the device. We plan to create a list of pros and cons, comparing these choices to make the most cost-effective solution without removing functionality. This may cause changes to our block diagram. While we don’t have any changes to our cost currently, we will factor cost into our design decisions.