Team’s Status Report for April 16

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

At this point, we have two major risks. The first is that multiple pieces of our system (detection, tracking, hardware, editing) fail to meet their quantitative requirements. If that happens, we will not be able to team up on final system revisions. We could work separately on adjustments, as we did at the start of the semester, but this would not be ideal and could complicate a second round of integration.

The second major risk is adding a second camera to our system. We hope this goes smoothly, but given the difficulty we faced getting a single stream working, adding a second camera could be equally challenging. If integrating the second camera proves too difficult, we will focus on building the best possible single-camera system. Our goal is a strong, functioning proof of concept, and while not ideal, that can be accomplished with one camera.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

No changes were made to the system.

Provide an updated schedule if changes have occurred.

No schedule changes.

Justin’s Status Report for April 16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

One of our largest integration problems has been streaming the camera’s video from Python 3. The demo code/library for the pan-tilt-zoom camera was written for Python 2 and throws an error when run under Python 3, which is a problem because all of the code we have written is in Python 3. I had previously tried fixing this by changing the streaming method to write to a pipe, or by using ffmpeg. After working through the errors in those approaches, I realized that the problem was actually with the Threading library; I had been misled by a generic error message combined with a nonfatal GStreamer error that occurred directly before it (which I had assumed was the cause). I have spent the latter half of the week rewriting the JetsonCamera.py file and the Camera class to avoid that library while keeping the same interface.
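
To illustrate the shape of the fix, here is a minimal sketch of a single-threaded replacement, assuming callers only need a getFrame() method that returns the latest frame; the pipeline string below is the stock nvarguscamerasrc one for a Jetson CSI camera, not our exact file:

    # Sketch: read frames synchronously instead of via a background
    # thread, which is what was failing under Python 3.
    import cv2

    def gstreamer_pipeline(sensor_id=0, width=1280, height=720, fps=30):
        return (
            f"nvarguscamerasrc sensor-id={sensor_id} ! "
            f"video/x-raw(memory:NVMM), width={width}, height={height}, "
            f"framerate={fps}/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
            "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
        )

    class Camera:
        def __init__(self, sensor_id=0):
            self.cap = cv2.VideoCapture(gstreamer_pipeline(sensor_id),
                                        cv2.CAP_GSTREAMER)

        def getFrame(self):
            ok, frame = self.cap.read()
            return frame if ok else None

        def close(self):
            self.cap.release()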

Whenever I was away from my house and did not have access to the Jetson, I continued working on the photo editing feature. I am training a photo editing CNN to automatically apply different image editing algorithms to photos, and I plan to use the Kaggle Animal Image Dataset because it has a wide variety of animals and contains only well-edited photos. I am still formatting the data, but I do not expect that to take long. This part is not a priority, as we identified it as a stretch goal.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule according to the most recent team schedule. I plan to finish integration tomorrow so we can present it at our meeting on Monday.

What deliverables do you hope to complete in the next week?

A full robotic system that we can test with.

Sidhant’s Status Report for April 16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

I made some major changes to the detection algorithm this week. I was able to understand the error in the bounding boxes, which was a major step in the right direction.

While fixing this, I came across some problems with file naming caused by differences between Linux and macOS. Fixing the names should have been trivial but did not work as expected; I sought help with the issue, then decided to write a script that renames the downloaded dataset files so these errors cannot recur.
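
The script itself is short; here is a sketch of the idea, assuming the mismatch came from macOS’s Unicode normalization and from characters that Linux tools treat differently (the dataset path and substitutions are illustrative):

    # Sketch: normalize downloaded dataset filenames so Linux and macOS
    # agree on them. Path and character substitutions are illustrative.
    import os
    import unicodedata

    DATASET_ROOT = "datasets/animals"  # hypothetical location

    def safe_name(name):
        # macOS stores names in NFD; normalize to NFC and drop spaces.
        name = unicodedata.normalize("NFC", name)
        return name.replace(" ", "_").replace(":", "-")

    for dirpath, _, filenames in os.walk(DATASET_ROOT):
        for fname in filenames:
            fixed = safe_name(fname)
            if fixed != fname:
                os.rename(os.path.join(dirpath, fname),
                          os.path.join(dirpath, fixed))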

I then re-partitioned the dataset correctly and edited the JSON representation and the annotations for the dataset.

Following this, I was able to test the edited bounding boxes. After running tests on sets of random files from the train, validation, and test splits, we saw good results, and after some scaling I was satisfied with how the boxes were represented in the YOLOv5 format.
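
The scaling amounts to converting pixel-space boxes into YOLOv5’s normalized center format. A sketch, assuming the source annotations give [x_min, y_min, width, height] in pixels:

    # YOLOv5 labels are "class x_center y_center width height", each
    # normalized to [0, 1] by the image dimensions.
    def to_yolo(box, img_w, img_h, cls):
        x_min, y_min, w, h = box
        x_c = (x_min + w / 2) / img_w
        y_c = (y_min + h / 2) / img_h
        return f"{cls} {x_c:.6f} {y_c:.6f} {w / img_w:.6f} {h / img_h:.6f}"

    # e.g. a 200x100 box at (50, 80) in a 640x480 image, class 3:
    print(to_yolo([50, 80, 200, 100], 640, 480, 3))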

After testing, I moved on to re-training the model. I specified the correct destinations and confirmed that the hyperparameters defined in the hyp.scratch.yaml and train.py files looked correct. Since training is a long procedure, I consulted my team for their approval and advice before running the training script for the neural net.
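
For reference, the invocation is roughly the stock YOLOv5 one below; the dataset yaml name, image size, batch size, and epoch count are our own choices, and the hyp file’s path depends on the YOLOv5 version checked out:

    python train.py --img 640 --batch 16 --epochs 100 \
        --data animals.yaml --weights yolov5s.pt --hyp hyp.scratch.yaml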

The issue I came across was that training on my laptop was projected to take very long (~300 hours), as shown below.

I asked Professor Savvides for his advice and decided to move training to Google Colab since this would be much quicker and wouldn’t occupy my local resources.

All datasets and YOLOv5 are being uploaded to my Google Drive (since it links to Colab directly), after which I will finish training the detection neural net.
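
Linking Drive into the Colab runtime is a single call, after which the uploaded files are visible as ordinary paths (the location below is illustrative):

    # Mount Google Drive inside Colab so the dataset and the YOLOv5
    # checkout can be read like local files.
    from google.colab import drive
    drive.mount('/content/drive')
    # e.g. the checkout would then live under (illustrative path):
    #   /content/drive/MyDrive/yolov5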

In the meantime, I am refining the procedures outlined for testing and setting a timeline to ensure adequate testing by the end of the weekend. While doing so, we came across some problems relaying video from the physical setup through the Jetson Nano. After some research, I am now focusing on implementing one camera until we can get some questions answered about using both in real time.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We fell a little behind due to roadblocks related to the setup, the real-time video feed, and training the neural net correctly.

By the end of this week, I should be back on schedule if I am able to finish training and test the algorithm to some extent. We still need to get the physical setup to communicate with the Jetson exactly as we want; once that is done, integrating it with the tested detection algorithm should be a simple procedure.

What deliverables do you hope to complete in the next week?

By the end of next week, I will have the detection algorithm finished and tested as described above. Further, I expect to be able to scan a room with the setup, as I would when detecting animals, and following this I hope to integrate and test the features of the project.

This would mean we can focus on polishing and refining the robot as needed, as well as fine-tuning the different elements of the project so they work well together.

Fernando’s Status Report for April 9

What did you personally accomplish this week on the project?

This week I didn’t contribute much to the project.  #Carnival22

Is your progress on schedule or behind?

Our progress is on schedule.  In the updated schedule, we are set to focus on integration until the following week, after which we will be testing our code.

What deliverables do you hope to accomplish this week?

This week I’d like to deliver a mostly vectorized KLT integrated with the Arducam’s API. This would allow the camera to move with its target while cooperating with the second camera to snap an adequate photo of the target. After testing the code on some of our videos, I will decide whether our from-scratch KLT will suffice or whether we should migrate to the KLT provided by OpenCV, which would most likely be faster.
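
For comparison, OpenCV’s pyramidal Lucas-Kanade tracker takes only a few calls; a minimal sketch (the video path and parameter values are illustrative starting points, not our tuned settings):

    # Track corner features frame-to-frame with OpenCV's KLT
    # (cv2.calcOpticalFlowPyrLK), keeping only points it still finds.
    import cv2

    cap = cv2.VideoCapture("test_video.mp4")  # illustrative input
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
        pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
        prev_gray = gray
        # ...update the bounding box from the surviving points...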

Team’s Status Report for April 10

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Integration, unfortunately, took longer than expected, with errors in camera compatibility and control; for example, the provided library for the camera had bugs that took time to debug. Now that we better understand these challenges, we should be able to finish this task in the next week. Our largest current risk is getting bogged down in tasks that are not essential to meeting MVP and having a functioning robot. We have done sufficient individual work in the areas of detection, tracking, and editing; now we will work together to get an operational system. We also decided that our initial plan of multiple rounds of testing and development was too ambitious for the time we have, so we have reorganized our schedule to focus on meeting MVP.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

No changes were made to the system.

Provide an updated schedule if changes have occurred.

Week of April 10: Finish Integration

Week of April 17: Finish Integration / Start Testing

Week of April 24: Finish Testing / Start Adjustments

Week of May 1: Finish Adjustments and Work on Final Report

Justin’s Status Report for April 10

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

I made a few adjustments to our physical setup to make it more stable. I moved the platforms closer together so that the MIPI cables can support a full range of rotation. I also replaced the tape holding the Jetson with screws and added a top screw to stabilize the platforms.

I also began setting up testing, which proved more difficult than expected. Our detection and tracking code is being implemented separately and is still in progress. To speed up integration, I began setting up the camera controls, which included installing the camera driver on the Jetson and working a bit with the API. I am still working on enabling multiple cameras.
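
On the Jetson, each CSI camera is addressed by a sensor-id in the capture pipeline, so enabling the second camera should mostly be a matter of opening a second stream. A sketch, assuming the generic nvarguscamerasrc path works once the driver is installed (pipeline details are the stock Jetson ones, not our exact configuration):

    # Open both CSI cameras by sensor-id.
    import cv2

    PIPELINE = ("nvarguscamerasrc sensor-id={} ! nvvidconv ! "
                "video/x-raw, format=BGRx ! videoconvert ! "
                "video/x-raw, format=BGR ! appsink drop=1")
    cam0 = cv2.VideoCapture(PIPELINE.format(0), cv2.CAP_GSTREAMER)
    cam1 = cv2.VideoCapture(PIPELINE.format(1), cv2.CAP_GSTREAMER)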

I also began planning which images and exact processes to use for testing, but I still have a lot of work to do on this.

(Photo: our robot setup)

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am starting to fall a bit behind. I did not account for the detail and work necessary to integrate our work into one system, and the planning and setup for testing are also non-trivial.

What deliverables do you hope to complete in the next week?

Initial testing results for the robot’s search/detection and the editing algorithm. I did not finish these this week, but I hope to complete them next week.

Team’s Status Report for March 26

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The risk that we presented last week, failing testing in multiple areas, remains our largest risk. Additionally, we found that the integration and setup needed for testing are larger tasks than expected. It is vital that we integrate detection and tracking into the robot this week, and we also need to finalize the plans and materials for testing this week. That way we will test in time to make adjustments before the project ends. We have the slack to adjust if this goal is not accomplished, but it is a very doable goal.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

No changes were made to the system. However, we realized that we will have to print color images for testing, and ideally some of these images should be larger than standard paper size to replicate certain animals. We can cover some of these costs with our printing quota; for the larger printouts, we need to explore options on campus but may need to use personal funds for off-campus printing services. We will have a definitive answer when we finalize our testing plans.

Provide an updated schedule if changes have occurred.

No schedule updates.

Justin’s Status Report for March 26

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

This week I focused on integrating our code onto the Jetson Nano. This process was admittedly far more difficult than we had anticipated. The PTZ camera setup has an official GitHub repository with code to run on the Jetson, but the camera streaming only works with Python 2. When running with Python 3, I received an error from the GStreamer framework’s interaction with OpenCV. This problem is well documented, but there is apparently no straightforward solution: https://github.com/opencv/opencv/issues/10324. To remedy it, I wrote a replacement for the JetsonCamera.py file that changed the streaming method and Camera class to use ffmpeg instead of GStreamer. This change was successful in getting the cameras operating with Python 3.
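
The replacement reads raw frames from an ffmpeg subprocess and reshapes them with numpy; a rough sketch of the approach (device path, resolution, and input format are illustrative, and the real file wraps this in the existing Camera class):

    # Read raw BGR frames from an ffmpeg pipe instead of OpenCV's
    # GStreamer backend.
    import subprocess
    import numpy as np

    W, H = 1280, 720
    proc = subprocess.Popen(
        ["ffmpeg", "-f", "v4l2", "-video_size", f"{W}x{H}",
         "-i", "/dev/video0", "-pix_fmt", "bgr24",
         "-f", "rawvideo", "pipe:1"],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    def get_frame():
        raw = proc.stdout.read(W * H * 3)  # one full BGR frame
        if len(raw) < W * H * 3:
            return None
        return np.frombuffer(raw, np.uint8).reshape(H, W, 3)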

I also added a final image editing function, vibrance, to our library. This algorithm is commonly used in photo editing but was difficult to implement because it is not well defined. The general idea of vibrance is to increase the intensity of dull pixels while keeping colorful pixels the same (to avoid them getting washed out). I found pseudocode from a decompiled photo-editing library’s implementation, so I coded and adapted it for our project.
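
A sketch of the general idea in numpy/OpenCV; the strength constant is a tuning knob, and this is our adaptation rather than the decompiled pseudocode verbatim:

    # Vibrance: boost saturation most where saturation is low, leaving
    # already-colorful pixels nearly untouched.
    import cv2
    import numpy as np

    def vibrance(img_bgr, strength=0.5):
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        s = hsv[:, :, 1]
        boost = strength * (255.0 - s) / 255.0  # largest for dull pixels
        hsv[:, :, 1] = np.clip(s * (1.0 + boost), 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)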

(Photo: before and after applying our vibrance effect)

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

The integration phase has taken far longer than expected. To adjust, I am focusing fully on integration instead of photo editing (a less essential part of the project). Our group has also adjusted our schedule to have a single round of testing with a Minimum Viable Product instead of multiple rounds of development.

What deliverables do you hope to complete in the next week?

Operational search and detection algorithms running on the robot.

Fernando’s Status Report for March 26

What did you personally accomplish this week on the project?

I have been falling behind on the tracking feature’s development and have just begun testing. So far, the KLT has been tested on three videos: two of moving cars and one helicopter landing-platform demo. The tracker is supposed to maintain the bounding box on the main cars in the first two videos and on the platform’s identification number in the third. The KLT works successfully for the landing video, but refocuses the bounding box on unwanted targets in the car videos.

Is your progress on schedule or behind?

I am currently behind schedule. This next week I plan to test the KLT on videos of actual animals in different environments and under different conditions, such as lighting, contrast, and varying levels of occlusion (which the KLT should be resistant to, provided the occlusions are not too disruptive).

I have also yet to adapt the KLT to whatever format the Arducam records video in, instead of the npz format it is being tested on now.

What deliverables do you hope to complete in the next week?

By next week, I should have tested the KLT on videos of actual animals and have it working with the video/picture format used by the Arducam.

Team’s Status Report for March 19

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

With setup complete, the largest initial risk to the project has been avoided. Now the largest risk is performing poorly in multiple areas during initial testing. Our schedule is designed so that we perform initial testing as soon as possible and focus our remaining time on the design requirements the initial setup does not meet. With testing occurring over the next two weeks, we will get important information on how to proceed with the rest of the project. The biggest concern is that all three areas of our project (detection, tracking, and editing) fall short of their quantitative requirements. If that happens, we will each need to continue working on our own area, which will limit our ability to team up on issues. We should still be able to manage this situation, as there is one person per area. However, as a last-resort plan, we will prioritize detection and stationary photography first, tracking second, and photo editing last.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

Our original plan was for the robot’s camera platforms to be square with rods in the corners. However, while building the tower, we realized that the camera, when vertical, would bump into the rods. To fix this, we replaced the square platforms with long rectangles. Our initial material purchases were sufficient for this adjustment, so no additional costs were incurred.

Provide an updated schedule if changes have occurred.

No schedule updates.