Justin’s Status Report for April 16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

One of our largest integration problems has been streaming the camera’s video using Python 3. The demo code/library for the Pan Tilt Zoom camera was written for Python 2 and throws an error when run in Python 3, which is a problem because all the code we have written is in Python 3. I had previously tried fixing the code by changing the streaming method to write to a pipe or by using ffmpeg. After working through the errors in these methods, I realized that the problem was actually with the threading library. I had been misled by the generic error message, combined with a nonfatal GStreamer error that occurred directly beforehand and that I had mistaken for the cause. I have spent the latter half of the week rewriting the JetsonCamera.py file and the Camera class to avoid this library while still exposing the same interface.
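As a sketch of the direction for the rewrite (the specific pipeline elements and caps below are my assumptions about the standard Jetson CSI capture chain, not the final JetsonCamera.py code), the capture can be driven by a GStreamer pipeline string handed to OpenCV, with frames read in the main loop instead of a background thread:

```python
def gst_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for a Jetson CSI camera.

    Uses the usual nvarguscamerasrc -> nvvidconv -> appsink chain so that
    OpenCV can pull BGR frames directly, with no Thread objects involved.
    """
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink drop=true"
    )

# Hypothetical usage on the Jetson (requires OpenCV built with GStreamer):
# cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()  # called from the main loop, no threading library
```

The sensor-id parameter is what would let the same helper drive both cameras later.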

Whenever I was away from my house and did not have access to the Jetson, I continued working on the photo editing feature. I am working on training a photo editing CNN to automatically apply different image editing algorithms to photos. I plan to use the Kaggle Animal Image Dataset because it has a wide variety of animals and contains only well-edited photos. I am still formatting the data, but I do not anticipate this task taking too long. This part is not a priority, as we identified it as a stretch goal.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule with the most recent team schedule. I plan to finish integration tomorrow so that we can present at our meeting on Monday.

What deliverables do you hope to complete in the next week?

A full robotic system that we can test with.

Sidhant’s Status Report for April 16

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

I made some major changes to the detection algorithm this week. I was able to track down the error with the bounding boxes, which was a major step in the right direction.

While fixing this, I came across some problems related to file naming (because of differences between Linux and macOS). Fixing the file names should have been trivial but did not work as expected; after seeking help with the issue, I decided to write a script that renames the downloaded dataset files to avoid any errors in the future.
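A sketch of what such a renaming script can look like (the specific rules below — NFC Unicode normalization, since macOS stores names in NFD form, plus replacing spaces and lowercasing extensions — are my assumptions about the cross-platform differences involved, not the exact script):

```python
import os
import unicodedata

def safe_name(name):
    """Normalize a filename so macOS and Linux agree on it:
    NFC Unicode form, spaces -> underscores, lowercase extension."""
    stem, ext = os.path.splitext(unicodedata.normalize("NFC", name))
    return stem.replace(" ", "_") + ext.lower()

def rename_tree(root):
    """Apply safe_name to every file under root, in place."""
    for dirpath, _, files in os.walk(root):
        for f in files:
            fixed = safe_name(f)
            if fixed != f:
                os.rename(os.path.join(dirpath, f),
                          os.path.join(dirpath, fixed))
```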

I re-partitioned the dataset correctly and edited the representation of JSON files and the annotations for the dataset.

Following this, I was able to test the edited bounding boxes. After running tests on sets of random files from the train, validation, and test splits, we saw good results, and after some scaling I was satisfied with the way they were represented (according to the YOLOv5 format).
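For reference, converting a pixel-space box to the YOLOv5 label format comes down to normalizing the box center and size by the image dimensions. A minimal helper illustrating the conversion (a sketch of the transform, not our actual annotation code) might look like:

```python
def to_yolo(box, img_w, img_h):
    """Convert an (x_min, y_min, x_max, y_max) pixel box to YOLOv5's
    (x_center, y_center, width, height) format, all relative to the
    image size so every value lies in [0, 1]."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2 / img_w,   # x_center
            (y0 + y1) / 2 / img_h,   # y_center
            (x1 - x0) / img_w,       # width
            (y1 - y0) / img_h)       # height
```

In a YOLOv5 .txt label file, each line is the class index followed by these four numbers.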

After testing, I attempted re-training the model. I specified the correct destinations and ensured that all hyperparameters defined in the hyp.scratch.yaml and train.py files seemed correct. Since training is a long procedure, I consulted my team for their approval and advice before running the training script for the neural net.

The issue I came across was that training on my laptop would take very long (~300 hours), as shown below.

I asked Professor Savvides for his advice and decided to move training to Google Colab since this would be much quicker and wouldn’t occupy my local resources.

All datasets and YOLOv5 are being uploaded to my Google Drive (since it links to Colab directly) after which I will finish training the detection neural net.

In the meantime, I am refining the procedures outlined for testing and setting a timeline to ensure adequate testing by the end of the weekend. While doing so, we came across some problems with relaying video of the physical setup through the Jetson Nano. After some research, I am now focusing on implementing one camera until we can get some questions answered about using both in real time.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We fell behind a little bit due to roadblocks related to the setup, real-time video feed, and training the neural net correctly.

By the end of this week, I should be back on schedule if I am able to finish training and test the algorithm to some extent. We still need to get the physical setup to communicate with the Jetson exactly as we desire; once this is done, it should be simple to integrate it with the tested detection algorithm.

What deliverables do you hope to complete in the next week?

By the end of next week, I will have the detection algorithm finished and tested as described above. Further, I expect to be able to scan a room with the setup, as I would when detecting animals, and then begin integrating and testing the features of the project.

This would mean we can focus on polishing and refining the robot as needed, as well as fine-tuning the different elements of the project so they work well together.

 

Fernando’s Status Report for April 9

What did you personally accomplish this week on the project?

This week I didn’t contribute much to the project.  #Carnival22

Is your progress on schedule or behind?

Our progress is on schedule.  In the updated schedule, we are set to focus on integration until the following week, after which we will be testing our code.

What deliverables do you hope to accomplish this week?

This week I’d like to deliver a mostly vectorized KLT tracker integrated into the Arducam’s API. This would allow the camera to move with its target while cooperating with the second camera to snap an adequate photo of the target. After testing the code on some of our videos, I will decide whether our from-scratch KLT will suffice or whether we should migrate to the KLT provided by OpenCV, which would most likely be faster.
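As a rough illustration of the from-scratch approach (a simplified sketch, not our actual tracker: real KLT runs this per feature window, with image pyramids and iteration), a single vectorized Lucas–Kanade step reduces to solving one 2×2 least-squares system for the flow over a window:

```python
import numpy as np

def lk_flow(I, J):
    """One vectorized Lucas-Kanade step: estimate the (vx, vy) translation
    between frames I and J by least squares over the whole window.

    Solves G v = -b, where G is the structure tensor summed over the
    window and b accumulates gradient * temporal-difference terms.
    """
    Iy, Ix = np.gradient(I)          # spatial gradients (axis 0 is y)
    It = J - I                       # temporal difference
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, -b)    # (vx, vy)
```

The OpenCV alternative would be cv2.calcOpticalFlowPyrLK, which adds the pyramidal and iterative refinements this sketch omits.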

Team’s Status Report for April 10

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Integration, unfortunately, took longer than expected, with errors in camera compatibility and control. For example, the provided library for using the camera had errors that took time to debug. Now that we have a better understanding of these challenges, we should be able to finish this task in the next week. Our largest current risk is getting caught up in tasks that are not essential to meeting MVP and having a functioning robot. For this reason, we reworked our schedule. We have done sufficient individual work in the areas of detection, tracking, and editing; now we will work together to get an operational system. We have also decided that our initial plan of multiple rounds of testing and development was too ambitious for the task we have, and have reorganized our schedule to focus on meeting MVP.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc.)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes were made to the system.

Provide an updated schedule if changes have occurred.

Week of April 10: Finish Integration

Week of April 17: Finish Integration/ Start Testing

Week of April 24: Finish Testing / Start Adjustments

Week of May 1: Finish Adjustments and Work on Final Report

Justin’s Status Report for April 10

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

 

 

I made a few adjustments to our physical setup to make it more stable. I moved the platforms closer together so that the MIPI cables could support a full range of rotation. I also replaced the tape holding the Jetson with screws and added a top screw to stabilize the platforms.

I also began setting up testing, which proved more difficult than expected. Our detection and tracking code is being implemented separately and is still in progress. To speed up the integration, I began setting up the camera controls. This included installing the camera driver on the Jetson and working a bit with the API. I am still working on enabling multiple cameras.

I also began planning which images and exact processes to use for testing, but I still have a lot of work to do on this.

Our Robot Setup

 

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am starting to fall behind a bit. I did not account for the detail and work necessary to integrate our work into one system. The planning and setup for testing is also non-trivial.

What deliverables do you hope to complete in the next week?

Initial testing results for the robot’s search/detection and the editing algorithm. I did not finish these this week, but I hope to complete them next week.