Team’s Status Report for March 26

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The risk we presented last week, failing testing in multiple areas, remains our largest risk. Additionally, we found that the integration and setup needed for testing are larger than expected. It is vital that we integrate detection and tracking into the robot this week, and that we finalize the plans and materials for testing as well. That way we will test in time to make adjustments before the project ends. We have the slack to adjust if this goal is not accomplished, but it is a very doable goal.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

No changes were made to the system. However, we realized that we will have to print color images for testing, and ideally some of these printouts will be larger than standard paper sizes to replicate the scale of certain animals. We can cover some of these costs with our printing quota. For the larger printouts, we need to explore options on campus, but may need to use personal funds for an off-campus printing service. We will have a definitive answer once we finalize the testing plans.

Provide an updated schedule if changes have occurred.

No schedule updates.

Justin’s Status Report for March 26

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

This week I focused on the integration of our code onto the Jetson Nano. This process was admittedly far more difficult than we had anticipated. The PTZ camera setup has an official GitHub repository with code to run on the Jetson, but the camera streaming only works with Python 2. When running with Python 3, I received an error from the GStreamer framework's interaction with OpenCV. This problem is well documented, but there is apparently no straightforward solution: https://github.com/opencv/opencv/issues/10324. To remedy it, I wrote a replacement for the JetsonCamera.py file that changed the streaming method and Camera class to use ffmpeg instead of GStreamer. This change was successful in getting the cameras operating with Python 3.
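For illustration, here is a minimal sketch of the kind of workaround described above: reading raw frames from a V4L2 device through an ffmpeg subprocess instead of a GStreamer pipeline. The device path, resolution, and function name are assumptions for the example, not our actual JetsonCamera.py code.

import subprocess
import numpy as np

WIDTH, HEIGHT = 1280, 720            # assumed capture resolution
DEVICE = "/dev/video0"               # assumed V4L2 path for the PTZ camera

# ffmpeg reads from the camera and writes raw BGR frames to stdout,
# bypassing OpenCV's GStreamer backend entirely.
ffmpeg_cmd = [
    "ffmpeg",
    "-f", "v4l2",
    "-video_size", f"{WIDTH}x{HEIGHT}",
    "-i", DEVICE,
    "-f", "rawvideo",
    "-pix_fmt", "bgr24",
    "-",
]
proc = subprocess.Popen(ffmpeg_cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.DEVNULL)

FRAME_BYTES = WIDTH * HEIGHT * 3

def read_frame():
    """Return the next frame as an HxWx3 uint8 array, or None at end of stream."""
    buf = proc.stdout.read(FRAME_BYTES)
    if len(buf) < FRAME_BYTES:
        return None
    return np.frombuffer(buf, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))

The frames returned this way are plain NumPy arrays, so the rest of the OpenCV-based pipeline can consume them unchanged.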

I also added a final image editing function, vibrance, to our library. This algorithm is commonly used in photo editing but was difficult to implement because it is not well defined. The general idea of vibrance is to increase the saturation of dull pixels while keeping already-colorful pixels the same (so they do not get washed out). I found pseudocode from a decompiled photo editing library's implementation, so I coded and adapted it for our project.

Before and after applying our vibrance effect
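As a rough sketch of the general idea described above (boosting saturation more for dull pixels than for already-vivid ones), the snippet below uses an HSV conversion in OpenCV. It illustrates the concept only; the decompiled formula we actually adapted differs, and the strength parameter is an assumption.

import cv2
import numpy as np

def vibrance(image_bgr, strength=0.5):
    """Boost saturation of dull pixels more than already-vivid ones.

    Illustrative only: the library implementation we adapted uses a
    different (decompiled) formula.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    # Scale the boost by distance from full saturation, and by the pixel's own
    # saturation so near-gray pixels are not flooded with color noise.
    boost = strength * (255.0 - s)
    s = np.clip(s + boost * (s / 255.0), 0, 255)
    hsv = cv2.merge([h, s, v]).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)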

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

The integration phase has taken far longer than expected. To adjust, I am focusing fully on integration instead of photo editing (a less necessary part of the project). Our group has also adjusted our schedule to have a single round of testing on a minimum viable product instead of multiple rounds of development.

What deliverables do you hope to complete in the next week?

Operational search and detection algorithms running on the robot.

Fernando’s Status Report for March 26

What did you personally accomplish this week on the project?

I have been falling behind on the tracking feature's development and have just begun testing. Currently, the KLT tracker has been tested on three videos: two of moving cars and one helicopter landing-platform demo. The tracker's goal is to maintain the bounding box on the main cars in the first two videos and on the platform's identification number in the third. The KLT works successfully on the landing video, but refocuses the bounding box on unwanted targets in the car videos.
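For context, a minimal sketch of a KLT-style tracking loop using OpenCV's Shi-Tomasi corner detection and pyramidal Lucas-Kanade optical flow is shown below. The function name, parameters, and box-update rule are illustrative assumptions, not the exact configuration being tested.

import cv2
import numpy as np

def track_bbox(frames, bbox):
    """Follow an initial (x, y, w, h) box through grayscale frames by
    tracking Shi-Tomasi corners with pyramidal Lucas-Kanade (KLT)."""
    x, y, w, h = bbox
    prev = frames[0]
    mask = np.zeros_like(prev)
    mask[y:y + h, x:x + w] = 255                        # only seed points inside the box
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    boxes = [bbox]
    if pts is None:
        return boxes                                    # no trackable features found
    for frame in frames[1:]:
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        good = new_pts[status.flatten() == 1]
        if len(good) == 0:
            break                                       # track lost
        cx, cy = good.reshape(-1, 2).mean(axis=0)       # re-center box on tracked points
        boxes.append((int(cx - w / 2), int(cy - h / 2), w, h))
        prev, pts = frame, good.reshape(-1, 1, 2)
    return boxes

One known weakness of this kind of update rule is that points drifting onto nearby moving objects pull the box with them, which is consistent with the refocusing behavior seen on the car videos.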

Is your progress on schedule or behind?

I am currently behind schedule. This next week I plan on testing the KLT on videos of actual animals in different environments and under different conditions, such as lighting, contrast, and varying levels of occlusion (the KLT should be resistant to occlusions, provided they are not too disruptive).

I have also yet to adapt the KLT to the video format the Arducam records in; it is currently being tested on .npz files.
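As a small sketch of what that adaptation might look like, the generator below yields frames either from an .npz test file or from a recorded video via OpenCV. The .npz key and the assumption that the Arducam output can be opened with cv2.VideoCapture are placeholders.

import cv2
import numpy as np

def frames_from_source(path):
    """Yield grayscale frames from either an .npz test file or a video file.

    Assumes the .npz stores an array of frames under the key 'frames' and that
    the Arducam recording is in a container cv2.VideoCapture can open.
    """
    if path.endswith(".npz"):
        for frame in np.load(path)["frames"]:
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    else:
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cap.release()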

What deliverables do you hope to complete in the next week?

By next week, I should have tested the KLT on videos of actual animals and have it working with the video/picture format used by the Arducam.

Team’s Status Report for March 19

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

With setup complete, the largest initial risk for the project has been avoided. Now the largest risk is performing poorly in multiple areas during initial testing. Our schedule is designed so that we perform initial testing as soon as possible and focus our remaining time on the design requirements not met by the initial setup. With testing occurring over the next two weeks, we will receive important information on how to proceed with the rest of the project. The biggest concern is that all three areas of our project (detection, tracking, and editing) fall short of their quantitative requirements. If that happens, we will each need to continue working on our own area, which will limit our ability to team up on issues. We should still be able to manage this situation since there is one person per area. However, as a last-resort plan, we will focus on detection and stationary photography first, tracking second, and photo editing last.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward

Our original plan was to have our robot's camera platforms be square with rods in the corners. However, when building the tower, we realized that the camera, when vertical, would bump into the rods. To fix this, we replaced the square platforms with long rectangles. Our initial material purchases were sufficient for this adjustment, so no additional costs were incurred.

Provide an updated schedule if changes have occurred.

No schedule updates.

Justin’s Status Report for March 19

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress.

I was able to complete a couple of tasks this week. The first, and most important, was the physical setup of the robot. I was able to finish all of this task except for attaching the camera tower to the tripod. The task involved building a three-layered tower (camera, Jetson Nano, camera) in which the cameras are screwed in and able to scan their full range of motion.

The next task I completed was updating the sharpening algorithm. We plan on having an algorithm automatically determine how much of each effect to apply to our photos. With this in mind, it is important that the scales for applying the image processing algorithms in our library feel natural and allow for enough flexibility. Our initial implementation of the sharpening algorithm applied a single kernel once for each level of sharpening; by the second or third iteration, the picture was unusable, leaving very little flexibility in applying the effect. After some research, I changed the algorithm so that the sharpening level controls the size of the sharpening kernel. This produced a much more natural sharpening scale.

Old Sharpening Algorithm on Level 3
New Sharpening Algorithm on Level 9
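A minimal sketch of one way to tie sharpening strength to kernel size is shown below, using unsharp masking with a level-dependent Gaussian blur. This illustrates the idea rather than reproducing our library's exact kernel, and the weights are assumptions.

import cv2

def sharpen(image, level):
    """Sharpen where `level` controls the size of the blur kernel used for
    unsharp masking (larger kernel -> stronger, broader sharpening)."""
    k = 2 * level + 1                                   # kernel size must be odd
    blurred = cv2.GaussianBlur(image, (k, k), 0)
    # Boost edges by subtracting the blurred copy; the weights sum to 1 to
    # keep overall brightness roughly unchanged.
    return cv2.addWeighted(image, 1.5, blurred, -0.5, 0)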

I also did the ethics reading and assignment this week.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We are on schedule with the Gantt chart shown in the design review and still have an excess of slack. We will test our first design this upcoming week.

What deliverables do you hope to complete in the next week?

Initial testing results for the robot’s search/detection and the editing algorithm.

Fernando’s Status Report for March 3

What did you personally accomplish this week on the project?

This week I helped work on the Design Review paper.

Is your progress on schedule or behind?

I'm a little behind on schedule as I would like to make more changes to the Design Review and plan on finishing it in the days to come.

What deliverables do you hope to complete in the next week?

A completed Design Review and some interface between the KLT and Nvidia Jetson Nano.