Kobe Individual Status Report 4/27

What did you personally accomplish this week on the project? 

This week I focused on debugging the depth sensing and image pipeline. At the beginning of the week our depth and color images were no longer triggering our main callback. After a long debugging process, we found that the main issue was that our images were not being synced together correctly by the TimeSynchronizer object. To get around this, we decided to use an ApproximateTimeSynchronizer with an adjustable slop value. Additionally, I continued working on our custom YOLOv8 model: I added more images of hexapods and refined our dataset with more images of people of different ethnicities so that our model would not be biased.
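The synchronizer setup is only a few lines; here is a minimal sketch of it (the topic names and slop value are placeholders, not our exact configuration):

    import message_filters
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image


    class PerceptionNode(Node):
        def __init__(self):
            super().__init__('perception_node')
            # Subscribe to color and depth through message_filters so they can be paired.
            color_sub = message_filters.Subscriber(self, Image, '/camera/color/image_raw')
            depth_sub = message_filters.Subscriber(self, Image, '/camera/aligned_depth_to_color/image_raw')
            # ApproximateTimeSynchronizer pairs messages whose stamps differ by at most
            # `slop` seconds, unlike TimeSynchronizer, which requires exact matches.
            sync = message_filters.ApproximateTimeSynchronizer(
                [color_sub, depth_sub], queue_size=10, slop=0.1)
            sync.registerCallback(self.main_callback)

        def main_callback(self, color_msg, depth_msg):
            # Both images arrive here together once their timestamps fall within the slop.
            pass


    def main():
        rclpy.init()
        rclpy.spin(PerceptionNode())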

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are on schedule if we account for the slack time we added. We’re doing some final search behavior testing right now, and we’ll be testing our multiple-hexapod system soon.

What deliverables do you hope to complete in the next week?

In the next week we just hope to complete our posters and refine our hexapod behaviors.

Kobe Individual Status Report 4/20

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
This week I integrated YOLOv8 and VSLAM together with hardware acceleration enabled. I initially tried launching the launch files of both packages together, but I pivoted to combining the VSLAM container with the YOLOv8 NITROS container so that both packages keep hardware acceleration and work with a single RealSense camera node. This integration is fairly complex and took the vast majority of my time. Earlier in the week I also focused on testing out VSLAM and seeing how we can use its pose data to keep track of our orientation.
(Photo: SLAM and object detection working.)
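The combined launch file is roughly shaped like this (a sketch only; the package and plugin names are assumptions that would need to be checked against the Isaac ROS documentation, and the single RealSense driver node is launched alongside the container):

    from launch import LaunchDescription
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode


    def generate_launch_description():
        # One container so VSLAM and the YOLOv8 inference nodes can share NITROS
        # (GPU-accelerated, zero-copy) transport. The single RealSense driver node
        # that feeds both pipelines is launched alongside this container.
        container = ComposableNodeContainer(
            name='hexapod_perception_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container_mt',
            composable_node_descriptions=[
                ComposableNode(
                    package='isaac_ros_visual_slam',
                    plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',   # placeholder plugin name
                    name='visual_slam'),
                ComposableNode(
                    package='isaac_ros_tensor_rt',
                    plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',   # placeholder plugin name
                    name='tensor_rt'),
                ComposableNode(
                    package='isaac_ros_yolov8',
                    plugin='nvidia::isaac_ros::yolov8::YoloV8DecoderNode',     # placeholder plugin name
                    name='yolov8_decoder'),
            ],
            output='screen')
        return LaunchDescription([container])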
Is your progress on schedule or behind? 
We are admittedly behind schedule, mainly due to the various intricacies of Isaac ROS hardware acceleration that we’ve run into. The issues we’ve seen are not very well documented because of how new Isaac ROS is, but we’ve been able to overcome them with a bit more time. To make up for falling behind schedule, we are all working around the clock on the project.
What deliverables do you hope to complete in the next week?

In this next week I want to have our entire system running together with our state machine. I want to integrate the search algorithm with our pose data from VSLAM and then ensure that the hexapod has the proper behavior for our search and rescue tasks.
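For the orientation tracking, the plan is simply to read the VSLAM odometry and extract a yaw angle. A minimal sketch of that (the odometry topic name is an assumption and must match the visual SLAM launch configuration):

    import math

    import rclpy
    from rclpy.node import Node
    from nav_msgs.msg import Odometry


    def yaw_from_quaternion(q):
        # Standard quaternion-to-yaw conversion (rotation about the z axis).
        return math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                          1.0 - 2.0 * (q.y * q.y + q.z * q.z))


    class PoseTracker(Node):
        def __init__(self):
            super().__init__('pose_tracker')
            # Topic name is an assumed default; check the visual SLAM launch configuration.
            self.create_subscription(Odometry, '/visual_slam/tracking/odometry',
                                     self.on_odom, 10)
            self.yaw = 0.0

        def on_odom(self, msg):
            self.yaw = yaw_from_quaternion(msg.pose.pose.orientation)
            # The search state machine can compare self.yaw against a target heading.


    def main():
        rclpy.init()
        rclpy.spin(PoseTracker())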

 

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Understanding ROS, and specifically Isaac ROS, has been vital to our project and to making the various parts of our hexapod software work together. Another skill we needed to learn was delving deep into documentation to figure out how to resolve various software issues. We also found Docker containers to be very important for running Isaac ROS; getting them working took a lot of trial and error.

 

Kobe Individual Status Report 4/6

This week our main goal was to finish up a hexapod with enough of the desired behavior for a demo. We were able to get our image-to-hexapod-command pipeline working. I specifically worked on the software for the hexapod, focusing on the depth and the location of object bounding boxes from the YOLOv8 detection outputs. I used the locations of the bounding-box centers as a way for our hexapod to turn toward people and follow them to a certain distance. The rest of the week I focused on restructuring our code so that our main callback function is in charge of running our state machine. I implemented more of our desired behavior so that the hexapod actually moves through the various states, from Search to Investigate to Found. I debugged this on the actual hexapod and ensured that it worked well with the rest of the package. I also added various checks to the code to make sure that the data we base our hexapod behaviors on is accurate. Here is a snippet from one of our states:
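As a simplified sketch of that state’s logic (the thresholds, field names, and string command interface below are illustrative placeholders, not our exact code):

    from enum import Enum, auto


    class State(Enum):
        SEARCH = auto()
        INVESTIGATE = auto()
        FOUND = auto()


    # Illustrative placeholder values, not our actual tuning.
    FOUND_DISTANCE_M = 0.75       # stop this far from the person (meters)
    CENTER_TOLERANCE_PX = 40      # how far off-center the bounding box may drift
    IMAGE_CENTER_X = 320          # half of a 640-pixel-wide image


    def investigate_step(bbox_center_x, depth_m):
        """One tick of the Investigate state: center the person, walk until close.

        Returns (next_state, command), where command is a simple string for the
        hexapod driver.
        """
        if depth_m is None or depth_m <= 0.0:
            # Bad or missing depth reading: don't act on it.
            return State.INVESTIGATE, 'stop'
        if depth_m <= FOUND_DISTANCE_M:
            return State.FOUND, 'stop'
        error = bbox_center_x - IMAGE_CENTER_X
        if error > CENTER_TOLERANCE_PX:
            return State.INVESTIGATE, 'turn_right'
        if error < -CENTER_TOLERANCE_PX:
            return State.INVESTIGATE, 'turn_left'
        return State.INVESTIGATE, 'walk_forward'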

We adjusted our Gantt charts to be more realistic, and we are currently on schedule. Next week I hope to implement a more complex search algorithm.

Kobe Individual Status Report 3/30

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
At the beginning of this week I spent most of my time trying to integrate the original eYS stereo camera we had into our system, getting its depth data and using the camera for VSLAM. We soon found that this camera had really poor documentation and did not work well for what we wanted to do, so we pivoted to setting up a RealSense camera in its place. I mostly worked on turning the code for the camera and hexapod communication into ROS nodes that work with our YOLOv8 visualizer. On Wednesday, however, we realized that our SD card had gotten corrupted, which set us back a lot. I spent the rest of the week setting up our environment again and restoring our code. I finished integrating the RealSense camera with the image pipeline after some trial and error.
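Bringing the RealSense up mostly comes down to launching its ROS 2 driver with aligned depth enabled; a sketch of that launch (the executable and parameter names follow the realsense2_camera wrapper and may differ between wrapper versions):

    from launch import LaunchDescription
    from launch_ros.actions import Node


    def generate_launch_description():
        camera = Node(
            package='realsense2_camera',
            executable='realsense2_camera_node',
            name='camera',
            parameters=[{
                'enable_color': True,
                'enable_depth': True,
                'align_depth.enable': True,   # publish depth aligned to the color frame
            }])
        return LaunchDescription([camera])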

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

The corrupted SD card set us back a few days, but we have recovered now. On the bright side, we are still not too far behind schedule, and restoring the SD card allowed us to learn more about Docker, using SSDs, and our environment in general.

What deliverables do you hope to complete in the next week?
Finish a hexapod that can do basic maneuvering, object detection, and object following.

Team Status Report 3/23

     The most significant risk that could jeopardize the success of the project is still related to our method of doing obstacle avoidance in combination with SLAM. We have multiple plans to pivot away from SLAM if the data proves too difficult to use. One possible solution would be to gather the SLAM data but rely on our ultrasound sensor for obstacle avoidance. In this case the SLAM data could be collected and brought back to a central computer used by a human SAR team member. This data could then be used to visualize the target area, allowing human team members to better traverse the terrain. We are a bit behind schedule, but we hope to catch up soon as we are starting to implement the search algorithm on our actual robot. We made a good amount of progress in the past week, as we got object detection and the hexapod controls working; these are discussed further in the individual reports.

Kobe Individual Status Report 3/23

This week I was able to create the stereo-camera-to-YOLOv8-detections pipeline. Specifically, I made an Isaac ROS node in charge of interfacing with the camera using OpenCV. I then translated the OpenCV frames into ROS 2 image messages via CvBridge and published them, and the YOLOv8 TensorRT node ran object detection on these messages. The visualizer node showed us that the object detection was working well. Casper and I are currently in the process of creating a central control node that will take the various results from object detection and other sensors in order to coordinate the behaviors of the hexapod. A lot of the time spent this week was debugging and figuring out alternatives to deprecated packages, incompatible libraries, etc.
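The core of that camera node is just OpenCV capture plus a CvBridge conversion; a minimal sketch (the device index, topic name, and frame rate are placeholders):

    import cv2
    import rclpy
    from cv_bridge import CvBridge
    from rclpy.node import Node
    from sensor_msgs.msg import Image


    class CameraPublisher(Node):
        def __init__(self):
            super().__init__('camera_publisher')
            self.pub = self.create_publisher(Image, 'images', 10)
            self.bridge = CvBridge()
            # Device index 0 is a placeholder; the stereo camera exposes several /dev/video* nodes.
            self.cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
            self.create_timer(1.0 / 15.0, self.capture_frame)  # ~15 FPS

        def capture_frame(self):
            ok, frame = self.cap.read()
            if not ok:
                self.get_logger().warning('Failed to read a frame from the camera')
                return
            msg = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')
            msg.header.stamp = self.get_clock().now().to_msg()
            self.pub.publish(msg)


    def main():
        rclpy.init()
        rclpy.spin(CameraPublisher())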

Now that the object detection pipeline is working, I’m closer to being on schedule, but still a bit behind, since we should be implementing the search algorithm right now. This should not be too big of a hurdle to overcome since we are starting with a simple search algorithm. Over the next week I’m hoping to get the hexapod to move toward search targets such as a person.

Kobe Individual Report 3/16

This week I worked a lot with our camera and our Isaac ROS environment. At the beginning I focused on interfacing with our USB-connected stereo camera. I found which video input channels gave us the visuals and the depth, and then I tried to use GStreamer to create a video pipeline for YOLOv8 detection. After consulting with professors who have used GStreamer before, I decided instead to switch to a V4L2-backed OpenCV approach, where I create a callback that captures image frames at a certain frequency and then publishes them to the topic. I decided to create a separate package for the camera image publishing in our ROS environment to modularize our system more and to allow us to use C++ for a speedup. After learning how the USB camera interfacing works, I converted it to a C++ node that communicates on the images topic with the YOLOv8 node.
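Finding which /dev/video index carried which stream was mostly a matter of opening each one with OpenCV and seeing what came back; a quick probe along these lines (the index range is arbitrary):

    import cv2

    # Open each candidate V4L2 device and report whether it yields frames and at
    # what resolution, to tell the RGB stream apart from the depth stream.
    for index in range(6):
        cap = cv2.VideoCapture(index, cv2.CAP_V4L2)
        ok, frame = cap.read()
        if ok:
            print(f'/dev/video{index}: frame shape {frame.shape}')
        else:
            print(f'/dev/video{index}: no frames')
        cap.release()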

Our progress is admittedly behind, due to the fact that we went down various paths to figure out the best way to set up our system. While we are behind, we have plans to catch back up by pivoting on some design choices. By next week I hope to have our Jetson’s ROS environment designed with various nodes for our subsystems, and I hope to have our object detection fully working.

 

Kobe’s Individual Status Report 3/9/2024

I worked on understanding the YOLOv8 structure and how to utilize it within our Isaac ROS based environment. Specifically, I looked deeper into the launching and the scripts written for visualizing the image detection that we got working before the break. I took the Python visualization script and began translating it to C++, since we want the majority of our YOLOv8 code to run in C++ for speed. This script creates a visualizer class that subscribes to the detections_output topic and registers a callback for updates on this topic, which allows the visualizer to display the bounding boxes. My next step is to write a new file that creates a publisher node to take images from a camera and publish them to the “images” topic that our visualizer takes as input.
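The structure of that visualizer is essentially one subscriber and a callback; a rough Python sketch of its shape (the message type and field layout assume vision_msgs’ Detection2DArray on a recent ROS 2 release and may not match our exact setup):

    import rclpy
    from rclpy.node import Node
    from vision_msgs.msg import Detection2DArray


    class Yolov8Visualizer(Node):
        def __init__(self):
            super().__init__('yolov8_visualizer')
            # The decoder publishes its detections on this topic in our pipeline.
            self.create_subscription(Detection2DArray, 'detections_output',
                                     self.on_detections, 10)

        def on_detections(self, msg):
            # Each detection carries a 2D bounding box; a full visualizer would
            # draw these onto the latest image frame.
            for det in msg.detections:
                bbox = det.bbox
                cx = bbox.center.position.x   # field layout varies across vision_msgs versions
                cy = bbox.center.position.y
                self.get_logger().info(
                    f'bbox center=({cx:.0f}, {cy:.0f}) size=({bbox.size_x:.0f} x {bbox.size_y:.0f})')


    def main():
        rclpy.init()
        rclpy.spin(Yolov8Visualizer())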

Akash’s Individual Status Report 3/9/2024

I mainly worked on setting up the controls aspect of the robot using a Raspberry Pi 4. I set up Ubuntu and ROS 1 on it to test whether the library works. There were a lot of setbacks: we were initially using a Pi 3, which cannot run Ubuntu 20.04 desktop because of its limited RAM, and the Pi 4 needed a bunch of extra setup because Ubuntu 20.04 desktop isn’t supported on it by default, but it’s up and working now.

In the future we will either use it as is or port the ROS library onto the Jetson and change the ROS drivers package to be compatible with the Jetson hardware.

Kobe Individual Report 2/24

This week I was mainly focused on setting up and benchmarking different YOLO versions on the Jetson Nano and now the Jetson Orin. The setup process for the Jetson Nano took longer because it cannot support Python 3.7 or higher with JetPack 4.6. To get around this, I did a separate setup that avoided using the Ultralytics library, and I also added a virtual environment. I was successful in setting up YOLOv7, but when running it I found that it took 30 seconds to detect a few horses in a single image, which is concerning to us. We decided to pivot and use a Jetson Orin Nano instead, which is 80x faster than the normal Jetson Nano. I spent the rest of the time setting up the Jetson Orin Nano. Here is an image of horses, very cool:

Progress is a tiny bit behind due to our switch to the Jetson Orin Nano, but I think it’s a necessary delay. Next week I want to get the camera and YOLOv8 operational.