Team Status Report 27th April

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

As discussed in Casper’s individual report, the most significant risk currently is the “Out of frame resources” error that occurs when running our program, causing intermittent crashes. If we are unable to determine exactly when this error occurs, and how to solve it, we can revert to running only YOLOv8 without VSLAM.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

As discussed above, it is possible that we might have to discard VSLAM due to resource limitations. This means that we will not be able to visualize the map or use the compass coordinates from the IMU.

 

Provide an updated schedule if changes have occurred

 

List all unit tests and overall system test carried out for experimentation of the system. List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

Scalability speedup test:

We found that our speedup decreased beyond 6 hexapods, so we should not scale past 6 units.

Battery test:

 

Object classification test:

Through the object classification tests, we decided to change our design to use a Jetson Orin Nano instead of a Jetson Nano: we needed better computation speed, and we wanted to run the hardware-accelerated YOLOv8, which requires a JetPack version that the Jetson Nano could not run. The mean accuracy readings showed that we no longer needed to extensively train a new model, since the accuracy of our COCO-pretrained model was already very strong.

 

 

Casper Individual Status Report 4/27

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
In the past week, I created the majority of slides and wrote the script for our final presentation – this included conducting some validations such as battery life calculations. I also spent a few hours creating cue cards and practicing.
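The battery-life validation mentioned above boils down to dividing usable pack energy by average power draw. A minimal sketch, where every number (cell capacity, voltage, draw, efficiency) is a hypothetical placeholder rather than our measured data:

```python
# Hypothetical battery-life estimate. Capacity, voltage, draw, and
# efficiency values are illustrative placeholders, not measured data.
def runtime_hours(capacity_mah, voltage_v, avg_draw_w, efficiency=0.85):
    """Runtime = usable pack energy (Wh) / average power draw (W)."""
    energy_wh = capacity_mah / 1000 * voltage_v
    return energy_wh * efficiency / avg_draw_w

# e.g. three 18650 cells (3000 mAh @ 3.7 V each) driving ~10 W of servos
print(round(runtime_hours(3 * 3000, 3.7, 10.0), 2))
```

The efficiency factor accounts for conversion losses and unusable capacity at low voltage; the real calculation would plug in the measured draw of the servos and Jetson.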
In terms of advancing the project, I spent a lot of time deploying, testing, and tuning the search algorithm on the actual Hexapods. Testing on physical hardware has taken longer than anticipated. Furthermore, I suspect that the performance demands of running both YOLOv8 and VSLAM are causing too much resource depletion and overheating, where the Jetson encounters an “Out of Frame Resources” error after the program has run too many times. We will need to find the root cause of this error and fix it, since it could become a bigger issue during our demo.
Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
We are still quite behind schedule unfortunately, as we need to finish integrating the robot (though we have performed most of the component-wise validations already). We only need to finish fine-tuning the search algorithm and add communications; however, there might be many road bumps (e.g. the “Out of Frame Resources” error). Our team is prepared to dedicate the next several days to completing the project before our demo.
What deliverables do you hope to complete in the next week?
We have an early demo on Wednesday, so we are ideally planning to finish our integrated robot by Monday. This will involve solving the error caused by running VSLAM and YOLOv8 together. If we are still unable to resolve this by early Monday, we may have to drop VSLAM and simply hardcode turns and movements.

Casper’s Individual Status Report Apr 20 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
Over this past week (and during Carnival), I was able to write the search algorithm for our Hexapod, which can be seen here. I had to make some modifications in behaviour compared to our simulated search algorithm due to some physical constraints that we did not have in the simulation. Furthermore, I was able to assemble and 3D print all the required parts for a second Hexapod, as well as set up the Raspberry Pi for actuation.
What deliverables do you hope to complete in the next week?

In this next week, I am planning to test and polish the search algorithm on the Hexapod. This will involve interfacing with and calibrating the compass bearings provided by the VSLAM library.

I am also the speaker for our final presentation this week, so I will need to prepare the slide deck and script.

Finally, I will work with the rest of the team on integrating all the Hexapod systems together (i.e. image detection, search algorithm, communication), then begin validating the whole system.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Being completely new to robotics before this capstone, I learned a lot about ROS throughout this project! I did so by auditing the 18448 class, as well as by reading documentation and following video tutorials.

Although I already knew the fundamentals, I also learned a lot more about 3D printing, including the issues that come with it and how to debug them (e.g. bed leveling, nozzle temperature, filament type). A majority of this learning came from simple trial and error, as well as asking others more knowledgeable about the issues I encountered.

Casper’s Individual Status Report Apr 6 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

We made a lot of progress over this past week before our interim demo! With our reflashed Jetson, Akash and I were able to create shell scripts that automate the process of setting up our ROS environments in the Docker containers. With this code in our GitHub, we are able to easily set up new Jetsons with the correct dependencies for YOLOv8.

Additionally, I was able to write a simple tracking algorithm that turns towards and follows humans up to a certain distance. After mounting the battery packs and Jetson onto the Hexapod so that it is fully mobile, I was able to verify that this algorithm works (which can be seen in the group updates video)!
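The core decision logic of that tracking behaviour can be sketched as follows. This is an illustrative simplification, not our actual node: the parameter names, deadband, and stop distance are all hypothetical placeholders.

```python
# Sketch of the follow behaviour: turn toward the detected person's
# bounding-box centre, and stop once they are close enough. Parameter
# names and thresholds are illustrative placeholders, not our node's API.
def follow_command(box_center_x, image_width, depth_m,
                   stop_distance_m=1.0, deadband_px=30):
    if depth_m <= stop_distance_m:
        return "stop"  # close enough; don't walk into the person
    error = box_center_x - image_width / 2  # horizontal offset in pixels
    if error > deadband_px:
        return "turn_right"
    if error < -deadband_px:
        return "turn_left"
    return "forward"
```

In the real node, the bounding box comes from the YOLOv8 detections and the depth reading from the RealSense camera.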

What deliverables do you hope to complete in the next week?

Over this coming week, I hope to have fully implemented VSLAM on our Jetson, in order to determine whether it is feasible or not. If we decide to proceed with VSLAM, I also hope to add to the shell scripts to automate the setup for this package.

I will also help Kobe with the overall state control and search algorithm since I wrote the simulation code for it. Additionally, I will assemble and manufacture our remaining Hexapods so that we can run swarm SAR.

Team Status Report 4/6

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We have made good progress with the Hexapod for the interim demonstration, where it can track and follow people. However, we have yet to implement a proper search/path-planning algorithm to find people in the first place, which is an important component of our project. To mitigate this risk, we have many ideas for search algorithms of varying levels of difficulty, so that we can always fall back on a simple implementation if needed. Our ideas range from using SLAM map data to track all explored and unexplored areas, to simply walking straight when possible and turning in a random direction when blocked.
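The simplest fallback policy described above fits in a few lines. In this sketch, `is_blocked` stands in for whatever obstacle signal the robot exposes (e.g. a depth-camera range threshold); it is a placeholder, not our actual API.

```python
import random

# Fallback search policy: walk straight while the path is clear,
# otherwise turn in a random direction. `is_blocked` is a placeholder
# callable standing in for the robot's real obstacle signal.
def search_step(is_blocked):
    if is_blocked():
        return "turn_" + random.choice(["left", "right"])
    return "forward"
```

The appeal of this fallback is that it needs no map at all, so it stays available even if SLAM proves infeasible.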

For localization and mapping, we are almost finished with setting up VSLAM using live data from the RealSense cameras. However, there is still worry that VSLAM may be too compute heavy, both in terms of compute resources and also memory requirements (since offline data can be gigabytes to terabytes large), so this might not be feasible. As a contingency plan, we might be able to simply use the movement commands we send to create a map, as it seems that the drifts on the Hexapod robots are tolerable.
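That dead-reckoning contingency amounts to integrating the movement commands we send into an estimated pose instead of running VSLAM. A minimal sketch, with illustrative (uncalibrated) step and turn sizes:

```python
import math

# Dead-reckoning sketch: accumulate commanded motions into a pose
# estimate (x, y, heading). Step and turn sizes are illustrative,
# uncalibrated placeholder values.
def integrate(commands, step_m=0.05, turn_rad=math.radians(15)):
    x, y, theta = 0.0, 0.0, 0.0
    for cmd in commands:
        if cmd == "forward":
            x += step_m * math.cos(theta)
            y += step_m * math.sin(theta)
        elif cmd == "turn_left":
            theta += turn_rad
        elif cmd == "turn_right":
            theta -= turn_rad
    return x, y, theta
```

This only stays usable because the drift on the Hexapods is tolerable; the error still grows with every step, which is the accuracy cost discussed below.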

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

As discussed above, we might need to remove SLAM from our system requirements if we deem it computationally infeasible. This means the robot will not be as accurate in localization due to drift, but will have more computing power for object detection and other algorithms.

We also made some additions and changes to our hardware, such as powering the RPi using an additional battery pack, so that all the 18650s are dedicated to powering the servo motors for longer life.

Provide an updated schedule if changes have occurred.

Our schedule is actively updated in our Gantt chart, which can be seen in this spreadsheet.

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

Controls test (already done): After setting up the hardware and software for the Hexapods, we tested that the movement and controls algorithm works as intended.

Search algorithm test: We will test that the hexapods are able to navigate around a room with obstacles without colliding. We can do so by cordoning off a section of a room and adding obstacles in front of the robots’ paths.

People detection test: We will test that our object classifier is able to detect people in a variety of poses, lighting conditions, and other environment variables. We will use test data that we took ourselves or from online.

Mock real-world test: We will create a 5 m x 5 m space/room, with walls and other obstacles, as a simulation of a real-world scenario that the hexapods might find themselves in. We will test that the hexapods are able to navigate through the room and find all humans (i.e. human figurines or images).

Casper’s Individual Status Report Mar 30 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

As seen in our group update, we unfortunately had our SD card corrupted, so we had to reset our Jetson OS and the environment for running everything. Fortunately, we had most of our code pushed to GitHub, so we only had to make minor recoveries for the code.

Before the corruption, I was able to set up the Intel RealSense camera and create a Python script to read color and depth images from it. We decided to transition from the Eys3D camera due to lack of documentation in getting stereo images. Since the SD corruption, I have spent a few hours with the team on recovering our progress, particularly in learning how to use and backup our Docker containers such that we can avoid losing progress if it happens again.

Additionally, I have been in charge of designing and 3D printing the structural harness to mount the Jetson, battery packs, and Intel camera onto the robot, such that it can be fully portable without wall plugs. Since the battery harness is very large, it took 17 hours to print a full version, and the printer bed had to be releveled several times (see failed print on the right :/).

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

The SD card corrupting did put us back a few days. Our team has been working a lot of additional hours to catch up, but we are still a bit behind schedule.

I think one of the most difficult parts of this project has been actually planning what to do efficiently. Since much of what we are doing is very new to us, being the first time working in ROS and on an NVIDIA embedded computer, we have spent a lot more time than anticipated figuring out exactly what to do and how to achieve it.

What deliverables do you hope to complete in the next week?

Since this coming week is interim demo, we are hoping to have one Hexapod as close to finished as possible. On my side, this involves mounting everything onto the Hexapod and resoldering the power source for the Raspberry Pi.

I will apply what I have learned to tailoring the NVIDIA Docker script to our specific use case, and also experiment more with VSLAM using the Intel camera.

Casper’s Individual Status Report Mar 23 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

Over this past week, I was able to get Isaac ROS VSLAM running on our Jetson Orin Nano, using example data provided from a rosbag (see image below). We have verified that VSLAM works through the visualizations, but have yet to verify this using our own data.

Furthermore, I have been working together with Kobe to develop our master controller node, which handles the overall flow logic of the robot (ie. transitioning from search state to find state, the actions to do within each state). This has involved

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

We are still a bit behind schedule, as we had originally planned to begin testing and integration by now. So, we will have to use up some of our slack time.

What deliverables do you hope to complete in the next week?

Over this next week, I hope to complete the controller node so that we can have all the software (excluding VSLAM for now) fully integrated, and the robot is able to move around and search for objects.

Casper’s Individual Status Report Mar 16 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I encountered a few blockers this week, which unfortunately meant I did not get as much done as I wanted. Since we are still working on finalizing the controls for the robot, I decided to hold off on implementing SLAM. Instead, I have been helping Akash with tuning the Hexapods using the ROS library that we found, but we have decided to pivot, as the library does not seem to be very reliable. I have also been learning a lot about Docker containers and ROS, since I am not very knowledgeable about them, by completing some assignments from relevant robotics courses on my own Jetson Nano.

Additionally, since we are using stereo cameras on our robots now, I designed and printed some retrofit mounts for the camera, which can be seen in the photos.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think we are falling a bit behind schedule, as we will need to implement SLAM fairly soon. However, we have redefined our MVP to be simpler, such that we do not necessarily need SLAM for the product to work – this means that we will not have the Medbot be able to track other bots. However, we will still endeavour to have this done by demo.

What deliverables do you hope to complete in the next week?

This week, I will help Akash in finalizing the controls algorithm, which involves redesigning the Freenove library. Hopefully after that is complete, I can begin working on SLAM.

Once we decide on the power bank / battery pack for the Jetson, I will also design a mount for that.

Casper’s Individual Status Report Mar 9 2024

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

Over the past week(s), I was able to deploy Isaac ROS YOLOv8 on the Jetson Orin Nano; it took a while to set up all the dependencies and to understand how to use Docker containers. Isaac ROS YOLOv8 is able to run within a second or so on a simple photo, which is significantly faster than YOLOv7 on the Jetson Nano.

I was also able to extend the simulated search algorithm to account for multiple Hexapods, and to profile the expected speedups through multiple simulated runs. The example below shows that we should expect over a 2x speedup with 3 hexapods. The example on the left simulates the Hexapods without optimized behaviour (walking randomly), and the example on the right simulates them with optimized behaviour (avoiding places previously visited).
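The shape of that speedup profiling can be illustrated with a toy coverage simulation. The grid size and movement model here are stand-ins for our actual simulator, not its real parameters: N hexapods random-walk a grid until every cell has been visited, and comparing the returned step counts across N estimates the speedup.

```python
import random

# Toy coverage simulation: n_bots hexapods random-walk a size x size
# grid (with clamping at the walls) until every cell is visited.
# Grid size and movement model are illustrative stand-ins.
def steps_to_cover(n_bots, size=10, seed=0):
    rng = random.Random(seed)
    bots = [(0, 0)] * n_bots        # all bots start in the same corner
    visited = {(0, 0)}
    steps = 0
    while len(visited) < size * size:
        steps += 1
        moved = []
        for x, y in bots:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), size - 1)   # clamp to grid bounds
            y = min(max(y + dy, 0), size - 1)
            visited.add((x, y))
            moved.append((x, y))
        bots = moved
    return steps
```

Averaging `steps_to_cover(1)` against `steps_to_cover(n)` over many seeds gives the kind of speedup curve described above.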

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I think I am more or less on our original schedule. However, since we failed to account for the work we had to do in SLAM and map merging, I think I will need to put in more effort for the second half of the semester.

What deliverables do you hope to complete in the next week?

For this week, I think I will take the initiative and begin implementing the SLAM and map merging algorithms on the robots, which first involves installing the cameras onto the robot and interfacing it with the Jetson Orin Nanos.

Team Status Report for Mar 9 2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
The main risk we are facing right now is implementing some form of SLAM to allow our hexapods to localize and determine where other hexapods are in the area. Our contingency plan for this risk is to utilize the prebuilt Isaac ROS AprilTag library for pose estimation of our hexapods, instead of having a full SLAM implementation. We think this would be a lot easier to implement, though we would have to constrain our scope a bit more.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
One of the changes made to the existing design of the system is that we’re now using Isaac ROS on our Jetson Orin Nano. This is because Isaac ROS supports our YOLOv8 object detection algorithm and is already hardware accelerated via the NITROS nodes it provides. This hardware acceleration is very important for us because we want our object detection to be fast.

Part A: … with consideration of global factors. Global factors are world-wide contexts and factors, rather than only local ones. They do not necessarily represent geographic concerns. Global factors do not need to concern every single person in the entire world. Rather, these factors affect people outside of Pittsburgh, or those who are not in an academic environment, or those who are not technologically savvy, etc.
With consideration of global factors, the main “job to be done” for our design is to provide accessible search and rescue options for countries around the world to deploy. With the rise of natural disasters and various conflicts around the globe, it is important that responders are armed with appropriate, cost-effective, and scalable solutions. Our hexapod swarm will also be trained with a diverse dataset, ensuring that we can account for all kinds of people from around the world. Our hexapod’s versatility and simplicity will allow it to be deployed around the world by people with limited technological ability. (Written by Kobe)

Part B: … with consideration of cultural factors. Cultural factors encompass the set of beliefs, moral values, traditions, language, and laws (or rules of behavior) held in common by a nation, a community, or other defined group of people.
With consideration of cultural factors, it is part of our shared moral belief and instinct to help those in times of need, so that we too may receive help when we are vulnerable ourselves. Our product is designed to address this moral value, common across different cultures and backgrounds. We will train the object detection models so that they can recognize people of all cultures and backgrounds, and design the product so that it can be deployed in a wide variety of settings. (Written by Casper)

Part C: … with consideration of environmental factors. Environmental factors are concerned with the environment as it relates to living organisms and natural resources.
With respect to environmental factors, the reason we chose a hexapod robot is so that it can traverse many different terrains; we will have to do physical testing to ascertain this. We will also account for domestic pets like dogs and cats in our training dataset so that we can identify and rescue them as well. We will additionally test our model with dolls and humanoid lookalikes to check that it does not get confused by them. Finally, we have a fault-tolerance requirement to account for changing environments that could compromise the robots, such as water damage or collapsing infrastructure. (Written by Akash)