Team Status Report for 4/20/24

For the past two weeks, our main focus as a team has been integration: connecting all of our separate subsystems. When hooking the object detection/depth map code up to the code that instructs the motors to turn and move in a specific direction, we ran into difficulty parallelizing the two processes, trying both Python's multiprocessing and socket modules. The two control loops were blocking each other and preventing either program from progressing; as stated in Varun's status report, the fix is an overarching program that acts as a state machine.

After fully assembling the rover and running integration tests covering the full pipeline of navigation -> detection -> pickup, the most significant risk lies on the detection side. Earlier in the week, we ran into an issue with the bounding-box coordinates the object detection pipeline produced: the y-coordinate it output didn't seem to be the top of the object, but rather the center relative to some part of the camera, causing our suction to undershoot because of inaccurate kinematics. After investigating the various dimensions and y-values in the code and comparing them to our hand measurements, we found that the detection.y value does reflect the top of the object, but its magnitude is measured from the bottom of the camera frame. To mitigate and manage this risk, and improve our kinematics in the process, we plan on tuning a hard offset applied to all y values to ensure we hit the top of the object, along with a variable offset based on the bounding-box dimensions. We have started doing this, but plan on performing many more trials next week.

Another risk involves the accuracy of the object detection itself, which currently falls below the standards defined in our design and use case requirements.
A potential cause for this issue is that MobileNet-SSD has a small label database, so there is a chance it doesn't encompass all the objects we want to pick up. However, since we don't necessarily need identification, just detection, a potential mitigation strategy is to lower the confidence threshold of the detection pipeline so that more candidate objects are detected in the first place.
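As a concrete sketch of the offset tuning described above, the corrected pickup target could combine a fixed offset with one scaled by the bounding-box height. The function name, default constants, and units here are hypothetical placeholders standing in for values we still need to tune, not numbers from our code:

```python
def corrected_pickup_y(detection_y, bbox_height,
                       hard_offset=5.0, bbox_scale=0.1):
    """Return an adjusted y target for the suction arm.

    detection_y: y value from the detection pipeline (top of the object,
                 measured from the bottom of the camera frame).
    bbox_height: height of the detected bounding box, in the same units.
    hard_offset: fixed correction applied to every target (to be tuned).
    bbox_scale:  fraction of the bounding-box height added as a
                 size-dependent correction (to be tuned).
    """
    return detection_y + hard_offset + bbox_scale * bbox_height
```

During next week's trials, each failed pickup would suggest a direction to nudge `hard_offset`, while systematic misses on tall versus short objects would inform `bbox_scale`.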

One change that was made to the design of the system was adding a second layer to our rover to house our electronics, due to space constraints. The change does not incur major costs, since the material and machinery were both provided by Roboclub. This is a quick, lasting fix, with no further costs to mitigate going forward.

As stated in our individual status reports, we are fairly on track, with no major changes to the schedule needed.

A full run-through of our system can be found here, courtesy of Varun: https://drive.google.com/file/d/1vRQWD-5tSb0Tbd89OrpUtsAEdGx-FeZF/view?usp=sharing
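As a toy illustration of the state-machine structure mentioned above (hypothetical names and events, not our actual code): the overarching parent process consumes events from the detection process over a queue, so neither control loop blocks the other.

```python
import multiprocessing as mp

def detection_loop(queue):
    # Stand-in for the detection/depth process: push each result as an event.
    for y in (120, 118, 117):
        queue.put(("detection", y))
    queue.put(("done", None))

def state_machine(queue):
    """Parent loop: consume events and decide the rover's next state."""
    state = "NAVIGATE"
    seen = []
    while True:
        event, value = queue.get()
        if event == "done":
            break
        if event == "detection":
            state = "PICKUP"  # a real version would also command the motors
            seen.append(value)
    return state, seen

if __name__ == "__main__":
    q = mp.Queue()
    worker = mp.Process(target=detection_loop, args=(q,))
    worker.start()
    final_state, detections = state_machine(q)
    worker.join()
    print(final_state, detections)
```

Because the parent only blocks on the queue, the detection process can run at camera speed while the state machine decides when to hand control to navigation or pickup.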

Team Status Report for 3/30/24

When we were all working together on Friday, one issue we noticed is that after switching to the 5V driver from the Arduino Nano, the motors were not spinning at the same speed. This is a significant risk: with our belt setup, any disparity between motor speeds affects the manner in which our robot turns, making it unreliable. To mitigate this risk, we have two potential avenues to pursue. The first is tuning the commands given by the microcontroller so that the robot can indeed drive straight, allowing us to match the motor speeds manually through tuning. The second is using rear wheel drive only and switching to casters on the front wheels. The belt tension is putting undue force on the motor shaft, causing it to spin slower; converting to rear wheel drive removes the need for a belt in the first place.
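A minimal sketch of the first mitigation, applying a software trim to the commanded motor speeds. The trim values, the 0-255 PWM duty range, and the function names are assumptions for illustration, not measured values or our actual firmware interface:

```python
# Per-motor trim factors, to be found empirically by commanding both
# motors at the same duty cycle and measuring how far the rover drifts.
LEFT_TRIM = 1.00   # hypothetical: left motor runs at commanded speed
RIGHT_TRIM = 0.93  # hypothetical: right motor runs fast, so scale it down

def trimmed_speeds(commanded, left_trim=LEFT_TRIM, right_trim=RIGHT_TRIM):
    """Scale a commanded speed (0-255 PWM duty) per motor and clamp."""
    left = min(255, max(0, round(commanded * left_trim)))
    right = min(255, max(0, round(commanded * right_trim)))
    return left, right
```

If the drift turns out to be load-dependent rather than a constant ratio, a static trim like this won't be enough, which would push us toward the rear-wheel-drive option instead.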

A change made to the existing design of the system is the switch from a Raspberry Pi Pico to an Arduino Nano. This is necessary because it allows us to drive 5V logic as opposed to 3.3V logic. The change does not incur any additional cost because the Arduino Nano was provided free of charge.

For an updated schedule, we are targeting this week to be able to drive the rover and control the servos for the arm, even if only with a basic program to test functionality.

This video link showcases our currently assembled rover so far (sans-camera), with the motors successfully wired up and spinning!

https://drive.google.com/file/d/1zICyOJkQBSxv6ApgS1hE1o7wqdp9SjWX/view?usp=sharing

Team Status Report for 3/16/24

After doing a post-spring-break evaluation, the most significant risks that could jeopardize the success of the project revolve around the camera, both in terms of its function and the modules that depend on its outputs, such as the kinematics calculation module. Despite having a pipeline that can detect depth and display a live camera feed smoothly on my (Nathan's) laptop, when using X11 forwarding the resulting feed was extremely slow and laggy. Our plan to manage and mitigate this risk is to get our RPi monitor as soon as possible to test actual usage, as well as look for any opportunities to lower bandwidth and latency. The Luxonis documentation has benchmarks for these values, so we can analyze whether we have any shortcomings. Another risk, stemming from the first, is that we are behind on our timeline. However, we placed orders for PCBs this week, so for these fabrication tasks we have controlled what we can in terms of timing. This week saw a lot of fabrication of parts, so the next weeks will see an abundance of integration and in-person meeting time.
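To compare our feed against the Luxonis benchmarks, a small helper like the following (hypothetical, not yet part of our pipeline) can turn frame-arrival timestamps into an average FPS figure, letting us measure the X11-forwarded feed against a local one:

```python
def measure_fps(timestamps):
    """Average frames per second from a list of frame-arrival times
    (in seconds). Returns 0.0 if fewer than two frames were seen."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    # N frames span N-1 inter-frame intervals.
    return (len(timestamps) - 1) / elapsed
```

In practice the timestamps would come from `time.monotonic()` calls in the display loop; a large gap between the local and forwarded numbers would confirm the bottleneck is in transport rather than the pipeline itself.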

No changes were made to the existing design of the system; it remains consistent with the change made after spring break.

Although no official changes to the schedule have been made, we are systematically cutting things from later in the schedule and trying our best to push forward by days. Here are some parts Hayden and Varun made!

Team Status Report for 3/09/24

The biggest risk, which I (Varun) only realized while testing, was whether or not the suction cups could even hold up an iPad. Thankfully, that risk was provably mitigated. Currently, the most significant risk is the kinematics. We’ll need to update the previously thought-out kinematics to be slightly more robust (described in our design report). I hope to figure that out this coming week.

We changed the block diagram of our system a bit, to account for said kinematics. We’ll need fast encoder feedback, which can really only be done on bare metal microcontrollers, rather than the OS-stuffed Raspberry Pi 4. We updated our block diagrams to reflect this change. It adds the cost of the Raspberry Pi Pico, which is rather minimal. Varun’s status report has the progress of the suction system!

Part A – Varun: 

Our target demographic is, by design, people who are not technologically savvy. As mentioned many times prior, the inspiration for this project came through Hayden's grandfather, whose capability to move is rather limited. Also limited is his technological ability. As such, we decided to simplify the control sequence to ensure that people at the same level of technological ability can easily use HomeRover. Additionally, the automatic sequence to grab an object is reachable at the touch of a button, so users don't have to spend a lot of time learning how to use our system. The factor we want to touch on is one that is not talked about much: recovery. Though recovery is extremely important, it is a hard part of the process to suffer through. We aim to ease that suffering just a little bit, with HomeRover.

Part B – Hayden: 
In regard to cultural factors, our design is robust in that it does not require a written language. Our control center uses four buttons for movement, which will be labeled with arrows, and two buttons for interacting. The two interaction buttons will be labeled with icons that can be translated in a user manual, making the design usable across cultures. The monitor will use color to display when the rover is in the proper range to pick up the item; green and red are seemingly universal for good and bad, so we are employing these colors on our display. As for moral values in religions, our design uses ethically sourced materials, and we are mitigating waste with our modular design. As for traditions and laws, the only groups whose beliefs I can think of our design violating are the Amish and Mennonite communities; these groups are not in our target demographic, so our design is culturally sufficient.

Part C – Nathan:

Considering environmental factors, especially our design's relationship with living organisms and natural resources, we are primarily concerned with the sourcing of our parts. Because our design uses LiPo batteries on both the user side and control side, we have to be cognizant of the origins of the lithium, and by extension the cobalt, that goes into these rechargeable batteries. According to Siddharth Kara, "roughly 75 percent of the world's supply of cobalt is mined in the Congo", oftentimes through child labor. If we are not aware of the origins of the cobalt that goes into our batteries, we are directly supporting brutal mining practices that cause both human and environmental catastrophe through the exploitation and exhaustion of the Congo's natural resources. This is an environmental consequence of utmost importance, and it will be considered heavily when purchasing the parts for our system.