Team Status Report for 12/6

Overall Progress

  • Improved UWB direction
  • LiDAR and obstacle avoidance integration


Risks & Management 

We are still fine-tuning the obstacle avoidance algorithm, which has caused us to fall behind on testing. We plan to timebox further improvements to obstacle avoidance and spend the remaining time on testing.


Design Changes & Justification

A few weeks ago, we switched to GPS because the UWB direction was not accurate. However, while testing GPS we found it to be highly inaccurate as well, and after consulting the professors for help, we reverted to using UWB. We are now using a two-anchor system instead of a one-anchor system to get the distance. This has been much more reliable in measuring the distance between the user and the cart.
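If the two anchors are mounted a known distance apart on the cart, the two distance readings can in principle be combined into a 2D position estimate by intersecting the two range circles. The sketch below is a generic illustration of that math, not necessarily how our system computes it:

```python
import math

def locate_user(d1, d2, baseline):
    """2D position of the user from distances to two anchors on the cart.

    Anchor 1 is at (0, 0) and anchor 2 at (baseline, 0). Returns (x, y)
    with the user assumed in front of the cart (y >= 0), or None if the
    two range circles do not intersect (inconsistent readings).
    """
    x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
    y_sq = d1**2 - x**2
    if y_sq < 0:
        return None  # readings disagree; no valid intersection
    return x, math.sqrt(y_sq)
```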


Testing

  • Load capacity: Can carry up to 30 lbs and still maintain control
  • Speed: Moves at an average of 3.58 mph
  • Runtime: Calculations show the battery packs supplying power will last more than 45 minutes, with an estimated runtime of roughly 2 hours
  • UWB distance accuracy: Not yet fully tested, but we verified that the UWB data provides an accurate estimate of the distance
  • Following distance: The cart maintains a following distance of 2 feet, stopping when the user and cart are too close and resuming movement once the distance exceeds 2 feet (see the sketch after this list)
  • LiDAR obstacle detection: The LiDAR detects obstacles with an average error of 3 cm from the actual real-world distance
  • Obstacle avoidance: The cart is able to stop within 1 foot of obstacles
    • The cart struggles to navigate away from obstacles and out of corners and walls, so we are still tuning the obstacle avoidance algorithm
  • Data latency: From receiving the user’s distance from the cart to executing the corresponding motor commands takes under 200 ms
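Below is a minimal sketch of the stop/start logic in the following-distance item above, using the 2-foot threshold from our tests plus a hypothetical deadband constant so the cart does not oscillate right at the boundary:

```python
FOLLOW_DISTANCE_FT = 2.0
DEADBAND_FT = 0.1  # hypothetical buffer; not a tuned value from our code

def should_move(distance_ft, currently_moving):
    """Stop when the user is too close; resume once they pull ahead."""
    if currently_moving:
        # Keep moving until the user is clearly inside the threshold.
        return distance_ft > FOLLOW_DISTANCE_FT - DEADBAND_FT
    # Only start moving once the user is clearly beyond the threshold.
    return distance_ft > FOLLOW_DISTANCE_FT + DEADBAND_FT
```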

Elly’s Status Report for 12/6

This week, I worked on helping to integrate all the subsystems together. During the first half of the week, I helped get the cart to follow the user based on the UWB distance by testing the connection between the phones and verifying that the distance measurements were accurate. Once the measurements matched our expectations, I moved on to integrating the UWB coordinates with the LiDAR and the pathfinding algorithm.

The code on our RPi 5 receives the UWB data via UDP, runs a C++ executable as a subprocess to get the nearby LiDAR data, and uses both to calculate the repulsive and attractive forces that determine the cart's general direction. Serial commands are then sent to the Arduino to control the physical motors. We pair the incoming UWB and LiDAR data by timestamp to ensure that the cart is not making decisions based on stale data. Originally, we planned to use the virtual force algorithm alone, but after testing, we added “states” to handle when the cart gets stuck. In the driving state, no obstacles are detected and the cart simply follows the user. In the blocked state, the cart spins in place to find a clear space. In the final state, a clear space has been found and the cart moves forward to escape from being stuck. Transitions between these states are based on the forces calculated by the virtual force algorithm.
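A rough sketch of this state logic is shown below. The helper structure, thresholds, and force inputs are illustrative assumptions rather than our actual code:

```python
import math
from enum import Enum, auto

class CartState(Enum):
    DRIVING = auto()   # follow the user; no significant obstacles
    BLOCKED = auto()   # spin in place to search for a clear direction
    ESCAPING = auto()  # move forward out of the stuck position

# Hypothetical thresholds -- the real values come from tuning.
BLOCKED_THRESHOLD = 0.2  # resultant force magnitude below this => stuck
CLEAR_THRESHOLD = 1.0    # forward clearance (meters) needed to escape

def next_state(state, attractive, repulsive, forward_clearance):
    """Transition between states based on the virtual-force vectors.

    attractive/repulsive are (x, y) force vectors; forward_clearance is
    the free distance (m) straight ahead from the LiDAR scan.
    """
    fx = attractive[0] + repulsive[0]
    fy = attractive[1] + repulsive[1]
    magnitude = math.hypot(fx, fy)

    if state is CartState.DRIVING and magnitude < BLOCKED_THRESHOLD:
        return CartState.BLOCKED    # forces cancel out: cart is stuck
    if state is CartState.BLOCKED and forward_clearance > CLEAR_THRESHOLD:
        return CartState.ESCAPING   # found a clear space while spinning
    if state is CartState.ESCAPING and magnitude >= BLOCKED_THRESHOLD:
        return CartState.DRIVING    # escaped; resume normal following
    return state
```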

For remaining tasks, we are still tuning and testing the obstacle avoidance, as there are a few cases where the cart does not behave as expected. For example, when trying to escape from being stuck, the robot spins in place to find a clear area, but sometimes the clear area it finds is in the opposite direction of the user, causing the cart to move away from the user.

Team Status Report for 11/22

Overall Progress

  • UWB direction
  • Began integrating LiDAR into robot movement
  • Began integrating UWB into robot movement

Risks & Management 

The most significant risk our project faces is the upcoming deadline of the final demo and final presentation at the end of next week. We spent a lot of time trying to figure out how to obtain a direction value (relative angle), which caused us to fall behind on integration and testing. Thus, we plan to spend as much time as possible this coming week integrating all of the parts together.

Design Changes & Justification

Since the direction through UWB was not working, we switched to using GPS and the iPhones’ compass headings to calculate the direction of the phones relative to each other. This design change was not ideal, as GPS is not very accurate indoors, but calculating the direction based on GPS is better than not having a direction at all.
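As a sketch of the idea, the relative direction can be estimated by computing the initial bearing between the two GPS fixes and subtracting the cart phone's compass heading. This uses the standard great-circle bearing formula; the function names are illustrative, not our exact implementation:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def relative_angle(cart_lat, cart_lon, user_lat, user_lon, cart_heading_deg):
    """Angle of the user relative to the cart's facing direction (-180..180)."""
    bearing = bearing_deg(cart_lat, cart_lon, user_lat, user_lon)
    return (bearing - cart_heading_deg + 180) % 360 - 180
```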

Elly’s Status Report for 11/22

This week, I finished setting up the Raspberry Pi by uploading all the code to it and started integrating the subsystems together. I spent some time refining the obstacle detection algorithm to ensure that the cart turns in the correct direction when there is an obstacle on the left or right. We decided to focus only on obstacles in front of the cart, so I filter out any obstacles detected behind the LiDAR. We also started integrating all the components together: we hooked up the Arduino and the Raspberry Pi to send commands based on the obstacle detection algorithm; however, we are running into issues with the connection, which we plan to smooth out next week.
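As a minimal sketch of that filtering, assuming the LiDAR reports each point as an (angle in degrees, distance) pair with 0° at the front of the cart (the exact angle convention of our sensor may differ):

```python
def front_points(scan, fov_deg=180.0):
    """Keep only points within the front field of view of the cart.

    scan: iterable of (angle_deg, distance) pairs, 0 deg = straight ahead.
    Angles are wrapped into -180..180 so points behind are easy to reject.
    """
    half_fov = fov_deg / 2.0
    kept = []
    for angle, dist in scan:
        norm = (angle + 180.0) % 360.0 - 180.0  # wrap into -180..180
        if abs(norm) <= half_fov:
            kept.append((norm, dist))
    return kept
```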

I am still behind based on the Gantt chart, but I plan to spend as much time as possible finishing the integration of all the parts. Next week, I plan to have the LiDAR fully integrated with the motors and to have very basic obstacle detection working.

Throughout this project, I’ve learned a lot about LiDAR, obstacle detection, and pathfinding algorithms. This was my first time using pathfinding algorithms, and I had the chance to learn the differences between global and local navigation and to explore different options like A* and D*, while evaluating the tradeoffs to determine what would work best for our project. To learn this information, I read many articles on pathfinding algorithms and watched videos of people building similar autonomous following projects to get an idea of what technologies were available for us to use.

Team Status Report for 11/15

Overall Progress

  • Working on Raspberry Pi 5 setup
  • UWB distance measurement but no direction
  • Encoder support on robot


Risks & Management 

A huge risk that our project faces is the UWB direction issue. Without the direction of our robot relative to the user, following the user will be extremely difficult. For example, since the user and the robot can both be moving at the same time, the robot could end up moving in the opposite direction of the user and go out of range, leading to frequent failures in movement efficiency. Thus, we are hoping to find a solution to the lack of direction data between the two phones. We have contacted our professors and TA and are hoping to get guidance on how to resolve this issue. We are also looking into using the iPhones’ cameras to calculate direction if needed.


Design Changes & Justification

No design changes have been made.


Validation

We have not been able to run any validation tests on our project since we do not have all the parts integrated together. However, once we do, we will analyze the data gathered from our testing plan that we outlined in our design report. For the 30 lb load requirement, we will confirm that the motors can successfully initiate movement, stop safely, and maintain maximum speed at all test weights. For the maximum 4 mph speed requirement, we will reference encoder logs to monitor Basket Buddy’s speed. For the following distance, we will analyze the logged UWB outputs to make sure it falls within our expected 2 ft range with a margin of 6 in. Finally, for our safety requirements, we will analyze the timed results from both the app-stop and out-of-range scenarios, to confirm the motor response time is below the threshold.
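As an illustration of the following-distance analysis, a simple pass-rate check over logged UWB distances might look like the sketch below; the function name and sample values are made up for illustration:

```python
TARGET_FT = 2.0
MARGIN_FT = 0.5  # 6 inches, per our design requirement

def following_distance_pass_rate(distances_ft):
    """Fraction of logged UWB distance samples within 2 ft +/- 6 in."""
    in_range = [d for d in distances_ft if abs(d - TARGET_FT) <= MARGIN_FT]
    return len(in_range) / len(distances_ft)

# Example: hypothetical distances logged while the user walks steadily.
samples = [2.1, 1.8, 2.4, 2.6, 1.9, 2.0]
print(f"{following_distance_pass_rate(samples):.0%} of samples in range")
```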

Elly’s Status Report for 11/15

This week, I focused on helping with integration for the interim demo. The Raspberry Pi 5 arrived on Wednesday, so for the rest of the week, I focused on setting it up with WiFi and the microSD card. The Raspberry Pi will connect all the components together, such as the UWB connection and the LiDAR, and communicate with the Teensy for motor commands. We plan to run the Raspberry Pi in headless mode by ssh-ing into it, since we only plan to program on it, but we are running into some issues with the WiFi.

In terms of progress, we are still behind on the Gantt chart, but most of the remaining work is integration, which I will work on with my teammates. We have a slack week, which will hopefully be enough to catch up on all work before the final demo. Next week, I hope to have the Raspberry Pi completely set up with all the necessary code running on it.

For verification of the subsystems I have been working on, I created a simulation to ensure that the pathfinding algorithm works as expected and set a 2-foot following distance from the shopper in the code. I plan to do more extensive testing of the obstacle detection once we integrate the LiDAR with the motors, by marking a circle with a 1-foot radius around an obstacle and having the user walk toward the obstacle while the cart avoids the marked area. For the mobile app, we kept the user interface simple, with only 2 pages, to ensure the app is intuitive to use. Finally, we tested how long it took for product details to populate in the app once a barcode was scanned, and it was under 500 ms.

Team Status Report for 11/8

Overall Progress

  • Created LiDAR simulation for local navigation
  • Worked on motor movement

Risks & Management 

A huge risk that our project is facing is the Raspberry Pi not arriving in time for our interim demo. We discovered almost 2 weeks ago that the Raspberry Pi 5 we were using was broken and promptly placed an order for a new one. Unfortunately, we have not yet received the replacement Raspberry Pi 5, which poses a significant roadblock for our interim demo, since it is the component used for communication between our phones and the Teensy microcontroller. Since the Raspberry Pi 5 won’t be here in time for our demo, we are looking into using a Raspberry Pi 4 or our computers as a stand-in. This would be a temporary solution, as we still plan on using the Pi 5 for the final demo.

Design Changes & Justification

No design changes have been made.

Elly’s Status Report for 11/8

This week, I worked on creating a Pygame simulation to observe the behavior of the cart when using the virtual force algorithm. The simulation works by grabbing 20 LiDAR scans of the environment and creating a virtual map of it. From there, the shopping cart is added into the environment and the computer’s mouse acts as the shopper. As you move the mouse, the shopping cart, visualized as a red dot, attempts to follow the mouse while navigating around any obstacles. When the mouse is far from the cart, the cart speeds up to catch up. For the most part, the virtual force algorithm does a good job of navigating the obstacles, but there are cases in which the cart gets stuck.
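A stripped-down skeleton of this kind of simulation is sketched below, with a compact force computation and constants chosen purely for illustration; our actual simulation loads real LiDAR scans rather than hard-coded obstacle points:

```python
import pygame

def compute_virtual_force(cart, target, obstacles):
    """Attractive pull toward the target plus repulsive push from nearby obstacles."""
    force = (target - cart) * 0.5  # attractive gain, chosen arbitrarily
    for obs in obstacles:
        away = cart - obs
        dist = max(away.length(), 1e-6)
        if dist < 120:  # only obstacles within 120 px repel
            force += (away / dist) * ((120 - dist) * 2.0)
    return force

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

cart = pygame.Vector2(400, 300)
obstacles = [pygame.Vector2(300, 250), pygame.Vector2(500, 350)]  # stand-ins for LiDAR points

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    target = pygame.Vector2(pygame.mouse.get_pos())  # the mouse acts as the shopper
    force = compute_virtual_force(cart, target, obstacles)
    cart += force * (clock.get_time() / 1000.0)      # step along the resultant force

    screen.fill((0, 0, 0))
    for obs in obstacles:
        pygame.draw.circle(screen, (0, 0, 255), obs, 4)  # obstacles in blue
    pygame.draw.circle(screen, (255, 0, 0), cart, 6)     # cart as a red dot
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```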

I am still behind in progress because we have not been able to integrate the parts together; the Raspberry Pi we ordered still has not arrived. Instead, I have been focusing on getting the obstacle detection to work on its own so there will be less debugging when integrating with the motors. Hopefully our Raspberry Pi will be delivered next week so we can make progress on integration.

Demo

Team Status Report for 11/1

Overall Progress

  • Tested LiDAR sensor
  • Tested H-Bridges for motor control

Risks & Management 

One of the risks for the project is the issue with the H-Bridges. Since they arrived faulty, we were unable to test the motors or make significant progress on the robot’s movement. We bought new H-Bridges to replace the faulty ones and are looking into getting the old ones refunded. If the new H-Bridges don’t work as intended, we are considering buying a robot car kit online to remove the risk of compatibility issues between components, so that our interim demo can show the code’s progress.

Additionally, the Raspberry Pi 5 we borrowed from the ECE inventory had a broken SD card holder, which prevented us from testing the receiver iPhone to Raspberry Pi connection. To manage this, we immediately ordered a new Raspberry Pi 5 and continued working on the software side of the communication. If the new Raspberry Pi is delayed or additional hardware issues arise, we can temporarily use a laptop-based server for testing.

Design Changes & Justification

In addition to using D* Lite for pathfinding, we also decided to test out the virtual force algorithm for local navigation as a way to avoid obstacles. This algorithm uses an attractive vector pointing from the robot to its next position and a repulsive vector pointing from each obstacle to the robot. These vectors are summed to produce the direction of motion. We decided to use this because it provides reactive, real-time obstacle avoidance; D* Lite by itself only provides the long-term goal of getting the cart to the user, rather than the short-term motions.
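In its simplest form, the vector combination can be sketched as follows; the gain constants and influence radius are illustrative placeholders, not tuned values from our implementation:

```python
import math

def virtual_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, influence=1.5):
    """Resultant direction of motion from one attractive and N repulsive vectors.

    robot, goal: (x, y) positions in meters.
    obstacles: list of (x, y) obstacle positions from the LiDAR scan.
    Only obstacles within `influence` meters contribute repulsion.
    """
    # Attractive vector: from the robot toward the goal (the user).
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])

    # Repulsive vectors: from each nearby obstacle toward the robot,
    # growing stronger as the obstacle gets closer.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < influence:
            strength = k_rep * (1.0 / dist - 1.0 / influence) / dist**2
            fx += strength * dx
            fy += strength * dy

    heading = math.atan2(fy, fx)  # direction of motion, in radians
    return fx, fy, heading
```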

Elly’s Status Report for 11/1

I worked on experimenting with the LiDAR sensor to see what outputs it produces. We are using the SLAMTEC RPLIDAR A1. To get started, I used the provided software development kit, which contains starter code that reads the sensor’s output and prints the angle and distance of each detected obstacle relative to the sensor. We plan to use D* Lite for global navigation and the virtual force algorithm for local navigation. I focused on implementing the virtual force algorithm first while referencing a Python implementation of the algorithm. Since we have not had the chance to integrate the LiDAR with the motors or the UWB connection, I tested the algorithm by creating a Pygame simulation to see if the cart could steer clear of obstacles. The simulation shows how the cart moves and changes direction as obstacles appear in its view. In the video, the red dot represents the shopping cart, the yellow line indicates the front of the shopping cart and its direction of motion, and the blue points represent obstacles the LiDAR has sensed.
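Since the RPLIDAR A1 reports each measurement as an angle and a distance, converting a scan into Cartesian obstacle points is a small first step. The sketch below assumes angles in degrees and distances in millimeters; the units should be checked against the SDK’s actual output:

```python
import math

def scan_to_points(scan):
    """Convert LiDAR (angle, distance) pairs to Cartesian (x, y) in meters.

    scan: iterable of (angle_deg, distance_mm) with 0 deg at the sensor front.
    Zero-distance readings indicate no return and are skipped.
    """
    points = []
    for angle_deg, distance_mm in scan:
        if distance_mm <= 0:
            continue  # invalid / no-return measurement
        theta = math.radians(angle_deg)
        r = distance_mm / 1000.0
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```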

I am behind schedule since I haven’t had the opportunity to test the LiDAR with the motors, so I am unsure about the complete implementation and integration details. I plan to help out with other aspects to catch us up to a point where we can integrate all the parts together.

Demo 

Code (can be found in app/simple_grabber)

LiDAR Datasheet 

Python Implementation 

Virtual Force Algorithm