Team Status Report for 12/6

Overall Progress

  • Improved UWB direction
  • LiDAR and obstacle avoidance integration

 

Risks & Management 

We are still fine-tuning the obstacle avoidance algorithm, which has caused us to fall behind on testing. We plan to improve obstacle avoidance as much as possible within a set window of time and spend the remaining time on testing.

 

Design Changes & Justification

A few weeks ago, we switched to GPS because the UWB direction data was not accurate. However, while testing GPS we found that it was also highly inaccurate, and after consulting the professors, we reverted to UWB. We now use a two-anchor system instead of a single-anchor system to measure distance. This has been much more reliable in determining the distance between the user and the cart, and comparing the two anchor distances also gives us the user's direction.

 

Testing

  • Load capacity: Can carry up to 30 lbs and still maintain control
  • Speed: Moves at an average of 3.58 mph
  • Runtime: Calculated that the battery packs supplying power will last more than 45 minutes, with an estimated runtime of roughly 2 hours
  • UWB distance accuracy: Not fully tested, but verified that the UWB data provides an accurate estimate of the distance between the user and the cart
  • Following distance: Cart maintains a following distance of 2 feet: it stops when the user and cart are too close and resumes moving when the distance exceeds 2 feet
  • LiDAR obstacle detection: LiDAR detects obstacles with an average error of 3 cm from the actual distance
  • Obstacle avoidance: Cart is able to stop within 1 foot of obstacles
    • The cart struggles to navigate away from obstacles, out of corners, and away from walls, so we are tuning the obstacle avoidance algorithm
  • Data latency: From receiving the user's distance data to executing the corresponding motor commands takes under 200 ms

Elly’s Status Report for 12/6

This week, I worked on helping to integrate all the subsystems together. During the first half of the week, I helped get the cart to follow the user based on the UWB distance by testing the connection between the phones and verifying that the distance measurements were accurate. Once the measurements matched our expectations, I moved on to integrating the UWB coordinates with the LiDAR and pathfinding algorithm.

The code on our RPi 5 receives the UWB data via UDP, runs a C++ executable as a subprocess to read the nearby LiDAR data, and uses both to calculate the repulsive and attractive forces that determine the cart's general direction. Serial commands are then sent to the Arduino to drive the physical motors. We process the incoming UWB and LiDAR data by pairing their timestamps so the cart is not making decisions based on stale data. Originally, we planned to use only the virtual force algorithm, but after testing, we added “states” to handle when the cart gets stuck. In the driving state, no obstacles are detected and the cart simply follows the user. In the blocked state, the cart spins in place to find a clear space. In the escaping state, a clear space has been found and the cart moves forward to get unstuck. Transitions between these states are based on the forces calculated by the virtual force algorithm.
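As a rough illustration of how the force calculation and the states fit together, here is a minimal Python sketch; the function names, thresholds, and units are placeholders rather than our exact implementation:

    import math

    DRIVING, BLOCKED, ESCAPING = "driving", "blocked", "escaping"

    def virtual_forces(user_dist, user_angle, obstacles):
        """Attractive force pulls toward the user; repulsive forces push away
        from nearby LiDAR points. Angles in degrees, distances in meters (assumed)."""
        fx = user_dist * math.cos(math.radians(user_angle))
        fy = user_dist * math.sin(math.radians(user_angle))
        for obs_dist, obs_angle in obstacles:
            if obs_dist < 1.0:  # only nearby obstacles repel (threshold is a guess)
                strength = 1.0 / max(obs_dist, 0.1) ** 2
                fx -= strength * math.cos(math.radians(obs_angle))
                fy -= strength * math.sin(math.radians(obs_angle))
        return fx, fy

    def next_state(state, fx, fy):
        """Switch states based on the magnitude and direction of the net force."""
        magnitude = math.hypot(fx, fy)
        if state == DRIVING and magnitude < 0.2:    # forces cancel out: cart is stuck
            return BLOCKED
        if state == BLOCKED and magnitude > 0.5:    # spinning found a clearer heading
            return ESCAPING
        if state == ESCAPING and fx > 0:            # path ahead is clear again
            return DRIVING
        return state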

For the remaining tasks, we are still tuning and testing obstacle avoidance, as there are a few cases where the cart does not behave as expected. For example, when trying to escape from being stuck, the robot spins in place to find a clear area, but sometimes the clear area it finds is in the opposite direction of the user, causing the cart to move away from the user.

Rose’s Status Report for 12/6

We have made substantial progress since the last status report. Previously, I attempted to determine the shopper's direction relative to the cart using:

  • A single UWB anchor, which did not allow for direction measurements due to Apple’s privacy restrictions.
  • GPS, which proved too inaccurate during motion tests.

Now, based on a suggestion from Tamal, I have switched to a two-anchor approach that triangulates the direction of the shopper (the tag) using two independent UWB NI sessions. With two anchors, one mounted on the left side and one on the right side of the cart, we are able to obtain much more accurate and reliable data. Each anchor independently ranges to the shopper, and both send their distances to the Raspberry Pi via UDP.
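As a rough sketch of how the Raspberry Pi can turn the two distances into a relative direction (the anchor separation and names below are placeholders, not our exact code):

    import math

    ANCHOR_SEPARATION_M = 0.5  # spacing between the left and right anchors (assumed value)

    def relative_angle_deg(d_left, d_right):
        """Approximate angle of the shopper relative to the cart's forward axis.
        Positive means the shopper is to the cart's right, negative to the left."""
        diff = d_left - d_right  # shopper closer to the right anchor -> positive
        # Clamp so noisy readings cannot push asin outside its valid domain.
        diff = max(-ANCHOR_SEPARATION_M, min(ANCHOR_SEPARATION_M, diff))
        return math.degrees(math.asin(diff / ANCHOR_SEPARATION_M))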

This redesign required:

  • Creating two independent NI sessions on the same device (for the shopper)
  • Synchronizing and comparing left/right distances to obtain relative direction
  • Adjusting packet formats and timestamps for reliability
  • Testing mounting positions and spacing to reduce jitter

Here is a visual of the two-anchor, one-tag system:

Source: https://hackaday.io/project/25406-wild-thumper-based-ros-robot/log/152597-follow-me-part-4

I also ran into numerous issues and had to do some careful isolation and role-pair checking to ensure that the cart anchors do not connect to each other and send that distance to the Raspberry Pi (we only want each anchor's distance to the shopper). I rewrote the session startup flow so that each iPhone broadcasts its role (cartLeft, cartRight, or shopper) and then checks that the pairing is valid (not cartLeft-cartRight) BEFORE creating an NI session, exchanging tokens, and beginning to range. I also limited the cartLeft and cartRight roles to advertising only, never browsing; only the shopper browses for new connections.
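The pairing rule itself is simple; the app is written in Swift, but the logic looks something like this (names are illustrative):

    CART_ROLES = {"cartLeft", "cartRight"}

    def is_valid_pair(my_role, peer_role):
        """Reject cart-to-cart pairings; every session must involve the shopper."""
        if my_role in CART_ROLES and peer_role in CART_ROLES:
            return False
        return "shopper" in (my_role, peer_role)

    # Only create an NI session (exchange tokens, begin ranging) when
    # is_valid_pair(my_role, peer_role) returns True.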

I also helped with improving the stability of the cart and debugging the pathfinding. We paired UWB distance measurements based on their timestamps to reduce the use of stale data, and we are actively working on integrating the LiDAR so that the robot can navigate around obstacles.
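A minimal sketch of what that timestamp pairing can look like, assuming illustrative thresholds:

    MAX_PAIR_SKEW_S = 0.1  # left/right readings must be within 100 ms of each other (assumed)
    MAX_AGE_S = 0.5        # readings older than this are treated as stale (assumed)

    def pair_readings(left, right, now):
        """left and right are (timestamp, distance) tuples from the two anchors."""
        t_left, d_left = left
        t_right, d_right = right
        if now - t_left > MAX_AGE_S or now - t_right > MAX_AGE_S:
            return None  # too old: drop instead of acting on stale data
        if abs(t_left - t_right) > MAX_PAIR_SKEW_S:
            return None  # not from the same moment in time
        return d_left, d_right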

I would say that the UWB subsystem is essentially complete and on schedule. I want to do more work on the mobile app, including implementing cart controls (i.e., start, stop, connect, reconnect, disconnect) on the shopper side. I will also be tuning the obstacle avoidance behavior.

Audrey’s Status Report for 12/6

This week, I worked on debugging the UWB two-anchor system along with LiDAR obstacle detection and pathfinding. Since most of our debugging was a group effort, there was nothing specific I worked on, but rather everything as a whole. I figured out that our mecanum wheels were mounted backwards, which made the robot's movements erratic; for example, when it turned, it stuttered and often did not move at all. With the wheels flipped in the correct direction, the robot's movements are much smoother and its turning is better. I also finalized the wiring for the robot, along with the placement of all components in the basket and on the robot.

Overall, I am not behind, but the team is, so I am helping other team members integrate and debug their subsystems so that our robot can reach final demo status. Next week, or really the next two days, I will be helping with debugging LiDAR obstacle detection and pathfinding, making a mock aisle obstacle course for the demo, and creating a script and possibly slides for the final demo on Monday. 

Audrey’s Status Report for 11/22

This week, I worked on fixing the encoders. I tested them and found that half of the encoders don't work. I believe this is because I accidentally fried them during my first tests by plugging power into ground and ground into power. Since the internal encoders are broken, I decided to scrap encoders for this project: the final demo is coming up, and because the encoders are built into the motors, I would need to order new motors, which would likely not arrive in time or would leave me with little to no time to integrate them. I also helped other teammates with their work, such as testing the UWB direction, understanding the LiDAR test code, and integrating LiDAR obstacle detection into the project for initial testing.

From this project, I’ve learned how to use Arduino microcontrollers, expanded my knowledge of embedded systems, and learned new technical details such as how LiDAR sensors work, Apple frameworks, and pathfinding. These skills were learned through researching them online, watching tutorials, and extensive debugging.

 

Overall, I am a bit behind compared to the Gantt chart. Luckily, due to the allocated slack week and the Thanksgiving break, I will be able to dedicate myself to progressing the project so that it can be ready for the final demo. In this upcoming week, I plan on doing the final wiring for the robot, attaching the shopping basket to the robot, and helping with final testing.

Rose’s Status Report for 11/22

This week, I made a lot of progress on the navigation functionality and determining the direction/relative angle from one iPhone to the other (required for enabling the robot to follow the person). After several attempts to use UWB direction vectors (described in previous status reports), I implemented a GPS-based solution. Each iPhone gathers its own GPS coordinates and compass heading while also receiving the peer device’s GPS data. Using these values, I compute the bearing from my phone to the peer phone and then subtract my own heading to obtain an angle between -180° and 180°. This relative angle is packaged into a JSON message and sent through a UDP pipeline to the Raspberry Pi (currently my laptop during testing). On the Python side, I updated the listener to parse these new messages, pass the values to the robot controller, and also filter incoming packets to ensure that only commands from the intended iPhone are used. I integrated this with the whole robot system, watched the motors turn left and right based on the iPhones’ positions and read the live terminal logs, which show the relative angle updating in real time. A reason I had not used GPS earlier is that it is inaccurate indoors. Based on the terminal logs, it is, on average, accurate within 5-10 meters. For now, this will need to work. 
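A sketch of the bearing and relative-angle math (the real computation runs in Swift on the iPhone; this Python version just shows the formulas):

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from point 1 to point 2, in degrees clockwise from true north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def relative_angle_deg(my_lat, my_lon, my_heading, peer_lat, peer_lon):
        """Angle to the peer relative to my compass heading, normalized to [-180, 180]."""
        angle = bearing_deg(my_lat, my_lon, peer_lat, peer_lon) - my_heading
        return (angle + 180) % 360 - 180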

Throughout this project, the three main things I had to learn were iOS development, UWB, and networking. I learned Swift and how to build and deploy apps to physical iPhones, including Xcode setup, device provisioning, and handling permissions. I also extensively researched how Apple’s UWB and Nearby Interaction frameworks work, how they provide distance and direction, and what limitations they have. On the networking side, I studied Bluetooth, TCP, UDP, and IP to figure out how to connect the iPhones to the Raspberry Pi. I decided on a UDP-based pipeline, which required learning sockets, JSON message formats, and data handling. I also experimented with small Flask servers earlier on. Most of this learning came from Apple documentation, online tutorials, open-source examples, and hands-on debugging on real devices, including reading logs and iterating until everything worked.

I’m currently behind schedule because determining the direction between the two iPhones took much longer than expected due to numerous issues (described in previous status reports). To catch up, I’m focusing on making the GPS-based navigation pipeline more stable instead of trying to debug UWB. My immediate goals are to smooth the relative-angle values, reduce jitter, and clean up noisy GPS input so the robot receives more consistent turning commands.

Next week, I want to add some kind of filtering to the relative angle calculations, either with a moving average or a Kalman filter, and work with my teammates to improve the robot's control logic so movements are smoother and less jerky. If time permits, I may revisit UWB to see whether I can troubleshoot it further or whether its data can be combined with GPS for better accuracy, but the primary objective is to finalize a reliable, functional navigation system based on the GPS progress I have made.
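For the moving-average option, a minimal sketch (the window size is a guess to be tuned; note that a plain average misbehaves near the ±180° wrap-around, where a circular mean would be needed):

    from collections import deque

    class AngleSmoother:
        def __init__(self, window=5):
            self.samples = deque(maxlen=window)

        def update(self, angle_deg):
            """Return the average of the last few relative-angle readings."""
            self.samples.append(angle_deg)
            return sum(self.samples) / len(self.samples)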

Team Status Report for 11/22

Overall Progress

  • UWB direction
  • Began integrating LiDAR into robot movement
  • Began integrating UWB into robot movement

Risks & Management 

The most significant risk that our project faces is the upcoming deadline of the final demo and final presentation at the end of next week. We spent a lot of time trying to figure out how to obtain a direction value/relative angle, which caused us to fall behind on integration and testing. Thus, we plan to spend as much time as possible this upcoming week integrating all of the parts together.

Design Changes & Justification

Since the direction through UWB was not working, we switched to using GPS and the iPhones’ compass headings to calculate the direction of the phones relative to each other. This design change was not ideal, as GPS is not very accurate indoors, but calculating the direction based on GPS is better than not having a direction at all.

Elly’s Status Report for 11/22

This week, I finished setting up the Raspberry Pi by uploading all the code to it and started integrating the subsystems together. I spent some time refining the obstacle detection algorithm to ensure that it turns in the correct direction when there is an obstacle on the left or right. We decided to focus only on obstacles in front of the cart, so I filter out any obstacles detected behind the LiDAR. We also started integrating all the components together: we hooked up the Arduino and the Raspberry Pi to send commands based on the obstacle detection algorithm. However, we are running into issues with the connection, which we plan to smooth out next week.
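A minimal sketch of that front-only filtering (the angle convention and field of view here are assumptions, not our exact code):

    def front_obstacles(scan, fov_deg=180.0):
        """scan is a list of (angle_deg, distance_m) pairs, with 0 degrees pointing
        straight ahead. Keep only points within the forward field of view."""
        half = fov_deg / 2.0
        return [(a, d) for a, d in scan
                if -half <= ((a + 180) % 360) - 180 <= half]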

I am still behind based on the Gantt chart, but I plan to spend as much time as possible finishing the integration of all the parts. Next week, I plan to have the LiDAR fully integrated with the motors and to have very basic obstacle detection working.

Throughout this project, I’ve learned a lot about LiDAR, obstacle detection, and pathfinding algorithms. This was my first time using pathfinding algorithms, and I had the chance to learn the differences between global and local navigation and to explore options like A* and D*, while evaluating the tradeoffs to decide what would work best for our project. To learn this information, I read many articles on pathfinding algorithms and watched videos of people building similar autonomous-following projects to get an idea of what technologies were available for us to use.