Weekly Status Report – Week of 11/24

Pallavi – I spent the first half of this week working with Shreyas to drill the ultrasonic sensor mounts and the Arduino into a secure platform. After setting up the mounts, I re-wired all the sensors and secured the wires to carti-b so there are no visible wires anymore!

The second half of the week was spent setting up the testing and demo configuration in the room. We used poster boards to construct aisles for carti-b to drive through, and we plan on attaching candy canes to the aisles to simulate a grocery store setting. I also worked with Arushi to debug the turning mechanism, since carti-b was moving forward inconsistently.

Arushi – I worked on fixing a few parts of our implementation, namely the color detection and the turning out of aisles. I tried using orange, pink, and yellow instead of the green and red circles to reduce the chance of false positives. Unfortunately, I wasn’t able to tune the HSV values for these colors any better than for green and yellow. As a result, tomorrow I am going to modify our target to be a green circle with a smaller yellow circle inside it, so I can check for concentric circles. I also worked with Pallavi to test the inconsistency of our turns. We identified the problem as an issue with the robot’s direct commands, so tomorrow Shreyas and I are going to change the mechanism: instead of sending a distance for the robot to move, we will turn the motors on and turn them off after a time corresponding to that distance.
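
Roughly, the concentric-circle check could look like the sketch below. It assumes we already have separate HSV masks for the green and yellow circles; the helper names, tolerances, and radius ratio are placeholders, not final code.

```python
import cv2
import numpy as np

def largest_circle(mask):
    """Minimum enclosing circle of the biggest contour in a binary mask, or None."""
    # OpenCV 4.x return signature; OpenCV 3.x returns (image, contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    (x, y), r = cv2.minEnclosingCircle(c)
    return (x, y), r

def is_target(green_mask, yellow_mask, center_tol_px=15, radius_ratio=0.8):
    """Accept a detection only if the yellow circle sits inside the green one:
    roughly the same center and a clearly smaller radius."""
    green = largest_circle(green_mask)
    yellow = largest_circle(yellow_mask)
    if green is None or yellow is None:
        return False
    (gx, gy), gr = green
    (yx, yy), yr = yellow
    concentric = np.hypot(gx - yx, gy - yy) < center_tol_px
    nested = yr < radius_ratio * gr
    return concentric and nested
```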

Shreyas – The first half of the week I spent securing everything on the robot. I drilled holes in the mid-level platform and secured the ultrasonic sensor mounts and the Arduino onto the platform with zip ties. We also drilled the camera onto the wooden pole instead of using tape. I replaced the camera cable with the new, shorter one we bought so that there would be less latency in getting the camera feed and to prevent the cable from getting tangled on anything. I also helped Pallavi build our demo environment with the poster boards. The second half of the week I spent making the obstacle avoidance path planning more accurate. I realized that the framework’s angle and distance commands are not accurate enough, so I implemented the angle turns and distance commands manually using a timer, which helped a lot. I also changed the algorithm so the robot only executes the obstacle avoidance paths if the obstacle is in the way for more than 2 seconds. We decided this is better because most obstacles are humans walking through or another cart and won’t stay in front of the robot for long. This also reduces the lag between the robot executing the obstacle avoidance path and catching up with the human.
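
A minimal sketch of both ideas is below: timed turns instead of the framework’s angle command, and the 2-second persistence filter before running obstacle avoidance. The rotation rate constant and the motor calls (spin/stop) are hypothetical placeholders for our actual interface.

```python
import time

TURN_RATE_DEG_PER_S = 90.0   # placeholder: rotation rate measured at our chosen motor speed
OBSTACLE_HOLD_S = 2.0        # only re-plan if the obstacle persists this long

def timed_turn(robot, angle_deg):
    """Turn by running the motors for a computed time instead of relying on
    the framework's angle command."""
    duration = abs(angle_deg) / TURN_RATE_DEG_PER_S
    robot.spin(clockwise=(angle_deg > 0))   # hypothetical motor call
    time.sleep(duration)
    robot.stop()                            # hypothetical motor call

def should_avoid(obstacle_present, first_seen):
    """Track how long an obstacle has been in the way.
    Returns (run_avoidance, updated_first_seen)."""
    now = time.time()
    if not obstacle_present:
        return False, None                  # obstacle cleared on its own (e.g. a passing shopper)
    if first_seen is None:
        return False, now                   # start the 2-second clock
    return (now - first_seen) >= OBSTACLE_HOLD_S, first_seen
```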

 

Weekly Status Report – Week of 10/6

Pallavi – This week was mostly spent working on the design presentation and design document with Arushi and Shreyas. While delving into the design specifics, I started looking at the details of how we’re going to use the ultrasonic sensors. As a team, we went back and forth about what exactly we want to send from the sensors to our path planning algorithm. Our first idea was to only send a 1 or 0 corresponding to whether or not an obstruction is present for a given sensor. However, if we want to support the design specification of not only sensing obstructions but also moving around them and fixing the path, we need more information about the obstruction. To do this, we need a serial protocol to encode the data, since we want to know both a sensor’s value and which sensor that value came from. Shreyas and I discussed how exactly to encode the data being sent to the Pi. Our idea is to use one byte per sensor: the 3 most significant bits identify the sensor (we have 6 sensors total), and the remaining 5 bits encode the reading. Because we only need a range rather than the exact distance in cm to the obstruction, we can scale the raw sensor value down to the value we send.
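
As a rough illustration of the encoding (not the final protocol), the packing and unpacking could look like this; the 200 cm maximum range is an assumption used only for the scaling example.

```python
MAX_RANGE_CM = 200             # assumed usable range of our ultrasonic sensors
STEP_CM = MAX_RANGE_CM / 31.0  # 5 bits give 32 distance buckets (0-31)

def encode_reading(sensor_id, distance_cm):
    """Pack one reading into a byte: 3-bit sensor ID (0-5) followed by a 5-bit scaled distance."""
    bucket = min(31, max(0, int(distance_cm / STEP_CM)))
    return ((sensor_id & 0b111) << 5) | bucket

def decode_reading(byte):
    """Recover the sensor ID and the approximate distance range in cm."""
    sensor_id = (byte >> 5) & 0b111
    bucket = byte & 0b11111
    return sensor_id, (bucket * STEP_CM, (bucket + 1) * STEP_CM)

# Example: sensor 4 reporting roughly 87 cm
packed = encode_reading(4, 87)    # -> 0b100_01101
print(decode_reading(packed))     # -> (4, (~84 cm, ~90 cm))
```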

Next week, I want to focus on finalizing the exact encoding of the data. I also want to start setting up the physical cart and look into how to use the Roomba’s on-board sensors.

Arushi – This week I worked on the design presentation and doc. To do so, I first made sure to further think through the overall design and how my image processing module fits into it. Specifically, after hearing concerns about our path planning and turning algorithm, I looked into it more. We plan to design our algorithm to handle two types of turns: wide turns, such as a human moving to the other side of an aisle, and sharp turns, such as a human walking out of an aisle and turning a corner. To address the first type of turn, the wider one, we plan to have two circles on the front and back of the target human. Our algorithm will draw a minimum enclosing circle even if the entire circle is not visible. This way the farther-back circle will appear smaller, and we will be able to tell which way the human turned. Seeing only one circle or none indicates that the human has been lost or has made a sharp turn out of the aisle; in that case, we will have the robot make a full turn and stop when a circle is seen. We focused on this this week even though it will not come into play until after the midpoint demo. For next week, we want to work more on building the cart and having the separate modules interact with each other.
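
A rough sketch of the wide-turn check is below. It assumes each circle comes from cv2.minEnclosingCircle as ((x, y), radius), and that the smaller circle is the farther-back one; the left/right mapping is an assumption we still need to validate on real footage.

```python
def wide_turn_direction(circle_a, circle_b):
    """Infer turn direction from the two target circles, or report the human as lost."""
    if circle_a is None or circle_b is None:
        return "lost"        # fall back to the sharp-turn behavior: spin until a circle is seen
    (ax, _), ar = circle_a
    (bx, _), br = circle_b
    # The larger circle is the closer one; the smaller (farther-back) circle's side hints at the turn.
    near_x, far_x = (ax, bx) if ar >= br else (bx, ax)
    return "left" if far_x < near_x else "right"
```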

Another aspect I thought through for future milestones is what happens when the human detection tells the cart to go straight but the obstacle detection doesn’t allow it (say a child gets in between). In this case, what I proposed is that we check the sensors closest to the desired direction and move the robot in the direction of the sensor that reports more clearance from an obstacle. We then repeat the check: where is the human, and does the sensor in that direction allow the cart to move that way safely without hitting the obstacle.
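
A minimal sketch of that fallback, assuming the sensors are indexed left to right and we keep a list of their latest distance readings (the indexing scheme is a placeholder):

```python
def fallback_direction(desired_idx, distances):
    """distances[i] is the latest reading (in cm) from sensor i, ordered left to right.
    If the desired direction is blocked, steer toward whichever neighboring sensor
    reports more clearance."""
    neighbors = [i for i in (desired_idx - 1, desired_idx + 1) if 0 <= i < len(distances)]
    return max(neighbors, key=lambda i: distances[i])
```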

Shreyas – This week was mostly spent preparing the design presentation and the design document, as well as figuring out how to integrate the major subsystems. I created a more elaborate system architecture block diagram that includes the hardware, the software, and the interfaces between the subsystems. I also looked into the benefits of having a dedicated Arduino poll the ultrasonic sensors rather than having the Raspberry Pi handle it directly. Having the Arduino handle it frees CPU time on the Raspberry Pi for the image processing algorithms. The GPIO pins on the Raspberry Pi also support interrupt-driven handling, which we can use to our advantage by having the Robot Control Module only handle obstacles when an interrupt handler fires.
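
On the Pi side, the interrupt-driven handling could look roughly like this with the RPi.GPIO library, assuming the Arduino drives a dedicated pin high when it sees an obstacle; the pin number and the handler are hypothetical placeholders.

```python
import RPi.GPIO as GPIO

OBSTACLE_PIN = 17   # hypothetical GPIO pin the Arduino drives high when an obstacle is detected

def handle_obstacle():
    """Placeholder: read the latest sensor bytes from the Arduino and re-plan the path."""
    pass

def on_obstacle(channel):
    # Only when this fires does the Robot Control Module deal with obstacles;
    # the rest of the Pi's CPU time stays with the image processing.
    handle_obstacle()

GPIO.setmode(GPIO.BCM)
GPIO.setup(OBSTACLE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(OBSTACLE_PIN, GPIO.RISING, callback=on_obstacle, bouncetime=50)
```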

I also confirmed the power supply we will be using for the Raspberry Pi. We are going to buy the Miuzei Raspberry Pi 3B+ battery pack. This is an expansion board that also provides a case, so it’s more robust. We can use this board to power the Arduino as well.

 

All of us – In general, this week we worked on further defining how the separate components will interact with each other, which is discussed in more depth above.

Weekly Status Report – Week of 9/22

Pallavi – This week, I spent most of my time in class looking at microprocessor alternatives with Shreyas and ordering parts. I compiled the parts into a spreadsheet, linked below.

Our conclusion was to go with the NanoPC board because it offers more processing power than the Raspberry Pi while still having good documentation and community support.

Both our Raspberry Pi and Roomba arrived this week, and our ultrasonic sensors should be coming in next week. My goal for next week is to help Shreyas write the serial communication in C++ (versus the Python we will use to get the setup running) and to start working with the ultrasonic sensors to read values.

Arushi – This week I was working from the Grace Hopper Conference. Although I couldn’t meet with my team in person, I made sure to keep myself in the loop. I started writing the program for the image processing part of our project using Python and OpenCV. The program takes in either a video or a live feed from the camera. For every frame, it first applies a blur to prevent noise from being falsely detected. Next, based on the color and intensity of the target image, the program applies a threshold so that only the parts of the image that match remain white and all other areas are black. To further minimize noise, the program uses erosion and dilation on the thresholded frame. Then OpenCV’s minimum enclosing circle function is used for circle detection, and the circle is drawn on the frame. For right now, I tuned the program on a video I found online. This week, I would like to standardize what our target image will be and take videos with it in busy environments to improve the image processing.
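
The pipeline described above looks roughly like the sketch below; the HSV bounds are placeholder values for a generic green target and will change once we standardize the target image.

```python
import cv2
import numpy as np

LOWER = np.array([40, 70, 70])      # placeholder HSV bounds for a green target
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)           # 0 = camera feed; a filename would replay a recorded video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)        # blur to suppress noise
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                  # threshold: target color -> white
    mask = cv2.erode(mask, None, iterations=2)             # erosion removes small speckles
    mask = cv2.dilate(mask, None, iterations=2)            # dilation restores the remaining blob
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), r = cv2.minEnclosingCircle(c)              # minimum enclosing circle of the target
        cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```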

Shreyas – This week I looked into how to properly interface the Raspberry Pi with the Roomba. The Roomba comes with a serial-to-USB cable, which will be used to send motor commands as well as receive sensor data. Since the Roomba does not provide a power supply to peripheral devices, we ordered a power bank that will power the Raspberry Pi. I took a deeper look at the Roomba Open Interface (OI) and found that we should build an abstraction layer for sending commands to the robot, to avoid having to memorize the specific bytes needed. Lastly, I found some online tutorials that should help us code the commands in Python.
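
The abstraction layer could be as simple as the pyserial sketch below. The opcodes shown (128 = Start, 131 = Safe, 137 = Drive) and the baud rate follow our reading of the OI spec but still need to be double-checked against the manual before we rely on them.

```python
import serial   # pyserial
import struct

class RoombaOI:
    """Thin wrapper so the rest of our code never deals with raw opcode bytes."""

    def __init__(self, port="/dev/ttyUSB0", baud=115200):
        self.ser = serial.Serial(port, baud, timeout=1)

    def start(self):
        # 128 = Start, 131 = Safe mode (opcodes as we read them in the OI spec)
        self.ser.write(bytes([128, 131]))

    def drive(self, velocity_mm_s, radius_mm):
        # 137 = Drive: velocity and radius as signed 16-bit big-endian values
        self.ser.write(bytes([137]) + struct.pack(">hh", velocity_mm_s, radius_mm))

    def stop(self):
        self.drive(0, 0)
```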

I also found a camera for us to use that is compatible with both the Raspberry Pi and the NanoPC board.

Parts comparison: https://docs.google.com/spreadsheets/d/1tNtKo1mobMqS0nsFhUPZj27l5SVZSLcxxb6ThgLDm0o/edit?usp=sharing