Weekly Status Report – Week of 11/24

Pallavi – I spent the first half of this week working with Shreyas to drill the ultrasonic sensor mounts and the Arduino into a secure platform. After setting up the mounts, I re-wired all the sensors and secured the wires to Carti-B so there are no visible wires anymore!

The second half of the week was spent setting up the testing and demo configuration in the room. We used poster boards to construct aisles for Carti-B to go through, and we plan on attaching candy canes to the aisles to simulate a grocery store setting. I also worked with Arushi to debug the turning mechanism, since Carti-B has been moving forward inconsistently.

Arushi – I worked on fixing a few parts of our implementation, namely the color detection and the turning-out-of-aisle behavior. I tried using orange, pink, and yellow instead of the green and red circles to reduce the chance of false positives. Unfortunately, I wasn’t able to tune the HSV values for these colors any better than for green and yellow. As a result, tomorrow I am going to modify our target to be a green circle with a smaller yellow circle inside it, so I can check for concentric circles. I also worked with Pallavi to test the inconsistency of our turns. We identified the problem as issues with the robot’s direct commands, so I am going to go in with Shreyas tomorrow to change the mechanism so that instead of sending a set distance for the robot to move, we turn the motors on and turn them off after a time corresponding to that distance.
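A minimal sketch of the concentric-circle check I have in mind is below. It assumes the existing color pipeline already hands back the (center, radius) of the largest green blob and the largest yellow blob; the tolerance values are placeholders, not final numbers.

import math

def is_concentric_target(green, yellow, center_tol_px=15, ratio_max=0.8):
    """green, yellow: ((x, y), radius) tuples from the color detector, or None if not found."""
    if green is None or yellow is None:
        return False
    (gx, gy), g_radius = green
    (yx, yy), y_radius = yellow
    centers_close = math.hypot(gx - yx, gy - yy) < center_tol_px  # roughly the same center
    nested = y_radius < ratio_max * g_radius                      # yellow is the smaller, inner circle
    return centers_close and nested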

Shreyas – The first half of the week I spent securing everything on the robot. I drilled holes in the mid-level platform to secure the ultrasonic sensor mounts and the Arduino onto it with zip ties. We also drilled the camera onto the wooden pole instead of using tape. I also replaced the camera cable with the new, shorter one we bought so that there would be less latency in getting the camera feed and to prevent the cable from getting tangled on anything. I also helped Pallavi build our demo environment with the poster boards. The second half of the week I spent making the obstacle avoidance path planning more accurate. I realized that the framework’s angle and distance commands are not accurate enough, so I implemented the angle turns and distance moves manually using a timer. This helped a lot. I also changed the algorithm so the robot only executes the obstacle avoidance paths if the obstacle is in the way for more than 2 seconds. We decided this is better because most obstacles are humans walking through or another cart, and they won’t stay in front of the robot for long. This also reduces the lag between the robot executing the obstacle avoidance path and catching back up with the human.
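A rough sketch of the timed-command approach and the 2-second persistence check is below. It assumes a drive_direct(right, left)-style call like the one in the pyCreate2 library we use; the speeds and the exact bookkeeping are placeholders for illustration.

import time

TURN_SPEED = 100          # mm/s wheel speed while pivoting (placeholder)
DRIVE_SPEED = 150         # mm/s wheel speed while driving straight (placeholder)
OBSTACLE_PERSIST_S = 2.0  # obstacle must block the path this long before we re-route

def timed_turn(bot, seconds, clockwise=True):
    # Pivot in place by running the wheels in opposite directions for a fixed time.
    right, left = (-TURN_SPEED, TURN_SPEED) if clockwise else (TURN_SPEED, -TURN_SPEED)
    bot.drive_direct(right, left)
    time.sleep(seconds)
    bot.drive_direct(0, 0)

def timed_forward(bot, distance_mm):
    # Drive straight for a time derived from the requested distance and the wheel speed.
    bot.drive_direct(DRIVE_SPEED, DRIVE_SPEED)
    time.sleep(distance_mm / DRIVE_SPEED)
    bot.drive_direct(0, 0)

def obstacle_persists(first_seen_at):
    # Only treat the obstacle as real if it has been in the way for more than 2 seconds.
    return first_seen_at is not None and (time.time() - first_seen_at) > OBSTACLE_PERSIST_S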

 

Weekly Status Report – Week of 11/10

Arushi – This week I didn’t get a chance to work on Cart-i B too much, which is why I spent more time last week. I worked with Shreyas to change the PID to be based on the radius of the target rather than the area, to prevent the robot from jerking as much when we step close to it and it has to move backwards. This really smoothed out its forward and backward motion. Additionally, we worked on further refining the aisle turn code. Over the weekend, I plan on going in with Shreyas to further test and refine the obstacle detection and path planning.
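As a rough illustration of the change, the distance error now comes from the target's radius rather than its area; the desired radius below is a placeholder that we would calibrate against a real following distance.

DESIRED_RADIUS_PX = 60.0  # radius when the human is at the ideal following distance (placeholder)

def distance_error(measured_radius_px):
    # Positive when the target looks too small (human too far) -> drive forward;
    # negative when it looks too big (human too close) -> back up.
    return DESIRED_RADIUS_PX - measured_radius_px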

Pallavi – This week I mainly worked with Shreyas to fine-tune the obstacle detection now that it is integrated with the entire Carti-B structure. One major thing we changed was the distance threshold used to fire an interrupt and indicate an obstacle. We realized our threshold was too large, so the robot was detecting the human it was tracking as an obstacle and not moving. Another change to our design was the information we actually send from the Arduino to the Raspberry Pi. Before, we were sending the actual distances for all four of our sensors. We realized, however, that our path planning algorithm does not need the actual distances, only which sensors have an obstacle violating the threshold. We achieve this with a 4-bit value initialized to 0000: when any sensor violates the obstacle threshold, we check each sensor and set its bit to 1 if it has an obstacle in the violation range. So if sensors 1 and 3 had violations, we would send 1010. This removes extraneous information that is ultimately not used in our final code.
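A small sketch of the 4-bit value is below. The bit ordering (sensor 1 as the most significant bit) is an assumption chosen to match the "sensors 1 and 3 -> 1010" example above.

def encode_violations(violations):
    # violations: list of 4 booleans, index 0 = sensor 1. Returns e.g. 0b1010.
    mask = 0b0000
    for i, hit in enumerate(violations):
        if hit:
            mask |= 1 << (3 - i)
    return mask

def sensors_triggered(mask):
    # Return the 1-indexed sensors whose bit is set.
    return [i + 1 for i in range(4) if mask & (1 << (3 - i))]

assert encode_violations([True, False, True, False]) == 0b1010
assert sensors_triggered(0b1010) == [1, 3]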

Next week, I will be out of town for Thanksgiving so I will not be able to work on Carti-B. Happy Thanksgiving!!

Shreyas – This last week I worked with Arushi on re-configuring the PID to use the radius instead of the area. This made moving backwards a lot smoother. I also worked with Pallavi to change the format in which data is sent from the Arduino to the Pi. I got the interrupt handler working so that it properly sets a global flag indicating whether or not an obstacle exists. Based on that flag, the loop logic has the robot either move normally, stop, or execute a re-route path to get around the obstacle. After my basic testing on Thursday, it works pretty well, but more testing needs to be done to tune how much the robot turns to avoid the obstacle and how far it moves ahead before looking for the human again. We also decided how we’re going to secure the basket to the robot and what kind of basket to get. We also decided we’re going to buy a cheap medium-size black jacket and just tape the green and red circles onto it. This weekend Arushi and I will work on finalizing the obstacle detection protocol.
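For reference, the shape of the interrupt handler and flag logic looks roughly like the sketch below; the GPIO pin number is a placeholder and the loop is heavily simplified.

import RPi.GPIO as GPIO

OBSTACLE_PIN = 17           # placeholder BCM pin wired to the Arduino's alert line
obstacle_detected = False   # global flag read by the Robot Control Module's loop

def obstacle_isr(channel):
    global obstacle_detected
    obstacle_detected = True  # cleared by the loop once it has stopped or re-routed

GPIO.setmode(GPIO.BCM)
GPIO.setup(OBSTACLE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(OBSTACLE_PIN, GPIO.RISING, callback=obstacle_isr)

# Simplified loop shape:
#   if obstacle_detected: stop, execute the re-route path, then clear the flag
#   else: follow the human normally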

Weekly Status Report – Week of 11/3

Arushi – This week I worked with Shreyas and Pallavi to solidify our plans for moving forward, which was important for us to do, especially for the midpoint demo. I made a doc that covers what we have done so far and documents our plans, both logistically and in terms of what our algorithms will do over the upcoming weeks (the doc is included below). We finalized how we are accomplishing turns, using a red target on the arms and acting accordingly (explained in more detail in the doc). So, I spent this week modifying the algorithm. I got the robot to stop once the red circle is seen, to handle the case where a person stops to pick something up from an aisle. I’ve also begun writing the code that will be used when the human turns out of the aisle. I thought it might take longer to test this with the robot because the plan is to finish obstacle detection first, but I knew I would at least have the image processing code tested on my laptop and would then only have to modify the robot commands. Shreyas and I went in and incorporated the robot controls so that as soon as the robot sees a red circle it stops. Moreover, we also began implementing and testing the robot following the human’s turn out of the aisle; for now, we found the relationship between the radius of the circle and the distance the robot would have to move forward. I also went in this week to work with Shreyas to build a platform for the sensors, so that Pallavi can start testing the sensors and the path planning code. In general, this week I had more time, so I was able to go in, work on the turns, and help build the platform, but I know next week will be busier for me, so I’ll focus on refining the performance and calibration.

Shreyas – This week Carti-B got a major upgrade in performance and looks. Our first goal was getting ready for the midpoint demo by making the doc below and having enough code working. After the midpoint demo, Arushi and I used the makerspace to build the platform that the sensors will sit on. Then I made Carti-B more aesthetically pleasing and portable by neatly arranging and securing all the individual systems. This helps prevent wires from getting in the way of the wheels and keeps the Raspberry Pi from shifting on the base. To secure the USB cable on the iRoomba, I followed Sullivan’s advice and found spring grippers that apply tension on the cable to keep it connected. This week I also got the interrupt handling working between the Arduino and the Pi. The Arduino sends a signal to a Raspberry Pi pin when an obstacle is within the threshold, and the Pi uses an interrupt service routine to quickly set a flag in the Robot Control Module indicating that an obstacle was detected. This method is far more efficient than continuously polling the Arduino input manually, because the Pi does not have to waste CPU cycles reading the serial data until the ISR is called. I also set up the serial connection interface so that the Arduino sends the 4 distance values and the Pi parses the line and extracts these 4 values into an array for later use. The next piece of work here is handling the different cases (which sensor was activated and what path the robot should take). My goal is to finish this by next Wednesday. Lastly, Arushi and I discovered that because of how I organized the camera cable, the camera feed is sometimes disturbed. To fix this, I put in an order request for a shorter cable.
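The Pi-side parsing is roughly the sketch below, assuming the Arduino prints the four distances as one comma-separated line per reading; the port name and baud rate are placeholders.

import serial

ser = serial.Serial('/dev/serial0', 9600, timeout=1)  # placeholder port/baud

def read_distances():
    # Returns the four ultrasonic distances (cm) as a list of ints, or None on a malformed line.
    line = ser.readline().decode('ascii', errors='ignore').strip()
    parts = line.split(',')
    if len(parts) != 4:
        return None
    try:
        return [int(p) for p in parts]
    except ValueError:
        return None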

Pallavi – This week I mainly focused on setting up the ultrasonic sensors on the Carti-B setup we already have. At the beginning of the week we noticed some inconsistency in the readings from the sensors: they would only read a distance when the obstacle was moving. The issue came down to two things. First, we were using ourselves to test the readings, and human beings are apparently inconsistent obstacles, since clothing often absorbs the ping coming out of the sensor. Second, the code I was using with the NewPing library was taken from example code on their website. I read on some forums that this example code gives inconsistent results, and they pointed to different starter code; once I switched to that, we were able to get consistent results.

After I was able to get consistent results, I set up the sensors on Carti-B thanks to the platform Arushi and Shreyas created. The new setup is shown below:

The sensors are held up with tape, so we are going to purchase mounting kits. We also want to purchase a case for the Arduino Uno. Next week, we want to test the integration of the sensor data with our path planning and hopefully have that finished by Wednesday.

 

Midpoint Demo – Team 5 (Cart-i B)

Weekly Status Report – Week of 10/27

Arushi – I spent this week focusing on setting up the Raspberry Pi with its battery so that it can run without being plugged into the computer, and on trying to smooth out the PID. After trying many different things, we identified that if the robot is only programmed to follow forwards and backwards, the control is very smooth. However, once we add in turns, the robot becomes jerkier, especially when I move towards the robot and it should back up. I think this is the case because we always prioritize pivoting over moving straight. Specifically, if the difference between the center of the frame and the center of the circle is larger than a constant threshold, the robot focuses on turning. However, when I take a step towards the robot, because I am so close, even a small shake can make the robot think that I am in fact leaving the center of the frame. Thus, we came up with a solution: the turn threshold should depend on the target’s distance (as estimated from its area). We tried implementing this but still ran into similar issues. So, we will try to gather more data points to identify what this threshold should be and hope to make progress on this by Monday’s checkpoint.

Additionally, we realized that we currently always prioritize turning over moving forward, which is not good in certain cases. Thus, we changed our code so that if both are needed, it gives priority to whichever has the greater error (for example, turning) and then works on the other one (moving forward). One consideration was to have the robot do both in a "sweeping"-like motion, but that is not possible with the Roomba. Additionally, Shreyas and I started working on implementing a feature where, if the human moves too quickly and leaves the frame on the right side, the robot will spin 360 degrees to the right to find the human, and vice versa.
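The priority logic is roughly the sketch below; how the two errors are normalized against each other is an assumption for illustration, not necessarily our final tuning.

def choose_action(x_error, z_error, turn_threshold, drive_threshold):
    # Pick one motion at a time: whichever need is relatively larger goes first.
    need_turn = abs(x_error) > turn_threshold
    need_drive = abs(z_error) > drive_threshold
    if need_turn and need_drive:
        # Compare each error relative to its own threshold so the two are comparable.
        if abs(x_error) / turn_threshold >= abs(z_error) / drive_threshold:
            return "turn"
        return "drive"
    if need_turn:
        return "turn"
    if need_drive:
        return "drive"
    return "stop"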

Pallavi – I spent a bit of this week continuing to work with the ultrasonic sensors. I was trying to see if there was any interference between the sensors given their orientation on the board. I spent the rest of the week helping Arushi and Shreyas remove the jerky movement in preparation for the demo next week. Arushi covered a lot of the changes we tried in her part of the post. Something we both realized while working in the lab on Friday is that if we try to display the live video stream and run our robot code at the same time, the robot lags too much, so we need to disable the stream when we want to test the smoothness of our iRoomba. We are also working on pseudo-code for our path planning and on our final schedule for the remaining weeks of the project.

This coming week, after our mid-point demonstration, I want to focus on adding more to our iRoomba platform to stabilize the camera and also integrate the platform upon which we will place the sensors, as our next goal is to integrate object detection into our design.

Shreyas – In the earlier half of this week I set up SSH on the Raspberry Pi so that we can finally control it remotely. This also adds the luxury of having more than one of us use the Raspberry Pi at once for testing and development. In the latter half of the week, Arushi and I focused on making the movement smoother by changing our algorithms slightly and adding features (as detailed in her part above). During our testing, on one run the iRoomba went wild and disobeyed any commands we gave it. We realized a quick fix for this is to change the iRoomba to safe mode. Before, we had it in full control mode, which prevents it from stabilizing itself and shutting off. Now in safe mode, it auto-stops if it loses stability or if we press the buttons on it. Next week we should also try to make the USB cable more secure on the iRoomba, because if that disconnects during a run we could again lose control. This week I also tried adding obstacle detection using the iRoomba’s sensors. I have the code ready and it works independently of the movement code, but when I combine the two, the robot’s motion becomes very unstable. This is something I will work on more next week, as we thought it would be better to focus on the general movement control for the demo.

Here’s a video of our current progress with the Carti-B robot:

 

Weekly Status Report – Week of 10/20

Arushi – This week my goals consisted of combining my image processing code with Shreyas’s PID/robotics program so we could start testing our project and refining it for our midpoint demo. We initially had difficulties with a lag in our camera, which was fixed by changing a while loop in our code to a for loop that sets a parameter for the Pi camera. Once we fixed that, we were able to test rotations in place by moving our target in a circle around the robot. We had difficulties, however, with the image processing sometimes picking up the top of the door to the room as having HSV values within the threshold range for our target. We tried adjusting the HSV threshold accordingly, but were still having trouble. As possible solutions, I wrote programs that (1) used an RGB threshold instead of HSV, which had trouble identifying the circle under a shadow, and (2) changed our target to a chessboard and used an OpenCV built-in for finding chessboard corners. The chessboard was being picked up accurately, but similar to other HOG-based algorithms, it was too slow when implemented on the Pi. Right as we started to print a new target whose HSV values we could specify when coloring it, Shreyas tried a threshold that worked. We were then able to test the individual aspects of in-place turns and moving straight, and at the end of class we were able to put the parts together. The goal for next week is to further smooth out the PID, connect the Pi to a battery source, and then test in a variety of lighting conditions to ensure that the image processing does not pick up parts of the image that are not the target.

Shreyas – This week I worked with Arushi on integrating different image processing algorithms on the Raspberry Pi. To avoid repeating details described above, I will talk more about the Robot Control Module side of it. I started with tuning the PID for the X displacement to make sure rotations were smooth. After this was tuned and a better image processing technique was found, I added logic to find the Z displacement based on the area of the target. Then I got the PID controller to also work on the Z displacement, so movement forward and backward was also smooth. The last and most challenging part on my end was figuring out how to move the robot in both the Z and X directions. Since I am sending raw motor power commands based on the output of the PID controller, it is currently not possible to move in the Z and X directions at the same time. The main limitation right now is that the PID controller calculates the motor values independently of each other. Next week I am going to work on having the PID controller give a combined output for the Z and X displacements so that movement in both degrees of freedom is possible. However, another concern is that this may damage the motors. Since the iRoomba is a pivot-based robot, sending two very different power values to the left and right motors (for curved motion) would put stress on the motors; for slightly different magnitudes this shouldn’t be a problem. This is something that will take careful testing. Next week my goal is to integrate the iRoomba’s sensors and tune the PID for bi-directional movement.
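The combined output I have in mind looks roughly like the sketch below: mix the forward (Z) and rotation (X) PID outputs into left/right wheel powers, and limit how far apart the two wheels can be so curves stay gentle on the motors. All constants are placeholders.

MAX_POWER = 200    # per-wheel cap, in the same units as the PID outputs (placeholder)
MAX_SPREAD = 80    # max allowed left/right difference while curving (placeholder)

def mix_outputs(z_out, x_out):
    # z_out: forward/backward PID output; x_out: rotation PID output (positive = turn right).
    left = z_out + x_out
    right = z_out - x_out
    spread = left - right
    if abs(spread) > MAX_SPREAD:
        # Pull the wheels back toward each other so the curve is not too aggressive.
        excess = (abs(spread) - MAX_SPREAD) / 2
        left -= excess if spread > 0 else -excess
        right += excess if spread > 0 else -excess
    clamp = lambda v: max(-MAX_POWER, min(MAX_POWER, v))
    return clamp(left), clamp(right)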

Pallavi – This week my goal was to get the serial communication between the Arduino and the Raspberry Pi set up. Unfortunately, I did not complete my goal because I ran into some issues setting up the serial communication on the Raspberry Pi Zero. I’m running a simple Python script on the Pi just to check whether I can write to the terminal. When I try this, however, the program simply hangs. I looked at this link (https://www.raspberrypi.org/forums/viewtopic.php?t=180951) and made sure to implement all the solutions discussed. However, my Raspberry Pi is still not printing to the terminal. I’m going to work with Shreyas on Monday to figure out what the issue is.

Other than setting up the Pi, I looked into TX/RX (UART) communication versus USB communication between the Arduino and the Raspberry Pi. I ended up going with TX/RX communication because it has less overhead than USB. I also had to set up a simple voltage divider to properly connect the Raspberry Pi and the Arduino, since the Raspberry Pi has 3.3V TX and RX pins while the Arduino’s pins are 5V.
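As a quick sanity check of the divider math, with example resistor values (any pair with the same 1:2 ratio would work; these are not necessarily the ones on our board):

R1, R2 = 1_000, 2_000            # ohms: Arduino TX -> R1 -> Pi RX node -> R2 -> GND
v_rx = 5.0 * R2 / (R1 + R2)      # ~3.33 V, close enough to the Pi's 3.3 V logic level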

This week, I hope to get the serial output working on the Pi to ensure I can test my Arduino code. I also hope to start integrating my sensors with the combined iRoomba and object detection work Shreyas and Arushi have been doing this week.

Weekly Status Report – Week of 10/13

Arushi – This week, I looked more into homography using checkerboards to further understand the process for when we implement turns. Basically, based on what I learned, we will use the angle between the circles’ centers to determine which way the human is turning. Furthermore, we established that, in general, when it comes to turns the cart will only move if the human (and thus the closest circle) is getting smaller, to prevent the cart from turning when a human pivots to pick something off the shelf. I also worked on porting my code over to the Raspberry Pi and combining the image processing with Shreyas’s robot control/PID module. Because we had no way to attach the camera to the robot yet, we tested by moving the target in front of the stationary camera, and the robot moved accordingly (in terms of general direction). The next step is to actually attach the camera to the robot and test it, which is the plan for next week. To make that possible, Ben helped us make a wooden base for a post to hold the camera.

Pallavi – This week, I continued to work on the ultrasonic sensors. In order to start working on the serial connection between the Arduino and the Pi, I first wanted to set up the USB connection between the two. Because Arushi and Shreyas were mainly focused on integrating the iRoomba and the OpenCV code, the main Raspberry Pi was in use. Instead, I got a leftover Pi from a project from a previous semester (thanks Ben!) and spent a good amount of time setting it up. Because the leftover Pi only has one USB port, setting it up was difficult since I could only use the mouse or the keyboard at any one time. I’ve got the basics set up, but I plan on going into lab early on Monday to set up SSH on the Pi so I can connect the Arduino Uno to the USB port. With that, my goal is to have the entire design (detecting values, sending the interrupt when an obstacle is detected, and the serial communication) done by the time I leave lab on Monday.

Shreyas – This week was mostly about making the core of the Robot Control Module. This included adding PID control, so I had to figure out the right constants and tune the controller to our iRoomba. I also added the interfacing between the Robot Control Module and the Image Processing Module: the PID controller takes the center of the target found by the image processing module and sends the necessary motor commands to the iRoomba. The X displacement currently works, so the next step is the Z displacement. I will also be adding obstacle detection using the iRoomba’s sensors in the coming week. Another issue I had to work on was optimizing the connection to the Raspberry Pi camera. Originally the FPS was very low, but by figuring out another method to get the frames we made it a lot faster (around 30 FPS now). Now the main latency comes from the image processing module.
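The faster frame grab is likely something along the lines of the picamera video-port capture below; the resolution and framerate shown are placeholders rather than our exact settings.

import time
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (320, 240)   # placeholder
camera.framerate = 30            # placeholder
raw = PiRGBArray(camera, size=camera.resolution)
time.sleep(0.1)                  # give the sensor a moment to warm up

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array          # numpy BGR frame, ready for OpenCV
    # ... hand `image` to the image processing module ...
    raw.truncate(0)              # clear the buffer before the next frame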

All – next week we plan to focus on testing our initial milestone of the cart following the human down a relatively straight path and stopping when the human does. We expect we will have to change some of the color detection constants based on the camera, as well as the PID constants, as we continue to test.

Weekly Status Report – Week of 10/6

Pallavi – This week was mostly spent working on the design presentation and design document with Arushi and Shreyas. From delving more into the design specifics, I started looking into the details of how we’re going to use the ultrasonic sensors. As a team, we went back and forth about what exactly we want to send from the sensors to our path planning algorithm. Our first idea was to send only a 1 or 0 corresponding to whether or not an obstruction is there for a given sensor. However, if we want to support the design specification of not only sensing obstructions but also moving around them and fixing the path, we want more information about the obstruction. To do this, we need to come up with a serial protocol to encode the data, since we want to know both the sensor’s value and which sensor is associated with which value. Shreyas and I discussed how exactly we want to encode the data being sent to the Pi. Our idea is to use one byte per sensor: the 3 MSBs identify the sensor, since we have 6 sensors total, and the remaining 5 bits encode the data. Because we don’t need the exact distance in cm from the obstruction, just a range, we can figure out a scaling from the sensor reading to the value we send.
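A small sketch of the byte layout is below; the 10 cm bucket size for the scaled range is an assumption for illustration, not a decided value.

CM_PER_BUCKET = 10  # placeholder scaling for the 5-bit range field

def encode(sensor_id, distance_cm):
    # Pack one reading into a byte: [sensor ID : 3 bits][scaled distance : 5 bits].
    bucket = min(distance_cm // CM_PER_BUCKET, 0b11111)  # clamp to what 5 bits can hold
    return ((sensor_id & 0b111) << 5) | bucket

def decode(byte):
    # Unpack a byte back into (sensor_id, approximate distance in cm).
    return byte >> 5, (byte & 0b11111) * CM_PER_BUCKET

assert decode(encode(3, 87)) == (3, 80)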

Next week, I want to focus on figuring out the exact encoding of data. I also want to start setting up the physical cart and looking into how to use the on-board iRoomba sensors.

Arushi – This week I worked on the design presentation and doc. To do so, I first made sure to further think through the overall design and how my image processing module fits into it. Specifically, after hearing concerns about our path planning and turning algorithm, I looked into it more. We plan to characterize our algorithm to address two types of turns: wide turns, such as a human moving to the other side of an aisle, and sharp turns, such as a human walking out of an aisle and turning a corner. To address the first type of turn, the wider one, we plan to have two circles, one on the front and one on the back of the target human. Our algorithm will draw a minimum enclosing circle regardless of whether the entire circle is seen. This way the farther-back circle will appear smaller, and we will be able to tell which way the human turned. Seeing only one circle, or none, indicates that the human has been lost or has made a sharp turn out of the aisle; in this case, we will have the robot make a full turn and stop when a circle is seen. We focused on this for this week, although it will not come into play until after the midpoint demo. So, for next week we want to work more on building the cart and having the separate modules interact with each other.

Another aspect I thought through for future milestones is when the human detection tells the cart to go straight but the obstacle detection doesn’t allow it (say a child gets in between). In this case, what I proposed is that we check the sensors closest to the desired direction and move the robot in the direction of the sensor that reports more clearance from an obstacle. Then we check again where the human is and whether the sensor in that direction allows the cart to move that way safely without hitting the obstacle.
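The decision itself could be as simple as the sketch below; the sensor names and clearance threshold are illustrative rather than final.

def pick_detour(left_distance_cm, right_distance_cm, clearance_cm=50):
    # Steer toward whichever neighboring sensor reports more free space, if either side is clear.
    if max(left_distance_cm, right_distance_cm) < clearance_cm:
        return "stop"  # neither side is safe; wait for the obstacle to clear
    return "left" if left_distance_cm > right_distance_cm else "right"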

Shreyas – This week was mostly spent preparing the design presentation and the design document, as well as figuring out how to integrate the major subsystems. I created a more elaborate system architecture block diagram that includes the hardware, software, and interfaces between the subsystems. I also looked into the benefits of having a dedicated Arduino handle polling the ultrasonic sensors instead of having the Raspberry Pi handle it directly. Having the Arduino handle it will prevent wasting CPU time on the Raspberry Pi that can be used for the image processing algorithms. The GPIO pins on the Raspberry Pi also support interrupt-driven handling; we can use this to our advantage by having the Robot Control Module only handle obstacles when an interrupt handler fires.

I also confirmed the power supply we will be using for the Raspberry Pi. We are going to buy the Miuzei Raspberry Pi 3B+ battery pack. This is an expansion board that also provides a case, so it’s more robust. We can also use this board to power the Arduino.

 

All of us – in general, this week we worked on further defining how the separate components will interact with each other, which is discussed in more depth above.

Weekly Status Report – Week of 9/29

Arushi – This week I worked with Pallavi to identify what our target image will be, and then we took model videos with the target attached to my back. Based on the videos, I was able to iterate on my general computer vision algorithm so it more accurately draws a circle around the target image and prints its parameters. The next step on this end is to take a video where the target stays still and the camera moves up to the closest safe distance for the cart, so we can calibrate how to use the target radius to estimate the distance between the cart and the human. I also began designing a model of how we are going to place our hardware on the iRobot Create. For next week, I plan on working with my teammates to figure out how we will interface our separate components, now that we have proofs of concept for all of them.

Shreyas – This week I was primarily in charge of setting up the Raspberry Pi and the iRobot. Getting the Pi on the CMU wifi was a slight bottleneck, but eventually it connected. I installed the basic libraries and text editors we will use later. Then I looked into the pyCreate2 Python library, which provides a good abstraction layer for interfacing with the iRobot. I tested sending basic movement commands and they seemed to work. I started testing reading the sensor data from the iRobot, but more elaborate testing needs to be done.
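For reference, basic pyCreate2 usage looks roughly like the sketch below; the serial port path and speeds are placeholders, and the exact sensor fields depend on the library version.

import time
from pycreate2 import Create2

bot = Create2('/dev/ttyUSB0')   # placeholder serial-over-USB port
bot.start()
bot.safe()                      # safe mode keeps the built-in cliff/wheel-drop protections

bot.drive_direct(100, 100)      # right and left wheel speeds in mm/s -> drive straight
time.sleep(1.0)
bot.drive_direct(0, 0)          # stop

sensors = bot.get_sensors()     # packet of on-board readings (bumps, cliffs, encoders, ...)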

Pallavi – This week I worked with Arushi to take some videos to train our OpenCV model. After helping with that, I mainly focused on the ultrasonic sensors. I started by individually testing each sensor, and I found one sensor that wasn’t reading valid responses. I then started setting up the sensors in the orientation we want to use for our final design. Next week, I will work with Arushi and Shreyas to get a better idea of how we will integrate all of our parts, especially now that we have a bit of each individual part working. I also want to look more into the library functions actually used to detect objects. Now that I have the general setup taken care of, I want to implement our detection algorithm and test it.

All of us will be working on the design presentation at the beginning of this week and on the design proposal concurrently. Working on the presentation will hopefully help us with our other goal of figuring out exactly how we want all of our individual parts to fit together.

Weekly Status Report – Week of 9/22

Pallavi – This week, I spent most of my time in class looking at microprocessor alternatives with Shreyas and ordering parts. I compiled the parts into a spreadsheet, linked below.

Our conclusion was to go with the NanoPc board because it offers a gain in processing power over the Raspberry Pi while still having good documentation and community support.

Both our Raspberry Pi and Roomba arrived this week, and our ultrasonic sensors should come in next week. My goal for next week is to help Shreyas with writing the serial communication in C++ (versus the Python we will use to get the setup running) and to start working with the ultrasonic sensors to read values.

Arushi – This week I was working from the Grace Hopper Conference. Although I couldn’t meet with my team in person, I made sure to keep myself in the loop. I started writing the program for the image processing part of our project using Python and OpenCV. The program takes in either a video or a feed from the camera. For every frame, it first applies a blur to prevent noise from being falsely detected. Next, based on the color and intensity of the target image, the program applies a threshold so that only the parts of the image that match remain white and all other areas are black. To further reduce noise, the program uses erosion and dilation on the thresholded frame. Then the OpenCV minimum enclosing circle function is used and the circle is drawn on the frame. For now, I tuned the program based on a video I found online. So for this week, I would like to standardize what our target image will be and take videos of it in busy environments to improve the image processing.
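A condensed sketch of that per-frame pipeline is below; the HSV bounds are placeholders, since the real ones depend on the target we standardize on.

import cv2
import numpy as np

LOWER = np.array([40, 80, 80])     # placeholder HSV bounds for the target color
UPPER = np.array([80, 255, 255])

def find_target(frame):
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)            # suppress pixel noise
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                      # keep only target-colored pixels
    mask = cv2.erode(mask, None, iterations=2)                 # drop small speckles
    mask = cv2.dilate(mask, None, iterations=2)                # restore the blob's size
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,    # OpenCV 4 return signature
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)           # center -> turning, radius -> distance
    return (int(x), int(y)), radius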

Shreyas – This week I looked into how to properly interface the Raspberry Pi with the Roomba. The Roomba comes with a serial-to-USB cable which will be used to send motor commands as well as receive sensor data. Since the Roomba does not provide a power supply to peripheral devices, we ordered a power bank that will power the Raspberry Pi. I took a deeper look at the Roomba’s Open Interface (OI) and found that we should set up an abstraction layer for sending commands to the robot, to avoid having to memorize the specific bytes needed. Lastly, I found some online tutorials that should help us code the commands in Python.

I also found a camera for us to use that is compatible with both the Raspberry Pi and the NanoPc board.

Parts comparison: https://docs.google.com/spreadsheets/d/1tNtKo1mobMqS0nsFhUPZj27l5SVZSLcxxb6ThgLDm0o/edit?usp=sharing