Weekly Status Report – Week of 11/10

Arushi – This week I didn’t get much of a chance to work on Carti-B, which is why I put in more time last week. I worked with Shreyas to change the PID to be based on the radius of the target rather than its area, to prevent the robot from jerking as much when we step close to it or when it has to move backwards. This really smoothed out its forward and backward motion. Additionally, we worked on further refining the aisle turn code. This weekend, I plan on going in with Shreyas to further test and refine the obstacle detection/path planning.

Pallavi – This week I mainly worked with Shreyas to fine-tune the obstacle detection now that it is integrated with the entire Carti-B structure. One major thing we changed was the distance threshold we use to fire an interrupt and indicate an obstacle. We realized our threshold was too large, so the robot was actually detecting the human it was tracking as an obstacle and not moving. Another change to our design was what information we actually send from the Arduino to the Raspberry Pi. Before, we were sending the actual distances for all four of our sensors. We realized, however, that our path planning algorithm did not need the actual distances, just which sensors have an obstacle violating the threshold. We achieve this with a 4-bit value initialized to 0000: when any sensor fires, we check each sensor and set its bit to 1 if it has an obstacle in the violation range. So, if we had a violation in sensors 1 and 3, we would send 1010. This removes extraneous information that is ultimately not used in our final code.
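The encoding above can be sketched quickly (a minimal Python sketch of what the Arduino does in C; the function name and the centimeter units are illustrative, not our actual code):

```python
def encode_obstacles(distances, threshold):
    """Pack four sensor readings (cm) into a 4-bit string:
    position i is '1' when sensor i+1 sees an obstacle closer
    than the threshold, '0' otherwise."""
    return "".join("1" if d < threshold else "0" for d in distances)

# Violations on sensors 1 and 3 yield "1010":
print(encode_obstacles([10, 80, 15, 90], threshold=30))  # -> 1010
```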

Next week, I will be out of town for Thanksgiving, so I will not be able to work on Carti-B. Happy Thanksgiving!!

Shreyas – This last week I worked with Arushi on reconfiguring the PID to use the radius instead of the area, which made moving backwards a lot smoother. I also worked with Pallavi to change the format of the data sent from the Arduino to the Pi. I got the interrupt handler working properly so that it sets a global flag indicating whether or not an obstacle exists. In the loop logic based on the flag, the robot will either move normally, stop, or execute a re-route path to get around the obstacle. From my basic testing Thursday, it works pretty well, but more testing needs to be done to tune how much the robot turns to avoid the obstacle and how far it moves ahead before looking for the human again. We also decided how we’re going to secure the basket to the robot and what kind of basket to get, and we’re going to buy a cheap medium-size black jacket and just tape the green and red circles on. This weekend Arushi and I will work on finalizing the obstacle detection protocol.
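The flag-based loop logic can be sketched as follows (a hedged Python sketch; `follow_human`, `execute_reroute`, and the use of a `threading.Event` as the global flag are assumptions for illustration, not our actual implementation):

```python
import threading

# Global flag, set from the GPIO interrupt callback when the
# Arduino pulls the interrupt pin high.
obstacle_detected = threading.Event()

def on_obstacle_interrupt(channel):
    """Hypothetical interrupt callback registered on the Pi's GPIO pin."""
    obstacle_detected.set()

def control_step(robot):
    """One iteration of the main loop, branching on the obstacle flag:
    move normally, or stop and execute a re-route around the obstacle."""
    if not obstacle_detected.is_set():
        robot.follow_human()      # normal PID tracking of the target
    else:
        robot.stop()
        robot.execute_reroute()   # turn, move ahead, reacquire the human
        obstacle_detected.clear()
```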

Weekly Status Report – Week of 11/3

Arushi – This week I worked with Shreyas and Pallavi to solidify our plans moving forward, which was important for us to do, especially for the midpoint demo. I made a doc that covers what we have done so far and documents our plans, both logistically and in terms of what our algorithms will do in the upcoming weeks (the doc is included below). We finalized how we are accomplishing turns: using a red target on the arms and acting accordingly (explained in more detail in the doc). So, I spent this week modifying the algorithm. I got the robot to stop once the red circle is seen, to handle a person stopping to pick up something from an aisle. I’ve also begun writing the code for when the human turns out of the aisle. I thought it might take longer to test this on the robot because the plan is to finish obstacle detection first, but I knew I would at least have the image processing code tested on my laptop and would then only have to modify the robot commands. Shreyas and I incorporated the robot controls so that as soon as the robot sees a red circle it stops, and we began implementing and testing the robot following the human’s turn out of the aisle. For right now, we found the relationship between the radius of the circle and the distance the robot would have to move forward. I also went in this week to work with Shreyas to build a platform for the sensors, so that Pallavi can start testing the sensors and the path planning code. In general, this week I had more time, so I was able to go in and work on the turns and help build the platform; I know next week will be busier for me, so I’ll focus on refining the performance and calibration.

Shreyas – This week Carti-B got a major upgrade in performance and looks. Our first goal was getting ready for the midpoint demo by making the doc below and having enough code working. After the midpoint demo, Arushi and I used the makerspace to build the platform that the sensors reside on. Then I made Carti-B more aesthetically pleasing and portable by neatly arranging and securing all the individual systems. This helped prevent wires from getting in the way of the wheels and the Raspberry Pi from shifting on the base. To secure the USB cable on the iRoomba, I followed Sullivan’s advice and found spring grippers that apply tension on the cable to keep it connected. This week I also got the interrupt handling to work between the Arduino and the Pi. The Arduino sends a signal to a Raspberry Pi pin when an obstacle is within the threshold, and the Pi uses an interrupt service routine to quickly set a flag in the Robot Control Module that an obstacle is detected. This is far more efficient than continuously polling the Arduino, because the Pi no longer wastes CPU cycles reading the serial data until the ISR is called. I also set up the serial connection interface so that the Arduino sends the 4 distance values and the Pi parses the line and extracts the 4 values into an array for later use. The next step is handling the different cases (which sensor was activated and what path the robot should take); my goal is to finish this by next Wednesday. Lastly, Arushi and I discovered that because of how I tried to organize the camera cable, sometimes the camera feed is disturbed. To fix this I put in an order request for a shorter cable.
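The Pi-side parsing might look roughly like this (a sketch; the comma-separated line format, port name, and baud rate are assumptions, not necessarily what we used):

```python
def parse_distances(line):
    """Parse one serial line like b"23,150,87,40\n" from the Arduino
    into a list of four integer distances (cm)."""
    return [int(tok) for tok in line.decode().strip().split(",")]

# On the Pi the line would come from pyserial, roughly:
#   import serial
#   ser = serial.Serial("/dev/serial0", 9600, timeout=1)  # port/baud are assumptions
#   distances = parse_distances(ser.readline())

print(parse_distances(b"23,150,87,40\n"))  # -> [23, 150, 87, 40]
```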

Pallavi – This week I mainly focused on setting up the ultrasonic sensors on the Carti-B setup we already have. At the beginning of the week we noticed some inconsistency in the readings from the sensors: they would only report a distance when the obstacle was moving. I found two reasons for the issue. One, we were using ourselves to test the readings, and human beings are apparently inconsistent obstacles, as their clothing often absorbs the ping coming out of the sensor. Two, the code I was using with the NewPing library was taken from example code on their website; I read on some forums that this example had inconsistent results, and once I switched to the starter code they recommended instead, we were able to get consistent results.

After I was able to get consistent results, I set up the sensors on Carti-B thanks to the platform Arushi and Shreyas created. The new setup is shown below:

The sensors are held up with tape, so we are going to purchase mounting kits. We also want to purchase a case for the Arduino Uno. Next week, we want to test the integration of the sensor data with our path planning and hopefully have that finished by Wednesday.

 

Midpoint Demo – Team 5 (Cart-i B)

Weekly Status Report – Week of 10/27

Arushi – I spent this week setting up the Raspberry Pi with its battery so that it can run without being plugged into the computer, and trying to smooth out the PID. From trying many different things, we identified that if the robot is only programmed to follow forwards and backwards, the control is very smooth. However, once we add in turns, the robot becomes more jerky, especially when I move towards the robot and it should back up. I think this is because we always prioritize pivoting over moving straight: if the difference between the center of the frame and the center of the circle is larger than a constant threshold, the robot focuses on turning. When I take a step towards the robot, because I am so close, even a small shake can make the robot think that I am leaving the center of the frame. Thus, we came up with a solution: the turn threshold should depend on the target’s distance, which we infer from its area. We tried implementing this but still ran into similar issues, so we will gather more data points to identify what this threshold should be and hope to make progress by Monday’s checkpoint.

Additionally, we realized that we currently always prioritize turning over moving forward, which is not good in certain cases. We changed our code so that if both are needed, it prioritizes whichever has the greater value (such as turning) and then handles the other (moving forward). One consideration was having the robot do both in a “sweeping” motion, but that is not possible on a Roomba. Additionally, Shreyas and I started implementing a feature so that if a human moves too quickly and leaves the frame on the right side, the robot will spin 360° to the right to find the human, and vice versa.
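The prioritization could be sketched like this (illustrative Python, not our tuned logic; the thresholds and the assumption that both errors are already normalized to comparable units are made up for the example):

```python
def choose_action(x_error, z_error, turn_threshold, drive_threshold):
    """Pick one motion per control cycle. When both a turn and a drive
    are needed, whichever (normalized) error dominates is handled first,
    rather than always pivoting before driving."""
    turn_needed = abs(x_error) > turn_threshold
    drive_needed = abs(z_error) > drive_threshold
    if turn_needed and drive_needed:
        return "turn" if abs(x_error) >= abs(z_error) else "drive"
    if turn_needed:
        return "turn"
    if drive_needed:
        return "drive"
    return "hold"  # target centered and at the right distance
```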

Pallavi – I spent a bit of this week continuing to work with the ultrasonic sensors, trying to see if there was any interference between the sensors given their orientation on the board. I spent the rest of the week helping Arushi and Shreyas remove the jerky movement of the current project in preparation for the demo next week. Arushi talked about a lot of the changes we tried in her part of the post; one thing we both realized while working in the lab on Friday was that if we try to display the live stream video and run our robot project at the same time, the robot lags too much, so we need to disable the display when we want to test the smoothness of our iRoomba. We are also working on pseudo-code for our path planning and our final schedule for the remaining weeks of the project.

This coming week, after our midpoint demonstration, I want to focus on adding to our iRoomba platform to stabilize the camera and also integrate the platform on which we will place the sensors, as our next goal is to integrate obstacle detection into our design.

Shreyas – The earlier half of this week I set up SSH capabilities on the Raspberry Pi so that we could finally control it remotely. This also added the luxury of more than one of us using the Raspberry Pi at once for testing and development. For the latter half of the week, Arushi and I focused on making the movement smoother by changing our algorithms slightly and adding features (as detailed in her part above). During our testing, on one run, the iRoomba went wild and disobeyed any commands we gave it. We realized a quick fix is to change the iRoomba to safe mode. Before, we had it in full control mode, which prevented it from stabilizing itself and shutting off. Now, in safe mode, it auto-stops if it loses stability or if we press the buttons on it. Next week we should also make the USB cable more secure on the iRoomba, because if that disconnects during a run we could again lose control. This week I also tried adding obstacle detection using the iRoomba’s sensors. I have the code ready and it works independently of the movement code, but when I combine the two the robot’s motion becomes very unstable. This is something I will work on more next week, as we thought it would be better to focus on the general movement control for the demo.

Here’s a video of our current progress with the Carti-B robot:

 

Weekly Status Report – Week of 10/20

Arushi – This week my goals consisted of combining my image processing code with Shreyas’s PID/robotics program so we could start testing our project and refining it for our midpoint demo. We initially had difficulties with a lag in our camera, which was fixed by changing a while loop in our code to a for loop that sets a parameter on the Pi camera. Once we fixed that, we were able to test rotations in place by moving our target in a circle around the robot. We had difficulties, however, with the image processing sometimes picking up the top of the door to the room as having HSV values within the threshold range for our target. We tried adjusting the HSV threshold accordingly, but were still having trouble. As possible solutions, I wrote out programs that (1) used an RGB threshold instead of HSV, with which we had trouble identifying the circle under a shadow, and (2) changed our target to a chessboard, using an OpenCV built-in for finding chessboard corners. The chessboard was accurately picked up, but, similar to other HOG-based algorithms, it was too slow when implemented on the Pi. Right as we started to print a new target whose HSV we could specify when coloring it, Shreyas tried a threshold that worked. We were then able to test the individual aspects of in-place turns and moving straight, and at the end of class we were able to put the parts together. The goal for next week is to further smooth out the PID, connect the Pi to a battery source, and then test in a variety of lighting conditions to ensure that the image processing does not pick up parts of the image that are not the target.

Shreyas – This week I worked with Arushi on integrating different image processing algorithms on the Raspberry Pi. To avoid repeating the details described above, I will talk more about the robot control module side of it. I started with tuning the PID for the X displacement to make sure rotations were smooth. After this was tuned, and after a better image processing technique was found, I added logic to find the Z displacement based on the area of the target. Then I got the PID controller to also work on the Z displacement so that movement forward and backward was also smooth. The last and most challenging part on my end was figuring out how to move the robot in both the Z and X directions. Since I am sending raw motor power commands based on the output of the PID controller, it is currently not possible to move in the Z and X directions at the same time; the main limitation is that the PID controller calculates the motor values independently. Next week I am going to work on having the PID controller give a combined output of Z and X displacement so that movement in both degrees of freedom is possible. One concern is that this may damage the motors: since the iRoomba is a pivot-based robot, sending two different power values to the left and right motors (for curved motion) puts stress on them. For slightly different magnitudes this shouldn’t be a problem, but it is something that takes careful testing. Next week my goal is to integrate the iRoomba’s sensors and tune the PID for bi-directional movement.
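The combined-output idea could be sketched as mixing the two PID outputs into left/right motor powers (an illustrative sketch; the gains, clamp value, and mixing scheme are assumptions, not the final tuned implementation):

```python
class PID:
    """Minimal PID controller; the gains passed in are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def mix(z_out, x_out, max_power=200):
    """Combine the forward (Z) and rotational (X) PID outputs into
    left/right motor powers for a differential-drive robot; clamping
    keeps the left/right differential bounded to spare the motors."""
    left = max(-max_power, min(max_power, z_out + x_out))
    right = max(-max_power, min(max_power, z_out - x_out))
    return left, right
```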

Pallavi – This week my goal was to get the serial communication between the Arduino and the Raspberry Pi set up. I unfortunately did not complete my goal because I ran into some issues setting up the serial communication on the Raspberry Pi Zero. I’m running a simple Python script on the Pi just to check whether I can write to the terminal. When I try this, however, the program simply hangs. I looked at this link (https://www.raspberrypi.org/forums/viewtopic.php?t=180951) and made sure to implement all the solutions discussed, but my Raspberry Pi is still not printing to the terminal. I’m going to work with Shreyas on Monday to figure out what the issue is.

Other than setting up the Pi, I was looking into TX/RX communication versus USB communication between the Arduino and Raspberry Pi. I ended up going with TX/RX communication because it has less overhead than USB. I also had to set up a simple voltage divider to properly connect the Raspberry Pi and Arduino, as the Raspberry Pi’s TX and RX pins are 3.3V while the Arduino’s are 5V.
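For reference, the divider follows Vout = Vin · R2 / (R1 + R2); the resistor values below are a common choice for stepping 5V down near 3.3V, not necessarily the ones we used:

```python
def divider_vout(vin, r1, r2):
    """Voltage divider output: Vout = Vin * R2 / (R1 + R2),
    where R2 is the resistor across the output."""
    return vin * r2 / (r1 + r2)

# A common 5 V -> ~3.3 V choice: R1 = 1 kOhm, R2 = 2 kOhm
print(divider_vout(5.0, 1000, 2000))  # ~3.33 V
```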

This week, I hope to get the serial output working on the pi to ensure I can test my arduino code. I also hope to start integrating my sensors with the combined iRoomba and object detection Shreyas and Arushi have been working on this week.

Weekly Status Report – Week of 10/13

Arushi – This week, I looked more into homography using checkerboards to further understand the process for when we implement turns. Based on what I learned, we will use the angle between the circles’ centers to determine which way the human is turning. Furthermore, we established that, in general, when it comes to turns, the cart will only move if the human (and thus the closest circle) is getting smaller, to prevent the cart from turning when a human pivots to pick something off the shelf. I also worked on porting my code over to the Raspberry Pi and combining the image processing with Shreyas’s robot control/PID module. Because we had no way to attach the camera to the robot, we tested with me moving the target in front of the stationary camera, and the robot moved accordingly (in terms of general direction). The next step is to actually attach the camera to the robot and test it, which is the plan for next week. To be able to do so, Ben helped us make a wooden base for a post to hold the camera.
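One way the angle test might look (a heavily hedged sketch; the image coordinate convention, the 5° dead band, and the sign-to-direction mapping are all assumptions for illustration, not the calibrated version):

```python
import math

def turn_direction(front_center, back_center, dead_band_deg=5.0):
    """Classify the turn from the angle between the two circle centers
    in image coordinates (x right, y down). A near-zero angle is treated
    as walking straight."""
    dx = back_center[0] - front_center[0]
    dy = back_center[1] - front_center[1]
    angle = math.degrees(math.atan2(dy, dx))
    if angle > dead_band_deg:
        return "right"
    if angle < -dead_band_deg:
        return "left"
    return "straight"
```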

Pallavi – This week, I continued to work on the ultrasonic sensors. In order to actually start working on the serial connection between the Arduino and the Pi, I wanted to set up the USB connection between the two. Because Arushi and Shreyas were mainly focused on integrating the iRoomba and the OpenCV code, the main Raspberry Pi was being used. Instead, I got a leftover Pi from a previous semester’s project (thanks Ben!) and spent a good amount of time setting it up. Because the leftover Pi only had one USB port, setup was somewhat difficult since I could only use the mouse or the keyboard at a time. I’ve got the basics set up, but I plan on going into lab early on Monday to set up SSH on the Pi so I can connect the Arduino Uno to the USB port. With that, my goal is to have the entire design – detecting values, sending the interrupt when an obstacle is detected, and the serial communication – done by the time I leave lab on Monday.

Shreyas – This week was mostly spent making the core of the Robot Control Module. This included adding PID control, so I had to figure out the right constants and tune the controller to our iRoomba. I also added the interfacing between the Robot Control Module and the Image Processing Module: the PID controller takes the center of the target found by the image processing module and sends the necessary motor commands to the iRoomba. The X displacement currently works, so the next step is the Z displacement. I will also be adding obstacle detection with the iRoomba’s sensors in the coming week. Another issue I had to work on was optimizing the connection to the Raspberry Pi camera. Originally the FPS was very low, but by finding another method to get the frames we made the FPS a lot faster (around 30 now). Now the main latency comes from the image processing module.

All – Next week we plan to focus on testing our initial milestone of the cart following the human down a relatively straight path and stopping when the human does. We expect that we will have to change some of the color detection constants based on the camera, as well as the PID constants, as we continue to test.

Weekly Status Report 9/29

Arushi – This week I worked with Pallavi to identify what our target image will be, and then we took model videos with the target attached to my back. Based on the videos I was able to iterate on my general computer vision algorithm to more accurately draw a circle, and print its parameters, based on the target image. The next step on this end would be to take a video where the target stays still and the camera moves up to the closest safe distance for the cart, so we can calibrate how to use the target radius to detect the distance between the cart and the human. I also began designing a model of how we are going to place our hardware on the iRobot Create. For next week, I plan on working with my teammates to figure out how we will interface our separate components, now that we have proofs of concept for all of them.

Shreyas – This week I was primarily in charge of setting up the Raspberry Pi and the iRobot. Getting the Pi on the CMU Wi-Fi was a slight bottleneck, but eventually it connected. I installed the basic libraries and text editors that we will use later. Then I looked into the pyCreate2 Python library, which provides a good abstraction layer for interfacing with the iRobot. I tested sending basic movement commands and they seemed to work. I started testing reading the sensor data from the iRobot, but more elaborate testing needs to be done.

Pallavi – This week I worked with Arushi to take some videos to train our OpenCV model. After helping with that, I mainly focused on working with the ultrasonic sensors. I started by individually testing each sensor, and I found one sensor that wasn’t reading valid responses. I then started setting up the sensors in the orientation we want to use for our final design. Next week, I will work with Arushi and Shreyas to get a better idea of how we will integrate all of our parts, especially now that we have a little bit of each individual part working. I also want to look more into the library functions actually used to detect objects. Now that I have the general setup taken care of, I want to implement our detection algorithm and test it.

All of us will be working on the design presentation at the beginning of this week and also work on the design proposal concurrently. Working on this presentation will hopefully help us with our other goal of figuring out exactly how we want all of our individual parts to fit together.

Weekly Status Report 9/15

Pallavi – Looked into microprocessor alternatives to Raspberry Pis.

Shreyas – Looked into iRoomba sensor specifics and did more research into how we plan on building our cart base.

Arushi – Started implementing basic color detection using OpenCV.

 

We also looked more into what hardware we’d like to use and have decided on the iRoomba Create as our base and a Raspberry Pi for the image processing. We are considering alternatives to a Raspberry Pi, but for the time being we are going to start development on the Pi and invest in more powerful hardware if we see that it is needed.

All of us:

This week we worked on our presentation and on re-scoping our project based on feedback given during last week’s meeting.

Next week, we hope to start working with the iRoomba for motor controls, have a working OpenCV color detector, and start looking into the UART interfacing between our micro-controller and iRoomba.

 

We hope to get some significant work done this coming week in order to set up our project and hit the ground running!