Bhumika Kapur’s Status Report for 12/4

This week I continued to work with my team to improve our robot’s end-to-end process, tweaking some values to make it more consistent and improving the navigation process. We primarily focused on the robot’s navigation back to the basket after it retrieves the object, and on depositing the item in the basket. The team’s progress is described in this week’s team status report. I also worked on the final presentation, which Ludi presented earlier this week.

Bhumika Kapur’s Status Report for 11/20

This week I worked with my teammates on the navigation and retrieval process of the robot. We all worked together on those tasks, and our progress is detailed in the team status report. I also worked on improving the CV component of the project, as some errors occasionally occur in different lighting conditions, but I am hoping those will be resolved with more testing soon. Next week I will continue to work with my team on the robot and on our final presentation.

Bhumika Kapur’s Status Report for 11/13

This week I worked on both the edge detection and April tag code.

Firstly, I improved the April tag detection so that the algorithm is able to detect an April tag in the camera’s stream and return the tag’s center and corner coordinates along with its pose matrix, which allows us to calculate the distance and angle to the tag. The results of this are shown below:
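A minimal sketch of this detection-plus-pose step (assuming the pupil_apriltags package, since the library isn’t named here, plus known camera intrinsics fx, fy, cx, cy and a measured tag size):

import numpy as np
from pupil_apriltags import Detector   # assumed library; the exact package isn't named in the report

detector = Detector(families="tag36h11")

def tag_distance_and_angle(gray_frame, fx, fy, cx, cy, tag_size_m):
    # Detect tags and have the library estimate each tag's pose from the camera intrinsics.
    detections = detector.detect(
        gray_frame, estimate_tag_pose=True,
        camera_params=(fx, fy, cx, cy), tag_size=tag_size_m)
    if not detections:
        return None
    det = detections[0]
    t = det.pose_t.flatten()                     # tag position in the camera frame, in meters
    distance = float(np.linalg.norm(t))          # straight-line distance to the tag
    angle = float(np.degrees(np.arctan2(t[0], t[2])))   # horizontal bearing of the tag
    return det.center, distance, angle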

Second, I worked on improving the edge detection code to get a bounding box around each of the boxes visible in the camera’s stream. The bounding box also gives us the exact location of the box, which we will later use to actually retrieve the object. The results of this are shown below:
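A minimal sketch of this step with OpenCV, using Canny edges, contours, and bounding rectangles (the blur kernel, Canny thresholds, and minimum area are placeholder values):

import cv2

def box_bounding_boxes(bgr_frame, min_area=2000):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:        # ignore small, noisy contours
            continue
        x, y, w, h = cv2.boundingRect(c)         # pixel location and size of the box
        boxes.append((x, y, w, h))
        cv2.rectangle(bgr_frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return boxes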

Finally, I worked with my team on the navigation of the robot. By combining our individual components, our robot can now travel to the exact location of the April tag which marks the shelf. The robot is also able to drive up to the exact location of the item that has the laser point on it and center itself on the object. Over the next week I plan to continue working with my team to finish up the final steps of our implementation.
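The centering step amounts to steering until the object’s pixel location lines up with the image center; a minimal sketch of that logic (frame width and tolerance are placeholder values):

def centering_command(object_x, frame_width=640, tolerance_px=20):
    offset = object_x - frame_width // 2         # how far the object is from the image center
    if abs(offset) <= tolerance_px:
        return "forward"                         # roughly centered, drive straight
    return "turn_right" if offset > 0 else "turn_left"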

Bhumika Kapur’s Status Report for 11/6

This week I worked on all three components of my part of the project: the laser detection algorithm, the April tag detection, and the edge detection.

For the edge detection, I was able to implement edge detection directly on the Intel RealSense’s stream with very low latency. The edge detection also appears to be fairly accurate and is able to recognize most of the edges I tested it on. The results are shown below:
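A minimal sketch of per-frame edge detection on the RealSense color stream, using pyrealsense2 and OpenCV (the resolution and Canny thresholds are placeholders):

import cv2
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    while True:
        frames = pipeline.wait_for_frames()
        color = np.asanyarray(frames.get_color_frame().get_data())
        gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
        cv2.imshow("edges", edges)               # edge map for the current frame
        if cv2.waitKey(1) == ord("q"):
            break
finally:
    pipeline.stop()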

Next I worked on the laser detection. I improved my algorithm so that it produces an image with a white dot at the location of the laser, if it is in the frame, and black everywhere else. I did this by thresholding for pixels that are red and above a certain brightness. Currently this algorithm works decently well but is not 100% accurate in some lighting conditions. The algorithm is also fairly fast, and I was able to apply it to each frame of the stream. The results are shown below:
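A minimal sketch of that thresholding step, keeping only pixels that are strongly red and bright and then taking the centroid of what survives (the cutoff values are assumptions):

import cv2
import numpy as np

def laser_mask(bgr_frame):
    b, g, r = cv2.split(bgr_frame.astype(np.int16))
    redness = r - np.maximum(g, b)               # how much red dominates the other channels
    bright = r > 200                             # brightness cutoff (placeholder value)
    mask = ((redness > 60) & bright).astype(np.uint8) * 255
    return mask                                  # white at the laser dot, black everywhere else

def laser_centroid(mask):
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                              # laser not in the frame
    return int(xs.mean()), int(ys.mean())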


Finally, I worked on the April Tag detection code. I decided to use a different April Tag Python library than the one I was previously using, as this new library returns more specific information such as the homography matrix. I am a bit unsure how I can use this information to calculate the exact 3D position of the tag, but I plan to look into this more in the next few days. The results are below:
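One possible direction, not yet verified, is to skip the homography entirely and estimate the tag’s position from its four corner points with OpenCV’s solvePnP, given the camera intrinsics and the physical tag size:

import cv2
import numpy as np

def tag_position(corners_px, fx, fy, cx, cy, tag_size_m):
    half = tag_size_m / 2.0
    # Tag corners in the tag's own frame; the order must match the detector's corner order.
    object_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                           [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts,
                                  np.asarray(corners_px, dtype=np.float32),
                                  camera_matrix, None)
    return tvec.flatten() if ok else None        # (x, y, z) of the tag in camera coordinates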

During the upcoming week I plan to improve the April Tag detection so I can get the exact 3D location, and also work on integrating these algorithms with the navigation of the robot.

Bhumika Kapur’s Status Report for 10/30

This week I was able to start working on the camera and Jetson Xavier setup. We got the Xavier set up earlier in the week, and I worked on getting the Intel RealSense Viewer set up on the Xavier. Once that was done, I worked on downloading the necessary Python libraries on the Xavier that are needed to use the Intel RealSense data. That took me some time as I ran into many errors with the download, but the library is now set up.

Once the viewer and the library were set up, I began working on actually using the camera’s stream. I ran many of the example scripts given on the Intel RealSense website. Two of the more useful scripts that I was able to run were a script for detecting the depth of objects in the stream, and a script for using the depth image to remove the background of an image. The results are shown below:

The first image shows the result of running the depth script, which returns the depth of a pixel in meters; the x, y, and depth information is printed out. The second image shows the result of using depth information to remove the background and focus on a specific object.
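The depth query itself is only a few lines with pyrealsense2; a minimal sketch of that part of the script:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    x, y = 320, 240                              # pixel of interest
    print("depth at", (x, y), "=", depth_frame.get_distance(x, y), "m")
finally:
    pipeline.stop()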

Next week I plan to use these scripts in conjunction with the edge and laser detection algorithms I have written, so that the robot can use that information to navigate.

Bhumika Kapur’s Status Report for 10/23

This week I worked on two things. First, I worked on getting our camera, the Intel RealSense, set up. We were able to obtain a new camera with its cable and get it somewhat working on one of our team members’ laptops, and it also worked well on our TA Tao’s laptop. The next step is to connect the camera to the Jetson Xavier and get it set up there.

I also worked on the laser detection algorithm. I tried a few different methods that I found online, with varying results. The first algorithm that I tried only used the RGB values of the image, and manipulated these values to calculate the degree of red compared with the other channel values. The results are shown below, where the laser is the circular spot:
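A minimal sketch of this RGB-only method (the weighting and threshold are assumptions):

import cv2
import numpy as np

def redness_mask(bgr_frame, threshold=80):
    b, g, r = cv2.split(bgr_frame.astype(np.float32))
    redness = r - (g + b) / 2.0                  # degree of red relative to the other channels
    redness = np.clip(redness, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(redness, threshold, 255, cv2.THRESH_BINARY)
    return mask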

The next algorithm involved taking the HSV values, thresholding them, and then ANDing the resulting masks to find the laser in the image. This method seemed to work, but needs more testing to determine the best threshold values. The results are shown below, with the laser in pink:
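A minimal sketch of the HSV version (the per-channel ranges below are placeholders that still need tuning):

import cv2

def laser_mask_hsv(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Red hue wraps around 0 in OpenCV's 0-180 range, so combine both ends of the range.
    hue = cv2.bitwise_or(cv2.inRange(h, 0, 10), cv2.inRange(h, 170, 180))
    sat = cv2.inRange(s, 100, 255)
    val = cv2.inRange(v, 200, 255)               # the laser spot is very bright
    return cv2.bitwise_and(hue, cv2.bitwise_and(sat, val))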

Overall both algorithms were able to detect the laser in a simple example, but I will need to perform more testing to determine which algorithm is preferred when shining the laser on an object. Next week, I plan to work on setting up the camera and improving on this algorithm.

Bhumika Kapur’s Status Report for 10/9

This week I worked on a few tasks. First, I worked with my teammates to complete the design presentation. We had to reevaluate some of our requirements and solution approaches for the presentation. We also discussed our solution approach in detail and researched how each component would connect to other components.

I also received the Intel RealSense camera from Tao this week, and spent some time trying to set it up. I was able to get it connected to my laptop and, using a VM, I am able to access the camera. The next step is to set up the camera in the Intel RealSense Viewer. I have been attempting to get the camera set up in the viewer, but the viewer crashes whenever I open it, so I will need to debug that this week.

I also worked on the April tag detection code. As I mentioned last week, the April tag code is originally written in Java/C, so I followed some online tutorials in Python, the language I am most familiar with, to get the software set up. Currently I am able to draw a bounding box around the April tag, detect the specific family the tag is from, and output the exact coordinates of the tag in the image. My next steps are to combine this with the data that I would receive from the camera to output the exact location of the April tag so the robot can navigate to it. I hope to do that next week.
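A minimal sketch of this detection step, assuming the Python apriltag bindings from the tutorial:

import cv2
import numpy as np
import apriltag   # assumed: the Python apriltag package from the tutorial

detector = apriltag.Detector()

def annotate_tags(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    for det in detector.detect(gray):
        corners = det.corners.astype(np.int32).reshape(-1, 1, 2)
        cv2.polylines(bgr_frame, [corners], True, (0, 255, 0), 2)   # bounding box around the tag
        print("family:", det.tag_family, "id:", det.tag_id, "center:", det.center)
    return bgr_frame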

Example image/output:


Bhumika Kapur’s Status Report for 10/2

This week I continued the research I have been doing into the computer vision component of our project. After speaking with Tao this week, our team decided to use the Jetson Xavier NX rather than the Jetson Nano, because the Xavier NX has 8GB of memory compared to the 2GB of memory in the Jetson Nano. So, I researched cameras that would be compatible with this new component. I looked into the camera in the ECE inventory list that is specifically made for the Xavier NX, but it has a very low FOV (only 75 degrees). So, I did more research into the SainSmart IMX219, and it seems that this camera is compatible with the Xavier NX as well and has a higher FOV (160 degrees), so we will most likely go ahead with it.

I also talked about depth sensing with Tao, and he suggested that we look into stereo cameras if we want a camera with depth sensing capabilities. Unfortunately, many of the stereo cameras compatible with the Xavier NX are quite expensive or sold out. So, I looked into the idea of using a distance sensor, such as an ultrasonic sensor from Adafruit, along with a camera to properly navigate the robot.

This week I also looked into the April tag detection software, which is originally written in C and Java, but I found a Python tutorial that I followed. I don’t have the software fully operating yet, but I hope that it will be functional by next week. I also worked on the design presentation with my team, and we discussed our progress and project updates.

Next week I plan to continue working on the April tag detection code, and try to set up the camera from the ECE inventory with the Jetson Xavier NX.

Bhumika Kapur’s Status Report for 9/25

This week I worked on the proposal presentation. I worked with my group members to complete the proposal slides, and we worked together to do some research into the components and algorithms that we are going to use, to quantify some of our testing metrics and requirements. I was presenting for my team this week, so I spent a few days preparing for the presentation and getting familiar with the content.

I believe that I and the team are following our schedule well. Based on the Gantt chart that we created last week, our team is in the researching phase for the components and software that we plan to use for our project.

This past week I did some research into cameras for our project, starting with the cameras that are already in the ECE inventory. I found that some of the cameras in the inventory seem to be satisfactory for our project, such as the SainSmart IMX219, which has a 160-degree wide-angle lens. But we may have to purchase a more expensive camera such as the Intel RealSense for depth sensing, as I could not find any depth sensing information on the cameras in the inventory.

I also did research into edge detection algorithms, and found that Canny edge detection seems to be the most effective (95%) and is thus the one we should use for our project. We also tested out some images with Canny edge detection and found that it ran quite fast, in only a few seconds (see below). This week I plan to continue researching cameras and algorithms, and start writing the code for some of the computer vision algorithms we plan to use in our project, if I have time.

Bhumika Kapur’s Status Report for 9/18

This week the entire team worked on our proposal presentation. We used the feedback we received during our meetings with Tamal and Tao to reconsider some aspects of our project.

I also did some research on the computer vision algorithms that I will be working on this semester. I read some research articles that have been published on detecting a laser pointer, as I am planning on writing a program that can detect a laser on a box. I also read about edge detection algorithms, as we will need to use edge detection in our project. Finally, I looked into April Tags and cameras, as we are planning on using both in our project. I looked at the specs and reviews of different cameras, and looked at some April tag tutorials to get a sense of how we will use these components.

So far, I believe I am on schedule. We have just divided up the tasks within the group, and I have begun working on my tasks. In the next week, I plan to finalize which edge and laser detection techniques I will use, and select a camera for our robot.