Team Status Report for 11/13

This past week, the team worked together to start making the robot autonomous. We also gave demos during class time, where we showed our subsystems and received feedback.

Originally, the code that runs the chassis, slide, and claw subsystems was separate. When we tried combining them, we discovered (after chasing a strange bug where some motors stopped moving) that the Arduino Servo library disables PWM output on two pins (pins 9 and 10 on the Uno). This meant we could not run the entire system across our two Uno boards, since we needed all 12 of their PWM pins for our 6 motors. We concluded that we either needed a servo shield so the servo could be connected to the Xavier (the Xavier only exposes GPIO pins, so the servo cannot be connected to it directly) or a larger Arduino board. We were also running into slight communication delays in the chassis because one motor was on a separate Arduino from the others. We therefore replaced our two Arduino Unos with a single Arduino Mega 2560. Since the Mega has 54 digital pins, 15 of which support PWM, we can now run all of our subsystems on one board, which also eliminated the communication issues between the two boards.

For our navigation code, we first focused on getting the robot to navigate to an April tag autonomously. Currently, we rotate and translate the robot by powering the chassis motors for a set amount of time. In our tests, running the motors for 1 second consistently produced about 0.25 m of translation. However, translational movement is prone to some drift and acceleration effects over larger distances, so we plan to keep our movements mostly incremental and to purchase an IMU to help correct for drift.

Video of 1 second of movement producing 0.25 m of translation:

Similarly, we found that we could fairly consistently rotate the robot to a desired angle by proportionally mapping motor power-on time to the angle of rotation.
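Below is a minimal sketch of this time-based motion model in Python. The calibration constants, the incremental step size, and the send_command callback (which would write a drive command to the Arduino over serial) are illustrative assumptions rather than our exact code.

```python
import time

# Calibration from our tests: 1 s of drive time ~ 0.25 m of translation.
METERS_PER_SECOND = 0.25
# Placeholder rotation calibration (degrees per second of power-on time);
# the real value comes from the rotation tests described above.
DEGREES_PER_SECOND = 90.0

def translate(distance_m, send_command):
    """Drive forward for the amount of time that maps to distance_m."""
    send_command("forward")
    time.sleep(distance_m / METERS_PER_SECOND)
    send_command("stop")

def rotate(angle_deg, send_command):
    """Rotate in place for the amount of time that maps to angle_deg."""
    send_command("rotate_ccw" if angle_deg > 0 else "rotate_cw")
    time.sleep(abs(angle_deg) / DEGREES_PER_SECOND)
    send_command("stop")

def translate_incremental(distance_m, send_command, step_m=0.25):
    """Break a long move into small steps to limit accumulated drift."""
    while distance_m > 1e-6:
        step = min(step_m, distance_m)
        translate(step, send_command)
        distance_m -= step
```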

Building on these concepts, we were able to get our robot to navigate to an April tag in its field of view. The tag detection provides the horizontal and depth distances from the camera center as well as the yaw angle of the tag. Using this information, we wrote an algorithm in which the robot first detects the April tag, rotates itself so that it faces the tag head-on, translates horizontally until it is in front of the tag, and then translates depth-wise up to the tag. We still ran into a few drift issues that we hope to resolve with an IMU, but the results generally performed well.
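The approach routine boils down to roughly the following sketch. Here detect_tag, the single-argument motion helpers (rotate, translate, strafe), and the standoff distance are illustrative names and values rather than our exact implementation.

```python
def navigate_to_tag(detect_tag, rotate, translate, strafe, standoff_m=0.3):
    """Approach an April tag that is already in the camera's field of view.

    detect_tag() is assumed to return (x_offset_m, depth_m, yaw_deg) for the
    tag relative to the camera center; rotate/translate/strafe are assumed to
    be single-argument wrappers around the time-based motion helpers.
    """
    x_offset_m, depth_m, yaw_deg = detect_tag()

    # 1. Rotate so the robot faces the tag plane head-on.
    rotate(-yaw_deg)

    # 2. Strafe sideways until the robot is lined up in front of the tag.
    strafe(x_offset_m)

    # 3. Drive forward, stopping a fixed standoff distance from the tag.
    translate(max(depth_m - standoff_m, 0.0))
```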

Our plan is to place an April tag on both the shelf and the basket so that the robot can navigate both to and from the shelf this way.

We then focused on scanning a shelf for the laser-pointed object. To do this, the robot uses edge detection to get bounding boxes for the objects in front of it, along with the laser-point detection algorithm. It can then determine which object is being selected and center itself in front of that object for grabbing.
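In code, the selection step amounts to intersecting the detected laser pixel with the bounding boxes and then measuring how far off-center the chosen box is. This is a simplified sketch with illustrative names, not our exact implementation.

```python
def select_laser_target(boxes, laser_xy):
    """Return the (x, y, w, h) bounding box that contains the laser pixel,
    or None if the laser is not visible or falls outside every box."""
    if laser_xy is None:
        return None
    u, v = laser_xy
    for (x, y, w, h) in boxes:
        if x <= u <= x + w and y <= v <= y + h:
            return (x, y, w, h)
    return None

def horizontal_error_px(box, frame_width):
    """Pixel offset between the chosen box's center and the image center,
    which tells the robot how far to strafe before grabbing."""
    x, _, w, _ = box
    return (x + w / 2) - frame_width / 2
```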

We tested this with a setup consisting of two styrofoam boards found in the lab to replicate a shelf: one board laid flat across two chairs, and the other standing vertically at a 90-degree angle behind it.

Video of the robot centering on the laser-pointed box (difficult to see in the video, but the right-most item has a laser point on it):

Our next steps are to get the robot to actually grab the selected object and to combine our algorithms. We also plan to purchase a few items that should improve our current implementation, such as an IMU for drift-related issues and a battery connector adapter for the Xavier's unconventional battery jack (we have been unable to run the Xavier off a battery because of this). The camera is currently just taped onto the claw while we finish the navigation implementation; we will mount it in the most appropriate spot once that implementation is complete. Finally, we plan to keep refining our implementation and to be fully ready for testing by the end of next week at the latest.

Bhumika Kapur’s Status Report for 11/13

This week I worked on both the edge detection and April tag code.

First, I improved the April tag detection so the algorithm can detect an April tag in the camera’s stream and return the tag’s center and corner coordinates along with the pose matrix, which lets us calculate the distance and angle to the tag. The results are shown below:
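For reference, the detection step is conceptually similar to the sketch below. It assumes the pupil-apriltags bindings and placeholder camera intrinsics and tag size; the actual library, intrinsics, and tag size in our code may differ.

```python
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # placeholder RealSense intrinsics
TAG_SIZE_M = 0.10                            # placeholder tag edge length in meters

def detect_tag(frame_bgr):
    """Return (center, corners, pose_R, pose_t) for the first tag found, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=(FX, FY, CX, CY),
        tag_size=TAG_SIZE_M,
    )
    if not detections:
        return None
    d = detections[0]
    # center/corners are pixel coordinates; pose_R and pose_t give the rotation
    # and translation used to compute the distance and yaw to the tag.
    return d.center, d.corners, d.pose_R, d.pose_t
```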

Second, I worked on improving the edge detection code to get a bounding box around each of the boxes visible in the camera’s stream. The bounding box also gives us the location of each box, which we will later use to actually retrieve the object. The results are shown below:
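The bounding-box step is conceptually a Canny edge pass followed by contour extraction; the thresholds and minimum area below are illustrative values, not our tuned ones.

```python
import cv2

def find_boxes(frame_bgr, min_area_px=500):
    """Return (x, y, w, h) bounding boxes for edge contours in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area_px:  # drop tiny edge fragments
            boxes.append((x, y, w, h))
    return boxes
```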

Finally, I worked with my team on the robot’s navigation. By combining our individual components, our robot can now travel to the location of the April tag that marks the shelf. The robot is also able to drive up to the item with the laser point on it and center itself on that object. Over the next week I plan to continue working with my team to finish the final steps of our implementation.

Ludi Cao’s Status Report for 11/6

This week I worked mainly on the navigation code and on some design of the middle board on which components are placed. I implemented a program where the Jetson sends direction commands to the two Arduinos, and the motors spin accordingly following omnidirectional drive rules. Attached is a video that demonstrates this behavior.
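For context, the Jetson-side logic looks roughly like the sketch below. It assumes mecanum-style omnidirectional mixing, a pyserial link to one of the Arduinos (the port name is a placeholder, and only one link is shown for brevity), and a comma-separated line of four wheel speeds as the message format; our actual protocol differs in the details.

```python
import serial

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # placeholder port

def drive(vx, vy, omega):
    """Convert a body velocity command (strafe vx, forward vy, rotation omega,
    each in [-1, 1]) into four wheel speeds and send them over serial."""
    front_left  = vy + vx + omega
    front_right = vy - vx - omega
    back_left   = vy - vx + omega
    back_right  = vy + vx - omega
    speeds = [front_left, front_right, back_left, back_right]
    peak = max(1.0, max(abs(s) for s in speeds))  # keep commands within range
    speeds = [s / peak for s in speeds]
    arduino.write((",".join(f"{s:.2f}" for s in speeds) + "\n").encode())
```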

One promising observation is that, even though each motor is controlled by a separate motor driver and the four motors are split across different Arduinos, the motors change direction at the same time. However, I did notice a lag between when the Jetson sends commands to the Arduinos and when the motors respond. This is something to be aware of, since the Jetson may be commanding the motors based on a frame captured slightly earlier.

Esther and I also did some design work on the board that will hold the various electronic parts. We don’t necessarily have to mount everything on top of the board, but it is useful to create mounting holes for the motor shield, since its heat sink gives it some height. We measured out the dimensions of the motor shield; attached is an image showing how it would be mounted. I will work on engraving the motor shield’s dimensions into the larger board and hope to have it laser cut on Monday or Wednesday.

 

To test how the robot currently navigates, I temporarily used a piece of cardboard to hold the various electronics. Since I am testing in the lab room, I ran into the issue that the floor may have too little friction, causing the wheels to spin at different speeds, so the robot would not go in the intended direction. We plan to add a foam-like covering to the wheels to increase friction. I will also look further into my code and the mechanical structure of the robot to see if there is anything else to troubleshoot.

For next week, I plan to debug the navigation issues, ideally before the demo, laser cut the board for the electronic components, and work with Bhumika to integrate computer vision into navigation. Hopefully, the robot will then be able to respond to camera data and move accordingly.

Esther Jang’s Status Report for 11/6

This week, I worked on mounting and motorizing the linear slides on the robot. The two slides were mounted on the chassis mirroring one another. The pulleys were strung with string attached to the motor that pulls the slides up. I found that the motor alone was insufficient for lowering the slides, so I attached surgical tubing between adjacent slide stages to add tension that pulls the slides down as the string unwinds.

Overall, the slides were able to successfully extend and retract with a bar mounted between them, as shown in the following clip:

There are some slight issues with retracting fully, which I think are related to minor tension problems. Otherwise, the slides were able to retract individually.

At one point in the week, one set of slides was completely stuck and required significant disassembly to resolve. To fix this and help avoid it in the future, lubricant grease was applied to the slides.

Finally, I also tested the claws this week and found one to have significantly better grip range and torque than the other. Using the servo code written a few weeks ago, I was able to get the claw to grab a small, light box. The claw was also mounted onto the end of the slides.

Team Status Report for 11/6

This week we finished the mechanical assembly of the robot and are preparing for the interim demo next week. Bhumika has mainly worked on the computer vision side, with edge detection, laser detection, and April tag detection all working decently well. The linear slide system is motorized and can successfully extend up and down. The claw has been fully tested and can successfully grab objects within its size range. The robot’s navigation code is still under development; currently, the four motors can receive commands from the Jetson Xavier and spin in the commanded direction. We will continue to fine-tune our individual areas of focus and start working on integration later in the week.

Bhumika Kapur’s Status Report for 11/6

This week I worked on all three components of my part of the project: the laser detection algorithm, the April tag detection, and the edge detection.

For the edge detection, I was able to run edge detection directly on the Intel RealSense’s stream with very low latency. The edge detection also appears to be fairly accurate and recognizes most of the edges I tested it on. The results are shown below:

Next, I worked on the laser detection. I improved my algorithm so that it produces an image with a white dot at the location of the laser, if it is in the frame, and black everywhere else. I did this by thresholding for pixels that are red and above a certain brightness. Currently the algorithm works decently well but is not 100% accurate in some lighting conditions. It is also fairly fast, and I was able to apply it to every frame of my stream. The results are shown below:
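The thresholding idea reduces to something like the following, where the red and brightness cutoffs are illustrative and the centroid of the surviving pixels is taken as the laser location.

```python
import cv2
import numpy as np

def find_laser(frame_bgr, min_red=200, min_brightness=220):
    """Return ((u, v), mask): the laser pixel (or None) and the white-on-black mask."""
    _, _, red = cv2.split(frame_bgr)
    brightness = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mask = ((red > min_red) & (brightness > min_brightness)).astype(np.uint8) * 255
    moments = cv2.moments(mask)
    if moments["m00"] == 0:          # no pixels passed the threshold
        return None, mask
    u = int(moments["m10"] / moments["m00"])
    v = int(moments["m01"] / moments["m00"])
    return (u, v), mask
```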

 

Finally, I worked on the April tag detection code. I decided to use a different April tag Python library than the one I was previously using, as the new library returns more specific information such as the homography matrix. I am still a bit unsure how to use this information to calculate the exact 3D position of the tag, but I plan to look into this more in the next few days. The results are below:
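One common way to turn a tag homography into a rough 3D pose under the standard pinhole model is to strip out the camera intrinsics and rebuild the rotation columns. The sketch below is a generic version of that decomposition, not code from our project; the translation comes out in units of the tag’s side length.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover an approximate tag pose (R, t) from the detector's 3x3
    homography H and the camera intrinsic matrix K."""
    M = np.linalg.inv(K) @ H
    # The first two columns of M are scaled rotation columns; normalize them.
    scale = 2.0 / (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1]))
    if M[2, 2] < 0:                      # keep the tag in front of the camera
        scale = -scale
    r1 = M[:, 0] * scale
    r2 = M[:, 1] * scale
    r3 = np.cross(r1, r2)
    t = M[:, 2] * scale                  # translation, in tag-side-length units
    R = np.column_stack((r1, r2, r3))    # (optionally re-orthonormalize via SVD)
    return R, t
```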

During the upcoming week I plan to improve the April tag detection so I can get the exact 3D location of the tag, and also work on integrating these algorithms with the navigation of the robot.

Ludi Cao’s Status Report for 10/30

The robotics parts arrived earlier this week. I assembled the drivetrain chassis shown in the pictures below.

We are short on nuts, so I ordered more, which should arrive in a day or two, to better secure the parts together. I also tested speed control with the BTS7960 motor shield. I wrote the program so that the wheel speed changes based on the number entered into the Serial monitor. Last week I also tested passing information between the Jetson Xavier and the Arduino over serial, and it worked, so integration between the modules looks promising.
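On the Xavier side, that serial test can be reproduced from Python with pyserial; the port name, baud rate, and the assumption that the Arduino sketch echoes something back are placeholders.

```python
import time
import serial

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as link:  # placeholder port/baud
    time.sleep(2)            # give the Arduino time to reset after the port opens
    link.write(b"150\n")     # speed value, just like typing it into the Serial monitor
    reply = link.readline()  # read back any acknowledgement the sketch prints
    print(reply.decode(errors="ignore").strip())
```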

https://www.youtube.com/watch?v=VdI3qAsY-3I

Esther Jang’s Status Report for 10/30

Our parts finally arrived this week, so I spent the week building. Unfortunately, I was unable to work in person on Wednesday and Thursday due to a potential COVID exposure, so most of my work took place on Monday, Friday, and Saturday.

I was able to finish building the mechanical portion of the linear slide system (not including mounting). We have not integrated it yet because we ran out of nuts, but we will receive more by tomorrow. The slides extend fairly well, although they get slightly stuck at times; I think this can easily be fixed with a bit of lubricant. A single slide is 1/3 of a meter long (a full one-meter extrusion cut into thirds), and each set of slides is a mirror of the other.

I also investigated the motor shields with Ludi and was able to control a motor so that it spins at a specified speed for a specified amount of time, holds, and then spins in the opposite direction the same way. This is approximately the operation we need for the linear slides.

Finally, I attempted to get the claw working with the Arduino code written last week but found that the servo requires around 6-8 V to operate. We will likely need a buck converter to power the claw’s servo.

Bhumika Kapur’s Status Report for 10/30

This week I was able to start working on the camera and Jetson Xavier setup. We got the Xavier set up earlier in the week, and I worked on getting the Intel RealSense Viewer running on the Xavier. Once that was working, I worked on installing the Python libraries needed on the Xavier to use the Intel RealSense data. That took me some time, as I ran into many errors during installation, but the library is now set up.

Once the viewer and the library were set up, I began working with the camera’s stream. I ran many of the example scripts provided on the Intel RealSense website. Two of the more useful scripts were one for detecting the depth of an object in the stream, and one that uses the depth image to remove the background of an image. The results are shown below:

The first image shows the result of running the depth script, which returns the depth of a pixel in meters; the x, y, and depth information is printed out. The second image shows the result of using the depth information to remove the background and focus on a specific object.
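The depth query itself is only a few lines with the pyrealsense2 bindings; the stream resolution and the pixel queried below are arbitrary choices for illustration.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    x, y = 320, 240  # query the image center as an example
    print(f"depth at ({x}, {y}): {depth_frame.get_distance(x, y):.3f} m")
finally:
    pipeline.stop()
```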

Next week I plan to use these scripts in conjunction with the edge and laser detection algorithms I have written, so that the robot can use that information to navigate.

Team Status Report for 10/30

This week we received all of the parts we ordered and began building our robot. We set up the Jetson Xavier, and Bhumika has been working on the computer vision aspect of the project with it. Ludi was able to assemble most of the chassis. Esther built the linear slides, and both Ludi and Esther are working on getting the motor shields to work. Next week we will continue assembling the robot and integrating the different components, such as the chassis and the linear slides. Overall, we are satisfied with the progress we have made so far.