Joseph Jang’s Status Report for 4/16

I have fixed the mechanical issues with the robotic arm by tightening the screws and filling the gaps between the servo motor arm and the plastic arm, but I have run into some software and electrical issues.  Sometimes the current draw of the servo motors spikes when the motors are under stall torque, which causes the supply voltage to drop.  This leads the motors to stutter or sometimes not move at all.  While I don't think this is a problem with the servo motor controller board (PCA9685), it could be that the lead-acid battery cannot supply enough current.  I am holding off on buying a lithium-ion battery that can supply higher current to the servo motor controller board, because the voltage drop and current spike only appeared after multiple hours and uses of the lead-acid battery, so the issue might simply be that the battery is not charged enough.
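
One common software-side way to keep these spikes down is to ramp servo moves a few degrees at a time instead of commanding a full swing at once.  The following is only a minimal sketch, assuming the Adafruit ServoKit driver for the PCA9685; the channel number, step size, and delay are placeholders, not values from our system:

import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # PCA9685 board with 16 servo channels

def move_slowly(channel, target_angle, step=2, delay=0.02):
    # Step the servo a few degrees at a time instead of commanding the full
    # swing at once, which keeps the instantaneous current draw lower.
    current = kit.servo[channel].angle or 0
    direction = 1 if target_angle > current else -1
    for angle in range(int(current), int(target_angle), direction * step):
        kit.servo[channel].angle = angle
        time.sleep(delay)
    kit.servo[channel].angle = target_angle

move_slowly(0, 90)  # channel 0 and 90 degrees are placeholder values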

On the inverse kinematics side, the accuracy of the system is still sometimes off by a couple of inches near the bounds of the robot's range of motion.  The MATLAB library seems to be more complicated than I need, so I have also tried a Python library for inverse kinematics called IKPy.  This requires me to make a URDF file, the format commonly used in ROS applications.  In both implementations I am getting the joint angles I expect, but I have yet to test the IKPy solution on the physical robotic arm.  I am hoping that using ROS and the URDF file, which is very standardized, will help me achieve better accuracy and precision.  I also think I might have to simulate the entire KBBQ robot environment (dishes, grill, etc.), which will be easier with ROS.
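
For reference, a minimal sketch of the IKPy workflow described above; the URDF file name and target coordinates are placeholders, not the actual arm's values:

import numpy as np
from ikpy.chain import Chain

# Build the kinematic chain from the URDF file (file name is a placeholder).
arm = Chain.from_urdf_file("kbbq_arm.urdf")

# Solve for joint angles that place the end effector at a target (x, y, z), in meters.
target = [0.20, 0.10, 0.15]
angles = arm.inverse_kinematics(target)

# Run forward kinematics on the solution to check how close it actually lands.
reached = arm.forward_kinematics(angles)[:3, 3]
print(np.round(np.degrees(angles), 1))
print("position error (m):", np.round(reached - np.array(target), 4))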

In other minor updates, I have measured, planned out, and set the locations of the dishes, grill, robot, etc.  This way, the robot will have an easier time being precise within its range of motion.

Raymond’s status report 4/16

I updated my dataset with more images, including hard-to-find images of arms over the grill, in response to the feedback from the ethics class about potential harm from the project. The newly trained network works. The metrics might need to be adjusted a bit in the final report, however, since the category for anything that is not meat is so broad and the dataset is so limited that accuracy will be limited. I will see what I can do.


Jasper and I got the Ethernet connection working. However, with the revelation that the final demo will take place in the UC, orders have been placed for both a router and a WiFi card; we ordered both to keep our options flexible.


I have begun integrating the cooking time algorithm with the computer vision. I will need Jasper to integrate the user interface. Unfortunately, Joseph has not completed his work in time for integration, which has bottlenecked progress, and he will need to step up for my part of the scheduled tasks to be completed.

Joseph Jang’s Status Report 4/10

This week I continued to work on the inverse kinematics of the robot.  To help debug the issue, I disassembled parts of the robot to make sure each joint is being turned to the proper angle, and I found several electrical and embedded software bugs.  I do not think the issue with the inverse kinematics is caused by MATLAB's implementation of the IK solver.  However, I will also be making use of the ROS IK libraries to code in Python.  Another issue I found is that the stepper motor heats up rather quickly even when it is not in action: it stays energized even though no commands are being sent to it.  Therefore, I made use of the enable pin, so that the base of the robot stays in place but the motor is not powered when it is unnecessary.  That way the stepper motor will not be overloaded and burn out.  On several occasions the motor had become very hot, so this was a proper risk mitigation fix.
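
A rough sketch of how the enable pin can be toggled from the Jetson, assuming the Jetson.GPIO library, an active-low enable input on the stepper driver, and a placeholder pin number:

import time
import Jetson.GPIO as GPIO

ENABLE_PIN = 18  # board pin wired to the driver's ENABLE input (placeholder number)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(ENABLE_PIN, GPIO.OUT, initial=GPIO.HIGH)  # assuming active-low: HIGH = driver disabled

def stepper_enabled(on):
    # Energize the coils only while a move is in progress, so the motor is not
    # heating up the rest of the time.
    GPIO.output(ENABLE_PIN, GPIO.LOW if on else GPIO.HIGH)

stepper_enabled(True)
# ... issue step commands here ...
time.sleep(0.5)
stepper_enabled(False)
GPIO.cleanup()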

Jasper Lessiohadi’s Status Report 4/10

For our interim demo last Wednesday, I worked on the UI a bit more. It was not exactly where it needed to be, but I will work on that and on integrating all of the pieces of our project in the next few days. Besides that, since we just had Carnival, I have not made any other progress.

Team Status Report for 4/10

Raymond has been progressing on his project components and has begun part of the integration step. However, he still needs more data to form a more accurate network. Therefore, the Gantt chart is labelled as 90% complete.

Joseph needs to change the type of inverse kinematics he used due to issues with the MATLAB library outlined in the detailed design writeup. He will now be using another inverse kinematics library as one of the risk mitigation options.

Jasper is behind in his progress on the software UI component, as he did not properly test his user interface's interaction with the camera.

As of the demo, the most recent updated Gantt schedule is included in this post. Catch-up may be needed, and some slack time will have to be used to get the system working.

Raymond’s status report for 4/9

In the days before the demo, I modified the pixel-to-inch parameter to fit the size of the meat within the range of error. This value is hardcoded instead of dynamically determined because the meat is anticipated to be at a fixed distance from the camera in the final product, so using fixed values instead of a reference point is the better choice.
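
As a rough illustration of the fixed conversion, the sketch below uses a placeholder calibration constant rather than the value actually used in the project:

# Placeholder calibration at the fixed camera-to-grill distance; not the tuned value.
PIXELS_PER_INCH = 40.0

def bbox_size_inches(bbox):
    """Convert a detection bounding box (x, y, w, h) in pixels to width/height in inches."""
    _, _, w_px, h_px = bbox
    return w_px / PIXELS_PER_INCH, h_px / PIXELS_PER_INCH

print(bbox_size_inches((120, 80, 200, 90)))  # -> (5.0, 2.25) with the placeholder constant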

Before the demo, I was able to download the necessary libraries and files onto the Jetson; however, I lacked enough adaptors to have the keyboard and the camera working at the same time, so I could not get that set up before the demo. During Carnival I obtained the necessary adaptors and got the computer vision working on the Jetson.

Joseph Jang’s Status Report for 4/2

This week, I was able to hardcode some movements for the robotic arm.  I have taken the measurements of the robotic arm and put them into the MATLAB code.  First, I had to specify the rigid body tree model, which I could modify using the addBody function.  Finally, I used the inverseKinematics function to try to get the robot arm moving.  I've tried two different IK solvers, BFGSGradientProjection and LevenbergMarquardt.  However, since multiple solutions are possible for a given (x, y, z) position, the angles of each joint of the robotic arm need to be constrained somehow.  We have to choose the right joint angles by making sure the robot arm links do not come close to the grill.  Although the picture below is not a picture of the robotic arm I'm using, it is the GUI that I am using to control the robotic arm.

Jasper Lessiohadi’s Status Report 4/2

As I wrote in the team status report, this week I finished work on the UI and cooking time algorithms. The UI is hopefully clear enough to provide a clean and intuitive experience for the user. It displays information about the detected thickness of a new piece of meat, how long the system decides it needs to cook, and which section of the grill it will go on. There is also an overhead camera feed of the grill that shows which section is which, letting the user be confident that the system is working as intended. I plan to add the capability to change the remaining cooking time for any section the user selects, but I do not have that working yet.
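
A rough sketch of how labeled grill sections can be drawn over the overhead camera feed with OpenCV; the section coordinates and labels are illustrative, not the actual grill layout:

import cv2

# Placeholder grill sections as (x, y, width, height) rectangles in pixels.
SECTIONS = {"A": (50, 50, 200, 200), "B": (260, 50, 200, 200), "C": (470, 50, 200, 200)}

cap = cv2.VideoCapture(0)  # overhead camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for label, (x, y, w, h) in SECTIONS.items():
        # Outline each section and label it so the user can match it to the UI.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x + 5, y + 25), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Grill sections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()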

Team Status Report 4/2

This week, the team has been diligently working towards being ready for our project demo on Wednesday 4/6. We seem to be in a very good spot, with all of our subsystems at a presentable point.

Joseph can demonstrate the robotic arm we plan to use to flip and transport the meats being cooked. Its inverse kinematics have been difficult to implement, but at least we will be able to show that it moves with the range of motion that we need for our end goal. We have the wiring and other heat-sensitive parts insulated so that they will be protected from the high temperatures of the stove. This has been tested using a soldering iron, which did not have any effect on the more delicate internals of the arm.

Raymond has blob detection working, with the CV able to detect meats with decent accuracy. He has switched from image classification to object recognition to prevent scenarios where a user places two different types of meat on a plate. He has changed parameters of the original blob detection algorithm so it is better at identifying meats instead of other objects in the scene, along with improving performance in low-light conditions.

Jasper has finished work on the UI and cooking time algorithms. The UI is hopefully clear enough to provide a clean and intuitive experience for the user. It displays information about the detected thickness of a new piece of meat, how long the system decides it needs to cook, and which section of the grill it will go on. There is also an overhead camera feed of the grill that shows which section is which, letting the user be confident that the system is working as intended. We plan to add the capability to change the remaining cooking time for any section the user selects, but we do not have that working yet.

Overall, the project is keeping up with the proposed schedule from earlier in the semester, and the team has made great progress.

Raymond Ngo’s status report 4/2

In the previous week, I combined the three different computer vision algorithms into one program that invokes all of them. Along the way, I found out that the blob detection was not as effective as I thought under dim lighting conditions. As a result, I had to modify the blob detection by changing several parameters related to eroding excess lines.
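
The sketch below illustrates this kind of tuning, eroding the frame before running OpenCV's blob detector; the image name, kernel size, iteration count, and area threshold are placeholder values, not the tuned ones:

import cv2
import numpy as np

frame = cv2.imread("grill_frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder test image

# Erode first to remove thin grill lines and speckle that confuse the detector in dim light.
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(frame, kernel, iterations=2)

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 500          # ignore small specks (placeholder threshold)
params.filterByInertia = False
params.filterByConvexity = False
detector = cv2.SimpleBlobDetector_create(params)

keypoints = detector.detect(eroded)
print(f"{len(keypoints)} candidate meat blobs")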

You may have noticed that the image is the result of an object recognition network, not an image classification network. There are several reasons for this. First, the dataset we collected had multiple types of meat strewn together, and at that moment our team realized that if someone placed multiple types of meat on a plate in front of the robotic arm, a classification system would not be robust enough to recognize that and could cause undercooking of meat. Another reason is that if the blob detection fails to work the way we want, the object recognition algorithm is our backup mitigation technique. While object recognition is slower than blob detection, it is still fast enough for our desired metrics.

What you see below is the result of YOLOv5 trained for 150 epochs on a tiny dataset (only 20 images) augmented to 60 images in total, with a batch size of 12 images. YOLOv5 was selected for its speed advantage and its active community support online.
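
Once trained, the resulting weights can be loaded for detection through torch.hub; a minimal sketch, where the checkpoint path and test image are placeholders:

import torch

# Load the custom-trained YOLOv5 weights (path is a placeholder for wherever the
# training run saved its best checkpoint).
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

results = model("plate_with_meat.jpg")   # run object recognition on one frame
results.print()                          # summary of classes, confidences, and boxes
boxes = results.xyxy[0]                  # tensor of [x1, y1, x2, y2, confidence, class]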

I am currently on track to finish by the completion date indicated on the schedule, which is Monday; at the very least, I am mostly done. The hesitation is because the integration period will provide a chance to add to the dataset, which would require more training of the network.

By next week, I hope to begin the integration of the subsystems by having the files uploaded onto the Xavier. Hopefully that will also lead to an improvement in detection time.