Raymond Ngo’s Status Report 4/30

This was a slower week work-wise, focused more on final testing and the final presentation. Unfortunately, we have been slipping behind schedule due to last-minute changes that arose from the difficulty of integration and the demo environment. For example, after feedback we spent time looking at grills with flat bottoms, we looked at different types of grills (or whether we should use a grill at all), and we looked at different power supplies because of the demo environment.

For my part, much of the week was spent integrating the robotic arm with the computer vision, which required many discussions with Joe and plenty of trial and error. I also completed the battery of tests, but have not yet done integrated testing with all components working together.

Raymond’s Status Report 4/23

Unfortunately, Jasper was out this week due to COVID, but with his input the integration between the UI and the Computer Vision, as well as between the Computer Vision and the Cooking Timer, is complete. Integration progress was delayed by several days because the Xavier broke halfway into the week, requiring several days and a reflash to fix. As a result, some progress was lost, but that loss has since been made up with more work.

Furthermore, testing has begun on my computer vision modules for the final presentation. Most of the metrics look fine; however, the edge detection predictions may be a bit off, probably due to camera quality factors that are hard to resolve. This is unfortunately one tradeoff that cannot be avoided, since OpenCV has a maximum usable resolution, and any increase in image resolution severely degrades everything else in the CV pipeline. The solution is to increase brightness even more, since there is only so much an edge detection algorithm can do when noise is present.
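As a rough illustration of that mitigation, the sketch below brightens the frame before blurring and running Canny; the gain, offset, and threshold values are placeholders rather than the tuned values in our pipeline.

```python
import cv2

def detect_edges_brightened(frame, alpha=1.0, beta=40):
    """Brighten a frame before Canny so sensor noise in dim conditions has
    less impact. alpha scales contrast, beta adds brightness (placeholders)."""
    bright = cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)
    gray = cv2.cvtColor(bright, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # smooth residual noise
    return cv2.Canny(blurred, 100, 200)          # thresholds are placeholders
```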

Raymond’s status report 4/16

I updated my dataset with more images, including hard-to-find images of arms over the grill, in response to the feedback from the ethics class about potential harm from the project. The newly created network works. Metrics might need to be adjusted a bit in the final report, however, since the category for anything that is not meat is so broad, and the dataset so limited, that there will be limits on accuracy. I will see what I can do.

 

Jasper and I got the Ethernet working; however, with the revelation that the final demo will take place in the UC, orders have been placed for both a router and a WiFi card. Both orders were placed to keep our options flexible.

 

I have begun integration of the cooking time algorithm with the computer vision. I will need Jasper to integrate the user interface. Unfortunately, Joseph not completing his work in time for integration has bottlenecked progress, and he will need to step up for my part of the scheduled tasks to be completed.

Raymond’s status report for 4/9

In the days before the demo, I modified the pixel-to-inch parameter to fit the size of the meat within the range of error. This value is hardcoded rather than dynamically determined because the meat is anticipated to be a fixed distance from the camera in the final product, so fixed values work better than a reference point.
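For reference, the conversion itself is nothing more than dividing by a hardcoded scale; the scale factor below is a placeholder, not the calibrated value used for the demo.

```python
# Minimal sketch of the fixed-scale conversion.
PIXELS_PER_INCH = 38.0  # assumed calibration at the fixed camera-to-meat distance

def pixels_to_inches(length_px: float) -> float:
    """Convert a measurement in pixels to inches using the hardcoded scale."""
    return length_px / PIXELS_PER_INCH
```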

Before the demo, I was able to download the necessary libraries and files onto the Jetson; however, I lacked enough adaptors to get the keyboard and the camera both working at the same time, so I was unable to get everything running before the demo. I was able to get the necessary adaptors and get the Computer Vision working on the Jetson during Carnival.

Raymond Ngo’s status report 4/2

In the previous week, I combined the three different computer vision algorithms into one program that invokes all of them. Along the way, I found out that blob detection was not as effective as I thought under dim lighting conditions. As a result, I had to modify the blob detection by changing several parameters for eroding excess lines.
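The modification is along the lines of the sketch below, which erodes the binary mask so that thin stray lines from dim-lighting noise disappear before the blob detector runs; the kernel size and iteration count are placeholders, not the tuned values.

```python
import cv2
import numpy as np

def erode_excess_lines(binary_img, kernel_size=3, iterations=2):
    """Erode a binary mask to remove thin stray lines caused by noise in
    dim lighting before blob detection. Parameter values are placeholders."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.erode(binary_img, kernel, iterations=iterations)
```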

You may have noticed the image is the result of an object recognition network and not an image classification network. There are several reasons for this. One, the dataset we collected had multiple types of meat strewn together, and our team realized that if someone placed multiple types of meat on a plate in front of the robotic arm, a classification system is not robust enough to detect that and may cause undercooking of the meat. Another reason is that if blob detection fails to work the way we want, the object recognition algorithm is our backup mitigation technique. While object recognition is slower than blob detection, it is still fast enough for our desired metrics.

What you see below is the result of YOLOv5 trained for 150 epochs on a tiny dataset (only 20 images) augmented to 60 images in total, with a batch size of 12 images. YOLOv5 was selected for its speed advantage and its active community support online.
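For reference, a training run along these lines using the standard YOLOv5 train.py interface would look roughly like the sketch below; the dataset yaml name, image size, and repo path are assumptions, not our exact setup.

```python
import subprocess

# Hedged sketch of the training invocation with the stock YOLOv5 train.py flags.
# Assumes the yolov5 repo is cloned locally and a "meat.yaml" dataset file exists.
subprocess.run([
    "python", "train.py",
    "--img", "640",          # assumed input size
    "--batch", "12",         # batch size used per the report
    "--epochs", "150",       # epochs used per the report
    "--data", "meat.yaml",   # hypothetical dataset config
    "--weights", "yolov5s.pt",  # small model for speed
], check=True, cwd="yolov5")
```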

I am currently on track to finish by the completion date indicated on the schedule, which is Monday; at the very least, I am mostly done. The hesitation is because the integration period will provide a chance to add to the dataset, and that would require more training of the network.

By next week, I hope to begin the integration of the subsystems by having the files uploaded onto the Xavier. Hopefully that will also lead to an improvement in detection time.

Raymond Ngo’s status report for 3/26

As promised last week, I was able to solve the issue of blob detection not working on larger blobs. As it turns out, the reason I was unable to solve the issue earlier was that I had been turning off the filtering mechanisms one by one, when I actually had to disable multiple preset parameters at once to get blob detection to work on larger blobs. Furthermore, this past week I worked on ways to improve the blob detector and edge detector on different types of images. As seen in last week’s report, I had previously only been testing on a simple slab of meat on a white background. This week, I obtained more images, especially the type seen below where the meat is on a tray, which is how I expect the final environment to look. As you can see, my parameter changes mean edge and blob detection now work accurately in more scenarios (note the 3 circles for the 3 trays of meat).
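The change amounts to relaxing the detector’s preset filters all at once, roughly as in the sketch below; the area limits and file name are placeholders, not the values our detector actually uses.

```python
import cv2

# Sketch of relaxing SimpleBlobDetector's preset filters for large blobs.
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 500                 # accept large regions instead of tiny specks
params.maxArea = 1_000_000           # raise the default cap that rejects big blobs
params.filterByCircularity = False   # trays of meat are not circular
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
mask = cv2.imread("meat_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder binary image
keypoints = detector.detect(mask)
print(f"detected {len(keypoints)} blobs")
```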

Next week, my two primary tasks are beginning the setup of a neural network and getting the blob and edge detectors to take in a live camera feed to test their effectiveness in a more real-world setting. I also anticipate collecting and tagging a dataset.

Raymond Ngo’s Status Report for 3/19

I was able to modify test images to isolate the red portions of the image. From there, I was also able to easily obtain the outer edges using the Canny edge detector. Isolating the red portions of the image is necessary for the development of the computer vision system because the meat is red, and Canny edge detection on an unmodified image does not return a clean outer border around the meat (shown below).
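A minimal sketch of the red-isolation step followed by Canny is below; the HSV ranges, thresholds, and file name are assumptions and would need tuning for the actual camera and lighting.

```python
import cv2

# Isolate red regions, then run Canny on the resulting binary mask.
img = cv2.imread("test_meat.jpg")                 # placeholder test image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two hue ranges.
lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))
red_mask = cv2.bitwise_or(lower, upper)

edges = cv2.Canny(red_mask, 100, 200)  # clean outer border from the binary mask
cv2.imwrite("meat_edges.png", edges)
```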

The main issue I need to solve by next week to remain on track is figuring out why the blob detector function in OpenCV does not work well with the processed binary image (shown below). Otherwise, I am on track to complete a significant portion of the width detection and blob detection algorithms.

The reason preprocessing needs to be done on the image is that meat usually is not on a white background like this. Therefore, distractions must be removed for more accurate detection of the material.

By next week, some form of blob detection must work on this test image, and ideally one other more crowded image.

Test image used
Successful conversion to a binary image; unsuccessful attempt to get SimpleBlobDetector or contour functions to recognize it
Successful use of Canny. Without the preprocessing beforehand, the edge would not be this clean.

Raymond Ngo’s Status Report for 2/26

I am not able to provide the deliverables I promised last week because the schedule has been tighter than I anticipated, and as a result I was not able to properly tune the parameters for blob detection. The most it could do was detect the shadow in the corner. Ideally, a blob detector with better-tuned parameters is the deliverable for next week’s deadline.

The main reason was the tight schedule between the presentation and the design report. During the past week (from last Saturday to 2/26), a lot of work went into the design presentation, mainly communicating among team members about the exact design requirements of the project. Questions came up about finding a temperature probe that could withstand 500°F or higher, as well as about the best type of object detection. Furthermore, as I was the one presenting the design slides, it was my responsibility to practice every point in the presentation and to make sure everyone’s slides and information were aligned. In addition, since it is Joseph, not I, who has the robotics knowledge, I had to ask him and do my own research on the specific design requirements.

This week, I also conducted research on other classification systems after the feedback from our presentation. Our two main issues are the lack of enough images to form a coherent dataset, hence our lower classification accuracy metrics, and our choice of a neural network over other classification systems. One possible risk mitigation tool I found was using a different system to identify objects, perhaps SIFT. That, however, would require telling the user to leave food in a predetermined position (for example, specifically not having thicker slabs of meat rolled up).
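A hedged sketch of what the SIFT alternative could look like is below: match a reference image of a known cut against the camera frame and count confident matches. The file names and ratio-test threshold are placeholders, not a worked-out design.

```python
import cv2

# Match SIFT features from a reference cut against the current camera frame.
reference = cv2.imread("reference_cut.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)       # placeholder

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(reference, None)
kp_frame, des_frame = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only confident matches; many matches suggests the same cut.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_ref, des_frame, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")
```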

We are on schedule for class assignments, but I am a bit behind in configuring the blob detection algorithm. I am personally not too worried about this, because the blob detection algorithm was me working ahead of schedule anyway.

Raymond Ngo’s Status Report for 2/19

This prior week I got myself acquainted with OpenCV and its libraries. I successfully made a function that captures webcam data both as a continuous stream and when invoked. I successfully applied the Canny edge filter (for thickness detection) to a captured image and increased its threshold (proof below). This is necessary for the computer vision part of the project because it will be the primary way to detect meat thickness for the cooking time estimation.
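A minimal sketch of that capture-and-filter step is below; the camera index and Canny thresholds are placeholders rather than the exact values used for the screenshot.

```python
import cv2

def capture_frame(device=0):
    """Grab a single frame from the webcam when invoked."""
    cap = cv2.VideoCapture(device)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    return frame

frame = capture_frame()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 150, 250)  # raised thresholds to suppress weak edges
cv2.imwrite("edges.png", edges)
```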

I am currently on schedule. Figuring out the features of OpenCV and trying out some of its tools is important before starting the real work of creating tools for the project. Furthermore, finding the limitations of some computer vision methods is important before the design review.

Next week’s deliverables: some rudimentary form of blob detection that uses OpenCV to capture and process an image. This is necessary because the action that kick-starts the cooking process is a user placing meat in front of the camera, which requires blob detection to determine whether an object is present.

Raymond Ngo’s status report for 2/12

This past week I took a further look at the types of computer vision algorithms needed for the thickness estimation. While we initially decided on using a neural network to determine the type of meat to help find the cooking time, we decided this would not be a good idea, owing to the different colors (from marinating) and the similarity of various types of meat. We would also have issues finding a proper dataset to train on.

 

Instead, I looked through the different methods of finding thickness, and the best approach seemed to be the Canny edge detector function in OpenCV. The challenge for the upcoming week will be finding a way to make sure the thickness measurement (most likely in pixels) is accurate. The second issue will be making sure the meat is measured in a similar environment each time. This will most likely be done by having the robotic arm lift the meat to the same location every time, with the only variable being the position at which the arm grabs the meat. However, this ignores the really thin cuts of meat. In the coming week, I will discuss the possibility of removing that type of cut from our testing metric completely, given how different it is from every other type of meat we plan to test. Included is an image of the outlier meat cut.
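A rough sketch of what the pixel thickness measurement could look like is below, assuming a side-on view in which the meat is the only object in frame; the file name and thresholds are placeholders, not a settled design.

```python
import cv2
import numpy as np

# Measure vertical extent of the meat outline, in pixels, from a Canny edge image.
img = cv2.imread("meat_side_view.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image
edges = cv2.Canny(img, 100, 200)

ys, xs = np.nonzero(edges)              # coordinates of edge pixels
thickness_px = int(ys.max() - ys.min()) # top-to-bottom extent of the outline
print(f"thickness: {thickness_px} pixels")
```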

 

Image of the outlier thin meat cut (Korean BBQ style, from Korean BBQ Online)