Ishita Sinha’s Status Report for 05/08/21

This week, my team and I worked on completing the integration of the diverter into our project and testing it. To begin with, we worked on figuring out how to make the servo drive the diverter. We had earlier been thinking of drilling the arms of the servo into the diverter, but that wouldn't have worked since the body of the servo would have been left unanchored. After some trial and error, we came up with a working solution. After that, I integrated the servo code with the rest of our code. Next, I set everything up so that all of the components of the product were placed as they will be in the final setup, examined the camera streams to figure out the right camera positions, and placed the cameras accordingly. I then updated the rottenness threshold to account for shadows and performed several tests on good and rotten bananas to confirm that they were being detected, classified, and diverted correctly. The diverter received the signal for diversion within just 0.08 seconds of the banana image being read, so the entire pipeline averages around 0.08 seconds end to end, which comfortably meets our timing requirements.
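For reference, here is roughly how that end-to-end latency can be measured. The pipeline functions below are dummy stand-ins with hypothetical names (our real code does the actual detection, classification, and signaling); only the timing pattern is the point.

```python
import time

# Dummy stand-ins for the real pipeline stages (hypothetical names).
def read_frame():              return None
def detect_and_segment(frame): return "banana", None
def classify_rottenness(mask): return False
def send_divert_signal(rot):   pass

frame = read_frame()
start = time.perf_counter()
label, mask = detect_and_segment(frame)   # detection + segmentation
rotten = classify_rottenness(mask)        # percentage-rottenness classifier
send_divert_signal(rotten)                # tell the diverter what to do
print(f"image read -> divert signal: {time.perf_counter() - start:.3f} s")  # ~0.08 s in our tests
```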

Currently, my team and I are working on the final video and poster. We're also working on image segmentation for carrots, and I'm now working on the AlexNet rottenness classifier to see how well it would have performed and how long it would have taken. We'll continue with this until Monday, and will then start writing up our final report from Tuesday onwards. I'm happy it seems to have all worked out so far for the project, and we're now in the last leg: presenting it!

I would like to thank all of the instructors and TAs, especially Prof. Savvides and Uzair, for all of their help and guidance and for a wonderful semester!

Ishita Sinha’s Status Report for 05/01/21

This week, my team and I have been working on integrating the algorithm with the actual conveyor belt system and getting it to work. To begin with, I switched all of the code over to NumPy, which gave a large speedup: the entire algorithm now runs in around 0.04 seconds, compared to the earlier 3-4 seconds. Next, I instrumented the code to better detect when the banana is fully in the frame, versus coming in, going out, or not in the frame at all. The edge detection and frame analysis are performing very well.

I also worked on the final product setup; we just need to place the other camera and the diverter. I have been testing the conveyor belt system extensively with the algorithm to see how well the image segmentation performs. There were some issues, but they are largely resolved with brighter light, so I've written code to increase the brightness of the image (a rough sketch of the idea follows below) and am also looking for a brighter light. When the image is bright, the algorithm performs very well with the conveyor belt system.

Lastly, I worked on a design for our diverter that will actually divert the fruit on the belt into the rotten vs. the fresh fruit basket. We have the CAD design and should be 3D printing it tomorrow. Our plan is for the diverter to extend a bit into the conveyor belt so that it can start diverting the fruit well before the fruit reaches the end of the belt. I tested this idea using a book, and it seems to work, so we hope it works out with the 3D-printed part!
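On the brightness fix mentioned above, here's a minimal sketch of the kind of adjustment I mean, using OpenCV's convertScaleAbs. The alpha/beta values and the file name are assumptions, not our tuned settings.

```python
import cv2

# Hypothetical frame; in the real pipeline this comes from the belt camera.
frame = cv2.imread("banana_frame.jpg")  # path is a placeholder

# Linear brightness/contrast boost: out = alpha * pixel + beta, clipped to [0, 255].
brighter = cv2.convertScaleAbs(frame, alpha=1.3, beta=40)
```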

For future steps, we plan on picking up the 3D-printed diverter and setting it up with the servo motor to verify that the diversion works well, so there's going to be a lot of testing in the upcoming week. It's the last leg, so I hope it works out! We also need to write image segmentation code for cucumbers and carrots, but we plan on looking into that after we have the entire system working for a banana, since that's our MVP. The AlexNet classifier isn't urgent since our classification system is meeting the timing and accuracy requirements extremely well, but I could work on that after we have this setup working. I hope it all works out!

Ishita Sinha’s Status Report for 04/24/21

Over the past few weeks, I worked on settling on a final model for fruit detection. I was working with a pre-trained YOLO model and also tried developing a custom model. However, the custom model did not perform meaningfully better in terms of accuracy, nor did it reduce the computation time, so I decided we'd continue with the YOLO model. I tested the fruit detection accuracy using a good Kaggle dataset I found online, with images similar to the ones that would be taken under our lighting conditions and background. Out of a dataset of around 250 images, 0% of the bananas, around 3% of the apples, and around 3% of the oranges were misclassified. The fruit detection was taking a while, around 4 seconds per image; however, based on the speedup offered by the Jetson Nano, this should work out for our purposes.
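For illustration, here's one common way to run a pre-trained YOLO model through OpenCV's DNN module; we may load ours differently, and the file names and the 0.5 confidence cut-off below are assumptions.

```python
import cv2
import numpy as np

# Pre-trained YOLOv3 config/weights (file names are placeholders).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

img = cv2.imread("banana.jpg")  # path is a placeholder
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, w, h, objectness, per-class scores...].
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:  # confidence cut-off is an assumption
            print("class", class_id, "confidence", scores[class_id])
```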

I also worked on integrating all of the code we had, so the Nano can now read an image, process it, pass it through our fruit detection and classification algorithms, and output whether the fruit is rotten or not. I also integrated the code for analyzing whether the fruit is partially or completely in the frame.
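As a rough illustration of the in-frame analysis (the actual rule in our code differs slightly), the idea is to check whether the detected bounding box touches the frame edges, assuming the belt carries fruit horizontally across the image:

```python
def frame_state(box, img_w, margin=5):
    """Sketch of the frame analysis. box is an (x, y, w, h) detection,
    or None if nothing was detected; margin is a small pixel tolerance."""
    if box is None:
        return "not in frame"
    x, y, w, h = box
    if x <= margin:
        return "coming in"      # still entering at one edge (belt direction assumed)
    if x + w >= img_w - margin:
        return "going out"      # leaving past the other edge
    return "fully in frame"     # safe to capture and classify
```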

For the apples and oranges, I tested the image segmentation and classification on a very large dataset, but it did not perform too well, so I plan on tuning the rottenness threshold I've set while Ishita Kumar improves the masks we get, so that we can raise the accuracy.

As for the integration, we plan on writing up the code for the Raspberry Pi controlling the gate in the upcoming week. We plan on setting up the conveyor belt and testing our code in a preliminary manner with a temporary physical setup soon, without the gate for now.
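Since that Raspberry Pi code is still to be written, here's only a rough sketch of the sort of servo control we'd likely use; the pin number and duty-cycle values are assumptions that depend on our wiring and servo.

```python
import time
import RPi.GPIO as GPIO  # only runs on the Pi itself

SERVO_PIN = 18  # assumed BCM pin; the real wiring may differ

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz hobby-servo signal
pwm.start(7.5)                 # roughly the neutral position

def set_gate(rotten):
    # ~5% and ~10% duty cycle are the two ends of travel on a typical servo.
    pwm.ChangeDutyCycle(10.0 if rotten else 5.0)
    time.sleep(0.3)            # give the gate time to swing
```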

The fruit detection sometimes detects apples as oranges, and in a few instances it fails to detect extremely rotten bananas at all, so I'll need to look into whether that can be improved.

For future steps on my part, I need to test the integrated code on the Nano with the conveyor belt and real fruits. Once that's working, we will move on to getting the Raspberry Pi and servo running. In parallel, I can tune the rottenness classification threshold for apples and oranges. The AlexNet classifier isn't urgent since our classification system currently meets the timing and accuracy requirements, but I'll work on implementing it once we're in good shape with the above items.

Ishita Sinha’s Status Report for 04/10/21

This week, I worked on testing our classifier on a much larger dataset to ensure the algorithm generalizes well to many types of bananas. It achieved a 2.14% misclassification rate on a dataset of approximately 2000 images of good bananas, and a 0.94% misclassification rate on a dataset of approximately 2800 images of bad bananas. The algorithm also ran quite efficiently, so it was good to see that we are beating the targets we set by a huge margin.
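For concreteness, the evaluation amounts to something like the sketch below; the folder layout and classify_banana are stand-ins for our actual dataset and classifier, not our exact code.

```python
from pathlib import Path
import cv2

def misclassification_rate(folder, expected_rotten, classify_banana):
    # classify_banana(img) -> True if predicted rotten (stand-in for our classifier)
    images = list(Path(folder).glob("*.jpg"))
    if not images:
        return 0.0
    wrong = sum(classify_banana(cv2.imread(str(p))) != expected_rotten for p in images)
    return 100.0 * wrong / len(images)

# e.g. misclassification_rate("dataset/good_bananas", False, classify_banana) -> ~2.14
```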

Besides this, I clicked our own images of bananas to check that the algorithm works well on pictures we take ourselves, and it achieved excellent classification accuracy and performance on those too. I also started developing the AlexNet classification code to see if it could improve on our existing classifier.

We seem to be on track for now, but the upcoming week is going to be a hectic one. We planned it this way since we have a bit of a break, but I hope we can stay on track with our progress. For future steps on my part, I need to test our model on more personally clicked images, complete the AlexNet classifier, and start on the object classification code, so that by the time we transition to testing apples and oranges, I have code in place to check whether a fruit is an apple, an orange, or a banana.

Ishita Sinha’s Status Report for 04/03/21

This week, I worked with my team on assembling the conveyor belt and also worked on the pixel classification algorithm.

As for the conveyor belt, I suggested a final model that would meet our requirements and determined its feasibility. For now, we plan on using channels to lodge the conveyor belt at a height such that the leather belt running around it has no friction against the wood; we still need to work out the details. I also worked in CAD to help 3D print the parts.

For the pixel classification, I tested it on a dataset and kept adjusting the threshold to find the optimal value. A threshold of 20% rottenness seems to separate the good bananas from the rotten ones best. Examining the classification results, less than 0.8% of the rotten bananas were misclassified out of a sample of 1100+ rotten-banana images, and only around 2% of the good bananas were misclassified out of a sample of around 800 good-banana images, which meets our benchmarks. The misclassified images were primarily ones with a yellow or brown background, which contributed to the misclassification. However, we'll be using a plain black or plain white background for our fruits, so that should serve our purposes. I tried using edge detection to first segment out the bananas before running the image segmentation, but it didn't help much, since on those images the edges weren't detected well enough to form a good mask. I'll be testing the algorithm on another test dataset, and then we plan on transitioning to testing on real bananas. I'm looking into the AlexNet classification, but I doubt we'll need it since our classification algorithm is giving wonderful results.
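The threshold search boils down to something like the sketch below; the per-image percentage values are assumed to come from our segmentation step, and the candidate range is an assumption.

```python
import numpy as np

def sweep_thresholds(good_pcts, rotten_pcts, candidates=np.arange(5, 51, 1)):
    """Pick the rottenness-percentage cut-off with the fewest total errors.

    good_pcts / rotten_pcts are per-image percentage-rotten values for
    known-good and known-rotten bananas."""
    best_t, best_err = None, float("inf")
    for t in candidates:
        # a good banana at/above t, or a rotten one below t, is an error
        err = np.sum(np.asarray(good_pcts) >= t) + np.sum(np.asarray(rotten_pcts) < t)
        if err < best_err:
            best_t, best_err = t, err
    return best_t
```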

We seem to be on track with respect to our schedule. Over the next few days, I plan on looking into AlexNet and implementing a very basic version just to see its classification results, though I doubt we'd need that, so I'll focus my efforts more on getting the conveyor belt setup done. We must also click pictures of actual bananas for testing, and we should have the live camera feed working in the next few days. After that, we should start looking into building the gate and automating its rotation.

Ishita Sinha’s Status Report for 03/27/21

This week, I worked on implementing edge detection for separating the fruit from the background. We may need this when we introduce multiple fruits, so that we can separate the fruit from the background, detect which fruit it is by examining the colour within the detected region, and run our algorithms accordingly. Here are the results of running edge detection on the image of a banana:

[Image: edge detection results on a banana]

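A minimal sketch of this kind of edge detection using OpenCV's Canny detector follows; the file name and thresholds are assumptions, and our parameters may differ.

```python
import cv2

img = cv2.imread("banana.jpg")                 # file name is a placeholder
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # smooth out texture noise first
edges = cv2.Canny(blurred, 50, 150)            # hysteresis thresholds are assumptions
cv2.imwrite("banana_edges.png", edges)
```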
I developed the first classifier for our product: the percentage-area rottenness classifier. It considers the good parts of the fruit and the seemingly bad parts and computes the percentage rottenness of the fruit. If that percentage is above a certain threshold, it classifies the fruit as rotten; otherwise, it classifies it as good. Ishita Kumar worked on segmenting the good versus bad parts of the banana, and I pass her result into my classifier to classify the image. Over the next week, I plan on finding an optimal threshold by testing the classifier on a large number of Google images, as well as some manually taken images if possible. We have found a Kaggle dataset containing images of rotten versus good bananas, so I plan on using that to determine the threshold.
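In code, the classifier boils down to something like the sketch below, where the two boolean masks are assumed to come from the segmentation step and the threshold is still to be tuned:

```python
import numpy as np

def percent_rotten(fruit_mask, rotten_mask):
    """fruit_mask: boolean array marking fruit pixels (from the segmentation);
    rotten_mask: boolean array marking seemingly rotten pixels."""
    fruit_pixels = np.count_nonzero(fruit_mask)
    if fruit_pixels == 0:
        return 0.0
    return 100.0 * np.count_nonzero(rotten_mask & fruit_mask) / fruit_pixels

def classify(fruit_mask, rotten_mask, threshold=20.0):  # threshold to be tuned
    return "rotten" if percent_rotten(fruit_mask, rotten_mask) >= threshold else "good"
```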

I also looked into AlexNet for the second classifier. I have started working on it and plan to have it running in the coming week; I can test it using the Kaggle dataset.
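The rough setup I have in mind is standard transfer learning from torchvision's pretrained AlexNet; the hyperparameters below are assumptions, not final choices.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained AlexNet and retrain only the output layer
# for two classes (fresh vs. rotten).
model = models.alexnet(pretrained=True)
for p in model.features.parameters():
    p.requires_grad = False                 # freeze the convolutional features
model.classifier[6] = nn.Linear(4096, 2)    # replace the 1000-way output head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
```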

Regarding the choice between the two classifiers, I was thinking that instead of using only one of them, we could first run the percentage-rottenness algorithm. If the percentage is above a "rotten" threshold, we classify the fruit as rotten; if it's under a "good" threshold, we classify it as good. If it falls in the gray area between these two thresholds, we classify it using the AlexNet classifier. I still need to discuss this with my teammates and the instructors, and this approach would also depend on how well the AlexNet classifier works.
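The cascade logic would look roughly like this sketch; the threshold values and alexnet_predict are placeholders.

```python
def cascade_classify(pct_rotten, image, alexnet_predict,
                     rotten_thresh=20.0, good_thresh=10.0):
    """Two-threshold cascade (threshold values here are placeholders).

    Clear-cut cases are decided by the cheap percentage test; only the
    ambiguous middle band is sent to the slower AlexNet classifier."""
    if pct_rotten >= rotten_thresh:
        return "rotten"
    if pct_rotten <= good_thresh:
        return "good"
    return alexnet_predict(image)   # gray area: defer to the CNN
```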

As part of the team, we all started working on the conveyor belt this week and plan to have it done by the end of next week. My progress, as well as the team’s progress, is on schedule so far.

Ishita Sinha’s Status Report for 03/13/21

This week, I worked on implementing a colour analysis algorithm that can identify the colours in a picture, both to determine the rottenness of the fruit and to identify the fruit itself. My algorithm originally wasn't using the HSV colour space and wasn't working well; after switching it over to HSV, it's providing much better results. I still need to test it on images of rotten bananas, since so far I've been working only with images of good bananas. Next week, Ishita Kumar and I plan on meeting to integrate our code and develop a classifier for rotten vs. fresh bananas. I also plan on working on the Design Review Report in the upcoming week.
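For illustration, here's the shape of the HSV-based analysis in OpenCV; the hue/saturation/value bounds below are rough guesses that need tuning, and the file name is a placeholder.

```python
import cv2
import numpy as np

img = cv2.imread("banana.jpg")                    # file name is a placeholder
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)        # OpenCV HSV: H in [0, 179]

# Approximate colour ranges; exact bounds need tuning on real images.
yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # healthy peel
brown  = cv2.inRange(hsv, (5, 50, 20), (20, 255, 150))    # browning/rotting spots

print("yellow px:", np.count_nonzero(yellow), "brown px:", np.count_nonzero(brown))
```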

Ishita Sinha’s Status Report for 03/06/21

This week, my teammates and I worked on finalizing the parts we’d need for the critical components and ordered some of those to get started with tinkering with them. We also worked on finalizing how our entire design would work, while also determining backups for the different components in the design.

I worked on implementing the code for a color analysis algorithm so I could integrate it with Ishita Kumar’s code. She’s working on the image segmentation code for identifying the fruit and for separating it from its background. I would then plug in my algorithm to analyze the colors of the fruit in order to predict the rottenness of the fruit.

In the upcoming week, I plan on finalizing the color analysis code and integrating the code we have so far, so that we can develop a classifier for ripe vs. rotten bananas over the next 1-1.5 weeks. For now, I'm working with Google images. As a team, I believe we should also start tinkering with the Jetson Nano to see how to program it, and figure out how to program the cameras to take pictures at the given intervals.

We are behind the schedule we had planned, but that schedule was quite ambitious, so even though we're lagging, we're still doing okay. We're working on an updated schedule based on our actual progress, and we'll need to speed up over the next few weeks to leave enough time for testing.

Ishita Sinha’s Status Report for 02/27/21

This week, my teammates and I have been working on finalizing our design and determining our parts list accordingly. We plan on getting this approved on Monday. Attached below is the preliminary design I drew out for our product:

Besides this, I've also been looking into the color analysis of bananas to write up the code for a rottenness predictor based on color. So far, I've primarily researched methods for this and have tried to understand how color analysis is performed. I've also looked into how we can develop a rottenness predictor given a dataset of images classified as rotten, ripe, and unripe.

We are behind our planned schedule in that we haven't yet ordered parts, but I believe that's okay since we are waiting to confirm our parts list on Monday, after which we'll place the order. Beyond that, in terms of working on color analysis and starting on rottenness prediction, I believe I'm on track.

In the upcoming week, I hope my team and I can finalize the parts list and order the parts. I also plan on downloading Google images of bananas, making the backgrounds white where they aren't already, and building a dataset of rotten, ripe, and unripe bananas. I also want to work on the color analysis algorithms and train them on this dataset so that my team and I can test the algorithm's predictions before we develop our actual dataset.

Ishita Sinha’s Status Report for 02/20/21

This week, in our weekly team meeting with Professor Savvides and our TA, Uzair, we worked on reasoning about the feasibility of our project and what we could modify about our proposed MVP to make it better suit our needs. We realized that since bananas rot quite quickly, using a banana in the MVP would be a good idea. We also discussed ways in which we could get pictures of the fruits from all angles. Another idea that was brought up was that we could predict the rottenness of the fruit, or how long it was expected to last, and this sounded like quite an interesting idea for our final project!

Initially, we were considering only round fruits in order to be able to cover all of the sides with ease using a rotating disk. However, when we started considering bananas, we shifted our focus to find ways in which we can hold the fruit up and click pictures from all angles to check for rotting. I spent some time this week thinking about the design of our model. We could have a holder that could hold our fruits up, and then we could have a camera on a rotating stand that could click pictures of the fruit from all angles.

However, this would mean we'd need quite a heavy stand to hoist the camera on, since rotating the stand would not be easy. Thus, an alternative is to have the holder rotate the fruit instead. For this, I looked into rotating disks: we could attach a hook to the disk to hold the banana (or any other fruit), and then, as the fruit rotated, the camera would capture pictures. When I looked up rotating disks online, I usually found ones that looked like the one below:

Such a rotating disk would've suited our earlier model with round fruits, but it doesn't work for our current model. We would want a rotating disk with a hook attached, from which we could hang the fruit. Thus, I think we'll probably need to 3D print the rotating disk so that we can attach a hook to it, or design it so that it already has a hook built in, from which we can hang the fruit.

We seem to be on schedule for now. Over the next week, first off, I plan on updating our project proposal presentation in order to ensure we are putting across our idea clearly. Post that, I plan on looking into image segmentation and edge/color detection algorithms so that I understand them better since we would need these algorithms not only to identify the fruit but also to be able to predict rottenness.