Team Status Report for 05/08/21

This week was the last week we could put in work before the video, so it was a crucial week for us: we had to ensure we had at least our MVP working in time for the video.

The team worked on figuring out how to attach the servo to the diverter we had printed. We then fixed our image segmentation masks to ensure we were detecting the rotten and good parts of the banana correctly. Once that was done, we set up our final product and ran several tests to check whether we were meeting the metric requirements we had set out to meet; for the MVP, at least, we seem to be meeting them. We're working on image segmentation for carrots so we can include it in our product, and we're also developing the AlexNet rottenness classifier so we can compare its performance to our current classifier and gain insights. We're now working on the final video and poster.

Below is a picture of our final product setup. We would like to thank Prof. Savvides and our TA, Uzair, for all of their help and support throughout the semester!

[Image: final product setup]

Ishita Sinha’s Status Report for 05/08/21

This week, my team and I worked on completing the integration of the diverter into our project and testing it. To begin with, we worked on figuring out how to make the servo drive the diverter. We had earlier thought of drilling the servo arms into the diverter, but that wouldn't have worked since the body of the servo would have been left unsupported; after some trial and error, we came up with a solution. After that, I integrated the servo code with the rest of our code. Post that, I set everything up so that all of the components were placed as they would be in the final setup, and I looked at the camera streams to figure out the right camera positions and placed them. I then updated the rottenness threshold to account for shadows and performed several tests on good and rotten bananas to verify that they were being detected, classified, and diverted correctly. The diverter received the diversion signal an average of about 0.08 seconds after the banana image was read, so the entire pipeline runs in roughly 0.08 seconds, which comfortably meets our timing requirement.
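
For reference, a minimal sketch of how the servo could be driven from Python is below. This assumes a hobby servo driven with a 50 Hz PWM signal from a Raspberry Pi GPIO pin; the pin number and diverter angles are placeholders rather than our exact values.

```python
# Minimal servo-driving sketch (assumption: RPi.GPIO, 50 Hz hobby servo).
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18          # hypothetical BCM pin carrying the servo signal
FRESH_ANGLE = 45        # angle that diverts fruit towards the fresh basket (placeholder)
ROTTEN_ANGLE = 135      # angle that diverts fruit towards the rotten basket (placeholder)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz servo control signal
pwm.start(0)

def set_angle(angle):
    """Map an angle in [0, 180] to a duty cycle, move, then stop driving to avoid jitter."""
    duty = 2.5 + (angle / 180.0) * 10.0   # roughly 2.5% at 0 degrees, 12.5% at 180 degrees
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)                       # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)

def divert(is_rotten):
    """Move the diverter based on the classifier's decision."""
    set_angle(ROTTEN_ANGLE if is_rotten else FRESH_ANGLE)
```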

Currently, my team and I are working on the final video and poster. We're also working on image segmentation for carrots, and I'm working on the AlexNet classifier for rottenness to see how well it would have performed and how long it would have taken. We'll continue this until Monday and will then start writing our final report from Tuesday onwards. I'm happy it has all worked out so far, and we're now in the last leg: presenting what we've built!

I would like to thank all of the instructors and TAs, especially Prof. Savvides and Uzair, for all of their help and guidance and for a wonderful semester!

Ishita Sinha’s Status Report for 05/01/21

This week, my team and I have been working on integrating the algorithm with the actual conveyor belt system and getting it to work. To begin with, I switched all of the code to use NumPy, which gave a large speedup: the entire algorithm now runs in around 0.04 seconds, compared to the earlier 3-4 seconds. Next, I instrumented the code to better detect when the banana is fully in the frame, versus when it is coming in, going out, or isn't in the frame at all; the edge detection and frame analysis seem to be performing very well. I also worked on the final product setup, where we just need to place the other camera and the diverter. I have been testing the conveyor belt system extensively with the algorithm to see how well the image segmentation performs. There were some issues that are largely resolved with brighter light, so I've written code to increase the brightness of the image (see the sketch below), but I am also looking for a brighter light. When the image is bright, the algorithm performs very well with the conveyor belt system.

Lastly, I worked on a design for our diverter that will push the fruit on the belt into the rotten v/s the fresh fruit basket. We have the CAD design and should be 3D printing it tomorrow. Our plan is for the diverter to extend a bit further over the conveyor belt so that it can start diverting the fruit well before it reaches the end of the belt. I tested this using a book, and it seems to work, so we hope it works out with the 3D printed part!
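
For illustration, the brightness adjustment mentioned above could look something like the following sketch, assuming OpenCV; the gain and bias values are placeholders rather than our tuned values.

```python
# Brightness-boost sketch (assumption: OpenCV; gain/bias values are illustrative).
import cv2

def brighten(image_bgr, gain=1.3, bias=25):
    """Scale and offset pixel intensities, clipping to the valid 0-255 range."""
    return cv2.convertScaleAbs(image_bgr, alpha=gain, beta=bias)

frame = cv2.imread("belt_frame.jpg")   # hypothetical frame from the belt camera
bright = brighten(frame)
```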

For future steps, we plan on getting the 3D printed diverter and setting it up with the servo motor to verify that the diversion works well, so I think there's going to be a lot of testing in the upcoming week. It's the last leg, so I hope it works out! We also need to write image segmentation code for cucumbers and carrots, but that's something we plan on looking into after we have the entire system working for a banana, since that's our MVP. The AlexNet classifier isn't urgent since our classification system is meeting the timing and accuracy requirements extremely well, but I could work on that after we have this setup working.

Ishita Sinha’s Status Report for 04/24/21

Over the past few weeks, I worked on finalizing a model for fruit detection. I was working with a pre-built YOLO model and also tried developing a custom model. However, the custom model did not perform much better in terms of accuracy and did not reduce the computation time either, so I decided we'd continue with the YOLO model. I tested the fruit detection accuracy with this model on a good Kaggle dataset I found online, with images similar to the ones we'd capture under our lighting conditions and background: out of around 250 images, 0% of the bananas, around 3% of the apples, and around 3% of the oranges were misclassified. The fruit detection was taking a while, around 4 seconds per image, but I looked up the speedup offered by the Nano, so this should work out for our purposes.
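
As a rough illustration of the detection step, a pre-trained YOLO model can be run through OpenCV's DNN module as in the sketch below; the config/weights paths, input size, and confidence threshold are placeholders, not our exact setup.

```python
# YOLO fruit-detection sketch (assumption: Darknet YOLOv3 files + OpenCV DNN).
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # hypothetical paths
output_layers = net.getUnconnectedOutLayersNames()

def detect(image_bgr, conf_threshold=0.5):
    """Return (class_id, confidence) pairs for detections above the threshold."""
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(output_layers):
        for row in output:
            scores = row[5:]                      # per-class confidences
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold:
                detections.append((class_id, float(scores[class_id])))
    return detections
```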

I also worked on integrating all of the code we had, so the Nano can now read the image, process it, pass it to our fruit detection and classification algorithm, and output whether the fruit is rotten or not. I also integrated the code that analyzes whether the fruit is partially or completely in the frame.
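
At a high level, the integrated per-frame flow looks roughly like the sketch below; the helper functions are hypothetical stand-ins for the frame-analysis, detection, segmentation, and classification pieces described in these reports, so this is closer to pseudocode than our actual implementation.

```python
# End-to-end per-frame flow sketch (helper functions are hypothetical stand-ins).
def process_frame(frame):
    if not fruit_fully_in_frame(frame):            # frame-analysis step
        return None                                # wait for the fruit to be fully in view
    fruit_type = detect_fruit(frame)               # YOLO-based fruit detection
    good_mask, rotten_mask = segment_fruit(frame, fruit_type)   # image segmentation
    is_rotten = classify_rottenness(good_mask, rotten_mask)     # percentage-rottenness classifier
    return fruit_type, is_rotten
```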

For the apples and oranges, I tested the image segmentation and classification on a very large dataset, but it did not perform too well, so I plan on tuning the rottenness threshold I've set while Ishita Kumar improves the masks we get, so that we can improve accuracy.

As for the integration, we plan on writing up the code for the Raspberry Pi controlling the gate in the upcoming week. We plan on setting up the conveyor belt and testing our code in a preliminary manner with a temporary physical setup soon, without the gate for now.

The fruit detection sometimes detects apples as oranges, and in a few instances it is not able to detect extremely rotten bananas, so I'll need to look into whether that can be improved.

For future steps for my part, I need to work on testing the integrated code with the Nano and conveyor belt with real fruits. Once that’s working, we will start working on getting the Raspberry Pi and servo to work. In parallel, I can play around with the threshold for rottenness classification for apples and oranges. The AlexNet classifier isn’t urgent since our classification system currently seems to be meeting timing and accuracy requirements, but I’ll work on implementing that once we’re in good shape with the above items.

Ishita Sinha’s Status Report for 04/10/21

This week, I worked on testing our classifier on a much larger dataset to ensure our algorithm generalized well to several types of bananas. Our algorithm achieved a 2.14% misclassification rate for good bananas on a dataset of approximately 2000 images of good bananas, and a 0.94% misclassification rate for bad bananas on a dataset of approximately 2800 images of bad bananas. The algorithm also ran quite efficiently, so it was good to see that we were meeting the targets we had set by a huge margin.

Besides this, I worked on clicking images of bananas to check whether our algorithm also works well on images we had taken ourselves, and it achieved an excellent classification rate and performance on them. I also started developing the AlexNet classification algorithm code to see if that could give us an improved classification result over our existing code.
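
A minimal sketch of the AlexNet-based classifier being explored is below, assuming PyTorch/torchvision: start from a pre-trained AlexNet and swap the final layer for a 2-class (fresh vs. rotten) head. The learning rate and training details are placeholders.

```python
# AlexNet rottenness-classifier sketch (assumption: PyTorch/torchvision fine-tuning).
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-class head with fresh/rotten

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
# ...fine-tune on the Kaggle fresh/rotten banana images, then measure accuracy and latency.
```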

We seem to be on track for now, but the upcoming week is going to be a hectic one. We planned for this since we have a bit of a break, but I hope we can keep up our progress. For future steps on my part, I need to test our model on more personally clicked images, complete the AlexNet classification, and start working on the object classification code so that, by the time we transition to testing apples and oranges, I have the code in place to check whether a fruit is an apple, an orange, or a banana.

Team Status Report for 04/10/21

This week, on the hardware front, we worked on getting the conveyor belt up and running. We still have some work to complete, but we plan on finishing it up by Monday, in time for the demo. On the software end, we tested our algorithm on a much larger dataset of bananas to ensure it generalizes well. Post that, we started testing on images we took of good versus rotten bananas to see how well it classifies them. We used a white background for the images since we'll have a white background with our conveyor belt. Besides that, we have set up the live stream for the camera, so we now need to write code for capturing each frame and examining it.
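
The per-frame capture loop could be as simple as the sketch below, assuming OpenCV; the camera index and the examine_frame helper are placeholders.

```python
# Live-stream frame-capture sketch (assumption: OpenCV VideoCapture; index is a placeholder).
import cv2

cap = cv2.VideoCapture(0)        # hypothetical camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # examine_frame(frame)       # e.g. check whether a fruit is fully in view, then classify
cap.release()
```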

The updated schedule for our project looks as follows:

One of the major concerns for our team right now is that we haven't started working on the gate yet, and we don't have the product setup yet. Our schedule does account for this, but realizing we have just 3 weeks until the final demo is definitely daunting, so we'll need to make sure we hit our deadlines.

Ishita Sinha’s Status Report for 04/03/21

This week, I worked with my team in assembling the conveyor belt and also worked on the pixel classification algorithm.

As for the conveyor belt, I suggested a final model we could use to meet our requirements and determined its feasibility. For now, we plan on using channels to lodge the conveyor belt at a height such that the leather belt going around it won't rub against the wood. We still need to work this out. I also worked with CAD to help 3D print the parts.

For the pixel classification, I tested it on a dataset and kept modifying the threshold to find an optimal value; a threshold of 20% rottenness seems to be the best for separating good bananas from rotten ones. Examining the classification results, less than 0.8% of the rotten bananas were misclassified out of a sample of 1100+ images, and only around 2% of the good bananas were misclassified out of a sample of around 800 images, which meets our benchmarks. The misclassified images were primarily ones where the background was yellow or brown, which contributed to the misclassification; however, we'll be using a plain black or plain white background for our fruits, so that should serve our purposes. I tried using edge detection to first segment out the bananas before running the image segmentation, but it didn't help much since the edges in those images weren't detected well enough to form a good mask. I'll be testing the algorithm on another test dataset, and then we plan on transitioning to testing on real bananas. I'm looking into the AlexNet classification, but I doubt we'd need it since our classification algorithm is already giving excellent results.
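
The threshold tuning amounted to computing misclassification rates at candidate thresholds; a toy version of that calculation is sketched below, with made-up scores and labels purely for illustration.

```python
# Threshold-sweep sketch (the scores/labels here are toy values, not our dataset).
import numpy as np

scores = np.array([5.0, 12.0, 18.0, 35.0, 60.0, 25.0])      # % rottenness per image
labels = np.array([False, False, False, True, True, True])  # True = actually rotten

def misclassification_rates(scores, labels, threshold):
    predicted_rotten = scores > threshold
    rotten_err = np.mean(~predicted_rotten[labels])   # rotten bananas called good
    good_err = np.mean(predicted_rotten[~labels])     # good bananas called rotten
    return good_err, rotten_err

for t in (10, 20, 30):
    g, r = misclassification_rates(scores, labels, t)
    print(f"threshold {t}%: good misclassified {g:.1%}, rotten misclassified {r:.1%}")
```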

We seem to be on track with respect to our schedule. As for the next few days, I plan on looking into AlexNet and implementing a very basic algorithm just to see its classification results, though I doubt we’d need that, so I plan on focusing my efforts more towards getting the conveyor belt setup done. We must also click pictures of actual bananas for testing and we should have the live feed working with the camera in the next few days. Post that, we should start looking into building the gate and automating its rotation.

Ishita Sinha’s Status Report for 03/27/21

This week, I worked on implementing edge detection for separating the fruit from the background. We may need this when we introduce multiple fruits, so that we can separate the fruit from the background, detect which fruit it is by examining the colour within the detected region, and then run the appropriate algorithms. Here are the results of running edge detection on an image of a banana:

[Image: edge detection output for a banana]
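
For reference, the edge detection step could be done with OpenCV's Canny detector as in the sketch below; the blur kernel and thresholds are illustrative, not our tuned values.

```python
# Edge-detection sketch (assumption: OpenCV Canny; parameter values are illustrative).
import cv2

image = cv2.imread("banana.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
blurred = cv2.GaussianBlur(image, (5, 5), 0)             # smooth noise before edge detection
edges = cv2.Canny(blurred, 50, 150)                      # lower/upper hysteresis thresholds
```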

I developed the first classifier for our product: the percentage-area rottenness classifier. This classifier considers the good parts of the fruit and the seemingly bad parts and computes the percentage rottenness of the fruit; if the percentage is above a certain threshold, it classifies the fruit as rotten, otherwise as good. Ishita Kumar worked on segmenting the good versus bad parts of the banana, and I then pass that result into my classifier so that the image can be classified. Over the next week, I plan on finding an optimal threshold by testing the classifier on a large number of Google images, as well as some manually taken images if possible. We have found a Kaggle dataset containing images of rotten versus good bananas, so I plan on using that to determine a threshold.
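
A minimal sketch of this percentage-area classifier is below, assuming the segmentation step supplies binary masks of the good and rotten regions; the default threshold here is a placeholder, since the actual value was still being tuned at this point.

```python
# Percentage-area rottenness classifier sketch (threshold value is a placeholder).
import numpy as np

def classify_rottenness(good_mask, rotten_mask, threshold_pct=20.0):
    """Return True (rotten) if rotten pixels exceed threshold_pct of the fruit's area."""
    rotten_pixels = np.count_nonzero(rotten_mask)
    fruit_pixels = rotten_pixels + np.count_nonzero(good_mask)
    if fruit_pixels == 0:
        return False                                  # no fruit found in the masks
    pct_rotten = 100.0 * rotten_pixels / fruit_pixels
    return pct_rotten > threshold_pct
```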

I also looked into AlexNet for the second classifier. I have started working on it and plan to have it working in the coming week; I can test it using the Kaggle dataset.

Regarding the use of the 2 classifiers and deciding which one is optimal, I was thinking that instead of picking just one, we could first run the percentage rottenness algorithm. If the percentage is above a "rotten" threshold, we classify the fruit as rotten; if it's under a certain "good" threshold, we classify it as good. However, if it falls in the gray area between these 2 thresholds, we can classify it using the AlexNet classifier. I still need to discuss this with my teammates and the instructors, and this approach would also depend on how well the AlexNet classifier works.
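
In code, the idea would look roughly like the sketch below; the two threshold values are placeholders, and alexnet_predict stands in for whatever callable the AlexNet classifier ends up exposing.

```python
# Two-threshold hybrid classification sketch (threshold values are placeholders).
ROTTEN_THRESHOLD = 25.0   # above this, confidently rotten
GOOD_THRESHOLD = 10.0     # below this, confidently good

def hybrid_classify(pct_rotten, image, alexnet_predict):
    """alexnet_predict is a stand-in for the AlexNet classifier's prediction function."""
    if pct_rotten >= ROTTEN_THRESHOLD:
        return "rotten"
    if pct_rotten <= GOOD_THRESHOLD:
        return "good"
    return alexnet_predict(image)   # gray area: defer to the AlexNet classifier
```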

As part of the team, we all started working on the conveyor belt this week and plan to have it done by the end of next week. My progress, as well as the team’s progress, is on schedule so far.

Ishita Sinha’s Status Report for 03/13/21

This week, I worked on implementing a colour analysis algorithm that can be used to identify colours in the picture to determine the rottenness of the fruit, as well as to identify the fruit itself. Earlier, my algorithm wasn't using the HSV colour space and wasn't working well, but after switching to HSV it's providing better results. I still need to test whether it works well on images of rotten bananas; for now, I've been working with images of only good bananas. Next week, Ishita Kumar and I plan on meeting to integrate our code and develop a classifier for rotten v/s fresh bananas. I also plan on working on the Design Review Report in the upcoming week.
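
As an illustration of HSV-based colour analysis, the sketch below builds masks of yellow (healthy) and brown (bruised) pixels with OpenCV; the hue/saturation/value bounds are illustrative, not our tuned ranges.

```python
# HSV colour-analysis sketch (assumption: OpenCV; colour bounds are illustrative).
import cv2
import numpy as np

image = cv2.imread("banana.jpg")               # hypothetical test image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Yellow-ish pixels (healthy banana) vs. dark brown pixels (bruised/rotten regions).
yellow_mask = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))
brown_mask = cv2.inRange(hsv, np.array([0, 30, 20]), np.array([20, 255, 120]))
```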

Team Status Report for 03/13/21

The most significant risk we anticipate right now is building the conveyor belt. None of us have mechanical experience, so this will definitely be challenging. However, as a contingency, we plan on using a treadmill, since the conveyor belt isn't part of our product: our product is meant to be integrated into existing conveyor belt systems, so if our conveyor belt doesn't work, it doesn't jeopardise the product itself.

The design of the system was finalised and updated, as shown below:

We had to make this change because the earlier piston-based design might not have been able to meet the product's requirements. We were using the pistons to push an item off the conveyor belt, but doing so would require a lot of force from the pistons, which may not be feasible. The pistons might also not meet the speed requirements since they operate quite slowly. Thus, we shifted to this gate model. Corresponding to this change, we updated our system to sort fruit into only 2 categories, good v/s rotten, so the gate can rotate in one of 2 directions to push the fruit into the appropriate basket. The updated block diagram is shown below:

We updated the schedule to reflect a more feasible timeline, given that we have midterms going on currently. This past week, Kush worked on understanding the Nano, and Ishita Kumar and Ishita Sinha worked on improving the results from their image segmentation and colour analysis. For the upcoming week, they'll be integrating their code and working on building a good classifier. Besides this, the team will also be working on the Design Review Report.