Ishita Sinha’s Status Report for 04/24/21

Over the past few weeks, I worked on finalizing a model for fruit detection. I experimented with a pre-built YOLO model and also tried developing a custom model, but the custom model did not noticeably improve accuracy or reduce computation time, so I decided we'd continue with YOLO. I tested the fruit detection accuracy using a Kaggle dataset of around 250 images with lighting conditions and backgrounds similar to the pictures we will be taking: 0% of the bananas, around 3% of the apples, and around 3% of the oranges were misclassified. Detection is currently fairly slow, at around 4 seconds per image, but based on the speedup offered by the Nano, this should work out for our purposes.
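The detection step boils down to running YOLO on each frame and keeping only confident fruit detections. A minimal sketch of that filtering logic, assuming detections arrive as (class_name, confidence, box) tuples; the function and constant names here are illustrative, not our actual integration code:

```python
# Hedged sketch: filter raw YOLO detections down to the fruit classes we
# care about. The tuple format and FRUIT_CLASSES set are assumptions for
# illustration, not our exact code.

FRUIT_CLASSES = {"apple", "banana", "orange"}

def filter_fruit_detections(detections, conf_thresh=0.5):
    """Keep detections whose class is a fruit and whose confidence
    meets the threshold. Each detection is (class_name, confidence, box)."""
    return [
        (name, conf, box)
        for name, conf, box in detections
        if name in FRUIT_CLASSES and conf >= conf_thresh
    ]

# Example: the low-confidence apple and the non-fruit 'person'
# detection are dropped; banana and orange survive.
raw = [
    ("banana", 0.91, (10, 10, 60, 120)),
    ("apple", 0.32, (5, 5, 40, 40)),
    ("person", 0.88, (0, 0, 200, 200)),
    ("orange", 0.77, (50, 50, 90, 90)),
]
kept = filter_fruit_detections(raw)
```

In the real pipeline the boxes would come from the pre-trained YOLO network (apple, banana, and orange are all COCO classes), so only this filtering layer is ours.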

I also integrated all of the code we had, so the Nano can now read an image, process it, pass it to our fruit detection and classification algorithm, and output whether the fruit is rotten or not. I also integrated the code for analyzing whether the fruit is partially or completely in the frame.
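The integrated flow on the Nano can be summarized as a small pipeline: capture, frame check, detection, then rottenness classification. This is a hedged sketch with every stage stubbed out; the stage functions are placeholders standing in for our real modules, not their actual implementations:

```python
# Hedged sketch of the integrated Nano pipeline. Each stage below is a
# stub; the real modules do camera capture, YOLO detection, and HSV
# pixel analysis respectively.

def fruit_fully_in_frame(image):
    # Placeholder: real code checks the detected box against frame edges.
    return True

def detect_fruit(image):
    # Placeholder: real code runs YOLO and returns the fruit type.
    return "banana"

def classify_rottenness(image, fruit):
    # Placeholder: real code runs pixel analysis with a per-fruit threshold.
    return "good"

def process_image(image):
    """Return (fruit, verdict), or None if the fruit isn't fully in frame."""
    if not fruit_fully_in_frame(image):
        return None
    fruit = detect_fruit(image)
    return fruit, classify_rottenness(image, fruit)
```

Keeping the in-frame check first means we skip the (slow) detection step entirely for partial fruits.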

For the apples and oranges, I tested the image segmentation and classification on a very large dataset, but it did not perform too well, so I plan on tuning the rottenness threshold I've set while Ishita Kumar improves the masks we get, so that together we can improve accuracy.

As for the integration, we plan on writing up the code for the Raspberry Pi controlling the gate in the upcoming week. We plan on setting up the conveyor belt and testing our code in a preliminary manner with a temporary physical setup soon, without the gate for now.

The fruit detection sometimes detects apples as oranges, and in a few instances it fails to detect extremely rotten bananas, so I'll need to look into whether that can be improved.

For future steps for my part, I need to work on testing the integrated code with the Nano and conveyor belt with real fruits. Once that’s working, we will start working on getting the Raspberry Pi and servo to work. In parallel, I can play around with the threshold for rottenness classification for apples and oranges. The AlexNet classifier isn’t urgent since our classification system currently seems to be meeting timing and accuracy requirements, but I’ll work on implementing that once we’re in good shape with the above items.

Kushagra’s Status Report for 04/24/21

  • Started integration testing. We can now take pictures from the cameras, pass them along to the algorithms, and get a final result.
  • Almost done with the servo + shield (the system should be running by Monday). After that, we will have the full system integrated (i.e., from taking a picture to controlling the gate via the Jetson Nano). This is the next major step for me.
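Driving the gate servo comes down to mapping a target angle to a PWM duty cycle. A minimal sketch of that mapping, assuming a standard hobby servo (1–2 ms pulse at 50 Hz, so roughly 5–10% duty); the constants are illustrative assumptions, not our measured calibration:

```python
# Hedged sketch: convert a servo angle to a PWM duty cycle. Assumes a
# typical 50 Hz hobby servo where a 1 ms pulse = 0 degrees (5% duty)
# and a 2 ms pulse = 180 degrees (10% duty). Real values depend on the
# specific servo and shield.

def angle_to_duty(angle, min_duty=5.0, max_duty=10.0):
    """Map an angle in [0, 180] degrees to a duty-cycle percentage,
    clamping out-of-range angles."""
    angle = max(0.0, min(180.0, angle))
    return min_duty + (max_duty - min_duty) * angle / 180.0

# On the Nano, this duty cycle would feed a PWM channel, e.g. with
# Jetson.GPIO (whose API mirrors RPi.GPIO):
#   pwm = GPIO.PWM(pin, 50)                  # 50 Hz
#   pwm.start(angle_to_duty(0))              # gate closed
#   pwm.ChangeDutyCycle(angle_to_duty(90))   # gate open
```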

Team Status Report for 04/24/21

  • Implemented image segmentation for good and rotten apples and oranges. Started testing the algorithms on apple and orange datasets, with good results.
  • Worked on fruit detection and classification (i.e., identifying apples, bananas, and oranges, and differentiating rotten from good fruit) using different models (YOLO and pixel analysis).
  • Started integration testing. We can now take pictures from the cameras, pass them along to the algorithms, and get a final result. Working on tweaking the threshold values and experimenting with lighting and physical surroundings.
  • Almost done with the servo + shield (the system should be running by Monday). After that, we will have the full system integrated (i.e., from taking a picture to controlling the gate via the Jetson Nano).
  • Getting close to finishing the project; we just need to finish up some physical integration and tweak parameters.

Ishita Kumar’s Status Report for 04/24/21

This week, I implemented image segmentation for both fresh and rotten apples and oranges using appropriate HSV ranges. Segmenting apples was quite difficult because of their diverse appearance: the rotten parts are various shades of brown, which are close to the golden hues found in some apples. So, I have decided to narrow our scope to a particular type of apple, probably red apples. This will help our image segmentation algorithm work well and improve our accuracy. Oranges did not pose the same issue, so the algorithm worked well for that fruit. I am on track with my code for now and will fix up a few more things while Ishita Sinha tests our combined integrated code on a custom dataset. We will be meeting every day for integration and testing from now on.
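The segmentation itself is HSV range masking. A minimal sketch of the per-pixel test, using OpenCV's HSV conventions (H in 0–179, S and V in 0–255); the "rotten brown" bounds below are illustrative guesses, not our tuned ranges:

```python
# Hedged sketch: decide whether an HSV pixel falls in a "rotten brown"
# range. Uses OpenCV-style HSV (H: 0-179, S/V: 0-255). The bounds are
# illustrative placeholders, not the tuned ranges from our code.

ROTTEN_BROWN_LOW = (5, 50, 20)      # dark, somewhat desaturated brown
ROTTEN_BROWN_HIGH = (20, 255, 200)

def in_hsv_range(pixel, low=ROTTEN_BROWN_LOW, high=ROTTEN_BROWN_HIGH):
    """True if every HSV channel of `pixel` lies within [low, high]."""
    return all(lo <= p <= hi for p, lo, hi in zip(pixel, low, high))

# With OpenCV, the same test over a whole image is one call:
#   mask = cv2.inRange(hsv_image, ROTTEN_BROWN_LOW, ROTTEN_BROWN_HIGH)
```

The apple difficulty described above shows up here directly: a golden-apple pixel can land inside the same H band as rot, which is why narrowing to red apples makes the ranges separable.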

Kushagra’s Status Report for 04/10/21

  • Wrote a program that streams video from both cameras (USB and CSI) simultaneously. Pressing ‘j’ takes a picture from both cameras, ‘k’ from the CSI camera, and ‘l’ from the USB camera; ‘q’ quits.
    • Left is CSI, right is USB.
  • Quality of the CSI camera is severely lacking. I’ll look more into how GStreamer works, since that’s what I’m using for CSI. The USB camera was able to connect directly, so perhaps its quality is better for that reason?
    • Will also try playing around with the framerate.
  • Assembled the conveyor structure (now only missing the belt).
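The key handling in that streaming program is a small dispatch from keypress to cameras. A hedged sketch of just that logic, with the capture side reduced to returning camera names (the real program reads keys via OpenCV's cv2.waitKey inside the streaming loop):

```python
# Hedged sketch: map a keypress to which cameras should take a still.
# 'q' is handled separately by the main loop as quit; any unmapped key
# captures nothing. The capture itself is stubbed to camera names.

KEY_TO_CAMERAS = {
    "j": ("csi", "usb"),  # both cameras
    "k": ("csi",),        # CSI camera only
    "l": ("usb",),        # USB camera only
}

def cameras_for_key(key):
    """Return the tuple of cameras to snapshot for `key`."""
    return KEY_TO_CAMERAS.get(key, ())
```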

Ishita Kumar’s Status Report for 04/10/21

This week, I focused on testing our algorithm on real bananas in real-world conditions. I used white lighting to take pictures, as we plan to do for our final set-up. The results were promising: I tested on healthy bananas, and our algorithm correctly identified them as not rotten. I have been thinking through the parts we need for our final set-up and designing it so we can order the final set of parts for our shed set-up. I have also started looking into the other fruits we are going to use, so our algorithm can work on those as well, and into how to use object detection so the algorithm can autonomously determine which fruit it needs to sort. I am also preparing with my team for the interim presentation.

Ishita Sinha’s Status Report for 04/10/21

This week, I worked on testing the classifier we had on a much larger dataset in order to ensure our algorithm generalized well to several types of bananas. Our algorithm achieved a 2.14% misclassification rate for good bananas out of a dataset comprising approximately 2000 images of good bananas. As for bad bananas, our algorithm achieved a 0.94% misclassification rate out of a dataset comprising approximately 2800 images of bad bananas. The algorithm seemed to be running quite efficiently as well, so it was good to see that we seemed to be meeting the targets we had set by a huge margin.
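Those rates are simply the fraction of images of each class that got the wrong label. A minimal sketch of how such a tally works, assuming parallel lists of true and predicted labels (illustrative, not our actual test harness):

```python
# Hedged sketch: per-class misclassification rate from parallel lists
# of true and predicted labels. The label strings are illustrative.

def misclassification_rate(y_true, y_pred, target_class):
    """Fraction of samples whose true label is `target_class` but whose
    prediction differs. Returns 0.0 if the class never appears."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == target_class]
    if not pairs:
        return 0.0
    wrong = sum(1 for t, p in pairs if t != p)
    return wrong / len(pairs)
```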

Besides this, I took pictures of bananas to check whether our algorithm worked well even on images we had taken ourselves, and it classified those images accurately and ran quickly. I also started developing the AlexNet classification code to see if it could improve on our existing results.

We seem to be on track for now, but the upcoming week is going to be a hectic one. We planned it accordingly since we have a bit of a break, but I hope we can stay on track with our progress. For future steps for my part, I need to test our model on more images clicked personally, complete the AlexNet classification, and start working on code for object classification so that by the time we transition to testing for apples and oranges, I have the code in place to be able to check if a fruit is an apple, an orange, or a banana.

Team Status Report for 04/10/21

As part of our work on the project this week, on the hardware front, we worked on getting the conveyor belt up and running. We still have some work to complete, but we plan on finishing it by Monday, in time for the demo. On the software end, we tested our algorithm on a much larger dataset of bananas to ensure it generalizes well. After that, we started testing on images we took of good versus rotten bananas to see how well it classifies them. We used a white background for the images, since we'll have a white background with our conveyor belt. Besides that, we have set up the live stream for the camera, so we now need to write the code for capturing and examining each frame.

The updated schedule for our project looks as follows:

One of the major concerns for our team right now is that we haven't started working on the gate yet, and we don't have the setup for the product yet. Our schedule does account for this, but realizing we have just 3 weeks until the final demo is daunting, so we need to make sure we hit our deadlines.

Ishita Sinha’s Status Report for 04/03/21

This week, I worked with my team in assembling the conveyor belt and also worked on the pixel classification algorithm.

As for the conveyor belt, I proposed a final design that would meet our requirements and assessed its feasibility. For now, we plan on using channels to lodge the conveyor belt at a height such that the leather belt that goes around it doesn't rub against the wood. We still need to work this out. I also worked with CAD to help 3D print the parts.

For the pixel classification, I tested it on a dataset and iterated on the threshold to find the optimal value. A threshold of 20% rottenness seems to work best for separating good bananas from rotten ones. Examining the results, less than 0.8% of rotten bananas were misclassified out of a sample of 1100+ images, and only around 2% of good bananas were misclassified out of a sample of around 800 images, which meets our benchmarks. The misclassified images were primarily ones with yellow or brown backgrounds, which contributed to the misclassification. However, we'll be using a plain black or plain white background for our fruits, so that should serve our purposes. I tried using edge detection to segment out the bananas before running the image segmentation, but it didn't help much: for those images, the edges weren't detected well enough to form a good mask. I'll be testing the algorithm on another test dataset, and then we plan to transition to testing on real bananas. I'm looking into AlexNet classification, but I doubt we'll need it since our current classification algorithm is giving excellent results.
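The decision rule itself is simple: count the fraction of fruit pixels the mask flags as rotten and compare it against the 20% threshold. A minimal sketch, assuming the mask arrives as one boolean per pixel (the data layout is illustrative; our code operates on image arrays):

```python
# Hedged sketch of the 20% rottenness rule. `rotten_mask` is one boolean
# per fruit pixel (True = flagged brown/black by the HSV mask); the
# flat-list layout is for illustration only.

ROTTEN_THRESHOLD = 0.20  # >= 20% brown/black pixels => rotten

def is_rotten(rotten_mask, threshold=ROTTEN_THRESHOLD):
    """True if the fraction of flagged pixels meets the threshold."""
    mask = list(rotten_mask)
    if not mask:
        return False  # no fruit pixels: nothing to judge
    return sum(mask) / len(mask) >= threshold
```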

We seem to be on track with respect to our schedule. Over the next few days, I plan to look into AlexNet and implement a very basic version just to see its classification results, though I doubt we'll need it, so I'll focus most of my effort on getting the conveyor belt setup done. We also need to take pictures of actual bananas for testing, and we should have the live camera feed working in the next few days. After that, we should start building the gate and automating its rotation.

Ishita Kumar’s Status Report for 04/03/21

This week, I used CAD to design the parts that hold the rollers in place against the wooden slabs of our conveyor belt and mount our motor. We then 3D printed the parts, and they fit our needs perfectly. I also worked with Ishita Sinha to test our pixel-analyzer algorithm on various images of fresh and rotten bananas from an online Kaggle dataset. With some trial and error and fine-tuning of the percentage of bad parts required to classify a banana as bad, our algorithm detected almost all bananas correctly, both good and rotten. We decided that 20% brown-black parts in a banana image was a good benchmark for us. The few images that were not classified correctly had background colors that are inappropriate for our needs, such as yellow or brown. We plan to use a white background and good lighting, so we are not worried about those failures, as they do not apply in our case. We also acquired free rotten bananas from a store, and we are now going to test our algorithm on them. This next week, I want to finish our conveyor belt and decide whether to use AlexNet, as we may not need it for our purposes. We are slightly behind schedule on the conveyor belt, as I had hoped to get it working this week, but we are close, and I plan to ramp up and ensure we have it working early next week.