Team Status Report for 04/24/21

  • Implemented image segmentation for good and rotten apples and oranges. Started testing the algorithms on apple and orange datasets, with promising results.
  • Worked on fruit detection (i.e., differentiating between rotten and good apples, bananas, and oranges) using different models (YOLO and pixel analysis).
  • Started integration testing. So far, we can take pictures from the cameras, pass them along to the algorithms, and get a final result. We are still tweaking the threshold values and experimenting with lighting and the physical surroundings.
  • Almost done with the servo + shield (the system should be running by Monday), after which we will have the full system integrated, from taking a picture to controlling the gate via the Jetson Nano.
  • Getting close to finishing the project; we just need to complete some physical integration and tweak parameters.

Ishita Kumar’s Status Report for 04/24/21

This week, I implemented image segmentation for both fresh and rotten apples and oranges using appropriate HSV ranges. Segmenting apples was quite difficult because of their diverse appearance: the rotten parts come in various shades of brown that are close to the golden hues found in some apples. We have therefore decided to narrow our scope to a particular type of apple, probably red apples, which will let us tune the segmentation algorithm to a tighter color range and improve our accuracy. Oranges did not pose the same issue, so the algorithm worked well for that fruit. I am on track with my code for now and will fix up a few more things while Ishita Sinha tests our combined, integrated code on a custom dataset. We will be meeting every day for integration and testing from now on.

Kushagra’s Status Report for 04/10/21

  • Wrote a program that streams video from both cameras (USB and CSI) simultaneously. Pressing ‘j’ takes a picture from both cameras, ‘k’ from the CSI camera, and ‘l’ from the USB camera; ‘q’ quits.
    • The left image is from the CSI camera, the right from the USB camera.
  • The quality of the CSI camera is severely lacking. I’ll look more into how GStreamer works, since that’s what I’m using for the CSI camera. The USB camera connects directly, which may be why its quality is better.
    • I’ll also experiment with the framerate.
  • Assembled the conveyor structure (now only missing the belt).

Ishita Kumar’s Status Report for 04/10/21

This week, I focused on testing our algorithm on real bananas in real-world conditions. I used white lighting to take the pictures, as we plan to do for our final setup, and the results were promising: the algorithm correctly identified healthy bananas as not rotten. I have also been thinking about the parts we need for our final setup and designing what we need, so we can order the final set of parts for the shed. In addition, I started looking into the other fruits we are going to use, so our algorithm can work on those as well, and into object detection, so the system can autonomously detect which fruit it needs to sort. I am also preparing for the interim presentation with my team.

Ishita Sinha’s Status Report for 04/10/21

This week, I worked on testing our classifier on a much larger dataset to ensure the algorithm generalizes well to several types of bananas. It achieved a 2.14% misclassification rate on a dataset of approximately 2000 images of good bananas, and a 0.94% misclassification rate on a dataset of approximately 2800 images of bad bananas. The algorithm also ran quite efficiently, so it was good to see that we seem to be beating the targets we had set by a wide margin.
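The evaluation above boils down to a simple loop over labeled images. Here is a minimal sketch, where `classify` is a hypothetical stand-in for our pixel-analysis classifier:

```python
def misclassification_rate(images, labels, classify):
    """Fraction of images whose predicted label differs from the ground truth.

    `classify` is any function mapping an image to a label (e.g. "good" or
    "rotten"); it stands in for the pixel classifier described in the text.
    """
    wrong = sum(1 for img, label in zip(images, labels) if classify(img) != label)
    return wrong / len(images)
```

A returned rate of 0.0214 would correspond to the 2.14% figure quoted for the good-banana dataset.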

Besides this, I took pictures of bananas to check whether our algorithm worked well on our own images, and it achieved very good classification accuracy and performance on them. I also started developing the AlexNet classification code to see if it could give us an improved classification result over our existing code.

We seem to be on track for now, but the upcoming week is going to be a hectic one. We planned it this way since we have a bit of a break, and I hope we can stay on track with our progress. For next steps on my part, I need to test our model on more of our own images, complete the AlexNet classifier, and start working on object classification code, so that by the time we transition to testing apples and oranges, I have code in place to check whether a fruit is an apple, an orange, or a banana.

Team Status Report for 04/10/21

As part of our work on the project this week, on the hardware front, we worked on getting the conveyor belt up and running. We still have some work left, but we plan on finishing it by Monday, in time for the demo. On the software end, we tested our algorithm on a much larger dataset of bananas to ensure it generalizes well. After that, we started testing on our own images of good versus rotten bananas to see how well it classifies them. We used a white background for the images, since we’ll have a white background on our conveyor belt. Besides that, we have set up the live stream from the camera, so we now need to write the code for capturing and examining each frame.

The updated schedule for our project looks as follows:

One of our major concerns right now is that we haven’t started working on the gate yet, and we don’t have the product setup yet. Our schedule accounts for this, but realizing we have just three weeks until the final demo is daunting, so we need to make sure we hit our deadlines.

Ishita Sinha’s Status Report for 04/03/21

This week, I worked with my team on assembling the conveyor belt and also worked on the pixel classification algorithm.

As for the conveyor belt, I suggested a final model we could use to meet our requirements and determined its feasibility. For now, we plan on using channels to lodge the conveyor belt so that it sits high enough that the leather belt running around it doesn’t rub against the wood; we still need to work this out. I also worked with CAD to help 3D print the parts.

For the pixel classification, I tested the algorithm on a dataset and kept modifying the threshold to find the optimal value. A threshold of 20% rottenness seems to be best for separating the good bananas from the rotten ones. Examining the results, less than 0.8% of rotten bananas were misclassified out of a sample of 1100+ rotten-banana images, and only around 2% of good bananas were misclassified out of a sample of around 800 good-banana images, which meets our benchmarks. The misclassified images were primarily ones with yellow or brown backgrounds. However, we’ll be using a plain black or plain white background for our fruits, so that should suit our purposes. I tried using edge detection to first segment out the bananas before running the image segmentation, but it didn’t help much: for those images, the edges weren’t detected well enough to form a good mask. I’ll be testing the algorithm on another test dataset, and then we plan to transition to testing on real bananas. I’m looking into the AlexNet classification, but I doubt we’ll need it, since our classification algorithm is already giving excellent results.
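The 20% thresholding step can be sketched as follows, assuming binary masks for brown/black pixels and for the fruit itself have already been computed upstream (the mask construction and the exact ≥ vs > convention are assumptions here):

```python
import numpy as np

ROTTEN_THRESHOLD = 0.20  # fraction of brown/black fruit pixels above which we call it rotten

def is_rotten(rotten_mask, fruit_mask):
    """Classify from two boolean masks: pixels flagged as brown/black,
    and pixels belonging to the fruit at all."""
    fruit_pixels = fruit_mask.sum()
    if fruit_pixels == 0:
        return False  # no fruit found in the frame
    rotten_fraction = (rotten_mask & fruit_mask).sum() / fruit_pixels
    return bool(rotten_fraction >= ROTTEN_THRESHOLD)
```

Measuring the rotten fraction only over fruit pixels is what makes the background color matter less, which is why the yellow/brown-background misclassifications trace back to the masks rather than to this step.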

We seem to be on track with respect to our schedule. Over the next few days, I plan on looking into AlexNet and implementing a very basic version just to see its classification results, though I doubt we’d need it, so I plan to focus my efforts more on getting the conveyor belt setup done. We also need to take pictures of actual bananas for testing, and we should have the live camera feed working in the next few days. After that, we should start building the gate and automating its rotation.

Ishita Kumar’s Status Report for 04/03/21

This week, I used CAD to design the parts needed to hold the rollers in place against the wooden slabs of our conveyor belt and to couple them to our motor. We then 3D printed the parts, and they fit our needs perfectly. I also worked with Ishita Sinha to test our pixel-analysis algorithm on various images of fresh and rotten bananas from an online Kaggle dataset. With some trial and error and fine-tuning of the percentage of bad parts used to classify a banana as bad, our algorithm detected almost all bananas correctly, both good and rotten. We decided that 20% brown-black parts in a banana image was a good benchmark for us. The few images that were not classified correctly had background colors unsuitable for our needs, such as yellow or brown; since we plan to have a white background and good lighting, we are not worried about those failures, as they do not apply in our case. We also acquired free rotten bananas from a store, and we are now going to test our algorithm on them. Next week, I want to finish our conveyor belt and decide whether to use AlexNet, as we may not need it for our purposes. We are slightly behind schedule on the conveyor belt, as I had hoped to get it working this week, but we are close, and I plan to ramp up to ensure we have it working early next week.

Kushagra’s Status Report for 04/03/21

  • Got the CSI camera working in burst mode, with the timing of the burst configurable.
  • Trying to get the USB camera to take a picture, but it’s proving to be a lot harder. It’s possible the USB camera isn’t compatible with the Nano, in which case we’ll need to order another one. Because of this setback, I’m slightly behind schedule, but I’m confident I can make up for it this week.
  • 3D printed the parts for the conveyor belt; now we just need to assemble the whole thing.
  • The goal for next week is to get the USB camera working and to make sure the motor controller board and the 12V power adapter work with the conveyor belt (and to assemble the belt itself).

Team Status Report for 04/03/21

  • Fabricated all the parts required for the conveyor belt. We decided to 3D print the motor coupler and the shaft couplers instead of using a piece of wood. The couplers seem very sturdy, and we are confident they will work.
    • The above picture shows the motor shaft embedded in the motor coupler (yellow piece), which is in turn embedded inside the roller. The 3 black pieces are the other couplers. The whole system is tight and sturdy.
  • The CSI camera can now take pictures in burst mode, with the timing of the burst configurable. Here is a sample image taken from the CSI camera. We’re in the process of getting the USB camera to work, but it’s proving to be a lot harder. It’s possible we might need to order a second USB camera, since the one we have might not be compatible.
  • Ordered the 12V adapter for the conveyor belt and the DC motor speed controller board. We plan to finish constructing the conveyor belt by the middle of this week.
  • Made a lot of progress on the pixel classification, and the algorithm is now very robust. We have also started looking into AlexNet, but we might not need it since the fine-tuned pixel classifier is working very well.
  • We’re making good progress, and are on track to finish the project by the due date!