Finalized the mechanical design for the conveyor belt system (we are keeping a rack-and-pinion design as a contingency plan).
Extracted color ranges in HSV space for sample images (experimented on a picture of a banana); a sketch of the approach is below.
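A minimal sketch of the HSV extraction step, assuming OpenCV; the file name and the yellow bounds are illustrative placeholders we would tune per image, not measured values.

# Sketch of HSV color-range extraction (assumes OpenCV and NumPy).
# The yellow bounds below are illustrative guesses, tuned per image in practice.
import cv2
import numpy as np

img = cv2.imread("banana.jpg")               # hypothetical test image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # convert BGR -> HSV

# Approximate banana-yellow range (OpenCV uses H in [0,179], S and V in [0,255])
lower = np.array([20, 80, 80])
upper = np.array([35, 255, 255])

mask = cv2.inRange(hsv, lower, upper)        # binary mask of in-range pixels
isolated = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("banana_mask.png", mask)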
Ordered the NVIDIA Jetson Nano 2GB and the Raspberry Pi Camera Module V2 and started experimenting with them. Next step is to write a program to take a picture and save it to disk (sketched below).
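A sketch of what that capture program might look like, assuming OpenCV built with GStreamer support (as shipped with JetPack); the nvarguscamerasrc pipeline is the commonly used path for a CSI camera on the Jetson, and the resolution/framerate values are assumptions.

# Sketch of the planned capture-and-save program for Jetson Nano + CSI camera.
import cv2

# GStreamer pipeline commonly used for the Raspberry Pi Camera Module V2 on Jetson
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()                 # grab a single frame
if ok:
    cv2.imwrite("capture.jpg", frame)  # save it to disk
cap.release()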
Worked on the design presentation. Finalized some previously unresolved design points (apart from the mechanical ones).
Performed risk analysis and risk mitigation. In particular, if we can't assemble the conveyor belt ourselves, we will use the treadmills in the CUC gym to simulate the process.
Started looking into algorithms for extracting features (e.g., localized black spots on a banana) from the segmented images; one candidate is sketched below.
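A sketch of one candidate spot-extraction algorithm, not a settled choice: threshold dark pixels inside the banana mask from the HSV step and count the resulting contours. Assumes OpenCV; the intensity and area thresholds are assumptions to tune.

# Sketch: find dark blobs ("black spots") inside the segmented banana region.
import cv2

gray = cv2.imread("banana.jpg", cv2.IMREAD_GRAYSCALE)
mask = cv2.imread("banana_mask.png", cv2.IMREAD_GRAYSCALE)

# Dark pixels (low intensity) that also lie on the banana
_, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
spots = cv2.bitwise_and(dark, mask)

contours, _ = cv2.findContours(spots, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep blobs large enough to be real spots rather than sensor noise
spot_areas = [cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 25]
print(f"{len(spot_areas)} candidate black spots")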
Watched tutorials on the Jetson Nano and learned about the machine learning / AI capabilities the board has to offer. Found the SDK to be very powerful.
Decided to move away from the rotating plate so that we can accommodate non-round fruits (e.g., bananas).
Developed frameworks for pixel isolation (color segmentation and a Gaussian color model); a sketch of the Gaussian approach is below.
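A minimal sketch of the Gaussian-distribution idea as we understand it: model the fruit color as a multivariate Gaussian over HSV samples, then keep pixels whose Mahalanobis distance to the mean is small. Assumes OpenCV/NumPy; using all pixels as the fitting sample and the 3-sigma cutoff are both placeholder assumptions.

# Sketch of the Gaussian color-model framework for pixel isolation.
import cv2
import numpy as np

img = cv2.imread("banana.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float64)

# Fit the Gaussian to labeled fruit pixels (stand-in: all pixels, for brevity)
samples = hsv
mean = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)
cov_inv = np.linalg.inv(cov)

diff = hsv - mean
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distance
mask = (d2 < 9.0).reshape(img.shape[:2]).astype(np.uint8) * 255  # ~3-sigma cut
cv2.imwrite("gaussian_mask.png", mask)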
Decided on bananas as the primary fruit since their discoloration is more prominent and easier to detect. Still need to learn about the HSV color space.
Started brainstorming ways to isolate fruit through segmentation and detect localized pixels of differing colors. This approach seems simpler than neural networks, so we are preferring it while keeping our options open.
Also started brainstorming how to differentiate between fruits based on their pixel distributions (one comparison method is sketched below).
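One candidate comparison, offered as a sketch rather than a decided method: compare hue histograms of two masked fruit images with a correlation score. Assumes OpenCV; the file names and bin count are placeholders.

# Sketch: differentiate fruits by comparing their hue distributions.
import cv2

def hue_hist(img_path, mask_path):
    img = cv2.imread(img_path)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], mask, [32], [0, 180])  # 32-bin hue histogram
    return cv2.normalize(hist, hist).flatten()

h1 = hue_hist("banana.jpg", "banana_mask.png")
h2 = hue_hist("apple.jpg", "apple_mask.png")   # hypothetical second fruit
score = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)  # 1.0 = identical distributions
print(f"hue-histogram correlation: {score:.3f}")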
Preparing the slides and talking points for the proposal presentation.
Started looking into Jetson Nano tutorials and the JetPack SDK.