Lauren’s Status Report for 3/27

This week, I helped wire the 16 inductive sensors and some of the 6 capacitive sensors, and measured their output voltage and current to ensure the output voltage stayed within the 3.3V limit for each GPIO pin on the Jetson Xavier. I also met with Jessica to connect the motor and motor driver to the Jetson Xavier and test them, but they didn’t end up working. Finally, I helped connect the sensors to the Jetson Xavier and confirm that the output was what we expected: the voltages stayed within the 3.3V maximum, and each GPIO pin read “HIGH” when no object was touching its sensor and “LOW” when an object was.
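The active-low behavior we confirmed (HIGH with no object, LOW when an object touches a sensor) can be captured in a small helper. This is a hypothetical sketch of the logic, not our actual test script, and the sensor names are made up:

```python
# Sketch of how we interpret the active-low sensor outputs:
# a GPIO pin reads HIGH (1) when no object is near the sensor
# and LOW (0) when an object is touching/near it.

HIGH, LOW = 1, 0

def object_present(level: int) -> bool:
    """An active-low sensor reports an object when the pin reads LOW."""
    return level == LOW

def triggered_sensors(readings: dict) -> list:
    """Return the names of all sensors currently detecting an object."""
    return [name for name, level in readings.items() if object_present(level)]

# Example with hypothetical sensor names:
readings = {"inductive_0": HIGH, "inductive_1": LOW, "capacitive_0": LOW}
print(triggered_sensors(readings))  # ['inductive_1', 'capacitive_0']
```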

My progress is slightly behind because I won’t be training the model on AWS until Sunday. If that ends up taking a lot of time, I may train it on the Jetson Xavier instead. I will be putting in more time to get this working.

I hope to finish training the model for the image classifier, and start writing the code for background subtraction and object detection.
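For the background-subtraction step, a minimal frame-differencing sketch (using NumPy; the threshold value is a placeholder assumption, and a real pipeline might use an OpenCV background subtractor instead):

```python
import numpy as np

def foreground_mask(frame: np.ndarray, background: np.ndarray,
                    threshold: int = 25) -> np.ndarray:
    """Mark pixels that differ from the empty-bin background frame.

    Both inputs are grayscale uint8 images of the same shape; the
    threshold (25 here) is a value we would tune on real footage.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a 4x4 empty background and a frame with a bright "object".
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200           # object occupies a 2x2 patch
mask = foreground_mask(frame, background)
print(int(mask.sum()))          # 4 foreground pixels
```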

Team Status Report for 3/27

The most significant risk to the success of this project is interfacing parts with the Jetson Xavier, such as the motor and motor driver. We are managing this risk by allocating more time in the schedule to connect these parts. Since we have both a Jetson Nano and a Jetson Xavier, we can have multiple people working on this in parallel as a contingency plan if necessary.

No changes have been made to the existing design of the system.

We are pushing back hooking up the motor and motor driver to the Jetson Xavier by a week (from the week of 3/22 to the week of 3/29). We attempted to connect them this week but have not yet gotten the motor to move.

Jessica’s Status Report for 3/27

This week, I helped wire all of our inductive/capacitive sensors together and measure their output voltage/current to ensure that wiring that many sensors in parallel would not damage the Jetson Xavier. I also continued installing dependencies on the Jetson Xavier so that we can correctly interface with the motor driver, camera, sensors, and neural net model. I was able to hook up the Jetson Nano with the Raspberry Pi camera and a simple LED. Then, with Lauren, I tested the motor driver and the sensors. We were able to hook up multiple inductive and capacitive sensors to the Jetson Xavier, but could not get the motor driver to drive our stepper motor.
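Once the driver does respond, the step-pulse rate is what sets the motor speed. A quick sanity-check calculation, assuming a common 200 step/rev (1.8°) stepper — the step count is an assumption, not a measured spec of our motor:

```python
def step_interval_s(rpm: float, steps_per_rev: int = 200) -> float:
    """Seconds between step pulses needed to reach a target speed."""
    steps_per_second = steps_per_rev * rpm / 60.0
    return 1.0 / steps_per_second

# At 600 rpm with 200 steps/rev we need 2000 steps/s,
# i.e. one pulse every 0.5 ms.
print(step_interval_s(600))  # 0.0005
```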

My progress is slightly behind schedule. I underestimated the amount of time it would take to set up the Jetson Xavier and interface all of our sensors/motor with it. Installing PyTorch and all of its dependencies on the Jetson Xavier was also more challenging than I expected. However, after testing the sensors, my team realized that calibrating the sensors should take much less time than expected, so we were able to modify our schedule to account for the extra time spent connecting everything to the Jetson Xavier.

Next week, I hope to get the motor driver working successfully and finish installing the necessary machine learning libraries so that we can test the image classifier. Once that’s done, I can help build the sensor box and mount our sensors to it.

Tate’s Status Report for 3/13

This week I spent a lot of time preparing for our Design Review Presentation, which I gave on Wednesday. I also met with Jessica in the middle of the week to hash out any of our concerns on the design for the sliding mechanism. We discussed when certain parts of the mechanism would need to be purchased and then assembled as well. I am currently drawing/designing how our box will be mounted on the sliders, and I made a more detailed diagram of our sensor array with specific sensor sizes and spacing. I also met with all my team members and tested our sensors for around 3 hours on Saturday. I helped solder the ends of the inductive sensor wires to pieces of wire so that they would fit in our breadboard. I also helped with the wiring and connecting of our load sensor which my team hopes to have working this upcoming week.

Everything is on track for me except that I haven’t ordered the mechanism parts yet. Jessica and I narrowed down which parts we need, and we decided we don’t need to order them quite yet because we are not in a huge hurry to assemble the slider. I do hope to order the mechanism parts in the next week or so, though.

In the next week, I plan to complete my designated sections for our Design Review Report and also plan to finalize my drawing of how the box will be mounted on the sliding rails. I also plan to order the rest of our sensors and begin attaching them to the platform with my team.

Jessica’s Status Report for 3/13

This week, I mainly helped Tate prepare for the Design Review presentation and began working on the design report. I have added more detail to our system diagram and started filling out our BOM in more detail. I also decided to modify our mechanism to follow the design of a previous project of mine so that we can reuse parts and do not need to waste time CADing/3D-printing. We will need to slightly modify the mechanism’s dimensions, gear size, and moving platform to fit our project. From my calculations, a 1.65″ diameter gear with our 600rpm stepper motor should be able to meet our mechanism latency metric. The other modifications have also been accounted for. In addition to working on the mechanism, I met with my team today to begin testing our sensors. So far, all sensors except our load sensor have been tested.
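The gear-speed math behind that claim can be sketched as follows — the 0.3 m travel distance is a placeholder assumption, not our measured slider length:

```python
import math

GEAR_DIAMETER_M = 1.65 * 0.0254   # 1.65 in gear, converted to meters
MOTOR_RPM = 600

# Linear platform speed = gear circumference * revolutions per second.
rev_per_s = MOTOR_RPM / 60.0                       # 10 rev/s
speed_m_s = math.pi * GEAR_DIAMETER_M * rev_per_s  # ~1.32 m/s

travel_m = 0.3   # hypothetical slider travel distance
latency_s = travel_m / speed_m_s

print(round(speed_m_s, 2), round(latency_s, 2))  # ~1.32 m/s, ~0.23 s (< 1 s)
```

Even with generous margin for acceleration and load, a platform speed on this order leaves plenty of headroom under a sub-second latency target.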

My progress is currently on schedule. Next week, I hope to continue working on the design report. Then, I hope to start attaching the sensors to our platform and start building the image classifier.

Lauren’s Status Report for 3/13

This week, I helped update Tate on the sensor placement design, and helped him prepare for the design presentation by providing feedback on points he missed or needed to include during practice runs of the presentation. I also helped solder the wires used in the load sensors, and soldered the header pins onto the load sensor module. At a team meeting to test all of the sensors, I brought a bag of recyclables that included the different materials we needed to use to test all of the sensors (like metal cans, glass bottles, recyclable plastics, and paper cartons). I helped test some of the sensors by placing some of these objects close to each sensor. As of this week, all of the sensors except the load sensors were tested. I also started writing a rough draft with the key points of our classifier design and design requirements for the design report.

My progress is on schedule.

I hope to have the final draft of the classifier design (with a detailed software spec) in the design report next week, and start building the model for the image classifier.

Team Status Report for 3/13

The most significant risks are the sensing distances of some of the sensors the team tested. For example, the LDR sensor has a rated sensing range of 30 cm, but it only detects objects placed directly above it. To address this, we will adjust the number of each type of sensor on our platform. Some sensors also cannot distinguish between the materials we intended them to (such as the capacitance sensor for paper vs. non-paper materials), so we will either research another sensor for certain materials or adjust the scope of the recyclable categories as needed. Other sensors may not work as intended at all (e.g., the LDR sensor can’t detect all types of glass), so we may also change which sensor type handles certain recyclable categories (e.g., use a capacitance sensor for glass). Our contingency plan for the limited sensing ranges is to purchase more sensors if the budget allows.

We changed the number of some sensor types, and we will be changing how some recyclable categories are detected. After testing, we determined that the IR sensor is not useful for the purposes of this project, since it can’t distinguish between any of the recyclable categories, so we may remove it entirely from our system spec. Changing the counts of other sensor types was also necessary because some sensors cannot distinguish between certain recyclable categories as intended. For example, the LDR sensor is only able to detect clear glass, not colored glass, so we may use capacitance sensors instead to detect all types of glass. The cost of this change is monetary, since certain sensors, like capacitance sensors, are more expensive than others. We will mitigate these costs with further analysis of our budget as we finalize the number of sensors of each type for the bottom of our platform (which the object goes into).
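The revised sensor-to-material mapping can be sketched as a simple rule table. The priority ordering and category names here are working assumptions, not a finalized design:

```python
def classify_material(inductive: bool, capacitive: bool, ldr: bool) -> str:
    """Rule-of-thumb material guess from which sensors triggered.

    Inductive sensors fire only on metal; capacitive sensors would now
    cover glass (clear and colored); the LDR only sees clear glass, so
    it acts as a weak confirmation rather than a primary signal.
    """
    if inductive:
        return "metal"
    if capacitive:
        return "glass"        # capacitive replaces the LDR for glass
    if ldr:
        return "clear-glass"  # LDR alone: likely clear glass
    return "unknown"

print(classify_material(inductive=True, capacitive=False, ldr=False))  # metal
print(classify_material(inductive=False, capacitive=True, ldr=True))   # glass
```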

Here is a picture of the setup we used to test the sensors (this was the capacitance sensor):

Jessica’s Status Report for 3/6

This week, I focused on preparing our design presentation. For the slides, I mainly filled out the block diagram and the mechanical solution and implementation slides. My group and I then met with our professor and the TA to discuss potential improvements to our presentation. After our TA advised us to add more implementation details, Lauren and I met to finalize sensor placement and Jetson Nano GPIO allocation.

I also spent a lot of time finalizing the mechanism. After deciding to reduce our target mechanism latency to <1s last week, I began researching different linear actuators. The main types of mechanisms I found were ball screw, rack and pinion, and belt-driven. All of these mechanisms seem suitable for our use case, given that we can adjust the mechanism speed based on gear sizes. I was also able to find CAD models for the rack-and-pinion and belt-driven mechanisms, so we can hopefully laser-cut most of our mechanism.

My progress is slightly behind. I focused most of my time on the hardware aspect of our project and on finalizing the slides, so I was not able to research different image classification models. Hopefully, we can use the ResNet model that Lauren and I found a couple of weeks ago.

Next week, I hope to hook up our sensors to an Arduino and begin collecting data. Although we have a general idea of which stepper motors and motor drivers we want, we also need to submit the purchasing request.

Lauren’s Status Report for 3/6

This week, I made most of the design presentation slides and added more detail about how the image classifier will work with the sensor classifier to produce the final classification of recyclable vs. non-recyclable. For a few hours on Saturday, I met with just Jessica (Tate said he was busy, so he did the Team Status Report instead) to work out how many sensors of each type (inductive, capacitive, etc.) were needed and how many GPIO pins we would use for each kind of sensor. Jessica and I finalized the placement of each type of sensor on the sorting platform, based on sensing ranges, our budget, and our assumption about the minimum object size our trash can will detect. I also added more images to some of the presentation slides, based on feedback from the previous presentation that some slides were too empty.
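One way the two classifiers might combine into the final recyclable/non-recyclable decision is sketched below. The tie-breaking rule (defer to the sensors on disagreement) is an assumption we have not finalized, and the category names are placeholders:

```python
RECYCLABLE = {"metal", "glass", "plastic", "paper"}

def final_decision(image_label: str, sensor_label: str) -> bool:
    """Combine the image classifier and sensor classifier outputs.

    If the two agree, use that answer; if they disagree, this sketch
    defers to the sensor reading, since the sensors give a direct
    physical measurement. (Hypothetical policy, not a final design.)
    """
    image_says = image_label in RECYCLABLE
    sensor_says = sensor_label in RECYCLABLE
    if image_says == sensor_says:
        return image_says
    return sensor_says

print(final_decision("plastic", "metal"))   # True: both say recyclable
print(final_decision("trash", "unknown"))   # False: both say non-recyclable
```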

My progress is behind; the mechanism hasn’t been finalized. This week, Jessica asked Tate to finalize certain parts of the mechanism so we could have it ready for the design presentation, but that hasn’t happened. Tate has been busy and hasn’t been able to meet with us or contribute much outside of class. Jessica and I work on the slides during the meetings he misses, and we typically update him afterward by text or at a later meeting if he’s available. This week we updated him on the sensor placement we came up with and the number of GPIO pins needed for each sensor, since he’ll be presenting next week.

I hope to finalize the other components needed (including the mechanism) for the project, so the team can start buying more parts. I also want to test the sensors that we received.

Team Status Report for 3/6

This week, we met with our professor and, separately, with our TA to go over some of the design features of our project and ensure that we are prepared for the design review presentations coming up next week. We discussed how we can add more detail to our system’s block diagram to better show how our sensor array will interface with the Jetson Nano. We also narrowed down the design of the sliding mechanism and discussed how our sensors would interact with our image classifier.

As of now, our biggest risks still seem to be meeting our desired overall latency and ensuring that our sensor array can pick up meaningful readings from objects placed in our bin. Concerning the latency issue, we have narrowed our mechanism design down to two options: 1) using a gear attached to the platform to move it, and 2) using a belt system to slide the platform. We are looking for more feedback on which may be quicker and more reliable. As for the sensors, we plan to purchase multiples of the same sensors and spread them out under the platform so that the sensors cover a greater area and hopefully give more accurate readings. We will begin testing our sensors shortly, so within the next few weeks we will have a much better idea of exactly where each sensor should be placed.

One change to our design that we made recently is that we eventually plan to buy more of the same sensors and spread them out across the holding platform. While this will increase our costs slightly, none of our sensors are particularly expensive, and we feel the potential accuracy gain of a more spread-out sensor array is worth the extra cost.

Our project is behind. We have received our first round of sensors in the mail and will begin testing shortly, but our mechanism still hasn’t been finalized. Next week, we will give our design review presentation and will hopefully be able to order more components for our project once we finalize the mechanism.