Ryan’s Status Report for 4/6/2024

THIS WEEK

  • This week, the NEMA 34 stepper motor arrived, along with a much larger motor driver and an AC-to-DC 6 A power adapter. The team and I tested these components to our satisfaction and determined that the motor provides enough torque for our application.
  • The item stand did not have enough space to accommodate a large motor, so I cut a large hole in the structure for the motor to sit in. I also modified a gear so that it could attach to the motor's larger drive shaft. We then combined all the electronics for some integration testing, which used the facial recognition software to directly control the item stand. After that was done, Doreen and I wrote code to turn the motor to a specific position, accounting for the gear ratio between the motor shaft and the item stand (a sketch of that positioning math appears after this list).
  • Next, we installed two LEDs to display information to the user: a blinking yellow LED that shows the user which position to place their item in, and a red LED that tells the user they checked their item in or out too late and need to retry (a sketch of the LED signaling also follows this list).
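
For reference, here is a minimal sketch of the positioning logic Doreen and I wrote, assuming a 1.8°/step motor driven through step/dir pins with RPi.GPIO; the pin numbers, gear ratio, and step timing below are illustrative placeholders rather than our final values.

    import time
    import RPi.GPIO as GPIO

    STEP_PIN, DIR_PIN = 20, 21     # hypothetical BCM pin assignments
    STEPS_PER_REV = 200            # 1.8 deg/step motor, no microstepping assumed
    GEAR_RATIO = 4.0               # stand teeth / motor-gear teeth (placeholder)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STEP_PIN, GPIO.OUT)
    GPIO.setup(DIR_PIN, GPIO.OUT)

    current_steps = 0              # motor steps away from the home position

    def move_stand_to(stand_degrees, step_delay=0.001):
        """Rotate the item stand to an absolute angle, converting through the gear ratio."""
        global current_steps
        # Stand angle -> motor angle -> motor steps, rounded to the nearest whole step.
        target_steps = round(stand_degrees * GEAR_RATIO * STEPS_PER_REV / 360.0)
        delta = target_steps - current_steps
        GPIO.output(DIR_PIN, GPIO.HIGH if delta >= 0 else GPIO.LOW)
        for _ in range(abs(delta)):
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(step_delay)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(step_delay)
        current_steps = target_steps

For example, move_stand_to(90) turns the stand a quarter revolution from home; the gear ratio is folded into the step count so callers only think in stand angles.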
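
The LED cues are simpler; here is a sketch under the same assumptions (RPi.GPIO, with placeholder pins and timings):

    import time
    import RPi.GPIO as GPIO

    YELLOW_PIN, RED_PIN = 5, 6     # hypothetical BCM pin assignments

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(YELLOW_PIN, GPIO.OUT)
    GPIO.setup(RED_PIN, GPIO.OUT)

    def blink_yellow(times=5, period=0.5):
        """Blink the yellow LED to point the user at the correct position."""
        for _ in range(times):
            GPIO.output(YELLOW_PIN, GPIO.HIGH)
            time.sleep(period / 2)
            GPIO.output(YELLOW_PIN, GPIO.LOW)
            time.sleep(period / 2)

    def flash_red_timeout(hold=2.0):
        """Hold the red LED on briefly when a check-in/check-out times out."""
        GPIO.output(RED_PIN, GPIO.HIGH)
        time.sleep(hold)
        GPIO.output(RED_PIN, GPIO.LOW)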

NEXT WEEK

  • As with last week, I am still slightly behind schedule, though I have closed more of the gap. As a team, we should be approaching the end of end-to-end testing, but we are only now cleaning up and finishing the system so testing can start. Again, the slack time we allocated covers this delay, and there is not much work left on the project.
  • Next week, I plan to install all the components permanently and help Surafel improve facial recognition speed and accuracy. After that, we can perform testing and wrap up the project.

SUBSYSTEM VERIFICATION

  • The main subsystem I have been working on this semester is the hardware item stand, which itself contains several smaller components: the load cell system polls for weight changes, the stand was built to support a high load, and the motor was chosen to move heavy loads. The use case requirements that relate directly and solely to the item stand are its hardware integrity (how much weight it can handle) and how quickly it can detect user interaction (placing vs. withdrawing an item).
  • I have done some preliminary testing with weights on the item stand, but nowhere near our original 150 pounds. I also have not yet tested incrementally increasing the weight or introducing load imbalance. Once the system is done, I plan to gather objects of varying weights and test the stand's integrity.
  • The second subsystem that relates to a design/use case requirement is the detection of user interaction. I tested this a while ago by adding and removing random objects from the load cells and checking when the load cells register the change; they detect weight changes in under 1 second (a sketch of the polling loop appears after this list).
  • Overall, I plan to run the tests described in the design proposal for the hardware item stand and see whether we hit the benchmarks we set.
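
As a reference for the interaction-detection test, here is a minimal sketch of the polling loop; the threshold, poll rate, and read function are placeholder assumptions rather than our calibrated values.

    import time

    def watch_for_interaction(read_total_weight, threshold_g=50.0, poll_s=0.1):
        """Yield (event, delta) whenever the summed load cell reading changes enough.

        read_total_weight: callable returning the calibrated total weight in grams.
        """
        last = read_total_weight()
        while True:
            time.sleep(poll_s)           # 10 Hz polling, well under the 1 s budget
            now = read_total_weight()
            delta = now - last
            if abs(delta) >= threshold_g:
                # Positive delta: an item was placed; negative: an item was withdrawn.
                yield ("placed" if delta > 0 else "withdrawn", delta)
                last = now

A caller would iterate over watch_for_interaction(scale.read) and react to each "placed"/"withdrawn" event as it arrives.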

Surafel’s Status Report for 4/6/2024

  • This week, I prepared for the interim demo by ironing out the final kinks in the facial recognition software. Most notably, there was a bug where recognition ran while almost all of the saved frames still contained the previous person's face: the list of recent embeddings was not being updated alongside the frames, so the system kept guessing the previous person. I fixed this by updating the recent embeddings I track so that they always align with the recent frames, which I am also tracking (a sketch of this synchronization appears after this list).
  • Facial recognition is on track, but fully implementing the web app before Carnival will now be my primary focus.
  • As stated above, next week will be focused solely on getting the web app fully up and running before Carnival.
  • For verification and validation, I plan to test facial recognition in two ways: with a static image database and in real time, since each has its own requirements to fulfill. First, I need to verify that recognition meets our design requirement of 95% accuracy. To do this, I plan to take a well-known face database, gather a collection of 20 faces (with several pictures of each person), and split the dataset into training and testing sets. I will then train the recognition system on the training set and introduce the testing set to see how well it recognizes new images of those faces (a sketch of this evaluation appears after this list).
  • As for real-time facial recognition, our design requirements state that the system must detect faces within 0.5 meters of the camera, within 5 seconds, while still maintaining 95% accuracy. I plan to test this by marking different distances from our camera (both within and beyond 0.5 meters) and timing how long it takes the system to detect my face. To confirm we still meet the 95% accuracy requirement, I plan to repeat this test with at least 20 different people.
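
A minimal sketch of the synchronization fix described in the first bullet, with the window size, embedding function, and matcher as hypothetical stand-ins for the real pipeline:

    from collections import deque

    WINDOW = 15                      # placeholder window size, one entry per frame

    recent_frames = deque(maxlen=WINDOW)
    recent_embeddings = deque(maxlen=WINDOW)

    def on_new_frame(frame, embed_fn):
        """Append the frame AND its embedding together so the two windows never drift."""
        recent_frames.append(frame)
        recent_embeddings.append(embed_fn(frame))   # the bug was skipping this update

    def recognize(match_fn):
        """Majority-vote only over embeddings that correspond to the stored frames."""
        votes = [match_fn(e) for e in recent_embeddings]
        return max(set(votes), key=votes.count) if votes else None

Because both deques share the same maximum length and are updated in the same call, a new person's frames displace the old person's embeddings at the same rate.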
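
And a sketch of the static-database accuracy test, assuming a hypothetical recognizer object with enroll/identify methods; the split fraction and dataset shape are placeholders:

    import random

    def evaluate_accuracy(dataset, recognizer, train_frac=0.6, seed=0):
        """dataset: dict of person name -> list of face images.

        recognizer is assumed to expose enroll(name, image) and identify(image).
        """
        rng = random.Random(seed)
        train, test = [], []
        for name, images in dataset.items():
            shuffled = images[:]
            rng.shuffle(shuffled)
            cut = max(1, int(len(shuffled) * train_frac))
            train += [(name, img) for img in shuffled[:cut]]
            test += [(name, img) for img in shuffled[cut:]]

        for name, img in train:
            recognizer.enroll(name, img)

        correct = sum(1 for name, img in test if recognizer.identify(img) == name)
        return correct / len(test)       # compare against the 95% design target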