Ryan’s Status Report for 4/6/2024

THIS WEEK

  • This week, the NEMA 34 stepper motor arrived, along with a much larger motor driver and an AC-to-DC 6 A power adapter. The team and I tested these components to our satisfaction and determined that the motor provides enough torque for our application.
  • The item stand did not have enough space to accommodate a large motor, so I cut a large hole in the structure for the motor to sit in. I also modified a gear so that it could attach to the motor's larger drive shaft. We then combined all the electronics for integration testing, using the facial recognition software to directly control the item stand. After that was done, Doreen and I worked on code to turn the motor to a specific position, accounting for the gear ratio between the motor shaft and the item stand.
  • Next, we installed two LEDs to display information to the user: a blinking yellow LED to show the user which position to place their item in, and a red LED to tell the user that they checked their item in or out too late and must retry.
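
The positioning code mentioned above could be sketched as follows. This is a simplified illustration in Python rather than our Arduino code, and the constants (steps per revolution, gear ratio, number of positions) are placeholder assumptions, not our actual hardware values.

```python
# Sketch of turning the item stand to a target slot, accounting for the
# gear ratio between the motor shaft and the stand. All constants below
# are illustrative assumptions, not our real hardware values.
STEPS_PER_REV = 200   # full steps per motor revolution (typical stepper)
GEAR_RATIO = 3.0      # assumed motor revolutions per stand revolution
NUM_POSITIONS = 6     # six hooks on the stand

def steps_to_position(current_pos, target_pos):
    """Return the signed number of motor steps to rotate the stand
    from current_pos to target_pos along the shortest direction."""
    delta = (target_pos - current_pos) % NUM_POSITIONS
    if delta > NUM_POSITIONS / 2:
        delta -= NUM_POSITIONS          # go the short way around
    stand_revs = delta / NUM_POSITIONS  # fraction of a full stand turn
    return round(stand_revs * GEAR_RATIO * STEPS_PER_REV)
```

The same shortest-path logic maps directly onto step pulses in the Arduino code.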

NEXT WEEK

  • Just like last week, I am slightly behind schedule, though I have caught up more since then. As a team, we should be approaching the end of end-to-end testing, but we are only now cleaning up and finishing the system for the start of testing. Again, the slack time we allocated covers this case. There is not much work left to be done on the project.
  • Next week, I plan to install all the components permanently and help Surafel with facial recognition speed and accuracy. After that, we can perform testing and wrap up the project.

SUBSYSTEM VERIFICATION

  • The main subsystem I have been working on this semester is the hardware item stand, which itself contains several smaller components. For example, the load cell system polls on weight change, the item stand was built to support high loads, and the motor was chosen to move heavy loads. The use case requirements that relate directly and solely to the item stand are its hardware integrity (how much weight it can handle) and how fast it can detect user interaction (placing vs. withdrawing an item).
  • I have already somewhat tested putting weights on the item stand, but nowhere near our original 150 pounds. I also have not yet tested increasing the weights or introducing load imbalance. After the system is done, I plan to get objects of varying weights and test item stand integrity.
  • The second subsystem that relates to a design/use case requirement is the detection of user interaction. I tested this subsystem a while ago by removing and adding random objects to the load cells and checking when the load cells registered the change. It takes less than 1 second for the load cells to detect weight changes.
  • Overall, I plan to implement the tests described in the design proposal related to the hardware item stand and see if they hit the benchmarks we prescribed.
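
The weight-change detection described above could be sketched like this. The threshold value is an assumption for illustration, not our calibrated noise floor, and the real loop runs on the Arduino against six load cells.

```python
# Sketch of the weight-change detection on the item stand. The threshold
# is an assumed noise floor, not our calibrated value; the real system
# polls load cells from Arduino code.
CHANGE_THRESHOLD_LBS = 0.5

def detect_change(baseline, readings):
    """Scan successive readings and report the first significant change.
    Returns ('placed'|'removed', reading index) or None if no change."""
    for i, w in enumerate(readings):
        delta = w - baseline
        if delta > CHANGE_THRESHOLD_LBS:
            return ("placed", i)
        if delta < -CHANGE_THRESHOLD_LBS:
            return ("removed", i)
    return None
```

Distinguishing placing from withdrawing is just the sign of the change relative to the baseline weight.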

Surafel’s Status Report for 4/6/24

  • This week, I prepared for the interim demo by ironing out the final kinks in the facial recognition software. Most notably, there was a bug that caused recognition to run while almost all of the saved frames still included the previous person's face: the list of recent embeddings was not being updated, so the system kept guessing the previous person. I solved this by updating the recent embeddings I keep track of so that they stay aligned with the recent frames (which I also keep track of).
  • When it comes to facial recognition, I am on track, but the web app will now become my primary focus, to be fully implemented before Carnival.
  • As I stated above, next week will be solely focused on getting the web app fully up and running before Carnival.
  • For verification and validation, I currently plan on testing facial recognition in two ways: with a static image database and with real-time facial recognition (each has its own requirements that I must make sure are fulfilled). First, testing is needed to make sure the recognition is accurate enough to meet the design requirement (currently 95% accuracy). To do this, I plan on using a well-known face database, gathering a collection of 20 faces (with several pictures of each), and splitting the dataset into training and testing sets. I will then train the recognition system on the training set and introduce the testing set to see how well it recognizes the new faces.
  • As for real-time facial recognition, our design requirements state that the system must be able to detect faces within 0.5 meters, within 5 seconds (while still keeping 95% accuracy). I plan on testing this by marking different distances away from our camera (distances both within and outside 0.5 meters), and then timing how long it takes to detect my face. To still make sure we are meeting the 95% accuracy requirement, I plan on repeating this test on at least 20 different faces.
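
The embedding-window fix described above could be sketched like this. The window size of 10 and the stored tuple contents are assumptions for illustration, not the actual values from our code.

```python
from collections import deque

# Sketch of keeping the recent-embedding list aligned with the recent
# frames, per the bug fix described above. The window size of 10 is an
# assumption for illustration.
WINDOW = 10

class RecentEmbeddings:
    def __init__(self):
        self.frames = deque(maxlen=WINDOW)
        self.embeddings = deque(maxlen=WINDOW)

    def push(self, frame, embedding):
        # Update both deques together so recognition never votes on
        # embeddings from frames that have already been evicted.
        self.frames.append(frame)
        self.embeddings.append(embedding)

    def ready(self):
        return len(self.embeddings) == WINDOW
```

Because both deques are appended in the same call, a stale embedding can never outlive its frame, which is exactly the mismatch that caused the previous-person bug.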

Ryan’s Status Report for 3/30/24

THIS WEEK

This week, I completed the testing of the hardware components. Unfortunately, our original single stepper motor did not seem to provide enough torque for our application. I laser cut a new gear for use on the backup stepper motor we had. Both motors together still did not provide enough torque. Going back to the drawing board, I calculated the torque ratings we needed and put together the components list for a new stepper motor, the NEMA 34. We put that order in and are awaiting its arrival.
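
The torque calculation above was roughly of the following form. Every number here (load, hook radius, gear ratio, safety factor) is a placeholder assumption for the sketch, not our actual sizing figures.

```python
# Illustrative torque sizing for the stand motor. The load, radius,
# gear ratio, and safety factor below are assumptions for this sketch,
# not our exact specs.
LB_TO_N = 4.44822      # pounds-force to newtons
load_lbs = 20.0        # assumed worst-case load on one hook
radius_m = 0.15        # assumed distance from axis to hook (meters)
gear_ratio = 3.0       # assumed motor revolutions per stand revolution
safety = 2.0           # margin for imbalance and friction

# Torque demanded at the stand axis, then reflected to the motor shaft
# through the gear reduction, with a safety margin applied.
stand_torque = load_lbs * LB_TO_N * radius_m       # N*m at the stand
motor_torque = stand_torque / gear_ratio * safety  # N*m at the motor
```

The motor's holding torque rating then has to exceed `motor_torque` for the worst-case imbalance we expect.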

Next, Doreen and I implemented back-and-forth transmission between the two Arduinos and completed all of the code necessary for the item stand/rack to function. For example, upon receipt of a check-in message, the Arduino Mega on the item stand runs a rack balance algorithm, determines the best location to place the item, rotates to that location, polls the weight sensor until it detects the user placing their item, and then sends a success or fail message back to the other Arduino.
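
The slot-selection step of that flow could be sketched as follows. The balance heuristic shown (prefer the free slot that counterweights the heaviest opposite hook) is an assumed simplification, not necessarily the exact rack-balance algorithm we wrote.

```python
# Sketch of the check-in slot selection, in Python for illustration.
# The counterweight heuristic is an assumed simplification of our
# actual rack-balance code.
NUM_HOOKS = 6

def best_slot(weights, occupied):
    """Pick a free slot, preferring one whose opposite hook carries the
    most weight, to keep the rack balanced. Returns None if full."""
    free = [i for i in range(NUM_HOOKS) if not occupied[i]]
    if not free:
        return None
    return max(free, key=lambda i: weights[(i + NUM_HOOKS // 2) % NUM_HOOKS])
```

After a slot is chosen, the Arduino rotates to it and polls the load cells until the placement is detected or times out.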

Afterwards, we integrated the facial recognition code with our Arduino code by reading from and writing to serial from both the Python and Arduino code. Recently, we tried scanning our faces, and the computer successfully sends check-in/check-out instructions to the Arduino plugged into it. That Arduino then transmits wirelessly to the Arduino on the item stand, where the instruction is processed.
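
The messages written over serial could be framed like this. The command names and the comma/newline framing are assumptions for illustration, not our exact protocol.

```python
# Sketch of the serial message framing between the Python recognition
# code and the Arduino. Command names and framing are illustrative
# assumptions, not our exact protocol.
def encode_command(action, user_id):
    """Build a newline-terminated command such as b'CHECKIN,42\n'."""
    assert action in ("CHECKIN", "CHECKOUT")
    return "{},{}\n".format(action, user_id).encode("ascii")

def decode_reply(line):
    """Parse a reply like b'OK,3\n' into ('OK', 3)."""
    status, slot = line.decode("ascii").strip().split(",")
    return status, int(slot)
```

A newline-terminated ASCII format keeps the Arduino-side parser trivial (`Serial.readStringUntil('\n')`-style reads).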

NEXT WEEK

Though I have done a lot of work this week, I am still slightly behind schedule. At this point in the schedule, I am supposed to be running end-to-end testing. This is because of issues such as the motor still not being strong enough. Right now it is a waiting game: waiting for the new motor to come in and for facial recognition to be completely fleshed out. Next week, I plan to dedicate extra hours to wrapping up the project, testing and installing the new (and much more powerful) motor, and helping on the web app side to make sure everything is integrated well. I allocated slack time when I built the schedule, so it is expected that the end of the semester might not fit the schedule completely.

Team Status Report for 3/30/24

RISKS AND MITIGATION

  • There are two potential risks that may impact the success of our project. First, after beginning integration between the facial recognition system and the hardware system (the coat rack), we noticed that the facial recognition system did not recognize users with the level of accuracy we expected. In some situations, users who were already checked in and wanted to check out would be checked in again. We suspect this is a result of the way the facial recognition system is trained, and have therefore begun testing other implementations. Second, the coat rack is not yet able to sustain the weight we initially targeted, 20 pounds. The holding torque of the NEMA 17 motors is not high enough, so although rotation works well when only coats are placed on the hooks, it does not work well with heavier items, like small backpacks. As a result, we have bought a NEMA 34 motor, whose holding torque we expect to satisfy our use case requirements. In addition to changing the motor, we have also considered limiting each hook to 10-15 pounds, which would still allow users to place a wide range of items on the rack.

REQUIREMENT CHANGES

  • We are considering changing the weight requirement for our system from 20 pounds to 10-15 pounds. This change has not yet been finalized, however, as we expect the NEMA 34 motor to be able to satisfy our initial design requirement.

SCHEDULE

  • Our schedule has remained the same. Changing the motors will not require significant work, so it can be done as we work on the integration between the software and hardware components. As of now, we are finalizing the facial recognition algorithm so that it can satisfy our use case requirements. This involves ensuring that users are accurately identified, and that they are only identified when they are less than 0.5 m from the camera.

Doreen’s Status Report for 3/30/24

  • This week, I mainly worked on writing code to transmit data between two Arduinos. This involved using two wireless transceivers to test whether commands could be sent back and forth between them. I worked on this with Ryan. Together, we tested whether a command could be sent from one Arduino to the other to control an LED. Once this worked, we tested whether we could send commands to control the motors connected to the Arduino on the coat rack. After creating the data structures and specifying the communication between the two sides, I helped Ryan and Surafel test the integration between the facial recognition system and the components on the rack. This involved transmitting data from our Python program and rotating our motors when someone attempted to check in or check out. We found some issues during this testing, and plan to continue our integration efforts and fix any software problems that arise.
  • My progress is slightly behind. Although a majority of the work for the hardware components and associated code has been finalized, our team still plans on making further changes, like replacing our current motors. According to the schedule, we are supposed to be solely integrating the different components of our system, but with this change, we need to work on integration while making the necessary motor changes to ensure that we can satisfy our use case requirements. In addition, with integration there is a possibility that components that seemed to work on their own may not work together, which may require additional work to fix.
  • For next week, I will continue working on integrating the facial recognition system with the hardware system. Although we have been able to successfully transmit data from our Python program to the Arduino Mega on the rack using the wireless transceivers, I want to ensure that users can successfully check in and check out. I also want to ensure that the motor can correctly turn to the position specified by a user's item position on the stand. In addition to these software aspects, I want to complete the hardware side by replacing the NEMA 17 motor with the NEMA 34 motor, which is expected to arrive next week. I would like to test that the motor's holding torque actually satisfies our use case requirement: sustaining rotation with up to 20 lbs on each of the 6 hooks on the rack.

Surafel’s Status Report for 3/30/24

  • This week I worked with Ryan and Doreen to get all of the components of the project connected and working together. During this, after discussing further plans of action, I ended up replacing the SVM classifier (that did the guessing of faces based on the known face embeddings) with a much simpler solution that involves calculating and finding the face embeddings that have the minimum Euclidean distance (now we don’t have to rerun scripts to retrain the SVM classifier). Also, I was able to get the OpenCV video stream up and displaying on the webapp.
  • I think I have recovered a good amount of the time I lost while sick, but I still have more to do before I am fully back on track.
  • Next week, I plan on continuing to connect all of the components with my teammates and (time permitting) getting the face recognition to work on the webapp.
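
The nearest-embedding matcher that replaced the SVM classifier could be sketched with NumPy as below. The distance threshold for rejecting unknown faces is an assumed value, not the one from our code.

```python
import numpy as np

# Sketch of the minimum-Euclidean-distance matcher that replaced the
# SVM classifier. The "unknown" threshold is an assumption for
# illustration.
UNKNOWN_THRESHOLD = 0.9

def recognize(query, known, labels):
    """Return the label of the known embedding closest (by Euclidean
    distance) to the query, or 'unknown' if the best match is too far."""
    dists = np.linalg.norm(known - query, axis=1)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] < UNKNOWN_THRESHOLD else "unknown"
```

The practical win is that adding a new face is just appending a row to `known`, with no retraining script to rerun.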

Surafel’s Status Report for 3/23/24

  • This week I focused on tuning the training models used for facial recognition so that they produce the best results for our project. The most important thing I noticed in pursuit of this goal was that the training we were initially doing for the recognition models was based on a linear kernel. Since the data we were using to train was most likely not going to be linearly separable, I changed the training to use a non-linear kernel.
  • After being sick for the majority of this week, I have fallen behind schedule and plan on getting back on track this week.
  • Next week, I will turn my focus to the web application part of the project by integrating the code I have written already into the webapp and getting it working on there. I also plan on setting up the initial infrastructure in the webapp to send/receive information to/from the other components of our project.

Doreen’s Status Report for 3/23/24

  • This week, I continued working on the hardware portion of the project and started writing the programs to control the hardware components. These tasks were accomplished together with my teammate Ryan. I first ordered inserts for the load cells so that we would be able to attach them to the stand using M4 bolts. This involved drilling in the inserts, placing the load cell above them, then attaching the bolts. Now the top of the stand contains the load cells and hooks. We also calibrated the load cells so that they display weights in pounds. Additionally, I helped test our motor, successfully controlling the speed and position of the NEMA 17 motor. We attached the motor to the stand and started testing whether our use case requirements, specifically the weight requirement of 25 lbs on each hook, would be achievable. Upon further testing, we realized that our first motor driver did work, and we have since begun using it to control the motor, as it has a higher maximum current rating. Our entire team tested wireless transmission between two Arduino boards and was able to communicate between them successfully.
  • My schedule is currently on track with the Gantt chart we created in previous weeks. According to the Gantt chart, I should be assembling the rack with the electronic components. I have tested individual components, and am now ready to write the programs which will control the hardware components.
  • Next week, I will continue working on ensuring that the rack can withstand the required weight as described by our use case requirements. This will involve adding a second motor to the stand, as one motor alone was not powerful enough. In addition, I will write code for transmitting data between the web application and the stand. This code will define what data is transmitted when a user attempts to check in or check out an item. Overall, the upcoming week will be dedicated to integration between different system components and further testing.
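
The load cell calibration mentioned above amounts to a two-point fit from raw ADC counts to pounds. The raw count values below are made-up examples, not readings from our cells.

```python
# Sketch of the two-point load cell calibration used to convert raw
# readings to pounds. The raw counts in the test are made-up examples.
def make_calibration(raw_zero, raw_known, known_lbs):
    """Return a function mapping a raw reading to pounds, given a
    zero-load reading and a reading with a known weight attached."""
    scale = known_lbs / (raw_known - raw_zero)
    return lambda raw: (raw - raw_zero) * scale
```

Each of the six load cells gets its own zero offset and scale factor, since no two cells read identically.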

Team Status Report for 3/23/24

We are entering the phase of the project that requires testing all individual components and fully integrating them into a unified system. Currently, we are in the process of merging all hardware components and developing Arduino code to manage their operation. Additionally, during motor testing this week, we found that a single NEMA 17 motor may not deliver sufficient torque for our intended application. Consequently, we have decided to use two of them. In the event of further difficulties, we may need to procure a larger stepper motor. Our contingency plans also include using different motor drivers to supply more current to the motor, helping to create more torque.

There has been a slight alteration in our design requirements as we conduct motor tests. Considering safety concerns and feasibility, we are contemplating a reduction in the maximum weight permitted on the rack hooks. Previously, we set the maximum weight for each hook at 25 lbs, but we now believe that reducing it to 15 lbs would be more feasible. This adjustment would still enable users to place large items while also improving the rotation of the rack. It may also be necessary to add an additional motor to the system to provide more torque, enabling smoother and sturdier rotation. If adding a second motor works, then we may reconsider whether the previous maximum weight of 25 lbs is actually feasible.

Our schedule has stayed the same: it currently states that we should be doing integration testing between the hardware and software components, which we have begun this week.

We have accomplished various tasks this week. First, we have finally been able to control the motor, setting the angle and speed at which it moves. We initially used the DRV8825 to drive the motor, but realized that the L298N motor driver would be able to provide more current, and have since switched to it instead. Second, we have been able to test wireless transmission by passing a message from one Arduino to another Arduino's serial monitor. Lastly, we calibrated our 6 load cells and are now able to read, in pounds, any weight placed on one of the hooks.

On the software side of things, the main improvement this week was tuning the training model to produce better results. After doing more research and looking over the codebase, we realized that the hyperplane being fit (the "kernel" being used) was linear. Since the data is not going to be linearly separable, we changed the kernel to a non-linear one, specifically the Gaussian kernel. Since the Gaussian kernel introduces more parameters to worry about, we now also do a randomized search over a large range for each parameter to find the optimal values each time we train the model.
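
The Gaussian-kernel retrain with a randomized parameter search could look roughly like the sketch below (assuming a scikit-learn SVM). The synthetic blob data and the search ranges are placeholders, not our real face embeddings or tuned ranges.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_blobs
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Sketch of retraining with a Gaussian (RBF) kernel plus a randomized
# search over its parameters. The synthetic data and search ranges are
# placeholder assumptions, not our real embeddings or tuned ranges.
X, y = make_blobs(n_samples=120, centers=3, cluster_std=1.0, random_state=0)

search = RandomizedSearchCV(
    SVC(kernel="rbf"),                   # Gaussian kernel
    param_distributions={
        "C": loguniform(1e-2, 1e3),      # regularization strength
        "gamma": loguniform(1e-4, 1e1),  # kernel width
    },
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(X, y)
```

`RandomizedSearchCV` samples parameter combinations from the given distributions instead of exhaustively gridding them, which keeps retraining time manageable as the parameter space grows.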

 

Ryan’s Status Report for 3/23/24

  • This week, I worked on testing more hardware components. The new motor driver was delivered but broke, prompting us to wait a couple of days for a replacement. After the replacement came in, we tested it with the stepper motor, and rotation works (we have some concerns about the strength of the motor, but further testing is required).
  • I also tested the NRF24L01 wireless transceivers, and was able to get the transmission of strings to work, which will form the basis of our wireless commands.
  • After the rest of the components were tested, I attached all the load cells permanently to the stand. Next, I started on the final Arduino code that would be needed to parse wireless commands and control components on the item stand.
  • According to the Gantt chart, I am slightly behind schedule, due to my underestimation of the complexity of getting some components to work. The entire next week was originally dedicated to end-to-end testing, but at least half of that time will go to working on the final Arduino code and testing motor strength with weights. I have been putting 15+ hours of work into the project each week since the start, so I believe I will catch up by next week (the Gantt chart was a bit optimistic about the "putting everything together" stage).
  • Next week, I plan to test out motor strength and rotation, and finish writing the final code for the Arduino before we screw the last few bolts in.