Team Status Report for 4/27/24

  • Currently, we still have to finish polishing the web application and facial recognition parts of the project. The basic functionality of the web app – showing a camera feed and running the facial recognition system – is working pretty well. We still have to finalize things such as displaying the check-in and check-out logs, along with manual checkout (in case the system cannot detect a checked-in user). The other risk is facial recognition. Throughout our testing over the last 2 days, it has been highly effective at distinguishing between diverse populations, but when the testing set contains many very similar faces, the system fails to differentiate them. The new facial recognition library we are using is definitely much better than the old one though. We will try our best to iron out these issues before the final deliverables.
  • The only change we have really made this week is switching to a new facial recognition library, which drastically improves the accuracy and performance of the facial recognition part of the project. This change was necessary because the old facial recognition code was not accurate enough for our metrics. This change did not incur any costs, except perhaps time.
  • There is no change to our schedule at this time (and there can’t really be because it is the last week).

UNIT TESTING:

Item Stand Integrity Tests:

  1. Hook Robustness: Placing 20 pounds on each of the 6 hooks, one at a time, and making one full rotation
  2. Rack Imbalance: Placing 60 pounds on 3 hooks on one side of the rack and making a full rotation
  3. Max Weight: Gradually placing more weight on the rotating rack until the maximum weight of 120 pounds is reached
  4. RESULTS: From these tests, we determined that our item stand was robust enough for our use cases. We found that, though some of the wood and electronic components did flex (as expected), the stand still held up well. The hooks were able to handle repeated deposit and removal of items, the rack did not tip over from imbalance, and even at the max weight of 120 pounds, the rack rotated continuously.

Integration Tests:

  1. Item Placement/Removal: Placing and removing items from the load cells and measuring the time it takes for the web app to receive information about the change.
  2. User Position Finding: Checking users in and out and measuring the time it takes for the rack to rotate to the target position.
  3. RESULTS: We found that the detection of item placement and removal propagated through the system very quickly, faster than our design requirement, so no design changes were needed there. On the other hand, when we tested the rack's ability to direct users to a new position or their check-in position, the time it took went well above our design requirement of 1 second. We had not accounted for the significant time it takes for the motor to rotate safely to the target location, and thus adjusted our design requirement to 7 seconds in the final presentation.

Facial Recognition Tests:

  1. Distance Test: Standing at various distances from the camera and checking when facial recognition starts recognizing faces.
  2. Face Recognition Time Test: Once close enough to the camera, measuring the time it takes for a face to be identified as new or already in the system.
  3. Accuracy Test: Checking various faces in and out, while measuring whether the system accurately maps users to their stored faces.
  4. RESULTS: Spoiler: we switched to a new facial recognition library, which was a big improvement over the old one. Our old algorithm with the SVM classifier was adequate at recognizing people at the correct distance of 0.5 meters and within the time limit of 5 seconds. Accuracy, though, took a hit. On very diverse facial datasets, our old model hit 95% accuracy during pure software testing. While this is good on paper, our integration tests found that in real life it was simply wrong a high percentage of the time, sometimes up to 20%. With this data from our testing, we decided to switch to a new model using the face_recognition Python library, which reportedly has a 99% accuracy rate. We recently conducted extensive testing of our own and found that its accuracy rate was well above 90% on diverse facial data (a rough sketch of the matching flow appears below). It still has some issues when everyone checked into the system looks very similar, but we believe this might be unavoidable and thus want to build in some extra safeguards, such as manual checkout in our web application (still a work in progress).
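
As an illustration, here is a minimal sketch of a check-in/check-out matching flow built on the face_recognition library. The function and variable names are ours for illustration, not the project's actual code; 0.6 is the library's documented default matching tolerance.

```python
import face_recognition

def find_checked_in_user(frame, known_encodings, tolerance=0.6):
    """Return the index of the stored user whose face encoding best matches
    the face in `frame` (an RGB image array), or None if the face looks new."""
    encodings = face_recognition.face_encodings(frame)
    if not encodings or not known_encodings:
        return None  # no face in the frame, or nobody is checked in yet
    # Distance between the candidate face and every checked-in user's encoding.
    distances = face_recognition.face_distance(known_encodings, encodings[0])
    best = int(distances.argmin())
    # Under the tolerance we call it a match; otherwise treat it as a new face.
    return best if distances[best] <= tolerance else None
```

Look-alike faces produce encodings whose pairwise distance can dip below the tolerance, which is consistent with the confusion we see on very similar test sets.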

Team Status Report for 4/20/24

  • There are a couple of significant risks that could jeopardize the success of the project. One risk is the stability of our hardware components. Through our extended testing, we discovered that our NRF24L01 wireless transceivers were not reliable 100% of the time. A majority of the time, transmission from the web app side of the system to the hardware side worked, but other times the signal did not go through, which is not ideal when the system hinges on the transceivers as the bridge between our 2 major subsystems.
  • As a contingency plan, we purchased extra NRF24L01 transceivers that can be swapped in if something is wrong with the current ones. As a last resort, our web app keeps track of item stand transactions, which allows the customer to easily find user items (effectively falling back to a normal coat-check system).
  • The second risk is facial recognition, another very important component of our system. Without highly accurate facial recognition, the system could prove frustrating to users who may not be able to retrieve their items easily. It is also a very hard thing to perfect, and there is not much time left.
  • As a contingency plan for the facial recognition system, we can allow the user to enter a name or identifier in place of scanning their face. Any other kind of identification method could work.
  • Through our end-to-end testing, we discovered that some of our design requirements were not realistic. For example, we initially required that once a user’s face was matched, the stand should display the user’s position on the rack with an LED within 1 second. This did not account for the time it would take for the motor to rotate from its current position to the user’s position after they had checked in. Therefore, we changed our requirement to 7 seconds to allow the motor to rotate to the target position.
  • Overall, there is not much change to the schedule.

Team Status Report for 4/6/2024

RISKS AND MITIGATION

After testing most of our components and the entire system this week, we identified 2 big risks that could jeopardize the success of the project. One risk is wireless communication. In our system, we leverage NRF24L01 wireless transceivers to communicate wirelessly between our facial recognition system and the item stand. Sometimes the communication is inconsistent: data can be sent from one transceiver but not received by the other. This may be because we are toggling both transceivers between a receive and transmit state. We are planning to test this more next week. Currently, we put a long delay between toggling the receive and transmit states, so the component has enough time to adapt.
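
To make the toggle-with-delay mitigation concrete, here is a rough sketch of the pattern. The `radio` object and its method names are hypothetical stand-ins for whatever transceiver driver is in use (our actual firmware is Arduino code); only the timing pattern is the point.

```python
import time

SETTLE_DELAY_S = 0.1  # generous settle time after each mode switch (assumption)

def request_response(radio, payload, timeout_s=2.0):
    """Send one payload, then switch to receive mode and wait for a reply.
    `radio` is any driver exposing stop_listening()/start_listening()/
    write()/available()/read() -- hypothetical names for illustration."""
    radio.stop_listening()        # enter transmit mode
    time.sleep(SETTLE_DELAY_S)    # give the transceiver time to change roles
    radio.write(payload)

    radio.start_listening()       # switch back to receive mode
    time.sleep(SETTLE_DELAY_S)    # settle again before polling for the reply
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if radio.available():
            return radio.read()
        time.sleep(0.01)
    return None                   # no reply -- the intermittent failure we see
```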

The second risk is the accuracy of the facial recognition system, which is not as accurate as we would like at this moment in time. This is a big deal in our system because inaccurate face recognition could result in the wrong user receiving items. We plan to continue testing and refining the algorithm, especially since the team is wrapping up the hardware component of the project.

DESIGN CHANGES

In the stand integrity part of our design requirements, we stated that each load cell should be able to hold 25 pounds. However, we have realized that the load cells we purchased can only detect up to 22 pounds. Additionally, when doing end-to-end testing this past week with items of various weights, the heaviest item placed on the rack was only 10-15 pounds. As a result, we are decreasing the maximum weight for each load cell from 25 pounds to 20 pounds. With six hooks at 20 pounds each, this means the maximum total weight that can ever be on the item stand at once is 120 pounds. This modification does not incur any costs; it is simply more realistic for the types of items we expect users to place on the rack. For the other design requirements (detecting that an item has been added or removed within 1 second, 95% accuracy for facial recognition, etc.), we will do further testing and make any modifications necessary to satisfy them.

SCHEDULE

The schedule remains the same. However, we are using some of our slack time to fix any unforeseen issues and ensure our use case and design requirements are met. Aside from these minor modifications, all members have completed their required tasks, and what remains is further testing.

VERIFICATION/VALIDATION

As a team, we plan to combine all the components into one system before running our tests. For facial recognition, we mostly want to implement the tests we described in the design proposal. We will test with users standing at various distances from the camera, with the ideal result being that users can only interact with the system within 0.5 meters. We also want to maintain accuracy, so we plan to test our facial recognition accuracy with at least 20 different faces, with an ideal result of at least 95% accuracy. To test the integration between our hardware and software components, we plan to run tests that model real-life scenarios, in addition to the specific tests we listed in our design proposal. For example, we can have several people check in and out of the system in various orders to determine whether the system accurately keeps track of user belongings. For the item stand, we will use various weights and see whether the item stand is damaged or can still rotate. We will also introduce weight imbalance: putting a very heavy weight on one side and rotating the item stand. Lastly, we want to time how long the process of checking in or checking out takes. We want to ensure that the motor can rotate to the correct position and that updates between the facial recognition system and item stand do not take long. This may mean modifying our code to lessen certain delays or increasing the acceleration/speed of the motor.
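
For the accuracy portion of this plan, the measurement itself is simple. Here is a sketch of the kind of harness we have in mind; the names are placeholders for illustration, not project code:

```python
def measure_accuracy(recognize, labeled_samples):
    """recognize: callable mapping a face image to a predicted user id.
    labeled_samples: (image, true_user_id) pairs, e.g. our 20 test faces."""
    correct = sum(1 for image, true_id in labeled_samples
                  if recognize(image) == true_id)
    return correct / len(labeled_samples)

# Pass criterion: measure_accuracy(system_under_test, test_faces) >= 0.95
```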

Team Status Report for 3/30/24

RISKS AND MITIGATION

  • There are two potential risks that may impact the success of our project. Firstly, after beginning integration between the facial recognition system and the hardware system (the coat rack), we noticed that the facial recognition system did not recognize users with the level of accuracy we expected. In some situations, users who were already checked in and wanted to check out would be checked in again. We suspect this is a result of the way the facial recognition system is trained, and have therefore begun testing other implementations. Secondly, the coat rack is not yet able to sustain the amount of weight we initially targeted, 20 pounds. The holding torque of the NEMA 17 motors is not high enough, so although rotation works well when only coats are placed on the hooks, it does not work well when heavier items, like small backpacks, are placed. As a result, we have bought a NEMA 34 motor, whose holding torque we expect to satisfy our use case requirements. In addition to changing the motor, we have also considered limiting each hook to 10-15 pounds, which would still allow users to place a wide range of items on the rack.

REQUIREMENT CHANGES

  • We are considering changing the weight requirement for our system from 20 pounds to 10-15 pounds. This change has not yet been finalized, however, as we expect the NEMA 34 motor to be able to satisfy our initial design requirement.

SCHEDULE

  • Our schedule has remained the same. Changing the motors will not require significant work, so it can be done as we work on the integration between the software and hardware components. As of now, we are working on finalizing the facial recognition algorithm so that it can satisfy our use case requirements. This involves ensuring that users are accurately identified, and that they are only identified when they are less than 0.5 m from the camera.

Team Status Report for 3/23/24

We are entering the phase of the project requiring testing of all individual components and full integration into a unified system. Currently, we are in the process of merging all hardware components and developing Arduino code to manage their operation. Additionally, during motor testing this week, we found that a single NEMA 17 motor may not deliver sufficient torque for our intended application. Consequently, we have decided to utilize two of them. In the event of further difficulties, we may need to procure a larger stepper motor. Our contingency plans also include using different motor drivers to provide more current to the motor, helping to create more torque.

There has been a slight alteration in our design requirements as we conduct motor tests. Considering safety concerns and feasibility, we are contemplating a reduction in the maximum weight permitted on the rack hooks. Previously, we set the maximum weight for each hook at 25 lbs, but we now believe that reducing it to 15 lbs would be more feasible. This adjustment would still enable users to place large items while also improving the rotation of the rack. In addition, we may need to add a second motor to the system to provide more torque, enabling smoother and sturdier rotation. If adding a second motor works, we may reconsider whether the previous maximum weight of 25 lbs is actually feasible.

Our schedule has stayed the same, as it currently states that we should be doing integration testing between the hardware and software components, which we began this week.

We have accomplished various tasks this week. Firstly, we are finally able to control the motor, setting the angle and speed at which it moves. We initially used the DRV8825 to drive the motor, but realized that the L298N motor driver could provide more current, and have since switched to it instead. Secondly, we tested wireless transmission by passing a message from one Arduino to another Arduino’s serial monitor. Lastly, we calibrated our 6 load cells and can now read any weight placed on one of the hooks in pounds.

On the software side of things, the main improvement this week was tuning the training model to produce better results. After doing more research and looking over the codebase, we realized that the kernel being used to find the separating hyperplane was linear. Since the data is not going to be linearly separable, we changed to a non-linear kernel, specifically the Gaussian (RBF) kernel. Because the Gaussian kernel introduces more parameters to worry about, we now also run a randomized search over a large range for each parameter to find optimal values each time we train the model.
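
A minimal sketch of this tuning setup with scikit-learn, assuming the classifier is an SVC; the synthetic data here is a placeholder standing in for our face embeddings and labels, and the parameter ranges are illustrative, not the project's actual values.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Synthetic placeholder data standing in for face embeddings and user labels.
X, y = make_classification(n_samples=200, n_features=128, n_informative=32,
                           n_classes=4, random_state=0)

# Randomized search over C and gamma for the Gaussian (RBF) kernel.
search = RandomizedSearchCV(
    SVC(kernel="rbf"),
    {"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-4, 1e1)},
    n_iter=50, cv=5, random_state=0,
)
search.fit(X, y)
model = search.best_estimator_  # classifier with the best parameters found
```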

 

Team Status Report for 3/16/24

  • The most significant risk we face with regard to the hardware component of our project involves the rotation mechanism. We faced some challenges with faulty components in the past week that delayed testing of the motor. We found from testing that the motor driver may not provide enough voltage to drive the motor, causing it to only vibrate rather than fully rotate. As a result, we have ordered a new motor driver and asked a peer to lend us theirs so that in the following week we can get back on track with testing the component and fully assembling the rack. In addition, we still need to complete testing of our wireless transceiver.
  • The system requirements have stayed the same. Similarly, our schedule is the same as in previous weeks.
  • For the software, there is now a way to register faces! While the system is running, it keeps track of the most recent frames it has read. When a new face is detected, it saves those frames in the system and then retrains the recognition model with those new frames included (a rough sketch of this flow appears after this list). It works OK for now, and will hopefully work better after tuning the parameters of the recognition model.
  • This week, we painted and finished the bottom portion of the stand. As for the six-pronged component at the top, we attached a slip ring and load cells and further tested the wiring for the load cells to ensure that they could capture weight changes. Here are images depicting our current progress with the hardware component.
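
The registration flow sketched in Python; `dataset`, `looks_new`, and `retrain` are hypothetical stand-ins we introduce for illustration, not the project's actual code.

```python
from collections import deque

RECENT_FRAMES = 10
recent = deque(maxlen=RECENT_FRAMES)  # rolling buffer of the latest frames

def on_frame(frame, dataset, looks_new, retrain):
    """Handle one camera frame. `dataset` maps user id -> saved frames, while
    `looks_new` decides whether the face is unknown and `retrain` rebuilds the
    recognition model -- all hypothetical stand-ins for the real components."""
    recent.append(frame)
    if looks_new(frame):
        user_id = len(dataset)           # assign the next free user id
        dataset[user_id] = list(recent)  # keep the buffered frames for training
        retrain(dataset)                 # rebuild the model with the new user
```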

 

Team Status Report for 3/9/24

  • The most significant risk right now is bringing all the parts together. Especially with constructing the hardware, we have hit many situations similar to deadlock in real life. For example, we want to test a component by attaching it to our rack, but attaching it to the rack is permanent, so we want to test the component before committing to the attachment. This has slowed down our final parts list compilation a bit. We plan to dedicate most of our time next week to testing our components as thoroughly as possible. As for contingency plans, we plan to be very flexible about researching and ordering new and different components in the event that our existing items do not work out.
  • There are no changes to the existing design at this time
  • Progress on the software has been a bit slow because we were unexpectedly busy, but a countdown has been added to facial recognition to give the user some time to prepare to be scanned, and a way to register new faces is currently in the works.

Current progress on the rack!

 

  • Part A was written by Ryan Lin, Part B by Doreen Valmyr, and Part C by Surafel Tsadik.

Part A: Our product solution is universal across global factors. There is no specific skill or knowledge required to use our system, because we designed it to be easy to use with limited user interaction. People all over the world go to events and need their items stored quickly and securely. One possible way our product solution may fall short globally is if our implementation is not all-encompassing; for example, faces differ around the world, which will be a challenge for our facial recognition system.

Part B: Our innovative product streamlines the often cumbersome process of checking in and retrieving personal belongings, particularly at events attended by large crowds. By leveraging facial recognition technology integrated with a physical coat rack system, individuals can swiftly deposit and retrieve their items, freeing up valuable time for social interaction. Crucially, our solution is designed to be inclusive and culturally sensitive, accommodating users from diverse backgrounds and belief systems. With no barriers to access, individuals of any cultural or religious affiliations can seamlessly utilize our product at events. Additionally, the non-invasive nature of our technology ensures widespread acceptance, as it simply relies on facial recognition for item storage, devoid of any controversial features.

Part C: Currently, many coat checks use a physical ticket system to store and retrieve items. Paper is usually used to print these tickets, and if the event has a large audience, a lot of paper will be used and later thrown away. Our system eliminates the need for physical tickets, meaning no more paper needs to be used to print them. Less paper wasted means a positive impact on tree preservation, decreasing the number of trees we need to cut down to produce ticket paper.

Team Status Report for 02/24/24

  • We still have some hardware risks for our project. We need to create or buy a gear that is strong enough to rotate our rack. Also, though some testing has been done on some components, others cannot be tested because they have not been ordered or delivered yet. Secondly, all team members have a midterm on Wednesday next week, which could slow down some progress. To mitigate this risk, we plan to double down on work immediately after the exams. This allows the same amount of work to be done while also allowing us to wait for components.
  • Overall, there weren’t any big changes to the design, except for some minor adjustments to the hardware design of the rack during construction, such as changing the number of legs from 3 to 4 and reducing the height of some sections. These changes just simplified some construction work. The schedule has not changed at all though.
  • We tested the load cells with an Arduino and they work exactly as expected
  • We are almost done with the non-electronic component of the rack
  • Shout out to Justin from TechSpark
  • Faces are being recognized live with manually inputted images, next is to add new faces to the model through a video stream

Team Status Report for 2/17/24

  • One of the risks is access to machines in TechSpark for the construction of our hardware rack. This risk is lessened from last week because of some design changes. We plan to CNC cut sheets of plywood and join them with glue instead of cutting wood with other machines and screwing the pieces together. This reduces the amount of training or oversight needed because a CNC machine is relatively easy to use. We have also found some people with training who can potentially help us use the machine.
  • If we are not able to use the CNC machine, our contingency plan is to use the TechSpark provided quarter-inch plywood and the laser cutters, because those machines do not require formal training to use.
  • Some of the ordered components may not work, so we ordered spares and plan to test them out within the next week.
  • The only thing that really changed was the design of our hardware rack, after working through the specifics of handling weight imbalance and robustness.
  • Current rack design!

  • A was written by Ryan Lin, B was written by Doreen Valmyr and C was written by Surafel Tsadik.

Part A: This product solution does not, in general, address the needs of public health, safety, and welfare. However, there may be some small factors of our project that contribute to this topic. For example, perhaps the removal of a ticket system and the ease of checking in personal items brings a sense of ease to people who may be stressed attending an event. Our product also allows customers to feel safe knowing that their items are kept track of and kept in a specific position on our item stand. In terms of physical safety and public health, there isn’t much of a connection. This product also doesn’t provide for the basic needs of people, because it is primarily used at events that usually don’t support people’s basic needs.

Part B: Our facial recognition algorithm will use tools like OpenCV to characterize and distinguish different faces, so a person’s social group, whether cultural, political, or economic, will not be taken into consideration when checking in or checking out a user. In addition, our product can be used in a variety of social contexts and within all types of social gatherings that require an attendee to set aside particular personal items. Without strict rules on the items that can go on the rack (just the assumption that it will mostly be used for coats), users are limited not in the kinds of items they can set aside, but mostly in the weights of those items. As a result, users are able to use our product at their social gatherings without limitations.

Part C: Coat checks at large events currently put too much responsibility on the attendees. Attendees usually have to keep track of a ticket/number throughout the event, which adds a whole layer of complexity at a time when attendees may not want a complex experience (a relaxed event, a very formal event). It becomes worse when people lose or forget their identifier, leading to a messy (and potentially malicious) retrieval of checked-in coats. Our product eliminates the need for external identifiers like a ticket or number, as you can check in using your face (which everyone will always have). Without the need to keep track of an external identifier, attendees can focus more on the event they’re attending and have a seamless process for retrieving their coats.

Team Status Report for 2/10/24

  • We are still figuring out how we are going to build the rack as none of us have access to the machines in TechSpark. We are looking into getting our wood cut by a woodshop employee at TechSpark, or using the machines at the Project Olympus Incubator Makerspaces.
  • There have been no changes to our design right now, but we are looking into possible weight imbalances affecting the servo rotation and plan to have a new design by next week.
  • Over the next week, we want to come to a consensus on what facial recognition library we want to use for our project and attempt to implement facial recognition soon.