Team Status Report for 4/27/24

  • Currently, we still have to finish polishing the web application and the facial recognition portion of the project. The basic functionality of the web app – showing a camera feed and running the facial recognition system – is working well. We still have to finalize features such as displaying the check-in and check-out logs, along with manual checkout (in case the system cannot detect a checked-in user). The other risk is facial recognition accuracy. Throughout our testing in the last 2 days, the system has been highly effective at distinguishing between diverse populations, but when the testing set contains many very similar faces, it fails to differentiate them. The new facial recognition library we are using is a clear improvement over the old one, though. We will do our best to iron out these issues before the final deliverables.
  • The only change we made this week is switching to a new facial recognition library, which drastically improves the accuracy and performance of the facial recognition portion of the project. This change was necessary because the old facial recognition code was not accurate enough to meet our metrics. The change did not incur any costs, except perhaps time.
  • There is no change to our schedule at this time (and there can’t really be because it is the last week).

UNIT TESTING:

Item Stand Integrity Tests:

  1. Hook Robustness: Placing 20 pounds on each of the 6 hooks, one at a time, and making one full rotation
  2. Rack Imbalance: Placing 60 pounds on 3 hooks on one side of the rack and making a full rotation
  3. Max Weight: Gradually placing more weight on the rotating rack until the maximum weight of 120 pounds is reached
  4. RESULTS: From these tests, we determined that our item stand was robust enough for our use cases. We found that although some of the wood and electronic components did flex (as expected), the stand still held up well. The hooks handled repeated deposit and removal of items, the rack did not tip over from imbalance, and even with the maximum weight of 120 pounds, the rack rotated continuously.

Integration Tests:

  1. Item Placement/Removal: Placing and removing items from load cells and measuring the time it takes for the web app to receive information about it.
  2. User Position Finding: Checking in and checking out users and measuring the time it takes for the rack to rotate to the assigned position.
  3. RESULTS: We found that the detection of item placement and removal on the item stand was propagated through the system very quickly, well under our design requirement, so no design changes were needed there. On the other hand, when we tested the rack's ability to rotate users to a new position or to their check-in position, the time it took went well above our design requirement of 1 second. We had not accounted for the significant time it takes for the motor to rotate safely to the target location, and thus adjusted our design requirement to 7 seconds in the final presentation.
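
As a rough illustration of how the timing for test 2 could be measured from the software side, the sketch below times a single command round trip over a serial link. The port name, baud rate, and the "ROTATE"/"DONE" message strings are hypothetical placeholders for illustration, not the exact protocol our system uses.

    # Sketch of a latency measurement for the rotation test (test 2).
    # Port name, baud rate, and the "ROTATE"/"DONE" messages are placeholders.
    import time
    import serial

    def measure_rotation_latency(port="/dev/ttyACM0", position=3, baud=9600):
        with serial.Serial(port, baud, timeout=1) as link:
            time.sleep(2)                                  # let the Arduino reset after the port opens
            start = time.perf_counter()
            link.write(f"ROTATE {position}\n".encode())    # request a target hook position
            deadline = start + 15
            while time.perf_counter() < deadline:
                reply = link.readline().decode().strip()   # empty string once the 1 s timeout expires
                if reply == "DONE":                        # rack reports it reached the position
                    return time.perf_counter() - start
            raise TimeoutError("no acknowledgement from the rack")

    if __name__ == "__main__":
        print(f"rotation latency: {measure_rotation_latency():.2f} s")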

Facial Recognition Tests:

  1. Distance Test: Standing at various distances from the camera and checking at what point the facial recognition system starts recognizing faces.
  2. Face Recognition Time Test: Once close enough to the camera, measuring the time it takes for a face to be classified as new or already in the system.
  3. Accuracy Test: Checking in and checking out various faces while measuring whether the system accurately maps users to their stored faces.
  4. RESULTS: Spoiler: we switched to a new facial recognition library, which was a big improvement over the old one. Our old algorithm with the SVM classifier was adequate at recognizing people at the correct distance of 0.5 meters and within the time limit of 5 seconds. Accuracy, however, took a hit. On very diverse facial datasets, our old model hit 95% accuracy during pure software testing. While this is good on paper, our integration tests involving this system found that, in real life, it misidentified users a high percentage of the time, sometimes up to 20%. With this data from our testing, we decided to switch to a new model using the facial_recognition Python library, which reportedly has a 99% accuracy rate. We recently conducted extensive testing and found that its accuracy was well above 90-95% on diverse facial data. It still has some issues when everyone checked into the system looks very similar, but we believe this may be unavoidable and thus want to build in extra safeguards such as manual checkout in our web application (still a work in progress).
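
For reference, the sketch below shows the basic way the facial_recognition library can be used to decide whether a captured face matches a checked-in user. The image paths and the tolerance value are illustrative assumptions rather than our final configuration; the library's default matching tolerance is 0.6, and lower values make matching stricter, which is relevant to the similar-faces issue above.

    # Minimal matching sketch with the face_recognition library.
    # Image paths and the tolerance value are illustrative assumptions.
    import face_recognition

    # Encode the stored faces of users who are already checked in (hypothetical paths).
    known_paths = ["checked_in/user1.jpg", "checked_in/user2.jpg"]
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
        for p in known_paths
    ]

    # Encode the face captured by the camera during a check-out attempt.
    frame = face_recognition.load_image_file("camera/frame.jpg")  # hypothetical path
    encodings = face_recognition.face_encodings(frame)

    if not encodings:
        print("No face detected in the frame.")
    else:
        distances = face_recognition.face_distance(known_encodings, encodings[0])
        matches = face_recognition.compare_faces(known_encodings, encodings[0], tolerance=0.5)
        if any(matches):
            best = int(distances.argmin())
            print(f"Matched checked-in user {best} (distance {distances[best]:.2f})")
        else:
            print("Unrecognized face: treat as a new check-in.")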

Doreen’s Status Report for 4/27/24

  • This week I mainly worked on the final presentation and on end-to-end testing. I worked with Surafel on completing speed and distance tests to ensure that only users within 0.5 meters of the camera would be recognized and that they would be recognized within 5 seconds. This involved running tests at various distances from the camera and fine-tuning the facial recognition algorithm whenever results were unexpected (one possible way to implement the distance gate is sketched after this report's bullets). For testing the entire system, I asked several volunteers to check items in and out of the rack and ensured that the rack successfully displayed their items upon checkout. Lastly, I prepared for the final presentation since I would be the speaker. I spent some time practicing the delivery, making sure I was well informed about all components of our project and that the design trade-offs and testing procedures were clearly explained.
  • My progress is on schedule as I have completed all the parts of the project initially assigned to me, including unit and integration testing.
  • Next week, I hope to work with the other members of the team to wrap up the project. I will mainly work on completing the facial recognition and the web application. If there is time, I will also work on implementing the bonus feature, the buzzer alert, along with various tests to ensure this feature works as intended with the rest of the system.
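
Regarding the 0.5-meter recognition range mentioned in the first bullet, one possible way to implement such a distance gate is to only run recognition on faces whose bounding boxes are large enough, since closer faces appear larger in the frame. The sketch below assumes that approach; the pixel threshold is a made-up value that would need calibration against our actual camera, and this is not necessarily how our final tuning was done.

    # Hypothetical sketch of gating recognition by face size (a proxy for distance).
    # MIN_FACE_HEIGHT_PX is an assumed value and would need per-camera calibration.
    import face_recognition

    MIN_FACE_HEIGHT_PX = 150  # assumed: roughly a face within ~0.5 m of the camera

    def faces_in_range(frame):
        """Return bounding boxes of faces that appear large enough to be in range."""
        locations = face_recognition.face_locations(frame)  # (top, right, bottom, left) tuples
        return [loc for loc in locations if (loc[2] - loc[0]) >= MIN_FACE_HEIGHT_PX]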

Doreen’s Status Report for 4/20/24

  • These past 2 weeks, I mainly focused on testing. This involved testing the integrity of the item stand, determining whether our system met our timing requirements, and doing end-to-end testing with the entire system (including the item stand and facial recognition). For testing the integrity of the item stand, I wanted to ensure that the stand could withstand a large amount of weight placed on it: each hook should be able to hold up to 20 pounds, and the Nema 34 motor should be able to rotate the maximum weight of 120 pounds that we expect. In addition, I tested whether our system could quickly recognize users and rotate to a specific user's position within 5 seconds. I worked with my team on fixing the motor speed and acceleration as well as looking more into the blocking behavior of our wireless transmission. We ensured that the speed at which the motor rotates is safe for users and that the entire check-in and check-out process occurs relatively quickly. I also helped my team members test the facial recognition, helping to develop new approaches and implementations to bring the accuracy closer to our target of 95%. Other than testing, I worked on adding code to determine if an attacker attempted to steal user belongings from the item stand. This involved introducing a buzzer and writing code to alert users if anything is removed from the stand outside of a check-out process. The buzzer implementation does not work yet, but I hope to test it further in the coming weeks.
  • My progress is slightly behind. This week, after introducing the buzzer to our system, our system unfortunately broke. As a result, we had to spend time determining which component led to the failure. With my team members, we determined that the wireless transceivers were not working, so we had to get new ones. The additional time to debug this issue and receive new parts delayed our end-to-end testing, leaving us less time to tune our algorithms. I plan to do more testing next week to prepare for the final demo, ensuring that our facial recognition system can accurately detect new faces and that our item stand can correctly find good positions for users to place their items.
  • I plan to spend some time next week to further test the system and try to satisfy any design requirements that were not met. This involves doing more end-to-end testing, and improving the speed at which users can complete the check-in and check-out processes. I also plan to re-introduce the buzzer code so that we can detect if attackers have stolen items on the stand.
  • As I’ve debugged the project, I found it necessary to learn more about facial recognition. I have had to look into the differences between SVM and Euclidean distance classifiers, and I’ve learned about ways to normalize faces. In terms of hardware, I’ve had the opportunity to learn how to calibrate load cells and use wireless transmission to communicate between two Arduinos. I have also written more Arduino code, so I improved my skills in this aspect. In terms of debugging, I have learned to do unit testing on individual components to locate which parts of the system led to a failure. I have also relied on online tutorials and forums to learn about new approaches to solving problems dealing with both the hardware and software components of our system, like the load cells, motor, and facial recognition.
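
To make the classifier comparison in the last bullet concrete, the sketch below contrasts the two styles on precomputed 128-dimensional face encodings: a nearest-neighbor match by Euclidean distance with a rejection threshold, versus an SVM trained over the stored encodings. The variable names, the 0.6 threshold, and the use of scikit-learn's SVC are assumptions for illustration, not our exact training pipeline.

    # Illustrative comparison of Euclidean-distance and SVM classification
    # over face encodings; data names and the threshold are placeholders.
    import numpy as np
    from sklearn.svm import SVC

    def euclidean_classify(encoding, known_encodings, known_labels, threshold=0.6):
        """Return the label of the nearest stored face, or 'unknown' if all are too far."""
        dists = np.linalg.norm(np.asarray(known_encodings) - encoding, axis=1)
        best = int(np.argmin(dists))
        return known_labels[best] if dists[best] < threshold else "unknown"

    def train_svm(known_encodings, known_labels):
        """Fit a linear SVM once over all stored encodings (needs at least two people)."""
        clf = SVC(kernel="linear")
        clf.fit(np.asarray(known_encodings), known_labels)
        return clf

    # Usage with hypothetical data:
    #   label = euclidean_classify(new_encoding, encodings, labels)
    #   label = train_svm(encodings, labels).predict([new_encoding])[0]

One practical difference is that a plain SVM always predicts one of the trained labels, so rejecting an unseen face requires extra work, whereas the distance threshold rejects unknown faces directly.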

Doreen’s Status Report for 4/6/24

  • This week, I mainly worked on preparing for the interim demo. This involved doing some end-to-end testing with the hardware components and the facial recognition system. There were issues relating to wireless transmission between the Python code for the facial recognition and the Arduino code, so I worked with my team members to fix them. We realized that the readline function we were using would block on many trials, so we had to deduce where the problem was and how to fix it. We thought about various solutions, like flushing the input and output buffers, but ultimately realized we had to add delays in various locations of our programs to keep the subsystems in sync (a bounded-read alternative is sketched after this report's bullets). In addition to this integration work, I tested the Nema 34 motor and helped write code to control it upon a check-in or check-out. This involved spending time in the TechSpark wood shop to carve a hole in the rack for the motor and mounting the motor sturdily to the rack using brackets and nails. Additionally, I worked on code to control the motor, allowing it to rotate to a user's position on the rack.
  • My progress is slightly behind schedule. I have worked on testing various components of our system, including the load cells, motors, and wireless transmission. However, the complete system has not been tested thoroughly with members outside of our group. In addition, the LEDs, which indicate to the user whether they have successfully placed their items on the rack, have not been permanently installed, so a bit more time in the wood shop is required for that. In order to catch up, I will need to add the LEDs and do further testing with volunteers to ensure that the system works as a whole.
  • Next week, I will work on adding the LEDs to the rack. I will also work on doing end-to-end testing. Further details are provided above.
  • Although we noticed that our system works mostly as intended, I would like to further optimize the delays in the check-in and check-out process. The delays added for transmitting information mean that our timing requirement (taking less than 5 seconds for a user to be recognized) is not met. As a result, I would like to find the smallest delays that allow message transmission without blocking behavior. Secondly, I would like to test whether each hook on the rack can withstand the maximum weight (20-25 pounds) previously set in our use case requirements. This will involve placing 20-pound backpacks on our rack and seeing if the motor can successfully rotate that much weight. In a similar regard, I would like to ensure that even with weight imbalances, the motor can successfully rotate the items on the rack. I will do this by placing a large weight (20 pounds) on one side and nothing on the other. I will also check in 6 times to fill the entire rack, then remove all items on one side, to ensure that the motor successfully rotates the weight and that the threads on the gears are not damaged.
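
Relating to the readline blocking issue and the delay tuning described above, the sketch below shows one bounded alternative: opening the serial port with a short timeout so readline can never hang, and keeping the inter-message delay as a single tunable constant. The port name, baud rate, message strings, and delay values are placeholders for illustration, not our final settings.

    # Sketch of a bounded read loop with a single tunable send delay.
    # Port, baud rate, messages, and delay values are placeholders.
    import time
    import serial

    SEND_DELAY_S = 0.1  # assumed starting point; the goal is to shrink this safely

    def send_and_wait(link, command, timeout_s=5.0):
        """Write a command, then poll for a reply without letting readline hang."""
        link.reset_input_buffer()              # drop stale bytes from earlier messages
        link.write((command + "\n").encode())
        time.sleep(SEND_DELAY_S)               # give the Arduino time to parse the command
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            reply = link.readline()            # returns b"" after the port's 0.5 s timeout
            if reply:
                return reply.decode().strip()
        return None                            # caller decides whether to retry

    with serial.Serial("/dev/ttyACM0", 9600, timeout=0.5) as link:  # hypothetical port
        time.sleep(2)                          # allow the board to reset after the port opens
        print(send_and_wait(link, "CHECKIN 2"))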

Team Status Report for 3/30/24

RISKS AND MITIGATION

  • There are two potential risks that may impact the success of our project. Firstly, after beginning integration between the facial recognition system and the hardware system (the coat rack), we noticed that the facial recognition system would not recognize users with the level of accuracy that we expected. In some situations, users who were already checked in and wanted to check out would be checked in again. We suspect that this is a result of the way the facial recognition system is trained, and have therefore begun testing other implementations. Secondly, the coat rack is not yet able to sustain the amount of weight that we initially targeted, 20 pounds. The holding torque of the Nema 17 motors is not high enough, so although the rotation works well when only coats are placed on the hooks, it does not work well when heavier items, like small backpacks, are placed. As a result, we have bought a Nema 34 motor, whose holding torque we expect to satisfy our use case requirements. In addition to changing the motor, we have also considered limiting each hook to 10-15 pounds, which would still allow users to place a wide range of items on the rack.

REQUIREMENT CHANGES

  • We are considering changing the weight requirement for our system from 20 pounds to 10-15 pounds. This change has not yet been finalized, however, as we expect the Nema 34 motor to be able to satisfy our initial design requirement.

SCHEDULE

  • Our schedule has remained the same. Changing the motors will not require significant work, so it can be done as we work on the integration between the software and hardware components. As of now, we are working on finalizing the facial recognition algorithm so that it can satisfy our use case requirements. This involves ensuring that users are accurately identified, and that they are only identified when they are less than 0.5 m from the camera.

Doreen’s Status Report for 3/30/24

  • This week, I mainly worked on writing code to transmit data between 2 Arduinos. This involved using 2 wireless transceivers to test whether commands could be sent back and forth between them. I worked on this with Ryan. Together, we tested whether a command could be sent from one Arduino to the other to control an LED. Once this worked, we tested whether we could send commands to control the motors connected to the Arduino on the coat rack. After creating the data structures and specifying the communication between the two sides, I helped Ryan and Surafel test the integration between the facial recognition system and the components on the rack. This involved transmitting data from our Python program and rotating our motors when someone attempted to check in or check out. We found some issues while testing this, and plan to continue our integration efforts and fix any software problems that arise upon further testing.
  • My progress is slightly behind. Although a majority of the work for the hardware components and associated code has been finalized, our team still plans on making further changes, like replacing our current motors. According to the schedule, we are supposed to be solely integrating the different components of our system, but with this change, we need to work on integration while we make the necessary motor changes to ensure that we can satisfy our use case requirements. In addition, with integration there is a possibility that system components that seemed to work on their own may not work together, which may require additional work to fix.
  • For next week, I will continue working on integrating the facial recognition system with the hardware system. Although we have been able to successfully transmit data from our Python program to the Arduino Mega on the rack using the wireless transceivers, I want to ensure that users can successfully check in and check out. In addition, I want to ensure that the motor can correctly turn to the position specified by a user's item position on the stand. Besides these software aspects of the project, I want to complete the hardware aspect by replacing the Nema 17 motor with the Nema 34 motor, which is expected to arrive next week. I would like to test that the holding torque of the motor actually satisfies our use case requirement: sustaining rotation with up to 20 lbs on each of the 6 hooks on the rack.

Doreen’s Status Report for 3/23/24

  • This week, I continued working on the hardware portion of the project and started writing the programs to control the hardware components. These tasks were accomplished together with my teammate, Ryan. I first ordered inserts for the load cells so that we would be able to attach them to the stand using M4 bolts. This involved drilling in the inserts, placing the load cells above them, and then attaching the bolts. Now the top of the stand contains the load cells and hooks. We also calibrated the load cells so that they display weights in pounds (a sketch of the calibration arithmetic appears after this report's bullets). Additionally, I helped with testing our motor, successfully controlling the speed and position of the NEMA 17 motor. We attached the motor to the stand and started testing whether our use case requirements, specifically the weight requirement of 25 lbs on each hook, would be achievable. Upon further testing, we realized that our first motor driver did in fact work, and have since begun using it to control the motor, as it has a higher maximum current rating. Our entire team tested wireless transmission between two Arduino boards and was able to communicate between them successfully.
  • My schedule is currently on track with the Gantt chart we created in previous weeks. According to the Gantt chart, I should be assembling the rack with the electronic components. I have tested individual components, and am now ready to write the programs which will control the hardware components.
  • Next week, I will continue working on ensuring that the rack can withstand the weight required by our use case requirements. This will involve adding a second motor to the stand, as one motor alone was not powerful enough. In addition, I will write code for transmitting data between the web application and the stand. This code will define what data is transmitted when a user attempts to check an item in or out. Overall, the upcoming week will be dedicated to integration between the different system components and further testing.
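
For reference, the load cell calibration mentioned in the first bullet can be summarized as a two-point (tare plus known reference weight) conversion. The sketch below shows the arithmetic with made-up raw readings; real values come from reading the amplifier with the hook empty and then with a known weight placed on it.

    # Sketch of a two-point load cell calibration to pounds.
    # The raw readings below are made-up example numbers.
    RAW_EMPTY = 84120             # raw reading with nothing on the hook (tare)
    RAW_WITH_REFERENCE = 131560   # raw reading with the reference weight placed
    REFERENCE_WEIGHT_LB = 5.0     # known calibration weight in pounds

    SCALE = (RAW_WITH_REFERENCE - RAW_EMPTY) / REFERENCE_WEIGHT_LB  # counts per pound

    def raw_to_pounds(raw_reading):
        """Convert a raw amplifier reading to pounds using the tare and scale."""
        return (raw_reading - RAW_EMPTY) / SCALE

    print(round(raw_to_pounds(103000), 2))  # ~1.99 lb for this made-up reading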

Doreen’s Status Report for 3/16/24

  • This week I worked with my team member on the hardware component of the project. This involved painting the rack and creating cutouts for the electronic components. We determined the placement of the load cells, slip ring, and power cord on the rack. To ensure that these components could be placed, we had to make more cutouts on the stand. In addition, we assembled the bottom portion of the stand by attaching L brackets to the base and legs of the stand. Furthermore, I helped with testing the load cells. Although we had tested them in previous weeks, we wanted to see if the power, ground, and clock lines could be combined for the load cells, which required further wiring and testing.
  • According to our Gantt chart, we are still on schedule because we anticipated the current setbacks. Our schedule states that we should be assembling our rack and adding the electronic components, which is what we are currently working on. However, we need to ensure that we have fewer setbacks with the motor by quickly testing our new motor driver, in case we need to make additional plans to ensure that we can meet our requirements.
  • For next week, I will help test our motor driver controller and wireless transceiver. In addition to this, I will help further assemble the rack, so that it is at a more completed state. Refer to the team status report for updated media/images.

Doreen’s Status Report for 3/9/24

  • This week, I continued working on the coat rack portion of our project. This involved finishing cutting and sanding our plywood pieces in the wood shop in TechSpark, as well as adding nails to some pieces to ensure the multiple layers of wood would stay together. I also worked with my teammate on staining some of the pieces and designing a gear that would rotate the rack. Lastly, I contributed to our design report by writing several portions and adding corrections where needed.
  • My progress is currently on schedule. However, since the rack is not fully assembled and we do not have the parts to power our motor, we are not able to test if the rotation will work. As a result, we need to quickly order parts and complete assembly next week so that we can test rotation.
  • Next week, I hope to complete an order for more materials so that we have everything necessary to power and use our electronics. I hope to continue working on the coat rack, finish painting it, and also test the rotation. Lastly, I hope to calibrate the load cells so we can begin adding various weights to the rack.

Doreen’s Status Report for 2/24/24

  • This week, I worked on building the rack. This involved drawing the shapes for the frame on 1/2 inch plywood and cutting them out in the wood shop in TechSpark. Since many pieces needed to be cut, and I did not have training to use the machines in the wood shop, the only tools used were jigsaws and drills, making the process take longer than initially thought.
  • My progress is on schedule according to the timeline. However, with multiple midterms in the upcoming week, it will be difficult to carve out time to finish building the rack. I hope to focus fully on this after my exams on Wednesday if I do not find time at the beginning of the week.
  • For next week, I hope to finish building the rack with my team member so that we can begin adding the electronic components. I also hope to carve out time to learn how to use CorelDRAW to design the gears that we will need to rotate the rack. I also plan on helping write the design report.