Last week, in preparation for the demo, I was able to verify that the card detection works on dummy data. Dummy data here refers to images of the card captured outside the exact deployment setting. Through this, I was able to verify that the model's memorization capability is good. While testing on the dummy data involved some variables, such as different lighting and card positions, I'm confident that we will have more certainty and less variability when using the real data. Since the card's position, the camera's position, and the lighting will all be fixed, there will be less variability in the training data, which will let the model effectively overfit on the data and memorize it.
This week, I primarily worked on setting up the Raspberry Pi. While SSHing into the Pi on my room's network worked, I'm still struggling to SSH into it on the school's network. I tried several different methods but found it difficult to make any of them work. I was pretty persistent in trying to make it work in a headless setting, but I figured it would be better, and definitely more intuitive, to connect a monitor and edit the configuration file while on the school network. As such, I ordered an HDMI to micro-HDMI cable that will let us connect a monitor to the Raspberry Pi.
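For reference, one standard headless setup on Raspberry Pi OS (Bullseye and earlier) is to place an empty file named `ssh` and a `wpa_supplicant.conf` in the SD card's boot partition, which Raspberry Pi OS picks up on first boot. The sketch below assumes the school network is a WPA2-Enterprise network using PEAP/MSCHAPv2 (eduroam-style); the SSID and credentials are placeholders, and the school may additionally require device registration, which no config file can get around:

```
# /boot/wpa_supplicant.conf -- placed in the boot partition of the SD card.
# An empty file named "ssh" in the same partition enables the SSH server.
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    # Placeholder SSID and credentials for the school's network.
    ssid="SchoolNetworkSSID"
    # WPA2-Enterprise, common on campus networks.
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="username"
    password="password"
    phase2="auth=MSCHAPV2"
}
```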
I'm on track on the CV part of our machine, but a little behind on integrating all the parts together.
In the coming week, my goals are to make the model work in the deployment setting, get motion detection working, and connect everything together.
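For the motion detection piece, a minimal sketch of one common approach (frame differencing with OpenCV) is below; the camera index and the motion threshold are assumptions to tune on the real setup:

```python
import cv2

# Open the camera (device 0 is an assumption; on the Pi this may differ).
cap = cv2.VideoCapture(0)

prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Grayscale + blur to suppress sensor noise before differencing.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev_gray is None:
        prev_gray = gray
        continue
    # Pixels that changed between consecutive frames indicate motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Changed-pixel count threshold is a placeholder to tune.
    if cv2.countNonZero(thresh) > 5000:
        print("motion detected")
    prev_gray = gray

cap.release()
```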
Verification:
I will need to verify that the card detection achieves 99.9% accuracy; this is imperative for seamless gameplay. While it achieves good accuracy on the dummy data, I need to ensure it also does so on the actual data the model will see in deployment.
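A minimal sketch of how that check could look is below; `detect_card` and the test-set layout are placeholders for our actual model and labeled deployment data (here, each image's true label is assumed to be its parent directory's name):

```python
from pathlib import Path
import cv2

def evaluate(test_dir: str, detect_card) -> float:
    """Fraction of labeled test images the detector classifies correctly."""
    correct = total = 0
    for img_path in Path(test_dir).glob("*/*.jpg"):
        image = cv2.imread(str(img_path))
        # detect_card is assumed to return the predicted card label.
        predicted = detect_card(image)
        correct += int(predicted == img_path.parent.name)
        total += 1
    return correct / total if total else 0.0

# Usage (paths and model are placeholders):
# accuracy = evaluate("data/deployment_test", detect_card)
# assert accuracy >= 0.999, f"accuracy {accuracy:.4f} below the 99.9% target"
```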