We’re on schedule and getting into the swing of testing various sensors against our project requirements. We’re currently focused on narrowing down our sensor selection, having collected more data in Hunt and mapped out space in the D-Level cubicles to mimic the Hunt carrel measurements.
This week we worked in tandem to improve our requirements and begin testing sensors. We checked out Raspberry Pis, ultrasonic sensors, a passive infrared (PIR) sensor, and a pressure sensor from the existing inventory. We also ordered capacitive paint to test a desktop solution for proximity sensing, as an alternative to a ceiling-mounted ultrasonic sensor.
Our proposal presentation debrief surfaced some important questions that I spent a lot of time this week trying to answer:
How do we define accuracy in terms of false positives and negatives?
A false positive (where the seat is occupied but our system says it’s available) is far less acceptable than a false negative (where the seat is available but our system says it’s occupied). I read this article and more on precision and recall to help me understand metrics beyond just accuracy, and I’m currently working on applying this understanding to define more specific requirements (like an F1 score) for our project.
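To make this concrete, a minimal sketch of how we might compute precision, recall, and F1 for our classifier. The counts are hypothetical, and we assume “available” is the positive class, so a false positive is exactly the costly error described above: reporting an occupied seat as available.

```python
# Sketch of precision/recall/F1 for the occupancy classifier.
# Assumption: "available" is the positive class, so a false positive
# means reporting a seat as available when it is actually occupied.

def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical test run: 90 seats correctly reported available,
# 5 falsely reported available (occupied), 10 falsely reported occupied.
p, r, f1 = precision_recall_f1(tp=90, fp=5, fn=10)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```

Because false positives hurt us more than false negatives, we may end up weighting precision more heavily (an F-beta score with beta < 1) rather than the plain F1.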
How do we quantify our power requirements at this early stage, especially if we’re using a power supply?
Our power consumption metric can be defined relative to another familiar machine. For example, we can claim our system will cost less to run than a printer, a TV, or 10 laptop chargers. Electricity cost data is readily available, so with more sensor research under our belt, we can make a confident claim.
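A rough sketch of what such a comparison could look like. Every number here is a placeholder assumption (per-node draw, hub draw, electricity rate) until we have real sensor measurements; the point is only the shape of the model.

```python
# Rough power-cost model. All wattages and the electricity rate are
# assumptions, not measured values.

def annual_cost_usd(power_watts, hours_per_day=24, rate_per_kwh=0.15):
    """Annual electricity cost of a device running hours_per_day."""
    kwh_per_year = power_watts / 1000 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

# Assumed: each sensor node draws ~1.5 W, one hub Pi draws ~3 W,
# and we eventually cover all 139 carrels.
system_watts = 139 * 1.5 + 3
print(f"full system:    {annual_cost_usd(system_watts):.2f} USD/year")
# Compare against one laptop charger left plugged in (~60 W assumed).
print(f"laptop charger: {annual_cost_usd(60):.2f} USD/year")
```

With these placeholder numbers, even the full 139-carrel system lands well under the cost of running 10 laptop chargers around the clock, which is the kind of claim we want to be able to defend with measured data.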
What is the difference between latency and response time? Set specific requirements for each.
I read a few different resources on this and went down a rabbit hole of performance testing, service-level requirements, and user perception of latency. We’ll need to set individual targets for the latency, processing time, and response time we’re aiming for.
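One way to keep the terms straight is to write the budget down explicitly. The numbers below are placeholder targets, not requirements we’ve committed to; the sketch just shows the decomposition we’ll need: user-perceived response time as network latency (both directions) plus server processing time.

```python
# Placeholder timing budget -- all values are assumptions to be
# replaced once we have measurements and firm requirements.

# One-way network latency between sensor hub and server, in seconds.
NETWORK_LATENCY_S = 0.10
# Server-side processing time to classify and update the map.
PROCESSING_TIME_S = 0.25
# User-perceived response time: request out, answer back.
RESPONSE_TIME_S = 2 * NETWORK_LATENCY_S + PROCESSING_TIME_S

assert RESPONSE_TIME_S <= 1.0, "budget: under one second end to end"
print(f"response time budget: {RESPONSE_TIME_S:.2f} s")
```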
Precisely define the MVP: How many carrels are we covering? How will we set it up?
While we defined our MVP as an in-lab mockup of a Hunt carrel, we needed to specify our scale, which we have decided is 3 identical carrels. In Hunt we collected the measurements of what turned out to be 4 slightly different types of carrels, amounting to 139 in total. We’ll likely be able to set up a mock carrel in a D-level cubicle to make testing easier. Ideally, we will move beyond our MVP and build a system that supports at least a dozen carrels.
What is the feasibility of Andrew ID authentication? How would we implement this?
I’ve reached out to CMU Computing Services to understand which authentication implementation would be best and most feasible: a directory lookup, Andrew email authentication, a connection to CMU_Secure, or another option we haven’t yet considered.
Deliverables for next week:
- Come up with some power consumption models
- Get more clarity (and hopefully a definitive answer) on an Andrew ID app auth solution
- Order more sensors and continue sensor range/effectiveness testing
- Design a rough sensor hub schematic
What did you personally accomplish this week on the project?
I debriefed and reflected on the feedback from the presentation. This led to a list of questions that Alisha and I thought over and discussed with TAs and professors, such as: (1) how are we going to improve our requirements, and (2) what steps should we take to incorporate Andrew ID authentication into our project?
We ordered a few parts from Bare Conductive, all related to the capacitive proximity sensor we want to experiment with once they arrive from the UK. We then wired up a circuit to test the sensitivity of the PIR sensor in the lab, measuring its detection range and fine-tuning its sensitivity. We also took a field trip to the D level of Hamerschlag to check out the cubicle space and see whether it would be a good alternative environment for our product, as opposed to the third floor of Hunt.
We also spent time in Hunt measuring the dimensions of all of the different study carrels. It turns out that there are actually 4 different sizes of study carrels on the floor. We counted how many of each there were and measured the distance from the floor to ceiling because we know that the height at which the sensors are mounted on the ceiling will play into how much area they cover below them.
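The height-to-coverage relationship is simple cone geometry, sketched below. Both numbers are assumptions: the 15° half-angle is a ballpark for common ultrasonic modules (datasheets vary), and the 1.5 m sensor-to-desk height is illustrative, not one of our Hunt measurements.

```python
import math

# How much desk area a ceiling-mounted sensor covers, given mounting
# height and beam angle. Both inputs below are assumptions for
# illustration, not measured values.

def coverage_radius_m(height_m, half_angle_deg):
    """Radius of the circle a conical beam covers at desk level."""
    return height_m * math.tan(math.radians(half_angle_deg))

# Assumed: sensor 1.5 m above the desk surface, 15-degree half-angle.
r = coverage_radius_m(1.5, 15)
print(f"coverage radius: {r:.2f} m, area: {math.pi * r**2:.2f} m^2")
```

The takeaway for our mounting decision: coverage grows with the tangent of the beam angle times the mounting height, so the tallest-ceilinged carrel type will give each sensor the widest (and least precise) footprint.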
What deliverables do you hope to complete in the next week?
The capacitive proximity sensors are coming from Europe, so I’m not sure we’ll receive them this coming week. However, we do have an ultrasonic sensor that we plan to test. We’ll also invest a lot more time into fleshing out and finalizing our design document. We plan to weigh the pros and cons of different sensor hub designs and get feedback from TAs and professors so that we know the strongest way forward on that portion of our project.
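Ahead of the ultrasonic test, the core calculation is converting an echo’s round-trip time into a distance and comparing it against an empty-desk baseline. This is a sketch under assumptions: the empty-desk distance, margin, and threshold logic are placeholders we’d tune from real readings.

```python
# Converting an ultrasonic echo time to distance -- the core of the
# occupancy check we plan to test. Baseline distance and margin are
# assumed placeholder values.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C

def echo_to_distance_m(round_trip_s):
    """Distance to the nearest reflector from the echo round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2

def seat_occupied(distance_m, empty_desk_distance_m=1.5, margin_m=0.2):
    """Flag occupancy when the reading is well short of the empty-desk baseline."""
    return distance_m < empty_desk_distance_m - margin_m

# A 6 ms round trip is about 1.03 m: something (a person or a bag)
# is in the beam, well above the assumed 1.5 m empty-desk reading.
d = echo_to_distance_m(0.006)
print(f"{d:.2f} m -> occupied={seat_occupied(d)}")
```

Notably, this threshold approach reacts to any reflector in the beam, which is exactly why it may help with the “object on the desk, no person” half of our detection problem.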
Risks + Concerns
We’re concerned that we may not find sensors that can successfully solve both parts of our problem: detecting a person at a carrel and detecting an object left on the carrel desk when no person is present. Hopefully, by spending a significant amount of time on research and testing upfront, we can home in on an effective solution. We’ve also considered various locations to mount the sensors (e.g., on the ceiling, under the desk, or on the desktop).
An important part of our project is creating a solution that doesn’t rely on computer vision or mounted cameras. In terms of project scope, we aren’t well-versed in computer vision and wanted a hardware solution. Additionally, we don’t want to mount cameras above study spaces (and laptops!), as we feel it’s an unnecessary intrusion on privacy. However, CV can serve as our contingency plan if all else fails (which it won’t!!).
No changes to our design or schedule as of now.