Ryan Status Report 11/20

This week I worked on testing, refactoring our code to allow for faster testing and verification, and drafting a user survey to guide some of our design decisions. In addition, I soldered and assembled another “base station.”

(User Survey: https://docs.google.com/forms/d/1YM4OFE08eWwsJJnDHzXr_HDfEiTtsAuOjxWQ_ADB7e0/edit)

The testing I completed involved our first requirement: turning on the lights within 2 seconds of a person entering a section. This ensures the latency from signal detection, through computation, to signaling the lights is fast enough for our user. To capture times, I recorded myself on video and timed from first movement to the lights turning on over five trials. I obtained the following results: the average time is 1.028s, and every trial is below 2s.

Trial 1: 1.20s
Trial 2: 0.81s
Trial 3: 1.11s
Trial 4: 0.97s
Trial 5: 1.05s
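As a quick sanity check, the numbers above can be verified in a couple of lines (this is just arithmetic over the recorded trial times, not part of our system code):

```python
# Verify the mean latency and the under-2-seconds requirement
# from the five recorded trial times.

times = [1.2, 0.81, 1.11, 0.97, 1.05]  # seconds, trials 1-5

mean_latency = sum(times) / len(times)           # 1.028 s
meets_requirement = all(t < 2.0 for t in times)  # every trial under 2 s
```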

To assist with further testing, I refactored our code and began developing a script that adjusts our weights based on sensor values captured and written to a test file, alongside the desired light behavior for each case. The script uses a gradient-descent-like algorithm for adjusting weights and validating defined test cases: weights are slightly modified until (hopefully) the desired light behavior is produced for all test cases.
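The tuning loop looks roughly like the sketch below. The function names, the threshold value, and the update rule are placeholders of my own for illustration, not our actual script:

```python
# Gradient-descent-like weight tuning against recorded test cases.
# Each case holds captured sensor counts and the desired light state.
# All names and constants here are illustrative placeholders.

def predict(weights, counts, threshold=100):
    """Light turns on when the weighted sensor counts cross the threshold."""
    score = weights["pir"] * counts["pir"] + weights["mic"] * counts["mic"]
    return score >= threshold

def tune(weights, cases, step=0.05, max_iters=1000, threshold=100):
    """Nudge weights until every recorded case yields the desired light state."""
    for _ in range(max_iters):
        failures = [c for c in cases
                    if predict(weights, c["counts"], threshold) != c["on"]]
        if not failures:
            return weights  # all test cases pass
        for case in failures:
            # Move the score toward the desired side of the threshold,
            # scaled by each sensor's contribution to that score.
            direction = 1 if case["on"] else -1
            for k in ("pir", "mic"):
                weights[k] += direction * step * case["counts"][k] / 100
    return weights
```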

This coming week I will continue testing, finalize and send out the survey to collect responses, and create another base station.

Malavika Status Report 11/20

This week was mostly spent integrating all parts of our system and performing tests. The functionality of the website is complete, including bidirectional communication, which allows the user to turn one or more of the work zones on, turn off all lights, enable a security-protection mode that turns off the microphones and motion sensors, and control the weighting of the sensors manually.

I added a range input element (a slider) that lets users control the percentage of weight given to PIRs versus microphones on their own end, sending the value to the Raspberry Pi. Diva and I worked together to design the optimal way for the weights to be applied when changed in the system. We decided there will be two base weights (one for the PIR and one for the microphone), each serving as the amount by which a work station's counter increments when that sensor detects a positive value. Multiplying these base weights by the percentages the user sends to the Raspberry Pi when moving the slider changes the increment value; thereby a PIR detection can be “worth more” and advance the counter toward its threshold by a larger value than a microphone detection, should the user specify the percentages as such.
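The increment scheme can be illustrated in a few lines. The base weight values and function names below are assumptions for illustration, not the values our system actually uses:

```python
# Counter increment scheme: two base weights, scaled by the slider
# percentage the user sends from the web app. Values are illustrative.

BASE_PIR_WEIGHT = 2.0   # counter increment per positive PIR reading (assumed)
BASE_MIC_WEIGHT = 1.0   # counter increment per positive mic reading (assumed)

def increment(counter, pir_detected, mic_detected, pir_pct):
    """Advance a work zone's counter by the user-weighted sensor increments.

    pir_pct is the slider value: the share (0-100) of weight given to the
    PIRs; the remainder goes to the microphones.
    """
    if pir_detected:
        counter += BASE_PIR_WEIGHT * (pir_pct / 100)
    if mic_detected:
        counter += BASE_MIC_WEIGHT * ((100 - pir_pct) / 100)
    return counter
```

With the slider at 75% PIR, a simultaneous PIR and mic detection advances the counter by 2.0 × 0.75 + 1.0 × 0.25 = 1.75.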

We also performed preliminary testing with three base stations in the 18-500 lab. I worked on some of the circuitry for disconnecting and attaching the LED lights to the breadboards to be set up as stations in the four work zones we will have (we currently have three). I also ordered another set of PIR sensors as we were missing a fourth one for our final station.

The rest of the week primarily entails testing all of our edge and use cases and working on the final presentation, which I will be presenting. Diva and I spoke with Professor Yu on Wednesday about our testing plans, and he mentioned it would be useful to think about how best to present our testing data. We need to demonstrate the physical range of our sensors, so we plan on having plots that illustrate this, with distance from the center of the PIR in feet on the x-axis and the sensor's value (logical 0 or 1) on the y-axis. We will perform a similar series of tests for the microphones.

We also want to create a user survey to conduct behavioral testing of individuals' preferences and comfort levels with data-gathering sensors such as microphones, and plan on doing this in the upcoming week.

Diva-Oriane Status Report 11/20

I worked on getting two-way communication between the ESPs on the work stations and the Raspberry Pi. The lights are now turned on by a message sent to them over MQTT (via the Mosquitto broker). I integrated the weight messages to and from the web app into the Raspberry Pi's base-station code. When the weight is changed on the sliding bar of the web app, the Raspberry Pi changes the weights it uses for the PIR and mic data. The Raspberry Pi keeps track of its default values in case the web app switches back to automatic mode.

WEIGHT:AUTO/MANUAL:PIR weight:Mic weight

The Raspberry Pi sends its current weights back to the web app when the web app switches back to auto mode. When it is in manual mode, the web app already knows the weight values.
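Handling this message on the Pi side might look like the sketch below. The default values and function names are placeholders, not our real base-station code:

```python
# Parse and build the WEIGHT message exchanged with the web app.
# Format: WEIGHT:<AUTO|MANUAL>:<PIR weight>:<Mic weight>
# Defaults and names here are illustrative placeholders.

DEFAULT_PIR_WEIGHT = 2.0  # assumed defaults the Pi falls back to in auto mode
DEFAULT_MIC_WEIGHT = 1.0

def parse_weight_message(payload):
    """Return (mode, pir_weight, mic_weight) from a WEIGHT message."""
    tag, mode, pir, mic = payload.split(":")
    if tag != "WEIGHT":
        raise ValueError(f"not a WEIGHT message: {payload}")
    if mode.upper() == "AUTO":
        # In auto mode the Pi uses its stored default weights.
        return "AUTO", DEFAULT_PIR_WEIGHT, DEFAULT_MIC_WEIGHT
    return "MANUAL", float(pir), float(mic)

def build_weight_message(mode, pir, mic):
    """Build the reply the Pi publishes when the web app switches to auto."""
    return f"WEIGHT:{mode}:{pir}:{mic}"
```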

We were able to scale up to 3 base stations this week, but we are missing a PIR for the 4th base station.

We started some testing but still have more to complete before the final presentation after Thanksgiving. We ran tests that allowed us to adjust the thresholds for the microphones and the PIRs so they pick up reasonable values (a reasonable level of sensitivity).

The tests that still need to be conducted:

- All combinations of people at the 4 base stations, i.e. people at 1; at 1 & 2; at 1 & 2 & 3; at 1 & 2 & 3 & 4; at 2 & 3; etc.

- No-movement tests

- No-noise tests

Ryan Status Report 11/13

This week I soldered and set up another base station with a PIR sensor and microphone, worked with Diva and Malavika to get messages sent from our web app to the Raspberry Pi base station, and did some initial testing with two base stations.

I tested by doing some work and moving around 0.5m from one base station, and also 1m from that base station. This lets us observe the sensor values both when a person is inside our defined work station and when they are outside it. I wrote the values to text files to allow for backtesting, letting us tweak our weights with immediate feedback on accuracy. For the most part, our data looks promising: we are getting a lot more feedback from the station I am closest to. However, there is some noise from both sensors (false positives), so we will have to decide how to handle this while maintaining our turn-on latency.

This week my focus will be to add one more workstation and do some more testing so that I can work with Diva to optimize our weighting.

Malavika Status Report 11/13/21

This week we had the interim demos and showed the Professors and TAs what we have so far of our MVP. On the webserver side, I was able to illustrate the connection the webserver makes to the Raspberry Pi and also the messages it sends to it. For the later demo, I was able to get bidirectional communication working so the webserver can also read messages sent from the Pi by subscribing to a separate topic especially for incoming communications. The webapp publishes messages to the pir/data/web topic. Interaction can be seen in the images below.

The STATION:ON:1 command was received from the Raspberry Pi — currently, the webapp is simply appending the message to the div by adding the string as an HTML element. Ultimately, these messages will be parsed and this page will change the workzone user graphics accordingly.

Diva and I went into the lab yesterday to finalize our commands between the Pi and the server. We also performed testing for the microphones but could not yet find an appropriate sensitivity. We accounted for all edge cases, such as making sure the PIR was on while the mic was off when moving without making sound (and vice versa).

Diva-Oriane Marty Report 11/13

I was able to get the sensor data sending directly from two microcontrollers to the Raspberry Pi by the demo on Monday. We were also able to scale up to two base stations. Malavika and I worked on sending and receiving multiple kinds of messages between the web app and the Raspberry Pi base station. My role was to integrate the messages into the logic of the Raspberry Pi base station.

The messages we are able to send from the web app to the raspberry pi base station are the following:

STATION:ON/OFF:Station Number

- Station Number: 1, 2, 3, 4, or 5 (all stations)

- ON: force the station to remain on regardless of the sensor data

- OFF: go back to relying only on the sensor data

MICS:ON/OFF

- ON: use mic data

- OFF: ignore mic data

PIRS:ON/OFF

- ON: use PIR data

- OFF: ignore PIR data

WEIGHT:AUTO/MANUAL:PIR weight:Mic weight

- This is the only one not yet implemented.

- It allows the user to set the weights of the PIR and mic data manually if they wish to.

The messages we are able to send from the raspberry pi base station to the web app are the following:

STATION:ON/OFF:Station Number

- Station Number: 1, 2, 3, or 4

- Sent from the Raspberry Pi base station to the web app when a station's on/off state changes.

WEIGHT:AUTO/MANUAL:PIR weight:Mic weight

- This is the only one not yet implemented.

- Sends the current weights of the PIR and mic data, either when prompted or when they change.
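As an illustration, the web-app commands above could be dispatched on the Pi roughly like this. The state layout and handler structure are placeholders of my own, not our actual base-station code:

```python
# Dispatch one "TYPE:ARG[:ARG...]" command from the web app.
# State layout and names are illustrative placeholders.

state = {
    "forced_on": set(),  # station numbers forced on from the web app
    "use_mics": True,
    "use_pirs": True,
}

ALL_STATIONS = {1, 2, 3, 4}

def handle_command(payload):
    """Route one command string to the matching piece of base-station state."""
    parts = payload.split(":")
    kind, action = parts[0], parts[1].upper()
    if kind == "STATION":
        # Station number 5 means "all stations".
        stations = ALL_STATIONS if parts[2] == "5" else {int(parts[2])}
        if action == "ON":
            state["forced_on"] |= stations  # stay on regardless of sensor data
        else:
            state["forced_on"] -= stations  # fall back to sensor control
    elif kind == "MICS":
        state["use_mics"] = action == "ON"
    elif kind == "PIRS":
        state["use_pirs"] = action == "ON"
```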

I was also able to get the lights working by adding a transistor.

Next week the main goals are to scale up to 4 base stations and to figure out the weights and placement of the sensors. We also need to send out the user privacy concern surveys.

Ryan Status Report 11/06

This week I worked on integrating communication between our sensors and the Raspberry Pi with the logic Diva is working on, and assisted Malavika in tying in the web app as well. The integration with Diva is mostly complete, while the integration with Malavika will require some more work. We ran into some bugs but were mainly focused on a proof of concept: verifying that we can in fact send messages from the web app to the Pi, which we have now verified.

I soldered and set up two stations for our interim demo. Each station consists of an ESP8266 (WiFi) microcontroller, a microphone, and a PIR sensor. I programmed the ESP8266 to send its sensor data only when it changes. The idea behind this is to improve latency: the number of messages sent by all the microcontrollers is reduced, and the logic is distributed to the microcontrollers, preventing constant polling in our threads. In addition, no information is lost, because we can assume a value is the same until it changes.
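The send-on-change logic boils down to a few lines. The sketch below is in Python for readability (the actual firmware runs on the ESP8266), and the message format is a placeholder:

```python
# Send-on-change: only emit a message when a sensor's value differs
# from the last one sent. Names and message format are illustrative.

last_sent = {}

def on_reading(sensor_id, value):
    """Return a message to publish, or None if the value is unchanged."""
    if last_sent.get(sensor_id) == value:
        return None                    # unchanged: stay silent, save bandwidth
    last_sent[sensor_id] = value
    return f"{sensor_id}:{value}"      # e.g. published to an MQTT topic
```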

Here is one of the work stations, with our ESP8266 connected to the PIR and microphone sensors.

This console output shows the result of some initial testing with full integration over 90 seconds. In the first output, we were closer to workstation0 and our algorithm computed a score of 133, whereas workstation1 had a score of 87. Next we tested the reverse and got scores of 79 and 140. We still need to test and adjust thresholds; however, it is very promising to see higher values corresponding to the workstation we were closer to.

This was a productive week and we seem to be close to our original schedule. This upcoming week, I plan to set up more work stations (increasing from 2 to 4). In addition, I will work to integrate more functionality from the web app that Malavika is working on. This will involve including more logic for the messages that will be sent from the web app in addition to the logic for sending messages from the Pi to the ESP8266 microcontrollers.

Diva-Oriane Marty Status Report 11/6/2021

This week I worked on integrating the sensor values into the code I wrote last week.

As a group we debugged getting messages sent from the web app to the Raspberry Pi over MQTT (via the Mosquitto broker). After testing the microphones, we decided on the one with a potentiometer that controls the “sound” threshold and outputs a digital signal. It seems sensitive enough for our test case and provides the most reliable data. It also allows us to ensure anonymity and privacy. We are currently able to send the appropriate sensor data to an LED on the board that represents a base station.

For the interim demo, I have a working station that sends signals to the Raspberry Pi, and the Raspberry Pi behaves accordingly. I almost have the sensor data being sent directly from a microcontroller to the Raspberry Pi, and I hope to finalize that tomorrow (Sunday) before the demos on Monday. Next week I hope for us to scale up to at least two base stations, start placing the various sensors around the room, and incorporate the options from the web app more fully.

Malavika Status Report 11/6/21

Continuing from last week's status report, this week I presented the use cases for the website as well as the layout of the graphical controls interface users can interact with to control the lights and the sensors in the system. I showed Professor Yu and our TA the vision I had in mind, which includes four separate switches on the webserver that can turn each of the four lights on or off. The controls page will also contain a switch to override the automatic weighting of the PIR and microphone sensors, letting users turn a knob to specify the relative weights of the sensors when deciding to turn a light on or off.

Professor Yu suggested that when the webserver is not overriding the Raspberry Pi, users should be able to view what the control weight settings currently are in automatic mode to allow them to make a more informed decision when manually setting the sensor weights.

I also established in our weekly meeting that there will be a bidirectional communication channel between the webserver and the Raspberry Pi, which serves as the broker in the Mosquitto communication protocol our system uses.

The rest of the week I went through an MQTT tutorial on communicating from the web server to the Raspberry Pi. I established that the next immediate step was to communicate to the Raspberry Pi, and to worry about the other direction later. I got the dummy website I was building through the tutorial to connect to the Raspberry Pi after debugging with Ryan, Diva, and Professor Mukherjee (see images below). While the webserver can establish a connection with the Pi through the WebSockets protocol on a specific port and subscribe to the pir/data topic, it is unable to actually send messages, which I will debug after the demo. From here it is simply a matter of transferring the functionality from the dummy server to the controls user interface on the actual website and editing the current JavaScript to support this.

For the demo, I will display the website and graphical interface itself as well as its ability to connect to the Raspberry Pi.

Ryan Gess Status Report 10/30

This week I worked with Diva to verify we can receive input data from two different microcontrollers connected to different sensors. We successfully collected data with a microphone sensor and sent it over our communication channel to our Raspberry Pi. (See Diva's report for a photo.)

I also worked on testing the limitations of our communication system, as Professor Tamal brought to my attention that the networking capabilities of a Raspberry Pi are fairly weak. We are able to receive messages when the microcontroller is placed in any part of the room; however, once I had more than two sensors, things broke down and we could not receive any messages. Below is some data I collected with two sensors; the MAC address is the unique identifier, and I also collected the signal value and timestamp.

 

I used this blog http://www.steves-internet-guide.com/multiple-client-connections-python-mqtt/ to try to work through some of these communication issues and try a different approach from what I am currently doing. I still need to do more research, as I haven't found many others reporting this issue online. This week I plan to look into this further and possibly investigate a different protocol, though that would not be ideal, as MQTT integrates nicely with our web app.