Team Status Report for 4/30

This week Ryan and Jake focused on a further examination of the power consumption of each component of our hardware. Using an ammeter, we measured the current draw of the thermal sensor, the PIR sensor, and the esp8266 while taking readings and uploading over wifi, as well as in deep sleep and light sleep. We also attempted to monitor current draw through a typical detection-and-upload cycle by using the oscilloscope to measure the voltage across a shunt resistor in series with the whole circuit. This effort was unsuccessful: we used a very small resistor so that we could keep our original power supply while still keeping every component within its operating voltage window, and the resulting trace on the oscilloscope was far too noisy to draw any conclusions from.
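
To make the noise problem concrete, here is a rough back-of-the-envelope calculation; the resistor value and currents below are illustrative assumptions, not our measurements. With a shunt small enough not to disturb the supply voltage, the signal we are asking the oscilloscope to resolve is only millivolts.

    # Illustrative shunt-resistor arithmetic; the resistor value and currents are
    # assumptions for the sake of the example, not measured values.
    shunt_ohms = 0.1          # hypothetical tiny shunt, chosen to keep the voltage drop negligible
    sleep_current_a = 0.001   # assumed ~1 mA draw while sleeping
    active_current_a = 0.080  # assumed ~80 mA draw while the esp8266 transmits over wifi

    # Voltage across the shunt that the scope has to resolve: V = I * R
    print("sleep: %.2f mV" % (sleep_current_a * shunt_ohms * 1000))    # 0.10 mV
    print("active: %.2f mV" % (active_current_a * shunt_ohms * 1000))  # 8.00 mV

Signals in that range sit near the noise floor of a standard probe setup, which matches what we saw; a larger shunt or a dedicated current-sense amplifier would trade a bigger voltage drop for a cleaner trace.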

For verification and testing of the front end's capacity (i.e., how many concurrent users our website could support), we used an industry testing platform called LoadView, a cloud-based load tester that simulates however many concurrent users you want and applies that load to your website. This is a common industry testing method, covered in Jeff Eppinger's web app class under the topic "Web Page Load Testing." Within our application, the EC2's point of stress is responding to the XML HTTP requests that users' browsers send asking for the latest table occupancy data. What LoadView provides is the ability to easily spin up a set of concurrent browsers that all have our website open and are all sending these requests. Since our original use-case requirement was to support 25 users, we ran a stepped load that gradually built up to 25 concurrent browser sessions to see how load times would react. Our Django server, running under apache2 on an EC2 instance, performed well: we had a 100% success rate on each test, and even at maximum capacity load times remained under 2 seconds, which is satisfactory for our user experience.
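
As a lightweight sanity check alongside LoadView (not a replacement for it), the same kind of polling load can be approximated with a short script. The endpoint URL below is a placeholder, and 25 threads each polling every couple of seconds roughly mirrors the top step of our test.

    # Minimal sketch of 25 simulated users polling the occupancy endpoint; the URL is a placeholder.
    import threading
    import time
    import requests

    URL = "https://example.com/occupancy/"   # placeholder, not our real endpoint
    USERS = 25                               # matches our 25-user use-case requirement
    POLL_SECONDS = 2                         # assumed refresh interval per simulated browser

    def simulated_user(n_requests=10):
        for _ in range(n_requests):
            start = time.time()
            resp = requests.get(URL, timeout=10)
            print("%d in %.2fs" % (resp.status_code, time.time() - start))
            time.sleep(POLL_SECONDS)

    threads = [threading.Thread(target=simulated_user) for _ in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()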

From a hardware perspective, we have a serious risk to our design that might not be resolvable this semester: our battery power tests are failing (for more details, refer to Ryan's Status Report for 4/30). There are two straightforward ways to mitigate this risk by decreasing power consumption enough for the system to stay powered for 107 hours. The first is to order additional hardware that cuts power to the thermal sensor while the processor is in deep sleep (we cannot switch the sensor off from the processor itself). To last 107 hours, our system has to draw at most 11.2 mA on average, and it currently draws 14.76 mA in its most optimized state. However, the thermal sensor is responsible for 5 mA of that draw, so powering it down when the processor enters deep sleep would bring the average to around 9.8 mA, well under the budget. The second option is to order a battery with a larger capacity; our system currently consumes about 7.8 Ah of charge, so any battery larger than that would be sufficient. While both of these options are very feasible, we would not be able to make these adjustments by the end of the semester. Fortunately, the demo is much shorter than 107 hours, so we should have no problem staying powered for its whole duration. Beyond that, all that's left on the hardware side is to fine-tune the occupancy detection algorithm thresholds for Wiegand Gym and potentially find a more elegant way to mount the sensors than tape, and we should be ready!
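
For reference, the arithmetic behind those figures works out as follows; this just reworks the numbers quoted above, and the implied battery capacity is a derived value, not a new measurement.

    # Battery-budget arithmetic using only the figures quoted above.
    target_hours = 107
    budget_ma = 11.2          # maximum allowed average draw to last 107 hours
    measured_ma = 14.76       # current average draw in the most optimized state
    thermal_ma = 5.0          # thermal sensor's share of that draw

    implied_capacity_mah = budget_ma * target_hours      # ~1198 mAh implied by the budget
    after_thermal_cut_ma = measured_ma - thermal_ma      # ~9.8 mA, safely under 11.2 mA
    hours_today = implied_capacity_mah / measured_ma     # ~81 h, i.e. short of the 107 h target now

    print(implied_capacity_mah, after_thermal_cut_ma, round(hours_today, 1))

In other words, cutting the thermal sensor during deep sleep is enough on its own to move us from roughly 81 hours to past the 107-hour mark on the same battery.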

On the frontend, the web app is mostly ready to go, and we plan to migrate it to a mobile-friendly version as well. The planned statistics are implemented, and we also plan to do user testing in the UC before the demo to make sure that the integration between hardware and software is seamless enough not to interrupt the user experience.

Team Status Report for 4/23

From a hardware perspective, we were fortunately able to avoid any design changes, as after many tribulations we integrated the thermal sensor into the design both in hardware and in software. We began battery tests and will have results on Sunday before the final presentation is due. Additionally, we will complete power unit testing of different sections of code on Sunday before the final presentation, which should give us a much clearer picture of how closely our design matches our use case. One big risk is either the power or the battery testing failing; both would indicate that our system consumes more power than expected and that we won't be able to meet our battery-life use case. Should that be the case, we have a couple of risk mitigation plans that we hope will solve the problem. We still haven't introduced specific code optimizations that should significantly reduce power draw, including better use of deep sleep and potentially using light sleep in the few seconds between the processor taking readings from each sensor. Additionally, we should be able to reduce the number of readings required to determine an occupancy change without any major sacrifice in accuracy, which would let us take fewer readings per minute and further reduce power draw while still meeting our use case of accurately updating occupancy status within a minute of it changing.
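
To see why fewer readings per minute translates into meaningful savings, here is a rough duty-cycle estimate; every current and timing below is an illustrative assumption, not a measurement.

    # Illustrative duty-cycle arithmetic; all currents and timings are assumptions.
    active_ma = 70.0            # assumed draw while awake, reading sensors and using wifi
    sleep_ma = 1.0              # assumed draw while in deep/light sleep
    seconds_per_reading = 2.0   # assumed awake time per reading

    def avg_current_ma(readings_per_minute):
        awake_s = readings_per_minute * seconds_per_reading
        asleep_s = 60.0 - awake_s
        return (active_ma * awake_s + sleep_ma * asleep_s) / 60.0

    for rpm in (12, 6, 3):
        print(rpm, "readings/min ->", round(avg_current_ma(rpm), 1), "mA average")

Since nearly all of the charge is spent while awake, halving the reading rate roughly halves the average draw, which is exactly the lever the second mitigation pulls.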

Team Status Report for 4/16

This week we focused on completing a few of the remaining items on our schedule. The first of these was converting the EC2 database to MySQL. For the demo we were running SQLite as a proof of concept, but to make sure we are optimizing for our design requirements, a change to MySQL was preferred. We also deployed an upgraded (though still interim) version of the front-end HTML and Django code to the EC2 so that it runs on our website.
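
On the Django side, the switch is mostly a settings change plus re-running migrations; the sketch below shows the general shape, with the database name, credentials, and host as placeholders rather than our actual configuration.

    # Sketch of the DATABASES entry in settings.py for MySQL; all values are placeholders.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "occupancy_db",     # placeholder database name
            "USER": "django_user",      # placeholder user
            "PASSWORD": "change-me",    # placeholder password
            "HOST": "127.0.0.1",        # MySQL running locally on the EC2
            "PORT": "3306",
        }
    }

After pointing Django at MySQL, running "python manage.py migrate" rebuilds the schema in the new database.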

We additionally developed two of the tests/validations our team had initially planned. The first tracks the AWS -> website capacity and loading times. We are utilizing GTmetrix, a fairly standard speed-test tool that runs multiple tests across various browsers, locations, and connections. We have yet to run this, as Angela is finishing up the finalized front end, which is the element of our pipeline this test is meant to stress. The second test is a fleet of NodeMCUs all publishing JSON packets, which stress-tests the device node -> AWS capacity. This is more of a verification/validation than a test, since there isn't a solid metric for it beyond delay and packet loss.
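
The real fleet test runs on the NodeMCUs themselves, but the same traffic shape can be approximated from a laptop to sanity-check the AWS side. The sketch below uses the paho-mqtt client (1.x-style constructor); the endpoint, topic, certificate paths, fleet size, and payload fields are all placeholders/assumptions.

    # Sketch of a simulated device fleet publishing JSON occupancy packets to AWS IoT over MQTT.
    # Endpoint, topic, certificate paths, fleet size, and payload fields are placeholders.
    import json
    import time
    import paho.mqtt.client as mqtt

    ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"   # placeholder AWS IoT endpoint
    TOPIC = "tables/occupancy"                              # placeholder topic
    NUM_DEVICES = 10                                        # assumed fleet size

    clients = []
    for device_id in range(NUM_DEVICES):
        c = mqtt.Client(client_id="sim-node-%d" % device_id)   # paho-mqtt 1.x-style constructor
        c.tls_set(ca_certs="AmazonRootCA1.pem",                 # placeholder certificate paths
                  certfile="device%d-cert.pem" % device_id,
                  keyfile="device%d-key.pem" % device_id)
        c.connect(ENDPOINT, port=8883)
        c.loop_start()
        clients.append(c)

    # Each simulated node publishes one occupancy packet per second for a minute,
    # while we watch AWS IoT for delay and dropped packets.
    for _ in range(60):
        for device_id, c in enumerate(clients):
            payload = json.dumps({"table_id": device_id, "occupied": True, "ts": time.time()})
            c.publish(TOPIC, payload, qos=1)
        time.sleep(1)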

Last week we mentioned that we needed to convert all devices to access CMU-SECURE through a wifi access point, since CMU-DEVICE was blocking some of the things we were trying to do with NTP. Now that we have confirmed our final demo location as Wiegand Gym, we have concluded that an ethernet-based wifi access point is not viable, as there is no accessible ethernet for our team there. We are going to test an alternative, using a Samsung phone as a wifi repeater, this upcoming week. This is probably our greatest risk-mitigation item at the moment. Our initial demo proved that we can still have a working project without this solution; however, to realize the full vision of our project, it will need to be solved.

 

Team Status Report for 4/10

This week was super busy in terms of preparing for demos, so we are going to break down progress and issues by platform.

AWS IoT – Originally, our design had incoming data going to AWS IoT's internal time-series database, after which the platform's rules engine would use a Lambda rule to pass it along to a MySQL database hosted on Amazon RDS, which Django would access directly. We fully implemented this but kept running into errors integrating the pipeline with Django's models. Django models are intended to be a wrapper that sits over SQL and keeps the programmer completely removed from it, and because we were writing a bunch of SQL by hand as part of the Lambda function, Django really didn't like that. This left us with two options: either abandon Django models and write raw SQL from Python, or move the database onto the EC2 alongside Django. Because as a team we wanted to keep Django for the capabilities of its models, we made the design pivot toward hosting the database with the Django MVC on the EC2. This simplified the AWS IoT portion of our project: we completely cut out the Amazon RDS MySQL server and the Lambda function and replaced them with an AWS IoT rule that sends the data in an HTTPS request to a subdomain of our hosted website.

EC2/Django – This clearly had trickle-down effects on the EC2 Django deployment. For one, we wrote the code that establishes Django models for tables with all their necessary fields (table_id, occupancy status, time of last update, etc.). We didn't just create the model definitions; we also created a migration so that whenever the database is reinitialized or the server reboots, it is pre-populated with Table instances for all the devices we currently support. In terms of data communication from AWS to Django, as mentioned above, AWS IoT sends a POST request to a specific URL; the body is a JSON dictionary containing all the data transferred off the esp8266, plus a key that must match on the server side so nobody can spoof data (a hypothetical sketch of this model and endpoint follows below).
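
As a rough illustration of the shape of this code, here is a hypothetical version of the model and the ingestion view; the field names, shared key, and payload keys are placeholders rather than our exact definitions.

    # Sketch of the Table model and the AWS IoT ingestion endpoint; names and the key are placeholders.
    import json

    from django.db import models
    from django.http import HttpResponseForbidden, JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    SHARED_KEY = "replace-me"   # placeholder for the secret the AWS IoT rule includes

    class Table(models.Model):
        table_id = models.IntegerField(unique=True)
        occupied = models.BooleanField(default=False)
        last_update = models.DateTimeField(auto_now=True)   # "time of last update"

    @csrf_exempt   # AWS IoT cannot supply a CSRF token, so the shared key is the check instead
    def ingest(request):
        data = json.loads(request.body)
        if data.get("key") != SHARED_KEY:
            return HttpResponseForbidden("bad key")          # reject spoofed posts
        table, _ = Table.objects.update_or_create(
            table_id=data["table_id"],
            defaults={"occupied": data["occupied"]},
        )
        return JsonResponse({"ok": True, "table_id": table.table_id})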

Because of the change in overall design, we performed some extensive load testing to ensure it could still meet our project requirements. We converted the domain to point to an Elastic IP to help with load. For testing, we flooded the EC2 with 80 database updates and 100 GET requests within a minute, and it experienced no issues whatsoever, so the change still meets our project requirements.

We additionally did a lot of work on the front end. We built out a basic HTML file to hold table information (id: status) and tied it to a set of JavaScript/jQuery/Ajax functions that automatically refresh the occupancy data from a URL at a set interval. This ties into backend code we wrote in Django that queries the database for all Table model instances, packages them into a dictionary, and sends them back as JSON in response to the browser's XML HTTP request for our demo (a sketch of that view follows below). Although it has yet to be synced with the rest of the Django code on the EC2, we also developed a basic front-end representation of tables and their occupancies.
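
For illustration, the backend half of that refresh loop looks roughly like the following; the view name and URL wiring are assumptions, and it reuses the hypothetical Table model sketched above.

    # Sketch of the polling endpoint the browser's refresh loop calls; names are placeholders.
    from django.http import JsonResponse

    from .models import Table   # the hypothetical Table model sketched in the previous section

    def table_status(request):
        # Package every table's current state into a dict keyed by table_id.
        statuses = {
            str(t.table_id): {
                "occupied": t.occupied,
                "last_update": t.last_update.isoformat(),
            }
            for t in Table.objects.all()
        }
        return JsonResponse(statuses)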

Hardware/esp8266 – We wrote a script to perform basic PIR-based occupancy detection for the demo. It has two versions of detection: one that connects to wifi only when a detection has occurred, as a preliminary power-saving technique, and one that stays connected. This slots into our final demo plan of running two detection algorithms: one that shows purely how well we can detect occupancy, and another that shows our in-practice version designed around saving power by minimizing wifi use. We ran into a new problem on the CMU-DEVICE wifi, where we weren't allowed to use NTP for accurate time synchronization. In our demo testing we found that there isn't a reliable delay between detection and when status updates reach our SQL database. Because of this, and because we want to make sure our esp8266s only operate during our desired running window of 8am-5pm, we want NTP available for a variety of things on our nodes, so we will be pivoting from CMU-DEVICE to a wifi access point that connects to CMU-SECURE.
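
The firmware itself is Arduino code on the esp8266, but purely to illustrate the connect-only-on-detection control flow, here is a Python-style sketch with the hardware and network calls stubbed out; every helper function here is hypothetical.

    # Pseudocode-style sketch of the connect-on-detection loop; all helpers are hypothetical stubs.
    import time

    def read_pir():
        """Stub for reading the PIR sensor's digital output."""
        return False

    def connect_wifi():
        """Stub for joining wifi; only called when there is something to report."""

    def publish_occupancy(occupied):
        """Stub for pushing the occupancy packet up to AWS IoT."""

    def disconnect_wifi():
        """Stub for dropping the connection again to save power."""

    last_state = False
    while True:
        occupied = read_pir()
        if occupied != last_state:       # only touch the radio when the state actually changes
            connect_wifi()
            publish_occupancy(occupied)
            disconnect_wifi()
            last_state = occupied
        time.sleep(5)                    # polling interval; the real script sleeps between readings

The always-connected version is the same loop without the connect/disconnect calls, which is what makes the two easy to compare side by side in the demo.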

 

Team Status Report for 3/27

Our team had to focus more on the ethics assignment due Monday, as well as the ethics lecture on Wednesday. As a team, we have been trying to integrate our individual work, which did not happen last week as we had hoped. Because of this setback, we will have to focus much more on connecting the pipelines between software and hardware, as well as making sure our individual products still work as intended.

The ethics assignment also presented us with other topics to think about in regard to our project, mostly on the user side. How would we prevent people from using our data with ill intent (e.g., knowing the number of students in the UC at a given time)? What would we do about people who don't want their occupancy data collected? These are definitely problems we should confront.

Since we are behind schedule, we hope to catch up a lot more in class the following week, as well as make sure the software and hardware form a working pipeline.

Team Status Report for 3/19

This week we made great strides on the hardware-software interface. All of the team's esp8266s were registered on the CMU-DEVICE wifi. We successfully implemented the esp8266's quick connection to wifi, establishing communication with AWS IoT, and pushing data packets with table occupancy. Code can be found here.

We also deployed our initial Amazon RDS MySQL instance and created the preliminary AWS Lambda rule to push the uploaded data to this MySQL database: code.
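
As a rough sketch of what that Lambda rule's function does, assuming the IoT rule passes the device's JSON packet through as the event argument; the endpoint, credentials, table/column names, and payload fields are placeholders. (As described in the 4/10 report above, this Lambda path was later replaced by a direct HTTPS rule.)

    # Sketch of a Lambda handler writing an AWS IoT payload into the RDS MySQL instance.
    # Host, credentials, table/column names, and payload fields are placeholders.
    import pymysql

    def lambda_handler(event, context):
        # The IoT rule is assumed to pass the device's JSON packet through as the event.
        conn = pymysql.connect(host="placeholder-rds-endpoint",
                               user="placeholder_user",
                               password="placeholder_password",
                               database="occupancy_db")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO table_status (table_id, occupied) VALUES (%s, %s) "
                    "ON DUPLICATE KEY UPDATE occupied = VALUES(occupied)",
                    (event["table_id"], event["occupied"]),
                )
            conn.commit()
        finally:
            conn.close()
        return {"statusCode": 200}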

We are currently on schedule, with the exception of low-power testing and code creation. The major deviation is the need to use the PubSubClient and WiFiClientSecure libraries to upload data securely from the esp8266 without fully knowing what they do under the hood. The risk mitigation for this is that we could rewrite some of the connection and publish code ourselves, since we only need to transfer data in one direction.

Additionally, we ordered the rest of the hardware last week, including switching regulators and a battery charger, and are hopeful that we will be able to get everything assembled soon! Although we would ideally have had this done a few days earlier, we still have plenty of time to run risk mitigation plans, figuring out and purchasing alternative solutions, in case pieces of hardware don't work as expected.

Team Status Report for 2/26

This week, we presented our design review and are currently awaiting feedback. We did not make any changes to the system, only performed more detailed analyses of the frontend and hardware.

Through our preparation for the design review and report, we have not discovered any significant new risks. Overall, we are just trying to properly manage cost and size for the hardware in order to support the 55-hour battery life, as well as making sure the software structure can hold up to the number of tables it has to support. Currently, our biggest power consumer is the PIR sensor, which over 55 hours would account for 85% of the available battery capacity. As a contingency plan in case the battery does not last, we are planning to lower the active time of the PIR sensor, such that the thermal sensor does the initial occupancy sensing and the PIR sensor is then used to confirm occupancy.

Team Status Report for 2/19

This week we did a lot of prep work for getting started on building. We got some power consumption analysis going, as well as further part analysis. For AWS, we got AWS IoT working and connected our esp8266 to AWS IoT Core. We also spent a good portion of the week planning our design presentation. In terms of the front end, we got a simple UI going that displays the needed information on occupancy status in the UC. We have some ongoing conversations about how to store information from our sensors and how to identify each table, but that is not a significant risk.

The biggest risk so far is still the sensors. However, after doing some power consumption analyses, we discovered that we would have to make a few more design changes to the hardware in order for the battery to last 55 hours. This didn't necessarily change the costs, but we did have a more in-depth conversation about which parts we actually need to support this current load. We also determined that no big change to our existing schedule is needed, and we are moving forward at a good pace.

Team Status Report for 2/12

We had our project proposal on Wednesday, which Jake presented. Beyond working on the presentation, we also submitted order requests for hardware parts: two candidate esp8266 boards and a D6T thermal sensor. In addition, we looked into a number of different sensor options, and also looked into solar panels as an alternative means of battery charging. Finally, we set up the software portion, including starting a Github repo and requesting AWS services.

Right now, the main risk we are facing is not being able to detect people with our combination of sensors. Although for the MVP we are planning to use only a button to indicate occupancy, which is not at risk of failing, we understand that this isn't an ideal solution for our use case, and that we'd want to be able to update occupancy status without relying on a user to remember to press a button when they sit down and when they get up to leave. To mitigate this risk, we are already ordering sensors to give ourselves ample time to set them up and test them. Another risk is that we may not be able to meet the 55-hour battery requirement we are setting. We would like the battery to last 55 hours so that the system can operate 9am-8pm Monday through Friday (which we identified as the peak hours on the 2nd floor of the UC) and then charge over the weekend, but we realize that finding a battery with that large a capacity, and then managing it strategically enough to last a week, is a tall task. We have looked into alternative methods of charging, such as solar power, although we are skeptical of how practical these methods may be, so we might have to reframe and change our use case to accommodate this.

Since we are fresh off our project proposal and haven’t done too much deeper design thought and testing yet, we have not yet made any major changes to the design of the system or the schedule. But with the design review on the horizon, expect changes to come!

One exciting point of progress has been integrating a PIR sensor with our Arduino! Although we haven’t done too much work with using the sensor to detect occupancy yet, it is definitely exciting to see everything start to come together.