Reports

Rip’s Status Report for 2-22

This past week I spent time working on the design of the hardware for our system. We quickly realized that we didn’t want to hack the processors on consumer IoT devices; that would be time consuming, and there was no guarantee we would be able to load our software onto those devices. What we really need is a standardized platform to build all of these devices on. What immediately came to mind, and what we’ll be using, is a Raspberry Pi. Being a single-board computer (SBC), it’s small and capable of running a full Linux kernel, and it is one of the best-documented SBCs on the market.

As far as individual devices go, I believe that most of the devices in our ecosystem can be created using solderable RPi headers, which will house the simple circuits and components we need for the alarm clock, the light, and the sensor device. For the coffee pot, we want to control it with a relay on its power cable: building a whole set of electronics for a coffee pot is not within the scope of this project, and we want to accomplish control of the coffee pot in the simplest way possible. For the MVP, we’d like to have all of this implemented on breadboards, but for the final I’d like to have it all on headers connected to the board.
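To make the relay idea concrete, here is a minimal sketch of switching the coffee pot through a GPIO-driven relay. The pin number is an assumption for illustration, and RPi.GPIO only exists on the Pi itself, so the sketch falls back to a stub elsewhere; this is not a finalized design.

```python
# Sketch: coffee-pot control via a relay wired to a GPIO pin.
# RPi.GPIO is only available on the Pi, so fall back to a stub for dev machines.
try:
    import RPi.GPIO as GPIO
except ImportError:
    class _StubGPIO:
        BCM = OUT = HIGH = LOW = None
        def setmode(self, *args): pass
        def setup(self, *args, **kwargs): pass
        def output(self, *args): pass
    GPIO = _StubGPIO()

RELAY_PIN = 17  # assumption: relay signal wired to BCM pin 17


class Relay:
    """Drives a relay that switches the coffee pot's power cable."""

    def __init__(self, pin=RELAY_PIN):
        self.pin = pin
        self.is_on = False
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pin, GPIO.OUT)

    def set(self, on: bool):
        # Energize or release the relay coil.
        GPIO.output(self.pin, GPIO.HIGH if on else GPIO.LOW)
        self.is_on = on


pot = Relay()
pot.set(True)   # coffee pot on
pot.set(False)  # coffee pot off
```

The shim stays this small precisely because the relay only needs on/off control, which is the whole point of avoiding custom coffee-pot electronics.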

Team Status Report for 2-22

We made a number of important design decisions, and a couple of changes.

  • Sensors: Originally we intended for devices to have both sensing and actuation capabilities, which effectively means we would need to both read from and write to devices. Either we would need a shim to control a premade device, or we would need to build each device from scratch. The first is very difficult, and the second is unnecessarily time consuming. As such, we decided to split the functionality: devices will now be <i>either</i> sensing or actuation. That way we can write a simple shim for actuation devices that just turns them on or off, and buy off-the-shelf sensors that are intended to be read from. This makes the whole system design much simpler without giving up functionality.
  • Protocol: When considering protocols, we weighed Bluetooth Low Energy (BLE) against WiFi. While BLE would allow the system to function without a router, it would introduce further difficulties, the main one being that BLE performs poorly over large distances. Given our use case, devices may be separated by as much as 50 feet (across a house), which would impede functionality. As such, we decided on WiFi.
  • PCB design: We decided against designing our own PCB and custom-making devices. As discussed in the “Sensors” section above, we can achieve the same results with a simple device shim, so we cut PCB design from the plan.
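The sensing/actuation split described above could be sketched as two small device classes. The class and field names here are illustrative assumptions, not a finalized interface:

```python
# Hypothetical sketch of the sensing/actuation split: every device is exactly
# one of the two, so the shim only ever needs "turn on/off" or "read".

class ActuationDevice:
    """Shim over an on/off device (light, alarm, coffee pot relay)."""

    def __init__(self, device_id: str):
        self.device_id = device_id
        self.on = False

    def set_state(self, on: bool):
        # A real shim would toggle a GPIO pin or relay here.
        self.on = on


class SensingDevice:
    """Wrapper over an off-the-shelf sensor that is only ever read from."""

    def __init__(self, device_id: str, read_fn):
        self.device_id = device_id
        self._read = read_fn  # the vendor's sensor driver plugs in here

    def read(self):
        return self._read()


lamp = ActuationDevice("lamp-1")
lamp.set_state(True)

temp = SensingDevice("temp-1", lambda: 21.5)  # stubbed sensor read
reading = temp.read()
```

Because neither class needs the other's capabilities, the shim for actuation devices stays trivial and the sensors need no write path at all.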

We don’t believe these changes will incur any costs, because they are all geared toward simplifying the design while maintaining functionality. As such, the system should be easier to actually implement.

One other thing we’d like to note is that over the past couple weeks, we’ve made a lot of progress as a group on how to properly go through the design process. Previously, we had a backwards approach: we went in with a set of technologies we wanted to use and tried to shape our use case around them. Over the past couple weeks, however, we’ve started to tackle it from the opposite direction. We sat down as a group and defined a couple of use cases for our end product in great detail. Working from there, we rehashed the requirements, comparing them against those from our proposal. Then, using those requirements, we sat down and actually researched different technologies.

Richard’s Status Report for 2-22

This past week, I personally worked on wire-framing the web application. After I drew out what I thought would be the minimum set of pages to make the app usable, my team iterated on the design and we came up with a couple more pages to add. After creating these diagrams on draw.io, I researched which technologies we should use for the web application. We needed something that everybody on the team was at least semi-comfortable using, something with good documentation, and something light-weight enough that we could host the web application locally. After making a combined list of possible options, we settled on React for our front end and Express for our back end. I’ve now started to build out our wireframes in React; the routing works, but the data isn’t getting transferred yet.

I feel like our progress is about a week behind schedule. However, we might be able to get this time back, because our team is no longer designing a PCB, which we had allocated a week to learn and build. To get back on track, I’ll put in extra hours to make sure I meet the web app deadline.

Next week, I plan to flesh out the front end of the web application and start working on the back end, which will just be a set of endpoints my front end can call. This will need to be done alongside the rest of the team, because the back-end models will have to match the models for the rest of the system.

Niko’s Status Report for 2-22

As a note to the reader, this past week we decided as a team to split our proposed combined sensing/smart device into two separate devices. One device will do only sensing, and the other will do only “actions,” such as turning a device on or off.

Another thing to note is that we decided to host the webapp locally. This means that one node in the network will be a “master” and run the webapp. If that device goes down, the network should elect a new master, and that device should then spin up the webapp. As such, all devices need the capability to run the webapp.
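A minimal sketch of the election rule just described: among the nodes still reachable, pick one deterministically (here, the lowest device ID) to host the webapp. The IDs and the discovery mechanism are assumptions for illustration, not a finalized protocol.

```python
# Sketch of "elect a new master": every live node applies the same
# deterministic rule, so they all agree without extra coordination.
def elect_master(live_device_ids):
    """Return the ID of the node that should spin up the webapp."""
    if not live_device_ids:
        raise ValueError("no live devices to elect from")
    return min(live_device_ids)
```

For example, `elect_master(["pi-03", "pi-01", "pi-07"])` picks `"pi-01"`; if that node later goes down, rerunning the rule over the remaining IDs picks the next master.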

This past week I did research into how the interaction layer will be designed. In particular, I put a lot of thought into what data our system will contain, and how we might go about storing it in the system. I came to the following conclusions.

The data the system will need:

  • Per-node sensor data (for sensing nodes)
  • Identifying device information (such as a device id)
  • Identifying smart home network information (for commissioning new devices and for reconnecting when a device goes down)
  • Device list, and how to access each device (e.g. device IP addresses)
  • Code for the webapp
  • User’s settings
  • Defined interactions, and when each interaction was triggered
    • This list of past interactions can be viewed by the user
  • Last known status of each device: for a sensing device, the last reading; for an actuation device, whether it is on or off
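To make the data above concrete, here are hypothetical record shapes for the two pieces all nodes will write. The field names are illustrative, not a finalized schema:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

# Hypothetical shapes for the records listed above.

@dataclass
class DeviceRecord:
    """Identifying info plus last known status for one device."""
    device_id: str
    ip_address: str                    # how other nodes reach this device
    kind: str                          # "sensing" or "actuation"
    last_status: Optional[str] = None  # last reading, or "on"/"off"


@dataclass
class InteractionEvent:
    """One entry in the user-viewable list of past interactions."""
    name: str                          # the user-defined interaction that fired
    triggered_at: float = field(default_factory=time.time)
```

Keeping these records small makes them cheap to replicate, which matters for the shared-data breakdown below.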

In thinking about this data, I came to the conclusion that we can minimize the complexity of our system if we minimize the data that needs to be shared. Here is the breakdown I arrived at:

On each node, modifiable data:

  • Sensor data

On each node, hardcoded:

  • Device ID
  • Code for webapp

Shared across all nodes:

  • Network id
  • Registered devices list
  • User settings
  • Defined interactions / past interactions
  • Last known device status

The biggest source of complexity here is the shared data. However, it can be further simplified, because the majority of the “shared” data is only needed if a device becomes the host of the webapp. The only data that needs to be written by all nodes is the last known device status and past interactions.

My progress is on schedule.

For next week, I plan to research a few different database solutions using the above information as criteria for their effectiveness. For each, I will use our listed use cases and our metrics from the project proposal to evaluate their effectiveness. I hope to select a database technology, and then prototype how it fits together with the rest of the system.

Rip’s Status Report for 2-15

My progress over the last week has been steady research and thinking about how we are going to solve many of the hardware problems in this project. I’ve been out of town this whole past week for the UAA Swimming and Diving Championships at the University of Chicago, and have only had intermittent time to work between swims. I also forgot to start reading The Pentium Chronicles before the deadline, because I didn’t realize I had to finish those sections a week before we discussed them. I will be working hard this week to catch up on the book for the discussions and to get all of the design down in writing.

The first problem I’ve been thinking about is what “devices” to use to demo our project and what our MVP should be. I’ve been writing down lists of home appliances, devices, and kitchen gadgets, thinking about how to turn them into smart devices using a Raspberry Pi and what interactions those devices could have with each other. One idea that occurred to me while writing this is that one or more devices could be sensor devices that drive the system’s interactions with humans; that may be worth exploring further. Without much more thought on that, I’ve narrowed our MVP scope down to a breadboard with “sensors” and “actuators”: LEDs and buttons. I think this is a good decision because it allows us to put two people on the initial interaction layer work for the majority of our first phase. The datastore on the devices is going to be the hardest and most crucial piece of the project, and having two people on it will make problem solving a little easier. I also think that, since our system is device-agnostic, we won’t need specific devices to show an MVP.

The second problem I’ve been thinking about is how we’ll be storing the data. From past work, I’ve learned that key-value stores are much better suited than relational databases to storing sensor or polled data. Because of this, I’m leaning towards using Redis as the datastore on these devices; Niko is also looking into other databases and how they might fit the project. Redis is a big player in log storage and has recently pushed into the IoT field. The problem doesn’t stop with that decision, though, because there are many ways to store data in Redis, and we shouldn’t be hasty in deciding. I’m still split between running Redis as a cluster across all the devices, with the webapp requesting from that cluster every time it needs data, and running a Redis instance on each device and writing our own distributed protocol for the nodes to act as one system. The first option lets us use Redis’s built-in functionality and will take significantly less effort; the second will take significantly more effort but will demonstrate more of our own work. I’m going to have a conversation with our TA about this, because I’d really like to use the built-in functionality and build on top of it, but that might not be considered enough work for a full project. I’m also not 100% sure how much work either design would be, and I have to look into that.
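Either way we go, we’ll need a key layout for the store. Here is one possible Redis key-naming scheme based on the data breakdown from Niko’s report; every name here is an assumption for illustration, and the intended Redis data type for each key is noted in a comment:

```python
# One possible Redis key layout for the smart home store (illustrative only).

def reading_key(device_id: str) -> str:
    """Key for a sensing node's readings."""
    return f"sensor:{device_id}:readings"  # sorted set, scored by timestamp


def status_key(device_id: str) -> str:
    """Key for a device's last known status."""
    return f"device:{device_id}:status"    # string: last reading or "on"/"off"


# Network-wide keys, needed mainly by whichever node hosts the webapp:
SHARED_KEYS = {
    "network": "net:id",            # string: smart home network id
    "devices": "net:devices",       # set of registered device ids
    "settings": "user:settings",    # hash of user settings
    "interactions": "interactions", # list of past interaction events
}
```

Sketching the layout up front should make it easier to compare the cluster option against the per-device option, since the keys that need cross-node writes (status and interactions) are already separated from the ones that don’t.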