Reports

Richard’s Status Report for 4-12

This past week, we demoed our project. We received valuable feedback from the instructors afterward, and we will use it to fuel our last sprint.

In the big picture, the project has three parts. I designed my webapp so that Niko can implement whatever he needs very easily: since I wrote the backend in Python, integrating with Niko’s code is just a matter of importing his library and calling his functions.

As you may have seen during the demo, the three parts of the project are still quite a ways from being fully integrated. This past week, I focused on reading Niko’s interaction layer code and figuring out how the functions he wrote should be invoked in the Flask backend. This was tedious, because the relevant code was dispersed across quite a few files. From here, all I have left to do is polish the frontend’s styling and test the webapp for bugs. I will also help Niko if he gets stuck, and Rip if he needs a hand with his hardware web application.

For the next sprint, we will all have to work pretty hard. We want to have a successful project, and we’ll try our best to achieve that. To stay on schedule, I have to monitor Niko’s progress on filling out the backend with his function invocations and jump in if he has too much on his plate.

Richard’s Status Report for 4-5

This week, I set up the backend for the webapp. Originally, I had wanted to use Express.js as the backend framework because of how lightweight it is, and because we all have a little experience with it. However, this past week we decided it would be better to switch to a Python Flask backend.

This decision was made with ease of integration in mind. Niko had already written a significant portion of the interaction layer code in Python, and porting it to JavaScript would have been a ton of extra work with no added benefit. At the time of the decision, the backend hadn’t been written or connected to the frontend yet, so switching frameworks didn’t cost us any work or time.

This week, I set up the Flask backend and connected it to the frontend so that Niko could implement the API using the library he wrote. I tested the backend/frontend connection by having the API return “dummy” data. I verified the whole webapp by deploying it locally and checking that every page rendered correctly and displayed the dummy data.
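The dummy-data setup can be sketched roughly like this. The route name and payload shape are illustrative placeholders, not our real API; the idea is that Niko later swaps the hardcoded returns for calls into his interaction library.

```python
# app.py -- minimal sketch of the Flask backend serving dummy data.
from flask import Flask, jsonify

app = Flask(__name__)

# Hardcoded "fake" data so the frontend can be exercised before the
# interaction layer is wired in.
DUMMY_DEVICES = [
    {"serial": "node-001", "type": "thermostat", "online": True},
    {"serial": "node-002", "type": "lamp", "online": False},
]

@app.route("/api/devices")
def list_devices():
    # Later: replace with a call into the interaction-layer library.
    return jsonify(DUMMY_DEVICES)

if __name__ == "__main__":
    app.run(debug=True)
```

One nice side effect of this setup is that Flask’s built-in test client can hit the endpoints without starting a server, which makes checking the JSON responses quick.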

I’m on schedule, and to stay that way I’ll have to keep working diligently. I’ll also help Rip a bit with his hardware web application, since I have a little more background in this area than he does.

Niko’s Status Report for 3-29

This past week I thought a lot about node setup and the relationships between all of the moving parts in the interaction layer.

I will begin by discussing node setup. While we don’t plan to incorporate new-device commissioning into our project, we do need some way of bootstrapping a node, at least for our own development. To that end, I wrote some bootstrapping scripts to obtain the appropriate code, set up config files, install the necessary libraries, and set up the database.

I also spent some time designing the database and its schema, which can be found in more detail here (note that, while unlikely, this link could change in the future; if so, see the readme here). Most importantly, I defined what the tables will look like and what datatypes will exist in each table.
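To give a flavor of what the bootstrapping step produces, here is an illustrative subset of what the node database setup could look like. The table and column names below are my guesses at the shape of such a schema, not the authoritative definition in the linked doc.

```python
# bootstrap_db.py -- illustrative subset of a node database setup.
# Table/column names are hypothetical, not the real schema.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS config (
    key   TEXT PRIMARY KEY,
    value TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS sensor_data (
    node_serial TEXT    NOT NULL,
    sensor      TEXT    NOT NULL,
    ts          INTEGER NOT NULL,  -- unix timestamp
    value       REAL    NOT NULL
);
CREATE TABLE IF NOT EXISTS interactions (
    id      INTEGER PRIMARY KEY,
    trigger TEXT NOT NULL,  -- e.g. "node-002/temp > 30"
    action  TEXT NOT NULL   -- e.g. "node-001/turn_on"
);
"""

def init_db(path=":memory:"):
    """Create (or open) the node database and ensure the tables exist."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

A bootstrap script would call `init_db` with a real file path so the node process and master both find the same database.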

With regards to the master process, I wrote a preliminary Python executable that checks in a loop whether the broker and webapp are running, and starts them if not. While I think the master may end up having to do a few more things, I think that for the most part this will be its sole purpose.

As for the node process, I spent some time debating the merits of implementing it in C/C++ vs. Python. This was a difficult decision because the node process is where the bulk of the actual interaction logic will live. The main problem with Python is that it is not particularly good at parallel programming. While constructs for concurrent execution exist (i.e. threads), each thread must acquire Python’s global interpreter lock (GIL), which serializes execution. Processes could be used instead, but they are a much more heavyweight alternative for a problem where each unit of work is small.

Since most of the node’s communications will go through the MQTT broker over the network (an inherently asynchronous operation), bottlenecking the system by serializing execution seems at first glance to be a mistake, and points toward C as the better solution. That being said, I believe we can get around this problem and still use Python. As long as we keep each thread very lightweight and limit how long each thread can run (i.e., limit blocking operations), it should be no problem if execution is effectively serialized. This lets us take advantage of Python’s very powerful facilities and eliminate the complexity of C.
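The “many lightweight threads” idea can be sketched with nothing but the standard library. Here a pool of handler threads drains a queue of broker messages, with each handler doing only a short, non-blocking unit of work, so the GIL’s serialization doesn’t hurt I/O-bound throughput. The message contents are made up for illustration:

```python
# Sketch: lightweight worker threads draining a queue of (fake)
# broker messages. Each unit of work is short and non-blocking.
import queue
import threading

messages = queue.Queue()
results = []
results_lock = threading.Lock()

def handler():
    while True:
        msg = messages.get()
        if msg is None:           # sentinel: shut this worker down
            break
        # Short CPU work only -- no long blocking operations here.
        with results_lock:
            results.append(msg.upper())
        messages.task_done()

threads = [threading.Thread(target=handler) for _ in range(4)]
for t in threads:
    t.start()
```

Because each handler holds the GIL only briefly, the effective serialization costs us little; the expensive waits (network I/O to the broker) release the GIL anyway.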

One thing that was brought up in our SOW review was that we should redefine our latency requirements. As we discussed, it won’t really mean anything to define latency requirements between our nodes, since they are no longer on the same network and as such are subject to unpredictable (and out-of-our-control) delays. However, we do have some level of control over the on-device latency. While it’s true that virtualized hardware such as AWS EC2 instances doesn’t guarantee uninterrupted execution (the hypervisor can interleave the execution of different machines), we believe this effect will be far less noticeable than network latency.

After thinking about it a bit, I decided the simplest way to measure this on-device latency is as follows: when a relevant piece of data is received from the broker (i.e. something that should trigger an interaction), the node will write a timestamp to a log file. When that piece of data is finally acted upon, the node will write a second timestamp to the log file. The difference between the two timestamps gives the on-device latency.
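The measurement scheme above is easy to sketch. For simplicity this version logs to an in-memory list rather than a file, and the event names and data ID are made up:

```python
# Sketch of the two-timestamp latency measurement. In the real node
# the log would be a file; a list stands in for it here.
import time

def log_event(log, event, data_id):
    """Record '<event> <data_id> <timestamp>' using a monotonic clock
    (immune to wall-clock adjustments)."""
    log.append((event, data_id, time.monotonic_ns()))

def on_device_latency_ms(log, data_id):
    """Milliseconds between the 'received' and 'acted' entries
    for one piece of data."""
    times = {event: ts for event, did, ts in log if did == data_id}
    return (times["acted"] - times["received"]) / 1e6
```

Using a monotonic clock matters here: both timestamps come from the same device, so we never need wall-clock time, and monotonic time can’t jump backward under NTP corrections.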

Another thing that I worked on was defining more formally how all the moving parts in the interaction layer interact. See the below diagram for more information:

Moving forward, I have a few goals for the upcoming week.

  • Latency: I would like to define an on-device latency requirement that is reasonable and in line with the initial (and modified) goals of this project.
  • APIs:
    • Work with Rip to define how my layer will interact with his hardware. Currently we are planning on having Rip implement a “hardware” library with a simple API that I can call to interact with the “hardware”. This would include functions such as “read_sensor_data” and “turn_on”, etc. I would like to iron out this interface with him by next weekend.
    • Work with Richard to interface with the webapp. As I currently understand it, the webapp will need to publish config changes, read existing config data, and request sensor data from other nodes. While I plan for all this functionality to be facilitated by the broker, I would like to implement a simple library of helper functions to mask this fact from the webapp. Ideally, Richard will be able to call functions such as get_data(node_serial, time_start, time_end) or publish_config_change(config). I would also like to iron out this API with him by this weekend, even if the library itself isn’t finished.
  • Node process: I would like a simple version of the node process done by this weekend. I think this subset of the total functionality is sufficient for the midsemester demo. It should function with hardcoded values in the database and mock versions of the hardware library / webapp to work against. This process should be able to:
    • Read config / interaction data from the database
    • Publish data to the broker
    • Receive data from the broker and act upon it (do interactions)

Team Status Report for 3-29

This past week, we had to redirect our project because of the novel coronavirus. As a team, we discussed and decided how our project needed to change now that we cannot meet in person.

The web application will not need to change much, if at all. The front-end remains essentially unchanged. The back-end now needs to make remote calls to the smart devices over the internet, instead of over the local network as we had planned at the beginning of the project.

Obviously, the hardware layer will have to change quite a bit. Essentially, we’ll emulate the hardware layer in software. We considered a few ways to do this, including QEMU and a second web application. Ultimately, QEMU introduced unnecessary complexity into the project, so we are going with the secondary web application. This was a difficult decision to make, but ultimately we don’t have many other options. Also, once this emulator is built, it can serve as a good testbed for the system if we want to build new features.

The interaction layer will also have to change significantly. The interaction layer, which acts as a translator between the hardware (now emulated in software) and the webapp, will have to be mounted on emulated hardware instead of real hardware. We don’t have experience with this, so we don’t know how difficult it could turn out to be. It can’t be stressed enough how important communication is for the development of the interaction layer: it talks to both the webapp and the hardware layer, so there are twice the opportunities for misunderstandings and miscommunication.

Here is an updated Gantt chart for the rest of the semester:

Richard’s Status Report for 3-29

This week, we refocused our project. After meeting with the professor, we decided that the best way forward is to continue working on the project as we had planned before COVID-19, treating the internet as our local network.

My portion of the project, the webapp, was the least affected by this. As a result, I was able to write a significant amount of code this week. Specifically, I’ve added pages for individual interactions, forms for new devices, and a form for new interactions. I’ve also added links from the home page to a few other pages. This means I’ve just about wrapped up the frontend of the React webapp and started planning out the backend with Niko.

I’m on track with respect to our updated schedule, and to stay that way I will have to keep working diligently.

Rip’s Status Report 3-28

This past week I thought a little more about what the interface between the interaction layer and the hardware layer will look like. I started with a few assumptions:

  • The devices running the interaction layer will need to register their device type and their functionalities
  • The interaction layer will need to access and control the device hardware
  • The device should be identifiable at the hardware layer, so that the interaction layer can self-identify if the power cuts out or the interaction layer gets into a bad state

From this I thought about how the device would be commissioned. I want setup to use standard interfaces, so a JSON setup file will be sent from the interaction layer on the device to the hardware layer. This will tell the hardware layer everything about the physical setup: where sensors are plugged in, how they’re interfaced with, etc.
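A hypothetical example of what that setup file could contain is shown below. Every field name, pin number, and device name here is made up for illustration; the real profile format hasn’t been designed yet:

```python
# Illustrative JSON setup profile the interaction layer might send to
# the hardware layer. All field names and values are hypothetical.
import json

profile = {
    "device_id": "node-001",
    "sensors": [
        {"name": "temp0", "kind": "thermistor", "pin": 4, "interface": "adc"},
        {"name": "motion0", "kind": "pir", "pin": 17, "interface": "gpio"},
    ],
    "actuators": [
        {"name": "relay0", "kind": "relay", "pin": 22, "interface": "gpio"},
    ],
}

setup_message = json.dumps(profile)  # what would actually be sent
```

Using JSON keeps the interface standard on both sides: the hardware layer just parses the message and needs no compile-time knowledge of the device.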

From there the hardware layer will store that profile and offer standard interfaces for each sensor or actuator. For example, there will be a read_temp function for a thermistor. I think this type of dynamic interface will be really intuitive for the interaction layer to use, and will easily scale to many devices, whether in hardware or in a software domain like this project.
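One way such a dynamic interface could be generated from the profile is sketched below. The class name, simplified profile shape, and fake readings are all assumptions for illustration; a real implementation would read actual (or emulated) hardware instead of a dictionary:

```python
# Sketch: the hardware layer stores a profile and exposes a
# read_<sensor>() function for each registered sensor.
class HardwareLayer:
    def __init__(self, profile, fake_readings):
        self._readings = fake_readings  # stand-in for real sensor access
        for sensor in profile["sensors"]:
            # e.g. a sensor named "temp" yields a read_temp() method
            setattr(self, f"read_{sensor}", self._make_reader(sensor))

    def _make_reader(self, sensor):
        def read():
            return self._readings[sensor]
        return read
```

The interaction layer then calls `hw.read_temp()` without caring whether a thermistor or an emulator sits underneath, which is what makes the same interface work in both the hardware and software domains.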

I want to start working on this layer this week, and create more block diagrams to explain it more specifically. I think this online system can be used as a testbed if the project ever gets put onto actual hardware.

Rip’s Status Report 3-21

We’ve discussed how this project is going to progress, and I’m confident in the outcome. I’ll be creating a hardware emulator using React and Node.js. The interaction layer will interact with it through the same interface I had planned for the original hardware. I haven’t designed that interface yet, but I’d like to start experimenting with different ideas.

This is a diagram of the original vs. new hardware layer. It summarizes the changes to the hardware layer better than I can explain them.

We’re still discussing what will happen with the interaction layer, and how we’ll virtualize it. I think we should use QEMU to emulate the RPi hardware and run that in AWS, but that might be overcomplicated, since we are likely doing all of this in a higher-level language. I would be okay with using a few AWS instances and treating each instance as a device, but I like the idea of QEMU and I think Niko can do it.

Richard’s part doesn’t really change at all, which is good because he’s got a pretty good web app going already.

By next week I want to think a little bit more about how this interface is going to work.

Rip’s Status Report for 3-14

We got news that we won’t be going back to physical classes. While this is upsetting, I’m thankful that this project isn’t super dependent on hardware. While we would like to test on real hardware, we can get by without it; I’m not sure what that will look like yet, though. For now, I’m focused on driving back up to PGH and getting all my stuff. I’m going to try to pick up some of our parts too, and I hope I’ll still be able to. I don’t think we need them, but I wanted some of them for personal use.

I’m unsure how the next week will look, and I don’t think the course staff is sure either. I still want to complete this project, so I don’t want to change it too much. We need to sit down and discuss how to do this without being together or having any physical hardware. For now, that’s about all that’s on our plates given the circumstances.

Rip’s Status Report for 3-7

This week I finalized the parts list with my groupmates and ordered the parts. I admittedly didn’t do much else. I’m starting to think a little about how I’m going to interface with the interaction layer, but not in much depth. It’s probably a little late to still be thinking vaguely about that.

Next week we’ll be on spring break so I don’t expect to do much. I’ll be with both of my groupmates over spring break, so if anything comes up we can handle it. We should get our parts in the middle of spring break so I’m excited for that.

Team Status Report for 3-21

This past week our team worked on adapting to the changing situation and the shift to virtual classes and projects. From a team perspective, we worked to establish regular channels of communication, and scheduled recurring Zoom meetings for Monday and Wednesday 11:00–12:30 and Thursday 1:30–2:30. In addition, we began discussing how we can communicate problems, progress, and questions outside of those meetings.

Outside of the team dynamic, we also worked on our statement of work. This document details how our project needs to change so that it can be completed entirely virtually. Due to the rapidly shifting situation and the sporadic shipping of parts, we have decided to (almost) entirely remove hardware from the project. The only exception is that Rip will obtain a few small IoT devices that support developer APIs and use them to generate sensor data sets. From that point forward, we will simulate the entire system (sensors, hardware, network, etc.) in the cloud and serve the sensor data that Rip recorded.

Besides the hardware changes, the webapp and device interactions remain largely the same. The main difference with those is that they will now be hosted on a cloud server, as opposed to on a local IoT device.