Krish’s Status Update for 7/11

I was not able to do much work on the project this week due to my other commitments. Next week we should have some data available, so I can start training the machine learning model.

Team Status Update for 7/11

A change in design was made this week to the Camera Node. The LiPo batteries could not supply enough voltage for the camera module, so a 5V boost converter would have been needed. We decided instead to use 5V portable chargers. Cost-wise, this was the cheaper solution; however, we lose the ability to remotely read the battery charge, which introduces the risk of a node running out of battery without us knowing. To combat this, we chose portable batteries with over 5 times the capacity of our LiPo batteries, so our nodes will comfortably meet the 72 hour uptime requirement. We will also notify the site when a node disconnects, and we will monitor how long each node has been active so we can recharge it well before it drains completely.
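
As a sketch of what that activity monitoring might look like, something like the following could track each node's last heartbeat and flag nodes that have gone quiet. This is illustrative only; the node IDs and the timeout value are placeholders, not our final design:

```python
import time

# Sketch only: node IDs and the timeout value are placeholders.
HEARTBEAT_TIMEOUT_S = 60  # mark a node offline after 60 seconds of silence

last_seen = {}  # node id -> unix timestamp of the last heartbeat

def record_heartbeat(node_id):
    """Called whenever a heartbeat arrives from a camera node."""
    last_seen[node_id] = time.time()

def offline_nodes():
    """Return nodes whose last heartbeat is older than the timeout."""
    now = time.time()
    return [nid for nid, ts in last_seen.items()
            if now - ts > HEARTBEAT_TIMEOUT_S]

if __name__ == "__main__":
    record_heartbeat("camera-node-1")
    print(offline_nodes())  # [] while the node is within the timeout window
```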

The Camera Node schedule has been updated and pushed back a week. Dependent tasks have been updated accordingly; there is no major shift in the final end date.

Pablo’s Status Update for 7/11

This week, I finished the single node. I had to overcome two major problems: outdated libraries and incompatible hardware. The libraries for the ArduCam were written for Arduino and had been reworked by someone else to work on the Particle Photon. Unfortunately, the pinout and macros for the Particle Argon are different, which introduced a slew of errors. Luckily, I managed to rework the library much faster than anticipated and got the Camera Node up and running. The next issue came when trying to make the node completely standalone: the LiPo battery did not supply enough voltage. I realized the error only appeared after disconnecting the node from the computer, because it had been drawing extra power over the micro USB. The solution was to move from the LiPo batteries to portable chargers. With this I now have images being captured and sent remotely over TCP! (picture from the Camera Node below!)
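
For illustration, the receiving side of the image transfer could look roughly like the sketch below. The port number and the 4-byte length-prefixed framing are assumptions for the example, not the exact protocol running on the node:

```python
import socket
import struct

HOST, PORT = "0.0.0.0", 5000  # placeholder port

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-transfer")
        buf += chunk
    return buf

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen()
    conn, addr = srv.accept()
    with conn:
        # Assumed framing: 4 bytes of big-endian image length, then the JPEG.
        (length,) = struct.unpack(">I", recv_exact(conn, 4))
        image = recv_exact(conn, length)
        with open("capture.jpg", "wb") as f:
            f.write(image)
        print(f"received {length} bytes from {addr}")
```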

To be quite honest, I wanted to have this done much earlier, but the past week has been rough mentally due to the election. I am slightly behind, but I readjusted our Gantt chart and we are still within the margin we laid out for ourselves. I anticipate being completely done, with the Camera Node network set up in Sorrells, two weeks from now. This coming week, I plan on building the housing for the node and capturing the preliminary data set from my apartment.

Team Status Update for 31/10

For the cloud part of the project, the pipeline works fine. There are no significant risks in terms of machine learning. For the website, the biggest risk is that it may not communicate easily with the central node. However, this risk can easily be mitigated with clear communication between Krish and Arjun, who are in charge of the website and central node communications.

Arjun’s Status Update for 31/10

I worked on fine-tuning the echo server in Python. Pablo and I will need to discuss a protocol for how the Nano and the Particle Argons will communicate with each other. This week was a little extra hectic due to unforeseen circumstances, so I didn’t have time to do as much as I wanted.
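
For reference, a minimal echo server along these lines might look like the following sketch (the port number is a placeholder, not our actual configuration):

```python
import socket

HOST, PORT = "0.0.0.0", 8000  # placeholder port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"echo server listening on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:
                    break  # client closed the connection
                conn.sendall(data)  # echo the bytes straight back
```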

According to the Gantt chart we are a bit behind schedule, but the next major task is for Pablo and me to agree on the protocol between the central and camera nodes, which we can do this week.

Pablo’s Status Update for 31/10

I unfortunately did not get as much done this week as I had anticipated. This week was more hectic than usual, and I realized I had a flaw in my implementation, meaning that I have to refactor my code to correct the error.

I spent a good amount of time on the ethics reading and response this week. I found the fictional Ad Empathy technology design to be scarily realistic, and its probable uses for nefarious purposes hit a little too close to home considering Cambridge Analytica and the upcoming election. I suppose it is Halloween, so a bit of spookiness is to be expected haha. Additionally, thinking about ethics and how it relates to our project was a little sobering; I realized that if this project were to grow into a commercial application, data security would need to be a priority, as this information could be harmful if used for individual tracking.

I am currently a week behind schedule on delivering images good enough to start developing the model. I intend to spend this upcoming week setting up the node in my apartment so that I can start getting preliminary images sent, so Krish can continue working on the model.

Krish’s Status Update for 31/10

There was not much work for me to do this week, since the machine learning data is not yet available. I spent most of my time on getting familiar with AWS and reflecting on the ethics readings.

For AWS, I read up on the S3 product. S3 stands for Simple Storage Service, and we may use it to securely store and access our data in the cloud. It has multiple storage tiers, such as Standard, Infrequent Access, and Glacier, for varying access frequencies and latency requirements. After reading up on all of them, I decided to use a Standard S3 bucket. It offers a durability of 99.999999999% and an availability of 99.99%. The high durability ensures that the data will not be lost, and the high availability ensures the data will be accessible with low enough latency for training a machine learning model. Other than S3, I also plan to use EC2 for this project, but I had researched EC2 before this week, and my reading this week did not reveal anything that changed my plans.
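
As a sketch of how the S3 access might look with boto3, something like the following would cover uploading captures and pulling them back down for training (the bucket and key names are placeholders, not our real resources):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-capstone-data"  # hypothetical bucket name

# Upload a captured image; the Standard storage class is the default.
s3.upload_file("capture.jpg", BUCKET, "raw/capture.jpg")

# Download it later for training the model.
s3.download_file(BUCKET, "raw/capture.jpg", "local_capture.jpg")
```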

I also spent a good amount of time this week on the ethics readings. I found Langdon Winner’s paper particularly interesting. I didn’t know the extent to which simple design choices made by engineers affected society and culture. It has made me more mindful of my project and gotten me to think about unintended consequences that may arise. While working with data containing images of real people, I must be very careful that the data is secure and used only for the purposes of this project. Otherwise, it could be misused and violate people’s privacy.

Arjun’s Status Update for 24/10

This week I finalized my research on Python and implemented a simple echo server using the available libraries. In terms of the Gantt schedule, we are a little behind on Pablo and me meeting to discuss node communication. Krish and I discussed communication with the cloud, and we are leaning towards using HTTP requests.

Krish’s Status Update for 24/10

This week, I tested the pipeline for the machine learning model using some pictures taken on my phone. Until the real data is available, the model cannot be trained. However, in the meantime we can ensure that the code runs smoothly once the data is available.

When I first created the pipeline, I did not have any data to run the code on. This week, running it on the images I took with my phone revealed some bugs. None of the bugs were individually worth mentioning, but I spent a large amount of time debugging. Once the dataset from the cameras is available, we should be able to run it through the model and start with transfer learning.
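
For illustration, the transfer-learning step could look roughly like the sketch below, assuming a torchvision ResNet-18 backbone and a placeholder class count; the actual model and number of classes may end up different:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed backbone and class count, for illustration only.
model = models.resnet18(pretrained=True)
NUM_CLASSES = 2  # placeholder

# Freeze the pretrained backbone so only the new head is trained at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for our labels.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```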

I also read up on WebSockets. Last week, I spoke to Arjun about how the server would communicate with the central node. We had two options: POST requests and WebSockets. POST requests come with problems regarding CSRF tokens. On the other hand, I personally did not know much about sockets. This week, after reading up on sockets and Django Channels, I can implement WebSockets for communication with the central node. This should be faster than POST requests, since the socket stays connected, avoiding per-request overhead. We still have the option of implementing POST requests, but for now I will experiment with making a WebSocket connection.
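
As a rough sketch of what the Channels side might look like, a WebSocket consumer could be structured as below. The class name and message handling are placeholders, not our final design, and a real setup would also register the consumer in the project’s ASGI routing:

```python
from channels.generic.websocket import WebsocketConsumer

class CentralNodeConsumer(WebsocketConsumer):  # hypothetical name
    def connect(self):
        # Accept the WebSocket handshake from the central node.
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        # The central node could push text or binary (image) frames;
        # here we just acknowledge whatever arrives.
        if bytes_data is not None:
            self.send(text_data=f"received {len(bytes_data)} bytes")
        else:
            self.send(text_data=f"received: {text_data}")

    def disconnect(self, close_code):
        # Clean-up hook for when the central node drops the connection.
        pass
```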

Team Status Update for 17/10

One risk with the website backend is that it might not be compatible with our TCP server. In most Django projects, both the client and the server are handled by Django. In our case, however, the client is our central node, for which we will write the communication protocol from scratch. To mitigate this risk, Krish and Arjun will have to be clear on what the central node sends and what the AWS server receives. This also raises a question about the implementation of the central node code, since HTTP requests to Django normally require a CSRF token, which Krish and Arjun will discuss.
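
One possible mitigation we could discuss is exempting the machine-to-machine endpoint from CSRF checks, since the central node is not a browser client. A rough sketch of such a view (the view name and response fields are placeholders):

```python
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # skip the CSRF check for this non-browser endpoint
def node_upload(request):  # hypothetical view name
    if request.method == "POST":
        # request.body would carry the central node's payload.
        return JsonResponse({"status": "ok", "bytes": len(request.body)})
    return JsonResponse({"status": "method not allowed"}, status=405)
```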