Krish’s Status Update for 31/10

There was not much work for me to do this week, since the machine learning data is not yet available. I spent most of my time getting familiar with AWS and reflecting on the ethics readings.

For AWS, I read up on the S3 product. S3 stands for Simple Storage Service, and we may use it to securely store and access our data in the cloud. It has multiple tiers, such as Standard, Infrequent Access, and Glacier, for varying access frequencies and latency requirements. After reading up on all of them, I have decided to use a Standard S3 bucket. It offers a durability of 99.999999999% and an availability of 99.99%. The high durability ensures that the data will not be lost, and the high availability ensures the data will be reachable whenever we need it to train the machine learning model. Other than S3, I also plan to use EC2 for this project, but I had already researched EC2 before this week, and nothing I found this week changed my plans.
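As a rough sketch of how we might interact with the bucket from Python using boto3 (the bucket name and object keys below are placeholders, not our final naming scheme):

```python
import boto3

# Create an S3 client; credentials come from the usual AWS
# configuration (environment variables or ~/.aws/credentials).
s3 = boto3.client("s3")

BUCKET = "our-training-data"  # placeholder bucket name

# Upload a local image; Standard is the default storage class,
# but it can be set explicitly.
s3.upload_file(
    "images/node1_0001.jpg",               # local file path
    BUCKET,
    "raw/node1_0001.jpg",                  # object key in the bucket
    ExtraArgs={"StorageClass": "STANDARD"},
)

# Download the object back when it is needed for training.
s3.download_file(BUCKET, "raw/node1_0001.jpg", "local_copy.jpg")
```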

I also spent a good amount of time this week on the ethics readings. I found Langdon Winner’s paper particularly interesting. I had not realized the extent to which simple design choices made by engineers affect society and culture. It has made me more mindful of my project and prompted me to think about unintended consequences that may arise. While working with data containing images of real people, I must be very careful that the data is secure and used only for the purposes of this project. Otherwise, it could be misused and violate people’s privacy.

Arjun’s Status Report for 24/10

This week I finalized my research on Python and implemented a simple echo server using the standard libraries available. In terms of the Gantt schedule, we are a little behind on the meeting between Pablo and Arjun about node communication. Krish and Arjun discussed communication with the cloud, and are leaning towards using HTTP requests.
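For reference, a minimal echo server along these lines can be built with Python’s standard socket library (the host and port below are placeholders):

```python
import socket

HOST, PORT = "0.0.0.0", 5000  # placeholder address and port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    conn, addr = server.accept()          # wait for one client
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:                  # client closed the connection
                break
            conn.sendall(data)            # echo the bytes back
```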

Pablo’s Status Update for 24/10

This week I worked on setting up more tools on the Particle Argon to facilitate sending images over Wi-Fi. I am slightly behind my internal schedule, but am within the schedule set on the Gantt chart. To fully catch up, I need to meet with Arjun to discuss the Wi-Fi adapter and transfer protocol.

Krish’s Status Update for 24/10

This week, I tested the pipeline for the machine learning model using some pictures taken on my phone. Until the real data is available, the model cannot be trained. However, in the meantime we can ensure that the code will run smoothly when the data is available.

When I first created the pipeline, I did not have any data to run the code on. Running it this week on the images from my phone revealed several bugs. None of the bugs were individually worth mentioning, but I spent a large amount of time debugging. Once the dataset from the cameras is available, we should be able to run the images through the model and start with transfer learning.
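To give a sense of what the transfer learning step could look like, here is a minimal sketch. We have not committed to a framework, so Keras and a MobileNetV2 base are assumptions purely for illustration, and the class count is a placeholder:

```python
import tensorflow as tf

NUM_CLASSES = 2  # placeholder; depends on our final labels

# Pretrained base; freeze it so only the new head trains at first.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=5)  # once the camera dataset exists
```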

I also read up on WebSockets. Last week, I spoke to Arjun about how the server would communicate with the central node. We had two options: POST requests and WebSockets. POST requests come with complications around CSRF tokens; on the other hand, I personally did not know much about sockets. This week, after reading up on sockets and Django Channels, I am in a position to implement WebSockets for communication with the central node. This should be faster than POST requests, since the socket stays connected and avoids the overhead of establishing a new connection for every message. We still have the option of implementing POST requests, but for now I will experiment with a WebSocket connection.
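As a starting point, a Django Channels consumer for the central node connection might look roughly like this (the class and module names are placeholders, not decided yet):

```python
# consumers.py (module name is an assumption)
from channels.generic.websocket import WebsocketConsumer

class CentralNodeConsumer(WebsocketConsumer):
    """Handles the persistent connection from the central node."""

    def connect(self):
        self.accept()  # complete the WebSocket handshake

    def receive(self, text_data=None, bytes_data=None):
        # Placeholder: acknowledge whatever the central node sends.
        self.send(text_data="ack")
```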

Team Status Update for 17/10

One risk with the website backend is that it might not be compatible with our TCP server. In most Django projects, both the client and the server are handled by Django; in our case, however, the client is our central node, for which we will write the communication protocol from scratch. To mitigate this risk, Krish and Arjun will need to agree precisely on what the central node sends and what the AWS server expects to receive. This also raises a question about how to implement the central node code, since it would need to handle the CSRF token in HTTP requests; Krish and Arjun will discuss this.

Pablo’s Status Update for 17/10

I worked on setting up a prototype node and currently have one set up that is self-powered. I also set up SPI between the Particle Argon and the camera so that they can properly send data back and forth; it should be very soon that I can send a request and then receive and display an image.

I am currently at the same point in the schedule as I was last week, and am focusing on completing more tasks next weekend since my mini ends this week. With my mini out of the way, I should have an additional three hours per week for further tasks. In the coming weeks I plan on building out the functionality to complete a fully functional image node.

Arjun’s Status Report for 17/10

I have been researching the possibility of switching to a Python implementation for the central node code instead of a C implementation. The considerations driving this are Krish’s mention of POST requests with a CSRF token for the website and communication with AWS, as well as image stitching and other image preprocessing. I have been researching Python’s libraries for sockets, requests, and threading to see whether they can offer the same performance advantages a C implementation would have, and to get familiar with them, since I am used to C for this type of programming. I am also in the middle of setting up the Jetson Nano properly with its OS and Wi-Fi adapter.
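To get a feel for the socket and threading libraries together, a thread-per-connection server in Python looks roughly like the sketch below (the handler logic, host, and port are placeholders; the real central node would buffer and preprocess image data):

```python
import socket
import threading

def handle_node(conn, addr):
    """Placeholder handler for one node's connection."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # real code would buffer and preprocess image chunks here

def serve(host="0.0.0.0", port=5000):  # placeholder address and port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, addr = server.accept()
            # one thread per connection keeps a slow sender
            # from blocking the others
            threading.Thread(
                target=handle_node, args=(conn, addr), daemon=True
            ).start()
```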

Krish’s Status Update for 17/10

According to the schedule, there is not much to be done for the machine learning aspect of the project until we are able to collect data. In the meantime, I worked on the website interface of the project. I set up the Django server, which communicates with both the central node and the users. As of now, there is no real information to transfer, so I used dummy data.
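For example, a placeholder Django view serving dummy data can be as simple as the following (the view name and payload fields are hypothetical until real data exists):

```python
# views.py -- placeholder endpoint until real data exists
from django.http import JsonResponse

def dashboard_data(request):  # hypothetical view name
    # Dummy payload standing in for data from the central node.
    return JsonResponse({"nodes": [], "last_image": None})
```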

I spent the majority of my time this week researching the CSRF token. CSRF stands for cross-site request forgery. It occurs when a malicious website sends an HTTP POST request from an unknowing user’s computer to another website. Since it is a POST request, it can change state on the other website’s server. The consequences range from posting on social media on the user’s behalf without their knowledge to something potentially more dangerous. As a result, it is common practice for the server to send the user a CSRF token. Only the user’s browser has access to this token, and it is required to make POST requests, so a malicious website cannot make requests on the user’s behalf.

In our project, the only entity making POST requests to the server is the central node, which will not interact with other websites, so it may not require a CSRF token. Additionally, if we required a CSRF token, we would need a two-way communication channel between the central node and the server, since the server would have to send the token. Without the token, we could simplify our design so that the only communication flows from the central node to the server. At the time of writing, I am still researching; in the next few days, we should make a decision about the CSRF token.
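If we do decide the token is unnecessary, Django lets us exempt a single view from CSRF checks. A minimal sketch, with a hypothetical endpoint name for the central node’s uploads:

```python
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # skip the CSRF check for this one view
def node_upload(request):  # hypothetical endpoint for the central node
    if request.method == "POST":
        # request.body holds the central node's payload
        return HttpResponse(status=204)
    return HttpResponse(status=405)  # only POST is supported
```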

Next week, I will continue to work on the website. Additionally, if we are able to generate data, I can also start working on the machine learning model.

Pablo’s Status Update for 10/10

Not much was accomplished this past week; parts have been very late to arrive, and the most important component, the Particle Argon, only arrived a couple of days ago. In the meantime, I read the documentation for the Argon and familiarized myself with the camera module, since it was the first to arrive.

We are currently behind schedule, both me personally and the team as a whole, due to those late deliveries and the fact that training the ML algorithm depends on the image-capturing nodes being set up. My plan to catch up is to put in extra hours to make up for lost time. Luckily we already had a very aggressive timeline, so these delays only bring us to just meeting the deadline, but we would like to get back some breathing room in case anything else arises.

Now that I have the components I need, the deliverable I plan on having complete is a prototype node built out with the camera and the Argon properly communicating over SPI.

-Pablo