Eshita’s Status Report for 4/29

This week, I came up with a concrete testing plan based on the advice we received during our final presentation and performed 32 different test runs with the paint thinner scent. Coming up with the test plan was important for gathering the right metrics on the robot's performance. Testing made the tradeoff between random exploration and planned path planning very clear: even when the object was a straight path away from the robot, the robot sometimes would not converge within 3 minutes or trigger on the scented object. I also tested a new power supply using lithium-ion batteries and observed a significant increase in battery life for the robot's motor driver and the Arduino. Alkaline 9V batteries provided 30-45 minutes of battery life, whereas the lithium-ion batteries provided 3 hours 13 minutes due to their higher capacity (1200 mAh compared to 350 mAh for the alkaline batteries).

Delivering the final presentation was an important milestone for our team. I also worked on the slides and script this week and started writing sections of the design report and poster.

Eshita’s Status Report for 4/22

This week, my focus was on rounding out and completing the overdue tasks of creating and integrating our embedded ML model and adding Neopixel light cycles for ScentBot's different states. We collected data for paint thinner, smoke, and alcohol as a team, and while exploring ML models we could use, I came across a GCP tool called Neuton, which had supporting tutorials for running models on an Arduino. After training on our dataset to create a neural network, we faced issues integrating it on the Arduino Mega specifically. Upon more research, I found that the architecture support needed to run the model on an ATmega processor was different, and hence the model would not be compatible. I instead shifted to MicroMLgen, which can convert a Support Vector Classification model written in Python into a C header file (the approach ScentBot currently uses) that we can include in our Arduino sketch as a library.

In creating the SVC, there was a tradeoff between storage space on the Mega and the accuracy the model could obtain. We also found that while the model's classification was good once a high threshold was reached, localization proved difficult due to the high number of false positives. Hence, we decided on a linear SVC model that only classifies once sensor values reach a set threshold. In testing, we also found it difficult to test smoke around other flammable substances and to direct it toward the sensor array. We explored propane and isobutane medical sprays as potential triggers as well, but their concentration was not high enough to set off the sensors. We decided to aim for ScentBot to work with alcohol, paint thinner, and ambient scents going into our final demo.
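For reference, a minimal sketch of what the threshold-gated classification can look like on the Arduino side, assuming the MicroMLgen-generated header is saved as model.h. The class and namespace names follow MicroMLgen's typical output but may differ by version, and the feature count, threshold value, and placeholder sensor reads are assumptions, not our actual sketch:

```cpp
// Minimal sketch of threshold-gated classification (names and values are placeholders).
#include "model.h"   // header generated by MicroMLgen from the Python SVC

// MicroMLgen typically wraps the model like this; adjust to match the generated file.
Eloquent::ML::Port::SVM classifier;

const int NUM_FEATURES = 4;            // assumed number of sensor channels
const float TRIGGER_THRESHOLD = 300.0; // assumed raw-value gate before classifying

float features[NUM_FEATURES];

// Placeholder reads; the real project fills this from the gas sensor array.
void readSensors(float *out) {
  for (int i = 0; i < NUM_FEATURES; i++) {
    out[i] = analogRead(A0 + i);
  }
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  readSensors(features);

  // Only run the SVC once at least one channel crosses the gate,
  // to avoid the false positives we saw at low concentrations.
  bool aboveGate = false;
  for (int i = 0; i < NUM_FEATURES; i++) {
    if (features[i] > TRIGGER_THRESHOLD) {
      aboveGate = true;
      break;
    }
  }

  if (aboveGate) {
    int label = classifier.predict(features);  // e.g. 0 = ambient, 1 = alcohol, 2 = paint thinner
    Serial.println(label);
  }

  delay(200);
}
```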

I am currently on track with all our tasks. Our testing will continue into next week as we complete trial runs beyond the final presentation. I still need to work on my script, presentation skills, and the timing of our slides for Monday, which we are going to refine together with a practice run tomorrow.

 

Eshita’s Status Report for 4/8

This week, I focused on preparing for the interim demo and verifying test results for different scents. For the interim demo, my contributions to the initial code base were the sensor readings, and adding the LCD display was a good way to verify initial results by showing live sensor values as the robot scans and randomly explores. For data routing, I tested various communication methods, each of which had its shortcomings. I tried communication from the Arduino through the ESP8266 Wi-Fi module to the cloud using MQTT (which lacked hardware drivers), and from the Arduino through the ESP8266 Wi-Fi module to a hosted web server, which did not work well with our high data frequency and high-speed control loop. Switching gears to the NodeMCU as an alternative, I also explored I2C and Serial communication extensively. These presented their own pros and cons, lacking the quick updates we needed for the sensor data to be classified correctly. The unit tests helped me recognize which methods would be effective for achieving the use-case and design requirements of our project. ScentBot's latency requirements would only be met if we either moved the sensor data readings over to the NodeMCU module or hosted everything locally on the Arduino. Even if we pursued the first option, it would delay communicating a classification result back to the robot, further slowing ScentBot's detection of the scent.

Testing with smoke and paint thinner, we found that simple thresholding and slope calculation methods would not work to differentiate between scents, since all of our sensors' values rise regardless of which of our three initial use-case scents (alcohol, paint thinner, and smoke) is placed in front of them.
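For context, the rise-rate check we ruled out looks roughly like the sketch below (the single analog channel, sample period, and threshold are placeholder simplifications). It flags that a scent is present whenever a reading climbs quickly, which is exactly why it cannot tell the three scents apart: each of them produces a similar rise on every channel.

```cpp
// Rough sketch of the simple slope (rise-rate) check we ruled out.
// Pin, sample period, and threshold are placeholders for illustration.
const unsigned long SAMPLE_MS = 500;   // assumed sample period
const float SLOPE_THRESHOLD = 20.0;    // assumed counts-per-second trigger

float previousReading = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  float reading = analogRead(A0);      // one sensor channel; the real array has several
  float slope = (reading - previousReading) / (SAMPLE_MS / 1000.0);  // counts per second
  previousReading = reading;

  // Alcohol, paint thinner, and smoke all make every channel rise,
  // so this check only says "something is there", never which scent it is.
  if (slope > SLOPE_THRESHOLD) {
    Serial.println("Rapid rise detected (scent present, type unknown)");
  }

  delay(SAMPLE_MS);
}
```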

This week, I generated initial datasets for alcohol and paint thinner and fed them through a naive binary classification CNN task on GCP's Neuton TinyML platform. Since the Arduino Mega gives us more memory, we can now explore hosting a model locally on the Arduino. The binary classification model shows promising results on the initial training data. I will complete all dataset generation by this weekend and move ahead with analyzing a CNN on these different scents. The unit testing involved will be prediction on a test dataset once we export the model, along with exploring how the robot's scanning and detection logic will change from what we have currently.

We also discovered interesting aspects of the robot's power drain, caused by the Arduino, LCD display, and fan all drawing from the 5V battery. I would like to perform battery-life tests as part of our final report, as I think it is important to evaluate this from the user's perspective. For now, we are proceeding by simply replacing the batteries when the supply drops below a usable level.

According to our updated Gantt chart, my work is currently on schedule: data generation will be completed this week and the classification model developed in the upcoming week. I also want to start preparing for the final presentation and thinking about the skills I must display to properly showcase the work my teammates and I have done.

Eshita’s Status Report for 4/1

This week, my focus was on devising an alternative for network communications. I devoted time to researching alternatives for communication using the Serial and I2C protocols. I also helped debug and experiment with the best-fit-line gradient methodology that Aditti had written up.

Investigating the I2C protocol was promising: I configured the Arduino as a slave that sends sensor data to the NodeMCU (master) as it is received. The NodeMCU can receive sensor readings as they update, but it is slower, as the maximum speed over I2C is 400 kbps. Moreover, the Wi-Fi communication needs a channel open to listen for all client requests to pull data in, which prevents the I2C bus from receiving updated readings at the same time. The other alternative would be to use the NodeMCU's local memory to store our classification model, but given the slow speed of this protocol, it was not the best fit for our high-speed control loop.
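As a rough sketch of the Arduino-as-slave side of this experiment (the slave address, two-byte payload layout, and the analog stand-in for the real sensor reads are all assumptions):

```cpp
// Sketch of the Arduino acting as an I2C slave that hands its latest
// sensor reading to the NodeMCU master on request.
// Address and payload format are placeholders.
#include <Wire.h>

const uint8_t SLAVE_ADDRESS = 0x08;   // assumed I2C address
volatile uint16_t latestReading = 0;  // most recent sensor value

// Called from the Wire interrupt whenever the master requests data.
void onMasterRequest() {
  Wire.write((uint8_t)(latestReading >> 8));   // high byte
  Wire.write((uint8_t)(latestReading & 0xFF)); // low byte
}

void setup() {
  Wire.begin(SLAVE_ADDRESS);          // join the bus as a slave
  Wire.onRequest(onMasterRequest);
}

void loop() {
  // Stand-in for reading the gas sensor array.
  latestReading = analogRead(A0);
  delay(100);
}
```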

Investigating serial communication also led to issues: the updated values sent from the Arduino were not showing up on the NodeMCU, although serial is a much faster approach to receiving data. I am currently working on two approaches to resolve this. Since we are waiting for the Mega to come in, I am setting up a TinyML embedded pipeline for the dataset generation we have completed. The resulting C file can then be included as a separate file on our Mega, which should allow our project to work without the communication lags and issues we are currently facing. The other alternative is to look further into serial communication and make it work with the model hosted locally on the NodeMCU. We are also meeting to work on our pitch for the interim demo, which I will contribute to by devising materials and scripts.
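For reference, the serial link I was debugging is conceptually as simple as the pair of sketches below (the baud rate, wiring to Serial1, and newline-delimited format are assumptions, and reply handling is omitted); the sent values should show up on the NodeMCU side, which is what was not happening reliably:

```cpp
// --- Arduino (Mega) side: send readings over a hardware serial port ---
void setup() {
  Serial1.begin(9600);  // assumed UART link to the NodeMCU
}

void loop() {
  int reading = analogRead(A0);   // stand-in for a real sensor value
  Serial1.println(reading);       // newline-delimited so the receiver can parse lines
  delay(200);
}

// --- NodeMCU side (separate sketch, shown here as a comment) ---
/*
void setup() {
  Serial.begin(9600);   // UART wired to the Mega's Serial1
}

void loop() {
  if (Serial.available()) {
    String line = Serial.readStringUntil('\n');
    // ...parse the value and hand it to the classification/logging code...
  }
}
*/
```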

Eshita’s Status Report for 3/25

This week, I worked on soldering our circuit parts together, along with integrating sensor data reading over TCP communication to a local web server on the ESP8266. I encountered multiple issues that set back my progress toward software and hardware integration. When editing the code for hardware integration, we found stability issues that left no memory for local variables to issue commands to the ESP8266 module to receive and send data. This led to a pivot in my coding to try to make sensor sampling, collection, and retrieval possible with a Python script using just the ENS160 sensor.

The differences between using the bare Wi-Fi module and the NodeMCU are leading to a lot of debugging issues. Due to the lack of libraries around the ESP chip, I have realized through research that sending a JSON string across will require manually writing the response codes and header metadata that are normally sent in TCP/IP (HTTP) communication, so that it can be read by the requests library in Python. Trying to integrate the existing NodeMCU library with the Wi-Fi module proved unsuccessful, and I am currently working on a function that adds the needed metadata and information so the response can be read by a script that runs our classification models. With the Arduino Mega, I imagine this will be an easier decision, so in the next week (and hopefully in the next 2 days), I need to determine whether the early issues I faced with this chip, combined with the ones I am facing now, are enough to consider an alternative. The lack of hardware and software drivers for this chip is making software and hardware integration harder than it needs to be, in my opinion, but I want to make as informed a decision as I can.
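To make that concrete, the hand-assembled response I have in mind looks roughly like the sketch below (the AT command sequence is abbreviated, reply checking is replaced with crude delays, and the serial wiring and JSON fields are placeholder assumptions). The point is that the status line, Content-Type, and Content-Length all have to be written out by hand before requests on the Python side will parse the reply:

```cpp
// Rough sketch of hand-building an HTTP response for the ESP8266 via AT commands.
// "OK"/">" reply handling is omitted and the payload fields are placeholders.

void sendJsonResponse(int linkId) {
  String body = "{\"tvoc\": 125, \"eco2\": 400}";   // placeholder sensor JSON

  // Status line and headers assembled manually, as requests expects them.
  String response = String("HTTP/1.1 200 OK\r\n") +
                    "Content-Type: application/json\r\n" +
                    "Content-Length: " + body.length() + "\r\n" +
                    "Connection: close\r\n" +
                    "\r\n" + body;

  // Tell the ESP8266 how many bytes follow, wait briefly for its '>' prompt, then send.
  Serial1.print("AT+CIPSEND=");
  Serial1.print(linkId);
  Serial1.print(",");
  Serial1.println(response.length());
  delay(100);                       // crude wait instead of checking for '>'
  Serial1.print(response);

  delay(100);
  Serial1.print("AT+CIPCLOSE=");    // close the client connection when done
  Serial1.println(linkId);
}

void setup() {
  // Assumed UART to the ESP8266 (SoftwareSerial on the Uno, Serial1 on the Mega).
  Serial1.begin(115200);
}

void loop() {
  // In the real sketch this would run when a +IPD client request arrives.
}
```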

Eshita’s Status Report for 3/18

This week I worked on establishing a data collection pipeline for the sensors. The updated code now stores values and prints them in CSV format, which is then read over serial communication by a Python script. This way, we can collect data more efficiently and quickly. An example of the columns we tried collecting data with is shown below.
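Separately, the printing side of that pipeline is roughly as simple as the sketch below (the column names and analogRead stand-ins are placeholders, not the actual schema we collected): each loop iteration emits one comma-separated row that the Python script can log straight into a CSV file.

```cpp
// Rough sketch of the CSV-over-serial data collection format.
// Column names and the analogRead stand-ins are placeholders.

void setup() {
  Serial.begin(9600);
  // One header row so the Python logger can write it straight into the CSV file.
  Serial.println("timestamp_ms,channel_0,channel_1,channel_2");
}

void loop() {
  // Stand-ins for the real sensor reads.
  int c0 = analogRead(A0);
  int c1 = analogRead(A1);
  int c2 = analogRead(A2);

  // One comma-separated row per sample.
  Serial.print(millis());
  Serial.print(",");
  Serial.print(c0);
  Serial.print(",");
  Serial.print(c1);
  Serial.print(",");
  Serial.println(c2);

  delay(500);   // assumed sample period
}
```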

A major problem we faced was trying to work simultaneously on the motor calibration and odometry of the robot while also wanting to collect data from within the car to emulate the readings the sensors will be getting. Hence, while Aditti and Caroline focused more on the odometry, I set up a pipeline that makes it easy to distribute work within the team. We were also facing problems with the hardware drivers for our particular Wi-Fi module on Azure, as discussed in our weekly meeting, so I am also working on code that sends the JSON response to a Python script that can run a classification model.

Eshita’s Status Report for 3/11

This week, I worked on creating the code for the Arduino and sensor array to transfer data from the sensors to the Arduino. The code is on GitHub here (https://github.com/aditti-ramsisaria/ece-capstone) in sensors/ and ens160_lib/. I faced a number of issues getting all the sensors to work together. The ENS160 library had not been updated for the most recent libraries, so I had to update one of the functions in the Arduino library made by ScioSense to make it functional. A picture of the sensors working is attached below.

The other aspect I was working on was communication using the ESP8266 chip. There is a way to send JSON data over Wi-Fi to a local web server hosted on the Wi-Fi module. The working implementation of this is attached, showing a simple JSON message hosted entirely locally on the Wi-Fi module. Sending data across as JSON is very feasible, but it adds complexity in retrieving the data from the web server for the classification algorithm. On the other hand, the ESP8266 chip we ordered does not have enough documentation for connecting to Azure with MQTT, as was highlighted in my previous status report. We have a NodeMCU that works with Azure, and I will research communication between the NodeMCU and the Arduino, since our sensors only work all together over I2C on the Arduino.

 

Eshita’s Status Report for 2/25

This week, I worked on setting up the Azure IoT Hub instance with the required configurations and adding the ESP8266 to the device list for communicating with the cloud. I faced a number of issues in setting this up, as the ESP8266 Wi-Fi module we ordered does not have much documentation or listed steps for connecting to Azure. I used a modified Azure SDK for the NodeMCU version of the chip through the Arduino IDE, but there are additional requirements like flashing the firmware, since we are relaying it through the Arduino Uno. Flashing the firmware is very OS-dependent, so I am thinking about how we are going to integrate all of this down the line. I fell behind schedule this week, as I was not able to work with Caroline on the sensor system assembly, but we will begin it immediately, since there is still a sensor that has not arrived yet. Connecting this hardware to the cloud was harder than I had imagined, coming from a background of building purely software solutions on the cloud. My shortfall in not being able to contribute this week has given me some anxiety about playing catch-up, but I will work on this immediately and make completing it a priority.

Eshita’s Status Report for 2/18

This week, I worked on preparing our chosen cloud platform, Azure IoT Hub, and starting the software integration of Aditti's wavefront segmentation computer vision program with the USB camera. The camera integration proved to be a difficult task: at closer distances and with smaller arenas (I used Letter-sized paper as the “arena”), the camera's shadow would be interpreted as an additional object. I therefore had to use an artificial light source from my phone to make sure objects were illuminated correctly. I suspect we will need further testing to make sure the camera is stable and can capture frames without a shadow appearing, given its overhead placement. I have attached a few pictures below illustrating my testing of the camera feed. We have created a GitHub repository for our code and CAD files so far.

 

For the Azure application, there are currently two steps I must perform. The first is to link the Arduino with the Wi-Fi module, and the next is to link the Wi-Fi module to the cloud. I have found the following resources to help me investigate this (https://blog.avotrix.com/azure-iot-hub-with-esp8266/ for connecting the ESP8266 to the cloud and https://www.instructables.com/Get-Started-With-ESP8266-Using-AT-Commands-Via-Ard/ for controlling the ESP8266 from the Arduino). Next week, after we have all the robot CAD files printed, Caroline and I will start the next step of sensor array assembly and cloud data collection.
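As a first check of that Arduino-to-module link, the basic AT-command handshake looks roughly like the sketch below (the SoftwareSerial pins, baud rate, and Wi-Fi credentials are placeholders, and module replies are simply echoed rather than checked for "OK"):

```cpp
// Rough sketch of driving the ESP8266 with AT commands from an Arduino Uno.
// Pins, baud rate, and credentials are placeholders.
#include <SoftwareSerial.h>

SoftwareSerial esp(2, 3);   // assumed RX, TX pins wired to the ESP8266

void sendCommand(const char *cmd) {
  esp.println(cmd);
  delay(2000);                   // crude wait; a real sketch would watch for "OK"
  while (esp.available()) {
    Serial.write(esp.read());    // echo the module's reply for debugging
  }
}

void setup() {
  Serial.begin(9600);
  esp.begin(9600);               // assumed AT firmware baud rate

  sendCommand("AT");                                    // sanity check
  sendCommand("AT+CWMODE=1");                           // station mode
  sendCommand("AT+CWJAP=\"your-ssid\",\"your-pass\"");  // join Wi-Fi (placeholders)
}

void loop() {
}
```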

The courses that helped me with this work are 10-301: Introduction to Machine Learning, for the computer vision segmentation, and 18-220: Electronic Devices and Analog Circuits, for understanding basic commands and time delays on the Arduino. We spent some additional time researching multitasking on the Arduino for parallel processes. I have no formal coursework in cloud computing, but I am certified as a Machine Learning Engineer and Cloud Architect on Google Cloud Platform, which has helped me immensely in investigating cloud implementations this week.

Eshita’s Status Report for 2/11

This week I focused on researching cloud implementations and alternatives for our sensor data collection and for hosting our classification algorithm. I also focused on the proposal presentation with Aditti and Caroline, where we met several times to go over various design details. There are several alternatives to consider: we could collect data directly from the Arduino serial monitor for training purposes and send telemetry payloads to Azure for classification once our model is implemented, or we could send the same payloads both for collecting the training data and for the actual classification. The machine learning model for classification would be imported into a Jupyter/Python instance. I also found a project that uses Python notebooks and libraries along with AWS IoT to read data from sensors, and I am spending more time doing trade studies between the two platforms. I am more drawn toward AWS because of my prior experience with it, but research shows advantages for both AWS and Azure: they are economical solutions that offer extensive message-sending capabilities from various IoT devices to their dashboards. The main tradeoff I currently envision is the difference in ML capabilities between Azure and AWS. While AWS is less beginner-friendly and more costly, Azure's ML capabilities might be harder to integrate with its IoT Hub. My goal for week 5 is to create an instance with an Arduino board I have lying around and see if I can send some basic data about an LED being on or off to Azure, staying on schedule with the Gantt chart presented during our proposal.