Week 8 Update

This week, we focused heavily on preparing for our midpoint demo. Caroline and Snigdha worked to integrate the first half of the system across both hardware and software components. We ported our encoding script to Java, and we can now enter a message on the Raspberry Pi and print out the encoded message directly. During the latter half of the week, we focused on catching up on the CV side, making sure the CV output was in the right form to properly decode the message. We also worked on setting up the second Raspberry Pi and installing OpenCV and other necessary software on it.

Below is our updated Gantt chart for the midpoint demo.

Caroline

This week I worked on installing and testing OpenCV on the second Raspberry Pi. The install process takes several hours, and the last two times I have tried it, it has frozen somewhere near the end. This week I plan to arrange my time better so that I can leave the installations running while I’m doing other things, so that all of my work time is not dedicated to watching the terminal.

Snigdha

This week, I worked on making sure the encoding algorithm we created could run continuously on the first Raspberry Pi from the command line. To do this, we had to decide between using JavaScript and web sockets or porting our encoding to Java. In the interest of time, and to avoid unnecessary complexity, Caroline and I decided that switching to Java was the most effective approach, since Processing with Java can be run from the command line. After Caroline and I were able to print the image as described above, I worked with Shivani to sync up on the CV and decoder parts. We worked together to tweak the output CSV to make the decoding process smoother. Using that, I modified the decoder to properly parse the CSV as it’s read in and use it to decode the message. During the coming week, I’ll be working on this more to get the decoder to correctly output the decoded message.
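For reference, here is a minimal sketch of that parsing step in Python, assuming one CSV row per detected symbol; the column names and the lookup table are placeholders rather than our actual format:

```python
import csv

# Hypothetical decoder sketch: parse the CV output CSV row by row and
# rebuild the symbol sequence. Column names and the lookup table are
# placeholders -- the real format is whatever the CV side exports.
SYMBOL_TO_CHAR = {
    ("circle", "green", "filled"): "h",
    # ... one entry per (shape, color, fill) combination
}

def decode_csv(path):
    message = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["shape"], row["color"], row["fill"])
            message.append(SYMBOL_TO_CHAR.get(key, "?"))
    return "".join(message)

if __name__ == "__main__":
    print(decode_csv("cv_output.csv"))
```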

Shivani

This week, I worked on testing our new pattern with the CV. I discovered that the ordering is not working as we expected, since the shapes are staggered. I’ve been working on fixing this alignment issue without having to hard-code it in. Next week, I plan on finishing debugging this ordering issue and testing it with the RPi camera.
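The general direction is to group detected shape centers into rows by vertical proximity instead of exact coordinates, so the staggering doesn’t break the reading order. A rough sketch of the idea (the real code works on OpenCV contour centroids, and the tolerance would be something we tune):

```python
# Hypothetical ordering sketch: group detected shape centroids into rows
# using a y-tolerance, then read each row left to right. Nothing here is
# hard-coded to one particular grid layout.
def order_centroids(centroids, row_tolerance):
    """centroids: list of (x, y) points; returns them in reading order."""
    rows = []
    for x, y in sorted(centroids, key=lambda c: c[1]):
        # Append to an existing row if this point is vertically close to it.
        for row in rows:
            if abs(row[0][1] - y) <= row_tolerance:
                row.append((x, y))
                break
        else:
            rows.append([(x, y)])
    # Flatten: rows are already top-to-bottom; sort within each row by x.
    return [pt for row in rows for pt in sorted(row)]
```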

Week 7 Update

Overall, we focused this week on preparing for the midpoint demo by trying to integrate separate parts into a more functional project. Because of external factors, we were unable to work on the CV this week, so we focused on integrating the encoding end. We have the encoding system set up such that when someone types a message in and hits a button, it automatically prints out the encoded image. To do this, we rewrote the encoding system in Java so that we could use the Processing desktop app, which integrates with the command line. Unfortunately, because of the types of communication we’re using, it’s quite slow right now (~20 seconds from entering the text to printing), but we have plans to make it much faster and more polished in the coming week.

a full 20+ seconds… yikes

Caroline

Encoding System

Last week and this week, I set up the first Raspberry Pi with an LCD screen, a keyboard, a button, and a printer. The hardware system is now automated: the user types a message into the Pi, sees it on the LCD screen, and hits the button, and the message prints out automatically.

Because Snigdha’s encoding system is written in p5.js for the web, we originally needed to communicate between the web page and a Node server in order to actually print out the image on the canvas. We figured that we needed to use web sockets to communicate back and forth between p5.js and Node. This came with several more problems involving client-side web programming, and between installing browserify and other packages to attempt to communicate via a web server, it took a big push to get things up and running here. The system I came up with is:

(1) user inputs message (string) via keyboard, which is received by the local Node server.

(2) local Node server broadcasts the message via socket.io.

(3) client-side p5.js app hears the message and updates the image.

(4) p5.js app converts the canvas to image data and sends the data back (string) via socket.io.

(5) local Node server receives the PNG data and prints out the encoded image.

However: this would have required majorly wrestling with lots of obnoxious web programming. We decided instead to use Processing, which has command-line functionality available, as it’s written in Java. Snigdha reimplemented her code in Processing’s Java mode, and I wrote code to automatically call the printer from within Processing, and to automatically call Processing from JavaScript, which is what our GPIO system is still written in. This worked: we were able to automatically print end-to-end from keyboard input by pushing a button! Major milestone!
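Our actual glue code is JavaScript, but for illustration, here is the same shell-out pattern sketched in Python. processing-java is Processing’s command-line runner; the sketch path, output filename, and the lp print call are placeholders, and how the message reaches the sketch in our real pipeline may differ:

```python
import subprocess

# Sketch of the shell-out pattern (our real glue code is JavaScript).
# processing-java launches a sketch from the command line; paths and
# the printer invocation below are placeholders.
def print_encoded(message):
    # Launch the sketch with the message as an argument; the sketch
    # renders the encoding and writes encoded.png before exiting.
    subprocess.run(
        ["processing-java", "--sketch=/home/pi/encode", "--run", message],
        check=True,
    )
    # Send the rendered image to the default printer via CUPS.
    subprocess.run(["lp", "encoded.png"], check=True)
```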

But alas, there is a problem. On average, after measuring four times, it took around 20 seconds to print the image after the button is pressed. This is pretty bad. The reason it takes so long is that calling a Processing sketch from the command line takes a long time: the sketch and its graphical environment have to launch from scratch on every print. So even though my code is optimized to print quickly, actually rendering the image takes a long time in the graphical environment we’re using.

Right now, our main priority is getting everything functional. However, I would really like to spend a few days getting this to run much faster, and I have three ideas for how to do it. I really want to make sure that the printer is called almost instantly after the message is entered; right now, that is the longest waiting period.

(1) do the awful web server thing. It would be a frustrating experience getting it to work, but web socket communication is a lot faster than calling Processing code from the command line.

(2) Use something like OSC to let JavaScript and Processing communicate with each other live, without having to re-launch the app every time, though I’m not sure yet exactly how that would work. <- I think this is what I’m gonna go for (see the sketch after this list)

(3) implement all of the hardware in Processing. It would require me to write my own I2C library for the LCD display, but I think it could work.
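To give a flavor of option (2), here is a minimal sketch of the OSC pattern using the python-osc package, purely for illustration; in our setup the sender would be the Node GPIO code (e.g. via node-osc) and the receiver a long-running Processing sketch listening with oscP5. The port and address are placeholders:

```python
from pythonosc.udp_client import SimpleUDPClient

# Illustration of the OSC idea: the hardware side fires a small UDP
# message at an already-running sketch, so nothing has to relaunch.
client = SimpleUDPClient("127.0.0.1", 12000)  # placeholder host/port
client.send_message("/encode", "hello world")  # sketch re-renders on receipt
```

The key win is that the sketch (and its slow-to-start graphical environment) stays alive between prints, so the per-message cost is just one UDP packet.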

Decoding System

I also set up the second Raspberry Pi with a Pi Camera, an LCD screen, and a button. I wrote Python code that takes a photo with the Pi Camera whenever someone presses the button. The next step is to install OpenCV on the Raspberry Pi and use it to automatically process the image that is taken. We’re also planning to add a digital touch screen so that participants can see the camera feed before they take a photo.
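A minimal sketch of that button-to-photo loop, assuming the gpiozero and picamera libraries; the pin number and output path are placeholders, and the actual script may differ:

```python
from gpiozero import Button
from picamera import PiCamera

# Snap a photo whenever the button is pressed.
camera = PiCamera()
button = Button(17)  # placeholder GPIO pin

while True:
    button.wait_for_press()
    camera.capture("/home/pi/capture.jpg")  # placeholder path
```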

Snigdha

This week, I worked on modifying the encoding file so that it can be run from the Raspberry Pi, making it easier to integrate with the hardware. This involves changing the way input is handled so that we can read input from the hardware, as well as making sure the file can run continually instead of relying on a human to run it each time. While this is still an ongoing process, I also looked at using socket.io to get the input from the hardware into the generateImage file.

Shivani


Week 4 Update

This week’s main goals were to integrate the OpenCV detection with the decoding algorithm. We worked on refining the CV to lower the processing time and on letting it detect multiple elements in parallel so it can send the data more quickly. We also discussed a way to reduce overhead and code duplication between the decoding, encoding, and detection parts by using a common dictionary. For the coming week, we plan on integrating the image scanning with OpenCV, getting the Raspberry Pi set up with a printer and keyboard, and writing the decoding algorithm.


Shivani

This week, after getting feedback from the presentation on Monday, I ran metrics for the CV on a few different patterns to benchmark our progress. After removing the image generation at each step and running the different detections in parallel, I refined some of the CV to reduce the computation time. It currently takes 1.7 seconds to finish processing “Hello World”, which is a good place for us and leaves time for printing and other UI features. In addition, I combined all of the different outputs of our CV detection (color, shape, filled/unfilled, order) into a 2D list to export. I met with Snigdha, and we decided that the best way to transfer the data to the decoding part of our project was a CSV file that she can parse. This upcoming week I am going to be working on exporting the data as a CSV file and on some ordering edge cases that have popped up.
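A minimal sketch of what that export might look like, assuming one row per symbol in reading order; the column layout is a placeholder until we settle the exact format with the decoder side:

```python
import csv

# Hypothetical export sketch: flatten the combined detection results
# (one row per symbol, already in reading order) into a CSV that the
# decoder can parse. Column names are placeholders.
def export_detections(detections, path):
    """detections: list of (order, shape, color, fill) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order", "shape", "color", "fill"])
        writer.writerows(detections)
```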

Caroline

This week I was away visiting graduate schools and didn’t have the opportunity to get much done. However, I am looking forward to starting the hardware next week. My goal for next week is to hook up Snigdha’s image generation on a Raspberry Pi with a button and a printer, so that hitting the button prints out an image. This is going to require changing her JavaScript to save an image rather than draw on an HTML canvas, and then using the image to call the printer directly from JavaScript or the command line.

Snigdha

This week, I worked with Shivani to figure out an efficient way to combine the results from the CV algorithm with the decoding algorithm. We decided against doing all the decoding processing in the CV file, and instead agreed to export the information in a CSV that could be read in and decoded on the Raspberry Pi. With this system in place, I will spend the next week writing decoding functions. I also spoke with her about the ability to detect our most recent encoded pattern, and I will be working on refining it further by changing the encoding pattern from six shapes to three and adding a filled/unfilled feature to the shapes. Lastly, I worked with the team on the Design Document due this Sunday.

Week 3 Update

Summary

This week’s main goals were to work on the OpenCV detection and develop detection of key components such as shape and color. We worked on a more capable encoding algorithm and on generating the encoded pattern programmatically, which allows for further refinement based on the OpenCV results. We also spent some time looking into what the process of scanning and processing the image into a CV-readable format would look like. For next week, we are planning on refining our encoding/decoding algorithm and integrating it with the CV. We also plan on looking at how to get OpenCV onto the iOS app, and at whether to scan using a smartphone camera or another Raspberry Pi.

Updated Gantt Chart

Shivani

This week, I made progress on detecting more detailed shapes and images. I worked on color detection under different lighting conditions for maroon, purple, yellow, cyan, and green. In addition, I created a filter to determine whether a shape is filled or not. I also refined the initial shape detection from last week so it detects the symbols in a set order. For this upcoming week, I’ll be working with Snigdha and Caroline to come up with a format to store all of the information about the pattern and to finalize everything we need to scan for.
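As a rough illustration of the color-detection approach: converting to HSV and counting pixels inside per-color hue ranges is one way to stay robust under changing lighting. The ranges below are placeholders, not our tuned values, and a couple of colors are omitted for brevity:

```python
import cv2

# Hypothetical color-classification sketch: HSV hue is more stable than
# raw RGB under different lighting, so threshold per hue range and pick
# whichever range covers the most pixels.
HUE_RANGES = {
    "green": (40, 80),
    "cyan": (80, 100),
    "purple": (125, 155),
    # maroon and yellow omitted for brevity
}

def classify_color(bgr_patch):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    best, best_count = None, 0
    for name, (lo, hi) in HUE_RANGES.items():
        mask = cv2.inRange(hsv, (lo, 60, 60), (hi, 255, 255))
        count = cv2.countNonZero(mask)
        if count > best_count:
            best, best_count = name, count
    return best
```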

Caroline

This week I got some OpenCV demos running on iOS and started working on creating our “scanner” app. However, we realized that we may not actually want to use iOS for our scanner; we might want a second Raspberry Pi to scan the pattern back in. Next week I will be traveling, but after that I will be working to integrate printing on the Raspberry Pi. We decided that we’d definitely like to print the pattern out, and now it’s a question of whether we’ll be using a Raspberry Pi with a camera for the scanning as well.

Snigdha

This week, I worked on the encoding/decoding algorithm to generate a visual encoding that is less rudimentary and more visually pleasing. The algorithm incorporates 4 shapes, 4 colors, and sets of 6 shapes to encode each letter. In addition, the algorithm now includes repetition for more accurate decoding. For next week, in addition to working on the design document, I’ll be working with Shivani to further refine this algorithm and to connect the encoding part to the CV.
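To show the shape of the scheme (not our actual assignment), here is a sketch of how 4 shapes and 4 colors yield more than enough glyph sequences to cover the alphabet; the shape and color names are placeholders, and the sketch uses length-two codewords where our algorithm uses sets of six, partly as repetition for more reliable decoding:

```python
import itertools

SHAPES = ["circle", "square", "triangle", "star"]  # placeholder names
COLORS = ["maroon", "purple", "yellow", "cyan"]

# 4 shapes x 4 colors = 16 distinct glyphs; fixed-length sequences of
# glyphs give plenty of codewords for the alphabet (16^2 = 256 here).
GLYPHS = list(itertools.product(SHAPES, COLORS))
CODEWORDS = list(itertools.product(GLYPHS, repeat=2))

def encode(message):
    """Map each letter to its codeword, a sequence of (shape, color) glyphs."""
    return [CODEWORDS[ord(ch) - ord("a")]
            for ch in message.lower() if ch.isalpha()]
```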

“Hello world” encoded using our current algorithm.