Week 10 Update

This week, we were away for Thanksgiving and did not get much done on our project. We had planned for this break in our schedule, so it does not put us behind.

Our to-do list for the next two weeks is pretty intense, and we have plans to make everything happen. Our first priority is making sure the camera decoding setup works on the second Raspberry Pi, so we are hoping to get the stand for the Pi camera printed and assembled early this week. After that, we will work through the decoding setup.

Encoding/Decoding:

  • Get the pattern decoded into a CSV format that's easy to process
  • Implement the cipher

Decoding Setup:

  • Mount the camera at a fixed height that doesn't change and dial in the focus
  • Run the capture on a loop that feeds images to the CV (rough sketch after this list)
  • Keep the printed image in a consistent position
  • Experiment with camera settings
  • Determine which side of the pattern is up
  • Test color recognition
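
For the loop item above, a minimal sketch of what the capture loop on the decoding Pi might look like, assuming the picamera and OpenCV libraries; the resolution, the locked exposure/white-balance values, and the per-frame handoff are placeholders to be tuned:

```python
# Sketch only: capture frames on a loop from a camera mounted at a fixed
# height, with settings locked so every frame looks the same to the CV.
import io
import time

import cv2
import numpy as np
from picamera import PiCamera

camera = PiCamera(resolution=(1640, 1232))  # placeholder resolution
time.sleep(2)                  # let the sensor settle before locking settings
camera.exposure_mode = 'off'   # freeze exposure so colors read consistently
camera.awb_mode = 'off'
camera.awb_gains = (1.5, 1.5)  # placeholder white-balance gains

while True:
    # Capture a JPEG into memory and decode it into an OpenCV BGR array.
    stream = io.BytesIO()
    camera.capture(stream, format='jpeg')
    frame = cv2.imdecode(np.frombuffer(stream.getvalue(), np.uint8),
                         cv2.IMREAD_COLOR)
    # TODO: hand `frame` to the shape/color detection and the decoder here.
    time.sleep(1)
```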

Encoding Setup:

  • Make it not slow (cut the ~20-second input-to-print delay)

Camera situation:

  • Test the CV on the Raspberry Pi
  • Have the encoding rig for the camera made/cut ASAP
  • Have it set up on a loop

Week 9 Update

Summary

This week, we tested the Pi Camera with the encoded message and found that it will be high-res enough; we just need to get it to focus. We also found several to-do items for improving our decoding pipeline, and we worked on finishing the decoder, integrating the CV with the decoding algorithm.

Decoder – Snigdha & Shivani

This week we worked on finishing the decoder. We worked closely to tweak the CV and the encoding pattern bit by bit to make sure the shapes were being properly detected. From there, Snigdha updated the decoder to work with the modified CSV format and made sure it could accurately decode the message. Snigdha also modified the encoding pattern to make it a better fit for the 4×6 paper when printed.

Pi Camera Tests – Shivani & Caroline

This week, we ran some tests on the Raspberry Pi camera setup together, with Shivani testing the recognition ability with OpenCV and Caroline controlling the camera and printer.

Findings

  • The Pi camera is easily high-res enough
  • It can clearly see all 32 characters at the small size we currently use
  • Everything fits on the card size that we wanted


To Do

  • The camera saves large images; we need to resize, crop, and do slight image processing (rough sketch after the images below)
  • Figure out ordering and alignment
  • Determine exact distances and measurements for photos
  • Build a prototype photo-taking setup so we can start using it for decoding
  • Add an ending line to determine which side is up
  • The last row of characters gets slightly cut off for some reason
  • Test color recognition and find the range
Images: the original image from the RPi camera; the image after processing (cropping, increased brightness, slight rotation); and the image with pattern detection.
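
A rough sketch of the cleanup pass described in the to-do list above (resize, crop, brighten, slight rotation), plus a first stab at the color-range test. Every number here (scale factor, crop box, brightness, angle, HSV bounds) is an illustrative placeholder, not a measured value:

```python
# Sketch only: post-process a raw Pi-camera capture before pattern detection.
import cv2
import numpy as np

img = cv2.imread('rpi_capture.jpg')

# Downscale the large camera image to a workable size.
img = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

# Crop to the region where the printed card sits (placeholder coordinates).
img = img[200:800, 300:1100]

# Brighten: convertScaleAbs computes out = alpha * in + beta.
img = cv2.convertScaleAbs(img, alpha=1.2, beta=30)

# Rotate a few degrees to square up the card (placeholder angle).
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), 2.0, 1.0)
img = cv2.warpAffine(img, M, (w, h))

# Color-recognition test: mask one ink color in HSV; the bounds get tuned
# until the mask covers that color and nothing else.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))

cv2.imwrite('processed.jpg', img)
```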

Week 8 Update

This week, we focused heavily on preparing for our midpoint demo. Caroline and Snigdha worked to integrate the first half of the system, both hardware and software components. We ported our encoding script to Java and are now able to enter a message on the Raspberry Pi and print out the encoded message directly. During the latter half of the week, we focused on catching up on the CV side, working to make sure the CV output was in the form needed to properly decode the message. We also worked on setting up the second Raspberry Pi and installing OpenCV and other necessary software on it.

Below is our updated Gantt chart for the midpoint demo.

Caroline

This week I worked on installing and testing OpenCV on the second Raspberry Pi. The install process takes several hours, and the last two times I have tried it, it has frozen somewhere near the end. This week I plan to arrange my time better so that I can leave the installations running while I'm doing other things, rather than dedicating all of my work time to watching the terminal.

Snigdha

This week, I worked on making sure the encoding algorithm we created can run continuously on the first Raspberry Pi from the command line. To do this, we had to decide between using JavaScript with web sockets or porting our encoding to Java. In the interest of time, and to avoid unnecessary complexity, Caroline and I decided that switching to Java was the most effective approach, since Processing with Java can be run from the command line. After Caroline and I were able to print the image as described above, I worked with Shivani to sync up the CV and decoder parts. We worked together to tweak the output CSV to make the decoding process smoother. Using that, I modified the decoder to properly parse the CSV as it's read in and use it to decode the message. During the coming week, I'll be working on this more to get the decoder to correctly output the decoded message.
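
For illustration, a minimal Python sketch of the CSV-to-message step; the column names and the two-entry lookup table are stand-ins, since the real mapping comes from our encoding pattern:

```python
# Sketch only: turn the CV's CSV output (one detected glyph per row)
# back into characters.
import csv

# Placeholder mapping from (shape, color) to character.
SYMBOL_TABLE = {('circle', 'red'): 'A', ('square', 'blue'): 'B'}

def decode_csv(path):
    chars = []
    with open(path, newline='') as f:
        for row in csv.DictReader(f):  # assumes a header row: shape,color,...
            chars.append(SYMBOL_TABLE.get((row['shape'], row['color']), '?'))
    return ''.join(chars)

print(decode_csv('cv_output.csv'))
```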

Shivani

This week, I worked on testing our new pattern with the CV. I discovered that the ordering is not working as we expected, since the shapes are staggered. I've been working on fixing this alignment issue without having to hard-code it. Next week, I plan on finishing debugging this ordering issue and testing it with the RPi camera.

Week 7 Update

Overall, we focused this week on preparing for the midpoint demo by trying to integrate separate parts into a more functional project. Because of external factors, we were unable to work on the CV this week, so we focused on integrating the encoding end. We have the encoding system set up such that when someone types a message in and hits a button, it automatically prints out the encoded image. To do this, we rewrote the encoding system in Java so that we could use the Processing desktop app, which integrates with the command line. Unfortunately, because of the types of communication we're using, it's really slow right now (~20 seconds from entering the text to printing), but we have plans to make it much faster and more polished in the coming week.

a full 20+ seconds… yikes

Caroline

Encoding System

Last week and this week, I set up the first Raspberry Pi with an LCD screen, a keyboard, a button, and a printer. The hardware system is now automated: the user types a message into the Pi, sees it on the LCD screen, hits the button, and the message prints out automatically.

Because Snigdha's encoding system is written in p5.js for the web, we originally needed to communicate between the web page and a Node server in order to actually print out the image on the canvas. We figured that we needed to use web sockets to communicate back and forth between p5.js and Node. This came with several more problems involving client-side web programming, and between installing browserify and other packages to attempt to communicate via a web server, it took a big push to get things up and running here. The system I came up with is (a rough server-side sketch follows the steps):

(1) The user inputs a message (string) via the keyboard, which is received by the local Node server.

(2) The local Node server broadcasts the message via socket.io.

(3) The client-side p5.js app hears the message and updates the image.

(4) The p5.js app converts the canvas to a data image and sends the data back (string) via socket.io.

(5) The local Node server receives the PNG data and prints out the encoded image.
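
For illustration, roughly what the server side of steps (1)–(5) could look like. Our actual server is written in Node; this sketch uses the python-socketio package instead, and the event names, port, and `lp` print command are all assumptions:

```python
# Sketch only: a socket.io server that pushes typed messages to the p5.js
# client and prints the PNG data the client sends back.
import base64
import subprocess

import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins='*')
app = socketio.WSGIApp(sio)

def broadcast_message(text):
    sio.emit('message', text)  # step (2): push the typed message to p5.js

@sio.on('png')
def handle_png(sid, data_url):
    # Steps (4)-(5): strip the "data:image/png;base64," prefix, decode the
    # canvas bytes, save them, and hand the file to the printer via CUPS.
    png_bytes = base64.b64decode(data_url.split(',', 1)[1])
    with open('encoded.png', 'wb') as f:
        f.write(png_bytes)
    subprocess.run(['lp', 'encoded.png'])

if __name__ == '__main__':
    eventlet.wsgi.server(eventlet.listen(('', 3000)), app)
```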

However, this would have required majorly wrestling with lots of obnoxious web programming. We decided instead to use Processing, which has command-line functionality available since it's written in Java. Snigdha reimplemented her code in Processing's Java, and I wrote code to automatically call the printer from within Processing and to automatically call Processing from JavaScript, which is what our GPIO system is still written in. This worked: we were able to automatically print end-to-end from keyboard input by pushing a button! Major milestone!

But alas, there is a problem. On average, measured over four runs, it took around 20 seconds to print the image after the button was pressed. This is pretty bad. The reason it takes so long is that launching a Processing sketch from the command line takes a long time, so even though my code is optimized to print quickly, actually rendering the image takes a very long time because of the graphical environment we used.
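
To put a number on that launch cost, one could time the command-line call itself; `processing-java --sketch=<dir> --run` is the runner that the Processing app installs, and the sketch directory below is a placeholder:

```python
# Sketch only: measure how long one launch-to-print cycle takes.
import subprocess
import time

start = time.monotonic()
subprocess.run(
    ['processing-java', '--sketch=/home/pi/encoder', '--run'],  # placeholder path
    check=True,
)
print(f'end-to-end: {time.monotonic() - start:.1f}s')  # ~20 s in our tests
```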

Right now, our main priority is getting everything functional. However, I would really like to spend a few days getting this to run much faster; I have three ideas for how to do so. I really want the printer to be called almost instantly after the message is entered, since right now that is the longest waiting period.

(1) Do the awful web server thing. It would be a frustrating experience to get working, but web socket communication is a lot faster than calling Processing code from the command line.

(2) Use something like OSC to let JavaScript and Processing communicate live without having to relaunch the app every time, though I'm not sure yet how that would work. <- I think this is what I'm gonna go for (rough sketch after this list)

(3) Implement all of the hardware handling in Processing. It would require me to write my own I2C library for the LCD display, but I think it could work.
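
Here is a tiny sketch of what idea (2) could look like from the sending side. It uses the python-osc package purely for illustration (our GPIO code is in JavaScript, which has equivalent OSC clients); the address and port are placeholders, 12000 being the port oscP5 examples typically listen on:

```python
# Sketch only: send each new message to an already-running Processing sketch
# over OSC instead of relaunching processing-java every time.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 12000)   # Processing listens via oscP5
client.send_message('/encode', 'HELLO WORLD')  # placeholder address & payload
```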

Decoding System

I also set up the second Raspberry Pi with a Pi Camera, an LCD screen, and a button. I wrote Python code that takes a photo with the Pi Camera whenever someone presses the button. The next step is to install OpenCV on the Raspberry Pi and use it to automatically process the photos it takes. We're also planning to add a digital touch screen so that participants can see the camera feed before they take a photo.
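
For reference, a minimal sketch of that button-triggered capture, assuming RPi.GPIO with a pull-up button and the picamera library; the pin number, output path, and debounce delay are placeholders:

```python
# Sketch only: snap a photo with the Pi Camera each time the button is pressed.
import time

import RPi.GPIO as GPIO
from picamera import PiCamera

BUTTON_PIN = 17  # placeholder GPIO pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

camera = PiCamera()
time.sleep(2)  # give the sensor time to settle

try:
    while True:
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # block until a press
        camera.capture('/home/pi/photos/%d.jpg' % int(time.time()))
        time.sleep(0.3)  # crude debounce
finally:
    GPIO.cleanup()
```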

Snigdha

This week, I worked on modifying the encoding file so that it can be run from the Raspberry Pi, making it easier to integrate with the hardware. This involves changing the way input is handled so that we can read input from the hardware, as well as making sure the file can run continuously instead of relying on a human to run it each time. While this is still an ongoing process, I also looked at using socket.io to get the input from the hardware into the generateImage file.

Shivani