Weekly Status Reports

03/04


- Picked up the cameras and motors!
- Ordered a USB to USB-C cable for Neeti
- Installed the Python 3 version of OpenCV on the RPi directly instead of using virtual environments
- Figuring out how to process keystroke input on the RPi without using a GUI
- TO-DO: waiting for the motor

03/02


Worked for many, many hours yesterday and today to finish writing and formatting the design review report. Discussed plans for this week (before spring break) – we want to begin work on the ML model for gesture detection and begin manufacturing the data sets.

Team Status Update for 02/29


This week we ordered all the parts we finalized in the design review except for the microphones. We checked in with the front desk to see if the parts had arrived, but they have not been approved yet, so we are waiting on them! Until then, we were able to work on Pi-to-Pi communication over TCP and establish a client-server connection. We were also able to use PyGame to process keystroke input from a USB keyboard connected to the Pi, and we successfully sent messages from one Pi to another!
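The Pi-to-Pi link boils down to a plain TCP socket. Here is a minimal sketch of the idea, not our exact code: both sides run on localhost in one process so the flow is easy to see, whereas on the real Pis the client would connect to the server Pi's LAN address. The port number is a placeholder.

```python
# Minimal client-server sketch of sending a message from one Pi to another
# over TCP. Port and addresses are placeholders, not our real configuration.
import socket
import threading

PORT = 5005                # hypothetical port
received = []              # what the "server" Pi has read

def run_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()            # signal that it is safe to connect
    conn, _addr = srv.accept()
    with conn:
        received.append(conn.recv(1024).decode())
    srv.close()

ready = threading.Event()
server = threading.Thread(target=run_server, args=(ready,))
server.start()
ready.wait()

# "Client" Pi: connect and send one message.
with socket.create_connection(("127.0.0.1", PORT)) as cli:
    cli.sendall(b"hello from the other Pi")
server.join()
print(received)  # -> ['hello from the other Pi']
```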

Important takeaways we learnt about PyGame: processing events from the event queue worked, but polling get_pressed() didn't for some reason.  Also, the display (the game window) is necessary to get keystrokes, because keystrokes go to the display and not to our machines.
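For reference, a rough sketch of the event-queue approach that worked. To keep it runnable without a screen, this uses SDL's dummy video driver and injects a synthetic keypress; on the Pi the display is real and the KEYDOWN events come from the USB keyboard.

```python
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"  # headless for illustration only
import pygame

pygame.init()
# A display surface must exist: keystrokes are delivered to the display,
# not to the terminal.
screen = pygame.display.set_mode((200, 200))

# Inject a fake 'a' keypress so the loop below has something to read;
# on the Pi this event comes from the actual keyboard.
pygame.event.post(pygame.event.Event(pygame.KEYDOWN, key=pygame.K_a))

pressed = []
for event in pygame.event.get():      # event processing (this worked for us)
    if event.type == pygame.KEYDOWN:
        pressed.append(pygame.key.name(event.key))

pygame.quit()
print(pressed)  # -> ['a']
```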

Once we have the motors, we will be able to work on rotation using the motor HAT library. We also asked Professor Savvides whether a CNN is the best model for our use case. The peer reviews were actually very helpful in pointing out situations and edge cases we hadn't thought about.

We also set up our private git repo and checked that all of us could commit and push to remote from both Pis!

Shrutika’s Status Report for 02/29


This week we had design review presentations, which was actually really useful. We got a lot of constructive feedback that forced us to really think through each part of our design. The block diagrams we made for the design review especially helped us figure out 

Neeti’s Status Report for 02/29


This week I worked on making a lot of the design choices for our project, as well as the slides for the design review presentation. I spent last weekend researching the different parts available to us and the pros and cons of each of these

Gauri’s Status Report for 02/29


This week we spent most of our time on our design review presentation, finalizing parts and design decisions regarding interface communications, etc.  We met a couple of times last weekend and once during the week to get these done and help Neeti rehearse.  Yesterday we met again to chalk out our design review report and get done some of the things we could do before receiving parts.  I set up a git repo and made sure we could all commit and push from both Pis.  Then I did some research into datasets available for gesture detection and found these: https://lttm.dei.unipd.it/downloads/gesture/ and http://www.nlpr.ia.ac.cn/iva/yfzhang/datasets/egogesture.html.  Because our gestures aren't readily available, we might need to manufacture our own (this might not be enough data to train a neural net, but it also might be, since we basically only need two gestures).  In our peer feedback, some people brought up the point that a CNN might not be the most accurate model for gesture detection, so I emailed Professor Savvides, who taught us Pattern Recognition Theory last semester and would be the best person to advise us on this.  Then I worked with Shrutika on getting a TCP connection between the Pis working and processing keystrokes using PyGame.

I think we are a little behind schedule because we had aimed to get our manual mode working by spring break, but spring break is next week.  We thought that since we’d ordered our parts last Monday, we’d have them by Friday, but there seems to have been a slight delay in ordering.  We will probably be able to get our manual mode (without gesture detection) working by this Friday if our parts arrive by Wednesday.  If not, I’m hoping we get the parts by the end of the week and then we might be able to realistically get manual mode working by the end of the first week after spring break.

I have two onsites, though, and will be flying out this Wednesday.  I will try to work on our project (specifically the ML models) a bit during spring break since I'm back on March 10th.  I'm not too worried about us catching up on manual mode, but I do think we have a significant amount of experimentation to do with the microphones, and that we should get started on them in parallel in order to save time.

02/28


Checked with the receiving desk to see if our parts had arrived.  They have not yet been approved and will hopefully arrive sometime next week! Until then, we wanted to get our Pi-to-Pi communication established and process keystrokes using PyGame. Set up a client

Neeti’s Status Report for 02/22


This week I worked on research for the design review while Gauri and Shrutika figured out how to ssh into the Pis. This was because there are only two Pis and I did not have a USB to USB-C cable

Gauri’s Status Report for 02/22


This week I mainly worked with Shrutika to try to figure out how to ssh into the Pis.  Initially, our plan had been to use the RPi GUI by connecting it to our laptops through an HDMI cable and then easily set everything up (the way the official RPi tutorials explain).  However, we weren't able to find the right adapter/cable, since the RPi 4 needs a micro-HDMI.  We didn't want to waste any more time ordering parts, so I started researching how we could begin using the Pi without this cable.  I remembered that we'd ssh'ed directly into the Pi in embedded and began researching how to do that for our new Pis.
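For anyone repeating this: the standard headless recipe (which may differ slightly from the exact steps we followed, and varies by Raspberry Pi OS version) is to create an empty file named ssh plus a wpa_supplicant.conf on the SD card's boot partition before first boot. SSID, passphrase, and country code below are placeholders:

```
# /boot/wpa_supplicant.conf
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```

With that in place, the Pi joins the network on boot and can be reached with ssh pi@raspberrypi.local.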

This turned out to be way more complicated than any of us expected.  The process differed from embedded because there, we had already set up and tested UART before trying to set up wifi in the last lab, and here we didn't have that.  It was pretty easy to connect to raspberrypi.local with the Pi connected to my laptop through an ethernet cable, but the steps differed when we wanted to connect it to a wifi network.  Once Shrutika figured out that it was possible to connect to her home wifi network, we assumed it would be easy to repeat the process on CMU networks.  That didn't work.  I found out that the ethernet MAC address and the wifi MAC address are different on every device (and we'd been registering with the ethernet MAC address instead of the wifi one).  Once we changed this, registration took a day and then it worked.
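A quick way to check both addresses on a Linux box like the Pi is to read them out of sysfs. The interface names below (eth0/wlan0) are the usual Raspberry Pi ones and may differ on other machines:

```python
from pathlib import Path

def get_mac(iface):
    """Return the MAC address of a network interface, or None if it doesn't exist."""
    path = Path("/sys/class/net") / iface / "address"
    try:
        return path.read_text().strip()
    except OSError:
        return None

# On the Pi, compare these two; the campus registration needs the wifi one.
print("eth0 :", get_mac("eth0"))
print("wlan0:", get_mac("wlan0"))
```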

Apart from this, I spent some time researching how we'd use the camera and OpenCV to do gesture detection (building an ML model, etc.) and also briefly looked into the available GPIO libraries.  Since we took basically an entire week to get the Pis set up, we definitely underestimated the effort some things might take.  We are still not really behind, since our main goal is to get manual mode working by spring break, and I think we can still accomplish at least most of that.

Over the next week I hope to get our motor control working with Neeti while Shrutika simultaneously works on Pi-Pi communication.  We will also be spending some time on the design review report and presentation prep.

Shrutika’s Status Report for 02/22


This week I spent a lot of time getting ssh onto the Raspberry Pis working. There's a lot of guidance online, but it differs by network, especially since we wanted to use the school network, and a lot of it assumes you have all