After making the monumental decision last week to shift away from the KRIA to the Raspberry Pi, on Monday we confirmed our ideas and plan with Tamal and Nathan, and could now proceed with figuring out the details of what to do next. After getting approval, I ordered the Raspberry Pi 5 from the ECE 500 inventory and started researching its capabilities and whether it would suit exactly what we needed. We found that it could connect to the OAK-D camera with ease, as Nathan had used it in his project, and we also found a guide by Luxonis on how to connect a Raspberry Pi to the OAK-D, so I spent some time reading through it. The Luxonis guide linked to a few Raspberry Pi OS images with DepthAI and other dependencies pre-installed, so I spent some time trying to flash those onto the spare SD cards we have. My laptop has a slot for microSD cards, but previously I had been misusing it: I thought I had to hold the card in with my finger for it to be read. That was not the case; I simply had to press the card further in with my nail until it locked into place. Knowing that earlier would have saved my finger a lot of pain, but at least I figured it out eventually. The OS installer Luxonis provided was not the standard Raspberry Pi Imager, and there were some issues with it, so I ended up installing the official Raspberry Pi Imager as well and wound up with multiple copies of Raspberry Pi OS across a few SD cards.
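For reference, flashing one of those images without the Imager GUI can be done from the command line. This is a minimal sketch, assuming a downloaded image file and an SD card device; the filename and `/dev/sdX` are placeholders you would substitute with your actual image and device (check with `lsblk` first, since writing to the wrong device is destructive):

```shell
# Placeholder names -- substitute your downloaded image and actual SD card device
IMG=raspios-depthai.img.xz
DEV=/dev/sdX            # verify with `lsblk` BEFORE writing!

# Check the download against the published checksum to catch a corrupt image
# before it ever reaches the card
sha256sum "$IMG"

# Decompress and write the image to the card, syncing before exit
xzcat "$IMG" | sudo dd of="$DEV" bs=4M conv=fsync status=progress
```

Verifying the checksum up front is worth the extra step here, since a corrupt flash shows up later as exactly the kind of silent black-screen failure described below.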
On Wednesday, the Pi arrived, and luckily the kit it came in had all the cables I needed to connect it to a monitor. I tried hooking it up to see what would happen, but plugging it in and sliding in the microSD card only gave me a black screen. This was concerning, as I wasn't sure if the OS I grabbed from the Luxonis guide was good, and I also didn't know if the issue was with the monitor or the Pi itself. I ended up trying every combination of my installed OS files and a few different monitors in the 1300 wing, none of which worked. I then realized the Pi was blinking an error code on its status LED, and through that was able to find out it was actually an SPI EEPROM error. This meant the Pi's bootloader firmware was corrupted, which was something I had no control over during setup. I ended up solving the issue by following a guide on the Raspberry Pi forum, and the Pi was then able to display on a monitor. Here's a link to a picture of it successfully displaying: (Too big for upload on here)
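For anyone hitting the same LED error: the usual recovery path is to flash the "Bootloader recovery" image (available under Misc Utility Images in Raspberry Pi Imager) to a spare SD card and boot the Pi from it once. This may or may not match the exact forum fix we followed, but after the Pi boots normally, the stock `rpi-eeprom-update` tool can confirm the bootloader EEPROM is healthy and current:

```shell
# Show the currently installed bootloader version vs. the latest available
sudo rpi-eeprom-update

# Apply a pending bootloader update, if one is staged
sudo rpi-eeprom-update -a
sudo reboot
```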
I was now able to test the different OS files, and found that the Luxonis-provided ones were either corrupted while flashing or not compatible with the RPi 5. Thus I had to stick with the default Raspberry Pi OS, and started looking into how to install the required dependencies and libraries.
At the same time, I started to set up the SSH access from our computers that we would need to get into the RPi and run our files. This required my computer and the RPi to be on the same network, which I configured while the RPi was still connected to a monitor. I had to request that a new device be added to the CMU-DEVICE wifi; when my computer is on CMU-SECURE and the Pi is on CMU-DEVICE, we are technically on the same network, so SSH is possible.
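The setup above boils down to a few commands. This is a sketch, assuming the default `pi` user and `raspberrypi` hostname from Raspberry Pi Imager; substitute whatever username/hostname you actually configured:

```shell
# On the Pi (one-time, while the monitor is still attached):
# enable the SSH server non-interactively (0 = enable)
sudo raspi-config nonint do_ssh 0

# Print the Pi's IP address(es) on the shared network
hostname -I

# From the laptop on the same network, connect by hostname or IP
ssh pi@raspberrypi.local
```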
Once we managed to connect via SSH, we could unplug the RPi from the monitor and run it with just power connected, SSHing in to work on it, which makes operating it much easier. While SSH'ed in, I was able to set up the DepthAI library and the other dependencies our code needs to run. This was done with the help of Jimmy, who showed me how to set up the Conda environment for it.
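The environment setup looks roughly like this. The environment name `oakd` and the Python version are assumptions for illustration; the DepthAI Python bindings do install straight from PyPI:

```shell
# Create an isolated environment for the camera code
# (env name and Python version are placeholders -- use whatever your team standardizes on)
conda create -y -n oakd python=3.10
conda activate oakd

# Install the DepthAI Python bindings from PyPI
pip install depthai

# Sanity check: prints the installed library version
python -c "import depthai as dai; print(dai.__version__)"
```

Keeping this in a Conda environment means the rest of the Pi's system Python stays untouched, which makes it easier to wipe and redo if a dependency goes wrong.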
We made pretty good progress with setting up the RPi, so next steps would include figuring out how the current camera code works, because we now need to get camera data into the robot. If we can get rid of the middleman Arduino that Josiah is currently using to test the robot, we should be able to save on latency.