Weekly Report #6 – 3/23


I successfully quantized and compiled the trained Yolov3-tiny network, so it now runs on the Ultra96 board as a demo that takes in a camera feed and shows bounding boxes. Right now this DNNDK version of the network doesn’t match the behavior of the original (so the bounding boxes are wrong), but I believe I know the reason for the discrepancy, and we should soon have a working version of this demo.

Once that is done, I will turn back to implementing the low-power modes.


Because we received all of the materials for the project, I started and finished assembling the pan-tilt servo mount. Once a camera is attached, this mount lets us move it both horizontally and vertically as needed to track a person. There were some complications with the screws on the servo motors, but we eventually got them off, even if our methods were less than conventional, to say the least. Once Jerry finishes loading the neural network, we will start working on the control algorithms for moving the servos.

To speed this process up, I will start on a hello-world program that proves we can interface with the servo motor. Overall I think we have a good pace, but we still need to pick it up if we want to finish on time.
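As a starting point for that hello world, here is a minimal sketch of the math involved. Hobby servos are typically driven by a 50 Hz PWM signal whose pulse width (roughly 1–2 ms) encodes the angle; the helper below maps an angle to a duty cycle. The function name and default pulse range are our own placeholders, and the exact range should come from the servo's datasheet.

```python
def servo_duty_cycle(angle_deg, min_pulse_ms=1.0, max_pulse_ms=2.0, period_ms=20.0):
    """Map a servo angle in [0, 180] to a PWM duty cycle in [0, 1].

    Assumes a standard 50 Hz (20 ms period) hobby servo whose position is
    encoded by a 1-2 ms pulse; check the servo's datasheet for exact values.
    """
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle out of range")
    pulse_ms = min_pulse_ms + (angle_deg / 180.0) * (max_pulse_ms - min_pulse_ms)
    return pulse_ms / period_ms

# 0 deg -> 1 ms pulse -> 5% duty; 180 deg -> 2 ms pulse -> 10% duty
print(servo_duty_cycle(0), servo_duty_cycle(90), servo_duty_cycle(180))
```

Feeding these duty cycles into whatever PWM API we end up with (Arduino or CircuitPython) should be enough for a sweep demo.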


While the latter part of my week was occupied with a Model UN conference I had to dedicate significant time to hosting, earlier in the week I got access to the servo hardware and a chance to try out some of the basic servo functionality. I started with some basic code from the Arduino servo library, as that is one of the most ubiquitous control schemes and serves as a good proof of concept before trying Circuit Python.

Unfortunately, I discovered what seems to be a problem or incompatibility either with our servo or with the Arduino ecosystem: the servo refused to do anything other than turn counter-clockwise until it hit the limit of its range of motion. Controlling the analog pin output manually did nothing to address this. To illustrate how strange the behavior is, we found that even while not nominally transmitting a PWM signal to the servo (so it should have been at rest), it consistently moved by ~10 degrees whenever a stove-top igniter ~4m away sparked. This is troubling to say the least, and we are investigating the cause as a group.

I am still somewhat behind schedule, though the coming week should be lighter than past ones. I will focus on servo/motor control and finish the soldering needed to get the Feather boards functional.


In addition to individual contributions specified above, Karthik and Jerry completely assembled the servo mount, which is now ready for a camera attachment to enable full pan and tilt functionality. Additionally, towards the end of the week we worked together to try to diagnose the servo problems, with a picture of our setup shown below. It should be noted that using the battery as our power source serves as a good proof of concept for how the servos will be powered in the final system, control dynamics aside.

Overall, we are pretty much on track, but could fall slightly behind depending on how long it takes us to work through the issues we ran into this week.

Weekly Report #4 – 3/9


Firstly, I set up the board to be accessible without a monitor, which increases our flexibility when working with it.

I have also been researching the low-power suspend mode and other options for reducing the Ultra96’s power consumption. We discovered that the Linux built-in suspend probably doesn’t power-gate the FPGA fabric, as power consumption only falls by half when suspended. There is a chance that the low-power suspend requires deeper tweaking in Vivado (in particular, enabling the option there), though we plan to try out the remaining steps first in case the DNNDK image already has it enabled.
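For reference, the built-in suspend we have been testing is driven through the standard Linux sysfs power interface (run as root on the board); this is a command fragment rather than anything project-specific:

```shell
# List the sleep states this kernel build supports (e.g. "freeze mem")
cat /sys/power/state

# Enter suspend-to-RAM; the board resumes on a configured wakeup source
echo mem > /sys/power/state
```

Whether `mem` actually power-gates the FPGA fabric is exactly the open question above, which is why we measure current draw before and after.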

Since we as a team were able to run various demo CV algorithms on the board, the logical next step is to try to load our own network onto it.


Throughout this week, I helped Jerry and Nathan try to set up Deephi on the Ultra96. This took considerable effort because we had to debug several issues with transferring the correct files to the board. Specifically, we first tried to use USB, but that didn’t work because the drive was not mounted. Because of this, we eventually used the Ultra96’s wifi connection to make an SFTP connection and transfer the Deephi files that way. Eventually, we got it to work, and here is a picture of us using one of the face detection demos.

This is a picture of Jerry and me running the Deephi face detection demo on the Ultra96.

On top of that, we recently acquired the servos that we are going to use for the project, so I have started looking into ways to interface with them. Overall, I think I might be slightly behind schedule, but a simple demo with a valid control algorithm for the servos should get me back on track.


During the earlier part of this week, I spent the majority of my time finalizing the system details for the Design Report. We decided to use an Adafruit Feather board to control the motors and servos, with commands coming from the Ultra96. I investigated the differences between Circuit Python and Arduino C for the control logic; my initial inclination is to use Arduino C for the sake of portability, and I will try to get a demo working by the end of tomorrow.

I also ordered the last of the components for the mechanical control system, including the Feather board and control board accessories. On top of that, I ordered the lens; however, the vendor is apparently shady about the shipping info, so it’ll take about a week longer than intended to arrive. In the meantime, I’ll focus on the servo control, which points the camera at the subject.

P.S. I have been investigating machining the enclosure for the camera at the CMU makerspace instead of buying it, which would save ~$100 and about two weeks of shipping time. None of us has machining experience, but we can seek advice.

Team Status Report:

Most of our time this week was spent debugging different issues while trying to run the Deephi demos on the Ultra96. The first issues involved the wifi. Specifically, the wifi software our Arch Linux OS used had a bug that stopped it from working with networks secured by WPA2. Our first attempt at a workaround involved installing a new wifi GUI that would allow us to connect to WPA2 networks. However, after doing that, we were unable to connect to any wifi networks at all. Eventually, after several failed fixes, we got it working by hard-resetting, unplugging all of the peripheral devices, and reflashing the Arch Linux OS on the SD card.

We also looked into ways to profile the power usage of the board. We can get a crude figure for system power just by looking at the current draw from the power source, though we’d have to find a way to adapt this approach to work with the dedicated DC adapter. We tried installing powertop for a more detailed breakdown, but the utility wasn’t able to show any power usage statistics. Next we plan to try installing some tools from Xilinx to get a better picture of the power usage.
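The crude figure mentioned above is just Ohm’s-law arithmetic; a minimal sketch, assuming we can read the supply voltage and the measured current draw (the numbers below are placeholders, not actual measurements):

```python
def board_power_watts(voltage_v, current_a):
    """Crude system power estimate: supply voltage times measured current draw."""
    return voltage_v * current_a

# e.g. a 12 V supply showing 0.5 A of draw -> 6.0 W
print(board_power_watts(12.0, 0.5))
```

Comparing this figure awake versus suspended is how we noticed that suspend only cuts consumption by about half.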



Weekly Report #3 – 3/2


I have fine-tuned the Yolov3-tiny network to specialize solely in person detection, using data from MS-COCO (the dataset used to train the pre-trained network) and the Caltech Pedestrian Dataset; this should increase the network’s accuracy for our use case. I verified that it performs better on the Caltech dataset than the pretrained network does.

Since we’ve formally decided to support battery operation, I have also been researching possible batteries for the system. At first I was considering USB power banks, and found that though this is possible, a fair amount of extra equipment would be needed:

  • USB power bank
  • A way to convert the 5V USB output to a 12V 4.8mm x 2.1mm jack output
  • A way to prevent the power bank from turning itself off under low current draw (!!)
  • Courage to believe that the power bank won’t draw too much power while it’s being forced on

I breadboarded a circuit that kept one of my own power banks on, but later by chance I found a different battery that outputs 12V DC directly and is well-suited for powering low-power devices for a long time, so we decided to use that instead.
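For the record, the keep-alive circuit was just a resistive dummy load sized so the power bank sees enough current draw to stay on. A sketch of the sizing arithmetic (the 50 mA cutoff below is a guess for illustration; real cutoff thresholds vary by power bank):

```python
def dummy_load_resistance(bus_voltage_v, min_current_a):
    """Largest resistor (ohms) that still draws at least min_current_a."""
    return bus_voltage_v / min_current_a

def wasted_power_w(bus_voltage_v, resistance_ohm):
    """Power continuously burned in the dummy load."""
    return bus_voltage_v ** 2 / resistance_ohm

# e.g. keeping a 5 V USB power bank above an assumed 50 mA cutoff
r = dummy_load_resistance(5.0, 0.05)   # 100 ohms
print(r, wasted_power_w(5.0, r))       # burns 0.25 W just to stay on
```

That standing 0.25 W loss is part of why the direct 12V battery ended up being the better option.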

Next, I will try to get the neural network running on the board.


I have been working on interfacing with the GPIO pins on the Ultra96 board. This will become more useful as the parts we ordered start to come in; since we don’t have all the parts yet, I am interfacing with LEDs connected through the GPIO pins as a proof of concept. To do this, I have been installing different board configuration files and going through some Ultra96 startup guides to set up both Vivado and the Xilinx SDK on my laptop to use the GPIO pins. This has involved getting pin assignments from a third-party site; after that, I started configuring both Vivado and the Xilinx SDK with the proper pin assignments.

On top of this, I have been working with Nathan and Jerry to get an Arch Linux image with the Deephi libraries we need onto our board. Overall, I think we are pretty close to on schedule, because now that the OS is on the board and the Logitech webcam has arrived, we can start testing the GPIO interfacing alongside the webcam.


I spent my week focusing on getting the Ultra96 functional with the Deephi demo IP. As the end of that process involved pulling in Karthik and Jerry as well, I will detail it in the team section. Additionally, I worked on nailing down the specifics of our control and low-power motion subsystem. The general design is as follows: we will use the Adafruit Feather board platform (specifically their ARM M0+ based board, specialized for CircuitPython) to monitor the motion sensor with low power consumption. Upon detecting motion, it will send a signal over UART to the Ultra96 to wake it from deep sleep. This seems like the most economical low-power way to monitor the device’s surroundings. The Feather board will also control the servos and stepper motors of the control system, as Adafruit and the Arduino ecosystem have very robust software tools for precisely that purpose, and there is no effective power difference compared to using the Ultra96 itself, despite the latter’s greater complexity.
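A toy model of the Feather-side wake logic we have in mind (all names, the wake byte, and the debounce count are our own placeholders, not a final design): poll the motion sensor each cycle and send a single wake byte over UART once motion has been seen for a few consecutive polls, so a one-off sensor glitch doesn’t wake the Ultra96.

```python
class WakeMonitor:
    """Toy model of the planned motion-to-UART wake path (not board code).

    Requires `threshold` consecutive motion readings before emitting one
    wake byte, then stays quiet until motion drops out and returns.
    """

    WAKE_BYTE = b"W"  # placeholder wake token

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.streak = 0
        self.sent = False

    def poll(self, motion_detected):
        """Feed one sensor reading; return the bytes to transmit this cycle."""
        if not motion_detected:
            self.streak = 0
            self.sent = False
            return b""
        self.streak += 1
        if self.streak >= self.threshold and not self.sent:
            self.sent = True
            return self.WAKE_BYTE
        return b""
```

On real hardware the returned bytes would go out over the Feather’s UART; here the logic is kept separate from any I/O so it can be tested on its own.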

Also, I forgot to mention this in the last post, but I significantly improved the website before last week’s blog post, as some visitors may have noticed, and even included one-click links to our presentations (as opposed to embedding them in a post).


In addition to the contributions above, our team had an epic this Saturday to rival the Iliad. Nathan started by trying to get some Deephi demos working on the board. Nominally, this involved downloading a Xilinx board image and transferring over some test programs, a simple feat in theory. However, upon trying to boot that board image, nothing happened. We tried reflashing the original Linux distribution, which works just fine, and also validated the flashing utility, but the Deephi boot image just wouldn’t work.

Eventually, we were able to track down a UART to USB tool, which we could use to debug the boot process instead of flying blind. However, what we saw was the following message, over and over again:

Xilinx Zynq MP First Stage Boot Loader
Release 2018.2 Dec 10 2018 – 14:03:11
Reset Mode : System Reset
Platform: Silicon (4.0), Running on A53-0 (64-bit) Processor, Device Name: XCZUG
SD0 Boot Mode
Non authenticated Bitstream download to start now
PL Configuration done successfully

We tried various solutions, including booting from a USB (got nothing over debug) and even modifying the binary file, but nothing worked.

Eventually, however, we stumbled upon the true source of the problem: the lab power supply can only provide 0.5A, which was just enough to boot into Linux but insufficient to power the Deephi image’s boot process. By increasing the voltage, we were able to provide enough power to get to the desktop, but not enough to run the demos. We will receive a new power supply this week to fix this problem.

Edit 3/2: Corrected typo “logic” to “Linux”.

Weekly Report #2 – 2/23


This past week was largely spent completing the Vivado tutorials and building an understanding of the software and hardware components left in the system. My conclusions (pending further group discussion) are as follows.

The deep learning inference should be primarily driven by Xilinx’s Deephi DNNDK edge inference technology. Deephi was originally a Beijing-based startup that Xilinx acquired just last year, which makes the technology both largely unexplored (allowing room for differentiation even with off-the-shelf IP) and yet well documented through Xilinx’s involvement. The video from the camera can be imported directly through gstreamer, for which Xilinx already has some examples we can use as a reference, albeit not completely plug-and-play. From there, we can use a system of servos in a pan-tilt arrangement to reorient the camera. We will need to custom-write the algorithms for moving the camera, focusing/zooming as necessary, and computing a movement position from the data returned by the neural net.
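As a sketch of that last custom piece, computing a movement position mostly reduces to measuring how far the detection’s bounding-box center sits from the frame center. A minimal version with our own hypothetical naming (unpacking YOLO’s actual output format is a separate step):

```python
def centering_error(bbox, frame_w, frame_h):
    """Normalized (x, y) offset of a bounding box's center from the frame center.

    bbox is (x_min, y_min, x_max, y_max) in pixels. Each component of the
    result lies in [-1, 1]; (0, 0) means the subject is already centered.
    """
    x_min, y_min, x_max, y_max = bbox
    cx = (x_min + x_max) / 2.0
    cy = (y_min + y_max) / 2.0
    return (
        (cx - frame_w / 2.0) / (frame_w / 2.0),
        (cy - frame_h / 2.0) / (frame_h / 2.0),
    )

# A box centered at (480, 270) in a 640x360 frame is right of and below center
print(centering_error((400, 180, 560, 360), 640, 360))
```

The pan and tilt servos would then each be driven to push their component of this error toward zero.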

In terms of scheduling, I had a bit of a gap here in my work on the Gantt chart, and will use this as an opportunity to get ahead on my understanding of the software, and potentially investigate expanding the functionality of the project, e.g. through cloud integration.


I have been familiarizing myself with the Linux interface of the Ultra96 board, and trying to get internet access on the device to make development on it easier (e.g. so we can download TensorFlow). I registered the device on CMU’s network, which let me successfully access the front page of CMU’s website (everything else redirected to it). I think it will grant true internet access after a bit of time, but I haven’t tested this. If that doesn’t work, it’s possible to just copy files to the SD card.

The next step is to write a program on the Ultra96 that controls the GPIO pins, and to verify the signals with an oscilloscope.


Because we got the board this week, I helped Jerry set it up and connect it to the internet. Also, before we got the board on Thursday, I talked with Nathan and Jerry about the types of cameras we would like to use for our tracking system, and we decided on the Logitech Webcam C920. Because this webcam is not supported by gphoto or gphoto2, I have been looking into other ways of interfacing with it. Of the options I have looked at so far, we could use either OpenCV or Video4Linux2; I am leaning towards OpenCV because it is an established library.

Because we switched from gphoto to new capture software, we are a bit behind, but I believe that next week, once we install OpenCV and receive the camera, I should be able to interface with it and send video data to the Yolo-v3 neural network on the backend. If that gets done, I believe we will be back on schedule.


Our primary visible accomplishment this week was getting our FPGA board up and running. We’ve also discussed the control algorithm for the mechatronics. If we use a servo, a simple feedback controller would suffice to center the image, given knowledge of whether the center of the bounding box should move up/down or left/right. A motor will be slightly more involved, because we’d need a PID controller, but we have experience implementing such algorithms.
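The simple servo-based feedback controller above can be sketched in a few lines; the gain, angle limits, and sign convention are placeholders to be tuned on the real mount:

```python
def step_servo_angle(current_deg, error, gain=10.0, lo=0.0, hi=180.0):
    """One iteration of proportional centering feedback.

    error is the normalized offset of the bounding-box center from the frame
    center along one axis (-1..1). The servo is nudged against the error and
    clamped to its mechanical range; whether to subtract or add gain * error
    depends on how the servo is mounted.
    """
    return max(lo, min(hi, current_deg - gain * error))
```

Running this once per frame for each axis steadily walks the bounding-box center toward the middle of the image; the PID version for the motor would add integral and derivative terms on top of the same error signal.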