Author: nmlevin
Weekly Report #12 – 5/4
Jerry:
I took another shot at training the neural network. This time I didn’t use the Google Open Images dataset, due to the poor quality of its bounding boxes. I reverted to the COCO + Caltech Pedestrian dataset combo, and also added negative training examples from the NYU Depth v2 dataset, which has many images of relatively cluttered rooms with few people in view. I manually removed all the images that had people in them and trained the network longer than before. This time we had success: the system has far fewer false positives and is better at drawing bounding boxes for people who are partially out of view.
We also tweaked the anchoring algorithm to give a grace period of around a second after a bounding box is lost, changed the smoothing algorithm to predict the motion of bounding boxes (up to a maximum number of frames into the future), and tuned the parameters for a smoother tracking experience. Now the system can track people who are moving relatively fast, and is quite good at zooming in while keeping a person’s face in view.
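In rough pseudocode, the grace-period and motion-prediction logic looks something like the sketch below; the constants, class name, and box format are illustrative placeholders, not our exact code.

```python
# Illustrative sketch of the tracking logic described above; constants,
# class name, and box format are placeholders rather than our exact code.
GRACE_FRAMES = 30   # ~1 second at 30 fps before a lost target is dropped
MAX_PREDICT = 10    # cap on how many frames of motion we extrapolate

class TrackedBox:
    def __init__(self, box):
        self.box = box                  # (x, y, w, h)
        self.velocity = (0.0, 0.0, 0.0, 0.0)
        self.frames_lost = 0

    def update(self, detection, alpha=0.6):
        """Returns True while the target should still be tracked."""
        if detection is not None:
            # Smooth the box and refresh the per-frame velocity estimate.
            new_vel = tuple(d - b for d, b in zip(detection, self.box))
            self.velocity = tuple(alpha * v + (1 - alpha) * nv
                                  for v, nv in zip(self.velocity, new_vel))
            self.box = tuple(alpha * b + (1 - alpha) * d
                             for b, d in zip(self.box, detection))
            self.frames_lost = 0
            return True
        # Detection lost: coast along the predicted motion for a bounded
        # number of frames, then hold the last box until the grace period ends.
        self.frames_lost += 1
        if self.frames_lost <= MAX_PREDICT:
            self.box = tuple(b + v for b, v in zip(self.box, self.velocity))
        return self.frames_lost <= GRACE_FRAMES
```

Capping the prediction window keeps the box from drifting off-screen when someone actually leaves the frame, while the grace period is what lets the camera hold its framing through a missed detection or two.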
We also made the person-tracking code run automatically on boot. The remaining steps are to build the enclosure, collect some demo footage, and prepare for the final demo.
Nathan:
This week I’ve been spending most of my time preparing the enclosure for the final presentation. I’m not quite done sculpting the box in SolidWorks, but I plan to have it complete by tomorrow (Sunday) morning, and I’ll post an update then. For now, here are some of the specifics. It will be 9″x7″x4″ (dimensions subject to review), made out of plywood procured from the Makerspace and laser cut to fit. It’s taking slightly longer than I expected because I’m making the joints interlock, both for the better aesthetic and for superior mechanical properties; it also makes it easier to test-fit before we glue it all together. There will be holes in the back for three things: a power button, a 12V DC jack for power, and a combined HDMI and USB port. The USB portion will be vital for attaching a storage device, while the HDMI will be for optional display functionality. I’ll edit in some good pictures once it’s done, but I plan to have the cutting finished by 10am or so and the final assembly done by noon. I’ll leave time to do it all over if something goes wrong, just in case, but I’m confident no such situation will come to pass. This will be my last significant contribution to the project before the final report.
Karthik Natarajan:
This week we didn’t have too much left to do, as most of the project was already finished ahead of the final presentation. After Jerry finished training the updated neural network, I helped him modify the code and tweak some of the parameters to make our motion smoother with the new network. On top of that, I have been helping Jerry work with the motion sensor to incorporate the shutdown/turn-on feature. Outside of that, I helped Nathan plan out the measurements for the enclosure so we can use the laser cutter tomorrow morning. At this point, I think most of the risk factors are gone and we are pretty much on schedule, so we should be ready for the public demo. 🙂
Team:
It’s the home stretch, but we’re in good shape for the final demo on Monday. The only unfinished items are the final tuning of the tracking, which is more of an open-ended polishing step than a well-defined to-do item, and the integration with the enclosure, which should be completed by midday tomorrow. There are no remaining risks to the project, nor should there be any at this late stage; barring some sort of catastrophe, we should be all set for a successful Monday demo. There are no other changes to report.
Weekly Report #11 – 4/27
Jerry:
As mentioned in the team report, we worked hard to port all our code to the Arduino library. In that version of the code, the focus and zoom controls moved independently of one another, so focus would be lost while the zoom was being adjusted. In addition, the zoom controls were quite slow. I realized that the slow zoom was caused by a bottleneck on the I2C bus, and that the bus’s clock speed was configurable to be as much as 10 times higher than the default. Over the week, I modified the Arduino code to adjust zoom and focus simultaneously so that the camera never loses focus while zooming in, and enabled the faster I2C clock option so that zooming takes much less time.
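The actual firmware is Arduino C, but the interleaving idea is simple enough to sketch in Python; the motor interface and the `focus_for_zoom` calibration function here are hypothetical stand-ins, not our real API.

```python
# Hypothetical sketch (the real code is Arduino C): take one zoom step, then
# immediately nudge focus toward its calibrated target for the new zoom
# position, so the image never sits badly out of focus mid-zoom.
def zoom_with_focus(zoom_motor, focus_motor, zoom_steps, focus_for_zoom):
    direction = 1 if zoom_steps >= 0 else -1
    for _ in range(abs(zoom_steps)):
        zoom_motor.step(direction)                    # placeholder motor API
        target = focus_for_zoom(zoom_motor.position)  # calibration curve
        if focus_motor.position < target:
            focus_motor.step(+1)
        elif focus_motor.position > target:
            focus_motor.step(-1)
```

In practice focus may need more than one correction per zoom step, so the real code allows a small inner loop, but the interleaving principle is the same.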
With that done, I tried to improve the Yolo-v3 neural network by getting a larger dataset with clearer images and performing data augmentation to improve the network’s ability to handle bounding boxes that extend outside the image. However, when I trained the network with the new dataset, it performed worse. It turned out that the dataset I used (Google’s Open Images dataset) had many missing and inaccurate bounding boxes, making it much lower quality than COCO and the Caltech Pedestrian dataset despite its larger size and higher-quality images. I think I will still try to re-train with just the data augmentation.
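For reference, the kind of augmentation I have in mind is roughly the following; this is a hand-wavy sketch, and the shift range, keep-threshold, and box format are placeholder choices.

```python
import random

# Sketch: shift the image so that a person's box can extend past the frame
# edge, then clip the label to the visible portion. Boxes are (x1, y1, x2, y2)
# in pixels; the shift range and keep-threshold are placeholder choices.
def shift_and_clip(image_w, image_h, boxes, max_shift=0.3):
    dx = int(random.uniform(-max_shift, max_shift) * image_w)
    dy = int(random.uniform(-max_shift, max_shift) * image_h)
    clipped = []
    for (x1, y1, x2, y2) in boxes:
        x1, x2, y1, y2 = x1 + dx, x2 + dx, y1 + dy, y2 + dy
        cx1, cy1 = max(0, x1), max(0, y1)
        cx2, cy2 = min(image_w, x2), min(image_h, y2)
        # Keep the box only if at least a quarter of it is still visible.
        if cx2 > cx1 and cy2 > cy1 and \
           (cx2 - cx1) * (cy2 - cy1) >= 0.25 * (x2 - x1) * (y2 - y1):
            clipped.append((cx1, cy1, cx2, cy2))
    return dx, dy, clipped   # the image itself gets translated by (dx, dy)
```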
I have also implemented a mechanism in the Ultra96 code to switch between multiple targets, locking onto one person for a certain period of time before switching to another. This takes care of one of the last components on our checklist of features to implement.
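A stripped-down version of the switching logic might look like this; the dwell time and track-id bookkeeping are illustrative, not our exact implementation.

```python
import time

# Rough sketch of the target-switching behavior: lock onto one person for a
# fixed dwell time, then rotate to the next. The constants are illustrative.
DWELL_SECONDS = 5.0

class TargetSwitcher:
    def __init__(self):
        self.current_id = None
        self.locked_at = 0.0

    def choose(self, detections):
        """detections: dict of track_id -> bounding box for the current frame."""
        now = time.monotonic()
        expired = (now - self.locked_at) > DWELL_SECONDS
        if self.current_id not in detections or expired:
            if detections:
                ids = sorted(detections)
                # Rotate to the id after the current one (wrapping around).
                if self.current_id in ids:
                    idx = (ids.index(self.current_id) + 1) % len(ids)
                else:
                    idx = 0
                self.current_id = ids[idx]
                self.locked_at = now
            else:
                self.current_id = None
        return self.current_id
```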
Karthik Natarajan:
This week I have been working on saving the video stream to the SD card on the board so we can look at the footage when necessary. I initially tried to use GStreamer so we could take the compressed stream directly from the camera. However, this approach resulted in a lot of compilation errors due to “omxh264dec”, “h264parse”, or some other arbitrary package not being installed, and even installing all of the available GStreamer plugin packages didn’t fix it :’(. So, after toiling with this for a fairly long time, I moved over to OpenCV, which, after some messing with the Makefile, started saving uncompressed video on the board. But if we try to compress the frames with OpenCV, our code starts to run slowly and drops frames. Therefore, as of right now, I am looking into both GStreamer and OpenCV to figure out which option can better solve this problem. Also, earlier in the week I helped port the CircuitPython code to Arduino C, as mentioned in the team report.
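For context, the OpenCV path is roughly the following; the codec, output path, resolution, and frame rate here are placeholders, not our final settings.

```python
import cv2

# Approximate shape of the OpenCV recording path described above; the codec,
# path, resolution, and frame rate are placeholders, not our final values.
def record(frames, path="/media/sd/capture.avi", fps=30, size=(1920, 1080)):
    # 'MJPG' is a lightly compressed codec OpenCV can usually write without
    # extra GStreamer plugins; fully uncompressed files get huge very fast.
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(path, fourcc, fps, size)
    try:
        for frame in frames:          # frames: iterable of BGR numpy arrays
            writer.write(frame)
    finally:
        writer.release()
```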
Nathan:
After the Monday demo (see Team section for details), I spent my time gathering measurement data and preparing the presentation for this upcoming Monday or Wednesday. The main topic of the presentation will be the evaluation of our system, including power and identification accuracy measurements, so I’ve been focusing on measuring the power draw of the system and making organized comparisons to our original requirements. To that end, I’ve also been testing different DPU configurations so that we can choose the optimal one for our final product, based primarily on framerate data from the execution of our neural net, though I’m also experimenting with some Deephi profiling tools. I’m also working on updating past diagrams and such for the presentation, including making note of all the additional customizations we’ve made since the design review. Once this presentation is done, I will move on to working on the poster and wrapping up the construction of the enclosure, for which I picked up the internal components earlier this week.
Team:
It’s crunch time for the project, but while there are still a number of tasks to accomplish before the end, we feel generally positive going into the final stretch. All of the major components are functional, and the remaining tasks (integration of the motion detection, tuning of power, tuning of tracking, and enclosure) are either simple or just polishing existing aspects of the system.
We had our last in-lab demo earlier in the week, and for that we finalized the integration of the pan, tilt, and zoom systems. We unfortunately had to convert our code from CircuitPython to Arduino C to fit within the Feather M0’s RAM limit (32KB), but that only temporarily slowed us down. We had a few bugs in the zoom behavior, but those have since been fixed, and we’ve sped up the motion system dramatically by tuning the I2C bus used for communication between the Featherboard and FeatherWings.
The outstanding risks are the integration/completion of the remaining components (listed above), but there is no longer anything that fundamentally threatens the integrity of the project. By the end of this coming week, the remaining tasks should be complete, and we should be essentially ready for the final demo. Accordingly, the design of the system and Gantt chart remain unchanged from last week.
Weekly Report #10 – 4/20
Jerry:
This time, I got the DPU properly instantiated and working on our custom boot image. It turns out that debugging these pieces of firmware can be pretty tricky. The first few times I re-compiled the Petalinux image, I excluded the DNNDK files, DPU driver, and other libraries like OpenCV, and added them in after compilation. After shuffling things around to find where to put the files, I managed to make everything except the DPU driver work. The DPU driver, on the other hand, seemed eerily silent. It had a debug print in the first line of its initialization, but I never saw that line. I thought it was because debug prints weren’t enabled in the kernel, but I tried several kernel configurations and still saw no prints. I then thought it was because the driver was compiled in release mode, removing all printk’s, so I recompiled the DPU driver (this time including it and all the other libraries directly in the Petalinux boot image at compilation), but it still didn’t make a sound. Finally, after some Google searching for what others found when their (totally unrelated) drivers didn’t print, I found out that I needed to add an entry to the Linux device tree. It was great seeing all my redundant print statements show up once I recompiled the DPU driver with the device tree update.
There was also a little moment of dependency hell trying to make our Yolo-v3 run. At first I got an error message that the architecture of our DPU wasn’t supported by DNNDK (but why would you include this architecture as an option in the IP??). One painful Vivado recompilation later, I found out that the convolutional neural network IP instantiated an old version of the DPU. Luckily, DNNDK offered a (non-default) compiler for the older version of the DPU. Guess what: that DPU supported the architecture I started with earlier… Another painful Vivado recompilation, and I finally got the DPU to correctly predict a bounding box on a test image.
After that we focused on the focus/zoom hardware. I fit a curve to the training data from Karthik’s calibration, and it turned out that a 4th-degree polynomial matched our observed points very closely.
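The fit itself is a one-liner with NumPy; the calibration points below are made-up placeholders rather than our measured data.

```python
import numpy as np

# Minimal sketch of the focus-vs-zoom curve fit; these calibration points are
# made-up placeholders, not the values we actually measured.
zoom_steps = np.array([0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500])
focus_steps = np.array([0, 12, 30, 55, 88, 130, 182, 245, 320, 410, 515])

coeffs = np.polyfit(zoom_steps, focus_steps, deg=4)   # 4th-degree polynomial
focus_model = np.poly1d(coeffs)

def focus_for_zoom(zoom):
    """Predicted focus step count for a given zoom step count."""
    return int(round(focus_model(zoom)))
```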
Karthik Natarajan:
Throughout this week, I mostly worked on calibrating the zoom lens and finding the proper focus values. First, after integrating Jerry’s new stepper library on our Featherboard, we were able to move multiple steps at a faster rate than before, and we reduced the motor heat mentioned earlier by releasing the stepper motors once they finish moving. After integrating Jerry’s modified stepper.py library, I worked on getting the zoom lens to focus properly at different levels of zoom. To reduce both the complexity and the time of each adjustment, I decided to use a function to model the relationship between a lens’s focus and zoom. This idea was initially prompted by finding an analytical curve for the relationship online; however, because that curve was lens-specific, I manually searched for the best focus step values at a fixed set of 11 zoom values by looking at how sharp a person appeared at a distance proportional to the zoom.
After doing this for a while, I saw that there was a small difference in the proper focus value, due to hysteresis, when moving the stepper motor backward vs. forward. To fix this problem, we introduced a slack variable which changes the number of steps based on which direction the stepper motor is moving. After collecting all of these points, Jerry graphed the data in Excel and we came up with the curve below.
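Conceptually, the slack compensation works like the sketch below; the slack step count and motor interface are placeholders.

```python
# Conceptual sketch of the slack ("backlash") compensation: when the motor
# reverses direction, add a few extra steps to take up the mechanical slack
# before the intended move. SLACK_STEPS and the stepper API are placeholders.
SLACK_STEPS = 4

class FocusMotor:
    def __init__(self, stepper):
        self.stepper = stepper     # low-level stepper driver (placeholder API)
        self.last_direction = 0    # 0 = unknown, +1 forward, -1 backward

    def move(self, steps):
        if steps == 0:
            return
        direction = 1 if steps > 0 else -1
        extra = SLACK_STEPS if self.last_direction not in (0, direction) else 0
        self.stepper.step(abs(steps) + extra, direction)
        self.last_direction = direction
```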
Nathan:
I spent this week primarily on integrating the Logitech C920 with the Kurokesu enclosure we received last week. This was substantially more difficult than expected, and involved a fair amount of, shall we say, “brute force” in the disassembly of the existing C920 enclosure. The screws in particular had an almost malign reluctance to budge, and so needed to be goaded in a process involving some pliers and a vice clamp. The soldering itself was also rather persnickety, involving delicate manipulation of fine wires, and even scraping away a layer of the PCB in a process that has no doubt measurably shortened my lifespan. In any case, after much turmoil, the C920 PCB was successfully transplanted into its new home in the Kurokesu case, and it works! This thankfully removed one of the major risks our project faced, and paves the way for a smooth transition into our Monday demo and final presentation.
This upcoming week, I’ll be working with Karthik and Jerry to prepare for our demo on Monday, and will actually get started on buying/building the case for all the components. This is more an aesthetic matter than anything else, but I consider it important for the final presentation.
Team:
This week was generally a success for our team. It was more work than expected, but the C920 PCB was transferred to the Kurokesu enclosure, and the zoom lens successfully integrated. A picture is included at the bottom of this blog post, though regretfully we forgot to take a picture of the intermediate steps. This removes one of the biggest outstanding risks this project faced, and completes a key milestone for the Monday demo.
As an update from last week, and pending any breakthrough regarding the matter, we’ve decided to go through with the power-on/power-off wake behavior for the Ultra96, with boot times around 5s, acceptable for our use case. To compensate for that longer than anticipated delay, we’re working on introducing an intermediate power state with the DPU deactivated, allowing for primitive low-power inference before the full capabilities of the SoC are engaged. This change allows us to remain in a power-on state more liberally, improving the responsiveness of the system within our power profile.
Other tasks for this project’s completion include the construction of the case/package, final tuning of the zoom/focus, and integration of the power system. However, these constitute more of a “to-do” list than actual threats to the project, so we feel quite confident in the project’s state heading into the final stretch, and at this time have a significant majority of the core systems complete.
As for scheduling, there has been some minor clean-up regarding remaining tasks, including breaking up the optimization and final system integration, but there are no major changes to report.
Weekly Report #9 – 4/13
Karthik Natarajan:
So, after last week I worked more on moving the zoom lens motors. Specifically, I was able to use the stepper motor library provided by CircuitPython to get the focus motor to move one step at a time. We did it one step at a time because CircuitPython only supports moving single steps. As time went on, we realized there were two problems with this. First, the stepper motors tended to get pretty hot because of the continuous starting and stopping after each step. Second, because the stepper motor only moved one step at a time, we weren’t able to move it as fast as we wanted. Because of this, we decided to re-implement the stepper motor library, as mentioned in Jerry’s status report. After that, I started testing the library on the zoom lens. As of right now, we are still in the process of testing it, but we have ensured that the new library keeps all the necessary functions from CircuitPython’s default stepper motor library. We will have more information about this on Monday.
Jerry Ding:
After spending much effort on this and looking carefully at the board schematics, it looks like it’s hard to further decrease the Ultra96 board’s power consumption below 2W without fully shutting it down. Though I’ve successfully gotten the processing system (CPU, memory, power management unit) and the programmable logic powered down (and confirmed this with debug prints), the remaining peripherals still consumed a non-negligible amount of power.
Instead, we decided to follow a backup plan and fully power down the system every time. The original boot time was over 20 seconds even with a relatively simple hardware block diagram, but by disabling kernel features, optimizing the u-boot bootloader, compressing the kernel image, and moving the PL programming to a later step, we were able to get it down to 5 seconds.
Though taking time to boot up is suboptimal, we believe it is still not too difficult to meet our originally stated goal of zooming in on a person up to 20 feet away under a variety of paths and lighting conditions. Our microwave sensor’s range is rated at 53 feet, and we have two sensors giving essentially a 180-degree field of view, so there will be plenty of buffer distance to work with. At a typical walking pace of roughly 4–5 feet per second, a person covers only about 20–25 feet in those 5 seconds, well within the sensor’s rated range. In addition, I looked at many example videos of package thieves, and they usually are not running or in a particular hurry on their entry path (possibly to avoid looking suspicious). Five seconds after motion is detected, the person should still be easily filmed. Though a longer detection range will cause more false positives, the fact that no power at all is consumed in the off state will make our power budget easier to work with.
In addition to this work, I wrote an alternative version of the CircuitPython stepper motor library that allows multiple steps in one function call. I made sure that the minimum amount of Python code is executed between steps, and wrote a good number of unit tests to make sure that my version of the library behaves identically to the original. Once the stepper motor work is done, there shouldn’t be much left except polishing the user-space applications into a smooth product.
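The core idea, stripped of the real library’s details, is to precompute the coil sequence and keep the per-step loop body tiny; the step table, pin interface, and delay below are simplified placeholders, not the library internals.

```python
import time

# Simplified sketch of the multi-step idea: precompute the coil pattern table
# once and keep the per-step loop body as small as possible. The step table,
# pin interface, and delay are placeholders.
FULL_STEP_SEQUENCE = ((1, 0, 1, 0), (0, 1, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1))

def multi_step(coils, n_steps, direction, start_index, delay=0.002):
    """Drive n_steps full steps in one call; returns the new sequence index."""
    seq = FULL_STEP_SEQUENCE
    index = start_index
    for _ in range(n_steps):
        index = (index + direction) % len(seq)
        for coil, level in zip(coils, seq[index]):
            coil.value = bool(level)   # coils: DigitalInOut-like output pins
        time.sleep(delay)
    return index                       # caller stores this for the next call
```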
Nathan:
I’ll start off with an update from last week. It took longer than I expected, as Xilinx’s documentation was missing a few steps (connecting some of the clock, reset, AXI signals, and a few minor functional blocks), but I completed the DPU build. You can see my design layout below.
I used the B1600 core as a “middling”-size implementation of the IP. You can see some of the parameters in the attached image. The current problem I’m struggling with is some sort of timing difficulty: Vivado has trouble reading a timing file and reports that part of the design does not meet the timing criteria. I’m convinced these two issues are related, so over the next week I will be working in Vivado to get this fixed and suitable for implementation on the FPGA. Additionally, I’m working on the enclosure, but in light of the more pressing Vivado work, I’ve pushed it back by another week. I’ll meet with Professor Nace if he’s available, ask about what I have access to in the Makerspace, and get started on fabricating the basic components.
Team:
Our biggest ongoing concern as a team is the difficulty getting the board into a deep-sleep state. While we’ve been able to suspend most activity, for reasons unknown the board still consumes a substantial amount of power. To address this, we’ve decided to split our efforts in two directions: we will continue to try to fix the sleep behavior, while also working on reducing the boot time. While not exactly ideal, if we can get the boot time low enough, the range of our motion sensors should allow the system to boot before someone reaches the package, and thus we will maintain our functionality.
Another thing we’re waiting on is the C920’s enclosure from Kurokesu. It’s in shipment, and should ideally arrive within a week or so, but we do not have a confirmed arrival time as it stands. If it’s late we will need to work quickly to get it implemented in time for the end of month demo, but based on the average shipment times, this should not be necessary.
Our only major schedule change for the week is pushing back the enclosure fabrication, but there is some extension to the tuning of the power system, testing of zoom, and DPU implementation. All schedule changes are maintained in the Gantt chart, with the link provided here again for convenience: https://prod.teamgantt.com/gantt/schedule/?ids=1467220&public_keys=0rh40k2Z0hkS
Weekly Report #8 – 4/6
Karthik Natarajan:
This week I spent my time on two distinct things. First, I worked on converting the code from Arduino C to CircuitPython so it would work on the Adafruit board. Also, the memory problem mentioned by Nathan last week turned out not to be a problem at all: as we looked through the files, we saw that Nathan’s Mac had automatically added unnecessary temp files to the board that were not needed to run the code, and deleting them freed up a considerable amount of space. On top of that, when moving the code over we realized that the CircuitPython servo library does not support negative degree values, so we could not use a couple of degrees of motion that were available on the Arduino. This also was not really a problem, because it did not cut off too many degrees from our range of motion.
After I finished this, I started working with the zoom lens while Nathan and Jerry looked into other tasks related to ensuring low power usage. The main problem was that the lens wires were all bundled together while the inputs on the board were on opposite sides. To get around this, I tried to put a wire into each slot of the adapter, but the slots turned out to be too thin for normal wires. When I tried thinner wire instead, it was extremely difficult to strip, and even when I did manage to strip it, it was too thin to make a reliable connection with the wire in the adapter. To fix this, I eventually cut off the adapter and stripped the wires on the zoom lens directly. Next week, I will continue working with the zoom lens and figure out how to better design the code that drives the two stepper motors, so we can keep better track of which direction to move them.
Nathan:
This week, in addition to the team demo work, I spent time working with the newly released DPU IP from Xilinx, which opens up the opportunity to modify the DPU design itself and add our own IP (primarily for power management). We had initially intended to do this from the start of the project, but the lack of access to the IP made us dismiss the opportunity until now. It may be short notice, but I’m really excited to be able to play around with these specifications. Right now our design is a bit overkill. Paring it down can ideally let us save a significant amount of power. I almost have the DPU instantiated on the Ultra96, as in it should be working within an hour from this post. Just need to fiddle with some latencies a bit. I’ll post an update when I succeed.
On the topic of power, I had initially hoped we could reduce active power by cutting the voltage to certain components commensurate with the clock speed. Sadly, while this may technically be possible, it basically involves reprogramming the PMIC on the fly, which is beyond the scope of our project. For now, I’ll stick to reducing clock speed and logic usage.
It’s somewhat behind schedule (with the Gantt chart updated accordingly), but I’ve also been sketching out the details of our enclosure (i.e., the box we will put all the components in). Ideally I want it to be a mostly sealed system (though likely with a hinged lid for convenience), with access to some important ports (USB, power, display) on the back and top. I’m thinking wood will make a fine material, but I need to double-check that it won’t interfere too much with the WiFi.
Jerry:
This week I was mostly free of work from other classes, so I dove deep into the Vivado work, trying to create a hardware design that has all the components needed for our low-power requirements. Basically, the amount of control we have over the default Linux distribution provided by Deephi is limited, so we need to create our own design with a superset of its features. Mainly, the design needed to:
* Run Linux
* Be able to control the fan
* Be able to enter a suspended state
* Have the Deephi DPU instantiated
Firstly, once I was able to install Vivado it was fairly easy to make a bitstream that simply turned the fan off.
Then I had to install PetaLinux, which was a huge pain. The installer kept failing, saying that it couldn’t install the Yocto SDK. This was frustrating to debug because the installer always took about 30 minutes to initialize, never cached any of its progress, and had no useful debug output to work off of. Eventually I discovered the failure was because my path had a symbolic link in it (which I had added because I had drives whose names contain spaces; not using a symbolic link causes even more problems). Then I realized that the board support package for the Ultra96 only supports 2018.2, which is apparently not compatible with the version I had (just one version higher, at 2018.3)… So I had fun reinstalling Vivado and PetaLinux 2018.2 instead.
Using PetaLinux was also incredibly painful. The project folder grew to about 30GB, with 300,000 files in one directory. This makes the build very slow (2+ hours), and worse, if you misconfigure something there is no easy way to undo it, so you would need to recreate the project. The saving grace was that the build process has a cache, so I could copy the folder as a backup and revert to it if I broke something. Still, copying took about 30 minutes and building took at least 30 minutes each time, even with the cache.
After a dozen reverts and builds, I had a working PetaLinux distribution where I could control the fan. Soon after, I incorporated the datapath that lets me do deep sleep. However, it seems like the board still uses a non-negligible amount of power when we try to enter deep sleep, so we suspect we misconfigured something. This is where I’m at now.
In the process we found out that the programmable logic takes a significant amount of static power, but I was able to add kernel features to let me turn off the PL power domain. The PL loses its state when turned back on, but I also incorporated and tested a driver that will let me reprogram the PL once it is powered back on. So this is under control.
Once the deep sleep works, we’ll actually be pretty close to a finished product. First we would need to also instantiate the DPU, which Nathan is working on. We would then have to attach the zoom lens and calibrate it. Then it’s a bit of polish and we should be done.
Team:
For the demo, we unfortunately gave a bad impression by spending too long getting the system running. We thought we were thoroughly prepared to set up the system on the day of the demo, since we had tested it over the weekend and had made similar setups in the lab, but we hadn’t accounted for the one variable that was different from all previous setups. We wanted to move the system so that we could step farther away from the camera and give it a clearer view, but that meant connecting it to a different monitor, which we had never used before. Not knowing the monitor’s ports very well, we accidentally connected to its display-out port, so the monitor did not respond at all. We also learned that our demo application crashes when there is no available monitor, since it imports OpenCV, whose routines crash when there is no display. We soon moved the system back to the original monitor, but by then about 10 minutes had passed; we only figured out later why the first monitor didn’t respond. Otherwise, the functional aspect of the demo ran smoothly, and the system was able to successfully track everyone who tried it.
We also updated the Gantt chart to push back the design/manufacturing of the enclosure, because of the problems we have had with the low-power parts of the project.
Weekly Report #5 – 3/16
Nathan:
I was not able to get as much work done this week as I intended, with a busier-than-expected spring break schedule and a lack of convenient internet access. For now, I’ve continued to study the Adafruit CircuitPython libraries for servo and stepper motor control, as well as their UART control. I’ve downloaded the necessary libraries and will try out my example code once I get access to the hardware.
Karthik:
I have also not been able to do as much work because of spring break, but I have been researching possible control logic that we could use for the servo. I will hopefully be able to implement it once I get back to CMU after break. Because of that, we are still a bit behind schedule, but as mentioned in the team report, we have updated the Gantt chart to better reflect our current situation.
Jerry:
I mostly researched ways to implement autofocus for the system. Currently, I’m thinking of measuring a lookup table of reasonable focus parameters given the size of the bounding box to be tracked and the current zoom level, and then using a contrast-based algorithm to fine-tune the focus around that point.
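As a rough sketch of that idea (the lookup table contents, lens/camera interfaces, and search window below are all hypothetical):

```python
import cv2

# Hypothetical autofocus sketch: start from a lookup-table guess, then search
# nearby focus positions for the one with the highest contrast (variance of
# the Laplacian). Table values and the lens/camera APIs are placeholders.
FOCUS_LUT = {}   # (zoom_level, box_size_bucket) -> focus steps, to be measured

def contrast(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus(lens, camera, zoom_level, box_size_bucket, search_steps=5, step=2):
    start = FOCUS_LUT.get((zoom_level, box_size_bucket), 0)   # coarse LUT guess
    best_pos, best_score = start, -1.0
    for delta in range(-search_steps, search_steps + 1):
        pos = start + delta * step
        lens.focus_to(pos)                 # placeholder lens API
        score = contrast(camera.grab())    # placeholder camera API
        if score > best_score:
            best_score, best_pos = score, pos
    lens.focus_to(best_pos)
```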
Team:
Regarding the Gantt chart, there are a few changes. First, the enclosure was pushed back by about a week and a half; leading up to the April 1st demo, the most important thing is getting working mechanical control and vision systems. Some of the power-related optimization tasks were also relabeled to better reflect their more software/firmware-based nature, as opposed to the earlier hardware controls, and changes were made to the distribution of work to bring motor/servo control under the group as a whole.
Weekly Report #1 – 2/16
See the links below for the Week #1 status reports:
Week #1 Report – Karthik
Week #1 Report – Jerry
Week #1 Report – Nathan
Week #1 Report – Team