Tom’s Status Report 4/23/22

This week I finalized the user interface design for the front panel and began software implementation, finalizing the software architecture for the primary computer (Raspberry Pi), the co-processor (STM32), and the interaction protocol between them.

Detailed PDF of the enclosure design: panel_sheetmetal_engraved_textpoly

The design focuses on simplicity and consistency, with an emphasis on sharp corners and circles. It relies on clean, basic shapes that mirror the mechanical design of the enclosure and merge seamlessly with the geometric look of the GUI application itself.

After production, the cases look like this. They’re black anodized aluminum, so the pattern is just laser etched onto them.

The software architecture is done, and I'm currently implementing the system in C++/Qt. My goal for this week is to produce audio and finish the core software (everything except the GUI itself).

Software architecture:

A significant amount of work still needs to happen in the GUI space to finish this. I've been working on the voice/pseudovoice/patch architecture, which is the core of the sound synthesis. Voices control the voice hardware, including dedicated VCF/VCA (voltage-controlled filter/amplifier) circuits. Pseudovoices are oscillator pairs with their own amplifier envelopes, which allows multiple notes to be allocated to the same hardware voice, although those notes must share VCF/VCA parameters. The voicing modes in the design report describe how this allocation works; a rough sketch of the idea in code is below.
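To make the pseudovoice idea concrete, here's a minimal allocation sketch. All names (`Voice`, `Pseudovoice`, `noteOn`, the counts) are illustrative placeholders, not the actual identifiers in our codebase, and a real allocator would follow the voicing modes from the design report:

```cpp
#include <array>
#include <cstdint>
#include <optional>

constexpr int NUM_VOICES = 6;        // hardware voices (dedicated VCF/VCA)
constexpr int PSEUDO_PER_VOICE = 2;  // oscillator-pairs sharing one voice

struct Pseudovoice {
    std::optional<uint8_t> note;  // MIDI note currently sounding, if any
    float ampEnvLevel = 0.0f;     // per-pseudovoice amplitude envelope
};

struct Voice {
    // All pseudovoices on this voice share the hardware VCF/VCA parameters.
    std::array<Pseudovoice, PSEUDO_PER_VOICE> pseudo;
    float cutoff = 1.0f;  // shared filter cutoff (normalized)
};

struct VoiceBank {
    std::array<Voice, NUM_VOICES> voices;

    // Allocate a note to the first free pseudovoice. A real allocator would
    // also implement voice stealing and the configurable voicing modes.
    bool noteOn(uint8_t note) {
        for (auto& v : voices)
            for (auto& pv : v.pseudo)
                if (!pv.note) { pv.note = note; pv.ampEnvLevel = 1.0f; return true; }
        return false;  // all pseudovoices busy
    }

    void noteOff(uint8_t note) {
        for (auto& v : voices)
            for (auto& pv : v.pseudo)
                if (pv.note == note) pv.note.reset();  // envelope release elided
    }
};
```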

This week, I want to finish all of the Primary Computer software except for the GUI software, and put together the synthesizer.

Tom’s Status Report for 4/16/2022

This week (and then some), I spent most of my time continuing to wrestle with Petalinux, this time 2018.3, which is the version supported for the Ultra96v2. Unfortunately, despite significant effort (easily 40 hours' worth of attempted builds), I was unable to successfully build a Petalinux image on my system. I think the only solution is to have exactly the right native Ubuntu version as well as an officially Xilinx-supported board, such as the ZedBoard or ZCU series. At this point I decided to cut our losses and switch the digital side of the system from the Xilinx Zynq to a Raspberry Pi + MCU approach, which has a far simpler toolchain. The main reason I wanted to use the Zynq in the first place was to learn the embedded Linux build process, but without good mentorship and a working toolchain, that's infeasible in the time I have on top of the rest of the system software and mechanical design.

I've already set up simple Python scripts on the Pi, verified that the display works, and begun implementing the Qt application.

The new architecture for the digital hardware has two possible approaches, depending on the balance of work between the MCU and the Raspberry Pi as well as on the speed of the comms link:

The simple option puts the oscillators and filter control onto the STM32, since those are the only things the Raspberry Pi can't do (44 kHz real-time I2S is simple on the STM32 but not on the Pi, although it is possible there with certain modifications). The main issue is that filter/envelope data has to be sent repeatedly over the comms link, which limits the interpolation density for the envelopes if we're running in real time. I estimated that with a 36-byte payload (6 voices + envelopes + amplifiers), we'd be able to update the oscillators/filters at 320 Hz, which should be more than sufficient. The firmware for this would be easy to write on a bare-metal STM32.
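As a sanity check on that number: 36 bytes at 320 Hz is 11,520 bytes per second, which is exactly the throughput of a 115200-baud UART (10 bits on the wire per byte: start + 8 data + stop). The sketch below assumes that kind of UART link and an illustrative field layout; the actual framing is still open:

```cpp
#include <cstdint>

// Sketch of the per-update control frame, assuming a plain 115200-baud UART
// between the Raspberry Pi and the STM32. The link choice and the field
// layout are illustrative assumptions, not the final protocol.
//
// Throughput check: 115200 baud / 10 bits-per-byte = 11,520 bytes/s;
// 11,520 / 36 bytes-per-frame = 320 frames/s.

struct __attribute__((packed)) ControlFrame {
    struct __attribute__((packed)) VoiceState {
        uint16_t oscPitch;   // oscillator pitch word
        uint16_t cutoff;     // filter cutoff (interpolated envelope point)
        uint16_t amplitude;  // VCA level (interpolated envelope point)
    } voice[6];              // 6 voices * 6 bytes = 36 bytes total
};

static_assert(sizeof(ControlFrame) == 36, "payload must stay at 36 bytes");
```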

If for whatever reason we run into issues there, the next step is to move all real-time synthesis to the STM32. This adds significant complexity to the STM32 software, but removes the need to send anything over comms in real time.

Additionally, I obtained two Raspberry Pi 3B+ boards and a Raspberry Pi 4 to use for the synth, as well as an STM32F4 development board with a built-in I2S DAC that we can use instead of the off-system I2S DAC. I've tested the Raspberry Pis, but haven't brought up the STM32 toolchain yet, although I've used these parts before and am familiar with the ecosystem.

Lastly, I designed the front panel layout for the synth, including final hole positions and the silkscreen/laser-etching files. Sam is in the process of finalizing the design drawings for manufacturing, so there might be a few modifications, but it looks like this right now. (The image is 300 dpi, so feel free to open the full image and zoom in on the details.)


Next week, I'm planning to work mostly on the Qt application and on getting the STM32 toolchain set up. I started on the Qt application last week, but there's not much to show yet beyond a working toolchain, placeholder GUI elements, and a simple framework for patch parameters (sketched below). I don't think we'll have any trouble running Qt on the Pi, but I want to test that ASAP as well.
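For the curious, the patch-parameter framework currently amounts to little more than this shape. The class and property names are placeholders; the real thing will grow modulation routing. Using Qt's property system means GUI widgets can bind to parameters later:

```cpp
#include <QObject>
#include <QString>
#include <utility>

// Rough shape of the patch-parameter framework (names are placeholders).
// Each modulatable parameter is a QObject with a notifying property, so GUI
// elements can connect to valueChanged once the real interface exists.
class PatchParameter : public QObject {
    Q_OBJECT
    Q_PROPERTY(float value READ value WRITE setValue NOTIFY valueChanged)
public:
    explicit PatchParameter(QString name, float initial = 0.0f,
                            QObject* parent = nullptr)
        : QObject(parent), m_name(std::move(name)), m_value(initial) {}

    float value() const { return m_value; }

public slots:
    void setValue(float v) {
        if (v != m_value) {
            m_value = v;
            emit valueChanged(v);  // GUI and synth engine both listen here
        }
    }

signals:
    void valueChanged(float newValue);

private:
    QString m_name;   // e.g. "filter.cutoff" (naming scheme TBD)
    float m_value;
};
```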

Tom’s Status Report for 4/2

This week I gave up on the HDMI pipeline on the PYNQ Zynq board. After three weeks of trying, I couldn't get a working toolchain for the board. The PYNQ (also sold as the Arty Z7) was discontinued, and no support exists after Petalinux 2017, meaning we would have to use archived tools that no longer run on modern Linux versions. Additionally, the HDMI pipeline was finicky and required regenerating the bitstream and rebuilding the Linux image just to change resolutions. That process takes several hours, and we were never able to get the PYNQ to recognize an HDMI display and output data.

Switching to the Ultra96 board means we can make use of the UltraScale+ Zynq's hardware GPU and skip the complex design of a framebuffer/HDMI pipeline. Since switching, we've already brought up HDMI and I2C drivers on the PS (processing system), and built bitstreams and Petalinux images. I successfully wrote a driver to interface with the knobs over I2C (a sketch of the idea is below), and have been working on deploying Qt to the Ultra96 to build the GUI. So far, I've configured Petalinux to include the Qt binaries, but building them has already taken multiple hours and might be an overnight process.
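The knob driver itself is a thin layer over Linux's standard i2c-dev interface. A minimal sketch, with a placeholder bus number, device address, and register (the real knob hardware differs):

```cpp
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// Minimal userspace read over Linux i2c-dev. The bus node, device address,
// and register below are placeholders, not the real knob hardware.
int main() {
    int fd = open("/dev/i2c-1", O_RDWR);  // placeholder bus
    if (fd < 0) { perror("open"); return 1; }

    const uint8_t addr = 0x20;  // placeholder device address
    if (ioctl(fd, I2C_SLAVE, addr) < 0) { perror("ioctl"); return 1; }

    uint8_t reg = 0x00;  // placeholder register
    if (write(fd, &reg, 1) != 1) { perror("write"); return 1; }

    uint8_t value = 0;
    if (read(fd, &value, 1) != 1) { perror("read"); return 1; }

    std::printf("knob register 0x%02x = %u\n", reg, value);
    close(fd);
    return 0;
}
```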

This week I'm hoping to deploy the Qt GUI to the Ultra96 and write user-space drivers for Graham's AXI peripheral work.

Tom’s Status Report for 2/26/22

This week was mostly research for the video pipeline, and I began working on the Qt application for the linux side of the project.

The video pipeline for the Zynq uses the AXI VDMA core to transfer data out of the framebuffer, which is stored in the Zynq's memory (the PS and the PL share the same memory). A video timing controller generates the vsync, hsync, and related pulses for the HDMI PHY.

Lauri's blog | Connecting test pattern generator to VGA output on ZYBO

The main application will run under Qt and draw into the framebuffer. I still need to test this, but Petalinux (Xilinx's Linux distribution) ships finished drivers for the AXI VDMA system, so as long as I followed the spec correctly, Qt should be able to draw to the HDMI output without any real setup. (This is tricky and might be very unreliable at first.) The Qt application is multithreaded C++ and will do essentially all of the complex work for the synthesis: envelopes, wavetable selection/loading, video output, reading inputs from the encoders, and voice allocation.
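Concretely, if the VDMA driver exposes a standard framebuffer node, Qt can render to it through its linuxfb platform plugin with no display server at all. A minimal test, assuming the framebuffer shows up as /dev/fb0 (check dmesg for the actual node):

```cpp
#include <QApplication>
#include <QLabel>

int main(int argc, char** argv) {
    // Point Qt at the kernel framebuffer the AXI VDMA driver should expose.
    // The linuxfb backend draws directly to the fb device; no X11/Wayland.
    // (/dev/fb0 is an assumption; the real node may differ.)
    qputenv("QT_QPA_PLATFORM", "linuxfb:fb=/dev/fb0");

    QApplication app(argc, argv);
    QLabel label("framebuffer test");
    label.showFullScreen();
    return app.exec();
}
```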

Team status report for 2/20/2022

We locked down our synth architecture this week and decided to switch from a true 6-voice polyphonic synthesizer to a paraphonic synthesizer. We determined that the I/O usage and complexity of 6 true analog voices was too risky to implement, and after testing paraphonic synthesizers we decided the sound was good enough. Basically, a paraphonic synthesizer uses a single global "gate": the filter envelopes of all notes are synchronized, and the gate resets only after every note is released. This is different from a true polyphonic instrument, but acceptable for playing chords and monophonic melodies. (A sketch of the gate logic is below.)
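In code terms, the global gate is simple: trigger on the first key down, release on the last key up. A sketch (illustrative only, not our engine code):

```cpp
#include <cstdint>

// Sketch of the paraphonic "global gate": the shared filter envelope is
// triggered by the first key down and only releases when every key is up.
class ParaphonicGate {
public:
    void noteOn()  { if (m_heldNotes++ == 0) m_gate = true; }   // first key: trigger
    void noteOff() { if (m_heldNotes > 0 && --m_heldNotes == 0) m_gate = false; }  // last key: release
    bool gate() const { return m_gate; }

private:
    uint32_t m_heldNotes = 0;
    bool m_gate = false;
};
```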

For hardware, this means we're using one VCF chip, one I2S stereo DAC, and one additional DAC for controlling the parameters of the VCF chip. We may implement two voices in hardware to create a "duophonic" synthesizer, which would allow true stereo patches and a small degree of polyphony.

Tom’s status report for 2/20/2022

This week was unfortunately mostly a return to planning to make sure we nailed down the design. I had initially intended to dive into FPGA-land and start bringing up the Zynq and the video pipeline. While I was able to synthesize a design for a video framebuffer on the Zynq, I haven't been able to test whether it actually works on hardware yet.

Mostly, I ended up focusing with Sam on "knob selection," which meant dialing in exactly what the interface to the synth is, and consequently exactly which features we need to implement at a technical level. We settled on descoping the 6 hardware voices in favor of a "paraphonic synthesizer," where the gating for all filter envelopes is identical. This is discussed more in the team status update.

I'm currently working closely with Sam to find a way to interface with the many rotary encoders we use to change synthesizer settings. Due to the chip shortages, the open-source MCU-based designs for this are all unavailable, so we'll have to switch to analog pots or make something custom, since we don't have enough I/O to connect all of the rotary encoders to the FPGA directly. (A sketch of the decoding logic we'd need either way is below.)
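Whatever hardware we land on, the decoding itself is small: a standard quadrature state-table decoder looks roughly like the sketch below. How the A/B pins actually get read (GPIO expander, shift register, or MCU) is exactly the open question, so that part is left abstract:

```cpp
#include <cstdint>

// Classic quadrature decoder: index the previous and current 2-bit states of
// the A/B pins into a transition table. Illustrative sketch only.
class EncoderDecoder {
public:
    // a, b: current logic levels of the encoder's two output pins.
    // Returns -1, 0, or +1 movement for this sample.
    int update(bool a, bool b) {
        // Rows: previous state; columns: current state. Invalid or idle
        // transitions map to 0. Direction sign is a convention.
        static constexpr int8_t table[16] = {
             0, -1, +1,  0,
            +1,  0,  0, -1,
            -1,  0,  0, +1,
             0, +1, -1,  0};
        uint8_t curr = static_cast<uint8_t>((a << 1) | b);
        int step = table[(m_prev << 2) | curr];
        m_prev = curr;
        return step;
    }

private:
    uint8_t m_prev = 0;
};
```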

My job personally was coming up with this spreadsheet: https://docs.google.com/spreadsheets/d/1Dr5_RnABUDoyzo6D7ym3ToypcFMyjv6tvbP-L7gBcFY/edit?usp=sharing
which details exactly which modulatable features we’re implementing on the synth. This gives us a framework to define requirements precisely for the analog system and how the software needs to be implemented.

Tom’s status report for 2/12/22

This week I worked on the toolchain for the Zynq. I downloaded Vivado and started working on the HDMI pipeline as a good test of the toolchain. We started knob allocation and creating a list of modulatable signals. I personally got the Zynq to boot and finished a detailed block diagram for the system. Coming up, I want to actually test the HDMI pipeline and start working on the software, including voice allocation, pseudocode for the wavetables (a first sketch is below), and the envelope code.
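As a head start on that wavetable pseudocode: the core is just a phase accumulator indexing a single-cycle table. A sketch, assuming a 2048-sample table, 44.1 kHz output, and linear interpolation (all placeholder choices, not final decisions):

```cpp
#include <array>
#include <cstddef>

// Minimal wavetable oscillator sketch: a phase accumulator stepping through
// a single-cycle table with linear interpolation. Table size and sample rate
// are assumptions; this is the shape of the pseudocode, not final code.
constexpr std::size_t TABLE_SIZE = 2048;
constexpr float SAMPLE_RATE = 44100.0f;

class WavetableOsc {
public:
    explicit WavetableOsc(const std::array<float, TABLE_SIZE>& table)
        : m_table(table) {}

    void setFrequency(float hz) { m_phaseInc = hz * TABLE_SIZE / SAMPLE_RATE; }

    float nextSample() {
        // Linearly interpolate between the two neighboring table entries.
        std::size_t i0 = static_cast<std::size_t>(m_phase);
        std::size_t i1 = (i0 + 1) % TABLE_SIZE;
        float frac = m_phase - static_cast<float>(i0);
        float out = m_table[i0] + frac * (m_table[i1] - m_table[i0]);

        m_phase += m_phaseInc;
        if (m_phase >= TABLE_SIZE) m_phase -= TABLE_SIZE;  // wrap the cycle
        return out;
    }

private:
    const std::array<float, TABLE_SIZE>& m_table;
    float m_phase = 0.0f;
    float m_phaseInc = 0.0f;
};
```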