Jolin’s status report for 5/8/2021

  • I gave the final presentation about our project. Everything went smoothly and I am satisfied with the outcome.
  • I continued to integrate the unison detune and ADSR features into the system. I changed the unison detune algorithm so that the period changes for the auxiliary oscillators efficiently approximate even spacing in frequency. Now the unison detune actually sounds like the effect we get from a typical linear unison detuner.
  • I am implementing the record and cycle feature. It’s expected to be completed by tomorrow.
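
To make the even-frequency-spacing idea above concrete, here is a minimal Python model (not the SystemVerilog implementation; the function name, sample rate, and symmetric-spread layout are illustrative assumptions): voices are spread symmetrically around the base frequency with even spacing in frequency, and each voice's period is derived from its frequency.

```python
SAMPLE_RATE = 48_000  # assumed sample rate, for illustration only

def unison_periods(base_freq, num_voices, detune_hz):
    """Return per-voice periods (in samples) for voices spread
    symmetrically around base_freq with even spacing in frequency."""
    periods = []
    for i in range(num_voices):
        # offsets run evenly from -detune_hz to +detune_hz
        if num_voices > 1:
            offset = detune_hz * (2 * i / (num_voices - 1) - 1)
        else:
            offset = 0.0
        freq = base_freq + offset
        periods.append(SAMPLE_RATE / freq)  # period follows from frequency
    return periods
```

Converting the periods back to frequencies shows the spacing is even in Hz (e.g. 438, 440, 442 Hz for three voices at 2 Hz detune), which is what makes it sound like a linear unison detuner.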

Next week I will be helping Michelle finish up the video and working on the final report.

Jolin’s status report for 5/1/2021

  • I integrated the unison detune feature and made sure the whole system works well. I am integrating ADSR and adapting the existing division table so that the oscillator and ADSR can share the same table.
  • As I was implementing the arpeggiator, I found out that the arpeggiator functionalities we originally planned to implement have already been implemented by the keyboard. With the MIDI messages the keyboard sends, we don’t have enough information to recreate the same functionalities on the FPGA. The keyboard only sends out MIDI note messages at the configured rate and pattern. It does not send control_change messages for the arpeggiator configuration buttons, and it’s impossible to figure out which piano key on the keyboard is pressed based on the MIDI note message, since the octave transpose allows three different keys to produce the same MIDI note message.
  • After discussion with Hongrun and Michelle, we decided to implement a “record-and-cycle” function instead. This feature allows the user to have a custom “spacing” between the notes, not just 1/4 notes at some tempo like the arpeggiator. It is not so different from the arpeggiator at the implementation level, in the sense that both functionalities capture notes and then play the captured notes back at some rate. We think it augments the arpeggiator feature and is a good proof-of-concept for what we originally would have implemented, and users still get comparable functionality.
  • To compensate for the change to the arpeggiator, we decided to add pre-configured stored wavetables and the functionality to modulate between two waveforms.
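
The capture-then-replay structure shared by the arpeggiator and record-and-cycle can be sketched as a small Python model (a sketch of the idea only, not the RTL design; the function names, the timestamp units, and the choice to close the loop with the first recorded gap are all assumptions):

```python
def record(events):
    """events: list of (timestamp, note) captured while recording.
    Returns (note, gap) pairs, where gap is the delay from that note
    to the next; the gap after the last note closes the loop (here
    reusing the first gap, an illustrative choice)."""
    notes = [n for _, n in events]
    times = [t for t, _ in events]
    gaps = [times[i + 1] - times[i] for i in range(len(times) - 1)]
    gaps.append(gaps[0] if gaps else 1)  # loop-closing gap (assumption)
    return list(zip(notes, gaps))

def cycle(recording, start, count):
    """Replay `count` notes from `start`, cycling through the recorded
    notes and reusing the recorded custom spacing between them."""
    out, t = [], start
    for k in range(count):
        note, gap = recording[k % len(recording)]
        out.append((t, note))
        t += gap
    return out
```

The key difference from the arpeggiator is that the gaps are whatever spacing the user actually played, instead of a fixed subdivision of the tempo.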

I think I will be able to finish everything by the video deadline.

I will work on the remainder of the record-and-cycle feature and the modulation feature next week.

Jolin’s status report for 4/24/2021

I took a slow week after getting a ton of things done last week for the interim demo. I had to fly back to Pittsburgh to move house and take care of all the lease-ending chaos. I had limited access to a working environment.

I am a little behind schedule but with the mental planning of the arpeggiator I have already done, I should be able to catch up this weekend and early next week.

Jolin’s status report for 4/10/2021

  • This week I spent a ton of hours on full system integration, and the progress is shown in this pull request. (The pull request shows some older commits, but the main progress was made earlier this week.)
  • The project structure is now much clearer. I decoupled the hardware interfaces from the core system with a clear division of responsibility. The ChipInterface connects the core system conFFTi to the hardware PLL clock, reads data in from the UART driver, and sends the audio out to the DAC driver. This setup isolates the core system logic from the hardware interface, so if in the future we want to use a different FPGA board or a different audio codec, all we need to do is change the device drivers without modifying the core system.
  • The system is also now easily configurable. I defined all the configuration constants in one file, so everything from the output audio bit width to the number of supported polyphony voices can be changed just by editing the numbers in this file.
  • The code for each subsystem is organized into its own folder: midi, dispatcher, pipeline, and mixer. Each subsystem comes with its own unit test suite for all the modules. To make life easier, I standardized the testing procedure with the Makefiles (rule definition, file dependency). Testing is now much less painful.
  • My work completes checkpoint 2 of our project.

I am on schedule.

I received the new TRS to MIDI cable and DAC controller with the headphone jack today. Next week I will be working on the arpeggiator control.

Jolin’s status report for 4/3/2021

This week I paused work on the arpeggiator control and focused on DE2-115 audio CODEC control and USB control instead. IMHO the readability and quality of the demonstration code that came with the DE2-115 CD are rather low. It took me a ton of effort to look past the typos (e.g., adio_codec) and dead code (i.e., walls of code working with signals that have nothing to do with the output signal). Although I can generate Altera IP modules and understand how to use simple ones, I couldn’t get the whole control to work. Luckily, the PLL modules that I did understand can be integrated with Hongrun’s work on using GPIO for MIDI input and audio DAC output.

We adjusted our timeline last week to account for the difficulty we faced with the FPGA I/O.

This weekend I will be integrating all the code we have so far for our checkpoint 2. Next week I will start to code the arpeggiator control.

Jolin’s status report for 3/27/2021

  • I finished coding the polyphony control and I am in the process of debugging the simulation. To make my design more generic (i.e., the number of notes supported at a time is not fixed at 4 but configurable through a parameter), I learned a lot about generate blocks in SystemVerilog.
  • My design for the arpeggiator control naturally divides into two parts. The first part is to store the notes while the arpeggiator key is pressed, and sort them in ascending pitch order. The second part is to play the notes at the desired rate and pattern. This is relatively easy once the first part is set up, since it basically uses a counter to determine when a note needs to start and end, where the counter values are calculated based on tempo and rate. Then at each start time, the pattern and mode determine whether a rest or a note will be played, and which note will be played.
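
The counter math in the second part can be sketched in a few lines of Python (an illustrative model, not the RTL; the 48 kHz tick rate and the “steps per beat” parameterization are assumptions):

```python
SAMPLE_RATE = 48_000  # assumed counter tick rate (audio sample clock)

def ticks_per_step(tempo_bpm, steps_per_beat):
    """Counter reload value: how many ticks one arpeggiator step lasts.
    steps_per_beat expresses the rate, e.g. 4 for 1/16 notes in 4/4."""
    seconds_per_beat = 60.0 / tempo_bpm
    return round(SAMPLE_RATE * seconds_per_beat / steps_per_beat)
```

For example, at 120 BPM with 4 steps per beat, each step lasts 0.125 s, i.e. 6000 ticks at 48 kHz; the counter counts down from that value and triggers the next note (or rest) when it reaches zero.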

I am a little behind schedule but since I left ample time for polyphony control integration, it will be easy to catch up.

Next week I will finish debugging the polyphony control and start coding the arpeggiator control.

Jolin’s status report for 3/13/2021

  • I now have a fully working simulation and synthesis environment with ModelSim and Quartus, respectively. Making the USB-Blaster cable work took me a while since the USB-Blaster on Linux requires some special setup. I spent extra hours debugging the JTAG connection because my laptop’s USB port/USB-A-to-C dongle connection doesn’t work reliably.
  • My random number generator module and testbench now pass simulation. The output is verified against the Matlab script.
  • I learned how to use algorithmic state machine with datapath, or ASMD. It is closer to the actual Verilog code than FSMD.
  • I finished the ASMD of the polyphony control that handles MIDI note messages. This design is sufficient for Ckpt 1 (3/22).

I am on schedule.

Next week I will spend most of the time writing the Design Review Report. I will add control message handling to the polyphony design after Hongrun figures out the list of control messages the MIDI keyboard would generate.

Jolin’s status report for 3/6/2021

  • I finished scripting the pseudo-random number generation algorithm with Matlab and the SystemVerilog file generation. The configurable n-bit LFSR generates a pseudo-random sequence of length 2^n – 1, producing one bit per clock cycle. With this functionality, the arpeggiator can support random rhythm generation, where each step has a 50% chance of being either a note or a rest.
  • I drafted a primitive polyphony control design. It records the current status of each of the four audio processing pipelines – whether it is in use and by which note. On a key-down event, it sends the note to a free pipeline and marks that pipeline in use; if none is free, it discards the note. On a key-up event, it signals the ADSR envelope that the note is released and marks the pipeline as free. Since there are only four pipelines, the scheduling can be as simple as looping until the first free one is found.
  • I studied our MIDI keyboard Launchkey Mini user guide and led the discussion on detailing which effect parameter is user-controlled and by which control on the keyboard. The result is shown in our presentation slide.
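
The LFSR behavior above can be modeled in software to sanity-check the maximal-length property (a Python model of the idea, not the generated SystemVerilog; the right-shift Fibonacci convention and the tap choice shown are assumptions, and the RTL may use different taps):

```python
def lfsr_bits(n, taps, seed, count):
    """Yield `count` output bits from an n-bit Fibonacci LFSR.
    taps: 0-indexed bit positions XORed together as the feedback bit,
    which is shifted in at the MSB while the LSB is emitted each cycle."""
    state = seed & ((1 << n) - 1)
    assert state != 0, "all-zero state locks up an XOR LFSR"
    out = []
    for _ in range(count):
        out.append(state & 1)           # one pseudo-random bit per cycle
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1      # feedback = XOR of the tap bits
        state = (state >> 1) | (fb << (n - 1))
    return out
```

With n = 4 and taps at bits 0 and 1 (a maximal-length choice under this convention), the state walks through all 15 nonzero values before repeating, so the output bit stream has period 2^4 – 1 = 15; each output bit can then decide note vs. rest for a step.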

The simulation and synthesis environment setup is taking a lot longer than I expected. On the other hand, the large project specification change reshuffled my timeline, so I was able to reschedule my later tasks to account for the difficulty of the environment setup.

Next week I will finish the environment setup and start on a more detailed design for polyphony control.

Jolin’s status report for 2/27/21

  • I did more research on FPGA random number generation and started scripting with Matlab.
  • I read the simulation scripts from 18240 and 18447.
  • With my teammates, we compiled a bill of materials.

I am on track with the schedule.

I am expecting to finish random number generation early next week and start on polyphony control design.

Jolin’s Status Report for 02/20/21

  • I finalized my share of work this week.
    • From the papers of past similar projects I have read, the memory constraint came up as a technical challenge. I did more research on the feasibility and necessity of using external memory on the FPGA, did some back-of-the-envelope calculations, and talked with Professor Sullivan about the types of FPGAs available. I think we most likely will not need the external memory if all three of us can get DE2-115 boards, although if only the DE0-CV is in sufficient supply for all of us, the external memory may be necessary.
  • My teammates and I collaboratively completed the proposal presentation slides, and I formatted the slides so they look professional.
  • I set up the project github repo for the code base.

We are still in the proposal stage, and our presentation slide/preparation is on track for the Proposal Presentation next week.

I plan to set up the simulation environment for the team and start on random number generation scripting and simulation.