This week I began integrating our full system, debugging the hardware and software together.
A lot of effort went into the new hardware, particularly establishing a solid physical connection to the LCD pins and pairing a speaker to the BM83, which required soldering and re-soldering, replacing dead wires, and removing bad connections.
Integration took a few rounds of debugging configuration issues, since the audio path and UI had never been tested together. Eventually, with help from Xingran, I got this working. The LCD is now very dim, however, so I need to look into resolving this issue.
I conducted testing with a guitar as an input signal. There is latency of roughly 200 ms, which I think is induced mostly by whatever Bluetooth codec we are using. The signal is also relatively distorted, with a significant amount of noise.
I successfully implemented the delay effect, but noticed that its feedback loop amplifies the noise.
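The noise build-up follows directly from the feedback structure: each pass through the loop re-adds an attenuated copy of everything already in the delay line, noise included. Here is a minimal sketch of such a feedback delay (the names, buffer length, and 48 kHz rate are my assumptions, not the actual BARI code):

```c
#include <stddef.h>

#define DELAY_LEN 4800          /* e.g. 100 ms at an assumed 48 kHz rate */

typedef struct {
    float buf[DELAY_LEN];       /* circular buffer of past *outputs* */
    size_t pos;
    float feedback;             /* must stay < 1.0 for stability */
} delay_t;

static float delay_process(delay_t *d, float in)
{
    float delayed = d->buf[d->pos];      /* sample from DELAY_LEN ago */
    float out = in + d->feedback * delayed;
    d->buf[d->pos] = out;                /* feedback: store the wet output */
    d->pos = (d->pos + 1) % DELAY_LEN;
    return out;
}
```

Because the buffer stores the wet output rather than the dry input, any noise on the input keeps recirculating at `feedback` gain per pass, which matches the amplification heard in testing.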
Next steps are looking into the LCD display issue, implementing the EQ effect, and finalizing our video, poster, and report.
Here is a quick video demo of playing guitar through BARI:
This week I worked on the UI. I successfully added global control variables for parameters such as channel enables and gain. Everything is controlled by the rotary encoder, button, and a programmable footswitch. As we add new effects or features, the modular infrastructure makes it easy to add new menus and variables.
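To illustrate the kind of modular structure described (all names here are hypothetical, not the actual BARI code), a table-driven menu can bind each label to a global control variable plus its range, so adding an effect is one new row:

```c
#include <stddef.h>

/* Hypothetical sketch: each menu entry points at a global parameter. */
typedef struct {
    const char *label;
    int *value;          /* global control variable */
    int min, max, step;
} menu_item_t;

static int g_gain = 50;          /* example globals (assumed names) */
static int g_ch1_enable = 1;

static menu_item_t main_menu[] = {
    { "Gain",       &g_gain,       0, 100, 1 },
    { "Ch1 Enable", &g_ch1_enable, 0, 1,   1 },
};

/* Rotary-encoder callback: each detent adjusts the selected item, clamped. */
static void menu_adjust(menu_item_t *item, int detents)
{
    int v = *item->value + detents * item->step;
    if (v < item->min) v = item->min;
    if (v > item->max) v = item->max;
    *item->value = v;
}
```

With this shape, the encoder, button, and footswitch handlers never need to know which effect a variable belongs to; they only walk the table.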
Tomorrow I will prepare for the final presentation, which I will deliver on Monday or Wednesday. Before then I need to update some slides and include information on UI validation.
Next week I will work on implementing at least delay and EQ, as well as helping Xingran with integration between the UI and audio signal path. I am currently on schedule, but we have a lot of work to do before the end!
The past two weeks I have been focusing on the LCD UI. I finished the character maps and have created helper functions and a main menu, as well as a bare-bones channel parameter menu. From here, I will include global variables that are controlled by the UI, and I will enable control via the rotary encoder, button, and footswitch.
My progress is behind schedule, but I plan to work on the UI more before Monday. The code is written modularly, so once I write interrupt routines for the user-input controllers, I should be able to integrate everything quickly.
In the next week I hope to finish the UI and maybe implement some of the DSP effects, or do whatever else the team needs. We will probably also work on the final presentation slide deck, which I will be presenting.
Below is a small demo of navigating a channel parameter menu:
This week I focused on interfacing with the LCD. On Wednesday I went to the lab to meet Adam and Xingran and learned how to program Rev1 using the STM32 Discovery Board. Following that, I debugged the flashing process at home. After trying different USB cables, ports, and wires, and performing many hard and soft resets on all devices, I eventually got it to flash (at least some of the time)!
From there I wrote some code to turn the LCD's display on and write characters. Again, this was based on this Arduino code from the manufacturer. I had no luck getting the LCD running, even after some debugging.
On Saturday I went to the lab with Adam and we made a couple of important discoveries:
The parallel/serial select pin was hard-wired to VDD, selecting parallel mode. However, we needed it connected to ground so that we could interface via SPI. Adam fixed this.
My initialization code was running before the peripherals and SPI were set up, so I moved it later in the execution sequence.
We were amazed to see that the LCD displayed pixels after making these changes!
This week I shifted my focus to the UI. I drafted a wireframe of our menu interface, received feedback from the team, and incorporated their changes. Given the specs of our LCD (128×64 pixels) and the use of standard 8×5-pixel characters, all the items on a menu will fit on the screen. Here is the wireframe for your reference: (BARI UI Wireframe)
From there, the team decided I should also work on some of the lower-level implementation of the UI, so I wrote a function that maps characters to pixels. I did not find any code resources online for this, but I did find a nice template for the pixel maps, a tool to quickly get the hex for each pixel map, and an example of interfacing with our LCD. All of these resources have been helpful. Mapping numbers, upper- and lowercase letters, and select symbols to pixel arrays has been a time-consuming process, but it is the foundation of the UI.
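As an illustration of the character-to-pixel mapping (the glyph data and framebuffer layout here are illustrative, not our exact code), a 128×64 monochrome LCD is commonly addressed as eight 8-pixel-tall "pages", so a 5×8 glyph is just five column bytes copied into one page:

```c
#include <stdint.h>

/* Illustrative 5x8 glyph for 'A': five column bytes, one bit per pixel row.
   The real table covers digits, both letter cases, and select symbols. */
static const uint8_t font5x8_A[5] = { 0x7E, 0x11, 0x11, 0x11, 0x7E };

/* Copy one glyph into a framebuffer (assumed 8 pages x 128 columns). */
static void draw_char5x8(uint8_t fb[8][128], int page, int col,
                         const uint8_t glyph[5])
{
    for (int i = 0; i < 5; i++)
        fb[page][col + i] = glyph[i];
}
```

Laying out a menu line then reduces to stepping `col` by six (five glyph columns plus one blank spacer) per character.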
Next week I hope to finish interfacing with the LCD, get some kind of trivial output displayed, and then eventually lay out the menu and write menu layers to the screen.
The past two weeks, I have focused on our MVP DSP effects: chorus, delay, and EQ. I have successfully prototyped delay and EQ with user-controlled parameters that shape the sound according to our design spec. Chorus is very close, but I am running into a bug where a crackling sound is introduced into the audio. I suspect this is an artifact of block processing that I will have to work around. If I can't get it working, I will reach out to Tom Sullivan or maybe Ryan for guidance.
I gave a demo of the MVP effects to my team last Monday and received positive feedback. During our Monday sync-up, Tom suggested wah-wah as an additional effect if we have time, especially since I have a mid-peak filter working. The implementation would be as simple as sweeping the center frequency of the filter.
I am behind on the STM32 implementation of effects, but I am still optimistic I can port my MATLAB code to C quickly.
For this next week I am shifting my focus from DSP effects to UI design. I will start with a wireframe and then write code for the menu layout, assuming I am provided high-level functions such as write_to_lcd(char[] str). This is a change in our schedule, but one we deem more essential to getting a working product than the MVP effects.
Below I have included an audio file with our custom delay effect added with the following settings:
This week I focused on prototyping our MVP effects (delay, chorus, 3-band EQ) in MATLAB. In particular, I tried to write the algorithms such that they can be ported to the microcontroller environment easily. This includes strategies such as mocking block processing by using input buffers that go through for loops.
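For context on what the MATLAB prototypes are mocking, the block-processing shape on the C side looks roughly like this (block size and names are my assumptions): the effect only ever sees one fixed-size buffer at a time, never the whole signal, so any state must persist across calls.

```c
#include <stddef.h>

#define BLOCK_SIZE 256   /* assumed DMA block size */

/* Stand-in for a real effect: processes exactly one block. */
static void process_block(const float *in, float *out, size_t n, float gain)
{
    for (size_t i = 0; i < n; i++)
        out[i] = gain * in[i];
}

/* The outer loop the MATLAB for-loops imitate: feed the signal through
   in fixed-size chunks, as the DMA will on the STM32. */
static void process_signal(const float *sig, float *out, size_t total,
                           float gain)
{
    for (size_t off = 0; off + BLOCK_SIZE <= total; off += BLOCK_SIZE)
        process_block(sig + off, out + off, BLOCK_SIZE, gain);
}
```

Prototyping this way surfaces block-boundary bugs (like stale delay-line state) in MATLAB rather than on the target.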
I made good progress with chorus, setting up a superimposition of delayed copies of an input signal with delays set by an LFO. In its current state, this LFO is more of a triangular wave than a smooth sine wave, since I have not implemented interpolation between discrete points. I have not settled on how many copies of the signal to overlay. Currently the effect is audible, but it's not very strong.
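A minimal sketch of this scheme (names and the sweep range are hypothetical, and only one delayed copy is overlaid): an integer LFO steps up and down like a triangle wave, which is exactly the un-interpolated behavior described above.

```c
#include <stddef.h>

#define MAX_DELAY 1024

typedef struct {
    float buf[MAX_DELAY];   /* circular buffer of past inputs */
    size_t pos;
    int lfo;                /* current delay in samples */
    int dir;                /* +1 or -1: triangle-wave direction */
    int lo, hi;             /* delay sweep range in samples */
} chorus_t;

static float chorus_process(chorus_t *c, float in)
{
    c->buf[c->pos] = in;
    /* read a copy from "lfo" samples ago and mix it with the dry signal */
    size_t tap = (c->pos + MAX_DELAY - (size_t)c->lfo) % MAX_DELAY;
    float out = 0.5f * in + 0.5f * c->buf[tap];

    /* advance the triangular LFO one integer step per sample */
    c->lfo += c->dir;
    if (c->lfo >= c->hi || c->lfo <= c->lo) c->dir = -c->dir;
    c->pos = (c->pos + 1) % MAX_DELAY;
    return out;
}
```

Interpolating between `buf[tap]` and `buf[tap+1]` with a fractional delay would smooth this triangle into something closer to a sine-driven sweep.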
I did more research on 3-band EQs and found that instead of typical LPFs and HPFs, low-shelf and high-shelf filters are used. I have not implemented my three filters yet, but I am thinking of starting with the specs in this report: a low shelf with a center frequency of 200 Hz, a band-pass filter with Q = 1 and a center frequency of 1 kHz, and a high shelf with a center frequency of 5 kHz.
I will model the delay algorithm using this reference from Analog Devices. Their algorithm is in line with what I was thinking to do, and it’s good to see an industry standard reference.
Unfortunately I am still behind on prototyping the MVP effects in MATLAB. I think writing them so they can be ported to the STM32 environment quickly will save time during the actual implementation, so I am not too worried. While I was hoping to give a demo of the MVP effects last Friday, I will have to push the demo to sometime this week.
I worked with Adam to finalize our purchases for Rev1 and created lists for Amazon and DigiKey, filling out the form for the Amazon order.
Next week I am hoping to complete the MATLAB prototyping and begin writing some framework code (buffers, looping structures, etc.) for implementing the effects on the STM32. In addition, I will work with Xingran to develop the outline for the user interface menus.
Check out the audio file to hear a chorus effect in action! This is an excerpt from a recording of the guitar part to Sondheim’s Assassins musical that I did for the school of drama back in January.
This week I focused on researching algorithms for our MVP effects: EQ, reverb, and chorus.

The standard way to implement EQ is to have a filter for each band and apply an amount of gain to each filter to shape the frequency response. This should be simple to implement, requiring filter coefficients and buffers of input and output samples.

Chorus is implemented by overlaying copies of a signal with variable delays controlled by LFOs. In essence it should sound like a "chorus" of singers or instruments all playing at the same time, with slight phase variations. This will require storing some inputs and outputs beyond what we keep in our DMA input and output buffers.

After a lot of research into different reverb implementations (mostly Schroeder-based reverbs such as "Freeverb"), it seems that a versatile reverb with a long reverb time runs into storage problems: holding 1 second of inputs and outputs, as required for a 1-second reverb time, would use just about all of our available SRAM. After some consideration, the team and I decided that delay would be a better option. A 1-second delay is practical, whereas a 1-second reverb time is on the low side. Delay is an IIR system that involves a feedback loop with a gain less than 1.
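As a sanity check on the SRAM concern, assuming a 48 kHz sample rate and 4-byte float samples (both assumptions on my part, not confirmed project numbers), one second of audio in a single buffer costs:

```c
/* Back-of-the-envelope memory cost of a 1-second audio buffer. */
enum {
    SAMPLE_RATE_HZ   = 48000,  /* assumed */
    BYTES_PER_SAMPLE = 4,      /* assumed float32 samples */
    ONE_SEC_BYTES    = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE  /* 192,000 B */
};
```

That is roughly 188 KiB per one-second buffer, which is on the order of the total SRAM of many STM32 parts; reverb would need that storage plus its network of comb/allpass state, whereas a delay effect needs only the single delay line.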
I did not implement any MVP effects in MATLAB, so we updated the schedule to give me another week to actually implement the effects. Prototyping the effects properly will save time in the end as we try to port the effects over to the STM32.
While researching the MVP effects, I looked into their memory usage and talked with Xingran about implementation decisions; we ultimately decided on double input and output buffers served by DMA. We expect a processing latency of about 12 ms.
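A sketch of that double-buffer pattern (the names, the 256-sample block size, and the trivial gain "effect" are all placeholders, not our actual values): the DMA fills one half of the input buffer while the CPU processes the other half, so neither side ever touches the half the other is using.

```c
#include <stddef.h>

#define BLOCK 256   /* assumed samples per half-buffer */

static float in_buf[2][BLOCK];
static float out_buf[2][BLOCK];
static volatile int ready_half = -1;   /* which half the DMA just finished */

/* Called from the DMA half-transfer (half = 0) / full-transfer (half = 1)
   interrupt; real hardware would invoke this, here it is called directly. */
static void dma_isr(int half) { ready_half = half; }

/* Main-loop worker: process whichever half is ready. Returns 1 if a block
   was processed, 0 if nothing was pending. */
static int audio_poll(float gain)
{
    int half = ready_half;
    if (half < 0) return 0;
    ready_half = -1;
    for (size_t i = 0; i < BLOCK; i++)
        out_buf[half][i] = gain * in_buf[half][i];   /* effect placeholder */
    return 1;
}
```

Under this scheme the end-to-end processing latency is on the order of two block periods (fill one half, process it while the next fills), which is where a fixed latency budget like the ~12 ms estimate comes from.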
Since we changed the schedule, my focus will be on implementing our MVP effects in MATLAB and providing the team a demo on Friday.
This week was proposal presentation week! We got to see some cool projects, many of which are audio or signal processing related. We received positive feedback from other groups!
This week I finished the UI trade study, outlining our user interface module's requirements and looking at different options for components. This required research into different kinds of displays (such as the pros and cons of TFT LCDs), and I learned how LCD screens work in the process. One constraint was being able to power the display from a 3.3V supply, which limited our options. From there, I prioritized cost effectiveness and size. We want the UI to be easy to navigate and to enable a user to quickly adjust parameters. I also selected a rotary encoder with a button, though we ultimately ended up getting a newer model of the same encoder for less!
I also worked on the interface between the UI module and the microcontroller. I got my hands dirty reading the LCD screen's datasheet and learning how to utilize the FSMC feature of the microcontroller to efficiently connect the LCD. I researched the pinout for the rotary encoder and switch and found GPIO pins to connect them to. Finally, I researched different footswitches, since we want a toggle switch for quick effect-toggling during a performance. I learned about single-, double-, and triple-pole switches as well as single-, double-, and triple-throw switches. While a 3PDT switch is standard for a lot of guitar pedals, we can have the microcontroller process the toggle, so we really only need a SPST switch. The other thing I looked for was a clicky, tactile feel when toggling, since I personally like that feedback when using an effects processor.
I also worked with Adam and Xingran to finalize a budget for our project, selecting our components.
I looked into implementing the digital effects and have started a game plan for each (EQ, reverb, chorus). Next week I will begin writing MATLAB code to test my architected designs. I'll use the same sample guitar input I provided Adam for the analog effect to hear how the digital effects shape the output. The UI-to-microcontroller interface took longer than expected, so I did not work on the effects as much as I had hoped this week. Progress should be much better now that I can focus on them this week!
This week I did some research on a good choice for our analog effect. I concluded that overdrive/distortion would be the most valuable analog effect due to the difficulty in effectively modeling transistor clipping in digital software. From there, I looked at available reference schematics to see what options would be quick to implement, without the need to re-invent the wheel. I was also hoping to find something relatively unique but fully practical. I found a vintage 70’s analog overdrive effect called the Colorsound Overdriver and loved the sound after watching some YouTube demos of the pedal. Not only did it have a great sounding overdrive, but if you boosted the gain high you could get a nice distortion, and even a fuzz effect at the highest gain. It’s a super versatile effect, and it would be a cool bonus to add to BARI given that clones of the effect cost $100+ and the original can sell for over $1000 (according to Reverb).
I narrowed down our MVP digital effects to EQ, reverb, and chorus. EQ will be useful for guitarists who want to adjust the tone of their Bluetooth speaker, since unlike traditional guitar amps, the speaker won't have bass/treble dials. Reverb will be great for singers or instrumentalists who want a fuller sound, and it is a staple effect among musicians. Chorus is a modulation effect that gives the impression of a "chorus" of guitars playing together, and it combines well with the other effects. The team helped me narrow down some metrics to test the quality of the effects, such as control resolution for adjusting effect parameters. We are thinking of using MATLAB to compare spectrograms of the simulated output of each effect against BARI's output to determine whether we have successfully implemented the effects on BARI.
I also worked on some miscellaneous tasks such as setting up the website and providing information for the use-cases slide of the project proposal. I still need to finish writing about the system UI in our system spec; I am behind on this task, but will get it done early next week. Next week I will provide Adam with a file representing voltages from a guitar input so he can simulate the output of his Colorsound Overdriver circuit, and I'll be available to help with the effect if needed. I will also begin writing MATLAB scripts to simulate our digital MVP effects in software, which will be my focus for the next couple of weeks.