Final Report

It’s been a long semester, and we’ve put in a lot of work, but our project is finally done! Our final report can be found here. Ultimately, we’re pretty proud of what we were able to build this semester, and an honorable mention at the final project demo session was a nice way to cap everything off.

Thanks to everyone who helped us this semester!

Status Update – Matt (05/04/2019)

This week we really ironed out the last few details for our project. Primarily, we finalized our goals for what we hope to have ready for the demo on Monday. This will include demos of our live audio features, covering all digital effect blocks as well as basic analog effects. We also plan to show how our software handles recorded audio.

We also worked a bit on our final report. Ultimately, this will be the last deliverable after our demo on Monday. We will try to find some time on Monday/Tuesday to finish up the remainder.

All in all, I think we’ve had a successful semester. We accomplished most of our project goals (though we have had to make some concessions where necessary) and I’m pretty satisfied with our final product.

Status Update – Team (04/27/19)

This week involved an assortment of minor tasks: fixing small bugs, gathering some statistics about our project, and working on a slideshow for Monday’s presentation. We patched bugs related to how our audio manager parses certain file types and how the backend shuts down after being stopped by a user on the frontend. We had a few users try out our project and collected their feedback, and we took some backend latency measurements, which stacked up nicely against our initial goals, especially for digital effects. Finally, we met for a while on Saturday to develop our presentation slideshow.

Going forward, the next big milestone is Monday’s presentation (slides will appear in next week’s status report or in a new blog post this week). After that, our attention will turn to the poster and final report. We also may end up fixing any minor issues we discover as we prep for the public demo, but for the most part we think our codebase is pretty much in its final state right now.

Status Update – Matt (04/27/19)

This week, we worked on some minor bug fixes and prepared for our final presentation. One important fix was a bug that prevented the backend from shutting down correctly when the user stopped the simulation from the frontend. Ironically, this bug was a direct result of some code that was introduced for testing and never removed. Preparing for the presentation also involved testing to collect data: part of this was gathering latency measurements for our backend circuit simulator/audio processor, and another part was having some people try out our project so we could gauge their reactions and collect feedback.

We also met for a few hours to develop the slideshow that Stephen will be presenting on Monday. As part of this process, we spent some time recording demo audio tracks that we can include as snippets in our presentation. This slideshow is mostly wrapped up, though there are a few finishing touches that we’ll probably add tomorrow afternoon.

Next week, I’ll probably turn my attention to the poster and final report, as well as iron out any last minor details that need to be addressed before the public demo day.

Status Update – Matt (04/20/2019)

This week, we got a lot done in advance of our final in-lab demo on Monday. Though I still had some work to do on the circuit simulator related to implementing our transistor device model, I ultimately decided to shift my attention towards some other high priority issues for our demo. In particular, I helped Stephen out with some important outstanding items on the frontend side of our project.

The first thing I worked on for the frontend was overhauling some parts of the GUI to provide a nicer user experience. This involved making the application full-screen, widening the schematic editor, centering the toolbar, and moving several of the buttons into the native macOS menu. This turned out to be easier said than done, due to some interesting facts I learned about the ElectronJS programming environment. In particular, the part of the code that has access to native menus and the part that renders the GUI (referred to as the main and renderer processes, respectively) run in separate processes with no access to each other’s data. This means I had to write some code to trigger the appropriate IPC via a message-passing model whenever I wanted a native menu option to render a change on the GUI. The actual GUI now looks like this:

The biggest frontend feature I worked on was support for live audio simulations. In particular, users needed to be able to start and stop the simulator from the frontend, as well as tell if the simulator is currently on. I decided that the nicest way to do this was via a modal that pops up when a user selects a live audio simulation:

The sound wave icon in the middle oscillates when a simulation is running to provide users with an indicator about the state of the application. When the user presses “Start”, the frontend launches a new simulator instance to run the user’s circuit against live audio from an instrument. When the user presses “Stop”, the frontend sends a SIGUSR1 signal to the running simulator. On the backend, we installed a SIGUSR1 handler which exits the simulator gracefully when the signal is delivered.
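For reference, here’s a minimal sketch of how such a handler can be structured on the backend. The flag name and loop body are illustrative, not our exact code:

```cpp
#include <csignal>

// Set asynchronously by the signal handler; polled by the processing loop.
static volatile std::sig_atomic_t g_stop_requested = 0;

static void handleSigusr1(int /*signum*/) {
    g_stop_requested = 1;  // only set a flag; do real work outside the handler
}

int main() {
    std::signal(SIGUSR1, handleSigusr1);

    while (!g_stop_requested) {
        // Read a buffer of live audio, run the circuit simulation on it,
        // and write the processed samples to the output device.
    }

    // Flush buffers and close the audio device, then exit gracefully.
    return 0;
}
```

Doing nothing but a flag assignment inside the handler keeps it async-signal-safe; the loop then finishes its current buffer and tears everything down in normal code rather than dying mid-buffer.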

On the backend, I also spent some time adding parser support for netlists containing audio transformation “block effects” such as fuzz pedal blocks, distortion pedals, etc. Ultimately, we were able to get a full end-to-end test in today. This involved designing a circuit on the frontend using new features such as effect blocks, running a live audio simulation on the GUI, being able to pause and run the simulation as desired, and playing back the audio in real time.

Next week, we have our demo on Monday. I also hope to finish implementing transistors this week, but I’ve run into some hard bugs that I haven’t figured out yet. Luckily, I think enough works for a nice demo even without this feature. We also need to prepare for presentations and work on our final report/poster.

Status Update – Team (04/13/2019)

As a team, we got a few main things done this week. Matt successfully got diodes working in the circuit simulator, which allows us to simulate new kinds of audio effects, especially those that involve clipping input signals. Stephen worked on enhancements to the frontend. Joseph worked on live audio and got live playback working correctly, though we still haven’t done a full system test involving live audio from an instrument. This will be our priority on Monday.

We also decided on a few enhancements we want to make, so we all have things to work on this week. Matt is going to add transistors into the circuit simulator, which should wrap up development on that subsystem. Joseph is going to finish testing live audio for the audio processor, and then add in the ability to perform higher level transformations on audio signals using functional blocks as Professor Sullivan suggested. Stephen is going to work on improving the user experience for the frontend over the demo quality version we used for our mid-semester demo, and then add support on the frontend for transistors and audio transformation functional blocks.

Ultimately, last week wasn’t very productive due to Carnival, and we hope to make up for that this week. We’re hoping to make a big push and be code-complete by the end of the week, allowing us to turn our attention to testing and gathering results for our final demo/report.

Status Update – Matt (04/13/2019)

This week I continued to put the finishing touches on the circuit simulator. This included implementing the diode device model. As it turns out, this was a bit trickier than the other components the simulator supports, because the current through a diode is an exponential function of the voltage across it, with several more parameters than a simple relation like Ohm’s law. Ultimately, I took a set of parameters off of the datasheet for the diode used in the guitar pedal we’re trying to simulate, and based my model on those. It wouldn’t be very hard to extend the model to support a wide variety of diode types (this is what SPICE does), but I don’t think that’s necessary for our project, and I prefer to keep things simple until the need for a more complex solution arises.
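For reference, the standard model here is the Shockley diode equation:

$$ I_D = I_S \left( e^{V_D / (n V_T)} - 1 \right) $$

where $I_S$ is the saturation current, $n$ is the ideality factor (these are the sorts of parameters that come off the datasheet), and $V_T \approx 25.85\text{ mV}$ is the thermal voltage at room temperature. The exponential is what makes the device nonlinear and, as noted below, harder on the solver than a linear relation like Ohm’s law.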

Once the diode model was implemented, I tested it on a variety of circuits. A good example is the following circuit, which clips an input signal around 0.7V in both the positive and negative directions, as expected.

Other circuits tested include basic single diode circuits and more complex designs such as bridge rectifiers. Overall, it seems as though everything works pretty well. This should enable us to expand the audio effects we can simulate to include things that rely on clipping, which turns out to be a large portion of guitar pedal effects.

The only issue I noticed was that circuits that include diodes take several more iterations of Newton’s method to converge to a solution than circuits that contain only resistors, capacitors, and inductors. Ultimately, I’m not too worried about this issue, as even complex rectifier circuits still ran plenty fast enough to process entire *.wav files sampled at 44 kHz in times far shorter than the length of the file, indicating that processing live audio should still be a possibility once the audio processor can support it.
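To illustrate why the exponential slows convergence, here is a minimal sketch of the scalar case: Newton’s method solving for the node voltage in a series resistor-diode circuit. The component values and tolerances are made up for illustration; the real simulator solves the full nonlinear circuit system, not a single equation.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double Vin = 5.0;      // source voltage (illustrative value)
    const double R   = 1e3;      // series resistance, ohms
    const double Is  = 1e-14;    // diode saturation current, amps
    const double Vt  = 0.02585;  // thermal voltage at room temperature, volts

    // Solve KCL at the diode node: (Vin - v)/R = Is*(exp(v/Vt) - 1).
    double v = 0.6;  // initial guess near the expected diode drop
    for (int i = 0; i < 100; ++i) {
        double f  = (Vin - v) / R - Is * (std::exp(v / Vt) - 1.0);
        double df = -1.0 / R - (Is / Vt) * std::exp(v / Vt);
        double dv = f / df;
        v -= dv;
        if (std::fabs(dv) < 1e-9) {
            std::printf("converged to v = %.4f V after %d iterations\n", v, i + 1);
            break;
        }
    }
    return 0;
}
```

Because the diode current grows by a factor of e for every ~26 mV increase in v, a guess that overshoots sends Newton’s method on a walk of roughly Vt-sized steps back toward the solution, which is exactly the extra-iteration behavior described above. A purely linear resistive circuit, by contrast, converges in a single step.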

The next step for me is to continue working on the transistor model, which is the last circuit element I plan to support. I started working on transistors this week, but haven’t really gotten much work done since Carnival began. After Carnival, I anticipate this will take me a couple more days and it should be possible to have a finished circuit simulator by the end of the week. After this, I’ll turn my attention to whatever work is unfinished, whether it be testing, helping with the front-end, or whatever else remains to be completed.

Status Update – Team (04/06/19)

This week as a team, we focused primarily on integration ahead of the midpoint demo. This involved several things. First, the circuit simulator was rewritten in C++. The primary reason for this was speed, but it also allowed for easy integration with the audio processor by combining both modules into a single program, producing a single executable. We did this by defining an API that we call the AudioManager, which allows the circuit simulator to read in signals from the audio processor in a consistent fashion regardless of the source. We now refer to the combination of the circuit simulator and audio processor as the “backend” of our project.

Once this step was complete, we moved on to integrating the backend with the frontend, which was fairly simple. The frontend generates a netlist file based on the circuit the user builds, and then uses a Javascript library for running shell commands to invoke an instance of the backend, passing the netlist file path, the input signal source, and the location where the output should be saved as command line arguments. This gives us a simple command line interface that can be tested just as easily from the GUI as from a terminal. While integrating the frontend with the backend, we also discovered and fixed a few bugs relating to how the circuit simulator parses netlists. When all was said and done, we made a new release in our repository (linked here) to act as a snapshot of our progress for the midpoint demo.

During the midpoint demo, we were able to get some feedback about our project that we will be taking into careful consideration. In particular, Professor Sullivan suggested adding a feature that allows users to select some pre-built effects without having to get into low-level circuit details. This is a feature we intend to add, and it should be fairly simple to implement this as a layer between the AudioManager and the circuit simulator.

In the coming week, Joseph plans to extend the audio processor to support live audio and to work on the dual-channel live audio input module we need for our final testing procedure. Matt plans to implement diodes in the circuit simulator and begin working on transistors. Stephen is going to put the finishing touches on V1 of the front-end and then turn his attention toward user studies.


Status Update – Matt (04/06/19)

This week, I got a lot done in advance of the mid-semester demo. The first thing I accomplished was getting nearly all of the circuit simulator rewritten in C++, which turned out to be dramatically faster than the Python version we’ve been working with so far: depending on the circuit, I’ve measured speedups of 50-70x when running the C++ version compiled with -O3 on the same inputs. For simple circuit configurations, we’ve been able to process inputs sampled at 44 kHz lasting tens of seconds in under a second of simulation time. I expect this speed to tail off a bit once we start to deal with increasingly complex circuits, but this was a hopeful sign for our aspirations to eventually operate on live audio signals. The full source code for the simulator can be found in the backend directory of our repository.

We also worked quite a bit on integration this week. For me, this involved multiple steps. First, I worked with Joseph to integrate the circuit simulator with the audio processor. To accomplish this, we decided to create an interface called AudioManager, which provides the circuit simulator with a consistent API to read in signals/voltages regardless of where they may come from. This allows us to use a variety of audio file types or even a live instrument to provide the circuit simulator with an input signal. Regardless of the input source, the AudioManager handles all of the source specific details so that the simulator does not become polluted with logic to interact with audio hardware or parse wav files. I think this integration worked out quite nicely, especially since we can now support new audio sources without having to modify the simulator at all.
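In rough terms, the interface looks something like the sketch below. The method names and signatures here are illustrative rather than our exact code, but they capture the idea: the simulator pulls samples through one virtual interface and never knows what’s behind it.

```cpp
#include <cstddef>
#include <vector>

// Abstract source of input samples for the circuit simulator. Concrete
// subclasses hide whether samples come from a *.wav file, a live
// instrument, or any other source.
class AudioManager {
public:
    virtual ~AudioManager() = default;

    // Fill `buffer` with up to buffer.size() input voltage samples.
    // Returns the number of samples produced; 0 means the source is
    // exhausted (e.g., end of file).
    virtual std::size_t readSamples(std::vector<double>& buffer) = 0;

    // Sample rate of the source, in Hz.
    virtual double sampleRate() const = 0;
};
```

Supporting a new audio source then amounts to writing one more subclass; the simulator itself doesn’t change.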

I also worked with Stephen to integrate the circuit simulator with the frontend. The way this works is by having the frontend use the fork/exec system calls (via a Javascript library) to run an instance of the circuit simulator. All necessary communication happens via a command line interface that I implemented in the circuit simulator. Stephen’s frontend simply provides the simulator with the appropriate arguments, such as the input signal source used to instantiate the AudioManager, the netlist file containing the circuit to be simulated, and the user’s chosen name for the output audio file. Additionally, there is an optional --plot argument, which causes the circuit simulator to pipe simulation results into a Python script for plotting with matplotlib. This is a nice option for debugging or for users who want to see a graphical representation of how their circuit behaves.
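Here’s a minimal sketch of what that command line interface can look like. The argument order and names (other than --plot) are assumptions for illustration:

```cpp
#include <cstdio>
#include <cstring>
#include <string>

int main(int argc, char** argv) {
    if (argc < 4) {
        std::fprintf(stderr,
            "usage: %s <netlist> <input-source> <output-file> [--plot]\n",
            argv[0]);
        return 1;
    }

    std::string netlistPath = argv[1];  // circuit to simulate
    std::string inputSource = argv[2];  // wav file path or live input identifier
    std::string outputPath  = argv[3];  // where the processed audio is written
    bool plot = (argc > 4 && std::strcmp(argv[4], "--plot") == 0);

    // ... instantiate an AudioManager from inputSource, parse the netlist,
    // run the simulation, write outputPath, and, if plot is set, pipe the
    // results to the plotting script (e.g., via popen()).
    return 0;
}
```

Keeping everything behind argv like this is what lets the same binary be driven by the Electron frontend or by hand from a terminal.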

I’m currently working on implementing diodes, which will enable us to expand into clipping effects. I am hoping to have this done by Wednesday of this week, and then I can move into implementing transistors, which is the last type of component we hope to support. Once all component types are integrated, I hope to begin conducting more extensive tests on the accuracy of the simulator. I am certain that more complicated circuits will reveal some bugs and I want to have ample time to fix them.

Status Report – Matt (03/30/2019)

This week, I’ve continued to work on the circuit simulator. I spent some time working with Joseph to get it partially integrated with the audio processor ahead of the mid-semester demo. We succeeded in passing a recorded audio file through the simulator and playing back the result, which was a big milestone for us.

I’ve also started reimplementing the simulator in C++; up to this point I’ve been working in Python to enable a fast development pace and quick debugging. Ultimately, working in C++ enables easier communication with the audio processor and should make the simulator quite a bit faster as the circuits start to get more complicated. I hope to get much of this work done in the next couple of days, and it should be finished before our demo on Wednesday.

We also plan to meet tomorrow to continue refining the product we hope to present at our demo. In particular, we hope to finalize the details of the testing procedure that will let us gather quantitative error measurements for our simulator, now that we have the USB audio interface we need. Going forward, my primary goals are to port the simulator to C++ as quickly as possible (probably a couple days’ work), implement transistors and diodes, and finalize the testing procedure.