Status Update – Team (04/20/2019)

We did a lot of our work this week as a team. Together, we decided how to redesign the frontend's user flow and how to improve the application's look.

We also spent a lot of time working on integration. Continuing from last week, we also integrated the functional blocks (see the individual status updates for more details).

Finally, we spent some time planning our demo for next week.

Status Update – Joseph Kim (04/20/2019)

The past week, I finished up the live aspects of the simulator and implemented a few digital effects: delay, reverb, distortion, and fuzz. I worked with Stephen and Matt so that the frontend now supports chaining digital effects with the actual circuit. All of the digital effects are implemented under the same virtual class, their latencies are fairly low, and they sound fairly convincing. At one point I ran into a problem where the audio sounded very distorted unless I set the input latency very high; the same problem was apparent in Audacity. We found that rerouting the output to the audio interface, and from there to an amp, rather than playing it through the computer's speakers, solved the distortion while still achieving fairly low latency. At this point, our project is close to its final form and ready for the demo.
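To give a sense of the design, here is a minimal sketch of the shared effect interface and how chaining works. The actual implementation is a C++ virtual class; this TypeScript version, and every name in it, is purely illustrative.

```typescript
// Illustrative sketch only: the real effects are C++ subclasses of a single
// virtual base class. All names and signatures here are hypothetical.
abstract class Effect {
  // Process one buffer of samples and return the transformed buffer.
  abstract process(samples: Float32Array): Float32Array;
}

class Delay extends Effect {
  private ring: Float32Array; // circular buffer holding past samples
  private pos = 0;

  constructor(delaySamples: number, private mix = 0.5) {
    super();
    this.ring = new Float32Array(delaySamples); // assumes delaySamples > 0
  }

  process(samples: Float32Array): Float32Array {
    const out = new Float32Array(samples.length);
    for (let i = 0; i < samples.length; i++) {
      const delayed = this.ring[this.pos]; // sample from delaySamples ago
      this.ring[this.pos] = samples[i];
      this.pos = (this.pos + 1) % this.ring.length;
      out[i] = samples[i] + this.mix * delayed;
    }
    return out;
  }
}

// Chaining effects is just composing process() calls over the buffer.
function runChain(effects: Effect[], buffer: Float32Array): Float32Array {
  return effects.reduce((buf, fx) => fx.process(buf), buffer);
}
```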

In the upcoming week, I want to polish up the code and test it. I am also planning to refine the reverb and distortion effects, as they are both a little less sophisticated than I would like, and to make the effects take in variable parameters. Finally, I will be working on our final presentation and preparing for the demo.


Status Update – Stephen He (04/20/2019)

This week is crunch time! We’re nearing the end of the semester, so we decided to push harder as a group to make our project as good as it can be.

The first thing I focused on was “projectizing” the workflow of our application. Originally, our application had the user select the source and destination of all files whenever they wanted to save or load a circuit, or run a simulation and play a sound. We noticed that this becomes really tedious when you want to quickly iterate on your circuit and test your sound. So, we decided to make our application workflow similar to Quartus’s: first, we create a project folder, and all files are saved to predetermined locations within it. This way, the user does not have to specify a destination each time they want to save, load, run a simulation, or play audio. To load a circuit that was previously worked on, all the user has to do is switch the project folder. This also helps the user find where their files are saved. Below is what it looks like when the user first opens our application:

This popup greets the user upon running our application

This menu lets the user immediately choose to load or create a new project, just like in Quartus.
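Under the hood, the key idea is simply that every file path is derived from the project folder, so the user never picks individual destinations. A minimal sketch of what this might look like (the file names here are hypothetical, not our exact layout):

```typescript
// Hypothetical sketch: all paths derive from one project directory,
// so saving/loading never prompts the user for a location.
import * as path from "path";

function projectPaths(projectDir: string) {
  return {
    circuit: path.join(projectDir, "circuit.json"), // saved schematic
    netlist: path.join(projectDir, "circuit.net"),  // generated netlist
    output: path.join(projectDir, "output.wav"),    // simulated audio
  };
}
```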

The next thing I focused on was getting our functional blocks integrated with the frontend. The blocks we focused on are the Fuzz, Delay, and Reverb blocks, and a Distortion block is in progress. I created a new icon for each block and added them to the parts bin on the right side of our interface, letting the user drag and drop them as they would with any other component.

The Fuzz, Reverb, and Delay blocks on the parts bin

These functional blocks can be used to model the input passing through different pedals before reaching the created circuit:

Vin passing through a reverb functional block before hitting the rest of the circuit (resistor can be replaced with any circuit)

This created a problem, however. These functional blocks technically represent things that are outside of our circuit. In the case of the image above, our circuit simulator should see Vin as the output of the reverb, even though in the graphical interface Vin is connected to the reverb's input.

To solve this problem, I implemented a DFS algorithm that traverses from each connection point of Vin to the nearest component that is neither a functional block nor a wire. This involved adding logic that implicitly connects the top-left and top-right nodes of functional blocks (and likewise the bottom-left and bottom-right nodes) together.
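The traversal itself is a standard DFS over the schematic graph. Here is a simplified sketch; the types and field names are hypothetical, not our exact data model:

```typescript
// Simplified sketch: starting from a node, walk through wires and
// functional blocks until we hit a "real" circuit component.
// Types and field names are hypothetical.
interface Component {
  kind: "wire" | "functionalBlock" | "circuitElement";
  neighbors(node: string): string[]; // nodes reachable through this component
}

function findRealNeighbors(
  start: string,
  componentsAt: (node: string) => Component[]
): Component[] {
  const visited = new Set<string>([start]);
  const stack = [start];
  const found: Component[] = [];

  while (stack.length > 0) {
    const node = stack.pop()!;
    for (const comp of componentsAt(node)) {
      if (comp.kind === "circuitElement") {
        found.push(comp); // stop here: this is what Vin actually connects to
      } else {
        // Wires and functional blocks are "transparent": keep traversing.
        // For functional blocks, this is where the implicit top-left/top-right
        // (and bottom-left/bottom-right) node connections come in.
        for (const next of comp.neighbors(node)) {
          if (!visited.has(next)) {
            visited.add(next);
            stack.push(next);
          }
        }
      }
    }
  }
  return found;
}
```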

Finally, I worked on miscellaneous bug fixes and cleaned up our code. Javascript is weird.

Status Update – Matt (04/20/2019)

This week, we got a lot done in advance of our final in-lab demo on Monday. Though I still had some work to do on the circuit simulator related to implementing our transistor device model, I ultimately decided to shift my attention to some other high-priority issues for our demo. In particular, I helped Stephen with some important outstanding items on the frontend side of our project.

The first thing I worked on for the frontend was overhauling some parts of the GUI to provide a nicer user experience. This involved making the application full-screen, widening the schematic editor, centering the toolbar, and moving several of the buttons into the native macOS menu; the updated GUI is shown below. This turned out to be easier said than done, due to some interesting facts I learned about the ElectronJS programming environment. In particular, the parts of the code that have access to native menus and the parts that render the GUI (the main and renderer processes, respectively) run as separate processes with no access to each other's data. This means I had to write some code to trigger the appropriate IPC via a message-passing model whenever I wanted a native menu option to render a change on the GUI.
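As a concrete illustration of the message-passing pattern (the channel name here is hypothetical), the main process sends a message when a native menu item is clicked, and the renderer process listens for it and updates the GUI:

```typescript
// Main process: a native menu item notifies the renderer over IPC.
// The "toggle-toolbar" channel name is hypothetical.
import { BrowserWindow, Menu } from "electron";

function buildMenu(win: BrowserWindow) {
  const menu = Menu.buildFromTemplate([
    {
      label: "View",
      submenu: [
        {
          label: "Toggle Toolbar",
          click: () => win.webContents.send("toggle-toolbar"),
        },
      ],
    },
  ]);
  Menu.setApplicationMenu(menu);
}
```

```typescript
// Renderer process: react to the message by updating the GUI.
import { ipcRenderer } from "electron";

ipcRenderer.on("toggle-toolbar", () => {
  document.getElementById("toolbar")?.classList.toggle("hidden");
});
```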

The biggest frontend feature I worked on was support for live audio simulations. In particular, users needed to be able to start and stop the simulator from the frontend, as well as tell whether the simulator is currently running. I decided that the nicest way to do this was via a modal that pops up when a user selects a live audio simulation:

The sound wave icon in the middle oscillates when a simulation is running to provide users with an indicator about the state of the application. When the user presses “Start”, the frontend launches a new simulator instance to run the user’s circuit against live audio from an instrument. When the user presses “Stop”, the frontend sends a SIGUSR1 signal to the running simulator. On the backend, we installed a SIGUSR1 handler which exits the simulator gracefully when the signal is delivered.
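On the frontend side, this start/stop control boils down to spawning a child process and signaling it. A minimal sketch, with a hypothetical executable name and flag:

```typescript
// Sketch of the frontend's start/stop control over the simulator.
// The executable name and "--live" flag are hypothetical.
import { ChildProcess, spawn } from "child_process";

let simulator: ChildProcess | null = null;

function startSimulation(netlistPath: string) {
  simulator = spawn("./backend", [netlistPath, "--live"]);
}

function stopSimulation() {
  // The backend's SIGUSR1 handler lets it exit gracefully.
  simulator?.kill("SIGUSR1");
  simulator = null;
}
```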

On the backend, I also spent some time adding parser support for netlists containing audio transformation “block effects,” such as fuzz pedal blocks and distortion pedals. Ultimately, we were able to get a full end-to-end test in today. This involved designing a circuit on the frontend using new features such as effect blocks, running a live audio simulation from the GUI, pausing and resuming the simulation as desired, and playing back the audio in real time.

Next week, we have our demo on Monday. I also hope to finish implementing transistors this week, but I’ve run into some hard bugs that I have yet to figure out. Luckily, I think enough works for a nice demo even without this feature. We also have to prepare for our presentations and work on our final report and poster.

Status Update – Stephen He (04/13/2019)

This week, I made some small improvements to the frontend, working on the user interface and workflow of the application. Currently, there are a lot of pop-ups the user has to click through in order to save all of the information related to a circuit. We have decided to make this similar to Quartus: we create a project folder upfront, and then save circuits and project files without the user explicitly defining where each file goes. This way, it’s less of a hassle for the user to use our application.

We also had our reading discussion for chapters 3 and 4 of the Pentium Chronicles book. I really enjoyed this, although I think chapters 1 and 2 were more relatable to our current project.

This week, I spent the majority of my time building booth and enjoying Carnival. As a result, I didn’t get as much done as I had hoped. I plan on pushing a lot harder next week. We ideally want to finish the whole project and leave time for user tests, so next week will be extremely important.

Status Update – Team (04/13/2019)

As a team, we got a few main things done this week. Matt was able to successfully get diodes working in the circuit simulator, which allows us to simulate new kinds of audio effects, especially those that involve clipping input signals. Stephen worked on enhancements to the frontend. Joseph worked on getting live audio to work, and was able to get live playback working correctly, though we still haven’t done a full system test involving live audio from an instrument. This will be our priority on Monday.

We also decided on a few enhancements we want to make, so we all have things to work on this week. Matt is going to add transistors to the circuit simulator, which should wrap up development on that subsystem. Joseph is going to finish testing live audio for the audio processor, and then add the ability to perform higher-level transformations on audio signals using functional blocks, as Professor Sullivan suggested. Stephen is going to work on improving the frontend user experience over the demo-quality version we used for our mid-semester demo, and then add frontend support for transistors and audio transformation functional blocks.

Ultimately, last week wasn’t very productive due to Carnival. We hope to make up for that this week. We’re hoping we can make a big push this week and be code complete by the end of the week, allowing us to turn our attention to testing and gathering results for our final demo/report.

Status Update – Matt (04/13/2019)

This week I continued to put the finishing touches on the circuit simulator, including implementing the diode device model. As it turns out, this was a bit trickier than the other components supported by the circuit simulator, because the current through a diode is an exponential function of its voltage, with several more parameters than a simple relation like Ohm’s law. Ultimately, I took a set of parameters off of the datasheet for the diode used in the guitar pedal we’re trying to simulate, and based my model on those. It wouldn’t be very hard to make the model extensible to support a wide variety of diode types (this is what SPICE does), but I don't think this is necessary for our project, and I prefer to keep things simple until the need for a more complex solution arises.
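For reference, the model is built around the standard Shockley diode equation (this is the textbook form; our particular parameter values come from the datasheet):

I_D = I_S (e^{V_D / (n V_T)} - 1)

where I_S is the saturation current and n is the ideality factor (both taken from the datasheet), and V_T = kT/q ≈ 26 mV is the thermal voltage at room temperature. The exponential is what makes diodes harder to handle than linear elements like resistors.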

Once the diode model was implemented, I tested it on a variety of circuits. A good example is the following circuit, which clips an input signal around 0.7V in both the positive and negative directions, as expected.

Other circuits tested include basic single diode circuits and more complex designs such as bridge rectifiers. Overall, it seems as though everything works pretty well. This should enable us to expand the audio effects we can simulate to include things that rely on clipping, which turns out to be a large portion of guitar pedal effects.

The only issue I noticed was that circuits that include diodes take several more iterations of Newton’s Method to converge to a solution than circuits that contain only resistors, capacitors, and inductors. Ultimately, I’m not too worried about this issue, as even complex rectifier circuits still ran plenty fast enough to process entire *.wav files sampled at 44 kHz in far less time than the length of the file, indicating that processing live audio should still be a possibility once the audio processor can support it.
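For context, at each time step the simulator solves the nonlinear circuit equations F(x) = 0, where x is the vector of unknown node voltages, using the standard Newton iteration:

x^{(k+1)} = x^{(k)} - J_F(x^{(k)})^{-1} F(x^{(k)})

where J_F is the Jacobian of F. The diode's exponential I-V curve makes F far more sensitive to changes in x than linear elements do, which is why extra iterations are needed before the updates converge.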

The next step for me is to continue working on the transistor model, which is the last circuit element I plan to support. I started working on transistors this week, but haven’t really gotten much work done since Carnival began. After Carnival, I anticipate this will take me a couple more days and it should be possible to have a finished circuit simulator by the end of the week. After this, I’ll turn my attention to whatever work is unfinished, whether it be testing, helping with the front-end, or whatever else remains to be completed.

Status Update – Joseph (04/06/19)

This week, I spent a good number of hours integrating parts together. I also restructured a lot of my code so that it fits in better with Matt’s backend components. I worked with Matt and Stephen to finalize what we wanted to have ready by the demo and worked on finishing it. I also spent some time integrating the live audio, though I haven’t had time to test it, as I was busy with other things in the second half of this week.

Moving on, we decided that we also want some digital effects as part of our code, so I’ll be working on that as well. These effects may include clipping for distortion, echo, and more. I also need to work on the live audio, and I imagine spending a good amount of time tweaking parameters, like the size of each buffer or the number of buffers to store, or the implementation itself, to get the results we want.
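As a tiny illustration of the clipping idea (the function name and threshold value are hypothetical):

```typescript
// Hard clipping for a distortion-style effect: samples beyond the
// threshold are flattened, which adds the harmonics we hear as distortion.
// The default threshold here is illustrative.
function hardClip(samples: Float32Array, threshold = 0.3): Float32Array {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = Math.max(-threshold, Math.min(threshold, samples[i]));
  }
  return out;
}
```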

By next week, I hope to have a good amount of what I mentioned done, though with Carnival my progress may be slowed a bit.

Status Update – Stephen (04/06/19)

This week, we had our midsemester demo, so a lot of what I worked on was making the demo as smooth as possible.

I made a lot of changes to our import and export functionality. I fully defined the netlist format for all of our circuit components, and added functionality to let the user specify the files to be used for the circuit simulation and the location of the sound playback output. After doing this, my frontend is now fully integrated with Joseph’s sound module and Matt’s circuit simulator module. Any changes to their parts should require little to no change in how my frontend interacts with them!

I also updated the icons for the circuit simulator frontend. All the icons I used are from flaticon.com.

This week, since we were dealing with integration, we took a deep dive into each other’s code. I spent a while debugging and understanding Matt and Joseph’s components.

Finally, we had a reading assignment for this week. I spent a few hours reading and writing my response.

Status Update – Team (04/06/19)

This week as a team, we focused primarily on integration before the midpoint demo. This involved several things. First, the circuit simulator was rewritten in C++. The primary reason for this was speed, but it also allowed for easy integration with the audio processor by combining both modules into a single program, producing a single executable. This was done by defining an API we call the AudioManager, which allows the circuit simulator to read in signals from the audio processor in a consistent fashion regardless of the source. We now refer to the combination of the circuit simulator and audio processor as the “backend” of our project.

Once this step was complete, we moved on to integrating the backend with the frontend, which was fairly simple. The frontend generates a netlist file based on the circuit the user builds, and then uses a Javascript library for running shell commands to invoke an instance of the backend, passing the netlist file path, the input signal source, and the location where the output should be saved as command line arguments. This gives us a simple command line interface that can be tested just as easily from the GUI as from a terminal.

During the process of integrating the frontend with the backend, we also discovered and fixed a few bugs in how the circuit simulator parses netlists. When all was said and done, we made a new release in our repository (linked here) to act as a snapshot of our progress for the midpoint demo.
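To make the invocation concrete, here is a sketch of how the frontend might shell out to the backend. The executable name and argument order are hypothetical, not our exact CLI:

```typescript
// Hypothetical sketch of the frontend invoking the backend executable
// with the netlist path, input source, and output location as arguments.
import { execFile } from "child_process";

function runSimulation(netlistPath: string, inputWav: string, outputWav: string) {
  execFile("./backend", [netlistPath, inputWav, outputWav], (err, stdout, stderr) => {
    if (err) {
      console.error(`simulation failed: ${stderr}`);
      return;
    }
    console.log(`simulation complete; output written to ${outputWav}`);
  });
}
```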

During the midpoint demo, we were able to get some feedback about our project that we will be taking into careful consideration. In particular, Professor Sullivan suggested adding a feature that allows users to select some pre-built effects without having to get into low-level circuit details. This is a feature we intend to add, and it should be fairly simple to implement this as a layer between the AudioManager and the circuit simulator.

In the coming week, Joseph plans to extend the audio processor to support live audio. Matt plans to implement diodes in the circuit simulator and begin working on transistors. Stephen is going to be putting finishing touches on V1 of the front-end and turning his attention towards user studies. Joseph is going to be working on the dual channel live audio input module we need for our final testing procedure.

Upcoming Schedule: