Wednesday, March 14, 2012

Patch map

The following image shows the 1000 patches of the Matrix-1000, each represented by a small colored square. There are 50 squares per row, so each bank is two rows.

PatchMap

Red squares are patches that either use filter modulation or have a click. These are likely the last features that I will try to simulate.

White squares are patches that have a noise component. Most of them are in the “FX & Perc” bank.

Yellow patches use only square wave DCOs, blue patches use only the variable waveform, and green patches use both the square wave and the variable waveform.

This patch map suggests that it should be possible to simulate most patches even before implementing filter modulation, and that up to one fourth of the patches (there are 250 yellow squares) may be reproducible after implementing just the square wave, the filter and the digital control section (envelopes, LFOs and matrix modulation).

It’s also interesting to note that there are two black squares. These correspond to patches with no oscillators and no noise. How is it possible for these patches to produce sound then? The two patches are called heart and zap and loading them in an editor shows that their filter resonance is set to the maximum possible value, so these patches are using the ability of the filter to self-oscillate and produce a sinusoidal output even when it has no input signal.
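To get a feel for how this can work in the digital domain, here is a minimal sketch (my own illustration, not the Matrix-1000 filter model): a two-pole resonator whose poles sit almost on the unit circle keeps ringing like a sine wave long after a single impulse excites it, which is the discrete-time analogue of an analog filter self-oscillating at maximum resonance.

import numpy as np
from scipy.signal import lfilter

fs = 44100          # sample rate (Hz)
f0 = 440.0          # resonance frequency (Hz)
r = 0.99999         # pole radius; this close to 1 the ringing barely decays

# Two-pole resonator: y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
w0 = 2.0 * np.pi * f0 / fs
a1 = -2.0 * r * np.cos(w0)
a2 = r * r

x = np.zeros(fs)    # one second of silence...
x[0] = 1.0          # ...excited by a single impulse
y = lfilter([1.0], [1.0, a1, a2], x)

# y is a nearly undamped sine at about 440 Hz: sound out of a patch with
# no oscillators and no noise, much like "heart" and "zap" get it from the
# resonance of the real analog filter.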


Sunday, March 4, 2012

Analysis tools – part 1

Studying sounds with enough detail to allow writing a model that can reproduce them means first of all measuring those sounds.

I don’t have expensive lab hardware, but I think a good audio card and some code is all that’s needed. After all, recording tracks with a computer is one of the first steps in any music production that uses a digital audio workstation.

I’m surprised when I read that some virtual analog emulations have been created without access to the original machine, but only based on sound clips recorded by other people. For this project I’ve already collected many thousands of clips, which would be impossible to do without having the synth with me. Maybe the difference is that the goal of those projects was “only” to get the sound right, while I’d also like to reproduce the relationship between the sounds and the patch parameter values (sound generation in the Matrix-1000 is analog, but the sound parameters are digitally controlled and have precise numeric values).

CaptureSoundSmall

Due to the large number of recordings I plan to take and analyze, acquiring the samples in an audio program and saving the files manually would be impractical, so the first thing I did when I started this project was to write some code to record sound from the audio card, analyze it and save the analysis results for further processing.

As a starting point, I used the CaptureSound sample from an old DirectX SDK and, as an exercise, modified it into a Sound Scope application that shows the waveform the synth is outputting in real time.

The screenshot shows the program’s window. The sound waveform is in the upper half, while the lower half shows an implementation detail: the sound’s auto-correlation graph, which is used to detect the instantaneous sound frequency.
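As an illustration of the auto-correlation idea (the real Sound Scope code is not shown here, so the details below, such as range limits and peak picking, are my own assumptions), a frequency detector can be sketched in a few lines of Python:

import numpy as np

def detect_frequency(frame, fs, fmin=40.0, fmax=2000.0):
    # Rough autocorrelation pitch estimate for one analysis frame.
    # Only a sketch of the idea: windowing, interpolation and peak
    # validation are left out.
    frame = frame - np.mean(frame)                 # remove any DC offset
    ac = np.correlate(frame, frame, mode='full')   # full autocorrelation
    ac = ac[len(ac) // 2:]                         # keep non-negative lags only

    lag_min = int(fs / fmax)                       # shortest period to consider
    lag_max = min(int(fs / fmin), len(ac) - 1)     # longest period to consider
    if lag_max <= lag_min:
        return None

    # The lag of the strongest autocorrelation peak is one period of the sound.
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag

# Quick check: a 440 Hz test tone should come out close to 440.
fs = 44100
t = np.arange(2048) / fs
print(detect_frequency(np.sin(2 * np.pi * 440.0 * t), fs))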

This first code building block was then expanded into an application that can send MIDI patches to the Matrix-1000 - slowly varying selected parameters - and record the sounds resulting from each parameter variation. I can start the application before going to sleep; it runs all night recording parameter sweeps, and in the morning I find one or more files full of juicy data awaiting me.
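The real application is Windows/DirectX code, but the overnight sweep logic can be sketched in Python with the mido and sounddevice libraries (both are my choices here, not what the original program uses, and the SysEx bytes below are placeholders rather than the real Matrix-1000 parameter-change message):

import time
import numpy as np
import mido               # assumption: any MIDI library would do
import sounddevice as sd  # assumption: any audio I/O library would do

FS = 44100                # recording sample rate (Hz)
NOTE = 48                 # MIDI note played for every recording

def set_parameter(midi_out, value):
    # Placeholder SysEx: the real Matrix-1000 parameter-change message is
    # defined in its MIDI documentation and is NOT reproduced here.
    midi_out.send(mido.Message('sysex', data=[0x10, 0x06, value]))

def record_note(midi_out, seconds=1.0):
    midi_out.send(mido.Message('note_on', note=NOTE, velocity=100))
    audio = sd.rec(int(seconds * FS), samplerate=FS, channels=1)
    sd.wait()                                    # block until the recording ends
    midi_out.send(mido.Message('note_off', note=NOTE))
    return audio[:, 0]

with mido.open_output() as midi_out:             # first available MIDI output
    clips = {}
    for value in range(64):                      # sweep one parameter, value by value
        set_parameter(midi_out, value)
        time.sleep(0.1)                          # give the synth time to settle
        clips[str(value)] = record_note(midi_out)
    np.savez('sweep.npz', **clips)               # juicy data for the morning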

What to do with all these data? A good first step is to graph them, check that they make sense and get an intuitive feel for how the synth parameters work.

For example, the following graph shows the spectrum of the sound resulting from sending the noise generator output through the low pass filter, for different values of the filter cutoff frequency (this is a logarithmic graph, x is octaves and y is dB):

LowPassSmall

From these graphs it’s possible, for example, to measure the actual cutoff frequency in Hertz corresponding to each value stored in the MIDI SysEx that describes the patch. It’s also possible to design a digital filter approximating the analog filter to the desired degree of accuracy.
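As a sketch of the first step, here is one way to pull a cutoff estimate out of a single noise-through-filter clip, assuming the cutoff is read as the point where the spectrum drops 3 dB below its low-frequency plateau (one plausible convention, not necessarily the one used for the graph above):

import numpy as np
from scipy.signal import welch

def estimate_cutoff(clip, fs, drop_db=3.0):
    # Sketch only: read the cutoff as the first frequency where the power
    # spectrum of the filtered noise falls drop_db below its low-frequency
    # plateau (median level between 20 Hz and 100 Hz).
    freqs, psd = welch(clip, fs=fs, nperseg=4096)
    level_db = 10.0 * np.log10(psd + 1e-20)

    plateau = np.median(level_db[(freqs > 20.0) & (freqs < 100.0)])
    dropped = np.where((freqs > 100.0) & (level_db < plateau - drop_db))[0]
    return freqs[dropped[0]] if dropped.size else None

Once the value-to-Hertz mapping is known, something like scipy.signal.butter can generate a digital low-pass at each measured cutoff, although a Butterworth response is only a rough stand-in for the synth’s resonant analog filter.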

Studying the data sometimes leads to peculiar discoveries: for example, in the above graph it can be seen that when the parameter value is incremented from an odd number to the next (even) number, the actual cutoff frequency increases much less than when it is incremented from an even number to the next (odd) number. It also exposes the occasional bug in the synth implementation, and it often generates even more questions than it answers: for example, is the hump in the low-frequency part of the graph a characteristic of the random noise generator, of the filter response, an artifact or error of the analysis process, or something else entirely?

The measured data needs further processing to generate parameters and code that can reproduce the measured quantities in a digital simulation of the synthesizer. Often I will look for a continuous function that approximates the measurements. The following graph shows the measurements of the sound amplitude for each value of the VCA patch parameter (the little diamonds), and a continuous curve that approximates the measured values almost perfectly. The approximation function used in this case is a logistic curve, and the values of its parameters that best match the experimental data were computed with the Lab Fit software. The result is a mathematical formula that can be used in the virtual analog simulation:

VcaGraphSmall
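For reference, the same kind of logistic fit can be reproduced with scipy.optimize.curve_fit; the parameterization below is a generic logistic and the data points are synthetic placeholders, since the real measurements and the exact formula produced by Lab Fit are not reproduced here:

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, c, d):
    # Generic logistic: a is a floor offset, d the span, c the midpoint,
    # b the steepness. The exact formula Lab Fit produced is not shown on
    # the blog, so this parameterization is an assumption.
    return a + d / (1.0 + np.exp(-b * (x - c)))

# vca_values / measured_amplitudes would come from an overnight sweep; the
# numbers below are made-up placeholders just to make the snippet run.
vca_values = np.arange(64, dtype=float)
measured_amplitudes = logistic(vca_values, 0.0, 0.2, 32.0, 1.0) \
    + np.random.normal(0.0, 0.005, vca_values.size)

params, _ = curve_fit(logistic, vca_values, measured_amplitudes,
                      p0=[0.0, 0.1, 32.0, 1.0])
print('fitted a, b, c, d:', params)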

In part 2, I’ll mention a couple of the math and modeling tools I use.