Roger B. Dannenberg
This page describes how to use the convolution function in
Nyquist. Convolution is used for filtering, reverberation, and
combining sounds.
The convolution function is convolve, which is
called as follows:

    convolve(sound, response)

where sound is the input sound and response is the impulse
response. These parameters will be explained further below.
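As a minimal sketch of the call (the file names here are hypothetical; s-read is the standard Nyquist function for reading a sound file), you could convolve a dry recording with a recorded impulse response and play the result:

play convolve(s-read("dry-voice.wav"), s-read("impulse-response.wav"))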
Convolution works by scaling and shifting the input signal
according to the response signal. For example, consider a very
simple example where the response has only two samples: 0.1, 0.2.
In this case, the input sound is copied and scaled by 0.1 (with no
shift). It is also copied, shifted by one sample, and scaled by
0.2. Then, these two copies are added to form the result.
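To make this concrete, here is a small sketch (not taken from the examples later on this page) that builds the two-sample response with Nyquist's snd-from-array function and applies it to a one-second tone; the 44100 Hz sample rate is an assumption:

function two-tap-response()
  ;; a two-sample impulse response: 0.1 followed by 0.2
  return snd-from-array(0.0, 44100.0, vector(0.1, 0.2))

function two-tap-demo()
  ;; the result is the tone scaled by 0.1 plus a copy scaled by 0.2
  ;; and delayed by one sample
  play convolve(osc(c4), two-tap-response())

exec two-tap-demo()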
Now, consider a slightly more complex response consisting of 0.5,
0, 0, 0, ..., 0, 0, 0.5. Let's assume there are 44,099, or about 1
second's worth, of zeros, and only the first and last sample of
the response are non-zero. The convolution algorithm is the same,
only this time we'll make 44,101 copies, shifting them by 0 to
44,100 samples, and all are scaled by zero except for the first and
last copy, so we can ignore most of them. The interesting part
then is that we get a copy of the sound scaled by 0.5 plus a copy
of the sound scaled by 0.5 and shifted by 44,100 samples, resulting
in a 1-second delay.
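As a sketch of how you might hear this (again assuming a 44100 Hz sample rate, and using snd-from-array to build single-sample sounds), the two-impulse response can be constructed by summing an impulse at time 0 with an impulse at time 1:

function echo-response()
  ;; 0.5 at time 0 and 0.5 one second later; everything between is zero
  return sim(snd-from-array(0.0, 44100.0, vector(0.5)),
             snd-from-array(1.0, 44100.0, vector(0.5)))

function echo-demo()
  ;; the tone is heard immediately and again after a 1-second delay,
  ;; each copy at half amplitude
  play convolve(osc(c4), echo-response())

exec echo-demo()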
That's a lot of work to make a delay! But notice that if we
changed more of the zeros to non-zero values, we could insert lots
of delays, each with a different amplitude. Taken to an extreme --
thousands of delays and scale factors, increasing in density -- we
can describe reverberation. Reverberation can be viewed as a
specific kind of filter. Convolution is a general way to create a
wide variety of filters. In particular, convolution can implement
any finite impulse response, or FIR filter.
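For instance, a crude FIR lowpass filter is an 8-tap moving average; the sketch below (again an illustration, with an assumed 44100 Hz sample rate) builds the taps with snd-from-array and applies them to noise:

function moving-average-8()
  ;; eight equal taps summing to 1 -- a simple FIR lowpass filter
  return snd-from-array(0.0, 44100.0,
                        vector(0.125, 0.125, 0.125, 0.125,
                               0.125, 0.125, 0.125, 0.125))

function fir-demo()
  ;; averaging adjacent samples attenuates high frequencies in the noise
  play convolve(noise(), moving-average-8())

exec fir-demo()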
A common application of convolution is reverberation. You can
search the internet for downloadable impulse responses. You can
also synthesize an impulse response from noise. Here is an example
that applies reverberation, created by convolution, to a short
melody. Because the reverberation is based on noise, it is
very smooth and uniform. Impulse responses based on real rooms (or
even good room models) may have more "character" formed by early
reflections and somewhat irregular reflection patterns.
In the code below, response is a function that computes
a reverberation response -- in this case just an exponentially
decaying noise signal. The melody function uses dmhm-organ
to make a short musical phrase. The convolve function
convolves the melody with the reverb response to create a
reverberated signal.
load "mateos/organ.lsp" ;; defines dmhm-organ(pitch) function response() return noise() * pwev(0.5, 1, 0.001) function melody() return sim(dmhm-organ(c4) ~ 0.5 @ 0, dmhm-organ(bf3) ~ 0.4 @ 0.4, dmhm-organ(f3) ~ 0.5 @ 0.8, dmhm-organ(g4) ~ 1 @ 1.3) function convolve-demo-1() play convolve(melody(), response() ~ 4) exec convolve-demo-1()
The next example has no analog in the acoustical world: We
convolve a musical signal with a speech signal. You can hear
traces of both inputs in the output. Convolution is symmetric, so
you can think of this as either the musical signal reverberated in
a room whose impulse response sounds like speech or a speech
signal reverberated in a room whose impulse response sounds like
music.
function convolve-demo-2()
  play convolve(s-read("../audio/happy.wav"),
                s-read("../audio/pv.wav"))

exec convolve-demo-2()
Source code to run both of these examples is in convolve.sal.