
A. Georgaki and G. Kouroupetroglou (Eds.), Proceedings ICMC|SMC|2014, 14-20 September 2014, Athens, Greece

LC: A New Computer Music Programming Language with Three Core Features

Hiroki NISHINO
NUS Graduate School for Integrative Sciences & Engineering,
National University of Singapore
g0901876@nus.edu.sg

Naotoshi OSAKA
Dept. of Information Systems & Multimedia Design,
Tokyo Denki University
osaka@dendai.ac.jp

Ryohei NAKATSU
Interactive and Digital Media Institute,
National University of Singapore
idmnr@nus.edu.sg

ABSTRACT

This paper gives a brief overview of the three core features of LC, a new computer music programming language we prototyped: (1) prototype-based programming at both levels of compositional algorithms and sound synthesis, (2) the mostly-strongly-timed programming concept and other features with respect to time, and (3) the integration of objects and functions that can directly represent microsounds and the related manipulations for microsound synthesis. As these features correspond to issues in computer music language design raised by recent creative practices, such a language design can benefit both research on computer music language design and the creative practices of our time, as a design exemplar.

1. INTRODUCTION

While the advance of computer technology and programming language research has largely influenced the evolution of computer music languages, issues found in creative practices have also motivated the development of new computer music languages. For instance, "the need for a simple, powerful language in which to describe a complex sequence of sound" in the early days of computer music [13, p.34] led to the invention of the unit-generator concept, which still serves as a core abstraction for digital sound synthesis. In another example, Max and some other languages for IRCAM's Music Workstation were designed with the motivation that "musicians with only a user's knowledge of computers could invent and experiment with their own techniques for synthesis and control" [18].

Therefore, the problems revealed by creative practices can also be regarded as significant design opportunities for a new computer music programming language. In the design and development of LC, a new computer music programming language, we likewise took the issues raised by the creative practices of our time into account. While LC has been partly described in our previous works [14, 15, 16, 17], significant extensions have been made to its original language specification during the design process.

In this paper, we first address three issues in computer music language design raised by the creative practices of our time: (a) the insufficient support for dynamic modification of a computer music program, (b) the insufficient support for precise timing behavior and other features with respect to time, and (c) the difficulty of microsound synthesis programming.

In the following sections, we discuss these problems and how they correspond to the three core features of LC: (1) prototype-based programming, (2) mostly-strongly-timed programming, and (3) the integration of objects and functions for microsound synthesis within its sound synthesis framework, together with related work and a brief discussion.

Such a discussion of the language design and of the issues found in creative practices can benefit further research on computer music languages and the investigation of how creative exploration by computer musicians should be supported by computer music languages.

Copyright: © 2014 Hiroki NISHINO et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2. THREE ISSUES IN TODAY'S COMPUTER MUSIC LANGUAGE DESIGN

2.1 The insufficient support for dynamic modification of a computer music program

Recent computer music practices suggest a significant need for more dynamic computer music programming languages today. For example, live-coding performances [6] involve the creation and modification of computer music programs on-the-fly on stage, even while the programs are being executed. In addition, dynamic patching as seen in reacTable [12] involves the dynamic modification of a sound synthesis graph.

However, many computer music languages still exhibit certain usability difficulties when performing dynamic modification at least at one of these levels. Such difficulties can obstruct further creative musical exploration. Since the degree of support for dynamism in a programming environment depends not just on the design of the library or framework utilized, but also on the basic language design, the language design itself can be substantially limiting; it is therefore highly desirable to consider this issue as one of the important criteria from the earliest stage of the language design process.


2.2 The insufficient support for precise timing behavior and other features with respect to time

The precise timing behavior of a computer music system is a long-standing issue. Even in earlier decades, when a real-time interactive computer music system consisted of a computer and external synthesizer hardware, the slow processing speed of CPUs and the low bandwidth of hardware interfaces motivated research on improving the timing precision required for better musical presentation of live computer music compositions¹. Today, even sample-rate accurate timing behavior is considered desirable. For instance, to render the output of a microsound synthesis technique as theoretically expected, sample-rate accurate timing precision in scheduling microsounds is essential.

While some recent computer music languages provide sample-rate accurate timing behavior, as in ChucK [22] and LuaAV [21], their synchronous behavior can result in the temporary suspension of real-time DSP in the presence of a time-consuming task, as the audio computation is blocked until all the scheduled tasks are finished. Moreover, the features with respect to time that were seen in the computer music languages of earlier eras, such as timing constraints and time-fault tolerance, seem not to be considered in many recent computer music languages; even Impromptu [20], which is a good exception in that it is clearly designed with such considerations, still lacks some desirable features with respect to time. For example, Impromptu cannot handle the violation of execution-time constraints.

As above, the support for precise timing behavior and other features with respect to time is still an issue of significance in today's computer music language design.

2.3 The difficulty in microsound synthesis programming

Broadly speaking, usability difficulties can be caused when the abstractions applied to the software are incompatible with the way a user thinks. As "the co-evolving nature of technology adoption results in new concepts emerging through use of technology", such a gap between the existing abstractions and emerging concepts "may introduce usability difficulties" which did not exist previously [3].

This view may correspond to the unit-generator concept and microsound synthesis, as the latter was brought into practice much later than the establishment of the former; one of the earliest well-known experiments in microsound synthesis is one by Roads in 1974 [19, p.302], long after the invention of the unit-generator concept in 1960 [7, p.26].

Indeed, several researchers have already discussed the gap between the traditional unit-generator concept and microsound synthesis. Bencina discusses such an issue in the object-oriented software design of a software granular synthesizer in [2]. In another example, the design of Brandt's Chronic computer music language is also highly motivated by problems exhibited by the traditional unit-generator concept when describing microsound synthesis techniques [4]. While its application domain focuses only on frequency-domain signal processing and analysis, Wang et al. describe a similar issue when discussing ChucK's unit-analyzer concept [23].

However, the former two works are not very adaptable to the design of a real-time interactive computer music language. The work by Bencina targets stand-alone software rather than language design. Brandt's Chronic is a non-real-time computer music language, whose design still leaves 'an open problem' for application to real-time computer music languages because of its acausal behavior² [4, p.77]. The target domain of ChucK's unit-analyzer concept is only signal processing and analysis in the frequency domain, and it lacks the generality to apply to various microsound synthesis techniques. The substantial necessity for further research on more appropriate abstractions that can tersely describe microsound synthesis techniques thus remains.

3. THREE CORE FEATURES OF LC

3.1 Prototype-based programming at both levels of compositional algorithms and sound synthesis

In prototype-based languages, "each object defines its own behavior and has a shape of its own", whereas "each object is an instance of a specific class" in class-based languages [11, p.151]. Unlike in class-based languages, slots (or fields and methods) can be added to an object dynamically after its creation. Prototype-based languages thus allow a significant degree of flexibility and tolerance with respect to the dynamic modification of a computer program at runtime. The LC language adopts prototype-based programming at both levels of compositional algorithms and sound synthesis, for better support of dynamic modifications to a computer program.

At the compositional algorithm level, Table is provided for prototype-based programming. Figure 1 describes a simple example of prototype-based programming with Table. As shown, LC is a dynamically-typed language and also supports other features such as duck-typing and first-class functions.

LC also supports prototype-based programming at the sound synthesis level. Instead of Table, Patch is provided, which can be utilized to build and modify a unit-generator graph dynamically. Figure 2 (example a) describes an example of creating and modifying a Patch object. As shown in Figure 2 (example b), syntax sugars are provided to make the code more readable. Additionally, a patch can be used as a subpatch (see Figure 3).

¹ FORMULA well represents the research on timing precision and time-related features in its era, even though its target application domain was still a hybrid computer music system consisting of a computer and external MIDI synthesizer(s) [1].
² In Chronic, a future event can influence a result that has already been produced. As Brandt admits, this is a significant obstacle to the adoption of its programming model in a real-time computer music language [4, p.77].


01: //create an object ex nihilo and initialize it.
02: var obj = new Table();
03: obj.balance = 0; //the initial balance is 0.
04: //attach the methods to the object.
05: obj.deposit = function (var self, amount){
06:   self.balance += amount;
07:   return self;
08: };
09: obj.withdraw = function (var self, amount){
10:   self.balance -= amount;
11:   return self;
12: };
13: obj.showBalance = function (var self){
14:   println("current balance:" .. self.balance);
15:   return self;
16: };
17: //deposit and print.
18: obj.deposit(obj, 1000);
19: obj.showBalance(obj); //this prints out '1000'.
20: //obj->method(a, b, c) is a syntax sugar of
21: //obj.method(obj, a, b, c).
22: obj->withdraw(750);
23: obj->showBalance(); //this prints out '250'.

Figure 1. An example of prototype-based programming at the level of compositional algorithms in LC.

Example (a)
01: //create a patch object.
02: var p = new Patch();
03:
04: //create ugens and assign them to the slots.
05: p.src = new Sin~(freq:440);
06: p.rev = new Freeverb~();
07: p.dac = new DAC~();
08:
09: //make connections.
10: p->connect(\src, \defout, \rev, \defin);
11: p->connect(\rev, \defout, \dac, \defout);
12:
13: //'compile' the patch to reflect the above.
14: p->compile();
15: //play the patch and wait for 1 sec.
16: p->start();
17: now += 1::second;
18:
19: //modify the unit-generator graph.
20: p.src = new Phasor~(freq:1760);
21: p->connect(\rev, \defout, \dac, \ch1);
22: p->disconnect(\rev, \defout, \dac, \defout);
23: p->compile();

Example (b)
01: //the patch statement can create and connect
02: //ugens at once and then perform compilation.
03: var p = patch {
04:   //`=>' builds a connection.
05:   src:Sin~(freq:440) => rev:Freeverb~()
06:       => dac:DAC~();
07: };
08:
09: //play the patch and wait for 1 sec.
10: p->start();
11: now += 1::second;
12:
13: //modify the unit-generator graph.
14: update_patch(p){
15:   src:Phasor~(freq:1760);
16:   //`=|' can be used for disconnection.
17:   rev =| dac;
18:   //the inlet & outlet can be given as below.
19:   rev {\defout => \ch1} dac;
20: };

Figure 2. An example of prototype-based programming at the level of sound synthesis in LC.

01: //Inlet~ and Outlet~ can be used in a subpatch.
02: var s = patch {
03:   defin:Inlet~() {\defout => \amp} Sin~(440)
04:       => defout:Outlet~();
05: };
06: //a simple tremolo effect. the above 's'
07: //is given as a subpatch (`sub:s' on line 09).
08: var p = patch {
09:   amp:Sin~(freq:5) => sub:s => dac:DAC~();
10: };
11: p->start();

Figure 3. An example of a subpatch in LC.

3.2 Mostly-strongly-timed programming and other features with respect to time

3.2.1 Mostly-strongly-timed programming

The ideal synchronous hypothesis underlies the strongly-timed programming concept (and other similar synchronous approaches). It assumes that "all computation and communications are assumed to take zero time (that is, all temporal scopes are executed instantaneously)" and that, "during implementation, the ideal synchronous hypothesis is interpreted to imply the system must execute fast enough for the effects of the synchronous hypothesis to hold" [5, p.360]. In a computer music language designed with such a synchronous approach, this assumption can be invalidated when the deadline for the next audio computation is missed because of a time-consuming task. This invalidation leads to the temporary suspension of audio output, which is undesirable for computer music programs. As this problem in strongly-timed programming is rooted in the underlying concept of the ideal synchronous hypothesis, the temporary suspension of audio output in the presence of a time-consuming task is inevitable without some extension to the original concept.

LC proposes a new programming concept, mostly-strongly-timed programming, which extends strongly-timed programming with explicit context switching between synchronous/non-preemptive behavior and asynchronous/preemptive behavior. When the current context of a thread is asynchronous/preemptive, the underlying scheduler can suspend the execution of the thread at an arbitrary time, even without the explicit advance of time.

Thus, mostly-strongly-timed programming allows the time-consuming part of a task to be executed in the background without suspending real-time DSP, while maintaining the precise timing behavior of strongly-timed programming. To switch the context explicitly, the sync and async statements can be used. These statements execute the following statement (or compound statement) in the synchronous/non-preemptive and the asynchronous/preemptive context respectively. The two statements can be nested. Figure 4 describes a simple example of mostly-strongly-timed programming.

3.2.2 Other features with respect to time

3.2.2.1 Timing constraints

LC can express both start-time constraints and execution-time constraints with sample-rate accuracy. For start-time constraints, both patch and Thread objects can be given an offset to the start time as an argument. For execution-time constraints, the within-timeout statement is provided. Figure 5 and Figure 6 describe these features respectively. As shown, when the code consumes more time than the constraint given by a within statement during the execution of its following statement (or block of statements), execution immediately jumps to the statement (or block of statements) in the matching timeout block. When timeout is omitted, the code simply jumps to the next statement after the within statement. As seen in Figure 7, execution-time constraints can be correctly nested.


3.2.2.2 Time-tagged message communication

In LC, the message-passing model is applied to inter-thread communication. When a message is sent out, the delivery timing of the message can be specified. Figure 8 describes an example of message passing in LC. As shown, when the '<-' operator is used for message passing, the delivery time or a timing offset can be given. When a value of the type time is passed, it is interpreted as the delivery time. If the value is of the type duration, it is interpreted as a timing offset.

01: //'sync' is the default context. create a patch
02: //to make the suspension of DSP audible.
03: var p = patch {
04:   Sin~() => DAC~();
05: };
06: p->start();
07: //loading large files and extracting wavesets.
08: //as DISK I/O can be time consuming, this can
09: //temporarily suspend the real-time output.
10: LoadSndFile(0, "/large_snd_file.aiff");
11: var wavesets = ExtractWavesets(0);
12:
13: //performing it in `async'.
14: async {
15:   //as this block can be preempted without
16:   //the advance of logical time, the suspension
17:   //of the audio computation does not occur.
18:   LoadSndFile(0, "/large_snd_file.aiff");
19:   wavesets = ExtractWavesets(0);
20: }
21:
22: //sync/async can be nested freely.
23: sync {
24:   //now in the synchronous context.
25:   some_function_call(1, 2, 3);
26:
27:   //switch to the asynchronous context.
28:   async {
29:     some_other_function_call(4, 5);
30:     //switch to the synchronous context again.
31:     sync {
32:       yet_another_function_call(4, 5);
33:     }
34:     //now back to the asynchronous context.
35:     println("done.");
36:   }
37:   //now back to the synchronous context.
38:   println("bye!");
39: }

Figure 4. An example of mostly-strongly-timed programming in LC.

01: //giving the start-time offset to a patch.
02: var p = patch {
03:   Sin~(880) => DAC~();
04: };
05: //the patch starts 1 second later.
06: p->start(offset: 1::second);
07:
08: //giving the start-time offset to a thread.
09: //create a first-class function.
10: var f = function(var message){
11:   println("message : " .. message);
12: };
13:
14: //create a thread by LC's `@' operator.
15: var thread = f@("Hello, world!");
16: //the thread starts executing after 2 seconds.
17: thread->start(offset: 2::second);

Figure 5. An example of start-time constraints in LC.

01: //giving the execution-time constraints.
02: within(2::second){
03:   var cnt = 0;
04:   while(true){
05:     println("count : " .. cnt);
06:     now += 0.5::second;
07:     cnt += 1;
08:   }
09:   //the below code is never reached.
10:   println("done.");
11: }
12: timeout {
13:   println("timeout!");
14: }
15: //the 'timeout' block can be omitted.
16: within(3::second){
17:   async while(true) println("*");
18: }

Figure 6. An example of execution-time constraints in LC (1).

01: within(1::second){
02:   within(2::second){
03:     //the code jumps to the outer timeout block
04:     //exactly after 1 second.
05:     now += 3::second;
06:   }
07:   //this timeout block will never be reached.
08:   timeout {
09:     println("the inner 'timeout'.");
10:   }
11: }
12: //the code jumps to the below block as expected.
13: timeout {
14:   println("the outer 'timeout'.");
15: }

Figure 7. An example of execution-time constraints in LC (2).

01: //a function to be launched as a thread.
02: var f = function() {
03:   var thread = GetCurrentThread();
04:   while(true){
05:     //receive a message in the blocking mode.
06:     var msg = thread->recv(\blocking);
07:     if (msg == \quit){
08:       break;
09:     }
10:     println("message :" .. msg);
11:   }
12:   println("quit.");
13:   return;
14: };
15:
16: //create and start a thread.
17: var thread = f@();
18: thread->start();
19:
20: //sending messages...
21: //deliver the message immediately.
22: thread <- "Hello!";
23:
24: //deliver the message at the given 'time'.
25: thread <- @now + 1::second, "1 second passed";
26:
27: //deliver the message after the given duration.
28: thread <- @2::second, "2 seconds passed";
29: thread <- @3::second, \quit;

Figure 8. An example of time-tagged inter-thread message communication in LC.

3.3 The integration of the objects and library functions that can directly represent microsounds and the related manipulations for microsound synthesis

The sound synthesis framework of LC integrates objects and functions that can directly represent microsounds and the related manipulations for microsound synthesis. LC was first designed as a hosting language to enclose the LCSynth sound synthesis language [15, 17], yet a significant degree of modification has since been made to the sound synthesis framework.
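Before turning to the LC examples, the underlying programming model can be stated plainly: a microsound is simply a short array of samples that a program may window, resample, and place at an exact sample offset in the output. The following Python/NumPy fragment is only our own illustration of that idea under these assumptions; none of the names below are part of LC or LCSynth:

    import numpy as np

    SR = 44100  # sample rate in Hz

    def make_grain(freq, n_samples):
        # a microsound: a plain array of samples (here, a windowed sinusoid)
        t = np.arange(n_samples) / SR
        return np.sin(2 * np.pi * freq * t) * np.hanning(n_samples)

    def schedule(output, grain, onset):
        # overlap-add the grain into the output at an exact sample offset
        end = min(onset + len(grain), len(output))
        output[onset:end] += grain[:end - onset]

    # one second of synchronous granular synthesis: identical grains
    # spaced regularly at a quarter of the grain duration (cf. Figure 9)
    out = np.zeros(SR)
    grain = make_grain(440.0, 512)
    for onset in range(0, len(out) - len(grain), len(grain) // 4):
        schedule(out, grain, onset)

Scheduling here is expressed directly in samples rather than at unit-generator block boundaries; the LC objects and functions described below are intended to provide this kind of control within a real-time language.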


In the current version of LC, the microsound synthesis objects and functions are completely separated from the unit-generator sound synthesis framework. However, the basic programming model for microsound synthesis in LCSynth, as described in [17], is still applicable to LC programs.

In LC, Samples is the object used to represent a single microsound. Samples is an immutable object, which contains the sample values within. There is no limitation on the sample size³. SampleBuffer is a mutable version of Samples. These two objects are mutually convertible by calling the toSamples and toSampleBuffer methods.

01: //instantiate a new SampleBuffer object and
02: //fill it with a sinewave of 4 cycles (256-samp period).
03: var sbuf = new SampleBuffer(1024);
04: for (var i = 0; i < sbuf.size; i+=1){
05:   sbuf[i] = Sin(3.14159265359 * 2 *
06:       (i * 4.0 / sbuf.size));
07: }
08:
09: //create a grain.
10: //first convert it to a Samples object.
11: var tmp = sbuf->toSamples();
12: //apply a hanning window.
13: var win = GenWindow(tmp.dur, \hanning);
14: var grn = tmp->applyEnv(win)->resample(440);
15:
16: //perform synchronous granular synthesis.
17: within(5::second){
18:   while(true){
19:     PanOut(grn, 0.0); //0.0 = center.
20:     now += grn.dur / 4;
21:   }
22: }

Figure 9. An example of synchronous granular synthesis in LC.

01: //load the sound file onto Buffer No.0.
02: LoadSndFile(0, "source.aif");
03:
04: //perform sound synthesis for 2 seconds.
05: within(2::second){
06:   //these are the synthesis parameters.
07:   var pitch  = 2;
08:   var rpos   = 0::second;
09:   var grnsize= 512;
10:   var grndur = grnsize::samp;
11:   var win    = GenWindow(grndur, \hanning);
12:   var rdur   = grndur * pitch;
13:
14:   //perform pitch-shifting.
15:   while(true){
16:     //read the sound fragment.
17:     var snd = ReadBuf(0, rdur, offset: rpos);
18:
19:     //resample and apply an envelope.
20:     var tmp = snd->resample(grnsize);
21:     var grn = tmp->applyEnv(win);
22:
23:     //output the grain. advance the read pos.
24:     PanOut(grn);
25:     rpos += grn.dur / 2;
26:
27:     //wait until the next timing.
28:     now += grn.dur / 2;
29:   }
30: }

Figure 10. An example of pitch-shifting by granulation in LC.

Figure 9 and Figure 10 describe simple examples of synchronous granular synthesis⁴ and of pitch-shifting by granulation [19, p.127] in LC, respectively. As seen on line 05 in Figure 9, each sample within Samples and SampleBuffer is directly accessible by the '[]' operator.

Figure 11 shows a pictorial representation of waveset harmonic distortion. As shown, each waveset⁵ is resampled to produce the harmonics of the original waveset and then overlap-added to the original after being weighted. Figure 12 shows a simple example with only the second harmonic, not weighted.

Figure 11. A pictorial representation of the waveset harmonic distortion technique.

01: //load the sound file and extract wavesets.
02: LoadSndFile(0, "/sound/sample1.aif");
03: var wvsets = ExtractWavesets(0);
04:
05: //perform a simple waveset harmonic distortion.
06: for (var i = 0; i < wvsets.size; i += 1){
07:   //resample the waveset at the given index
08:   //so as to create the 2nd harmonic.
09:   var orig = wvsets[i];
10:   var octup= orig->resample(orig.size / 2);
11:
12:   //schedule the original.
13:   WriteDAC(orig);
14:   //schedule two 2nd harmonics. give the offset
15:   //to schedule another right after the 1st one.
16:   WriteDAC(octup);
17:   WriteDAC(octup, offset:octup.dur);
18:
19:   //sleep until the next timing.
20:   now += orig.dur;
21: }

Figure 12. An example of waveset harmonic distortion in LC.

Figure 13 describes almost the same example of waveset harmonic distortion, but with a triangle envelope applied to the entire output. As shown, a Samples object can be written directly into the input of a unit-generator (lines 16 to 22) and the output of a unit-generator can be taken out as a Samples object (line 24). Figure 14 shows another example of waveset harmonic distortion. This example also applies reverberation together with envelope shaping. As shown, a patch can be used in the same manner as in the Figure 13 example. Furthermore, as seen in lines 31 to 46 of the Figure 14 example, if a patch is active, the patch automatically reads the given input and outputs the processed sound to the DAC output.

Thus, the collaboration between the unit-generator concept and LC's microsound synthesis abstraction can be achieved quite easily.

³ However, an out-of-memory exception is thrown if the memory allocation fails when creating a Samples or SampleBuffer object.
⁴ In synchronous granular synthesis, the sound "results from one or more stream of grain" and "the grains follow each other at regular intervals" [19, p.93].
⁵ A waveset is defined as "the distance from a zero-crossing to a 3rd zero-crossing" [25, Appendix II p.50]. In Figure 11 (left), each waveset is separated by grey lines.
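Because Figure 11 is purely pictorial and cannot be reproduced here, the waveset harmonic distortion operation used in Figures 11 to 14 can also be summarized outside LC. The following Python/NumPy sketch is our own illustration under the waveset definition in footnote 5; none of these function names belong to LC:

    import numpy as np

    def extract_wavesets(signal):
        # a waveset runs from one upward zero-crossing to the next,
        # i.e. it spans three successive zero-crossings (cf. footnote 5)
        up = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0] + 1
        return [signal[a:b] for a, b in zip(up[:-1], up[1:])]

    def resample(waveset, new_len):
        # linear-interpolation resampling of a single waveset
        old_x = np.linspace(0.0, 1.0, num=len(waveset))
        new_x = np.linspace(0.0, 1.0, num=new_len)
        return np.interp(new_x, old_x, waveset)

    def waveset_harmonic_distortion(signal, weight=0.5):
        signal = np.asarray(signal, dtype=float)
        out = np.zeros(len(signal))
        pos = 0
        for ws in extract_wavesets(signal):
            half = resample(ws, max(1, len(ws) // 2))   # the 2nd harmonic
            out[pos:pos + len(ws)] += ws                 # the original waveset
            out[pos:pos + len(half)] += weight * half    # 1st copy of the harmonic
            out[pos + len(half):pos + 2 * len(half)] += weight * half  # 2nd copy
            pos += len(ws)
        return out

In LC the same steps, waveset extraction, per-waveset resampling, and sample-accurate scheduling via WriteDAC or a patch input, are expressed directly, as Figures 12 to 14 show.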


To add to this, FFT/IFFT can also be performed within the same microsound synthesis framework. Figure 15 describes a simple cross-synthesis example in LC.

01: //load the sound file and extract wavesets.
02: LoadSndFile(0, "/sound/sample1.aif");
03: var wvsets = ExtractWavesets(0);
04:
05: //create a triangle envelope ugen. trigger it.
06: var env = new TriEnv~(2::second);
07: env->trigger();
08:
09: //perform a simple waveset harmonic distortion.
10: for (var i = 0; i < wvsets.size; i += 1){
11:   //resample the waveset at the given index
12:   //so as to create the 2nd harmonic.
13:   var orig = wvsets[i];
14:   var octup= orig->resample(orig.size / 2);
15:
16:   //write the original to the ugen input.
17:   env->write(orig);
18:   //write two 2nd harmonics. give the offset
19:   //to schedule another right after the 1st one.
20:   env->write(octup);
21:   env->write(octup, offset:octup.dur);
22:
23:   //read the output of the ugen. send it to dac.
24:   var out = env->pread(orig.dur);
25:   WriteDAC(out);
26:   //sleep until the next timing.
27:   now += wvsets[i].dur;
28: }

Figure 13. An example of waveset harmonic distortion in LC, with the triangle envelope applied to the entire output.

01: //load the sound file and extract wavesets.
02: LoadSndFile(0, "sample2sec.aif");
03: var wvsets = ExtractWavesets(0);
04: //create a patch and trigger the envelope.
05: var pat = patch {
06:   defin:TriEnv~(2::second) => Freeverb~()
07:       => defout:Outlet~();
08: };
09: pat.defin->trigger();
10:
11: //perform a simple waveset harmonic distortion.
12: for (var i = 0; i < wvsets.size; i += 1){
13:   //resample the waveset at the given index
14:   //so as to create the 2nd harmonic.
15:   var orig = wvsets[i];
16:   var octup= orig->resample(orig.size / 2);
17:
18:   //write to the patch's default input.
19:   pat->write(orig);
20:   pat->write(octup);
21:   pat->write(octup, offset:octup.dur);
22:
23:   //read the output of the patch. send it to dac.
24:   var out = pat->pread(orig.dur);
25:   WriteDAC(out);
26:   //sleep until the next scheduling timing.
27:   now += wvsets[i].dur;
28: }
29:
30: //swap the outlet with DAC and play the patch.
31: update_patch(pat) {
32:   defout:DAC~();
33: };
34: pat.defin->trigger();
35: pat->start();
36:
37: //perform a simple waveset harmonic distortion.
38: for (var i = 0; i < wvsets.size; i += 1){
39:   var orig = wvsets[i];
40:   var octup= orig->resample(orig.size / 2);
41:
42:   pat->write(orig);
43:   pat->write(octup);
44:   pat->write(octup, offset:octup.dur);
45:
46:   //as the patch is active, there is no need to
47:   //read the patch and send it to dac.
48:   now += wvsets[i].dur;
49: }

Figure 14. An example of waveset harmonic distortion in LC, with the triangle envelope and reverberation applied.

01: //load the sound files onto the buffers.
02: LoadSndFile(0, "/sound/sound1.wav");
03: LoadSndFile(1, "/sound/sound2.wav");
04: //the duration of each FFT/IFFT window and
05: //the number of the overlapping windows.
06: var dur = 1024::samp;
07: var ovlp= 4;
08:
09: //process 800 frames.
10: for (var i = 0; i < 800; i += 1){
11:   //first, extract snd fragments from the buffers.
12:   var src1 = ReadBuf(0, dur, offset:i * dur / ovlp);
13:   var src2 = ReadBuf(1, dur, offset:i * dur / ovlp);
14:
15:   //perform FFT. PFFT applies a window and returns
16:   //an array of Samples objects [magnitude, phase].
17:   var pfft1 = PFFT(src1, \hanning);
18:   var pfft2 = PFFT(src2, \hanning);
19:
20:   //cross synthesis.
21:   var ppved = pfft1[0]->mul(pfft2[0]);
22:
23:   //perform IFFT and write to the sound output.
24:   var pifft = PIFFT(ppved, pfft1[1], \hanning);
25:   //wait until the next timing.
26:   now += src1.dur / ovlp;
27: }

Figure 15. An example of cross-synthesis in LC.

4. DISCUSSION

4.1 Prototype-based programming in LC

As briefly mentioned in Section 2.1, while there exists a need for a more dynamic computer music language, existing computer music languages exhibit certain problems at least at either the level of sound synthesis or the level of compositional algorithms.

For instance, as ChucK is a statically-typed, class-based language, it is not well suited to dynamic modification at runtime. Assume a variable src is assigned a SinOsc unit-generator; one cannot simply assign a Phasor unit-generator to src for replacement, since the types of these two objects differ. Using the common parent class UGen as the type of src would hinder access to the fields or methods that exist in SinOsc or Phasor but not in UGen. Furthermore, ChucK shows a certain degree of viscosity⁶ in the modification of a synthesis graph, as it is required first to disconnect the connections to the unit-generator to be replaced and then to rebuild the connections to a new unit-generator. This is because ChucK builds the connections between the instances of the unit-generators rather than between the variables.

SuperCollider [24] seems fairly dynamic in its basic language concept, yet its Just-in-Time programming library [24, chapter 7] exhibits a different kind of viscosity against the dynamic modification of a synthesis graph. In Just-in-Time programming, while there is no necessity for reconnection as in ChucK, the modification of a synthesis graph is allowed only at the points where a proxy object is utilized. When a modification must be made where a proxy object is not used, it can require a considerable degree of recoding.

⁶ Viscosity is defined as "resistance to change: the cost of making small changes" and it "becomes a problem in opportunistic planning when the user/planner changes the plan" [3].
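The contrast that this comparison rests on — whether graph connections refer to unit-generator instances or to named slots — can be sketched in plain Python, independent of any of these languages. This is only an illustration of the two policies, not of ChucK's, SuperCollider's, or LC's actual internals:

    # Policy A: edges reference instances. Replacing a node means every
    # edge that mentions the old instance must be found and rebuilt.
    class InstanceGraph:
        def __init__(self):
            self.edges = []                 # list of (source, sink) objects

        def connect(self, src, dst):
            self.edges.append((src, dst))

        def replace(self, old, new):
            self.edges = [(new if a is old else a, new if b is old else b)
                          for a, b in self.edges]

    # Policy B: edges reference named slots, and a slot table maps names
    # to instances. Replacement is a single assignment; no edge is touched.
    class SlotGraph:
        def __init__(self):
            self.slots = {}                 # name -> unit-generator instance
            self.edges = []                 # list of (source name, sink name)

        def connect(self, src_name, dst_name):
            self.edges.append((src_name, dst_name))

        def replace(self, name, new):
            self.slots[name] = new

LC's Patch follows the second policy, which is why the single reassignment on line 20 of Figure 2 (example a) suffices to swap a unit-generator.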


Figure 16 briefly illustrates a typical viscosity problem in Just-in-Time programming: even just to make c and d in the synthesis graph (on lines 07 and 08) replaceable, almost the whole code must be rewritten as on lines 17 through 26.

01: p = ProxySpace.push // if needed
02:
03: ~a = Lag.ar(LFClipNoise.ar(2 ! 2, 0.5, 0.5), 0.2);
04: (
05: ~b = {
06:   var c, d;
07:   c = Dust.ar(20 ! 2);
08:   d = Decay2.ar(c, 0.01, 0.02, SinOsc.ar(11300));
09:   d + BPF.ar(c * 5, ~a.ar * 3000 + 1000, 0.1)
10: };
11: );
12:
13: ~b.play;
14:
15: // the refactored code from above
16:
17: (
18: ~a = {
19:   var a;
20:   a = Lag.ar(LFClipNoise.ar(2 ! 2, 0.5, 0.5), 0.2);
21:   BPF.ar(~c.ar * 5, a * 3000 + 1000, 0.1);
22: }
23: );
24: ~c = {Dust.ar(20 ! 2)};
25: ~d = {Decay2.ar(~c.ar, 0.01, 0.02, SinOsc.ar(11300))};
26: ~b = ~a + ~d;
27:
28: ~b.play;

Figure 16. Refactoring a synthesis graph at runtime in SuperCollider [24, p.212].

Impromptu also supports a considerable degree of dynamic modification at the compositional algorithm level, as it is an internal domain-specific language⁷ built on LISP, which is highly dynamic. At the sound synthesis level, it depends on Apple's Audio Unit framework, and the dynamic modification of the connections between Audio Units is also supported. However, the replacement of audio units must involve the removal of existing connections and requires reconnection, as in ChucK.

On the contrary, LC adopts the concept of prototype-based programming at both the compositional algorithm and sound synthesis levels. As the connections in a synthesis graph in LC's patch are made between the slots and not between the instances of unit-generators, the replacement of unit-generators can be performed simply by an assignment. The modification of a synthesis graph can be performed quite simply, as shown in Figure 2.

4.2 Mostly-strongly-timed programming and other features with respect to time in LC

As already discussed in Section 2.2, in computer music languages designed with the synchronous approach, a time-consuming task can easily lead to the temporary suspension of real-time DSP, as seen in ChucK, LuaAV and the like. However, if the sound synthesis thread (or process) is separated from the thread (or process) that performs compositional algorithms, the synchronization between them will be imprecise and sample-rate accurate timing behavior will be unrealizable on today's computer systems; thus, languages such as SuperCollider or Impromptu fail to provide sample-rate accurate timing behavior. LC's mostly-strongly-timed programming provides one solution to this problem by extending strongly-timed programming with the explicit switching between synchronous and asynchronous contexts described in the previous section.

Many computer music languages lack certain desirable features with respect to time. While the designers of Impromptu clearly take such features into consideration and provide the capability for timing constraints, Impromptu does not provide time-fault tolerance and cannot handle the violation of execution-time constraints. Impromptu's framework for handling execution-time constraints has another significant problem in that it cannot describe nested execution-time constraints. Moreover, as Impromptu performs sound synthesis in a different thread from the threads for compositional algorithms, the timing behavior of Impromptu is not very precise in comparison with other languages designed with the synchronous approach.

On the contrary, LC provides sample-rate accuracy in its timing behavior. Both start-time and execution-time constraints are handled with sample-rate accuracy. Start-time constraints will never be violated, owing to LC's synchronous behavior. With the within-timeout statement, LC can handle the violation of execution-time constraints, and execution-time constraints can be correctly nested.

4.3 The integration of the objects and library functions/methods for microsound synthesis in LC

As discussed in Section 2.3, LC is not the first language with objects that can directly represent microsounds. The previous works by Bencina (the software design for granular synthesizers), Brandt (the Chronic computer music language), and Wang (ChucK's unit-analyzer concept) also discuss the necessity for more appropriate abstractions for microsound synthesis, emphasizing the difference between microsound synthesis techniques and other conventional synthesis techniques that fit within the unit-generator concept.

Bencina states that "granular synthesis differs from many other audio synthesis techniques in that it straddles the boundary between algorithmic event scheduling and polyphonic event synthesis" [2, p.56]. Brandt attributes the difficulty of microsound synthesis programming in unit-generator languages partly to the inaccessibility of the lower-level details, which the unit-generator concept abstracts away⁸ [4]. Wang et al. also state that "the high-level abstractions in the system should expose essential low-level parameters while doing away with syntactic overhead, thereby providing a highly flexible and open framework that can be easily used for a variety of tasks" when discussing the design of ChucK's unit-analyzer concept [23].

⁷ "An internal DSL is a DSL represented within the syntax of a general-purpose language" [9, p.15]; it morphs "the host language into a DSL itself – the Lisp tradition is the best example of this" [8].
⁸ Brandt notes that "if a desired operation is not present, and cannot be represented as a composition of primitives, it cannot be realized within the language" in a unit-generator language [4, p.4].


LC's microsound synthesis framework is also designed with a similar approach. As shown in the examples in Section 3.3, in LC's programming model, microsound synthesis is described straightforwardly as algorithmic scheduling of microsound objects. Each sample within a microsound object is directly accessible, while utility methods are also offered to manipulate samples at once.

The significant difference between LC and these previous works is that LC provides a programming model for real-time interactive computer music languages with more generality; the works by Bencina and by Brandt do not target the design of real-time computer music languages, and Wang's unit-analyzer concept targets only frequency-domain signal processing and analysis. In addition, LC's microsound synthesis framework is also highly independent of the unit-generator concept.

5. CONCLUSIONS

In this paper, we discussed three issues raised by today's computer music practices and described how each feature of LC corresponds to them, with code examples and a comparison with other languages. As LC's language design is motivated by the aim of contributing solutions to the issues discovered in recent creative practices, it can benefit both further research on computer music languages and creative practices, as one design exemplar.

6. FUTURE WORK

As the current version of LC is just a proof of concept, we are planning to implement a more efficient version. We are also working to provide more detailed publications on each of LC's features.

7. REFERENCES

[1] D. P. Anderson and R. Kuivila, "Formula: A programming language for expressive computer music", Computer, Vol. 24(7), 1991.

[2] R. Bencina, "Implementing Real-Time Granular Synthesis", in Audio Anecdotes III, A K Peters, 2006.

[3] A. Blandford and T. Green, "From tasks to conceptual structures: misfit analysis", in Proc. IHM-HCI, Vol. 2, 2001.

[4] E. Brandt, Temporal Type Constructors for Computer Music Programming, Ph.D. thesis, Carnegie Mellon University, 2008.

[5] A. Burns and A. J. Wellings, Real-Time Systems and Programming Languages: Ada 95, Real-Time Java and Real-Time POSIX. Addison-Wesley, 2001.

[6] N. Collins et al., "Live coding in laptop performance", Organised Sound, Vol. 8(3), Cambridge University Press, 2003.

[7] R. T. Dean, The Oxford Handbook of Computer Music. Oxford University Press USA, 2009.

[8] M. Fowler, Language Workbenches: The Killer-App for Domain Specific Languages, http://www.martinfowler.com/articles/languageWorkbench.html. [Online; accessed 22/Mar/2014].

[9] M. Fowler, Domain-Specific Languages. Addison-Wesley, 2010.

[10] T. R. Green and A. Blackwell, "Cognitive dimensions of information artefacts: a tutorial", in BCS HCI Conference, 1998.

[11] R. Ierusalimschy, Programming in Lua, Second Edition. Lua.org, 2006.

[12] M. Kaltenbrunner et al., "Dynamic patches for live musical performance", in Proc. Intl. Conf. on New Interfaces for Musical Expression, 2004.

[13] M. V. Mathews et al., The Technology of Computer Music. MIT Press, 1969.

[14] H. Nishino and N. Osaka, "LCSynth: A Strongly-timed Synthesis Language that Integrates Objects and Manipulations for Microsounds", in Proc. Sound and Music Computing Conference, 2012.

[15] H. Nishino et al., "LC: A Strongly-timed Prototype-based Programming Language for Computer Music", in Proc. ICMC, 2013.

[16] H. Nishino et al., "Unit-generators Considered Harmful (for Microsound Synthesis): A Novel Programming Model for Microsound Synthesis in LCSynth", in Proc. ICMC, 2013.

[17] H. Nishino, "Mostly-strongly-timed Programming", in Proc. ACM SPLASH/OOPSLA, 2012.

[18] M. Puckette, "FTS: A real-time monitor for multiprocessor music synthesis", Computer Music Journal, Vol. 15(3), 1991.

[19] C. Roads, Microsound. The MIT Press, 2004.

[20] A. Sorensen et al., "Programming with time: Cyber-physical programming with Impromptu", in Proc. ACM SPLASH/OOPSLA, 2010.

[21] G. Wakefield et al., "LuaAV: Extensibility and heterogeneity for audiovisual computing", in Proc. Linux Audio Conference, 2010.

[22] G. Wang, The ChucK Audio Programming Language: A Strongly-timed and On-the-fly Environ/mentality, Ph.D. thesis, Princeton University, 2008.

[23] G. Wang et al., "Combining analysis and synthesis in the ChucK programming language", in Proc. ICMC, 2007.

[24] S. Wilson et al., The SuperCollider Book. The MIT Press, 2011.

[25] T. Wishart, Audible Design. Orpheus the Pantomime, 1994.
