An interview with Don Buchla (in English)


Post by BLT »

Someone has just posted on Muff this famous and interesting interview with the great man, which was published in Keyboard Magazine in December 1982 (apparently the archives are no longer available online).

The Horizons of Instrument Design: A Conversation with Don Buchla
by Jim Aikin

MUSICAL INSTRUMENT DESIGN is a topic most musicians are only peripherally aware of. We have enough trouble learning to play our instruments and working out repertoire, without worrying about things like control signal rejection in a VCA, oscillator scaling error, or how many milliseconds it takes a microprocessor to update a value in a data register. At most, we might notice if a pitch-bend wheel is mounted in an awkward position, or if a sequencer doesn't have a single-step mode to allow us to edit out mistakes.

Even so, the kind of music we're able to make today on electronic instruments is dependent in drastic, far-reaching ways on the decisions made by instrument designers. When they come up with a new design concept and build it into their instruments, we can use it to make new musical sounds. And conversely, if they leave a feature off of an instrument, either because they didn't think of it, or because they're cutting costs, or because they don't think we'll want it, or because they don't think we'll be able to understand how it works, then we can't use it to make music, no matter how wonderful those other, excluded possibilities might be.

These days, electronic keyboards are appearing on the market at a dizzying rate, with new features by the bushel—arpeggiators, sequencers, built-in rhythm units, keyboards with a programmable split, digital playback of pre-recorded sounds, and on and on. It's reached the point that many keyboard players are reluctant to buy a new instrument for fear something else will come along six months later that will offer more features at a lower price, rendering their purchase obsolete. In this kind of climate, it would be easy to assume that what we're being offered is the last word in design concepts. There couldn't be any more significant possibilities, already in existence, that aren't being exploited, could there? It couldn't be, could it, that we're being deprived of other unseen oceans of musical possibility?

To these questions, Don Buchla would answer a resounding "Yes." And he's not just indulging in idle speculation or wishful thinking of the "gee-wouldn't-it-be-neat-if" variety. During more than twenty years as a designer and builder of electronic musical instruments, Buchla has been tirelessly exploring and revising concepts of what such instruments can be. The instruments he builds have been used and praised by distinguished musicians working in electronic and experimental music. Clearly, his comments and criticisms are worth considering.

If you've been involved in electronic music for a while, you probably know Buchla's name. During the late '60s, when Moog and ARP were first proving that there was a market for synthesizers, Buchla was the "other guy," a West Coast instrument maker who became known as something of a maverick, and who shunned the mass marketing route that both Moog and ARP dove into. Fifteen years later, the scene has changed. Other manufacturers have come to dominate the market, ARP no longer exists, and while Moog synthesizers are still a major entity, Bob Moog is no longer associated with the company and in fact has lost the right to call the instruments he builds today Moog synthesizers. But Buchla is still doing what he has always done, following his own vision and building instruments whose unusual features put them in a class by themselves.

Buchla's association with electronic music began in 1962, when he discovered the San Francisco Tape Music Center, an enclave of experimentally-minded musicians that had been founded a couple of years before by composers Ramon Sender and Morton Subotnick. At that time, "electronic music" was virtually synonymous with musique concrète, the technique of composing by recording acoustic sounds and splicing bits of tape together. Buchla's own concrète pieces, he reports, made use of taped sounds of insects.

But he was also actively designing both acoustic instruments and electronic gadgets. In the acoustic realm, he put together "sound sculptures" of welded steel and strings. And his electronic inventions used sound without being exclusively musical, at least in a traditional sense. Morton Subotnick recalls his first contact with the young Buchla: "At that time there was really no such thing as a synthesizer. Ramon and I had devised a lot of different approaches to what we thought a music synthesizer might be, and other people who were inventing things would come and show them to us. Don was one of those people. He had just invented a device to help blind people. It was like a little lantern that made different pitched sounds and different amplitudes depending on where the objects in the room were in relationship to it."

It wasn't long before Buchla and Subotnick began combining their efforts. "The first project we suggested to Don," Subotnick indicates, "was a synthesizer that used light. We laid out the principle of the thing, and Don said he didn't think that was the direction to go, but he would put it together and show us what it worked like. He came back less than a week later with a little device on a plywood board. This was a disc that rotated and had little holes in it, with a light that shone through the holes onto a photocell. By changing the positions of the holes, you could change the harmonic structure of the signal at the output of the photocell." [Ed. Note: For more on photocell instruments, see Tom Rhea's Electronic Perspectives column in Keyboard, July, Aug., & Sept. '77.]

Various kinds of electronic tone-generating equipment were already in common use in musique concrète. For example, electronic test oscillators whose pitch had to be adjusted by hand could be used as a sound source for recording various tones, and the resulting tape cut apart and spliced back together to create a melodic line. But Buchla saw the challenge of creating a complete integrated electronic musical instrument. Again, Subotnick offers his recollections: "We began designing what we considered to be the black box for composers. Don would come in regularly with ideas, and I would take things home on paper and pretend that I was writing a piece on them. I would come back with suggestions, and he would counter them with further suggestions. Then at the end of '62 I got a grant from the Rockefeller Foundation for $500. I asked Don what the instrument would cost, and he said $500, so we got that amount from the Foundation to do a prototype system. I'm sure it cost him more than that to put it together, but that's what the Tape Center paid him for it.

"So by late '63 we had the first complete system. This system is still in use at Mills College [where the Tape Center's activities were later relocated]. It has two keyboards. One is a set of twelve touch plates that can be used for initiating various events. The other is a set of ten keys—we had ten tape loop machines for musique concrète, so we asked him to build an interface that would allow us to trigger any of them. If you pressed a key, you would get a signal from the corresponding loop machine. In addition, the system had quite a few VCAs, a couple of envelope generators, a high-low-bandpass voltage-controlled filter, a couple of mixers, a pair of voltage processors, white noise, oscillators with sine, saw, and square waves, spatial location controls, and an eight-position sequencer." Buchla is generally credited with having invented the analog sequencer, which was conceived of first as a device for eliminating numerous tiny tape splices by allowing a single oscillator to be programmed to sound a desired sequence of pitches.

Currently, Buchla lives and works for most of the year in Berkeley, California, where he employs a small staff to help program and assemble his instruments. He prefers to think of himself as an instrument builder in the old tradition dating back to the seventeenth and eighteenth centuries, rather than as a manufacturer. And his customers continue to be drawn primarily from the academic community rather than from the public at large. His latest venture is the 400 Series (previous designs have been tagged the 100, 200, 300, and 500 Series), a compact but powerful analog/digital hybrid that sells for around $10,000, but that has many unusual capabilities not found on instruments costing twice as much.

Since most of us will never have the chance to play a Buchla synthesizer, Morton Subotnick's comments may be illuminating: "The most important thing about these instruments is that there is a kind of neutrality about the way things are designed and laid out, so that a composer can impose his or her own personality on the mechanism. For example, Don always dissociated a voltage-controlled amplifier from its control voltage source. That meant that the voltage source could be used for controlling anything. It wasn't locked into a single use, the way it would be on most [non-modular] systems. But this can be a handicap in a commercial sense, in that it makes the instrument more difficult to use. You don't sit down and plunk a key and get a sound. You have to decide ahead of time what that key is going to do. That kind of sophistication has given him the rightful place as the most interesting of all the people building this kind of equipment, but it has also made his systems less accessible to many people. You have to be fairly sophisticated in your approach to music to really make good use of them."

The trend today is certainly toward push-button instruments that offer instantaneous access to predictable sounds, and Buchla has consistently gone in exactly the opposite direction. When we spoke to him on the phone recently in connection with some research we were doing, he said he felt that Keyboard, by devoting so much space to such instruments, was fostering some misconceptions about what electronic musical instruments are and are capable of being. We agreed that there were aspects of the question that we had been neglecting, and offered to let him present the other side of the picture.

* * * *

What was the first electronic musical instrument you designed?
My first instrument was based on the idea of analyzing the shape of the hand, in order to translate that shape into the equivalent waveshape. By moving your hand, you could change the waveshape, and thus the timbre, of the sound. The first instrument used a motor-driven scanning wheel, which extracted information about the position of the hand along two axes, the horizontal axis relating to time and the vertical to amplitude, essentially. So if you held your hand with the fingers parted and parallel to one another perpendicular to the time axis, you would get a square wave with five times the fundamental frequency. If you slanted your hand, you would smear the rise time of the wave so that the harmonic content became less, and if you put your fingers together you would extract more of the fundamental. As you moved your hand around, you could change the timbre in very rapid and responsive ways. This instrument had the disadvantage that you didn't have access to a variety of pitches, but at that time the usual way of dealing with pitches in classical electronic tape studios was to record a lot of segments at different pitches and splice them together. Later we built a unit that used a cathode ray scanner, which was much more responsive.
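
To make the scanning idea concrete: the hand acts as a transparency mask swept along the time axis, and the light passing at each instant becomes the amplitude of the output. Below is a minimal Python sketch of that principle (my own reconstruction, not Buchla's circuit; the mask sizes are arbitrary). Five parted fingers give five pulses per sweep, i.e. a square-like wave at five times the sweep frequency.

```python
import numpy as np

def scan_silhouette(mask, sweeps=4):
    """One sweep of the mask per period; light passed becomes amplitude."""
    return np.tile(mask.astype(float), sweeps)

# Five parted fingers perpendicular to the time axis: alternating
# opaque/transparent bands of equal width.
finger, gap = np.ones(10), np.zeros(10)
hand = np.concatenate([np.concatenate([finger, gap]) for _ in range(5)])

wave = scan_silhouette(hand)  # square-like, 5x the sweep frequency
# Slanting the hand would smear each band's edges (longer rise times),
# rolling off the upper harmonics, as described above.
```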

How did your design concepts develop from there?
Oh, we got into sequencers and voltage-controlled oscillators. It didn't take us long to arrive at the concept of voltage-controlling all of the parameters that could possibly be regarded as significant musical variables, including such things as voltage-controlled reverb and voltage-controlled degree of randomness. It did take us a few years to find truly general ways of dealing with the voltages that did the controlling. That turned out to be the bigger problem.

What do you mean by 'truly general ways'?
Most of the current electronic musical instruments that you'll see have fairly limited methods of generating and routing control voltage shapes. Sequencers generally output only series of discrete voltages. Envelope generators are often limited to creating shapes that can be described by such terms as attack, initial decay, sustain level, and final decay. Low-frequency oscillators offer only a few waveshapes, and they operate within a narrow frequency range. And these are the only three control voltage generators that one would expect to find in the so-called traditional synthesizer. In other words, the designers of these instruments make very stringent assumptions about what shapes are esthetically acceptable in the way of, say, envelopes.
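
As a point of reference, the entire vocabulary of the conventional envelope generator he describes reduces to four numbers. A minimal sketch (the control sample rate and segment times are illustrative values):

```python
import numpy as np

def adsr(attack, decay, sustain, release, gate_time, sr=1000):
    """The conventional four-segment envelope, at control sample rate sr."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(int(max(gate_time - attack - decay, 0.0) * sr), sustain)
    r = np.linspace(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain=0.6, release=0.3, gate_time=0.5)
```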

Where did these limiting assumptions come from?
The sequencer is thought of primarily as an automated substitute for a piano or organ keyboard, on which one has access only to discrete static pitches. The LFO is thought of primarily as a vibrato oscillator, so it's designed only to do what a violinist or vocalist does in the way of adding vibrato to a tone. And the envelope generator is based on the note forms created by percussive instruments, which are loud at the onset and decay smoothly, and by sustaining instruments of the wind and string families. In other words, instrument designers still think that all notes should be shaped the way traditional acoustic notes have been shaped. Electronic instruments are obviously capable of much more than this, but the electronic possibilities are still being made subservient to our traditional acoustic experience.

But the instruments you design aren't limited in these ways.
I would hope not. Our instruments incorporate what we call arbitrary function generators, with which you can generate any kind of shape you want. The instrument doesn't force you to make any assumptions about what are pleasing shapes and what are not. That has to be decided on a musical level.
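
An arbitrary function generator can be approximated in a few lines as breakpoint interpolation: any list of (time, value) pairs becomes a control shape, with no built-in notion of attack or decay. A sketch, with names and breakpoint values of my own choosing:

```python
import numpy as np

def arbitrary_function(breakpoints, sr=1000):
    """Interpolate any (time, value) breakpoint list into a control shape."""
    times, values = zip(*breakpoints)
    t = np.arange(0.0, times[-1], 1.0 / sr)
    return np.interp(t, times, values)

# A shape no ADSR can describe: a double peak, then a slow final swell.
shape = arbitrary_function([(0.0, 0.0), (0.05, 1.0), (0.10, 0.2),
                            (0.15, 0.9), (0.40, 0.1), (1.00, 0.7)])
```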

Does it bother you that your ideas haven't been more widely adopted?
It doesn't bother me that my own ideas in particular have not been widely received. It does bother me that the powers that be have such shortsighted views of what musical instrument design and development could be all about. In a sense, they have to operate the way they do. They can only undertake projects where they can see black ink at the end of the fiscal year. Consequently, there is virtually no legitimate research going on in the area of musical instrument development. There are reasons for this impasse: One is the fact that an instrument has to exist long before performance techniques can be developed and a repertoire arises. Because of this, the market for the instrument doesn't exist for many years after the R&D that goes into developing a truly new instrument. With short-term profits a primary motive, the big corporations are simply not interested.

What they want to do is put instruments in people's hands that people already know how to play.
Exactly. Which is why you see organ keyboards on electronic instruments, virtually to the exclusion of any other kind of input structure.

Which guarantees that you're not going to get anything radically new musically.
Certainly not in those terms. You're going to continue to talk about the same form of a note, and the same pitch structures that we've been listening to for so long. Certain aspects of the music are going to be dictated by the nature of the input structure, and by the correlation between that and the sound-generating structures. That is, we don't sit down at a conventional keyboard and expect to perform something other than 12-tones-per-octave music.

Although in fact such a keyboard could be used to control other musical elements.
But when you open up those other possibilities, you'll alienate the people who are coming from a rock-band orientation and want instant gratification. They don't want to have to figure out some other relationship between their actions and the instrument's response. Of course, this is primarily a marketing problem, and I don't attempt to serve the popular market. If I'm serving a market at all it's a significant group that are interested in experimental and avant-garde music.

Unlike your earlier instruments, which were modular, your new 400 Series has a non-modular integrated design. What considerations led you in this direction?
The 400 Series has integrated a lot of disparate elements that I finally resolved should all be put together in one box and thought of not as a modular system but as a very together kind of resource that gains economy and a commonness of language as a result of being made in a single configuration. It's my thought that an instrument, to be a legitimate instrument, has to be identifiable and replicable, in order to be composed for and to have a repertoire develop. The 400 is meant to be not hardware-limited at all, but to be, if limited, language-limited. And language is something we can develop. The more users there are out there, the cheaper it is to develop new musical language.

In other words, you’re talking about computer software additions to the system.
Yes. I think that my activities in the future will probably center more on the development of language, as opposed to the development of hardware. The 400 will be the nucleus of this development.

Do you do computer programming yourself?
No, I’ve never learned programming.

Then when you talk about designing languages, what are you referring to?
When I design a language I’m concerned with what gestures should have what effects. And how the video display should be laid out, and things of that sort. I know nothing about how to program the computer to make these things happen. Well, I wouldn’t say I know nothing about it, because a little bit rubs off. But I do think that language design and programming have to be two distinct activities, even though the same person may participate in both.

When I'm working with language design, my ignorance of programming works to my advantage, in that I'm free to consider musical language as a high-bandwidth communications tool rather than as a sequentially oriented problem-solving or information-management structure. I've investigated a lot of different musical languages, and I've been appalled at the extent to which musicians have forgone the possibility of designing the language themselves, because they have simply thought of language design as some aspect of programming. They've let the programmers handle it, and programmers are generally not likely to have very revolutionary concepts of how music can be described. If we want really powerful musical languages, it will have to be musicians that design them. It's the same thing as with hardware designers—musicians, rather than engineers, should be designing instruments. Anything is possible; we're not limited by technology, we're not limited by the computer. We're limited only by our mind-sets. Sometimes in subtle and sometimes in flagrant ways.

What directions do you see these languages developing in?
Well, that's a pretty open-ended question. Given a reasonably flexible hardware resource, languages are limited only by the concepts of those who design the languages. For the 400, we will initially have two languages of my own design, one of which is, I would say with some reservations, almost traditionally oriented, in that it utilizes a score editor [a highly flexible polyphonic sequencer with a video display and complex editing capabilities] and recognizable bass and treble clefs, and is based on the concept of music consisting of notes, which I think is a very limiting concept, incidentally, and on the concept of instrument definitions, although the instrument definitions can be pretty bizarre. Nevertheless, this language configures the 400 as a collection or orchestra of instruments.

The other language is much more general. It's called Patch Five. It's the fifth of a series of patch languages that I've designed. They're called that because they emulate the settings of knobs and the insertion of patch cords as in a conventional analog system. Patch Five is distinguished by its almost total generality. It's a little cumbersome to use, in that it may take a day or two to specify a complex musical resource, but once the musician has gone through this process of specification, the instrument becomes eminently playable in a very personal manner.

What kinds of specifications might the musician come up with?
You can literally configure each and every key to produce whatever kind of interaction with the system you would like. You're not limited to any of my preset concepts of what kinds of interactions are musically acceptable.

Wouldn't the player have to know something about computer programming?
Patch Five and the other languages I've developed are meant to be approachable by non-programmers. They are organized in ways that musicians should not find too foreign, and they have been learned by musicians with no programming experience at all, in relatively short periods of time.

Using Patch Five, for example, could I tell the instrument that whenever I play a low C I want some specified change in the timbre of whatever note follows it?
Yeah, or you could go a step further and use a very powerful concept, one that we're not used to in traditional music, of having one key re-configure the effect of another.

I'm trying to imagine what that would sound like…
Well, you can't imagine the sound because it's not associated with sound. It's associated with the musical structure, and that structure can be your own invention. The fact that you can modify the structure on many levels by means of your input actions during a musical performance is a characteristic of the language. This does invite, I think, new concepts of what performance is all about. We are no longer restricted to playing notes. We can interact with our music on a variety of levels.
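
As an illustration of the idea (a toy model of my own construction, not actual Patch Five syntax): bind each key to a handler, and let some handlers rewrite other keys' bindings or alter shared state that later notes depend on.

```python
bindings = {}

def play_note(pitch):
    def handler(state):
        state["notes"].append((pitch, state["timbre"]))
    return handler

def shift_timbre(amount):
    def handler(state):
        state["timbre"] += amount   # the low C alters following notes
    return handler

def rebind(key, new_handler):
    def handler(state):
        bindings[key] = new_handler  # this key reprograms another key
    return handler

bindings["C2"] = shift_timbre(0.3)
bindings["C4"] = play_note(60)
bindings["G4"] = rebind("C4", play_note(72))  # G4 changes what C4 does

state = {"timbre": 0.0, "notes": []}
for key in ["C4", "C2", "C4", "G4", "C4"]:
    bindings[key](state)
# state["notes"] == [(60, 0.0), (60, 0.3), (72, 0.3)]
```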

Is it possible to intentionally configure the 400 in such a way that you may not know precisely what it's going to do when you hit a given key?
Oh, yeah. Uncertainty is certainly a useful musical tool.

But in that case, it's possible that you're going to get something that sounds really awful.
But these are judgments that I leave up to the users of my instruments. I'm not going to build the instruments so that they can create only beautiful sounds. I'm not going to say, "Well, everything has to have 12 equal steps to the octave, and you're not allowed any random elements, and all the note shapes will be derived from traditional percussive sounds." You'll find some very expensive systems in which there are precisely these kinds of implicit assumptions in the design of the instrument, which I think is very strange.

We had some information that some of your early instruments had oscillators with linear rather than exponential control, so that a given voltage change would produce a larger or smaller interval shift depending on whether the pitch of the oscillator was lower or higher. Is it true that you built instruments that operated this way?
No, my oscillators have always been exponentially controlled. In all of our systems, all of the relationships between the control voltages and the responses have been modeled on psychoacoustic bases. The confusion may have come about because our control voltages have always been linear. That is, an envelope will have a linear rather than an exponential decay, but then the gate [VCA] that you attach the envelope to will have an exponential characteristic, because that's the way we hear. But if you chose to attach that envelope to a device that governed location in a stereo or quad acoustic environment, you would not get a type of location change that sounded illogical, because the location panning device would have an appropriate input and could respond linearly. Our control-generating apparatus is all linear, and the exponential characteristics are then built into the devices that deal with areas in which perception is exponential.
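
A sketch of the division of labor he describes: the envelope itself is linear, and the exponential response lives in the VCA, so a linear decay is heard as an even dB-per-second fade. The 60 dB control range here is an assumed figure for illustration:

```python
import numpy as np

def exponential_vca(audio, cv, range_db=60.0):
    """Map a 0..1 linear control voltage onto an exponential gain law."""
    gain = 10.0 ** ((cv - 1.0) * range_db / 20.0)  # cv=1 -> 0 dB, cv=0 -> -60 dB
    return audio * gain

sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 220.0 * t)
cv = np.linspace(1.0, 0.0, sr)    # a linear decay, like Buchla's envelopes
out = exponential_vca(audio, cv)  # heard as a steady 60 dB/s fade
# Driving a stereo panner, the same linear cv would be used directly,
# since perceived location does not follow an exponential law.
```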

Why is it that other designers haven't adopted this approach?
It won't work, for example, if you go the route that Moog did, of designing voltage-controlled amplifiers to serve as either ring modulators or gates. Then the VCAs have to be linear, because they have to serve in the control domain, the parametric domain, as well as in the audio domain. You have some severe problems there, trying to build a vehicle that can both float and handle well on the freeway, so to speak. That was never satisfactorily resolved in modular analog systems, to my knowledge. Most systems use the concept of a gate being a linear device, so that they can use it as a ring modulator, and then they build the exponential characteristics into the envelope generators. Which means, for instance, that the attack is a reverse exponent.
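
The conflict is easy to state in code: ring modulation is a four-quadrant multiply of two bipolar audio signals, while a gate shapes audio with a unipolar control, where an exponential law fits hearing. One linear circuit can serve both roles only by compromising the gate. A sketch, with illustrative frequencies:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 440.0 * t)
modulator = np.sin(2 * np.pi * 70.0 * t)   # bipolar audio-rate signal
envelope = np.linspace(1.0, 0.0, sr)       # unipolar control signal

ring_mod = carrier * modulator  # four-quadrant: sidebands at 440 +/- 70 Hz
gated = carrier * envelope      # two-quadrant: amplitude shaping only
```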

So you don't use your VCAs as ring modulators.
Right. You can build a far better VCA if you don't have to use it for a ring modulator. The control voltage rejection can be many dB higher; the distortion can be much lower if you don't have to build it to operate in the audio domain. You have a similar problem if you try to use a sequencer as an oscillator. If you build a sequencer that will run fast enough, you can listen to the stepped output directly rather than using it to control an oscillator, and by changing the height of the various steps you can change the waveshape. That sounds interesting in theory, but the first people who built sequencers that could run in the audio domain—and I guess I was probably the first, but I learned my lesson fast—found that all they were doing when they turned a knob was varying the amplitude of some dominant harmonic, practically independently of which knob they turned. The lesson there is that what we hear in the temporal domain we hear one way, but in the harmonic domain we hear in a different way. So we need to build devices whose design will vary depending on whether they're going to be asked to deal with form or with sound. This way, you can optimize modules for the particular area they're dealing with, rather than trying to make them serve two functions and compromise both, which is what generally happens.
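
The sequencer-as-oscillator lesson is easy to reproduce numerically: treat eight step heights as one period of a waveform and compare the spectrum before and after "turning one knob." A sketch (the step values are arbitrary):

```python
import numpy as np

def step_wave_harmonics(steps, samples_per_step=32):
    """Magnitudes of harmonics 1..8 of one period built from step heights."""
    period = np.repeat(steps, samples_per_step)
    return np.abs(np.fft.rfft(period))[1:9]

steps = np.array([0.0, 0.8, 0.3, 1.0, 0.2, 0.9, 0.1, 0.6])
before = step_wave_harmonics(steps)

tweaked = steps.copy()
tweaked[3] -= 0.5              # "turn one knob"
after = step_wave_harmonics(tweaked)

# Comparing `before` and `after` shows the change smeared across many
# harmonics at once rather than confined to one: knob position and
# perceived timbre correlate poorly, which is Buchla's point.
```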

So on your early analog systems, you had modules that were specifically dedicated to doing one task, rather than multi-function modules.
I wouldn't state it quite that restrictively. Those modules that were designed to operate in the control realm could not operate in the acoustic realm. That's why a multiple arbitrary function generator could be so complex. It didn't have to run at 20kHz. It could operate with times that we perceived as being musically significant, which are perhaps in the millisecond and longer range. Our mixers were designed to operate in the audio domain, and therefore the input controls were exponential. They were AC-coupled, and therefore there were no voltage offsets [additional DC voltages] added when you tried to do FM [frequency modulation], for instance. They had 20dB of headroom [the ability to handle high-amplitude signals without distortion], which you need when you're dealing with summing audio signals. Whereas a control voltage processor would ideally have linear inputs, would be able to do more complex mathematical things than simply adding signals, and could operate with no headroom at all, because there's no reason for headroom in the control voltage realm. So there are lots of reasons for separating structure from sound, and these reasons persist as we develop hybrid systems with digital components. We continually have to know whether we're dealing with structure or with sound.

So that's why your systems use banana plugs for control voltages and mini-phone plugs for audio signals.
That's one reason. There's no chance of getting the two confused accidentally. And there's actually a subtle benefit that has to do with the way we perceive a complex system. If you look at some of Subotnick's typical patches, you'll discover an absolute maze in terms of density. "The patch cord jungle" is a term that is sometimes applied to my systems. But the maze can be fractured into two sub-systems, one dealing with the sonic aspects and one dealing with the structural aspects. The nature of the signal is immediately identified by the nature of the patch cord, which puts you way ahead in terms of analyzing what's going on.

Of course, there are two sources of control signals, one being internal devices like envelope generators and the other being touch-sensitive devices under the direct control of the performer. Could you talk a bit more about those input devices?
Our traditional input structures have been designed to efficiently set physical bodies into vibration. It's important to realize that that's where the organ keyboard comes from. It's a linear array. It cannot be a two-dimensional array, because it would be too complicated to attach a two-dimensional array to a linkage that threw hammers at strings. This is true in the case of all traditional input structures. But electronics technology offers us an incredible freedom from the direct connection found in traditional instruments between that which we touch and that which vibrates and creates sound. It's so much freedom, in fact, that we're all scared. We don't know what to do with it, and consequently we do nothing and continue to attach arrays of black and white switches to systems rather than investigating new input structures.

In fact, with electronics, those structures could take any form at all.
It's not exactly a blue-sky problem—or rather, I don't like the blue-sky approach, in which you say, "Wouldn't it be neat if..." and then you build it, and then you find that nobody is playing music on your instrument because you didn't do your homework. A musical instrument evolves over a number of decades or centuries, and a lot of human effort goes into the ultimate design of the playing attributes of an instrument. I think an equivalent amount of design effort, short-cutted by the fact that we know more about evolution than the nineteenth-century instrument designers did, has to go into what I would call legitimate electronic musical instruments. I would like to hand my children these instruments and say, "Here's an instrument worth studying for the next ten years." I would therefore demand that the instruments not be technologically obsolete in two years, but be valid musical instruments a hundred years from now—or maybe twenty years from now, let's be realistic. So as designers we have to design instruments that incorporate input structures that are musically highly optimized. Not just useful, but something that we're not going to come along and improve radically in the next year because we didn't do our homework the first time.

That's a tough problem.
We have the tools to do it, and I feel that I have the understanding, perhaps, to undertake this kind of research, but it's expensive research if you want to do it efficiently and quickly rather than slowly, which is how I tend to do it myself. I would like to apply high-powered resources—computers for developing simulations, in order to short-circuit the amount of time it takes to re-cut the violin's f-hole, for example, to see if it will sound better.

But since we already have a highly optimized input structure in the organ keyboard, which lots of musicians already know how to use with a great deal of facility, doesn't it make sense to make use of it, rather than trying to develop a new structure that nobody will know how to play music on?
Where is your music? Is it in your hands, or is it in your head? How much is music and musicality, and how much is technicianship, and where do you draw the line? I heard a story last week about a guy who bought an electronic instrument with a keyboard from another manufacturer, and later brought it back to the store and said, "Where are all these space sounds that I wanted?" And the salesman said to him, "Well, how are you gonna get all those space sounds playing twelve black and white keys?" We see this all the time. Somehow instrument designers are expected to take the entire electronic vocabulary, which is so incredibly extensive, and put it all under the control of a linear array of 60 switches, give or take a few. And that's impossible. Not only impossible, but very uninviting.

Now, don't misquote me and say I'm against keyboards. I've been misquoted on that one enough. A keyboard is a useful input structure if what you want is rapid simultaneous access to a large number of sounds of fixed pitch, but it's much less useful for controlling some other aspects of sound. I am definitely in favor of developing a family of musical instruments in which keyboards are one of a number of available input structures, and I don't have any preferences between apples and oranges. I think they can coexist.

And we don't necessarily know yet what the best input structures will be for controlling other aspects of the sound.
I think we need to conduct research to find out what the nature of input structures is, and how they can be optimized, and whether certain of them may be optimized for certain kinds of expression, certain kinds of music. You're going to have to understand a little bit about what you're trying to do musically before you say, "Wouldn't it be neat if . . ."

In other words, once you say, "Wouldn't it be neat if we had a three-dimensional joystick," the kind of music you can then make with the joystick will be pre-determined to some extent.
Yeah.

But it's also true that once you take the data from the joystick and put it into your computer, you can perform a wide variety of musical tasks with it. So to some extent, each musician can have their own joystick, which responds differently and affects different parameters than anybody else's.
We've worked a lot with the concept of extracting gestural information from action. In fact, there's so much information contained in how a musician strikes a key that one can imagine a rather powerful pre-processor simply processing data coming from a complex multiple input structure so that the host computer that's running the language can assimilate it. Again, it's hard to talk about specifics. I don't regard a giant joystick as being a very general input. We have to look very carefully at the dimensions we're dealing with. If we analyze input structures in very general terms, it may turn out that many of the possibly very potent input structures and substructures have never been thought of, but have simply fallen out of previous analyses because they have combinations of dimensions that haven't been considered.
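
A gestural pre-processor of the kind he hints at might reduce raw key-sensor samples to a handful of features before the host language sees them. The feature set below is my own guess, purely illustrative:

```python
def extract_gesture(displacement, dt=0.001):
    """Reduce raw key-position samples (0 = rest, 1 = bottom) to features."""
    velocity = max((b - a) / dt
                   for a, b in zip(displacement, displacement[1:]))
    depth = max(displacement)
    aftertouch = displacement[-1]   # pressure held after the strike
    return {"velocity": velocity, "depth": depth, "aftertouch": aftertouch}

strike = [0.0, 0.1, 0.4, 0.8, 1.0, 0.95, 0.9, 0.9]
features = extract_gesture(strike)  # fast strike, held with light pressure
```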

What kinds of input structures might those be?
Well, an analysis will rapidly break down to analyzing a multiplicity of dimensions. One attribute that is of immediate and obvious significance is continuousness vs. discreteness. A stringed instrument input, a violin for instance, is discrete in one dimension and continuous in another. The piano has two dimensions, the vertical dimension, in which there's some continuity, and the horizontal dimension, which is discrete. A trap drum set could be thought of as discrete in two dimensions with sub-elements that are continuous in two dimensions, with a third continuous dimension that you interact with by playing louder or softer.

This is beginning to seem quite complex.
There's more. You can investigate not only linear dimensions, but rotation as well. You can talk about pitch and roll and yaw, to use airplane terminology. This gives us six continuous dimensions in terms of how we position each hand in space. The hand can be anywhere in the sphere of space whose center is the shoulder joint, which we can describe using an XYZ coordinate system, and it can also be tilted at various angles, which might be described in terms of degrees of pitch, roll, and yaw. And all of this can be electronically sensed with the greatest of ease.
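
Those six continuous dimensions per hand are simple to represent; what to map them onto is the open musical question. A sketch with an entirely arbitrary parameter mapping (nothing here is prescribed by Buchla):

```python
import math
from dataclasses import dataclass

@dataclass
class HandPose:
    x: float      # left/right position, relative to the shoulder
    y: float      # up/down
    z: float      # forward/back
    pitch: float  # tilt angles in radians
    roll: float
    yaw: float

def pose_to_parameters(p: HandPose) -> dict:
    # One possible assignment of the six dimensions to sound parameters.
    return {
        "freq_hz":    220.0 * 2.0 ** p.y,        # height -> octaves
        "loudness":   min(max(p.z, 0.0), 1.0),
        "pan":        p.x,
        "vibrato":    abs(p.roll) / math.pi,
        "brightness": 0.5 + p.pitch / math.pi,
        "noise_mix":  abs(p.yaw) / math.pi,
    }

params = pose_to_parameters(HandPose(0.1, 1.0, 0.5, 0.2, -0.3, 0.0))
```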

We can easily imagine a small fingerboard attached to the thumb, with a key or pressure pad for each finger. These could play a combination of tunings, or anything else that one might imagine, much as a trumpet plays. There are an awful lot of notes for just three valves. But in the cases that I've seen—I've seen an electric sax, and some clarinet-like instruments also—the switches were attached to a rigid body in an obvious imitation of the traditional form of the instrument. The designers forgot to detach the switches from the body, even though there was no physical necessity of attaching them. So they threw away all these degrees of freedom, and in the process all of the kinds of musical expressiveness that might be associated with them.

You mentioned earlier the fact that you would like to design instruments in such a way that they don't become obsolete. Do you find it frustrating that the rapidity of technological advance has this effect?
An instrument that's well designed won't become obsolete. But the tendency is for engineers to design musical instruments, and needless to say, being engineers, they design from the inside out. They design the circuits, and then they put knobs on them. But if a designer expects to design legitimate instruments, he has to design them from the outside in. He has to build the outside of the instrument first. This is what the musician is going to encounter. You cannot become obsolete if you design a legitimate instrument from the outside in. I don't care how you make the sound. If today it's analog and tomorrow it's digital, fine.

The interface will remain usable by the musician.
Exactly. The instruments that I built in the early '60s are for the most part still in use. The very first one, which is at Mills College, I can't even get hold of. It's in use. Legitimate musical instruments will not be subject to rapid technological obsolescence. Improved models will be cheaper, and perhaps even more useful musically.

Why is it that you don't call your instruments synthesizers? Do you have some objection to that term?
Maybe some minor objections. The term 'synthesizer' has come to mean 'electronic keyboard instrument.' Even more than that, the response of the instrument is predicted. One expects to play a synthesizer and achieve a 12-tone scale response, with very traditional note shapes, and perhaps even stops that are labeled by instrument names. A number of years ago, in a magazine I wish I had walked off the airplane with, Bob Moog said that eventually the synthesizer and the organ will meet. And indeed, his prophecy has largely come true. If you go down to your local organ store, you're going to have some problems telling which instruments are which—and what's the difference, anyway?

Once you attach a keyboard to the instrument, you're headed in that direction.
I always thought it was a wonderful joke to attach an organ keyboard to an electronic instrument in such a way that the keyboard was monophonic. One of the great attributes of the organ keyboard was the fact that you could get to ten keys at once. And then a few years later people started bragging that their instruments were polyphonic, forgetting that ten years before there hadn't been any such thing as a monophonic black-and-white keyboard. So I chose to stay away from the term 'synthesizer' because it has been preempted by people who put keyboards onto electronic tone generators, with a mapping between the two that is the same mapping one finds in a piano.

What do you mean by 'mapping'?
We’re dealing with a class of instruments that has one very salient feature that's different from anything found in the acoustic instrument realm. That is that the input structure and the resulting sonic responses are no longer tied together in fixed and predictable ways. Information from the input structures can be used to control the tone-generating structure in a variety of potentially complex ways, using a computer to establish the moment-to-moment relationships, or mapping. This mapping is essentially a third part of an instrument that up to now we have never had access to, other than by pulling out stops on an organ. So we have to learn to deal with that effectively.
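
The three-part view (input structure, mapping, tone generator) can be made concrete by running the same key events through two different mappings; only the middle term differs between the two "instruments." All names here are my own illustration:

```python
def piano_mapping(event):
    # The fixed piano relation: key number -> equal-tempered pitch,
    # velocity -> loudness. (MIDI-style key numbers, for concreteness.)
    return {"freq_hz": 440.0 * 2.0 ** ((event["key"] - 69) / 12.0),
            "amp": event["velocity"]}

def alternate_mapping(event):
    # The same input structure driving something else entirely:
    # key chooses a pulse rate, velocity a modulation depth.
    return {"pulse_rate_hz": 1.0 + event["key"] / 12.0,
            "mod_depth": event["velocity"] ** 2}

event = {"key": 60, "velocity": 0.8}
as_piano = piano_mapping(event)       # ~261.6 Hz, amp 0.8
as_other = alternate_mapping(event)   # 6 Hz pulses, depth 0.64
```
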
To return to your question about the term 'synthesizer,' my other objection to the word has to do with the semantic connotations that have become attached to it. 'Synthesizer' is related to 'synthetic,' and 'synthetic' underwent a change in meaning in the late '40s. Instead of meaning 'built up out of constituent parts,' it has come to mean 'imitative' or 'artificial,' which started when it got attached to rayon as opposed to silk. I get letters that start, "Dear Sir — I am interested in synthetic music." And the synthesizer is thought of as something that can substitute for strings and brass in the sweetening [overdubbing] session. I tend to want to stay away from that. I don't regard my activities as a threat to traditional musicians. I'm interested in a fourth legitimate family of instruments. To the strings and winds and the percussion, I want to add the electronics. I don't think we have to regard them as replacements for anything else. And the idea that all electronic instruments have to go under the name 'synthesizer' is crazy.

Then what do you call your instruments?
I call my specific instruments by such names as ‘Buchla 400’, ‘Music Easel’, ‘Touché’ and so on. These are the names of the instruments. I don't have any generic name for them other than the fact that they all lie within the electronic family. I call them electronic musical instruments. Pretty straightforward, I think.