
Melodic Primitives and Semiosis

José Roberto do Carmo Jr. and Thiago Corrêa de Freitas
p. 23-41

Abstract

This essay deals with the role played by melodic primitives in semiosis. Every melodic sound is determined by pitch, duration, intensity and timbre, which are the minimal categories found in music. We propose an analysis of these primitives from two points of view. In the first part of the article we compare the role played by primitives in music and in language (since the phonological system of natural languages is also organized as a structure of minimal elements). This comparison allows us to understand certain differences between music and language: melodic primitives exhibit three characteristics that phonological primitives lack, namely grading, context-sensitivity and tensiveness. The second part of the article deals with the physics of timbre, which shows why timbre is not distinctive but rather has a connotative function (Hjelmslev). We conclude by proposing that melodic features be divided into two groups. The first contains the distinctive categories, namely pitch, duration and intensity, which are responsible for the construction of melodic utterances. Timbre, responsible for the construction of the enunciation of melodic utterances, is the sole element of the second group. In the conclusion, we suggest pursuing this line of inquiry by asking what light this approach sheds on the different ways sounds are constructed in music and in language.


Keywords: sound, enunciation, connotation


1. Primitives

Some of the biggest unanswered questions about the meaning of music concern the nature of musical primitives: what they are, how many there are, and what their role in semiosis is. Obviously, this is very far from being a simple problem, and it would require a significant effort even to begin to figure out the nature of its solution, if there is one. For this reason, the discussion proposed in this essay addresses a much more restricted subject. We will look at melody, particularly at tonal melody — which is a quite concrete and traceable object — through the lenses of linguistics and psychoacoustics. There are two assumptions behind this approach. First, we assume that the kinship between music and language is deeper than is usually thought; thus the comparative method is not just the best one, it is the only method through which we can make progress on these questions. Second, we assume that no matter how meaningful music may be, it has a concrete aspect whose relevance in semiosis is underestimated. Paradoxically, the deeper we go into musical meaning, the more we need to look at the physical aspects of sound production and sound perception.


Ever since Helmholtz’s seminal work, On the Sensations of Tone as a Physiological Basis for the Theory of Music,1 it has been known that pitch, intensity, duration and timbre are the main attributes of tone, which in turn is the basic unit of melody. From a semiotic point of view it may be better to say that these attributes are minimal features of tone, emphasizing the fact that two different tones can be fully characterized on the basis of differences in pitch, intensity, duration and timbre. To say that these features are minimal is to say that they stand at the very endpoint of musical analysis, in the same way as phonetic features stand at the endpoint of linguistic analysis. To say that these features are melodic primitives is equivalent to saying that any aspect of melody can be described as a composition, that is, a particular arrangement, of them.

Such an “atomistic” approach is not viewed favorably in semiotics, at least within the Greimasian framework, and hence the question of musical primitives has been relegated to a minor role. Several reasons contribute to this neglect, the main one being the holistic principle already formulated by Aristotle, according to which “the whole is more than the sum of its parts”. On this view, the properties of a complex object cannot be understood from the properties of its parts alone; rather, the object’s complexity affects the structure of its parts. Greimas’ criticism of the sort of componential analysis carried out in the 1960s is particularly clear on this point:

One can imagine, theoretically, that some twenty binary semic categories, taken as the taxonomic base of a combinatory system, could produce several million sememic combinations, a number amply sufficient, at first sight, to cover the semantic universe coextensive with a given natural language. Leaving aside the practical difficulty of establishing such a base of semantic universals, another problem — no less arduous — arises when it comes to specifying the rules of semantic compatibility and incompatibility, which govern not only the construction of sememes but also that of larger syntagmatic units (utterance, discourse). Thus we see that semic (or componential) analysis obtains satisfactory results only when practicing limited taxonomic descriptions…, and that the idea of having at one’s disposal, for semantic interpretation, matrices comparable to those that phonology can provide for its own interpretation, must be abandoned… Thus the great illusion of the 1960s — which believed it possible to equip linguistics with the means for an exhaustive analysis of the content plane of natural languages — had to be abandoned, for linguistics had thereby committed itself, without always fully realizing it, to the extraordinary project of a complete description of the whole of culture, on the very scale of humanity. Greimas & Courtés (1979, p. 327)

However, the fact that it is very hard to reduce the whole to its parts does not imply that we will not improve our understanding of musical semiosis by considering those parts separately and studying their specific properties. Indeed, it is not necessary to choose between “atomism” and “holism”; rather, semiotic analysis comprises both a holistic approach, in which the object (a musical piece) is taken as a whole, and an atomistic approach, in which the properties of primitives become the focus of the investigation. This kind of research leads us to questions like: What are the semiotic properties of tone per se? What are the meaning effects of dynamics and tempo? Why is there no specific notation for timbre? The point is illustrated by the two fragments of Mozart’s clarinet concerto shown in Figure 1. The first fragment (1) contains the motif played on the violin (bars 1-2); the second (2) contains the motif played on the clarinet (bars 57-58).

Figure 1

Mozart’s clarinet concerto in A (K622), Allegro, violin (1) and clarinet (2) parts.

In some respects, these two fragments may be viewed as exactly the same; in others, they may not. Actually they are almost indistinguishable from each other, the difference being the key signature — three sharps in (1) and no sharps or flats in (2) — which is just a notational convention. However, considering the way these fragments are actually played, they are quite distinct. The actual sound we hear when musicians play these fragments is somehow independent of their written form, and should therefore be investigated in its own right. Evidently, we do not ask why Mozart decided to have the strings introduce the theme (1) and restate it some bars later with the clarinet (2), which is more or less canonical in this kind of composition. The question here is why Mozart chose the clarinet and not the oboe. Or the horn, or the viol? In brief, what we are looking for is a “semiosis of timbre” which is somehow independent of the piece in which it is actualized. By doing so, we take timbre as a musical primitive. And what goes for timbre goes for dynamics, tempo, and other melodic variables.

In order to examine this issue we have divided this article into three parts. The subject of the first part is melodic distinctiveness: basically, we discuss how melodic primitives are used to build up musical objects like melodies, chords, rhythmic motifs, and so on. To do so, we need to look at the kinship between music and language from the viewpoint of the theory of distinctive features. Indeed, timbre, intensity, pitch and duration are minimal features both in music and in language. We will show that there are only two classes of features: one comprising pitch, duration and intensity, which is responsible for melodic distinctiveness, and one comprising only timbre, which is responsible for musical enunciation. The second part explores some properties of the vocal apparatus and of musical instruments in order to explain why melodic primitives are organized the way they are. The subject of the third part, which is quite speculative, is the relationship between timbre, connotation and enunciation.

2. Melodic distinctiveness

Let us start our discussion by introducing the concept of distinctiveness as it is understood in linguistics. The point here is to know whether or not distinctiveness — a concept taken from phonology — can be applied to describe how basic musical units are arranged in a system. According to phonologists, the phoneme is the smallest unit of the speech chain which distinguishes meaning. The English words “tea” and “key”, for instance, each have two phonemes: /t/ and /i:/, and /k/ and /i:/ respectively. A pair of words like “tea” and “key” is called a minimal pair because they differ in one single phoneme in the same position in each word. Phonemes can be broken up into even smaller parts called phonetic features, which are the smallest units of the system behind the speech chain. Phonetic features are acoustic or articulatory attributes of the sounds of language, like Sonorant, Vocalic, Consonantal, etc. Figure 2 shows a matrix with all the features necessary to build up the words “tea” and “key” and to distinguish them from each other.

Figure 2

               “tea” /ti:/       “key” /ki:/
               /t/      /i:/     /k/      /i:/
Sonorant        -        +        -        +
Vocalic         -        +        -        +
Consonantal     +        -        +        -
Coronal         +        +        -        +
Anterior        +        -        -        -
High            -        +        +        +
Continuant      -        +        -        +
Voiced          -        +        -        +

Matrix of phonetic features of the segments /t/, /k/ and /i:/.


We can see that /t/ and /k/ share the same features but not the same values: /t/ is [+Coronal], [+Anterior] and [-High], while /k/ is [-Coronal], [-Anterior] and [+High]. Thus, although the phoneme is the smallest unit of the speech chain used to differentiate words, its distinctiveness is a property borrowed from the phonetic features. Moreover, the inventory of phonemes is always a restricted set of sounds actually employed in a particular language. The inventory of phonetic features, by contrast, is universal: they are present in all known languages and occupy the lowest level of the phonological hierarchy, which means that they stand at the very endpoint of phonological analysis. In brief, phonetic features are the primitives of the phonology of any and every everyday language.2
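To make the mechanics of such a matrix concrete, here is a minimal Python sketch (ours, not from the article): it encodes the feature values of Figure 2 and computes the features on which two segments take different values. The dictionary layout and the function name are our own choices.

FEATURES = {
    "t":  {"Sonorant": "-", "Vocalic": "-", "Consonantal": "+", "Coronal": "+",
           "Anterior": "+", "High": "-", "Continuant": "-", "Voiced": "-"},
    "k":  {"Sonorant": "-", "Vocalic": "-", "Consonantal": "+", "Coronal": "-",
           "Anterior": "-", "High": "+", "Continuant": "-", "Voiced": "-"},
    "i:": {"Sonorant": "+", "Vocalic": "+", "Consonantal": "-", "Coronal": "+",
           "Anterior": "-", "High": "+", "Continuant": "+", "Voiced": "+"},
}

def distinctive_features(a, b):
    """Return the features on which segments a and b take different values."""
    return {f: (FEATURES[a][f], FEATURES[b][f])
            for f in FEATURES[a] if FEATURES[a][f] != FEATURES[b][f]}

print(distinctive_features("t", "k"))
# {'Coronal': ('+', '-'), 'Anterior': ('+', '-'), 'High': ('-', '+')}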

We are now in a position to introduce our first hypothesis: the mechanism behind the distinction between two melodic strings (sequences of notes) is exactly the same as that behind the distinction between two words (sequences of phonemes); that is, the concept of distinctiveness can be applied to music in the same way it is applied to language. For example, consider the fragments (3) and (4) in Figure 3 below.

Figure 3

First bars of Happy Birthday (3) and Star-Spangled Banner (4).

The figure shows the first bars of two familiar melodies, Happy Birthday (3) and The Star-Spangled Banner (4). No one will confuse these melodies, despite the fact that they differ from each other in nothing but one single attribute, namely pitch. Both have the same number of notes (six), each with the same duration, and the most prominent notes of each string are placed in the same position. Consequently, these fragments constitute a melodic minimal pair through which we isolate pitch from the other tone attributes, showing that it is a melodic feature — from now on [Pitch] — whose distinctive function in music is similar to the one played by phonetic features in language. The obvious difference between phonetic features like Consonantal or Voiced, on one hand, and Pitch, on the other, is that the former are binary, since they admit of just two values, + or -, while a melodic feature like Pitch is n-ary, that is, it admits of multiple values. We will come back to this point later on.

To say that pitch is a distinctive feature in music may seem to over-explain something that everyone knows, and the extent to which such an approach can carry us may be open to question. However, things are not that simple. Indeed, there are melodies which can easily be distinguished from each other but are built on the same sequence of pitches. Consider the first bars of Beethoven’s “Ode to Joy” (5) and of Bach’s cantata “Jesu, Joy of Man’s Desiring” (6). No one will confuse these two melodies. However, they are built on the very same chain of pitches, as Figure 4 demonstrates.

Figure 4

The first bars of Beethoven’s “Ode to joy” (5) and Bach’s “Jesu, joy of man’s desiring” (6).

These two fragments constitute another melodic minimal pair. But now what distinguishes them from each other is not the pitch of their tones but their duration. In other words, we have managed to isolate duration from the other features — from now on [Duration] — showing that it plays a role similar to pitch in creating melodic distinctiveness. Like pitch, duration is not binary but n-ary, which seems to be an important characteristic of all melodic features and distinguishes them from phonetic features, which are mainly binary.
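The same computation can be sketched for melodic minimal pairs. In the following illustrative Python fragment (ours), melodies are sequences of (pitch, duration) pairs; the pitches are MIDI-style numbers in C and the durations are in beats — rough stand-ins, not transcriptions of the figures:

# Happy Birthday (3) and The Star-Spangled Banner (4), first six notes each:
# same rhythm, different pitches.
happy  = [(67, 0.75), (67, 0.25), (69, 1.0), (67, 1.0), (72, 1.0), (71, 2.0)]
anthem = [(67, 0.75), (64, 0.25), (60, 1.0), (64, 1.0), (67, 1.0), (72, 2.0)]

def differing_features(m1, m2):
    """Which melodic features distinguish two strings of equal length?"""
    diffs = set()
    for (p1, d1), (p2, d2) in zip(m1, m2):
        if p1 != p2:
            diffs.add("Pitch")
        if d1 != d2:
            diffs.add("Duration")
    return diffs

print(differing_features(happy, anthem))          # {'Pitch'}

# A pair like (5)/(6): same pitch chain, different durations.
a = [(64, 1.0), (64, 1.0), (65, 1.0), (67, 1.0)]
b = [(64, 0.5), (64, 0.5), (65, 1.5), (67, 1.5)]
print(differing_features(a, b))                   # {'Duration'}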

Another tone attribute that plays a distinctive role in music is intensity — from now on [Intensity]. Figure 5 below presents the theme of the Allegro from the Brandenburg Concerto BWV 1048 (7) and the same melodic material in a compound rhythm (8). The only difference between the two fragments lies in which notes are stressed and which are not. Since this detail is enough to distinguish the two fragments, it is to be considered a melodic feature.

Figure 5

Allegro from Bach’s Brandenburg Concerto BWV 1048 in the original version (7), and in a compound rhythm (8).


It seems that intensity and duration are usually redundant features: the longer a tone is, the stronger it tends to be, and vice versa. Finally, it is possible to show that timbre, the remaining attribute of tone, is NOT a distinctive melodic feature. We have already seen this indirectly in the fragments of Mozart’s clarinet concerto (Figure 1). The fact that timbre is not a distinctive feature explains why there is no accurate notational system for timbre. The same goes for dynamics, tempo, key, among others. As we will see later, the main characteristic of a non-distinctive feature is that it can be “translated” (in the sense Hjelmslev gives this notion).3 One can always play a melody at a slow or a fast tempo, and this will not change its identity. One can always play a melody on the violin or on the clarinet, as we have seen with Mozart’s clarinet concerto. In these cases, we are just “translating” one tempo into another, one timbre into another, and so on. But when we change the pitch, duration or intensity of the melodic tones, we change the melody’s identity. It is for this reason that they are distinctive melodic features. Interestingly, as with phonological features, one can build up a matrix of distinctive melodic features. Indeed, one finds such matrices in music composition and editing software. Figure 6 shows an example from the first three bars of “Happy Birthday”. We will come back to this point later.

Figure 6

Bar            Pitch   Duration
0001.01.001    g3      288
0001.01.007    g3      96
0002.01.001    a3      384
0002.02.001    g3      384
0002.03.001    c4      384
0003.01.001    b3      768

A matrix from the first three bars of “Happy Birthday”, taken from the software MUSE.
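Read as a data structure, the matrix above also makes the earlier point about “translation” concrete. The sketch below (ours; it assumes MUSE’s durations are ticks at 384 per quarter note, which fits the values shown) encodes the six notes and rescales every duration by the same factor — a change of tempo — leaving the string of pitches and relative durations, and hence the melody’s identity, untouched:

# Position, pitch, duration in ticks (assumed 384 per quarter note).
# Note the absence of a timbre column.
happy_birthday = [
    ("0001.01.001", "g3", 288),
    ("0001.01.007", "g3",  96),
    ("0002.01.001", "a3", 384),
    ("0002.02.001", "g3", 384),
    ("0002.03.001", "c4", 384),
    ("0003.01.001", "b3", 768),
]

def at_tempo(matrix, scale):
    """'Translate' one tempo into another: uniform scaling of durations
    preserves the relative durations that carry the melody's identity."""
    return [(pos, pitch, dur * scale) for pos, pitch, dur in matrix]

slower = at_tempo(happy_birthday, 2)  # same melody, half the tempo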

3. Grading, context-sensitivity and tensiveness


Melodic distinctiveness is not a minor question. Indeed, distinctiveness per se is an intriguing cognitive question, whether from a theoretical or a practical perspective.4 But our interest in distinctive melodic features lies in three particularities — namely grading, context-sensitivity and tensiveness — which are absent from phonetic features and which could help to explain some of the meaning effects of music. Let us briefly examine each of these particularities, starting with grading. Phonological systems are described as structures of numerous features (around twenty, according to Chomsky & Halle), each of which is defined as positive or negative. Any natural language can be described as a special arrangement of those twenty binary features. The structure behind music seems to be much more parsimonious, since no more than three features are needed to characterize any melodic string. On the other hand, these features are not binary but gradual, assuming several values depending on each musical “idiom”. In other words, the phonological system and the melodic system seem to be conversely oriented, the first being a structure of binary oppositions,5 the second a structure of gradual contrasts. This detail affects the way semiosis is produced in language and in music.


Coming back to the examples we began with, the difference between /t/ in “tea” and /k/ in “key” is due to the linguistic oppositions between phonological features like [+/- Coronal], [+/- Anterior] and [+/- High]. The way language expresses grammatical and semantic relations is also based on oppositions between binary features, like [+/- Masculine], [+/- Singular] and so on. In a sense, the system of phonological oppositions and the system of semantic and lexical oppositions suit each other: both are organized on the same basis. That is why binary opposition is the fundamental relationship in linguistic structures. A quite different picture emerges from the melodic system because, in this case, we have a system exclusively based on degrees of contrast rather than on binary oppositions. Pitch is a relationship between two adjacent units in a string, and not simply an opposition between two units. This is true not only for pitch but for all melodic features, be they primitive or derived. Duration, pitch, tempo, dynamics, etc. are not bipolar categories, but contrastive and gradual in nature. This perhaps explains why music is so poor at representing referents and propositions. On the other hand, music is well suited to expressing correlations with meanings based on contrasts, like affects and emotions. To sum up, the gradual character of the distinctive melodic features (pitch, duration and intensity) and of their derived features (register, tempo and dynamics) allows them to express any kind of meaning whose intrinsic character is also gradual. As we have known since Sémiotique des passions (Greimas & Fontanille 1991), grading is one of the basic characteristics of emotions, affects and passions.6


A second particularity which is present in melodic features — and absent from phonological features — is context-sensitivity.7 One cannot assign an absolute value to the pitch, duration and intensity of a tone independently of the context where it occurs. To make this point clear, let us consider the phonetic feature [Voiced]. A segment can be [+ Voiced], like /b/, or [- Voiced], like /p/. This characteristic does not depend on the context where /b/ or /p/ occurs.8 In other words, /b/ is always /b/, in any circumstance, just as /p/ is always /p/ in any circumstance, and the same goes for any other phonetic feature, without exception. This context-free characteristic makes the phonological system stable enough for its purpose, namely the communicative functions of natural languages.

All melodic features, on the contrary, are context-sensitive. One cannot assert that a musical sound is short or long, high or low, strong or weak by itself. As the values are all relative, a sound that is long in one context can be short in another, and the same holds for the intensity and pitch features. To say that a sound is context-free is to say that it is not subject to any syntagmatic constraint; conversely, to say that a sound is context-sensitive is to say that it is subject to a syntagmatic constraint.
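A toy illustration of this syntagmatic relativity (ours, with arbitrary millisecond values): the same absolute duration is classified as long or short only against the mean of its context.

def relative_length(duration_ms, context_ms):
    """Classify a duration against the mean of its syntagmatic context."""
    mean = sum(context_ms) / len(context_ms)
    return "long" if duration_ms > mean else "short"

note = 300  # the same note, in milliseconds
print(relative_length(note, [150, 150, 200]))  # 'long' among shorter notes
print(relative_length(note, [600, 600, 800]))  # 'short' among longer notes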


This point is crucial for understanding why musical semiosis is fundamentally iconic whereas verbal semiosis is fundamentally symbolic (see Santaella, in this volume, page 92). Saussure showed that language is based on oppositions.9 Two phonemes, say /p/ and /b/, are opposed to each other from the point of view of the paradigm, that is, the vertical axis behind the text. This means that /p/ and /b/ presuppose each other.10 However, there is no presupposition at all between them from a syntagmatic point of view, i.e., there is no syntagmatic constraint between /p/ and /b/. To sum up, thanks to the paradigmatic presupposition, /p/ and /b/ are placed on opposite sides of the system and can serve as a basis for oppositions, as in “pack” versus “back”; however, thanks to the lack of syntagmatic presupposition, the presence of /p/ does not imply the presence of /b/, and vice versa. So we find words with both /p/ and /b/ (“bump”, “pub”), with just /p/ (“papa”), or with just /b/ (“baby”). The basic mechanism of natural languages rests on this dual property of phonemes.


In music, on the contrary, things do not work this way, because musical notes contract mutual presuppositions from both the paradigmatic and the syntagmatic viewpoints. A phoneme, say /p/, can be unequivocally established through the paradigmatic functions it contracts with other members of the system, say /b/, /t/, /d/, etc. A musical tone, however, cannot be unequivocally established through the paradigmatic functions it contracts with other members of the system alone. In other words, paradigmatic oppositions are necessary but not sufficient conditions for establishing the value of a tone. To do so, it is necessary to consider its syntagmatic environment, that is, its context. We are faced with a relationship between two units in a string (a syntagma), and not simply an opposition between two units (a paradigm). Any and all musical segments are subject to this kind of syntagmatic constraint. This is true not only for pitch but for all musical features, without exception. This property of musical features has two remarkable consequences. On one hand, music is unable to express oppositive contents like the ones we find in natural languages. It is not possible to express the difference between a “computer” and a “table” through musical sounds, because these sounds are context-sensitive, that is, they are not autonomous from each other.11 On the other hand, musical features can iconically express any content that itself presents the same syntagmatic constraint. This is the basic reason why music can express affects and emotions almost directly. Thus serenity and ecstasy, sadness and joy, tension and relaxation are not only opposed affects (paradigmatic): we can trace a continuous line between joy and sadness, as well as between fast and slow tempos, because joy and sadness, on one hand, and fast and slow tempos, on the other, establish cohesive relationships. Melody derives its universality from this structural parallelism between musical features, on one hand, and affects and emotions, on the other (see Zbikowski, in this volume, page 154).

Finally, the third particularity of distinctive melodic features, tensiveness, is closely related to grading and context-sensitivity. Any and all distinctive melodic features are tensive categories. Let us see how Greimas & Courtés define tensiveness in the Dictionnaire I.

Tensiveness: tensiveness is the relation that the durative seme of a process contracts with the terminative seme, which produces the effect of “tension”, “progression” (for example, the adverb “almost”, or the aspectual expression “on the point of”). This aspectual relation overdetermines the aspectual configuration and, as it were, dynamizes it. Paradigmatically, tensiveness is opposed to detensiveness. (Greimas & Courtés 1979, p. 388)


Obviously, this definition takes tensiveness as a category of the content plane. Indeed, at the time the Dictionnaire I was being prepared (1979), the main task of Greimas and Courtés was to establish a solid basis for a theory of discourse independently of its manifestation, that is, independently of the expression plane. However, the Dictionnaire II (1986) restates the entry, now written by Claude Zilberberg, who emphasizes that shortcoming of the theory. In any case, thanks to the principle of structural analogy established by Hjelmslev,12 it is possible to regard tensiveness as a category of the expression plane as well.

The crucial point about tensiveness is that it establishes a strong relationship between the paradigmatic and the syntagmatic axes, which is far from usual among categories of either plane. Ordinarily, adjacent units on the syntagmatic axis do not inherit paradigmatic dependence. One example will make this point clear. The category [+/- Voiced] constitutes a paradigmatic dependence between the features [+ Voiced] (voiced) and [- Voiced] (voiceless). However, when projected onto the syntagmatic axis, these features are NOT dependent on each other: the fact that a sound is [+ Voiced] does not produce the expectation of a [- Voiced] sound.


This kind of expectation is exactly the meaning effect produced by any tensive category. It is obvious that tensiveness is one of the foundations of any and all rhythmic phenomena. Indeed, we could define rhythm as nothing but the projection of a paradigmatic dependence onto the syntagma. This is the reason why poetic rhythm is based on prosodic features, that is, the alternation of weak and strong, high and low, short and long13 syllables. All these prosodic categories are tensive, since the presence of one of their terms creates the expectation of its complement. Not surprisingly, the prosodic features cover exactly the same substance as the distinctive melodic features do.

4. Musical instruments, vocal apparatus and melodic primitives

Before we go on, here is a summary of what we have seen so far. The expression plane of both music and language is built up from sounds organized in a system of oppositive values, which can be analyzed into minimal elements, the so-called primitives. These elements are commutable, that is, interchanging any two of them changes the identity of an utterance. The properties of the primitives are inherited by the sounds that compose the systems of both language and music. Since grading, context-sensitivity and tensiveness are properties of pitch, duration and intensity, music is especially suited to functioning as an analog (an icon) of any object or process that presents the same properties. The expression plane of natural language, on the other hand, is mainly structured on oppositions and discrete categories and, for this reason, cannot refer directly to dynamic and tensive objects and/or processes.

There are reasons to think that the roots of this difference between language and music lie in their origin and development. Although language is as “natural” as music and their origins are lost in the mists of time, it is possible to hypothesize that speech and singing have a common background in the prehistory of mankind, but at some point diverged onto opposing paths. This is the so-called musilanguage hypothesis advanced by Brown (2001). Although it is not possible to prove this hypothesis, we can explore it indirectly, along with its consequences, by considering some properties of the vocal apparatus and of musical instruments. The idea behind this approach is that the evolution and development of language and music could well be mapped against the mechanisms that produce linguistic and musical sounds. Instead of considering language and music by themselves, we now consider the devices that produce them, in the hope that they can shed light on the problem of melodic primitives.

According to Fant (1960), speech sounds result from two consecutive processes: a source produces an initial sound and a filter modifies it. At the larynx, where the vocal folds are, sounds are produced whose spectrum contains different frequencies. This spectrum is filtered by articulators like the tongue, teeth, lips, velum, etc. Basically, when a human speaks, the vibration of the vocal folds functions as a glottal source of energy and the displacement of the organs of the mouth and throat functions as the filter. The combined action of source and filter produces the two phonological macro-categories, phonemes (consonants and vowels) and prosodemes. With this mechanism, a human can produce thousands of syllables, which are the basic building blocks of speech; the inventory of English syllables, for example, runs to around 16,000. The crucial point in Fant’s model is that phonemes depend on the mobility of both source and filter, whereas prosodemes depend exclusively on the source’s mobility. Consequently, source and filter are functionally specific.

Musical instruments are mechanisms fully comparable to the human vocal tract. Any melodic musical instrument has a source of energy and a filter. The source-function can be performed by the strings of a guitar, the wooden bars of a xylophone, the reed of a clarinet, and so on, while the filter-function is performed by sound boxes and tubes of many different shapes and sizes, built from a whole array of materials. However, musical instruments present a crucial difference in relation to the vocal tract: their filter has no movable parts playing a role similar to that of the jaw, tongue or velum. In other words, the filter of a musical instrument is always static. One can pluck or bow the strings of a violin, but one cannot change the shape of its body in order to produce different timbres. The same is true for wind instruments. On a saxophone, for example, the source is a single-reed mouthpiece, and the filter is the metal pipe and the bell. The only movable part of the whole device is the reed. The keys only change the resonance frequency of the sound generated; we press them in order to change nothing but the pitch.


As tones are the outcome of the source’s vibration, we have nothing but tones as the output of a musical instrument. Along these lines, the mobility of the filter seems to be the key factor distinguishing musical from linguistic sounds, which are therefore organized in conversely oriented systems. The melodic system can be described as a singular case of organization of basic sound-unit categories in which (i) the category of segments is completely syncretized into one single value, the timbre, and (ii) the category of supra-segments presents maximal resolution, with a wide range of pitch, duration and intensity values. This organization is the opposite of the one observed in natural languages, where segments present maximal resolution and supra-segments are reduced to a category with few terms.14

From this point of view, what makes the relationship between music and language so special is that they share the same basic units, while at the same time these units are organized in conversely oriented systems. In language, the prosodic categories are atrophied and play a marginal role; in music, they are hypertrophied and constitute the core of the system. In language, the inventory of phonemes is much richer than the inventory of prosodemes, while in music we have the opposite scenario. Thus, we could metaphorically say that the melodic system is an inverted mirror of the phonological system.

This reasoning shows that the systems of musical and linguistic sounds do not differ in the substance of their basic units. Consequently, the fact that they are specialized systems has to be attributed to the divergent paths their evolution and development took. It is well known that several anatomical changes in the vocal tract were decisive for the emergence of human language (de Boer 2010). Crucially, the lowered larynx and the angle between mouth and pharynx created the room necessary for improved tongue mobility, that is, filter mobility. On the other hand, from the perspective of the source-filter model, what distinguishes a modern flute from its probably oldest ancestor (the disputed Divje Babe flute), or a modern string instrument from the ancient lyre, is the range of tones they produce (Figure 7). As instruments evolve, their range increases. If we consider the three basic features of musical tones (pitch, duration and intensity), the modern violin has a range of about 2,400 distinct sounds and a piano 4,488, all within the same violin-like and piano-like timbres.

Figure 7

The dashed line with arrows shows the divergence in the evolution of music and language.

This approach helps us understand why music and language share some properties but not others, and it strengthens the musilanguage hypothesis proposed by Brown (2001). However, in contrast to Brown’s psycho-biological standpoint, ours is semiotic — that is, we try to show that some properties of the vocal tract and of musical instruments are at the root of the different ways language and music produce meaning.

5. The psychoacoustic basis of timbre

It is now time to discuss briefly the remaining melodic primitive: timbre. We have just seen that, from a semiotic viewpoint, timbre plays a different role than pitch, duration and intensity: timbre is not a distinctive feature. In terms of Hjelmslevian semiotics, one can say that whereas pitch, duration and intensity are melodic invariants, timbre is a melodic variant, emphasizing the fact that timbre can be varied freely without changing the identity of a melody — something which cannot be done with pitch, duration and intensity. The psychoacoustics behind sound production helps us figure out why things work that way and what the consequences of this fact are for musical semiosis.

One of the most important tools for acoustic analysis (Henrique 2011) was developed in the early 19th century by the French mathematician and physicist Jean-Baptiste Joseph Fourier, and is now called Fourier analysis. Applied to sound, it states that any complex sound can be described as a composition of simpler sounds. These simpler sounds can be split into two groups: (i) the fundamental, also called f1, and (ii) the harmonics, whose frequencies are integer multiples of f1. Theoretically, the number of harmonics is unlimited. For instance, given a sound whose f1 = 1000 Hz, its fundamental and harmonics are: f1 = 1000 Hz; f2 = 2000 Hz; f3 = 3000 Hz; and so on.
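The series is trivial to compute; the following one-function Python sketch (ours) simply restates the definition just given:

def harmonics(f1, n=5):
    """Frequencies of the fundamental and its harmonics: integer multiples of f1."""
    return [k * f1 for k in range(1, n + 1)]

print(harmonics(1000))  # [1000, 2000, 3000, 4000, 5000] (Hz)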

One may ask what exactly differs between the timbres of two musical instruments, say a violin and a clarinet. The answer is that the harmonics produced by these instruments present different intensities, even when f1 is exactly the same. Thus, suppose a violin and a clarinet produce the same note C, with the same intensity and the same duration. We will perceive different sounds (timbres) due to the harmonic content of those sounds. Moreover, a trained musician can perceive and tell apart two violins, because no two musical instruments produce exactly the same timbre, no matter how similar they are to each other. The same goes for any musical instrument. Furthermore, this principle of timbral complexity also explains why we never find two different people with the same voice. Indeed, the structure of the vocal apparatus is quite similar to that of musical instruments, and the physics behind them is exactly the same: both consist of a vibrating system, which produces a complex vibration pattern (f1 and harmonics), and a filter system, which enhances certain harmonics and dampens others.

The violin is a good example of such a vibrating/filter system, as we can trace the sound path from the string to the plates. The vibrating string usually has many harmonics, each with a different intensity. From the string, the vibration is transmitted along and down the bridge to the top plate. The bridge has its own preferred frequencies and hence plays the role of a filter that reinforces some harmonics while softening others. From the bridge, the sound is transmitted to the top plate of the violin. Like the bridge, the top and back plates also have their own (different) preferred frequencies, and again the filter effect occurs. Let us illustrate this point with a concrete example (see Figure 8), following Gough (2000). A G note with a fundamental frequency (f1) of 196 Hz is played by bowing the violin’s string (input waveform). The intensity of each harmonic of this frequency decreases as its order increases, which is represented by the decreasing height of the bars in the graph to the right of the input waveform. However, the bridge is frequency-selective and emphasizes the frequencies between 2.5 kHz and 3.5 kHz (bridge response). The body has a much more complex pattern of emphasized/dampened frequencies that affects virtually all of the harmonics as well as the fundamental (body response). To sum up, the final sound produced by a violin (lower graph) has a completely different set of relative intensities compared with those of a single vibrating string. The waveform of this final sound is shown in Figure 8 (output waveform).
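The chain just described can be caricatured in a few lines of Python. In this sketch only the 196 Hz fundamental and the 2.5–3.5 kHz bridge emphasis come from the text; the source spectrum and both gain curves are invented for illustration, real bridge and body responses being far more intricate:

import math

f1 = 196.0  # the bowed G string

# Source: twenty harmonics whose intensity falls off with order.
source = {k * f1: 1.0 / k for k in range(1, 21)}

def bridge_gain(f):
    # Crude stand-in for the bridge emphasis between 2.5 and 3.5 kHz.
    return 2.0 if 2500 <= f <= 3500 else 1.0

def body_gain(f):
    # Crude stand-in for the body's complex pattern of peaks and dips.
    return 1.0 + 0.5 * math.sin(f / 300.0)

# Output spectrum: the relative intensities no longer match the string's.
output = {f: a * bridge_gain(f) * body_gain(f) for f, a in source.items()}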

Figure 8

The filter effects caused by the violin parts and the modifications they produce on the input waveform.

What we have seen so far concerns the way sound is produced. But the way sound is perceived also affects the meaning effect of timbre. Indeed, our perception of sound is not “flat”, because the ear’s response varies with frequency. A clear indication of this is the impossibility of transposing timbres. Suppose we record a female voice and then, with the aid of software, transpose it to a lower register to make it sound like a male voice: the resulting voice will not be a male voice, but rather a caricature of one, with a comedic meaning effect. As Leipp (2010, p. 156) explains,

The individual anatomy and physiology of the auditory system thus play a determining role in the sensation of timbre, and it is not surprising to find divergences of judgment between listeners of different ages. It is therefore clear that the word “timbre” can have no intrinsic, absolute, physical meaning.

What is the relevance of this mechanism behind sound production and sound perception for musical semiosis? Above all, we note that what we call timbre, pitch, intensity and duration are just meaning effects, that is, abstract representations resulting from our perception of certain physical magnitudes. Thus, the frequency of f1 produces the meaning effect of pitch, the amplitude of f1 produces the meaning effect of intensity, and the length in time of f1 produces the effect of duration. The complex composition of harmonics, multiples of f1, produces the effect of timbre. It is important to note that, besides their frequencies and relative intensities, each harmonic has its own duration.


We now have enough elements to justify our proposal that timbre, on one hand, and the other melodic features, on the other, should be treated as two different classes of primitives. We have mentioned the possibility of representing tone as a matrix of melodic features. However, due to the differences we have pointed out between pitch, duration and intensity, on one hand, and timbre, on the other, we cannot represent timbre in such a matrix. Because the distinctive melodic features take finitely many values, a matrix for tone has a limited number of columns; and because the number of harmonics that characterize timbre is theoretically unlimited, a matrix for timbre would have an unlimited number of columns, which is far from being a simple and practical system of representation.15 To sum up, while pitch, duration and intensity constitute a closed class, timbre constitutes an open class.
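The two classes can be pictured as two kinds of fields in a record (a sketch with our own field names): pitch, duration and intensity occupy fixed columns, while timbre is an open-ended list of harmonic amplitudes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Tone:
    pitch: float      # Hz — the closed class:
    duration: float   # seconds — always exactly these three columns
    intensity: float  # dB
    # The open class: one amplitude per harmonic, of no fixed length.
    spectrum: List[float] = field(default_factory=list)

a440 = Tone(pitch=440.0, duration=0.5, intensity=70.0,
            spectrum=[1.0, 0.5, 0.33, 0.25])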

6. Conclusion

We will now discuss how these particularities of the musical primitives could guide future research in semiotics. Among the developments that could be explored, and about which we intend to write at length later, we mention just two: the kinship between music and language, including parallels and divergences in their evolution and development, and the question of musical enunciation and connotation.

One of the main tasks of contemporary semiotics is still the comparative study of music and language. The chief reason for this is that no two semiotic systems are as similar to each other as music and language; at the same time, the singularities presented by music can shed light on language, and vice versa. Moreover, from a structural point of view, language and music are by far the most complex semiotic systems we know. We have already seen that melodic features present three particularities — namely grading, context-sensitivity and tensiveness — which are absent from phonological features and which are responsible for some differences between music and language. Moreover, the fact that music is a prosody-based semiotic system (since pitch, duration and intensity are the so-called prosodic categories) opens the possibility of polyphony, which is strictly impossible in language: two or more people cannot speak at the same time, which is not a problem at all when they sing or play. One important consequence of polyphony is that it opens the door to harmony. When we play two or more instruments at the same time, the meaning effects of harmony are created. Any monody has an implicit harmony, but the full development of harmonic possibilities depends on polyphony. Thus, the roots of harmony and polyphony — to mention just two relevant fields of music — are grounded in musical primitives. Another question involving primitives relates to a striking asymmetry in the inventory of signifiers of music and language. We know that musical instruments and the vocal apparatus present structural similarities. Despite this fact, linguistic distinctiveness is organized around phonemes, with tones as a marginal class, even in tone languages. On the other hand, we have nothing but tones as the output of a musical instrument. This seems to be a key factor distinguishing musical from linguistic sounds, which are therefore organized in conversely oriented systems. In language — including tone languages — the prosodic features (pitch, duration and intensity) play a secondary, marginal role; in music, they play a primary role and in fact constitute the core of the system. From this viewpoint, the expression planes of music and language seem to be “inverted mirrors”. This phenomenon, whose roots can be found in the evolution of music and language, can help us understand why music is fundamentally iconic whereas language is primarily symbolic (cf. Santaella, in this volume). Despite being a crucial question for our understanding of musical semiosis, no sustained attempt has been made to explore this research pathway, as far as we know.

Any and all utterances — including melodic utterances — are the result of an act of enunciation performed by an enunciator. Since pitch, duration and intensity are the sole distinctive melodic features, they are the necessary and sufficient conditions for establishing any and all melodic utterances; that is, a melodic utterance is nothing but a string of pitch, duration and intensity values. Timbre has no place in such a melodic utterance. On the other hand, there is a solidarity (Hjelmslev) between utterance and enunciator. Thus, any melodic utterance can be performed in an unlimited number of timbres, each one lending its particular character to it. In other words, in each act of enunciation, the enunciator manifests itself through the timbre. It must be stressed that this claim is restricted to the melodic primitives and their role in semiosis. Evidently there are derivatives of these primitives, which can play different roles. For instance, dynamics, which is a derivative of intensity, is not a distinctive feature and consequently belongs among the enunciation categories. The same goes for register and tempo, which are derived from pitch and duration respectively. All these categories are controlled by the enunciator. For instance, a singer or player can change the meaning effect of a whole piece by changing the values of timbre (at least partially), tempo, register and dynamics. By doing so, he or she will not change the piece itself, but some of its meaning effects, which are usually expressed verbally with connotative terms like “strong”, “sweet”, “precise”, and so on. It is clear that those meaning effects are related to the enunciator, since those terms characterize a strong, sweet or precise enunciation. In other words, all of these indexes connote something of the enunciator rather than of the utterance.


Bibliography

Brown, S. (2001), “The ‘Musilanguage’ Model of Music Evolution”, in The Origins of Music, Cambridge: MIT Press, pp. 271-301.

Carmo Jr., J.R. do (2012), “On some parallels between the vocal apparatus and musical instruments, and their consequences for the evolution of language and music”, in Scott-Phillips, Thomas et al. (eds.), Proceedings of the 9th international conference on the evolution of language (EVOLANG9), Kyoto: World Scientific, pp. 430-432.

Chomsky, N. & Halle, M. (1968), The Sound Pattern of English, New York: Harper and Row.

De Boer, B.G. (2010), “Modelling vocal anatomy’s significant effect on speech”, Journal of Evolutionary Psychology, 8(4), pp. 351-366.

Fant, G. (1960), Acoustic Theory of Speech Production, The Hague: Mouton.

Gough, C. (2000), Science and the Stradivarius, Physics World, online: http://physicsworld.com.

Greimas, A.J. & Courtés, J. (1979), Sémiotique. Dictionnaire raisonné de la théorie du langage I, Paris: Hachette.

Greimas, A.J. & Fontanille, J. (1991), Sémiotique des passions, Paris: Seuil.

Helmholtz, H. (1895), On the Sensations of Tone as a Physiological Basis for the Theory of Music, London: Longman.

Henrique, L.L. (2011), Acústica Musical, Lisboa: Fundação Calouste Gulbenkian.

Hjelmslev, L. (1961), Prolegomena to a Theory of Language, Madison: The University of Wisconsin Press.

Jakobson, R., Fant, G. & Halle, M. (1952), Preliminaries to Speech Analysis: The Distinctive Features and Their Correlates, Cambridge: MIT Press.

Leipp, É. (2010), Acoustique et musique, Paris: Presses des Mines.

Robert, St. (2005), “The Challenge of Polygrammaticalization for Linguistic Theory”, in Frajzyngier, Hodges & Rood (eds.), Linguistic Diversity and Language Theories, Amsterdam: John Benjamins, pp. 119-142.

Saussure, F. de (1916), Cours de linguistique générale, Paris: Payot.

Zilberberg, Cl. (2012), La Structure tensive, Liège: Presses Universitaires de Liège.

Haut de page

Notes

1 Helmholtz (1895).

2 Many aspects of the sound structure of languages cannot be described through the theory of distinctive features alone. Instances of this are the rhythmic patterns of speech, intonation, the interaction between phonological units (words, feet, clitics, etc.) and syntactic structures, etc. Thus, since the late 1970s, new research pathways have opened up to explore these fields: metrical phonology (Liberman & Prince 1977), intonational phonology (Pierrehumbert 1979), autosegmental phonology (Goldsmith 1976; Clements 1985), prosodic phonology (Nespor & Vogel 1986), to mention a few. Although essential to our understanding of the way language works as a whole, those analytical models have not changed a single detail in the theory of distinctive features. Since the crucial point in our argument is distinctiveness, we have no need to consider other phonological models in our analysis.

3 Hjelmslev (1961, p. 117).

4 Every time we are faced with an object, no matter its nature, we are asked to identify and recognize it, and the only way to do that is by finding the distinctive feature(s) that make it different from other, more or less similar objects. Whether we perceive an ordinary object, a word or a melody, we are always asked to find out what gives it its specificity. Although distinctiveness is not among the main topics in semiotics, it is at the core of so-called artificial recognition systems, which are devices for the recognition of voices, characters (OCR), faces, gestures, and melodies, among others. The economic interest in these devices and the technical interest in how they may work are quite obvious. With 39% of the music market in digital formats, it is no wonder that companies like Sony, Google and Microsoft invest heavily in research, mainly in music data mining and music recommendation systems, fields in which a theory of melodic distinctiveness is crucial.

5 “In view of the fact that phonological features are classificatory devices, they are binary, as are all other classificatory features in the lexicon, for the natural way of indicating whether or not an item belongs to a particular category is by means of binary features. This does not mean that the phonetic features into which the phonological features are mapped must also be binary. In fact, the phonetic features are physical scales and may thus assume numerous coefficients, as determined by the rules of the phonological component.” (Chomsky & Halle 1970, p. 297).

6 We cannot develop this point further here as it would require an extended discussion about tonality and harmony, and perhaps another essay. The reader interested in the semantic aspects of this issue can refer to Claude Zilberberg’s recent work, in particular Zilberberg (2012).

7 We borrow this notion from semantics: “All linguistic morphemes are context-sensitive in the way that their semantic value depends partly on their semantic environment (tender does not have the same meaning in a tender steak and in a tender man)”. (Robert 2005, p. 123).

8 Many languages present neutralization between phonological oppositions. For instance, in syllable-final position, the German voiced plosives [b, d, g] become the voiceless plosives [p, t, k], respectively. Thus “ab” (from) is pronounced [ap], “Fahrrad” (bicycle) is pronounced [ˈfaːɐ̯ʀaːt], and “Tag” (day) is pronounced [taːk]. Obviously, such a devoicing rule does not imply that the speaker does not recognize the difference between voiced and voiceless consonants.

9 “The whole mechanism of language (…) rests on oppositions of this kind and on the phonic and conceptual differences they imply” (Saussure 1967, p. 165).

10 The presupposition rests on the voiceless/voiced glosseme that underlies the opposition p/b.

11 In fact, we are not faced with a real impossibility; we simply do not know of any everyday language whose structure is based on this kind of syntagmatic relationship. Incidentally, such a (hypothetical) language would have to be based entirely on prosody.

12 “…expression plane and content plane can be described exhaustively and consistently as being structured in quite analogous fashions, so that quite identically defined categories are foreseen in the two planes. This means a further essential confirmation of the correctness of conceiving expression and content as coordinate and equal entities in every respect” (Hjelmslev 1961, p. 60).

13 See Jakobson, Fant & Halle (1952, p. 13).

14 The system of standard English sounds, for example, runs to forty-four phonemes but only a few prosodemes, and some known languages have at least ninety-three phonemes. Prosodeme systems, on the other hand, run to no more than about five levels of pitch (Mandarin Chinese) or three levels of duration (Swedish).

15 This is the reason why one can represent a sequence of tones with a limited number of symbols but cannot do the same with timbre. Consequently, there is no notation system for timbre, and we are limited to using labels like “violin”, “cello”, “flute”, and so on.



Authors

José Roberto do Carmo Jr.

José Roberto do Carmo Jr. is a luthier and a linguist. In the late nineties he established a lutherie workshop in São Paulo, and since then he has been repairing and building electro-acoustic upright basses, mainly for jazz and pop musicians. Besides working as a luthier, he earned a Master’s (2003) and a PhD (2007) in Linguistics. He was a visiting researcher at Freie Universität Berlin (2010), the University of Cambridge (2011) and the Université de Liège (2012). His primary research interest is the relationship between music and language. Specifically, he works on the hypothesis that musical instruments are semiotic mechanisms fully comparable to the human vocal tract and, consequently, that some properties of the vocal tract and of musical instruments are at the root of the different ways language and music produce meaning. He has published Da voz aos instrumentos musicais: um estudo semiótico (2005) and edited Linguagens na cibercultura (2013). Website: www.soundsign.net.


Thiago Corrêa de Freitas

Thiago Corrêa de Freitas holds a BS in Physics (2007), an MS in Physics (2009) and a PhD in Physics (2012) from the Universidade Federal do Paraná, and is currently a lecturer in the Lutherie course at the same university. He has published more than 15 papers in specialized journals on the acoustic and mechanical aspects of musical instruments. His main research interests are new materials for musical instruments, subjective aspects of musical sounds, and the interdisciplinary space between physics and music.
