If this is addressed at me, no, I do not, since for me these things should be equal.
@lrnv IIUC there is another model that makes more distinctions, where each note is part of a key and is not equal to any note in a different key, even if they have the same pitch.
@dpsanders is that a fair description of your model? Do you know if these models have names?
As far as I understand, a distinction is made between pitch, which is partly psychological, and frequency, which is a purely physical quantity.
Effectively pitch becomes an abstract concept, and this is what the package is reflecting.
C♯♯ is usually written as C𝄪 (“C double sharp”) and is indeed supported by the package.
For example, in the key of G♯ major:
julia> scale = Scale(G♯[4], major_scale);
julia> notes = Base.Iterators.take(scale, 8) |> collect; show(notes)
Pitch[G♯₄, A♯₄, B♯₄, C♯₅, D♯₅, E♯₅, F𝄪₅, G♯₅]
This is so that each note in the scale receives a unique note letter name. This is the usual way to deal with these weird keys in music theory. This one is “the same as” A♭ major, of course. It would be easy to add an is_enharmonic function to the package.
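For illustration only, here is a rough standalone sketch of what such an is_enharmonic check could look like in 12-tone equal temperament (this is not MusicTheory.jl’s actual API; the letter/accidental encoding below is made up):

# Hypothetical sketch, not MusicTheory.jl internals: two spellings are
# enharmonic (in 12-tone equal temperament) if they map to the same pitch class.
const LETTER_SEMITONE = Dict('C' => 0, 'D' => 2, 'E' => 4, 'F' => 5,
                             'G' => 7, 'A' => 9, 'B' => 11)

# `accidental` counts sharps as +1 each and flats as -1 each.
pitch_class(letter::Char, accidental::Int) =
    mod(LETTER_SEMITONE[letter] + accidental, 12)

is_enharmonic(a, b) = pitch_class(a...) == pitch_class(b...)

is_enharmonic(('C', 1), ('D', -1))   # C♯ vs D♭ ⇒ true
is_enharmonic(('F', 2), ('G', 0))    # F𝄪 vs G  ⇒ true
is_enharmonic(('G', 1), ('A', -1))   # G♯ vs A♭ ⇒ true (hence G♯ major “=” A♭ major)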
Yes, those intervals indeed have different names in music theory:
julia> Interval(A[4], D♭[4])
Augmented 5th
julia> Interval(A[4], C♯[4])
Minor 6th
The so-called “number” of the interval is determined by the “number of letter names” between the two notes.
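As a rough illustration of that counting rule (not the package’s actual implementation), something like this captures the “number” part; the quality then comes from the accidentals:

const LETTERS = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

letter_index(letter::Char) = findfirst(==(letter), LETTERS) - 1

# Count letter names from `low` up to `high`, inclusively.
interval_number(low::Char, high::Char) =
    mod(letter_index(high) - letter_index(low), 7) + 1

interval_number('D', 'A')   # 5: D♭ up to A is some kind of fifth (here augmented)
interval_number('C', 'A')   # 6: C♯ up to A is some kind of sixth (here minor)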
That sounds a bit too extreme, since if you e.g. modulate to the dominant from C major, you want the new G to be the same as the old G from C major!
To elaborate on this a bit: If you see e.g. a middle C on a piece of sheet music, it represents the rather abstract concept “middle C”. For example, if different musicians play that note, one might be slightly out of tune so they would not produce the same frequency, but they would still both think they were “playing C”.
Indeed, different orchestras tune to different frequencies for A, from say 435 to 445 Hz, or even 415 or lower if they are a Baroque group. So A does not correspond to a unique frequency; it’s more like a social / mathematical abstract concept.
Yes, but the A might be different if you aren’t using equal temperament (e.g. if you are working with instruments that are tuned by ear, including human voices), since a just major second above the G is different from a just major sixth above the C (they differ by a factor of 81/80, the syntonic comma).
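This is easy to check with Julia’s exact rationals (plain Julia, nothing package-specific):

A_via_G = 3//2 * 9//8   # C → G (just fifth), then G → A (just major second)
A_via_C = 5//3          # C → A directly (just major sixth)
A_via_G / A_via_C       # 81//80, the syntonic comma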
The unfortunate fact is that you can’t do a 1–1 identification between notes and fundamental frequencies (even if we normalize by the A4 tuning), since note names by themselves don’t fully specify the tuning system and many tuning systems remain in common use even if we restrict ourselves to Western music.
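To make that concrete, here is the “same” E above A = 440 Hz under three tunings (purely illustrative numbers, computed in plain Julia):

A4 = 440.0
E5_equal    = A4 * 2^(7/12)   # equal-tempered fifth ≈ 659.26 Hz
E5_just     = A4 * 3/2        # pure (just) fifth = 660.0 Hz
E5_meantone = A4 * 5^(1/4)    # quarter-comma meantone fifth ≈ 657.95 Hz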
Many music software packages don’t bother with these distinctions, though, I think, because such distinctions are often left to the discretion of the performers. I guess there are some more precise notations for this, however, e.g. I’ve heard of “Extended Helmholtz-Ellis accidentals”, which have some support in notation software like MuseScore, but my knowledge of microtonal music is skin-deep.
Interesting package.
I agree: frequency is undefined until you specify concert A and a temperament, although A4 = 440 Hz and equal temperament should be the defaults.
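A minimal sketch of that suggested default (nothing here is MusicTheory.jl API; pitches are just given as semitone offsets from A4):

equal_tempered_frequency(semitones_from_A4::Real; A4 = 440.0) =
    A4 * 2.0^(semitones_from_A4 / 12)

equal_tempered_frequency(-9)   # middle C ≈ 261.63 Hz
equal_tempered_frequency(3)    # C5 ≈ 523.25 Hz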
Tuning is easiest to specify if the notes are “fixed pitch”, like in keyboard instruments. I am aware as a musician that what it means to be “in tune” for instruments like a cello, guitar, or a choir can be very fickle (e.g. harmonies are usually tuned justly and melodies follow Pythagorean tuning, but even this has exceptions), and for simplicity it’s best to not go down that rabbit hole.
If non-equal temperaments are implemented, enharmonicity should be allowed for well temperaments. In other words, C♯ = D♭ in Kirnberger III, but they are not equal in quarter-comma meantone.
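For anyone curious how big that difference is, a quick back-of-the-envelope calculation in plain Julia (quarter-comma meantone defined by its fifth of 5^(1/4); the names below are just variables, not package pitches):

cents(ratio) = 1200 * log2(ratio)

fifth  = 5^(1/4)          # quarter-comma meantone fifth (four of them give a pure major third)
Csharp = fifth^7 / 2^4    # 7 fifths up from C, folded back into the octave
Dflat  = 2^3 / fifth^5    # 5 fifths down from C, folded up into the octave
cents(Csharp / Dflat)     # ≈ -41 cents: C♯ comes out noticeably flatter than D♭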
Besides diatonic scales, you could implement Allen Forte’s catalog of pitch-class sets, in which the major scale is assigned the number 7-35. I don’t know how compatible it would be with the current data structures, since these Forte classifications come from a musical set theory perspective.
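As a sketch of how that could look (a hypothetical standalone implementation of prime form, Rahn-style, not tied to the package’s data structures):

# Interval pattern of a rotation, measured upward from its first element.
intervals_from_first(rot) = [mod(p - rot[1], 12) for p in rot]

# Normal order: among all rotations of the sorted pitch-class set, take the most
# compact one (smallest interval to the last element, then to the second-to-last, …).
function normal_order(pcs)
    s = sort(unique(mod.(pcs, 12)))
    rotations = [circshift(s, -k) for k in 0:length(s)-1]
    return first(sort(rotations; by = rot -> Tuple(reverse(intervals_from_first(rot)))))
end

# Prime form: zero-transposed normal order of the set and of its inversion,
# keeping whichever is more packed to the left.
function prime_form(pcs)
    a = intervals_from_first(normal_order(pcs))
    b = intervals_from_first(normal_order([mod(-p, 12) for p in pcs]))
    return Tuple(a) <= Tuple(b) ? a : b
end

prime_form([0, 2, 4, 5, 7, 9, 11])   # major scale ⇒ [0, 1, 3, 5, 6, 8, 10], i.e. Forte 7-35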
Oh, this is super exciting! One of the first things I ever did with Julia was making my own little synthesizer, taking in either MIDI events or input from the QWERTY keyboard and generating simple tones in (nearly) real-time.
I might want to go find that code and clean it up so that I can make a MusicTheory.jl-compatible package out of it!
Thanks for your work on this!
Nice!
Ear-training exercises would be the thing I would most like to see, since what we can easily find online is very poor: something a bit more than just the typical “identify the interval” drill, e.g. listening to chords and identifying them from their arpeggios, or identifying modes by hearing them played, and so on. Also quarter tones.
Another interesting application could be trying to detect patterns (a scale, mode, or chord) in sections of a piece. The big problem is assuming there are diatonic patterns at all. Good examples come from jazz, where people often cannot even agree on what chord is being played in a given piece, because they cannot decide which notes belong to the main chord and which are embellishments, or whether it’s a substitution.
I have to admit that thinking about using programming for composing is something that makes me shiver in a bad way, but that’s just my personal opinion. I can try to picture it being useful in a “Monte Carlo” sense, where you randomly sample patterns according to some rules and then listen to all of them to see if anything sounds good to you. But we can already do that on our instruments!
Since even using the usual “recipes” we are taught when learning about the degrees of the major and minor scales is almost guaranteed to make something sound extremely boring, I am, from experience, against systematizing composition. I can see it being necessary if you have to write film music, marketing music, TV music, and music as a product in general.
There’s a genre of electronic music in which programming is part of the composition process. Indeed, such music (which pre-dates computer music!) is often written to be performed only by machines, e.g. using irrational time relationships like 1/√2. (In contrast, if you google “irrational time signature”, you find a lot of discussions in which unusual time signatures like 7/8 are amusingly called “irrational” but can still be performed by humans.)
Thanks for showing us this. I have to appreciate the technical difficulty of this guy pulling this off when there were no computers. However, my monkey brain really doesn’t feel like the extra mathematical impositions make it sound better. I think the initial section of “Study for Player Piano No. 33” sounds interesting, but I’m just skeptical that a similar texture couldn’t be achieved without the programming part.
I don’t see any reason not to define some differential equation for the frequency \nu_k(t) of voice k heard at time t:
d\nu_k(t) = f(t, \dots)\,dt
Or, even better, make it stochastic:
d\nu_k(t) = f(t, \dots)\,dt + g(t, \dots)\,dZ(t)
for some stochastic process Z(t). This way I, the composer, won’t even know the pitch before it’s written to the file.
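A rough sketch of what simulating that could look like in plain Julia (Euler–Maruyama, with Brownian motion standing in for Z(t); the drift f, the noise scale g, and all constants below are made up purely for illustration):

using Random

function simulate_pitch(ν0; f = (t, ν) -> 0.0, g = (t, ν) -> 5.0,
                        T = 10.0, dt = 0.01, rng = Random.default_rng())
    ts = collect(0.0:dt:T)
    νs = similar(ts)
    νs[1] = ν0
    for i in 2:length(ts)
        dZ = sqrt(dt) * randn(rng)   # Brownian increment
        νs[i] = νs[i-1] + f(ts[i-1], νs[i-1]) * dt + g(ts[i-1], νs[i-1]) * dZ
    end
    return ts, νs
end

ts, νs = simulate_pitch(440.0)   # one voice wandering randomly around A4, in Hz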
The only problem is that it will provably sound pretty bad.
To contextualize: I’m all for uncomfortable sounds; I’m currently on a Penderecki listening binge. I’m not trying to make the boring argument that anything that doesn’t sound “agreeable” is invalid. I think my argument is more along the lines of “the task of composing becomes even more difficult after you surpass a certain degree of systematization in the composition process”. It’s like being too focused on the “process” you use to compose rather than on the end result of the sound. I’m sure many people have written wonderful music focusing mostly on the process, but in my personal experience it only seems to make things ever more difficult rather than easier (which is arguably the goal of programming something in the first place).