Igevorse Otonnoleare

Assigning channels to instruments: first steps

20 July 2014 | 10:18 pm

Overview of work this week

In short:

  • I began implementing assigning channels to instruments.
  • Found bugs related to changing instruments.
  • Fixed bugs with JACK Transport.

I decided not to implement "assigning channels for JACK MIDI Out" on its own, but instead began implementing the more complex "assigning channels to instruments" feature. Users have been waiting for this, and now I can kill two birds with one stone: implementing this feature gives me proper MIDI channels for JACK MIDI Out too.

So, I continued researching the MIDI-related code and began implementing.

At the beginning I wanted to change a lot of code to make the internal MIDI system simpler. I changed some code and got a partially working solution, but stalled on the rest of the implementation. After additional research I realized that the current MIDI implementation is convenient enough and that I could implement this feature with minimal code changes. My second implementation was simpler.

We can't change channels in the "Create Instruments" window because it only deals with Parts and Staves, not Instruments. So instead, we can now change the port/channel directly in the Mixer window.

Here you can see a screenshot of the new Mixer window:

It is already a working solution, not a sketch. I tested it with QSynth (which shows the input channels), and the channels really do change.
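
As an illustration of the idea (not MuseScore's actual internals; all names below are hypothetical), the per-instrument channel record that the Mixer edits could look roughly like this minimal C++ sketch:

```cpp
// Minimal sketch (hypothetical names, not MuseScore's real API):
// a per-instrument channel record that the Mixer edits directly.
#include <iostream>
#include <string>

struct InstrumentChannel {
    std::string name;   // e.g. "Violin" or "Violin (pizzicato)"
    int port;           // MIDI output port chosen in the Mixer
    int channel;        // MIDI channel 0-15 chosen in the Mixer
    int program;        // program sent once when playback starts
};

// Called when the user edits the port/channel boxes in the Mixer.
void setMidiDestination(InstrumentChannel& ic, int port, int channel) {
    ic.port    = port;
    ic.channel = channel & 0x0F;   // keep the channel in the 0-15 range
}

int main() {
    InstrumentChannel violin{"Violin", 0, 0, 40};
    setMidiDestination(violin, 1, 3);      // user picks port 2, channel 4 (0-based: 1, 3)
    std::cout << violin.name << " -> port " << violin.port + 1
              << ", channel " << violin.channel + 1 << '\n';
}
```

The point is simply that the port/channel pair lives on the instrument's channel record, so the Mixer can change it directly and playback picks it up.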

Now I should implement saving the channels to the score file and restoring them on load.
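
Since the score file is XML-based, the save/restore step amounts to writing the chosen port and channel as extra tags and reading them back when the score is opened. The following is only an illustrative round-trip sketch; the tag names (midiPort, midiChannel) and structures are made up, not the actual file format:

```cpp
// Sketch of the save/restore round trip (tag names are illustrative only).
#include <iostream>
#include <sstream>
#include <string>

struct InstrumentChannel { std::string name; int port; int channel; };

// Write the chosen MIDI destination as two extra tags in the score's XML.
void writeChannel(std::ostream& xml, const InstrumentChannel& ic) {
    xml << "  <Channel name=\"" << ic.name << "\">\n"
        << "    <midiPort>"    << ic.port    << "</midiPort>\n"
        << "    <midiChannel>" << ic.channel << "</midiChannel>\n"
        << "  </Channel>\n";
}

// On load, the matching reader restores the same two values; a trivial
// line-based parse is used here just to close the round trip.
void readTag(const std::string& line, const char* tag, int& value) {
    const std::string open = "<" + std::string(tag) + ">";
    const auto pos = line.find(open);
    if (pos != std::string::npos)
        value = std::stoi(line.substr(pos + open.size()));
}

int main() {
    const InstrumentChannel violin{"Violin", 1, 3};
    std::ostringstream xml;
    writeChannel(xml, violin);
    std::cout << xml.str();

    InstrumentChannel restored{"Violin", 0, 0};
    std::istringstream in(xml.str());
    for (std::string line; std::getline(in, line); ) {
        readTag(line, "midiPort",    restored.port);
        readTag(line, "midiChannel", restored.channel);
    }
    std::cout << "restored: port " << restored.port
              << ", channel " << restored.channel << '\n';
}
```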

While implementing "assigning channels to instruments" I was researching the existing MIDI-related code and found a few bugs related to "Instrument change" and "Staff-text". Some of them can be fixed, but some cannot; it's more of a design problem.

For example, a user can create a piano staff, then add an "Instrument change" that switches the instrument to violin from the 2nd measure. Then he can add a "Staff-text" to the 4th measure to make the violin sound like a pizzicato violin. Now, what should we do if he deletes the "Instrument change"? Should we get a piano staff with pizzicato violin from the 4th measure? Or a piano staff only? It is more a design question than a bug.

My mentor also found a few bugs in my JACK Transport implementation; I've already fixed them.

Key tasks that stalled

It's ok.

Upcoming tasks this week

Continue implementing assigning channels to instruments: save/restore channels to the score file.

Check the channels in MIDI export.


As usual, you can find me on IRC #musescore as igevorse.

Feel free to write to me about any JACK-related features that you need and would like to see implemented.




Comments (5):


    pedroseq · 26 July 2014 | 06:03 pm

    Hello,

    First, let me congratulate you on the excellent work you've done so far to implement this keystone feature in MuseScore! We're now closer to the day when MuseScore will be able to communicate with external programs/samplers/VST hosts via MIDI ports/channels, just like all other regular notation software and sequencers. And this leads me to my two (related) feature requests:

    1. After this feature has been fully implemented, will we be able to send MIDI Program/Control changes via staff/system text messages in the score, to other external programs? If it isn't possible yet, can you address this feature afterwards?

    2. Another situation that limits my use of several computers in music production is the desynchronization between a sampler loaded on the same computer as MuseScore and a sampler loaded on a second computer, receiving MIDI information over LAN. This desynchronization produces an audible delay in sample reproduction between both samplers. Why is this a problem? Let me give an example: I want to notate a large orchestral piece using a score with roughly 50 staves (from 1st flute/piccolo all the way to double basses). I have two desktop computers and neither one lets me reproduce, individually, the entire orchestra using high-quality samples (better than any available soundfont) without sound artifacts, maxing out the CPU or even crashing the system. Some sequencers freeze unchanged tracks/staves, recording them to internal wave files, and let the sampler reproduce only the changed tracks/staves. This lowers the CPU load during playback, so the entire orchestra can be heard even with a fairly modest computer hardware configuration. But, as far as I know, no notation software does this. So, I decided to distribute sample reproduction between both computers, sending MIDI information over LAN via QMidiRoute and QMidiNet. This does the job, but, as I stated, sample reproduction is desynchronized between the samplers due to the larger latency introduced by MIDI-over-LAN forwarding. Therefore, I leave here a suggestion which I think would solve this easily: let each MIDI device or MIDI port have its own MIDI delay/latency setting within MuseScore (Sibelius 3 had this feature and it worked well, being the only way to synchronize a hardware synth with a software synth). Is this doable in MuseScore? I really hope so!

    My best regards

    igevorse · Site · 27 July 2014 | 01:07 pm

    Hi, pedroseq!

    1. Yes, it would be possible, but without sending a "program change" message. How it works: if you have a staff with a staff-text in the middle, your staff takes two channels with different programs. So, during playback, when we reach the staff-text, we simply switch to the second channel; there is no need to send a "program change" message. You can set volume, chorus, reverb, etc. for these channels: they're listed in the Mixer window as two different instruments. (A small sketch of this channel-switch idea follows at the end of this comment.)

    2. I understand your problem. I hope it is solvable without changing anything in MuseScore. Try reading more about your favorite synthesizer; maybe it has the ability to add a delay. For example, timidity has a few parameters, "--delay=(d|l|r|b)[,msec]" and "--audio-buffer=sec/n", which may help you (if I understand the parameters correctly).
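
    As a purely illustrative sketch of the channel-switch idea from point 1 (hypothetical structures, not MuseScore's code): each staff-text that changes the sound starts a new segment that simply points to a different, pre-configured channel, so no "program change" has to be sent during playback.

```cpp
// Illustrative only: each staff-text starts a segment pointing at a
// pre-configured channel; playback just looks up the channel by tick.
#include <iostream>
#include <vector>

struct ChannelSegment {
    int startTick;   // position of the staff-text, in ticks
    int channel;     // MIDI channel already set up for this sound
};

// The channel active at a given tick is the last segment starting at or before it.
int channelAt(const std::vector<ChannelSegment>& segs, int tick) {
    int ch = segs.front().channel;
    for (const ChannelSegment& s : segs)
        if (s.startTick <= tick)
            ch = s.channel;
    return ch;
}

int main() {
    // Channel 0 = arco violin, channel 1 = pizzicato violin (prepared before playback).
    const std::vector<ChannelSegment> violin = { {0, 0}, {1920, 1} };  // staff-text at tick 1920

    const int ticks[] = {0, 960, 1920, 3840};
    for (int tick : ticks)
        std::cout << "note at tick " << tick
                  << " plays on channel " << channelAt(violin, tick) << '\n';
}
```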

    pedroseq · 30 July 2014 | 11:16 pm

    Hi, igevorse!

    I thank you for your reply! Regarding your responses to my suggestions:

    1. To change values for CC5 (for a complete list of MIDI Control Changes, see, e.g.: http://nickfever.com/402/production-tips-and-resources/midi-cc-list/), what can I do, then? Also, I tested the "automated" program change solution you described and found a possible bug. Some (unrealistic) changes, like piano to violin or vice-versa, aren't possible without first changing the "instruments.xml" file, according to a post in MuseScore's forum. For now, that's fine, as they aren't expected to happen in reality, except maybe in some weird performances. But when doing realistic instrument changes, like flute to piccolo and back to flute, MuseScore 2.0 creates 3 channels (flute, piccolo & flute) instead of 2 (flute & piccolo). Thus, if I understood it correctly, MuseScore 2.0 seems to create a new pair of channels for each different instrument change, instead of using channels already available for a given instrument (shouldn't the program know that it already has those instruments available on certain channels?). And, let's face it, this design principle of creating new channels on demand for each instrument change/articulation present in the GM standard isn't ideal for playback mixing. In all other similar software, the rule is to have the same number of mixer channels as of tracks/instruments in the score. Instrument changes are treated as MIDI "Program Changes" when using GM/GS/XG-compatible banks, and other solutions are possible for sample libraries that are not GM compatible, namely the use of keyswitches for changing articulations within the same instrument. After all, Tremolo Strings isn't really a new instrument, but rather an articulation that strings can change to, and should be treated as such, no? Therefore, regarding playback, shouldn't MuseScore be evolving in the same direction as the other software, since the GM standard is quite limited in the articulations available for orchestral strings, woodwinds, brass and percussion?

    2. My point was precisely to make some minor tweaks in MuseScore's new code to allow for that extra delay (besides the normal system latency) on each port/channel. Timidity may have those parameters (and I'll be looking at them soon), as may some of the other samplers (I think it's a function called "MIDI sync"), but most of them don't, and when using 2 or more computers it's harder to set up latencies/delays in each individual sampler than it is to do it in the master program that's outputting the MIDI messages. I'm not a C++/MIDI expert, but I think it should be as easy as implementing a wait() function acting on each MIDI port/channel, telling MuseScore to wait "xx milliseconds" (as entered by the user in the mixer panel, e.g.) before sending the MIDI messages through a given port/channel. Couldn't this really be addressed at this stage?
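
    For illustration only, here is a rough sketch of what such a per-port delay setting could look like; instead of a blocking wait(), it adds a user-entered millisecond offset to each event's scheduled send time (all structures, names and numbers here are made up):

```cpp
// Illustrative only: a user-set delay per MIDI port, applied at scheduling time.
#include <iostream>
#include <map>
#include <vector>

struct MidiEvent {
    int port;               // which MIDI output port the event goes to
    double timeMs;          // nominal playback time of the event
    unsigned char status, data1, data2;
};

// User-configured extra delay per MIDI port, e.g. entered in the Mixer.
// Here the local port 0 is delayed to match the slower LAN port 1.
static const std::map<int, double> portDelayMs = { {0, 35.0}, {1, 0.0} };

double scheduledTimeMs(const MidiEvent& e) {
    const auto it = portDelayMs.find(e.port);
    const double extra = (it != portDelayMs.end()) ? it->second : 0.0;
    return e.timeMs + extra;    // both destinations then sound together
}

int main() {
    const std::vector<MidiEvent> events = {
        {0, 1000.0, 0x90, 60, 100},   // note-on sent to the local synth
        {1, 1000.0, 0x90, 64, 100},   // note-on sent to the sampler over LAN
    };
    for (const MidiEvent& e : events)
        std::cout << "port " << e.port << ": send at "
                  << scheduledTimeMs(e) << " ms\n";
}
```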

    Once again, I thank you for your attention to my questions.

    Best regards

    igevorse · Site · 02 August 2014 | 12:13 pm

    Hello.

    Sorry for the late answer.

    1. Right now you can send a CC5 message only by changing the instruments.xml file. But I have started working on MIDI Actions (Staff-text properties -> MIDI Actions tab), so you will get the ability to send it via MIDI Actions.

    Change "Piano -> Violin" is possible: try using "Instrument change" element, you can change piano to any other instrument. Also it is realistic change for me, as a synth (midi keyboard) player I have to change sounds of my synth during the performance.

    You're right, MuseScore creates new channel(s) for each instrument change. It was considered a feature: you can set a different volume, pan, etc. for each instrument change. But your proposed approach of reusing the current channels might be better, and I am already working on it (a small sketch of the reuse idea is at the end of this comment). With this approach you can change the volume with dynamics elements, and pan, chorus and reverb via MIDI Actions.

    I agree that Tremolo Strings is not a different instrument, and the same goes for Muted Guitar and Muted Trumpet. But if there are standards, we should comply with them. I think MuseScore was created not for playback mixing (you have DAWs for that) but for music notation purposes. By the way, if you add a tremolo articulation to "Strings", you get a different sound from "Tremolo Strings".

    "regarding playback, shouldn't MuseScore be evolving in the same direction as the other software?" Could you give an example of evolving of other software?

    2. I understand your point. Unfortunately we can't do it: the developers think MuseScore shouldn't become a "monster" with over 9000 features. So, the basic idea is: do not implement a feature in MuseScore if it is already implemented in another application that can be used with MuseScore.

    Your case is interesting and complex at the same time.

    Let me offer you another possible solution: you can have MuseScore installed on both computers with the same score opened. You can mute the first 25 staves on the first computer and staves 26-50 on the second computer. Now you can start playback in both MuseScore instances simultaneously and get sound without delay. If you use JACK, you can use the netJack tool for syncing: you just need to hit "play" in one MuseScore and both will start playing.

    I think it's better to use the MuseScore mailing lists for further conversation. You can write directly here: http://dev-list.musescore.org/Improving-JACK-MIDI-Out-td7578792.html; it's a thread about my work on JACK.

    Thank you for your interest in MuseScore.
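
    As a rough sketch of the channel-reuse idea mentioned in point 1 above (illustrative only, with made-up structures): before allocating a channel for an instrument change, look for one already created for the same instrument in the part and reuse it.

```cpp
// Illustrative only: reuse an existing channel for a repeated instrument change.
#include <iostream>
#include <string>
#include <vector>

struct Channel { std::string instrument; int number; };

// Look for a channel already created for this instrument in the part;
// reuse it if found, otherwise allocate a new one.
int channelFor(std::vector<Channel>& part, const std::string& instrument) {
    for (const Channel& c : part)
        if (c.instrument == instrument)
            return c.number;
    const int n = static_cast<int>(part.size());
    part.push_back({instrument, n});
    return n;
}

int main() {
    std::vector<Channel> part;
    const char* changes[] = {"Flute", "Piccolo", "Flute"};
    for (const char* change : changes)
        std::cout << change << " -> channel " << channelFor(part, change) << '\n';
    // Prints 0, 1, 0: the second "Flute" reuses the first flute channel
    // instead of creating a third one.
}
```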

    pedroseq · 02 August 2014 | 05:13 pm

    Hi,

    Once more, I thank you for your answer! This will be my final response here, since, after this, I'll try to add other comments on MuseScore's dev-list, as you suggest. Bottom line is:

    1. Great news! That already helps me a lot! At least, with CC changes available, much hassle is avoided regarding external VSTi hosts/MIDI devices usage.

    As for the "Piano -> Violin" change, I was initially thinking about real instruments, not synthesizers. But you're right, for synths it's a realistic transition. In MuseScore 2.0, when I made the change form piano to violin, no violin channel was created and the piano sound didn't change accordingly, since I didn't alter the "instruments.xml" file. Was it supposed to change the sound anyway? Using the same procedure with flute and piccolo, it produced the desired sound change.

    Regarding instruments and channels, the most common approach is: 1 track (staff) = 1 channel, considering piano double staves as also being 1 track. This takes into account the GM standard, since a Program Change creates the instrument transition within the same channel. But this is a nightmare when using sample libraries (and I've got quite a few, some of which are in SFZ, that I'm eager to use with MuseScore), because samples can't be changed on the fly with Program Changes, as GM-compatible banks in softsynths can. So, another approach is: 1 instrument = 1 channel. As an example, Overture is quite flexible in this approach: each track can have up to 8 voices and each voice can have its own channel. Thus, in the same track, one can have a flute (voice 1), changing into a piccolo (voice 2), changing back to flute (voice 1) and going to alto flute (voice 3), back to flute, to piccolo, etc. Therefore, with the 1 instrument = 1 channel approach and the ability to send MIDI messages through each channel to external programs/devices, it would be possible to use sample libraries loaded in, e.g., LinuxSampler, fluidsynth, or even Kontakt (running under Wine) and many VSTi hosts, temporarily solving MuseScore's inability to communicate properly with VSTi hosts.

    About the instrument's articulations having their own channels, as occurs now in MuseScore, I understand why it was initially desired to have different mixer settings for each "GM" instrument and not for each "different" instrument: a freeware SF2 bank may not have a balanced sound intensity across all instruments; the sustained strings may be softer than the pizzicato strings, or the tremolo ones. In this context, it makes sense to balance each one separately. But overall, this produces an enormous number of unnecessary mixer entries. For example, I have prepared a classical orchestral score: 2 staves for flutes, 2 for oboes, 2 for clarinets, etc. Each trumpet staff produces 2 entries in the mixer (normal and muted trumpet). Violins and the like each have three (sustained, pizzicato and tremolo). So, 25 tracks give 37 mixer entries, or so. Now imagine that I wish for muted trombones/tubas, horns with open sound, strings with detache, spiccato, trills, col legno, up bow, down bow, glissandi, etc. None of these are available in GM (imagine the huge number of mixer entries needed if they were included in the GM standard!), but I have some sample libraries with these articulations/techniques and I want to hear them performed after placing them in the score. There is no need for them to have separate entries in the mixer; I just want to send the appropriate Control Change through the instrument's channel with a staff-text, use QMidiRoute to convert the Control Change into a note for keyswitching, and all is done!

    So, my point here is that "complying" with the GM standard doesn't have to mean "exclusive dedication" to it. For some reason, all other notation software is more flexible in channel assignment and doesn't use this particular approach that MuseScore has regarding GM sounds.

    2. I understand that you're a newcomer to MuseScore's development team and that you may not be able to enforce some decisions, as other established developers in the team may do. So, I'll try to see what I can do with other tools, as you suggest.

    My final comments regarding MuseScore's evolution, in comparison to other software: I know that there are a lot of users trying to run away from paid software like Sibelius and Finale. Two reasons for this: 1) the best packages are quite expensive, even their upgrades, and money is running low nowadays; 2) the economy's instability made some corporations take bad moves, ruining the future of their software (Sibelius may be one of these shortly) and leading their users into fear of product discontinuity after spending years trying to learn how to use them efficiently.

    So, most think: "what free alternatives are there that may accomplish what we used to do with paid software?" Few open source programs achieve this standard (MuseScore is potentially one) and, even so, users have to deal with the missing features.

    I've been a long-time Linux user and what has become apparent to me over the years is that the open source community always produces a vast array of potentially useful tools, but few of them become truly established for widespread professional use, or effectively supersede the tools offered by commercial companies, because most open source developers have their own visions of what their tools need to be (often for their own personal use) and forget to ask the larger community of potential users "what would you like/need this tool to have?" right from the start (at least, that's what I would try to do).

    MuseScore, for instance, as I understand it, derived from the MusE sequencer - which already has some good MIDI/audio features (the same as most sequencers, although not as good as Ardour) - because its initial developers wanted to focus mainly on notation engraving features. That's quite fine for their needs and those of several other users. But now you and other new developers have to "reinvent the wheel" regarding the MIDI/audio features, because of the pressure from many other users to have basic sequencing/mixing features. Wouldn't it have been easier to develop the scoring/engraving features in MusE to a point similar to what MuseScore now has, or to integrate both programs in a way that made them act like a single tool, rather than two unrelated ones?

    For me, the situation is the following: I want to compose scores using good engraving features, not piano-roll windows (there's nothing like a clean music sheet to boost creativity), and I want to hear what I compose, because I don't have a record in my brain of all the instrumental techniques available in today's orchestras, and I have to be certain that what I imagine sounds good in reality (even if only a "sampled" reality). Some sequencing features, like sending Program/Pitch Bend/Modulation Wheel/CC changes, MIDI sync, panning, volume control and the like, are definitely required for this, but not real-time audio recording, mixing, sync, or such. Having to export the score to a second tool every five minutes, just to test how it sounds, substantially slows down the workflow, which is bad.

    I know that the future will bring notation software with many DAW features, and DAWs with greater scoring/engraving features, because users are getting tired of relying on a multitude of tools to do what one tool could simply do. That's why so many MuseScore users have insistently requested DAW features like VST hosting and advanced MIDI capabilities.

    To support my opinion, let me say that Overture's creator (a single developer!) has spent years redesigning it from scratch and asked users on the forum to give as much input as possible regarding wanted features, in order to develop the most versatile notation/DAW-like tool possible, the upcoming Overture 5. Also, many Sibelius and Finale users are eager to see what comes out of Steinberg's hat, since they integrated Sibelius' former team into their workforce. The same is happening with Presonus after acquiring the Notion software. This is the future, at least for commercial software.

    I thank you for your patience in reading my extensive answers and requests!

    Best regards






