Hacker News

What will MIDI 2.0 mean for musicians?

I suspect: frustrations with shit not working with other shit, like it used to with MIDI, and a big decrement in DIY hackability.

USB is in the mix! Pretty much 'nuff said, but I will say it anyway. USB is a complex beast with documentation that is a good fraction of a foot high, if printed out in all its versions on standard letter sized laser paper. If you bring that into any standard, that is now part of your standard.

USB connectors do not have the optical isolation of MIDI current loops; USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

The clever thing in MIDI is that a device which sends messages over MIDI to another device drives current pulses (not voltage). These current pulses activate an opto-coupling device in the receiver, such as a phototransistor. There is no galvanic connection between the devices; they don't share any ground or anything.

All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.



> All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.

I find it difficult to believe that someone with even a passing knowledge of what MIDI does would have this opinion. Most of the variables are only 7 bits of resolution, which produces jarring jumps when you try to adjust parameters in real time.

I remember taking a college class 20 years ago where we talked about the deficiencies of MIDI and what MIDI 2.0 should look like. It's been 20 years since that conversation and it's mind boggling to me that MIDI is only getting updated now.


Note that more bits don't eliminate jumps on their own. You need to also send changes at a higher rate to take advantage of those bits, which in turn translates to the need for higher speed encoders, more processing time spent dealing with the data, etc.

A different way to eliminate jumps is simply to low-pass filter the values on the receiver, and read out values from the filter at whatever rate your synthesizer engine can handle. The precision of most controls does not matter that much; you just want to eliminate zipper noise, and this does that.
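A minimal sketch of that receiver-side approach: a one-pole low-pass filter between the incoming CC values and the synth engine. (The coefficient and the control rate here are arbitrary assumptions, not anything from the MIDI spec.)

```python
# One-pole low-pass applied to incoming 7-bit CC values. The synth
# engine reads the filter output at its own control rate, so a stepped
# 0-127 input becomes a smooth ramp -- no "zipper" noise.

class CCSmoother:
    def __init__(self, alpha=0.05):
        self.alpha = alpha   # smoothing coefficient per control tick
        self.value = 0.0     # current smoothed output
        self.target = 0.0    # last raw CC value received

    def on_cc(self, cc_value):
        # Called whenever a MIDI CC message arrives (0-127).
        self.target = float(cc_value)

    def tick(self):
        # Called once per control-rate frame by the synth engine;
        # moves the output a fraction of the way toward the target.
        self.value += self.alpha * (self.target - self.value)
        return self.value
```

With a jump from 0 to 127, the output glides toward 127 over a few dozen ticks instead of stepping instantly, which is exactly the de-zippering effect described above.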

(Of course there are some controls which need the extra resolution. Filter cutoff comes to mind… even 10 bits I've found limiting. Strangely, even though MIDI 1.0 specifies some 14-bit CCs, filter cutoff is not one of these.)


> Note that more bits don't eliminate jumps on their own. You need to also send changes at a higher rate to take advantage of those bits, which in turn translates to the need for higher speed encoders, more processing time spent dealing with the data, etc.

That presumes a continuous information stream being sampled. But the sample-depth problem affects discrete notes, too—it's pretty easy to notice how coarse-grained the quiet end of variation is on a MIDI keyboard or drum controller's attack pulse.


Yeah, the 7-bit amplitudes really wreck things. I switched from MIDI to OSC basically because of the better resolution, though I eventually gave up due to the lack of support for the protocol.


7 bits can encode 0 == "off", plus 127 amplitude levels in 1 dB increments (a 126 dB range).

In a musical mix, a range of 20 dB down, plus "off", is all you need; anything turned down more than about 20 dB relative to everything else disappears.

+/- 20 dB of cut and boost spread across 127 steps is ridiculously good resolution.
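As a sketch of that argument, here's a hypothetical 1-dB-per-step mapping from a 7-bit value to linear gain. (This particular curve is an illustration, not something the MIDI spec mandates; real instruments use their own velocity curves.)

```python
# Hypothetical mapping: 0 is silence, 1..127 are 1 dB steps,
# with 127 corresponding to full scale (0 dB).

def value_to_gain(v):
    if v == 0:
        return 0.0
    db = v - 127              # 127 -> 0 dB, 1 -> -126 dB
    return 10.0 ** (db / 20.0)

# Adjacent steps differ by exactly 1 dB, a ratio of about 1.122.
print(value_to_gain(127))                       # 1.0
print(value_to_gain(127) / value_to_gain(126))  # ~1.122
```

Under this mapping anything below roughly value 107 is already 20 dB down, which is the commenter's point about how much of the range is audible in a mix.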


So I get what you're saying, but this isn't my experience.

There's no room to use small changes in volume for expressiveness at the low end of the volume scale, since the attack/sustain/release shape is more quantized. So if your piece has fff and ppp in it (which is probably a full 40 dB range), the ppp part will sound super flat while the fff part might sound great.


Also, MIDI has the nice, round speed of 31,250 bps. Since each byte is framed with start and stop bits, that's 3,125 bytes per second. A "note on" message to start playing a note is three bytes long: a status byte holding a 4 bit "this is a note on" field and a 4 bit MIDI channel number, then a 7 bit note number, then a 7 bit velocity ("how hard I hit the key") number. "Note off" messages, sent when you want to stop playing a note, are identical except for the 4 bit status field. So, if everything's perfect, playing and releasing one single note takes 6 of the 3,125 bytes available each second, or 1.92 ms. That's why a lot of so-called "black MIDI" songs are probably literally unplayable through an actual hardware MIDI interface.
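The wire-time arithmetic above, worked out in code (assuming the standard DIN MIDI framing of 1 start + 8 data + 1 stop bit per byte):

```python
# Back-of-the-envelope timing for DIN MIDI at 31,250 baud.
BAUD = 31250
BITS_PER_BYTE = 10                     # 1 start + 8 data + 1 stop
BYTES_PER_SEC = BAUD / BITS_PER_BYTE   # 3125 bytes/s

def wire_time_ms(num_bytes):
    """Time a message of num_bytes occupies on the serial link."""
    return num_bytes / BYTES_PER_SEC * 1000.0

# A note-on (3 bytes) plus a note-off (3 bytes):
print(wire_time_ms(6))   # 1.92 ms
```

So a ten-note "black MIDI" cluster would already need ~19 ms of wire time, which is where the audible smearing comes from.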

But forget about playing and releasing notes. Say you're triggering a MIDI drum machine and a synth. Sounds like violins have a slow "attack" - that is, you don't go instantly from "no sound" to "full sound", but ramp up over a short interval. Imagine a violinist that has to start moving their bow, or a trumpeter that has to start blowing. It doesn't matter if you send a synthesizer a set of "note on" messages saying "play a middle C major chord" for violin sounds and they don't all get there simultaneously, because it was going to take them all a little bit to start playing anyway. Drums are a different story. If you expect a kick and hi-hat to play at exactly the same time, you don't have that many milliseconds between their starts before a normal human listener can start to really notice it.

So, the worst case scenario is that you'd have a piece of sequenced music that plays two drums, a piano chord, a bass line, and a violin chord at the same time. This is where sound engineers start getting hyper nitpicky about stringing the equipment together so that:

- The two drums fire off in adjacent time slices so that they sound as simultaneous as possible.

- The piano notes come next, from lowest (because if it's a sampled sound, low notes will be played back more slowly and therefore have a slower attack) to highest.

- The bass sound comes next because those don't usually have a super aggressive attack.

- Violins come last, and it doesn't really matter because they're lazy and they'll take a few hundred milliseconds to really kick in anyway.

The worst case scenario is:

- One drum fires off.

- The rest of the instruments fire off in reverse order of their attacks, like high piano, bass, high violin; medium piano, medium violin; low piano, low violin.

- The other drum fires off.

Because MIDI is so glacially slow compared to every other protocol commonly used, it's going to sound absolutely terrible.

MIDI is amazing in so many ways, but it has some very severe technical limitations by modern standards. I can't believe it's taken this long for a replacement to come along.


> playing and releasing one single note takes 6 of the 3,125 bytes available each second, or 1.92 ms.

Rounding this off to 0.002 s and taking speed of sound to be 340 m/s, we can work out that sound travels 68 cm in that time.

So if you're positioned next to a drum kit such that the snare drum is somehow 70 cm farther from your face, and the drummer hits both at exactly the same time (down to a small fraction of a millisecond), you will hear the snare drum 2 ms later than the hi-hat.
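The acoustic comparison above, as a quick calculation (340 m/s is an approximation for room-temperature air):

```python
SPEED_OF_SOUND = 340.0  # m/s, approximate in room-temperature air

def distance_cm(delay_s):
    """How far sound travels during a given delay."""
    return SPEED_OF_SOUND * delay_s * 100.0

# The ~2 ms wire time of a note on/off pair corresponds to:
print(distance_cm(0.002))   # 68.0 cm of acoustic path difference
```

In other words, one MIDI note's worth of serial latency is about the same as standing half a step closer to one drum than the other.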

You're assuming that all of the MIDI events in the entire show are multiplexed onto a single serial data link. That means all the controllers and instruments are daisy-chained, in which case your latencies may actually be worse than you imagine, because any piece of gear that has store-and-forward pass through (receives and re-transmits MIDI messages) adds latency.

The obvious way to avoid all that is to have a star topology: have the events flowing over separate cables from the controllers to the capturing MIDI host, or from the host sequencer out to instruments. Have little or no daisy chaining going on.

Now if you have lots of MIDI streams concentrating in some host and it wants to send all of them to another piece of gear (like a synth, to play them), then maybe yes, the regular MIDI serial link might not be the best. I'm sure we can solve that problem without redesigning MIDI.

> I can't believe it's taken this long for a replacement to come along.

Almost forty years tells you that this is a solution in search of a problem. Industries don't stick with forty-year-old solutions, unless they really are more than good enough.

True, some of it is conservatism coming from the musicians: lots of people have gear from the 1980s that speaks MIDI and use it on a daily basis.


We used to deal with the serial problem using a hack: bump events back/forward by one or two quanta of time to ensure that they go out over the wire in the order that you want. It's laborious and I am looking forward to the next generation never having to worry about it. (That _will_ be fixed, right?)


If you really had to send the data from multiple sources into a single MIDI destination over a single cable, then if a small global delay were acceptable, a smart scheduling algorithm with a 10-20 millisecond jitter buffer would probably take pretty good care of it so that the upstream data wouldn't have to be tweaked.

(Note that if you stand with your guitar 5 meters from your 4x12 stack, you're hearing a 15 ms delay due to the speed of sound.)


Unfortunately, because of the differences in instrument attack, which a MIDI controller would have almost no knowledge of, I think a random jitter would not fix the issue.


An interrupt controller has no knowledge of device semantics; it can just prioritize them based on a simple priority value. The scheduler could do the same thing. It could even be configuration-free by using some convention, like lower instrument numbers have higher priority.
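A sketch of that convention-based scheduler: when several events fall due in the same time slice, emit them in ascending channel order, so latency-critical instruments (e.g. drums on a low channel) hit the wire first. The lower-number-wins convention is the assumption from the paragraph above, not part of any MIDI spec.

```python
import heapq

class PriorityScheduler:
    """Orders due MIDI events by (timestamp, channel) before
    serializing them onto a single shared link."""

    def __init__(self):
        self._queue = []
        self._seq = 0   # tie-breaker: keeps same-priority events stable

    def push(self, timestamp, channel, message):
        heapq.heappush(self._queue, (timestamp, channel, self._seq, message))
        self._seq += 1

    def pop_due(self, now):
        """Pop every event whose timestamp has arrived, lowest
        channel first within each time slice."""
        due = []
        while self._queue and self._queue[0][0] <= now:
            _, _, _, msg = heapq.heappop(self._queue)
            due.append(msg)
        return due
```

Pushing a snare on channel 5, a kick on channel 0, and a bass note on channel 2 at the same timestamp pops them as kick, bass, snare: the kick wins the time slice without the scheduler knowing anything about drum semantics.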

Also, the physical layer of MIDI could simply be extended into higher baud rates while all else stays the same.

I can't remember a serial line to an embedded system in the last 15 years that wasn't pegged to 115 kbps. Bit rate is a relatively trivial parameter in serial communication; it doesn't give rise to a full blown different protocol.

115 kbps is almost four times faster than MIDI's 31250. Plain serial communication can go even faster. The current-loop style signaling in MIDI is robust against noise and good for distance. 400 kbps MIDI seems quite realistic.
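To put numbers on that comparison (assuming the same 10-bit UART framing as DIN MIDI, and using 400 kbps purely as the hypothetical rate floated above):

```python
# Throughput at various baud rates, 1 start + 8 data + 1 stop bit.
def bytes_per_sec(baud):
    return baud / 10

for baud in (31250, 115200, 400000):
    pair_ms = 6 / bytes_per_sec(baud) * 1000  # note-on + note-off
    print(f"{baud:>6} baud: {bytes_per_sec(baud):>6.0f} B/s, "
          f"{pair_ms:.2f} ms per note on/off pair")
```

At 115,200 baud a note on/off pair drops from 1.92 ms to about 0.52 ms, and at a hypothetical 400 kbps to 0.15 ms, which is the sense in which a faster physical layer alone buys back most of the timing headroom.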

This would just be used for multiplexed traffic, like sequencer to synth; no need for it from an individual instrument or controller.


> USB connectors do not have the optical isolation of MIDI current loops; USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

Isn't MIDI... digital? What do ground loops matter as long as the signal decodes?

Or do you mean that they'll put current into the analogue signal chain?

IMHO, the correct response to that is to do all hybrid analogue+digital signal processing in the digital domain with opto-isolated pre-amp ADCs, no?


It matters a bunch. All kinds of noise can get picked up over cables and bleed into your audio path on a digital rig. It happens all the time just with power cables which is why every musician carries a handful of "ground lifts" even though they're technically illegal in a lot of places. That said, MIDI over USB is kind of necessary in this day and age. Hopefully instrument manufacturers will be rigorous about isolating any interference it could pick up.


MIDI connects all sorts of gear, a lot of which contains analog signal paths. That's why its design is the way it is.

For instance, rack-mounted synthesizer with analog outputs going to a PA.


Ground loops can end up with a surprising amount of current. They're also very good at emitting hum into other nearby devices.


From the article:

> When you connect devices together, the Capability Inquiry will immediately determine what each instrument is able to do: Your new MIDI 2.0 controller will automatically know which pieces of your rig are equipped with MIDI 1.0, which are capable of 2.0, and tailor its messages accordingly.

So hopefully backward compatibility Just Works™.


USB MIDI is already a thing though. And it's a real pain to use if you want to connect two devices together where one of them isn't your computer.


MIDI 2.0 is not isolated? Crap. Literally any of my guitar pedals, when connected by USB, instantly injects noise into my electric guitar signal chain, and isolated USB hubs are practically non-existent.


So buy a cheap USB isolator, à la https://www.aliexpress.com/item/32965730354.html ?

Or do you need High Speed (480Mbps)?


The isolator most likely won't help. Odds are the pickups on the guitar are picking up the extra noise from the pedal, thus it will always pass the noise along the entirety of the chain, isolated hub or not.


I have no idea why we don't have ADCs at each instrument (if required, otherwise just send the digital output) and DACs at the speakers/amps only with a fully digital mixing and distribution chain/network... it seems silly to be stuck on analogue audio distribution and mixing networks where these things are still problems.


I'm sorry people are downvoting you without explanation... I used to think the same thing myself until I actually got into music and the engineering behind why things work.

The reason we don't do that is acoustic and electrical coupling. Sound is AC, and because it is alternating we have to deal with impedance matching. Air-to-physical-object coupling actually has a large impedance mismatch because of the difference in density. Electrically, with pickups, when one system has a poor impedance match to another, some really interesting effects can occur. When you overload a downstream device, sometimes you can produce interesting interference that just happens to produce harmonics that are musically pleasing to the ear (3rds, 4ths, 5ths). Electric guitar amps are a great example of this; you can actually design a tube amplifier to produce even or odd order harmonics by the electrical structure of the amp.

It's the "less than perfect" analog devices and their complex interactions that make what musicians to refer to as "tone".

Fortunately there is actually an alternative: balanced transmission. Unfortunately, the standard for consumer audio is unbalanced. But essentially you get the best of both worlds: noise rejection from third-party sources with analog transmission and coupling. Ironically, most digital transmissions eventually travel over an analog balanced transmission.


MIDI is just the protocol and then there are transports.

One of these is Ethernet (RTP-MIDI), which should not have these problems. Or Bluetooth, or WiFi (very bad latency).

Or USB or DIN 5.


> The clever thing in MIDI is that a device which sends messages over MIDI to another device drives current pulses (not voltage). These current pulses activate an opto-coupling device in the receiver, such as a phototransistor. There is no galvanic connection between the devices; they don't share any ground or anything.

I don’t understand why this matters at all. The engineers who made 1.0 had different sets of constraints that we no longer have.

Nowadays we shove gigabits a second over cheap twisted-pair wire. MIDI could do a lot more on modern or even decade old hardware...


> Nowadays we shove gigabits a second over cheap twisted-pair wire.

Yes, and when you listen to your PC's "line out", you can hear the unwanted effects of all that sort of thing.


Can't confirm, my newest motherboard's "line out" has actually been one of the best audio devices I've used in a while. And it's just a budget board.


Ethernet is isolated (transformer-coupled).

USB in its most common form has shared ground.


USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

I just switched to a bus-powered USB-C (Arrow) audio interface and have picked up a nasty ground loop/dirty power noise problem in the process. Current setup is a MacBook with the Arrow directly plugged in, and MacBook powered by a second thunderbolt 3 connection to an OWC Tb3 dock, and I am assuming if I power the Mac with its dedicated power supply the noise issue will go away, but if it doesn’t... well, I don’t know what else I could do to fix it.

In the past I’ve solved all similar issues by using a powered USB hub between the problem devices and laptop.


> All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.

Wouldn't basic economics suggest that a talent vacuum would lead to the most talented musicians making good money? -- And I'd argue the opposite is the case: there's a lot more musical talent than the world "needs" (and thus is willing to pay for). Therefore most musicians are poor (and many with additional talents stop being musicians). -- Or did you mean the "talent vacuum" in a different way?


"Getting paid in the music industry" and "having your music respected by other musicians" are two thinly-related different things.


Do you think we should just stick with limited protocols from the 80s?

There's no reason you couldn't make an optically isolated USB hub. With USB 2 it would be trivial. USB 3 is harder but I doubt you need that for MIDI.


> All sorts of talented musicians have done incredible things with MIDI.

I mean Beethoven did some incredible shit without even MIDI 1.0, I’m not sure that’s a sensible line of reasoning.



