The article linked above is more than a little inaccurate, and the author’s modern perspective misses the point. Prior to MIDI, the only available method for layering synthesizers was multitrack analog tape, an extremely expensive process requiring a professional studio with massive multiple-head tape decks, analog mixing desks, and hundreds of thousands of dollars’ worth of outboard hardware. Professional musicians did not use general-purpose home computers like the Atari ST and Commodore 64 with MIDI instruments in the beginning; we used hardware sequencers. The first of these were hardly more sophisticated than drum machines, but they could keep a beat and a bassline going while a keyboardist played other parts, which effectively reduced the number of musicians needed to tour a complete show to Howard Jones and possibly his roadie. Simple early step sequencers like the Roland TR-808 didn’t even have MIDI at first, but later ones used MIDI to trigger note and chord sequences, patterns, and arpeggios, feats formerly accomplished with less flexible analog control voltages back in modular times.
The first MIDI killer app was simple layering of keyboard sounds. The analog synthesizers of the day were heavy and bulky, and, just coming out of the modular era, they lacked polyphony; if you needed more than one timbre at a time, or more than a few simultaneous notes of the same timbre, you needed more than one keyboard synth. The proliferation of keyboards led to a need for a way to stack them together, either for more polyphony or for a fatter, more layered sound achieved by mixing notes from different synths. Of course we could already do this layering in the analog realm, but in those days the cheapest available mixing desk cost $5k, was made by Soundcraft in the UK, and featured, if I recall correctly, 12 channels, making effective keyboard synthesis a very expensive proposition, the realm of a few rich rock stars. With MIDI, layering two or more synthesizers in realtime became as simple as hooking up a few cables, daisy-chaining as you went. While personal computers in those early days were nowhere near robust enough for use on stage, the MIDI system most definitely was, and the ability to control racks of sound modules from a single controller keyboard or two upped the ante in terms of how many sounds a gigging keyboardist could travel with, allowing them to blend multiple types of synthesis in a more compact rig while reducing setup time and stage clutter.
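For readers who never looked under the hood: the layering trick works because MIDI is just a shared serial byte stream, and every module on a Thru daisy-chain hears the same messages. A minimal sketch in Python (the function name is mine and purely illustrative; the three-byte layout is the MIDI 1.0 Note On message):

```python
def note_on(channel, note, velocity):
    """Build the 3-byte MIDI Note On message (channel 0-15, note/velocity 0-127)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("out of range for MIDI data bytes")
    status = 0x90 | channel      # 0x90-0x9F: Note On; low nibble selects the channel
    return bytes([status, note, velocity])

# Middle C (note 60) at velocity 100 on channel 1 (zero-based 0):
msg = note_on(0, 60, 100)      # bytes 0x90 0x3C 0x64
```

Every synth on the chain set to receive channel 1 plays this note at once; that is the whole layering trick, and it is why two cables replaced a $5k mixing desk for this job.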
MIDI’s second major breakthrough was the storing and editing of patch data through MIDI System Exclusive, a means for instrument manufacturers to, among other things, share patch data among instruments of the same type. With the advent of digital synthesizers like Yamaha’s flagship DX7, onboard storage of synthesizer control settings, called “patches”, became possible, though the instruments often had limited memory capacity. The digitally controlled nature of this new breed of music machine essentially allowed the player to defeat these limitations, at first by use of a (usually proprietary and expensive) EPROM chip, and later through personal computer software sequencers and patch editor/librarians (via an appropriate MIDI interface) and more sophisticated MIDI hardware sequencers, like the Roland MC-500, which could save and restore the entire complement of onboard patches from dozens of keyboards, as well as sequenced notes. In addition to these digital snapshots of all available control settings, a vast array of control changes could be sent from one instrument to another, further expanding the range of controllers and data that could be sent and received: beginning with the expression control that started life as the Hammond organ’s “gas pedal”, and adding pitch bend, modulation, aftertouch or key pressure, filter cutoff, and dozens more, including performance gestures on track pads and ribbon controllers.
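Why could System Exclusive carry whole patch banks when ordinary messages are only two or three bytes? Because a SysEx message is an open-ended frame: 0xF0, a manufacturer ID, any number of 7-bit data bytes the manufacturer defines, then 0xF7. A hedged sketch (the helper name and the toy payload are mine; the framing bytes and Yamaha’s ID of 0x43 are from the spec):

```python
def sysex(manufacturer_id, payload):
    """Wrap vendor-specific payload bytes in a MIDI System Exclusive frame."""
    if any(b > 0x7F for b in payload):
        raise ValueError("SysEx data bytes must have the high bit clear")
    # 0xF0 opens the frame, 0xF7 (End of Exclusive) closes it; everything in
    # between is whatever the manufacturer defines -- e.g. an entire patch dump.
    return bytes([0xF0, manufacturer_id & 0x7F]) + bytes(payload) + bytes([0xF7])

# A toy "patch dump" of four parameter bytes under Yamaha's manufacturer ID:
dump = sysex(0x43, [0x00, 0x09, 0x20, 0x00])
```

An editor/librarian on a personal computer simply captured these frames to disk and squirted them back later, which is how the whole complement of a synth’s patches could outlive its tiny onboard memory.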
Sequencing, the real MIDI breakthrough technology, paralleled the development of velocity-sensing keyboards; since MIDI is a serial digital interface, it gradually became possible to “record” a deeply textured physical music performance on one keyboard and play it back on another, independently of the sound of either instrument. THIS, coupled with MIDI synchronization, the ability to run multichannel, multitimbral MIDI sequences alongside multitrack analog and digital audio, is what opened the floodgates.
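The “record on one keyboard, play back on another” idea amounts to nothing more than storing timestamped messages and replaying them through a different output, with no audio involved at all. A toy sketch under assumptions of my own (the class and method names are illustrative, not any real sequencer’s API):

```python
import time

class ToySequencer:
    """Record (timestamp, message) pairs and replay them through any sender."""

    def __init__(self):
        self.events = []              # (seconds from start, raw MIDI bytes)

    def record(self, t, message):
        self.events.append((t, message))

    def play(self, send, speed=1.0):
        """Replay in time order; `send` is any callable taking raw MIDI bytes."""
        start = time.monotonic()
        for t, message in sorted(self.events):
            # Busy-wait until this event's (possibly time-stretched) moment.
            while time.monotonic() - start < t / speed:
                time.sleep(0.001)
            send(message)

# Capture a performance, then replay it into a different synth (or a list):
seq = ToySequencer()
seq.record(0.00, bytes([0x90, 60, 100]))   # Note On, middle C, velocity 100
seq.record(0.25, bytes([0x80, 60, 0]))     # Note Off
played = []
seq.play(played.append, speed=100.0)       # fast replay, just for the example
```

Because only note events are stored, the same recording can drive a piano patch today and a string patch tomorrow, which is exactly the decoupling of performance from sound that the paragraph above describes.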
A couple of other not-so-minor points: EDM existed prior to MIDI, though it wasn’t called that, as did several other music genres that rely heavily on MIDI sequencing today. Also, the BBC article linked above gives sole credit for the massive MIDI spec to synthesis pioneer Dave Smith, but Bob Moog is generally credited with floating the idea for what became MIDI in an off-the-cuff remark at an AES convention, and Tom Oberheim’s contributions to the vision for what MIDI could become should not be overlooked, nor should the work of hundreds of engineers at Roland, Korg, and Yamaha.