What is the difference between FM Synthesis (OPL-3) and WaveTable Synthesis (MPU-401)?
Both of these technologies refer to how the songs in your MIDI files are played on your hardware. When a song is stored in MIDI format, the actual sounds of the music are not stored. Instead, MIDI contains only instructions on how to play the song, not what it should sound like. It's up to the hardware to determine what the sounds are. As you can imagine, the side effect is that the same MIDI file can sound different on different MIDI hardware.

One approach to solving this problem is the General MIDI standard. This standard specifies which musical instrument should be used as the basis for a given sound, but it still does not specify exactly how that instrument should sound. It's still a big improvement, however, because now you can at least recognize your songs (imagine how a song would sound if the piano were replaced with a trombone, still playing the same notes). So it remains up to the hardware to generate the actual sounds. To do this, there are two possible approaches: FM synthesis, which builds each instrument's sound mathematically by modulating simple waveforms, and wavetable synthesis, which plays back digitized recordings (samples) of real instruments.
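To make the FM side of the distinction concrete, here is a minimal sketch of two-operator FM synthesis, the basic technique behind chips like the OPL-3. The function name, parameters, and "brass-like" settings below are illustrative assumptions, not taken from any particular chip's programming model; real OPL-3 hardware uses more operators, envelopes, and fixed waveform tables.

```python
import math

SAMPLE_RATE = 44100  # samples per second (CD-quality rate, chosen for illustration)

def fm_tone(carrier_hz, modulator_hz, mod_index, seconds):
    """Generate one FM-synthesized tone as a list of floats in [-1, 1].

    Classic two-operator FM: the phase of a carrier sine wave is
    modulated by a second (modulator) sine wave, producing a spectrum
    far richer than either sine alone:
        y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t))
    where fc is the carrier frequency, fm the modulator frequency,
    and I the modulation index (how strongly the modulator bends
    the carrier's phase).
    """
    samples = []
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        phase = (2 * math.pi * carrier_hz * t
                 + mod_index * math.sin(2 * math.pi * modulator_hz * t))
        samples.append(math.sin(phase))
    return samples

# A rough "brass-like" timbre: modulator at the same frequency as the
# carrier gives a harmonic spectrum; higher mod_index means brighter sound.
tone = fm_tone(440.0, 440.0, mod_index=2.0, seconds=0.1)
```

Wavetable hardware skips this math entirely: instead of computing `sin(...)` terms, it reads a stored recording of a real instrument from ROM or RAM and pitch-shifts it to the requested note, which is why wavetable cards generally sound more realistic.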