MP3 works by using a series of psychoacoustic 'tricks'. The encoder runs a series of analyses on the original file and, depending on the quality required, removes more or less of the original source signal. This is true of any lossy codec, and they all work in similar ways, although each codec uses a different algorithm to decide what is unnecessary, so they sound different from one another.
One common method is to remove frequencies that are masked by others. Some frequencies 'cover up' nearby frequencies, making them difficult to hear and thus irrelevant to the mix; those are taken straight out, with only the prominent frequency remaining. Another technique is to remove the fundamental frequency of a note and leave some of the harmonics. The human ear quite often 'fills in' a missing fundamental when it hears the harmonics, so the lowest fundamental can be discarded, and out goes a whole set of frequencies.
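As a toy illustration of the masking idea (nothing like the real MP3 psychoacoustic model, which works on critical bands with far more sophistication), here's a crude sketch in Python/NumPy that simply drops any FFT bin sitting well below the loudest bin near it:

```python
import numpy as np

def crude_mask(signal, threshold_db=30.0, window=150):
    """Toy simultaneous-masking filter: zero any FFT bin more than
    threshold_db below the loudest bin within +/- window bins.
    (A real encoder's psychoacoustic model is far more detailed.)"""
    spectrum = np.fft.rfft(signal)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    keep = np.zeros(len(spectrum), dtype=bool)
    for i in range(len(spectrum)):
        lo, hi = max(0, i - window), min(len(spectrum), i + window + 1)
        keep[i] = mag_db[i] > mag_db[lo:hi].max() - threshold_db
    return np.fft.irfft(spectrum * keep, n=len(signal))

# A loud 1 kHz tone masks a much quieter 1.1 kHz tone nearby
sr = 44_100
t = np.arange(sr) / sr
mixed = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 1100 * t)
out = crude_mask(mixed)
```

After filtering, the 1 kHz tone survives and the quiet 1.1 kHz tone (about 60 dB down, well past the 30 dB threshold) is removed entirely.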
There are dozens of these little things that MP3 does.
Then you run into all the 'standard' quality changes: bit depth, sampling rate, and so on.
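To put rough numbers on why any of this is needed, here's the arithmetic for CD-quality PCM versus a 128 kbps MP3:

```python
# Uncompressed CD-quality PCM bitrate vs a 128 kbps MP3
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

pcm_kbps = sample_rate * bit_depth * channels / 1000   # 1411.2 kbps
ratio = pcm_kbps / 128                                  # roughly 11:1
```

So a 128 kbps MP3 has to throw away about ten elevenths of the raw PCM data, which is why the encoder needs such aggressive psychoacoustic pruning.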
This article:
http://www.soundonsound.com/sos/may00/articles/mp3.htm
explains quite a few of them. Even though the article is nearly thirteen years old, the codec (and hence the encoding) is still the same. The processes have been refined and have got marginally better since then, but the basic premise and the techniques are the same.
The fact is that the more we remove from an original PCM recording, the more we notice the distortions and changes, to the point where it starts to sound 'unnatural'. For most people, 128 kbps is just about 'good enough', although a lot of people will be able to tell the difference, especially if they're trained (like me) to spot it. 192 kbps is more than good enough for the vast majority of people (myself probably included) for an MP3 to sound like the original PCM recording. There is a certain skill to spotting the difference; I can best describe it as a kind of 'graininess' that you get at lower qualities. At very low qualities it's immediately obvious, because the dynamic range and sampling rate (and hence the frequency output, as per the Nyquist theorem) are significantly reduced. That gets harder to detect with age because of the natural loss of high-frequency hearing, but even for somebody over fifty or sixty with 'normal' age-related hearing loss, it's still detectable and obvious.
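Here's a small NumPy sketch of the Nyquist point: a sample rate of sr can only represent frequencies up to sr / 2, and if you naively halve the sample rate, anything above the new limit folds back down as an alias (the frequencies are illustrative):

```python
import numpy as np

sr = 44_100
t = np.arange(sr) / sr
# 15 kHz is fine at 44.1 kHz (Nyquist limit = 22.05 kHz)
tone = np.sin(2 * np.pi * 15_000 * t)

# Naive decimation to 22.05 kHz: the Nyquist limit drops to 11.025 kHz,
# so the 15 kHz tone aliases down to 22.05 - 15 = 7.05 kHz
decimated = tone[::2]
spectrum = np.abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(len(decimated), d=2 / sr)
peak_hz = freqs[np.argmax(spectrum)]   # ~7050 Hz, not 15000
```

A proper resampler low-pass filters before decimating, which is exactly why a low-bitrate encode simply loses the top of the frequency range rather than aliasing it.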
When mixing tracks, though, it always helps to start from the highest-quality source file. When processing tracks, some quality is likely to be lost through the various manipulations, and any distortions can be made more obvious by processes like normalisation, so working with the best quality we can helps prevent that.
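A toy illustration of that last point (the figures are made up for the example): peak normalisation applies one gain to the whole signal, so any noise floor or encoding artefact comes up by exactly the same factor as the programme material:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44_100
t = np.arange(sr) / sr

# A quiet 440 Hz signal plus a small fixed noise floor, standing in
# for encoding artefacts or quantisation noise (hypothetical figures)
signal = 0.05 * np.sin(2 * np.pi * 440 * t)
noise = 0.001 * rng.standard_normal(sr)
quiet = signal + noise

# Peak normalisation: one gain applied to everything, noise included
gain = 1.0 / np.max(np.abs(quiet))
normalised = quiet * gain

noise_before = np.sqrt(np.mean(noise ** 2))
noise_after = np.sqrt(np.mean((normalised - gain * signal) ** 2))
# The noise floor has been amplified by the same gain as the signal
```

The signal-to-noise ratio hasn't changed, but artefacts that were buried near silence are now sitting at an audible level, which is why starting from the cleanest possible source matters.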