
Master Quality Authenticated



So today I happened to find a pretty ambitiously worded buzzword salad about some thing called MQA - http://www.theabsolutesound.com/articles/let-the-revolution-begin/.

 

Fortunately, there is an actual Sound On Sound article to explain the thing as well - https://www.soundonsound.com/techniques/mqa-time-domain-accuracy-digital-audio-quality

 

So it seems the theory actually makes some sense for a change. My understanding is that it's some clever math that cleans up the time-domain behaviour of audio, which relates to how we perceive where a sound comes from. So there's a difference between listening to a live orchestra and a recording of one: in the live situation you perceive everything with your own ears as best you can, but with a recording, inaccuracies in the chain mess up that precision. MQA seems to be essentially a bunch of calibrated and approved tech to use in the studio to mitigate the time-domain difference, somehow encode it in the noise floor of conventional audio, and unpack it again when you're listening to it. So even if you don't have the badass hardware, the files will still play on your system.

 

I think it's kind of cool if it ends up being real. After all, going to 24/192 files doesn't seem like an impossible jump. On the other hand, the company seems hell-bent on hoarding the tech for themselves (they'll probably make a fortune out of licensing, I guess), and it's all but certain that hardware makers will milk the hell out of devices with the new tech. :)

I'm also wondering whether the larger public will even give a damn. After all, if the difference is once again something you can only detect with mastering-engineer ears or an oscilloscope, then it probably doesn't matter for 99% of people. I'm pretty happy with the 128kbps Soundcloud stream as it is. And if you're making your music 100% in the computer, do the time-domain inaccuracies matter at all?

 

PS I really think "Master Quality Authenticated" is the dumbest name.

Edited by thawkins

Agreed with and/or understood almost everything in that first thing, but the self-congratulatory cunty tone makes me want them all to fail. Why do people who do cool stuff have to be complete fucking dicks about it all?

This makes sense for acoustic music, but for anything else I don't think it will be useful, ESPECIALLY electronic music. 

Also, there are so many other variables in live sound. Most shows around here sound terrible. PA systems change the sound drastically. I'm talking rock/metal though so again it doesn't really apply....

Interesting application for data-reduction, but not quite proven to actually improve audio quality yet. It even says in the article that it's presumed that modifying the time domain will affect audio quality, but we're not sure yet.

 

Using this for streaming services would greatly improve data use, but you probably won't be able to hear the difference in your car or through your earbuds while using the subway.

Everything I read in the SOS article made sense and lined up with my admittedly limited knowledge of the science behind this stuff, but will it sound as subjectively good as recording straight to VHS or a microcassette recorder with a loose belt?

  On 9/27/2017 at 10:28 PM, Braintree said:

Interesting application for data-reduction, but not quite proven to actually improve audio quality yet. It even says in the article that it's presumed that modifying the time domain will affect audio quality, but we're not sure yet.

 

Using this for streaming services would greatly improve data use, but you probably won't be able to hear the difference in your car or through your earbuds while using the subway.

 

Intellectually I agree, but my first real moment of awakening to the fact that not all digital audio is created equal came when I got my first CD burner back in the day. While pirating some albums from the drummer in my band, I noticed that I could consistently hear a pretty obvious difference between a factory CD and a CD-R copy made at 4x speed played back on the same cheap Sony boom box: all of the transients in the bass were kind of smeared on the copy, the high end sounded kind of rolled off, and it just generally sounded not right. I assume the CD-R had a higher error rate than the original and the error correction on a cheap early-2000s boom box wasn't exactly the best (it was actually harder to hear on a better player).

But yeah, I won't complain if this stuff takes off, but I won't be going out and buying anything because of it, either.

I think that even though the SOS article's explanation kind of makes it seem like a real thing, it's still probably audiophile snake oil. It's weird that it's somehow been adopted by Tidal and the RIAA. I think part of the point might be to bring the whole audio chain under DRM, given the requirement for special MQA-approved tech. It kind of reminds me of HDMI and built-in DRM in hardware players, and, to go even further back, DVD region restrictions. None of this has anything to do with audio quality, and everything to do with a rent-seeking industry trying to make a profit.

 

As for the change in quality, I'll believe it when I hear it.

Bob Katz and a few other people have been talking about some of these issues for decades, so I don't think it's quite snake oil exactly, but I also spotted the potential for it to be a way to sneak in the infrastructure for large-scale DRM right away, and almost posted about it earlier this afternoon. I wouldn't be at all surprised if this really did give some degree of improvement in the technical sense. (This is where I should point out that I usually gently lowpass almost everything at around 10-15k because excessive high frequencies sound kind of awful to me, that I notice massive, night-and-day differences between properly dithered 16-bit audio and 24-bit word lengths but minimal if any differences between 44.1kHz and 96kHz sampling rates, and that I only record at my interface's 96kHz because I'd rather have that data than not.) But I also suspect that most people who are perfectly happy with 256kbps streaming audio aren't going to care (even though I firmly believe that higher-resolution audio DOES improve a listener's experience even if they don't consciously recognize the difference), and that this specific technology is gaining industry acceptance more for DRM-related reasons than for quality-related reasons, because the industry has a long history of not giving a shit about quality if they can make more money.
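
(For reference, by "properly dithered" I mean something along the lines of TPDF dither added before truncating to 16 bits. A rough numpy sketch of the idea only, not any particular mastering tool's implementation:

import numpy as np

def to_int16_with_tpdf_dither(x: np.ndarray) -> np.ndarray:
    """Quantize float samples in [-1.0, 1.0] to int16 with triangular-PDF dither."""
    # Summing two uniform variables gives a triangular distribution spanning +/- 1 LSB.
    tpdf = np.random.uniform(-0.5, 0.5, x.shape) + np.random.uniform(-0.5, 0.5, x.shape)
    q = np.round(x * 32767.0 + tpdf)
    return np.clip(q, -32768, 32767).astype(np.int16)

The dither randomizes the truncation error so it ends up as benign noise rather than correlated distortion.)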

 

EDIT: also I think low-bitrate RealAudio files sound pretty sweet.

Edited by RSP
  On 9/28/2017 at 1:20 AM, RSP said:

I noticed that I could consistently hear a pretty obvious difference between a factory CD and a CD-R copy made at 4x speed played back on the same cheap Sony boom box

Wwwwwwwwaaiiitaminnit..... either your CD-R burner was broken or your brain was playing tricks on you. If there was enough of a difference in the data stream that you could actually perceive changes in the audio, imagine what a disaster that would be doing to the bits when the drive was used as a data backup device!


Hmmm, so they are trying to get the sound back to how it was conceived for the listener?

Like people have said, it won't matter for people like us, or people who produce in the box, but for live stuff or whatever it sounds like a nice idea. However, I can't get past the fact that once an audio file is mixed and finalised it's kinda set in stone. Anyone in design/art etc. will know this: if you have ever been asked to photoshop an old photo to make it clearer, like I have many times, you have to explain to the person that the photo is set in stone; you cannot add detail that is not present, it was not captured, and we can't guess what was there in the flesh at the time.

So, this thing, it's all guesswork, right? An algorithm to replicate an idea of how it was perceived. Sounds wank to me; again, like Photoshop filters, awful.

I did not read the article, can you tell? ha

  On 9/28/2017 at 11:50 AM, Chesney said:

Hmmm, so they are trying to get the sound back to how it was conceived for the listener?

Like people have said, it won't matter for people like us, or people who produce in the box, but for live stuff or whatever it sounds like a nice idea. However, I can't get past the fact that once an audio file is mixed and finalised it's kinda set in stone. Anyone in design/art etc. will know this: if you have ever been asked to photoshop an old photo to make it clearer, like I have many times, you have to explain to the person that the photo is set in stone; you cannot add detail that is not present, it was not captured, and we can't guess what was there in the flesh at the time.

So, this thing, it's all guesswork, right? An algorithm to replicate an idea of how it was perceived. Sounds wank to me; again, like Photoshop filters, awful.

I did not read the article, can you tell? ha

 

Nah, I understood the point was to have the MQA tech used at all stages of production already, to somehow capture the missing magic stuff that gets lost even with the best digital chains.

  On 9/28/2017 at 10:49 AM, mcbpete said:

 

  On 9/28/2017 at 1:20 AM, RSP said:

I noticed that I could consistently hear a pretty obvious difference between a factory CD and a CD-R copy made at 4x speed played back on the same cheap Sony boom box

Wwwwwwwwaaiiitaminnit..... either your CD-R burner was broken or your brain was playing tricks on you. If there was enough of a difference in the data stream that you could actually perceive changes in the audio, imagine what a disaster that would be doing to the bits when the drive was used as a data backup device!

 

 

I remember CD writing software back in the day used to have different modes for Audio and Data, so maybe it's more down to the software messing something up?

Yeah, possibly, though I think that was to ensure the files were written to the Red Book audio CD spec rather than ... whatever the other spec is! The PCM data should've still been duplicated bit-for-bit, though ...


  On 9/28/2017 at 10:49 AM, mcbpete said:

 

  On 9/28/2017 at 1:20 AM, RSP said:

I noticed that I could consistently hear a pretty obvious difference between a factory CD and a CD-R copy made at 4x speed played back on the same cheap Sony boom box

Wwwwwwwwaaiiitaminnit..... either your CD-R burner was broken or your brain was playing tricks on you. If there was enough of a difference in the data stream that you could actually perceive changes in the audio, imagine what a disaster that would be doing to the bits when the drive was used as a data backup device!

 

 

 

I think I had a perfect storm of material that really highlighted the differences, a low-end CD player, and possibly bad blanks (this was around 1999 and I was using a pretty high-end Ricoh SCSI burner and blanks, but "high end" doesn't necessarily mean it's actually any good, of course).

 

Who knows. It wasn't really a big enough deal for me to worry about, just something I noticed at the time (and it isn't really that relevant to the thread; I was just using it as an example of a situation where cheap playback equipment actually EXAGGERATED quality loss rather than masking it).

 

As far as CD-R copies being a bit-for-bit duplicate of the source file goes, it's not really that simple. I started to write something about why, but while checking my facts I found an old Bob Katz article I'd never read before that describes exactly what I was hearing and gives a detailed explanation of what it is and what manufacturers were starting to do to fix it. So I'm just going to post that (sorry for the length, I tried the spoiler tag in BBCode mode but it didn't work for me):

 

tl;dr - this was a problem back in the 80s and 90s (I heard it back in college, around 1998 or 99, when I got a computer and CD burner for school since I was commuting by train at the time to save money and it was a hassle to spend a lot of time in the labs on campus); Sony eventually acknowledged it and when this was written there were steps being taken to solve it, so it probably isn't nearly as much of an issue now, if at all. Also who actually uses CD-R anymore?

 

  Quote
Can Compact Discs contain jitter?

When I started in this business, I was skeptical that there could be sonic differences between CDs that demonstrably contained the same data. But over time, I have learned to hear the subtle (but important) sonic differences between jittery (and less jittery) CDs. What started me on this quest was that CD pressings often sounded deteriorated (soundstage width, depth, resolution, purity of tone, other symptoms) compared to the CDR master from which they were made. Clients were coming to me, musicians with systems ranging from $1000 to $50,000, complaining about sonic differences that by traditional scientific theory should not exist. But the closer you look at the phenomenon of jitter, the more you realize that even minute amounts of jitter are audible, even through the FIFO (First in, First Out) buffer built into every CD player.

 

CDRs recorded on different types of machines sound different to my ears. An AES-EBU (stand-alone) CD recorder produces inferior-sounding CDs compared to a SCSI-based (computer) CD recorder. This is understandable when you realize that a SCSI-based recorder uses a crystal oscillator master clock. Whenever its buffer gets low, this type of recorder requests data on the SCSI buss from the source computer and thus is not dependent on the stability of the computer’s clock. In contrast, a stand-alone CD recorder works exactly like a DAT machine; it slaves its master clock to the jittery incoming clock imbedded in the AES/EBU signal. No matter how effective the recorder’s PLL at removing incoming jitter, it can never be as effective as a well-designed crystal clock.

 

I’ve also observed that a 4X-speed SCSI-based CDR copy sounds inferior to a double-speed copy and yet again inferior to a 1X speed copy.

 

Does a CD copy made from a jittery source sound inferior to one made from a clean source? I don’t think so; I think the quality of the copy is solely dependent on clocking and mechanics involved during the transfer. Further research should be done on this question.

 

David Smith (of Sony Music) was the first to point out to me that power supply design is very important to jitter in a CD player, a CD recorder, or a glass mastering machine. Although the FIFO is supposed to eliminate all the jitter coming in, it doesn’t seem to be doing an adequate job. One theory put forth by David is that the crystal oscillator at the output of the FIFO is powered by the same power supply that powers the input of the FIFO. Thus, the variations in loading at the input to the FIFO are microcosmically transmitted to the output of the FIFO through the power supply. Considering the minute amounts of jitter that are detectable by the ear, it is very difficult to design a power supply/grounding system that effectively blocks jitter from critical components. Crystal oscillators and phase locked loops should be powered from independent supplies, perhaps even battery supplies. A lot of research is left to be done; one of the difficulties is finding measurement instruments capable of quantifying very low amounts of jitter. Until we are able to correlate jitter measurements against audibility, the ear remains the final judge. Yet another obstacle to good “anti-jitter” engineering design is engineers who don’t (or won’t) listen. The proof is there before your ears!

 

David Smith also discovered that inserting a reclocking device during glass mastering definitely improves the sound of the CD pressing. Corollary question: If you use a good reclocking device on the final transfer to Glass Master, does this cancel out any jitter of previous source or source(s) that were used in the pre-production of the premaster? Answer: We're not sure yet!

 

Listening tests

I have participated in a number of blind (and double-blind) listening tests that clearly indicate that a CD which is pressed from a “jittery” source sounds worse than one made from a less jittery source. In one test, a CD plant pressed a number of test CDs, simply marked “A” or “B”. No one outside of the plant knew which was “A” and which “B.” All listeners preferred the pressing marked “A,” as closer to the master, and sonically superior to “B.” Not to prolong the suspense, disc “A” was glass mastered from PCM-1630, disc “B” from a CDR.

 

Attention CD Plants–a New Solution to the Jitter Problem from Sony

In response to pressure from its musical clients, and recognizing that jitter really is a problem, Sony Corporation has decided to improve on the quality of glass mastering. The result is a new system called (appropriately) The Ultimate Cutter. The system can be retrofitted to any CD plant’s Glass Mastering system for approximately $100,000. The Ultimate Cutter contains 2 gigabytes of flash RAM, and a very stable clock. It is designed to eliminate the multiple interfering clocks and mechanical irregularities of traditional systems using 1630, Exabyte, or CD ROM sources. First the data is transferred to the cutter’s RAM from the CD Master; then all interfering sources may be shut down, and a glass master cut with the stable clock directly from RAM. This system is currently under test, and I look forward to hearing the sonic results.

 

And a couple other related things:

http://stason.org/TULARC/pc/cd-recordable/2-17-Why-don-t-audio-CDs-use-error-correction.html

http://www.enjoythemusic.com/magazine/bas/0709/

 

For a few years I had an old NAD CD player that had a light on the front that would come on whenever the error correction was working (you could also turn correction off, but you really didn't want to do that - a lot of discs would play more or less fine, but some of them, especially CD-Rs, would sound like a worn-out DAT tape). You could actually see the difference between a factory CD (which would usually have some errors, especially if it was cheaply manufactured) and a CD-R, which would usually keep the light lit continuously, especially if it was burned at higher speeds (8x or 16x was about as fast as it got back then; this was the mid 2000s).

 

Anyway, enough of the derail.

Edited by RSP
  On 9/28/2017 at 1:20 AM, RSP said:

 

  On 9/27/2017 at 10:28 PM, Braintree said:

Interesting application for data-reduction, but not quite proven to actually improve audio quality yet. It even says in the article that it's presumed that modifying the time domain will affect audio quality, but we're not sure yet.

 

Using this for streaming services would greatly improve data use, but you probably won't be able to hear the difference in your car or through your earbuds while using the subway.

 

Intellectually I agree, but my first real moment of awakening to the fact that not all digital audio is created equal came when I got my first CD burner back in the day. While pirating some albums from the drummer in my band, I noticed that I could consistently hear a pretty obvious difference between a factory CD and a CD-R copy made at 4x speed played back on the same cheap Sony boom box: all of the transients in the bass were kind of smeared on the copy, the high end sounded kind of rolled off, and it just generally sounded not right. I assume the CD-R had a higher error rate than the original and the error correction on a cheap early-2000s boom box wasn't exactly the best (it was actually harder to hear on a better player).

But yeah, I won't complain if this stuff takes off, but I won't be going out and buying anything because of it, either.

 

 

 

I'm just saying that in order for their technology to actually improve audio quality (as we perceive it), you have to trust that their assumption is correct... even though, per the article, they don't fully commit to it themselves. It's not proven that increasing resolution in the time domain will improve audio fidelity. It's a fascinating read, though.

 

This thing kind of reads like a boardroom pitch, too.

Yeah, I didn't even read the actual product marketing stuff beyond the first paragraph, so I'm really just talking about the SOS article. The other stuff is pretty gross.

  On 9/28/2017 at 5:08 PM, RSP said:

As far as CD-R copies being a bit-for-bit duplicate of the source file goes, it's not really that simple. I started to write something about why, but while checking my facts I found an old Bob Katz article I'd never read before that describes exactly what I was hearing and gives a detailed explanation of what it is and what manufacturers were starting to do to fix it. So I'm just going to post that (sorry for the length, I tried the spoiler tag in BBCode mode but it didn't work for me):

tl;dr - this was a problem back in the 80s and 90s (I heard it back in college, around 1998 or 99, when I got a computer and CD burner for school since I was commuting by train at the time to save money and it was a hassle to spend a lot of time in the labs on campus); Sony eventually acknowledged it and when this was written there were steps being taken to solve it, so it probably isn't nearly as much of an issue now, if at all. Also who actually uses CD-R anymore?

maaan, I respect your views on most other things relating to music production, but this sounds like a load of audiophile bullshit to me (basically as soon as anyone mentions jitter when it's not in a MIDI context). I honestly don't see how a bad digital rip, using the same bit depth and sampling rate at either end, can affect the frequency spectrum. I can understand CD-Rs being shit, and have certainly lost many files due to discs corrupting over time, but apart from the occasional loss of a split second due to an incorrect initial rip, I've never heard any difference between CD and CD-R audio in terms of frequency content.

 

Has anyone actually analysed this? It wouldn't be difficult.
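
Something like this null test would do it. Just a sketch - it assumes the two rips are time-aligned, equal-length 16-bit WAVs, and the filenames are made up:

import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("factory_cd_rip.wav")   # hypothetical filenames
rate_b, b = wavfile.read("cdr_copy_rip.wav")
assert rate_a == rate_b and a.shape == b.shape

# Subtract the two rips; a bit-identical copy nulls to digital silence.
diff = a.astype(np.int32) - b.astype(np.int32)
print("samples that differ:", np.count_nonzero(diff))

if diff.any():
    # Level of whatever survives the null, relative to 16-bit full scale.
    rms = np.sqrt(np.mean(diff.astype(np.float64) ** 2))
    print("residual: %.1f dBFS" % (20 * np.log10(rms / 32768.0)))

If the residual is silence, any audible difference is happening at playback, not in the data.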

Edited by modey
  On 9/28/2017 at 1:43 PM, thawkins said:

 

  On 9/28/2017 at 11:50 AM, Chesney said:

Hmmm, so they are trying to get the sound back to how it was conceived for the listener?

Like people have said, it won't matter for people like us, or people who produce in the box, but for live stuff or whatever it sounds like a nice idea. However, I can't get past the fact that once an audio file is mixed and finalised it's kinda set in stone. Anyone in design/art etc. will know this: if you have ever been asked to photoshop an old photo to make it clearer, like I have many times, you have to explain to the person that the photo is set in stone; you cannot add detail that is not present, it was not captured, and we can't guess what was there in the flesh at the time.

So, this thing, it's all guesswork, right? An algorithm to replicate an idea of how it was perceived. Sounds wank to me; again, like Photoshop filters, awful.

I did not read the article, can you tell? ha

 

Nah, I understood the point was to have the MQA tech used at all stages of production already, to somehow capture the missing magic stuff that gets lost even with the best digital chains.

 

 

 

It's a way of encoding high-frequency material in the noise floor so that it doesn't actually approach the Nyquist frequency, where all of the phase issues from the anti-alias filter happen, as far as I understood. So even if you're playing back audio that was digitized traditionally and then encoded with this system, you still see some of the benefits, because the anti-aliasing artifacts on the A/D end are permanently recorded in the audio but you're still reducing them on the D/A end. That makes sense; it's more just a matter of how much it actually matters in the real world, and whether it's being used as cover for another try at adding hardware-level DRM to music.
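
Very roughly, the "hide it in the noise floor" part is the same trick as stuffing auxiliary data into the least significant bits of the PCM words. A toy sketch of that idea only - the actual MQA folding/unfolding is proprietary and certainly more involved than this:

import numpy as np

AUX_BITS = 8  # toy payload lives in the bottom 8 bits of each 24-bit sample

def embed(pcm24: np.ndarray, aux: np.ndarray) -> np.ndarray:
    """Replace the low bits of 24-bit PCM with auxiliary data (e.g. a folded HF band)."""
    mask = ~np.int32((1 << AUX_BITS) - 1)
    return (pcm24 & mask) | (aux & ((1 << AUX_BITS) - 1))

def extract(encoded: np.ndarray) -> np.ndarray:
    """An 'aware' decoder pulls the payload back out of the low bits."""
    return encoded & ((1 << AUX_BITS) - 1)

rng = np.random.default_rng(0)
pcm = rng.integers(-(1 << 23), 1 << 23, size=1000, dtype=np.int32)  # stand-in 24-bit audio
aux = rng.integers(0, 1 << AUX_BITS, size=1000, dtype=np.int32)     # stand-in folded payload

enc = embed(pcm, aux)
assert np.array_equal(extract(enc), aux)   # aware playback recovers the payload
print(int(np.max(np.abs(enc - pcm))))      # legacy playback error stays under 256 LSBs,
                                           # i.e. it just looks like a raised noise floor

So a non-MQA DAC plays the file as ordinary PCM with a slightly higher noise floor, while an MQA-aware one can unfold the extra band.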

I've done several experiments regarding CD (glass master) vs. CD-R, and what I've found is that modern equipment makes bit-for-bit copies if you take a glass-mastered CD and duplicate it to CD-R. The checksums are exactly the same.
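
(The checksum comparison is just hashing the two rips - filenames here are placeholders:

import hashlib

def file_hash(path: str) -> str:
    """SHA-256 of a whole file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Identical hashes mean the rips are bit-for-bit identical.
print(file_hash("glass_master_rip.wav") == file_hash("cdr_copy_rip.wav"))

)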

 

Some people argue that the CD player is where the difference is (CD-R dye vs. pressed CD metal), but I could not observe any difference in frequency content on a modern Harman Kardon CD player.
