Digital Audio – Hearing Is Believing

The following is a discussion that started between me and a friend in a pub in January 2000. It follows on from some very pertinent discussion on the Internet about how “Digital Level Meters” tend to take responsibility away from people’s ears – with disastrous results – and how, in the early days of Digital Audio, many of us embraced the early technology because we were “told” it was better. But deep in the back of our minds, we had our suspicions…

Digital Audio has in fact been around for many years, but what really boosted the demand for low-cost digital recording was the invention of the Compact Disc. In the early years of Compact Disc production, the recording machine of choice was the Sony 1610 Digital Stereo Master Recorder. The system comprised a high-quality digital processor, a video recorder (rather than invent a new medium, video was chosen as a high-density storage format), and an editing system. The whole package cost thousands of pounds, but was still a massive success in the industry.

Then Sony caught everybody by surprise. They released a product called the PCM-F1 digital recorder. This was just a small digital processor that could be plugged into any video recorder, and cost just a few hundred pounds. Recording studios naturally purchased hundreds of these machines.

But Sony insisted that the device was purely for home audio enthusiasts and should not be used by professional recording studios. The much more expensive Sony 1610 system, they insisted, was the only one designed for professional use. Recording studios ignored this advice, and the F1 system – and its later replacement, the PCM-701 – became a common sight in all of the major studios. Unfortunately, recording engineers believed what the Product Marketing people were telling the general public: “Digital is good, Analogue is bad”. It was considered poor form to question the sound quality, which was admittedly impressive for such a physically small device.

I was as guilty of this as anyone else. I used the Sony F1 system to record masters of album mixes. The machine sat on top of a 1/2 inch analogue stereo mastering machine which – looking back now – was undoubtedly far superior, but at the time everybody just assumed that the F1 must somehow be better, purely because it was digital. As the Product Marketing people said: “Digital is good, Analogue is bad”. I was a little concerned that the sound might be lacking in depth and warmth, but the mixes – being sourced from an early generation SSL mixing console anyway – already had most of the life stripped out of them, and were as hard as nails, so it was a bit hard to tell…

When people discovered that the F1 system was being used in integrated digital environments, they started to get extremely concerned. For economy, the F1 system had only a single digital converter that was rapidly switched between the left and right channels. This meant that one channel was effectively delayed on recording, with the other channel delayed on playback to compensate, so in the digital domain the signal for one channel was recorded as being delayed with respect to the other. Ordinarily this wouldn’t matter, as the system sorts it out on playback, but in professional applications it meant that digital transfers were being produced with a tiny delay between the left and right channels compared to the 1610. At the time it was hard to detect, and even harder to correct.
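To put a number on how small – and yet how real – that skew was, consider the phase error it produces between the channels. The back-of-the-envelope sketch below (in Python, with a nominal 44.1 kHz sample rate and a half-sample skew assumed purely for illustration) shows that the error is negligible in the bass but grows to tens of degrees in the top octaves:

```python
FS = 44_100                  # nominal sample rate in Hz, assumed purely for illustration
skew_seconds = 0.5 / FS      # half-sample skew from one converter serving both channels

for freq_hz in (100, 1_000, 10_000, 20_000):
    phase_error_deg = 360 * freq_hz * skew_seconds
    print(f"{freq_hz:>6} Hz: {phase_error_deg:5.1f} degrees of interchannel phase error")
```

Roughly eleven microseconds of skew is nothing at 100 Hz, but at 20 kHz it amounts to over 80 degrees of phase error – exactly the sort of thing that quietly smears a stereo image or comb-filters a mono fold-down.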

A company called Audio and Design produced a device to solve the problem. It featured a button called CTC, which stood for Coincidence Time Correction. The device applied a counter-delay to one channel on recording (and, naturally, to the other on playback), thereby compensating for the delays already present within the F1 system without affecting the sound overall (although I must confess to being personally a bit concerned about adding yet more signal processing into the chain). Theoretically, this meant that the signals for both channels in the digital domain were now perfectly “time coincident” – in phase, basically – and tapes had to be carefully labelled (although many certainly weren’t), as such a tape should not be played back on an unmodified system (although I’m sure many were).
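As a toy illustration of the principle (not the actual CTC circuitry, whose internals I can only guess at), the Python sketch below uses a whole-sample skew as a stand-in for the F1’s fractional one, and shows how a matching counter-delay on the other channel restores the level that a mono fold-down would otherwise lose:

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(x ** 2)))

FS = 44_100
t = np.arange(FS // 10) / FS                  # 100 ms of samples
mono = np.sin(2 * np.pi * 10_000 * t)         # the same 10 kHz tone feeds both channels

# Stand-in for the recorder's skew: a whole-sample delay on the right channel
# (kept whole purely to keep the arithmetic simple; the real F1 skew was fractional).
right_stored = np.roll(mono, 1)

uncorrected_mix = (mono + right_stored) / 2             # mono fold-down of a skewed transfer
corrected_mix = (np.roll(mono, 1) + right_stored) / 2   # counter-delay the left channel too

print(f"RMS without correction: {rms(uncorrected_mix):.3f}")  # partial cancellation, ~0.54
print(f"RMS with correction:    {rms(corrected_mix):.3f}")    # full level restored, ~0.71
```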

Unfortunately – to make matters worse (much worse) – on one particular model of this device, the delay was applied to the wrong channel, thereby doubling the digital domain error that the F1 system was producing in the first place! Hugely embarrassing for Audio and Design, and Sony no doubt thought that people were getting their just deserts for using what was – after all – a hobbyist product in a professional environment.

But history repeated itself when Sony invented the DAT machine. This device takes extremely small cassette tapes and records at CD quality. Once again, recording studios around the world immediately purchased these devices (unlike the general public, unfortunately!), and Sony again insisted that DAT was purely a high-end audio enthusiast’s product which should not be used in a professional setting. The 1610 digital mastering system (or rather its updated equivalent, the 1630) was still the only device of choice under those circumstances. It’s true that Mitsubishi did have a stereo digital reel-to-reel machine available at the time – and very sexy it was too – but it failed to set the world on fire, and certainly never became a standard. With so much other digital technology available at the time, people were possibly too frightened to use it, lest they find themselves unable to play their tapes elsewhere in future! This is a shame, because the Mitsubishi machine was probably rather good, undoubtedly professional, and would certainly have been head and shoulders above the Sony F1 or DAT (at the time). It unfortunately found itself in a market where serious people either already owned a 1610, or were cutting costs by using DAT or F1.

As a result of recording studios insisting on using DAT for their projects, manufacturers started producing “professional” DAT machines with vastly superior audio electronics and much more robust tape transports than the original budget models. This resulted in DAT “crossing sides” from being a home enthusiast’s format into the accepted professional standard that is now widely used in recording studios, broadcast organisations, and the film industry. It certainly never made it as a “domestic” format. Although the professional DAT machines are very nice indeed (I’ve got one myself – the Panasonic SV-3800, which has amazingly transparent sound thanks to high quality converters and excellent audio electronics), the fact remains that DAT tapes are extremely fragile (most people have tales of a DAT tape getting mangled by someone else’s machine at some point), DAT machines are liable to mechanical failure, and it is not at all uncommon to find a tape that plays fine in one DAT machine but not at all in another.

History has – surprisingly – repeated itself yet again with the invention of the Sony Minidisc. Minidisc was also deemed not to be for professional use, but with so many sound engineers in the broadcast industry recording (e.g.) interviews on location, coupled with the relative mechanical frailty of DAT machines, Minidisc has become an acceptable standard for such applications. Music Sound Engineers steer well clear of it though, as they are usually deeply suspicious of the way it modifies the sound in order to compress it onto a small disc. This suspicion is probably a little unfair, considering that Dolby-SR noise reduction (which most music engineers love) does a very similar thing, and no-one ever complains about that. Actually, the similarities are such that Sony inadvertently breached a Dolby patent on sound processing when they invented Minidisc, and a royalty is due to Dolby on every Minidisc machine sold. That’s why Dolby is mentioned on the back of all Minidisc machines. Clever chaps, those Dolby people.

Something to remember about Minidisc is that – being software driven – the sound quality has improved with each successive generation. The first generation Minidisc players were criticised for sounding “metallic”, but now that we’re onto revision 4.5 of the ATRAC encode/decode algorithm, the sound quality is actually pretty spectacular. Speaking personally, I LOVE the sound of my Sony MZ-R5ST Minidisc recorder. It seems to put back in the “transparency” that gets lost through the recording chain. Although I master onto DAT, I’m seriously considering looping future mixdowns through one generation of Minidisc to capture that great Minidisc sound! I can’t help but wonder if the ATRAC algorithm – designed as it is to throw away what it considers to be superfluous background junk – throws away certain intermodulation distortions that otherwise “muddy” the mix.
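For anyone wondering how a codec can “throw things away” at all, here is a deliberately crude Python sketch of the general idea. It zeroes any spectral component far below the loudest one in a block – real ATRAC uses filter banks, transform coding, and a genuine psychoacoustic masking model, so treat this strictly as a cartoon of the concept:

```python
import numpy as np

def crude_spectral_trim(block: np.ndarray, floor_db: float = -40.0) -> np.ndarray:
    """Zero any spectral component more than floor_db below the block's peak.

    A toy stand-in only: real perceptual codecs decide what to discard with
    a psychoacoustic masking model, not a single global threshold like this.
    """
    spectrum = np.fft.rfft(block)
    threshold = np.abs(spectrum).max() * 10 ** (floor_db / 20)
    return np.fft.irfft(np.where(np.abs(spectrum) >= threshold, spectrum, 0), n=len(block))

# A loud tone (64 exact cycles, so it occupies a single FFT bin) plus very
# quiet wideband noise: the noise falls below the threshold and is discarded.
rng = np.random.default_rng(0)
block = np.sin(2 * np.pi * 64 * np.arange(1024) / 1024) + 1e-4 * rng.standard_normal(1024)
trimmed = crude_spectral_trim(block)
print(f"Largest change to the waveform: {np.abs(trimmed - block).max():.6f}")
```

Almost every spectral component gets discarded, yet the waveform barely changes – which is both why the format works at all, and why I half-suspect it of tidying away low-level mush along with the genuinely inaudible stuff.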

In retrospect, Sony’s original concerns about the use of domestic equipment in professional environments were well-founded, and many recordings of the past would have been much better served by being recorded on 30ips 1/2 inch analogue tape instead. It’s a shame that people believed consumer Product Marketing digital hype instead of using their own ears. After years of people being mocked for complaining that digital audio somehow “didn’t sound right”, the better machines of today have shown that those people’s ears were not deceiving them. Many people believe that even the 16 bit digital machines of today are still no match for the wonderfully transparent sonic capabilities of 30ips 1/2 inch analogue tape.

In the future though, I suspect that the process of domestic products “crossing over” into professional ones will become even more widespread. New Product Development is so technically complex and expensive (wages of experienced staff, cost of prototypes, etc.) that you either have to fund it by making a mass-market product, or by making the product hugely expensive. I’m not sure that professional recording studios can (for example) continue to justify spending thousands and thousands of pounds on a top-class digital multitrack, when 16-track Portastudios from the likes of Korg offer excellent sonic abilities at a mere fraction of the cost. Exactly how much better does equipment have to be, in order to justify spending thousands of pounds more than something that might possibly be “good enough”?

Of course the general public will probably not be blown away by this technology – they simply don’t have the heritage or experience to know how to use it. Give the Korg Portastudio to an experienced Record Producer and they’ll be able to produce something approaching production quality with relative ease. Give the same machine to a 16 year-old and they’ll produce something that sounds exactly as bedroom recordings always sound – crap – and what’s more, they will blame the machine for it…

There is a new breed of stereo recorder coming out. These digital recording machines sample at 96 kilohertz and use 24 bits per sample. The vast majority of early reports state that sonically this really is a major leap forward: these machines have the transparency of analogue tape, and they don’t suffer from the now widely acknowledged, extremely audible artifacts of digital recording – artifacts that Product Marketing people spent years trying to persuade us were “all in our minds”.
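The raw numbers alone hint at why. A quick back-of-the-envelope calculation (in Python, using the usual rule of thumb of roughly 6 dB of dynamic range per bit) compares the theoretical figures for CD-style 16 bit / 44.1 kHz against the new 24 bit / 96 kHz machines:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical quantisation dynamic range: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits, rate_hz in ((16, 44_100), (24, 96_000)):
    print(f"{bits}-bit / {rate_hz / 1000:g} kHz: "
          f"~{dynamic_range_db(bits):.0f} dB dynamic range, "
          f"bandwidth up to {rate_hz / 2000:g} kHz (Nyquist)")
```

That works out at roughly 48 dB of extra theoretical dynamic range and more than double the audio bandwidth – although, as everything above should make clear, the specification sheet never tells you how a machine actually sounds.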

It looks as if – finally – we might actually be getting the audio quality that was originally promised all those years ago.

Jezar.
