The general population really doesn’t understand digital technology. And it’s costing them money.
This was brought home to me last weekend while helping a friend choose a new TV. In the local shop, I noticed a variety of HDMI cables for sale. Now HDMI, for those of you not familiar with it, is quite a nice standard. It provides digital video and digital audio down a single compact and convenient connection. Much neater than the bulky DVI, VGA and SCART connectors which preceded it.
However, notice that it’s a digital standard. This means that, barring major failures, what goes in at one end ought to come out at the other. Why, then, does the store sell a variety of cables of different qualities and prices? In the days of analog connections, there was something to be said for low-impedance connections and for careful screening. Who knows, those articles in the hi-fi press extolling the virtues of gold-plated plugs and oxygen-free copper cables might even have had something to them.
But in the digital world, if you put ones and zeros in one end of a cable and don’t get something recognisable as ones and zeros at the other, you don’t get a slightly worse picture or sound. You get complete breakdown, and major image or sound corruption. A cable which does that should not be sold at a cheaper price; it shouldn’t be sold at all. Better-quality cabling will allow things to work over greater distances, but for the average user with a DVD player under his TV, it will make no difference at all.
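This ‘cliff effect’ is easy to demonstrate with a toy simulation (a sketch, not a model of HDMI’s actual signalling): send each bit as a +/-1 V symbol, add Gaussian noise, and threshold at 0 V. Below a certain noise level the bits come through perfectly; past it, they fall apart wholesale.

```python
import random

def transmit(bits, noise_sigma, seed=0):
    """Model each bit as a +/-1 V symbol plus Gaussian noise,
    then threshold the received sample at 0 V to recover it."""
    rng = random.Random(seed)
    received = []
    for b in bits:
        sample = (1.0 if b else -1.0) + rng.gauss(0, noise_sigma)
        received.append(1 if sample > 0 else 0)
    return received

def bit_error_rate(bits, noise_sigma):
    rx = transmit(bits, noise_sigma)
    return sum(a != b for a, b in zip(bits, rx)) / len(bits)

bits = [random.Random(42).randrange(2) for _ in range(10_000)]
print(bit_error_rate(bits, 0.2))  # quiet cable: effectively zero errors
print(bit_error_rate(bits, 1.0))  # very noisy cable: wholesale corruption
```

There is no middle ground where a pricier cable gives you ‘slightly nicer’ bits: either the receiver can tell the ones from the zeros, or it can’t.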
For example, my (quite expensive) CD player is connected to my (quite expensive) amplifier through a digital COAX connection. I use a single phono-phono cable I bought for about $1 in a Radio Shack sale. And the sound is perfect.
So I asked the nice man in the shop about the fact that they sold a modest-length HDMI cable for over £100 just beside the one for £15 (which, incidentally, probably costs less than a dollar to make).
“Oh yes”, he said, “it does have an effect. We had a customer do a side-by-side test just recently and he could see a difference. He bought the more expensive cable.”
“But how?”, I asked. “It’s ones and zeros! You don’t get better quality ones or nicer-shaped zeros by paying more! How could there be a difference?”
“Well, the customer said there was one. I don’t really understand the science behind how it all works…”
The customer is always right, you see. Even when science is against him.
And now back to my copy of Richard Dawkins…
Followup: Gizmodo did some tests and agreed with my assertion. It makes no difference whether you have cheap leads or expensive ones for short distances. It can be worth paying the extra if your cable is more than 50ft long.
I suspect the customer is always right when they pick the more expensive option 🙂
Interesting. Not knowing the workings – but presumably the system at the other side ‘decodes’ the signal and corrects small mistakes/glitches. If this is the case then the higher-price cable may result in reduced mistakes which could end up making the life of the player easier/resulting in better quality output.
I’ve no idea if the above is even logical (or makes sense in binary-transportation terms) but it could explain away differentials in price (though obviously not on that scale).
Apparently HDMI has no error correction. I have to wonder if this was done to benefit the gold cable industry.
Even digital systems can have noise, but it’s doubtful the cables make a difference. One does get interesting effects on cable and satellite TV during storms, where parts of the video stream are lost, and pictures freeze, or have large blocks since the more detailed image data was lost.
But, it’s fascinating to watch society come to terms with digital technology. It’s not only lay customers who can be largely ignorant. I’ve even heard of high-ranking technology officers and executives at major media companies demanding that their expert listeners check the degree of sound-quality degradation from audio transmitted over TCP/IP. Not even streaming transmission, where there could be glitches, but straight file copies. Not audio compression, but audio file copying.
0s and 1s are nice abstractions; in the real world they are represented as signals, and they encounter noise, interference and other nasty things. Try fitting a pair of RJ45s on a random power cable and check how well it works on your home network. They are just 0s and 1s, right? Digital signals just degrade differently than analog; it doesn’t mean that they don’t degrade.
Whether it is worth buying an expensive HDMI cable is a different question. I would try out the cheap one and look for artifacts from bit errors. If it is not good enough then take it back and get a better one. But paying $100 for a cable does seem excessive.
Yes, bit errors are definitely a possibility, and since the video is (I believe) uncompressed on HDMI, they could show up as pixel errors. But my hope would be that the HDMI specification would prohibit any such failures by a comfortable margin, especially over normal distances.
With digital audio, of course, you could get a rather nasty pop if you had a single bit error, and with any compressed signal the resulting artefacts could be serious.
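A toy illustration of why an uncompressed-video bit error stays so contained (the frame and indices here are made up for the example): flip a single bit in a raw frame of 8-bit pixels and exactly one pixel changes, and even the worst case, a flipped most-significant bit, is just one visible speck.

```python
# A toy "frame" of 8-bit grayscale pixels, carried raw, as HDMI
# carries uncompressed video: each byte maps directly to picture data.
frame = bytes([128] * 100)  # 100 mid-grey pixels

def flip_bit(data: bytes, byte_index: int, bit: int) -> bytes:
    """Corrupt exactly one bit in transit."""
    corrupted = bytearray(data)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

damaged = flip_bit(frame, 42, 7)  # flip the MSB of pixel 42
changed = [i for i in range(len(frame)) if frame[i] != damaged[i]]
print(changed)                 # only pixel 42 differs
print(frame[42], damaged[42])  # 128 -> 0: one wrong dot, nothing else
```

A compressed stream behaves very differently: one corrupted byte can ruin a whole block of decoded output, which is why the artefacts mentioned above can be so much more serious.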
All cables tend to be over-priced. It’s hard to buy a USB cable in a store for less than about $15, for example, yet I know from those who work in the industry that they cost around $0.35 to manufacture. But the one in my local shop was over £100 – ie. closer to $200!
Talk about jumping to a conclusion before doing any research. “Gee, I personally can’t figure out why there would be a difference, therefore there isn’t one!” Skeptics thought that CD players couldn’t sound different when they first came out (I mean, they’re all just 1s and 0s, right?), and then someone figured out how to measure jitter and all of a sudden it was obvious why they DID sound different. So before you go condemning something you know nothing about, do a little research, or admit that you have no idea but can’t see or hear the difference yourself (proving that YOUR input devices are probably a little dull).
Thank you adsf – most helpful.
CD player analogies aren’t relevant here. I’m not saying that all TVs look alike. I’m not even saying that all DVD players output the same bitstream. There can certainly be substantial differences at both ends. In addition, errors are expected on optical media and different optical systems and ECC technology will have different degrees of success in correcting them.
The question is whether a cable designed to transmit digital data without ECC over this kind of distance should introduce any bit errors at all. My assertion is that it shouldn’t.
I know, for example, that the HDMI 1.3 spec defines different minimum requirements for short cables (up to 5m) and for those intended for longer distances, which is why I mentioned cable lengths in the posting. The UKP 100+ cable was 1m long, BTW. I could even go into the pixel clock rates of the TMDS connection if you wanted! It would just make a rather dull post.
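For the terminally curious, the arithmetic behind those pixel clock rates is short enough to show. Taking the standard 1080p60 pixel clock of 148.5 MHz, with TMDS expanding each 8-bit value to 10 bits across three data channels:

```python
pixel_clock_hz = 148.5e6   # standard 1080p60 video timing
tmds_bits_per_pixel = 10   # TMDS encodes each 8-bit value as 10 bits
data_channels = 3          # one TMDS data channel per colour component

bit_rate = pixel_clock_hz * tmds_bits_per_pixel * data_channels
print(f"{bit_rate / 1e9:.3f} Gbit/s")  # 4.455 Gbit/s
```

So a 1m cable is being asked to carry roughly 4.5 Gbit/s; fast, certainly, but well within what the spec requires any compliant cable, cheap or dear, to handle.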
But since you’ve obviously done the research and are sure that I haven’t, perhaps you could point me to the scientific basis for your comments? I may well have missed something and am willing to be proved wrong.
“But since you’ve obviously done the research” – another unsupported assertion. The difference is I don’t make assertions about things I know nothing about. But what I do know is that you do.
This reminded me of an article by Ben Goldacre, the Guardian’s ‘Bad Science’ columnist, on expensive ‘kettle leads’ – like the digital leads you speak of, these merely take electricity in one end and put out electricity out the other end. But some stores sell expensive ones alongside cheap ones.