Version: Firmware 13.1.3 + DOS 2.2.4 Date: 2019-02-12
(17-Feb-2019, 12:28)thumb5 Wrote: It's even simpler than you're making it out to be, @David A.  The fact is that if people are hearing a difference in sound quality and report that in good faith, there is a difference.  Equally, if people are not hearing a difference, there is no difference.  Both statements are equally valid and do not conflict, because we're talking about a subjective experience.

If one wants to determine whether there is an objective cause of any reported sound quality differences, that's another story entirely, and -- as I think you acknowledge -- not something that can be resolved by any amount of uncontrolled, individual listening reports.

No. If someone hears a difference and reports it in good faith, they're simply reporting that they hear a difference, not that there actually is a difference. Similarly those who don't hear a difference and report that in good faith are reporting what they hear, not that there actually is no difference.

I don't think anyone here has so far suggested that people are not honestly reporting what they hear. No one is disputing the reports. The issue of concern is not whether people are hearing different things; it is whether there is an objective difference, which is why objective tests of one kind or another have been under discussion.

What kind of listening test would I set up if I were going to try to do a controlled listening test? Well, it would have the following features:

- 1 listener at a time, sitting in exactly the same spot, so room acoustics affect every listener identically and the listening environment is completely controlled.

- 2 identical amps, one running RAAT and the other running AIR. Each streams from its own server, two identical machines, so the servers can be independently controlled to start a track playing at as close to the same moment as possible. Volume settings are matched on both amps and the displays are turned off or covered so the listener can't tell which stream an amp is getting.

- speaker outputs of each amp connected to a high-quality switching box which feeds the speakers. The listener has control of the switching box and can switch between one amp and the other at will.

Both servers run identical playlists, with the listener giving the tester a list of tracks beforehand so a personal playlist can be prepared and comparisons made using music they like. Because the listener controls only the amp switch, not the playlists, which have to be started separately on each server at the outset, they can't go back to rehear parts of a track; they can only listen to the playlist streaming in real time and switch between amps whenever they want.

It's not ideal, because there are limits on the listener's control of the music and the switching box might introduce noise which masks some things, but it provides instantaneous switching at matched levels and a blind test. It can be made double-blind if the person interfacing with the listener has no knowledge of which amp is getting which stream and is not one of the people involved in starting the streams from the two servers.

It wouldn't be an easy test to set up, and since it involves only 1 listener at a time it would take some days to get a reasonably sized sample of subjects through the test. Put those two things together and what you've got is a test requiring professional-level setup and conduct, several days with several people involved on the test administration side, and it's not going to be cheap to run.

Then you've got to design your questionnaire. Do you want a simple yes/no on whether there is a difference each time the amp is swapped? Do you also want a report on which one the listener prefers? And, last of all, do you want comments on what differences were heard when a difference is reported? Then you have to collate that data in a spreadsheet or database and do your statistical analysis.

That's the kind of listening test which will hold up to criticism reasonably well and might deliver some compelling results, especially if a statistically significant number of people report hearing a difference (the minimum requirement for establishing that a difference exists) and then, if you gather data on preference and on what listeners heard, whether people with different preferences report hearing similar sorts of differences, so we can tell whether their preferences are grounded in personal taste.
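For the statistical-analysis step, the usual tool for deciding whether listeners are beating chance is a one-sided binomial test. Here's a minimal sketch in Python, assuming an ABX-style forced-choice design where a guessing listener is right 50% of the time; the function name and the trial counts are illustrative, not taken from the test described above:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, chance: float = 0.5) -> float:
    """One-sided binomial test: the probability of getting at least
    `successes` correct answers out of `trials` if the listener were
    merely guessing at the `chance` rate."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Illustrative numbers: 14 correct identifications out of 16 trials.
p = binomial_p_value(14, 16)
print(f"p = {p:.4f}")  # p = 0.0021, well below 0.05, so unlikely to be guessing
```

With a larger study you'd run this per listener (or pool trials across listeners) and pick your significance threshold before collecting data, but the principle is the same: the result is only interesting if guessing can't plausibly explain it.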

It would be a hell of a lot easier to establish that there is an objective difference by making measurements of server output and Devialet DAC output (probably at the pre-outs) and looking for differences there in things like packet error rate, noise, distortion, jitter and anything else a good researcher could think of. Of course, then there's the problem of establishing whether any difference revealed is audible, but it's easier to design a test for that if you know what the difference is and under what sort of circumstances it might be expected to be audible.

Good tests aren't easy or cheap to design and conduct.
Roon Nucleus+, Devialet Expert 140 Pro CI, Focal Sopra 2, PS Audio P12, Keces P8 LPS, Uptone Audio EtherREGEN with optical fibre link to my router, Shunyata Alpha NR and Sigma NR power cables, Shunyata Sigma ethernet cables, Shunyata Alpha V2 speaker cables, Grand Prix Audio Monaco rack, RealTRAPS acoustic treatment.

Brisbane, Qld, Australia


Messages In This Thread
RE: Version: Firmware 13.1.3 + DOS 2.2.4 Date: 2019-02-12 - by David A - 17-Feb-2019, 21:50


