02-Jul-2018, 12:52
Hello and welcome to the forum.
If I am reading your post correctly, it would seem that you are indeed feeding your Devialet in two fundamentally different ways, so a subjective difference in performance is no surprise.
When using the Raspberry Pi + Audiophonics V3 Sabre, you are essentially taking an Ethernet stream, converting it to analogue in the Audiophonics, and then sending the analogue signal to the Devialet via RCA. The Devialet takes that analogue feed and converts it back into PCM digital, which it then uses in the ADH core, reconverts to analogue, amplifies, and feeds to your speakers. In simple terms, this method performs the digital-to-analogue conversion twice: once in the Audiophonics, then again in the Devialet.
From a technical perspective, performing the digital-to-analogue conversion twice is not a good idea. What is interesting is that you seem to prefer the soundstage via this method. To be honest, this does not surprise me too much: the soundstage is an illusion, and messing things up a little in the digital (or even the analogue) domain can sometimes give the impression of a richer soundstage. So this raises the question: other than the soundstage effect, what else sounds different?
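To put a rough number on the double-conversion point, here is a small Python sketch. It is not a model of the actual Devialet or Audiophonics hardware; the uniform 16-bit quantiser and the noise level on the "analogue" link are purely illustrative assumptions, chosen just to show that a second conversion pass can only lower the signal-to-noise ratio, never raise it:

```python
import math
import random

def quantize(x, bits):
    """Uniform quantiser over [-1, 1] -- a crude stand-in for one DAC/ADC pass."""
    scale = 2 ** (bits - 1) - 1
    return round(x * scale) / scale

def snr_db(ref, test):
    """Signal-to-noise ratio of `test` relative to the clean reference, in dB."""
    sig = sum(r * r for r in ref)
    noise = sum((t - r) ** 2 for t, r in zip(test, ref))
    return 10 * math.log10(sig / noise)

random.seed(0)
fs = 48_000  # one second of a 1 kHz test tone at 48 kHz
signal = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]

# One conversion: digital source -> DAC (the Devialet's own path)
once = [quantize(s, 16) for s in signal]

# Two conversions: source -> DAC -> noisy analogue RCA link -> ADC
# (the Gaussian term stands in for noise picked up in the analogue stage)
twice = [quantize(quantize(s, 16) + random.gauss(0, 5e-5), 16) for s in signal]

print(f"SNR, single conversion: {snr_db(signal, once):.1f} dB")
print(f"SNR, double conversion: {snr_db(signal, twice):.1f} dB")
```

The single pass lands near the theoretical ~98 dB for 16-bit audio, while the extra analogue hop and re-digitisation knock several dB off. The real-world loss depends entirely on the quality of both converters and the analogue stage between them, but the direction of the effect is always the same.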
Why not try a couple of other experiments? With some appropriately selected music, compare your method R2 against AIR: listen for accuracy and realism in the bass, then for the small details, ambient sounds in the room, that kind of thing. Which do you prefer? Any observations?
As for the differences between AIR and UPnP: they are slightly different protocols, so although the digital-to-analogue conversion will be the same, there may be fractional differences in timing, jitter and so on that give a very slight difference in presentation.
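For a sense of scale on the jitter point, the standard textbook formula for the SNR ceiling imposed by RMS sampling jitter on a full-scale sine is SNR = -20·log10(2π·f·t_j). The tone frequency and jitter figures below are illustrative examples, not measurements of any Devialet input:

```python
import math

def jitter_snr_db(f_hz, jitter_s):
    """Theoretical SNR limit from RMS sampling jitter on a full-scale sine wave."""
    return -20 * math.log10(2 * math.pi * f_hz * jitter_s)

# SNR ceiling for a 10 kHz tone at two example jitter levels
for t_j in (1e-9, 100e-12):
    print(f"{t_j * 1e12:>6.0f} ps jitter -> {jitter_snr_db(10_000, t_j):.1f} dB max SNR")
```

At 1 ns of jitter the ceiling is around 84 dB, already below 16-bit resolution, while at 100 ps it rises to roughly 104 dB. So timing differences between the two protocols are plausible in principle, but whether they are audible depends on how well each input reclocks the incoming stream.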
1000 Pro - KEF Blade - iFi Zen Stream - Mutec REF10 - MC3+USB - Pro-Ject Signature 12

