In practice they need higher ASPs in PC as units decline, and they have to do it under competitive pressure. A focus on HBM + EMIB + a GPU die seems likely, and high DRAM prices could even be seen as a positive since they boost ASPs. In server, maybe they can try to leverage XPoint to reduce the amount of DRAM used, though it remains to be seen whether that works out - of course the driver would be generating more revenue, not some war on DRAM. Hmm, interesting thing here: Intel offers somewhat limited memory bandwidth and PCIe connectivity on their server SKUs as a way to boost CPU unit sales. AMD is exploiting this now with Epyc, and Intel will have to adjust their strategy. Maybe they'll try to leverage their proprietary interface to XPoint while keeping the current DRAM bandwidth and PCIe connectivity limitations - not sure, I need to digest this.
Edit - stumbled upon this blog post and thought it's worth sharing https://community.cadence.com/cadence_blogs_8/b/breakfast-bytes/posts/arm-at-smc
I think XPoint has potential to be useful. Though I was always under the impression that a memory filling a DRAM role may require endurance of around 10^12 write/erase cycles. I haven't seen strong evidence that PCRAM can reach that at the latest node sizes.
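A quick back-of-envelope sketch of why endurance requirements land where they do. All figures here (module capacity, sustained write bandwidth, endurance values) are illustrative assumptions, not vendor specs; the calculation just divides total writable bytes (capacity x endurance) by write bandwidth, assuming perfectly even wear leveling.

```python
# Back-of-envelope lifetime of a persistent-memory module under sustained
# writes, assuming ideal wear leveling spreads writes evenly over all cells.
# Capacity, bandwidth, and endurance figures below are assumptions.

SECONDS_PER_YEAR = 3.156e7

def lifetime_years(capacity_bytes, endurance_cycles, write_bw_bytes_per_s):
    """Years until every cell hits its write-endurance limit,
    given perfectly even wear leveling."""
    total_writable_bytes = capacity_bytes * endurance_cycles
    return total_writable_bytes / write_bw_bytes_per_s / SECONDS_PER_YEAR

cap = 16e9   # assumed 16 GB module
bw = 10e9    # assumed 10 GB/s sustained write bandwidth

for endurance in (1e6, 1e8, 1e12):
    print(f"endurance {endurance:.0e}: "
          f"{lifetime_years(cap, endurance, bw):,.1f} years")
```

Read this way, even ~10^8 cycles looks survivable at the module level with ideal wear leveling; the ~10^12 figure corresponds more to the pessimistic case where writes hammer a small set of hot cache lines and leveling can't fully spread them, which is arguably closer to how a DRAM-role memory actually gets used.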
Just to add to this point - I could be wrong, but I was always under the impression that Intel viewed DRAM as a bottleneck and designed architectures to minimize dependence on it. In particular, once CPUs broke the 1 GHz barrier (greatly outpacing DRAM latencies), they expanded the L2 and L3 caches and added micro-op caching in the front end, while maintaining a load/store execution core.