@alex_m1: Max, do you think the laptop processors can handle all this bandwidth?
No -- of course not -- that's why the Pico Computing board uses four Altera FPGAs: you can use the FPGA fabric to perform computations and processing in a massively parallel way. So if we see laptops with hybrid memory cubes, I bet we will also see their processors augmented by FPGAs or FPGA fabric...
When I see FPGAs used in computing I can't help but think back to transputers and the promise of CPUs which reconfigure themselves to adapt to the tasks requested of them. I backed the Parallella project by Adapteva because I had experience of the interesting things that can be done with new architectures (I tested ZiiLabs Stem Cell processors, later bought by Intel). But I am disappointed that Adapteva hasn't yet managed to reach mass production, and worse still there seem to be few good tools to take advantage of their architecture.
I really think computing won't move on until we start to decouple the hardware from the software. It is perhaps heresy for someone in the embedded software business to say that, but if (at the top end) we can abstract the problem away from the target, we can be more flexible about the ways in which the target devices are built. At the moment, attaching an FPGA to a Linux computer doesn't accelerate anything unless you write application-specific code as a one-shot process.
My solution? We need JITs which have a common input language and a means to adapt to the hardware. Could we have a genetic algorithm which adapts to the architecture by evolving an intermediate instruction set? Initially it might not run fast, or might not seem to run at all, but over time it might optimise beyond human programming ability. There seems to be some research in this area, but I don't know how production-ready any of it is.
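Just to make the idea concrete, here's a minimal sketch of the kind of search loop I mean. Everything in it is a hypothetical stand-in: the toy instruction set, the cost model, and all the function names. A real system would score candidates by timing JIT-compiled kernels on the actual hardware, not by a made-up scoring rule:

```python
import random

# Hypothetical tiny "intermediate instruction set" to evolve over.
OPS = ["load", "store", "add", "mul", "shift", "fma", "xor", "branch"]

def fitness(encoding):
    # Toy cost model: pretend the target rewards fused ops and penalises
    # branches. In a real system this would benchmark generated code on
    # the FPGA or CPU and return measured throughput.
    return sum(2 if op == "fma" else -1 if op == "branch" else 1
               for op in encoding)

def mutate(encoding, rate=0.1):
    # Randomly swap individual instructions for others in the set.
    return [random.choice(OPS) if random.random() < rate else op
            for op in encoding]

def crossover(a, b):
    # Single-point crossover: splice two parent encodings together.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, length=16, generations=200):
    population = [[random.choice(OPS) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best encoding:", best, "score:", fitness(best))
```

Obviously a toy, but it shows the shape of the thing: the fitness function is the only part that knows about the target, so the same loop could adapt the intermediate set to a CPU, an FPGA, or anything else you can benchmark.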
FPGAs remind me of early computing, where we had to deal with individual chips rather than the SoCs we have today.