I honestly don't see the point in measuring the distribution of _die area_ between new/reused design and memory.
Die area may influence cost models, yields, etc., but it has little bearing on the design-related aspects of a chip.
One can fill 90% of a chip with replicated memory banks using almost no design effort. Memory doesn't need to be functionally or formally verified, doesn't need to be timing-closed, and doesn't need gate-level simulations. Integrating memory into a design is normally straightforward.
Memory should simply be left out of any discussion regarding "innovation in hardware being constrained".
Memory aside, we are left with new vs. reused blocks. Even here, die area is of little value, since if I have 12 hardened CPU cores replicated in my design, they may dominate the die area and still be a negligible part of the project when measured in design effort (= schedule, = investment, ~ innovation).
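To make the replication point concrete, here is a toy calculation with made-up numbers (the block sizes and effort figures are purely illustrative, not from any real chip):

```python
# Toy numbers, purely illustrative: 12 instances of one hardened CPU core
# next to a single newly designed block plus some glue logic.
core_area_mm2 = 8.0          # area of one hardened CPU core instance
core_instances = 12
new_block_area_mm2 = 30.0    # one newly designed block
other_area_mm2 = 25.0        # glue logic, interconnect, etc.

total_area = core_instances * core_area_mm2 + new_block_area_mm2 + other_area_mm2

# Assumed effort figures (engineer-months), again invented for illustration:
# hardening/integrating the reused core once vs. designing the new block.
core_effort, new_block_effort = 5.0, 60.0

core_area_share = core_instances * core_area_mm2 / total_area
core_effort_share = core_effort / (core_effort + new_block_effort)

print(f"cores: {core_area_share:.0%} of die area, "
      f"but only ~{core_effort_share:.0%} of design effort")
# -> cores: 64% of die area, but only ~8% of design effort
```

Replication inflates the area share of reused blocks without adding anything to the schedule, which is exactly why area is a poor proxy for effort.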
Just removing memories from the graphs in the article will show that while there is a clear increase in the die area share of reused designs, it is not a sharp exponential trend: the reused-to-new ratio merely moved from roughly 40:60 to roughly 60:40 over two decades. This is by no means a _fundamental_ change in the industry.
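A quick way to see the effect: take hypothetical area breakdowns for two chips twenty years apart, drop the memory, and renormalize (the fractions below are invented for illustration, not the article's data):

```python
# Hypothetical die-area breakdowns (fractions of total area), not real data.
chip_old = {"new": 0.35, "reused": 0.25, "memory": 0.40}
chip_new = {"new": 0.15, "reused": 0.25, "memory": 0.60}

def reused_share_without_memory(breakdown):
    """Drop memory and renormalize over the remaining (logic) area."""
    logic = breakdown["new"] + breakdown["reused"]
    return breakdown["reused"] / logic

for label, chip in [("two decades ago", chip_old), ("today", chip_new)]:
    share = reused_share_without_memory(chip)
    print(f"{label}: reused share of non-memory area ~ {share:.0%}")
# -> two decades ago: reused share of non-memory area ~ 42%
# -> today: reused share of non-memory area ~ 62%
```

With memory excluded, the shift looks like roughly 40% to roughly 60% reused, a gradual drift rather than a dramatic takeover.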
Keeping in mind that, due to replication, die area is at best an "inaccurate" indicator of design effort / innovation, I don't think any serious conclusion can be drawn about the subject purely from die-area data.