Thank you for your inquiry. Some remarks on your questions:
The two-teams approach was, of course, only possible because we had funding from the German government for this project.
Yes, we also tried to re-simulate bugs that had been found with formal verification.
The typical procedure when a bug was found in the formal domain was to contact the block designer. The designer took the stimuli from the counterexample and tried to reproduce the bug in simulation to understand what was happening, then corrected it. Afterwards we redid the formal verification to check that the bug had really disappeared and that no new bug had been introduced by the fix.
We tried to run the SVA assertions in simulation as well, but they did not run out of the box. The reason is that formal and simulation tools currently support different SVA subsets and interpret some SVA constructs differently (see the article). So migrating the formal properties to simulation was quite a hard job. Moreover, a migration in the proposed direction seems to me to be of mainly academic interest: given the formal approach's claim to be exhaustive, simulation could only detect bugs that formal had already found.
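To illustrate the kind of portability problem described above, here is a hypothetical sketch (not taken from the actual case study; the signal names `clk`, `rst_n`, `req`, and `ack` are invented). Unbounded liveness operators such as `s_eventually` are a typical example: they are natural in formal proofs but have historically had limited or divergent support in simulators, so such a property often needs a bounded rewrite before it runs dynamically.

```systemverilog
// Hypothetical SVA property: proves cleanly under a formal tool,
// but may be rejected or interpreted differently by a simulator
// because of the unbounded liveness operator s_eventually.
property req_eventually_ack;
  @(posedge clk) disable iff (!rst_n)
    req |-> s_eventually ack;  // unbounded: "ack must eventually follow req"
endproperty
assert property (req_eventually_ack);

// A common simulation-friendly rewrite bounds the wait explicitly
// (the bound 16 is an arbitrary illustrative choice):
property req_ack_bounded;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:16] ack;      // bounded window a simulator can check
endproperty
assert property (req_ack_bounded);
```

The bounded version is no longer equivalent to the original liveness property, which is exactly why such migrations cannot be done mechanically and required manual effort in our case.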
Nice to see two teams running in parallel to really assess the benefits of dynamic and static verification (case studies usually apply one verification method and then the other, and conclude only from the bugs each method missed, which is not a real comparison!).
You've run formal verification on the design that was only verified and debugged via simulation-based verification. But did you try the simulation-based verification environment on the design that was only formally verified?
By the way, did you run the SVA assertions dynamically as well? Were they able to catch the bugs without running static formal analysis?