gnasher, on 2012-July-02, 05:44, said:
If I were doing a double-dummy simulation, I would do the equivalent: I'd look at some of the hands, and consider (a) whether they were consistent with the auction I was trying to simulate, and (b) whether the double-dummy results were consistent with real-life expectations. If I were trying to persuade somebody else that my double-dummy simulation accurately modelled real-life bridge, I would invite them to do the same. I don't really understand the rationale for not doing this.
Oh, I do inspect a few of the generated deals, mainly to check whether I have overlooked something and need to refine my specifications, and because the deals are often interesting in themselves.
I do not use my own code but Dealmaster PRO, and I have found it to be reliable. The double-dummy analyzer (DeepFinesse) is beyond doubt.
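If anybody wants to reproduce the generate-and-inspect step with their own code rather than Dealmaster PRO, the underlying idea is plain rejection sampling. Here is a minimal sketch in Python (the 15-17 balanced constraint on South is only a placeholder example, not one of my actual specifications):

```python
import random

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [r + s for s in SUITS for r in RANKS]
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def random_deal():
    """Shuffle the deck and split it into four 13-card hands (N, E, S, W)."""
    deck = DECK[:]
    random.shuffle(deck)
    return [deck[i * 13:(i + 1) * 13] for i in range(4)]

def hcp(hand):
    """High-card points: A=4, K=3, Q=2, J=1."""
    return sum(HCP.get(card[0], 0) for card in hand)

def shape(hand):
    """Suit lengths, longest first, e.g. [5, 3, 3, 2]."""
    return sorted((sum(1 for c in hand if c[1] == s) for s in SUITS), reverse=True)

def fits_spec(hand):
    """Placeholder specification: a 15-17 balanced 1NT opener."""
    return 15 <= hcp(hand) <= 17 and shape(hand) in ([4, 3, 3, 3], [4, 4, 3, 2], [5, 3, 3, 2])

def generate(n):
    """Rejection sampling: deal at random, keep only deals where South fits the spec."""
    deals = []
    while len(deals) < n:
        deal = random_deal()
        if fits_spec(deal[2]):  # index 2 = South
            deals.append(deal)
    return deals

# Spot-check a handful of deals by eye before trusting the whole sample.
for deal in generate(3):
    for seat, hand in zip("NESW", deal):
        print(seat, " ".join(hand))
    print()
```

Rejection sampling gets slow when the specification is rare, which is exactly the "generating will take ages" case I mention below.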
But all this is a completely different issue.
I think I can export generated deals out of Dealmaster PRO, but what do you want me to do with them?
Publish them all on BBO? That seems impractical to me. My sample size is usually 1000 deals, unless generating them would take ages.
But why would we need to hide anything?
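As for the sample size: 1000 deals pins down a frequency fairly tightly. A back-of-the-envelope check using the normal approximation (the 60% make rate is only an illustrative figure):

```python
import math

n, p = 1000, 0.60                    # sample size and an example observed make rate
se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
print(f"95% CI: {p:.2f} +/- {1.96 * se:.3f}")   # roughly 0.60 +/- 0.030
```

That is about a three-percentage-point margin of error, which is tight enough for most bidding questions.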
The best way to check results is for somebody else to repeat them with their own software, using either the same specifications or their own.
Whenever I have published results and others have run simulations with similar specifications, they have come to similar results.
But frankly, I do not understand all this skepticism about their validity, as if people had any incentive to make false or careless claims.
Simulation results are sometimes surprising and refute "standard wisdom". That's why I like them. Others seem to hate them for that reason.
I wonder who has an open mind here.
Rainer Herrmann