One of the tricky things about highly procedural games is testing their many varied outputs. The goals of that testing include checking for out-of-bounds results and looking for undesired peaks or valleys as results accumulate over time. If you've got a really procedural game, how do you test for all of this, not just once, but many times as you iterate on your design and implementation?
Ben Weber has an interesting blog post up at Gamasutra on using Google's Cloud Dataflow tool to run a game simulation many times in parallel.
This delivers a couple of very useful services for testing simulated procedural games. One is that if you need a bunch of "testers," you can easily scale up the number of servers to some predetermined maximum. And by saving your chosen outputs to a BigQuery table, you can use Google Data Studio to review the results and look for patterns of interest related to game features.
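To make the pattern concrete, here's a minimal local sketch of the same idea in Python, with no cloud dependencies: many seeded runs of a (hypothetical, invented-for-illustration) game simulation are executed in parallel, and the collected per-run stats play the role of the BigQuery table you'd query for out-of-bounds results or odd peaks and valleys. A real Dataflow pipeline would instead express this as an Apache Beam job distributed across a fleet of workers.

```python
import random
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def simulate_run(seed: int, steps: int = 100) -> dict:
    """Hypothetical stand-in for one headless game run: an accumulating
    value (think gold, score, or population) walked over `steps` turns.
    In the Dataflow approach, this function would run the real game."""
    rng = random.Random(seed)  # seeded, so each run is reproducible
    value = 0.0
    peak, valley = 0.0, 0.0
    for _ in range(steps):
        value += rng.uniform(-1.0, 1.5)  # slight upward drift by design
        peak = max(peak, value)
        valley = min(valley, value)
    return {"seed": seed, "final": value, "peak": peak, "valley": valley}

def run_batch(num_runs: int) -> list[dict]:
    """Run many simulations in parallel. Locally a thread pool plays the
    role of Dataflow's worker fleet; the returned list plays the role of
    the output table you'd save to BigQuery."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(simulate_run, range(num_runs)))

if __name__ == "__main__":
    results = run_batch(200)
    print(f"mean final value: {mean(r['final'] for r in results):.2f}")
    # Out-of-bounds check: flag any run whose accumulated value strays
    # outside the range the design assumes (bounds here are illustrative).
    bad = [r for r in results if not -150 <= r["final"] <= 150]
    print(f"out-of-bounds runs: {len(bad)}")
```

Because each run is keyed by its seed, any anomaly the aggregate view surfaces can be replayed deterministically for debugging, which is half the value of batching simulations like this in the first place.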
Naturally, this ain't free. Spinning up a lot of servers could get expensive. (I also do not love Google, but that's a personal issue.)
Still, it's an interesting option for a problem that's particularly difficult in highly procedural games: frequently testing semi-unpredictable outputs.