On April 25, two employees of Grendel Games attended the Everything Procedural Conference 2018 at NHTV in Breda. There were many interesting talks, each of which sparked insights and inspiration for future projects. This article outlines a few of our takeaways from the talks.
Everything you see in No Man’s Sky is procedurally generated at runtime.
Online Procedural Generation
Procedural generation can be integrated into a game and used at runtime. The generation process can be executed at load time (at startup or during a loading screen), or continuously as the game is running. The latter in particular puts very strict requirements on the performance of the procedural generator: a game running at 60 FPS only has about 16 milliseconds per frame to perform all of its logic, including AI, physics and rendering. As the generation process has to be limited to a few milliseconds per frame, online procedural generation usually cannot yield the same high-detail results as offline generation can, but the possible results are certainly impressive.
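To make this concrete, here is a minimal sketch of time-sliced generation: the generator yields after every small unit of work, and the game loop advances it only while a per-frame budget lasts. The names and the budget value are illustrative, not taken from any of the talks.

```python
import time

def chunk_generation_steps(chunk_id):
    """Yield after each small unit of work so the caller can pause us."""
    for step in range(1000):
        # ... do one small piece of generation work for this chunk ...
        yield

def run_with_budget(task, budget_seconds=0.002):
    """Advance `task` until this frame's time budget runs out.

    Returns True once the task is finished."""
    start = time.perf_counter()
    for _ in task:
        if time.perf_counter() - start >= budget_seconds:
            return False  # out of time, resume next frame
    return True  # generator exhausted, chunk is done

# Game loop sketch: give generation ~2 ms of each 16 ms frame.
task = chunk_generation_steps(chunk_id=42)
done = False
while not done:
    done = run_with_budget(task)
    # ... rest of the frame: AI, physics, rendering ...
```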
Dealing with change
Runtime generation has a side effect: a change to the way a generator works may cause unexpected changes in the generated result. Furthermore, these changes may not always be immediately apparent, especially when the scope of the generated world is large. They can sometimes be prevented by designing the generators to be robust, so that changes or additions do not interfere with previous results. If changes cannot be avoided, or are actually desired for updates such as balancing or bug fixes, a mechanism can be put in the game to ensure the game state properly recovers from them. One way this can be accomplished is by relocating the player and any open objectives to a different place when the previous save location no longer exists or is no longer suitable.
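One common way to build such robustness, sketched below under our own naming, is to derive every random decision from a stable hash of the world seed plus the identity of the thing being generated, rather than from one shared random stream; a later addition then cannot shift the numbers that existing content receives.

```python
import hashlib
import random

def stable_rng(world_seed, layer, cell):
    """Deterministic RNG per (layer, cell): adding a new layer in a
    later patch cannot shift the numbers existing layers receive."""
    key = f"{world_seed}:{layer}:{cell}".encode()
    digest = hashlib.sha256(key).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

# The 'trees' layer always gets the same values for this cell, even if
# a 'rocks' layer is added to the generator later on.
rng = stable_rng(world_seed=1234, layer="trees", cell=(10, 7))
print(rng.random())
```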
Expose your data
Debugging a procedural world can be difficult, but there are best practices that make life easier. One is making sure all generated data is human-readable, for instance by using JSON or XML, and making the data between generation stages available. This prevents black-box behaviour and allows pinpointing where a bug or change occurred. Having this intermediate data available is also valuable for testing, as it can be fed back into the system. This way the same data can be run through a generator multiple times, and any change to the end result can easily be detected.
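As an illustration (the stage and file names below are our own), a pipeline runner can write each stage's output to readable JSON as a side effect:

```python
import json

def run_pipeline(seed, stages, dump_dir=None):
    """Run generation stages in order, optionally writing each stage's
    output to a human-readable JSON file for inspection."""
    data = {"seed": seed}
    for stage in stages:
        data = stage(data)
        if dump_dir is not None:
            path = f"{dump_dir}/{stage.__name__}.json"
            with open(path, "w") as f:
                json.dump(data, f, indent=2, sort_keys=True)
    return data

def heightmap_stage(data):
    # Toy stage: derive a tiny heightmap from the seed.
    data["heights"] = [(data["seed"] * (i + 1)) % 7 for i in range(5)]
    return data

result = run_pipeline(seed=42, stages=[heightmap_stage], dump_dir=".")
```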
Automate testing
Having access to the intermediate data also makes it easier to implement automated testing. It is useful to store data for extreme situations to test for performance bottlenecks. Test automation allows catching bugs or changes that would be very hard to detect manually. Ideally, testing is automated as much as possible, although the limits of automation must be well recognised.
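A simple sketch of such a test: stored intermediate data is fed through a generation stage, and the output's fingerprint is compared against a previously approved one. All names here are illustrative.

```python
import hashlib
import json

def generate(stage_input):
    """Stand-in for a generation stage fed with stored intermediate data."""
    return sorted(v * 2 for v in stage_input["values"])

def output_fingerprint(stage_input):
    out = generate(stage_input)
    return hashlib.sha256(json.dumps(out).encode()).hexdigest()

# Stored intermediate data and the fingerprint recorded when the
# result was last approved by hand.
stored_input = {"values": [3, 1, 2]}
approved = output_fingerprint(stored_input)

# Later, after changing the generator: any difference is flagged.
assert output_fingerprint(stored_input) == approved, "generator output changed"
```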
Visualize the result
Another way to improve developing with procedural generation is to make the results visible during the development process. When tweaking a system that generates varieties of a type of object, say a tree, it is useful to have a visualization of what different generated trees may look like. Visualizing data like this greatly speeds up development and makes the procedural generation process more accessible for designers and artists.
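As a toy illustration of the idea (a real tool would render actual assets), the sketch below prints a contact sheet of tree variants for a range of seeds:

```python
import random

def tree_shape(seed):
    """Toy stand-in for a tree generator: a height and a foliage glyph."""
    rng = random.Random(seed)
    return rng.randint(3, 7), rng.choice("^*@")

def preview(seeds):
    """Print a contact sheet of generated variants side by side."""
    shapes = [tree_shape(s) for s in seeds]
    for row in range(max(h for h, _ in shapes), 0, -1):
        print("  ".join(ch * 3 if h >= row else "   " for h, ch in shapes))
    print("  ".join(" | " for _ in shapes))  # trunks

preview(range(8))
```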
Offline Procedural Generation
The environments in Ghost Recon Wildlands are procedurally generated using Houdini.
Generation time
Procedural generation can also be utilized for content production by using it offline, as a design tool. This allows generating very high-detail content, as the generation tools can take their time. Examples were given of tools taking up to 45 minutes to cook their content. Render farms were used to offload this process, allowing artists and level designers to continue working. Of course, not all tools take that long, and many can be used as part of the general workflow.
Layered approach
Offline procedural generation is often built up in layers. The bottom layers provide coarse data of large features, such as the mountains and valleys that make up the terrain. On top of this are more detailed layers such as roads, rivers and vegetation. Each layer makes use of the layers below it. For instance, the roads follow a logical path along the terrain, rivers are carved out using erosion simulation and different vegetation types are placed depending on parameters such as altitude, proximity to water and slope.
Additional masking layers can be used to manually indicate where, for instance, there shouldn't be any vegetation, or where a higher density of grass or rocks should be spawned. It is often better to modify the generation process like this than to edit the final result: by integrating the changes into the pipeline, they are always included when the world is re-generated. An important feature is that the dependency between layers only works bottom-up; the lower layers are never influenced by higher layers. At specific milestones during development, the lower layers are locked, preventing them from being modified. This allows more fine-grained polish on the upper layers, with the certainty that the underlying terrain shape won't change any more.
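To illustrate how such layers and masks might combine, here is a small sketch; the specific rules and numbers are invented for the example, not taken from the talks.

```python
def vegetation_density(height, slope, water_dist, mask):
    """Toy rule set in the spirit of the layered approach: vegetation
    prefers low altitude, gentle slopes and nearby water. The mask lets
    an artist zero out (or boost) density without touching lower layers."""
    density = 1.0
    density *= max(0.0, 1.0 - height / 100.0)      # thins out with altitude
    density *= max(0.0, 1.0 - slope / 45.0)        # avoids steep slopes
    density *= max(0.0, 1.0 - water_dist / 50.0)   # prefers riversides
    return density * mask

# Lower-layer data in, vegetation density out; the mask is authored.
print(vegetation_density(height=20.0, slope=10.0, water_dist=5.0, mask=1.0))
print(vegetation_density(height=20.0, slope=10.0, water_dist=5.0, mask=0.0))
```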
Free data
Building the world data in this layered fashion also means there is a lot of data available. This data can be used for more than its original purpose, for instance for creating debugging or profiling tools. An example from the conference was using the data to create a poly-count density heat map to find performance hotspots. This data was later used to decide where flocks of birds could be spawned without degrading performance. The developers called this free data, as it was available for use without additional time or effort, just by being part of the procedural pipeline.
This can also work the other way around, by basing further generation on artist-created assets. This was used in Guerrilla Games' Horizon Zero Dawn to create smooth transitions between organic-shaped stalagmites and the mechanical environment in which they were placed. The objects were rendered from above into a signed distance field (SDF) texture. The environment was voxelized and converted into a 3D SDF structure. These SDFs were then blended together, and a mesh was generated from them using an isosurface extraction technique like Surface Nets. Parameters were used during blending and extraction to make the generated meshes fit perfectly with the stalagmites. Finally, the meshes were simplified to reduce the polygon count, and all hidden polygons were removed.
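The talk did not spell out the exact blending operation, but a classic way to blend two SDFs smoothly is a polynomial smooth minimum (after Inigo Quilez). A minimal sketch with toy sphere fields:

```python
def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - radius

def smooth_union(d1, d2, k):
    """Polynomial smooth minimum: blends two distance fields so their
    union gets a smooth fillet of width roughly k."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# Two blended shapes; an isosurface extractor (e.g. Surface Nets)
# would then mesh the zero level set of the combined field.
p = (0.0, 0.5, 0.0)
d = smooth_union(sphere_sdf(p, (0, 0, 0), 1.0),
                 sphere_sdf(p, (0, 1.5, 0), 1.0), k=0.5)
print(d)  # negative means the point lies inside the blended surface
```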
Other uses
Procedural generation can also be used to take over repetitive tasks, such as placing reflection baking probes in locations that need it, or determining where certain audio zones need to be positioned. Involving the whole team in the process can result in many interesting opportunities.
Generate responsibly
A downside of procedural generation is that it often doesn't generate the exact desired result. Rather than spending a lot of time tweaking the generator to deliver a perfect end result, it is sometimes better to use the generator for the bulk of the work, then tweak the end result until it is just right. It is important to know both the opportunities and the limitations of procedural generation when deciding to work with it.
Wave Function Collapse
The Wave Function Collapse algorithm is used to generate islands from 3D tiles in Bad North.
Not quite quantum computing
Wave Function Collapse is a technique that borrows its name and workings from a quantum-mechanics principle. The objective of the technique is to populate a grid with tiles from a set, in such a way that the tiles fit together naturally. The tiles can be 2D or 3D (or any other dimension, really), and are processed to determine the characteristics of their edges. These edge characteristics can be anything; examples include pixel colours or mesh vertex positions.
How it works
The wave function works by first expanding its probability space: all cells in the grid are assigned all possible tiles, since all cells are still in an undefined state. The collapse is started by defining the tile for a single cell. This tile limits which tiles can be placed in adjacent cells, based on their edge characteristics. All tiles that are no longer possible are removed from the adjacent cells' possibilities. This change recursively propagates through the grid until no more changes are made.
After this first change, the cell with the lowest number of possible tiles is chosen; this is called the cell with the lowest entropy. This cell is assigned a random tile from its possibilities, and the changes are again propagated through the grid. This process continues until all grid cells have been assigned a single tile. At this point, the wave function has collapsed into a single state.
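A minimal sketch of the algorithm on a toy tile set: three tiles with one symmetric compatibility rule each. Real tile sets carry per-edge rules, and production implementations also handle contradictions by backtracking or restarting.

```python
import random

# Which tiles may sit next to each tile (same rule for all four
# directions here, to keep the sketch small).
TILES = ["sea", "coast", "land"]
COMPATIBLE = {
    "sea": {"sea", "coast"},
    "coast": {"sea", "coast", "land"},
    "land": {"coast", "land"},
}

W, H = 8, 8
# Every cell starts with the full set of possibilities.
grid = [[set(TILES) for _ in range(W)] for _ in range(H)]

def neighbours(x, y):
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < W and 0 <= ny < H:
            yield nx, ny

def propagate(x, y):
    """Remove impossible tiles from neighbours until nothing changes."""
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        allowed = set().union(*(COMPATIBLE[t] for t in grid[cy][cx]))
        for nx, ny in neighbours(cx, cy):
            new = grid[ny][nx] & allowed
            if new != grid[ny][nx]:
                grid[ny][nx] = new
                stack.append((nx, ny))

def collapse():
    while True:
        # Pick the undecided cell with the fewest options (lowest entropy).
        open_cells = [(len(grid[y][x]), x, y)
                      for y in range(H) for x in range(W)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return  # every cell holds exactly one tile: fully collapsed
        _, x, y = min(open_cells)
        grid[y][x] = {random.choice(sorted(grid[y][x]))}
        propagate(x, y)

collapse()
for row in grid:
    print(" ".join(next(iter(c))[0] for c in row))
```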
The wave function collapses. On the left, all tiles are still possible, towards the right the possibilities slowly collapse into a final state. Images captured from Wave by Oskar Stålberg.
Mixed initiative
Level design often requires more control than this, which brings us to mixed initiative. This simply means that one or more cells are chosen first, either by hand or based on specific preconditions. The wave function collapse is then started to fill in the rest of the cells. This technique allows control where needed, and randomness for the rest. A heuristic can be added to the tile-choosing code to bias it towards a specific type of tile, for instance to generate results with a specific theme.
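Continuing the sketch above (with a freshly initialised grid), mixed initiative simply amounts to collapsing a few cells by hand and propagating before the automatic loop takes over:

```python
# Re-uses grid, propagate and collapse from the previous sketch.
grid[0][0] = {"sea"}    # hand-picked: the top-left corner is open sea
propagate(0, 0)
grid[4][4] = {"land"}   # hand-picked: the centre must be land
propagate(4, 4)
collapse()              # WFC fills in everything else
```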
Constraint-based programming
Wave function collapse is a form of constraint-based programming, where the program is only asked to produce a result that satisfies the given constraints. Anything that satisfies those constraints is a valid result.
Content agnostic
An advantage of WFC is that it is content-agnostic. The algorithm itself can execute without requiring knowledge of what it is generating. All it cares about is whether two tiles can connect. This leaves a lot of artistic and creative freedom without having to adjust or rewrite the algorithm.
Procedural Generation and Deep Learning
The Artistic Style Transfer technique uses deep learning to transfer a style from one image to another (@DmitryUlyanovML)
Deep learning is developing fast, and it is interesting to see what applications it may have in relation to procedural generation.
Tweaking parameters
One way in which deep learning can be applied is to find optimal parameters for a procedural generator. Getting the settings just right for the desired result can require many iterations, but if the correctness of the result can be measured, parameter tuning can be automated to search for the best configuration.
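The simplest form of this outer loop is a random search over the parameter space; smarter tuners, including learned ones, replace the proposal step but keep the same structure. A toy sketch with an invented generator and score function:

```python
import random

def generator(roughness, tree_density):
    """Toy stand-in for a procedural generator."""
    return {"roughness": roughness, "tree_density": tree_density}

def score(result):
    """Measured 'correctness' of a result; here simply closeness to a
    target. In practice this could be playability metrics, path
    lengths, coverage statistics, etc."""
    return -abs(result["roughness"] - 0.3) - abs(result["tree_density"] - 0.7)

# Random search: propose parameters, measure, keep the best.
best, best_score = None, float("-inf")
for _ in range(1000):
    params = {"roughness": random.random(), "tree_density": random.random()}
    s = score(generator(**params))
    if s > best_score:
        best, best_score = params, s
print(best, best_score)
```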
Speed up costly calculations
Procedural generation often uses computationally expensive algorithms to generate content; this cost is especially great when generation must take place at runtime. In some situations, the cost can be reduced by training a neural network to output the same result as the algorithm at a fraction of the cost. This can be a viable solution, but the fact that neural networks never really reach 100% accuracy prevents it from being usable in all situations.
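A minimal sketch of the idea using PyTorch: a tiny network is trained to mimic a stand-in "expensive" function, after which evaluating the network replaces the original computation, approximately.

```python
import torch
from torch import nn

def expensive(x):
    """Stand-in for a costly generation step (e.g. an erosion pass)."""
    return torch.sin(3 * x) * torch.exp(-x * x)

# A tiny MLP trained to mimic the expensive function on sampled inputs.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 256).unsqueeze(1)
y = expensive(x)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# At runtime, net(x) stands in for expensive(x) at a fraction of the
# cost, but only approximately, so it isn't suitable everywhere.
print(loss.item())
```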
Artistic Style Transfer
Deep learning has been used before for so-called artistic style transfer, where the visual style of one image is transferred onto another image. This technique could also be used to transfer content onto a game world's surface based on the shapes and contours of that surface. This could be used to place landscape features such as rivers and lakes, but also to place rocks, vegetation or other objects. By training the neural network on data sets of existing environments, it can learn to generate new natural landscapes.
Concluding
Procedural generation is a diverse field with many different approaches and applications. The talks at the Everything Procedural Conference gave a good sampling of this diversity, and have supplied us with many inspiring ideas. We’re looking forward to applying these ideas in our games!
Did you like this article? Then you might be interested in our other in-depth articles on procedurally generated water paint stains and automated testing of procedural content. Stay up to date on new articles by subscribing to our newsletter.
Sources
The information in this article is based on these talks from the Everything Procedural Conference 2018:
- Twan de Graaf – Ubisoft Paris
- Pierre Villette – Ubisoft Paris
- Innes McKendrick – Hello Games
- Oskar Stålberg – Bad North
- Anastasia Opara – SEED at EA