- Core Concepts
- Procedural Generation
- Top-down thinking
- Sub-division of Spaces
- Creating Content
- Creating Worlds
The project has been vision-led, evolving towards goals of interactivity, flexibility, and parameterised modelling. As such, a lot has been learned during its development about the nature of procedural generation, what approaches and thought processes work, and which don't.
Let's look at some of the elements and themes involved.
Procedural generation is driven by the process of building sets of instructions that a computer can follow to carry out a task. Each step in a procedure is an operation or decision, applied to the information that is passed in and flows through it. The end result is some form of asset to be displayed (or otherwise used) in the game world or environment you are constructing.
In our case here, we're predominantly building geometry (triangle meshes) to form the objects and surfaces of the game world we see. The output of our procedures will be geometry, the input can be any number of parameters, of various types. In our example above the input, operations, and output are:
- Input - Bounds describing the size and height of a table
- Operations -
- Slice off top of bounds for table top
- Underneath, generate four corner bounds of the same size, inset from table edge.
- Populate them with table legs
- Combine it all together
- Paint it yellow
- Output - Geometry representing a table
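The steps above can be sketched as a plain function. This is a hypothetical Python rendering, not Apparance's actual node graph; all names, sizes, and the geometry representation are illustrative:

```python
# Illustrative sketch of the table procedure: bounds in, geometry out.
# Geometry is faked as a list of ("cuboid", x, y, z, sx, sy, sz) tuples.

def build_table(width, depth, height, top_thickness=0.05, leg_inset=0.1):
    parts = []
    # Slice off the top of the bounds for the table top.
    top_z = height - top_thickness
    parts.append(("cuboid", 0.0, 0.0, top_z, width, depth, top_thickness))
    # Underneath, generate four corner bounds, inset from the table edge.
    leg_size = 0.05
    for x in (leg_inset, width - leg_inset - leg_size):
        for y in (leg_inset, depth - leg_inset - leg_size):
            # Populate each corner with a table leg.
            parts.append(("cuboid", x, y, 0.0, leg_size, leg_size, top_z))
    # Combine it all together and paint it yellow.
    return {"colour": "yellow", "geometry": parts}

table = build_table(1.2, 0.8, 0.75)
```

The input parameters (bounds) vary the output; the operations themselves stay fixed — which is the essence of a procedure.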
Procedures are the data that persists your design, they are your content, they are what makes your creation unique. Each is a collection of operations, other procedures, built-in constant values, and interconnections between them. Together they perform their designed task to produce your game world.
At some point, a procedure will need to perform some function to manipulate the information flowing through it. This is the job of operators: fundamental elements of procedural creations which encapsulate some general-purpose function. Operators are like built-in procedures, hard-coded to perform a single specific function. In fact, they are effectively functions in the same sense as in any programming language: pass in data, get transformed data out. Unlike many languages though, operators can have multiple outputs as well as multiple inputs.
Procedures can contain other procedures, and they appear and behave just as operators do. They take input data and produce transformed output data. This approach is fundamental to creative efficiency, allowing re-use of more general functionality within less general layers of the design. The main entry point into your project, the top-level procedure, will be the most specific, and the procedures will get more general as you dig down.
Building this way allows accumulation of useful functionality and even larger scale assets into libraries of content that can be re-used within a project or outside in later projects.
An important aspect of this re-use is that it allows project-wide changes and improvements. Fixing a problem with your brick wall fixes all the walls, immediately. Procedures are easily added as placeholders, to allow quick prototyping, and can later be re-visited to flesh out properly.
Procedures are evaluated non-destructively, accumulating calculated and derived information as evaluation progresses, without replacing any previous values. Any operator output value, once calculated, remains fixed and immutable.
This means that procedures are purely functional, i.e. they have no side effects. Whilst there is a small evaluation cost associated with this, it simplifies the thinking needed when building procedure graphs.
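A rough sketch of this evaluate-once, immutable-output behaviour (illustrative Python, not the engine's internals):

```python
# Sketch of side-effect-free operator evaluation with cached, immutable
# outputs. Names are illustrative, not Apparance's implementation.

class Operator:
    def __init__(self, fn, *inputs):
        self.fn = fn          # pure function of the input values
        self.inputs = inputs  # upstream operators or constants
        self._cached = None
        self._done = False

    def output(self):
        # Once calculated, the value remains fixed and is never recomputed.
        if not self._done:
            values = [i.output() if isinstance(i, Operator) else i
                      for i in self.inputs]
            self._cached = self.fn(*values)
            self._done = True
        return self._cached

calls = {"count": 0}

def measure():
    calls["count"] += 1
    return 2.0

width = Operator(measure)
area = Operator(lambda w: w * w, width)
area.output()
area.output()
# measure ran exactly once: the first result was cached and reused
```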
The inputs to these processes are the controlling parameters, each presenting an axis along which the resulting content can be varied. The parameters you choose to expose depend on how your content is to be used, they represent a design choice.
Constant values at unwired operator inputs are equally considered parameters, except these are design time parameters, constant at runtime. This is in contrast to procedure inputs which are run time parameters, variable at runtime.
The procedure hierarchy represents a large interconnected graph of operators, all feeding information ultimately towards the final outputs. Evaluation of this is performed by the synthesisers running on background threads by starting with the top level procedure and requesting output values. By following the graph back, evaluation of procedures and sub-procedures will eventually yield the results. This evaluation is lazy in nature, on three levels:
- An operator's output is only calculated when needed.
- Only operator inputs that are actually needed are subsequently evaluated.
- Sub-procedures are only instantiated when an output is actually required.
By way of example, the If conditional operator shown here represents a decision point in the procedure. A huge amount of work could be required to calculate each of the two possible inputs, and it would be wasteful to work them both out. Instead, once the controlling boolean value is evaluated, only the input path corresponding to the result is requested and passed back up the evaluation chain to its output.
|Choose one of two values|
This approach obviously saves us time, as only the calculations that are needed are performed. However, the third point is critical to recursion support, as you can't afford to speculatively instantiate an unknown number of recursions.
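The lazy branch selection can be mimicked with thunks (zero-argument callables). This is an illustrative sketch, not engine code:

```python
# Sketch of lazy 'If' evaluation: inputs are thunks and only the branch
# selected by the condition is ever run.

def lazy_if(condition_thunk, true_thunk, false_thunk):
    # Evaluate the controlling boolean first...
    if condition_thunk():
        return true_thunk()   # ...then only the matching input path
    return false_thunk()      # the other branch is never computed

evaluated = []

def expensive(tag, value):
    def thunk():
        evaluated.append(tag)   # record which branches actually ran
        return value
    return thunk

result = lazy_if(lambda: True, expensive("hi", 1), expensive("lo", 2))
# result == 1 and only "hi" was evaluated; "lo" cost nothing
```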
While the engine and tools can provide the features needed to build procedurally, the user needs to work and think procedurally too.
As well as the engine implementation intrinsically supporting procedural generation, the actual procedures themselves require particular approaches and techniques to be applied during their design too. Some of these are documented here and should be considered when you set about your project.
It is a fairly straightforward idea that we only build detail where it's needed, but this turns out to have some crucial implications. The thinking goes like this:
- Because you start with nothing, you always have to be moving towards more detail to build the world.
- Because you are working from low detail to high detail, the procedures need to work from high-level to low-level.
- These correspond to high-level decisions and low-level decisions.
- High level decisions drive the over-arching content generation in an area, the low-level decisions drive the finer-grain content generation.
- High level decisions are made before the low level ones.
- The content generated by the low level decisions must fundamentally align with the content generated by the high level decisions.
- The low level decisions depend on the high level decisions.
This results in a cascade of calculation and instantiation as we refine the world content, with each layer generating its more detailed content based on the layer above.
This means that decisions made in a lower detail level must be honoured (and improved upon) in the higher detail levels. Fundamentally this is expressed as:
No low detail content can depend on decisions made when generating high detail content beneath it.
The low detail content has to come first. The practical implications of this are:
- You can't build up larger scale, lower detail, content from finer-grained calculation such as a simulation.
- A low detail version of something can't be built from sampling its high detail version.
- Larger things are not the sum of their parts, the parts are a decomposition or refinement of the larger thing.
In fact, there are very good performance reasons this condition must hold: you can't afford to generate lots of layers of higher detail content just to show a low detail area of the world. That would completely defeat the point of the dynamic detail system.
Additionally, by inference of the fact that dependencies must only be upwards from higher detail content to lower detail parent content:
Content at any level can't be dependent on content at the same level, adjacent to it.
This is because it can't be relied upon to exist yet, and would introduce circular dependencies.
Apparance follows the idea of progressive refinement to add detail to the scene. That is; progressively more detailed versions of an object are built and substituted gradually as you approach them. This refinement can happen in any way you decide to implement it, some common ways are:
- Start including procedures to generate extra geometry, e.g. switch on the door handle once you are close enough.
- Substitute one procedure for another to replace low detail geometry with higher detail geometry, e.g. change from a line primitive to a cylinder for a thin straight feature.
- Change parameters controlling the sub-division of primitive operators to generate more surface geometry, e.g. more faces around a cylinder.
- Replace textured flat surface with surface geometry, e.g. brick pattern becomes actual bricks.
As you build more of this into your procedures, more detail can be revealed in the world.
To implement this switching, selection, and detail-based parameterisation, there are two operators available at the moment: Block Info and Detail Switch:
|Information about current block we should be modelling in|
When synthesis is being carried out, the synthesiser knows the octree node it is doing the work for and can provide size (and location if needed) information. You can use this to drive your procedure's decisions about what geometry to generate and how much to sub-divide things.
|Choose one of two bits of geometry depending on current block size|
For convenience, a helper switch operator is provided that uses the block size to select one of two geometry inputs. When the node being built for is smaller than the threshold value, the high detail input is evaluated and passed to the output; when it is equal to or larger than the threshold value, the low detail input is evaluated and passed to the output.
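The selection rule boils down to a comparison against the threshold, with the unused input left unevaluated. A minimal sketch (illustrative only; not the operator's actual implementation):

```python
# Sketch of the Detail Switch selection rule: nodes smaller than the
# threshold get the high detail input, equal or larger get the low.
# Thunks keep the unchosen branch unevaluated, as in the engine.

def detail_switch(block_size, threshold, high_thunk, low_thunk):
    if block_size < threshold:
        return high_thunk()
    return low_thunk()

geo = detail_switch(16.0, 32.0,
                    lambda: "high detail crate",
                    lambda: "low detail crate")
# a 16-unit block is below the 32-unit threshold, so geo is the
# high detail version
```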
Divide and Conquer
An octree structure is used to manage the scene and all content within it. Starting at a root octree node large enough to enclose everything, it is repeatedly refined into smaller sub-nodes according to where the world is being viewed from and where there is content.
Each level depthwise within the octree structure is progressively smaller (half the size in each dimension), and manages scene content over a smaller volume. To populate each node, the content from its parent that overlaps it is re-synthesised at a greater detail level. A core premise here is that as each progressively smaller node is re-built it can afford to spend the same fixed budget (time and geometry), but over a smaller volume, allowing an increase in detail.
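The fixed-budget premise can be illustrated with a little arithmetic (a sketch, not engine code): halving the node size at each level while keeping a constant per-node budget multiplies the detail density by eight per level down.

```python
# Rough arithmetic behind the fixed-budget premise: each octree level
# halves the node size, so a constant per-node triangle budget yields
# eight times the density per unit volume at every level down.

def node_size(root_size, depth):
    return root_size / (2 ** depth)

def density(budget_triangles, root_size, depth):
    size = node_size(root_size, depth)
    return budget_triangles / (size ** 3)  # triangles per unit volume

root = 1024.0
assert node_size(root, 3) == 128.0
assert density(5000, root, 4) == 8 * density(5000, root, 3)
```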
As the scene is refined, the tracking octree grows deeper and its nodes smaller.
Part of the refinement process the engine handles is deciding which nodes each procedure belongs in. By convention, the first frame parameter passed in is assumed to be the bounds the geometry will be created within. It is important that this holds true and no geometry outside this is produced or the tracking can 'lose' pieces of content further down the refinement path.
As a procedure divides up the space within it for its parts, the frames passed to the sub-procedures are captured so they can be distributed between the child nodes when it comes to refinement.
The refinement process is driven by proximity to the viewpoint (camera), with higher detail content being generated to add near detail over the lower detail farther away. With correct balancing, in theory, this means that each level of detail should use about the same amount of geometry. This range of detail levels and the distances they are intended to appear at are called 'detail bands'.
Where you have multiple detail levels for the same content being generated we need a way of selecting which one to render and nicely transitioning between them as the viewpoint moves. This is handled by a screen-door transparency pattern blend control in the common pixel shader code.
As procedures drive procedures, within procedures, deeper and deeper to continue the refinement process, all information needed by the deeper ones is passed down as input parameters. Because the engine knows about the data types, they can be captured for re-application when the deeper procedures are re-synthesised.
As new parameters are derived from the calculations performed, these can be used to drive deeper content creation. Some is passed from the top level, some from within.
Sub-division of Spaces
There are a couple of ways geometric content can be created, what I will call 'bottom up' and 'top down'. Bottom up is where you just start creating primitives (triangles, cubes, cylinders, etc), manipulating them and positioning them around some starting position and orientation to create your object. Top down is where you start with a volume which you partition up into portions correctly sized to create your primitives inside.
Whilst both are supported by Apparance, the top-down approach fits the divide-and-conquer/refinement model better and is generally the best way to build things.
|Modelling Approach||Example Operations|
|Bottom Up||Create cube → Transform vertices to desired position|
|Top Down||Sub-divide/manipulate space to where cube needed → Create cube|
An important data type used in this process, and through-out many of the example procedures, is the Frame.
A Frame is a data type that describes an oriented bounding-box, or more precisely an oriented rectangular cuboid. A three-dimensional region of space with a width, height, depth, orientation, and position.
Most of the geometry primitives take a Frame as their input and construct geometry to fill it.
- Cube - Construct a cuboid fitted to the frame bounds
- Cylinder - Construct a cylinder fitted to the frame bounds
- Sphere - Construct a sphere fitted to the frame bounds
- Sheet - Construct a single sided flat sheet fitted to the lower Z face of the frame bounds
There are still times when it's better or clearer to use the absolute coordinate-based primitive operators:
- Cube - Construct a unit cube at the origin
- Triangle - Construct a single sided triangle between three world positions
- Line - Construct a line between two world positions
- Grid - Construct a grid of lines given an origin world position and two axis vectors
To use these within the Frame based modelling approach, the Frame To World operator can be used to generate an absolute world coordinate from a Frame.
|Convert a position relative to a frame to absolute world position|
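For an axis-aligned frame the conversion is just a scale and offset per axis. This sketch is illustrative (orientation is omitted for brevity; the real operator handles oriented frames):

```python
# Sketch of a frame-to-world conversion for an axis-aligned frame:
# a frame-relative position (0..1 in each axis) is scaled by the frame
# size and offset by the frame origin.

def frame_to_world(origin, size, rel):
    return tuple(o + s * r for o, s, r in zip(origin, size, rel))

# Centre of a 2x4x1 frame positioned at (10, 20, 0):
centre = frame_to_world((10.0, 20.0, 0.0), (2.0, 4.0, 1.0),
                        (0.5, 0.5, 0.5))
# centre == (11.0, 22.0, 0.5)
```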
Full details of these operators are available on the Operators manual page. A lot more primitive types, and operations on them, will be added in the future.
Frames don't normally need to be created as such; they are usually set up at the top-level procedure inputs (as constant values) and everything beneath is derived from them.
Frames can be manipulated in a variety of ways:
- Split - Divide in-two in one axis, by an amount or proportion
- Shrink - Divide into three in one axis, creating a shrunken centre section and two equal side sections (by size or proportion)
- Extend - Generate two frames extending out of opposite sides in one axis, by an amount or proportion
- Resize - Shrink (or grow) the frame in place, to an amount or proportion, and aligned to sides or centred
- Rotate - Rotate about the three axes in turn
- Offset - Move the frame to another position in space relative to current position, by an amount or proportion
- Reorient - Re-assign the axes and their orientation within space, without changing the frame's position or size
- Diagonal - Generate a frame that sits along one of the frame's diagonals and intersects the remaining corner with its top.
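To make the first two concrete, here are illustrative axis-aligned versions of Split and Shrink, with frames held as simple (origin, size) tuples. These are sketches of the behaviour described above, not Apparance's operators (which also handle orientation):

```python
# Illustrative axis-aligned frame manipulations.

def split(origin, size, axis, proportion):
    # Divide in two along one axis at the given proportion.
    o, s = list(origin), list(size)
    first_len = s[axis] * proportion
    a = (tuple(o), tuple(s[:axis] + [first_len] + s[axis + 1:]))
    o2 = o[:]
    o2[axis] += first_len
    b = (tuple(o2), tuple(s[:axis] + [s[axis] - first_len] + s[axis + 1:]))
    return a, b

def shrink(origin, size, axis, proportion):
    # Divide into three: a shrunken centre section and two equal sides.
    side = (1.0 - proportion) / 2.0
    lower, rest = split(origin, size, axis, side)
    # the centre takes proportion/(proportion + side) of the remainder
    centre, upper = split(rest[0], rest[1], axis,
                          proportion / (proportion + side))
    return lower, centre, upper

a, b = split((0, 0, 0), (10.0, 2.0, 2.0), 0, 0.3)
# a spans x 0..3, b spans x 3..10
```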
Full details of these operators are available on the Operators manual page.
The inputs to the modelling process are the controlling parameters, and for a given set of these values (and the octree context you are in) you will always generate the same result. This is important for experiences that require consistency between repeat visits of a player and between separate players.
To achieve this, yet maintain the (seemingly random) variety required of a rich and interesting world, all variety decisions must be controlled by a repeatable random number generator; that is, one that will, for a given input seed, produce the same output value or sequence of values.
An important type of information that we need to pass around a lot during procedural generation are random seeds. These are used a lot to drive variation within your world. By deriving new seeds, splitting off random sequences, and sharing seeds between instances, variation can be layered on at different levels of detail to produce good variability, patterning, and grouping of features as you would expect in the real world.
Propagating seeds allows for design elements to be shared across your world bringing consistency, yet still be unique to your world. We don't want everything to be completely random, managing seeds allows us to control this.
Random number generation in Apparance is supported via two Operators: Random and Twist.
|Generate a random number in a given range|
Random (in both integer and floating point versions) generates a random value between two specified values (inclusive). Each value has equal chance of being generated (flat distribution), and is dependent on the input seed.
- Design - for a design seed
- Variety - for a variety seed
Along with each generated random number the Random operators create a new seed value that can be used to generate the next random number in the sequence.
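A minimal sketch of this value-plus-next-seed pattern (a simple linear congruential mix for illustration; not Apparance's actual generator):

```python
# Sketch of a repeatable Random operator: from an input seed it derives
# both a value in a range and a new seed for the next number in the
# sequence. Constants are from Knuth's 64-bit LCG; purely illustrative.

def random_in_range(seed, low, high):
    mixed = (seed * 6364136223846793005 + 1442695040888963407) % 2**64
    value = low + (mixed / 2**64) * (high - low)
    next_seed = mixed  # feed this into the next Random in the chain
    return value, next_seed

v1, s1 = random_in_range(12345, 0.0, 1.0)
v2, _ = random_in_range(s1, 0.0, 1.0)
# The same input seed always yields the same value and next seed.
assert random_in_range(12345, 0.0, 1.0) == (v1, s1)
```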
Where you need to seed multiple sub-procedures, but don't want to have each generate the same content, you could just use a chain of random operators to generate separate seeds, but this can cause problems within the sub-procedures when they are used to generate multiple values. This is because seeds generated this way are related, i.e. they come from nearby points in the same random sequence.
Instead, an operator is provided to basically 'jump tracks' from the current position in the random sequence to another wildly different position. This means that subsequent values and seeds won't be related in the same way.
|Switches the random stream sequence to avoid duplication when sub-dividing procedures with similar content|
This operator is also parameterised to allow splitting the random stream into several different streams and should be used when you are propagating seeds to related sub-procedures, especially where recursion is involved.
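The 'jump tracks' idea can be sketched as mixing the seed with a stream index through a scrambling function, so each sub-procedure lands on a wildly different part of the sequence (a splitmix64-style finaliser here, purely illustrative; not the Twist operator's actual maths):

```python
# Sketch of a 'Twist'-style stream jump: mixing the seed with a stream
# index so sibling sub-procedures don't produce related values.

def twist(seed, stream):
    x = (seed ^ (stream * 0x9E3779B97F4A7C15)) % 2**64
    x ^= x >> 30
    x = (x * 0xBF58476D1CE4E5B9) % 2**64
    x ^= x >> 27
    return x

left = twist(42, 1)
right = twist(42, 2)
# Adjacent streams from the same seed diverge immediately.
assert left != right
```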
A Case Of Non-Randomness - Part 1 / A Case Of Non-Randomness - Part 2
A common need in procedural modelling is to be able to repeat something, be it lamp-posts along a street or fins around a rocket. At the moment recursion, whilst not the most intuitive technique, is the only way Apparance supports this. There are several ways this could be done, and alternatives will be explored in the future.
In its simplest form, recursion allows the chaining together of a series of identical procedures, accumulating the result as you go, until some stop condition is met, e.g. a counter reaches zero.
Whilst this works fine and is the simplest to understand, it has some drawbacks that sometimes need to be considered:
- Stack depth - every iteration uses up a chunk of stack space so you are limited in quantity.
- Detail capture - the top-down modelling and sub-division of space approaches don't work well with this deep recursive implementation of repeated content.
The solution here is to use a 'binary chop' approach, each time dividing the problem into two halves of similar size and recursing into both of them. This both uses less stack space and plays well with the capturing process. To help implement this, the Stacker.Splitter Operator, and friends, are provided to do all the heavy lifting for you, as well as providing some extra useful features.
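The stack-depth saving can be sketched like this: instead of chaining one object at a time, each call splits its index range and space in two, so depth grows with log2 of the count (illustrative Python, not the Splitter's implementation):

```python
# Sketch of 'binary chop' recursion: place `count` objects along a
# length by halving the range each call. Depth is log2(count), not count.

def place_objects(start, count, origin, length, out, depth=0):
    if count == 1:
        out.append((start, origin, length))  # leaf: one object's slot
        return depth
    half = count // 2
    split_at = origin + length * (half / count)
    d1 = place_objects(start, half, origin, split_at - origin,
                       out, depth + 1)
    d2 = place_objects(start + half, count - half, split_at,
                       origin + length - split_at, out, depth + 1)
    return max(d1, d2)

slots = []
max_depth = place_objects(0, 8, 0.0, 16.0, slots)
# 8 objects of length 2 each, reached at recursion depth 3 (= log2 8)
```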
A useful capability is to be able to divide up space and fill it with repeated objects, sometimes with intermediate spacers and end-caps.
A pair of supporting Operators is provided to help with this process, handling all the calculation and recursive construction of the parts for you.
|General stacking/packing utility. Recursively split up space to contain a repeat series of objects with optional start/end caps, all separated with spacers.|
|Combine required elements from stacking process.|
Here they are in use; in the recursive part of the process.
The top two sub-procedures are where the recursion happens, and the stacking problem is divided in two. The lower sub-procedures are where you create geometry to form the relevant parts (here a placeholder coloured element):
- Lower Cap - optional fixed size part to include at the start
- Spacer - optional fixed size part to include between objects and to optionally pack the end-caps
- Object - always required, and with the size being adjusted to ensure full occupancy of the available space
- Upper Cap - optional fixed size part to include at the end
The following additional configuration for the stacking process can be specified:
- Axis - which axis to perform the stacking on (0 for X, 1 for Y, 2 for Z)
- Object Size - The object size to aim for in the stacking direction
- Spacer Size - The exact spacer size (if used)
- Lower Cap Size - The exact lower cap size (if used)
- Upper Cap Size - The exact upper cap size (if used)
An additional feature is indexing support, where the Splitter operator will provide the Index and Total Objects count to each object (or spacer) when they are instantiated. To do this, pass the starting index and the total objects parameters as procedure inputs, and wire up the lower and upper recursion instances to the Lower Index, Upper Index, and Total Objects splitter outputs. Then you can pass the Total Objects and Object Index outputs to the geometry generation procedures that need them.
When the repeat procedure is placed for use in a containing procedure, the Stacker.Options operator can be used to configure which spacing and capping parts are required.
|Helper to generate selection inputs to a stacker based object.|
Here a single repeat procedure is placed down and configured for use. The frame passed in will provide the volume to be filled, and the options determine which spacers and caps are required.
Together they give rise to the following output (in this case, with all parts enabled).
By configuring the spacing options, end cap options, and providing the appropriate geometry generation procedures, you can use the Stacking Operators to construct a wide variety of complex composite objects. Some examples might be:
- A fence with narrow posts and larger end posts - end-caps, objects, and spacers all provided
- A row of columns - columns as spacers, with small non-geometry objects as flexible gaps between
- A stack of books - split on z axis, no caps or spacers, object is book of random orientation
- A course of brickwork - bricks as objects with inset cement pieces as spacers
Apparance will in future also support repetition by a) in-place iteration of procedures, and b) geometry duplication.
A variety of geometry generation Operators are available (see full primitive operator list). These fall into a couple of category axes:
- Triangle - planar, three sided, three corner, surface geometry
- Line - linear, single pixel screen-space width lines
- Frame - location specified by bounding frame, usually exactly encompassing
- Vertex - explicit world-space coordinates, e.g. vertices
- Uncoloured - colour to be applied later, usually the more complex, compound shapes
- Vertex colour - colour specified for each vertex, usually more basic primitives
During the synthesis process, operator generated geometry is accumulated in the synthesis buffer. The Model Segment (white connection points) data type is a reference to where in the buffer this geometry has been created. Passing this around allows specific pieces to be referred to and have further processing operations applied (see below). A special Merge operator provides a way of combining separate pieces of geometry together to be treated as one.
|Merge geometry together|
The geometry referred to by a Model Segment can have a number of manipulation Operators applied to it. Currently operations of these types are available, but more will be added in the future. See the full modelling operator list.
- Surface - e.g. paint (colour), decorate (material)
- Warp - e.g. taper, noise distortion
- Transform - e.g. rotate, offset, scale (normally eschewed in favour of Frame manipulation)
- Normals - e.g. normal blending
- Conversion - e.g. to wireframe
- Control - e.g. hide
Materials control the surface appearance of the geometry. There is a default material for simply lit flat shaded coloured surfaces that un-decorated geometry will use. Geometry supports vertex colouring to allow multi-coloured geometry to be combined into single draw calls.
For non-coloured primitives, you can apply an all-over colour using the Paint operator:
|Paints geometry with a particular solid colour|
This sets all passed in vertices to the provided colour.
Materials are applied in a similar way to colour. A Decorate Operator applies the material ID to the triangles/lines in the supplied Model Segment.
|Applies a particular material to the specified geometry|
Materials are built from a Vertex Shader and Pixel Shader pair with the Material operator:
|Builds a material around particular vertex and pixel shaders|
Material procedures that use this operator must be Resource Procedures (a tick-box on the procedure properties). See below for more on this.
Shaders in Apparance are built procedurally just like everything else. Composition of the textual shader definition is performed using string operators giving full control over the shader code created. Vertex and Pixel shaders have their own compilation operators:
|Compiles a text definition into a vertex shader|
|Compiles a text definition into a pixel shader|
As with Material Procedures, procedures built around these operators must be Resource Procedures (see below).
These textual definitions can be parameterised and contain all manner of composition logic just like any other procedure. As you make changes to a shader procedure it triggers recompilation and re-application of the material effect, giving interactive real-time editing of your shaders.
Resources are a type of procedure that is not built straight away; instead they are deferred to their own separate synthesis run. Where they are used, however, an ID is immediately returned in lieu of the actual resource. Later consumers of this ID (e.g. a Material operator being passed a Vertex Shader ID) can 'trade it in' for the actual resource. Doing this enables sharing of content across the world, since multiple requests for resources that take the same input parameters can return the same resource. Once the resource has been synthesised the first time, it is available for use by other procedures at no synthesis cost. When the resource is no longer referenced it will be released.
At the moment, materials are of limited use. There is opportunity to play with world-space procedural shader effects and texturing, but with no UVs and no texture support this may be limiting. In future all this will be available and much richer surface appearances will be possible.
So far, the modelling side of Apparance has focussed on building objects, that is: individual things or stand-alone structures.
The next step is to be able to scale up, to expand our creations into large worlds, populated with many items. Building outwards is actually similar to building inwards; either way you end up with a large scale scene with small scale detail. This is what is meant by Apparance being effectively 'scale-less': at every level you can build inwards, adding detail. A wise man once said:
In a system where you can zoom in infinitely, every level of detail is always a placeholder for the next level of detail.
Fitting this to the top-down development model that befits procedural generation means first expanding, then refining back again. For example, having built a house procedure, design a town procedure that uses it, build a city procedure that uses that, then a county/country procedure that uses that, and so on.
To support this ever expanding world, procedures need to be scalable; that is, they need to be able to produce content at a wide variety of detail levels according to where they are being used. Parameterised by detail level, if you like. A house viewed from a mile-high town overview will need to be very low detail, just vaguely hinting at there being a house at all. A house viewed from the doorstep will need lots of detail, right down to the door knob.
To support this variety, a procedure (and procedures within it) needs to be aware of the scale of the world around it, and adjust to the detail level required. As we saw earlier, detail is an aspect of the scene that the engine manages for you, and you equip your procedures to work with it using block size checks. The Block Info and Detail Switch operators provide access to the context your procedure is being synthesised within.
By way of example, let's look at how to set up detail switching for a wooden crate procedure.
Here is the detail selection process for the main crate procedure:
Two switches are used in this example, to select between the three high-level detail versions: low, medium, and high. The switching points of 64 and 32 units respectively are chosen to look good at the distances these detail levels appear, for a typical crate size.
The second way detail is managed in this example is that the material is only enabled when the high detail crate gets even closer. This is performed by checking the block size manually, which generates a boolean flag that can be used to drive various changes, in this case enabling the material (0 is 'no material').
Larger objects would typically have higher thresholds, smaller ones lower. Balancing of these values is largely an interactive process, seeing where detail transitions are best arranged such that they are largely invisible to the player.
Within a scene this detail switching provides a gradual transition from high detail up close to low detail at distance. Careful optimisation of low detail procedures and clever switching allows huge draw distances and large quantities of objects to be realised.