This page mainly describes how the Murl Engine processes a given scene graph with regard to output generation, i.e. sound playback and rendering of visual objects. If you have not worked with scene graphs yet, we recommend reading the Cube tutorial, which explains how to create a simple scene graph from an XML file in its section Version 2: Creating a Package.
Basic Operation
To put it simply: during its output generation stage, the Murl Engine processes the current scene graph in search of any nodes that might produce visual or audible output. This process is carried out repeatedly, once for every display frame. Ideally, it happens at the refresh rate of the physical screen, typically around 60 times per second. In practice, the frame rate may fall below the screen refresh rate, depending on the actual scene complexity and the resulting workload on the CPU and graphics processor.
Scene graph processing, or traversal, is basically performed in a top-down manner. Let's take a look at a very simple (non-functioning) XML scene graph definition file:
    <?xml version="1.0" ?>
    <Graph>
      <CubeGeometry/>
      <Node>
        <PlaneGeometry/>
      </Node>
    </Graph>
It will always be the case that the CubeGeometry is processed before the Node, which is in turn processed before the PlaneGeometry. (Note that the XML root element <Graph> does not represent a graph node itself; it just indicates that this particular XML file represents a scene graph description and not some other document.)
The following sections give an overview of different output-related node classes that can be used in a scene graph. Click on one of these links to quickly jump to the respective section:
- Geometry and Sound
- Cameras and Listeners
- Cullers
- Programs, Materials, Parameters and Textures
- Lights
- The Glue
Geometry and Sound
Visual output is generally represented by nodes implementing the Graph::IDrawable interface and usually consists of more or less complex three-dimensional geometry data. For example, a PlaneGeometry node contains a very simple geometry description of a flat, rectangular plane made up of only four corner vertices, whereas a ResourceMeshGeometry may represent a geometry object with hundreds or thousands of individual vertices in three-dimensional space, created with a 3D modelling tool such as Maya or Blender.
Audio output is represented by nodes implementing the Graph::IPlayable interface. Currently, the AudioSequence node is the only one using that interface; it takes a number of individual sound objects and plays them back in a seamless sequence. Playable nodes are also positioned in three-dimensional space and are hence able to play back in a position-dependent manner.
Note the word "three-dimensional": by design, objects in the Murl Engine's scene graph are always represented in a virtual 3D coordinate system. If you want to create a 2D-only application, this simply means you can ignore one of the three coordinate axes and position your objects using the remaining two. You can find more on this topic on the 2D-only Rendering page.
The following is a list of different geometry nodes for various purposes:
- PlaneGeometry: This node represents a simple rectangular plane object that can be used to quickly render e.g. a 2D image on screen. Planes can be scaled to match the desired output size, and it is possible to show only a sub-region of a given image by adapting the corner points' texture coordinates. See Tutorial #07: Images.
- PlaneSequenceGeometry: Similar to a plane, but it takes a reference to an Atlas Resource object for defining the actual image shown. Quickly selecting one of multiple sub-images from the atlas can be done by setting a single index. Additionally, the individual images in the atlas may be played back as an animation sequence using a Timeline node and an Animation Resource object. See Tutorial #08: Animated Images and the Atlas Generator reference page.
- CubeGeometry: This node represents a simple 3D unit cube that can be arbitrarily scaled to match the desired output size. See Tutorial #01: Cube.
- ResourceMeshGeometry: A mesh geometry can be used to render arbitrary 3D models on screen, with the actual model data contained in a given Mesh Resource object. Mesh Resources are usually created using the Scene Converter tool; see the Scene Converter reference page.
- ResourceBspGeometry: Similar to a mesh geometry, but rendering these nodes makes use of additional BSP visibility data that must be contained in the given Mesh Resource. Similar to the concept found in first-person games, this node can be used to efficiently render the inside of e.g. a large building or dungeon. See the Scene Converter reference page.
- GenericGeometry: This node provides low-level access to a geometry object's underlying data structures, such as vertex buffers and index buffers, in order to programmatically create and modify 3D models.
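As a minimal sketch, a scene graph might combine two of these nodes like this (the scaleFactorX/scaleFactorY and meshResourceId attributes follow the tutorials' usage but are meant as an illustration only, and the resource name is made up; see the respective node references for the exact attribute sets):

    <Graph>
      <!-- A flat plane, scaled to 256x256 units (attribute names assumed from the tutorials) -->
      <PlaneGeometry id="plane" scaleFactorX="256" scaleFactorY="256"/>
      <!-- A 3D model loaded from a hypothetical Mesh Resource named "package_main:monkey" -->
      <ResourceMeshGeometry id="mesh" meshResourceId="package_main:monkey"/>
    </Graph>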
Cameras and Listeners
Since the virtual world is entirely 3D and the output screen is only two-dimensional, we must provide some mapping or transformation from our world coordinate system to the screen coordinate system. For drawable objects this is achieved through the use of cameras, e.g. a Camera node.
In analogy to drawables and cameras, playable objects are transformed through the use of an audio sink, or listener, such as a Listener node. Both cameras and listeners can also be positioned and oriented within the virtual 3D world, thus making it possible to move through the world.
The definition of the actual position and orientation of cameras and listeners is separated from the respective nodes by means of separate CameraTransform and ListenerTransform nodes. These nodes must link to exactly one camera or listener via their cameraId and listenerId attributes, respectively. This makes it possible to define a camera somewhere in the scene graph and specify its position elsewhere, e.g. by attaching its transform node as a child of the player's main transformation in a first-person game, so that the camera always moves together with the player. A minimal sketch of this linkage is shown below.
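The following sketch uses illustrative values; the Camera attributes shown here mirror those used in Tutorial #01: Cube, but should be treated as assumptions in this context:

    <Camera id="camera" viewId="view" fieldOfViewX="400" fieldOfViewY="300" nearPlane="400" farPlane="2500"/>
    <Node>
      <!-- Somewhere else in the graph: link to the camera via cameraId
           and move it 800 units along the Z axis -->
      <CameraTransform cameraId="camera" posZ="800"/>
    </Node>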
Cullers
Cullers are a useful feature when working with a scene graph that represents a large virtual world. If only a small part of the world is visible at any given time, it is usually a good idea to tell the engine to remove (cull) any geometry that is not visible when rendering a particular frame, as this will either speed up rendering or make it possible to add more detail to the scene without degrading performance.
Currently, the following types of cullers are available:
- Culler nodes provide a simple means of culling away geometry that lies outside an active perspective camera's view frustum (see Tutorial #01: Cube for a short introduction to view frustums). This process is very efficient, so a 3D application should make use of frustum cullers as often as possible.
- ResourceBspCuller nodes make use of the BSP visibility information stored in a given Mesh Resource object to determine whether a geometry object is potentially visible from the active camera's view point. This type of culler may also remove geometry that lies behind a wall, e.g. inside a building or dungeon. See the Scene Converter reference page.
- Note
- Cullers may also be chained together using the parentCullerId attribute. This way it is possible to create a combined culler that e.g. first quickly rejects geometry by applying a parent Culler, and afterwards performs more elaborate culling on the remaining visible geometry using a ResourceBspCuller (which takes more time to process). See the sketch below.
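Such a chain might look like this (a sketch; the meshResourceId attribute and its value are assumptions for illustration):

    <!-- Fast frustum culling first -->
    <Culler id="frustumCuller"/>
    <!-- More expensive BSP culling, applied only to geometry that survived the frustum test -->
    <ResourceBspCuller id="bspCuller" parentCullerId="frustumCuller" meshResourceId="package_main:level"/>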
Programs, Materials, Parameters and Textures
By design, one fundamental property of geometry objects in the Murl Engine is that they do not carry any information about their actual appearance on screen; they only contain pure geometric data. The reason is that such a geometry object (e.g. a simple plane) may be reused any number of times with different materials, colors or textures. In other words, the "what is rendered" (the geometry) is separated from the "how is it rendered" in the scene graph.
The "how is it rendered" is represented by four other entities discussed in the following sub-sections:
Program nodes
Program nodes implement the IProgram interface and represent the actual "recipe" for how the individual pixels of geometry rendered on screen should be colored. A program basically defines e.g. whether a texture image should be used, whether lighting should be applied, or whether additional coloring is desired. Currently, there are two such node classes:
- FixedProgram nodes provide a convenient way of defining a program with (very) simple properties.
- ShaderProgram nodes can be used to implement advanced rendering techniques, allowing you to create custom GPU shader programs by linking to individual IShader nodes. See the Shader-based Rendering page.
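For example, a fixed-function program with coloring enabled can be defined in a single line, using the coloringEnabled attribute mentioned below. The ShaderProgram line is a sketch that assumes vertexShaderId/fragmentShaderId attributes linking to two IShader nodes; see the Shader-based Rendering page for the actual setup:

    <FixedProgram id="prog_colored" coloringEnabled="yes"/>
    <ShaderProgram id="prog_custom" vertexShaderId="myVertexShader" fragmentShaderId="myFragmentShader"/>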
Material nodes
Material nodes implement the IMaterial interface and must always link directly to a given IProgram node. Materials define additional visual properties for rendering that cannot be covered by a program, such as blending modes (transparency) and equations, and they control how an output surface's individual buffers (color buffer, Z-buffer and stencil buffer) are accessed. More than one material may link to the same program, so it is possible to e.g. use the same program with and without transparency.
There exist two node classes that implement the IMaterial interface:
- Material nodes provide the most commonly used way of defining a material, as described in various tutorials.
- MultiMaterial nodes provide an advanced mechanism for e.g. multi-pass rendering. See the Multi-Pass Rendering page.
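As a sketch of the material-to-program link, here are two materials sharing one program, once opaque and once with alpha blending (the programId and blendMode attributes follow the tutorials' usage, but are assumptions here):

    <FixedProgram id="prog_plain" coloringEnabled="yes"/>
    <!-- Both materials link to the same program via programId -->
    <Material id="mat_opaque" programId="prog_plain"/>
    <Material id="mat_transparent" programId="prog_plain" blendMode="ALPHA"/>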
- Note
- In the Murl Engine, in contrast to some other software packages, materials do not specify any colors for rendering. Color properties are intentionally separated from an actual material definition and are made available through the next entity:
Parameters nodes
Parameters nodes implement the IParameters interface and provide additional properties that may be desired for rendering, with colors being among them, as mentioned above. There exist three classes:
- FixedParameters nodes can be used for simple output coloring, e.g. through their diffuseColor attribute in conjunction with a FixedProgram node having its coloringEnabled attribute set to "yes".
- GenericParameters nodes can be used for advanced rendering techniques together with a ShaderProgram. Again, see the Shader-based Rendering page.
- MultiParameters nodes provide an advanced mechanism for e.g. multi-pass rendering. See the Multi-Pass Rendering page.
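Putting the pieces together, a red coloring could be defined like this (a sketch; the "255i, 0i, 0i, 255i" integer RGBA notation and the programId attribute follow the tutorials, but should be treated as assumptions here):

    <FixedProgram id="prog_colored" coloringEnabled="yes"/>
    <Material id="mat_colored" programId="prog_colored"/>
    <!-- Diffuse color: opaque red (RGBA) -->
    <FixedParameters id="par_red" diffuseColor="255i, 0i, 0i, 255i"/>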
Texture nodes
Texture nodes implement the ITexture interface. A texture basically encapsulates some form of pixel data that can be used to map images onto geometry objects, of course using an IProgram that allows doing so. There exist a number of different nodes for this purpose:
- FlatTexture nodes simply define a flat 2D image.
- CubemapTexture nodes consist of six individual 2D images representing all sides of a cube (a so-called cube map).
- MultiTexture nodes provide an advanced mechanism for e.g. multi-pass rendering. See the Multi-Pass Rendering page.
- Various other specialized nodes; see the inheritance diagram for the Texture base class.
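A simple flat texture referencing an Image Resource might be defined as follows (the imageResourceId and useMipMaps attributes are assumed from the tutorials' usage, and the resource name is made up):

    <FlatTexture id="tex_image" imageResourceId="package_main:my_image" useMipMaps="yes"/>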
Lights
Scene lighting can be performed by using one of the available node types implementing the ILight interface. In order to use a light source with an active material, the material's program must have lighting enabled, e.g. by setting the lightingEnabled="yes" attribute for a FixedProgram or by manually implementing a lighting equation in a ShaderProgram.
The following light nodes are available:
- Light nodes define a single light source in three-dimensional space.
- MultiLight nodes provide an advanced mechanism for e.g. multi-pass rendering. See the Multi-Pass Rendering page.
As with cameras and listeners, lights have their position and orientation separated from the actual node: LightTransform nodes are used to specify the light's actual transformation. This makes it possible to define a light somewhere in the scene graph and to e.g. later set its position as a child of a player's main transformation in order to create a player head-light.
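By analogy with the cameraId and listenerId attributes above, the following sketch assumes a lightId attribute on the LightTransform node (an assumption; check the node reference):

    <Light id="light"/>
    <Node>
      <!-- Somewhere else in the graph: place the light 500 units above the origin -->
      <LightTransform lightId="light" posY="500"/>
    </Node>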
The Glue
So far, so good. By now, you should have a basic overview of the available output-related node interfaces and classes. But there is still one large open question: How do we link this all up?
The answer is given on the next page (Output States, Slots and Units), which deals with the actual process of how nodes in the scene graph interact with respect to output generation. The magic keyword is "state".