Open Source ECS Project

Documented here, and source code at: https://github.com/tdbe/Tdbe-2023-URP-DOTS-ECS-Graphics-Physics.

Quick gameplay video.


Unity v2022.2.6f1

ECS 1.0 sandbox

  • everything is unmanaged, bursted, and (multi)threaded by default (except the legacy input)
  • player, game system, states, random/spawners, variable rates, threads, aspects, dynamic buffers & NativeArray components, collisions, dynamic bounds, warping, pickups with visuals, rocks, UFOs + AI, shooting, and health / dying.


  • 5-8 ms on the main thread, and 140-190 FPS, with 500k-1M triangles


  • Diagram of the ECS layout: https://miro.com/app/board/uXjVMWg58OI=/?share_link_id=616428552594


Project

Fairly wide scope of ECS / DOD / DOTS usage, with generic and specific setups, for a full game core loop. There are still a few gameplay details done in a hurry, marked with “// TODO:” or “// NOTE”.

The project is set up so that, via the Hierarchy, you can see what is going on and roughly in what order, using prefabs and components with mono authorings and inspector notes. Most things and rationales are also described in comments in the code.

Assets\[tdbe]\Scenes\GameScene_01 <- scene to play

Play:

  • Control the ship with the input keys chosen in the corresponding Player prefab’s input data. By default:
    • Player_1: arrow keys to move (physics like a hovercraft), right Ctrl to shoot, right Shift to teleport. Touch the pickups to equip them.
    • Player_2: WASD to move, Space to shoot, left Shift to teleport.
  • To make testing easier, you have 1000 health. You take 1 damage every physics tick from every damage component touching you. Anything that dies disappears (no animations), but there is a health GUI.

Some points of interest:

  • everything is physics based.
  • I made what I think is a cool multithreaded RandomnessComponent using NativeArray, persistent state, and local plus per-game seeds (see the sketch after this list).
  • simple but reusable Random Spawner aspect, also reused in targeted spawning of child rocks and player teleportation.
  • resizeable window / teleport bounds
  • equipped pickups are visible on you and modify your health or your shooting behaviour (e.g. shots go through objects).
  • tweakable health and time to live on everything that moves including rocks.
  • tweakable damage dealing from everything that moves.
  • randomized PCG for variable rate update groups, randomized (and/or binary) sizes as well, for enemies and rocks.
  • enemy AI follows closest player, even through portals (picks shortest path to nearest player, including portals).
  • Quickly made a dumb but cleverly versatile offset-outline Shader Graph shader, from which I built all my assets, “CSG style”.
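
For the randomness bullet above, here is a minimal hedged sketch of the kind of setup I mean (Entities 1.0 APIs; the component and job names are illustrative, not the repo’s actual ones):

using Unity.Burst;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// Per-entity persistent random state, re-seedable per game run.
public struct RandomnessComponent : IComponentData
{
    public Random Value;
}

[BurstCompile]
public partial struct RandomDriftJob : IJobEntity
{
    public uint GameSeed;

    // EntityIndexInQuery gives each entity a distinct, deterministic seed
    // derived from the shared game seed, so parallel streams never collide.
    void Execute([EntityIndexInQuery] int entityIndex, ref RandomnessComponent rng, ref LocalTransform transform)
    {
        if (rng.Value.state == 0)
            rng.Value = Random.CreateFromIndex((uint)entityIndex ^ GameSeed);

        transform.Position += rng.Value.NextFloat3Direction() * 0.1f;
    }
}

Schedule it with ScheduleParallel() and every entity advances its own persistent stream, with no locking needed.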

Philosophy:

  • performant (threaded, bursted, instanced, masked) by default, not “well this won’t hurt so much”.
  • the main system can update the states of other systems; other systems control their own state and do their one job (and there can be sub-branching).
  • a system changes the component state of something, and then another system takes over; e.g. no scripted event chains on spawn (see the sketch after this list).
  • reuse components, systems, threads, and aspects, unless doing so becomes confusing project-management wise or future-gamedev wise.
  • at the same time, don’t split up code that you don’t need accessed from anywhere else yet. E.g. you can use “{ }” to separate out blocks locally, without actually moving them out. So you don’t end up with confusing modules that someone else won’t know when to use, etc.
  • track memory limits; pay attention to what / when you’re allocating or destroying; maybe destroy everything in one system at a controlled time.
  • think about all the limits; e.g. is it bad if you wipe out all enemies on the screen at the same time?
  • use state machines; approaches are described in code (e.g. in GameSystem).
  • maybe break up large components if there is some small part you’re writing to a lot.
  • make things clear at a glance: hierarchy objects, inspector notes, code descriptions of your ideas etc.
  • In ECS anything can be represented as just an efficient database query. So the limits & wisdom are about how you save, define, equip and see this query as a state in a production friendly way.
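
And a minimal hedged sketch of the “one system flags it, another system takes over” point from the list above (hypothetical names, Entities 1.0 style):

using Unity.Burst;
using Unity.Collections;
using Unity.Entities;

// Baked onto entities disabled; whatever system handles damage only flips this on.
// It never destroys anything itself.
public struct MarkedForDeath : IComponentData, IEnableableComponent
{
    public float Delay; // optional: how long the corpse lingers before cleanup
}

[BurstCompile]
public partial struct DeathCleanupSystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        var ecb = new EntityCommandBuffer(Allocator.Temp);

        // The query only matches entities whose MarkedForDeath flag another system enabled.
        foreach (var (mark, entity) in SystemAPI.Query<RefRO<MarkedForDeath>>().WithEntityAccess())
            ecb.DestroyEntity(entity);

        ecb.Playback(state.EntityManager);
        ecb.Dispose();
    }
}

Destruction happens in exactly one place, at a controlled time, which is also the memory-limits point from the same list.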

Some annoying quirks I found:

  • Cross-scene communication techniques in ECS are: *crickets* ..just use statics or something..?
  • Oh what’s that, you just wanted to quickly access some main Camera data, from your entity subscene? 🙃
  • Yo what’s up with Variable Rate Update Groups - insta-updating on rate change? It’s an interval, not a sometimes-interval..!
  • Some things you don’t expect don’t get authored from mono. For example: isKinematic, isTrigger, physics layers.
  • Rigidbody freeze position / rotation does NOT have a solution from Unity in ECS. Yeah, there’s the external JAC shit (joint authoring components), but it’s not the same behaviour; they’re restricting and sometimes physics-unreliable AF.
  • Yes, you knew about the renderer and TransformSystemGroup when spawning, but ECS fixed-step physics simulation will also process a collider at (0,0,0) of an entity if you don’t use the right command buffer stage. And yeah, I know this is per design. (See the sketch after this list.)
  • NonUniformScale/PostTransformScale component (to be replaced with PostTransformMatrix) is not disabled but actually absent by default, and can be requested / added.
  • Getting collision hit points. I get it, but cumbersome UX…
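
For that command-buffer-stage quirk, here’s roughly the shape of the workaround I mean, as a hedged sketch with a hypothetical SpawnRequest component (Entities 1.0): the Instantiate and the transform write are recorded into the same command buffer, which plays back before physics ever sees the new entity, so its collider never spends a step at (0,0,0).

using Unity.Burst;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

public struct SpawnRequest : IComponentData
{
    public Entity Prefab;
    public float3 Position;
}

[BurstCompile]
public partial struct SpawnBeforePhysicsSystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        var ecb = SystemAPI.GetSingleton<BeginSimulationEntityCommandBufferSystem.Singleton>()
                           .CreateCommandBuffer(state.WorldUnmanaged);

        foreach (var (request, entity) in SystemAPI.Query<RefRO<SpawnRequest>>().WithEntityAccess())
        {
            var spawned = ecb.Instantiate(request.ValueRO.Prefab);
            // Recorded in the same buffer as the Instantiate, so the transform is already
            // correct the first time the fixed-step physics group touches the entity.
            ecb.SetComponent(spawned, LocalTransform.FromPosition(request.ValueRO.Position));
            ecb.DestroyEntity(entity); // consume the request
        }
    }
}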


Open Source Boilerplate Project

Still WIP

Documented here, and source code at: https://github.com/tdbe/openxr-vulkan-project-fundamentals.

More OpenXR/Vulkan/C++ Gameplay, Input, and Rendering fundamentals, on top of @janhsimon’s excellent timesaving openxr-vulkan-example.

Quickly make a performant XR game with common-sense gamedev principles. Explained in a technical but friendly, conversational, human-speak tutorial style of how it’s set up, because there’s quite a bit of evidence that Khronos (OpenXR & Vulkan) know not of what a hoo-man is.

My feature stack so far:

XR Locomotion

  • Mode: rotating and accelerated panning of the scene by grabbing with both hands, while seeing a “tunnel vision” “external scene”.
  • Uses state machines for movement and for visuals.

Base XR gameplay mechanics

  • Mechanics system based on a list of GameBehaviours processed by Main’s loop.
  • Each behaviour is Created (with its own required references), Updated (with frame & input data etc), and at the end Destroyed.
  • Sample mechanics for Locomotion, Hands, XR Input testing, World Objects.

GameData.h

  • GameObjects { Material, Model, Properties (e.g. worldMatrix, isVisible) }.
  • PlayerObjects { GameObjects, PlayerActiveStates }.
  • Materials { Shader, Descriptor-set UniformData, optional/shared Pipeline (for e.g. blend ops) }

Rendering

  • Explained, in Janhsimon’s Headset & Context classes, the easily confusing & hard-to-customize Khronos Vulkan + OpenXR implementation; especially regarding multipass vs. singlepass & multiview, and what it takes if you want to use your own renderer or a different API like WebGPU.
  • Explained and tweaked the Renderer/Pipeline; look for “// [tdbe]”
  • Per-material, per-model, per-pipeline properties. Easily create a material (e.g. transparent, double-sided); add uniforms / a new shader etc.
  • Render pipeline knows if you modified any default properties in materials and in that case creates unique mats/pipelines.

Input class and InputData structs in Inputspace.

  • “proper” XR input, supporting “all” controllers/headsets, with customizable binding paths and action sets.
  • nicely accessible through InputData and InputHaptics.
  • poses for controllers and for head.
  • actions (buttons, sticks, triggers, pressure, proximity etc).
  • user presence / headset activity state.
  • haptic feedback output.
  • exposes matrices, and XR action state data (e.g. lastChangeTime, isActive, changedSinceLastSync)

Utils class

  • Some Utils for XR, input, and extra gamedev math/utils.

Below is Janhsimon’s original readme:


Overview

This project demonstrates how you can write your own VR application using OpenXR and Vulkan. These are its main features:

  • Basic rendering of example scene to the headset and into a resizeable mirror view on your desktop monitor.
  • Focus on easy to read and understand C++ without smart pointers, inheritance, templates, etc.
  • Usage of the Vulkan multiview extension for extra performance.
  • Warning-free code base spread over a small handful of classes.
  • No OpenXR or Vulkan validation errors or warnings.
  • CMake project setup for easy building.

Integrating both OpenXR and Vulkan yourself can be a daunting and painfully time-consuming task. Both APIs are very verbose and require the correct handling of countless minute details. This is why there are two main use cases where this project comes in handy:

  1. Fork the repository and use it as a starting point to save yourself weeks of tedious integration work before you get to the juicy bits of VR development.
  2. Reference the code while writing your own implementation from scratch, to help you out if you are stuck with a problem, or simply for inspiration.

[…]

You probably know that you can assign and write to render textures and 3D textures and even custom data buffers like RW Structured Buffers from Compute Shaders. These can hold spatial information trees, distance fields, flow maps, points, meshes etc. (For more reading look up UAVs (Unordered Access Views), SRV (Shader Resource Views), and shader Resource Binding registers.)

But with shader model 5.0 and D3D11 you can now do more or less the same in regular vertex/fragment shaders. This is great because it allows you to easily bake data onto mesh textures or atlases while they’re being rendered to screen anyway.

It’s got such a cool range of possibilities that you can even do dumb simple stuff like sample the shader’s color under your mouse cursor, from inside the shader, and write it to a struct that you can simultaneously read back in C#, with no need to iterate through pixels or have access to any textures or anything like that.

So I’m’a show you how to UAV in Unity shaders.

Number 1: Constructing Render Targets

Your shader will have a RWTexture2D (or even 3D if you wanna get fancy and bake some point clouds):

CGINCLUDE
#ifdef _ALSO_USE_RENDER_TEXTURE
	#pragma target 5.0
	uniform RWTexture2D<half4> _MainTexInternal : register(u2);
	
	sampler2D sp_MainTexInternal_Sampler2D;
	float4 sp_MainTexInternal_Sampler2D_ST;
#endif
//... other stuff
ENDCG

The register(u2) represents which internal GPU register the data structure gets bound to. You need to specify the same index in C#, and keep in mind this binding is global on the GPU.

Now you can use this _MainTexInternal as if it were a 2D array in your shader, which means it takes ints as coords, like _MainTexInternal[int2(10,12)], and so it won’t be filtered / smooth. However, from C# you can assign this same RenderTexture as a regular Sampler2D texture in the material/shader, with material.SetTexture as you would with any other texture, and then you can read from it with regular UVs.

So now let’s create that render texture in C# and assign it to the material. Do this in a ConstructRenderTargets() and call it from something like Start().

if(m_MaterialData.kw_ALSO_USE_RENDER_TEXTURE)
{
	m_paintAccumulationRT = new RenderTexture(rtWH_x, rtWH_y, 0, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);// Must be ARGB32 but will get automagically treated as float or float4 or int or half, from your shader code declaration.
	m_paintAccumulationRT.name = "_MainTexInternal";
	m_paintAccumulationRT.enableRandomWrite = true;
	m_paintAccumulationRT.Create();
	
	m_MaterialData.material.SetTexture(m_MaterialData.sp_MainTexInternal, m_paintAccumulationRT);
	m_MaterialData.material.SetTexture(m_MaterialData.sp_MainTexInternal_Sampler2D, m_paintAccumulationRT);
	Graphics.ClearRandomWriteTargets();
	Graphics.SetRandomWriteTarget(2, m_paintAccumulationRT);//with `, true);` it doesn't take RTs
}

On that last line above, note the number 2. That’s the register index from the shader, so register(u2) corresponds to 2 here.

Number 2: Constructing Data Buffers

Let’s just create an array of some arbitrary MyStruct, that will exist in both the shader and in C#.

CGINCLUDE
#ifdef _ALSO_USE_RW_STRUCTURED_BUFFER
	#pragma target 5.0 // no need to re-declare this directive if you already did it 
	
	struct MyCustomData
	{
		half3 something;
		half3 somethingElse;
	};
	uniform RWStructuredBuffer<MyCustomData> _MyCustomBuffer : register(u1);
#endif
//... other stuff
ENDCG

So RWStructuredBuffer<MyCustomData> is our buffer. It has some limits on what can go inside, and the packing isn’t 100% the C standard. But it’s still really useful and can hold tons of entries or just a few (as much as a texture, or as much as memory allows).

Now let’s construct the Compute Buffer in C#. Do this in a ConstructDataBuffers() and call it from something like Start().

//Needs to be defined the same as in the shader.
public struct MyCustomData
{
	Vector3 something;
	Vector3 somethingElse;
}
MyCustomData[] m_MyCustomDataArr;

void ConstructDataBuffers()
{
	if(m_MaterialData.kw_ALSO_USE_RW_STRUCTURED_BUFFER)
	{
		int memalloc = 24;
		m_MyCustomComputeBuffer = new ComputeBuffer(bufferLength, memalloc);//stride == sizeof(MyCustomDataStruct)
		Graphics.SetRandomWriteTarget(1, m_MyCustomComputeBuffer, true);
		m_MaterialData.material.SetBuffer(m_MaterialData.sp_MyCustomBuffer, m_MyCustomComputeBuffer);
		
		m_MyCustomDataArr = new MyCustomData[bufferLength];
	}
}

If y’all know how to iterate through memory, you know what that memalloc value is for: it’s the stride, the size of one struct element in bytes. On the C# side the struct is two Vector3s, i.e. 2 x 12 = 24 bytes. On the GPU side I declared half3s, but in a structured buffer half is just a minimum-precision alias and is generally stored as 32-bit floats, so the two layouts line up. If you’re worried about actual half conversions, there’s a Mathf.FloatToHalf() function.
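If you’d rather not hard-code the 24, a small hedged alternative is to derive the stride from the C# struct itself:

// Marshal.SizeOf gives the byte size of the blittable struct (2 x Vector3 = 24 here),
// so the stride stays correct if you later add fields.
int memalloc = System.Runtime.InteropServices.Marshal.SizeOf(typeof(MyCustomData));
m_MyCustomComputeBuffer = new ComputeBuffer(bufferLength, memalloc);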

Now to do stuff with the data of this struct from C#. Do this in Update() if you want/need, it’s fine.

void ProcessData(){
	if(m_MaterialData.kw_ALSO_USE_RW_STRUCTURED_BUFFER)
	{
		//here's how to read back the data from the shader
	m_MyCustomComputeBuffer.GetData(m_MyCustomDataArr);//obviously this way you will lose all the values you had in the array beforehand
		
		m_MyCustomDataArr[10].something = new Vector3(1,0,1);
		
		//now set it back to the GPU
		m_MyCustomComputeBuffer.SetData(m_MyCustomDataArr);
	}

}

Now to do stuff with this data buffer on the shader side:

// somewhere in vert or in frag or in a geometry function or wherever:
_MyCustomBuffer[my1DCoordinate].somethingElse = half3(0,1,0);

Done! Now go do cool stuff. And show me.


One of the things I did with this technique was a VR mesh painter where I have a custom SDF (signed distance field) volume to represent a 3D spray volume function intersecting with the world position on the fragment of the mesh I’m drawing on. You can also atlas your UVs so that you can have your RT as a global atlas and paint multiple objects to the same RT without overlaps.

You also need to realize that the objects are painted from the PoV of the camera, so it might not hit fragments/pixels that are at grazing angles if you’re using, say, a VR controller and not aiming down the camera’s view direction. This gives sometimes grainy, incomplete results depending on the mesh and angle. But you can fix that by doing the rendering with a different camera mounted to your hand, so you can render the painting passes of your objects only, with ColorMask 0, invisibly, to that camera (or just use compute shaders instead).

You can also do this whole thing with Command Buffers and DrawMesh instead of Graphics.SetRandomWriteTarget. I’ve done this a couple of times using compute shaders, but with the 5.0 vert/frag shaders I had issues tricking Unity’s API into working, last I tried.
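For reference, here’s a minimal hedged sketch of that CommandBuffer route, reusing the buffer / RT names from above; the camera event, pass index, and paintCamera/targetMesh parameters are assumptions, not a recipe I’m vouching for:

// Assumes this lives in the same MonoBehaviour as the snippets above,
// with `using UnityEngine.Rendering;` for CommandBuffer and CameraEvent.
void SetupPaintCommandBuffer(Camera paintCamera, Mesh targetMesh)
{
	var cmd = new CommandBuffer { name = "UAV paint pass" };
	cmd.SetRandomWriteTarget(2, m_paintAccumulationRT);   // matches register(u2)
	cmd.SetRandomWriteTarget(1, m_MyCustomComputeBuffer); // matches register(u1)
	cmd.DrawMesh(targetMesh, transform.localToWorldMatrix, m_MaterialData.material, 0, 0);
	cmd.ClearRandomWriteTargets();
	paintCamera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
}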

So perhaps another blog post should be about how to set up command buffers and compute shaders, and how to do something cool like turn a mesh into a pointcloud and do cool ungodly things to it :)

It’s called Voice of God, and the player shapes the ground. We won 3/5 awards at the ITU Copenhagen site!


Among the prizes we got were a ham and NGJ tickets!

The theme was WAVES this year, and we used your voice to make waves in the ground. Roughly, your pitch (the frequency) is mapped to the screen from left to right (low pitch to high), and your loudness affects the amplitude.

You’re effectively controlling the rolling character via a winamp visualization. The cleaner your sound and the smoother your vocal range, the more effective you are.

We used Unity’s 2D tools, the Animator, a rigidbody pool, and sound base tones and overtones merged into a world position function.
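Not the actual jam code, but a hedged sketch of the general pitch-to-position, loudness-to-amplitude mapping, assuming Unity’s AudioSource spectrum API on the microphone clip:

using UnityEngine;

public class VoiceWave : MonoBehaviour
{
	public AudioSource micSource;      // an AudioSource playing the Microphone clip
	public float amplitudeScale = 50f; // loudness -> wave height

	readonly float[] spectrum = new float[256];

	void Update()
	{
		micSource.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
	}

	// x01: 0 = left edge of the screen (low pitch), 1 = right edge (high pitch).
	public float GroundHeight(float x01)
	{
		int bin = Mathf.Clamp(Mathf.RoundToInt(x01 * (spectrum.Length - 1)), 0, spectrum.Length - 1);
		return spectrum[bin] * amplitudeScale;
	}
}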

Voice of God itch.io page here. Github repo with build here or here. And the original GGJ submission’s page is here.

VoG Zeus yelling
Gameplay gif

It was a lot of fun but punishing to our vocal cords after the 2 days of testing and development :)

A good chunk of the level. (4 screens wide)

I’ll briefly explain how shader programs are created and invoked under the hood, with shader variant compilation.

(with specifics for unity3d and Standard, Uber, Custom shaders)

Here are a few high-level scenarios you’ll run into:

  • You have a material with like 5 textures, what happens if you remove some of them at runtime? Does it become more efficient?

  • You want to switch a shader from Opaque to Transparent at runtime.

  • You want to create a material with any specific shader variant, programmatically.

And here’s how things actually work:

If you examine StandardShaderGUI.cs (get it here), you see that a whole bunch of stuff happens when you change a material setting. Things to look for: SetKeyword, MaterialChanged, SetupMaterialWithBlendMode, Shader.EnableKeyword(), Shader.DisableKeyword().

Shaders are text files that are typically full of compile time directives like this:

CGPROGRAM
#pragma compile_type __ _NORMALMAP //<- you define the keyword here
// the "__" is a wildcard for "OFF by default"
// can also have #pragma compile_type _DO_STUFF_OFF _DO_STUFF_ON _DO_OTHER_STUFF
// Replace "compile_type" with either "shader_feature" or "multi_compile". More on that further down.

#if defined(_NORMALMAP)
	foo = tex2Dlod(_NormalMap, uvw);
	//or
	bar = tex2D(_NormalMap, uv);
	//etc.
#else
	//something else, or no else at all
#endif

//...
ENDCG

When you build the game, the engine goes through all your Materials, figures out what combination of settings they use, and parses and compiles shader programs for each combination necessary.

Note: Normally the HLSL or CG you write in an engine is interpreted and compiled through a bunch of internal engine compiler layers and frameworks before it becomes “real” HLSL or “real” GLSL, Vulkan etc., according to the platform the build is for (pc/console etc). This is why for example Unity shaders have “Surface Shaders” (which is Latin for “fake-shader”). The CG in CGPROGRAM is real but is used as a middle language.

This means that if in your scene you have a material with the Standard Shader with a _NormalMap texture, and one without, 2 shader variants get created and are available at runtime. But this is because the Editor GUI C# script has noticed you left the texture slot empty. If you write your own shader without a custom editor script, it will always include the calculations for the texture (and treat it as black if left empty).

When you’re working with the Standard or Uber shader and remove a texture at runtime from C#, it’s the same: the shader just samples 0 for that texture. So you also need to tell the Standard / Uber shader itself to actually swap to another variant on the GPU, e.g. myMaterial.DisableKeyword("_NORMALMAP"); when you clear the normal map (and EnableKeyword when you assign one), or globally for every material with Shader.DisableKeyword("_NORMALMAP"); / Shader.EnableKeyword("_NORMALMAP");.

Same if you made a copy with newMat.CopyPropertiesFromMaterial(oldMat);, or just created a newMat = new Material(Shader.Find("Standard")); programmatically. By default all keywords are disabled. Or rather, if you see multiple keywords on the #pragma, the first one is the one used by default, and usually that’s a wildcard __ or a _NORMALMAP_OFF, before the _NORMALMAP or _NORMALMAP_ON. If you’re dealing with an _OFF / _ON setup, then in addition to enabling one, you should also DisableKeyword() the _OFF one.
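So in practice the runtime swap looks something like this (the _BumpMap property and _NORMALMAP keyword are the Standard shader’s own names; the material/texture variables are hypothetical):

// Give the material a normal map AND switch it to the _NORMALMAP variant:
myMaterial.SetTexture("_BumpMap", myNormalMapTexture);
myMaterial.EnableKeyword("_NORMALMAP");

// Removing it again; otherwise the normal-map variant keeps sampling an empty slot:
myMaterial.SetTexture("_BumpMap", null);
myMaterial.DisableKeyword("_NORMALMAP");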

So how do you swap shader types at runtime? (transparent to opaque, tessellated to not, etc)

Solution 1:

So one not-so-great strat would be to include in your scene the material variations you’ll need in the build. A better but still not super maintainable one would be including your variant materials in the Resources folder, which always gets included in the build.
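For the Resources route, a minimal sketch (the asset path is hypothetical):

// Anything under a Resources folder is always included in the build, variant and all,
// e.g. Assets/Resources/Materials/MyMat_Transparent.mat
Material transparentVariant = Resources.Load<Material>("Materials/MyMat_Transparent");
GetComponent<Renderer>().sharedMaterial = transparentVariant;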

Changing Shader Modes:

An artist once pointed out to me that she can animate the shader type of a material, from Opaque to Transparent. That’s funny, Unity, because you can’t do that (that easily)! Again, if you look at StandardShaderGUI.cs, there are a lot of keywords and flags being set when switching between modes:

switch (blendMode)
{
	case BlendMode.Opaque:
		material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);
		material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.Zero);
		material.SetInt("_ZWrite", 1);
		material.DisableKeyword("_ALPHATEST_ON");
		material.DisableKeyword("_ALPHABLEND_ON");
		material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
		material.renderQueue = -1;
		break;
	case BlendMode.Cutout:
		material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);
		material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.Zero);
		material.SetInt("_ZWrite", 1);
		material.EnableKeyword("_ALPHATEST_ON");
		material.DisableKeyword("_ALPHABLEND_ON");
		material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
		material.renderQueue = 2450;
		break;
	case BlendMode.Fade:
		material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
		material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
		material.SetInt("_ZWrite", 0);
		material.DisableKeyword("_ALPHATEST_ON");
		material.EnableKeyword("_ALPHABLEND_ON");
		material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
		material.renderQueue = 3000;
		break;
	case BlendMode.Transparent:
		material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);
		material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
		material.SetInt("_ZWrite", 0);
		material.DisableKeyword("_ALPHATEST_ON");
		material.DisableKeyword("_ALPHABLEND_ON");
		material.EnableKeyword("_ALPHAPREMULTIPLY_ON");
		material.renderQueue = 3000;
		break;
}

So you’ll have to make a script with the above code in it, and provided you have the correct shader variants included at runtime, you’ll be able to switch a material from Opaque to Transparent or Cutout or Fade or whatever.

This code also works for the Uber shader BTW.
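If you wrap that switch in your own helper, note that the blendMode being switched on is StandardShaderGUI’s own enum, not UnityEngine.Rendering.BlendMode (which is only used for the _SrcBlend/_DstBlend values). A minimal hedged wrapper for just the Fade case, plus usage, could look like:

using UnityEngine;

public static class StandardShaderModes
{
	// Mirrors the Fade branch of the switch above.
	public static void SetFade(Material material)
	{
		material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
		material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
		material.SetInt("_ZWrite", 0);
		material.DisableKeyword("_ALPHATEST_ON");
		material.EnableKeyword("_ALPHABLEND_ON");
		material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
		material.renderQueue = 3000;
	}
}

// usage, e.g. when fading an object out:
// StandardShaderModes.SetFade(GetComponent<Renderer>().material);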


Jackie Chan Adventures - Uncle: 'One more thing!'
One more thing!

There are actually 2 ways to compile different shader versions. When I wrote compile_type in #pragma compile_type __ _NORMALMAP, I lied. I meant:

  • #pragma shader_feature __ _NORMALMAP or

  • #pragma multi_compile __ _NORMALMAP.

(There’s actually more stuff (like multi_compile_fwdadd or UNITY_HARDWARE_TIER[X]) (read here)).

1). “Shader Feature” means that when you build the game, the engine will only include the contents of a #if _SOMETHING ... #endif block if a material with that keyword enabled is used in the scene(s). The Unity Standard Shader source uses shader_feature!

2). “Multi Compile” means that every possible combination of the keywords you defined in the #pragma will be generated as shader variant files and actually included in the build. This can explode into a big variant count & build time! (Don’t use it unless it’s on purpose, or for a small custom shader you’ve made.)

Solution 2:

The proper-ish way to include other variants for shader_feature shaders is to use Graphics Settings, where, in the Preloaded Shaders array, you can drag in a shader variant collection file (create one by right-clicking in the Project panel) containing the shader variant(s) you want.

I said “ish” because it’s not technically the proper way: it force-loads it in all scenes, but Unity isn’t clear on what else to do [details here] (they probably don’t have a proper solution implemented, it’s Unity after all ;) - the engine where one indie programmer can do a better job at an entire rendering pipeline in less time…).

Conclusion:

Use shader_feature, use the Resources folder or a Shader Preload Collection, and only use multi_compile sparingly.

Protip: the [ShaderVariantCollection] docs: “records which shader variants are actually used in each shader.”
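
And a hedged sketch of warming one of those collections up at startup (the field is assigned in the inspector):

using UnityEngine;

public class ShaderWarmup : MonoBehaviour
{
	// The .shadervariants asset created via right-click > Shader Variant Collection.
	public ShaderVariantCollection variants;

	void Start()
	{
		if (variants != null && !variants.isWarmedUp)
			variants.WarmUp(); // compiles/loads every variant recorded in the collection
	}
}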