.. _doc_advanced_postprocessing:

Advanced post-processing
========================

Introduction
------------

This tutorial describes an advanced method for post-processing in Godot.
In particular, it will explain how to write a post-processing shader that
uses the depth buffer. You should already be familiar with post-processing
in general and, in particular, with the methods outlined in the
:ref:`custom post-processing tutorial <doc_custom_postprocessing>`.

In the previous post-processing tutorial, we rendered the scene to a
:ref:`Viewport <class_Viewport>` and then rendered the Viewport in a
:ref:`ViewportContainer <class_ViewportContainer>` to the main scene. One
limitation of this method is that we could not access the depth buffer,
because the depth buffer is only available in spatial shaders and Viewports
do not maintain depth information.

Full screen quad
----------------

In the :ref:`custom post-processing tutorial <doc_custom_postprocessing>`, we
covered how to use a Viewport to make custom post-processing effects. There
are two main drawbacks of using a Viewport:

1. The depth buffer cannot be accessed
2. The effect of the post-processing shader is not visible in the editor

To get around the limitation on using the depth buffer, use a
:ref:`MeshInstance <class_MeshInstance>` with a
:ref:`QuadMesh <class_QuadMesh>` primitive. This allows us to use a spatial
shader and to access the depth texture of the scene. Next, use a vertex
shader to make the quad cover the screen at all times so that the
post-processing effect will be applied at all times, including in the editor.

First, create a new MeshInstance and set its mesh to a QuadMesh. This creates
a quad centered at position ``(0, 0, 0)`` with a width and height of ``1``.
Set the width and height to ``2``. Right now, the quad occupies a position in
world space at the origin; however, we want it to move with the camera so
that it always covers the entire screen. To do this, we will bypass the
coordinate transforms that translate the vertex positions through the
different coordinate spaces and treat the vertices as if they were already in
clip space.

The vertex shader expects coordinates to be output in clip space, which are
coordinates ranging from ``-1`` at the left and bottom of the screen to ``1``
at the top and right of the screen. This is why the QuadMesh needs to have a
height and width of ``2``. Godot handles the transform from model to view
space to clip space behind the scenes, so we need to nullify the effects of
Godot's transformations.

First, set the ``render_mode`` to ``skip_vertex_transform``, which removes
the transformation from model space to view space. Godot still applies the
transformation from view space to clip space with the ``PROJECTION_MATRIX``,
even when ``skip_vertex_transform`` is set. Nullify the projection matrix by
setting it to the `identity matrix <https://en.wikipedia.org/wiki/Identity_matrix>`_.
In Godot, this is done by passing ``1.0`` to the ``mat4`` constructor.

.. code-block:: glsl

    shader_type spatial;
    render_mode skip_vertex_transform, unshaded;

    void vertex() {
        PROJECTION_MATRIX = mat4(1.0);
    }

Even with this vertex shader, the quad keeps disappearing. This is due to
frustum culling, which is done on the CPU. Frustum culling uses the camera
matrix and the AABBs of Meshes to determine if a Mesh will be visible
*before* passing it to the GPU. The CPU has no knowledge of what we are doing
with the vertices, so it assumes the coordinates specified refer to world
positions, not clip space positions, which results in Godot culling the quad
when we turn away from the center of the scene. In order to keep the quad
from being culled, there are a few options:

1. Add the QuadMesh as a child of the camera, so the camera is always pointed at it
2. Make the AABB as large as possible so the quad can always be seen

The second option ensures that the quad is visible in the editor, while the
first option guarantees that it will still be visible even if the camera
moves outside the AABB. You can also use both options together.

Depth texture
-------------

To read from the depth texture, perform a texture lookup using ``texture()``
and the uniform variable ``DEPTH_TEXTURE``.

.. code-block:: glsl

    float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;

.. note:: Similar to accessing the screen texture, accessing the depth
          texture is only possible when reading from the current viewport.
          The depth texture cannot be accessed from another viewport you have
          rendered to.

The values returned by ``DEPTH_TEXTURE`` are between ``0`` and ``1`` and are
nonlinear. When displaying depth directly from the ``DEPTH_TEXTURE``,
everything will look almost white unless it is very close to the camera. This
is because the depth buffer stores objects closer to the camera using more
bits than those further away, so most of the detail in the depth buffer is
found close to the camera. In order to make the depth value align with world
or model coordinates, we need to linearise the value. When we apply the
projection matrix to the vertex position, the z value is made nonlinear, so
to linearise it, we multiply it by the inverse of the projection matrix,
which in Godot is accessible with the variable ``INV_PROJECTION_MATRIX``.
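
To get a feel for how compressed the far range is, here is a small
plain-Python sketch (not part of any shader). It assumes an OpenGL-style
perspective projection with made-up example near and far planes of ``0.05``
and ``100``; Godot's actual matrices may differ in detail, but the
nonlinearity is the same in character.

.. code-block:: python

    # Illustrative only: an OpenGL-style perspective projection with
    # example near/far planes.
    NEAR, FAR = 0.05, 100.0

    def stored_depth(distance):
        # Project the view-space distance, perspective-divide,
        # then remap NDC z from [-1, 1] to the [0, 1] depth-buffer range.
        ndc_z = ((FAR + NEAR) - 2.0 * FAR * NEAR / distance) / (FAR - NEAR)
        return ndc_z * 0.5 + 0.5

    for d in (0.1, 1.0, 10.0, 50.0):
        print("distance %5.1f -> depth %.4f" % (d, stored_depth(d)))

With these planes, an object only 1 unit away already stores a depth of
roughly ``0.95``, which is why the raw texture reads as near-white almost
everywhere.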
First, take the screen space coordinates and transform them into normalized
device coordinates (NDC). NDC run from ``-1`` to ``1``, similar to clip space
coordinates. Reconstruct the NDC using ``SCREEN_UV`` for the ``x`` and ``y``
axes, and the depth value for ``z``.

.. code-block:: glsl

    void fragment() {
        float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;
        vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
    }

Convert NDC to view space by multiplying the NDC by
``INV_PROJECTION_MATRIX``. Recall that view space gives positions relative to
the camera, so the ``z`` value will give us the distance to the point.

.. code-block:: glsl

    void fragment() {
        ...
        vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
        view.xyz /= view.w;
        float linear_depth = -view.z;
    }

Because the camera is facing the negative ``z`` direction, the position will
have a negative ``z`` value. In order to get a usable depth value, we have to
negate ``view.z``.
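
The round trip can be checked outside the shader. This plain-Python sketch
assumes the same hypothetical OpenGL-style projection as before (example
near/far planes), with the inverse projection written in closed form; it
stores a depth value and then linearises it exactly as the fragment shader
above does.

.. code-block:: python

    NEAR, FAR = 0.05, 100.0
    # Third row of an OpenGL-style projection: z_clip = A * z_view + B * w.
    A = -(FAR + NEAR) / (FAR - NEAR)
    B = -2.0 * FAR * NEAR / (FAR - NEAR)

    def stored_depth(view_z):
        # Forward path: project, perspective-divide by w = -view_z,
        # remap to [0, 1].
        ndc_z = (A * view_z + B) / -view_z
        return ndc_z * 0.5 + 0.5

    def linear_depth(depth):
        # Mirrors the fragment shader: depth -> NDC, multiply by the inverse
        # projection (z component -1, w component (ndc_z + A) / B),
        # divide by w, then negate view.z.
        ndc_z = depth * 2.0 - 1.0
        view_z, view_w = -1.0, (ndc_z + A) / B
        return -(view_z / view_w)

    # A point 7.5 units in front of the camera (view z = -7.5) round-trips:
    print(round(linear_depth(stored_depth(-7.5)), 6))  # -> 7.5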
The world position can be constructed from the depth buffer using the
following code. Note that the ``CAMERA_MATRIX`` is needed to transform the
position from view space into world space, so it needs to be passed to the
fragment shader with a varying.

.. code-block:: glsl

    varying mat4 CAMERA;

    void vertex() {
        CAMERA = CAMERA_MATRIX;
    }

    void fragment() {
        ...
        vec4 world = CAMERA * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
        vec3 world_position = world.xyz / world.w;
    }
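
As a sanity check, the same chain can be evaluated in plain Python.
Everything here is illustrative: an OpenGL-style projection with example
near/far planes, and a stand-in for ``CAMERA_MATRIX`` that simply places the
camera at a made-up world position.

.. code-block:: python

    NEAR, FAR = 0.05, 100.0
    A = -(FAR + NEAR) / (FAR - NEAR)
    B = -2.0 * FAR * NEAR / (FAR - NEAR)

    def mat_vec(m, v):
        return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

    # Closed-form inverse of the projection (z_clip = A*z + B*w, w_clip = -z).
    INV_PROJECTION = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, 1.0 / B, A / B],
    ]

    # Stand-in for CAMERA_MATRIX: a camera translated to (2, 1, 5).
    CAMERA = [
        [1.0, 0.0, 0.0, 2.0],
        [0.0, 1.0, 0.0, 1.0],
        [0.0, 0.0, 1.0, 5.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

    def ndc_from_view(v):
        # Forward path the rasterizer performs: project, perspective-divide.
        clip = [v[0], v[1], A * v[2] + B, -v[2]]
        return [clip[i] / clip[3] for i in range(3)]

    def world_from_ndc(ndc):
        # Mirrors the fragment shader:
        #   vec4 world = CAMERA * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
        #   vec3 world_position = world.xyz / world.w;
        world = mat_vec(CAMERA, mat_vec(INV_PROJECTION, ndc + [1.0]))
        return [world[i] / world[3] for i in range(3)]

    # A view-space point 4 units in front of the camera lands at the camera's
    # world position plus the offset:
    print([round(c, 6) for c in world_from_ndc(ndc_from_view([0.5, -0.25, -4.0]))])
    # -> [2.5, 0.75, 1.0]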
An optimization
---------------

You can benefit from using a single large triangle rather than a full-screen
quad. The reason for this is explained `here <https://michaldrobot.com/2014/04/01/gcn-execution-patterns-in-full-screen-passes>`_.
However, the benefit is quite small and only noticeable when running
especially complex fragment shaders.

Set the Mesh in the MeshInstance to an :ref:`ArrayMesh <class_ArrayMesh>`. An
ArrayMesh is a tool that allows you to easily construct a Mesh from arrays of
vertices, normals, colors, etc.

Now, attach a script to the MeshInstance and use the following code:

::

    extends MeshInstance

    func _ready():
        # Create a single triangle out of vertices.
        var verts = PoolVector3Array()
        verts.append(Vector3(-1.0, -1.0, 0.0))
        verts.append(Vector3(-1.0, 3.0, 0.0))
        verts.append(Vector3(3.0, -1.0, 0.0))

        # Create an array of arrays.
        # This could contain normals, colors, UVs, etc.
        var mesh_array = []
        mesh_array.resize(Mesh.ARRAY_MAX) # Required size for the ArrayMesh array.
        mesh_array[Mesh.ARRAY_VERTEX] = verts # Position of the vertex array in the ArrayMesh array.

        # Create the Mesh from mesh_array.
        mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, mesh_array)

.. note:: The triangle is specified in normalized device coordinates. Recall
          that NDC run from ``-1`` to ``1`` in both the ``x`` and ``y``
          directions. This makes the screen ``2`` units wide and ``2`` units
          tall. In order to cover the entire screen with a single triangle,
          use a triangle that is ``4`` units wide and ``4`` units tall,
          double the height and width of the screen.
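
As a quick sanity check (plain Python, not part of the scene script), a
standard point-in-triangle test confirms that the triangle ``(-1, -1)``,
``(-1, 3)``, ``(3, -1)`` from the script above contains every corner of the
NDC square:

.. code-block:: python

    def edge(a, b, p):
        # Signed area: positive when p lies to the left of the edge a -> b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def inside(tri, p):
        # A point is inside (or on the border of) the triangle when all
        # three edge tests agree in sign.
        signs = [edge(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

    triangle = [(-1.0, -1.0), (-1.0, 3.0), (3.0, -1.0)]
    screen_corners = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]
    print(all(inside(triangle, c) for c in screen_corners))  # -> True

Note that the top-right corner ``(1, 1)`` sits exactly on the triangle's long
edge, so the triangle covers the screen with no margin to spare along that
diagonal.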
Assign the same vertex shader from above, and everything should look exactly
the same.

The one drawback of using an ArrayMesh over a QuadMesh is that the ArrayMesh
is not visible in the editor, because the triangle is not constructed until
the scene is run. To get around that, construct a single-triangle Mesh in a
modelling program and use that in the MeshInstance instead.