TODOExperimentation.txt

------------------------- STUDY --------------------------------
Study shadow rendering implementations
Study how transparency is handled (is it order independent?)
Figure out what a skylight is
Determine how light bleeding is handled (if at all)
---------------------- IMPLEMENTATION ---------------------------
RenderTexturePool needs support for cube and 3D textures
Lights need a getLightMesh() method
- Need a cone to use when rendering spot lights, and a sphere otherwise
Load up and set up a test-bed with Ribek's scene
Quantize buffer sizes so they're divisible by 8 when requesting them from RenderTexturePool
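A minimal sketch of that quantization as plain C++; roundUpTo8 is a hypothetical helper name, not an existing RenderTexturePool method:

    #include <cstdint>

    // Round a requested texture dimension up to the nearest multiple of 8 so the
    // pool can reuse targets across slightly different viewport sizes.
    inline std::uint32_t roundUpTo8(std::uint32_t size)
    {
        return (size + 7u) & ~7u;
    }

    // Example: a 1366x705 viewport would be served by a 1368x712 target.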
R11G11B10 and R10G10B10A2 formats haven't been tested
Make sure materials can generate unique "hash" values so I can quickly check if they're equal (sketch below)
- Generate a unique ID for each shader (32-bit)
- Limit state IDs to 10 bits
- Generate a combined 64-bit hash value for the material using the shader + 3 states
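A minimal sketch of how the 64-bit hash could be packed, assuming the three states are blend, rasterizer and depth-stencil; the function and parameter names are placeholders, not existing engine API:

    #include <cstdint>

    // 32-bit shader ID packed above three 10-bit state IDs: 32 + 3 * 10 = 62 bits,
    // leaving 2 bits of the 64-bit hash spare.
    inline std::uint64_t buildMaterialHash(std::uint32_t shaderId,
        std::uint32_t blendStateId, std::uint32_t rasterizerStateId,
        std::uint32_t depthStencilStateId)
    {
        const std::uint64_t mask10 = 0x3FF; // state IDs are limited to 10 bits
        return (std::uint64_t(shaderId) << 30)
             | ((blendStateId        & mask10) << 20)
             | ((rasterizerStateId   & mask10) << 10)
             |  (depthStencilStateId & mask10);
    }

    // Materials compare equal for batching purposes when their hashes match.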
Make GUI and Overlay render separately, without using the main render queue (sketch below)
- Get rid of DrawList and make them render similarly to how handles/gizmos do it
- They should render after everything else
- Disable depth test and depth write in their material
- Since most GUI elements share a material, check if the material changed from the last element and don't set new shaders/states otherwise
- (changing params and param buffers should still apply)
- Since I'm doing the rendering directly in GUIManager, I can control exactly which states I modify
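A minimal sketch of the "only re-bind when the material changes" loop; GuiElement, Material and the commented-out bind/draw calls are placeholders, not the real GUIManager API:

    #include <cstdint>
    #include <vector>

    struct Material   { std::uint64_t hash; /* shaders, states, params... */ };
    struct GuiElement { const Material* material; };

    // Render GUI elements in order, but only re-bind shaders/states when the
    // material actually changes; parameters are still applied per element.
    // Depth test/write are assumed to be disabled in the material itself.
    void renderGui(const std::vector<GuiElement>& elements)
    {
        const Material* lastMaterial = nullptr;
        for (const GuiElement& element : elements)
        {
            if (lastMaterial == nullptr || element.material->hash != lastMaterial->hash)
            {
                // bindShadersAndStates(*element.material); // the expensive part
                lastMaterial = element.material;
            }
            // bindParamBuffers(element); // cheap, always done
            // drawMesh(element);
        }
    }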
Make scene grid rendering more akin to handle & gizmo rendering so I can remove the onRenderViewport callback
- Do a similar thing with the dock manager
The rendering procedure should be (sketch below):
- For each camera, go over each renderable and determine if their layers match
- If they match, do frustum culling
- Add renderables to the opaque or transparent queue depending on material
- This should happen in renderAllCore before actual rendering starts
- When rendering, bind the gbuffer render targets and render the opaque queue
- Unbind the gbuffer and resolve it to scene color
- Bind scene color and render the transparent queue
- Resolve scene color to the frame buffer (they could be the same target if no MSAA is used and gamma correction is properly set up)
- After rendering is done, make sure to clear the queues so I don't hold invalid references
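A minimal sketch of that flow; Renderable, Camera and the commented-out bind/resolve calls are placeholders standing in for the real renderer types:

    #include <cstdint>
    #include <vector>

    struct Renderable { std::uint64_t layer; bool transparent; };
    struct Camera     { std::uint64_t layerMask; };

    // Cull into per-camera queues first, render the queues, then clear them so
    // no stale renderable references are kept around.
    void renderAllCore(const std::vector<Camera>& cameras,
                       const std::vector<Renderable>& renderables)
    {
        for (const Camera& camera : cameras)
        {
            std::vector<const Renderable*> opaqueQueue, transparentQueue;

            for (const Renderable& renderable : renderables)
            {
                if ((renderable.layer & camera.layerMask) == 0)
                    continue; // layers don't match
                // if (!insideFrustum(camera, renderable)) continue; // frustum culling
                (renderable.transparent ? transparentQueue : opaqueQueue)
                    .push_back(&renderable);
            }

            // bindGBuffer();                render(opaqueQueue);
            // resolveGBufferToSceneColor();
            // bindSceneColor();             render(transparentQueue);
            // resolveSceneColorToFrameBuffer(); // may be the same target without MSAA

            opaqueQueue.clear();
            transparentQueue.clear();
        }
    }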
Reduce state switching (sketch below):
- Extend render queues so they compare material hash values and sort accordingly
- Add a flag that lets me control whether the queue should prefer distance sort or material sort
- Extend the entries output from the render queue with a flag that signals a material should be re-applied
- When rendering, only apply new shaders & states when that flag is encountered
- Switch buffers, textures and samplers for each renderable (possibly ignoring static/per-frame/per-camera ones)
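A minimal sketch of the queue sort plus the re-apply flag; QueueEntry and sortQueue are hypothetical names:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct QueueEntry
    {
        std::uint64_t materialHash;
        float distanceFromCamera;
        bool applyMaterial; // output: re-apply shaders & states for this entry
    };

    // Sort by material hash or by distance, then flag the entries where the
    // material changes so the renderer only switches shaders/states there.
    // Buffers, textures and samplers are still set per renderable.
    void sortQueue(std::vector<QueueEntry>& entries, bool preferMaterialSort)
    {
        if (preferMaterialSort)
            std::sort(entries.begin(), entries.end(),
                [](const QueueEntry& a, const QueueEntry& b)
                { return a.materialHash < b.materialHash; });
        else
            std::sort(entries.begin(), entries.end(),
                [](const QueueEntry& a, const QueueEntry& b)
                { return a.distanceFromCamera < b.distanceFromCamera; });

        bool first = true;
        std::uint64_t lastHash = 0;
        for (QueueEntry& entry : entries)
        {
            entry.applyMaterial = first || entry.materialHash != lastHash;
            lastHash = entry.materialHash;
            first = false;
        }
    }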
Store RenderTargets per camera (sketch below)
- Only create them if the camera is rendering some renderables
- If none are rendered, clear the reference to free the targets
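A minimal sketch of the lazy per-camera targets, using hypothetical PerCameraTargets/RenderTargets placeholder types:

    #include <memory>
    #include <unordered_map>

    struct RenderTargets { /* gbuffer + scene color for one camera */ };
    struct Camera {};

    // Create a camera's targets lazily the first time it actually renders
    // something, and drop the reference (freeing the targets) when it renders nothing.
    class PerCameraTargets
    {
    public:
        void update(const Camera* camera, bool hasVisibleRenderables)
        {
            if (hasVisibleRenderables)
            {
                if (mTargets.count(camera) == 0)
                    mTargets[camera] = std::make_shared<RenderTargets>();
            }
            else
                mTargets.erase(camera); // free the targets
        }

    private:
        std::unordered_map<const Camera*, std::shared_ptr<RenderTargets>> mTargets;
    };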
Generate a different RenderableController for each set of elements (sketch below)
- Will likely want to rename the current LitTexRenderableController to OpaqueSomething
- Each controller would be connected to its own render queue (generated in the step above)
- The renderable controller should probably be notified when rendering starts/ends so it may bind the gbuffer and/or other resources.
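A minimal sketch of the controller split; the interface and names (onRenderStarted/onRenderEnded, OpaqueRenderableController standing in for "OpaqueSomething") are hypothetical:

    // One controller per category of elements, each owning its own render queue
    // and getting notified around rendering so it can bind/unbind the gbuffer or
    // other resources.
    class RenderableController
    {
    public:
        virtual ~RenderableController() = default;
        virtual void onRenderStarted() = 0; // e.g. bind gbuffer targets
        virtual void onRenderEnded() = 0;   // e.g. unbind / resolve
        // each controller would also expose its own render queue here
    };

    class OpaqueRenderableController : public RenderableController // was LitTexRenderableController
    {
    public:
        void onRenderStarted() override { /* bind gbuffer */ }
        void onRenderEnded() override   { /* unbind gbuffer */ }
    };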
Light queues (sketch below):
- Will likely need to generate a set of visible lights per camera, similar to renderables (most likely separated by type)
- Will also need a set of renderables per light when rendering shadows
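A minimal sketch of the data layout, with placeholder Light/Camera/Renderable types and hypothetical struct names:

    #include <unordered_map>
    #include <vector>

    struct Light {};      // placeholders for the real renderer types
    struct Camera {};
    struct Renderable {};

    // Visible lights per camera, separated by type.
    struct VisibleLights
    {
        std::vector<const Light*> directional;
        std::vector<const Light*> point;
        std::vector<const Light*> spot;
    };

    struct LightQueues
    {
        std::unordered_map<const Camera*, VisibleLights> visibleLightsPerCamera;

        // Shadow casters per light, gathered when rendering shadow maps.
        std::unordered_map<const Light*, std::vector<const Renderable*>> shadowCastersPerLight;
    };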
--------------------------- DESIGN ---------------------------
How will cameras interact with the renderer? The cameras currently available shouldn't have depth buffers
- Need to modify RenderWindow so it doesn't create depth buffers
- Find all places where I create windows and modify them accordingly
- Modify render target creation in SceneWindow
- What happens when a user creates a camera with a depth buffer?
- Print out a warning and ignore it?
- Or resolve the gbuffer into it? Probably this, as I want to be able to read the depth buffer from script code if needed
- This still isn't perfect, as I'd have duplicate buffers when using a non-MSAA buffer that requires no resolve
- A similar issue arises when a multisampled buffer is used for the camera
Separate GUI rendering into its own pass, to be rendered after the gbuffer is resolved?
Will likely need an easy way to determine the supported feature set (likely just depending on shader model)
Consider encapsulating shaders together with methods for setting their parameters (and possibly retrieving output) (sketch below)
- So that external code doesn't need to know about their internals and does less work
- This would contain a reference to the shader and its parameters
- It would then have a SetParameters method (custom per shader) which updates its params in a simple manner
- (Later) Possibly allow them to return a feature level and/or platform they're meant to be used on
- (Later) It might be important to be able to easily use different versions of the shader (e.g. different defines)
- This might require handling compilation in this class instead of on resource load (but then again, I could potentially have the shader in an include file and then specific shader files for each define version)
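A minimal sketch of such a wrapper, using a hypothetical DownsampleShader as the example; Shader, GpuParams and the commented-out parameter call are placeholders, not the real resource API:

    #include <memory>

    struct Shader {};    // placeholder for the engine's shader resource
    struct GpuParams {}; // placeholder for its parameter block

    // One small class per shader: it owns the shader reference and its params and
    // exposes a single typed setParameters(), so callers never touch the internals.
    class DownsampleShader
    {
    public:
        void setParameters(float invWidth, float invHeight)
        {
            // mParams->setVec2("gInvTexSize", invWidth, invHeight);
            (void)invWidth; (void)invHeight;
        }

        // (Later) could also report a feature level / target platform, or choose
        // between versions compiled with different defines.

    private:
        std::shared_ptr<Shader> mShader;
        std::shared_ptr<GpuParams> mParams;
    };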
--------------------------- LONG TERM ------------------------
Deferred (sketch below):
- Create a tiled deferred renderer
- Support for point, directional and spot lights
- Basic Lambert shading initially
- Create brand new default shaders
- HDR, tone mapping and gamma correction (toggleable)
- Will likely need a simple framework for rendering full-screen effects (e.g. I will need to downsample the scene to determine brightness here, but will also need that framework for all post-processing)
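A minimal sketch of the initial shading math as plain C++ (Lambert diffuse, Reinhard tone mapping, gamma 2.2); the real version would live in the deferred lighting shader, and the choice of Reinhard here is an assumption:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    inline float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Lambert diffuse for one light, then Reinhard tone mapping and gamma 2.2.
    inline Vec3 shadeLambert(const Vec3& albedo, const Vec3& lightColor,
                             const Vec3& normal, const Vec3& dirToLight)
    {
        const float nDotL = std::max(dot(normal, dirToLight), 0.0f);
        const Vec3 hdr = { albedo.x * lightColor.x * nDotL,
                           albedo.y * lightColor.y * nDotL,
                           albedo.z * lightColor.z * nDotL };

        auto toneMapAndGamma = [](float c)
        {
            c = c / (1.0f + c);              // Reinhard tone mapping (toggleable)
            return std::pow(c, 1.0f / 2.2f); // gamma correction
        };
        return { toneMapAndGamma(hdr.x), toneMapAndGamma(hdr.y), toneMapAndGamma(hdr.z) };
    }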
Implement shadows (sketch below)
- Start with hard shadows
- Move to PCF soft shadows (see if there's anything better)
- Then cascaded shadow maps
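A minimal CPU-side illustration of 3x3 PCF, assuming a square depth map stored as a flat float array; this only shows the sampling idea, not the engine's shadow code:

    #include <algorithm>
    #include <vector>

    // 3x3 PCF: instead of a single hard depth comparison, average the comparison
    // results of the neighbouring shadow-map texels to get a soft [0, 1] factor.
    float pcfShadow(const std::vector<float>& shadowMap, int size,
                    int x, int y, float receiverDepth, float bias)
    {
        float lit = 0.0f;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
            {
                const int sx = std::clamp(x + dx, 0, size - 1);
                const int sy = std::clamp(y + dy, 0, size - 1);
                const float storedDepth = shadowMap[sy * size + sx];
                lit += (receiverDepth - bias <= storedDepth) ? 1.0f : 0.0f;
            }
        return lit / 9.0f; // 1 = fully lit, 0 = fully in shadow
    }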
Later:
- Reflection probes
- Proper PBR materials with reflections
- Post-processing system: FXAA, SSAO, color correction, depth of field (bokeh)
- Forward rendering for transparent objects
- Occlusion
- GI
- Volumetric lighting
- SSR
- Depth pre-pass - make sure this can be toggled on and off as needed
- HDR skybox, skylight stuff
- Skin & vegetation shaders
- Tessellation/displacement/parallax
- Water
- Fog
- Motion blur
- Per-object shadows
- Extend the camera with shutter speed (motion blur), aperture size and focal distance (depth of field), and exposure (HDR)
--------------------------- TEST -----------------------------
Test all APIs with the new changes regarding depth buffer creation on windows