BearishSun, 8 years ago
Parent
Commit
104eaa3184

+ 41 - 0
Documentation/Manuals/Native/commandBuffers.md

@@ -0,0 +1,41 @@
+Command buffers		{#commandBuffers}
+===============
+
+Rendering can be a very CPU-heavy operation: even though the GPU does the actual rendering, the CPU is still the one submitting all of those commands. For this purpose Banshee provides the @ref bs::ct::CommandBuffer "ct::CommandBuffer" object. This object allows you to queue low-level rendering commands on different threads, letting you better distribute CPU usage. Normally rendering commands may only be submitted from the core thread, but when using command buffers you are allowed to use a different thread for each command buffer.
+
+Almost every method on **RenderAPI** accepts a **CommandBuffer** as its last parameter. If you don't provide one, the system uses its primary internal command buffer instead. When you do provide one, you can safely use **RenderAPI** from threads other than the core thread.
+
+> Note: At this point command buffers are only natively supported by the Vulkan render API and will be emulated on others. This means there will be no performance benefit on non-Vulkan render APIs.
+
+# Creation
+To create a **CommandBuffer** call @ref bs::ct::CommandBuffer::create "ct::CommandBuffer::create()" with @ref bs::GpuQueueType "GpuQueueType" as the parameter. **GpuQueueType** can be:
+ - @ref bs::GQT_GRAPHICS "GQT_GRAPHICS" - This is the default command buffer type that supports all types of operations.
+ - @ref bs::GQT_COMPUTE "GQT_COMPUTE" - Command buffer type that only supports compute operations.
+
+If you know a command buffer will only execute compute operations, it is beneficial to create it using **GQT_COMPUTE**.
+
+~~~~~~~~~~~~~{.cpp}
+SPtr<CommandBuffer> cmds = CommandBuffer::create(GQT_GRAPHICS);
+~~~~~~~~~~~~~
+
+# Usage
+Once created simply provide it to any relevant **RenderAPI** calls as the last parameter.
+
+~~~~~~~~~~~~~{.cpp}
+RenderAPI& rapi = RenderAPI::instance();
+
+// ... bind pipeline, vertex/index buffers, etc.
+rapi.drawIndexed(0, numIndices, 0, numVertices, 1, cmds);
+~~~~~~~~~~~~~
+
+# Submitting
+Commands queued on a command buffer will only get executed after the command buffer is submitted. Submission is done by calling @ref bs::ct::RenderAPI::submitCommandBuffer "ct::RenderAPI::submitCommandBuffer()".
+
+~~~~~~~~~~~~~{.cpp}
+RenderAPI& rapi = RenderAPI::instance();
+rapi.submitCommandBuffer(cmds);
+~~~~~~~~~~~~~
+
+The default command buffer (the one used when you provide no command buffer to **RenderAPI**) is automatically submitted on a call to **RenderAPI::swapBuffers()**.
+
+Note that even though command buffers can be populated with commands on various threads, **ct::RenderAPI::submitCommandBuffer()** must only be called from the core thread. You must also externally synchronize access to **CommandBuffer** when passing it between threads, as it is not thread safe.

+ 13 - 0
Documentation/Manuals/Native/compute.md

@@ -0,0 +1,13 @@
+Compute			{#compute}
+===============
+[TOC]
+
+Compute GPU programs are not meant to be used for drawing/rendering, but rather for arbitrary computations. In order to execute a compute program you must bind a **ComputePipelineState** to the **RenderAPI**, as previously shown. After it is bound you must call @ref bs::ct::RenderAPI::dispatch "ct::RenderAPI::dispatch()".
+
+Compute GPU programs are executed in one or multiple thread groups. Each thread group has one or multiple threads, as defined in the GPU program code itself. Thread groups and threads can be organized in one, two or three dimensions, depending on what is most relevant to the data being processed. **ct::RenderAPI::dispatch()** expects the number of thread-groups to launch as parameters.
+
+~~~~~~~~~~~~~{.cpp}
+// Execute a GPU program with 32x32 thread-groups
+RenderAPI& rapi = RenderAPI::instance();
+rapi.dispatch(32, 32, 1);
+~~~~~~~~~~~~~

+ 58 - 0
Documentation/Manuals/Native/gpuBuffers.md

@@ -0,0 +1,58 @@
+GPU Buffers			{#gpuBuffers}
+===============
+[TOC]
+
+GPU buffers (also known as generic buffers) allow you to provide data to a **GpuProgram**, similarly to a texture. In particular, they are very similar to a one-dimensional texture, but they aren't constrained by the size limitations of a texture, and each entry in the buffer can be more complex than a primitive data type. This makes it easy to provide your GPU programs with complex data. In Banshee they are represented by the @ref bs::ct::GpuBuffer "ct::GpuBuffer" type. 
+
+# Creation {#gpuBuffers_a}
+To create a **ct::GpuBuffer** you must fill out a @ref bs::GPU_BUFFER_DESC "GPU_BUFFER_DESC" structure and call the @ref bs::ct::GpuBuffer::create "ct::GpuBuffer::create()" method. At minimum you need to provide:
+ - @ref bs::GPU_BUFFER_DESC::type "GPU_BUFFER_DESC::type" - This can be @ref bs::GBT_STANDARD "GBT_STANDARD" or @ref bs::GBT_STRUCTURED "GBT_STRUCTURED". See below for explanation of each.
+ - @ref bs::GPU_BUFFER_DESC::elementCount "GPU_BUFFER_DESC::elementCount" - Number of elements in the buffer.
+ - @ref bs::GPU_BUFFER_DESC::format "GPU_BUFFER_DESC::format" - Format of each individual element in the buffer as @ref bs::GpuBufferFormat "GpuBufferFormat". Only relevant for buffers with type **GBT_STANDARD**.
+ - @ref bs::GPU_BUFFER_DESC::elementSize "GPU_BUFFER_DESC::elementSize" - Size (in bytes) of each element in the buffer. Only relevant for buffers with type **GBT_STRUCTURED**.
+ 
+Standard buffers contain primitive elements (of **GpuBufferFormat** format), such as floats or ints, each with up to 4 components. In HLSL these buffers are represented using **Buffer** or **RWBuffer** types. In GLSL they are represented using **samplerBuffer** or **imageBuffer** types.
+ 
+~~~~~~~~~~~~~{.cpp}
+// Creates a standard buffer with 32 elements, each a 4-component float
+GPU_BUFFER_DESC desc;
+desc.type = GBT_STANDARD;
+desc.elementCount = 32;
+desc.format = BF_32X4F;
+
+SPtr<GpuBuffer> buffer = GpuBuffer::create(desc);
+~~~~~~~~~~~~~
+
+Structured buffers contain elements of arbitrary size and are usually used for storing more complex data structures. In HLSL these buffers are represented by the **StructuredBuffer** or **RWStructuredBuffer** types. In GLSL they are represented by the **buffer** block, also known as a shader storage buffer object.
+ 
+~~~~~~~~~~~~~{.cpp}
+struct MyData
+{
+	float a;
+	int b;
+};
+
+// Creates a structured buffer with 32 elements, each with enough size to store the MyData struct
+GPU_BUFFER_DESC desc;
+desc.type = GBT_STRUCTURED;
+desc.elementCount = 32;
+desc.elementSize = sizeof(MyData);
+
+SPtr<GpuBuffer> buffer = GpuBuffer::create(desc);
+~~~~~~~~~~~~~ 
+
+# Reading/writing {#gpuBuffers_b}
+Reading or writing to a GPU buffer uses the same approach as other types of buffers, like index or vertex buffers. Refer back to the [geometry](@ref geometry) manual to see how.
+
+# Binding {#gpuBuffers_c}
+Once created, a buffer can be bound to a GPU program through **GpuParams** by calling @ref bs::ct::GpuParams::setBuffer(GpuProgramType, const String&, const BufferType&) "ct::GpuParams::setBuffer()".
+
+~~~~~~~~~~~~~{.cpp}
+SPtr<GpuParams> params = ...;
+params->setBuffer(GPT_FRAGMENT_PROGRAM, "myBuffer", buffer);
+~~~~~~~~~~~~~ 
+
+# Load-store buffers {#gpuBuffers_d}
+Same as with textures, buffers can also be used for GPU program load-store operations. You simply need to enable the @ref bs::GPU_BUFFER_DESC::randomGpuWrite "GPU_BUFFER_DESC::randomGpuWrite" option on buffer creation.
+
+After that the buffer can be bound as normal, as shown above. This differs from load-store textures, which have a separate set of methods for binding in **GpuParams**.

+ 4 - 0
Documentation/Manuals/Native/lowLevelRenderingExample.md

@@ -0,0 +1,4 @@
+Working example		{#lowLevelRenderingExample}
+===============
+
+A working example on how to use the low level rendering API, using most of the functionality described in these manuals, can be found in the *Examples/ExampleLowLevelRendering* project provided with the source code.

+ 3 - 5
Documentation/Manuals/Native/manuals.md

@@ -81,10 +81,10 @@ A set of manuals covering advanced functionality intented for those wanting to e
  - [Render targets](@ref renderTargets)
  - [Drawing](@ref drawing) 
  - [Load-store textures](@ref loadStoreTextures)
- - [Generic buffers](@ref genericBuffers)
+ - [GPU buffers](@ref gpuBuffers)
  - [Compute](@ref compute)
  - [Command buffers](@ref commandBuffers)
- - [Complete example](@ref lowLevelRenderingExample)
+ - [Working example](@ref lowLevelRenderingExample)
 
 ## General guides
 Name                                      | Description
@@ -105,7 +105,5 @@ Name                                      | Description
 
 Name                                      | Description
 ------------------------------------------|-------------
-[Textures](@ref textures)                 | Shows you how to create, use and manipulate textures.
 [Meshes](@ref meshes)                     | Shows you how to create, use and manipulate meshes.
-[Materials](@ref materials)				  | Shows you how to create and use materials and shaders.
-[Render API](@ref renderAPI)
+[Materials](@ref materials)				  | Shows you how to create and use materials and shaders.

+ 0 - 294
Documentation/Manuals/Native/renderAPI.md

@@ -1,294 +0,0 @@
-Render API									{#renderAPI}
-===============
-[TOC]
-
-Render API is an interface that allows you to perform low-level rendering operations akin to DirectX or OpenGL. In Banshee this API is provided through the @ref bs::RenderAPI "RenderAPI" for the simulation thread, and @ref bs::ct::RenderAPI "ct::RenderAPI" for the core thread. If you are confused by the dual nature of objects, read the [core thread](@ref coreThread) manual. 
-
-For the remainder of this manual we'll focus on the core thread portion of the API, but both provide essentially identical functionality. The main difference is that execution of commands on the simulation thread isn't immediate; instead they are queued on an internal queue which is sent to the core thread at the end of the frame.
-
-Render API lets you manipulate the GPU pipeline by setting various states (depth stencil, blend and rasterizer), binding GPU programs, textures and executing draw calls. In this manual we'll cover how to use this API to perform rendering manually. 
-
-To render using this API you need to:
- - Create and bind a render target
-  - Set up a viewport (if rendering to a part of a render target)
-  - Clear render target (if we're re-using a render target)
- - Create and bind a pipeline state
-   - Create and bind vertex/fragment GPU programs
-   - Create and bind depth stencil, blend and/or rasterizer states (optionally)
-   - Create and bind geometry/hull/domain GPU program (optionally)
- - Bind GPU program parameters (textures, samplers, etc., if any)
- - Create and bind vertices
-  - Create and bind the vertex declaration
-  - Create and bind the vertex buffer
- - Create and bind the index buffer (optionally)
- - Issue a draw command
- 
-We'll cover each step of this process, and at the end also show an alternate pipeline for compute operations.
-
-# Render targets {#renderAPI_a}
-Before any rendering can be done you must bind at least one render target. To learn more about, and how to create a render target check out the [render target](@ref renderTargets) manual. 
-
-Binding a render target involves calling @ref bs::ct::RenderAPI::setRenderTarget "ct::RenderAPI::setRenderTarget". This will cause any rendering to be output to the entirety of the render target. Optionally you can also call @ref bs::ct::RenderAPI::setViewport "ct::RenderAPI::setViewport" to select a sub-rectangle of the target to render to. 
-
-Binding a render target means you cannot use it for reading within a GPU program. However if your render target has a depth-buffer you can optionally set the `readOnlyDepthStencil` parameter of @ref bs::ct::RenderAPI::setRenderTarget "ct::RenderAPI::setRenderTarget" to true, which will allow you to have a depth buffer be bound for both depth-testing and reading from the GPU program.
- 
-Before doing any rendering it's always good to clear the render target to some default value. Use @ref bs::ct::RenderAPI::clearRenderTarget "ct::RenderAPI::clearRenderTarget" to clear the entire target, or @ref bs::ct::RenderAPI::clearViewport "ct::RenderAPI::clearViewport" to clear just the viewport portion of the target. When clearing you can choose whether to clear color, depth or stencil buffers (or all) determined by @ref bs::FrameBufferType "FrameBufferType" flags. You can also choose the values to clear each of the buffers to. And finally if your render target has multiple surfaces, you can choose to clear only some of the surfaces by providing a bitmask.
-
-Once you are done rendering make sure to call @ref bs::ct::RenderAPI::swapBuffers "ct::RenderAPI::swapBuffers" if your render target has multiple buffers (like a window). This will swap the buffers and present the rendered image on the screen.
-
-A simple example covering all these commands:
-~~~~~~~~~~~~~{.cpp}
-SPtr<RenderTarget> myRt = ...; // Assuming we created this earlier, and it's a double-buffered window
-
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setRenderTarget(myRt);
-rapi.setViewport(Rect2(0.5f, 0.0f, 0.5f, 1.0f)); // Draw only to the right side of the target
-
-// Clear all buffers: color to white, depth to 1.0, stencil to 0
-rapi.clearViewport(FBT_COLOR | FBT_DEPTH | FBT_STENCIL, Color::White, 1.0f, 0);
-... execute some draw calls ...
-rapi.swapBuffers();
-~~~~~~~~~~~~~
- 
-# Pipeline state {#renderAPI_b}
-Before executing the drawing operation you must set up an @ref bs::ct::GraphicsPipelineState "ct::GraphicsPipelineState" object, which contains a set of fixed and programmable states that control primitive rendering. This includes GPU programs (e.g. vertex/fragment) and fixed states (depth-stencil, blend, rasterizer).
-
-To create a pipeline state you must fill out @ref bs::PIPELINE_STATE_DESC "PIPELINE_STATE_DESC" descriptor, and use it to construct the state, like so:
-~~~~~~~~~~~~~{.cpp}
-PIPELINE_STATE_DESC desc;
-// Fixed states (see below on how to create them)
-desc.blendState = ...;
-desc.rasterizerState = ...;
-desc.depthStencilState = ...;
-
-// GPU programs (see below on how to create them)
-desc.vertexProgram = ...
-desc.fragmentProgram = ...;
-desc.geometryProgram = ...;
-desc.hullProgram = ...;
-desc.domainProgram = ...;
-
-SPtr<GraphicsPipelineState> state = GraphicsPipelineState::create(desc);
-~~~~~~~~~~~~~
-
-Once created the pipeline can be bound for rendering by calling @ref bs::ct::RenderAPI::setGraphicsPipeline "ct::RenderAPI::setGraphicsPipeline".
-
-~~~~~~~~~~~~~{.cpp}
-// Bind pipeline for use (continued from above)
-
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setGraphicsPipeline(state);
-~~~~~~~~~~~~~
-
-We continue below with explanation on how to create fixed and programmable states required to initialize GraphicsPipelineState.
-
-## Fixed pipeline states {#renderAPI_b_a}
-Fixed pipeline states allow you to control (to some extent) non-programmable parts of the pipeline. This includes anything from blend operations, rasterization mode to depth testing. Setting these states is optional and if not set, default values will be used.
-
-States can be created by:
- - @ref bs::ct::DepthStencilState "ct::DepthStencilState" - Populate @ref bs::DEPTH_STENCIL_STATE_DESC "DEPTH_STENCIL_STATE_DESC" and call @ref bs::ct::DepthStencilState::create "ct::DepthStencilState::create" 
- - @ref bs::ct::BlendState "ct::BlendState" - Populate @ref bs::BLEND_STATE_DESC "BLEND_STATE_DESC" and call @ref bs::ct::BlendState::create "ct::BlendState::create" 
- - @ref bs::ct::RasterizerState "ct::RasterizerState" - Populate @ref bs::RASTERIZER_STATE_DESC "RASTERIZER_STATE_DESC" and call @ref bs::ct::RasterizerState::create "ct::RasterizerState::create" 
- 
-We won't explain what each of the states does. For that you can check out the class documentation of the states themselves, or familiarize yourself with the modern GPU pipeline in general, as the states mirror it exactly.
-
-## GPU programs {#renderAPI_b_b}
-The pipeline state also requires you to bind at least one GPU program (programmable state). At minimum you will need to bind a vertex program, while in most cases you will also need a fragment program. Optionally you can also bind geometry, hull or domain programs for more advanced functionality. To learn how to create GPU programs see [GPU program manual](@ref gpuPrograms).
-
-Most GPU programs also accept a number of parameters, whether textures, buffers, sampler states or primitive values like floats or integers. These parameters are accessed through @ref bs::ct::GpuParams "ct::GpuParams" object. You can use this object to assign individual parameters and then bind the object to the render API using @ref bs::ct::RenderAPI::setGpuParams "ct::RenderAPI::setGpuParams". See below for an example.
-
-~~~~~~~~~~~~~{.cpp}
-... assuming graphics pipeline state and relevant GPU programs are created ...
-SPtr<GraphicsPipelineState> state = ...;
-SPtr<GpuParams> params = GpuParams::create(state);
-
-// Retrieve GPU param handles we can then read/write to
-GpuParamVec2 myVectorParam;
-GpuParamTexture myTextureParam;
-
-params->getParam(GPT_FRAGMENT_PROGRAM, "myVector", myVectorParam); // Assuming "myVector" is the variable name in the program source code
-params->getTextureParam(GPT_FRAGMENT_PROGRAM, "myTexture", myTextureParam); // Assuming "myTexture" is the variable name in the program source code
-
-myVectorParam.set(Vector2(1, 2));
-myTextureParam.set(myTexture); // Assuming we created "myTexture" earlier.
-
-// Bind parameters for use 
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setGpuParams(params);
-~~~~~~~~~~~~~
- 
-All parameters are bound by first retrieving their handles, and then using those handles for parameter access. In the example above we show how to bind a texture and a 2D vector to a GPU program. Same approach follows for all available parameter types.
-
-After the parameters are set, we bind them to the pipeline by calling @ref bs::ct::RenderAPI::setGpuParams "ct::RenderAPI::setGpuParams".
-
-### Data parameters {#renderAPI_c_a_a} 
-Handles for data parameters like int, float, 2D vector, etc. can be retrieved by calling @ref bs::ct::GpuParams::getParam "ct::GpuParams::getParam" which can then be assigned to as shown above.
-
-Alternatively you may also assign entire blocks of data parameters by calling @ref bs::ct::GpuParams::setParamBlockBuffer(GpuProgramType,const String&,const ParamsBufferType&) "ct::GpuParams::setParamBlockBuffer". When assigning entire blocks you must create and populate the @ref bs::GpuParamBlockBuffer "GpuParamBlockBuffer" object manually.
-
-When writing to buffers manually you must ensure you write at the offsets the GPU program expects the data to be at. You can find this information in the @ref bs::GpuParamDesc "GpuParamDesc" structure accessible from @ref bs::ct::GpuProgram::getParamDesc "ct::GpuProgram::getParamDesc". 
-
-### Texture parameters {#renderAPI_c_a_b} 
-Handles for texture parameters can be retrieved by calling @ref bs::ct::GpuParams::getTextureParam "ct::GpuParams::getTextureParam", or @ref bs::ct::GpuParams::getLoadStoreTextureParam "ct::GpuParams::getLoadStoreTextureParam" if the texture should be bound for load-store operations.
-
-Learn more about textures and their different types in the [texture manual](@ref textures).
-
-### Sampler state parameters {#renderAPI_c_a_c}
-Sampler states can be used to customize how the texture is sampled. You can retrieve a handle for a sampler state parameter by calling @ref bs::ct::GpuParams::getSamplerStateParam "ct::GpuParams::getSamplerStateParam".
-
-Sampler states are represented by the @ref bs::ct::SamplerState "ct::SamplerState" object, which you can create by populating the @ref bs::SAMPLER_STATE_DESC "SAMPLER_STATE_DESC" and calling @ref bs::ct::SamplerState::create "ct::SamplerState::create". 
-
-An example to create and bind a sampler state:
-~~~~~~~~~~~~~{.cpp}
-
-// Use nearest neighbor filtering when sampling
-SAMPLER_STATE_DESC ssDesc;
-ssDesc.magFilter = FO_POINT;
-ssDesc.minFilter = FO_POINT;
-ssDesc.mipFilter = FO_POINT;
-
-SPtr<SamplerState> mySamplerState = SamplerState::create(ssDesc);
-
-SPtr<GpuParams> params = ...;
-
-GpuParamSampState mySamplerParam;
-params->getSamplerStateParam(GPT_FRAGMENT_PROGRAM, "mySamplerState", mySamplerParam); // Assuming "mySamplerState" is the variable name in the program source code
-
-mySamplerParam.set(mySamplerState);
-
-~~~~~~~~~~~~~
-
-# Vertex buffer {#renderAPI_d}
-@ref bs::ct::VertexBuffer "Vertex buffer" is a buffer that contains all vertices of the object we wish to render. When drawing the vertices will be interpreted as primitives (either points, lines or triangles) and rendered. Each vertex can have one or multiple properties associated with it.
-
-To create a vertex buffer call @ref bs::ct::VertexBuffer::create "ct::VertexBuffer::create". You need to know the size of an individual vertex (determined by the properties each vertex requires) and the number of vertices. Optionally if your vertex buffer is used for output from the geometry GPU program you can toggle on the `streamOut` parameter.
-
-Once the vertex buffer is created you will want to populate it with some data (detailed below) and then bind it to the pipeline using @ref bs::ct::RenderAPI::setVertexBuffers "ct::RenderAPI::setVertexBuffers". You can bind one or multiple vertex buffers at once. They all must have the same vertex counts but can have different properties, which will all be fed to the pipeline when rendering.
-
-Creation of an example vertex buffer:
-~~~~~~~~~~~~~{.cpp}
-// Create a vertex buffer containing 8 vertices with just a vertex position
-SPtr<VertexBuffer> vb = VertexBuffer::create(sizeof(Vector3), 8);
-
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setVertexBuffers(0, { vb });
-~~~~~~~~~~~~~
-
-## Reading/writing {#renderAPI_d_a}
-@ref bs::ct::VertexBuffer "ct::VertexBuffer" provides a couple of ways to read and write data from/to it:
- - @ref bs::ct::VertexBuffer::lock "ct::VertexBuffer::lock" locks a specific region of the vertex buffer and returns a pointer you can then use for reading and writing. Make sure to specify valid @ref bs::GpuLockOptions "GpuLockOptions" signaling whether you are planning on read or writing from the buffer. Once done call @ref bs::ct::VertexBuffer::unlock "ct::VertexBuffer::unlock" to make the locked region accessible to the GPU again.
- - @ref bs::ct::VertexBuffer::readData "ct::VertexBuffer::readData" and @ref bs::ct::VertexBuffer::writeData "ct::VertexBuffer::writeData" to write or read entire blocks at once, but are more or less similar to the previous method.
- - @ref bs::ct::VertexBuffer::copyData "ct::VertexBuffer::copyData" can be used to efficiently copy data between two vertex buffers.
-
-An example of writing to the vertex buffer:
-~~~~~~~~~~~~~{.cpp}
-// Create a vertex buffer containing 8 vertices with just a vertex position
-SPtr<VertexBuffer> vb = VertexBuffer::create(sizeof(Vector3), 8);
-
-Vector3* positions = (Vector3*)vb->lock(0, sizeof(Vector3) * 8, GBL_WRITE_ONLY_DISCARD);
-... write to the positions array ...
-vb->unlock();
-~~~~~~~~~~~~~
-
-When your vertices contain multiple properties it can be difficult to keep track of which offset to write which property, or determine the stride between two vertices. For this purpose you can use @ref bs::VertexDataDesc "VertexDataDesc" which allows you to easily set up vertex properties like so:
-~~~~~~~~~~~~~{.cpp}
-// Create a vertex with a position, normal and UV coordinates
-SPtr<VertexDataDesc> vertexDesc = VertexDataDesc::create();
-vertexDesc->addVertElem(VET_FLOAT3, VES_POSITION);
-vertexDesc->addVertElem(VET_FLOAT3, VES_NORMAL);
-vertexDesc->addVertElem(VET_FLOAT2, VES_TEXCOORD);
-~~~~~~~~~~~~~
-
-You can then use methods like @ref bs::VertexDataDesc::getElementSize "VertexDataDesc::getElementSize" to learn the size of a particular element, @ref bs::VertexDataDesc::getVertexStride "VertexDataDesc::getVertexStride" to learn the stride between elements. You can also retrieve detailed per-property information by iterating over all properties with @ref bs::VertexDataDesc::getNumElements "VertexDataDesc::getNumElements" and @ref bs::VertexDataDesc::getElement "VertexDataDesc::getElement". These methods return a @ref bs::VertexElement "VertexElement" which can be used for finding out the offset of the individual element.
-
-To learn more about vertex descriptors read the [mesh](@ref meshes) manual.
-
-## Vertex declaration {#renderAPI_d_b}
-Before a vertex buffer can be used for rendering, you need to tell the pipeline in what format are its vertices structured in. You do that by creating a @ref bs::ct::VertexDeclaration "ct::VertexDeclaration" object using the @ref bs::VertexDataDesc "VertexDataDesc" we described in the previous section. This object can then be passed to @ref bs::ct::RenderAPI::setVertexDeclaration "ct::RenderAPI::setVertexDeclaration" to bind it to the pipeline.
-
-For example:
-~~~~~~~~~~~~~{.cpp}
-SPtr<VertexDataDesc> vertexDesc = ...; // Creating vertex desc as above
-SPtr<VertexDeclaration> vertexDecl = VertexDeclaration::create(vertexDesc);
-
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setVertexDeclaration(vertexDecl);
-~~~~~~~~~~~~~
-
-It is important that the vertex declaration contains properties needed by the bound vertex GPU program, as well as that it matches the vertex layout in the vertex buffer. See the [gpu program](@ref gpuPrograms) manual to learn how to retrieve vertex properties expected by a GPU program.
-
-# Index buffer {#renderAPI_e}
-Normally when you draw data from a vertex buffer, the vertices are assumed to form primitives sequentially (e.g. every three vertices form a triangle). By using an @ref bs::ct::IndexBuffer "index buffer" you can provide an additional layer of abstraction. An index buffer is fully optional, but when bound it will be used for forming primitives instead of the vertex buffer (i.e. every three indices will form a triangle). Each entry in an index buffer points to a vertex in the vertex buffer. This level of abstraction allows you to re-use the same vertex in multiple primitives, as well as create a more optimal vertex order for GPU processing.
-
-To create an index buffer call @ref bs::ct::IndexBuffer::create "ct::IndexBuffer::create". It expects a number of indices, and the type of indices. Index type can be either 16- or 32-bit. To bind an index buffer to the pipeline call @ref bs::ct::RenderAPI::setIndexBuffer "ct::RenderAPI::setIndexBuffer".
-
-Reading and writing from/to the index buffer has the identical interface to the vertex buffer, so we won't show it again.
-
-# Drawing {#renderAPI_f}
-Once all the previous states, programs and buffers have been set up, we can finally render our object. First we must set the type of primitives we wish to render by calling @ref bs::ct::RenderAPI::setDrawOperation "ct::RenderAPI::setDrawOperation" with a @ref bs::DrawOperationType "DrawOperationType" specifying the primitive type. This determines how the vertices (or indices) in our buffers are interpreted.
-
-After that you can issue a @ref bs::ct::RenderAPI::draw "ct::RenderAPI::draw" call if rendering without an index buffer. It expects the vertex index to start rendering from, and the number of vertices to render. The number of vertices must be divisible by the number of vertices expected by the @ref bs::DrawOperationType "DrawOperationType" you're using (e.g. three for triangles, two for lines, one for points). The vertices will then be pulled from the vertex buffers, processed by the fixed pipeline controlled by the states, and by the programmable pipeline controlled by the GPU programs and the output image will be rendered to the bound render target.
-
-If using an index buffer you should issue a @ref bs::ct::RenderAPI::drawIndexed "ct::RenderAPI::drawIndexed" call. Aside from the vertex offset/count, it also expects an offset into the index buffer to start rendering from, and the number of indices to render. In this case the vertex offset will be added to every read index, allowing you to re-use the index buffer for potentially different geometry. 
- 
-And this wraps up the rendering pipeline. After this step your object should be rendered to your render target and ready to display. 
- 
-# Compute {#renderAPI_g}
-The compute pipeline is a very simple pipeline that can be used for general purpose calculations. It is separate from the graphics pipeline we have been describing so far, but uses the same functionality, just in a more limited way. You don't have to set fixed states, render targets, vertex/index buffers and only one GPU program type is supported (compute GPU program).
-
-The pipeline is represented with the @ref bs::ct::ComputePipelineState "ct::ComputePipelineState" object, which must be initialized with the compute GPU program to use.
-
-After creation use @ref bs::ct::RenderAPI::setComputePipeline "ct::RenderAPI::setComputePipeline" to bind the pipeline for further operations. When the pipeline is set up you can execute it by calling @ref bs::ct::RenderAPI::dispatchCompute "ct::RenderAPI::dispatchCompute". You should provide it a three dimensional number that determines how many instances of the currently bound GPU program to execute. The total number of executions will be X * Y * Z.
-
-Since compute pipeline doesn't support render targets, you will want to use load-store textures for output. An example of a simple compute pipeline:
-~~~~~~~~~~~~~{.cpp}
-SPtr<GpuProgram> computeProgram = ...;
-
-SPtr<ComputePipelineState> state = ComputePipelineState::create(computeProgram);
-SPtr<GpuParams> computeGpuParams = GpuParams::create(state);
-
-... optionally set some parameters ...
-
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setComputePipeline(state);
-rapi.setGpuParams(computeGpuParams);
-rapi.dispatchCompute(512, 512);
-
-... read from our texture to get the result ...
-~~~~~~~~~~~~~
-
-We won't go any deeper about the details of the compute pipeline as this information can be found by learning about the GPU pipeline in general from other sources.
-
-# API specifics {#renderAPI_h}
-@ref bs::RenderAPI "RenderAPI" can be internally implemented by a variety of actual rendering API's like DirectX or OpenGL. Most of the functionality is shared, but there are always some differences between them to be noted (for example DirectX uses a depth range of [0, 1] while OpenGL uses [-1, 1]). Often those differences can be important for various rendering algorithms.
-
-Use @ref bs::RenderAPI::getAPIInfo "RenderAPI::getAPIInfo" to receive the @ref bs::RenderAPIInfo "RenderAPIInfo" containing such information, so you may modify your rendering accordingly. 
-
-For convenience a specialized @ref bs::RenderAPI::convertProjectionMatrix "RenderAPI::convertProjectionMatrix" method is also provided, which converts a generic engine projection matrix, into a render API specific one.
-
-# Command buffers {#renderAPI_i}
-Almost all @ref bs::ct::RenderAPI "ct::RenderAPI" commands we talked about so far support @ref bs::ct::CommandBuffer "ct::CommandBuffer"s. Command buffers are optional, but they allow the rendering commands to be generated from threads other than the core thread.
-
-To create a command buffer call @ref bs::ct::CommandBuffer::create "ct::CommandBuffer::create" after which provide it to the relevant @ref bs::ct::RenderAPI "ct::RenderAPI" calls. Those commands will get recorded in the command buffer, but not executed. To actually execute the commands call @ref bs::ct::RenderAPI::submitCommandBuffer "ct::RenderAPI::submitCommandBuffer".
-
-This allows rendering to be faster since work can be distributed over multiple CPU cores. Note that only command queuing can happen on a separate thread, command buffer creation and execution must still happen on the core thread.
-
-Command buffer example:
-~~~~~~~~~~~~~{.cpp}
-// Core thread
-SPtr<CommandBuffer> cmdBuffer = CommandBuffer::create(GQT_COMPUTE);
-SPtr<GpuProgram> computeProgram = ...;
-SPtr<GpuParams> computeGpuParams = ...;
-SPtr<ComputePipelineState> state = ComputePipelineState::create(computeProgram);
-
-... queue up worker thread(s) ...
-
-// Worker thread
-RenderAPI& rapi = RenderAPI::instance();
-rapi.setComputePipeline(state, cmdBuffer);
-rapi.setGpuParams(computeGpuParams, cmdBuffer);
-rapi.dispatchCompute(512, 512, cmdBuffer);
-
-// Core thread
-rapi.submitCommandBuffer(cmdBuffer);
-~~~~~~~~~~~~~

+ 0 - 74
Documentation/Manuals/Native/textures.md

@@ -1,74 +0,0 @@
-Textures									{#textures}
-===============
-[TOC]
-
-Textures in Banshee are represented with the @ref bs::Texture "Texture" and @ref bs::ct::Texture "ct::Texture" classes. Both of these provide almost equivalent functionality, but the former is for use on the simulation thread, and the latter is for use on the core thread. If you are confused by the dual nature of the objects, read the [core thread](@ref coreThread) manual. 
-
-We're going to focus on the simulation thread implementation in this manual, and then note the differences in the core thread version at the end.
-
-# Creating a texture {#textures_a}
-To create a texture call @ref bs::Texture::create "Texture::create" or one of its overloads. You'll need to populate the @ref bs::TEXTURE_DESC "TEXTURE_DESC" structure and pass it as a parameter. At minimum you need to provide a @ref bs::TextureType "texture type", dimensions and a @ref bs::PixelFormat "pixel format". The number of dimensions ranges from one to three, depending on the texture type.
-
-Optionally you can also provide the number of mipmaps, number of samples, usage flags and a gamma correction flag:
- - A texture with mip-maps will contain a set of scaled down versions of itself that are used by the GPU for special filtering. 
- - A texture with more than one sample can be used only for rendering (see the [render targets](@ref renderTargets) manual), and is useful for antialiasing.
- - @ref bs::TextureUsage "Usage flags" specify how the texture is allowed to be used.
- - Gamma correction flag specifies whether the data in the texture has been gamma corrected. If enabled, the GPU will transform the texture data back to linear space when accessing it. If the texture is already in linear space, or you do not need it in linear space, you can leave this off.
- 
-For example:
-~~~~~~~~~~~~~{.cpp}
-// Creates a 2D texture, 128x128 with an 8-bit RGBA format
-TEXTURE_DESC desc;
-desc.type = TEX_TYPE_2D;
-desc.width = 128;
-desc.height = 128;
-desc.format = PF_R8G8B8A8;
-
-HTexture texture = Texture::create(desc);
-~~~~~~~~~~~~~ 
-
-You can also create a non-empty texture by creating it with a populated @ref bs::PixelData "PixelData" object. More about @ref bs::PixelData "PixelData" later.
- 
-# Accessing properties {#textures_b} 
-You can access all relevant information about a texture (e.g. width, height) by calling @ref bs::Texture::getProperties() "Texture::getProperties()" which will return an instance of @ref bs::TextureProperties "TextureProperties". 
- 
-# Reading/writing {#textures_c}
-To read and write from/to the texture use the @ref bs::Texture::readData "Texture::readData" and @ref bs::Texture::writeData "Texture::writeData" methods. These expect a face and mipmap index to read/write to, and a @ref bs::PixelData "PixelData" object.
-
-A @ref bs::PixelData "PixelData" object is just a container for a set of pixels. You can create one manually, or use @ref bs::TextureProperties::allocBuffer "TextureProperties::allocBuffer" to create an object of valid size and format for the specified sub-resource index. When reading from the texture the buffer will be filled with pixels from the texture; when writing, you are expected to populate the object.
-
-Be aware that read and write operations are asynchronous and you must follow the rules for @ref asyncMethod "asynchronous methods".
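
A hedged sketch of an asynchronous read (the parameter order of **allocBuffer** and the exact **readData** signature are assumptions):
~~~~~~~~~~~~~{.cpp}
HTexture texture = ...; // previously created texture

// Allocate a buffer of matching size/format for face 0, mip 0
SPtr<PixelData> pixelData = texture->getProperties().allocBuffer(0, 0);

// Queue an asynchronous read; follow the asynchronous method rules
// before accessing the contents of pixelData
texture->readData(pixelData);
~~~~~~~~~~~~~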
-
-## PixelData {#textures_c_a}
-You can create @ref bs::PixelData "PixelData" manually by calling @ref bs::PixelData::create "PixelData::create" and providing it with dimensions and a pixel format. When working with textures you must ensure that the dimensions and format match the texture sub-resource.
-
-Once created you can use @ref bs::PixelData::getColorAt "PixelData::getColorAt", @ref bs::PixelData::setColorAt "PixelData::setColorAt", @ref bs::PixelData::getColors "PixelData::getColors" and @ref bs::PixelData::setColors "PixelData::setColors" to read/write colors from/to its internal buffer.
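
For example, filling a manually created **PixelData** with a gradient (the argument order for **setColorAt** is an assumption):
~~~~~~~~~~~~~{.cpp}
SPtr<PixelData> pixelData = PixelData::create(128, 128, 1, PF_R8G8B8A8);

for (UINT32 y = 0; y < 128; y++)
	for (UINT32 x = 0; x < 128; x++)
		pixelData->setColorAt(Color(x / 127.0f, y / 127.0f, 0.0f), x, y);
~~~~~~~~~~~~~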
-
-You can also use @ref bs::PixelUtil "PixelUtil" to perform various operations on the pixels. This includes generating mip maps, converting between pixel formats, compressing, scaling and more.
-
-## Cached CPU data {#textures_c_b}
-When you read from a texture using the @ref bs::Texture::readData "Texture::readData" method the read will be performed from the GPU. This is useful if the GPU has in some way modified the texture, but will also incur a potentially large performance penalty because it will introduce a CPU-GPU synchronization point. In a lot of cases you might just want to read pixels from a texture that was imported or created on the CPU in some other way.
-
-For this reason @ref bs::Texture::readCachedData "Texture::readCachedData" exists. It will read data quickly with little performance impact, but you must create the texture with the @ref bs::TU_CPUCACHED "TU_CPUCACHED" usage flag. This also means the texture will keep a copy of its pixels in system memory, so use it sparingly. If the texture is modified on the GPU, this method will not reflect those changes.
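
A sketch of creating a CPU-cached texture and reading from the cache (the **readCachedData** signature is an assumption):
~~~~~~~~~~~~~{.cpp}
TEXTURE_DESC desc;
desc.type = TEX_TYPE_2D;
desc.width = 128;
desc.height = 128;
desc.format = PF_R8G8B8A8;
desc.usage = TU_CPUCACHED; // keep a system-memory copy of the pixels

HTexture texture = Texture::create(desc);

// Reads come from the CPU-side copy, avoiding a CPU-GPU sync point
SPtr<PixelData> pixelData = texture->getProperties().allocBuffer(0, 0);
texture->readCachedData(*pixelData);
~~~~~~~~~~~~~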
-
-# Rendering using the texture {#textures_d}
-To use a texture for rendering you need to either:
- - (High level) Assign it to a @ref bs::Material "Material" which will then automatically get used on a @ref bs::Renderable "Renderable" which uses the material. Read the [material manual](@ref materials) for more information.
- - (Low level) Bind the texture to a @ref bs::GpuParams "GpuParams" object, which can then be assigned to pipeline for rendering. Read the [render API manual](@ref renderAPI) for more information.
-
-# Saving/loading {#textures_e}
-A texture is a @ref bs::Resource "Resource" and can be saved/loaded like any other. See the [resource](@ref resources) manual.
-
-# Core thread textures {#textures_f}
-So far we have only talked about the simulation thread @ref bs::Texture "Texture" and have ignored the core thread @ref bs::ct::Texture "ct::Texture". The functionality between the two is mostly the same, the major difference being that the core thread version doesn't have asynchronous read/write methods; those operations are instead performed immediately.
-
-You can also use @ref bs::ct::Texture::lock "ct::Texture::lock" and @ref bs::ct::Texture::unlock "ct::Texture::unlock" to get access to the texture buffer, which allows you to only read/write from/to portions of it, instead of always writing to the entire buffer.
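
A hedged sketch of locking a sub-resource (the parameters, return type and the **GBL_WRITE_ONLY** lock option are assumptions):
~~~~~~~~~~~~~{.cpp}
// Core thread
SPtr<ct::Texture> texture = ...;

// Lock face 0, mip 0 for writing; returns a view of the texture buffer
PixelData locked = texture->lock(GBL_WRITE_ONLY, 0, 0);

// ... write to just a portion of the locked buffer ...

texture->unlock();
~~~~~~~~~~~~~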
-
-And finally, the @ref bs::ct::Texture::copy "ct::Texture::copy" method can be used to quickly copy the contents of one texture to another. This method will also resolve a multi-sampled surface in case the source is multi-sampled but the destination is not.
-
-# Load-store textures {#textures_g}
-Load-store textures are a special type of texture that can be written to by the GPU, as opposed to normal textures which are read-only from the GPU's perspective. They are particularly useful for compute operations, which cannot use render targets for output, or for GPU operations that wish to write to arbitrary locations rather than just their own pixel location.
-
-To create a load-store texture just provide the @ref bs::TU_LOADSTORE "TU_LOADSTORE" usage flag on creation - the rest of the creation procedure is identical. Load-store textures cannot be multisampled and cannot be used as render or depth-stencil targets. They also cannot have mip-maps nor can they be created with compressed texture formats.
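
Creating one mirrors the earlier texture creation example, with only the usage flag changed:
~~~~~~~~~~~~~{.cpp}
TEXTURE_DESC desc;
desc.type = TEX_TYPE_2D;
desc.width = 512;
desc.height = 512;
desc.format = PF_R8G8B8A8; // compressed formats are not allowed
desc.usage = TU_LOADSTORE; // no multisampling, mip-maps or RT usage

HTexture loadStoreTex = Texture::create(desc);
~~~~~~~~~~~~~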
-
-Rendering using these textures is similar to normal textures, but when binding them to @ref bs::Material "Material" or @ref bs::GpuParams "GpuParams" they also require a separate @ref bs::TextureSurface "TextureSurface" parameter to determine the surface of the texture to bind, in case it has multiple surfaces or mip levels.