Browse source code

More work on documentation

BearishSun 8 years ago
parent
commit
93970b7753

+ 94 - 0
Documentation/Manuals/Native/advMemAlloc.md

@@ -0,0 +1,94 @@
+Advanced memory allocation									{#advMemAlloc}
+===============
+[TOC]
+
+Banshee allows you to allocate memory in a variety of ways, so you can pick a fast allocation strategy for each situation. We have already shown how to allocate memory for the general case using **bs_new** / **bs_delete** and **bs_alloc** / **bs_free**, as well as how to use shared pointers. But allocating memory through these general purpose allocators can be expensive. Therefore it is beneficial to have more specialized allocator types that come with certain restrictions, but allocate memory with almost no overhead.
+
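+As a quick reminder, general purpose allocation looks roughly like this (using the functions covered in the earlier manuals):
+~~~~~~~~~~~~~{.cpp}
+UINT8* buffer = (UINT8*)bs_alloc(1024); // Allocate a raw buffer of 1024 bytes
+Vector2* vector = bs_new<Vector2>(); // Allocate and construct an object
+
+// Free and destruct everything
+bs_free(buffer);
+bs_delete(vector);
+
+// Shared pointer whose memory is managed by the general purpose allocator
+SPtr<Vector2> sharedVector = bs_shared_ptr_new<Vector2>();
+~~~~~~~~~~~~~
+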
+# Stack allocator {#advMemAlloc_a}
+The stack allocator allows you to allocate memory quickly and with zero fragmentation. It comes with the restriction that it can only deallocate memory in the opposite order it was allocated. This usually makes it suitable only for temporary allocations within a single method, where you can guarantee the proper order.
+
+Use @ref bs::bs_stack_alloc "bs_stack_alloc()" / @ref bs::bs_stack_free "bs_stack_free()" or @ref bs::bs_stack_new "bs_stack_new()" / @ref bs::bs_stack_delete "bs_stack_delete()" to allocate/free memory using the stack allocator.
+
+~~~~~~~~~~~~~{.cpp}
+UINT8* buffer = bs_stack_alloc(1024);
+... do something with buffer ...
+UINT8* buffer2 = bs_stack_alloc(512);
+... do something with buffer2 ...
+bs_stack_free(buffer2); // Must free buffer2 first!
+bs_stack_free(buffer);
+~~~~~~~~~~~~~
+
+# Frame allocator {#advMemAlloc_b}
+The frame allocator segments all allocated memory into *frames*. These frames are stored in a stack-like fashion, and must be deallocated in the opposite order they were allocated, similar to how the stack allocator works. However there are no restrictions on memory deallocation within a single frame, which makes this type of allocator usable in many more situations than the stack allocator. Its downside is that it doesn't deallocate memory until the whole frame is freed, which means it usually uses up more memory than it would otherwise need to.
+
+Use @ref bs::bs_frame_mark "bs_frame_mark()" to start a new frame, and use @ref bs::bs_frame_clear "bs_frame_clear()" to free all of the memory in a single frame. The frames have to be released in the opposite order they were created.
+
+Once you have started a frame use @ref bs::bs_frame_alloc "bs_frame_alloc()" / @ref bs::bs_frame_free "bs_frame_free()" or @ref bs::bs_frame_new "bs_frame_new()" / @ref bs::bs_frame_delete "bs_frame_delete()" to allocate/free memory using the frame allocator. Calls to **bs_frame_free()** / **bs_frame_delete()** are required even though the frame allocator doesn't process individual deallocations; they are used primarily for debugging purposes.
+
+~~~~~~~~~~~~~{.cpp}
+// Mark a new frame
+bs_frame_mark();
+UINT8* buffer = bs_frame_alloc(1024);
+... do something with buffer ...
+UINT8* buffer2 = bs_frame_alloc(512);
+... do something with buffer2 ...
+bs_frame_free(buffer); // Only does some checks in debug mode, doesn't actually free anything
+bs_frame_free(buffer2); // Only does some checks in debug mode, doesn't actually free anything
+bs_frame_clear(); // Frees memory for both buffers
+~~~~~~~~~~~~~
+
+## Local frame allocators {#advMemAlloc_b_a}
+
+You can also create your own frame allocators by constructing a @ref bs::FrameAlloc "FrameAlloc" object and calling memory management methods on it directly. While the global frame allocator methods are mostly useful for temporary allocations within a single method, creating your own frame allocator allows you to share frame allocated memory between different objects and persist it for a longer period of time.
+
+For example if you are running some complex algorithm involving multiple classes you might create a frame allocator to be used throughout the algorithm, and then just free all the memory at once when the algorithm finishes.
+
+~~~~~~~~~~~~~{.cpp}
+FrameAlloc alloc;
+alloc.markFrame();
+UINT8* buffer = alloc.alloc(1024);
+... do something with buffer ...
+UINT8* buffer2 = alloc.alloc(512);
+... do something with buffer2 ...
+alloc.dealloc(buffer);
+alloc.dealloc(buffer2);
+alloc.clear();
+~~~~~~~~~~~~~
+
+## Container allocators {#advMemAlloc_b_b}
+
+You may also use the frame allocator to allocate containers like **String**, **Vector** or **Map**. Simply mark the frame as in the above example, and then use the following container alternatives: @ref bs::FrameString "FrameString", @ref bs::FrameVector "FrameVector" or @ref bs::FrameMap "FrameMap" (most other containers also have a *Frame* version). For example:
+
+~~~~~~~~~~~~~{.cpp}
+// Mark a new frame
+bs_frame_mark();
+{
+	FrameVector<UINT8> vector;
+	... populate the vector ... // No dynamic memory allocation cost as with a normal Vector
+} // Block making sure the vector is deallocated before calling bs_frame_clear
+bs_frame_clear(); // Frees memory for the vector
+~~~~~~~~~~~~~
+
+# Static allocator {#advMemAlloc_c}
+@ref bs::StaticAlloc<BlockSize, MaxDynamicMemory> "StaticAlloc<BlockSize, MaxDynamicMemory>" is the only specialized type of allocator that is used for permanent allocations. It works by pre-allocating a user-defined number of bytes. It then tries to use this pre-allocated buffer for any allocations requested from it. As long as the number of allocated bytes doesn't exceed the size of the pre-allocated buffer, allocations are basically free. If you exceed the size of the pre-allocated buffer the allocator will fall back on dynamic allocations.
+
+The downside of this allocator is that the pre-allocated buffer uses up memory whether that memory is actually required or not. Therefore it is important to predict a good static buffer size, so that little memory is wasted and most objects don't exceed the buffer size. This kind of allocator is mostly useful when you have many relatively small objects, each of which requires a dynamic allocation of a different size.
+
+~~~~~~~~~~~~~{.cpp}
+class MyObj
+{
+	StaticAlloc<512> mAlloc; // Ensures that every instance of this object has 512 bytes pre-allocated
+	UINT8* mData = nullptr;
+	
+public:
+	MyObj(int size)
+	{
+		// As long as size doesn't go over 512 bytes, no dynamic allocations will be made
+		mData = mAlloc.alloc(size);
+	}
+	
+	~MyObj()
+	{
+		mAlloc.free(mData);
+	}
+};
+~~~~~~~~~~~~~

+ 17 - 0
Documentation/Manuals/Native/any.md

@@ -0,0 +1,17 @@
+Any									{#any}
+===============
+[TOC]
+
+@ref bs::Any "Any" is a specialized data type that allows you to store any kind of object in it. For example:
+~~~~~~~~~~~~~{.cpp}
+Any var1 = Vector<String>();
+
+struct MyStruct { int a; };
+Any var2 = MyStruct();
+~~~~~~~~~~~~~
+
+Use @ref bs::any_cast "any_cast()" and @ref bs::any_cast_ref "any_cast_ref()" to retrieve valid types from an **Any** variable.
+~~~~~~~~~~~~~{.cpp}
+Vector<String> val1 = any_cast<Vector<String>>(var1);
+MyStruct& val2 = any_cast_ref<MyStruct>(var2);
+~~~~~~~~~~~~~

+ 17 - 0
Documentation/Manuals/Native/crashHandling.md

@@ -0,0 +1,17 @@
+Crash handling							{#crashHandling}
+===============
+[TOC]
+
+Whenever possible you should avoid triggering a crash and instead recover from error conditions as gracefully as you can. When crashing cannot be avoided, use @ref bs::CrashHandler "CrashHandler" to report fatal errors. Call @ref bs::CrashHandler::reportCrash "CrashHandler::reportCrash()" to manually trigger such an error. The error will be logged and a message box with relevant information displayed.
+
+~~~~~~~~~~~~~{.cpp}
+gCrashHandler().reportCrash("My error", "My error description");
+~~~~~~~~~~~~~
+
+You can also use the @ref BS_EXCEPT macro, which internally calls **CrashHandler::reportCrash()** but automatically adds file/line information and terminates the process after reporting the crash.
+
+~~~~~~~~~~~~~{.cpp}
+BS_EXCEPT(InternalErrorException, "My error description");
+~~~~~~~~~~~~~
+
+**CrashHandler** also provides the @ref bs::CrashHandler::getStackTrace "CrashHandler::getStackTrace()" method, which allows you to retrieve a stack trace for the current method. This can be useful for logging or custom crash handling.
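+
+For example, a minimal sketch of custom error reporting (assuming the stack trace is returned as a **String**):
+~~~~~~~~~~~~~{.cpp}
+// Log the current call stack as part of some custom error handling
+String stackTrace = gCrashHandler().getStackTrace();
+gDebug().logWarning("Something went wrong. Stack trace:\n" + stackTrace);
+~~~~~~~~~~~~~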

+ 24 - 0
Documentation/Manuals/Native/dynLib.md

@@ -0,0 +1,24 @@
+Dynamic libraries									{#dynLib}
+===============
+[TOC]
+
+In order to load dynamic libraries like .dll or .so, you can use the @ref bs::DynLibManager "DynLibManager" module. It has two main methods:
+ - @ref bs::DynLibManager::load "DynLibManager::load()" - Accepts the file name of the library (without extension), and returns a @ref bs::DynLib "DynLib" object if the load is successful, or null otherwise.
+ - @ref bs::DynLibManager::unload "DynLibManager::unload()" - Unloads a previously loaded library.
+ 
+Once the library is loaded you can use the **DynLib** object and its @ref bs::DynLib::getSymbol "DynLib::getSymbol()" method to retrieve a function pointer within the dynamic library, and call into it. 
+
+~~~~~~~~~~~~~{.cpp}
+// Load library
+DynLib* myLibrary = DynLibManager::instance().load("myPlugin");
+
+// Retrieve function pointer (symbol) to the "loadPlugin" method
+typedef void* (*LoadPluginFunc)();
+LoadPluginFunc loadPluginFunc = (LoadPluginFunc)myLibrary->getSymbol("loadPlugin");
+
+// Call the function
+loadPluginFunc();
+
+// Assuming we're done, unload the plugin
+DynLibManager::instance().unload(myLibrary);
+~~~~~~~~~~~~~

+ 28 - 0
Documentation/Manuals/Native/flags.md

@@ -0,0 +1,28 @@
+Flags									{#flags}
+===============
+[TOC]
+
+@ref bs::Flags<Enum, Storage> "Flags<Enum, Storage>" provides a wrapper around an `enum` and allows you to easily perform bitwise operations on its values without having to cast to integers. For example, when using a raw C++ `enum` you must do something like this:
+~~~~~~~~~~~~~{.cpp}
+enum class MyFlag
+{
+	Flag1 = 1<<0,
+	Flag2 = 1<<1,
+	Flag3 = 1<<2
+};
+
+MyFlag combined = (MyFlag)((UINT32)MyFlag::Flag1 | (UINT32)MyFlag::Flag2);
+~~~~~~~~~~~~~
+
+This is cumbersome. **Flags** require an additional definition step per enum, but after that they allow you to manipulate values much more naturally.
+
+To create a **Flags<Enum, Storage>** type for an enum simply define a `typedef` with your enum type provided as the template parameter. You must also follow that definition with a @ref BS_FLAGS_OPERATORS macro in order to ensure all operators are properly defined.
+~~~~~~~~~~~~~{.cpp}
+typedef Flags<MyFlag> MyFlags;
+BS_FLAGS_OPERATORS(MyFlag)
+~~~~~~~~~~~~~
+
+Now you can do something like this:
+~~~~~~~~~~~~~{.cpp}
+MyFlags combined = MyFlag::Flag1 | MyFlag::Flag2;
+~~~~~~~~~~~~~

+ 40 - 0
Documentation/Manuals/Native/gpuProfiling.md

@@ -0,0 +1,40 @@
+GPU profiling 							{#gpuProfiling}
+===============
+[TOC]
+
+GPU operations cannot be profiled using the CPU profiler because they execute asynchronously. When you call a method that executes on the GPU (such as those on **RenderAPI**, as well as GPU resource read/write operations) it returns almost immediately, so the timing information reported by the CPU profiler will not be representative of the time it actually took to execute the operation.
+
+Therefore GPU profiling is instead handled by @ref bs::ProfilerGPU "ProfilerGPU", globally accessible from @ref bs::gProfilerGPU "gProfilerGPU()". It allows you to track execution times of GPU operations, as well as other helpful information.
+
+# Sampling {#gpuProfiling_a}
+
+Similar to CPU profiling, you issue sampling calls using @ref bs::ProfilerGPU::beginSample "ProfilerGPU::beginSample()" and @ref bs::ProfilerGPU::endSample "ProfilerGPU::endSample()". Any GPU commands executed between these two calls will be measured.
+
+~~~~~~~~~~~~~{.cpp}
+RenderAPI& rapi = RenderAPI::instance();
+
+// ... bind pipeline states, buffers, etc.
+
+// Measure how long it takes the GPU to draw something
+gProfilerGPU().beginSample("mySample");
+rapi.drawIndexed(0, numIndices, 0, numVertices);
+gProfilerGPU().endSample("mySample");
+~~~~~~~~~~~~~
+
+Each GPU sample will measure the time it took to execute the commands on the GPU, but will also gather various resource usage stats. For example, it will measure the number of draw calls, number of vertices/primitives drawn, render state switches, and similar.
+
+All **ProfilerGPU::beginSample()** and **ProfilerGPU::endSample()** calls need to be placed in between @ref bs::ProfilerGPU::beginFrame "ProfilerGPU::beginFrame()" and @ref bs::ProfilerGPU::endFrame "ProfilerGPU::endFrame()". As their names imply, you should only call these methods once per frame. If you are working with the default renderer, they will be called for you by the renderer.
+
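+If you are not using the default renderer, wrapping your samples manually might look roughly like this:
+~~~~~~~~~~~~~{.cpp}
+gProfilerGPU().beginFrame();
+
+gProfilerGPU().beginSample("mySample");
+// ... issue GPU commands ...
+gProfilerGPU().endSample("mySample");
+
+gProfilerGPU().endFrame();
+~~~~~~~~~~~~~
+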
+# Reporting {#gpuProfiling_b}
+
+The GPU profiler will generate a single report per **ProfilerGPU::beginFrame()** / **ProfilerGPU::endFrame()** pair. Since the GPU executes asynchronously, the report might not be available as soon as you call **ProfilerGPU::endFrame()**. Instead you should call @ref bs::ProfilerGPU::getNumAvailableReports() "ProfilerGPU::getNumAvailableReports()" to check if a report is available. If there is one you can then call @ref bs::ProfilerGPU::getNextReport "ProfilerGPU::getNextReport()" to retrieve it. Report information is contained in a @ref bs::GPUProfilerReport "GPUProfilerReport" structure.
+
+~~~~~~~~~~~~~{.cpp}
+if(ProfilerGPU::instance().getNumAvailableReports() > 0)
+{
+	GPUProfilerReport report = ProfilerGPU::instance().getNextReport(); 
+	gDebug().logDebug("It took " + toString(report.frameSample.timeMs) + "ms to execute " + toString(report.frameSample.numDrawCalls) + " draw calls.");
+}
+~~~~~~~~~~~~~
+
+Retrieving a report removes it from the profiler. If more than one report is available the oldest one is returned first. If you don't retrieve reports for multiple frames the system will start discarding the oldest ones.
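+
+To make sure no reports are lost, you can simply poll for them every frame, for example:
+~~~~~~~~~~~~~{.cpp}
+// Call once per frame so no queued reports get discarded
+while(gProfilerGPU().getNumAvailableReports() > 0)
+{
+	GPUProfilerReport report = gProfilerGPU().getNextReport();
+	// ... record or display the report ...
+}
+~~~~~~~~~~~~~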

+ 14 - 2
Documentation/Manuals/Native/manuals.md

@@ -84,12 +84,24 @@ A set of manuals covering advanced functionality intented for those wanting to e
  - [GPU buffers](@ref gpuBuffers)
  - [Compute](@ref compute)
  - [Command buffers](@ref commandBuffers)
+ - [GPU profiling](@ref gpuProfiling)
  - [Working example](@ref lowLevelRenderingExample)
-
+- **More utilities**
+ - [Modules](@ref modules)
+ - [Advanced memory allocation](@ref advMemAlloc)
+ - [Crash handling](@ref crashHandling)
+ - [Dynamic libraries](@ref dynLib)
+ - [Flags](@ref flags)
+ - [Any](@ref any) 
+ - [Unit tests](@ref unitTests)
+- [Threading](@ref threading)
+- **Renderer**
+ - [Renderer extensions](@ref rendererExtensions)
+ - [Creating a renderer manually](@ref customRenderer)
+ 
 ## General guides
 Name                                      | Description
 ------------------------------------------|-------------
-[Utilities](@ref utilities)               | Provides an overview of a variety of utility systems used throughout Banshee.
 [Resources](@ref resources)               | Explains how resources work, including saving, loading and creating brand new resource types.
 [Scripting](@ref scripting)               | Shows you how to interact with the scripting system, and how to expose C++ objects to the scripting API.
 [Renderer](@ref renderer)                 | Explains how the renderer works on the low level, and how to create a custom renderer so you may fully customize the look of your application.

+ 18 - 0
Documentation/Manuals/Native/modules.md

@@ -0,0 +1,18 @@
+Modules									{#modules}
+===============
+[TOC]
+
+A @ref bs::Module<T> "Module<T>" is a specialized form of singleton used for many of Banshee's systems. Unlike standard singletons it requires manual startup and shutdown. To use it for your own objects, simply inherit from it and provide your own class as its template parameter.
+
+~~~~~~~~~~~~~{.cpp}
+class MyModule : public Module<MyModule>
+{ };
+~~~~~~~~~~~~~
+
+Use @ref bs::Module<T>::startUp "Module<T>::startUp()" to start it up. Once started use @ref bs::Module<T>::instance "Module<T>::instance()" to access its instance. Once done with it call @ref bs::Module<T>::shutDown "Module<T>::shutDown()" to release it.
+
+~~~~~~~~~~~~~{.cpp}
+MyModule::startUp();
+MyModule::instance().doSomething();
+MyModule::shutDown();
+~~~~~~~~~~~~~

+ 0 - 37
Documentation/Manuals/Native/profiling.md

@@ -1,37 +0,0 @@
-Profiling code								{#profiling}
-===============
-[TOC]
-
-Code profiling is an important process to determine performance bottlenecks. Profiling measures code execution times, memory allocations or various other resource usage. Banshee supports CPU and GPU profiling.
-
-# CPU profiling {#profiling_a}
-CPU profiler allow you to measure execution times of code executing on the CPU. It will also track the number of memory operations. CPU profiling is handled by the @ref bs::ProfilerCPU "ProfilerCPU" module. You use that module to set up sampling points that determine which areas of code to measure, and to retrieve reports telling you the measured results.
-
-The profiler supports two separate measuring modes, the normal mode measures time in milliseconds, while the precise mode measures time in CPU cycles. The precise mode does have drawbacks as it is inacurrate for longer code as it will not account for OS context switches and similar. Usually you will be using the normal measuring mode, and reserve the precise mode when you need to know exactly how many cycles some relatively small operation takes.
-
-To start profiling, issue a call to @ref bs::ProfilerCPU::beginSample "ProfilerCPU::beginSample" or @ref bs::ProfilerCPU::beginSamplePrecise "ProfilerCPU::beginSamplePrecise". Both of these methods accept a string that can be used for identifying the result when the profiler report is generated. You must follow each of these with a call to @ref bs::ProfilerCPU::endSample "ProfilerCPU::endSample" or @ref bs::ProfilerCPU::endSamplePrecise "ProfilerCPU::endSamplePrecise", which take the same identifier string as their counterparts. Any code executed between these calls will be measured.
-
-You can also nest profiling calls, by calling another `beginSample` within an existing `beginSample` call.
-
-Once you are done sampling your code, call @ref bs::ProfilerCPU::generateReport() "ProfilerCPU::generateReport". This will return a @ref bs::CPUProfilerReport "CPUProfilerReport" which contains both normal and precise sampling data as a hierarchy of samples. You can then use this data to print it on the screen, save it to a log, or similar.
-
-You may also use the built-in @ref bs::ProfilerOverlay component to automatically display contents of both CPU and GPU profilers every frame, which will save you from manually parsing the profiler reports. Just add it to the scene and assign it a camera to render to.
-
-If you just want to clear profiling data without generating a report, call @ref bs::ProfilerCPU::reset "ProfilerCPU::reset()".
-
-## Threads {#profiling_b}
-The profiler is thread-safe, but if you will be profiling code on threads not managed by the engine, you must manually call @ref bs::ProfilerCPU::beginThread "ProfilerCPU::beginThread" before any sample calls, and @ref bs::ProfilerCPU::endThread "ProfilerCPU::endThread" after all sample calls.
-
-## Overhead {#profiling_c}
-Profiler code itself will introduce a certain amount of overhead which will slightly skew profiling results. The profiler attempts to estimate its error, which is reported in the returned reports. You can choose to take this into consideration if you need really precise results.
-
-# GPU profiling {#profiling_d}
-GPU profiling is handled through @ref bs::ProfilerGPU "ProfilerGPU" module, which allows you to track execution times of GPU operations, as well as various resource usages.
-
-Similar to CPU profiling, you issue sampling calls using @ref bs::ProfilerGPU::beginSample "ProfilerGPU::beginSample" and @ref bs::ProfilerGPU::endSample "ProfilerGPU::endSample". Any GPU commands executed between these two calls will be measured.
-
-Each GPU sample will measure the time to took the execute the command on the GPU, but will also measure various resource usage stats. For example it will measure number of draw calls, number of vertices/primitives drawn, render state switches, and similar.
-
-All this information will be contained in a @ref bs::GPUProfilerReport "GPUProfilerReport". To retrieve the report use @ref bs::ProfilerGPU::getNumAvailableReports() "ProfilerGPU::getNumAvailableReports()" and @ref bs::ProfilerGPU::getNextReport "ProfilerGPU::getNextReport()". 
-
-GPU profiler always generates only a single report per frame. It will queue the reports, and the above method will return the oldest report and remove it from the queue. Only a certain small number of reports are queued, and old reports will be lost if you don't retrieve them every few frames.

+ 157 - 0
Documentation/Manuals/Native/threading.md

@@ -0,0 +1,157 @@
+Threading									{#threading}
+===============
+[TOC]
+
+In this chapter we'll show how to start new threads of execution and how to safely synchronize between them. We'll start with an explanation of the basic threading primitives, and then move on to higher-level concepts like the thread pool and the task scheduler.
+
+# Primitives {#threading_a}
+This section describes the most basic primitives you can use to manipulate threads. All threading primitives use the standard C++ library constructs, so for more information you should read their documentation.
+
+## Thread {#threading_a_a}
+To create a new thread use @ref bs::Thread "Thread", with its constructor parameter being a function pointer to the function that will execute on the new thread.
+~~~~~~~~~~~~~{.cpp}
+void workerFunc()
+{
+	// This runs on another thread
+}
+
+Thread myThread(&workerFunc);
+~~~~~~~~~~~~~
+
+## Mutex {#threading_a_b}
+Use @ref bs::Mutex "Mutex" and @ref bs::Lock "Lock" to synchronize access between multiple threads. **Lock** automatically locks the mutex when it's constructed, and unlocks it when it goes out of scope.
+
+~~~~~~~~~~~~~{.cpp}
+Vector<int> output;
+int startIdx = 0;
+Mutex mutex;
+
+void workerFunc()
+{
+	// Lock the mutex before modifying either "output" or "startIdx"
+	// This ensures only one thread ever accesses them at once
+	Lock lock(mutex);
+	output.push_back(startIdx++);
+}
+
+// Start two threads that write to "output"
+Thread threadA(&workerFunc);
+Thread threadB(&workerFunc);
+~~~~~~~~~~~~~
+
+If a mutex needs to be locked recursively (i.e. locked again by a thread that already holds it), use @ref bs::RecursiveMutex "RecursiveMutex" and @ref bs::RecursiveLock "RecursiveLock" instead.
+
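+A minimal sketch of recursive locking:
+~~~~~~~~~~~~~{.cpp}
+RecursiveMutex recMutex;
+
+void inner()
+{
+	// The same thread can lock recMutex again without deadlocking
+	RecursiveLock lock(recMutex);
+	// ... do work ...
+}
+
+void outer()
+{
+	RecursiveLock lock(recMutex);
+	inner(); // Safe, even though the mutex is already held by this thread
+}
+~~~~~~~~~~~~~
+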
+## Signal {#threading_a_c}
+Use @ref bs::Signal "Signal" to pause thread execution until another thread reaches a certain point.
+
+~~~~~~~~~~~~~{.cpp}
+bool isReady = false;
+int result = 0;
+
+Signal signal;
+Mutex mutex;
+
+void workerFunc()
+{
+	for(int i = 0; i < 100000; i++)
+		result += i; // Or some more complex calculation
+	
+	// Lock the mutex so we can safely modify isReady
+	{
+		Lock lock(mutex);
+		isReady = true;		
+	} // Automatically unlocked when lock goes out of scope
+	
+	// Notify everyone waiting that the signal is ready
+	signal.notify_all();
+}
+
+// Start executing workerFunc
+Thread myThread(&workerFunc);
+
+// Block until the worker thread sets isReady and triggers the signal
+// (re-checking isReady in a loop guards against spurious wake-ups)
+Lock lock(mutex);
+while(!isReady)
+	signal.wait(lock);
+~~~~~~~~~~~~~
+
+## Other {#threading_a_d}
+The previous sections covered all the primitives, but there is some more useful functionality to be aware of:
+ - @ref BS_THREAD_HARDWARE_CONCURRENCY - Returns number of logical CPU cores.
+ - @ref BS_THREAD_CURRENT_ID - Returns @ref bs::ThreadId "ThreadId" of the current thread.
+ - @ref BS_THREAD_SLEEP - Pauses the current thread for a set number of milliseconds.
+
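+A minimal sketch of how these might be used (assuming the macros expand to expressions/statements as described above):
+~~~~~~~~~~~~~{.cpp}
+UINT32 numCores = BS_THREAD_HARDWARE_CONCURRENCY; // e.g. 8 on a quad-core CPU with hyperthreading
+ThreadId thisThread = BS_THREAD_CURRENT_ID;
+
+BS_THREAD_SLEEP(10); // Pause the current thread for 10 milliseconds
+~~~~~~~~~~~~~
+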
+# Thread pool {#threading_b}
+Instead of using **Thread** as described in the previous section, you can use the @ref bs::ThreadPool "ThreadPool" module for running threads. It allows you to re-use threads and avoid paying the cost of thread creation and destruction. It keeps retired threads around in an idle state and re-uses them when the user requests a new thread.
+
+An example:
+~~~~~~~~~~~~~{.cpp}
+void workerFunc()
+{
+	// This runs on another thread
+}
+
+ThreadPool::instance().run("MyThread", &workerFunc);
+~~~~~~~~~~~~~
+
+# Task scheduler {#threading_c}
+The @ref bs::TaskScheduler "TaskScheduler" module allows even more fine-grained control over threads. It ensures there are only as many threads as there are logical CPU cores. This ensures good thread distribution across the cores, so that multiple threads don't fight for resources on the same core.
+
+It accomplishes that by storing each worker function as a @ref bs::Task "Task", which it then dispatches to threads that are free. This means you can queue up as many tasks as required without needing to worry about efficiently utilizing CPU cores.
+
+To create a task call @ref bs::Task::create "Task::create()" with a task name, and a function pointer that will execute the task code.
+
+~~~~~~~~~~~~~{.cpp}
+void workerFunc()
+{
+	// This runs on another thread
+}
+
+SPtr<Task> task = Task::create("MyTask", &workerFunc);
+~~~~~~~~~~~~~
+
+Then run the task by calling @ref bs::TaskScheduler::addTask() "TaskScheduler::addTask()".
+
+~~~~~~~~~~~~~{.cpp}
+TaskScheduler::instance().addTask(task);
+~~~~~~~~~~~~~
+
+Tasks can also have priorities and dependencies. Normally tasks start executing in the order they are submitted, but tasks with a higher priority will execute sooner than those with a lower priority. In case one task depends on another you can set up a dependency, which ensures the dependent task only executes after its dependency has finished.
+
+Both priorities and dependencies are provided as extra parameters to the **Task::create()** method.
+
+~~~~~~~~~~~~~{.cpp}
+int a;
+int b;
+
+void dependencyWorkerFunc() 
+{
+	a = 5 + 3;
+}
+
+void workerFunc() 
+{
+	b = a * 8;
+}
+
+SPtr<Task> dependency = Task::create("MyDependency", &dependencyWorkerFunc);
+
+// Run task with high priority, and a dependency on another task
+SPtr<Task> task = Task::create("MyTask", &workerFunc, TaskPriority::High, dependency);
+
+TaskScheduler::instance().addTask(dependency);
+TaskScheduler::instance().addTask(task);
+~~~~~~~~~~~~~
+
+You can cancel a task by calling @ref bs::Task::cancel() "Task::cancel()". Note that this will only cancel the task if it hasn't already started executing.
+
+~~~~~~~~~~~~~{.cpp}
+task->cancel();
+~~~~~~~~~~~~~
+
+Finally, you can block the current thread until a task finishes by calling @ref bs::Task::wait "Task::wait()".
+
+~~~~~~~~~~~~~{.cpp}
+task->wait();
+// Task guaranteed to be finished at this point
+~~~~~~~~~~~~~

+ 32 - 0
Documentation/Manuals/Native/unitTests.md

@@ -0,0 +1,32 @@
+Unit tests									{#unitTests}
+===============
+[TOC]
+
+All unit tests are implemented as a part of a @ref bs::TestSuite "TestSuite" class. You can create your own test suites, or add tests to the existing ones. 
+
+To register new tests call @ref BS_ADD_TEST in the test suite's constructor. The test method must not accept any parameters or return any values. To report a test failure call @ref BS_TEST_ASSERT or @ref BS_TEST_ASSERT_MSG. If neither of those triggers, the test is assumed to be successful.
+
+~~~~~~~~~~~~~{.cpp}
+class MyTestSuite : public TestSuite
+{
+public:
+	MyTestSuite()
+	{
+		BS_ADD_TEST(MyTestSuite::myTest);
+	}
+	
+private:
+	void myTest()
+	{
+		BS_TEST_ASSERT_MSG(2 + 2 == 4, "Something really bad is going on.");
+	}
+};
+~~~~~~~~~~~~~
+
+To run all tests in a test suite, create an instance of your **TestSuite** and run it, like so:
+~~~~~~~~~~~~~{.cpp}
+SPtr<TestSuite> tests = MyTestSuite::create<MyTestSuite>();
+tests->run(ExceptionTestOutput());
+~~~~~~~~~~~~~
+
+When running the tests we provide @ref bs::ExceptionTestOutput "ExceptionTestOutput", which tells the test runner to terminate the application when a test fails. You can implement your own @ref bs::TestOutput "TestOutput" class to handle test failures more gracefully.
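+
+A hypothetical sketch of such a class is shown below; the exact **TestOutput** interface (and the signature of its failure callback) is an assumption here, so check the actual header before relying on it:
+~~~~~~~~~~~~~{.cpp}
+// Logs test failures instead of terminating the application
+class LoggingTestOutput : public TestOutput
+{
+public:
+	// Assumed signature of the failure callback
+	void outputFail(const String& desc, const String& function, const String& file, long line) override
+	{
+		gDebug().logWarning("Test failed: " + desc + " (" + function + ")");
+	}
+};
+~~~~~~~~~~~~~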

+ 0 - 477
Documentation/Manuals/Native/utilities.md

@@ -1,477 +0,0 @@
-Utilities									{#utilities}
-===============
-[TOC]
-
-This manual will quickly go over all the important utility systems commonly used by Banshee. We won't go into major detail about these features, but will rather point you towards the relevant API documentation.
-
-# Module {#utilities_a}
-A @ref bs::Module<T> "Module<T>" is a specialized form of singleton used for many of Banshee's systems. Unlike normal singletons it requires manual startup and shutdown, which solves many of the problems associated with traditional singletons.
-
-To use it for your own objects, simply inherit from it:
-~~~~~~~~~~~~~{.cpp}
-class MyModule : public Module<MyModule>
-{ };
-~~~~~~~~~~~~~
-
-Use @ref bs::Module<T>::startUp "Module<T>::startUp" to start it up. Once started use @ref bs::Module<T>::instance "Module<T>::instance" to access its instance. Once done with it call @ref bs::Module<T>::shutDown "Module<T>::shutDown" to release it. For example:
-~~~~~~~~~~~~~{.cpp}
-MyModule::startUp();
-MyModule::instance().doSomething();
-MyModule::shutDown();
-~~~~~~~~~~~~~
-
-# Path {#utilities_b}
-Use @ref bs::Path "Path" to manipulate file/folder paths. Initialize it with a path string and then call various methods to manipulate it. It is recommended to always store paths using @ref bs::Path "Path" instead of raw strings.
-
-Some of the things you can do once a @ref bs::Path "Path" is constructed:
- - Retrieve the filename using @ref bs::Path::getFilename "Path::getFilename"
- - Retrieve the filename extension using @ref bs::Path::getExtension "Path::getExtension"
- - Get last element of path, either file or directory using @ref bs::Path::getTail "Path::getTail"
- - Iterate over directories, get drive, combine paths, convert relative to absolute paths and vice versa, and more...
- 
-For example:
-~~~~~~~~~~~~~{.cpp}
-Path myPath("C:\\Path\\To\\File.txt");
-
-String filename = myPath.getFilename(); // Returns file
-myPath.setExtension(".jpg"); // Path is now "C:\Path\To\File.jpg"
-myPath.makeRelative("C:\\Path"); // Path is now "To\File.jpg"
-
-Path a("C:\\Path\\To\\");
-Path b("File.txt");
-Path combined = a + b; // // Path is now "C:\Path\To\File.txt"
-~~~~~~~~~~~~~
- 
-All @ref bs::Path "Path" methods that return strings come in two variants, one that returns a narrow (8-bit) character string like @ref bs::Path::getFilename "Path::getFilename", and one that contains wide character string like @ref bs::Path::getWFilename "Path::getWFilename".
-
-When setting paths be careful with setting backslashes or slashes at the end of the path. Path with a no backslash/slash on the end will be interpreted as a file path, and path with a backslash/slash will be interpreted as a folder path. For example:
- - "C:\MyFolder" - "MyFolder" interpreted as a file, @ref bs::Path::getFilename "Path::getFilename" returns "MyFolder"
- - "C:\MyFolder\" - "MyFolder" interpreted as a folder, @ref bs::Path::getFilename "Path::getFilename" returns an empty string
- 
-# File system {#utilities_c}
-The @ref bs::FileSystem "FileSystem" module allows you to open and create files, create folders, move/copy/remove files and folders, iterate all folder/files in a folder, get file size, check if folder/folder exists, get working path and others.
-
-An example creating a folder and a file:
-~~~~~~~~~~~~~{.cpp}
-FileSystem::createDir("C:\\Path\\To\\");
-SPtr<DataStream> fileStream = FileSystem::createAndOpenFile("C:\\Path\\To\\File.txt");
-... write to data stream...
-~~~~~~~~~~~~~
-# Data streams {#utilities_d}
-@ref bs::DataStream "Data streams" allow you to easily write/read binary/text data from/to disk/memory/etc. The two primary types of streams are @ref bs::MemoryDataStream "MemoryDataStream" for reading/writing directly to memory, and @ref bs::FileDataStream "FileDataStream" for reading/writing to a file.
-
-You create memory streams by providing them with a pointer and size of a memory buffer, while you create file streams by calling @ref bs::FileSystem::openFile "FileSystem::openFile" or @ref bs::FileSystem::createAndOpenFile "FileSystem::createAndOpenFile". Once you are done with a stream make sure to close it by calling @ref bs::DataStream::close "DataStream::close". Stream will also be automatically closed when it goes out of scope.
-
-Once you have a stream you can seek to a position within a stream and read/write to it. For example:
-~~~~~~~~~~~~~{.cpp}
-SPtr<DataStream> fileStream = FileSystem::createAndOpenFile("C:\\Path\\To\\File.txt");
-// Write some string data
-fileStream.writeString("Writing to a file");
-// Write some binary data
-UINT8* myBuffer = bs_alloc(1024);
-... fill up the buffer with some data ...
-fileStream.write(myBuffer, 1024);
-
-fileStream.close();
-~~~~~~~~~~~~~
- 
-# Events {#utilities_e}
-@ref bs::TEvent<RetType, Args> "Events" allow your objects to expose events that may trigger during execution. External objects interested in those events can then register callbacks with those events and be notified when they happen. They are useful because they allow two objects to communicate without necessarily knowing about each other's types, which can reduce class coupling and improve design.
-
-When creating an event, all you need to do it specify a format of the callback it sends out, for example:
-~~~~~~~~~~~~~{.cpp}
-class MySystem
-{
-	static Event<void()> myEvent; // Callback with no parameters
-	static Event<void(UINT32)> myEvent2; // Callback with a UINT32 parameter
-};
-~~~~~~~~~~~~~
-
-Then an external object can register itself with an event by calling @ref bs::TEvent<RetType, Args> "Event::connect". This method will return an @ref bs::HEvent "HEvent" handle. You can use this handle to manually disconnect from the event by calling @ref bs::HEvent::disconnect "HEvent::disconnect". For example:
-~~~~~~~~~~~~~{.cpp}
-// Subscribe to an event we defined previously
-// Simply pass a function pointer matching the callback
-HEvent eventHandle = MySystem::myEvent2.connect(&myEventReceiver);
-
-void myEventReceiver(UINT32 val)
-{
-	// Do something
-}
-~~~~~~~~~~~~~
-
-When using non-static class methods as callbacks, things get a little bit more complicated. This is because each such method by default expects a pointer to an instance of itself (`this` pointer). This is normally hidden from the programmer and happens under the hood, but we must handle it when dealing with callbacks. We can do this by using `std::bind` which allows us to replace function arguments with constant values. For example:
-~~~~~~~~~~~~~{.cpp}
-class EventReceiver
-{
-	EventReceiver()
-	{
-		// Convert a method with signature void(EventReceiver*, UINT32) into void(UINT32) by binding the "EventReceiver*"
-		// argument to the value of "this". Read up on the C++ library for more information about std::bind.
-		MySystem::myEvent2.connect(std::bind(&EventReceiver::myEventReceiver, this, std::placeholders::_1));
-	}
-	
-	void myEventReceiver(UINT32 val)
-	{
-		// Do something
-	}
-};
-~~~~~~~~~~~~~
-
-Then when an object is ready to trigger an event simply call it like a functor:
-~~~~~~~~~~~~~{.cpp}
-MySystem::myEvent(); // Trigger an event with no arguments
-MySystem::myEvent2(5); // Trigger an event with a single argument
-~~~~~~~~~~~~~
-
-# Any {#utilities_f}
-Use the @ref bs::Any "Any" type to easily store any kind of object in it. For example:
-~~~~~~~~~~~~~{.cpp}
-Any var1 = Vector<String>();
-
-struct MyStruct { int a; };
-Any var2 = MyStruct();
-~~~~~~~~~~~~~
-
-Use @ref bs::any_cast "any_cast" and @ref bs::any_cast_ref "any_cast_ref" to retrieve valid types from an @ref bs::Any "Any" variable. For example:
-~~~~~~~~~~~~~{.cpp}
-Vector<String> val1 = any_cast<Vector<String>>(var1);
-MyStruct& val2 = any_cast_ref<MyStruct>(var2);
-~~~~~~~~~~~~~
-# Flags {#utilities_g}
-@ref bs::Flags<Enum, Storage> "Flags" provide a wrapper around an `enum` and allow you to easily perform bitwise operations on them without having to cast to integers. For example when using raw C++ you must do something like this:
-~~~~~~~~~~~~~{.cpp}
-enum class MyFlag
-{
-	Flag1 = 1<<0,
-	Flag2 = 1<<1,
-	Flag3 = 1<<2
-};
-
-MyFlag combined = (MyFlag)((UINT32)MyFlag::Flag1 | (UINT32)MyFlag::Flag2);
-~~~~~~~~~~~~~
-
-Which is cumbersome. Flags require an additional step to define the enum, but after that allow you to manipulate values much more nicely. 
-
-To create @ref bs::Flags<Enum, Storage> "Flags" for an enum simply define a `typedef` with your enum type provided as the template parameter. You must also follow that definition with a @ref BS_FLAGS_OPERATORS macro in order to ensure all operators are properly defined. For example:
-~~~~~~~~~~~~~{.cpp}
-typedef Flags<MyFlag> MyFlags;
-BS_FLAGS_OPERATORS(MyFlag)
-~~~~~~~~~~~~~
-
-Now you can do something like this:
-~~~~~~~~~~~~~{.cpp}
-MyFlags combined = MyFlag::Flag1 | MyFlag::Flag2;
-~~~~~~~~~~~~~
-# String {#utilities_h}
-Banshee uses @ref bs::String "String" for narrow character strings (8-bit) and @ref bs::WString "WString" for wide character strings. Wide character strings are different size depending on platform.
-
-A variety of string manipulation functionality is provided in @ref bs::StringUtil "StringUtil", like matching, replacing, comparing, formating and similar.
-
-Conversion between various types (like int, float, bool, etc.) and string is provided via overloads of @ref bs::toString "toString" and @ref bs::toWString "toWString". You can also convert strings into different types by calling methods like @ref bs::parseINT32 "parseINT32", @ref bs::parseBool "parseBool", and similar for other types.
-
-# Threading {#utilities_i}
-## Primitives {#utilities_i_a}
-This section describes the most basic primitives you can use to manipulate threads. All threading primitives use the standard C++ library constructs, so for more information you should read their documentation.
-
-### Thread {#utilities_i_a_a}
-To create a new thread use @ref bs::Thread "Thread", like so:
-~~~~~~~~~~~~~{.cpp}
-void workerFunc()
-{
-	// This runs on another thread
-}
-
-Thread myThread(&workerFunc);
-~~~~~~~~~~~~~
-
-### Mutex {#utilities_i_a_b}
-Use @ref bs::Mutex "Mutex" and @ref bs::Lock "Lock" to synchronize access between multiple threads, like so:
-~~~~~~~~~~~~~{.cpp}
-Vector<int> output;
-int startIdx = 0;
-Mutex mutex;
-
-void workerFunc()
-{
-	// Lock the mutex before modifying either "output" or "startIdx"
-	// This ensures only one thread every accesses it at once
-	Lock lock(mutex);
-	output.push_back(startIdx++);
-}
-
-// Start two threads that write to "output"
-Thread threadA(&workerFunc);
-Thread threadB(&workerFunc);
-~~~~~~~~~~~~~
-
-If a mutex can be locked recursively, use @ref bs::RecursiveMutex "RecursiveMutex" and @ref bs::RecursiveLock "RecursiveLock" instead.
-
-### Signal {#utilities_i_a_c}
-Use @ref bs::Signal "Signal" to pause thread execution until another thread reaches a certain point. For example:
-~~~~~~~~~~~~~{.cpp}
-bool isReady = false;
-int result = 0;
-
-Signal signal;
-Mutex mutex;
-
-void workerFunc()
-{
-	for(int i = 0; i < 100000; i++)
-		result += i; // Or some more complex calculation
-	
-	// Lock the mutex so we can safely modify isReady
-	{
-		Lock lock(mutex);
-		isReady = true;		
-	} // Automatically unlocked when lock goes out of scope
-	
-	// Notify everyone waiting that the signal is ready
-	signal.notify_all();
-}
-
-// Start executing workerFunc
-Thread myThread(&workerFunc);
-
-// Wait until the signal is triggered, or until isReady is set to true, whichever comes first
-Lock lock(mutex);
-if(!isReady)
-	signal.wait_for(lock);
-~~~~~~~~~~~~~
-
-### Other {#utilities_i_a_d}
-The previous sections covered all the primitives, but there is some more useful functionality to be aware of:
- - @ref BS_THREAD_HARDWARE_CONCURRENCY - Returns number of logical CPU cores.
- - @ref BS_THREAD_CURRENT_ID - Returns @ref bs::ThreadId "ThreadId" of the current thread.
- - @ref BS_THREAD_SLEEP - Pauses the current thread for a set number of milliseconds.
-
-## Thread pool {#utilities_i_b}
-Instead of using @ref bs::Thread "Thread" as described in the previous section, you should instead use @ref bs::ThreadPool "ThreadPool" for running threads. @ref bs::ThreadPool "ThreadPool" allows you to re-use threads and avoid paying the cost of thread creation and destruction. It keeps any thread that was retired in idle state, and will re-use it when user requests a new thread.
-
-An example:
-~~~~~~~~~~~~~{.cpp}
-void workerFunc()
-{
-	// This runs on another thread
-}
-
-ThreadPool::instance().run("MyThread", &workerFunc);
-~~~~~~~~~~~~~
-
-## Task scheduler {#utilities_i_c}
-@ref bs::TaskScheduler "TaskScheduler" allows even more fine grained control over threads. It ensures there are only as many threads as the number of logical CPU cores. This ensures good thread distribution accross the cores, so that multiple threads don't fight for resources on the same core.
-
-It accomplishes that by storing each worker function as a @ref bs::Task "Task". It then dispatches tasks to threads that are free. In case tasks are dependant on one another you may also provide task dependencies, as well as task priorities.
-
-An example:
-~~~~~~~~~~~~~{.cpp}
-void workerFunc()
-{
-	// This runs on another thread
-}
-
-// Create a task with no dependency and normal priority
-SPtr<Task> task = Task::create("MyTask", &workerFunc);
-TaskScheduler::instance().addTask(task);
-~~~~~~~~~~~~~
-
-# Math {#utilities_j}
-Majority of the math related functionality is located in the @ref bs::Math "Math" class. 
-
-Some other useful math classes are:
- - @ref bs::Vector2 "Vector2"
- - @ref bs::Vector3 "Vector3"
- - @ref bs::Vector4 "Vector4"
- - @ref bs::Matrix3 "Matrix3"
- - @ref bs::Matrix4 "Matrix4"
- - @ref bs::Quaternion "Quaternion"
- - @ref bs::Radian "Radian"
- - @ref bs::Degree "Degree"
- - @ref bs::Ray "Ray"
- - @ref bs::Plane "Plane"
- - @ref bs::Rect2 "Rect2"
- - @ref bs::Rect2I "Rect2I"
- - @ref bs::Vector2I "Vector2I"
-
-# Time {#utilities_k}
-To access timing information use the @ref bs::Time "Time" module, more easily accessible via @ref bs::gTime "gTime" method:
- - @ref bs::Time::getTime "Time::getTime" - Returns time since start-up in seconds, updated once per frame.
- - @ref bs::Time::getFrameDelta "Time::getFrameDelta" - Returns the time between execution of this and last frame.
- - @ref bs::Time::getFrameIdx "Time::getFrameIdx" - Returns a sequential index of the current frame.
- - @ref bs::Time::getTimePrecise "Time::getTimePrecise" - Returns time suitable for precision measurements. Returns time at the exact time it was called, instead of being updated once per frame.
- 
-# Logging {#utilities_l}
-To report warnings and errors use the @ref bs::Debug "Debug" module. Call @ref bs::Debug::logDebug "Debug::logDebug", @ref bs::Debug::logWarning "Debug::logWarning" and @ref bs::Debug::logError "Debug::logError" to log messages. 
-
-Use @ref bs::Debug::saveLog "Debug::saveLog" to save a log to the disk in HTML format. Use use @ref bs::Debug::getLog "Debug::getLog" to get a @ref bs::Log "Log" object you can manually parse.
-
-Macros for common log operations are also provided: @ref LOGDBG, @ref LOGWRN and @ref LOGERR. They're equivalent to the methods above.
-
-# Crash handling {#utilities_m}
-Use the @ref bs::CrashHandler "CrashHandler" to report fatal errors. Call @ref bs::CrashHandler::reportCrash "CrashHandler::reportCrash" to manually trigger such an error. An error will be logged, a message box with relevant information displayed and the application terminated.
-
-You can also use @ref BS_EXCEPT macro, which internally calls @ref bs::CrashHandler::reportCrash "CrashHandler::reportCrash" but automatically adds file/line information.
-
-@ref bs::CrashHandler "CrashHandler" also provides @ref bs::CrashHandler::getStackTrace "CrashHandler::getStackTrace" that allows you to retrieve a stack trace to the current method.
-
-# Dynamic libraries {#utilities_n}
-Use @ref bs::DynLibManager "DynLibManager" to load dynamic libraries (.dll, .so). It has two main methods:
- - @ref bs::DynLibManager::load "DynLibManager::load" - Accepts a file name to the library, and returns the @ref bs::DynLib "DynLib" object if the load is successful or null otherwise. 
- - @ref bs::DynLibManager::unload "DynLibManager::unload" - Unloads a previously loaded library.
- 
-Once the library is loaded you can use the @ref bs::DynLib "DynLib" object, and its @ref bs::DynLib::getSymbol "DynLib::getSymbol" method to retrieve a function pointer within the dynamic library, and call into it. For example if we wanted to retrieve a function pointer for the `loadPlugin` method:
-~~~~~~~~~~~~~{.cpp}
-// Load library
-DynLib* myLibrary = DynLibManager::instance().load("myPlugin");
-
-// Retrieve function pointer (symbol)
-typedef void* (*LoadPluginFunc)();
-LoadPluginFunc loadPluginFunc = (LoadPluginFunc)myLibrary->getSymbol("loadPlugin");
-
-// Call the function
-loadPluginFunc();
-
-// Assuming we're done, unload the plugin
-DynLibManager::instance().unload(myLibrary);
-~~~~~~~~~~~~~
-
-# Testing {#utilities_o}
-Implement @ref bs::TestSuite "TestSuite" to set up unit tests for your application. To register new tests call @ref BS_ADD_TEST. Test is assumed to succeed unless either @ref BS_TEST_ASSERT or @ref BS_TEST_ASSERT_MSG are triggered.
-
-~~~~~~~~~~~~~{.cpp}
-class MyTestSuite : TestSuite
-{
-public:
-	EditorTestSuite()
-	{
-		BS_ADD_TEST(MyTestSuite::myTest);
-	}
-	
-private:
-	void myTest()
-	{
-		BS_TEST_ASSERT_MSG(2 + 2 == 4, "Something really bad is going on.");
-	}
-};
-~~~~~~~~~~~~~
-
-To run all tests create a instance of the @ref bs::TestSuite "TestSuite" and run it, like so:
-~~~~~~~~~~~~~{.cpp}
-SPtr<TestSuite> tests = MyTestSuite::create<MyTestSuite>();
-tests->run(ExceptionTestOutput());
-~~~~~~~~~~~~~
-
-When running the test we provide @ref bs::ExceptionTestOutput "ExceptionTestOutput" which tells the test runner to terminate the application when a test fails. You can implement your own @ref bs::TestOutput "TestOutput" to handle test failure more gracefully.
-
-# Allocators {#utilities_p}
-Banshee allows you to allocate memory in various ways, so you can have fast memory allocations for many situations.
-## General {#utilities_p_a}
-The most common memory allocation operations are `new`/`delete` or `malloc`/`free`. Banshee provides its own wrappers for these methods as @ref bs::bs_new "bs_new"/@ref bs::bs_delete "bs_delete" and @ref bs::bs_alloc "bs_alloc"/@ref bs::bs_free "bs_free". They provide the same functionality but make it possible for Banshee to track memory allocations which can be useful for profiling and debugging. You should always use them instead of the standard ones.
-
-Use @ref bs::bs_newN "bs_newN"/@ref bs::bs_deleteN "bs_deleteN" to create and delete arrays of objects.
-
-~~~~~~~~~~~~~{.cpp}
-UINT8* buffer = (UINT8*)bs_alloc(1024); // Allocate a raw buffer of 1024 bytes.
-Vector2* vector = bs_new<Vector2>(); // Allocate and construct a vector
-Vector2** vectors = bs_newN<Vector2>(5); // Allocate an array of five vectors
-
-// Free and destruct everything
-bs_free(buffer);
-bs_delete(vector);
-bs_deleteN(vectors, 5);
-~~~~~~~~~~~~~
-
-## Stack {#utilities_p_b}
-Stack allocator allows you to allocate memory quickly, usually without a call to the OS memory manager, usually making the allocation only little more expensive than using the internal OS stack. It also allocates memory with zero fragmentation, which can be very important for large applications such as games. Whenever possible you should use this allocator instead of the general purpose allocator.
-
-However it comes with a downside that it can only deallocate memory in the opposite order it was allocated. This usually only makes it suitable for temporary allocations within a single method, where you can guarantee the proper order.
-
-Use @ref bs::bs_stack_alloc "bs_stack_alloc" / @ref bs::bs_stack_free "bs_stack_free" and @ref bs::bs_stack_new "bs_stack_new" / @ref bs::bs_stack_delete "bs_stack_delete" to allocate/free memory using the stack allocator.
-
-For example:
-~~~~~~~~~~~~~{.cpp}
-UINT8* buffer = bs_stack_alloc(1024);
-... do something with buffer ...
-UINT8* buffer2 = bs_stack_alloc(512);
-... do something with buffer2 ...
-bs_stack_free(buffer2); // Must free buffer2 first!
-bs_stack_free(buffer);
-~~~~~~~~~~~~~
-
-## Frame {#utilities_p_c}
-Frame allocator is very similar to the stack allocator and it provides the same benefits (it's also very fast and causes no fragmentation). However it has different memory deallocation restrictions which make it usable in more situations than a stack allocator, at the cost of using up more memory.
-
-Frame allocator segments all allocated memory into "frames". These frames are stored in a stack-wise fashion, and must be deallocated in the opposite order they were allocated, similar to how the stack allocator works. The difference is that frame allocator is not able to free memory for individual objects, but only for entire frames.
-
-This releases the restriction that memory must be freed in the order it was allocated, which makes the allocator usable in more situations, but it also means that a lot of memory might be wasted as unused memory will be kept until the entire frame is freed.
-
-Use @ref bs::bs_frame_alloc "bs_frame_alloc" / @ref bs::bs_frame_free "bs_frame_free" or @ref bs::bs_frame_new "bs_frame_new" / @ref bs::bs_frame_delete "bs_frame_delete" to allocate/free memory using the frame allocator. Calls to @ref bs::bs_frame_free "bs_frame_free" / @ref bs::bs_frame_delete "bs_frame_delete" are required even through the frame allocator doesn't process individual deallocations, and this is used primarily for debug purposes.
-
-Use @ref bs::bs_frame_mark "bs_frame_mark" to start a new frame. All frame allocations should happen after this call. If you don't call @ref bs::bs_frame_mark "bs_frame_mark" a global frame will be used. Once done with your calculations use @ref bs::bs_frame_clear "bs_frame_clear" to free all memory in the current frame. The frames have to be released in opposite order they were created.
-
-For example:
-~~~~~~~~~~~~~{.cpp}
-// Mark a new frame
-bs_frame_mark();
-UINT8* buffer = bs_frame_alloc(1024);
-... do something with buffer ...
-UINT8* buffer2 = bs_frame_alloc(512);
-... do something with buffer2 ...
-bs_frame_free(buffer); // Only does some checks in debug mode, doesn't actually free anything
-bs_frame_free(buffer2); // Only does some checks in debug mode, doesn't actually free anything
-bs_frame_clear(); // Frees memory for both buffers
-~~~~~~~~~~~~~
-
-You can also create your own frame allocators by constructing a @ref bs::FrameAlloc "FrameAlloc" and calling memory management methods on it directly. This can allow you to use a frame allocator on a more global scope. For example if you are running some complex algorithm involving multiple classes you might create a frame allocator to be used throughout the algorithm, and then just free all the memory at once when the algorithm finishes.
-
-You may also use frame allocator to allocate containers like @ref bs::String "String", @ref bs::Vector "Vector" or @ref bs::Map "Map". Simply mark the frame as in the above example, and then use the following container alternatives: @ref bs::String "FrameString", @ref bs::FrameVector "FrameVector" or @ref bs::FrameMap "FrameMap" (other container types also available). For example:
-
-~~~~~~~~~~~~~{.cpp}
-// Mark a new frame
-bs_frame_mark();
-{
-	FrameVector<UINT8> vector;
-	... populate the vector ... // No dynamic memory allocation cost as with a normal Vector
-} // Block making sure the vector is deallocated before calling bs_frame_clear
-bs_frame_clear(); // Frees memory for the vector
-~~~~~~~~~~~~~
-
-## Static {#utilities_p_d}
-@ref bs::StaticAlloc<BlockSize, MaxDynamicMemory> "Static allocator" is the only specialized type of allocator that is used for permanent allocations. It allows you to pre-allocate a static buffer on the internal stack. It will then use internal stack memory until it runs out, after which it will use normal dynamic allocations. If you can predict a good static buffer size you can guarantee that most of your objects don't allocate any heap memory, while wasting minimum memory on the stack. This kind of allocator is mostly useful when you have many relatively small objects, each of which requires dynamic allocation of a different size.
-
-An example:
-~~~~~~~~~~~~~{.cpp}
-class MyObj
-{
-	StaticAlloc<512> mAlloc; // Ensures that every instance of this object has 512 bytes pre-allocated
-	UINT8* mData = nullptr;
-	
-	MyObj(int size)
-	{
-		// As long as size doesn't go over 512 bytes, no dynamic allocations will be made
-		mData = mAlloc.alloc(size);
-	}
-	
-	~MyObj()
-	{
-		mAlloc.free(mData);
-	}
-}
-
-~~~~~~~~~~~~~
-
-## Shared pointers {#utilities_p_e}
-Shared pointers are smart pointers that will automatically free memory when the last reference to the pointed memory goes out of scope. They're implemented as @ref bs::SPtr "SPtr", which is just a wrapper for the standard C++ library `std::shared_ptr`. Use @ref bs::bs_shared_ptr_new "bs_shared_ptr_new" to create a new shared pointer, or @ref bs::bs_shared_ptr "bs_shared_ptr" to create one from an existing instance. The pointer memory is allocated and freed using the general allocator.
-
-For example:
-~~~~~~~~~~~~~{.cpp}
-class MyClass() {};
-
-// Create a shared pointer with a new instance of MyClass
-SPtr<MyClass> myObj = bs_shared_ptr_new<MyClass>();
-
-MyClass* myRawObj = bs_new<MyClass>();
-
-// Create a shared pointer with an existing instance of MyClass
-SPtr<MyClass> myObj2 = bs_shared_ptr(myRawObj);
-~~~~~~~~~~~~~