This is an experimental gem implementing ONNX Runtime in O3DE. It demos the runtime using inference examples from the MNIST dataset, with an Editor dashboard displaying inference statistics. Image decoding is done using a modified version of uPNG.
To run inference on the CPU only, make sure that `PUBLIC ENABLE_CUDA=true` on line 51 in Code/CMakeLists.txt is either commented out or removed. When you do this, the Model class will inference using the CPU regardless of the params passed in.

The modification made to uPNG swaps fopen for the MSVC secure-CRT variant fopen_s. Modify lines 1172-1176 as follows:
```diff
- file = fopen(filename, "rb");
- if (file == NULL) {
+ errno_t err = fopen_s(&file, filename, "rb");
+ if (err != 0) {
      SET_ERROR(upng, UPNG_ENOTFOUND);
      return upng;
  }
```
The gem is fairly well commented and should give you a general idea of how it works. It's recommended to start with Model.h, as that is where the Model class lives. Along with the general steps to run an inference using the Ort::Api above, as well as the documentation for the Ort::Api, take a look at:

- the InitSettings struct,
- the Load() function,
- the Run() function,

in that order, looking through the implementation of each in the Model.cpp file.
The Ort::Env, the use of EBuses, and all ImGui functionality are implemented in ONNXBus.h, ONNXSystemComponent.h and ONNXSystemComponent.cpp. The thing to note about the Ort::Env is that it is initialized only once: a single instance exists inside the gem, retrieved via the ONNXRequestBus, and all Model instances share that same Ort::Env.
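As a rough sketch of what fetching the shared environment over the EBus can look like (the request name GetEnv and the include path are assumptions for illustration; see ONNXBus.h for the actual interface):

```cpp
#include <ONNX/ONNXBus.h> // assumed include path for the bus declaration

Ort::Env* env = nullptr;
// Ask the system component for the single shared Ort::Env instance.
// "GetEnv" is a hypothetical request name; check ONNXBus.h for the real one.
ONNX::ONNXRequestBus::BroadcastResult(env, &ONNX::ONNXRequests::GetEnv);
```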
The ImGui dashboard provides basic debugging functionality for the gem, showing the time taken to inference on each call to the Run() function. In Run() you'll see an AZ::Debug::Timer being used to time the execution; the measured value is then dispatched to the AddTimingSample function on the ONNXRequestBus, which adds it to the ImGui HistogramGroup for that model instance.
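A sketch of how that timing path fits together (the arguments passed to AddTimingSample are illustrative; its real signature lives in ONNXBus.h):

```cpp
#include <AzCore/Debug/Timer.h>

AZ::Debug::Timer timer;
timer.Stamp();

// ... the ONNX Runtime inference happens here ...

const float elapsedMs = timer.GetDeltaTimeInSeconds() * 1000.0f;
// Hand the sample to the system component, which appends it to the ImGui
// HistogramGroup for this model instance. The parameter list is an assumption.
ONNX::ONNXRequestBus::Broadcast(&ONNX::ONNXRequests::AddTimingSample, "Best Model Ever", elapsedMs);
```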
Mnist.h and Mnist.cpp implement the inferencing of MNIST models in the ONNX.Tests project using the Model class. They define an Mnist struct which extends the Model class, as well as several functions which inference several thousand MNIST images using an MNIST ONNX model file. These are executed by running the tests project, which calculates an accuracy by counting the number of correct inferences and ensures it is above 90%; a good MNIST model will easily exceed this level of accuracy. The Mnist components were part of the main ONNX gem during development, where they were used to test features as they were added. Once the gem reached a usable state, the Mnist part was moved to the Tests project, since the typical user will not want to inference only MNIST models. It remains useful both as a test that everything is set up properly and as a demo of the inferencing process.
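In outline, the accuracy check amounts to something like the sketch below. InferDigit and MeasureAccuracy are hypothetical helpers written against the documented Model API (Run() plus m_outputs); the real implementation is in Mnist.h/Mnist.cpp.

```cpp
#include <algorithm>
#include <iterator>
#include <ONNX/Model.h>

// Hypothetical helper: run one 28x28 image through the model and return the
// digit with the highest output score.
int InferDigit(ONNX::Model& model, const AZStd::vector<float>& image)
{
    AZStd::vector<AZStd::vector<float>> input;
    input.push_back(image);
    model.Run(input);
    const AZStd::vector<float>& scores = model.m_outputs[0];
    return static_cast<int>(std::distance(
        scores.begin(), std::max_element(scores.begin(), scores.end())));
}

// Hypothetical helper: measure accuracy over a labelled MNIST test set.
float MeasureAccuracy(
    ONNX::Model& mnistModel,
    const AZStd::vector<AZStd::vector<float>>& images,
    const AZStd::vector<int>& labels)
{
    int correct = 0;
    for (size_t i = 0; i < images.size(); ++i)
    {
        if (InferDigit(mnistModel, images[i]) == labels[i])
        {
            ++correct;
        }
    }
    // The tests assert this exceeds 0.9f; a good model easily clears it.
    return static_cast<float>(correct) / images.size();
}
```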
Include the ONNX gem as a build dependency in your CMakeLists.txt - the example here is integrating it into the MotionMatching gem.
```cmake
ly_add_target(
    NAME MotionMatching.Static STATIC
    NAMESPACE Gem
    FILES_CMAKE
        motionmatching_files.cmake
    INCLUDE_DIRECTORIES
        PUBLIC
            Include
        PRIVATE
            Source
    BUILD_DEPENDENCIES
        PUBLIC
            AZ::AzCore
            AZ::AzFramework
            Gem::EMotionFXStaticLib
            Gem::ImguiAtom.Static
            ONNX.Private.Object
)
```
Include ONNX/Model.h in your file.
```cpp
#include <ONNX/Model.h>
```
Initialize an ONNX::Model.
```cpp
ONNX::Model onnxModel;
```
Initialize your input vector.
```cpp
AZStd::vector<AZStd::vector<float>> onnxInput;
// Fill the input vector here.
```
Initialize the InitSettings.
```cpp
ONNX::Model::InitSettings onnxSettings;
onnxSettings.m_modelFile = "D:/MyModel.onnx";
onnxSettings.m_modelName = "Best Model Ever";
onnxSettings.m_modelColor = AZ::Colors::Tomato;
onnxSettings.m_cudaEnable = true;

// Defaults:
//   m_modelFile  = "@gemroot:ONNX@/Assets/model.onnx"
//   m_modelName  = <model filename without extension>
//   m_modelColor = AZ::Color::CreateFromRgba(229, 56, 59, 255)
//   m_cudaEnable = false
```
Load the model (this only needs to happen once when first initializing the model).
```cpp
onnxModel.Load(onnxSettings);
```
Run the model.
```cpp
onnxModel.Run(onnxInput);
```
Retrieve the outputs of the inference from the m_outputs member.
```cpp
AZStd::vector<AZStd::vector<float>> myAmazingResults = onnxModel.m_outputs;
```
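For a classifier such as MNIST, a common next step is to take the argmax of the output scores. A minimal sketch, assuming the model produces a single output tensor of class scores:

```cpp
#include <algorithm>
#include <iterator>

const AZStd::vector<float>& scores = onnxModel.m_outputs[0];
// The predicted class is the index of the highest score (digit 0-9 for MNIST).
const auto maxIt = std::max_element(scores.begin(), scores.end());
const size_t predictedClass = std::distance(scores.begin(), maxIt);
```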