
Add package `core:encoding/hxa`

gingerBill 4 years ago
commit a3abe991a4
4 changed files with 705 additions and 0 deletions
  1. core/encoding/hxa/doc.odin (+83 -0)
  2. core/encoding/hxa/hxa.odin (+193 -0)
  3. core/encoding/hxa/read.odin (+236 -0)
  4. core/encoding/hxa/write.odin (+193 -0)

+ 83 - 0
core/encoding/hxa/doc.odin

@@ -0,0 +1,83 @@
+// Implementation of the HxA 3D asset format
+// HxA is an interchangeable graphics asset format.
+// Designed by Eskil Steenberg. @quelsolaar / eskil 'at' obsession 'dot' se / www.quelsolaar.com
+//
+// Author of this Odin package: Ginger Bill
+//
+// The following comment is copied from the original C implementation
+// ---------
+// -Does the world need another Graphics file format?
+// 	Unfortunately, Yes. All existing formats are either too large and complicated to be implemented from
+// 	scratch, or don't have some basic features needed in modern computer graphics.
+// -Who is this format for?
+// 	For people who want a capable open Graphics format that can be implemented from scratch in
+// 	a few hours. It is ideal for graphics researchers, game developers or other people who
+// 	want to build custom graphics pipelines. Given how easy it is to parse and write, it
+// 	should be easy to write utilities that process assets to perform tasks like: generating
+// 	normals, light-maps, tangent spaces, error detection, GPU optimization, LOD generation,
+// 	and UV mapping.
+// -Why store images in the format when there are so many good image formats already?
+// 	Yes there are, but only for 2D RGB/RGBA images. A lot of computer graphics rendering relies
+// 	on 1D, 3D, cube, multilayer, multi-channel, floating point bitmap buffers. There are almost
+// 	no formats for this kind of data. Also 3D files that reference separate image files rely on
+// 	file paths, and this often creates issues when the assets are moved. By including the
+// 	texture data in the files directly the assets become self contained.
+// -Why doesn't the format support <insert whatever>?
+// 	Because the entire point is to make a format that can easily be implemented. Features like NURBS,
+// 	Construction history, or BSP trees would make the format too large to serve its purpose.
+// 	The format's facilities for storing meta data should make it flexible enough
+// 	for most uses. Adding HxA support should be something anyone can do in a day's work.
+
+// Structure:
+// ----------
+// HxA is designed to be extremely simple to parse, and is therefore based around conventions. It has
+// a few basic structures, and depending on how they are used they mean different things. This means
+// that you can implement a tool that loads the entire file, modifies the parts it cares about and
+// leaves the rest intact. It is also possible to write a tool that makes all data in the file
+// editable without the need to understand its use. It is also possible for anyone to use the format
+// to store auxiliary data. Anyone who wants to store data not covered by a convention can submit
+// a convention to extend the format. There should never be a convention for storing the same data in
+// two different ways.
+// The data is stored in a number of nodes that are stored in an array. Each node stores an array of
+// meta data. Meta data can describe anything you want, and a lot of conventions will use meta data
+// to store additional information, for things like transforms, lights, shaders and animation.
+// Data for Vertices, Corners, Faces, and Pixels are stored in named layer stacks. Each stack consists
+// of a number of named layers. All layers in the stack have the same number of elements. Each layer
+// describes one property of the primitive. Each layer can have multiple channels and each layer can
+// store data of a different type.
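+// For example (an illustration, not part of the original text), a vertex stack might hold a
+// 3-channel f32 "vertex" layer with positions plus a 3-channel u8 "color" layer; both layers
+// must then have one entry per vertex.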
+
+// HxA stores 3 kinds of nodes:
+// 	- Pixel data.
+// 	- Polygon geometry data.
+// 	- Meta data only.
+
+// Pixel nodes store pixels in a layer stack. A layer may store things like Albedo, Roughness,
+// Reflectance, Light maps, Masks, Normal maps, and Displacement. Layers use their channels to
+// store things like color components. The length of the layer stack is determined by the type and
+// dimensions stored in the node.
+
+// Geometry data is stored in 3 separate layer stacks for: vertex data, corner data and face data. The
+// vertex data stores things like vertices, blend shapes, weight maps, and vertex colors. The first
+// layer in a vertex stack has to be a 3 channel layer named "vertex" describing the base position
+// of the vertices. The corner stack describes data per corner or edge of the polygons. It can be used
+// for things like UV, normals, and adjacency. The first layer in a corner stack has to be a 1 channel
+// integer layer named "reference" describing the vertices used to form polygons. The last value in
+// each polygon is stored as negative and offset by one (-(index + 1)) to indicate the end of the polygon.
+
+// Example:
+// 	A quad and a tri with the vertex index:
+// 		[0, 1, 2, 3] [1, 4, 2]
+// 	is stored:
+// 		[0, 1, 2, -4, 1, 4, -3]
+// The face stack stores values per face. The length of the face stack has to match the number of
+// negative values in the index layer in the corner stack. The face stack can be used to store things
+// like material index.
+
+// Storage
+// -------
+// All data is stored in little endian byte order with no padding. The layout mirrors the structs
+// defined below with a few exceptions. All names are stored as an 8-bit unsigned integer indicating
+// the length of the name followed by that many characters. Termination is not stored in the file.
+// Text strings stored in meta data are stored the same way as names, but instead of an 8-bit
+// unsigned integer a 32-bit unsigned integer is used.
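+// For example (illustrative, not from the original spec text), the name "uv" is stored as the
+// three bytes 0x02 'u' 'v', and the meta text "hello" as the four bytes 0x05 0x00 0x00 0x00
+// (a 32-bit little-endian length) followed by the five characters.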
+package encoding_hxa
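
The corner-layer index encoding described in doc.odin can be undone with a short helper. A minimal sketch (a hypothetical helper, not part of this commit):

	decode_polygons :: proc(indices: []i32le) -> [][]i32le {
		polygons: [dynamic][]i32le;
		start := 0;
		for v, i in indices {
			if v < 0 {
				// a negative value marks the last corner of a polygon
				polygon := make([]i32le, i-start+1);
				copy(polygon, indices[start:i+1]);
				polygon[len(polygon)-1] = -v - 1; // undo the -(index + 1) encoding
				append(&polygons, polygon);
				start = i+1;
			}
		}
		return polygons[:];
	}

Fed the quad-and-tri example above, [0, 1, 2, -4, 1, 4, -3], this yields [0, 1, 2, 3] and [1, 4, 2].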

+ 193 - 0
core/encoding/hxa/hxa.odin

@@ -0,0 +1,193 @@
+package encoding_hxa
+
+import "core:mem"
+
+LATEST_VERSION :: 3;
+VERSION_API :: "0.3";
+
+MAGIC_NUMBER :: 'H'<<0 | 'x'<<8 | 'A'<<16 | '\x00'<<24;
+
+Header :: struct #packed {
+	magic_number:        u32le,
+	version:             u32le,
+	internal_node_count: u32le,
+}
+
+File :: struct {
+	using header: Header,
+	backing:   []byte,
+	allocator: mem.Allocator,
+	nodes:     []Node,
+}
+
+Node_Type :: enum u8 {
+	Meta_Only = 0, // node only containing meta data.
+	Geometry  = 1, // node containing a geometry mesh, and meta data.
+	Image     = 2, // node containing a 1D, 2D, 3D, or Cube image, and meta data.
+}
+
+Layer_Data_Type :: enum u8 {
+	Uint8  = 0, // 8-bit unsigned integer,
+	Int32  = 1, // 32-bit little-endian signed integer
+	Float  = 2, // 32-bit little-endian IEEE 754 floating point value
+	Double = 3, // 64-bit little-endian IEEE 754 floating point value
+}
+
+// Pixel data is arranged in the following configurations
+Image_Type :: enum u8 {
+	Image_Cube = 0, // 6 sided cube, in the order of: +x, -x, +y, -y, +z, -z.
+	Image_1D   = 1, // One dimensional pixel data.
+	Image_2D   = 2, // Two dimensional pixel data.
+	Image_3D   = 3, // Three dimensional pixel data.
+}
+
+Meta_Value_Type :: enum u8 {
+	Int64  = 0,
+	Double = 1,
+	Node   = 2,
+	Text   = 3,
+	Binary = 4,
+	Meta   = 5,
+};
+
+Meta :: struct {
+	name: string, // name of the meta data value (maximum length is 255)
+	value: union {
+		[]i64le,
+		[]f64le,
+		[]Node_Index, // a reference to another node
+		string, // text
+		[]byte, // binary data
+		[]Meta,
+	},
+}
+
+Layer :: struct {
+	name: string, // name of the layer (maximum length is 255)
+	components: u8, // 2 for uv, 3 for xyz/rgb, 4 for rgba
+	data: union {
+		[]u8,
+		[]i32le,
+		[]f32le,
+		[]f64le,
+	},
+}
+
+// Layer stacks are arrays of layers where all the layers have the same number of entries (polygons, edges, vertices or pixels)
+Layer_Stack :: distinct []Layer;
+
+Node_Geometry :: struct {
+	vertex_count:      u32le,       // number of vertices
+	vertex_stack:      Layer_Stack, // stack of vertex arrays. the first layer is always the vertex positions
+	edge_corner_count: u32le,       // number of corners
+	corner_stack:      Layer_Stack, // stack of corner arrays, the first layer is always a reference array (see below)
+	edge_stack:        Layer_Stack, // stack of edge arrays
+	face_count:        u32le,       // number of polygons
+	face_stack:        Layer_Stack, // stack of per polygon data.
+}
+
+Node_Image :: struct {
+	type:        Image_Type,
+	resolution:  [3]u32le,
+	image_stack: Layer_Stack,
+}
+
+Node_Index :: distinct u32le;
+
+// A file consists of an array of nodes. All nodes have meta data. Geometry nodes have geometry, image nodes have pixels
+Node :: struct {
+	meta_data: []Meta,
+	content: union {
+		Node_Geometry,
+		Node_Image,
+	},
+}
+
+
+/* Conventions */
+/* ------------
+Much of HxA's use is based on convention. HxA lets users store arbitrary data in its structure that can be parsed but whose semantic meaning does not need to be understood.
+A few conventions are hard, and some are soft. Hard conventions are ones a user HAS to follow in order to produce a valid file; they simplify parsing because the parser can make some assumptions. Soft conventions are basically recommendations for how to store common data.
+If you use HxA for something not covered by the conventions but need a convention for your use case, please let us know so that we can add it!
+*/
+
+/* Hard conventions */
+/* ---------------- */
+
+CONVENTION_HARD_BASE_VERTEX_LAYER_NAME       :: "vertex";
+CONVENTION_HARD_BASE_VERTEX_LAYER_ID         :: 0;
+CONVENTION_HARD_BASE_VERTEX_LAYER_COMPONENTS :: 3;
+CONVENTION_HARD_BASE_CORNER_LAYER_NAME       :: "reference";
+CONVENTION_HARD_BASE_CORNER_LAYER_ID         :: 0;
+CONVENTION_HARD_BASE_CORNER_LAYER_COMPONENTS :: 1;
+CONVENTION_HARD_BASE_CORNER_LAYER_TYPE       :: Layer_Data_Type.Int32;
+CONVENTION_HARD_EDGE_NEIGHBOUR_LAYER_NAME    :: "neighbour";
+CONVENTION_HARD_EDGE_NEIGHBOUR_LAYER_TYPE    :: Layer_Data_Type.Int32;
+
+
+
+/* Soft Conventions */
+/* ---------------- */
+
+/* geometry layers */
+
+CONVENTION_SOFT_LAYER_SEQUENCE0      :: "sequence";
+CONVENTION_SOFT_LAYER_NAME_UV0       :: "uv";
+CONVENTION_SOFT_LAYER_NORMALS        :: "normal";
+CONVENTION_SOFT_LAYER_BINORMAL       :: "binormal";
+CONVENTION_SOFT_LAYER_TANGENT        :: "tangent";
+CONVENTION_SOFT_LAYER_COLOR          :: "color";
+CONVENTION_SOFT_LAYER_CREASES        :: "creases";
+CONVENTION_SOFT_LAYER_SELECTION      :: "select";
+CONVENTION_SOFT_LAYER_SKIN_WEIGHT    :: "skining_weight";
+CONVENTION_SOFT_LAYER_SKIN_REFERENCE :: "skining_reference";
+CONVENTION_SOFT_LAYER_BLENDSHAPE     :: "blendshape";
+CONVENTION_SOFT_LAYER_ADD_BLENDSHAPE :: "addblendshape";
+CONVENTION_SOFT_LAYER_MATERIAL_ID    :: "material";
+
+/* Image layers */
+
+CONVENTION_SOFT_ALBEDO            :: "albedo";
+CONVENTION_SOFT_LIGHT             :: "light";
+CONVENTION_SOFT_DISPLACEMENT      :: "displacement";
+CONVENTION_SOFT_DISTORTION        :: "distortion";
+CONVENTION_SOFT_AMBIENT_OCCLUSION :: "ambient_occlusion";
+
+/* tags layers */
+
+CONVENTION_SOFT_NAME      :: "name";
+CONVENTION_SOFT_TRANSFORM :: "transform";
+
+/* destroy procedures */
+
+meta_destroy :: proc(meta: Meta, allocator := context.allocator) {
+	if nested, ok := meta.value.([]Meta); ok {
+		for m in nested {
+			meta_destroy(m, allocator);
+		}
+		delete(nested, allocator);
+	}
+}
+nodes_destroy :: proc(nodes: []Node, allocator := context.allocator) {
+	for node in nodes {
+		for meta in node.meta_data {
+			meta_destroy(meta, allocator);
+		}
+		delete(node.meta_data, allocator);
+
+		switch n in node.content {
+		case Node_Geometry:
+			delete(n.vertex_stack, allocator);
+			delete(n.corner_stack, allocator);
+			delete(n.edge_stack, allocator);
+			delete(n.face_stack, allocator);
+		case Node_Image:
+			delete(n.image_stack, allocator);
+		}
+	}
+	delete(nodes, allocator);
+}
+
+file_destroy :: proc(file: File) {
+	nodes_destroy(file.nodes, file.allocator);
+	delete(file.backing, file.allocator);
+}
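
As a sketch of how these types compose (values hypothetical), a file with one meta-only node can be built directly; a node whose content union is left nil is treated as Meta_Only:

	nodes := []hxa.Node{
		{
			meta_data = []hxa.Meta{
				{name = "name",      value = "my_asset"}, // CONVENTION_SOFT_NAME
				{name = "transform", value = []f64le{1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1}},
			},
			// content left nil => serialized as a Meta_Only node
		},
	};
	file := hxa.File{nodes = nodes};
	// Note: do not call file_destroy on a file built from literals; the destroy
	// procedures assume the slices were allocated with file.allocator.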

+ 236 - 0
core/encoding/hxa/read.odin

@@ -0,0 +1,236 @@
+package encoding_hxa
+
+import "core:fmt"
+import "core:os"
+import "core:mem"
+
+Read_Error :: enum {
+	None,
+	Short_Read,
+	Invalid_Data,
+	Unable_To_Read_File,
+}
+
+read_from_file :: proc(filename: string, print_error := false, allocator := context.allocator) -> (file: File, err: Read_Error) {
+	context.allocator = allocator;
+
+	data, ok := os.read_entire_file(filename);
+	if !ok {
+		err = .Unable_To_Read_File;
+		return;
+	}
+	defer if err != nil {
+		delete(data);
+	} else {
+		file.backing = data;
+	}
+	file, err = read(data, filename, print_error, allocator);
+	return;
+}
+
+read :: proc(data: []byte, filename := "<input>", print_error := false, allocator := context.allocator) -> (file: File, err: Read_Error) {
+	Reader :: struct {
+		filename:    string,
+		data:        []byte,
+		offset:      int,
+		print_error: bool,
+	};
+
+	read_value :: proc(r: ^Reader, $T: typeid) -> (value: T, err: Read_Error) {
+		remaining := len(r.data) - r.offset;
+		if remaining < size_of(T) {
+			err = .Short_Read;
+			return;
+		}
+		ptr := raw_data(r.data[r.offset:]);
+		value = (^T)(ptr)^;
+		r.offset += size_of(T);
+		return;
+	}
+
+	read_array :: proc(r: ^Reader, $T: typeid, count: int) -> (value: []T, err: Read_Error) {
+		remaining := len(r.data) - r.offset;
+		if remaining < size_of(T)*count {
+			err = .Short_Read;
+			return;
+		}
+		ptr := raw_data(r.data[r.offset:]);
+
+		value = mem.slice_ptr((^T)(ptr), count);
+		r.offset += size_of(T)*count;
+		return;
+	}
+
+	read_string :: proc(r: ^Reader, count: int) -> (string, Read_Error) {
+		buf, err := read_array(r, byte, count);
+		return string(buf), err;
+	}
+
+	read_name :: proc(r: ^Reader) -> (value: string, err: Read_Error) {
+		len: u8;
+		data: []byte;
+		len, err = read_value(r, u8);
+		if err != nil {
+			return;
+		}
+		data, err = read_array(r, byte, int(len));
+		if err == nil {
+			value = string(data[:len]);
+		}
+		return;
+	}
+
+	read_meta :: proc(r: ^Reader, capacity: u32le) -> (meta_data: []Meta, err: Read_Error) {
+		meta_data = make([]Meta, int(capacity));
+		count := 0;
+		defer meta_data = meta_data[:count];
+		for m in &meta_data {
+			if m.name, err = read_name(r); err != nil { return };
+
+			type: Meta_Value_Type;
+			if type, err = read_value(r, Meta_Value_Type); err != nil { return }
+			if type > max(Meta_Value_Type) {
+				if r.print_error {
+					fmt.eprintf("HxA Error: file '%s' has meta value type %d. Maximum value is ", r.filename, u8(type), u8(max(Meta_Value_Type)));
+				}
+				err = .Invalid_Data;
+				return;
+			}
+			array_length: u32le;
+			if array_length, err = read_value(r, u32le); err != nil { return }
+
+			switch type {
+			case .Int64:
+				if m.value, err = read_array(r, i64le, int(array_length)); err != nil { return }
+			case .Double:
+				if m.value, err = read_array(r, f64le, int(array_length)); err != nil { return }
+			case .Node:
+				if m.value, err = read_array(r, Node_Index, int(array_length)); err != nil { return }
+			case .Text:
+				if m.value, err = read_string(r, int(array_length)); err != nil { return }
+			case .Binary:
+				if m.value, err = read_array(r, byte, int(array_length)); err != nil { return }
+			case .Meta:
+				if m.value, err = read_meta(r, array_length); err != nil { return }
+			}
+
+			count += 1;
+		}
+		return;
+	}
+
+	read_layer_stack :: proc(r: ^Reader, capacity: u32le) -> (layers: Layer_Stack, err: Read_Error) {
+		stack_count: u32le;
+		if stack_count, err = read_value(r, u32le); err != nil { return }
+		layer_count := 0;
+		layers = make(Layer_Stack, stack_count);
+		defer layers = layers[:layer_count];
+		for layer in &layers {
+			type: Layer_Data_Type;
+			if layer.name, err = read_name(r); err != nil { return }
+			if layer.components, err = read_value(r, u8); err != nil { return }
+			if type, err = read_value(r, Layer_Data_Type); err != nil { return }
+			if type > max(Layer_Data_Type) {
+				if r.print_error {
+					fmt.eprintf("HxA Error: file '%s' has layer data type %d. Maximum value is %d\n", r.filename, u8(type), u8(max(Layer_Data_Type)));
+				}
+				err = .Invalid_Data;
+				return;
+			}
+			data_len := int(layer.components) * int(capacity);
+
+			switch type {
+			case .Uint8:  if layer.data, err = read_array(r, u8,    data_len); err != nil { return }
+			case .Int32:  if layer.data, err = read_array(r, i32le, data_len); err != nil { return }
+			case .Float:  if layer.data, err = read_array(r, f32le, data_len); err != nil { return }
+			case .Double: if layer.data, err = read_array(r, f64le, data_len); err != nil { return }
+			}
+			layer_count += 1;
+		}
+
+		return;
+	}
+
+	if len(data) < size_of(Header) {
+		err = .Short_Read;
+		return;
+	}
+
+	context.allocator = allocator;
+
+	header := cast(^Header)raw_data(data);
+	if header.magic_number != MAGIC_NUMBER {
+		if print_error {
+			fmt.eprintf("HxA Error: file '%s' does not start with the HxA magic number\n", filename);
+		}
+		err = .Invalid_Data;
+		return;
+	}
+
+	r := &Reader{
+		filename    = filename,
+		data        = data[:],
+		offset      = size_of(Header),
+		print_error = print_error,
+	};
+
+	node_count := 0;
+	file.nodes = make([]Node, header.internal_node_count);
+	defer if err != nil {
+		nodes_destroy(file.nodes);
+		file.nodes = nil;
+	}
+	defer file.nodes = file.nodes[:node_count];
+
+	for _ in 0..<header.internal_node_count {
+		node := &file.nodes[node_count];
+		type: Node_Type;
+		if type, err = read_value(r, Node_Type); err != nil { return }
+		if type > max(Node_Type) {
+			if r.print_error {
+				fmt.eprintf("HxA Error: file '%s' has node type %d. Maximum value is ", r.filename, u8(type), u8(max(Node_Type)));
+			}
+			err = .Invalid_Data;
+			return;
+		}
+		node_count += 1;
+
+		meta_data_count: u32le;
+		if meta_data_count, err = read_value(r, u32le); err != nil { return }
+		if node.meta_data, err = read_meta(r, meta_data_count); err != nil { return }
+
+		switch type {
+		case .Meta_Only:
+			// Okay
+		case .Geometry:
+			g: Node_Geometry;
+
+			if g.vertex_count, err = read_value(r, u32le); err != nil { return }
+			if g.vertex_stack, err = read_layer_stack(r, g.vertex_count); err != nil { return }
+			if g.edge_corner_count, err = read_value(r, u32le); err != nil { return }
+			if g.corner_stack, err = read_layer_stack(r, g.edge_corner_count); err != nil { return }
+			if header.version > 2 {
+				if g.edge_stack, err = read_layer_stack(r, g.edge_corner_count); err != nil { return }
+			}
+			if g.face_count, err = read_value(r, u32le); err != nil { return }
+			if g.face_stack, err = read_layer_stack(r, g.face_count); err != nil { return }
+
+			node.content = g;
+
+		case .Image:
+			img: Node_Image;
+
+			if img.type, err = read_value(r, Image_Type); err != nil { return }
+			dimensions := int(img.type);
+			if img.type == .Image_Cube {
+				dimensions = 2;
+			}
+			img.resolution = {1, 1, 1};
+			for d in 0..<dimensions {
+				if img.resolution[d], err = read_value(r, u32le); err != nil { return }
+			}
+			size := img.resolution[0]*img.resolution[1]*img.resolution[2];
+			if img.type == .Image_Cube {
+				size *= 6;
+			}
+			if img.image_stack, err = read_layer_stack(r, size); err != nil { return }
+
+			node.content = img;
+		}
+	}
+
+	return;
+}
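
Loading is then a single call; a minimal usage sketch (file name hypothetical):

	package main

	import "core:fmt"
	import "core:encoding/hxa"

	main :: proc() {
		file, err := hxa.read_from_file("asset.hxa", print_error=true);
		if err != nil {
			return;
		}
		defer hxa.file_destroy(file);

		for node in file.nodes {
			switch content in node.content {
			case hxa.Node_Geometry:
				fmt.printf("geometry: %d vertices, %d faces\n", content.vertex_count, content.face_count);
			case hxa.Node_Image:
				fmt.printf("image: %v at resolution %v\n", content.type, content.resolution);
			}
		}
	}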

+ 193 - 0
core/encoding/hxa/write.odin

@@ -0,0 +1,193 @@
+package encoding_hxa
+
+import "core:os"
+import "core:mem"
+
+Write_Error :: enum {
+	None,
+	Buffer_Too_Small,
+	Failed_File_Write,
+}
+
+write_to_file :: proc(filepath: string, file: File) -> (err: Write_Error) {
+	required := required_write_size(file);
+	buf, alloc_err := make([]byte, required);
+	if alloc_err != .None {
+		return .Failed_File_Write;
+	}
+	defer delete(buf);
+
+	write_internal(&Writer{data = buf}, file);
+	if !os.write_entire_file(filepath, buf) {
+		err = .Failed_File_Write;
+	}
+	return;
+}
+
+write :: proc(buf: []byte, file: File) -> (n: int, err: Write_Error) {
+	required := required_write_size(file);
+	if len(buf) < required {
+		err = .Buffer_Too_Small;
+		return;
+	}
+	n = required;
+	write_internal(&Writer{data = buf}, file);
+	return;
+}
+
+required_write_size :: proc(file: File) -> (n: int) {
+	writer := &Writer{dummy_pass = true};
+	write_internal(writer, file);
+	n = writer.offset;
+	return;
+}
+
+
+@(private)
+Writer :: struct {
+	data:   []byte,
+	offset: int,
+	dummy_pass: bool,
+};
+
+@(private)
+write_internal :: proc(w: ^Writer, file: File) {
+	write_value :: proc(w: ^Writer, value: $T) {
+		if !w.dummy_pass {
+			remaining := len(w.data) - w.offset;
+			assert(size_of(T) <= remaining);
+			ptr := raw_data(w.data[w.offset:]);
+			(^T)(ptr)^ = value;
+		}
+		w.offset += size_of(T);
+	}
+	write_array :: proc(w: ^Writer, array: []$T) {
+		if !w.dummy_pass {
+			remaining := len(w.data) - w.offset;
+			assert(size_of(T)*len(array) <= remaining);
+			ptr := raw_data(w.data[w.offset:]);
+			dst := mem.slice_ptr((^T)(ptr), len(array));
+			copy(dst, array);
+		}
+		w.offset += size_of(T)*len(array);
+	}
+	write_string :: proc(w: ^Writer, str: string) {
+		if !w.dummy_pass {
+			remaining := len(w.data) - w.offset;
+			assert(size_of(byte)*len(str) <= remaining);
+			ptr := raw_data(w.data[w.offset:]);
+			dst := mem.slice_ptr((^byte)(ptr), len(str));
+			copy(dst, str);
+		}
+		w.offset += size_of(byte)*len(str);
+	}
+
+	write_metadata :: proc(w: ^Writer, meta_data: []Meta) {
+		for m in meta_data {
+			name_len := min(len(m.name), 255);
+			write_value(w, u8(name_len));
+			write_string(w, m.name[:name_len]);
+
+			meta_data_type: Meta_Value_Type;
+			length: u32le = 0;
+			switch v in m.value {
+			case []i64le:
+				meta_data_type = .Int64;
+				length = u32le(len(v));
+			case []f64le:
+				meta_data_type = .Double;
+				length = u32le(len(v));
+			case []Node_Index:
+				meta_data_type = .Node;
+				length = u32le(len(v));
+			case string:
+				meta_data_type = .Text;
+				length = u32le(len(v));
+			case []byte:
+				meta_data_type = .Binary;
+				length = u32le(len(v));
+			case []Meta:
+				meta_data_type = .Meta;
+				length = u32le(len(v));
+			}
+			write_value(w, meta_data_type);
+			write_value(w, length);
+
+			switch v in m.value {
+			case []i64le:      write_array(w, v);
+			case []f64le:      write_array(w, v);
+			case []Node_Index: write_array(w, v);
+			case string:       write_string(w, v);
+			case []byte:       write_array(w, v);
+			case []Meta:       write_metadata(w, v);
+			}
+		}
+		return;
+	}
+	write_layer_stack :: proc(w: ^Writer, layers: Layer_Stack) {
+		write_value(w, u32le(len(layers)));
+		for layer in layers {
+			name_len := min(len(layer.name), 255);
+			write_value(w, u8(name_len));
+			write_string(w, layer.name[:name_len]);
+
+			write_value(w, layer.components);
+
+			layer_data_type: Layer_Data_Type;
+			switch v in layer.data {
+			case []u8:    layer_data_type = .Uint8;
+			case []i32le: layer_data_type = .Int32;
+			case []f32le: layer_data_type = .Float;
+			case []f64le: layer_data_type = .Double;
+			}
+			write_value(w, layer_data_type);
+
+			switch v in layer.data {
+			case []u8:    write_array(w, v);
+			case []i32le: write_array(w, v);
+			case []f32le: write_array(w, v);
+			case []f64le: write_array(w, v);
+			}
+		}
+		return;
+	}
+
+	write_value(w, Header{
+		magic_number = MAGIC_NUMBER,
+		version = LATEST_VERSION,
+		internal_node_count = u32le(len(file.nodes)),
+	});
+
+	for node in file.nodes {
+		node_type: Node_Type;
+		switch content in node.content {
+		case Node_Geometry: node_type = .Geometry;
+		case Node_Image:    node_type = .Image;
+		}
+		write_value(w, node_type);
+
+		write_value(w, u32le(len(node.meta_data)));
+		write_metadata(w, node.meta_data);
+
+		switch content in node.content {
+		case Node_Geometry:
+			write_value(w, content.vertex_count);
+			write_layer_stack(w, content.vertex_stack);
+			write_value(w, content.edge_corner_count);
+			write_layer_stack(w, content.corner_stack);
+			write_layer_stack(w, content.edge_stack);
+			write_value(w, content.face_count);
+			write_layer_stack(w, content.face_stack);
+		case Node_Image:
+			write_value(w, content.type);
+			dimensions := int(content.type);
+			if content.type == .Image_Cube {
+				dimensions = 2;
+			}
+			for d in 0..<dimensions {
+				write_value(w, content.resolution[d]);
+			}
+			write_layer_stack(w, content.image_stack);
+		}
+	}
+}
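
Writing mirrors reading; a round-trip sketch (paths hypothetical). required_write_size runs the same writer with dummy_pass set, so the buffer is measured before it is allocated:

	package main

	import "core:encoding/hxa"

	main :: proc() {
		file, err := hxa.read_from_file("in.hxa");
		if err != nil {
			return;
		}
		defer hxa.file_destroy(file);

		// write_to_file sizes its buffer internally via required_write_size
		if werr := hxa.write_to_file("out.hxa", file); werr != nil {
			// handle .Failed_File_Write here
		}
	}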