
Grammar and readability fixes for "Step by step", "2D", and "3D" (#1518)

* Minor changes to UI control doc

* Step by step minor grammar fixes

* Editor readability and grammar tweaks

* 2D readability and grammar tweaks

* 3D readability and grammar tweaks, rewrite part of IK

* Fix bad merge

* Add requested changes, update lifebar UI screenshot
Jason Maa, 7 years ago
commit 9834d242f1

+ 11 - 6
getting_started/editor/command_line_tutorial.rst

@@ -14,7 +14,7 @@ suitable for this workflow.
 Path
 ----
 
-It is recommended that your godot binary is in your PATH environment
+It is recommended that your Godot binary is in your PATH environment
 variable, so it can be executed easily from any place by typing
 ``godot``. You can do so on Linux by placing the Godot binary in
 ``/usr/local/bin`` and making sure it is called ``godot``.
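As a sketch, on Linux that setup might look like this (the downloaded filename is hypothetical; adjust it to match your release):

```shell
# Copy the downloaded binary (hypothetical filename) into a directory
# already on PATH, renaming it to plain "godot".
sudo cp ~/Downloads/Godot_v3.0-stable_x11.64 /usr/local/bin/godot
sudo chmod +x /usr/local/bin/godot

# Godot can now be started from any directory:
godot --version
```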
@@ -48,7 +48,10 @@ For example, the full command for exporting your game (as explained below) might
 Creating a project
 ------------------
 
-To create a project from the command line, navigate the to the desired place and create an empty project.godot file.
+To create a project from the command line, navigate to the desired
+location and create an empty project.godot file.
 
 ::
 
@@ -56,13 +59,15 @@ To create a project from the command line, navigate the to the desired place and
     user@host:~$ cd newgame
     user@host:~/newgame$ touch project.godot
 
+
 The project can now be opened with Godot.
 
+
 Running the editor
 ------------------
 
 Running the editor is done by executing godot with the ``-e`` flag. This
-must be done from within the project directory, or a subdirectory,
+must be done from within the project directory or a subdirectory,
 otherwise the command is ignored and the project manager appears.
 
 ::
@@ -79,9 +84,9 @@ the same code with that scene as argument.
 Erasing a scene
 ---------------
 
-Godot is friends with your filesystem, and will not create extra
-metadata files, simply use ``rm`` to erase a file. Make sure nothing
-references that scene, or else an error will be thrown upon opening.
+Godot is friends with your filesystem and will not create extra
+metadata files. Use ``rm`` to erase a scene file. Make sure nothing
+references that scene, or else an error will be thrown upon opening.
 
 ::
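As a sketch (the scene filename is illustrative, not from the tutorial):

```shell
# No sidecar metadata files to clean up; plain file removal suffices.
cd ~/newgame
rm old_scene.tscn
```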
 

+ 101 - 37
getting_started/editor/unity_to_godot.rst

@@ -7,7 +7,8 @@
 From Unity3D to Godot Engine
 ============================
 
-This guide provides an overview of Godot Engine from the viewpoint of a Unity user, and aims to help you migrate your existing Unity experience into the world of Godot.
+This guide provides an overview of Godot Engine from the viewpoint of a Unity user,
+and aims to help you migrate your existing Unity experience into the world of Godot.
 
 Differences
 -----------
@@ -44,7 +45,8 @@ Differences
 The editor
 ----------
 
-Godot Engine provides a rich-featured editor that allows you to build your games. The pictures below display both editors with colored blocks to indicate common functionalities.
+Godot Engine provides a feature-rich editor that allows you to build your games.
+The pictures below display both editors with colored blocks to indicate common functionalities.
 
 .. image:: img/unity-gui-overlay.png
 .. image:: img/godot-gui-overlay.png
@@ -52,30 +54,48 @@ Godot Engine provides a rich-featured editor that allows you to build your games
 
 Note that the Godot editor allows you to dock each panel on whichever side of the scene editor you wish.
 
-While both editors may seem similar, there are many differences below the surface. Both let you organize the project using the filesystem, but Godot approach is simpler, with a single configuration file, minimalist text format, and no metadata. All this contributes to Godot being much friendlier to VCS systems such as Git, Subversion or Mercurial.
+While both editors may seem similar, there are many differences below the surface.
+Both let you organize the project using the filesystem,
+but Godot's approach is simpler, with a single configuration file, minimalist text format,
+and no metadata. All this contributes to Godot being much friendlier to VCS systems such as Git, Subversion, or Mercurial.
 
-Godot's Scene panel is similar to Unity's Hierarchy panel but, as each node has a specific function, the approach used by Godot is more visually descriptive. In other words, it's easier to understand what a specific scene does at a glance.
+Godot's Scene panel is similar to Unity's Hierarchy panel but, as each node has a specific function,
+the approach used by Godot is more visually descriptive. In other words, it's easier to understand
+what a specific scene does at a glance.
 
-The Inspector in Godot is more minimalist and designed to only show properties. Thanks to this, objects can export a much larger amount of useful parameters to the user, without having to hide functionality in language APIs. As a plus, Godot allows animating any of those properties visually, so changing colors, textures, enumerations or even links to resources in real-time is possible without involving code.
+The Inspector in Godot is more minimalist and designed to only show properties.
+Thanks to this, objects can export a much larger number of useful parameters to the user
+without having to hide functionality in language APIs. As a plus, Godot allows animating any of those properties visually,
+so changing colors, textures, enumerations, or even links to resources in real-time is possible without involving code.
 
-Finally, the Toolbar at the top of the screen is similar in the sense that it allows controlling the project playback, but projects in Godot run in a separate window, as they don't execute inside the editor (but the tree and objects can still be explored in the debugger window).
+Finally, the Toolbar at the top of the screen is similar in the sense that it allows controlling the project playback,
+but projects in Godot run in a separate window, as they don't execute inside the editor
+(but the tree and objects can still be explored in the debugger window).
 
-This approach has the disadvantage that the running game can't be explored from different angles (though this may be supported in the future, and displaying collision gizmos in the running game is already possible), but in exchange has several advantages:
+This approach has the disadvantage that the running game can't be explored from different angles
+(though this may be supported in the future, and displaying collision gizmos in the running game is already possible),
+but in exchange has several advantages:
 
-- Running the project and closing it is fast (Unity has to save, run the project, close the project and then reload the previous state).
-- Live editing is a lot more useful, because changes done to the editor take effect immediately in the game, and are not lost (nor have to be synced) when the game is closed. This allows fantastic workflows, like creating levels while you play them.
-- The editor is more stable, because the game runs in a separate process.
+- Running the project and closing it is fast (Unity has to save, run the project, close the project, and then reload the previous state).
+- Live editing is a lot more useful because changes made in the editor take effect immediately in the game and are not lost (nor have to be synced) when the game is closed. This allows fantastic workflows, like creating levels while you play them.
+- The editor is more stable because the game runs in a separate process.
 
-Finally, the top toolbar includes a menu for remote debugging. These options make it simple to deploy to a device (connected phone, tablet or browser via HTML5), and debug/live edit on it after the game was exported.
+Finally, the top toolbar includes a menu for remote debugging.
+These options make it simple to deploy to a device (connected phone, tablet, or browser via HTML5),
+and debug/live edit on it after the game has been exported.
 
 The scene system
 ----------------
 
-This is the most important difference between Unity and Godot, and actually the favourite feature of most Godot users.
+This is the most important difference between Unity and Godot, and it is actually the favourite feature of most Godot users.
 
-Unity's scene system consist in embedding all the required assets in a scene, and link them together by setting components and scripts to them.
+Unity's scene system consists of embedding all the required assets in a scene
+and linking them together by attaching components and scripts to them.
 
-Godot's scene system is different: it actually consists in a tree made of nodes. Each node serves a purpose: Sprite, Mesh, Light... Basically, this is similar to Unity scene system. However, each node can have multiple children, which make each a subscene of the main scene. This means you can compose a whole scene with different scenes, stored in different files.
+Godot's scene system is different: it consists of a tree made of nodes.
+Each node serves a purpose: Sprite, Mesh, Light, etc. Basically, this is similar to Unity's scene system.
+However, each node can have multiple children, which makes each a subscene of the main scene.
+This means you can compose a whole scene with different scenes stored in different files.
 
 For example, think of a platformer level. You would compose it with multiple elements:
 
@@ -85,35 +105,50 @@ For example, think of a platformer level. You would compose it with multiple ele
 - The enemies
 
 
-In Unity, you would put all the GameObjects in the scene: the player, multiple instances of enemies, bricks everywhere to form the ground of the level, and multiple instances of coins all over the level. You would then add various components to each element to link them and add logic in the level: for example, you'd add a BoxCollider2D to all the elements of the scene so that they can collide. This principle is different in Godot.
+In Unity, you would put all the GameObjects in the scene: the player, multiple instances of enemies,
+bricks everywhere to form the ground of the level, and multiple instances of coins all over the level.
+You would then add various components to each element to link them and add logic in the level: For example,
+you'd add a BoxCollider2D to all the elements of the scene so that they can collide. This principle is different in Godot.
 
 In Godot, you would split your whole scene into 3 separate, smaller scenes, which you would then instance in the main scene.
 
 1. **First, a scene for the Player alone.**
 
-Consider the player as a reusable element in other levels. It is composed of one node in particular: an AnimatedSprite node, which contains the sprite textures to form various animations (for example, walking animation)
+Consider the player as a reusable element in other levels. It is composed of one node in particular:
+an AnimatedSprite node, which contains the sprite textures to form various animations (for example, a walking animation).
 
 2. **Second, a scene for the Enemy.**
 
-There again, an enemy is a reusable element in other levels. It is almost the same as the Player node - the only differences are the script (that manages AI, mostly) and sprite textures used by the AnimatedSprite.
+There again, an enemy is a reusable element in other levels. It is almost the same
+as the Player node - the only differences are the script (that manages AI, mostly)
+and sprite textures used by the AnimatedSprite.
 
 3. **Lastly, the Level scene.**
 
-It is composed of Bricks (for platforms), Coins (for the player to grab) and a certain number of instances of the previous Enemy scene. These will be different, separate enemies, whose behaviour and appearance will be the same as defined in the Enemy scene. Each instance is then considered as a node in the Level scene tree. Of course, you can set different properties for each enemy node (to change its color for example).
+It is composed of Bricks (for platforms), Coins (for the player to grab), and a
+certain number of instances of the previous Enemy scene. These will be different, separate enemies,
+whose behaviour and appearance will be the same as defined in the Enemy scene.
+Each instance is then considered as a node in the Level scene tree.
+Of course, you can set different properties for each Enemy node (to change its color, for example).
 
 Finally, the main scene would then be composed of one root node with 2 children: a Player instance node, and a Level instance node.
-The root node can be anything, generally a "root" type such as "Node" which is the most global type, or "Node2D" (root type of all 2D-related nodes), "Spatial" (root type of all 3D-related nodes) or "Control" (root type of all GUI-related nodes).
+The root node can be anything, generally a "root" type such as "Node", which is the most global type,
+or "Node2D" (root type of all 2D-related nodes), "Spatial" (root type of all 3D-related nodes) or
+"Control" (root type of all GUI-related nodes).
 
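As a sketch (node and file names are illustrative, not from the original tutorial), such a main scene forms a tree like:

```text
Main (Node)
 |- Player    (instance of Player.tscn)
 '- Level     (instance of Level.tscn)
     |- Bricks
     |- Coins
     '- Enemy1, Enemy2, ...  (instances of Enemy.tscn)
```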
 
-As you can see, every scene is organized as a tree. The same goes for nodes' properties: you don't *add* a collision component to a node to make it collidable like Unity does. Instead, you make this node a *child* of a new specific node that has collision properties. Godot features various collision types nodes, depending of the use (see the :ref:`Physics introduction <doc_physics_introduction>`).
+As you can see, every scene is organized as a tree. The same goes for nodes' properties: you don't *add* a
+collision component to a node to make it collidable like Unity does. Instead, you make this node a *child* of a
+new specific node that has collision properties. Godot features various types of collision nodes, depending on the use
+(see the :ref:`Physics introduction <doc_physics_introduction>`).
 
 - Question: What are the advantages of this system? Wouldn't this system potentially increase the depth of the scene tree? Besides, Unity allows organizing GameObjects by putting them in empty GameObjects.
 
-    - First, this system is closer to the well-known Object-Oriented paradigm: Godot provides a number of nodes which are not clearly "Game Objects", but they provide their children with their own capabilities: this is inheritance.
-    - Second, it allows the extraction a subtree of scene to make it a scene of its own, which answers to the second and third questions: even if a scene tree gets too deep, it can be split into smaller subtrees. This also allows a better solution for reusability, as you can include any subtree as a child of any node. Putting multiple nodes in an empty GameObject in Unity does not provide the same possibility, apart from a visual organization.
+    - First, this system is closer to the well-known object-oriented paradigm: Godot provides a number of nodes which are not clearly "Game Objects", but they provide their children with their own capabilities: this is inheritance.
+    - Second, it allows extracting a subtree of a scene to make it a scene of its own, which answers the second and third questions: even if a scene tree gets too deep, it can be split into smaller subtrees. This also allows a better solution for reusability, as you can include any subtree as a child of any node. Putting multiple nodes in an empty GameObject in Unity does not provide the same possibility, apart from a visual organization.
 
 
-These are the most important concepts you need to remember: "node", "parent node" and "child node".
+These are the most important concepts you need to remember: "node", "parent node", and "child node".
 
 
 Project organization
@@ -121,23 +156,38 @@ Project organization
 
 .. image:: img/unity-project-organization-example.png
 
-We previously observed that there is no perfect solution to set a project architecture. Any solution will work for Unity and Godot, so this point has a lesser importance.
+We previously observed that there is no perfect solution to set a project architecture.
+Any solution will work for Unity and Godot, so this point is less important.
 
-However, we often observe a common architecture for Unity projects, which consists in having one Assets folder in the root directory, that contains various folders, one per type of asset: Audio, Graphics, Models, Materials, Scripts, Scenes, etc.
+However, we often observe a common architecture for Unity projects, which consists of having one Assets folder in the root directory
+that contains various folders, one per type of asset: Audio, Graphics, Models, Materials, Scripts, Scenes, etc.
 
-As described before, Godot scene system allows splitting scenes in smaller scenes. Since each scene and subscene is actually one scene file in the project, we recommend organizing your project a bit differently. This wiki provides a page for this: :ref:`doc_project_organization`.
+As described before, the Godot scene system allows splitting scenes into smaller scenes.
+Since each scene and subscene is actually one scene file in the project, we recommend organizing your project a bit differently.
+This wiki provides a page for this: :ref:`doc_project_organization`.
 
 
 Where are my prefabs?
 ---------------------
 
-The concept of prefabs as provided by Unity is a 'template' element of the scene. It is reusable, and each instance of the prefab that exists in the scene has an existence of its own, but all of them have the same properties as defined by the prefab.
+The concept of prefabs as provided by Unity is a 'template' element of the scene.
+It is reusable, and each instance of the prefab that exists in the scene has an existence of its own,
+but all of them have the same properties as defined by the prefab.
 
-Godot does not provide prefabs as such, but this functionality is here again filled thanks to its scene system: as we saw the scene system is organized as a tree. Godot allows you to save a subtree of a scene as its own scene, thus saved in its own file. This new scene can then be instanced as many times as you want. Any change you make to this new, separate scene will be applied to its instances. However, any change you make to the instance will not have any impact on the 'template' scene.
+Godot does not provide prefabs as such, but the same functionality is achieved through its scene system:
+as we saw, the scene system is organized as a tree. Godot allows you to save a subtree of a scene as its own scene,
+thus saved into its own file. This new scene can then be instanced as many times as you want.
+Any change you make to this new, separate scene will be applied to its instances.
+However, any change you make to the instance will not have any impact on the 'template' scene.
 
 .. image:: img/save-branch-as-scene.png
 
-To be precise, you can modify the parameters of the instance in the Inspector panel. However, the nodes that compose this instance are locked and you can unlock them if you need to by right clicking the instance in the Scene tree, and selecting "Editable children" in the menu. You don't need to do this to add new children nodes to this node, but remember that these new children will belong to the instance, not the 'template' scene. If you want to add new children to all the instances of your 'template' scene, then you need to add it once in the 'template' scene.
+To be precise, you can modify the parameters of the instance in the Inspector panel.
+However, the nodes that compose this instance are locked, although you can unlock them if you need to by
+right-clicking the instance in the Scene tree and selecting "Editable children" in the menu.
+You don't need to do this to add new child nodes to this node, but it is possible.
+Remember that these new children will belong to the instance, not the 'template' scene.
+If you want to add new children to all the instances of your 'template' scene, then you need to add them in the 'template' scene.
 
 .. image:: img/editable-children.png
 
@@ -157,28 +207,42 @@ Design
 
 As you may know already, Unity supports C#. C# benefits from its integration with Visual Studio and other features, such as static typing.
 
-Godot provides its own scripting language, :ref:`GDScript <doc_scripting>` as well as support for :ref:`Visual Script <toc-learn-scripting-visual_script>` and :ref:`C# <doc_c_sharp>`. GDScript borrows its syntax from Python, but is not related to it. If you wonder about the reasoning for a custom scripting language, please read :ref:`GDScript <doc_gdscript>` and `FAQ <faq>`_ pages. GDScript is strongly attached to the Godot API, but it is easy to learn.
+Godot provides its own scripting language, :ref:`GDScript <doc_scripting>`, as well as support
+for :ref:`Visual Script <toc-learn-scripting-visual_script>` and :ref:`doc_c_sharp`.
+GDScript borrows its syntax from Python but is not related to it. If you are wondering about the reasoning for a custom scripting language,
+please read the :ref:`GDScript <doc_gdscript>` and `FAQ <faq>`_ pages. GDScript is strongly attached to the Godot API
+and is easy to learn: between one evening for an experienced programmer and a week for a complete beginner.
 
-Unity allows you to attach as many scripts as you want to a GameObject. Each script adds a behaviour to the GameObject: for example, you can attach a script so that it reacts to the player's controls, and another that controls its specific game logic.
+Unity allows you to attach as many scripts as you want to a GameObject.
+Each script adds a behaviour to the GameObject: For example, you can attach a script so that it reacts to the player's controls,
+and another that controls its specific game logic.
 
-In Godot, you can only attach one script per node. You can use either an external GDScript file, or include it directly in the node. If you need to attach more scripts to one node, then you may consider two solutions, depending on your scene and on what you want to achieve:
+In Godot, you can only attach one script per node. You can use either an external GDScript file
+or include the script directly in the node. If you need to attach more scripts to one node, then you may consider two solutions,
+depending on your scene and on what you want to achieve:
 
 - either add a new node between your target node and its current parent, then add a script to this new node.
 - or, you can split your target node into multiple children and attach one script to each of them.
 
-As you can see, it can be easy to turn a scene tree to a mess. This is why it is important to have a real reflection, and consider splitting a complicated scene into multiple, smaller branches.
+As you can see, it can be easy to turn a scene tree into a mess. This is why it is important to think carefully
+and consider splitting a complicated scene into multiple, smaller branches.
 
 Connections: groups and signals
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-You can control nodes by accessing them using a script, and call functions (built-in or user-defined) on them. But there's more: you can also place them in a group and call a function on all nodes contained in this group! This is explained in :ref:`this page <doc_scripting_continued>`.
-
-But there's more! Certain nodes throw signals when certain actions happen. You can connect these signals to call a specific function when they happen. Note that you can define your own signals and send them whenever you want. This feature is documented `here <../scripting/gdscript/gdscript_basics.html#signals>`_.
+You can control nodes by accessing them using a script and calling functions (built-in or user-defined) on them.
+But there's more: You can also place them in a group and call a function on all nodes contained in this group!
+This is explained in :ref:`this page <doc_scripting_continued>`.
 
+But there's more! Certain nodes throw signals when certain actions happen.
+You can connect these signals to call a specific function when they happen.
+Note that you can define your own signals and send them whenever you want.
+This feature is documented `here <gdscript.html#signals>`_.
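A minimal GDScript sketch of both mechanisms, using the Godot 3.x API (the group name, signal name, and method names are all illustrative, not from the original page):

```gdscript
extends Node

# A custom, user-defined signal.
signal health_changed(new_health)

func _ready():
    # Join a group so this node can be addressed collectively.
    add_to_group("enemies")
    # Connect the custom signal to a method on this same node.
    connect("health_changed", self, "_on_health_changed")

func take_damage(amount):
    # Emit the custom signal whenever we want.
    emit_signal("health_changed", 100 - amount)

func _on_health_changed(new_health):
    print("Health is now ", new_health)

func alert_all_enemies():
    # Call a method on every node in the "enemies" group.
    get_tree().call_group("enemies", "_on_health_changed", 0)
```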
 
 Using Godot in C++
 ------------------
 
 For your information, Godot also allows you to develop your project directly in C++ by using its API, which is not possible with Unity at the moment. As an example, you can consider Godot Engine's editor as a "game" written in C++ using Godot API.
 
-If you are interested in using Godot in C++, you may want to start reading the :ref:`Developing in C++ <doc_introduction_to_godot_development>` page.
+If you are interested in using Godot in C++, you may want to start reading the
+:ref:`Developing in C++ <doc_introduction_to_godot_development>` page.

+ 1 - 1
getting_started/step_by_step/animations.rst

@@ -59,7 +59,7 @@ The keyframe will be added in the animation player editor:
 
 .. image:: img/robisplash_anim_editor_keyframe.png
 
-Move the editor cursor to the end, by clicking here:
+Move the editor cursor to the end by clicking here:
 
 .. image:: img/robisplash_anim_editor_track_cursor.png
 

+ 2 - 2
getting_started/step_by_step/filesystem.rst

@@ -7,7 +7,7 @@ Introduction
 ------------
 
 File systems are yet another hot topic in engine development. The
-file system manages how the assets are stored, and how they are accessed.
+file system manages how the assets are stored and how they are accessed.
 A well designed file system also allows multiple developers to edit the
 same source files and assets while collaborating together.
 
@@ -103,7 +103,7 @@ have to be re-defined to point at the new asset location.
 To avoid this, do all your move, delete and rename operations from within Godot, on the FileSystem
 dock. Never move assets from outside Godot, or dependencies will have to be
 fixed manually (Godot detects this and helps you fix them anyway, but why
-going the hardest route?).
+go the hard route?).
 
 The second is that under Windows and macOS file and path names are case insensitive.
 If a developer working in a case insensitive host file system saves an asset as "myfile.PNG",

BIN
getting_started/step_by_step/img/lifebar_tutorial_player_died_signal_enemy_connected.png


+ 10 - 9
getting_started/step_by_step/resources.rst

@@ -7,7 +7,7 @@ Nodes and resources
 -------------------
 
 So far, :ref:`Nodes <class_Node>`
-have been the most important datatype in Godot, as most of the behaviors
+have been the most important datatype in Godot as most of the behaviors
 and features of the engine are implemented through them. There is
 another datatype that is equally important:
 :ref:`Resource <class_Resource>`.
@@ -40,7 +40,7 @@ and again. This corresponds with the fact that resources are just data
 containers, so there is no need to have them duplicated.
 
 Typically, every object in Godot (Node, Resource, or anything else) can
-export properties, properties can be of many types (like a string,
+export properties. Properties can be of many types (like a string,
 integer, Vector2, etc) and one of those types can be a resource. This
 means that both nodes and resources can contain resources as properties.
 To make it a little more visual:
@@ -58,7 +58,7 @@ in a :ref:`Sprite <class_Sprite>` node:
 
 .. image:: img/spriteprop.png
 
-Pressing the ">" button on the right side of the preview allows to
+Pressing the ">" button on the right side of the preview allows us to
 view and edit the resource's properties. One of the properties (path)
 shows where it comes from. In this case, it comes from a png image.
 
@@ -99,7 +99,7 @@ first is to use load(), like this:
     }
 
 The second way is more optimal, but only works with a string constant
-parameter, because it loads the resource at compile-time.
+parameter because it loads the resource at compile-time.
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -114,8 +114,8 @@ parameter, because it loads the resource at compile-time.
 
 Loading scenes
 --------------
-Scenes are also resources, but there is a catch. Scenes saved to disk 
-are resources of type :ref:`PackedScene <class_PackedScene>`. This means that 
+Scenes are also resources, but there is a catch. Scenes saved to disk
+are resources of type :ref:`PackedScene <class_PackedScene>`. This means that
 the scene is packed inside a resource.
 
 To obtain an instance of the scene, the method
@@ -127,7 +127,8 @@ must be used.
 
     func _on_shoot():
             var bullet = preload("res://bullet.tscn").instance()
-            add_child(bullet)                  
+            add_child(bullet)
+
 
  .. code-tab:: csharp
 
@@ -139,8 +140,8 @@ must be used.
         AddChild(bullet);
     }
 
-This method creates the nodes in the scene's hierarchy, configures 
-them (sets all the properties) and returns the root node of the scene, 
+This method creates the nodes in the scene's hierarchy, configures
+them (sets all the properties) and returns the root node of the scene,
 which can be added to any other node.
 
 The approach has several advantages. As the

+ 14 - 14
getting_started/step_by_step/scene_tree.rst

@@ -6,17 +6,17 @@ SceneTree
 Introduction
 ------------
 
-This is where things start getting abstract, but don't panic. There's 
+This is where things start getting abstract, but don't panic. There's
 not much more depth than this.
 
 In previous tutorials, everything revolved around the concept of
-nodes. Scenes are simply a collection of nodes. They become active once 
+nodes. Scenes are simply a collection of nodes. They become active once
 they enter the *scene tree*.
 
 This concept deserves going into a little more detail. In fact, the
-scene system is not even a core component of Godot, as it is possible to
+scene system is not even a core component of Godot as it is possible to
 skip it and write a script (or C++ code) that talks directly to the
-servers. But making a game that way would be a lot of work.
+servers, but making a game that way would be a lot of work.
 
 MainLoop
 --------
@@ -40,7 +40,7 @@ level and when making games in Godot, writing your own MainLoop seldom makes sen
 SceneTree
 ---------
 
-One of the ways to explain how Godot works, is that it's a high level
+One of the ways to explain how Godot works is that it's a high level
 game engine over a low level middleware.
 
 The scene system is the game engine, while the :ref:`OS <class_OS>`
@@ -55,12 +55,12 @@ It's important to know that this class exists because it has a few
 important uses:
 
 -  It contains the root :ref:`Viewport <class_Viewport>`, to which a
-   scene is added as a child when it's first opened, to become
+   scene is added as a child when it's first opened to become
    part of the *Scene Tree* (more on that next)
--  It contains information about the groups, and has means to call all
-   nodes in a group, or get a list of them.
+-  It contains information about the groups and has the means to call all
+   nodes in a group or get a list of them.
 -  It contains some global state functionality, such as setting pause
-   mode, or quitting the process.
+   mode or quitting the process.
 
 When a node is part of the Scene Tree, the
 :ref:`SceneTree <class_SceneTree>`
@@ -88,7 +88,7 @@ two different ways:
 This node contains the main viewport, anything that is a child of a
 :ref:`Viewport <class_Viewport>`
 is drawn inside of it by default, so it makes sense that the top of all
-nodes is always a node of this type, otherwise nothing would be seen!
+nodes is always a node of this type; otherwise, nothing would be seen!
 
 While other viewports can be created in the scene (for split-screen
 effects and such), this one is the only one that is never created by the
@@ -100,7 +100,7 @@ Scene tree
 When a node is connected, directly or indirectly, to the root
 viewport, it becomes part of the *scene tree*.
 
-This means that, as explained in previous tutorials, it will get the
+This means that, as explained in previous tutorials, it will get the
 _enter_tree() and _ready() callbacks (as well as _exit_tree()).
 
 .. image:: img/activescene.png
@@ -113,7 +113,7 @@ notifications, play sound, groups, etc. When they are removed from the
 Tree order
 ----------
 
-Most node operations in Godot, such as drawing 2D, processing or getting
+Most node operations in Godot, such as drawing 2D, processing, or getting
 notifications are done in tree order. This means that parents and
 siblings with a smaller rank in the tree order will get notified before
 the current node.
@@ -126,7 +126,7 @@ the current node.
 #. A scene is loaded from disk or created by scripting.
 #. The root node of that scene (only one root, remember?) is added as
    either a child of the "root" Viewport (from SceneTree), or to any
-   child or grand-child of it.
+   child or grandchild of it.
 #. Every node of the newly added scene, will receive the "enter_tree"
    notification ( _enter_tree() callback in GDScript) in top-to-bottom
    order.
@@ -158,7 +158,7 @@ function:
         GetTree().ChangeScene("res://levels/level2.tscn");
     }
 
-This is a quick and useful way to switch scenes, but has the drawback
+This is a quick and useful way to switch scenes but has the drawback
 that the game will stall until the new scene is loaded and running. At
 some point in your game, it may be desired to create proper loading
 screens with progress bar, animated indicators or thread (background)

+ 7 - 7
getting_started/step_by_step/ui_code_a_life_bar.rst

@@ -221,7 +221,7 @@ the ``Player`` node in the scene dock to select it. Head down to the
 Inspector and click on the Node tab. This is the place to connect nodes
 to listen the one you selected.
 
-The first section lists custom signals defined in ``player.GD``:
+The first section lists custom signals defined in ``Player.gd``:
 
 -  ``died`` is emitted when the character died. We will use it in a
    moment to hide the UI.
@@ -235,7 +235,7 @@ Select ``health_changed`` and click on the Connect button in the bottom
 right corner to open the Connect Signal window. On the left side you can
 pick the node that will listen to this signal. Select the ``GUI`` node.
 The right side of the screen lets you pack optional values with the
-signal. We already took care of it in ``player.GD``. In general I
+signal. We already took care of it in ``Player.gd``. In general I
 recommend not to add too many arguments using this window as they're
 less convenient than doing it from the code.
 
@@ -245,7 +245,7 @@ less convenient than doing it from the code.
 
 .. tip::
 
-    You can optionally connect nodes from the code. But doing it from the editor has two advantages:
+    You can optionally connect nodes from the code. However, doing it from the editor has two advantages:
 
     1. Godot can write new callback functions for you in the connected script
     2. An emitter icon appears next to the node that emits the signal in the Scene dock
@@ -258,7 +258,7 @@ lets you process them. If you look to the right, there is a "Make
 Function" radio button that is on by default. Click the connect button
 at the bottom of the window. Godot creates the method inside the ``GUI``
 node. The script editor opens with the cursor inside a new
-``_on_player_health_changed`` function.
+``_on_Player_health_changed`` function.
 
 .. note::
 
@@ -288,10 +288,10 @@ its current ``health`` alongside it. Your code should look like:
     }
 
 .. note::
-    
+
     The engine does not convert PascalCase to snake_case, for C# examples we'll be using
     PascalCase for method names & camelCase for method parameters which follows the official `C#
-    naming conventions. <https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/capitalization-conventions>`_ 
+    naming conventions. <https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/capitalization-conventions>`_
 
 
 .. figure:: img/lifebar_tutorial_player_gd_emits_health_changed_code.png
@@ -671,7 +671,7 @@ method:
     {
         var startColor = new Color(1.0f, 1.0f, 1.0f);
         var endColor = new Color(1.0f, 1.0f, 1.0f, 0.0f);
-        
+
         _tween.InterpolateProperty(this, "modulate", startColor, endColor, 1.0f, Tween.TransitionType.Linear,
             Tween.EaseType.In);
     }

+ 5 - 5
tutorials/2d/2d_transforms.rst

@@ -7,7 +7,7 @@ Introduction
 ------------
 
 This tutorial is created after a topic that is a little dark for most
-users, and explains all the 2D transforms going on for nodes from the
+users and explains all the 2D transforms going on for nodes from the
 moment they draw their content locally to the time they are drawn into
 the screen.
 
@@ -38,11 +38,11 @@ Stretch transform
 
 Finally, viewports have a *Stretch Transform*, which is used when
 resizing or stretching the screen. This transform is used internally (as
-described in :ref:`doc_multiple_resolutions`), but can also be manually set
+described in :ref:`doc_multiple_resolutions`) but can also be manually set
 on each viewport.
 
 Input events received in the :ref:`MainLoop._input_event() <class_MainLoop__input_event>`
-callback are multiplied by this transform, but lack the ones above. To
+callback are multiplied by this transform but lack the ones above. To
 convert InputEvent coordinates to local CanvasItem coordinates, the
 :ref:`CanvasItem.make_input_local() <class_CanvasItem_make_input_local>`
 function was added for convenience.
@@ -70,7 +70,7 @@ Obtaining each transform can be achieved with the following functions:
 | CanvasLayer+GlobalCanvas+Stretch | :ref:`CanvasItem.get_viewport_transform() <class_CanvasItem_get_viewport_transform>` |
 +----------------------------------+--------------------------------------------------------------------------------------+
 
-Finally then, to convert a CanvasItem local coordinates to screen
+Finally, to convert a CanvasItem's local coordinates to screen
 coordinates, just multiply in the following order:
 
 .. tabs::
@@ -109,4 +109,4 @@ way:
     var ie = new InputEventMouseButton();
     ie.ButtonIndex = (int)ButtonList.Left;
     ie.Position = (GetViewportTransform() * GetGlobalTransform()).Xform(localPos);
-    GetTree().InputEvent(ie);
+    GetTree().InputEvent(ie);

+ 3 - 3
tutorials/2d/canvas_layers.rst

@@ -36,7 +36,7 @@ transform. Examples of this are:
 
 -  **Parallax Backgrounds**: Backgrounds that move slower than the rest
    of the stage.
--  **HUD**: Head's up display, or user interface. If the world moves,
+-  **HUD**: Heads-up display, or user interface. If the world moves,
    the life counter, score, etc. must stay static.
 -  **Transitions**: Effects used for transitions (fades, blends) may
    also want it to remain at a fixed location.
@@ -51,8 +51,8 @@ which is a node that adds a separate 2D rendering layer for all its
 children and grand-children. Viewport children will draw by default at
 layer "0", while a CanvasLayer will draw at any numeric layer. Layers
 with a greater number will be drawn above those with a smaller number.
-CanvasLayers also have their own transform, and do not depend on the
-transform of other layers. This allows the UI to be fixed in-place,
+CanvasLayers also have their own transform and do not depend on the
+transform of other layers. This allows the UI to be fixed in place
 while the world moves.
 
 An example of this is creating a parallax background. This can be done

+ 89 - 30
tutorials/2d/custom_drawing_in_2d.rst

@@ -7,7 +7,7 @@ Why?
 ----
 
 Godot has nodes to draw sprites, polygons, particles, and all sorts of
-stuff. For most cases this is enough, but not always. Before crying in fear, 
+stuff. For most cases this is enough, but not always. Before crying in fear,
 angst, and rage because a node to draw that specific *something* does not exist...
 it would be good to know that it is possible to easily make any 2D node (be it
 :ref:`Control <class_Control>` or :ref:`Node2D <class_Node2D>`
@@ -146,15 +146,27 @@ call update() from the _process() callback, like this:
 An example: drawing circular arcs
 ----------------------------------
 
-We will now use the custom drawing functionality of the Godot Engine to draw something that Godot doesn't provide functions for. As an example, Godot provides a draw_circle() function that draws a whole circle. However, what about drawing a portion of a circle? You will have to code a function to perform this, and draw it yourself.
+We will now use the custom drawing functionality of the Godot Engine to draw
+something that Godot doesn't provide functions for. As an example, Godot provides
+a draw_circle() function that draws a whole circle. However, what about drawing a
+portion of a circle? You will have to code a function to perform this and draw it yourself.
 
 Arc function
 ^^^^^^^^^^^^
 
 
-An arc is defined by its support circle parameters. That is: the center position, and the radius. And the arc itself is then defined by the angle it starts from, and the angle at which it stops. These are the 4 parameters we have to provide to our drawing. We'll also provide the color value so we can draw the arc in different colors if we wish.
+An arc is defined by its support circle parameters. That is: the center position
+and the radius. The arc itself is then defined by the angle it starts from
+and the angle at which it stops. These are the 4 parameters that we have to provide to our drawing.
+We'll also provide the color value, so we can draw the arc in different colors if we wish.
 
-Basically, drawing a shape on screen requires it to be decomposed into a certain number of points, linked from one to the following one. As you can imagine, the more points your shape is made of, the smoother it will appear, but the heavier it will be, in terms of processing cost. In general, if your shape is huge (or in 3D, close to the camera), it will require more points to be drawn without it being angular-looking. On the contrary, if your shape is small (or in 3D, far from the camera), you may reduce its number of points to save processing costs. This is called *Level of Detail (LoD)*. In our example, we will simply use a fixed number of points, no matter the radius.
+Basically, drawing a shape on screen requires it to be decomposed into a certain number of points
+linked from one to the following one. As you can imagine, the more points your shape is made of,
+the smoother it will appear, but the heavier it will also be in terms of processing cost. In general,
+if your shape is huge (or in 3D, close to the camera), it will require more points to be drawn without
+it being angular-looking. On the contrary, if your shape is small (or in 3D, far from the camera),
+you may reduce its number of points to save processing costs. This is called *Level of Detail (LoD)*.
+In our example, we will simply use a fixed number of points, no matter the radius.
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -162,11 +174,11 @@ Basically, drawing a shape on screen requires it to be decomposed into a certain
     func draw_circle_arc(center, radius, angle_from, angle_to, color):
         var nb_points = 32
         var points_arc = PoolVector2Array()
-    
+
         for i in range(nb_points+1):
             var angle_point = deg2rad(angle_from + i * (angle_to-angle_from) / nb_points - 90)
             points_arc.push_back(center + Vector2(cos(angle_point), sin(angle_point)) * radius)
-    
+
         for index_point in range(nb_points):
             draw_line(points_arc[index_point], points_arc[index_point + 1], color)
 
@@ -188,19 +200,42 @@ Basically, drawing a shape on screen requires it to be decomposed into a certain
     }
 
 
-Remember the number of points our shape has to be decomposed into? We fixed this number in the nb_points variable to a value of 32. Then, we initialize an empty PoolVector2Array, which is simply an array of Vector2.
-
-The next step consists of computing the actual positions of these 32 points that compose an arc. This is done in the first for-loop: we iterate over the number of points for which we want to compute the positions, plus one to include the last point. We first determine the angle of each point, between the starting and ending angles. 
-
-The reason why each angle is reduced by 90° is that we will compute 2D positions out of each angle using trigonometry (you know, cosine and sine stuff...). However, to be simple, cos() and sin() use radians, not degrees. The angle of 0° (0 radian) starts at 3 o'clock, although we want to start counting at 0 o'clock. So we reduce each angle by 90° in order to start counting from 0 o'clock.
-
-The actual position of a point located on a circle at angle 'angle' (in radians) is given by Vector2(cos(angle), sin(angle)). Since cos() and sin() return values between -1 and 1, the position is located on a circle of radius 1. To have this position on our support circle, which has a radius of 'radius', we simply need to multiply the position by 'radius'. Finally, we need to position our support circle at the 'center' position, which is performed by adding it to our Vector2 value. Finally, we insert the point in the PoolVector2Array which was previously defined.
-
-Now, we need to actually draw our points. As you can imagine, we will not simply draw our 32 points: we need to draw everything that is between each of them. We could have computed every point ourselves using the previous method, and drew it one by one. But this is too complicated and inefficient (except if explicitly needed). So, we simply draw lines between each pair of points. Unless the radius of our support circle is big, the length of each line between a pair of points will never be long enough to see them. If this happens, we simply would need to increase the number of points.
+Remember the number of points our shape has to be decomposed into? We fixed this
+number in the nb_points variable to a value of 32. Then, we initialize an empty
+PoolVector2Array, which is simply an array of Vector2.
+
+The next step consists of computing the actual positions of these 32 points that
+compose an arc. This is done in the first for-loop: we iterate over the number of
+points for which we want to compute the positions, plus one to include the last point.
+We first determine the angle of each point, between the starting and ending angles.
+
+The reason why each angle is reduced by 90° is that we will compute 2D positions
+out of each angle using trigonometry (you know, cosine and sine stuff...). However,
+to keep things simple, cos() and sin() use radians, not degrees. The angle of 0° (0 radians)
+starts at 3 o'clock, although we want to start counting at 12 o'clock. So we reduce
+each angle by 90° in order to start counting from 12 o'clock.
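The 90° offset can be checked with a few lines of plain Python (this is only an illustration of the trigonometry, not Godot API; the `arc_point` helper is hypothetical):

```python
import math

def arc_point(center, radius, angle_deg):
    # Subtract 90 degrees so that 0 degrees points to 12 o'clock,
    # then convert to radians because cos()/sin() expect radians.
    a = math.radians(angle_deg - 90)
    return (center[0] + math.cos(a) * radius,
            center[1] + math.sin(a) * radius)

# With the offset, an angle of 0 degrees lands directly above the center
# (remember that in 2D screen coordinates, the y axis points down).
print(arc_point((0, 0), 100, 0))  # approximately (0.0, -100.0)
```

Without the `- 90`, the same call would return a point at 3 o'clock instead.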
+
+The actual position of a point located on a circle at angle 'angle' (in radians)
+is given by Vector2(cos(angle), sin(angle)). Since cos() and sin() return values
+between -1 and 1, the position is located on a circle of radius 1. To have this
+position on our support circle, which has a radius of 'radius', we simply need to
+multiply the position by 'radius'. Finally, we need to position our support circle
+at the 'center' position, which is performed by adding it to our Vector2 value.
+Finally, we insert the point in the PoolVector2Array which was previously defined.
+
+Now, we need to actually draw our points. As you can imagine, we will not simply
+draw our 32 points: we need to draw everything that is between each of them.
+We could have computed every point ourselves using the previous method, and drew
+it one by one. But this is too complicated and inefficient (except if explicitly needed).
+So, we simply draw lines between each pair of points. Unless the radius of our
+support circle is big, the length of each line between a pair of points will
+never be long enough to see them. If this happens, we simply would need to
+increase the number of points.
 
 Draw the arc on screen
 ^^^^^^^^^^^^^^^^^^^^^^
-We now have a function that draws stuff on the screen: it is time to call in the _draw() function.
+We now have a function that draws stuff on the screen:
+It is time to call in the _draw() function.
 
 .. tabs::
 
@@ -234,7 +269,9 @@ Result:
 
 Arc polygon function
 ^^^^^^^^^^^^^^^^^^^^
-We can take this a step further and not only write a function that draws the plain portion of the disc defined by the arc, but also its shape. The method is exactly the same as previously, except that we draw a polygon instead of lines:
+We can take this a step further and not only write a function that draws the plain
+portion of the disc defined by the arc, but also its shape. The method is exactly
+the same as previously, except that we draw a polygon instead of lines:
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -244,12 +281,12 @@ We can take this a step further and not only write a function that draws the pla
         var points_arc = PoolVector2Array()
         points_arc.push_back(center)
         var colors = PoolColorArray([color])
-    
+
         for i in range(nb_points+1):
             var angle_point = deg2rad(angle_from + i * (angle_to - angle_from) / nb_points - 90)
             points_arc.push_back(center + Vector2(cos(angle_point), sin(angle_point)) * radius)
         draw_polygon(points_arc, colors)
-        
+
  .. code-tab:: csharp
 
     public void DrawCircleArcPoly(Vector2 center, float radius, float angleFrom, float angleTo, Color color)
@@ -268,14 +305,20 @@ We can take this a step further and not only write a function that draws the pla
         DrawPolygon(pointsArc, colors);
     }
 
-        
+
 .. image:: img/result_drawarc_poly.png
 
 Dynamic custom drawing
 ^^^^^^^^^^^^^^^^^^^^^^
-Alright, we are now able to draw custom stuff on screen. However, it is static: let's make this shape turn around the center. The solution to do this is simply to change the angle_from and angle_to values over time. For our example, we will simply increment them by 50. This increment value has to remain constant, else the rotation speed will change accordingly.
+Alright, we are now able to draw custom stuff on screen. However, it is static.
+Let's make this shape turn around the center. The solution to do this is simply
+to change the angle_from and angle_to values over time. For our example,
+we will simply increment them by 50. This increment value has to remain
+constant or else the rotation speed will change accordingly.
 
-First, we have to make both angle_from and angle_to variables global at the top of our script. Also note that you can store them in other nodes and access them using get_node().
+First, we have to make both angle_from and angle_to variables global at the top
+of our script. Also note that you can store them in other nodes and access them
+using get_node().
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -295,11 +338,19 @@ First, we have to make both angle_from and angle_to variables global at the top
         private float _angleTo = 195;
     }
 
-We make these values change in the _process(delta) function. 
+We make these values change in the _process(delta) function.
 
-We also increment our angle_from and angle_to values here. However, we must not forget to wrap() the resulting values between 0 and 360°! That is, if the angle is 361°, then it is actually 1°. If you don't wrap these values, the script will work correctly. But angle values will grow bigger and bigger over time, until they reach the maximum integer value Godot can manage (2^31 - 1). When this happens, Godot may crash or produce unexpected behavior. Since Godot doesn't provide a wrap() function, we'll create it here, as it is relatively simple.
+We also increment our angle_from and angle_to values here. However, we must not
+forget to wrap() the resulting values between 0 and 360°! That is, if the angle
+is 361°, then it is actually 1°. If you don't wrap these values, the script will
+work correctly, but the angle values will grow bigger and bigger over time until
+they reach the maximum integer value Godot can manage (2^31 - 1).
+When this happens, Godot may crash or produce unexpected behavior.
+Since Godot doesn't provide a wrap() function, we'll create it here, as
+it is relatively simple.
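The idea behind such a wrap() function can be sketched in a few lines of plain Python (the GDScript version is analogous; this is an illustration, not the exact code from the tutorial):

```python
def wrap(value, min_value, max_value):
    # Map value into the half-open interval [min_value, max_value)
    # using the remainder of the distance from min_value.
    span = max_value - min_value
    return min_value + (value - min_value) % span

print(wrap(361, 0, 360))  # 1
print(wrap(-10, 0, 360))  # 350
```

With this, an angle that keeps growing frame after frame always stays within one full turn.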
 
-Finally, we must not forget to call the update() function, which automatically calls _draw(). This way, you can control when you want to refresh the frame.
+Finally, we must not forget to call the update() function, which automatically
+calls _draw(). This way, you can control when you want to refresh the frame.
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -312,7 +363,7 @@ Finally, we must not forget to call the update() function, which automatically c
     func _process(delta):
         angle_from += rotation_ang
         angle_to += rotation_ang
-     
+
         # We only wrap angles if both of them are bigger than 360
         if angle_from > 360 and angle_to > 360:
             angle_from = wrap(angle_from, 0, 360)
@@ -370,9 +421,17 @@ Also, don't forget to modify the _draw() function to make use of these variables
 Let's run!
 It works, but the arc is rotating insanely fast! What's wrong?
 
-The reason is that your GPU is actually displaying the frames as fast as it can. We need to "normalize" the drawing by this speed. To achieve, we have to make use of the 'delta' parameter of the _process() function. 'delta' contains the time elapsed between the two last rendered frames. It is generally small (about 0.0003 seconds, but this depends on your hardware). So, using 'delta' to control your drawing ensures that your program runs at the same speed on everybody's hardware.
+The reason is that your GPU is actually displaying the frames as fast as it can.
+We need to "normalize" the drawing by this speed. To achieve this, we have to make
+use of the 'delta' parameter of the _process() function. 'delta' contains the
+time elapsed between the last two rendered frames. It is generally small
+(about 0.0003 seconds, but this depends on your hardware). So, using 'delta' to
+control your drawing ensures that your program runs at the same speed on
+everybody's hardware.
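The effect of multiplying by 'delta' can be simulated outside Godot. In this plain-Python sketch (the numbers and the `total_rotation` helper are illustrative, not Godot API), the total rotation accumulated over one second is the same regardless of the frame rate:

```python
def total_rotation(fps, rotation_speed=50.0, seconds=1.0):
    # Simulate a game loop running at a fixed frame rate.
    delta = 1.0 / fps
    frames = int(seconds * fps)
    angle = 0.0
    for _ in range(frames):
        angle += rotation_speed * delta  # per-frame increment scaled by delta
    return angle

# Roughly the same rotation per second at 30 FPS and at 240 FPS:
print(total_rotation(30))   # close to 50.0
print(total_rotation(240))  # close to 50.0
```

Without the `* delta`, the 240 FPS run would rotate eight times as fast as the 30 FPS one.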
 
-In our case, we simply need to multiply our 'rotation_ang' variable by 'delta' in the _process() function. This way, our 2 angles will be increased by a much smaller value, which directly depends on the rendering speed.
+In our case, we simply need to multiply our 'rotation_ang' variable by 'delta'
+in the _process() function. This way, our 2 angles will be increased by a much
+smaller value, which directly depends on the rendering speed.
 
 .. tabs::
  .. code-tab:: gdscript GDScript
@@ -380,7 +439,7 @@ In our case, we simply need to multiply our 'rotation_ang' variable by 'delta' i
     func _process(delta):
         angle_from += rotation_ang * delta
         angle_to += rotation_ang * delta
-     
+
         # we only wrap angles if both of them are bigger than 360
         if angle_from > 360 and angle_to > 360:
             angle_from = wrap(angle_from, 0, 360)
@@ -410,7 +469,7 @@ Tools
 -----
 
 Drawing your own nodes might also be desired while running them in the
-editor, to use as preview or visualization of some feature or
+editor, to use as a preview or visualization of some feature or
 behavior.
 
 Remember to use the "tool" keyword at the top of the script

+ 13 - 13
tutorials/2d/particle_systems_2d.rst

@@ -22,8 +22,8 @@ Particles2D
 
 Particle systems are added to the scene via the
 :ref:`Particles2D <class_Particles2D>`
-node. However, after creating that node you will notice that only a white dot was created, 
-and that there is a warning icon next to your Particles2D node in the inspector. This 
+node. However, after creating that node you will notice that only a white dot was created,
+and that there is a warning icon next to your Particles2D node in the inspector. This
 is because the node needs a ParticlesMaterial to function.
 
 ParticlesMaterial
@@ -31,7 +31,7 @@ ParticlesMaterial
 
 To add a process material to your particles node, go to Process Material in
 your inspector panel. Click on the box next to material, and from the dropdown
-menu select New Particles Material. 
+menu select New Particles Material.
 
 .. image:: img/particles_material.png
 
@@ -84,14 +84,14 @@ actually drawn the first time.
 Speed Scale
 ~~~~~~~~~~~
 
-The speed scale has a default value of ``1``, and is used to adjust the
-speed of a particle system. Lowering the value will make the particles 
-slower, increasing the value will make the particles much faster.
+The speed scale has a default value of ``1`` and is used to adjust the
+speed of a particle system. Lowering the value will make the particles
+slower, while increasing it will make them faster.
 
 Explosiveness
 ~~~~~~~~~~~~~
 
-If lifetime is ``1`` and there are ten particles, it means a particle
+If lifetime is ``1`` and there are 10 particles, it means a particle
 will be emitted every 0.1 seconds. The explosiveness parameter changes
 this, and forces particles to be emitted all together. Ranges are:
 
@@ -116,7 +116,7 @@ All physics parameters can be randomized. Random values range from ``0`` to
 Fixed FPS
 ~~~~~~~~~
 
-This setting can be used to set the particle system to render at a fixed 
+This setting can be used to set the particle system to render at a fixed
 FPS. For instance, changing the value to ``2`` will make the particles render
 at 2 frames per second. Note this does not slow down the particle system itself.
 
@@ -125,12 +125,12 @@ Fract Delta
 
 This can be used to turn Fract Delta on or off.
 
-Drawing Parameters 
+Drawing Parameters
 ------------------
 
 Visibility Rect
 ~~~~~~~~~~~~~~~
- 
+
 The ``W`` and ``H`` values control width and height of the visibility
 rectangle. The ``X`` and ``Y`` values control the position of the upper-left
 corner of the visibility rectangle relative to the particle emitter.
@@ -186,7 +186,7 @@ in all directions (+/- 180).
 Gravity
 ~~~~~~~
 
-The gravity applied to every particle. 
+The gravity applied to every particle.
 
 .. image:: img/paranim7.gif
 
@@ -272,6 +272,6 @@ Used to change the color of the particles being emitted.
 Hue variation
 ~~~~~~~~~~~~~
 
-The variation value sets the initial hue variation applied to each 
-particle. The Variation rand value controls the hue variation
+The Variation value sets the initial hue variation applied to each
+particle. The Variation Rand value controls the hue variation
 randomness ratio.

+ 10 - 10
tutorials/2d/using_tilemaps.rst

@@ -26,8 +26,8 @@ Having them as separate images also works.
 
 .. image:: img/tileset.png
 
-Create a new project and move the above PNG image into the directory. Next 
-go into the image's import settings and turn off ``Filter``, keeping it on will cause 
+Create a new project and move the above PNG image into the directory. Next
+go into the image's import settings and turn off ``Filter``; keeping it on will cause
 issues later. ``Mipmaps`` should already be disabled, if not, disable this too.
 
 We will be creating a :ref:`TileSet <class_TileSet>`
@@ -80,11 +80,11 @@ recommended because it is easier to edit.
 
 .. image:: img/tile_example3.png
 
-Finally, edit the polygon, this will give the tile a collision, and fix 
-the warning icon next to the CollisionPolygon node. **Remember to use snap!** 
-Using snap will make sure collision polygons are aligned properly, allowing 
-a character to walk seamlessly from tile to tile. Also **do not scale or move** 
-the collision and/or collision polygon nodes. Leave them at offset 0,0, with 
+Finally, edit the polygon; this will give the tile a collision and fix
+the warning icon next to the CollisionPolygon node. **Remember to use snap!**
+Using snap will make sure collision polygons are aligned properly, allowing
+a character to walk seamlessly from tile to tile. Also **do not scale or move**
+the collision and/or collision polygon nodes. Leave them at offset 0,0, with
 scale 1,1 and rotation 0 with respect to the parent sprite.
 
 .. image:: img/tile_example4.png
@@ -148,8 +148,8 @@ using the lock button:
 
 .. image:: img/tile_lock.png
 
-If you accidentally place a tile somewhere you don't want it to be, you 
-can delete it with ``RMB`` while in the tilemap editor. 
+If you accidentally place a tile somewhere you don't want it to be, you
+can delete it with ``RMB`` while in the tilemap editor.
 
 You can also flip and rotate sprites in the TileMap editor (note:
 flipping the sprite in the TileSet will have no effect). Icons at the
@@ -179,5 +179,5 @@ one that looks better for you:
    Rendering > Quality > 2d > Use Pixel Snap`` to true, you can also search for ``Pixel Snap``).
 -  Viewport Scaling can often help with shrinking the map (see the
    :ref:`doc_viewports` tutorial). Simply adding a camera, setting it to ``Current`` and playing around with it's ``Zoom`` may be a good starting point.
--  You can use a single, separate image for each tile. This will remove all artifacts, but
+-  You can use a single, separate image for each tile. This will remove all artifacts but
    can be more cumbersome to implement and is less optimized.

+ 3 - 3
tutorials/3d/3d_performance_and_limitations.rst

@@ -7,13 +7,13 @@ Introduction
 ~~~~~~~~~~~~
 
 Godot follows a balanced performance philosophy. In performance world,
-there are always trade-offs, which consist in trading speed for
+there are always trade-offs, which consist of trading speed for
 usability and flexibility. Some practical examples of this are:
 
 -  Rendering objects efficiently in high amounts is easy, but when a
-   large scene must be rendered it can become inefficient. To solve
+   large scene must be rendered, it can become inefficient. To solve
    this, visibility computation must be added to the rendering, which
-   makes rendering less efficient, but at the same time less objects are
+   makes rendering less efficient, but, at the same time, fewer objects are
    rendered, so efficiency overall improves.
 -  Configuring the properties of every material for every object that
    needs to be rendered is also slow. To solve this, objects are sorted

+ 96 - 52
tutorials/3d/baked_lightmaps.rst

@@ -6,24 +6,33 @@ Baked Lightmaps
 Introduction
 ------------
 
-Baked lightmaps are an alternative workflow for adding indirect (or baked) lighting to a scene. Unlike the :ref:`doc_gi_probes` approach,
-baked lightmaps work fine on low end PCs and mobile devices as they consume almost no resources in run-time.
+Baked lightmaps are an alternative workflow for adding indirect (or baked)
+lighting to a scene. Unlike the :ref:`doc_gi_probes` approach,
+baked lightmaps work fine on low-end PCs and mobile devices as they consume
+almost no resources at run-time.
 
-Unlike GIProbes, Baked Lightmaps are completely static, once baked they can't be modified at all. They also don't provide the scene with
-reflections, so using :ref:`doc_reflection_probes` together with it on interiors (or using a Sky on exteriors) is a requirement to
-get good quality.
+Unlike GIProbes, Baked Lightmaps are completely static. Once baked, they can't be
+modified at all. They also don't provide the scene with
+reflections, so using :ref:`doc_reflection_probes` together with them on interiors
+(or using a Sky on exteriors) is a requirement to get good quality.
 
-As they are baked, they have less problems regarding to light bleeding than GIProbe and indirect light can look better if using Raytrace
-mode on high quality setting (but baking can take a while to bake).
+As they are baked, they have fewer problems with light bleeding than
+GIProbe, and indirect light can look better if using Raytrace
+mode on the high quality setting (but baking can take a while).
 
-In the end, deciding which indirect lighting approach is better depends on your use case. In general GIProbe looks better and is much
-easier to set upt. For low end compatibility or mobile, though, Baked Lightmaps are your only choice.
+In the end, deciding which indirect lighting approach is better depends on your
+use case. In general, GIProbe looks better and is much
+easier to set up. For low-end compatibility or mobile, though, Baked Lightmaps
+are your only choice.
 
 Visual Comparison
 -----------------
 
-Here are some comparisons of how Baked Lightmaps vs GIProbe look. Notice that lightmaps are more accurate, but also suffer from the fact
-that lighting is on an unwrapped texture, so transitions and resolution may not be that good. GIProbe looks less accurate (as it's an approximation), but more smooth overall.
+Here are some comparisons of how Baked Lightmaps vs GIProbe look. Notice that
+lightmaps are more accurate, but also suffer from the fact
+that lighting is on an unwrapped texture, so transitions and resolution may not
+be that good. GIProbe looks less accurate (as it's an approximation), but more
+smooth overall.
 
 .. image:: img/baked_light_comparison.png
 
@@ -31,83 +40,106 @@ that lighting is on an unwrapped texture, so transitions and resolution may not
 Setting Up
 ----------
 
-First of all, before the lightmapper can do anything, objects to be baked need an UV2 layer and a texture size. An UV2 layer is a set of secondary texture coordinates
-that ensures any face in the object has it's own place in the UV map. Faces must not share pixels in the texture.
+First of all, before the lightmapper can do anything, the objects to be baked need
+a UV2 layer and a texture size. A UV2 layer is a set of secondary texture coordinates
+that ensures any face in the object has its own place in the UV map. Faces must
+not share pixels in the texture.
 
 There are a few ways to ensure your object has a unique UV2 layer and texture size:
 
 Unwrap from your 3D DCC
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-One option is to do it from your favorite 3D app. This approach is generally not recommended but it's explained first so you know it exists.
-The main advantage is that, on complex objects that you may want to re-import a lot, the texture generation process can be quite costly within Godot,
+One option is to do it from your favorite 3D app. This approach is generally
+not recommended, but it's explained first so that you know it exists.
+The main advantage is that, on complex objects that you may want to re-import a
+lot, the texture generation process can be quite costly within Godot,
 so having it unwrapped before import can be faster.
 
 Simply do an unwrap on the second UV2 layer.
 
 .. image:: img/baked_light_blender.png
 
-And import normally. Remember you will need to set the texture size on the mesh after import. 
+And import normally. Remember you will need to set the texture size on the mesh
+after import.
 
 .. image:: img/baked_light_lmsize.png
 
 If you use external meshes on import, the size will be kept.
-Be wary that most unwrappers in 3D DCCs are not quality oriented, as they are meant to work quick. You will mostly need to use seams or other techniques to create better unwrapping.
+Be wary that most unwrappers in 3D DCCs are not quality oriented, as they are
+meant to work quickly. You will mostly need to use seams or other techniques to
+create a better unwrap.
 
 Unwrap from within Godot
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Godot has an option to unwrap meshes and visualize the UV channels. It can be found in the Mesh menu:
+Godot has an option to unwrap meshes and visualize the UV channels.
+It can be found in the Mesh menu:
 
 .. image:: img/baked_light_mesh_menu.png
 
-This will generate a second set of UV2 coordinates, which can be used for baking and it will set the texture size automatically.
+This will generate a second set of UV2 coordinates which can be used for baking,
+and it will also set the texture size automatically.
 
 Unwrap on Scene import
 ~~~~~~~~~~~~~~~~~~~~~~
 
-This is probably the best approach overall. The only downside is that, on large models, unwrap can take a while on import.
-Just select the imported scene in the filesystem dock, then go to the Import tab. There, the following option can be modified:
+This is probably the best approach overall. The only downside is that, on large
+models, unwrap can take a while on import.
+Just select the imported scene in the filesystem dock, then go to the Import tab.
+There, the following option can be modified:
 
 .. image:: img/baked_light_import.png
 
-The **Light Baking** mode needs to be set to **"Gen Lightmaps"**. A texel size in world units must also be provided, as this will determine the
+The **Light Baking** mode needs to be set to **"Gen Lightmaps"**. A texel size
+in world units must also be provided, as this will determine the
 final size of the lightmap texture (and, in consequence, the UV padding in the map).
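The relationship between texel size and lightmap resolution can be sketched roughly in Python (an illustrative approximation with a hypothetical helper name, not the engine's actual algorithm): halving the texel size quadruples the texel count, so the texture side roughly doubles.

```python
import math

def lightmap_size(surface_area_m2, texel_size_m):
    # Number of texels needed so each one covers roughly
    # texel_size_m x texel_size_m of world-space surface.
    texels = surface_area_m2 / (texel_size_m ** 2)
    side = math.ceil(math.sqrt(texels))
    # Round up to the next power of two, as textures commonly are.
    return 1 << max(0, (side - 1).bit_length())
```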
 
-The effect of setting this option is that all meshes within the scene will have their UV2 maps properly generated.
+The effect of setting this option is that all meshes within the scene will have
+their UV2 maps properly generated.
 
-As a word of warning: When reusing a mesh within a scene, keep in mind that UVs will be generated for the first instance found. If the mesh is re-used with different scales (and the scales
-are wildly different, more than half or twice), this will result in inefficient lightmaps. Just don't reuse a source mesh at different scales if you are planning to use lightmapping.
+As a word of warning: When reusing a mesh within a scene, keep in mind that UVs
+will be generated for the first instance found. If the mesh is re-used with different
+scales (and the scales are wildly different, i.e. less than half or more than twice), this will
+result in inefficient lightmaps. Just don't reuse a source mesh at different scales
+if you are planning to use lightmapping.
 
 Checking UV2
 ~~~~~~~~~~~~
 
-In the mesh menu mentioned before, the UV2 texture coordinates can be visualized. Make sure, if something is failing, to check that the meshes have these UV2 coordinates:
+In the mesh menu mentioned before, the UV2 texture coordinates can be visualized.
+Make sure, if something is failing, to check that the meshes have these UV2 coordinates:
 
 .. image:: img/baked_light_uvchannel.png
 
 Setting up the Scene
 --------------------
 
-Before anything is done, a **BakedLight** Node needs to be added to a scene. This will enable light baking on all nodes (and sub-nodes) in that scene, even on instanced scenes. 
+Before anything is done, a **BakedLight** Node needs to be added to a scene.
+This will enable light baking on all nodes (and sub-nodes) in that scene, even
+on instanced scenes.
 
 .. image:: img/baked_light_scene.png
 
-A sub-scene can be instanced several times, as this is supported by the baker and each will be assigned a lightmap of it's own (just make sure to respect the rule about scaling mentioned before):
+A sub-scene can be instanced several times, as this is supported by the baker, and
+each will be assigned a lightmap of its own (just make sure to respect the rule
+about scaling mentioned before):
 
 
 Configure Bounds
 ~~~~~~~~~~~~~~~~
 
-Lightmap needs an approximate volume of the area affected, because it uses it to transfer light to dynamic objects inside (more on that later). Just 
-cover the scene with the volume, as you do with GIProbe:
+The lightmap needs an approximate volume of the area affected because it uses it to
+transfer light to dynamic objects inside (more on that later). Just
+cover the scene with the volume as you do with GIProbe:
 
 .. image:: img/baked_light_bounds.png
 
 Setting Up Meshes
 ~~~~~~~~~~~~~~~~~
 
-For a **MeshInstance** node to take part in the baking process, it needs to have the "Use in Baked Light" property enabled.
+For a **MeshInstance** node to take part in the baking process, it needs to have
+the "Use in Baked Light" property enabled.
 
 .. image:: img/baked_light_use.png
 
@@ -116,70 +148,82 @@ When auto-generating lightmaps on scene import, this is enabled automatically.
 Setting up Lights
 ~~~~~~~~~~~~~~~~~
 
-Lights are baked with indirect light by default. This means that shadowmapping and lighting are still dynamic and affect moving objects, but light bounces from that light will
-be baked.
+Lights are baked with indirect light by default. This means that shadowmapping
+and lighting are still dynamic and affect moving objects, but light bounces from
+that light will be baked.
 
-Lights can be disabled (no bake), or be fully baked (direct and indirect), this can be controlled from the **Bake Mode** menu in lights:
+Lights can be disabled (no bake) or be fully baked (direct and indirect). This
+can be controlled from the **Bake Mode** menu in lights:
 
 .. image:: img/baked_light_bake_mode.png
 
 The modes are:
 
 - **Disabled:** Light is ignored in baking. Keep in mind hiding a light will have no effect for baking, so this must be used instead.
-- **Indirect:** This is the default mode, only indirect lighting will be baked.
+- **Indirect:** This is the default mode. Only indirect lighting will be baked.
 - **All:** Both indirect and direct lighting will be baked. If you don't want the light to appear twice (dynamically and statically), simply hide it.
 
 Baking Quality
 ~~~~~~~~~~~~~~
 
-BakedLightmap uses, for simplicity, a voxelized version of the scene to compute lighting. Voxel size can be adjusted with the **Bake Subdiv** parameter. 
-More subdivision results in more detail, but also takes more time to bake.
+BakedLightmap uses, for simplicity, a voxelized version of the scene to compute
+lighting. Voxel size can be adjusted with the **Bake Subdiv** parameter.
+More subdivision results in more detail but also takes more time to bake.
 
-In general, the defaults are good enough. There is also a **Capture Subdivision** (that must always be equal or less to the main subdivision), which is used
-for capturing light in dynamic objects (more on that later). It's default value is also good enough for more cases.
+In general, the defaults are good enough. There is also a **Capture Subdivision**
+(which must always be less than or equal to the main subdivision), which is used
+for capturing light in dynamic objects (more on that later). Its default value
+is also good enough for most cases.
 
 .. image:: img/baked_light_capture.png
 
-Besides the capture size, quality can be modified by setting the **Bake Mode**. Two modes of capturing indirect are provided:
+Besides the capture size, quality can be modified by setting the **Bake Mode**.
+Two modes of capturing indirect lighting are provided:
 
 .. image:: img/baked_light_mode.png
 
-- **Voxel Cone**: Trace: Is the default one, it's less precise but fast. Look similar (but slightly better) to GIProbe.
-- **Ray Tracing**: This method is more precise, but can take considerably longer to bake. If used in low or medium quality, some scenes may produce grain.
+- **Voxel Cone Trace**: This is the default mode. It's less precise but faster. Looks similar to (but slightly better than) GIProbe.
+- **Ray Tracing**: This method is more precise but can take considerably longer to bake. If used in low or medium quality, some scenes may produce grain.
 
 
 Baking
 ------
 
-To begin the bake process, just push the big **Bake Lightmaps** button on top, when selecting the BakedLightmap node:
+To begin the bake process, just push the big **Bake Lightmaps** button on top
+when selecting the BakedLightmap node:
 
 .. image:: img/baked_light_bake.png
 
-This can take from seconds to minutes (or hours) depending on scene size, bake method and quality selected.
+This can take from seconds to minutes (or hours) depending on scene size, bake
+method and quality selected.
 
 Configuring Bake
 ~~~~~~~~~~~~~~~~
 
 Several more options are present for baking:
 
-- **Bake Subdiv**: Godot lightmapper uses a grid to transfer light information around. The default value is fine and should work for most cases. Increase it in case you want better lighting on small details or your scene is large. 
+- **Bake Subdiv**: Godot lightmapper uses a grid to transfer light information around. The default value is fine and should work for most cases. Increase it in case you want better lighting on small details or your scene is large.
 - **Capture Subdiv**: This is the grid used for real-time capture information (lighting dynamic objects). Default value is generally OK, it's usually smaller than Bake Subdiv and can't be larger than it.
-- **Bake Quality**: Three bake quality modes are provided, Low, Medium and High. Each takes less and more time.
+- **Bake Quality**: Three bake quality modes are provided: Low, Medium, and High. Higher quality takes more time.
 - **Bake Mode**: The baker can use two different techniques: *Voxel Cone Tracing* (fast but approximate), or *RayTracing* (slow, but accurate).
-- **Propagation**: Used for the *Voxel Cone Trace* mode, works just like in GIProbe.
+- **Propagation**: Used for the *Voxel Cone Trace* mode. Works just like in GIProbe.
 - **HDR**: If disabled, lightmaps are smaller but can't capture any light over white (1.0).
 - **Image Path**: Where lightmaps will be saved. By default, on the same directory as the scene ("."), but can be tweaked.
 - **Extents**: Size of the area affected (can be edited visually)
-- **Light Data**: Contains the light baked data after baking. Textures are saved to disk, but this also contains the capture data for dynamic objects, which can be a bit heavy. If you are using .tscn formats (instead of .scn) you can save it to disk.
+- **Light Data**: Contains the light baked data after baking. Textures are saved to disk, but this also contains the capture data for dynamic objects which can be a bit heavy. If you are using .tscn formats (instead of .scn), you can save it to disk.
 
 
 Dynamic Objects
 ---------------
 
-In other engines or lightmapper implementations, you are required to manually place small objects called "lightprobes" all around the level to generate *capture* data. This is used to, then, transfer the light to dynamic objects that move around the scene.
+In other engines or lightmapper implementations, you are required to manually
+place small objects called "lightprobes" all around the level to generate *capture*
+data. This is used to, then, transfer the light to dynamic objects that move
+around the scene.
 
-This implementation of lightmapping uses a different method, so this process is automatic and you don't have to do anything. Just move your objects around and they will be lit accordingly. Of course, you have to make sure you set up your scene bounds accordingly or it won't work.
+However, this implementation of lightmapping uses a different method. The process is
+automatic, so you don't have to do anything. Just move your objects around, and
+they will be lit accordingly. Of course, you have to make sure you set up your
+scene bounds accordingly or it won't work.
 
 .. image:: img/baked_light_indirect.gif
-
-

+ 137 - 70
tutorials/3d/environment_and_post_processing.rst

@@ -3,13 +3,16 @@
 Environment and Post-Processing
 ===============================
 
-Godot 3 provides a redesigned Environment resource, as well as a brand new post-processing system with many available effects right out of the box.
+Godot 3 provides a redesigned Environment resource, as well as a brand new
+post-processing system with many available effects right out of the box.
 
 Environment
 -----------
 
-The Environment resource stores all the information required for controlling rendering environment. This includes sky, ambient lighting, tone mapping, effects and adjustments.
-By itself it does nothing, but it becomes enabled once used in one of the following locations, in order of priority:
+The Environment resource stores all the information required for controlling the
+rendering environment. This includes sky, ambient lighting, tone mapping,
+effects, and adjustments. By itself it does nothing, but it becomes enabled once
+used in one of the following locations, in order of priority:
 
 Camera Node
 ^^^^^^^^^^^^
@@ -18,61 +21,79 @@ An Environment can be set to a camera. It will have priority over any other sett
 
 .. image:: img/environment_camera.png
 
-This is mostly useful when wanting to override an existing environment, but in general it's a better idea to use the option below.
+This is mostly useful when wanting to override an existing environment,
+but in general it's a better idea to use the option below.
 
 
 WorldEnvironment Node
 ^^^^^^^^^^^^^^^^^^^^^
 
-The WorldEnvironment node can be added to any scene, but only one can exist per active scene tree. Adding more than one will result in a warning.
+The WorldEnvironment node can be added to any scene, but only one can exist per
+active scene tree. Adding more than one will result in a warning.
 
 .. image:: img/environment_world.png
 
-Any Environment added has higher priority than the default Environment (explained below). This means it can be overridden on a per-scene basis, which makes it quite useful.
+Any Environment added has higher priority than the default Environment
+(explained below). This means it can be overridden on a per-scene basis,
+which makes it quite useful.
 
 
 Default Environment
 ^^^^^^^^^^^^^^^^^^^^^
 
-A default environment can be set, which acts as a fallback when no Environment was set to a Camera or WorldEnvironment.
+A default environment can be set, which acts as a fallback when no Environment
+was set to a Camera or WorldEnvironment.
 Just head to Project Settings -> Rendering -> Environment:
 
 .. image:: img/environment_default.png
 
-New projects created from the Project Manager come with a default environment (``default_env.tres``). If one needs to be created, save it to disk before referencing it here.
+New projects created from the Project Manager come with a default environment
+(``default_env.tres``). If one needs to be created, save it to disk before
+referencing it here.
 
 Environment Options
 -------------------
 
-Following is a detailed description of all environment options and how they are intended to be used.
+Following is a detailed description of all environment options and how they
+are intended to be used.
 
 
 Background
 ^^^^^^^^^^
 
-The Background section contains settings on how to fill the background (parts of the screen where objects where not drawn). In Godot 3.0, the background not only serves the purpose of displaying an image or color, it can change how objects are affected by ambient and reflected light.
+The Background section contains settings on how to fill the background (parts of
+the screen where objects were not drawn). In Godot 3.0, the background not only
+serves the purpose of displaying an image or color, it can also change how objects
+are affected by ambient and reflected light.
 
 .. image:: img/environment_background1.png
 
-There are many ways to set the background: 
+There are many ways to set the background:
 
 - **Clear Color** uses the default clear color defined by the project. The background will be a constant color.
-- **Custom Color** is like Clear Color, but with a custom color value.
+- **Custom Color** is like Clear Color but with a custom color value.
 - **Sky** lets you define a panorama sky (a 360 degree sphere texture) or a procedural sky (a simple sky featuring a gradient and an optional sun). Objects will reflect it and absorb ambient light from it.
-- **Color+Sky** lets you define a sky (as above), but uses a constant color value for drawing the background. The sky will only be used for reflection and ambient light.
+- **Color+Sky** lets you define a sky (as above) but uses a constant color value for drawing the background. The sky will only be used for reflection and ambient light.
 
 
 Ambient Light
 ^^^^^^^^^^^^^
 
-Ambient (as defined here) is a type of light that affects every piece of geometry with the same intensity. It is global and independent of lights that might be added to the scene. 
+Ambient (as defined here) is a type of light that affects every piece of geometry
+with the same intensity. It is global and independent of lights that might be
+added to the scene.
 
-There are two types of ambient light, the *Ambient Color* (which is a constant color multiplied by the material albedo), and then one obtained from the *Sky* (as described before, but a sky needs to be set as background for this to be enabled). 
+There are two types of ambient light: the *Ambient Color* (which is a constant
+color multiplied by the material albedo) and the one obtained from the *Sky*
+(as described before, but a sky needs to be set as background for this to be
+enabled).
 
 .. image:: img/environment_ambient.png
 
 
-When a *Sky* is set as background, it's possible to blend between ambient color and sky using the **Sky Contribution** setting (this value is 1.0 by default for convenience, so only sky affects objects).
+When a *Sky* is set as background, it's possible to blend between ambient color
+and sky using the **Sky Contribution** setting (this value is 1.0 by default for
+convenience, so only the sky affects objects).
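The blend above can be sketched as a simple per-channel interpolation in Python (the function name is hypothetical and this is only an illustration of the idea, not Godot's implementation; the energy multiplier mentioned below is included as a scale factor):

```python
def ambient_light(ambient_color, sky_ambient, sky_contribution, energy=1.0):
    # Per-channel blend between the constant ambient color and the ambient
    # light sampled from the sky, scaled by the energy multiplier.
    # sky_contribution = 1.0 means only the sky affects objects.
    return tuple((a + (s - a) * sky_contribution) * energy
                 for a, s in zip(ambient_color, sky_ambient))
```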
 
 Here is a comparison of how different ambient light affects a scene:
 
@@ -80,17 +101,24 @@ Here is a comparison of how different ambient light affects a scene:
 
 Finally, there is an **Energy** setting, which is a multiplier, useful when working with HDR.
 
-In general, ambient light should only be used for simple scenes, large exteriors or for performance reasons (ambient light is cheap), as it does not provide the best lighting quality. It's better to generate
-ambient light from ReflectionProbe or GIProbe, which will more faithfully simulate how indirect light propagates. Below is a comparison in quality between using a flat ambient color and a GIProbe:
+In general, ambient light should only be used for simple scenes, large exteriors,
+or for performance reasons (ambient light is cheap), as it does not provide the
+best lighting quality. It's better to generate
+ambient light from ReflectionProbe or GIProbe, which will more faithfully simulate
+how indirect light propagates. Below is a comparison in quality between using a
+flat ambient color and a GIProbe:
 
 .. image:: img/environment_ambient_comparison.png
 
-Using one of the methods described above, objects get constant ambient lighting replaced by ambient light from the probes.
+Using one of the methods described above, objects get constant ambient lighting
+replaced by ambient light from the probes.
 
 Fog
 ^^^
 
-Fog, as in real life, makes distant objects fade away into an uniform color. The physical effect is actually pretty complex, but Godot provides a good approximation. There are two kinds of fog in Godot:
+Fog, as in real life, makes distant objects fade away into a uniform color. The
+physical effect is actually pretty complex, but Godot provides a good approximation.
+There are two kinds of fog in Godot:
 
 - **Depth Fog:** This one is applied based on the distance from the camera.
 - **Height Fog:** This one is applied to any objects below (or above) a certain height, regardless of the distance from the camera.
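The two fog types can be sketched as blend factors in Python (hypothetical helper names, illustrative only; 0 means no fog and 1 means fully fogged):

```python
def depth_fog(distance, fog_begin, fog_end, curve=1.0):
    # Fog factor based on distance from the camera.
    t = (distance - fog_begin) / (fog_end - fog_begin)
    t = min(max(t, 0.0), 1.0)
    # Raising to a power mimics tweaking the transition curve.
    return t ** curve

def height_fog(y, height_max, height_min):
    # Full fog at or below height_min, none at or above height_max,
    # regardless of the distance from the camera.
    t = (height_max - y) / (height_max - height_min)
    return min(max(t, 0.0), 1.0)
```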
@@ -101,37 +129,53 @@ Both of these fog types can have their curve tweaked, making their transition mo
 
 Two properties can be tweaked to make the fog effect more interesting:
 
-The first is **Sun Amount**, which makes use of the Sun Color property of the fog. When looking towards a directional light (usually a sun), the color of the fog will be changed, simulating the sunlight passing through the fog.
+The first is **Sun Amount**, which makes use of the Sun Color property of the fog.
+When looking towards a directional light (usually a sun), the color of the fog
+will be changed, simulating the sunlight passing through the fog.
 
-The second is **Transmit Enabled** which simulates more realistic light transmittance. In practice, it makes light stand out more across the fog.
+The second is **Transmit Enabled**, which simulates more realistic light transmittance.
+In practice, it makes light stand out more across the fog.
 
 .. image:: img/environment_fog_transmission.png
 
 Tonemap
 ^^^^^^^
 
-Selects the tone-mapping curve that will be applied to the scene, from a short list of standard curves used in the film and game industry. Tone mapping can make light and dark areas more homogeneous, even though the result is not that strong. Tone mapping options are:
+Selects the tone-mapping curve that will be applied to the scene, from a short
+list of standard curves used in the film and game industry. Tone mapping can make
+light and dark areas more homogeneous, even though the result is not that strong.
+Tone mapping options are:
 
-- **Mode:** Tone mapping mode, which can be Linear, Reindhart, Filmic or Aces.
-- **Exposure:** Tone mapping exposure, which simulates amount of light received over time.
-- **White:** Tone mapping white, which simulates where in the scale is white located (by default 1.0).
+- **Mode:** Tone mapping mode, which can be Linear, Reinhardt, Filmic, or Aces.
+- **Exposure:** Tone mapping exposure, which simulates the amount of light received over time.
+- **White:** Tone mapping white, which simulates where white is located in the scale (1.0 by default).
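As an illustration of how exposure and white interact, here is the standard extended Reinhard operator sketched in Python (Godot's internal curves may differ; this is a sketch of the general technique, not the engine's code):

```python
def tonemap_reinhard(channel, exposure=1.0, white=1.0):
    # Exposure simulates the amount of light received over time;
    # 'white' is the input level that maps to pure white output.
    c = channel * exposure
    return c * (1.0 + c / (white * white)) / (1.0 + c)
```

Note how an input equal to the white point always maps to 1.0, while values below it are compressed toward the midtones.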
 
 Auto Exposure (HDR)
 ^^^^^^^^^^^^^^^^^^^
 
-Even though, in most cases, lighting and texturing are heavily artist controlled, Godot suports a simple high dynamic range implementation with auto exposure mechanism. This is generally used for the
-sake of realism, when combining interior areas with low light and outdoors. Auto expure simulates the camera (or eye) effort to adapt between light and dark locations and their different amount of light.
+Even though, in most cases, lighting and texturing are heavily artist-controlled,
+Godot supports a simple high dynamic range implementation with an auto exposure
+mechanism. This is generally used for the sake of realism when combining
+interior areas with low light and outdoors. Auto exposure simulates the effort
+of the camera (or eye) to adapt between light and dark locations and their
+different amounts of light.
 
 .. image:: img/environment_hdr_autoexp.gif
 
-The simplest way to use auto exposure is to make sure outdoor lights (or other strong lights) have energy beyond 1.0. This is done by tweaking their **Energy** multiplier (on the Light itself). To
-make it consistent, the **Sky** usually needs to use the energy multiplier too, to match the with the directional light. Normally, values between 3.0 and 6.0 are enough to simulate indoor-oudoor conditions.
+The simplest way to use auto exposure is to make sure outdoor lights (or other
+strong lights) have energy beyond 1.0. This is done by tweaking their **Energy**
+multiplier (on the Light itself). To make it consistent, the **Sky** usually
+needs to use the energy multiplier too, to match with the directional light.
+Normally, values between 3.0 and 6.0 are enough to simulate indoor-outdoor conditions.
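The adaptation itself can be sketched in Python as a gradual approach toward a target exposure (a minimal sketch with hypothetical names and parameters, not Godot's actual implementation):

```python
import math

def adapt_exposure(current, avg_luminance, target=0.5, speed=2.0,
                   dt=1.0 / 60.0, lo=0.05, hi=8.0):
    # Exposure that would bring the average scene luminance to 'target',
    # clamped to a sensible range.
    desired = min(max(target / max(avg_luminance, 1e-4), lo), hi)
    # Approach it exponentially, so adaptation is gradual like an eye.
    k = 1.0 - math.exp(-speed * dt)
    return current + (desired - current) * k
```

A bright scene (high average luminance) drives the exposure down over successive frames; a dark one drives it up.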
 
-By combining Auto Exposure with *Glow* post processing (more on that below), pixels that go over the tonemap **White** will bleed to the glow buffer, creating the typical bloom effect in photography.
+By combining Auto Exposure with *Glow* post processing (more on that below),
+pixels that go over the tonemap **White** will bleed to the glow buffer,
+creating the typical bloom effect in photography.
 
 .. image:: img/environment_hdr_bloom.png
 
-The user-controllable values in the Auto Exposure section come with sensible defaults, but you can still tweak then:
+The user-controllable values in the Auto Exposure section come with sensible
+defaults, but you can still tweak them:
 
 .. image:: img/environment_hdr.png
 
@@ -143,17 +187,21 @@ The user-controllable values in the Auto Exposure section come with sensible def
 Mid and Post-Processing Effects
 -------------------------------
 
-A large amount of widely-used mid and post-processing effects are supported in Environment.
+A large number of widely-used mid- and post-processing effects are supported
+in Environment.
 
 Screen-Space Reflections (SSR)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-While Godot supports three sources of reflection data (Sky, ReflectionProbe and GIProbe), they may not provide enough detail for all situations. Scenarios
-where Screen Space Reflections make the most sense are when objects are in contact with each other (object over floor, over a table, floating on water, etc). 
+While Godot supports three sources of reflection data (Sky, ReflectionProbe, and
+GIProbe), they may not provide enough detail for all situations. Scenarios
+where Screen Space Reflections make the most sense are when objects are in
+contact with each other (object over floor, over a table, floating on water, etc.).
 
 .. image:: img/environment_ssr.png
 
-The other advantage (even if only enabled to a minimum), is that it works in real-time (while the other types of reflections are pre-computed). This is great to
+The other advantage (even if only enabled to a minimum) is that it works in real-time
+(while the other types of reflections are pre-computed). This is great to
 make characters, cars, etc. reflect when moving around.
 
 A few user-controlled parameters are available to better tweak the technique:
@@ -169,17 +217,29 @@ Keep in mind that screen-space-reflections only work for reflecting opaque geome
 Screen-Space Ambient Occlusion (SSAO)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-As mentioned in the **Ambient** section, areas where light from light nodes does not reach (either because it's outside the radius or shadowed) are lit with ambient light. Godot can simulate this using GIProbe, ReflectionProbe, the Sky or a constant ambient color. The problem, however, is that all the methods proposed before act more on larger scale (large regions) than at the smaller geometry level.
+As mentioned in the **Ambient** section, areas where light from light nodes
+does not reach (either because it's outside the radius or shadowed) are lit
+with ambient light. Godot can simulate this using GIProbe, ReflectionProbe,
+the Sky, or a constant ambient color. The problem, however, is that all the
+methods proposed before act more on a larger scale (large regions) than at the
+smaller geometry level.
 
-Constant ambient color and Sky are uniform and the same everywhere, while GI and Reflection probes have more local detail, but not enough to simulate situations where light is not able to fill inside hollow or concave features.
+Constant ambient color and Sky are uniform and the same everywhere, while GI and
+Reflection probes have more local detail but not enough to simulate situations
+where light is not able to fill inside hollow or concave features.
 
-This can be simulated with Screen Space Ambient Occlusion. As you can see in the image below, the goal of it is to make sure concave areas are darker, simulating a narrower path for the light to enter:
+This can be simulated with Screen Space Ambient Occlusion. As you can see in the
+image below, its goal is to make sure concave areas are darker, simulating
+a narrower path for the light to enter:
 
 .. image:: img/environment_ssao.png
 
-It is a common mistake to enable this effect, turn on a light and not be able to appreciate it. This is because SSAO only acts on *ambient* light, not direct light. 
+It is a common mistake to enable this effect, turn on a light, and not be able to
+appreciate it. This is because SSAO only acts on *ambient* light, not direct light.
 
-This is why, in the image above, the effect is less noticeable under the direct light (at the left). If you want to force SSAO to work with direct light too, use the **Light Affect** parameter (even though this is not correct, some artists like how it looks). 
+This is why, in the image above, the effect is less noticeable under the direct
+light (at the left). If you want to force SSAO to work with direct light too, use
+the **Light Affect** parameter (even though this is not correct, some artists like how it looks).
 
 SSAO looks best when combined with a real source of indirect light, like GIProbe:
 
@@ -192,33 +252,38 @@ Tweaking SSAO is possible with several parameters:
 - **Radius/Intensity:** To control the radius or intensity of the occlusion, these two parameters are available. Radius is in world (Metric) units.
 - **Radius2/Intensity2:** A secondary radius/intensity can be used. Combining a large and a small radius AO generally works well.
 - **Bias:** This can be tweaked to solve self occlusion, though the default generally works well enough.
-- **Light Affect:** SSAO only affects ambient light, but increasing this slider can make it also affect direct light. Some artists prefer this effect.
+- **Light Affect:** SSAO only affects ambient light, but increasing this slider can make it also affect direct light. Some artists prefer this effect.
 - **Quality:** Depending on quality, SSAO will do more samplings over a sphere for every pixel. High quality only works well on modern GPUs.
-- **Blur:** Type of blur kernel used. The 1x1 kernel is a simple blur that preserves local detail better, but is not as efficient (generally works better with high quality setting above), while 3x3 will soften the image better (with a bit of dithering-like effect), but does not preserve local detail as well.
+- **Blur:** Type of blur kernel used. The 1x1 kernel is a simple blur that preserves local detail better but is not as efficient (generally works better with high quality setting above), while 3x3 will soften the image better (with a bit of dithering-like effect) but does not preserve local detail as well.
 - **Edge Sharpness**: This can be used to preserve the sharpness of edges (avoids areas without AO on creases).
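 
 These parameters can also be set from a script. As a minimal sketch (this
 assumes a WorldEnvironment node with an Environment resource assigned; the
 ``ssao_*`` property names should be checked against the Environment class
 reference for your Godot version):
 
 ::
 
     var env = $WorldEnvironment.environment
     env.ssao_enabled = true
     env.ssao_radius = 1.0        # In world (metric) units
     env.ssao_intensity = 1.0
     env.ssao_light_affect = 0.0  # Keep SSAO acting on ambient light only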
 
 Depth of Field / Far Blur
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-This effect simulates focal distance on high end cameras. It blurs objects behind a given range. 
-It has an initial **Distance** with a **Transition** region (in world units):
+This effect simulates focal distance on high end cameras. It blurs objects behind
+a given range. It has an initial **Distance** with a **Transition** region
+(in world units):
 
 .. image:: img/environment_dof_far.png
 
-The **Amount** parameter controls the amount of blur. For larger blurs, tweaking the **Quality** may be needed in order to avoid arctifacts.
+The **Amount** parameter controls the amount of blur. For larger blurs, tweaking
+the **Quality** may be needed in order to avoid artifacts.
 
 
 Depth of Field / Near Blur
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-This effect simulates focal distance on high end cameras. It blurs objects close to the camera (acts in the opposite direction as far blur).
+This effect simulates focal distance on high end cameras. It blurs objects close
+to the camera (acts in the opposite direction as far blur).
 It has an initial **Distance** with a **Transition** region (in world units):
 
 .. image:: img/environment_dof_near.png
 
-The **Amount** parameter controls the amount of blur. For larger blurs, tweaking the **Quality** may be needed in order to avoid arctifacts.
+The **Amount** parameter controls the amount of blur. For larger blurs, tweaking
+the **Quality** may be needed in order to avoid artifacts.
 
-It is common to use both blurs together to focus the viewer's attention on a given object:
+It is common to use both blurs together to focus the viewer's attention on a
+given object:
 
 .. image:: img/environment_mixed_blur.png
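 
 Both blurs can also be configured from code. A sketch (assuming an Environment
 resource is at hand; the ``dof_blur_*`` property names should be verified
 against the Environment class reference):
 
 ::
 
     var env = $WorldEnvironment.environment
     env.dof_blur_far_enabled = true
     env.dof_blur_far_distance = 10.0   # In world units
     env.dof_blur_far_transition = 5.0
     env.dof_blur_near_enabled = true
     env.dof_blur_near_distance = 2.0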
 
@@ -226,14 +291,16 @@ It is common to use both blurs together to focus the viewer's attention on a giv
 Glow
 ^^^^
 
-In photography and film, when light amount exceeds the maxium supported by the media (be it analog or digital), it generally bleeds outwards to darker regions of the image. This is simulated in Godot with
-the **Glow** effect. 
+In photography and film, when the amount of light exceeds the maximum supported
+by the media (be it analog or digital), it generally bleeds outwards to darker
+regions of the image. This is simulated in Godot with the **Glow** effect.
 
 .. image:: img/environment_glow1.png
 
-By default, even if the effect is enabled, it will be weak or invisible. One of two conditions need to happen for it to actually show:
+By default, even if the effect is enabled, it will be weak or invisible. One of
+two conditions needs to happen for it to actually show:
 
-- 1) The light in a pixel surpasses the **HDR Threshold** (where 0 is all light surpasses it, and 1.0 is light over the tonemapper **White** value). Normally this value is expected to be at 1.0, but it can be lowered to allow more light to bleed. There is also an extra parameter, **HDR Scale** that allows scaling (making brighter or darker) the light surpasing the threshold.
+- 1) The light in a pixel surpasses the **HDR Threshold** (where 0 is all light surpasses it, and 1.0 is light over the tonemapper **White** value). Normally this value is expected to be at 1.0, but it can be lowered to allow more light to bleed. There is also an extra parameter, **HDR Scale**, that allows scaling (making brighter or darker) the light surpassing the threshold.
 
 .. image:: img/environment_glow_threshold.png
 
@@ -250,46 +317,46 @@ Once glow is visible, it can be controlled with a few extra parameters:
 
 The **Blend Mode** of the effect can also be changed:
 
-- **Additive** is the strongest one, as it just adds the glow effect over the image with no blending involved. In general, it's too strong to be used, but can look good with low intensity Bloom (produces a dream-like effect).
-- **Screen** is the default one. It ensures glow never brights more than itself, and works great as an all around.
+- **Additive** is the strongest one, as it just adds the glow effect over the image with no blending involved. In general, it's too strong to be used but can look good with low intensity Bloom (produces a dream-like effect).
+- **Screen** is the default one. It ensures glow never becomes brighter than the image itself and works great as an all-around option.
 - **Softlight** is the weakest one, producing only a subtle color disturbance around the objects. This mode works best on dark scenes.
 - **Replace** can be used to blur the whole screen or debug the effect. It just shows the glow effect without the image below.
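 
 Enabling glow and picking a blend mode can also be done from a script. A sketch
 (the property and constant names are from the Environment class and should be
 verified for your Godot version):
 
 ::
 
     var env = $WorldEnvironment.environment
     env.glow_enabled = true
     env.glow_blend_mode = Environment.GLOW_BLEND_MODE_SCREEN  # The default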
 
-To change the glow effect size and shape, Godot provides **Levels**. Smaller levels are strong glows that appear around objects, while large levels are hazy glows covering the whole screen:
+To change the glow effect size and shape, Godot provides **Levels**. Smaller
+levels are strong glows that appear around objects while large levels are hazy
+glows covering the whole screen:
 
 .. image:: img/environment_glow_layers.png
 
-The real strength of this system, though, is to combine levels to create more interesting glow patterns:
+The real strength of this system, though, is to combine levels to create more
+interesting glow patterns:
 
 .. image:: img/environment_glow_layers2.png
- 
-Finally, as the highest layers are created by stretching small blurred images, it is possible that some blockyness may be visible. Enabling **Bicubic Upscaling** gets rids of it,
-at a minimal performance cost.
+
+Finally, as the highest layers are created by stretching small blurred images,
+it is possible that some blockiness may be visible. Enabling **Bicubic Upscaling**
+gets rid of it at a minimal performance cost.
 
 .. image:: img/environment_glow_bicubic.png
 
 Adjustments
 ^^^^^^^^^^^
 
-At the end of processing, Godot offers the possibility to do some standard image adjustments. 
+At the end of processing, Godot offers the possibility to do some standard
+image adjustments.
 
 .. image:: img/environment_adjustments.png
 
-The first one is being able to change the typical Brightness, Contrast and Saturation:
+The first is the ability to change the typical Brightness, Contrast,
+and Saturation:
 
 .. image:: img/environment_adjustments_bcs.png
 
-The second is by supplying a color correction gradient. A regular black to white gradient like the following one will produce no effect:
+The second is supplying a color correction gradient. A regular black-to-white
+gradient like the following one will produce no effect:
 
 .. image:: img/environment_adjusments_default_gradient.png
 
 But creating custom ones allows mapping each channel to a different color:
 
 .. image:: img/environment_adjusments_custom_gradient.png
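 
 These adjustments can likewise be driven from code, e.g. for a fade or a
 flashback effect. A sketch (assuming the ``adjustment_*`` properties of the
 Environment class; verify the names in the class reference):
 
 ::
 
     var env = $WorldEnvironment.environment
     env.adjustment_enabled = true
     env.adjustment_brightness = 1.1
     env.adjustment_contrast = 1.0
     env.adjustment_saturation = 0.8  # Slightly desaturated look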
-
-
-
-
-
-
-

+ 40 - 25
tutorials/3d/gi_probes.rst

@@ -6,48 +6,59 @@ GI Probes
 Introduction
 ------------
 
-Just like with :ref:`doc_reflection_probes`, and as stated in the :ref:`doc_spatial_material`, objects can show reflected or diffuse light.
-GI Probes are similar to Reflection Probes, but they use a different and more complex technique to produce indirect light and reflections.
-
-The strength of GI Probes are real-time, high quality, indirect light. While the scene needs a quick pre-bake for the static objects that
-will be used, lights can be added, changed or removed and this will be updated in real-time. Dynamic objects that move within one of these
+Just like with :ref:`doc_reflection_probes`, and as stated in
+the :ref:`doc_spatial_material`, objects can show reflected or diffuse light.
+GI Probes are similar to Reflection Probes, but they use a different and more
+complex technique to produce indirect light and reflections.
+
+The strength of GI Probes is real-time, high-quality indirect light. While the
+scene needs a quick pre-bake for the static objects that
+will be used, lights can be added, changed or removed, and this will be updated
+in real-time. Dynamic objects that move within one of these
 probes will also receive indirect lighting from the scene automatically.
 
-Just like with ReflectionProbe, GIProbes can be blended (in a bit more limited way), so it is possible to provide full real-time lighting
+Just like with ReflectionProbe, GIProbes can be blended (in a bit more limited
+way), so it is possible to provide full real-time lighting
 for a stage without having to resort to lightmaps.
 
 The main downsides of GIProbes are:
 
-- A small amount of light leaking can occur if the level is not carefully designed. this must be artist-tweaked.
-- Performance requirements are higher than for lightmaps, so it may not run properly in low end integrated GPUs (may need to reduce resolution).
-- Reflections are voxelized, so they don't look as sharp as with ReflectionProbe, but in exchange they are volumetric so any room size or shape works for them. Mixing them with Screen Space Reflection also works well.
+- A small amount of light leaking can occur if the level is not carefully designed. This must be artist-tweaked.
+- Performance requirements are higher than for lightmaps, so it may not run properly in low-end integrated GPUs (may need to reduce resolution).
+- Reflections are voxelized, so they don't look as sharp as with ReflectionProbe. However, in exchange they are volumetric, so any room size or shape works for them. Mixing them with Screen Space Reflection also works well.
 - They consume considerably more video memory than Reflection Probes, so they must be used with care, in the right subdivision sizes.
 
 Setting Up
 ----------
 
-Just like a ReflectionProbe, simply set up the GIProbe by wrapping it around the geometry that will be affected.
+Just like a ReflectionProbe, simply set up the GIProbe by wrapping it around
+the geometry that will be affected.
 
 .. image:: img/giprobe_wrap.png
 
-Afterwards, make sure to enable the geometry will be baked. This is important in order for GIPRobe to recognize objects, otherwise they will be ignored:
+Afterwards, make sure that the geometry to be baked is enabled. This is important
+in order for the GIProbe to recognize objects; otherwise, they will be ignored:
 
 .. image:: img/giprobe_bake_property.png
 
-Once the geometry is set-up, push the Bake button that appears on the 3D editor toolbar to begin the pre-baking process:
+Once the geometry is set up, push the Bake button that appears on the 3D editor
+toolbar to begin the pre-baking process:
 
 .. image:: img/giprobe_bake.png
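 
 Baking can also be triggered from a script. A sketch (``use_in_baked_light``
 is a GeometryInstance property and ``bake()`` is a GIProbe method; verify both
 against the class reference for your Godot version):
 
 ::
 
     # Mark a mesh so the probe takes it into account, then re-bake.
     $MeshInstance.use_in_baked_light = true
     $GIProbe.bake()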
 
 Adding Lights
 --------------
 
-Unless there are materials with emission, GIProbe does nothing by default. Lights need to be added to the scene to have an effect.
+Unless there are materials with emission, GIProbe does nothing by default.
+Lights need to be added to the scene to have an effect.
 
-The effect of indirect light can be viewed quickly (it is recommended you turn off all ambient/sky lighting to tweak this, though as in the picture):
+The effect of indirect light can be viewed quickly (it is recommended you turn
+off all ambient/sky lighting to tweak this, though, as shown below):
 
 .. image:: img/giprobe_indirect.png
 
-In some situations, though, indirect light may be too weak. Lights have an indirect multiplier to tweak this:
+In some situations, though, indirect light may be too weak. Lights have an
+indirect multiplier to tweak this:
 
 .. image:: img/giprobe_light_indirect.png
 
@@ -58,12 +69,14 @@ And, as GIPRobe lighting updates in real-time, this effect is immediate:
 Reflections
 -----------
 
-For materials with high metalness and low roughness, it's possible to appreciate voxel reflections. Keep in mind that these have far less detail than Reflection Probes or Screen Space Reflections,
-but fully reflect volumetrically.
+For materials with high metalness and low roughness, it's possible to appreciate
+voxel reflections. Keep in mind that these have far less detail than Reflection
+Probes or Screen Space Reflections but fully reflect volumetrically.
 
 .. image:: img/giprobe_voxel_reflections.png
 
-GIProbes can easily be mixed with Reflection Probes and Screen Space Reflections, as a full 3-stage fallback-chain. This allows to have precise reflections where needed:
+GIProbes can easily be mixed with Reflection Probes and Screen Space Reflections,
+as a full 3-stage fallback chain. This allows having precise reflections where needed:
 
 .. image:: img/giprobe_ref_blending.png
 
@@ -71,15 +84,18 @@ GIProbes can easily be mixed with Reflection Probes and Screen Space Reflections
 Interior vs Exterior
 --------------------
 
-GI Probes normally allow mixing with lighting from the sky. This can be disabled when turning on the *Interior* setting.
+GI Probes normally allow mixing with lighting from the sky. This can be disabled
+when turning on the *Interior* setting.
 
 .. image:: img/giprobe_interior_setting.png
 
-The difference becomes clear in the image below, where light from the sky goes from spreading inside to being ignored.
+The difference becomes clear in the image below, where light from the sky goes
+from spreading inside to being ignored.
 
 .. image:: img/giprobe_interior.png
 
-As complex buildings may mix interiors with exteriors, combining GIProbes for both parts works well.
+As complex buildings may mix interiors with exteriors, combining GIProbes
+for both parts works well.
 
 
 Tweaking
@@ -90,7 +106,7 @@ GI Probes support a few parameters for tweaking:
 .. image:: img/giprobe_tweaking.png
 
 - **Subdiv** Subdivision used for the probe. The default (128) is generally good for small to medium size areas. Bigger subdivisions use more memory.
-- **Extents** Size of the probe, can be tweaked from the gizmo.
+- **Extents** Size of the probe. Can be tweaked from the gizmo.
 - **Dynamic Range** Maximum light energy the probe can absorb. Higher values allow brighter light, but with less color detail.
 - **Energy** Multiplier for all the probe. Can be used to make the indirect light brighter (although it's better to tweak this from the light itself).
 - **Propagation** How much light propagates through the probe internally.
@@ -101,8 +117,7 @@ GI Probes support a few parameters for tweaking:
 Quality
 -------
 
-GIProbes are quite demanding. It is possible to use lower quality voxel cone tracing in exchange of more performance.
+GIProbes are quite demanding. It is possible to use lower quality voxel cone
+tracing in exchange for more performance.
 
 .. image:: img/giprobe_quality.png
-
-

+ 11 - 11
tutorials/3d/high_dynamic_range.rst

@@ -27,8 +27,8 @@ started:
 
 .. image:: img/hdr_tonemap.png
 
-Except the scene is more contrasted, because there is a higher light
-range in play. What is this all useful for? The idea is that the scene
+Except the scene is more contrasted because there is a higher light
+range at play. What is this all useful for? The idea is that the scene
 luminance will change while you move through the world, allowing
 situations like this to happen:
 
@@ -68,12 +68,12 @@ to do this:
 SRGB -> linear conversion on image import
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This is the most compatible way of using linear-space assets and it will
+This is the most compatible way of using linear-space assets, and it will
 work everywhere, including all mobile devices. The main issue with this
 is loss of quality, as sRGB exists to avoid this same problem. Using 8
 bits per channel to represent linear colors is inefficient from the
 point of view of the human eye. These textures might be later compressed
-too, which makes the problem worse.
+too, which makes the problem worse.
 
 In any case though, this is the easy solution that works everywhere.
 
@@ -95,7 +95,7 @@ current :ref:`Environment <class_Environment>` (more on that below).
 
 Keep in mind that sRGB -> Linear and Linear -> sRGB conversions
 must always be **both** enabled. Failing to enable one of them will
-result in horrible visuals suitable only for avant garde experimental
+result in horrible visuals suitable only for avant-garde experimental
 indie games.
 
 Parameters of HDR
@@ -104,7 +104,7 @@ Parameters of HDR
 HDR is found in the :ref:`Environment <class_Environment>`
 resource. These are found most of the time inside a
 :ref:`WorldEnvironment <class_WorldEnvironment>`
-node, or set in a camera. There are many parameters for HDR:
+node or set in a camera. There are many parameters for HDR:
 
 .. image:: img/hdr_parameters.png
 
@@ -114,20 +114,20 @@ ToneMapper
 The ToneMapper is the heart of the algorithm. Many options for
 tonemappers are provided:
 
--  Linear: Simplest tonemapper. It does its job for adjusting scene
+-  **Linear:** The simplest tonemapper. It does its job of adjusting scene
    brightness, but if the differences in light are too big, it will
    cause colors to be too saturated.
--  Log: Similar to linear, but not as extreme.
--  Reinhardt: Classical tonemapper (modified so it will not desaturate
+-  **Log:** Similar to linear but not as extreme.
+-  **Reinhardt:** Classical tonemapper (modified so it will not desaturate
    as much).
--  ReinhardtAutoWhite: Same as above, but uses the max scene luminance
+-  **ReinhardtAutoWhite:** Same as above but uses the max scene luminance
    to adjust the white value.
 
 Exposure
 ~~~~~~~~
 
 The same exposure parameter as in real cameras. Controls how much light
-enters the camera. Higher values will result in a brighter scene and
+enters the camera. Higher values will result in a brighter scene, and
 lower values will result in a darker scene.
 
 White

+ 3 - 3
tutorials/3d/introduction_to_3d.rst

@@ -61,7 +61,7 @@ Generated geometry
 ------------------
 
 It is possible to create custom geometry by using the
-:ref:`Mesh <class_Mesh>` resource directly, simply create your arrays
+:ref:`Mesh <class_Mesh>` resource directly. Simply create your arrays
 and use the :ref:`Mesh.add_surface() <class_Mesh_add_surface>`
 function. A helper class is also available, :ref:`SurfaceTool <class_SurfaceTool>`,
 which provides a more straightforward API and helpers for indexing,
@@ -190,7 +190,7 @@ perspective projections:
 
 .. image:: img/tuto_3d10.png
 
-Cameras are associated and only display to a parent or grand-parent
+Cameras are associated with and only display to a parent or grandparent
 viewport. Since the root of the scene tree is a viewport, cameras will
 display on it by default, but if sub-viewports (either as render target
 or picture-in-picture) are desired, they need their own children cameras
@@ -214,4 +214,4 @@ Lights
 ------
 
 There is no limitation on the number of lights nor of types of lights in
-Godot. As many as desired can be added (as long as performance allows). 
+Godot. As many as desired can be added (as long as performance allows).

+ 23 - 21
tutorials/3d/inverse_kinematics.rst

@@ -5,33 +5,37 @@ Inverse kinematics
 
 This tutorial is a follow-up of :ref:`doc_working_with_3d_skeletons`.
 
-Before continuing on, I'd recommend reading some theory, the simplest
-article I could find is this:
+Previously, we were able to control the rotations of bones in order to manipulate
+where our arm was (forward kinematics). But what if we wanted to solve this problem
+in reverse? Inverse kinematics (IK) tells us *how* to rotate our bones in order to reach
+a desired position.
 
-http://freespace.virgin.net/hugo.elias/models/m_ik2.htm
+A simple example of IK is the human arm: While we intuitively know the target
+position of an object we want to reach for, our brains need to figure out how much to
+move each joint in our arm to get to that target.
 
 Initial problem
 ~~~~~~~~~~~~~~~
 
 Talking in Godot terminology, the task we want to solve here is to position
-our 2 angles we talked about above so, that the tip of the lowerarm bone is
-as close to the target point (which is set by the target Vector3()) as possible
-using only rotations. This task is calculation-intensive and never
-resolved by analytical equation solving. So, it is an underconstrained
-problem, which means there is an unlimited number of solutions to the
-equation.
+the 2 angles on the joints of our upperarm and lowerarm so that the tip of the
+lowerarm bone is as close to the target point (which is set by the target Vector3)
+as possible using only rotations. This task is calculation-intensive and never
+resolved by analytical equation solving, as it is an under-constrained
+problem, which means that there is more than one solution to an
+IK problem.
 
 .. image:: img/inverse_kinematics.png
 
-For easy calculation, in this chapter we consider the target being a
+For easy calculation in this chapter, we consider the target to be a
 child of Skeleton. If this is not the case for your setup, you can always
 reparent it in your script, as you will save on calculations if you
 do so.
 
-In the picture you see the angles alpha and beta. In this case we don't
+In the picture, you see the angles alpha and beta. In this case, we don't
 use poles and constraints, so we need to add our own. In the picture,
 the angles are 2D angles living in a plane which is defined by bone
-base, bone tip and target.
+base, bone tip, and target.
 
 The rotation axis is easily calculated using the cross-product of the bone
 vector and the target vector. The rotation in this case will be always in
@@ -47,8 +51,8 @@ So we have all the information we need to execute our algorithm.
 In game dev it is common to resolve this problem by iteratively closing
 to the desired location, adding/subtracting small numbers to the angles
 until the distance change achieved is less than some small error value.
-Sounds easy enough, but there are Godot problems we need to resolve
-there to achieve our goal.
+Sounds easy enough, but there are still Godot problems we need to resolve
+to achieve our goal.
 
 -  **How to find coordinates of the tip of the bone?**
 -  **How to find the vector from the bone base to the target?**
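 
 The iterative idea described above can be sketched as follows (pseudocode-style
 GDScript with hypothetical helpers such as ``rotate_joint()``; a concrete
 implementation is developed below):
 
 ::
 
     # Nudge each joint angle a little and keep the change only if it
     # brought the bone tip closer to the target.
     var last_distance = tip_position().distance_to(target)
     for joint in chain:
         rotate_joint(joint, delta)
         var new_distance = tip_position().distance_to(target)
         if new_distance > last_distance:
             rotate_joint(joint, -2 * delta)  # Undo and try the other way
         else:
             last_distance = new_distance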
@@ -56,8 +60,8 @@ there to achieve our goal.
 For our goal (tip of the bone moved within area of target), we need to know
 where the tip of our IK bone is. As we don't use a leaf bone as IK bone, we
 know the coordinate of the bone base is the tip of the parent bone. All these
-calculations are quite dependent on the skeleton's structure. You can use
-pre-calculated constants as well. You can add an extra bone at the tip of the
+calculations are quite dependent on the skeleton's structure. You could use
+pre-calculated constants, or you could add an extra bone at the tip of the
 IK bone and calculate using that.
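 
 For example, assuming an extra bone named "IK_tip" was added at the tip of the
 IK bone (the bone name is hypothetical; ``find_bone()`` and
 ``get_bone_global_pose()`` are Skeleton methods):
 
 ::
 
     var skel = get_node("skeleton")
     var tip_idx = skel.find_bone("IK_tip")
     # The global pose is expressed relative to the Skeleton node
     var tip_pos = skel.get_bone_global_pose(tip_idx).origin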
 
 Implementation
@@ -72,7 +76,7 @@ We will use an exported variable for the bone length to make it easy.
     export var ik_error = 0.1
 
 Now, we need to apply our transformations from the IK bone to the base of
-the chain. So we apply a rotation to the IK bone then move from our IK bone up to
+the chain, so we apply a rotation to the IK bone, then move from our IK bone up to
 its parent, apply rotation again, then move to the parent of the
 current bone again, etc. So we need to limit our chain somewhat.
 
 somewhere accessible. Since "arm" is an imported scene, we had better place
 the target node within our top-level scene. But for us to work with the target
 easily, its Transform should be on the same level as the Skeleton.
 
-To cope with this problem we create a "target" node under our scene root
-node and at runtime we will reparent it copying the global transform,
+To cope with this problem, we create a "target" node under our scene root
+node, and at runtime we will reparent it, copying the global transform,
 which will achieve the desired effect.
 
 Create a new Spatial node under the root node and rename it to "target".
@@ -165,5 +169,3 @@ Then modify the ``_ready()`` function to look like this:
         skel.add_child(target)
         target.set_global_transform(ttrans)
         set_process(true)
-
-

+ 59 - 47
tutorials/3d/lights_and_shadows.rst

@@ -6,10 +6,10 @@ Lights And Shadows
 Introduction
 ------------
 
-Lights emit light that mix with the materials and produces a visible
+Lights emit light that mixes with the materials and produces a visible
 result. Light can come from several types of sources in a scene:
 
--  From the Material itself, in the form of the emission color (though
+-  From the Material itself in the form of the emission color (though
    it does not affect nearby objects unless baked).
 -  Light Nodes: Directional, Omni and Spot.
 -  Ambient Light in the
@@ -32,16 +32,17 @@ lights:
 Each one has a specific function:
 
 -  **Color**: Base color for emitted light.
--  **Energy**: Energy multiplier. This is useful to saturate lights or working with :ref:`doc_high_dynamic_range`.
+-  **Energy**: Energy multiplier. This is useful for saturating lights or working with :ref:`doc_high_dynamic_range`.
 -  **Indirect Energy**: Secondary multiplier used with indirect light (light bounces). This works in baked light or GIProbe.
--  **Negative**: Light becomes substractive instead of additive. It's sometimes useful to manually compensate some dark corners.
--  **Specular**: Affects the intensity of the specular blob in objects affected by this light. At zero, this light becomes a pure diffuse light. 
+-  **Negative**: Light becomes subtractive instead of additive. It's sometimes useful to manually compensate for dark corners.
+-  **Specular**: Affects the intensity of the specular blob in objects affected by this light. At zero, this light becomes a pure diffuse light.
 -  **Cull Mask**: Objects that are in the selected layers below will be affected by this light.
 
 Shadow Mapping
 ^^^^^^^^^^^^^^
 
-Lights can optionally cast shadows. This gives them greater realism (light does not reach occluded areas), but it can incur a bigger performance cost.
+Lights can optionally cast shadows. This gives them greater realism (light does
+not reach occluded areas), but it can incur a bigger performance cost.
 There is a list of generic shadow parameters, each also has a specific function:
 
 -  **Enabled**: Check to enable shadow mapping in this light.
@@ -50,7 +51,8 @@ There is a list of generic shadow parameters, each also has a specific function:
 -  **Contact**: Performs a short screen-space raycast to reduce the gap generated by the bias.
 -  **Reverse Cull Faces**: Some scenes work better when shadow mapping is rendered with face-culling inverted.
 
-Below is an image of how tweaking bias looks like. Default values work for most cases, but in general it depends on the size and complexity of geometry.
+Below is an image of what tweaking the bias looks like. Default values work for
+most cases, but in general it depends on the size and complexity of the geometry.
 
 .. image:: img/shadow_bias.png
 
@@ -58,24 +60,25 @@ Finally, if gaps can't be solved, the **Contact** option can help:
 
 .. image:: img/shadow_contact.png
 
-Any sort of bias issues can always be fixed by increasing the shadow map resolution, although that may lead to decreased peformance on low-end hardware.
+Any sort of bias issue can always be fixed by increasing the shadow map resolution,
+although that may lead to decreased performance on low-end hardware.
 
 Directional light
 ~~~~~~~~~~~~~~~~~
 
-This is the most common type of light and represents a light source 
+This is the most common type of light and represents a light source
 very far away (such as the sun). It is also the cheapest light to compute and should be used whenever possible
-(although it's not the cheapest shadow-map to compute, but more on that later). 
+(although it's not the cheapest shadow-map to compute, but more on that later).
 
 Directional light models an infinite number of parallel light rays
-covering the whole scene. The directional light node is represented by a big arrow, which
+covering the whole scene. The directional light node is represented by a big arrow, which
 indicates the direction of the light rays. However, the position of the node
-does not affect the lighting at all, and can be anywhere.
+does not affect the lighting at all and can be anywhere.
 
 .. image:: img/light_directional.png
 
-Every face whose front-side is hit by the light rays is lit, the others stay dark. Most light types
-have specific parameters but directional lights are pretty simple in nature so they don't.
+Every face whose front-side is hit by the light rays is lit, while the others stay dark. Most light types
+have specific parameters, but directional lights are pretty simple in nature, so they don't.
 
 Directional Shadow Mapping
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -87,7 +90,7 @@ closer to the camera receive blocky shadows.
 .. image:: img/shadow_blocky.png
 
 To fix this, a technique named "Parallel Split Shadow Maps" (or PSSM) is used. This splits the view frustum in 2 or 4 areas. Each
-area gets it's own shadow map. This allows small, close areas to the viewer to have the same shadow resolution as a huge, far-away area.
+area gets its own shadow map. This allows small areas close to the viewer to have the same shadow resolution as a huge, far-away area.
 
 .. image:: img/pssm_explained.png
 
@@ -99,27 +102,34 @@ To control PSSM, a number of parameters are exposed:
 
 .. image:: img/directional_shadow_params.png
 
-Each split distance is controlled relative to the camera far (or shadow **Max Distance** if greater than zero), so *0.0* is the eye position and *1.0* is where the shadow ends at a distance.
-Splits are in-between. Default values generally work well, but tweaking the first split a bit is common to give more detail to close objects (like a character in a third person game).
+Each split distance is controlled relative to the camera far (or shadow
+**Max Distance** if greater than zero), so *0.0* is the eye position and *1.0*
+is where the shadow ends at a distance. Splits are in-between. Default values
+generally work well, but tweaking the first split a bit is common to give more
+detail to close objects (like a character in a third person game).
 
-Always make sure to set a shadow *Max Distance* according to what the scene needs. The closer the max distance, the higher quality they shadows will have.
+Always make sure to set a shadow *Max Distance* according to what the scene needs.
+The closer the max distance, the higher quality the shadows will have.
 
-Sometimes, the transition between a split and the next can look bad. To fix this, the **"Blend Splits"** option can be turned on, which sacrifices detail in exchange for smoother
-transitions:
+Sometimes, the transition between a split and the next can look bad. To fix this,
+the **"Blend Splits"** option can be turned on, which sacrifices detail in exchange
+for smoother transitions:
 
 .. image:: img/blend_splits.png
 
-The **"Normal Bias"** parameter can be used to fix special cases of self shadowing when objects are perpendicular to the light. The only downside is that it makes
+The **"Normal Bias"** parameter can be used to fix special cases of self shadowing
+when objects are perpendicular to the light. The only downside is that it makes
 the shadow a bit thinner.
 
 .. image:: img/normal_bias.png
 
-The **"Bias Split Scale"** parameter can control extra bias for the splits that are far away. If self shadowing occurs only on the splits far away, this value can fix them.
+The **"Bias Split Scale"** parameter can control extra bias for the splits that
+are far away. If self-shadowing occurs only on the splits far away, this value can fix them.
 
 Finally, the **"Depth Range"** has two settings:
 
-- **Stable**: Keeps the shadow stable while the camera moves, the blocks that appear in the outline when close to the shadow edges remain in-place. This is the default and generally desired, but it reduces the effective shadow resolution.
-- **Optimized**: Triest to achieve the maximum resolution available at any given time. This may result in a "moving saw" effect on shadow edges, but at the same time the shadow looks more detailed (so this effect may be subtle enough to be forgiven).
+- **Stable**: Keeps the shadow stable while the camera moves, and the blocks that appear in the outline when close to the shadow edges remain in place. This is the default and generally desired, but it reduces the effective shadow resolution.
+- **Optimized**: Tries to achieve the maximum resolution available at any given time. This may result in a "moving saw" effect on shadow edges, but at the same time the shadow looks more detailed (so this effect may be subtle enough to be forgiven).
 
 Just experiment which setting works better for your scene.
 
@@ -127,24 +137,24 @@ Shadowmap size for directional lights can be changed in Project Settings -> Rend
 
 .. image:: img/project_setting_shadow.png
 
-Increasing it can solve bias problems, but reduce performance. Shadow mapping is an art of tweaking.
+Increasing it can solve bias problems, but it will reduce performance. Shadow mapping is an art of tweaking.
 
 Omni light
 ~~~~~~~~~~
 
 Omni light is a point source that emits light spherically in all directions up to a given
-radius .
+radius.
 
 .. image:: img/light_omni.png
 
-In real life, light attenuation is an inverse function, which means omni lights don't have a radius.
-This is a problem, because it means computing several omni lights would become demanding.
+In real life, light attenuation is an inverse function, which means omni lights don't have a radius.
+This is a problem because it means computing several omni lights would become demanding.
 
-To solve this, a *Range* is introduced, together with an attenuation function. 
+To solve this, a *Range* is introduced together with an attenuation function.
 
 .. image:: img/light_omni_params.png
 
-These two parameters allow tweaking how this works visually, in order to find aesthetically pleasing results.
+These two parameters allow tweaking how this works visually in order to find aesthetically pleasing results.
 
 .. image:: img/light_attenuation.png
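The effect of the two parameters can be sketched as follows (illustrative Python; the exact curve used by the renderer may differ): light fades from full energy at the source to zero at the range boundary, shaped by the attenuation exponent.

```python
def omni_attenuation(distance, light_range, attenuation, energy=1.0):
    """Illustrative omni-light falloff: 1.0 at the source, 0.0 at the
    range boundary, curved by the attenuation exponent."""
    falloff = max(1.0 - distance / light_range, 0.0)
    return energy * falloff ** attenuation

# A higher exponent concentrates the light near the source.
print(omni_attenuation(5.0, light_range=10.0, attenuation=1.0))  # 0.5
print(omni_attenuation(5.0, light_range=10.0, attenuation=2.0))  # 0.25
```

Clamping the falloff to zero at the range is what makes a finite *Range* workable at all, at the cost of physical accuracy.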
 
@@ -153,14 +163,16 @@ Omni Shadow Mapping
 ^^^^^^^^^^^^^^^^^^^
 
 Omni light shadow mapping is relatively straightforward. The main issue that needs to be
-considered is the algorithm used to render it. 
+considered is the algorithm used to render it.
 
-Omni Shadows can be rendered as either **"Dual Paraboloid" or "Cube Mapped"**. The former renders quickly but can cause deformations,
-while the later is more correct but more costly. 
+Omni Shadows can be rendered as either **"Dual Paraboloid" or "Cube Mapped"**.
+The former renders quickly but can cause deformations,
+while the latter is more correct but more costly.
 
 .. image:: img/shadow_omni_dp_cm.png
 
-If the objects being renderer are mostly irregular, Dual Paraboloid is usually enough. In any case, as these shadows are cached in a shadow atlas (more on that at the end), it
+If the objects being rendered are mostly irregular, Dual Paraboloid is usually
+enough. In any case, as these shadows are cached in a shadow atlas (more on that at the end), it
 may not make a difference in performance for most scenes.
 
 
@@ -184,12 +196,12 @@ Spot Shadow Mapping
 ^^^^^^^^^^^^^^^^^^^
 
 Spots don't need any parameters for shadow mapping. Keep in mind that, at more than 89 degrees of aperture, shadows
-stop functioning for spots, and you should consider using an Omni light.
+stop functioning for spots, and you should consider using an Omni light instead.
 
 Shadow Atlas
 ~~~~~~~~~~~~
 
-Unlike Directional lights, which have their own shadow texture, Omni and Spot lights are assigned to slots of a shadow atlas.
+Unlike Directional lights, which have their own shadow texture, Omni and Spot lights are assigned to slots of a shadow atlas.
 This atlas can be configured in Project Settings -> Rendering -> Quality -> Shadow Atlas.
 
 .. image:: img/shadow_atlas.png
@@ -198,18 +210,20 @@ The resolution applies to the whole Shadow Atlas. This atlas is divided in four
 
 .. image:: img/shadow_quadrants.png
 
-Each quadrant, can be subdivided to allocate any number of shadow maps, following is the default subdivision:
+Each quadrant can be subdivided to allocate any number of shadow maps. The following is the default subdivision:
 
 .. image:: img/shadow_quadrants2.png
 
-The allocation logic is simple, the biggest shadow map size (when no subdivision is used) represents a light the size of the screen (or bigger).
-Subdivisions (smaller maps) represent shadows for lights that are further away from view and proportionally smaller.
+The allocation logic is simple. The biggest shadow map size (when no subdivision is used)
+represents a light the size of the screen (or bigger).
+Subdivisions (smaller maps) represent shadows for lights that are further away
+from view and proportionally smaller.
 
 Every frame, the following logic is done for all lights:
 
-1. Check if the light is on a slot of the right size, if not, re-render it and move it to a larger/smaller slot.
-2. Check if any object affecting the shadow map has changed, if it did, re-render the light.
-3. If neither of the above has happened, nothing is done and the shadow is left untouched.
+1. Check if the light is on a slot of the right size. If not, re-render it and move it to a larger/smaller slot.
+2. Check if any object affecting the shadow map has changed. If it did, re-render the light.
+3. If neither of the above has happened, nothing is done, and the shadow is left untouched.
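Roughly, in Python (a hypothetical sketch of the bookkeeping, not the renderer's actual code):

```python
from dataclasses import dataclass

@dataclass
class Light:
    slot_size: int           # current atlas slot size in pixels
    casters_changed: bool    # did any object in the shadow map change?
    renders: int = 0         # re-render count, for illustration

def update_shadow(light, desired_size):
    """One frame of atlas maintenance for a single Omni/Spot light."""
    if light.slot_size != desired_size:   # 1. wrong-sized slot: move and re-render
        light.slot_size = desired_size
        light.renders += 1
    elif light.casters_changed:           # 2. a caster changed: re-render in place
        light.renders += 1
    # 3. otherwise the cached shadow map is reused untouched

l = Light(slot_size=512, casters_changed=False)
update_shadow(l, desired_size=512)  # nothing happens, shadow stays cached
update_shadow(l, desired_size=256)  # light shrank on screen: new slot, one re-render
print(l.slot_size, l.renders)       # 256 1
```

The point of the caching step is that a static scene with static lights pays the shadow rendering cost only once.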
 
 If the slots in a quadrant are full, lights are pushed back to smaller slots depending on size and distance.
 
@@ -219,14 +233,12 @@ all lights are around the same size and quadrands may have all the same subdivis
 Shadow Filter Quality
 ~~~~~~~~~~~~~~~~~~~~~
 
-The filter quality of shadows can be tweaked. This can be found in Project Settings -> Rendering -> Quality -> Shadows. Godot supports no filter, PCF5 and PCF13.
+The filter quality of shadows can be tweaked. This can be found in
+Project Settings -> Rendering -> Quality -> Shadows.
+Godot supports no filter, PCF5 and PCF13.
 
 .. image:: img/shadow_pcf1.png
 
 It affects the blockyness of the shadow outline:
 
 .. image:: img/shadow_pcf2.png
-
-
-
-

+ 10 - 10
tutorials/3d/mesh_generation_with_heightmap_and_shaders.rst

@@ -7,7 +7,7 @@ Introduction
 ------------
 
 This tutorial will help you to use Godot shaders to deform a plane
-mesh so it appears like a basic terrain. Remember that this solution
+mesh so that it appears like a basic terrain. Remember that this solution
 has pros and cons.
 
 Pros:
@@ -28,7 +28,7 @@ See this tutorial as an introduction, not a method that you should
 employ in your games, except if you intend to do LOD. Otherwise, this is
 probably not the best way.
 
-However, let's first create a heightmap,or a 2D representation of the terrain.
+However, let's first create a heightmap, or a 2D representation of the terrain.
 To do this, I'll use GIMP, but you can use any image editor you like.
 
 The heightmap
@@ -120,7 +120,7 @@ what you want.
 
 In our default scene (3D), create a root node "Spatial".
 
-Create a MeshInstance node as a child of the node we just created. 
+Create a MeshInstance node as a child of the node we just created.
 Then, load the Mesh selecting "Load" and then our "plane.obj" file.
 
 .. image:: img/14_Godot_LoadMesh.png
@@ -145,10 +145,10 @@ editor opens.
 
 .. image:: img/18_Godot_ShaderEditorOpened.png
 
-Let's start writing our shader. If you don't know how to use shaders in Godot 
+Let's start writing our shader. If you don't know how to use shaders in Godot
 you can check the :ref:`doc_shading_language` page.
 
-Let's start with the Fragment part. 
+Let's start with the Fragment part.
 This one is used to texture the plane using an image.
 For this example, we will texture it with the heightmap image itself,
 so we'll actually see mountains as brighter regions and canyons as
@@ -158,9 +158,9 @@ darker regions. Use this code:
 
     shader_type spatial;
     render_mode unshaded;
-    
+
     uniform sampler2D source;
-    
+
     void fragment() {
         ALBEDO = texture(source, UV).rgb;
     }
@@ -172,10 +172,10 @@ greyscale image. We take a parameter (``uniform``) as a ``sampler2D``,
 which will be the texture of our heightmap.
 
 Then, we set the color of every pixel of the image given by
-``texture(source, UV).rgb`` setting it to the ALBEDO variable. 
+``texture(source, UV).rgb`` setting it to the ALBEDO variable.
 Remember that the ``UV`` variable is a shader variable that returns
 the 2D position of the pixel in the texture image, according to the
-vertex we are currently dealing with. That is the use of the UV Layout 
+vertex we are currently dealing with. That is the use of the UV Layout
 we made before.
 
 However, the plane is displayed white! This is because we didn't set
@@ -185,7 +185,7 @@ the texture file and the color to use.
 
 In the Inspector, click the back arrow to get back to the
 ShaderMaterial. This is where you want to set the texture and the
-color. Click the "Shader Param" line, in "Source", click "Load" 
+color. Click the "Shader Param" line, in "Source", click "Load"
 and select the texture file "heightmap.png". Now you will see
 our heightmap.
 

+ 45 - 27
tutorials/3d/reflection_probes.rst

@@ -14,82 +14,100 @@ of it with increasing levels of *blur*. This is used to simulate roughness in ma
 
 While these probes are a efficient way of storing reflections, they have a few shortcomings:
 
-* They are efficient to render, but expensive to compute. This leads to a default behavior where they only capture on scene load.
-* They work best for rectangular shaped rooms or places, otherwise the reflections shown are not as faithful (specially when roughness is 0).
+* They are efficient to render but expensive to compute. This leads to a default
+  behavior where they only capture on scene load.
+* They work best for rectangular shaped rooms or places, otherwise the reflections
+  shown are not as faithful (especially when roughness is 0).
 
 Setting Up
 ----------
 
-Create a ReflectionProbe node, and wrap it around the area where you want to have reflections:
+Create a ReflectionProbe node and wrap it around the area where you want to have reflections:
 
 .. image:: img/refprobe_setup.png
 
-This should result in immediate local reflections. If you are using a Sky texture, reflections are by default blended. with it. 
+This should result in immediate local reflections. If you are using a Sky texture,
+reflections are by default blended with it.
 
-By default, on interiors, reflections may appear to not have much consistence. In this scenario, make sure to tick the *"Box Correct"* property.
+By default, in interiors, reflections may appear to lack consistency.
+In this scenario, make sure to tick the *"Box Correct"* property.
 
 .. image:: img/refprobe_box_property.png
 
 
-This setting changes the reflection from an infinite skybox to reflecting a box the size of the probe:
+This setting changes the reflection from an infinite skybox to reflecting
+a box the size of the probe:
 
 .. image:: img/refprobe_boxcorrect.png
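Box correction is essentially the parallax-corrected cubemap technique: the reflected ray is intersected with the probe's box, and the cubemap is sampled toward the hit point instead of toward infinity. A minimal sketch (illustrative Python, assuming a non-zero reflection vector; not the engine's shader code):

```python
def box_corrected_reflection(pos, refl, box_min, box_max, box_center):
    """Intersect the reflected ray with the probe's box and return the
    direction from the box center to the hit point."""
    # Distance to the box wall the ray exits through, per axis.
    t = min(
        ((box_max[i] if refl[i] > 0 else box_min[i]) - pos[i]) / refl[i]
        for i in range(3) if refl[i] != 0.0
    )
    hit = [pos[i] + refl[i] * t for i in range(3)]
    return [hit[i] - box_center[i] for i in range(3)]

# A surface at x = 0.5 reflecting toward +X: the corrected lookup
# points at the wall of the unit box, not off to infinity.
print(box_corrected_reflection([0.5, 0.0, 0.0], [1.0, 0.0, 0.0],
                               [-1.0, -1.0, -1.0], [1.0, 1.0, 1.0],
                               [0.0, 0.0, 0.0]))  # [1.0, 0.0, 0.0]
```

This is also why the correction only looks right in roughly box-shaped rooms: the intersection assumes the walls really are the box.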
 
-Adjusting the box walls may help improve the reflection a bit, but it will always look the best in box shaped rooms.
+Adjusting the box walls may help improve the reflection a bit, but it will
+always look best in box-shaped rooms.
 
-The probe captures the surrounding from the center of the gizmo. If, for some reason, the room shape or contents occlude the center, it
+The probe captures the surroundings from the center of the gizmo. If, for some
+reason, the room shape or contents occlude the center, it
 can be displaced to an empty place by moving the handles in the center:
 
 .. image:: img/refprobe_center_gizmo.png
 
-By default, shadow mapping is disabled when rendering probes (only in the rendered image inside the probe, not the actual scene). This is
-a simple way to save on performance and memory. If you want shadows in the probe, they can be toggled on/off with the *Enable Shadow* setting:
+By default, shadow mapping is disabled when rendering probes (only in the
+rendered image inside the probe, not the actual scene). This is
+a simple way to save on performance and memory. If you want shadows in the probe,
+they can be toggled on/off with the *Enable Shadow* setting:
 
 .. image:: img/refprobe_shadows.png
 
-Finally, keep in mind that you may not want the Reflection Probe to render some objects. A typical scenario is an enemy inside the room which will
-move around. To keep objects from being rendered in the reflections, use the *Cull Mask* setting:
+Finally, keep in mind that you may not want the Reflection Probe to render some
+objects. A typical scenario is an enemy inside the room which will
+move around. To keep objects from being rendered in the reflections,
+use the *Cull Mask* setting:
 
 .. image:: img/refprobe_cullmask.png
 
 Interior vs Exterior
 --------------------
 
-If you are using reflection probes in an interior setting, it is recommended that the **Interior** property is enabled. This makes
-the probe not render the sky, and also allows custom amibent lighting settings.
+If you are using reflection probes in an interior setting, it is recommended
+that the **Interior** property is enabled. This stops
+the probe from rendering the sky and also allows custom ambient lighting settings.
 
 .. image:: img/refprobe_ambient.png
 
-When probes are set to **Interior**, custom constant ambient lighting can be specified per probe. Just choose a color and an energy.
+When probes are set to **Interior**, custom constant ambient lighting can be
+specified per probe. Just choose a color and an energy.
 
-Optionally, you can blend this ambient light with the probe diffuse capture by tweaking the **Ambient Contribution** property (0.0 means, pure ambient color, while 1.0 means pure diffuse capture).
+Optionally, you can blend this ambient light with the probe diffuse capture by
+tweaking the **Ambient Contribution** property (0.0 means pure ambient color,
+while 1.0 means pure diffuse capture).
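In other words, the result is a simple linear blend. A minimal sketch (illustrative Python, not the engine API):

```python
def probe_ambient(ambient_color, diffuse_capture, contribution):
    """Blend the probe's constant ambient color with its diffuse capture.
    contribution = 0.0 -> pure ambient color, 1.0 -> pure diffuse capture."""
    return tuple(a + (d - a) * contribution
                 for a, d in zip(ambient_color, diffuse_capture))

rgb = probe_ambient((0.2, 0.2, 0.3), (0.8, 0.7, 0.6), 0.5)
print([round(c, 3) for c in rgb])  # [0.5, 0.45, 0.45]
```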
 
 
 Blending
 --------
 
-Multiple reflection probes can be used and Godot will blend them where they overlap using a smart algorithm:
+Multiple reflection probes can be used, and Godot will blend them where they overlap using a smart algorithm:
 
 .. image:: img/refprobe_blending.png
 
-As you can see, this blending is never perfect (after all, these are box reflections, not real reflections), but these arctifacts
-are only visible when using perfectly mirrored reflections. Normally, scenes have normal mapping and varying levels of roughness which
-can hide this. 
+As you can see, this blending is never perfect (after all, these are
+box reflections, not real reflections), but these artifacts
+are only visible when using perfectly mirrored reflections.
+Normally, scenes have normal mapping and varying levels of roughness which
+can hide this.
 
-Alternatively, Reflection Probes work well blended together with Screen Space Reflections to solve these problems. Combining them makes local reflections appear
-more faithful, while probes only used as fallback when no screen-space information is found:
+Alternatively, Reflection Probes work well blended together with Screen Space
+Reflections to solve these problems. Combining them makes local reflections appear
+more faithful while probes are only used as fallback when no screen-space information is found:
 
 .. image:: img/refprobe_ssr.png
 
-Finally, blending interior and exterior probes is a recommended approach when making levels that combine both interiors and exteriors. Near the door, a probe can
-be marked as *exterior* (so it will get sky reflections), while on the inside it can be interior.
+Finally, blending interior and exterior probes is the recommended approach when making
+levels that combine both interiors and exteriors. Near the door, a probe can
+be marked as *exterior* (so it will get sky reflections) while on the inside, it can be interior.
 
 Reflection Atlas
 -----------------
 
-In the current renderer implementation, all probes are the same size and they are fit into a Reflection Atlas. The size and amount of probes can be
+In the current renderer implementation, all probes are the same size and
+are fit into a Reflection Atlas. The size and amount of probes can be
 customized in Project Settings -> Quality -> Reflections
 
 .. image:: img/refprobe_atlas.png
-
-

+ 77 - 45
tutorials/3d/spatial_material.rst

@@ -54,18 +54,18 @@ shading and show pure, unlit, color.
 Vertex Lighting
 ~~~~~~~~~~~~~~~
 
-Godot has a more or less uniform cost per pixel (thanks to depth pre pass), all lighting calculations are made
+Godot has a more or less uniform cost per pixel (thanks to the depth pre-pass). All lighting calculations are made
 by running the lighting shader on every pixel.
 
 As these calculations are costly, performance can be brought down considerably in some corner cases such as drawing
 several layers of transparency (common in particle systems). Switching to per vertex lighting may help these cases.
 
-Additionally, on low end or mobile devices, switching to vertex lighting can considerably increase rendering performance.
+Additionally, on low-end or mobile devices, switching to vertex lighting can considerably increase rendering performance.
 
 
 .. image:: img/spatial_material2.png
 
-Keep in mind that, when vertex lighting is enabled, only directional lighting can produce shadows (for performance reasons).
+Keep in mind that when vertex lighting is enabled, only directional lighting can produce shadows (for performance reasons).
 
 No Depth Test
 ~~~~~~~~~~~~~
@@ -81,20 +81,21 @@ very well with the "render priority" property of Material (see bottom).
 Use Point Size
 ~~~~~~~~~~~~~~~
 
-This option is only active when the geometry rendered is made of points (it generally is just made of triangles when imported from 3D DCCs).
+This option is only active when the geometry rendered is made of points
+(it generally is just made of triangles when imported from 3D DCCs).
 If so, then points can be sized (see below).
 
 World Triplanar
 ~~~~~~~~~~~~~~~
 
-When using triplanar mapping (see below, in the UV1 and UV2 settings) triplanar is computed in object local space. This option
-makes triplanar work in world space.
+When using triplanar mapping (see below, in the UV1 and UV2 settings) triplanar
+is computed in object local space. This option makes triplanar work in world space.
 
 Fixed Size
 ~~~~~~~~~~
 
-Makes the object rendered at the same size no matter the distance. This is, again, useful mostly for indicators (no depth test and high render priority)
-and some types of billboards.
+Makes the object render at the same size no matter the distance. This is, again,
+useful mostly for indicators (no depth test and high render priority) and some types of billboards.
 
 Do Not Receive Shadows
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -104,7 +105,8 @@ Makes the object not receive any kind of shadow that would otherwise be cast ont
 Vertex Color
 ------------
 
-This menu allows choosing what is done by default to vertex colors that come from your 3D modelling application. By default, they are ignored.
+This menu allows choosing what is done by default to vertex colors that come
+from your 3D modelling application. By default, they are ignored.
 
 .. image:: img/spatial_material4.png
 
@@ -147,7 +149,7 @@ Specular Mode
 Specifies how the specular blob will be rendered. The specular blob represents the shape of a light source reflected in the object.
 
 * **ShlickGGX:** The most common blob used by PBR 3D engines nowadays.
-* **Blinn:** Common in previous-generation engines. Not worth using nowadays, but left here for the sake of compatibility.
+* **Blinn:** Common in previous-generation engines. Not worth using nowadays but left here for the sake of compatibility.
 * **Phong:** Same as above.
 * **Toon:** Creates a toon blob, which changes size depending on roughness.
 * **Disabled:** Sometimes, that blob gets in the way. Be gone!
@@ -202,7 +204,7 @@ When drawing points, specify the point size in pixels.
 Billboard Mode
 ~~~~~~~~~~~~~~
 
-Enables billboard mode for drawing materials. This control how the object faces the camera:
+Enables billboard mode for drawing materials. This controls how the object faces the camera:
 
 * Disabled: Billboard mode is disabled
 * Enabled: Billboard mode is enabled, object -Z axis will always face the camera.
@@ -220,7 +222,7 @@ Grows the object vertices in the direction pointed by their normals:
 
 .. image:: img/spatial_material10.png
 
-This is commonly used to create cheap outlines. Add a second material pass, make it black an unshaded, reverse culling (Cull Front), and
+This is commonly used to create cheap outlines. Add a second material pass, make it black and unshaded, reverse culling (Cull Front), and
 add some grow:
 
 .. image:: img/spatial_material11.png
@@ -233,7 +235,7 @@ When transparency other than 0 or 1 is not needed, it's possible to set a thresh
 
 .. image:: img/spatial_material12.png
 
-This renders the object via the opaque pipeline, which is faster and allows it to do mid and post process effects such as SSAO, SSR, etc.
+This renders the object via the opaque pipeline, which is faster and allows it to do mid- and post-process effects such as SSAO, SSR, etc.
 
 Material colors, maps and channels
 ----------------------------------
@@ -244,31 +246,40 @@ of them. They will be described in detail below:
 Albedo
 ~~~~~~
 
-Albedo is the base color for the material. Everything else works based on it. When set to *unshaded* this is the only color that is visible as-is.
-In previous versions of Godot, this channel was named *diffuse*. The change of name mainly happens because, in PBR rendering, this color affects many more
+Albedo is the base color for the material. Everything else works based on it.
+When set to *unshaded*, this is the only color that is visible as-is.
+In previous versions of Godot, this channel was named *diffuse*. The change of
+name mainly happened because, in PBR rendering, this color affects many more
 calculations than just the diffuse lighting path.
 
-Albedo color and texture can be used together, as they are multiplied.
+Albedo color and texture can be used together as they are multiplied.
 
-*Alpha channel* in albedo color and texture is also used for the object transparency. If you use a color or texture with *alpha channel*, make sure to either enable
+*Alpha channel* in albedo color and texture is also used for the object transparency.
+If you use a color or texture with *alpha channel*, make sure to either enable
 transparency or *alpha scissoring* for it to work.
 
 Metallic
 ~~~~~~~~
 
-Godot uses a Metallic model over competing models due to it's simplicity. This parameter pretty much defines how reflective the materials is. The more reflective it is, the least diffuse/ambient
-light and the more reflected light. This model is called "energy conserving".
+Godot uses a Metallic model over competing models due to its simplicity.
+This parameter pretty much defines how reflective the material is. The more
+reflective it is, the less diffuse/ambient light and the more reflected light.
+This model is called "energy conserving".
 
-The "specular" parameter here is just a general amount of for the reflectivity (unlike *metallic*, this one is not energy conserving, so simply leave it as 0.5 and don't touch it unless you need to).
+The "specular" parameter here is just a general amount of for the reflectivity
+(unlike *metallic*, this one is not energy conserving, so simply leave it as 0.5
+and don't touch it unless you need to).
 
-The minimum internal reflectivity is 0.04, so (just like in real life) it's impossible to make a material completely unreflective.
+The minimum internal reflectivity is 0.04, so (just like in real life) it's
+impossible to make a material completely unreflective.
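The energy-conserving split can be sketched like this (illustrative Python following a common PBR convention in which specular 0.5 maps to the 4% dielectric reflectivity; not Godot's shader source):

```python
def metallic_workflow(albedo, metallic, specular=0.5):
    """Split a base color into diffuse and reflective (F0) components.
    As metallic rises, the diffuse term falls so total energy is conserved."""
    # Common convention: specular 0.5 corresponds to 4% reflectivity.
    dielectric_f0 = 0.08 * specular
    diffuse = [c * (1.0 - metallic) for c in albedo]
    f0 = [dielectric_f0 * (1.0 - metallic) + c * metallic for c in albedo]
    return diffuse, f0

# A pure metal has no diffuse term; reflections take on its albedo color.
print(metallic_workflow([0.9, 0.6, 0.2], metallic=1.0))
# -> ([0.0, 0.0, 0.0], [0.9, 0.6, 0.2])
```

Note how at metallic 0 the reflectivity bottoms out at 0.04 rather than zero, matching the minimum mentioned above.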
 
 .. image:: img/spatial_material13.png
 
 Roughness
 ~~~~~~~~~
 
-Roughness affects mainly the way reflection happens. A value of 0 makes it a perfect mirror, while a value of 1 completely blurs the reflection (simulating the natural microsurfacing).
+Roughness mainly affects the way reflection happens. A value of 0 makes it a
+perfect mirror, while a value of 1 completely blurs the reflection (simulating natural microsurfacing).
 Most common types of materials can be achieved from the right combination of *Metallic* and *Roughness*.
 
 .. image:: img/spatial_material14.png
@@ -276,8 +287,9 @@ Most common types of materials can be achieved from the right combination of *Me
 Emission
 ~~~~~~~~
 
-Emission specifies how much light is emitted by the material (keep in mind this does not do lighting on surrounding geometry unless GI Probe is used). This value is just added to the resulting
-final image, and is not affected by other lighting in the scene.
+Emission specifies how much light is emitted by the material (keep in mind this
+does not do lighting on surrounding geometry unless GI Probe is used).
+This value is just added to the resulting final image and is not affected by other lighting in the scene.
 
 
 .. image:: img/spatial_material15.png
@@ -286,7 +298,8 @@ final image, and is not affected by other lighting in the scene.
 Normalmap
 ~~~~~~~~~
 
-Normal mapping allows to set a texture that represents finer shape detail. This does not modify geometry, just the incident angle for light.
+Normal mapping allows setting a texture that represents finer shape detail.
+This does not modify geometry, just the incident angle for light.
 In Godot, only R and G are used for normalmaps, in order to attain better compatibility.
 
 .. image:: img/spatial_material16.png
@@ -294,24 +307,29 @@ In Godot, only R and G are used for normalmaps, in order to attain better compat
 Rim
 ~~~
 
-Some fabrics have small micro fur that causes light to scatter around it. Godot emulates this with the *rim* parameter. Unlike other rim lighting implementations
-which just use the emission channel, this one actually takes light into account (no light means no rim). This makes the effect considerably more believable.
+Some fabrics have small micro fur that causes light to scatter around it. Godot
+emulates this with the *rim* parameter. Unlike other rim lighting implementations
+which just use the emission channel, this one actually takes light into account
+(no light means no rim). This makes the effect considerably more believable.
 
 .. image:: img/spatial_material17.png
 
-Rim size depends on roughness and there is a special parameter to specify how it must be colored. If *tint* is 0, the color of the light is used for the rim. If *tint* is 1,
+Rim size depends on roughness, and there is a special parameter to specify how
+it must be colored. If *tint* is 0, the color of the light is used for the rim. If *tint* is 1,
 then the albedo of the material is used. Using intermediate values generally works best.
 
 Clearcoat
 ~~~~~~~~~
 
-The *clearcoat* parameter is used mostly to add a *secondary* pass of transparent coat to the material. This is common in car paint and toys.
+The *clearcoat* parameter is used mostly to add a *secondary* pass of transparent
+coat to the material. This is common in car paint and toys.
 In practice, it's a smaller specular blob added on top of the existing material.
 
 Anisotropy
 ~~~~~~~~~~
 
-Changes the shape of the specular blow and aligns it to tangent space. Anisotropy is commonly used with hair, or to make materials such as brushed aluminium more realistic.
+Changes the shape of the specular blob and aligns it to tangent space. Anisotropy
+is commonly used with hair, or to make materials such as brushed aluminium more realistic.
 It works especially well when combined with flowmaps.
 
 .. image:: img/spatial_material18.png
@@ -320,23 +338,28 @@ It works especially well when combined with flowmaps.
 Ambient Occlusion
 ~~~~~~~~~~~~~~~~~~
 
-In Godot's new PBR workflow, it is possible to specify a pre-baked ambient occlusion map. This map affects how much ambient light reaches each surface of the object (it does not affect direct light).
-While it is possible to use Screen Space Ambient Occlusion (SSAO) to generate AO, nothing will beat the quality of a nicely baked AO map. It is recommended to pre-bake AO whenever possible.
+In Godot's new PBR workflow, it is possible to specify a pre-baked ambient occlusion map.
+This map affects how much ambient light reaches each surface of the object (it does not affect direct light).
+While it is possible to use Screen Space Ambient Occlusion (SSAO) to generate AO,
+nothing will beat the quality of a nicely baked AO map. It is recommended to pre-bake AO whenever possible.
 
 .. image:: img/spatial_material19.png
 
 Depth
 ~~~~~
 
-Setting a depth map to a material produces a ray-marched search to emulate the proper displacement of cavities along the view direction. This is not real added geometry, but an illusion of depth.
-It may not work for complex objets, but it produces a realistic depth effect for textues. For best results, *Depth* should be used together with normal mapping.
+Setting a depth map to a material produces a ray-marched search to emulate the
+proper displacement of cavities along the view direction. This is not real added geometry, but an illusion of depth.
+It may not work for complex objects, but it produces a realistic depth effect for textures.
+For best results, *Depth* should be used together with normal mapping.
 
 .. image:: img/spatial_material20.png
 
 Subsurface Scattering
 ~~~~~~~~~~~~~~~~~~~~~
 
-This effect emulates light that goes beneath an object's surface, is scattered, and then comes out. It's useful to make realistic skin, marble, colored liquids, etc.
+This effect emulates light that goes beneath an object's surface, is scattered,
+and then comes out. It's useful to make realistic skin, marble, colored liquids, etc.
 
 .. image:: img/spatial_material21.png
 
@@ -344,15 +367,17 @@ This effect emulates light that goes beneath an object's surface, is scattered,
 Transmission
 ~~~~~~~~~~~~
 
-Controls how much light from the lit side (visible to light) is transferred to the dark side (opposite side to light). This works well for thin objects such as tree/plant leaves,
-grass, human ears, etc.
+Controls how much light from the lit side (visible to light) is transferred to
+the dark side (opposite side to light). This works well for thin objects such as
+tree/plant leaves, grass, human ears, etc.
 
 .. image:: img/spatial_material22.png
 
 Refraction
 ~~~~~~~~~~~
 
-When refraction is enabled, it supersedes alpha blending and Godot attempts to fetch information from behind the object being rendered instead. This allows distorting the transparency
+When refraction is enabled, it supersedes alpha blending, and Godot attempts to
+fetch information from behind the object being rendered instead. This allows distorting the transparency
 in a way similar to refraction.
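
For example, a minimal script-side sketch (property names assume a Godot 3.x
``SpatialMaterial``):

::

    var mat = SpatialMaterial.new()
    mat.refraction_enabled = true
    mat.refraction_scale = 0.05  # strength of the distortion behind the object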
 
 .. image:: img/spatial_material23.png
@@ -360,30 +385,36 @@ in a way similar to refraction.
 Detail
 ~~~~~~
 
-Godot allows using secondary albedo and normal maps to generate a detail texture, which can be blended in many ways. Combining with secondary UV or triplanar modes, many interesting textures can be achieved.
+Godot allows using secondary albedo and normal maps to generate a detail texture,
+which can be blended in many ways. Combining with secondary UV or triplanar modes,
+many interesting textures can be achieved.
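
A sketch of a detail setup from code, assuming Godot 3.x ``SpatialMaterial``
names and a hypothetical texture path:

::

    var mat = SpatialMaterial.new()
    mat.detail_enabled = true
    mat.detail_albedo = preload("res://detail_albedo.png")  # hypothetical path
    mat.detail_blend_mode = SpatialMaterial.BLEND_MODE_MIX
    mat.detail_uv_layer = SpatialMaterial.DETAIL_UV_2       # sample with the second UV channel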
 
 .. image:: img/spatial_material24.png
 
 UV1 and UV2
 ~~~~~~~~~~~~
 
-Godot supports 2 UV channels per material. Secondary UV is often useful for AO or Emission (baked light). UVs can be scaled and offseted, which is useful in textures with repeat.
+Godot supports 2 UV channels per material. Secondary UV is often useful for AO or
+Emission (baked light). UVs can be scaled and offset, which is useful for textures with repeat.
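
For instance, a sketch of tiling a texture via UV1 (Godot 3.x property names;
note that the scale and offset are ``Vector3``):

::

    var mat = SpatialMaterial.new()
    mat.uv1_scale = Vector3(4, 4, 1)     # repeat the texture 4x4
    mat.uv1_offset = Vector3(0.5, 0, 0)  # shift it half a tile on X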
 
 Triplanar Mapping
 ~~~~~~~~~~~~~~~~~
 
-Triplanar mapping is supported for both UV1 and UV2. This is an alternative way to obtain texture coordinates, often called "Autotexture". Textures are sampled in X,Y and Z and blended by the normal.
+Triplanar mapping is supported for both UV1 and UV2. This is an alternative way
+to obtain texture coordinates, often called "Autotexture".
+Textures are sampled in X, Y and Z and blended by the normal.
 Triplanar can be either worldspace or object space.
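
A hedged sketch of enabling world-space triplanar mapping on UV1 (property
names from the Godot 3.x ``SpatialMaterial``):

::

    var mat = SpatialMaterial.new()
    mat.uv1_triplanar = true
    mat.uv1_triplanar_sharpness = 1.0  # how hard the three projections blend
    mat.flags_world_triplanar = true   # world space instead of object space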
 
-In the image below, you can see how all primitives share the same material with world triplanar, so bricks continue smoothly between them.
+In the image below, you can see how all primitives share the same material with
+world triplanar, so bricks continue smoothly between them.
 
 .. image:: img/spatial_material25.png
 
 Proximity and Distance Fade
 ----------------------------
 
-Godot allows materials to fade by proximity to another, as well as depending on the distance to the viewer.
-Proximity fade is useful for effects such as soft particles, or a mass of water with a smooth blending to the shores.
+Godot allows materials to fade by proximity to each other as well as depending on the distance to the viewer.
+Proximity fade is useful for effects such as soft particles or a mass of water with a smooth blending to the shores.
 Distance fade is useful for light shafts or indicators that are only present after a given distance.
 
 Keep in mind enabling these enables alpha blending, so abusing them for a whole scene is not generally a good idea.
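
A sketch of both fades from code. Property names are from Godot 3.x (note the
spelling ``proximity_fade_enable``); ``distance_fade_mode`` and its constants
appeared in 3.1, so check your version:

::

    var mat = SpatialMaterial.new()
    mat.proximity_fade_enable = true
    mat.proximity_fade_distance = 1.0
    mat.distance_fade_mode = SpatialMaterial.DISTANCE_FADE_PIXEL_ALPHA
    mat.distance_fade_min_distance = 0.0
    mat.distance_fade_max_distance = 10.0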
@@ -393,4 +424,5 @@ Keep in mind enabling these enables alpha blending, so abusing them for a whole
 Render Priority
 ---------------
 
-Rendering order can be changed for objects, although this is mostly useful for transparent objects (or opaque objects that do depth draw but no color draw, useful for cracks on the floor).
+Rendering order can be changed for objects although this is mostly useful for
+transparent objects (or opaque objects that do depth draw but no color draw, useful for cracks on the floor).
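
For a transparent material that should draw on top of other transparent
objects, a minimal sketch (``render_priority`` is a Godot 3.x ``Material``
property; higher values are drawn later):

::

    var mat = SpatialMaterial.new()
    mat.flags_transparent = true
    mat.render_priority = 1  # drawn after priority-0 transparent objects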

+ 33 - 12
tutorials/3d/using_multi_mesh_instance.rst

@@ -6,26 +6,42 @@ Using MultiMeshInstance
 Introduction
 ~~~~~~~~~~~~
 
-In a normal scenario you would use a :ref:`MeshInstance <class_MeshInstance>` node to display a 3D mesh like a human model for the main character. But in some cases you would like to create multiple instances of the same mesh in a scene. You *could* duplicate the same node multiple times and adjust the transforms manually. This may be a tedious process and the result may look mechanical. Also, this method is not favourable to rapid iterations. :ref:`MultiMeshInstance <class_MultiMeshInstance>` is one of the possible solutions to this problem.
-
-MultiMeshInstance, as the name suggests, creates multiple copies of a MeshInstance over a surface of a specific mesh. An example would be having a tree mesh populate a landscape mesh with random scales and orientations. 
+In a normal scenario, you would use a :ref:`MeshInstance <class_MeshInstance>`
+node to display a 3D mesh like a human model for the main character, but in some
+cases, you would like to create multiple instances of the same mesh in a scene.
+You *could* duplicate the same node multiple times and adjust the transforms
+manually. This may be a tedious process and the result may look mechanical.
+Also, this method is not favourable to rapid iterations.
+:ref:`MultiMeshInstance <class_MultiMeshInstance>` is one of the possible
+solutions to this problem.
+
+MultiMeshInstance, as the name suggests, creates multiple copies of a
+MeshInstance over a surface of a specific mesh. An example would be having a
+tree mesh populate a landscape mesh with trees of random scales and orientations.
 
 Setting up the nodes
 ~~~~~~~~~~~~~~~~~~~~
 
-The basic setup requires three nodes. Firstly, the MultiMeshInstance node. Then, two MeshInstance nodes. 
+The basic setup requires three nodes: the MultiMeshInstance node
+and two MeshInstance nodes.
 
-One node is used as the target, the mesh that you want to place multiple meshes on. In the tree example, this would be the landscape.
+One node is used as the target, the mesh that you want to place multiple meshes
+on. In the tree example, this would be the landscape.
 
-Another node is used as the source, the mesh that you want to have duplicated. In the tree case, this would be the tree.
+Another node is used as the source, the mesh that you want to have duplicated.
+In the tree case, this would be the tree.
 
-In our example, we would use a :ref:`Node <class_Node>` as the root node of the scene. Your scene tree would look like this:
+In our example, we would use a :ref:`Node <class_Node>` as the root node of the
+scene. Your scene tree would look like this:
 
 .. image:: img/multimesh_scene_tree.png
 
-.. note:: For simplification purposes, this tutorial uses built-in primitives. 
+.. note:: For simplification purposes, this tutorial uses built-in primitives.
 
-Now you have everything ready. Select the MultiMeshInstance node and look at the toolbar, you should see an extra button called ``MultiMesh`` next to ``View``. Click it and select *Populate surface* in the dropdown menu. A new window titled *Populate MultiMesh* will pop up.
+Now you have everything ready. Select the MultiMeshInstance node and look at the
+toolbar; you should see an extra button called ``MultiMesh`` next to ``View``.
+Click it and select *Populate surface* in the dropdown menu. A new window titled
+*Populate MultiMesh* will pop up.
 
 .. image:: img/multimesh_toolbar.png
 
@@ -38,7 +54,8 @@ Below are descriptions of the options.
 
 Target Surface
 ++++++++++++++
-The mesh you would be using as the target surface for placing copies of you source mesh on.
+The mesh you would be using as the target surface for placing copies of your
+source mesh on.
 
 Source Mesh
 +++++++++++
@@ -66,9 +83,13 @@ The scale of the source mesh that will be placed over the target surface.
 
 Amount
 ++++++
-The amount of mesh instances placed over the target surface. 
+The number of mesh instances placed over the target surface.
 
-Select the target surface, in the tree case, this should be the landscape node. And the source mesh should be the tree node. Adjust the other parameters according to your preference. Press ``Populate`` and multiple copies of the source mesh will be placed over the target mesh. If you are satisfied with the result, you can delete the mesh instance used as the source mesh. 
+Select the target surface. In the tree case, this should be the landscape node.
+The source mesh should be the tree node. Adjust the other parameters
+according to your preference. Press ``Populate`` and multiple copies of the
+source mesh will be placed over the target mesh. If you are satisfied with the
+result, you can delete the mesh instance used as the source mesh.
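
The same placement can also be sketched in a script, which is handy when the
instances must be regenerated at runtime. The node names below are
hypothetical, and the API is the Godot 3.x ``MultiMesh`` (``instance_count``
must be set after ``transform_format``):

::

    var mm = MultiMesh.new()
    mm.transform_format = MultiMesh.TRANSFORM_3D
    mm.mesh = $Tree.mesh                # the source mesh (hypothetical node)
    mm.instance_count = 100             # set after transform_format
    for i in range(mm.instance_count):
        var pos = Vector3(rand_range(-50, 50), 0, rand_range(-50, 50))
        var basis = Basis(Vector3(0, 1, 0), rand_range(0, 2 * PI))  # random yaw
        mm.set_instance_transform(i, Transform(basis, pos))
    $MultiMeshInstance.multimesh = mm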
 
 The end result should look like this:
 

+ 36 - 31
tutorials/3d/working_with_3d_skeletons.rst

@@ -11,19 +11,19 @@ Skeleton node
 -------------
 
 The Skeleton node can be directly added anywhere you want on a scene. Usually
-mesh is a child of Skeleton, as it easier to manipulate this way, as
+the target mesh is a child of Skeleton, as it is easier to manipulate this way, since
 Transforms within a skeleton are relative to where the Skeleton is. But you
 can specify a Skeleton node in every MeshInstance.
 
-Being obvious, Skeleton is intended to deform meshes, and consists of
+Naturally, Skeleton is intended to deform meshes and consists of
 structures called "bones". Each "bone" is represented as a Transform, which is
 applied to a group of vertices within a mesh. You can directly control a group
 of vertices from Godot. For that please reference the :ref:`class_MeshDataTool`
 class and its method :ref:`set_vertex_bones <class_MeshDataTool_set_vertex_bones>`.
 
-The "bones" are organized hierarchically, every bone, except for root
-bone(s) have a parent. Every bone has an associated name you can use to
-refer to it (e.g. "root" or "hand.L", etc.). Also all bones are numbered,
+The "bones" are organized hierarchically. Every bone, except for the root
+bone(s), has a parent. Every bone also has an associated name you can use to
+refer to it (e.g. "root" or "hand.L", etc.). All bones are numbered, and
 these numbers are bone IDs. Bone parents are referred to by their numbered
 IDs.
 
@@ -35,14 +35,14 @@ For the rest of the article we consider the following scene:
     == skel (Skeleton)
     ==== mesh (MeshInstance)
 
-This scene is imported from Blender. It contains an arm mesh with 2 bones -
+This scene is imported from Blender. It contains an arm mesh with 2 bones,
 upperarm and lowerarm, with the lowerarm bone parented to the upperarm.
 
 Skeleton class
 --------------
 
 You can view Godot's internal help for descriptions of all functions.
-Basically all operations on bones are done using their numeric ID. You
+Basically, all operations on bones are done using their numeric ID. You
 can convert from a name to a numeric ID and vice versa.
 
 **To find the number of bones in a skeleton we use the get_bone_count()
@@ -55,10 +55,8 @@ function:**
 
     func _ready():
         skel = get_node("skel")
-        var id = skel.find_bone("upperarm")
-        print("bone id:", id)
-        var parent = skel.get_bone_parent(id)
-        print("bone parent id:", id)
+        var count = skel.get_bone_count()
+        print("bone count:", count)
 
 **To find the ID of a bone, use the find_bone() function:**
 
@@ -73,7 +71,7 @@ function:**
         print("bone id:", id)
 
 Now, we want to do something interesting with the ID, not just printing it.
-Also, we might need additional information - finding bone parents to
+Also, we might need additional information, such as finding bone parents to
 complete chains, etc. This is done with the get/set_bone\_\* functions.
 
 **To find the parent of a bone we use the get_bone_parent(id) function:**
@@ -91,7 +89,7 @@ complete chains, etc. This is done with the get/set_bone\_\* functions.
         print("bone parent id:", id)
 
 The bone transforms are the things of our interest here. There are 3 kinds of
-transforms - local, global, custom.
+transforms: local, global, custom.
 
 **To find the local Transform of a bone we use get_bone_pose(id) function:**
 
@@ -111,10 +109,10 @@ transforms - local, global, custom.
 
 So we get a 3x4 matrix there, with the first column filled with 1s. What can we do
 with this matrix? It is a Transform, so we can do everything we can do with
-Transforms, basically translate, rotate and scale. We could also multiply
+Transforms (basically translate, rotate and scale). We could also multiply
 transforms to have more complex transforms. Remember, "bones" in Godot are
 just Transforms over a group of vertices. We could also copy Transforms of
-other objects there. So lets rotate our "upperarm" bone:
+other objects there. So let's rotate our "upperarm" bone:
 
 ::
 
@@ -138,7 +136,7 @@ other objects there. So lets rotate our "upperarm" bone:
         skel.set_bone_pose(id, t)
 
 Now we can rotate individual bones. The same happens for scale and
-translate - try these on your own and check the results.
+translate. Try these on your own and check the results.
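
For instance, a hedged sketch of scaling and translating the same bone
(illustrative values only; the node and bone names are from the example scene):

::

    func _ready():
        var skel = get_node("skel")
        var id = skel.find_bone("upperarm")
        var t = skel.get_bone_pose(id)
        t.basis = t.basis.scaled(Vector3(0.5, 0.5, 0.5))  # shrink the bone's vertices
        t.origin += Vector3(0, 0.2, 0)                    # nudge them along Y
        skel.set_bone_pose(id, t)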
 
 What we used here was the local pose. By default all bones are not modified.
 But this Transform tells us nothing about the relationship between bones.
@@ -165,8 +163,8 @@ Let's find the global Transform for the lowerarm bone:
         print("bone transform: ", t)
 
 As you can see, this transform is not zeroed. While being called global, it
-is actually relative to the Skeleton origin. For a root bone, origin is always
-at 0 if not modified. Lets print the origin for our lowerarm bone:
+is actually relative to the Skeleton origin. For a root bone, the origin is always
+at 0 if not modified. Let's print the origin for our lowerarm bone:
 
 ::
 
@@ -184,31 +182,34 @@ at 0 if not modified. Lets print the origin for our lowerarm bone:
 
 You will see a number. What does this number mean? It is a rotation
 point of the Transform. So it is base part of the bone. In Blender you can
-go to Pose mode and try there to rotate bones - they will rotate around
-their origin. But what about the bone tip? We can't know things like the bone length,
+point of the Transform. So it is the base part of the bone. In Blender, you can
+go to Pose mode and try rotating bones there. They will rotate around
+their origin.
+
+But what about the bone tip? We can't know things like the bone length,
 which we need for many things, without knowing the tip location. For all
-bones in a chain except for the last one we can calculate the tip location - it is
-simply a child bone's origin. Yes, there are situations when this is not
-true, for non-connected bones. But that is OK for us for now, as it is
-not important regarding Transforms. But the leaf bone tip is nowhere to
-be found. A leaf bone is a bone without children. So you don't have any
-information about its tip. But this is not a showstopper. You can
-overcome this by either adding an extra bone to the chain or just
-calculating the length of the leaf bone in Blender and storing the value in your
-script.
+bones in a chain, except for the last one, we can calculate the tip location. It is
+simply a child bone's origin. There are situations when this is not
+true, such as for non-connected bones, but that is OK for us for now, as it is
+not important regarding Transforms.
+
+Notice that the leaf bone tip is nowhere to be found. A leaf bone is a bone
+without children, so you don't have any information about its tip.
+But this is not a showstopper. You can overcome this by either adding an extra
+bone to the chain or just calculating the length of the leaf bone in Blender
+and storing the value in your script.
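
For connected bones, the length can be sketched like this (bone names are from
the example scene; ``get_bone_global_pose()`` is the Godot 3.x method):

::

    func bone_length(skel, parent_name, child_name):
        # The tip of a non-leaf bone is its child's origin, so the length
        # is the distance between the two global origins.
        var p = skel.get_bone_global_pose(skel.find_bone(parent_name)).origin
        var c = skel.get_bone_global_pose(skel.find_bone(child_name)).origin
        return p.distance_to(c)

    # e.g. bone_length(skel, "upperarm", "lowerarm")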
 
 Using 3D "bones" for mesh control
 ---------------------------------
 
 Now that you know the basics, we can apply them to make full FK control of our
-arm (FK is forward-kinematics)
+arm (FK is forward-kinematics).
 
 To fully control our arm we need the following parameters:
 
 -  Upperarm angle x, y, z
 -  Lowerarm angle x, y, z
 
-All of these parameters can be set, incremented and decremented.
+All of these parameters can be set, incremented, and decremented.
 
 Create the following node tree:
 
@@ -250,8 +251,10 @@ which does that:
 
     func _ready():
         set_process(true)
+
     var bone = "upperarm"
     var coordinate = 0
+
     func _process(delta):
         if Input.is_action_pressed("select_x"):
             coordinate = 0
@@ -285,8 +288,10 @@ The full code for arm control is this:
     func _ready():
         skel = get_node("arm/Armature/Skeleton")
         set_process(true)
+
     var bone = "upperarm"
     var coordinate = 0
+
     func set_bone_rot(bone, ang):
         var b = skel.find_bone(bone)
         var rest = skel.get_bone_rest(b)