Gregg Tavares, 5 years ago
parent commit f4f3ccc341

+ 3 - 3
threejs/lessons/ru/threejs-multiple-scenes.md

@@ -149,7 +149,7 @@ const sceneInfo2 = setupScene2();
 `Renderer.setViewport` и `Renderer.setScissor`.
 
 ```js
-function rendenerSceneInfo(sceneInfo) {
+function renderSceneInfo(sceneInfo) {
   const {scene, camera, elem} = sceneInfo;
 
   // получаем относительную позицию окна просмотра этого элемента
@@ -193,8 +193,8 @@ function render(time) {
   sceneInfo1.mesh.rotation.y = time * .1;
   sceneInfo2.mesh.rotation.y = time * .1;
 
-  rendenerSceneInfo(sceneInfo1);
-  rendenerSceneInfo(sceneInfo2);
+  renderSceneInfo(sceneInfo1);
+  renderSceneInfo(sceneInfo2);
 
   requestAnimationFrame(render);
 }

+ 1 - 1
threejs/lessons/threejs-align-html-elements-to-3d.md

@@ -234,7 +234,7 @@ cubes.forEach((cubeInfo, ndx) => {
 +    // hide the label
 +    elem.style.display = 'none';
 +  } else {
-+    // unhide the label
++    // un-hide the label
 +    elem.style.display = '';
 
     // convert the normalized position to CSS coordinates

+ 1 - 1
threejs/lessons/threejs-billboards.md

@@ -246,7 +246,7 @@ completely in the render target. The issue here is the size we're using to
 calculate if the object fits in the camera's view is not taking into account
 that the very edges of the object will end up dipping outside area we
 calculated. We could compute how to make 100% of the box fit but that would
-waste space as well so instead we just *fugde* it.
+waste space as well so instead we just *fudge* it.
 
 Then we render to the render target and remove the object from
 the scene. 

+ 4 - 4
threejs/lessons/threejs-cameras.md

@@ -114,7 +114,7 @@ help us with events so both cameras can easily have their own `OrbitControls`.
 </body>
 ```
 
-And the CSS that will make those 2 views show up side by side overlayed on top of
+And the CSS that will make those 2 views show up side by side overlaid on top of
 the canvas
 
 ```css
@@ -397,7 +397,7 @@ maybe we'll go over later. For now, just be aware you should take care
 to choose appropriate `near` and `far` values for your needs.
 
 The 2nd most common camera is the `OrthographicCamera`. Rather than
-specify a frustum it specfies a box with the settings `left`, `right`
+specify a frustum it specifies a box with the settings `left`, `right`
 `top`, `bottom`, `near`, and `far`. Because it's projecting a box
 there is no perspective.
 
@@ -430,7 +430,7 @@ const gui = new GUI();
 ```
 
 The call to `listen` tells dat.GUI to watch for changes. This is here because
-the `OrbitControls` can also control zoom. For example the scrollwheel on
+the `OrbitControls` can also control zoom. For example the scroll wheel on
 a mouse will zoom via the `OrbitControls`.
 
 Last we just need to change the part that renders the left
@@ -470,7 +470,7 @@ something like
 ```js
 camera.left = -canvas.width / 2;
 camera.right = canvas.width / 2;
-camera.top = canvas.heigth / 2;
+camera.top = canvas.height / 2;
 camera.bottom = -canvas.height / 2;
 camera.near = -1;
 camera.far = 1;

+ 1 - 1
threejs/lessons/threejs-cleanup.md

@@ -376,7 +376,7 @@ class ResourceTracker {
 }
 ```
 
-And with that let's take an example from [the article on loading gltf files](threejs-load-glft.html)
+And with that let's take an example from [the article on loading gltf files](threejs-load-gltf.html)
 and make it load and free files.
 
 ```js

+ 4 - 4
threejs/lessons/threejs-custom-buffergeometry.md

@@ -8,7 +8,7 @@ how to use `Geometry`. This article is about `BufferGeometry`.
 less memory but can be harder to set up.
 
 In [the article on Geometry](threejs-custom-geometry.html) we went over that to use a `Geometry` you supply an
-array of `Vector3` vertices (postions). You then make `Face3` objects specifying
+array of `Vector3` vertices (positions). You then make `Face3` objects specifying
 by index the 3 vertices that make each triangle of the shape you're making. To
 each `Face3` you can specify either a face normal or normals for each individual
 vertex of the face. You can also specify a face color or individual vertex
@@ -20,7 +20,7 @@ of the face.
 
 `BufferGeometry` on the other hand uses *named* `BufferAttribute`s.
 Each `BufferAttribute` represents an array of one type of data: positions,
-normals, colors, and uv. Togther, the added `BufferAttribute`s represent
+normals, colors, and uv. Together, the added `BufferAttribute`s represent
 *parallel arrays* of all the data for each vertex.
 
 <div class="threejs_center"><img src="resources/threejs-attributes.svg" style="width: 700px"></div>
@@ -44,7 +44,7 @@ The truth is when you use `Geometry` three.js transforms it into this format.
 That is where the extra memory and time comes from when using `Geometry`. Extra
 memory for all the `Vector3`s, `Vector2`s, `Face3`s and array objects and then
 extra time to translate all of that data into parallel arrays in the form of
-`BufferAttribute`s like above. Somtimes that makes using `Geometry` easier.
+`BufferAttribute`s like above. Sometimes that makes using `Geometry` easier.
 With `BufferGeometry` it is up to us to supply the data already turned into this format.
 
 As a simple example let's make a cube using `BufferGeometry`. A cube is interesting
@@ -144,7 +144,7 @@ and add it to the `BufferGeometry`.
       new THREE.BufferAttribute(new Float32Array(uvs), uvNumComponents));
 ```
 
-Note that the names are sigificant. You must name your attributes the names
+Note that the names are significant. You must name your attributes the names
 that match what three.js expects (unless you are creating a custom shader).
 In this case `position`, `normal`, and `uv`. If you want vertex colors then
 name your attribute `color`.

+ 2 - 2
threejs/lessons/threejs-debugging-javascript.md

@@ -46,7 +46,7 @@ Similarly with Safari you can
 [use your computer to debug webpages running on Safari on iPhones and iPads](https://www.google.com/search?q=safari+remote+debugging+ios).
 
 I'm most familiar with Chrome so this guide will be using Chrome
-as an example when refering to tools but most browsers have similar
+as an example when referring to tools but most browsers have similar
 features so it should be easy to apply anything here to all browsers.
 
 ## Turn off the cache
@@ -351,7 +351,7 @@ there is no debug info but if we use this url:
 
 there is debug info.
 
-Multiple paramters can be passed in by separating with '&' as in `somepage.html?someparam=somevalue&someotherparam=someothervalue`. 
+Multiple parameters can be passed in by separating with '&' as in `somepage.html?someparam=somevalue&someotherparam=someothervalue`. 
 Using parameters like this we can pass in all kinds of options. Maybe `speed=0.01` to slow down our app to make it easier to understand something or `showHelpers=true` for whether or not to add helpers
 that show the lights, shadow, or camera frustum seen in other lessons.
 

+ 5 - 5
threejs/lessons/threejs-fog.md

@@ -18,7 +18,7 @@ from the camera. Anything closer than `near` is unaffected by fog.
 Anything further than `far` is completely the fog color. Parts between
 `near` and `far` fade from their material color to the fog color.
 
-There's also `FogExp2` which grows expotentially with distance from the camera.
+There's also `FogExp2` which grows exponentially with distance from the camera.
 
 To use either type of fog you create one and assign it to the scene as in
 
@@ -85,7 +85,7 @@ scene.background = new THREE.Color('#F00');  // red
 
 Here is one of our previous examples with fog added. The only addition
 is right after setting up the scene we add the fog and set the scene's
-backgound color
+background color
 
 ```js
 const scene = new THREE.Scene();
@@ -181,7 +181,7 @@ we get use to easily get such a string, we just have to prepend a '#' to the fro
 // We use this class to pass to dat.gui
 // so when it manipulates near or far
 // near is never > far and far is never < near
-+// Also when dat.gui maniplates color we'll
++// Also when dat.gui manipulates color we'll
 +// update both the fog and background colors.
 class FogGUIHelper {
 *  constructor(fog, backgroundColor) {
@@ -212,7 +212,7 @@ class FogGUIHelper {
 }
 ```
 
-We then call `gui.addColor` to add a color UI for our helper's virutal property.
+We then call `gui.addColor` to add a color UI for our helper's virtual property.
 
 ```js
 {
@@ -232,7 +232,7 @@ We then call `gui.addColor` to add a color UI for our helper's virutal property.
 {{{example url="../threejs-fog-gui.html" }}}
 
 You can see setting `near` to like 1.9 and `far` to 2.0 gives
-a very sharp transition between unfogged and completely fogged.
+a very sharp transition between un-fogged and completely fogged.
 Whereas `near` = 1.1 and `far` = 2.9 should just about be
 the smoothest given our cubes are spinning 2 units away from the camera.
 

+ 1 - 1
threejs/lessons/threejs-fundamentals.md

@@ -109,7 +109,7 @@ Anything inside the defined frustum will be be drawn. Anything outside
 will not.
 
 The camera defaults to looking down the -Z axis with +Y up. We'll put our cube
-at the origin so we need to move the camera back a litte from the origin
+at the origin so we need to move the camera back a little from the origin
 in order to see anything.
 
 ```js

+ 2 - 2
threejs/lessons/threejs-indexed-textures.md

@@ -208,7 +208,7 @@ canvas.addEventListener('touchstart', (event) => {
   event.preventDefault();
   lastTouch = event.touches[0];
 }, {passive: false});
-canvas.addEventListener('touchsmove', (event) => {
+canvas.addEventListener('touchmove', (event) => {
   lastTouch = event.touches[0];
 });
 canvas.addEventListener('touchend', () => {
@@ -671,7 +671,7 @@ canvas.addEventListener('touchstart', (event) => {
   lastTouch = event.touches[0];
 +  recordStartTimeAndPosition(event.touches[0]);
 }, {passive: false});
-canvas.addEventListener('touchsmove', (event) => {
+canvas.addEventListener('touchmove', (event) => {
   lastTouch = event.touches[0];
 });
 ```

+ 3 - 3
threejs/lessons/threejs-lights.md

@@ -181,7 +181,7 @@ What it does help with is making the darks not too dark.
 ## `HemisphereLight`
 
 Let's switch the code to a `HemisphereLight`. A `HemisphereLight`
-takes a sky color and a ground color and just multplies the
+takes a sky color and a ground color and just multiplies the
 material's color between those 2 colors—the sky color if the
 surface of the object is pointing up and the ground color if
 the surface of the object is pointing down.
@@ -212,7 +212,7 @@ The result:
 
 {{{example url="../threejs-lights-hemisphere.html" }}}
 
-Notice again there is almost no defintion, everything looks kind
+Notice again there is almost no definition, everything looks kind
 of flat. The `HemisphereLight` used in combination with another light
 can help give a nice kind of influence of the color of the sky
 and ground. In that way it's best used in combination with some
@@ -352,7 +352,7 @@ be any shape you want, just add a mesh to the light itself.
 A `PointLight` has the added property of [`distance`](PointLight.distance).
 If the `distance` is 0 then the `PointLight` shines to
 infinity. If the `distance` is greater than 0 then the light shines
-its full intensity at the light and fades to no influnce at `distance`
+its full intensity at the light and fades to no influence at `distance`
 units away from the light.
 
 Let's set up the GUI so we can adjust the distance.

+ 1 - 1
threejs/lessons/threejs-load-gltf.md

@@ -48,7 +48,7 @@ graphics. 3D formats can be divided into 3 or 4 basic types.
 
      This again is different from other formats except maybe App formats. The data
     in a glTF file is meant to be rendered, not edited. Data that's not important to
-     rendering has generally been removed. Polygons have been convered to triangles.
+     rendering has generally been removed. Polygons have been converted to triangles.
      Materials have known values that are supposed to work everywhere.
 
 glTF was specifically designed so you should be able to download a glTF file and

+ 4 - 4
threejs/lessons/threejs-load-obj.md

@@ -124,7 +124,7 @@ export those files to by picking **File->External Data->Unpack All Into Files**
 
 <div class="threejs_center"><img style="width: 828px;" src="resources/images/windmill-export-textures.jpg"></div>
 
-and then chosing **Write Files to Current Directory**
+and then choosing **Write Files to Current Directory**
 
 <div class="threejs_center"><img style="width: 828px;" src="resources/images/windmill-overwrite.jpg"></div>
 
@@ -634,7 +634,7 @@ Loading models often runs into these kinds of issues. Common issues include:
 
 * Needing to know the size
 
-  Above we made the camera try to frame the scene but that's not always the approriate thing to do. Generally the most approriate thing
+  Above we made the camera try to frame the scene but that's not always the appropriate thing to do. Generally the most appropriate thing
   to do is to make your own models or download the models, load them up in some 3D software and look at their scale and adjust if need be.
 
 * Orientation Wrong
@@ -655,7 +655,7 @@ Loading models often runs into these kinds of issues. Common issues include:
 
 * Textures too large
 
-  Most 3D models are made for either architecture, movies and commericals, or
+  Most 3D models are made for either architecture, movies and commercials, or
   games. For architecture and movies no one really cares about the size
   of the textures. For games people care because games have limited
   memory but most games run locally. Webpages though you want to load
@@ -668,7 +668,7 @@ Loading models often runs into these kinds of issues. Common issues include:
   textures take memory so a 50k JPG that expands to 4096x4096 will download
   fast but still take a ton of memory.
 
-The last thing I wanted to show is spinning the windmills. Unfortunately, .OBJ files have no hirerarchy. That means all parts of each
+The last thing I wanted to show is spinning the windmills. Unfortunately, .OBJ files have no hierarchy. That means all parts of each
 windmill are basically considered 1 single mesh. You can't spin the blades of the mill as they aren't separated from the rest of the building.
 
 This is one of the main reasons why .OBJ is not really a good format. If I was to guess, the reason it's more common than other formats

+ 5 - 5
threejs/lessons/threejs-multiple-scenes.md

@@ -21,7 +21,7 @@ the 9th context the oldest one will be lost.
    and that model uses 20 meg of textures your 10 meg model will
    have to be loaded twice and your textures will also be loaded
    twice. Nothing can be shared across contexts. This also
-   means things have to be intialized twice, shaders compiled twice,
+   means things have to be initialized twice, shaders compiled twice,
    etc. It gets worse as there are more canvases.
 
 So what's the solution?
@@ -78,7 +78,7 @@ Then we can setup the CSS maybe something like this
 }
 ```
 
-We set the canvsas to fill the screen and we set its `z-index` to
+We set the canvas to fill the screen and we set its `z-index` to
 -1 to make it appear behind other elements. We also need to specify some kind of width and height for our virtual canvas elements since there is nothing inside to give them any size.
 
 Now we'll make 2 scenes each with a light and a camera.
@@ -144,7 +144,7 @@ to only render to part of the canvas by turning on the *scissor*
 test with `Renderer.setScissorTest` and then setting both the scissor and the viewport with `Renderer.setViewport` and `Renderer.setScissor`.
 
 ```js
-function rendenerSceneInfo(sceneInfo) {
+function renderSceneInfo(sceneInfo) {
   const {scene, camera, elem} = sceneInfo;
 
   // get the viewport relative position of this element
@@ -188,8 +188,8 @@ function render(time) {
   sceneInfo1.mesh.rotation.y = time * .1;
   sceneInfo2.mesh.rotation.y = time * .1;
 
-  rendenerSceneInfo(sceneInfo1);
-  rendenerSceneInfo(sceneInfo2);
+  renderSceneInfo(sceneInfo1);
+  renderSceneInfo(sceneInfo2);
 
   requestAnimationFrame(render);
 }

+ 1 - 1
threejs/lessons/threejs-optimize-lots-of-objects-animated.md

@@ -714,7 +714,7 @@ another target and morph from that to their first positions on the globe. That
 might be a cool way to introduce the globe.
 
 Next you might be interested in adding labels to a globe which is covered
-in [Aligning HTML Elemenst to 3D](threejs-align-html-elements-to-3d.html).
+in [Aligning HTML Elements to 3D](threejs-align-html-elements-to-3d.html).
 
 Note: We could try to just graph percent of men or percent of women or the raw
 difference but based on how we are displaying the info, cubes that grow from the

+ 2 - 2
threejs/lessons/threejs-picking.md

@@ -122,7 +122,7 @@ class PickHelper {
 
 You can see we create a `Raycaster` and then we can call the `pick` function to cast a ray through the scene. If the ray hits something we change the color of the first thing it hits.
 
-Of course we could call this function only when the user pressed the mouse *down* which is probaby usually what you want but for this example we'll pick every frame whatever is under the mouse. To do this we first need to track where the mouse
+Of course we could call this function only when the user pressed the mouse *down* which is probably usually what you want but for this example we'll pick every frame whatever is under the mouse. To do this we first need to track where the mouse
 is
 
 ```js
@@ -159,7 +159,7 @@ window.addEventListener('mouseout', clearPickPosition);
 window.addEventListener('mouseleave', clearPickPosition);
 ```
 
-Notice we're recording a normalized mouse position. Reguardless of the size of the canvas we need a value that goes from -1 on the left to +1 on the right. Similarly we need a value that goes from -1 on the bottom to +1 on the top.
+Notice we're recording a normalized mouse position. Regardless of the size of the canvas we need a value that goes from -1 on the left to +1 on the right. Similarly we need a value that goes from -1 on the bottom to +1 on the top.
 
 While we're at it let's support mobile as well
 

+ 1 - 1
threejs/lessons/threejs-post-processing-3dlut.md

@@ -281,7 +281,7 @@ Let's set the size to 16 and then click save the file which gives us this file.
 
 <div class="threejs_center"><img src="resources/images/identity-lut-s16.png"></div>
 
-We also need to capture an image of the thing we want to apply the LUT to, in this case the scene we created above before applying any effects. Note that normally we could right click on the scene above and pick "Save As..." but the `OrbitControls` might be preventing right clicking depnding on your OS. In my case I used my OSes screen capture feature to get a screenshot.
+We also need to capture an image of the thing we want to apply the LUT to, in this case the scene we created above before applying any effects. Note that normally we could right click on the scene above and pick "Save As..." but the `OrbitControls` might be preventing right clicking depending on your OS. In my case I used my OSes screen capture feature to get a screenshot.
 
 <div class="threejs_center"><img src="resources/images/3dlut-screen-capture.jpg" style="width: 600px"></div>
 

+ 3 - 3
threejs/lessons/threejs-post-processing.md

@@ -58,7 +58,7 @@ target. Usually you need to set this to true on the last pass you add to your
 `EffectComposer`.
 
 Let's put together a basic example. We'll start with the example from [the
-article on responsivness](threejs-responsive.html).
+article on responsiveness](threejs-responsive.html).
 
 To that first we create an `EffectComposer`.
 
@@ -170,7 +170,7 @@ I found this line:
 this.copyUniforms[ "opacity" ].value = strength;
 ```
 
-So we can set the strengh by setting
+So we can set the strength by setting
 
 ```js
 bloomPass.copyUniforms.opacity.value = someValue;
@@ -201,7 +201,7 @@ and
 const gui = new GUI();
 {
   const folder = gui.addFolder('BloomPass');
-  folder.add(bloomPass.copyUniforms.opacity, 'value', 0, 2).name('stength');
+  folder.add(bloomPass.copyUniforms.opacity, 'value', 0, 2).name('strength');
   folder.open();
 }
 {

+ 3 - 3
threejs/lessons/threejs-primitives.md

@@ -272,7 +272,7 @@ whereas the one on the right is.
 
 The other exceptions are the 2 line based examples for `EdgesGeometry`
 and `WireframeGeometry`. Instead of calling `addSolidGeometry` they call
-`addLineGeomtry` which looks like this
+`addLineGeometry` which looks like this
 
 ```js
 function addLineGeometry(x, y, geometry) {
@@ -320,7 +320,7 @@ It's now not so clear that the one on the right with 5000 triangles
 is entirely better than the one in the middle with only 480.
 If you're only drawing a few spheres, like say a single globe for
 a map of the earth, then a single 10000 triangle sphere is not a bad
-choice. If on the otherhand you're trying to draw 1000 spheres
+choice. If on the other hand you're trying to draw 1000 spheres
 then 1000 spheres times 10000 triangles each is 10 million triangles.
 To animate smoothly you need the browser to draw at 60 frames per
 second so you'd be asking the browser to draw 600 million triangles
@@ -343,7 +343,7 @@ is similar.
 So, choose whatever is appropriate for your situation. The less
 subdivisions you choose the more likely things will run smoothly and the less
 memory they'll take. You'll have to decide for yourself what the correct
-tradeoff is for your particular siutation.
+tradeoff is for your particular situation.
 
 Next up let's go over [how three's scene graph works and how
 to use it](threejs-scenegraph.html).

+ 1 - 1
threejs/lessons/threejs-rendering-on-demand.md

@@ -209,7 +209,7 @@ function makeInstance(geometry, color, x) {
 +  const folder = gui.addFolder(`Cube${x}`);
 +  folder.addColor(new ColorGUIHelper(material, 'color'), 'value')
 +      .name('color')
-+      .onChange(rendrequestRenderIfNotRequesteder);
++      .onChange(requestRenderIfNotRequested);
 +  folder.add(cube.scale, 'x', .1, 1.5)
 +      .name('scale x')
 +      .onChange(requestRenderIfNotRequested);

+ 2 - 2
threejs/lessons/threejs-rendertargets.md

@@ -2,7 +2,7 @@ Title: Three.js Render Targets
 Description: How to render to a texture.
 TOC: Render Targets
 
-A render target in three.js is basicaly a texture you can render to.
+A render target in three.js is basically a texture you can render to.
 After you render to it you can use that texture like any other texture.
 
 Let's make a simple example. We'll start with an example from [the article on responsiveness](threejs-responsive.html).
@@ -140,7 +140,7 @@ A few notes about using `WebGLRenderTarget`.
             camera.aspect = canvas.clientWidth / canvas.clientHeight;
             camera.updateProjectionMatrix();
 
-        +    renderTaret.setSize(canvas.width, canvas.height);
+        +    renderTarget.setSize(canvas.width, canvas.height);
         +    rtCamera.aspect = camera.aspect;
         +    rtCamera.updateProjectionMatrix();
           }

+ 2 - 2
threejs/lessons/threejs-responsive.md

@@ -166,7 +166,7 @@ function render(time) {
   ...
 ```
 
-Since the apsect is only going to change if the canvas's display size
+Since the aspect is only going to change if the canvas's display size
 changed we only set the camera's aspect if `resizeRendererToDisplaySize`
 returns `true`.
 
@@ -232,7 +232,7 @@ and pass that to three.js
 
      renderer.setPixelRatio(window.devicePixelRatio);
 
-After that any calls to `renderer.setSize` will magicially
+After that any calls to `renderer.setSize` will magically
 use the size you request multiplied by whatever pixel ratio
 you passed in. **This is strongly NOT RECOMMENDED**. See below
 

+ 3 - 3
threejs/lessons/threejs-scenegraph.md

@@ -75,7 +75,7 @@ the surface. Light is added to that color.
 
 Let's also put a single point light in the center of the scene. We'll go into more
 details about point lights later but for now the simple version is a point light
-represents light that eminates from a single point.
+represents light that emanates from a single point.
 
 ```js
 {
@@ -87,7 +87,7 @@ represents light that eminates from a single point.
 ```
 
 To make it easy to see we're going to put the camera directly above the origin
-looking down. The easist way to do that is to use the `lookAt` function. The `lookAt`
+looking down. The easiest way to do that is to use the `lookAt` function. The `lookAt`
 function will orient the camera from its position to "look at" the position
 we pass to `lookAt`. Before we do that though we need to tell the camera
 which way the top of the camera is facing or rather which way is "up" for the
@@ -360,7 +360,7 @@ Otherwise the grid might overwrite the axes.
 
 Turn on the `solarSystem` and you'll see how the earth is exactly 10
 units out from the center just like we set above. You can see how the
-earth is in the *local space* of the `solarSystem`. Similary if you
+earth is in the *local space* of the `solarSystem`. Similarly if you
 turn on the `earthOrbit` you'll see how the moon is exactly 2 units
 from the center of the *local space* of the `earthOrbit`.
 

+ 132 - 42
threejs/lessons/threejs-shadertoy.md

@@ -2,21 +2,30 @@ Title: Three.js and Shadertoy
 Description: How to use Shadertoy shaders in THREE.js
 TOC: Using Shadertoy shaders
 
-[Shadertoy](https://shadertoy.com) is a famous website hosting amazing shader experiments. People often ask how they can use those shaders with Three.js.
+[Shadertoy](https://shadertoy.com) is a famous website hosting amazing shader
+experiments. People often ask how they can use those shaders with Three.js.
 
-It's important to recognize it's called Shader**TOY** for a reason. In general shadertoy shaders are not about best practices. Rather they are a fun challenge similar to say [dwitter](https://dwitter.net) (write code in 140 characters) or [js13kGames](https://js13kgames.com) (make a game in 13k or less).
+It's important to recognize it's called Shader**TOY** for a reason. In general
+shadertoy shaders are not about best practices. Rather they are a fun challenge
+similar to say [dwitter](https://dwitter.net) (write code in 140 characters) or
+[js13kGames](https://js13kgames.com) (make a game in 13k or less).
 
-In the case of Shadertoy the puzzle is, *write a function that for a given pixel localtion outputs a color that draws something interesting*. It's a fun challenge and many of the result are amazing. But, it is not best practice.
+In the case of Shadertoy the puzzle is, *write a function that for a given pixel
+location outputs a color that draws something interesting*. It's a fun challenge
+and many of the results are amazing. But, it is not best practice.
 
 Compare [this amazing shadertoy shader that draws an entire city](https://www.shadertoy.com/view/XtsSWs)
 
 <div class="threejs_center"><img src="resources/images/shadertoy-skyline.png"></div>
 
-Fullscreen on my GPU it runs at about 5 frames a second. Contrast that to [a game like Cities: Skylines](https://store.steampowered.com/app/255710/Cities_Skylines/)
+Fullscreen on my GPU it runs at about 5 frames a second. Contrast that to
+[a game like Cities: Skylines](https://store.steampowered.com/app/255710/Cities_Skylines/)
 
 <div class="threejs_center"><img src="resources/images/cities-skylines.jpg" style="width: 600px;"></div>
 
-This game runs 30-60 frames a second on the same machine because it uses more traditional techniques, drawing buildings made from triangles with textures on them, etc...
+This game runs 30-60 frames a second on the same machine because it uses more
+traditional techniques, drawing buildings made from triangles with textures on
+them, etc...
 
 Still, let's go over using a Shadertoy shader with three.js.
 
@@ -38,32 +47,64 @@ void mainImage( out vec4 fragColor, in vec2 fragCoord )
 }
 ```
 
-One thing important to understand about shaders is they are witten in a language called GLSL (Graphics Library Shading Language) designed for 3D math which includes special types. Above we see `vec4`, `vec2`, `vec3` as 3 such special types. A `vec2` has 2 values, a `vec3` 3, a `vec4` 4 values. They can be addressed in a bunch of ways. The most common ways are with `x`, `y`, `z`, and `w` as in
+One thing important to understand about shaders is they are written in a
+language called GLSL (Graphics Library Shading Language) designed for 3D math
+which includes special types. Above we see `vec4`, `vec2`, `vec3` as 3 such
+special types. A `vec2` has 2 values, a `vec3` 3, a `vec4` 4 values. They can be
+addressed in a bunch of ways. The most common ways are with `x`, `y`, `z`, and
+`w` as in
 
 ```glsl
 vec4 v1 = vec4(1.0, 2.0, 3.0, 4.0);
 float v2 = v1.x + v1.y;  // adds 1.0 + 2.0
 ```
 
-Unlike JavaScript, GLSL is more like C/C++ where variables have to have their type declared so instead of `var v = 1.2;` it's `float v = 1.2;` declaring `v` to be a floating point number.
+Unlike JavaScript, GLSL is more like C/C++ where variables have to have their
+type declared so instead of `var v = 1.2;` it's `float v = 1.2;` declaring `v`
+to be a floating point number.
 
-Explaining GLSL in detail is more than we can do in this article. For a quick overview see [this article](https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html) and maybe follow that up with [this series](https://thebookofshaders.com/).
+Explaining GLSL in detail is more than we can do in this article. For a quick
+overview see [this article](https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html)
+and maybe follow that up with [this series](https://thebookofshaders.com/).
   
-It should be noted that, at least as of January 2019, [shadertoy.com](https://shadertoy.com) only concerns itself with *fragment shaders*. A fragment shaders's responsibility is, given a pixel location output a color for that pixel.
-
-Looking at the function above we can see the shader has an `out` parameter called `fragColor`. `out` stands for `output`. It's a parameter the function is expected to provide a value for. We need to set this to some color.
-
-It also has an `in` (for input) parameter called `fragCoord`. This is the pixel coordinate that is about to be drawn. We can use that coordinate to decide on a color. If the canvas we're drawing to is 400x300 pixels then the function will be called 400x400 times or 120,000 times. Each time `fragCoord` will be a different pixel coordinate.
-
-There are 2 more variables being used that are not defined in the code. One is `iResolution`. This is set to the resolution of the canvas. If the canvas is 400x300 then `iResolution` would be 400,300 so as the pixel coordinates change that makes `uv` go from 0.0 to 1.0 across and up the texture. Working with *normalized* values often makes things easier and so the majority of shadertoy shaders start with something like this.
-
-The other undefined variable in the shader is `iTime`. This is the time since the page loaded in seconds.
-
-In shader jargon these global variables are called *uniform* variables. They are called *uinform* because they don't change, they stay uniform from one iteration of the shader to the next. It's important to note all of them are specific to shadertoy. They not *official* GLSL variables. They are variables the makers of shadertoy made up.
-
-The [Shadertoy docs define several more](https://www.shadertoy.com/howto). For now let's write something that handles the two being used in the shader above.
-
-The first thing to do is let's make a single plane that fills the canvas. If you haven't read it yet we did this in [the article on backgounds](threejs-backgrounds.html) so let's grab that example but remove the cubes. It's pretty short so here's the entire thing
+It should be noted that, at least as of January 2019,
+[shadertoy.com](https://shadertoy.com) only concerns itself with *fragment
+shaders*. A fragment shader's responsibility is, given a pixel location, to output
+a color for that pixel.
+
+Looking at the function above we can see the shader has an `out` parameter
+called `fragColor`. `out` stands for `output`. It's a parameter the function is
+expected to provide a value for. We need to set this to some color.
+
+It also has an `in` (for input) parameter called `fragCoord`. This is the pixel
+coordinate that is about to be drawn. We can use that coordinate to decide on a
+color. If the canvas we're drawing to is 400x300 pixels then the function will
+be called 400x300 times, or 120,000 times. Each time `fragCoord` will be a
+different pixel coordinate.
+
+There are 2 more variables being used that are not defined in the code. One is
+`iResolution`. This is set to the resolution of the canvas. If the canvas is
+400x300 then `iResolution` would be 400,300 so as the pixel coordinates change
+that makes `uv` go from 0.0 to 1.0 across and up the texture. Working with
+*normalized* values often makes things easier and so the majority of shadertoy
+shaders start with something like this.
+
+The other undefined variable in the shader is `iTime`. This is the time since
+the page loaded in seconds.
+
+In shader jargon these global variables are called *uniform* variables. They are
+called *uniform* because they don't change, they stay uniform from one iteration
+of the shader to the next. It's important to note all of them are specific to
+shadertoy. They are not *official* GLSL variables. They are variables the makers of
+shadertoy made up.
+
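In plain GLSL those two uniforms would be declared like this (a sketch for illustration; shadertoy declares them for you behind the scenes):

```glsl
// The two shadertoy-specific uniforms used above, as plain GLSL declarations.
// The host page supplies the values; the shader can only read them.
uniform vec3  iResolution;  // canvas resolution in pixels
uniform float iTime;        // seconds since the page loaded
```
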
+The [Shadertoy docs define several more](https://www.shadertoy.com/howto). For
+now let's write something that handles the two being used in the shader above.
+
+The first thing to do is make a single plane that fills the canvas. If you
+haven't read it yet we did this in [the article on backgrounds](threejs-backgrounds.html)
+so let's grab that example but remove the cubes. It's pretty short so here's the
+entire thing
 
 ```js
 function main() {
@@ -111,7 +152,10 @@ function main() {
 main();
 ```
 
-As [explained in the backgrounds article](threejs-backgrounds.html) an `OrthographicCamera` with these parameters and a 2 unit plane will fill the canvas. For now all we'll get is a red canvas as our plane is using a red `MeshBasicMaterial`.
+As [explained in the backgrounds article](threejs-backgrounds.html) an
+`OrthographicCamera` with these parameters and a 2 unit plane will fill the
+canvas. For now all we'll get is a red canvas as our plane is using a red
+`MeshBasicMaterial`.
 
 {{{example url="../threejs-shadertoy-prep.html" }}}
 
@@ -144,7 +188,13 @@ void main() {
 `;
 ```
 
-Above we declared the 2 uniform variables we talked about. Then we inserted the shader GLSL code from shadertoy. Finally we called `mainImage` passing it `gl_FragColor` and `gl_FragCoord.xy`.  `gl_FragColor` is an official WebGL global variable the shader is responsible for setting to whatever color it wants the current pixel to be. `gl_FragCoord` is another official WebGL global variable that tells us the coordinate of the pixel we're currently chosing a color for.
+Above we declared the 2 uniform variables we talked about. Then we inserted the
+shader GLSL code from shadertoy. Finally we called `mainImage` passing it
+`gl_FragColor` and `gl_FragCoord.xy`.  `gl_FragColor` is an official WebGL
+global variable the shader is responsible for setting to whatever color it wants
+the current pixel to be. `gl_FragCoord` is another official WebGL global
+variable that tells us the coordinate of the pixel we're currently choosing a
+color for.
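
In other words, the wrapper part boils down to something like this sketch:

```glsl
// three.js calls main(); we just forward the official WebGL globals to the
// shadertoy-style entry point so the copied shader code can run unchanged.
void main() {
  mainImage(gl_FragColor, gl_FragCoord.xy);
}
```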
 
 We then need to setup three.js uniforms so we can supply values to the shader.
 
@@ -155,7 +205,8 @@ const uniforms = {
 };
 ```
 
-Each uniform in THREE.js has `value` parameter. That value has to match the type of the uniform.
+Each uniform in THREE.js has a `value` parameter. That value has to match the type
+of the uniform.
 
 Then we pass both the fragment shader and uniforms to a `ShaderMaterial`.
 
@@ -188,11 +239,14 @@ and before rendering we need to set the values of the uniforms
 }
 ```
 
-> Note: I have no idea why `iResolution` is a `vec3` and what's in the 3rd value [is not documented on shadertoy.com](https://www.shadertoy.com/howto). It's not used above so just setting it to 1 for now. ¯\\\_(ツ)\_/¯
+> Note: I have no idea why `iResolution` is a `vec3` and what's in the 3rd value
+> [is not documented on shadertoy.com](https://www.shadertoy.com/howto). It's
+> not used above so just setting it to 1 for now. ¯\\\_(ツ)\_/¯
 
 {{{example url="../threejs-shadertoy-basic.html" }}}
 
-This [matches what we see on Shadertoy for a new shader](https://www.shadertoy.com/new), at least as of January 2019 😉. What's the shader above doing? 
+This [matches what we see on Shadertoy for a new shader](https://www.shadertoy.com/new),
+at least as of January 2019 😉. What's the shader above doing? 
 
 * `uv` goes from 0 to 1. 
 * `cos(uv.xyx)` gives us 3 cosine values as a `vec3`. One for `uv.x`, another for `uv.y` and another for `uv.x` again.
@@ -201,20 +255,33 @@ This [matches what we see on Shadertoy for a new shader](https://www.shadertoy.c
 * `cos` goes from -1 to 1 so the `0.5 + 0.5 * cos(...)` converts from -1 <-> 1 to 0.0 <-> 1.0
 * the results are then used as the RGB color for the current pixel
 
-A minor change will make it easier to see the cosine waves. Right now `uv` only goes from 0 to 1. A cosine repeats at 2π so let's make it go from 0 to 40 by multiplying by 40.0. That should make it repeat about 6.3 times.
+A minor change will make it easier to see the cosine waves. Right now `uv` only
+goes from 0 to 1. A cosine repeats at 2π so let's make it go from 0 to 40 by
+multiplying by 40.0. That should make it repeat about 6.3 times.
 
 ```glsl
 -vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
 +vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx*40.0+vec3(0,2,4));
 ```
 
-Counting below I see about 6.3 repeats. We can see the blue between the red since it's offset by 4 via the `+vec3(0,2,4)`. Without that the blue and red would overlap perfectly making purple.
+Counting below I see about 6.3 repeats. We can see the blue between the red
+since it's offset by 4 via the `+vec3(0,2,4)`. Without that the blue and red
+would overlap perfectly making purple.
 
 {{{example url="../threejs-shadertoy-basic-x40.html" }}}
 
-Knowing how simple the inputs are and then seeing results like [a city canal](https://www.shadertoy.com/view/MdXGW2), [a forest](https://www.shadertoy.com/view/4ttSWf), [a snail](https://www.shadertoy.com/view/ld3Gz2), [a mushroom](https://www.shadertoy.com/view/4tBXR1) make the challenge all that much more impressive. Hopefully they also make it clear why it's not generally the right approach vs the more traditional ways of making scenes from triangles. The fact that so much math has to be put into computing the color of every pixel means those examples run very slow.
+Knowing how simple the inputs are and then seeing results like
+[a city canal](https://www.shadertoy.com/view/MdXGW2),
+[a forest](https://www.shadertoy.com/view/4ttSWf),
+[a snail](https://www.shadertoy.com/view/ld3Gz2),
+[a mushroom](https://www.shadertoy.com/view/4tBXR1)
+make the challenge all that much more impressive. Hopefully they also make it
+clear why it's not generally the right approach vs the more traditional ways of
+making scenes from triangles. The fact that so much math has to be put into
+computing the color of every pixel means those examples run very slow.
 
-Some shadertoy shaders take textures as inputs like [this one](https://www.shadertoy.com/view/MsXSzM). 
+Some shadertoy shaders take textures as inputs like
+[this one](https://www.shadertoy.com/view/MsXSzM). 
 
 ```glsl
 // By Daedelus: https://www.shadertoy.com/user/Daedelus
@@ -240,9 +307,12 @@ void mainImage( out vec4 fragColor, in vec2 fragCoord )
 }
 ```
 
-Passing a texture into a shader is similar to [passing one into a normal material](threejs-textures.html) but we need to set up the texture on the uniforms.
+Passing a texture into a shader is similar to 
+[passing one into a normal material](threejs-textures.html) but we need to set
+up the texture on the uniforms.
 
-First we'll add the uniform for the texture to the shader. They're referred to as `sampler2D` in GLSL.
+First we'll add the uniform for the texture to the shader. They're referred to
+as `sampler2D` in GLSL.
 
 ```js
 const fragmentShader = `
@@ -273,11 +343,24 @@ const uniforms = {
 
 {{{example url="../threejs-shadertoy-bleepy-blocks.html" }}}
 
-So far we've been using Shadertoy shaders as they are used on [Shadertoy.com](https://shadertoy.com), namely drawing to cover the canvas. There's no reason we need to limit it to just that use case though. The important part to remember is the functions people write on shadertoy generally just take a `fragCoord` input and a `iResolution`. `fragCoord` does not have to come from pixel coordinates, we could use something else like texture coordinates instead and could then use them kind of like other textures. This technique of using a function to generate textures is often called a [*procedural texture*](https://www.google.com/search?q=procedural+texture).
-
-Let's change the shader above to do this. The simplest thing to do might be to take the texture coordinates that three.js normally supplies, mutliply them by `iResolution` and pass that in for `fragCoords`. 
-
-To do that we add in a *varying*. A varying is a value passed from the vertex shader to the fragment shader that gets interpolated (or varied) between vertices. To use it in our fragment shader we declare it. Three.js refers to its texture coordinates as `uv` with the `v` in front meaning *varying*.
+So far we've been using Shadertoy shaders as they are used on
+[Shadertoy.com](https://shadertoy.com), namely drawing to cover the canvas.
+There's no reason we need to limit it to just that use case though. The
+important part to remember is the functions people write on shadertoy generally
+just take a `fragCoord` input and an `iResolution`. `fragCoord` does not have to
+come from pixel coordinates, we could use something else like texture
+coordinates instead and could then use them kind of like other textures. This
+technique of using a function to generate textures is often called a
+[*procedural texture*](https://www.google.com/search?q=procedural+texture).
+
+Let's change the shader above to do this. The simplest thing to do might be to
+take the texture coordinates that three.js normally supplies, multiply them by
+`iResolution` and pass that in for `fragCoord`. 
+
+To do that we add in a *varying*. A varying is a value passed from the vertex
+shader to the fragment shader that gets interpolated (or varied) between
+vertices. To use it in our fragment shader we declare it. Three.js refers to its
+texture coordinates as `uv` with the `v` in front meaning *varying*.
 
 ```glsl
 ...
@@ -290,7 +373,9 @@ void main() {
 }
 ```
 
-Then we need to also provide our own vertex shader. Here is a fairly common minimal three.js vertex shader. Three.js declares and will provide values for `uv`, `projectionMatrix`, `modelViewMatrix`, and `position`.
+Then we need to also provide our own vertex shader. Here is a fairly common
+minimal three.js vertex shader. Three.js declares and will provide values for
+`uv`, `projectionMatrix`, `modelViewMatrix`, and `position`.
 
 ```js
 const vertexShader = `
@@ -331,8 +416,13 @@ and we no longer need to set it at render time
 uniforms.iTime.value = time;
 ```
 
-Otherwise I copied back in the original camera and code that sets up 3 rotating cubes from [the article on responsiveness](threejs-responsive.html). The result:
+Otherwise I copied back in the original camera and code that sets up 3 rotating
+cubes from [the article on responsiveness](threejs-responsive.html). The result:
 
 {{{example url="../threejs-shadertoy-as-texture.html" }}}
 
-I hope this at least gets you started on how to use a shadertoy shader with three.js. Again, it's important to remember that most shadertoy shaders are an interesting challenge (draw everything with a single function) rather than the recommended way to actually display things in a performant way. Still, they are amazing, impressive, beautiful, and you can learn a ton by seeing how they work.
+I hope this at least gets you started on how to use a shadertoy shader with
+three.js. Again, it's important to remember that most shadertoy shaders are an
+interesting challenge (draw everything with a single function) rather than the
+recommended way to actually display things in a performant way. Still, they are
+amazing, impressive, beautiful, and you can learn a ton by seeing how they work.

+ 4 - 4
threejs/lessons/threejs-shadows.md

@@ -166,7 +166,7 @@ for (let i = 0; i < numSpheres; ++i) {
 }
 ```
 
-We setup 2 lights. One is a `HemisphereLight` with the itensity set to 2 to really
+We setup 2 lights. One is a `HemisphereLight` with the intensity set to 2 to really
 brighten things up.
 
 ```js
@@ -179,7 +179,7 @@ brighten things up.
 }
 ```
 
-The other is a `DirectionalLight` so the spheres get some defintion
+The other is a `DirectionalLight` so the spheres get some definition
 
 ```js
 {
@@ -244,7 +244,7 @@ appears to also use this kind of shadow for the main character.
 So, moving on to shadow maps, there are 3 lights which can cast shadows. The `DirectionalLight`,
 the `PointLight`, and the `SpotLight`.
 
-Let's start with the `DirectionaLight` with the helper example from [the lights article](threejs-lights.html).
+Let's start with the `DirectionalLight` with the helper example from [the lights article](threejs-lights.html).
 
 The first thing we need to do is turn on shadows in the renderer.
 
@@ -448,7 +448,7 @@ also blur the result
 
 
 And finally there's shadows with a `PointLight`. Since a `PointLight`
-shines in all directions the only relevent settings are `near` and `far`.
+shines in all directions the only relevant settings are `near` and `far`.
 Otherwise the `PointLight` shadow is effectively 6 `SpotLight` shadows
 each one pointing to the face of a cube around the light. This means
 `PointLight` shadows are much slower since the entire scene must be

+ 2 - 2
threejs/lessons/threejs-textures.md

@@ -288,7 +288,7 @@ In order for three.js to use the texture it has to hand it off to the GPU and th
 GPU *in general* requires the texture data to be uncompressed.
 
 The moral of the story is make your textures small in dimensions not just small
-in file size. Small in file size = fast to download. Small in dimesions = takes
+in file size. Small in file size = fast to download. Small in dimensions = takes
 less memory. How small should you make them?
 As small as you can and still look as good as you need them to look.
 
@@ -347,7 +347,7 @@ original size.
 For setting the filter when the texture is drawn larger than its original size
 you set [`texture.magFilter`](Texture.magFilter) property to either `THREE.NearestFilter` or
  `THREE.LinearFilter`.  `NearestFilter` means
-just pick the closet single pixel from the orignal texture. With a low
+just pick the closest single pixel from the original texture. With a low
 resolution texture this gives you a very pixelated look like Minecraft.
 
 `LinearFilter` means choose the 4 pixels from the texture that are closest

+ 1 - 1
threejs/lessons/threejs-tips.md

@@ -372,7 +372,7 @@ This is the solution used on [the front page of this site](/).
 In your webpage just insert an iframe, for example
 
 ```html
-<iframe id="background" src="threejs-repsonsive.html">
+<iframe id="background" src="threejs-responsive.html">
 <div>
   Your content goes here.
 </div>

+ 2 - 2
threejs/lessons/threejs-voxel-geometry.md

@@ -69,7 +69,7 @@ is 65536 boxes!
 Using [the technique of merging the geometry](threejs-rendering-on-demand.html)
 will fix the issue for this example but what if instead of just making
 a single layer we filled in everything below the ground with voxel. 
-In otherwords change the loop filling in the voxels to this
+In other words change the loop filling in the voxels to this
 
 ```js
 for (let y = 0; y < cellSize; ++y) {
@@ -376,7 +376,7 @@ class VoxelWorld {
     const cellX = Math.floor(x / cellSize);
     const cellY = Math.floor(y / cellSize);
     const cellZ = Math.floor(z / cellSize);
-    if (cellX !== 0 || cellY !== 0 || celllZ !== 0) {
+    if (cellX !== 0 || cellY !== 0 || cellZ !== 0) {
       return null
     }
     return this.cell;