mickceb 4 years ago
parent
commit
445c0c3f70

+ 451 - 0
threejs/lessons/fr/threejs-custom-buffergeometry.md

@@ -0,0 +1,451 @@
+Title: Three.js Custom BufferGeometry
+Description: How to make your own BufferGeometry.
+TOC: Custom BufferGeometry
+
+`BufferGeometry` is three.js's way of representing all geometry. A `BufferGeometry`
+is essentially a collection of *named* `BufferAttribute`s.
+Each `BufferAttribute` represents an array of one type of data: positions,
+normals, colors, uv, etc... Together, the named `BufferAttribute`s represent
+*parallel arrays* of all the data for each vertex.
+
+<div class="threejs_center"><img src="resources/threejs-attributes.svg" style="width: 700px"></div>
+
+Above you can see we have 4 attributes: `position`, `normal`, `color`, `uv`.
+They represent *parallel arrays* which means that the Nth set of data in each
+attribute belongs to the same vertex. The vertex at index = 4 is highlighted
+to show that the parallel data across all attributes defines one vertex.
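+
+In code terms, *parallel* just means that index N in every attribute refers to
+vertex N. As a rough sketch (not part of the cube example below; the attribute
+names are the ones three.js expects and the index is arbitrary), reading vertex 4
+back out of a geometry might look like this:
+
+```js
+// each BufferAttribute has getX/getY/getZ accessors that index by vertex
+const position = geometry.getAttribute('position');  // 3 components per vertex
+const uv = geometry.getAttribute('uv');              // 2 components per vertex
+const n = 4;
+const x = position.getX(n);
+const y = position.getY(n);
+const z = position.getZ(n);
+const u = uv.getX(n);
+const v = uv.getY(n);
+```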
+
+This brings up a point, here's a diagram of a cube with one corner highlighted.
+
+<div class="threejs_center"><img src="resources/cube-faces-vertex.svg" style="width: 500px"></div>
+
+Thinking about it, that single corner needs a different normal for each face of the
+cube. A normal is info about which direction something faces. In the diagram
+the normals are represented by the arrows around the corner vertex, showing that each
+face that shares that vertex position needs a normal that points in a different direction.
+
+That corner needs different UVs for each face as well. UVs are texture coordinates
+that specify which part of a texture being drawn on a triangle corresponds to that
+vertex position. You can see the green face needs that vertex to have a UV that corresponds
+to the top right corner of the F texture, the blue face needs a UV that corresponds to the
+top left corner of the F texture, and the red face needs a UV that corresponds to the bottom
+left corner of the F texture.
+
+A single *vertex* is the combination of all of its parts. If a vertex needs any
+part to be different then it must be a different vertex.
+
+As a simple example let's make a cube using `BufferGeometry`. A cube is interesting
+because it appears to share vertices at the corners but really
+does not. For our example we'll list out all the vertices with all their data
+and then convert that data into parallel arrays and finally use those to make
+`BufferAttribute`s and add them to a `BufferGeometry`.
+
+We start with a list of all the data needed for the cube. Remember again
+that if a vertex has any unique parts it has to be a separate vertex. As such,
+making a cube requires 36 vertices: 2 triangles per face, 3 vertices per triangle,
+6 faces = 36 vertices.
+
+```js
+const vertices = [
+  // front
+  { pos: [-1, -1,  1], norm: [ 0,  0,  1], uv: [0, 0], },
+  { pos: [ 1, -1,  1], norm: [ 0,  0,  1], uv: [1, 0], },
+  { pos: [-1,  1,  1], norm: [ 0,  0,  1], uv: [0, 1], },
+
+  { pos: [-1,  1,  1], norm: [ 0,  0,  1], uv: [0, 1], },
+  { pos: [ 1, -1,  1], norm: [ 0,  0,  1], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 0,  0,  1], uv: [1, 1], },
+  // right
+  { pos: [ 1, -1,  1], norm: [ 1,  0,  0], uv: [0, 0], },
+  { pos: [ 1, -1, -1], norm: [ 1,  0,  0], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 1,  0,  0], uv: [0, 1], },
+
+  { pos: [ 1,  1,  1], norm: [ 1,  0,  0], uv: [0, 1], },
+  { pos: [ 1, -1, -1], norm: [ 1,  0,  0], uv: [1, 0], },
+  { pos: [ 1,  1, -1], norm: [ 1,  0,  0], uv: [1, 1], },
+  // back
+  { pos: [ 1, -1, -1], norm: [ 0,  0, -1], uv: [0, 0], },
+  { pos: [-1, -1, -1], norm: [ 0,  0, -1], uv: [1, 0], },
+  { pos: [ 1,  1, -1], norm: [ 0,  0, -1], uv: [0, 1], },
+
+  { pos: [ 1,  1, -1], norm: [ 0,  0, -1], uv: [0, 1], },
+  { pos: [-1, -1, -1], norm: [ 0,  0, -1], uv: [1, 0], },
+  { pos: [-1,  1, -1], norm: [ 0,  0, -1], uv: [1, 1], },
+  // left
+  { pos: [-1, -1, -1], norm: [-1,  0,  0], uv: [0, 0], },
+  { pos: [-1, -1,  1], norm: [-1,  0,  0], uv: [1, 0], },
+  { pos: [-1,  1, -1], norm: [-1,  0,  0], uv: [0, 1], },
+
+  { pos: [-1,  1, -1], norm: [-1,  0,  0], uv: [0, 1], },
+  { pos: [-1, -1,  1], norm: [-1,  0,  0], uv: [1, 0], },
+  { pos: [-1,  1,  1], norm: [-1,  0,  0], uv: [1, 1], },
+  // top
+  { pos: [ 1,  1, -1], norm: [ 0,  1,  0], uv: [0, 0], },
+  { pos: [-1,  1, -1], norm: [ 0,  1,  0], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 0,  1,  0], uv: [0, 1], },
+
+  { pos: [ 1,  1,  1], norm: [ 0,  1,  0], uv: [0, 1], },
+  { pos: [-1,  1, -1], norm: [ 0,  1,  0], uv: [1, 0], },
+  { pos: [-1,  1,  1], norm: [ 0,  1,  0], uv: [1, 1], },
+  // bottom
+  { pos: [ 1, -1,  1], norm: [ 0, -1,  0], uv: [0, 0], },
+  { pos: [-1, -1,  1], norm: [ 0, -1,  0], uv: [1, 0], },
+  { pos: [ 1, -1, -1], norm: [ 0, -1,  0], uv: [0, 1], },
+
+  { pos: [ 1, -1, -1], norm: [ 0, -1,  0], uv: [0, 1], },
+  { pos: [-1, -1,  1], norm: [ 0, -1,  0], uv: [1, 0], },
+  { pos: [-1, -1, -1], norm: [ 0, -1,  0], uv: [1, 1], },
+];
+```
+
+We can then translate all of that into 3 parallel arrays
+
+```js
+const positions = [];
+const normals = [];
+const uvs = [];
+for (const vertex of vertices) {
+  positions.push(...vertex.pos);
+  normals.push(...vertex.norm);
+  uvs.push(...vertex.uv);
+}
+```
+
+Finally we can create a `BufferGeometry` and then a `BufferAttribute` for each array
+and add it to the `BufferGeometry`.
+
+```js
+  const geometry = new THREE.BufferGeometry();
+  const positionNumComponents = 3;
+  const normalNumComponents = 3;
+  const uvNumComponents = 2;
+  geometry.setAttribute(
+      'position',
+      new THREE.BufferAttribute(new Float32Array(positions), positionNumComponents));
+  geometry.setAttribute(
+      'normal',
+      new THREE.BufferAttribute(new Float32Array(normals), normalNumComponents));
+  geometry.setAttribute(
+      'uv',
+      new THREE.BufferAttribute(new Float32Array(uvs), uvNumComponents));
+```
+
+Note that the names are significant. You must give your attributes the names
+three.js expects (unless you are creating a custom shader).
+In this case `position`, `normal`, and `uv`. If you want vertex colors then
+name your attribute `color`.
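+
+As a sketch of what that might look like (this is not part of the cube example;
+the color values are just placeholders), you would add one RGB triplet per
+vertex and use a material that has vertex colors enabled:
+
+```js
+// hypothetical: 3 values (r, g, b) per vertex, in the 0 to 1 range
+const colors = new Float32Array([
+  1, 0, 0,   // vertex 0: red
+  0, 1, 0,   // vertex 1: green
+  0, 0, 1,   // vertex 2: blue
+  // ...one triplet for every vertex in the geometry
+]);
+geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
+
+// the material has to opt in to using the 'color' attribute
+const material = new THREE.MeshPhongMaterial({vertexColors: true});
+```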
+
+Above we created 3 JavaScript native arrays, `positions`, `normals` and `uvs`.
+We then converted those into
+[TypedArrays](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray)
+of type `Float32Array`. A `BufferAttribute` requires a TypedArray, not a native
+array. A `BufferAttribute` also requires you to tell it how many components there
+are per vertex. For the positions and normals we have 3 components per vertex,
+x, y, and z. For the UVs we have 2, u and v.
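+
+As an aside, three.js also provides typed helpers such as `Float32BufferAttribute`
+that accept a plain JavaScript array and do the conversion for you. A minimal
+sketch of the equivalent `uv` setup:
+
+```js
+// Float32BufferAttribute copies the plain array into a Float32Array internally
+geometry.setAttribute('uv', new THREE.Float32BufferAttribute(uvs, uvNumComponents));
+```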
+
+{{{example url="../threejs-custom-buffergeometry-cube.html"}}}
+
+That's a lot of data. A small thing we can do is use indices to reference
+the vertices. Looking back at our cube data, each face is made from 2 triangles
+with 3 vertices each, 6 vertices total, but 2 of those vertices are exactly the same:
+the same position, the same normal, and the same uv.
+So, we can remove the matching vertices and then
+reference them by index. First we remove the matching vertices.
+
+```js
+const vertices = [
+  // front
+  { pos: [-1, -1,  1], norm: [ 0,  0,  1], uv: [0, 0], }, // 0
+  { pos: [ 1, -1,  1], norm: [ 0,  0,  1], uv: [1, 0], }, // 1
+  { pos: [-1,  1,  1], norm: [ 0,  0,  1], uv: [0, 1], }, // 2
+-
+-  { pos: [-1,  1,  1], norm: [ 0,  0,  1], uv: [0, 1], },
+-  { pos: [ 1, -1,  1], norm: [ 0,  0,  1], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 0,  0,  1], uv: [1, 1], }, // 3
+  // right
+  { pos: [ 1, -1,  1], norm: [ 1,  0,  0], uv: [0, 0], }, // 4
+  { pos: [ 1, -1, -1], norm: [ 1,  0,  0], uv: [1, 0], }, // 5
+-
+-  { pos: [ 1,  1,  1], norm: [ 1,  0,  0], uv: [0, 1], },
+-  { pos: [ 1, -1, -1], norm: [ 1,  0,  0], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 1,  0,  0], uv: [0, 1], }, // 6
+  { pos: [ 1,  1, -1], norm: [ 1,  0,  0], uv: [1, 1], }, // 7
+  // back
+  { pos: [ 1, -1, -1], norm: [ 0,  0, -1], uv: [0, 0], }, // 8
+  { pos: [-1, -1, -1], norm: [ 0,  0, -1], uv: [1, 0], }, // 9
+-
+-  { pos: [ 1,  1, -1], norm: [ 0,  0, -1], uv: [0, 1], },
+-  { pos: [-1, -1, -1], norm: [ 0,  0, -1], uv: [1, 0], },
+  { pos: [ 1,  1, -1], norm: [ 0,  0, -1], uv: [0, 1], }, // 10
+  { pos: [-1,  1, -1], norm: [ 0,  0, -1], uv: [1, 1], }, // 11
+  // left
+  { pos: [-1, -1, -1], norm: [-1,  0,  0], uv: [0, 0], }, // 12
+  { pos: [-1, -1,  1], norm: [-1,  0,  0], uv: [1, 0], }, // 13
+-
+-  { pos: [-1,  1, -1], norm: [-1,  0,  0], uv: [0, 1], },
+-  { pos: [-1, -1,  1], norm: [-1,  0,  0], uv: [1, 0], },
+  { pos: [-1,  1, -1], norm: [-1,  0,  0], uv: [0, 1], }, // 14
+  { pos: [-1,  1,  1], norm: [-1,  0,  0], uv: [1, 1], }, // 15
+  // top
+  { pos: [ 1,  1, -1], norm: [ 0,  1,  0], uv: [0, 0], }, // 16
+  { pos: [-1,  1, -1], norm: [ 0,  1,  0], uv: [1, 0], }, // 17
+-
+-  { pos: [ 1,  1,  1], norm: [ 0,  1,  0], uv: [0, 1], },
+-  { pos: [-1,  1, -1], norm: [ 0,  1,  0], uv: [1, 0], },
+  { pos: [ 1,  1,  1], norm: [ 0,  1,  0], uv: [0, 1], }, // 18
+  { pos: [-1,  1,  1], norm: [ 0,  1,  0], uv: [1, 1], }, // 19
+  // bottom
+  { pos: [ 1, -1,  1], norm: [ 0, -1,  0], uv: [0, 0], }, // 20
+  { pos: [-1, -1,  1], norm: [ 0, -1,  0], uv: [1, 0], }, // 21
+-
+-  { pos: [ 1, -1, -1], norm: [ 0, -1,  0], uv: [0, 1], },
+-  { pos: [-1, -1,  1], norm: [ 0, -1,  0], uv: [1, 0], },
+  { pos: [ 1, -1, -1], norm: [ 0, -1,  0], uv: [0, 1], }, // 22
+  { pos: [-1, -1, -1], norm: [ 0, -1,  0], uv: [1, 1], }, // 23
+];
+```
+
+So now we have 24 unique vertices. Then we specify 36 indices
+for the 36 vertices we need drawn to make 12 triangles by calling `BufferGeometry.setIndex` with an array of indices.
+
+```js
+geometry.setAttribute(
+    'position',
+    new THREE.BufferAttribute(positions, positionNumComponents));
+geometry.setAttribute(
+    'normal',
+    new THREE.BufferAttribute(normals, normalNumComponents));
+geometry.setAttribute(
+    'uv',
+    new THREE.BufferAttribute(uvs, uvNumComponents));
+
++geometry.setIndex([
++   0,  1,  2,   2,  1,  3,  // front
++   4,  5,  6,   6,  5,  7,  // right
++   8,  9, 10,  10,  9, 11,  // back
++  12, 13, 14,  14, 13, 15,  // left
++  16, 17, 18,  18, 17, 19,  // top
++  20, 21, 22,  22, 21, 23,  // bottom
++]);
+```
+
+{{{example url="../threejs-custom-buffergeometry-cube-indexed.html"}}}
+
+`BufferGeometry` has a [`computeVertexNormals`](BufferGeometry.computeVertexNormals) method for computing normals if you
+are not supplying them. Unfortunately,
+since positions cannot be shared if any other part of a vertex is different,
+calling `computeVertexNormals` will generate seams if your
+geometry is supposed to connect to itself, like a sphere or a cylinder.
+
+<div class="spread">
+  <div>
+    <div data-diagram="bufferGeometryCylinder"></div>
+  </div>
+</div>
+
+For the cylinder above the normals were created using `computeVertexNormals`.
+If you look closely there is a seam on the cylinder. This is because there
+is no way to share the vertices at the start and end of the cylinder since they
+require different UVs, so the function that computes the normals has no idea those are
+the same vertices and can't smooth over them. Just a small thing to be aware of.
+The solution is to supply your own normals.
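+
+For geometry that doesn't need to wrap around on itself, letting three.js compute
+the normals is a single call. A minimal sketch (assuming `geometry` already has a
+`position` attribute and, if indexed, an index set):
+
+```js
+// computes a 'normal' attribute from the positions (and index, if any).
+// on shapes that are supposed to close on themselves this can leave a seam.
+geometry.computeVertexNormals();
+```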
+
+We can also use [TypedArrays](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray) from the start instead of native JavaScript arrays.
+The disadvantage to TypedArrays is you must specify their size up front. Of
+course that's not that large of a burden but with native arrays we can just
+`push` values onto them and look at what size they end up by checking their
+`length` at the end. With TypedArrays there is no push function so we need
+to do our own bookkeeping when adding values to them.
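+
+If you did want push-like behavior you could wrap the bookkeeping in a tiny
+helper. This is just a hypothetical sketch, not something the examples below use:
+
+```js
+// a Float32Array with a push-style API; you still need to know the final size up front
+class Float32ArrayBuilder {
+  constructor(size) {
+    this.array = new Float32Array(size);
+    this.ndx = 0;
+  }
+  push(values) {
+    this.array.set(values, this.ndx);
+    this.ndx += values.length;
+  }
+}
+```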
+
+In this example knowing the length up front is pretty easy since we're using
+a big block of static data to start.
+
+```js
+-const positions = [];
+-const normals = [];
+-const uvs = [];
++const numVertices = vertices.length;
++const positionNumComponents = 3;
++const normalNumComponents = 3;
++const uvNumComponents = 2;
++const positions = new Float32Array(numVertices * positionNumComponents);
++const normals = new Float32Array(numVertices * normalNumComponents);
++const uvs = new Float32Array(numVertices * uvNumComponents);
++let posNdx = 0;
++let nrmNdx = 0;
++let uvNdx = 0;
+for (const vertex of vertices) {
+-  positions.push(...vertex.pos);
+-  normals.push(...vertex.norm);
+-  uvs.push(...vertex.uv);
++  positions.set(vertex.pos, posNdx);
++  normals.set(vertex.norm, nrmNdx);
++  uvs.set(vertex.uv, uvNdx);
++  posNdx += positionNumComponents;
++  nrmNdx += normalNumComponents;
++  uvNdx += uvNumComponents;
+}
+
+geometry.setAttribute(
+    'position',
+-    new THREE.BufferAttribute(new Float32Array(positions), positionNumComponents));
++    new THREE.BufferAttribute(positions, positionNumComponents));
+geometry.setAttribute(
+    'normal',
+-    new THREE.BufferAttribute(new Float32Array(normals), normalNumComponents));
++    new THREE.BufferAttribute(normals, normalNumComponents));
+geometry.setAttribute(
+    'uv',
+-    new THREE.BufferAttribute(new Float32Array(uvs), uvNumComponents));
++    new THREE.BufferAttribute(uvs, uvNumComponents));
+
+geometry.setIndex([
+   0,  1,  2,   2,  1,  3,  // front
+   4,  5,  6,   6,  5,  7,  // right
+   8,  9, 10,  10,  9, 11,  // back
+  12, 13, 14,  14, 13, 15,  // left
+  16, 17, 18,  18, 17, 19,  // top
+  20, 21, 22,  22, 21, 23,  // bottom
+]);
+```
+
+{{{example url="../threejs-custom-buffergeometry-cube-typedarrays.html"}}}
+
+A good reason to use TypedArrays is if you want to dynamically update any
+part of the vertices.
+
+I couldn't think of a really good example of dynamically updating the vertices
+so I decided to make a sphere and move each quad in and out from the center. Hopefully
+it's a useful example.
+
+Here's the code to generate positions and indices for a sphere. The code
+is sharing vertices within a quad but it's not sharing vertices between
+quads because we want to be able to move each quad separately.
+
+Because I'm lazy I used a small hierarchy of 3 `Object3D` objects to compute
+sphere points. How this works is explained in [the article on optimizing lots of objects](threejs-optimize-lots-of-objects.html).
+
+```js
+function makeSpherePositions(segmentsAround, segmentsDown) {
+  const numVertices = segmentsAround * segmentsDown * 6;
+  const numComponents = 3;
+  const positions = new Float32Array(numVertices * numComponents);
+  const indices = [];
+
+  const longHelper = new THREE.Object3D();
+  const latHelper = new THREE.Object3D();
+  const pointHelper = new THREE.Object3D();
+  longHelper.add(latHelper);
+  latHelper.add(pointHelper);
+  pointHelper.position.z = 1;
+  const temp = new THREE.Vector3();
+
+  function getPoint(lat, long) {
+    latHelper.rotation.x = lat;
+    longHelper.rotation.y = long;
+    longHelper.updateMatrixWorld(true);
+    return pointHelper.getWorldPosition(temp).toArray();
+  }
+
+  let posNdx = 0;
+  let ndx = 0;
+  for (let down = 0; down < segmentsDown; ++down) {
+    const v0 = down / segmentsDown;
+    const v1 = (down + 1) / segmentsDown;
+    const lat0 = (v0 - 0.5) * Math.PI;
+    const lat1 = (v1 - 0.5) * Math.PI;
+
+    for (let across = 0; across < segmentsAround; ++across) {
+      const u0 = across / segmentsAround;
+      const u1 = (across + 1) / segmentsAround;
+      const long0 = u0 * Math.PI * 2;
+      const long1 = u1 * Math.PI * 2;
+
+      positions.set(getPoint(lat0, long0), posNdx);  posNdx += numComponents;
+      positions.set(getPoint(lat1, long0), posNdx);  posNdx += numComponents;
+      positions.set(getPoint(lat0, long1), posNdx);  posNdx += numComponents;
+      positions.set(getPoint(lat1, long1), posNdx);  posNdx += numComponents;
+
+      indices.push(
+        ndx, ndx + 1, ndx + 2,
+        ndx + 2, ndx + 1, ndx + 3,
+      );
+      ndx += 4;
+    }
+  }
+  return {positions, indices};
+}
+```
+
+We can then call it like this
+
+```js
+const segmentsAround = 24;
+const segmentsDown = 16;
+const {positions, indices} = makeSpherePositions(segmentsAround, segmentsDown);
+```
+
+The positions returned are unit sphere positions, so they are exactly the
+values we need for the normals and we can just duplicate them.
+
+```js
+const normals = positions.slice();
+```
+
+And then we set up the attributes like before
+
+```js
+const geometry = new THREE.BufferGeometry();
+const positionNumComponents = 3;
+const normalNumComponents = 3;
+
++const positionAttribute = new THREE.BufferAttribute(positions, positionNumComponents);
++positionAttribute.setUsage(THREE.DynamicDrawUsage);
+geometry.setAttribute(
+    'position',
++    positionAttribute);
+geometry.setAttribute(
+    'normal',
+    new THREE.BufferAttribute(normals, normalNumComponents));
+geometry.setIndex(indices);
+```
+
+I've highlighted a few differences. We save a reference to the position attribute.
+We also mark it as dynamic. This is a hint to THREE.js that we're going to be changing
+the contents of the attribute often.
+
+In our render loop we update the positions based on their normals every frame.
+
+```js
+const temp = new THREE.Vector3();
+
+...
+
+for (let i = 0; i < positions.length; i += 3) {
+  const quad = (i / 12 | 0);
+  const ringId = quad / segmentsAround | 0;
+  const ringQuadId = quad % segmentsAround;
+  const ringU = ringQuadId / segmentsAround;
+  const angle = ringU * Math.PI * 2;
+  temp.fromArray(normals, i);
+  temp.multiplyScalar(THREE.MathUtils.lerp(1, 1.4, Math.sin(time + ringId + angle) * .5 + .5));
+  temp.toArray(positions, i);
+}
+positionAttribute.needsUpdate = true;
+```
+
+And we set `positionAttribute.needsUpdate` to tell THREE.js to use our changes.
+
+{{{example url="../threejs-custom-buffergeometry-dynamic.html"}}}
+
+I hope these were useful examples of how to use `BufferGeometry` directly to
+make your own geometry and how to dynamically update the contents of a
+`BufferAttribute`.
+
+<!-- needed in English only to prevent warning from outdated translations -->
+<a href="resources/threejs-geometry.svg"></a>
+<a href="threejs-custom-geometry.html"></a>
+
+<canvas id="c"></canvas>
+<script type="module" src="resources/threejs-custom-buffergeometry.js"></script>
+

+ 275 - 0
threejs/lessons/fr/threejs-fog.md

@@ -0,0 +1,275 @@
+Title: Fog in Three.js
+Description: Fog in Three.js
+TOC: Fog
+
+This article is part of a series of articles about three.js. The
+first article is [three.js fundamentals](threejs-fundamentals.html). If
+you haven't read that yet and you're new to three.js you might want to
+consider starting there. If you haven't read about cameras you might
+want to start with [this article](threejs-cameras.html).
+
+Fog in a 3D engine is generally a way of fading to a specific color
+based on the distance from the camera. In three.js you add fog by
+creating a `Fog` or `FogExp2` object and setting it on the scene's
+[`fog`](Scene.fog) property.
+
+`Fog` lets you choose `near` and `far` settings which are distances
+from the camera. Anything closer than `near` is unaffected by fog.
+Anything further than `far` is completely the fog color. Parts between
+`near` and `far` fade from their material color to the fog color.
+
+There's also `FogExp2`, which grows exponentially with distance from the camera.
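+
+As a rough sketch of the difference (the real math lives in three.js's shaders,
+so treat these as approximations of the fog factor, where 0 is un-fogged and 1
+is fully the fog color):
+
+```js
+// linear fog (THREE.Fog): 0 at `near`, 1 at `far`
+const linearFogFactor = (distance, near, far) =>
+    THREE.MathUtils.clamp((distance - near) / (far - near), 0, 1);
+
+// exponential squared fog (THREE.FogExp2): `density` controls how quickly it ramps up
+const exp2FogFactor = (distance, density) =>
+    1 - Math.exp(-(density * distance) ** 2);
+```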
+
+To use either type of fog you create one and assign it to the scene as in
+
+```js
+const scene = new THREE.Scene();
+{
+  const color = 0xFFFFFF;  // white
+  const near = 10;
+  const far = 100;
+  scene.fog = new THREE.Fog(color, near, far);
+}
+```
+
+or for `FogExp2` it would be
+
+```js
+const scene = new THREE.Scene();
+{
+  const color = 0xFFFFFF;
+  const density = 0.1;
+  scene.fog = new THREE.FogExp2(color, density);
+}
+```
+
+`FogExp2` is closer to reality but `Fog` is used
+more commonly since it lets you choose a place to apply
+the fog so you can decide to show a clear scene
+up to a certain distance and then fade out to some color
+past that distance.
+
+<div class="spread">
+  <div>
+    <div data-diagram="fog" style="height: 300px;"></div>
+    <div class="code">THREE.Fog</div>
+  </div>
+  <div>
+    <div data-diagram="fogExp2" style="height: 300px;"></div>
+    <div class="code">THREE.FogExp2</div>
+  </div>
+</div>
+
+It's important to note that the fog is applied to *things that are rendered*.
+It is part of the calculation of each pixel's color for each object.
+What that means is if you want your scene to fade to a certain color you
+need to set the fog **and** the background color to the same color.
+The background color is set using the
+[`scene.background`](Scene.background)
+property. To pick a background color you attach a `THREE.Color` to it. For example
+
+```js
+scene.background = new THREE.Color('#F00');  // red
+```
+
+<div class="spread">
+  <div>
+    <div data-diagram="fogBlueBackgroundRed" style="height: 300px;" class="border"></div>
+    <div class="code">fog blue, background red</div>
+  </div>
+  <div>
+    <div data-diagram="fogBlueBackgroundBlue" style="height: 300px;" class="border"></div>
+    <div class="code">fog blue, background blue</div>
+  </div>
+</div>
+
+Here is one of our previous examples with fog added. The only addition
+is right after setting up the scene we add the fog and set the scene's
+background color
+
+```js
+const scene = new THREE.Scene();
+
++{
++  const near = 1;
++  const far = 2;
++  const color = 'lightblue';
++  scene.fog = new THREE.Fog(color, near, far);
++  scene.background = new THREE.Color(color);
++}
+```
+
+In the example below the camera's `near` is 0.1 and its `far` is 5.
+The camera is at `z = 2`. The cubes are 1 unit large and at Z = 0.
+This means with a fog setting of `near = 1` and `far = 2` the cubes
+will fade out right around their center.
+
+{{{example url="../threejs-fog.html" }}}
+
+Let's add an interface so we can adjust the fog. Again we'll use
+[dat.GUI](https://github.com/dataarts/dat.gui). dat.GUI takes
+an object and a property and automagically makes an interface
+for that type of property. We could simply let it manipulate
+the fog's `near` and `far` properties, but it's invalid to have
+`near` be greater than `far`, so let's make a helper that lets dat.GUI
+manipulate a `near` and `far` property while making sure `near`
+is less than or equal to `far` and `far` is greater than or equal to `near`.
+
+```js
+// We use this class to pass to dat.gui
+// so when it manipulates near or far
+// near is never > far and far is never < near
+class FogGUIHelper {
+  constructor(fog) {
+    this.fog = fog;
+  }
+  get near() {
+    return this.fog.near;
+  }
+  set near(v) {
+    this.fog.near = v;
+    this.fog.far = Math.max(this.fog.far, v);
+  }
+  get far() {
+    return this.fog.far;
+  }
+  set far(v) {
+    this.fog.far = v;
+    this.fog.near = Math.min(this.fog.near, v);
+  }
+}
+```
+
+We can then add it like this
+
+```js
+{
+  const near = 1;
+  const far = 2;
+  const color = 'lightblue';
+  scene.fog = new THREE.Fog(color, near, far);
+  scene.background = new THREE.Color(color);
++
++  const fogGUIHelper = new FogGUIHelper(scene.fog);
++  gui.add(fogGUIHelper, 'near', near, far).listen();
++  gui.add(fogGUIHelper, 'far', near, far).listen();
+}
+```
+
+The `near` and `far` parameters set the minimum and maximum values
+for adjusting the fog. They are set when we set up the camera.
+
+The `.listen()` at the end of the last 2 lines tells dat.GUI to *listen*
+for changes. That way when we change `near` because of an edit to `far`
+or we change `far` in response to an edit to `near` dat.GUI will update
+the other property's UI for us.
+
+It might also be nice to be able to change the fog color but, as
+mentioned above, we need to keep both the fog color and the background
+color in sync. So, let's add another *virtual* property to our helper
+that will set both colors when dat.GUI manipulates it.
+
+dat.GUI can manipulate colors in 4 ways: as a CSS 6-digit hex string (eg: `#112233`), as a hue, saturation, value object (eg: `{h: 60, s: 1, v: 0}`),
+as an RGB array (eg: `[255, 128, 64]`), or as an RGBA array (eg: `[127, 200, 75, 0.3]`).
+
+It's easiest for our purpose to use the hex string version since that way
+dat.GUI is only manipulating a single value. Fortunately `THREE.Color`
+has a [`getHexString`](Color.getHexString) method
+we can use to easily get such a string, we just have to prepend a '#' to the front.
+
+```js
+// We use this class to pass to dat.gui
+// so when it manipulates near or far
+// near is never > far and far is never < near
++// Also when dat.gui manipulates color we'll
++// update both the fog and background colors.
+class FogGUIHelper {
+*  constructor(fog, backgroundColor) {
+    this.fog = fog;
++    this.backgroundColor = backgroundColor;
+  }
+  get near() {
+    return this.fog.near;
+  }
+  set near(v) {
+    this.fog.near = v;
+    this.fog.far = Math.max(this.fog.far, v);
+  }
+  get far() {
+    return this.fog.far;
+  }
+  set far(v) {
+    this.fog.far = v;
+    this.fog.near = Math.min(this.fog.near, v);
+  }
++  get color() {
++    return `#${this.fog.color.getHexString()}`;
++  }
++  set color(hexString) {
++    this.fog.color.set(hexString);
++    this.backgroundColor.set(hexString);
++  }
+}
+```
+
+We then call `gui.addColor` to add a color UI for our helper's virtual property.
+
+```js
+{
+  const near = 1;
+  const far = 2;
+  const color = 'lightblue';
+  scene.fog = new THREE.Fog(color, near, far);
+  scene.background = new THREE.Color(color);
+
+*  const fogGUIHelper = new FogGUIHelper(scene.fog, scene.background);
+  gui.add(fogGUIHelper, 'near', near, far).listen();
+  gui.add(fogGUIHelper, 'far', near, far).listen();
++  gui.addColor(fogGUIHelper, 'color');
+}
+```
+
+{{{example url="../threejs-fog-gui.html" }}}
+
+You can see that setting `near` to something like 1.9 and `far` to 2.0 gives
+a very sharp transition between un-fogged and completely fogged,
+whereas `near` = 1.1 and `far` = 2.9 is just about
+the smoothest transition possible given our cubes are spinning 2 units away from the camera.
+
+One last thing, there is a boolean [`fog`](Material.fog)
+property on a material for whether or not objects rendered
+with that material are affected by fog. It defaults to `true`
+for most materials. As an example of why you might want
+to turn the fog off, imagine you're making a 3D vehicle
+simulator with a view from the driver's seat or cockpit.
+You probably want the fog off for everything inside the vehicle when
+viewing from inside the vehicle.
+
+A better example might be a house
+with thick fog outside the house. Let's say the fog is set to start
+2 meters away (near = 2) and be completely fogged out at 4 meters (far = 4).
+Rooms are longer than 2 meters and the house is probably longer
+than 4 meters, so you need to set the materials for the inside
+of the house to not apply fog. Otherwise, when standing inside the
+house looking outside, the wall at the far end of the room will look
+like it's in the fog.
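+
+A sketch of what that might look like in code (these materials are hypothetical,
+not from the example below):
+
+```js
+// materials used inside the house ignore scene.fog
+const insideMaterial = new THREE.MeshPhongMaterial({color: 0x808080, fog: false});
+
+// materials used outside keep the default fog: true
+const outsideMaterial = new THREE.MeshPhongMaterial({color: 0x80C080});
+```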
+
+<div class="spread">
+  <div>
+    <div data-diagram="fogHouseAll" style="height: 300px;" class="border"></div>
+    <div class="code">fog: true, all</div>
+  </div>
+</div>
+
+Notice the walls and ceiling at the far end of the room are getting fog applied.
+By turning fog off on the materials for the house we can fix that issue.
+
+<div class="spread">
+  <div>
+    <div data-diagram="fogHouseInsideNoFog" style="height: 300px;" class="border"></div>
+    <div class="code">fog: true, only outside materials</div>
+  </div>
+</div>
+
+<canvas id="c"></canvas>
+<script type="module" src="resources/threejs-fog.js"></script>

+ 156 - 0
threejs/lessons/fr/threejs-rendertargets.md

@@ -0,0 +1,156 @@
+Title: Three.js Render Targets
+Description: How to render to a texture.
+TOC: Render Targets
+
+A render target in three.js is basically a texture you can render to.
+After you render to it you can use that texture like any other texture.
+
+Let's make a simple example. We'll start with an example from [the article on responsiveness](threejs-responsive.html).
+
+Rendering to a render target is almost exactly the same as normal rendering. First we create a `WebGLRenderTarget`.
+
+```js
+const rtWidth = 512;
+const rtHeight = 512;
+const renderTarget = new THREE.WebGLRenderTarget(rtWidth, rtHeight);
+```
+
+Then we need a `Camera` and a `Scene`
+
+```js
+const rtFov = 75;
+const rtAspect = rtWidth / rtHeight;
+const rtNear = 0.1;
+const rtFar = 5;
+const rtCamera = new THREE.PerspectiveCamera(rtFov, rtAspect, rtNear, rtFar);
+rtCamera.position.z = 2;
+
+const rtScene = new THREE.Scene();
+rtScene.background = new THREE.Color('red');
+```
+
+Notice we set the aspect to the aspect for the render target, not the canvas.
+The correct aspect to use depends on what we are rendering for. In this case
+we'll use the render target's texture on the side of a cube. Since faces of
+the cube are square we want an aspect of 1.0.
+
+We fill the scene with stuff. In this case we're using the light and the 3 cubes [from the previous article](threejs-responsive.html).
+
+```js
+{
+  const color = 0xFFFFFF;
+  const intensity = 1;
+  const light = new THREE.DirectionalLight(color, intensity);
+  light.position.set(-1, 2, 4);
+*  rtScene.add(light);
+}
+
+const boxWidth = 1;
+const boxHeight = 1;
+const boxDepth = 1;
+const geometry = new THREE.BoxGeometry(boxWidth, boxHeight, boxDepth);
+
+function makeInstance(geometry, color, x) {
+  const material = new THREE.MeshPhongMaterial({color});
+
+  const cube = new THREE.Mesh(geometry, material);
+*  rtScene.add(cube);
+
+  cube.position.x = x;
+
+  return cube;
+}
+
+*const rtCubes = [
+  makeInstance(geometry, 0x44aa88,  0),
+  makeInstance(geometry, 0x8844aa, -2),
+  makeInstance(geometry, 0xaa8844,  2),
+];
+```
+
+The `Scene` and `Camera` from the previous article are still there. We'll use them to render to the canvas.
+We just need to add stuff to render.
+
+Let's add a cube that uses the render target's texture.
+
+```js
+const material = new THREE.MeshPhongMaterial({
+  map: renderTarget.texture,
+});
+const cube = new THREE.Mesh(geometry, material);
+scene.add(cube);
+```
+
+Now at render time first we render the render target scene to the render target.
+
+```js
+function render(time) {
+  time *= 0.001;
+
+  ...
+
+  // rotate all the cubes in the render target scene
+  rtCubes.forEach((cube, ndx) => {
+    const speed = 1 + ndx * .1;
+    const rot = time * speed;
+    cube.rotation.x = rot;
+    cube.rotation.y = rot;
+  });
+
+  // draw render target scene to render target
+  renderer.setRenderTarget(renderTarget);
+  renderer.render(rtScene, rtCamera);
+  renderer.setRenderTarget(null);
+```
+
+Then we render the scene with the single cube that is using the render target's texture to the canvas.
+
+```js
+  // rotate the cube in the scene
+  cube.rotation.x = time;
+  cube.rotation.y = time * 1.1;
+
+  // render the scene to the canvas
+  renderer.render(scene, camera);
+```
+
+And voilà
+
+{{{example url="../threejs-render-target.html" }}}
+
+The cube is red because we set the `background` of the `rtScene` to red so the
+render target's texture is being cleared to red.
+
+Render targets are used for all kinds of things. [Shadows](threejs-shadows.html) use render targets.
+[Picking can use a render target](threejs-picking.html). Various kinds of
+[post processing effects](threejs-post-processing.html) require render targets.
+Rendering a rear view mirror in a car or a live view on a monitor inside a 3D
+scene might use a render target.
+
+A few notes about using `WebGLRenderTarget`.
+
+* By default `WebGLRenderTarget` creates 2 textures. A color texture and a depth/stencil texture. If you don't need the depth or stencil textures you can request to not create them by passing in options. Example:
+
+    ```js
+    const rt = new THREE.WebGLRenderTarget(width, height, {
+      depthBuffer: false,
+      stencilBuffer: false,
+    });
+    ```
+
+* You might need to change the size of a render target
+
+  In the example above we make a render target of a fixed size, 512x512. For things like post processing you generally need to make a render target the same size as your canvas. In our code that would mean when we change the canvas size we would also update both the render target size and the camera we're using when rendering to the render target. Example:
+
+      function render(time) {
+        time *= 0.001;
+
+        if (resizeRendererToDisplaySize(renderer)) {
+          const canvas = renderer.domElement;
+          camera.aspect = canvas.clientWidth / canvas.clientHeight;
+          camera.updateProjectionMatrix();
+
+      +    renderTarget.setSize(canvas.width, canvas.height);
+      +    rtCamera.aspect = camera.aspect;
+      +    rtCamera.updateProjectionMatrix();
+      }