- Code and resources in a .unitypackage, the exe in the Bin folder: My Drive
Use, share and modify the code as you want. Enjoy!
(PS: write to me if the links are gone and I'll reupload the files.)
private IEnumerator GrowBranch(Vector3 branchPivot, Vector3 growDirection, float width, int currentLayer, AnimLine root)
{
    if (currentLayer < maxLevels)
    {
        Transform rootTransform = null;
        if (root != null)
            rootTransform = root.Line.transform;
        AnimLine mainBranch = new AnimLine(rootTransform, branchPivot, tessellation, treeMaterial);

        // Bend the branch: push the spline's middle control point sideways by a random amount.
        Vector3 norm = new Vector3(growDirection.y, -growDirection.x, growDirection.z).normalized;
        Vector3 middle = growDirection / 2.0f;
        middle += norm * Random.Range(-growDirection.magnitude / 10.0f, growDirection.magnitude / 10.0f);
        List<Vector3> controlPoints = new List<Vector3>() { Vector3.zero, middle, growDirection };
        mainBranch.CreateSpline(controlPoints, width, width * widthDecrease);

        if (root != null)
        {
            mainBranch.Line.transform.parent = rootTransform;
            mainBranch.Line.RootType = LineMesh.RootPoint.RootLine;
            mainBranch.Line.RootLine = root.Line;
        }
        var growEnum = StartCoroutine(mainBranch.UpdateLine(LayerGrowTime));
        lines.Add(mainBranch);

        // Higher layers spawn fewer child branches.
        int noBranches = (int) Mathf.Lerp(Random.Range(minBrachPerLayer, maxBrachPerLayer), minBrachPerLayer, (float) currentLayer / maxLevels);
        Vector3[] newBranches = new Vector3[noBranches];
        if (lines.Count + newBranches.Length > maxBranches)
            yield break;
        for (int i = 0; i < newBranches.Length; i++)
        {
            // (i & 1) alternates the bend direction left/right.
            Vector3 angles = new Vector3 { z = (((i & 1) * 2) - 1) * Random.Range(minAngle, maxAngle) };
            newBranches[i] = RotatePointAroundPivot(growDirection, Vector3.zero, angles) * Random.Range(minHeightMult, maxHeightMult);
        }
        yield return growEnum; // wait until this branch has fully grown
        foreach (var branch in newBranches)
            StartCoroutine(GrowBranch(growDirection, branch, width * widthDecrease, currentLayer + 1, mainBranch));
    }
}
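Not shown above: the whole tree starts from a single call with the trunk as layer 0. A minimal sketch, where trunkHeight and trunkWidth are hypothetical tuning fields (the real field names may differ in the package):

private void Start()
{
    // Hypothetical entry point: grow the trunk upwards from this object's
    // position as layer 0, with no parent line.
    StartCoroutine(GrowBranch(transform.position, Vector3.up * trunkHeight, trunkWidth, 0, null));
}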
I implemented an AnimLine class, which can create an animated line, weld lines together, and transform them. It uses meshes and only updates the parts that change, so it's pretty fast. It also looks good to have the wind blowing through the branches. Since the AnimLine class lets me parent lines and set their pivots, the wind function moves the branches with local angles, plus some randomness.
private IEnumerator UpdateBranches()
{
    while (true)
    {
        if (maxWindStrength == 0.0f)
        {
            yield return null;
            continue;
        }
        var time = Time.time;
        for (int i = 1; i < lines.Count; i++)
        {
            if (lines[i].Line.PointCount < 2)
                continue;
            // Scale the wind strength by how high the branch starts;
            // branches at or above maxWindHeight get the full strength.
            float localStrength = 1.0f;
            if (maxWindHeight != 0.0f)
                localStrength = Mathf.Clamp01(lines[i].Line.GetPoint(0).y / maxWindHeight);
            // The per-branch phase offset (- i) keeps the branches from swinging in lockstep.
            lines[i].Line.transform.localEulerAngles = new Vector3(0, 0, maxWindStrength * localStrength * Mathf.Sin(time * windSpeed - i));
            lines[i].Line.UpdateLine(0, 1);
        }
        yield return new WaitForSeconds(0.03f);
    }
}
This is what I currently have. However, there are still problems with it. First, the infamous Z-fighting: since I use meshes on the same plane, and since the camera is perspective, I cannot offset the meshes very far, and deciding on the offsets is complex. Second, the weld points of different lines have wrong UV coordinates, so the texture clamps there. I will also add a better work-sharing model so that updating branches becomes more efficient. Right now it updates a specified number of branches per frame; the update then returns and continues in the next frame, which keeps the desired FPS even with tens of thousands of branches, although it looks terrible with a low update batch size.
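For reference, the frame-sliced update loop I mean looks roughly like this. It's a minimal sketch assuming the lines list from above and a hypothetical updateBatchSize field:

private IEnumerator UpdateBranchesBatched()
{
    int index = 1;
    while (true)
    {
        if (lines.Count < 2) { yield return null; continue; }
        // Process at most updateBatchSize lines this frame, then resume
        // where we left off in the next frame.
        for (int done = 0; done < updateBatchSize; done++)
        {
            lines[index].Line.UpdateLine(0, 1);
            if (++index >= lines.Count)
                index = 1;
        }
        yield return null;
    }
}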
/// <summary>
/// NeuronLayers[l][i] is the i-th neuron in layer l
/// </summary>
public readonly double[][] NeuronLayers;

/// <summary>
/// WeightsLayers[l][in, out] is the weight between NeuronLayers[l][in] and NeuronLayers[l + 1][out]
/// </summary>
public readonly double[][,] WeightsLayers;
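For clarity, here is how these jagged arrays could be allocated from a list of layer sizes; the constructor below is an illustrative sketch, not the actual code from the package:

// Hypothetical allocation: layerSizes[l] is the number of neurons in layer l.
public NeuralNetwork(params int[] layerSizes)
{
    NeuronLayers = new double[layerSizes.Length][];
    for (int l = 0; l < layerSizes.Length; l++)
        NeuronLayers[l] = new double[layerSizes[l]];

    // One weight matrix between each pair of consecutive layers.
    WeightsLayers = new double[layerSizes.Length - 1][,];
    for (int l = 0; l < WeightsLayers.Length; l++)
        WeightsLayers[l] = new double[layerSizes[l], layerSizes[l + 1]];
}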
public abstract class NNTrainer
{
    // ...
    public abstract void StartTraining();

    protected NeuralNetwork NeuralNetwork;

    // The constructor must be protected (not private), otherwise derived
    // trainers could not chain to it.
    protected NNTrainer(NeuralNetwork nn)
    {
        NeuralNetwork = nn;
    }
}
// Weight[j, i] is the weight from the Input[j] neuron to the Output[i] neuron.
// Outputs[i] = Step(Sum(Input[j] * Weight[j, i] for j from 0 to Input.Length - 1));
public static double Step(double d)
{
    return d < 0.0D ? 0.0D : 1.0D;
}

The goal is to teach the network to map an input vector to an output vector. We can teach these networks by examples: input-to-output pairs. Then we have to figure out the weights that generate the desired output (also called the target) from its paired input in each example.
Weight[i, j] += LearningRate * (Target[j] - Output[j]) * Input[i];

LearningRate is used for fine-tuning the weights, which is needed because of the characteristics of the gradient descent method (next part).
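For example, with LearningRate = 0.1, Input[i] = 1, Target[j] = 1 and Output[j] = 0, the rule adds 0.1 * (1 - 0) * 1 = 0.1 to Weight[i, j], nudging that output towards firing for this input; if the output already matches the target, the difference is 0 and the weight is left unchanged.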
The bias can be thought of as the propensity (a tendency towards a particular way of behaving) of the perceptron to fire irrespective of its inputs. Its value cannot be modified by the user, but its weight is learned just like that of any other input neuron.
double[] Inputs = new double[NumberOfInputs + 1]; // +1 for the bias
Inputs[NumberOfInputs] = 1.0;                     // the bias is a constant 1
double[] Outputs = new double[NumberOfOutputs];
double[,] Weights = new double[Inputs.Length, Outputs.Length];

// Weights[i, j] is the weight between Input[i] and Output[j]
public override double[] CalculateOutput(params double[] inputs)
{
    if (inputs == null)
        return Outputs; // Outputs is the last neuron layer, an array of doubles

    FillInputs(inputs); // does range checks and fills the input neurons with the given values

    // Within a layer, the neuron loop can run in parallel. In a multilayer
    // perceptron, the values are propagated from the first to the last layer.
    Parallel.For(0, Outputs.Length, i =>
    {
        Outputs[i] = 0;
        for (int j = 0; j < Inputs.Length; j++)
            Outputs[i] += Inputs[j] * SingleLayerWeights[j, i];
        Outputs[i] = Step(Outputs[i]); // apply the step function to clamp the value to 0 or 1
    });
    return Outputs;
}

And how do we train it? Using backpropagation, which in the case of a single-layer system reduces to the delta rule. I'll only share the code; you can also find the formula in the link above. We pass in the inputs and the target outputs, and let the code modify the weights until the outputs come out right. This way, when we present a new input to the system, it can classify it based on the rules learned from the previous examples.
public override void Teach_Backpropagation(double[] inputs, double[] targets)
{
    if (inputs == null || targets == null)
        return;
    FillInputs(inputs);
    var maxLoop = Math.Min(targets.Length, Outputs.Length); // bounds checking
    Parallel.For(0, maxLoop, i =>
    {
        // The block below is just the CalculateOutput function, but computing
        // it inside the learning algorithm saves another loop.
        Outputs[i] = 0;
        for (int j = 0; j < Inputs.Length; j++)
            Outputs[i] += Inputs[j] * SingleLayerWeights[j, i];
        Outputs[i] = Step(Outputs[i]);

        // This is the real part: it trains the weights to reproduce the target values.
        var diff = targets[i] - Outputs[i];
        for (int j = 0; j < Inputs.Length; j++)
            SingleLayerWeights[j, i] += LearningRate * diff * Inputs[j];
    });
}

These networks can solve linearly-separable problems. That means the following: represent each input vector as a dot in space (with 2 inputs, this is a point in the 2D Cartesian coordinate system). If the output of the system for that input is 1 (true), make the dot black, otherwise make it white. If you can draw a straight line separating the black dots from the white ones, the perceptron can also learn to separate them. Two examples are the logical-and and the logical-or operators.
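As a quick usage sketch, teaching the logical-and could look like this (the SingleLayerPerceptron constructor below is assumed; the post only shows the fields and methods):

// Hypothetical usage: train a single-layer perceptron on logical AND.
var nn = new SingleLayerPerceptron(numberOfInputs: 2, numberOfOutputs: 1);
double[][] examples = { new[] { 0.0, 0.0 }, new[] { 0.0, 1.0 }, new[] { 1.0, 0.0 }, new[] { 1.0, 1.0 } };
double[][] targets  = { new[] { 0.0 }, new[] { 0.0 }, new[] { 0.0 }, new[] { 1.0 } };

for (int epoch = 0; epoch < 100; epoch++) // repeat the examples until the weights settle
    for (int e = 0; e < examples.Length; e++)
        nn.Teach_Backpropagation(examples[e], targets[e]);

double[] result = nn.CalculateOutput(1.0, 1.0); // expected: { 1.0 }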
if {...} else {...} statements?