Terrain generation

It’s been a while since my last update, but I’ve been working on terrain generation for some time now. I’ve built my own marching tetrahedra implementation, which has not been the simplest of tasks, but I’ve got it working OK for now. I’m going to keep working on it, as it still needs some tweaking, but I thought I’d share. Have a look!

I’ve also worked a little more on my poly water and movement along the waves, which isn’t perfect but is OK at the moment.

Screen Space Texture Shader: Part 2

Yesterday we started working on our Screen Space Texture Shader. We got the texture part working, but we also want an outline.

To get the outline rendered we will be adding vertex and fragment functions to our shader, and this needs to render underneath, or before, the surface function we did yesterday! What we are essentially going to do is scale our model up by the amount needed to get our outline width, and render that without any lighting applied, so it is all in the color we set our outline to be.

First we are going to take advantage of some of the functions Unity can provide us with, so we will include UnityCG.cginc in our shader. We add this just beneath our properties like this:
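A sketch of how that can look, assuming a CGINCLUDE block right after the properties (code in a CGINCLUDE block is shared by every pass in the shader; the include could also go directly inside the pass we add below):

CGINCLUDE
#include "UnityCG.cginc"
ENDCG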

Then we need to add a new tag to our SubShader that queues the rendering of our pass after other geometry; this is done with the “Queue” tag. By setting the queue to “Transparent-10” our shader is queued before transparent shaders but after geometry.
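The SubShader tags then look something like this (the “RenderType” tag comes from the generated shader):

SubShader {
    Tags { "RenderType"="Opaque" "Queue"="Transparent-10" }
    LOD 200
    // the rest of the shader goes here
}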

Our vertex and fragment functions need to be in their own pass, unlike the surface function. So we add a Pass { } block, and then we need to set up some parameters for this pass.

First you can set the pass name. I name it “OUTLINE”, but this is not required. The pass has its own tags. The “LightMode” tag says whether this should be rendered in Forward (Base or Add), Deferred, or, like we do, Always. This pass isn’t affected by lighting, so it works fine in either rendering path.

Then we have Cull, which controls which parts of our mesh are rendered. For my outline I’m not going to render the front faces of the mesh, using Cull Front. You could also do Cull Off, where every part of the mesh is rendered; it has some trade-offs, but you get the benefit of outlines around intersecting meshes, as in the example below.

[Screenshot: Cull test on intersecting meshes]

You can try out which one you prefer for yourself. If you Cull Back you will not render the rear-facing triangles of your mesh, which will not work for our outline, so steer away from that.

ZWrite Off means that this pass won’t write to the depth buffer, which we don’t want our outline to do. ZTest Less means that this will render over objects further away; ZTest Greater means that this will render over things in front of it. Then finally you have the pass blend mode: Blend SrcAlpha OneMinusSrcAlpha. This is the standard alpha-blending mode, so you don’t strictly need it, but you can experiment with different Blend parameters, and you can also do BlendOp Sub or RevSub to get different effects. The Unity documentation covers blending in detail.
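Putting the pass setup together, it can look something like this (which ZTest you use, if any, is up to the experiments above):

Pass {
    Name "OUTLINE"
    Tags { "LightMode" = "Always" }

    Cull Front   // only the scaled-up back faces are rendered
    ZWrite Off   // don't write the outline to the depth buffer
    Blend SrcAlpha OneMinusSrcAlpha

    // the CGPROGRAM for this pass goes here
}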

Then we add the CGPROGRAM for this pass. We start with the compilation directives:

#pragma vertex vert – our vertex function is called “vert”
#pragma fragment frag – our fragment function is called “frag”

Our vertex function needs something to process: the appdata structure, which contains the vertex positions and normals. The vertex function returns the v2f structure, which is passed to our fragment function for final processing. v2f contains the POSITION in clip space and the color.

And we need our _Outline (width) and _OutlineColor.
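Inside the pass, after the compilation directives, a sketch of those declarations (the exact types may differ from my original):

struct appdata {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct v2f {
    float4 pos : POSITION;   // vertex position in clip space
    fixed4 color : COLOR;    // our outline color
};

uniform float _Outline;
uniform fixed4 _OutlineColor;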

Then we have our vertex function:
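Reconstructed from the walkthrough below, it looks something like this (following the classic Unity outline approach; TransformViewToProjection is a helper from UnityCG.cginc, which is why we included it):

v2f vert(appdata v)
{
    v2f o;

    // object space -> clip space
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

    // object space normal -> eye space, then normalize
    float3 norm = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal));

    // project the eye space normal onto the screen plane
    float2 offset = TransformViewToProjection(norm.xy);

    // displace the vertex; scaling with z keeps the outline width
    // consistent for objects further away
    o.pos.xy += offset * o.pos.z * _Outline;

    o.color = _OutlineColor;
    return o;
}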

It returns a v2f, and the input is “appdata v”.
We declare “o”. “o.pos” is the vertex position multiplied with the MVP (Model, View, Projection) matrix. This brings our position from object space to clip space, which our fragment function wants to work with.

Then we bring our normals from object space to eye space by multiplying with UNITY_MATRIX_IT_MV. Then we normalize the resulting normal.

Our offset should be the projection space normal.x and y. This makes vertices whose normals are at a high angle (90 degrees max) to the view direction move the furthest, while vertices with normals pointing into or out of the screen aren’t offset at all.

Then we displace the position: the current position, plus the offset multiplied with the z position (to get a wider outline for objects further away, for a consistent on-screen width) and our _Outline width.

Then we set our outline color in the o.color variable, and return “o”.

Next up is our fragment function, which is the simplest kind of fragment function you can get. It’s self-explanatory. Remember to end the CG program with ENDCG.
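A sketch of it:

fixed4 frag(v2f i) : COLOR
{
    // no lighting, no textures: just output the outline color
    return i.color;
}
ENDCG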

That is actually the entire shader! We now have a complete Screen Space Texture Shader!

[Screenshot: the resulting shader output]

This is the resulting output I get from this shader. If you messed up, or just want to download the source for the shader: here you go!

That is it for now! Good luck shading your worlds!

Screen Space Texture Shader: Part 1

Redditor u/pigrockets asked a few days ago on r/Unity3D: “What do you call this kind of static pattern shader?”

I call it the “Screen Space Texture Shader”, and here is how to make one in Unity.

First of all: What do we need to make this?

We need some way to calculate the screen position of every point on your model so that you know what pixel of your screen space texture/static pattern goes where. In Unity you can get this position calculated for free!

And as you can see from the video, you want the shader to have an outline to get that cartoony look! This we will accomplish with a vertex program, where we scale our model up a little depending on the width we want for our outline, and then we color this model without any lighting in a fragment program. This “pass” will render underneath the static pattern.

For this simple shader example we won’t make a custom lighting model, which we would do if we wanted a more cartoony look.

Create your custom shader

[Screenshot: Create > Shader > Standard Surface Shader]

First you create your shader, Create > Shader > Standard Surface Shader, and name it “ScreenSpaceTextureShader” or whatever you want to call it! Open up the shader and you will see that you’ve got a lot of free code for your new shader:
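Depending on your Unity version, the generated code looks roughly like this:

Shader "Custom/ScreenSpaceTextureShader" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        // Physically based Standard lighting model, shadows on all light types
        #pragma surface surf Standard fullforwardshadows

        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        void surf (Input IN, inout SurfaceOutputStandard o) {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            // Metallic and smoothness come from slider variables
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}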

Okay! That’s a nice start! At the top we have
“Shader “Custom/ScreenSpaceTextureShader” {“. This names our shader “ScreenSpaceTextureShader” and puts it in the Custom shader folder.

Then we have our properties to set up our materials from the inspector. You can actually skip the properties and set up your materials entirely from script, but that is another story.

Then there’s the SubShader. This is where the magic happens. The “Tags” define certain parameters for the shader, mainly render order. You can also set the LOD for your shader.
In the CGPROGRAM you have your shader code.

“#pragma surface surf Standard fullforwardshadows” says that we will have a surface-function called “surf” using the Standard lighting model. And finally we want fullforwardshadows, which basically means we get shadows cast from all light sources in forward rendering.

“#pragma target 3.0” means we are targeting shader model 3.0. This gives a little nicer lighting, I guess, but limits the compatibility of the shader, so you can remove it if you want the shader to work “everywhere”.

Next up are our variable definitions: _MainTex is a sampler2D, _Glossiness is a half, _Metallic is a half, and _Color is a fixed4. float, half, and fixed are basically the same type with different precision: float is the most precise, then half, which you should use a lot for mobile optimization, and fixed, which is the least precise.
Then we have our Input structure.

This sets up what we want to pass to our surface-function, like our model’s UV info. Here we have uv_MainTex, which is the first UV set of the rendered mesh.

Finally we have the surface-function:

“void” – we are not returning anything from this function. “surf” – like we said in our “#pragma surface surf”, our function is called “surf”. “Input IN” – a structure of type Input, with the variable name “IN”, is passed to our surface-function. “inout SurfaceOutputStandard o” – the lighting model output structure we are using.
tex2D(_MainTex, IN.uv_MainTex) reads the pixel color of our main texture at the xy position from the UV map; this is multiplied with our _Color and we have our albedo color. Metallic and Smoothness are set directly from the inspector.

Now try making a material with this “Standard” shader! Create a new material (I’ve called it “ScreenSpaceMaterial”) and select our new custom shader like so:

[Screenshot: creating the material and selecting the custom shader]

The material should have an inspector view with these properties:

[Screenshot: the material inspector]

As you can see we have material Color, Albedo Texture, Smoothness and Metallic, but we need more properties for our Screen Space Texture Shader. We need a “screen space texture”, “outline color”, and a way to set our “outline width”.

So we add some new properties to our shader: _ScreenTex, _OutlineColor and _Outline.
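In the Properties block it can look like this (the display names are just my suggestions):

Properties {
    _Color ("Color", Color) = (1,1,1,1)
    _MainTex ("Albedo (RGB)", 2D) = "white" {}
    _Glossiness ("Smoothness", Range(0,1)) = 0.5
    _Metallic ("Metallic", Range(0,1)) = 0.0
    // our new properties
    _ScreenTex ("Screen Space Texture", 2D) = "white" {}
    _OutlineColor ("Outline Color", Color) = (0,0,0,1)
    _Outline ("Outline width", Range (0.0, 0.03)) = 0.005
}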

Our _ScreenTex defaults to all white, our _OutlineColor defaults to black (0, 0, 0, 1), and our _Outline needs to be within the range 0.0 to 0.03, defaulting to 0.005. Save this and your material inspector will look like this:

[Screenshot: the material inspector with the new properties]

The screen space texture implementation is quite simple compared to the outline, so we’ll start there. We need to sample our _ScreenTex in the surface-function, so add “sampler2D _ScreenTex;” like we have for our _MainTex. And we need to know what the screen position is in our surface-function, so add “float4 screenPos” to the Input-structure. Like this:
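A sketch of the relevant declarations:

sampler2D _MainTex;
sampler2D _ScreenTex;   // our screen space texture

struct Input {
    float2 uv_MainTex;
    float4 screenPos;   // filled in by Unity
};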

When we add screenPos to our Input like this, with exactly that name, Unity gives us this “for free”. We can also have worldPos, viewDir and others that are filled in by Unity, but we only need screenPos for this.

Now we want to get the screen position in normalized values and sample the texture. To get the normalized screen position we do:
half2 screenUV = IN.screenPos.xy / IN.screenPos.w;
then we sample the texture with this coordinate:
fixed4 sstc = tex2D(_ScreenTex, screenUV);
then we multiply this color with the color we already have, and get this surface-function:
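Putting it together, the surface-function becomes something like this:

void surf (Input IN, inout SurfaceOutputStandard o) {
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;

    // normalized screen coordinates of this fragment
    half2 screenUV = IN.screenPos.xy / IN.screenPos.w;
    // sample the screen space texture at that position
    fixed4 sstc = tex2D(_ScreenTex, screenUV);

    // multiply the screen space texture color into our albedo
    o.Albedo = c.rgb * sstc.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}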

That is all you need for the screen space texture part. Save the shader and try it out! Add a screen space texture to your material and you should get a result like this:

In the next part we will wrap up the shader, by adding the outline! So long!

Arduino <3 Unity

I’ve recently bought myself an Arduino Uno R3 kit and built some simple projects to get a feeling for what is possible with this stuff! Having little to no experience with microcontrollers, I must say the Arduino is quite simple to get into, and the community is awesome, with a LOT of tutorials and questions answered online! Much like the Unity community! One of my first projects was to get the Arduino to communicate with Unity! My plan was to make a gyro-accelerometer for use with my computer!

For this I paired the Arduino with a Wifi shield (ESP8266) and a GY-521 (MPU6050) 6DOF gyro/accelerometer. I set the ESP8266 up as a client, connected to my computer as the server. I used a free library for handling the gyroscope, which converts the raw gyro/accel data to a quaternion rotation and sends it as a string on the serial line. The ESP8266 transmits the string over TCP to the server.

In Unity I set up a TcpListener on a separate thread that continuously reads the strings from the Arduino and converts them back to a quaternion rotation.

In operation it looks like this:  

The setup is not very accurate, so not very useful, but it was a fun project to get working!

Now I’ve started working on something else! Bigger, better and more beautiful! 

Poly particles

I’m not sure whether this is a good idea or what, but I’ve made another shader that takes a particle system and creates a low poly version of the particles. It looks like this:

Screenshots:

[Screenshots: low poly particle renders]
Screenshots are nice and all, but a video says more than a thousand screenshots..

I guess it will probably be quite heavy, since it needs an extra camera that renders a displacement texture. But it does look good, doesn’t it?