Sunday, August 25, 2013

Daily Updates (29th July 2013)

Report Day 16 - 29 July 2013
Lesson 7 : Mantra Render Engines
What I learnt in this lesson:
  • the Mantra render engine and how it goes through the rendering pipeline
  • rendering at the command line (saving as .ifd)
  • Mantra
  • the rendering pipeline
  • instancing
  • bump maps and shaders
Houdini Command Line
-j specifies how many processors will be dedicated to Houdini








The command-line flag “-j 3” means I will use 3 processors just for Houdini.
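As a minimal sketch (assuming Houdini is launched from a shell where the houdini executable is on the PATH), launching with three processors would look like:

houdini -j 3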

Extra info:
To make sure the render does not take up all the CPUs, I can apply the same “-j” flag to the rendering stage.
In the out context, under the Mantra node's parameters, there are many pre-built commands we can use.
If I choose “use one cpu”, it shows mantra -j 1. Here I can change how many processors I want to use for rendering.
Under the command there is a Disk File option, and I can save out my renders in .ifd format.
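As a small sketch of what that could look like (the folder name ifds and the file name are just made-up examples), the Disk File path could be set to something like:

$HIP/ifds/myscene_$F4.ifd

so that each frame writes its own numbered .ifd file inside the project folder.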

What is an .ifd?
It is a combination of HScript and binary data.
In simple terms, it is what Mantra reads to produce the rendered image.

Rendering scenes using the command line
At the command line, I can type cd and drag in the folder where the render (.ifd) file is saved.
Press Enter, then type ls and press Enter again; it shows the folders and files inside the project folder.
Then type “mantra < (name of file).ifd” and it will start rendering the scene.
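Putting those steps together, a session might look roughly like this (the folder and file names here are placeholders, not from the lesson):

cd /path/to/my_project/ifds
ls
mantra < myscene_0001.ifd

The image that mantra writes out goes to whatever output path was baked into the .ifd when it was generated.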
I tried applying the same method at my command line, but apparently it did not work.
Do I need to turn on Disk File when I do external renders from the command line? Disk File does not work in the Apprentice version.
The Houdini website itself offers a masterclass on Mantra rendering.

 Viewport vs Rendering Speed
In order not to slow down the viewport, I can click on the object and go to its Render > Geometry tab.
Ticking this option lets us see our geo at high resolution only when it is rendered.

 Refinement
Refinement takes every face of the geo and renders from one point to another depending on the light and shader.
e.g. it goes from one vertex to another and smooths it by subdividing, making new shading from one point to the next; if it can't, it keeps subdividing.
If, say, we have a lot of geo in our scene and we have to instance it, then Houdini has to refine all of the geo individually, and that takes up render time. To be more efficient, I can refine the geo with a Subdivide node before instancing; that way it renders much faster, as the refinement has already been pre-defined.

 Increasing shading quality
Under the Mantra node > Dicing, there is a Shading Quality parameter.
The higher the value, the more refined your polygons are.
To be more productive, always remember that objects further away from the camera can afford slightly lower divisions.

 Bump Maps
Using bump maps speeds up render time significantly.
One way to add bump maps is to use the Mantra Surface node; under its Displacement tab > Bump/Normal Map, I can add bumps.
Another way is to use a Material Shader Builder with a Displace Along Normal node and tick the “Bump Only” checkbox.
If using a bump map, avoid subdividing the geo.

 Properties affecting refinement
  • Geometry measuring
Non-uniform: measures everything through camera space
Raster: screen space
Uniform measuring: generates uniform divisions whose size is controlled by the shading quality
 Buckets


I can render with larger “buckets” by increasing the tile size.



























Coving
Some areas of the geo need to be refined more to get smooth shading, and we might get holes in the render; coving helps mend them back together.
Opacity

Materials which are opaque and transparent (Of) on the surface geo will get calculated; anything further away gets handled further down the pipeline.






















Shading
When using PBR, area lights make for a faster render and a better image result.

 Pixel sampling
The higher the pixel sample value, the better the edges look when rendered. The jitter throws samples at random distances, adding a little smudge to get a cleaner look. However, only if the geo has a lot of high-frequency noise should I increase the pixel samples to get a smoother look.
That covers the first half of the lesson.


Lesson 8 : Mantra Procedurals
Rest node - enables a texture to stick to the geometry when the surface moves/deforms.
Instance node:
Under the Instance node there is a parameter called Point Instancing. It has three options: "Off", "Full point instancing" and "Fast point instancing".
With "Off", it is basically just duplicating objects.
With "Full point instancing", actual point instancing occurs.
"Fast point instancing" is the recommended choice as it is faster.
If I just want to render my instances and not the original geo, I can turn off the renderable flag on the original geometry.

I can create an instance tab on my geo and save it as a preset to be more efficient.







We could also instance points from a geo and place another geo onto them.
However, if we have a lot of geometry, it will slow down the machine. After instancing geometry onto the points of another geometry, we could add an Add node; under it there is an option called "Delete Geometry But Keep the Points".

When this is turned on, only points are visible in the 3D viewport, but when rendered, we are able to see the geometry that we instanced.

Instancing is a really effective way to place complex objects as points. When using the Instance node, you will not see the geo in the 3D viewport, only in the render.


This allows the machine to be faster and more efficient when, for example, there is a lot of high-res geo to render.

Daily Updates (26th July 2013)

Bundle List and Light Linking
Using the bundle list is an efficient way to light link: create a bundle, put geo inside it, and link it to the specific lights you want.
The smart bundle is also useful, as I can call my geo, lights, etc. quickly without having to drag them into a bundle one by one.
This only works if my naming convention is neat,
for example (lights):
lgt_(...)_(...)
            This way I could call all my lights by using lgt*
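For example (these light names are hypothetical), with lights named lgt_key_area_001 and lgt_fill_area_001, a smart bundle pattern like lgt_key* would pick up only the key lights, while lgt* would grab every light in the scene.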




How the light rig is built
First, a simple setup is created using a grid, a teapot and an environment light. Then another grid is created and scaled down to act as portal geo for the environment light.
Create another two area lights and place them like in a three-point lighting setup.
Select all the lights and create a subnet.
Once everything works, we can hide the parameters that we will not be using in our light rig, for example the scale and uniform scale on the subnet.
  • Right-click on the subnet and click "Type Properties".
  • There is a tick box called "Invisible"; tick it for the parameters you will not use, to hide them.


















We need the ability to control all the lights we created both together and separately. That is why a main null is needed to move and rotate all the lights at the same time, and a separate null is needed for each light so it can move individually.





It is important to test things out before creating your digital asset.
Now we can create the digital asset. As we did before, right-click on the subnet node and choose "Create Digital Asset".
After creating it, we need to create controls outside the digital asset so that we can control the light rig at the obj level.
To do that, we open the digital asset's Type Properties, click the area light in the scene, right-click it, and choose "Export Handle to Digital Asset".
What this does is allow me to control my light parameters at the obj level of my digital asset.
Under the Type Properties, we can create a folder named Control to put all the light controls inside, then create a sub-folder inside it for each individual light.








Daily Updates (25th July)

Report Day 14 : 25 July 2013
Summing Noises
Instead of using diffuse color, we now use noise to drive the colors.

The same setup is used: create a Material Shader Builder and use the Surface Model node, but this time we also use a displacement node. We learned this at school from Mr Ron and it uses the same concept; however, there we only used the displacement bound. In this tutorial we learn about re-dice displacement and true displacement.





Luminance - takes in an RGB color (vector) and returns a single value.
Re-dice displacement - used when there is an extreme amount of displacement. Re-dicing re-dices the displaced surface to give it a more regular shape.
True displacement - actually moves the positions of the points on the geometry rather than just giving it a bump.
Lesson 5 - Light Objects
  • test asset
  • viewport lighting options
  • shelf tools
  • point light
  • deep shadow maps
  • ray tracing
  • light types

Headlight - does not show the lights in the scene (default lighting)
Normal lighting - shows the lights in the scene
High Quality Lighting -
I can right-click on each of the parameters and control whether I want to see the diffuse, specular, ambient and emission components. This lets us see the differences in the 3D viewport interactively without rendering, although rendering is more effective.
Ambient light is strictly not recommended, as it just makes all the color values lighter, like adjusting the brightness of an image in Photoshop.
Depth Map Shadows




When using a depth map shadow, an image is created and stored in a temp folder.
I found a good read about deep shadows and the pros and cons of using it.






Ordinary shadow: a 2D image rendered from the light; each pixel represents the distance to the nearest surface; used to determine which surfaces are visible to the light and which are in shadow.
Depth map shadow (deep shadow): a 2.5D image rendered from the light; each pixel stores multiple values for each surface the light hits; able to calculate shadows cast by translucent surfaces.

A deep shadow stores two channels, Pz and Of. Pz stores the Z depth, the distance from the light to the geo; Of stores the total opacity at that depth. This is good for compositing.
Naming Convention to store the deep shadow rat file
$HIP/dmaps/{$OS}_dshad_$F3.rat
$HIP - the directory of the current .hip file
dmaps - the name of the folder
{$OS}_dshad_$F3 - naming convention using the light's node name and the frame number (e.g. pointlight_dshad_001)
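So, as a worked example (the light name spotlight1 is just an illustration), for a light node named spotlight1 at frame 7 the file would resolve to $HIP/dmaps/spotlight1_dshad_007.rat, with $HIP expanding to the folder containing the .hip file.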

Render
One important thing to note: when regenerating depth map shadows, always remember to click the render button.





Depth Map Shadow Parameters
Shadow Bias
The larger the object, the greater the bias value needed, according to how much you want the shadow to be lifted.


Line, Grid and Disk are area lights
Sphere and Geometry are volume lights
Distant and Sun lights are mostly used in environment scenes.











Spot Light
The most important parameters of the spot light are cone angle, cone delta and cone rolloff.
Cone Angle



















The higher the value, the bigger the cone angle.
Cone Delta
Cone delta and cone rolloff work together. Cone rolloff makes the hard edge softer.






Area Light
An area light gives a softer shadow, but a noisier render. To reduce the noise, increase the shadow quality.





Volume Light
A volume light gives a very nice soft shadow and a realistic render. However, it takes longer to render.






By using the attenuation value together with the intensity, we can get effects like this.





Environment Light
An environment light gives light to all areas.





One thing I learned is that I can light up a scene really fast by using an environment light.





Blast a sphere and keep only 3 primitives. Then use portal lighting in the environment light node (under the Render Options, tick Portal Geometry and attach the geo to it).
This only allows light to be emitted from those primitives. With this technique, I am able to light a scene really fast.

Lesson 6 - Light Rigs
In the first part of the lesson, I learned a bit more about how to use the sky light.
When we click Sky Light, two nodes appear: a skylight and a sunlight.





I can put in latitude and longitude values and even add a location and time to suit the scene. That is something I found really useful.
Next, I learned how to use the three-point lighting tool. When you use it, you get three different lights: key, fill and rim. I can interactively move my three-point light rig to wherever I want and also rotate it.
It is also useful to add your camera to the three-point lighting under "Look At Camera".
It orients the lights towards the camera, so if we make changes to the camera, the lighting always stays consistent.









Daily Updates (24th July 2013)

Lesson 3 - Custom Shader (Modifying Mantra Surface Shader)

  • creating a modifier for the Mantra Surface shader that enhances the capability to add maps, gradients and color maps to any channel
  • VOP-based digital assets
  • the difference between exposing and promoting parameters
  • ramps in a shader
  • digital asset networks

Creating the Shader 
























Surface Model is a VOP node that was introduced back in Houdini 11.
It is really flexible and versatile, as it has most of the parameters you would want in a shader, for example: subsurface scattering, reflections, refractions, fresnel, etc.


  • Bias Amount
I want the bias amount from the Color Mix node to be accessible at the SHOP level of the VOP material. To be able to do that, I need to promote and expose the parameter.
  • Adding a file texture

I can also add a file image for my textures by using a Color Map node and promoting its parameters, so I can use either the base color or the color map.
  • Ramps
The ramp parameter allows me to add more colors, and the bias amount controls how much of the ramp color or file texture I want. This gives me more control.

We can get nice outcomes with the ramp parameter and color map by overlaying many things together.






































  • Null
Adding a null before the ramp parameter allows me to collapse the network up to the null into a digital asset that I am able to reuse.
I am also able to add or replace patterns feeding into the null without affecting other nodes.
Creating Custom Shader Digital Assets
After creating all the parameters we want to be able to adjust, it is time to turn the network into a digital asset so it can be reused.
- Null (exposing parameters from the null node)
- make sure that "Use Input Value if Parameter Not Bound" is selected
Issue

- If I use two copies of the custom shader I made, I get two of each parameter, but I do not get a second ramp because of the naming convention.
Lesson 4 - Shader Development
Creating a 3d Shader
U, V, W Coordinates
They are commonly used to map a 2D image onto a three-dimensional object.
They allow texture maps to wrap around irregular surfaces.
To be able to create the 3D shader, we have to add in a W coordinate.
  • Duplicate the checkered shader and save it as an otl with another name

(Right-click and choose "Allow Editing of Contents") and it will turn blue.













  • Right-click on the node and open the Operator Type Manager. Here, I can duplicate the node and create the same node with another name, so I do not touch the original checkered node.

























In order to create the W coordinate on the new checkered3d node that has been created, we have to add more nodes inside the pattern.

At the 13-minute mark of the lesson, Ari went through some of the nodes that make up the checkered pattern. The nodes were isConnected and ifConnected.
isConnected - tests if the input is connected (true or false)
ifConnected - passes the value through
Creating a new variable for the W coordinate

  • Under the checkered3d node, right click and choose "Type Properties"
    Under the parameter tab, create 2 new float values















  • Under the Input/Output tab, create a new float input for the W coordinate.














Now if I connect the surface position to the W coordinate, I get this 3D pattern on the box.


Daily Updates (23rd July 2013)

Report Day 12 : 23 July 2013
Naming Conventions
Following a naming convention is essential so that we do not get confused when we have complicated models and lots of lights in one scene. When we name the nodes, it is easier to spot which light we need to adjust, etc.
Naming convention for lights: lgt_use_type_description_number
example: lgt_key_area_atiqah_face_001
I can also apply this technique when naming other nodes.
Sometimes when we are handling complex scenes, the network view can get a bit confusing and messy even with netboxes.



However, there is another way we could access all the nodes, and this information will definitely benefit me in the future when I start working.

We can put all our nodes into one bundle in the bundle list and access them easily, just like in the network view.




The eye icon enables us to switch visibility on and off (the visibility flag, blue).
The mouse icon is used to select nodes (it shows how many are selected).
The box icon is the template flag and it is used for geometry.
The pipe-looking icon is the bypass flag; bypassing a node will prevent any ROPs from rendering it.
The X icon hides the nodes completely.
Rendering lights
Another neat trick I learned: instead of turning off all the lights I do not want to render, I can just call all the lights I do want to render via a bundle.

In this case, if I want to render the BarnLights bundle, I can go to my Mantra node and, under Candidate Lights, type @BarnLights.



Light Linker

The light linker works like in any other software; it simply links lights with geometry.




Render Scheduler
The Render Scheduler allows me to pause my renders. It also shows which renders are active and their elapsed time. I can prioritize the ones I want to render first by pausing the other renders.

This is very useful; the other software packages I know of are not able to pause a frame render.









I can also change a parameter by calling the name of the parameter,

e.g. light_type, and I am able to change the light type.



123 and 456 Command Scripts
- HScript
- they are run when Houdini launches
Some commands used:
opcf - change directory
opadd - add a node
opparm - change a parameter


I can use the textport to create my geo as well by using the commands above; a rough sketch is below.
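As a rough sketch of a textport session (the node names box_geo and box1 and the translate values are just hypothetical examples, not from the lesson):

opcf /obj
opadd geo box_geo
opparm box_geo t ( 0 2 0 )
opcf box_geo
opadd box box1

opcf changes into the /obj network, opadd creates the geo container and then the box inside it, and opparm sets the container's translate parameter.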





There is a whole list of commands I can go through: http://www.sidefx.com/docs/houdini12.5/commands/
Lesson 2 - Material Gallery
  • create a digital asset that is already built for creating shaders
  • create our own surface material by going into a VOP network
  • Mantra Surface Shader
  • building a VOP network using the Surface Shader Builder

Material Tester Digital Asset

After creating a simple scene with a grid, teapot, camera and light, we can now select all and create a subnet.

Inside the subnet, add an operator path parameter so that we can test the material on the teapot.
Copy the parameter and paste a relative reference into the material tab of the teapot inside the subnet, so every time I add a new material, the teapot gets updated as it is using the same parameter.

Material Gallery
Once we are happy with our custom-made shader, I can save that shader as a .gal file and put it in the material palette so that I can reuse it.
Right-click on the material and click Save to Gallery.


Digital Asset instead of Gallery
If you want to save the shader as a digital asset instead, right-click on the shader and save it as a digital asset.

Different VOP Models
  • Mantra Surface Shader



General, flexible material including subsurface scattering, environment maps, refractions, and displacement.




  • Surface Model

fully featured model of surface shading. It supports diffuse, specular, and refractive components and computes direct lighting (from light sources) and indirect lighting (lighting bouncing off other objects in the scene)





The render is not constant white, as it already has a built-in shading model.
  • Material Shader Builder

container for other shader types, letting you “package up” combinations of lower-level shaders (such as surface shaders and displacement shaders) with individual settings into a new “look” you can assign as a single unit.
















It renders constant white as there is no lighting or surface model.
Difference between Color Mix and Mix

Color Mix works with RGB colors and Mix works with grayscale/luminance values. Color Map, however, uses UVs.
It is not difficult to use the material surface shader, as we can add multiple patterns, colors and textures.









Daily Updates (22nd July 2013)

L-system Bonus Lesson
- the long-form name is the Lindenmayer system
- a bunch of symbols used to create trees, plants and geometric shapes
- trees evolve from young to complex
- good for creating mid- and far-field trees
- can animate the growth of trees

Here are some of the examples I found.

This veins test using the Houdini L-system looks pretty good.

Forward to the 31-second mark.
The way the branches grow is really nice, and the butterfly coming out of the cocoon looks realistic.








The L-system node has four inputs: J, K, M and Meta Test Input.
J, K and M are used to create leaves, trees and branches.
The L-system node's Meta Test Input lets you generate rules that will cause the system to stop when it reaches the edges of a defined shape, like a topiary hedge.
There are a lot of default shapes/values for the L-system that I can already use and edit from.
L system parameter definitions
Premise
- the initial state of the tree at generation 0
- the initiator; it initiates the growth rules of the system
Rules
- the generator; drives the growth of the tree
Generations
- the greater the generation, the more branches, as the tree is more mature
(EDGE REWRITING)
F - go straight and draw a line (1 unit)
F = F+F-F
+ means turn right
- means turn left
f - go a full step without drawing anything






















h - go a half step without drawing anything



















When I change the generation to 1, I get F+F-F





















But when I change my generation to 2, I get this,






















At first I did not understand why I got such an irregular shape.
Then after Ari explained, I understood how it is calculated.
 Generation 1 : F = F+F-F
Generation 2 : F+F-F + F+F-F - F+F-F
By adding [ ] I can create trees, branches and twigs.

Example
F = F[+F]F[-F]F
F = F












F = F[+F]

















F = F[+F]F[-F]




















F = "F[+F]F[-F]Fby adding " I can scale the lsystem by going to the values tab and adjusting the step size scale.




















F = "TF[+F]F[-F]Fby adding T I can add gravity on the branches and twigs by changing the values of the gravity.




















However, the main structure stays the same. In order to add gravity to the main structure, add a value after the T; now the main structure will deform as well, according to the gravity values.
 F = "T+(1)F[+F]F[-F]F




















NODE REWRITING
- creating a new variable, e.g. X





















The variable X will be the premise for the next generation.
 Multiple Rules




















I do not quite understand multiple rules yet, but this forum gave me an idea of how they are used.
! - multiply the thickness
; - multiply the current angle





















~ - randomises the L-system up to a given degree
eg)
X = !T~(20)F[;+X][-X]FX




















3D  TREES
+  - Rotate Right
-   - Rotate Left
&  - Pitch Up
^  - Pitch Down
\ - Roll Clockwise
/ - Roll Anti-Clockwise
eg)
X = !/(140)~F[+FJ]X:0.5
If I have a J in my rule, I can attach a geo to the J input. The same applies for the K input.
















Conditionals
EG)
t < 4 = F+F[F][-F]
(this means the rule only applies while fewer than 4 generations have been produced)


















First Steps - Light, Shade, Render (Scene Setup)
The first 12 minutes of the lesson teach us how to create our own project folder in Houdini. It does not work the same way as Maya; however, we are able to create project files anywhere we want.
http://forums.odforce.net/index.php?/topic/10436-noob-maya-to-houdini-question/
By reading this forum, I was able to understand more about how it can be used.
One other thing that Houdini has is the $HIP variable.

It allows me to access all my files easily, as I do not have to keep clicking and choosing the folder I want.
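For example (the folder and texture name here are made up), if the scene is saved as C:/projects/lighting/myscene.hip, then $HIP expands to C:/projects/lighting, so a path like $HIP/tex/wood_diffuse.jpg keeps working even if the whole project folder is moved somewhere else.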

I was also taught how to set up the desktop for lighting and shading.