A parallax background with a fixed camera is easy to do, but since I'm making a top-down 2D space exploration game, I figured it would be easier to have a single SKSpriteNode filling the screen as a child of my SKCameraNode, and use an SKShader to draw a parallax starfield.
I went on Shadertoy and found a simple-looking shader. I adapted it successfully on Shadertoy to accept a vec2 for the velocity of the movement, which I want to pass as an SKAttribute so it can follow the movement of my ship.
Here is the original source:
https://www.shadertoy.com/view/XtjSDh
I managed to convert the original code so it compiles without any errors, but nothing shows up on the screen. I tested the individual functions and they do work to generate a fixed image.
Any pointers to make it work?
Thanks!
This isn't really an answer, but it's a lot more info than a comment, and it highlights some of the oddness of how SceneKit/SpriteKit do particles, and where that behaviour is actually appropriate:
There are a couple of weird things about particles in SceneKit that might also apply to SpriteKit.
When you move the particle system, you can have the particles move with it. This is the default behaviour:
From the docs:
When the emitter creates particles, they are rendered as children of the emitter node. This means that they inherit the characteristics of the emitter node, just like nodes do. For example, if you rotate the emitter node, the positions of all of the spawned particles are rotated also. Depending on what effect you are simulating with the emitter, this may not be the correct behavior.
For most applications this is the wrong behaviour, in fact. But for what you're wanting to do, it's ideal. You can position new SKEmitterNodes offscreen where the ship is heading, fix them to "space" so they rotate in conjunction with the directional changes of the player's ship, and the particles will do exactly what you want/need to create the feeling of moving through space.
SpriteKit has a pre-build, or populate, ability in the form of advancing the simulation: https://developer.apple.com/reference/spritekit/skemitternode/1398027-advancesimulationtime
This means you can have stars ready to show wherever the ship is heading through space, as the SKEmitterNodes come on screen. There's no need for a loading delay to build up stars; this does it immediately.
As near as I can figure, you'd need 3 particle emitters to pull this off, each the size of the screen of the device. Burst the particles out, then release each layer you want for parallax to a target node at the right "depth" from the camera, and carry on by moving these targets as per the screen movement (a rough sketch of one layer follows).
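This is a rough, untested Swift sketch of one such layer; the "Stars.sks" emitter file, the depth scaling, and the numbers are assumptions rather than anything from the question. The key parts are targetNode, which pins the emitted particles to "space" instead of the emitter, and advanceSimulationTime, which pre-fills the field:

import SpriteKit

// Rough sketch: one starfield layer per parallax depth.
// "Stars.sks" is an assumed emitter file; tune scale/zPosition per layer.
func makeStarLayer(depth: CGFloat, in scene: SKScene) -> SKEmitterNode? {
    guard let emitter = SKEmitterNode(fileNamed: "Stars") else { return nil }
    emitter.particlePositionRange = CGVector(dx: scene.size.width, dy: scene.size.height)
    emitter.particleScale *= depth        // smaller stars read as "farther away"
    emitter.zPosition = -10 * depth       // draw far layers behind near ones
    emitter.targetNode = scene            // particles stay fixed to "space", not the emitter
    emitter.advanceSimulationTime(10)     // pre-populate so the field is full immediately
    scene.addChild(emitter)
    return emitter
}

You'd make three of these (for example at depths 0.4, 0.7 and 1.0) and move each one with the camera, scaled by its depth, so near layers slide faster than far ones.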
Bit messy, but probably quicker, easier, and with much more potential for playful effects than creating your own system.
Maybe... I could be wrong.
EDIT: Code is clean and working now. I've set up a GitHub repo for this.
I guess I didn't explain what I wanted properly. I needed a starfield background that follows the camera, like you could find in Subspace (back in the day).
The result is pretty cool and convincing! I'll probably come back to this later when the node quantity becomes a bottleneck. I'm still convinced that the proper way to do that is with shaders!
Here is a link to my code on GitHub. I hope it can be useful to someone. It's still a work in progress but it works well. Included in the repo is the source from SKTUtils (a library by Ray Wenderlich that is already freely available on GitHub) and from my own extensions to Ray's tools, which I called nuts-n-bolts. These are just extensions for common types that everyone should find useful. You, of course, have the source for the StarfieldNode and the InteractiveCameraNode, along with a small demo project.
https://github.com/sonoblaise/StarfieldDemo
The short answer is, in SpriteKit you use the fragment coordinates directly, without needing to scale against the viewport resolution (iResolution in Shadertoy land), so the line:
vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(0.0, iTime * 0.01);
can be changed to omit the scaling:
vec2 samplePosition = fragCoord.xy + vec2(0.0, iTime * 0.01);
This is likely the root cause (hard to know for sure without seeing your version of the shader code) of why you're only seeing black from the shader.
For a full answer with an implementation of a SpriteKit shader that makes a star field, let's take the original shader and simplify it so there's only one star field and no "fog" (just to keep things simple), and add a variable to control the velocity vector of the stars' movement:
(This is still in Shadertoy code.)
float Hash(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

vec2 Hash2D(in vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

float Noise(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

vec3 Voronoi(in vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if(starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float maxResolution = max(iResolution.x, iResolution.y);
    vec2 velocity = vec2(0.01, 0.01);
    vec2 samplePosition = (fragCoord.xy / maxResolution) + vec2(iTime * velocity.x, iTime * velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 16.0, 0.00125);
    fragColor = vec4(finalColor, 1.0);
}
If you paste that into a new Shadertoy window and run it, you should see a monochrome star field moving towards the bottom left.
Adjusting it for SpriteKit is fairly simple. We need to remove the "in"s from the function parameters, change the names of some constants (there's a decent blog post about the Shadertoy-to-SpriteKit changes that are needed), and use an attribute for the velocity vector so we can change the direction of the stars for each SKSpriteNode this is applied to, and over time, as needed.
Here's the full SpriteKit shader source, with a_velocity as a needed attribute defining the star field movement:
// 1D hash -> value in [-1, 1]
float Hash(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    return -1.0 + 2.0 * fract(sin(h) * 43758.5453);
}

// 2D hash -> vec2 with components in [-1, 1]
vec2 Hash2D(vec2 p)
{
    float h = dot(p, vec2(12.9898, 78.233));
    float h2 = dot(p, vec2(37.271, 377.632));
    return -1.0 + 2.0 * vec2(fract(sin(h) * 43758.5453), fract(sin(h2) * 43758.5453));
}

// Value noise (not used by the star field itself, kept from the original)
float Noise(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(mix(Hash(n), Hash(n + vec2(1.0, 0.0)), u.x),
               mix(Hash(n + vec2(0.0, 1.0)), Hash(n + vec2(1.0)), u.x), u.y);
}

// Returns the squared distance to the nearest feature point (x) and the offset to it (yz)
vec3 Voronoi(vec2 p)
{
    vec2 n = floor(p);
    vec2 f = fract(p);
    vec2 mg, mr;
    float md = 8.0;
    for(int j = -1; j <= 1; ++j)
    {
        for(int i = -1; i <= 1; ++i)
        {
            vec2 g = vec2(float(i), float(j));
            vec2 o = Hash2D(n + g);
            vec2 r = g + o - f;
            float d = dot(r, r);
            if(d < md)
            {
                md = d;
                mr = r;
                mg = g;
            }
        }
    }
    return vec3(md, mr);
}

// A star appears wherever the Voronoi distance falls below the threshold
vec3 AddStarField(vec2 samplePosition, float threshold)
{
    vec3 starValue = Voronoi(samplePosition);
    if (starValue.x < threshold)
    {
        float power = 1.0 - (starValue.x / threshold);
        return vec3(power * power * power);
    }
    return vec3(0.0);
}

void main()
{
    vec2 samplePosition = v_tex_coord.xy + vec2(u_time * a_velocity.x, u_time * a_velocity.y);
    vec3 finalColor = AddStarField(samplePosition * 20.0, 0.00125);
    gl_FragColor = vec4(finalColor, 1.0);
}
(Worth noting: this is simply a modified version of the original.)
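On the Swift side, hooking this up might look something like the following sketch (untested; the "Starfield.fsh" file name, the sprite setup, and the 0.001 velocity scale are assumptions, not part of the original question):

import SpriteKit
import simd

// Rough sketch: a full-screen sprite under the camera running the star field
// shader, with a_velocity fed in from the ship's velocity each frame.
func addStarfield(to scene: SKScene, camera: SKCameraNode) -> SKSpriteNode {
    let starfield = SKSpriteNode(color: .black, size: scene.size)
    let shader = SKShader(fileNamed: "Starfield.fsh") // assumed file name
    shader.attributes = [SKAttribute(name: "a_velocity", type: .vectorFloat2)]
    starfield.shader = shader
    camera.addChild(starfield)
    return starfield
}

// Call from update(_:) so the stars track the ship's motion; the 0.001 scale is arbitrary.
func updateStarfield(_ starfield: SKSpriteNode, shipVelocity: CGVector) {
    let v = vector_float2(Float(shipVelocity.dx), Float(shipVelocity.dy)) * 0.001
    starfield.setValue(SKAttributeValue(vectorFloat2: v), forAttribute: "a_velocity")
}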
I would like to set the saturation of an entire color channel in my main camera. The closest option that I've found was the Hue vs. Sat(uration) Grading Curve. In the background of the scene is a palm tree that is colored teal. I want the green level of the tree to still show. Same with the top of the grass in the foreground: it's closer to yellow than green, but I'd still want to see the little bit of green value that it has.
I have been searching the Unity documentation and the asset store for a possible 3rd-party shader for weeks, but have come up empty-handed. My current result is the best I could come up with; any help would be greatly appreciated. Thank you.
SOLVED by the check-marked answer. Just wanted to share what the results look like for anyone in the future who stumbles across this issue. Compare the above screenshot, where the palm tree in the background and the grass tops in the foreground are just black and white, to the after screenshot below. Full control in the scene of RGB saturation!
Examples using this method:
Below is a postprocessing shader intended to let you set the saturation of each color channel.
It first takes the original pixel color and gets its hue, saturation, and luminance. That color is taken to its most saturated, neutral-luminance version. The RGB of that is then multiplied by the per-channel saturation factors to compute the RGB of the new hue. The magnitude of that RGB is multiplied by the original saturation to get the new saturation. The new hue and saturation are fed back in with the original luminance to compute the new color.
Shader "Custom/ChannelSaturation" {
Properties{
_MainTex("Base", 2D) = "white" {}
_rSat("Red Saturation", Range(0, 1)) = 1
_gSat("Green Saturation", Range(0, 1)) = 1
_bSat("Blue Saturation", Range(0, 1)) = 1
}
SubShader{
Pass {
CGPROGRAM
#pragma vertex vert_img
#pragma fragment frag
#include "UnityCG.cginc"
uniform sampler2D _MainTex;
float _rSat;
float _gSat;
float _bSat;
/*
source: modified version of https://www.shadertoy.com/view/MsKGRW
written # https://gist.github.com/hiroakioishi/
c4eda57c29ae7b2912c4809087d5ffd0
*/
float3 rgb2hsl(float3 c) {
float epsilon = 0.00000001;
float cmin = min( c.r, min( c.g, c.b ) );
float cmax = max( c.r, max( c.g, c.b ) );
float cd = cmax - cmin;
float3 hsl = float3(0.0, 0.0, 0.0);
hsl.z = (cmax + cmin) / 2.0;
hsl.y = lerp(cd / (cmax + cmin + epsilon),
cd / (epsilon + 2.0 - (cmax + cmin)),
step(0.5, hsl.z));
float3 a = float3(1.0 - step(epsilon, abs(cmax - c)));
a = lerp(float3(a.x, 0.0, a.z), a, step(0.5, 2.0 - a.x - a.y));
a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.x - a.z));
a = lerp(float3(a.x, a.y, 0.0), a, step(0.5, 2.0 - a.y - a.z));
hsl.x = dot( float3(0.0, 2.0, 4.0) + ((c.gbr - c.brg)
/ (epsilon + cd)), a );
hsl.x = (hsl.x + (1.0 - step(0.0, hsl.x) ) * 6.0 ) / 6.0;
return hsl;
}
/*
source: modified version of
https://stackoverflow.com/a/42261473/1092820
*/
float3 hsl2rgb(float3 c) {
float3 rgb = clamp(abs(fmod(c.x * 6.0 + float3(0.0, 4.0, 2.0),
6.0) - 3.0) - 1.0, 0.0, 1.0);
return c.z + c.y * (rgb - 0.5) * (1.0 - abs(2.0 * c.z - 1.0));
}
float4 frag(v2f_img i) : COLOR {
float3 sat = float3(_rSat, _gSat, _bSat);
float4 c = tex2D(_MainTex, i.uv);
float3 hslOrig = rgb2hsl(c.rgb);
float3 rgbFullSat = hsl2rgb(float3(hslOrig.x, 1, .5));
float3 diminishedrgb = rgbFullSat * sat;
float diminishedHue = rgb2hsl(diminishedrgb).x;
float diminishedSat = hslOrig.y * length(diminishedrgb);
float3 mix = float3(diminishedHue, diminishedSat, hslOrig.z);
float3 newc = hsl2rgb(mix);
float4 result = c;
result.rgb = newc;
return result;
}
ENDCG
}
}
}
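If you want to drive those three sliders from code (for example to animate the desaturation), the same property names can be set through Material.SetFloat. A small, hedged sketch, assuming a material created from Custom/ChannelSaturation is assigned in the inspector:

using UnityEngine;

// Hedged sketch: drive the shader's per-channel saturation values at runtime.
public class ChannelSaturationDriver : MonoBehaviour
{
    public Material channelSaturationMaterial; // material using Custom/ChannelSaturation

    [Range(0f, 1f)] public float redSaturation = 1f;
    [Range(0f, 1f)] public float greenSaturation = 1f;
    [Range(0f, 1f)] public float blueSaturation = 1f;

    void Update()
    {
        channelSaturationMaterial.SetFloat("_rSat", redSaturation);
        channelSaturationMaterial.SetFloat("_gSat", greenSaturation);
        channelSaturationMaterial.SetFloat("_bSat", blueSaturation);
    }
}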
If you're using URP (Universal Render Pipeline), which is recommended, you can create a new forward renderer asset, assign the shader to it, and configure it appropriately. Further information, including diagrams, can be found in the official Unity tutorial for custom render passes with URP.
If you aren't using URP, you have other options. You could attach it to specific materials, or attach the below script from Wikibooks to the camera's GameObject to apply a material using the above shader as a postprocessing effect:
using System;
using UnityEngine;

[RequireComponent(typeof(Camera))]
[ExecuteInEditMode]
public class PostProcessingEffectScript : MonoBehaviour {
    public Material material;

    void OnEnable()
    {
        if (null == material || null == material.shader ||
            !material.shader.isSupported)
        {
            enabled = false;
        }
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, material);
    }
}
If you use the postprocessing effect, you will want to render the things you want to exclude from the effect with a different camera, then put everything together. However, this is a bit out of scope for this answer.
My best guess would be to use a custom shader or camera FX that would give you control over each channel.
Hope that helped ;)
This is going to be quite a long post, sorry, but I think it's worth it because it's quite complicated and I would imagine quite a lot of other people would really like to be able to achieve this effect. There are a few other questions on here about SPH but none of them relate to a Niagara implementation. I've also posted this question on Unreal Engine Answers.
I've been attempting to replicate the fluid simulation in Niagara as shown by Asher Zhu here: The Art of Illusion - Niagara Simulation Framework Overview. Skip to 20:25 for the effect I'm after.
Seeing as he shows none of the Niagara system at all, apart from some of the bits for rendering it (which I haven't got to yet), I've followed the article here: link.
Now, I have it looking more or less like a fluid. However, it doesn't really look anything like Asher's. It's rather unstable and will tend to sit for a few seconds with a region of higher density before exploding and then settling down. It also never develops any depth. All the particles, unless they're flying about erratically, sit on the floor. The other problem is collision - I can't see how Asher has managed to get such clean collisions with the environment. My signed distance fields are big, round and uneven and the particles never get anywhere near the walls.
The fourth image below shows it exploding just after it got to the third image and the fifth image is what it looks like after it finally settles down (as well as how far away from the walls the particles end up). The last image shows that it's completely flat (this isn't an issue with the volume of the box; I've tested that).
It's difficult to show everything in the Niagara system on here but the crucial bit is the HLSL code:
OutVelocity = Velocity;
OutPosition = Position;
Density = 0;
float Pressure = 0;

float smoothingRadius = 1.0f;
float restDensity = 0.2f;
float viscosity = 0.018f;
float gas = 500.0f;
const float3 gravity = float3(0, 0, -98);
float pi = 3.141593;

int numParticles;
DirectReads.GetNumParticles(numParticles);

const float Poly6_constant = (315 / (64 * pi * pow(smoothingRadius, 9)));
const float Spiky_constant = (-45 / (pi * pow(smoothingRadius, 6)));

float3 forcePressure = float3(0,0,0);
float3 forceViscosity = float3(0,0,0);

#if GPU_SIMULATION
//Calculate the density of this particle based on the proximity of the other particles.
for (int i = 0; i < numParticles; ++i)
{
    bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
    float OtherMass;
    DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
    float3 OtherPosition;
    DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);

    // Calculate the distance and direction between the target Particle and itself
    float distanceBetween = distance(OtherPosition, OutPosition);
    if (distanceBetween < smoothingRadius)
    {
        Density += OtherMass * Poly6_constant * pow(smoothingRadius - distanceBetween, 3);
    }
}

//Avoid negative pressure by clamping density to reference value
Density = max(restDensity, Density);

//Calculate pressure
Pressure = gas * (Density - restDensity);

//Calculate the forces.
for (int i = 0; i < numParticles; ++i)
{
    if (i != InstanceId) //Only calculate the pressure-based force and Laplacian smoothing function if the other particle is not the current particle.
    {
        bool myBool; //Temporary bool used to catch valid/invalid results for direct reads.
        float OtherMass;
        DirectReads.GetFloatByIndex<Attribute="Mass">(i, myBool, OtherMass);
        float OtherDensity;
        DirectReads.GetFloatByIndex<Attribute="Density">(i, myBool, OtherDensity);
        float3 OtherPosition;
        DirectReads.GetVectorByIndex<Attribute="Position">(i, myBool, OtherPosition);
        float3 OtherVelocity;
        DirectReads.GetVectorByIndex<Attribute="Velocity">(i, myBool, OtherVelocity);

        float3 direction = OutPosition - OtherPosition;
        float3 normalisedVector = normalize(direction);
        float distanceBetween = distance(OtherPosition, OutPosition);

        if (distanceBetween > 0 && distanceBetween < smoothingRadius) //distanceBetween must be >0 to avoid a div0 error.
        {
            float OtherPressure = gas * (OtherDensity - restDensity);

            //Calculate particle pressure.
            forcePressure += -1 * Mass * normalisedVector * (Pressure + OtherPressure) / (2 * Density * OtherDensity) * Spiky_constant * pow(smoothingRadius - distanceBetween, 2);

            //Viscosity-based force computation with Laplacian smoothing function (W).
            const float W = -(pow(distanceBetween, 3) / (2 * pow(smoothingRadius, 3))) + (pow(distanceBetween, 2) / pow(smoothingRadius, 2)) + (smoothingRadius / (2 * distanceBetween)) - 1;
            forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * W * normalisedVector;
            //forceViscosity += viscosity * (OtherMass / Mass) * (1 / OtherDensity) * (OtherVelocity - Velocity) * (45 / (pi * pow(smoothingRadius, 6))) * (smoothingRadius - distanceBetween);
        }
    }
}

OutVelocity += DeltaTime * ((forcePressure + forceViscosity) / Density);
OutPosition += DeltaTime * OutVelocity;
#endif
This code does two loops through all the other particles in the system, one to calculate the density and pressure and one to calculate the forces, and then it outputs the velocity and position, just like the article I linked to above and some other things I've seen. Yet it simply doesn't behave as shown in those resources.
I haven't applied any grid-based optimisation. To do this I'll just apply the grid optimisation used in the PBD example in UE's Content Examples project. But for now it's an added complication that isn't really needed; it runs fine with thousands of particles even without it.
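For reference (my reading of the constants, not something stated in the post), Poly6_constant and Spiky_constant look like the standard SPH kernels from Müller et al., where r is the distance between particles and h the smoothing radius:

\rho_i = \sum_j m_j \, W_{\text{poly6}}(r_{ij}, h), \qquad W_{\text{poly6}}(r, h) = \frac{315}{64\pi h^9}\,(h^2 - r^2)^3 \quad (0 \le r \le h)

\nabla W_{\text{spiky}}(r, h) = -\frac{45}{\pi h^6}\,(h - r)^2\,\hat{r}

The pressure loop above does use the spiky gradient form, (h - r)^2 with the -45/(pi h^6) constant, but the density loop uses pow(smoothingRadius - distanceBetween, 3), i.e. (h - r)^3, where the textbook Poly6 kernel has (h^2 - r^2)^3; that difference may be worth double-checking against whichever reference was followed.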
I've looked at a few resources (articles, videos and academic research papers) and I've spent a fortnight experimenting, including trial and error on the values at the top of the code. I'm obviously missing something crucial. What can it be? I'm so frustrated now that any help would be much appreciated.
I'm using an SKAction to rotate an SKCameraNode right now, which in turn rotates the scene. This works really well, as gravity and all nodes (except the camera's children) are rotated as well. Now I have a chromatic aberration shader applied to the scene, and every time I rotate the scene the background color of the scene immediately covers the entire screen. This happens regardless of the shader used. I even made a shader that just returns the exact texture and it still happens. Just setting the camera's zRotation does not fix this problem either, so it's not the SKAction causing it. This is the shader:
void main( void )
{
    float divider = 200.0;
    float deltaX = sin(v_tex_coord.y * 3.14 * 10.0 + u_time * 4.0) * 0.01;

    vec2 rCoord = v_tex_coord;
    rCoord.x = rCoord.x - sin(u_time * 10.0) / divider + deltaX;
    rCoord.y = rCoord.y - sin(u_time * 10.0) / divider;

    vec2 gCoord = v_tex_coord;
    gCoord.x = gCoord.x + sin(u_time * 10.0) / divider + deltaX;
    gCoord.y = gCoord.y + sin(u_time * 10.0) / divider;

    vec2 bCoord = v_tex_coord;
    bCoord.x = bCoord.x + deltaX;

    vec4 rChannel = texture2D(u_texture, rCoord);
    vec4 gChannel = texture2D(u_texture, gCoord);
    vec4 bChannel = texture2D(u_texture, bCoord);

    gl_FragColor = vec4(rChannel.r, gChannel.g, bChannel.b, 1.0);
}
Also this is the code to initialize the shader:
let shader = SKShader(fileNamed: "ChromaShader.fsh")
self.shader = shader
shouldEnableEffects = true
Thanks in advance!
I'm trying to make a projectile which has the movement behaviour shown in red in the diagram.
What I know and have now is two vectors: Start and End.
The end goal is to have some randomness in the arc at each iteration, and to change the projectile's velocity in a lerp fashion. I've done linear movement generation before, but nothing like this.
If my question feels like asking you to do my work for me (my usual fear when asking questions as a novice coder), could I have some tips and hints on what methods/commands I should look into? The language is C# and the Unity version is 5.6.
EDIT # 1
After getting pointed in the right direction, I could achieve something closer to the end goal.
The blue straight line is just a representation of the distance and angle between A (the initiation point) and B (the target). The red arc is the trajectory I want my projectile to follow.
Fortunately, I figured out that the path I wanted the projectile to follow was a cubic Bezier, and I got the in-editor result shown in the diagram above with A, B, modA, and modB. There are just a few more things I need to get working: actually making the projectile follow this path, controlling its velocity, and so on. Following are more questions I couldn't get through today.
First, the general condition is that A is fixed and B is not. In order to maintain the generally desired flight path, I figured I need another virtual line, lineB (from modA to modB), to sync with lineA's angle and distance, so that when B (the target) moves around in any direction the arc is not too extremely skewed. But in my attempts today I either got the wrong angle from lineA, or modA just orbited around A and the numbers were weird (e.g. the angle changing even while B moved in a straight line away from A).
Second is to have some random-but-similar variation of the red arc from one projectile fired to the next. I'm guessing this will be somewhat easier once I get past the first problem, since it's just a matter of controlling lineB.
EDIT # 2
Everything asked above is resolved: a path is generated from A to B with the arc shaped by modA and modB, there is randomness in modA and modB at each iteration, and modA and modB adjust to B's position in real time.
Now all that's left is to actually make the projectile follow the path and control its velocity until it reaches B. Below is the code generating the arc path. How should I approach this?
public Transform[] controlPoints = new Transform[4];
public LineRenderer lineRenderer;

private int curveCount = 0;
private int SEGMENT_COUNT = 50;

private void DrawCurve()
{
    for (int j = 0; j < curveCount; j++)
    {
        for (int i = 1; i <= SEGMENT_COUNT; i++)
        {
            float t = i / (float)SEGMENT_COUNT;
            int nodeIndex = j * 3;
            Vector3 pixel = CalculateCubicBezierPoint(
                t,
                controlPoints[nodeIndex].position,
                controlPoints[nodeIndex + 1].position,
                controlPoints[nodeIndex + 2].position,
                controlPoints[nodeIndex + 3].position);
            lineRenderer.positionCount = (j * SEGMENT_COUNT) + i;
            lineRenderer.SetPosition((j * SEGMENT_COUNT) + (i - 1), pixel);
        }
    }
}

private Vector3 CalculateCubicBezierPoint(float t, Vector3 start, Vector3 modA, Vector3 modB, Vector3 end)
{
    float u = 1 - t;
    float t2 = Mathf.Pow(t, 2);
    float u2 = Mathf.Pow(u, 2);
    float t3 = Mathf.Pow(t, 3);
    float u3 = Mathf.Pow(u, 3);

    Vector3 p = u3 * start;
    p += 3 * u2 * t * modA;
    p += 3 * u * t2 * modB;
    p += t3 * end;

    return p;
}
You should use AnimationCurve.
You can edit the curve graphically in the inspector (via a public AnimationCurve variable), then use this script to move the object along the path.
using UnityEngine;
using System.Collections;

public class AnimationPath : MonoBehaviour
{
    public AnimationCurve XCurve;
    public float TotalTravelTime = 5.0f;
    public float TravelSpeed = 50.0f;
    public float XRange = 10.0f;

    // Use this for initialization
    void Start()
    {
        StartCoroutine("Travel");
    }

    IEnumerator Travel()
    {
        float ElapsedTime = 0.0f;
        while (ElapsedTime < TotalTravelTime)
        {
            float XPos = XCurve.Evaluate(ElapsedTime / TotalTravelTime) * XRange;
            transform.position = new Vector3(XPos, transform.position.y, transform.position.z + TravelSpeed * -Time.deltaTime);
            yield return null;
            ElapsedTime += Time.deltaTime;
        }
    }
}
I hope this can help you.
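Alternatively, if you want to keep the cubic Bezier from the question as the actual path and use an AnimationCurve only to shape the speed along it, here's a rough, untested sketch along the same lines (the field names and the flightTime value are assumptions):

using System.Collections;
using UnityEngine;

// Rough sketch: move a projectile along the question's cubic Bezier,
// using an AnimationCurve only to ease the speed along the path.
public class BezierProjectile : MonoBehaviour
{
    public Transform start, modA, modB, end; // same control points as in DrawCurve
    public AnimationCurve speedCurve = AnimationCurve.EaseInOut(0, 0, 1, 1);
    public float flightTime = 1.5f;          // seconds from A to B (assumed value)

    public void Fire()
    {
        StartCoroutine(Travel());
    }

    IEnumerator Travel()
    {
        float elapsed = 0f;
        while (elapsed < flightTime)
        {
            // Remap linear time through the curve so the velocity eases in and out.
            float t = speedCurve.Evaluate(elapsed / flightTime);
            transform.position = CalculateCubicBezierPoint(t,
                start.position, modA.position, modB.position, end.position);
            elapsed += Time.deltaTime;
            yield return null;
        }
        transform.position = end.position;
    }

    Vector3 CalculateCubicBezierPoint(float t, Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3)
    {
        float u = 1 - t;
        return u * u * u * p0 + 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t * p3;
    }
}

Evaluating the curve on normalized time and feeding the result into the Bezier gives ease-in/ease-out control of the velocity without changing the path itself.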
In a surface shader, given the world's up axis (and the others too), a world-space position and a normal in world space, how can we rotate the world-space position into the space of the normal?
That is, given an up vector and a non-orthogonal target-up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position only affected by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is bidimensional, but I need to solve this for a 3D space.
Looks like you're trying to rotate pos by the same rotation that would transform up to new_up.
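Spelled out, the rotation being built in the code below is the standard cross-product construction (the same matrix the link mentioned in the next paragraph derives): with v = up × new_up, s = ‖v‖ (the sine of the angle) and c = up · new_up (the cosine of the angle),

R = I + [v]_{\times} + [v]_{\times}^{2}\,\frac{1 - c}{s^{2}}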
Using the rotation matrix found here, we can rotate pos using the following code. This will work either in the surface function or a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0, 1, 0);

// Build the rotation matrix using notation from the link above
float3 v = cross(up, new_up);
float s = length(v);       // Sine of the angle
float c = dot(up, new_up); // Cosine of the angle

float3x3 VX = float3x3(
    0, -1 * v.z, v.y,
    v.z, 0, -1 * v.x,
    -1 * v.y, v.x, 0
); // This is the skew-symmetric cross-product matrix of v

float3x3 I = float3x3(
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
); // The identity matrix

// The rotation matrix! YAY!
// (s is zero when up and new_up are parallel, so guard that case before dividing.)
float3x3 R = I + VX + mul(VX, VX) * (1 - c) / pow(s, 2);

// Finally we rotate
float3 new_pos = mul(R, pos);
This is assuming that new_up is normalized.
If the "target up normal" is a constant, the calculation of R could (and should) only happen once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable. Calculating it for every vertex/fragment is costly, consider what it is you actually need.
If your pos is a vector-4, just do the above with the first three elements, the fourth element can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactical mistakes in the above, please forgive me.
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, which is a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;
using System.Collections;

public class Follower : MonoBehaviour {

    public Vector3 point;                  // current point on the circle
    public Vector3 origin = Vector3.zero;  // centre of the circle
    public Vector3 axis = Vector3.forward;
    [Range(0f, 1f)]
    public float procession = 0f;          // < normalized fraction of a full revolution

    float distance;
    Vector3 startDirection;

    void Start() {
        // Remember the starting radius and direction so procession is measured from here.
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        startDirection = offset.normalized;
    }

    void Update() {
        // Convert the normalized procession into an angle and rotate the start direction around the axis.
        float angle = (procession % 1f) * 360f;
        Vector3 direction = Quaternion.AngleAxis(angle, axis) * startDirection;

        // Step back out to the original radius to get the point's new position on the circle.
        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}