Calculate random value * unknown value = 1 - unity3d

I'm having trouble importing models from a Maya FBX into Unity at the right scale. I traced the problem to Unity's import of the FBX file.
There's not really a workaround for this other than editing the .meta file by hand:
useFileScale: 0
Since modelImporter.isFileScaleUsed in Unity is read-only, I can't change the value with a script, but I can change the global scale:
globalScale
Say the file scale is 0.01 and the normal value for the scale is 1; how can I calculate 0.01 * 100 = 1 with UnityScript, meaning I need to get the value 100 out of the equation 0.01 * ? = 1?

Divide 1 by the value you know. For example, if a * b = 1 then 1/a = b.
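Applied to the importer, that works out to globalScale = 1 / fileScale. Below is a minimal editor sketch (in C# rather than UnityScript, and assuming your Unity version exposes the read-only ModelImporter.fileScale; otherwise hard-code the 0.01 you see in the importer):
using UnityEditor;

class FixFbxScale : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        ModelImporter importer = (ModelImporter)assetImporter;
        // fileScale * globalScale should multiply out to 1, so divide:
        importer.globalScale = 1f / importer.fileScale;
    }
}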

Related

Unity wants a Y-flipped projection matrix when rendering under Direct3D

First of all, hello.
I have several questions tied together under this title, because I can't summarize them all into one good question.
To set the scene, I am using Unity 2020.1.2f1 with URP, and I am trying to rebuild Unity's projection matrix as used with Direct3D 11 myself, in order to fully understand how it works.
I know that Unity uses a left-handed system for object and world space, but not for view space, which still uses OpenGL's old right-handed convention. I would say that clip space is LH too, as the Z axis points towards the screen, but Unity makes me doubt that.
Let me explain: we all know that the handedness is given by the matrix, which is why the projection matrix (column-major here) used by Unity for OpenGL-like APIs looks like this:
[ x 0 0 0 ] x = cot(fovH/2) c = (f+n)/(n-f)
[ 0 y 0 0 ] y = cot(fovV/2) e = (2*f*n)/(n-f)
[ 0 0 c e ] d = -1
[ 0 0 d 0 ]
where 'c' and 'e' clip and flip 'z' into the depth buffer from the RH view space to the LH clip space (or NDC once the perspective division is applied), 'w' holds the flipped view depth, and the depth buffer is not reversed.
With the near plane at 0.3 and the far plane at 100, Unity's frame debugger confirms that the matrix we send to the shader is equal to 'glstate_matrix_projection' (the matrix behind the UNITY_MATRIX_P macro in the shader), as well as to the camera's own 'camera.projectionMatrix', since that is the matrix built internally by Unity following the OpenGL convention. It is even confirmed with 'GL.GetGPUProjectionMatrix()', which tweaks the camera's projection matrix to match the graphics API's requirements before sending it to the GPU, but which changes nothing in this case.
// _CamProjMat
float n = viewCam.nearClipPlane;
float f = viewCam.farClipPlane;
float fovV = Mathf.Deg2Rad * viewCam.fieldOfView;
float fovH = 2f * Mathf.Atan(Mathf.Tan(fovV / 2f) * viewCam.aspect);
Matrix4x4 projMat = new Matrix4x4();
projMat.m00 = 1f / Mathf.Tan(fovH / 2f);
projMat.m11 = 1f / Mathf.Tan(fovV / 2f);
projMat.m22 = (f + n) / (n - f);
projMat.m23 = 2 * f * n / (n - f);
projMat.m32 = -1f;
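// the remaining entries keep the 0 they get from new Matrix4x4(), in particular m33 = 0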
// _GPUProjMat
Matrix4x4 GPUMat = GL.GetGPUProjectionMatrix(viewCam.projectionMatrix, false);
Shader.SetGlobalMatrix("_GPUProjMat", projMat);
// _UnityProjMat
Shader.SetGlobalMatrix("_UnityProjMat", viewCam.projectionMatrix);
gives us:
[image: frame_debugger_OpenGL]
HOWEVER, when I switch to Direct3D11, 'glstate_matrix_projection' is flipped vertically: the m11 component of the matrix is negative, which flips the Y axis when applied to a vertex. The projection matrix Unity uses for Direct3D also applies the reversed Z buffer technique, giving us a matrix like:
[ x 0 0 0 ] x = cot(fovH/2) c = n/(f-n)
[ 0 y 0 0 ] y = -cot(fovV/2) e = (f*n)/(f-n)
[ 0 0 c e ] d = -1
[ 0 0 d 0 ]
(you'll notice that 'c' and 'e' are respectively the same as the f/(n-f) and (f*n)/(n-f) given in the Direct3D documentation for the D3DXMatrixPerspectiveFovRH() function, with 'f' and 'n' swapped to apply the reversed Z buffer)
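For reference, here is a minimal sketch (my own reconstruction, not Unity's internal code) of how that Direct3D matrix can be derived from the OpenGL-convention one: negate the Y row and remap the depth row so the near plane lands on 1 and the far plane on 0:
Matrix4x4 gl = viewCam.projectionMatrix;   // OpenGL convention, z in [-1, 1] after the perspective divide
Matrix4x4 d3d = gl;
d3d.m11 = -gl.m11;                         // flip Y for the D3D render-target orientation
for (int c = 0; c < 4; c++)
{
    // reversed Z: z_d3d = 0.5*w - 0.5*z_gl, so -1 (near) maps to 1 and +1 (far) maps to 0
    d3d[2, c] = 0.5f * gl[3, c] - 0.5f * gl[2, c];
}
// d3d now has m11 = -cot(fovV/2), m22 = n/(f-n), m23 = f*n/(f-n), matching the matrix above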
From there, there are several concerns:
If we give the shader a projection matrix of our own instead of 'glstate_matrix_projection', obtained from 'GL.GetGPUProjectionMatrix()' with false as the second parameter, the matrix won't be correct: the rendered screen will be flipped vertically, which is not wrong given the parameter.
[image: frame_debugger_Direct3D]
Indeed, this boolean parameter tells 'GL.GetGPUProjectionMatrix()' whether the image will be rendered into a RenderTexture or not, and it is justified since OpenGL and Direct3D render texture coordinates differ like this:
[image: D3D_vs_OGL_rt_coord]
In a way that makes sense, because Direct3D's screen space is in pixel coordinates, whose handedness is the same as for render texture coordinates, accessed in the pixel shader through the 'SV_Position' semantic. The clip space is then only flipped vertically, into a right-handed system with positive Y going down the screen and positive Z going towards the screen.
Nonetheless, I render my vertices directly to the screen, not into any render texture... so is this parameter of 'GL.GetGPUProjectionMatrix()' a trick that should simply be set to true when using Direct3D-like APIs?
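(For comparison, one can upload both variants side by side and inspect them in the frame debugger; the _noRT/_RT suffixes are just placeholder names:)
Shader.SetGlobalMatrix("_GPUProjMat_noRT", GL.GetGPUProjectionMatrix(viewCam.projectionMatrix, false));
Shader.SetGlobalMatrix("_GPUProjMat_RT",   GL.GetGPUProjectionMatrix(viewCam.projectionMatrix, true));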
Another concern: given that clip space, NDC, and screen space are left-handed in OpenGL-like APIs, can we conclude that these spaces are right-handed in Direct3D-like APIs... right? Where am I wrong? In every topic, documentation page, or dev blog I have ever read, nobody states the handedness of those spaces; it doesn't seem to bother anyone. Even the projection matrices provided by the official Direct3D documentation don't flip the Y axis, so why does Unity's? I admit I have only tried rendering with D3D or OpenGL from inside Unity, so perhaps Unity is doing black magic under the hood again, as usual.
I hope I explained all this mess clearly enough; thanks to everyone who reached this point ;)
I really need to find out what's going on here, because Unity's documentation is increasingly outdated, with poor explanations of specific engine parts.
Any help is really appreciated!

Gravity in accelerometric measurements

I have taken from a data set the x, y and z values of an activity (e.g. walking, running) recorded by an accelerometer. Since the collected data also contains gravity, I removed it with the following filter in MATLAB:
fc = 0.3;
fs = 50;
x = ...;
y = ...;
z = ...;
[but,att] = butter(6,fc/(fs/2));
gx = filter(but,att,x);
gy = filter(but,att,y);
gz = filter(but,att,z);
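% gx/gy/gz are the low-pass (~gravity) estimates; note that filter() starts
% from zero initial conditions, so the first samples are a start-up transient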
new_x = x-gx;
new_y = y-gy;
new_z = z-gz;
A = sqrt(new_x.^2 + new_y.^2 + new_z.^2); % magnitude of the gravity-removed acceleration
plot(A)
Then I calculated the magnitude and plotted it on a graph.
However, every graph, even after removing gravity, starts at a magnitude of 1 g (9.8 m/s^2). Why? Shouldn't it start at 0, since I removed gravity?
You need to wait for the filter output to ramp up. Include some additional data at the beginning of the file that you don't graph, for this purpose.
How accurate do your calculations need to be? With walking and running the angle of the accelerometer can change, so the orientation of the gravity vector can change throughout the gait cycle. How much of a change in orientation you can expect to see depends on the sensor location and the particular motion you are trying to capture.

How to perform an orthographic projection on a z-Buffer image in Matlab?

I am facing the same problem as described in this post: Depth as distance to camera plane in GLSL. However, I am not facing it with OpenGL, but simply with MATLAB.
I have a depth image rendered from the Z-buffer in 3ds Max, and I was not able to get an orthographic representation of it. For a better understanding, I will use the same sketch as in that post:
      *           |--*
     /            |
    /             |
 C-----*          C-----*
    \             |
     \            |
      *           |--*
The 3 asterisks are pixels and the C is the camera. The lines from the
asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustums settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel is the same, so only one channel has to be used.
%Read in image
img = imread('KinectImage.png');
%Throw away, except one channel (all hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer they are to the camera, the brighter they are. If a pixel is 0 (no object to render there), it is set to 2^16 so that after the bit complement the value is still 0.
%Inverse bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
To apply the matrix to each Z-buffer value, I reshape the image into a one-dimensional vector and build up a vector like [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of each result vector and reshape it into an image.
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the Z-buffer is not orthographic is still there, so did I get something wrong? Is my calculation flawed? Or did I make a mistake somewhere?
Depth Image coming from 3ds max
After the computation (the white is still "0" , but the color axis has changed)
It would be great to achieve this in 3ds Max directly, which would resolve the issue, but I was not able to find such a setting for the Z-buffer. Thus, I want to solve this using MATLAB.

Visualizing Sine Wave with Processing

I have 1000+ rows of sine wave data which change with time, and I want to visualize them with the Processing language. My aim is to create an animation which draws a sine wave over time, starting from the middle of the window [height/2]. I also want to show only 1-second periods of that wave; I mean that after 1 second the first coordinate should disappear, and so forth.
How can I achieve that?
Thanks
Sample Data :
TIME X Y
0.1333 0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
The way you'd achieve that is to split this project into tasks:
load & parse data
update time and render data
To make sure part 1 goes smoothly, it's probably best to make your data easy to parse first. The sample data looks like a table/spreadsheet, but it's not formatted with a standard separator (e.g. comma or tab). You can fiddle with things when you parse, but I recommend starting from clean data, for example if you plan on using a space as the separator:
TIME X Y
0.1333 0.0 0
0.2666 0.1 0.0999983333
0.3999 0.2 0.1999866669
0.5332 0.3 0.299955002
0.6665 0.4 0.3998933419
0.7998 0.5 0.4997916927
0.9331 0.6 0.5996400648
1.0664 0.7 0.6994284734
Once that's done, you can use loadStrings() to load the data and split() to break a row into 3 elements which can be converted from string to float.
Once you've got values to use, you can store them. You can either create three arrays, each holding a field from the loaded data (one for all the X values, one for all the Y values and one for all the time values), or you can cheat and use a single array of PVector objects. Although PVector is meant for 3D math/linear algebra, you have 2D coordinates, so you can store time as the 3rd 'dimension'/component.
Part two revolves mostly around updating based on time, and this is where millis() comes in handy. You can check the amount of time passed between updates and if it's greater than a certain (delay) value, it's time for another update (of the frame/data row index).
The last part you need to worry about is rendering the data on screen. Luckily in your sample data the coordinates are normalized (between 0.0 and 1.0), which makes them easy to map to the sketch dimensions (by simple multiplication). Otherwise the map() function comes in handy.
Here's a sketch to illustrate the above; data.csv is a text file containing the formatted sample data from above:
PVector[] frames;//keep track of the frame data (position (x,y) and time (stored in PVector's z property))
int currentFrame = 0,totalFrames;//keep track of the current frame and total frames from the csv
int now, delay = 1000;//keep track of time and a delay to update frames

void setup(){
  //handle data
  String[] rows = loadStrings("data.csv");//load data
  totalFrames = rows.length-1;//get total number of lines (-1 = sans the header)
  frames = new PVector[totalFrames];//initialize/allocate frame data array
  for(int i = 1 ; i <= totalFrames; i++){//start parsing data (from 1, skip header)
    String[] frame = rows[i].split(" ");//chop each row into 3 strings (time,x,y)
    frames[i-1] = new PVector(float(frame[1]),float(frame[2]),float(frame[0]));//parse each row (note i-1 to get back to a 0-based index, and how the PVector is initialized 1,2,0 = x,y,time)
  }
  now = millis();//initialize this to keep track of time
  //render setup, up to you
  size(400,400);smooth();fill(0);strokeWeight(15);
}

void draw(){
  //update
  if(millis() - now >= delay){//if the amount of time between the current millis() and the last time we updated is greater than the delay (i.e. every 'delay' ms)
    currentFrame++;//update the frame index
    if(currentFrame >= totalFrames) currentFrame = 0;//reset to 0 if we reached the end
    now = millis();//finally update our timer/stop-watch variable
  }
  PVector frame = frames[currentFrame];//get the data for the current frame
  //render
  background(255);
  point(frame.x * width,frame.y * height);//draw
  text("frame index: " + currentFrame + " data: " + frame,mouseX,mouseY);
}
There are a couple of extra notes needed:
You mentioned moving to the next coordinate after 1 second. From what I can see in your sample data there are 8 updates per second, so 1000/8 would probably work better. It's up to you how you handle timing though.
I assume your full set includes data for a sine wave movement. I've mapped to the full sketch coordinates, but in the render part of the draw() loop you can map however you like (e.g. including a height/2 offset, etc.). Also, if you're not familiar with sine waves, have a look at these Processing resources: Daniel Shiffman's SineWave sample and Ira Greenberg's trig tutorial.

iPhone - AVAudioPlayer - convert decibel level into percent

I'd like to update an existing iPhone application which uses AudioQueue for playing audio files. The levels (peakPowerForChannel, averagePowerForChannel) were linear, from 0.0f to 1.0f.
Now I'd like to use the simpler AVAudioPlayer class, which works fine; the only issue is that the levels are now in decibels, from -120.0f to 0.0f, rather than linear.
Does anyone have a formula to convert them back to linear values between 0.0f and 1.0f?
Thanks
Tom
Several Apple examples use the following formula to convert the decibels into a linear range (from 0.0 to 1.0):
double percentage = pow (10, (0.05 * power));
where power is the value you get from one of the various level meter methods or functions, such as AVAudioPlayer's averagePowerForChannel:
Math behind the Linear and Logarithmic value conversion:
1. Linear to Decibel (logarithmic):
decibelValue = 20.0f * log10(linearValue)
Note: log is base 10
Suppose the linear value is expressed as a percentage ranging from 0 (minimum volume) to 100 (maximum volume); then the decibelValue for half volume (50%) is
decibelValue = 20.0f * log10(50.0f/100.0f) ≈ -6 dB
Full volume:
decibelValue = 20.0f * log10(100.0f/100.0f) = 0 dB
Complete mute:
decibelValue = 20.0f * log10(0/100.0f) = -infinity
2. Decibel(logarithmic) to Linear:
LinearValue = pow(10.0f, decibelValue/20.0f)
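As a compact sketch of both directions (written here in C# with hypothetical names; the math is language-agnostic, and 0.05 * power is just power / 20):
using System;

static class LevelMeter
{
    // -120 dB .. 0 dB maps to roughly 0.000001 .. 1.0
    public static float DecibelToLinear(float db)     => (float)Math.Pow(10.0, db / 20.0);

    // 0 (silence) maps to -infinity dB; 1.0 maps to 0 dB
    public static float LinearToDecibel(float linear) => 20.0f * (float)Math.Log10(linear);
}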
Apple uses a lookup table in their SpeakHere sample that converts from dB to a linear value displayed on a level meter.
I moulded their calculation into a small routine; see here.