Dividing a character by a Float in Ada - type-conversion

So I'm trying to divide a character by a float using an operator, but I don't know why my code gets the error message "unexpected argument for "Position" attribute":
function "/" (Teck : in Character;
              Flyt : in Float) return Float is
begin
   return Float (Character'Position (Teck)) / Flyt;
end "/";
Can somebody explain how Character'Position works and what I need to change here? I've used pretty much the same code previously in a different assignment.

Regarding the ARM (Ada Reference Manual), characters are, for example, defined as
UC_C_Cedilla : constant Character := 'Ç'; --Character'Val(199)
And if you read the operations on discrete types in the ARM, you'll see that the inverse attribute of Val is Pos, not Position; what you want here is Float (Character'Pos (Teck)) / Flyt.
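For what it's worth, Character'Pos simply returns the character's position number, i.e. its code (Character'Pos ('A') is 65, since Character is Latin-1). As a rough analogue, here is my own C sketch of what the corrected operator computes; it is only an illustration, not part of the original answer:

#include <stdio.h>

/* C analogue of the Ada operator: take the character's code
   (what Character'Pos (Teck) yields in Ada) and divide it by the float */
static float divide_char_by_float(char teck, float flyt)
{
    return (float)(unsigned char)teck / flyt;
}

int main(void)
{
    printf("%f\n", divide_char_by_float('A', 2.0f));  /* 'A' has position 65, so this prints 32.500000 */
    return 0;
}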


Is there any way to set the length of an integer array with a variable in HLSL?

Firstly, I hope you can understand the awkward grammar of what follows; English is not my first language.
I'm currently using the Unity engine. What I want to do is send a number of rows to a shader, so that the shader can draw that many rows of stripes on a GameObject's mesh.
I managed to send the number to the shader, but when I try to create an int array in the CGPROGRAM part (which is HLSL) whose size is that number of rows, Unity gives me this error message: "array dimensions must be literal scalar expressions".
This is the integer property that I set in my Unity shader. It gets its integer value from a C# script function (this part doesn't have any issue):
_LowCount ("LowCount", int) = 0
And this is the CGPROGRAM part that I'm struggling with.
The variable below is declared at global scope. It receives its value from the properties block:
int _LowCount;
And this is the fragment shader function, which declares an integer array locally, using the integer variable _LowCount as the array size:
fixed4 frag(v2f i) : COLOR {
    fixed4 c = 0;
    int ColorsArray[_LowCount];   // this declaration triggers the error
    for (int aa = 0; aa < _LowCount; aa++) {
        ColorsArray[aa] = 0;
    }
    return c;
}
The declaration below, from the fragment shader function, is what gives me the error that I mentioned above:
int ColorsArray[_LowCount];
I searched for this issue on Google and realized that I have to set the array size with a literal number (not a variable). But I need an integer array whose size comes from a variable that I can set to any integer value at any time. Is there any solution?
PS: I started learning CG/shader programming just two weeks ago, so my understanding may be wrong. Thank you.
There is no way to define an HLSL array with a variable size. From the docs:
Literal scalar expressions declared containing most other types
Either preallocate with the maximum array size, i.e. int ColorsArray[N]; where N is a literal equal to the maximum possible _LowCount, and only use the first _LowCount entries.
It's not super clear what your end goal is, but another solution may be to execute a different shader for each object instead. See if you can update your question a little and I'll update the answer.
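To make the preallocation suggestion concrete, here is the same "fixed maximum size, variable count" pattern sketched in plain C (the names and the cap of 64 are made up for illustration; in the shader the cap has to appear as a literal in the array declaration):

#include <stdio.h>

#define MAX_LOW_COUNT 64               /* compile-time cap; a literal, so the declaration is legal */

int main(void)
{
    int lowCount = 13;                 /* runtime value, analogous to the _LowCount property */
    int colorsArray[MAX_LOW_COUNT];    /* size is a constant expression */

    /* only the first lowCount entries are used; the rest stay untouched */
    for (int i = 0; i < lowCount && i < MAX_LOW_COUNT; i++) {
        colorsArray[i] = 0;
    }

    printf("initialised %d of %d slots\n",
           lowCount < MAX_LOW_COUNT ? lowCount : MAX_LOW_COUNT, MAX_LOW_COUNT);
    return 0;
}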

HLSL - asuint of a float seems to return the wrong value

I've been attempting to encode 4 uints (8-bit) into one float so that I can easily store them in a texture along with a depth value. My code wasn't working, and ultimately I found that the issue boiled down to this:
asuint(asfloat(uint(x))) returns 0 in most cases, when it should return x.
In theory, this code should return x (where x is a whole number) because the bits in x are being converted to float, then back to uint, so the same bits end up being interpreted as a uint again. However, I found that the only case where this function seems to return x is when the bits of x are interpreted as a very large float. I considered the possibility that this could be a graphics driver issue, so I tried it on two different computers and got the same issue on both.
I tested several other variations of this code, and all of these seem to work correctly.
asfloat(asuint(float(x))) = x
asuint(asint(uint(x))) = x
asuint(uint(x)) = x
The only case that does not work as intended is the first case mentioned in this post. Is this a bug, or am I doing something wrong? Also, this code is being run in a fragment shader inside of Unity.
After a long time of searching, I found some sort of answer, so I figured I would post it here in case anyone else stumbles across this problem. The reason that this code does not work has something to do with float denormalization. (I don't completely understand it.) Anyway, denormalized floats were being flushed to zero, so asuint of a denormalized float would always return 0.
A somewhat acceptable solution may be (asuint(asfloat(x | 1073741824)) & 3221225471)
This ensures that the float is normalized; however, it also erases any data stored in bit 30 (the second-most-significant bit). If anyone has another solution that preserves this bit, let me know!
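For reference, the same bit juggling written as plain C on the CPU (my own sketch; CPU code does not flush denormals the way the GPU path apparently did, so this only illustrates what the two magic numbers do, not the original failure):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* CPU-side equivalents of HLSL's asfloat/asuint: reinterpret the raw bits */
static float    bits_to_float(uint32_t u) { float f; memcpy(&f, &u, sizeof f); return f; }
static uint32_t float_to_bits(float f)    { uint32_t u; memcpy(&u, &f, sizeof u); return u; }

int main(void)
{
    uint32_t x = 12345u;   /* a small value whose raw bits would form a denormal float */

    /* 0x40000000 (1073741824) sets the top exponent bit, forcing a normalized float;
       0xBFFFFFFF (3221225471) clears that bit again after the round trip.
       Whatever x had in bit 30 is lost, hence the caveat above. */
    uint32_t encoded = x | 0x40000000u;
    float    f       = bits_to_float(encoded);
    uint32_t decoded = float_to_bits(f) & 0xBFFFFFFFu;

    printf("x = %u, decoded = %u\n", x, decoded);   /* x = 12345, decoded = 12345 */
    return 0;
}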

Using a variable-sized argument in Matlab coder

I want to generate C++ code for a DCT function using MATLAB Coder. I wrote this simple function and tried to convert it to C++:
function output_signal = my_dct(input_signal)
output_signal = dct(input_signal);
end
When I use a fixed size type for the input argument (such as double 1x64), there is no problem; however, a variable-sized type (such as double 1x:64) for the input argument results in these errors:
The preceding error is caused by: Non-constant expression..
The input to coder.const cannot be reduced to a constant.
Can anyone please help me?
Thanks in advance.
The documentation is a bit vague for DCT in Coder, but it implies that the input size must be a constant power of 2 along the dimension of the transform. From DCT help:
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
C and C++ code generation for dct requires DSP System Toolbox™ software.
The length of the transform dimension must be a power of two. If specified, the pad or truncation value must be constant. Expressions or variables are allowed if their values do not change.
It doesn't directly say that the length of the input (at least along the dimension being transformed) going into the dct function must be a constant, but given how Coder works, it probably has to be. Since that's the error being returned, it appears to be a limitation.
You could always modify your calling function to zero pad to a known maximum length, thus making the length constant.
For instance, something like this might work:
function output_signal = my_dct(input_signal)
    maxlength = 64;
    tinput = zeros(1, maxlength);
    tinput(1:min(end, numel(input_signal))) = input_signal(1:min(end, maxlength));
    output_signal = dct(tinput);
end
That code will cause tinput to always have a size of 1 by 64 elements. Of course, the output will then also always be 64 elements long, which means it will be scaled differently and may have a different frequency scale than you're expecting.

uint64 is not exact for vectors in Matlab

I have discovered an inconsistency with uint64 when using vectors in MATLAB. It seems that an array of uint64 is not exact over all 64 bits. This did not give the output I expected:
p=uint64([0;0]);
p(1)=13286492335502040542
p =
13286492335502041088
0
However
q = uint64(13286492335502040542)
q =
13286492335502040542
does. It also works with
p(1)=uint64(13286492335502040542)
p =
13286492335502040542
0
Working with unsigned integers, one expects exact behaviour and perfect precision. This seems weird and even a bit uncanny. I do not see the problem with smaller numbers. Does anyone know more? I do not expect this to be an unknown problem, so I guess there must be some explanation for it. It would be good to know why and when this happens, so that I can avoid it. As usual, this kind of issue seems to be mentioned nowhere in the documentation.
MATLAB 2014a, Windows 7.
EDIT
It is worth mentioning that I can see the same behaviour when defining arrays directly.
p=uint64([13286492335502040542;13286492335502040543])
p =
13286492335502041088
13286492335502041088
This is the root of why I am asking this question; for this case it is hard to see a workaround.
While it might be surprising, this is a floating point precision issue. :-)
The thing is, all numeric literals are by default of type double in MATLAB; that's why:
13286492335502040542 == 13286492335502041088
will return true; the floating point representation in double precision of 13286492335502040542 is 13286492335502041088. Since p has the class uint64, all assignments done to it will cast the right-hand-side to its class.
On another hand, the uint64(13286492335502040542) "call" will be optimized by the MATLAB interpreter to avoid the overhead of calling the uint64 function for the double argument, and will convert the literal directly to its unsigned integer representation (which is exact).
On a third hand [sic], the function call optimization doesn't apply to
p = uint64([13286492335502040542;13286492335502040543])
because the argument of uint64 is not a literal, but the result of an expression, i.e. the result of the vertcat operator applied to two double operands. In this case the MATLAB interpreter is not smart enough to figure out that the two function calls should "commute" (concatenation of uint should be the same as uint of concatenation), so it evaluates the concatenation (which gives an array of two equal double values because of the FP precision limit), then converts the two now-identical double values to uint64.
TLDR: the difference between
p = uint64(13286492335502040542);
and
u = 13286492335502040542; p = uint64(u);
is a side effect of function call optimization.
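You can reproduce the underlying rounding outside MATLAB; here is a small C check (my own illustration, using the literal from the question) showing that the nearest double to the 64-bit value is exactly the number the question reports:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t exact = 13286492335502040542ULL;  /* the literal from the question */
    double   d     = (double)exact;            /* what MATLAB does when it parses the literal */

    /* a double has a 53-bit significand; at this magnitude the spacing between
       representable values is 2048, so the literal is rounded to the nearest multiple */
    printf("exact : %llu\n", (unsigned long long)exact);
    printf("double: %.0f\n", d);               /* 13286492335502041088 on IEEE-754 systems */
    return 0;
}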
MATLAB, unless told otherwise, reads numbers as double and then casts to the relevant datatype. The MATLAB double datatype has a 52-bit fraction (mantissa), so integers up to 2^53 can be stored without loss of precision. Notice that 13286492335502041088 is just 13286492335502040543 rounded to the nearest representable double, which at this magnitude means rounding to a multiple of 2048.
The solution, as you said, is to convert the literals directly: uint64(13286492335502040543).
p=uint64([13286492335502040542;13286492335502040543]) does not work because it first creates a double array and then converts it to uint64.
This issue is mentioned in the uint64 documentation, under 'More About', although it doesn't mention that literals are read as doubles unless otherwise specified.
I agree this seems weird and I don't have an explanation. I do have a workaround:
p=[uint64(13286492335502040542);uint64(13286492335502040543)]
i.e., cast the separate values to uint64s.

glUniform4fv is giving GL_INVALID_OPERATION

I'm trying to develop a basic game on iOS with OpenGL ES, but I'm stuck on this problem with the uniforms. Here is the code that passes the values to my uniforms:
glBindVertexArrayOES(_vertexArray);
// Render the object with ES2
glUseProgram(self.shaderProgram);
glUniformMatrix4fv(uniformModelViewProjection, 1, 0, modelViewProjectionMatrix.m);
// Get uniform center position
glUniform4fv(uniformCenterPosition, 1, centerPosition.v);
// Get uniform time position
glUniform1f(uniformTime, time);
// Set the sampler texture unit to 0
glUniform1i(uniformTexture, 0);
glDrawArrays(GL_POINTS, 0, numVertices);
Notice that care has been taken to place the glUniform calls after glUseProgram and before the glDrawArrays call. The uniform locations also look fine, as confirmed via tracing. Here is what I get when I run the OpenGL ES Analyzer tool in Xcode:
It reports GL_INVALID_OPERATION for glUniform4fv; notice that the values shown seem to be correct.
Here are the possible causes for the GL_INVALID_OPERATION error I found from the documentation:
there is no current program object.
the size of the uniform variable declared in the shader does not match the size indicated by the glUniform command.
one of the signed or unsigned integer variants of this function is used to load a uniform variable of type float, vec2, vec3, vec4, or an array of these, or if one of the floating-point variants of this function is used to load a uniform variable of type int, ivec2, ivec3, ivec4, unsigned int, uvec2, uvec3, uvec4, or an array of these.
one of the signed integer variants of this function is used to load a uniform variable of type unsigned int, uvec2, uvec3, uvec4, or an array of these.
one of the unsigned integer variants of this function is used to load a uniform variable of type int, ivec2, ivec3, ivec4, or an array of these.
location is an invalid uniform location for the current program object and location is not equal to -1.
count is greater than 1 and the indicated uniform variable is not an array variable.
a sampler is loaded using a command other than glUniform1i and glUniform1iv.
None of them seem to explain why the heck I am receiving this error. It's driving me crazy, please help!
Adding my comment as an answer, since it turned out to be the solution:
The only causes from that list that I could imagine are points 2 and 3:
the size of the uniform variable declared in the shader does not match the size indicated by the glUniform command.
one of the signed or unsigned integer variants of this function is used to load a uniform variable of type float, vec2, vec3, vec4, or an array of these, or if one of the floating-point variants of this function is used to load a uniform variable of type int, ivec2, ivec3, ivec4, unsigned int, uvec2, uvec3, uvec4, or an array of these.
So make sure that the corresponding uniform variable is really declared as vec4 in the shader (maybe it's a vec3?).
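As an illustration of how the call has to agree with the declared GLSL type, here is a hedged C sketch (the uniform name and function are made up, not taken from the question):

#include <OpenGLES/ES2/gl.h>   /* iOS OpenGL ES 2 header, matching the question's setup */

/* Upload the centre-position uniform with a call that matches the type
   declared in the shader. "u_centerPosition" is a hypothetical name. */
static void upload_center(GLuint program, const GLfloat center[4])
{
    GLint loc = glGetUniformLocation(program, "u_centerPosition");

    glUseProgram(program);

    /* GLSL:  uniform vec4 u_centerPosition;   ->  glUniform4fv */
    glUniform4fv(loc, 1, center);

    /* If the shader instead declares `uniform vec3 u_centerPosition;`,
       the matching call is glUniform3fv(loc, 1, center); calling
       glUniform4fv on a vec3 is exactly what raises GL_INVALID_OPERATION. */
}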