Movesense DEBUGLOG float value

I would like to print float values to the RTT Viewer.
I tried the DEBUGLOG function but nothing seems to happen.
Could you please help me with this issue?
Thank you

The DEBUGLOG macro uses snprintf internally on the device, which does not support float values at the moment. The easiest workaround is to multiply the value by 10 or 100, cast it to int, and print it with "%d".
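For example, to log a value such as 23.47 with two decimals, scale it to an integer and print the whole and fractional parts separately. A minimal sketch, assuming DEBUGLOG takes a printf-style format string; the stand-in macro below exists only so the snippet runs outside the firmware:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the Movesense DEBUGLOG macro, which is printf-style but lacks %f support. */
#define DEBUGLOG(...) printf(__VA_ARGS__)

int main(void)
{
    float value = 23.47f;
    /* Scale to hundredths and round, then print whole and fractional parts as ints. */
    int scaled = (int)(value * 100.0f + (value >= 0.0f ? 0.5f : -0.5f));
    DEBUGLOG("value = %d.%02d\n", scaled / 100, abs(scaled % 100));  /* prints "value = 23.47" */
    return 0;
}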

Related

Dividing a character with a Float in Ada

So I'm trying to divide a character by a float using an operator, but I don't know why my code gets the error message "unexpected argument for "Position" attribute"
function "/"(Teck: in Character;
Flyt: in Float) return Float is
begin
return Float(Character'Position(Teck))/Flyt;
end "/";
Can somebody explain how Character'Position works and what I need to change here? I've used pretty much the same code previously in a different assignment.
Regarding the ARM (Ada Reference Manual), characters are, for example, defined as
UC_C_Cedilla : constant Character := 'Ç'; --Character'Val(199)
And if you read the operations on discrete types in the ARM, you'll see that the inverse attribute of Val is Pos, not Position, so the expression should be Float(Character'Pos(Teck)) / Flyt.

Subtracting a float from another float does not equal a float?

I am coding with the Unity scripting API, but I don't think that changes anything.
I have a variable playerMovementScript.rightWallX that is a float and is equal to -20.84.
I then have this line of code:
Debug.Log(-21.24f == playerMovementScript.rightWallX - 0.4f);
It always prints false. I feel like I'm missing something obvious cause I'd like to think I'm not THAT new to coding. Any help would be appreciated.
Floating-point values are not exact; use them only for fast, approximate calculations. If you want to compare two floats, use if (Mathf.Approximately(floatA, floatB))
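The deeper cause is that values like -20.84 and 0.4 have no exact binary float representation, so the subtraction can land a few ULPs away from the literal -21.24f. Comparing with a small tolerance, as Mathf.Approximately does, avoids this. A minimal C sketch of the same idea; the accumulated sum and the tolerance here are illustrative, not taken from the question:

#include <math.h>
#include <stdio.h>

/* Compare two floats within a small tolerance instead of with ==. */
static int approximately(float a, float b)
{
    return fabsf(a - b) < 1e-5f;   /* tolerance chosen for illustration */
}

int main(void)
{
    float sum = 0.0f;
    for (int i = 0; i < 10; i++)
        sum += 0.1f;               /* ten additions of 0.1f accumulate rounding error */

    printf("sum = %.8f\n", sum);                                     /* 1.00000012, not 1.0 */
    printf("sum == 1.0f        -> %d\n", sum == 1.0f);               /* 0 (false) */
    printf("approximately(sum) -> %d\n", approximately(sum, 1.0f));  /* 1 (true)  */
    return 0;
}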
The reason it prints false is because you are using the == operator, which checks if 2 things are equal, therefore since you are doing -21.24f == playerMovementScript.rightWallX - 0.4f, the result is unequal and then it prints false. I am pretty sure you meant =, but we all make mistakes.

Expression in Debugger shows correct value of calculation, but value is rounded when assigning to variable

I read 16-bit two's complement sensor data byte by byte via I2C with an STM32, so I have to stitch the high and low bytes back together and convert the number to a float to get the real value.
My expression in C for this is
// Convert temperature value (256 LSBs/°C with +25°C Offset)
float temp = (tmpData[1] << 8 | tmpData[0])/256.0 + 25.0;
When I use the debugger of the STM32CubeIDE to check the calculation, the expression shows the correct conversion of the data (see screenshot). But the value assigned to the temp variable is always 25! It seems to me like the first term of the expression is always assumed to be 0 or something? I already tried a direct cast of the term in brackets to float, but that does not change anything.
Can anybody point me to the problem? Why is the debugger showing the correct value, but the code is assigning a wrong one?
The expressions in the screenshots below are captured by hovering the mouse over the corresponding part of the above code line in debug mode.
Fig. 1: Complete expression of calculation (gives result as expected)
Fig. 2: tmpData content (original two Bytes)
Fig. 3: Result of byte shifting and sticking
Fig. 4: temp result (always 25, even when the expression above shows the expected value)
temp is declared volatile for the moment only because I don't use the value yet and the compiler would otherwise optimize it out.
There are several issues in this single line of code.
tmpData[1] << 8 has to be (((uint16_t)tmpData[1]) << 8). If you shift a byte left by 8 the result will be zero (the standard says undefined). You probably got a warning about it.
256.0 and 25.0 are double values, so the result will also be double and then converted to float. You should use 256.0f and 25.0f.
You can see the difference here: https://godbolt.org/z/Cm-S8T
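Put together, that advice amounts to something like the sketch below; whether the raw reading should be treated as signed depends on the sensor, so the int16_t cast is an assumption based on the "two's complement" description in the question:

#include <stdint.h>

float ConvertTemperature(const uint8_t tmpData[2])
{
    /* Widen before shifting and keep the constants float so the math stays in float. */
    uint16_t raw = (uint16_t)(((uint16_t)tmpData[1] << 8) | tmpData[0]);
    /* 256 LSBs/degC with a +25 degC offset. */
    return (float)(int16_t)raw / 256.0f + 25.0f;
}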
OK, new morning, new intuition...
The problem wasn't with the code line from the question but (typical Murphy) with the stuff I did not post. The I2C is read via DMA into the array, and at the point where I wanted to do the conversion the peripheral was not done yet. That's why the array elements were always empty when the code accessed them; by the time I checked the values in the debugger, the I2C peripheral had finished the transmission and I saw the expected values.
So the solution is to check if the peripheral is still busy...
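With the STM32 HAL, that can be done by polling the I2C handle's state (or by converting in the receive-complete callback, e.g. HAL_I2C_MemRxCpltCallback) before touching the buffer. A minimal sketch, assuming the HAL I2C/DMA driver; hi2c1 and the function name are illustrative:

#include "stm32f4xx_hal.h"              /* adjust the header to your device family */
#include <stdint.h>

extern I2C_HandleTypeDef hi2c1;         /* illustrative handle name */
extern uint8_t tmpData[2];              /* filled by the I2C DMA transfer */

float ReadTemperature(void)
{
    /* Don't touch the buffer while the I2C/DMA transfer is still running. */
    while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) {
        /* busy: the DMA has not filled tmpData yet */
    }
    /* Convert temperature value (256 LSBs/degC with +25 degC offset). */
    return (float)(int16_t)(((uint16_t)tmpData[1] << 8) | tmpData[0]) / 256.0f + 25.0f;
}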

HLSL - asuint of a float seems to return the wrong value

I've been attempting to encode 4 uints (8-bit) into one float so that I can easily store them in a texture along with a depth value. My code wasn't working, and ultimately I found that the issue boiled down to this:
asuint(asfloat(uint(x))) returns 0 in most cases, when it should return x.
In theory, this code should return x (where x is a whole number) because the bits in x are being converted to float, then back to uint, so the same bits end up being interpreted as a uint again. However, I found that the only case where this function seems to return x is when the bits of x are interpreted as a very large float. I considered the possibility that this could be a graphics driver issue, so I tried it on two different computers and got the same issue on both.
I tested several other variations of this code, and all of these seem to work correctly.
asfloat(asuint(float(x))) = x
asuint(asint(uint(x))) = x
asuint(uint(x)) = x
The only case that does not work as intended is the first case mentioned in this post. Is this a bug, or am I doing something wrong? Also, this code is being run in a fragment shader inside of Unity.
After a long time of searching, I found some sort of answer, so I figured I would post it here just in case anyone else stumbles across this problem. The reason this code does not work has to do with float denormalization. (I don't completely understand it.) Anyway, denormalized floats were being interpreted as 0 by asuint, so asuint of a denormalized float would always return 0.
A somewhat acceptable solution may be (asuint(asfloat(x | 1073741824)) & 3221225471)
This ensures that the float is normalized (1073741824 is 0x40000000, which sets bit 30, the top bit of the exponent field; 3221225471 is 0xBFFFFFFF, which clears it again); however, it also erases any data stored in that bit. If anyone has any other solutions that can preserve this bit, let me know!
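For reference, setting bit 30 forces the exponent field to be nonzero, so the intermediate float is normalized and survives the GPU's flush-to-zero; clearing it afterwards recovers the payload except for that one bit. A small C sketch of the bit arithmetic only (asfloat/asuint are emulated with memcpy; C on the CPU does not flush denormals, so this illustrates just the masking):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CPU-side stand-ins for HLSL asfloat()/asuint(): reinterpret the raw bits. */
static float    as_float(uint32_t u) { float f;    memcpy(&f, &u, sizeof f); return f; }
static uint32_t as_uint (float f)    { uint32_t u; memcpy(&u, &f, sizeof u); return u; }

int main(void)
{
    uint32_t x = 0x001234ABu;                  /* example payload: exponent bits are all 0, */
                                               /* so as_float(x) is a denormalized float    */
    uint32_t forced    = x | 0x40000000u;      /* set bit 30 -> exponent field is nonzero   */
    uint32_t roundtrip = as_uint(as_float(forced)) & 0xBFFFFFFFu;  /* clear bit 30 again    */

    printf("x         = 0x%08X\n", (unsigned)x);
    printf("roundtrip = 0x%08X\n", (unsigned)roundtrip);  /* same as x whenever bit 30 of x was 0 */
    return 0;
}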

Simple issue with floats and ints

A simple question best asked with two lines of code:
CGFloat answer = (abs(-27.460757f) - 9.0f) * (800.0f / (46.0f - 9.0f));
NSLog(@"According to my calculator, answer should be 399.15, but instead it is: %f", answer);
When I run this in Xcode (specifically, in the iPhone simulator), I get:
According to my calculator, answer
should be 399.15, but instead it is:
389.189209
Is this just due to my lack of understanding of how floats are rounded?
Thanks!
Stewart
The abs() function operates on integers, so abs(-27.460757f) returns 27. Since you’re using floats, use fabsf() instead.
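For comparison, here is the same calculation in plain C with CGFloat replaced by float: abs() truncates its argument to an int, while fabsf() keeps the fraction and reproduces the calculator result.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* abs() is the integer function: the argument is truncated to -27, so it returns 27. */
    float withAbs   = (abs((int)-27.460757f) - 9.0f) * (800.0f / (46.0f - 9.0f));
    /* fabsf() is the float version and keeps the fractional part. */
    float withFabsf = (fabsf(-27.460757f)    - 9.0f) * (800.0f / (46.0f - 9.0f));

    printf("abs:   %f\n", withAbs);    /* approx. 389.1892, as in the question */
    printf("fabsf: %f\n", withFabsf);  /* approx. 399.1515, matching the calculator */
    return 0;
}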