I want to use XOR on my double numbers in MATLAB, but bitxor only works on integer values. Is there a function that can convert a double to an integer in MATLAB?
The functions you are looking for might be int8(number), int16(number), or uint32(number). Any of them will convert a double to an integer, but you must pick the one best suited to the result you want to achieve. Remember that you cannot cast from double to integer without rounding the number.
If I understood you correctly, you could create a function that simply removes the "comma" (decimal point) from the double by multiplying your starting value by 2^n, casting it to an integer using any of the functions mentioned earlier, performing whatever operation you want, and then returning the comma to its original position by dividing the result by 2^n.
Multiplying the starting value by 2^n is a trick that decreases the rounding error.
The ideal value for n is the number of binary digits after the comma, provided this number is relatively small.
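As a minimal sketch of this scaling approach (the values and the choice of n here are purely illustrative):
a = 1.25;  b = 2.5;            % example doubles with short binary fractions
n = 4;                         % scale factor: shifts the fraction into the integer part
ai = uint64(a * 2^n);          % 1.25 * 16 = 20
bi = uint64(b * 2^n);          % 2.5  * 16 = 40
c  = bitxor(ai, bi);           % XOR now works, since both operands are integers
result = double(c) / 2^n       % shift the comma back: 60 / 16 = 3.75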
Please also specify why you are trying to do this; it doesn't seem to be the optimal solution.
You can just cast to an integer:
a = 1.003
int8(a)
ans =
1
That gives you an 8-bit signed integer. You can also get other sizes (e.g., int16) or unsigned types (e.g., uint8), depending on what you want to do.
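For instance, a quick sketch of the other conversions; note that MATLAB rounds to the nearest integer when casting, rather than truncating:
a = 1.003;
int16(a)      % ans = 1 (16-bit signed)
uint8(a)      % ans = 1 (8-bit unsigned)
int8(1.6)     % ans = 2 -- casting rounds to nearest, it does not truncate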
Please let me know how to convert a 64-bit binary vector to a double in MATLAB, as I have tried a lot but didn't get the desired result for the vector u=[0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];
Desired output: -108.209
Regards
Nitin
First off, I think your expectation for the correct answer is off. The first bit in a double is the sign, so if you're expecting a negative number, the first bit should be 1. Even if you had your bits backward, you'd still have a leading 0. There are all sorts of binary-to-float calculators if you search for them. Here's an example:
http://www.binaryconvert.com/result_double.html?hexadecimal=4C378941600D5BC0
To answer your question of how to do this in MATLAB: MATLAB's built-in function for converting from binary is bin2dec. However, it expects a char array as input, so you'll need to convert with num2str. The other catch is that bin2dec only supports up to 53 bits, so you'll need to break the vector into two 32-bit numbers. The last piece of the puzzle is typecast, which reinterprets your pair of 32-bit integers as a double. Put it all together, and it looks like this:
bits = [0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];  % 64 bits, most significant first
int1 = uint32(bin2dec(num2str(bits(1:32)')));      % upper 32 bits
int2 = uint32(bin2dec(num2str(bits(33:64)')));     % lower 32 bits
double_final = typecast([int2 int1],'double')      % low word first on little-endian machines
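As a sanity check (on a little-endian machine), you can print the hexadecimal form of the result and compare it with the converter linked above:
num2hex(double_final)      % ans = 4c378941600d5bc0, the same value as in the link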
I am trying to write a program in C to get the percentage of even numbers in an array. I was thinking of writing it using the int datatype, but someone mentioned that using double would be easier, and I don't understand why. Can anyone guide me?
What does the double datatype return?
Can the return statement be given as return (double)? What would that give?
Can a double convert a real number to a percentage, e.g., 0.5 to 50.0?
The int datatype holds, as the name suggests, integers (whole numbers), so it cannot represent a decimal like 0.5. A double is a floating-point type that can hold decimal numbers such as 0.5. Common practice is to store your percentage as a plain decimal like 0.5 (using the double type); then, when you need to display it nicely as 50.0%, just multiply by 100 before displaying.
Here's a useful link on C's basic datatypes: http://www.tutorialspoint.com/ansi_c/c_basic_datatypes.htm
Here is my code:
println(Double(2/5))
When I run this, it prints out
0.0
How can I fix this? I want it to come out to 0.4. Is there some issue with the rounding?
The problem is that you're not converting to a Double until after you've done integer division between two integers. Let's take a look at the order of operations, starting at the inside and moving outward:
1. Perform integer division between the integer 2 and the integer 5, which results in the integer 0.
2. Create a Double from the integer 0, which creates the Double 0.0.
3. Call description on the Double 0.0, which returns the string "0.0".
4. Call println on the string "0.0".
We can fix this by calling the Double constructor on each side of the division before we divide them.
println((Double(2)/Double(5)))
Now the order of operations is:
1. Convert the integer 2 to the floating point 2.0.
2. Convert the integer 5 to the floating point 5.0.
3. Perform floating-point division between these floating-point numbers, resulting in 0.4.
4. Call description on the floating-point number 0.4, which returns the string "0.4".
5. Call println on the string "0.4".
Note that it's not strictly necessary to convert both sides of the division to Double.
And as long as we're dealing with literals, we can just write println(2.0/5.0).
We could also get away with writing println((2 * 1.0)/5), which now interprets all of our literals as floating point (since one operand is a floating-point literal).
As long as either side of a math operation is a floating-point type, Swift will interpret the integer literal as a floating-point type as well. In my opinion, though, it's far better to convert our types explicitly so that we're excruciatingly clear about exactly what we want to happen. So let's get all of our numbers into the same type and be explicit about what we actually want.
If we're dealing with literals, we can add .0 to force them to be floating-point numbers:
println(2.0/5.0)
If we're dealing with variables, we can use a constructor:
let myTwoInt: Int = 2
let myFiveInt: Int = 5
println(Double(myTwoInt)/Double(myFiveInt))
I think your issue is that you are dividing two integers, which will normally return an integer.
I had a similar issue in Java; adding a .0 to one of the integers, or converting either one to a double, should fix it.
It's a feature of typed languages that division produces a result of the same type as the values being divided.
Digits is correct about the cause; instead of the approach you're taking, try this:
print(2.0 / 5.0)
I have code that uses the double function several times to convert sym values to double. To increase precision, I want to use the digits function.
I want to know whether it is enough to call digits once at the top of the code, or whether I must call digits before every double call.
digits sets the precision until it is changed again, so calling it once at the top is enough. Calling digits() without any input returns the current precision, so you can verify that it is set correctly.
In many cases digits has absolutely no influence on symbolic variables, because an analytical solution is found. This means there are no precision errors unless you convert to double. When converting, digits should be set to at least 16, because this matches double precision.
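A minimal sketch of how the setting behaves (32 here is just an example value):
digits(32)          % set once, e.g. near the top of your script
vpa(sym(pi))        % evaluated with 32 significant digits
d = digits          % query the current setting: returns 32
digits(16)          % change it; all later vpa calls use 16 digits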
This is literally the number I obtain (from the symsum function), which is of type sym:
a=328791078344903739363762093060350430076929707044786898291940722052812676355129485878814911641516759087483581972443760841410582114920781832660013389681326267351368505696628653562484228680842650173635989588528021721039959787053654401351638478786763875479187208098871238084448485336138651690856082810553570419028927840285091142054111375001
I would like to perform mathematical operations on this number (in particular, take its natural log) and so want to convert it to double; however, the output of double(a) is simply Inf. How can I get around this problem and convert it from sym to a numeric type?
Your number is ~3.3 x 10^335, but the largest number that can be represented by MATLAB's double-precision floating-point type is ~1.8 x 10^308 (see the output of realmax). Converting your number to double precision causes overflow because the number is larger than can be represented, so MATLAB just returns Inf.
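For instance, you can see the limit directly at the command line (a quick check; the second line assumes the Symbolic Math Toolbox):
realmax                 % ans = 1.7977e+308, the largest finite double
double(sym(10)^335)     % ans = Inf -- exceeds the double range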
For an exhaustive overview of floating point representations and arithmetic, you can check out this PDF.
Can you count the digits and insert a decimal point before converting to double?
If so, take advantage of the fact that the natural log of a number that overflows may not itself overflow.
Using "^" for power, you can represent your number as 3.28791078344903739363762093060350430076929707044786898291940722052812676355129485878814911641516759087483581972443760841410582114920781832660013389681326267351368505696628653562484228680842650173635989588528021721039959787053654401351638478786763875479187208098871238084448485336138651690856082810553570419028927840285091142054111375001 * (10 ^ 335).
The decimal log of (10^335) is 335. Its natural log is 335*log(10).
The natural log of the original number is:
log(3.287910783449037393637620930603504300769297070447868982919407220528)
+ 335*log(10)
All inputs, intermediate results, and the final result of this calculation are in the double range.
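For illustration, a minimal MATLAB sketch of this split; the mantissa value below is just the leading digits of a with the decimal point inserted, truncated to double precision:
mantissa = 3.287910783449037;          % leading digits of a as a double
lnA = log(mantissa) + 335*log(10)      % natural log without ever forming 10^335
% lnA is approximately 772.56, comfortably within the double range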