How can I convert hexadecimal to decimal numbers in Emacs calc?
For example, if I enter FF, I want it converted to 255.
UPDATE: How do I get the reverse operation, turn base 10 to base 16?
You can enter any number in the format <base>#<number>. Example: 16#FF is immediately converted to 255.
For the reverse, you need to set the output display mode. In this example, d r 16 RET sets the display to base 16. Set it to base 10 to get the default behaviour again.
By the way, you can also Read The Fine Manual™: GNU Emacs Calc Manual.
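If you want the same conversions from Emacs Lisp rather than interactively, calc-eval can do them; a minimal sketch (binding calc-number-radix to pick the output base, as in the Calc manual's embedding examples):

(calc-eval "16#FF")    ; => "255"

;; reverse: render 255 in base 16 by binding the output radix
(let ((calc-number-radix 16))
  (calc-eval "255"))   ; => "16#FF"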
Svante answered your question, but I'd like to add that the Radix Display Mode change has a quicker keystroke:
Show in hexadecimal mode: d 6
Show in decimal mode: d 0
Of course you could type 16#FF to enter 0xFF, but there is a more convenient way.
The other option:
change the display radix to hex with d 6
then enter all the hex numbers you want by prefixing them with a #, like #FF, and <enter>. (The # means the number is interpreted in the current display radix.)
After this, change the display radix back to decimal with d 0 (see the short transcript after the notes below).
Note: a number entered without # is always interpreted as decimal.
Note2: This also works the other way round.
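Putting it together, a short session might look like this (a sketch; the stack display assumes Calc's default radix-prefix formatting):

d 6        display radix is now 16
#FF RET    enters 255, shown on the stack as 16#FF
#10 RET    enters 16, shown as 16#10
d 0        back to decimal: the stack now shows 255 and 16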
Negative Values:
Now let's say you have an 8-bit system and you want to know how decimal -3 is stored in this system's RAM.
set the word size: b w 8
enter decimal -3 by typing 3 n and <enter>
set the display radix to hex with two's complement notation: O d 6. (The O prefix is important; it is what enables two's complement.)
Note: you now see 16##FD. The two # characters mean the number is displayed as signed, and the value stored in RAM is 0xFD.
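As a quick cross-check of that value: in 8-bit two's complement, -3 is stored as

256 - 3 = 253 = 0xFD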
The above also works with d 2 and d 8 (shortcuts for binary and octal) and any other display radix from 2 to 36 (d r <radix-number>).
This info is taken from the Emacs Calc Manual.
Let's say I create some number A, of the order 10^4:
A = 81472.368639;
disp(A)
8.1472e+04
That wasn't what I wanted. Where are my decimals? There should be six more decimals. Checking the variable editor shows the same rounded value.
Again, I lost my decimals. How do I keep these for further calculations?
Scientific notation, or why you didn't lose any decimals
You didn't lose any decimals; this is just MATLAB's way of displaying large numbers. MATLAB rounds the display of numbers, both in the command window and in the variable editor, to one digit before the dot and four after it, using scientific notation. Scientific notation is the Xe+y notation, where X is some number and y an integer. This means X times 10 to the power of y, which can be visualised as "shift the dot to the right by y places" (or to the left if y is negative).
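A quick way to convince yourself of this in MATLAB, using the number from the example:

isequal(8.1472e+04, 81472)   % true: same number, just a different way of writing it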
Force MATLAB to show you all your decimals
Now that we know what MATLAB does, can we force it to show us our number? Of course: there are several options for that; the easiest is setting a longer format. The most commonly used for displaying long numbers are format long and format longG, whose difference becomes apparent when we use them:
format long
A
A =
8.1472368639e+04
format longG
A
A =
81472.368639
format long displays all decimals (up to 16 in total) using scientific notation; format longG tries to display numbers without scientific notation, again with as many decimals as there are, up to 16 digits in total (before and after the dot combined).
A fancier solution is using disp(sprintf()) or fprintf if you want an exact number of digits before the dot, after the dot, or both:
fprintf('A = %5.3f\n',A) % \n is just to force a line break
A = 81472.369
disp(sprintf('A = %5.2f\n',A))
A = 81472.37
Finally, remember the variable editor? How do we get that to show our variable completely? Simple: click on the cell containing the number, and the full value appears.
So, in short: we didn't lose any decimals along the way; MATLAB still stores them internally, it just displays fewer decimals by default.
Other uses of format
format has another nice property in that you can set format compact, which gets rid of all the additional empty lines which MATLAB normally adds in the command window:
format compact
format long
A
A =
8.147236863931789e+04
format longG
A
A =
81472.3686393179
which in my opinion is very handy when you don't want to make your command window very big, but don't want to scroll a lot either.
format shortG and format longG are useful when your array contains numbers of very different magnitudes:
b = 10.^(-3:3);
A.*b
ans =
1.0e+07 *
0.0000 0.0001 0.0008 0.0081 0.0815 0.8147 8.1472
format longG
A.*b
ans =
Columns 1 through 3
81.472368639 814.72368639 8147.2368639
Columns 4 through 6
81472.368639 814723.68639 8147236.8639
Column 7
81472368.639
format shortG
A.*b
ans =
81.472 814.72 8147.2 81472 8.1472e+05 8.1472e+06 8.1472e+07
i.e. they work like long and short on single numbers, but choose the most convenient display format for each of the numbers.
There are a few more exotic options, like shortE, shortEng, hex, etc., but those are well documented in The MathWorks' own documentation on format.
I use the !ObjSize command to get the true size of an object. For example, when I run the command below, it tells me that the size of the object at address 00000003a275f218 is 0x18, which translates to 24 in decimal.
0:000> !ObjSize 00000003a275f218
sizeof(00000003a275f218) = 24 (0x18) bytes
So far so good. But when I run the same command on another object, its size seems to have a discrepancy between hex and decimal.
So the size in hex is 0xafbde200. When I convert it to decimal using my calculator, this comes to 2948456960, whereas the output of the command shows the decimal size as -1346510336. Can someone help me understand why there is a difference in the sizes?
It's a bug in SOS. If you look at the source code, you'll find the method declared as
DECLARE_API(ObjSize)
It uses the following format as output
ExtOut("sizeof(%p) = %d (0x%x) bytes (%S)\n", SOS_PTR(obj), size, size, methodTable.GetName());
As you can see, it uses %d as the format specifier, which is for signed decimal integers. It should be %u for unsigned decimal integers instead, since obviously an object can't use a negative amount of memory.
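You can reproduce the effect with a few lines of C (a standalone illustration of the format specifiers, not the SOS code itself; assumes a platform with 32-bit int):

#include <stdio.h>

int main(void) {
    unsigned int size = 0xAFBDE200;
    /* %d reinterprets the bit pattern as signed: the high bit is set,
       so the value prints as negative. */
    printf("%d\n", (int)size);   /* -1346510336 */
    /* %u treats the same bits as unsigned, giving the real size. */
    printf("%u\n", size);        /* 2948456960 */
    return 0;
}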
If you know how to use Git, you may provide a patch.
You can use ? in WinDbg to see the unsigned value:
0:000> ? 0xafbde200
Evaluate expression: 2948456960 = 00000000`afbde200
The difference is the sign. The tool is interpreting the most significant bit (which is 1, since the first hex digit is A) as a sign bit. The two numbers are otherwise the same.
Paste -1346510336 into calc.exe (programmer mode), then switch to Hex:
FFFFFFFFAFBDE200
Paste 2948456960, switch to Hex:
AFBDE200
How to change the number of decimal digits?
By changing the format, MATLAB can show only 4 (if short) or 15 (if long) decimal digits. But I want exactly 3 digits to show.
To elaborate on Hamataro's answer, you could also use the roundn function to round to a specific decimal precision, e.g. roundn(1.23456789,-3) will yield 1.235. However, MATLAB will still display the result in either of the formats you have mentioned, i.e. 1.2350 if the format is set to short, and 1.235000000000000 if the format is set to long.
Alternatively, if you use sprintf, you can use the %g formatting option to display only a set number of digits, regardless of where the decimal point is. sprintf('%0.3g',1.23456789) yields 1.23; sprintf('%0.3g',12.3456789) yields 12.3
You can either use sprintf or do
var2 = round(var1*1000)/1000
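For example (var1 is just a placeholder name):

var1 = 1.23456789;
var2 = round(var1*1000)/1000   % var2 is 1.235, displayed as 1.2350 under format short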
I have numbers with up to 10 decimal places stored in a text file.
I read it into MATLAB and then call str2double on the number, but I only get up to 4 decimal places. What should I do to get all the digits after the decimal point?
For example:
str2double('-122.345464646')
ans =
-122.3455
but I need the entire number
Thanks
Please follow the steps below:
Go to Preferences
Go to Command Window Options
Then change the numeric format to long g
Alternatively, type format long g in the command window
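To confirm that str2double kept all the digits (using the number from the question):

x = str2double('-122.345464646');
format long g
x                        % displays -122.345464646
sprintf('%.9f', x)       % prints -122.345464646 explicitly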
How do I write a Lisp program to convert a given hexadecimal number into decimal? Can somebody give me a clue?
Thank you
I'm assuming it's a homework problem, so I'll give you a hint in the right direction.
Here is how to convert decimal to binary:
Let's say you start with the number 9; in binary it's 1001.
Start off by dividing 9 by 2. You get 4 with remainder 1. Save the remainder.
Now divide that 4 by 2 again, you get 2 with remainder 0. Save the remainder.
Divide that 2 again by 2, you get 1 with remainder 0. Save the remainder.
Divide that 1 by 2 and finally you get 0 with remainder 1. Save the remainder.
If you read the saved remainders backwards, you get 1001, the binary number you've been looking for. It's best to push the remainders onto a stack and pop them back off; that way they come out backwards.
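The same repeated-division idea works for any base; here is a minimal sketch in Common Lisp (the function name digits-in-base is my own choice):

;; Collect the remainders of repeated division by the base.
;; push prepends, so the most significant digit ends up first:
;; (digits-in-base 9 2) => (1 0 0 1)
(defun digits-in-base (n base)
  (let ((digits '()))
    (loop while (> n 0)
          do (push (mod n base) digits)
             (setf n (floor n base)))
    digits))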
It's already provided by Common Lisp.
The input is the string for the hex integer.
Then you parse the integer with radix 16.
The result is the number.
If you write the number with base 10 to an output stream, you get the number as a string in base 10.
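In code, those steps come down to two standard functions (using the FF example from the first question):

(parse-integer "FF" :radix 16)   ; => 255

;; back to a base-10 string:
(write-to-string 255 :base 10)   ; => "255"

;; and the reverse direction, decimal to hex:
(write-to-string 255 :base 16)   ; => "FF"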