Please help me out, I'm studying operating systems. Under virtual memory I found this:
A user process generates the virtual address 11123456, and it is said that the virtual address in binary form is 0001 0001 0001 0010 0011 0100 0101 0110. How was that converted? When I convert 11123456 to binary I get 0001 0101 0011 0111 0110 000 0000. It is also said that the virtual memory is implemented by paging, and the page size is 4096 bytes.
You assume that 11123456 is a decimal number, but according to the result it is hexadecimal. In general, decimal numbers are rarely used in this context; representations based on powers of 2 are much more common and convenient, mostly base 16 (hexadecimal) and base 2 (binary).
Converting to binary helps you identify the page number and the offset, so that you can calculate the physical address corresponding to the logical address. As a CS student, it is worth making sure you understand how to do this.
For this particular problem, i.e. paging, you can also split a logical address without converting it to binary, using the modulo (%) and divide (/) operators, because the page size is a power of two. However, working in binary is the more instructive way to see what is going on.
In your question, the value 11123456 should be read as a hexadecimal number, and it should be written as 0x11123456 to distinguish it from decimal numbers. From the binary form "0001 0001 0001 0010 0011 0100 0101 0110" we can infer that the offset of the logical address is "0100 0101 0110" (the 12 rightmost bits, since 4096 = 2^12; that is 1110 in decimal, or 0x456 in hexadecimal) and the page number is "0001 0001 0001 0010 0011" (the remaining bits, 69923 in decimal, or 0x11123 in hexadecimal).
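Here is a minimal C sketch of that split (assuming a 32-bit logical address and 4 KiB pages), just to show that the shift/mask form and the divide/modulo form give the same page number and offset:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t vaddr = 0x11123456;      /* the logical address, written as hex */

        /* Page size 4096 = 2^12, so the offset is the low 12 bits. */
        uint32_t offset = vaddr & 0xFFF;  /* same as vaddr % 4096 */
        uint32_t page   = vaddr >> 12;    /* same as vaddr / 4096 */

        printf("page = 0x%X (%u), offset = 0x%X (%u)\n",
               (unsigned)page, (unsigned)page, (unsigned)offset, (unsigned)offset);
        /* prints: page = 0x11123 (69923), offset = 0x456 (1110) */
        return 0;
    }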
I want to send data from a Raspberry Pi with UART. If I send the character 'A', which is 01000001 in binary, the scope shows 0100 0001 0101 0110 0001 0010 1000. What do the last 20 bits mean in this case? These bits are set on every transmission.
I'm watching the packets with a scope and I can't tell what these bits mean.
I am currently trying to hash a set of strings using MurmurHash3, since a 32-bit hash seems too large for me to handle. I want to reduce the number of bits used to generate the hashes to around 24. I have already found some questions explaining how to reduce it to 16, 8, 4, or 2 bits using XOR folding, but those are too few bits for my application.
Can somebody help me?
When you have a 32-bit hash, it's something like (with spaces for readability):
1101 0101 0101 0010 1010 0101 1110 1000
To get a 24-bit hash, you want to keep the low-order 24 bits. The notation for that will vary by language, but many languages use "x & 0xFFFFFF" for a bit-wise AND operation with the hex constant 0xFFFFFF. That effectively does (with the AND logic applied to each vertical column of numbers, so 1 AND 1 is 1, and 0 AND 1 is 0):
1101 0101 0101 0010 1010 0101 1110 1000 AND <-- hash value from above
0000 0000 1111 1111 1111 1111 1111 1111 <-- 0xFFFFFF in binary
==========================================
0000 0000 0101 0010 1010 0101 1110 1000
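In C, for example, the truncation is a single mask (a sketch assuming the 32-bit Murmur3 result is already in h32):

    #include <stdint.h>

    /* Keep only the low-order 24 bits of the 32-bit hash (plain truncation). */
    static inline uint32_t hash24_truncate(uint32_t h32)
    {
        return h32 & 0xFFFFFF;    /* 0xFFFFFF = twenty-four 1 bits */
    }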
You do waste a little randomness from your hash value though, which doesn't matter so much with a pretty decent hash like murmur32, but you can expect slightly reduced collisions if you instead further randomise the low-order bits using the high order bits you'd otherwise chop off. To do that, right-shift the high order bits and XOR them with lower-order bits (it doesn't really matter which). Again, a common notation for that is:
((x & 0xF000) >> 8) ^ x
...which can be read as: do a bitwise-AND to retrain only the most significant byte of x, then shift that right by 8 bits, then bitwise excluse-OR that with the original value of X. The result of the above expression then has bit 23 (counting from 0 as the least signficant bit) set if and only if one or other (but not both) of bits 23 and 31 were set in the value of x. Similarly, bit 22 is the XOR of bits 22 and 30. So it goes down to bit 16 which is the XOR of bit 16 and bit 24. Bits 0..15 remain the same as in the original value of x.
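A small C sketch of that folding step (the helper name is mine, not part of any Murmur library):

    #include <stdint.h>

    /* XOR the top byte (bits 24..31) onto bits 16..23, then keep 24 bits. */
    static inline uint32_t hash24_fold(uint32_t h32)
    {
        uint32_t folded = ((h32 & 0xFF000000u) >> 8) ^ h32;
        return folded & 0xFFFFFF;    /* drop the original top byte */
    }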
Yet another approach is to pick a prime number ever so slightly lower than 2^24, and mod (%) your 32-bit murmur hash value by that, which mixes in the high-order bits even more effectively than the XOR above, but obviously you only get values up to the prime minus 1, and not all the way up to 2^24-1.
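A sketch of that variant too; 16777213 (2^24 - 3) is commonly cited as the largest prime below 2^24, but do verify the primality of whatever modulus you choose:

    #include <stdint.h>

    /* Reduce modulo a prime just below 2^24; results are in 0 .. prime-1. */
    static inline uint32_t hash24_mod(uint32_t h32)
    {
        const uint32_t prime = 16777213u;   /* assumed prime; verify before use */
        return h32 % prime;
    }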
We learned about data compression in my class last week and I'm confused about a certain strategy.
My teacher showed us an example where he truncated 8-bit characters by cutting off the first 5 bits, which were all zeros.
A is 0000 0100 which is truncated to 100
B is 0000 0010 which is truncated to 010
The last one is 0000 0011 which is truncated to 011
What type of data compression is this?
I've been having trouble getting answers to any questions related to data structures. I used to get plenty of responses to language-specific questions when I was studying C++...
Anyway, I found the answer on this page:
https://wiki.nesdev.com/w/index.php/Fixed_Bit_Length_Encoding
It's called fixed bit-length encoding.
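As a rough illustration (not from the original lesson), here is a hypothetical C packer that keeps a fixed 3 bits per symbol and stores the codes back to back:

    #include <stdint.h>
    #include <stddef.h>

    /* Pack n symbols, each known to fit in 3 bits, tightly into out[].
     * out[] must be zero-initialized by the caller; returns the bit count.
     * E.g. {4, 2, 3} ("100 010 011") packs into 0x89, 0x80 using 9 bits. */
    static size_t pack3(const uint8_t *symbols, size_t n, uint8_t *out)
    {
        size_t bitpos = 0;
        for (size_t i = 0; i < n; i++) {
            for (int b = 2; b >= 0; b--) {   /* most significant of the 3 bits first */
                if (symbols[i] & (1u << b))
                    out[bitpos / 8] |= (uint8_t)(0x80u >> (bitpos % 8));
                bitpos++;
            }
        }
        return bitpos;
    }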
I have a GEZE door reader for RFID tags. The web app shows for one RFID tag the number "0552717541244". When I read the same tag with a USB reader connected to my computer, it shows "0219281982".
The values in hex are 0xD11FA3E (the USB reader's 0219281982) and 0x80B0885F7C (the web app's 0552717541244), so it does not seem to be the difference in byte order discussed in other similar questions.
Is there a way of finding out the longer number when only the shorter one is known?
How come one single tag can have two different identifiers?
Looking at only a single value pair makes it impossible to verify if there actually is some systematic translation scheme between the two values. However, looking at the binary representation of the two values gives the following:
decimal          binary
0552717541244 -> 1000 0000  1011 0000  1000 1000  0101 1111  0111 1100
   0219281982 ->            0000 1101  0001 0001  1111 1010  0011 1110
So it looks as if the web app reverses the bit order of each byte compared to the value from the USB reader, and prepends an additional byte 0x80 as the most significant byte:
decimal          binary
0552717541244 -> 1000 0000  1011 0000  1000 1000  0101 1111  0111 1100
                 (added)    -------->  -------->  -------->  -------->
                            <--------  <--------  <--------  <--------
   0219281982 ->            0000 1101  0001 0001  1111 1010  0011 1110
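Assuming that pattern holds for other tags as well, here is a small C check of the translation; note that the fixed leading byte 0x80 is only an assumption drawn from this single value pair:

    #include <stdio.h>
    #include <stdint.h>

    /* Reverse the bit order within one byte. */
    static uint8_t rev8(uint8_t b)
    {
        uint8_t r = 0;
        for (int i = 0; i < 8; i++)
            r = (uint8_t)((r << 1) | ((b >> i) & 1u));
        return r;
    }

    int main(void)
    {
        uint32_t usb = 0x0D11FA3E;   /* 0219281982 as read by the USB reader */
        uint64_t web = 0x80;         /* assumed constant leading byte */

        /* Reverse the bits of each byte, most significant byte first. */
        for (int shift = 24; shift >= 0; shift -= 8)
            web = (web << 8) | rev8((uint8_t)(usb >> shift));

        printf("%llu\n", (unsigned long long)web);   /* prints 552717541244 */
        return 0;
    }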
Let's say I've created a type in Ada:
type Coord_Type is range -32 .. 31;
What can I expect the bits to look like in memory, or specifically when transmitting this value to another system?
I can think of two options.
One is that the full (default integer?) space is used for all variables of "Coord_Type", but only the values within the range are possible. If I assume two's complement, then the values 25 and -25 would be possible, but not 50 or -50:
0000 0000 0001 1001 ( 25)
1111 1111 1110 0111 (-25)
0000 0000 0011 0010 ( 50) X Not allowed
1111 1111 1100 1110 (-50) X Not allowed
The other option is that the space is compressed to only what is needed (I chose a byte, but maybe even only 6 bits?). So with the above values, the bits might be arranged like this:
0000 0000 0001 1001 ( 25)
0000 0000 1110 0111 (-25)
0000 0000 0011 0010 ( 50) X Not allowed
0000 0000 1100 1110 (-50) X Not allowed
Essentially, does Ada further influence the storage of values beyond limiting what range is allowed in a variable? Are this layout, endianness, and two's complement even controlled by Ada?
When you declare the type like that, you leave it up to the compiler to choose the optimal layout for each architecture. You might even get binary-coded decimal (BCD) instead of two's complement on some architectures. If you need a specific size and layout, for example for transmission to another system, you can pin the representation down with representation clauses (such as a 'Size clause) instead of leaving it to the compiler.