This is a question about Numbers in Racket.
According to the Reading Numbers section in the Racket documentation, a number can optionally be suffixed by an exp-mark followed by an exact integer.
The exp-mark can be one of s, l, d, e, or f.
The documentation says:
An exponent-mark in an inexact number serves both to specify an exponent and to specify a numerical precision. If single-flonums are supported and the read-single-flonum parameter is set to #t, the marks f and s specify single-flonums.
However there is no mention of what exactly the letters s, l, d, e and f specify.
1.2e2 ; 1.2*(10^2) = 120.0
1.2e-2 ; 1.2*(10^-2) = 0.012
1.2f2 ; 1.2*(10^2) = 120.0
1.2f-2 ; 1.2*(10^-2) = 0.012
1.2s2 ; 1.2*(10^2) = 120.0
1.2s-2 ; 1.2*(10^-2) = 0.012
1.2d2 ; 1.2*(10^2) = 120.0
1.2d-2 ; 1.2*(10^-2) = 0.012
1.2l2 ; 1.2*(10^2) = 120.0
1.2l-2 ; 1.2*(10^-2) = 0.012
It is apparent (from the math) what e does, but does anyone know how the others are different (even though they produce the same result)?
A lot of these are inherited from Scheme. See https://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r5rs-html/r5rs_8.html:
The letters s, f, d, and l specify the use of short, single, double, and long precision, respectively. (When fewer than four internal inexact representations exist, the four size specifications are mapped onto those available. For example, an implementation with two internal representations may map short and single together and long and double together.) In addition, the exponent marker e specifies the default precision for the implementation.
Also note that in hex numbers you can't use d, e, or f as an exponent marker, since those letters already stand for the digits 13, 14, and 15 respectively.
> 1d1
10.0
> #x1d1 ; 16^2 + 13*16 + 1
465
> #x1s1 ; 1 * 16^1
16.0
I need to find the value (or position) of the most significant bit (MSB) of an integer in Swift.
Eg:
Input number: 9
Input as binary: 1001
MS value as binary: 1000 -> (which is 8 in decimal)
MS position as decimal: 3 (because 1<<3 == 1000)
Many processors (Intel, AMD, ARM) have instructions for this. In C, these are exposed. Are these instructions similarly available in Swift through a library function, or would I need to implement some bit twiddling?
The value is more useful in my case.
If a position is returned, then the value can be easily derived by a single shift.
Conversely, computing position from value is not so easy unless a fast Hamming Weight / pop count function is available.
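For reference, the C-level bit twiddling alluded to above can look roughly like this; this is only a sketch assuming a GCC/Clang compiler, where the __builtin_clzl builtin (count leading zeros) is available, and the helper names are just illustrative:
#include <stdio.h>

/* Sketch: MSB position and value via __builtin_clzl (GCC/Clang).
   Note: __builtin_clzl is undefined for an argument of 0. */
int msb_position(unsigned long x)
{
    return (int)(sizeof(unsigned long) * 8 - 1) - __builtin_clzl(x);
}

unsigned long msb_value(unsigned long x)
{
    return 1UL << msb_position(x);
}

int main(void)
{
    printf("%d %lu\n", msb_position(9), msb_value(9)); /* prints "3 8" */
    return 0;
}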
You can use the flsl() function ("find last set bit, long"):
let x = 9
let p = flsl(x)
print(p) // 4
The result is 4 because flsl() and the related functions number the bits starting at 1 for the least significant bit.
On Intel platforms you can use the _bit_scan_reverse intrinsic; in my test in a macOS application this translated to a BSR instruction.
import _Builtin_intrinsics.intel
let x: Int32 = 9
let p = _bit_scan_reverse(x)
print(p) // 3
You can use the properties leadingZeroBitCount and trailingZeroBitCount to find the most significant bit and the least significant bit.
For example,
let i: Int = 95
let lsb = i.trailingZeroBitCount
let msb = Int.bitWidth - 1 - i.leadingZeroBitCount
print("i: \(i) = \(String(i, radix: 2))") // i: 95 = 1011111
print("lsb: \(lsb) = \(String(1 << lsb, radix: 2))") // lsb: 0 = 1
print("msb: \(msb) = \(String(1 << msb, radix: 2))") // msb: 6 = 1000000
If you look at the disassembly (ARM Mac) in LLDB for the least significant bit code, it uses rbit to reverse the bits and then clz to count the leading zeros. (ARM Reference)
** 15 let lsb = i.trailingZeroBitCount
0x100ed947c <+188>: rbit x9, x8
0x100ed9480 <+192>: clz x9, x9
0x100ed9484 <+196>: mov x10, x9
0x100ed9488 <+200>: str x10, [sp, #0x2d8]
In finding the values of x and y, given (x567) + (2yx5) = (71yx) (all in base 8), I proceeded as follows.
I assumed x = abc and y = def as 3-bit binary digits and worked digit by digit:
(abc+010 def+101 110+abc 111+101) = (111 001 def abc) // adding ()+()=() and equating LHS = RHS
abc = 111 - 010 = 101, which is 5 in base 8, and then def = 001 - 101, which is -4,
so x = 5 and y = -4.
Now the question is that the answer mentioned in my book is x = 4 and y = 3.
Is the above method correct? If so, then what's the issue here?
You can't compare the digits beginning with the most significant digit, because you don't know the carry from the digit below. Also, a digit cannot have a negative value.
You can start with the least significant digit, because there is no carry:
7 + 5 = 14 (in base 8; that is 12 in decimal)
so x = 4 with a carry of 1 into the next digit.
Now you can rewrite your equation as:
(4567) + (2y45) = (71y4)
Now you can look at the second least significant digit (keeping the carry in mind):
6 + 4 + 1 (carry) = 13 (in base 8; that is 11 in decimal)
so y = 3, also with a carry of 1.
The whole equation is:
(4567) + (2345) = (7134)
which is true for the octal system.
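To double-check, here is a small C sketch (just an illustrative verification, using the fact that a leading 0 makes an integer literal octal in C):
#include <stdio.h>

int main(void)
{
    /* Octal literals: a leading 0 means base 8 in C. */
    int sum = 04567 + 02345;
    printf("%o\n", sum);          /* prints 7134 */
    printf("%d\n", sum == 07134); /* prints 1: the equation holds */
    return 0;
}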
I noticed that Verilog rounds my real number results into integer results. For example, when I look at the simulator, it shows the result of 17/2 as 9. What should I do? Is there any way to define something like output real reg [11:0] output_value? Or is it something that has to be done in the simulator settings?
Simulation only (no synthesis). Example:
x defined as a signed input and output_value defined as output reg.
output_value = ((x >>> 1) + x) + 5;
If x = +1 then the output value has to be 13/2 = 6.5.
However, when I simulate I see output_value = 6.
Code would help, but I suspect you're not dividing reals at all. 17 and 2 are integers, so a simple statement like that will do integer division.
17 / 2 = 8 (not 9; integer division always rounds towards 0)
17.0 / 2.0 = 8.5
In your second case
output_value = ((x >>> 1) + x) + 5
If x is 1, x >>> 1 is 0, not 0.5, because the bit has just been shifted off the bottom of the word.
output_value = ((1 >>> 1) + 1) + 5 = 0 + 1 + 5 = 6
There's nothing special about Verilog here. This is true for the majority of languages.
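For example, the same behaviour is easy to reproduce in C (a small sketch; integer operands give integer division, floating-point operands give a real result):
#include <stdio.h>

int main(void)
{
    printf("%d\n", 17 / 2);            /* integer division: prints 8        */
    printf("%f\n", 17.0 / 2.0);        /* real division:    prints 8.500000 */
    printf("%d\n", (1 >> 1) + 1 + 5);  /* the shifted example above: prints 6 */
    return 0;
}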
How can I write a Unicode symbol in Lua? For example, I have to write the symbol with code point 9658.
When I write
string.char( 9658 );
I get an error. So how is it possible to write such a symbol?
Lua does not look inside strings. So, you can just write
mychar = "►"
(added in 2015)
Lua 5.3 introduced support for UTF-8 escape sequences:
The UTF-8 encoding of a Unicode character can be inserted in a literal string with the escape sequence \u{XXX} (note the mandatory enclosing brackets), where XXX is a sequence of one or more hexadecimal digits representing the character code point.
You can also use utf8.char(9658).
Here is an encoder for Lua that takes a Unicode code point and produces a UTF-8 string for the corresponding character:
do
  local bytemarkers = { {0x7FF,192}, {0xFFFF,224}, {0x1FFFFF,240} }
  function utf8(decimal)
    if decimal<128 then return string.char(decimal) end
    local charbytes = {}
    for bytes,vals in ipairs(bytemarkers) do
      if decimal<=vals[1] then
        for b=bytes+1,2,-1 do
          local mod = decimal%64
          decimal = (decimal-mod)/64
          charbytes[b] = string.char(128+mod)
        end
        charbytes[1] = string.char(vals[2]+decimal)
        break
      end
    end
    return table.concat(charbytes)
  end
end
c=utf8(0x24) print(c.." is "..#c.." bytes.") --> $ is 1 bytes.
c=utf8(0xA2) print(c.." is "..#c.." bytes.") --> ¢ is 2 bytes.
c=utf8(0x20AC) print(c.." is "..#c.." bytes.") --> € is 3 bytes.
c=utf8(0x24B62) print(c.." is "..#c.." bytes.") --> 𤭢 is 4 bytes.
Maybe this can help you:
-- Decodes the UTF-8 sequence starting at byte position pos in the current
-- SciTE editor buffer (editor.CharAt is the SciTE API) and returns the
-- code point, the position of its last byte, and the sequence length.
function FromUTF8(pos)
  local mod = math.fmod  -- math.mod in older Lua versions
  local function charat(p)
    local v = editor.CharAt[p]; if v < 0 then v = v + 256 end; return v
  end
  local v, c, n = 0, charat(pos), 1
  if c < 128 then v = c
  elseif c < 192 then
    error("Byte values between 0x80 to 0xBF cannot start a multibyte sequence")
  elseif c < 224 then v = mod(c, 32); n = 2
  elseif c < 240 then v = mod(c, 16); n = 3
  elseif c < 248 then v = mod(c, 8);  n = 4
  elseif c < 252 then v = mod(c, 4);  n = 5
  elseif c < 254 then v = mod(c, 2);  n = 6
  else
    error("Byte values between 0xFE and 0xFF cannot start a multibyte sequence")
  end
  for i = 2, n do
    pos = pos + 1; c = charat(pos)
    if c < 128 or c > 191 then
      error("Following bytes must have values between 0x80 and 0xBF")
    end
    v = v * 64 + mod(c, 64)
  end
  return v, pos, n
end
To get broader support for Unicode string content, one approach is slnunicode which was developed as part of the Selene database library. It will give you a module that supports most of what the standard string library does, but with Unicode characters and UTF-8 encoding.
I've got a 16-bit bitmap image with each colour represented as a single short (2 bytes), and I need to display this in a 32-bit bitmap context. How can I convert a 2-byte colour to a 4-byte colour in C++?
The input format contains each colour in a single short (2 bytes).
The output format is 32-bit RGB. This means each pixel has 3 bytes, I believe?
I need to convert the short value into RGB colours.
Excuse my lack of knowledge of colours, this is my first adventure into the world of graphics programming.
Normally a 16-bit pixel is 5 bits of red, 6 bits of green, and 5 bits of blue data. The minimum-error solution (that is, for which the output color is guaranteed to be as close a match to the input colour) is:
red8bit = (red5bit << 3) | (red5bit >> 2);
green8bit = (green6bit << 2) | (green6bit >> 4);
blue8bit = (blue5bit << 3) | (blue5bit >> 2);
To see why this solution works, let's look at a red pixel. Our 5-bit red is some fraction fivebit/31. We want to translate that into a new fraction eightbit/255. Some simple arithmetic:
fivebit   eightbit
------- = --------
   31        255
Yields:
eightbit = fivebit * 8.226
Or closely (note the squiggly ≈):
eightbit ≈ (fivebit * 8) + (fivebit * 0.25)
That operation is a multiply by 8 and a divide by 4. Owch - both operations that might take forever on your hardware. Lucky thing they're both powers of two and can be converted to shift operations:
eightbit = (fivebit << 3) | (fivebit >> 2);
The same steps work for green, which has six bits per pixel, but you get an accordingly different answer, of course! The quick way to remember the solution is that you're taking the top bits off of the "short" pixel and adding them on at the bottom to make the "long" pixel. This method works equally well for any data set you need to map up into a higher resolution space. A couple of quick examples:
five bit space    eight bit space    error
00000             00000000            0%
11111             11111111            0%
10101             10101010            0.02%
00111             00111001           -1.01%
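Putting the replication trick together, here is a sketch in C assuming the common RGB565 layout (rrrrr gggggg bbbbb) and a 0RGB output like the next answer uses; the function name is just illustrative:
#include <stdint.h>

/* Sketch: RGB565 -> 0RGB using the bit-replication method above. */
uint32_t rgb565_to_rgb888(uint16_t pixel)
{
    uint32_t r5 = (pixel >> 11) & 0x1F;
    uint32_t g6 = (pixel >> 5)  & 0x3F;
    uint32_t b5 =  pixel        & 0x1F;

    uint32_t r8 = (r5 << 3) | (r5 >> 2);   /* 5 bits -> 8 bits */
    uint32_t g8 = (g6 << 2) | (g6 >> 4);   /* 6 bits -> 8 bits */
    uint32_t b8 = (b5 << 3) | (b5 >> 2);   /* 5 bits -> 8 bits */

    return (r8 << 16) | (g8 << 8) | b8;    /* white 0xFFFF maps to 0x00FFFFFF */
}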
Common formats include BGR0, RGB0, 0RGB, and 0BGR. In the code below I have assumed 0RGB. Changing this is easy: just modify the shift amounts in the last line.
unsigned long rgb16_to_rgb32(unsigned short a)
{
    /* 1. Extract the red, green and blue values */
    /* from rrrr rggg gggb bbbb */
    unsigned long r = (a & 0xF800) >> 11;
    unsigned long g = (a & 0x07E0) >> 5;
    unsigned long b = (a & 0x001F);

    /* 2. Convert them to 0-255 range:
       There is more than one way. You can just shift them left:
       to 00000000 rrrrr000 gggggg00 bbbbb000
           r <<= 3;
           g <<= 2;
           b <<= 3;
       But that means your image will be slightly dark and
       off-colour as white 0xFFFF will convert to F8,FC,F8
       So instead you can scale by multiply and divide: */
    r = r * 255 / 31;
    g = g * 255 / 63;
    b = b * 255 / 31;
    /* This ensures 31/31 converts to 255/255 */

    /* 3. Construct your 32-bit format (this is 0RGB): */
    return (r << 16) | (g << 8) | b;

    /* Or for BGR0:
       return (r << 8) | (g << 16) | (b << 24);
    */
}
Multiply the three (four, when you have an alpha layer) values by 16 - that's it :)
You have a 16-bit color and want to make it a 32-bit color. This gives you four times four bits, which you want to convert to four times eight bits. You're adding four bits per channel, and you should add them on the right side of the values. To do this, shift each value left by four bits (multiply by 16). Additionally, you can compensate a bit for the inaccuracy by adding 8 (you're adding 4 bits, which represent a value of 0-15, so you can add the average of 8 to compensate).
Update: This only applies to colors that use 4 bits for each channel and have an alpha channel.
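As a rough sketch of that approach, assuming an RGBA4444 layout (the layout and the function name here are assumptions, not part of the original answer):
#include <stdint.h>

/* Sketch: scale each 4-bit channel of an assumed RGBA4444 pixel to 8 bits
   by shifting left by 4 (multiply by 16) and adding 8 to compensate.
   Note that 0xF becomes 0xF8 rather than 0xFF with this method. */
uint32_t rgba4444_to_rgba8888(uint16_t pixel)
{
    uint32_t r = (pixel >> 12) & 0xF;
    uint32_t g = (pixel >>  8) & 0xF;
    uint32_t b = (pixel >>  4) & 0xF;
    uint32_t a =  pixel        & 0xF;

    r = (r << 4) + 8;
    g = (g << 4) + 8;
    b = (b << 4) + 8;
    a = (a << 4) + 8;

    return (r << 24) | (g << 16) | (b << 8) | a;
}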
There are some questions about the model, like: is it HSV or RGB?
If you wanna ready, fire, aim, I'd try this first.
#include <stdint.h>

uint32_t convert(uint16_t _pixel)
{
    uint32_t pixel;
    pixel = (uint32_t)_pixel;
    return ((pixel & 0xF000) << 16)
         | ((pixel & 0x0F00) << 12)
         | ((pixel & 0x00F0) <<  8)
         | ((pixel & 0x000F) <<  4);
}
This maps 0xRGBA -> 0xRRGGBBAA, or possibly 0xHSVA -> 0xHHSSVVAA, but it won't do 0xHSVA -> 0xRRGGBBAA.
I'm here long after the fight, but I actually had the same problem with ARGB color instead, and none of the answers are truly right. Keep in mind that this answer gives a response for a slightly different situation, where we want to do this conversion:
AAAARRRRGGGGBBBB => AAAAAAAARRRRRRRRGGGGGGGGBBBBBBBB
If you want to keep the same ratio of your color, you simply have to do a cross-multiplication: you want to convert a value x between 0 and 15 to a value between 0 and 255, therefore you want y = 255 * x / 15.
However, 255 = 15 * 17, and 17 is 16 + 1: you now have y = 16 * x + x,
which is actually the same as doing a four-bit shift to the left and then adding the value again (or, more visually, duplicating the value: 0b1101 becomes 0b11011101).
Now that you have this, you can compute your whole number by doing:
a = v & 0b1111000000000000
r = v & 0b111100000000
g = v & 0b11110000
b = v & 0b1111
return b | b << 4 | g << 4 | g << 8 | r << 8 | r << 12 | a << 12 | a << 16
Moreover, as the lower bits won't have much effect on the final color and if exactitude isn't necessary, you can gain some performance by simply multiplying each component by 16:
return b << 4 | g << 8 | r << 12 | a << 16
(All the left-shift values look strange because we did not bother doing a right shift first.)
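For completeness, here is a compilable C sketch of the nibble-duplication approach described above. It extracts each channel first instead of shifting in place, and assumes an AAAARRRRGGGGBBBB (ARGB4444) input; the function name is illustrative:
#include <stdint.h>

/* Sketch: ARGB4444 -> ARGB8888 by duplicating each 4-bit channel
   (e.g. 0b1101 becomes 0b11011101), so 0xF maps exactly to 0xFF. */
uint32_t argb4444_to_argb8888(uint16_t v)
{
    uint32_t a = (v >> 12) & 0xF;
    uint32_t r = (v >>  8) & 0xF;
    uint32_t g = (v >>  4) & 0xF;
    uint32_t b =  v        & 0xF;

    a |= a << 4;
    r |= r << 4;
    g |= g << 4;
    b |= b << 4;

    return (a << 24) | (r << 16) | (g << 8) | b;
}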