157,453796 = hex 18068A
157,455093 = hex 180697
71,5037 = hex E91D00
71,506104 = hex E93500
71,507103 = hex E93F00
0 = hex 000000
I know exactly what it is not: IEEE 754.
The following depends on the byte-order of your processor architecture and thus can't be read back on every system:
double f = 10020.2093;
char acz[sizeof(double) + 1] = {'\0'};
std::copy((char*)(&f), ((char*)&f) + sizeof(double), acz); // std::copy requires <algorithm>
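For example, the 32-bit value 0x01020304 is laid out in memory as the bytes 04 03 02 01 on a little-endian machine but as 01 02 03 04 on a big-endian one, so a raw byte dump of a double like this only reads back correctly on a machine with the same byte order (and the same floating-point format).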
I'm currently working on a file converter. I've never done anything using binary file reading before. There are many converters available for this file type (GDSII to text), but none in Swift that I can find.
I've gotten all the other data types working (2-byte int, 4-byte int), but I'm really struggling with the real data type.
From a spec document:
http://www.cnf.cornell.edu/cnf_spie9.html
Real numbers are not represented in IEEE format. A floating point number is made up of three parts: the sign, the exponent, and the mantissa. The value of the number is defined to be (mantissa) × 16^(exponent). If "S" is the sign bit, "E" is exponent bits, and "M" are mantissa bits then an 8-byte real number has the format
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM
MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The exponent is in "excess 64" notation; that is, the 7-bit field shows a number that is 64 greater than the actual exponent. The mantissa is always a positive fraction greater than or equal to 1/16 and less than 1. For an 8-byte real, the mantissa is in bits 8 to 63. The decimal point of the binary mantissa is just to the left of bit 8. Bit 8 represents the value 1/2, bit 9 represents 1/4, and so on.
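If I'm reading that right, the decoded value should be value = (-1)^S * (M / 2^56) * 16^(E - 64), where M is the 56 mantissa bits taken as an integer. So, for example, an exponent byte of 0x3D (S = 0, E = 61) contributes a factor of 16^(61 - 64) = 16^-3.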
I've tried implementing something similar to what I've seen in Python or Perl, but each language has features that Swift doesn't have, and the type conversions get very confusing.
This is one method I tried, based on Perl. It doesn't seem to get the right value. Bitwise math is new to me.
var sgn = 1.0
let andSgn = 0x8000000000000000 & bytes8_test
if( andSgn > 0) { sgn = -1.0 }
// var sgn = -1 if 0x8000000000000000 & num else 1
let manta = bytes8_test & 0x00ffffffffffffff
let exp = (bytes8_test >> 56) & 0x7f
let powBase = sgn * Double(manta)
let expPow = (4.0 * (Double(exp) - 64.0) - 56.0)
var testReal = pow( powBase , expPow )
Another I tried:
let bitArrayDecode = decodeBitArray(bitArray: bitArray)
let valueArray = calcValueOfArray(bitArray: bitArrayDecode)
var exponent: Int16
// calculate exponent
if(negative){
    exponent = valueArray - 192
} else {
    exponent = valueArray - 64
}
// calculate mantissa
var mantissa = 0.0
//sgn = -1 if 0x8000000000000000 & num else 1
//mant = num & 0x00ffffffffffffff
//exp = (num >> 56) & 0x7f
//return math.ldexp(sgn * mant, 4 * (exp - 64) - 56)
for index in 0...7 {
    //let mantaByte = bytes8_1st[index]
    //mantissa += Double(mantaByte) / pow(256.0, Double(index))
    let bit = pow(2.0, Double(7-index))
    let scaleBit = pow(2.0, Double( index ))
    var mantab = (8.0 * Double( bytes8_1st[1] & UInt8(bit)))/(bit*scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double( bytes8_1st[2] & UInt8(bit)))/(256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
    mantab = (8.0 * Double( bytes8_1st[3] & UInt8(bit)))/(256.0 * bit * scaleBit)
    mantissa = mantissa + mantab
}
let real = mantissa * pow(16.0, Double(exponent))
UPDATE:
The following part seems to work for the exponent. It returns -9 for the data set I'm working with, which is what I expect.
var exp = Int16((bytes8 >> 56) & 0x7f)
exp = exp - 65 //change from excess 64
print(exp)
var sgnVal = 0x8000000000000000 & bytes8
var sgn = 1.0
if(sgnVal == 1){
    sgn = -1.0
}
For the mantissa, though, I can't get the calculation correct somehow.
The data set:
3d 68 db 8b ac 71 0c b4
38 6d f3 7f 67 5e f6 ec
I think it should return 1e-9 for exponent and 0.0001
The closest I've gotten is real Double 0.0000000000034907316148746757.
var bytes7 = Array<UInt8>()
for (index, by) in data.enumerated(){
    if(index < 4) {
        bytes7.append(by[0])
        bytes7.append(by[1])
    }
}
for index in 0...7 {
    mantissa += Double(bytes7[index]) / (pow(256.0, Double(index) + 1.0 ))
}
var real = mantissa * pow(16.0, Double(exp));
print(mantissa)
END OF UPDATE.
The second attempt also doesn't seem to produce the correct values; that one was based on a C file.
If anyone can help me out with an English explanation of what the spec means, or any pointers on what to do I would really appreciate it.
Thanks!
According to the doc, this code returns the 8-byte Real data as Double.
extension Data {
    func readUInt64BE(_ offset: Int) -> UInt64 {
        var value: UInt64 = 0
        _ = Swift.withUnsafeMutableBytes(of: &value) {bytes in
            copyBytes(to: bytes, from: offset..<offset+8)
        }
        return value.bigEndian
    }
    func readReal64(_ offset: Int) -> Double {
        let bitPattern = readUInt64BE(offset)
        // Bit 0 is the sign, bits 1-7 the excess-64 exponent, bits 8-63 the mantissa.
        let sign: FloatingPointSign = (bitPattern & 0x80000000_00000000) != 0 ? .minus: .plus
        // 16^(E-64) equals 2^(4*(E-64)); the extra -56 treats the 56 mantissa bits as an integer.
        let exponent = (Int((bitPattern >> 56) & 0x00000000_0000007F)-64) * 4 - 56
        let significand = Double(bitPattern & 0x00FFFFFF_FFFFFFFF)
        let result = Double(sign: sign, exponent: exponent, significand: significand)
        return result
    }
}
Usage:
//Two 8-byte Real data taken from the example in the doc
let data = Data([
    //1.0000000000000E-03
    0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef,
    //1.0000000000000E-09
    0x39, 0x44, 0xb8, 0x2f, 0xa0, 0x9b, 0x5a, 0x54,
])
let real1 = data.readReal64(0)
let real2 = data.readReal64(8)
print(real1, real2) //->0.001 1e-09
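As a hand check of the first value, following the spec quoted above: the sign bit is 0, the exponent byte is 0x3E = 62, so the scale factor is 16^(62 - 64) = 16^-2, and the mantissa is 0x4189374BC6A7EF / 2^56 ≈ 0.256; 0.256 * 16^-2 = 0.001, which matches.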
Another example from "UPDATE":
//0.0001 in "UPDATE"
let data = Data([0x3d, 0x68, 0xdb, 0x8b, 0xac, 0x71, 0x0c, 0xb4, 0x38, 0x6d, 0xf3, 0x7f, 0x67, 0x5e, 0xf6, 0xec])
let real = data.readReal64(0)
print(real) //->0.0001
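If I decode the second eight bytes of that data as well (readReal64(8)), I get approximately 1e-10 rather than 1e-9, which would make the pair look like a GDSII UNITS record of 0.0001 user units per database unit and 1e-10 meters per database unit; you may want to double-check that against your file.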
Please remember that Double has only a 52-bit significand (mantissa), so this code loses some significant bits from the original 8-byte Real. I'm not sure whether that will be an issue for you or not.
I am following along in my Scala textbook and I see this:
scala> val hex = 0x5
hex: Int = 5
scala> val hex2 = 0x00ff
hex2: Int = 255
scala> val hex3 = 0xff
hex3: Int = 255
scala> var hex4 = 0xbe
hex4: Int = 190
scala> var hex5 = 0xFF
hex5: Int = 255
scala> val magic = 0xcafebabe
magic: Int = -889275714
scala> var prog = 0xCAFEBABEL
prog: Long = 3405691582
scala> val tower = 35l
tower: Long = 35
My questions:
Why do you need the extra 00 after the x in 0x00FF?
I get why FF = 255... hexadecimal is base 16, starting at 00 = 0 and 0F = 15. But why does 0xcafebabe = -889275714?
What is going on with the Longs? I don't understand what is happening there.
You don't; it's just to show that leading 0s are ignored, as far as I can tell.
Int is a 32-bit signed integer: if you exceed 2^31 - 1, the highest bit gets set but is interpreted as the sign. In short, you have an overflow.
If you add the "L" suffix, the value is a Long, which uses 64 bits, so the overflow doesn't happen.
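Concretely: 0xcafebabe is 3405691582, which is larger than Int.MaxValue (2147483647), so as a 32-bit two's-complement Int it wraps around to 3405691582 - 2^32 = -889275714. With the L suffix the literal is stored in 64 bits and stays 3405691582.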
The two zeros in 0x00FF are just padding; they don't change the value or make the number signed or unsigned.
0xcafebabe comes out negative because it sets the top bit of a 32-bit Int, and that bit is the sign bit in two's complement.
Finally, the point of the L suffix is to make the literal a 64-bit Long, where the same digits sit well below the sign bit, thus giving us a positive number.
I want to convert a hex value into a decimal value. So I have tried this. When the value is > 0, it works fine. But when the value is < 0, it returns the wrong value.
let h2 = "0373"
let d4 = Int(h2, radix: 16)!
print(d4) // 883
let h3 = "FF88"
let d5 = Int(h3, radix: 16)!
print(d5) // 65416
When I pass FF88, it returns 65416, but it should actually be -120.
Right now I am following this Convert between Decimal, Binary and Hexadecimal in Swift answer, but it doesn't always work.
Please check this conversation online; see the following image for more details of the conversation.
Is there any other solution to get a negative decimal value from a hex string?
Any help would be appreciated!
FF 88 is the hexadecimal representation of the 16-bit signed integer
-120. You can create an unsigned integer first and then convert it
to the signed counterpart with the same bit pattern:
let h3 = "FF88"
let u3 = UInt16(h3, radix: 16)! // 65416
let s3 = Int16(bitPattern: u3) // -120
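If your hex string represents a different integer width, the same bit-pattern trick should work with the matching pair of types (the values below are just for illustration):
let s8 = Int8(bitPattern: UInt8("88", radix: 16)!)          // -120
let s32 = Int32(bitPattern: UInt32("FFFFFF88", radix: 16)!) // -120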
Hexadecimal conversion depends on the integer type (signed or unsigned) and size (64, 32, or 16 bits); that is what you missed.
Source code:
let h3 = "FF88"
let d5 = Int16(truncatingBitPattern: strtoul(h3, nil, 16))
print(d5) // -120
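Note that truncatingBitPattern is the Swift 3 spelling; on Swift 4 and later the equivalent should be truncatingIfNeeded:
let d5 = Int16(truncatingIfNeeded: strtoul(h3, nil, 16))
print(d5) // -120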
I do not understand why
char test = '\032';
converts to
26 dec
'\032' seems to be interpreted as octal, but I want it to be treated as a decimal number.
I think I am confused with the character encoding.
Can anybody clarify this for me and give me a hint on how to convert it the way I want it?
In C, '\octal-digit' begins an octal-escape-sequence. There is no decimal-escape-sequence.
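(That is why '\032' comes out as 26: octal 32 is 3 * 8 + 2 = 26 in decimal.)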
Code could simply use:
char test = 32;
To assign the value of 32 to a char, code has many options:
// octal escape sequence
char test1 = '\040'; // \ and then 1, 2 or 3 octal digits
char test2 = '\40';
// hexadecimal escape sequence
char test3 = '\x20'; // \x and then 1 or more hexadecimal digits
// integer decimal constant
char test4 = 32; // 1-9 and then 0 or more decimal digits
// integer octal constant
char test5 = 040; // 0 and then 0 or more octal digits
char test6 = 0040;
char test7 = 00040;
// integer hexadecimal constant
char test8 = 0x20; // 0x or 0X and then 1 or more hexadecimal digits
char test9 = 0X20;
// universal-character-name
char testA = '\u0020'; // \u & 4 hex digits
char testB = '\U00000020'; // \U & 8 hex digits
// character constant
char testC = ' '; // When the character set is ASCII
The syntax you are using ('\032', a backslash followed by octal digits) is octal. To use decimal, you can just do:
char test = (char)32;
Is there a way to convert numbers to strings in MATLAB?
For example, 30120 turns into cat,
where c is 03, a is 01, and t is 20.
Here is my progress on RSA encryption/decryption; I am trying to decrypt into plain text.
% variables
p=vpi('22953686867719691230002707821868552601124472329079')
q=vpi('30762542250301270692051460539586166927291732754961')
e=vpi('555799999999999');
n=(p.*q)
phi=((q-1).*(p-1))
% how to convert plaintext to integer mod 26
abc = 'abcdefghijklmnopqrstuvwxyz';
word = 'acryptographicallystrongrandomnumbergeneratorwhichhasbeenproperlyseededwithadequateentropymustbeusedtogeneratetheprimespandqananalysiscomparingmillionsofpublickeysgatheredfromtheinternetwasrecentlycarriedoutbylenstrahughesaugierboskleinjungandwachteracryptographicallystrongrandomnumbergeneratorwhichhasbeenproperlyseededwithadequateentropymustbeused';
[temp1, temp2, temp3, temp4, temp5, temp6, temp7,temp8,temp9] = split(word)
[int1,int2,int3,int4,int5,int6,int7,int8,int9] = intsplit(temp1,temp2,temp3,temp4,temp5,temp6,temp7,temp8,temp9)
[encrypt1, encrypt2, encrypt3, encrypt4, encrypt5, encrypt6, encrypt7, encrypt8, encrypt9] = encrypt_this(int1, int2, int3, int4, int5, int6, int7, int8, int9)
[decrypt1, decryt2, decrypt3, decrypt4, decryt5, decrypt6,decrypt7, decryt8, decrypt9] = decrypt_this(encrypt1, encrypt2, encrypt3, encrypt4, encrypt5, encrypt6, encrypt7, encrypt8, encrypt9)
From 30120 to 'cat':
num = 30120;
l = ceil(numel(num2str(num))/2); % number of characters
num2 = sprintf('%06i', num);     % num in string form, with leading zero if needed: '030120'
str = '';
for i = 1:l
    str = [str char(96+str2num(num2(2*i-1:2*i)))];
end % 96 is the ASCII offset needed to get a=1, c=3 etc like in your example
Results in str = 'cat'. I think that's what you want.
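Tracing it for 30120: '03' gives char(96+3) = 'c', '01' gives char(96+1) = 'a', and '20' gives char(96+20) = 't'.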