I receive two equivalent strings from my database, depending on whether I ask for the value in binary or text format.
Binary is hexadecimal... 4d4d002a0000100801010101010101...(134916 characters)
Text is (I think ASCII decimal)... \x3464346430303261303030... (269832 characters)
I can convert the hexadecimal version into a byte array and ultimately an NSData (67458 bytes):
let data = NSMutableData(capacity: self.characters.count / 2)
// Step through the string two hex characters at a time.
for var index = self.startIndex; index < self.endIndex; index = index.advancedBy(2) {
    // Take the next two-character chunk and parse it as a single base-16 byte.
    let byteString = self.substringWithRange(Range<String.Index>(start: index, end: index.advancedBy(2)))
    let byteUInt = UInt8(strtoul(byteString, nil, 16))
    data?.appendBytes([byteUInt], length: 1)
}
But I am having no such luck with the text version. Tried parsing it a million different ways and I can't come up with an equivalent conversion.
If it matters, the database is PostgreSQL v9.5 and the data in text format is returned as a null-terminated character string (char *).
Any insight would be greatly appreciated.
It appears that the text version is a hex encoding of the hex encoding, so you should be able to produce the proper result by applying the same conversion twice:
34 | 64 | 34 | 64 | 30 | 30 | 32 | 61 | 30 | 30 | 30 | -- Original
4 | d | 4 | d | 0 | 0 | 2 | a | 0 | 0 | 0 | -- ASCII conversion
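For instance, here is a minimal sketch of that double decode in current Swift (the helper name and the short sample value are mine; remember to strip PostgreSQL's leading \x from the text value first):

import Foundation

func hexDecoded(_ hex: String) -> Data? {
    guard hex.count % 2 == 0 else { return nil }
    var data = Data(capacity: hex.count / 2)
    var index = hex.startIndex
    while index < hex.endIndex {
        let next = hex.index(index, offsetBy: 2)
        // Each two-character chunk is one base-16 byte.
        guard let byte = UInt8(hex[index..<next], radix: 16) else { return nil }
        data.append(byte)
        index = next
    }
    return data
}

let textValue = "3464346430303261"   // stands in for the 269832-character text value
if let inner = hexDecoded(textValue),                    // ASCII bytes of the inner hex string
   let innerHex = String(data: inner, encoding: .ascii), // "4d4d002a"
   let payload = hexDecoded(innerHex) {                  // the actual binary payload
    print([UInt8](payload))                              // [77, 77, 0, 42], i.e. 4d 4d 00 2a
}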
With Perl, one could use bignum to set the level of precision for all operators. As in:
use bignum ( p => -50 );
print sqrt(20); # 4.47213595499957939281834733746255247088123671922305
With Raku I have no problems with rationals, since I can use Rat / FatRat, but I don't know how to get a higher level of precision for sqrt:
say 20.sqrt # 4.47213595499958
As stated in Elizabeth's answer, sqrt returns a Num type, so it has limited precision; see her answer for more detail.
For that reason I created a Raku class, BigRoot, which uses Newton's method and FatRat types to calculate the roots. You may use it like this:
use BigRoot;
# Can change precision level (Default precision is 30)
BigRoot.precision = 50;
my $root2 = BigRoot.newton's-sqrt: 2;
# 1.41421356237309504880168872420969807856967187537695
say $root2.WHAT;
# (FatRat)
# Can use other root numbers
say BigRoot.newton's-root: root => 3, number => 30;
# 3.10723250595385886687766242752238636285490682906742
# Numbers can be Int, Rational and Num:
say BigRoot.newton's-sqrt: 2.123;
# 1.45705181788431944566113502812562734420538186940001
# Can use other rational roots
say BigRoot.newton's-root: root => FatRat.new(2, 3), number => 30;
# 164.31676725154983403709093484024064018582340849939498
# Results are rounded:
BigRoot.precision = 8;
say BigRoot.newton's-sqrt: 2;
# 1.41421356
BigRoot.precision = 7;
say BigRoot.newton's-sqrt: 2;
# 1.4142136
In general it seems to be pretty fast (at least compared to Perl's bignum).
Benchmarks:
|---------------------------------------|-------------|------------|
| sqrt with 10_000 precision digits | Raku | Perl |
|---------------------------------------|-------------|------------|
| 20000000000 | 0.714 | 3.713 |
|---------------------------------------|-------------|------------|
| 200000.1234 | 1.078 | 4.269 |
|---------------------------------------|-------------|------------|
| π | 0.879 | 3.677 |
|---------------------------------------|-------------|------------|
| 123.9/12.29 | 0.871 | 9.667 |
|---------------------------------------|-------------|------------|
| 999999999999999999999999999999999 | 1.208 | 3.937 |
|---------------------------------------|-------------|------------|
| 302187301.3727 / 123.30219380928137 | 1.528 | 7.587 |
|---------------------------------------|-------------|------------|
| 2 + 999999999999 ** 10 | 2.193 | 3.616 |
|---------------------------------------|-------------|------------|
| 91200937373737999999997301.3727 / π | 1.076 | 7.419 |
|---------------------------------------|-------------|------------|
If you want to implement your own sqrt using Newton's method, this is the basic idea behind it:
sub newtons-sqrt(:$number, :$precision) returns FatRat {
    # Stop once successive guesses agree beyond the requested precision.
    my FatRat $error = FatRat.new: 1, 10 ** ($precision + 1);
    # Seed the iteration with the native-precision square root.
    my FatRat $guess = (sqrt $number).FatRat;
    my FatRat $input = $number.FatRat;
    my FatRat $diff  = $input;
    while $diff > $error {
        my FatRat $new-guess = $guess - (($guess ** 2 - $input) / (2 * $guess));
        $diff  = abs($new-guess - $guess);
        $guess = $new-guess;
    }
    return $guess.round: FatRat.new: 1, 10 ** $precision;
}
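For example, calling the sub with the numbers from the question should reproduce Perl's 50-digit result:

say newtons-sqrt(number => 20, precision => 50);
# 4.47213595499957939281834733746255247088123671922305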
In Rakudo, sqrt is implemented using the sqrt_n NQP opcode; the _n suffix indicates that it only supports native nums, which implies limited precision.
Internally, I'm pretty sure this just maps to the sqrt functionality of one of the underlying math libraries that MoarVM uses.
I guess what we need is an ecosystem module that exports a sqrt function based on Rational arithmetic. That would give you the option of higher-precision sqrt implementations at the expense of performance, which might in turn prove interesting enough to integrate into core.
I need to output a number as six digits, with leading zeros. In C or Java I would use "%06d" as the format string. Does PureScript support format strings? Or how else would I achieve this?
I don't know of any module that supports printf-style functionality in PureScript. It would be very nice to have a type-safe way to format numbers.
In the meantime, I would write something like this:
import Data.String (length, fromCharArray)
import Data.Array (replicate)

-- | Pad a string with the given character up to a maximum length.
padLeft :: Char -> Int -> String -> String
padLeft c len str = prefix <> str
  where
    prefix = fromCharArray (replicate (len - length str) c)

-- | Pad a number with leading zeros up to the given length.
padZeros :: Int -> Int -> String
padZeros len num
  | num >= 0  = padLeft '0' len (show num)
  | otherwise = "-" <> padLeft '0' len (show (-num))
Which produces the following results:
> padZeros 6 8
"000008"
> padZeros 6 678
"000678"
> padZeros 6 345678
"345678"
> padZeros 6 12345678
"12345678"
> padZeros 6 (-678)
"-000678"
Edit: In the meantime, I've written a small module that can format numbers in this way:
https://github.com/sharkdp/purescript-format
For your particular example, you would need to do the following:
If you want to format Integers:
> format (width 6 <> zeroFill) 123
"000123"
If you want to format Numbers:
> format (width 6 <> zeroFill <> precision 1) 12.345
"0012.3"
I have integer values (3, 60, 150, 1500) and float values (1.23354, 1.234, 1.234567, ...).
I calculate the number of digits of the biggest integer:
# Floor(Log10)+1 gives the digit count; Ceiling(Log10) alone miscounts exact powers of 10.
$nInt = [math]::Floor([math]::Log10($maxInt)) + 1
# nInt = 4
and, separately, the largest number of decimals behind the decimal point of the float values: $nDec = 6.
How can I format a printout so that all integers have the same string length, with leading spaces?
|1500
| 150
|  60
|   3
And all floats with the same string length as well?
1.234567|
1.23354 |
1.234   |
The | is just to mark my 'point of measure'.
Of course I have to choose a font in which all characters have the same pixel width.
I am thinking of formatting with "{0:n}" or $int.ToString(""), but I can't see how to use this.
Try PadLeft or PadRight. For example, for each of your integers:
$int.ToString().PadLeft($nInt, ' ')
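Putting both directions together, a sketch along these lines should produce the layout from the question (variable names are mine; widths are measured on the string forms rather than derived from logarithms):

$ints   = 3, 60, 150, 1500
$floats = 1.23354, 1.234, 1.234567

# Width of the widest value in each set.
$intWidth   = ($ints   | ForEach-Object { "$_".Length } | Measure-Object -Maximum).Maximum
$floatWidth = ($floats | ForEach-Object { "$_".Length } | Measure-Object -Maximum).Maximum

$ints   | ForEach-Object { $_.ToString().PadLeft($intWidth) }    # "   3", "  60", " 150", "1500"
$floats | ForEach-Object { $_.ToString().PadRight($floatWidth) } # "1.23354 ", "1.234   ", "1.234567"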
In the code below, I am losing the last character in my string.
NSString *testString = @"— choose a category —";
NSData *testData = [NSData dataWithBytes:[testString UTF8String] length:[testString length]];
NSString *newString = [[[NSString alloc] initWithData:testData encoding:NSUTF8StringEncoding] autorelease];
The debugger is showing this:
(lldb) po testString
(NSString *) $7 = 0x002ec7f0 — choose a category —
(lldb) po testData
(NSData *) $8 = 0x1003d1c0 <e2809420 63686f6f 73652061 20636174 65676f72 79>
(lldb) po newString
(NSString *) $9 = 0x09109f50 — choose a category
(lldb)
The bytes correspond to characters as follows (note that the trailing space and em dash never made it into the data):
e2 80 94 | 20 | 63 | 68 | 6f | 6f | 73 | 65 | 20 | 61 | 20 | 63 | 61 | 74 | 65 | 67 | 6f | 72 | 79 | (missing) | (missing)
EM DASH  | sp | c  | h  | o  | o  | s  | e  | sp | a  | sp | c  | a  | t  | e  | g  | o  | r  | y  | sp        | EM DASH
I am seeing the same problem with longer strings that I am uploading to my server, and it seems to always be where multi-byte UTF8 characters are used.
When I download the logged data from my server, the Unicode characters (those that haven't been truncated) appear correctly. But the logged string on my server is truncated, indicating that the truncation already exists in the NSData object.
What am I doing wrong here?
Here is the solution. This may help someone else, so I'll leave it up here rather than deleting the question.
NSData dataWithBytes:length: expects the length of the byte array, in bytes. But [testString length] returns the number of UTF-16 code units in the string, which undercounts any character that needs more than one byte in UTF-8 (such as the em dashes here). The correct byte count is only known after the NSString has been converted to its null-terminated UTF8 representation.
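To see the mismatch concretely (a quick check; the counts are specific to this string):

NSLog(@"%lu", (unsigned long)[testString length]);             // 21 UTF-16 code units
NSLog(@"%lu", (unsigned long)strlen([testString UTF8String])); // 25 UTF-8 bytes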
So the conversion to NSData is handled correctly this way:
NSData *testData = [NSData dataWithBytes:[testString UTF8String] length:strlen([testString UTF8String])];
To avoid converting the testString twice, this can be done:
const char *testStringUTF8 = [testString UTF8String];
NSData *testData = [NSData dataWithBytes:testStringUTF8 length:strlen(testStringUTF8)];
The NSString class reference states that the C string returned by the UTF8String method is handled "just as a returned object is released", meaning it is autoreleased. (See the class reference for the exact wording.)
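Alternatively, NSString can report the UTF-8 byte count itself, or produce the NSData in a single step; both of these are standard Foundation APIs:

NSData *testData = [NSData dataWithBytes:[testString UTF8String]
                                  length:[testString lengthOfBytesUsingEncoding:NSUTF8StringEncoding]];
// or, more simply:
NSData *testData2 = [testString dataUsingEncoding:NSUTF8StringEncoding];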
I am building an application to read and write to a microchip's memory.
I need to pass an unsigned char array that has 4 fixed bytes and 2 variable bytes, so, supposing I want to read memory bank 0004, I will pass:
unsigned char sRequest[32]={0};
sRequest[0] = 0x02; //FIXED
sRequest[1] = 0x1F; //FIXED
sRequest[2] = 0x0A; //FIXED
sRequest[3] = 0x20; //FIXED
sRequest[4] = 0x00; //VARIABLE
sRequest[5] = 0x04; //VARIABLE
I want to put 2 CEdit boxes for the user to input that variable memory bank, so the user would write 0x00 in the first CEdit and 0x04 in the second one.
So my question is: how can I translate each of these inputs into an unsigned char so I can set it in my sRequest variable?
Thanks in advance.
(thanks dave, mistyped bytes into bits, fixed now)
Actually, I wouldn't do that with free-form text entry at all - it would be too much of a bother to do validation (being inherently lazy). If there's a chance to restrict what your user is able to give you as early as possible, you should take it :-)
I would provide drop-down boxes for the values, one for each nybble, so that the quantity to select from is not too onerous. Something like (and with apologies for my graphical skills):
+---+-+ +---+-+ +---+-+ +---+-+
Enter value 0x | 0 |V| | 0 |V| | 0 |V| | 4 |V|
+---+-+ +---+-+ +---+-+ +---+-+
|>0<|
| 1 |
| 2 |
| 3 |
: : :
| E |
| F |
+---+
For the values in the listbox, I would set the item data to be 0 through 15 inclusive (for the items 0 through F).
That way, you could get a single byte value with something like:
byte04 = listboxA.GetItemData(listboxA.GetCurSel()) * 16 +
         listboxB.GetItemData(listboxB.GetCurSel());
byte05 = listboxC.GetItemData(listboxC.GetCurSel()) * 16 +
         listboxD.GetItemData(listboxD.GetCurSel());
If you must use a less restrictive input method, C++ provides:
int stoi (const string& str, size_t *idx = 0, int base = 10);
(and stol/stoul for signed and unsigned longs) to allow you to convert C++ strings to integral types.
For your purposes, note that with base 16 these functions already skip leading whitespace and accept an optional leading 0x, but you would still have to reject trailing garbage and out-of-range values yourself, which is why I suggest the restrictive route as a better option.
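For illustration, here is a minimal sketch of that kind of validation (the helper name and the textFrom...Edit variables are mine; assume the CEdit contents have already been copied into std::strings):

#include <stdexcept>
#include <string>

// Parse text such as "04" or "0x1f" into a single byte, rejecting
// trailing garbage and values above 0xFF.
unsigned char parseHexByte(const std::string& text)
{
    std::size_t pos = 0;
    unsigned long value = std::stoul(text, &pos, 16); // throws if no digits at all
    if (pos != text.size() || value > 0xFF)
        throw std::invalid_argument("not a single hex byte: " + text);
    return static_cast<unsigned char>(value);
}

// e.g. sRequest[4] = parseHexByte(textFromFirstEdit);
//      sRequest[5] = parseHexByte(textFromSecondEdit);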