I have integer values: 3, 60, 150, 1500, and float values: 1.23354, 1.234, 1.234567...
I calculate the number of digits of the biggest integer:
$nInt = [System.Math]::Ceiling([math]::log10($maxInt))
# nInt = 4
and, separately, the largest number of decimals after the decimal point among the float values: $nDec = 6.
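For reference, $nDec could be computed along these lines (just a sketch, assuming the invariant culture so the decimal separator is always '.'):
$floats = 1.23354, 1.234, 1.234567
$nDec = ($floats | ForEach-Object {
        $s = $_.ToString([cultureinfo]::InvariantCulture)
        if ($s.Contains('.')) { $s.Split('.')[1].Length } else { 0 }
    } | Measure-Object -Maximum).Maximum
# $nDec = 6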
How can I format the output so that all integers have the same string length, padded with leading spaces?
|1500
| 150
|  60
|   3
And all floats with the same string length as well?
1.234567|
1.23354 |
1.234   |
The | is just to mark my 'point of measure'.
Of course I have to choose a font in which all characters have the same pixel width (i.e. a monospaced font).
I am thinking of formatting with "{0:n}" or $int.ToString(""), but I can't see how to use this.
Try PadLeft or PadRight. For example, for your integers:
$int.ToString().PadLeft($nInt, ' ')
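Putting it together with the values and widths from the question ($nInt = 4, $nDec = 6), a minimal sketch:
$nInt = 4
$nDec = 6
$ints   = 3, 60, 150, 1500
$floats = 1.23354, 1.234, 1.234567
foreach ($i in $ints)   { '|' + $i.ToString().PadLeft($nInt, ' ') }
foreach ($f in $floats) { $f.ToString().PadRight($nDec + 2, ' ') + '|' }  # width = 1 integer digit + '.' + $nDec decimals; assumes '.' as decimal separator
The -f format operator gives you the same alignment without PadLeft: '{0,4}' -f $i right-aligns $i in a 4-character field.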
Let's say I have this range of numbers; I want to expand these intervals. What is going wrong with my code here? The answer I am getting isn't correct :(
intervals are represented only with -
each 'thing' is separated by ;
I would like the output to be:
-6 -3 -2 -1 3 4 5 7 8 9 10 11 14 15 17 18 19 20
range_expansion('-6;-3--1;3-5;7-11;14;15;17-20 ')
function L=range_expansion(S)
% Range expansion
if nargin < 1;
    S='[]';
end
if all(isnumeric(S) | (S=='-') | (S==',') | isspace(S))
    error 'invalid input';
end
ixr = find(isnumeric(S(1:end-1)) & S(2:end) == '-')+1;
S(ixr)=':';
S=['[',S,']'];
L=eval(S) ;
end
ans =
-6 -2 -2 -4 14 15 -3
You can use regexprep to replace ; by , and to replace the - that define ranges by :. Those - are identified by being preceded by a digit. The result is a string that can be transformed into the desired output using str2num. However, since this function evaluates the string, for safety it is first checked that the string contains only the allowed characters:
in = '-6;-3--1;3-5;7-11;14;15;17-20 '; % example
assert(all(ismember(in, '0123456789 ,;-')), 'Characters not allowed') % safety check
out = str2num(regexprep(in, {'(?<=\d)-' ';'}, {':' ','})); % replace and evaluate
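For the example input, the replacement step yields the intermediate string below, which str2num then evaluates as one row vector, giving the desired result:
regexprep(in, {'(?<=\d)-' ';'}, {':' ','})
% ans = '-6,-3:-1,3:5,7:11,14,15,17:20 '
out
% out =
%   -6  -3  -2  -1   3   4   5   7   8   9  10  11  14  15  17  18  19  20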
With Perl, one could use bignum to set the level of precision for all operators. As in:
use bignum ( p => -50 );
print sqrt(20); # 4.47213595499957939281834733746255247088123671922305
With Raku I have no problems with rationals, since I can use Rat / FatRat, but I don't know how to get a higher level of precision for sqrt:
say 20.sqrt # 4.47213595499958
As stated in Elizabeth's answer, sqrt returns a Num type and thus has limited precision; see her answer for more detail.
For that reason I created a Raku class, BigRoot, which uses Newton's method and FatRat types to calculate the roots. You may use it like this:
use BigRoot;
# Can change precision level (Default precision is 30)
BigRoot.precision = 50;
my $root2 = BigRoot.newton's-sqrt: 2;
# 1.41421356237309504880168872420969807856967187537695
say $root2.WHAT;
# (FatRat)
# Can use other root numbers
say BigRoot.newton's-root: root => 3, number => 30;
# 3.10723250595385886687766242752238636285490682906742
# Numbers can be Int, Rational and Num:
say BigRoot.newton's-sqrt: 2.123;
# 1.45705181788431944566113502812562734420538186940001
# Can use other rational roots
say BigRoot.newton's-root: root => FatRat.new(2, 3), number => 30;
# 164.31676725154983403709093484024064018582340849939498
# Results are rounded:
BigRoot.precision = 8;
say BigRoot.newton's-sqrt: 2;
# 1.41421356
BigRoot.precision = 7;
say BigRoot.newton's-sqrt: 2;
# 1.4142136
In general it seems to be pretty fast (at least compared to Perl's bignum):
Benchmarks:
|---------------------------------------|-------------|------------|
| sqrt with 10_000 precision digits | Raku | Perl |
|---------------------------------------|-------------|------------|
| 20000000000 | 0.714 | 3.713 |
|---------------------------------------|-------------|------------|
| 200000.1234 | 1.078 | 4.269 |
|---------------------------------------|-------------|------------|
| π | 0.879 | 3.677 |
|---------------------------------------|-------------|------------|
| 123.9/12.29 | 0.871 | 9.667 |
|---------------------------------------|-------------|------------|
| 999999999999999999999999999999999 | 1.208 | 3.937 |
|---------------------------------------|-------------|------------|
| 302187301.3727 / 123.30219380928137 | 1.528 | 7.587 |
|---------------------------------------|-------------|------------|
| 2 + 999999999999 ** 10 | 2.193 | 3.616 |
|---------------------------------------|-------------|------------|
| 91200937373737999999997301.3727 / π | 1.076 | 7.419 |
|---------------------------------------|-------------|------------|
If you want to implement your own sqrt using Newton's method, this is the basic idea behind it:
sub newtons-sqrt(:$number, :$precision) returns FatRat {
    my FatRat $error = FatRat.new: 1, 10 ** ($precision + 1);
    my FatRat $guess = (sqrt $number).FatRat;
    my FatRat $input = $number.FatRat;
    my FatRat $diff  = $input;

    while $diff > $error {
        my FatRat $new-guess = $guess - (($guess ** 2 - $input) / (2 * $guess));
        $diff  = abs($new-guess - $guess);
        $guess = $new-guess;
    }

    return $guess.round: FatRat.new: 1, 10 ** $precision;
}
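For example, calling the sketch above (illustrative; the result is rounded to the requested number of decimals):
say newtons-sqrt(number => 2, precision => 10);
# 1.4142135624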
In Rakudo, sqrt is implemented using the sqrt_n NQP opcode, which indicates that it only supports native nums (hence the _n suffix), and that in turn implies limited precision.
Internally, I'm pretty sure this just maps to the sqrt functionality of one of the underlying math libraries that MoarVM uses.
I guess what we need is an ecosystem module that exports a sqrt function based on Rational arithmetic. That would give you the option of a higher-precision sqrt implementation at the expense of performance, which in turn might prove interesting enough to integrate into core.
I receive two equivalent strings from my database depending on whether I ask for it in binary or text format.
Binary is hexadecimal... 4d4d002a0000100801010101010101...(134916 characters)
Text is (I think ASCII decimal)... //x3464346430303261303030... (269832 characters)
I can convert the hexadecimal version into a byte array and ultimately an NSData (67458 bytes):
let data = NSMutableData(capacity: self.characters.count / 2)
for var index = self.startIndex; index < self.endIndex; index = index.advancedBy(2) {
    let byteString = self.substringWithRange(Range<String.Index>(start: index, end: index.advancedBy(2)))
    let byteUInt = UInt8(strtoul(byteString, nil, 16))
    data?.appendBytes([UInt8]([byteUInt]), length: 1)
}
But I am having no such luck with the text version. I've tried parsing it a million different ways and can't come up with an equivalent conversion.
If it matters, the database is PostgreSQL v9.5 and the data in text format is returned as a null-terminated character string (char *).
Any insight would be greatly appreciated.
It appears that "ASCII representation" is a hex encoding of the hex encoding, so you should be able to produce a proper result by applying the same conversion twice:
34 | 64 | 34 | 64 | 30 | 30 | 32 | 61 | 30 | 30 | 30 | -- Original
4 | d | 4 | d | 0 | 0 | 2 | a | 0 | 0 | 0 | -- ASCII conversion
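A sketch of that double decode in current Swift (the helper name dataFromHex is mine; the sample string is shortened from the question's):
import Foundation

func dataFromHex(_ hex: String) -> Data? {
    var data = Data(capacity: hex.count / 2)
    var index = hex.startIndex
    while index < hex.endIndex {
        guard let next = hex.index(index, offsetBy: 2, limitedBy: hex.endIndex),
              let byte = UInt8(hex[index..<next], radix: 16)
        else { return nil }                      // odd length or non-hex character
        data.append(byte)
        index = next
    }
    return data
}

// First pass turns "3464346430303261" into the ASCII text "4d4d002a",
// second pass turns that into the actual bytes 4D 4D 00 2A.
let doubled = "3464346430303261"
if let pass1 = dataFromHex(doubled),
   let ascii = String(data: pass1, encoding: .ascii),
   let bytes = dataFromHex(ascii) {
    print(bytes.map { String(format: "%02x", $0) }.joined())   // 4d4d002a
}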
I am building an application to read and write to a microchip's memory.
I need to pass an unsigned char array that has 4 fixed bytes and 2 variable bytes. Supposing I want to read memory bank 0004, I will pass:
unsigned char sRequest[32]={0};
sRequest[0] = 0x02; //FIXED
sRequest[1] = 0x1F; //FIXED
sRequest[2] = 0x0A; //FIXED
sRequest[3] = 0x20; //FIXED
sRequest[4] = 0x00; //VARIABLE
sRequest[5] = 0x04; //VARIABLE
I want to put 2 CEdit boxes for the user to input that variable memory bank, so the user would write 0x00 in the first CEdit and 0x04 in the second one.
So my question is: how can I translate each of these inputs into an unsigned char so I can set it in my sRequest variable?
Thanks in advance.
(Thanks Dave; I mistyped bytes as bits, fixed now.)
Actually, I wouldn't do that with free-form text entry at all - it would be too much of a bother to do validation (being inherently lazy). If there's a chance to restrict what your user is able to give you as early as possible, you should take it :-)
I would provide drop-down boxes for the values, one for each nybble, so that the quantity to select from is not too onerous. Something like (and with apologies for my graphical skills):
                +---+-+  +---+-+  +---+-+  +---+-+
Enter value 0x  | 0 |V|  | 0 |V|  | 0 |V|  | 4 |V|
                +---+-+  +---+-+  +---+-+  +---+-+
                |>0<|
                | 1 |
                | 2 |
                | 3 |
                :   :
                | E |
                | F |
                +---+
For the values in the listbox, I would set the item data to be 0 through 15 inclusive (for the items 0 through F).
That way, you could get a single byte value with something like:
byte04 = listboxA.GetItemData(listboxA.GetCurSel()) * 16
       + listboxB.GetItemData(listboxB.GetCurSel());
byte05 = listboxC.GetItemData(listboxC.GetCurSel()) * 16
       + listboxD.GetItemData(listboxD.GetCurSel());
If you must use a less restrictive input method, C++ provides:
int stoi (const string& str, size_t *idx = 0, int base = 10);
(and stol/stoul for signed and unsigned longs) to allow you to convert C++ strings to integral types.
For your purposes, you'll probably have to detect and strip a leading 0x (along with possible leading/trailing spaces and so forth), which is why I suggest the restrictive route as the better option.
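For instance, a sketch of that conversion (the helper name and error handling are mine, not part of the original answer):
#include <stdexcept>
#include <string>

// Parse one text field ("04", "0x04", " 4 ", ...) into a single byte.
unsigned char parseHexByte(std::string s)
{
    const auto first = s.find_first_not_of(" \t");
    const auto last  = s.find_last_not_of(" \t");
    if (first == std::string::npos)
        throw std::invalid_argument("empty input");
    s = s.substr(first, last - first + 1);                        // trim spaces
    if (s.size() > 2 && s[0] == '0' && (s[1] == 'x' || s[1] == 'X'))
        s = s.substr(2);                                          // strip leading 0x
    size_t pos = 0;
    const int value = std::stoi(s, &pos, 16);
    if (pos != s.size() || value < 0 || value > 0xFF)
        throw std::invalid_argument("not a single hex byte");
    return static_cast<unsigned char>(value);
}

// e.g. sRequest[4] = parseHexByte(textFromFirstEdit);
//      sRequest[5] = parseHexByte(textFromSecondEdit);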
Is there a way to express this in a less repetitive fashion, given the optional positive and negative signs?
What I am trying to accomplish is to optionally allow a positive + sign (the default) or a negative - sign on number literals that may also have exponents and/or decimal parts.
NUMBER : ('+' | '-')? DIGIT+ '.' DIGIT* EXPONENT?
       | ('+' | '-')? '.'? DIGIT+ EXPONENT?
       ;

fragment
EXPONENT : ('e' | 'E') ('+' | '-')? DIGIT+
         ;

fragment
DIGIT : '0'..'9'
      ;
I want to be able to recognize NUMBER patterns. I am not so concerned about arithmetic on those numbers at this point (I will be later), but I am trying to understand how to recognize NUMBER literals that look like:
123
+123
-123
0.123
+.123
-.123
123.456
+123.456
-123.456
123.456e789
+123.456e789
-123.456e789
and any other standard formats that I haven't thought to include here.
To answer your question: no, there is no way to improve this AFAIK. You could place ('+' | '-') inside a fragment rule and use that fragment, just like the exponent-fragment, but I wouldn't call it a real improvement.
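Such a fragment-based variant would look something like this (just a sketch of that suggestion, nothing more):
NUMBER : SIGN? DIGIT+ '.' DIGIT* EXPONENT?
       | SIGN? '.'? DIGIT+ EXPONENT?
       ;

fragment
SIGN : '+' | '-'
     ;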
Note that unary + and - signs are generally not part of a number token. Consider the input source "1-2". You don't want that to be tokenized as 2 numbers, NUMBER[1] and NUMBER[-2], but as NUMBER[1], MINUS[-] and NUMBER[2], so your parser would contain the following:
parse
 : statement+ EOF
 ;

statement
 : assignment
 ;

assignment
 : IDENTIFIER '=' expression
 ;

expression
 : addition
 ;

addition
 : multiplication (('+' | '-') multiplication)*
 ;

multiplication
 : unary (('*' | '/') unary)*
 ;

unary
 : '-' atom
 | '+' atom
 | atom
 ;

atom
 : NUMBER
 | IDENTIFIER
 | '(' expression ')'
 ;

IDENTIFIER
 : ('a'..'z' | 'A'..'Z' | '_') ('a'..'z' | 'A'..'Z' | '_' | DIGIT)*
 ;

NUMBER
 : DIGIT+ '.' DIGIT* EXPONENT?
 | '.'? DIGIT+ EXPONENT?
 ;

fragment
EXPONENT
 : ('e' | 'E') ('+' | '-')? DIGIT+
 ;

fragment
DIGIT
 : '0'..'9'
 ;
and addition will therefore match the input "1-2".
EDIT

An expression like 111.222 + -456 is parsed as an addition whose left operand is the atom 111.222 and whose right operand is a unary - applied to the atom 456; +123 + -456 is parsed the same way, with a unary + wrapped around the left-hand 123. (The original answer illustrated both with parse-tree images.)