How to convert an integer to a char in Julia

I have a bunch of integers ns where 0 <= n <= 9 for all n in ns. I need to save them as characters or strings. I used @time to compare memory usage and I got this:
julia> @time a = "a"
0.000010 seconds (84 allocations: 6.436 KiB)
"a"
julia> @time a = 'a'
0.000004 seconds (4 allocations: 160 bytes)
'a': ASCII/Unicode U+0061 (category Ll: Letter, lowercase)
Why such a huge difference?
I chose to convert the integers into characters, but I don't understand the proper way to do it. When I do Char(1) in the REPL I get '\x01': ASCII/Unicode U+0001 (category Cc: Other, control), and if I try to print it I get an unprintable control symbol.
Instead, when I do '1' in the REPL I get '1': ASCII/Unicode U+0031 (category Nd: Number, decimal digit), and if I print it I get 1. This is the behavior I want.
How can I achieve it?
I thought about creating a dictionary to assign to each integer its corresponding character, but I am pretty sure that's not the way to go ...

Use Char(n + '0'). This adds the code point of the digit '0' (0x30), so the integers 0-9 map to the characters '0'-'9'. For example:
julia> a = 5
5
julia> Char(a+'0')
'5': ASCII/Unicode U+0035 (category Nd: Number, decimal digit)
Also note, timing with @time is a bit problematic, especially for very small operations. It is better to use @btime or @benchmark from BenchmarkTools.jl.

You probably need something like:
julia> bunch_of_integers = [1, 2, 3, 4, 5]
julia> String(map(x->x+'0', bunch_of_integers))
"12345"
or something like:
julia> map(Char, bunch_of_integers.+'0')
5-element Array{Char,1}:
'1'
'2'
'3'
'4'
'5'


What's the meaning of "[" and ")" in SemVer (semantic versioning) ranges used in SCA analysis

I have seen number ranges represented as [first1,last1) and [first2,last2).
I would like to know what such a notation means.
A bracket - [ or ] - means that end of the range is inclusive -- it includes the element listed. A parenthesis - ( or ) - means that end is exclusive and doesn't contain the listed element. So for [first1, last1), the range starts with first1 (and includes it), but ends just before last1.
Assuming integers:
(0, 5) = 1, 2, 3, 4
(0, 5] = 1, 2, 3, 4, 5
[0, 5) = 0, 1, 2, 3, 4
[0, 5] = 0, 1, 2, 3, 4, 5
That's a half-open interval.
A closed interval [a,b] includes the end points.
An open interval (a,b) excludes them.
In your case the end-point at the start of the interval is included, but the end is excluded. So it means the interval "first1 <= x < last1".
Half-open intervals are useful in programming because they correspond to the common idiom for looping:
for (int i = 0; i < n; ++i) { ... }
Here i is in the range [0, n).
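Language design reflects the same convention directly; for instance, Python's range() and slicing are deliberately half-open:
# range(a, b) and s[a:b] include the start and exclude the stop: [a, b).
print(list(range(0, 5)))   # [0, 1, 2, 3, 4]
s = "hello"
print(s[1:3])              # 'el' -> indexes 1 and 2, i.e. [1, 3)
# Handy consequences: len(range(a, b)) == b - a, and adjacent slices
# s[a:b] + s[b:c] cover s[a:c] with no overlap and no gap.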
The concept of interval notation comes up in both Mathematics and Computer Science. The Mathematical notation [, ], (, ) denotes the domain (or range) of an interval.
The brackets [ and ] mean:
The number is included,
This side of the interval is closed,
The parentheses ( and ) mean:
The number is excluded,
This side of the interval is open.
An interval with mixed states is called "half-open".
For example, the range of consecutive integers from 1 .. 10 (inclusive) would be notated as such:
[1,10]
Notice how the word inclusive was used. If we want to exclude the end point but "cover" the same range we need to move the end-point:
[1,11)
For both left and right edges of the interval there are actually 4 permutations:
(1,10) = 2,3,4,5,6,7,8,9 Set has 8 elements
(1,10] = 2,3,4,5,6,7,8,9,10 Set has 9 elements
[1,10) = 1,2,3,4,5,6,7,8,9 Set has 9 elements
[1,10] = 1,2,3,4,5,6,7,8,9,10 Set has 10 elements
How does this relate to Mathematics and Computer Science?
Array indexes tend to use a different offset depending on which field you are in:
Mathematics tends to be one-based.
Certain programming languages tend to be zero-based, such as C, C++, JavaScript, and Python, while other languages, such as Mathematica, Fortran, and Pascal, are one-based.
These differences can lead to subtle fencepost errors, a.k.a. off-by-one bugs, when implementing Mathematical algorithms with for-loops.
Integers
If we have a set or array, say of the first few primes [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ], Mathematicians would refer to the first element as the 1st (absolute) element, i.e. using subscript notation to denote the index:
a1 = 2
a2 = 3
:
a10 = 29
Some programming languages, in contradistinction, would refer to the first element as the zeroth (relative) element.
a[0] = 2
a[1] = 3
:
a[9] = 29
Since the array indexes are in the range [0, N-1], for clarity it would be "nice" to keep the same numerical value N visible in the loop bound for the range 0 .. N, instead of adding textual noise such as a -1 bias.
For example, in C or JavaScript, to iterate over an array of N elements a programmer would write the common idiom of i = 0, i < N with the interval [0,N) instead of the slightly more verbose [0,N-1]:
function main() {
    var output = "";
    var a = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ];
    for( var i = 0; i < 10; i++ ) // [0,10)
        output += "[" + i + "]: " + a[i] + "\n";
    if (typeof window === 'undefined') // Node command line
        console.log( output );
    else
        document.getElementById('output1').innerHTML = output;
}
<html>
  <body onload="main();">
    <pre id="output1"></pre>
  </body>
</html>
Mathematicians, since they start counting at 1, would instead use the i = 1, i <= N nomenclature but now we need to correct the array offset in a zero-based language.
e.g.
function main() {
    var output = "";
    var a = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ];
    for( var i = 1; i <= 10; i++ ) // [1,10]
        output += "[" + i + "]: " + a[i-1] + "\n";
    if (typeof window === 'undefined') // Node command line
        console.log( output );
    else
        document.getElementById( "output2" ).innerHTML = output;
}
<html>
  <body onload="main();">
    <pre id="output2"></pre>
  </body>
</html>
Aside:
In programming languages that are 0-based you might need the kludge of a dummy zeroth element to use a Mathematical 1-based algorithm. See e.g. Python Index Start.
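For instance, one such kludge in Python is to prepend a placeholder so a 1-based algorithm can index naturally (the None placeholder here is just an arbitrary choice):
# Index 0 holds a dummy value; primes[1] is a_1, primes[10] is a_10.
primes = [None, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(primes[1])    # 2   -> a_1
print(primes[10])   # 29  -> a_10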
Floating-Point
Interval notation is also important for floating-point numbers to avoid subtle bugs.
When dealing with floating-point numbers, especially in Computer Graphics (color conversion, computational geometry, animation easing/blending, etc.), normalized numbers are often used, that is, numbers between 0.0 and 1.0.
It is important to know whether the endpoints are inclusive or exclusive in the edge cases:
(0,1) = 1e-M .. 0.999...
(0,1] = 1e-M .. 1.0
[0,1) = 0.0 .. 0.999...
[0,1] = 0.0 .. 1.0
Where 1e-M is some small value on the order of the machine epsilon. This is why you might sometimes see the const float EPSILON = 1e-# idiom (such as 1e-6) in C code for a 32-bit floating-point number. The SO question Does EPSILON guarantee anything? has some preliminary details. For a more comprehensive answer see FLT_EPSILON and David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Some implementations of a random number generator, random(), may produce values in the range 0.0 .. 0.999... instead of the more convenient 0.0 .. 1.0. Proper comments in the code will document this as [0.0,1.0) or [0.0,1.0] so there is no ambiguity about the usage.
Example:
Say you want to generate random colors: you convert three floating-point values from random() to unsigned 8-bit values to generate a 24-bit pixel with red, green, and blue channels, respectively. Depending on the interval output by random() you may end up with near-white (254,254,254) or white (255,255,255).
+----------+------+
| random() | Byte |
+----------+------+
| 0.999... | 254  |  <-- error introduced
| 1.0      | 255  |
+----------+------+
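As a rough sketch of how that error creeps in (scaling by 255 and truncating with int() is an assumption about one common conversion, not the only way to do it):
# Convert a normalized float to a byte by scaling and truncating.
def to_byte(x):              # x assumed to lie in [0.0, 1.0] or [0.0, 1.0)
    return int(x * 255)
print(to_byte(0.9999999))    # 254 -> the "error introduced" row above
print(to_byte(1.0))          # 255
# Rounding instead, round(x * 255), avoids the 254 artifact for values
# arbitrarily close to 1.0.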
For more details about floating-point precision and robustness with intervals see Christer Ericson's Real-Time Collision Detection, Chapter 11 Numerical Robustness, Section 11.3 Robust Floating-Point Usage.
By mathematical convention in the definition of an interval, square brackets mean "extremal inclusive" (the endpoint is part of the interval) and round brackets mean "extremal exclusive" (it is not).

Store two 4-bit numbers in one 8-bit number

I am new to thinking about binary numbers. I'm wondering if there is a way to encode two 4-bit numbers (i.e. single hex digits) into one 8-bit number. So if I had a and 5 as the hex digits, those would be 10 and 5 in decimal. Maybe there is a way to store both in one 8-bit number, in such a way that you can get the component 4-bit parts back out of it.
[10, 5]! = 15
15! = [10, 5]
Wondering if there is such a way to encode the numbers to accomplish this.
It seems like it should be possible, because the first value could be stored in the high 4 bits (so in multiples of 16: 16 as 1, 32 as 2, 48 as 3, etc.) and the next value could be stored in the remaining low bits.
Can't tell if the answer here is how to do it:
How can i store 2 numbers in a 1 byte char?
Not really giving what I'd want:
> a = 10
10
> b = 5
5
> c = a + b
15
> d = (c & 0xF0) >> 4
0
> e = c & 0x0F
15
Maybe I'm not using it right, not sure. This seems like it could be it too but I am not quite sure how to accomplish this in JavaScript.
How to combine 2 4-bit unsigned numbers into 1 8-bit number in C
Any help would be greatly appreciated. Thank you!
I think the first post has the key.
Having a and 5 as the two 4-bit hex digits to store, you can store them in a variable like:
var store = 0xa5;
or dynamically:
var store = parseInt('a' + '5', 16);
Then to extract the parts:
var number1 = ((store & 0xF0) >> 4).toString(16)
var number2 = ((store & 0x0F)).toString(16)
I hope this helps.
Yes, this is supported in most programming languages; you have to do bitwise manipulation. The following is an example in Java.
To encode (validate the inputs beforehand):
byte in1 = <valid input>, in2 = <valid input>;
byte out = (byte) (in1 << 4 | in2);
To decode:
byte in = <valid input>;
byte out1 = (byte) ((in >> 4) & 0x0F); // high nibble (mask guards against sign extension)
byte out2 = (byte) (in & 0x0F);        // low nibble
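For reference, here is a minimal round-trip sketch of the same technique (written in Python purely for illustration):
# Pack two 4-bit values (0..15) into one byte, then unpack them again.
def pack(hi, lo):
    assert 0 <= hi <= 15 and 0 <= lo <= 15   # validate input beforehand
    return (hi << 4) | lo
def unpack(b):
    return (b >> 4) & 0x0F, b & 0x0F
store = pack(0xA, 0x5)   # the a and 5 from the question
print(hex(store))        # 0xa5
print(unpack(store))     # (10, 5)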

T-SQL Decimal Multiplication

MSDN says this about the precision and scale of a decimal multiplication result:
The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
So when we execute this:
DECLARE @a DECIMAL(18,9)
DECLARE @b DECIMAL(19,9)
SET @a = 1.123456789
SET @b = 1
SELECT @a * @b
the result is 1.123456789000000000 (9 trailing zeros) and we see that it is not truncated, because 18 + 19 + 1 = 38 (the upper limit).
When we raise the precision of @a to 27 we lose all the zeros and the result is just 1.123456789. Going further we proceed with truncation and the result gets rounded. For example, raising the precision of @a to 28 results in 1.12345679 (8 decimal digits).
The interesting thing is that at some point, with precision equal to 30, we have 1.123457 and this result won't change any further (it stops being truncated).
Precisions of 31, 32 and up to 38 give the same result. How can this be explained?
Decimal and numeric operation results have a minimum scale of 6 - this is specified in the table of the MSDN documentation for division, but the same behavior applies to multiplication as well in the case of scale truncation, as in your example.
This behavior is described in more detail on the sqlprogrammability blog.
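To see why the result settles at a scale of 6, here is a rough Python model of the documented rules; the exact truncation formula below is my reading of the documentation and the blog post, so treat it as an assumption:
# DECIMAL multiplication: precision = p1 + p2 + 1, scale = s1 + s2,
# capped at 38; when capping, the integral part is preserved and the
# scale is truncated, but never below the minimum scale of 6.
def multiply_type(p1, s1, p2, s2):
    precision, scale = p1 + p2 + 1, s1 + s2
    if precision > 38:
        integral = precision - scale               # digits before the decimal point
        scale = max(min(scale, 38 - integral), 6)  # truncate, floor at 6
        precision = 38
    return precision, scale
for p1 in (18, 27, 28, 30, 31):
    print((p1, 9), multiply_type(p1, 9, 19, 9))
# (18, 9) -> (38, 18): nothing truncated, trailing zeros kept
# (27, 9) -> (38, 9):  the trailing zeros are gone
# (28, 9) -> (38, 8):  matches 1.12345679
# (30, 9) -> (38, 6):  the scale hits the floor of 6, i.e. 1.123457
# (31, 9) -> (38, 6):  and stays there for precisions 31..38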

Binary to decimal in Objective-C

I want to convert the decimal number 27 into binary in such a way that first the digit 2 is converted and its binary value is placed in an array, and then the digit 7 is converted and its binary value is placed in that array. What should I do?
Thanks in advance.
That's called binary-coded decimal. It's easiest to work right-to-left. Take the value modulo 10 (% operator in C/C++/ObjC) and put it in the array. Then integer-divide the value by 10 (/ operator in C/C++/ObjC). Continue until your value is zero. Then reverse the array if you need most-significant digit first.
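A minimal sketch of that right-to-left loop (in Python for brevity; the helper name is mine):
# Peel off the last decimal digit with % 10, integer-divide by 10,
# repeat until zero, then reverse for most-significant-first order.
def bcd_digits(value):
    digits = []
    while value > 0:
        value, digit = divmod(value, 10)   # value // 10 and value % 10
        digits.append(digit)
    digits.reverse()
    return digits or [0]                   # special-case 0 itself
print(bcd_digits(27))                               # [2, 7]
print([format(d, '04b') for d in bcd_digits(27)])   # ['0010', '0111']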
If I understand your question correctly, you want to go from 27 to an array that looks like {0010, 0111}.
If you understand how base systems work (specifically the decimal system), this should be simple.
First, you find the remainder of your number when divided by 10. Your number 27 in this case would result with 7.
Then you integer divide your number by 10 and store it back in that variable. Your number 27 would result in 2.
How many times do you do this?
You do this until you have no more digits.
How many digits can you have?
Well, if you think about the number 100, it has 3 digits because it contains one 10^2; 99, on the other hand, does not.
The answer to the previous question is 1 + floor of Log base 10 of the input number.
Log of 100 is 2, plus 1 is 3, which equals number of digits.
Log of 99 is a little less than 2, but flooring it is 1, plus 1 is 2.
In Java it is like this:
int input = 27;
int numDigits = (int) Math.floor(Log(10, input)) + 1; // digit count; assumes input > 0
int[] digitArray = new int[numDigits];
for (int i = 0; i < numDigits; i++) {
    digitArray[numDigits - i - 1] = input % 10; // least significant digit first
    input = input / 10;
}
return digitArray;
Java doesn't have a log function for an arbitrary base (it has Math.log for base e and Math.log10 for base 10), but it is trivial to make one:
double Log( double base, double value ) {
    return Math.log(value) / Math.log(base);
}
Good luck.

How to create a unique integer from 3 different numbers (one Oracle Long, one Date field, one Short)

The thing is that the 1st number is already an Oracle LONG, the second one is a Date (SQL DATE, with no extra timestamp info), and the last one is a Short value in the range 1000-100'000.
How can I create a sort of hash value that will be unique for each combination, optimally?
String concatenation with conversion to a long later is not what I want. For example:
Day Month
12  1   --> 121
1   12  --> 121
When you have a few numeric values and need to have a single "unique" (that is, statistically improbable duplicate) value out of them you can usually use a formula like:
h = (a*P1 + b)*P2 + c
where P1 and P2 are either well-chosen numbers (e.g. if you know 'a' is always in the 1-31 range, you can use P1=32) or, when you know nothing particular about the allowable ranges of a, b, and c, the best approach is to make P1 and P2 big prime numbers (they have the least chance of generating values that collide).
For an optimal solution the math is a bit more complex than that, but using prime numbers you can usually have a decent solution.
For example, the Java implementation of hashCode() for an array (or a String) is something like:
h = 0;
for (int i = 0; i < a.length; ++i)
    h = h * 31 + a[i];
Even though personally I would have chosen a prime bigger than 31, since values inside a String can easily collide, a delta of 31 places being quite common, e.g.:
"BB".hashCode() == "Aa".hashCode() == 2112
Your
12 1 --> 121
1 12 --> 121
problem is easily fixed by zero-padding your input numbers to the maximum width expected for each input field.
For example, if the first field can range from 0 to 10000 and the second field can range from 0 to 100, your example becomes:
00012 001 --> 00012001
00001 012 --> 00001012
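A small sketch of that padding (the field widths 5 and 3 match the example ranges above):
# Zero-pad each field to its maximum width before concatenating,
# so (12, 1) and (1, 12) can no longer collide.
def pack_padded(a, b):
    return int(f"{a:05d}{b:03d}")   # for ranges 0..10000 and 0..100
print(pack_padded(12, 1))   # 12001 (from "00012001")
print(pack_padded(1, 12))   # 1012  (from "00001012")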
In Python, you can use this:
# pip install pairing
import pairing as pf

n = [12, 6, 20, 19]
print(n)
key = pf.pair(pf.pair(n[0], n[1]),
              pf.pair(n[2], n[3]))
print(key)
m = [pf.depair(pf.depair(key)[0]),
     pf.depair(pf.depair(key)[1])]
print(m)
Output is:
[12, 6, 20, 19]
477575
[(12, 6), (20, 19)]