Does somebody know why this even compiles? (Swift)

There are a lot of different Double initializers, but I don't see why any of them should accept this String as a parameter.
Why does this compile (and return a value!)???
if let d = Double("0xabcdef10.abcdef10") {
print(d)
}
it prints
2882400016.671111
No import required. Please check it in your environment ...
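For comparison, the same string parses in Python, whose float.fromhex accepts the same hexadecimal floating-point syntax, so this is easy to cross-check outside Swift:

```python
# Python's float.fromhex parses the same kind of hexadecimal
# floating-point literal that Swift's Double(_:) accepts
d = float.fromhex("0xabcdef10.abcdef10")
print(d)  # 2882400016.671111
```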
UPDATE
Thank you guys. My problem is not about how to represent a Double value as a hexadecimal string.
I am confused by the inconsistent implementation of
protocol LosslessStringConvertible
init?(_:) REQUIRED
Instantiates an instance of the conforming type from a string representation.
Declaration
init?(_ description: String)
Both Double and Int conform to LosslessStringConvertible (Int indirectly, via conformance to FixedWidthInteger).
At the beginning I started with
public func getValue<T: LosslessStringConvertible>(_ value: String) -> T {
    guard let ret = T(value) else {
        // should never happen
        fatalError("failed to assign to \(T.self)")
    }
    return ret
}
// standard notation
let id: Int = getValue("15")
// hexadecimal notation
let ix: Int = getValue("0Xf") // Fatal error: failed to assign to Int
OK, that is an implementation detail, so I decided to implement it on my own, accepting strings in binary, octal, and hexadecimal notation.
Next I did the same for Double, and while testing it I found that when I forgot to import my LosslessStringConvertibleExt, my tests still passed for the expected Double values, both where the string was in hexadecimal notation and where it was in decimal notation.
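The asymmetry described here is not unique to Swift. As a point of comparison, Python's int() and float.fromhex() behave analogously: plain int() rejects a 0x prefix unless you pass an explicit base, while the hexadecimal float parser accepts it directly:

```python
# decimal notation parses for both
assert int("15") == 15

# plain int() rejects hexadecimal notation...
try:
    int("0xf")
except ValueError:
    print("int('0xf') fails")  # analogous to Int("0Xf") returning nil

# ...unless you opt in with an explicit base (or base=0 to honour the prefix)
assert int("0xf", 16) == 15
assert int("0xf", 0) == 15

# while the hexadecimal float parser accepts the prefix directly
assert float.fromhex("0xf") == 15.0
```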
Thank you LeoDabus for your comment with the link to the docs, which I hadn't found before (yes, most likely I am blind; it saved me a few hours :-).
I apologize to the rest of you for the "stupid" question.

From the documentation of Double's failable init:
A hexadecimal value contains the significand, either 0X or 0x, followed by a sequence of hexadecimal digits. The significand may include a decimal point.
let f = Double("0x1c.6") // f == 28.375
So 0xabcdef10.abcdef10 is interpreted as a hexadecimal number, given the 0x prefix.

The string was interpreted as fractional hex. Here's how the decimal value is calculated:
| Hex Digit | Decimal Value | Base Multiplier | Decimal Result           |
|-----------|---------------|-----------------|--------------------------|
| a         | 10            | x 16^7          | 2,684,354,560.0000000000 |
| b         | 11            | x 16^6          |   184,549,376.0000000000 |
| c         | 12            | x 16^5          |    12,582,912.0000000000 |
| d         | 13            | x 16^4          |       851,968.0000000000 |
| e         | 14            | x 16^3          |        57,344.0000000000 |
| f         | 15            | x 16^2          |         3,840.0000000000 |
| 1         | 1             | x 16^1          |            16.0000000000 |
| 0         | 0             | x 16^0          |             0.0000000000 |
| .         |               |                 |                          |
| a         | 10            | x 16^-1         |             0.6250000000 |
| b         | 11            | x 16^-2         |             0.0429687500 |
| c         | 12            | x 16^-3         |             0.0029296875 |
| d         | 13            | x 16^-4         |             0.0001983643 |
| e         | 14            | x 16^-5         |             0.0000133514 |
| f         | 15            | x 16^-6         |             0.0000008941 |
| 1         | 1             | x 16^-7         |             0.0000000037 |
| 0         | 0             | x 16^-8         |             0.0000000000 |
|-----------|---------------|-----------------|--------------------------|
| Total     |               |                 | 2,882,400,016.6711100000 |
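The table above can be checked mechanically; a small Python sketch reproducing the per-digit sum:

```python
int_digits = "abcdef10"   # digits before the point
frac_digits = "abcdef10"  # digits after the point

# integer part: digit k (from the left) is scaled by 16^(7-k)
total = sum(int(c, 16) * 16 ** (7 - k) for k, c in enumerate(int_digits))
# fractional part: digit k (from the left) is scaled by 16^-(k+1)
total += sum(int(c, 16) * 16 ** -(k + 1) for k, c in enumerate(frac_digits))

print(total)  # 2882400016.671111
```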

Related

double precision in MATLAB

So double precision takes 64 bits in MATLAB. I know that a 0 or 1 will take one bit.
But when I type realmax('double') I get a really big number, 1.7977e+308. How can this number be stored in only 64 bits?
Would appreciate any clarification. Thanks.
This is not a MATLAB question. A 64-bit IEEE 754 double-precision binary floating-point format is represented in this format:
bit layout:
| 0 | 1 | 2 | ... | 11 | 12 | 13 | 14 | ... | 63 |
| sign | exponent(E) (11 bit) | fraction (52 bit) |
The first bit is the sign:
0 => +
1 => -
The next 11 bits are used for the representation of the exponent. As a plain unsigned integer they could only hold values from 0 to 2^11 - 1 = 2047, with no way to express negative exponents. Wait... that does not sound good! To represent both very large and very small numbers, the so-called biased form is used, in which the value is represented as:
2^(E-1023)
where E is what the exponent bits encode. Say the exponent bits look like these examples:
Bit representation of the exponent:
Bit no: | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
Example 1: | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Example 2: | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Example 3: | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
Example 4: | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
Example 5: | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Base 10 representation:
Example 1 => E1: 1
Example 2 => E2: 32
Example 3 => E3: 515
Example 4 => E4: 2046
Example 5 => E5: 2047 (**** Special case: Infinity or NaN ****)
Biased form:
Example 1 => 2^(E1-1023) = 2^-1022 <= The smallest possible exponent
Example 2 => 2^(E2-1023) = 2^-991
Example 3 => 2^(E3-1023) = 2^-508
Example 4 => 2^(E4-1023) = 2^+1023 <= The largest possible exponent
Example 5 => 2^(E5-1023) = Infinity or NaN
When 0 < E < 2047, the number is known as a normalized number, represented by:
Number = (-1)^sign * 2^(E-1023) * (1.F)
but if E is 0, the number is known as a denormalized number, represented by:
Number = (-1)^sign * 2^(E-1022) * (0.F)
Now F is what is determined by the fraction bits:
// Sum over i = 12, 13, ..... , 63
F = sum(Bit(i) * 2^-(i-11))
where Bit(i) refers to the i-th bit of the number. Examples:
Bit representation of the fraction:
Bit no: | 12 | 13 | 14 | 15 | ... ... ... ... | 62 | 63 |
Example 1: | 0 | 0 | 0 | 0 | 0 ... .... 0 | 0 | 1 |
Example 2: | 1 | 0 | 0 | 0 | 0 ... .... 0 | 0 | 0 |
Example 3: | 1 | 1 | 1 | 1 | 1 ... .... 1 | 1 | 1 |
F value assuming 0 < E < 2047:
Example 1 => 1.F1 = 1 + 2^-52
Example 2 => 1.F2 = 1 + 2^-1
Example 3 => 1.F3 = 1 + 1 - 2^-52
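These fields are easy to inspect on a real machine. The Python sketch below (the helper name is my own) reinterprets a double's 64 bits as an integer and splits out sign, E, and F exactly as laid out above:

```python
import struct

def fields(x: float):
    # reinterpret the 8 bytes of an IEEE 754 double as a 64-bit integer
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63          # bit 0
    E = (bits >> 52) & 0x7FF   # bits 1..11 (biased exponent)
    F = bits & ((1 << 52) - 1) # bits 12..63 (fraction)
    return sign, E, F

sign, E, F = fields(1.5)
# 1.5 = (-1)^0 * 2^(1023-1023) * 1.5, so E equals the bias and F encodes 0.5
print(sign, E, F / 2**52)  # 0 1023 0.5
```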
But when I type realmax('double') I get a really big number
1.7977e+308. How can this number be saved in only 64 bits?
realmax('double')'s binary representation is
| sign | exponent(E) (11 bit) | fraction (52 bit) |
0 11111111110 1111111111111111111111111111111111111111111111111111
Which is
+2^1023 x (1 + (1-2^-52)) = 1.79769313486232e+308
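This is easy to confirm outside MATLAB too, for example in Python, where sys.float_info.max is the same IEEE 754 double-precision limit:

```python
import sys

# largest finite double: biased exponent E = 2046, fraction bits all ones
realmax = (1 + (1 - 2 ** -52)) * 2.0 ** 1023

print(realmax)                        # 1.7976931348623157e+308
print(realmax == sys.float_info.max)  # True
```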
I took some definitions and examples from this Wikipedia page.

Create dummy variables in PostgreSQL

Is it possible to create a dummy variable when querying?
For instance, the query below gives me only the observations that satisfy the var2 condition. I also want the remaining observations, but with some kind of tag on them (0/1 indicator values would be sufficient).
SELECT distinct ON (id) id,var1,var2,var3
FROM table
where var2 = ANY('{blue,yellow}');
Have
+-----+------+--------+------+
| id | Var1 | Var2 | Var3 |
+-----+------+--------+------+
| 345 | 12 | Blue | 3456 |
| 345 | 12 | Red | 2134 |
| 346 | 45 | Blue | 3451 |
| 347 | 25 | yellow | 1526 |
+-----+------+--------+------+
Want
+-----+------+--------+------+--------------------+
| id | Var1 | Var2 | Var3 | Indicator variable |
+-----+------+--------+------+--------------------+
| 345 | 12 | Blue | 3456 | 1 |
| 345 | 12 | Red | 2134 | 0 |
| 346 | 45 | Blue | 3451 | 1 |
| 347 | 25 | yellow | 1526 | 1 |
+-----+------+--------+------+--------------------+
Instead of an expression in WHERE, you can use the same expression in the SELECT output list:
=> select a, a = any('{1,2,3,5,7}') as asmallprime
from generate_series(1,10) as a;
a | asmallprime
----+-------------
1 | t
2 | t
3 | t
4 | f
5 | t
6 | f
7 | t
8 | f
9 | f
10 | f
(10 rows)
Tometzky's answer is sufficient, but if you want something more complex you can also use CASE statements.
Tometzky's example using CASE with an extra indicator
SELECT a, CASE WHEN a = any('{1,2,3,5,7}') THEN 'YES'
WHEN a = any('{4,9}') THEN 'SQUARE' ELSE 'NO' END as asmallprime
FROM generate_series(1,10) as a;
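The same idea works in any SQL engine that supports CASE. As a self-contained sketch, here it is run against SQLite from Python (SQLite has no = ANY(array) operator, so IN stands in for it, and a small table replaces generate_series):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nums (a INTEGER)")
con.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(1, 11)])

# boolean expression in the SELECT list, written as a CASE indicator
rows = con.execute("""
    SELECT a,
           CASE WHEN a IN (1, 2, 3, 5, 7) THEN 1 ELSE 0 END AS asmallprime
    FROM nums
    ORDER BY a
""").fetchall()

print(rows)  # [(1, 1), (2, 1), (3, 1), (4, 0), (5, 1), (6, 0), ...]
```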

How to find the lowest and biggest value in a maximum distance between points (SQL)

Currently, I have a PostgreSQL database (and a SQL Server database with almost the same structure), with some data, like example below:
+----+---------+-----+
| ID | Name | Val |
+----+---------+-----+
| 01 | Point A | 0 |
| 02 | Point B | 050 |
| 03 | Point C | 075 |
| 04 | Point D | 100 |
| 05 | Point E | 200 |
| 06 | Point F | 220 |
| 07 | Point G | 310 |
| 08 | Point H | 350 |
| 09 | Point I | 420 |
| 10 | Point J | 550 |
+----+---------+-----+
ID = PK (auto increment);
Name = unique;
Val = unique;
Now, suppose I have only Point F (220), and I want to find the lowest value and the biggest value reachable with a maximum distance of less than 100 between consecutive values.
So, my result must return:
Lowest: Point E (200)
Biggest: Point I (420)
Step-by-step explanation (because English is not my primary language):
Looking for lowest value:
Initial value = Point F (220);
Look for the lower closest value of Point F (220): Point E (200);
200(E) < 220(F) = True; 220(F) - 200(E) < 100 = True;
Lowest value until now = Point E (200)
Repeat
Look for the lower closest value of Point E (200): Point D (100);
100(D) < 200(E) = True; 200(E) - 100(D) < 100 = False;
Lowest value = Point E (200); Break;
Looking for the biggest value:
Initial value = Point F (220);
Look for the biggest closest value of Point F (220): Point G (310);
310(G) > 220(F) = True; 310(G) - 220(F) < 100 = True;
Biggest value until now = Point G (310)
Repeat
Look for the biggest closest value of Point G (310): Point H (350);
350(H) > 310(G) = True; 350(H) - 310(G) < 100 = True;
Biggest value until now = Point H (350)
Repeat
Look for the biggest closest value of Point H (350): Point I (420);
420(I) > 350(H) = True; 420(I) - 350(H) < 100 = True;
Biggest value until now = Point I (420)
Repeat
Look for the biggest closest value of Point I (420): Point J (550);
550(J) > 420(I) = True; 550(J) - 420(I) < 100 = False;
Biggest value Point I (420); Break;
This can be done using window functions and a bit of extra work.
In a step-by-step fashion, you would start with one derived table (let's call it point_and_prev_next) defined by this select:
SELECT
id, name, val,
lag(val) OVER(ORDER BY id) AS prev_val,
lead(val) OVER(ORDER BY id) AS next_val
FROM
points
which produces:
| id | name | val | prev_val | next_val |
|----|---------|-----|----------|----------|
| 1 | Point A | 0 | (null) | 50 |
| 2 | Point B | 50 | 0 | 75 |
| 3 | Point C | 75 | 50 | 100 |
| 4 | Point D | 100 | 75 | 200 |
| 5 | Point E | 200 | 100 | 220 |
| 6 | Point F | 220 | 200 | 310 |
| 7 | Point G | 310 | 220 | 350 |
| 8 | Point H | 350 | 310 | 420 |
| 9 | Point I | 420 | 350 | 550 |
| 10 | Point J | 550 | 420 | (null) |
The lag and lead window functions serve to get the previous and next values from the table (sorted by id, and not partitioned by anything).
Next, we make a second table point_and_dist_prev_next which uses val, prev_val and next_val, to compute distance to previous point and distance to next point. This would be computed with the following SELECT:
SELECT
id, name, val, (val-prev_val) AS dist_to_prev, (next_val-val) AS dist_to_next
FROM
point_and_prev_next
This is what you get after executing it:
| id | name | val | dist_to_prev | dist_to_next |
|----|---------|-----|--------------|--------------|
| 1 | Point A | 0 | (null) | 50 |
| 2 | Point B | 50 | 50 | 25 |
| 3 | Point C | 75 | 25 | 25 |
| 4 | Point D | 100 | 25 | 100 |
| 5 | Point E | 200 | 100 | 20 |
| 6 | Point F | 220 | 20 | 90 |
| 7 | Point G | 310 | 90 | 40 |
| 8 | Point H | 350 | 40 | 70 |
| 9 | Point I | 420 | 70 | 130 |
| 10 | Point J | 550 | 130 | (null) |
And, at this point, (and starting with point "F"), we can get the first "wrong point up" (the first that fails the "distance to previous" < 100) by means of the following query:
SELECT
max(id) AS first_wrong_up
FROM
point_and_dist_prev_next
WHERE
dist_to_prev >= 100
AND id <= 6 -- 6 = Point F
This just looks for the point closest to our reference one ("F") which FAILS to have a distance with the previous one < 100.
The result is:
| first_wrong_up |
|----------------|
| 5 |
The first "wrong point" going down is computed in an equivalent manner.
All these queries can be put together using Common Table Expressions, also called WITH queries, and you get:
WITH point_and_dist_prev_next AS
(
SELECT
id, name, val,
val - lag(val) OVER(ORDER BY id) AS dist_to_prev,
lead(val) OVER(ORDER BY id)- val AS dist_to_next
FROM
points
),
first_wrong_up AS
(
SELECT
max(id) AS first_wrong_up
FROM
point_and_dist_prev_next
WHERE
dist_to_prev >= 100
AND id <= 6 -- 6 = Point F
),
first_wrong_down AS
(
SELECT
min(id) AS first_wrong_down
FROM
point_and_dist_prev_next
WHERE
dist_to_next >= 100
AND id >= 6 -- 6 = Point F
)
SELECT
(SELECT name AS "lowest value"
FROM first_wrong_up
JOIN points ON id = first_wrong_up),
(SELECT name AS "biggest value"
FROM first_wrong_down
JOIN points ON id = first_wrong_down) ;
Which provides the following result:
| lowest value | biggest value |
|--------------|---------------|
| Point E | Point I |
You can check it at SQLFiddle.
NOTE: It is assumed that the id column is always increasing. If it were not, the val column would have to be used instead (assuming, obviously, that it always keeps growing).
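Outside SQL, the walk the question describes is easy to express directly. A Python sketch of the same outward scan (function and variable names are mine), which should agree with the query's result:

```python
# the ten points from the question, already sorted by val
points = [("Point A", 0), ("Point B", 50), ("Point C", 75), ("Point D", 100),
          ("Point E", 200), ("Point F", 220), ("Point G", 310), ("Point H", 350),
          ("Point I", 420), ("Point J", 550)]

def bounds(points, start_name, max_dist=100):
    # walk outward from the starting point, stopping in each direction
    # as soon as a gap of max_dist or more appears between neighbours
    idx = next(i for i, (name, _) in enumerate(points) if name == start_name)
    lo = idx
    while lo > 0 and points[lo][1] - points[lo - 1][1] < max_dist:
        lo -= 1
    hi = idx
    while hi < len(points) - 1 and points[hi + 1][1] - points[hi][1] < max_dist:
        hi += 1
    return points[lo][0], points[hi][0]

print(bounds(points, "Point F"))  # ('Point E', 'Point I')
```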

Break on Column, Compute sum in postgres

I am trying to convert an SQL query to Postgres, and I am struggling to do the below.
Example :
RESULTS FROM FIRST QUERY (USED AS INPUT FOR MY NEXT QUERY): select sourcename, type, count(type), sum(size) from table group by sourcename, type;
sourcename | Type | count | sum
------------+-----------------------------------------------------+-------+----------
A | TYPE1 | 21 | 10485378
B | TYPE1 | 12 | 5241177
C | TYPE1 | 12 | 5242254
D | TYPE1 | 12 | 5570560
A | TYPE2 | 11 | 5239645
B | TYPE2 | 12 | 5570560
C | TYPE2 | 12 | 5241862
D | TYPE2 | 11 | 5570560
OUTPUT I NEED:
sourcename | Type | count | sum
------------+-----------------------------------------------------+-------+----------
A | TYPE1 | 21 | 10485378
| TYPE2 | 11 | 5239645
TOTAL | | 32 | sum(values above)
B | TYPE1 | 12 | 5241177
| TYPE2 | 12 | 5570560
TOTAL | | 24 | sum(values above)
C | TYPE1 | 12 | 5242254
| TYPE2 | 12 | 5241862
TOTAL | | 24 | sum(values above)
D | TYPE1 | 12 | 5570560
| TYPE2 | 11 | 5570560
TOTAL | | 23 | sum(values above)
NOTE: I have used WITH ... AS to get the totals, but displaying them between each source name (i.e. A, B, C, D) is something I am not able to do. I am also not able to suppress the source name from being displayed twice in the results. In Oracle this would be BREAK ON sourcename; is there a Postgres equivalent? Likewise, COMPUTE SUM ... ON column is Oracle, and I could not find a Postgres equivalent. I have finally found a way to get the desired results using both shell and psql together, but I need to know if there is a better way to do this without shell in between. Any help is much appreciated. I am using psql 9.1.3.
I am new to this forum, so if you see my table results not aligned for viewing, let me know and I will try to set it right.
(Using PostgreSQL 9.1)
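For reference, newer PostgreSQL (9.5+) has GROUP BY ROLLUP for the subtotal rows, but that is not available on 9.1. The BREAK ON / COMPUTE SUM layout can instead be produced client-side; a hypothetical Python sketch over the aggregated rows from the first query:

```python
from itertools import groupby

# (sourcename, type, count, sum) rows from the first query, grouped by source
rows = [
    ("A", "TYPE1", 21, 10485378), ("A", "TYPE2", 11, 5239645),
    ("B", "TYPE1", 12, 5241177), ("B", "TYPE2", 12, 5570560),
    ("C", "TYPE1", 12, 5242254), ("C", "TYPE2", 12, 5241862),
    ("D", "TYPE1", 12, 5570560), ("D", "TYPE2", 11, 5570560),
]

for src, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    for i, (s, typ, cnt, size) in enumerate(grp):
        # suppress the repeated source name, like Oracle's BREAK ON sourcename
        print(f"{s if i == 0 else '':>6} | {typ:>6} | {cnt:>4} | {size:>9}")
    # subtotal per source, like Oracle's COMPUTE SUM
    print(f"{'TOTAL':>6} | {'':>6} | {sum(r[2] for r in grp):>4} | {sum(r[3] for r in grp):>9}")
```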

How to compute the dot product of two columns (treating each full column as a vector)?

Given this table:
| a | b | c |
|---+---+---|
| 3 | 4 |   |
| 1 | 2 |   |
| 1 | 3 |   |
| 2 | 2 |   |
I want to get the dot product of the two columns a and b. The result should be equal to (3*4)+(1*2)+(1*3)+(2*2), which is 21.
I don't want to use the clumsy formula (B1*B2+C1*C2+D1*D2+E1*E2), because I actually have a large table waiting to be calculated.
I know Emacs's Calc tool has a "vprod" function which can do this sort of thing, but I don't know how to turn a full column into a vector.
Can anybody tell me how to achieve this task? I'd appreciate it!
In Emacs Calc, the simple product of two vectors calculates the dot product.
This works (I put the result in #6$3; the parentheses can also be omitted):
| a | b | c |
|---+---+----|
| 3 | 4 | |
| 1 | 2 | |
| 1 | 3 | |
| 2 | 2 | |
|---+---+----|
| | | 21 |
#+TBLFM: #6$3=(#I$1..#II$1)*(#I$2..#II$2)
#I and #II span from the 1st hline to the second.
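The arithmetic itself is trivial to double-check, e.g. in Python:

```python
a = [3, 1, 1, 2]
b = [4, 2, 3, 2]

# element-wise products summed, i.e. the dot product of the two columns
dot = sum(x * y for x, y in zip(a, b))
print(dot)  # (3*4) + (1*2) + (1*3) + (2*2) = 21
```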
This can be solved using babel and R in org-mode:
#+name: mytable
| a | b | c |
|---+---+----|
| 3 | 4 | |
| 1 | 2 | |
| 1 | 3 | |
| 3 | 2 | |
#+begin_src R :var mytable=mytable
sum(mytable$a * mytable$b)
#+end_src
#+RESULTS:
: 23