select current_setting('my.session_id');
current_setting
-----------------
14
(1 row)
select set_config ( 'my.session_id', 15::text , true) ;
set_config
------------
15
(1 row)
select current_setting('my.session_id');
current_setting
-----------------
14
(1 row)
set my.session_id = 15;
SET
select current_setting('my.session_id');
current_setting
-----------------
15
(1 row)
As seen above, SET is working, but set_config is behaving slightly differently. Probably I am missing something.
set my.session_id = 14;
select current_setting('my.session_id');
current_setting
-----------------
14
(1 row)
select set_config ( 'my.session_id', 15::text , false) ;
set_config
------------
15
(1 row)
select current_setting('my.session_id');
current_setting
-----------------
15
(1 row)
As per the manual:
Sets the parameter setting_name to new_value, and returns that value. If is_local is true, the new value will only apply during the
current transaction. If you want the new value to apply for the rest of the current session, use false instead.
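In other words, with is_local set to true and autocommit on, each statement runs in its own transaction, so the new value is discarded as soon as the set_config statement finishes. Wrapping the calls in an explicit transaction (a minimal sketch, assuming my.session_id currently holds 14) shows the intended behaviour:

begin;
select set_config('my.session_id', 15::text, true);  -- local to this transaction
select current_setting('my.session_id');             -- 15 while the transaction is open
commit;
select current_setting('my.session_id');             -- back to 14 once the transaction ends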
Hi, I have a table with an integer column containing values like "1000", "10000" and "1000000", and I want to display them as "1.000", "10.000" and "1.000.000". I also want to keep them in integer format. Is that possible?
No, you cannot store an integer that way:
show lc_numeric;
en_US.UTF-8
select 10,000::integer;
?column? | int4
----------+------
10 | 0
select 10.000::integer;
int4
------
10
select to_char(10000, '99G999');
to_char
---------
10,000
select to_number('10,000', '99G999');
to_number
-----------
1000
set lc_numeric='de_DE.UTF-8';
SET
show lc_numeric ;
lc_numeric
-------------
de_DE.UTF-8
select 10,000::integer;
?column? | int4
----------+------
10 | 0
(1 row)
select 10.000::integer;
int4
------
10
(1 row)
select to_char(10000, '99G999');
to_char
---------
10.000
select to_number('10.000', '99G999');
to_number
-----------
10000
lc_numeric:
lc_numeric (string)
Sets the locale to use for formatting numbers, for example with the to_char family of functions. Acceptable values are system-dependent; see Section 24.1 for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.
So the only time locale-specific information becomes relevant is when you format the number as a string, or parse a string back into a number.
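For display, the usual approach is to keep the column as integer and format it only on output. A minimal sketch (the table t and column amount are made-up names):

set lc_numeric = 'de_DE.UTF-8';
select amount, to_char(amount, 'FM999G999G999') as formatted from t;
-- the stored value 1000000 stays an integer, while formatted comes back as the text '1.000.000'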
I am writing a custom PostgreSQL function to round TIMESTAMPTZ fields to an arbitrary interval, using the basic algorithm round(timestamp / interval) * interval. After some research, I found a solution:
SELECT to_timestamp(round((extract('epoch' from timestamp)) / interval) * interval)
It works. My question: is there a more efficient way of doing this?
Use this as a template:
Round a timestamp to the nearest 5-minute mark.
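A sketch consistent with the sample usage below, assuming only whole minutes matter (seconds are ignored), could be:

create or replace function round_time(timestamptz) returns timestamptz as $$
  -- truncate to the hour, then add back the minutes rounded to the nearest multiple of 5
  select date_trunc('hour', $1) + interval '5 minutes' * round(date_part('minute', $1) / 5.0);
$$ language sql;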
The sample usage:
postgres=# select now(), round_time('2010-09-17 16:48');
now | round_time
-------------------------------+------------------------
2010-09-19 08:36:31.701919+02 | 2010-09-17 16:50:00+02
(1 row)
postgres=# select now(), round_time('2010-09-17 16:58');
now | round_time
-------------------------------+------------------------
2010-09-19 08:36:43.860858+02 | 2010-09-17 17:00:00+02
(1 row)
postgres=# select now(), round_time('2010-09-17 16:57');
now | round_time
-------------------------------+------------------------
2010-09-19 08:36:53.273612+02 | 2010-09-17 16:55:00+02
(1 row)
postgres=# select now(), round_time('2010-09-17 23:58');
now | round_time
------------------------------+------------------------
2010-09-19 08:37:09.41387+02 | 2010-09-18 00:00:00+02
(1 row)
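Since the question asked about arbitrary intervals, a generalized sketch along the same epoch-based lines as the approach in the question could look like this (the name round_timestamp is made up):

create or replace function round_timestamp(ts timestamptz, step interval) returns timestamptz as $$
  -- work in epoch seconds: divide by the step length, round, scale back up, convert back
  select to_timestamp(
           round(extract(epoch from ts) / extract(epoch from step)) * extract(epoch from step)
         );
$$ language sql;

-- for example, round_timestamp('2010-09-17 16:48', interval '15 minutes')
-- gives 2010-09-17 16:45:00+02 with the same +02 session time zone as above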
I think this might be a PostgreSQL bug but I'm posting it here in case I'm just missing something. When my WHERE clause has a NOT IN () clause, having null in the list makes the clause no longer truthy. Below is a dumbed down example of my issue.
=# select 1 where 1 not in (1);
?column?
----------
(0 rows)
=# select 1 where 1 not in (2);
?column?
----------
1
(1 row)
=# select 1 where 1 not in (null);
?column?
----------
(0 rows)
=# select 1 where 1 not in (null, 2);
?column?
----------
(0 rows)
=# select 1 where 1 not in (2, null);
?column?
----------
(0 rows)
=# select 1 where 1 not in (2, 3);
?column?
----------
1
(1 row)
So where 1 not in (1) returns 0 rows as expected, since 1 is in the list; where 1 not in (2) returns 1 row as expected, since 1 is not in the list; but where 1 not in (null) returns 0 rows even though 1 is not in the list.
This is not a PostgreSQL bug.
The problem is that NOT IN is just shorthand for testing all the inequalities one by one.
1 NOT IN (null, 2) is equivalent to:
1 <> null
AND
1 <> 2
However, NULL is a special value, so 1 <> null is itself NULL (not TRUE). See the documentation:
Do not write expression = NULL because NULL is not “equal to” NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
As far as I know that's the standard SQL behaviour.
PostgreSQL has an additional construct, IS DISTINCT FROM, to check whether a value is different from null:
1 IS DISTINCT FROM NULL would be TRUE.
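To see the NULL propagation directly, and one NULL-safe workaround, here is a small sketch (the inline VALUES list just stands in for whatever real list or subquery feeds the NOT IN):

select 1 <> null              as neq_null,       -- NULL, not TRUE, so a WHERE clause filters the row out
       1 is distinct from null as distinct_from; -- TRUE

-- stripping the NULLs from the list restores the expected result (NOT EXISTS is another NULL-safe option)
select 1 where 1 not in (select x from (values (null::int), (2)) as v(x) where x is not null);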
I was under the impression that PostgreSQL rounded half-microseconds in timestamps to the nearest even microsecond. E.g.:
> select '2000-01-01T00:00:00.0000585Z'::timestamptz;
timestamptz
-------------------------------
2000-01-01 01:00:00.000058+01
(1 row)
> select '2000-01-01T00:00:00.0000575Z'::timestamptz;
timestamptz
-------------------------------
2000-01-01 01:00:00.000058+01
(1 row)
Then I discovered that:
> select '2000-01-01T00:00:00.5024585Z'::timestamptz;
timestamptz
-------------------------------
2000-01-01 01:00:00.502459+01
(1 row)
Does anybody know the rounding algorithm PostgreSQL uses for timestamps?
For your information, here's the version of PostgreSQL I'm running:
> select version();
version
----------------------------------------------------------------------------------------------------------------
PostgreSQL 9.6.1 on x86_64-apple-darwin15.6.0, compiled by Apple LLVM version 8.0.0 (clang-800.0.42.1), 64-bit
(1 row)
All the PostgreSQL time types have a microsecond resolution, six decimal places. Rounding to the nearest even microsecond would not be microsecond resolution.
Its behavior looks consistent with round half-up to me, the usual way to round. >= 0.5 round up, else round down.
0.5024585 rounded half-up to 6 decimal places rounds up to 0.502459 because the 7th digit is 5.
test=# select '2000-01-01T00:00:00.5024585Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502459
(1 row)
0.5024584999999 rounds down to 0.502458 because the 7th digit is 4.
test=# select '2000-01-01T00:00:00.5024584999999Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502458
(1 row)
Never mind, the above appears to be anomalous. Stepping through '2000-01-01T00:00:00.5024235Z' to '2000-01-01T00:00:00.5024355Z' is consistent with half-even rounding.
I'm going to guess the anomalies are due to floating-point error converting from the floating-point seconds in the input to the integer microseconds that timestamp uses.
test=# select '2000-01-01T00:00:00.5024235Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502424
(1 row)
test=# select '2000-01-01T00:00:00.5024245Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502425
(1 row)
test=# select '2000-01-01T00:00:00.5024255Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502425
(1 row)
test=# select '2000-01-01T00:00:00.5024265Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502426
(1 row)
test=# select '2000-01-01T00:00:00.5024275Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502428
(1 row)
test=# select '2000-01-01T00:00:00.5024285Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502428
(1 row)
test=# select '2000-01-01T00:00:00.5024295Z'::timestamp;
timestamp
---------------------------
2000-01-01 00:00:00.50243
(1 row)
test=# select '2000-01-01T00:00:00.5024305Z'::timestamp;
timestamp
---------------------------
2000-01-01 00:00:00.50243
(1 row)
test=# select '2000-01-01T00:00:00.5024315Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502432
(1 row)
test=# select '2000-01-01T00:00:00.5024325Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502432
(1 row)
test=# select '2000-01-01T00:00:00.5024335Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502434
(1 row)
test=# select '2000-01-01T00:00:00.5024345Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502434
(1 row)
test=# select '2000-01-01T00:00:00.5024355Z'::timestamp;
timestamp
----------------------------
2000-01-01 00:00:00.502436
(1 row)
This also plays out with interval N microsecond. Fewer decimal places mean less floating-point error.
test=# select interval '0.5 microsecond';
interval
----------
00:00:00
(1 row)
test=# select interval '1.5 microsecond';
interval
-----------------
00:00:00.000002
(1 row)
test=# select interval '2.5 microsecond';
interval
-----------------
00:00:00.000002
(1 row)
test=# select interval '3.5 microsecond';
interval
-----------------
00:00:00.000004
(1 row)
test=# select interval '4.5 microsecond';
interval
-----------------
00:00:00.000004
(1 row)
test=# select interval '5.5 microsecond';
interval
-----------------
00:00:00.000006
(1 row)
test=# select interval '6.5 microsecond';
interval
-----------------
00:00:00.000006
(1 row)
A small C program confirms there's a floating-point accuracy problem with single-precision floats at 7 decimal places that would affect rounding.
#include <math.h>
#include <stdio.h>

int main() {
    /* The 7-decimal-place values from the timestamp examples above, stored
       as single-precision floats; NAN marks the end of the list. */
    float nums[] = {
        0.5024235f,
        0.5024245f,
        0.5024255f,
        0.5024265f,
        0.5024275f,
        0.5024285f,
        0.5024295f,
        0.5024305f,
        NAN
    };
    /* Print each value with 8 decimal places to expose the representation error. */
    for (int i = 0; !isnan(nums[i]); i++) {
        printf("%0.8f\n", nums[i]);
    }
    return 0;
}
This produces:
0.50242352
0.50242448
0.50242549
0.50242651
0.50242752
0.50242847
0.50242949
0.50243050
Whereas with doubles, there's no problem.
0.50242350
0.50242450
0.50242550
0.50242650
0.50242750
0.50242850
0.50242950
0.50243050
Exactly what the question says.
mydb=> select '2016-01-03 24:00'::timestamp;
timestamp
---------------------
2016-01-04 00:00:00
(1 row)
That's what I expected.
mydb=> select date_trunc('seconds', '2016-01-03 23:59.9999999999'::timestamp);
date_trunc
---------------------
2016-01-03 00:24:00
(1 row)
Um. Wait, what?
It has nothing to do with date_trunc ... once you introduce the decimal point, 23:59.9999999999 is being interpreted as minutes and seconds rather than hours and minutes.
Without decimal point
db=# select '2016-01-03 23:59'::timestamp;
timestamp
---------------------
2016-01-03 23:59:00
(1 row)
With decimal point
db=# select '2016-01-03 23:59.9999999'::timestamp;
timestamp
---------------------
2016-01-03 00:24:00
(1 row)
It's understandable, given what you were expecting to get back, but you seem to have misread 24 minutes as 24 hours in the result here.
As a side note, the rounding kicks in once you go past six digits (i.e. microseconds) after the decimal point:
db=# select '2016-01-03 23:59.999999'::timestamp;
timestamp
----------------------------
2016-01-03 00:23:59.999999
(1 row)
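If the intent really was 23:59 as hours and minutes with a fractional second tacked on, spelling out the seconds removes the ambiguity (a small sketch):

select '2016-01-03 23:59:59.999999'::timestamp                          as full_spec,
       date_trunc('seconds', '2016-01-03 23:59:59.999999'::timestamp)   as truncated;
-- full_spec keeps the microseconds (2016-01-03 23:59:59.999999),
-- while truncated drops them (2016-01-03 23:59:59)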