I am running the query below in Oracle and Postgres, and they show different output with respect to the ordering of the values.
with test as (
select 'Summary-Account by User (Using Contact ID)' col1 from dual
union all
select 'Summary-Account by User by Client by Day (Using Contact ID)' col1 from dual
)
select * from test
order by col1 desc;
Below is the Oracle output.
And here is the same query in Postgres:
with test as (
select 'Summary-Account by User (Using Contact ID)' col1
union all
select 'Summary-Account by User by Client by Day (Using Contact ID)' col1
)
select * from test
order by col1 desc;
Oracle collation is AL32UTF8
Postgres has LC_CTYPE set to en_US.UTF-8.
Both of them look the same in terms of how the database should behave. How can I fix this?
I have read a few posts on Stack Overflow about POSIX and C. After changing the ORDER BY to order by col1 collate "C" desc; the result matches the Oracle output.
Is there any way to apply this permanently?
AL32UTF8 is not a collation, but an encoding (character set).
Oracle uses the “binary collation” by default, which corresponds to the C or POSIX collation in PostgreSQL.
You have several options to get a similar result in PostgreSQL:
create the database with LOCALE "C" (see the sketch after this list)
if you are selecting from a table, define the column to use the "C" collation:
ALTER TABLE tab ALTER col1 TYPE text COLLATE "C";
add an explicit COLLATE clause:
ORDER BY col1 COLLATE "C"
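For the first option, a minimal sketch using the spelled-out locale settings (mydb is an illustrative name; template0 is needed because template1 already carries a locale):
CREATE DATABASE mydb
    TEMPLATE template0
    ENCODING 'UTF8'
    LC_COLLATE 'C'
    LC_CTYPE 'C';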
I have this query that inserts rows into a MySQL database, and it works perfectly.
insert into test(id,user)
select null,user from table2
union
select null,user from table3
But when I run the above query in PostgreSQL, it does not work, and I get this error: column "id" is of type integer but expression is of type text. However, each of the queries below works.
This query works properly in PostgreSQL:
insert into test(id,user)
select null,user from table2
This query also works properly in PostgreSQL:
insert into test(id,user)
select null,user from table3
And this query works properly in PostgreSQL too:
select null,user from table2
union
select null,user from table3
null is not a real value and thus has no data type. The default assumed data type is text; that's where the error message comes from. Just cast the value to int in the first SELECT:
insert into test(id, "user")
select null::int, "user" from table2
union
select null, "user" from table3
Or even better, leave out the id completely so that any default defined for the id column is used. It sounds strange to try and insert null into a column named id.
insert into test("user")
select "user" from table2
union
select "user" from table3
Note that user is a reserved keyword and a built-in function, so you will have to quote it to avoid problems. In the long run I recommend finding a different name for that column.
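If you want the id to be filled in automatically when it is left out, the id column needs a default. A minimal sketch, assuming the table can be defined from scratch (PostgreSQL 10+; use serial on older versions):
create table test (
    id     integer generated always as identity primary key,
    "user" text  -- quoted: user is a reserved keyword
);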
For the first time I have found a very handy way of importing "last year's data" into "this year's data".
This works well:
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable AS
SELECT col1, col2, col3, col4
FROM dblink('host=localhost port=xxxx user=xxxx password=xxxx dbname=mylastyeardb',
'SELECT col1, col2, col3, col4
FROM mytable
WHERE TRIM(col1)<>'''' ')
AS x(col1 text, col2 text, col3 text, col4 text);
ALTER TABLE mytable ADD COLUMN cols_id SERIAL PRIMARY KEY;
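Note that this relies on the dblink extension, which has to be enabled once per database:
CREATE EXTENSION IF NOT EXISTS dblink;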
Since cols_id from the old table is not appropriate for the new table, maybe some experienced users know how to set up the table in CREATE TABLE AS so that it has cols_id as a (serial) primary key, nicely ordered and as the first column. That way I could perhaps avoid the second (ALTER) command?
Any other advice for the situation shown will be welcome too.
You either CREATE TABLE, defining its structure (with all the handy shortcuts and options in one statement), or CREATE TABLE AS SELECT, partially "inheriting" the structure. Thus, if you want a primary key, you will need ALTER TABLE anyway...
To put the id as the first column in one statement, you can simply use a dummy value, e.g. a sequential number:
t=# create table s as select row_number() over() as id,chr(n) from generate_series(197,200) n;
SELECT 4
t=# select * from s;
id | chr
----+-----
1 | Å
2 | Æ
3 | Ç
4 | È
(4 rows)
Of course, after that you still need to create a sequence, assign its value as the default for the id column, and add a primary key on it. Which makes for even more statements than you have at the moment...
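For completeness, those extra statements would look roughly like this (the sequence name is illustrative):
t=# create sequence s_id_seq owned by s.id;
t=# alter table s alter column id set default nextval('s_id_seq');
t=# select setval('s_id_seq', (select max(id) from s)); -- continue after the existing rows
t=# alter table s add primary key (id);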
I need a way to get a "description" of the columns from a SELECT query (cursor), such as their names, data types, precision, scale, etc., in PostgreSQL (or better yet PL/pgSQL).
I'm transitioning from Oracle PL/SQL, where I can get such a description using the built-in procedure dbms_sql.describe_columns. It returns an array of records, one for each column of a given (parsed) cursor.
EDB has it implemented too (https://www.enterprisedb.com/docs/en/9.0/oracompat/Postgres_Plus_Advanced_Server_Oracle_Compatibility_Guide-127.htm#P13324_681237)
An example of such a query:
select col1 from tab where col2 = :a
I need an API (or a workaround) that could be called like this (hopefully):
select query_column_description('select col1 from tab where col2 = :a');
that will return something similar to:
{{"col1","numeric"}}
Why? We build views where these queries become individual columns. For example, a view's query would look like the following:
select (select col1 from tab where col2 = t.colA)::numeric as col1
from tab_main t
http://sqlfiddle.com/#!17/21c7a/2
You can use the system catalogs:
First, create a view with your query (without the WHERE clause):
create or replace view a_view as
select col1 from tab
Then select:
select
row_to_json(t.*)
from (
select
column_name,
data_type
from
information_schema.columns
where
table_schema = 'public' and
table_name = 'a_view'
) as t
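With a numeric col1 as in the question, this should return something like:
{"column_name":"col1","data_type":"numeric"}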
I have a PostgreSQL database table with 4 columns - labeled column_a, column_b, etc. I want to query this table with a simple select query:
select * from table_name;
I get a handful of results looking like:
column_a | column_b
---------+---------
'a value'|'b_value'
But when I use this query:
select * from schema_name.table_name;
I get the full result:
column_a | column_b | column_c | column_d
---------+----------+----------+---------
'a value'|'b value' |'c value' |'d_value'
Columns c and d were added at a later date, after initial table creation. My question is: Why would the database ignore the later columns when the schema name is left out of the select query?
Table names are not unique within a database in Postgres. There can be any number of tables named 'table_name' in different schemas - including the temporary schema, which always comes first unless you explicitly list it after other schemas in the search_path. Obviously, there are multiple tables named table_name. You must understand the role of the search_path to interpret this correctly:
How does the search_path influence identifier resolution and the "current schema"
The first table lives in a schema that comes before schema_name in your search_path (or schema_name is not listed there at all). So the unqualified table name is resolved to this table (or view). Check the list of tables named 'table_name' that your current role has access to in your database:
SELECT *
FROM information_schema.tables
WHERE table_name = 'table_name';
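To see which schema the unqualified name actually resolves to in your current session, you can also ask the system catalogs directly (a quick sketch):
SHOW search_path;

SELECT n.nspname AS resolved_schema
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid = 'table_name'::regclass;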
Views are just special tables with an attached RULE internally. They can play the same role as a regular table and are included in the above query.
Details:
How to check if a table exists in a given schema
If I'm selecting from one source into another, can I specify the collation at the same time?
e.g.
SELECT Column1, Column2
INTO DestinationTable
FROM SourceTable
Where 'DestinationTable' doesn't already exist.
I know I can do something like
SELECT Column1, Column2 COLLATE Latin1_General_CI_AS
INTO DestinationTable
FROM SourceTable
In my real problem the data types of the columns aren't known in advance, so I can't just add the collation to each column. It's in a corner of a legacy application using large nasty stored procedures that generate SQL, and I'm trying to get it working on a new server that has a different collation in tempdb, with minimal changes.
I'm looking for something like:
SELECT Column1, Column2
INTO DestinationTable COLLATE Latin1_General_CI_AS
FROM SourceTable
But that doesn't work.
Can you create the table first?
You can define a collation for the relevant columns. On INSERT, they will be coerced.
It sounds like you don't know the structure of the target table though... so then no, you can't without dynamic SQL. Which will make things worse...
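If creating the table first is an option, it would look something like this (the column types here are placeholders, since the real structure isn't known):
CREATE TABLE DestinationTable
(
    Column1 nvarchar(100) COLLATE Latin1_General_CI_AS,
    Column2 nvarchar(100) COLLATE Latin1_General_CI_AS
);

INSERT INTO DestinationTable (Column1, Column2)
SELECT Column1, Column2
FROM SourceTable; -- values are coerced to the column collation on insert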
You can do it like this if it helps:
SELECT *
INTO DestinationTable
FROM
(
SELECT Column1 COLLATE Latin1_General_CI_AS, Column2 COLLATE Latin1_General_CI_AS
FROM SourceTable
) as t
To correct Kasia's answer:
SELECT *
INTO DestinationTable
FROM
(
SELECT Column1 COLLATE Latin1_General_CI_AS as Column1
,Column2 COLLATE Latin1_General_CI_AS as Column2
FROM SourceTable
) as t
You have to add an alias for each column to get this to work.
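You can verify the collations that SELECT ... INTO produced by checking the catalog (using the DestinationTable name from above):
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('DestinationTable');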