I have created a table in Postgres with some timestamp columns.
create table glacier_restore_progress_4(
    id SERIAL NOT NULL,
    email VARCHAR(50),
    restore_start timestamp,
    restore_end timestamp,
    primary key (id)
);
In the DBeaver client, those timestamp columns show values like "2021-06-22 03:25:00". But when I fetch them via an API, the values come back as "2021-06-22T03:25:00.000Z". How can I get rid of this?
I tried changing the data type of the columns in the DBeaver client, but that didn't work.
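For what it's worth, "2021-06-22T03:25:00.000Z" is the ISO 8601 form that API layers typically produce (for example, a JavaScript driver serializing a Date to JSON); Postgres itself stores no display format. One possible workaround, as a sketch rather than a definitive fix, is to format the timestamps as text inside the query so the driver returns them verbatim:
-- Format the timestamps as text so the API layer has no Date object to serialize.
SELECT id,
       email,
       to_char(restore_start, 'YYYY-MM-DD HH24:MI:SS') AS restore_start,
       to_char(restore_end, 'YYYY-MM-DD HH24:MI:SS') AS restore_end
FROM glacier_restore_progress_4;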
I have a table of the following form in Postgres:
CREATE TABLE contract (
    id serial NOT NULL,
    start_date date NOT NULL,
    end_date date NOT NULL,
    price float8 NOT NULL,
    CONSTRAINT contract_pkey PRIMARY KEY (id)
);
In Microsoft PowerApps, I created an EditForm to update the table above. For other databases, like MS SQL, I didn't need to supply the id, since it auto-increments. But for some reason, PowerApps keeps demanding that I fill in the id for this table, even though it auto-increments and shouldn't be supplied to Postgres.
Does anyone have the same experience with PowerApps in combination with Postgres? I've been struggling with this for hours...
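For reference, Postgres itself never needs the id on insert, because the serial column's sequence supplies it. A minimal sketch run directly against the database (the values are placeholders):
-- id is omitted; the sequence behind the serial column fills it in.
INSERT INTO contract (start_date, end_date, price)
VALUES ('2024-01-01', '2024-12-31', 99.95)
RETURNING id;
So the question is really why PowerApps insists on the column, not whether Postgres needs it.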
I use knex to create a Postgres table as follows:
knex.schema.createTable('users', table => {
    table.bigIncrements('user_id');
    ....
})
But after the table was created, the column user_id is an integer, not serial as I expected.
The SQL shown by pgAdmin is as follows:
CREATE TABLE public.users
(
    user_id bigint NOT NULL DEFAULT nextval('users_user_id_seq'::regclass),
    ....
)
And the consequence is that when I run an insert statement, user_id doesn't auto-increment as expected.
Any ideas?
Update:
For now I have just switched to a MySQL connection, and inserting works well. But if I change the database back to PostgreSQL, inserting fails due to duplicate user_id values. The code can be found here: https://github.com/buzz-buzz/buzz-service
serial and bigserial are not real types; they are just shorthand for what pgAdmin is showing.
You will also find that a sequence named users_user_id_seq has been created when you look under Sequences in pgAdmin.
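In other words, table.bigIncrements('user_id') produced exactly what bigserial stands for. Roughly, the shorthand expands to:
-- What 'user_id bigserial' is shorthand for:
CREATE SEQUENCE users_user_id_seq;
CREATE TABLE public.users
(
    user_id bigint NOT NULL DEFAULT nextval('users_user_id_seq'::regclass)
);
ALTER SEQUENCE users_user_id_seq OWNED BY public.users.user_id;
An insert that omits user_id will auto-increment normally. One possible cause of the duplicate-key failures described above (an assumption, not something visible in the question) is that rows were inserted with explicit user_id values, leaving the sequence behind the table's maximum; setval can resync it:
-- Resync the sequence with the highest existing id.
SELECT setval('users_user_id_seq', (SELECT max(user_id) FROM users));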
I've got a PgSQL 9.4.3 server setup and previously I was only using the public schema and for example I created a table like this:
CREATE TABLE ma_accessed_by_members_tracking (
    reference bigserial NOT NULL,
    ma_reference bigint NOT NULL,
    membership_reference bigint NOT NULL,
    date_accessed timestamp without time zone,
    points_awarded bigint NOT NULL
);
Using the Windows program PgAdmin III, I can see that it created the proper information and sequence.
However I've recently added another schema called "test" to the same database and created the exact same table, just like before.
However this time I see:
CREATE TABLE test.ma_accessed_by_members_tracking
(
    reference bigint NOT NULL DEFAULT nextval('ma_accessed_by_members_tracking_reference_seq'::regclass),
    ma_reference bigint NOT NULL,
    membership_reference bigint NOT NULL,
    date_accessed timestamp without time zone,
    points_awarded bigint NOT NULL
);
My question / curiosity is: why does the reference column show as bigserial in the public schema, but as bigint with a nextval() default in the test schema?
Both work as expected. I just don't understand why the difference in schemas would produce different table creation scripts. I realize that bigint and bigserial allow the same range of integers.
Merely A Notational Convenience
According to the documentation on Serial Types, smallserial, serial, and bigserial are not true data types. Rather, they are a notational convenience for creating, in one step, both a sequence and a column whose default value points to that sequence.
I created a test table in the public schema. psql's \d command shows a bigint column type, so maybe it's PgAdmin behavior?
Update
I checked the PgAdmin source code. In the function pgColumn::GetDefinition(), it scans the pg_depend table for an auto dependency, and when one is found it replaces bigint with bigserial to simulate the original table creation code.
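For the curious, the lookup PgAdmin performs can be approximated with a query against pg_depend; this is a sketch, where deptype = 'a' marks the automatic dependency of a serial's sequence on its column:
-- List sequences auto-linked to table columns, i.e. the columns
-- PgAdmin would render as serial/bigserial.
SELECT d.objid::regclass    AS sequence_name,
       d.refobjid::regclass AS table_name,
       a.attname            AS column_name
FROM pg_depend d
JOIN pg_attribute a
  ON a.attrelid = d.refobjid
 AND a.attnum = d.refobjsubid
WHERE d.deptype = 'a'
  AND d.classid = 'pg_class'::regclass;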
When you create a serial column in the standard way:
CREATE TABLE new_table (
    new_id serial);
Postgres creates a sequence with commands:
CREATE SEQUENCE new_table_new_id_seq ...
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
From the documentation: the OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well.
The standard name of the sequence is built from the table name, the column name, and the suffix _seq.
If a serial column was created this way, PgAdmin shows its type as serial.
If the sequence has a non-standard name or is not associated with a column, PgAdmin shows nextval() as the default value instead.
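A quick way to check whether that association is in place for a given column is pg_get_serial_sequence, which returns NULL when no owned sequence exists (in which case PgAdmin falls back to showing nextval()):
-- Returns the sequence owned by the column, or NULL if there is none.
SELECT pg_get_serial_sequence('test.ma_accessed_by_members_tracking', 'reference');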
I have a set of files on S3 that I am trying to load into Redshift.
I am using AWS Data Pipeline to do it. The wizard took the cluster, DB, and file format info, but I get errors that a primary key is needed to keep existing fields in the table (KEEP_EXISTING).
My table schema is:
create table public.Bens_Analytics_IP_To_FileName(
    Day date not null encode delta32k,
    IP varchar(30) not null encode text255,
    FileName varchar(300) not null encode text32k,
    Count integer not null)
distkey(Day)
sortkey(Day,IP);
So then I added a composite primary key to the table to see if it would work, but I got the same error.
create table public.Bens_Analytics_IP_To_FileName(
    Day date not null encode delta32k,
    IP varchar(30) not null encode text255,
    FileName varchar(300) not null encode text32k,
    Count integer not null,
    primary key(Day,IP,FileName))
distkey(Day)
sortkey(Day,IP);
So I decided to add an identity column as the last column and made it the primary key, but then the COPY operation wanted a value in the input files for that identity column, which did not make much sense.
Ideally I want it to work without a primary key or a composite primary key.
Any ideas?
Thanks
The documentation is not in great condition. They have added a 'mergeKey' concept that can be any arbitrary key (announcement, docs). With this, you should not have to define a primary key on the table.
But you would still need to supply a key to perform the join between your new incoming data and the existing data in the Redshift table.
In Edit Pipeline, under Parameters, there is a field named myPrimaryKeys (optional). Enter your PK there, instead of adding it to your table definition.
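For context, the join that key is needed for is conceptually the standard Redshift staging-table merge. A rough manual equivalent using the composite key from the question (the S3 path, IAM role, and file format are placeholders):
BEGIN;
-- Stage the incoming S3 data in a temp table with the same shape.
CREATE TEMP TABLE stage (LIKE public.Bens_Analytics_IP_To_FileName);
COPY stage
FROM 's3://your-bucket/your-prefix/'
IAM_ROLE 'arn:aws:iam::123456789012:role/your-role'
DELIMITER ',';
-- Replace rows that match on the key ...
DELETE FROM public.Bens_Analytics_IP_To_FileName
USING stage
WHERE Bens_Analytics_IP_To_FileName.Day = stage.Day
  AND Bens_Analytics_IP_To_FileName.IP = stage.IP
  AND Bens_Analytics_IP_To_FileName.FileName = stage.FileName;
-- ... then append everything from the staging table.
INSERT INTO public.Bens_Analytics_IP_To_FileName
SELECT * FROM stage;
COMMIT;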