type "hstore" is only a shell - postgresql

I am trying to set up automatic audit logging in Postgres using triggers and trigger functions. For this I want to create the table logged_actions in the audit schema. When I run the following query:
CREATE TABLE IF NOT EXISTS audit.logged_actions
(
event_id bigint NOT NULL DEFAULT nextval('audit.logged_actions_event_id_seq'::regclass),
schema_name text COLLATE pg_catalog."default" NOT NULL,
table_name text COLLATE pg_catalog."default" NOT NULL,
relid oid NOT NULL,
session_user_name text COLLATE pg_catalog."default",
action_tstamp_tx timestamp with time zone NOT NULL,
action_tstamp_stm timestamp with time zone NOT NULL,
action_tstamp_clk timestamp with time zone NOT NULL,
transaction_id bigint,
application_name text COLLATE pg_catalog."default",
client_addr inet,
client_port integer,
client_query text COLLATE pg_catalog."default",
action text COLLATE pg_catalog."default" NOT NULL,
row_data hstore,
changed_fields hstore,
statement_only boolean NOT NULL,
CONSTRAINT logged_actions_pkey PRIMARY KEY (event_id),
CONSTRAINT logged_actions_action_check CHECK (action = ANY (ARRAY['I'::text, 'D'::text, 'U'::text, 'T'::text]))
)
I have already created the extension "hstore", but the query is not executed and an error message appears stating:
ERROR: type "hstore" is only a shell
LINE 17: row_data hstore

That's a cryptic way of saying the hstore extension isn't loaded. You need to run CREATE EXTENSION hstore in the database you are connected to before you can use the type.
Note that jsonb more or less makes hstore obsolete.
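If the extension really hasn't been created in the database you are connecting to, a minimal fix looks like this (a sketch; it assumes your role is allowed to create extensions):
-- Extensions are installed per database, not per cluster
CREATE EXTENSION IF NOT EXISTS hstore;
-- Confirm it is present before re-running the CREATE TABLE
SELECT extname, extversion FROM pg_extension WHERE extname = 'hstore';
Alternatively, row_data and changed_fields could be declared jsonb, which needs no extension at all.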

Related

Postgres - how to bulk insert table with foreign keys

I am looking to do a bulk insert into my PostgreSQL database.
The database is not yet live.
PostgreSQL 13.
I have a temporary staging table into which I bulk inserted data:
CREATE TABLE public.temp_inverter_location
(
id integer,
inverter_num_in_sld integer,
lift_requirements character varying,
geo_location_id integer NOT NULL, -- foreign key references geo_location.id
location_name character varying,
project_info_id integer NOT NULL -- foreign key references project_info.id
)
I am trying to populate the two foreign key columns temp_inverter_location.geo_location_id and temp_inverter_location.project_info_id.
The two referenced tables are referenced by their id columns:
geo_location
CREATE TABLE public.geo_location
(
id integer,
country character varying(50) COLLATE pg_catalog."default",
region character varying(50) COLLATE pg_catalog."default",
city character varying(100) COLLATE pg_catalog."default",
location_name character varying COLLATE pg_catalog."default"
)
and
project_info
CREATE TABLE public.project_info
(
id integer,
operation_name character varying,
project_num character varying(10),
grafana_site_num character varying(10)
)
I want to populate the correct foreign keys into the columns temp_inverter_location.geo_location_id and temp_inverter_location.project_info_id.
I am trying to use INSERT INTO ... SELECT to populate temp_inverter_location.geo_location_id with a JOIN that matches geo_location.location_name to temp_inverter_location.location_name.
I have tried this query, however temp_inverter_location.geo_location_id remains blank:
INSERT INTO temp_inverter_location(geo_location_id) SELECT geo_location.id FROM geo_location INNER JOIN temp_inverter_location ON geo_location.location_name=temp_inverter_location.location_name;
Please let me know if more info is needed, thanks!
I was able to resolve this issue using an UPDATE referencing another table.
Basically, I updated the geo_location_id column using:
UPDATE temp_inverter_location SET geo_location_id = geo_location.id FROM geo_location WHERE geo_location.location_name = temp_inverter_location.location_name;
and updated the project_info_id using
UPDATE load_table SET project_info_id = project_info.id FROM project_info WHERE project_info.operation_name = load_table.location_name;
It seems to have worked.
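For what it's worth, the reason the original attempt left the column blank is that INSERT INTO ... SELECT adds brand-new rows containing only geo_location_id, whereas UPDATE ... FROM fills in the column on the rows that are already staged. After the updates, a quick sanity check might be (a sketch using the column names above):
-- Staging rows whose location_name found no match and therefore still have a NULL key
SELECT t.id, t.location_name
FROM temp_inverter_location t
WHERE t.geo_location_id IS NULL;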

postgresql unique constraint allows duplicate

I have a users table like below:
CREATE TABLE public.users
(
id integer NOT NULL DEFAULT nextval('users_id_seq'::regclass),
uid uuid DEFAULT (md5(((random())::text || (clock_timestamp())::text)))::uuid,
createdon timestamp without time zone DEFAULT now(),
createdby integer,
modifiedon timestamp without time zone,
modifiedby integer,
comments boolean DEFAULT false,
verified boolean DEFAULT false,
active boolean DEFAULT true,
deleted boolean DEFAULT false,
tags text[] COLLATE pg_catalog."default",
user_type user_types NOT NULL,
fullname character varying(100) COLLATE pg_catalog."default" NOT NULL,
email character varying(84) COLLATE pg_catalog."default" NOT NULL,
pword character varying(32) COLLATE pg_catalog."default",
salt character varying(32) COLLATE pg_catalog."default",
hash text COLLATE pg_catalog."default",
source character varying(100) COLLATE pg_catalog."default",
reference character varying(100) COLLATE pg_catalog."default",
CONSTRAINT users_pkey PRIMARY KEY (id),
CONSTRAINT email_unique UNIQUE (email)
,
CONSTRAINT users_createdby_fkey FOREIGN KEY (createdby)
REFERENCES public.users (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION,
CONSTRAINT users_modifiedby_fkey FOREIGN KEY (modifiedby)
REFERENCES public.users (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
)
The email field is set to unique.
When I try to insert a record twice in pgAdmin, I get the error.
However, if the same query is run from my Node.js app via the pg library, the records are inserted.
What is the reason for this behaviour?
The query object used in the app:
{ text: 'INSERT INTO public.players ( createdby, user_type, fullname, email, pword, reference, source, salt, hash ) \n VALUES ( $1, $2, $3, $4, $5, $6, $7, $8, $9 ) RETURNING id',
values:
[ null,
'player',
'James De Souza',
'james#desouza.com',
'4297f44b13955235245b2497399d7a93',
'organic',
'on-site',
'07ecab28a4bab8f1bf63208ac8961053',
'25571c0618c701087495069cb4e45bf4fb07197e5ff301d963b670a9e033d2044557eb46537cee27a51da8b1fd0c8987b68ad7e8e47f48a06bcb1f44e6d3678d2c875d3dd4311a506a75eabf6fff23b65deba6a202606cc6d6b42abe4b25d136faffed8bd0046620f4e10ef0b974b108b27511a42c150983e268e1f7522ad678f0699848747a9e2f4a0cafc66704915a38966fbc76647678d907ca960533a5dc4de983167fafb7807e583dd5affcc2e14900295c6f396e768a32f106a4c636be78a6df96268216bc9410373fcc2528eb7984e2cb91ae62c3c65660dc477db3c3bfeadfacb214a055a48a1e9ed0c169ee54fcc6e7b24435cb53c3596e19bedbfef2c289ffb784f6fce18b9623253260e17aca5b3d810248ece6c51d810f3b44b1eb95225d5170cde0f3c9fda8ceefd9a287016c785576264f95ee961254bc371fed8671a7497456ce439d7318f21e539ce5940bd2fd73a350fc5d139cbe06bda568663a35488ceb7c62dadf3ee6d5810b5248abe447472b9c294a13c30144271a06e10b6a7f070df5bd7e804b13b1ab541c65de65dc5b85cf3199d7b13431095aff83de6939afc2d72d187597bf8214bf45f356591f7e513e7026322a20beed430966fbd3cbe4ec2c95b54d081c032f5e2ba930019857bb63e7c631668e3f607559b4ffffc1de6c957f687930f2900fb27123aaaf5f55a06844586cee94d10757' ] }
NOTE: public.players inherits from public.users
CREATE TABLE public.players (
"username" character varying(100) UNIQUE DEFAULT concat('player', (random() * 100000000)::int::text),
"location" int REFERENCES public.list_locations ON DELETE RESTRICT,
"address" text,
"bio" text
) INHERITS (public.users);
I just realized that the unique constraint is not enforced on the inherited table.
Is there any solution or workaround for this problem?
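This is documented PostgreSQL behaviour: with table inheritance, indexes and unique constraints on the parent only cover rows stored in the parent itself, so rows inserted into public.players are never checked against users' email_unique constraint. A per-table workaround is to declare the constraint on the child as well (a sketch; note that it still cannot enforce uniqueness across parent and child combined):
-- Enforces uniqueness only among rows stored directly in public.players;
-- cross-table uniqueness would need a trigger-based check or a single-table redesign
ALTER TABLE public.players
ADD CONSTRAINT players_email_unique UNIQUE (email);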

.net identityUser datetimeoffset mismatch postgresql

I'm using .NET Core with PostgreSQL and all of a sudden (I guess there was an update to something) it all stopped working.
This is the entity my user entity is inheriting from.
Here LockoutEnd is DateTimeOffset? and in my Postgres table:
CREATE TABLE public.users
(
id integer NOT NULL DEFAULT nextval('users_id_seq'::regclass),
accessfailedcount integer NOT NULL,
concurrencystamp character varying(255) COLLATE pg_catalog."default",
email character varying(128) COLLATE pg_catalog."default",
emailconfirmed boolean NOT NULL,
lockoutenabled boolean NOT NULL,
lockoutend timestamp with time zone,
name character varying(128) COLLATE pg_catalog."default",
normalizedemail character varying(128) COLLATE pg_catalog."default",
normalizedusername character varying(128) COLLATE pg_catalog."default",
passwordhash character varying(512) COLLATE pg_catalog."default",
phonenumber character varying(50) COLLATE pg_catalog."default",
phonenumberconfirmed boolean NOT NULL,
securitystamp character varying(255) COLLATE pg_catalog."default",
twofactorenabled boolean NOT NULL,
username character varying(50) COLLATE pg_catalog."default",
locale integer NOT NULL DEFAULT 1,
CONSTRAINT users_pkey PRIMARY KEY (id)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.users
OWNER to notifiedlocal;
CREATE INDEX emailindex
ON public.users USING btree
(normalizedemail COLLATE pg_catalog."default")
TABLESPACE pg_default;
CREATE UNIQUE INDEX usernameindex
ON public.users USING btree
(normalizedusername COLLATE pg_catalog."default")
TABLESPACE pg_default;
This is the error I get when I try to do a simple get from the user table:
System.InvalidOperationException: "An exception occurred while reading a database value. The expected type was 'System.Nullable`1[System.DateTimeOffset]' but the actual value was of type 'System.DateTime'."
So have I accidentally updated Postgres or .NET Identity so that one of the sides changed?
This used to work and I haven't changed anything on purpose.
Is it possible to change the identity entity to use a normal DateTime instead?

Postgres Update table id with sequence

I am trying to attach a sequence to the user table so that the id field gets an auto-incremented value.
I created the following sequence:
CREATE SEQUENCE "USER_MGMT"."USER_SEQ"
INCREMENT 1
START 1000
MINVALUE 1000
MAXVALUE 99999999
CACHE 1;
ALTER SEQUENCE "USER_MGMT"."USER_SEQ"
OWNER TO postgres;
The following is my table:
-- Table: "USER_MGMT"."USER"
-- DROP TABLE "USER_MGMT"."USER";
CREATE TABLE "USER_MGMT"."USER"
(
"USER_ID" bigint NOT NULL,
"FIRST_NAME" character varying(50) COLLATE pg_catalog."default" NOT NULL,
"LAST_NAME" character varying(50) COLLATE pg_catalog."default" NOT NULL,
"EMAIL_ID" character varying(100) COLLATE pg_catalog."default" NOT NULL,
"DESK_NUMBER" bigint,
"MOBILE_NUMBER" bigint,
"IS_ACTIVE" boolean NOT NULL DEFAULT true,
"CREATED_BY" character varying(100) COLLATE pg_catalog."default",
"MODIFIED_BY" character varying(100) COLLATE pg_catalog."default",
"DATE_CREATED" timestamp without time zone NOT NULL,
"DATE_MODIFIED" timestamp without time zone,
CONSTRAINT "USER_ID_PK" PRIMARY KEY ("USER_ID"),
CONSTRAINT "EMAIL_ID_UK" UNIQUE ("EMAIL_ID"),
CONSTRAINT "MOBILE_NUMBER_UK" UNIQUE ("MOBILE_NUMBER"),
CONSTRAINT "USER_ID_UK" UNIQUE ("USER_ID")
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE "USER_MGMT"."USER"
OWNER to postgres;
I want to connect this sequence to the USER_ID column, so it will be auto-incremented.
The table name and field names should stay in upper case.
I am trying to execute the following query, but it's not working:
ALTER TABLE USER_MGMT.USER ALTER COLUMN USER_ID SET DEFAULT nextval('USER_MGMT.USER_SEQ');
It shows the following error message in the console:
ERROR: schema "user_mgmt" does not exist
********** Error **********
That is because when you use double quotes you create a case-sensitive object identifier; to be more precise, the object gets an identifier with exactly the case given in the query at creation time. If you do not double-quote identifiers, they are folded to lower case.
So you need to either stop using double quotes and create your objects in lower case, or use double quotes in your ALTER query as well:
ALTER TABLE "USER_MGMT"."USER" ALTER COLUMN "USER_ID" SET DEFAULT nextval('"USER_MGMT"."USER_SEQ"');

Alter command is taking long time, but not executing in PostgreSQL

The ALTER command below takes a long time and never finishes executing.
alter table DETAILS alter column row_id type numeric(20);
DDL is as follows:
CREATE TABLE Details
(
row_id numeric(15,0) NOT NULL,
intfid character varying(20) NOT NULL,
seqno numeric(15,0) NOT NULL,
record_id numeric(15,0) NOT NULL,
lstmoddate timestamp without time zone NOT NULL,
rcvddate timestamp without time zone NOT NULL DEFAULT current_date,
record_type character varying(60),
xmldata bytea,
CONSTRAINT mrd_pk PRIMARY KEY (rcvddate, intfid, seqno, record_id)
)
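One common cause, offered here as a guess: ALTER TABLE ... ALTER COLUMN TYPE needs an ACCESS EXCLUSIVE lock on DETAILS, so it will appear to hang for as long as any other session holds a conflicting lock on the table (even an idle-in-transaction session). A first diagnostic query might be (a sketch, assuming PostgreSQL 9.6 or later for pg_blocking_pids):
-- Show active sessions, what they are waiting on, and which PIDs block them
SELECT pid, state, wait_event_type, wait_event,
pg_blocking_pids(pid) AS blocked_by,
left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle';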