PostgreSQL throws ambiguous column error

I have the following table in my Postgres database
CREATE TABLE "public"."zuffs"
(
    "hash" bigint NOT NULL,
    "zuff" bigint NOT NULL,
    "lat" integer NOT NULL,
    "lng" integer NOT NULL,
    "weather" integer DEFAULT 0,
    "expires" integer DEFAULT 0,
    "clients" integer DEFAULT 0,
    CONSTRAINT "zuffs_hash" PRIMARY KEY ("hash")
) WITH (oids = false);
to which I want to add a new row, or update the weather, expires and clients columns if the row already exists. To do this, my PHP script generates the following SQL:
INSERT INTO zuffs (hash,zuff,lat,lng,weather,expires)
VALUES(5523216,14978310951341,4978,589,105906435,4380919) ON CONFLICT(hash) DO UPDATE SET
weather = 105906435,expires = 4380919,clients = clients + 1;
which fails with the error
ERROR: column reference "clients" is ambiguous
I fail to see why this is happening. I hope that someone here can explain.

In the UPDATE part you should use the EXCLUDED pseudo-row to reference the values from the proposed insert. And to reference the existing value, you need to prefix the column with the table name again, to avoid the ambiguity between the "excluded" and "current" values:
INSERT INTO zuffs (hash,zuff,lat,lng,weather,expires)
VALUES (5523216,14978310951341,4978,589,105906435,4380919)
ON CONFLICT(hash) DO UPDATE
SET weather = excluded.weather,
expires = excluded.expires,
clients = zuffs.clients + 1;
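The ambiguity arises because inside DO UPDATE both the target table and the EXCLUDED pseudo-row are in scope, so a bare column name in a SET expression could refer to either. A minimal reproduction (hypothetical table name, not from the question):

```sql
CREATE TABLE t (id int PRIMARY KEY, n int DEFAULT 0);
INSERT INTO t (id) VALUES (1);

-- fails with: ERROR: column reference "n" is ambiguous
-- INSERT INTO t (id) VALUES (1) ON CONFLICT (id) DO UPDATE SET n = n + 1;

-- works: qualify the existing value with the table name
INSERT INTO t (id) VALUES (1) ON CONFLICT (id) DO UPDATE SET n = t.n + 1;
```

Note that only the right-hand side of SET is ambiguous; the assignment target (`n =`) must stay unqualified and always means the target table's column.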

Related

duplicate key value violates unique constraint "pk_user_governments"

I am trying to insert a record with a many-to-many relationship via EF Core into a Postgres table.
Adding a simple record to Users works, but when I introduced the 1:N relation with User_Governments it started giving me duplicate key value violates unique constraint "pk_user_governments"
I have tried a few things:
SELECT MAX(user_governments_id) FROM user_governments;
SELECT nextval('users_gov_user_id_seq');
The sequence keeps incrementing every time I run it in Postgres, but the issue persists.
I am inserting it as follows:
User user = new();
user.Organisation = organisation;
user.Name = userName;
user.Email = email;
user.IsSafetyDashboardUser = isSafetyFlag;
if (isSafetyFlag)
{
    List<UserGovernment> userGovernments = new List<UserGovernment>();
    foreach (var govId in lgas)
    {
        userGovernments.Add(new UserGovernment()
        {
            LocalGovId = govId,
            StateId = 7
        });
    }
    user.UserGovernments = userGovernments;
}
_context.Users.Add(user);
int rows_affected = _context.SaveChanges();
The table definition in the db is as follows:
CREATE TABLE IF NOT EXISTS user_governments
(
    user_government_id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
    user_id integer NOT NULL,
    state_id integer NOT NULL,
    local_gov_id integer NOT NULL,
    CONSTRAINT pk_user_governments PRIMARY KEY (user_government_id),
    CONSTRAINT fk_user_governments_local_govs_local_gov_id FOREIGN KEY (local_gov_id)
        REFERENCES local_govs (local_gov_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE,
    CONSTRAINT fk_user_governments_states_state_id FOREIGN KEY (state_id)
        REFERENCES states (state_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE,
    CONSTRAINT fk_user_governments_users_user_id FOREIGN KEY (user_id)
        REFERENCES users (user_id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;
I have also tried running following command as per this post
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('user_governments', 'user_government_id')), (SELECT (MAX("user_government_id") + 1) FROM "user_governments"), FALSE);
but I get error:
ERROR: relation "user_governments" does not exist
IDENTITY is an automatic increment integrated into the table. There is no need to use PG_GET_SERIAL_SEQUENCE, which is dedicated to SEQUENCES, a separate mechanism providing increments outside the table. So you cannot use a query like:
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('user_governments', 'user_government_id')),
(SELECT (MAX("user_government_id") + 1) FROM "user_governments"), FALSE)
If your purpose is to assign the seed for an IDENTITY column, you must use a syntax like this one. Note that RESTART WITH only accepts a constant, not a subquery, so compute MAX(user_government_id) + 1 first and plug it in:
ALTER TABLE user_governments
ALTER COLUMN user_government_id RESTART WITH 1000;  -- 1000 stands in for MAX(user_government_id) + 1
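Since ALTER TABLE does not evaluate subqueries, computing the restart value dynamically needs dynamic SQL. A sketch under that assumption, using the table and column names from the question:

```sql
DO $$
DECLARE
    next_id bigint;
BEGIN
    -- take the current maximum (or 0 for an empty table) and restart just past it
    SELECT COALESCE(MAX(user_government_id), 0) + 1
      INTO next_id
      FROM user_governments;

    EXECUTE format(
        'ALTER TABLE user_governments ALTER COLUMN user_government_id RESTART WITH %s',
        next_id);
END $$;
```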
It turned out that I did not build the model correctly.
The user_government table had an incremental key, but I had defined the model as follows
modelBuilder.Entity<UserGovernment>()
.HasKey(bc => new { bc.UserId, bc.LocalGovId });
I replaced it with:
modelBuilder.Entity<UserGovernment>()
.HasKey(bc => new { bc.UserGovernmentId});
The Journey :)
Initially I found out that once I commented out the following line
_context.UserGovernments.AddRange(userGovernments);
it just inserted data with user_government_id as 0.
Then I tried manually giving a value to user_government_id and that also went through successfully, which led me to check my model builder code!!

How to prevent overlapping of int ranges

I have a table as follows:
CREATE TABLE appointments (
    id SERIAL PRIMARY KEY,
    date TIMESTAMP NOT NULL,
    start_mn INT NOT NULL,
    end_mn INT NOT NULL,
    EXCLUDE USING gist((array[start_mn, end_mn]) WITH &&)
)
I want to prevent start_mn and end_mn overlapping between rows so I've added a gist exclusion :
EXCLUDE using gist((array[start_mn, end_mn]) WITH &&)
But the two following inserts do not trigger the exclusion:
INSERT INTO appointments(date, start_mn, end_mn) VALUES('2020-08-08', 100, 200);
INSERT INTO appointments(date, start_mn, end_mn) VALUES('2020-08-08', 90, 105);
How can I achieve this exclusion ?
If you want to prevent overlapping ranges you will have to use a range type, not an array.
I also assume that start and end should never overlap on the same day, so you need to include the date column in the exclusion constraint:
CREATE TABLE appointments
(
    id SERIAL PRIMARY KEY,
    date TIMESTAMP NOT NULL,
    start_mn INT NOT NULL,
    end_mn INT NOT NULL,
    EXCLUDE USING gist( int4range(start_mn, end_mn, '[]') WITH &&, "date" WITH =)
)
If start_mn and end_mn are supposed to be "time of the day", then those columns should be defined as time, not as integers.
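One caveat worth adding (not from the original answer): the `"date" WITH =` part uses a btree equality operator inside a GiST index, which requires the btree_gist extension. A sketch assuming the range-based table above:

```sql
-- Without this, CREATE TABLE fails because timestamp has no default
-- operator class for the gist access method.
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- With the range-based constraint in place, the second insert now fails:
INSERT INTO appointments(date, start_mn, end_mn) VALUES ('2020-08-08', 100, 200);
INSERT INTO appointments(date, start_mn, end_mn) VALUES ('2020-08-08', 90, 105);
-- ERROR: conflicting key value violates exclusion constraint
```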

Speed up heavy UPDATE..FROM..WHERE PostgreSQL query

I have 2 big tables
CREATE TABLE "public"."linkages" (
    "supplierid" integer NOT NULL,
    "articlenumber" character varying(32) NOT NULL,
    "article_id" integer,
    "vehicle_id" integer
);
CREATE INDEX "__linkages_datasupplierarticlenumber" ON "public"."__linkages" USING btree ("datasupplierarticlenumber");
CREATE INDEX "__linkages_supplierid" ON "public"."__linkages" USING btree ("supplierid");
having 215 000 000 records, and
CREATE TABLE "public"."article" (
    "id" integer DEFAULT nextval('tecdoc_article_id_seq') NOT NULL,
    "part_number" character varying(32),
    "supplier_id" integer,
    CONSTRAINT "tecdoc_article_part_number_supplier_id" UNIQUE ("part_number", "supplier_id")
) WITH (oids = false);
having 5 500 000 records.
I need to update linkages.article_id according to article.part_number and article.supplier_id, like this:
UPDATE linkages
SET article_id = article.id
FROM article
WHERE linkages.supplierid = article.supplier_id
  AND linkages.articlenumber = article.part_number;
But it is too heavy: it ran for a day with no result, so I terminated it.
I need to do this update only once, to normalize my table structure for using foreign keys in the Django ORM. How can I resolve this issue?
Thanks a lot!
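The thread carries no answer here, but for a one-off backfill of this size a commonly suggested alternative (a sketch, not from the thread) is to rewrite the table once instead of updating 215 million rows in place, which avoids the dead-tuple bloat a giant UPDATE creates:

```sql
BEGIN;
-- Build the normalized copy in one pass; the join key matches the
-- existing UNIQUE (part_number, supplier_id) constraint on article.
CREATE TABLE linkages_new AS
SELECT l.supplierid,
       l.articlenumber,
       a.id AS article_id,
       l.vehicle_id
FROM linkages l
LEFT JOIN article a
       ON a.supplier_id = l.supplierid
      AND a.part_number = l.articlenumber;

-- Recreate the indexes on linkages_new, then swap the tables.
DROP TABLE linkages;
ALTER TABLE linkages_new RENAME TO linkages;
COMMIT;
```

The CREATE TABLE AS approach writes each row exactly once and skips index maintenance during the copy, which is usually far cheaper than an in-place UPDATE of every row.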

An empty row with null-like values in a not-null field

I'm using PostgreSQL 9.0 beta 4.
After inserting a lot of data into a partitioned table, I found a weird thing. When I query the table, I can see an empty row with null-like values in 'not-null' fields.
That weird query result is like below.
The 689th row is empty. The first 3 fields (stid, d, ticker) compose the primary key, so they should not be null. The query I used is this:
select * from st_daily2 where stid=267408 order by d
I can even do a group by on this data.
select stid, date_trunc('month', d) ym, count(*) from st_daily2
where stid=267408 group by stid, date_trunc('month', d)
The 'group by' result still has the empty row; the 1st row is empty.
But if I query where 'stid' or 'd' is null, it returns nothing.
Is this a bug in PostgreSQL 9.0 beta 4, or some data corruption?
EDIT :
I added my table definition.
CREATE TABLE st_daily
(
    stid integer NOT NULL,
    d date NOT NULL,
    ticker character varying(15) NOT NULL,
    mp integer NOT NULL,
    settlep double precision NOT NULL,
    prft integer NOT NULL,
    atr20 double precision NOT NULL,
    upd timestamp with time zone,
    ntrds double precision
)
WITH (
    OIDS=FALSE
);
CREATE TABLE st_daily2
(
    CONSTRAINT st_daily2_pk PRIMARY KEY (stid, d, ticker),
    CONSTRAINT st_daily2_strgs_fk FOREIGN KEY (stid)
        REFERENCES strgs (stid) MATCH SIMPLE
        ON UPDATE CASCADE ON DELETE CASCADE,
    CONSTRAINT st_daily2_ck CHECK (stid >= 200000 AND stid < 300000)
)
INHERITS (st_daily)
WITH (
    OIDS=FALSE
);
The data in this table is simulation results. Multiple multithreaded simulation engines written in C# insert data into the database using Npgsql.
psql also shows the empty row.
You'd better leave a posting at http://www.postgresql.org/support/submitbug
Some questions:
- Could you show us the table definitions and constraints for the partitions?
- How did you load your data?
- Do you get the same result when using another tool, like psql?
The answer to your problem may very well lie in your first sentence:
I'm using postgresql 9.0 beta 4.
Why would you do that? Upgrade to a stable release. Preferably the latest point-release of the current version.
This is 9.1.4 as of today.
I got to the same point: "what in the heck is that blank value?"
No, it's not a NULL, it's -infinity.
To filter for such a row use:
WHERE
    CASE WHEN mytestcolumn = '-infinity'::timestamp
           OR mytestcolumn = 'infinity'::timestamp
         THEN NULL ELSE mytestcolumn END IS NULL
instead of:
WHERE mytestcolumn IS NULL
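A note on why the value renders blank, and a simpler filter (an addition, not from the answer above): -infinity is a legal, non-NULL date/timestamp value, which is why IS NULL misses it, and PostgreSQL's isfinite() catches both infinities at once:

```sql
-- -infinity sorts before every finite timestamp but is not NULL
SELECT '-infinity'::timestamp < '2000-01-01'::timestamp;  -- true

-- filter for infinite values directly (using the d column from the question)
SELECT * FROM st_daily2 WHERE NOT isfinite(d);
```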

Postgres error: ERROR: operator does not exist: double precision[] = numeric[]

I am trying to create a table in Postgres for storing raster data.
I have 2 different environments: dev and prod.
If I execute the DDL statement in dev, it creates the table without any problem.
But in prod I am getting a strange error. How do I solve this issue? I am not an admin and am currently facing difficulties with this.
DDL for the table
CREATE TABLE test_shema.test_table (
rid int4 NOT NULL,
rast raster NULL,
CONSTRAINT elevation_hi_pkey_test PRIMARY KEY (rid),
CONSTRAINT enforce_height_rast_test CHECK ((st_height(rast) = ANY (ARRAY[100, 92]))),
CONSTRAINT enforce_max_extent_rast_test CHECK ((st_envelope(rast) # '0103000020E61000000100000005000000A2221ECF131C64C07F55AF453F8C3240A2221ECF131C64C0FEE6DF13C4963640444672D5B14263C0FEE6DF13C4963640444672D5B14263C07F55AF453F8C3240A2221ECF131C64C07F55AF453F8C3240'::geometry)) NOT VALID,
CONSTRAINT enforce_nodata_values_rast_test CHECK ((_raster_constraint_nodata_values(rast) = '{32767.0000000000}'::numeric[])),
CONSTRAINT enforce_num_bands_rast_test CHECK ((st_numbands(rast) = 1)),
CONSTRAINT enforce_out_db_rast_test CHECK ((_raster_constraint_out_db(rast) = '{f}'::boolean[])),
CONSTRAINT enforce_pixel_types_rast_test CHECK ((_raster_constraint_pixel_types(rast) = '{16BSI}'::text[])),
CONSTRAINT enforce_same_alignment_rast_test CHECK (st_samealignment(rast, '01000000006A98816335DA4E3F6A98816335DA4EBFA2221ECF131C64C0FEE6DF13C496364000000000000000000000000000000000E610000001000100'::raster)),
CONSTRAINT enforce_scalex_rast_test CHECK ((round((st_scalex(rast))::numeric, 10) = round(0.000941539829921079, 10))),
CONSTRAINT enforce_scaley_rast_test CHECK ((round((st_scaley(rast))::numeric, 10) = round((-0.000941539829921079), 10))),
CONSTRAINT enforce_srid_rast_test CHECK ((st_srid(rast) = 4326)),
CONSTRAINT enforce_width_rast_test CHECK ((st_width(rast) = ANY (ARRAY[100, 15])))
);
Error that I am getting in prod environment
ERROR: operator does not exist: double precision[] = numeric[]
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
It must be the third CHECK constraint, enforce_nodata_values_rast_test: in prod, _raster_constraint_nodata_values() returns double precision[], which cannot be compared with the numeric[] literal. Compare with this instead:
'{32767.0000000000}'::double precision[]
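Applied to the DDL above, the fix is to cast the literal to the array type the function actually returns (the cause of the dev/prod difference, likely differing PostGIS versions, is an assumption):

```sql
-- replace the failing constraint with an explicitly cast literal
CONSTRAINT enforce_nodata_values_rast_test
    CHECK ((_raster_constraint_nodata_values(rast) = '{32767.0000000000}'::double precision[]))
```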