Can't get past Postgres not null constraint - postgresql

I am trying to move data from one table to another:
zuri=# INSERT INTO lili_code (uuid, arrow, popularname, slug, effdate, codetext)
SELECT uuid, arrow, popularname, slug, effdate, codetext
FROM newarts;
ERROR: null value in column "shorttitle" violates not-null constraint
My Question:
shorttitle is NOT one of the columns I was trying to fill, so why does it matter if it is null or not?
Please note:
shorttitle has blank=True in the Django model. I understand blank is not the same as null, but I thought it might be relevant.
There are a lot of additional columns in lili_code besides shorttitle that weren't in newarts.
At this point it looks to me like my only options are
a. inserting by hand (yuck!)
b. making a csv and importing that
c. adding all the missing columns from lili_code to newarts and making sure they are NOT NULL and have at least a default value in them.
Have I missed an option? What's the best solution here? I'm using django 1.9.1, python 2.7.11, Ubuntu 15.10, and Postgresql 9.4.
Thanks as always.

Since the column is non-null, you need to provide a value.
If there is no real data for it, you can calculate it on the fly in the SELECT part of your INSERT statement; it does not have to be an actual column in the source table.
If you are fine with an empty string, you can do
INSERT INTO lili_code
(uuid, arrow, popularname, slug, effdate, codetext, shorttitle)
SELECT uuid, arrow, popularname, slug, effdate, codetext, ''
FROM newarts;
Or maybe you want to use popularname for this column as well:
INSERT INTO lili_code
(uuid, arrow, popularname, slug, effdate, codetext, shorttitle)
SELECT uuid, arrow, popularname, slug, effdate, codetext, popularname
FROM newarts;

If you do not define a value in the INSERT, the database will insert NULL or the column's default value.
If the column doesn't allow NULL, you will get an error. So either provide a dummy value or change the column to allow NULL, as shown below.
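If NULLs are acceptable for shorttitle, a minimal sketch of the second option (standard PostgreSQL syntax):
ALTER TABLE lili_code ALTER COLUMN shorttitle DROP NOT NULL;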

You can define a default value for the column, e.g.:
alter table lili_code alter shorttitle set default '';
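With that default in place, the original column-omitting INSERT goes through unchanged, assuming shorttitle is the only NOT NULL column in lili_code without a default:
-- shorttitle is omitted from the column list, so it receives the default ''
INSERT INTO lili_code (uuid, arrow, popularname, slug, effdate, codetext)
SELECT uuid, arrow, popularname, slug, effdate, codetext
FROM newarts;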

Related

PostgreSQL Not Recognizing NULL Values

A table in my database (PostgreSQL 9.6) has a mixture of NULL and not null values, which I need to COALESCE() as a part of the creation of another attribute during insert into a resulting dimension table. However, Postgres seems unable to recognize the NULL values as NULL.
SELECT DISTINCT name, description
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM')
AND description IS NOT NULL;
returns
    name     | description
-------------+-------------
 STUDIO      | NULL
 ONE BEDROOM | NULL
Whereas
SELECT DISTINCT name, description
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM')
AND description IS NULL;
returns
 name | description
------+-------------
(0 rows)
as such, something like
SELECT DISTINCT name, COALESCE(description, 'N/A')
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM');
will return
    name     | coalesce
-------------+----------
 STUDIO      | NULL
 ONE BEDROOM | NULL
instead of the expected
    name     | coalesce
-------------+----------
 STUDIO      | N/A
 ONE BEDROOM | N/A
The DDL for these attributes is fairly straightforward:
...
name text COLLATE pg_catalog."default",
description text COLLATE pg_catalog."default",
...
I've already checked whether the attribute was filled with 'NULL' rather than being an actual NULL value, and that's not the case. I've also tried quoting the attribute in question as "description" and that hasn't made a difference. Casting to VARCHAR hasn't helped (I thought it might be the fact that it's a TEXT attribute). If I nullify some values in the other text column (name) I'm able to coalesce with a test value, so that one is seemingly behaving as expected leading me to think it's not a data type issue. This table exists in multiple databases on multiple servers and exhibits the same behavior in all of them.
I've tried inserting into a new table that has different attribute definitions:
...
floorplan_name character varying(128) COLLATE pg_catalog."default" NOT NULL DEFAULT 'Unknown'::character varying,
floorplan_desc character varying(256) COLLATE pg_catalog."default" NOT NULL DEFAULT 'Not Provided'::character varying,
...
resulting in
    name     | coalesce
-------------+----------
 STUDIO      | NULL
 ONE BEDROOM | NULL
so not only does the default value fail to populate (leaving the values NULL in an attribute that is defined as NOT NULL), but the example SELECT statements above all behave in exactly the same way when run against the new table.
Does anyone have any idea what might be causing this?
It turns out that the source database is writing empty strings instead of proper NULLs. Adding NULLIF(description, '') before trying to COALESCE() solves the problem.
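In other words, a minimal sketch of the fix applied to the query above:
-- NULLIF turns '' into a real NULL, which COALESCE then replaces
SELECT DISTINCT name, COALESCE(NULLIF(description, ''), 'N/A')
FROM my_table
WHERE name IN ('STUDIO', 'ONE BEDROOM');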
Thanks to everyone!

Add a not null constraint to a table in which a column has a certain value in PostgreSQL

I'm trying to add a NOT NULL constraint in PostgreSQL, but can't find the right syntax.
Here is, essentially, what I'm trying to do:
ALTER TABLE olimpic.tb_athlete
ADD CONSTRAINT soloESP CHECK(country = 'ESP' AND substitute_id IS NOT NULL)
and the table I'm trying to modify:
CREATE TABLE olimpic.tb_athlete
(
athlete_id CHAR(7) PRIMARY KEY,
name VARCHAR(50) NOT NULL,
country CHAR(3) NOT NULL,
substitute_id CHAR(7),
FOREIGN KEY (substitute_id) REFERENCES olimpic.tb_athlete(athlete_id)
);
I have already deleted rows, or set default values in them, where country is 'ESP' and substitute_id is NULL, with this code being an example:
DELETE FROM olimpic.tb_athlete
where substitute_id is NULL and country = 'ESP';
but I'm still getting the following error:
ERROR: check constraint 'soloESP' on table tb_athlete is violated by some row
SQL state: 23514
Any help you could give me as to how to proceed would be greatly appreciated.
Do you realize that the constraint you're trying to add does not allow any rows with a country other than 'ESP'? Because you use the AND operator, you are requiring both conditions simultaneously: every row must have country = 'ESP' AND a non-null substitute_id.
I believe what you wanted is
ALTER TABLE olimpic.tb_athlete
ADD CONSTRAINT soloESP CHECK(country != 'ESP' OR substitute_id IS NOT NULL);
This constraint will ensure that if country = 'ESP' then substitute_id must be non-null. For other countries both null and non-null values of substitute_id are valid.
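A quick sanity check with hypothetical rows (the ids and names are made up):
-- rejected: country is 'ESP' but substitute_id is null
INSERT INTO olimpic.tb_athlete (athlete_id, name, country, substitute_id)
VALUES ('A000001', 'Maria', 'ESP', NULL);
-- accepted: country is not 'ESP', so a null substitute_id is fine
INSERT INTO olimpic.tb_athlete (athlete_id, name, country, substitute_id)
VALUES ('A000002', 'John', 'USA', NULL);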
But the above is only a guess, because you provided neither your database's schema, nor the meanings of the fields, nor the error text in English, nor the data stored in your database, so we cannot analyze what is really happening in your case. Please consider editing the question to add these details.

db2 change column from null to not null

I have a LOCATIONID column in table XYZ in Db2, and now I want to change it to NOT NULL using the command below:
ALTER TABLE xyz ALTER COLUMN LOCATIONID SET NOT NULL
But Db2 asks me to give a default value. How do I change the command for that?
As you are making a previously optional column into a mandatory column, if there is already at least one row in the table that contains a NULL in LOCATIONID then Db2 may prevent the alteration (SQL0407N).
If the table has no rows, or if no rows have null in LOCATIONID column, then Db2-LUW will allow the alteration. You may need to REORG the table before/after the alteration in some cases.
If the table already has rows with a null LOCATIONID, you must either set those rows' LOCATIONID to some non-null value before doing the alteration, or recreate the table.
When recreating the table, consider specifying a default value via 'NOT NULL WITH DEFAULT ...' if that makes sense for the data concerned.
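A sketch of the backfill-then-alter path on Db2-LUW (the placeholder value 0 is an assumption; pick a value that makes sense for your data):
-- give the existing NULL rows a real value first
UPDATE xyz SET locationid = 0 WHERE locationid IS NULL;
-- now the column can be made mandatory
ALTER TABLE xyz ALTER COLUMN locationid SET NOT NULL;
-- the ALTER can leave the table in reorg-pending state
CALL SYSPROC.ADMIN_CMD('REORG TABLE xyz');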

Upgrading enum type column to varchar in postgresql

I have an enum type column in my table. I have now decided to make it a varchar type and put some constraint on it.
Q1) Which is the better practice: to have an enum, or to put a constraint on the column?
Q2) How do I change my enum type column to varchar? (Just the opposite of this question.)
I tried using this:
ALTER TABLE tablename ALTER COLUMN columnname TYPE VARCHAR
But this gives me the error : No operator matches the given name and argument type(s). You might need to add explicit type casts.
This is the table definition:
CREATE TABLE tablename (
id1 TEXT NOT NULL,
id2 VARCHAR(100) NOT NULL,
enum_field table_enum,
modified_on TIMESTAMP NOT NULL DEFAULT NOW(),
modified_by VARCHAR(100),
PRIMARY key (id1, id2)
);
For future reference: I had a similar issue when altering an enum type, and I got the same error message as the one above. In my case, the issue was caused by a partial index that referenced the column using that enum type.
As for best practice, it is best to define a separate table of possible values and make your column a foreign key to that table (see the sketch after this list). This has the following benefits:
The new table can hold not only the specific key for each type, but also additional columns with details such as a friendly name or the meaning of the type.
Changing the possible values is a matter of manipulating data rows (INSERT, UPDATE or DELETE), which is more accessible and manageable than changing constraints or enum values.
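For Q2 itself, a sketch under stated assumptions: PostgreSQL will not convert an enum column to varchar implicitly, so the ALTER needs a USING clause (and any partial index referencing the column must be dropped first, per the note above). The VARCHAR(50) length and the enum_field_values table name below are assumptions:
-- convert the enum column via an explicit cast to text
ALTER TABLE tablename ALTER COLUMN enum_field TYPE VARCHAR(50)
USING enum_field::text;
-- lookup-table variant: one row per allowed value...
CREATE TABLE enum_field_values (
value VARCHAR(50) PRIMARY KEY,
friendly_name TEXT
);
-- ...and a foreign key in place of the enum type; populate
-- enum_field_values with the existing values before adding it
ALTER TABLE tablename
ADD FOREIGN KEY (enum_field) REFERENCES enum_field_values (value);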

Getting error for auto increment fields when inserting records without specifying columns

We're in the process of converting over from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another, WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
recordid SERIAL PRIMARY KEY NOT NULL,
name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
The default keyword will tell the database to grab the next value.
Is there any way to utilize this keyword in the second example? Or some way to tell the database to ignore auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an INSERT INTO ... SELECT ..., though.
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
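As an alternative to querying information_schema, PostgreSQL's built-in pg_get_serial_sequence() resolves the sequence behind a serial column from the table and column names:
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;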
You're inserting the value into the first column (recordid), but you want it to go into the second one (name).
For that you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, replace VALUES with the subquery:
insert into pk_test_table(name)
select first_name from person_test;
I hope it helps
I do it this way via a separate function, though I think I'm really getting around the issue via the table-level DEFAULT settings on a per-field basis.
-- the sequence must exist before the table can reference it
create sequence pk_test_table_recordid_seq;
create table public.pk_test_table
(
recordid integer NOT NULL DEFAULT nextval('pk_test_table_recordid_seq'),
name text,
field3 integer NOT NULL DEFAULT 64,
null_field_if_not_set integer,
CONSTRAINT pk_test_table_pkey PRIMARY KEY (recordid)
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function with SELECT func_pk_test_table();
Notice it doesn't have to specify all the fields, as long as the constraints allow it.