PostgreSQL Not Recognizing NULL Values

A table in my database (PostgreSQL 9.6) has a mixture of NULL and non-null values, which I need to COALESCE() as part of creating another attribute during an insert into a resulting dimension table. However, Postgres seems unable to recognize the NULL values as NULL.
SELECT DISTINCT name, description
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM')
AND description IS NOT NULL;
returns
 name        | description
-------------+-------------
 STUDIO      | NULL
 ONE BEDROOM | NULL
Whereas
SELECT DISTINCT name, description
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM')
AND description IS NULL;
returns
 name | description
------+-------------
(0 rows)
as such, something like
SELECT DISTINCT name, COALESCE(description, 'N/A')
FROM my_table
WHERE name IN('STUDIO', 'ONE BEDROOM');
will return
 name        | coalesce
-------------+----------
 STUDIO      | NULL
 ONE BEDROOM | NULL
instead of the expected
 name        | coalesce
-------------+----------
 STUDIO      | N/A
 ONE BEDROOM | N/A
The DDL for these attributes is fairly straightforward:
...
name text COLLATE pg_catalog."default",
description text COLLATE pg_catalog."default",
...
I've already checked whether the attribute was filled with the literal string 'NULL' rather than being an actual NULL value, and that's not the case. I've also tried quoting the attribute in question as "description", which hasn't made a difference. Casting to VARCHAR hasn't helped either (I thought it might be the fact that it's a TEXT attribute). If I nullify some values in the other text column (name), I'm able to coalesce with a test value, so that column behaves as expected, leading me to think it's not a data type issue. This table exists in multiple databases on multiple servers and exhibits the same behavior in all of them.
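A check along these lines (a sketch against the my_table columns above) is what tells apart a literal 'NULL' string, an empty string, and a true NULL:
SELECT name,
       description,
       description IS NULL  AS is_true_null,
       description = ''     AS is_empty_string,
       description = 'NULL' AS is_literal_null_text
FROM my_table
WHERE name IN ('STUDIO', 'ONE BEDROOM');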
I've tried inserting into a new table that has different attribute definitions:
...
floorplan_name character varying(128) COLLATE pg_catalog."default" NOT NULL DEFAULT 'Unknown'::character varying,
floorplan_desc character varying(256) COLLATE pg_catalog."default" NOT NULL DEFAULT 'Not Provided'::character varying,
...
resulting in
 name        | coalesce
-------------+----------
 STUDIO      | NULL
 ONE BEDROOM | NULL
So not only does the default value fail to populate (leaving NULL values in an attribute defined as NOT NULL), but the example SELECT statements above all behave in exactly the same way when run against the new table.
Does anyone have any idea what might be causing this?

It turns out that the source database is writing empty strings instead of proper NULLs. Adding NULLIF(description, '') before trying to COALESCE() solves the problem.
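For example (a sketch against the queries above), NULLIF() turns the empty strings into NULLs, which COALESCE() can then replace:
SELECT DISTINCT name, COALESCE(NULLIF(description, ''), 'N/A')
FROM my_table
WHERE name IN ('STUDIO', 'ONE BEDROOM');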
Thanks to everyone!

Related

Postgresql jsonb column to row extracted values

I have the below table
BEGIN;
CREATE TABLE IF NOT EXISTS "public".appevents (
    id uuid DEFAULT uuid_generate_v4() NOT NULL,
    "eventId" uuid NOT NULL,
    name text NOT NULL,
    "creationTime" timestamp without time zone NOT NULL,
    "creationTimeInMilliseconds" bigint NOT NULL,
    metadata jsonb NOT NULL,
    PRIMARY KEY(id)
);
COMMIT;
I would like to extract the metadata jsonb column as a row with a query, and tried the query below.
SELECT userId
FROM appevents,
     jsonb_to_record(appevents.metadata) AS x(userId text)
Unfortunately, every row returned for userid has the value NULL, which is not correct. The only odd thing I noticed is that it converts the camel case to lowercase, but that doesn't seem like the issue.
Here are the 2 records I currently have in the database where userId exists.
"The only odd thing I noticed is that it converts the camel case to lowercase, but that doesn't seem like the issue."
Actually, that is the culprit: unquoted column names are case-insensitive and folded to lowercase, so userId is normalised to userid, and the JSON doesn't contain a userid property. Quoting the identifier (… as x("userId" text)) should work.
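For example, a sketch with the identifier quoted so it keeps its camel case:
SELECT x."userId"
FROM appevents,
     jsonb_to_record(appevents.metadata) AS x("userId" text);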
However, there's a much simpler solution for accessing json object properties as text: the ->> operator. You can use
SELECT metadata->>'userId' AS userid FROM appevents

create table in Db2 for iSeries

I have a table structure like this in sql server:
CREATE TABLE [dbo].[taname](
    [ID] [char](7) NOT NULL,
    [SOURCE] [char](14) NOT NULL,
    [TARGET] [char](14) NOT NULL,
    [ID1] [char](100) NULL
)
this similar table I'm trying to create in DB2:
CREATE TABLE schema.taname(
    ID char(7) NOT NULL,
    SOURCE char(14) NOT NULL,
    TARGET char(14) NOT NULL,
    ID1 char(100) NULL --error is here
);
However, I'm getting an error on "ID1":
Keyword NULL not expected. Valid tokens: AS NO FOR NOT FILE WITH CCSID CHECK LOGGED UNIQUE COMPACT.
Cause: The keyword NULL was not expected here. A syntax error was detected at keyword NULL. The partial list of valid tokens is AS NO FOR NOT FILE WITH CCSID CHECK LOGGED UNIQUE COMPACT. This list assumes that the statement is correct up to the unexpected keyword. The error may be earlier in the statement but the syntax of the statement seems to be valid up to this point.
Recovery: Examine the SQL statement in the area of the specified keyword. A colon or SQL delimiter may be missing. SQL requires reserved words to be delimited when they are used as a name. Correct the SQL statement and try the request again.
Processing ended because the highlighted statement did not complete successfully
I would like to create a table similar to the SQL Server one and allow NULL in the ID1 field. How can I correct this?
NULL is the default... you can just leave it off...
CREATE TABLE schema.taname(
    ID char(7) NOT NULL,
    SOURCE char(14) NOT NULL,
    TARGET char(14) NOT NULL,
    ID1 char(100)
);
alternatively, specify the DEFAULT clause...
CREATE TABLE schema.taname(
    ID char(7) NOT NULL,
    SOURCE char(14) NOT NULL,
    TARGET char(14) NOT NULL,
    ID1 char(100) DEFAULT NULL
);

Upgrading enum type column to varchar in postgresql

I have an enum type column in my table. I have now decided to make it a varchar type and put a constraint on it.
Q1) Which is better practice: to use an enum or to put a constraint on the column?
Q2) How do I change my enum type column to varchar? (Just the opposite of this question.)
I tried using this:
ALTER TABLE tablename ALTER COLUMN columnname TYPE VARCHAR
But this gives me the error: No operator matches the given name and argument type(s). You might need to add explicit type casts.
This is the table definition:
CREATE TABLE tablename (
    id1 TEXT NOT NULL,
    id2 VARCHAR(100) NOT NULL,
    enum_field table_enum,
    modified_on TIMESTAMP NOT NULL DEFAULT NOW(),
    modified_by VARCHAR(100),
    PRIMARY KEY (id1, id2)
);
For future reference: I had a similar issue with altering an enum type and got the same error message as the one above. In my case, the issue was caused by a partial index that referenced the column using that enum type.
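If the column itself is the problem rather than an index or constraint that references it, spelling out the cast with a USING clause is usually enough. A sketch against the table definition above:
ALTER TABLE tablename
    ALTER COLUMN enum_field TYPE VARCHAR
    USING enum_field::text;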
As for best practice, it is best to define a separate table of possible values and make your column a foreign key to that table. This has the following benefits:
The new table holds not only the allowed values; it can also have additional columns with details such as a friendly name, the meaning of the value, or other information.
Changing the possible values becomes a matter of manipulating data rows (INSERT, UPDATE or DELETE), which is more accessible and manageable than changing constraints or enum values.
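A minimal sketch of that approach, building on the USING conversion above (the lookup-table name, constraint name, and sample value are only illustrative):
-- table of allowed values, with room for extra details
CREATE TABLE enum_values (
    value        varchar(50) PRIMARY KEY,
    display_name text,
    description  text
);

-- seed it with the values already present
INSERT INTO enum_values (value)
SELECT DISTINCT enum_field FROM tablename WHERE enum_field IS NOT NULL;

-- constrain the (now varchar) column to those values
ALTER TABLE tablename
    ADD CONSTRAINT tablename_enum_field_fkey
    FOREIGN KEY (enum_field) REFERENCES enum_values (value);

-- allowing a new value later is just an INSERT
INSERT INTO enum_values (value, display_name) VALUES ('archived', 'Archived');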

Can't get past Postgres not null constraint

I am trying to move data from one table to another:
zuri=# INSERT INTO lili_code (uuid, arrow, popularname, slug, effdate, codetext)
SELECT uuid, arrow, popularname, slug, effdate, codetext
FROM newarts;
ERROR: null value in column "shorttitle" violates not-null constraint
My Question:
shorttitle is NOT one of the columns I was trying to fill, so why does it matter if it is null or not?
Please note:
shorttitle is blank=True. I understand this is != null, but I thought it might be relevant.
There are a lot of additional columns in lili_code besides shorttitle that weren't in newarts.
At this point it looks to me like my only options are
a. inserting by hand (yuck!)
b. making a csv and importing that
c. adding all the missing columns from lili_code to newarts and making sure they are NOT NULL and have at least a default value in them.
Have I missed an option? What's the best solution here? I'm using django 1.9.1, python 2.7.11, Ubuntu 15.10, and Postgresql 9.4.
Thanks as always.
Since the column is non-null, you need to provide a value.
If there is no real data for it, you can calculate it on the fly in the SELECT part of your INSERT statement, it does not have to be an actual column in the source table.
If you are fine with an empty string, you can do
INSERT INTO lili_code
(uuid, arrow, popularname, slug, effdate, codetext, shorttitle)
SELECT uuid, arrow, popularname, slug, effdate, codetext, ''
FROM newarts;
Or maybe you want to use popularname for this column as well:
INSERT INTO lili_code
(uuid, arrow, popularname, slug, effdate, codetext, shorttitle)
SELECT uuid, arrow, popularname, slug, effdate, codetext, popularname
FROM newarts;
If you do not provide a value in the INSERT, the db will insert NULL or the default value.
If the column doesn't allow NULL, you will get an error. So either provide a dummy value or change the field to allow NULL.
You can define a default value for the column, e.g.:
alter table lili_code alter shorttitle set default '';
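With the default in place, the original statement (which omits shorttitle) then goes through, since the omitted column falls back to the default:
INSERT INTO lili_code (uuid, arrow, popularname, slug, effdate, codetext)
SELECT uuid, arrow, popularname, slug, effdate, codetext
FROM newarts;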

Getting error for auto increment fields when inserting records without specifying columns

We're in the process of converting over from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another, WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
    recordid SERIAL PRIMARY KEY NOT NULL,
    name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The default keyword tells the database to grab the next value.
Is there any way to utilize this keyword in the second example? Or some way to tell the database to ignore the auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an insert into ... select ....
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
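A less fragile option than hardcoding the name is pg_get_serial_sequence, which looks up the sequence owned by a serial column (a sketch, using the table and column from the example):
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;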
You're inserting the value into the first column, but you need it to go into the second position.
Therefore you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, you remove VALUES and put the subquery there.
insert into pk_test_table(name)
select first_name from person_test;
I hope it helps
I do it this way via a separate function, though I think I'm really getting around the issue at the table level, by having DEFAULT settings on a per-field basis.
create table public.pk_test_table
(
    -- assumes the pk_test_table_id_seq sequence has already been created
    recordid integer NOT NULL DEFAULT nextval('pk_test_table_id_seq'),
    name text,
    field3 integer NOT NULL DEFAULT 64,
    null_field_if_not_set integer,
    CONSTRAINT pk_test_table_pkey PRIMARY KEY ("recordid")
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice the function doesn't have to specify all of the fields, as long as the constraints allow it.