Constraint on columns based on single column not firing - postgresql

I created a constraint so that, in order to set the column completed to true, certain other columns must have a value.
But for some reason the constraint does not complain when I leave one of those columns blank while completed is set to true. I have also deliberately inserted NULL into one of the specified columns, and the constraint still did not fire.
Any ideas?
CREATE TABLE info (
id bigserial PRIMARY KEY,
created_at timestamptz default current_timestamp,
posted_by text REFERENCES users ON UPDATE CASCADE ON DELETE CASCADE,
title character varying(31),
lat numeric,
lng numeric,
contact_email text,
cost money,
description text,
active boolean DEFAULT false,
activated_date date,
deactivated_date date,
completed boolean DEFAULT false,
images jsonb,
CONSTRAINT columns_null_check CHECK (
(completed = true
AND posted_by != NULL
AND title != NULL
AND lat != NULL
AND lng != NULL
AND contact_email != NULL
AND cost != NULL
AND description != NULL
AND images != NULL) OR completed = false)
);

From Chapter 9, Functions and Operators, of the PostgreSQL documentation:
To check whether a value is or is not null, use the predicates:
expression IS NULL
expression IS NOT NULL
or the equivalent, but nonstandard, predicates:
expression ISNULL
expression NOTNULL
Therefore you cannot use value != NULL to check for null values; you must use value IS NULL or value IS NOT NULL. Any comparison with NULL evaluates to NULL rather than false, and a CHECK constraint only rejects rows whose expression evaluates to false, so rows where the check comes out NULL are allowed through. That is why your constraint never fires.
Similar predicates exist for boolean values:
Boolean values can also be tested using the predicates
boolean_expression IS TRUE
boolean_expression IS NOT TRUE
boolean_expression IS FALSE
boolean_expression IS NOT FALSE
boolean_expression IS UNKNOWN
boolean_expression IS NOT UNKNOWN
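Applying this to the original table, a sketch of the constraint rewritten with IS NOT NULL (logic unchanged, just the null tests fixed):

```sql
CONSTRAINT columns_null_check CHECK (
    NOT completed
    OR (posted_by     IS NOT NULL
        AND title         IS NOT NULL
        AND lat           IS NOT NULL
        AND lng           IS NOT NULL
        AND contact_email IS NOT NULL
        AND cost          IS NOT NULL
        AND description   IS NOT NULL
        AND images        IS NOT NULL)
)
```

With this version, inserting a row with completed = true and any of the listed columns NULL makes the expression false, so the constraint rejects the row.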

Related

POSTGRES: expected DEFAULT expression to have type varchar, but '0' has type int

I'm trying to do a 3rd party software installation and I'm getting the error:
SQL Error DB_ERROR_ERROR: XXUUU: expected DEFAULT expression to have type varchar, but '0' has type int ERROR: XXUUU: expected DEFAULT expression to have type varchar, but '0' has type int
The query that the installation is trying to execute is:
Executed query : create table llx_adherent_type_lang ( rowid SERIAL PRIMARY KEY, fk_type integer DEFAULT 0 NOT NULL, lang varchar(5) DEFAULT 0 NOT NULL, label varchar(255) NOT NULL, description text, email text, import_key varchar(14) DEFAULT NULL );
My guess is that the problem has to do with the lang varchar(5) DEFAULT 0 NOT NULL part of the query. Is it because the column expects a varchar(5) while the default value 0 is an integer? Is my way of thinking valid?
To fix it, do I just need to find where the query is defined and change the default value from 0 to "0"?
Change it from 0 to '0', or, as Bengi suggests, to '' if that makes more sense.
In PostgreSQL, "0" (double quotes) would be an identifier (usable as the name of a table, column, etc.), but '0' (single quotes) is a string literal (text, char, varchar, etc.).
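A sketch of the corrected CREATE TABLE, changing only the offending default:

```sql
CREATE TABLE llx_adherent_type_lang (
    rowid       SERIAL PRIMARY KEY,
    fk_type     integer DEFAULT 0 NOT NULL,
    lang        varchar(5) DEFAULT '0' NOT NULL,  -- '0' is a string literal; bare 0 is an integer
    label       varchar(255) NOT NULL,
    description text,
    email       text,
    import_key  varchar(14) DEFAULT NULL
);
```

Whether '0' or '' is the right default for lang depends on what the application expects; the type fix is the same either way.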

check constraint being printed without parenthesis

I have this DDL:
CREATE TABLE checkout_value (
id BIGSERIAL PRIMARY KEY,
start_value INTEGER,
end_value INTEGER
);
With an e-commerce application in mind, I want to save several ranges of possible values, to which future rules will be applied at checkout. Examples:
values until $20
values from $400
between $30 and $300
This way, I want to allow one of the two values to be null, but if both are non-null, start_value should be smaller than end_value.
I thought about triggers, but I'm trying to do this with a check constraint, like this:
CREATE TABLE checkout_value (
id BIGSERIAL PRIMARY KEY,
start_value INTEGER,
end_value INTEGER,
CHECK
(
(start_value IS NOT NULL AND end_value IS NULL)
OR
(start_value IS NULL AND end_value IS NOT NULL)
OR
(start_value IS NOT NULL AND end_value IS NOT NULL AND end_value > start_value)
)
);
This works! But when I run \d checkout_value, the constraint is printed without any parentheses:
Check constraints:
"checkout_value_check" CHECK (start_value IS NOT NULL AND end_value IS NULL OR start_value IS NULL AND end_value IS NOT NULL OR start_value IS NOT NULL AND end_value IS NOT NULL AND end_value > start_value)
which, without parentheses, looks as if it could express a different, unwanted rule. Is this a bug in how the table details are printed? Is there an easier way to document these rules more explicitly?
AND binds more tightly than OR, so both versions are equivalent. PostgreSQL doesn't store the original string, only the parsed expression, and deparses it without redundant parentheses.
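A quick way to convince yourself of the precedence is to compare an ungrouped expression with explicit grouping the other way, for example in psql:

```sql
SELECT false AND true OR true;    -- parsed as (false AND true) OR true, i.e. true
SELECT false AND (true OR true);  -- grouped the other way, i.e. false
```

Since the two differ, the ungrouped form must be applying AND first, which matches the deparsed constraint.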

Want Not Null and Default to False if Supplied value is NULL

I want to achieve this:
column bool is not null
when supplied value is null it will fill in with default value false
I thought this would do it:
create table public.testnotnull
(
xid integer not null, bool boolean not null default false
)
Testing it got an error:
insert into public.testnotnull values(2, null)
ERROR: null value in column "bool" violates not-null constraint
DETAIL: Failing row contains (2, null).
SQL state: 23502
This version will run, but it won't use the default. Please don't tell me to use a trigger.
CREATE TABLE public.testnull
(
xid integer NOT NULL, bool boolean DEFAULT false
)
You need to use the DEFAULT keyword instead of NULL in your INSERT statement.
From the docs:
DEFAULT: The corresponding column will be filled with its default value. An identity column will be filled with a new value generated by the associated sequence. For a generated column, specifying this is permitted but merely specifies the normal behavior of computing the column from its generation expression.
Also, always explicitly specify column names when using INSERT.
Speaking from decades of experience: unless you're using an ORM, it's nearly impossible to keep your CREATE TABLE definitions and INSERT statements in sync. Eventually you'll add a new column, or alter an existing one, somewhere the INSERT statements aren't expecting, and everything will break.
INSERT INTO table ( xid, bool ) VALUES ( 2, DEFAULT )
Please don't tell me to use trigger.
However, if you want to change the NULL into DEFAULT or FALSE in a statement like this: INSERT INTO table ( xid, bool ) VALUES ( 2, NULL ) then you have to use a TRIGGER. There's no real way around that.
(You could use a VIEW with a custom INSERT handler, of course, but that's the same thing as creating a trigger).
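If you do end up needing the trigger, a minimal sketch (assuming the public.testnotnull table from the question; the function and trigger names are illustrative):

```sql
CREATE OR REPLACE FUNCTION coerce_null_bool() RETURNS trigger AS $$
BEGIN
    IF NEW.bool IS NULL THEN
        NEW.bool := false;  -- replace a supplied NULL with the intended default
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_bool_default
BEFORE INSERT OR UPDATE ON public.testnotnull
FOR EACH ROW
EXECUTE FUNCTION coerce_null_bool();
```

With this in place, INSERT INTO public.testnotnull VALUES (2, NULL) stores false instead of violating the not-null constraint.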

No function matches the given name and argument types. You might need to add explicit type casts

I have created a table in Postgres as below.
CREATE TABLE rv_data_bkt_country_m
(
interval_start timestamp without time zone NOT NULL,
creative_id integer NOT NULL,
zone_id integer NOT NULL,
country character(3) NOT NULL DEFAULT ''::bpchar,
city character(50) NOT NULL DEFAULT ''::bpchar,
count integer NOT NULL DEFAULT 0,
CONSTRAINT rv_data_bkt_country_m_pkey PRIMARY KEY (interval_start, creative_id, zone_id, country, city)
)
When I insert using this query:
SELECT bucket_update_rv_data_bkt_country_m('2018-03-14 08:00:00',1,1,'IN','',1);
it shows the following error:
ERROR: function bucket_update_rv_data_bkt_country_m(unknown, integer, integer, unknown, unknown, integer) does not exist
LINE 1: SELECT bucket_update_rv_data_bkt_country_m('2018-03-14 08:00...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
How to solve this error?
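The hint points at explicit casts: an unquoted string literal has type unknown, and PostgreSQL must find a function whose declared parameter types match the call. The function definition isn't shown in the question, so the parameter types below are an assumption based on the table's columns; if the function is declared that way, casting the arguments lets PostgreSQL resolve it:

```sql
SELECT bucket_update_rv_data_bkt_country_m(
    '2018-03-14 08:00:00'::timestamp,  -- interval_start
    1,                                 -- creative_id
    1,                                 -- zone_id
    'IN'::character(3),                -- country
    ''::character(50),                 -- city
    1                                  -- count
);
```

If the error persists, check with \df bucket_update_rv_data_bkt_country_m whether the function exists at all and what signature it was actually created with.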

Block records from updating

I have the following tables
tbl_orders
CREATE TABLE tbl_orders (
id integer NOT NULL,
customer_name character varying NOT NULL,
is_archived boolean NOT NULL
);
tbl_order_items
CREATE TABLE tbl_order_items (
id integer NOT NULL,
product_name character varying NOT NULL,
quantity integer NOT NULL,
order_id int NOT NULL
);
In my application I can archive an order, which sets the boolean is_archived to true on that order record. An order can have multiple order items, which I want to prevent from being updated while the order's is_archived is true. Do I have to add an is_archived boolean at the order-item level as well?
Is it possible to prevent this at the database level?
You can create a BEFORE UPDATE trigger on tbl_order_items FOR EACH ROW that raises an error when the order referenced by NEW.order_id is archived.
That is the most elegant and normalized solution, but it requires that a trigger runs whenever a row in tbl_order_items is updated.
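A minimal sketch of such a trigger, assuming the two tables above (function and trigger names are illustrative):

```sql
CREATE OR REPLACE FUNCTION forbid_archived_item_update() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1
               FROM tbl_orders o
               WHERE o.id = NEW.order_id
                 AND o.is_archived) THEN
        RAISE EXCEPTION 'order % is archived; its items cannot be updated',
            NEW.order_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER block_archived_item_updates
BEFORE UPDATE ON tbl_order_items
FOR EACH ROW
EXECUTE FUNCTION forbid_archived_item_update();
```

Depending on your rules, you may also want the same check on DELETE (BEFORE DELETE, testing OLD.order_id), and a similar trigger on tbl_orders if order rows themselves should become read-only once archived.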