This is a scalar query that originally lived inside a function. The result datatype varies depending on which field I'm interested in.
In this example I expect a scalar result of datatype NVARCHAR, 'Andy', but get an error instead:
Msg 245, Level 16, State 1, Line xx Conversion failed when converting
the nvarchar value 'Andy' to data type int.
Is there any way to get around this?
CREATE TABLE ATable (
Idf INT PRIMARY KEY,
Col1 INT,
Col2 NVARCHAR(255)
)
GO
INSERT INTO ATable (Idf, Col1, Col2) VALUES (1, 75, 'Andy')
INSERT INTO ATable (Idf, Col1, Col2) VALUES (2, 39, 'Pete')
GO
DECLARE @Idf INT = 1
DECLARE @Col NVARCHAR(15) = 'Col2'
SELECT
CASE
WHEN @Col='Col1' THEN Col1
WHEN @Col='Col2' THEN Col2
ELSE NULL
END AS MyScalarResultOfDynamicDatatype
FROM ATable
WHERE Idf=@Idf
If it is really, really necessary... you might use the sql_variant data type.
CASE
WHEN @Col='Col1' THEN CAST(Col1 AS SQL_VARIANT)
WHEN @Col='Col2' THEN CAST(Col2 AS SQL_VARIANT)
END AS MyScalarResultOfDynamicDatatype
(Note that you do not specifically need the ELSE NULL in your CASE expression. If no WHEN matches, the result is NULL by default.)
Edit:
Based on a question in comment regarding the drawbacks, I would like to refer to the article Problems Caused by the Use of the SQL_VARIANT Datatype, written by somebody under the alias Phil Factor (Redgate Simple Talk, Redgate Blog, GitHub).
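If you do go the sql_variant route, SQL_VARIANT_PROPERTY can at least tell the caller which base type came back. A minimal sketch against the ATable example above (the derived-table alias t and column v are mine):
DECLARE @Col NVARCHAR(15) = 'Col2'
SELECT t.v AS MyScalarResultOfDynamicDatatype,
       SQL_VARIANT_PROPERTY(t.v, 'BaseType') AS BaseType  -- 'nvarchar' or 'int'
FROM (
    SELECT CASE
               WHEN @Col='Col1' THEN CAST(Col1 AS SQL_VARIANT)
               WHEN @Col='Col2' THEN CAST(Col2 AS SQL_VARIANT)
           END AS v
    FROM ATable
    WHERE Idf = 1
) AS t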
Right now I have a generic notification function that is triggered after create on a couple of tables in my database (there's a node process on the other end listening for notifications). Here's what my create trigger's function looks like:
CREATE OR REPLACE FUNCTION notify_create() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM pg_notify('update_watchers',
json_build_object(
'eventType', 'new',
'type', TG_TABLE_NAME,
'payload', row_to_json(NEW)
)::text
);
RETURN NEW;
END;
$$;
The problem is that if NEW is too big, this overflows pg_notify's 8000-byte payload limit in a few rare corner cases (I rarely have a new row in the table that is that big). In the notify_update function, I just report which columns have changed by listing the column names. That would work here too, but what I would rather do is have row_to_json pull only the entries from NEW that are of type integer.
That is because sometimes what I'm notifying is "hey there's a new entry in an entity table". The new entry could be from a couple of different tables (documents, profiles, etc). In that case, I really only need the id, since anyone who is interested in the new value ends up fetching it later anyway.
Sometimes I'm notifying "hey, there's a new entry in a join table", in which case I don't have an id field but instead have something like documents_id and profiles_id.
I could just write a bunch of different notify_create functions, one for each scenario. I'd prefer to have a single one that did something like
row_to_json(NEW.filter(t => typeof t === 'number'))
to mix plpgsql and JavaScript notation, but I'm sure you get the point: only include those fields of NEW that are number-typed.
Is this possible, or should I just write a bunch of different notifiers?
You can easily eliminate json entries with a type other than number, e.g.:
with my_table(int1, text1, int2, date1, float1) as (
values
(1, 'text1', 100, '2017-01-01'::date, 123.54)
)
select jsonb_object_agg(key, value) filter (where jsonb_typeof(value) = 'number')
from my_table,
jsonb_each(to_jsonb(my_table))
jsonb_object_agg
--------------------------------------------
{"int1": 1, "int2": 100, "float1": 123.54}
(1 row)
The function below leaves only integers:
create or replace function leave_integers(jdata jsonb)
returns jsonb language sql as $$
select jsonb_object_agg(key, value)
filter (
where jsonb_typeof(value) = 'number'
and value::text not like '%.%')
from jsonb_each(jdata)
$$;
with my_table(int1, text1, int2, date1, float1) as (
values
(1, 'text1', 100, '2017-01-01'::date, 123.54)
)
select leave_integers(to_jsonb(my_table))
from my_table;
leave_integers
--------------------------
{"int1": 1, "int2": 100}
(1 row)
Alternative (better) solution
This function checks Postgres types directly and returns values strictly from integer columns.
create or replace function integer_columns_to_jsonb(anyelement)
returns jsonb language sql as $$
select jsonb_object_agg(key, value)
from jsonb_each(to_jsonb($1))
where key in (
select attname
from pg_type t
join pg_attribute on typrelid = attrelid
where t.oid = pg_typeof($1)
and atttypid = 'int'::regtype)
$$;
The example below shows that the function correctly handles some corner cases that leave_integers() gets wrong:
create table my_table (int1 int, int2 int, float1 float, text1 text);
insert into my_table values (1, 2, 3, '4');
select integer_columns_to_jsonb(t), leave_integers(to_jsonb(t))
from my_table t;
integer_columns_to_jsonb | leave_integers
--------------------------+-------------------------------------
{"int1": 1, "int2": 2} | {"int1": 1, "int2": 2, "float1": 3}
(1 row)
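To tie this back to the original trigger, the notify function can simply run NEW through it. A sketch, assuming the integer_columns_to_jsonb() definition above:
CREATE OR REPLACE FUNCTION notify_create() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
PERFORM pg_notify('update_watchers',
json_build_object(
'eventType', 'new',
'type', TG_TABLE_NAME,
'payload', integer_columns_to_jsonb(NEW)  -- only integer columns, keeps the payload small
)::text
);
RETURN NEW;
END;
$$;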
What are good ways to add a constraint to PostgreSQL to check that exactly one column (from a set of columns) contains a non-null value?
Update: It is likely that I want to use a check expression as detailed in Create Table and Alter Table.
Update: I'm looking through the available functions.
Update: Just for background, here is the Rails validation logic I'm currently using:
validate :multi_column_validation
def multi_column_validation
n = 0
n += 1 if column_1
n += 1 if column_2
n += 1 if column_3
unless 1 == n
errors.add(:base, "Exactly one column from " +
"column_1, column_2, column_3 must be present")
end
end
To be clear, I'm looking for PostgreSQL SQL, not Ruby, here. I just wanted to show the logic I'm using since it is more compact than enumerating all the "truth table" possibilities.
Since PostgreSQL 9.6 you have the num_nonnulls and num_nulls comparison functions that accept any number of VARIADIC arguments.
For example, this would make sure exactly one of the three columns is not null.
ALTER TABLE your_table
ADD CONSTRAINT chk_only_one_is_not_null CHECK (num_nonnulls(col1, col2, col3) = 1);
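As a quick illustration of how the two functions count their arguments (arbitrary values, not from the original answer):
SELECT num_nonnulls(1, NULL, 3) AS non_nulls,  -- 2
       num_nulls(1, NULL, 3)    AS nulls;      -- 1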
History & References
The PostgreSQL 9.6.0 Release Notes from 2016-09-29 say:
Add variadic functions num_nulls() and num_nonnulls() that count the number of their arguments that are null or non-null (Marko Tiikkaja)
On 2015-08-12, Marko Tiikkaja proposed this feature on the pgsql-hacker mailing list:
I'd like to suggest $SUBJECT for inclusion in Postgres 9.6. I'm sure everyone would've found it useful at some point in their lives, and the fact that it can't be properly implemented in any language other than C I think speaks for the fact that we as a project should provide it.
A quick and dirty proof of concept (patch attached):
=# select count_nulls(null::int, null::text, 17, 'bar');
count_nulls
-------------
2
(1 row)
Its natural habitat would be CHECK constraints, e.g:
CHECK (count_nulls(a,b,c) IN (0, 3))
Avid code historians can follow more discussion from that point. :)
2021-09-17 update: As of today, gerardnll provides a better answer (the best IMO):
"Since PostgreSQL 9.6 you have the num_nonnulls and num_nulls comparison functions that accept any number of VARIADIC arguments."
In order to help people find the cleanest solution, I recommend you upvote gerardnll's answer.
(FYI, I'm the same person who asked the original question.)
Here is my original answer from 2013
Here is an elegant two column solution according to the "constraint -- one or the other column not null" PostgreSQL message board:
ALTER TABLE my_table ADD CONSTRAINT my_constraint CHECK (
(column_1 IS NULL) != (column_2 IS NULL));
(But the above approach is not generalizable to three or more columns.)
If you have three or more columns, you can use the truth table approach illustrated by a_horse_with_no_name. However, I consider the following to be easier to maintain because you don't have to type out the logical combinations:
ALTER TABLE my_table
ADD CONSTRAINT my_constraint CHECK (
(CASE WHEN column_1 IS NULL THEN 0 ELSE 1 END) +
(CASE WHEN column_2 IS NULL THEN 0 ELSE 1 END) +
(CASE WHEN column_3 IS NULL THEN 0 ELSE 1 END) = 1);
To compact this, it would be useful to create a custom function so that the CASE WHEN column_k IS NULL THEN 0 ELSE 1 END boilerplate could be removed, leaving something like:
(non_null_count(column_1) +
non_null_count(column_2) +
non_null_count(column_3)) = 1
That may be as compact as PostgreSQL will allow (?). That said, I'd prefer to get to this kind of syntax if possible:
non_null_count(column_1, column_2, column_3) = 1
I think the cleanest and most generic solution is to create a function that counts the non-null values among its arguments. For that you can use the pseudo-type anyarray and an SQL function like this:
CREATE FUNCTION count_not_nulls(p_array anyarray)
RETURNS BIGINT AS
$$
SELECT count(x) FROM unnest($1) AS x
$$ LANGUAGE SQL IMMUTABLE;
With that function, you can create your CHECK CONSTRAINT as:
ALTER TABLE your_table
ADD CONSTRAINT chk_only_one_is_not_null CHECK(count_not_nulls(array[col1, col2, col3]) = 1);
This will only work if the columns are of the same data type. If that's not the case, you can cast them, to text for instance (since you only care about the null case):
ALTER TABLE your_table
ADD CONSTRAINT chk_only_one_is_not_null CHECK(count_not_nulls(array[col1::text, col2::text, col3::text]) = 1);
As @muistooshort pointed out, you can create the function with variadic arguments, which makes it cleaner to call:
CREATE FUNCTION count_not_nulls(variadic p_array anyarray)
RETURNS BIGINT AS
$$
SELECT count(x) FROM unnest($1) AS x
$$ LANGUAGE SQL IMMUTABLE;
ALTER TABLE your_table
ADD CONSTRAINT chk_only_one_is_not_null CHECK(count_not_nulls(col1, col2, col3) = 1);
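With the VARIADIC version the arguments are passed directly; the array form still works if you spell out the VARIADIC keyword. A quick sketch with arbitrary values:
SELECT count_not_nulls(1, NULL, 3);                  -- 2
SELECT count_not_nulls(VARIADIC ARRAY[1, NULL, 3]);  -- same call, array form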
As hinted by mu is too short:
alter table t
add constraint only_one_null check (
(col1 is not null)::integer + (col2 is not null)::integer = 1
)
A bit clumsy, but should do the trick:
create table foo
(
col1 integer,
col2 integer,
col3 integer,
constraint one_is_not_null check
( (col1 is not null and col2 is null and col3 is null)
or (col1 is null and col2 is not null and col3 is null)
or (col1 is null and col2 is null and col3 is not null)
)
)
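To see the constraint in action (hypothetical inserts against the foo table above):
INSERT INTO foo (col1) VALUES (1);           -- accepted: exactly one column is non-null
INSERT INTO foo (col1, col2) VALUES (1, 2);  -- rejected: violates one_is_not_null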
Here's a solution using the built-in array functions:
ALTER TABLE your_table
ADD CONSTRAINT chk_only_one_is_not_null CHECK (array_length(array_remove(ARRAY[col1, col2, col3], NULL), 1) = 1);
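The idea is that array_remove() strips the NULLs, so array_length() counts only the non-null entries, e.g.:
SELECT array_length(array_remove(ARRAY[1, NULL, NULL], NULL), 1);  -- 1
One caveat: if all three columns are NULL, array_length() returns NULL rather than 0, and a CHECK that evaluates to NULL passes, so an all-NULL row would slip through; wrapping the expression in coalesce(..., 0) closes that hole.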
DB2 V9 on z/OS
Background: I have a 4 column table defined as (col1 int, col2 smallint, col3 int, col4 date)
Row 1 has values of (1,123,456,2012-08-23)
When I execute the following:
SELECT CAST(col2 AS VARCHAR(5)) CONCAT CAST(col3 AS VARCHAR(5))
FROM db.T1
WHERE col1 = 1;
Value 123456 is returned, which is exactly what I want.
When I execute the following:
UPDATE db.table2
SET col3 = SELECT CAST(col2 AS VARCHAR(5)) CONCAT CAST(col3 AS VARCHAR(5))
FROM db.T1
WHERE col1 = 1;
Error is:
SQL0408N A value is not compatible with the data type of its assignment target. Target name is "col3". SQLSTATE=42821
I understand the error is due to attempting to insert a varchar into an integer. What else can I do? I've tried various CAST statements but cannot get a value to insert into col3. I need the values to appear joined, as shown above.
Any help would be appreciated.
Wrapping all of the casts in a final CAST( ... AS INTEGER), and parenthesizing the scalar subquery, should work:
UPDATE db.table2
SET col3 = (SELECT CAST(
CAST(col2 AS VARCHAR(5)) CONCAT CAST(col3 AS VARCHAR(5))
AS INTEGER)
FROM db.T1
WHERE col1 = 1);
Depending on the MAXIMUM value of your small integers,
you can convert them to base 9,
then concatenate them with '9' as a separator (base-9 values never contain the digit 9, so the split is unambiguous).
CONCAT( IFNULL(CONV(sint1, 10, 9),''),
'9',
IFNULL(CONV(tint2, 10, 9),'')
) AS combined
Another option, also depending on the MAXIMUM value of your small integers,
is to LEFT-PAD one of them with zeros.
CONCAT(
sint1,
LPAD(tint2,3,0)
) AS combined
See full code and explanation: https://mdb-blog.blogspot.com/2021/10/mysql-joincombine-2-smallinttinyint.html
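For a rough illustration of both tricks with hypothetical values (MySQL syntax, as in the linked post):
SELECT CONCAT(IFNULL(CONV(75, 10, 9), ''), '9', IFNULL(CONV(39, 10, 9), '')) AS combined;  -- '83943': base-9 digits never include 9
SELECT CONCAT(75, LPAD(39, 3, 0)) AS combined;  -- '75039': the last 3 digits are always the second value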
I have a table like this:
create table table1 (field1 int,
field2 int default 5557,
field3 int default 1337,
field4 int default 1337)
I want to insert a row which has the default values for field2 and field4.
I've tried insert into table1 values (5,null,10,null), but it doesn't work, and ISNULL(field2,default) doesn't work either.
How can I tell the database to use the default value for the column when I insert a row?
Best practice is to list your columns so you're independent of table changes (a new column, a different column order, etc.):
insert into table1 (field1, field3) values (5,10)
However, if you don't want to do this, use the DEFAULT keyword
insert into table1 values (5, DEFAULT, 10, DEFAULT)
Just don't include the columns that you want to use the default value for in your insert statement. For instance:
INSERT INTO table1 (field1, field3) VALUES (5, 10);
...will take the default values for field2 and field4, and assign 5 to field1 and 10 to field3.
This works if all the columns have associated defaults and one does not want to specify the column names:
insert into your_table
default values
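Applied to the table from the question (note that field1 has no explicit default, so it ends up NULL):
insert into table1 default values
-- resulting row: field1 = NULL, field2 = 5557, field3 = 1337, field4 = 1337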
Try it like this
INSERT INTO table1 (field1, field3) VALUES (5,10)
Then field2 and field4 should have default values.
I had a case where I had a very simple table, and I basically just wanted an extra row with just the default values. Not sure if there is a prettier way of doing it, but here's one way:
This sets every column in the new row to its default value:
INSERT INTO your_table VALUES ()
Note: This is extra useful for MySQL where INSERT INTO your_table DEFAULT VALUES does not work.
If your columns should not contain NULL values, you need to define them as NOT NULL as well; otherwise the passed-in NULL will be used instead of the default, and it will not produce an error.
If you don't pass in any value to these fields (which requires you to specify the fields that you do want to use), the defaults will be used:
INSERT INTO
table1 (field1, field3)
VALUES (5,10)
You can write it this way:
GO
ALTER TABLE Table_name ADD
column_name decimal(18, 2) NOT NULL CONSTRAINT Constant_name DEFAULT 0
GO
ALTER TABLE Table_name SET (LOCK_ESCALATION = TABLE)
GO
COMMIT
To insert the default values, you should omit the corresponding columns, something like this:
Insert into Table (Field2) values(5)
All other fields will be NULL or their default values, where defined.
CREATE TABLE #dum (id int identity(1,1) primary key, def int NOT NULL default(5), name varchar(25))
-- this works
INSERT #dum (def, name) VALUES (DEFAULT, 'jeff')
SELECT * FROM #dum;
DECLARE @some int
-- this *doesn't* work and I think it should
INSERT #dum (def, name)
VALUES (ISNULL(@some, DEFAULT), 'george')
SELECT * FROM #dum;
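A common workaround for that last case is to branch instead of trying to pass DEFAULT through ISNULL; a sketch against the same #dum table:
IF @some IS NULL
    INSERT #dum (name) VALUES ('george')  -- def falls back to its default of 5
ELSE
    INSERT #dum (def, name) VALUES (@some, 'george')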
CREATE PROC SP_EMPLOYEE -- by using a TYPE parameter and CASE in a stored procedure
(@TYPE INT)
AS
BEGIN
IF @TYPE=1
BEGIN
SELECT DESIGID,DESIGNAME FROM GP_DESIGNATION
END
IF @TYPE=2
BEGIN
SELECT ID,NAME,DESIGNAME,
case D.ISACTIVE when 'Y' then 'ISACTIVE' when 'N' then 'INACTIVE' else 'not' end as ACTIVE
FROM GP_EMPLOYEEDETAILS ED
JOIN GP_DESIGNATION D ON ED.DESIGNATION=D.DESIGID
END
END
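Calling it is then just a matter of passing the branch selector, e.g.:
EXEC SP_EMPLOYEE @TYPE = 1  -- designation list
EXEC SP_EMPLOYEE @TYPE = 2  -- employee details with the ISACTIVE flag decoded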
I have a stored procedure in an old SQL 2000 database that takes a comment column, stored as a varchar, and returns it as a money value. When this table structure was set up, it was assumed this would be the only data going into the field. The current procedure looks like this:
SELECT CAST(dbo.member_category_assign.coment AS money)
FROM dbo.member_category_assign
WHERE member_id = @intMemberId
AND
dbo.member_category_assign.eff_date <= @dtmEndDate
AND
(
dbo.member_category_assign.term_date >= @dtmBeginDate
OR
dbo.member_category_assign.term_date Is Null
)
However, data that is not parsable as a money value is now being inserted into this column, causing the procedure to fail. I am unable to remove the "bad" data (since this is a third-party product), but I need to update the stored procedure to test for a money-parsable entry and return that.
How can I update this procedure so that it only returns values that are parsable as money? Do I create a temporary table and iterate through every item, or is there a more clever way to do this? I'm stuck with legacy SQL 2000 (version 6.0), so using any of the newer functions is unfortunately not an option.
Checking with ISNUMERIC may help: you can simply return a zero value, or return 'N/a' or some other string value if you prefer.
I created the sample below with the columns from your query.
The first query just returns all rows.
The second query returns a MONEY value.
The third one returns a string value, with 'N/a' in place of the non-numeric value.
set nocount on
drop table #MoneyTest
create table #MoneyTest
(
MoneyTestId Int Identity (1, 1),
coment varchar (100),
member_id int,
eff_date datetime,
term_date datetime
)
-- single-row inserts keep this compatible with SQL 2000 (multi-row VALUES needs 2008+)
insert into #MoneyTest (coment, member_id, eff_date, term_date) values (104, 1, '1/1/2008', '1/1/2009')
insert into #MoneyTest (coment, member_id, eff_date, term_date) values (200, 1, '1/1/2008', '1/1/2009')
insert into #MoneyTest (coment, member_id, eff_date, term_date) values (322, 1, '1/1/2008', '1/1/2009')
insert into #MoneyTest (coment, member_id, eff_date, term_date) values (120, 1, '1/1/2008', '1/1/2009')
insert into #MoneyTest (coment, member_id, eff_date, term_date) values ('XX', 1, '1/1/2008', '1/1/2009')
Select *
FROM #MoneyTest
-- DECLARE with an initializer needs SQL 2008+; split into DECLARE and SET for SQL 2000
declare @intMemberId int
declare @dtmBeginDate datetime
declare @dtmEndDate datetime
set @intMemberId = 1
set @dtmBeginDate = '1/1/2008'
set @dtmEndDate = '1/1/2009'
SELECT
CASE WHEN ISNUMERIC (Coment)=1 THEN CAST(#MoneyTest.coment AS money) ELSE cast (0 as money) END MoneyValue
FROM #MoneyTest
WHERE member_id = @intMemberId
AND #MoneyTest.eff_date <= @dtmEndDate
AND
(
#MoneyTest.term_date >= @dtmBeginDate
OR
#MoneyTest.term_date Is Null
)
SELECT
CASE WHEN ISNUMERIC (Coment)=1 THEN CAST (CAST(#MoneyTest.coment AS money) AS VARCHAR) ELSE 'N/a' END StringValue
FROM #MoneyTest
WHERE member_id = @intMemberId
AND #MoneyTest.eff_date <= @dtmEndDate
AND
(
#MoneyTest.term_date >= @dtmBeginDate
OR
#MoneyTest.term_date Is Null
)
Apologies for making a new answer where a comment would suffice, but I lack the required permissions to do so. On to your question: I would only add that you should use the above ISNUMERIC carefully. While it works much as expected, it also treats things like '1.3E-2' as a valid numeric, which, strangely enough, you cannot cast to numeric or money without raising an exception. I generally end up using:
SELECT
CASE WHEN ISNUMERIC( some_value ) = 1 AND CHARINDEX( 'E', Upper( some_value ) ) = 0
THEN Cast( some_value as money )
ELSE Cast( 0 as money )
END as money_value
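A quick way to convince yourself of the E-notation edge case (literal values, just for illustration):
SELECT ISNUMERIC('1.3E-2')              -- 1: ISNUMERIC accepts scientific notation
SELECT CHARINDEX('E', UPPER('1.3E-2'))  -- 4: so the extra guard filters it out
-- SELECT CAST('1.3E-2' AS money)       -- this, by contrast, raises a conversion error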