I use psycopg2 and have input variables in a WHERE condition that looks like this:
WHERE %(variable)s is null or some_column in %(variable)s
The problem is that when variable is null I get a syntax error:
psycopg2.errors.SyntaxError: syntax error at or near "NULL"
(NULL is null or (some_column IN NULL)) and
^
That condition should of course not be evaluated, since NULL IS NULL is already true, but the syntax does not parse. Is there any way to fix this without creating an extra variable?
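One common workaround (a sketch, not from the original post; `some_table` and the `int[]` cast are illustrative assumptions) is to bind the parameter as a PostgreSQL array, because `ANY(NULL::int[])` is valid syntax where `IN NULL` is not:

```python
# Hypothetical sketch: bind the Python list (or None) as a PostgreSQL array.
# "IN NULL" is a syntax error, but "= ANY(NULL::int[])" parses fine and the
# "IS NULL" arm of the OR makes the whole condition true when no filter is given.
query = """
    SELECT *
    FROM some_table
    WHERE %(variable)s::int[] IS NULL
       OR some_column = ANY(%(variable)s::int[])
"""
# With a live psycopg2 connection this would run as:
# cur.execute(query, {"variable": [1, 2, 3]})  # filter on the list
# cur.execute(query, {"variable": None})       # no filter at all
```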
Related
SELECT * FROM Entity e WHERE e.Status <> ANY(ARRAY[1,2,3]);
Here Status is a nullable integer column. Using the above query, I am unable to fetch the records whose status value is NULL.
SELECT * FROM Entity e WHERE (e.Status is NULL OR e.Status = 4);
This query does the trick. Could someone explain why the first query was not working as expected?
NULL kinda means "unknown", so the expressions
NULL = NULL
and
NULL != NULL
are neither true nor false, they're NULL. Because it is not known whether an "unknown" value is equal or unequal to another "unknown" value.
Since <> ANY uses an equality test, if the value searched in the array is NULL, then the result will be NULL.
So your second query is correct.
It is spelled out in the docs Array ANY:
If the array expression yields a null array, the result of ANY will be null. If the left-hand expression yields null, the result of ANY is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no true comparison result is obtained, the result of ANY will be null, not false (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
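This behaviour can be reproduced in any engine that follows SQL's three-valued logic; here is a small sqlite3 sketch (the table and values are illustrative, standing in for Entity):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE entity (status INTEGER)")
cur.executemany("INSERT INTO entity VALUES (?)", [(1,), (4,), (None,)])

# NULL <> 1 evaluates to NULL, so the NULL row never satisfies the predicate:
only_four = cur.execute(
    "SELECT COUNT(*) FROM entity WHERE status <> 1"
).fetchone()[0]
print(only_four)  # 1

# An explicit IS NULL test recovers the NULL row:
with_null = cur.execute(
    "SELECT COUNT(*) FROM entity WHERE status IS NULL OR status <> 1"
).fetchone()[0]
print(with_null)  # 2
```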
FYI:
e.Status is NULL OR e.Status = 4
can be shortened to:
e.Status IS NOT DISTINCT FROM 4
per Comparison operators.
In Python, None != 1 returns True.
But why does filtering with "Null_column" != 1 in PySpark behave as false?
example:
from pyspark.sql.functions import lit

data = [(1, 5), (2, 5)]
columns = ["id", "test"]
df_null = spark.createDataFrame(data, columns)
df_null = df_null.withColumn("nul_val", lit(None))
df_null.printSchema()
df_null.show()
but df_null.filter(df_null.nul_val != 1).count() returns 0.
Please check NULL Semantics - Spark 3.0.0 for how comparisons with null are handled in Spark.
To summarize: in Spark, null means undefined, so any comparison with null yields undefined and should be avoided. In your case, since undefined is not true, the count is 0.
Apache Spark supports the standard comparison operators such as ‘>’, ‘>=’, ‘=’, ‘<’ and ‘<=’. The result of these operators is unknown or NULL when one of the operands or both operands are unknown or NULL.
If you want to compare with a column that might contain null, use the null-safe operator <=>, which never yields NULL:
In order to compare NULL values for equality, Spark provides a null-safe equal operator (‘<=>’), which returns False when one of the operands is NULL and True when both operands are NULL
So, back to your problem. To solve it, I would combine a null check with the comparison against 1:
df_null.filter((df_null.nul_val.isNull()) | (df_null.nul_val != 1)).count()
Another solution would be to replace null with 0, if that does not break any other logic:
df_null.fillna(value=0, subset=["nul_val"]).filter(df_null.nul_val != 1).count()
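The same three-valued logic is easy to see outside Spark; a quick sqlite3 sketch (SQLite's IS NOT plays the role of a null-safe inequality, analogous to negating <=>):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# NULL != 1 is NULL (unknown); a WHERE/filter treats that as "drop the row":
unknown = cur.execute("SELECT NULL != 1").fetchone()[0]
print(unknown)  # None

# A null-safe comparison always yields a definite true/false:
definite = cur.execute("SELECT NULL IS NOT 1").fetchone()[0]
print(definite)  # 1
```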
I am using Python 3.6 and py-postgresql==1.2.1.
I have the following statement:
db.prepare("SELECT * FROM seasons WHERE user_id=$1 AND season_id=$2 LIMIT 1"), where season_id can be NULL.
I want to be able to get the latest record with a NULL season_id by passing None as the $2 param, but it does not work. Instead, I need to create this second statement:
db.prepare("SELECT * FROM seasons WHERE user_id=$1 AND season_id IS NULL LIMIT 1")
It must have something to do with season_id = NULL not working while season_id IS NULL does. Is there a way to make this work?
From Comparison Functions and Operators:
Do not write expression = NULL because NULL is not “equal to” NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
Some applications might expect that expression = NULL returns true if expression evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that cannot be done the transform_null_equals configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL.
and:
19.13.2. Platform and Client Compatibility
transform_null_equals (boolean)
When on, expressions of the form expr = NULL (or NULL = expr) are treated as expr IS NULL, that is, they return true if expr evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of expr = NULL is to always return null (unknown). Therefore this parameter defaults to off.
You could rewrite your query:
SELECT *
FROM seasons
WHERE user_id = $1
AND (season_id = $2 OR ($2 IS NULL AND season_id IS NULL))
-- ORDER BY ... -- LIMIT without an ORDER BY can be dangerous:
-- you should explicitly specify the sorting
LIMIT 1;
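The same one-statement pattern can be checked quickly with sqlite3 (positional ? parameters instead of $1/$2, so the season_id value is simply passed twice):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE seasons (user_id INTEGER, season_id INTEGER)")
cur.executemany("INSERT INTO seasons VALUES (?, ?)", [(1, 2), (1, None)])

query = """
    SELECT COUNT(*) FROM seasons
    WHERE user_id = ?
      AND (season_id = ? OR (? IS NULL AND season_id IS NULL))
"""
# One prepared statement handles both a concrete season_id and None:
concrete = cur.execute(query, (1, 2, 2)).fetchone()[0]
null_case = cur.execute(query, (1, None, None)).fetchone()[0]
print(concrete, null_case)  # 1 1
```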
I have a table variable and none of its columns can be NULL (each has a NOT NULL definition):
DECLARE @SampleTable TABLE
(
    SampleColumnID nvarchar(400) NOT NULL PRIMARY KEY,
    SampleColumnText nvarchar(max) NOT NULL
)
I have done some operations with this variable and initialized SampleColumnText with some text.
Then I try to replace part of it with text returned from another function. That function returns NULL in some cases, so this code generates an error:
REPLACE(SampleColumnText, '{*}', @InitByFunctionText)
where @InitByFunctionText is NULL this time.
So, is it normal for an error to be generated, given that I am replacing only part of the text with NULL rather than the whole text?
This is expected behaviour. REPLACE:
Returns NULL if any one of the arguments is NULL.
If you want to replace it with an empty string (which is not the same as NULL), you can use COALESCE:
REPLACE(SampleColumnText, '{*}', COALESCE(@InitByFunctionText, ''))
I ran into something similar recently, and the following got around the issue:
REPLACE(SampleColumnText, '{*}', ISNULL(@InitByFunctionText, ''))
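Both fixes rely on the same NULL-propagation rule. SQLite's replace() behaves the same way, so the problem and the COALESCE fix can be sketched with sqlite3:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
missing = None  # stands in for the function returning NULL

# Any NULL argument makes the whole REPLACE result NULL:
broken = cur.execute(
    "SELECT REPLACE('a{*}b', '{*}', ?)", (missing,)
).fetchone()[0]
print(broken)  # None

# COALESCE swaps in an empty string, so only the placeholder vanishes:
fixed = cur.execute(
    "SELECT REPLACE('a{*}b', '{*}', COALESCE(?, ''))", (missing,)
).fetchone()[0]
print(fixed)  # ab
```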
The following T-SQL query is generated by a tool I'm using, but when it's executed there is a syntax error near 'LIKE'. I can't seem to figure out what the problem is. Does anybody know what's wrong?
The error from SQL Management Studio is "Msg 156, Level 15, State 1, Line 17 Incorrect syntax near the keyword 'LIKE'."
SELECT COUNT_BIG(*)
FROM [HistoryReport] AS t0
WHERE (1 <> 0 AND
(CASE WHEN (
(CASE WHEN (t0.[CategoryValue] IS NULL)
THEN NULL
ELSE LOWER(t0.[CategoryValue])
END) IS NULL
)
THEN NULL
ELSE (
(CASE WHEN (t0.[CategoryValue] IS NULL)
THEN NULL
ELSE LOWER(t0.[CategoryValue])
END) LIKE 'U' + '%'
)
END) <> 0)
A few things. It seems very strange that you are testing whether a value is null, returning null if it is and the value if it isn't, and then checking for nulls again in the branch that only executes when the value is definitely not null. That is unnecessary and very confusing. In addition, I suspect your comparison with NULL isn't going to work the way you think it will, since NULL <> 0 evaluates to NULL by default.
As an aside: normally, in SQL Server, strings are case-insensitive (unless you configure the column or server with a case-sensitive collation, which is somewhat uncommon).
I'm trying hard to figure out what you actually mean, here is a syntactically correct version of what I think you're trying to do:
SELECT COUNT_BIG(*)
FROM [HistoryReport] AS t0
WHERE (1 <> 0 AND LOWER(t0.categoryValue) like 'U' + '%')
Basically, your query states: if t0.CategoryValue is null, return null; otherwise convert t0.CategoryValue to lowercase and compare it, using LIKE, to 'U%', returning true if the LIKE comparison succeeds. The query above accomplishes the same thing.
If you aren't using a case-sensitive collation, you can remove LOWER(), since it only adds cost and prevents any index usage.
Now, in SQL Server's world, NULL means 'unknown', so asking "does this unknown value equal 0?" can only give the answer "I don't know." This confuses a lot of people because they expect NULL == NULL, which holds in some languages; but in SQL Server you're basically asking "is this unknown value the same as this other unknown value?", and the answer is, again, unknown.
So I guess my only follow-up question is: how do you want NULLs to be treated?
Also, as to your original question: an expression like your LIKE 'U' + '%' cannot stand on its own in the middle of a CASE, because a LIKE comparison is a predicate, not a value.
BOL states the syntax is:
Simple CASE expression:
CASE input_expression
WHEN when_expression THEN result_expression [ ...n ]
[ ELSE else_result_expression ]
END
Searched CASE expression:
CASE
WHEN Boolean_expression THEN result_expression [ ...n ]
[ ELSE else_result_expression ]
END
and that:
THEN result_expression Is the expression returned when input_expression equals when_expression evaluates to TRUE, or Boolean_expression evaluates to TRUE. result_expression is any valid expression.
And while what you have with LIKE looks like it should be a valid expression, it is not: T-SQL has no boolean value type, so the result of a LIKE predicate is not a value that a CASE can return or that <> 0 can be applied to.
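If the generated shape must be kept, one common workaround (a sketch, not the tool's output) is to wrap the predicate in its own CASE so it yields a scalar 1/0, which can then be compared:

```sql
-- T-SQL has no boolean value type, so a predicate such as x LIKE 'U%'
-- cannot stand where a scalar expression is expected. Wrapping it in a
-- CASE produces a comparable 1/0 instead:
SELECT COUNT_BIG(*)
FROM [HistoryReport] AS t0
WHERE (CASE WHEN t0.[CategoryValue] LIKE 'U' + '%' THEN 1 ELSE 0 END) <> 0;
```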