When I run the program, it shows "Failed to build basics1_dart:basics1_dart:
bin/basics1_dart.dart:38:12: Error: The operator '<' isn't defined for the class 'String?'.
Try correcting the operator to an existing operator, or defining a '<' operator.
if(num1<0)"
What should I do?
You are asking about the String? type, which is a String that may also contain a null value.
Under the question there is a comment from #jamesdlin that could well be the answer, but the < operator does not exist for the plain String type either. There is a compareTo method, which can be used instead to compare it with another String: if (num1 != null && num1.compareTo("0") < 0)
You can easily compare your value when it does not actually contain null. As #jamesdlin said, you can check the value against null first, but if you are sure it is not null, you can also use the ! operator as a compact cast to the non-nullable type: if (num1!.compareTo("0") < 0). This cast throws an exception when the value is null, but if you are sure it never is, why not?
I also wonder why your num1 variable is a nullable string when you compare it with a number. Maybe you need to convert it to a numeric type first? You can use int.parse(String) to do this and store the parsed integer value in another variable, or use it directly in the comparison: if (int.parse(num1!) < 0)
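A minimal sketch of the parse-then-compare approach, assuming num1 comes from stdin.readLineSync() (the actual input source in basics1_dart.dart is not shown) and using int.tryParse so that bad input does not throw:

import 'dart:io';

void main() {
  final String? num1 = stdin.readLineSync();     // returns String? under null safety
  final int? parsed = int.tryParse(num1 ?? '');  // null when input is missing or not a number
  if (parsed != null && parsed < 0) {
    print('num1 is negative');
  }
}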
In Python, None != 1 returns True.
But why does "Null_column" != 1 return false in PySpark?
Example:
from pyspark.sql.functions import lit

data = [(1, 5), (2, 5)]
columns = ["id", "test"]
df_null = spark.createDataFrame(data, columns)
df_null = df_null.withColumn("nul_val", lit(None))
df_null.printSchema()
df_null.show()
But df_null.filter(df_null.nul_val != 1).count() returns 0.
Please check NULL Semantics - Spark 3.0.0 for how to handle comparisons with null in Spark.
To summarize: in Spark, null is undefined, so any comparison with null results in undefined and should be avoided to prevent unwanted results. In your case, since undefined is not true, the count is 0.
Apache Spark supports the standard comparison operators such as ‘>’, ‘>=’, ‘=’, ‘<’ and ‘<=’. The result of these operators is unknown or NULL when one of the operands or both the operands are unknown or NULL.
If you want to compare with a column that might contain null, use the null-safe operator <=>, which returns False when exactly one of the operands is NULL (and True when both are NULL):
In order to compare the NULL values for equality, Spark provides a null-safe equal operator (‘<=>’), which returns False when one of the operands is NULL
So, back to your problem. To solve it I would combine a null check with the comparison with 1:
df_null.filter((df_null.nul_val.isNull()) | (df_null.nul_val != 1)).count()
Another solution would be to replace null with 0, if that does not break any other logic:
df_null.fillna(value=0, subset=["nul_val"]).filter(df_null.nul_val != 1).count()
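For completeness, a small self-contained sketch (it assumes an existing SparkSession named spark) contrasting the plain comparison, the explicit null check, and eqNullSafe, the DataFrame API counterpart of the <=> operator:

from pyspark.sql.functions import lit

data = [(1, 5), (2, 5)]
df_null = spark.createDataFrame(data, ["id", "test"]).withColumn("nul_val", lit(None))

df_null.filter(df_null.nul_val != 1).count()                               # 0: NULL != 1 evaluates to unknown
df_null.filter(df_null.nul_val.isNull() | (df_null.nul_val != 1)).count()  # 2: explicit null check
df_null.filter(~df_null.nul_val.eqNullSafe(1)).count()                     # 2: null-safe comparison (<=>)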
I have a requirement to load null if the total hours are less than the previous total hours, else the difference:
iif(lesser(TOTAL_HOURS, PREVIOUS_TOTAL_HOURS), null(), TOTAL_HOURS - PREVIOUS_TOTAL_HOURS)
It gives me "expression could not be evaluated".
Not all rows have values for these fields; some of them are null. They are numeric fields in the database.
I just want to replace negative results with null.
If you look at the documentation for iif, it says:
iif(<condition> : boolean, <true_expression> : any, [<false_expression> : any]) => any
Based on a condition applies one value or the other. If other is unspecified it is considered NULL. Both the values must be compatible (numeric, string...).
Now, as per your expression:
iif(lesser(TOTAL_HOURS, PREVIOUS_TOTAL_HOURS), null(), TOTAL_HOURS - PREVIOUS_TOTAL_HOURS)
since the first value you have mentioned is of type null, it expects TOTAL_HOURS - PREVIOUS_TOTAL_HOURS to also return that same null type, so the two branches are incompatible.
What you can try is:
iif(lesser(TOTAL_HOURS, PREVIOUS_TOTAL_HOURS),toInteger(null()),TOTAL_HOURS-PREVIOUS_TOTAL_HOURS)
OR
case(TOTAL_HOURS < PREVIOUS_TOTAL_HOURS, toInteger(null()), minus(TOTAL_HOURS,PREVIOUS_TOTAL_HOURS) )
I have an integer field coming in and I want to extract its first digit. How can I do that? I cannot cast the field since the data is coming from a dataset. Is there a way to extract the first digit in the Transformer stage in IBM DataStage?
Example:
Input:
ABC = 1234
Output: 1
Can anyone please help me with the same?
Thanks!
Use a Transformer, define a stage variable as varchar and use this formula to get the substring:
ABC[1,1]
Alternatively, you can convert your numeric value by using DecimalToString.
You CAN convert to string within the context of your expression, and back again if the result needs to be an integer:
AsInteger(Left(ln_jn_ENCNTR_DTL.CCH, 1))
This solution uses implicit conversion from integer to string. It assumes that the value of CCH is always an integer.
I would say: if ABC has type int, you can define a stage variable of type char with length 1.
Then you need to convert the number to a string first and use the Left function to extract the first character:
Left(DecimalToString(ABC), 1)
If you are getting ABC as a string, you can apply the Left function directly.
You can first define a stage variable (say SV) of varchar type, to convert the input integer column into varchar.
Now assign the input integer column to the stage variable SV and derive the output integer column as AsInteger(SV[1,1]).
That is: input integer => (type conversion to varchar) stage variable => substring [1,1] => conversion back to integer using AsInteger.
DecimalToString is an implicit conversion, so all you need is the Left() function: Left(MyString, 1)
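To tie the Transformer answers together, a sketch of the two derivations (the stage-variable name SV, its length, and the output column name are illustrative, not from the original post):

SV (stage variable, VarChar(10))      : DecimalToString(ABC)
FIRST_DIGIT (output column, Integer)  : AsInteger(SV[1,1])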
I have a field that has up to 9 comma-separated values, each of which has a string value and a numeric value separated by a colon. After parsing them, some of the values between 0 and 1 are being returned as an integer rather than the numeric they were cast to. The problem is obviously related to the data type, but I am unsure what is causing it or how to fix it. The problem only exists in the CASE statement; the split_part function seems to be working perfectly.
Things I have tried:
nvl(split_part(one,':',2),0) => COALESCE types text and integer cannot be matched
nvl(split_part(one,':',2)::numeric,0) => Invalid input syntax for type numeric
numerous other cast/convert variations
(CASE WHEN split_part(one,':',2) = '' THEN 0::numeric ELSE split_part(one,':',2)::numeric END)::numeric => runs, but returns an int value of 0
When using the split_part function outside of the CASE statement it does work correctly. However, I need the result to be zero for null values.
split_part(one,':',2) => 0.02068278096187390979 (expected result)
When running the code above I get zero, but I expect 0.02068278096187390979.
The field "one" has the value 'xyz: 0.02068278096187390979' before the split_part function.
EXAMPLE:
create table test(one varchar);
insert into test values ('XYZ: 0.50000000000000000000');

select one
     , split_part(one, ':', 2) as correct_value_for_those_that_are_not_null
     , case
          when split_part(one, ':', 2) = '' then null
          else split_part(one, ':', 2)::numeric
       end::numeric as this_one_is_the_problem
from test;
However, I need the result to be zero for null values.
Your example does not deal with NULL values at all, though; it only addresses the empty string ('').
To replace either with 0 reliably, efficiently and without casting issues:
SELECT part1, CASE WHEN part2 <> '' THEN part2::numeric ELSE numeric '0' END AS part2
FROM  (
   SELECT split_part(one, ':', 1) AS part1
        , split_part(one, ':', 2) AS part2
   FROM   test
   ) sub;
See:
Best way to check for "empty or null value"
Also note that all SQL CASE branches must agree on a common data type. There have been minor adjustments to the logic that determines the resulting type in the past, so the version of Postgres may play a role in corner cases; I don't recall the details now.
nvl() is not a Postgres function. You probably meant COALESCE. The manual:
This SQL-standard function provides capabilities similar to NVL and IFNULL, which are used in some other database systems.
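If you also want 0 (rather than NULL) when the second part is missing, a compact variant against the test table above, a sketch combining NULLIF and COALESCE, avoids the CASE typing issue entirely:

SELECT split_part(one, ':', 1) AS part1
     , COALESCE(NULLIF(split_part(one, ':', 2), '')::numeric, 0) AS part2
FROM   test;   -- empty string becomes NULL, then COALESCE folds NULL into numeric 0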
I am joining the two tables using the query below:
update campaign_items
set    last_modified = evt.event_time
from (
   select max(event_time) event_time
        , result
   from   events
   where  request = '/campaignitem/add'
   group  by result
   ) evt
where  evt.result = campaign_items.id;
The result column is of character varying type and id is of integer type, but the data in the result column contains digits (e.g. 12345).
How would I run this query, converting the type of result (character varying) to match id (integer)?
PostgreSQL coerces an untyped string literal for you; for example, you can try
select ' 12 ' = 12
and you will see that it returns true even though there is extra whitespace in the string version. A character varying column is not implicitly cast to integer like that, though, so for the join you need an explicit conversion:
where evt.result::int = campaign_items.id
According to your comment you have values like convRepeatDelay; these obviously cannot be converted to int. What you should then do is convert your int to text instead (note that ::char would truncate it to a single character):
where evt.result = campaign_items.id::text
There are several solutions. You can use the cast operator :: to cast a value from a given type into another type:
WHERE evt.result::int = campaign_items.id
You can also use the CAST function, which is more portable:
WHERE CAST(evt.result AS int) = campaign_items.id
Note that to improve performance, you can add an index on the casting operation (note the mandatory double parentheses), but then you have to use GROUP BY result::int instead of GROUP BY result to take advantage of the index:
CREATE INDEX i_events_result ON events ((result::int));
By the way, the best option may be to change the result column's type to int if you know that it will only contain integers ;-)
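Putting the pieces together, a sketch of the full UPDATE with the explicit cast; the regular-expression filter is my addition (not part of the original answers) to skip non-numeric values such as convRepeatDelay before casting:

update campaign_items
set    last_modified = evt.event_time
from (
   select max(event_time) event_time
        , result
   from   events
   where  request = '/campaignitem/add'
   and    result ~ '^\d+$'            -- keep only values that are all digits
   group  by result
   ) evt
where  evt.result::int = campaign_items.id;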