Lambda Expression selecting Max Numeric Value from a collection of AlphaNumeric Strings - entity-framework

I have the SQL query below, which selects the maximum integer value from a varchar field whose values start with MB:
select max( cast(substring( sLicenseNo, 4, len(sLicenseNo)) as int)) as licno from ApplicationForm where sLicenseNo like 'MB%'
How can I convert this to a lambda expression for use with Entity Framework?

Try this:
TestDataContext db = new TestDataContext();
var res = db.ApplicationForms
    .Where(y => y.sLicenseNo.StartsWith("MB"))                // matches LIKE 'MB%'
    .Select(x => Convert.ToInt32(x.sLicenseNo.Substring(3)))  // SQL substring(..., 4, ...) is 1-based; C# Substring(3) is 0-based
    .Max();
Tell me if it works or not.
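If your provider can't translate Convert.ToInt32 (EF6's LINQ to Entities, for instance, throws NotSupportedException on it, while EF Core translates it to a SQL CAST), a fallback is to filter on the server and parse on the client. A minimal sketch, assuming the same context and column as above:
// Filter in SQL, parse in memory; avoids untranslatable method calls.
var maxLicNo = db.ApplicationForms
    .Where(f => f.sLicenseNo.StartsWith("MB"))
    .Select(f => f.sLicenseNo)
    .AsEnumerable()                           // switch to LINQ to Objects here
    .Select(s => int.Parse(s.Substring(3)))   // skip the 3-character prefix
    .DefaultIfEmpty(0)                        // Max() on an empty sequence would throw
    .Max();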

Related

Postgresql update column of numeric type with NULL value fails if all value of this column is NULL

I have a database table like this:
idx[PK] | a[numeric] | b[numeric]
--------+------------+-----------
      1 |          1 |          1
      2 |          2 |          2
      3 |          3 |          3
      4 |          4 |          4
    ... |        ... |        ...
In pgadmin4, I tried to update this table with some null values, and I noticed the following queries failed:
UPDATE test as t SET
    a = e.a, b = e.b
FROM (VALUES (1, NULL, NULL), (2, NULL, NULL), (3, NULL, NULL))
    AS e(idx, a, b)
WHERE t.idx = e.idx;

UPDATE test as t SET
    a = e.a, b = e.b
FROM (VALUES (1, NULL, 1), (2, NULL, 2), (3, NULL, NULL))
    AS e(idx, a, b)
WHERE t.idx = e.idx;
The error message is like this:
ERROR: column "a" is of type numeric but expression is of type text
LINE 2: a = e.a,b = e.b
^
HINT: You will need to rewrite or cast the expression.
SQL state: 42804
Character: 43
However, this will be successful:
UPDATE test as t SET
    a = e.a, b = e.b
FROM (VALUES (1, NULL, 1), (2, 2, NULL), (3, NULL, NULL))
    AS e(idx, a, b)
WHERE t.idx = e.idx;
It seems that if the new values for one of the columns I am updating are all NULL, the query fails; as long as at least one value in that column is a non-NULL number, it succeeds. Why is this?
I did simplify my real-world case here, as my actual table has millions of rows and more than 10 columns. Using Python and psycopg2, when I tried to update 50,000 rows in one query, the error could still show up even though some values in the column were NOT NULL. I guess that is because the system scans a certain number of rows to decide on the type, instead of all 50,000 rows.
Therefore, how to avoid this failure in my real world situation? Is there a better query to use instead of UPDATE?
Thank you very much!
UPDATE
Per comments from @Marth and @Gordon Linoff, and as I am using psycopg2, I did the following in my code:
from psycopg2.extras import execute_values
sql = """UPDATE test as t SET
a = (e.a::numeric),
b = (e.b::numeric)
FROM (VALUES %s)
AS e(idx, a, b)
WHERE t.idx = e.idx"""
execute_values(cursor, sql, data)
cursor is from the database connection. data is a list of tuples in the form (idx, a, b) of my values.
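For completeness, data might look like this (values invented to mirror the example table; psycopg2 sends None as SQL NULL):
# Each tuple is (idx, a, b).
data = [(1, None, None), (2, None, 2), (3, 3, None)]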
This is due to the default behavior of how NULL works in these situations. NULL is generally an unknown type, which is then treated as whatever type is necessary.
In a values() statement, Postgres tries to decipher the types. It treats the individual records as it would with a union. But if all are NULL... well, then there is no information. And Postgres decides on using text as the universal default.
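You can watch this type resolution happen (a quick check; should work on any recent Postgres):
-- an all-NULL VALUES column falls back to text ...
SELECT pg_typeof(a) FROM (VALUES (NULL), (NULL)) AS v(a);  -- text
-- ... but a single non-NULL numeric value fixes the type
SELECT pg_typeof(a) FROM (VALUES (NULL), (2.0)) AS v(a);   -- numeric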
It is also important to understand that this fails with the same error:
UPDATE test t
SET a = ''
WHERE t.idx = 1;
The issue is that Postgres does not convert empty strings to numbers (unlike some other databases).
In any case, this is easily fixed by casting the NULL to an appropriate type:
UPDATE test t
SET a = e.a, b = e.b
FROM (VALUES (1, NULL::numeric, NULL::numeric),
             (2, NULL, NULL),
             (3, NULL, NULL)
     ) e(idx, a, b)
WHERE t.idx = e.idx;
You can be explicit for all occurrences of NULL, but that is not necessary.
Here is a db<>fiddle that illustrates some of this.

Postgres jsonb : cast array element to integer

Using Postgres 11.2.9 (ubuntu),
In my database, I have a jsonb field containing values that look like this:
[1618171589133, 1618171589245, 1618171589689]
I'd like to retrieve rows where the first element is lower than a specific value. I've tried this:
SELECT * FROM user.times WHERE time ->> 0 < 1618171589133
but I get the following error: ERROR: operator does not exist: text < bigint
Should I somehow cast the time value to a numeric value? I've tried time ->> 0::numeric but I actually don't know what to do.
The ->> operator returns the element at the given position as text, which you can then convert to an integer (or, as it seems in this case, bigint) as you would normally do in Postgres, using :: as a suffix. Note that in your attempt the cast ::numeric binds to the index 0, not to the whole expression, which is why it didn't help; the parentheses below apply it to the ->> result:
SELECT * FROM user.times WHERE ((time ->> 0)::bigint) < 1618171589133
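A self-contained check (table and column names invented to mirror the question):
-- "time" is quoted since it is also a SQL type name
CREATE TABLE times (id serial PRIMARY KEY, "time" jsonb);
INSERT INTO times ("time")
VALUES ('[1618171589133, 1618171589245, 1618171589689]');
-- the parentheses make the cast apply to the whole ->> expression
SELECT * FROM times WHERE ("time" ->> 0)::bigint < 1618171589200;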

Explicit type conversion in postgreSQL

I am joining the two tables using the query below:
update campaign_items
set last_modified = evt.event_time
from (
select max(event_time) event_time
,result
from events
where request = '/campaignitem/add'
group by result
) evt
where evt.result = campaign_items.id
where the result column is of character varying type and id is of integer type. But the data in the result column contains digits (i.e. 12345).
How would I run this query, converting the type of result (character varying) to match id (integer)?
Well, with an untyped literal PostgreSQL will do the conversion for you. For example, you can try
select ' 12 ' = 12
and you will see that it returns true even though there is extra whitespace in the string version: the literal has 'unknown' type and is resolved to integer. A character varying column, however, has no implicit cast to integer, so there you need an explicit conversion:
where evt.result::int = campaign_items.id
According to your comment you have values like convRepeatDelay; these obviously cannot be converted to int. What you should then do is convert your int to text instead (::char would be character(1) and would truncate the id to a single character):
where evt.result = campaign_items.id::text
There are several solutions. You can use the cast operator :: to cast a value from a given type into another type:
WHERE evt.result::int = campaign_items.id
You can also use the CAST function, which is more portable:
WHERE CAST(evt.result AS int) = campaign_items.id
Note that to improve performance, you can add an index on the casting expression (note the mandatory double parentheses), but then you have to use GROUP BY result::int instead of GROUP BY result to take advantage of the index:
CREATE INDEX i_events_result ON events ((result::int));
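Putting the cast and the index-friendly GROUP BY together, the original UPDATE might look like this (a sketch, assuming every result value really is numeric):
update campaign_items
set last_modified = evt.event_time
from (
    select max(event_time) as event_time,
           result::int as result  -- cast once, so the expression index can be used
    from events
    where request = '/campaignitem/add'
    group by result::int
) evt
where evt.result = campaign_items.id;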
By the way the best option is maybe to change the result column type to int if you know that it will only contain integers ;-)

Call aliased column result of aggregate function JOOQ

I'm currently trying to retrieve a single double value from this query, built with the jOOQ query builder against a PostgreSQL database, provided that DRINKS.PRICE is of type double and ORDER_DRINK.QTY is of type integer.
Record rec = create.select(DSL.sum(DRINKS.PRICE.multiply(ORDER_DRINK.QTY)).as("am_due"))
        .from(ORDERS
            .join(ORDER_DRINK
                .join(DRINKS)
                .on(DRINKS.DRINK_KEY.equal(ORDER_DRINK.DRINK_KEY)))
            .on(ORDERS.ORDKEY.equal(ORDER_DRINK.ORDER_KEY)))
        .where(ORDERS.TOKEN.eq(userToken))
        .fetchOne();
As I've understood from the (brief) tutorial, once I retrieve the value from that aliased record, in the form:
double v = rec.getValue("am_due");
I should have the sum of all the prices multiplied by their quantities.
Still, I get a NullPointerException instead.
Any help would be very welcome.
Thank you.
Your jOOQ usage is correct, but I suspect that your sum is simply null, because either:
- your join doesn't return any records, or
- all of your multiplication operands are null,
and since you're unboxing Double to double, you're getting that NullPointerException. You can solve this
... using Java:
Double v1 = rec.getValue("am_due"); // will return null
double v2 = rec.getValue("am_due", double.class); // will return 0.0
... using jOOQ / SQL
Record rec = create.select(DSL.nvl(
        DSL.sum(DRINKS.PRICE.multiply(ORDER_DRINK.QTY)),
        BigDecimal.ZERO  // nvl's default must match the sum's BigDecimal type
    ).as("am_due"))
    // ... same joins, where clause, and fetchOne() as in the question

Count Group Ordinal in LINQ to Dataset

I have an old FoxPro program that does a SQL query which includes the following:
SELECT Region,
       Year AS yr_qtr,
       SUM(Stock) AS inventory,
       ...
       COUNT(Rent) AS rent_ct
FROM ...
GROUP BY Region, Year
ORDER BY Region, Year
INTO CURSOR tmpCrsr
The query is against a .DBF table file and includes data from an Excel file. I've used both to populate an enumeration of user-defined objects in my C# program. (Not sure whether .AsEnumerable() is needed.) I then attempt to use LINQ to Dataset to query the list of user objects and create the same result set:
var rslt1 = from rec in recs_list //.AsEnumerable()
group rec by new {rec.Region, rec.Year} into grp
select new
{
RegName = grp.Key.Region,
yr_qtr = grp.Key.Year,
inventory = grp.Sum(s => s.Stock),
// ...
rent_count = grp.Count(r => r.Rent != null)
};
This gives me the warning that "The result of the expression is always 'true' since a value of type 'decimal' is never equal to 'null' of type 'decimal'" for the Count() of the Rent column.
This makes sense, but then how do I do a count exclusive of the rows that have a value of .NULL. for that column in the FoxPro table (or NULL in any SQL database table, for that matter)? I can't do a null test of a decimal value.
If Rent is based on a column that is not nullable, then checking it for null makes no sense, which I believe the compiler accurately shows. Change the line to
rent_count = grp.Count(r => r.Rent != 0)
instead.
If the field is actually nullable, such as:
Decimal? rent;
that would make checking Rent against null valid. In that case the line would be:
rent_count = grp.Count(r => (r.Rent ?? 0) != 0)
using the null-coalescing operator ??, which says: if r.Rent is null, use the value 0 (or any value you want, technically) for r.Rent in the rest of the expression.
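For what it's worth, SQL's COUNT(Rent) counts only non-NULL values, so with a nullable Rent the closest translation keeps the null check itself. A minimal sketch, with a hypothetical record type standing in for the question's user-defined objects:
// Hypothetical shape of the items in recs_list.
record Rec(string Region, string Year, decimal Stock, decimal? Rent);

// Hypothetical data source mirroring the question's list.
var recs_list = new List<Rec>();

var rslt1 = from rec in recs_list
            group rec by new { rec.Region, rec.Year } into grp
            select new
            {
                RegName = grp.Key.Region,
                yr_qtr = grp.Key.Year,
                inventory = grp.Sum(s => s.Stock),
                // COUNT(Rent): count the rows where Rent is not NULL
                rent_count = grp.Count(r => r.Rent != null)
            };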