Postgres COALESCE inside nullif for 2 different fields - postgresql

I am new to SQL and Postgres and have a quick question. Right now I have two different tables, one with car info and one with partial car info, and I would like to sort on car.vin OR partial_car.vin, depending on which exists, sending all nulls/empty strings to the end of the sort. Currently my ORDER BY statement looks like:
ORDER BY nullif(coalesce(car.vin, partial_car.partial_vin), '') asc nulls last limit 50 offset 0
My expectation for this is that COALESCE will take the first non-null value and use that for sorting, or it will return null (and NULLIF will turn empty strings into null) so those rows are sent to the end. I haven't been able to make sense of my results so far: there are null values being placed in between actual values, etc. If I change it to coalesce(car.vin, ''), I see it work properly. Does anyone have any ideas as to why this is the behavior? Let me know if you need anything more from me.
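In case it helps, this is roughly what the full query looks like with the sort key selected alongside the raw columns (the join condition here is simplified and assumed, not the real one):
SELECT car.vin,
       partial_car.partial_vin,
       NULLIF(COALESCE(car.vin, partial_car.partial_vin), '') AS sort_key
FROM car
LEFT JOIN partial_car ON partial_car.car_id = car.id
ORDER BY sort_key ASC NULLS LAST
LIMIT 50 OFFSET 0;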

It was human error on my end. The object being sent to the client was not being populated properly with the partial data, so the sorting was correct, but I was seeing blanks because those values were not present.

Related

Only saving files without null value on Nifi

An absolute newbie here, trying out NiFi and PostgreSQL on Docker Compose.
I have a sample CSV file with 4 columns.
I want to split this CSV file into two
based on whether a row contains a null value or not.
Grade ,BreedNm ,Gender ,Price
C++ ,beef_cattle ,Female ,10094
C++ ,milk_cow ,Female ,null
null ,beef_cattle ,Male ,12704
B++ ,milk_cow ,Female ,16942
For example, the table above should be split into two tables, one containing rows 1 and 4 and the other containing rows 2 and 3,
and each of them saved into a PostgreSQL table.
Below is what I have tried so far.
I was trying to
split the flowfile into two, keeping only rows without null values on one side and rows with null values on the other, and
write each of them into a table, named 'valid' and 'invalid' respectively,
but I do not know how to split the CSV file and save the results as PostgreSQL tables through NiFi.
Can anyone help?
What you could do is use RouteOnContent with the "Content Must Contain Match" requirement, with the match being null. Anything that matches null would be routed one way, and anything not matching null would be routed a different way. Not sure if it's possible the way you're doing it, but that is one possibility. The match could be something like (.*?)null
I used the QueryRecord processor with two SQL statements, one selecting the rows with null values and the other selecting the rows without, and it worked as intended!
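Roughly, the two QueryRecord statements looked like this (the column names are from the sample CSV above, and this assumes the record reader maps the literal text null to an actual NULL):
SELECT * FROM FLOWFILE
WHERE Grade IS NOT NULL AND BreedNm IS NOT NULL
AND Gender IS NOT NULL AND Price IS NOT NULL
SELECT * FROM FLOWFILE
WHERE Grade IS NULL OR BreedNm IS NULL
OR Gender IS NULL OR Price IS NULL
Each statement is added as a dynamic property on QueryRecord (e.g. named valid and invalid), and each property becomes a relationship that can be routed to its own PutDatabaseRecord processor.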

Add contents of one column into another without overwriting

I can copy the contents of one column to another easily using SQL UPDATE. But I need to do it without deleting the content already there, so in essence I want to append one column to another without overwriting the other's original content.
I have a column called notes, and then for some unknown reason, after several months, I added another column called product_notes; after 2 days I realised that I have two sets of notes I urgently need to merge.
Usually when making a note we just append to any note already there via a form. I need to combine these two columns the same way, keeping any note in the first column, e.g.
Column notes = Out of stock Pete 040618--- ordered 200 units Jade
050618 --- 200 units received Lila 080618
and
Column product_notes = 5 units left Dave 120618 --- unit 10724 unacceptable quality noted in list Dave 130618
I need to put them together with our spacer of --- without losing the first column's content, so the result needs to look like this for my test case:
Column notes = Out of stock Pete 040618--- ordered 200 units Jade
050618 --- 200 units received Lila 080618 --- 5 units left Dave 120618 --- unit 10724 unacceptable quality noted in list Dave 130618
It's simple -
update table1 set notes = notes || '---' || product_notes;
The solution provided by @MaheshHViraktamath is fine, but the problem with simple string concatenation is that if either of the items being concatenated is NULL, the whole result becomes NULL.
Another potential issue is if either field is empty. In that case you might get a result of field a--- or ---field b.
To guard against the first scenario (without putting checks in the WHERE clause) you can use CONCAT_WS like so: CONCAT_WS('---', notes, product_notes). This will combine the two (or however many you put in there) fields with the first parameter, i.e. '---', as the separator. If either of those two fields is NULL, the separator won't be used, so you won't get a result with the separator prepended or appended.
There are two issues with the above: if both fields are NULL, the result isn't NULL but an empty string. To handle this case just put it in a NULLIF: NULLIF(CONCAT_WS('---', notes, product_notes), '') so that NULL is returned if both fields are NULL.
The other issue is if either field is empty, the separator will still be used. To guard against this scenario (and only you will know whether it's a scenario worth guarding against, or if this is even desired, based on your data), put each field in a NULLIF as well: NULLIF(CONCAT_WS('---', NULLIF(notes, ''), NULLIF(product_notes, '')), '')
As a result you get: UPDATE your_table SET notes = NULLIF(CONCAT_WS('---', NULLIF(notes, ''), NULLIF(product_notes, '')), '');
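To see how the guarded expression behaves for the different NULL/empty combinations, here is a quick check (the sample values are made up purely for illustration):
SELECT NULLIF(CONCAT_WS('---', NULLIF(notes, ''), NULLIF(product_notes, '')), '') AS merged
FROM (VALUES
    ('note a', 'note b'),  -- 'note a---note b'
    ('note a', ''),        -- 'note a' (no trailing separator)
    (NULL, 'note b'),      -- 'note b' (no leading separator)
    ('', NULL)             -- NULL (instead of an empty string)
) AS t(notes, product_notes);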

How to get all data without any filtering in "in" clause

As a part of reporting I want to get some values from the database.
I have also included filtering in the report UI, like:
select * from invoice where id in (92)
I am building the Postgres statement dynamically (here 92 is the value coming from the UI and assigned dynamically). But I want to return all data, without any condition on id, if the user selects no option (no filtering). So how can I handle the "in" clause to return all data without any filtering in this case?
I am asking for a common term that can be included in the 'in' clause so that it returns all rows without filtering.
Thanks!
One method is using logic like:
where (v_id is null) or (id = v_id)
Note: be careful about the use of in. It probably will not do what you intend if you expect multiple values to match.
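If the UI can pass several ids, an array parameter covers both the multi-value case and the "no filter" case. A sketch, assuming the application binds the parameter (here called v_ids) as an integer array:
select * from invoice
where (v_ids is null) or (id = any(v_ids))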

Between... And... To work without values

I've tried to do this in a million different ways. At first I couldn't get it to work at all, but now I've managed to get it to work if I put in values.
What I need to happen is for my query to filter my records based on what I put into my form.
I've used this code in the 'Criteria' section of my MovieYear column, and when I put in numbers into my MovieYear1 and MovieYear2 text boxes in my form, it filters correctly.
Between [Forms]![SearchForm]![MovieYear1] And [Forms]![SearchForm]![MovieYear2]
But if I don't put in any values, it doesn't come up with any records at all. Any help?
I've tried pretty much everything (well, at least I think I have). I've tried using wildcards "*" but then I found out you can't actually use them with Between functions...
I've also tried using Me.Filter in VBA, but it didn't seem to work. Maybe I just missed something?
This is my form.
Thanks in advance! :)
You can add a check for a Null in the form to the query, for example
SELECT *
FROM table
WHERE MovieYear Between [Forms]![SearchForm]![MovieYear1]
And [Forms]![SearchForm]![MovieYear2]
OR [Forms]![SearchForm]![MovieYear1] Is Null
This will return all records if the first year is null. The second year will be ignored.
You could use a conditional query builder where, after checking the values of the boxes, you build the query according to the following cases:
if only MovieYear1 is given, then data from all years after MovieYear1, that is date > MovieYear1.
if only MovieYear2 is given, then data from all years before MovieYear2, that is date < MovieYear2.
if both are given, then use the BETWEEN clause to get the data.
This can be implemented using CASE WHEN along the lines of the following:
CASE WHEN MovieYear2 IS NULL THEN date > MovieYear1
     WHEN MovieYear1 IS NULL THEN date < MovieYear2
     ELSE date BETWEEN MovieYear1 AND MovieYear2
END
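In Access SQL you can also cover all four cases (including both boxes left empty) without a CASE, by pairing each form value with an Is Null check. A sketch using the MovieYear field and the form controls from the question:
SELECT *
FROM table
WHERE (MovieYear >= [Forms]![SearchForm]![MovieYear1] OR [Forms]![SearchForm]![MovieYear1] Is Null)
AND (MovieYear <= [Forms]![SearchForm]![MovieYear2] OR [Forms]![SearchForm]![MovieYear2] Is Null)
If both boxes are empty every record is returned; if only one is filled in, it acts as a one-sided bound.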

Does DataReader.NextResult always retrieve the results in the same order

I have a SELECT query that yields multiple rows and does not have any ORDER BY clause.
If I execute this query multiple times and then iterate through results using DataReader.NextResult(), would I be guaranteed to get the results in the same order?
For e.g. if I execute the following query that return 199 rows:
SELECT * FROM products WHERE productid < 200
would I always get the first result with productid = 1 and so on?
As far as I have observed, it always returns the results in the same order, but I cannot find any documentation for this behavior.
======================================
As per my research:
Check out this blog: Conor vs. SQL. I actually wanted to ask whether the query result changes even if the data in the table remains the same (i.e. no updates or deletes). But it seems that for a large table, when SQL Server employs parallelism, the order can be different.
First of all, to iterate the rows in a DataReader, you should call Read, not NextResult.
Calling NextResult will move to the next result set if your query has multiple SELECT statements.
To answer your question, you must not rely on this.
A query without an ORDER BY clause will return rows in SQL Server's default iteration order.
For small tables, this will usually be the order in which the rows were added, but this is not guaranteed and is liable to change at any time. For example, if the table is indexed or partitioned, the order will be different.
No, DataReader will return the results in the order they come back from SQL. If you don't specify an ORDER BY clause, that will be the order that they exist in the table.
It is possible, perhaps even likely that they will always return in the same order, but this isn't guaranteed. The order is determined by the queryplan (at least in SQL Server) on the database server. If something changes that queryplan, the order could change. You should always use ORDER BY if the order of results is in anyway important to your processing of the data.
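If the order of the results matters to your processing, the usual fix is simply to make it explicit in the query itself, e.g. for the query from the question:
SELECT * FROM products
WHERE productid < 200
ORDER BY productid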