How to search for multiple strings in a field in DB2? My query fails with '' (blank/empty) values - db2

Is there a way to build a DB2 query to search a combination of strings in a field?
I am trying to do this:
select ITEM, ITEM_DESC
from MYTABLE -- MYTABLE is a placeholder table name
where instr(ITEM_DESC, >parameter1<) > 0
AND instr(ITEM_DESC, >parameter2<) > 0
AND instr(ITEM_DESC, >parameter3<) > 0
AND instr(ITEM_DESC, >parameter4<) > 0
UNION
select ITEM, ITEM_DESC
from MYTABLE
where instr(ITEM_DESC, >parameter5<) > 0
OR instr(ITEM_DESC, >parameter6<) > 0
OR instr(ITEM_DESC, >parameter7<) > 0
OR instr(ITEM_DESC, >parameter8<) > 0
The >parameters< are string inputs from the user; sometimes the user will not use all of them (a dynamic list?).
The query works only if all parameters have a string value; when at least one is '' (empty), the query fails.
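One way to make the query tolerate empty inputs (a sketch, not from the original post; MYTABLE is again a placeholder table name) is to guard each predicate so that an empty parameter becomes neutral: always true in the AND block, always false in the OR block:
select ITEM, ITEM_DESC
from MYTABLE
where (>parameter1< = '' OR instr(ITEM_DESC, >parameter1<) > 0)
AND (>parameter2< = '' OR instr(ITEM_DESC, >parameter2<) > 0)
AND (>parameter3< = '' OR instr(ITEM_DESC, >parameter3<) > 0)
AND (>parameter4< = '' OR instr(ITEM_DESC, >parameter4<) > 0)
UNION
select ITEM, ITEM_DESC
from MYTABLE
where (>parameter5< <> '' AND instr(ITEM_DESC, >parameter5<) > 0)
OR (>parameter6< <> '' AND instr(ITEM_DESC, >parameter6<) > 0)
OR (>parameter7< <> '' AND instr(ITEM_DESC, >parameter7<) > 0)
OR (>parameter8< <> '' AND instr(ITEM_DESC, >parameter8<) > 0)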

Related

Grafana: how to overwrite values in one table with value mapping retrieved from another table

I have two tables as below. The first table contains descriptions of all statuses. How do I overwrite the column status_id in the right table with the name from the left table, based on id?
Option 1: fixed list of mappings
When the statuses are fixed and few in number, I would recommend using value mappings.
Make sure not to apply it to the whole table, but instead to use an override for the field status_id in your table.
Option 2: variable list of mappings
In case the statuses change or new ones are added, I would suggest a workaround as follows (since I don't use PostgreSQL, I show SQL statements in Google BigQuery standard SQL; please adjust for your use case):
Create a query variable: query the table with the mappings (the descriptions of the statuses) and, using concatenation, create the WHEN ... THEN ... part of a CASE statement. Example:
SELECT CONCAT('WHEN "', id, '" THEN "', name, '" ')
FROM ID_TABLE
This will give you rows like this: WHEN "1" THEN "OK". You then have to concatenate/aggregate those rows into a single string.
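In BigQuery standard SQL, that aggregation could be done directly in the variable query with STRING_AGG, for example:
SELECT STRING_AGG(CONCAT('WHEN "', id, '" THEN "', name, '"'), ' ')
FROM ID_TABLE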
Then use the variable in the query for your final table like in this example:
SELECT
CASE status_id
${QUERY_VARIABLE}
ELSE "UNKNOWN STATUS"
END AS status_id,
...
FROM YOUR_DATA_TABLE
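After Grafana substitutes the variable, the final query would read like this (the second status, "FAILED", is a made-up example):
SELECT
CASE status_id
WHEN "1" THEN "OK"
WHEN "2" THEN "FAILED"
ELSE "UNKNOWN STATUS"
END AS status_id,
...
FROM YOUR_DATA_TABLE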

Azure Data Factory: pass where clause as a string to dynamic query with quotes

I have a Lookup that retrieves a few records from an MS SQL table containing a schema, a table name and a whole where clause. These values are passed to a Copy Data activity (within a ForEach). In the Copy Data activity I use a dynamic query statement like:
@concat('select a.*, current_date as crt_tms from ', item().shm_nam, '.', item().tab_nam,
item().where_clause)
This construction works fine without the where_clause, or with a where clause on an integer. But it goes wrong with strings like:
'a where a.CODSYSBRN ='XXX' ;'
The problem is the single quote (').
How can I pass it through?
I know that the where clause as a fixed string in the dynamic query works when I use doubled single quotes (to escape the single quote):
'a where a.CODSYSBRN =''XXX'' ;'
The point is that I need the where clause to be completely dynamic, because it differs per table.
Whatever I try, I get this kind of error:
Syntax error or access violation;257 sql syntax error: incorrect syntax near "where a"
PS: I also tested this, but with the same result:
select a.*, current_date as crt_tms from @{item().shm_nam}.@{item().tab_nam} a @{item().where_clause}
As you have mentioned, you are getting the whole where clause from the lookup table, so the lookup values must already include the column values in the where clause, quoted appropriately for string and integer types.
Example lookup table:
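For illustration, hypothetical rows might look like this (the schema and table names are made up; note that the single quotes around string values are stored literally in where_clause):
schma_name | table_name | where_clause
-----------+------------+---------------------------
dbo        | customers  | where a.CODSYSBRN = 'XXX'
dbo        | orders     | where a.ord_qty > 100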
In your copy activity, you can use the concat() function, as you were already doing, to combine static values and parameters.
@concat('select * from ', item().schma_name, '.', item().table_name, ' ', item().where_clause)
For debugging purposes, I added the expression to a Set Variable activity, to see the value of the expression.

PostgreSQL, allow filtering by non-existing fields

I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something exists in the DB. Before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist and I could get some results.
Is it possible to handle such cases so the query returns what it can?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
The query can be as simple as the previous example, or much more complex, like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, then remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would try to check within the information schema first:
select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = 'table_name';
And then, based on the result, build the query.
Why don't you get a list of columns that are in the table first? Like this
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which can be obtained by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting, so you get an error if a non-existing column is referenced.
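A minimal sketch of that two-step approach, assuming the table is named t and the candidate predicates are size = 10 and length = 10:
-- step 1: find which of the candidate columns actually exist
select column_name
from information_schema.columns
where table_name = 't'
and column_name in ('size', 'length');
-- step 2 (done in the client): rebuild the WHERE clause from the
-- predicates whose columns survived step 1; if only "size" exists, run:
select * from t where size = 10;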

What PostgreSQL type is good for storing an array of strings and offering fast lookup afterwards

I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-value pairs. There is a requirement to include all values from this column when we perform a search. Searching in jsonb is quite slow, so my plan is to create a trigger which will extract all the values from the jsonb column with something like this:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
And then insert the values into a newly created column in the same table, so I can use this column for faster searches.
My question is: what type would be most suitable for storing the keys and then searching within them? Currently the search looks like this:
CASE
WHEN something IS NOT NULL
THEN EXISTS(SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where search_term is what the user entered on the front end.
This is not going to be pretty, and normalizing the data model would be better.
You can define a function
CREATE FUNCTION jsonb_values_to_string(
j jsonb,
separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left hand side expression to speed it up.
Make sure that you choose a separator that does not occur in the data or the pattern...
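Such an index could look like this, assuming the table app.t1 from the question and the pg_trgm extension (the function is declared IMMUTABLE, so an expression index is allowed):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- trigram index over the concatenated jsonb values
CREATE INDEX t1_jsonb_values_trgm_idx ON app.t1
USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);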

Column returns error when called in Select statement, but returns values when * is used

Values retrieved for a table column are incorrect unless I use SELECT * FROM table
I'm sure this is something basic that just isn't clicking with me.
Accurate values for the User column are returned when the wildcard * is run against the table, but my SQL Server username replaces the values in the column when it is named in a SELECT list. If I try to reference the column in the SELECT statement with a table alias, I get "Incorrect syntax near the keyword 'User'."
User column is nvarchar(12), Action column is nvarchar(50)
This query returns accurate data in the User column:
SELECT *
FROM TransHistory as th
This query returns my SQL Server username in the User column instead of the actual values:
SELECT Action, User
FROM TransHistory
This query, using a table alias, results in "incorrect syntax near the keyword 'User'.":
SELECT th.Action, th.User
FROM TransHistory as th
I expect to get the accurate User ID from the table in all 3 queries.
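A likely explanation (not part of the original post): USER is a reserved keyword in T-SQL and, on its own, evaluates to the name of the current user, so an unquoted User in the SELECT list resolves to that built-in instead of your column. Quoting the identifier with brackets should return the column values:
-- bracket-quote the reserved word so it refers to the column
SELECT th.Action, th.[User]
FROM TransHistory AS th;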