DB2 XML extract into columns

I have XML like the following, with nearly 100 tags, and I need to extract it into columns:
<root><care>123</care><look>test</look><see>type</see></root>
My Expected Result:
care look see
123 test type
I have this example, but it only works with an explicit column mapping, and I have nearly 100 tags:
SELECT X.care,
       X.look,
       X.see
FROM (SELECT XMLPARSE(DOCUMENT '<root><care>123</care><look>test</look><see>type</see></root>') AS care_test
      FROM SYSIBM.SYSDUMMY1) AS A,
     XMLTABLE (
         '$d/root'
         PASSING care_test AS "d"
         COLUMNS care VARCHAR (10) PATH 'care',
                 look VARCHAR (10) PATH 'look',
                 see VARCHAR (10) PATH 'see') AS X;
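If the mapping itself is the pain point, the COLUMNS clause can be generated rather than typed out a hundred times. A minimal Python sketch (outside DB2, standard library only) that derives both one extracted row and the COLUMNS text from the sample document:

```python
import xml.etree.ElementTree as ET

xml_doc = "<root><care>123</care><look>test</look><see>type</see></root>"
root = ET.fromstring(xml_doc)

# One row: tag name -> text, mirroring what XMLTABLE would produce.
row = {child.tag: child.text for child in root}

# With ~100 tags, the COLUMNS clause can be generated instead of hand-written.
columns_clause = ",\n".join(
    f"{child.tag} VARCHAR (10) PATH '{child.tag}'" for child in root
)
print(columns_clause)
```

The generated text can then be pasted into (or used to build) the XMLTABLE call; the VARCHAR(10) width is an assumption carried over from the sample query.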

Related

Is there a way to add the same row multiple times with different ids into a table with postgresql?

I am trying to add the same data for a row into my table x number of times in PostgreSQL. Is there a way of doing that without manually entering the same values x number of times? I am looking for the equivalent of SQL Server's GO [count] in Postgres... if that exists.
Use the function generate_series(), e.g.:
insert into my_table
select id, 'alfa', 'beta'
from generate_series(1,4) as id;
Idea
Produce a resultset of a given size and cross join it with the record that you want to insert x times. What would still be missing is the generation of proper PK values. A specific suggestion would require more details on the data model.
Query
The sample query below presupposes that your PK values are autogenerated.
CREATE TABLE test ( id SERIAL, a VARCHAR(10), b VARCHAR(10) );
INSERT INTO test (a, b)
WITH RECURSIVE Numbers(i) AS (
    SELECT 1
    UNION ALL
    SELECT i + 1
    FROM Numbers
    WHERE i < 5 -- This is the value `x`
)
SELECT adhoc.*
FROM Numbers n
CROSS JOIN ( -- This is the single record to be inserted multiple times
    SELECT 'value_a' a
         , 'value_b' b
) adhoc
;
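The same pattern runs unchanged on any engine with recursive CTEs. A quick sketch using Python's bundled sqlite3, with the table and values from the answer above (AUTOINCREMENT stands in for SERIAL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT, a TEXT, b TEXT)"
)

# Recursive CTE producing 1..5, cross-joined with the single record to insert.
conn.execute("""
    INSERT INTO test (a, b)
    WITH RECURSIVE Numbers(i) AS (
        SELECT 1
        UNION ALL
        SELECT i + 1 FROM Numbers WHERE i < 5
    )
    SELECT adhoc.a, adhoc.b
    FROM Numbers
    CROSS JOIN (SELECT 'value_a' AS a, 'value_b' AS b) AS adhoc
""")
rows = conn.execute("SELECT id, a, b FROM test ORDER BY id").fetchall()
print(rows)  # five copies of the record, with distinct autogenerated ids
```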
Note / Reference
The solution is adapted from an existing example with minor modifications (there are a host of other solutions for generating x consecutive numbers with SQL hierarchical / recursive queries, so the choice of reference is somewhat arbitrary).

In a PostgreSQL crosstab, can I automate the tuple part?

I'm trying to get a tall table (with just 3 columns indicating variable, timestamp and value) into a wide format where timestamp is the index, the columns are the variable names, and the values are the values of the new table.
In python/pandas this would be something along the lines of
import pandas as pd
df = pd.read_csv("./mydata.csv") # assume timestamp, varname & value columns
df.pivot(index="timestamp", columns="varname", values="value")
For PostgreSQL there is crosstab; so far I have:
SELECT * FROM crosstab(
$$
SELECT
"timestamp",
"varname",
"value"
FROM mydata
ORDER BY "timestamp" ASC, "varname" ASC
$$
) AS ct(
"timestamp" timestamp,
"varname1" numeric,
...
"varnameN" numeric
);
The problem is that I can potentially have dozens to hundreds of variable names. The types are always numeric, number of variable names is not stable (we could need more variables or realize that others are not necessary).
Is there a way to automate the "ct" part so that some other query (e.g. select distinct "varname" from mydata) produces it instead of me having to type in every single variable name present?
PS: the PostgreSQL version is 12.9 at home, 14.0 in production. The number of rows in the original table is around 2 million, but I'm going to filter by timestamp and varname, so potentially only a few hundred thousand rows. After filtering I get ~50 unique varnames, but that will increase in a few weeks.
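Since crosstab cannot infer its output columns, the ct(...) part has to be generated by a client. A sketch in Python that builds the statement from a list of varnames; the list here is a hard-coded stand-in for the result of select distinct "varname" from mydata, and with the one-argument form of crosstab the column order must match the varname sort order:

```python
# Stand-in for: SELECT DISTINCT varname FROM mydata ORDER BY varname
varnames = ["temp", "pressure", "humidity"]

# One "name numeric" entry per variable, sorted to match ORDER BY varname.
cols = ",\n    ".join(f'"{v}" numeric' for v in sorted(varnames))

sql = f"""SELECT * FROM crosstab(
  $$SELECT "timestamp", "varname", "value"
    FROM mydata
    ORDER BY "timestamp" ASC, "varname" ASC$$
) AS ct(
    "timestamp" timestamp,
    {cols}
);"""
print(sql)
```

The generated string can then be executed as a second round trip (e.g. via psycopg); re-run the generation whenever the set of varnames changes.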

Ignore specific rows from a table

I have a table with data like this:
ND
10212121
10232323
10212323
212526
295652
232565
I would like to make a SELECT of all ND values from this table, excluding those starting with 10, using OPENQUERY to an Oracle database.
Regards
In the following query I check the first two characters of the ND column and compare against '10' to see if they are equal. You did not mention whether ND is a numeric type, so I added a cast to varchar2 so that the substring works.
SELECT ND
FROM yourTable
WHERE SUBSTR(CAST(ND AS varchar2(30)), 1, 2) <> '10'
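The filter is easy to sanity-check locally. A small sketch with Python's sqlite3 and the sample data (SUBSTR behaves the same way there; NOT LIKE '10%' would work equally well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (ND TEXT)")
conn.executemany(
    "INSERT INTO yourTable VALUES (?)",
    [("10212121",), ("10232323",), ("10212323",),
     ("212526",), ("295652",), ("232565",)],
)

# Keep only NDs whose first two characters are not '10'.
rows = conn.execute(
    "SELECT ND FROM yourTable WHERE SUBSTR(CAST(ND AS TEXT), 1, 2) <> '10'"
).fetchall()
print(rows)
```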

SELECT * except nth column

Is it possible to SELECT * but without n-th column, for example 2nd?
I have some views with 4 and 5 columns (each has different column names, except for the 2nd column), but I do not want to show the second column.
SELECT * -- how to prevent 2nd column to be selected?
FROM view4
WHERE col2 = 'foo';
SELECT * -- how to prevent 2nd column to be selected?
FROM view5
WHERE col2 = 'foo';
without having to list all the columns (since they all have different column names).
The real answer is that you practically cannot (see LINK). This has been a requested feature for decades and the developers refuse to implement it. The best practice is to list the column names instead of using *. Using * is itself a source of performance penalties, though.
However, in case you really need it, you might select the column names directly from the schema (check LINK), or, as in the example below, use two PostgreSQL built-in functions: ARRAY and ARRAY_TO_STRING. The first transforms a query result into an array, and the second concatenates the array components into a string. The component separator can be specified with the second parameter of ARRAY_TO_STRING:
SELECT 'SELECT ' ||
       ARRAY_TO_STRING(ARRAY(SELECT COLUMN_NAME::VARCHAR(50)
                             FROM INFORMATION_SCHEMA.COLUMNS
                             WHERE TABLE_NAME = 'view4' AND
                                   COLUMN_NAME NOT IN ('col2')
                             ORDER BY ORDINAL_POSITION
                       ), ', ') || ' FROM view4';
where strings are concatenated with the standard operator ||. The COLUMN_NAME data type is information_schema.sql_identifier, which requires explicit conversion to CHAR/VARCHAR.
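The same metadata-driven idea works anywhere the catalog is queryable. A sketch using Python's sqlite3, where PRAGMA table_info stands in for INFORMATION_SCHEMA.COLUMNS (view4 here is an ordinary table created just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE view4 (col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT)")

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
# per column, in ordinal position order; drop the unwanted name.
names = [info[1] for info in conn.execute("PRAGMA table_info(view4)")
         if info[1] != "col2"]
sql = "SELECT " + ", ".join(names) + " FROM view4"
print(sql)
```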
But that is not recommended either. What if you add more columns in the long run, but they are not required for that query?
You would start pulling more columns than you need.
What if the SELECT is part of an INSERT, as in
INSERT INTO tableA (col1, col2, col3, ..., coln) SELECT <everything but 2 columns> FROM tableB
The column match will be wrong and your insert will fail.
It's possible, but I still recommend writing out every needed column for every SELECT, even if nearly every column is required.
Conclusion:
Since you are already using a VIEW, the simplest and most reliable way is to alter your view and list the column names, excluding the 2nd column.
-- my table with 2 rows and 4 columns
DROP TABLE IF EXISTS t_target_table;
CREATE TEMP TABLE t_target_table as
SELECT 1 as id, 1 as v1 ,2 as v2,3 as v3,4 as v4
UNION ALL
SELECT 2 as id, 5 as v1 ,-6 as v2,7 as v3,8 as v4
;
-- my computation and stuff that I have to measure; any logic could be done here
DROP TABLE IF EXISTS t_processing;
CREATE TEMP TABLE t_processing as
SELECT *, md5(t_target_table::text) as row_hash, case when v2 < 0 THEN true else false end as has_negative_value_in_v2
FROM t_target_table
;
-- now we want to insert that stuff into the t_target_table
-- this is standard
-- INSERT INTO t_target_table (id, v1, v2, v3, v4) SELECT id, v1, v2, v3, v4 FROM t_processing;
-- this is advanced ;-)
INSERT INTO t_target_table
-- the following row selects only the columns that are present in the target table, and ignores the others
SELECT r.* FROM (SELECT to_jsonb(t_processing) as d FROM t_processing) t JOIN LATERAL jsonb_populate_record(NULL::t_target_table, d) as r ON TRUE
;
-- WARNING: you need an object that represents the target structure; excluding a single column is not possible
For columns col1, col2, col3 and col4 you will need to request
SELECT col1, col3, col4 FROM...
to omit the second column. Requesting
SELECT *
will get you all the columns.

How to return gaps in numbers stored as char with leading zeros?

I have a table with a char(5) field for tracking Bin Numbers. The numbers are stored with leading zeros. The numbers go from 00200 through 90000. There are a lot of gaps in the numbers already in use and I need to be able to query them out so the user knows which numbers are available to use.
Assume you have a table of valid bin numbers.
Table: bins
bin_num
--
00200
00201
00202
...
90000
Assume your table is named "inventory". The bin numbers returned by this query are the ones that aren't in "inventory".
select bins.bin_num
from bins
left join inventory t2
on bins.bin_num = t2.bin_num
where t2.bin_num is null
order by bins.bin_num
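The anti-join is portable. Here is a runnable sketch with Python's sqlite3 and a tiny made-up bins/inventory pair, in which 00201 and 00203 are the unused numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bins (bin_num TEXT);
    CREATE TABLE inventory (bin_num TEXT);
    INSERT INTO bins VALUES ('00200'), ('00201'), ('00202'), ('00203');
    INSERT INTO inventory VALUES ('00200'), ('00202');
""")

# Left join + IS NULL keeps only bins with no matching inventory row.
free = conn.execute("""
    SELECT bins.bin_num
    FROM bins
    LEFT JOIN inventory t2 ON bins.bin_num = t2.bin_num
    WHERE t2.bin_num IS NULL
    ORDER BY bins.bin_num
""").fetchall()
print(free)
```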
If your version of SQL Server supports analytic functions (and, solely for convenience, common table expressions), you can find most of the gaps like this.
with bin_and_next_bin as (
select bin, lead(bin) over (order by bin) next_bin
from inventory
)
select bin
from bin_and_next_bin
where cast(bin as integer) <> cast(next_bin as integer) - 1
Analytic functions don't require a table of valid bin numbers, although you can make a really strong case that you ought to have such a table in the first place. If you're working in an environment where you don't have such a table, and you're not allowed to build such a table, a common table expression can save the day. (It doesn't show "missing" bin numbers before the first used bin number, though, as it's written here.)
One other disadvantage of this statement is that the WHERE clause isn't sargable; it can't use an index. Yet another is that it assumes bin numbers can be cast to integer. The table-based approach doesn't assume anything about the value or data type of the bin number; it works just as well with mixed alphanumerics as it does with integers or anything else.
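The lead() variant can also be tried outside SQL Server. A sketch with Python's sqlite3 (window functions need SQLite 3.25+); with bins 00200, 00201, 00204 and 00205 in use, only 00201 is reported as preceding a gap, and, as noted above, nothing after the last used bin shows up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (bin TEXT);
    INSERT INTO inventory VALUES ('00200'), ('00201'), ('00204'), ('00205');
""")

# A row marks a gap when the next bin number is not exactly one higher.
gaps = conn.execute("""
    WITH bin_and_next_bin AS (
        SELECT bin, LEAD(bin) OVER (ORDER BY bin) AS next_bin
        FROM inventory
    )
    SELECT bin FROM bin_and_next_bin
    WHERE CAST(bin AS INTEGER) <> CAST(next_bin AS INTEGER) - 1
""").fetchall()
print(gaps)
```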
I was able to get exactly what I needed by reading this article by Pinal Dave
I created a stored procedure that returned the gaps in the bin number sequence starting from the first bin number to the last. In my application I group the bin numbers by Shop (Vehicles would be 1000 through 2000, Buildings 2001 through 3000, etc).
ALTER PROCEDURE [dbo].[spSelectLOG_BinsAvailable]
    (@Shop varchar(9))
AS
BEGIN
    DECLARE @Start AS varchar(5) = (SELECT b.Start FROM BinShopCodeBlocks b WHERE b.Shop = @Shop)
    DECLARE @Finish AS varchar(5) = (SELECT b.Finish FROM BinShopCodeBlocks b WHERE b.Shop = @Shop)
    SET NOCOUNT ON;
    WITH CTE
    AS
    (SELECT
        CAST(@Start AS int) AS Start,
        CAST(@Finish AS int) AS Finish
    UNION ALL
    SELECT
        Start + 1,
        Finish
    FROM
        CTE
    WHERE
        Start < Finish
    )
    SELECT
        RIGHT('00000' + CAST(Start AS VARCHAR(5)), 5)
    FROM CTE
    WHERE
        NOT EXISTS
        (SELECT *
         FROM
            BinMaster b
         WHERE
            b.BinNumber = RIGHT('00000' + CAST(Start AS VARCHAR(5)), 5)
        )
    OPTION (MAXRECURSION 0);
END
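The procedure's core idea (recursive number range, NOT EXISTS against the used bins, left-pad back to char(5)) can be mirrored on any engine with recursive CTEs. A sketch in Python's sqlite3, where printf('%05d', n) plays the role of RIGHT('00000' + ..., 5); the range and bin numbers are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE BinMaster (BinNumber TEXT);
    INSERT INTO BinMaster VALUES ('00200'), ('00202'), ('00203');
""")

# Generate 200..205, keep the numbers whose padded form is not in BinMaster.
available = conn.execute("""
    WITH RECURSIVE cte(n) AS (
        SELECT 200
        UNION ALL
        SELECT n + 1 FROM cte WHERE n < 205
    )
    SELECT printf('%05d', n) FROM cte
    WHERE NOT EXISTS (
        SELECT 1 FROM BinMaster b WHERE b.BinNumber = printf('%05d', n)
    )
""").fetchall()
print(available)
```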