I have two columns, colA and colB, in DB2.
colA contains the string #+1 XAT59001XBY999$
colB has two rows containing XAT59001 and XBY999.
I want to check whether colA contains the colB data.
You could use INSTR here:
SELECT *
FROM yourTable
WHERE INSTR(colA, colB) > 0;
You may also use LIKE as follows:
SELECT *
FROM yourTable
WHERE colA LIKE '%' || colB || '%';
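If INSTR is not available on your DB2 version, LOCATE should behave the same way here; note that its argument order is reversed (search string first):
SELECT *
FROM yourTable
WHERE LOCATE(colB, colA) > 0;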
Self-join the table with a LIKE condition:
SELECT DISTINCT t1.colA, t2.colB
FROM TestTable t1
JOIN TestTable t2 ON t1.colA LIKE '%' || t2.colB || '%';
Here is my query:
SELECT * FROM table1
WHERE col1 IN (SELECT cola, colb, colc, cold FROM table2)
where all columns are of data type integer. When I execute this query I get "subquery has too many columns". What would be the right way to do this?
You could aggregate all four columns into a single array and compare against that:
SELECT * FROM table1
WHERE col1 = ANY (
(SELECT array_agg(ARRAY[cola, colb, colc, cold])
 FROM table2
)::integer[]
);
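Assuming you only need col1 to match any one of the four columns, a plainer alternative is to stack them with UNION ALL so the subquery returns a single column:
SELECT * FROM table1
WHERE col1 IN (
    SELECT cola FROM table2
    UNION ALL SELECT colb FROM table2
    UNION ALL SELECT colc FROM table2
    UNION ALL SELECT cold FROM table2
);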
I have Table1 and Table2.
Table1 has columnA and columnB and a bunch of others. Table2 has columnC and other columns.
I want to query out all rows from Table2 where columnC follows this pattern: {Table1_columnA}_{blahblahblah}_{Table1_columnB}. Note that columnC can have other values like "123_456_789" which has two underscores but does not follow the above pattern.
SELECT t2.*
FROM Table2 t2
INNER JOIN Table1 t1
ON t2.columnC LIKE '%' || t1.columnA || '_blahblahblah_' || t1.columnB || '%'
Something like this:
select *
from table1 t1
join table2 t2 on
t2.columnC = concat('{', t1.columnA, '}_{blahblahblah}_{', t1.columnB, '}')
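If the middle part is actually arbitrary rather than the literal text blahblahblah, keep in mind that _ is itself a LIKE wildcard; here is a sketch that matches literal underscores with the standard ESCAPE clause (assuming your DBMS supports both ESCAPE and the || concatenation used in the first answer):
SELECT t2.*
FROM Table2 t2
JOIN Table1 t1
  ON t2.columnC LIKE t1.columnA || '\_%\_' || t1.columnB ESCAPE '\';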
Anyone know how string_agg results need to be "massaged" so they can be used in an IN statement?
The following is some sample code. Thanks for your time.
P.S.: Before you scratch your head and ask what on earth this is for: I'm only using this code to show the problem with string_agg because, as you can see, the query is otherwise a bit pointless.
Henry
WITH TEMP AS
(
SELECT 'John' AS col1
UNION ALL
SELECT 'Peter' AS col1
UNION ALL
SELECT 'Henry' AS col1
UNION ALL
SELECT 'Mo' AS col1
)
-- results that are being used in the IN statement
--SELECT string_agg('''' || col1::TEXT || '''',',') AS col1 FROM TEMP
SELECT col1 FROM TEMP
WHERE col1 IN
(
SELECT string_agg('''' || col1::TEXT || '''',',') AS col1
FROM TEMP
)
You can't mix dynamic code with static code. Your example is not very clear as to what exactly it is that you want to do. Your sample could be written as:
WITH TEMP(col1) AS (values ('John'), ('Peter'), ('Henry'), ('Mo'))
SELECT col1 FROM TEMP
WHERE col1 IN (SELECT col1 FROM TEMP)
or using an array:
WITH TEMP(col1) AS (values ('John'), ('Peter'), ('Henry'), ('Mo'))
SELECT col1 FROM TEMP
WHERE col1 = ANY (ARRAY(SELECT col1 FROM TEMP))
or simply (in this case since the main from and the subselect are the same table without any filters):
WITH TEMP(col1) AS (values ('John'), ('Peter'), ('Henry'), ('Mo'))
SELECT col1 FROM TEMP
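If you really do have to start from the aggregated string (for example, it arrives as a comma-separated list), it needs to be split back into rows before IN can use it; a sketch, assuming PostgreSQL's string_to_array() and unnest():
WITH TEMP(col1) AS (values ('John'), ('Peter'), ('Henry'), ('Mo'))
SELECT col1 FROM TEMP
WHERE col1 IN (
    SELECT unnest(string_to_array((SELECT string_agg(col1, ',') FROM TEMP), ','))
);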
I have a result from a query like the one below, which does not have a fixed number of columns.
ID COL1 COL2 COL3 COL4
-------------------------------------
1 VAL11 VAL12 VAL13 VAL14
2 VAL21 VAL22 VAL23 VAL24
Now I want the result to be something like this.
RESULT
-----------------------------------------------------
ID:1, COL1:VAL11, COL2:VAL12, COL3:VAL13, COL4:VAL14
ID:2, COL1:VAL21, COL2:VAL22, COL3:VAL23, COL4:VAL24
Please help.
The quick and dirty way, but without the column names and including NULL values:
SELECT tbl::text
FROM tbl;
The slow & sure way:
SELECT array_to_string(ARRAY[
'ID:' || id
,'COL1:' || col1
,'COL2:' || col2
], ', ') AS result
FROM tbl;
If a column holds a NULL value, it will be missing in the result. I do not just concatenate, because NULL values would nullify the whole row.
array_to_string() makes sure that commas are only inserted where needed.
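A quick illustration of the difference, using ad-hoc values rather than the table above:
SELECT 'ID:' || 1 || ', COL1:' || NULL::text;                            -- NULL
SELECT array_to_string(ARRAY['ID:' || 1, 'COL1:' || NULL::text], ', ');  -- 'ID:1'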
PostgreSQL 9.1 introduced the new function concat_ws() (much like the one in MySQL) with which we can further simplify:
SELECT concat_ws(', '
,'ID:' || id
,'COL1:' || col1
,'COL2:' || col2
) AS result
FROM tbl;
SELECT
'ID:' ||coalesce(id::text, '<null>')
||', '||'COL1:'||coalesce(col1::text, '<null>')
||', '||'COL2:'||coalesce(col2::text, '<null>')
FROM tbl;
You can use this SQL to generate the statement above for you (in case there are lots of columns):
SELECT E'SELECT \n'||string_agg(trim(stmt), E' \n')||E'\n FROM tbl;'
FROM (SELECT
CASE WHEN a.attnum > 1 THEN $$||', '||$$ ELSE '' END ||
$$'$$||upper(a.attname)||$$:'||coalesce($$||quote_ident(a.attname)||
$$::text, '<null>')$$ AS stmt
FROM pg_attribute a, pg_class t
WHERE t.relkind='r' AND t.relname = 'tbl' AND a.attrelid = t.oid
AND NOT a.attisdropped AND a.attnum > 0) AS s;
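Not covered in the answers above, but if you are on PostgreSQL 9.2 or later, row_to_json() will include the column names for you (a sketch; the output is JSON rather than the exact ID:/COL1: format asked for):
SELECT row_to_json(tbl) AS result
FROM tbl;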
My SQL is rusty -- I have a simple requirement to calculate the sum of the greater of two column values:
CREATE TABLE [dbo].[Test]
(
column1 int NOT NULL,
column2 int NOT NULL
);
insert into Test (column1, column2) values (2,3)
insert into Test (column1, column2) values (6,3)
insert into Test (column1, column2) values (4,6)
insert into Test (column1, column2) values (9,1)
insert into Test (column1, column2) values (5,8)
In the absence of the GREATEST function in SQL Server, I can get the larger of the two columns with this:
select column1, column2, (select max(c)
from (select column1 as c
union all
select column2) as cs) Greatest
from test
And I was hoping that I could simply sum them thus:
select sum((select max(c)
from (select column1 as c
union all
select column2) as cs))
from test
But no dice:
Msg 130, Level 15, State 1, Line 7
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
Is this possible in T-SQL without resorting to a procedure/temp table?
UPDATE: Eran, thanks - I used this approach. My final expression is a little more complicated, however, and I'm wondering about performance in this case:
SUM(CASE WHEN ABS(column1 * column2) > ABS(column3 * column4)
THEN column5 * ABS(column1 * column2) * column6
ELSE column5 * ABS(column3 * column4) * column6 END)
Try this:
SELECT SUM(CASE WHEN column1 > column2
THEN column1
ELSE column2 END)
FROM test
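With the sample rows above, the greater of the two columns is 3, 6, 6, 9 and 8 respectively, so this returns 32.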
Try this... It's not the best-performing option, but it should work.
SELECT
'LargerValue' = CASE
    WHEN SUM(column1) >= SUM(column2) THEN SUM(column1)
    ELSE SUM(column2)
END
FROM Test
SELECT
SUM(MaximumValue)
FROM (
SELECT
CASE WHEN column1 > column2
THEN
column1
ELSE
column2
END AS MaximumValue
FROM
Test
) A
FYI, the more complicated case should be fine, so long as all of those columns are part of the same table. It's still looking up the same number of rows, so performance should be very similar to the simpler case (as SQL Server performance is usually IO bound).
How to find the max across several columns of a single row
-- e.g. (emplid, date1, date2, date3)
select tmp.emplid, max(tmp.dt) as max_date
from
  (select emplid, date1 as dt from [table]
   union all
   select emplid, date2 from [table]
   union all
   select emplid, date3 from [table]
  ) tmp
group by tmp.emplid
select sum(id) from (
    select (select max(c)
            from (select column1 as c
                  union all
                  select column2) as cs) as id
    from test
) as t
The best answer to this is, simply put:
;With Greatest_CTE As
(
Select ( Select Max(ValueField) From ( Values (column1), (column2) ) ValueTable(ValueField) ) Greatest
From Test
)
Select Sum(Greatest)
From Greatest_CTE
It scales a lot better than the other answers with more than two value columns.
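For example, if the table also had a hypothetical column3, only the VALUES list changes:
;With Greatest_CTE As
(
    Select ( Select Max(ValueField) From ( Values (column1), (column2), (column3) ) ValueTable(ValueField) ) Greatest
    From Test
)
Select Sum(Greatest)
From Greatest_CTE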