This query works in PostgreSQL:
Select ot.MCode,array_to_string(array_agg(tk1.TName || ',' || ot.TTime), ' - ') as oujyu_name_list
From TR_A ot
inner join MS_B tk1 on ot.Code = tk1.Code
Where ot.Code in (Select Code From TR_C )
Group by ot.MCode
but it does not work in SQLite, because SQLite does not have the array_agg() function. How can this query be converted to SQLite?
For this query, you can use group_concat, which directly returns a string:
SELECT ..., group_concat(tk1.TName || ',' || ot.TTime, ' - ')
FROM ...
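Putting it together, a minimal sketch of the full converted query (same tables and columns as in the question) could look like this:
SELECT ot.MCode,
       group_concat(tk1.TName || ',' || ot.TTime, ' - ') AS oujyu_name_list
FROM TR_A ot
INNER JOIN MS_B tk1 ON ot.Code = tk1.Code
WHERE ot.Code IN (SELECT Code FROM TR_C)
GROUP BY ot.MCode;
Note that, unlike array_agg() fed into array_to_string(), SQLite's group_concat() does not guarantee the order in which the values are concatenated.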
SQLite now has the JSON1 extension (which ships in the default distribution) that can group and create arrays of JSON objects. For example,
select
ot.MCode,
json_group_array(json_object('tname', tk1.TName, 'ttime', ot.TTime)) as oujyu_name_list
from TR_A as ot
inner join MS_B as tk1
on (ot.Code = tk1.Code)
where ot.Code in (select code from TR_C)
group by ot.MCode;
The second column will be formatted as a JSON array e.g. [{"tname":...,"ttime":...},...].
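If you later need to unpack that JSON array on the SQLite side, a sketch using the JSON1 table-valued function json_each could look like the following (the subquery alias t is used here just for illustration):
select t.MCode,
       json_extract(j.value, '$.tname') as tname,
       json_extract(j.value, '$.ttime') as ttime
from (
    select ot.MCode,
           json_group_array(json_object('tname', tk1.TName, 'ttime', ot.TTime)) as oujyu_name_list
    from TR_A as ot
    inner join MS_B as tk1 on ot.Code = tk1.Code
    where ot.Code in (select Code from TR_C)
    group by ot.MCode
) as t, json_each(t.oujyu_name_list) as j;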
Related
SELECT ACTOR_NAME
FROM MOVIES
WHERE MOVIE_NAME = 'Isle of Dogs'
ORDER BY SUBSTR(ACTOR_NAME, INSTR(ACTOR_NAME, ' ', -1)+1) ASC;
I think what you're looking for is the POSITION function in place of INSTR.
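For example, a minimal sketch of that substitution might be the following (note that POSITION finds the first space rather than the last, so it only reproduces the INSTR(..., ' ', -1) behaviour when the name contains a single space):
SELECT ACTOR_NAME
FROM MOVIES
WHERE MOVIE_NAME = 'Isle of Dogs'
ORDER BY SUBSTR(ACTOR_NAME, POSITION(' ' IN ACTOR_NAME) + 1) ASC;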
My problem is this: the following query works when I run it in MySQL.
Query:
SELECT
CONCAT(b.tarih, '#', CONCAT(b.enlem, ',', b.boylam), '#', b.aldigi_yol) AS IlkMesaiEnlemBoylamImei,
CONCAT(tson.max_tarih, '#', CONCAT(tson.max_enlem, ',', tson.max_boylam), '#', tson.max_aldigi_yol) AS SonMesaiEnlemBoylamImei,
Max(CAST(b.hiz AS UNSIGNED)) As EnYuksekHiz,
TIME_FORMAT(Sec_TO_TIME(TIMESTAMPDIFF(SECOND, (b.tarih), (tson.max_tarih))), '%H:%i') AS DurmaSuresi
FROM
(Select id as max_id, tarih as max_tarih, enlem as max_enlem, boylam as max_boylam, aldigi_yol as max_aldigi_yol from _213gl2015016424 where id in(
SELECT MAX(id)
FROM _213gl2015016424 where (tarih between DATE('2016-11-30 05:45:00') AND Date('2017-01-13 14:19:06')) AND CAST(hiz AS UNSIGNED) > 0
GROUP BY DATE(tarih))
) tson
LEFT JOIN _213gl2015016424 a ON a.id = tson.max_id
LEFT JOIN _213gl2015016424 b ON DATE(b.tarih) = DATE(a.tarih)
WHERE b.tarih is not null And (b.tarih between DATE('2016-11-30 05:45:00') AND Date('2017-01-13 14:19:06')) AND b.hiz > 0
GROUP BY tson.max_tarih
The output is ordered by date.
When I try to run the equivalent query in PostgreSQL, I get a GROUP BY error.
Query:
SELECT
CONCAT(b.tarih, '#', CONCAT(b.enlem, ',', b.boylam), '#', b.toplamyol) AS IlkMesaiEnlemBoylamImei,
CONCAT(tson.max_tarih, '#', CONCAT(tson.max_enlem, ',', tson.max_boylam), '#', tson.max_toplamyol) AS SonMesaiEnlemBoylamImei,
Max(CAST(b.hiz AS OID)) As EnYuksekHiz,
to_char(to_timestamp((extract(epoch from (tson.max_tarih)) - extract(epoch from (b.tarih)))) - interval '2 hour','HH24:MI') AS DurmaSuresi
FROM
(Select id as max_id, tarih as max_tarih, enlem as max_enlem, boylam as max_boylam, toplamyol as max_toplamyol from _213GL2016008691 where id in(
SELECT MAX(id)
FROM _213GL2016008691 where (tarih between DATE('2018-02-01 03:31:54') AND DATE('2018-03-01 03:31:54')) AND CAST(hiz AS OID) > 0
GROUP BY DATE(tarih))
) tson
LEFT JOIN _213GL2016008691 a ON a.id = tson.max_id
LEFT JOIN _213GL2016008691 b ON DATE(b.tarih) = DATE(a.tarih)
WHERE b.tarih is not null And (b.tarih between DATE('2018-02-12 03:31:54') AND DATE('2018-02-13 03:31:54')) AND b.hiz > 0
GROUP BY tson.max_tarih
The GROUP BY error is: To use the aggregate function, you must add the column "b.tarih" to the GROUP BY list.
When I add it, I get the same error for another column. I'm waiting for your help.
You are using a feature of MySQL that is not standard SQL and that can also be disabled (it is controlled by the ONLY_FULL_GROUP_BY SQL mode).
You are grouping by tson.max_tarih in your query. That means that for all rows that share the same value in that field, you will get only one row as a result of that group.
If you have several different values in the rest of the fields (enlem, boylam, etc...) which one are you trying to get in as the result of the query? That's the question that PostgreSQL is asking you.
MySQL is just returning any value for those fields among the rows in the group. PostgreSQL requires you to actually specify it.
Two typical solutions would be grouping by the rest of the fields (b.tarih, b.enlem, ...) or wrapping those fields in an aggregate such as MAX(b.tarih), etc., as sketched below.
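For illustration only, a minimal sketch of the second approach for the PostgreSQL query above: wrap the b.* columns in an aggregate (MIN() here, which may or may not pick the row you actually want) and list every remaining non-aggregated column in the GROUP BY. The FROM, JOIN and WHERE parts are the same as in your query and are elided here:
SELECT
    CONCAT(MIN(b.tarih), '#', CONCAT(MIN(b.enlem), ',', MIN(b.boylam)), '#', MIN(b.toplamyol)) AS IlkMesaiEnlemBoylamImei,
    CONCAT(tson.max_tarih, '#', CONCAT(tson.max_enlem, ',', tson.max_boylam), '#', tson.max_toplamyol) AS SonMesaiEnlemBoylamImei,
    MAX(CAST(b.hiz AS integer)) AS EnYuksekHiz  -- integer is probably what you want here rather than OID
FROM ...  -- same tson derived table, LEFT JOINs and WHERE clause as in the query above
GROUP BY tson.max_tarih, tson.max_enlem, tson.max_boylam, tson.max_toplamyol;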
I need to make changes to an SP which has a bunch of complex XML functions and whatnot:
Declare ResultCsr2 Cursor For
WITH
MDI_BOM_COMP(PROD_ID,SITE_ID, xml ) AS (
SELECT TC401F.T41PID,TC401F.T41SID,
XMLSERIALIZE(
XMLAGG(
XMLELEMENT( NAME "MDI_BOM_COMP",
XMLFOREST(
trim(TC401F.T41CTY) AS COMPONENT_TYPE,
TC401F.T41LNO AS COMP_NUM,
trim(TC401F.T41CTO) AS CTRY_OF_ORIGIN,
trim(TC401F.T41DSC) AS DESCRIPTION,
TC401F.T41EFR AS EFFECTIVE_FROM,
TC401F.T41EFT AS EFFECTIVE_TO,
trim(TC401F.T41MID) AS MANUFACTURER_ID,
trim(TC401F.T41MOC) AS MANUFACTURER_ORG_CODE,
trim(TC401F.T41CNO) AS PROD_ID,
trim(TC401F.T41POC) AS PROD_ORG_CODE,
TC401F.T41QPR AS QTY_PER,
trim(TC401F.T41SBI) AS SUB_BOM_ID,
trim(TC401F.T41SBO) AS SUB_BOM_ORG_CODE, --ADB01
trim(TC401F.T41VID) AS SUPPLIER_ID,
trim(TC401F.T41SOC) AS SUPPLIER_ORG_CODE,
TC401F.T41UCT AS UNIT_COST
)
)
) AS CLOB(1M)
)
FROM TC401F TC401F
GROUP BY T41PID,T41SID
)
SELECT
RowNum, '<BOM_INBOUND>' ||
XMLSERIALIZE (
XMLELEMENT(NAME "INTEGRATION_MESSAGE_CONTROL",
XMLFOREST(
'FULL_UPDATE' as ACTION,
'POLARIS' as COMPANY_CODE,
TRIM(TC400F.T40OCD) as ORG_CODE,
'5' as PRIORITY,
'INBOUND_ENTITY_INTEGRATION' as MESSAGE_TYPE,
'POLARIS_INTEGRATION' as USERID,
'TA' as RECEIVER,
HEX(Generate_Unique()) as SOURCE_SYSTEM_TOKEN
),
XMLELEMENT(NAME "BUS_KEY",
XMLFOREST(
TRIM(TC400F.T40BID) as BOM_ID,
TRIM(TC400F.T40OCD) as ORG_CODE
)
)
) AS VARCHAR(1000)
)
|| '<MDI_BOM>' ||
XMLSERIALIZE (
XMLFOREST(
TRIM(TC400F.T40ATP) AS ASSEMBLY_TYPE,
TRIM(TC400F.T40BID) AS BOM_ID,
TRIM(TC400F.T40CCD) AS CURRENCY_CODE,
TC400F.T40DPC AS DIRECT_PROCESSING_COST,
TC400F.T40EFD AS EFFECTIVE_FROM,
TC400F.T40EFT AS EFFECTIVE_TO,
TRIM(TC400F.T40MID) AS MANUFACTURER_ID,
TRIM(TC400F.T40MOC) AS MANUFACTURER_ORG_CODE,
TRIM(TC400F.T40OCD) AS ORG_CODE,
TRIM(TC400F.T40PRF) AS PROD_FAMILY,
TRIM(TC400F.T40PID) AS PROD_ID,
TRIM(TC400F.T40POC) AS PROD_ORG_CODE,
TRIM(TC400F.T40ISA) AS IS_ACTIVE,
TRIM(TC400F.T40VID) AS SUPPLIER_ID,
TRIM(TC400F.T40SOC) AS SUPPLIER_ORG_CODE,
TRIM(TC400F.T40PSF) AS PROD_SUB_FAMILY,
CASE TRIM(TC400F.T40PML)
WHEN '' THEN TRIM(TC400F.T40PML)
ELSE TRIM(TC400F.T40PML) || '~' || TRIM(TC403F.T43MDD)
END AS PROD_MODEL
) AS VARCHAR(3000)
)
|| IFNULL(MBC.xml, '') ||
XMLSERIALIZE (
XMLFOREST(
XMLFOREST(
TRIM(TC400F.T40CCD) AS CURRENCY_CODE,
TC400F.T40PRI AS PRICE,
TRIM(TC400F.T40PTY) AS PRICE_TYPE
) AS MDI_BOM_PRICE,
XMLFOREST(
TRIM(TC400F.T40CCD) AS CURRENCY_CODE,
TRIM(TC400F.T40PRI) AS PRICE,
'TRANSACTION_VALUE' AS PRICE_TYPE
) AS MDI_BOM_PRICE,
XMLFOREST(
TRIM(TC400F.T40INA) AS INCLUDE_IN_AVERAGING
) AS MDI_BOM_IMPL_BOM_PROD_FAMILY_AUTOMOBILES
) AS VARCHAR(3000)
)
|| '</MDI_BOM>' ||
'</BOM_INBOUND>' XML
FROM (
SELECT
ROW_NUMBER() OVER (
ORDER BY T40STS
,T40SID
,T40BID
) AS RowNum
,t.*
FROM TC400F t
) TC400F
LEFT OUTER JOIN MDI_BOM_COMP MBC
ON TC400F.T40SID = MBC.SITE_ID
AND TC400F.T40PID = MBC.PROD_ID
LEFT OUTER JOIN TC403F TC403F
ON TC400F.T40PML <> ''
AND TC400F.T40PML = TC403F.T43MDL
WHERE TC400F.T40STS = '10'
AND TC400F.RowNUM BETWEEN
(P_STARTROW + (P_PAGENOS - 1) * P_NBROFRCDS)
AND (P_STARTROW + (P_PAGENOS - 1) * P_NBROFRCDS +
P_NBROFRCDS - 1);
Given above is a cursor declaration in the SP code which I am struggling to understand. The very first WITH itself seems mysterious. I have used WITH along with temporary table names, but this is the first time I'm seeing something of this sort; is it an SP or a UDF? Can someone please guide me on how to understand and make sense of all this?
Adding further to the question, the actual requirement here is to arrange the data in the XML in such a way that records which have the TC401F.T41SBI field populated appear at the beginning of the XML output.
This field is being selected as below in the code:
trim(TC401F.T41SBI) AS SUB_BOM_ID
If this field is non-blank, the record should appear first in the XML, and any records where this field is blank should appear only after. What would be the best approach to do this? Using ORDER BY in any way does not really seem to help, because the XML is actually created through functions and ordering does not affect how the items are arranged within the XML. One approach I could think of was using a WHERE clause with TC401F.T41SBI <> '' first, then appending those records where TC401F.T41SBI = ''.
Best I can do is help with the CTE.
WITH
MDI_BOM_COMP(PROD_ID,SITE_ID, xml ) AS (
SELECT TC401F.T41PID,TC401F.T41SID,
This just generates a table named MDI_BOM_COMP with three columns named PROD_ID, SITE_ID, and XML. The table will have one record for each (PROD_ID, SITE_ID) pair, and the contents of XML will be an XML snippet with all the components for that product and site.
Now the XML part can be a bit confusing, but if we break it down into its scalar and aggregate components, we can make it a bit more understandable.
First, ignore the grouping, so the CTE retrieves each row in TC401F. XMLELEMENT and XMLFOREST are scalar functions. XMLELEMENT creates a single XML element; the tag is the first parameter, and the content of the element is the second in the above example. XMLFOREST is like a bunch of XMLELEMENTs concatenated together.
XMLSERIALIZE(
XMLAGG(
XMLELEMENT( NAME "MDI_BOM_COMP",
XMLFOREST(
trim(TC401F.T41CTY) AS COMPONENT_TYPE,
TC401F.T41LNO AS COMP_NUM,
trim(TC401F.T41CTO) AS CTRY_OF_ORIGIN,
trim(TC401F.T41DSC) AS DESCRIPTION,
TC401F.T41EFR AS EFFECTIVE_FROM,
TC401F.T41EFT AS EFFECTIVE_TO,
trim(TC401F.T41MID) AS MANUFACTURER_ID,
trim(TC401F.T41MOC) AS MANUFACTURER_ORG_CODE,
trim(TC401F.T41CNO) AS PROD_ID,
trim(TC401F.T41POC) AS PROD_ORG_CODE,
TC401F.T41QPR AS QTY_PER,
trim(TC401F.T41SBI) AS SUB_BOM_ID,
trim(TC401F.T41SBO) AS SUB_BOM_ORG_CODE, --ADB01
trim(TC401F.T41VID) AS SUPPLIER_ID,
trim(TC401F.T41SOC) AS SUPPLIER_ORG_CODE,
TC401F.T41UCT AS UNIT_COST
)
)
) AS CLOB(1M)
So in the example, for each row in the table, XMLFOREST creates a list of XML elements, one for each of COMPONENT_TYPE, COMP_NUM, CTRY_OF_ORIGIN, etc. These elements form the content of another XML element MDI_BOM_COMP which is created by XMLELEMENT.
Now for each row in the table we have selected PROD_ID, SITE_ID, and created some XML. Next we group by PROD_ID, and SITE_ID. The aggregation function XMLAGG will collect all the XML for each PROD_ID and SITE_ID, and concatenate it together.
Finally XMLSERIALIZE will convert the internal XML representation to the string format we all know and love ;)
I think I found the answer for my requirement. I had to add an ORDER BY on the field name after the XMLELEMENT expression, inside the XMLAGG call.
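For reference, a minimal sketch of what that change might look like inside the CTE (assuming DB2's ORDER BY clause of XMLAGG; the CASE expression pushes rows with a blank SUB_BOM_ID to the end):
XMLSERIALIZE(
    XMLAGG(
        XMLELEMENT( NAME "MDI_BOM_COMP",
            XMLFOREST(
                trim(TC401F.T41CTY) AS COMPONENT_TYPE,
                -- ... the remaining columns exactly as in the original ...
                trim(TC401F.T41SBI) AS SUB_BOM_ID
            )
        )
        -- non-blank SUB_BOM_ID sorts first, then blanks
        ORDER BY CASE WHEN trim(TC401F.T41SBI) <> '' THEN 0 ELSE 1 END,
                 TC401F.T41SBI
    ) AS CLOB(1M)
)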
I have a table with an integer column. It has 12 records numbered 1000 to 1012. Remember, these are ints.
This query returns, as expected, 12 results:
select count(*) from proposals where qd_number::text like '%10%'
as does this:
SELECT COUNT(*) FROM "proposals" WHERE (lower(first_name) LIKE '%10%' OR qd_number::text LIKE '%10%' )
but this query returns 2 records:
SELECT COUNT(*) FROM "proposals" WHERE (lower(first_name) || ' ' || qd_number::text LIKE '%10%' )
which implies using || in concatenated where expressions is not equivalent to using OR. Is that correct or am I missing something else here?
You probably have nulls in first_name. For these records, lower(first_name) || ' ' || qd_number::text results in null, so you don't find the numbers any longer.
using || in concatenated where expressions is not equivalent to using OR. Is that correct or am I missing something else here?
That is correct.
|| is the string concatenation operator in SQL, not the OR operator.
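If you want the concatenated predicate to behave like the OR version even when first_name is NULL, a minimal sketch using coalesce (the only change from your query):
SELECT COUNT(*)
FROM "proposals"
WHERE (lower(coalesce(first_name, '')) || ' ' || qd_number::text LIKE '%10%');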
I got stuck during database migration on PostgreSQL and need your help.
I have two tables that I need to join: drzewa_mateczne.migracja (data I need to migrate) and ibl_as.t_adres_lesny (dictionary I need to join with migracja).
I need to join them on replace(drzewa_mateczne.migracja.adresy_lesne, ' ', '') = replace(ibl_as.t_adres_lesny.adres, ' ', ''). However, my data is not very regular, so I want to join on the first good match with the dictionary.
I've created the following query:
select
count(*)
from
drzewa_mateczne.migracja a
where
length(a.adresy_lesne) > 0
and replace(a.adresy_lesne, ' ', '') = (select substr(replace(al.adres, ' ', ''), 1, length(replace(a.adresy_lesne, ' ', ''))) from ibl_as.t_adres_lesny al limit 1)
The query doesn't return any rows.
It does successfully join empty rows if run without
length(a.adresy_lesne) > 0
The two following queries return rows (as expected):
select replace(adres, ' ', '')
from ibl_as.t_adres_lesny
where substr(replace(adres, ' ', ''), 1, 16) = '16-15-1-13-180-c'
limit 1
select replace(adresy_lesne, ' ', ''), length(replace(adresy_lesne, ' ', ''))
from drzewa_mateczne.migracja
where replace(adresy_lesne, ' ', '') = '16-15-1-13-180-c'
I suspect that there might be a problem with the sub-query inside the 'where' clause of my query. If you guys could help me resolve this issue, or at least point me in the right direction, I'd be very grateful.
Thanks in advance,
Jan
You can largely simplify to:
SELECT count(*)
FROM drzewa_mateczne.migracja a
WHERE a.adresy_lesne <> ''
AND EXISTS (
SELECT 1 FROM ibl_as.t_adres_lesny al
WHERE replace(al.adres, ' ', '')
LIKE (replace(a.adresy_lesne, ' ', '') || '%')
)
a.adresy_lesne <> '' does the same as length(a.adresy_lesne) > 0, just faster.
Replace the correlated subquery with an EXISTS semi-join (to get only one match per row).
Replace the complex string construction with a simple LIKE expression.
More information on pattern matching and index support in these related answers:
PostgreSQL LIKE query performance variations
Difference between LIKE and ~ in Postgres
speeding up wildcard text lookups
What you're basically telling the database to do is to get you the count of rows from drzewa_mateczne.migracja that have a non-empty adresy_lesne field that is a prefix of the adres field of a semi-random ibl_as.t_adres_lesny row...
Lose the "limit 1" in the subquery and substitute the "=" with "in" and see if that is what you wanted...
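In other words, something like this (a sketch of your original query with only the = replaced by IN and the LIMIT removed):
select count(*)
from drzewa_mateczne.migracja a
where length(a.adresy_lesne) > 0
  and replace(a.adresy_lesne, ' ', '') in (
      select substr(replace(al.adres, ' ', ''), 1, length(replace(a.adresy_lesne, ' ', '')))
      from ibl_as.t_adres_lesny al
  );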