How to CAST a value in PostgreSQL for use in WHERE with LIKE statement? - postgresql

I'm trying to fix a SQL query so I can later convert it to Doctrine2 DQL, since it's part of a Symfony2 project. This is what my DDL looks like:
CREATE TABLE "nomencladores"."norma" (
"id" int4 NOT NULL,
"comite_tecnico_id" int4,
"numero" VARCHAR (10) COLLATE "default" NOT NULL,
"anno" int4 NOT NULL,
"nombre" VARCHAR (255) COLLATE "default" NOT NULL,
"activo" bool,
CONSTRAINT "norma_pkey" PRIMARY KEY ("id"),
CONSTRAINT "fk_f00cbe8e84edad75" FOREIGN KEY ("comite_tecnico_id") REFERENCES "nomencladores"."comite_tecnico" ("id") ON DELETE NO ACTION ON UPDATE NO ACTION
) WITH (OIDS = FALSE);
I'm trying to execute a LIKE query to find %45%, and I've tried all of these queries without success:
The one generated by Doctrine2 from the DQL:
SELECT
n0_.numero AS numero0,
n0_.anno AS anno1,
n0_.id AS id2,
n0_.nombre AS nombre3,
n0_.activo AS activo4,
n0_.comite_tecnico_id AS comite_tecnico_id5
FROM
nomencladores.norma n0_
WHERE
n0_.anno LIKE %45%;
Trying to cast the values
SELECT
n0_.numero AS numero0,
n0_.anno AS anno1,
n0_.id AS id2,
n0_.nombre AS nombre3,
n0_.activo AS activo4,
n0_.comite_tecnico_id AS comite_tecnico_id5
FROM
nomencladores.norma n0_
WHERE
CAST (n0_.anno AS CHAR) LIKE %45%;
SELECT
n0_.numero AS numero0,
n0_.anno AS anno1,
n0_.id AS id2,
n0_.nombre AS nombre3,
n0_.activo AS activo4,
n0_.comite_tecnico_id AS comite_tecnico_id5
FROM
nomencladores.norma n0_
WHERE
CAST (n0_.anno, "FM9999") LIKE %45%
SELECT
n0_.numero AS numero0,
n0_.anno AS anno1,
n0_.id AS id2,
n0_.nombre AS nombre3,
n0_.activo AS activo4,
n0_.comite_tecnico_id AS comite_tecnico_id5
FROM
nomencladores.norma n0_
WHERE
to_char(n0_.anno, "FM9999") LIKE %45%
SELECT
n0_.numero AS numero0,
n0_.anno AS anno1,
n0_.id AS id2,
n0_.nombre AS nombre3,
n0_.activo AS activo4,
n0_.comite_tecnico_id AS comite_tecnico_id5
FROM
nomencladores.norma n0_
WHERE
n0_.anno::text LIKE "%45%"
None of them works. What is the right way to achieve this in PostgreSQL?

The syntax could be:
WHERE n0_.anno::text LIKE '%45%';
You need to cast the number to text (or varchar) before you can use it with the LIKE operator.
The right-hand argument of LIKE is a text value, a string literal to be precise. You need single quotes for values; double quotes are for identifiers.
If anno is supposed to hold a year and you are just interested in the last two digits, make that:
WHERE n0_.anno::text LIKE '%45';
Or better yet:
WHERE n0_.anno % 100 = 45;
% being the modulo operator. (Not related to the % symbol in LIKE patterns!)
45 (without quotes) being a numeric constant.
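Both approaches from this answer can be sketched outside Postgres as well. Below is a minimal Python demo using SQLite so it runs anywhere; the table name norma and the sample years are made up for illustration, and SQLite's CAST(anno AS TEXT) stands in for the Postgres anno::text shorthand.

```python
import sqlite3

# Illustrative data: years that do and don't contain / end in "45".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE norma (id INTEGER, anno INTEGER)")
conn.executemany("INSERT INTO norma VALUES (?, ?)",
                 [(1, 1945), (2, 2045), (3, 1999), (4, 4512)])

# Cast the integer to text before applying LIKE (the anno::text idea).
like_rows = conn.execute(
    "SELECT id FROM norma WHERE CAST(anno AS TEXT) LIKE '%45%'"
).fetchall()

# Match only the last two digits with the modulo operator instead.
mod_rows = conn.execute(
    "SELECT id FROM norma WHERE anno % 100 = 45"
).fetchall()

print(like_rows)  # [(1,), (2,), (4,)] -- "45" anywhere in the year
print(mod_rows)   # [(1,), (2,)]       -- year ends in 45
```

Note how 4512 matches the LIKE pattern but not the modulo test, which is why the modulo form is the safer choice when only the last two digits matter.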

Related

SELECT query using collate binary_ci with arg mapped to multiple params

I am trying to create a Spring Data JPA custom query that takes two args and uses collate binary_ci. One arg is compared to a string using '=', the other is compared to a string using LIKE.
Example that works without collate binary_ci:
SELECT * FROM MYTABLE WHERE ID = :id AND ((MODEL LIKE %:in%) OR (DESCR LIKE %:in%)) ORDER BY ...
The LIKE argument in is mapped to multiple parameters. I cannot get this query to work; whatever I try, my attempt to include collate binary_ci is the issue. Here is what I've tried:
WHERE ID = :id AND ((MODEL LIKE %:in%) OR (DESCR LIKE %:in%)) collate binary_ci ORDER BY ...
WHERE ID = :id AND (MODEL LIKE %:in% OR DESCR LIKE %:in%) collate binary_ci ORDER BY ...
WHERE ID = :id AND ((MODEL LIKE %:in%) collate binary_ci OR (DESCR LIKE %:in%) collate binary_ci) ORDER BY
Running these queries gets me either "Could not locate named parameter [in], expecting one of [in%, id]", "sql statement was not ended properly", or "missing right parenthesis".
How can I make this work?
Version: Spring-Boot: (v2.4.3)
Here are version values from my sqldeveloper:
org.openide.specification.version 6.2
org.osgi.framework.os.version 10.0.0
org.osgi.framework.version 1.7.0
os.version 10.0
osgi.framework.version 3.9.1.v20140110-1610
Figured it out:
WHERE ID = :id AND ((MODEL LIKE %:in% collate binary_ci) OR (DESCR LIKE %:in% collate binary_ci)) ORDER BY...

Postgres use xpath_table parsing with xmlnamespaces

Can I use xpath_table parsing with xmlnamespaces
drop table if exists _xml;
create temporary table _xml (fbf_xml_id serial,str_Xml xml);
insert into _xml(str_Xml)
select '<DataSet1 xmlns="http://tempuri.org/DataSet_LocalMaMC.xsd">
<Stations>
<ID>5</ID>
</Stations>
<Stations>
<ID>1</ID>
</Stations>
<Stations>
<ID>2</ID>
</Stations>
<Stations>
<ID>10</ID>
</Stations>
<Stations>
<ID/>
</Stations>
</DataSet1>' ;
drop table if exists _y;
create temporary table _y as
SELECT *
FROM xpath_table('FBF_xml_id','str_Xml','_xml',
'/DataSet1/Stations/ID',
'true') AS t(FBF_xml_id int,ID text);
select * from _y
If I take out the xmlnamespaces it works fine.
I thought about working with xpath() instead, but when there is a null it gives me wrong results.
With Postgres 10 or later, xmltable() is the preferred way to do this.
You can easily specify a list of namespaces with that.
SELECT fbf_xml_id, xt.id
FROM _xml
cross join
xmltable(xmlnamespaces ('http://tempuri.org/DataSet_LocalMaMC.xsd' as x),
'/x:DataSet1/x:Stations'
passing str_xml
columns
id text path 'x:ID') as xt
Note that in the XPath expression used for the xmltable() function, the tags are prefixed with the namespace alias defined in the xmlnamespaces option even though they are not prefixed in the input XML.
Online example

postgresql: "...where X IN <array type column values>" syntax?

I'm having a problem using the values of an array column in a WHERE clause. Complete example to reproduce:
create type public.genre_type as enum ('scifi', 'fantasy', 'crime', 'horror', 'classics');
create table public.reader_profile(
id integer,
fave_genres genre_type ARRAY
);
create table public.books(
id serial not null,
title text,
genre_type public.genre_type
);
insert into public.reader_profile(id, fave_genres) values (1, array['crime', 'horror']::public.genre_type[]);
insert into public.reader_profile(id, fave_genres) values (2, array['fantasy', 'scifi']::public.genre_type[]);
insert into public.reader_profile(id, fave_genres) values (3, array['scifi', 'classics']::public.genre_type[]);
insert into public.books(title, genre_type) values ('gone with the wind', 'classics');
insert into public.books(title, genre_type) values ('Foundation', 'scifi');
insert into public.books(title, genre_type) values ('Dune', 'scifi');
-- THE FOLLOWING FAILS!!!
select * from public.books
where genre_type in (
select fave_genres from public.reader_profile where id = 2
);
I've tried ...where genre_type = ANY() per other Stack Overflow answers, as well as ...where genre_type <# (), and I can't get anything to work! It seems the inner query (which works) is being returned as an array type and not a list of values or something. Any help appreciated!
I agree with @Hogan that this seems doable with a JOIN, but the syntax you are looking for is the following:
SELECT *
FROM books
WHERE genre_type = ANY(ARRAY(SELECT fave_genres FROM reader_profile WHERE id = 2))
;
Demo
Can I suggest using a join instead?
select *
from public.books b
join public.reader_profile rp on b.genre_type = ANY(rp.fave_genres) and rp.id = 2
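To see why the original IN query fails while = ANY(...) works, here is a plain-Python sketch of the semantics, with no database involved (the data mirrors the example above): the subquery returns one array value per matching row, so IN compares the scalar genre against a whole array, while ANY compares it against the array's elements.

```python
# Data mirroring the reader_profile and books tables from the question.
reader_profile = {1: ["crime", "horror"],
                  2: ["fantasy", "scifi"],
                  3: ["scifi", "classics"]}
books = [("gone with the wind", "classics"),
         ("Foundation", "scifi"),
         ("Dune", "scifi")]

# What the subquery returns for id = 2: one ROW holding an array value.
rows = [reader_profile[2]]

# IN compares the genre against each row -- i.e. against the array itself,
# so 'scifi' == ['fantasy', 'scifi'] is never true.
in_matches = [title for title, genre in books if genre in rows]

# ANY unnests the array first, comparing against its elements instead.
any_matches = [title for title, genre in books
               if genre in reader_profile[2]]

print(in_matches)   # []
print(any_matches)  # ['Foundation', 'Dune']
```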

Column is of type timestamp without time zone but expression is of type character

I'm trying to insert records while implementing an SCD2 on Redshift, but I get an error.
The target table's DDL is
CREATE TABLE ditemp.ts_scd2_test (
id INT
,md5 CHAR(32)
,record_id BIGINT IDENTITY
,from_timestamp TIMESTAMP
,to_timestamp TIMESTAMP
,file_id BIGINT
,party_id BIGINT
)
This is the insert statement:
INSERT
INTO ditemp.TS_SCD2_TEST(id, md5, from_timestamp, to_timestamp)
SELECT TS_SCD2_TEST_STAGING.id
,TS_SCD2_TEST_STAGING.md5
,from_timestamp
,to_timestamp
FROM (
SELECT '20150901 16:34:02' AS from_timestamp
,CASE
WHEN last_record IS NULL
THEN '20150901 16:34:02'
ELSE '39991231 11:11:11.000'
END AS to_timestamp
,CASE
WHEN rownum != 1
AND atom.id IS NOT NULL
THEN 1
WHEN atom.id IS NULL
THEN 1
ELSE 0
END AS transfer
,stage.*
FROM (
SELECT id
FROM ditemp.TS_SCD2_TEST_STAGING
WHERE file_id = 2
GROUP BY id
HAVING count(*) > 1
) AS scd2_count_ge_1
INNER JOIN (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
WHERE file_id IN (2)
) AS stage
ON (scd2_count_ge_1.id = stage.id)
LEFT JOIN (
SELECT max(rownum) AS last_record
,id
FROM (
SELECT row_number() OVER (
PARTITION BY id ORDER BY record_id
) AS rownum
,stage.*
FROM ditemp.TS_SCD2_TEST_STAGING AS stage
)
GROUP BY id
) AS last_record
ON (
stage.id = last_record.id
AND stage.rownum = last_record.last_record
)
LEFT JOIN ditemp.TS_SCD2_TEST AS atom
ON (
stage.id = atom.id
AND stage.md5 = atom.md5
AND atom.to_timestamp > '20150901 16:34:02'
)
) AS TS_SCD2_TEST_STAGING
WHERE transfer = 1
To keep things short: I am trying to insert 20150901 16:34:02 into from_timestamp and 39991231 11:11:11.000 into to_timestamp, and I get:
ERROR: 42804: column "from_timestamp" is of type timestamp without time zone but expression is of type character varying
Can anyone please suggest how to solve this issue?
Postgres isn't recognizing 20150901 16:34:02 (your input) as a valid time/date format, so it assumes it's a string.
Use a standard date format instead, preferably ISO 8601: 2015-09-01T16:34:02
SQLFiddle example
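A small Python sketch of the fix (illustrative only): build the literal with isoformat() rather than hand-formatting it, so the server receives an unambiguous ISO 8601 string instead of something it has to guess at.

```python
from datetime import datetime

# Hand-formatted strings like '20150901 16:34:02' are what caused the error;
# isoformat() produces the unambiguous form the server will accept.
raw = datetime(2015, 9, 1, 16, 34, 2)
literal = raw.isoformat()
print(literal)  # 2015-09-01T16:34:02

# Round-trip check: the ISO form parses back without ambiguity.
assert datetime.fromisoformat(literal) == raw
```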
Just in case someone ends up here after trying to insert a timestamp or a timestamptz into PostgreSQL from a variable in Groovy or Java via a prepared statement and getting the same error (as I did): I managed to do it by setting the connection property stringtype to "unspecified". According to the documentation:
Specify the type to use when binding PreparedStatement parameters set
via setString(). If stringtype is set to VARCHAR (the default), such
parameters will be sent to the server as varchar parameters. If
stringtype is set to unspecified, parameters will be sent to the
server as untyped values, and the server will attempt to infer an
appropriate type. This is useful if you have an existing application
that uses setString() to set parameters that are actually some other
type, such as integers, and you are unable to change the application
to use an appropriate method such as setInt().
Properties props = [user : "user", password: "password",
driver:"org.postgresql.Driver", stringtype:"unspecified"]
def sql = Sql.newInstance("url", props)
With this property set, you can insert a timestamp as a string variable without the error raised in the question title. For instance:
String myTimestamp= Instant.now().toString()
sql.execute("""INSERT INTO MyTable (MyTimestamp) VALUES (?)""",
[myTimestamp.toString()])
This way, the type of the timestamp (from a String) is inferred correctly by PostgreSQL. I hope this helps.
Inside apache-tomcat-9.0.7/conf/server.xml
Add "?stringtype=unspecified" to the end of the url attribute.
For example:
<GlobalNamingResources>
<Resource name="jdbc/??" auth="Container" type="javax.sql.DataSource"
...
url="jdbc:postgresql://127.0.0.1:5432/Local_DB?stringtype=unspecified"/>
</GlobalNamingResources>

Inserting values into multiple columns by splitting a string in PostgreSQL

I have the following heap of text:
"BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,
URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,
16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,
1.0,Version,1.0,".
What I'd like to do is extract data from this in the following manner:
BundleSize:155648
DynamicSize:204800
Identifier:com.URLConnectionSample
Name:URLConnectionSample
ShortVersion:1.0
Version:1.0
BundleSize:155648
DynamicSize:16384
Identifier:com.IdentifierForVendor3
Name:IdentifierForVendor3
ShortVersion:1.0
Version:1.0
All tips and suggestions are welcome.
It isn't quite clear what you need to do with this data. If you really need to process it entirely in the database (it looks like a task for your favorite scripting language instead), one option is to use hstore.
Converting records one by one is easy:
Assuming
%s =
BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0
SELECT * FROM each(hstore(string_to_array(%s, ',')));
Output:
key | value
--------------+-------------------------
Name | URLConnectionSample
Version | 1.0
BundleSize | 155648
Identifier | com.URLConnectionSample
DynamicSize | 204800
ShortVersion | 1.0
If you have a table with columns exactly matching the field names (note the quotes; populate_record is case-sensitive about key names):
CREATE TABLE data (
"BundleSize" integer, "DynamicSize" integer, "Identifier" text,
"Name" text, "ShortVersion" text, "Version" text);
You can insert hstore records into it like this:
INSERT INTO data SELECT * FROM
populate_record(NULL::data, hstore(string_to_array(%s, ',')));
Things get more complicated if you have comma-separated values for more than one record.
%s = BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,
You need to break the array up into chunks of number_of_fields * 2 = 12 elements first.
SELECT hstore(row) FROM (
SELECT array_agg(str) AS row FROM (
SELECT str, row_number() OVER () AS i FROM
unnest(string_to_array(%s, ',')) AS str
) AS str_sub
GROUP BY (i - 1) / 12) AS row_sub
WHERE array_length(row, 1) = 12;
Output:
"Name"=>"URLConnectionSample", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.URLConnectionSample", "DynamicSize"=>"204800", "ShortVersion"=>"1.0"
"Name"=>"IdentifierForVendor3", "Version"=>"1.0", "BundleSize"=>"155648", "Identifier"=>"com.IdentifierForVendor3", "DynamicSize"=>"16384", "ShortVersion"=>"1.0"
And inserting this into the aforementioned table:
INSERT INTO data SELECT (populate_record(NULL::data, hstore(row))).* FROM ...
The rest of the query is the same as above.
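For comparison, the chunk-and-pair logic of that query can be sketched in plain Python (illustrative only; the 12 comes from number_of_fields * 2 as above):

```python
# The flat key,value,key,value,... string from the question, two records long.
raw = ("BundleSize,155648,DynamicSize,204800,Identifier,com.URLConnectionSample,"
       "Name,URLConnectionSample,ShortVersion,1.0,Version,1.0,"
       "BundleSize,155648,DynamicSize,16384,Identifier,com.IdentifierForVendor3,"
       "Name,IdentifierForVendor3,ShortVersion,1.0,Version,1.0,")

items = raw.split(",")
records = []
for start in range(0, len(items), 12):
    chunk = items[start:start + 12]
    # Mirrors WHERE array_length(row, 1) = 12: drop the trailing partial chunk.
    if len(chunk) == 12:
        # Pair even positions (keys) with odd positions (values), like hstore.
        records.append(dict(zip(chunk[0::2], chunk[1::2])))

print(len(records))                # 2
print(records[0]["Identifier"])    # com.URLConnectionSample
print(records[1]["DynamicSize"])   # 16384
```

The trailing comma in the input produces one leftover empty element, which the length check discards, just as the array_length filter does in the SQL version.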