How to write redshift aws query for matching string? - amazon-redshift

This is the string pattern that I want to match. The string can be alphabetic, numeric, or both (two example strings):
"a252-449e8740ac24_1" ,
"9161-dbc9d0f07af9_0"
This is my query:
select * from table where string like '([0-9]|[a-z]%^-%[0-9]|[a-z]%^_[0-9])'
This is not giving me any output. I am new to AWS Redshift. Please help!

Here's the regex -
SELECT *
FROM table
WHERE string ~ '^([0-9]|[a-z])*-([0-9]|[a-z])*_[0-9]$'
Redshift has the ~ operator, which matches a string against a POSIX regular expression.
This page describes the supported regex patterns in detail (should you wish to make changes to the regex I specified):
https://docs.aws.amazon.com/redshift/latest/dg/pattern-matching-conditions-posix.html
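Since ~ uses POSIX regex semantics, the pattern can be sanity-checked locally before running it in Redshift. A minimal sketch using Python's re module (whose syntax overlaps with POSIX for this pattern) against the two sample strings from the question:

```python
import re

# The pattern from the answer above; anchors ^/$ force a full-string match.
pattern = re.compile(r'^([0-9]|[a-z])*-([0-9]|[a-z])*_[0-9]$')

for s in ["a252-449e8740ac24_1", "9161-dbc9d0f07af9_0"]:
    print(s, bool(pattern.match(s)))  # both print True
```

Note the trailing `_[0-9]$` allows only a single digit after the underscore; widen it to `_[0-9]+$` if multi-digit suffixes are possible.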

Related

PostgreSQL - numbers treated as strings, most aggregate functions don't work

I am new to PostgreSQL. I am watching the tutorial from freeCodeCamp and following their examples:
https://www.youtube.com/watch?v=qw--VYLpxG4&t=7241s&ab_channel=freeCodeCamp.org
But instead of plain PostgreSQL I use a server-based platform (phpPgAdmin).
The problem is that neither SUM nor AVG nor many other aggregate functions can be executed.
The error seems to be the same every time:
"Function ... does not exist"
No function matches the given name and argument types. You might need to add explicit type casts.
I found some similar problem here:
No function matches the given name and argument types
but it's related to a more complicated example.
My guess is that phpPgAdmin treats all my numbers as strings, and that is the problem.
I tried this example:
How do I convert an integer to string as part of a PostgreSQL query?
but it returns a different error:
operator does not exist: character varying = bigint
I think the $ before the price is not the problem, since the MIN and MAX functions work.
What is the reason behind it?
You may trim the leading $ from the price column, then cast the string amount to float before summing, e.g.
SELECT SUM(CAST(TRIM('$' FROM price) AS float)) AS total_sum
FROM car;
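The trim-then-cast approach can be exercised locally with Python's built-in sqlite3 (note SQLite spells character trimming as trim(col, '$') rather than Postgres's TRIM('$' FROM col), and the car table and price values here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE car (price TEXT)")
conn.executemany("INSERT INTO car VALUES (?)",
                 [("$100.50",), ("$200.25",), ("$49.25",)])

# Strip the leading '$', cast the remainder to a numeric type, then sum.
total = conn.execute(
    "SELECT SUM(CAST(TRIM(price, '$') AS REAL)) FROM car"
).fetchone()[0]
print(total)  # 350.0
```

The longer-term fix, of course, is to store the amount in a numeric column rather than as a '$'-prefixed string.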

SQLAlchemy search on ts_vector column without to_tsvector

I have a computed column that is a tsvector.
The API sends a search query, but, of course, these are not valid tsqueries. Postgres's plainto_tsquery converts text input to a correctly formatted tsquery for matching.
This breaks with SQLAlchemy.
column.match(func.plainto_tsquery('english', search)) does not work, because SQLAlchemy converts that to:
column @@ to_tsquery(plainto_tsquery('english', 'the search query'))
What I actually want is the correct operator (@@) but without the magic conversion:
column @@ plainto_tsquery('english', 'the search query')
A dumb way that works but is not what I want is:
column.match(
cast(func.plainto_tsquery('english', search), String)
)
How about
column.op("@@")(func.plainto_tsquery('english', search))
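For reference, .op() builds a raw operator expression with no extra conversion, where @@ is Postgres's text-search match operator. A minimal sketch (the column name tsv is hypothetical) showing the SQL that SQLAlchemy renders for such an expression:

```python
from sqlalchemy import column, func

tsv = column("tsv")  # hypothetical tsvector column
expr = tsv.op("@@")(func.plainto_tsquery("english", "the search query"))
print(str(expr))
# renders roughly: tsv @@ plainto_tsquery(:plainto_tsquery_1, :plainto_tsquery_2)
```

The two placeholders are bind parameters, so the search text is passed safely rather than interpolated into the SQL string.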

How to perform a search query on a column value containing a string with comma separated values?

I have a table which looks like below
date       | tags                     | name
-----------+--------------------------+-------
2018-10-08 | 100.21.100.1, cpu, del   | ZONE1
2018-10-08 | 100.21.100.1, mem, blr   | ZONE2
2018-10-08 | 110.22.100.3, cpu, blr   | ZONE3
2018-10-09 | 110.22.100.3, down, hyd  | ZONE2
2018-10-09 | 110.22.100.3, down, del  | ZONE1
I want to select the name for those rows which have certain strings in the tags column
Here column tags has values which are strings containing comma separated values.
For example, I have a list of strings ["down", "110.22.100.3"]. Now if I look up the rows whose tags contain all the strings in the list, I should get the last two rows, which have the names ZONE2 and ZONE1 respectively.
Now I know there is something called in operator but I am not quite sure how to use it here.
I tried something like below
select name from zone_table where 'down, 110.22.100.3' in tags;
But I get a syntax error. How do I do it?
You can do something like this.
select name from zone_table where
string_to_array(replace(tags, ' ', ''), ',') @>
string_to_array(replace('down, 110.22.100.3', ' ', ''), ',');
1) replace deletes the spaces in the string so that string_to_array splits it cleanly, with no stray spaces at the front of the elements.
2) string_to_array converts the string to an array, splitting on commas.
3) @> is the "contains" operator.
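The three steps can be mimicked outside the database to see what the query checks. A Python sketch of the same contains logic, run against rows invented to mirror the table above:

```python
def tags_contain(tags: str, wanted: str) -> bool:
    # 1) drop spaces, 2) split on commas into sets, 3) superset test (SQL @>)
    tag_set = set(tags.replace(" ", "").split(","))
    want_set = set(wanted.replace(" ", "").split(","))
    return tag_set >= want_set

rows = [
    ("110.22.100.3, down, hyd", "ZONE2"),
    ("110.22.100.3, down, del", "ZONE1"),
    ("100.21.100.1, cpu, del", "ZONE1"),
]
matches = [name for tags, name in rows if tags_contain(tags, "down, 110.22.100.3")]
print(matches)  # ['ZONE2', 'ZONE1']
```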
(OR)
If you want to match as a whole
select name from zone_table where POSITION('down, 110.22.100.3' in tags)!=0
For separate matches you can do
select name from zone_table where POSITION('down' in tags)!=0 and
POSITION('110.22.100.3' in tags)!=0
More about POSITION in the Postgres string-function documentation.
We can try using the LIKE operator here, and check for the presence of each tag in the CSV tag list:
SELECT name
FROM zone_table
WHERE ', ' || tags LIKE '%, down,%' AND ', ' || tags LIKE '%, 110.22.100.3,%';
Important Note: It is generally bad practice to store CSV data in your SQL tables, for the very reason that it is unnormalized and makes it hard to work with. It would be much better design to have each individual tag persisted in its own record.
I would do a check with array overlapping (&& operator):
SELECT name
FROM zone_table
WHERE string_to_array('down, 110.22.100.3', ',') && string_to_array(tags,',')
Split your string lists (the column value and the compare text 'down, 110.22.100.3') into arrays with string_to_array() (of course, if your compare text is already an array, you don't have to split it).
Now the && operator checks whether the two arrays overlap: it is true if at least one element is present in both arrays (documentation).
Notice:
"date" is a reserved word in Postgres. I recommend renaming this column.
In your examples the delimiter of your string lists is ", ", not ",". You should take care of the whitespace: either use ", " as the split delimiter too, or concatenate the strings with a simple ",", which makes some things easier (aside from the fully agreed thoughts on storing the values as string lists made by @TimBiegeleisen).
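Array overlap is just a non-empty set intersection. A quick Python sketch of what && checks, splitting on the ", " delimiter discussed above (sample values invented to mirror the table):

```python
def overlaps(a: str, b: str) -> bool:
    # string_to_array(x, ', ') on both sides, then && :
    # true if the two arrays share at least one element
    return bool(set(a.split(", ")) & set(b.split(", ")))

print(overlaps("110.22.100.3, down, hyd", "down, 110.22.100.3"))  # True
print(overlaps("100.21.100.1, mem, blr", "down, 110.22.100.3"))   # False
```

Note the semantic difference from @>: overlap (&&) matches rows containing *any* of the wanted tags, while contains (@>) requires *all* of them.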

test JOOQ postgres jsonb column for key exists

I've got a table TABLE that contains a jsonb column named tags. The tags element in each row may or may not contain a field called group. My goal is to group by tags.group for all rows where tags contains a group field. Like the following postgres query:
select tags->>'group' as group, sum(n) as sum
from TABLE
where tags ? 'group'
group by tags->>'group';
I'm trying to translate this into jOOQ and cannot find out how to express the where tags ? 'group' condition.
For example,
val selectGroup = DSL.field("{0}->>'{1}'", String::class.java, TABLE.TAGS, "group")
dsl().select(selectGroup, DSL.sum(TABLE.N))
.from(TABLE)
.where(TABLE.TAGS.contains("group"))
.groupBy(selectGroup)
This is equivalent to testing the contains condition (@> in Postgres), but I need the exists condition ?. How can I express that in jOOQ?
There are two things worth mentioning here:
The ? operator in JDBC
Unfortunately, there's no good solution to this, as ? is currently strictly limited to use as a bind-variable placeholder in the PostgreSQL JDBC driver. So even if you could find a way to send that character to the server through jOOQ, the JDBC driver would still misinterpret it.
A workaround is documented in this Stack Overflow question.
Plain SQL and string literals
When you're using the plain SQL templating language in jOOQ, beware that there is a parser that will parse certain tokens of your string, including e.g. comments and string literals. This means that your usage of...
DSL.field("{0}->>'{1}'", String::class.java, TABLE.TAGS, "group")
is incorrect, as '{1}' will be parsed as a string literal and sent to the server as is. If you want to use a variable string literal, do this instead:
DSL.field("{0}->>{1}", String::class.java, TABLE.TAGS, DSL.inline("group"))
See also DSL.inline()

How do I match variables from FreeRADIUS in PostgreSQL with the LIKE operator?

In a PostgreSQL query, executed by FreeRADIUS, I want to do something similar to (the table names and values are just examples):
SELECT name
FROM users
WHERE city LIKE '%blahblah%';
but there is a catch: the blahblah value is contained in a FreeRADIUS variable, represented with '%{variable-name}'. It expands to 'blahblah'.
Now my question is: How do I match the %{variable-name} variable to the value stored in the table using the LIKE operator?
I tried using
SELECT name
FROM users
WHERE city LIKE '%%{variable-name}%';
but it doesn't expand correctly like that and is obviously incorrect.
The final query I want to achieve is
...
WHERE city LIKE '%blahblah%';
so it matches the longer string containing 'blahblah' stored in the table, but I want the variable to expand dynamically into the correct query. Is there a way to do it?
Thanks!
Wild guess:
Assuming that FreeRADIUS does dumb substitution across the entire SQL string, with no attempt to parse literals etc. before sending the SQL to PostgreSQL, then you could use:
SELECT name
FROM users
WHERE city LIKE '%'||'%{variable-name}'||'%';
Edit: To avoid the warnings caused by FreeRADIUS not parsing cleverly enough, hide the %s as hex chars:
WHERE city LIKE E'\x25%{variable-name}\x25';
Note the leading E for the string marking it as a string subject to escape processing.
SELECT name
FROM users
WHERE city LIKE '%%'||'%{variable-name}'||'%%';
This is slightly cleaner: %% is FreeRADIUS's escape sequence for a literal percent sign.
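Assuming FreeRADIUS's expansion really is plain textual substitution (%% becoming a literal % and %{variable-name} becoming its value — both assumptions here, sketched naively as two string replacements), the last template can be traced like this:

```python
template = "SELECT name FROM users WHERE city LIKE '%%'||'%{variable-name}'||'%%';"

# Hypothetical expansion: %% -> literal %, then the variable -> its value.
expanded = template.replace("%%", "%").replace("%{variable-name}", "blahblah")
print(expanded)
# SELECT name FROM users WHERE city LIKE '%'||'blahblah'||'%';
```

PostgreSQL then concatenates the three literals with ||, so the LIKE pattern the server evaluates is %blahblah%, as desired.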