Convert comma-separated fields to a concat() function - PostgreSQL

I have a table_product that contains comma-separated strings:
id | products
---+----------------------
 1 | tv,phone,tablet
 2 | computer,tv
 3 | printer,tablet,radio
To avoid manual concatenation like concat(tv,',',phone,',',tablet),
I want to select the data from table_product.products and use it as the concat() statement.
I tried this, but I get an error:
select concat(select products from table_product where id=1) from table_sales
Is there a short and basic way to write this query?
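If the goal is simply to pass the stored string to concat(), the immediate problem is probably the missing parentheses: a scalar subquery used as a function argument needs its own set of parentheses. A minimal sketch, assuming table_sales exists as in the question:

select concat((select products from table_product where id = 1)) from table_sales;

If instead the stored string is meant to be treated as a list of column names of table_sales, that cannot be done in a plain SQL query and would need dynamic SQL, for example EXECUTE inside a PL/pgSQL function.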

Related

How to use ts_query with ANY(anyarray)

I currently have a query in PostgreSQL like:
SELECT
  name
FROM
  ingredients
WHERE
  name = ANY({"string value",tomato,other})
My ingredients table is simply a list of names:
name
----------
jalapeno
tomatoes
avocados
lime
My issue is that plural values in the array will not match single values in the query. To solve this, I created a tsvector column on the table:
   name   |    tokens
----------+--------------
 jalapeno | 'jalapeno':1
 tomatoes | 'tomato':1
 avocados | 'avocado':1
 lime     | 'lime':1
I'm able to correctly query single values from the table like this:
SELECT
  name,
  ts_rank_cd(tokens, plainto_tsquery('tomato'), 16) AS rank
FROM
  ingredients
WHERE
  tokens @@ plainto_tsquery('tomato')
ORDER BY
  rank DESC;
However, I need to query values from the entire array. The array is generated by another function, so I have control over the type of each item in the array.
How can I use the @@ operator with ANY(anyarray)?
That should be straightforward:
WHERE tokens @@ ANY
  (ARRAY[
    plainto_tsquery('tomato'),
    plainto_tsquery('celery'),
    plainto_tsquery('vodka')
  ])
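Putting it together with the ranking query from the question, a full statement might look like this (a sketch on the same ingredients table; the array would normally come from the generating function mentioned in the question, and the rank here is still computed against the 'tomato' query only):

SELECT
  name,
  ts_rank_cd(tokens, plainto_tsquery('tomato'), 16) AS rank
FROM
  ingredients
WHERE
  tokens @@ ANY (ARRAY[
    plainto_tsquery('tomato'),
    plainto_tsquery('celery'),
    plainto_tsquery('vodka')
  ])
ORDER BY
  rank DESC;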

Recursive postgres query to view

I have the following table which models a very simple hierarchical data structure with each element pointing to its parent:
Table "public.device_groups"
Column | Type | Modifiers
--------------+------------------------+---------------------------------------------------------------
dg_id | integer | not null default nextval('device_groups_dg_id_seq'::regclass)
dg_name | character varying(100) |
dg_parent_id | integer |
I want to query the recursive list of subgroups of a specific group.
I constructed the following recursive query which works fine:
WITH RECURSIVE r(dg_parent_id, dg_id, dg_name) AS (
    SELECT dg_parent_id, dg_id, dg_name FROM device_groups WHERE dg_id=1
  UNION ALL
    SELECT dg.dg_parent_id, dg.dg_id, dg.dg_name
    FROM r pr, device_groups dg
    WHERE dg.dg_parent_id = pr.dg_id
)
SELECT dg_id, dg_name
FROM r;
I now want to turn this into a view where I can choose which group to drill down into using a WHERE clause. This means I want to be able to do:
SELECT * FROM device_groups_recursive WHERE dg_id = 1;
and get all the (recursive) subgroups of the group with id 1.
I was able to write a function (by wrapping the query above), but I would like to have a view instead of the function.
Side note: I know about the shortcomings of an adjacency-list representation; I cannot change it currently.
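One common workaround, a sketch not taken from the question, is to let the view enumerate the subtree of every group and expose the starting group as a filterable column. PostgreSQL does not push the outer WHERE clause into the non-recursive part of the CTE, so this can be noticeably slower than the parameterized query or a function on large tables:

CREATE VIEW device_groups_recursive AS
WITH RECURSIVE r(dg_id, member_id, member_name) AS (
    -- every group starts its own subtree
    SELECT dg_id, dg_id, dg_name FROM device_groups
  UNION ALL
    -- walk down one level, carrying the root's dg_id along
    SELECT pr.dg_id, dg.dg_id, dg.dg_name
    FROM r pr
    JOIN device_groups dg ON dg.dg_parent_id = pr.member_id
)
SELECT dg_id, member_id, member_name FROM r;

-- all (recursive) subgroups of group 1, including the group itself
SELECT member_id, member_name FROM device_groups_recursive WHERE dg_id = 1;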

How to convert tsvector?

A typical and relevant application of tsvector is to query and summarize information about the set of words that occur and their frequencies... and JSONB is the natural choice (!) to represent the tsvector datatype for these "querying applications"... So,
Is there a simple workaround to cast tsvector to JSONB?
Example: counting the global frequency of words across cached tsvector values would be something like this query:
SELECT r.key as word, SUM(r.value) as occurrences
FROM (
  SELECT jsonb_each(kx_tsvectot::jsonb) as r FROM terms
) t
GROUP BY 1;
You can use the ts_stat() function, which will give you exactly what you need:
word text — the value of a lexeme
ndoc integer — number of documents (tsvectors) the word occurred in
nentry integer — total number of occurrences of the word
For example:
CREATE TABLE t (
  tsv TSVECTOR
);
INSERT INTO t VALUES
  ('word'::TSVECTOR),
  ('second word'::TSVECTOR),
  ('third word'::TSVECTOR);
SELECT * FROM
  ts_stat('SELECT tsv FROM t');
Result:
  word  | ndoc | nentry
--------+------+--------
 word   |    3 |      3
 third  |    1 |      1
 second |    1 |      1
(3 rows)
If you still want jsonb, you can cast the word column from text to jsonb.
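If the end goal really is a single jsonb document of word counts, one possible follow-up (a sketch building on the ts_stat() example above) is to aggregate the result with jsonb_object_agg():

SELECT jsonb_object_agg(word, nentry) AS word_counts
FROM ts_stat('SELECT tsv FROM t');
-- one row: {"word": 3, "third": 1, "second": 1}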

Is it possible in PL/pgSQL to evaluate a string as an expression, not a statement?

I have two database tables:
# \d table_1
     Table "public.table_1"
   Column   |  Type   | Modifiers
------------+---------+-----------
 id         | integer |
 value      | integer |
 date_one   | date    |
 date_two   | date    |
 date_three | date    |

# \d table_2
     Table "public.table_2"
   Column   |  Type   | Modifiers
------------+---------+-----------
 id         | integer |
 table_1_id | integer |
 selector   | text    |
The values in table_2.selector can be 'one', 'two', or 'three', and are used to select one of the date columns in table_1.
My first implementation used a CASE:
SELECT value
FROM table_1
INNER JOIN table_2 ON table_2.table_1_id = table_1.id
WHERE CASE table_2.selector
        WHEN 'one' THEN
          table_1.date_one
        WHEN 'two' THEN
          table_1.date_two
        WHEN 'three' THEN
          table_1.date_three
        ELSE
          table_1.date_one
      END BETWEEN ? AND ?
The values for selector are such that I could identify the column of interest as eval(date_#{table_2.selector}), if PL/pgSQL allowed evaluating strings as expressions.
The closest I've been able to find is EXECUTE string, which evaluates entire statements. Is there a way to evaluate expressions?
In a PL/pgSQL function you can dynamically create any expression. That does not help in the case you describe, however: the query must be fully defined before it is executed, while the choice of the field only happens while the query is executing.
Your query is the best approach. You could try using a function, but it would not bring any benefit, as the essence of the issue remains unchanged.
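For completeness: the usual trick to evaluate a string as an expression inside PL/pgSQL is to wrap it in a SELECT and run it with EXECUTE ... INTO. A sketch, where the function name and the use of format() are illustrative rather than taken from the question:

CREATE OR REPLACE FUNCTION date_for(p_selector text, p_table_1_id integer)
RETURNS date
LANGUAGE plpgsql AS $$
DECLARE
  result date;
BEGIN
  -- builds e.g. 'SELECT date_one FROM table_1 WHERE id = $1';
  -- %I quotes the column name as an identifier
  EXECUTE format('SELECT %I FROM table_1 WHERE id = $1', 'date_' || p_selector)
  INTO result
  USING p_table_1_id;
  RETURN result;
END
$$;

As the answer above says, this does not simplify the original query: the dynamic statement still has to be built and planned for every call.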

Search inside full search column using certain letters

I want to search inside a fulltext search column using just a few letters of a word. For example:
select "Name","Country","_score" from datatable where match("Country", 'China');
This returns many rows and is OK. My question is: how can I search with, for example:
select "Name","Country","_score" from datatable where match("Country", 'Ch');
I want to see China, Chile, etc.
I think that the match_type phrase_prefix could be the answer, but I don't know how to use it (the correct syntax).
The match predicate supports different types via using match_type [with (match_parameter = [value])].
So in your example using the phrase_prefix match type:
select "Name","Country","_score" from datatable where match("Country", 'Ch') using phrase_prefix;
gives you your desired results.
See the match predicate documentation: https://crate.io/docs/en/latest/sql/fulltext.html?#match-predicate
If you just need to match the beginning of a string column, you don't need a fulltext analyzed column. You can use the LIKE operator instead, e.g.:
cr> create table names_table (name string, country string);
CREATE OK (0.840 sec)
cr> insert into names_table (name, country) values ('foo', 'China'), ('bar','Chile'), ('foobar', 'Austria');
INSERT OK, 3 rows affected (0.049 sec)
cr> select * from names_table where country like 'Ch%';
+---------+------+
| country | name |
+---------+------+
| Chile   | bar  |
| China   | foo  |
+---------+------+
SELECT 2 rows in set (0.037 sec)