Split column to many columns in PostgreSQL - postgresql

I have a table like:
id Name f_data
1 Raj {"review":"perfect", "tech":{"scalability":"complete", "backbonetech":["satellite"], "lastmiletech":["Fiber","Wireless","DSL"] }}
I want to split f_data column to multiple columns. Expected result:
id Name review scalability backbonetech lastmiletech
1 Raj perfect complete satellite Fiber,wireless,DSL
When I try to split the json column, I can't remove the brackets. My output is:
id Name review scalability backbonetech lastmiletech
1 Raj perfect complete ["satellite"] ["Fiber","Wireless","DSL"]
I used this code:
SELECT id, Name,
  f_data -> 'review' ->> 0 as review,
  f_data -> 'tech' ->> 'scalability' as scalability,
  f_data -> 'tech' ->> 'backbonetech' as backbonetech,
  f_data -> 'tech' ->> 'lastmiletech' as lastmiletech
from my_table;

One possibility is to use the json_array_elements_text function to transform the arrays in your JSON into a set of text and then use the string_agg function to concatenate the individual elements into a single string.
For lastmiletech, the query might look like this:
select
  string_agg(t.lastmiletech, ',') as lastmiletech
from
(
  select
    json_array_elements_text('{"scalability":"complete", "backbonetech":["satellite"], "lastmiletech":["Fiber","Wireless","DSL"]}'::json -> 'lastmiletech') as lastmiletech
) t
You can modify the subquery to include the additional fields you need.
As the name implies, the first parameter of json_array_elements_text has to be a JSON array, so be careful not to convert the array into text before passing it to the function. In other words, f_data -> 'tech' ->> 'lastmiletech' will not be accepted, because ->> returns text rather than json.
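Putting the pieces together for the table from the question, the full query might look like this (a sketch: it assumes the table my_table and the column names shown above, and that f_data is of type json; for jsonb use the jsonb_* variants):

```sql
SELECT id,
       Name,
       f_data ->> 'review' AS review,
       f_data -> 'tech' ->> 'scalability' AS scalability,
       -- unnest each array and glue the elements back together without brackets
       (SELECT string_agg(x, ',')
          FROM json_array_elements_text(f_data -> 'tech' -> 'backbonetech') AS t(x)) AS backbonetech,
       (SELECT string_agg(x, ',')
          FROM json_array_elements_text(f_data -> 'tech' -> 'lastmiletech') AS t(x)) AS lastmiletech
FROM my_table;
```

For the sample row this should produce review = perfect, scalability = complete, backbonetech = satellite and lastmiletech = Fiber,Wireless,DSL, with no brackets.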

Related

How to unnest single quoted json array in Postgresql

I have a postgresql table containing a column (movies) with a json array. The column type is text. Below is an example of the table:
name | movies
-----+---------------------
bob  | ['movie1', 'movie2']
mary | ['movie1', 'movie3']
How can I unnest the above table to look like below:
name | movie
-----+-------
bob  | movie1
bob  | movie2
mary | movie1
mary | movie3
Also note that the elements in the json array are single quoted.
I'm using a PostgreSQL database on AWS RDS, engine version 10.17.
Thanks in advance
That is not JSON; that is "something vaguely inspired by JSON". We don't know how it will deal with things like apostrophes in the titles, non-ASCII characters, or any of the other things that an actual standard should specify but something vaguely inspired by a standard doesn't.
If you want to ignore such niceties and make something that works on this one example, we could suppress the characters '[] (done by regexp_replace) and then split/unnest on commas followed by an optional space (done by regexp_split_to_table).
with t as (select 'bob' name ,$$['movie1', 'movie2']$$ movies union select 'mary',$$['movie1', 'movie3']$$)
select name, movie from t, regexp_split_to_table(regexp_replace(movies,$$['\[\]]$$,$$$$,'g'),', ?') g(movie);
Another slightly more resilient option would be to swap ' for " then use an actual JSON parser:
with t as (select 'bob' name ,$$['lions, and tigers, and bears', 'movie2']$$ movies union select 'mary',$$['movie1','movie3']$$)
select name, movie from t, jsonb_array_elements_text(regexp_replace(movies,$$'$$,$$"$$,'g')::jsonb) g(movie);

How to convert json variable to table in postgresql

I have created a JSON type variable named group that looks like so:
'{"SG1": ["2", "4"], "SG2": ["6", "8", "10"], "SG3": ["9"]}'
The idea is to create multiple values for a single key for example PF1 is a group (key) that has a list of values.
I want to create a table for such a JSON variable that can give me output like:
group | values
------+-------
PF1   | 2,4
PF2   | 6,8,10
PF3   | 9
The value column can be a string or comma-separated integer values, string is preferred.
I have tried
SELECT group ->> 'PF1' as group_1
from metadata;
This gives me a blank cell.
I am working with JSON in PostgreSQL for the first time so I have no idea about functions that might help me achieve this. Any help is appreciated.
It's not clear to me what you are trying to achieve, but it seems you want each key/value pair as a row in the output:
select t.*
from metadata m
cross join jsonb_each_text(m."group") as t("group", values)
If your group column isn't jsonb but json, you need to use json_each_text().
Note that group is a reserved keyword and needs to be quoted every time you use it. It would be easier if you chose a different name.
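If the comma-separated values column from the question is wanted, the array inside each value can additionally be unnested and re-aggregated with string_agg. A sketch using the JSON literal from the question in place of the table (it assumes jsonb; note that values, like group, is a keyword and needs quoting):

```sql
SELECT t."group",
       -- flatten each jsonb array into text elements, then join them with commas
       (SELECT string_agg(v, ',')
          FROM jsonb_array_elements_text(t.value) AS e(v)) AS "values"
FROM jsonb_each('{"SG1": ["2", "4"], "SG2": ["6", "8", "10"], "SG3": ["9"]}'::jsonb)
     AS t("group", value);
```

This yields one row per key, with the array rendered as 2,4 / 6,8,10 / 9 instead of JSON text.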

Get string after ',' delimiter comma or special characters

The field name is message; the table name is log.
Data Examples:
Values for message:
"(wsname,cmdcode,stacode,data,order_id) values (hyd-l904149,2,1,,1584425657892);"
"(wsname,cmdcode,stacode,data,order_id) values (hyd-l93mt54,2,1,,1584427657892);"
"(command_execute,order_id,workstation,cmdcode,stacode,application_to_kill,application_parameters) values (kill, 1583124192811, hyd-psag314, 10, 2, tsws.exe, -u production );"
In the log table I need a separate column wsname with the values hyd-l904149, hyd-l93mt54, and hyd-psag314; a column cmdcode with the values 2, 2, and 10; and a column stacode with the values 1, 1, and 2, e.g.:
wsname cmdcode stacode
hyd-l904149 2 1
hyd-l93mt54 2 1
hyd-psag314 10 2
Use regexp_matches to extract the left and right parts of the values clause, then regexp_split_to_array to split these parts on commas, then filter the rows containing wsname using the = any(your_array) construct, then select the required columns from the array.
Or, as an alternative solution, fix the data to be a syntactically valid part of an insert statement, create auxiliary tables, insert the data into them, and then just select.
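A sketch of the first approach, using the sample messages above (the subquery alias t and the column names cols/vals are illustrative; array_position requires PostgreSQL 9.5 or later):

```sql
SELECT vals[array_position(cols, 'wsname')]  AS wsname,
       vals[array_position(cols, 'cmdcode')] AS cmdcode,
       vals[array_position(cols, 'stacode')] AS stacode
FROM (
  -- e[1] is the column list, e[2] the value list of the "values" clause
  SELECT regexp_split_to_array(e[1], '\s*,\s*') AS cols,
         regexp_split_to_array(e[2], '\s*,\s*') AS vals
  FROM log l,
       regexp_matches(l.message, '\(([^\)]+)\)\s+values\s+\(([^\)]+)\)') AS x(e)
) t
WHERE 'wsname' = ANY (cols);
```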
As mentioned in the comment section, PostgreSQL has the built-in function
split_part(string, delimiter, field_number)
http://www.sqlfiddle.com/#!15/eb1df/1
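split_part splits the string on the delimiter and returns the n-th field, counting from 1. For example, on the value list of the first sample message (assuming the fields always appear in the same positions):

```sql
-- fields 1, 2 and 3 of the comma-separated value list
SELECT split_part('hyd-l904149,2,1,,1584425657892', ',', 1) AS wsname,   -- hyd-l904149
       split_part('hyd-l904149,2,1,,1584425657892', ',', 2) AS cmdcode,  -- 2
       split_part('hyd-l904149,2,1,,1584425657892', ',', 3) AS stacode;  -- 1
```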
As the json capabilities of the unsupported version 9.3 are very limited, I would install the hstore extension and then do it like this:
select coalesce(vals -> 'wsname', vals -> 'workstation') as wsname,
       vals -> 'cmdcode' as cmdcode,
       vals -> 'stacode' as stacode
from (
  select hstore(regexp_split_to_array(e[1], '\s*,\s*'),
                regexp_split_to_array(e[2], '\s*,\s*')) as vals
  from log l,
       regexp_matches(l.message, '\(([^\)]+)\)\s+values\s+\(([^\)]+)\)') as x(e)
) t
regexp_matches() splits the message into two arrays: one for the list of column names and one for the matching values. These arrays are used to create a key/value pair so that I can access the value for each column by the column name.
If you know that the positions of the columns are always the same, you can remove the use of the hstore type. But that would require quite a huge CASE expression to test where the actual columns appear.
With a modern, supported version of Postgres, I would use jsonb_object(text[], text[]), passing the two arrays resulting from the regexp_split_to_array() calls.
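A sketch of that variant (jsonb_object(text[], text[]) is available from PostgreSQL 9.5 on; the regular expression is the same one used in the hstore version):

```sql
SELECT coalesce(vals ->> 'wsname', vals ->> 'workstation') AS wsname,
       vals ->> 'cmdcode' AS cmdcode,
       vals ->> 'stacode' AS stacode
FROM (
  -- build a jsonb object mapping column names to values for each message
  SELECT jsonb_object(regexp_split_to_array(e[1], '\s*,\s*'),
                      regexp_split_to_array(e[2], '\s*,\s*')) AS vals
  FROM log l,
       regexp_matches(l.message, '\(([^\)]+)\)\s+values\s+\(([^\)]+)\)') AS x(e)
) t;
```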

How to perform a search query on a column value containing a string with comma separated values?

I have a table which looks like below
date       | tags                     | name
-----------+--------------------------+------
2018-10-08 | 100.21.100.1, cpu, del   | ZONE1
2018-10-08 | 100.21.100.1, mem, blr   | ZONE2
2018-10-08 | 110.22.100.3, cpu, blr   | ZONE3
2018-10-09 | 110.22.100.3, down, hyd  | ZONE2
2018-10-09 | 110.22.100.3, down, del  | ZONE1
I want to select the name for the rows which contain certain strings in the tags column.
The tags column holds strings containing comma-separated values.
For example I have a list of strings ["down", "110.22.100.3"]. Now if I do a look up into the table which contains the strings in the list, I should get the last two rows which have the names ZONE2, ZONE1 respectively.
Now I know there is something called in operator but I am not quite sure how to use it here.
I tried something like below
select name from zone_table where 'down, 110.22.100.3' in tags;
But I get a syntax error. How do I do it?
You can do something like this.
select name from zone_table where
string_to_array(replace(tags,' ',''),',') @>
string_to_array(replace('down, 110.22.100.3',' ',''),',');
1) replace deletes the spaces in the existing string so that string_to_array does not produce elements with leading spaces.
2) string_to_array converts your string to an array, splitting on commas.
3) @> is the "contains" operator.
(OR)
If you want to match as a whole
select name from zone_table where POSITION('down, 110.22.100.3' in tags)!=0
For separate matches you can do
select name from zone_table where POSITION('down' in tags)!=0 and
POSITION('110.22.100.3' in tags)!=0
More about position can be found in the PostgreSQL string functions documentation.
We can try using the LIKE operator here, and check for the presence of each tag in the CSV tag list:
SELECT name
FROM zone_table
WHERE ', ' || tags LIKE '%, down,%' AND ', ' || tags LIKE '%, 110.22.100.3,%';
Important Note: It is generally bad practice to store CSV data in your SQL tables, for the very reason that it is unnormalized and makes it hard to work with. It would be much better design to have each individual tag persisted in its own record.
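Such a normalized layout could look like this (table and column names here are illustrative); finding the zones that carry all wanted tags then becomes a plain GROUP BY/HAVING query:

```sql
CREATE TABLE zone_tag (
    zone_name text NOT NULL,  -- e.g. 'ZONE1'
    tag       text NOT NULL   -- e.g. 'down' or '110.22.100.3'
);

-- zones that have every one of the wanted tags
SELECT zone_name
FROM zone_tag
WHERE tag IN ('down', '110.22.100.3')
GROUP BY zone_name
HAVING count(DISTINCT tag) = 2;  -- number of wanted tags
```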
I would do a check with array overlapping (&& operator):
SELECT name
FROM zone_table
WHERE string_to_array('down, 110.22.100.3', ',') && string_to_array(tags,',')
Split your string lists (the column value and the compare text 'down, 110.22.100.3') into arrays with string_to_array() (of course, if your compare text is already an array you don't have to split it).
The && operator then checks whether the two arrays overlap, i.e. whether at least one element is part of both arrays.
Notice:
"date" is a reserved word in Postgres. I recommend renaming this column.
In your examples the delimiter of your string lists is ", ", not ",". You should take care of the whitespace: either your string split delimiter is ", " too, or you should concatenate the strings with a simple ",", which makes some things easier (aside from the fully agreed thoughts about storing the values as string lists, made by @Tim Biegeleisen).

How to deal with Memo fields listed in a SELECT query in FoxPro 9?

I have a query where I need to use the DISTINCT keyword. The issue is that a field I have in the SELECT is of type MEMO (it needs to be so because of its large content...).
SELECT distinct customerid, commentdate, commenttext....
is not accepted in FoxPro 9 because the commenttext field is of type Memo!
Any idea?
You have a couple of options, depending on your needs:
1) Omit the memo field from the query.
2) Use an expression to convert the memo field to character. For example, LEFT(commenttext,254).
Are you really trying to apply distinct to the memo field, as well? What's your actual goal here?
Tamar
Wrap the memo field in the SELECT statement in a function such as ALLTRIM.
SELECT distinct customerid, commentdate, ALLTRIM(commenttext)....
Another option is to use something like PHDBase, a text-search indexer for Visual FoxPro. It allows character columns and memo fields to be indexed and searched, and it's incredibly fast.