Following is a sample table and data:
create table chk_vals (vals text);
insert into chk_vals values ('1|2|4|3|9|8|34|35|38|1|37|1508|1534');
So, how do I update the column vals by appending the integer found at the 4th position of the existing value (i.e. 3, where | is used as the separator) to the end of the string, along with the | symbol?
As you can see, the existing value is 1|2|4|3|9|8|34|35|38|1|37|1508|1534 and the output should be 1|2|4|3|9|8|34|35|38|1|37|1508|1534|3.
Use PostgreSQL's split_part() to split the field and find the value at position 4:
select split_part(vals,'|',4) val from chk_vals
This will return the value 3.
update chk_vals
set vals = vals || format('|%s', split_part(vals, '|', 4));
(split_part() can reference the row being updated directly; wrapping it in a subselect over the same table would fail as soon as the table holds more than one row.)
Here format('|%s', ...) builds the |3 fragment that gets appended to the existing value.
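A quick check confirms the result:
select vals from chk_vals;
-- 1|2|4|3|9|8|34|35|38|1|37|1508|1534|3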
Can anyone tell me which command is used to concatenate three columns' data into one column in a PostgreSQL database?
e.g. if the columns are:
begin | Month | Year
12 | 1 | 1988
13 | 3 |
14 | | 2000
| 5 | 2012
output:
Result
12-1-1988
13-3-null
14-null-2000
null-5-2012
Actually, I have concatenated two columns, but the result only shows rows where none of the columns is null; I also want to display values where only a single column is non-null.
If you simply used the || operator, you'd get a complete null result whenever any element is null (the concat() function, by contrast, silently skips null arguments).
You could use the function concat_ws(), which ignores null values. But you are expecting them to be shown.
So you need to cast the real null value into the non-null text 'null'. This can be done with the COALESCE() function, which takes several arguments and returns the first non-null one. But here a problem occurs: the 'null' string is of a different type (text) than the columns (int). So you have to equalize the types, e.g. by casting the int values to text first. Finally, your query could look like this:
SELECT
concat_ws('-',
COALESCE(begin::text, 'null'),
COALESCE(month::text, 'null'),
COALESCE(year::text, 'null')
)
FROM mytable
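For completeness, a minimal reproduction (assuming a table named mytable with the three int columns from the example):
CREATE TABLE mytable ("begin" int, month int, year int);
INSERT INTO mytable VALUES (12, 1, 1988), (13, 3, NULL), (14, NULL, 2000), (NULL, 5, 2012);
Running the query above then returns:
 concat_ws
--------------
 12-1-1988
 13-3-null
 14-null-2000
 null-5-2012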
When I created the table Tab, I specified the columns as string,
Tab: ([Key1:string()] Col1:string();Col2:string();Col3:string())
But the column datatype (t) is empty. I suppose specifying the column as string has no effect.
meta Tab
c t f a
--------------------
Key1
Col1
Col2
Col3
After I do a bulk upsert in Java...
c.Dict dict = new c.Dict((Object[]) columns.toArray(new String[columns.size()]), data);
c.Flip flip = new c.Flip(dict);
conn.c.ks("upsert", table, flip);
The datatypes are all symbols:
meta Tab
c t f a
--------------------
Key1 s
Col1 s
Col2 s
Col3 s
How can I specify the datatype of the columns as string and have it remain as string?
You can't define a column of the empty table as string, because strings are merely lists of characters (so a string column is a list of lists).
You can just set them as empty lists, which is what your code is doing, but the column will then take on the type of whatever data is first inserted into it.
The real question is why your Java process is sending symbols when it should be sending strings. You need to make the change there before publishing to KDB.
Note that if you define the columns as char you still won't be able to upsert strings:
q)Tab: ([Key1:`char$()] Col1:`char$();Col2:`char$();Col3:`char$())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
'rank
[0] Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
^
q)Tab: ([Key1:()] Col1:();Col2:();Col3:())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
Key1 | Col1 Col2 Col3
------| --------------------
"test"| "test" "test" "test"
KDB does not allow you to define a column's type as a list when creating a table, which means you cannot declare a column as string, because a string is itself a list (of characters).
The only way is to define the column as an empty general list, like:
q)t:([]id:`int$();val:())
Then when you insert data to this table the column will automatically take type of that data.
q)`t insert (4;"row1")
q)meta t
c | t f a
---| -----
id | i
val| C
In your case, one option is to send string data from your Java process, as mentioned by user 'emc211'; the other is to convert your data to strings in the KDB process before insertion.
I have a column in a Postgresql table that is unique and character varying(10) type. The table contains old alpha-numeric values that I need to keep. Every time a new row is created from this point forward, I want it to be numeric only. I would like to get the max numeric-only value from this table for this column then create a new row with that max value incremented by 1.
Is there a way to query this table for the max numeric value only for this column?
For example, if this column currently has the values:
1111
A1111A
1234
1234A
3331
B3332
C-3333
33-D33
3**333*
Is there a query that will return 3333, AKA cut out all the non-numeric characters from the values and then perform a MAX() on them?
Not precisely what you're asking, but something that I think will work better for you.
To go over all the values, strip out the non-digit characters, cast each result to a number, and return the max:
SELECT MAX(NULLIF(regexp_replace(my_column, '[^0-9]', '', 'g'), '')::bigint) FROM public.foobar;
(The NULLIF guard skips values that contain no digits at all, which would otherwise fail the cast, and bigint avoids overflowing on long digit runs.)
This gets you your max value... say 2999.
Now, going forward, consider making the default for your column a serial-like value converted to text... that way you set the "MAX" once and then let Postgres do all the work for future values.
-- create simple integer sequence
CREATE SEQUENCE public.foobar_my_column_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1; -- CACHE must be at least 1
-- use new sequence as default value for column __and__ convert to text
ALTER TABLE foobar
ALTER COLUMN my_column
SET DEFAULT nextval('public.foobar_my_column_seq'::regclass)::text;
-- initialize "next value" of sequence to whatever is larger than
-- what you already have in your data ... say 3000:
ALTER SEQUENCE public.foobar_my_column_seq RESTART WITH 3000;
Because you're simply setting a default, you don't change your current alpha-numeric values.
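A quick sketch of the effect (assuming, hypothetically, that foobar has no other NOT NULL columns):
INSERT INTO foobar DEFAULT VALUES RETURNING my_column;
-- my_column
-- -----------
-- 3000
Each subsequent insert that omits my_column gets the next value ('3001', '3002', ...) as text.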
I figured it out. The following query works.
select text_value, regexp_replace(text_value, '[^0-9]+', '') as new_value from the_table;
Result:
text_value | new_value
-----------------------+-------------
4*215474 | 4215474
740024 | 740024
4*100535 | 4100535
42356 | 42356
CASH |
4*215474 | 4215474
740025 | 740025
740026 | 740026
4*5089655798 | 45089655798
4*15680 | 415680
4*224034 | 4224034
4*265718708 | 4265718708
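To collapse this into the single max value the question asked for, the same expression can be wrapped in MAX(), with a guard for digit-free values like CASH (which would otherwise make the cast fail):
select max(nullif(regexp_replace(text_value, '[^0-9]', '', 'g'), '')::bigint) as max_value
from the_table;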
Looking at postgres documentation for JSON functions (https://www.postgresql.org/docs/9.6/static/functions-json.html), there is a section I don't understand about expanding a JSON object into a set of rows.
The docs show its signature, json_populate_recordset(base anyelement, from_json json), with the sample call select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]').
But I'm not sure what that first argument (null::myrowtype) is -- a table definition?
The description of this function is: Expands the outermost array of objects in from_json to a set of rows whose columns match the record type defined by base (see note below).
None of the notes at the bottom seemed relevant. I'm hoping for a better explanation with sample code to understand it all.
The second note is the one of interest in the docs, as it explains how missing fields/values are handled:
Note: In json_populate_record, json_populate_recordset, json_to_record
and json_to_recordset, type coercion from the JSON is "best effort"
and may not result in desired values for some types. JSON keys are
matched to identical column names in the target row type. JSON fields
that do not appear in the target row type will be omitted from the
output, and target columns that do not match any JSON field will
simply be NULL.
json_populate_recordset maps the keys of each JSON object to the column names of the table given as the first argument.
create table public.test (a int, b text);
select * from json_populate_recordset(null::public.test, '[{"a":1,"b":"b2"},{"a":3,"b":"b4"}]');
a | b
---+----
1 | b2
3 | b4
(2 rows)
--Wrong column name:
select * from json_populate_recordset(null::public.test, '[{"a":1,"c":"c2"},{"a":3,"c":"c4"}]');
a | b
---+---
1 |
3 |
(2 rows)
--Wrong datatype:
select * from json_populate_recordset(null::public.test, '[{"a":1.1,"b":22},{"a":3.1,"b":44}]');
ERROR: invalid input syntax for integer: "1.1"
Alternatively, instead of using the column name/type from an existing table, you can define the columns on the fly
select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]') as x(a int, b text);
a | b
---+-----
1 | foo
2 |
(2 rows)
Note that a default type cast occurs ("2" is mapped to 2), missing fields come out as null (b in the second record), and fields not defined in the column list are ignored (c).
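As for the original question: null::myrowtype in the docs' sample is a null of some composite type; it does not have to be a table's row type. You can create a standalone type and use it directly:
create type myrowtype as (a int, b int);
select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]');
 a | b
---+---
 1 | 2
 3 | 4
(2 rows)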
If I generate an identity for a table on the column cust-id, I want the next column userid to be cust-id+CID.
E.g. 000000001CID, 0000000002CID
What SQL do I include for this?
Similarly, if I have 00001 in the column cust-id and abcd in the column section, the third column must have the value 00001abcd.
Please let me know the solution.
You just need to create a trigger, something like:
CREATE TRIGGER A
BEFORE INSERT ON B
REFERENCING NEW AS N
FOR EACH ROW
BEGIN ATOMIC
SET N.USERID = N.CUST_ID || 'CID';
IF (N.CUST_ID = '00001' AND N.SECTION = 'abcd') THEN
SET N.THIRD = N.CUST_ID || N.SECTION;
END IF;
END #
Note that DB2 concatenates strings with || rather than +, and 'CID' here is the literal suffix from your example.
By the way, having to generate derived values in a column suggests that your model is not normalized, and sometimes this is a source of errors.
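A quick sanity check of the intended behavior (hypothetical layout, assuming B has character columns CUST_ID, SECTION, USERID, and THIRD):
INSERT INTO B (CUST_ID, SECTION) VALUES ('00001', 'abcd');
SELECT USERID, THIRD FROM B;
-- USERID    THIRD
-- --------  ---------
-- 00001CID  00001abcd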