export records from a table without double quotes around the numeric columns in db2 (UDB) - db2

I am trying to remove the double quotes around the numeric columns in the export command by using the replace function, but it did not work. Below is the query I used in a Linux environment:
EXPORT TO '/Staging/ebi/src/CLP/legal_bill_charge_adjustment11.csv' OF DEL
MESSAGES '/Staging/ebi/src/CLP/legal_bill_charge_adjustment11.log'
select
CLIENT_ID,
CLIENT_DIVISION_ID,
CLIENT_OFFICE_ID,
MATTER_ID,
LEGAL_BILL_CHARGE_ADJ_ID,
LEGAL_BILL_CHARGE_ID,
ADJUSTMENT_DT,
replace ( ORIGINAL_ADJUSTMENT_AMT,""),
replace (CURRENT_ADJUSTMENT_AMT,""),
replace (SYSTEM_ADJUSTMENT_AMT,""),
replace (CLIENT_ADJUSTMENT_AMT,""),
replace (DELETED_ADJUSTMENT,""),
FLAGGED_AMOUNT,
ADJUSTMENT_USER,
STATUS_DESC,
ADJUSTMENT_COMMENT,
WF_TASK_NAME,
WF_TASK_DESC from CLP.legal_bill_charge_adjustment1;
If anyone could suggest the exact db2 query, it would be helpful.
Thanks in advance.

EXPORT will not put quotes around numeric data types. You have not provided any data type information, so I suspect your numeric content may be stored in a CHAR/VARCHAR column.
Try casting the columns to numeric data types in the export SQL statement.
i.e.
SELECT cast(Textcol as integer) as colname
..
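For example, a minimal sketch of the full statement for the question's table, assuming the adjustment-amount columns are stored as VARCHAR and actually hold decimal values (adjust DECIMAL(18,2) to the real precision and scale):
EXPORT TO '/Staging/ebi/src/CLP/legal_bill_charge_adjustment11.csv' OF DEL
MESSAGES '/Staging/ebi/src/CLP/legal_bill_charge_adjustment11.log'
SELECT
CLIENT_ID,
CLIENT_DIVISION_ID,
CLIENT_OFFICE_ID,
MATTER_ID,
LEGAL_BILL_CHARGE_ADJ_ID,
LEGAL_BILL_CHARGE_ID,
ADJUSTMENT_DT,
CAST(ORIGINAL_ADJUSTMENT_AMT AS DECIMAL(18,2)) AS ORIGINAL_ADJUSTMENT_AMT,
CAST(CURRENT_ADJUSTMENT_AMT AS DECIMAL(18,2)) AS CURRENT_ADJUSTMENT_AMT,
CAST(SYSTEM_ADJUSTMENT_AMT AS DECIMAL(18,2)) AS SYSTEM_ADJUSTMENT_AMT,
CAST(CLIENT_ADJUSTMENT_AMT AS DECIMAL(18,2)) AS CLIENT_ADJUSTMENT_AMT,
CAST(DELETED_ADJUSTMENT AS DECIMAL(18,2)) AS DELETED_ADJUSTMENT,
FLAGGED_AMOUNT,
ADJUSTMENT_USER,
STATUS_DESC,
ADJUSTMENT_COMMENT,
WF_TASK_NAME,
WF_TASK_DESC
FROM CLP.legal_bill_charge_adjustment1;
Once the expression is numeric, the DEL format writes it without any character delimiter around it.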

Related

PostgreSQL - jsonb - How to get the datatype for value in query with jsonpath

In PostgreSQL, using a jsonb column, is there a way to select/convert an attribute to its actual datatype instead of getting it as a string when using jsonpath? I would like to avoid casts as well as the -> and ->> constructs, since I have to select many attributes with very deep paths; I am trying to do it using jsonpath and * or ** in the path.
Is it possible to do it this way, or must I use -> and ->> for each node in the path? That would make the query look complicated, as I have to select about 35+ attributes with quite deep paths.
Also, how do we remove quotes from the selected value?
This is what I was trying, but it doesn't remove the quotes from the text value and gives an error on the numeric one:
Select
PolicyNumber AS "POLICYNUMBER",
jsonb_path_query(payload, '$.**.ProdModelID')::text AS "PRODMODELID",
jsonb_path_query(payload, '$.**.CashOnHand')::float AS "CASHONHAND"
from policy_json_table
PRODMODELID still shows the quotes around the value, and when I add ::float to the CashOnHand column, it gives an error:
SQL Error [22023]: ERROR: cannot cast jsonb string to type double precision
Thank you
When you try to cast a jsonb value directly to another datatype, Postgres will first convert it to JSON text and then parse that (a short illustration follows the links below). See:
How to convert Postgres json(b) to integer?
How to convert Postgres json(b) to float?
How to convert Postgres json(b) to text?
How to convert Postgres json(b) to boolean?
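As a small illustration of the difference (assumes PostgreSQL 11 or later, where the direct jsonb-to-number casts exist):
select '"abc"'::jsonb::text;     -- returns "abc" (the JSON quoting is kept)
select '"abc"'::jsonb #>> '{}';  -- returns abc (plain text, no quotes)
select '"1.5"'::jsonb::float;    -- ERROR: cannot cast jsonb string to type double precision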
When you have strings in your JSON values, to avoid the quotes you'll need to extract them by using one of the json functions/operators returning text. In your case:
SELECT
PolicyNumber AS "POLICYNUMBER",
jsonb_path_query(payload, '$.**.ProdModelID') #>> '{}' AS "PRODMODELID",
(jsonb_path_query(payload, '$.**.CashOnHand') #>> '{}')::float AS "CASHONHAND"
FROM policy_json_table
The jsonb_path_query function returns the value with the quotes ("") still in place, so you cannot cast it to integer or float directly. To cast the value to integer or float, you need it without the quotes.
You can use this SQL to get the values without the quotes:
Select
PolicyNumber AS "POLICYNUMBER",
(payload->>'ProdModelID')::text AS "PRODMODELID",
(payload->>'CashOnHand')::float AS "CASHONHAND"
from policy_json_table
If you need to use jsonb_path_query specifically, you can trim the quotes instead:
Select
PolicyNumber AS "POLICYNUMBER",
jsonb_path_query(payload, '$.**.ProdModelID')::text AS "PRODMODELID",
trim(jsonb_path_query(payload, '$.**.CashOnHand')::text, '"')::float AS "CASHONHAND"
from policy_json_table

Export Postgres data to s3 with headers on each row

I was able to export data from Postgres to AWS S3 using this document, via the aws_commons extension.
The table columns are id and name. With this table I was able to export a CSV file using the query below:
SELECT * from aws_s3.query_export_to_s3('select * from sample_table',
aws_commons.create_s3_uri('rds-export-bucket', 'rds-export', 'us-east-1') ,
options :='format csv , HEADER true'
);
With that query I can generate a CSV file like:
id,name
1,Monday
2,Tuesday
3,Wednesday
but is it possible to generate the CSV file in the format below?
id:1,name:Monday
id:2,name:Tuesday
id:3,name:Wednesday
I tried creating a different table with a jsonb structure and inserting each row as JSON, but then the export had curly braces and doubled double quotes in it.
Sample insertion command:
CREATE TABLE unstructured_table (data JSONB NOT NULL);
INSERT INTO unstructured_table VALUES($$
{
"id": "1",
"name": "test"
}
$$);
After exporting from this table, I get a CSV file like:
"{""id"": ""1"", ""name"": ""test""}"
Thanks in advance
JSON requires double quotes around strings and CSV also requires double quotes around fields when they contain commas or double quotes.
If your goal is to produce a comma-separated list of ColumnName:ColumnValue, for all columns and rows without any kind of quoting, then this requirement is not compatible with the CSV format.
This could however be generated in SQL relatively generically, for all columns and rows of any sample_table, id being the primary key, with a query like this:
select string_agg(k||':'||v, ',')
from sample_table t,
lateral row_to_json(t) as r(j),
lateral json_object_keys(j) as keys(k),
lateral (select j->>k) as val(v)
group by id order by id;
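Against the three sample rows shown above, that query returns one line per row in exactly the requested shape:
id:1,name:Monday
id:2,name:Tuesday
id:3,name:Wednesday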
If you feed that query to aws_s3.query_export_to_s3 with a format csv option, it will enclose each output line with double quotes. That may be close enough to your goal.
Alternatively, the text format could be used. Then the lines would not be enclosed with double quotes, but backslashes sequences might be emitted for control characters and backslashes themselves (see the text format in the COPY documentation).
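For instance, a sketch that combines the two, reusing the bucket and key from the question (dollar-quoting keeps the single quotes inside the nested query readable):
SELECT * FROM aws_s3.query_export_to_s3(
  $q$
    select string_agg(k||':'||v, ',')
    from sample_table t,
         lateral row_to_json(t) as r(j),
         lateral json_object_keys(j) as keys(k),
         lateral (select j->>k) as val(v)
    group by id order by id
  $q$,
  aws_commons.create_s3_uri('rds-export-bucket', 'rds-export', 'us-east-1'),
  options := 'format text'
);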
Ideally the result of this query should be output verbatim into a file, not using COPY, as you would do locally in a shell with:
psql -Atc "select ..." > output-file.txt
But it doesn't seem like aws_s3.query_export_to_s3 provides an equivalent way to do that, since it's an interface on top of the COPY command.

Timescaledb - How to display chunks of a hypertable in a specific schema

I have a table named conditions in a schema named test. I created a hypertable and inserted hundreds of rows.
When I run select show_chunks(), it works and displays the chunks, but I cannot use the table name as a parameter as suggested in the manual. This does not work:
SELECT show_chunks("test"."conditions");
How can I fix this?
PS: I also want to query a chunk itself by its name. How can I do this?
show_chunks expects a regclass, which, depending on your current search_path, means you may need to schema-qualify the table.
The following should work:
SELECT public.show_chunks('test.conditions');
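Equivalently, the text can be cast to regclass explicitly, which makes the expected parameter type visible:
SELECT public.show_chunks('test.conditions'::regclass);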
The double quotes are only necessary if your table name is a delimited identifier, for example if it contains a space; in that case you need the double quotes around the identifier. You still need to wrap the whole thing in single quotes, though:
SELECT public.show_chunks('test."equipment conditions"');
SELECT public.show_chunks('"test schema"."equipment conditions"');
For more information about identifier quoting:
https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
Edit: Addressing the PS:
I want to query the chunk itself by its name. How can I do this?
feike=# SELECT public.show_chunks('test.conditions');
show_chunks
--------------------------------------------
_timescaledb_internal._hyper_28_1176_chunk
_timescaledb_internal._hyper_28_1177_chunk
[...]
SELECT * FROM _timescaledb_internal._hyper_28_1176_chunk;

How do I escape single quotes in data which is of hstore datatype using Pentaho

I am trying to read hstore data from a source and insert it into a target hstore column. But for some weird reason the data has some single quotes in it, and I cannot delete or remove them. The source hstore data looks something like:
Value 1: "Target_Payment_Type"=>"Auto_Renew", "Target_Membership_term"=>"1 Year"
Value 2: "Target_Payment_Type"=>"'Auto_Renew'", "Target_Membership_term"=>"'1 Year'"
The transformation works fine with the first value but fails at Value 2. Could anyone suggest a way to escape the single quotes which may appear in the data, using Pentaho or PostgreSQL (the source and target databases)? Thanks in advance.
At least, you can use the Postgres replace function in the Table Input step:
SELECT
 all_your_non_string_columns
,replace(string_column, '''', '')  -- note that '''' represents a single quote
FROM
your_table
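Since the question's column is actually hstore, a variant of the same idea (a sketch, assuming the hstore extension is installed and the column is named hstore_column) is to strip the quotes from the text representation and cast it back:
SELECT
 all_your_non_string_columns
,replace(hstore_column::text, '''', '')::hstore AS hstore_column  -- remove every single quote, then cast back to hstore
FROM
your_table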
A real solution could perhaps be found in an up-to-date driver.

exporting to csv from db2 with no delimiter

I need to export content of a db2 table to CSV file.
I read that nochardel would prevent having a separator between each field, but that is not happening.
Suppose I have a table
MY_TABLE
-----------------------
Field_A varchar(10)
Field_B varchar(10)
Field_C varchar(10)
I am using this command
export to myfile.csv of del modified by nochardel select * from MY_TABLE
I get this written into myfile.csv:
data1 ,data2 ,data3
but I would like no ',' separator, like below:
data1 data2 data3
Is there a way to do that?
You're asking how to eliminate the comma (,) in a comma separated values file? :-)
NOCHARDEL tells DB2 not to surround character-fields (CHAR and VARCHAR fields) with a character-field-delimiter (default is the double quote " character).
Anyway, when exporting from DB2 using the delimited format, you have to have some kind of column delimiter. There isn't a NOCOLDEL option for delimited files.
The EXPORT utility can't write fixed-length (positional) records - you would have to do this by either:
Writing a program yourself,
Using a separate utility (IBM sells the High Performance Unload utility)
Writing an SQL statement that concatenates the individual columns into a single string:
Here's an example for the last option:
export to file.del
of del
modified by nochardel
select
cast(col1 as char(20)) ||
cast(intcol as char(10)) ||
cast(deccol as char(30))
from my_table;
This last option can be a pain since DB2 doesn't have an sprintf() function to help format strings nicely.
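Applied to the question's MY_TABLE, a variant that yields exactly the 'data1 data2 data3' layout could look like this (a sketch; RTRIM drops any trailing blanks in the values, and the single concatenated column means there is nothing left to delimit):
export to myfile.csv of del modified by nochardel
select
rtrim(Field_A) || ' ' ||
rtrim(Field_B) || ' ' ||
rtrim(Field_C)
from MY_TABLE;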
Yes, there is another way of doing this. I always do the following:
Put the select statement into a file (input.sql):
select
cast(col1 as char(20)),
cast(col2 as char(10)),
cast(col3 as char(30))
from my_table;
Call the db2 CLP like this:
db2 -x -tf input.sql -r result.txt
This will work for you, because you need to cast varchar to char. Like Ian said, casting numbers or other data types to char might bring unexpected results.
PS: I think Ian is right about the difference between CSV and fixed-length formats ;-)
Use "of asc" instead of "of del". Then you can specify the fixed column locations instead of delimiting.