Output file using ISQL - InterBase

I have an ISQL script for InterBase that outputs query results to CSV, and I want to do the same thing for another query. The first example below works, but the second one, which uses a JOIN, only runs in the SQL window; I could not get it to write the output file.
Working:
OUTPUT TESTING1.csv FORMAT ASCII DELIMITED BY ';' QUOTE '';
Select * from aircraft;
OUTPUT;
Not working:
OUTPUT TESTING1.csv FORMAT ASCII DELIMITED BY ';' QUOTE '';
SELECT * FROM aircraft
Join Operation ON aircraft.AC_ID = Operation.ac_ID
OUTPUT;

I got it working now: I realised that I needed to put the semicolon after the JOIN statement, so that the SELECT is actually terminated and executed before OUTPUT; closes the file:
OUTPUT TESTING1.csv FORMAT ASCII DELIMITED BY ';' QUOTE '';
SELECT * FROM aircraft
Join Operation ON aircraft.AC_ID = Operation.ac_ID;
OUTPUT;

How to add a single quote before each comma

I have a column as below:
mystring
AC1853551,AC1854125,AC1855220,AC188115,AC1884120,AC1884390,AC1885102
I need to transform it to get this output:
mystring
('AC1853551','AC1854125','AC1855220','AC188115','AC1884120','AC1884390','AC1885102')
Here is the query that I tried:
select CONCAT('( , CONCAT (mystring, ')')) from mytablename
I'm getting an error when it comes to inserting a single quote (').
Then I thought about replacing the comma with ','.
How do I get the desired output?
I'm using Postgres 10.
A literal quote is coded as a doubled quote:
select '(''' || replace(mycolumn, ',', ''',''') || ''')'
from mytable
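As a quick sanity check, here is the same expression applied to a shortened literal (a standalone sketch; substitute mycolumn/mytable as above):
select '(''' || replace('AC1853551,AC1854125', ',', ''',''') || ''')' as mystring;
-- returns: ('AC1853551','AC1854125')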

PSQL - encrypt_iv returns multi-line encoded text

I am facing an issue loading a CSV produced by a query that encrypts a text column.
This is the query I run:
select
id,
encode(encrypt_iv(raw_address::bytea, '<aes_key>', '<iv>', 'aes-cbc/pad:pkcs'), 'base64') raw_address
from some_table;
But I got multi-line text as the result for the raw_address column. So I tried:
select
id,
encode(encrypt_iv(replace(raw_address, chr(92), chr(47))::bytea, '<aes_key>', '<iv>', 'aes-cbc/pad:pkcs'), 'base64') raw_address
from some_table;
I did this because I just wanted to turn \ into / (to avoid \n), but I got the same multi-line result. Then I found this answer, realised that a + character was present, and tried:
select
id,
replace(encode(encrypt_iv(replace(raw_address, chr(92), chr(47))::bytea, '<aes_key>', '<iv>', 'aes-cbc/pad:pkcs'), 'base64'), chr(10), '')
from some_table;
Then I got one line as the result. But I don't know if I am modifying the original value; I cannot decrypt it. I tried:
select encode(decrypt_iv('55WHZ7tyGAlQxTIM0fPfY5tOKpbYzwdXCsemIgYV5TRG+h45IW1nU/zCqZbkIeiXQ3OXZSlHo0RPgq5wcgJ0xQ==', '<aes_key>', '<iv>', 'aes-cbc/pad:pkcs'), 'base64') ;
But I got:
ERROR: decrypt_iv error: Data not a multiple of block size
Any suggestion will be appreciated.
Thanks in advance!
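A likely cause, offered as a hedged sketch (assuming stock PostgreSQL/pgcrypto, with <aes_key> and <iv> standing in for the real values as in the question): encode(..., 'base64') wraps its output every 76 characters, which is where the line breaks come from, and stripping chr(10) only reformats that base64 text without touching the ciphertext. The decrypt attempt above fails because decrypt_iv is handed base64 text rather than raw bytes; the text has to be decoded back to bytea first:
select convert_from(
  decrypt_iv(
    decode('55WHZ7tyGAlQxTIM0fPfY5tOKpbYzwdXCsemIgYV5TRG+h45IW1nU/zCqZbkIeiXQ3OXZSlHo0RPgq5wcgJ0xQ==', 'base64'),
    '<aes_key>', '<iv>', 'aes-cbc/pad:pkcs'),
  'utf8');
-- decode() reverses encode(), and convert_from() turns the decrypted bytea back into text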

Copy into Snowflake table from raw data file using Perl DBI

There's not much info out there on Perl DBI and Snowflake, so I'll give this a shot. I have a raw file whose headers are contained in line 1. This exact COPY INTO command works from the Snowflake GUI. I'm not sure if I can just take the command as-is and put it into a Perl prepare and execute.
COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA$FILENAME,'/',4) as SEAT_ID,
$1:auction_id_64 as AUCTION_ID_64,
DATEADD(S,$1:date_time,'1970-01-01') as DATE_TIME,
$1:user_tz_offset as USER_TZ_OFFSET,
$1:creative_width as CREATIVE_WIDTH,
$1:creative_height as CREATIVE_HEIGHT,
$1:media_type as MEDIA_TYPE,
$1:fold_position as FOLD_POSITION,
$1:event_type as EVENT_TYPE
FROM @DBTABLE.lnd.S3_STAGE_READY/pr/data/standard/data_dt=20200825/00/STANDARD_FILE.gz.parquet)
pattern = '.*.parquet' file_format = (TYPE = 'PARQUET' SNAPPY_COMPRESSION = TRUE)
ON_ERROR = 'SKIP_FILE_10%'
my $SQL = "COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA\$FILENAME,'/',4) as SEAT_ID,
\$1:auction_id_64 as AUCTION_ID_64,
DATEADD(S,\$1:date_time,'1970-01-01') as DATE_TIME,
\$1:user_tz_offset as USER_TZ_OFFSET,
\$1:creative_width as CREATIVE_WIDTH,
\$1:creative_height as CREATIVE_HEIGHT,
\$1:media_type as MEDIA_TYPE,
\$1:fold_position as FOLD_POSITION,
\$1:event_type as EVENT_TYPE
FROM \@DBTABLE.lnd.S3_STAGE_READY/pr/data/standard/data_dt=20200825/00/STANDARD_FILE.gz.parquet)
pattern = '.*.parquet' file_format = (TYPE = 'PARQUET' SNAPPY_COMPRESSION = TRUE)
ON_ERROR = 'SKIP_FILE_10%'";
my $sth = $dbh->prepare($SQL); # Perl is case-sensitive: $sql would be undefined here
$sth->execute;
Looking at the output from Snowflake I see these errors:
syntax error line 3 at position 4 unexpected '?'.
syntax error line 4 at position 13 unexpected '?'.
COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA$FILENAME,'/',4) as SEAT_ID,
$1? as AUCTION_ID_64,
DATEADD(S,$1?,'1970-01-01') as DATE_TIME,
$1? as USER_TZ_OFFSET,
$1? as CREATIVE_WIDTH,
$1? as CREATIVE_HEIGHT,
$1? as MEDIA_TYPE
Do I need to create bind variables for each of the columns? I usually pull in the data from the file and put it into variables, but this is different: I can't read the raw file first, it has to come directly from the COPY INTO command.
Any help would be appreciated.
DBI was interpreting the : as a bind-variable marker, rather than as an accessor into the variant. I used bracket notation instead, like the following:
my $SQL = "COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA\$FILENAME,'/',4) as SEAT_ID,
\$1['auction_id_64'] as AUCTION_ID_64,
DATEADD(S,\$1['date_time'],'1970-01-01') as DATE_TIME,
\$1['user_tz_offset'] as USER_TZ_OFFSET,
\$1['creative_width'] as CREATIVE_WIDTH,
etc...
That worked.
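For reference, colon and bracket notation read the same field from a VARIANT in Snowflake; the bracket form simply avoids the colon that DBI parses as a named placeholder. A minimal illustrative sketch (PARSE_JSON and the literal value are made up here, not taken from the original file):
SELECT v:auction_id_64    AS via_colon,   -- colon notation: DBI would mistake :auction_id_64 for a placeholder
       v['auction_id_64'] AS via_bracket  -- bracket notation: passes through DBI untouched
FROM (SELECT PARSE_JSON('{"auction_id_64": 123}') AS v) t;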

PowerShell - get a keyword from a query string

I'm learning PowerShell scripting and want to extract the table name from a SQL query string. For example, I have this query:
$q = "SELECT * FROM [TestDB].[dbo].Invoice_Details where Clientname='ABC'"
from which I want to extract the table name, i.e. it should output: Invoice_Details
Currently, I'm doing this with the following working code:
$q1 = $q -split '\[dbo\]\.'
$q2 = $q1[1] -split ' where '
write-host $q2[0] #here I get it right (Invoice_Details)
But sometimes the query may or may not have bracketed names like [TestDB].[dbo].
So I want to generalize this code so that it works with any combination of bracketed/bracketless table names.
Try something like this:
$res = ([regex]'(?is)\b(?:from|into|update)\s+(\[?.+\]?\.)?\[?(\[?\w+)\]?').Matches($q)  # group 1: optional [schema]. prefix, group 2: the table name
write-host $res[0].Groups[2].Value  # prints: Invoice_Details

Replacing Turkish characters with English characters

I have a table which has 120 columns, some of which include Turkish characters (for example "ç","ğ","ı","ö"). I want to replace these Turkish characters with English characters (for example "c","g","i","o"). Using the TRANWRD function would be really hard, because I would have to write the call 120 times, and the column names can change, so I would always have to check the code column by column.
Is there a simple macro which replaces these characters in all columns?
EDIT
In retrospect, this is an overly complicated solution... The translate() function should be used, as pointed out by another user. It could be integrated into a SAS function defined with PROC FCMP when used repeatedly.
A combination of regular expressions and a DO loop can achieve that.
Step 1: Build a conversion table in the following manner
Accented letters that resolve to the same replacement character are put on a single line, separated by the | symbol.
data conversions;
infile datalines dsd;
input orig $ repl $;
datalines;
ç,c
ğ,g
ı,i
ö|ò|ó,o
ë|è,e
;
Step 2: Store original and replacement strings in macro variables
proc sql noprint;
select orig, repl, count(*)
into :orig separated by ";",
:repl separated by ";",
:nrepl
from conversions;
quit;
Step 3: Do the actual conversion
Just to show how it works, let's deal with just one column.
data convert(drop=i re);
myString = "ç ğı òö ë, è";
do i = 1 to &nrepl;
re = prxparse("s/" || scan("&orig",i,";") || "/" || scan("&repl",i,";") || "/");
myString = prxchange(re,-1,myString);
end;
run;
Resulting myString: "c gi oo e, e"
To process all character columns, we use an array
Say your table is named mySource and you want all character variables to be processed; we'll create a vector called cols for that.
data convert(drop=c i re);
set mySource;
array cols(*) _character_;
do c = 1 to dim(cols);
do i = 1 to &nrepl;
re = prxparse("s/" || scan("&orig",i,";") || "/" || scan("&repl",i,";") || "/");
cols(c) = prxchange(re,-1,cols(c));
end;
end;
run;
When changing single characters, TRANSLATE is the proper function; it is one line of code.
translated = translate(string,"cgio","çğıö");
First get all your columns from the dictionary, and then replace the values in all of them in a macro do loop.
You can try a program like this (replace MYTABLE with your table name):
proc sql;
select name , count(*) into :columns separated by ' ', :count
from dictionary.columns
where memname = 'MYTABLE' and type = 'char'; /* character columns only, since TRANWRD needs character arguments */
quit;
%macro m;
data mytable;
set mytable;
%do i=1 %to &count;
%scan(&columns ,&i) = tranwrd(%scan(&columns ,&i),"ç","c");
%scan(&columns ,&i) = tranwrd(%scan(&columns ,&i),"ğ","g");
...
%end;
run;
%mend;
%m;