Handling special characters with backticks (backquotes) in StreamSets Data Collector

My source fields have special characters and need to be enclosed with backticks.
Example:
Source - ahj##
Target - `ahj##`
How do I implement this in StreamSets, i.e. enclose the column names in backticks?

I assume you're trying to write to MySQL. The correct way to do this is to enable Enclose Object Names in the JDBC tab and append ?sessionVariables=sql_mode=ANSI_QUOTES to the JDBC URL.
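For illustration (the host, port and database name below are placeholders, not taken from the question), the resulting JDBC URL would look like:
jdbc:mysql://dbhost:3306/mydb?sessionVariables=sql_mode=ANSI_QUOTES
With ANSI_QUOTES in effect, MySQL treats double-quoted names as identifiers, so object names enclosed that way parse correctly. A minimal sketch run directly against MySQL (the table and value are hypothetical, only to show the effect of the session variable):
SET SESSION sql_mode = 'ANSI_QUOTES';
-- with ANSI_QUOTES the double-quoted column name containing special characters is accepted
INSERT INTO "my_table" ("ahj##") VALUES ('some value');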


Capture File name from File Browser Item Field in Oracle Apex

I want to capture file_name and store it in another item field at run time using a dynamic action.
I tried to capture the File Browser item field, but it captures the entire file path.
Please suggest whether this can be done using PL/SQL or JS in Oracle Apex 4.2.
You can use regular expression, relying on the fact that there is always a backslash before the filename:
$('#P6_FILE').val().match(/\\([^\\]*)$/)[1]
Explaining the regular expression:
\\ double backslash: escapes the backslash
() capturing group: extracts just the filename, since the full match still includes the leading backslash
[^\\]* any character that is not a backslash, repeated any number of times
$ end of string
(Stack Overflow sometimes mangles backslashes, so if the pattern above looks odd, it is simply a backslash followed by a captured run of non-backslash characters, anchored to the end of the string.)
This has worked in a Dynamic Action "Set Value" action.
I use this SQL to get the file name from the file browser:
select filename
FROM apex_application_temp_files
where name = :P2_FILE_SELECT;
P2_FILE_SELECT is the file browser item.
Write this SQL into the "SQL Query" of the item's Source.

Kentico Import Toolkit 9.0

We have been using Kentico Import Toolkit v9.0 to import some legacy data from SQL Server 2008 R2 into newly created Kentico custom tables. At one step, the CMS query is not able to handle the SQL Server comparison when the string value contains an apostrophe. Is there anything in the Toolkit that can help handle this? We would not want to alter the source (legacy) data, since removing an apostrophe would likely change the meaning of the text itself.
A sample query (as an example):
SELECT NodeID FROM View_CMS_Tree_Joined WHERE ClassDisplayName = 'Custom Table Name' AND NodeName = 'Alzheimer's Disease'
Your help is much appreciated! The key question is how and where we can escape the apostrophe in the CMS query from inside the Kentico Import Toolkit.
Try escaping the single quote in the string by repeating it:
SELECT NodeID FROM View_CMS_Tree_Joined WHERE NodeName = 'Alzheimer''s Disease'
Hope this helps.
If you're using display names, you need to convert them to code names. Code names don't allow special characters like single and double quotes; typically underscores and dashes are about it, aside from alphanumeric characters.
If you can modify the WHERE statement, make sure to escape your quotes when writing the query, for example by doubling them as in the sketch below.
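As a rough, hypothetical T-SQL sketch of that doubling (the variable names are made up and this is not Kentico-specific), for the case where the query text has to be built from a value that contains an apostrophe:
-- @NodeName holds the raw value; REPLACE doubles every single quote before it is
-- spliced into the query text, so the comparison still matches "Alzheimer's Disease"
DECLARE @NodeName nvarchar(200) = N'Alzheimer''s Disease';
DECLARE @sql nvarchar(max) =
    N'SELECT NodeID FROM View_CMS_Tree_Joined ' +
    N'WHERE ClassDisplayName = ''Custom Table Name'' ' +
    N'AND NodeName = ''' + REPLACE(@NodeName, N'''', N'''''') + N'''';
EXEC sp_executesql @sql;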

postgresql - pgloader - quotes handling

I am new to PostgreSQL and just starting to use it. I am trying to load a file into a table and am facing some issues.
Sample data: the file file1.RPT contains data in the format below
"Bharath"|Kumar|Krishnan
abc"|def|ghi
qwerty|asdfgh|lkjhg
Below is the load script that is used
LOAD CSV
INTO table1
....
WITH truncate,
fields optionally enclosed by '"',
fields escaped by '"',
fields terminated by '|'
....
However, the above script is not working and does not load any data into the table. I am not sure what the issue is. My understanding is that the first row should load successfully (since I have specified optionally enclosed by) and the second row should also load (since I am trying to escape the double quote).
Request help in getting this rectified.
Thank you.
You cannot use the same character both as the quote character and as the escape character. If the double quote can appear as part of the data, it can be treated as ordinary data by using the fields not enclosed option. (The default is fields optionally enclosed by double-quote.)
Apparently, the quote in the second row is not actually escaped: either you must put a backslash (or another escape character) before it:
abc\"|def|ghi
or you should enclose the entire field in quotes.
Another alternative is to accept literal quotes in the first field; in that case use
fields not enclosed
in your load script, as in the sketch below.
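A minimal sketch of that last option (the FROM path and the target connection string are placeholders, not taken from the post):
LOAD CSV
     FROM 'file1.RPT'
     INTO postgresql:///mydb?tablename=table1   -- hypothetical database and table
     WITH truncate,
          fields not enclosed,
          fields terminated by '|';
Note that with fields not enclosed, the double quotes are loaded as part of the data, so the first column of the first row becomes "Bharath", quotes included.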

Single quotes stored in a Postgres database

I've been working on an Express app that has a form designed to hold lines and quotes.
Some of the lines will have single quotes ('), but overall the app is able to store the info, and I'm able to back it up without any problems. Now, when I run pg_dump and have the database written out to an SQL file, the quotes seem to make some things appear a bit wonky in my text editor.
Would I have to create a method to change all the single quotation marks into doubled ones, or can I leave them as is and still be able to load the dump back into the database without causing major issues? I know people will continue to enter lines that contain either single or double quotation marks, so any solution or answer would help greatly.
Single quotes in character data types are no problem at all. You just need to escape them properly in string literals.
To write data with INSERT you need to quote all string literals according to SQL syntax rules. There are tools to do that for you ...
Insert text with single quotes in PostgreSQL
However, pg_dump takes care of escaping automatically. The default mode produces text output to be re-imported with COPY (much faster than INSERT), and single quotes have no special meaning there. And in (non-default) csv mode, the default quote character is double-quote (") and configurable. The manual:
QUOTE
Specifies the quoting character to be used when a data value is quoted. The default is double-quote. This must be a single one-byte character. This option is allowed only when using CSV format.
The format is defined by rules for COPY and not by SQL syntax rules.
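For illustration (the table and column names here are made up, not from the question), the two forms look like this:
-- SQL syntax: a single quote inside a string literal is escaped by doubling it
INSERT INTO quotes (line) VALUES ('It''s a fine line');
-- pg_dump's default plain-text output instead emits COPY data, where the quote
-- needs no escaping at all:
COPY quotes (line) FROM stdin;
It's a fine line
\.
So you can leave the single quotes as they are; the dump will restore cleanly.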

Uploading data to RedShift using COPY

I am trying to upload data to RedShift using COPY command.
On this row:
4072462|10013868|default|2015-10-14 21:23:18.0|0|'A=0
I am getting this error:
Delimited value missing end quote
This is the COPY command:
copy test
from 's3://test/test.gz'
credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx' removequotes escape gzip
First, I hope you know why you are getting this error: you have a single quote in one of the column values. For the removequotes option, the Redshift documentation clearly says:
If a string has a beginning single or double quotation mark but no corresponding ending mark, the COPY command fails to load that row and returns an error.
One thing is certain: removequotes is not what you are looking for.
Second, what are your options?
If preprocessing the S3 file is in your control, consider using the escape option. Per the documentation,
When this parameter is specified, the backslash character (\) in input data is treated as an escape character.
So your input row in S3 should change to something like:
4072462|10013868|default|2015-10-14 21:23:18.0|0|\'A=0
See if the CSV DELIMITER '|' option works for you; check the COPY documentation. A sketch of that variant follows below.
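A hedged sketch of that variant, reusing the placeholder bucket, table and credentials from the question. Note that CSV cannot be combined with removequotes or escape, and with CSV the default quote character is the double quote, so the stray single quote in 'A=0 is loaded as ordinary data:
copy test
from 's3://test/test.gz'
credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
gzip
csv delimiter '|';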