Error Code: 1582. Incorrect parameter count in the call to native function 'SUBSTRING_INDEX' - mysql-workbench

When I tried to copy the text from before the first comma in the first column to the second column using the following command:
UPDATE Table_Name
SET second_column = SUBSTRING_INDEX(first_column, ‘,’, 1);
I got the error message:
Error Code: 1582. Incorrect parameter count in the call to native function 'SUBSTRING_INDEX'
What is going wrong?
Thanks in advance.

The error came from copying and pasting the command from a non-programming text editor (e.g. Word):
The quotes around the delimiter were changed from straight single quotes (') to left and right curly quotes (‘ and ’).
Replacing them with straight single quotes around the delimiter solved the problem:
UPDATE Table_Name
SET second_column = SUBSTRING_INDEX(first_column, ',', 1);
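For reference, SUBSTRING_INDEX(str, delim, count) returns everything before the count-th occurrence of the delimiter (or after, for a negative count). A quick sanity check; the literals here are only illustrative:
SELECT SUBSTRING_INDEX('apple,banana,cherry', ',', 1);  -- 'apple'
SELECT SUBSTRING_INDEX('apple,banana,cherry', ',', 2);  -- 'apple,banana'
SELECT SUBSTRING_INDEX('apple,banana,cherry', ',', -1); -- 'cherry'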

Related

Renaming columns with ' in pyspark

How to rename column "RANDY'S" to 'RANDYS' in pyspark?
I tried the code below and it's not working:
test_rename_df=df.withColumnRenamed('"RANDY''S"','RANDYS')
Note that the original column name has double quotes around it.
You're adding too many quotes around the original column name. Try this:
test_rename_df = df.withColumnRenamed("RANDY'S", "RANDYS")
Side-note
When you call df.columns, the column RANDY'S is shown surrounded by double quotes instead of single quotes; that is just Python's repr picking the other quote style to avoid ambiguity. If your column were instead named RANDY"S, df.columns would use single quotes around the column name.
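Putting it together, a minimal self-contained sketch (the sample data here is made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A DataFrame whose column name contains a single quote
df = spark.createDataFrame([(1,), (2,)], ["RANDY'S"])
print(df.columns)  # ["RANDY'S"]  <- double quotes are just Python's repr

# Rename it; no extra quoting is needed around the column name
test_rename_df = df.withColumnRenamed("RANDY'S", "RANDYS")
print(test_rename_df.columns)  # ['RANDYS']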

Use of column names in Redshift COPY command which is a reserved keyword

I have a table in Redshift where the column names are 'begin' and 'end'. They are Redshift keywords. I want to explicitly use them in the Redshift COPY command. Is there a workaround other than renaming the columns in the table? That would be my last option.
I tried to enclose them within single/double quotes, but it looks like the COPY command only accepts comma-separated column names.
The COPY command fails if you don't escape reserved keywords used as column names, e.g. begin or end.
copy test1(col1,begin,end,col2) from 's3://example/file/data1.csv' credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXX' delimiter ',';
ERROR: syntax error at or near "end"
But it works fine if begin and end are enclosed in double quotes (") as below.
copy test1(col1,"begin","end",col2) from 's3://example/file/data1.csv' credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXX' delimiter ',';
I hope this helps. If you are seeing a different error, please update your question.
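The same double-quoting works anywhere Redshift expects an identifier, not just in COPY, since double quotes delimit identifiers in standard SQL. A quick sketch against the same table:
SELECT col1, "begin", "end", col2
FROM test1;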

USQL Escape Quotes

I am new to Azure Data Lake Analytics. I am trying to load a CSV whose string fields are double quoted, and there are quotes inside a column on some random rows.
For example
ID, BookName
1, "Life of Pi"
2, "Story about "Mr X""
When I try loading, it fails on the second record and throws an error message.
1. I wonder if there is a way to fix this in the CSV file? Unfortunately we cannot extract a new one from the source, as these are log files.
2. Is it possible to let ADLA ignore the bad rows and proceed with the rest of the records?
Execution failed with error '1_SV1_Extract Error':
E_RUNTIME_USER_EXTRACT_ROW_ERROR: "Error occurred while extracting row after processing 9045 record(s) in the vertex' input split. Column index: 9, column name: 'instancename'."
Inner error E_RUNTIME_USER_EXTRACT_EXTRACT_INVALID_CHARACTER_AFTER_QUOTED_FIELD: "Invalid character following the ending quote character in a quoted field."
Description: "Invalid character is detected following the ending quote character in a quoted field. A column delimiter, row delimiter or EOF is expected. This error can occur if double-quotes within the field are not correctly escaped as two double-quotes."
Resolution: "Column should be fully surrounded with double-quotes and double-quotes within the field escaped as two double-quotes."
As per the error message, if you are importing a quoted CSV which has quotes within some of the columns, then these need to be escaped as two double-quotes. In your particular example, your second row needs to be:
2, "Story about ""Mr X"""
So one option is to fix up the original file on output. If you are not able to do this, then you can import all the columns as one column, use a regular expression to fix up the quotes, and output the file again, e.g.
// Import each record as a single column, then use Regex to clean up the quotes.
// The '|' delimiter is chosen on the assumption that it never appears in the
// data, so each whole line lands in the single string column.
#input =
    EXTRACT oneCol string
    FROM "/input/input132.csv"
    USING Extractors.Text(delimiter : '|', quoting : false);

// Double up any quote that is not adjacent to a comma, i.e. a quote embedded
// inside a field rather than one that opens or closes it. Note this assumes
// fields follow the comma immediately, with no space after the delimiter.
#output =
    SELECT Regex.Replace(oneCol, "([^,])\"([^,])", "$1\"\"$2") AS cleanCol
    FROM #input;

OUTPUT #output
TO "/output/output.csv"
USING Outputters.Csv(quoting : false);
The file will now import successfully.
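Once the quotes are repaired, the cleaned file should extract with normal quote handling turned back on. A hedged follow-up sketch, assuming the two-column schema from the sample data (header-row handling omitted for brevity):
#clean =
    EXTRACT ID string,
            BookName string
    FROM "/output/output.csv"
    USING Extractors.Csv(quoting : true);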

Variable substitution of multiline list of strings in PostgreSQL

I'm trying to substitute the list in the following code:
kategori NOT IN ('Fors',
'Vattenfall',
'Markerad vinterled',
'Fångstarm till led',
'Ruskmarkering',
'Tält- och eldningsförbud, tidsbegränsat',
'Skidspår')
I found this question for the multiline part. However
SELECT ('Fors',
'Vattenfall',
'Markerad vinterled',
'Fångstarm till led',
'Ruskmarkering',
'Tält- och eldningsförbud, tidsbegränsat',
'Skidspår') exclude_fell \gset
gives
ERROR: column "fors" does not exist
LINE 1: SELECT (Fors,
^
So I tried using triple quotes, dollar quoting, and escape sequences. Nothing has worked satisfactorily. This is true even if I use a single-line variable and \set, so I must have misunderstood something about variable substitution. What is the best way of doing this?
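For what it's worth, one pattern that does work in psql is to quote the entire list for \set, doubling each embedded single quote, and then interpolate the variable unquoted. A sketch, assuming a hypothetical table leder; note that psql meta-commands such as \set must fit on a single line, which is one reason the multiline form fails:
\set exclude_fell '(''Fors'', ''Vattenfall'', ''Markerad vinterled'', ''Fångstarm till led'', ''Ruskmarkering'', ''Tält- och eldningsförbud, tidsbegränsat'', ''Skidspår'')'
SELECT * FROM leder WHERE kategori NOT IN :exclude_fell;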

How to update a record with a literal percent sign (%) in PostgreSQL without saving it as "\%"

I need to update a record that contains literal percent signs, using PostgreSQL in Railo. The query looks like:
<cfquery>
update foo set bar = 'string with % in it %'
</cfquery>
It throws an error, as ColdFusion normally interprets the percent sign as a wildcard character. I can escape it using the following query:
<cfquery>
update foo set bar = 'string with escaped \% in it \%'
</cfquery>
However, the record now contains "\%" in the database and will be displayed on the page as "\%".
I found documentation with an example of escaping the percent sign in a SELECT, but it does not work for me: syntax error at or near "ESCAPE".
SELECT emp_discount
FROM Benefits
WHERE emp_discount LIKE '10\%'
ESCAPE '\';
Is there a better way to achieve the same goal? The underlying database is PostgreSQL. Thanks!
Query parameters escape special characters. Yet another reason to use them.
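A minimal sketch of the parameterized version using cfqueryparam (the datasource name is hypothetical):
<cfquery datasource="myDSN">
    update foo
    set bar = <cfqueryparam value="string with % in it %" cfsqltype="cf_sql_varchar">
</cfquery>
The value is sent as a bind parameter, so the percent signs reach the database as literal characters and nothing is stored as "\%".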