Trying to update a PostgreSQL database using Doobie but no update happening - postgresql

I'm trying to update a table in a PostgreSQL database, passing a dynamic value using Doobie (functional JDBC). While executing the SQL statement I get the error below. Any help will be appreciated.
Code
Working code
sql"""UPDATE layout_lll
|SET runtime_params = 'testing string'
|WHERE run_id = '123-ksdjf-oreiwlds-9dadssls-kolb'
|""".stripMargin.update.quick.unsafeRunSync
Not working code
val abcRunTimeParams="testing string"
val runID="123-ksdjf-oreiwlds-9dadssls-kolb"
sql"""UPDATE layout_lll
|SET runtime_params = '${abcRunTimeParams}'
|WHERE run_id = '$runID'
|""".stripMargin.update.quick.unsafeRunSync
Error
Exception in thread "main" org.postgresql.util.PSQLException: The column index is out of range: 3, number of columns: 2.

Remove the ' quotes - Doobie makes sure they aren't needed. Doobie (and virtually any other DB library) uses parametrized queries, like:
UPDATE layout_lll
SET runtime_params = ?
WHERE run_id = ?
where ? will be replaced by parameters passed later on. This:
makes SQL injection impossible
helps with spotting errors in SQL syntax
When you want to pass a parameter, the ' is part of the value passed, not part of the parametrized query, and Doobie (or the JDBC driver) will "add" it for you. The variables you pass there are processed by Doobie; they aren't just pasted in like in normal string interpolation.
TL;DR Try running
val abcRunTimeParams="testing string"
val runID="123-ksdjf-oreiwlds-9dadssls-kolb"
sql"""UPDATE layout_lll
|SET runtime_params = ${abcRunTimeParams}
|WHERE run_id = $runID
|""".stripMargin.update.quick.unsafeRunSync

Related

How to pass case statements into .selectExpr("*", "all case statements") Spark code, from an external file

I have the below case statements in a SQL file
note - these are just sample statements, and I saved the file as col_sql.sql
"CASE WHEN a = 1 THEN ONE END AS INT_VAL"
, "CASE WHEN a = 'DE' THEN 'APHABET' AS STR_VAL"
In the Spark Scala code
I'm reading col_sql.sql as per below
val col_file = "dir/path/col_sql.sql"
val col_query = readFile(col_file) // internally converted to a string using .mkString
Then passing it to my select query in spark code
.selectExpr("*", col_query )
Expectation --
My expectation is that when my Spark job runs, the case statements are passed to the .selectExpr() function exactly as given in the SQL file, like below:
.selectExpr("*", "CASE WHEN a = 1 THEN ONE END AS INT_VAL", "CASE WHEN a = 'DE' THEN 'APHABET' AS STR_VAL")
When run manually in spark2-shell it works correctly, but in the spark2-submit job it throws a ParseDriver error.
Kindly assist me with this.
Each argument in selectExpr should resolve to one column (see examples in the doc). In this case you will have to split the expression read from the file, e.g.:
// Example given the complete string, you could split already when reading the file
val col_query = "\"CASE WHEN a = 1 THEN ONE END AS INT_VAL\", \"CASE WHEN a = 'DE' THEN 'APHABET' AS STR_VAL\""
val cols_queries = col_query.split(",").map(x => x.trim().stripPrefix("\"").stripSuffix("\""))
df.selectExpr(("*" +: cols_queries): _*) // prepend "*" and expand the array into varargs

How to link a Python pandas DataFrame to mysql.connector '%s' values

I am trying to pipe a web-scraped pandas DataFrame into a MySQL table with mysql.connector, but I can't seem to link the df values to the %s placeholders. The connection is good (I can add individual rows), but it just returns errors when I replace the values with %s.
cnx = mysql.connector.connect(host = 'ip', user = 'user', passwd = 'pass', database = 'db')
cursor = cnx.cursor()
insert_df = ("""INSERT INTO table"
"(page_1, date_1, record_1, task_1)"
"VALUES ('%s','%s','%s','%s')""")
cursor.executemany(insert_df, df)
cnx.commit()
cnx.close()
This returns "ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()."
If I add any additional operations it returns "ProgrammingError: Parameters for query must be an Iterable."
I am very new to this, so any help is appreciated.
The workaround for me was to redo my whole process: I used SQLAlchemy instead, which pandas integrates with directly, and the documentation makes this very easy. Message me if you want the exact code I used.

How to change Postgres JDBC driver properties to change return class on count function?

I am running a Jasper report (via a jrxml), connecting to / reading from a Postgres database.
The SQL returns a value from a count function, which then causes a java.lang.ClassCastException when writing this value to the Jasper report (via an XML). Can I amend the JDBC driver properties to handle this (rather than amending the SQL)?
The line in the SQL that caused the error was
COALESCE(B.GP_COUNT,0) as GP_COUNT
If I amend the line that populates GP_COUNT using a CAST statement then this works OK in the xml:-
CAST(COUNT(DISTINCT PD_CDE) AS INT4) AS GP_COUNT
I am looking for a solution that avoids changes to the XMLs and jrxmls (as we have hundreds of reports to convert to Postgres from DB2).
Any help appreciated, I am not a java person so I apologise in advance.
The PostgreSQL JDBC Driver does not return a string, but a BIGINT as result of the count aggregate function.
This Java code:
Class.forName("org.postgresql.Driver");
java.sql.Connection conn = java.sql.DriverManager.getConnection(
    "jdbc:postgresql://127.0.0.1/mydb?user=myuser"
);
java.sql.Statement stmt = conn.createStatement();
java.sql.ResultSet rs = stmt.executeQuery("SELECT count(*) FROM pg_class");
System.out.println("Type of count(*) is a BIGINT: "
    + (rs.getMetaData().getColumnType(1) == java.sql.Types.BIGINT)
);
rs.close();
stmt.close();
conn.close();
produces:
Type of count(*) is a BIGINT: true

Python psycopg2 - using .format() with a dbname inside a string

I'm using psycopg2 to query a table under a schema whose name is a number, i.e. the qualified name starts with a number + ".districts", so my code goes like:
number = 2345
cur = conn.cursor()
myquery = """ SELECT *
FROM {0}.districts
;""".format(number)
cur.execute("""{0};""".format(myquery))
data = cur.fetchall()
conn.close()
And I keep receiving the following psycopg2 error:
psycopg2.ProgrammingError: syntax error at or near "2345."
LINE 1: SELECT * FROM 2345.districts...
I thought the problem was the type of the data, maybe int(number) or str(number), but no, the same error appears.
What am I doing wrong?
The way you are trying to pass parameters is not supported: placeholders and string formatting of this kind can only supply values, not identifiers such as schema or table names. Please read the docs on passing parameters. One way to build the identifier safely is psycopg2's sql module, sketched below.

Prevent sql injection, Activerecord #where, with multiple AND/OR clause

I am fairly new to Rails. I have a Rails model Message with belongs_to :sender and belongs_to :receiver relations.
I am trying to create a message thread between two users: current_user and the user given by params[:id].
In the show controller action of the MessagesController, I want to use the equivalent of this sql query:
Message.find_by_sql(
"SELECT *
FROM messages
WHERE
(sender_id = #{current_user.id} OR sender_id = #{params[:id]})
AND
(receiver_id = #{current_user.id} OR receiver_id = #{params[:id]});"
)
If I were looking for one Message I would use this ActiveRecord query to prevent SQL injection:
Message.where('sender_id = ? OR receiver_id = ?', current_user.id, current_user.id).find(params[:id])
My current query is:
Message.where(sender_id: [current_user.id, params[:id]], receiver_id: [current_user.id, params[:id]])
Is this query currently guarded against SQL injection?
It's safe. The final query would be something like sender_id IN (1, 2) AND receiver_id IN (3, 4), and all integer values are sanitized. You can simply run a test:
Message.where(sender_id: [current_user.id, "' is dangerous"], receiver_id: [current_user.id, params[:id]])
and see the raw SQL output in the console. Illegal integers should be converted to 0.