The column index is out of range: 2, number of columns: 1 error while updating jsonb column - postgresql

I am trying to update a jsonb column in Java with MyBatis.
Following is my mapper method:
@Update("update service_user_assn set external_group = external_group || '{\"service_name\": \"#{service_name}\" }' where user=#{user} " +
" and service_name= (select service_name from services where service_name='Google') " )
public int update(@Param("service_name") String service_name, @Param("user") Integer user);
I am getting the following error while updating the jsonb (external_group) column.
### Error updating database. Cause: org.postgresql.util.PSQLException: The column index is out of range: 2, number of columns: 1.
### The error may involve com.apds.mybatis.mapper.ServiceUserMapper.update-Inline
I am able to update non-jsonb columns in the same way.
Also, if I put a hardcoded value, it works for jsonb columns.
How can I solve this error while updating the jsonb column?

You should not enclose #{} in single quotes, because it then becomes part of a string literal rather than a placeholder, i.e.
external_group = external_group || '{"service_name": "?"}' where ...
So there is only one placeholder in the PreparedStatement, while MyBatis still tries to set two parameters, hence the "column index is out of range: 2" error.
The correct way is to concatenate the #{} in SQL.
You may also need to cast the literal to jsonb type explicitly.
@Update({
"update service_user_assn set",
"external_group = external_group",
"|| ('{\"service_name\": \"' || #{service_name} || '\" }')::jsonb",
"where user=#{user} and",
"service_name= (select service_name from services where service_name='Google')"})
The SQL being executed would look as follows.
external_group = external_group || ('{"service_name": "' || ? || '"}')::jsonb where ...
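If assembling the JSON fragment by string concatenation feels fragile, a possible alternative (a sketch, not tested with this mapper, and assuming PostgreSQL 9.4+ where jsonb_build_object is available) is to let the server build the jsonb value from the bound parameter, which avoids the quoting and the explicit cast altogether:
external_group = external_group || jsonb_build_object('service_name', #{service_name})
jsonb_build_object quotes and escapes the value itself, so a service name containing a double quote cannot break the generated JSON.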

Related

Postgresql jsonb operators not working with spring data jpa

I am trying to use PostgreSQL jsonb operators in a Spring Data JPA query:
@Query(value="SELECT * from Employee e WHERE e.details #> '{\"province\":{\"city\":{\"town\": \":town\"}}, \"hobbies\": [\":hobby\"]}'",nativeQuery = true)
town and hobby are inputs.
There is no error, but no result is returned, even though there are records that meet the criteria.
It seems parameter binding is not working.
What can be the solution?
Here, :town and :hobby are inside single quotes, i.e. part of a string literal, so the parameters can't be bound. You can use || to concatenate strings so that the parameters sit outside the quotes and can be replaced.
@Query(value="SELECT * from Employee e WHERE e.details #> ''||'{\"province\":{\"city\":{\"town\": \"' || :town || '\"}}, \"hobbies\": [\"' || :hobby || '\"]}'||'' ", nativeQuery = true)
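A different route (a sketch, assuming the @> containment operator is what is actually intended here and that details is a jsonb column) is to build the right-hand document on the server from the bound parameters, so no JSON has to be assembled inside the query string:
SELECT * from Employee e
WHERE e.details @> jsonb_build_object(
    'province', jsonb_build_object('city', jsonb_build_object('town', cast(:town as text))),
    'hobbies',  jsonb_build_array(cast(:hobby as text)))
The cast(... as text) calls give the parameters a concrete type, and jsonb_build_object / jsonb_build_array take care of quoting and escaping the values.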

Error "syntax error at or near ':'" for split_part() with createNativeQuery in Spring Boot for PostgreSQL

I'm getting an SQL exception while executing a query that contains the
split_part() function: split_part(value::TEXT, ':', 1).
String queryStr = " select split_part(value::TEXT,':', 1) from table";
Query query = entityManager.createNativeQuery(queryStr);
List results = query.getResultList();
ERROR 2020-02-10 14:54:37,926 [http-nio-7070-exec-1] 142 - ERROR: syntax error at or near ":"
Position: 855
Your obfuscation layer probably chokes on the :: operator. Use the cast() operator instead:
String queryStr = " select split_part(cast(value as text),':', 1) from table";
But why do you think you need the cast to begin with? If you are storing : characters in that column, it is most probably a text (or varchar) column anyway and you don't need a cast at all.
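For reference, both spellings perform the same cast in PostgreSQL itself; only the :: form confuses layers that scan the query string for :name parameters:
-- both return 'a', the first colon-separated field
select split_part(cast('a:b:c' as text), ':', 1);
select split_part('a:b:c'::text, ':', 1);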

Postgresql SQLSTATE[42P18]: Indeterminate datatype with PDO and CONCAT

I'm having issues with CONCAT() when used in a WHERE clause, in PDO.
The code:
<?php
require_once('config.php');
$fdate = '01/01/2010';
$tdate = '31/12/2030';
$identification = '';
$count = "SELECT count(*) as total FROM ( select time_id from doc_sent WHERE date >= :fdate AND date <= :tdate AND identification LIKE concat('%',:identification,'%') ) x;";
//$count = "SELECT count(*) as total FROM ( select time_id from doc_sent WHERE date >= :fdate AND date <= :tdate ) x;";
$stmt_count_row_main_table = $pdo->prepare($count);
$stmt_count_row_main_table->execute(['fdate' => $fdate, 'tdate' => $tdate, 'identification' => $identification]);
//$stmt_count_row_main_table->execute(['fdate' => $fdate, 'tdate' => $tdate]);
$count_row_main_table = $stmt_count_row_main_table->fetch();
print_r( $count_row_main_table);
?>
The code works when the 'identification' part is commented out.
When I try to use CONCAT(), it doesn't.
I tried many versions of CONCAT() (and read many other questions, like this one: How do I create a PDO parameterized query with a LIKE statement?), but I keep coming back to the main documentation:
https://www.postgresql.org/docs/9.1/static/functions-string.html
Which say:
concat('abcde', 2, NULL, 22) --> abcde222
The FULL error when I use CONCAT() is:
PHP Fatal error: Uncaught PDOException: SQLSTATE[42P18]: Indeterminate datatype: 7 ERROR: could not determine data type of parameter $3 in /var/www/pdo-reporter/show.php:17\nStack trace:\n#0 /var/www/pdo-reporter/show.php(17): PDOStatement->execute(Array)\n#1 {main}\n thrown in /var/www/pdo-reporter/show.php on line 17
What's wrong with my code?
CONCAT is a function that takes a VARIADIC argument list, which means that internally postgres will convert them into an array of the same type.
postgres=# \df concat
                           List of functions
   Schema   |  Name  | Result data type | Argument data types | Type
------------+--------+------------------+---------------------+------
 pg_catalog | concat | text             | VARIADIC "any"      | func
When trying to resolve the input type to a single type, the SQL parser fails. It can be reproduced in this simpler form:
postgres=# PREPARE p AS select concat('A', $1);
ERROR: could not determine data type of parameter $1
The parser can't figure out the datatype of $1 so it errs on the side of caution.
One easy solution is to cast the parameter as text:
postgres=# PREPARE p AS select concat($1::text);
PREPARE
or with the CAST operator:
postgres=# PREPARE p AS select concat(cast($1 as text));
PREPARE
I haven't tested with PDO but presumably it would work (given how it deals with parameters to produce prepared statements) to change the query to:
"...identification LIKE '%' || :identification || '::text%'..."
or use the '||' operator instead of concat in the query:
identification LIKE '%' || :identification || '%'
EDIT: BTW if you want to find that a parameter :X is a substring of identification, this clause is more secure: strpos(identification, :X) > 0, because :X may contain '%' or '_' without causing any side-effect in the match, contrary to what happens with LIKE.
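To illustrate that last point: strpos() treats its second argument literally, while LIKE gives '%' and '_' wildcard meaning, so a parameter containing them changes what matches:
select strpos('abc', '_') > 0;        -- false: 'abc' contains no literal underscore
select 'abc' like '%' || '_' || '%';  -- true: '_' matches any single character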

Psycopg2 insert python dictionary in postgres database

In python 3+, I want to insert values from a dictionary (or pandas dataframe) into a database. I have opted for psycopg2 with a postgres database.
The problem is that I cannot figure out the proper way to do this. I can easily concatenate a SQL string to execute, but the psycopg2 documentation explicitly warns against this. Ideally I wanted to do something like this:
cur.execute("INSERT INTO table VALUES (%s);", dict_data)
and hoped that execute could figure out that the keys of the dict match the columns in the table. This did not work. From the examples in the psycopg2 documentation I got to this approach:
cur.execute("INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%s" for pair in dict_data]) + ");", dict_data)
from which I get a
TypeError: 'dict' object does not support indexing
What is the most pythonic way of inserting a dictionary into a table with matching column names?
Two solutions:
from psycopg2.extensions import AsIs

# conn is an existing psycopg2 connection
d = {'k1': 'v1', 'k2': 'v2'}
cursor = conn.cursor()

# 1) positional placeholders; the column list is injected via AsIs
insert = 'insert into table (%s) values %s'
l = list(d.items())
columns = ','.join([t[0] for t in l])
values = tuple([t[1] for t in l])
print(cursor.mogrify(insert, ([AsIs(columns)] + [values])).decode())

# 2) named %(key)s placeholders; the dict is passed directly
keys = d.keys()
columns = ','.join(keys)
values = ','.join(['%({})s'.format(k) for k in keys])
insert = 'insert into table ({0}) values ({1})'.format(columns, values)
print(cursor.mogrify(insert, d).decode())
Output:
insert into table (k2,k1) values ('v2', 'v1')
insert into table (k2,k1) values ('v2','v1')
I sometimes run into this issue, especially with respect to JSON data, which I naturally want to deal with as a dict. Very similar... but maybe a little more readable?
def do_insert(rec: dict):
    cols = rec.keys()
    cols_str = ','.join(cols)
    vals = [rec[k] for k in cols]
    vals_str = ','.join(['%s' for i in range(len(vals))])
    sql_str = """INSERT INTO some_table ({}) VALUES ({})""".format(cols_str, vals_str)
    cur.execute(sql_str, vals)
I typically call this type of thing from inside an iterator, and usually wrapped in a try/except. Either the cursor (cur) is already defined in an outer scope or one can amend the function signature and pass a cursor instance in. I rarely insert just a single row. And like the other solutions, this allows for missing cols/values, provided the underlying schema allows for it too. As long as the dict underlying the keys view is not modified as the insert is taking place, there's no need to specify keys by name, as the values will be ordered as they are in the keys view.
[Suggested answer/workaround - better answers are appreciated!]
After some trial/error I got the following to work:
sql = "INSERT INTO table (" + ", ".join(dict_data.keys()) + ") VALUES (" + ", ".join(["%("+k+")s" for k in dict_data]) + ");"
This gives the sql string
"INSERT INTO table (k1, k2, ... , kn) VALUES (%(k1)s, %(k2)s, ... , %(kn)s);"
which may be executed by
with psycopg2.connect(database='deepenergy') as con:
    with con.cursor() as cur:
        cur.execute(sql, dict_data)
Pros/cons?
Using %(name)s placeholders may solve the problem:
dict_data = {'key1': val1, 'key2': val2}
cur.execute("""INSERT INTO table (field1, field2)
               VALUES (%(key1)s, %(key2)s);""",
            dict_data)
You can find the usage in the psycopg2 docs under Passing parameters to SQL queries.
Here is another solution, inserting a dictionary directly.
Product Model (has the following database columns)
name
description
price
image
digital - (defaults to False)
quantity
created_at - (defaults to current date)
Solution:
data = {
    "name": "product_name",
    "description": "product_description",
    "price": 1,
    "image": "https",
    "quantity": 2,
}
cur = conn.cursor()
cur.execute(
    "INSERT INTO products (name,description,price,image,quantity) "
    "VALUES(%(name)s, %(description)s, %(price)s, %(image)s, %(quantity)s)", data
)
conn.commit()
conn.close()
Note: The columns to be inserted are listed in the execute statement (... INTO products (column names to be filled) VALUES ...), and data is the dictionary supplying the values. Because the placeholders are named (%(name)s), psycopg2 looks each value up by its key, so the order of the keys in the dictionary does not matter; it only has to contain a matching key for every placeholder.

npgsql : Selecting a null data throws exception with error "Column is Null"

I am running npgsql v3.7 with .NetCore on Ubuntu.
When I execute a select query and a cell in any row in the results is null, an exception is thrown with the error message "Column is null".
I am having to work around this by putting every column in the select clause inside a CASE statement that tests for NULL:
"CASE WHEN " + fieldName + " IS NULL THEN '' ELSE " + fieldName + " END "
This seems a bit extreme and should not be necessary. Has anyone else come across this?
Thanks.
You are probably trying to read the column like this:
using (var reader = cmd.ExecuteReader()) {
    reader.Read();
    var o = reader.GetString(0); // Or any other of the Get methods on reader
    ...
}
This code will fail if the column contains a null, and is the expected behavior. In ADO.NET, you need to check for a null value with reader.IsDBNull(0) before actually getting the value. That's just how the database API works.
I don't know why NULL values are giving you errors, but you can do away with the ugly CASE statement in favor of using COALESCE:
"COALESCE(" + fieldName + ", '')"
Ideally you should make a configuration change such that NULL values do not cause this problem.
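One caveat with the COALESCE approach (a small sketch; the table and column names below are hypothetical): the fallback value must be type-compatible with the column, so '' only works for text-like columns:
select coalesce(text_col, '') from some_table;    -- text column, empty-string fallback
select coalesce(int_col, 0) from some_table;      -- integer column, numeric fallback
select coalesce(ts_col, now()) from some_table;   -- timestamp column, timestamp fallback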