When querying WHERE 1 = 2 in Rust Postgres, I get an invalid byte sequence error

It seems that when I try to query WHERE 1 = 2 against PostgreSQL in Rust, everything breaks because a null byte ends up being passed. Below I have pasted my exact query and the arg params; below that is the code I use.
SELECT "spec", "id" FROM "ppm"."ppm_database_account" WHERE $1 = $2 AND "name" = $3 ORDER BY "id" DESC PostgresValues(
[
PostgresValue(
Int(
Some(
1,
),
),
),
PostgresValue(
Int(
Some(
2,
),
),
),
PostgresValue(
String(
Some(
"ppm",
),
),
),
],
)
thread 'rocket-worker-thread' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState(E22021), message: "invalid byte sequence for encoding \"UTF8\": 0x00", detail: None, hint: None, position: None, where_: Some("unnamed portal parameter $1"), schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("mbutils.c"), line: Some(1665), routine: Some("report_invalid_encoding") }) }', src/models/database_account/v1_0_0.rs:15:60
stack backtrace:
Code used
let rows = trx.query(&query, &args.as_params()).await.unwrap();
Minimal reproducible example:
In this case client is a tokio-postgres client. I queried a specific table, but this error occurs on all tables:
let rows = client.query("SELECT \"id\" FROM \"ppm\".\"ppm_database_account\" WHERE $1 = $2", &[&PostgresValue(sea_query::Value::Int(Some(1))), &PostgresValue(sea_query::Value::Int(Some(2)))]).await.unwrap();
Any idea why the values seem to get passed as null (0x00)?
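For comparison, this is roughly how the same minimal query would look if the two integers were passed straight to tokio-postgres as plain i32 values instead of through my PostgresValue wrapper (sketch, assuming client is a connected tokio_postgres::Client in an async context):
let a: i32 = 1;
let b: i32 = 2;
let rows = client
    .query(
        "SELECT \"id\" FROM \"ppm\".\"ppm_database_account\" WHERE $1 = $2",
        // plain i32 values implement ToSql directly, so no wrapper type is involved
        &[&a, &b],
    )
    .await
    .unwrap();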

Related

How to insert 'NULL' values for 'int' column types in Aurora PostgreSQL db using Python boto3 client

I have a CSV file (an MS SQL Server table export) and I would like to import it into an Aurora Serverless PostgreSQL database table. I did some basic preprocessing of the CSV file to replace all of the NULL values in it (i.e. '') with "NULL". The file looks like this:
CSV file:
ID,DRAW_WORKS
10000002,NULL
10000005,NULL
10000004,FLEXRIG3
10000003,FLEXRIG3
The PostgreSQL table has the following schema:
CREATE TABLE T_RIG_ACTIVITY_STATUS_DATE (
    ID varchar(20) NOT NULL,
    DRAW_WORKS_RATING int NULL
)
The code I am using to read and insert the CSV file is the following:
import boto3
import csv
rds_client = boto3.client('rds-data')
...
def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None):
    parameters = {
        'secretArn': db_credentials_secrets_store_arn,
        'database': database_name,
        'resourceArn': db_cluster_arn,
        'sql': sql,
        'parameterSets': sql_parameter_sets
    }
    if transaction_id is not None:
        parameters['transactionId'] = transaction_id
    response = rds_client.batch_execute_statement(**parameters)
    return response

transaction = rds_client.begin_transaction(
    secretArn=db_credentials_secrets_store_arn,
    resourceArn=db_cluster_arn,
    database=database_name)

sql = 'INSERT INTO T_RIG_ACTIVITY_STATUS_DATE VALUES (:ID, :DRAW_WORKS);'

parameter_set = []
with open('test.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')
    for row in reader:
        entry = [
            {'name': 'ID', 'value': {'stringValue': row['ID']}},
            {'name': 'DRAW_WORKS', 'value': {'longValue': row['DRAW_WORKS']}}
        ]
        parameter_set.append(entry)

response = batch_execute_statement(
    sql, parameter_set, transaction['transactionId'])
However, the error that gets returned suggests a type mismatch:
Invalid type for parameter parameterSets[0][5].value.longValue,
value: NULL, type: <class 'str'>, valid types: <class 'int'>"
Is there a way to configure Aurora to accept NULL values for types such as int?
Reading the boto3 documentation more carefully, I found that we can set isNull to True when a field is NULL. The code snippet below shows how to insert a null value into the database:
...
entry = [
    {'name': 'ID', 'value': {'stringValue': row['ID']}}
]
if row['DRAW_WORKS'] == 'NULL':
    entry.append({'name': 'DRAW_WORKS', 'value': {'isNull': True}})
else:
    entry.append({'name': 'DRAW_WORKS', 'value': {'longValue': int(row['DRAW_WORKS'])}})
parameter_set.append(entry)
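With this change, rows whose DRAW_WORKS value in the CSV is the literal string 'NULL' are sent to Aurora as a real SQL NULL instead of being forced through longValue, which is what triggered the type error above. The parameter name still has to match the :DRAW_WORKS placeholder in the INSERT statement.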

Duplicate key value violates unique constraint with Postgres, Knex, and Promises

I'm having a very weird issue. When I insert five rows into my "repository" table with unique ids, the error below comes up multiple times (with the same id mentioned each time!). I'm not using autoincrement for the PK.
Error saving repo { error: duplicate key value violates unique constraint "repository_pkey"
    at Connection.parseE (/Users/macintosh/node-projects/risingstack/node_modules/pg/lib/connection.js:554:11)
    at Connection.parseMessage (/Users/macintosh/node-projects/risingstack/node_modules/pg/lib/connection.js:379:19)
    at Socket.<anonymous> (/Users/macintosh/node-projects/risingstack/node_modules/pg/lib/connection.js:119:22)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:601:20)
  name: 'error',
  length: 202,
  severity: 'ERROR',
  code: '23505',
  detail: 'Key (id)=(80073079) already exists.',
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: 'public',
  table: 'repository',
  column: undefined,
  dataType: undefined,
  constraint: 'repository_pkey',
  file: 'nbtinsert.c',
  line: '434',
  routine: '_bt_check_unique' }
Postgres code generated by knex:
insert into "repository" ("description", "full_name", "html_url", "id", "language", "owner_id", "stargazers_count") values ('Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:', 'nodejs/node', 'https://github.com/nodejs/node', 27193779, 'JavaScript', 9950313, 56009)
insert into "repository" ("description", "full_name", "html_url", "id", "language", "owner_id", "stargazers_count") values (':closed_book:《Node.js 包教不包会》 by alsotang', 'alsotang/node-lessons', 'https://github.com/alsotang/node-lessons', 24812854, 'JavaScript', 1147375, 13989)
insert into "repository" ("description", "full_name", "html_url", "id", "language", "owner_id", "stargazers_count") values ('Node.js based forum software built for the modern web', 'NodeBB/NodeBB', 'https://github.com/NodeBB/NodeBB', 9603889, 'JavaScript', 4449608, 9399)
insert into "repository" ("description", "full_name", "html_url", "id", "language", "owner_id", "stargazers_count") values (':baby_chick:Nodeclub 是使用 Node.js 和 MongoDB 开发的社区系统', 'cnodejs/nodeclub', 'https://github.com/cnodejs/nodeclub', 3447593, 'JavaScript', 1455983, 7907)
insert into "repository" ("description", "full_name", "html_url", "id", "language", "owner_id", "stargazers_count") values ('Mysterium Node - VPN server and client for Mysterium Network', 'mysteriumnetwork/node', 'https://github.com/mysteriumnetwork/node', 80073079, 'Go', 23056638, 478)
Knex schema for repository:
return knex.schema.createTable('repository', (table) => {
  table.integer('id').primary();
  table.integer('owner_id');
  table.foreign('owner_id').references('user.id').onDelete('CASCADE').onUpdate('CASCADE');
  table.string('full_name');
  table.string('description');
  table.string('html_url');
  table.string('language');
  table.integer('stargazers_count');
})
Code run to insert Repository:
const fn = composeMany(withOwner, removeIrrelevantProperties, defaultLanguageAndDescToString, saveAndPublish);
const tRepos = r.map(fn);
return Promise.all(tRepos);

const saveAndPublish = (r) => {
  return User
    .insert(r.owner)
    .catch(e => console.log('Error saving User', e))
    .then(() => {
      const { owner, ...repo } = r;
      const q = Repository.insert(repo);
      console.log(q.toQuery());
      return q;
    })
    .catch(e => {
      console.log('Error saving repo', e);
    });
};
Sounds like your database already has a row with primary key id == 80073079.
To be sure, try querying the DB for a row with that key just before inserting. I also wonder how those ids are generated, since you are clearly not using an id sequence for them.
It is possible that the input data the IDs were fetched from is corrupted and contains duplicate ids.
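A rough sketch of that pre-insert check with knex (assuming the same knex instance and repo objects shaped like the inserts above; insertIfMissing is a hypothetical helper name):
// Hypothetical helper: look the id up first and only insert when it is missing
const insertIfMissing = (repo) =>
  knex('repository')
    .where({ id: repo.id })
    .first()
    .then((existing) => {
      if (existing) {
        console.log('Repository already exists:', repo.id);
        return existing;
      }
      return knex('repository').insert(repo);
    });
Note that this only narrows the window: with Promise.all the inserts run concurrently, so two inserts with the same id can still race past the check.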

Elixir ecto: check if postgresql map column has key

To use json/jsonb data type ecto suggets to use fragments.
In my case, I have to use the PostgreSQL ? operator to see if the map has a given key; however, that becomes something like:
where(events, [e], e.type == 1 and not fragment("???", e.qualifiers, "?", "2"))
but of course fragment reads the PostgreSQL ? as a placeholder. How can I check whether the map has such a key?
You need to escape the middle ? and pass a total of three arguments to fragment:
fragment("? \\? ?", e.qualifiers, "2")
Demo:
iex(1)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{price: 1}}
iex(2)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{}}
iex(3)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "price"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'price') [] OK query=8.0ms
[%MyApp.Food{__meta__: #Ecto.Schema.Metadata<:loaded>, id: 1,
inserted_at: #Ecto.DateTime<2016-06-19T03:51:40Z>, meta: %{"price" => 1},
name: "Foo", updated_at: #Ecto.DateTime<2016-06-19T03:51:40Z>}]
iex(4)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "a"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'a') [] OK query=0.8ms
[]
I'm not sure if this is documented anywhere, but I found the method from this test.

Knex.js SQL syntax error near 'select'

I'm getting an odd error:
{ __cid: '__cid9',
  method: 'insert',
  options: undefined,
  bindings:
   [ 500,
     'Dinner',
     '10/02/2015 7:57 PM',
     '09/29/2015 8:00 PM',
     'Grand Plaza',
     1 ],
  sql: 'insert into "expense" ("amount", "description", "due_date", "payment_date", "vendor_id") values ($1, $2, $3, $4, select "vendor_id" from "vendor" where "name" = $5 limit $6)',
  returning: undefined }
error: syntax error at or near "select"
    at [object Object].Connection.parseE (/.../node_modules/pg/lib/connection.js:534:11)
    at [object Object].Connection.parseMessage (/.../node_modules/pg/lib/connection.js:361:17)
    at Socket.<anonymous> (/.../node_modules/pg/lib/connection.js:105:22)
    at Socket.emit (events.js:107:17)
    at readableAddChunk (_stream_readable.js:163:16)
    at Socket.Readable.push (_stream_readable.js:126:10)
    at TCP.onread (net.js:538:20)
I have run the raw SQL with those values cut and pasted, and it works just fine.
This is the code that's generating the error:
Promise.each subbudget.expenses, (expense) ->
  vendor.get(expense.vendor).then (vendor_id) ->
    knex('expense').insert(
      due_date: expense.dueDate
      vendor_id: (knex.first("vendor_id").from("vendor").where({name: vendor_id}))
      amount: expense.amount
      description: expense.description
      payment_date: expense.paidDate
    )
Edit (Partial Solution):
The issue seems to be parentheses missing around the SELECT statement. Knex offers .wrap(), which only works on raw, and .as(), which only works on nested statements; for some reason this does not qualify as a nested statement, so I can't get parentheses around it. Any ideas?
knex.raw("(" + knex.first("vendor_id").from("vendor").where({name: vendor_id}).toString() + ")")
Not the cleanest, but use .toString(), then wrap it in .raw()
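Put together with the original insert, that looks roughly like this (sketch; note that .toString() inlines the name value into the SQL instead of binding it as a parameter):
knex('expense').insert(
  due_date: expense.dueDate
  # wrap the rendered subquery in parentheses so Postgres accepts it as a scalar value
  vendor_id: knex.raw("(" + knex.first("vendor_id").from("vendor").where({name: vendor_id}).toString() + ")")
  amount: expense.amount
  description: expense.description
  payment_date: expense.paidDate
)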

coffeescript error: 'unexpected .' for console.log

I have no idea why I am getting this error, but here is my code:
angular.module('authAppApp')
  .factory 'AuthService', (Session) ->
    # Service logic
    # ...
    # Public API here
    {
      login: (creds) ->
        res =
          id: 1,
          user:
            id: 1,
            role: "admin"
            Session.create(res.id, res.user.id, res.user.role)
        return
    }
Error:
[stdin]:30:14: error: unexpected .
Session.create(res.id, res.user.id, res.user.role)
^
This also happens with console.log
Why?
It looks like your indentation is off:
res =
  id: 1,
  user:
    id: 1,
    role: "admin"
Session.create(res.id, res.user.id, res.user.role)
return
The indentation of Session should match the indentation of res =. Otherwise, the coffeescript compiler will parse it as a property of the object you are setting res to. In particular, it's probably expecting a : and a value after Session.
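Applied to the login function from the question, the fixed version would look something like this:
login: (creds) ->
  res =
    id: 1,
    user:
      id: 1,
      role: "admin"
  # Session.create now sits at the same indentation as `res =`, so it is a call, not a property
  Session.create(res.id, res.user.id, res.user.role)
  return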