Elixir Ecto: check if a PostgreSQL map column has a key

To use the json/jsonb data types, Ecto suggests using fragments.
In my case, I have to use PostgreSQL's ? operator to check whether the map has a given key, which would become something like:
where(events, [e], e.type == 1 and not fragment("???", e.qualifiers, "?", "2"))
but of course fragment reads the PostgreSQL ? as a placeholder. How can I check whether the map has a given key?

You need to escape the middle ? and pass a total of three arguments to fragment:
fragment("? \\? ?", e.qualifiers, "2")
Demo:
iex(1)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{price: 1}}
iex(2)> MyApp.Repo.insert! %MyApp.Food{name: "Foo", meta: %{}}
iex(3)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "price"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'price') [] OK query=8.0ms
[%MyApp.Food{__meta__: #Ecto.Schema.Metadata<:loaded>, id: 1,
inserted_at: #Ecto.DateTime<2016-06-19T03:51:40Z>, meta: %{"price" => 1},
name: "Foo", updated_at: #Ecto.DateTime<2016-06-19T03:51:40Z>}]
iex(4)> MyApp.Repo.all from(f in MyApp.Food, where: fragment("? \\? ?", f.meta, "a"))
[debug] SELECT f0."id", f0."name", f0."meta", f0."inserted_at", f0."updated_at" FROM "foods" AS f0 WHERE (f0."meta" ? 'a') [] OK query=0.8ms
[]
I'm not sure if this is documented anywhere, but I found the method from this test.
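If you would rather avoid the escaping entirely, PostgreSQL exposes the same check as the jsonb_exists(jsonb, text) function, so a fragment like "jsonb_exists(?, ?)" contains only Ecto placeholders. A minimal psql sketch of both forms (assuming a jsonb column, as in the debug output above):
-- the ? operator and jsonb_exists are equivalent key-existence checks
SELECT '{"price": 1}'::jsonb ? 'price';              -- true: operator form
SELECT jsonb_exists('{"price": 1}'::jsonb, 'price'); -- true: function form, no ? to escape
SELECT jsonb_exists('{"price": 1}'::jsonb, 'a');     -- false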

Related

When querying WHERE 1 = 2 in Rust postgres, I get an invalid byte sequence error

It seems that when I try to query WHERE 1 = 2 from PostgreSQL in Rust, everything breaks because a null gets passed. Below I have pasted my exact query and the arg params, and below that the code I use.
SELECT "spec", "id" FROM "ppm"."ppm_database_account" WHERE $1 = $2 AND "name" = $3 ORDER BY "id" DESC
PostgresValues(
    [
        PostgresValue(
            Int(
                Some(
                    1,
                ),
            ),
        ),
        PostgresValue(
            Int(
                Some(
                    2,
                ),
            ),
        ),
        PostgresValue(
            String(
                Some(
                    "ppm",
                ),
            ),
        ),
    ],
)
thread 'rocket-worker-thread' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState(E22021), message: "invalid byte sequence for encoding \"UTF8\": 0x00", detail: None, hint: None, position: None, where_: Some("unnamed portal parameter $1"), schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("mbutils.c"), line: Some(1665), routine: Some("report_invalid_encoding") }) }', src/models/database_account/v1_0_0.rs:15:60
stack backtrace:
Code used
let rows = trx.query(&query, &args.as_params()).await.unwrap();
Minimal reproducible example:
In this case client is a tokio-postgres client. I queried a specific table, but this error occurs on all tables.
let rows = client.query("SELECT \"id\" FROM \"ppm\".\"ppm_database_account\" WHERE $1 = $2", &[&PostgresValue(sea_query::Value::Int(Some(1))), &PostgresValue(sea_query::Value::Int(Some(2)))]).await.unwrap();
Any idea why the values seem to get passed as null (0x00)?
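One thing worth checking (an assumption on my part, not something confirmed in the question): when neither side of $1 = $2 gives the planner a type to work with, PostgreSQL resolves both unknown parameters to text, so a wrapper that encodes the values as binary integers would hand the server text containing 0x00 bytes. Pinning the parameter types server-side sketches the intended behaviour:
-- a psql sketch against the table from the question: declare the types
-- explicitly so $1 and $2 are int4 instead of defaulting to text
PREPARE check_params (int4, int4, text) AS
SELECT "spec", "id"
FROM "ppm"."ppm_database_account"
WHERE $1 = $2 AND "name" = $3
ORDER BY "id" DESC;
EXECUTE check_params(1, 2, 'ppm');  -- no rows, and no encoding error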

How to insert 'NULL' values for 'int' column types in an Aurora PostgreSQL db using the Python boto3 client

I have a CSV file (an MS SQL Server table export) and I would like to import it into an Aurora Serverless PostgreSQL database table. I did some basic preprocessing of the CSV file to replace all of the NULL values in it (i.e. '') with "NULL". The file looks like this:
CSV file:
ID,DRAW_WORKS
10000002,NULL
10000005,NULL
10000004,FLEXRIG3
10000003,FLEXRIG3
The PostgreSQL table has the following schema:
CREATE TABLE T_RIG_ACTIVITY_STATUS_DATE (
    ID varchar(20) NOT NULL,
    DRAW_WORKS_RATING int NULL
)
The code I am using to read and insert the CSV file is the following:
import boto3
import csv

rds_client = boto3.client('rds-data')

...

def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None):
    parameters = {
        'secretArn': db_credentials_secrets_store_arn,
        'database': database_name,
        'resourceArn': db_cluster_arn,
        'sql': sql,
        'parameterSets': sql_parameter_sets
    }
    if transaction_id is not None:
        parameters['transactionId'] = transaction_id
    response = rds_client.batch_execute_statement(**parameters)
    return response

transaction = rds_client.begin_transaction(
    secretArn=db_credentials_secrets_store_arn,
    resourceArn=db_cluster_arn,
    database=database_name)

sql = 'INSERT INTO T_RIG_ACTIVITY_STATUS_DATE VALUES (:ID, :DRAW_WORKS);'

parameter_set = []
with open('test.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')
    for row in reader:
        entry = [
            {'name': 'ID', 'value': {'stringValue': row['RIG_ID']}},
            {'name': 'DRAW_WORKS', 'value': {'longValue': row['DRAW_WORKS']}}
        ]
        parameter_set.append(entry)

response = batch_execute_statement(
    sql, parameter_set, transaction['transactionId'])
However, the error that gets returned suggests that there is a type mismatch:
Invalid type for parameter parameterSets[0][5].value.longValue,
value: NULL, type: <class 'str'>, valid types: <class 'int'>"
Is there a way to configure Aurora to accept NULL values for types such as int?
Reading the boto3 documentation more carefully, I found that we can set the isNull value to True when a field is NULL. The code snippet below shows how to insert a null value into the database:
...
entry = [
    {'name': 'ID', 'value': {'stringValue': row['ID']}}
]
if row['DRAW_WORKS'] == 'NULL':
    entry.append({'name': 'DRAW_WORKS', 'value': {'isNull': True}})
else:
    entry.append({'name': 'DRAW_WORKS', 'value': {'longValue': int(row['DRAW_WORKS'])}})
parameter_set.append(entry)
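For reference, the isNull branch makes the Data API bind a genuine SQL NULL, so the two kinds of entries end up equivalent to statements like these (the literal values are only illustrative):
-- what the two parameter shapes bind as, using rows from the sample CSV
INSERT INTO T_RIG_ACTIVITY_STATUS_DATE (ID, DRAW_WORKS_RATING)
VALUES ('10000002', NULL);  -- {'isNull': True}
INSERT INTO T_RIG_ACTIVITY_STATUS_DATE (ID, DRAW_WORKS_RATING)
VALUES ('10000004', 3);     -- {'longValue': 3}; 3 is a made-up rating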

Inserting a nested JSON file in PostgreSQL

I am trying to insert a nested JSON file into a PostgreSQL DB. Below is sample data from the JSON file.
[
  {
    "location_id": 11111,
    "recipe_id": "LLLL324",
    "serving_size_number": 1,
    "recipe_fraction_description": null,
    "description": "1/2 gallon",
    "recipe_name": "DREXEL ALMOND MILK 32 OZ",
    "marketing_name": "Almond Milk",
    "marketing_description": null,
    "ingredient_statement": "Almond Milk (ALMOND MILK (FILTERED WATER, ALMONDS), CANE SUGAR, CONTAINS 2% OR LESS OF: VITAMIN AND MINERAL BLEND (CALCIUM CARBONATE, VITAMIN E ACETATE, VITAMIN A PALMITATE, VITAMIN D2), SEA SALT, SUNFLOWER LECITHIN, LOCUST BEAN GUM, GELLAN GUM.)",
    "allergen_attributes": {
      "allergen_statement_not_available": null,
      "contains_shellfish": "NO",
      "contains_peanut": "NO",
      "contains_tree_nuts": "YES",
      "contains_milk": "NO",
      "contains_wheat": "NO",
      "contains_soy": "NO",
      "contains_eggs": "NO",
      "contains_fish": "NO",
      "contains_added_msg": "UNKNOWN",
      "contains_hfcs": "UNKNOWN",
      "contains_mustard": "UNKNOWN",
      "contains_celery": "UNKNOWN",
      "contains_sesame": "UNKNOWN",
      "contains_red_yellow_blue_dye": "UNKNOWN",
      "gluten_free_per_fda": "UNKNOWN",
      "non_gmo_claim": "UNKNOWN",
      "contains_gluten": "NO"
    },
    "dietary_attributes": {
      "vegan": "YES",
      "vegetarian": "YES",
      "kosher": "YES",
      "halal": "UNKNOWN"
    },
    "primary_attributes": {
      "protein": 7.543,
      "total_fat": 19.022,
      "carbohydrate": 69.196,
      "calories": 463.227,
      "total_sugars": 61.285,
      "fiber": 5.81,
      "calcium": 3840.228,
      "iron": 3.955,
      "potassium": 270.768,
      "sodium": 1351.208,
      "cholesterol": 0.0,
      "trans_fat": 0.0,
      "saturated_fat": 1.488,
      "monounsaturated_fat": 11.743,
      "polyunsaturated_fat": 4.832,
      "calories_from_fat": 171.195,
      "pct_calories_from_fat": 36.957,
      "pct_calories_from_saturated_fat": 2.892,
      "added_sugars": null,
      "vitamin_d_(mcg)": null
    },
    "secondary_attributes": {
      "ash": null,
      "water": null,
      "magnesium": 120.654,
      "phosphorous": 171.215,
      "zinc": 1.019,
      "copper": 0.183,
      "manganese": null,
      "selenium": 1.325,
      "vitamin_a_(IU)": 5331.357,
      "vitamin_a_(RAE)": null,
      "beta_carotene": null,
      "alpha_carotene": null,
      "vitamin_e_(A-tocopherol)": 49.909,
      "vitamin_d_(IU)": null,
      "vitamin_c": 0.0,
      "thiamin_(B1)": 0.0,
      "riboflavin_(B2)": 0.449,
      "niacin": 0.979,
      "pantothenic_acid": 0.061,
      "vitamin_b6": 0.0,
      "folacin_(folic_acid)": null,
      "vitamin_b12": 0.0,
      "vitamin_k": null,
      "folic_acid": null,
      "folate_food": null,
      "folate_DFE": null,
      "vitamin_a_(RE)": null,
      "pct_calories_from_protein": 6.514,
      "pct_calories_from_carbohydrates": 59.751,
      "biotin": null,
      "niacin_(mg_NE)": null,
      "vitamin_e_(IU)": null
    }
  }
]
When I tried to copy the data using the Postgres command below
\copy table_name 'location of the file'
I got the error below:
ERROR: invalid input syntax for type integer: "["
CONTEXT: COPY table_name, line 1, column location_id: "["
I tried the approach below as well, but with no luck:
INSERT INTO json_table
SELECT [all key fields]
FROM json_populate_record (NULL::json_table,
'{
sample data
}'
);
What is the simplest way to insert this type of nested JSON file into PostgreSQL tables? Is there a query we can use to insert any nested JSON file?
Insert the JSON into a table. In fact, I'm not sure what you expect, but here is an example:
yimo=# create table if not exists foo(a int,b text);
CREATE TABLE
yimo=# insert into foo select * from json_populate_record(null::foo, ('[{"a":1,"b":"3"}]'::jsonb->>0)::json);
INSERT 0 1
yimo=# select * from foo;
a | b
---+---
1 | 3
(1 row)
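For the nested file in the question, one option (a sketch only; the table layout is an assumption based on the sample data) is to give each nested object its own jsonb column, so jsonb_populate_recordset can unpack the whole top-level array in a single statement, ignoring keys that have no matching column:
CREATE TABLE IF NOT EXISTS recipes (
    location_id          int,
    recipe_id            text,
    recipe_name          text,
    marketing_name       text,
    allergen_attributes  jsonb,
    dietary_attributes   jsonb,
    primary_attributes   jsonb,
    secondary_attributes jsonb
);

-- top-level keys are matched to column names by name; object-valued
-- keys land unchanged in the jsonb columns (sample shortened here)
INSERT INTO recipes
SELECT *
FROM jsonb_populate_recordset(null::recipes, '[
    {"location_id": 11111,
     "recipe_id": "LLLL324",
     "recipe_name": "DREXEL ALMOND MILK 32 OZ",
     "marketing_name": "Almond Milk",
     "dietary_attributes": {"vegan": "YES", "vegetarian": "YES"}}
]'::jsonb);
In practice the full file's contents would replace the shortened literal, read by whatever client passes it in as a single parameter.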

Golang Postgres pq failed scanning to *string

I'm trying to scan a PostgreSQL list into an empty slice of strings. However, I'm getting the error below:
Failed creating education: sql: Scan error on column index 14, name "descriptions": unsupported Scan, storing driver.Value type string into type *[]*string
It looks like I need to customize the scanner somehow, but how do I do that with squirrel? Thanks.
Here's how I'm building the query:
squirrel.StatementBuilder.PlaceholderFormat(squirrel.Dollar).RunWith(db).Insert("educations").
    Columns("id", "school", "city", "state", "degree", "month_start", "year_start", "month_end", "year_end", "\"order\"", "logo_url", "created_at", "updated_at", "style", "descriptions").
    Values(
        uuid.Must(uuid.NewV4()).String(),
        education.School,
        education.City,
        education.State,
        education.Degree,
        education.MonthStart,
        education.YearStart,
        education.MonthEnd,
        education.YearEnd,
        education.Order,
        education.LogoURL,
        currentTime,
        currentTime,
        savedStyle.ID,
        pq.Array(education.Descriptions),
    ).
    Suffix("RETURNING *").
    Scan(
        &savedEducation.ID,
        &savedEducation.School,
        &savedEducation.City,
        &savedEducation.State,
        &savedEducation.Degree,
        &savedEducation.MonthStart,
        &savedEducation.YearStart,
        &savedEducation.MonthEnd,
        &savedEducation.YearEnd,
        &savedEducation.Order,
        &savedEducation.LogoURL,
        &savedEducation.CreatedAt,
        &savedEducation.UpdatedAt,
        &ignored,
        &savedEducation.Descriptions,
    )
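For context on the error (a hedged note, since no answer is shown here): a Postgres array column arrives at the driver as one value in array-literal syntax, which a bare *[]*string cannot decode; pq's usual remedy is to wrap the Scan destination in pq.Array, just as the Values side above already does. The literal looks like this:
-- what the driver receives for an array column: a single text value in
-- Postgres array-literal syntax, not a sequence of strings
SELECT ARRAY['first line', 'second line']::text[];
-- => {"first line","second line"}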

BSON::InvalidDocument: Cannot serialize an object into BSON

I'm trying to follow along with http://mongotips.com/b/array-keys-allow-for-modeling-simplicity/
I have a Story document and a StoryRating document. The user will rate a story, so I wanted to create a many relationship to ratings by users, as such:
class StoryRating
  include MongoMapper::Document
  # key <name>, <type>
  key :user_id, ObjectId
  key :rating, Integer
  timestamps!
end

class Story
  include MongoMapper::Document
  # key <name>, <type>
  timestamps!
  key :title, String
  key :ratings, Array, :index => true
  many :story_ratings, :in => :ratings
end
Then
irb(main):006:0> s = Story.create
irb(main):008:0> s.ratings.push(StoryRating.new(user_id: '0923ksjdfkjas'))
irb(main):009:0> s.ratings.last.save
=> true
irb(main):010:0> s.save
BSON::InvalidDocument: Cannot serialize an object of class StoryRating into BSON.
from /usr/local/lib/ruby/gems/1.9.1/gems/bson-1.6.2/lib/bson/bson_c.rb:24:in `serialize' (...)
Why?
You should be using the association method, "story_ratings", for your push/append rather than the internal "ratings" Array#push to get what you want, following John Nunemaker's "Array Keys Allow For Modeling Simplicity" discussion. The difference is that with the association method, MongoMapper inserts the BSON::ObjectId reference into the array, whereas with the latter you are pushing a Ruby StoryRating object into the Array, and the underlying driver can't serialize it.
Here's a test that works for me and shows the difference. Hope this helps.
Test
require 'test_helper'

class Object
  def to_pretty_json
    JSON.pretty_generate(JSON.parse(self.to_json))
  end
end

class StoryTest < ActiveSupport::TestCase
  def setup
    User.delete_all
    Story.delete_all
    StoryRating.delete_all
    #stories_coll = Mongo::Connection.new['free11513_mongomapper_bson_test']['stories']
  end

  test "Array Keys" do
    user = User.create(:name => 'Gary')
    story = Story.create(:title => 'A Tale of Two Cities')
    rating = StoryRating.create(:user_id => user.id, :rating => 5)
    assert_equal(1, StoryRating.count)
    story.ratings.push(rating)
    p story.ratings
    assert_raise(BSON::InvalidDocument) { story.save }
    story.ratings.pop
    story.story_ratings.push(rating) # note story.story_ratings, NOT story.ratings
    p story.ratings
    assert_nothing_raised(BSON::InvalidDocument) { story.save }
    assert_equal(1, Story.count)
    puts Story.all(:ratings => rating.id).to_pretty_json
  end
end
Result
Run options: --name=test_Array_Keys
# Running tests:
[#<StoryRating _id: BSON::ObjectId('4fa98c25e4d30b9765000003'), created_at: Tue, 08 May 2012 21:12:05 UTC +00:00, rating: 5, updated_at: Tue, 08 May 2012 21:12:05 UTC +00:00, user_id: BSON::ObjectId('4fa98c25e4d30b9765000001')>]
[BSON::ObjectId('4fa98c25e4d30b9765000003')]
[
  {
    "created_at": "2012-05-08T21:12:05Z",
    "id": "4fa98c25e4d30b9765000002",
    "ratings": [
      "4fa98c25e4d30b9765000003"
    ],
    "title": "A Tale of Two Cities",
    "updated_at": "2012-05-08T21:12:05Z"
  }
]
.
Finished tests in 0.023377s, 42.7771 tests/s, 171.1084 assertions/s.
1 tests, 4 assertions, 0 failures, 0 errors, 0 skips