Pass values in an array to redshift parameterized query

let query = 'INSERT INTO tablename (id, test1, test2, test3, test4, test5) VALUES ($1,$2,$3,$4,$5,$6)';
let params = 'id, test1, test2, test3, test4, test5';
client.parameterizedQuery(query, params, function (err, result) {
    if (err) reject(err);
    else {
        resolve('INSERTED Batch');
    }
});
It is triggering the query with these parameters.

Your params is a string, but you should pass a list of items:
let params=['id', 'test1', 'test2', 'test3', 'test4', 'test5'];

Related

Async/Await with MongoDB Rust Driver not Waiting for Insert

I am attempting to write an integration test for a utility module where my search function returns a new aggregate pipeline. When the database has 0 collections, this fails with an index out-of-bounds error on accounts[0]. It's acting as if the aggregate pipeline is not waiting for the insert to happen. How do I refactor this to make the aggregate stage wait for the insert?
#[actix_web::test]
async fn test_search_pipeline() {
    let pipeline = vec![doc! { "$match": { "isDeleted": { "$ne": true } } }];
    let mut updated_pipeline = super::search(&pipeline, String::from("foo"));
    let db: Database = connect_to_database().await;
    let accounts_col: Collection<Document> = db.collection("accounts");
    let account_fixture = account::create();
    // Insert a new account in the database
    let new_account = match accounts_col.insert_one(account_fixture, None).await {
        Ok(new_account) => new_account,
        Err(e) => {
            panic!("Unable to insert new account test fixture document: {}", e);
        }
    };
    // Execute the search aggregate pipeline to find the document we just inserted
    let accounts = match accounts_col.aggregate(updated_pipeline, None).await {
        Ok(mut cursor) => {
            let mut accounts: Vec<Document> = Vec::new();
            while let Some(doc) = cursor.next().await {
                match doc {
                    Ok(doc) => accounts.push(doc),
                    Err(e) => panic!("Error iterating through the new account test pipeline cursor: {}", e),
                }
            }
            accounts
        }
        Err(e) => {
            panic!("Unable to execute test search aggregate pipeline: {}", e);
        }
    };
    // Compare the new account id that we inserted with the first search result account id
    let new_account_id: String = new_account.inserted_id.as_object_id().unwrap().to_hex();
    let search_account_id: String = accounts[0].get_object_id("_id").unwrap().to_hex();
    assert_eq!(new_account_id, search_account_id);
}
I have tried removing the actual database calls from the match statements and nesting the aggregate stage inside the Ok arm of the insert Result. Neither of those changes seems to change the result.
account_fixture:
doc! {
    "name": "ufc0kpu!pgu1QJW3unj",
    "logo": "60e9ca73c500a9001534ad84-logo.png",
    "isActive": true,
    "isDeleted": false,
    "isUnlimited": false,
    "parentAccount": null,
    "credits": 100,
    "isProAccount": true,
    "is360Enabled": true,
    "isResourcesDisabled": false,
    "isEcoPrinting": false,
    "isCoreExtended": false,
    "createdAt": "2021-09-02T12:11:37.995+0000",
}
updated_pipeline:
doc! {
    "$search": {
        "autocomplete": {
            "query": "ufc0kpu!pgu1QJW3unj",
            "path": "name",
            "fuzzy": {
                "maxEdits": 2,
                "prefixLength": 8
            }
        }
    }
}
UPDATE: I have tried specifying 1 instead of majority for the write concern option on the insert_one, and unfortunately it still results in a race condition. I was reading this page on causal consistency when I decided to try the code below. It seems like this should guarantee read-after-write consistency, but it does not.
let new_account = match accounts_col.insert_one(account_fixture, {
    let mut options = InsertOneOptions::default();
    options.bypass_document_validation = None;
    options.write_concern = Some(WriteConcern::builder().w(Acknowledgment::Nodes(1)).build());
    options
}).await {
    Ok(new_account) => new_account,
    Err(e) => {
        panic!("Unable to insert new account test fixture document: {}", e);
    }
};

MongoDB change stream returns empty fullDocument on insert

MongoDB 4.4 and the corresponding Go driver are used. The database's replica set runs locally at localhost:27017 and localhost:27020. I've also tried using Atlas's sandbox cluster, which gave me the same results.
According to MongoDB's documentation, when a new document is inserted, the fullDocument field of the event data is supposed to contain the newly inserted document, which for some reason is not the case for me. The ns field, where the database and collection names are supposed to be, and documentKey, where the affected document's _id is stored, are empty as well. The operationType field contains the correct operation type. In another test it appeared that update operations do not show up in the change stream at all.
It used to work as it should, but now it doesn't. Why does this happen, and what am I doing wrong?
Code
// ds is the connection to discord, required for doing stuff inside handlers
func iterateChangeStream(stream *mongo.ChangeStream, ds *discordgo.Session, ctx context.Context, cancel context.CancelFunc) {
    defer stream.Close(ctx)
    defer cancel() // for graceful crashing
    for stream.Next(ctx) {
        var event bson.M
        err := stream.Decode(&event)
        if err != nil {
            log.Print(errors.Errorf("Failed to decode event: %w\n", err))
            return
        }
        rv := reflect.ValueOf(event["operationType"]) // getting operation type
        opType, ok := rv.Interface().(string)
        if !ok {
            log.Print("String expected in operationType\n")
            return
        }
        // event["fullDocument"] will be empty even when handling insertion
        // models.Player is a struct representing a document of the collection
        // I'm watching over
        doc, ok := event["fullDocument"].(models.Player)
        if !ok {
            log.Print("Failed to convert document into Player type")
            return
        }
        handlerCtx := context.WithValue(ctx, "doc", doc)
        // handlerToEvent maps operationType to respective handler
        go handlerToEvent[opType](ds, handlerCtx, cancel)
    }
}

func WatchEvents(ds *discordgo.Session, ctx context.Context, cancel context.CancelFunc) {
    pipeline := mongo.Pipeline{
        bson.D{{
            "$match",
            bson.D{{
                "$or", bson.A{
                    bson.D{{"operationType", "insert"}}, // !!!
                    bson.D{{"operationType", "delete"}},
                    bson.D{{"operationType", "invalidate"}},
                },
            }},
        }},
    }
    // mongo instance is initialized on program startup and stored in a global variable
    opts := options.ChangeStream().SetFullDocument(options.UpdateLookup)
    stream, err := db.Instance.Collection.Watch(ctx, pipeline, opts)
    if err != nil {
        log.Panic(err)
    }
    defer stream.Close(ctx)
    iterateChangeStream(stream, ds, ctx, cancel)
}
My issue might be related to this, except that it consistently occurs on insertion instead of occurring sometimes on updates.
If you know how to enable the change stream optimization feature flag mentioned in the link above, let me know.
Feel free to ask for more clarifications.
The question was answered here.
TLDR
You need to create the following structure to unmarshal event into:
type CSEvent struct {
    OperationType string        `bson:"operationType"`
    FullDocument  models.Player `bson:"fullDocument"`
}

var event CSEvent
err := stream.Decode(&event)
event will contain a copy of the inserted document.
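Wired back into the loop from the question, a minimal sketch (reusing the stream, the handlerToEvent map, and the handler context names from the original code) might look like this:
var event CSEvent
if err := stream.Decode(&event); err != nil {
    log.Printf("Failed to decode event: %v", err)
    return
}
// The typed struct replaces the bson.M lookups and the failing type assertion.
handlerCtx := context.WithValue(ctx, "doc", event.FullDocument)
go handlerToEvent[event.OperationType](ds, handlerCtx, cancel)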
From the sample events that I see at this link, we can see that fullDocument exists only when operationType is 'insert'.
{
    _id: { _data: '825DE67A42000000072B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
    operationType: 'insert',
    clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 7, high_: 1575385666 },
    fullDocument: {
        _id: 5de67a42113ea7de6472e768,
        name: 'Sydney Harbour Home',
        bedrooms: 4,
        bathrooms: 2.5,
        address: { market: 'Sydney', country: 'Australia' }
    },
    ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
    documentKey: { _id: 5de67a42113ea7de6472e768 }
}
{
    _id: { _data: '825DE67A42000000082B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
    operationType: 'delete',
    clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 8, high_: 1575385666 },
    ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
    documentKey: { _id: 5de67a42113ea7de6472e768 }
}
So I recommend that you either limit your $match to insert, or add an if statement on operationType:
if opType == "insert" {
doc, ok := event["fullDocument"].(models.Player)
if !ok {
log.Print("Failed to convert document into Player type")
return
}
handlerCtx := context.WithValue(ctx, "doc", doc)
// handlerToEvent maps operationType to respective handler
go handlerToEvent[opType](ds, handlerCtx, cancel)
return
}
Or make sure you're fetching the document by its id from event["documentKey"]["_id"], calling playersCollection.findOne({_id: event["documentKey"]["_id"]}).
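For that last option, a minimal sketch, assuming the playersCollection handle and models.Player type mentioned above and the ctx from the question's loop:
// Decode only the documentKey, then fetch the full document ourselves.
var key struct {
    DocumentKey struct {
        ID interface{} `bson:"_id"`
    } `bson:"documentKey"`
}
if err := stream.Decode(&key); err != nil {
    log.Printf("Failed to decode documentKey: %v", err)
    return
}
var player models.Player
err := playersCollection.FindOne(ctx, bson.M{"_id": key.DocumentKey.ID}).Decode(&player)
if err != nil {
    log.Printf("Failed to load changed document: %v", err)
    return
}
// player now holds the current state of the document that triggered the event.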

How to pass multiple parameters for a query statement in tokio_postgres?

I need to pass several parameters into the tokio_postgres::query Statement. How can I properly do this?
The way I do it below doesn't work: what gets passed to the SQL database is the unconverted $2 instead of the date 2021-04-06.
use tokio_postgres::{Error, NoTls};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let (client, connection) = tokio_postgres::connect(
        "dbname=database user=admin password=postgres host=db port=5432",
        NoTls,
    )
    .await?;
    // Spawn the connector in a separate async task
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });
    // How do I pass several parameters correctly here?
    client
        .query(
            "INSERT INTO table_name (url, date_added) \
             VALUES ('$1', '$2');",
            &[&url, &"2021-04-06"],
        )
        .await?;
}

Golang and MongoDB: DeleteMany with filter

I'm trying to read, write, and delete data from a Go application with the official MongoDB driver for Go (go.mongodb.org/mongo-driver).
Here is my struct I want to use:
Contact struct {
    ID      xid.ID `json:"contact_id" bson:"contact_id"`
    SurName string `json:"surname" bson:"surname"`
    PreName string `json:"prename" bson:"prename"`
}
// xid is https://github.com/rs/xid
I omit the code that adds to the collection, as this is working fine.
I can get a list of contacts with a specific contact_id using the following code (abbreviated):
filter := bson.D{}
cursor, err := contactCollection.Find(nil, filter)
for cur.Next(context.TODO()) {
    ...
}
This works and returns the documents. I thought about doing the same for delete or a matched get:
// delete - abbreviated
filter := bson.M{"contact_id": id}
result, _ := contactCollection.DeleteMany(nil, filter)
// result.DeletedCount is always 0, err is nil
if err != nil {
    sendError(c, err) // helper function
    return
}
c.JSON(200, gin.H{
    "ok":      true,
    "message": fmt.Sprintf("deleted %d patients", result.DeletedCount),
}) // will be called, it is part of a webservice done with gin
// get complete
func Get(c *gin.Context) {
    defer c.Done()
    id := c.Param("id")
    filter := bson.M{"contact_id": id}
    cur, err := contactCollection.Find(nil, filter)
    if err != nil {
        sendError(c, err) // helper function
        return
    } // no error
    contacts := make([]types.Contact, 0)
    for cur.Next(context.TODO()) { // nothing returned
        // create a value into which the single document can be decoded
        var elem types.Contact
        err := cur.Decode(&elem)
        if err != nil {
            sendError(c, err) // helper function
            return
        }
        contacts = append(contacts, elem)
    }
    c.JSON(200, contacts)
}
Why does the same filter not work on delete?
Edit: Insert code looks like this:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
    ID:      "abcdefg",
    SurName: "Demo",
    PreName: "on stackoverflow",
})
Contact.ID is of type xid.ID, which is a byte array:
type ID [rawLen]byte
So the insert code you provided, where you use a string literal to specify the value for the ID field, would be a compile-time error:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
    ID:      "abcdefg",
    SurName: "Demo",
    PreName: "on stackoverflow",
})
Later in your comments you clarified that the above insert code was just an example, and not how you actually do it. In your real code you unmarshal the contact (or its ID field) from a request.
xid.ID has its own unmarshaling logic, which might interpret the input data differently, and might result in an ID representing a different string value than your input. ID.UnmarshalJSON() defines how the string ID will be converted to xid.ID:
func (id *ID) UnmarshalJSON(b []byte) error {
    s := string(b)
    if s == "null" {
        *id = nilID
        return nil
    }
    return id.UnmarshalText(b[1 : len(b)-1])
}
As you can see, the first and last bytes (the surrounding quotation marks) are cut off, and ID.UnmarshalText() does even more "magic" on it (check the source if you're interested).
All in all, to avoid such "transformations" happening in the background without your knowledge, use a simple string type for your ID, and do the necessary conversions yourself wherever you need to store / transmit your ID.
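A minimal sketch of that suggestion, keeping the field names and collection from the question (the xid-to-string conversion shown is just one option):
type Contact struct {
    ID      string `json:"contact_id" bson:"contact_id"`
    SurName string `json:"surname" bson:"surname"`
    PreName string `json:"prename" bson:"prename"`
}

// Convert the xid to its canonical string form once, at the boundary.
contact := Contact{ID: xid.New().String(), SurName: "Demo", PreName: "on stackoverflow"}
if _, err := contactCollection.InsertOne(context.TODO(), contact); err != nil {
    log.Fatal(err)
}

// The same string now matches on read and on delete.
res, err := contactCollection.DeleteMany(context.TODO(), bson.M{"contact_id": contact.ID})
if err != nil {
    log.Fatal(err)
}
fmt.Printf("deleted %d contacts\n", res.DeletedCount)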
For the ID Field, you should use the primitive.ObjectID provided by the bson package.
"go.mongodb.org/mongo-driver/bson/primitive"
ID primitive.ObjectID `json:"_id" bson:"_id"`
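And a minimal sketch of the ObjectID variant, again reusing the names from the question:
type Contact struct {
    ID      primitive.ObjectID `json:"_id" bson:"_id"`
    SurName string             `json:"surname" bson:"surname"`
    PreName string             `json:"prename" bson:"prename"`
}

contact := Contact{ID: primitive.NewObjectID(), SurName: "Demo", PreName: "on stackoverflow"}
_, _ = contactCollection.InsertOne(context.TODO(), contact)

// ObjectIDs round-trip through BSON unchanged, so the same value matches on delete.
res, _ := contactCollection.DeleteMany(context.TODO(), bson.M{"_id": contact.ID})
fmt.Printf("deleted %d contacts\n", res.DeletedCount)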

CoffeeScript Nested Do Loop / Async Nested Loop

I want to save into two collections in my MongoDB. These operations are async, so I use for and do in CoffeeScript.
for machine in machines
    do (machine) ->
        # if machine does not exist
        for part in machine.parts
            do (part) ->
                # if part does not exist --> save
                # push part to machine parts list
                # save machine
The machine parts are empty later in the db. How can I make the first do loop wait for the second do loop to finish?
EDIT Real Code Example:
recipeModel = require('../models/recipe.model')
ingredientModel = require('../models/ingredient.model')

#Save Recipe into the database
async.map recipes, (recipe, next) ->
    recipeDBObject = new recipeModel()
    recipeDBObject.href = recipe.href
    recipeDBObject.ingredients = []
    recipeModel.find({ href: recipe.href }, (err, recipeFound) ->
        return next err if err
        return next null, recipeFound if recipeFound.length > 0
        recipeDBObject.title = recipe.title
        ingredientsPushArray = []
        console.log recipe.href
        async.map recipe.zutaten, (ingredient, cb) ->
            #Save all ingredients
            ingredient.idName = ingredient.name.replace(/[^a-zA-Z0-9]+/gi, "").toLowerCase()
            ingredientModel.find({ idName: ingredient.idName }, (err, ingredientFound) ->
                return next err if err
                if ingredientFound.length > 0
                    ingredientDBObject = ingredientFound[0]
                else
                    ingredientDBObject = new ingredientModel()
                    ingredientDBObject.name = ingredient.name
                    ingredientDBObject.save()
                recipeDBObject.ingredients.push({"idName": ingredient.idName, "name": ingredient.name, "amount": ingredient.amount})
                return cb(null, true)
            )
        recipeDBObject.ingredients = ingredientsPushArray
        recipeDBObject.save()
        return next(null, true)
    )
I still can't get it working. Recipes are saved and Node builds the ingredients array, but it neither saves the ingredients nor saves the array into the recipes.
EDIT 2:
async.map recipes,
    (recipe, next) ->
        recipeDBObject = new recipeModel()
        recipeDBObject.href = recipe.href
        recipeDBObject.ingredients = []
        recipeModel.find({ href: recipe.href }, (err, recipeFound) ->
            return next err if err
            return next null, recipeFound if recipeFound.length > 0
            recipeDBObject.title = recipe.title
            ingredientsPushArray = []
            ingredientsArray = []
            async.map recipe.zutaten,
                (ingredient, cb) ->
                    #Save all ingredients
                    ingredient.idName = ingredient.name.replace(/[^a-zA-Z0-9]+/gi, "").toLowerCase()
                    ingredientModel.find({ idName: ingredient.idName }, (err, ingredientFound) ->
                        return next err if err
                        ingredientsArray.push({"idName": ingredient.idName, "name": ingredient.name, "amount": ingredient.amount})
                        if ingredientFound.length > 0
                            return cb(null, true)
                        else
                            ingredientDBObject = new ingredientModel()
                            ingredientDBObject.name = ingredient.name
                            ingredientDBObject.idName = ingredient.idName
                            ingredientDBObject.save((err) ->
                                #console.log "some erros because required is empty" if err
                                return cb err if err
                                #console.log "ingredient saved"
                                return cb(null, true)
                            )
                (err, ingredientsArray) ->
                    console.log "This is never logged"
                    return err if err
                    recipeDBObject.ingredients = ingredientsArray
                    recipeDBObject.save((err) ->
                        return err if err
                        return next(null, true)
                    )
                    )
        )
    (err) ->
        console.log "show me the errors: ", err if err
Now the ingredients are saved but the recipes aren't.
Interesting resources:
http://www.hacksparrow.com/managing-nested-asynchronous-callbacks-in-node-js.html
The easiest way is to use some module for managing asynchronous control flow, for example:
async
promise-based solutions (e.g. when, bluebird, Q)
co for ES6 generator-based control flow
Here are some simple examples.
Using async.map
async = require 'async'

async.map machines,
    (machine, next) ->
        # Process single machine object
        Machine.findById machine._id, (err, found) ->
            return next err if err # return error immediately
            return next null, found if found # return the object we found
            async.map machine.parts,
                (part, cb) ->
                    # Save part to DB and call cb callback afterward
                    Part.create part, cb
                (err, parts) ->
                    return next err if err # propagate error to the next handler
                    # All parts have been saved successfully
                    machine.parts = parts
                    # Save machine to DB and call next callback afterward
                    Machine.create machine, next
    (err, machines) ->
        if err
            # Something went wrong
        else
            # All machine objects have been processed successfully
Using promises and the when module
When = require 'when'

machines_to_save = When.filter machines, ({_id}) ->
    Machine.findById(_id).then (found) -> not found

When.map machines_to_save, (machine) ->
    When.map machine.parts, (part) ->
        Part.create part
    .then (parts) ->
        machine.parts = parts
        Machine.create machine
.then (saved_machines) ->
    # All machines are saved
.otherwise (err) ->
    # Something went wrong