I am attempting to write an integration test for a utility module in which my search function returns a new aggregate pipeline. When the database has 0 collections, the test fails with an index out-of-bounds error on accounts[0]. It acts as if the aggregate pipeline is not waiting for the insert to happen. How do I refactor this so that the aggregate stage waits for the insert?
#[actix_web::test]
async fn test_search_pipeline() {
    let pipeline = vec![doc! { "$match": { "isDeleted": { "$ne": true } } }];
    let updated_pipeline = super::search(&pipeline, String::from("foo"));
    let db: Database = connect_to_database().await;
    let accounts_col: Collection<Document> = db.collection("accounts");
    let account_fixture = account::create();
    // Insert a new account in the database
    let new_account = match accounts_col.insert_one(account_fixture, None).await {
        Ok(new_account) => new_account,
        Err(e) => panic!("Unable to insert new account test fixture document: {}", e),
    };
    // Execute the search aggregate pipeline to find the document we just inserted
    let accounts = match accounts_col.aggregate(updated_pipeline, None).await {
        Ok(mut cursor) => {
            let mut accounts: Vec<Document> = Vec::new();
            while let Some(doc) = cursor.next().await {
                match doc {
                    Ok(doc) => accounts.push(doc),
                    Err(e) => panic!("Error iterating through the new account test pipeline cursor: {}", e),
                }
            }
            accounts
        }
        Err(e) => panic!("Unable to execute test search aggregate pipeline: {}", e),
    };
    // Compare the new account id that we inserted with the first search result account id
    let new_account_id: String = new_account.inserted_id.as_object_id().unwrap().to_hex();
    let search_account_id: String = accounts[0].get_object_id("_id").unwrap().to_hex();
    assert_eq!(new_account_id, search_account_id);
}
I have tried removing the actual database calls from the match statements, and nesting the aggregate stage inside the Ok arm of the insert Result. Neither of those changes seems to change the result.
account_fixture:
doc! {
    "name": "ufc0kpu!pgu1QJW3unj",
    "logo": "60e9ca73c500a9001534ad84-logo.png",
    "isActive": true,
    "isDeleted": false,
    "isUnlimited": false,
    "parentAccount": null,
    "credits": 100,
    "isProAccount": true,
    "is360Enabled": true,
    "isResourcesDisabled": false,
    "isEcoPrinting": false,
    "isCoreExtended": false,
    "createdAt": "2021-09-02T12:11:37.995+0000",
}
updated_pipeline:
doc! {
    "$search": {
        "autocomplete": {
            "query": "ufc0kpu!pgu1QJW3unj",
            "path": "name",
            "fuzzy": {
                "maxEdits": 2,
                "prefixLength": 8
            }
        }
    }
}
UPDATE: I have tried specifying 1 instead of majority for the write concern option on insert_one, and unfortunately it still results in a race condition. I was reading this page on causal consistency when I decided to try the code below. It seems like this should guarantee read-after-write consistency, but it does not.
let new_account = match accounts_col
    .insert_one(account_fixture, {
        let mut options = InsertOneOptions::default();
        options.bypass_document_validation = None;
        options.write_concern = Some(WriteConcern::builder().w(Acknowledgment::Nodes(1)).build());
        options
    })
    .await
{
    Ok(new_account) => new_account,
    Err(e) => panic!("Unable to insert new account test fixture document: {}", e),
};
Mongo 4.4 and the respective Golang driver are used. The database's replica set runs locally at localhost:27017 and localhost:27020. I've also tried using Atlas's sandbox cluster, which gave me the same results.
According to Mongo's documentation, when an insertion of a new document is handled, the fullDocument field of the event data is supposed to contain the newly inserted document, which for some reason is not the case for me. The ns field, where the database and collection names are supposed to be, and documentKey, where the affected document's _id is stored, are empty as well. The operationType field contains the correct operation type. In another test it appeared that update operations do not appear in the change stream at all.
It used to work as it should but now it doesn't. Why does it happen and what am I doing wrong?
Code
// ds is the connection to discord, required for doing stuff inside handlers
func iterateChangeStream(stream *mongo.ChangeStream, ds *discordgo.Session, ctx context.Context, cancel context.CancelFunc) {
    defer stream.Close(ctx)
    defer cancel() // for graceful crashing
    for stream.Next(ctx) {
        var event bson.M
        err := stream.Decode(&event)
        if err != nil {
            log.Print(errors.Errorf("Failed to decode event: %w\n", err))
            return
        }
        rv := reflect.ValueOf(event["operationType"]) // getting operation type
        opType, ok := rv.Interface().(string)
        if !ok {
            log.Print("String expected in operationType\n")
            return
        }
        // event["fullDocument"] will be empty even when handling insertion
        // models.Player is a struct representing a document of the collection
        // I'm watching over
        doc, ok := event["fullDocument"].(models.Player)
        if !ok {
            log.Print("Failed to convert document into Player type")
            return
        }
        handlerCtx := context.WithValue(ctx, "doc", doc)
        // handlerToEvent maps operationType to respective handler
        go handlerToEvent[opType](ds, handlerCtx, cancel)
    }
}
func WatchEvents(ds *discordgo.Session, ctx context.Context, cancel context.CancelFunc) {
    pipeline := mongo.Pipeline{
        bson.D{{
            "$match",
            bson.D{{
                "$or", bson.A{
                    bson.D{{"operationType", "insert"}}, // !!!
                    bson.D{{"operationType", "delete"}},
                    bson.D{{"operationType", "invalidate"}},
                },
            }},
        }},
    }
    // mongo instance is initialized on program startup and stored in a global variable
    opts := options.ChangeStream().SetFullDocument(options.UpdateLookup)
    stream, err := db.Instance.Collection.Watch(ctx, pipeline, opts)
    if err != nil {
        log.Panic(err)
    }
    defer stream.Close(ctx)
    iterateChangeStream(stream, ds, ctx, cancel)
}
My issue might be related to this, except that it consistently occurs on insertion instead of occurring sometimes on updates.
If you know how to enable the change stream optimization feature flag mentioned in the link above, let me know.
Feel free to ask for more clarifications.
The question was answered here.
TLDR
You need to create the following structure to unmarshal event into:
type CSEvent struct {
    OperationType string        `bson:"operationType"`
    FullDocument  models.Player `bson:"fullDocument"`
}

var event CSEvent
err := stream.Decode(&event)
event will contain a copy of the inserted document.
From the sample events in this link, we can see that fullDocument exists only when operationType is 'insert'.
{
    _id: { _data: '825DE67A42000000072B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
    operationType: 'insert',
    clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 7, high_: 1575385666 },
    fullDocument: {
        _id: 5de67a42113ea7de6472e768,
        name: 'Sydney Harbour Home',
        bedrooms: 4,
        bathrooms: 2.5,
        address: { market: 'Sydney', country: 'Australia' }
    },
    ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
    documentKey: { _id: 5de67a42113ea7de6472e768 }
}
{
    _id: { _data: '825DE67A42000000082B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
    operationType: 'delete',
    clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 8, high_: 1575385666 },
    ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
    documentKey: { _id: 5de67a42113ea7de6472e768 }
}
So I recommend you either limit your $match to insert, or add an if statement on operationType:
if opType == "insert" {
    doc, ok := event["fullDocument"].(models.Player)
    if !ok {
        log.Print("Failed to convert document into Player type")
        return
    }
    handlerCtx := context.WithValue(ctx, "doc", doc)
    // handlerToEvent maps operationType to respective handler
    go handlerToEvent[opType](ds, handlerCtx, cancel)
    return
}
or fetch the document yourself using the id from event["documentKey"]["_id"], e.g. playersCollection.findOne({_id: event["documentKey"]["_id"]}).
I want to set up a live list of "Move"s, so I used this snippet from the Amplify docs.
func createSubscription() {
    subscription = Amplify.API.subscribe(request: .subscription(of: Move.self, type: .onCreate))
    dataSink = subscription?.subscriptionDataPublisher.sink {
        if case let .failure(apiError) = $0 {
            print("Subscription has terminated with \(apiError)")
        } else {
            print("Subscription has been closed successfully")
        }
    }
    receiveValue: { result in
        switch result {
        case .success(let createdTodo):
            print("Successfully got todo from subscription: \(createdTodo)")
        case .failure(let error):
            print("Got failed result with \(error.errorDescription)")
        }
    }
}
Schema auth rules
type Move
@model
@auth(rules: [
    { allow: owner, ownerField: "owner", operations: [create, update, delete, read] },
])
{
But since I added auth to the "Move" type, I get this error: GraphQLResponseError<Move>: GraphQL service returned a successful response containing errors: [Amplify.GraphQLError(message: "Validation error of type MissingFieldArgument: Missing field argument owner @ 'onCreateMove'", locations: nil, path: nil, extensions: nil)]
and Recovery suggestion: The list of GraphQLError contains service-specific messages.
So everything is working locally, but I think I need to pass the authorization to the request, and I can't find any way to do it. Any ideas how I might get this request to process properly?
Got it working by writing my own request and passing the owner field directly:
extension GraphQLRequest {
    static func newMoves() -> GraphQLRequest<Move> {
        let operationName = "getMove"
        let document = """
        subscription MySubscription {
          onCreateMove(owner: "MyUser") {
            accerationMagnitude
            id
          }
        }
        """
        return GraphQLRequest<Move>(document: document,
                                    // variables: [],
                                    responseType: Move.self,
                                    decodePath: operationName)
    }
}
I'm having trouble finding objects in a nested array. I need to find home/away within the league array, where each league has an events array.
Example JSON:
{
    "sportId": 4,
    "last": 266178326,
    "league": [
        {
            "id": 423,
            "name": "Germany - Bundesliga",
            "events": [
                {
                    "id": 1125584543,
                    "starts": "2020-06-07T17:00:00Z",
                    "home": "SC Rasta Vechta",
                    "away": "EnBW Ludwigsburg",
                    "rotNum": "2601",
                    "liveStatus": 0,
                    "status": "I",
                    "parlayRestriction": 0,
                    "altTeaser": false,
                    "resultingUnit": "Regular"
                },
                {
                    "id": 1125585441,
                    "starts": "2020-06-10T18:30:00Z",
                    "home": "Ratiopharm Ulm",
                    "away": "Crailsheim Merlins",
                    "rotNum": "2617",
                    "liveStatus": 0,
                    "status": "I",
                    "parlayRestriction": 0,
                    "altTeaser": false,
                    "resultingUnit": "Regular"
                }
            ]
        },
        {
            "id": 268,
            "name": "ABA - Adriatic League",
            "events": [
                {
                    "id": 1122419811,
                    "starts": "2020-05-07T19:34:00Z",
                    "home": "Test 1(Do Not Wager)",
                    "away": "Test 2(Do Not Wager)",
                    "rotNum": "999998",
                    "liveStatus": 0,
                    "status": "I",
                    "parlayRestriction": 1,
                    "altTeaser": false,
                    "resultingUnit": "Regular"
                }
            ]
        },
        {
            "id": 487,
            "name": "NBA",
            "events": [
                {
                    "id": 1120192519,
                    "starts": "2020-05-01T17:00:00Z",
                    "home": "Test Team B",
                    "away": "Test Team A",
                    "rotNum": "123",
                    "liveStatus": 0,
                    "status": "O",
                    "parlayRestriction": 0,
                    "altTeaser": false,
                    "resultingUnit": "Regular"
                }
            ]
        }
    ]
}
For example, to find the league name "Germany - Bundesliga", I solved it by doing:
// retrieve league by searching in the fixture collection
func FindLeagueFixture(name string) (pinnacle.Fixtures, pinnacle.League, error) {
    var fixtures []pinnacle.Fixtures
    err := db.C(FIXTURES).Find(
        bson.M{"league.name": bson.RegEx{
            Pattern: name,
            Options: "i",
        }}).All(&fixtures)
    if err != nil {
        return pinnacle.Fixtures{}, pinnacle.League{}, err
    }
But now I have to find event home/away names within league events. For example, finding "SC Rasta Vechta". What's the best way to handle this?
I've tried something like the following (no regex usage yet, since I'm having trouble already, and only trying a count rather than doing the whole unmarshaling for now):
// retrieve sport team by searching in the fixture collection
func FindHomeOrAwayFixture(name string) (pinnacle.Fixtures, pinnacle.League, error) {
    var fixtures []pinnacle.Fixtures
    // find home
    c, err := db.C(FIXTURES).Find(
        bson.M{"league": bson.M{"$elemMatch": bson.M{"home": name}}}).Count()
    if err != nil {
        return pinnacle.Fixtures{}, pinnacle.League{}, err
    }
    fmt.Println(c)
}
I'm trying to map data from the DB (Mongo) to a slice in Go. Everything works fine if I return a simple []string, but if I change the type to []*models.Organization, the code returns a slice of identical elements.
func (os *OrganizationService) GetAll() ([]*models.Organization, error) {
    var organizations []*models.Organization
    results := os.MongoClient.Collection("organizations").Find(bson.M{})
    organization := &models.Organization{}
    for results.Next(organization) {
        fmt.Println(organization)
        organizations = append(organizations, organization)
    }
    return organizations, nil
}
I expect the output [{ Name: "someOrg", ID: "someId" }, { Name: "someOrg2", ID: "someID" }, ... ], but the actual output is [{ Name: "someOrg", ID: "someId" }, { Name: "someOrg", ID: "someId" }, ... ].
I'm using the bongo package.
The application appends the same pointer, organization, on every iteration through the loop, so every slice element points at the one struct, which holds whatever was decoded last. Fix this by creating a new value inside the loop.
func (os *OrganizationService) GetAll() ([]*models.Organization, error) {
    var organizations []*models.Organization
    results := os.MongoClient.Collection("organizations").Find(bson.M{})
    organization := &models.Organization{}
    for results.Next(organization) {
        fmt.Println(organization)
        organizations = append(organizations, organization)
        organization = &models.Organization{} // new value for next iteration
    }
    return organizations, nil
}
I am using AWS AppSync for mobile development (iOS) for offline/online capabilities.
I am trying to save data in offline mode, but I am getting the error "Variable id was not provided / Missing value".
When the app comes online, it automatically syncs to DynamoDB; the issue is only in offline mode, where I am unable to fetch the saved record.
Here is the code used in the application:
let userObjInput = userObjectInput(id: "id", firstName: "firstname", lastName: "lastName")
let CategoryInputs = CreateUserCategoryInput(categoryName: "categoryValue", user: userObjInput)
let mutation = CategoryMutation(input: CategoryInputs)
appSyncClient?.perform(mutation: mutation, queue: .main, optimisticUpdate: { (transaction) in
    do {
        let selectionSets = try transaction?.read(query: query)
        try transaction?.update(query: GetUserCategoriesOfUserQuery(id: "id")) { (data: inout GetUserCategoriesOfUserQuery.Data) in
            data.getAllCategoriesForUser?.append(
                GetUserCategoriesOfUserQuery.Data.GetAllCategoriesForUser(
                    id: UUID().uuidString,
                    categoryName: CategoryInputs.categoryName!,
                    isDeleted: false,
                    user: GetUserCategoriesOfUserQuery.Data.GetAllCategoriesForUser.User(
                        id: userObjInput.id!,
                        firstName: userObjInput.firstName!,
                        lastName: userObjInput.lastName!)))
        }
    } catch {
        print(error.localizedDescription)
    }
}, conflictResolutionBlock: nil, resultHandler: { (result, error) in
    if error == nil {
        fetchCategories()
    } else {
        print(error?.localizedDescription)
    }
})
For those who have a problem with the optimistic UI missing a value: I've found one trick to temporarily work around the issue by passing the parameter using a custom request header from the client app.
Before, your query would look like this: allDiaries(author: String): [Diary]
Just change it to: allDiaries: [Diary]
So your request mapping template would look like below:
{
    "version" : "2017-02-28",
    "operation" : "Scan",
    "filter" : {
        "expression" : "author = :author",
        "expressionValues" : {
            ":author" : { "S" : "$context.request.headers.author" }
        }
    }
}
Reference: How to pass AWS AppSync custom request header in iOS client?
Hope it is useful! Good luck :)