How to get index of new object on .childMoved - swift

I am using an ordered query of the Firebase Realtime Database. I have a .childMoved listener on the query, and when a child's position in the ordered list changes my listener gets fired. However, there doesn't seem to be a way to know what the new index of the object is.
rtdb.child(refString)
    .queryOrdered(byChild: "queuePosition")
    .observe(.childMoved, with: { snapshot in
        // Do something here with snapshot data
    }) { error in
        // error
    }
How can I find out where the object should be moved to? Or should I just do sorting on the client?

The Firebase Database doesn't expose indexes, since those don't scale well in a multi-user environment. It does have an option to pass the key of the previous sibling of the node, via observe(_:andPreviousSiblingKeyWith:).
With this key you can look up the sibling node, and move the child node after that.
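For example, a rough Swift sketch using that variant of the observer (same query as in the question; the comments describe one way to apply the key locally):

let handle = rtdb.child(refString)
    .queryOrdered(byChild: "queuePosition")
    .observe(.childMoved, andPreviousSiblingKeyWith: { snapshot, previousSiblingKey in
        // The moved child now sits directly after previousSiblingKey.
        // Find that key in your local ordered array and insert the moved child
        // right after it (or at index 0 if previousSiblingKey is nil).
    })
// Keep `handle` if you need to remove the observer later.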

Related

Firestore How to access a value from a snapshot in a Document Snapshot stream

I'm new to Dart/Flutter and am building an app right now that keeps track of some data on each user in a Firestore database. I have a 'Users' collection which contains a document for each user, and one of the fields in the user document is the "UID" received through firebase_auth.
That being said, to make sure I have access to the latest copy of a user document, I hold a Stream. I want to somehow access the "UID" field from the latest snapshot in the stream and then do some other operations with it.
Is there any way to do this? Am I using/understanding streams incorrectly?
If you only ever need the UID to build other widgets, you can simply use a StreamBuilder which will rebuild its children whenever a new value is emitted from the stream (which you get a copy of). However, if you need to access the latest UID at some arbitrary point of time, check out RxDart's BehaviorSubject which provides constant-time synchronous access to the latest emitted value, if one exists. It is very helpful for handling state in general.
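As a rough illustration, a sketch assuming cloud_firestore (the 'Users' collection and "UID" field come from the question; docId is a placeholder for the document you are watching):

StreamBuilder<DocumentSnapshot>(
  stream: FirebaseFirestore.instance.collection('Users').doc(docId).snapshots(),
  builder: (context, snapshot) {
    if (!snapshot.hasData) return const CircularProgressIndicator();
    final uid = snapshot.data!['UID'] as String; // latest copy of the field
    return Text(uid); // build whatever needs the UID here
  },
)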

How to persist aggregate/read model from "EventStore" in a database?

Trying to implement Event Sourcing and CQRS for the first time, but got stuck when it came to persisting the aggregates.
This is where I'm at now
I've set up "EventStore" and a stream, "foos"
Connected to it from node-eventstore-client
I subscribe to events with a catch-up subscription
This is all working fine.
With the help of the eventAppeared event handler function I can build the aggregate, whenever events occur. This is great, but what do I do with it?
Let's say I build an aggregate that is a list of Foos:
[
  {
    id: 'some aggregate uuidv5 made from barId and bazId',
    barId: 'qwe',
    bazId: 'rty',
    isActive: true,
    history: [
      {
        id: 'some event uuid',
        data: {
          isActive: true,
        },
        timestamp: 123456788,
        eventType: 'IsActiveUpdated'
      },
      {
        id: 'some event uuid',
        data: {
          barId: 'qwe',
          bazId: 'rty',
        },
        timestamp: 123456789,
        eventType: 'FooCreated'
      }
    ]
  }
]
To follow CQRS I will build the above aggregate within a Read Model, right? But how do I store this aggregate in a database?
I guess a NoSQL database should be fine for this, but I definitely need a db since I will put a gRPC API in front of this and other read models / aggregates.
But how do I actually get from having built the aggregate to persisting it in the db?
I once tried following this tutorial https://blog.insiderattack.net/implementing-event-sourcing-and-cqrs-pattern-with-mongodb-66991e7b72be which was super simple, since you'd use MongoDB both as the event store and just create a view for the aggregate and update that one when new events come in. It had its flaws and limitations (the aggregation pipeline), which is why I have now turned to "EventStore" for the event store part.
But how to persist the aggregate, which is currently just built and stored in code/memory from events in "EventStore"...?
I feel this may be a silly question, but do I have to loop over each item in the array and insert each one into the db table/collection, or is there a way to dump the whole array/aggregate there at once?
What happens after? Do you create a materialized view per aggregate and query against that?
I'm open to picking the best db for this, whether that is postgres/other rdbms, mongodb, cassandra, redis, table storage etc.
Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?
So given that barId and bazId in combination can be used for grouping events, instead of a single stream I'd think more specialized streams such as foos-barId-bazId would be the way to go, to try and reduce the frequency of incoming new events to a point where recreating materialized views will make sense.
Is there a general rule of thumb saying not to recreate/update/refresh materialized views if the update frequency gets below a certain limit? Then the only other alternative would be querying from a normal table/collection?
Edit:
In the end I'm trying to make a gRPC api that has just 2 rpcs - one for getting a single foo by id and one for getting all foos (with optional field for filtering by status - but that is not so important). The simplified proto would look something like this:
rpc GetFoo(FooRequest) returns (Foo);
rpc GetFoos(FoosRequest) returns (FoosResponse);

message FooRequest {
  string id = 1; // uuid
}

// If the optional status field is not specified, return all foos
message FoosRequest {
  // If this field is specified, only return the Foos whose isActive matches it
  FooStatus status = 1;
  enum FooStatus {
    UNKNOWN = 0;
    ACTIVE = 1;
    INACTIVE = 2;
  }
}

message FoosResponse {
  repeated Foo foos = 1;
}

message Foo {
  string id = 1; // uuid
  string bar_id = 2; // uuid
  string baz_id = 3; // uuid
  bool is_active = 4;
  repeated Event history = 5;
  google.protobuf.Timestamp last_updated = 6;
}

message Event {
  string id = 1; // uuid
  google.protobuf.Any data = 2;
  google.protobuf.Timestamp timestamp = 3;
  string eventType = 4;
}
The incoming events would look something like this:
{
  id: 'some event uuid',
  barId: 'qwe',
  bazId: 'rty',
  timestamp: 123456789,
  eventType: 'FooCreated'
}

{
  id: 'some event uuid',
  isActive: true,
  timestamp: 123456788,
  eventType: 'IsActiveUpdated'
}
As you can see, there is no uuid that makes it possible to GetFoo(uuid) in the gRPC API, which is why I'll generate a uuidv5 from the barId and bazId, which combined form a valid uuid. I'm doing that in the projection / aggregate you see above.
Also, the GetFoos rpc will either return all foos (if the status field is left undefined), or alternatively return the foos whose isActive matches the status field (if specified).
Yet I can't figure out how to continue from the catchup subscription handler.
I have the events stored in "EventStore" (https://eventstore.com/). Using a catch-up subscription, I have built an aggregate/projection with an array of Foos in the form that I want them. But to be able to get a single Foo by id from my gRPC API, I guess I'll need to store this entire aggregate/projection in a database of some sort, so the gRPC API can connect and fetch the data? And every time a new event comes in I'll need to add that event to the database as well, or how does this work?
I think I've read every resource I can possibly find on the internet, but still I'm missing some key pieces of information to figure this out.
The gRPC is not so important. It could be REST I guess, but my big question is how to make the aggregated/projected data available to the API service (possibly more APIs will need it as well)? I guess I will need to store the aggregated/projected data with the generated uuid and history fields in a database to be able to fetch it by uuid from the API service, but what database, and how is this storing process done from the catchup event handler where I build the aggregate?
I know exactly how you feel! This is basically what happened to me when I first tried to do CQRS and ES.
I think you have a couple of gaps in your knowledge which I'm sure you will rapidly plug. You hydrate an aggregate from the event stream as you are doing. That IS your aggregate persisted. The read model is something different. Let me explain...
Your read model is the thing you use to run queries against and to provide data for display to a UI, for example. Your aggregates are not (directly) involved in that. In fact they should be encapsulated, meaning that you can't 'see' their state from the outside, i.e. no getters and setters, with the exception of the aggregate ID, which would have a getter.
This article gives you a helpful overview of how it all fits together: CQRS + Event Sourcing – Step by Step
The idea is that when an aggregate changes state it can only do so via an event it generates. You store that event in the event store. That event is also published so that read models can be updated.
Also, looking at your aggregate, it looks more like a typical read model object or DTO. An aggregate is interested in functionality, not properties. So you would expect to see public void functions for issuing commands to the aggregate, but not public properties like isActive or history.
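For illustration only, a minimal sketch of what an encapsulated aggregate might look like in JavaScript (the names are made up, not taken from any particular framework):

class FooAggregate {
  constructor(id) {
    this.id = id;                 // only the id is exposed
    this._isActive = false;       // internal state, no public getters/setters
    this._uncommittedEvents = [];
  }

  // Command: returns nothing, only produces events (or throws on invalid state)
  activate() {
    if (this._isActive) return;   // business rule / idempotency guard
    this._apply({ eventType: 'IsActiveUpdated', data: { isActive: true } });
  }

  // Rehydration: fold previously stored events back onto the aggregate
  loadFromHistory(events) {
    events.forEach(e => this._when(e));
  }

  _apply(event) {
    this._when(event);
    this._uncommittedEvents.push(event); // to be appended to the event store
  }

  _when(event) {
    if (event.eventType === 'IsActiveUpdated') this._isActive = event.data.isActive;
  }
}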
I hope that makes sense.
EDIT:
Here are some more practical suggestions.
"To follow CQRS I will build the above aggregate within a Read Model, right? "
You do not build aggregates in the read model. They are separate things on separate sides of the CQRS equation. Aggregates are on the command side. Queries are done against read models, which are different from aggregates.
Aggregates have public void functions and no getters or setters (with the exception of the aggregate id). They are encapsulated. They generate events when their state changes as a result of a command being issued. These events are stored in an event store and are used to recover the state of an aggregate. In other words, that is how an aggregate is stored.
The events go on to be published so the event handlers and other processes can react to them and update the read model and or trigger new cascading commands.
"Last question. For now I'm just using a single stream "foos", but at this level I expect new events to happen quite frequently (every couple of seconds or so) but as I understand it you'd still persist it and update it using materialized views right?"
Every couple of seconds is very likely to be fine. I'm more concerned about the "persist and update using materialised views" part. I don't know exactly what you mean by that, but it doesn't sound like you have the right idea. Views should be very simple read models, with no need for complex relations like you find in an RDBMS, and they are therefore highly optimised and fast for reading.
There can be a lot of confusion around all the terminology and jargon used in DDD, CQRS and ES. I think in this case the confusion lies in what you think an aggregate is. You mention that you would like to persist your aggregate as a read model. As @Codescribler mentioned, at the sink end of your event stream there isn't a concept of an aggregate. Concretely, in ES, commands are applied to aggregates in your domain by loading previous events pertaining to that aggregate, rehydrating the aggregate by folding each previous event onto it, and then applying the command, which generates more events to be persisted in the event store.
Downstream, a subscribing process receives all the events in order and builds a read model based on the events and the data contained within. The confusion here is that this read model, at this end, is not an aggregate per se. It might very well look exactly like your aggregate at the domain end, or it could be a read model that doesn't use all of the events and/or all of the event data.
For example, you may choose to use every bit of information and build a read model that looks exactly like the aggregate hydrated up to the newest event (likely your source of confusion). You may instead have another process that builds a read model that only tallies a specific type of event. You might even subscribe to multiple streams and "join" them into a big read model.
As for how to store it, this is really up to you. It seems to me like you are taking the events and rebuilding your aggregate plus a history of events in an in-memory structure. This, of course, doesn't scale, which is why you want to store it at rest in a database. I wouldn't use the in-memory structure, since you would need to do a lot of state diffing when you flush to the database. You should modify the database directly in response to each individual event. Ideally, you also transactionally store the stream position (checkpoint) with said modification so you don't process the same event again in the case of a failure.
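To make that concrete, here is a hedged sketch of an eventAppeared handler (the kind you pass to node-eventstore-client's subscribeToStreamFrom) that updates a MongoDB read model per event and stores the checkpoint alongside it. The foos collection handle, the uuid NAMESPACE and the lookUpFooIdFor correlation helper are assumptions for illustration, not a prescribed design.

const { v5: uuidv5 } = require('uuid');

async function eventAppeared(subscription, resolvedEvent) {
  const recorded = resolvedEvent.originalEvent;
  const data = JSON.parse(recorded.data.toString());
  const checkpoint = resolvedEvent.originalEventNumber.toNumber();
  const historyEntry = { id: recorded.eventId, data: data, eventType: recorded.eventType };

  if (recorded.eventType === 'FooCreated') {
    // Deterministic id derived from barId + bazId, so GetFoo(uuid) can find it later
    const fooId = uuidv5(`${data.barId}:${data.bazId}`, NAMESPACE);
    await foos.updateOne(
      { _id: fooId },
      { $set: { barId: data.barId, bazId: data.bazId, checkpoint: checkpoint },
        $push: { history: historyEntry } },
      { upsert: true }
    );
  } else if (recorded.eventType === 'IsActiveUpdated') {
    await foos.updateOne(
      { _id: lookUpFooIdFor(recorded) }, // however you correlate this event to its Foo
      { $set: { isActive: data.isActive, checkpoint: checkpoint },
        $push: { history: historyEntry } }
    );
  }
}

GetFoo(uuid) then becomes a findOne by _id, and GetFoos a find with an optional isActive filter.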
Hope this helps a bit.

Has the contents of an IReliableDictionary changed?

I am periodically doing full backups of an IReliableDictionary in Azure Service Fabric. Every X minutes I check to see if the number of elements in the collection has changed and, if so, I create a new backup.
This obviously has the downside that if I just change the value of an item in the dictionary, the collection size does not change and no backup occurs.
One solution would be to enumerate the entire dictionary and calculate an overall hash for the collection (assuming the order of the items can be guaranteed). Is there a better way to identify if an IReliableDictionary has changed?
There are a few events generated by the StateManager that may be helpful.
TransactionChanged events are raised when a transaction is committed.
For example:
public MyService(StatefulServiceContext context)
    : base(MyService.EndpointName, context, CreateReliableStateManager(context))
{
    this.StateManager.TransactionChanged += this.OnTransactionChangedHandler;
}
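A minimal sketch of what the handler might do for the backup scenario (the hasChangedSinceLastBackup field is illustrative, not part of the API):

private void OnTransactionChangedHandler(object sender, NotifyTransactionChangedEventArgs e)
{
    // Mark the partition dirty on every committed transaction so the periodic
    // backup can check a flag instead of comparing element counts.
    if (e.Action == NotifyTransactionChangedAction.Commit)
    {
        this.hasChangedSinceLastBackup = true;
    }
}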

Deleting an Item in Firebase

I have the following data in Firebase:
Before Deletion (Link)
In the "-Kabn1954" branch, I want to delete the item "apple". Using Swift, I delete an item at a specific index, in a particular branch, using this:
self.ref.child("-Kabn1954").child("foods").child("1").removeValue()
However, after I do this, the Firebase data looks like this:
After Deletion (Link)
As you can see, the data in this branch now goes directly from index 0 to index 2. For this reason, I get an error. How can I make it such that when the item at index 1 is deleted, the two remaining items have an index of 0 followed by an index of 1?
Firebase doesn't actually store the data as an array, instead it stores it as an object keyed by the index as you're observing. The guide suggests that you should try to restructure your data so that the array-like behavior is not used.
If that is not possible or not preferable: I don't know exactly how the Swift API works, but in both the Python and JavaScript libraries, if you observe the parent foods element, you get an array which you can splice and then push as an update. I'm guessing this is also true in Swift, as the API indicates that an NSArray can be returned too.
As the blog post mentions, you'll need to update the entire array when you want to reindex, as Firebase will not do it for you. setValue() accepts an NSArray and can be called on the foods reference. Be careful about race conditions here: you'll want to encapsulate the read and write in a single transaction to avoid losing your update.
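For example, a hedged Swift sketch that does the re-index inside a transaction on the foods reference (the index 1 and the "-Kabn1954" key come from the question):

let foodsRef = self.ref.child("-Kabn1954").child("foods")
foodsRef.runTransactionBlock { currentData -> TransactionResult in
    if var foods = currentData.value as? [Any] {
        foods.remove(at: 1)          // drop "apple"
        currentData.value = foods    // remaining items are re-indexed 0, 1, ...
    }
    return TransactionResult.success(withValue: currentData)
}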

Meteor reactive publish data from different collections

I am trying to build a home-automation system with Meteor, and I would like to do the following.
I have a collection with all the different liveValues I'm reading from various sources. Each document holds the current value of, for example, a sensor.
Now I want to create a second collection called Thing. In this collection I'd like to add all my "Things", for example "Room temperature living", with the data for each thing. One attribute should be a reference to one of the liveValues.
Now I want to publish and subscribe to the Thing collection with Meteor, because on the web interface it doesn't matter which liveValue is behind the Thing.
Here is where, in my opinion, the complicated part starts.
How can I publish the data to the client so that I get a reactive update when the liveValue behind a Thing has changed, even though it lives in a different collection than the Thing collection?
Ideally I would do this via one subscription to a single Thing document, and have that subscription also deliver the updates to the linked liveValue from the liveValue collection.
Is this workable?
Does somebody have an idea how I can handle this?
I've heard about meteor-reactive-publish, but I'm not sure if it is the solution. I've also heard that it puts a lot of load on the server.
Thanks for your help.
So basically you want to merge the documents on the server side into one reactive collection on the client side.
You should use observeChanges provided by Meteor Collections as described in the docs.
With this you can observe the changes on your server-side collections and publish them to the aggregated 'things' collection on the client, like this:
// Publish an aggregated 'things' collection built from the sensor data
Meteor.publish('things', function () {
  // Get the data from a kind of sensor
  var cursor = SomeSensor.find({/* your query */});
  var self = this;

  // Observe the changes in the cursor and publish them to the
  // 'things' collection on the client. observeChanges callbacks
  // receive the document id and the changed fields.
  var observer = cursor.observeChanges({
    added: function (id, fields) {
      self.added('things', id, fields);
    },
    removed: function (id) {
      self.removed('things', id);
    },
    changed: function (id, fields) {
      self.changed('things', id, fields);
    }
  });

  // Make the publication ready
  self.ready();

  // Stop the observer on subscription stop
  self.onStop(function () {
    observer.stop();
  });
});
With this the things collection will have the data from all the sensors reactively.
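On the client you then declare a client-only 'things' collection and subscribe to it (a sketch; the publication name matches the server code above):

// Client only: mirror of the published collection, no server-side collection needed
const Things = new Mongo.Collection('things');
Meteor.subscribe('things');

// Reactive: re-runs whenever a sensor value changes on the server
Tracker.autorun(function () {
  console.log(Things.find().fetch());
});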
Hope it helps you.