I am creating a reactive application with Meteor (with MongoDB as a backend).
I initially created a non-reactive-aware collection and denormalizers, e.g.:
class DocCollection extends Mongo.Collection {
  insert(doc, callback) {
    const docId = super.insert(doc, callback);
    doc = this.findOne(docId); // for illustration
    console.log(doc);
    return docId;
  }
}
docMongo = new DocCollection();
Now, I'd like to wrap it into MongoObservable, which will facilitate listening to the changes to the collection:
export const Doc = new MongoObservable.Collection(docMongo);
Then, I define a Method:
Meteor.methods({
  add_me() {
    Doc.insert(myDoc);
  }
});
in server/main.js and call it in app.component.ts's constructor:
@Component(...)
export class AppComponent {
  constructor() {
    Meteor.call('add_me');
  }
}
I get undefined printed to the console (unless I sleep a little before the findOne), so I suppose that when I look for the doc right after inserting it into my Mongo.Collection, the document isn't yet available to be found.
Why does this happen, even though I overrode the non-reactive class and only then wrapped it in MongoObservable?
How do I typically do denormalization with a reactive collection? Should I pass observables to my denormalizers and create new ones there, or is it possible to nicely wrap the non-reactive code afterwards (like I tried and failed above)? Note that I don't want to pass doc in directly, as in more complex scenarios it will cause other inserts/updates elsewhere for which I'd also want to wait.
How do people typically test these things? If I run a test, the code above may succeed locally, as db insertion time is small, but fail when the delay is higher.
I have a Firestore document that changes infrequently and is used as a lookup table in many different places in my app. It is my understanding that a get for the document always performs a server query, even if the data in the document hasn't changed; the local cache is only used when the server is unavailable.
Since the document changes infrequently I would like to do something like this...
1) When the app starts, get the document and store it locally.
2) Setup a listener so that if the document changes the local copy is updated.
3) When the local copy is updated broadcast this change to any widget that may be using the document.
This is the way I wish Firestore worked by default.
Is this a good idea? Any suggestions on how to implement this?
Here is what I ended up doing. It was actually fairly straightforward. I implemented this as a static class and am not sure that's the best approach, but I like everything else about it.
I created a class that sets up a listener for the document and also provides a stream for when it is updated. As a bit of side work, my class also parses the document into a Map and sorts the games.
import 'package:cloud_firestore/cloud_firestore.dart';
import 'dart:async';
import 'package:pari/game.dart';
class Schedule {
  static Map<String, Game> games = Map<String, Game>();
  static StreamController<Map<String, Game>> _onUpdateController = StreamController.broadcast();
  static Stream<Map<String, Game>> get onUpdate => _onUpdateController.stream;

  static void setupListener() {
    print('setupListener');
    DocumentReference reference = Firestore.instance.collection('schedule').document('2018');
    reference.snapshots().listen((documentSnapshot) {
      print('listen begin');
      // Parse each raw value in the document into a Game.
      List<Game> sortedList = List<Game>();
      documentSnapshot.data.values.forEach((value) {
        Game game = Game.fromMap(value);
        sortedList.add(game);
      });
      sortedList.sort((a, b) => a.startTime.compareTo(b.startTime));
      // Rebuild the lookup map in sorted order, then notify listeners.
      games.clear();
      sortedList.forEach((game) {
        games.addAll({game.key: game});
      });
      _onUpdateController.add(games);
      print('listen end');
      print('games: ${games.length}');
    });
  }
}
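For completeness, a hedged usage sketch (the listener would typically be set up once at app startup):
// Hypothetical usage: wire up the listener once, then react to updates anywhere.
Schedule.setupListener();
Schedule.onUpdate.listen((games) {
  print('schedule updated: ${games.length} games');
});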
I am working on an Angular2 application (we use beta3), and the issue is the following:
Usually we have a component that uses some service that makes a REST call, and the component displays the data. Great.
However, we have a page with more than 6 components, all of them using the same REST call (the backend returns data for ALL of them), and it doesn't make sense to make the REST call 6 times, once per component; it would also be awkward to roll our own client-side caching.
Is there something available out of the box, or a pattern to handle such a case?
Thanks.
Just do it in a shared service. If you add it only in bootstrap(..., [OtherProviders, HTTP_PROVIDERS, MyService]), each component will get the same instance injected. Store the data in the service and every component can access it:
export class MyComponent {
  constructor(private dataService: MyService) {
    dataService.getData().subscribe(data => { this.data = data; });
  }
}
export class MyService {
  private data: any;

  getData() {
    if (!this.data) {
      // First call: fetch once and cache the result as a side effect.
      return http.get(...).map(...).do(data => { this.data = data; });
    }
    // Later calls: replay the cached data as an observable.
    return Observable.of(this.data);
  }
}
@Günter's answer really makes sense!
I don't know how your code is organized, but an observable can also be subscribed to several times. To do that you need to make it "hot" using the share operator:
export class MyService {
  dataObservable: Observable<any>;

  initDataObservable() {
    // share() makes the observable hot: one underlying request is
    // multicast to all subscribers instead of re-executing per subscribe.
    this.dataObservable = http.get(...).map(...).share();
  }
}
Without the share operator, the corresponding request would be executed several times (once per subscribe).
Note that the request is only executed once the first subscribe call is made on the observable; until then, nothing happens.
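For illustration, a minimal sketch of two consumers sharing the single underlying request (the service instance name is hypothetical):
// Both subscriptions are served by one HTTP request, because share()
// multicasts the source while the subscriptions overlap.
myService.initDataObservable();
myService.dataObservable.subscribe(data => console.log('component A', data));
myService.dataObservable.subscribe(data => console.log('component B', data));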
I am working on a server component which is responsible for caching models in memory and then streaming any changes to interested clients.
When the first client requests a model (well, a model key; each model has a key to identify it), the model will be created (along with any subscriptions to downstream systems) and then sent to the client, followed by a stream of updates (generated by downstream systems). Any subsequent clients should get this cached (updated) model, again with the stream of updates. When the last client unsubscribes from the model, the downstream subscriptions should be destroyed and the cached model discarded.
Could anyone point me in the right direction as to how Rx could help here? I guess what isn't clear to me at the moment is how I synchronize the state (of the object) and the stream of changes. Would I have two separate IObservables for the model and the updates?
Update: here's what I have so far:
Model model = null;
return Observable.Create((IObserver<ModelUpdate> observer) =>
{
    model = _modelFactory.GetModel(key);
    _backendThing.Subscribe(model, observer.OnNext);
    return Disposable.Create(() =>
    {
        _backendThing.Unsubscribe(model);
    });
})
.Do((u) => model.MergeUpdate(u))
.Buffer(_bufferLength)
.Select(inp => new ModelEvent(inp))
.Publish()
.RefCount()
.StartWith(new ModelEvent(model));
If I understood the problem correctly, there are Models coming in dynamically; at any point in your application's lifetime, the number of Models is unknown.
For that purpose an IObservable<IEnumerable<Model>> looks like the way to go. Each time a new Model is added or an existing one removed, the updated IEnumerable<Model> would be streamed. This essentially preserves the existing objects, as opposed to recreating all Models on every update, unless there is a good reason to do so.
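A minimal sketch of that shape, using a BehaviorSubject so late subscribers immediately receive the current set (class and member names are assumptions, not from the question):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reactive.Subjects;

public class ModelSet
{
    private readonly List<Model> _models = new List<Model>();
    // BehaviorSubject replays the latest snapshot to each new subscriber.
    private readonly BehaviorSubject<IEnumerable<Model>> _subject =
        new BehaviorSubject<IEnumerable<Model>>(Enumerable.Empty<Model>());

    public IObservable<IEnumerable<Model>> Models => _subject;

    public void Add(Model model)
    {
        _models.Add(model);
        _subject.OnNext(_models.ToArray()); // snapshot; existing Model objects are reused
    }

    public void Remove(Model model)
    {
        _models.Remove(model);
        _subject.OnNext(_models.ToArray());
    }
}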
As for updates to each Model object's state, such as a field or property value changing, I would look into Paul Betts' ReactiveUI project; it has something called ReactiveObject. ReactiveObject helps you get change notifications easily, but the library is mainly designed for WPF MVVM applications.
Here is how a Model's state update would go with ReactiveObject:
public class Model : ReactiveObject
{
    int _currentPressure;

    public int CurrentPressure
    {
        get { return _currentPressure; }
        set { this.RaiseAndSetIfChanged(ref _currentPressure, value); }
    }
}
Now, anywhere you have a Model object in your application, you can easily get an observable that gives you updates about the object's pressure, using the When or WhenAny extension methods.
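For instance, a minimal sketch using ReactiveUI's WhenAnyValue convenience method (built on top of WhenAny):
// Emits the current pressure immediately, then again on every change.
model.WhenAnyValue(m => m.CurrentPressure)
     .Subscribe(pressure => Console.WriteLine("Pressure: " + pressure));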
You could however not use ReactiveUI and have a simple IObservable whenever a state change occurs.
Something like this may work, though your requirements aren't exactly clear to me.
private static readonly ConcurrentDictionary<Key, IObservable<Model>> cache = new...
...
public IObservable<Model> GetModel(Key key)
{
    return cache.GetOrAdd(key, CreateModelWithUpdates);
}

private IObservable<Model> CreateModelWithUpdates(Key key)
{
    // Observable.Using ties the Model's lifetime to the shared subscription:
    // the Model is disposed when the last subscriber disconnects.
    return Observable.Using(() => new Model(key), model => GetUpdates(model).StartWith(model))
        // Publish(null) + RefCount() shares one subscription among all
        // clients and replays the latest model to late subscribers; the
        // initial null is filtered out so clients only ever see a real model.
        .Publish((Model)null)
        .RefCount()
        .Where(model => model != null);
}

private IObservable<Model> GetUpdates(Model model) { ... }
...

public class Model : IDisposable
{
    ...
}
In the following case where two DbContexts are nested due to method calls:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B();
        //...do some more work here
    }
}

public void Method_B() {
    using (var db = new SomeDbContext()) {
        //...do some work
    }
}
Question:
Will this nesting cause any issues? (and will the correct DbContext be disposed at the correct time?)
Is this nesting considered bad practice? Should Method_A be refactored into:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
    }
    Method_B();
    using (var db = new SomeDbContext()) {
        //...do some more work here
    }
}
Thanks.
Your DbContext derived class is actually managing at least three things for you here:
the metadata that describes your database and your entity model,
the underlying database connection, and
a client side "cache" of entities loaded using the context, for change tracking, relationship fixup, etc. (Note that although I term this a "cache" for want of a better word, it is generally short-lived and exists just to support EF's functionality; it's not a substitute for proper caching in your application, if applicable.)
Entity Framework generally caches the metadata (item 1) so that it is shared by all context instances (or, at least, all instances that use the same connection string). So here that gives you no cause for concern.
As mentioned in other comments, your code results in using two database connections. This may or may not be a problem for you.
You also end up with two client caches (item 3). If you happen to load an entity from the outer context, then again from the inner context, you will have two copies of it in memory. This would definitely be confusing, and could lead to subtle bugs. This means that, if you don't want to use shared context objects, then your option 2 would probably be better than option 1.
If you are using transactions, there are further considerations. Having multiple database connections is likely to result in transactions being promoted to distributed transactions, which is probably not what you want. Since you didn't make any mention of db transactions, I won't go into this further here.
So, where does this leave you?
If you are using this pattern simply to avoid passing DbContext objects around in your code, then you would probably be better off refactoring MethodB to receive the context as a parameter. The question of how long-lived context objects should be comes up repeatedly. As a rule of thumb, create a new context for a single database operation or for a series of related database operations. (See, for example this blog post and this question.)
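For illustration, that refactoring might look like this (method names taken from the question):
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B(db); // share the same context, connection and change tracker
        //...do some more work here
    }
}

public void Method_B(SomeDbContext db) {
    //...do some work with the caller's context; don't dispose it here
}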
(As an alternative, you could add a constructor to your DbContext derived class that receives an existing connection. Then you could share the same connection between multiple contexts.)
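As a sketch of that alternative (assuming EF6, where DbContext has a constructor that accepts an existing DbConnection):
using System.Data.Common;
using System.Data.Entity;

public class SomeDbContext : DbContext {
    public SomeDbContext() { }

    // contextOwnsConnection: false leaves the shared connection open
    // when this context is disposed, so other contexts can keep using it.
    public SomeDbContext(DbConnection existingConnection)
        : base(existingConnection, contextOwnsConnection: false) { }
}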
One useful pattern is to write your own class that creates a context object and stores it as a private field or property. Then you make your class implement IDisposable and its Dispose() method disposes the context object. Your calling code news up an instance of your class, and doesn't have to worry about contexts or connections at all.
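A minimal sketch of that pattern (class and method names are hypothetical):
public class UnitOfWork : IDisposable {
    // The wrapper owns the context for its whole lifetime.
    private readonly SomeDbContext _context = new SomeDbContext();

    public void DoSomeWork() {
        //...use _context here
    }

    public void Dispose() {
        _context.Dispose();
    }
}

// Calling code never touches contexts or connections directly:
// using (var work = new UnitOfWork()) { work.DoSomeWork(); }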
When might you need to have multiple contexts active at the same time?
This can be useful when you need to write code that is multi-threaded. A database connection is not thread-safe, so you must only ever access a connection (and therefore an EF context) from one thread at a time. If that is too restrictive, you need multiple connections (and contexts), one per thread. You might find this interesting.
You can alter your code by passing the context to Method_B. If you do so, creating the second SomeDbContext will not be necessary.
There is a question and answer on Stack Overflow covering this:
Proper use of "Using" statement for datacontext
This is a bit of a late answer, but people may still be looking, so here is another way.
Create a class that takes care of disposing for you. In some scenarios the same function is called from different places in the solution; this way you avoid creating multiple instances of DbContext, and you can nest calls as deeply as you like.
Here is a simple example.
public class SomeContext : SomeDbContext
{
    // Tracks how many nested usings currently share this instance.
    protected int UsingCount = 0;

    public static SomeContext GetContext(SomeContext context)
    {
        if (context != null)
        {
            // Reuse the caller's context and count the extra "using".
            context.UsingCount++;
        }
        else
        {
            context = new SomeContext();
        }
        return context;
    }

    private SomeContext()
    {
    }

    protected bool MyDisposing = true;

    // Only the outermost "using" actually disposes the context;
    // nested usings just decrement the counter.
    protected override void Dispose(bool disposing)
    {
        if (UsingCount == 0)
        {
            base.Dispose(MyDisposing);
            MyDisposing = false;
        }
        else
        {
            UsingCount--;
        }
    }

    // Likewise, only the outermost scope saves; nested SaveChanges calls are no-ops.
    public override int SaveChanges()
    {
        if (UsingCount == 0)
        {
            return base.SaveChanges();
        }
        else
        {
            return 0;
        }
    }
}
Example of usage
public class ExampleNesting
{
    public void MethodA()
    {
        using (var context = SomeContext.GetContext(null))
        {
            // manipulate, save it; just do not call Dispose on the context yourself
            MethodB(context);
        }
        MethodB();
    }

    public void MethodB(SomeContext someContext = null)
    {
        using (var context = SomeContext.GetContext(someContext))
        {
            // manipulate, save it; just do not call Dispose on the context yourself
            // even more nested functions if you'd like
        }
    }
}
Simple and easy to use.
If you think that the number of database connections, and the cost of repeatedly opening new ones, is not an important problem, and you have no requirement to squeeze the best performance out of your application, everything is OK.
Your code works well: creating a DbContext has a low performance impact, the metadata is cached after first loading, and a connection to your database is only opened when the code needs to execute a query. With a little attention to performance and code design, I suggest building a context factory so that each instance of your application uses just one instance of each DbContext.
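A minimal sketch of such a factory (names are assumptions):
using System;

public static class ContextFactory {
    // One lazily created, application-wide instance of the context.
    private static readonly Lazy<SomeDbContext> _instance =
        new Lazy<SomeDbContext>(() => new SomeDbContext());

    public static SomeDbContext Current => _instance.Value;
}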
You can take a look at this link for more performance considerations
http://msdn.microsoft.com/en-us/data/hh949853
I have a domain object which has a collection of primitive values, which represent the primary keys of another domain object ("Person").
I have a Wicket component that takes IModel<List<Person>>, and allows you to view, remove, and add Persons to the list.
I would like to write a wrapper which implements IModel<List<Person>>, but which is backed by a PropertyModel<List<Long>> from the original domain object.
View-only is easy (Scala syntax for brevity):
class PersonModel(wrappedModel: IModel[List[Long]]) extends LoadableDetachableModel[List[Person]] {
  @SpringBean var dao: PersonDao = _

  def load: List[Person] = {
    // Look up the Person for each id
    wrappedModel.getObject().map { id: Long =>
      dao.getPerson(id)
    }
  }
}
But how might I write this to allow for adding and removing from the original List of Longs?
Or is a Model not the best place to do this translation?
Thanks!
You can do something like this:
class PersonModel extends Model<List<Person>> {
    private transient List<Person> cache;
    private IModel<List<String>> idModel;

    public PersonModel( IModel<List<String>> idModel ) {
        this.idModel = idModel;
    }

    @Override
    public List<Person> getObject() {
        if ( cache == null ) {
            cache = convertIdsToPersons( idModel.getObject() );
        }
        return cache;
    }

    @Override
    public void setObject( List<Person> ob ) {
        cache = null;
        idModel.setObject( convertPersonsToIds( ob ) );
    }
}
This isn't very good code, but it shows the general idea. One thing you need to consider is how this whole thing will be serialised between requests; you might be better off extending LoadableDetachableModel instead.
Another thing is the cache: it's there to avoid having to convert the list every time getObject() is called within a request. You may or may not need it in practice (depends on a lot of factors, including the speed of the conversion), but if you use it, it means that if something else is modifying the underlying collection, the changes may not be picked up by this model.
I'm not quite sure I understand your question and I don't understand the syntax of Scala.
But, to remove an entity from a list, you can provide a link that simply removes it using your dao. You must be using a repeater to populate your Person list, so each repeater entry will have its own Model, which can be passed to the deletion link.
Take a look at this Wicket example that uses a link with a repeater to select a contact. You just need to adapt it to delete your Person instead of selecting it.
As for modifying the original list of Longs, you can use the ListView.removeLink() method to get a link component that removes an entry from the backing list.
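For illustration, a hedged sketch of that approach (component ids and the surrounding page are hypothetical):
// Each row gets a link that removes its entry from the backing List<Long>.
ListView<Long> personIds = new ListView<Long>("personIds", idsModel) {
    @Override
    protected void populateItem(ListItem<Long> item) {
        item.add(new Label("id", item.getModel()));
        item.add(removeLink("remove", item)); // ListView.removeLink()
    }
};
personIds.setReuseItems(true); // keep row models stable across requests
add(personIds);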