Example of mongo-scala-driver transaction - mongodb

MongoDB 4.0 added multi-document transaction support.
Mongo-scala-driver (http://mongodb.github.io/mongo-scala-driver/2.4/) supports MongoDB 4.0, but I cannot find any example of how to use transactions from Scala.
Can anybody provide a link or a code snippet?
P.S: There is a synchronous transaction example on the official MongoDB site, but I need an example of an async, non-blocking transaction in Scala.

There is an example in the transactions and drivers documentation under the Scala tab.
There are a few extra caveats / gotchas for Scala that are covered in the example code:
Each observable within the transaction must be passed the ClientSession.
Each observable must be subscribed to for anything to happen (they are cold observables).
Transactions can be retried if they meet the criteria; an example is provided in the code.
There is no transaction-specific Observable abstraction as of version 2.4.0, but there are plans to simplify the API in the future.
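For reference, here is a condensed sketch of the pattern from that documentation (assuming an hr database with employees and events collections; the names and the update are illustrative, not prescriptive):

import org.mongodb.scala._
import org.mongodb.scala.model.{Filters, Updates}
import org.mongodb.scala.result.UpdateResult

// Run the updates inside a transaction, passing the ClientSession to every
// operation and subscribing to each observable so it actually executes.
def updateEmployeeInfo(database: MongoDatabase,
                       sessionObservable: SingleObservable[ClientSession]): Observable[ClientSession] =
  sessionObservable.map(clientSession => {
    val employees = database.getCollection("employees")
    val events = database.getCollection("events")

    clientSession.startTransaction()
    employees.updateOne(clientSession, Filters.equal("employee", 3), Updates.set("status", "Inactive"))
      .subscribe((res: UpdateResult) => println(res))
    events.insertOne(clientSession, Document("employee" -> 3, "status" -> "Inactive"))
      .subscribe((res: Completed) => println(res))
    clientSession
  })

// Retry the commit while the server reports UnknownTransactionCommitResult.
def commitAndRetry(observable: Observable[Completed]): Observable[Completed] =
  observable.recoverWith({
    case e: MongoException if e.hasErrorLabel(MongoException.UNKNOWN_TRANSACTION_COMMIT_RESULT_LABEL) =>
      println("UnknownTransactionCommitResult, retrying commit ...")
      commitAndRetry(observable)
  })

// Retry the whole transaction on TransientTransactionError.
def runTransactionAndRetry(observable: Observable[Completed]): Observable[Completed] =
  observable.recoverWith({
    case e: MongoException if e.hasErrorLabel(MongoException.TRANSIENT_TRANSACTION_ERROR_LABEL) =>
      println("TransientTransactionError, retrying transaction ...")
      runTransactionAndRetry(observable)
  })

def updateEmployeeInfoWithRetry(client: MongoClient): Observable[Completed] = {
  val database = client.getDatabase("hr")
  val commitObservable = updateEmployeeInfo(database, client.startSession())
    .flatMap(clientSession => clientSession.commitTransaction())
  runTransactionAndRetry(commitAndRetry(commitObservable))
}

Note that nothing runs until the returned observable is itself subscribed to, e.g. updateEmployeeInfoWithRetry(client).subscribe((_: Completed) => println("done")).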

Related

Can I assume that all databases will return a promise?

I am designing a set of unit and integration tests with a friend of mine, and a doubt came up. We think we know the answer, or at least what is more likely to be true, but we would like to hear your thoughts.
We are designing a test for MongoDB, and we expect to receive a promise after asking to save a document. So far so good.
What if we change the database? Can we assume for sure that all databases will return a promise when queried?
I guess the _id depends from database to database; we are using MongoDB's _id for testing reasons.
We are using the following mock in Jest:
// create does exist on the service; at the moment of testing it is empty, just a placeholder
create: jest.fn().mockImplementation((cat: CreateCatDto) =>
  Promise.resolve({ _id: 'a uuid', ...cat })
)
The idea is to design backends that do not depend on the database, but for testing and development reasons we are using MongoDB and PostgreSQL.
Keep in mind that the term promise doesn't exist as a concept for all databases, so a conclusive answer to your question for all databases is not possible.
That being said, if by promise you mean the primary key or identity (in general database terms) returned after inserting new data, then the answer is no, there is no guarantee, not even in PostgreSQL, that you'll be able to do that. It's possible for tables, even in PostgreSQL, to exist without those constraints.
Otherwise, if by promise you mean the concept in a procedural or functional language like JavaScript (as the example code in your update indicates), then yes, you should always receive a promise object if your application code is using asynchronous calls appropriately.
But that holds regardless of what the asynchronous call was to: a database directly (whatever database system that is), an API endpoint, or another piece of application code. Also, in that case, your question (or any follow-up questions) would be better suited for StackOverflow.com.
Can I assume that all databases return a promise?
No. Most if not all database wire protocols are synchronous, meaning the client blocks until it gets a response. Even databases that expose some sort of RESTful API are synchronous, because HTTP itself is a request-response protocol.
Some client-side drivers may wrap this synchronous logic and exhibit asynchronous behaviour by returning something like JavaScript Promises or Java Futures, but that is entirely up to the driver implementation you choose to use.
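The mongo-scala-driver mentioned above is one example of such a wrapper: its operations return cold observables that can be converted to standard Scala Futures, the JVM analogue of a JavaScript Promise. A minimal sketch (the connection string and collection name are illustrative):

import org.mongodb.scala._

val client = MongoClient("mongodb://localhost")
val cats: MongoCollection[Document] = client.getDatabase("test").getCollection("cats")

// insertOne returns a cold observable; toFuture() subscribes to it and hands
// back a standard Scala Future, much like an async JavaScript driver hands
// back a Promise.
val saved = cats.insertOne(Document("name" -> "Whiskers")).toFuture()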

How to connect Esper CEP engine with DDS

I believe I am missing something related to the DDS concept. My idea is to use an EsperIO adapter, data flow, or plug-in to insert incoming events from DDS into an Esper engine, but I can't see how.
Can somebody help? (Thanks in advance)
The steps would be:
1) receive event data from DDS, i.e. via a Java DataReader
2) build an event object that Esper can understand; use a JavaBean-style class, for example
3) send the event object into Esper
There is no need to build an adapter or use EsperIO. The API for feeding events into Esper is really simple. For API code see http://www.espertech.com/esper/longer-case-study/ or the Esper docs.
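A minimal sketch of steps 2 and 3 (assuming a pre-8 Esper API; SensorEvent and its fields are illustrative, and the DDS listener callback is whatever your DDS vendor provides):

import com.espertech.esper.client.{Configuration, EPServiceProvider, EPServiceProviderManager}
import scala.beans.BeanProperty

// JavaBean-style event class so Esper can read properties through getters.
class SensorEvent(@BeanProperty var sensorId: String,
                  @BeanProperty var value: Double)

object DdsToEsper {
  private val config = new Configuration()
  config.addEventType("SensorEvent", classOf[SensorEvent])
  val epService: EPServiceProvider = EPServiceProviderManager.getDefaultProvider(config)

  // Call this from your DDS DataReader listener once a sample has been read.
  def onDdsSample(sensorId: String, value: Double): Unit =
    epService.getEPRuntime.sendEvent(new SensorEvent(sensorId, value))
}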

Spring Batch: reuse existing service as a reader

I want to reuse an existing, transactional, paginated service class, which retrieves the items from a database using JPA, as a reader inside a Spring Batch job. I want to do that instead of using the JpaPagingItemReader directly, basically because the JPA query is complex to build and the service already provides this functionality.
My question is what I should take into account when developing the Spring Batch adapter over this service. Although the reference documentation http://docs.spring.io/spring-batch/trunk/reference/html/readersAndWriters.html#pagingItemReaders has a section on reusing existing services, it doesn't say anything about the constraints, if there are any, of using such a transactional service.
Now, I looked at the JpaPagingItemReader as an example for building the reader, and I came up with a couple of questions I couldn't find answers for, neither in the documentation nor on Stack Overflow, although this post https://stackoverflow.com/a/26549831/4473261 helped.
The first thing I noticed is that the JpaPagingItemReader uses a new transaction for reading a page of data. The above post says that this new transaction is needed "so that features like retry and skip can be correctly performed". I have also found this article on the matter, https://blog.codecentric.de/en/2012/03/transactions-in-spring-batch-part-3-skip-and-retry/, which says that "when a skippable exception occurs during reading, we just increase the skip count and keep the exception for a later call on the onSkipInRead method of the SkipListener, if configured. There's no rollback". So I assume that the reader has to read the records in a new transaction so that, if the transaction started when the processing of the chunk began is rolled back, the reader is not affected. I am wondering if this is true and, if so, whether my adapter should create a new transaction, invoke the service inside that transaction, and then commit the transaction, similarly to how the JpaPagingItemReader does it. If that's true, though, I wonder why the framework doesn't provide a template that creates the transaction, delegates the actual data retrieval to the service, and then commits the transaction.
Greetings,
Cristi
From a reader perspective, there really isn't much to be concerned about. You can see in our JmsItemReader, which obviously works with a transactional store, that we don't take any additional precautions within the ItemReader itself.
What really matters is how you configure your step. When configuring your step, you'll need to mark the reader as transactional so that Spring Batch handles rollback correctly. When Spring Batch reads items in a fault-tolerant step, the default behavior is to buffer them so that they won't be re-read on failure (retry, skip, etc.). However, since items read from a transactional store are tied to the transaction (and therefore reset when a rollback occurs), you need to tell Spring Batch not to buffer the items as they are read.
To mark the ItemReader as transactional, you'll set the not-quite-well-named flag is-reader-transactional-queue to true. You can read more about configuring steps and transactions in the documentation here: http://docs.spring.io/spring-batch/trunk/reference/html/configureStep.html
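As for the adapter itself, a custom ItemReader can simply delegate to the existing service and keep the paging state. A minimal sketch (the service and entity names are hypothetical, not from the question):

import org.springframework.batch.item.ItemReader

case class Customer(id: Long, name: String)

// Hypothetical existing transactional, paginated service.
trait CustomerService {
  def fetchPage(page: Int, pageSize: Int): java.util.List[Customer]
}

class CustomerServiceItemReader(service: CustomerService, pageSize: Int = 100)
    extends ItemReader[Customer] {

  private var page = 0
  private var buffer: java.util.Iterator[Customer] = java.util.Collections.emptyIterator[Customer]()

  // Spring Batch calls read() once per item; returning null signals end of data.
  override def read(): Customer = {
    if (!buffer.hasNext) {
      buffer = service.fetchPage(page, pageSize).iterator()
      page += 1
    }
    if (buffer.hasNext) buffer.next() else null
  }
}

If you use Java-based step configuration rather than XML, the equivalent of the is-reader-transactional-queue flag is readerIsTransactionalQueue() on the step builder.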

Orion Context Broker (0.17.0) subscriptions information

Is there any queryContext request to get the context subscription information of an entity? Or is it still on the roadmap?
The Orion roadmap includes an operation to get all subscriptions and another to get the details of a given subscription (by subscription ID). I think that is not exactly what you mean (if I understand correctly, you refer to an operation that, given an entity, returns all the subscriptions that refer to that entity), but you could combine the "get all subscriptions" operation with the entity::id filter to get the same effect.
Both operations and the filter are still on the roadmap. This tracker shows the implementation status and is updated each time a new version is planned and released.
Alternatively, while the API operations get implemented (see the other answer to this question), you can get subscription information directly from the Orion database as a workaround. Looking at the description of the subscriptions collection in the Orion Administration manual, and with a basic knowledge of MongoDB query operations, you can for example get all the subscriptions to entity ID "foo" with the following query:
db.csubs.find({"entities.id": "foo"})

MongoDB critical section

I need to perform a few operations (reads and writes) on my MongoDB without another process interrupting. It's for an online game, and when a user sends resources to another user the following steps are performed:
Check his resource value
Abort if it's not enough
Insert a resource transaction
Decrement his resource value
Increment the other ones resource value
I'm concerned that, while checking whether he has enough or inserting the resource transaction, some other transaction will already have been inserted and the values will become invalid. How can I make sure that this part is executed in this order?
I can see two ways:
Use client-side transactions to hold a "lock": http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
Or use versioning (see the sketch below), whereby you hold a field with a version number that gets $inc'd every time you save and must be queried whenever you go to save. A good example is within Vermongo: https://github.com/thiloplanz/v7files/wiki/Vermongo
Those seem to be the two most plausible ways I see of getting this done.
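A minimal sketch of the versioning idea with the Scala driver (collection and field names are illustrative): make every write conditional on the version you read, and bump the version, so a concurrent writer makes your update match zero documents.

import org.mongodb.scala._
import org.mongodb.scala.model.Filters.{and, equal}
import org.mongodb.scala.model.Updates.{combine, inc, set}

def saveWithVersion(users: MongoCollection[Document], userId: String,
                    expectedVersion: Long, newResources: Long) =
  users.updateOne(
    and(equal("_id", userId), equal("version", expectedVersion)),
    combine(set("resources", newResources), inc("version", 1))
  ).map(result => result.getModifiedCount == 1) // false: someone got there first; re-read and retry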
Transaction is an almost forbidden word when talking about Mongo, but you can perform steps 1, 2 and 4 using a single atomic update, with $inc and the resource value as a condition, and then perform steps 3 and 5. You will not have support for rolling back a step if the next steps fail.
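For example, with the Scala driver (collection and field names are illustrative), the conditional decrement only matches when the sender has enough resources, which covers steps 1, 2 and 4 atomically:

import org.mongodb.scala._
import org.mongodb.scala.model.Filters.{and, equal, gte}
import org.mongodb.scala.model.Updates.inc

def debitSender(users: MongoCollection[Document], senderId: String, amount: Long) =
  users.updateOne(
    and(equal("_id", senderId), gte("resources", amount)), // step 1: check the value
    inc("resources", -amount)                              // step 4: decrement it
  ).map(result => result.getModifiedCount == 1)            // false: not enough, abort (step 2)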
I am an engineer at Tokutek.
TokuMX is a MongoDB replacement server that uses the same protocol and drivers and supports native multi-statement transactions on non-sharded setups. What you want can be accomplished with a serializable transaction, which will take document-level locks on the documents you touch. This would be done something like:
> db.beginTransaction("serializable");
> if (resourcesInsufficient()) { db.rollbackTransaction(); }
> // insert and update
> db.commitTransaction()
Again, this is not supported with sharding, but it may be useful for your application. More details, features and limitations are discussed here.