I am using the MongoDB inbound channel adapter to retrieve data from MongoDB. Currently I am using the configuration below.
<int-mongo:inbound-channel-adapter
id="mongoInboundAdapter" collection-name="updates_IPMS_PRICING"
mongo-template="mongoTemplatePublisher" channel="ipmsPricingUpdateChannelSplitter"
query="{'flagged' : false}" entity-class="com.snapdeal.coms.publisher.bean.PublisherVendorProductUpdate">
<poller max-messages-per-poll="2" fixed-rate="10000"></poller>
</int-mongo:inbound-channel-adapter>
I have around 20 records in my database that match the query, but since max-messages-per-poll is set to 2, I expected to receive at most 2 records per poll.
Instead, I am getting all the records that match the query. Not sure what I am doing wrong.
Actually, I'd suggest raising a New Feature JIRA ticket for that query-expression to allow specifying an org.springframework.data.mongodb.core.query.Query builder, which has skip() and limit() options. From there your issue can be fixed like:
<int-mongo:inbound-channel-adapter
query-expression="new BasicQuery('{\'flagged\' : false}').limit(2)"/>
The mongo adapter is designed to return a single message containing a collection of query results per poll, so max-messages-per-poll makes no difference here.
max-messages-per-poll is used to short-circuit the poller: in your case the second poll is done immediately rather than waiting another 10 seconds, and after 2 polls we wait again.
In order to implement paging, you will need to use a query-expression instead of query and maintain some state somewhere that can be included in the query on each poll.
For example, if the documents have some value that increments, you can store that value in a bean and use it in the next poll's query to fetch the next page, as in the sketch below.
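For illustration only, a minimal sketch of such a state holder (the pagingState bean name and the seq field are hypothetical, not from the original configuration):
// Hypothetical state-holder, registered as a bean named "pagingState".
// The adapter could then use a SpEL expression such as:
//   query-expression="new BasicQuery('{''seq'': {''$gt'': ' + @pagingState.lastSeen + '}}').limit(2)"
public class PagingState {

    private volatile long lastSeen = -1;

    public long getLastSeen() {
        return this.lastSeen;
    }

    // Call this downstream after a page is processed, passing the highest
    // 'seq' value seen, so the next poll fetches the following page.
    public void advance(long newestSeq) {
        if (newestSeq > this.lastSeen) {
            this.lastSeen = newestSeq;
        }
    }
}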
Related
Our Java app saves its configurations in a MongoDB collection. When the app starts, it reads all the configurations from MongoDB and caches them in Maps. We would like to use the change stream API to also watch for updates to the configuration collection.
So, upon app startup, we would first like to get all configurations, and from then on watch for any further change.
Is there an easy way to execute the following atomically:
A find() that retrieves all configurations (documents)
Start a watch() that will send all further updates
By atomically I mean: without potentially missing any update (between 1 and 2, someone could update the collection with a new configuration).
To make sure I lose no update notifications, I found that I can use watch().startAtOperationTime(serverTime) (for MongoDB 4.0 or later), as follows:
Query the MongoDB server for its current time, using a command such as Document hostInfoDoc = mongoTemplate.executeCommand(new Document("hostInfo", 1))
Query for all interesting documents: List<C> configList = mongoTemplate.findAll(clazz);
Extract the server time from hostInfoDoc: BsonTimestamp serverTime = (BsonTimestamp) hostInfoDoc.get("operationTime");
Start the change stream configured with the saved server time ChangeStreamIterable<Document> changes = eventCollection.watch().startAtOperationTime(serverTime);
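Put together, a minimal sketch of these four steps (mongoTemplate, eventCollection, and clazz are assumed from the surrounding context):
// 1. Ask the server for its current operation time *before* reading the data.
Document hostInfoDoc = mongoTemplate.executeCommand(new Document("hostInfo", 1));

// 2. Load the current state of the collection.
List<C> configList = mongoTemplate.findAll(clazz);

// 3. Extract the server time captured before the find.
BsonTimestamp serverTime = (BsonTimestamp) hostInfoDoc.get("operationTime");

// 4. Stream every change that happened at or after that server time.
ChangeStreamIterable<Document> changes =
        eventCollection.watch().startAtOperationTime(serverTime);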
Since step 1 ends before step 2 starts, we know the documents returned by step 2 are at least as fresh as the state at that server time, and any updates that happened on or after that server time will be sent to us by the change stream. (Re-running redundant updates doesn't matter to me: I use a map as a cache, so an extra add/remove makes no difference, as long as the last action arrives.)
I think I could also use watch().resumeAfter(_idOfLastAddedDoc) (I didn't try it). I did not use that approach because of the following scenario: the collection is empty, and the first document is added after getting all (zero) documents but before starting the watch(). In that scenario I have no previous document _id to use as a resume token.
Update
Instead of using "hostInfo" to get the server time, which we couldn't use in our production environment, I ended up using "dbStats", like this:
Document dbStats = mongoOperations.executeCommand(new Document("dbStats", 1));
BsonTimestamp serverTime = (BsonTimestamp) dbStats.get("operationTime");
Can anyone please explain how the first, previous, next, and last sets of records can be used to query HTTP REST message data? What exactly does this do?
I found some information on the ServiceNow website, but I am not able to understand it.
Can we use this instead of the sysparm_limit/sysparm_offset technique to fetch records?
Yes, it's there for pagination on your side; ServiceNow returns those first/prev/next/last links in the response's Link header when a result set is paginated.
To get the first 10 records you can set sysparm_limit=10 and sysparm_offset=0. To get the next 10 records, set sysparm_limit=10 and sysparm_offset=10.
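As a rough sketch of offset-based paging against the Table API (the instance URL and the incident table are placeholders, and authentication headers are omitted):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient client = HttpClient.newHttpClient();
int limit = 10;
for (int offset = 0; offset < 30; offset += limit) {
    // Each iteration requests the next page of 10 records.
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.service-now.com/api/now/table/incident"
                    + "?sysparm_limit=" + limit + "&sysparm_offset=" + offset))
            .header("Accept", "application/json")
            .build();
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("offset " + offset + ": " + response.body());
}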
I am trying to send two separate sets of data from the same collection from server to client. Data is inserted into the collection at a set interval of 30 seconds. One set of data sent to the client must return all documents over the course of the current day on an hourly basis, while the other simply sends the most recent entry in the collection.
I have a graph that needs to display hourly data, as well as fields that need to display the most recent record every 30 seconds. However, I cannot seem to decouple these two data sets: the query for the most recent entry seems to always overwrite the query for the hourly data when accessing the data on the client.
So my question summed up is: how does one send two separate sets of data from the same collection from server to client, and then access these two sets independently on the client?
The answer is simple: you cannot!
The server always answers the client with the result set that the client asked for. So, if the client needs two separate (different) result sets, the client must fire two different queries: one requesting the hourly data and one requesting the last (newest) entry.
Use added, changed, removed to modify the results from the two queries so that they are "transformed" into different fields. https://docs.meteor.com/api/pubsub.html#Subscription-added
However, this is probably not your issue. You are almost certainly using the same string as the name argument of both Meteor.publish calls, or you are accidentally Meteor.subscribe-ing to the same Meteor.publish twice.
Make two separate Meteor.publish names, one for the most recent entry and one for the hourly data, and subscribe to each of them separately. The answer above claiming this cannot be done is incorrect.
Recently we started to use Couchbase. We are using Java spring-data-couchbase with Jersey to access Couchbase. Using the low-level Java SDK API we set an expiry time (TTL) on a particular document by its KEY (id), and it's working fine. The code is as follows.
// define couchbaseTemplate for lower-level access to the Java SDK
@Autowired
CouchbaseTemplate couchbaseTemplate;

// setExpiry updates the expiry of the document with the given doc ID
@Override
public void setExpiry(String key, int expN) throws RepositoryException {
    couchbaseTemplate.getCouchbaseClient().touch(key, expN);
}
The problem we face is that when we fetch a list of documents using a query, the list contains the expired documents, and when we try to access those documents from the list we find them to be null.
But if we execute the query after a while, the expired documents are no longer included in the list.
Example: with expN = 10 seconds, executing the query around 10 seconds after setting the TTL includes the expired documents.
Executing the query around 20 seconds after setting the TTL no longer includes them.
In the stale options we set Query.setStale(Stale.FALSE). We have also tried to manipulate Query.setIncludeDocs, but no luck. Any help?
Couchbase Server does expiries lazily. There are three ways an item can be expired:
When a document is accessed (get operation) the expiration value is checked
When the expiry pager runs
When the disk compaction process runs (only in Couchbase Server 3.0 and onwards)
As a result, views will not be updated until one of these three processes has happened.
For this use case you could simply do a range query against the view using the current time, so that it only returns documents that have not expired. This assumes the time on the cluster is the same as on the client, and that the view being used is this one:
function (doc, meta) {
    emit(meta.expiration, null);
}
The meta.expiration is an epoch timestamp, so the following query could be used:
String currentEpoch = String.valueOf((System.currentTimeMillis()/1000));
bucket.query(ViewQuery.from("designdoc", "myview").startKey(currentEpoch));
Please note that this will return all alive documents that have an expiration set.
If you want to do something more interesting with date formats, have a look at the Date and time selection section halfway down the View and query examples chapter in the Couchbase Server manual.
As the official Couchbase documentation says:
Detecting Expired Documents in Result Sets: If you are using views for indexing items from Couchbase Server, items that have not yet been removed as part of the expiry pager maintenance process will be part of a result set returned by querying the view. To exclude these items from a result set you should use the query parameter include_docs set to true.
For expired documents, if you set include_docs=true, Couchbase Server returns a result set indicating the document does not exist anymore. Specifically, the key that had expired but had not yet been removed by the cleanup process will appear in the result set as a row where "doc":null :
So, this is how Couchbase works with expired documents.
For your case, just filter out the rows where the doc is null; the rest will be your expected result.
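For example, with the 2.x Java SDK the filtering could look like this sketch (design document and view names taken from the earlier example; bucket is assumed):
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.view.ViewQuery;
import com.couchbase.client.java.view.ViewResult;
import com.couchbase.client.java.view.ViewRow;
import java.util.ArrayList;
import java.util.List;

// Query the view and keep only the rows whose document still exists.
List<JsonDocument> alive = new ArrayList<>();
ViewResult result = bucket.query(ViewQuery.from("designdoc", "myview").includeDocs());
for (ViewRow row : result) {
    JsonDocument doc = row.document(); // null for keys that expired but are still indexed
    if (doc != null) {
        alive.add(doc);
    }
}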
I need to perform a few operations (reads and writes) on my MongoDB without another process interrupting. It's for an online game, and when a user sends resources to another user the following steps are performed:
Check his resource value
Abort if it's not enough
Insert a resource transaction
Decrement his resource value
Increment the other ones resource value
I'm concerned that while checking whether the resources are sufficient, or while inserting the resource transaction, some other transaction may already have been inserted, making the values invalid. How can I make sure this part is executed in this order?
I can see two ways:
Use client side transactions to hold a "lock": http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
Or use versioning, whereby you hold a field with a $inc'd version number that gets updated every time you save and must be checked whenever you go to save. A good example is within Vermongo: https://github.com/thiloplanz/v7files/wiki/Vermongo
Those seem to be the two most plausible ways I see of getting this done.
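A rough sketch of that second, versioning approach (users is assumed to be a MongoCollection<Document>; the version and resources field names are illustrative): make the save conditional on the version you read, and retry if someone else saved in between.
import com.mongodb.client.MongoCollection;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;

// Read the current state, including its version number.
Document current = users.find(new Document("_id", userId)).first();
long version = current.getLong("version");
long newResources = current.getLong("resources") - amount;

// The update matches only if nobody bumped 'version' since our read.
UpdateResult res = users.updateOne(
        new Document("_id", userId).append("version", version),
        new Document("$set", new Document("resources", newResources))
                .append("$inc", new Document("version", 1L)));
if (res.getModifiedCount() == 0) {
    // Lost the race: re-read and retry.
}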
Transaction is an almost forbidden word when talking about Mongo. But you can perform steps 1, 2 and 4 as one atomic update with $inc, using the resource value as a condition, and then perform steps 3 and 5. You will not have support for rolling back a step if the following steps fail; see the sketch below.
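A minimal sketch of that conditional update (users and transactions are assumed MongoCollection<Document> handles; field names are illustrative): the decrement matches only when the sender's balance is sufficient, which collapses steps 1, 2 and 4 into a single atomic operation.
import org.bson.Document;

// Atomically check and decrement the sender's balance (steps 1, 2 and 4).
Document sender = users.findOneAndUpdate(
        new Document("_id", senderId).append("resources", new Document("$gte", amount)),
        new Document("$inc", new Document("resources", -amount)));

if (sender == null) {
    // Not enough resources: abort (step 2).
} else {
    // Record the transfer (step 3), then credit the receiver (step 5).
    transactions.insertOne(new Document("from", senderId)
            .append("to", receiverId).append("amount", amount));
    users.updateOne(new Document("_id", receiverId),
            new Document("$inc", new Document("resources", amount)));
}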
I am an engineer at Tokutek.
TokuMX is a MongoDB replacement server that uses the same protocol and drivers and supports native multi-statement transactions on non-sharded setups. What you want can be accomplished with a serializable transaction, which will take document-level locks on the documents you touch. This would be done something like:
> db.beginTransaction("serializable");
> if (resourcesInsufficient()) { db.rollbackTransaction(); }
> // insert and update
> db.commitTransaction()
Again, this is not supported in sharding but may be useful for your application. More details, features and limitations are discussed here.