Is there any way to trigger a REST API from MongoDB?
Below is my MongoDB document:
{
_id : "123",
"expiryTime" : "2020-01-30T00:00:00Z",
"status" : "NEW"
}
I have a REST API that marks the status of all documents whose expiryTime has been reached as "OLD".
How can I achieve this? Can MongoDB call an API?
MongoDB itself doesn't have a solution for this.
Alternative solutions:
Schedule tasks with Spring Boot: implement this logic in your own code as a scheduled task (see the sketch after this list).
UNIX: You may use crontab to schedule a command or REST API call to run periodically.
Windows: You may use Task Scheduler to schedule a command or REST API call to run periodically.
You may write a program (Python, Java) that does the following:
- Connect to MongoDB
- Query whether any data has expired as of the current date
- If yes, call the REST API for each expired document
-- Remove the expired data
- If no, finish the execution
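For the Spring Boot option, a minimal sketch could look like the following. The collection name "documents" is an assumption, the field names come from the document above, and you also need @EnableScheduling on a configuration class:

import java.time.Instant;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ExpiryScheduler {

    private final MongoTemplate mongoTemplate;

    public ExpiryScheduler(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Runs once a minute; expiryTime is assumed to be stored as an ISO-8601 string
    // (as in the question), so a lexicographic comparison against "now" works.
    @Scheduled(fixedRate = 60_000)
    public void markExpiredAsOld() {
        Query expired = new Query(Criteria.where("status").is("NEW")
                .and("expiryTime").lte(Instant.now().toString()));
        // Flip every expired NEW document to OLD in one bulk update
        // (or call your REST API for each matching document instead).
        mongoTemplate.updateMulti(expired, Update.update("status", "OLD"), "documents");
    }
}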
The Nexus repository server by Sonatype offers a classical REST API. When an operation is triggered through the REST API, the call returns immediately, indicating through its status code whether or not the operation was started successfully. I am looking for a way to detect whether and when a job finished successfully.
In my concrete case, I am starting a backup task that writes out configuration databases in a serialized format to disk:
curl -X POST "$mynexus/service/rest/v1/tasks/$task-id/run" -H "accept: application/json"
which returns a 204 "task was run" immediately.
However, minutes after that happens, a manual check indicates that the on-disk file created by that task is still growing. Of course, I could try watching the output of lsof until that task seems finished, but that would be highly impractical, require root access to the server and also break the REST design.
A similar question here has not received an answer since 2016, so I'll ask in a more general way, in the hope that the answer will be more generally applicable:
How can a REST client detect that an operation has completely finished on the server side when talking to a Sonatype Nexus 3.x series server?
If there is no such way, would you consider that an issue with Nexus or would you recommend to create a custom workaround?
Nexus 3 has a "get task" API endpoint that gives you various details about a specific task, including its currentState:
GET /service/rest/v1/tasks/{id}
Example API response, taken from the documentation linked below:
{
"id" : "0261aed9-9f29-447b-8794-f21693b1f9ac",
"name" : "Hello World",
"type" : "script",
"message" : null,
"currentState" : "WAITING",
"lastRunResult" : null,
"nextRun" : null,
"lastRun" : null
}
Reference: Nexus get task API endpoint documentation
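So after triggering the run, a client can poll that endpoint until the task no longer reports that it is running. A rough Java sketch follows; the base URL is a placeholder, and the state-string check is an assumption (a real client would parse the JSON and inspect currentState and lastRunResult):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NexusTaskWatcher {

    public static void main(String[] args) throws Exception {
        String nexus = "https://nexus.example.com";              // placeholder base URL
        String taskId = "0261aed9-9f29-447b-8794-f21693b1f9ac";  // id of the task you triggered

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(nexus + "/service/rest/v1/tasks/" + taskId))
                .header("accept", "application/json")
                // add an Authorization header here if the tasks API is not readable anonymously
                .GET()
                .build();

        // Poll until the task no longer reports that it is running.
        while (true) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            // Crude string check; a real client would parse the JSON and read
            // "currentState" (and "lastRunResult") properly.
            if (!body.contains("RUNNING")) {
                System.out.println("Task appears to be finished: " + body);
                break;
            }
            Thread.sleep(5_000);
        }
    }
}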
I am running Cadence with an externally running Cassandra, started with docker run -e CASSANDRA_SEEDS=10.x.x.x e ubercadence/server:, and it is running successfully.
Azure Cosmos DB says that any system running on Cassandra can use Azure Cosmos DB through the provided Cosmos Cassandra API by modifying the client connection creation code; for example, the Go app sample code:
func GetSession(cosmosCassandraContactPoint, cosmosCassandraPort, cosmosCassandraUser, cosmosCassandraPassword string) *gocql.Session {
	clusterConfig := gocql.NewCluster(cosmosCassandraContactPoint)
	port, err := strconv.Atoi(cosmosCassandraPort)
	// (error handling elided in the sample)
	clusterConfig.Authenticator = gocql.PasswordAuthenticator{Username: cosmosCassandraUser, Password: cosmosCassandraPassword}
	clusterConfig.Port = port
	// Cosmos DB's Cassandra API expects TLS and CQL protocol version 4
	clusterConfig.SslOpts = &gocql.SslOptions{Config: &tls.Config{MinVersion: tls.VersionTLS12}}
	clusterConfig.ProtoVersion = 4
	session, err := clusterConfig.CreateSession()
	...
	return session
}
On my end, I can connect the external Cassandra's cqlsh (which Cadence uses for persistence) to Azure Cosmos DB and can create a keyspace and tables in Azure Cosmos DB. However, when I run the Cadence server, all new tables are still created on the local Cassandra itself (instead of Azure Cosmos DB); presumably Cadence is connected to the local Cassandra only.
So there are basically two questions:
1. Since Cadence is written in Go, can we modify the source code to establish a connection to Azure Cosmos DB?
2. Or can we pass the Cosmos Cassandra host, port, username, and password while running Cassandra and Cadence separately (docker run -e CASSANDRA_SEEDS=10.x.x.x e ubercadence/server:)?
cosmosCassandraContactPoint: xyz.cassandra.cosmos.azure.com
cosmosCassandraPort: 10350
cosmosCassandraUser: xyz
cosmosCassandraPassword: xyz
It's really exciting that Azure Cosmos DB's Cassandra API now supports LWT!
I took a quick look at the docs.
I think it may not work directly, because Cosmos doesn't support LoggedBatch, and Cadence uses logged batches.
However, I think it's probably okay for Cadence to use unlogged batches instead, because all the operations are always within a single partition.
From the DataStax docs:
Single partition batch operations are atomic automatically, while multiple partition batch operations require the use of a batchlog to ensure atomicity
This means using an unlogged batch should behave the same for Cadence (though I believe Cadence chose logged batches to be on the safe side).
It should work if we change the code slightly in the Cadence Cassandra plugin; a rough illustration follows below.
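This is not Cadence's actual Go/gocql code; it is just the logged-vs-unlogged distinction illustrated with the DataStax Java driver, with a made-up table and column names. The only change is the batch type, and a batch that stays within a single partition remains atomic:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class BatchTypeExample {

    static void persist(Session session, long shardId, String runId) {
        // Was BatchStatement.Type.LOGGED; UNLOGGED skips the batchlog.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        // Both statements hit the same partition (same shard_id), so the batch
        // is still applied atomically even without the batchlog.
        batch.add(new SimpleStatement(
                "UPDATE executions SET state = ? WHERE shard_id = ? AND run_id = ?",
                "RUNNING", shardId, runId));
        batch.add(new SimpleStatement(
                "UPDATE executions SET next_event_id = ? WHERE shard_id = ? AND run_id = ?",
                42L, shardId, runId));
        session.execute(batch);
    }
}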
If your Spring Boot application uses MongoDB, Spring Boot will automatically provide a health check endpoint that includes whether MongoDB is "healthy". But what exactly does this check? What does the check showing that MongoDB is "up" mean? What kinds of faults can cause it to be "down"?
The MongoHealthIndicator does this:
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
Document result = this.mongoTemplate.executeCommand("{ buildInfo: 1 }");
builder.up().withDetail("version", result.getString("version"));
}
That is, it attempts to execute the MongoDB command buildInfo: 1. That command "returns a build summary", which is mostly version IDs.
The health check therefore indicates that your application can connect to MongoDB and execute a simple command in a reasonable time. MongoDB probably incorporates all the information it needs to respond to that command in the executable itself; it does not need to perform a query of the data store. So the check is unlikely to tell you whether data access will work, or whether it will be performant.
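If you want the health endpoint to actually exercise data access, a sketch of a custom HealthIndicator could look like this (the collection name "orders" is just a placeholder for one of your own collections):

import org.bson.Document;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Component;

@Component
public class MongoDataAccessHealthIndicator implements HealthIndicator {

    private final MongoTemplate mongoTemplate;

    public MongoDataAccessHealthIndicator(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public Health health() {
        try {
            // Issue a real (cheap) read against a known collection instead of buildInfo.
            Document first = mongoTemplate.findOne(new Query(), Document.class, "orders");
            return Health.up().withDetail("sampleDocumentFound", first != null).build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}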
I have set up an Azure Function with an Azure Cosmos DB (document) output. The Cosmos database is configured to use the MongoDB API.
I added the following simple code to try to add a new document:
module.exports = function (context, eventHubMessages) {
    context.bindings.document = {
        text : "Test data"
    }
    context.done();
};
When I do a test run I get success, but when I try to open the collection using Studio 3T I get:
Query failed with error code 1 and error message 'Unknown server error occurred when processing this request.'
When I use the same code to write to a DocumentDB database I get success and I can view the data in Azure. Do you need to use a different API to save data to MongoDB?
The DocumentDB output binding uses the DocumentDB API to connect to the database and save information. But your database (from what you are saying) is using the MongoDB API; they are different APIs (the links point to the docs).
As you surely know, MongoDB has some requirements (like the existence of an "_id" attribute) that are covered when you connect to the database from a MongoDB client (either an SDK or a third-party client), but since you are communicating through the DocumentDB API, it's probably failing to fulfill those requirements.
You might want to try using the Mongo driver in the function to connect to your Cosmos DB database through the MongoDB API; a rough sketch follows below.
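For illustration, here is the same write done directly through the MongoDB API, shown with the MongoDB Java driver for brevity (the Node.js mongodb driver works the same way inside the function; the connection string is a placeholder for the one on your Cosmos DB account, and the database/collection names are assumptions):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class CosmosMongoWrite {
    public static void main(String[] args) {
        // Placeholder: use the MongoDB connection string from your Cosmos DB account.
        String uri = "<your Cosmos DB MongoDB connection string>";
        try (MongoClient client = MongoClients.create(uri)) {
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("mycollection");
            // The driver adds an ObjectId "_id" automatically when none is supplied,
            // which satisfies the requirement mentioned above.
            collection.insertOne(new Document("text", "Test data"));
        }
    }
}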
Here is the Spring Batch job design I am recommending to my client:
The UI application calls a REST API on the API server. The REST API creates a unique id and sends the unique id, job params, and job name as a JMS message to a batch server. The REST API returns the unique token id to the UI.
A JMS message listener on the batch server creates a new Spring Batch job instance, sets the unique id as a job param, and runs the job.
The UI keeps polling the status REST API, passing the unique token.
The REST API looks up the job execution id for that unique id in the job param table and returns the job status to the UI.
Please advise: is there any way to create the job on the API server so that the job instance and params are created, but no steps execute, and we know the job execution id?
Then, on the batch server, given that job execution id as input, we can rerun/resume the job.
I think you could use a JobRequest table to store the initial request with the initial unique token. You could pass this token to the JMS server along with the batch request. As soon as the JMS batch server creates/starts the request, it should also update the JobRequest table and insert the actual Batch JobInstance id.
So clients can ask for the status using the unique token, and the REST API server can use that token to look up the job id and get information about the job's progress. A rough sketch of the API-server side is below.
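This is only a sketch of that idea; the JOB_REQUEST table, its columns, the queue name, and the message format are all made up:

import java.util.UUID;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.*;

@RestController
public class JobRequestController {

    private final JdbcTemplate jdbcTemplate;
    private final JmsTemplate jmsTemplate;

    public JobRequestController(JdbcTemplate jdbcTemplate, JmsTemplate jmsTemplate) {
        this.jdbcTemplate = jdbcTemplate;
        this.jmsTemplate = jmsTemplate;
    }

    @PostMapping("/jobs/{jobName}")
    public String submit(@PathVariable String jobName, @RequestBody String jobParams) {
        String token = UUID.randomUUID().toString();
        // Record the request; the Batch JobInstance id stays null until the batch server starts the job.
        jdbcTemplate.update(
                "insert into JOB_REQUEST (TOKEN, JOB_NAME, JOB_PARAMS, STATUS) values (?, ?, ?, 'REQUESTED')",
                token, jobName, jobParams);
        // Hand the work off to the batch server.
        jmsTemplate.convertAndSend("batch.job.requests", token + "|" + jobName + "|" + jobParams);
        return token;
    }

    @GetMapping("/jobs/status/{token}")
    public String status(@PathVariable String token) {
        // The batch server updates STATUS (and the JobInstance id) as the job progresses.
        return jdbcTemplate.queryForObject(
                "select STATUS from JOB_REQUEST where TOKEN = ?", String.class, token);
    }
}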
I was able to do this by providing my own incrementer factory while creating the Job Repository bean. Now I can get the job execution id from the sequence at the UI layer and use the same id while running the job. The job execution id is stored in a thread local on the batch server. I am overriding the getNextKey() method of OracleSequenceMaxValueIncrementer.
I was debugging the Spring Batch code and came up with this approach; I am still working it out to confirm that it works:
My client uses an Oracle DB. Spring Batch provides this property in the batch-oracle.properties file:
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.OracleSequenceMaxValueIncrementer
I will override the batch.database.incrementer.class property with a client-specific incrementer class, created by subclassing OracleSequenceMaxValueIncrementer and overriding its getNextKey() method.
On the API server, I will only call the job execution sequence, get the id, and pass it along in the JMS message.
On the batch server, the job execution id is stored in a thread local. The getNextKey() method checks whether the incrementer name is the job execution sequence; if so, it takes the id from the thread local, otherwise it generates it the way Spring Batch normally does. getNextKey() is called by Spring Batch when it creates the job execution. For the other tables' sequences this causes no issue, since the incrementer name will be different. A sketch is below.
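A minimal sketch of that idea (the class name is hypothetical, the thread local is assumed to be populated by the JMS listener, and "BATCH_JOB_EXECUTION_SEQ" is the job execution sequence name in the standard Spring Batch Oracle schema):

import javax.sql.DataSource;

import org.springframework.jdbc.support.incrementer.OracleSequenceMaxValueIncrementer;

public class PreAllocatedIdIncrementer extends OracleSequenceMaxValueIncrementer {

    // Holds the execution id received in the JMS message for the current thread.
    public static final ThreadLocal<Long> PRE_ALLOCATED_ID = new ThreadLocal<>();

    private static final String JOB_EXECUTION_SEQ = "BATCH_JOB_EXECUTION_SEQ";

    public PreAllocatedIdIncrementer(DataSource dataSource, String incrementerName) {
        super(dataSource, incrementerName);
    }

    @Override
    protected long getNextKey() {
        Long preAllocated = PRE_ALLOCATED_ID.get();
        // Only the job execution sequence honours the pre-allocated id;
        // every other sequence behaves exactly as before.
        if (preAllocated != null && JOB_EXECUTION_SEQ.equals(getIncrementerName())) {
            return preAllocated;
        }
        return super.getNextKey();
    }
}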