How to manually trigger a CloudWatch rule with a ScheduleExpression (10 days) - aws-cloudformation

I have to set up an "AWS::Events::Rule" in CloudWatch with a ScheduleExpression of 10 days, and write some code to test it, but I cannot change the "10 days" to 1 minute or call the Lambda function directly. I know that we can call PutEvents to trigger a rule with an EventPattern,
but I don't know how to do that for a ScheduleExpression.
Any comment is welcome, thanks.

To my knowledge there's no way for you to manually trigger the rule and make it execute the Lambda function. What you can do is change the frequency from 10 days to 1 minute, let it execute, and once it has executed switch it back to 10 days.
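For example, with the CLI (the rule name here is a placeholder), tightening the schedule for a test run could look like this:
aws events put-rule --name "my-rule" --schedule-expression "rate(1 minute)"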

I also ran into this problem. I checked the AWS documentation and it says that a rule can contain either an EventPattern or a ScheduleExpression, but not both. And in order to call aws events put-events we must provide a Source for an EventPattern match. So I think we cannot manually trigger a scheduled event.
Not sure what your use case is, but I have decided to move to the Invoke API of the AWSLambda client.
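For reference, here is a minimal sketch of that approach using the Node.js SDK (the function name and payload below are placeholders, not from the question). Invoking the function directly sidesteps the rule entirely:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

lambda.invoke({
  FunctionName: 'my-scheduled-function', // placeholder: your Lambda's name
  // emulate a scheduled-event payload in case the handler inspects it
  Payload: JSON.stringify({ source: 'aws.events', 'detail-type': 'Scheduled Event' })
}, (err, data) => {
  if (err) console.error(err);
  else console.log('StatusCode:', data.StatusCode);
});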

SDK Approach:
Yes, you can use the putRule SDK function to update the ScheduleExpression of the CloudWatch rule, as in the snippet below:
const AWS = require('aws-sdk');
const cloudWatchEvents = new AWS.CloudWatchEvents();

const params = {
  Name: timezoneCronName, /* required: the rule name, not the full ARN */
  ScheduleExpression: cronExpression
};
return cloudWatchEvents.putRule(params).promise().then((response) => {
  console.debug(`CloudWatch Events response`, response);
  return response;
}).catch((error) => {
  console.error(`Error occurred while updating CloudWatch Event: ${error.message}`);
  throw error;
});
See the official AWS SDK docs for putRule.
CLI Approach:
Run the following command through the CLI:
aws events put-rule --name "Your rule name (not the full ARN)" --schedule-expression "cron(0/1 * * * ? *)"
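Once your test run has fired, you can switch the rule back to the original 10-day schedule the same way:
aws events put-rule --name "Your rule name (not the full ARN)" --schedule-expression "rate(10 days)"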

Related

Cadence Java client, unable to get history of workflow executions

I am following the idea mentioned in this answer and trying this:
workflowTChannel.ListClosedWorkflowExecutions(ListClosedWorkflowExecutionsRequest().apply {
    domain = "valid-domain-name"
    startTimeFilter = StartTimeFilter().apply {
        setEarliestTime(Instant.parse("2023-01-01T00:00:00.000Z").toEpochMilli())
        setLatestTime(Instant.parse("2024-01-01T00:59:59.999Z").toEpochMilli())
    }
})
However, the result is always an empty list.
Fetching the list via the UI works fine at the same time.
Using PostgreSQL and a local test installation without advanced visibility.
UPD: I debugged Cadence locally and found that it expects nanoseconds instead of milliseconds, so the correct parameters must be prepared like this:
Instant.parse("2023-01-01T00:00:00.000Z").toEpochMilli() * 1000000
My guess is that you are using seconds while Cadence expects nanosecond timestamps.

Google Cloud Storage object finalize event triggered multiple times

Scenario
I have a couple of Google Cloud Functions triggered by a Google Cloud Storage object.finalize event. For that I'm using two buckets and a transfer job with "Synchronization options: Overwrite objects at destination", which copies a single file from one source bucket to the destination one every day. The source bucket is the same for both functions and the destination buckets are different.
Problem
Most of the time it works as expected, but sometimes I see multiple events at almost the same time. Usually I see 2 duplicates, but once there were 3. I logged the event payload, but it is always the same.
More details
Here is an example of multiple log entries
Question
Could this be a known issue with Google Cloud Storage?
If not, then most probably something is wrong in my code.
I'm using the following project structure:
/functions
|--/foo-code
|  |--executor.js
|  |--foo.sql
|--/bar-code
|  |--executor.js
|  |--bar.sql
|--/shared-code
|  |--utils.js
|--index.js
|--package.json
index.js
let foo;
let bar;

exports.foo = (event, callback) => {
  console.log(`event ${JSON.stringify(event)}`);
  foo = foo || require(`./foo-code/executor`);
  foo.execute(event, callback);
};

exports.bar = (event, callback) => {
  console.log(`event ${JSON.stringify(event)}`);
  bar = bar || require(`./bar-code/executor`);
  bar.execute(event, callback);
};
./foo-code/executor.js
const utils = require('../shared-code/utils.js');

exports.execute = (event, callback) => {
  // run BigQuery foo.sql statement
};
./bar-code/executor.js
const utils = require('../shared-code/utils.js');

exports.execute = (event, callback) => {
  // run BigQuery bar.sql statement
};
And finally deployment:
foo background function with specific bucket trigger:
gcloud beta functions deploy foo \
--source=https://<path_to_repo>/functions \
--trigger-bucket=foo-destination-bucket \
--timeout=540 \
--memory=128MB
bar background function with specific bucket trigger:
gcloud beta functions deploy bar \
--source=https://<path_to_repo>/functions \
--trigger-bucket=bar-destination-bucket \
--timeout=540 \
--memory=128MB
To me it looks like the most likely cause is the multiple deployments (only the trigger-bucket flag differs). But the weird thing is that the above setup works most of the time.
The normal behavior of Cloud Functions is that events are delivered at least once and the background functions are invoked accordingly, which means that, rarely, spurious duplicates may occur.
To make sure that your function behaves correctly on retried execution attempts, you should make it idempotent by implementing it so that an event results in the desired results (and side effects) even if it is delivered multiple times.
Check the documentation for guidelines on making a background function idempotent.
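As an illustration (not the asker's code), here is a minimal sketch of one such idempotency guard for the executors above. It assumes the classic background-function payload exposes event.eventId and uses the @google-cloud/bigquery client; deriving the BigQuery job ID from the event ID makes a redelivered event attempt to create the very same job, which BigQuery rejects as a duplicate:

// Sketch: run the query at most once per storage event.
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

exports.execute = (event, callback) => {
  // Same event, even if delivered twice, yields the same job ID.
  const jobId = `foo-${event.eventId}`;
  bigquery
    .createQueryJob({ jobId, query: 'SELECT 1 /* contents of foo.sql */' })
    .then(() => callback())
    .catch((err) => {
      // 409 "Already Exists" means a duplicate delivery already started this job.
      if (err.code === 409) return callback();
      callback(err);
    });
};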

Any way to ensure frisby.js test API calls go in sequential order?

I'm trying a simple sequence of tests on an API:
Create a user resource with a POST
Request the user resource with a GET
Delete the user resource with a DELETE
I have a single frisby test spec file, mytest_spec.js. I've broken the test into 3 discrete steps, each with its own toss(), like:
f1 = frisby.create("Create");
f1.post(post_url, {user_id: 1});
f1.expectStatus(201);
f1.toss();
// stuff...
f2 = frisby.create("Get");
f2.get(get_url);
f2.expectStatus(200);
f2.toss();
// stuff...
f3 = frisby.create("Delete");
f3.delete(delete_url);
f3.expectStatus(200);
f3.toss();
Pretty basic stuff, right? However, as far as I can tell there is no guarantee they'll execute in order, since they're asynchronous, so I might get a 404 on test 2 or 3 if the user doesn't exist by the time they run.
Does anyone know the correct way to create sequential tests in Frisby?
As you correctly pointed out, Frisby.js is asynchronous. There are several approaches to force it to run more synchronously. The easiest, though not the cleanest, is to use .after(() => ...); you can find more about after() in the Frisby.js docs.
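For illustration, a minimal sketch of that chaining with the classic frisby 0.x callback API (post_url, get_url and delete_url as in the question). Each request is only issued from the previous test's .after() callback, which runs once that response has arrived:

var frisby = require('frisby');

frisby.create('Create')
  .post(post_url, { user_id: 1 })
  .expectStatus(201)
  .after(function () {
    frisby.create('Get')
      .get(get_url)
      .expectStatus(200)
      .after(function () {
        frisby.create('Delete')
          .delete(delete_url)
          .expectStatus(200)
          .toss();
      })
      .toss();
  })
  .toss();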

JBehave: GivenStories at the end of execution

I'm using GivenStories for executing a Login scenario which is located in a different story.
I was wondering if there is a way to use something similar in order to execute a logout story, which is also located in a different story than the one I'm actually executing.
I know that I can do some tricks with @Before/@After annotations, but the question is whether I can execute a "post" story.
Thanks
Based on the JBehave annotation documentation, a post-story step can be implemented by annotating a step class method with @AfterStory (or @AfterStories if you want to execute only after all stories complete). The @AfterStory method will execute regardless of whether your executing story contains a step from the related step class (i.e. it is guaranteed to execute after every story - see below for restricting to given stories).
The @BeforeStory and @AfterStory annotations allow the corresponding
methods to be executed before and after each story, either a
GivenStory or not:
@AfterStory // equivalent to @AfterStory(uponGivenStory=false)
public void afterStory() {
    // ...
}
@AfterStory(uponGivenStory=true)
public void afterGivenStory() {
    // ...
}
This is the answer I got from the JBehave dev channel:
Hi,
there is no such mechanism, but you could:
use the Lifecycle to execute steps (not stories) after the execution of a scenario (executed after each scenario), or
have a final scenario which invokes the given stories
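For illustration, the Lifecycle option could look like this in a story file (a sketch, assuming a "Given I log out" step exists in your steps class; Lifecycle After steps run after each scenario of that story):
Lifecycle:
After:
Given I log out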

Trigger will not run (added more information)

I have been tinkering with this trigger for hours now, and I think I have pinpointed the issue.
I have set up an example trigger like in the ML8 documentation.
Now I have modified it to a more real-world action.
The issue seems to be that I use a library module that holds my own functions, in a lib.xqy. I have tested the lib itself in Query Console, and all functions run fine.
The alert action itself also runs fine in QC.
The simpleTrigger works ok.
The more complex one runs IF I REMOVE the function that uses my own lib.
It seems that the trigger is run by a user, or from a place, where it cannot find my module (which is in the modules db). I have set the trigger db to point to the content db.
The triggers look at a directory for new documents (document create).
If I want to use my own lib function, the error thrown is:
[1.0-ml] XDMP-MODNOTFOUND: (err:XQST0059) xdmp:eval("xquery version
"1.0-ml";
let $uri := '/marklo...", (),
<options xmlns="xdmp:eval"><database>12436607035003930594</database>
<modules>32519102440328...</options>)
-- Module /lib/sccss-lib.xqy not found
The module is in the modules-db...
Another thing that bothers me is that the example in the ML documentation does a
xdmp:document-insert("/modules/log.xqy",
text{ "
xquery version '1.0-ml';
..."
}, xdmp:permission('app-user', 'execute'))
What does the app-user permission do in this case?
Anyway, the main question: why does the trigger not run if I use a custom module in the trigger action?
I have seen this question and think it is related, but I do not understand the answer there...
EDIT: more information on the trigger create statement:
xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
at "/MarkLogic/triggers.xqy";
trgr:create-trigger("sensorTrigger",
  "Simple trigger for connection systems sensor, the action checks how long this device is around the sensor",
  trgr:trigger-data-event(
    trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
    trgr:document-content("create"),
    trgr:post-commit()),
  trgr:trigger-module(xdmp:database("cluey-app-content"), "/triggers/", "check-time-at-sensor.xqy"),
  fn:true(), xdmp:default-permissions())
Also, the trigger is indeed created from QC, so indeed as admin (I have yet to figure out how to do that by adding code to app-specific.rb). And the trigger action is also loaded from QC, with a doc insert statement equivalent to the trigger example in the docs.
For completeness, I added this to app-specific.rb per the suggestion by Geert:
alias_method :original_deploy_modules, :deploy_modules
def deploy_modules()
  original_deploy_modules
  # and apply correct permissions
  r = execute_query %Q{
    xquery version "1.0-ml";
    for $uri in cts:uris()
    return (
      $uri,
      xdmp:document-set-permissions($uri, (
        xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
        xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
      ))
    )
  },
  { :db_name => @properties["ml.modules-db"] }
end
For testing I also loaded it as part of the content (using ./ml local deploy content to load it). As said before, the action is there and it will run, so there seems to be no issue with the permissions of the action doc itself. What I do not understand is that as soon as I try to use my own module in the action, it fails to find the module or (see comment by David) does not have the right permission on the module, so the trigger action fails to run... The module is loaded with Roxy under /src/lib/lib.xqy
SECOND EDIT
I added all the trigger stuff to Roxy by adding the following to app_specific.rb:
# HK for using modules that have no REST permissions in a REST extension
alias_method :original_deploy_modules, :deploy_modules
def deploy_modules()
  original_deploy_modules
  # Create triggers
  r = execute_query(%Q{
    xquery version "1.0-ml";
    import module namespace trgr="http://marklogic.com/xdmp/triggers"
      at "/MarkLogic/triggers.xqy";
    xdmp:log("Installing triggers.."),
    try {
      trgr:remove-trigger("sensorTrigger")
    } catch ($ignore) {
    };
    xquery version "1.0-ml";
    import module namespace trgr="http://marklogic.com/xdmp/triggers"
      at "/MarkLogic/triggers.xqy";
    trgr:create-trigger("sensorTrigger", "Trigger to check duration at sensor",
      trgr:trigger-data-event(
        trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
        trgr:document-content("create"),
        trgr:post-commit()
      ),
      trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy"),
      fn:true(),
      xdmp:default-permissions(),
      fn:false()
    )
  },
  ######## THIRD EDIT ###############
  #{ :app_name => @properties["ml.app-name"] }
  { :db_name => @properties["ml.modules-db"] }
  )
  # and apply correct permissions
  r = execute_query %Q{
    xquery version "1.0-ml";
    for $uri in cts:uris()
    return (
      $uri,
      xdmp:document-set-permissions($uri, (
        xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
        xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
      ))
    )
  },
  { :db_name => @properties["ml.modules-db"] }
end
As you can see, the root path is now "/" in the line
trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy")
I also added permissions by hand, but still, as soon as I add the line pointing to sccss-lib.xqy, my trigger fails...
There are a number of criteria that need to be met for a trigger to work properly. David already mentioned some of them. Let me try to complete the list:
You need to have a database that contains the trigger definition. That is the database against which the trgr:create-trigger was executed. Typically Triggers or some app-triggers.
That trigger database needs to be assigned as triggers-database to the content database (not the other way around!).
You point to a trigger module that contains the code that will get executed as soon as a trigger event occurs. The trgr:trigger-module explicitly points to the uri of the module, and the database in which it is contained.
Any libraries used by that trigger module need to be in the same database as the trigger module. Typically both the trigger module and related libraries are stored within Modules or some app-modules database.
With regard to permissions, the following applies:
First of all, you need privileges to insert the document (uri, and collection).
Then, to be able to execute the trigger, the user doing the insert needs to have a role (directly, inherited, or via amps) that has read and execute permission on the trigger module, as well as on all related libraries.
Then that same user needs to have privileges to do whatever the trigger modules needs to do.
Looking at your create-trigger statement, I notice that the trigger module is pointing to the app-content database. That means it will look for libraries in the app-content database as well. I would recommend putting both the trigger module and its libraries in the app-modules database.
Also, regarding the app-user execute permission: that is just a convention. The nobody user has the app-user role, which is typically used to allow the nobody user to run rewriter code.
HTH!
Could you please provide a bit more information - like perhaps the entire trigger create statement?
For creating the trigger, keep in mind:
the trigger database that you insert the trigger into has to be the one defined on the content database you refer to in the trigger, and
trgr:trigger-module allows you to define the modules database and the module to run. With this defined properly, I cannot see how /lib/sccss-lib.xqy is not found - unless it is a permissions issue...
Now on to the other item in your question: you test stuff in Query Console. That runs with the roles of that user - and people often run it as admin... MarkLogic also gives a 'not found' message if a document is there but you simply do not have access to it. So it is possible that there is a problem with permissions for the documents in your modules database.
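One quick check along those lines (a sketch; run it in Query Console against the modules database, with the URI taken from the error above):
xdmp:document-get-permissions("/lib/sccss-lib.xqy")
If the result does not include read and execute permissions for a role the triggering user has, that would match the misleading 'not found' behavior described above.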