Cadence Java client, unable to get history of workflow executions - cadence-workflow

I am following the idea mentioned in this answer and trying this:
workflowTChannel.ListClosedWorkflowExecutions(ListClosedWorkflowExecutionsRequest().apply {
    domain = "valid-domain-name"
    startTimeFilter = StartTimeFilter().apply {
        setEarliestTime(Instant.parse("2023-01-01T00:00:00.000Z").toEpochMilli())
        setLatestTime(Instant.parse("2024-01-01T00:59:59.999Z").toEpochMilli())
    }
})
However, the result is always an empty list.
Fetching the same list via the UI works fine.
I am using PostgreSQL and a local test installation without advanced visibility.
UPD: I debugged Cadence locally and found that it expects nanoseconds instead of milliseconds, so the parameters must be prepared like this:
Instant.parse("2023-01-01T00:00:00.000Z").toEpochMilli() * 1000000

My guess is that you are using seconds while Cadence expects nanosecond timestamps.


How to manually trigger a cloudwatch rule with ScheduleExpression(10 days)

I have to set up an "AWS::Events::Rule" in CloudWatch with ScheduleExpression(10 days) and write some code to test it, but I cannot change the "10 days" to 1 minute or call the lambda function directly. I know that we can call put-events to trigger a rule that has an EventPattern,
but I don't know how to do that for a ScheduleExpression.
Any comment is welcome, thanks.
To my knowledge there's no way for you to manually trigger the rule and make it execute the lambda function. What you can do is change the frequency from 10 days to 1 minute, let it execute, and once it has executed, switch it back to 10 days.
I also ran into this problem. I checked the AWS documentation and it says that a rule can only contain either an EventPattern or a ScheduleExpression, but in order to call aws events put-events we must provide a Source for the EventPattern match. So I think we cannot manually trigger a scheduled event.
Not sure what your use case is, but I have decided to move to the Invoke API of the AWSLambda client.
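For reference, a minimal sketch of calling the function directly with the Invoke API (AWS SDK for JavaScript v2; the function name is a placeholder):
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Invoke the target function directly instead of waiting for the scheduled rule.
lambda.invoke({
  FunctionName: 'my-scheduled-function', // placeholder: your function's name
  InvocationType: 'Event',               // asynchronous, like a scheduled trigger
  Payload: JSON.stringify({})            // scheduled events carry no custom payload
}).promise()
  .then((response) => console.debug('Lambda invoke response', response))
  .catch((error) => console.error(error.message));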
SDK Approach:
Yes, you can use the putRule SDK function to update the ScheduleExpression of the CloudWatch rule, as shown in the snippet below:
const AWS = require('aws-sdk');
const cloudWatchEvents = new AWS.CloudWatchEvents();

const params = {
  Name: timezoneCronName,            /* required: the rule name, not the full ARN */
  ScheduleExpression: cronExpression /* e.g. 'rate(1 minute)' or a cron() expression */
};

return cloudWatchEvents.putRule(params).promise().then((response) => {
  console.debug(`CloudWatch Events response`, response);
  return response;
}).catch((error) => {
  console.error(`Error occurred while updating CloudWatch Event: ${error.message}`);
  throw error;
});
See the official AWS SDK documentation for putRule.
CLI Approach:
Run the following command through the CLI (use the rule name, not the full ARN):
aws events put-rule --name "your-rule-name" --schedule-expression "cron(0/1 * * * ? *)"

SAPUI5 batch submit returns error

I am using the following code, in an attempt to batch upload the changes made on a table:
onConfirmActionPressed: function() {
    var oModel = this.getModel();
    oModel.setUseBatch(true);
    oModel.submitChanges();
}
I am using setProperty() to set the new values, like this:
onSingleSwitchChange: function(oControlEvent) {
    var oModel = this.getView().getModel();
    var rowBindingContext = oControlEvent.getSource().getBindingContext();
    oModel.setProperty(rowBindingContext.sPath + "/Zlspr", "A");
}
When onConfirmActionPressed is executed, I get a server error from SAP R3 saying "Commit work during changeset processing not allowed".
When I upload the lines of the table one-by-one, it works fine. However, uploading this way is very slow, and in some cases it takes more than 10 minutes for the process to complete.
Am I doing something wrong while batch submitting? Is there a chance the issue is due to server (R3) misconfiguration?
You need to redefine these methods:
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
/IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_END
Keep track of the errors across all calls to the update methods, and if everything went OK, perform the database commit in CHANGESET_END.
edit:
To clarify:
In your Data Provider Class extension in SAP Gateway you need to find your YOURENTITY_UPDATE_ENTITY method and get rid of any COMMIT WORK statements.
Then you need to redefine the /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN method, which is fired before any batch operation. You could define a class attribute such as a table mt_batch_errors, which is cleared in this method.
When you post batch changes from UI5 using oModel.submitChanges(), every single entity change is routed to the appropriate ..._UPDATE_ENTITY method. You need to keep track of any possible errors and, if any occur, fill your mt_batch_errors table.
After all entities have been updated, the /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_END method is fired, in which you can check the mt_batch_errors table for any errors that occurred during the batch process. If there were errors you should probably ROLLBACK WORK, and if not you are free to COMMIT WORK.
That is just an example of how it could be done, I'm curious of other suggestions.
Good luck!

How to use delta trigger in flink?

I want to use the DeltaTrigger in Apache Flink (Flink 1.3) but I have some trouble with this code:
.trigger(DeltaTrigger.of(100, new DeltaFunction[uniqStruct] {
override def getDelta(oldFp: uniqStruct, newFp: uniqStruct): Double = newFp.time - oldFp.time
}, TypeInformation[uniqStruct]))
And I have this error:
error: object org.apache.flink.api.common.typeinfo.TypeInformation is not a value [ERROR] }, TypeInformation[uniqStruct]))
I don't understand why DeltaTrigger needs a TypeSerializer[T],
and I don't know what to do to remove this error.
Thanks a lot everyone.
I would read into this a bit: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/types_serialization.html. It sounds like you can create a serializer using typeInfo.createSerializer(config) on your type info. Note that what you're passing in currently is a type itself and NOT the type info, which is why you're getting the error you are.
You would need to do something more like:
val uniqStructTypeInfo: TypeInformation[uniqStruct] = createTypeInformation[uniqStruct]
val uniqStructTypeSerializer = uniqStructTypeInfo.createSerializer(config)
To quote the page above regarding the config parameter you need to pass to createSerializer:
The config parameter is of type ExecutionConfig and holds the information about the program's registered custom serializers. Wherever possible, try to pass the program's proper ExecutionConfig. You can usually obtain it from DataStream or DataSet by calling getExecutionConfig(). Inside functions (like MapFunction), you can get it by making the function a Rich Function and calling getRuntimeContext().getExecutionConfig().
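Putting those pieces together, a rough sketch (uniqStruct with a numeric time field is from your code; env is assumed to be your StreamExecutionEnvironment):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.triggers.DeltaTrigger
import org.apache.flink.streaming.api.functions.windowing.delta.DeltaFunction

// Build the TypeSerializer that DeltaTrigger.of expects as its third argument
val uniqStructSerializer =
  createTypeInformation[uniqStruct].createSerializer(env.getConfig)

// ...and pass the serializer instead of the TypeInformation class:
.trigger(DeltaTrigger.of(100, new DeltaFunction[uniqStruct] {
  override def getDelta(oldFp: uniqStruct, newFp: uniqStruct): Double = newFp.time - oldFp.time
}, uniqStructSerializer))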
DeltaTrigger needs a TypeSerializer because it uses Flink's managed state mechanism to store each element for later comparison with the next one (it just keeps one element, the last one, which is updated as new elements arrive).
You will find an example (in Java) here.
But if all you need is a window that triggers every 100msec, then it'll be easier to just use a TimeWindow, such as
input
  .keyBy(<key selector>)
  .timeWindow(Time.milliseconds(100))
  .apply(<window function>)
Updated:
To have hour-long windows that trigger every 100msec, you could use sliding windows. However, you would have 10 * 60 * 60 windows, and every event would be placed into each of these 36000 windows. So that's not a great idea.
If you use a GlobalWindow with a DeltaTrigger, then the window will be triggered only when events are more than 100msec apart, which isn't what you've said you want.
I suggest you look at ProcessFunction. It should be straightforward to get what you want that way.

Cache Slick DBIO Actions

I am trying to speed up "SELECT * FROM WHERE name=?" kind of queries in Play! + Scala app. I am using Play 2.4 + Scala 2.11 + play-slick-1.1.1 package. This package uses Slick-3.1 version.
My hypothesis was that Slick generates prepared statements from DBIO actions and they get executed, so I tried to cache them by turning on the flag cachePrepStmts=true.
However, I still see "Preparing statement..." messages in the log, which means that the prepared statements are not getting cached! How should one instruct Slick to cache them?
If I run the following code, shouldn't the prepared statement be cached at some point?
for (i <- 1 until 100) {
  Await.result(db.run(doctorsTable.filter(_.userName === name).result), 10 seconds)
}
Slick config is as follows:
slick.dbs.default {
  driver = "slick.driver.MySQLDriver$"
  db {
    driver = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true"
    user = "user"
    password = "passwd"
    numThreads = 1 // For now, just one thread in HikariCP
    properties = {
      cachePrepStmts = true
      prepStmtCacheSize = 250
      prepStmtCacheSqlLimit = 2048
    }
  }
}
Update 1
I tried the following, as per @pawel's suggestion of using compiled queries:
val compiledQuery = Compiled { name: Rep[String] =>
  doctorsTable.filter(_.userName === name)
}

val stTime = TimeUtil.getUtcTime
for (i <- 1 until 100) {
  FutureUtils.blockFuture(db.run(compiledQuery(name).result), 10)
}
val endTime = TimeUtil.getUtcTime - stTime
Logger.info(s"Time Taken HERE $endTime")
In my logs I still see statements like:
2017-01-16 21:34:00,510 DEBUG [db-1] s.j.J.statement [?:?] Preparing statement: select ...
Also, the timing remains the same. What is the expected output? Should I not see these statements anymore? How can I verify that prepared statements are indeed reused?
You need to use Compiled queries, which do exactly what you want.
Just change the above code to:
val compiledQuery = Compiled { name: Rep[String] =>
  doctorsTable.filter(_.userName === name)
}

for (i <- 1 until 100) {
  Await.result(db.run(compiledQuery(name).result), 10 seconds)
}
I extracted name as a parameter above (because you usually want to vary some parameters in your prepared statements), but that's definitely optional.
For further information you can refer to: http://slick.lightbend.com/doc/3.1.0/queries.html#compiled-queries
For MySQL you need to set an additional jdbc flag, useServerPrepStmts=true
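For example, the url from the config above would then look like this (a sketch; same host and database as in the question):
url = "jdbc:mysql://localhost:3306/staging_db?useSSL=false&cachePrepStmts=true&useServerPrepStmts=true"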
HikariCP's MySQL configuration page links to a quite useful document that provides some simple performance tuning configuration options for MySQL jdbc.
Here are a few that I've found useful (you'll need to append them to the JDBC URL with & for options not exposed by Hikari's API). Be sure to read through the linked document and/or the MySQL documentation for each option; they should be mostly safe to use.
zeroDateTimeBehavior=convertToNull&characterEncoding=UTF-8
rewriteBatchedStatements=true
maintainTimeStats=false
cacheServerConfiguration=true
avoidCheckOnDuplicateKeyUpdateInSQL=true
dontTrackOpenResources=true
useLocalSessionState=true
cachePrepStmts=true
useServerPrepStmts=true
prepStmtCacheSize=500
prepStmtCacheSqlLimit=2048
Also, note that statements are cached per connection; depending on what you set for Hikari's connection maxLifetime and what the server load is, memory usage will increase accordingly on both server and client (e.g. if you set the connection max lifetime to just under MySQL's default of 8 hours, both server and client will keep N prepared statements alive in memory for the life of each connection).
P.S. I'm curious whether the bottleneck is indeed statement caching or something specific to Slick.
EDIT
To log statements, enable the query log. On MySQL 5.7 you would add this to your my.cnf:
general-log=1
general-log-file=/var/log/mysqlgeneral.log
and then run sudo touch /var/log/mysqlgeneral.log, followed by a restart of mysqld. Comment out the above config lines and restart to turn off query logging.

Laravel script stops running after some time

I have a PHP script that runs for quite a while, but after some time (due to the browser limit, when it stops loading the page) it stops executing the code.
Basically, what it does is fetch 10000 articles using an API and store them one by one in my own DB. After looping through roughly 1200+ articles, the Google Chrome spinning/loading icon stops and all I get back is a blank page. When I check the DB I can see the results there, but the script didn't loop through all the articles as it should have; it stopped.
I know running from the CLI would remove the browser time limit, but I don't know how to call a certain controller method in Laravel from the command line.
Make an Artisan command. See the docs at http://laravel.com/docs/commands
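For example, a minimal sketch of such a command (Laravel 5 syntax; the class name, signature, and ArticleImporter service are placeholders for your own import logic):
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ImportArticles extends Command
{
    // The name used to invoke the command: php artisan articles:import
    protected $signature = 'articles:import';
    protected $description = 'Fetch articles from the API and store them in the DB';

    public function handle()
    {
        // Reuse the same logic your controller method runs today,
        // ideally extracted into a service class.
        app(\App\Services\ArticleImporter::class)->importAll();
    }
}
Register it in the $commands array of app/Console/Kernel.php (if your Laravel version does not auto-discover commands) and run it with php artisan articles:import; there is no browser time limit on the CLI.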
Since you are in fact seeding a database, a quick and dirty way is to use db:seed to do it for you:
class MyTableSeeder extends Seeder {
    public function run()
    {
        // Do whatever you need to
    }
}
Enable it in DatabaseSeeder:
class DatabaseSeeder extends Seeder {
    public function run()
    {
        $this->call('MyTableSeeder');
    }
}
And execute it:
php artisan db:seed
A much better way is to create an Artisan command. Here's another answer I wrote that explains how to create one: Select all rows with a date of right now
Remove the timeout limit from the web server (usually in the config file).
Use set_time_limit(0) to force PHP to execute your code for an unlimited amount of time.