Using StatefulKnowledgeSession I'm able to define a filter which describes the rules I want to execute:
session.fireAllRules(new RuleNameEqualsAgendaFilter(ruleName));
But I couldn't find a way to do the same thing using StatelessKnowledgeSession:
cmds.add(CommandFactory.newFireAllRules());
ExecutionResults results = session.execute(CommandFactory.newBatchExecution(cmds));
CommandFactory.newFireAllRules() can take an int max or a String outIdentifier, or no parameter at all.
The excessive(!) JBoss Drools documentation doesn't help me either:
Documentation
My question is whether this is possible or not.
Thanks.
The CommandFactory doesn't have methods for creating a FireAllRulesCommand using filters, but you can just create one yourself:
List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newInsert(new MyFact()));
cmds.add(new FireAllRulesCommand(new RuleNameEqualsAgendaFilter("MyRule")));
ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
private static class RuleNameEqualsAgendaFilter implements AgendaFilter {

    private final String ruleName;

    public RuleNameEqualsAgendaFilter(final String ruleName) {
        this.ruleName = ruleName;
    }

    public boolean accept(final Activation activation) {
        return activation.getRule().getName().equals(this.ruleName);
    }
}
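Side note: in Drools 6+ the filter interface moved to the KIE API and accepts a Match rather than an Activation, so the equivalent filter would look like this (a sketch against the org.kie.api interfaces):

import org.kie.api.runtime.rule.AgendaFilter;
import org.kie.api.runtime.rule.Match;

public class RuleNameEqualsAgendaFilter implements AgendaFilter {

    private final String ruleName;

    public RuleNameEqualsAgendaFilter(final String ruleName) {
        this.ruleName = ruleName;
    }

    @Override
    public boolean accept(final Match match) {
        return match.getRule().getName().equals(this.ruleName);
    }
}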
The versioning API is powerful. However, with the usual pattern of calling it inline, the code quickly gets messy and hard to read and maintain.
Over time, the product needs to move fast to introduce new business requirements. Is there any advice on using this API wisely?
I would suggest using a Global Version Provider design pattern in Cadence/Temporal workflows if possible.
Key Idea
The versioning API is very powerful: it lets you change the behavior of existing workflow executions in a deterministic (backward-compatible) way. In the real world, though, you may only care about adding new behavior, and be okay with introducing it only to newly started workflow executions. In that case, you can use a global version provider to unify versioning for the whole workflow.
The key idea is that we version the whole workflow (that's why it's called GlobalVersionProvider). Every time we add a new version, we update the version provider to expose the new version.
Example In Java
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ImmutableMap;
import io.temporal.workflow.Workflow;
import java.util.HashMap;
import java.util.Map;

public class GlobalVersionProvider {
    private static final String WORKFLOW_VERSION_CHANGE_ID = "global";
    private static final int STARTING_VERSION_USING_GLOBAL_VERSION = 1;
    private static final int STARTING_VERSION_DOING_X = 2;
    private static final int STARTING_VERSION_DOING_Y = 3;
    private static final int MAX_STARTING_VERSION_OF_ALL =
        STARTING_VERSION_DOING_Y;

    // Workflow.getVersion can release a thread and subsequently cause a non-deterministic error.
    // We're introducing this map in order to cache our versions on the first call, which should
    // always occur at the beginning of a workflow execution.
    private static final Map<String, GlobalVersionProvider> RUN_ID_TO_INSTANCE_MAP =
        new HashMap<>();

    private final int versionOnInstantiation;

    private GlobalVersionProvider() {
        versionOnInstantiation =
            Workflow.getVersion(
                WORKFLOW_VERSION_CHANGE_ID,
                Workflow.DEFAULT_VERSION,
                MAX_STARTING_VERSION_OF_ALL);
    }

    private int getVersion() {
        return versionOnInstantiation;
    }

    public boolean isAfterVersionOfUsingGlobalVersion() {
        return getVersion() >= STARTING_VERSION_USING_GLOBAL_VERSION;
    }

    public boolean isAfterVersionOfDoingX() {
        return getVersion() >= STARTING_VERSION_DOING_X;
    }

    public boolean isAfterVersionOfDoingY() {
        return getVersion() >= STARTING_VERSION_DOING_Y;
    }

    public static GlobalVersionProvider get() {
        String runId = Workflow.getInfo().getRunId();
        GlobalVersionProvider instance;
        if (RUN_ID_TO_INSTANCE_MAP.containsKey(runId)) {
            instance = RUN_ID_TO_INSTANCE_MAP.get(runId);
        } else {
            instance = new GlobalVersionProvider();
            RUN_ID_TO_INSTANCE_MAP.put(runId, instance);
        }
        return instance;
    }

    // NOTE: this should be called at the beginning of the workflow method
    public static void upsertGlobalVersionSearchAttribute() {
        int workflowVersion = get().getVersion();
        Workflow.upsertSearchAttributes(
            ImmutableMap.of(
                // WorkflowSearchAttribute is the application's own enum of
                // registered search attribute names.
                WorkflowSearchAttribute.TEMPORAL_WORKFLOW_GLOBAL_VERSION.getValue(),
                workflowVersion));
    }

    // Call this API in each replay test to clear the cache.
    @VisibleForTesting
    public static void clearInstances() {
        RUN_ID_TO_INSTANCE_MAP.clear();
    }
}
Note that because of a bug in the Temporal/Cadence Java SDK, Workflow.getVersion can release a thread and subsequently cause a non-deterministic error. We introduce the map above to cache the version on the first call, which should always occur at the beginning of the workflow execution. Call the clearInstances API in each replay test to clear the cache.
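For example, a replay test might look roughly like this (a sketch assuming JUnit 5 and the Temporal Java SDK's WorkflowReplayer; the JSON file name is illustrative, following the naming convention described below):

import io.temporal.testing.WorkflowReplayer;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class HelloWorldReplayTest {

    @BeforeEach
    public void setUp() {
        // Clear the cached providers so each replay starts fresh.
        GlobalVersionProvider.clearInstances();
    }

    @Test
    public void replayKeepsDeterminism() throws Exception {
        // Replays a previously recorded workflow history against the current code.
        WorkflowReplayer.replayWorkflowExecutionFromResource(
            "HelloWorldWorkflowReplaytest-version-2-doing-x.json", HelloWorldImpl.class);
    }
}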
Therefore, in the workflow code:
public class HelloWorldImpl implements HelloWorld {
    private final GlobalVersionProvider globalVersionProvider;

    @VisibleForTesting
    public HelloWorldImpl(final GlobalVersionProvider versionProvider) {
        this.globalVersionProvider = versionProvider;
    }

    public HelloWorldImpl() {
        this.globalVersionProvider = GlobalVersionProvider.get();
    }

    @Override
    public void start(final Request request) {
        if (globalVersionProvider.isAfterVersionOfUsingGlobalVersion()) {
            GlobalVersionProvider.upsertGlobalVersionSearchAttribute();
        }
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()) {
            // doing X here
            ...
        }
        ...
        if (globalVersionProvider.isAfterVersionOfDoingY()) {
            // doing Y here
            ...
        }
        ...
    }
}
Best practice with the pattern
How to add a new version
For every new version:
1. Add the new constant STARTING_VERSION_XXXX.
2. Add a new API: public boolean isAfterVersionOfXXX().
3. Update MAX_STARTING_VERSION_OF_ALL.
4. Apply the new API in the workflow code wherever you want to add the new logic (see the sketch below).
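For instance, introducing a hypothetical version 4 that guards a new behavior Z would change the provider roughly like this (a sketch; the "Z" names are placeholders):

// Inside GlobalVersionProvider (sketch):
private static final int STARTING_VERSION_DOING_Z = 4;   // step 1: new constant
private static final int MAX_STARTING_VERSION_OF_ALL =
    STARTING_VERSION_DOING_Z;                             // step 3: updated max

public boolean isAfterVersionOfDoingZ() {                 // step 2: new API
    return getVersion() >= STARTING_VERSION_DOING_Z;
}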
Maintain the replay test JSON files in a pattern like HelloWorldWorkflowReplaytest-version-x-description.json. Always add a new replay test for every new version you introduce to the workflow. When generating the JSON from a workflow execution, make sure it exercises the new code path; otherwise it won't be able to protect determinism. If more than one workflow execution is required to exercise all branches, make multiple JSON files for replay.
How to remove an old version
To remove an old code path (version), add a new version that no longer executes it. Later on, use a search attribute query like
GlobalVersion >= STARTING_VERSION_DOING_X AND GlobalVersion < STARTING_VERSION_NOT_DOING_X
to find out whether any workflow executions are still running with those versions.
Instead of waiting for workflows to close, you can terminate or reset them.
Example of deprecating a code path DoingX:
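First, the provider gains a version that turns the old path off (a sketch; STARTING_VERSION_NOT_DOING_X is an illustrative name):

// Inside GlobalVersionProvider (sketch):
private static final int STARTING_VERSION_NOT_DOING_X = 4;
// Remember to bump MAX_STARTING_VERSION_OF_ALL as well.

public boolean isAfterVersionOfNotDoingX() {
    return getVersion() >= STARTING_VERSION_NOT_DOING_X;
}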
Then, in the workflow code:

public class HelloWorldImpl implements HelloWorld {
    ...
    @Override
    public void start(final Request request) {
        ...
        if (globalVersionProvider.isAfterVersionOfDoingX()
                && !globalVersionProvider.isAfterVersionOfNotDoingX()) {
            // doing X here
            ...
        }
    }
}
Example in Golang: TODO
Benefits
Prevents spaghetti code, by using the native Temporal versioning API everywhere in the workflow code.
Provides a search attribute to find workflows of a particular version. This fills a gap in the Temporal Java SDK, which is missing the TemporalChangeVersion feature. And even though the Cadence Java/Golang SDK has CadenceChangeVersion, this global version search attribute is much better for querying, because it's an integer instead of a keyword.
Provides a pattern for maintaining replay tests easily.
Provides a way to test different versions despite the missing feature above.
Cons
There shouldn't be any cons. Using this pattern doesn't stop you from using the raw versioning API directly in the workflow; you can combine this pattern with others.
In order to have access to events like S3EventNotification, we need to specify a custom argument resolver in the QueueMessageHandlerFactory. But since the order in which those argument resolvers are evaluated matters, this forces me to keep a list that declares every argument resolver a second time. Is it possible to avoid this?
I am trying to read from a queue where the events are generated by Amazon itself.
In this case I need to set
messageConverter.setStrictContentTypeMatch(false);
as explained here: https://cloud.spring.io/spring-cloud-aws/1.2.x/multi/multi__messaging.html#_consuming_aws_event_messages_with_amazon_sqs
In the handler method, however, I needed to use the Acknowledgment, Visibility, and header method parameters, but those were not passed correctly unless I redefined all the possible argument resolvers in the configuration.
So to have the following method signature:
@SqsListener(value = "${my-queue-name}", deletionPolicy = NEVER)
public void processRequest(
        @Payload S3EventNotification s3EventNotificationRecord,
        @Header("ApproximateReceiveCount") final int receiveCount,
        Acknowledgment acknowledgment,
        Visibility visibility) {
    // do some stuff and decide to acknowledge or extend visibility
}
I was forced to write a custom configuration like this:
@Configuration
public class AmazonSQSConfig {
    private static final String ACKNOWLEDGMENT = "Acknowledgment";
    private static final String VISIBILITY = "Visibility";

    @Bean
    public QueueMessageHandlerFactory queueMessageHandlerFactory() {
        QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
        factory.setArgumentResolvers(initArgumentResolvers());
        return factory;
    }

    private List<HandlerMethodArgumentResolver> initArgumentResolvers() {
        MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
        messageConverter.setStrictContentTypeMatch(false);
        return List.of(
            new HeaderMethodArgumentResolver(null, null),
            new HeadersMethodArgumentResolver(),
            new NotificationSubjectArgumentResolver(),
            new AcknowledgmentHandlerMethodArgumentResolver(ACKNOWLEDGMENT),
            new VisibilityHandlerMethodArgumentResolver(VISIBILITY),
            new PayloadArgumentResolver(messageConverter));
    }
}
I would expect there to be a way to define only the custom argument resolver and still have all the other arguments passed to the method when it executes.
I am currently using Mehdi El Gueddari's DbContextScope project, I think by the book, and it's awesome. But today I came across a problem I'm unsure how to solve. I have a query that I need to execute using a different database login/user because it requires additional permissions. I can create another connection string in my web.config, but I'm not sure how to specify that this query should use the new connection string. Here is my usage:
In my logic layer:
private static IDbContextScopeFactory _dbContextFactory = new DbContextScopeFactory();

public static Guid GetFacilityID(string altID)
{
    ...
    using (_dbContextFactory.CreateReadOnly())
    {
        entity = entities.GetFacilityID(altID);
    }
}
That calls into my data layer which would look something like this:
private AmbientDbContextLocator _dbcLocator = new AmbientDbContextLocator();

protected CRMEntities DBContext
{
    get
    {
        var dbContext = _dbcLocator.Get<CRMEntities>();
        if (dbContext == null)
            throw new InvalidOperationException("No ambient DbContext....");
        return dbContext;
    }
}

public virtual Guid GetFacilityID(string altID)
{
    return DBContext.Set<Facility>().Where(f => f.altID == altID).Select(f => f.ID).FirstOrDefault();
}
Currently my connection string is set in the default way:
public partial class CRMEntities : DbContext
{
    public CRMEntities()
        : base("name=CRMEntities")
    {}
}
Is it possible for this specific query to use a different connection string and how?
I ended up modifying the source code in a way that feels slightly hacky, but is getting the job done for now. I created a new IAmbientDbContextLocator with a Get<TDbContext> method override that accepts a connection string:
public TDbContext Get<TDbContext>(string nameOrConnectionString) where TDbContext : DbContext
{
    var ambientDbContextScope = DbContextScope.GetAmbientScope();
    return ambientDbContextScope == null
        ? null
        : ambientDbContextScope.DbContexts.Get<TDbContext>(nameOrConnectionString);
}
Then I updated the DbContextCollection to pass this parameter to the DbContext's existing constructor overload. Last, I updated the DbContextCollection to maintain a Dictionary<KeyValuePair<Type, string>, DbContext> instead of a Dictionary<Type, DbContext> as its cached _initializedDbContexts, where the added string is the nameOrConnectionString param. In other words, I updated it to cache unique DbContext type/connection string pairs.
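For illustration, the reworked cache inside DbContextCollection might look roughly like this (a sketch, not the actual DbContextScope source; the Activator-based construction stands in for however the library builds its contexts):

// Inside the modified DbContextCollection (sketch).
private readonly Dictionary<KeyValuePair<Type, string>, DbContext> _initializedDbContexts =
    new Dictionary<KeyValuePair<Type, string>, DbContext>();

public TDbContext Get<TDbContext>(string nameOrConnectionString) where TDbContext : DbContext
{
    // Cache key is the context type plus the connection string it was opened with.
    var key = new KeyValuePair<Type, string>(typeof(TDbContext), nameOrConnectionString);

    DbContext dbContext;
    if (!_initializedDbContexts.TryGetValue(key, out dbContext))
    {
        // DbContext has a constructor overload accepting "name=..." or a raw
        // connection string; the parameter is forwarded to it here.
        dbContext = (DbContext)Activator.CreateInstance(typeof(TDbContext), nameOrConnectionString);
        _initializedDbContexts.Add(key, dbContext);
    }
    return (TDbContext)dbContext;
}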
Then I can get at the DbContext with the connection I need like this:
var dbContext = new CustomAmbientDbContextLocator().Get<CRMEntities>("name=CRMEntitiesAdmin");
Of course you'd have to be careful your code doesn't end up going through two different contexts/connection strings when it should be going through the same one. In my case I have them separated into two different data access class implementations.
I have a Spring Batch application where BeanWrapperFieldSetMapper is used to map fields using a prototype object. However, the CSV file being read (via a FlatFileItemReader) contains one (indicator) field that determines the mapping of another field. If the indicator field has a value of Y, then the value of the other field should be mapped to property foo; otherwise it should be mapped to property bar.
I know that I can use a custom FieldSetMapper to do this, but then I would have to code the mapping of all the other fields (of which there are quite a few). Alternatively, I could do this after reading, via an ItemProcessor, but then my domain (prototype) object would need a property representing the indicator field (which I would prefer to avoid, since it is not really part of the business domain).
Is it possible to perhaps use a custom FieldSetMapper to only map these custom fields and delegate the other mappings to BeanWrapperFieldSetMapper? Or is there some other better way to solve for this?
Here is my current attempt to use a custom FieldSetMapper and delegate to BeanWrapperFieldSetMapper:
public class DelegatedFieldSetMapper extends BeanWrapperFieldSetMapper<MyProtoClass> {

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator");
        Properties fieldProperties = fieldSet.getProperties();
        if (indicator.equalsIgnoreCase("y")) {
            fieldProperties.put("test.foo", fieldSet.readString("value"));
        } else {
            fieldProperties.put("test.bar", fieldSet.readString("value"));
        }
        fieldProperties.remove("indicator");
        Set<Object> keys = fieldProperties.keySet();
        List<String> names = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        for (Object key : keys) {
            names.add((String) key);
            values.add(fieldProperties.getProperty((String) key));
        }
        DefaultFieldSet domainObjectFieldSet =
            new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
        return super.mapFieldSet(domainObjectFieldSet);
    }
}
However, a FlatFileParseException is thrown. The relevant parts of the batch config class are as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Value("${file}")
    private File file;

    @Bean
    @Scope("prototype")
    public MyProtoClass myProtoClass() {
        return new MyProtoClass();
    }

    @Bean
    public ItemReader<MyProtoClass> reader(LineMapper<MyProtoClass> lineMapper) {
        FlatFileItemReader<MyProtoClass> flatFileItemReader = new FlatFileItemReader<MyProtoClass>();
        flatFileItemReader.setResource(new FileSystemResource(file));
        final int NUMBER_OF_HEADER_LINES = 1;
        flatFileItemReader.setLinesToSkip(NUMBER_OF_HEADER_LINES);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<MyProtoClass> lineMapper(LineTokenizer lineTokenizer, FieldSetMapper<MyProtoClass> fieldSetMapper) {
        DefaultLineMapper<MyProtoClass> lineMapper = new DefaultLineMapper<MyProtoClass>();
        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(fieldSetMapper);
        return lineMapper;
    }

    @Bean
    public LineTokenizer lineTokenizer() {
        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setNames(new String[] {"value", "test.bar", "test.foo", "indicator"});
        return lineTokenizer;
    }

    @Bean
    public FieldSetMapper<MyProtoClass> fieldSetMapper(PropertyEditor emptyStringToNullPropertyEditor) {
        BeanWrapperFieldSetMapper<MyProtoClass> fieldSetMapper = new DelegatedFieldSetMapper();
        fieldSetMapper.setPrototypeBeanName("myProtoClass");
        Map<Class<String>, PropertyEditor> customEditors = new HashMap<Class<String>, PropertyEditor>();
        customEditors.put(String.class, emptyStringToNullPropertyEditor);
        fieldSetMapper.setCustomEditors(customEditors);
        return fieldSetMapper;
    }
}
Finally, the CSV flat file looks like this:
value,bar,foo,indicator
abc,,,y
xyz,,,n
Let's say that BatchWorkObject is the class to be mapped.
Here's sample code in Spring Boot style that needs only your custom logic to be added.
FieldSetMapper<BatchWorkObject> mapper = new BeanWrapperFieldSetMapper<BatchWorkObject>() {
    {
        this.setTargetType(BatchWorkObject.class);
    }

    @Override
    public BatchWorkObject mapFieldSet(FieldSet fs)
            throws BindException {
        BatchWorkObject tmp = super.mapFieldSet(fs);
        // your custom code here
        return tmp;
    }
};
The code actually accomplishes what is desired except for one issue that results in the FlatFileParseException. The DelegatedFieldSetMapper contains the issue as follows:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
To resolve, swap the two arguments (DefaultFieldSet's constructor takes the values array first, then the names):
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(values.toArray(new String[values.size()]), names.toArray(new String[names.size()]));
Write your own FieldSetMapper with a set of prepared delegates inside.
Those delegates are pre-built, one for each different kind of field mapping.
In your mapper, route to the correct delegate based on the indicator field (with a Classifier, for example), as sketched below.
I can't see any other way, but this solution is quite easy and straightforward to maintain.
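A minimal sketch of that idea, reusing MyProtoClass from the question (the delegate wiring is illustrative; each delegate would be a pre-configured mapper for one mapping variant):

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class RoutingFieldSetMapper implements FieldSetMapper<MyProtoClass> {

    // Pre-built delegates: one maps "value" onto test.foo, the other onto test.bar.
    private final FieldSetMapper<MyProtoClass> fooDelegate;
    private final FieldSetMapper<MyProtoClass> barDelegate;

    public RoutingFieldSetMapper(FieldSetMapper<MyProtoClass> fooDelegate,
                                 FieldSetMapper<MyProtoClass> barDelegate) {
        this.fooDelegate = fooDelegate;
        this.barDelegate = barDelegate;
    }

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        // Route on the indicator column; everything else stays in the delegates.
        FieldSetMapper<MyProtoClass> delegate =
            "y".equalsIgnoreCase(fieldSet.readString("indicator")) ? fooDelegate : barDelegate;
        return delegate.mapFieldSet(fieldSet);
    }
}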
Processing based on the input format/data can also be done using a custom implementation of ItemProcessor, which either changes values in the same entity (the one populated by the ItemReader) or creates a new output entity.
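A sketch of that approach (RawRecord is a hypothetical intermediate type carrying the indicator, so the business object itself doesn't have to; setFoo/setBar stand in for however MyProtoClass exposes those properties):

import org.springframework.batch.item.ItemProcessor;

public class IndicatorAwareProcessor implements ItemProcessor<RawRecord, MyProtoClass> {

    @Override
    public MyProtoClass process(RawRecord raw) {
        MyProtoClass out = new MyProtoClass();
        if ("y".equalsIgnoreCase(raw.getIndicator())) {
            out.setFoo(raw.getValue());   // indicator Y: map to foo
        } else {
            out.setBar(raw.getValue());   // otherwise: map to bar
        }
        return out;
    }
}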
I register services implementing IMyService in the container. Do I have any guarantees about their order in
container.Resolve<IEnumerable<IMyService>>()
?
Just as extra help for people like me landing on this page... here is an example of how one could do it.
public static class AutofacExtensions
{
    private const string OrderString = "WithOrderTag";
    private static int OrderCounter;

    public static IRegistrationBuilder<TLimit, TActivatorData, TRegistrationStyle>
        WithOrder<TLimit, TActivatorData, TRegistrationStyle>(
            this IRegistrationBuilder<TLimit, TActivatorData, TRegistrationStyle> registrationBuilder)
    {
        return registrationBuilder.WithMetadata(OrderString, Interlocked.Increment(ref OrderCounter));
    }

    public static IEnumerable<TComponent> ResolveOrdered<TComponent>(this IComponentContext context)
    {
        return from m in context.Resolve<IEnumerable<Meta<TComponent>>>()
               orderby m.Metadata[OrderString]
               select m.Value;
    }
}
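Hypothetical usage (FirstService and SecondService are placeholder registrations; note that every registration resolved this way needs the WithOrder() metadata):

var builder = new ContainerBuilder();
builder.RegisterType<FirstService>().As<IMyService>().WithOrder();
builder.RegisterType<SecondService>().As<IMyService>().WithOrder();

using (var container = builder.Build())
{
    // Resolves FirstService then SecondService, ordered by the metadata
    // that WithOrder() attached at registration time.
    var ordered = container.ResolveOrdered<IMyService>().ToList();
}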
No, there's no ordering guaranteed here. We've considered extensions to enable it but for now it's something to handle manually.
I don't mean to self-promote, but I have also created a package to solve this problem because I had a similar need: https://github.com/mthamil/Autofac.Extras.Ordering
It uses the IOrderedEnumerable<T> interface to declare the need for ordering.
I know this is an old post, but to maintain the order of registration, can't we just use PreserveExistingDefaults() during registration?
builder.RegisterInstance(serviceInstance1).As<IService>().PreserveExistingDefaults();
builder.RegisterInstance(serviceInstance2).As<IService>().PreserveExistingDefaults();

// services should be in the same order of registration
var container = builder.Build();
var services = container.Resolve<IEnumerable<IService>>();
I didn't find any fresh information on the topic, so I wrote a test, which is as simple as this (you'd better write your own):
var cb = new ContainerBuilder();
cb.RegisterType<MyClass1>().As<IInterface>();
// ...
using (var container = cb.Build())
{
    using (var scope = container.BeginLifetimeScope())
    {
        var enumerable = scope.Resolve<IEnumerable<IInterface>>().ToArray();
        var collection = scope.Resolve<IReadOnlyCollection<IInterface>>();
        var list = scope.Resolve<IReadOnlyList<IInterface>>();
        // check here, ordering is ok
    }
}
Ordering was kept in all the cases I came up with. I know this is not guaranteed, but I think that in the current version of Autofac (4.6.0) ordering is deliberately preserved.