asynchronousRulesetParsing XU property and the Business Rules service on Bluemix

My ruleset is deployed on the Business Rules service on Bluemix, and I want to execute an older version of a ruleset while the newer one is being parsed. To do so, I am trying to configure the XU property asynchronousRulesetParsing, but I cannot figure out how to do so.

XU properties cannot be configured for the Business Rules service on Bluemix.
Specifically regarding the asynchronousRulesetParsing XU property, I found that it is not applicable when using Hosted Transparent Decision Services (HTDS) in ODM, since the implementation of HTDS always forces the latest version of rulesets to be used.
Since the Business Rules service on Bluemix uses HTDS, the asynchronousRulesetParsing XU property is also not applicable.
Instead, once I deploy a new ruleset version, I send a "dummy" request to force parsing of the new version up front and absorb the parsing delay. I wait for this request to complete before sending the "real" requests to the ruleset.
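For illustration, here is a minimal warm-up sketch in Java, assuming the usual HTDS REST endpoint layout; the endpoint URL, the ruleset path (MyRuleApp/MyRuleset) and the empty JSON payload are placeholders to replace with your service's execution URL, credentials, and a valid (even trivial) request for your ruleset:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RulesetWarmup {
    public static void main(String[] args) throws Exception {
        // Placeholder HTDS execution URL; substitute your own host and ruleset path.
        String endpoint = "https://my-rules-service.example.com/DecisionService/rest/v1/MyRuleApp/MyRuleset";
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write("{}".getBytes("UTF-8")); // minimal "dummy" request body
        }
        // Blocks until HTDS responds; by then the new ruleset version has been parsed.
        System.out.println("Warm-up returned HTTP " + conn.getResponseCode());
    }
}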

Related

spring data geode pool is not resolvable as a Pool in the application context

I've come back to a @SpringBootApplication project that uses spring-geode-starter version 1.2.4, although the same error happens after upgrading to version 1.5.6.
It sets up a Geode client using
@Component
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
and in order to register interest subscriptions over HTTP, also
@Configuration
@EnableGemFireHttpSession
with a bean
@Bean
public ReactiveSessionRepository<?> reactiveSessionRepository() {
    return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
}
On starting the application, the Spring Data Geode client connects to the server (Geode version 1.14) and automatically copies regions back to the client, which is great.
However, after all the region handles are copied over, there's an error from the @EnableGemFireHttpSession configuration:
Error creating bean with name 'ClusteredSpringSessions' defined in class path resource [org/springframework/session/data/gemfire/config/annotation/web/http/GemFireHttpSessionConfiguration.class] and [gemfirePool] is not resolvable as a Pool in the application context
The first info messages in the logs are:
org.springframework.session.data.gemfire.config.annotation.web.http.GemFireHttpSessionConfiguration 1109 sessionRegionAttributes: Expiration is not allowed on Regions with a data management policy of PROXY
org.springframework.data.gemfire.support.AbstractFactoryBeanSupport 277 lambda$logInfo$3: Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]
So the client is trying to create a region ClusteredSpringSessions but can't. The problem appears to resolve itself if I define a connection pool for HTTP, with a pool bean like this:
@Configuration
@EnableGemFireHttpSession(poolName = "devPool")
public class SessionConfig {

    @Bean
    public ReactiveSessionRepository<?> reactiveSessionRepository() {
        return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
    }

    @Bean("devPool")
    PoolFactoryBean sessionPool() {
        PoolFactoryBean pool = new PoolFactoryBean();
        ConnectionEndpoint ce = new ConnectionEndpoint("1.2.3.4", 10334);
        pool.setSubscriptionEnabled(true);
        pool.addLocators(ce);
        return pool;
    }
}
There is still the "Expiration is not allowed on Regions with a data management policy of PROXY" info message in the log, but this time the "Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]" step appears to work.
I don't understand why the default pool can't connect; it seems that relying on the default pool, rather than an explicitly defined one, is what causes this issue in version 1.2.4.
Since you are using Spring Boot for Apache Geode (SBDG), which is an excellent choice (thank you), you can simply include the spring-geode-starter-session dependency on your @SpringBootApplication classpath, which removes the need to explicitly annotate your Spring Boot application with SSDG's @EnableGemFireHttpSession annotation.
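For reference, a sketch of that dependency, assuming Maven; match the version to your spring-geode-starter version (1.5.6 in your case):

<dependency>
    <groupId>org.springframework.geode</groupId>
    <artifactId>spring-geode-starter-session</artifactId>
    <version>1.5.6</version>
</dependency>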
See here for more details. I also have a sample application demonstrating the use of SSDG here. The guide and source code for this example, along with other examples, can be found here.
Also, I would generally advise users to drive the GemFire/Geode cluster configuration from the application, and not let the cluster dictate the Regions (and/or other components/configuration) the client gets. However, SDG's @EnableClusterDefinedRegions annotation is provided, and is generally useful when you do not have control over the GemFire/Geode cluster your application is using. Still, in the (HTTP) Session use case, the GemFire/Geode cluster would need a Session Region (which defaults to "ClusteredSpringSessions", as determined by Spring Session for Apache Geode (SSDG) itself) anyway.
OK, now to the problem at hand...
I think what is happening here is due to backwards compatibility and legacy reasons. Spring Data for Apache Geode (SDG), on which both SSDG and SBDG are based (SBDG additionally pulls in SSDG as well), defined a GemFire/Geode Pool with the name "gemfirePool", specifically when using the SDG XML namespace to define a DataSource configuration.
So, it was somewhat naively assumed that users would explicitly define a Pool and call it "gemfirePool", rather than simply relying on the "default" Pool connection to the GemFire/Geode cache server (namely "localhost" and 40404, or, if using Locators (recommended), "localhost" and 10334).
However, for development purposes, and in SBDG specifically, I rely on the fact that GemFire/Geode creates a "DEFAULT" Pool anyway (when no explicit Pool is defined), and forgo the strict requirement that a "gemfirePool" exist. But SBDG builds on SSDG and SDG, and they still rely on the legacy arrangement (for example).
I have filed an issue ticket in SSDG to change this and better align SSDG with what SBDG prefers going forward, but I simply have not gotten around to it yet. My apologies for the inconvenience.
Anyway, it is a simple change you can make externally to your Spring Boot application, in application.properties, like so (see here, from the HTTP Session sample I referenced from SBDG above). This lets you configure the Session Region Pool name.
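A minimal sketch of that application.properties entry, reusing the "devPool" name from your example (verify the property name against your SSDG version's documentation):

spring.session.data.gemfire.cache.client.pool.name=devPool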
Also note, when you are using SDG's @EnableClusterDefinedRegions and the Region definition pulled down from the cluster is named differently on the server side, it is possible to change the name of the Session Region used by the client using this property.
Additionally, you can configure the client Session Region data policy using properties as well (for example).
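For example, a sketch under the same assumption about SSDG's property names; "Sessions" is a hypothetical server-side Region name:

spring.session.data.gemfire.session.region.name=Sessions
spring.session.data.gemfire.cache.client.region.shortcut=CACHING_PROXY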
Regarding the Expiration "info" message you are seeing in the logs...
Since the client Session Region is a PROXY by default, Expiration, Eviction and other Region data management policies (e.g. Compression, etc.) do not actually make much sense.
In fact, SSDG is smart about whether to apply additional Region data management policies locally or not (see here, and specifically, this logic).
The message you are seeing in your application logs is in fact coming from SSDG, specifically. It really serves as a reminder that your Session state management is actually "managed" on the server side (when the application client is using a PROXY, or even a CACHING_PROXY, Region) and that the corresponding server-side, or cluster, Sessions Region should be configured manually and appropriately, with Expiration policies as well as other things if necessary. Otherwise, no Session expiration would actually happen!
I hope all this makes sense.
If you continue to have problems, feel free to file an Issue ticket and provide an example test or small application replicating your problem.

RestTemplate annotated with @LoadBalanced sometimes gets the wrong service address by service name from Eureka

I use Spring Cloud to build the system, which includes many microservices. For some interface calls, I use a RestTemplate annotated with @LoadBalanced to implement load balancing, and I use Eureka as the registry. However, when I call interfaces between different microservices, the RestTemplate sometimes connects to the wrong service. For example, I have services A, B and C. When service A calls one of service B's interfaces, the RestTemplate annotated with @LoadBalanced first finds the actual IP and port from Eureka by service name, then builds the actual URL and sends the request to the target server. But sometimes it finds service C's IP and port when I call service B's interface, which causes the invocation to fail. This case occurs infrequently but never disappears, and it has troubled me for a long time. Could anyone give me some suggestions? Thanks.
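For reference, my setup is the standard one; a minimal sketch, where "service-b" is a placeholder Eureka service id:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // @LoadBalanced makes Spring Cloud resolve the logical service name in the
    // URL (e.g. "service-b") to a real instance registered in Eureka.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Usage: the host segment is the Eureka service id, not a hostname.
// String body = restTemplate.getForObject("http://service-b/api/some-endpoint", String.class);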
I learned why yesterday: it is a bug in Spring Cloud Dalston.RELEASE (https://github.com/spring-cloud/spring-cloud-commons/issues/224), and we happened to use this version. Spring Cloud fixed this bug in Dalston.SR2, and now it works fine.

Is it possible to have a single frontend select between backends (defined dynamically)?

I am currently looking into deploying Traefik/Træfik on our Service Fabric cluster.
Basically, I have a setup with any number of applications (services), each defined with a tenant name, and each of these services is in fact a separate Web UI.
I am trying to figure out if I can configure a single frontend to target a backend so I don't have to define a new frontend each time I deploy a new UI app. Something like
[frontend.tenantui]
rule = "HostRegexp:localhost,{tenantName:[a-z]+}.example.com"
backend = "fabric:/WebApp/{tenantName}"
The idea is to have it such that I can just deploy new UI services without updating the frontend configuration.
I am currently using the Service Fabric provider for my backend services, but I am open to using the file provider or something else if that is required.
Update:
The ServiceManifest contains labels that let Traefik create backends and frontends.
The labels are defined for one service; let's call it WebUI as an example. Now when I deploy an instance of WebUI, it gets a label and Traefik understands it.
Then I deploy ANOTHER instance with a DIFFERENT set of parameters. It's still the WebUI service and it uses the same manifest, so it gets the same labels and the same routing. But what I would really want is to give it a label containing some sort of rule so I could route based on the name of the service instance (determined at runtime, not design time). Specifically, I would like the runtime part to be part of the domain name (hence the suggestion of a HostRegexp-style rule).
I don't think it is possible to use the matched group from the HostRegexp to determine the backend.
A possibility would be to use the Property Manager API to dynamically set the frontend rule for the service instance after creating it. Also, see this for a complete example of using the API.
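A hedged sketch of what that could look like from Java, using the "Put Property" operation of the Service Fabric REST API; the cluster endpoint, the name under /Names, the property key, and the api-version are all assumptions to verify against the Service Fabric and Traefik provider docs for your versions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetTraefikFrontendRule {
    public static void main(String[] args) throws Exception {
        String cluster = "http://localhost:19080"; // cluster management endpoint (assumption)
        String name = "WebApp/WebUI";              // the service's name under /Names (assumption)
        // Property key and rule value are hypothetical; use whatever label key the
        // Traefik Service Fabric provider reads in your setup.
        String json = "{ \"PropertyName\": \"traefik.frontend.rule.tenantui\","
                + " \"Value\": { \"Kind\": \"String\", \"Data\": \"Host:tenant1.example.com\" } }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(cluster + "/Names/" + name + "/$/GetProperty?api-version=6.0"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Put property returned HTTP " + response.statusCode());
    }
}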

Service versioning in OpenAPI

We are about to implement a service API for my client that consists of a number of services, let's say ServiceA, ServiceB and ServiceC. Each service can, over time and independently, introduce new versions, while the old versions still exist. So we might have:
ServiceA v1
ServiceA v2
ServiceB v1
ServiceC v1
ServiceC v2
ServiceC v3
We are required to document this API using OpenAPI. I'm not too familiar with OpenAPI, but as far as I can see you typically version the entire API, not separate services.
How would one typically document such versioning using OpenAPI? Personally I see two options, but I am very likely missing something:
Add each version of the same service as a separate service in the documentation (but that bloats the API over time with a lot of services).
Increase all the services' versions, and the entire API's version, every time a single service changes version, so there is always a version 1, 2 and 3 of each service, even if some of them are identical (but that introduces a lot of unnecessary service versions).
Any input would be much appreciated.
Maybe a little late, but consider this point of view.
If the service API is implemented by a compact component, then the API should also be compact. The consumer of your service is not interested in your versioning policies beyond backwards-compatibility violations and/or new feature additions.
It is absolutely out of scope to require the consumer to know all your version numbers for individual services. They do not want to know them; they want a complete, compact package.
If your services have little in common, your best solution may be individual OpenAPI documents with tailored versions. The point is: you will not block development of your components on unrelated customers/consumers relying on the remaining services, and you will not bother customers with zero-change version bumps outside their field of interest.
On the other hand, you may as well keep a version number that is separate from the endpoints.
Something like this:
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com/v2
paths:
  /some-action:
    ...
instead of
openapi: 3.1.0
info:
  version: 2.0
servers:
  - url: http://somewhere.com
paths:
  /v2/some-action:
    ...
Now, your clients are expected to configure the address of your service, http://somewhere.com/v2, and use it for all services A, B and C. Once you decide to release a new major version /v3, which breaks compatibility in service A and brings new features to B and C, and possibly a new D:
You may keep http://somewhere.com/v2 running with the old service component version for existing customers.
The API may inform them in some way that they are using a deprecated endpoint.
You may keep both API specs published in your dev documentation tool.
Customers only using A and B may just swap http://somewhere.com/v2 for http://somewhere.com/v3 at the resource-URI level and not change/rebuild any code.
Customers using A may keep using the old version and plan a gradual shift towards /v3 by changing ONLY the code working with A.
New customers may start off using the current LIVE version, having a compact API. You definitely do not want to force them to learn the current latest version for each service separately; they want the versioning encapsulated inside the server as well. Also, they are free to use D.

jbpm 6.0.1 create process calling rest

How do I create a process with one service task (a REST call) which calls
http://www.webservicex.net/currencyconvertor.asmx/ConversionRate?FromCurrency=EUR&ToCurrency=USD
and sets the returned value as a parameter that can be seen later, using the jBPM console (KIE Workbench)? The JBoss docs are mostly about user tasks.
My recommended solution is to create a new WorkItemHandler implementation that calls the web service, gets the result, and injects it as a process variable.
You can see a similar example that calls web services here: https://github.com/droolsjbpm/jbpm-playground/tree/master/customer-relationships-workitems
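For illustration, a minimal sketch of such a handler for jBPM 6; the "Rate" result key and the input parameter names are assumptions you would wire to process variables via the task's data input/output mappings:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class CurrencyRateWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        try {
            // Input parameters as mapped in the BPMN2 service task definition.
            String from = (String) workItem.getParameter("FromCurrency");
            String to = (String) workItem.getParameter("ToCurrency");
            URL url = new URL("http://www.webservicex.net/currencyconvertor.asmx/ConversionRate"
                    + "?FromCurrency=" + from + "&ToCurrency=" + to);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader =
                    new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            // Complete the work item; "Rate" becomes an output mapped to a process variable.
            Map<String, Object> results = new HashMap<>();
            results.put("Rate", body.toString()); // raw XML response; parse as needed
            manager.completeWorkItem(workItem.getId(), results);
        } catch (Exception e) {
            manager.abortWorkItem(workItem.getId());
        }
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Nothing to clean up for a synchronous HTTP call.
    }
}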
HTH
There is a REST service task that you can use, available out of the box in the web-based designer (under service tasks, so implemented as a custom service task). The associated handler should also be registered automatically when using the jbpm-installer:
https://github.com/droolsjbpm/jbpm/blob/master/jbpm-installer/conf/META-INF/CustomWorkItemHandlers.conf#L4
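For reference, the registration in that file is an MVEL map from work item name to handler instance; a minimal sketch (the exact entries vary by jBPM version, so check the linked file):

[
  "Rest": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(),
]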