Unable to use multiple ServiceInstanceListener objects in CreateServiceInstanceListeners - azure-service-fabric

I have created a Service Fabric application whose StatelessService.CreateServiceInstanceListeners override returns multiple ServiceInstanceListener objects. The service listeners are opened but aborted almost immediately. The runtime then opens the listeners again (without going through CreateServiceInstanceListeners), aborts them, and so on.
When I use only one of the service listeners, everything works fine.

The returned service instance listeners are added to a ServiceListenerInstanceCollection, and this fails if the collection already contains a service listener instance with the same name.
The ServiceInstanceListener constructor has the following implementation:
public ServiceInstanceListener(
    Func<StatelessServiceContext, ICommunicationListener> createCommunicationListener,
    string name = "")
{
    this.CreateCommunicationListener = createCommunicationListener;
    this.Name = name;
}
The default name is an empty string, so if you don't specify a name, the returned enumeration contains multiple listeners named "", which raises an exception. The default failure handling then aborts the already opened listeners and restarts them.
The solution is simple: specify a unique name for each ServiceInstanceListener, as in the sketch below.
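For illustration, a minimal sketch of such an override. HttpCommunicationListener and TcpCommunicationListener are hypothetical stand-ins for whatever ICommunicationListener implementations you actually use:

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        // Each listener gets a unique name so the collection does not
        // reject the second entry as a duplicate of the default name "".
        new ServiceInstanceListener(ctx => new HttpCommunicationListener(ctx), "HttpListener"),
        new ServiceInstanceListener(ctx => new TcpCommunicationListener(ctx), "TcpListener")
    };
}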


Quarkus: "no tenant identifier specified" in callback

I am trying to add multi-tenancy support to my Quarkus app, following the Quarkus hibernate-orm documentation (see its last section).
I have my CustomTenantResolver class and the configuration in application.properties, with multiple data sources but no named persistence unit, see below:
# Default data source
quarkus.hibernate-orm.datasource=master
quarkus.hibernate-orm.database.generation=none
quarkus.hibernate-orm.multitenant=DATABASE
# ----- Tenant 'master' (default) ---------------
quarkus.datasource."master".db-kind=postgresql
quarkus.datasource."master".username=postgres
quarkus.datasource."master".password=password
quarkus.datasource."master".jdbc.url=jdbc:postgresql://localhost:5432/db_master
# ----- Tenant 'test' ---------------------------
quarkus.datasource.test.db-kind=postgresql
quarkus.datasource.test.username=postgres
quarkus.datasource.test.password=password
quarkus.datasource.test.jdbc.url=jdbc:postgresql://localhost:5432/db_test
Everything works fine for the web service API functions: based on the incoming web service call, I can extract and supply the tenant identifier for DB access.
The problem is that my app also needs a callback method to listen for messages coming from an Apache Pulsar queue. When a message comes in and triggers this callback, any DB access in the method throws this exception:
SessionFactory configured for multi-tenancy, but no tenant identifier specified: org.hibernate.HibernateException: SessionFactory configured for multi-tenancy, but no tenant identifier specified
at org.hibernate.internal.AbstractSharedSessionContract.<init>(AbstractSharedSessionContract.java:172)
at org.hibernate.internal.AbstractSessionImpl.<init>(AbstractSessionImpl.java:29)
at org.hibernate.internal.SessionImpl.<init>(SessionImpl.java:221)
at org.hibernate.internal.SessionFactoryImpl$SessionBuilderImpl.openSession(SessionFactoryImpl.java:1282)
at org.hibernate.internal.SessionFactoryImpl.openSession(SessionFactoryImpl.java:472)
at io.quarkus.hibernate.orm.runtime.session.TransactionScopedSession.acquireSession(TransactionScopedSession.java:86)
at io.quarkus.hibernate.orm.runtime.session.TransactionScopedSession.persist(TransactionScopedSession.java:138)
at io.quarkus.hibernate.orm.runtime.session.ForwardingSession.persist(ForwardingSession.java:53)
... (snipped)
Apparently my CustomTenantResolver class is not called during this listener callback, because the callback runs on a separate thread, so no tenant id is supplied.
Am I missing anything? And what about the Quarkus scheduler: how does it support multi-tenancy in scheduled jobs?
Thanks for any help.
I had a similar issue when pulling messages from JMS. The cause of the issue is that io.quarkus.hibernate.orm.runtime.tenant.HibernateCurrentTenantIdentifierResolver (which implements CurrentTenantIdentifierResolver and, as its doc says, maps from the Quarkus {@link TenantResolver} to the Hibernate {@link CurrentTenantIdentifierResolver} model) expects a request context to be active before it calls our implementation of TenantResolver, as shown here:
// Make sure that we're in a request
if (!Arc.container().requestContext().isActive()) {
    return null;
}
TenantResolver resolver = tenantResolver(persistenceUnitName);
String tenantId = resolver.resolveTenantId();
I solved it in my app by, first, activating the request context in the JMS consumer:
Arc.container().requestContext().activate();
and, second, using a ThreadLocal to "pass" the current tenant id to the TenantResolver that will be called later by Hibernate (through the HibernateCurrentTenantIdentifierResolver instance):
CurrentTenantLocal.setCurrentTenantId("public");
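CurrentTenantLocal here is the answerer's own helper, not a Quarkus class; a minimal sketch of such a ThreadLocal holder could look like this:

// Hypothetical ThreadLocal holder that hands the tenant id to the TenantResolver.
public final class CurrentTenantLocal {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    private CurrentTenantLocal() {
    }

    public static void setCurrentTenantId(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    public static String getCurrentTenantId() {
        return CURRENT_TENANT.get();
    }

    public static void clear() {
        CURRENT_TENANT.remove();
    }
}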
In my TenantResolver (the class that implements TenantResolver) I resolve the tenant either from an injected JsonWebToken jwt when the request comes from the web, or from the ThreadLocal when consuming from JMS:
if (CurrentTenantLocal.getCurrentTenantId() != null) {
    return CurrentTenantLocal.getCurrentTenantId();
}
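Putting both steps together in the message consumer might look roughly like the sketch below. The class, the handleMessage method and the "test" tenant are placeholders; Arc is io.quarkus.arc.Arc:

import io.quarkus.arc.Arc;

// Hypothetical consumer; handleMessage is whatever method your Pulsar/JMS client invokes.
public class TenantAwareMessageHandler {

    void handleMessage(String payload) {
        // 1. Activate the CDI request context so HibernateCurrentTenantIdentifierResolver
        //    actually calls the application's TenantResolver.
        Arc.container().requestContext().activate();
        try {
            // 2. Hand the tenant id to the TenantResolver via the ThreadLocal holder.
            CurrentTenantLocal.setCurrentTenantId("test"); // derive this from the message in a real app
            // ... persist or query entities here ...
        } finally {
            CurrentTenantLocal.clear();
            Arc.container().requestContext().terminate();
        }
    }
}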
Caveats:
Note that I haven't done an exhaustive search of the possible side effects of activating the request context... but I have no problems so far.

Sharing objects with all verticle instances

My application, an API server, is meant to be organized as follows:
MainVerticle is called on startup and should create all the objects the application needs to work: mainly a MongoDB connection pool (MongoClient.createShared(...)) and a global configuration object available instance-wide. It also starts the HTTP listener, that is, several instances of an HttpVerticle.
HttpVerticle is in charge of receiving requests and, based on the command xxx in the payload, executing the XxxHandler.handle(...) method.
Most of the XxxHandler.handle(...) methods will need to access the database. In addition, some of them will also deploy additional verticles with parameters from the global conf. For example, LoginHandler.handle(...) will deploy a verticle to keep user state while the user is connected, and this verticle will be undeployed when the user logs out.
I can't figure out how to get the global configuration object while being in XxxHandler.handle(...) or in a "sub"-verticle. Same for the mongo client.
Q1: For configuration data, I tried to use SharedData. In MainVerticle.start() I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
lm.put("var", "val");
and in HttpVerticle.start() I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
log.debug("var={}", lm.get("var"));
but the log output is var=null. What am I doing wrong?
Q2: Besides this basic example with a <String, String> map type, what if the value is a mutable object like a JsonObject, which is actually what I would need?
Q3: Finally, how do I make the instance of the Mongo client available to all verticles?
Instead of getLocalMap() you should be using getClusterWideMap(). Then you should be able to operate on shared data across the whole cluster and not just within one verticle.
Be aware that the shared-data operations are async, and the code might look like this (code in Groovy):
vertx.sharedData().getClusterWideMap( 'your-name' ){ AsyncResult<AsyncMap<String,String>> res ->
    if( res.succeeded() )
        res.result().put( 'var', 'val', { log.info "put succeeded: ${it.succeeded()}" } )
}
You should be able to use any Serializable objects in your map.
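For the reading side, a Java sketch under the same assumptions (same map name "your-name"; getClusterWideMap requires a clustered Vert.x instance) could be:

// Read the shared value back from another verticle, e.g. in HttpVerticle.start().
vertx.sharedData().<String, String>getClusterWideMap("your-name", res -> {
    if (res.succeeded()) {
        res.result().get("var", get -> {
            if (get.succeeded()) {
                System.out.println("var=" + get.result());
            }
        });
    }
});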

Adding new method to ServiceEventSource disables logging (Service Fabric service)

I am trying to add a new method to the ServiceEventSource class in a Service Fabric service (web app, web API or stateless service), to log warnings and exceptions separately from information-type messages.
When I add a new method to the ServiceEventSource class, it does not output any messages and this.IsEnabled() returns false. Out of the box, and if I remove the newly added method, ServiceEventSource outputs messages as expected and this.IsEnabled() returns true.
I am following the Using EventSource generically sample.
For example, just adding the following code causes ServiceEventSource to stop logging:
private const int ErrorEventId = 7;

[Event(ErrorEventId, Level = EventLevel.Error, Message = "Error: {0} - {1}")]
public void Error(string error, string msg)
{
    WriteEvent(ErrorEventId, error, msg);
}
I've looked everywhere and can't find any reference to this unexpected behaviour.

Envers: how to customize the user id in a customized revision listener

I am using JPA with Hibernate Envers in a microservice.
I tried:
public class MyRevisionEntityListener implements RevisionListener {

    @Override
    public void newRevision(Object revisionEntity) {
        // If you use Spring Security, you could use SpringSecurityContextHolder.
        final UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast(revisionEntity);
        mre.setUserName(userContext.getUserName());
    }
}
It saves the user name fine, but I want to save the user name as "by system" when the record is updated by another microservice, and keep saving it as above when a user updates the record. How can I customize the above code for this requirement?
Based on your supplied code, the most logical approach would be to add a boolean flag to your thread-local UserContext and check it inside the listener.
By default this flag would be false, but for your special microservice or business use case you could set it temporarily, run your process, and clear it when you've finished, much like a filter chain works in web applications.
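A rough sketch of that idea, assuming UserContext gains a hypothetical isSystemUpdate() flag and UserContextHolder works as in the question:

public class MyRevisionEntityListener implements RevisionListener {

    @Override
    public void newRevision(Object revisionEntity) {
        UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast(revisionEntity);
        // isSystemUpdate() is the suggested boolean flag on UserContext.
        if (userContext == null || userContext.isSystemUpdate()) {
            mre.setUserName("by system");
        } else {
            mre.setUserName(userContext.getUserName());
        }
    }
}

The service-to-service code path would set the flag before performing the update and reset it in a finally block once it has finished.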

Service Fabric ServicePartitionResolver ResolveAsync

I am currently using the ServicePartitionResolver to get the http endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected; however, if I redeploy my target application and its port changes on my local dev box, the source application still resolves the old endpoint (which is now invalid). Is there a cache somewhere that I can clear, or is this a bug?
Yes, they are cached. If you know that the resolved partition is no longer valid, or if you receive an error, you can call the resolver.ResolveAsync() overload that takes the earlier ResolvedServicePartition previousRsp, which triggers a refresh.
This api-overload is used in cases where the client knows that the
resolved service partition that it has is no longer valid.
See this article too.
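A rough sketch of that re-resolve pattern (the try/catch and the exception type are placeholders for however you detect a stale endpoint):

var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, ServicePartitionKey.Singleton, CancellationToken.None);
try
{
    // ... call the address from partition.GetEndpoint().Address here ...
}
catch (HttpRequestException)
{
    // The cached entry is suspect: passing the previous result forces a fresh lookup.
    partition = await resolver.ResolveAsync(partition, CancellationToken.None);
    // ... retry the call with the newly resolved endpoint ...
}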
Yes, they are cached. There are two ways to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached object.
[Recommended] The right way to handle this is to implement a custom CommunicationClientFactory that derives from CommunicationClientFactoryBase, then create a ServicePartitionClient and call InvokeWithRetryAsync. This is documented clearly in the Service Communication article, in the "Communication clients and factories" section. A sketch of the client-side usage follows.
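For illustration only, a minimal usage sketch, where MyHttpCommunicationClient and MyHttpCommunicationClientFactory are hypothetical implementations of ICommunicationClient and CommunicationClientFactoryBase:

// ServicePartitionClient re-resolves stale endpoints automatically on retry.
var clientFactory = new MyHttpCommunicationClientFactory();
var partitionClient = new ServicePartitionClient<MyHttpCommunicationClient>(
    clientFactory, serviceUri, ServicePartitionKey.Singleton);

var response = await partitionClient.InvokeWithRetryAsync(
    async client =>
    {
        // client wraps a resolved endpoint; Url and HttpClient are hypothetical members.
        return await client.HttpClient.GetStringAsync(client.Url);
    },
    CancellationToken.None);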