@RefreshScope and /refresh not working for multiple instances - spring-cloud

@RefreshScope and /refresh are not working for updating multiple service instances. I know this can be done using Spring Cloud Bus, but due to some constraints I cannot opt for that. Are there any alternatives?

Consider using Ribbon to determine the available instances and then call the refresh endpoint on all of them. I have not tried this, but it seems possible from what I read in the documentation.
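A minimal sketch of that idea, assuming each instance exposes the /refresh endpoint. In a real Spring app the host/port pairs would come from an autowired DiscoveryClient (via getInstances on each ServiceInstance) and the POST would be done with RestTemplate; only the URL building below is shown self-contained, and the service name and endpoint path are assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: refresh every instance of a service found via discovery.
// In a real app, replace the hard-coded list with the hosts/ports from
// DiscoveryClient.getInstances("my-service"), then POST to each URL,
// e.g. restTemplate.postForObject(url, null, String.class).
class RefreshAllInstances {

    // Build the /refresh URL for each known instance ("host:port" strings).
    static List<String> refreshUrls(List<String> hostPorts) {
        return hostPorts.stream()
                .map(hp -> "http://" + hp + "/refresh")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Stand-in for the instances a DiscoveryClient lookup would return.
        List<String> instances = List.of("host-a:8080", "host-b:8080");
        refreshUrls(instances).forEach(System.out::println);
    }
}
```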

Related

Integrating custom Kafka consumer with Spring Cloud Config client

We are using the Spring Cloud Config client to refresh properties dynamically. We have added spring-cloud-starter-bus-kafka to the classpath and everything works fine. The POM version of all these dependencies is 2.X.
What I want to do is remove spring-cloud-starter-bus-kafka and add my own code to pick up the consumer event, refresh the context, and in turn refresh properties using the Cloud Config client. I believe somewhere Spring is calling ConfigServicePropertySourceLocator.locate. Basically, I want to replicate what Spring does in spring-cloud-starter-bus-kafka to make refreshing properties in real time possible.
The reason I'm doing all this is that I'm internally using an older version of kafka-clients. We have a homegrown version of it that supports encryption and whatnot. The problem is that spring-cloud-starter-bus-kafka uses the 2.X version of kafka-clients and our homegrown version is not ready for that. Because of this, only one of them works at any point in time.
Can someone give me some pointers on what needs to be done to consume the refresh event from Kafka and refresh the properties? I don't imagine this being too complicated. It should amount to consuming the Kafka event and somewhere calling ConfigServicePropertySourceLocator to refresh the properties.
It's even simpler than that. If you look in the RefreshListener class, you can see that all it does is:
Set<String> keys = this.contextRefresher.refresh();
log.info("Received remote refresh request. Keys refreshed " + keys);
where contextRefresher is an org.springframework.cloud.context.refresh.ContextRefresher.
This will trigger the code that handles looking up configuration automatically.
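A minimal sketch of the wiring the answer describes, with the Kafka polling loop elided (the question uses a homegrown client) and a hypothetical Refresher interface standing in for Spring's ContextRefresher, which has the same refresh() contract; in a real app you would inject the actual Spring bean instead.

```java
import java.util.Set;

// Sketch of a hand-rolled replacement for spring-cloud-starter-bus-kafka.
// Refresher stands in for org.springframework.cloud.context.refresh.ContextRefresher
// (same Set<String> refresh() contract). Kafka consumption itself is left to
// your homegrown client; just call onRefreshEvent() when an event arrives.
class RefreshEventHandler {

    interface Refresher {
        Set<String> refresh();
    }

    private final Refresher contextRefresher;

    RefreshEventHandler(Refresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    // Invoke this from your consumer loop on each remote refresh event.
    Set<String> onRefreshEvent() {
        Set<String> keys = contextRefresher.refresh();
        System.out.println("Received remote refresh request. Keys refreshed " + keys);
        return keys;
    }
}
```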

How to use configure method of Grace IOC for Application Settings

I am using Grace and I want to configure it to track my settings in the appsettings.json file. I can configure that with the default container of ASP.NET Core like the following:
services.Configure<DatabaseConnectionSettings>(this.Configuration.GetSection("Database:Connection"));
and later use IOptions<DatabaseConnectionSettings> (or, for reload capability, IOptionsSnapshot<DatabaseConnectionSettings>) to get the strongly typed values from the container. How can I achieve this when using Grace? And will it support reloading the settings when the underlying data changes?
You can continue configuring your application the exact same way. Whatever is registered in the service collection will be registered automatically in Grace. I just created a sample app to test that.

Getting more detail from the Spring cloud Discovery Client

I note that with the various refactorings of common elements into Spring Cloud Commons, the information that you get from autowiring DiscoveryClient is rather sparse.
Let's say that I want to get more information from the incoming service data that the service receives when it registers with Eureka. Much of what I want is in the Application object.
I know that I could get this detail from the EurekaClient. How can I get access to the EurekaClient object?
I suspect you mean InstanceInfo objects, since Application basically just holds a list of InstanceInfos. The ServiceInstance returned from the Spring Cloud DiscoveryClient.getInstances(serviceId) is backed by an InstanceInfo. My guess is it would be easiest for you to autowire EurekaClient (or com.netflix.*.DiscoveryClient if you're using an older version) and go from there. We have to be sparse, as we support more than just Eureka (Consul, Zookeeper).

Can Hazelcast create and manage sessions?

From my initial reading, I understand that Hazelcast offers session clustering as one of its features. But can I use Hazelcast to create and manage the complete session lifecycle (creation, update, destruction, auto-expiry)? Does Hazelcast have this capability?
Or should I still use something like Spring Session or a regular HttpSession for creating and managing a session's lifecycle?
Actually, Hazelcast doesn't offer such an API. But you can try this trick:
Sessions (the distributed ones) are stored in a Hazelcast Map. If you can reach the HazelcastInstance somehow (probably in your web application), then you can add an entry listener to your map. That way, whenever there is a session change (insert, evict, remove, update, etc.) you will be informed.
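The entry-listener trick, sketched with a plain in-memory map standing in for Hazelcast's distributed session map. In real code you would obtain the map via hazelcastInstance.getMap("sessions") and register an EntryListener with map.addEntryListener(listener, true); the SessionListener interface and map name below are illustrative stand-ins, not Hazelcast API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of being notified on session changes via a map listener.
// A real Hazelcast EntryListener has entryAdded/entryUpdated/entryRemoved/
// entryEvicted callbacks taking EntryEvent arguments; SessionListener below
// is a simplified, hypothetical stand-in for the same pattern.
class SessionMapSketch {

    interface SessionListener {
        void onChange(String event, String sessionId);
    }

    private final Map<String, String> sessions = new HashMap<>();
    private final List<SessionListener> listeners = new ArrayList<>();

    void addListener(SessionListener l) { listeners.add(l); }

    // Creating or updating a session notifies every registered listener.
    void put(String sessionId, String data) {
        String event = sessions.put(sessionId, data) == null ? "added" : "updated";
        listeners.forEach(l -> l.onChange(event, sessionId));
    }

    // Destroying a session (or, in Hazelcast, an eviction) also notifies.
    void remove(String sessionId) {
        if (sessions.remove(sessionId) != null) {
            listeners.forEach(l -> l.onChange("removed", sessionId));
        }
    }
}
```

Note that this only tells you about lifecycle events; creation, expiry, and the session contents themselves would still be managed by whatever put the sessions into the map (e.g. Spring Session or the container), which is why the answer says Hazelcast alone doesn't cover the full lifecycle.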

Loopback.io and CouchDB connector

I am trying to explore the opportunity of building a CouchDB connector for LoopBack.io.
I know CouchDB has a REST interface, but for some reason, when I put the base URL of my local CouchDB server into a REST connector in LoopBack, I get an error back about some headers missing in the request from Couch.
Since some useful functions could be added to exploit views and so on, I am exploring creating a loopback-connector-couchdb.
So the easy question is: what are the methods that a connector needs to implement to map exactly to the standard API endpoints offered by LoopBack.io for a model?
Basic example:
POST /models (with payload body) --> all good on the "create" function of the connector
DELETE /models/{id} --> I get an error saying that the destroyAll function is NOT implemented (correct), but the destroy function IS implemented instead...
What is the difference between HEAD /models/{id} and GET /models/{id}/exists in terms of the functions called?
I tried to verify the existence of the model created (successfully) in CouchDB via its ID using GET /models/{id}/exists, and instead of the exists function being called in the connector, another function called count is called instead.
It is as if some, but not all, functions are mapped to the connector (note: I am not using the DataAccessObject property of the connector, as that seems to be for additional methods, so to speak... and one of the methods does work!)
...I am confused!
Thanks for any guidance. I am trying to follow this, but I can't easily map the standard API endpoints to the minimum functions of the connector (see point 2 above, for instance):
Building a connector - Loopback.io documentation
I would suggest playing with the API explorer to figure out your endpoints.
Create a sample LoopBack project via slc loopback
Create some models via slc loopback:model
Start the app via slc run
Browse to localhost:3000/explorer
In there you can see all the endpoints that are automatically generated by LoopBack. For example, if you click the GET endpoint for a model, it will show the query as GET /api/<modelname>.