Getting more detail from the Spring Cloud DiscoveryClient - spring-cloud

I note that with the various refactorings of common elements into Spring Cloud Commons, the information that you get from autowiring DiscoveryClient is rather sparse.
Let's say that I want to get more information about the incoming service data that the service receives when it registers with Eureka. Much of what I want is in the Application object.
I know that I could get this detail from the EurekaClient. How can I get access to the EurekaClient object?

I suspect you mean InstanceInfo objects, since Application basically just holds a list of InstanceInfos. The ServiceInstance returned from the Spring Cloud DiscoveryClient.getInstances(serviceId) is backed by an InstanceInfo. My guess is it would be easiest for you to autowire EurekaClient (or com.netflix.*.DiscoveryClient if you're using an older version) and go from there. We have to be sparse because we support more than just Eureka (Consul, Zookeeper).
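Here is a minimal sketch of what autowiring the Netflix EurekaClient might look like; the bean name, service id and the printed fields are just placeholders for whatever detail you actually need:

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.EurekaClient;
import com.netflix.discovery.shared.Application;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class EurekaDetails {

    // Autowire the Netflix EurekaClient directly instead of the
    // Spring Cloud DiscoveryClient abstraction.
    @Autowired
    private EurekaClient eurekaClient;

    public void printInstanceDetails(String serviceId) {
        // The Application object holds the registered InstanceInfo entries
        // for that service; it is null if the service id is unknown.
        Application application = eurekaClient.getApplication(serviceId);
        if (application == null) {
            return;
        }
        for (InstanceInfo instance : application.getInstances()) {
            // InstanceInfo carries the Eureka-specific detail (status, lease
            // info, metadata) that the generic ServiceInstance does not expose.
            System.out.println(instance.getHostName() + " -> " + instance.getMetadata());
        }
    }
}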

Related

How to access an EF Core in-memory DB from another application?

I have two applications, one is a Web API, and the other is a scheduled job.
Web API
First I run this service
There is an entity called 'User'
I'm adding some fake users using a DbContext called 'ApplicationContext'
The data will be persisted in an in-memory DB
Scheduled Job service (Background service)
Now I'm running this service and trying to access the same DbContext
But I don't see the fake users in the new context
How can I access the data in another application?
Your second application must access the DB via the API on the first; you can't access an in-memory database from another process any other way.
The good news is that this means the first application can expose higher-level features than the bare database, which is largely the entire purpose of a service.

@RefreshScope and /refresh not working for multiple instances

@RefreshScope and /refresh are not working for updating multiple service instances. I know this can be done using Spring Cloud Bus, but due to some constraints I cannot opt for that. Are there any alternatives?
Consider using Ribbon to determine the available instances and then trigger the refresh event on each of them. I have not tried this, but it seems to be possible from what I read in the documentation.
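As a rough sketch of that idea (untested, and assuming each instance exposes the refresh actuator endpoint over plain HTTP), you could enumerate the instances via the discovery client and POST to each one:

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class RefreshBroadcaster {

    // Enumerate every registered instance of the service, the same
    // information Ribbon uses for load balancing.
    @Autowired
    private DiscoveryClient discoveryClient;

    private final RestTemplate restTemplate = new RestTemplate();

    public void refreshAll(String serviceId) {
        List<ServiceInstance> instances = discoveryClient.getInstances(serviceId);
        for (ServiceInstance instance : instances) {
            // POST to each instance's refresh endpoint in turn
            // (/actuator/refresh on newer Spring Boot versions).
            String url = instance.getUri() + "/refresh";
            restTemplate.postForLocation(url, null);
        }
    }
}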

Deploying IdentityServer3 on Load Balancer

We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the rest, but I am concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple Identity Server installations serving the same requests (e.g. load balanced) you won't be able to use the various in-memory token stores, otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the other, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary to have tokens work across all instances.
There's an Entity Framework package available for the token stores. You'll need the operational data.
There's also a very useful guide to going live here.

Hazelcast: write-behind is unstable

I have a process that periodically populates data into a map with persistence. To be more exact, there are two nodes: a storage node with persistence enabled and cache maps defined, and a lite client node started with the 'lite' option and no maps defined. The connection between the nodes looks good. During testing I found that only around half of the populated data actually flows into persistence, even though all of the data is in the cache. I can confirm this by browsing the cache map and via the JMX stats. I cannot find any pattern in which data is affected or when it was populated.
Could someone please advise where the investigation should start?
This was my own fault: I did not pass the 'lite' option to the populator properly, so data was distributed between both nodes and only the entries owned by the storage node were written through, as the lite client does not have any persistence set up. I have not removed the question, in the hope that it saves someone else from the same silly mistake.
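For reference, a minimal sketch of what starting the populator as a true lite member might look like (the map name and values are placeholders, and the imports assume a Hazelcast 3.x classpath):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class LitePopulator {
    public static void main(String[] args) {
        // Mark this member as lite so it owns no partitions; every entry it
        // puts is stored on the storage node, which is the only member with
        // a MapStore (write-behind persistence) configured.
        Config config = new Config();
        config.setLiteMember(true);

        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
        IMap<String, String> map = member.getMap("persistedMap");

        map.put("key-1", "value-1"); // owned and written behind by the storage node
    }
}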

CQRS - Consuming event service

My question is regarding the consuming event services which subscribe to the events published by commands in CQRS.
Say I have a document generation service which generates documents based on certain events. Does the document generation service load the data from the domain via an aggregate root? If so, wouldn't the document generation service load data which may have been updated since the event was raised? How would you stop that from happening?
I guess I am assuming that the event will only carry the information received from the command DTO, and passing the whole domain model's data in the event feels very wrong.
You really should build your read model from your events, unless you consider your documents a part of the domain (in which case you would have a CreateDocumentX command).
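As an illustration of building the read model from events, here is a minimal sketch with a hypothetical OrderPlaced event; the names and fields are invented purely for the example:

import java.util.HashMap;
import java.util.Map;

public class DocumentProjection {

    // Hypothetical event carrying only the data the command produced.
    public static final class OrderPlaced {
        final String orderId;
        final String customerName;
        final double total;

        OrderPlaced(String orderId, String customerName, double total) {
            this.orderId = orderId;
            this.customerName = customerName;
            this.total = total;
        }
    }

    // Read model owned by the generation service: documents keyed by order id.
    private final Map<String, String> documentsByOrderId = new HashMap<>();

    public void on(OrderPlaced event) {
        // The document reflects the state carried on the event, so later
        // changes to the aggregate cannot leak into it.
        String document = "Invoice for " + event.customerName + ", total " + event.total;
        documentsByOrderId.put(event.orderId, document);
    }

    public String documentFor(String orderId) {
        return documentsByOrderId.get(orderId);
    }

    public static void main(String[] args) {
        DocumentProjection projection = new DocumentProjection();
        projection.on(new OrderPlaced("order-42", "Alice", 99.50));
        System.out.println(projection.documentFor("order-42"));
    }
}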
All I can say is that when you are talking about CQRS, you should describe the issue in more depth so that it can be solved or helped with properly.
However, from what I've read, you can have persistent storage on your write side, but you should ensure that you are not reaching outside of your aggregate's context.
Related issue reading-data-from-database-on-write-side-in-cqrs