Dynamically parameterized FeignClients - feign

I need to access several servers that share the same REST interface.
For one server, or for interchangeable instances of the same server, I would use Ribbon and a FeignClient, but these servers are not interchangeable.
I've got a list of server addresses in my application.yml file, like this:
servers:
  - id: A
    url: http://url.a
  - id: B
    url: http://url.b
I'd like to be able to route a request to a server depending on an input parameter, for example:
ClientA -> /rest/api/request/A/get -> http://url.a/get
ClientB -> /rest/api/request/B/get -> http://url.b/get
The middleware is agnostic regarding the clients, but each backend server is bound to specific clients.
many clients -> one middleware -> some servers
How would you achieve that using Feign? Is it even possible?

The simplest way is to create two Feign targets, reusing the same interface and builder.
Client clientA = Feign.builder()
        .target(Client.class, "https://url.a");
Client clientB = Feign.builder()
        .target(Client.class, "https://url.b");
This will create a new Client for each target URL. However, as long as the supporting components, such as the Encoder, Decoder, Client, and ErrorDecoder, are singletons and thread-safe, the cost of each additional client is minimal.
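For example, here is a rough sketch of sharing those components across targets; the GsonEncoder/GsonDecoder and OkHttpClient instances (from feign-gson and feign-okhttp) are only placeholders for whatever encoder, decoder, and client you actually use:
// Thread-safe components created once and reused by every target.
Encoder encoder = new GsonEncoder();
Decoder decoder = new GsonDecoder();
feign.Client httpClient = new OkHttpClient(); // fully qualified to avoid clashing with your own Client interface

Client clientA = Feign.builder()
        .encoder(encoder)
        .decoder(decoder)
        .client(httpClient)
        .target(Client.class, "https://url.a");

Client clientB = Feign.builder()
        .encoder(encoder)
        .decoder(decoder)
        .client(httpClient)
        .target(Client.class, "https://url.b");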
If you don't want to create multiple clients, the alternative is to include a URI as a method parameter.
@RequestLine("POST /repos/{owner}/{repo}/issues")
void createIssue(URI host, Issue issue, @Param("owner") String owner, @Param("repo") String repo);
The host value in the example above will replace the base URI provided to the builder. The drawback to this approach is that you will need to modify your interface to add the URI parameter to the appropriate methods and adjust the callers to supply the target.
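Either approach can then be keyed off the id in the request path. As a rough sketch only (the ServerConfig holder, the servers list, the id variable, and the get() method are hypothetical names standing in for your own configuration and API), you could build one client per configured server up front and pick it by id:
// Build one Feign client per entry from the application.yml list.
Map<String, Client> clients = new HashMap<>();
for (ServerConfig server : servers) {
    clients.put(server.getId(), Feign.builder()
            .target(Client.class, server.getUrl()));
}

// e.g. while handling /rest/api/request/{id}/get in the middleware:
Client client = clients.get(id);
if (client == null) {
    throw new IllegalArgumentException("Unknown server id: " + id);
}
return client.get();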

Related

What modifications need to be made to get WireMock running?

I have a .NET Core web API solution called ReportService, which calls another API endpoint (we can call this PayrollService) to get payroll reports. My requirement is to mock the PayrollService using WireMock.Net.
I also currently have an automation test case that calls the ReportService controller directly, executes all of the service logic, including the classes that call PayrollService and the DB layer, and gets the HTTP result back from ReportService.
Please note that the automation test cases live in a separate solution. So my requirement is to run the automation test cases against ReportService as before, while PayrollService is mocked by WireMock.
So, what changes need to happen in the codebase? Do we have to change the URL that ReportService uses (to call PayrollService) to the WireMock server base URL in the ReportService solution? Please let us know, and please use the terms I have used in the question regarding the project names so I am clear.
Your assumption is indeed correct: you have to make the base URL used by ReportService configurable,
so that for your unit/integration tests you can provide the URL on which the WireMock.Net server is running.
Example:
[Test]
public async Task ReportService_Should_Call_External_API_And_Get_Report()
{
    // Arrange (start WireMock.Net server)
    var server = WireMockServer.Start();

    // Setup your mapping
    server
        .Given(Request.Create().WithPath("/foo").UsingGet())
        .RespondWith(
            Response.Create()
                .WithStatusCode(200)
                .WithBody(@"{ ""msg"": ""Hello world!"" }")
        );

    // Act (configure your ReportService to connect to the URL where WireMock.Net is running)
    var reportService = new ReportService(server.Urls[0]);
    var response = reportService.GetReport();

    // Assert
    Assert.Equal(response, ...); // Verify
}

Sharing objects with all verticle instances

My application, an API server, is intended to be organized as follows:
MainVerticle is called on startup and should create all the objects the application needs, mainly a MongoDB pool of connections (MongoClient.createShared(...)) and a global configuration object available instance-wide. It also starts the HTTP listener: several instances of an HttpVerticle.
HttpVerticle is in charge of receiving requests and, based on the command xxx in the payload, executing the XxxHandler.handle(...) method.
Most of the XxxHandler.handle(...) methods will need to access the database. In addition, some of them will also deploy additional verticles with parameters from the global conf. For example, LoginHandler.handle(...) will deploy a verticle to keep the user's state while they are connected, and this verticle will be undeployed when the user logs out.
I can't figure out how to get the global configuration object while in XxxHandler.handle(...) or in a "sub"-verticle. Same for the Mongo client.
Q1: For configuration data, I tried to use SharedData. In MainVerticle.start() I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
lm.put("var", "val");
and in HttpVerticle.start() I have:
LocalMap<String, String> lm = vertx.sharedData().getLocalMap("conf");
log.debug("var={}", lm.get("var"));
but the log output is var=null... What am I doing wrong?
Q2: Beyond this basic example with a <String, String> map type, what if the value is a mutable object like a JsonObject, which is actually what I need?
Q3: Finally, how do I make the Mongo client instance available to all verticles?
Instead of getLocalMap() you should be using getClusterWideMap(). Then you should be able to operate on shared data across the whole cluster and not just within one verticle.
Be aware that the shared-data operations are asynchronous, so the code might look like this (code in Groovy):
vertx.sharedData().getClusterWideMap( 'your-name' ){ AsyncResult<AsyncMap<String,String>> res ->
    if( res.succeeded() )
        res.result().put( 'var', 'val', { log.info "put succeeded: ${it.succeeded()}" } )
}
You should be able to use any Serializable objects in your map.
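For Q2 and Q3, here is a rough Java sketch of the same idea; the map name "conf", the key "global", and the pool name "main-pool" are made-up names, and this assumes Vert.x runs in clustered mode (otherwise a local map within the single JVM is the equivalent). JsonObject can be stored in Vert.x shared data directly, and MongoClient.createShared() returns a client backed by the same underlying pool whenever it is called with the same pool name, so any verticle can look it up rather than having it passed in:
// Inside a verticle's start() method:
vertx.sharedData().<String, JsonObject>getClusterWideMap("conf", res -> {
    if (res.succeeded()) {
        // Q2: JsonObject values can be stored and retrieved like any other value.
        res.result().get("global", get -> {
            if (get.succeeded()) {
                JsonObject conf = get.result();
                // Q3: the same pool name yields the same shared Mongo connection pool.
                MongoClient mongo = MongoClient.createShared(vertx, conf, "main-pool");
                // ... use conf and mongo here ...
            }
        });
    }
});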

IdentityServer3 PublicOrigin and IssuerUri Difference and Usage in IdentityServerOptions

I ran into an issue when deploying to IIS. Apparently the client goes through a reverse proxy, and all of the OpenID configuration (disco) endpoints show the IP address instead of the domain name. PublicOrigin solves my problem. However, I still don't understand the difference between
PublicOrigin
and
IssuerUri
Example in:
var options = new IdentityServerOptions
{
    PublicOrigin = "https://myids/project1/",
    IssuerUri = "https://myids/project1/",
    ...
}
I can see the disco document change as well when both values are updated respectively, i.e.:
{
  "issuer": "https://myids/project1/",
  "jwks_uri": "https://myids/project1/.well-known/jwks",
  "authorization_endpoint": "https://myids/project1/connect/authorize",
  "token_endpoint": "https://myids/project1/connect/token",
  "userinfo_endpoint": "https://myids/project1/connect/userinfo",
  "end_session_endpoint": "https://myids/project1/connect/endsession",
  "check_session_iframe": "https://myids/project1/connect/checksession",
  "revocation_endpoint": "https://myids/project1/connect/revocation",
  "introspection_endpoint": "https://myids/project1/connect/introspect",
  ...
}
And why not just make it the same as IssuerUri? I have read the documentation on this, but technically it is just a description of the properties. I would like to understand more.
Many thanks.
IssuerUri is the unique identifier of the authorization server. The value of this property is embedded into ID tokens in the iss claim and it is checked during token validation.
On the other side, PublicOrigin is just the public URI of the server. If the server is behind a reverse proxy, then without this hint it would advertise its private URI in the OpenID Connect metadata (.well-known/openid-configuration).
Why not have just a single property? The OpenID Connect specification (§ 16.15, Issuer Identifier) supports multiple issuers residing on the same host and port. However, the same section of the specification recommends hosting only a single issuer per host and port (i.e. single-tenant).
When would you use a multi-tenant architecture? Suppose you want to build and sell your own Authentication-as-a-Service. Now you have two options: assign a dedicated URI (PublicOrigin) to each of your customers, or use a single PublicOrigin with a dedicated IssuerUri for each customer.

Connect external language server to VSCode extension

I want to implement a VSCode extension that uses the Language Server Protocol, but I want the server component to be on an actual server (in the cloud), and not a part of the VSCode extension.
Can I set the client extension to connect to a server via websockets or HTTP?
Multiple kinds of ServerOptions are supported when you initialize a LanguageClient, according to the signature of ServerOptions.
You can use StreamInfo if you want to use a real remote server as your language server. Here is sample code that connects to your server via WebSocket and initializes a LanguageClient:
const connection = connectToServer(hostname, path);
const client = new LanguageClient(
    "docfxLanguageServer",
    "Docfx Language Server",
    () => Promise.resolve<StreamInfo>({
        reader: connection,
        writer: connection,
    }),
    {});

// Wraps the WebSocket in a Duplex stream the LanguageClient can read from and write to.
private connectToServer(hostname: string, path: string): Duplex {
    const ws = new WebSocket(`ws://${hostname}/${path}`);
    return WebSocket.createWebSocketStream(ws);
}
I am not sure whether you can control the location of the language server, but there is another option. You do not need to implement the Language Server Protocol to, for example, provide parsing help. In that case you can implement your own convenient parsing service API (tailored to the nature of the language you want to support):
Within your extension, subscribe to workspace edit events using workspace.onDidChangeTextDocument.
Restart a 1-second timeout every time the file on-change event is raised.
When the timeout expires without any further file modification, gather all relevant files and send them to your parsing server.
In your extension, create a DiagnosticCollection using https://code.visualstudio.com/api/references/vscode-api#languages.createDiagnosticCollection and populate it with the warnings/errors/hints returned by the parsing server in the cloud.
Subscribe to other workspace events, e.g. workspace.onDidOpenTextDocument or workspace.onDidCloseTextDocument, in order to keep the DiagnosticCollection content relevant.

Service Fabric ServicePartitionResolver ResolveAsync

I am currently using the ServicePartitionResolver to get the HTTP endpoint of another application within my cluster.
var resolver = ServicePartitionResolver.GetDefault();
var partition = await resolver.ResolveAsync(serviceUri, partitionKey ?? ServicePartitionKey.Singleton, CancellationToken.None);
var endpoints = JObject.Parse(partition.GetEndpoint().Address)["Endpoints"];
return endpoints[endpointName].ToString().TrimEnd('/');
This works as expected. However, if I redeploy my target application and its port changes on my local dev box, the source application still returns the old endpoint (which is now invalid). Is there a cache somewhere that I can clear? Or is this a bug?
Yes, they are cached. If you know that the partition is no longer valid, or if you receive an error, you can call resolver.ResolveAsync() using the overload that takes the earlier ResolvedServicePartition previousRsp, which triggers a refresh.
This api-overload is used in cases where the client knows that the
resolved service partition that it has is no longer valid.
See this article too.
Yes, they are cached. There are two ways to overcome this.
The simplest code change is to replace var resolver = ServicePartitionResolver.GetDefault(); with var resolver = new ServicePartitionResolver();. This forces the service to create a new ServicePartitionResolver object every time, whereas GetDefault() returns the cached object.
[Recommended] The right way of handling this is to implement a custom CommunicationClientFactory that derives from CommunicationClientFactoryBase, then initialize a ServicePartitionClient and call InvokeWithRetryAsync. It is documented clearly in the Service Communication article, in the "Communication clients and factories" section.