Communication between microservices using ServiceID from discovery instead of direct host? - spring-cloud

I'm new to microservices. I'm reading some examples about the discovery server, and I see that we can call another microservice's API by using a URL like:
http://inventory-service/api/inventory/{productCode}.
"inventory-service" is a service instance I registered in discovery.
So my question is: what is the benefit of using the serviceId instead of calling the direct host:port:
http://localhost:9009/api/inventory/{productCode}.

Let's assume you register inventory-service with the Eureka server by configuring the Eureka serviceUrl in src/main/resources/bootstrap.properties:
spring.application.name=inventory-service
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
Then build inventory-service and start two instances of it by running the following commands.
java -jar -Dserver.port=9001 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
java -jar -Dserver.port=9002 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
When you visit the Eureka dashboard at http://localhost:8761/ you will see two instances of inventory-service registered.
If you wanted to apply client-side load balancing from your consumer application without Eureka, you would need a config like this:
inventory-service.ribbon.listOfServers=localhost:9001,localhost:9002
inventory-service.ribbon.eureka.enabled=false
If you then started new instances, you would have to add them to your consumer configuration by hand.
With the serviceId you don't have to worry about this, because all instances register under the same identifier and are added automatically to the list of available servers. That is one of the advantages of using the serviceId instead of hostname:port.
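For illustration, here is a minimal sketch of a consumer calling the service through its serviceId with a load-balanced RestTemplate (the RestClientConfig and InventoryClient classes below are hypothetical names, not taken from the question):

// RestClientConfig.java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // @LoadBalanced lets Ribbon resolve "inventory-service" against the Eureka
    // registry and pick one of the registered instances for each call.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// InventoryClient.java
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class InventoryClient {

    private final RestTemplate restTemplate;

    public InventoryClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public String findByProductCode(String productCode) {
        // The host part of the URL is the serviceId, not a fixed host:port.
        return restTemplate.getForObject(
                "http://inventory-service/api/inventory/{productCode}",
                String.class, productCode);
    }
}

Starting or stopping inventory-service instances then requires no change on the consumer side, because Ribbon refreshes its server list from Eureka.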


Is it possible to define a static server list with ribbon (via feign) when Eureka is in use?

Environment
Spring Boot 1.5.13.RELEASE
Spring Cloud Edgware.SR3
Java 8
Configuration
Eureka client is enabled and working correctly (I have tested and everything's working as I expect).
Some relevant properties from my configuration:
feign.hystrix.enabled=true
eureka.client.fetch-registry=true
spring.cloud.service-registry.auto-registration.enabled=true
service1.ribbon.listOfServers=https://www.google.com
Context
I have an application which speaks to 3 other services using feign clients. Two of these are discovered via Eureka service discovery. These are working well. The final service is an external one with a single static hostname and I do not want this resolved via Eureka. Since I do want Eureka for 2 of these services I would like to keep Eureka enabled.
Question
For the final service I tried adding service1.ribbon.listOfServers=https://www.google.com to application.properties; however, this causes the following error at runtime when invoking the Feign client:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.netflix.hystrix.exception.HystrixRuntimeException: Service1Client#test() failed and no fallback available.] with root cause
pricing_1 |
pricing_1 | com.netflix.client.ClientException: Load balancer does not have available server for client: service1
pricing_1 | at com.netflix.loadbalancer.LoadBalancerContext.getServerFromLoadBalancer(LoadBalancerContext.java:483) ~[ribbon-loadbalancer-2.2.5.jar!/:2.2.5]
My client is configured as follows:
@FeignClient("service1")
public interface Service1Client {

    @GetMapping(value = "/")
    String test();
}
Thanks in advance for any advice.
Consideration
Since the spirit of Ribbon, as I understand it, is to act as a client-side load balancer, and in my case there is nothing to load balance (I have one fixed static hostname that returns a single A record in DNS), Ribbon actually feels like an unnecessary component. What I really wanted was the Feign client, as I like the fact that it abstracts away the lower-level HTTP request and object serialization. So I suppose an alternative follow-up question is: can I use Feign without Ribbon? The nice out-of-the-box behaviour seems to be to use Ribbon; even the Javadoc of the @FeignClient annotation says:
If ribbon is available it will be
used to load balance the backend requests, and the load balancer can be configured
using a @RibbonClient with the same name (i.e. value) as the feign client.
suggesting the two are quite closely related even if they are serving different purposes.
As you mentioned, there are two ways to solve your problem.
Use Feign without Ribbon
If you specify the url attribute in the @FeignClient annotation, it will work without Ribbon, like below.
@FeignClient(name = "service1", url = "http://www.google.com")
public interface Service1Client {

    @GetMapping(value = "/")
    String test();
}
In this case, your other two Feign clients will still work with Ribbon and Eureka.
Use Feign with Ribbon and without Eureka
What you are missing in your configuration is NIWSServerListClassName.
Its default value is com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList, which uses Eureka to retrieve the list of servers. If you set NIWSServerListClassName to ConfigurationBasedServerList for a Ribbon client (Feign client), only that client will use the listOfServers list instead of retrieving the server list from Eureka; your other Feign clients will still work with Eureka.
service1:
  ribbon:
    NIWSServerListClassName: com.netflix.loadbalancer.ConfigurationBasedServerList
    listOfServers: http://www.google.com
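If you keep your configuration in application.properties rather than YAML, the equivalent (assuming the same Ribbon client name) would be:
service1.ribbon.NIWSServerListClassName=com.netflix.loadbalancer.ConfigurationBasedServerList
service1.ribbon.listOfServers=http://www.google.com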

How to configure Serilog in Web Api 2 with OData v4

I am a beginner. I want to know how to configure Serilog in a REST API to write logs to Seq. What should the configuration be in WebApiConfig and in the controller, and how do I write a log event?
First you need to install the Seq server on your machine. The server registers itself as a service on your machine and runs on the default port 5341.
In the Global.asax file you need to create the logger instance.
In the controller you can then use the static Log class to write log events of various levels to the Seq server.
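A minimal sketch of what that can look like, assuming the Serilog and Serilog.Sinks.Seq NuGet packages are installed and Seq is listening on the default http://localhost:5341 (the controller below is a made-up example, not part of the question):

// Global.asax.cs
using System.Web.Http;
using Serilog;

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Create the global logger once at startup and point it at Seq.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.Seq("http://localhost:5341")
            .CreateLogger();

        GlobalConfiguration.Configure(WebApiConfig.Register);
    }
}

// Any controller can then write events at the level it needs via the static Log class.
public class ProductsController : ApiController
{
    public IHttpActionResult Get(int id)
    {
        Log.Information("Fetching product {ProductId}", id);
        return Ok(id);
    }
}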

Dynamically update Eureka instance metadata

When a Spring Cloud Eureka instance starts, I can define some instance metadata statically (in eureka.instance.metadataMap.* in my application.yml) or dynamically (using EurekaInstanceConfigBean, for example). But once the instance is registered, this metadata no longer updates in Eureka after I update the config bean.
Is there a way to define metadata that updates dynamically in Eureka, so that Eureka works kind of like a key-value store for each instance?
If you want the Eureka client to update its own metadata, just use the com.netflix.appinfo.ApplicationInfoManager object and call registerAppMetadata(Map<String, String>).
This info will then be updated on the Eureka server, usually within about 30 seconds. You can use DI to get the instance of ApplicationInfoManager.
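A minimal sketch of the self-update approach (the component name and metadata key are made up; it assumes the Netflix Eureka client classes are on the classpath, as they are with spring-cloud-starter-eureka):

import com.netflix.appinfo.ApplicationInfoManager;
import java.util.Collections;
import org.springframework.stereotype.Component;

@Component
public class InstanceMetadataUpdater {

    private final ApplicationInfoManager applicationInfoManager;

    public InstanceMetadataUpdater(ApplicationInfoManager applicationInfoManager) {
        this.applicationInfoManager = applicationInfoManager;
    }

    public void publish(String key, String value) {
        // Merges the entry into this instance's metadata map; the change is
        // propagated to the Eureka server with a subsequent registration/heartbeat.
        applicationInfoManager.registerAppMetadata(Collections.singletonMap(key, value));
    }
}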
If you want to update metadata for another service instance, invoke the following REST API against the Eureka server:
PUT /eureka/apps/appID/instanceID/metadata?key=value
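For example, with curl (the app ID and instance ID below are placeholders, and the Eureka server is assumed to be at http://localhost:8761):
curl -X PUT "http://localhost:8761/eureka/apps/MY-SERVICE/my-host:my-service:8080/metadata?someKey=someValue"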

Create app instance (in service fabric cluster explorer) ignores number of instances on local machine

Using version 5.1.163 of the Service Fabric runtime.
Created a Service Fabric application with one stateless web API (i.e. using the OWIN communication listener).
Modified the generated code so that the listening endpoint contains partition id/instance id/new_guid (just as is the case for stateful services). This should allow me to create another app instance so that I can have multi-tenancy at the application level.
By default, the Local.xml file is set to 1 instance for this service.
Deployed it to the local machine with F5. Verified that it is deployed as only one instance.
Verified that the service is working fine.
Navigated to the local Service Fabric Explorer and clicked on the Cluster/Application/AppType node. Clicked on 'Create app instance'.
It successfully created a 2nd app instance.
However, in this new instance the service is deployed to all 5 nodes.
I was expecting it to deploy the service instance to only one node. Is this a bug, perhaps only in this version of Service Fabric?
When you deploy a Service Fabric application using Visual Studio (or from PowerShell) you use the Deploy-FabricApplication.ps1 that is generated for your application and found in /scripts under your SF project. This script does two things (mainly):
Create/update the application type
Create a new/upgrade existing instance of the application type
The second part there is similar to what you do in SF Explorer, except that this one also considers the publish profile file you supply. The PS script actually reads your publish profile XML files, extracts any parameters in there into a hashtable (a dictionary), and passes that as an argument in step 2.
You can create an instance of an SF application type using the PS cmdlets (alternatively you can use FabricClient). The following command does this: New-ServiceFabricApplication. Here you have the chance to supply your own application parameters, including instance count for services in your new application instance (if you have a dynamic parameter for that in your application manifest).
So, when you use SF Explorer to create a new application instance, you cannot control how that instance is created; it always uses the default parameter values specified directly in ApplicationManifest.xml, not the values you have specified in your publish profiles (local 1-node, local 5-node, cloud, etc.).
To control the creation, run New-ServiceFabricApplication with your parameters as a hashtable, as shown below.
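A rough sketch (the application name, type name, version and parameter name are placeholders; the parameter must be declared in your ApplicationManifest.xml):

Connect-ServiceFabricCluster

New-ServiceFabricApplication `
    -ApplicationName "fabric:/MyApp2" `
    -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" `
    -ApplicationParameter @{ "WebApi_InstanceCount" = "1" }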

Publicly exposing a WCF restful service via http from Service Fabric

I am trying to expose a WCF-based RESTful service via HTTP and have so far been unsuccessful. I'm trying on my local machine first to prove it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on I see an endpoints entry but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see even though I have an endpoint set up? I did not expect the GUID and other numbers in the path. This makes me suspect that SF is not seeing my service as publicly accessible and has maybe only set it up for internal access within the app. If I dig down into my service manifest I see this, as expected:
<Resources>
  <Endpoints>
    <Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
  </Endpoints>
</Resources>
But how do I know if the service itself is mapped to it? When I use the crazy long url above and try a simple method of my service I get an http 202 response and no response data as expected. If I then change the method name to one that doesn't exist I get the same thing, not the expected http 404. I've tried using both my machine name and localhost. Same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener((context) =>
        new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            endpointResourceName: "ResolverEndpoint"
        )
    )};
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
Essential changes to your code are the use of the public name of your cluster as the endpoint URL and an additional WebHttpBehavior on that endpoint.
The endpoint resource specified in the service manifest is shared by all of the replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that more than one replica from different partitions may end up in the same process. In order to differentiate the messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a singleton-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name and the node name or IP address from the NodeContext, and then provide the path you want the listener to open at, as sketched below.
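A rough sketch of that approach, reusing the names from the question (the /resolver path suffix is made up, and it assumes the WcfCommunicationListener overload that accepts an EndpointAddress):

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener((context) =>
    {
        // Port comes from the "ResolverEndpoint" resource in the service manifest,
        // the host from the node this instance is placed on.
        var endpoint = context.CodePackageActivationContext.GetEndpoint("ResolverEndpoint");
        var uri = new Uri(string.Format(
            "{0}://{1}:{2}/resolver",
            endpoint.Protocol.ToString().ToLowerInvariant(),
            context.NodeContext.IPAddressOrFQDN,
            endpoint.Port));

        return new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            address: new EndpointAddress(uri));
    })};
}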
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
Please note that you can then have only one application, one service and one partition on a node, and when you are testing locally, keep the instance count of that service at 1. When deployed in the actual cluster you can use an instance count of -1.