Service Fabric Naming Service not forwarding to endpoint assigned to Guest Executable - azure-service-fabric

I have set up an application with two services, one a standard ASP.NET Core API and the other a Node Express app, by following the guide here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-existing-app
When I deploy the application locally, I can use the naming service to hit the ASP.NET Core application at:
http://localhost:19081/sf_node_test_02/AspNetCore/api/values
Likewise, I expect to be able to hit the api of my guest executable using this address:
http://localhost:19081/sf_node_test_02/NodeApp
However, this does not work.
If I use the direct URL of the service, such as:
http://localhost:30032/
I can see the Node.js app is in fact working as expected.
Now, I know that when running the ASP.NET Core application it explicitly sends its listening address back to the naming service, while the guest executable does not, which explains why they might behave differently. Also, from what I understand, the current version of Service Fabric does not give the guest executable information about a dynamically assigned port, so the port must be hard-coded in the service endpoint and the application must listen on that same port.
E.g. If I have:
<Endpoint Name="NodeAppTypeEndpoint" Port="30032" Protocol="http" Type="Input" UriScheme="http"/>
Then in the nodejs app I must also have:
const port = process.env.PORT || 30032;
app.listen(port, () => {
    console.log(`Listening on port: ${port}`);
});
Notice 30032 appears in both places.
From the documentation:
Furthermore you can ask Service Fabric to publish this endpoint to the
Naming Service so other services can discover the endpoint address to
this service. This enables you to be able to communicate between
services that are guest executables. The published endpoint address is
of the form UriScheme://IPAddressOrFQDN:Port/PathSuffix. UriScheme and
PathSuffix are optional attributes. IPAddressOrFQDN is the IP address
or fully qualified domain name of the node this executable gets placed
on, and it is calculated for you.
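Given the quoted format and the endpoint resource above, the published address is assembled roughly like this. This is only an illustrative sketch in plain JavaScript (the function name and object shape are mine, not a Service Fabric API); the attribute names come from the quoted documentation:

```javascript
// Build the published endpoint address from endpoint-resource attributes,
// per the quoted docs: UriScheme://IPAddressOrFQDN:Port/PathSuffix
// (UriScheme and PathSuffix are optional; IPAddressOrFQDN is calculated
// by Service Fabric for the node the executable lands on.)
function publishedAddress({ uriScheme, ipAddressOrFqdn, port, pathSuffix }) {
  let address = `${uriScheme}://${ipAddressOrFqdn}:${port}`;
  if (pathSuffix) address += `/${pathSuffix}`;
  return address;
}

console.log(publishedAddress({
  uriScheme: 'http',
  ipAddressOrFqdn: 'localhost', // calculated by Service Fabric per node
  port: 30032,                  // from the Endpoint resource in the manifest
}));
// -> http://localhost:30032
```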
I interpreted this to mean that if my ServiceManifest.xml has UseImplicitHost="true", then it should automatically give the naming service the URL constructed from the endpoint description:
http://localhost:19081/sf_node_test_02/NodeApp -> http://localhost:30032
Is it correct that Service Fabric will automatically give the naming service this listening address for this service?
Is there any way for me to inspect the mapping in the naming service?
That would tell me whether it has an entry for my Node application that is just different from what I expect, or no entry at all.
If it doesn't have an entry, then I don't see how this guest executable application would be reachable publicly when deployed in the cloud either.

You can use the QueryManager of FabricClient to list the registered endpoints for services in your cluster. This should reveal whether there is an endpoint for your Node service.
var fabricClient = new FabricClient();
var applicationList = fabricClient.QueryManager.GetApplicationListAsync().GetAwaiter().GetResult();
foreach (var application in applicationList)
{
    var serviceList = fabricClient.QueryManager.GetServiceListAsync(application.ApplicationName).GetAwaiter().GetResult();
    foreach (var service in serviceList)
    {
        var partitionList = fabricClient.QueryManager.GetPartitionListAsync(service.ServiceName).GetAwaiter().GetResult();
        foreach (var partition in partitionList)
        {
            var replicas = fabricClient.QueryManager.GetReplicaListAsync(partition.PartitionInformation.Id).GetAwaiter().GetResult();
            foreach (var replica in replicas)
            {
                if (!string.IsNullOrWhiteSpace(replica.ReplicaAddress))
                {
                    // ReplicaAddress is a JSON document with an "Endpoints" map
                    var replicaAddress = JObject.Parse(replica.ReplicaAddress);
                    foreach (var endpoint in replicaAddress["Endpoints"])
                    {
                        var endpointAddress = endpoint.First().Value<string>();
                        Console.WriteLine($"{service.ServiceName} {endpointAddress}");
                    }
                }
            }
        }
    }
}
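To see what such an entry looks like, here is a minimal sketch (in Node.js, since the service in question is a Node app) that parses a replica address of the same JSON shape the code above reads. The endpoint name and address are illustrative values, not taken from a real cluster:

```javascript
// A replica address, as published to the naming service, is a JSON object
// with an "Endpoints" map of endpoint name -> listening address.
// The values below are made up for illustration.
const replicaAddress = JSON.parse(
  '{"Endpoints":{"NodeAppTypeEndpoint":"http://localhost:30032"}}'
);

for (const [name, address] of Object.entries(replicaAddress.Endpoints)) {
  console.log(`${name} -> ${address}`);
}
// -> NodeAppTypeEndpoint -> http://localhost:30032
```

If the naming service has an entry for your guest executable at all, it should look like this, with the endpoint name from your ServiceManifest.xml as the key.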

Related

Assign Domain Name For Same IP, Different Nodeports Minikube

I'm trying to set up a local environment for microservices using minikube. My cluster consists of 4 pods, and the minikube IP for all 4 of them is the same. However, each service runs on a unique NodePort.
EG: 172.42.159.12:12345 & 172.42.159.12:23456
Ingress generates them as
http://172.42.159.12:12345
http://172.42.159.12:23456
http://172.42.159.12:34567
http://172.42.159.12:45678
They all work fine when accessed via the IP, and they work fine when using a load balancer in a cloud environment.
But I want this to work on my minikube, and I can't use /etc/hosts to assign domain names to each service because it does not accept the NodePorts being passed in.
Any help on this is really appreciated.
So I found a solution for this.
The only way I found to do it is with a third-party app called Fiddler.
How To:
Download And Run Fiddler
Open Fiddler => Rules => Customize Rules
Scroll down to find static function OnBeforeRequest(oSession: Session)
Pass in
if (oSession.HostnameIs("your-domain.com")) {
    oSession.bypassGateway = true;
    oSession["x-overrideHost"] = "minikube_ip:your_port";
}
Save File

Communication between microservices using ServiceID from discovery instead of direct host?

I'm new to microservices. I'm reading some examples about discovery servers, and I see we can call another microservice API using a URL like:
http://inventory-service/api/inventory/{productCode}.
"inventory-service" is a service instance I registered in discovery.
So my question is: what is the benefit of using the serviceId instead of calling the host:port directly:
http://localhost:9009/api/inventory/{productCode}.
Let's assume you register inventory-service with the Eureka server by configuring the Eureka serviceUrl in src/main/resources/bootstrap.properties.
spring.application.name=inventory-service
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
Then build inventory-service and start two instances of it by running the following commands:
java -jar -Dserver.port=9001 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
java -jar -Dserver.port=9002 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
When you visit Eureka Dashboard http://localhost:8761/ you will see 2 instances of inventory-service registered.
If you wanted to apply client-side load balancing from your consumer application without Eureka, you would need a config like this:
server.ribbon.listOfServers=localhost:9001,localhost:9002
server.ribbon.eureka.enabled=false
If you start new instances, you would need to register them in your consumer configuration by hand.
With a ServiceID you don't have to worry about that: all instances register under the same identifier, and each new instance is added automatically to the list of available servers. That is one of the advantages of using a ServiceID instead of hostnames.
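To make the advantage concrete, here is a small sketch (plain JavaScript, illustrative only; this is not Ribbon's actual implementation) of what client-side load balancing over a fixed server list amounts to. With discovery, the hard-coded list below would be fetched and refreshed automatically under the serviceId:

```javascript
// Minimal round-robin client-side load balancer over a fixed server list.
// Without discovery, this list must be maintained by hand in every consumer;
// with a ServiceID, the registry supplies and refreshes it for you.
function createRoundRobin(servers) {
  let next = 0;
  return () => servers[next++ % servers.length];
}

const pick = createRoundRobin(["localhost:9001", "localhost:9002"]);
console.log(pick()); // localhost:9001
console.log(pick()); // localhost:9002
console.log(pick()); // localhost:9001 again
```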

Not able to find my stateless service that uses the WcfCommunicationListener

I am trying to reach my stateless service using the IServiceProxyFactory CreateServiceProxy method. It seems to find the service instance, but when I invoke a method I get the error "Client is trying to connect to invalid address net.tcp://localhost...". The stateless service uses WcfCommunicationListener.
The default implementation of IServiceProxyFactory is ServiceProxyFactory, which creates an instance of FabricTransportServiceRemotingClientFactory, which in turn gives you a FabricTransportServiceRemotingClient. This one communicates (as the name suggests) using Fabric transport over TCP. Fabric transport expects the service to have a fabric transport listener, FabricTransportServiceRemotingListener, on an address like fabric:/applicationname/servicename.
If you want to connect to your service that is listening to connections using the WcfCommunicationListener then you need to connect to it using WcfCommunicationClient that you can create like this:
// Create binding
Binding binding = WcfUtility.CreateTcpClientBinding();

// Create a partition resolver
IServicePartitionResolver partitionResolver = ServicePartitionResolver.GetDefault();

// Create a WcfCommunicationClientFactory object.
var wcfClientFactory = new WcfCommunicationClientFactory<IMyService>(
    clientBinding: binding, servicePartitionResolver: partitionResolver);

var myServiceClient = new WcfCommunicationClient(
    wcfClientFactory,
    ServiceUri,
    ServicePartitionKey.Singleton);
The above sample is from the documentation https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-communication-wcf
So either change your service to use fabric transport if you want to use ServiceProxy to create a client, or change your client side to use a WcfCommunicationClient instead.

Publicly exposing a WCF restful service via http from Service Fabric

I am trying to expose a WCF-based RESTful service via HTTP and am so far unsuccessful. I'm trying on my local machine first to prove it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine-name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on, I see an endpoints entry, but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see even though I have an endpoint set up? I did not expect the GUID and other numbers in the path. This makes me suspect that SF is not seeing my service as publicly accessible and is instead perhaps set up only for internal access within the app. If I dig down into my service manifest I see this, as expected:
<Resources>
<Endpoints>
<Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
</Endpoints>
</Resources>
But how do I know if the service itself is mapped to it? When I use the crazy long URL above and try a simple method of my service, I get an HTTP 202 response and no response data, as expected. If I then change the method name to one that doesn't exist, I get the same thing, not the expected HTTP 404. I've tried using both my machine name and localhost, with the same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener((context) =>
            new WcfCommunicationListener<IResolverV2>(
                serviceContext: context,
                wcfServiceObject: new ResolverServiceV2(),
                listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
                endpointResourceName: "ResolverEndpoint"
            )
        )
    };
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
The essential changes to your code are the use of the public name of your cluster as the endpoint URL and an additional WebHttpBehavior on that endpoint.
The endpoint resource specified in the service manifest is shared by all the replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that replicas from different partitions end up in the same process. In order to differentiate the messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a singleton-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name, the node name or IP address from the NodeContext, and then provide the path you want the listener to open at.
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
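As a quick illustration of that format string, here is a JavaScript sketch (placeholder values, not from a real cluster) that reproduces the same address shape, which matches the GUID-laden URL from the question:

```javascript
// Mirrors the C# format "{0}://{1}:{2}/{5}/{3}-{4}":
// scheme://host:port/<random guid>/<partitionId>-<replicaOrInstanceId>
function getListenAddress(scheme, host, port, partitionId, instanceId, guid) {
  return `${scheme}://${host}:${port}/${guid}/${partitionId}-${instanceId}`;
}

console.log(getListenAddress(
  "http", "surfacelap", 80,
  "39cda0f7-cef4-4c7f-8af2-d05786a834b0", // partition ID (placeholder)
  "131111019607641260",                   // instance ID (placeholder)
  "d5be9425-3247-4290-b77f-1a90f728fb8d"  // per-listener GUID (placeholder)
));
```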
Please note that you can then have only one application, one service, and one partition on a node, and when you are testing locally, keep the instance count of that service at 1. When deployed to an actual cluster you can use an instance count of -1.

Kubernetes - nginx angularjs POD to access nodejs REST API url (another POD) within the cluster dynamically

Using Kubernetes (Google Container Engine), within the same Google Cloud cluster I have a front-end service (nginx + AngularJS) and a REST API service (NodeJS API). I don't want to expose the NodeJS API service on a public domain, so its 'ServiceType' is set to 'ClusterIP' only. How do we infer NODE_API_SERVICE_HOST and NODE_API_SERVICE_PORT ({SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT) inside the AngularJS program?
(function() {
    'use strict';
    angular
        .module('mymodule', [])
        .config(["RestangularProvider", function(RestangularProvider) {
            var apiDomainHost = process.env.NODE_API_SERVICE_HOST;
            var apiDomainPort = process.env.NODE_API_SERVICE_PORT;
            RestangularProvider.setBaseUrl('https://' + apiDomainHost + ':' + apiDomainPort + '/node/api/v1');
        }]);
})();
This will not work (ReferenceError: process is not defined), as this AngularJS code is executed on the client side.
My Dockerfile is simple and inherits the nginx stop/start controls.
FROM nginx
COPY public/angular-folder /usr/share/nginx/html/projectSubDomainName
Page 43 of 62 in http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes explains that we can invoke a command via sh:
"containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
Is this sustainable across pod restarts? If so, how do I get this variable into AngularJS?
This will not work (ReferenceError: process is not defined), as this AngularJS code is executed on the client side.
If the client is outside the cluster, the only way it will be able to access the NodeJS API is if you expose it to the client's network, which is probably the public internet. If you're concerned about the security implications of that, there are a number of different ways to authenticate the service, such as using nginx auth_basic.
"containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
Is this sustainable across pod restarts? If so, how do I get this variable into AngularJS?
Yes, the service IP & port are stable, even across pod restarts. As for how to communicate the NODE_API_SERVICE_{HOST,PORT} variables to the client, you will need to inject them from a process running server-side (within your cluster) into the response (e.g. directly into the JS code, or as a JSON response).