Service Fabric - Are Endpoint definitions required for service remoting? - azure-service-fabric

I'm trying to understand in what scenarios endpoint definitions are required in the ServiceManifest. I have a stateful service with multiple service remoting listeners defined. Here is my implementation of CreateServiceReplicaListeners:
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[]
    {
        new ServiceReplicaListener(context => this.CreateServiceRemotingListener(context)),
        new ServiceReplicaListener(context =>
        {
            return new FabricTransportServiceRemotingListener(context,
                new CustomService<string>(),
                new FabricTransportRemotingListenerSettings
                {
                    EndpointResourceName = "ustdfdomgfasdf"
                });
        }, name: "CustomListener")
    };
}
The endpoint resource name for the custom listener is deliberately garbage, and I have not defined that endpoint in the service manifest's resources:
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" />
    <Endpoint Name="ReplicatorEndpoint" />
  </Endpoints>
</Resources>
However, in testing I find I'm still able to get a proxy to CustomListener:
InventoryItem i = new InventoryItem(description, price, number, reorderThreshold, max);
var applicationInstance = FabricRuntime.GetActivationContext().ApplicationName.Replace("fabric:/", String.Empty);
var inventoryServiceUri = new Uri("fabric:/" + applicationInstance + "/" + InventoryServiceName);

//Doesn't fail
ICustomService customServiceClient = ServiceProxy.Create<ICustomService>(inventoryServiceUri,
    i.Id.GetPartitionKey(),
    listenerName: "CustomListener");

//Still doesn't fail
var added = await customServiceClient.Add(1, 2);
To me, this indicates endpoint definitions aren't required for service remoting as long as the endpoint and listener names are unique. Is that so? If not, why does my example work?

Endpoint definitions tell Service Fabric to allocate ports on the node for the services being started there; this prevents port collisions when many services open ports on the same node.
Once allocated, the ports are exposed as environment variables in the service process, something like: Fabric_Endpoint_<EndpointName> : port
When you create the listeners, they are responsible for opening the ports, generally using the ports allocated via the endpoint definitions, but nothing prevents you from creating a custom listener that opens any port (if it is running with enough privilege to do so).
CreateServiceRemotingListener(context) creates the default remoting listener.
The EndpointResourceName setting tells a listener which endpoint resource to use; if it is not defined, the DefaultEndpointResourceName setting is used as the default endpoint, and its default value is "ServiceEndpoint".
What I can't answer for sure right now is whether, if the named EndpointResourceName is not found, the listener falls back to DefaultEndpointResourceName. I assume so, but I'd need to check the code to confirm that.
When multiple listeners share the same port, each one is generally identified by a path, something like: tcp://127.0.0.1:1234/endpointpath
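For reference, once an endpoint resource is declared in the manifest, a listener (or any other code in the service) can look up the allocated port through the activation context. This is a minimal sketch, not from the original answer, assuming the "ServiceEndpoint" resource shown in the question's manifest:

using System;
using System.Fabric;

// Minimal sketch: reading the port Service Fabric allocated for a declared endpoint resource.
// Assumes an <Endpoint Name="ServiceEndpoint" /> entry exists in the service manifest.
var activationContext = FabricRuntime.GetActivationContext();
var endpoint = activationContext.GetEndpoint("ServiceEndpoint");
int allocatedPort = endpoint.Port;

// The same value is also surfaced to the process as an environment variable,
// following the Fabric_Endpoint_<EndpointName> pattern mentioned above.
string portFromEnvironment = Environment.GetEnvironmentVariable("Fabric_Endpoint_ServiceEndpoint");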

Related

gRPC Node microservice talking to another microservice in istio mesh

I've got several gRPC microservices deployed via Istio in my k8s pod behind a gateway that handles the routing for web clients. Things work great when I need to send an RPC from client (browser) to any of these services.
I'm now at the point where I want to call service A from service B directly. How do I go about doing that?
Here is the code for how both servers are instantiated:
const server = new grpc.Server();
server.addService(MyService, new MyServiceImpl());
server.bindAsync(`0.0.0.0:${PORT_A}`, grpc.ServerCredentials.createInsecure(), () => {
  server.start();
});
A Service Account is being used with GOOGLE_APPLICATION_CREDENTIALS and a secret in my deployment YAML.
To call service A from service B, I was thinking the code in service B would look something like:
const serviceAClient: MyServiceClient = new MyServiceClient(`0.0.0.0:${PORT_A}`, creds);
const req = new SomeRpcRequest()...;
serviceAClient.someRpc(req, (err: grpc.ServiceError, response: SomeRpcResponse) => {
  // yay!
});
Is that naive? One thing I'm not sure about is the creds that I need to pass when instantiating the client. I get complaints that I need to pass ChannelCredentials, but every mechanism I've tried to create those creds has not worked.
Another thing I'm realizing is that 0.0.0.0 can't be correct because each service is in its own container paired with a sidecar proxy... so how do I route RPCs properly and attach proper creds?
I'm trying to construct the creds this way:
let callCreds = grpc.CallCredentials.createFromGoogleCredential(myOauthClient);
let channelCreds = grpc.ChannelCredentials.createSsl().compose(callCreds);
const serviceAClient = new MyServiceClient(`0.0.0.0:${PORT_A}`, channelCreds);
and I'm mysteriously getting the following error stack:
UnhandledPromiseRejectionWarning: TypeError: Channel credentials must be a ChannelCredentials object
    at new ChannelImplementation (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/channel.js:69:19)
    at new Client (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/client.js:58:36)
    at new ServiceClientImpl (/bish/proto/activities/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at PresenceService.<anonymous> (/bish/src/servers/presence/dist/presence.js:348:44)
    at step (/bish/src/servers/presence/dist/presence.js:33:23)
    at Object.next (/bish/src/servers/presence/dist/presence.js:14:53)
    at fulfilled (/bish/src/servers/presence/dist/presence.js:5:58)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
This is odd, because channelCreds is a ComposedChannelCredentialsImpl, which, in fact, extends ChannelCredentials.
OK, at least the root cause of the "Channel credentials must be a ChannelCredentials object" error is now known: I'm developing node packages side by side as symlinks, and each of the dependencies has its own copy of grpc-js in it, so the credentials object and the client come from two different copies of the library.
https://github.com/npm/npm/issues/7742#issuecomment-257186653

How to implement a serviceworker in SFCC (Demandware)

I was wondering if anyone here has experience with implementing a service worker in SFCC/Demandware.
I generate a service worker with Webpack with sw-precache-webpack-plugin
The problem is that a service worker should be served from the root of the domain, i.e. site.com/sw.js.
JS files normally end up in the static/ folder.
Does anyone have an idea how to serve this JS file from the root of the project in Demandware/SFCC?
Unfortunately, registering a service worker under a scope that is higher up the path than the service worker file itself does not work (as stated on MDN):
The service worker will only catch requests from clients under the service worker's scope.
The max scope for a service worker is the location of the worker.
(Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
Solution
Here is a suggestion for a working approach for serving "/sw.js" in Demandware (Salesforce):
Create a new controller (or pipeline), e.g. "ServiceWorker-GetFile"; the response should be the file content, which can be read from whatever source you wish:
Content asset (dw.content.ContentMgr.getContent());
Library file (dw.content.ContentMgr.getContent() or directly reading a file with dw.io.File / dw.io.FileReader);
even Site preference (although I wouldn't recommend it);
Create an entry in Business Manager / Merchant Tools / SEO / Aliases to route "/sw.js" to "ServiceWorker-GetFile", i.e. use something along:
{
  ...
  "your-host" : [
    ...,
    {
      "if-site-path": "/sw.js",
      "pipeline": "ServiceWorker-GetFile"
    }
  ]
}
This may seem like unnecessary overhead, but it was the only way I could find for serving files with a root path in the URI.
Serving other root files as well
By expanding the controller (renaming it to, say, "Content-GetFile" and adding GET/POST parameters like "name" and/or "source") this could be conveniently used for other files as well ("/manifest.json", "/.well-known/assetlinks.json" etc.). In the next example of Business Manager / ... / Aliases, let Content-GetFile accept two parameters: "name" (which would be a file name in the content library or a content asset ID) and "source" (which would be "file" or "asset"):
...
{
  "if-site-path": "/sw.js",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "/ServiceWorker/sw.js",
    "source": "file"
  }
},
{
  "if-site-path": "/manifest.json",
  "pipeline": "Content-GetFile",
  "params": {
    "name": "MANIFEST_JSON",
    "source": "asset"
  }
}
Note that your code should handle the base paths of the resources appropriately (e.g. "/ServiceWorker/sw.js" from the above example does not say much on its own; you should know whether this is a path in a content library or a path relative to "cartridges//static/default/js/").
Dynamic content
Since the suggested approach uses a controller, you can dynamically process the content before serving it to the user (e.g. if you need to add/remove the "/v12435145145/" part from DMW links). The sky is the limit. :)
I'm currently messing around with the service workers on DW as well.
In my case I have directly added the script inside a footer.isml file like this:
<script>
  //init service worker
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker
        .register("${URLUtils.staticURL('/lib/sw/sw.js')}")
        .then(registration => {
          console.log(
            `Service Worker registered! Scope: ${registration.scope}`
          );
        })
        .catch(err => {
          console.log(`Service Worker registration failed: ${err}`);
        });
    });
  }
</script>
This works for me; at least I can see the "Service Worker registered" message.
I also had some issues with the SSL certificate: my development environment doesn't have a proper SSL certificate but uses HTTPS routes, so Chrome was complaining about it. I needed to run Chrome from the terminal using this command:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=[YOUR DOMAIN]
Unfortunately, I'm not able to get any line of code inside that service worker file to work. I even tried Safari, since it has a Service Workers option in the Develop menu, but it doesn't show any service worker running.
I hope this helps you.

Passing service instance ID into guest executable at runtime

Is it possible to pass the service instance ID (an int value) into a guest executable at runtime? I have looked at <ExeHost><Arguments>, but that's only good for static data that has to be provided up front.
The environment variable mentioned in the other answer provides the service package instance ID, which is not the same as the instance/replica ID. Generally speaking, SF's environment variables can provide the same information available using FabricRuntime, i.e. the node context and the code package activation context. In native SF services, the instance ID is provided at run time by the Fabric (in the ServiceContext class), since a single process can host multiple partitions and instances/replicas.
In a guest executable, which does not use SF APIs, the only option AFAIK is to query the Fabric for this information in a separate executable, run it as the SetupEntryPoint (which runs every time before the guest executable) and write the information to a file.
For example (compile the code into GetFabricData.exe and add it to the code package):
private static async Task MainAsync(string[] args)
{
    var serviceTypeName = args.FirstOrDefault();
    if (string.IsNullOrEmpty(serviceTypeName)) throw new ArgumentNullException(nameof(serviceTypeName));

    using (var client = new FabricClient())
    {
        var activationContext = FabricRuntime.GetActivationContext();
        var nodeContext = FabricRuntime.GetNodeContext();
        var nodeName = nodeContext.NodeName;
        var applicationName = new Uri(activationContext.ApplicationName);

        var replicas = await client.QueryManager.GetDeployedReplicaListAsync(nodeName, applicationName);

        // usually taking the first may not be correct
        // but in a guest executable it's unlikely there would be multiple partitions/instances
        var instance = replicas.OfType<DeployedStatelessServiceInstance>()
            .FirstOrDefault(c => c.ServiceTypeName == serviceTypeName);

        if (instance == null)
        {
            throw new InvalidOperationException($"Unable to find a service instance for {serviceTypeName}");
        }

        File.WriteAllText("FabricData", instance.InstanceId.ToString());
    }
}
And in the service manifest:
<SetupEntryPoint>
  <ExeHost>
    <Program>GetFabricData.exe</Program>
    <Arguments>Guest1Type</Arguments>
  </ExeHost>
</SetupEntryPoint>
Then, the guest executable can simply read the FabricData file.
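If the guest executable happens to be a .NET console app (an assumption; a guest executable can be written in anything), reading the value back could be as simple as this sketch, which assumes the SetupEntryPoint wrote the file to the working directory shared with the guest:

using System;
using System.IO;

// Minimal sketch, assuming a .NET guest executable and that "FabricData" was written
// by the SetupEntryPoint to the current working directory.
long instanceId = long.Parse(File.ReadAllText("FabricData"));
Console.WriteLine($"Running as instance {instanceId}");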
It's available in an environment variable. See here for a complete list of environment variables available to services: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-manage-multiple-environment-app-configuration
Note that the one you're asking for, Fabric_ServicePackageInstanceId, is generally only meant for internal consumption, and it identifies the entire service package. That means that if you have multiple code packages (executables) in your service package, they will all get the same ID.
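With that caveat in mind, a .NET code package could still read the variable for diagnostics; a minimal sketch, not part of the original answer:

using System;

// Minimal sketch: reading the (internal) Fabric_ServicePackageInstanceId environment variable.
// As noted above, it identifies the whole service package, not an individual instance/replica.
string packageInstanceId = Environment.GetEnvironmentVariable("Fabric_ServicePackageInstanceId");
Console.WriteLine(packageInstanceId ?? "Variable not set (not running under Service Fabric?)");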

loadbalanced ribbon client initialization against discovery service (eureka)

I have a service which runs some init scripts after application startup (implemented with ApplicationListener<ApplicationReadyEvent>). In these scripts I need to call other services with a RestTemplate which is @LoadBalanced. When the call to the service is invoked, there's no information about instances of the remote service, because the discovery server has not been contacted at that time (I guess).
java.lang.IllegalStateException: No instances available for api-service
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:79)
So is there a way to get the list of available services from the discovery server at application startup, before my init script executes?
Thanks
edit:
The problem is more related to the fact that in the current environment (dev) all services are tied together in one service (api-service). So from within api-service I'm trying to call, via the @LoadBalanced client, api-service itself, which doesn't yet know about itself. Can I register some listener or something similar to know when api-service (itself) becomes available?
Here are the sample applications. I'm mainly interested in how to get this method working.
edit2:
Now a possible solution could be to create a EurekaEventListener:
public static class InitializerListener implements EurekaEventListener {

    private EurekaClient eurekaClient;
    private RestOperations restTemplate;

    public InitializerListener(EurekaClient eurekaClient, RestOperations restTemplate) {
        this.eurekaClient = eurekaClient;
        this.restTemplate = restTemplate;
    }

    @Override
    public void onEvent(EurekaEvent event) {
        if (event instanceof StatusChangeEvent) {
            if (((StatusChangeEvent) event).getStatus().equals(InstanceInfo.InstanceStatus.UP)) {
                ResponseEntity<String> helloResponse = restTemplate.getForEntity("http://api-service/hello-controller/{name}", String.class, "my friend");
                logger.debug("Response from controller is {}", helloResponse.getBody());
                eurekaClient.unregisterEventListener(this);
            }
        }
    }
}
and then register it like this:
EurekaEventListener initializerListener = new InitializerListener(discoveryClient, restTemplate);
discoveryClient.registerEventListener(initializerListener);
However, this is only executed when the application is registered with the discovery service for the first time. The next time, when I stop api-service and run it again, the event is not published. Is there any other event I can catch?
Currently, in Camden and earlier, applications are required to be registered in Eureka before they can query for other applications. Your call is likely too early in the registration lifecycle. There is an InstanceRegisteredEvent that may help. There are plans to work on this in the Dalston release train.

How to access settings.xml in Azure Service Fabric stateful/stateless service?

How can I access and read parameters defined in the PackageRoot/Settings/Settings.xml file from my stateful/stateless service code?
For example I have a section DocumentDbConfig with Parameter EndpointUrl:
<Section Name="DocumentDbConfig">
  <Parameter Name="EndpointUrl" Value="{url}"/>
</Section>
And I would like to read it in my code:
public async Task<ServiceActionResult<Result>> GetResult()
{
    _client = new Client({{ EndpointUrl }}); //HOW TO GET ENDPOINT URL FROM SETTINGS?
}
As long as your code has access to the ServiceContext you can access all of the configuration packages that were deployed with your service. For example:
serviceContext.CodePackageActivationContext.GetConfigurationPackageObject("Config")
where "Config" is the name of the configuration package. From there, you can access all of the sections and keys/values within each section. Be sure to refer to the ConfigurationPackage documentation as a guide on how to access this data, as well as how to listen to events that fire when the configuration package changes.