Create app instance (in service fabric cluster explorer) ignores number of instances on local machine - azure-service-fabric

Using version 5.1.163 of the Service Fabric runtime.
Created a Service Fabric application with one stateless Web API service (i.e. using the OWIN communication listener).
Modified the generated code so that the listening endpoint contains the partition ID/instance ID/a new GUID (just as is the case for stateful services). This should allow me to create another app instance so that I can have multi-tenancy at the application level.
By default, the Local.xml file is set to 1 instance for this service.
Deployed it to the local machine with F5. Verified that it is deployed to only one instance.
Verified that the service is working fine.
Navigated to the local Service Fabric Explorer and clicked on the Cluster/Application/AppType node. Clicked on 'Create app instance'.
It successfully created a 2nd app instance.
However, in this new instance the service is deployed to all 5 nodes.
I was expecting it to deploy the service instance to only one node. Is this a bug, and only in this version of Service Fabric?

When you deploy a Service Fabric application using Visual Studio (or from PowerShell) you use the Deploy-FabricApplication.ps1 that is generated for your application and found in /scripts under your SF project. This script does two things (mainly):
Create/update the application type
Create a new/upgrade existing instance of the application type
The second part is similar to what you do in SF Explorer, except that the script also considers the publish profile file you supply. It reads your publish profile XML files, extracts any parameters in there into a hashtable (a dictionary), and passes that as an argument in step 2.
You can create an instance of an SF application type using the PS cmdlets (alternatively you can use FabricClient). The command for this is New-ServiceFabricApplication. Here you have the chance to supply your own application parameters, including the instance count for services in your new application instance (if you have a dynamic parameter for that in your application manifest).
So, when you use SF Explorer to create a new application instance you cannot control how that instance is created; it always uses the default parameter values as specified directly in ApplicationManifest.xml, not the values you have specified in your publish profiles (local1, local5, cloud, etc.).
To control the creation, run New-ServiceFabricApplication with your parameters as a hashtable.
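For example, a minimal sketch, assuming the application type is registered as "MyAppType" version "1.0.0" and the application manifest declares a "WebApi_InstanceCount" parameter (all of these names are placeholders for whatever your ApplicationManifest.xml actually defines):

# Connect to the local development cluster first
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000

# Create a second named application instance of the already-registered type,
# overriding the instance count instead of relying on the manifest default
New-ServiceFabricApplication `
    -ApplicationName "fabric:/MyApp2" `
    -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" `
    -ApplicationParameter @{ "WebApi_InstanceCount" = "1" }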

Related

Is it possible to have a single frontend select between backends (defined dynamically)?

I am currently looking into deploying Traefik/Træfik on our Service Fabric cluster.
Basically I have a setup with any number of applications (services), each defined with a tenant name, and each of these services is in fact a separate web UI.
I am trying to figure out if I can configure a single frontend to target a backend so I don't have to define a new frontend each time I deploy a new UI app. Something like:
[frontend.tenantui]
rule = "HostRegexp:localhost,{tenantName:[a-z]+}.example.com"
backend = "fabric:/WebApp/{tenantName}"
The idea is to have it such that I can just deploy new UI services without updating the frontend configuration.
I am currently using the Service Fabric provider for my backend services, but I am open to using the file provider or something else if that is required.
Update:
The service manifest contains labels that let Traefik create backends and frontends.
The labels are defined for one service, let's call it WebUI as an example. Now when I deploy an instance of WebUI it gets a label and Traefik understands it.
Then I deploy ANOTHER instance with a DIFFERENT set of parameters. It's still the WebUI service and it uses the same manifest, so it gets the same labels and the same routing. But what I would really want is to let it have a label containing some sort of rule so I could route to the name of the service instance (determined at runtime, not design time). Specifically, I would like the runtime part to be part of the domain name (thus the suggestion of a HostRegexp-style rule).
I don't think it is possible to use the matched group from the HostRegexp to determine the backend.
A possibility would be to use the Property Manager API to dynamically set the frontend rule for the service instance after creating it. Also, see this for a complete example on using the API.

Communication between microservices using the ServiceID from discovery instead of the direct host?

I'm new to microservices. I'm reading some examples about discovery servers, and I see we can call another microservice API by using a URL like:
http://inventory-service/api/inventory/{productCode}.
"inventory-service" is a service I registered in discovery.
So my question is: what is the benefit of using the serviceId instead of calling the host:port directly:
http://localhost:9009/api/inventory/{productCode}.
Let's assume you register inventory-service with the Eureka server by configuring the Eureka serviceUrl in src/main/resources/bootstrap.properties.
spring.application.name=inventory-service
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
Then build inventory-service and start 2 instances of it by running the following commands.
java -jar -Dserver.port=9001 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
java -jar -Dserver.port=9002 target/inventory-service-0.0.1-SNAPSHOT-exec.jar
When you visit Eureka Dashboard http://localhost:8761/ you will see 2 instances of inventory-service registered.
If you want to apply client-side load balancing from your consumer application without the discovery server, you would need a config like this (the prefix is the Ribbon client name, i.e. the service ID your consumer calls):
inventory-service.ribbon.listOfServers=localhost:9001,localhost:9002
inventory-service.ribbon.eureka.enabled=false
If you start new instances you would need to add them to your consumer configuration by hand.
With the ServiceID you don't have to worry about that, because all instances register under the same identifier and are added to the list of available servers automatically. That is one of the advantages of using the ServiceId instead of a hostname.
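For illustration, a minimal consumer-side sketch assuming Spring Cloud Netflix (Eureka + Ribbon); the ConsumerApplication and InventoryClient class names are placeholders, not part of the original question:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class ConsumerApplication {

    // @LoadBalanced lets the RestTemplate resolve "inventory-service"
    // through Eureka/Ribbon instead of DNS.
    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }
}

@Service
class InventoryClient {

    private final RestTemplate restTemplate;

    InventoryClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    String getInventory(String productCode) {
        // The URL uses the service ID, not host:port; Ribbon picks one of the
        // registered instances (9001 or 9002) for each call.
        return restTemplate.getForObject(
                "http://inventory-service/api/inventory/" + productCode, String.class);
    }
}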

Play Framework how to set jdbc properties after startup

In my Play application the database settings are not known before the application starts. I have to read them from an environment variable after automatic deployment and startup of the application.
The app is deployed on Cloud Foundry, and there is an environment variable called VCAP_SERVICES (a JSON string) in which all bound services are listed, e.g. the database service including its credentials.
Is there a preferred way to do this, in the sense of still being able to use things like:
DataSource ds = DB.getDataSource();
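As a rough sketch of the reading part only (Jackson is used for parsing; the "cleardb" service key and the jdbcUrl/username/password credential field names are assumptions that depend on the actual Cloud Foundry service binding), the extracted values could then be fed into db.default.url/user/password, e.g. via system properties or an overridden GlobalSettings.onLoadConfig, so that DB.getDataSource() keeps working:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical helper: pulls JDBC settings out of VCAP_SERVICES.
public class VcapDbSettings {

    public final String url;
    public final String user;
    public final String password;

    private VcapDbSettings(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    public static VcapDbSettings fromEnv() throws Exception {
        JsonNode root = new ObjectMapper().readTree(System.getenv("VCAP_SERVICES"));
        // First bound instance of the (assumed) "cleardb" database service.
        JsonNode creds = root.path("cleardb").get(0).path("credentials");
        return new VcapDbSettings(
                creds.path("jdbcUrl").asText(),
                creds.path("username").asText(),
                creds.path("password").asText());
    }
}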

Query on DNS & connect to existing vm

In my current code base, when I create a VM, the DNS name is dynamically set to the same value as the instance name. For example, if my VM name is "anandInstance", the DNS name is generated as "anandInstance.cloudapp.net". Is there a way to change the DNS name to something like "dns1.cloudapp.net" during creation through the REST API?
"Connect to existing VM": is it possible to achieve this option through a REST call? With the "connect to existing..." option we get a list of VMs/services to choose from and the VM is created successfully. How can I achieve the same using the API?
Thanks
In my current code base, when i create a VM, DNS name is being
dynamically set as same as the instance name. For example, consider if
my VM name is "anandInstance", DNS name of the name is being generated
as "anandInstance.cloudapp.net". Is there a way to change the DNS name
like "dns1.cloudapp.net" during the creation thru REST API??
I don't think it is possible. Imagine what a nightmare the portal would become if you were able to do so: how would you link a Cloud Service (whatever.cloudapp.net) to an actual deployment (MyDemoVm123)? However, you can use your own domain and have CNAME records pointing to your "want-to-change-for-some-reason.cloudapp.net" name (frankly, I suspect that soon we will be using even longer names).
"Connect to existing VM" , is it possible to achieve this option
through REST call?
Connecting to a VM is essentially opening an RDP session. If it is a Windows VM, you can try using the Download RDP File API call. Once you get the file, just launch it (e.g. with Process.Start). If it is a Linux VM, just open an SSH client to port 22 (or the one you have defined) on the Cloud Service DNS name you have.
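For reference, the Download RDP File operation in the classic Service Management REST API looks roughly like this (the subscription, service, deployment, and role-instance names are placeholders; the call also needs the x-ms-version header and management-certificate authentication):

GET https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roleinstances/<role-instance-name>/ModelFile?FileType=RDP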
UPDATE
From the Azure portal, for the stand-alone machine option, we are able to give the DNS name with the default cloudapp.net. How do we do the same through the REST API call? Is there any specific parameter to specify it?
When you are using the REST API, you first create a Cloud Service (still named "hosted service" in the REST API) where your machine will be hosted. Here you give the name for that hosted service (the DNS name with the default cloudapp.net). Then you call the Create Virtual Machine Deployment API action.
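In outline, again with placeholders and with the request bodies omitted (both calls need the x-ms-version header and management-certificate authentication):

# 1. Create the hosted service; its name becomes <service-name>.cloudapp.net
POST https://management.core.windows.net/<subscription-id>/services/hostedservices

# 2. Create the VM deployment inside that hosted service
POST https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments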
In case "connect to existing.." option , we are getting a list of vms/services to choose and VM is getting created successfully. How to
achieve the same using API.
When you want to get a list of all VMs, just get a list of all hosted services, then get the properties of each and work out whether it is a VM or a plain Cloud Service (for example by querying the properties of each service). I don't see direct access to a list of virtual machines. But as this feature is in preview, things might change in the future.
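That enumeration maps to roughly these operations (same placeholders, x-ms-version header, and certificate authentication as above):

# List all hosted services in the subscription
GET https://management.core.windows.net/<subscription-id>/services/hostedservices

# Get the properties (including deployments) of one service to see whether it hosts a VM
GET https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>?embed-detail=true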
Hope my answer is clear?

How can I make chef restart a service with additional parameters passed in?

I have a template for a Rails site for Sphinx configuration. There can be multiple different Sphinx services on the same machine running on different ports, one per app. Therefore, I only want to restart Sphinx for a site if its corresponding configuration template changes. I've created an /etc/init.d/sphinx script that restarts just one Sphinx instance based on a parameter, similar to:
/etc/init.d/sphinx restart /etc/sphinx/site1.conf
Where site1.conf is defined by a Chef template. I'd really love to use the notifies functionality for Chef Templates to pass in the correct site1.conf parameter if the template changes. Is this possible?
Alternatively, I suppose I could just register a different service for each site similar to:
/etc/init.d/sphinx_site1
However, I'd prefer to pass in the parameters to the script instead.
When defining a service resource, you can customize the start, stop, and restart commands that will be run. You can define a service resource for each site that you have using these customized commands and set up their corresponding notifications.
For example:
service "sphinx_site1" do
supports :restart => true
restart_command "/etc/init.d/sphinx restart /etc/sphinx/site1.conf"
action :nothing
end
template "/etc/sphinx/site1.conf" do
notifies :restart, "service[sphix_site1]"
end