How can I make Chef restart a service with additional parameters passed in?

I have a template for a Rails site's Sphinx configuration. There can be multiple Sphinx services on the same machine running on different ports, one per app. Therefore, I only want to restart Sphinx for a site if its corresponding configuration template changes. I've created an /etc/init.d/sphinx script that restarts just one Sphinx instance based on a parameter, similar to:
/etc/init.d/sphinx restart /etc/sphinx/site1.conf
Where site1.conf is defined by a Chef template. I'd really love to use the notifies functionality for Chef Templates to pass in the correct site1.conf parameter if the template changes. Is this possible?
Alternatively, I suppose I could just register a different service for each site similar to:
/etc/init.d/sphinx_site1
However, I'd prefer to pass in the parameters to the script instead.

When defining a service resource, you can customize the start, stop, and restart commands that will be run. You can define a service resource for each site you have, using these customized commands, and set up the corresponding notifications.
For example:
service "sphinx_site1" do
supports :restart => true
restart_command "/etc/init.d/sphinx restart /etc/sphinx/site1.conf"
action :nothing
end
template "/etc/sphinx/site1.conf" do
notifies :restart, "service[sphix_site1]"
end
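If you run several Sphinx sites on the same node, you can generate the same pair of resources in a loop. A minimal sketch, assuming the site names live in a hypothetical node['sphinx']['sites'] attribute and share a template source:
node['sphinx']['sites'].each do |site|
  conf = "/etc/sphinx/#{site}.conf"

  service "sphinx_#{site}" do
    supports :restart => true
    restart_command "/etc/init.d/sphinx restart #{conf}"
    action :nothing
  end

  template conf do
    source "sphinx.conf.erb"          # assumed shared template name
    variables(:site => site)
    notifies :restart, "service[sphinx_#{site}]"
  end
end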

Related

Is it possible to have a single frontend select between backends (defined dynamically)?

I am currently looking into deploying Traefik/Træfik on our Service Fabric cluster.
Basically I have a setup with any number of applications (services), each defined with a tenant name, and each of these services is in fact a separate web UI.
I am trying to figure out if I can configure a single frontend to target a backend so that I don't have to define a new frontend each time I deploy a new UI app. Something like:
[frontend.tenantui]
rule = "HostRegexp:localhost,{tenantName:[a-z]+}.example.com"
backend = "fabric:/WebApp/{tenantName}"
The idea is to have it such that I can just deploy new UI services without updating the frontend configuration.
I am currently using the Service Fabric provider for my backend services, but I am open to using the file provider or something else if that is required.
Update:
The service manifest contains labels that let Traefik create backends and frontends.
The labels are defined for one service; let's call it WebUI as an example. Now when I deploy an instance of WebUI, it gets a label and Traefik understands it.
Then I deploy ANOTHER instance with a DIFFERENT set of parameters. It's still the WebUI service and it uses the same manifest, so it gets the same labels and the same routing. But what I would really want is to let it have a label containing some sort of rule so I could route to the name of the service instance (determined at runtime, not design time). Specifically, I would like the runtime part to be part of the domain name (hence the suggestion of a HostRegexp-style rule).
I don't think it is possible to use the matched group from the HostRegexp to determine the backend.
A possibility would be to use the Property Manager API to dynamically set the frontend rule for the service instance after creating it. Also, see this for a complete example of using the API.
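A rough sketch of that idea from PowerShell, using the cluster's REST endpoint to put a property on the service's name. This assumes an unsecured local cluster and that your Traefik Service Fabric provider is set up to honor label overrides stored as Service Fabric properties; the property name and rule value are illustrative only:
$cluster = "http://localhost:19080"
$nameId  = "WebApp/WebUI_tenant1"    # the service's fabric name without the fabric:/ scheme

$body = @{
    PropertyName = "traefik.frontend.rule.default"    # illustrative; depends on the provider's label-override support
    Value        = @{ Kind = "String"; Data = "Host:tenant1.example.com" }
} | ConvertTo-Json -Depth 3

# Put Property call of the Service Fabric REST API
$uri = "$cluster/Names/$nameId/" + '$/GetProperty?api-version=6.0'
Invoke-RestMethod -Method Put -Uri $uri -ContentType "application/json" -Body $body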

How do I update Service Fabric application parameters using PowerShell

Using PowerShell, is there a way to update Service Fabric application parameters directly without having to redeploy the whole application?
If you want to update the application's ports, you need to update the application manifest file and push that update to the cluster. For example, using Visual Studio you can make changes and, when you select Publish, choose to update the app.
You can of course use PowerShell to apply the changes, but the process is the same: you make changes to the manifest and push that file to the cluster. There is no option to simply update a port using PowerShell like you can on an Azure VM.
You can read more about updating the application manifest in the docs below:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade
https://learn.microsoft.com/en-us/powershell/module/servicefabric/update-servicefabricservice?view=azureservicefabricps
It appears that if you do not use default services, you can update the parameters.
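For example, after connecting to the cluster and building a parameter hashtable (the names and values below are placeholders; an unsecured local cluster is assumed), you can run the upgrade command shown next:
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

$applicationName = "fabric:/MyApp"
$applicationVer  = "1.0.0"                        # the registered application type version to upgrade to
$parameters      = @{ "WebApi_Port" = "8081" }    # application parameter name -> new value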
Start-ServiceFabricApplicationUpgrade -ApplicationName $applicationName -ApplicationTypeVersion $applicationVer `
-Monitored -FailureAction Rollback -UpgradeDomainTimeoutSec 360 -HealthCheckRetryTimeoutSec 10 -ApplicationParameter $parameters -Force
If you are using the default services in the application manifest, then we recommend that you stop doing so, since, as you have mentioned, you then have to change the manifest to deploy new settings. If you want a more ops-friendly way of doing things, remove the default services and use Update-ServiceFabricService to change the parameters on the fly (sketched below). Generally we recommend default services only for dev/test.
src: https://github.com/Azure/service-fabric-issues/issues/114#issuecomment-269797023
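As a sketch of that Update-ServiceFabricService approach (the service name and count are placeholders; this changes the running service description rather than the manifest defaults):
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Adjust a running stateless service directly, without editing the manifest
Update-ServiceFabricService -Stateless `
    -ServiceName "fabric:/MyApp/MyStatelessService" `
    -InstanceCount 3 -Force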

Create app instance (in Service Fabric Cluster Explorer) ignores number of instances on local machine

Using version 5.1.163 of the Service Fabric runtime.
Created a Service Fabric application with one stateless Web API (i.e. using an OWIN communication listener).
Modified the generated code so that the listening endpoint contains the partition ID/instance ID/new GUID (just as is the case for stateful services). This should allow me to create another app instance so that I can have multi-tenancy at the application level.
By default, the Local.xml file is set to 1 instance for this service.
Deployed it to the local machine with F5. Verified that it is deployed to only one instance.
Verified that service is working fine.
Navigated to the local Service Fabric Explorer and clicked on the Cluster/Application/AppType node. Clicked on 'Create app instance'.
It successfully created a 2nd app instance.
However, in this new instance the service is deployed to all 5 nodes.
I was expecting it to deploy the service instance to only one node. Is this a bug? Is it specific to this version of Service Fabric?
When you deploy a Service Fabric application using Visual Studio (or from PowerShell), you use the Deploy-FabricApplication.ps1 script that is generated for your application and found in /scripts under your SF project. This script does two things (mainly):
Create/update the application type
Create a new/upgrade existing instance of the application type
The second part is similar to what you do in SF Explorer, except that this one also considers the publish profile file you supply. The PowerShell script actually reads your publish profile XML files, extracts any parameters in there into a hashtable (a dictionary), and passes that as an argument in step 2.
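For reference, an application parameter file referenced by a publish profile looks roughly like this (a sketch; the application and parameter names are placeholders):
<?xml version="1.0" encoding="utf-8"?>
<!-- e.g. ApplicationParameters\Local.1Node.xml -->
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <Parameter Name="WebApi_InstanceCount" Value="1" />
  </Parameters>
</Application>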
You can create an instance of an SF application type using the PS cmdlets (alternatively you can use FabricClient). The following command does this: New-ServiceFabricApplication. Here you have the chance to supply your own application parameters, including instance count for services in your new application instance (if you have a dynamic parameter for that in your application manifest).
So, when you use SF Explorer to create a new application instance, you cannot control how that instance is created; it always uses the default parameter values specified directly in ApplicationManifest.xml, not the values you have specified in your publish profiles (Local.1Node, Local.5Node, Cloud, etc.).
To control the creation, run New-ServiceFabricApplication with your parameters as a hashtable.
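A minimal sketch, assuming the application type is already registered and the manifest exposes an instance-count parameter (all names and values here are placeholders):
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

New-ServiceFabricApplication `
    -ApplicationName "fabric:/MyApp2" `
    -ApplicationTypeName "MyAppType" `
    -ApplicationTypeVersion "1.0.0" `
    -ApplicationParameter @{ "WebApi_InstanceCount" = "1" }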

CQ5: How to know the host name of the publisher where the code executes?

To resolve the problem mentioned in the subject, I wrote the following code:
String link = externalizer.publishLink(resolverFactory.getAdministrativeResourceResolver(null),"");
I cannot check it because I only have an author machine, but this code will execute only on publishers.
In production we have several publishers. I want to get a different result for every publisher.
Will my code work on publishers?
Have you defined a sling:osgiConfig for the PID com.day.cq.commons.impl.ExternalizerImpl?
You could configure this directly in the OSGi console [1] as well.
In the configuration, you can supply a DNS name like 'publish http://www.example.com'.
In the case of multiple domain names for multiple publish instances, define sling:osgiConfig nodes for this service and attach them to the 'run modes' of those publish instances. This should work.
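A sketch of what such a run-mode-specific config node could look like (the repository path, run mode, and domains are only examples; the multi-valued property is the Externalizer's externalizer.domains setting):
<?xml version="1.0" encoding="UTF-8"?>
<!-- e.g. /apps/myproject/config.publish1/com.day.cq.commons.impl.ExternalizerImpl.xml -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
          jcr:primaryType="sling:OsgiConfig"
          externalizer.domains="[local http://localhost:4502,author http://author.example.com,publish http://publisher1.example.com]"/>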
On a side note, the Externalizer service is generally used for non-HTML content like email. In HTML, you could use relative URLs.
[1] http://localhost:4502/system/console/configMgr

Capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard git deploy implementation in Capistrano v3 deploys the same repository on all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need any code deployed at all.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature back from Cap2 into Cap3.
You can of course take the .rake files out of the gem, load them in your Capfile, and modify them for your own needs, which is a perfectly valid way to use the tool.
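A minimal sketch of that approach, using Capistrano 3's default task-loading convention (lib/capistrano/tasks is the generated default path):
# Capfile
require 'capistrano/setup'

# Instead of `require 'capistrano/deploy'`, copy the gem's deploy *.rake files
# into lib/capistrano/tasks and modify them for the roles that need them.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }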
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to disable a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/