How to define the same service twice in Puppet?

In order to deploy Varnish with a Puppet class, I need to stop Varnish so I can move and deploy files, and then, at the end, ensure that Varnish is started.
My problem is simple: how can I define a service twice in a Puppet class, in order to stop and start the service at different steps?
class varnish::install (
(...)
  service { "varnish":
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
(...)
  service { "varnish":
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}
I get a Duplicate definition: Service[varnish] (...) error, which is logical...
What's the best practice for managing services in a Puppet class? Splitting it into multiple classes, or is there an option to "rename" a service so it can be declared several times?

Try the following to get rid of the duplicate-definition error, but note that what you are trying to do is wrong.
Puppet brings a system to a certain consistent state, so telling it to stop service X, do some work, then start service X is out of scope for proper Puppet use; Puppet is more suited to restarting a service when some files the service depends on are modified.
class varnish::install (
(...)
  service { "varnish-stop":
    name    => "varnish",
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
(...)
  service { "varnish-start":
    name    => "varnish",
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}

Use an exec with a service restart as a hook (notify) for the "deploy files" action (the package or another exec). Define the service itself only once, as running, because that is the state you normally want to ensure; Puppet is for describing the target state. A minimal sketch follows.
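A minimal sketch of that idiom, assuming an illustrative config file path and module source (neither comes from the original manifest):

class varnish::install {
  package { 'varnish':
    ensure => installed,
  }

  # Deploying this file is the "deploy files" step; changing it triggers a restart.
  file { '/etc/varnish/varnishncsa.conf':
    ensure  => file,
    source  => 'puppet:///modules/varnish/varnishncsa.conf',
    require => Package['varnish'],
  }

  # Declared once, in its target state; subscribe restarts it when the file changes.
  service { 'varnish':
    ensure    => running,
    subscribe => File['/etc/varnish/varnishncsa.conf'],
  }
}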

Related

Jaeger: setting logging level

I'm trying to reduce logging to the "error" (or perhaps "warning") level for the Jaeger Collector pod.
In the values file, in all sections (just to eliminate unlucky misses), wherever I saw:
cmdlineParams: {}
...I have replaced it with:
cmdlineParams:
  log-level: "error"
...and yet when the deployment succeeds, the container still logs at the "info" level.
2022/05/06 14:11:41 maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS
{"level":"info","ts":1651846301.869269,"caller":"zapgrpc/zapgrpc.go:129","msg":"Deprecation warning: 299 Elasticsearch-7.16.3-4e6e4eab2297e949ec994e688dad46290d018022 \"legacy template [jaeger-span] has index patterns [*jaeger-span-*] matching patterns from existing composable templates [.deprecation-indexing-template,.ml-anomalies-,.ml-state,.ml-stats,.slm-history,.watch-history-13,ilm-history,logs,metrics,synthetics] with patterns (.deprecation-indexing-template => [.logs-deprecation.*],.ml-anomalies- => [.ml-anomalies-*],.ml-state => [.ml-state*],.ml-stats => [.ml-stats-*],.slm-history => [.slm-history-5*],.watch-history-13 => [.watcher-history-13*],ilm-history => [ilm-history-5*],logs => [logs-*-*],metrics => [metrics-*-*],synthetics => [synthetics-*-*]); this template [jaeger-span] may be ignored in favor of a composable template at index creation time\""}
{"level":"info","ts":1651846301.8693128,"caller":"zapgrpc/zapgrpc.go:129","msg":"Deprecation warning: 299 Elasticsearch-7.16.3-4e6e4eab2297e949ec994e688dad46290d018022 \"Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-minimal-setup.html to enable security.\""}
{"level":"info","ts":1651846301.8693204,"caller":"zapgrpc/zapgrpc.go:129","msg":"Deprecation warning: 299 Elasticsearch-7.16.3-4e6e4eab2297e949ec994e688dad46290d018022 \"Legacy index templates are deprecated in favor of composable templates.\""}
What am I missing?

How to connect Searchkick (in a Rails app and/or Sidekiq job) to multiple Elasticsearch clusters without stomping on the global Searchkick config?

Upon startup, my app sets my (global?) Searchkick client to point at my default Elasticsearch cluster:
Searchkick.client = Elasticsearch::Client.new(
  hosts: default_cluster, # this is the list of hosts in my default cluster
  retry_on_failure: true,
)
However, I am upgrading my cluster (again), and while doing so I'd like reads/searches against the default cluster,
/search?q="some term"
# =>
Model.search("some term")
to continue to work against the default_cluster.
Where it starts to get a bit tricky is that I'd also like (via some specific Sidekiq background jobs?) to fill an alternate (alt) cluster's index, something like:
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.reindex
}
without causing all the other background jobs to interact with the alternate cluster.
And, of course:
I'd like some way to verify that the alternate_cluster is working well (i.e. for search) before making it my default_cluster, presumably via some admin route:
/admin/search?q="some search term"&cluster=alternate
# =>
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.search("some term")
}
And finally:
I'd like to avoid having to reconnect before every search/reindex action; i.e., I'd prefer not to have the overhead of switching clients every time (also because long-running tasks that keep reconnecting to Searchkick would swap back and forth from one cluster to the other):
Model.search("some term")
# =>
Model.connect_to(alternate_cluster) { |client|
  Searchkick.client = client
  Model.search("some term")
}
^ I don't want that
FWIW, the best I've been able to come up with so far is something like:
def self.connect_to(current_cluster, &block)
  previous_es_client = Searchkick.client
  current_es_client = Elasticsearch::Client.new(
    hosts: current_cluster,
    retry_on_failure: true,
  )
  block.call(current_es_client)
rescue Exception => e
  logger.warn(e)
ensure
  Searchkick.client = previous_es_client
end
But I suspect that will cause every other interaction within my system (via the same web worker, or other background jobs running in the same background-worker instance) to (temporarily) point at the alternate cluster.
Thanks in advance for your assistance...
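For what it's worth, a hedged sketch of one partial mitigation (the ClusterSwitch module, its name, and the mutex approach are my assumptions, not part of Searchkick's API): serializing swaps of the process-global client keeps concurrent threads in one process from seeing a half-switched client, though it does not remove the blocking cost for long reindexes or solve the cross-process problem described above.

require "searchkick"
require "elasticsearch"

module ClusterSwitch
  LOCK = Mutex.new

  # Run the block with Searchkick pointed at the given hosts, then restore.
  # Other threads in this process wait on the lock instead of seeing the swap.
  def self.with_cluster(hosts)
    LOCK.synchronize do
      previous = Searchkick.client
      begin
        Searchkick.client = Elasticsearch::Client.new(
          hosts: hosts,
          retry_on_failure: true,
        )
        yield Searchkick.client
      ensure
        Searchkick.client = previous
      end
    end
  end
end

Usage would then be something like ClusterSwitch.with_cluster(alternate_cluster) { Model.reindex }.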

Wildfly Management CLI Configuration

I've been looking for ways to load configuration into WildFly in bulk (say I have JSON data), something that kind of looks like this:
./jboss-cli.sh -c /subsystem=messaging-activemq/server=default:add(<data.json>)
where <data.json> is:
{
    "outcome" => "success",
    "result" => {
        "address-full-policy" => "BLOCK",
        "dead-letter-address" => "jms.queue.DLQ",
        "expiry-address" => "jms.queue.ExpiryQueue",
        "last-value-queue" => false,
        "max-delivery-attempts" => 10,
        "max-size-bytes" => 12333,
        "message-counter-history-day-limit" => 10,
        "page-max-cache-size" => 5,
        "page-size-bytes" => 12333,
        "redelivery-delay" => 0,
        "redistribution-delay" => 222L,
        "send-to-dla-on-no-route" => false
    }
}
I want to load the above JSON directly into WildFly via jboss-cli. Is this even possible? I have been looking for references on this for the past few weeks. Any input is welcome.
EDIT
Just to be clear about my goals: I am trying to migrate manually configured items from JBoss AS 7.1 to WildFly 10.1. Currently, the migration scripts only support EAP versions of JBoss, so I have to manually select the configurations from JBoss to be migrated to WildFly. Yes, there are configurations that are deprecated and/or removed in WildFly, so between JBoss AS 7.1 and WildFly 10.1 I have to make some changes to the configuration before I load it into WildFly, hence the JSON data.
When I try to read a resource in JBoss AS 7.1 via jboss-cli.sh, using the command /subsystem=messaging/hornetq-server=default:read-resource, it outputs something like:
{
    "outcome" => "success",
    "result" => {
        "acceptor" => undefined,
        "allow-failback" => true,
        "async-connection-execution-enabled" => true,
        "backup" => false,
        "bridge" => undefined,
        "broadcast-group" => undefined,
        "cluster-connection" => undefined,
        ... some resource ....
So I will make some modifications to the above data (since WildFly uses ActiveMQ) and load it into WildFly as ActiveMQ configuration. But I just want to take the JSON data and load it directly into WildFly's jboss-cli.sh. I want to automate this and just execute a (shell) script to do the migration.
I'm not entirely sure what exactly you are trying to achieve here, but if you want to execute bulk operations from a file, you can use jboss-cli.sh --file=commands.cli, where commands.cli is a text file containing JBoss CLI commands.
This way you can perform multiple operations at once, plus you can utilize the batch functionality provided by the JBoss CLI to make sure all changes are applied or reverted together (a short batch sketch follows the example below).
Example file with multiple commands:
#Add xa datasource
xa-data-source add \
  --name=my.app.ds \
  --jndi-name=java:jboss/datasources/my.app.ds \
  --driver-name=h2 \
  --user-name=username \
  --password=password \
  --use-java-context=true \
  --enabled=true \
  --xa-datasource-properties={"URL"=>"jdbc:h2:tcp://${env.DB_HOST:localhost}:${env.DB_PORT:1521}/~/my.app.ds;MVCC=TRUE"}
#Add JMS queue
jms-queue add --queue-address=foo.bar.myapp.queue --entries=java:/jms/queue/foo.bar.myapp.queue
#Add system property
/system-property=ENABLE_MY_COOL_MESSAGING_FEATURE:add(value="true")
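For reference, a minimal sketch of the batch workflow mentioned above (the operation addresses are illustrative, not taken from your configuration):

#Everything between batch and run-batch is applied as a single atomic step
batch
/subsystem=messaging-activemq/server=default/address-setting=#:write-attribute(name=max-delivery-attempts, value=10)
/subsystem=messaging-activemq/server=default/address-setting=#:write-attribute(name=redelivery-delay, value=0)
run-batch

If any operation in the batch fails, the whole batch is rolled back.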
If you want to define modules or execute operations based on a JSON file, or any other format apart from the CLI command format, I am afraid you are out of luck. You can make your own Java library that wraps the JBoss CLI to execute it, though, as JBoss/WildFly provides CLI bindings for Java (and Python, I believe).

Could not start backuppc service through puppet

I have BackupPC, which is being managed by Puppet, and I'm also using Foreman. Below is my init.pp file:
class backuppc::service {
  if $::operatingsystemcodename == 'squeeze' {
    service { 'backuppc': ensure => running, hasstatus => false, pattern => '/usr/share/backuppc/bin/BackupPC' }
  } else {
    service { 'backuppc': ensure => running, hasstatus => true }
  }
  service { 'apache2': ensure => running }
}
When I run Puppet on the node, it throws this report in Foreman:
change from stopped to running failed: Could not start Service[backuppc]: Execution of '/etc/init.d/backuppc start' returned 1: Starting backuppc...2016-05-31 17:13:25 Another BackupPC is running (pid 6731); quitting...
The node is running Debian Squeeze 6.0.10.
Any help on this?
This ...
change from stopped to running failed: Could not start Service[backuppc]: Execution of '/etc/init.d/backuppc start' returned 1: Starting backuppc...2016-05-31 17:13:25 Another BackupPC is running (pid 6731); quitting...
... means that Puppet attempted to start BackupPC with /etc/init.d/backuppc start, which found that the process was already running. This indicates that Puppet is incorrectly determining the status of the BackupPC service.
I can't find a reference to a Facter fact named operatingsystemcodename in the source. Does Foreman provide this variable, or are you defining it elsewhere? Perhaps you meant lsbdistcodename instead?
If so, and $::operatingsystemcodename is undefined, your conditional will always fall through to the else branch, and the resource will be declared with hasstatus => true. Puppet will then attempt to use /etc/init.d/backuppc status to check whether the service is running. Therefore, if the init script's status action is broken in some way (by always returning a non-zero exit code, for example), Puppet will attempt to start the service on every agent run.
So first things first, I'd verify that $::operatingsystemcodename returns 'squeeze' on the node in question.
If not, I'd check the exit code of /etc/init.d/backuppc status under its various states: it should return zero when the service is started and non-zero when it is stopped.
If, on the other hand, $::operatingsystemcodename is undefined or has some unexpected value, I'd choose another expression to use in the if statement. In that case, you'll also want to verify that the pattern attribute is correct by inspecting the process table while the BackupPC service is running.
EDIT: Alternatively, you can provide a value for the status attribute, containing a custom command Puppet will use to check the status of the BackupPC service (see the sketch below). I would expect something like status => 'pgrep -f BackupPC' to work well enough. Although, Puppet is already doing almost exactly this in Ruby code, so I wouldn't expect it to solve your problem.
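For illustration, a minimal sketch of that alternative (hedged: the pgrep expression is an assumption based on the daemon path shown above):

service { 'backuppc':
  ensure => running,
  status => 'pgrep -f BackupPC',
}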
While a bit dated, this blog post covers some general tips for troubleshooting Puppet.

How to puppet and differentiate nodes without the FQDN?

I have a problem: how can I manage agent nodes with Puppet?
I'm using OpenStack to auto-generate the VMs, and then puppetize them with several pieces of Puppet code in a special pattern.
e.g.
The system provisions several VMs; each VM has two attributes:
fqdn: may repeat (you know, the VMs are generated by the system in a complex environment)
uuid: this will be unique, and is stored in a persistent file; it won't change
Below are two of them:
VM1:
  fqdn: api-server.example.com
  uuid: 20a558f1-2cd9-4068-b5fc-8d252c3f3262
VM2:
  fqdn: api-server.example.com
  uuid: 096359d6-5dc9-47e9-946a-bd702fe7c2d5
(Also, I could specify the hostname with the uuid, but I don't think that's a good idea.)
Now I want to puppet them with puppet kick or MCollective's puppet runonce.
With mco, I can filter on the facter fact uuid, which distinguishes VM1 from VM2:
mco puppetd runonce --with-facter uuid=20a558f1-2cd9-4068-b5fc-8d252c3f3262
But I STILL MUST hardcode the fqdn in the Puppet code:
node api-server.example.com {
  ...
}
But in fact, I just want to use it in the following style:
facter 20a558f1-2cd9-4068-b5fc-8d252c3f3262 {
  ...
}
facter 096359d6-5dc9-47e9-946a-bd702fe7c2d5 {
  ...
}
How can I write the Puppet code for this, or what should I change in the architecture?
There are multiple ways of assigning roles to / classifying a node in Puppet.
A solution close to the example you provided would be to use the following node.pp file:
node default {
  case $::uuid {
    "20a558f1-2cd9-4068-b5fc-8d252c3f3262": {
      include apache
      ...
    }
    "096359d6-5dc9-47e9-946a-bd702fe7c2d5": {
      include nginx
      ...
    }
    default: {
      ...
    }
  }
}
That said, I am not sure this is the best solution; there are better ways of assigning classes/roles to a node.
I would suggest looking at Puppet Hiera (http://docs.puppetlabs.com/hiera/1/complete_example.html) or an ENC (http://docs.puppetlabs.com/guides/external_nodes.html) for better mechanisms; a minimal Hiera-based sketch follows.
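For illustration, a minimal Hiera-based classification sketch (assuming uuid is exposed as a Facter fact, as in your mco example; all paths and class names are illustrative):

# /etc/puppet/hiera.yaml (Hiera 1 syntax, matching the linked docs)
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "%{::uuid}"
  - common

# /etc/puppet/hieradata/20a558f1-2cd9-4068-b5fc-8d252c3f3262.yaml
classes:
  - apache

# site.pp
node default {
  hiera_include('classes')
}

Each node then picks up whatever classes are listed in the YAML file named after its uuid, with no fqdn anywhere in the Puppet code.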