How to run Puppet and tell nodes apart without the FQDN?

I have a problem: how do I manage agent nodes with Puppet?
I'm using OpenStack to auto-generate VMs, which I then manage with Puppet code following a particular pattern.
For example, the system provisions several VMs, each with two attributes:
fqdn: may repeat (the VMs are generated by the system in a complex environment)
uuid: unique, stored in a persistent file, and it won't change
Below are two of them.
VM1:
fqdn: api-server.example.com
uuid: 20a558f1-2cd9-4068-b5fc-8d252c3f3262
VM2:
fqdn: api-server.example.com
uuid: 096359d6-5dc9-47e9-946a-bd702fe7c2d5
(I could also derive the hostname from the uuid, but I don't think that's a good idea.)
Now I want to trigger runs with puppet kick or mcollective's puppet runonce.
With mco I can filter on the uuid fact, which distinguishes VM1 from VM2:
mco puppetd runonce --with-facter uuid=20a558f1-2cd9-4068-b5fc-8d252c3f3262
But I STILL MUST hardcode the FQDN in the Puppet code:
node api-server.expamle.com {
...
}
but in fact, I just want to use it in the following style:
facter 20a558f1-2cd9-4068-b5fc-8d252c3f3262 {
...
}
facter 096359d6-5dc9-47e9-946a-bd702fe7c2d5 {
...
}
How can I write the Puppet code for this, or what should I change in the architecture?

There are multiple ways of assigning roles to / classifying a node in Puppet.
A solution close to the example you provided would be to use the following node.pp file:
node default {
  case $::uuid {
    "20a558f1-2cd9-4068-b5fc-8d252c3f3262": {
      include apache
      ...
    }
    "096359d6-5dc9-47e9-946a-bd702fe7c2d5": {
      include nginx
      ...
    }
    default: {
      ...
    }
  }
}
That said, I am not sure this is the best solution; there are better ways of assigning classes/roles to a node.
I would suggest looking at Puppet Hiera (http://docs.puppetlabs.com/hiera/1/complete_example.html) or an ENC (http://docs.puppetlabs.com/guides/external_nodes.html) for better mechanisms.
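If you go the ENC route, the classifier can be any executable that prints a YAML document. A minimal Python sketch follows; it assumes you configure each agent with certname = its uuid in puppet.conf, so the uuid (rather than the repeating FQDN) is what the master passes as the first argument. The uuid-to-class mapping here is illustrative.

```python
#!/usr/bin/env python3
# Minimal ENC sketch. Assumption: each agent sets certname = <uuid> in
# puppet.conf, so Puppet passes the uuid to this script as argv[1].
import sys

CLASSES_BY_UUID = {
    "20a558f1-2cd9-4068-b5fc-8d252c3f3262": ["apache"],
    "096359d6-5dc9-47e9-946a-bd702fe7c2d5": ["nginx"],
}

def classify(certname):
    # Unknown nodes get an empty class list.
    return CLASSES_BY_UUID.get(certname, [])

if __name__ == "__main__" and len(sys.argv) > 1:
    # Emit the minimal YAML document Puppet expects from an ENC.
    print("---")
    print("classes:")
    for cls in classify(sys.argv[1]):
        print("  - " + cls)
```

The master would then be pointed at this script via node_terminus = exec and external_nodes = /path/to/this/script in puppet.conf (paths are illustrative).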


How can I use chef-google-sql to create a PostgreSQL instance?

I'm trying to use chef-google-sql to create a PostgreSQL instance, but it seems to be impossible. Does anyone use this cookbook? It seems like no one does.
If anyone has managed it, please let me know how.
This is what I tried:
gsql_instance "sql-test-postgre" do
  action :create
  backend_type 'SECOND_GEN'
  database_version 'POSTGRESQL_9_6'
  instance_type 'CLOUD_SQL_INSTANCE'
  settings({
    tier: 'db-n1-standard-1',
    ip_configuration: {
      authorized_networks: [
        {
          name: 'google dns server',
          value: '8.8.8.8/32'
        }
      ]
    }
  })
  region 'us-east1-b'
  project 'XXXX'
  credential 'mycred'
end
It appears that this resource is still in a testing phase. If you have a look at this page, recipes starting with tests~ are not yet fully compatible with GCP resources.
Your deployment corresponds to tests~instance.rb.

How to have multiple http configurations with akka-http

With akka-http, you can provide a Typesafe config as described here, which goes in application.conf. A minimal config can look like the following:
akka.http {
  client {
    connecting-timeout = 10s
  }
  host-connection-pool {
    max-connections = 4
    max-open-requests = 32
  }
}
My question: if I have to call different external services from the app, I create a different pool for each of them. How do I change these settings (max-connections, max-open-requests) for the different pools calling different external services?
One solution I have found so far is overriding the ConnectionPoolSettings and passing them when creating the pool:
Http().superPool[RequestTracker](
  settings = ConnectionPoolSettings(httpActorSystem)
    .withMaxOpenRequests(1)
    .withMaxConnections(1)
)(httpMat)
Here I can provide the appropriate values for maxOpenRequests and maxConnections as per my requirements.
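If you prefer to keep the numbers in configuration rather than code, a common pattern is to define one section per external service that overrides the pool defaults. This is a sketch; the section name my-app.service-a is an assumption:

```hocon
# application.conf -- hypothetical per-service section
my-app.service-a {
  akka.http.host-connection-pool {
    max-connections = 8
    max-open-requests = 64
  }
}
```

You can then build the settings for that pool with a fallback to the regular config, e.g. ConnectionPoolSettings(system.settings.config.getConfig("my-app.service-a").withFallback(system.settings.config)), and pass the result as settings when creating that service's pool.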

Setting Hystrix timeout with environment variable

In order to change Hystrix's default request timeout (1000 ms), one must set the following property:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=2000
What is the corresponding environment variable?
I would like to tune the timeout on my favorite cloud platform without touching the source code first.
I'm pretty sure this one doesn't work: HYSTRIX_COMMAND_DEFAULT_EXECUTION_ISOLATION_THREAD_TIMEOUT_IN_MILLISECONDS=2000
EDIT: The problem occurred with Spring Cloud Camden / Spring Boot 1.4.
VM options and environment variables can be referenced from application configuration, which is often a more convenient way to set properties with longer names.
For example, one can define the following reference in application.yml:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: ${service.timeout}
which will be resolved from the VM option -Dservice.timeout=10000, setting the default Hystrix command timeout to 10 seconds. It is even simpler with environment variables - thanks to relaxed binding, any of these will work (export examples are for Linux):
export service.timeout=10000
export service_timeout=10000
export SERVICE.TIMEOUT=10000
export SERVICE_TIMEOUT=10000
The common approach is to use lowercase.dot.separated for VM arguments and ALL_CAPS_WITH_UNDERSCORES for environment variables.
You could try an expression with a default value:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: ${SERVICE_TIMEOUT:2000}
If the SERVICE_TIMEOUT environment variable is set, it will be used by the application; otherwise, the default value will be picked up.
More of a workaround than a solution: use the SPRING_APPLICATION_JSON environment variable:
SPRING_APPLICATION_JSON='{ "hystrix" : { "command" : { "default" : { "execution" : { "isolation" : { "thread" : { "timeoutInMilliseconds" : 3000 } } } } } } }'
You can use a Spring config YAML file; please read further at the following link:
https://github.com/spring-cloud/spring-cloud-netflix/issues/321
The VM option -Dhystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=2000 works for me. However, it has a side effect: you can no longer change the config value in Java code, since system properties take priority.
ConfigurationManager.getConfigInstance().setProperty(
    "hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds", 3000); // has no effect

Ganglia No matching metrics detected

We are getting the error "No matching metrics detected". Cluster-level metrics are visible.
ganglia core 3.6.0
ganglia web 3.5.12
Please help to resolve this issue.
Somewhere, in a .conf file (or .pyconf, etc.), you must specify a collection_group with a list of the metrics you want to collect. In the default gmond.conf, it looks similar to this:
collection_group {
  collect_once = yes
  time_threshold = 1200
  metric {
    name = "cpu_num"
    title = "CPU Count"
  }
  metric {
    name = "cpu_speed"
    title = "CPU Speed"
  }
  metric {
    name = "mem_total"
    title = "Memory Total"
  }
}
You may use wildcards to match the name.
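A sketch of a wildcard match, based on the name_match directive found in stock gmond.conf files (the regex here is illustrative and assumes the multicpu module is loaded):

```
collection_group {
  collect_every = 20
  time_threshold = 90
  metric {
    # Matches multicpu_user0, multicpu_system1, etc. in one declaration.
    name_match = "multicpu_([a-z]+)([0-9]+)"
  }
}
```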
You'll also need to include the module that provides the metrics you are looking to collect. Again, the example gmond.conf contains something like this:
modules {
  module {
    name = "core_metrics"
  }
  module {
    name = "cpu_module"
    path = "modcpu.so"
  }
}
among others.
You can generate an example gmond.conf by typing
gmond -t > /usr/local/etc/gmond.conf
This path is correct for ganglia-3.6.0; note that many file paths have changed several times since 3.0.
A good reference book is 'Monitoring with Ganglia.' I'd recommend getting a copy if you're going to be deeply involved in configuring or maintaining a Ganglia installation.
When summary/cluster graphs are visible but individual host graph data is not, this might be caused by a mismatch of hostname case (between the reported hostname and the rrd graph directory names).
Check /var/lib/ganglia/rrds/CLUSTER-NAME/HOSTNAME
This shows the case under which each host's graphs are generated.
If the case does not match the hostname, edit /etc/ganglia/conf.php (which allows overrides to the defaults in /usr/share/ganglia/conf_default.php) and add the following line:
$conf['case_sensitive_hostnames'] = false;
Another place to check for case sensitivity is the gmetad settings at /etc/ganglia/gmetad:
case_sensitive_hostnames 0
Versions this was fixed on:
OS: CentOS 6
Ganglia Core: 3.7.2-2
Ganglia Web: 3.7.1-2
Installed via EPEL

How to define the same service twice in Puppet?

In order to deploy Varnish with a Puppet class, I need to stop Varnish to move and deploy files, and then, at the end, ensure that Varnish is started.
My problem is simple: how can I define a service twice in a Puppet class, in order to stop and start the service at different steps?
class varnish::install (
  (...)
  service { "varnish":
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
  (...)
  service { "varnish":
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}
I get a Duplicate definition: Service[varnish] (...) error, which is logical...
What's the best practice for managing services in a Puppet class? Dividing it into multiple classes, or is there an option to "rename" a service so it can be declared several times?
Try the following to get rid of the duplicate error, though what you are trying to do is wrong.
Puppet brings a system to a certain consistent state, so "stop service X, do some work, start service X" is outside the scope of proper Puppet use; Puppet is more about restarting a service when some files it depends on are modified.
class varnish::install (
  (...)
  service { "varnish-stop":
    name    => "varnish",
    ensure  => "stopped",
    require => Package['varnish'],
    before  => Exec['mv-lib-varnish'],
  }
  (...)
  service { "varnish-start":
    name    => "varnish",
    ensure  => "running",
    require => File["$varnishncsa_file"],
  }
}
Use an exec with a service restart as a hook (notify) for the "deploy files" action (a package or another exec). Define the service itself only once, as running, because that is what you normally want to ensure. Puppet is for describing the target state.
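A minimal sketch of that pattern, assuming the config file path and source are placeholders for whatever files you deploy:

```puppet
class varnish::install {
  package { 'varnish':
    ensure => installed,
  }

  # Deploying the file notifies the service, which triggers a restart
  # only when the file content actually changes.
  file { '/etc/varnish/default.vcl':
    ensure  => file,
    source  => 'puppet:///modules/varnish/default.vcl',
    require => Package['varnish'],
    notify  => Service['varnish'],
  }

  # The service is declared exactly once, in its target state.
  service { 'varnish':
    ensure => running,
    enable => true,
  }
}
```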