I just tried to bring up a new VM managed by Puppet.
When upgrading some packages, the following messages pop up:
Setting up libssl1.0.0:amd64 (1.0.1e-2+deb7u12) ...
Checking for services that may need to be restarted...done.
Checking for services that may need to be restarted...done.
Checking init scripts...
[1;24r(B)0[m[1;24r
Package configuration
Configuring libssl1.0.0:amd64

  There are services installed on your system which need to be restarted
  when certain libraries, such as libpam, libc, and libssl, are upgraded.
  Since these restarts may cause interruptions of service for the system,
  you will normally be prompted on each upgrade for the list of services
  you wish to restart. You can choose this option to avoid being prompted;
  instead, all necessary restarts will be done for you automatically so
  you can avoid being asked questions on each library upgrade.

  Restart services during package upgrades without asking?

      <Yes>    <No>

Failed to open terminal.
debconf: whiptail output the above errors, giving up!
Setting up libgnutls26:amd64 (2.12.20-8+deb7u2) ...
dpkg: error processing libssl1.0.0:amd64 (--configure):
subprocess installed post-installation script returned error exit status 255
Setting up libkrb5support0:amd64 (1.10.1+dfsg-5+deb7u2) ...
Setting up libk5crypto3:amd64 (1.10.1+dfsg-5+deb7u2) ...
Setting up libkrb5-3:amd64 (1.10.1+dfsg-5+deb7u2) ...
Setting up libgssapi-krb5-2:amd64 (1.10.1+dfsg-5+deb7u2) ...
Setting up libmagic1:amd64 (5.11-2+deb7u5) ...
Setting up file (5.11-2+deb7u5) ...
Setting up libxml2:amd64 (2.8.0+dfsg1-7+wheezy1) ...
dpkg: dependency problems prevent configuration of libcurl3:amd64:
libcurl3:amd64 depends on libssl1.0.0 (>= 1.0.1); however:
Package libssl1.0.0:amd64 is not configured yet.
Then follows a bunch of failed package configurations, leaving my environment in a state I didn't want...
How can I make this work?
Thank you!
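For context, the immediate failure is debconf trying to draw a whiptail dialog without a usable terminal. A minimal workaround sketch, assuming the packages are upgraded via an apt-get run triggered non-interactively (the Dpkg options are standard apt flags; how to wire this into Puppet is left open):

# Force debconf to skip interactive dialogs and keep existing config files:
DEBIAN_FRONTEND=noninteractive apt-get -y \
    -o Dpkg::Options::="--force-confdef" \
    -o Dpkg::Options::="--force-confold" \
    dist-upgrade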
EDIT: Here's my node's manifest:
class pricing {
package { "libatlas-base-dev":
ensure => "installed" ,
require => Exec['apt-get update']
}
package { "gfortran":
ensure => "installed" ,
require => Exec['apt-get update']
}
class { 'python':
version => '2.7',
dev => true,
virtualenv => true,
pip => true,
}
class { 'postgresql::globals':
encoding => 'UTF8',
locale => 'en_GB.UTF-8',
manage_package_repo => true,
version => '9.3',
}->
class { 'postgresql::client': }->
class { 'postgresql::lib::devel': }
package {"libffi-dev" : ensure => "present"}
package {"libxml2-dev" : ensure => "present"}
package {"libxslt-dev" : ensure => "present"}
if $pricing_state == "master" {
package {"rabbitmq-server" :
ensure => "present",
require => Exec['apt-get update'],
}
}
file { '/etc/boto.cfg':
source => 'puppet:///modules/pricing/boto.cfg',
}
file { "/pricing/logs/":
ensure => directory,
mode => '0777',
owner => "celery",
group => "celery",
}
file { "/pricing/logs/pricing.logs":
ensure => file,
mode => '0777',
owner => "celery",
group => "celery",
}
user { "celery":
ensure => present,
comment => "celery",
membership => minimum,
shell => "/bin/bash",
home => "/home/$name",
managehome => true,
}
exec { "import-gpg-dotdeb":
command => "/usr/bin/wget -q http://www.dotdeb.org/dotdeb.gpg -O -| /usr/bin/apt-key add -"
}
apt::source { 'dotdeb':
location => 'http://packages.dotdeb.org',
release => 'wheezy',
repos => 'all',
require => [Exec['import-gpg-dotdeb']]
}
class { 'redis':
package_ensure => 'latest',
conf_port => '6379',
conf_bind => '0.0.0.0',
system_sysctl => true,
conf_requirepass => '3I53G3944G9ngZC',
require => [Apt::Source['dotdeb']]
}
if $pricing_state == "master" {
if $env_small == "prod" {
include supervisord
supervisord::program { 'pricing':
ensure => present,
command => '/pricing/bin/python getprices.py',
user => 'root',
directory => '/pricing/',
numprocs => 1,
autorestart => 'true',
require => Python::Virtualenv['/pricing']
}
supervisord::program { 'listen_newprices':
ensure => absent,
command => '/pricing/bin/python listen_newprices.py',
user => 'root',
directory => '/pricing/',
numprocs => 1,
autorestart => 'true',
require => Python::Virtualenv['/pricing']
}
supervisord::program { 'getprixvente':
ensure => present,
command => '/pricing/bin/python getprixvente.py',
user => 'root',
directory => '/pricing/',
numprocs => 1,
autorestart => 'true',
require => Python::Virtualenv['/pricing']
}
supervisord::program { 'getprixachat':
ensure => present,
command => '/pricing/bin/python getprixachat.py',
user => 'root',
directory => '/pricing/',
numprocs => 1,
autorestart => 'true',
require => Python::Virtualenv['/pricing']
}
supervisord::program { 'flower':
ensure => present,
command => '/pricing/bin/celery flower --port=5555 --basic_auth=celery:celery69 --broker=amqp://celery:2xF09Ad050Ct7yb@127.0.0.1:5672//',
user => 'root',
directory => '/pricing/',
numprocs => 1,
autorestart => 'true',
require => Python::Virtualenv['/pricing']
}
exec { 'restart pricing':
command => 'supervisorctl restart pricing',
path => '/usr/bin:/usr/sbin:/bin:/usr/local/bin/',
require => Supervisord::Program['pricing']
}
exec { 'restart getprixvente':
command => 'supervisorctl restart getprixvente',
path => '/usr/bin:/usr/sbin:/bin:/usr/local/bin/',
require => Supervisord::Program['getprixvente']
}
exec { 'restart getprixachat':
command => 'supervisorctl restart getprixachat',
path => '/usr/bin:/usr/sbin:/bin:/usr/local/bin/',
require => Supervisord::Program['getprixachat']
}
}
}
if $pricing_state == "slave" {
file { "/etc/init.d/celeryd":
ensure => file,
content => template('pricing/celeryd_init.erb'),
mode => '0700',
owner => "root",
group => "root",
}
file { "/etc/default/celeryd":
ensure => file,
content => template('pricing/celeryd.erb'),
mode => '0640',
owner => "root",
group => "root",
}
service { 'celeryd':
name => 'celeryd',
ensure => running,
enable => true,
subscribe => File['/etc/default/celeryd'],
require => [
File['/etc/default/celeryd'],
File['/etc/init.d/celeryd'],
User['celery'],
Python::Virtualenv['/pricing'],
],
}
exec { 'restart celeryd':
command => 'service celeryd restart',
path => '/usr/bin:/usr/sbin:/bin:/usr/local/bin/',
require => Service['celeryd'],
}
logrotate::rule { 'celerydslavelogs':
path => '/var/log/celery/*.log',
size => '100k',
rotate => 5,
}
}
logrotate::rule { 'celerydlogs':
path => '/pricing/logs/*.log',
size => '100k',
rotate => 5,
}
python::virtualenv { '/pricing':
ensure => present,
version => '2.7',
requirements => '/puppet/modules/pricing/files/requirements.txt',
owner => $user,
group => $user,
cwd => '/pricing',
timeout => 36000,
require => [
Class['postgresql::client', 'postgresql::lib::devel', 'python'],
Package['libatlas-base-dev', 'gfortran'],
Package['libffi-dev'],
Package['libxml2-dev'],
Package['libxslt-dev'],
],
}
}
Related
How can I check the values of Xmx, Xms, and other JVM values, which are in standalone.conf, using the CLI?
[standalone#localhost:9990 /] /core-service=platform-mbean/type=memory:read-resource(recursive=true,proxies=true,include-runtime=true,include-defaults=true)
{
"outcome" => "success",
"result" => {
"heap-memory-usage" => {
"init" => 3246391296L,
"used" => 381631592L,
"committed" => 3111124992L,
"max" => 3111124992L
},
"non-heap-memory-usage" => {
"init" => 2555904L,
"used" => 80962112L,
"committed" => 90963968L,
"max" => 1317011456L
},
"object-name" => "java.lang:type=Memory",
"object-pending-finalization-count" => 0,
"verbose" => true
}
}
I need these values: JAVA_OPTS="-Xms3096m -Xmx3096m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -Djava.net.preferIPv4Stack=true"
Thanks!
You can't. These files live outside of the configuration directory and are native batch-type files; that's why there is a standalone.conf and a standalone.conf.bat: they are OS-dependent. The CLI interacts with standalone.xml (or the other standalone configuration files), which is the OS-independent configuration.
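Since standalone.conf is just a shell fragment sourced by bin/standalone.sh, one way to read those values is with ordinary shell tools rather than the CLI; a sketch, assuming a default install layout under $JBOSS_HOME:

# The -Xms/-Xmx settings live in JAVA_OPTS inside the conf file:
grep 'JAVA_OPTS' "$JBOSS_HOME/bin/standalone.conf"

# Alternatively, inspect the flags the running JVM was actually started with:
ps -ef | grep '[S]tandalone' | tr ' ' '\n' | grep -E '^-X'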
I am trying to get the root partition (mountpoint => "/") device name using Puppet's Facter. When I run "facter mountpoints", it shows multiple partitions. I would like to extract the value "/dev/md3" from the result.
{
/ => {
available => "893.71 GiB",
available_bytes => 959608590336,
capacity => "1.86%",
device => "/dev/md3",
filesystem => "ext4",
options => [
"rw",
"errors=remount-ro"
],
size => "910.69 GiB",
size_bytes => 977843884032,
used => "16.98 GiB",
used_bytes => 18235293696
},
/run => {
available => "794.91 MiB",
available_bytes => 833527808,
capacity => "0.07%",
device => "tmpfs",
filesystem => "tmpfs",
options => [
"rw",
"noexec",
"nosuid",
"size=10%",
"mode=0755"
],
size => "795.48 MiB",
size_bytes => 834125824,
used => "584.00 KiB",
used_bytes => 598016
},
/tmp => {
available => "1.78 GiB",
available_bytes => 1909157888,
capacity => "1.21%",
device => "/dev/md1",
filesystem => "ext4",
options => [
"rw"
],
size => "1.80 GiB",
size_bytes => 1932533760,
used => "22.29 MiB",
used_bytes => 23375872
}
}
I tried to use filter, but I was not able to select the "/" device:
$root_mount = $facts['mountpoints'].filter |$mountpoint| { $mountpoint == '/' }
Do you guys have any idea?
You can access this fact directly via hash notation. Since your question heavily implies you are using Facter 3/Puppet 4, I will work with that syntax.
You just directly traverse the keys in the Facter hash to arrive at the /dev/md3 value. If we minimize the hash to the relevant portion from facter mountpoints:
{
/ => {
device => "/dev/md3"
}
}
then we see that the keys are mountpoints (you accessed that key directly when you did facter mountpoints from the CLI), /, and device. Therefore, using standard hash notation in Puppet with the $facts hash, we can access that value with:
$facts['mountpoints']['/']['device'] # /dev/md3
Check here for more info: https://docs.puppet.com/puppet/4.9/lang_facts_and_builtin_vars.html#the-factsfactname-hash
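If you do want the filter approach from the question, note that filter over a hash passes key/value pairs into the lambda, and it returns a (one-entry) hash rather than the bare device string; a sketch:

# filter yields key/value pairs for hashes, so match on the key;
# the result is { '/' => { ... } }, not '/dev/md3' itself:
$root_mount = $facts['mountpoints'].filter |$name, $attrs| { $name == '/' }

# Direct traversal stays the simpler way to get the device:
$root_device = $facts['mountpoints']['/']['device'] # => '/dev/md3'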
I'm using Search::Elasticsearch and Search::Elasticsearch::Scroll to search and scroll through results on my Elasticsearch server.
During the scrolling process, for some queries, I see the following errors while scrolling the search results:
2016/03/22 11:03:38 - 265885 FATAL: [Daemon.pm][8221]: Something gone wrong, error $VAR1 = bless( {
'msg' => '[Missing] ** [http://localhost:9200]-[404] Not Found, called from sub Search::Elasticsearch::Scroll::next at searcher.pl line 92. With vars: {\'body\' => {\'hits\' => {\'hits\' => [],\'max_score\' => \'0\',\'total\' => 5215},\'timed_out\' => bless( do{\\(my $o = 0)}, \'JSON::XS::Boolean\' ),\'_shards\' => {\'failures\' => [{\'index\' => undef,\'reason\' => {\'reason\' => \'No search context found for id [4920053]\',\'type\' => \'search_context_missing_exception\'},\'shard\' => -1},{\'index\' => undef,\'reason\' => {\'reason\' => \'No search context found for id [5051485]\',\'type\' => \'search_context_missing_exception\'},\'shard\' => -1},{\'index\' => undef,\'reason\' => {\'reason\' => \'No search context found for id [4920059]\',\'type\' => \'search_context_missing_exception\'},\'shard\' => -1},{\'index\' => undef,\'reason\' => {\'reason\' => \'No search context found for id [5051496]\',\'type\' => \'search_context_missing_exception\'},\'shard\' => -1},{\'index\' => undef,\'reason\' => {\'reason\' => \'No search context found for id [5051500]\',\'type\' => \'search_context_missing_exception\'},\'shard\' => -1}],\'failed\' => 5,\'successful\' => 0,\'total\' => 5},\'_scroll_id\' => \'c2NhbjswOzE7dG90YWxfaGl0czo1MjE1Ow==\',\'took\' => 2},\'request\' => {\'serialize\' => \'std\',\'path\' => \'/_search/scroll\',\'ignore\' => [],\'mime_type\' => \'application/json\',\'body\' => \'c2Nhbjs1OzQ5MjAwNTM6bHExbENzRDVReEc0OV9UMUgzd3Vkdzs1MDUxNDg1OnJrQ3lsUkRKVHRxRWRWeURoOTB4WVE7NDkyMDA1OTpscTFsQ3NENVF4RzQ5X1QxSDN3dWR3OzUwNTE0OTY6cmtDeWxSREpUdHFFZFZ5RGg5MHhZUTs1MDUxNTAwOnJrQ3lsUkRKVHRxRWRWeURoOTB4WVE7MTt0b3RhbF9oaXRzOjUyMTU7\',\'qs\' => {\'scroll\' => \'1m\'},\'method\' => \'GET\'},\'status_code\' => 404}
',
'stack' => [
[
'searcher.pl',
92,
'Search::Elasticsearch::Scroll::next'
]
],
'text' => '[http://localhost:9200]-[404] Not Found',
'vars' => {
'body' => {
'hits' => {
'hits' => [],
'max_score' => '0',
'total' => 5215
},
'timed_out' => bless( do{\(my $o = 0)}, 'JSON::XS::Boolean' ),
'_shards' => {
'failures' => [
{
'index' => undef,
'reason' => {
'reason' => 'No search context found for id [4920053]',
'type' => 'search_context_missing_exception'
},
'shard' => -1
},
{
'index' => undef,
'reason' => {
'reason' => 'No search context found for id [5051485]',
'type' => 'search_context_missing_exception'
},
'shard' => -1
},
{
'index' => undef,
'reason' => {
'reason' => 'No search context found for id [4920059]',
'type' => 'search_context_missing_exception'
},
'shard' => -1
},
{
'index' => undef,
'reason' => {
'reason' => 'No search context found for id [5051496]',
'type' => 'search_context_missing_exception'
},
'shard' => -1
},
{
'index' => undef,
'reason' => {
'reason' => 'No search context found for id [5051500]',
'type' => 'search_context_missing_exception'
},
'shard' => -1
}
],
'failed' => 5,
'successful' => 0,
'total' => 5
},
'_scroll_id' => 'c2NhbjswOzE7dG90YWxfaGl0czo1MjE1Ow==',
'took' => 2
},
'request' => {
'serialize' => 'std',
'path' => '/_search/scroll',
'ignore' => [],
'mime_type' => 'application/json',
'body' => 'c2Nhbjs1OzQ5MjAwNTM6bHExbENzRDVReEc0OV9UMUgzd3Vkdzs1MDUxNDg1OnJrQ3lsUkRKVHRxRWRWeURoOTB4WVE7NDkyMDA1OTpscTFsQ3NENVF4RzQ5X1QxSDN3dWR3OzUwNTE0OTY6cmtDeWxSREpUdHFFZFZ5RGg5MHhZUTs1MDUxNTAwOnJrQ3lsUkRKVHRxRWRWeURoOTB4WVE7MTt0b3RhbF9oaXRzOjUyMTU7',
'qs' => {
'scroll' => '1m'
},
'method' => 'GET'
},
'status_code' => 404
},
'type' => 'Missing'
}, 'Search::Elasticsearch::Error::Missing' );
The code I'm using is the following (simplified):
# Retrieve scroll
my $scroll = $self->getScrollBySignature($item);
# Retrieve all affected documents ids
while (my @docs = $scroll->next(500)) {
    # Do stuff with @docs
}
The function getScrollBySignature contains the following code to call Elasticsearch:
my $scroll = $self->{ELASTIC}->scroll_helper(
index => $self->{INDEXES},
search_type => 'scan',
ignore_unavailable => 1,
body => {
size => $self->{PAGINATION},
query => {
filtered => {
filter => {
bool => {
must => [{term => {signature_id => $item->{profileId}}}, {terms => {channel_type_id => $type}}]
}
}
}
}
}
);
As you can see, I create the scroll without passing the scroll parameter, so, as the documentation says, the scroll context stays alive for 1 minute.
Elasticsearch runs as a cluster of 3 servers, and the query that ends with this error retrieves a bit more than 5000 docs.
My first fix was to increase the scroll lifetime to 5 minutes, and the error no longer appeared.
The question is: as I understand it, every time I call $scroll->next() the affected scroll's lifetime is renewed for another 1m, so how is it possible to receive those context-related errors?
Am I doing something wrong?
Thank you all.
The first thing that comes to mind is that the timer is not being renewed. Have you checked this? You could, for example, run a query every 10 seconds and see whether the 6th query gives you the error...
Well, a good rule of thumb: inside a ->next() loop, don't spend more time per iteration than the lifetime you've configured for the scroll.
Between two calls to ->next() you cannot take longer than that configured time; if you do, the scroll context may no longer be there and the search_context_missing_exception error will appear.
My solution for this problem was to only store data into an array/hash structure inside the next block, and work with all the data once the scroll process had ended.
Applied to the question's example:
# Retrieve scroll
my $scroll = $self->getScrollBySignature($item);
# Retrieve all affected documents ids
my @allDocs;
while (my @docs = $scroll->next(500)) {
    push @allDocs, map { $_->{_id} } @docs;
}
foreach (@allDocs) {
    # Do stuff with each doc
}
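If the per-document work can't be deferred, another option is to raise the keep-alive when creating the helper via the scroll parameter (it defaults to '1m' in Search::Elasticsearch); a self-contained sketch with a hypothetical index and query:

use strict;
use warnings;
use Search::Elasticsearch;

my $es = Search::Elasticsearch->new(nodes => 'localhost:9200');

# scroll => '5m' keeps the context alive longer between next() calls;
# it must exceed the slowest iteration of the loop below.
my $scroll = $es->scroll_helper(
    index  => 'myindex',
    scroll => '5m',
    body   => { size => 500, query => { match_all => {} } },
);

while (my @docs = $scroll->next(500)) {
    # keep per-iteration work fast; defer anything slow
}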
I have a hash of hashes, as below. Please note that the hash is very large and contains a PluginsStatus of either Success or Error. When PluginsStatus for a key is Success I need not process anything (I have handled this scenario), but if it is Error I need to display, in this order: PluginsStatus, PluginspatchLogName, PluginsLogFileName_0, PluginsLogFileLink_0, PluginsLogFileErrors_0, and so on.
Please note that I do not know exactly how many PluginsLogFileName, PluginsLogFileLink, and PluginsLogFileErrors keys exist in each inner hash; the count is dynamic.
$VAR1 = { 'Applying Template Changes' => {
'PluginsLogFileErrors_2' => 'No Errors',
'PluginsStatus' => 'Error',
'PluginsLogFileName_1' => 'Applying_Template_Changes_2015-05-12_02-57-40AM.log',
'PluginsLogFileName_2' => 'ApplyingTemplates.log',
'PluginsLogFileErrors_1' => 'ERROR: FAPSDKEX-00024 : Error in undeploying template.Cause : Unknown.Action : refer to log file for more details.',
'PluginspatchLogName' => '2015-05-11_08-14-28PM.log',
'PluginsLogFileLink_0' => '/tmp/xpath/2015-05-11_08-14-28PM.log',
'PluginsLogFileName_0' => '2015-05-11_08-14-28PM.log',
'PluginsLogFileErrors_0' => 'No Errors',
'PluginsLogFileLink_2' => 'configlogs/ApplyingTemplates.log',
'PluginsLogFileLink_1' => 'configlogs/Applying_Template_Changes_2015-05-12_02-57-40AM.log'
},
'Configuring Keystore Service' => {
'PluginsStatus' => 'Error',
'PluginsLogFileName_1' => 'Configuring_Keystore_Service_2015-05-11_11-11-37PM.log',
'PluginsLogFileErrors_1' => 'ERROR: Failed to query taxonomy attribute AllProductFamilyAndDomains.',
'PluginspatchLogName' => '2015-05-11_08-14-28PM.log',
'PluginsLogFileLink_0' => '/tmp/xpath/2015-05-11_08-14-28PM.log',
'PluginsLogFileName_0' => '2015-05-11_08-14-28PM.log',
'PluginsLogFileErrors_0' => 'No Errors',
'PluginsLogFileLink_1' => 'configlogs/Configuring_Keystore_Service_2015-05-11_11-11-37PM.log'
},
'Applying Main Configuration' => {
'PluginsStatus' => 'Error',
'PluginspatchLogName' => '2015-05-11_08-14-28PM.log',
'PluginsLogFileName_0' => 'Applying_Main_Configuration_2015-05-12_01-11-21AM.log',
'PluginsLogFileLink_0' => '/tmp/xpath/2015-05-11_08-14-28PM.log',
'PluginsLogFileErrors_0' => 'ERROR: Failed to query taxonomy attribute AllProductFamilyAndDomains.apps.ad.common.exception.ADException: Failed to query taxonomy attribute AllProductFamilyAndDomains.... 104 lines more'
}
};
Below is the output snippet I am looking for:
Plugin name is = Applying Template Changes
PluginsStatus = Error
PluginspatchLogName = 2015-05-11_08-14-28PM.log
PluginsLogFileName_0 = 2015-05-11_08-14-28PM.log
PluginsLogFileLink_0 = /tmp/xpath/2015-05-11_08-14-28PM.log
PluginsLogFileErrors_0 = No Errors
PluginsLogFileName_1 = Applying_Template_Changes_2015-05-12_02-57-40AM.log
PluginsLogFileLink_1 = configlogs/Applying_Template_Changes_2015-05-12_02-57-40AM.log
PluginsLogFileErrors_1 = ERROR: FAPSDKEX-00024 : Error in undeploying template.Cause : Unknown.Action : refer to log file for more details.
PluginsLogFileName_2 = ApplyingTemplates.log
PluginsLogFileLink_2 = configlogs/ApplyingTemplates.log
PluginsLogFileErrors_2 = No Errors
Could someone help me here?
You have built a hash that is less than ideal for your purposes. You should create a LogFile hash element that has an array as its value. After that, the process is trivial:
{
"Applying Main Configuration" => {
LogFile => [
{
Errors => "ERROR: Failed to query taxonomy attribute AllProductFamilyAndDomains.apps.ad.common.exception.ADException: Failed to query taxonomy attribute AllProductFamilyAndDomains.... 104 lines more",
Link => "/tmp/xpath/2015-05-11_08-14-28PM.log",
Name => "Applying_Main_Configuration_2015-05-12_01-11-21AM.log",
},
],
patchLogName => "2015-05-11_08-14-28PM.log",
Status => "Error",
},
"Applying Template Changes" => {
LogFile => [
{
Errors => "No Errors",
Link => "/tmp/xpath/2015-05-11_08-14-28PM.log",
Name => "2015-05-11_08-14-28PM.log",
},
{
Errors => "ERROR: FAPSDKEX-00024 : Error in undeploying template.Cause : Unknown.Action : refer to log file for more details.",
Link => "configlogs/Applying_Template_Changes_2015-05-12_02-57-40AM.log",
Name => "Applying_Template_Changes_2015-05-12_02-57-40AM.log",
},
{
Errors => "No Errors",
Link => "configlogs/ApplyingTemplates.log",
Name => "ApplyingTemplates.log",
},
],
patchLogName => "2015-05-11_08-14-28PM.log",
Status => "Error",
},
"Configuring Keystore Service" => {
LogFile => [
{
Errors => "No Errors",
Link => "/tmp/xpath/2015-05-11_08-14-28PM.log",
Name => "2015-05-11_08-14-28PM.log",
},
{
Errors => "ERROR: Failed to query taxonomy attribute AllProductFamilyAndDomains.",
Link => "configlogs/Configuring_Keystore_Service_2015-05-11_11-11-37PM.log",
Name => "Configuring_Keystore_Service_2015-05-11_11-11-37PM.log",
},
],
patchLogName => "2015-05-11_08-14-28PM.log",
Status => "Error",
},
}
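A short sketch of walking that restructured hash in the required order (assuming it is stored in %plugins and that Success entries are skipped, as in the question):

for my $plugin (sort keys %plugins) {
    my $p = $plugins{$plugin};
    next if $p->{Status} eq 'Success';   # nothing to report for clean plugins
    print "Plugin name is = $plugin\n";
    print "Status = $p->{Status}\n";
    print "patchLogName = $p->{patchLogName}\n";
    my $i = 0;
    for my $log (@{ $p->{LogFile} }) {
        print "Name_$i = $log->{Name}\n";
        print "Link_$i = $log->{Link}\n";
        print "Errors_$i = $log->{Errors}\n";
        $i++;
    }
}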
Just iterate over the keys of the hash. Use the $hash{key}{inner_key} syntax to get into the nested hash.
#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };
my %error = ( 'Applying Template Changes' => {
'PluginsLogFileErrors_2' => 'No Errors',
'PluginsStatus' => 'Error',
'PluginsLogFileName_1' => 'Applying_Template_Changes_2015-05-12_02-57-40AM.log',
# ...
'PluginsLogFileLink_0' => '/tmp/xpath/2015-05-11_08-14-28PM.log',
'PluginsLogFileErrors_0' => 'ERROR: Failed to query taxonomy attribute AllProductFamilyAndDomains.apps.ad.common.exception.ADException: Failed to query taxonomy attribute AllProductFamilyAndDomains.... 104 lines more',
},
);
for my $step (keys %error) {
print "Plugin name is = $step\n";
for my $detail (sort keys %{ $error{$step} }) {
print "$detail = $error{$step}{$detail}\n";
}
}
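One caveat with sort there: it gives lexicographic order (all the Errors keys, then Link, then Name), not the order shown in the question. A sketch that instead walks the numeric suffixes explicitly, using the same %error hash:

for my $step (sort keys %error) {
    print "Plugin name is = $step\n";
    my $h = $error{$step};
    print "PluginsStatus = $h->{PluginsStatus}\n";
    print "PluginspatchLogName = $h->{PluginspatchLogName}\n";
    # walk _0, _1, ... until the Name key for that index is missing
    for (my $i = 0; exists $h->{"PluginsLogFileName_$i"}; $i++) {
        for my $field (qw( Name Link Errors )) {
            my $key = "PluginsLogFile${field}_$i";
            print "$key = $h->{$key}\n" if exists $h->{$key};
        }
    }
}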
I've got a Puppet manifest that resists my attempts to get it working right; given that I'm no expert on the Puppet DSL and I'm fairly new to Puppet, I haven't managed to figure this out.
I'm trying to install Postgres using the puppetlabs postgresql module, creating a default role, and fixing up the DBs to work with UTF8.
Everything runs and installs, but the role doesn't get created. If I run the provision again, the role does get created. I assume this has to do with execution order, but honestly I'm lost.
Here's the code I'm using in my manifest file.
user { "user_vagrant":
ensure => "present",
}->
exec { 'apt_update':
command => 'apt-get update',
path => '/usr/bin/'
}
package { ['vim','postgresql-server-dev-9.1','libmysqlclient-dev','nodejs']:
ensure => 'installed',
before => Class['postgresql::server'],
require => Exec['apt_update'],
}
class { 'postgresql::server':
ip_mask_allow_all_users => '0.0.0.0/0',
listen_addresses => '*',
ipv4acls => ['local all all md5'],
postgres_password => 'postgres',
require => User['user_vagrant'],
}
postgresql::server::role { 'vagrant':
createdb => true,
login => true,
password_hash => postgresql_password("vagrant", "vagrant"),
require => Class['postgresql::server'],
} ->
exec { 'utf8_postgres':
command => 'pg_dropcluster --stop 9.1 main ; pg_createcluster --start --locale en_US.UTF-8 9.1 main',
unless => 'sudo -u postgres psql -t -c "\l" | grep template1 | grep -q UTF',
path => ['/bin', '/sbin', '/usr/bin', '/usr/sbin'],
}
Finally found the right approach, fixing both the apply order and the UTF8 issue that forced me to try pg_dropcluster to begin with. The key is to set encoding and locale through postgresql::globals and to chain globals -> server -> role, so the role is only managed after the server is fully configured. By the way, this is a known issue; here's the issue URL: http://projects.puppetlabs.com/issues/4695
This is the whole file I use to install PostgreSQL 9.1 with UTF8 and RVM Ruby. Hope this helps.
Modules:
- puppetlabs/apt - 1.4
- puppetlabs/concat - 1.0
- puppetlabs/stdlib - 4.1.0
- puppetlabs/postgresql - 3.2
- blt04/puppet-rvm - git://github.com/blt04/puppet-rvm.git
stage { 'pre':
before => Stage['main']
}
class pre_req {
user { "vagrant":
ensure => "present",
}
exec { 'apt-update':
command => 'apt-get update',
path => '/usr/bin'
}->
exec { 'install_postgres':
command => "/bin/bash -c 'LC_ALL=en_US.UTF-8; /usr/bin/apt-get -y install postgresql'",
}
}
class { 'pre_req':
stage => pre
}
package { ['postgresql-server-dev-9.1']:
ensure => 'installed',
before => Class['postgresql::server']
}
class { 'postgresql::globals':
encoding => 'UTF8',
locale => 'en_US.UTF-8'
}->
class { 'postgresql::server':
stage => main,
locale => 'en_US.UTF-8',
ip_mask_allow_all_users => '0.0.0.0/0',
listen_addresses => '*',
ipv4acls => ['local all all md5'],
postgres_password => 'postgres',
require => User['vagrant']
}->
postgresql::server::role { 'vagrant':
createdb => true,
login => true,
password_hash => postgresql_password("vagrant", "vagrant"),
}
class rvm_install {
class { 'rvm': version => '1.23.10' }
rvm::system_user { vagrant: ; }
rvm_system_ruby {
"ruby-2.0.0-p247":
ensure => "present",
default_use => false;
}
rvm_gemset {
"ruby-2.0.0-p247#plyze":
ensure => present,
require => Rvm_system_ruby['ruby-2.0.0-p247'];
}
rvm_gem {
"puppet":
name => "puppet",
ruby_version => "ruby-2.0.0-p247",
ensure => latest,
require => Rvm_system_ruby["ruby-2.0.0-p247"];
}
rvm_gem {
"bundler":
name => "bundler",
ruby_version => "ruby-2.0.0-p247",
ensure => latest,
require => Rvm_system_ruby["ruby-2.0.0-p247"];
}
}
class { 'rvm_install':
require => User['vagrant'],
}