I'm trying to install the check_mk agent, together with the standard check_mk xinetd config file, via Puppet on a Debian 7 server.
check_mk installs without a problem, but I've got an issue with the xinetd config.
When I change the port in the source config file on the Puppet master and run puppet agent -t on the client host, the new configuration is deployed correctly, but Puppet doesn't reload the xinetd service because the system can't determine the state of the xinetd service.
The Puppet manifest looks like this:
class basic::check-mk {
  case $operatingsystem {
    debian: {
      package { 'check-mk-agent':
        ensure => present,
      }
      file { '/etc/xinetd.d/check_mk':
        ensure => file,
        source => 'puppet:///modules/basic/etc--xinetd--checkmk',
        mode   => '0644',
        notify => Service['xinetd'],
      }
      service { 'xinetd':
        ensure  => running,
        enable  => true,
        restart => '/etc/init.d/xinetd reload',
      }
    }
  }
}
The debug output looks like this:
info: Applying configuration version '1464186485'
debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]
debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: requires Class[Ntp::Install]
debug: /Stage[main]/basic::Check-mk/Service[xinetd]/subscribe: subscribes to File[/etc/xinetd.d/check_mk]
debug: /Stage[main]/Ntp::Install/before: requires Class[Ntp::Config]
debug: /Stage[main]/Ntp::Service/before: requires Anchor[ntp::end]
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: Prefetching apt resources for package
debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
debug: Puppet::Type::Package::ProviderApt: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
debug: /Schedule[never]: Skipping device resources because running on a host
debug: file_metadata supports formats: b64_zlib_yaml pson raw yaml; using pson
debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: Executing 'diff -u /etc/xinetd.d/check_mk /tmp/puppet-file20160525-10084-1vrr8zf-0'
notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content:
--- /etc/xinetd.d/check_mk 2016-05-25 14:57:26.220873468 +0200
+++ /tmp/puppet-file20160525-10084-1vrr8zf-0 2016-05-25 16:28:06.393363702 +0200
@@ -25,7 +25,7 @@
service check_mk
{
type = UNLISTED
- port = 6556
+ port = 6554
socket_type = stream
protocol = tcp
wait = no
debug: Finishing transaction 70294357735140
info: FileBucket got a duplicate file {md5}cb0264ad1863ee2b3749bd3621cdbdd0
info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Filebucketed /etc/xinetd.d/check_mk to puppet with sum cb0264ad1863ee2b3749bd3621cdbdd0
notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: content changed '{md5}cb0264ad1863ee2b3749bd3621cdbdd0' to '{md5}56ac5c1a50c298de4999649b27ef6277'
debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: The container Class[basic::Check-mk] will propagate my refresh event
info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Scheduling refresh of Service[xinetd]
debug: Service[ntp](provider=debian): Executing '/etc/init.d/ntp status'
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd start'
notice: /Stage[main]/basic::Check-mk/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
notice: /Stage[main]/basic::Check-mk/Service[xinetd]: Triggered 'refresh' from 1 events
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
debug: Class[basic::Check-mk]: The container Stage[main] will propagate my refresh event
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Finishing transaction 70294346109840
debug: Storing state
debug: Stored state in 0.01 seconds
notice: Finished catalog run in 1.43 seconds
debug: Executing '/etc/puppet/etckeeper-commit-post'
debug: report supports formats: b64_zlib_yaml pson raw yaml; using pson
The following line seems suspicious to me:
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
And service --status-all says [ ? ] xinetd. Why does the system not recognize the state of the service?
Your debug log and the output of your manual service command suggest that your xinetd does not have a working status subcommand. As a result, Puppet does not know how (or whether) to manage its run state.
You could consider fixing the initscript to recognize the status subcommand and return an LSB-compliant response (or at least to exit with code 0 if the service is running and anything else otherwise). Alternatively, you can add a status attribute to the Service resource, giving an alternative command that Puppet can use to ascertain the service's run state. (I have linked to the current docs, but I'm pretty sure that Service has had that attribute since well before Puppet 2.7.)
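For example (a minimal sketch; the pidof-based check is an assumption about what reliably detects your xinetd, not something from your setup):
service { 'xinetd':
  ensure  => running,
  enable  => true,
  restart => '/etc/init.d/xinetd reload',
  # hypothetical status command: must exit 0 when xinetd runs, non-zero otherwise
  status  => '/bin/pidof xinetd',
}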
SOLVED: To fix the problem I had to add a status section to the init.d script of xinetd. Afterwards, both service xinetd status and Puppet were able to recognize the state of the service. The added section looks like this:
status)
    if pidof xinetd > /dev/null
    then
        echo "xinetd is running."
        exit 0
    else
        echo "xinetd is NOT running."
        exit 1
    fi
    ;;
Additionally, I added the status option to the usage line:
*)
    echo "Usage: /etc/init.d/xinetd {start|stop|reload|force-reload|restart|status}"
    exit 1
    ;;
This solved the problem.
Related
I am trying to use a persistent datasource with MongoDB in Hyperledger Composer on an Ubuntu droplet,
but after starting the REST server and then issuing the command docker logs -f rest I am getting the following error (I have provided a link to the image):
webuser@ubuntu16:~$ docker logs -f rest
[2018-08-29T12:38:31.278Z] PM2 log: Launching in no daemon mode
[2018-08-29T12:38:31.351Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:31.359Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Failed to connect before the deadline
Error: Error trying to ping. Error: Failed to connect before the deadline
at _checkRuntimeVersions.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:806:34)
at <anonymous>
[2018-08-29T12:38:41.021Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code [1] via signal [SIGINT]
[2018-08-29T12:38:41.024Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:41.028Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:40) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
I don't understand what the problem is or what I am doing wrong, because I have followed all the steps in the Hyperledger Composer documentation successfully.
Is it because I am using it on an Ubuntu droplet? Can anyone help?
EDIT
I followed all the steps mentioned in this tutorial,
but instead of using Google authentication I am using GitHub authentication.
I have also changed localhost to the IP of my Ubuntu droplet in the connection.json file and in this command:
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' -e 's/localhost:7053/peer0.org1.example.com:7053/' -e 's/localhost:7054/ca.org1.example.com:7054/' -e 's/localhost:7050/orderer.example.com:7050/' < $HOME/.composer/cards/restadmin@trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/restadmin@trade-network/
But still with no success! I now get the following error:
webuser@ubuntu16:~$ docker logs rest
[2018-08-30T05:03:02.916Z] PM2 log: Launching in no daemon mode
[2018-08-30T05:03:02.989Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-30T05:03:02.997Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Discovering the Returning Transactions..
Discovered types from business network definition
Generating schemas for all types in business network definition ...
Generated schemas for all types in business network definition
Adding schemas for all types to Loopback ...
Added schemas for all types to Loopback
SyntaxError: Unexpected string in JSON at position 92
at JSON.parse ()
at Promise.then (/home/composer/.npm-global/lib/node_modules/composer-rest-server/server/server.js:141:34)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:228:7)
[2018-08-30T05:03:09.942Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code 1 via signal [SIGINT]
This error, "Error trying to ping. Error: Failed to connect before the deadline", means that the composer-rest-server in the container cannot see/connect to the underlying Fabric at the URLs in the connection.json of the card that you are using to start the REST server.
There are a number of possible reasons:
The Fabric is not started
You are using a Business Network Card that has localhost in the URLs of the connection.json, and localhost just re-directs back into the rest container.
Your rest container is started on a different Docker network bridge to your Fabric containers and cannot connect to the Fabric.
Have you followed this tutorial in the Composer documentation? If followed completely, it avoids the three problems mentioned above.
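If you suspect reason 3, you can check which Docker bridge each container is attached to. A quick sketch, assuming the Fabric containers sit on a network named composer_default (the name that tutorial's scripts typically create; adjust to yours):
docker inspect --format '{{json .NetworkSettings.Networks}}' rest
docker network inspect composer_default
# attach the rest container to the Fabric's network if it is missing
docker network connect composer_default rest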
So I am trying to use my XAMPP server, and for the life of me I can't understand why ProFTPD will not turn on. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone translate what the application log means, and maybe suggest how to go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
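The "bogon" at the start of each line is the machine's hostname as logged by syslog, and the error means ProFTPD cannot resolve that hostname to an IP address. A minimal check-and-fix sketch, assuming the hostname really is bogon and that mapping it to the loopback address is acceptable for your setup:
hostname                                      # confirm the current hostname
ping -c 1 "$(hostname)"                       # does it resolve to an address?
echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts   # add a local mapping if not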
I am using the Kubernetes load-balancer (where the HAProxy configuration is rewritten every 10s and the service restarted). Since I want to pass the socket connection while reloading HAProxy, I changed the Dockerfile of HAProxy so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (the template in which the HAProxy configuration is written):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the Docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI:
I commented out the stats socket line in the template.cfg file and ran the Docker image to verify whether the restart command identifies the socket. The same error occurred. It seems the soft-restart command doesn't pick up the stats socket created by HAProxy.
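Before the reload command runs, it's worth confirming that the stats socket was actually created and answers queries; a quick check, assuming socat is available inside the container:
ls -l /var/run/haproxy/admin.sock                            # was the socket created?
echo "show info" | socat stdio /var/run/haproxy/admin.sock   # does it answer?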
I want to add an additional resource, the version of the installed OpenSSH, to Ohai, so I can use it in my OpenSSH maintenance recipe.
On a RHEL 5.11 test workstation with Chef 12.4.1 and Ohai 8.5.0 I have created and tested this Ohai plugin:
$ cat cookbooks/test/files/default/plugins/openssh.rb
Ohai.plugin(:Openssh) do
  provides "openssh"
  Ohai::Log.debug('plugin start')

  def create_objects
    openssh Mash.new
  end

  collect_data do
    create_objects
    # use backticks to capture the command's output; assigning the bare string
    # would store the literal command text instead of the version number
    openssh[:version] = `ssh -V 2>&1 | head -1 | cut -d, -f1 | cut -d_ -f2`.strip
  end
end
A local test of the Ohai plugin in irb works fine.
Now I'm trying to check the resource's visibility in a Chef recipe:
$ cat test/recipes/default.rb
file "#{ENV['HOME']}/x.txt" do
content 'HELLO WORLD'
end
output="#{Chef::JSONCompat.to_json_pretty(node.to_hash)}"
file '/tmp/node.json' do
content output
end
Chef::Log.info("============ test cookbook ** #{openssh['version']} **")
\#Chef::Log.info("============ test cookbook ** #{node['kernel']} **")
by running a local chef-client:
$ chef-client -z -m test/recipes/default.rb
To make the additional plugin visible, this line was added to the config files:
$ grep Ohai ~/.chef/*.rb
~/.chef/client.rb:Ohai::Config[:plugin_path] << '~/chef/cookbooks/test/files/default/plugins/'
~/.chef/knife.rb:Ohai::Config[:plugin_path] << '~/chef/cookbooks/test/files/default/plugins/'
(I understand that this is overly explicit.)
Although a run printing node['kernel'] works fine, the openssh version does not; a run with debug logging shows:
[2016-01-27T11:48:21-08:00] DEBUG: Cookbooks detail: []
[2016-01-27T11:48:21-08:00] DEBUG: Cookbooks to compile: []
[2016-01-27T11:48:21-08:00] DEBUG: **Loading Recipe File XXX/cookbooks/test/recipes/default.rb**
[2016-01-27T11:48:21-08:00] DEBUG: Resources for generic file resource enabled on node include: [Chef::Resource::File]
[2016-01-27T11:48:21-08:00] DEBUG: Resource for file is Chef::Resource::File
[2016-01-27T11:48:21-08:00] DEBUG: Resources for generic file resource enabled on node include: [Chef::Resource::File]
[2016-01-27T11:48:21-08:00] DEBUG: Resource for file is Chef::Resource::File
[2016-01-27T11:48:21-08:00] DEBUG: Resources for generic openssh resource enabled on node include: []
[2016-01-27T11:48:21-08:00] DEBUG: **Dynamic resource resolver FAILED to resolve a resource for openssh**
[2016-01-27T11:48:21-08:00] DEBUG: Re-raising exception: NameError - No resource, method, or local variable named `openssh' for `Chef::Recipe "XXX/cookbooks/test/recipes/default.rb"'
Questions:
How do I properly deploy the additional plugin for local and remote recipe execution, and how can I check that it has been deployed and is ready?
How do I properly tell chef-client to execute the additional Ohai plugin, both for a local single-recipe run and for a remote run?
Any explanations and suggestions are welcome.
Alex
A few issues: first, check out https://github.com/coderanger/ohai-example to see how to package an Ohai plugin in a cookbook for distribution. Second, node attributes from custom plugins still need to be accessed via the node object: node['openssh']['version']. Third, remember how execution ordering works in Chef (https://coderanger.net/two-pass/); the custom attributes won't be available until after the plugin is loaded and run.
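For example, a recipe would read the attribute through the node object and, given the two-pass model, defer the read until converge time with lazy (a minimal sketch):
log 'openssh-version' do
  # read the custom Ohai attribute via the node object at converge time
  message lazy { "============ test cookbook ** #{node['openssh']['version']} **" }
end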
Check out the mainstream solution before googling!
This project describes how to deploy your plugin (current as of 2017):
https://github.com/chef-cookbooks/ohai
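With that cookbook, shipping the plugin reduces to something like the following sketch (property names from memory; check the cookbook's README):
# assumes the plugin file lives at files/default/openssh.rb in the calling cookbook
ohai_plugin 'openssh' do
  source_file 'openssh.rb'
end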
When I run Chef-Client in PowerShell and allow the process to output to the screen using the following command:
& Chef-Client -z -r "chef-cookbook"
I get this output:
[2014-11-10T07:20:40-08:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 11.16.4
resolving cookbooks for run list: ["chef-cookbook"]
Synchronizing Cookbooks:
- chef-cookbook
- powershell-automation
Compiling Cookbooks...
Converging 2 resources
Recipe: powershell-automation::Port_Configuration
* powershell_script[Port_Configuration] action run (skipped due to not_if)
Recipe: powershell-automation::IIS_InstallAutomation
* powershell_script[IIS_InstallAutomation] action run (skipped due to not_if)
Running handlers:
Running handlers complete
Chef Client finished, 0/0 resources updated in 43.69728 seconds
When I run the same command, but capture it to a variable, using the following command:
$chefOutput = & Chef-Client -z -r "chef-cookbook"
The $chefOutput variable contains:
[2014-11-10T07:23:01-08:00] WARN: No config file found or specified on command line, using command line options.
[2014-11-10T07:23:01-08:00] INFO: Auto-discovered chef repository at C:/Temp
[2014-11-10T07:23:01-08:00] INFO: Starting chef-zero on host localhost, port 8889 with repository at repository at C:/Temp
One version per cookbook
[2014-11-10T07:23:06-08:00] INFO: *** Chef 11.16.4 ***
[2014-11-10T07:23:06-08:00] INFO: Chef-client pid: 3364
[2014-11-10T07:23:37-08:00] INFO: Setting the run_list to [recipe[chef-cookbook]] from CLI options
[2014-11-10T07:23:37-08:00] INFO: Run List is [recipe[chef-cookbook]]
[2014-11-10T07:23:37-08:00] INFO: Run List expands to [chef-cookbook]
[2014-11-10T07:23:37-08:00] INFO: Starting Chef Run for XXXXX.XXX.XXX.XXX.com
[2014-11-10T07:23:37-08:00] INFO: Running start handlers
[2014-11-10T07:23:37-08:00] INFO: Start handlers complete.
[2014-11-10T07:23:37-08:00] INFO: HTTP Request Returned 404 Not Found : Object not found: /reports/nodes/XXXXX.XX.XX.XX.com/runs
[2014-11-10T07:23:37-08:00] INFO: Loading cookbooks [chef-cookbook#2015.1.0, powershell-automation#2015.1.0]
[2014-11-10T07:23:37-08:00] INFO: Processing powershell_script[Port_Configuration] action run (powershell-automation::Port_Configuration line 22)
[2014-11-10T07:23:37-08:00] INFO: Processing bash[Guard resource] action run (dynamically defined)
[2014-11-10T07:23:38-08:00] INFO: bash[Guard resource] ran successfully
[2014-11-10T07:23:38-08:00] INFO: Processing powershell_script[IIS_InstallAutomation] action run (powershell-automation::IIS_InstallAutomation line 16)
[2014-11-10T07:23:43-08:00] INFO: Chef Run complete in 6.346486 seconds
[2014-11-10T07:23:43-08:00] INFO: Running report handlers
[2014-11-10T07:23:43-08:00] INFO: Report handlers complete
Why does this discrepancy between the outputs happen?
Note: the output captured in the variable also contains the timestamps and INFO tags on each line. Based on this, I believe it has to do with how Chef writes its output rather than with PowerShell itself.
It checks whether stdout is a TTY. When chef-client is writing to an interactive terminal, it uses the human-friendly formatter you see on screen; when stdout is captured or redirected, as with the variable assignment, it falls back to the timestamped logger-style output.
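In simplified form, the decision looks like this (a sketch with hypothetical helper names, not Chef's literal source):
# how chef-client picks its output mode; use_doc_formatter/use_logger are
# hypothetical stand-ins for Chef's internal formatter and logger wiring
if $stdout.tty?
  use_doc_formatter            # interactive run: the friendly "Starting Chef Client..." output
else
  use_logger($stdout)          # captured/redirected: timestamped [INFO]/[WARN] lines
end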