watchman cannot trigger after adding .watchmanconfig in directories

I added a .watchmanconfig to every directory I am watching, in order to set a settle option. I killed all triggers first before adding the files. Now when I re-add the triggers and start them, they do not start, and I get this log:
2023-02-09T08:18:18,537: [cli] failed to identify PDU: fill_buffer: EOF
2023-02-09T08:18:18,538: [cli] unable to talk to your watchman on /tmp/cnpante-state/sock! (Success)
2023-02-09T08:18:18,574: [cli] failed to identify PDU: fill_buffer: EOF
2023-02-09T08:18:18,574: [cli] unable to talk to your watchman on /tmp/cnpante-state/sock! (Success)
2023-02-09T08:18:18,615: [cli] failed to identify PDU: fill_buffer: EOF
2023-02-09T08:18:18,615: [cli] unable to talk to your watchman on /tmp/cnpante-state/sock! (Success)
I added this to .watchmanconfig
{
"settle": "10000"
}

How to handle Rundeck kill job signal

I have a Python script that is executed via Rundeck. I have already implemented handlers for signal.SIGINT and signal.SIGTERM, but when the script is terminated via the Rundeck KILL JOB BUTTON it does not catch the signal.
Does anyone know what the KILL BUTTON in Rundeck uses under the hood to kill the process?
Here is an example of how I'm catching signals; it works in a standard command-line execution:
import logging
import os
import sys

import psutil
from signal import SIGINT

# Error and SIGINT_EXIT are defined elsewhere in the script.

def sigint_handler(signum, frame):
    proc = psutil.Process(os.getpid())
    children_procs = proc.children(recursive=True)
    children_procs.reverse()  # kill the deepest children first
    for child_proc in children_procs:
        try:
            if child_proc.is_running():
                msg = f'removing: {child_proc.pid}, {child_proc.name()}'
                logging.debug(msg)
                os.kill(child_proc.pid, SIGINT)
        except OSError as exc:
            raise Error('Error removing processes', detail=str(exc))
    sys.exit(SIGINT_EXIT)
Setting the logging level to debug in Rundeck, I get this:
[wf:7bb0cd58-7dc6-4a55-bb0f-62399533396c] Interrupted: Engine interrupted, stopping engine...
Disconnecting from 9.11.56.44 port 22
[wf:7bb0cd58-7dc6-4a55-bb0f-62399533396c] WillShutdown: Workflow engine shutting down (interrupted? true)
[wf:7bb0cd58-7dc6-4a55-bb0f-62399533396c] OperationFailed: operation failed: java.util.concurrent.CancellationException: Task was cancelled.
SSH command execution error: Interrupted: Connection was interrupted
Caught an exception, leaving main loop due to Socket closed
Failed: Interrupted: Connection was interrupted
[workflow] finishExecuteNodeStep(mario): NodeDispatch: Interrupted: Connection was interrupted
1: Workflow step finished, result: Dispatch failed on 1 nodes: [mario: Interrupted: Connection was interrupted + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:mario)=BaseDataContext{{exec={exitCode=-1}}}, ContextView(node:mario)=BaseDataContext{{exec={exitCode=-1}}}}, base=null)} ]
[workflow] Finish step: 1,NodeDispatch
[wf:7bb0cd58-7dc6-4a55-bb0f-62399533396c] Complete: Workflow complete: [Step{stepNum=1, label='null'}: CancellationException]
[wf:7bb0cd58-7dc6-4a55-bb0f-62399533396c] Cancellation while running step [1]
[workflow] Finish execution: node-first: [Workflow result: , Node failures: {mario=[]}, status: failed]
[Workflow result: , Node failures: {mario=[]}, status: failed]
Execution failed: 57 in project iLAB: [Workflow result: , Node failures: {mario=[]}, status: failed]
Is it just closing the connection?
Rundeck can't manage your script's internal threads directly. With the kill button you can kill only the Rundeck job; the only way to manage this is to put all the logic in your script (detect the thread and, depending on some option/behavior, kill it). That was requested here and here.
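The log above shows Rundeck tearing down the SSH connection, and a closed SSH session typically results in SIGHUP rather than SIGINT or SIGTERM on the remote process. A minimal sketch, assuming that is the mechanism, that registers one handler for all three signals:

```python
import signal
import sys

def shutdown_handler(signum, frame):
    # Log which signal arrived, run cleanup here, then exit with the
    # conventional 128 + signal-number code.
    print(f'caught signal {signal.Signals(signum).name}, shutting down')
    sys.exit(128 + signum)

# SIGHUP is what the process usually receives when the controlling
# terminal / SSH session goes away.
for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGHUP):
    signal.signal(sig, shutdown_handler)
```

If Rundeck (or the OS) instead delivers SIGKILL, no handler can run at all; in that case the only option is cleanup logic in a wrapper script or on the next run.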

vscode remote-ssh: server status check failed - waiting and retrying

This case where I can't connect to the remote because of "server status check failed - waiting and retrying" has happened several times.
However, when I delete the directory "data" and the files with the suffix '.log', '.pid' or '.token' on the remote server under the directory ".vscode-server", the problem is solved.[1]
[1]: https://i.stack.imgur.com/pwEwf.png
On the remote server side, check whether a vscode-server daemon process from the last connection has failed to quit; kill them all and retry:
$ ps aux | grep vscode-server
$ kill -2 pid
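If doing this by hand gets tedious, the same ps-and-kill sequence can be scripted. A stdlib-only sketch (the pids_matching helper is hypothetical, not part of any tool mentioned here):

```python
import os
import signal
import subprocess

def pids_matching(pattern):
    """Return PIDs whose command line contains `pattern` (parses `ps` output)."""
    out = subprocess.run(['ps', 'ax', '-o', 'pid=,args='],
                         capture_output=True, text=True, check=True).stdout
    pids = []
    for line in out.splitlines():
        pid, _, args = line.strip().partition(' ')
        if pattern in args and int(pid) != os.getpid():
            pids.append(int(pid))
    return pids

for pid in pids_matching('vscode-server'):
    os.kill(pid, signal.SIGINT)  # same as `kill -2 <pid>`
```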
I tried rebooting the remote machine and it worked.

Error while using persistent datasource using MongoDB in Hyperledger Composer

I am trying to use a persistent datasource using MongoDB in Hyperledger Composer on an Ubuntu droplet,
but after starting the rest server and then issuing the command docker logs -f rest I get the following error (I have provided a link to the image):
webuser@ubuntu16:~$ docker logs -f rest
[2018-08-29T12:38:31.278Z] PM2 log: Launching in no daemon mode
[2018-08-29T12:38:31.351Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:31.359Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Failed to connect before the deadline
Error: Error trying to ping. Error: Failed to connect before the deadline
at _checkRuntimeVersions.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:806:34)
at <anonymous>
[2018-08-29T12:38:41.021Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code [1] via signal [SIGINT]
[2018-08-29T12:38:41.024Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-29T12:38:41.028Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:40) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Connection fails: Error: Error trying to ping. Error: Failed to connect before the deadline
It will be retried for the next request.
I don't understand what the problem is or what I am doing wrong, because I have followed all the steps in the Hyperledger Composer document with success.
Is it because I am using it on an Ubuntu droplet? Can anyone help?
EDIT
I followed all the steps mentioned in this tutorial
but instead of using Google authentication I am using GitHub authentication.
Also, I have changed localhost to the IP of my Ubuntu droplet in the connection.json file and also in this command:
sed -e 's/localhost:7051/peer0.org1.example.com:7051/' -e 's/localhost:7053/peer0.org1.example.com:7053/' -e 's/localhost:7054/ca.org1.example.com:7054/' -e 's/localhost:7050/orderer.example.com:7050/' < $HOME/.composer/cards/restadmin@trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/restadmin@trade-network/
but yet with no success! I get the following error now:
webuser@ubuntu16:~$ docker logs rest
[2018-08-30T05:03:02.916Z] PM2 log: Launching in no daemon mode
[2018-08-30T05:03:02.989Z] PM2 log: Starting execution sequence in -fork mode- for app name:composer-rest-server id:0
[2018-08-30T05:03:02.997Z] PM2 log: App name:composer-rest-server id:0 online
WARNING: NODE_APP_INSTANCE value of '0' did not match any instance config file names.
WARNING: See https://github.com/lorenwest/node-config/wiki/Strict-Mode
Discovering types from business network definition ...
(node:15) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Discovering the Returning Transactions..
Discovered types from business network definition
Generating schemas for all types in business network definition ...
Generated schemas for all types in business network definition
Adding schemas for all types to Loopback ...
Added schemas for all types to Loopback
SyntaxError: Unexpected string in JSON at position 92
at JSON.parse ()
at Promise.then (/home/composer/.npm-global/lib/node_modules/composer-rest-server/server/server.js:141:34)
at
at process._tickDomainCallback (internal/process/next_tick.js:228:7)
[2018-08-30T05:03:09.942Z] PM2 log: App [composer-rest-server] with id [0] and pid [15], exited with code 1 via signal [SIGINT]
This error Error trying to ping. Error: Failed to connect before the deadline means that the composer-rest-server in the container cannot see/connect to the underlying Fabric at the URLs in the connection.json of the card that you are using to start the REST server.
There are a number of reasons why:
1. The Fabric is not started.
2. You are using a Business Network Card that has localhost in the URLs of the connection.json, and localhost just redirects back into the rest container.
3. Your rest container is started on a different Docker network bridge from your Fabric containers and cannot connect to the Fabric.
Have you followed this tutorial in the Composer documentation? If followed completely it will avoid the 3 problems mentioned above.
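To rule out the second and third causes, one can probe, from inside the rest container, whether the hosts in the card's connection.json are reachable at all. A sketch (check_connection_json is a hypothetical helper; the profile is treated as arbitrary JSON containing URL strings):

```python
import json
import socket
from urllib.parse import urlparse

def check_connection_json(path):
    """Probe every URL found in a connection profile with a TCP connect."""
    with open(path) as f:
        profile = json.load(f)

    def urls(node):
        # Walk the JSON tree for anything that looks like a URL.
        if isinstance(node, dict):
            for value in node.values():
                yield from urls(value)
        elif isinstance(node, list):
            for value in node:
                yield from urls(value)
        elif isinstance(node, str) and '://' in node:
            yield node

    def probe(url):
        parsed = urlparse(url)
        try:
            socket.create_connection((parsed.hostname, parsed.port),
                                     timeout=5).close()
            return 'ok'
        except OSError as exc:
            return f'unreachable ({exc})'

    return {url: probe(url) for url in urls(profile)}
```

Any URL reported as unreachable points at the container that cannot be reached from the rest container's network.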

FTPD Server Issue

So I am trying to use my XAMPP server and for the life of me can't understand why my ProFTPD will not turn on. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone tell me what the application log means, and perhaps how I can go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
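Reading the Stderr lines, proftpd is failing because the machine's own hostname ("bogon") does not resolve to an IP address, which leaves it with no valid server to configure. A quick sketch to confirm whether the hostname resolves (assuming that same failure mode):

```python
import socket

hostname = socket.gethostname()
try:
    addr = socket.gethostbyname(hostname)
    print(f'{hostname} resolves to {addr}')
except socket.gaierror:
    # Same failure mode as proftpd's "unable to determine IP address"
    # warning; mapping the hostname in /etc/hosts is the usual fix.
    print(f'{hostname} does not resolve')
```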

puppet notify xinetd doesn't reload xinetd service

I'm trying to install the check_mk agent, with the standard check_mk xinetd config file, via Puppet on a Debian 7 server.
check_mk installs without a problem, but I've got an issue with the xinetd config.
When I change the port in the source config file on the puppet master and run puppet agent -t on the client host, the new configuration is deployed correctly, but puppet doesn't reload the xinetd service because the system can't recognize the state of the xinetd service.
The puppet manifest looks like this:
class basic::check-mk {
  case $operatingsystem {
    debian: {
      package { 'check-mk-agent':
        ensure => present,
      }
      file { '/etc/xinetd.d/check_mk':
        notify => Service['xinetd'],
        ensure => file,
        source => 'puppet:///modules/basic/etc--xinetd--checkmk',
        mode   => '0644',
      }
      service { 'xinetd':
        ensure  => running,
        enable  => true,
        restart => '/etc/init.d/xinetd reload',
      }
    }
  }
}
The debug looks like this:
info: Applying configuration version '1464186485'
debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]
debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: requires Class[Ntp::Install]
debug: /Stage[main]/basic::Check-mk/Service[xinetd]/subscribe: subscribes to File[/etc/xinetd.d/check_mk]
debug: /Stage[main]/Ntp::Install/before: requires Class[Ntp::Config]
debug: /Stage[main]/Ntp::Service/before: requires Anchor[ntp::end]
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: Prefetching apt resources for package
debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
debug: Puppet::Type::Package::ProviderApt: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
debug: /Schedule[never]: Skipping device resources because running on a host
debug: file_metadata supports formats: b64_zlib_yaml pson raw yaml; using pson
debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: Executing 'diff -u /etc/xinetd.d/check_mk /tmp/puppet-file20160525-10084-1vrr8zf-0'
notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content:
--- /etc/xinetd.d/check_mk 2016-05-25 14:57:26.220873468 +0200
+++ /tmp/puppet-file20160525-10084-1vrr8zf-0 2016-05-25 16:28:06.393363702 +0200
@@ -25,7 +25,7 @@
service check_mk
{
type = UNLISTED
- port = 6556
+ port = 6554
socket_type = stream
protocol = tcp
wait = no
debug: Finishing transaction 70294357735140
info: FileBucket got a duplicate file {md5}cb0264ad1863ee2b3749bd3621cdbdd0
info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Filebucketed /etc/xinetd.d/check_mk to puppet with sum cb0264ad1863ee2b3749bd3621cdbdd0
notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: content changed '{md5}cb0264ad1863ee2b3749bd3621cdbdd0' to '{md5}56ac5c1a50c298de4999649b27ef6277'
debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: The container Class[basic::Check-mk] will propagate my refresh event
info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Scheduling refresh of Service[xinetd]
debug: Service[ntp](provider=debian): Executing '/etc/init.d/ntp status'
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd start'
notice: /Stage[main]/basic::Check-mk/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
notice: /Stage[main]/basic::Check-mk/Service[xinetd]: Triggered 'refresh' from 1 events
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
debug: Class[basic::Check-mk]: The container Stage[main] will propagate my refresh event
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Finishing transaction 70294346109840
debug: Storing state
debug: Stored state in 0.01 seconds
notice: Finished catalog run in 1.43 seconds
debug: Executing '/etc/puppet/etckeeper-commit-post'
debug: report supports formats: b64_zlib_yaml pson raw yaml; using pson
The following line seems suspicious to me:
debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
And service --status-all says [ ? ] xinetd. Why does the system not recognize the state of the service?
Your debug log and the output of your manual service command suggest that your xinetd does not have a working status subcommand. As a result, Puppet does not know how (or whether) to manage its run state.
You could consider fixing the initscript to recognize the status subcommand and make an LSB-compliant response (or at least to exit with code 0 if the service is running and anything else otherwise). Alternatively, you can add a status attribute to the Service resource, giving an alternative command that Puppet can use to ascertain the service's run state. (I have linked to the current docs, but I'm pretty sure that Service has had that attribute since well before Puppet 2.7.)
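The check Puppet relies on reduces to running the status command and treating exit code 0 as "running". A minimal sketch of that convention (service_is_running is a hypothetical helper illustrating the LSB exit-code contract, not Puppet's actual implementation):

```python
import subprocess

def service_is_running(status_cmd):
    """LSB convention: a status command exits 0 if and only if the
    service is running; any non-zero code means stopped/unknown."""
    result = subprocess.run(status_cmd, shell=True,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

# e.g. service_is_running('/etc/init.d/xinetd status')
```

An initscript with no status subcommand typically exits non-zero (or prints usage), which is why Puppet concluded the service was "not running" and skipped the restart.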
SOLVED: To fix the problem I had to add a status section to the init.d script of xinetd. Afterwards service xinetd status and puppet were able to recognize the status of the service. The added section looks like this:
status)
    if pidof xinetd > /dev/null
    then
        echo "xinetd is running."
        exit 0
    else
        echo "xinetd is NOT running."
        exit 1
    fi
    ;;
Additionally, I added the status option to the Usage line:
*)
    echo "Usage: /etc/init.d/xinetd {start|stop|reload|force-reload|restart|status}"
    exit 1
    ;;
This solved the problem.