I have configured the following check:
"cron": {
"command": "check-process.rb -p cron",
"subscribers": [],
"handlers": [
"mailer",
"flowdock",
"remediator"],
"interval": 10,
"occurences": 3,
"refresh": 600,
"standalone": false,
"remediation": {
"light_remediation": {
"occurrences": [1, 2],
"severities": [2]
}
}
},
"light_remediation": {
"command": "touch /tmp/test",
"subscribers": [],
"handlers": ["flowdock"],
"publish": false,
"interval": 10
},
The mailer and flowdock handlers are being executed as expected, so I am receiving e-mails and Flowdock notifications when the cron service is not running. The problem is that the remediator handler is not working, and I have no idea why. I have used this: https://github.com/nstielau/sensu-community-plugins/blob/remediation/handlers/remediation/sensu.rb
I ran into similar issues but finally managed to get it working with some modifications.
First off, the gotchas:
Each server (client.json.template) needs to subscribe to a channel named after its $HOSTNAME (note that in client.json the attribute is "subscriptions"):
"subscriptions": ["$HOSTNAME"],
You don't have a "trigger_on" section; it's in the handler code but not in the example, and you want to set it to trigger on $HOSTNAME as well.
my_check.json.template
"trigger_on": ["$HOSTNAME"]
The remediation checks need to subscribe to $HOSTNAME as well (so you need to template the checks out too):
"subscribers": ["$HOSTNAME"],
At this point, you should be able to trigger your remediation from the sensu server manually.
Lastly, the example code listed in sensu.rb is broken... The occurrences check needs to be one level up in the loop, and trigger_on does not live inside the remediations section; it sits outside it, at the check level.
subscribers = @event['check']['trigger_on'] ? [@event['check']['trigger_on']].flatten : [client]
...
      # Check remediations matching the current severity
      next unless (conditions["severities"] || []).include?(severity)
      remediations_to_trigger << check
    end
  end
  remediations_to_trigger
end
After that, it should work for you.
Oh, and one last gotcha: in your client.json.template, set
"safe_mode": true
It defaults to false...
When I am using this remote container extension for launching a container, sometimes it gets stuck at "installing extensions".
It stays stuck in this state for a long time with no response.
Sometimes it works fine and fast, and sometimes it does not.
I have commented out all the extensions in the devcontainer.json file, but it still happens sometimes.
I don't know whether it is a network issue or something else, and I want to know how I can make it stable.
Thanks!
This is my devcontainer.json:
{
  "name": "gazebo_ros_docker",
  "dockerFile": "Dockerfile",
  "extensions": [
    // "ms-iot.vscode-ros",
    // "ms-vscode.cpptools",
    // "mhutchie.git-graph"
  ],
  "runArgs": [
    "-it",
    "--rm",
    "--privileged",
    "-e ROS_HOSTNAME=localhost",
    "-e ROS_MASTER_URI=http://localhost:11311",
    "--name=ros_container"
  ],
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash"
  },
  // "postCreateCommand": "bash /catkin_ws/src/panda_simulation/scripts/docker-setup.sh",
  "workspaceMount": "source=${localWorkspaceFolder},target=/catkin_ws,type=bind",
  // "workspaceMount": "source=${localWorkspaceFolder},target=/catkin_ws,type=bind,consistency=delegated",
  "workspaceFolder": "/catkin_ws",
  "mounts": [
    "source=/tmp/.X11-unix,target=/tmp/.X11-unix,type=bind"
  ],
  "containerEnv": {
    "DISPLAY": "${localEnv:DISPLAY}"
  },
  "containerUser": "docker_ros"
}
I did not find a proper way to solve this problem, but as a workaround I exported an image of the container that VS Code had already provisioned and now launch that directly. It is working well now.
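For reference, a rough sketch of that workaround, assuming the container name from the runArgs above (the snapshot image tag is made up):
# Snapshot the container after VS Code has finished provisioning it
docker commit ros_container ros_gazebo_snapshot:latest
# Launch the snapshot directly, skipping the devcontainer provisioning step
docker run -it --rm --privileged \
  -e ROS_HOSTNAME=localhost \
  -e ROS_MASTER_URI=http://localhost:11311 \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ros_gazebo_snapshot:latest /bin/bash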
I am missing deletes in watchman. Version 4.9.0, inotify.
My test code:
#!/usr/bin/env python3
import pathlib
import pywatchman
w = pywatchman.client()
w.query('watch', '/tmp/z')
clock = w.query('clock', '/tmp/z')['clock']
print(clock)
q = w.query('subscribe', '/tmp/z', 'Buffy',
            {'expression': ["since", clock],
             "fields": ["name", "exists", "oclock", "ctime_ns", "new", "mode"]})
print(q)
f = pathlib.Path('/tmp/z/xx')
f.touch()
data = w.receive()
clock = data['clock']
print()
print('Touch file:')
print(data)
print('Clock:', clock)
f.unlink()
print()
print('Delete file:')
print(w.receive())
w.close()
w = pywatchman.client(timeout=99999)
q = w.query('subscribe', '/tmp/z', 'Buffy',
            {'expression': ["since", clock],
             "fields": ["name", "exists", "oclock", "ctime_ns", "new", "mode"]})
print(q)
print()
print('We request changes since', clock)
print(w.receive())
w.close()
What I am seeing:
We create the file. We receive the notification of the new file and the directory change. GOOD. We take note of the "clock" of this notification.
We delete the file. We get the notification of the file deletion. GOOD. But we DO NOT get the notification of the directory change.
Now imagine that the process crashes BEFORE it can update its internal details, but it remembers the changes notified in step 1 (directory update and creation of a new file). That is, transaction 1 is processed, but the program crashes before transaction 2 is processed.
We now open a new subscription to watchman (remember, we are simulating a crash) and request changes since step 1. I am simulating a recovery, where the program reboots, notices that transaction 1 was OK (the file is present), and requests further changes (it should get the deletion).
I would expect to get a file deletion but I get... NOTHING. CATASTROPHIC.
Transcript:
$ ./watchman-bug.py
c:1517109517:10868:3:23
{'clock': 'c:1517109517:10868:3:23', 'subscribe': 'Buffy', 'version': '4.9.0'}
Touch file:
{'unilateral': True, 'subscription': 'Buffy', 'root': '/tmp/z', 'files': [{'name': 'xx', 'exists': True, 'oclock': 'c:1517109517:10868:3:24', 'ctime_ns': 1517114230070245747, 'new': True, 'mode': 33188}], 'is_fresh_instance': False, 'version': '4.9.0', 'since': 'c:1517109517:10868:3:23', 'clock': 'c:1517109517:10868:3:24'}
Clock: c:1517109517:10868:3:24
Delete file:
{'unilateral': True, 'subscription': 'Buffy', 'root': '/tmp/z', 'files': [{'name': 'xx', 'exists': False, 'oclock': 'c:1517109517:10868:3:25', 'ctime_ns': 1517114230070245747, 'new': False, 'mode': 33188}], 'is_fresh_instance': False, 'version': '4.9.0', 'since': 'c:1517109517:10868:3:24', 'clock': 'c:1517109517:10868:3:25'}
{'clock': 'c:1517109517:10868:3:25', 'subscribe': 'Buffy', 'version': '4.9.0'}
We request changes since c:1517109517:10868:3:24
The process hangs waiting for the deletion notification.
What am I doing wrong?
Thanks for your time and knowledge!
The issue is that you're using a since expression term rather than telling watchman to use the since generator (the recency index).
What's the difference? You can think of this as the difference between the FROM and WHERE clauses in SQL. The expression field is similar in intent to the WHERE clause: it filters down the matched results. What you wanted to do is specify the FROM clause by setting the since field in the query spec. This is admittedly a subtle difference.
The solution is to remove the expression term and add the generator term like this:
q = w.query('subscribe', '/tmp/z', 'Buffy',
{"since": clock,
"fields": ["name", "exists", "oclock",
"ctime_ns", "new", "mode"]})
While we don't really have any documentation on the use of the pywatchman API, you can borrow the concepts from the slightly better documented Node.js API; here's a relevant snippet:
https://facebook.github.io/watchman/docs/nodejs.html#subscribing-only-to-changed-files
We are trying to use Drools as our rule engine service. What we have done so far is listed below:
Deployed workbench 7.2.Final
Deployed KIE server 7.2.0.Final
Configured some data objects and rules, deployed the changes to the KIE server, and we are able to execute the rules using the REST API
Most of our requirements are satisfied by a stateless session (give it a set of data, execute the rules, and return the data, that's it). But using stateless sessions means giving up many of the important features provided by Drools stateful sessions.
So we are trying to use a stateful session per request, which means the session should get disposed of as soon as the request ends. Also, parallel requests should not interfere with each other even if the session name is the same.
We found the container runtime strategy configuration (Workbench > Deploy > {any container} > Process Configuration > Runtime strategy).
But even after configuring the container strategy to Per Request, it still behaves the same as Singleton (the session is not getting disposed of after each request).
In a few places we read that the runtime strategy is only implemented in jBPM.
The way we make requests to the KIE server is shown below:
Request: POST {HOST}/kie-server/services/rest/server/containers/instances/TestRequest_1.0.4
{
  "lookup": "ab-session", // stateful session
  "commands": [
    {
      "insert": {
        "out-identifier": "125",
        "object": {
          "com.myteam.testrequest.Product": {
            "id": "123",
            "name": "Hoo Hoo",
            "count": 0
          }
        },
        "return-object": "true"
      }
    },
    {
      "insert": {
        "out-identifier": "126",
        "object": {
          "com.myteam.testrequest.Product": {
            "id": "123",
            "name": "Hoo Hoo",
            "count": 0
          }
        },
        "return-object": "true"
      }
    },
    {"fire-all-rules": "hf2"}
  ]
}
We need help in achieving this requirement. Also, please help us understand whether we have done something wrong.
In kmodule.xml you may try adding the "prototype" scope, because the default is "singleton":
<ksession name="SessionName" type="stateful" default="false" clockType="realtime" scope="prototype"/>
I use Mininet with a custom topology and the Ryu REST controller "ofctl_rest.py". After installing some flow entries in the switches, sending some packets over the network, and capturing traffic, I noticed that the switches do not decrease the TTL field in the IP layer. I figured out that I have to tell the switches to decrement the TTL field (this is possible since OpenFlow version 1.1). To do so I tried the line "type": "DEC_NW_TTL", but it does not work. My complete command looks like this:
curl -X POST -d '{
  "dpid": 1,
  "cookie": 1,
  "cookie_mask": 1,
  "table_id": 0,
  "idle_timeout": 3600,
  "hard_timeout": 3600,
  "priority": 0,
  "flags": 1,
  "match": {
    "in_port": 1
  },
  "actions": [
    {
      "type": "OUTPUT",
      "port": 4,
      "type": "DEC_NW_TTL"
    }
  ]
}' http://localhost:8080/stats/flowentry/add
What am I doing wrong? How do I have to modify the command to make the switch decrement the TTL? Please help me.
Thank you in advance.
I think you have to specify more than one action. You should also change the order of the actions: first decrement the TTL, and afterwards send the packet out. Sending the packet out first and decrementing afterwards doesn't work.
I would try it this way:
curl -X POST -d '{
  "dpid": 1,
  "cookie": 1,
  "cookie_mask": 1,
  "table_id": 0,
  "idle_timeout": 3600,
  "hard_timeout": 3600,
  "priority": 0,
  "flags": 1,
  "match": {
    "in_port": 1
  },
  "actions": [
    {
      "type": "DEC_NW_TTL"
    },
    {
      "type": "OUTPUT",
      "port": 4
    }
  ]
}' http://localhost:8080/stats/flowentry/add
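To verify what actually got installed, ofctl_rest also lets you dump the flow table of a switch; for the switch with dpid 1 that would be:
curl -X GET http://localhost:8080/stats/flow/1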
The answer by Abbadon should work. You should put each action within its own pair of braces. However, the order of the actions in the POST request doesn't matter: OpenFlow has a default order for applying the different types of actions:
copy TTL inwards: apply copy TTL inward actions to the packet
pop: apply all tag pop actions to the packet
push-MPLS: apply MPLS tag push action to the packet
push-PBB: apply PBB tag push action to the packet
push-VLAN: apply VLAN tag push action to the packet
copy TTL outwards: apply copy TTL outwards action to the packet
decrement TTL: apply decrement TTL action to the packet
set: apply all set-field actions to the packet
qos: apply all QoS actions, such as set queue, to the packet
group: if a group action is specified, apply the actions of the relevant group bucket(s) in the order specified by this list
output: if no group action is specified, forward the packet on the port specified by the output action
I have Sensu running and followed the instructions the best I could to install the Slack plugin. I'm attempting to just do a "hello-world" to get started, but the documentation seems lacking to me.
I followed the "getting started" with checks:
https://sensuapp.org/docs/0.20/getting-started-with-checks
and everything seems to be in the correct place on the server.
I am attempting to install the following community plugin, but they have a catch-all instruction for all community plugins. There is a JSON file in the plugin instructions, but it doesn't say where to put it...
https://github.com/sensu-plugins/sensu-plugins-slack
Here is what my check_cron.json looks like (I tried two methods, one from a source other than Sensu):
{
  "checks": {
    "cron_checks": {
      "handlers": ["default", "slack"],
      "command": "/etc/sensu/plugins/check-procs.rb -p cron -C 1",
      "interval": 60,
      "subscribers": ["webservers"]
    },
    "cron": {
      "handlers": ["default", "slack"],
      "command": "/etc/sensu/plugins/check-procs.rb -p cron",
      "subscribers": [
        "production",
        "webservers"
      ],
      "interval": 60
    }
  }
}
I have restarted my server after making the changes. I'm assuming that this check will run every minute and call the Slack notification plugin, but I don't know what I'm missing, or where to put the .json doc from the Slack plugin "documentation":
https://github.com/sensu-plugins/sensu-plugins-slack
Any help getting me in the right direction?
You need a handler on the Sensu server that will fire the request to Slack. Have you created that? If yes, please post its content.
So I just solved this. benishkey did provide the solution in the link; however, just in case anyone comes across this and the link is broken, I thought I would add the solution.
From GitHub user eugene-chow:
The Slack handler's config needs to be named differently. Try the JSON below. I renamed the Slack config for each environment, and then pointed the handler to the respective config with -j config_name.
{
  "handlers": {
    "slack-staging": {
      "type": "pipe",
      "command": "/usr/local/bin/handler-slack.rb -j slack-staging",
      "severities": ["critical", "unknown"]
    }
  },
  "slack-staging": {
    "webhook_url": "https://hooks.slack.com/services/...",
    "template": ""
  }
}
{
  "handlers": {
    "slack-production": {
      "type": "pipe",
      "command": "/usr/local/bin/handler-slack.rb -j slack-production",
      "severities": ["critical", "unknown"]
    }
  },
  "slack-production": {
    "webhook_url": "https://hooks.slack.com/services/...",
    "template": ""
  }
}
I dropped the handler-slack.rb file in with my checks and referenced it from there because it wasn't in my /usr/local/bin/ folder
I was facing the same issue. The answer is already given above, but maybe this will help someone in the future.
First, install the Sensu Slack plugin:
/opt/sensu/embedded/bin/gem install sensu-plugins-slack
Then, create a handler config file:
vim /etc/sensu/conf.d/slack-handler.json
The handler script is handler-slack.rb: https://github.com/sensu-plugins/sensu-plugins-slack/blob/master/bin/handler-slack.rb
{
  "handlers": {
    "slack": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/handler-slack.rb",
      "severities": ["critical", "unknown"]
    }
  },
  "slack": {
    "webhook_url": "https://your_webhook.com/abc",
    "template": ""
  }
}
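After both files are in place, restart the Sensu services so the new handler config is loaded; on a systemd-based install that is something like:
sudo systemctl restart sensu-server sensu-api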
I found the answer in the "issues" section on GitHub:
https://github.com/sensu-plugins/sensu-plugins-slack/issues/7