Sensu Handler Gives Undefined Method Error - sensu

We've recently updated our OpsGenie handler (opsgenie.rb) to the current community version, found here.
Our handler is defined as:
"opsgenie-pager" : {
"type": "pipe",
"command": "/etc/sensu/handlers/opsgenie.rb -j opsgenie-pager"
}
And our json config for opsgenie-pager is:
{
  "opsgenie-pager": {
    "customerKey": "<Our API Key>",
    "recipients": "<Our Schedule>,<Our Escalation>",
    "source": "Admiral Ackbar",
    "overwrite_quiet_hours": false,
    "tags": [ "admAckbar", "live", "pager" ]
  }
}
When a check returns as 'CRITICAL' and the opsgenie handler is called, the sensu-server.log reports:
{"timestamp":"2015-02-03T06:16:17.804061-0700","level":"info","message":"handler output","handler":{"type":"pipe","command":"/etc/sensu/handlers/opsgenie.rb -j opsgenie-pager","name":"opsgenie-pager"},"output":"/etc/sensu/handlers/opsgenie.rb:15:in `<class:Opsgenie>': undefined method `option' for Opsgenie:Class (NoMethodError)\n"}
{"timestamp":"2015-02-03T06:16:17.804210-0700","level":"info","message":"handler output","handler":{"type":"pipe","command":"/etc/sensu/handlers/opsgenie.rb -j opsgenie-pager","name":"opsgenie-pager"},"output":"\tfrom /etc/sensu/handlers/opsgenie.rb:13:in `<main>'\n"}
In our "dev instance" (vagrant box), we're able to successfully use the OpsGenie handler to create alerts.
Any ideas what would be causing the undefined method 'option' for Opsgenie:Class (NoMethodError) error?

Looks like this was the result of an outdated gem.
On a hunch I compared the installed packages (RPMs and gems) between our "dev instance" and production.
The sensu-plugin gem in our dev instance (installed during a vagrant up) was the current version (1.1.0), whereas the version installed on production was older (0.6.3).
Updating this gem with
gem update sensu-plugin
solved this issue!
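Since the fix came down to a version mismatch between environments, a quick sanity check is to compare the installed gem version against the one you expect. A minimal sketch, with both version strings hard-coded as stand-ins for the output of `gem list sensu-plugin` on each host:

```shell
# Stand-in values: substitute the output of `gem list sensu-plugin` on each host.
installed="0.6.3"
required="1.1.0"
# sort -V orders dotted version strings; if the installed version sorts first
# (and differs from the required one), it is older.
oldest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n 1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$required" ]; then
  echo "sensu-plugin $installed is older than $required; run: gem update sensu-plugin"
fi
```

Running the same check on the dev box and on production makes this kind of drift obvious before it bites.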

Related

Product images issue in Vue Storefront

I integrated Vue Storefront with Magento 2. The frontend works fine, but product images do not display. It throws the error Unable to compile TypeScript:\nsrc/image/action/local/index.ts(27,18): error TS2339: Property 'query' does not exist on type 'Request<any, any, any, any>'. ImageMagick is also installed, and imgUrl in local.json is also defined.
Does anyone know why this error is displayed?
This is about this.req, which is of type Request from Express; it does have a query property. Please make sure you have the yarn.lock from the original repo and reinstall dependencies.
If you are using docker, you might need to add:
- './yarn.lock:/var/www/yarn.lock'
to the volumes section in docker-compose.nodejs.yml.
I have found a simple solution you can try:
copy all your Magento 2 pub/media data into vue-storefront-api/var/magento-folder/pub/media,
or create a symlink if you are working on localhost.
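For the symlink route, something along these lines works; the paths under /tmp here are purely stand-ins, so point them at your real Magento install and vue-storefront-api checkout:

```shell
# Stand-in locations; replace with your actual Magento root and API checkout.
MAGENTO_MEDIA=/tmp/magento2/pub/media
VSF_API_MEDIA=/tmp/vue-storefront-api/var/magento-folder/pub/media

mkdir -p "$MAGENTO_MEDIA"                  # source: Magento's media directory
mkdir -p "$(dirname "$VSF_API_MEDIA")"     # ensure the parent path exists
ln -sfn "$MAGENTO_MEDIA" "$VSF_API_MEDIA"  # -n avoids nesting a link on re-runs
```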
vue-storefront-api/config/local.json
"magento2": {
"imgUrl": "http://magento-domain/pub/media/catalog/product",
"assetPath": "/../var/magento-folder/pub/media",
}
vue-storefront/config/local.json
"images": {
"useExactUrlsNoProxy": false,
"baseUrl": "http://localhost:8080/img/",
"useSpecificImagePaths": false,
"paths": {
"product": "/catalog/product"
},
"productPlaceholder": "/assets/placeholder.jpg"
},
Then rerun the build/start command in both vue-storefront and vue-storefront-api.

cf push to IBM Cloud failed: Unable to install node: improper constraint: >=4.1.0 <5.5.0

I pushed an app to the IBM Cloud after a minor change (just some data, no code or dependencies).
cat: /VERSION: No such file or directory
-----> IBM SDK for Node.js Buildpack v4.0.1-20190930-1425
Based on Cloud Foundry Node.js Buildpack 1.6.53
-----> Installing binaries
engines.node (package.json): >=4.1.0 <5.5.0
engines.npm (package.json): unspecified (use default)
**WARNING** Dangerous semver range (>) in engines.node. See: http://docs.cloudfoundry.org/buildpacks/node/node-tips.html
**ERROR** Unable to install node: improper constraint: >=4.1.0 <5.5.0
Failed to compile droplet: Failed to run all supply scripts: exit status 14
Exit status 223
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a stopping instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a destroying container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
Cell 155a85d3-8d60-425c-8e39-3a1183bfec2a successfully destroyed container for instance 5aad9d60-87d7-4153-b1ac-c3847c9a7a83
FAILED
Error restarting application: BuildpackCompileFailed
An older version of the app is running on the IBM Cloud already (from May 2019, I think).
So I wonder what changed so it's not working anymore.
In IBM Cloud Foundry, the Node.js version must be specified like this:
"engines": {
  "node": "12.x"
}
or
"engines": {
  "node": "12.10.x"
}
You can also try removing the engines block from your package.json file completely:
"engines": {
  "node": "6.15.1",
  "npm": "3.10.10"
},
Here's a quick reference.
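As a pre-push sanity check, you can flag a compound range in engines.node before the buildpack rejects it. A minimal sketch with the offending value inlined (in practice you would read it out of package.json):

```shell
# Stand-in for the engines.node value read from package.json.
node_range=">=4.1.0 <5.5.0"
# A space-separated range like ">=4.1.0 <5.5.0" is what the buildpack rejects
# as an "improper constraint"; a single pinned line like 12.x is accepted.
case "$node_range" in
  *" "*) verdict="compound range; pin a single version like 12.x instead" ;;
  *)     verdict="ok" ;;
esac
echo "engines.node '$node_range': $verdict"
```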

Protractor test fails on CI

Currently I am trying to set up end-to-end Protractor tests in Bitbucket Pipelines with headless Chrome, and I am getting this error message:
Failed: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
Any clue what causes this? Running the tests locally works fine. Can I set a constant session ID?
Thanks
Check out your configuration file for this object
capabilities: {
  "browserName": "chrome",
  "chromeOptions": {
    "args": ["incognito", "--window-size=1920,1080", "disable-extensions", "--no-sandbox", "start-maximized", "--test-type=browser"],
    "prefs": {
      "download": {
        "prompt_for_download": false,
        "directory_upgrade": true,
        "default_directory": path.join(process.cwd(), "__test__reports/downloads")
      }
    }
  }
},
When you find it, make sure you have included the "--no-sandbox" argument in the args property.
This flag allows your tests to be run from a remote container. In the meantime, if you include the argument when running tests on your own machine, it has side effects like those described here: Chrome Instances don't close after running Test Case in Protractor

Logstash-Forwarder 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-forwarder 3.1.1 on Centos 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The /.logstash-forwarder file is recreated each time 100 events are recorded, but it is not updated with new data. I know the file has been recreated because I changed its permissions to test, and the permissions are reset each time.
Below are my configurations (With some actual data italicized/scrubbed):
Logstash-forwarder 3.1.1
Centos 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value as follows:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured here. The offset has stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
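One way to confirm that the offsets really are frozen is to snapshot them out of the state file and compare a few minutes apart. A sketch using grep, with the state contents inlined as a stand-in for `cat /.logstash-forwarder`:

```shell
# Stand-in for: state=$(cat /.logstash-forwarder)
state='{"/a/b/catalina.out":{"source":"/a/b/catalina.out","offset":433564,"inode":1,"device":2}}'
# Pull out every recorded offset; take a second snapshot later and diff the two.
# If the numbers never change while events are flowing, the state file is stale.
offsets=$(echo "$state" | grep -oE '"offset":[0-9]+')
echo "$offsets"
```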
Can anyone give me any advice on how to fix this problem whether it be a configuration setting I missed or a bug?
Thank you!
After more research I found it was announced that Filebeat is now the preferred forwarder. I even found a post by the owner of Logstash-Forwarder saying the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much smoother now!

Sensu Client status

I am trying to see why my Sensu Client does not connect to my Sensu Server.
How can I see the status of the client and whether it tried, succeeded, failed in connecting with the server?
I have installed Sensu Server on CentOS using Docker. I can connect to it, RabbitMQ, and the Uchiwa panel from my host.
I have installed Sensu Client on Windows host.
I have added following configs:
C:\etc\sensu\conf.d\client.json
{
  "client": {
    "name": "DanWindows",
    "address": " 192.168.59.3",
    "subscriptions": [ "all" ]
  }
}
C:\etc\sensu\config.json
{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password",
    "ssl": {
      "cert_chain_file": "C:/etc/sensu/ssl/cert.pem",
      "private_key_file": "C:/etc/sensu/ssl/key.pem"
    }
  }
}
I have installed and started the Sensu Client service using following command:
sc create sensu-client binPath= C:\Tools\sensu\bin\sensu-client.exe DisplayName= "Sensu Client"
On the Uchiwa panel I do not see any clients.
The "sensu-client.err.log" and "sensu-client.out.log" are empty, while "sensu-client.wrapper.log" contains this:
2015-01-16 13:41:51 - Starting C:\Tools\sensu\embedded\bin\ruby C:\Tools\sensu\embedded\bin\sensu-client -d C:\etc\sensu\conf.d -l C:\Tools\sensu\sensu-client.log
2015-01-16 13:41:51 - Started 3800
How can I see the status of the Windows client and whether it tried, succeeded, failed in connecting with the server?
A question on the Docker image: is this one you built yourself? I recently built my own as well, only using Ubuntu instead of CentOS.
Recent versions of sensu require the following two files in the /etc/sensu/conf.d directory:
/etc/sensu/conf.d/rabbitmq.json
/etc/sensu/conf.d/client.json
The client.json file will have contents similar to this:
{ "client": {
"name": "my-sensu-client",
"address": "192.168.x.x",
"subscriptions": [ "ALL" ] }
}
The only place I have heard of a config.json file being needed is on the sensu-server, but I have only recently started looking at Sensu, so this may be an older requirement.
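For completeness, a rabbitmq.json along these lines should satisfy that conf.d layout. The values below simply mirror the rabbitmq block from the config.json shown in the question, with the password as a placeholder and the certificate paths rewritten Unix-style for a Linux client; adjust all of them to your environment:

```json
{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "<password>",
    "ssl": {
      "cert_chain_file": "/etc/sensu/ssl/cert.pem",
      "private_key_file": "/etc/sensu/ssl/key.pem"
    }
  }
}
```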