Sensu Client status - sensu

I am trying to see why my Sensu Client does not connect to my Sensu Server.
How can I see the status of the client and whether it tried, succeeded, or failed in connecting to the server?
I have installed the Sensu server on CentOS using Docker. I can connect to it, RabbitMQ, and the Uchiwa panel from my host.
I have installed the Sensu client on a Windows host.
I have added the following configs:
C:\etc\sensu\conf.d\client.json
{
  "client": {
    "name": "DanWindows",
    "address": " 192.168.59.3",
    "subscriptions": [ "all" ]
  }
}
C:\etc\sensu\config.json
{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password",
    "ssl": {
      "cert_chain_file": "C:/etc/sensu/ssl/cert.pem",
      "private_key_file": "C:/etc/sensu/ssl/key.pem"
    }
  }
}
I have installed and started the Sensu client service using the following command:
sc create sensu-client binPath= C:\Tools\sensu\bin\sensu-client.exe DisplayName= "Sensu Client"
On the Uchiwa panel I do not see any clients.
The "sensu-client.err.log" and "sensu-client.out.log" are empty, while "sensu-client.wrapper.log" contains this:
2015-01-16 13:41:51 - Starting C:\Tools\sensu\embedded\bin\ruby C:\Tools\sensu\embedded\bin\sensu-client -d C:\etc\sensu\conf.d -l C:\Tools\sensu\sensu-client.log
2015-01-16 13:41:51 - Started 3800
How can I see the status of the Windows client and whether it tried, succeeded, or failed in connecting to the server?

A question on the Docker image: is it one you built yourself? I recently built my own as well, only using Ubuntu instead of CentOS.
Recent versions of Sensu require the following two files in the /etc/sensu/conf.d directory:
/etc/sensu/conf.d/rabbitmq.json
/etc/sensu/conf.d/client.json
The client.json file will have contents similar to this:
{
  "client": {
    "name": "my-sensu-client",
    "address": "192.168.x.x",
    "subscriptions": [ "ALL" ]
  }
}
The only place I have heard of needing a config.json file is on the Sensu server, but I have only recently started looking at Sensu, so this may be an older requirement.
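For reference, on the Windows client the same split would mean putting the rabbitmq block from the config.json above into C:\etc\sensu\conf.d\rabbitmq.json, for example (host, credentials, and SSL paths are copied from the question; adjust them to your environment):
{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password",
    "ssl": {
      "cert_chain_file": "C:/etc/sensu/ssl/cert.pem",
      "private_key_file": "C:/etc/sensu/ssl/key.pem"
    }
  }
}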

Related

Unable to access vault ui while running vault in docker: 404 page not found

I am running Vault in Docker like this:
$ docker run -it --rm -p 8200:8200 vault:0.9.1
I have unsealed the vault:
$ VAULT_ADDR=http://localhost:8200 VAULT_SKIP_VERIFY="true" vault operator unseal L6M8O7Xg7c8vBe3g35s25OWeruNDfaQzQ5g9UZ2bvGM=
Key             Value
---             -----
Seal Type       shamir
Initialized     false
Sealed          false
Total Shares    1
Threshold       1
Version         0.9.1
Cluster Name    vault-cluster-52a8c4b5
Cluster ID      96ba7037-3c99-5b6e-272e-7bcd6e5cc45c
HA Enabled      false
However, I can't access the UI at http://localhost:8200/ui in Firefox. The error is:
404 page not found
Do you know what I am doing wrong? Does the Vault Docker image on Docker Hub have the UI compiled in?
The web UI was open-sourced in v0.10.0, so v0.9.1 doesn't have it. There is a blog post announcing the release and a CHANGELOG for v0.10.0; take a look at the FEATURES subsection.
To see the web UI in a browser, try running this command:
$ docker run -it --rm -p 8200:8200 vault:0.10.0
However, I would suggest using a more recent Vault version, as there have been many improvements and bug fixes in the meantime. Features have also been added to the web UI, so if you follow the latest documentation, some of the things described there might not be available in older versions.
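For example, with a newer image the dev server that the container starts by default also serves the UI (the hashicorp/vault repository name and the tag below are assumptions; check Docker Hub for the current tags):
$ docker run -it --rm -p 8200:8200 hashicorp/vault:1.13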
I observed this behavior with Vault 0.10.3 (https://releases.hashicorp.com/vault/0.10.3/vault_0.10.3_linux_amd64.zip) when I put the setting that enables the UI at the very bottom of the Vault configuration file (e.g. config.json). The config that returns a 404 error looks like the one below:
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "api_addr": "http://172.16.94.10:8200",
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": "true"
}
and the one that works with Vault 0.10.3 has ui at the very top of its configuration file:
{
  "ui": "true",
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "api_addr": "http://172.16.94.10:8200",
  "storage": {
    "consul": {
      "address": "127.0.0.1:8500",
      "path": "vault"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
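A quick way to check whether a given Vault build and config are serving the UI, without a browser, is to request the UI path and look at the HTTP status code (a sketch; exact codes can vary a bit by version, but a 404 means no UI while a 200 or a redirect means it is enabled):
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8200/ui/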

Azure Service Fabric gets terminated in local cluster when running tests

So we are using Azure Service Fabric and get a weird behavior when trying to run API tests against my local development cluster.
Every time I start the tests, the app gets terminated; sometimes it gets restarted again, but most often it just stays terminated (and is even deleted from the cluster).
I guess it's somehow connected to the fact that when I run the API tests, they run and build things that Service Fabric is using, but since the outcome differs depending on something (maybe the sun?), it feels like I am either missing something or experiencing a bug with Service Fabric.
Does anyone have any idea? Consider me a noob and assume that I have done something wrong myself (I am doing that, at least).
UPDATE
There was a question about how we run our tests:
Start 2 instances of Visual Studio
Open the same .sln in both of them
Start the Service Fabric project.
Wait until the cluster reports OK.
Run the API tests as unit tests (both Service Bus tests and REST tests) with the ReSharper test runner
Now we get the messages that are attached in the diagnostics below.
Diagnostics:
Event #1
{
  "Timestamp": "2018-10-16T08:14:03.0590414+02:00",
  "ProviderName": "Microsoft-ServiceFabric",
  "Id": 23083,
  "Message": "ApplicationHostTerminated: ApplicationId=fabric:/<MyService>, ServiceName=fabric:/<MyService>, ServicePackageName=<MyPackage>, ServicePackageActivationId=8f36ac97-9271-4a49-94ce-dd296aebffa5, IsExclusive=True, CodePackageName=Code, EntryPointType=Exe, ExeName=MyExe, ProcessId=24568, HostId=d2a820b5-5b4d-42af-ae87-350028a3fa72, ExitCode=3221225786, UnexpectedTermination=False, StartTime=10/16/2018 08:12:14. ",
  "ProcessId": 22660,
  "Level": "Informational",
  "Keywords": "0x4000000000000001",
  "EventName": "Hosting",
  "ActivityID": null,
  "RelatedActivityID": null,
  "Payload": {
    "eventInstanceId": "\"07f15452-2f75-49e3-ad5d-d16ea49bdc8f\"",
    "applicationName": "MyAppName",
    "ServiceName": "fabric:/MyServiceName",
    "ServicePackageName": "MyPackageName",
    "ServicePackageActivationId": "8f36ac97-9271-4a49-94ce-dd296aebffa5",
    "IsExclusive": true,
    "CodePackageName": "Code",
    "EntryPointType": 1,
    "ExeName": "MyExe",
    "ProcessId": 24568,
    "HostId": "d2a820b5-5b4d-42af-ae87-350028a3fa72",
    "ExitCode": 3221225786,
    "UnexpectedTermination": false,
    "StartTime": "\"\/Date(1539670334917)\/\""
  }
}
Event #2
{
  "Timestamp": "2018-10-16T08:14:02.3557708+02:00",
  "ProviderName": "Microsoft-ServiceFabric",
  "Id": 29625,
  "Message": "Application deleted: Application = fabric:/MyApp, Application Type = MyServiceType ",
  "ProcessId": 22660,
  "Level": "Informational",
  "Keywords": "0x4000000000000001",
  "EventName": "CM",
  "ActivityID": null,
  "RelatedActivityID": null,
  "Payload": {
    "eventInstanceId": "\"ca608cec-8d55-4606-a331-8ebfcfff8fa6\"",
    "applicationName": "fabric:/MyAppName",
    "applicationTypeName": "MyAppTypeName",
    "applicationTypeVersion": "1.0.0"
  }
}
I think you are experiencing a side effect of the Application Debug Mode set for your .sfproj.
By default, Application Debug Mode is set to Refresh Application (which, if you are using a 5-node cluster, is automatically changed to Remove Application) or to Remove Application. This instructs Visual Studio to recreate the application for each debugging session and remove it when the session ends.
Changing it to Keep Application should prevent Visual Studio from recreating the application for each debugging session.

Problems deploying DSC Extension for Azure Resource Manager template

I'm trying to deploy an Azure Resource Manager template for provisioning Windows virtual machines.
Currently, I'm bootstrapping the IIS PowerShell script into the DSC module to set up IIS on a Windows virtual machine provisioned through ARM.
I keep getting this error related to WinRM:
New-AzureRmResourceGroupDeployment : 5:04:53 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vmSVX-TESTAU-SQL1/dscExtension' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'dscExtension'. Error message: \"DSC Configuration 'vmDSC' completed with error(s).
Following are the first few: The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to
configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. You can get more information about that by running the following command: winrm help config.\"."
}
]
}
}'
The ARM Template related to the provisioning of this VM:
{
  "apiVersion": "2016-03-30",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmNameSQL'), '/', 'dscExtension')]",
  "location": "[variables('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmNameSQL'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "configuration": {
        "url": "[variables('dscModulesUrl')]",
        "script": "[concat(variables('dscFunction'),'.ps1')]",
        "function": "[variables('dscFunction')]"
      },
      "configurationArguments": {
        "nodeName": "[variables('vmNameSQL')]"
      }
    },
    "protectedSettings": {
      "configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
    }
  }
}
As for the IIS PowerShell script that has been bootstrapped:
Configuration WindowsFeatures
{
    param ([string[]]$NodeName = 'localhost')
    Node $NodeName
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}
After a chat with various parties, we ended up removing the
"configurationArguments": {
"nodeName": "[variables('vmNameSQL')]"
}
from the ARM template, and removed the
param ([string[]]$NodeName = 'localhost')
from the DSC configuration. We also set the Node to "localhost", as sketched below.
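A minimal sketch of the resulting configuration, assuming nothing else in the original script changed:
Configuration WindowsFeatures
{
    Node "localhost"
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}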
#iteong was able to test this new configuration and it worked.
Another point to add is that the full error message was different from what was shown above:
[ERROR] A parameter cannot be found that matches parameter name 'nodeName'.\n\nAnother common error is to specify parameters of type PSCredential without an explicit type.
When using the VM Extension to apply a DSC Configuration via ARM Templates, the Node parameter must always be localhost.
When pulling DSC configurations from Azure Automation, that is when you can use variables and do some fancy work to determine which node receives which configuration.
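For reference, the extension's settings block after dropping configurationArguments would look roughly like this (same variables as in the original template, everything else unchanged):
"settings": {
  "configuration": {
    "url": "[variables('dscModulesUrl')]",
    "script": "[concat(variables('dscFunction'),'.ps1')]",
    "function": "[variables('dscFunction')]"
  }
},
"protectedSettings": {
  "configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
}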

IBM Blockchain (Hyperledger) - "Error when deploying chaincode"

I'm following the instructions to deploy some chaincode to the IBM Hyperledger Blockchain, using the Swagger API on the IBM Bluemix dashboard.
In order to deploy some chaincode, I need to submit a JSON request, which contains the path to the chaincode repository:
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": {
      "path": "https://github.com/series0ne/learn-chaincode/tree/master/finished"
    },
    "ctorMsg": {
      "function": "init",
      "args": [
        "Hello, world"
      ]
    },
    "secureContext": "user_type1_0"
  },
  "id": 0
}
I have logged in as user_type1_0 before attempting to deploy, but this is the result I get:
{
  "jsonrpc": "2.0",
  "error": {
    "code": -32001,
    "message": "Deployment failure",
    "data": "Error when deploying chaincode: Error getting chaincode package bytes: Error getting code 'go get' failed with error: \"exit status 1\"\npackage github.com/series0ne/learn-chaincode/tree/master/finished: cannot find package \"github.com/series0ne/learn-chaincode/tree/master/finished\" in any of:\n\t/opt/go/src/github.com/series0ne/learn-chaincode/tree/master/finished (from $GOROOT)\n\t/opt/gopath/_usercode_/424324290/src/github.com/series0ne/learn-chaincode/tree/master/finished (from $GOPATH)\n\t/opt/gopath/src/github.com/series0ne/learn-chaincode/tree/master/finished\n"
  },
  "id": 0
}
Any ideas?
P.S. Currently running commit level 0.6.1 of the Hyperledger blockchain on Bluemix.
Try stripping out the 'tree/master' portion of your deployment URL. Notice that the example linked below does not include this portion of the URL:
https://github.com/IBM-Blockchain/learn-chaincode#deploying-the-chaincode
This URL is going to be passed into a go get <url> command inside the peer, which will download the chaincode so that it can be compiled, so the URL must match the format accepted by that command.
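Applied to the request above, the chaincodeID portion of the deploy request would become (the rest of the request stays the same):
"chaincodeID": {
  "path": "https://github.com/series0ne/learn-chaincode/finished"
}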
I tried using the Learn Chaincode example based on the advice from Dale to change the address of the repository from https://github.com/GitHub_ID/learn-chaincode/tree/master/finished to https://github.com/GitHub_ID/learn-chaincode/finished. The Blockchain network used for this test was running on Bluemix with version 0.6.1 of the Hyperledger Fabric. With the modified path, it was possible to use the APIs tab within the interface for the Blockchain network to deploy the chaincode.
Following are some things to check.
The v2.0 branch from https://github.com/IBM-Blockchain/learn-chaincode should be used with a Blockchain network running Hyperledger Fabric version 0.6.1. Is your personal fork even with the v2.0 branch from https://github.com/IBM-Blockchain/learn-chaincode?
Was the chaincode deployment issued from the same validating peer used to register the user_type1_0 user? The validating peer can be selected at the top of the APIs tab. There is a note in the Learn Chaincode instructions indicating that the same validating peer must register the user and deploy the chaincode.
Your go get command is either not able to access the location of your package due to ACLs, or its parameters are invalid as per the IBM docs. Please recheck its format.
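Since the peer effectively runs go get on this path, one way to sanity-check the path format locally (assuming a GOPATH-era Go toolchain and network access to GitHub) is the command below; it reproduces the path-resolution step, though the build itself may still need the Hyperledger Fabric dependencies:
$ go get github.com/series0ne/learn-chaincode/finished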

Hyperledger on Bluemix: Failed to launch chaincode spec(Could not get deployment transaction

I'm running a simple Hyperledger network on Bluemix. I can deploy and invoke, but not query. The chaincode's Init function sets up a value for the variable "abc": stub.PutState("abc", []byte(strconv.Itoa(Aval)))
I should be able to query "abc" as validation that the code is ready to use. Instead, I'm seeing this error:
"... Error:Failed to launch chaincode spec(Could not get deployment
transaction for - LedgerError - ResourceNotFound:
ledger: resource not found)"
The query JSON is:
{
  "jsonrpc": "2.0",
  "method": "query",
  "params": {
    "type": 1,
    "chaincodeID": {
      "name": "my chaincode id"
    },
    "ctorMsg": {
      "function": "read",
      "args": [
        "abc"
      ]
    },
    "secureContext": "user_type1_3"
  },
  "id": 0
}
The following is a list of probable causes of the error
Could not get deployment transaction for - LedgerError - ResourceNotFound: ledger: resource not found
1. The chaincode did not get deployed correctly. To check whether it was deployed correctly, look at the peer logs for errors when the deploy transaction was sent.
2. The chaincode got deployed correctly, but the consensus mechanism hasn't completed yet. You should ideally wait a few minutes after deploying a chaincode before you try to query it.
3. The chaincode got deployed, but the chaincode ID/name specified in the query is incorrect. Make sure you use the same chaincode name that comes back in the response when you deploy the chaincode.
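To illustrate the third point: on the 0.6 REST API, the deploy response returns the generated chaincode name (a long hash) in its result message, and that exact value is what goes into chaincodeID.name in the query (a sketch; the hash below is a made-up placeholder, and the response shape may differ slightly between Fabric builds):
Deploy response (abridged):
{
  "jsonrpc": "2.0",
  "result": {
    "status": "OK",
    "message": "7ba1a5b0e61c...generated-chaincode-name"
  },
  "id": 0
}
Query request using that name:
{
  "jsonrpc": "2.0",
  "method": "query",
  "params": {
    "type": 1,
    "chaincodeID": {
      "name": "7ba1a5b0e61c...generated-chaincode-name"
    },
    "ctorMsg": {
      "function": "read",
      "args": [ "abc" ]
    },
    "secureContext": "user_type1_3"
  },
  "id": 0
}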