Sending Sensu data to InfluxDB fails - sensu

I have been trying to send data from Sensu to InfluxDB.
I created a database for Sensu, and also updated InfluxDB to listen on port 8090 in my case.
The user login works fine on InfluxDB.
I configured almost everything as in this link:
https://libraries.io/github/nohtyp/sensu-influxdb
I am not having any success, and I am not seeing any data in the database.
Has anyone tried this?

You can also use a custom script in case the default configuration is not working. It gives you the option to write only the data you want to save. Before running the script, install the InfluxDB Python client (sudo apt-get install python-influxdb):
from influxdb import InfluxDBClient
import fileinput
import json
import datetime

# Read the Sensu event JSON from stdin (the handler pipes it in).
json_body = ""
for line in fileinput.input():
    json_body = json_body + line.replace('\n', ' ')
json_body = json.loads(json_body)

# Derive the measurement name, host tag, and status value from the event.
alert_in_ip = str(json_body["client"]["name"])
alert_in_ip = 'ip-' + alert_in_ip.replace('.', '-')
alert_type = json_body["check"]["name"]
status = int(json_body["check"]["status"])
time_stamp = datetime.datetime.fromtimestamp(int(json_body["timestamp"])).strftime('%Y-%m-%d %H:%M:%S')

# One point per event: the check name becomes the measurement.
json_body = [{
    "measurement": alert_type,
    "tags": {
        "host": alert_in_ip
    },
    "time": time_stamp,
    "fields": {
        "value": status
    }
}]

client = InfluxDBClient('localhost', 8086, 'root', 'root', 'sensu')
client.write_points(json_body)
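To test the script outside of Sensu, you can pipe a sample event into it (a minimal sketch; the field values below are made up):

echo '{"client": {"name": "10.0.0.1"}, "check": {"name": "check_cpu", "status": 2}, "timestamp": 1467123456}' | python /home/ubuntu/save_to_influx.py

If the write succeeds, the point shows up under the check_cpu measurement in the sensu database.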
And call the above script from your handler.
For example:
"sendinflux":{
"type": "pipe",
"command": "echo $(cat) | /usr/bin/python /home/ubuntu/save_to_influx.py",
"severites": ["critical", "unknown"]
}
Hope it helps!!


Receiving 404 when trying to edit a datasource despite being able to retrieve its details

I have a datasource created in Grafana and I am attempting to update it to refresh the bearer token used for auth access.
However, I'm receiving a 404 Not Found error from the Grafana API when making a request to localhost:3000/api/datasources/uid/:uid with a uid just received from the datasources/name API, attempting to update as per the documentation: https://grafana.com/docs/grafana/latest/developers/http_api/data_source/#update-an-existing-data-source
I'm using the Grafana open-source Docker container with the Infinity plugin:
docker run -d -p 3000:3000 --name=grafana -e "GF_INSTALL_PLUGINS=yesoreyeram-infinity-datasource" grafana/grafana-oss
I'm able to create a datasource via the API; I just can't update an existing one.
My code is:
from json import loads, dumps
from requests import get, post

grafana_api_token = '<my api token>'
new_access_token = '<my new bearer token>'
my_data_source = 'my_data_source'
grafana_header = {"authorization": f"Bearer {grafana_api_token}", "content-type": "application/json;charset=UTF-8"}

# Look up the datasource by name to get its uid.
grafana_datasource_url = f"http://localhost:3000/api/datasources/name/{my_data_source}"
firebolt_datasource_resp = get(url=grafana_datasource_url, headers=grafana_header)
full_datasource = loads(firebolt_datasource_resp.content.decode("utf-8"))
datasource_uid = full_datasource["uid"]

# Send the updated definition, including the new bearer token.
update_token_url = f"http://localhost:3000/api/datasources/uid/{datasource_uid}"
new_data = {"id": full_datasource["id"],
            "uid": full_datasource["uid"],
            "orgId": full_datasource["orgId"],
            "name": "new_data_source",
            "type": full_datasource["type"],
            "access": full_datasource["access"],
            "url": full_datasource["url"],
            "user": full_datasource["user"],
            "database": full_datasource["database"],
            "basicAuth": full_datasource["basicAuth"],
            "basicAuthUser": full_datasource["basicAuthUser"],
            "withCredentials": full_datasource["withCredentials"],
            "isDefault": full_datasource["isDefault"],
            "jsonData": full_datasource["jsonData"],
            "secureJsonData": {
                "bearerToken": new_access_token
            }
            }
update_bearer_token_resp = post(url=update_token_url, data=dumps(new_data), headers=grafana_header)
Oh, oh, oh, idiot mode. Using post rather than put. Doh.
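For completeness, the fix is to send the same payload with PUT instead of POST, as the update endpoint expects (a minimal sketch reusing the variables from the snippet above):

from requests import put

update_bearer_token_resp = put(url=update_token_url, data=dumps(new_data), headers=grafana_header)
print(update_bearer_token_resp.status_code)  # expect 200 on success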

How can I start a Nest app with a Postgres DB on Win 10?

I'm a front-end dev and I need to test my front-end app locally against a backend (NestJS) with a PostgreSQL DB. Can someone tell me the right way to run it and connect to the DB? I get some errors on app start. I work on Win 10, and here are my steps for starting this app:
install postgresql
npm install for my nest js app
run pgAdmin4 and create DB for my app
npm start
Here is my ormconfig:
module.exports = {
    "type": "postgres",
    "host": process.env.POSTGRES_HOST || "localhost",
    "port": process.env.POSTGRES_PORT || 5432,
    "username": process.env.POSTGRES_USER || "", // <- here I tried every possible username
    "password": process.env.POSTGRES_PASSWORD || "", // <- here I tried every possible password
    "database": process.env.POSTGRES_DB || "my_database",
    "entities": ["dist/**/*.entity{.ts,.js}"],
    "synchronize": true,
    "logging": true
}
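For reference, those environment variables have to be visible to the Node process; on Win 10 they can be set in the same command prompt session before starting the app (a sketch; every value is a placeholder):

set POSTGRES_HOST=localhost
set POSTGRES_PORT=5432
set POSTGRES_USER=postgres
set POSTGRES_PASSWORD=your_password
set POSTGRES_DB=my_database
npm start

In PowerShell the equivalent is $env:POSTGRES_USER = "postgres", and so on.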
Here is the error I encountered (error screenshot omitted).
Also, on another computer I tried this and got an error like:
[Nest] ERROR [TypeOrmModule] Unable to connect to the database.
FATAL: password authentication failed for user "postgres" (PostgreSQL 14 with pgAdmin 4)
In your typeormconfig.ts, you should write this:
import { TypeOrmModuleOptions, TypeOrmOptionsFactory } from '@nestjs/typeorm';

export class PostgresTypeormConfiguration implements TypeOrmOptionsFactory {
    createTypeOrmOptions(connectionName?: string): TypeOrmModuleOptions | Promise<TypeOrmModuleOptions> {
        const TypeOrmOptions: TypeOrmModuleOptions = {
            type: "postgres",
            host: process.env.POSTGRES_HOST,
            port: parseInt(process.env.POSTGRES_PORT ?? "5432", 10), // TypeORM expects a number here
            username: process.env.POSTGRES_USER,
            password: process.env.POSTGRES_PASSWORD,
            database: process.env.POSTGRES_DB,
            entities: ["dist/**/*.entity{.ts,.js}"],
            synchronize: true,
            logging: true
        }
        return TypeOrmOptions
    }
}
and you should register it in your module like this:
@Module({
    imports: [TypeOrmModule.forRootAsync({ useClass: PostgresTypeormConfiguration })]
})
Note: if you still get an error, you either wrote one of the config options wrong in your .env file or you did not register the .env file in your ConfigModule.
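For reference, a matching .env could look like this (a sketch; every value is a placeholder for your own setup):

POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_password
POSTGRES_DB=my_database

and it is picked up once ConfigModule is registered in the root module, e.g. ConfigModule.forRoot({ isGlobal: true }) from @nestjs/config.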

JupyterHub - log current user

I use a custom logger to log who is currently doing any kind of action in JupyterHub.
logging_config: dict = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "company": {
            "()": lambda: MyFormatter(user=os.environ.get("JUPYTERHUB_USER", "Unknown"))
        },
    },
    ....
}
c.Application.logging_config = logging_config
Output:
{"asctime": "2022-06-29 14:13:43,773", "level": "WARNING", "name": "JupyterHub", "message": "Updating Hub route http://127.0.0.1:8081 \u2192 http://jupyterhub:8081", "user": "Unknown"
The logger itself works fine, but I am not able to log who was performing the action. In the image I start, there is a JUPYTERHUB_USER env variable available. This seems to get passed from JupyterHub (I don't know how this is done exactly), but in JupyterHub itself I don't have this variable available.
Is there a way to use it in JupyterHub, not just in the JupyterLab container?
This doesn't get you all the way there, but it's a start: we add extra pod annotations/labels through KubeSpawner's extra_annotations using the cluster_options hook (see our Helm chart for our complete daskhub setup):
dask-gateway:
  gateway:
    extraConfig:
      optionHandler: |
        from dask_gateway_server.options import Options, String, Select, Mapping, Float, Bool
        from math import ceil

        def cluster_options(user):
            def option_handler(options):
                extra_annotations = {
                    "hub.jupyter.org/username": user.name
                }
                default_extra_labels = {
                    "hub.jupyter.org/username": user.name,
                }
            return Options(
                Select(
                    ...
                ),
                ...,
                handler=option_handler,
            )
        c.Backend.cluster_options = cluster_options
You can then poll pods with these labels to get real-time usage. There may be a more direct way to do this, though; not sure.
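If you go that route, polling the labeled pods with the official Kubernetes Python client could look like this (a sketch; the namespace and username are placeholders):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# List pods carrying the username label added above.
pods = v1.list_namespaced_pod(
    "dask-gateway",  # placeholder namespace
    label_selector="hub.jupyter.org/username=alice",
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)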

PythonAnywhere is not updating MongoDB

import os
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)

@app.route('/signup', methods=['POST'])
def signup():
    info = request.args
    if info["password"] == info["password2"] and info["name"] and info["email"] and info["password"] and info["password2"]:
        password = os.getenv("password")
        link = 'mongodb+srv://yakov:' + password + '@cluster0.irzzw.mongodb.net/myAuctionDB?retryWrites=true&w=majority'
        client = MongoClient(link)
        db = client.get_database('myAuctionDB')
        users = db.users
        users.insert_one({
            'name': info["name"],
            'email': info["email"],
            'password': info["password"],
            'sales': [],
            'offers': [],
            'saved': []
        })
        return jsonify({"status": "ok", "message": "welcome to {} {}".format(info["name"], info["email"])})
    else:
        return jsonify({"status": "error", "message": "you are missing some arguments"})
This is my code; it works when I run it locally from my computer.
I deployed it on a host called PythonAnywhere, and the code runs, but it does not insert the document into MongoDB; it gives me this error: "500 Internal Server Error".
This is the response when I run it locally, and this is the response of the same code when I run it through PythonAnywhere (screenshots omitted).
Using a free account on PythonAnywhere does not allow connecting to MongoDB Atlas, so when the code runs on PythonAnywhere it cannot connect to the database, and so it does not work.
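One way to surface that failure instead of a bare 500 Internal Server Error is to ping the cluster with a short timeout before inserting (a sketch using pymongo; link is the connection string built above):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

client = MongoClient(link, serverSelectionTimeoutMS=5000)
try:
    client.admin.command('ping')  # forces a round trip to the server
except ServerSelectionTimeoutError as exc:
    print("Cannot reach MongoDB Atlas from this host:", exc)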

Vapor MongoDB Provider Error

I am trying to run a Vapor app on my local machine and have MongoDB installed and running.
I have this as my mongo.json:
{
    "user": "test",
    "password": "password",
    "database": "reading_journal",
    "host": "127.0.0.1",
    "port": 2701
}
which is correct in terms of the info for the local DB.
My main.swift:
import Vapor
import FluentMongo
import VaporMongo
let drop = Droplet(providers: [VaporMongo.Provider.self])
drop.get { req in
    let lang = req.headers["Accept-Language"]?.string ?? "en"
    return try drop.view.make("welcome", [
        "message": Node.string(drop.localization[lang, "welcome", "title"])
    ])
}
drop.resource("users", UserController())
drop.resource("posts", PostController())
drop.run()
Yet in the log I get:
Could not initialize provider Provider: Socket failed with code 61 ("No data available") [connectFailed] "Unknown error"
Is there some other initialization that needs to be done? This is a brand new MongoDB DB.
Any help would be greatly appreciated!
In my case, I had to add "host": "0.0.0.0" in the mongo.json
That error usually happens if MongoDB is not running on the correct port. Make sure whatever you have in your mongo.json file matches the port MongoDB is actually running on.
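For reference, MongoDB's default port is 27017 rather than the 2701 shown in the question, so a typical local mongo.json would look like this (a sketch; the credentials are the question's placeholders):

{
    "user": "test",
    "password": "password",
    "database": "reading_journal",
    "host": "127.0.0.1",
    "port": 27017
}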