How to push logs in plain text format using fluentd to AWS CloudWatch?

I want to send logs from Kubernetes pods to CloudWatch in plain text format. Is there a way to achieve this?
I am able to send logs in JSON format using fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-cloudwatch-1.0, but I am not sure how to send them in plain text format. I would really appreciate any help.
Current log:
{
  "log": "2020-04-22 08:00:00.002 WARN 7 --- [viderScheduler1] c.o.c.a.p.c.SchedulerConfiguration : HELLO\n",
  "stream": "stdout",
  "docker": {
    "container_id": "ASDF"
  },
  "kubernetes": {
    "container_name": "CHART",
    "namespace_name": "NAMESPACE",
    "pod_name": "POD_NAME",
    "container_image": "IMAGE",
    "container_image_id": "IMAGE_ID",
    "pod_id": "ID",
    "labels": {
      "app": "app",
      "date": "1587541693",
      "environment": "env",
      "pod-template-hash": "bcbcb",
      "release": "asdf",
      "tier": "asdf",
      "version": "v1.2.3"
    },
    "host": "IP",
    "master_url": "master_URL",
    "namespace_id": "ns"
  }
}
Expected log:
2020-04-22 08:00:00.002 WARN 7 --- [viderScheduler1] c.o.c.a.p.c.SchedulerConfiguration : HELLO\n
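One possible approach (a sketch only, not a verified setup): the cloudwatch_logs output plugin used by that daemonset image has a message_keys option that sends just the named record field as the log event instead of the whole JSON record, so pointing it at the log key should give you the plain text line. The match block below is an assumed simplification of the image's fluent.conf; the region, group and stream settings are placeholders.
<match **>
  @type cloudwatch_logs
  region us-east-1
  log_group_name my-app-logs
  use_tag_as_stream true
  auto_create_stream true
  # Emit only the contents of the "log" field as the CloudWatch event.
  # (message_keys is a fluent-plugin-cloudwatch-logs option; check that your plugin version supports it.)
  message_keys log
</match>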

Related

Ory Kratos can't start on Kubernetes with Helm

Whenever I try to install Ory Kratos with Helm on Kubernetes, it doesn't work.
Here is my values.yaml file
kratos:
  config:
    dsn: postgres://admin:Strongpassword@10.43.90.243:5432/postgres_db
    secrets:
      cookie:
        - randomsecret
      cipher:
        - randomsecret
      default:
        - randomsecret
    identity:
      default_schema_id: default
      schemas:
        - id: default
          url: file:///etc/config/identity.default.schema.json
    courier:
      smtp:
        connection_uri: smtps://username:password@smtp.gmail.com
    selfservice:
      default_browser_return_url: http://127.0.0.1:4455/
  automigration:
    enabled: true
  identitySchemas:
    'identity.default.schema.json': |
      {
        "$id": "https://schemas.ory.sh/presets/kratos/identity.email.schema.json",
        "$schema": "http://json-schema.org/draft-07/schema#",
        "title": "Person",
        "type": "object",
        "properties": {
          "traits": {
            "type": "object",
            "properties": {
              "email": {
                "type": "string",
                "format": "email",
                "title": "E-Mail",
                "ory.sh/kratos": {
                  "credentials": {
                    "password": {
                      "identifier": true
                    }
                  },
                  "recovery": {
                    "via": "email"
                  },
                  "verification": {
                    "via": "email"
                  }
                }
              }
            },
            "required": [
              "email"
            ],
            "additionalProperties": false
          }
        }
      }
I run the command $ helm install kratos -f values.yaml ory/kratos. It pauses for a while and then outputs:
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
It then creates one job that repeatedly spawns kratos-automigrate pods; each pod crashes within a couple of minutes with status "Error" and a new one is created.
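The Helm message only says the pre-install hook timed out; the real failure reason is usually in the logs of the crashing migration pods. A minimal way to pull them (the job name and default namespace below are assumptions based on the description above, adjust as needed):
# Find the hook job and its pods created by the failed install
kubectl get jobs,pods | grep kratos-automigrate
# Read the migration pod's logs to see why it errors (commonly a bad dsn or an unreachable database)
kubectl logs job/kratos-automigrate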

Extract status of Kubernetes CR created via ansible-operator

I am new to JSONPath queries. I am having trouble extracting status.conditions[].type for the condition that contains ansibleResult.
I have a CRD defined and have created a CR against it, which is picked up by operator-sdk running Ansible in the background. I am updating the CRD so that it reports a relevant status once the CR is accepted and processed by operator-sdk.
The CR output in JSON looks like this:
{
  "apiVersion": "vault.cpe.oraclecloud.com/v1alpha1",
  "kind": "OciVaultKeys",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"vault.cpe.oraclecloud.com/v1alpha1\",\"kind\":\"OciVaultKeys\",\"metadata\":{\"annotations\":{},\"name\":\"operator-key-broken\",\"namespace\":\"tms\"},\"spec\":{\"freeformTags\":[{\"key\":\"Type\",\"value\":\"Optional-Values-Added\"}],\"ociVaultKeyName\":\"operator-key-broken\",\"ociVaultKeyShapeAlgorithm\":\"RSA\",\"ociVaultKeyShapeLength\":32,\"ociVaultName\":\"ocivault-sample-12\"}}\n"
    },
    "creationTimestamp": "2022-03-18T07:43:03Z",
    "finalizers": [
      "vault.cpe.oraclecloud.com/finalizer"
    ],
    "generation": 1,
    "name": "operator-key-broken",
    "namespace": "tms",
    "resourceVersion": "717880023",
    "selfLink": "/apis/vault.cpe.oraclecloud.com/v1alpha1/namespaces/tms/ocivaultkeys/operator-key-broken",
    "uid": "0d634e72-f592-48e0-be9b-ebfa017b2dfe"
  },
  "spec": {
    "freeformTags": [
      {
        "key": "Type",
        "value": "Optional-Values-Added"
      }
    ],
    "ociVaultKeyName": "operator-key-broken",
    "ociVaultKeyShapeAlgorithm": "RSA",
    "ociVaultKeyShapeLength": 32,
    "ociVaultName": "ocivault-sample-12"
  },
  "status": {
    "conditions": [
      {
        "lastTransitionTime": "2022-03-18T07:43:27Z",
        "message": "",
        "reason": "",
        "status": "False",
        "type": "Successful"
      },
      {
        "lastTransitionTime": "2022-03-18T08:26:08Z",
        "message": "Running reconciliation",
        "reason": "Running",
        "status": "False",
        "type": "Running"
      },
      {
        "ansibleResult": {
          "changed": 0,
          "completion": "2022-03-18T08:26:24.217728",
          "failures": 1,
          "ok": 14,
          "skipped": 1
        },
        "lastTransitionTime": "2022-03-18T08:26:25Z",
        "message": "The task includes an option with an undefined variable. The error was: No first item, sequence was empty.\n\nThe error appears to be in '/home/opc/cpe-workstation/mr_folder/workspace-2/osvc-kubernetes-operators/oci-services/roles/ocivaultkeys/tasks/fetch_vault_details_oci.yml': line 12, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: DEBUG | Fetch Vault Details | Extract Vault OCID n service_endpoint in source region\n ^ here\n",
        "reason": "Failed",
        "status": "True",
        "type": "Failure"
      }
    ]
  }
}
I want to reliably extract status.conditions[].type for the element that contains ansibleResult.
An extract of the CRD definition is below:
- name: v1alpha1
  served: true
  storage: true
  additionalPrinterColumns:
    - description: 'Status of the OCI Vault Key'
      jsonPath: .status.conditions[-1].type
      name: STATUS
      type: string
      priority: 0
The CRD needs a jsonPath expression that extracts this value.
Thanks
Please try the following:
kubectl get ocivaultkeys operator-key-broken -o jsonpath='{.status.conditions[?(@.ansibleResult)].type}'
Expected output: Failure
jsonpath help
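As an optional cross-check (a sketch assuming jq is installed), the same condition can be picked out of the raw JSON before wiring the expression into the CRD printer column:
kubectl get ocivaultkeys operator-key-broken -o json \
  | jq -r '.status.conditions[] | select(.ansibleResult) | .type'
# prints: Failure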

Convert 'connections.json' file to 'tnsnames.ora'

Is there a way to convert a connections.json file to tnsnames.ora?
I have roughly 50 database connections and I would rather not write them out one by one in a text editor. Is there a simpler way?
I'm using SQL Developer version 20.2.0.175.1842.
Below is about a quarter of my connections.json file; the full file is very large.
{
  "connections": [{
    "info": {
      "role": "",
      "SavePassword": "true",
      "OracleConnectionType": "BASIC",
      "RaptorConnectionType": "Oracle",
      "serviceName": "ABC",
      "Connection-Color-For-Editors": "-12417793",
      "customUrl": "jdbc:oracle:thin:@//a.b.c.d:1526/BDNPDBP",
      "oraDriverType": "thin",
      "NoPasswordConnection": "TRUE",
      "password": "aaa/bbb/ccc=",
      "hostname": "a.b.c.d",
      "driver": "oracle.jdbc.OracleDriver",
      "port": "1526",
      "subtype": "oraJDBC",
      "OS_AUTHENTICATION": "false",
      "ConnName": "BDAPDPBP-SGP-PRD_FAB",
      "KERBEROS_AUTHENTICATION": "false",
      "user": "ABC"
    },
    "name": "BDAPDPBP-SGP-PRD_FAB",
    "type": "jdbc"
  }, ...
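Since connections.json is plain JSON, one option is to generate the tnsnames.ora entries with a small script rather than by hand. A rough sketch using jq (it assumes every connection carries hostname, port and serviceName as in the excerpt above; adjust the key names if your export differs):
jq -r '.connections[]
  | "\(.name) =\n  (DESCRIPTION =\n    (ADDRESS = (PROTOCOL = TCP)(HOST = \(.info.hostname))(PORT = \(.info.port)))\n    (CONNECT_DATA = (SERVICE_NAME = \(.info.serviceName)))\n  )\n"' \
  connections.json > tnsnames.ora
Review the generated file afterwards; connection names that are not valid TNS aliases may need renaming.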

Adding a custom tag based on topicName (wildcard) via using JmxTrans to send Kafka JMX to influxDb

Basically, what I wanted to achieve was to get the MessagesInPerSec metric for every topic in Kafka and to add the topic name as a custom tag in InfluxDB, so that I can query by topic rather than by the objDomain definition. Below is my JmxTrans configuration (note the wildcard on topic, so that the MessagesInPerSec JMX attribute is fetched for all topics):
{
  "servers": [
    {
      "port": "9581",
      "host": "192.168.43.78",
      "alias": "kafka-metric",
      "queries": [
        {
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
              "url": "http://192.168.43.78:8086/",
              "database": "kafka",
              "username": "admin",
              "password": "root"
            }
          ],
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*",
          "attr": [
            "Count",
            "MeanRate",
            "OneMinuteRate",
            "FiveMinuteRate",
            "FifteenMinuteRate"
          ],
          "resultAlias": "newTopic"
        }
      ],
      "numQueryThreads": 2
    }
  ]
}
This yields a result in InfluxDB as follows:
[name=newTopic, time=1589425526087, tags={attributeName=FifteenMinuteRate,
className=com.yammer.metrics.reporting.JmxReporter$Meter, objDomain=kafka.server,
typeName=type=BrokerTopicMetrics,name=MessagesInPerSec,topic=backblaze_smart},
precision=MILLISECONDS, fields={FifteenMinuteRate=1362.9446063537794, _jmx_port=9581
}]
It creates a tag with the whole objDomain specified in the config, but I wanted topic as a separate tag, something like the following:
[name=newTopic, time=1589425526087, tags={attributeName=FifteenMinuteRate,
className=com.yammer.metrics.reporting.JmxReporter$Meter, objDomain=kafka.server,
topic=backblaze_smart,
typeName=type=BrokerTopicMetrics,name=MessagesInPerSec,topic=backblaze_smart},
precision=MILLISECONDS, fields={FifteenMinuteRate=1362.9446063537794, _jmx_port=9581
}]
I was not able to find adequate documentation on how to turn the wildcarded topic value into a separate tag with JmxTrans when writing to InfluxDB.
You just need to add the following additional properties to the InfluxDB output writer. Make sure you are using the latest jmxtrans release. The docs are here: https://github.com/jmxtrans/jmxtrans/wiki/InfluxDBWriter
"typeNames": ["topic"],
"typeNamesAsTags": "true"
Here is your config with the above modifications:
{
  "servers": [
    {
      "port": "9581",
      "host": "192.168.43.78",
      "alias": "kafka-metric",
      "queries": [
        {
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.InfluxDbWriterFactory",
              "url": "http://192.168.43.78:8086/",
              "database": "kafka",
              "username": "admin",
              "password": "root",
              "typeNames": ["topic"],
              "typeNamesAsTags": "true"
            }
          ],
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*",
          "attr": [
            "Count",
            "MeanRate",
            "OneMinuteRate",
            "FiveMinuteRate",
            "FifteenMinuteRate"
          ],
          "resultAlias": "newTopic"
        }
      ],
      "numQueryThreads": 2
    }
  ]
}

Can we use world updates for a France-only Overpass server? It seems to produce wrong query results

On my local Overpass API server, which holds only French data and has the hourly planet diff applied to it, some of the query responses are wrong.
It does not happen on every query: more like once every 200 requests, sometimes less often.
For example:
[timeout:360][out:json];way(48.359900103518235,5.708088852670471,48.360439696481784,5.708900947329539)[highway];out ;
returns 3 ways:
{
  "version": 0.6,
  "generator": "Overpass API 0.7.54.13 ff15392f",
  "osm3s": {
    "timestamp_osm_base": "2019-09-23T15:00:00Z",
  },
  "elements": [
    {
      "type": "way",
      "id": 53290349,
      "nodes": [...],
      "tags": {
        "highway": "secondary",
        "maxspeed": "100",
        "ref": "L 385"
      }
    },
    {
      "type": "way",
      "id": 238493649,
      "nodes": [...],
      "tags": {
        "highway": "residential",
        "name": "Rue du Stand",
        "ref": "C 3",
        "source": "..."
      }
    },
    {
      "type": "way",
      "id": 597978369,
      "nodes": [...],
      "tags": {
        "highway": "service"
      }
    }
  ]
}
The first one is in Germany, far to the east...
My question:
On an Overpass API server, is there a way to apply diffs only for a defined area? It is not documented (neither here: https://wiki.openstreetmap.org/wiki/Overpass_API/Installation nor here: https://wiki.openstreetmap.org/wiki/User:Breki/Overpass_API_Installation#Configuring_Diffs).
If not, how can I get rid of those wrong results?
Thanks,
Two questions, so two answers:
I found that a French diff feed exists: http://download.openstreetmap.fr/replication/europe/france/minute/ so I will restart my server with those diffs.
The best way to get rid of those wrong results is to keep the server consistent: do not apply world diffs to France-only data.
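For reference, a rough sketch of pointing the standard update scripts from the installation guide linked above at the French replication feed (the start id, directories and --meta flag are placeholders that depend on your setup):
# run from the Overpass bin/ directory; $START_ID is the replication sequence to resume from
nohup ./fetch_osc.sh $START_ID "http://download.openstreetmap.fr/replication/europe/france/minute/" "diffs/" &
nohup ./apply_osc_to_db.sh "diffs/" $START_ID --meta=no &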