Prometheus alertmanager.yml file gives unmarshal error

msg="Loading configuration file failed"
file=alertmanager.yml
err="yaml: unmarshal errors:\n line 3: cannot unmarshal !!str
`basic_auth` into config.plain\n line 4: field username not found in
type config.plain\n line 5: field password not found in type
config.plain\n line 20: cannot unmarshal !!map into
[]*config.WebhookConfig"
Below is the YAML file; how do I resolve this?
global:
  http_config: basic_auth
  username: 1234
  password: 1234
inhibit_rules:
  - equal:
      - alertname
      - dev
      - instance
    source_match:
      severity: critical
    target_match:
      severity: warning
receivers:
  - name: web.hook
    webhook_configs:
      http_config: global.http_config
      url: "http://localhost:8080"
route:
  group_by:
    - alertname
  group_interval: 5m
  group_wait: 30s
  receiver: web.hook
  repeat_interval: 1h

Try this:
global:
  http_config:
    basic_auth:
      username: 1234
      password: 1234
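That resolves the basic_auth errors (lines 3-5). The error on line 20 is separate: webhook_configs must be a YAML list of webhook entries rather than a map, and the global http_config already acts as the default HTTP client configuration for notifiers, so it does not need to be referenced per receiver. A minimal sketch of the receivers section, reusing the URL from the question:

receivers:
  - name: web.hook
    webhook_configs:
      - url: "http://localhost:8080"

Depending on the Alertmanager version, the username and password values may also need to be quoted so they are parsed as strings rather than integers.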

Related

AWS ECS Fargate service deployment is failing "Invalid request provided: CreateService error: Container Port is missing "

I am deploying my containerized Spring Boot app on ECS Fargate using CloudFormation templates.
Note: I am using an internal ALB with a target group of type IP.
My TaskDefinition is fine, but the service stack gives the below error while creating the stack:
Resource handler returned message: "Invalid request provided: CreateService error: Container Port is missing (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: XXX-XXX-XXX; Proxy: null)" (RequestToken: xxx-xxx-xxx, HandlerErrorCode: InvalidRequest)
Does anyone know what this error means?
I have specified a container with a port in the task definition.
My template:
AWSTemplateFormatVersion: "2010-09-09"
Description: "CloudFormation template for creating a task definition"
Parameters:
  taskDefName:
    Type: String
    Default: 'task-def'
  springActiveProfile:
    Type: String
    Default: 'Dev'
  appDefaultPort:
    Type: Number
    Default: 3070
  heapMemLimit:
    Type: String
    Default: "-Xms512M -Xmx512M"
Resources:
  MyTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      RequiresCompatibilities:
        - "FARGATE"
      Family: !Ref taskDefName
      NetworkMode: "awsvpc"
      RuntimePlatform:
        CpuArchitecture: X86_64
        OperatingSystemFamily: LINUX
      ExecutionRoleArn: "xxxxx"
      Cpu: 0.25vCPU
      Memory: 0.5GB
      ContainerDefinitions:
        - Name: "container1"
          Image: xxx
          MemoryReservation: 128
          Memory: 512
          PortMappings:
            - ContainerPort: 3070
              Protocol: tcp
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: 'ecs'
              awslogs-region: us-east-1
              awslogs-stream-prefix: 'spec'
  OneService:
    Type: AWS::ECS::Service
    Properties:
      LaunchType: FARGATE
      TaskDefinition: !Ref MyTaskDefinition
      Cluster: "clusterName"
      DesiredCount: 2
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 70
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-xxx
            - subnet-xxx
          SecurityGroups:
            - sg-xxx
      LoadBalancers:
        - ContainerName: container1
        - ContainerPort: 3070
        - TargetGroupArn: arn:xxx
This was due to the YAML format of the LoadBalancers property.
Incorrect:
LoadBalancers:
  - ContainerName: container1
  - ContainerPort: 3070
  - TargetGroupArn: arn:xxx
Correct:
LoadBalancers:
  - ContainerName: container1
    ContainerPort: 3070
    TargetGroupArn: arn:xxx
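For context, each leading dash in YAML starts a new list item, so the incorrect form parses as a list of three one-key mappings. ECS then receives a load balancer entry that has a ContainerName but no ContainerPort, which is exactly what the error reports. An annotated sketch of how the incorrect form is read:

LoadBalancers:
  - ContainerName: container1   # item 1: name only, no ContainerPort or TargetGroupArn
  - ContainerPort: 3070         # item 2: port only
  - TargetGroupArn: arn:xxx     # item 3: target group only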

Ansible conditional won't recognise debug output

I am trying to use the debug output in a conditional on a Kubernetes object, but it doesn't seem to be recognised properly:
- name: get some service status log
  kubernetes.core.k8s_log:
    namespace: "{{ product.namespace }}"
    label_selectors:
      - app.kubernetes.io/name=check-service-existence
  register: service_existence
- name: some service existence check log
  debug:
    msg: "{{ service_existence.log_lines | first }}"
- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  when: service_existence == "service_does_not_exist"
What I am getting when I run it is:
TASK [playbook : some service existence check log] ***
ok: [127.0.0.1] =>
msg: service_does_not_exist
TASK [playbook : create service for "namespace"] ***
skipping: [127.0.0.1]
I suspect that it treats msg: as part of the string. How can I deal with this properly?
Since your debug message shows the value of service_existence.log_lines | first, your conditional should compare against that same expression:
when: service_existence.log_lines | first == "service_does_not_exist"
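Applied to the task from the question, only the when line changes; everything else stays as posted:

- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  when: service_existence.log_lines | first == "service_does_not_exist"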

Error on Telegraf Helm Chart update: Error parsing data

I'm trying to deploy the Telegraf Helm chart on Kubernetes:
helm upgrade --install telegraf-instance -f values.yaml influxdata/telegraf
When I add the modbus input plugin with holding_registers, I get this error:
[telegraf] Error running agent: Error loading config file /etc/telegraf/telegraf.conf: Error parsing data: line 49: key `name’ is in conflict with line 2fd
My values.yaml is below:
## Default values.yaml for Telegraf
## This is a YAML-formatted file.
## ref: https://hub.docker.com/r/library/telegraf/tags/
replicaCount: 1
image:
  repo: "telegraf"
  tag: "1.21.4"
  pullPolicy: IfNotPresent
podAnnotations: {}
podLabels: {}
imagePullSecrets: []
args: []
env:
  - name: HOSTNAME
    value: "telegraf-polling-service"
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
service:
  enabled: true
  type: ClusterIP
  annotations: {}
rbac:
  create: true
  clusterWide: false
  rules: []
serviceAccount:
  create: false
  name:
  annotations: {}
config:
  agent:
    interval: 60s
    round_interval: true
    metric_batch_size: 1000000
    metric_buffer_limit: 100000000
    collection_jitter: 0s
    flush_interval: 60s
    flush_jitter: 0s
    precision: ''
    hostname: '9825128'
    omit_hostname: false
  processors:
    - enum:
        mapping:
          field: "status"
          dest: "status_code"
          value_mappings:
            healthy: 1
            problem: 2
            critical: 3
  inputs:
    - modbus:
        name: "PS MAIN ENGINE"
        controller: 'tcp://192.168.0.101:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
    - modbus:
        name: "SB MAIN ENGINE"
        controller: 'tcp://192.168.0.102:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
  outputs:
    - influxdb_v2:
        token: token
        organization: organisation
        bucket: bucket
        urls:
          - "url"
metrics:
  health:
    enabled: true
    service_address: "http://:8888"
    threshold: 5000.0
  internal:
    enabled: true
    collect_memstats: false
pdb:
  create: true
  minAvailable: 1
The problem was resolved by doing the following steps:
deleted the config section of my values.yaml
added my telegraf.conf to the /additional_config path
added a ConfigMap to Kubernetes with the following command:
kubectl create configmap external-config --from-file=/additional_config
added the following to values.yaml:
volumes:
  - name: my-config
    configMap:
      name: external-config
volumeMounts:
  - name: my-config
    mountPath: /additional_config
args:
  - "--config=/etc/telegraf/telegraf.conf"
  - "--config-directory=/additional_config"

KOWL Kafka Connect yaml configuration - has anyone managed to get it right?

I am getting this error:
{"level":"fatal","msg":"failed to load config","error":"failed to unmarshal YAML config into config struct: 1 error(s) decoding:\n\n* '' has invalid keys: connect"}
with the following YAML:
kafka:
  brokers:
    - 192.168.12.12:9092
  schemaRegistry:
    enabled: true
    urls:
      - "http://192.168.12.12:8081"
connect:
  enabled: true
  clusters:
    name: xy
    url: "http://192.168.12.12:8091"
    tls:
      enabled: false
    username: 1
    password: 1
    name: xya
    url: http://192.168.12.12:8092
Try downgrading your image back to v1.5.0.
It seems there's been a mistake in master recently.
You can find all the images here
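Independent of the image tag, note that clusters is normally written as a list of cluster entries rather than a single mapping with repeated keys. A sketch using the values from the question, not verified against a specific Kowl version:

connect:
  enabled: true
  clusters:
    - name: xy
      url: "http://192.168.12.12:8091"
      tls:
        enabled: false
      username: 1
      password: 1
    - name: xya
      url: "http://192.168.12.12:8092"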

coalesce.go:200: warning: cannot overwrite table with non table for notifiers (map[])

In the Grafana chart I am trying to add notifiers and I am getting the error above. The notifiers config is below:
notifiers: {}
  - name: email-notifier
    type: email
    uid: email1
    # either:
    org_id: 1
    # or
    org_name: Main Org.
    is_default: true
    settings:
      addresses: an_email_address#example.com
The critical part that was missing was the notifiers.yaml: / notifiers: nesting inside notifiers:, in place of the bare notifiers: {}:
notifiers:
  notifiers.yaml:
    notifiers:
      - name: email-notifier
        type: email
        uid: email1
        # either:
        org_id: 1
        # or
        org_name: Main Org.
        is_default: true
        settings:
          addresses: an_email_address#example.com
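The warning itself comes from Helm's value merging: the chart's default for notifiers is a table (a map), while the value supplied in the question parses as a non-table, so Helm cannot coalesce the two. A simplified illustration, assuming the original list was taken as the value of notifiers:

# chart default : notifiers: {}        <- a table (map)
# user override : notifiers: [ ... ]   <- a non-table (list)
# Helm refuses to overwrite a table with a non-table, hence the coalesce warning.

Restoring the notifiers.yaml / notifiers nesting keeps the value a map, so the warning disappears.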