I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored within 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true
resources:
  requests:
    memory: 256Mi
    cpu: 100m
replicas: 3
rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to parsing errors and stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so there must be something wrong with the way I'm writing the policy. Any idea what I'm doing wrong?
Thank you.
According to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to load a JSON definitions file by pointing extraConfiguration at it. So we ended up with this setup, which works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration:
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be created in Kubernetes before the Helm chart is installed, which goes something like this: kubectl apply -f ./rabbitmq-secret.yaml.
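Putting it together, the order of operations is roughly the following (using the release name and file names from this question; adjust them to your environment):

# 1. Create the secret that holds the load definition
kubectl apply -f ./rabbitmq-secret.yaml

# 2. Install the chart with the values file that references the secret
helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq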
You can use the built-in configuration options of the Helm chart. If needed, you can use extraSecrets to let the chart create the secret for you, so you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration; you should put all of this in that section, as it is meant for the classic (Erlang) config format.
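Whichever approach you go with, a quick way to verify that the policy actually landed is to list the policies on one of the nodes once they are running, for example (pod name taken from the question):

kubectl exec rabbitmq-local-0 -- rabbitmqctl list_policies -p /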
I am using Crossplane with AWS. When I run
kubectl apply -f aws-rds.yaml
I get this error:
dbsubnetgroup.database.aws.crossplane.io/prod-subnet-group unchanged
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "map", expected "array"
YAML file:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: production-rds
spec:
  forProvider:
    allocatedStorage: 50
    autoMinorVersionUpgrade: true
    applyModificationsImmediately: false
    backupRetentionPeriod: 0
    caCertificateIdentifier: rds-ca-2019
    copyTagsToSnapshot: false
    dbInstanceClass: db.t2.small
    dbSubnetGroupName: prod-subnet-group
    vpcSecurityGroupIDRefs:
      name: ["rds-access-sg"]
If I change it to what gohm'c suggested, I get an error again:
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs[0]): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "string", expected "map"
Security group
kubectl get SecurityGroup
NAME            READY   SYNCED   ID                     VPC                     AGE
rds-access-sg   True    True     sg-0p04733a3e2p8pp63   vpc-048b00e0000e7c1b1   19h
From the Crossplane CRDs:
vpcSecurityGroupIDRefs:
  description: VPCSecurityGroupIDRefs are references to VPCSecurityGroups
    used to set the VPCSecurityGroupIDs.
  items:
    description: A Reference to a named object.
    properties:
      name:
        description: Name of the referenced object.
        type: string
    required:
    - name
    type: object
  type: array
How do I change vpcSecurityGroupIDRefs so that it is provided as an array?
...vpcSecurityGroupIDRefs: got "map", expected "array"
Try:
...
vpcSecurityGroupIDRefs:
- name: rds-access-sg
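In context, only the vpcSecurityGroupIDRefs entry changes; the forProvider section from the question would then end like this (everything else stays as in the original YAML):

spec:
  forProvider:
    ...
    dbSubnetGroupName: prod-subnet-group
    vpcSecurityGroupIDRefs:
      - name: rds-access-sg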
I am trying to set a default value of 1 replica for a pod deployment, but I would also like the option to change the value by using --extra-vars="pod_replicas=2". I have tried the following, but it doesn't work for me.
vars:
  - pod_replicas: 1

spec:
  replicas: "{{ pod_replicas }}"
ERROR:
TASK [Create a deployment]
fatal: [localhost]: FAILED! => {"changed": false, "error": 422, "msg": "Failed to patch object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"... is invalid: patch: Invalid value: ...
(...)
... v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \ufffd, error found in #10 byte of ...|eplicas\":\"1\",\"revisi|..., bigger context ...|\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":\"1\",\"revisionHistoryLimit\":10,\"selector\":{\"matchLab|...\",\"field\":\"patch\"}]},\"code\":422}'", "reason": "Unprocessable Entity", "status": 422}
Any idea how I can fix this?? Thank you!
Regarding your question
How can I use --extra-vars in Ansible playbooks?
you may have a look into Understanding variable precedence, Using -e extra variables at the command line, and the following small test setup:
---
- hosts: localhost
  become: false
  gather_facts: false

  vars:
    REPLICAS: 1

  tasks:
    - name: Show value
      debug:
        msg: "{{ REPLICAS }} in {{ REPLICAS | type_debug }}"
which, for a run with
ansible-playbook vars.yml
results in the output
TASK [Show value] ******
ok: [localhost] =>
msg: 1 in int
and, for a run with
ansible-playbook --extra-vars="REPLICAS=2" vars.yml
results in the output
TASK [Show value] ******
ok: [localhost] =>
msg: 2 in unicode
Because of the error message
v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \ufffd, error found in #10 byte of ...|eplicas\":\"1\"
I've introduced the type_debug filter. Maybe it will be necessary to cast the data type to integer.
- name: Show value
  debug:
    msg: "{{ REPLICAS }} in {{ REPLICAS | int | type_debug }}"
Further Occurrences
When I've been typing numeric values from a variable file, they've been resolved as strings, not numbers
I have found a solution. Using a JSON object as an argument seems to work:
ansible-playbook --extra-vars '{ "pod_replicas":2 }' <playbook>.yaml
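For comparison, the key=value form passes the value as a string, while the JSON form keeps it as an integer (playbook name is just a placeholder):

# pod_replicas arrives as the string "2"
ansible-playbook --extra-vars "pod_replicas=2" playbook.yaml

# pod_replicas arrives as the integer 2
ansible-playbook --extra-vars '{ "pod_replicas": 2 }' playbook.yaml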
I'm trying to create a Cloud SQL instance through the Deployment Manager API. When I create it directly from a YAML file it is created successfully, but when I create the instance from a Jinja/Python file I get the error below:
code: RESOURCE_ERROR
location: /deployments/olpr/resources/test
message: '{"ResourceType":"sqladmin.v1beta4.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request
contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad
Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/project_id/instances","httpMethod":"POST"}}'
Is there any way I can see what the invalid argument is so that I can fix it? Please help me with some suggestions.
The resource is as follows:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]
YAML file:
resources:
  - name: he
    type: sqladmin.v1beta4.instance
    properties:
      region: europe-west1
      zone: europe-west1-b
      backendType: SECOND_GEN
      instanceType: CLOUD_SQL_INSTANCE
      databaseVersion: SQLSERVER_2017_EXPRESS
      serviceAccountEmailAddress: user@project_id.iam.gserviceaccount.com
      rootPassword: mypass
      settings:
        dataDiskSizeGb: 10
        dataDiskType: PD_SSD
        ipConfiguration:
          ipv4Enabled: false
          privateNetwork: vpc
        kind: sql#settings
        machineType: db-custom-1-3840
You're not supplying a region in the Python version. Try adding 'region': 'europe-west1' to the properties.
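In other words, a minimal sketch of the properties block from the Python version with the region added (all other values unchanged from the question):

'properties': {
    'region': 'europe-west1',  # the missing argument
    'zone': 'europe-west1-b',
    'rootPassword': '1234567',
    'instanceType': 'CLOUD_SQL_INSTANCE',
    'databaseVersion': 'SQLSERVER_2017_EXPRESS',
    'backendType': 'SECOND_GEN',
    'settings': {
        # ... same settings as in the question ...
    }
}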
I have an ELK stack (Filebeat -> Logstash -> Elasticsearch <- Kibana) running on Windows 10. I fed it the following two lines and found that Filebeat is not sending the whole text; some of the leading text is cut off.
2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info - injectip ip 192.168.16.89
2018-04-27 10:42:23 [RMI TCP Connection(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'
In the Filebeat console, I notice the following output:
2018-05-24T09:02:50.361+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:02:50.361Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97083,
  "message": "xec-1] - INFO - app-info - injectip ip 192.168.16.89",
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I",
    "version": "6.2.3"
  }
}
and
2018-05-24T09:11:10.374+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
  "@timestamp": "2018-05-24T01:11:10.373Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.3"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "version": "6.2.3",
    "name": "DESKTOP-M4AFV3I",
    "hostname": "DESKTOP-M4AFV3I"
  },
  "source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
  "offset": 97272,
  "message": "n(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'"
}
In the console output one can see that the front of the message part is cut off. In the first case, '2018-04-27 10:42:49 [http-nio-8088-e' is cut; in the second case, '2018-04-27 10:42:23 [RMI TCP Connectio' is cut.
Why does Filebeat do this? It makes my regex generate a parse exception in Logstash.
I list my filebeat.yml file as follows:
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  #enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - e:\sjj\xxx\YKT\ELK\twoFormats.log

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
When installing Kubernetes 1.7.2, a warning about kube-proxy appears:
WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
So I tried to make my own config file, like this:
{
  "bind-address": "10.110.200.42",
  "hostname-override": "10.110.200.42",
  "cluster-cidr": "172.30.0.0/16",
  "logtostderr": true,
  "v": 0,
  "allow-privileged": true,
  "master": "http://10.110.200.42:8080",
  "etcd-servers": "http://10.110.200.42:2379"
}
but I still get an error:
error: Object 'apiVersion' is missing in '{
I think I need an example of the config file, but I googled without any result, and even searching the source code in Git I found nothing useful. Please help!
PS: I found a way to generate an example file: just use the --write-config-to command-line flag. The example is below:
apiVersion: componentconfig/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: ""
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  max: 0
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
featureGates: ""
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpTimeoutMilliseconds: 250ms
I am using k8s version 1.10.3, and just for simplicity and testing, I disabled the ServiceAccount admission plugin in the apiserver by adding the flag
--disable-admission-plugins=ServiceAccount
For kube-proxy, I just add the --master flag, e.g.
./kube-proxy --master 127.0.0.1:8080 --v=3
and kube-proxy turns out to be working.
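To address the deprecation warning itself, the file generated with --write-config-to can be passed back to kube-proxy through the --config flag the warning refers to, roughly like this (the path is just an example):

# write the example config once, then adjust it as needed
./kube-proxy --write-config-to=/var/lib/kube-proxy/config.yaml

# start kube-proxy from the config file instead of individual flags
./kube-proxy --config=/var/lib/kube-proxy/config.yaml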