How to write a YAML array? error: error validating "aws-rds.yaml" - kubernetes

I am using Crossplane with AWS.
When I run
kubectl apply -f aws-rds.yaml
I get this error:
dbsubnetgroup.database.aws.crossplane.io/prod-subnet-group unchanged
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "map", expected "array"
YAML file:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: production-rds
spec:
  forProvider:
    allocatedStorage: 50
    autoMinorVersionUpgrade: true
    applyModificationsImmediately: false
    backupRetentionPeriod: 0
    caCertificateIdentifier: rds-ca-2019
    copyTagsToSnapshot: false
    dbInstanceClass: db.t2.small
    dbSubnetGroupName: prod-subnet-group
    vpcSecurityGroupIDRefs:
      name: ["rds-access-sg"]
If I change it to what @gohm'c suggested,
I get another error:
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs[0]): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "string", expected "map"
Security group:
kubectl get SecurityGroup
NAME            READY   SYNCED   ID                     VPC                     AGE
rds-access-sg   True    True     sg-0p04733a3e2p8pp63   vpc-048b00e0000e7c1b1   19h
From the Crossplane CRD:
vpcSecurityGroupIDRefs:
  description: VPCSecurityGroupIDRefs are references to VPCSecurityGroups
    used to set the VPCSecurityGroupIDs.
  items:
    description: A Reference to a named object.
    properties:
      name:
        description: Name of the referenced object.
        type: string
    required:
    - name
    type: object
  type: array
How do I write vpcSecurityGroupIDRefs so that it is parsed as an array?

...vpcSecurityGroupIDRefs: got "map", expected "array"
Try:
...
vpcSecurityGroupIDRefs:
  - name: rds-access-sg
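The CRD schema above declares vpcSecurityGroupIDRefs as an array of objects, each with a required name field, which is why the original attempt (a name key holding a list) is rejected with got "map", expected "array". Each reference has to be its own list item. If you ever need to reference more than one SecurityGroup, the same list form extends naturally; the second entry below is purely illustrative:

vpcSecurityGroupIDRefs:
  - name: rds-access-sg
  - name: another-sg   # hypothetical second SecurityGroup reference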

Related

ValidationError while adding custom branding in opensearch_dashboards.yml

I have a working OpenSearch cluster running in AKS using the Opster OpenSearch operator. When I add custom branding in the additionalConfig section, it throws an error.
dashboards:
  version: 2.5.0
  enable: true
  additionalConfig:
    server.basePath: "/opensearch"
    server.rewriteBasePath: "false"
    opensearch_security.multitenancy.enabled: "true"
    opensearchDashboards.branding:
      logo:
        defaultUrl: "https://example.com/sample.svg"
        darkModeUrl: "https://example.com/dark-mode-sample.svg"
      mark:
        defaultUrl: "https://example.com/sample1.svg"
        darkModeUrl: "https://example.com/dark-mode-sample1.svg"
      loadingLogo:
        defaultUrl: "https://example.com/sample2.svg"
        darkModeUrl: "https://example.com/dark-mode-sample2.svg"
      faviconUrl: "https://example.com/sample3.svg"
      applicationTitle: "My Custom Title"
Error Received:
error: error validating "opensearch-cluster.yaml": error validating data: ValidationError(OpenSearchCluster.spec.dashboards.additionalConfig.opensearchDashboards.branding): invalid type for io.opster.opensearch.v1.OpenSearchCluster.spec.dashboards.additionalConfig: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
I also tried updating the ConfigMap that points to opensearch_dashboards.yml, but the ConfigMap is not getting updated.
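The validation message says the values under additionalConfig must be plain strings, so the nested branding map cannot be passed as-is. A hedged sketch of a workaround, assuming OpenSearch Dashboards accepts dotted keys in its YAML configuration the same way the other additionalConfig entries above do, is to flatten the branding settings into string-valued entries:

additionalConfig:
  opensearchDashboards.branding.logo.defaultUrl: "https://example.com/sample.svg"
  opensearchDashboards.branding.mark.defaultUrl: "https://example.com/sample1.svg"
  opensearchDashboards.branding.applicationTitle: "My Custom Title"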

How can I use '--extra-vars' for replicas in Ansible playbooks?

I am trying to set a default value of 1 replica for a pod deployment, but I would also like the option to change the value with --extra-vars="pod_replicas=2". I have tried the following, but it doesn't work for me.
vars:
  - pod_replicas: 1

spec:
  replicas: "{{ pod_replicas }}"
ERROR:
TASK [Create a deployment]
fatal: [localhost]: FAILED! => {"changed": false, "error": 422, "msg": "Failed to patch object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"m essage\":\" \\\\\"\\\\\" is invalid: patch: Invalid value: \\\\\"{\\\\\\\\\\\\\"apiVersion\\\\\\\\\\\\\":\\\\\\\\\\\\\"apps/v1\\\\\\\\\\\\\",\\\\\\\\\\\\\"kind\\\\\\\\\\\\\":\\\\\\\\\ \\\\"Deployment\\\\\\\\\\\\\",\\\\\\\\\\\\\"metadata\\\\\\\\\\\\\":{\\\\\\\\\\\\\"annotations\\\\\\\\\\\\\":{\\\\\\\\\\\\\"deployment.kubernetes.io/revision\\\\\\\\\\\\\":\\\\\\\\\\\\ \"1\\\\\\\\\\\\\"},\\\\\\\\\\\\\
(...)
\\"2022-02-14T12:13:38Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"lastTransitionTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T12:13:33Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"reason\\\\\\\\\\\\\":\\\\\\\\\\ \\\"NewReplicaSetAvailable\\\\\\\\\\\\\",\\\\\\\\\\\\\"message\\\\\\\\\\\\\":\\\\\\\\\\\\\"ReplicaSet \\\\\\\\\\\\\\\\\\\\\\\\\\\\\"ovms-deployment-57c9bbdfb8\\\\\\\\\\\\\\\\\\\\\\\\\ \\\\" has successfully progressed.\\\\\\\\\\\\\"},{\\\\\\\\\\\\\"type\\\\\\\\\\\\\":\\\\\\\\\\\\\"Available\\\\\\\\\\\\\",\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\\\\\\\\\\\\\"True\\\\\\\\ \\\\\",\\\\\\\\\\\\\"lastUpdateTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T14:18:33Z\\\\\\\\\\\\\",\\\\\\\\\\\\\"lastTransitionTime\\\\\\\\\\\\\":\\\\\\\\\\\\\"2022-02-14T14:18:33Z\\\ \\\\\\\\\\",\\\\\\\\\\\\\"reason\\\\\\\\\\\\\":\\\\\\\\\\\\\"MinimumReplicasAvailable\\\\\\\\\\\\\",\\\\\\\\\\\\\"message\\\\\\\\\\\\\":\\\\\\\\\\\\\"Deployment has minimum availabili ty.\\\\\\\\\\\\\"}]}}\\\\\": v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|eplicas\\\\\":\\\\\"1\\\\\",\\ \\\"revisi|..., bigger context ...|\\\\\"spec\\\\\":{\\\\\"progressDeadlineSeconds\\\\\":600,\\\\\"replicas\\\\\":\\\\\"1\\\\\",\\\\\"revisionHistoryLimit\\\\\":10,\\\\\"selector\\\\\ ":{\\\\\"matchLab|...\",\"field\":\"patch\"}]},\"code\":422}\\n'", "reason": "Unprocessable Entity", "status": 422}
Any idea how I can fix this?? Thank you!
Regarding your question
How can I use --extra-vars in Ansible playbooks?
you may have a look at Understanding variable precedence, Using -e extra variables at the command line, and the following small test setup:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    REPLICAS: 1

  tasks:
    - name: Show value
      debug:
        msg: "{{ REPLICAS }} in {{ REPLICAS | type_debug }}"
which, for a run with
ansible-playbook vars.yml
will result in the output
TASK [Show value] ******
ok: [localhost] =>
msg: 1 in int
and, for a run with
ansible-playbook --extra-vars="REPLICAS=2" vars.yml
in the output
TASK [Show value] ******
ok: [localhost] =>
msg: 2 in unicode
Because of the error message
v1.Deployment.Spec: v1.DeploymentSpec.Replicas: readUint32: unexpected character: \\\\ufffd, error found in #10 byte of ...|eplicas\\\\\":\\\\\"1\\\\\"
I've introduced the type_debug filter. It may be necessary to cast the value to an integer:
- name: Show value
  debug:
    msg: "{{ REPLICAS }} in {{ REPLICAS | int | type_debug }}"
Further Occurrences
When I've been typing numeric values from a variable file, they've been resolved as strings, not numbers
I have found a solution. Using a JSON object as an argument seems to work:
ansible-playbook --extra-vars '{ "pod_replicas":2 }' <playbook>.yaml
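This works because values passed with the key=value form of --extra-vars always arrive as strings, while a JSON payload keeps its types, so pod_replicas reaches the template as an integer. As a quick check (assuming the vars.yml test playbook from the answer above), a run such as
ansible-playbook --extra-vars '{"REPLICAS": 2}' vars.yml
should report the value as an int:
TASK [Show value] ******
ok: [localhost] =>
msg: 2 in int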

serverless-appsync-plugin 'pipeline' deployment error

I am using Serverless to deploy an AppSync API that uses 'PIPELINE' resolvers backed by Lambda functions. The plugin https://github.com/sid88in/serverless-appsync-plugin is used to deploy AppSync with 'pipeline' support. I followed the description from the documentation, but when I try to deploy it myself I get this error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource GraphQlDsmeInfo
functions:
  graphlql:
    handler: handler.meInfo
    name: meInfo
custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: meInfo
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
Could somebody help me configure serverless-appsync-plugin with the pipeline kind?
You need to specify the data source used in your function.
It seems you've deployed the handler as a Lambda function. If not, you should first have a separate serverless.yml config for your Lambda and deploy it. Then you need to attach this Lambda as an AppSync data source, so your AppSync config would look like this:
custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    dataSources:
      - type: AWS_LAMBDA
        name: Lambda_Name
        description: 'Lambda Description'
        config:
          lambdaFunctionArn: 'arn:aws:lambda:xxxx'
          serviceRoleArn: 'arn:aws:iam::xxxx'
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: Lambda_Name
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
There is an article that describes the process in detail, which might be useful: https://medium.com/hackernoon/running-a-scalable-reliable-graphql-endpoint-with-serverless-24c3bb5acb43

Google Cloud Deployment, invalid_argument

I'm trying to create a Cloud SQL instance through the Deployment Manager API. When I create it directly from a YAML file it is created successfully, but when I create the instance from a Jinja/Python file I get the error below:
code: RESOURCE_ERROR
location: /deployments/olpr/resources/test
message: '{"ResourceType":"sqladmin.v1beta4.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Request
contains an invalid argument.","status":"INVALID_ARGUMENT","statusMessage":"Bad
Request","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/project_id/instances","httpMethod":"POST"}}'
Is there any way I can see what the invalid argument is so that I can fix it?
Please help me with some valid suggestions.
The resource is defined as below:
resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            'settings': {
                'machineType': 'db-custom-1-3840',
                'dataDiskSizeGb': 10,
                'dataDiskType': 'PD_SSD',
                'ipConfiguration': {
                    'ipv4Enabled': False,
                    'privateNetwork': 'projects/project_id/global/networks/project_id-vpc'
                }
            }
        }
    }
]
YAML file:
resources:
  - name: he
    type: sqladmin.v1beta4.instance
    properties:
      region: europe-west1
      zone: europe-west1-b
      backendType: SECOND_GEN
      instanceType: CLOUD_SQL_INSTANCE
      databaseVersion: SQLSERVER_2017_EXPRESS
      serviceAccountEmailAddress: user#project_id.iam.gserviceaccount.com
      rootPassword: mypass
      settings:
        dataDiskSizeGb: 10
        dataDiskType: PD_SSD
        ipConfiguration:
          ipv4Enabled: false
          privateNetwork: vpc
        kind: sql#settings
        machineType: db-custom-1-3840
You're not supplying a region in the Python version. Try adding `'region': 'europe-west1'` to the properties.
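A minimal sketch of the corrected Python resource, assuming everything else stays as in the original definition:

resources = [
    {
        'name': 'test',
        'type': 'sqladmin.v1beta4.instance',
        'properties': {
            'region': 'europe-west1',  # the missing argument called out above
            'zone': 'europe-west1-b',
            'rootPassword': '1234567',
            'instanceType': 'CLOUD_SQL_INSTANCE',
            'databaseVersion': 'SQLSERVER_2017_EXPRESS',
            'backendType': 'SECOND_GEN',
            # 'settings': { ... } unchanged from the question
        }
    }
]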

How to pass extra configuration to RabbitMQ with Helm?

I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored across 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true

resources:
  requests:
    memory: 256Mi
    cpu: 100m

replicas: 3

rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to parsing errors and stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so it must be something wrong with the way I'm typing in the policy. Any idea what I'm doing wrong?
Thank you.
According to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to load definitions from a JSON file referenced via extraConfiguration. So we ended up with this setup, which works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration:
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be created in Kubernetes before the Helm chart is installed, which goes something like this: kubectl apply -f ./rabbitmq-secret.yaml.
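Once the pods are running, one way to confirm that the definitions were actually imported is to list the policies on a node (the pod name below is the one from the question):
kubectl exec rabbitmq-local-0 -- rabbitmqctl list_policies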
You can use the chart's default config.
If needed, you can use extraSecrets to let the chart create the secret for you. This way, you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }

rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration; you should put all of this in that section, as it is meant for the classic (Erlang) config format.