ValidationError while adding custom branding in opensearch_dashboards.yml - opensearch

I have a working OpenSearch cluster running in AKS using the Opster OpenSearch operator. When I add custom branding in the additionalConfig section, it throws an error.
dashboards:
  version: 2.5.0
  enable: true
  additionalConfig:
    server.basePath: "/opensearch"
    server.rewriteBasePath: "false"
    opensearch_security.multitenancy.enabled: "true"
    opensearchDashboards.branding:
      logo:
        defaultUrl: "https://example.com/sample.svg"
        darkModeUrl: "https://example.com/dark-mode-sample.svg"
      mark:
        defaultUrl: "https://example.com/sample1.svg"
        darkModeUrl: "https://example.com/dark-mode-sample1.svg"
      loadingLogo:
        defaultUrl: "https://example.com/sample2.svg"
        darkModeUrl: "https://example.com/dark-mode-sample2.svg"
      faviconUrl: "https://example.com/sample3.svg"
      applicationTitle: "My Custom Title"
Error Received:
error: error validating "opensearch-cluster.yaml": error validating data: ValidationError(OpenSearchCluster.spec.dashboards.additionalConfig.opensearchDashboards.branding): invalid type for io.opster.opensearch.v1.OpenSearchCluster.spec.dashboards.additionalConfig: got "map", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
I tried updating the ConfigMap that points to opensearch_dashboards.yml as well, but the ConfigMap is not getting updated.
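The ValidationError comes from the operator's CRD schema: every value under additionalConfig must be a plain string ("got map, expected string"), so a nested branding map is rejected before it ever reaches opensearch_dashboards.yml. A possible workaround (an untested sketch, assuming OpenSearch Dashboards accepts dotted keys in its config file the way Kibana-style configs generally do) is to flatten the branding block into individual string-valued keys:

additionalConfig:
  server.basePath: "/opensearch"
  server.rewriteBasePath: "false"
  opensearch_security.multitenancy.enabled: "true"
  # each branding setting flattened to a dotted key with a plain string value
  opensearchDashboards.branding.logo.defaultUrl: "https://example.com/sample.svg"
  opensearchDashboards.branding.logo.darkModeUrl: "https://example.com/dark-mode-sample.svg"
  opensearchDashboards.branding.mark.defaultUrl: "https://example.com/sample1.svg"
  opensearchDashboards.branding.mark.darkModeUrl: "https://example.com/dark-mode-sample1.svg"
  opensearchDashboards.branding.loadingLogo.defaultUrl: "https://example.com/sample2.svg"
  opensearchDashboards.branding.loadingLogo.darkModeUrl: "https://example.com/dark-mode-sample2.svg"
  opensearchDashboards.branding.faviconUrl: "https://example.com/sample3.svg"
  opensearchDashboards.branding.applicationTitle: "My Custom Title"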

Related

In CDK, can I wait until a Helm-installed operator is running before applying a manifest?

I'm installing the External Secrets Operator (ESO) alongside Apache Pinot into an EKS cluster using CDK. I'm running into an issue that I think is caused by CDK attempting to create a resource defined by the ESO before the ESO has actually gotten up and running. Here's the relevant code:
// install Pinot
const pinot = cluster.addHelmChart('Pinot', {
  chartAsset: new Asset(this, 'PinotChartAsset', { path: path.join(__dirname, '../pinot') }),
  release: 'pinot',
  namespace: 'pinot',
  createNamespace: true
});

// install the External Secrets Operator
const externalSecretsOperator = cluster.addHelmChart('ExternalSecretsOperator', {
  chart: 'external-secrets',
  release: 'external-secrets',
  repository: 'https://charts.external-secrets.io',
  namespace: 'external-secrets',
  createNamespace: true,
  values: {
    installCRDs: true,
    webhook: {
      port: 9443
    }
  }
});

// create a Fargate Profile
const fargateProfile = cluster.addFargateProfile('FargateProfile', {
  fargateProfileName: 'externalsecrets',
  selectors: [{ namespace: 'external-secrets' }]
});

// create the Service Account used by the Secret Store
const serviceAccount = cluster.addServiceAccount('ServiceAccount', {
  name: 'eso-service-account',
  namespace: 'external-secrets'
});
serviceAccount.addToPrincipalPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    'secretsmanager:GetSecretValue',
    'secretsmanager:DescribeSecret'
  ],
  resources: [
    'arn:aws:secretsmanager:us-east-1:<MY-ACCOUNT-ID>:secret:*'
  ]
}));
serviceAccount.node.addDependency(externalSecretsOperator);

// create the Secret Store, an ESO resource
const secretStoreManifest = getSecretStoreManifest(serviceAccount);
const secretStore = cluster.addManifest('SecretStore', secretStoreManifest);
secretStore.node.addDependency(serviceAccount);
secretStore.node.addDependency(fargateProfile);

// create the External Secret, another ESO resource
const externalSecretManifest = getExternalSecretManifest(secretStoreManifest);
const externalSecret = cluster.addManifest('ExternalSecret', externalSecretManifest);
externalSecret.node.addDependency(secretStore);
externalSecret.node.addDependency(pinot);
Even though I've set the ESO as a dependency to the Secret Store, when I try to deploy this I get the following error:
Received response status [FAILED] from custom resource. Message returned: Error: b'Error from server (InternalError): error when creating "/tmp/manifest.yaml": Internal error occurred: failed calling webhook "validate.clustersecretstore.external-secrets.io": Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-clustersecretstore?timeout=5s": no endpoints available for service "external-secrets-webhook"\n'
If I understand correctly, this is the error you'd get if you try to add a Secret Store before the ESO is fully installed. I'm guessing that CDK does not wait until the ESO's pods are running before attempting to apply the manifest. Furthermore, if I comment out the lines that create the Secret Store and External Secret, do a cdk deploy, uncomment those lines and then deploy again, everything works fine.
Is there any way around this? Some way I can retry applying the manifest, or to wait a period of time before attempting the apply?
The addHelmChart method has a property wait that is set to false by default; setting it to true tells CDK not to mark the installation as complete until all of its K8s resources are in a ready state.
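Applied to the chart above, that looks like the following (the only change from the original snippet is the wait flag):

// install the External Secrets Operator and block until its resources are ready
const externalSecretsOperator = cluster.addHelmChart('ExternalSecretsOperator', {
  chart: 'external-secrets',
  release: 'external-secrets',
  repository: 'https://charts.external-secrets.io',
  namespace: 'external-secrets',
  createNamespace: true,
  wait: true, // don't mark the install complete until the chart's K8s resources are ready
  values: {
    installCRDs: true,
    webhook: {
      port: 9443
    }
  }
});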

How to write YAML array? error: error validating "aws-rds.yaml"

I am using Crossplane and AWS. When I run
kubectl apply -f aws-rds.yaml
I get this error:
dbsubnetgroup.database.aws.crossplane.io/prod-subnet-group unchanged
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "map", expected "array"
YAML file:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: production-rds
spec:
  forProvider:
    allocatedStorage: 50
    autoMinorVersionUpgrade: true
    applyModificationsImmediately: false
    backupRetentionPeriod: 0
    caCertificateIdentifier: rds-ca-2019
    copyTagsToSnapshot: false
    dbInstanceClass: db.t2.small
    dbSubnetGroupName: prod-subnet-group
    vpcSecurityGroupIDRefs:
      name: ["rds-access-sg"]
If I change it to what gohm'c suggested, I get an error again:
error: error validating "aws-rds.yaml": error validating data: ValidationError(RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs[0]): invalid type for io.crossplane.aws.database.v1beta1.RDSInstance.spec.forProvider.vpcSecurityGroupIDRefs: got "string", expected "map"
Security group:
kubectl get SecurityGroup
NAME            READY   SYNCED   ID                     VPC                     AGE
rds-access-sg   True    True     sg-0p04733a3e2p8pp63   vpc-048b00e0000e7c1b1   19h
From the Crossplane CRD:
vpcSecurityGroupIDRefs:
  description: VPCSecurityGroupIDRefs are references to VPCSecurityGroups
    used to set the VPCSecurityGroupIDs.
  items:
    description: A Reference to a named object.
    properties:
      name:
        description: Name of the referenced object.
        type: string
    required:
      - name
    type: object
  type: array
How do I write vpcSecurityGroupIDRefs so that it is the expected array?
...vpcSecurityGroupIDRefs: got "map", expected "array"
Per the CRD above, vpcSecurityGroupIDRefs is an array of objects, each with a required name property, so every entry must be a list item that is itself a mapping. Try:
...
vpcSecurityGroupIDRefs:
  - name: rds-access-sg
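Multiple references would follow the same list-of-mappings pattern (the second entry below is hypothetical, just to illustrate the shape):

vpcSecurityGroupIDRefs:
  - name: rds-access-sg
  - name: another-sg   # hypothetical second reference; each item is a mapping with a name key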

AWS HTTP API Integration with AWS Step Functions -> Sending Multiple Values in the Input

I have a Type: AWS::Serverless::HttpApi which I am trying to connect to a Type: AWS::Serverless::StateMachine as a trigger, meaning the HTTP API would trigger the Step Functions state machine.
I can get it working by specifying only a single input. For example, the DefinitionBody, when it works, looks like this:
DefinitionBody:
  info:
    version: '1.0'
    title:
      Ref: AWS::StackName
  paths:
    "/github/secret":
      post:
        responses:
          default:
            description: "Default response for POST /"
        x-amazon-apigateway-integration:
          integrationSubtype: "StepFunctions-StartExecution"
          credentials:
            Fn::GetAtt: [StepFunctionsApiRole, Arn]
          requestParameters:
            Input: $request.body
            StateMachineArn: !Ref SecretScannerStateMachine
          payloadFormatVersion: "1.0"
          type: "aws_proxy"
          connectionType: "INTERNET"
          timeoutInMillis: 30000
  openapi: 3.0.1
  x-amazon-apigateway-importexport-version: "1.0"
Take note of the following line: Input: $request.body. I am only specifying the $request.body.
However, I need to be able to send the $request.body and $request.header.X-Hub-Signature-256. I need to send BOTH these values to my state machine as an input.
I have tried so many different ways. For example:
Input: " { body: $request.body, header: $request.header.X-Hub-Signature-256 }"
and
$request.body
$request.header.X-Hub-Signature-256
and
Input: $request
I get different errors each time, but this is the main one:
Warnings found during import: Unable to create integration for resource at path 'POST /github/secret': Invalid selection expression specified: Validation Result: warnings : [], errors : [Invalid source: $request specified for destination: Input].
Any help on how to pass multiple values would be so appreciated.

serverless-appsync-plugin 'pipeline' deployment error

I am using Serverless to deploy an AppSync API with a 'PIPELINE' resolver backed by Lambda functions. The plugin https://github.com/sid88in/serverless-appsync-plugin is used to deploy AppSync with the ability to use 'pipeline'. I followed the description in the documentation, but when I try to deploy it myself I get an error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource GraphQlDsmeInfo
functions:
  graphlql:
    handler: handler.meInfo
    name: meInfo

custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: meInfo
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
Could somebody help me configure serverless-appsync-plugin with the pipeline kind?
You need to specify the data source used in your function.
It seems you've deployed the handler as a Lambda function. If not, you should first have a separate serverless.yml config for your Lambda and deploy it. Then you need to attach this Lambda as an AppSync data source, so your AppSync config would look like this:
custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    dataSources:
      - type: AWS_LAMBDA
        name: Lambda_Name
        description: 'Lambda Description'
        config:
          lambdaFunctionArn: 'arn:aws:lambda:xxxx'
          serviceRoleArn: 'arn:aws:iam::xxxx'
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: Lambda_Name
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
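For completeness, a minimal serverless.yml for deploying the Lambda itself might look something like this (a sketch only; the service name and runtime are illustrative, not taken from the original answer):

# standalone config for the Lambda that backs the AppSync data source
service: me-info-service

provider:
  name: aws
  runtime: nodejs14.x

functions:
  meInfo:
    handler: handler.meInfo   # the meInfo export in handler.js
    name: meInfo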
There is an article which describes the process in details that might be useful: https://medium.com/hackernoon/running-a-scalable-reliable-graphql-endpoint-with-serverless-24c3bb5acb43

Google container engine cluster showing large number of dns errors in logs

I am using Google Container Engine and getting tons of DNS errors in the logs, like:
10:33:11.000 I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
And:
10:46:11.000 I0720 17:46:11.546237 1 dns.go:539] records:[0xc8203153b0], retval:[{10.71.240.1 0 10 10 false 30 0 /skydns/local/cluster/svc/default/kubernetes/3465623435313164}], path:[local cluster svc default kubernetes]
This is the payload:
{
  metadata: {
    severity: "ERROR"
    serviceName: "container.googleapis.com"
    zone: "us-central1-f"
    labels: {
      container.googleapis.com/cluster_name: "some-name"
      compute.googleapis.com/resource_type: "instance"
      compute.googleapis.com/resource_name: "fluentd-cloud-logging-gke-master-cluster-default-pool-f5547509-"
      container.googleapis.com/instance_id: "instanceid"
      container.googleapis.com/pod_name: "fdsa"
      compute.googleapis.com/resource_id: "someid"
      container.googleapis.com/stream: "stderr"
      container.googleapis.com/namespace_name: "kube-system"
      container.googleapis.com/container_name: "kubedns"
    }
    timestamp: "2016-07-20T17:33:11.000Z"
    projectNumber: ""
  }
  textPayload: "I0720 17:33:11.547023 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false"
  log: "kubedns"
}
Everything is working; the logs are just polluted with errors. Any ideas on why this is happening, or whether I should be concerned?
Thanks for the question, Aaron. Those error messages are actually just tracing/debugging output from the container and don't indicate that anything is wrong. The fact that they get written out as error messages has been fixed in Kubernetes at head and will be better in the next release of Kubernetes.