I have a YAML file as given below:
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key2: "value2"
    key3: "value3"
    key4: "value4"
    gitRepository:
      url: https://github.com/test-eng/test.git
    helmRepositories:
      - name: testplatform
        url: https://test-platform/charts
How can I read only part of it using Terraform? I want to read all the contents below ".spec.values":
key1: "value1"
key2: "value2"
key3: "value3"
key4: "value4"
gitRepository:
  url: https://github.com/test-eng/test.git
helmRepositories:
  - name: testplatform
    url: https://test-platform/charts
I tried the yamldecode function as given below:
flux_config = yamldecode((data.github_repository_file.my_file.content)[".spec.values"])
but it failed with the below error:
This value does not have any indices.
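The indexing fails because it is applied to the raw string before anything is decoded, and yamldecode does not take a JSONPath-style key anyway. The usual approach is to decode the whole document first and then index into the resulting object. A minimal sketch (reusing the data source name from the question):

locals {
  # decode the whole YAML document into a Terraform object
  flux_config = yamldecode(data.github_repository_file.my_file.content)

  # then navigate to the nested section with ordinary index syntax
  flux_values = local.flux_config["spec"]["values"]
}

local.flux_values then holds the map with key1 through key4, gitRepository and helmRepositories.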
I have this sts:
apiVersion: "apps/v1"
kind: "StatefulSet"
metadata:
  name: "mongo-benchmark"
spec:
  serviceName: mongo-benchmark-headless
  replicas: 1
  selector:
    matchLabels:
      app: "mongo-benchmark"
  template:
    metadata:
      labels:
        app: "mongo-benchmark"
    spec:
      containers:
        - name: "mongo-benchmark"
          image: "mongo:5"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: "MONGO_INITDB_ROOT_USERNAME"
              value: "admin"
            - name: "MONGO_INITDB_ROOT_PASSWORD"
              value: "admin"
          ports:
            - containerPort: 27017
              name: "mongo-port"
          volumeMounts:
            - name: "mongo-benchmark-data"
              mountPath: "/data/db"
      volumes:
        - name: "mongo-benchmark-data"
          persistentVolumeClaim:
            claimName: "mongo-benchmark-pvc"
Everything is deployed.
The root user's username and password are both admin.
But when I go to the pod terminal and execute these commands, I get:
$ mongo
> use admin
> db.auth("admin", "admin")
Error: Authentication failed.
0
I can't even read/write from/to other databases.
For example:
$ mongo
> use test
> db.col.findOne({})
uncaught exception: Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on test to execute command { find: \"col\", filter: {}, limit: 1.0, singleBatch: true, lsid: { id: UUID(\"30788b3e-48f0-4ff0-aaec-f17e20c67bde\") }, $db: \"test\" }",
    "code" : 13,
    "codeName" : "Unauthorized"
}
I don't know what I'm doing wrong. Does anyone know how to authenticate?
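For reference, the same credentials can also be supplied when starting the shell, which is equivalent to the use admin plus db.auth() sequence above:

# authenticate against the admin database at connection time
$ mongo -u admin -p admin --authenticationDatabase admin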
I'm trying to generate a Python dictionary from a Helm chart's values.yaml.
The values.yaml entry is:
replicator:
  targets:
    - namespace: ns1
      - { configmap: ns1cm1, entry: something.yml }
    - namespace: ns2
      - { configmap: ns2cm1, entry: ns2cm1.yml }
      - { configmap: ns2cm2, entry: ns2cm2.yml }
With that input I want to generate the following:
{
    'ns1': [
        {'ns1cm1': [ 'something.yml' ]},
    ],
    'ns2': [
        {'ns2cm1': [ 'ns2cm1.yml' ]},
        {'ns2cm2': [ 'ns2cm2.yml' ]}
    ],
}
The ConfigMap template is:
data:
  targets.py: |
    targets = {
    {{- range $target := .Values.replicator.targets }}
      '{{ $target.namespace }}': [
      {{- range $data := .Values.replicator.targets.$target }}
        { '$data.configmap': ['$data.entry'] },
      {{- end}}
      ]
    {{- end}}
    }
But I'm getting the error "Error: cannot load values.yaml: error converting YAML to JSON: did not find expected key", and I don't understand why.
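The values file itself is not valid YAML, which is what the "did not find expected key" error points at: inside a list item, the namespace key cannot be followed directly by a nested list that has no key of its own. A minimal sketch of one way to restructure it, giving each namespace's list its own key (configmaps is an assumed name):

replicator:
  targets:
    - namespace: ns1
      configmaps:
        - { configmap: ns1cm1, entry: something.yml }
    - namespace: ns2
      configmaps:
        - { configmap: ns2cm1, entry: ns2cm1.yml }
        - { configmap: ns2cm2, entry: ns2cm2.yml }

The template's inner loop then ranges over $target.configmaps instead of the invalid .Values.replicator.targets.$target, and the placeholders need {{ }} to be expanded:

data:
  targets.py: |
    targets = {
    {{- range $target := .Values.replicator.targets }}
      '{{ $target.namespace }}': [
      {{- range $data := $target.configmaps }}
        { '{{ $data.configmap }}': ['{{ $data.entry }}'] },
      {{- end }}
      ],
    {{- end }}
    }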
I must be doing something terribly wrong. I have a replicaset configured using the MongoDB community operator, deployed in GKE, and exposed via LoadBalancers.
This replicaset has 3 members. I have defined the replicaSetHorizons like so:
replicaSetHorizons:
  - mongo-replica: document-0.mydomain.com:30000
  - mongo-replica: document-1.mydomain.com:30001
  - mongo-replica: document-2.mydomain.com:30002
I then use mongosh from an external source (local computer outside of GKE) to connect:
mongosh "mongodb://<credentials>#document-0.mydomain.com:30000,document-1.mydomain.com:30001,document-2.mydomain.com:30002/admin?ssl=false&replicaSet=document"
I do not use SSL for now because I am testing this deployment. What I found is that mongosh always returns this error:
MongoNetworkError: getaddrinfo ENOTFOUND document-0.document-svc.mongodb.svc.cluster.local
Can someone explain to me what I am doing wrong? Why is my internal cluster name being given to mongosh to attempt the connection?
If I try to connect to a single member of the replicaset, the connection will succeed. If I run rs.conf(), I see the following (which looks correct??):
{
  _id: 'document',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: 'document-0.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-0.mydomain.com:30000' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 1,
      host: 'document-1.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-1.mydomain.com:30001' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    },
    {
      _id: 2,
      host: 'document-2.document-svc.mongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      horizons: { 'mongo-replica': 'document-2.mydomain.com:30002' },
      secondaryDelaySecs: Long("0"),
      votes: 1
    }
  ],
  protocolVersion: Long("1"),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId("62209784e8aacd8385db1609")
  }
}
The replicaSetHorizons feature does not work without SSL/TLS certificates.
Quoting from the Kubernetes Operator reference:
This method to use split horizons requires the Server Name Indication extension of the TLS protocol
In order to make this work, you need to include:
- a TLS certificate
- a TLS key
- a CA certificate
The TLS certificate must contain the DNS names of all your replica set members in the Subject Alternative Name (SAN) section.
There is a tutorial on the operator's GitHub pages. You need to complete all the steps; certificate issuance cannot be skipped.
Certificate resource (using the cert-manager.io CRD):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-manager-certificate
spec:
  secretName: mongodb-tls
  issuerRef:
    name: ca-issuer
    kind: Issuer
  duration: 87600h
  commonName: "*.document-svc.mongodb.svc.cluster.local"
  dnsNames:
    - "*.document-svc.mongodb.svc.cluster.local"
    - "document-0.mydomain.com"
    - "document-1.mydomain.com"
    - "document-2.mydomain.com"
MongoDBCommunity resource excerpt:
spec:
  type: ReplicaSet
  ...
  replicaSetHorizons:
    - mongo-replica: document-0.mydomain.com:30000
    - mongo-replica: document-1.mydomain.com:30001
    - mongo-replica: document-2.mydomain.com:30002
  security:
    tls:
      enabled: true
      certificateKeySecretRef:
        name: mongodb-tls
      caConfigMapRef:
        name: ca-config-map
The Secret mongodb-tls will be of type tls and contain ca.crt, tls.crt and tls.key fields, representing the Certificate Authority certificate, the TLS certificate and the TLS key respectively.
The ConfigMap ca-config-map will contain the ca.crt field only.
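A minimal sketch of creating that ConfigMap from the issued CA certificate (the file path and namespace are assumptions):

# create the ConfigMap the operator reads the CA certificate from
kubectl create configmap ca-config-map --from-file=ca.crt=ca.crt --namespace mongodb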
More info at: mongodb-operator-secure-tls
I'm trying to find a better way to solve this scenario than resorting to YAML inside a pulumi.apply call (which apparently has problems with preview).
The idea here is (using Azure Kubernetes) to create a secret and then make it available inside a pod (nginx pod here just for test purposes).
The current code works, but is there an API that I'm missing?
I started to mess around with:
const foobar = new k8s.storage.v1beta1.CSIDriver("testCSI", { ...
but I'm not really sure whether that is the right path and, if it is, what to put where to get the same effect.
Side note: no, I do not want to put secrets into environment variables. Although convenient, they leak into the GUI and logs and possibly more places.
const provider = new k8s.Provider("provider", {
  kubeconfig: config.kubeconfig,
  namespace: "default",
});

const secret = new keyvault.Secret("mysecret", {
  resourceGroupName: environmentResourceGroupName,
  vaultName: keyVaultName,
  secretName: "just-some-secret",
  properties: {
    value: administratorLogin,
  },
});

pulumi.all([environmentTenantId, keyVaultName, clusterManagedIdentityClientId])
  .apply(([environmentTenantId, keyVaultName, clusterManagedIdentityClientId]) => {
    let yammie = `apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${clusterManagedIdentityClientId}"
    keyvaultName: ${keyVaultName}
    cloudName: ""
    objects: |
      array:
        - |
          objectName: just-some-secret
          objectType: secret
    tenantId: ${environmentTenantId}`;
    const yamlConfigGroup = new k8s.yaml.ConfigGroup("test-secret",
      {
        yaml: yammie,
      },
      {
        provider: provider,
        dependsOn: [secret],
      }
    );
  });
const deployment = new k8s.apps.v1.Deployment(
  name,
  {
    metadata: {
      labels: appLabels,
    },
    spec: {
      replicas: 1,
      selector: { matchLabels: appLabels },
      template: {
        metadata: {
          labels: appLabels,
        },
        spec: {
          containers: [
            {
              name: name,
              image: "nginx:latest",
              ports: [{ name: "http", containerPort: 80 }],
              volumeMounts: [
                {
                  name: "secrets-store01-inline",
                  mountPath: "/mnt/secrets-store",
                  readOnly: true,
                },
              ],
            },
          ],
          volumes: [
            {
              name: "secrets-store01-inline",
              csi: {
                driver: "secrets-store.csi.k8s.io",
                readOnly: true,
                volumeAttributes: { secretProviderClass: "azure-kvname-system-msi" },
              },
            },
          ],
        },
      },
    },
  },
  {
    provider: provider,
  }
);
SecretProviderClass is a CustomResource, which isn't typed because the fields can be anything you want.
const secret = new k8s.apiextensions.CustomResource("cert", {
  apiVersion: "secrets-store.csi.x-k8s.io/v1",
  kind: "SecretProviderClass",
  metadata: {
    namespace: "kube-system",
  },
  spec: {
    provider: "azure",
    secretObjects: [{
      data: [{
        objectName: cert.certificate.name,
        key: "tls.key",
      }, {
        objectName: cert.certificate.name,
        key: "tls.crt",
      }],
      secretName: "ingress-tls-csi",
      type: "kubernetes.io/tls",
    }],
    parameters: {
      usePodIdentity: "true",
      keyvaultName: cert.keyvault.name,
      objects: pulumi.interpolate`array:\n  - |\n    objectName: ${cert.certificate.name}\n    objectType: secret\n`,
      tenantId: current.then(config => config.tenantId),
    },
  },
}, { provider: k8sCluster.k8sProvider })
Note: the objects array might work with JSON.stringify, but I haven't yet tried that.
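A sketch of what that could look like (untested, as the note says; JSON is a subset of YAML, so the provider should parse it the same way; objectsParam is a hypothetical local):

// hypothetical: build the objects parameter from plain data instead of
// hand-written YAML; each array element is itself a string that the
// provider parses as YAML
const objectsParam = pulumi.all([cert.certificate.name]).apply(([name]) =>
  JSON.stringify({
    array: [JSON.stringify({ objectName: name, objectType: "secret" })],
  }));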
If you want to get strong typing for a CRD, you can use crd2pulumi.
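For example (a sketch; the CRD file name and output directory are assumptions):

# generate strongly typed Node.js classes from the CRD definition
crd2pulumi --nodejsPath ./crds secretproviderclass-crd.yaml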
I'm using Prometheus JMX Exporter to monitor Kafka. I've defined the following pattern rules in the JMX config file:
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*, topic=(.+)><>Count
  name: kafka_server_$1_$2_total
  labels:
    topic: "$3"
- pattern: kafka.server<type=(.+), name=(.+)PerSec\w*><>Count
  name: kafka_server_$1_$2_total
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    topic: "$4"
    partition: "$5"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+), partition=(.*)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
    partition: "$4"
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    topic: "$3"
  type: COUNTER
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
    broker: "$4:$5"
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
  labels:
    clientId: "$3"
- pattern: kafka.server<type=(.+), name=(.+)><>(Count|Value)
  name: kafka_server_$1_$2
Now I'm having the following issue. When I send data to the topic in this way:
/bin/kafka-console-producer.sh --broker-list kafka-hostname:9092 --topic test1
The counter of the metric kafka_server_brokertopicmetrics_bytesin_total increases correctly.
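For reference, that metric is produced by the first rule above: the broker publishes a per-topic MBean whose Count attribute the rule rewrites (a sketch, assuming lowercaseOutputName: true is set elsewhere in the config, since the observed name is lowercase):

# MBean published by the broker for topic test1 (attribute: Count)
kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=test1
# the first rule captures $1=BrokerTopicMetrics, $2=BytesIn, $3=test1, yielding
kafka_server_brokertopicmetrics_bytesin_total{topic="test1"}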
When I try to send data by using the following code:
"use strict";
const envs = process.env;
const options = {
"metadata.broker.list": "kafka-hostname:9092",
"group.id": "kafka1",
topic: "test1",
key: "testKey"
};
const kafkesque = require("untubo")(options);
let count = 0;
const interval = setInterval(function() {
kafkesque.push({ hello: "world", count });
console.log("sent", count);
count++;
}, 500);
process.once("SIGINT", function() {
clearInterval(interval);
console.log("closing");
kafkesque.stop(() => {
console.log("closed");
});
});
In this case the metric doesn't change at all, but I can receive the message in the consumer. I think something is not configured properly in the pattern. Do you have any idea?