Helm to install Fluentd-Cloudwatch on Amazon EKS - kubernetes

While trying to install "incubator/fluentd-cloudwatch" using Helm on Amazon EKS, and setting the user to root, I am getting the response below.
Command used :
helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1,rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"
Error:
Error: YAML parse error on fluentd-cloudwatch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 38: did not find expected ',' or ']'
If we do not set the user to root, then by default fluentd runs as the "fluent" user and its log shows:
[error]: unexpected error error_class=Errno::EACCES error=#<Errno::EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos>

Based on this, it looks like Helm is trying to convert eu-west-1,rbac.create=true into a single JSON field, and the extra comma (,) there is causing it to fail.
And if you look at the values.yaml you'll see that awsRegion and rbac.create are separate options, so --set awsRegion=eu-west-1 --set rbac.create=true should fix the first error.
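For example, splitting the original command into separate --set flags would look like this:
helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=eu-west-1 \
  --set rbac.create=true \
  --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"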
With respect to the /var/log/... Permission denied error, you can see here that it's mounted as a hostPath, so if you do a:
# 777 means read/write/execute for user/group/world
$ sudo chmod 777 /var/log
on all your nodes, the error should go away. Note that you need to do it on all the nodes because your pod can land anywhere in your cluster.
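If you have SSH access to the nodes, a quick (hypothetical) way to apply it everywhere might be a loop like this, assuming NODE_IPS holds your worker node addresses:
for node in $NODE_IPS; do
  ssh ec2-user@"$node" 'sudo chmod 777 /var/log'
done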

Download and update values.yaml as below. The changes are to the awsRegion, rbac.create and extraVars fields.
annotations: {}
awsRegion: us-east-1
awsRole:
awsAccessKeyId:
awsSecretAccessKey:
logGroupName: kubernetes
rbac:
  ## If true, create and use RBAC resources
  create: true
  ## Ignored if rbac.create is true
  serviceAccountName: default
# Add extra environment variables if specified (must be specified as a single line object and be quoted)
extraVars:
  - "{ name: FLUENT_UID, value: '0' }"
Then run the command below to set up fluentd on the Kubernetes cluster and send logs to CloudWatch Logs.
$ helm install --name fluentd -f .\fluentd-cloudwatch-values.yaml incubator/fluentd-cloudwatch
I did this and it worked for me; logs were sent to CloudWatch Logs. Also make sure your EC2 nodes have an IAM role with appropriate permissions for CloudWatch Logs.
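For example, one quick (if broad) way to grant this is to attach the AWS-managed CloudWatchLogsFullAccess policy to the node instance role; the role name below is a placeholder for your cluster's actual node role:
$ aws iam attach-role-policy \
    --role-name <your-eks-node-instance-role> \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess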

IBM-MQ kubernetes helm chart ImagePullBackOff

I want to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I've found this link and did everything as described in the guide: https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev.
But the pod is not starting with the error: "ImagePullBackOff". What could be the problem? My helmfile:
...
repositories:
  - name: ibm-stable-charts
    url: https://raw.githubusercontent.com/IBM/charts/master/repo/stable

releases:
  - name: ibm-mq
    namespace: test
    createNamespace: true
    chart: ibm-stable-charts/ibm-mqadvanced-server-dev
    values:
      - ./ibm-mq.yaml
ibm-mq.yaml:
---
license: accept
security:
  initVolumeAsRoot: true/false # I'm not sure about this, I added it just because it wasn't working.
                               # Neither of the options works either.
queueManager:
  name: "QM1"
  dev:
    secret:
      adminPasswordKey: adminPassword
      name: mysecret
I've created the secret and it seems like it's working, so the problem is not in the secret.
The full error I'm getting:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = Unknown desc = Error response from daemon: manifest for ibmcom/mq:9.1.5.0-r1 not found: manifest unknown: manifest unknown
I'm using Helm 3, helmfile v0.141.0, and kubectl 1.22.2.
I will leave some things as an exercise to you, but here is what that tutorial says:
helm repo add ibm-stable-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
You don't really need to do this, since you are using helmfile.
Then they say to issue:
helm install --name foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --tls
which is targeted towards helm2 (because of those --name and --tls), but that is irrelevant to the problem.
When I install this, I get the same issue:
Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/ibmcom/mq:9.1.5.0-r1": failed to resolve reference "docker.io/ibmcom/mq:9.1.5.0-r1": docker.io/ibmcom/mq:9.1.5.0-r1: not found
I went to their docker.io page and indeed such a tag, 9.1.5.0-r1, is not present.
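As a side note, you can also check the available tags from the command line; this is a sketch assuming the public Docker Hub API and that curl and jq are installed:
curl -s 'https://hub.docker.com/v2/repositories/ibmcom/mq/tags?page_size=100' | jq -r '.results[].name'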
OK, can we update the image then?
helm show values ibm-stable-charts/ibm-mqadvanced-server-dev
reveals:
image:
  # repository is the container repository to use, which must contain IBM MQ Advanced for Developers
  repository: ibmcom/mq
  # tag is the tag to use for the container repository
  tag: 9.1.5.0-r1
good, that means we can change it via an override value:
helm install foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --set image.tag=latest # or any other tag
so this works.
How to set up that tag in helmfile is left as an exercise to you, but it's pretty trivial.
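For completeness, one way to do it (a sketch based on the image values shown above) is to add the override to the ibm-mq.yaml values file referenced by the helmfile:
image:
  tag: latest # or any other tag that actually exists for ibmcom/mq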

Kubernetes Dashboard unknown field "seccompProfile" and error 503

I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install the Kubernetes Dashboard.
I followed this link:
https://github.com/kubernetes/dashboard#getting-started
And I executed my first command in Powershell as an administrator:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
I get the following error:
error: error validating "https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data: ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in io.k8s.api.core.v1.PodSecurityContext;
if you choose to ignore these errors, turn validation off with --validate=false
So I tried the same command with --validate=false.
Then it ran with no errors, and I execute:
kubectl proxy
I got an access token using:
kubectl describe secret -n kube-system
and I try to access the link provided in the guide:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
I get the following swagger response:
The error indicates that your cluster version is not compatible with seccompProfile.type: RuntimeDefault. In this case, don't apply the dashboard spec (https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml) right away; instead, download it and comment out the following lines in the spec:
...
spec:
  # securityContext:
  #   seccompProfile:
  #     type: RuntimeDefault
...
Then apply the updated spec with kubectl apply -f recommended.yaml.
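Put together, the workflow might look like this (a sketch assuming curl is available and the file is saved locally as recommended.yaml):
curl -LO https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# edit recommended.yaml and comment out the securityContext lines shown above
kubectl apply -f recommended.yaml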

Helm chart ignoring config file or given key value

I am not sure if the issue is related to promtail (helm chart used) or to helm itself.
I want to update the default Loki host value in the chart to a local host used on Kubernetes, so I tried this:
helm upgrade --install --namespace loki promtail grafana/promtail --set client.url=http://loki:3100/loki/api/v1/push
And with a custom values.yaml like this:
helm upgrade --install --namespace loki promtail grafana/promtail -f promtail.yaml
But it still uses the wrong default URL:
level=warn ts=2021-10-08T11:51:59.782636939Z caller=client.go:344 component=client host=loki-gateway msg="error sending batch, will retry" status=-1 error="Post \"http://loki-gateway/loki/api/v1/push\": dial tcp: lookup loki-gateway on 10.43.0.10:53: no such host"
If I inspect the config.yaml it's using, it doesn't use the internal URL I gave during the installation:
root@promtail-69hwg:/# cat /etc/promtail/promtail.yaml
server:
  log_level: info
  http_listen_port: 3101
client:
  url: http://loki-gateway/loki/api/v1/push
Any ideas, or anything I am missing? Thanks.
I don't think client.url is a value in the helm chart, but rather one inside a config file that your application is using.
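One way to confirm which values the chart actually exposes is to inspect its defaults (assuming the grafana repo is already added):
helm show values grafana/promtail | grep -A 2 lokiAddress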
Try setting config.lokiAddress:
config:
  lokiAddress: http://loki-gateway/loki/api/v1/push
It gets templated into the config file I mentioned.
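So the override from the question would become something like this (a sketch, assuming your Loki service is reachable at loki:3100 inside the cluster):
helm upgrade --install --namespace loki promtail grafana/promtail \
  --set config.lokiAddress=http://loki:3100/loki/api/v1/push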

Bitnami Redis on Kubernetes Authentication Failure with Existing Secret

I'm trying to install Redis in a Kubernetes environment with the Bitnami Redis Helm chart. I want to use a defined password rather than a randomly generated one, but I'm getting the error below when I try to connect to the Redis master or replicas with redis-cli.
I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Warning: AUTH failed
I created a Kubernetes secret like this.
---
apiVersion: v1
kind: Secret
metadata:
  name: redis-secret
  namespace: redis
type: Opaque
data:
  redis-password: YWRtaW4xMjM0Cg==
And in the values.yaml file I updated the auth spec as below.
auth:
  enabled: true
  sentinel: false
  existingSecret: "redis-secret"
  existingSecretPasswordKey: "redis-password"
  usePasswordFiles: false
If I don't define the existingSecret field and use a randomly generated password, then I can connect without an issue. I also tried AUTH admin1234 after the Warning: AUTH failed error, but it didn't work either.
You can achieve it in a much simpler way, i.e. by running:
$ helm install my-release \
  --set auth.password="admin1234" \
  bitnami/redis
This will create the "my-release-redis" secret, so when you run:
$ kubectl get secrets my-release-redis -o yaml
you'll see it contains your password, already base64-encoded:
apiVersion: v1
data:
  redis-password: YWRtaW4xMjM0
kind: Secret
...
In order to get your password, you need to run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
This will set and export REDIS_PASSWORD environment variable containing your redis password.
And then you may run your redis-client pod:
kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.4-debian-10-r13 --command -- sleep infinity
which will set REDIS_PASSWORD environment variable within your redis-client pod by assigning to it the value of REDIS_PASSWORD set locally in the previous step.
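From inside that pod you should then be able to authenticate; for example (assuming the default master service name for a release called my-release):
redis-cli -h my-release-redis-master -a "$REDIS_PASSWORD"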
The issue was how I encoded the password with the echo command: there was a newline character at the end of my password. Using printf rather than echo produced a different (correct) result.
printf admin1234 | base64
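To see the difference, compare the two encodings; the trailing Cg== in the first one is the encoded newline that echo appends:
$ echo admin1234 | base64
YWRtaW4xMjM0Cg==
$ printf admin1234 | base64
YWRtaW4xMjM0
$ echo -n admin1234 | base64 # -n also suppresses the trailing newline
YWRtaW4xMjM0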

Zalando postgres operator issue with config

Getting the issue below with the Zalando Postgres operator. The default manifests are applied on the Kubernetes cluster (hosted on-prem) as provided here:
https://github.com/zalando/postgres-operator/tree/4a099d698d641b80c5aeee5bee925921b7283489/manifests
I verified whether there are any issues in the operator names, the configmaps, or the service-account definitions, but couldn't figure out much.
kubectl logs -f postgres-operator-944b9d484-9h796
2019/10/24 16:31:02 Spilo operator v1.2.0
2019/10/24 16:31:02 Fully qualified configmap name: default/postgres-operator
panic: configmaps "postgres-operator" is forbidden: User "system:serviceaccount:default:zalando-postgres-operator" cannot get resource "configmaps" in API group "" in the namespace "default"
goroutine 1 [running]:
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initOperatorConfig(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:102 +0x687
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initController(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:253 +0x825
github.com/zalando/postgres-operator/pkg/controller.(*Controller).Run(0xc0004a6000, 0xc000464660, 0xc000047a70)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:348 +0x2f
main.main()
/workspace/cmd/main.go:82 +0x256
Any help here?
I have set up postgres-operator in my environment and it is working perfectly in my case. Please make sure that you have followed these steps:
Clone postgres-operator repo:
$ git clone https://github.com/zalando/postgres-operator
$ cd postgres-operator
The operator from Zalando can be configured in two ways: using a classical configmap, or using a CRD configuration object, which is more powerful. Either way, first create the service account and RBAC resources; the forbidden error in your logs indicates the operator's service account lacks exactly these permissions:
$ kubectl create -f manifests/operator-service-account-rbac.yaml
serviceaccount/zalando-postgres-operator created
clusterrole.rbac.authorization.k8s.io/zalando-postgres-operator created
clusterrolebinding.rbac.authorization.k8s.io/zalando-postgres-operator created
In order to use the CRD config, you must change a value in the postgres-operator itself. Change the last few lines in manifests/postgres-operator.yaml so they read:
env:
  # provided additional ENV vars can overwrite individual config map entries
  #- name: CONFIG_MAP_NAME
  #  value: "postgres-operator"
  # In order to use the CRD OperatorConfiguration instead, uncomment these lines and comment out the two lines above
  - name: POSTGRES_OPERATOR_CONFIGURATION_OBJECT
    value: postgresql-operator-default-configuration
The service account name given in that file does not match the one created by the operator service account definition, so you must adjust it and create the actual config object referenced. It lives in manifests/postgresql-operator-default-configuration.yaml. These are the values that must be set:
configuration:
  kubernetes:
    pod_environment_configmap: postgres-pod-config
    pod_service_account_name: zalando-postgres-operator
Let's create the operator and its configuration.
$ kubectl create -f manifests/postgres-operator.yaml
deployment.apps/postgres-operator created
Please wait a few minutes before typing the following command:
$ kubectl create -f manifests/postgresql-operator-default-configuration.yaml
operatorconfiguration.acid.zalan.do/postgresql-operator-default-configuration created
Now you will be able to see your pod running:
$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-599fd68d95-c8z67   1/1     Running   0          21m
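To verify the setup end to end, you could additionally create a test cluster from the minimal example shipped with the repo (a sketch, assuming the manifest is still present at this path):
$ kubectl create -f manifests/minimal-postgres-manifest.yaml
$ kubectl get postgresql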
You can also refer to this article; I hope it helps you.