Kubectl rollout restart gets error: Unable to decode - kubernetes

When I want to restart a deployment with the following command: kubectl rollout restart -n ind-iv -f mattermost-installation.yml, it returns an error:
unable to decode "mattermost-installation.yml": no kind "Mattermost" is registered for version "installation.mattermost.com/v1beta1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
The yml file looks like this:
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost-iv                  # Choose the desired name
spec:
  size: 1000users                      # Adjust to your requirements
  useServiceLoadBalancer: true         # Set to true to use AWS or Azure load balancers instead of an NGINX controller.
  ingressName: *************           # Hostname used for Ingress, e.g. example.mattermost-example.com. Required when using an Ingress controller. Ignored if useServiceLoadBalancer is true.
  ingressAnnotations:
    kubernetes.io/ingress.class: nginx
  version: 6.0.0
  licenseSecret: ""                    # Name of a Kubernetes secret that contains the Mattermost license. Required only for enterprise installation.
  database:
    external:
      secret: db-credentials           # Name of a Kubernetes secret that contains the connection string to the external database.
  fileStore:
    external:
      url: **********                  # External File Storage URL.
      bucket: **********               # File Storage bucket name to use.
      secret: file-store-credentials
  mattermostEnv:
    - name: MM_FILESETTINGS_AMAZONS3SSE
      value: "false"
    - name: MM_FILESETTINGS_AMAZONS3SSL
      value: "false"
Does anybody have an idea?
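For context, kubectl rollout restart only operates on workload resources that support the rollout subresource (Deployments, DaemonSets, StatefulSets), which is why pointing it at the Mattermost custom resource fails to decode. A minimal sketch of restarting the operator-managed workload directly instead; the Deployment name mattermost-iv is an assumption, check what the operator actually created:
kubectl -n ind-iv get deployments
kubectl -n ind-iv rollout restart deployment/mattermost-iv   # hypothetical name; use the Deployment listed above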

Related

validation error in config.yml of kibana kubernetes

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
This is my config.yml file. When I try to create this project, I get this error:
error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
I can't get rid of the error, even after removing the space at line 13, column 17.
The yml content can be put directly on multiple lines, formatted like real YAML. Take a look at the following example:
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200
This works when put in a ConfigMap, and it should also work when provided to a Helm chart (depending on how the Helm templates are written).
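As a quick sanity check (a minimal sketch, assuming the corrected ConfigMap is saved as configmap.yml), a client-side dry run will surface YAML-to-JSON conversion errors like the one above without touching the cluster:
kubectl apply --dry-run=client -f configmap.yml
# if the YAML parses cleanly, apply it for real
kubectl apply -f configmap.yml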

How to add new cluster in ArgoCD (use config file of Rancher)? - the server has asked for the client to provide credentials

I want to add a new cluster in addition to the default cluster on ArgoCD, but when I add it, I get an error:
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
I use the command argocd cluster add cluster-name.
I am using the kubeconfig file downloaded from Rancher.
Thanks!
I solved my problem but welcome other solutions from everyone :D
First, create a secret with the following content:
apiVersion: v1
kind: Secret
metadata:
  namespace: argocd                 # same namespace as the argocd app
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: cluster-name                # Get from clusters - name field in config k8s file.
  server: https://mycluster.com     # Get from clusters - name - cluster - server field in config k8s file.
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
bearerToken - Get from the users -> user -> token field in the kubeconfig file.
caData - Get from the clusters -> name -> cluster -> certificate-authority-data field in the kubeconfig file.
Then, apply this yaml file and the new cluster will be automatically added to ArgoCD.
I found the solution on github:
https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80
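A minimal usage sketch, assuming the manifest above is saved as mycluster-secret.yaml: apply it, then confirm that ArgoCD picked the cluster up:
kubectl apply -f mycluster-secret.yaml
argocd cluster list   # the new server URL should appear alongside the in-cluster entry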

How to configure microk8s kubernetes to use private container's in https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The "Secure registry" portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and that microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
        - image: johngrabner/py_blaw_service:v0.3.10
          name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this image from a console using a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make Docker Hub work, since public containers work. Within this file, I found:
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]

  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
    spec:
      containers:
        - image: johngrabner/py_blaw_service:v0.3.10
          name: py-transcribe-service
      imagePullSecrets:
        - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
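For reference, the complete Deployment from the question with the imagePullSecrets reference merged in (assuming the secret created above is named regcred and lives in the same namespace) would look roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
        - image: johngrabner/py_blaw_service:v0.3.10
          name: py-transcribe-service
      imagePullSecrets:
        - name: regcred   # secret created with microk8s kubectl create secret ... above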

How to distribute a file across a Presto cluster in Kubernetes

I'm new to Kubernetes. We have a Presto (Starburst) cluster deployed in Kubernetes, and we are trying to implement an SSL certificate for the Presto cluster.
Based on the URL below, I have created a keystore (on my local machine) and have to populate this keystore path into 'http-server.https.keystore.path':
https://docs.starburstdata.com/latest/security/internal-communication.html
However, this file has to be distributed across the cluster. If I enter the local path, Kubernetes throws a 'file not found' error. Could you please let me know how to distribute this across the Presto cluster in Kubernetes?
I have tried creating the keystore as a secret and mounting it to a volume.
kubectl create secret generic presto-keystore --from-file=./keystore.jks
kind: Presto
metadata:
  name: stg-presto
spec:
  clusterDomain: cluster.local
  nameOverride: stg-presto
  additionalVolumes:
    - path: /jks
      volume:
        secret:
          secretName: presto-keystore
  additionalJvmConfigProperties: |
  image:
    name: xxxxx/presto
    pullPolicy: IfNotPresent
    tag: 323-e.8-k8s-0.20
  prometheus:
    enabled: true
    additionalRules:
      - pattern: 'presto.execution<name=TaskManager><>FailedTasks.TotalCount'
        name: 'failed_tasks'
        type: COUNTER
  service:
    type: NodePort
    name: stg-presto
  memory:
    nodeMemoryHeadroom: 30Gi
    xmxToTotalMemoryRatio: 0.9
    heapHeadroomPerNodeRatio: 0.3
    queryMaxMemory: 1Pi
    queryMaxTotalMemoryPerNodePoolFraction: 0.333
  coordinator:
    cpuLimit: "5"
    cpuRequest: "5"
    memoryAllocation: "30Gi"
    image:
      pullPolicy: IfNotPresent
    additionalProperties: |
      http-server.http.enabled=false
      node.internal-address-source=FQDN
      http-server.https.enabled=true
      http-server.https.port=8080
      http-server.https.keystore.path=/jks/keystore.jks
      http-server.https.keystore.key=xxxxxxx
      internal-communication.https.required=true
      internal-communication.https.keystore.path=/jks/keystore.jks
      internal-communication.https.keystore.key=xxxxxxx
I also tried creating a config and mounting it as a volume, but I am still getting 'Caused by: java.io.FileNotFoundException: /jks/keystore.jks (No such file or directory)'.
Could you please let me know if I am missing anything.
Thanks
You can create a Secret or ConfigMap from your keystore, mount it as a volume, and then use that path in your files.
How to create and use a ConfigMap in k8s: here
How to configure a Secret in k8s: here
You can use both in a similar fashion in your Custom Resource, as in any other resource. I see an option of additionalVolumes and the documentation associated with it here.
You can create a secret in K8s and mount it within Presto deployment using additionalVolumes property.
Checkout documentation on additionalVolumes at https://docs.starburstdata.com/latest/kubernetes/presto_resource.html
Create a secret from a file:
kubectl create secret generic cluster-keystore --from-file=./docker.cluster.jks
Add the secret in the "additionalVolumes" section in the yaml: (per Karol's URL above)
additionalVolumes:
  - path: /jks
    volume:
      secret:
        secretName: "cluster-keystore"
Add the jks file to the coordinator "additionalProperties" section in your yaml:
coordinator:
  cpuRequest: 25
  cpuLimit: 25
  memoryAllocation: 110Gi
  additionalProperties: |
    http-server.https.enabled=true
    http-server.https.port=8443
    http-server.https.keystore.path=/jks/docker.cluster.jks
    http-server.https.keystore.key=xxxxxxxxxxx
    http-server.authentication.type=PASSWORD
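Once the Presto resource is applied, a quick way to confirm the keystore actually reached the pods (a sketch; the coordinator pod name below is hypothetical and depends on your release):
kubectl get pods                               # find the coordinator pod name
kubectl exec <coordinator-pod> -- ls -l /jks   # should list docker.cluster.jks mounted from the secret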

stable/prometheus-operator - adding persistent grafana dashboards

I am trying to add a new dashboard to the helm chart below:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a config map to the namespace like the below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be picked up and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like -
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
That doesn't work either.
Does anyone have any experience with adding your own dashboards to this helm chart? I really am at my wit's end.
To sum up:
For sidecar you need only one option set to true - grafana.sidecar.dashboards.enabled
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set a correct label on your configmap. By default, grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana. Moreover, every configmap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, stored in a configmap.
UPDATE:
January 2021:
The Prometheus operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and Helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
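To confirm the sidecar will see the new dashboard (a small sketch using only the objects created above), list the configmaps that carry the grafana_dashboard label; the sidecar watches configmaps with this label:
kubectl -n monitoring get cm -l grafana_dashboard
# grafana-mongodb-overview should appear in the output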
You have to:
define your dashboard json as a configmap (as you have done, but see below for an easier way)
define a provider, to tell Grafana where to load the dashboard from
map the two together
from values.yml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: application
        orgId: 1
        folder: "Application Metrics"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/application
Now the application config map should create files in this directory in the pod, and as has been discussed the sidecar should load them into an Application Metrics folder, seen in the GUI.
That probably answers your issue as written, but as long as your dashboards aren't too big, using Kustomize means you can have the json on disk without needing to include the json in another file, thus:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
  - name: application
    files:
      - grafana-dashboards/application/api01.json
      - grafana-dashboards/application/api02.json
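A usage sketch, assuming the kustomization above sits next to a grafana-dashboards/ directory: build and apply it, and the generated application ConfigMap lands in the monitoring namespace:
kubectl kustomize .    # inspect the generated ConfigMap first
kubectl apply -k .     # apply it to the cluster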
For completeness' sake, you can also load dashboards from a URL or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: kafka
        orgId: 1
        folder: "KUDO Kafka"
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/kafka
      - name: nginx
        orgId: 1
        folder: Nginx
        type: file
        disableDeletion: true
        editable: false
        options:
          path: /var/lib/grafana/dashboards/nginx
This creates two new folders containing a dashboard each, from external sources; or, if you point this at your own git repo, you de-couple your dashboard commits from your deployment.
If you do not change the settings in the helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator