I am trying to run ReportPortal in my minikube cluster:
# Delete stuff from last try
minikube delete
minikube start --driver=docker
minikube addons enable ingress
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
mv v5 reportportal/
helm dependency update
helm install . --generate-name
→ Error: failed pre-install: warning: Hook pre-install reportportal/templates/migrations-job.yaml failed: Job.batch "chart-1601647169-reportportal-migrations" is invalid: spec.template.spec.containers[0].env[4].valueFrom.secretKeyRef.name: Invalid value: "": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Here is the chart: https://github.com/reportportal/kubernetes/tree/master/reportportal/v5
What could be wrong?
As mentioned here
Before you deploy ReportPortal you should have installed all its requirements. Their versions are described in requirements.yaml
You should also specify correct PostgreSQL and RabbitMQ addresses and ports in values.yaml
rabbitmq:
  SecretName: ""
  installdep:
    enable: false
  endpoint:
    address: <rabbitmq_chart_name>-rabbitmq-ha.default.svc.cluster.local
    port: 5672
    user: rabbitmq
    apiport: 15672
    apiuser: rabbitmq

postgresql:
  SecretName: ""
  installdep:
    enable: false
  endpoint:
    cloudservice: false
    address: <postgresql_chart_name>-postgresql.default.svc.cluster.local
    port: 5432
    user: rpuser
    dbName: reportportal
    password:
I checked here and it points to the PostgreSQL secret name in values.yaml.
The solution is to change postgresql.SecretName from "" to the name of your PostgreSQL secret and install again. You can change it in values.yaml or with --set, which specifies overrides on the command line.
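For example, a minimal sketch of the install with command-line overrides (the secret names below are placeholders for whatever secrets you actually created for PostgreSQL and RabbitMQ):
helm install . --generate-name \
  --set postgresql.SecretName=reportportal-postgresql \
  --set rabbitmq.SecretName=reportportal-rabbitmq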
Error:
Steps:
I have downloaded the Helm chart from https://github.com/apache/airflow/releases/tag/helm-chart/1.8.0 (under Assets, "Source code (zip)").
Added the following extra params to the default values.yaml:
createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false
dags:
  gitSync:
    enabled: true
    # all data....
airflow:
  extraEnv:
    - name: AIRFLOW__API__AUTH_BACKEND
      value: "airflow.api.auth.backend.basic_auth"
ingress:
  web:
    tls:
      enabled: true
      secretName: wildcard-tls-cert
    host: "mydns.com"
    path: "/airflow"
I also need the KubernetesExecutor, hence I am using https://github.com/airflow-helm/charts/blob/main/charts/airflow/sample-values-KubernetesExecutor.yaml as k8sExecutor.yaml.
Installing using the following command:
helm install my-airflow airflow-8.6.1/airflow/ --values values.yaml --values k8sExecutor.yaml -n mynamespace
It worked when I tried it the following way:
helm repo add airflow-repo https://airflow-helm.github.io/charts
helm install my-airflow airflow-repo/airflow --version 8.6.1 --values k8sExecutor.yaml --values values.yaml
values.yaml here contains only the overridden parameters.
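If you still prefer installing from a local copy of the chart, one option (a sketch, not part of the original setup) is to pull exactly that chart version from the repo and install from the unpacked directory:
helm pull airflow-repo/airflow --version 8.6.1 --untar
helm install my-airflow ./airflow --values k8sExecutor.yaml --values values.yaml -n mynamespace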
I am running a k3s cluster.
I have used velero helm installation:
helm install vmware-tanzu/velero --namespace velero-minio -f helm-custom-values-minio.yaml --generate-name --create-namespace
and
helm install vmware-tanzu/velero --namespace velero-aws -f helm-custom-values-aws.yaml --generate-name --create-namespace
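(Both installs assume the Velero chart repository was added beforehand, e.g. with the usual repo URL:)
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts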
Custom helm values:
helm-custom-values-minio.yaml
configuration:
  provider: aws
  backupStorageLocation:
    bucket: k3s-backup
    name: minio
    default: false
    config:
      region: minio
      s3ForcePathStyle: true
      s3Url: http://10.10.5.15:9009
  volumeSnapshotLocation:
    name: minio
    config:
      region: minio
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=minioadm
      aws_secret_access_key=<password>
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true
and helm-custom-values-aws.yaml
configuration:
  provider: aws
  backupStorageLocation:
    name: aws-s3
    bucket: k3s-backup-aws
    default: false
    provider: aws
    config:
      region: us-east-1
      s3ForcePathStyle: false
  volumeSnapshotLocation:
    name: aws-s3
    provider: aws
    config:
      region: us-east-1
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=A..............MJ
      aws_secret_access_key=qZ79rA/yVUq2c................xnIA
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true
velero backup jobs:
velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --default-volumes-to-restic=true --storage-location minio -n velero-minio
velero create backup k3s-mongodb-restic-aws --include-namespaces mongodb --default-volumes-to-restic=true --storage-location aws-s3 -n velero-aws
....
They all failed:
Restic Backups:
Failed:
mongodb/mongodb-cluster-0: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
mongodb/mongodb-cluster-1: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
time="2022-10-17T17:42:32Z" level=error msg="Error backing up item" backup=velero-minio/k3s-mongodb-restic-minio error="pod volume backup failed: running Restic backup, stderr=Fatal: unable to open config file: Stat: The Access Key Id you provided does not exist in our records.\nIs there a repository at the following location?\ns3:http://10.10.5.15:9009/k3s-backup/restic/mongodb\n: exit status 1" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:199" error.function="github.com/vmware-tanzu/velero/pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:417" name=mongodb-cluster-0
...
velero get backup-locations -n velero-aws
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
aws-s3 aws k3s-backup-aws Available 2022-10-17 14:12:46 -0400 EDT ReadWrite
...
velero get backup-locations -n velero-minio
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
minio aws k3s-backup Available 2022-10-17 14:16:25 -0400 EDT ReadWrite
The velero backup part works without errors, but restic fails for all my jobs (mongodb is just one example).
It looks like restic can't create snapshots for my NFS PVCs.
What am I doing wrong?
It looks like Velero doesn't work well with multiple installations; at least the restic part fails (in my case, two instances in the namespaces velero-aws and velero-minio).
So I installed only one instance of Velero, configured to work with MinIO.
Removed --default-volumes-to-restic=true from the backup job configuration.
Used opt-in pod volume backup with the restic integration.
Each pod that has a PVC volume needs to be annotated, like the following:
kubectl -n mongodb annotate pod/mongodb-cluster-0 backup.velero.io/backup-volumes=logs-volume,data-volume
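With the opt-in approach, the backup command is the same as before minus the restic default flag, e.g. (a sketch based on the earlier job; adjust names and namespaces):
velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --storage-location minio -n velero-minio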
I have not tried velero-pvc-watcher; it probably works well too.
Now backup works with no errors.
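To verify, the backup status and per-volume restic results can be checked with something like:
velero backup describe k3s-mongodb-restic-minio --details -n velero-minio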
I am attempting to deploy JupyterHub into my Kubernetes cluster, as I want to have it integrated with a self-hosted GitLab instance running in the same cluster.
I have followed the steps on the GitLab documentation page as shown here.
However, when I attempt to install this Helm chart with the command
helm install jupyterhub/jupyterhub --namespace jupyter --version=1.2.0 --values values.yaml --generate-name
... I get the below errors.
- (root): Additional property gitlab is not allowed
- hub.extraConfig: Invalid type. Expected: object, given: string
- ingress: Additional property host is not allowed
I am using the Helm chart from https://github.com/jupyterhub/helm-chart. See below for my values.yaml.
values.yaml:
#-----------------------------------------------------------------------------
# The gitlab and ingress sections must be customized!
#-----------------------------------------------------------------------------
gitlab:
  clientId: <Your OAuth Application ID>
  clientSecret: <Your OAuth Application Secret>
  callbackUrl: http://<Jupyter Hostname>/hub/oauth_callback,
  # Limit access to members of specific projects or groups:
  # allowedGitlabGroups: [ "my-group-1", "my-group-2" ]
  # allowedProjectIds: [ 12345, 6789 ]

# ingress is required for OAuth to work
ingress:
  enabled: true
  host: <JupyterHostname>
  # tls:
  #   - hosts:
  #       - <JupyterHostname>
  #     secretName: jupyter-cert
  # annotations:
  #   kubernetes.io/ingress.class: "nginx"
  #   kubernetes.io/tls-acme: "true"

#-----------------------------------------------------------------------------
# NO MODIFICATIONS REQUIRED BEYOND THIS POINT
#-----------------------------------------------------------------------------
hub:
  extraEnv:
    JUPYTER_ENABLE_LAB: 1
  extraConfig: |
    c.KubeSpawner.cmd = ['jupyter-labhub']
    c.GitLabOAuthenticator.scope = ['api read_repository write_repository']

    async def add_auth_env(spawner):
        '''
        We set user's id, login and access token on single user image to
        enable repository integration for JupyterHub.
        See: https://gitlab.com/gitlab-org/gitlab-foss/issues/47138#note_154294790
        '''
        auth_state = await spawner.user.get_auth_state()
        if not auth_state:
            spawner.log.warning("No auth state for %s", spawner.user)
            return
        spawner.environment['GITLAB_ACCESS_TOKEN'] = auth_state['access_token']
        spawner.environment['GITLAB_USER_LOGIN'] = auth_state['gitlab_user']['username']
        spawner.environment['GITLAB_USER_ID'] = str(auth_state['gitlab_user']['id'])
        spawner.environment['GITLAB_USER_EMAIL'] = auth_state['gitlab_user']['email']
        spawner.environment['GITLAB_USER_NAME'] = auth_state['gitlab_user']['name']

    c.KubeSpawner.pre_spawn_hook = add_auth_env

auth:
  type: gitlab
  state:
    enabled: true

singleuser:
  defaultUrl: "/lab"
  image:
    name: registry.gitlab.com/gitlab-org/jupyterhub-user-image
    tag: latest
  lifecycleHooks:
    postStart:
      exec:
        command:
          - "sh"
          - "-c"
          - >
            git clone https://gitlab.com/gitlab-org/nurtch-demo.git DevOps-Runbook-Demo || true;
            echo "https://oauth2:${GITLAB_ACCESS_TOKEN}@${GITLAB_HOST}" > ~/.git-credentials;
            git config --global credential.helper store;
            git config --global user.email "${GITLAB_USER_EMAIL}";
            git config --global user.name "${GITLAB_USER_NAME}";
            jupyter serverextension enable --py jupyterlab_git

proxy:
  service:
    type: ClusterIP
The single-string form of hub.extraConfig was deprecated in chart version 0.6 in favor of a dict of named config snippets that don't conflict with each other. The new structure looks like this:
hub:
  extraConfig:
    myConfigName: |
      print("hi", flush=True)
Please use the following URL's info as a reference: issues/1009.
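Applied to the values in the question, that means keying the existing extraConfig string under a name of your choosing (gitlabAuth below is just an arbitrary label), for example:
hub:
  extraConfig:
    gitlabAuth: |
      c.KubeSpawner.cmd = ['jupyter-labhub']
      c.GitLabOAuthenticator.scope = ['api read_repository write_repository']
      # ... rest of the extraConfig block from the question ...
      c.KubeSpawner.pre_spawn_hook = add_auth_env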
Regarding the "- (root): Additional property gitlab is not allowed" and "- ingress: Additional property host is not allowed" errors, check the indentation, as it is critical in YAML files such as values.yaml or docker-compose.yml. Please use the following threads as a reference: Additional property {property} is not allowed and docker compose file not working.
I want to connect to Kubernetes using Ansible. I want to run some Ansible playbooks to create Kubernetes objects such as roles and rolebindings using the Ansible k8s module. I want to know whether the Ansible k8s module is a standard Kubernetes client that can use a kubeconfig in the same way as helm and kubectl.
Please let me know how to configure a kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter in the Ansible YAML file (it defaults to ~/.kube/config). For example:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: k8s_no_log
...
You can also make it a variable:
...
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '{{ k8s_kubeconfig }}'
...
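The variable itself can then live in the vars file the playbook already loads, e.g. in vars/main.yml (the value below is just an illustration):
# vars/main.yml
k8s_kubeconfig: ~/.kube/config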
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present
I've installed the DataDog agent on my Kubernetes cluster using the Helm chart (https://github.com/helm/charts/tree/master/stable/datadog).
This works very well except for one thing. I have a number of Redis containers that have passwords set. This seems to be causing issues for the DataDog agent because it can't connect to Redis without a password.
I would like to either disable monitoring Redis completely or somehow bypass the Redis authentication. If I leave it as is, I get a lot of error messages in the DataDog container logs, and the redisdb integration shows up in yellow in the DataDog dashboard.
What are my options here?
I am not a fan of Helm, but you can accomplish this in two ways:
via env vars: make use of the DD_AC_EXCLUDE variable to exclude the Redis containers, e.g. DD_AC_EXCLUDE=name:prefix-redis (see the values sketch at the end of this answer)
via a config map: mount a config map with no active configuration over /etc/datadog-agent/conf.d/redisdb.d/; below is an example where I renamed auto_conf.yaml to auto_conf.yaml.example.
apiVersion: v1
data:
  auto_conf.yaml.example: |
    ad_identifiers:
      - redis
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      #
      - host: "%%host%%"
        ## @param port - integer - required
        ## Enter the port of the host to connect to.
        #
        port: "6379"
  conf.yaml.example: |
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      # [removed content]
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: redisdb-d
Then alter the DaemonSet/Deployment object:
[...]
        volumeMounts:
          - name: redisdb-d
            mountPath: /etc/datadog-agent/conf.d/redisdb.d
[...]
      volumes:
        - name: redisdb-d
          configMap:
            name: redisdb-d
[...]
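For the env-var option, a minimal sketch of the Helm values, assuming the stable/datadog chart's datadog.env list for extra agent environment variables and Redis containers whose names start with prefix-redis:
datadog:
  env:
    - name: DD_AC_EXCLUDE
      value: "name:prefix-redis"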