Use existing service account for running job - kubernetes

I am currently using emissary-ingress in my namespace newcluster. I am setting up the CRDs using the following crds.yaml file, in which I have only changed the hardcoded namespace.
By looking at the following section of the file, I came to know that the service name is emissary-apiext:
apiVersion: v1
kind: Service
metadata:
  name: emissary-apiext
  namespace: newcluster
  labels:
    app.kubernetes.io/instance: emissary-apiext
    app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
    app.kubernetes.io/name: emissary-apiext
    app.kubernetes.io/part-of: emissary-apiext
spec:
  type: ClusterIP
  ports:
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/instance: emissary-apiext
    app.kubernetes.io/name: emissary-apiext
    app.kubernetes.io/part-of: emissary-apiext
Now, in the following Job, from which I am trying to create one YAML of type KubernetesEndpointResolver, I simply set serviceAccountName: emissary-apiext and installed it with Helm.
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Release.Namespace }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ .Release.Namespace }}-job-kube-resolvers
spec:
  template:
    spec:
      serviceAccountName: emissary-apiext
      containers:
        - name: cert-manager-setup-certificates-crd
          image: "bitnami/kubectl:1.23.10-debian-11-r9"
          volumeMounts:
            - name: cert-manager-setup-certificates-crd
              mountPath: /etc/cert-manager-setup-certificates-crd
              readOnly: true
          command: ["kubectl", "apply", "-f", "/etc/cert-manager-setup-certificates-crd", "-n", "newcluster"]
      volumes:
        - name: cert-manager-setup-certificates-crd
          configMap:
            name: cert-manager-setup-certificates-crd
      restartPolicy: OnFailure
The ConfigMap containing the YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: {{ .Release.Namespace }}
  name: cert-manager-setup-certificates-crd
data:
  crds.yaml: |-
    apiVersion: getambassador.io/v3alpha1
    kind: KubernetesEndpointResolver
    metadata:
      name: {{ .Release.Namespace }}-endpoint
      labels:
        app.kubernetes.io/instance: emissary-apiext
        app.kubernetes.io/managed-by: kubectl_apply_-f_emissary-apiext.yaml
        app.kubernetes.io/name: emissary-apiext
        app.kubernetes.io/part-of: emissary-apiext
The error I get from the job is:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "getambassador.io/v3alpha1, Resource=kubernetesendpointresolvers", GroupVersionKind: "getambassador.io/v3alpha1, Kind=KubernetesEndpointResolver"
Name: "newcluster-endpoint", Namespace: "newcluster"
from server for: "/etc/cert-manager-setup-certificates-crd/crds.yaml":
kubernetesendpointresolvers.getambassador.io "newcluster-endpoint" is forbidden: User "system:serviceaccount:newcluster:emissary-apiext"
cannot get resource "kubernetesendpointresolvers" in API group "getambassador.io" in the namespace "newcluster"
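From the error it looks like the service account the Job runs as has no RBAC permissions on the getambassador.io resources (and the emissary-apiext object shown above is a Service; a ServiceAccount is a separate object). My understanding is that something along these lines would have to exist in the namespace for kubectl apply to work; the Role/RoleBinding names here are only illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpointresolver-editor   # illustrative name
  namespace: newcluster
rules:
  - apiGroups: ["getambassador.io"]
    resources: ["kubernetesendpointresolvers"]
    verbs: ["get", "list", "create", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpointresolver-editor   # illustrative name
  namespace: newcluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpointresolver-editor
subjects:
  - kind: ServiceAccount
    name: emissary-apiext         # the ServiceAccount the Job runs as
    namespace: newcluster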

Related

Micronaut Application on Kubernetes not able to pick up a property from yml

I am running a Micronaut application on Kubernetes where the config is loaded from a ConfigMap.
Firstly, my configmap.yml looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: data-loader-service-config
data:
  application-devcloud.yml: |-
    data.uploaded.event.queue: local-datauploaded-event-queue
    data.uploaded.event.consumer.concurrency: 1-3
    base.dir: basedir
    aws:
      region: XXX
    datasources:
      default:
        dialect: POSTGRES
        driverClassName: org.postgresql.Driver
    micronaut:
      config:
        sources:
          - file:/data-loader-service-config
      debug: true
      jms:
        sqs:
          enabled: true
My deployment.yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MICRONAUT_ENVIRONMENTS
              value: "devcloud"
            - name: aws.region
              value: xxx
          image: mynamespace/data-loader-service:0.1-SNAPSHOT
          imagePullPolicy: Always
          name: data-loader-service
          volumeMounts:
            - name: data-loader-service-config
              mountPath: /data-loader-service-config
      volumes:
        - configMap:
            defaultMode: 384
            name: data-loader-service-config
            optional: false
          name: data-loader-service-config
When my Micronaut app in the pod starts up, it is not able to resolve base.dir.
Not sure what's missing here.
Here is what I ended up doing. It works, but I don't think it's the cleanest way. Looking for a better way.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/vcs-url: <<unknown>>
    app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
  labels:
    app.kubernetes.io/name: data-loader-service
    app.kubernetes.io/version: 0.1-SNAPSHOT
  name: data-loader-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: data-loader-service
      app.kubernetes.io/version: 0.1-SNAPSHOT
  template:
    metadata:
      annotations:
        app.dekorate.io/vcs-url: <<unknown>>
        app.dekorate.io/commit-id: c041d22bc8a1a69a4c9016b77d9df465c8ca9d83
      labels:
        app.kubernetes.io/name: data-loader-service
        app.kubernetes.io/version: 0.1-SNAPSHOT
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MICRONAUT_ENVIRONMENTS
              value: "devcloud"
            - name: MICRONAUT_CONFIG_FILES
              value: "/config/application-common.yml,/config/application-devcloud.yml"
            - name: aws.region
              value: xxx
          image: xxx/data-loader-service:0.1-SNAPSHOT
          imagePullPolicy: Always
          name: data-loader-service
          volumeMounts:
            - name: data-loader-service-config
              mountPath: /config
      volumes:
        - configMap:
            defaultMode: 384
            name: data-loader-service-config
            optional: false
          name: data-loader-service-config
I do not want to "hard-code" the values for MICRONAUT_ENVIRONMENTS and MICRONAUT_CONFIG_FILES in my deployment.yml. Is there a way to parameterise/externalise them so that I have a single deployment.yml for all environments, and at deploy time I can dynamically decide which environment to target? I do not want to create multiple yml files (one per environment/profile).
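For illustration only, one common pattern (not from the original post) is to keep a single deployment template and inject these two values at deploy time, e.g. with Helm; the values keys micronautEnv and micronautConfigFiles below are made up:
# values-devcloud.yaml (hypothetical)
micronautEnv: devcloud
micronautConfigFiles: /config/application-common.yml,/config/application-devcloud.yml
# deployment template fragment
          env:
            - name: MICRONAUT_ENVIRONMENTS
              value: {{ .Values.micronautEnv | quote }}
            - name: MICRONAUT_CONFIG_FILES
              value: {{ .Values.micronautConfigFiles | quote }}
The target environment is then chosen at deploy time with -f values-devcloud.yaml (or --set micronautEnv=...), so the deployment template itself stays environment-agnostic.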

Deploy to kubernetes

I want to deploy frontend and backend applications on Kubernetes. I wrote the YAML files (I got them from helm template):
# Source: quality-control/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-quality-control
  labels:
    app.kubernetes.io/name: quality-control
    helm.sh/chart: quality-control-0.1.0
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/managed-by: Tiller
spec:
  rules:
    - host: "quality-control.ru"
      http:
        paths:
          - path: /
            backend:
              serviceName: RELEASE-NAME-quality-control
              servicePort: http
---
# Source: quality-control/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: List
items:
  - apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: quality-control-frontend
      labels:
        app.kubernetes.io/name: quality-control-frontend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      replicas: 1
      selector:
      matchLabels:
        app.kubernetes.io/name: quality-control-frontend
        app.kubernetes.io/instance: RELEASE-NAME
      template:
        metadata:
          labels:
            app.kubernetes.io/name: quality-control-frontend
            app.kubernetes.io/instance: RELEASE-NAME
            logger: external
            sourcetype: quality-control-frontend
        spec:
          containers:
            - name: quality-control
              image: "registry.***.ru:5050/quality-control-frontend:stable"
              imagePullPolicy: Always
              env:
                - name: spring_profiles_active
                  value: dev
              ports:
                - containerPort: 80
                  protocol: TCP
              livenessProbe:
                httpGet:
                  path: /healthcheck
                  port: 80
                  protocol: TCP
                initialDelaySeconds: 10
                periodSeconds: 10
              resources:
                limits:
                  cpu: 2
                  memory: 2048Mi
                requests:
                  cpu: 1
                  memory: 1024Mi
  - apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: quality-control-backend
      labels:
        app.kubernetes.io/name: quality-control-backend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: quality-control-backend
          app.kubernetes.io/instance: RELEASE-NAME
      template:
        metadata:
          labels:
            app.kubernetes.io/name: quality-control-backend
            app.kubernetes.io/instance: RELEASE-NAME
            logger: external
            sourcetype: quality-control-backend
        spec:
          containers:
            - name: quality-control
              image: "registry.***.ru:5050/quality-control-backend:stable"
              imagePullPolicy: Always
              env:
                - name: spring_profiles_active
                  value: dev
              ports:
                - containerPort: 80
                  protocol: TCP
              resources:
                limits:
                  cpu: 2
                  memory: 2048Mi
                requests:
                  cpu: 1
                  memory: 1024Mi
---
# Source: quality-control/templates/service.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Service
    metadata:
      name: quality-control-frontend
      labels:
        app.kubernetes.io/name: quality-control-frontend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
    spec:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
      selector:
        app.kubernetes.io/name: quality-control-frontend
        app.kubernetes.io/instance: RELEASE-NAME
  - apiVersion: v1
    kind: Service
    metadata:
      name: quality-control-backend
    labels:
      app.kubernetes.io/name: quality-control-backend
      helm.sh/chart: quality-control-0.1.0
      app.kubernetes.io/instance: RELEASE-NAME}
      app.kubernetes.io/managed-by: Tiller
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app.kubernetes.io/name: quality-control-backend
        app.kubernetes.io/instance: RELEASE-NAME
But I get an error when deploying:
Error: release quality-control failed: Deployment.apps "quality-control-frontend" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"app.kubernetes.io/instance":"quality-control", "app.kubernetes.io/name":"quality-control-frontend"}: `selector` does not match template `labels`]
There is an indentation issue in the first Deployment object.
Change it from
    spec:
      replicas: 1
      selector:
      matchLabels:
        app.kubernetes.io/name: quality-control-frontend
        app.kubernetes.io/instance: RELEASE-NAME
to
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: quality-control-frontend
          app.kubernetes.io/instance: RELEASE-NAME
There is also an indentation problem in the Service list; it needs to change from
  - apiVersion: v1
    kind: Service
    metadata:
      name: quality-control-backend
    labels:
      app.kubernetes.io/name: quality-control-backend
      helm.sh/chart: quality-control-0.1.0
      app.kubernetes.io/instance: RELEASE-NAME}
      app.kubernetes.io/managed-by: Tiller
to
  - apiVersion: v1
    kind: Service
    metadata:
      name: quality-control-backend
      labels:
        app.kubernetes.io/name: quality-control-backend
        helm.sh/chart: quality-control-0.1.0
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/managed-by: Tiller
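As a side note, this kind of selector/labels mismatch can be caught before the release fails by rendering the chart and running a server-side dry run against the output; the exact flags depend on the Helm and kubectl versions in use:
helm template . > rendered.yaml
kubectl apply --dry-run=server -f rendered.yaml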

Does helm support Endpoints object type?

I've created the following two objects:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
  ports:
    - port: {{ .Values.postgres.port }}
  selector: {}
for a service and its endpoint:
kind: Endpoints
apiVersion: v1
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
subsets:
  - addresses:
      - ip: "{{ .Values.external.ip }}"
    ports:
      - name: "db"
        port: {{ .Values.external.port }}
When I use helm, even in dry-run mode, I can see the Service object but can't see the Endpoints object.
Why? Doesn't helm support all k8s objects?
Helm is just a "templating" tool, so technically it supports everything that your underlying k8s supports.
In your case, please check that both files are in the templates directory.
Actually it does work. The problem was that the Service and the Endpoints MUST have the same name (which I knew) and MUST also have exactly the same port names.
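To illustrate that last point, here is a minimal non-templated sketch where the Service and Endpoints share the same object name and the same port name; the name, IP and ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: external-postgres        # must match the Endpoints name
spec:
  ports:
    - name: db                   # port name must match the Endpoints port name
      port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres        # same name as the Service
subsets:
  - addresses:
      - ip: 10.0.0.10            # illustrative external address
    ports:
      - name: db                 # same port name as the Service
        port: 5432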

get functional yaml files from Helm

Is there a way to intercept the yaml files from helm after it has built them, but right before the creation of the objects?
What I'm doing now is to create the objects then get them through:
for file in $(kubectl get OBJECT -n maesh -oname); do kubectl get $file -n maesh --export -oyaml > $file.yaml; done
This works fine. I only have to create the object directory beforehand, but it works. I was just wondering if there is a cleaner way of doing this.
And, by the way, the reason is that the Traefik service mesh (maesh) is still in its infancy, and the only way to install it is through helm. They don't yet have the files in their repo.
You can do
helm template .
This will output something like:
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-my-app-test-connection"
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['release-name-my-app:80']
  restartPolicy: Never
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
        app.kubernetes.io/instance: release-name
    spec:
      containers:
        - name: my-app
          image: "nginx:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: my-app/templates/ingress.yaml
and that is a valid file with k8s objects.
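If you would rather have the rendered manifests as files on disk than as a single stream, recent Helm versions can also write each template out separately (flag availability depends on your Helm version):
# one file per template, written under ./rendered/
helm template . --output-dir ./rendered
# or keep the whole stream in a single file
helm template . > all-objects.yaml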

Minikube Ingress: unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found

I am running minikube with the following configuration:
Environment:
minikube version: v0.25.2
macOS version: 10.12.6
DriverName: virtualbox
ISO: minikube-v0.25.1.iso
I created an Ingress resource to map the service messy-chimp-emauser to path /.
But when I roll out changes to minikube, I get the below logs in the nginx-ingress-controller pod:
5 controller.go:811] service default/messy-chimp-emauser does not have any active endpoints
5 controller.go:245] unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found
5 controller.go:245] unexpected error reading configmap kube-system/udp-services: configmap kube-system/udp-services was not found
And hence I get HTTP 503 when trying to access the service from the browser.
Steps to reproduce
STEP 1
minikube addons enable ingress
STEP 2
kubectl create -f kube-resources.yml
(replaced actual-image with k8s.gcr.io/echoserver:1.4)
kube-resources.yml
apiVersion: v1
kind: Service
metadata:
  name: messy-chimp-emauser
  labels:
    app: messy-chimp-emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: emauser
  selector:
    app: messy-chimp-emauser
    release: messy-chimp
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: messy-chimp-emauser
  labels:
    app: emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emauser
      release: messy-chimp
  template:
    metadata:
      labels:
        app: emauser
        release: messy-chimp
    spec:
      containers:
        - name: emauser
          image: "k8s.gcr.io/echoserver:1.4"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messy-chimp-ema-chart
  labels:
    app: ema-chart
    chart: ema-chart-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: messy-chimp-emauser
              servicePort: emauser
Any suggestions on this would be appreciated.
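For what it's worth, the "does not have any active endpoints" line usually means the Service selector does not match any pod labels: here the Service selects app: messy-chimp-emauser while the pods are labelled app: emauser, and targetPort: http refers to a port name the container never declares (the tcp-services/udp-services ConfigMap warnings are typically harmless informational messages). A sketch of how the Service and container ports could be aligned with the manifests above, keeping the existing names:
# Service spec fragment (sketch): selector matches the pod labels,
# and targetPort points at a named container port
spec:
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: emauser
  selector:
    app: emauser              # matches the Deployment's pod labels
    release: messy-chimp
---
# Deployment container fragment (sketch): name the port "http"
          ports:
            - name: http
              containerPort: 80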