How do I deploy this Traefik example to Kubernetes? - kubernetes

I am following the Getting Started guide for Traefik from here and am trying to launch the service into Kubernetes (Minikube) instead of Docker:
Edit your docker-compose.yml file and add the following at the end of your file.
# ...
whoami:
  image: emilevauge/whoami # A container that exposes an API to show its IP address
  labels:
    - "traefik.frontend.rule=Host:whoami.docker.localhost"
I am guessing I run it as:
kubectl run whoami-service --image=emilevauge/whoami --labels='traefik.frontend.rule=Host:whoami.docker.localhost'
however that generates an error of:
The Deployment "whoami-service" is invalid:
* metadata.labels: Invalid value: "'traefik.frontend.rule": name part must consist of alphanumeric characters, '-', '_' or '.', and....
So what am I missing here? How do I deploy the above to my Minikube Kubernetes cluster?

I'm not sure if this is along the lines of what you're looking for, but Traefik has a small tutorial for getting an Ingress controller set up on Kubernetes, with a great document on configuration, as well.
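For reference, the Kubernetes-native way to express that docker-compose label is a Host rule on an Ingress that the Traefik ingress controller picks up. Below is a rough sketch of what the whoami piece could look like on a reasonably recent cluster, assuming Traefik is already installed as an ingress controller; the resource names and the ingressClassName are my assumptions, not something taken from the guide:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: emilevauge/whoami   # exposes an HTTP API that reports its IP address
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: traefik          # assumes Traefik is the installed ingress controller
  rules:
    - host: whoami.docker.localhost  # plays the role of traefik.frontend.rule=Host:...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
Something like kubectl apply -f whoami.yaml (the file name is up to you) should create all three objects; you would then point whoami.docker.localhost at the Minikube IP.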
If you'd just like to get that particular image working, you may be able to pass the label as an argument to the pod, possibly with kubectl run. From the output of kubectl run --help:
# Start the nginx container using the default command, but use custom arguments (arg1 .. argN) for that command.
kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>
Or possibly manually in a manifest:
...
containers:
- name: whoami
  image: emilevauge/whoami
  args: ["traefik.frontend.rule=Host:whoami.docker.localhost"]
Having never worked with the image in the example before, I don't know whether either of the above examples will actually work.
Hope that helps a little!

Related

Odd Kubernetes behaviour in AWS EKS cluster

In an EKS cluster (v1.22.10-eks-84b4fe6) that I manage, I've spotted a behavior that I had never seen before (or that I missed completely...). In a namespace where an application created by a public Helm chart is running, if I create a separate, unrelated pod (a simple busybox with a sleep command in it), it automatically gets some environment variables injected. They always start with the name of the namespace and refer to the services belonging to the Helm chart/deployment already running there. I'm not sure I understand this behavior; I've tested it in several other namespaces with Helm charts deployed and I get the same results (each time with different env vars, obviously).
An example in a namespace with this chart installed -> https://github.com/bitnami/charts/tree/master/bitnami/keycloak
testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: keycloak-18
spec:
  containers:
    - image: busybox
      name: testpod
      command: ["/bin/sh", "-c"]
      args: ["sleep 3600"]
When in the pod:
/ # env
KEYCLOAK_18_METRICS_PORT_8080_TCP_PROTO=tcp
KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
KEYCLOAK_18_METRICS_SERVICE_PORT=8080
KEYCLOAK_18_METRICS_PORT=tcp://10.100.104.11:8080
KEYCLOAK_18_PORT_80_TCP_ADDR=10.100.71.5
HOSTNAME=testpod
SHLVL=2
KEYCLOAK_18_PORT_80_TCP_PORT=80
HOME=/root
KEYCLOAK_18_PORT_80_TCP_PROTO=tcp
KEYCLOAK_18_METRICS_PORT_8080_TCP=tcp://10.100.104.11:8080
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_ADDR=10.100.155.185
KEYCLOAK_18_POSTGRESQL_SERVICE_HOST=10.100.155.185
KEYCLOAK_18_PORT_80_TCP=tcp://10.100.71.5:80
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PORT=5432
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PROTO=tcp
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KEYCLOAK_18_POSTGRESQL_PORT=tcp://10.100.155.185:5432
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT=5432
KEYCLOAK_18_SERVICE_PORT_HTTP=80
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT_TCP_POSTGRESQL=5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP=tcp://10.100.155.185:5432
KEYCLOAK_18_METRICS_SERVICE_PORT_HTTP=8080
KEYCLOAK_18_SERVICE_HOST=10.100.71.5
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
KUBERNETES_SERVICE_HOST=10.100.0.1
PWD=/
KEYCLOAK_18_METRICS_PORT_8080_TCP_ADDR=10.100.104.11
KEYCLOAK_18_METRICS_SERVICE_HOST=10.100.104.11
KEYCLOAK_18_SERVICE_PORT=80
KEYCLOAK_18_PORT=tcp://10.100.71.5:80
KEYCLOAK_18_METRICS_PORT_8080_TCP_PORT=8080
I've looked into this a bit and found this doc https://kubernetes.io/docs/concepts/containers/container-environment/, but it lists fewer variables than I can see myself.
I may be behind on some Kubernetes features; does anyone have a clue?
Thanks!
What you are seeing is expected. As stated in the official documentation:
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and
{SVCNAME}_SERVICE_PORT variables, where the Service name is
upper-cased and dashes are converted to underscores.
This behavior is not EKS specific.
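If you would rather not have those variables injected into unrelated pods, the pod spec has an enableServiceLinks switch that turns the per-Service injection off. A minimal sketch based on the testpod above:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: keycloak-18
spec:
  enableServiceLinks: false   # do not inject env vars for the Services in this namespace
                              # (the KUBERNETES_* variables for the API server are still set)
  containers:
    - image: busybox
      name: testpod
      command: ["/bin/sh", "-c"]
      args: ["sleep 3600"]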

Set container port based on environment property

I am setting the port value in an environment property while generating the Pod yaml.
master $ kubectl run nginx --image=nginx --restart=Never --env=MY_PORT=8080 --dry-run -o yaml > Pod.yaml
I am trying to use the environment property MY_PORT in the ports section of my Pod yaml.
spec:
  containers:
  - env:
    - name: MY_PORT
      value: "8080"
    image: nginx
    name: nginx
    ports:
    - containerPort: $(MY_PORT)
When I try to create the Pod, I get the following error message.
error: error validating "Pod.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
I tried referencing it as ${MY_PORT}, MY_PORT, etc., but I get the same error every time.
How can I use an environment variable value in an integer field?
You can't use an environment variable there. In the ContainerPort API object the containerPort field is specified as an integer. Variable substitution is only supported in a couple of places, and where it is supported it is called out; see for example args and command in the higher-level Container API object.
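For instance, $(MY_PORT) expansion does work in command and args; a minimal sketch of that supported case (the demo container and the echo command are just for illustration):
spec:
  containers:
  - name: demo
    image: busybox
    env:
    - name: MY_PORT
      value: "8080"
    command: ["/bin/sh", "-c"]
    # Kubernetes replaces $(MY_PORT) with "8080" before the container starts.
    args: ["echo listening on $(MY_PORT) && sleep 3600"]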
There's no reason to make this configurable. In a Kubernetes environment the pod will have its own IP address, so there's no risk of conflict; if you want to use a different port number to connect, you can set up a service where e.g. port 80 on the service forwards to port 8080 in the pod. (In plain Docker, you can do a similar thing with a docker run -p 80:8080 option: you can always pick the external port even if the port number inside the container is fixed.) I'd delete the environment variable setting.
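If you do want clients to connect on a different port, the remapping described above would look roughly like this; the Service name and the run: nginx selector are assumptions about how the pod was created:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx        # matches the label kubectl run puts on the pod
  ports:
  - port: 80          # port clients connect to on the Service
    targetPort: 8080  # port the container actually listens on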

Chaining container error in Kubernetes

I am new to kubernetes and docker. I am trying to chain 2 containers in a pod such that the second container should not be up until the first one is running. I searched and got a solution here. It says to add a "depends" field in the YAML file for the container that depends on another container. The following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
      hostPort: 8080
  - name: container2
    image: <image-name>
    depends: ["container1"]
Kubernetes gives me the following error after running the above yaml file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1 but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
Here is a link to help understand the structure of k8s resources:
kubectl-explain
There is no property "depends" in the Container API object.
You could split your containers into two different pods and have the kubectl CLI wait for the first pod to become ready before creating the second:
kubectl create -f container1.yaml
kubectl wait --for=condition=Ready pod/<first-pod-name>   # block until the first pod is ready
kubectl create -f container2.yaml
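If you go the two-pod route and want the second pod itself to do the waiting instead of the CLI, an init container that blocks until the first pod's Service is resolvable is a common pattern. A rough sketch, assuming the first pod is exposed through a Service named container1-svc (that name is an assumption):
apiVersion: v1
kind: Pod
metadata:
  name: container2
spec:
  initContainers:
  - name: wait-for-container1
    image: busybox
    command: ["/bin/sh", "-c"]
    # Loop until the DNS name of the Service in front of the first pod resolves.
    args: ["until nslookup container1-svc; do echo waiting for container1; sleep 2; done"]
  containers:
  - name: container2
    image: <image-name>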

Error from server (NotFound): replicationcontrollers "kubia-liveness" not found

I have created a pod using the below yaml.
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
Then I created the pod using the below command.
$ kubectl create -f kubia-liveness-probe.yaml
It created a pod successfully.
Then I tried to create a LoadBalancer Service to access it from the external world.
For that I'm using the below command.
$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http
For this, I'm getting the below error.
Error from server (NotFound): replicationcontrollers "kubia-liveness" not found
I'm not sure how to create a ReplicationController. Could anybody please give me the command to do the same?
You are mixing two approaches here. One is creating things from a yaml definition, which is good by itself (but bear in mind that it is really rare to create a bare Pod rather than a Deployment or ReplicationController). The other is exposing via the CLI, which has some assumptions built in (i.e. it expects a replication controller) and creates the appropriate Service based on those assumptions. My suggestion would be to create the Service from a yaml manifest as well, so you can tailor it to fit your case.
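For example, a Service manifest for the pod above could look roughly like this. Note the pod yaml has no labels, so you would need to add one for the selector to match; the app: kubia-liveness label here is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia-liveness   # must match a label you add to the pod's metadata
  ports:
  - port: 80
    targetPort: 8080      # the port the kubia container listens on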

Error when deploying kube-dns: No configuration has been provided

I have just installed a basic kubernetes cluster the manual way, to better understand the components, and to later automate this installation. I followed this guide: https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
The cluster is completely empty, without addons, after this. I've already deployed kubernetes-dashboard successfully; however, when trying to deploy kube-dns, it fails with the log:
2017-01-11T15:09:35.982973000Z F0111 15:09:35.978104 1 server.go:55]
Failed to create a kubernetes client:
invalid configuration: no configuration has been provided
I used the following yaml template for kube-dns without modification, only filling in the cluster IP:
https://coreos.com/kubernetes/docs/latest/deploy-addons.html
What did I do wrong?
Experimenting with kubedns arguments, I added --kube-master-url=http://mykubemaster.mydomain:8080 to the yaml file, and suddenly it reported in green.
How did this solve it? Was the container not aware of the master for some reason?
In my case, I had to put a numeric IP in "--kube-master-url=http://X.X.X.X:8080". It goes in the yaml file of the RC (ReplicationController), like this:
...
spec:
  containers:
  - name: kubedns
    ...
    args:
    # command = "/kube-dns"
    - --domain=cluster.local
    - --dns-port=10053
    - --kube-master-url=http://192.168.99.100:8080
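If you're not sure which address to put there, the following usually prints where the API server is reachable (the exact wording and port of the output vary by version and setup):
kubectl cluster-info
# e.g. "Kubernetes master is running at https://192.168.99.100:8443"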