I have created a pod using the YAML below.
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
Then I created the pod using the below command.
$ kubectl create -f kubia-liveness-probe.yaml
It created a pod successfully.
Then I'm trying to create a LoadBalancer service to access it from the external world.
For that I'm using the below command.
$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http
For this, I'm getting the below error.
Error from server (NotFound): replicationcontrollers "kubia-liveness" not found
I'm not sure how to create ReplicationControllers. Could anybody please give me the command to do the same?
You are mixing two approaches here. One is creating resources from a YAML definition, which is fine by itself (but bear in mind that it is quite rare to create a bare Pod rather than a Deployment or ReplicationController); the other is exposing via the CLI, which makes some assumptions (i.e. it expects a ReplicationController) and creates the appropriate Service based on them. My suggestion would be to create the Service from a YAML manifest as well, so you can tailor it to fit your case.
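For example, a minimal sketch of such a Service manifest; note this assumes you add a label such as app: kubia-liveness to the Pod's metadata, which the manifest above does not have yet:

apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia-liveness    # assumed label, add it to the Pod's metadata.labels
  ports:
  - port: 80
    targetPort: 8080

You would then create it with kubectl create -f on that file instead of using kubectl expose.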
I have a .NET Core web API deployed on Kubernetes. Every night at midnight I need to call an endpoint to do some operations on every pod, so I have deployed a CronJob that calls the API with curl, and the method does the required operations.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-cronjob
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test-cronjob
            image: curlimages/curl:7.74.0
            imagePullPolicy: IfNotPresent
            command:
            - "/bin/sh"
            - "-ec"
            - |
              date;
              echo "doingOperation"
              curl -X POST "serviceName/DailyTask"
          restartPolicy: OnFailure
But this only calls one pod, the one assigned by my ingress.
Is there a way to call every pod contained in a service?
That is the expected behavior: when we curl a Kubernetes Service object, the request is passed to only one of its endpoints (the pod IPs). To achieve what you need, you have to write a custom script that first gets the endpoints associated with the service and then iteratively calls curl against them one by one.
Note: The pod IPs can change due to pod re-creation, so you should fetch the endpoints associated with the service on each run of the CronJob.
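A rough sketch of such a script, assuming the CronJob's service account is allowed to read Endpoints and that the service name, port and path below are placeholders:

#!/bin/sh
# List the pod IPs currently backing the service, then call each one.
for ip in $(kubectl get endpoints serviceName \
    -o jsonpath='{.subsets[*].addresses[*].ip}'); do
  curl -X POST "http://$ip:80/DailyTask"
done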
You could run kubectl inside your job:
kubectl get pods -l mylabel=mylabelvalue \
-o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
This will return the internal IPs of all the pods with the specified label.
You can then loop over the addresses and run your command.
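For example, a sketch along these lines (the label, port and path are placeholders):

for ip in $(kubectl get pods -l mylabel=mylabelvalue \
    -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'); do
  curl -X POST "http://$ip:80/DailyTask"
done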
Since giving Pods rights on the API server, e.g. to look up the actual service endpoints, is cumbersome and poses a security risk, I recommend a simple scripting solution here.
First up, install a headless service for the deployment in question (a service with clusterIP=None). This will make your internal DNS Server create several A records, each pointing at one of your Pod IPs.
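A minimal sketch of such a headless Service, with name, selector and port as assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-api-headless
spec:
  clusterIP: None        # headless: DNS returns one A record per backing Pod
  selector:
    app: my-api          # assumed label on the Pods of the deployment in question
  ports:
  - port: 8080
    targetPort: 8080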
Secondly, in order to ping each Pod in a round-robin fashion from your CronJob, employ a little shell script along the lines below (you will need to run this from a container with dig and curl installed on it):
dig +noall +answer <name-of-headless-service>.<namespace>.svc.cluster.local | awk -F$'\t' '{curl="curl <your-protocol>://"$2":<your-port>/<your-endpoint>"; print curl}' | source /dev/stdin
I've just started learning Kubernetes. In every tutorial the writer generally uses "kubectl ... deployments" to control the newly created deployments. Now, with those commands (e.g. kubectl get deployments) I always get the response No resources found in default namespace., and I have to use "pods" instead of "deployments" to make things work (which works fine).
Now my question is: what is causing this to happen, and what is the difference between using a deployment and a pod? I've set the Docker driver on my first minikube; does that have something to do with this?
First let's brush up some terminologies.
Pod - It's the basic building block for Kubernetes. It groups one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
Deployment - It is a controller which wraps Pod/s and manages its life cycle, which is to say actual state to desired state. There is one more layer in between Deployment and Pod which is ReplicaSet : A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
Below is the visualization (Deployment → ReplicaSet → Pods). Source: I drew it!
In your case, here is what might have happened:
Either you have created a Pod, not a Deployment. Therefore, when you do kubectl get deployment you don't see any resources. Note that when you create a Deployment, it in turn creates a ReplicaSet for you, which in turn creates the defined pods.
Or maybe you created your deployment in a different namespace; if that's the case, type this command to find your deployments in that namespace: kubectl get deploy NAME_OF_DEPLOYMENT -n NAME_OF_NAMESPACE
More information to clarify your concepts:
The section under spec.template in the manifest below is essentially the Pod manifest you would write if you were to create the Pod manually instead of taking the Deployment route. As I said earlier, in simple terms Deployments are a wrapper around your Pods, so anything outside the spec.template path is the configuration that defines how you want to manage (scaling, affinity, etc.) your Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
A Deployment is a controller providing a higher-level abstraction on top of Pods and ReplicaSets. It provides declarative updates for Pods and ReplicaSets: a Deployment internally creates a ReplicaSet, within which the pods are created.
Use cases of Deployments are documented here.
One reason for No resources found in default namespace could be that you created the deployment in a specific namespace and not in default namespace.
You can see deployments in a specific namespace or in all namespaces via
kubectl get deploy -n namespacename
kubectl get deploy -A
I am developing an application that allows users to play around in their own sandboxes with a finite life time. Conceptually, it can be thought as if users were playing games of Pong. Users can interact with a web interface hosted at main/ to start a game of Pong. Each game of Pong will exist in its own pod. Since each game has a finite lifetime, the pods are dynamically created on-demand (through the Kubernetes API) as Kubernetes jobs with a single pod. There is therefore a one-to-one relationship between games of Pong and pods. Up to this point I have it all figured out.
My problem is, how do I set up an ingress to map dynamically created URLs, for example main/game1, to the corresponding pods? That is, if a user starts a game through the main interface, I would like him to be redirected to the URL of the corresponding pod where his game is hosted.
I could pre-allocate a set of URLs, check if they have active jobs, and redirect if they do not, but that does not scale well. I am thinking dynamically assigning URLs is a common pattern in Kubernetes, so there must be a standard way to do this. I have looked at using nginx-ingress, but that is not a requirement.
Further to the comment, I created a little demo for you on minikube with a working Ingress controller (enabled via minikube addons enable ingress).
First, replicate multiple Deployments to simulate the games:
kubectl create deployment deployment-1 --image=nginxdemos/hello
kubectl create deployment deployment-2 --image=nginxdemos/hello
kubectl create deployment deployment-3 --image=nginxdemos/hello
kubectl create deployment deployment-4 --image=nginxdemos/hello
kubectl create deployment deployment-5 --image=nginxdemos/hello
Same for the Service resources:
kubectl create service clusterip deployment-1 --tcp=80:80
kubectl create service clusterip deployment-2 --tcp=80:80
kubectl create service clusterip deployment-3 --tcp=80:80
kubectl create service clusterip deployment-4 --tcp=80:80
kubectl create service clusterip deployment-5 --tcp=80:80
Finally, it's time for the Ingress resources, but here we have to be a bit hacky since the create subcommand is not available for them:
for number in `seq 5`; do echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: deployment-$number
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game$number
        backend:
          serviceName: deployment-$number
          servicePort: 80
" | kubectl create -f -; done
Now you have Pods, Services and Ingresses. Obviously, you would have to replicate the same result through the Kubernetes API, but as I suggested in the comment, you should rather create a single Ingress resource and update its path entries dynamically.
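A rough sketch of what that single Ingress could look like, with one path per active game (the service names mirror the demo above; your application would add and remove path entries via the Kubernetes API as games start and end):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: games
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game1
        backend:
          serviceName: deployment-1
          servicePort: 80
      - path: /game2
        backend:
          serviceName: deployment-2
          servicePort: 80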
However, if you try to simulate the cURL call faking the Host header, you can see the working result:
# curl `minikube ip`/game2 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.5:80</span></p>
<p><span>Server name:</span> <span>deployment-2-5b98b954f6-8g5fl</span></p>
# curl `minikube ip`/game4 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.7:80</span></p>
<p><span>Server name:</span> <span>deployment-4-767ff76774-d2fgj</span></p>
You can see the Pod IP and name as well.
I agree with Efrat Levitan. This is not a task for the Ingress/Kubernetes itself.
You need another application (a different layer of abstraction) to decide where the traffic should be routed, for example Istio and its routing rules for HTTP traffic based on cookies.
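Purely as an illustration, a sketch of cookie-based routing with an Istio VirtualService; the host, cookie name and destination services are assumptions, not something from the question:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: games
spec:
  hosts:
  - games.example.com
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(game=game2)(;.*)?$"   # requests carrying the game2 cookie
    route:
    - destination:
        host: game2-service
  - route:                                        # everything else
    - destination:
        host: main-service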
I'm experimenting with this, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within a pod vs running it in a different pod.
The sample configuration runs kubectl proxy and the container that needs it* in the same pod of a DaemonSet, i.e.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  # ...
spec:
  template:
    metadata:
      # ...
    spec:
      containers:
      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...
      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
When doing this on my cluster, I get the expected behavior. However, I will run other services that also need kubectl proxy, so I figured I'd rationalize that into its own daemon set to ensure it's running on all nodes. I thus removed the kube-proxy container and deployed the following daemon set:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
In other words, the same container configuration as previously, but now running in independent pods on each node instead of within the same pod. With this configuration "stuff doesn't work anymore"**.
I realize the solution (at least for now) is to just run the kube-proxy container in any pod that needs it, but I'd like to know why I need to. Why isn't just running it in a daemonset enough?
I've tried to find more information about running kubectl proxy like this, but my search results drown in results about running it to access a remote cluster from a local environment, i.e. not at all what I'm after.
I include these details not because I think they're relevant, but because they might be even though I'm convinced they're not:
*) a Linkerd ingress controller, but I think that's irrelevant
**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.
namely between running kubectl proxy from within a pod vs running it in a different pod.
Assuming your cluster has a software-defined network, such as Flannel or Calico, a Pod has its own IP and all containers within a Pod share the same networking space. Thus:
containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]
will work, whereas in a DaemonSet, they are by definition not in the same Pod and thus the hypothetical c0 above would need to use the DaemonSet's Pod's IP to contact 8001. That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the DaemonSet's Pod's kubectl proxy to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite, or is actually required.
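For illustration, the DaemonSet's container spec might be adjusted roughly as follows (the --address and --accept-hosts flags are standard kubectl proxy flags; whether ports: is strictly required is, as noted, worth double-checking):

containers:
- name: kube-proxy
  image: buoyantio/kubectl:v1.8.5
  args:
  - "proxy"
  - "-p"
  - "8001"
  - "--address=0.0.0.0"     # listen on the Pod IP instead of only 127.0.0.1
  - "--accept-hosts=.*"     # accept requests coming from other Pods
  ports:
  - containerPort: 8001     # expose the port to the cluster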
I would like to deploy an application cluster by managing my deployment via k8s Deployment object. The documentation has me extremely confused. My basic layout has the following components that scale independently:
API server
UI server
Redis cache
Timer/Scheduled task server
Technically, all 4 above belong in separate pods that are scaled independently.
My questions are:
Do I need to create pod.yml files and then somehow reference them in deployment.yml file or can a deployment file also embed pod definitions?
K8s documentation seems to imply that the spec portion of a Deployment is equivalent to defining one pod. Is that correct? What if I want to declaratively describe multi-pod deployments? Do I need multiple deployment.yml files?
Pagid's answer has most of the basics. You should create 4 Deployments for your scenario. Each Deployment will create a ReplicaSet that schedules and supervises the collection of Pods for that Deployment.
Each Deployment will most likely also require a Service in front of it for access. I usually create a single yaml file that has a Deployment and the corresponding Service in it. Here is an example for an nginx.yaml that I use:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    name: nginx
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginxcontainer
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
Here is some additional information for clarification:
A POD is not a scalable unit. A Deployment that schedules PODs is.
A Deployment is meant to represent a single group of PODs fulfilling a single purpose together.
You can have many Deployments work together in the virtual network of the cluster.
For accessing a Deployment that may consist of many PODs running on different nodes you have to create a Service.
Deployments are meant to contain stateless services. If you need to store state, you need to create a StatefulSet instead (e.g. for a database service); see the sketch below.
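Regarding the last point, a minimal sketch of such a StatefulSet (a hypothetical Redis example; name, image tag and storage size are assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis          # requires a matching (typically headless) Service
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:       # one PersistentVolumeClaim per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi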
If you look at the Kubernetes API reference for the Deployment, you'll find that the spec->template field is of type PodTemplateSpec; together with the related comment (Template describes the pods that will be created.) it answers your questions. A longer description can of course be found in the Deployment user guide.
To answer your questions...
1) The Pods are managed by the Deployment and defining them separately doesn't make sense as they are created on demand by the Deployment. Keep in mind that there might be more replicas of the same pod type.
2) For each of the applications in your list, you'd have to define one Deployment, which also makes sense when it comes to different replica counts and application rollouts.
3) You haven't asked that, but it's related: along with separate Deployments, each of your applications will also need a dedicated Service so the others can access it.
Additional information:
API server: use a Deployment
UI server: use a Deployment
Redis cache: use a StatefulSet
Timer/Scheduled task server: maybe use a StatefulSet (if your service keeps some state)