How to pass namespace in Kubernetes create deployment command

I am trying to specify the namespace when executing the kubectl create deployment command.
This is what I tried:
kubectl create deployment test --image=banu/image1 namespace=test
and this doesn't work.
I also want to expose this deployment using a ClusterIP service within the cluster itself, for that given namespace. How can I do that using the kubectl command line?

You can specify either the -n or the --namespace option.
kubectl create deployment test --image=nginx --namespace default --dry-run=client -o yaml
and inspect the resulting deployment YAML (on older kubectl versions, use the bare --dry-run flag).
Using kubectl run:
kubectl run test --namespace test --image nginx --port 9090 --dry-run=client -o yaml
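Either dry-run output can be piped straight to kubectl apply, which creates the deployment in the target namespace in one step; a sketch using the image from the question:
kubectl create deployment test --image=banu/image1 --dry-run=client -o yaml | kubectl apply -n test -f -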

You need to create the namespace first, like this:
kubectl create ns test
ns stands for namespace, so with kubectl you are saying you want to create a namespace named test.
Then, while creating the deployment, you add the namespace you want:
kubectl create deployment test --image=banu/image1 -n test
The -n flag stands for namespace; that way you tell Kubernetes that all resources related to that deployment will live under the test namespace.
In order to see all the resources under a specific namespace:
kubectl get all -n test
--namespace and -n are the same thing.

Use -n test instead of namespace=test
Sample with nginx image:
$ kubectl create deployment nginx --image=nginx -n test
deployment.apps/nginx created
$ kubectl get deploy -n test
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           8s
For the second part, you need to create a Service whose selector matches the deployment's labels.
You can find the correct labels by running something like:
kubectl -n test describe deploy test | grep Labels:
and apply a Service like:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
spec:
  ports:
    - name: test
      port: 80 # Change this port
      protocol: TCP
  type: ClusterIP
  selector:
    # Here you need to define output from previous step
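Alternatively, kubectl expose can generate the ClusterIP Service for you and copy the deployment's selector automatically; a minimal sketch, assuming the container listens on port 80:
kubectl expose deployment test -n test --name=test-svc --port=80 --target-port=80 --type=ClusterIP
ClusterIP is the default service type, so --type can even be omitted.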

Related

How to know the full Kubernetes API call flow when using kubectl for CRD resources

I want to know how to use the API to CRUD my CRD resources, so that I can write an SDK to control the resources.
Using kubectl directly:
kubectl get inferenceservices test-sklearn -n kserve-test
kubectl apply -f xx.yaml -n kserve-test
kubectl delete -f xx.yaml -n kserve-test
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: "test-sklearn"
spec:
predictor:
sklearn:
storageUri: "http://xxxx"
To capture the API call flow in process_log, raise the verbosity:
kubectl get inferenceservices test-sklearn -n kserve-test --v=8 > process_log 2>&1
Use kubectl proxy to test the raw API:
kubectl proxy --address 0.0.0.0 --accept-hosts='^.*'
Test getting the resource status:
GET http://xxx:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn
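From there, the other CRUD verbs follow the same resource path; a minimal sketch with curl through the proxy (assumes the proxy above listens on port 8001, and my-inferenceservice.yaml is an illustrative manifest file):
# List all InferenceServices in the namespace
curl http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices
# Create one from a YAML manifest
curl -X POST -H "Content-Type: application/yaml" --data-binary @my-inferenceservice.yaml http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices
# Delete it
curl -X DELETE http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn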

How to add a label to an existing namespace with Helm

I have a project that creates a mutating webhook in the kube-system namespace, and the webhook needs to exclude its own deployment namespaces.
But the kube-system namespace already exists. How do I attach the required labels to it using Helm?
Helmfile offers hooks which are pretty neat for that:
releases:
  - name: istio-ingress
    namespace: istio-ingress
    chart: istio/gateway
    wait: true
    hooks:
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.
Some possible things you could do instead:
Make the per-namespace labels opt-in, not opt-out; only apply the webhook in namespaces where the label is present, rather than in every namespace except flagged ones. (Istio's sidecar injector works this way.)
Exclude kube-system as a special case in the code.
Manually run kubectl label namespace outside of Helm (see the sketch after this list).
Make your larger-scale deployment pipeline run the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
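For the opt-in model and the manual label, a minimal sketch; the label key webhook-inject and the namespace my-app are illustrative, use whatever your webhook's namespaceSelector actually matches:
# Opt a namespace in to the webhook by labeling it:
kubectl label namespace my-app webhook-inject=enabled --overwrite
# ...and have the MutatingWebhookConfiguration select only labeled namespaces:
namespaceSelector:
  matchLabels:
    webhook-inject: enabled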

How to create a Kubernetes service YAML file without --dry-run

It seems that the --dry-run flag is not available for kubectl create service:
kubectl create service --
--add-dir-header --log-backtrace-at --server
--alsologtostderr --log-dir --skip-headers
--as --log-file --skip-log-headers
--as-group --log-file-max-size --stderrthreshold
--cache-dir --log-flush-frequency --tls-server-name
--certificate-authority --logtostderr --token
--client-certificate --match-server-version --user
--client-key --namespace --username
--cluster --password --v
--context --profile --vmodule
--insecure-skip-tls-verify --profile-output --warnings-as-errors
--kubeconfig --request-timeout
Is there a way to create a service YAML file without the --dry-run=client option? I tried the command below and I am getting an error.
kubectl create service ns-service nodeport --dry-run=client -o yaml > nodeport.yaml
Error: unknown flag: --dry-run
See 'kubectl create service --help' for usage.
There are two ways to do this.
=================================================================
First way: using kubectl create service
What you are doing wrong here is giving the service name before the service type in the command; that's why it's failing.
The correct way is:
Syntax:
kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none] [options]
Example:
kubectl create service nodeport ns-service --tcp=80:80 --dry-run=client -o yaml
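To save the manifest to a file, as the question intended, just redirect the output:
kubectl create service nodeport ns-service --tcp=80:80 --dry-run=client -o yaml > nodeport.yaml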
=================================================================
Second way: using kubectl expose
Here you can use the kubectl expose command to create a service file.
Let's assume you have a pod running with the name nginx, and you want to create a service for that nginx pod.
Then you can write the command below to generate the service file.
Syntax:
kubectl expose [pod/deployment/replicaset] [name-of-pod/deployment/replicaset] --port=80 --target-port=8000 --dry-run=client -o yaml
Example:
kubectl expose pod nginx --port=80 --target-port=8000 --dry-run=client -o yaml
Output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    run: nginx
status:
  loadBalancer: {}

How to block traffic to db pod & service (DNS) from other namespaces in Kubernetes?

I have created 2 tenants (tenant1, tenant2) in 2 namespaces: tenant1-namespace and tenant2-namespace.
Each tenant has a db pod and its services.
How do I isolate the db pods/services, i.e. how do I restrict a pod/service in one namespace from accessing the other tenant's db pods?
I have used a service account for each tenant and applied network policies so that the namespaces are isolated.
kubectl get svc --all-namespaces
NAMESPACE           NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
tenant1-namespace   grafana-app    LoadBalancer   10.64.7.233   104.x.x.x     3000:31271/TCP   92m
tenant1-namespace   postgres-app   NodePort       10.64.2.80    <none>        5432:31679/TCP   92m
tenant2-namespace   grafana-app    LoadBalancer   10.64.14.38   35.x.x.x      3000:32226/TCP   92m
tenant2-namespace   postgres-app   NodePort       10.64.2.143   <none>        5432:31912/TCP   92m
So I want to restrict grafana-app to use only its own postgres db in its own namespace, not the one in the other namespace.
The problem is that, using the DNS-qualified service name (app-name.namespace-name.svc.cluster.local), the pods can access each other's db pods: grafana-app in namespace tenant1-namespace can access the postgres db in tenant2-namespace via postgres-app.tenant2-namespace.svc.cluster.local.
Updates: network policies
1)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
    - from:
        - podSelector: {}
2)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      app: grafana-app
  ingress:
    - from: []
Your NetworkPolicy objects are correct; I created an example with them and will demonstrate below.
If you still have access to the service on the other namespace using the FQDN, your NetworkPolicy may not be fully enabled on your cluster.
Run gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE" and look for these two snippets:
At the beginning of the description it shows whether the NetworkPolicy plugin is enabled at the master level; it should look like this:
addonsConfig:
  networkPolicyConfig: {}
In the middle of the description, you can see whether NetworkPolicy is enabled on the nodes. It should look like this:
name: cluster-1
network: default
networkConfig:
  network: projects/myproject/global/networks/default
  subnetwork: projects/myproject/regions/us-central1/subnetworks/default
networkPolicy:
  enabled: true
  provider: CALICO
If any of the above is different, check here: How to Enable Network Policy in GKE
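For reference, a sketch of enabling it with gcloud (two steps: the master add-on first, then the nodes; CLUSTER_NAME and ZONE are placeholders):
gcloud container clusters update CLUSTER_NAME --zone ZONE --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone ZONE --enable-network-policy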
Reproduction:
I'll create a simple example. I'll use the gcr.io/google-samples/hello-app:1.0 image for tenant1 and gcr.io/google-samples/hello-app:2.0 for tenant2, so it's simpler to see where it's connecting, but I'll use the names from your environment:
$ kubectl create namespace tenant1
namespace/tenant1 created
$ kubectl create namespace tenant2
namespace/tenant2 created
$ kubectl run -n tenant1 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/grafana-app created
$ kubectl run -n tenant1 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/postgres-app created
$ kubectl run -n tenant2 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/grafana-app created
$ kubectl run -n tenant2 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/postgres-app created
$ kubectl expose pod -n tenant1 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant1 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl expose pod -n tenant2 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant2 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl get all -o wide -n tenant1
NAME READY STATUS RESTARTS AGE IP NODE
pod/grafana-app 1/1 Running 0 100m 10.48.2.4 gke-cluster-114-default-pool-e5df7e35-ez7s
pod/postgres-app 1/1 Running 0 100m 10.48.0.6 gke-cluster-114-default-pool-e5df7e35-c68o
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/grafana-app LoadBalancer 10.1.23.39 34.72.118.149 8080:31604/TCP 77m run=grafana-app
service/postgres-app NodePort 10.1.20.92 <none> 8080:31033/TCP 77m run=postgres-app
$ kubectl get all -o wide -n tenant2
NAME READY STATUS RESTARTS AGE IP NODE
pod/grafana-app 1/1 Running 0 76m 10.48.4.8 gke-cluster-114-default-pool-e5df7e35-ol8n
pod/postgres-app 1/1 Running 0 100m 10.48.4.5 gke-cluster-114-default-pool-e5df7e35-ol8n
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/grafana-app LoadBalancer 10.1.17.50 104.154.135.69 8080:30534/TCP 76m run=grafana-app
service/postgres-app NodePort 10.1.29.215 <none> 8080:31667/TCP 77m run=postgres-app
Now, let's deploy your two rules: the first blocking all traffic from outside the namespace, the second allowing ingress to the grafana-app from outside the namespace:
$ cat default-deny-other-ns.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
    - from:
        - podSelector: {}
$ cat allow-grafana-ingress.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      run: grafana-app
  ingress:
    - from: []
Let's review the rules for Network Policy Isolation:
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
Then we will apply the rules on both namespaces because the scope of the rule is the namespace it's assigned to:
$ kubectl apply -n tenant1 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant2 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant1 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
$ kubectl apply -n tenant2 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
Now for final testing, I'll log inside grafana-app in tenant1 and try to reach the postgres-app in both namespaces and check the output:
$ kubectl exec -n tenant1 -it grafana-app -- /bin/sh
/ ### POSTGRES SAME NAMESPACE ###
/ # wget -O- postgres-app:8080
Connecting to postgres-app:8080 (10.1.20.92:8080)
Hello, world!
Version: 1.0.0
Hostname: postgres-app
/ ### GRAFANA OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://grafana-app.tenant2.svc.cluster.local:8080
Connecting to grafana-app.tenant2.svc.cluster.local:8080 (10.1.17.50:8080)
Hello, world!
Version: 2.0.0
Hostname: grafana-app
/ ### POSTGRES OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://postgres-app.tenant2.svc.cluster.local:8080
Connecting to postgres-app.tenant2.svc.cluster.local:8080 (10.1.29.215:8080)
wget: download timed out
You can see that the DNS name is resolved, but the NetworkPolicy blocks access to the backend pods.
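To confirm that only connectivity, not name resolution, is blocked, a quick check from inside the pod (assumes the image ships busybox's nslookup):
/ # nslookup postgres-app.tenant2.svc.cluster.local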
If, after double-checking that NetworkPolicy is enabled on the master and nodes, you still face the same issue, let me know in the comments and we can dig further.

How to create K8S deployment in specific namespace?

I am using kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml to create a deployment.
I want to create the deployment in my namespace examplenamespace.
How can I do this?
There are three possible solutions.
Specify namespace in the kubectl command:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n my-namespace
Specify the namespace in your YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
Change the default namespace in ~/.kube/config (note that the namespace is set on a context entry, not on the cluster itself):
apiVersion: v1
kind: Config
clusters:
  - name: "k8s-dev-cluster-01"
    cluster:
      server: "https://example.com/k8s/clusters/abc"
contexts:
  - name: "k8s-dev-context"
    context:
      cluster: "k8s-dev-cluster-01"
      namespace: "my-namespace"
By adding -n namespacename to the command you already have. This also works with other types of resources.
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n namespacename
First you need to create the namespace, like this:
kubectl create ns nameOfYourNamespace
Then you create your deployment under that namespace:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n examplenamespace
The ns in
kubectl create ns nameOfYourNamespace
stands for namespace.
The -n in
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n examplenamespace
stands for --namespace.
So you first create your namespace so that Kubernetes knows which namespace it is dealing with.
Then, when you are about to apply your changes, you add the -n flag (which stands for --namespace) so Kubernetes knows under which namespace to deploy and create the proper resources.