How to create a Kubernetes service YAML file without --dry-run

It seems that the --dry-run flag is not available for kubectl create service:
kubectl create service --
--add-dir-header --log-backtrace-at --server
--alsologtostderr --log-dir --skip-headers
--as --log-file --skip-log-headers
--as-group --log-file-max-size --stderrthreshold
--cache-dir --log-flush-frequency --tls-server-name
--certificate-authority --logtostderr --token
--client-certificate --match-server-version --user
--client-key --namespace --username
--cluster --password --v
--context --profile --vmodule
--insecure-skip-tls-verify --profile-output --warnings-as-errors
--kubeconfig --request-timeout
Is there a way to create a service YAML file without the --dry-run=client option? I tried the command below and got an error.
kubectl create service ns-service nodeport --dry-run=client -o yaml >nodeport.yaml
Error: unknown flag: --dry-run
See 'kubectl create service --help' for usage.

There are two ways to do this.
=================================================================
First way: using kubectl create service
The problem is that you put the service name before the service type in the command; that's why it's failing.
The correct way is:
Syntax:
kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none] [options]
Example:
kubectl create service nodeport ns-service --tcp=80:80 --dry-run=client -o yaml
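To write the manifest to a file, as in the original attempt, just redirect the output (the file name nodeport.yaml is taken from the question):
kubectl create service nodeport ns-service --tcp=80:80 --dry-run=client -o yaml > nodeport.yaml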
=================================================================
Second way: using kubectl expose
Here you can use the kubectl expose command to generate a service file.
Let's assume you have a pod running with the name nginx and you want to create a service for it. Then you can use the command below to generate the service file.
Syntax:
kubectl expose [pod/deployment/replicaset] [name-of-pod/deployment/replicaset] --port=80 --target-port=8000 --dry-run=client -o yaml
Example:
kubectl expose pod nginx --port=80 --target-port=8000 --dry-run=client -o yaml
Output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    run: nginx
status:
  loadBalancer: {}
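As with the first way, you can redirect this output to a file and create the service from it later (nginx-svc.yaml is just an illustrative file name):
kubectl expose pod nginx --port=80 --target-port=8000 --dry-run=client -o yaml > nginx-svc.yaml
kubectl apply -f nginx-svc.yaml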

Related

Edit Kubernetes resource using kubectl run --command

I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
  - name: tesst
    parameters:
      perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?
It would be better if you posted more info about the error or the trace you are getting when executing the command, but I have a question that could be a good insight into what is happening here.
Does the kubectl command that you are running inside the bitnami/kubectl:latest container have any context that allows it to connect to your cluster?
If you take a look at the kubectl Docker Hub documentation, you can see that you should map a config file into the pod in order to connect to your own cluster.
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
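Note also that, as written, the pipes in your kubectl run command are interpreted by your local shell, not inside the pod. A minimal sketch that runs the whole pipeline inside the container would wrap it in sh -c; this assumes the image provides jq and that the pod's ServiceAccount has RBAC permissions to get and update TestCR objects:
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- sh -c \
  'kubectl get testcr test -n my-namespace -o json | jq ".spec.strategies[0].parameters.perecetage=\"66\"" | kubectl apply -f -'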

How to see all the Kubernetes API calls made when using kubectl for CRD resources

I want to know how to use the API to CRUD my CRD resources, so that I can write an SDK to control the resources.
With kubectl it is just:
kubectl get inferenceservices test-sklearn -n kserve-test
kubectl apply -f xx.yaml -n kserve-test
kubectl delete -f xx.yaml -n kserve-test
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: "test-sklearn"
spec:
predictor:
sklearn:
storageUri: "http://xxxx"
To see the full API call process, capture a verbose trace into process_log:
kubectl get inferenceservices test-sklearn -n kserve-test --v=8 > process_log 2>&1
Use kubectl proxy to test:
kubectl proxy --address 0.0.0.0 --accept-hosts='^.*'
Test getting the resource status:
GET http://xxx:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn
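The other verbs follow the same URL convention, so a minimal CRUD sketch against the proxy could look like this (localhost:8001 is the proxy's default address; inferenceservice.json is a hypothetical manifest file):
# Create: POST the manifest to the collection URL
curl -X POST -H "Content-Type: application/json" --data @inferenceservice.json \
  http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices
# Read: GET a single object
curl http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn
# Delete: DELETE the object URL
curl -X DELETE http://localhost:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn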

How to format the output in Kubernetes?

I want to get specific output from a command, like getting the nodePorts and the load balancer of a service. How do I do that?
The question is light on detail about what exactly should be retrieved from Kubernetes, but I think I can provide a good baseline.
When you use Kubernetes, you are most probably using kubectl to interact with the kube-apiserver.
Some of the commands you can use to retrieve the information from the cluster:
$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME
$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME
Example:
Let's assume that you have a Service of type LoadBalancer (I've redacted some output to be more readable):
$ kubectl get service nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  clusterIP: 10.2.151.123
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30531
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: A.B.C.D
Getting a nodePort from this output could be done like this:
kubectl get svc nginx -o jsonpath='{.spec.ports[].nodePort}'
30531
Getting a loadBalancer IP from this output could be done like this:
kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
A.B.C.D
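If a service had several ports, jsonpath's range expression can iterate over all of them; a small sketch for the same service:
kubectl get svc nginx -o jsonpath='{range .spec.ports[*]}{.port}{" -> "}{.nodePort}{"\n"}{end}'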
You can also use kubectl with custom-columns:
kubectl get service -o=custom-columns=NAME:metadata.name,IP:.spec.clusterIP
NAME         IP
kubernetes   10.2.0.1
nginx        10.2.151.123
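Combining this with the fields shown earlier, a single custom-columns query can show the name, node port, and load balancer IP at once; a sketch assuming the nginx service from above:
kubectl get svc nginx -o custom-columns=NAME:.metadata.name,NODEPORT:.spec.ports[0].nodePort,LB-IP:.status.loadBalancer.ingress[0].ip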
There are a lot of possible ways to retrieve data with kubectl; you can read more about them in kubectl get --help:
-o, --output='': Output format. One of:
json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
See custom columns, golang template and jsonpath template.
Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output
Additional resources:
Kubernetes.io: Docs: Reference: Kubectl: Overview
Github.com: Kubernetes client: Python - if you would like to retrieve this information with Python
Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value
If you want to extract just single values, perhaps as part of scripts, then what you are searching for is -ojsonpath, as in this example:
kubectl get svc service-name -ojsonpath='{.spec.ports[0].port}'
which extracts just the value of the first port listed in the service spec.
docs - https://kubernetes.io/docs/reference/kubectl/jsonpath/
If you want to extract the whole definition of an object, such as a service, then what you are searching for is -oyaml, as in this example:
kubectl get svc service-name -oyaml
which outputs the whole service definition, all in YAML format.
If you want a more user-friendly description of a resource, such as a service, then you are searching for the describe command, as in this example:
kubectl describe svc service-name
docs - https://kubernetes.io/docs/reference/kubectl/overview/#output-options

How to block traffic to db pod & service (DNS) from other namespaces in Kubernetes?

I have created 2 tenants (tenant1, tenant2) in 2 namespaces, tenant1-namespace and tenant2-namespace.
Each tenant has a db pod and its services.
How do I isolate the db pods/services, i.e. how do I restrict a pod/service in one tenant's namespace from accessing the other tenant's db pods?
I have used a service account for each tenant and applied network policies so that the namespaces are isolated.
kubectl get svc --all-namespaces
NAMESPACE           NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
tenant1-namespace   grafana-app    LoadBalancer   10.64.7.233   104.x.x.x     3000:31271/TCP   92m
tenant1-namespace   postgres-app   NodePort       10.64.2.80    <none>        5432:31679/TCP   92m
tenant2-namespace   grafana-app    LoadBalancer   10.64.14.38   35.x.x.x      3000:32226/TCP   92m
tenant2-namespace   postgres-app   NodePort       10.64.2.143   <none>        5432:31912/TCP   92m
So I want to restrict grafana-app to use only the postgres db in its own namespace, not the one in the other namespace.
But the problem is that, using the DNS-qualified service name (app-name.namespace-name.svc.cluster.local), the pods can access each other's db pods: grafana-app in namespace tenant1-namespace can access the postgres db in tenant2-namespace via postgres-app.tenant2-namespace.svc.cluster.local.
Update: network policies
1)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
2)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      app: grafana-app
  ingress:
  - from: []
Your NetworkPolicy objects are correct; I created an example with them and will demonstrate below.
If you still have access to the service on the other namespace using FQDN, your NetworkPolicy may not be fully enabled on your cluster.
Run gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE" and look for these two snippets:
At the beginning of the description it shows whether the NetworkPolicy plugin is enabled at the master level; it should look like this:
addonsConfig:
  networkPolicyConfig: {}
In the middle of the description, you can see whether NetworkPolicy is enabled on the nodes. It should look like this:
name: cluster-1
network: default
networkConfig:
  network: projects/myproject/global/networks/default
  subnetwork: projects/myproject/regions/us-central1/subnetworks/default
networkPolicy:
  enabled: true
  provider: CALICO
If any of the above is different, check here: How to Enable Network Policy in GKE
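For reference, enabling it on an existing GKE cluster is usually a two-step gcloud operation (CLUSTER_NAME and ZONE are placeholders):
gcloud container clusters update CLUSTER_NAME --zone ZONE --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone ZONE --enable-network-policy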
Reproduction:
I'll create a simple example. I'll use the gcr.io/google-samples/hello-app:1.0 image for tenant1 and gcr.io/google-samples/hello-app:2.0 for tenant2, so it's simpler to see what is connecting where, but I'll use the names of your environment:
$ kubectl create namespace tenant1
namespace/tenant1 created
$ kubectl create namespace tenant2
namespace/tenant2 created
$ kubectl run -n tenant1 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/grafana-app created
$ kubectl run -n tenant1 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/postgres-app created
$ kubectl run -n tenant2 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/grafana-app created
$ kubectl run -n tenant2 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/postgres-app created
$ kubectl expose pod -n tenant1 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant1 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl expose pod -n tenant2 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant2 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl get all -o wide -n tenant1
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE
pod/grafana-app    1/1     Running   0          100m   10.48.2.4   gke-cluster-114-default-pool-e5df7e35-ez7s
pod/postgres-app   1/1     Running   0          100m   10.48.0.6   gke-cluster-114-default-pool-e5df7e35-c68o

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.23.39   34.72.118.149   8080:31604/TCP   77m   run=grafana-app
service/postgres-app   NodePort       10.1.20.92   <none>          8080:31033/TCP   77m   run=postgres-app
$ kubectl get all -o wide -n tenant2
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE
pod/grafana-app    1/1     Running   0          76m    10.48.4.8   gke-cluster-114-default-pool-e5df7e35-ol8n
pod/postgres-app   1/1     Running   0          100m   10.48.4.5   gke-cluster-114-default-pool-e5df7e35-ol8n

NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.17.50    104.154.135.69   8080:30534/TCP   76m   run=grafana-app
service/postgres-app   NodePort       10.1.29.215   <none>           8080:31667/TCP   77m   run=postgres-app
Now, let's deploy your two rules: the first blocking all traffic from outside the namespace, the second allowing ingress to the grafana-app from outside the namespace:
$ cat default-deny-other-ns.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
$ cat allow-grafana-ingress.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      run: grafana-app
  ingress:
  - from: []
Let's review the rules for Network Policy Isolation:
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
Then we apply the rules in both namespaces, because the scope of a rule is the namespace it's applied to:
$ kubectl apply -n tenant1 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant2 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant1 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
$ kubectl apply -n tenant2 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
Now for the final test, I'll open a shell inside grafana-app in tenant1, try to reach the services in both namespaces, and check the output:
$ kubectl exec -n tenant1 -it grafana-app -- /bin/sh
/ ### POSTGRES SAME NAMESPACE ###
/ # wget -O- postgres-app:8080
Connecting to postgres-app:8080 (10.1.20.92:8080)
Hello, world!
Version: 1.0.0
Hostname: postgres-app
/ ### GRAFANA OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://grafana-app.tenant2.svc.cluster.local:8080
Connecting to grafana-app.tenant2.svc.cluster.local:8080 (10.1.17.50:8080)
Hello, world!
Version: 2.0.0
Hostname: grafana-app
/ ### POSTGRES OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://postgres-app.tenant2.svc.cluster.local:8080
Connecting to postgres-app.tenant2.svc.cluster.local:8080 (10.1.29.215:8080)
wget: download timed out
You can see that the DNS name resolves, but the NetworkPolicy blocks access to the backend pods.
If, after double-checking that NetworkPolicy is enabled on the master and the nodes, you still face the same issue, let me know in the comments and we can dig further.

How to pass namespace in Kubernetes create deployment command

I am trying to specify the namespace when executing the kubectl create deployment command.
This is what I tried:
kubectl create deployment test --image=banu/image1 namespace=test
and this doesn't work.
And I want to expose this deployment using a ClusterIP service within the cluster itself, for that given namespace. How can I do that using the kubectl command line?
You can specify either the -n or the --namespace option:
kubectl create deployment test --image=nginx --namespace default --dry-run=client -o yaml
and see the resulting deployment YAML.
Using kubectl run:
kubectl run test --namespace test --image nginx --port 9090 --dry-run=client -o yaml
You need to create a namespace like this:
kubectl create ns test
ns stands for namespace, so with kubectl you say you want to create a namespace with the name test.
Then, while creating the deployment, you add the namespace you want:
kubectl create deployment test --image=banu/image1 -n test
The flag -n stands for namespace; that way you tell Kubernetes that all resources related to that deployment will be under the test namespace.
In order to see all the resources under a specific namespace:
kubectl get all -n test
--namespace and -n are the same thing.
Use -n test instead of namespace=test
Sample with nginx image:
$ kubectl create deployment nginx --image=nginx -n test
deployment.apps/nginx created
$ kubectl get deploy -n test
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 8s
For the second part of the question, you need to create a service and use the labels from the deployment as the selector.
You can find the correct labels by running something like:
kubectl -n test describe deploy test | grep Labels:
and then apply a service like:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
spec:
  ports:
  - name: test
    port: 80 # Change this port
    protocol: TCP
  type: ClusterIP
  selector:
    # Here you need to define output from previous step
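Alternatively, since the question asks for the kubectl command line, kubectl expose can generate such a service with the deployment's labels as the selector; a sketch assuming the container listens on port 8080 (adjust --target-port to your image):
kubectl expose deployment test -n test --port=80 --target-port=8080 --type=ClusterIP
Add --dry-run=client -o yaml to preview the generated manifest instead of creating it.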