no object named "pod" is registered - kubernetes

What does this mean?
-bash-4.2# kubectl create -f ./pod.yaml
Error: unable to recognize "./pod.yaml": no object named "pod" is registered
Here is pod.yaml. Capitalizing 'pod' or not makes no difference, and the file validates as proper YAML.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080

Can you please run kubectl version and report the results? I expect that either your apiserver or your kubectl version is outdated, and thus doesn't know about the v1 API.
For what it's worth, that pod spec works for me with both kubectl and my apiserver at version 1.0.3.
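For reference, checking both versions looks roughly like this (the exact output format varies by release; the values below are illustrative):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", ...}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", ...}
If the two disagree, or either predates v1.0, that would explain the unrecognized v1 Pod object.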

I was able to create the pod:
master $ kubectl create -f pod.yaml
pod/nginx created
master $ kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          1m
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
The most likely issue is with the Kubernetes installation on your server. Can you check the health status of the k8s components? I don't see any issue with the pod manifest; it should work.
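A quick way to check control-plane health (the componentstatuses API is deprecated in recent releases, but it works on clusters of this vintage; the output below is illustrative):
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}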

Related

minikube kubectl create file error (validating data: apiVersion not set)

I am a student and new to minikube as well as Linux. My teacher did the same and his pod was created, but I am getting an error. I don't know what the problem is. Should I ignore the validation error?
kubectl create -f myfirstpod.yaml
error: error validating "myfirstpod.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
cat myfirstpod.yaml
kind: Pod
apiversion: v1
metadata:
  name: myfirstpod
spec:
  containers:
    name: container1
    image: aamirpinger/helloworld:latest
    ports:
      containerPort: 80
I am using these versions:
minikube version: v1.11.0
kubectl version 1.8.4
Kubernetes 1.18.3
Docker 19.03.11
Seems like you have a typo: it should be apiVersion instead of apiversion. Also, the indentation in the pod spec is incorrect; the entries under containers and ports must be list items (note the leading dashes below).
You can use this:
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpod
spec:
  containers:
  - name: container1
    image: aamirpinger/helloworld:latest
    ports:
    - containerPort: 80
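As a side tip, assuming your kubectl is recent enough (the --dry-run=client form needs roughly kubectl 1.18+; older clients use a bare --dry-run flag), you can catch mistakes like this before touching the cluster:
$ kubectl create -f myfirstpod.yaml --dry-run=client
pod/myfirstpod created (dry run)
If validation fails, you get the error immediately without creating anything.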

Pods not connecting with service having same labels - Kubernetes

I created one pod with the label app: appenv and one service of type NodePort with the selector app: appenv. But when I run kubectl get ep <service-name> it shows no endpoints (meaning the service is not selecting that pod).
Here are my pod.yaml and service.yaml
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: appenv
spec:
  containers:
  - name: app
    image: aathith/lbt:v1
    ports:
    - name: app-port
      containerPort: 8082
  restartPolicy: Never
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: appenv
  type: NodePort
  ports:
  - name: http
    port: 8082
    targetPort: 8082
    nodePort: 30082
    protocol: TCP
(Screenshots of the output of kubectl get po --show-labels, kubectl get svc, and kubectl get svc app omitted.)
Why am I not able to connect my pod to this service?
How can I change the above files to connect with each other?
Your pod is in the "Completed" state; that is the problem. It is not "Running". Why? Because the command in the container finished with exit code 0. Normally a container's main command should not exit, unless it's a Job or CronJob. A Service only routes to pods that are Running and Ready, so a Completed pod never shows up in the endpoints. You see what I mean?
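You can see this for yourself; a sketch of what the checks would look like (output will vary):
$ kubectl get pods -l app=appenv
NAME   READY   STATUS      RESTARTS   AGE
pod1   0/1     Completed   0          5m
$ kubectl get ep app
NAME   ENDPOINTS   AGE
app    <none>      5m
Once the container runs a long-lived process, the pod stays Running and the endpoint appears.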

Find why I am getting 502 Bad Gateway error on kubernetes

I am using Kubernetes. I have an Ingress which talks to my container service. We have exposed a web API, which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and I have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you get this 502 gateway error because the Ingress controller is not configured correctly.
Please try to do it with an installed Ingress controller, as in the example below. It will do everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.39.240.1     <none>        443/TCP                      29d
nginx-ingress-controller        LoadBalancer   10.39.243.140   35.X.X.15     80:32324/TCP,443:31425/TCP   19m
nginx-ingress-default-backend   ClusterIP      10.39.252.175   <none>        80/TCP                       19m
nginx-ingress-controller - in short, it handles requests that match your Ingress rules and directs them to the right services
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
Then append an extra line to the index page, so we can recognize this pod when we connect to it later.
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This automatically creates a Service, which will look like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have 2 services; each service specification starts with -host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check Ingress and hosts
$ kubectl get ingress
NAME              HOSTS                       ADDRESS        PORTS   AGE
two-svc-ingress   my.pod.svc,nginx.test.svc   35.228.230.6   80      57m
8) Explanation of why I installed the Ingress controller.
Connect to the ingress controller pod
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I installed the nginx ingress controller earlier, after deploying Ingress.yaml the nginx-ingress-controller picked up the changes and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers:
## start server my.pod.svc
## start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
Run kubectl get svc to get your nginx-ingress-controller external IP, then:
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that an Ingress needs to be in the same namespace as the services it points to. If you have services in several namespaces, you need to create an Ingress for each namespace; see the fragment below.
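A minimal fragment, reusing the names from the question and assuming your services live in a namespace called my-namespace (the namespace name is a placeholder):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  namespace: my-namespace
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
Here my-pod-serv must also exist in my-namespace for the Ingress to find it.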
I would need to set up a cluster in order to test your yml files.
Just to help you debug, follow these steps (example commands after the list):
1- Get the logs of the my-pod container using kubectl logs <pod-name> and make sure everything is working.
2- Use port-forward to expose your container and test it directly.
3- Make sure the service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If all three things work, then there is a problem with your ingress configuration.
I am not sure if I explained it in a detailed way; let me know if something is not clear.
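Roughly, the commands for those steps would look like this (the pod name is a placeholder; substitute the real name from kubectl get pods):
$ kubectl logs <my-pod-name>
$ kubectl port-forward <my-pod-name> 8086:8086
$ curl http://localhost:8086/
$ kubectl get ep my-pod-serv
If the endpoints list is empty at the last step, the service selector does not match the pod labels.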

How to update a container only?

I am working locally with minikube, and every time I make a change to the code I delete the service (and the deployment) and create a new one.
This operation generates a new IP for each container, so I also need to update my frontend, and also to re-insert data into my db container, since I lose all data every time I delete the service.
It's way too much wasted time to work efficiently.
I would like to know if there is a way to update a container without generating new IPs, and without deleting the pod (because I don't want to delete my db container every time I update the backend code).
It's easy to update the existing Deployment with a new image, without having to delete it.
Imagine we have a YAML file with the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To run this deployment, run the following command:
$ kubectl create -f nginx-deployment.yaml --record
(--record appends the current command to the annotations of the created or updated resource. This is useful for future review, such as investigating which command was executed in each Deployment revision, and for rolling back.)
To see the Deployment rollout status, run
$ kubectl rollout status deployment/nginx-deployment
To update nginx image version, just run the command:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Or you can edit existing Deployment with the command:
$ kubectl edit deployment/nginx-deployment
To see the status of the Deployment update process, run the command:
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
or
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s
Each time you update the Deployment, it updates the Pods by creating a new ReplicaSet, scaling it up to 3 replicas, and scaling the old ReplicaSet down to 0. If you update the Deployment again while the previous update is still in progress, it starts creating a new ReplicaSet immediately, without waiting for the previous update to complete.
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1180356465   3         3         3       4s
nginx-deployment-2538420311   0         0         0       56s
If you made a typo while editing the Deployment (for example, nginx:1.91), you can roll it back to the previous good version.
First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.
To see the details of each revision, run:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
Now you can roll back to the previous version using the command:
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
Or you can roll back to a specific revision:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
For more information, please read the part of the Kubernetes documentation related to Deployments.
First of all, in your front-end, use DNS names instead of IP addresses to reach your backend. This will save you from rebuilding your front-end app every time you deploy your backend.
That being said, there is no need to delete your service just to deploy a new version of your backend. In fact, you just need to update your deployment, making it refer to the new docker image you have built using the latest code for your backend.
Finally, as far as I understand, you have both your application and your database inside the same pod. This is not a good practice; you should separate them so that you don't cause downtime in your database when you deploy a new version of your code.
As a side note, not sure if this is the case, but if you are using minikube as a development environment you're probably doing it wrong. You should use docker alone with volume binding, but that's out of scope of your question.
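As an illustration of the DNS-name advice above, the frontend can point at the backend Service's stable DNS name instead of a pod IP. A hypothetical fragment (backend-service and port 8080 are placeholders, not from the question):
env:
- name: BACKEND_URL
  value: "http://backend-service:8080"   # Service DNS name, resolved inside the cluster
The name keeps working no matter how often the backend pods are replaced.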
Use kops and create a production-like cluster in AWS on the free tier.
In order to fix this, make sure you use a load balancer for your frontends. Create a service for your db container that exposes its port so your frontends can reach it, and put that service name in the manifest for your frontends so it stays static. Service discovery will take care of the IP addresses, and your containers will automatically connect to the ports. You can also set up persistent storage for your DBs. When you update your frontend code, use this to update your containers so nothing else changes:
kubectl set image deployment/helloworld-deployment basicnodeapp=buildmystartup/basicnodeapp:2
Here is how I would do a state-full app in production AWS using wordpress for an example.
###############################################################################
#
# Creating a stateful app with persistent storage and front end containers
#
###############################################################################
* Here is how you create a stateful app using volumes and persistent storage for production.
* To start off, we can automate the storage volume creation for our mysql server with a StorageClass object and a PersistentVolumeClaim, like so:
$ cat storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b
$ cat pv-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
* Let's go ahead and create these so they are ready for our deployment of mysql:
$ kubectl create -f storage.yml
storageclass "standard" created
$ kubectl create -f pv-claim.yml
persistentvolumeclaim "db-storage" created
* Let's also create our secrets file, which will be needed for mysql and wordpress:
$ cat wordpress-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secrets
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
  # random sha1 strings - change all these lines
  authkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OA==
  loggedinkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OQ==
  secureauthkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MQ==
  noncekey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MA==
  authsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mg==
  secureauthsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mw==
  loggedinsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NA==
  noncesalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NQ==
$ kubectl create -f wordpress-secrets.yml
* Take note of the names we assigned. We will need these for the mysql deployment.
* We created the storage in us-east-1b, so let's set a node label on our node in that AZ so our deployment is scheduled to that node and can attach our volume:
$ kubectl label nodes ip-172-20-48-74.ec2.internal storage=mysql
node "ip-172-20-48-74.ec2.internal" labeled
* Here is our mysql pod definition. Notice that at the bottom we use a nodeSelector.
* We will need to use that same one for our deployment so it can reach us-east-1b.
$ cat wordpress-db.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1
  selector:
    app: wordpress-db
  template:
    metadata:
      name: wordpress-db
      labels:
        app: wordpress-db
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        args:
          - "--ignore-db-dir=lost+found"
        ports:
        - name: mysql-port
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: db-password
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-storage
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: db-storage
      nodeSelector:
        storage: mysql
* Before we go on to the deployment, let's expose a service on port 3306 so wordpress can connect:
$ cat wordpress-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: wordpress-db
  type: NodePort
$ kubectl create -f wordpress-db-service.yml
service "wordpress-db" created
* Now let's work on the deployment. We are going to use EFS to store all our pictures and blog posts, so let's create that in us-east-1b as well.
* First, let's create our EFS NFS share:
$ aws efs create-file-system --creation-token 1
{
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0
    },
    "CreationTime": 1501863105.0,
    "OwnerId": "812532545097",
    "FileSystemId": "fs-55ed701c",
    "LifeCycleState": "creating",
    "CreationToken": "1",
    "PerformanceMode": "generalPurpose"
}
$ aws efs create-mount-target --file-system-id fs-55ed701c --subnet-id subnet-7405f010 --security-groups sg-ffafb98e
{
    "OwnerId": "812532545097",
    "MountTargetId": "fsmt-a2f492eb",
    "IpAddress": "172.20.53.4",
    "LifeCycleState": "creating",
    "NetworkInterfaceId": "eni-cac952dd",
    "FileSystemId": "fs-55ed701c",
    "SubnetId": "subnet-7405f010"
}
* Before we launch the deployment, let's make sure our mysql server is up and connected to the volume we created:
$ kubectl get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
db-storage   Bound     pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c   8Gi        RWO           standard       51m
* OK, status Bound means our claim is attached to the volume.
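* If you want to double-check the binding from the other side, you can also list the underlying PersistentVolume (illustrative output):
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                STORAGECLASS   AGE
pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c   8Gi        RWO           Delete          Bound     default/db-storage   standard       51m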
* Now let's launch the wordpress frontend with two replicas:
$ cat wordpress-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4-php7.0
        # uncomment to fix perm issue, see also https://github.com/kubernetes/kubernetes/issues/2630
        # command: ['bash', '-c', 'chown www-data:www-data /var/www/html/wp-content/upload && apache2 -DFOREGROUND']
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: db-password
        - name: WORDPRESS_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authkey
        - name: WORDPRESS_LOGGED_IN_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinkey
        - name: WORDPRESS_SECURE_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthkey
        - name: WORDPRESS_NONCE_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncekey
        - name: WORDPRESS_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authsalt
        - name: WORDPRESS_SECURE_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthsalt
        - name: WORDPRESS_LOGGED_IN_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinsalt
        - name: WORDPRESS_NONCE_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncesalt
        - name: WORDPRESS_DB_HOST
          value: wordpress-db
        volumeMounts:
        - mountPath: /var/www/html/wp-content/uploads
          name: uploads
      volumes:
      - name: uploads
        nfs:
          server: us-east-1b.fs-55ed701c.efs.us-east-1.amazonaws.com
          path: /
* Notice we put together a DNS name for the NFS share:
* AZ.fs-id.efs.region.amazonaws.com
* Now let's create our deployment:
$ kubectl create -f wordpress-web.yml
$ cat wordpress-web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - port: 80
    targetPort: http-port
    protocol: TCP
  selector:
    app: wordpress
  type: LoadBalancer
* And now the load balancer for our two nodes:
$ kubectl create -f wordpress-web-service.yml
* Now let's find our ELB and create a Route53 DNS name for it:
$ kubectl get services
$ kubectl describe service wordpress
Name:                   wordpress
Namespace:              default
Labels:                 <none>
Annotations:            <none>
Selector:               app=wordpress
Type:                   LoadBalancer
IP:                     100.70.74.90
LoadBalancer Ingress:   acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com
Port:                   <unset>  80/TCP
NodePort:               <unset>  30601/TCP
Endpoints:              100.124.209.16:80,100.94.7.215:80
Session Affinity:       None
Events:
  FirstSeen   LastSeen   Count   From                 SubObjectPath   Type     Reason                 Message
  ---------   --------   -----   ----                 -------------   ----     ------                 -------
  38m         38m        1       service-controller                   Normal   CreatingLoadBalancer   Creating load balancer
  38m         38m        1       service-controller                   Normal   CreatedLoadBalancer    Created load balancer
$ kubectl get deployments
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wordpress-deployment   2         2         2            2           2m
$ kubectl get pods
NAME                                    READY     STATUS              RESTARTS   AGE
sysdig-agent-4sxv2                      1/1       Running             0          3d
sysdig-agent-nb2wk                      1/1       Running             0          3d
sysdig-agent-z42zj                      1/1       Running             0          3d
wordpress-db-79z87                      1/1       Running             0          54m
wordpress-deployment-2971992143-c8gg4   0/1       ContainerCreating   0          1m
wordpress-deployment-2971992143-h36v1   1/1       Running             0          1m
I think you actually need to solve 2 issues:
1. Do not restart the service. Restart only your pod. Then the service won't change its IP address.
2. The database and your application don't need to be 2 containers in the same pod. They should be 2 separate pods. Then you need another service to expose the database to your application.
So the final solution should look like this:
1. Database pod: runs once, never restarted during development.
2. Database service: created once, never restarted.
3. Application pod: this is the only thing you reload when the application code changes. It needs to access the database, so you write literally "database-service:3306" or something like this in your application. "database-service" here is the name of the service you created in (2). A sketch of that service follows below.
4. Application service: created once, never restarted. You access the application from outside of minikube by using the IP address of this service.
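To make the database service concrete, here is a minimal sketch, assuming the database pod carries the label app: database and listens on 3306 (both names are placeholders, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: database
  ports:
  - port: 3306
    targetPort: 3306
The application then connects to database-service:3306, which stays stable across pod restarts.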

expose kubernetes pod to internet

I created a pod with an api and a web docker container in Kubernetes using a yml file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: the service will allocate a public IP address, which you can find using kubectl get services, with output like the following:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
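For example, rolling out a new image behind the untouched service could look like this (the container and image names come from the deployment sketch above and are placeholders):
$ kubectl set image deployment/test-dply mycontainer=myimage:v2
The LoadBalancer keeps its IP while the deployment replaces the pods.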
You have created a pod, not a deployment.
Then you tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
If you already have a pod and a service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing service: go to the Services and Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a yaml file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}