Kubernetes service without an external IP, not able to access the service

I'm trying to deploy my container in a Kubernetes cluster, but I'm not getting an external IP and hence I'm not able to access the server.
This is my .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: service-app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: service-app
    spec:
      containers:
      - image: magneto-challengue:1.0
        imagePullPolicy: ""
        name: magneto-challengue
        ports:
        - containerPort: 8080
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: service-app
  type: NodePort
When I use the kubectl get svc,deployment,pods command, I get the following output:
As you can see, I'm not getting an external IP. With the kubectl describe service service-app command I get the following output:
I tried with the 10.107.179.10 IP, but it didn't work.
Any idea?

You cannot use the 10.107.179.10 IP to access a pod from outside the Kubernetes cluster, because that IP is the ClusterIP: it is only valid inside the cluster and can be used from another pod, for example.
A Service of type NodePort does not get an EXTERNAL-IP. To access a pod from outside the cluster via a NodePort Service, use NodeIP:NodePort, where NodeIP is the IP address of any of your cluster nodes.
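For example, with the Service above, something like the following should work from outside the cluster (the node IP and the allocated NodePort will differ in your environment):
kubectl get nodes -o wide       # the INTERNAL-IP column shows the node IPs
kubectl get svc service-app     # PORT(S) shows the allocated NodePort, e.g. 8080:3xxxx/TCP
curl http://<NodeIP>:<NodePort> # reach the app on any node IP at that port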

Related

Kubernetes service per pod of type NodePort

I want to use a service per pod, but I couldn't find how to run a service per pod of type NodePort.
I was trying to use the example below, but every service is created with type ClusterIP:
https://github.com/metacontroller/metacontroller/tree/master/examples/service-per-pod
I need to create each pod together with a service of type NodePort.
Current configuration
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  annotations:
    service-per-pod-label: "statefulset.kubernetes.io/nginx"
    service-per-pod-ports: "80:80"
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
ClusterIP is the default Service type. Define type: NodePort in the spec section if you want to use that instead:
spec:
  type: NodePort
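As a rough sketch, a per-pod Service of type NodePort for the first replica could look like this (the Service name here is just illustrative; statefulset.kubernetes.io/pod-name is the label Kubernetes sets automatically on StatefulSet pods):
apiVersion: v1
kind: Service
metadata:
  name: nginx-0   # hypothetical per-pod Service name
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: nginx-0
  ports:
  - port: 80
    targetPort: 80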

Bad Gateway in Rails app with Kubernetes setup

I'm trying to set up a Rails app within a Kubernetes cluster (created with k3d on my local machine):
k3d cluster create --api-port 6550 -p "8081:80#loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
I can get an Ingress running which correctly exposes a running Nginx deployment ("Welcome to nginx!").
(I took this example from here: https://k3d.io/usage/guides/exposing_services/)
So I know my setup is working (with nginx).
Now I simply want to point that Ingress to my Rails app, but I always get a "Bad Gateway". (I also tried to point it to my other services (elasticsearch, kibana, pgadminer), but I always get a "Bad Gateway".)
I can see my Rails app running at http://localhost:62333/.
last lines of my Dockerfile:
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001
Why does my API return the "Bad Gateway" but Nginx does not?
Does it have something to do with the selectors and labels, which are created by kompose convert?
This is my complete Rails-API Deployment:
kubectl apply -f api-deployment.yml -f api.service.yml -f ingress.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
        kompose.version: 1.22.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/metashop-net: 'true'
        io.kompose.service: api
    spec:
      containers:
      - env:
        - name: APPLICATION_URL
          valueFrom:
            configMapKeyRef:
              key: APPLICATION_URL
              name: env
        - name: DEBUG
          value: 'true'
        - name: ELASTICSEARCH_URL
          valueFrom:
            configMapKeyRef:
              key: ELASTICSEARCH_URL
              name: env
        image: metashop-backend-api:DockerfileJeanKlaas
        name: api
        ports:
        - containerPort: 3001
        resources: {}
        # volumeMounts:
        #   - mountPath: /usr/src/app
        #     name: api-claim0
      # restartPolicy: Always
      # volumes:
      #   - name: api-claim0
      #     persistentVolumeClaim:
      #       claimName: api-claim0
status: {}
api-service.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    app: api
    io.kompose.service: api
  name: api
spec:
  type: ClusterIP
  ports:
  - name: '3001'
    protocol: TCP
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3001
configmap.yml
apiVersion: v1
data:
  APPLICATION_URL: localhost:3001
  ELASTICSEARCH_URL: elasticsearch
  RAILS_ENV: development
  RAILS_MAX_THREADS: '5'
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api-env
  name: env
I hope I didn't miss anything.
Thank you in advance.
EDIT: added the Endpoints object requested in a comment:
kind: Endpoints
apiVersion: v1
metadata:
  name: api
  namespace: default
  labels:
    app: api
    io.kompose.service: api
  selfLink: /api/v1/namespaces/default/endpoints/api
subsets:
- addresses:
  - ip: 10.42.1.105
    nodeName: k3d-metashop-cluster-server-0
    targetRef:
      kind: Pod
      namespace: default
      apiVersion: v1
  ports:
  - name: '3001'
    port: 3001
    protocol: TCP
The problem was within the Dockerfile:
I had not defined ENV RAILS_LOG_TO_STDOUT true, so I was not able to see any errors in the pod logs.
After I added ENV RAILS_LOG_TO_STDOUT true, I saw errors like "database xxxx does not exist".
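For reference, with that change the end of the Dockerfile from the question would look roughly like this:
ENV RAILS_LOG_TO_STDOUT true
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001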

Accessing K8S Service in AWS Private Subnet not working

Scenario in an AWS EKS cluster:
Pods are running behind the service "web-ui-svc" in a public subnet.
Other pods are running behind the service "web-api-svc", which is in a private subnet.
An Ingress was created for "web-ui", say "web-ui-ing".
Now the Ingress "web-ui-ing" routes traffic to "web-ui-svc".
Expected:
The "web-ui-svc" pods want to communicate with "web-api-svc" on port 1000 for API calls.
I tried passing an ENV like "web-api-svc:1000". Not working. But I could curl it from inside the node.
Also, when I access the Ingress URL of "web-ui-ing", the UI page comes up, but connectivity to web-api-svc is not happening.
Note: Both subnets are in the same VPC, which the EKS cluster is using.
Do I need to configure a proxy to route traffic to the corresponding svc?
web-ui YAML (in AWS public subnet):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web-ui
  namespace: web
  name: web-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web-ui
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.network/net: "true"
        io.kompose.service: web-ui
    spec:
      containers:
      - env:
        - name: WEB_API_ENDPOINT
          value: "http://web-api:1000"
        image: ****.dkr.ecr.us-east-1.amazonaws.com/web/ui:latest
        name: web-ui
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
      nodeSelector:
        web_group: "web_group"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web-ui
  namespace: web
  name: web-ui
spec:
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
  type: NodePort
  selector:
    io.kompose.service: web-ui
status:
  loadBalancer: {}
web-api.yaml (in AWS private subnet):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web-api
  namespace: web
  name: web-api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web-api
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.network/webui_net: "true"
        io.kompose.service: web-api
    spec:
      containers:
      - image: ****.dkr.ecr.us-east-1.amazonaws.com/test/webapi:latest
        name: web-api
        ports:
        - containerPort: 1000
        resources: {}
      restartPolicy: Always
      nodeSelector:
        web_api_group: "web_api_group"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web-api
  namespace: web
  name: web-api
spec:
  ports:
  - name: "1000"
    port: 1000
    targetPort: 1000
  selector:
    io.kompose.service: web-api
status:
  loadBalancer: {}
Do I need to create some kind of proxy here?

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little bit different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command in order to expose it properly.
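Assuming the manifest above has been applied, something like this should print a URL you can open in the browser (the exact IP depends on your minikube setup):
minikube service identityold-svc --url
# prints something like http://<minikube-ip>:30036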

Cannot access service at external ip Kubernetes

I have a problem: I cannot access the service with curl although I have an external IP; the request times out. Here are my services:
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
crawler-manager-1   NodePort   10.103.18.210   192.168.0.10   3001:30029/TCP   2h
redis               NodePort   10.100.67.138   192.168.0.11   6379:30877/TCP   5h
and here is my YAML service file:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
      convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  type: NodePort
  externalIPs:
  - 192.168.0.10
  ports:
  - name: "3001"
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: crawler-manager-1
    run: redis
status:
  loadBalancer: {}
Here is my deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
      convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: crawler-manager-1
    spec:
      hostNetwork: true
      containers:
      - args:
        - npm
        - start
        env:
        - name: DB_HOST
          value: mysql
        - name: DB_NAME
        - name: DB_PASSWORD
        - name: DB_USER
        - name: REDIS_URL
          value: redis://cbpo-redis
        image: localhost:5000/manager
        name: crawler-manager-1
        ports:
        - containerPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
Has anyone had a problem like this when working with Kubernetes? I need to check whether the two services in my namespace can connect to each other. Thanks so much.
Instead of communicating with your services through IP addresses, you can use their DNS names.
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
“Headless” (without a cluster IP) Services are also assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. Unlike normal Services, this resolves to the set of IPs of the pods selected by the Service. Clients are expected to consume the set or else use standard round-robin selection from the set.
For more info, please check Kubernetes DNS for Services.
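For example, from another pod in the cbpo-example namespace (service name, namespace and port taken from the question), you could try:
curl http://crawler-manager-1:3001                                  # short name, same namespace
curl http://crawler-manager-1.cbpo-example.svc.cluster.local:3001   # fully qualified name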
Make sure you see endpoints for the app. One reason this can happen is a pod label mismatch; if I remember correctly, it's the Service selector that has to match the pod labels.
kubectl get endpoints
NAME         ENDPOINTS            AGE
kubernetes   192.168.63.13:8080   1d
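A quick way to compare the Service selector with the pod labels (namespace and names taken from the question):
kubectl -n cbpo-example describe svc crawler-manager-1   # shows the Selector and Endpoints
kubectl -n cbpo-example get pods --show-labels           # compare the pod labels against that selector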