I have a problem: I cannot access my service with curl although I have an external IP; the request times out. Here are my services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
crawler-manager-1 NodePort 10.103.18.210 192.168.0.10 3001:30029/TCP 2h
redis NodePort 10.100.67.138 192.168.0.11 6379:30877/TCP 5h
and here is my service YAML file:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
      convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  type: NodePort
  externalIPs:
  - 192.168.0.10
  ports:
  - name: "3001"
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: crawler-manager-1
    run: redis
status:
  loadBalancer: {}
Here is my deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
      convert -f docker-compose.yml
    kompose.version: 1.17.0 (a74acad)
  creationTimestamp: null
  labels:
    io.kompose.service: crawler-manager-1
  name: crawler-manager-1
  namespace: cbpo-example
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: crawler-manager-1
    spec:
      hostNetwork: true
      containers:
      - args:
        - npm
        - start
        env:
        - name: DB_HOST
          value: mysql
        - name: DB_NAME
        - name: DB_PASSWORD
        - name: DB_USER
        - name: REDIS_URL
          value: redis://cbpo-redis
        image: localhost:5000/manager
        name: crawler-manager-1
        ports:
        - containerPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
Has anyone had a problem like this when working with Kubernetes? I need to check whether the two services in my namespace can connect to each other. Thanks so much.
Instead of communicating with your services through IP addresses, you can use their DNS names.
“Normal” (not headless) Services are assigned a DNS A record for a
name of the form my-svc.my-namespace.svc.cluster.local. This resolves
to the cluster IP of the Service.
“Headless” (without a cluster IP)
Services are also assigned a DNS A record for a name of the form
my-svc.my-namespace.svc.cluster.local. Unlike normal Services, this
resolves to the set of IPs of the pods selected by the Service.
Clients are expected to consume the set or else use standard
round-robin selection from the set.
For more info, please check Kubernetes DNS for Services
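For the services in the question, that would mean connecting from another pod in the cluster using names like these (a sketch, assuming a client pod that has curl and redis-cli installed):

curl http://crawler-manager-1.cbpo-example.svc.cluster.local:3001
redis-cli -h redis.cbpo-example.svc.cluster.local -p 6379 ping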
Make sure you see endpoints for the app. One reason this can happen is a mismatch between the pod labels and the Service's selector.
kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.63.13:8080 1d
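For the Service in the question you could compare its selector against the pod labels like this (service name and namespace taken from the question above); note that the posted selector also contains run: redis, which the Deployment's pod template does not set, so the endpoint list is likely to stay empty until the selector and the pod labels match:

kubectl get endpoints crawler-manager-1 -n cbpo-example
kubectl get pods -n cbpo-example --show-labels
kubectl describe service crawler-manager-1 -n cbpo-example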
I am trying to install Velero for Kubernetes. During the installation, when installing MinIO, I changed its service type from ClusterIP to NodePort. My pods run successfully, and I can also see that the NodePort service is up and running.
master-k8s#masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get pods -n velero -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
minio-8649b94fb5-vk7gv   1/1     Running   0          16m   10.244.1.102   node1k8s-virtual-machine   <none>           <none>
master-k8s#masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get svc -n velero
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio   NodePort   10.111.72.207   <none>        9000:31481/TCP   53m
When I try to access my service, the port number changes from 31481 to 45717 by itself. Every time I correct the port number and hit enter, it changes back to the new port, and I am not able to access my application.
This is my MinIO service file:
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
What have I done so far?
I looked at the logs and everything shows success, with no errors. I also tried it with a LoadBalancer service. With LoadBalancer the port does not change, but I am still not able to access the application.
I found nothing on Google about this issue.
I also checked pods and services in all namespaces to see whether these port numbers are already in use. No other service uses them.
What do I want?
Can you please help me find out what causes my application to change its port? Where is the issue and how do I fix it? How can I access the application dashboard?
Update Question
This is the full file. It may help to find my mistake.
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9002
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - port: 9002
    nodePort: 31482
    targetPort: 9002
    protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
Edit 2: Logs of the pod
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.1.108:9000 http://127.0.0.1:9000
Console: http://10.244.1.108:33045 http://127.0.0.1:33045
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
Edit 3: Logs of the pod
master-k8s#masterk8s-virtual-machine:~/velero-1.9.5$ kubectl logs minio-8649b94fb5-qvzfh -n velero
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.2.131:9000 http://127.0.0.1:9000
Console: http://10.244.2.131:36649 http://127.0.0.1:36649
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
You can set the nodePort number explicitly in the port config so that it won't be assigned automatically.
Try this Service:
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    nodePort: 31481
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
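After applying it, you could verify that the NodePort stays fixed and try it from outside the cluster (the node IP below is a placeholder for one of your own nodes' addresses):

kubectl get svc minio -n velero
curl http://<node-ip>:31481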
I'm trying to switch an existing app from docker-compose to Kubernetes (first time using it).
My app is deployed on AWS EKS using Fargate nodes. It runs well, but I would like to access the RabbitMQ management UI for debugging purposes.
The RabbitMQ deployment/service files I am using are the following:
# rabbit-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: rabbit
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.26.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.service: rabbit
    spec:
      containers:
      - image: rabbitmq:3.9.13-management
        name: rabbit
        ports:
        - containerPort: 15672
        - containerPort: 5672
        - containerPort: 8080
        resources: {}
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: "guest"
        - name: RABBITMQ_DEFAULT_PASS
          value: "guest"
      restartPolicy: Always
status: {}
and
# rabbit-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  type: NodePort
  ports:
  - name: "15672"
    port: 15672
    targetPort: 15672
  - name: "5672"
    port: 5672
    targetPort: 5672
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: rabbit
status:
  loadBalancer: {}
I also followed the instructions to create a new user:
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl add_user test test
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_user_tags test administrator
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
I can access the web UI at
http://localhost:8001/api/v1/namespaces/default/services/rabbit:15672/proxy/
after starting the proxy with kubectl proxy; however, logging in with test/test still gives me a Login failed message.
Posting the answer out of comments.
First, what kubectl proxy is:
Creates a proxy server or application-level gateway between localhost
and the Kubernetes API server. It also allows serving static content
over specified HTTP path. All incoming data enters through one port
and gets forwarded to the remote Kubernetes API server port, except
for the path matching the static content path.
Also, kubectl proxy works with HTTP requests; it does not work with raw TCP traffic (this is probably the reason why it did not work).
You can read more in a good answer - kubectl proxy vs kubectl port-forward
Common options to access a service inside the cluster are:
use kubectl port-forward - for local development and testing purposes (see the example below)
use a LoadBalancer or NodePort service type - more advanced options which can be used across clusters and in production environments. Find more about service types.
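For the RabbitMQ management UI from the question, a minimal port-forward sketch could look like this (assuming the rabbit Service shown above and a free local port 15672):

kubectl port-forward svc/rabbit 15672:15672
# then open http://localhost:15672 and log in with the user created earlier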
I'm trying to deploy my container in a Kubernetes cluster, but I'm not getting an external IP and hence I'm not able to access the server.
This is my .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: service-app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: service-app
    spec:
      containers:
      - image: magneto-challengue:1.0
        imagePullPolicy: ""
        name: magneto-challengue
        ports:
        - containerPort: 8080
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: service-app
  type: NodePort
When I use the kubectl get svc,deployment,pods command, I get the following response:
As you can see, I'm not getting an external IP. With the kubectl describe service service-app command I get the following response:
I tried the 10.107.179.10 IP, but it didn't work.
Any idea?
You cannot use the 10.107.179.10 IP to access a pod from outside the Kubernetes cluster, because that IP is the ClusterIP; it is only valid inside the cluster and can be used from another pod, for example.
A NodePort-type Service does not get an EXTERNAL-IP. To access a pod from outside the cluster via a NodePort Service, use NodeIP:NodePort, where NodeIP is the IP address of any of your Kubernetes cluster nodes.
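For example, with the service-app Service from the question (the node IP and assigned NodePort are placeholders you would read from your own cluster):

kubectl get nodes -o wide       # shows each node's IP addresses
kubectl get svc service-app     # shows the assigned NodePort, e.g. 8080:3xxxx/TCP
curl http://<node-ip>:<node-port>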
I have created a Kubernetes deployment and service YAML for a static website. The external IP address is also resolved by the Kubernetes service. But when I try to access the website through curl or a browser, the connection times out.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
K8s deployment yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ohno-website
  labels:
    app: ohno-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ohno-website
  template:
    metadata:
      labels:
        app: ohno-website
    spec:
      containers:
      - name: ohno-website
        image: gkganeshr/ohno-website:v0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
k8s service yml:
apiVersion: v1
kind: Service
metadata:
  name: ohno-website
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  selector:
    app: ohno-website
ohno_fooserver#cloudshell:~ (fourth-webbing-279817)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.16.0.1 <none> 443/TCP 8h
ohno-website LoadBalancer 10.16.12.162 34.70.213.174 80:31977/TCP 7h4m
The target port defined in the service definition YAML is incorrect. It should match the container port from the pod spec in the deployment YAML.
targetPort: 9376
should be changed to
targetPort: 80
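For reference, a minimal sketch of the corrected ports section (only targetPort changes, to match containerPort: 80 in the deployment):

  ports:
  - protocol: TCP
    port: 80
    targetPort: 80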
I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the Kubernetes guestbook example.
Service definition and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller-redis
        tier: backend
        role: database
        target: poller
    spec:
      containers:
      - name: poller-redis
        image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
        ports:
        - containerPort: 6379
        env:
        - name: GET_HOSTS_FROM
          value: dns
      imagePullSecrets:
      - name: gcr-json-key
App deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
        imagePullPolicy: Always
      imagePullSecrets:
      - name: gcr-json-key
Relevant (I hope) Kubernetes info:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 24d
poller-redis 10.0.0.137 <none> 6379/TCP 20d
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
poller 1 1 1 1 12d
poller-redis 1 1 1 1 4d
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 24d
poller-redis 172.17.0.7:6379 20d
Inside the poller pod (custom app), I get environment variables created for Redis:
# env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
However, if I try to connect to that port, I cannot. Doing something like:
nc -vz poller-redis 6379
fails.
What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.
Any ideas, please?
Figured this out in the end, it looks like I misunderstood how the service selectors work in Kubernetes.
The service definition I had posted was:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 6379
    targetPort: 6379
The problem is that spec.selector did not match the labels on the Redis pods; the selector has to select the labels set in the Deployment's pod template (which, in my case, are the same labels I put in the Service's metadata.labels). I did not find this spelled out as clearly as I would have liked in the Kubernetes documentation, but there you have it. Now my service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: poller-redis
  labels:
    app: poller-redis
    tier: backend
    role: database
    target: poller
spec:
  selector:
    app: poller-redis
    tier: backend
    role: database
    target: poller
  ports:
  - port: 6379
    targetPort: 6379
I also now use straight up DNS lookup (i.e. ping poller-redis) rather than trying to connect to localhost:6379 from my target pods.
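One way to confirm the fixed selector, using the same commands as above, is to check that the Service's endpoint now points at the Redis pod's IP (pod names and IPs will differ in your cluster):

kubectl get endpoints poller-redis
kubectl get pods -o wide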
It could be related to kube-dns possibly not running.
From inside the poller pod can you verify that poller-redis resolves?
Does the following work from inside the container?
nc -v 10.0.0.137 6379
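A quick way to check both kube-dns and name resolution (the label below is the one kube-dns/CoreDNS pods normally carry, the pod name is a placeholder, and nslookup must exist in the image):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl exec <poller-pod-name> -- nslookup poller-redis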
One kube-dns service running in kube-system is enough. Did you run nc -vz poller-redis 6379 from a pod in the same namespace as the Redis service?
poller-redis is the short DNS name of the Redis service within the same namespace. It will not work from a different namespace.
kube-dns is not available on the nodes themselves, so if you want to run nc or a Redis client directly on a node, use the ClusterIP of the Redis service instead of the DNS name.
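If the client pod lives in a different namespace, the fully qualified name should still resolve, for example (assuming the Redis service is in the default namespace):

nc -vz poller-redis.default.svc.cluster.local 6379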