I have deployed a Drupal instance but the instance Endpoints are not visible although the containers deployed successfully - kubernetes

I have deployed a Drupal instance, but I see that the instance Endpoints are not visible although the containers deployed successfully.
The container logs don't point in any direction.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
      type: frontend
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: drupal
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
**********************
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30010
  selector:
    app: drupal
    type: frontend
************************
root@ip-172-31-32-54:~# microk8s.kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
drupal-deployment-6fdd7975f-l4j2z   1/1     Running   0          9h
drupal-deployment-6fdd7975f-p7sfz   1/1     Running   0          9h
root@ip-172-31-32-54:~# microk8s.kubectl get services
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
drupal-service   NodePort    10.152.183.6   <none>        80:30010/TCP   9h
kubernetes       ClusterIP   10.152.183.1   <none>        443/TCP        34h
***********************
root@ip-172-31-32-54:~# microk8s.kubectl describe service drupal-service
Name: drupal-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=drupal,type=frontend
Type: NodePort
IP: 10.152.183.6
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30010/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Any direction is really helpful.
NOTE: This works perfectly when running a container using the command
docker run --name some-drupal -p 8080:80 -d drupal
Thank you,
Anish

Your service selector has two labels:
Selector: app=drupal,type=frontend
but your pod has only one of them:
spec:
  template:
    metadata:
      labels:
        app: drupal
Just make sure that all labels required by the service actually exist on the pod, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
      type: frontend
  template:
    metadata:
      labels:
        app: drupal
        type: frontend # <--------- look here
    spec:
      containers:
        - name: drupal
          image: drupal
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
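If it helps to verify the fix, the Service should pick up endpoints once the relabeled pods are running (the manifest file name and the endpoint address below are illustrative):
microk8s.kubectl apply -f drupal-deployment.yaml
microk8s.kubectl get endpoints drupal-service
NAME             ENDPOINTS       AGE
drupal-service   10.1.43.12:80   9h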

Related

Can't access Kubernetes service with Docker Desktop on Win10 machine

This is my pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: demo-voting-app
spec:
  containers:
    - name: voting-app
      image: kodekloud/examplevotingapp_vote:v1
      ports:
        - containerPort: 80
And this is my service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
  selector:
    name: voting-app-pod
    app: demo-voting-app
After executing
kubectl get pods,svc
I get:
NAME                 READY   STATUS    RESTARTS   AGE
pod/voting-app-pod   1/1     Running   0          37m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        11d
service/voting-service   NodePort    10.107.145.225   <none>        80:30004/TCP   6m45s
I tried to access the service through http://localhost:30004 and also tried http://127.0.0.1:30004, with no success.
Please declare the selector and template labels in the deployment as below. For more details on these attributes, refer to the Kubernetes documentation.
spec:
  selector:
    matchLabels:
      app: demo-voting-app
  template:
    metadata:
      labels:
        app: demo-voting-app
And in the service as below:
selector:
  app: demo-voting-app
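As a quick sanity check (assuming the corrected manifests are applied in the default namespace; the endpoint address shown is illustrative), the service should now list an endpoint:
kubectl get endpoints voting-service
NAME             ENDPOINTS      AGE
voting-service   10.1.0.15:80   37m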

Nginx minikube ingress: 503 Server error

I am trying to use minikube to deploy a sample Flask app, but I am getting a 503 nginx error. Please note that I am able to access the app using the NodePort service config.
I checked with the minikube IP, which is mapped to localhost, and tried to access the app, but I am getting the 503 error. Not sure if I missed anything. I enabled the minikube addon for nginx ingress.
Here are my files -
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-deployment
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: <repo>/sample-flask-app:1.0
          ports:
            - containerPort: 5000
          env:
            - name: APPLICATION_SETTINGS
              value: prd_config.py
      imagePullSecrets:
        - name: jfrog-secret
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-service
  labels:
    app: flaskapp
spec:
  selector:
    app: flaskapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaskapp-ingress
  labels:
    app: flaskapp
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - host: mydashboard.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flaskapp-service
                port:
                  number: 5000
Ingress status:
minikube kubectl -- get ingress flaskapp-ingress
NAME               CLASS   HOSTS             ADDRESS     PORTS   AGE
flaskapp-ingress   nginx   mydashboard.com   localhost   80      18m
Cluster status:
minikube kubectl -- get all
NAME                                       READY   STATUS    RESTARTS       AGE
pod/flaskapp-deployment-7f59f96fd5-j9mv9   1/1     Running   1 (103m ago)   15h

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/flaskapp-deployment   ClusterIP   10.103.143.58   <none>        5000/TCP   34m
service/flaskapp-service      ClusterIP   10.111.242.99   <none>        5000/TCP   15h
service/kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP    35h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flaskapp-deployment   1/1     1            1           15h

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/flaskapp-deployment-7f59f96fd5   1         1         1       15h
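Two checks that often narrow down a 503 from ingress-nginx (a sketch, not a confirmed fix; the /etc/hosts entry assumes the mydashboard.com host rule from Ingress.yaml above):
minikube kubectl -- get endpoints flaskapp-service
echo "$(minikube ip) mydashboard.com" | sudo tee -a /etc/hosts
curl -v http://mydashboard.com/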

Kubernetes External IP is working only in the cluster

I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME                              READY   STATUS    RESTARTS   AGE
sasank-website-78864ff54b-656ld   1/1     Running   0          30m
sasank-website-78864ff54b-qdn65   1/1     Running   0          30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: webtesting
          image: 9110727495/userdetails:latest
          ports:
            - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
    - 192.168.1.10
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: website
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>         443/TCP        102m
testingsite   NodePort    10.96.246.110   192.168.1.10   80:31438/TCP   5m9s
When I try to access the IP with port 31438 it refuses the connection, yet port 80 works from within the cluster. When I try the same IP from outside the cluster it refuses to connect even on port 80. I am not sure how to understand this. Please help. Thank you.
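For context, a minimal sketch of how the two access paths differ (the node address is a placeholder): the externalIP answers on the service port only on a machine that actually owns 192.168.1.10, while the NodePort answers on every node's real IP:
curl http://192.168.1.10/        # externalIP path, service port 80
curl http://<node-ip>:31438/     # NodePort path, any node's own address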

Two kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:
From Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace "observability". I also tried to reference the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that does not work either.
What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME                            READY   STATUS    RESTARTS   AGE
elasticsearch-9d495b84f-j2297   1/1     Running   0          15s
kibana-65bc7f9c4-s9cv4          1/1     Running   0          15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch   NodePort   10.104.250.175   <none>        9200:30083/TCP,9300:30059/TCP   1m
kibana          NodePort   10.102.124.171   <none>        5601:30124/TCP                  1m
I start my containers with the command
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "9200"
      port: 9200
      targetPort: 9200
    - name: "9300"
      port: 9300
      targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: set-vm-max-map-count
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ['sysctl', '-w', 'vm.max_map_count=262144']
          securityContext:
            privileged: true
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
          ports:
            - containerPort: 9200
            - containerPort: 9300
          resources:
            requests:
              memory: "3Gi"
              cpu: "1"
            limits:
              memory: "3Gi"
              cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "5601"
      port: 5601
      targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
        - env:
            # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: SERVER_NAME
              value: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.7.1
          name: kibana
          ports:
            - containerPort: 5601
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elastic with the ClusterIP type:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
The service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
elasticsearch         NodePort    10.106.5.94     <none>        9200:31598/TCP,9300:32018/TCP   26s
elasticsearch-local   ClusterIP   10.101.178.13   <none>        9200/TCP                        26s
kibana                NodePort    10.99.73.118    <none>        5601:30004/TCP                  26s
And I reference Elastic as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the Kibana container:
{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; instead use a ClusterIP. If you need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
  value: http://elasticsearch-local:9200
# ...
NodePort services do not create a DNS entry (e.g. elasticsearch.observability.svc.cluster.local) on Kubernetes.
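One way to see what actually resolves inside the cluster (a hedged diagnostic, not part of the original answer; busybox:1.28 is just a common image choice for nslookup):
kubectl -n observability run -it --rm dnsdebug --image=busybox:1.28 --restart=Never -- nslookup elasticsearch-local.observability.svc.cluster.local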
Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, by default it tries to go to port 80.
This is what kibana.yaml looks like now:
...
spec:
  containers:
    - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana:5601
      image: docker.elastic.co/kibana/kibana-oss:6.7.1
      imagePullPolicy: IfNotPresent
      name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it passed from "No living connections" to "Running". I am running the nodes on GCP, and I had to open the firewalls for it to work. What's your environment?
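For reference, a sketch of the kind of firewall rule that was needed here, assuming GCE instances and the default NodePort range (the rule name, network, and source range are placeholders):
gcloud compute firewall-rules create allow-nodeports --network default --allow tcp:30000-32767 --source-ranges 0.0.0.0/0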

Cannot access Kubernetes service outside cluster

I have created a Kubernetes service for my deployment, and a load balancer has assigned an external IP along with the node port, but I am unable to access the service from outside the cluster using the external IP and NodePort.
The service has been properly created and is up and running.
Below is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-portal
  template:
    metadata:
      labels:
        app: dev-portal
    spec:
      containers:
        - name: dev-portal
          image: bhavesh/ti-portal:develop
          imagePullPolicy: Always
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "1G"
              cpu: "1"
          ports:
            - containerPort: 9000
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  selector:
    app: dev-portal
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
      nodePort: 30429
  type: LoadBalancer
For some reason, I am unable to access my service from outside and a message 'Refused to connect' is shown.
Update
The service is described using kubectl describe below:
Name: trakinvest-dev-portal
Namespace: default
Labels: app=trakinvest-dev-portal
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"trakinvest-dev-portal"},"name":"trakinvest-dev-portal","...
Selector: app=trakinvest-dev-portal
Type: LoadBalancer
IP: 10.245.185.62
LoadBalancer Ingress: 139.59.54.108
Port: <unset> 9000/TCP
TargetPort: 9000/TCP
NodePort: <unset> 30429/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
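One thing that stands out in the output above (an observation, not a confirmed fix): the described service's selector is app=trakinvest-dev-portal, while the deployment manifest labels its pods app=dev-portal, which would leave Endpoints empty. A quick way to compare:
kubectl get pods -l app=trakinvest-dev-portal   # empty if no pods match the service selector
kubectl get pods --show-labels                  # the labels the pods actually carry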