Unable to view pods in Kubernetes

I am unable to list my pods (kubectl get pods) after creating them.
kubectl apply -f my-config.yml
returns the log below:
configmap "my-config" deleted
daemonset "my" deleted
service "my" deleted
configmap "my-config" created
daemonset "my" created
service "my" created
Yet kubectl get pods does not list any of the newly created pods.
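For reference, commands along these lines should show whether the DaemonSet scheduled any pods at all, or surface scheduling and image-pull errors (a general sketch, assuming the DaemonSet name l5d from the config below and the default namespace):
# how many pods the DaemonSet wants vs. has ready
kubectl get daemonset l5d
# events explaining why pods are missing or failing
kubectl describe daemonset l5d
# in case the objects landed in another namespace
kubectl get pods --all-namespaces -o wide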
Below is the config file I used:
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.0.0
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

Related

Can't access StatefulSet DBs; assuming the haproxy-ingress YAML is incorrect

I'm trying to bring up three instances of RavenDB and I can't access them, although the logs show they are running fine, including the certificate for running in secured mode.
The deployment is done within GKE.
The certificate was generated using Let's Encrypt and is bound to the external IP of haproxy.
I have no idea what the issue is.
Result of the kubectl describe ingress command:
kubectl describe ingress
Name: ravendb
Labels: app=ravendb
Namespace: default
Address: 34.111.56.107
Default backend: default-http-backend:80 (10.80.1.5:8080)
Rules:
Host Path Backends
---- ---- --------
a.example.development.run
/ ravendb-0:443 (10.80.0.14:443)
tcp-a.example.development.run
/ ravendb-0:38888 (10.80.0.14:38888)
b.example.development.run
/ ravendb-1:443 (10.80.0.12:443)
tcp-b.example.development.run
/ ravendb-1:38888 (10.80.0.12:38888)
c.example.development.run
/ ravendb-2:443 (10.80.0.13:443)
tcp-c.example.development.run
/ ravendb-2:38888 (10.80.0.13:38888)
Annotations: ingress.kubernetes.io/backends:
{"k8s-be-32116--bad4c61c2f1d097c":"HEALTHY","k8s1-bad4c61c-default-ravendb-0-38888-31a8aae1":"UNHEALTHY","k8s1-bad4c61c-default-ravendb-0-...
ingress.kubernetes.io/forwarding-rule: k8s2-fr-pocrmcsc-default-ravendb-gtrvt7cq
ingress.kubernetes.io/ssl-passthrough: true
ingress.kubernetes.io/target-proxy: k8s2-tp-pocrmcsc-default-ravendb-gtrvt7cq
ingress.kubernetes.io/url-map: k8s2-um-pocrmcsc-default-ravendb-gtrvt7cq
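For completeness, these are the kinds of checks that show the same state and the health of each per-node backend (a sketch, using the service names defined below):
kubectl describe ingress ravendb
# do the per-node services actually have endpoints behind them?
kubectl get endpoints ravendb-0 ravendb-1 ravendb-2
kubectl get pods -l app=ravendb -o wide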
These are the YAML files:
HAPROXY
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --reload-strategy=reusesocket
        ports:
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: ingress-controller
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: stat
    port: 1936
RAVENDB
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ravendb-settings
  namespace: default
  labels:
    app: ravendb
data:
  ravendb-0: >
    {
      "Setup.Mode": "None",
      "DataDir": "/data/RavenData",
      "Security.Certificate.Path": "/ssl/ssl",
      "ServerUrl": "https://0.0.0.0",
      "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
      "PublicServerUrl": "https://a.example.development.run",
      "PublicServerUrl.Tcp": "tcp://tcp-a.example.development.run:38888",
      "License.Path": "/license/license.json",
      "License.Eula.Accepted": "true"
    }
  ravendb-1: >
    {
      "Setup.Mode": "None",
      "DataDir": "/data/RavenData",
      "Security.Certificate.Path": "/ssl/ssl",
      "ServerUrl": "https://0.0.0.0",
      "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
      "PublicServerUrl": "https://b.example.development.run",
      "PublicServerUrl.Tcp": "tcp://tcp-b.example.development.run:38888",
      "License.Path": "/license/license.json",
      "License.Eula.Accepted": "true"
    }
  ravendb-2: >
    {
      "Setup.Mode": "None",
      "DataDir": "/data/RavenData",
      "Security.Certificate.Path": "/ssl/ssl",
      "ServerUrl": "https://0.0.0.0",
      "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
      "PublicServerUrl": "https://c.example.development.run",
      "PublicServerUrl.Tcp": "tcp://tcp-c.example.development.run:38888",
      "License.Path": "/license/license.json",
      "License.Eula.Accepted": "true"
    }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: default
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - /opt/RavenDB/Server/Raven.Server --config-path /config/$HOSTNAME
        image: ravendb/ravendb:latest
        imagePullPolicy: Always
        name: ravendb
        ports:
        - containerPort: 443
          name: http-api
          protocol: TCP
        - containerPort: 38888
          name: tcp-server
          protocol: TCP
        - containerPort: 161
          name: snmp
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /ssl
          name: ssl
        - mountPath: /license
          name: license
        - mountPath: /config
          name: config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
      - name: ssl
        secret:
          defaultMode: 420
          secretName: ravendb-ssl
      - configMap:
          defaultMode: 420
          name: ravendb-settings
        name: config
      - name: license
        secret:
          defaultMode: 420
          secretName: ravendb-license
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  replicas: 3
  selector:
    matchLabels:
      app: ravendb
  volumeClaimTemplates:
  - metadata:
      labels:
        app: ravendb
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ravendb
  namespace: default
  labels:
    app: ravendb
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: a.example.development.run
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ravendb-0
            port:
              number: 443
  - host: tcp-a.example.development.run
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ravendb-0
            port:
              number: 38888
  - host: b.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-1
            port:
              number: 443
  - host: tcp-b.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-1
            port:
              number: 38888
  - host: c.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-2
            port:
              number: 443
  - host: tcp-c.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-2
            port:
              number: 38888
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-0
  namespace: default
  labels:
    app: ravendb
    node: "0"
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-1
  namespace: default
  labels:
    app: ravendb
    node: "1"
spec:
  ports:
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-1
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-2
  namespace: default
  labels:
    app: ravendb
    node: "2"
spec:
  ports:
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-2
  type: ClusterIP
SECRETS
apiVersion: v1
kind: Secret
metadata:
  name: ravendb-license
  namespace: default
  labels:
    app: ravendb
type: Opaque
data:
  license.json: >
---
apiVersion: v1
kind: Secret
metadata:
  name: ravendb-ssl
  namespace: default
  labels:
    app: ravendb
type: Opaque
data:
  ssl: >

Why is EMQX persistence not working on Azure Kubernetes when it works on local Kubernetes?

When using a Kubernetes (minikube) StatefulSet on my local machine, EMQX rules persist, because the same pod IP is assigned to the EMQX node each time, for example /opt/emqx/data/mnesia/emqx#172.17.0.9. Even if I delete the pod, the new pod gets the same IP as before, and everything works as it should.
But when I use AKS (Azure Kubernetes Service) to deploy EMQX on an AKS cluster using Azure Files, the pod IP is different every time. For example, if /opt/emqx/data/mnesia/emqx#10.1.1.10 is assigned to the EMQX node and I delete the pod, then /opt/emqx/data/mnesia/emqx#10.1.1.11 might be assigned instead.
So nothing persists.
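A quick way to see which node directory EMQX writes into after each restart (a sketch; the pod and namespace names match the Azure manifest below):
# list the mnesia directories on the mounted volume
kubectl exec -n emqx-test emqx-0 -- ls /opt/emqx/data/mnesia
# the suffix after "emqx#" is the node name EMQX booted with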
Local code
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage5
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: emqx-pv5
spec:
  capacity:
    storage: 300Mi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage5
  local:
    path: /opt/emqx/data/mnesia
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: Service
metadata:
  name: emqx-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: emqx
  ports:
  - name: mqtt
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: mqttssl
    port: 8883
    protocol: TCP
    targetPort: 8883
  - name: mgmt
    port: 8081
    protocol: TCP
    targetPort: 8081
  - name: websocket
    port: 8083
    protocol: TCP
    targetPort: 8083
  - name: wss
    port: 8084
    protocol: TCP
    targetPort: 8084
  - name: dashboard
    port: 18083
    protocol: TCP
    targetPort: 18083
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: emqx-statefulset
  labels:
    app: emqx
spec:
  replicas: 1
  serviceName: emqx-headless
  selector:
    matchLabels:
      app: emqx
  template:
    metadata:
      labels:
        app: emqx
    spec:
      containers:
      - name: emqx
        image: emqx/emqx:4.2.7
        ports:
        - name: emqx-dashboard
          containerPort: 18083
        - name: ssl-port
          containerPort: 8883
        - name: emqx-port
          containerPort: 1883
        - name: ssl-dashboard
          containerPort: 18084
        env:
        - name: EMQX_LOADED_PLUGINS
          value: emqx_management,emqx_recon,emqx_retainer,emqx_dashboard,emqx_rule_engine,emqx_auth_username
        - name: EMQX_CLUSTER__DISCOVERY
          value: k8s
        - name: EMQX_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__APISERVER
          value: https://kubernetes.default:443
        - name: EMQX_CLUSTER__K8S__SERVICE_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
          value: ip
        - name: EMQX_CLUSTER__K8S__APP_NAME
          value: emqx
        - name: EMQX_ALLOW_ANONYMOUS
          value: "false"
        - name: EMQX_LISTENER__SSL__EXTERNAL__MAX_CONNECTIONS
          value: "1024000"
        - name: EMQX_AUTH__USER__PASSWORD_HASH
          value: sha256
        - name: EMQX_AUTH__USER__1__USERNAME
          value:
        - name: EMQX_AUTH__USER__1__PASSWORD
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__LOGIN
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__PASSWORD
          value:
        - name: EMQX_DASHBOARD__LISTENER__HTTPS
          value: "18084"
        - name: MQX_DASHBOARD__LISTENER__HTTPS__ACCEPTORS
          value: "4"
        - name: EMQX_DASHBOARD__LISTENER__HTTPS__MAX_CLIENTS
          value: "512"
        tty: true
        volumeMounts:
        - name: emqx-mnesia
          mountPath: "/opt/emqx/data/mnesia"
  volumeClaimTemplates:
  - metadata:
      name: emqx-mnesia
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage5"
      resources:
        requests:
          storage: 300Mi
Azure Kubernetes code
apiVersion: v1
kind: ServiceAccount
metadata:
  name: emqx
  namespace: emqx-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: emqx
subjects:
- kind: ServiceAccount
  name: emqx
  namespace: emqx-test
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: emqx-files
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
- actimeo=30
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: emqx-pvc
  namespace: emqx-test
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: emqx-files
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: emqx
  namespace: emqx-test
spec:
  ports:
  - name: emqx-dashboard
    port: 80
    targetPort: 18083
    protocol: TCP
  - name: ssl-port
    port: 8883
    targetPort: ssl-port
    protocol: TCP
  - name: emqx-port
    port: 1883
    targetPort: emqx-port
    protocol: TCP
  - name: ssl-dashboard
    port: 443
    targetPort: 18084
    protocol: TCP
  selector:
    app: emqx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: emqx
  labels:
    app: emqx
  namespace: emqx-test
spec:
  serviceName: "emqx"
  selector:
    matchLabels:
      app: emqx
  replicas: 1
  template:
    metadata:
      labels:
        app: emqx
    spec:
      containers:
      - name: emqx
        image: emqx/emqx:4.2.7
        ports:
        - name: emqx-dashboard
          containerPort: 18083
        - name: ssl-port
          containerPort: 8883
        - name: emqx-port
          containerPort: 1883
        - name: ssl-dashboard
          containerPort: 18084
        env:
        - name: EMQX_LOADED_PLUGINS
          value: emqx_management,emqx_recon,emqx_retainer,emqx_dashboard,emqx_rule_engine,emqx_auth_username
        - name: EMQX_CLUSTER__DISCOVERY
          value: k8s
        - name: EMQX_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__APISERVER
          value: https://kubernetes.default:443
        - name: EMQX_CLUSTER__K8S__NAMESPACE
          value: emqx-test
        - name: EMQX_CLUSTER__K8S__SERVICE_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
          value: ip
        - name: EMQX_CLUSTER__K8S__APP_NAME
          value: emqx
        - name: EMQX_ALLOW_ANONYMOUS
          value: "false"
        - name: EMQX_LISTENER__SSL__EXTERNAL__MAX_CONNECTIONS
          value: "1024000"
        - name: EMQX_AUTH__USER__PASSWORD_HASH
          value: sha256
        - name: EMQX_AUTH__USER__1__USERNAME
          value:
        - name: EMQX_AUTH__USER__1__PASSWORD
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__LOGIN
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__PASSWORD
          value:
        - name: EMQX_DASHBOARD__LISTENER__HTTPS
          value: "18084"
        - name: MQX_DASHBOARD__LISTENER__HTTPS__ACCEPTORS
          value: "4"
        - name: EMQX_DASHBOARD__LISTENER__HTTPS__MAX_CLIENTS
          value: "512"
        volumeMounts:
        - name: emqx-data
          mountPath: "/opt/emqx/data/mnesia"
        tty: true
      volumes:
      - name: emqx-data
        persistentVolumeClaim:
          claimName: emqx-pvc
In the Kubernetes documentation on StatefulSet Basics you can read:
The Pods' ordinals, hostnames, SRV records, and A record names have
not changed, but the IP addresses associated with the Pods may have
changed. In the cluster used for this tutorial, they have. This is why
it is important not to configure other applications to connect to Pods
in a StatefulSet by IP address.
This is expected, and as you see, this behaviour is mentioned in the documentation.
But why do you see one behaviour on minikube and a different one on Azure? Pod IP addresses are assigned by the CNI. On minikube the default CNI is the Docker bridge, and on Azure it's the Azure CNI, so which address gets assigned is up to the CNI.
It's best to assume that pod IP addresses are never static. Use DNS, via the StatefulSet's headless service, for communication with StatefulSet pods (and Services for everything else), and never hardcode pod IP addresses.
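For example, with the headless Service and StatefulSet from the local manifest above (emqx-headless / emqx-statefulset in the default namespace), each pod gets a stable DNS name of the form <pod>.<headless-service>.<namespace>.svc.cluster.local, which survives restarts even though the IP changes (a sketch):
# stable name of the first replica:
#   emqx-statefulset-0.emqx-headless.default.svc.cluster.local
# verify from a throwaway pod inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup emqx-statefulset-0.emqx-headless.default.svc.cluster.local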

Kubernetes pods refusing connections to each other

I'm trying to implement an Elastic Stack in Kubernetes via minikube. I've barely started, as I'm writing basically everything from scratch to get a better understanding of K8s, and because the YAMLs provided by Elastic don't explain what is done and why, so I'm doing my own thing.
The problem I've run into is that my Kibana pod cannot communicate with my Elasticsearch pod, although I've set up the necessary services and ports on my pods.
Where it gets weird is that
kubectl port-forward services/elastic-http 9200
works flawlessly and lets me get information from my Elasticsearch pod. However, when I enter a pod via
kubectl exec -it <pod-name> -- /bin/bash
and try to use curl to get the same information my browser just showed me, the connection is refused and my pods won't talk to one another.
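Roughly, the in-pod check looks like this (a sketch; the hostname matches the elastic-http Service defined below, and curl may need to be installed in the container first):
# from inside the Kibana pod
curl -v http://elastic-http.default.svc.cluster.local:9200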
My configs look as follows.
Kibana.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: my-kb
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      name: kibana
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.3.0
        ports:
        - containerPort: 5601
          name: kibana-web
        volumeMounts:
        - name: kb-conf
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: kb-conf
        configMap:
          name: kibana-config
          items:
          - key: kibana.yml
            path: kibana.yml
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-http
  namespace: default
spec:
  selector:
    app: kibana
  ports:
  - protocol: TCP
    port: 5601
    name: kibana-web
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config
  namespace: default
data:
  kibana.yml: |
    elasticsearch.hosts: ["http://elastic-http.default.svc:9200"]
ElasticSearch.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  namespace: default
spec:
  capacity:
    storage: 15Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pv-claim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: elastic-deploy
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      name: elasticsearch
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        ports:
        - containerPort: 9200
          name: elastic-http
          protocol: TCP
        - containerPort: 9300
          name: node-sniffer
          protocol: TCP
        #readinessProbe:
        #  httpGet:
        #    port: 9200
        #  periodSeconds: 5
        volumeMounts:
        - name: elastic-conf
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: elastic-data
          mountPath: /var/data
        securityContext:
          privileged: true
      initContainers:
      - name: sysctl-adj
        image: busybox
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
      volumes:
      - name: elastic-data
        persistentVolumeClaim:
          claimName: elastic-pv-claim
      - name: elastic-conf
        configMap:
          name: elastic-config
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
---
kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: elastic-http
    name: elastic-http
  - port: 9300
    targetPort: node-sniffer
    name: node-finder
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: elastic-config
  namespace: default
data:
  elasticsearch.yml: |
    xpack.security.enabled: false
    node.master: true
    path.data: /var/data
    http.port: 9200
I think your service is of type ClusterIP; if you want to reach it in a browser, one option is to change the service type to NodePort.
You can see more details here.
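A minimal sketch of what that change would look like for the elastic-http service above (everything else unchanged; the nodePort value is only an example):
kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - name: elastic-http
    port: 9200
    targetPort: 9200
    nodePort: 30920   # optional; must fall in the cluster's NodePort range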
I'm not sure about this part in the service:
targetPort: elastic-http
targetPort: node-sniffer
Could you try removing them and trying again?
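In other words, something along these lines, with numeric target ports instead of the named ones (a sketch based on the service above):
  ports:
  - name: elastic-http
    port: 9200
    targetPort: 9200
  - name: node-finder
    port: 9300
    targetPort: 9300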

I can't connect my ingress with my service

I have a problem with my ingress and my service: when I connect to my server's IP, I can't get it to redirect to the service associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I had not enabled the load balancer that Google Kubernetes Engine offers by default; without it, no external IP could be assigned because there was no balancer. There are two solutions: either enable GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to add a readinessProbe and a livenessProbe to the deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, because a NodePort service exposes the service on the same port on every node in your cluster. So essentially you would have to point an external load balancer, or whatever your traffic source is, at each of the nodes on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that run pods for your service will reply.
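A quick way to test that path directly (a sketch; the node IP and the assigned port come from your own cluster):
# find the NodePort assigned to the bookstack service
kubectl -n bookstack get svc bookstack
# pick any node address
kubectl get nodes -o wide
# then hit the app on <node-ip>:<node-port>
curl http://<node-ip>:<node-port>/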

Linkerd and k8s not working

I'm trying to get my head around linkerd in Kubernetes. I'm using the linkerd daemonset example from their website in my local minikube.
It is all deployed in the production namespace. When I run
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
Nothing happens. Where am I going wrong in my setup?
My Linkerd yaml:
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
Here's my deployment for an apiservice:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"
Here's the service:
kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080
In my node application I'm using global tunnel:
// assuming the installed package is global-tunnel (whichever variant provides globalTunnel)
const globalTunnel = require('global-tunnel');

const server = app.listen(port);

server.on('listening', function(){
  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });

  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')} `);
});
Where is your curl command being run?
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d; I expect you'll see no external IP.
I think that you'll need to modify the service definition, or create an additional explicitly external service that exposes a ClusterIP, in order to receive ingress traffic.
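A quick check along those lines (a sketch; on minikube a LoadBalancer service typically shows a pending external IP unless you run minikube tunnel or expose it via a NodePort):
# does l5d actually have an external address?
kubectl --namespace=production get svc l5d
# on minikube, get a reachable URL for the service instead
minikube service l5d -n production --url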
Deploying two copies of the same node application and making them send requests to each other worked. Weirdly, the requests don't show up in the linkerd dashboard.