K8s deployment of MinIO: how to access the console?

How do I access the Minio console?
minio.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
  - port: 9000
    name: minio
  selector:
    app: minio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - minio
              topologyKey: kubernetes.io/hostname
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "hengshi"
        - name: MINIO_SECRET_KEY
          value: "hengshi202020"
        image: minio/minio:RELEASE.2018-08-02T23-11-36Z
        args:
        - server
        - http://minio-0.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
        - http://minio-1.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
        - http://minio-2.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
        - http://minio-3.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/
        ports:
        - containerPort: 9000
        - containerPort: 9001
        volumeMounts:
        - name: minio-data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: minio-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 300M
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  ports:
  - name: server-port
    port: 9000
    targetPort: 9000
    protocol: TCP
    nodePort: 30009
  - name: console-port
    port: 9001
    targetPort: 9001
    protocol: TCP
    nodePort: 30010
  selector:
    app: minio
curl http://NodeIP:30010 fails.
I tried adding --console-address ":9001" to the container args and setting the MINIO_BROWSER environment variable, but the console is still not accessible.
One more question: what are the startup arguments for the latest MinIO image? There seems to be something wrong with my args.

You can specify --console-address :9001 in the args: section of your deployment.yaml file, as shown below.
args:
- server
- --console-address
- :9001
- /data
In the same way, your Service and Ingress need to point to port 9001 now with the latest MinIO.
ports:
- protocol: TCP
  port: 9001
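For reference, a sketch of how the console port could then be exposed through the NodePort Service from the question (the nodePort value 30010 is simply the one already used above):
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  selector:
    app: minio
  ports:
  - name: console-port        # MinIO console listener started by --console-address ":9001"
    protocol: TCP
    port: 9001
    targetPort: 9001
    nodePort: 30010           # console reachable at http://<NodeIP>:30010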

Related

Why is EMQX persistence not working on Azure Kubernetes when it works on local Kubernetes?

When using a Kubernetes (minikube) StatefulSet on my local machine, EMQX rules persist because the same pod IP is assigned to the EMQX node, for example /opt/emqx/data/mnesia/emqx#172.17.0.9. Even if I delete the pod, the new pod gets the same IP as before when it starts. Everything works as it should.
But when I use AKS (Azure Kubernetes Service) to deploy EMQX on an AKS cluster using Azure Files, the pod IP is different every time. For example, if /opt/emqx/data/mnesia/emqx#10.1.1.10 is assigned to the EMQX node and I delete the pod, then /opt/emqx/data/mnesia/emqx#10.1.1.11 might be assigned to it.
So nothing is persisting.
Local code
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage5
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: emqx-pv5
spec:
  capacity:
    storage: 300Mi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage5
  local:
    path: /opt/emqx/data/mnesia
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: Service
metadata:
  name: emqx-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: emqx
  ports:
  - name: mqtt
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: mqttssl
    port: 8883
    protocol: TCP
    targetPort: 8883
  - name: mgmt
    port: 8081
    protocol: TCP
    targetPort: 8081
  - name: websocket
    port: 8083
    protocol: TCP
    targetPort: 8083
  - name: wss
    port: 8084
    protocol: TCP
    targetPort: 8084
  - name: dashboard
    port: 18083
    protocol: TCP
    targetPort: 18083
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: emqx-statefulset
  labels:
    app: emqx
spec:
  replicas: 1
  serviceName: emqx-headless
  selector:
    matchLabels:
      app: emqx
  template:
    metadata:
      labels:
        app: emqx
    spec:
      containers:
      - name: emqx
        image: emqx/emqx:4.2.7
        ports:
        - name: emqx-dashboard
          containerPort: 18083
        - name: ssl-port
          containerPort: 8883
        - name: emqx-port
          containerPort: 1883
        - name: ssl-dashboard
          containerPort: 18084
        env:
        - name: EMQX_LOADED_PLUGINS
          value: emqx_management,emqx_recon,emqx_retainer,emqx_dashboard,emqx_rule_engine,emqx_auth_username
        - name: EMQX_CLUSTER__DISCOVERY
          value: k8s
        - name: EMQX_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__APISERVER
          value: https://kubernetes.default:443
        - name: EMQX_CLUSTER__K8S__SERVICE_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
          value: ip
        - name: EMQX_CLUSTER__K8S__APP_NAME
          value: emqx
        - name: EMQX_ALLOW_ANONYMOUS
          value: "false"
        - name: EMQX_LISTENER__SSL__EXTERNAL__MAX_CONNECTIONS
          value: "1024000"
        - name: EMQX_AUTH__USER__PASSWORD_HASH
          value: sha256
        - name: EMQX_AUTH__USER__1__USERNAME
          value:
        - name: EMQX_AUTH__USER__1__PASSWORD
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__LOGIN
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__PASSWORD
          value:
        - name: EMQX_DASHBOARD__LISTENER__HTTPS
          value: "18084"
        - name: MQX_DASHBOARD__LISTENER__HTTPS__ACCEPTORS
          value: "4"
        - name: EMQX_DASHBOARD__LISTENER__HTTPS__MAX_CLIENTS
          value: "512"
        tty: true
        volumeMounts:
        - name: emqx-mnesia
          mountPath: "/opt/emqx/data/mnesia"
  volumeClaimTemplates:
  - metadata:
      name: emqx-mnesia
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage5"
      resources:
        requests:
          storage: 300Mi
Azure Kubernetes code
apiVersion: v1
kind: ServiceAccount
metadata:
  name: emqx
  namespace: emqx-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: emqx
subjects:
- kind: ServiceAccount
  name: emqx
  namespace: emqx-test
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: emqx-files
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
- actimeo=30
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: emqx-pvc
  namespace: emqx-test
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: emqx-files
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: emqx
  namespace: emqx-test
spec:
  ports:
  - name: emqx-dashboard
    port: 80
    targetPort: 18083
    protocol: TCP
  - name: ssl-port
    port: 8883
    targetPort: ssl-port
    protocol: TCP
  - name: emqx-port
    port: 1883
    targetPort: emqx-port
    protocol: TCP
  - name: ssl-dashboard
    port: 443
    targetPort: 18084
    protocol: TCP
  selector:
    app: emqx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: emqx
  labels:
    app: emqx
  namespace: emqx-test
spec:
  serviceName: "emqx"
  selector:
    matchLabels:
      app: emqx
  replicas: 1
  template:
    metadata:
      labels:
        app: emqx
    spec:
      containers:
      - name: emqx
        image: emqx/emqx:4.2.7
        ports:
        - name: emqx-dashboard
          containerPort: 18083
        - name: ssl-port
          containerPort: 8883
        - name: emqx-port
          containerPort: 1883
        - name: ssl-dashboard
          containerPort: 18084
        env:
        - name: EMQX_LOADED_PLUGINS
          value: emqx_management,emqx_recon,emqx_retainer,emqx_dashboard,emqx_rule_engine,emqx_auth_username
        - name: EMQX_CLUSTER__DISCOVERY
          value: k8s
        - name: EMQX_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__APISERVER
          value: https://kubernetes.default:443
        - name: EMQX_CLUSTER__K8S__NAMESPACE
          value: emqx-test
        - name: EMQX_CLUSTER__K8S__SERVICE_NAME
          value: emqx
        - name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
          value: ip
        - name: EMQX_CLUSTER__K8S__APP_NAME
          value: emqx
        - name: EMQX_ALLOW_ANONYMOUS
          value: "false"
        - name: EMQX_LISTENER__SSL__EXTERNAL__MAX_CONNECTIONS
          value: "1024000"
        - name: EMQX_AUTH__USER__PASSWORD_HASH
          value: sha256
        - name: EMQX_AUTH__USER__1__USERNAME
          value:
        - name: EMQX_AUTH__USER__1__PASSWORD
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__LOGIN
          value:
        - name: EMQX_DASHBOARD__DEFAULT_USER__PASSWORD
          value:
        - name: EMQX_DASHBOARD__LISTENER__HTTPS
          value: "18084"
        - name: MQX_DASHBOARD__LISTENER__HTTPS__ACCEPTORS
          value: "4"
        - name: EMQX_DASHBOARD__LISTENER__HTTPS__MAX_CLIENTS
          value: "512"
        volumeMounts:
        - name: emqx-data
          mountPath: "/opt/emqx/data/mnesia"
        tty: true
      volumes:
      - name: emqx-data
        persistentVolumeClaim:
          claimName: emqx-pvc
In the Kubernetes documentation on StatefulSet Basics you can read:
The Pods' ordinals, hostnames, SRV records, and A record names have
not changed, but the IP addresses associated with the Pods may have
changed. In the cluster used for this tutorial, they have. This is why
it is important not to configure other applications to connect to Pods
in a StatefulSet by IP address.
This is expected, and as you can see this behaviour is mentioned in the documentation.
But why do you see one behaviour on minikube and a different one on Azure?
IP addresses are assigned by the CNI. On minikube the default CNI is the docker bridge, and on Azure it's the Azure CNI, so it is up to the CNI which address is assigned.
It's best to always assume that pod IP addresses will not stay static. Use DNS names for StatefulSet pods, and for other pods and services, for communication, and never hardcode pod IP addresses directly.
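As a sketch of that idea: each StatefulSet pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local via its headless Service, and EMQX can be told to identify nodes by hostname instead of IP. The exact env values below are assumptions for illustration, not taken from the question:
# Assumed sketch: name the EMQX node by its stable DNS name instead of the pod IP.
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: EMQX_CLUSTER__K8S__ADDRESS_TYPE
  value: hostname                # assumption: switch discovery from "ip" to hostnames
- name: EMQX_CLUSTER__K8S__SUFFIX
  value: svc.cluster.local       # assumption: DNS suffix used during discovery
- name: EMQX_HOST
  value: "$(POD_NAME).emqx-headless.default.svc.cluster.local"   # assumed headless Service and namespace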

Microk8s ingress - defaultBackend

Inside my Ingress config I changed the default backend.
spec:
  defaultBackend:
    service:
      name: navigation-service
      port:
        number: 80
When I describe the Ingress I get:
Name: ingress-nginx
Namespace: default
Address: 127.0.0.1
Default backend: navigation-service:80 (10.1.173.59:80)
I try to access it via localhost and get a 404. However, when I curl 10.1.173.59 I get my static page, so my navigation-service is OK and something is wrong with the default backend? Even if I try
- pathType: Prefix
  path: /
  backend:
    service:
      name: navigation-service
      port:
        number: 80
I get a 500 error.
What am I doing wrong?
Edit: it works via NodePort, but I need to access it via the Ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navigation-deployment
spec:
  selector:
    matchLabels:
      app: navigation-deployment
  template:
    metadata:
      labels:
        app: navigation-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.3-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/index.html
          name: nginx-html
        - mountPath: /etc/nginx/conf.d/default.conf
          name: nginx-default
      volumes:
      - name: nginx-html
        hostPath:
          path: /home/x/navigation/index.html
      - name: nginx-default
        hostPath:
          path: /home/x/navigation/default.conf
apiVersion: v1
kind: Service
metadata:
  name: navigation-service
spec:
  type: ClusterIP
  selector:
    app: navigation-deployment
  ports:
  - name: "http"
    port: 80
    targetPort: 80
If someone else has this problem: you need to run the ingress controller with the argument --default-backend-service=namespace/service_name.
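For example, in MicroK8s the bundled NGINX ingress controller usually runs as a DaemonSet called nginx-ingress-microk8s-controller in the ingress namespace; a sketch of adding the flag there (controller and namespace names are assumptions based on a default MicroK8s install) looks like:
# Assumed sketch: kubectl -n ingress edit daemonset nginx-ingress-microk8s-controller
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-microk8s
        args:
        # ...keep the existing args as they are...
        - --default-backend-service=default/navigation-service   # <namespace>/<service_name>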

How to connect to a NATS Streaming cluster

I am new to Kubernetes and am trying to set up a NATS Streaming cluster. I am using the following manifest file, but I am confused about how to access the NATS Streaming server from my application. I am using Azure Kubernetes Service.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: stan-config
data:
  stan.conf: |
    # listen: nats-streaming:4222
    port: 4222
    http: 8222
    streaming {
      id: stan
      store: file
      dir: /data/stan/store
      cluster {
        node_id: $POD_NAME
        log_path: /data/stan/log
        # Explicit names of resulting peers
        peers: ["nats-streaming-0", "nats-streaming-1", "nats-streaming-2"]
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  type: ClusterIP
  selector:
    app: nats-streaming
  ports:
  - port: 4222
    targetPort: 4222
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  selector:
    matchLabels:
      app: nats-streaming
  serviceName: nats-streaming
  replicas: 3
  volumeClaimTemplates:
  - metadata:
      name: stan-sts-vol
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: "Filesystem"
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: nats-streaming
    spec:
      # Prevent NATS Streaming pods running in same host.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nats-streaming
      # STAN Server
      containers:
      - name: nats-streaming
        image: nats-streaming
        ports:
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        args:
        - "-sc"
        - "/etc/stan-config/stan.conf"
        # Required to be able to define an environment variable
        # that refers to other environment variables. This env var
        # is later used as part of the configuration file.
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: config-volume
          mountPath: /etc/stan-config
        - name: stan-sts-vol
          mountPath: /data/stan
        # Disable CPU limits.
        resources:
          requests:
            cpu: 0
        livenessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: config-volume
        configMap:
          name: stan-config
I tried using nats://nats-streaming:4222, but it gives the following error:
stan: connect request timeout (possibly wrong cluster ID?)
I am referring to https://docs.nats.io/nats-on-kubernetes/minimal-setup
You did not specify the NATS client port 4222 in the StatefulSet, which is the port you are targeting in your Service:
...
ports:
- port: 4222
  targetPort: 4222
...
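A minimal sketch of the fix is to declare the client port on the nats-streaming container in your StatefulSet (the port name client is just a convention):
containers:
- name: nats-streaming
  image: nats-streaming
  ports:
  - containerPort: 4222   # client port that the Service's targetPort points at
    name: client
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics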
As you can see from simple-nats.yml, they have set up the following ports:
...
containers:
- name: nats
  image: nats:2.1.0-alpine3.10
  ports:
  - containerPort: 4222
    name: client
    hostPort: 4222
  - containerPort: 7422
    name: leafnodes
    hostPort: 7422
  - containerPort: 6222
    name: cluster
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics
  command:
  - "nats-server"
  - "--config"
  - "/etc/nats-config/nats.conf"
...
As for exposing the service outside the cluster, I would recommend reading Using a Service to Expose Your App and Exposing an External IP Address to Access an Application in a Cluster.
There is also a nice, if somewhat old (2017), article, Exposing ports to Kubernetes pods on Azure; you can also check the Azure docs Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI.
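As a sketch (the Service name is assumed), a separate LoadBalancer Service on AKS could expose the client port externally, while applications inside the cluster keep using nats://nats-streaming:4222:
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming-external   # assumed name
spec:
  type: LoadBalancer
  selector:
    app: nats-streaming
  ports:
  - name: client
    port: 4222
    targetPort: 4222
    protocol: TCP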

Expose a Redis cluster with a Kubernetes StatefulSet to the internet

I created a StatefulSet that deploys a Redis image to GCP on Kubernetes. The challenge I am having is exposing it using a single domain name, such that the pods can be accessed as redis.com/first, redis.com/second, redis.com/third.
Here are the YAML files.
Statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  serviceName: 'redis-service'
  replicas: 3
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-redis
        image: redis
        args:
        - /etc/redis/redis.conf
        volumeMounts:
        - mountPath: /etc/redis
          name: redis-config
          readOnly: false
        - name: redis-storage
          mountPath: /data
          readOnly: false
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 150m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
        livenessProbe:
          exec:
            command: ['redis-cli', 'ping']
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 2
      volumes:
      - name: redis-config
        configMap:
          name: redis-config
  volumeClaimTemplates:
  - metadata:
      name: redis-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
Headless service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-redis
  name: redis-service
  namespace: default
spec:
  ports:
  - name: server-port
    port: 80
    protocol: TCP
    targetPort: 6379
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
Loadbalancer
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-service
  name: app-redis
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 6379
  selector:
    app: app-redis
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xxx
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.xxx
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: default
data:
  redis.conf: |
    dbfilename "dump.rdb"
    dir /data
    save 3600 1
    save 300 10
    save 60 100
    appendonly yes
    appendfilename "appendonly.aof"
Storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: redis-storage
provisioner: kubernetes.io/gce-pd
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'false'
spec:
  rules:
  - host: app-redis.tk
    http:
      paths:
      - path: /
        backend:
          serviceName: app-redis
          servicePort: 80
Each pod in the StatefulSet will need to have a Service linking to it.
This Service will need to be created with the following selector (a full example is sketched after the snippet):
selector:
  statefulset.kubernetes.io/pod-name: <POD_NAME>
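For example, a per-pod Service for the first replica might look like this (a sketch; the name redis-service-0 is chosen to match the Ingress paths shown below):
apiVersion: v1
kind: Service
metadata:
  name: redis-service-0          # one Service per StatefulSet pod
spec:
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0   # pins the Service to a single pod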
Then you will be able to set up an Ingress and use it to redirect traffic based on path:
...
spec:
  rules:
  - http:
      paths:
      - path: /app-redis-0
        backend:
          serviceName: redis-service-0
          servicePort: 6379
      - path: /app-redis-1
        backend:
          serviceName: redis-service-1
          servicePort: 6379
      - path: /app-redis-2
        backend:
          serviceName: redis-service-2
          servicePort: 6379
...
You can read about Exposing StatefulSets in Kubernetes and Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

How can I use ingress with HAProxy on a Kubernetes cluster?

I want to use ingress with HAProxy in my Kubernetes cluster; how should I use it?
I have tried it on my local system: I used the HAProxy ingress controller in a different namespace, but I randomly get 503 errors from the HAProxy pod that was created.
Try this:
default backend
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
spec:
  ports:
  - name: port-1
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: ingress-default-backend
haproxy ingress controller
apiVersion: v1
data:
  dynamic-scaling: "true"
  backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
  name: haproxy-configmap
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=default/ingress-default-backend
        - --default-ssl-certificate=default/tls-secret
        - --configmap=$(POD_NAMESPACE)/haproxy-configmap
        - --reload-strategy=native
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  externalIPs:
  - 172.17.0.50
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-2
    port: 443
    protocol: TCP
    targetPort: 443
  - name: port-3
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    run: haproxy-ingress
Update externalIPs as per your environment.
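Once the controller is running, a minimal Ingress routed through it might look like this (a sketch; the hostname and backend Service are assumptions):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                     # assumed name
  annotations:
    kubernetes.io/ingress.class: haproxy    # hand this Ingress to the HAProxy controller
spec:
  rules:
  - host: app.example.com                   # assumed hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-service       # assumed backend Service
          servicePort: 80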