Linkerd and k8s not working - kubernetes

I'm trying to get my head around Linkerd in Kubernetes. I'm using the Linkerd daemonset example from their website in my local minikube.
It is all deployed in the production namespace. When I try the following:
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
Nothing happens. Where am I going wrong in my setup?
My Linkerd yaml:
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
Here's my deployment for an apiservice:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"
Here's the service:
kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080
In my node application I'm using global tunnel:
const server = app.listen(port);
server.on('listening', function () {
  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });
  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});

Where is your curl command being run?
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d -- I expect you'll see no external IP.
I think that you'll need to modify the service definition, or create an additional, explicitly external service that exposes a ClusterIP, in order to receive ingress traffic.
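For example, one possible shape of such an additional, externally reachable Service is sketched below. It assumes the same production namespace and app: l5d labels as your manifests, and the name is purely illustrative; on minikube a NodePort type may be more practical, since LoadBalancer IPs usually stay pending there.
apiVersion: v1
kind: Service
metadata:
  name: l5d-external           # illustrative name, not part of the linkerd example
  namespace: production
spec:
  selector:
    app: l5d
  type: LoadBalancer           # consider NodePort on minikube, where external IPs stay pending
  ports:
  - name: outgoing
    port: 4140
    targetPort: 4140
You could then point http_proxy at that Service's external address (or at a node IP plus the assigned NodePort) instead of reading .status.loadBalancer.ingress from the internal l5d Service.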

Deploying two copies of the same node application and making them send requests to each other, it worked. Weirdly, the requests don't show up in the linkerd dashboard.

Related

LoadBalancer inside RabbitMQ Cluster

I'm new to the RabbitMQ and Kubernetes world, and I'm trying to achieve the following objectives:
I've successfully deployed a RabbitMQ cluster on Kubernetes (minikube) and exposed it via LoadBalancer (running minikube tunnel locally).
I can connect successfully to my queue with a basic Spring Boot app, and I can send and receive messages from my cluster.
My cluster has 4 nodes. I have applied the mirrored queue (HA) policy, and the created queue is also mirrored on the other nodes.
This is my cluster configuration:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit-secret
type: Opaque
data:
  # echo -n "cookie-value" | base64
  RABBITMQ_ERLANG_COOKIE: V0lXVkhDRFRDSVVBV0FOTE1RQVc=
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
data:
  enabled_plugins: |
    [rabbitmq_federation,rabbitmq_management,rabbitmq_federation_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    loopback_users.guest = false
    listeners.tcp.default = 5672
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.node_cleanup.only_log_warning = true
    ##cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
    ##cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq-0.rabbitmq.rabbits.svc.cluster.local
    ##cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq-1.rabbitmq.rabbits.svc.cluster.local
    ##cluster_formation.classic_config.nodes.3 = rabbit@rabbitmq-2.rabbitmq.rabbits.svc.cluster.local
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 4
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      serviceAccountName: rabbitmq
      initContainers:
      - name: config
        image: busybox
        command: ['/bin/sh', '-c', 'cp /tmp/config/rabbitmq.conf /config/rabbitmq.conf && ls -l /config/ && cp /tmp/config/enabled_plugins /etc/rabbitmq/enabled_plugins']
        volumeMounts:
        - name: config
          mountPath: /tmp/config/
          readOnly: false
        - name: config-file
          mountPath: /config/
        - name: plugins-file
          mountPath: /etc/rabbitmq/
      containers:
      - name: rabbitmq
        image: rabbitmq:3.8-management
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - >
                until rabbitmqctl --erlang-cookie ${RABBITMQ_ERLANG_COOKIE} await_startup; do sleep 1; done;
                rabbitmqctl --erlang-cookie ${RABBITMQ_ERLANG_COOKIE} set_policy ha-two "" '{"ha-mode":"exactly", "ha-params": 2, "ha-sync-mode": "automatic"}'
        ports:
        - containerPort: 15672
          name: management
        - containerPort: 4369
          name: discovery
        - containerPort: 5672
          name: amqp
        env:
        - name: RABBIT_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: RABBIT_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: RABBITMQ_NODENAME
          value: rabbit@$(RABBIT_POD_NAME).rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
        - name: RABBITMQ_USE_LONGNAME
          value: "true"
        - name: RABBITMQ_CONFIG_FILE
          value: "/config/rabbitmq"
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: rabbit-secret
              key: RABBITMQ_ERLANG_COOKIE
        - name: K8S_HOSTNAME_SUFFIX
          value: .rabbitmq.$(RABBIT_POD_NAMESPACE).svc.cluster.local
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
          readOnly: false
        - name: config-file
          mountPath: /config/
        - name: plugins-file
          mountPath: /etc/rabbitmq/
      volumes:
      - name: config-file
        emptyDir: {}
      - name: plugins-file
        emptyDir: {}
      - name: config
        configMap:
          name: rabbitmq-config
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - port: 15672
    targetPort: 15672
    name: management
  - port: 4369
    targetPort: 4369
    name: discovery
  - port: 5672
    targetPort: 5672
    name: amqp
  selector:
    app: rabbitmq
If I understood correctly, RabbitMQ with HA receives a message on one node (the master) and it is then mirrored to the slaves, right?
But what if I want to load balance the workload? For example, suppose I send 200 messages per second.
All 200 of those messages are received by the master node. What I want instead is for those 200 messages to be distributed across nodes, for example 100 messages received by node 1, 50 by node 2 and the rest by node 3. Is it possible to do that? And if yes, how can I achieve it on Kubernetes?

Trying to run echo server in Minikube with Istio, getting connection refused from client socket

I have a simple echo server (TCP) written in C++. I can put the server in a Docker container and, with port mapping, it works, so I think my client and server are OK. When I push the server image to Kubernetes and deploy Istio with an ingress gateway listening on port 31400 (which I think is the right one for TCP), the client gets connection refused when trying to connect to the socket.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-deployment
  labels:
    app: tcp-echo
    system: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
  template:
    metadata:
      labels:
        app: tcp-echo
        system: example
    spec:
      containers:
      - name: tcp-echo-container
        image: igordptx/my-tcp-server
        imagePullPolicy: Always
        env:
        - name: TCP_PORT
          value: "2701"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: tcp-echo-port
          containerPort: 2701
apiVersion: v1
kind: Service
metadata:
  name: "tcp-echo-service"
  labels:
    app: tcp-echo
    system: example
spec:
  selector:
    app: "tcp-echo"
  ports:
  - protocol: "TCP"
    port: 2701
    targetPort: 2701
I am new at this and am just trying to get a simple echo server to work. Below are the YAML files for the Deployment, Service, Gateway and VirtualService used for the routing.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-tcp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp-echo
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo-vs-from-gw
spec:
  hosts:
  - "*"
  gateways:
  - echo-tcp-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo-service
        port:
          number: 2701

How to connect to a nats streaming cluster

I am new to Kubernetes and am trying to set up a NATS Streaming cluster. I am using the following manifest file, but I am confused about how to access the NATS Streaming server from my application. I am using Azure Kubernetes Service.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: stan-config
data:
  stan.conf: |
    # listen: nats-streaming:4222
    port: 4222
    http: 8222
    streaming {
      id: stan
      store: file
      dir: /data/stan/store
      cluster {
        node_id: $POD_NAME
        log_path: /data/stan/log
        # Explicit names of resulting peers
        peers: ["nats-streaming-0", "nats-streaming-1", "nats-streaming-2"]
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  type: ClusterIP
  selector:
    app: nats-streaming
  ports:
  - port: 4222
    targetPort: 4222
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  selector:
    matchLabels:
      app: nats-streaming
  serviceName: nats-streaming
  replicas: 3
  volumeClaimTemplates:
  - metadata:
      name: stan-sts-vol
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: "Filesystem"
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: nats-streaming
    spec:
      # Prevent NATS Streaming pods running in same host.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nats-streaming
      # STAN Server
      containers:
      - name: nats-streaming
        image: nats-streaming
        ports:
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        args:
        - "-sc"
        - "/etc/stan-config/stan.conf"
        # Required to be able to define an environment variable
        # that refers to other environment variables. This env var
        # is later used as part of the configuration file.
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: config-volume
          mountPath: /etc/stan-config
        - name: stan-sts-vol
          mountPath: /data/stan
        # Disable CPU limits.
        resources:
          requests:
            cpu: 0
        livenessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: config-volume
        configMap:
          name: stan-config
I tried using nats://nats-streaming:4222, but it gives the following error:
stan: connect request timeout (possibly wrong cluster ID?)
I am referring to https://docs.nats.io/nats-on-kubernetes/minimal-setup
You did not specify the NATS client port 4222 in the StatefulSet, even though your Service targets it:
...
ports:
- port: 4222
  targetPort: 4222
...
As you can see from simple-nats.yml, they have set up the following ports:
...
containers:
- name: nats
  image: nats:2.1.0-alpine3.10
  ports:
  - containerPort: 4222
    name: client
    hostPort: 4222
  - containerPort: 7422
    name: leafnodes
    hostPort: 7422
  - containerPort: 6222
    name: cluster
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics
  command:
  - "nats-server"
  - "--config"
  - "/etc/nats-config/nats.conf"
...
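Applied to the StatefulSet in the question, that would mean adding the client port alongside the existing monitor and metrics ports, roughly like this (a sketch of just the ports section, matching the Service's targetPort):
containers:
- name: nats-streaming
  image: nats-streaming
  ports:
  - containerPort: 4222    # client port, matched by the nats-streaming Service's targetPort 4222
    name: client
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics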
As for exposing the service outside the cluster, I would recommend reading Using a Service to Expose Your App and Exposing an External IP Address to Access an Application in a Cluster.
There is also a nice, if somewhat old (2017), article, Exposing ports to Kubernetes pods on Azure; you can also check the Azure docs Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI.
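If you do want to reach the cluster from outside AKS, one option is a separate LoadBalancer Service in front of the client port. This is only a sketch: the name is hypothetical, and the existing ClusterIP Service above can stay for in-cluster clients.
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming-external   # hypothetical name, not part of the original manifests
  labels:
    app: nats-streaming
spec:
  type: LoadBalancer              # AKS provisions an Azure load balancer with a public IP
  selector:
    app: nats-streaming
  ports:
  - name: client
    port: 4222
    targetPort: 4222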

I can't connect my ingress with my service

I have a problem with my Ingress and my Service: when I connect to my server's IP, I can't get it to redirect me to the service associated with port 80, which is my website. Here are the configuration files and the description of the Ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I hadn't enabled the load balancer that Google Kubernetes Engine offers by default; without it active, no external IP could be generated because there was no balancer. There are two solutions: either enable GKE's default load balancer, or create a Service of type LoadBalancer.
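For the second option, a minimal sketch of such a LoadBalancer Service, reusing the labels from the manifests above (the name is illustrative only):
apiVersion: v1
kind: Service
metadata:
  name: bookstack-lb          # illustrative name, not part of the original manifests
  namespace: bookstack
spec:
  type: LoadBalancer          # GKE provisions an external IP for this Service
  selector:
    app: bookstack
  ports:
  - name: http
    port: 80
    targetPort: 80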
It is also important to configure a readinessProbe and a livenessProbe within the deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, because a NodePort service exposes the service on that specific port on every node of your cluster. So, essentially, you would have to point an external load balancer (or whatever the traffic source is) at each of the nodes of your cluster on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that run pods for your service will reply.
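To illustrate, a minimal sketch based on the bookstack Service above; the explicit nodePort value is just an arbitrary example:
apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only nodes that run bookstack pods answer on the node port
  selector:
    app: bookstack
  ports:
  - name: http-port
    port: 80
    targetPort: 80
    nodePort: 30080              # arbitrary example in the default 30000-32767 NodePort range
Traffic sent to node-IP:30080 then reaches the service, as long as that node runs a bookstack pod (because of externalTrafficPolicy: Local).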

unable to view pods in kubernetes

I am unable to list my pod (kubectl get pods) after creating it.
kubectl apply -f my-config.yml
returns the log below:
configmap "my-config" deleted
daemonset "my" deleted
service "my" deleted
configmap "my-config" created
daemonset "my" created
service "my" created
And the command (kubectl get pods) doesn't list the created pod!
Below is the config file I used:
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/default/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.0.0
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990