Kubernetes: using a service's cluster IP and port as environment variables

I have a backend service on cluster IP 10.101.71.17 and port 26379. I have a frontend deployment where I intend to pass this service IP as an environment variable.
frontend-deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: my-namespace
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: localhost:5000/frontend
        command: [ "/usr/local/bin/node" ]
        args: [ "./index.js" ]
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_ENV
          value: production
        - name: API_URL
          value: BACKEND_HTTP_SERVICE_HOST # Here
        - name: BASIC_AUTH
          value: "true"
        - name: SECURE
          value: "true"
        - name: PORT
          value: "443"
        ports:
        - containerPort: 443
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 8079
      nodeSelector:
        beta.kubernetes.io/os: linux
---
I can see all the environment variables inside the pod, but I am not sure what the proper way is to assign the service host to the API_URL environment variable.

I assume that in your front-end application you are referring to your back-end service through the API_URL environment variable.
If that is the case, just replace BACKEND_HTTP_SERVICE_HOST with 10.101.71.17:26379:
env:
- name: NODE_ENV
  value: production
- name: API_URL
  value: 10.101.71.17:26379
- name: BASIC_AUTH
  value: "true"
- name: SECURE
  value: "true"
- name: PORT
  value: "443"
You should consider using the DNS name of the service instead of its cluster IP.
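For example, if the backend Service were named backend-http in the same namespace (an assumption; the Service name is not shown in the question), the value could use the cluster DNS name of the Service, or expand the BACKEND_HTTP_SERVICE_* variables that kubelet injects for Services created before the pod started; a minimal sketch:
env:
- name: API_URL
  value: backend-http.my-namespace.svc.cluster.local:26379
  # or, using the injected service variables:
  # value: $(BACKEND_HTTP_SERVICE_HOST):$(BACKEND_HTTP_SERVICE_PORT)
Either way, the frontend no longer depends on a hard-coded cluster IP that changes if the Service is recreated.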

Related

Grafana is generating links with Base URL : http://localhost:3000 instead of using my url

I deployed grafana 7 with Kubernetes, here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      initContainers:
      - name: init-chown-data
        image: grafana/grafana:7.0.3
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        command: ["chown", "-R", "472:472", "/var/lib/grafana"]
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      containers:
      - image: grafana/grafana:7.0.3
        name: grafana-core
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 472
        # env:
        envFrom:
        - secretRef:
            name: grafana-env
        env:
        # The following env variables set up basic auth with the default admin user and admin password.
        - name: GF_INSTALL_PLUGINS
          value: grafana-clock-panel,grafana-simple-json-datasource,camptocamp-prometheus-alertmanager-datasource
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-username
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana
              key: admin-password
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          initialDelaySeconds: 30
          timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        - name: grafana-datasources
          mountPath: /etc/grafana/provisioning/datasources
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      - name: grafana-datasources
        configMap:
          name: grafana-datasources
      nodeSelector:
        kops.k8s.io/instancegroup: monitoring-nodes
It is working well, but each time it generates a URL, it does so with the base URL http://localhost:3000 instead of using https://grafana.company.com.
Where can I configure that? I couldn't find an env var that handles it.
Configure the root_url option of [server] in your Grafana config file or env variable GF_SERVER_ROOT_URL to https://grafana.company.com/.
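In the Deployment from the question, that could be one more entry in the existing env list; a minimal sketch:
env:
- name: GF_SERVER_ROOT_URL
  value: "https://grafana.company.com/"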
I have found it can be done using an environment variable inside the Grafana pod. This setup is a tricky one: getting the URL format of GF_SERVER_ROOT_URL wrong (your.url with no quotes, "your.url" without https:// or http://, or even "http://your.url" with no trailing /) may cause problems.
grafana:
  env:
    GF_SERVER_ROOT_URL: "http://your.url/"
notifiers:
  notifiers.yaml:
    notifiers:
    - name: telegram
      type: telegram
      uid: telegram
      is_default: true
      settings:
        bottoken: "yourbottoken"
        chatid: "-yourchatid"
and then use uid: "telegram" in the provisioned dashboards
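If you are not using the Helm chart, the same notifier can be provisioned from a plain ConfigMap mounted into Grafana's provisioning directory, next to the existing datasources mount; a sketch under that assumption (the ConfigMap name is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-notifiers
  namespace: monitoring
data:
  notifiers.yaml: |
    notifiers:
    - name: telegram
      type: telegram
      uid: telegram
      is_default: true
      settings:
        bottoken: "yourbottoken"
        chatid: "-yourchatid"
Mount it at /etc/grafana/provisioning/notifiers the same way grafana-datasources is mounted at /etc/grafana/provisioning/datasources.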

How to connect nats streaming cluster

I am new to Kubernetes and trying to set up a NATS Streaming cluster. I am using the following manifest file, but I am confused about how I can access the NATS Streaming server from my application. I am using Azure Kubernetes Service.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: stan-config
data:
  stan.conf: |
    # listen: nats-streaming:4222
    port: 4222
    http: 8222
    streaming {
      id: stan
      store: file
      dir: /data/stan/store
      cluster {
        node_id: $POD_NAME
        log_path: /data/stan/log
        # Explicit names of resulting peers
        peers: ["nats-streaming-0", "nats-streaming-1", "nats-streaming-2"]
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  type: ClusterIP
  selector:
    app: nats-streaming
  ports:
  - port: 4222
    targetPort: 4222
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats-streaming
  labels:
    app: nats-streaming
spec:
  selector:
    matchLabels:
      app: nats-streaming
  serviceName: nats-streaming
  replicas: 3
  volumeClaimTemplates:
  - metadata:
      name: stan-sts-vol
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: "Filesystem"
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: nats-streaming
    spec:
      # Prevent NATS Streaming pods running in same host.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nats-streaming
      # STAN Server
      containers:
      - name: nats-streaming
        image: nats-streaming
        ports:
        - containerPort: 8222
          name: monitor
        - containerPort: 7777
          name: metrics
        args:
        - "-sc"
        - "/etc/stan-config/stan.conf"
        # Required to be able to define an environment variable
        # that refers to other environment variables. This env var
        # is later used as part of the configuration file.
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: config-volume
          mountPath: /etc/stan-config
        - name: stan-sts-vol
          mountPath: /data/stan
        # Disable CPU limits.
        resources:
          requests:
            cpu: 0
        livenessProbe:
          httpGet:
            path: /
            port: 8222
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: config-volume
        configMap:
          name: stan-config
I tried using nats://nats-streaming:4222, but it gives the following error.
stan: connect request timeout (possibly wrong cluster ID?)
I am referring to https://docs.nats.io/nats-on-kubernetes/minimal-setup
You did not specify the NATS client port 4222 in the StatefulSet, even though your Service targets it:
...
ports:
- port: 4222
  targetPort: 4222
...
As you can see from the simple-nats.yml, they have set up the following ports:
...
containers:
- name: nats
  image: nats:2.1.0-alpine3.10
  ports:
  - containerPort: 4222
    name: client
    hostPort: 4222
  - containerPort: 7422
    name: leafnodes
    hostPort: 7422
  - containerPort: 6222
    name: cluster
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics
  command:
  - "nats-server"
  - "--config"
  - "/etc/nats-config/nats.conf"
...
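Applied to the StatefulSet from the question, that means declaring the client port on the nats-streaming container as well; a minimal sketch of the relevant fragment:
containers:
- name: nats-streaming
  image: nats-streaming
  ports:
  - containerPort: 4222
    name: client
  - containerPort: 8222
    name: monitor
  - containerPort: 7777
    name: metrics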
As for exposing the service outside, I would recommend reading Using a Service to Expose Your App and Exposing an External IP Address to Access an Application in a Cluster.
There is also a nice article, maybe a bit old (2017), Exposing ports to Kubernetes pods on Azure, and you can also check the Azure docs Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI.
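If the clients run inside the same AKS cluster, the existing ClusterIP Service and the nats://nats-streaming:4222 URL are all you need. For access from outside the cluster, one option is an additional LoadBalancer Service; a sketch, assuming the same labels as in the question:
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming-external
spec:
  type: LoadBalancer
  selector:
    app: nats-streaming
  ports:
  - port: 4222
    targetPort: 4222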

Chaincode instantiation fails in AWS EKS network

I have a simple one-orderer, one-peer network deployed on AWS Elastic Kubernetes Service. I created the network using the official EKS documentation. I am able to bring up the orderer as well as the peer within the pods. The peer is able to create and join the channel, and the anchor peer update transaction is successful as well.
High-level configuration for the pods:
Started as Stateful Sets
Pods use dynamic PV with Storage Class configured for aws-ebs
Exposed using service type load balancer
Uses Docker In Docker(DIND) approach for chaincode
Detailed Configuration for Orderer Pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-orderer
  labels:
    app: allparticipants-orderer
spec:
  serviceName: orderer
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-orderer
  template:
    metadata:
      labels:
        app: allparticipants-orderer
    spec:
      containers:
      - name: allparticipants-orderer
        image: <docker-hub-url>/orderer:0.1
        imagePullPolicy: Always
        command: ["sh", "-c", "orderer"]
        ports:
        - containerPort: 7050
        env:
        - name: FABRIC_LOGGING_SPEC
          value: DEBUG
        - name: ORDERER_GENERAL_LOGLEVEL
          value: DEBUG
        - name: ORDERER_GENERAL_LISTENADDRESS
          value: 0.0.0.0
        - name: ORDERER_GENERAL_GENESISMETHOD
          value: file
        - name: ORDERER_GENERAL_GENESISFILE
          value: /var/hyperledger/orderer/orderer.genesis.block
        - name: ORDERER_GENERAL_LOCALMSPID
          value: OrdererMSP
        - name: ORDERER_GENERAL_LOCALMSPDIR
          value: /var/hyperledger/orderer/msp
        - name: ORDERER_GENERAL_TLS_ENABLED
          value: "false"
        - name: ORDERER_GENERAL_TLS_PRIVATEKEY
          value: /var/hyperledger/orderer/tls/server.key
        - name: ORDERER_GENERAL_TLS_CERTIFICATE
          value: /var/hyperledger/orderer/tls/server.crt
        - name: ORDERER_GENERAL_TLS_ROOTCAS
          value: /var/hyperledger/orderer/tls/ca.crt
        volumeMounts:
        - name: allparticipants-orderer-ledger
          mountPath: /var/ledger
  volumeClaimTemplates:
  - metadata:
      name: allparticipants-orderer-ledger
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: allparticipants-orderer-sc
      resources:
        requests:
          storage: 1Gi
Detailed configuration for Peer along with DIND in the same pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-peer0
  labels:
    app: allparticipants-peer0
spec:
  serviceName: allparticipants-peer0
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-peer0
  template:
    metadata:
      labels:
        app: allparticipants-peer0
    spec:
      containers:
      - name: docker
        env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
        securityContext:
          privileged: true
        image: "docker:stable-dind"
        ports:
        - containerPort: 2375
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dockervolume
      - name: allparticipants-peer0
        image: <docker-hub-url>/peer0:0.1
        imagePullPolicy: Always
        command: ["sh", "-c", "peer node start"]
        ports:
        - containerPort: 7051
        env:
        - name: CORE_VM_ENDPOINT
          value: http://localhost:2375
        - name: CORE_PEER_CHAINCODELISTENADDRESS
          value: 0.0.0.0:7052
        - name: FABRIC_LOGGING_SPEC
          value: debug
        - name: CORE_LOGGING_PEER
          value: debug
        - name: CORE_LOGGING_CAUTHDSL
          value: debug
        - name: CORE_LOGGING_GOSSIP
          value: debug
        - name: CORE_LOGGING_LEDGER
          value: debug
        - name: CORE_LOGGING_MSP
          value: info
        - name: CORE_LOGGING_POLICIES
          value: debug
        - name: CORE_LOGGING_GRPC
          value: debug
        - name: CORE_LEDGER_STATE_STATEDATABASE
          value: goleveldb
        - name: GODEBUG
          value: "netdns=go"
        - name: CORE_PEER_TLS_ENABLED
          value: "false"
        - name: CORE_PEER_GOSSIP_USELEADERELECTION
          value: "true"
        - name: CORE_PEER_GOSSIP_ORGLEADER
          value: "false"
        - name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
          value: "true"
        - name: CORE_PEER_PROFILE_ENABLED
          value: "true"
        - name: CORE_PEER_COMMITTER_ENABLED
          value: "true"
        - name: CORE_PEER_TLS_CERT_FILE
          value: /etc/hyperledger/fabric/tls/server.crt
        - name: CORE_PEER_TLS_KEY_FILE
          value: /etc/hyperledger/fabric/tls/server.key
        - name: CORE_PEER_TLS_ROOTCERT_FILE
          value: /etc/hyperledger/fabric/tls/ca.crt
        - name: CORE_PEER_ID
          value: allparticipants-peer0
        - name: CORE_PEER_ADDRESS
          value: <peer0-load-balancer-url>:7051
        - name: CORE_PEER_LISTENADDRESS
          value: 0.0.0.0:7051
        - name: CORE_PEER_EVENTS_ADDRESS
          value: 0.0.0.0:7053
        - name: CORE_PEER_GOSSIP_BOOTSTRAP
          value: <peer0-load-balancer-url>:7051
        - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
          value: <peer0-load-balancer-url>:7051
        - name: CORE_PEER_LOCALMSPID
          value: AllParticipantsMSP
        - name: CORE_PEER_MSPCONFIGPATH
          value: <path-to-msp-of-peer0>
        - name: ORDERER_ADDRESS
          value: <orderer-load-balancer-url>:7050
        - name: CORE_PEER_ADDRESSAUTODETECT
          value: "true"
        - name: CORE_VM_DOCKER_ATTACHSTDOUT
          value: "true"
        - name: FABRIC_CFG_PATH
          value: /etc/hyperledger/fabric
        volumeMounts:
        - name: allparticipants-peer0-ledger
          mountPath: /var/ledger
        - name: dockersock
          mountPath: /host/var/run/docker.sock
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: dockervolume
        persistentVolumeClaim:
          claimName: docker-pvc
  volumeClaimTemplates:
  - metadata:
      name: allparticipants-peer0-ledger
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: allparticipants-peer0-sc
      resources:
        requests:
          storage: 1Gi
Not including Storage Class and Service configuration for the pods as they seem to work fine.
So, as stated earlier, the peer is able to create and join the channel, and I am able to install the chaincode on the peer. However, while instantiating the chaincode from within the peer, I get the following error:
Error: error endorsing chaincode: rpc error: code = Unavailable desc = transport is closing
Apart from this, the Docker (DIND) container started within peer0's pod has the following warning logs:
level=warning msg="887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd/mounts/shm, flags: 0x2: no such file or directory"
Additionally, there are no logs in this docker container when I submit the following instantiation request:
peer chaincode instantiate -o $ORDERER_ADDRESS -C carchannel -n fabcar \
  -l node -v 0.1.1 -c '{"Args":[]}' -P "OR ('AllParticipantsMSP.member','AllParticipantsMSP.peer', 'AllParticipantsMSP.admin', 'AllParticipantsMSP.client')"
I tried searching for similar issues but was not able to find one that matches this EKS setup.
Is there an issue with the pod configuration or the EKS configuration? I am not able to get past this one. Can someone please point me in the right direction? I am quite new to K8s.
Update 1:
I updated the service type to LoadBalancer, keeping the rest of the configuration the same. I still get the same error.
Update 2:
Configured DIND approach for the chaincode container.
Update 3:
Mounted a PV and PVC for the DIND container, and updated the CORE_VM_ENDPOINT access protocol from tcp to http.
The issue here was the image used for the Docker-in-Docker container. I downgraded the Docker-in-Docker image from docker:stable-dind to docker:18-dind. The stable image version has TLS enabled by default. In my case, I tried setting the environment variable DOCKER_TLS_CERTDIR to blank, but that did not work out.
Attaching the snippet of the DIND configuration:
containers:
- name: docker
  securityContext:
    privileged: true
  image: "docker:18-dind"
  ports:
  - containerPort: 2375
    protocol: TCP
  volumeMounts:
  - mountPath: /var/lib/docker
    name: dockervolume
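For completeness, the peer container then talks to this sidecar over the plain (non-TLS) port; the commonly used form of that endpoint is the tcp:// scheme, so the corresponding env entry on the peer would look roughly like this (a sketch, not verified against this exact setup):
env:
- name: CORE_VM_ENDPOINT
  value: tcp://localhost:2375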
Note: While I have been able to work this out, I am not marking this as the accepted answer, since TLS might be required to instantiate the chaincode and there should be a way to use it; I would be open to and looking out for those answers.

kubernetes creating statefulset fail

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong?
The yaml file is (please do not consider the alignment of the rows):
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: database
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassword
        - name: MYSQL_DATABASE
          value: database
        - name: MYSQL_USER
          value: user
        - name: MYSQL_PASSWORD
          value: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      - name: blog
        image: wordpress:latest
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: 127.0.0.1:3306
        - name: WORDPRESS_DB_NAME
          value: database
        - name: WORDPRESS_DB_USER
          value: user
        - name: WORDPRESS_DB_PASSWORD
          value: password
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Gi
The apiVersion of StatefulSet should be:
apiVersion: apps/v1
From the official documentation
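So the manifest would start like this, with the rest of the spec unchanged (you can list the API versions your cluster serves with kubectl api-versions):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress-database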
Good luck.

Linkerd and k8s not working

I'm trying to get my head around Linkerd in Kubernetes. I'm using the Linkerd daemonset example from their website in my local minikube.
It is all deployed in the production namespace. When I run
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
nothing happens. Where am I going wrong in my setup?
My Linkerd yaml:
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/production/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
Here's my deployment for an apiservice:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"
Here's the service:
kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080
In my node application I'm using global tunnel:
const server = app.listen(port);
server.on('listening', function(){
  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });
  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')} `);
});
Where is your curl command being run?
http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs
The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d -- I expect you'll see no external IP.
I think that you'll need to modify the service definition, or create an additional, explicitly external service that exposes a ClusterIP, in order to receive ingress traffic.
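On minikube a LoadBalancer service typically never gets an external IP, so one pragmatic option is an extra NodePort service in front of linkerd's outgoing port; a sketch, not taken from the linkerd examples:
apiVersion: v1
kind: Service
metadata:
  name: l5d-external
  namespace: production
spec:
  type: NodePort
  selector:
    app: l5d
  ports:
  - name: outgoing
    port: 4140
    nodePort: 30140
You could then point http_proxy at $(minikube ip):30140.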
Deploying two of the same node applications and making them send requests to each other worked. Weirdly, the requests don't show up in the linkerd dashboard.