How to fix "bad certificate error" in traefik 2.0? - kubernetes

I'm setting up Traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but I'm now stuck on a "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" error in the container logs.
Connections via HTTP work fine. When I try to connect via HTTPS, Traefik returns a 404 with its own default certificate.
I found the same problem reported for Traefik v1 on GitHub. The solution there was to add the following to the config:
InsecureSkipVerify = true
passHostHeader = true
That didn't help in my case.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-configmap
  namespace: kube-system
data:
  traefik.toml: |
    [Global]
    sendAnonymousUsage = true
    debug = true
    logLevel = "DEBUG"

    [ServersTransport]
    InsecureSkipVerify = true

    [entryPoints]
      [entryPoints.web]
        address = ":80"
      [entryPoints.web-secure]
        address = ":443"
      [entryPoints.mongo-port]
        address = ":11111"

    [providers]
      [providers.file]

    [tcp] # YAY!
      [tcp.routers]
        [tcp.routers.everything-to-mongo]
          entryPoints = ["mongo-port"]
          rule = "HostSNI(`*`)" # Catches every request
          service = "database"
      [tcp.services]
        [tcp.services.database.LoadBalancer]
          [[tcp.services.database.LoadBalancer.servers]]
            address = "mongodb-service.default.svc:11111"

    [http]
      [http.routers]
        [http.routers.for-jupyterx-https]
          entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port
          # rule = "Host(`clients-ui.ddns.net`)"
          # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
          rule = "PathPrefix(`/jupyterx`)"
          service = "jupyterx"
          [http.routers.for-jupyterx.tls]
        [http.routers.for-jupyterx-http]
          entryPoints = ["web"] # won't listen to entrypoint mongo-port
          # rule = "Host(`clients-ui.ddns.net`)"
          # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
          rule = "PathPrefix(`/jupyterx`)"
          service = "jupyterx"
      [http.services]
        [http.services.jupyterx.LoadBalancer]
          PassHostHeader = true
          # InsecureSkipVerify = true
          [[http.services.jupyterx.LoadBalancer.servers]]
            url = "http://jupyter-service.default.svc/"
            weight = 100

    [acme] # every router with TLS enabled will now be able to use ACME for its certificates
      email = "account@mail.com"
      storage = "acme.json"
      # onHostRule = true # dynamic generation based on the Host() & HostSNI() matchers
      caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
        entryPoint = "web" # used during the challenge
And the DaemonSet YAML:
# ---
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: traefik-ingress-controller
#   namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
        # - name: traefik-ui-tls-cert
        #   secret:
        #     secretName: traefik-ui-tls-cert
        - name: traefik-ingress-configmap
          configMap:
            name: traefik-ingress-configmap
      containers:
        - image: traefik:2.0 # The official v2.0 Traefik docker image
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: web-secure
              containerPort: 443
              hostPort: 443
            - name: admin
              containerPort: 8080
            - name: mongodb
              containerPort: 11111
          volumeMounts:
            - mountPath: "/config"
              name: "traefik-ingress-configmap"
          args:
            - --api
            - --configfile=/config/traefik.toml
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 443
      name: web-secure
    - protocol: TCP
      port: 8080
      name: admin
    - port: 11111
      protocol: TCP
      name: mongodb
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
Any suggestions on how to fix this?

Since documentation for the Traefik 2.0 alpha is scarce, the config file was written using only the guide on the official Traefik page.
The "routers for HTTP & HTTPS" configuration example at https://docs.traefik.io/v2.0/routing/routers/ looks like this:
[http.routers]
  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
But a working config looks like this:
[http.routers]
  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1-https.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
So in my config the line
[http.routers.for-jupyterx.tls]
should be changed to
[http.routers.for-jupyterx-https.tls]
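Applying that fix, the HTTPS router section of the ConfigMap above becomes:
[http.routers.for-jupyterx-https]
  entryPoints = ["web-secure"]
  rule = "PathPrefix(`/jupyterx`)"
  service = "jupyterx"
  [http.routers.for-jupyterx-https.tls] # section name now matches the router name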

Related

Pods not communicating with the Gossip Router and hence not forming a single JGroups cluster

I have a StatefulSet as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups-leader-poc
  labels:
    app: jgroups-leader-poc
spec:
  serviceName: jgroups-leader-poc
  replicas: 3
  selector:
    matchLabels:
      app: jgroups-leader-poc
  template:
    metadata:
      labels:
        app: jgroups-leader-poc
    spec:
      containers:
        - name: jgroups-leader-poc-container
          image: localhost:5001/jgroups-leader-ui:1.0
          imagePullPolicy: Always
          env:
            - name: jgroups.gossip_routers
              value: "localhost[12001]"
            - name: jgroups.tcp.ip
              value: "site_local,match-interface:eth0"
            - name: jgroups.tcp.ntfnport
              value: "7800"
            - name: JGROUPS_EXTERNAL_ADDR
              value: "match-interface:eth0"
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - containerPort: 7800
              name: k8sping-port
TcpGossip.xml is as follows:
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP external_addr="${JGROUPS_EXTERNAL_ADDR:match-interface:eth0}"
         bind_addr="${jgroups.tcp.ip}" bind_port="${jgroups.tcp.ntfnport:0}"
         sock_conn_timeout="300"
         max_bundle_size="60000"
         enable_diagnostics="false"
         thread_naming_pattern="cl"
         thread_pool.enabled="true"
         thread_pool.min_threads="1"
         thread_pool.max_threads="25"
         thread_pool.keep_alive_time="5000" />
    <TCPGOSSIP initial_hosts="${jgroups.gossip_routers:localhost[12001]}" reconnect_interval="3000"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD_ALL timeout="60000" interval="15000" timeout_check_interval="5000"/>
    <FD_HOST check_timeout="5000" interval="15000" timeout="60000"/>
    <VERIFY_SUSPECT timeout="5000"/>
    <pbcast.NAKACK2 use_mcast_xmit="false" use_mcast_xmit_req="false" xmit_interval="1000"/>
    <UNICAST3 xmit_table_max_compaction_time="3000" />
    <pbcast.STABLE desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
    <MFC max_credits="2M"/>
    <FRAG2 frag_size="30K"/>
    <pbcast.STATE buffer_size="1048576" max_pool="10"/>
    <pbcast.FLUSH timeout="0"/>
</config>
And I have started the Gossip Router as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gossiprouter
  labels:
    run: gossiprouter
spec:
  replicas: 1
  selector:
    matchLabels:
      run: gossiprouter
  template:
    metadata:
      labels:
        run: gossiprouter
    spec:
      containers:
        - image: belaban/gossiprouter:latest
          name: gossiprouter
          ports:
            - containerPort: 8787
            - containerPort: 9000
            - containerPort: 12001
          env:
            - name: LogLevel
              value: "TRACE"
---
apiVersion: v1
kind: Service
metadata:
  name: gossiprouter
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 8787
      targetPort: 8787
      name: debug
      protocol: TCP
    - port: 9000
      targetPort: 9000
      name: netcat
      protocol: TCP
    - port: 12001
      targetPort: 12001
      name: gossiprouter
      protocol: TCP
  selector:
    run: gossiprouter
(The output of kubectl get pods and kubectl get svc was attached as screenshots.)
Source code for reference:
public class Chatter extends ReceiverAdapter {

    JChannel channel;

    @Value("${app.jgroups.config:jgroups-config.xml}")
    private String jGroupsConfig;

    @Value("${app.jgroups.cluster:chat-cluster}")
    private String clusterName;

    @PostConstruct
    public void init() {
        try {
            channel = new JChannel(jGroupsConfig);
            channel.setReceiver(this);
            channel.connect(clusterName);
            checkLeaderStatus();
            channel.getState(null, 10000);
        } catch (Exception ex) {
            log.error("registering the channel in JMX failed: {}", ex);
        }
    }

    public void close() {
        channel.close();
    }

    public void viewAccepted(View newView) {
        log.info("view: " + newView);
        checkLeaderStatus();
    }

    private void checkLeaderStatus() {
        Address address = channel.getView().getMembers().get(0);
        if (address.equals(channel.getAddress())) {
            log.info("Nodes are started, I'm the master!");
        } else {
            log.info("Nodes are started, I'm a slave!");
        }
    }
}
The issue is that none of the pods connect to the running gossip router, even though localhost[12001] is set as jgroups.gossip_routers in the StatefulSet. As a result, each pod forms its own separate JGroups cluster instead of joining a single cluster.
(Gossip router service details were attached as a screenshot.)
Sorry for the delay! Do you have a public image for jgroups-leader-ui:1.0?
I suspect that the members don't communicate with each other because they connect directly to each other, possibly some issue with external_addr.
Why are you using TCP:TCPGOSSIP instead of TUNNEL:PING?
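For reference, a minimal sketch of what that TUNNEL:PING swap might look like in TcpGossip.xml (attribute names per JGroups 4.x; everything below the transport/discovery pair stays as in the stack above; treat this as an illustration, not a tested config):
<config xmlns="urn:org:jgroups">
    <!-- TUNNEL sends all cluster traffic through the GossipRouter(s) -->
    <TUNNEL gossip_router_hosts="${jgroups.gossip_routers:localhost[12001]}"
            bind_addr="${jgroups.tcp.ip}"/>
    <!-- PING discovers members via the transport, i.e. through the GossipRouter -->
    <PING/>
    <!-- MERGE3, FD_SOCK, NAKACK2, UNICAST3, GMS, ... unchanged -->
</config>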
OK, so I tried this out with a sample app/image (chat.sh from the belaban/jgroups image).
Try the following steps:
kubectl apply -f gossiprouter.yaml (your existing YAML)
Find the eth0 address of the GossipRouter pod (in my case below: 172.17.0.3)
kubectl apply -f jgroups.yaml (the YAML is pasted below)
kubectl exec jgroups-0 probe.sh should list 3 members:
/Users/bela$ kubectl exec jgroups-0 probe.sh
#1 (180 bytes):
local_addr=jgroups-1-42392
physical_addr=172.17.0.5:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#2 (180 bytes):
local_addr=jgroups-0-13785
physical_addr=172.17.0.4:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#3 (180 bytes):
local_addr=jgroups-2-14656
physical_addr=172.17.0.6:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
3 responses (3 matches, 0 non matches)
jgroups.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups
  labels:
    run: jgroups
spec:
  replicas: 3
  selector:
    matchLabels:
      run: jgroups
  serviceName: "jgroups"
  template:
    metadata:
      labels:
        run: jgroups
    spec:
      containers:
        - image: belaban/jgroups
          name: jgroups
          command: ["chat.sh"]
          args: ["-props gossip.xml"]
          env:
            - name: DNS_QUERY
              value: "jgroups.default.svc.cluster.local"
            - name: DNS_RECORD_TYPE
              value: A
            - name: TCPGOSSIP_INITIAL_HOSTS
              value: "172.17.0.3[12001]"
---
apiVersion: v1
kind: Service
metadata:
  # annotations:
  #   service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: jgroups
  labels:
    run: jgroups
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - name: ping
      port: 7800
      protocol: TCP
      targetPort: 7800
    - name: probe
      port: 7500
      protocol: UDP
      targetPort: 7500
    - name: debug
      port: 8787
      protocol: TCP
      targetPort: 8787
    - name: stomp
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    run: jgroups
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---

Connection Refused on Port 9000 for Logstash Deployment on Kubernetes

I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
  labels:
    app: jenkins-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          env:
            - name: LOGSTASH_HOST
              value: logstash
            - name: LOGSTASH_PORT
              value: "5044"
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-logging
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
          image: jenkins/jenkins:lts
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
Here is my jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
    k8s-app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
Here is my logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: monitoring-observability
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          env:
            - name: JENKINS_HOST
              value: jenkins-server
            - name: JENKINS_PORT
              value: "8080"
          image: docker.elastic.co/logstash/logstash:6.3.0
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
Here is my logstash-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: monitoring-observability
  labels:
    app: logstash
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "logstash"
spec:
  selector:
    app: logstash
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        host => "jenkins-server.devops-tools"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port 5044 and get the same results. It seems as though my logstash instance is not actually listening on the containerPort. Why might this be?
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
Note that all references to the Jenkins host have been removed. In the Logstash tcp input, host is the address Logstash itself binds to, not the client it expects; setting it to jenkins-server.devops-tools made Logstash try to bind an address that doesn't belong to its own pod, which is exactly the "Error: Cannot assign requested address" BindException in the logs. With host unset, the input binds 0.0.0.0 and actually listens on containerPort 9000.
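A quick sanity check is to rerun the probe from the question; with Logstash now bound to 0.0.0.0, it should accept the connection instead of refusing it:
curl -vvv logstash.monitoring-observability:9000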

Traefik Ingress does not return HTTP/HTTPS response

My website doesn't get any HTTPS certs generated via Let's Encrypt and doesn't even return a response. This is for test purposes, so it uses a basic NGINX image that I want to serve over HTTPS.
I have a Service and an Ingress that look like this:
apiVersion: v1
kind: Service
metadata:
  name: nginxtest
  labels:
    app: nginx
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginxtest
              servicePort: http
My TOML ConfigMap looks like the one below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    defaultEntryPoints = ["http","https"]
    debug = false
    logLevel = "ERROR"

    # Config to redirect http to https
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        compress = true
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        compress = true
        [entryPoints.https.tls]

    [api]
      [api.statistics]
        recentErrors = 10

    [kubernetes]

    [ping]
      entryPoint = "http"

    [accessLog]

    [acme]
      email = "a@b.com"
      storage = "certs/acme.json"
      acmeLogging = true
      entryPoint = "https"
      OnHostRule = true
      # caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
        entryPoint = "http"
Any help is appreciated; please point out any errors.

kubectl apply -f <spec.yaml> equivalent in fabric8 java api

I was trying to use the io.fabric8 API to create a few resources in Kubernetes from a pod-spec.yaml.
Config config = new ConfigBuilder()
        .withNamespace("ag")
        .withMasterUrl(K8_URL)
        .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    LOGGER.info("Master: " + client.getMasterUrl());
    LOGGER.info("Loading File : " + args[0]);
    Pod pod = client.pods().load(new FileInputStream(args[0])).get();
    LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}
The above code works if the resource is a Pod, and it likewise works for other single resource types. But if the YAML contains multiple resource types, e.g. a Pod and a Service in the same file, how do I do this with the fabric8 API?
I tried client.load(new FileInputStream(args[0])).createOrReplace(); but it crashes with the exception below:
java.lang.NullPointerException
    at java.net.URI$Parser.parse(URI.java:3042)
    at java.net.URI.<init>(URI.java:588)
    at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
    at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
    at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
    at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
    at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
    at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
YAML file used:
apiVersion: v1
kind: Pod
metadata:
  generateName: zep-ag-pod
  annotations:
    kubernetes.io/psp: restricted
    spark-app-name: Zeppelin-spark-shared-process
  namespace: ag
  labels:
    app: zeppelin
    int-app-selector: shell-123
spec:
  containers:
    - name: ag-csf-zep
      image: bcmt-registry:5000/zep-spark2.2:9
      imagePullPolicy: IfNotPresent
      command: ["/bin/bash"]
      args: ["-c","echo Hi && sleep 60 && echo Done"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
  securityContext:
    fsGroup: 2000
    runAsUser: 1510
  serviceAccount: csfzeppelin
  serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
  name: zeppelin-service
  namespace: ag
  labels:
    app: zeppelin
spec:
  type: NodePort
  ports:
    - name: zeppelin-service
      port: 30099
      protocol: TCP
      targetPort: 8080
  selector:
    app: zeppelin
You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:
// Load YAML into Kubernetes resources
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply the Kubernetes resources
client.resourceList(result).inNamespace(namespace).createOrReplace();
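For completeness, a self-contained sketch of the whole flow (the namespace "ag" and the args[0] file path come from the question; the class name is an illustrative assumption):
import java.io.FileInputStream;
import java.util.List;

import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class MultiResourceApply { // hypothetical class name
    public static void main(String[] args) throws Exception {
        Config config = new ConfigBuilder()
                .withNamespace("ag")
                .build();
        try (KubernetesClient client = new DefaultKubernetesClient(config)) {
            // Parse every document in the multi-document YAML into typed resources
            List<HasMetadata> resources =
                    client.load(new FileInputStream(args[0])).get();
            // Create the resources, or replace them if they already exist
            client.resourceList(resources).inNamespace("ag").createOrReplace();
            resources.forEach(r -> System.out.println(
                    "Applied " + r.getKind() + "/" + r.getMetadata().getName()));
        }
    }
}
This mirrors kubectl apply -f <spec.yaml>: one load() call parses all documents, and resourceList(...).createOrReplace() applies them in a single pass.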

istio load balancing of a single service with multiple versions

I was able to achieve load balancing with the sample Istio applications:
https://github.com/piomin/sample-istio-services
https://istio.io/docs/guides/bookinfo/
But I was not able to get Istio load balancing working with a single private service that has two versions, for example two Consul servers with different versions.
Service and pod definitions:
apiVersion: v1
kind: Service
metadata:
  name: consul-test
  labels:
    app: test
spec:
  ports:
    - port: 8500
      name: http
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
        - name: consul-test-v1
          image: consul:latest
          ports:
            - containerPort: 8500
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
        - name: consul-test-v2
          image: consul:1.1.0
          ports:
            - containerPort: 8500
Gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: con-gateway
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            exact: /catalog
      route:
        - destination:
            host: consul-test
            port:
              number: 8500
Routing rules in virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul-test
spec:
  hosts:
    - consul-test
  gateways:
    - con-gateway
    - mesh
  http:
    - route:
        - destination:
            host: consul-test
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consul-test
spec:
  host: consul-test
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Though I route all traffic (HTTP requests) to Consul server version v1, my HTTP requests to the consul-test service land on v1 and v2 alternately, i.e. they follow the round-robin rule.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-test ClusterIP 10.97.200.140 <none> 8500/TCP 9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "ebfa341b-4557-a392-9f8a-8ee307113faa",
    "Node": "consul-test-v1-765dd566dd-6cmj9",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 9,
    "ModifyIndex": 10
  }
]
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "1b60a5bd-9a17-ff18-3a65-0ff95b3a836a",
    "Node": "consul-test-v2-fffd475bc-st4mv",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 5,
    "ModifyIndex": 6
  }
]
I see the issue described above when curl is run against the service's ClusterIP:ClusterPort:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-test ClusterIP 10.97.200.140 <none> 8500/TCP 9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
But load balancing works as expected when curl is run against INGRESS_HOST and INGRESS_PORT (how to determine INGRESS_HOST and INGRESS_PORT is described here):
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes --- WORKS
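For reference, the INGRESS_HOST / INGRESS_PORT values above are typically derived as in the standard Istio docs (a sketch assuming the default istio-ingressgateway service in istio-system, fronted by an external load balancer):
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes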