kubectl apply -f <spec.yaml> equivalent in fabric8 java api - kubernetes

I was trying to use the io.fabric8 API to create a few resources in Kubernetes from a pod-spec.yaml.
Config config = new ConfigBuilder()
    .withNamespace("ag")
    .withMasterUrl(K8_URL)
    .build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
    LOGGER.info("Master: " + client.getMasterUrl());
    LOGGER.info("Loading File : " + args[0]);
    Pod pod = client.pods().load(new FileInputStream(args[0])).get();
    LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
    LOGGER.error(e.getMessage(), e);
}
The above code works if the resource is a Pod, and it works fine for other single resource types as well.
But if the YAML contains multiple resource types, such as a Pod and a Service in the same file, how do I use the fabric8 API?
I tried client.load(new FileInputStream(args[0])).createOrReplace(); but it crashes with the below exception:
java.lang.NullPointerException
    at java.net.URI$Parser.parse(URI.java:3042)
    at java.net.URI.<init>(URI.java:588)
    at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
    at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
    at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
    at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
    at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
    at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
YAML file used:
apiVersion: v1
kind: Pod
metadata:
  generateName: zep-ag-pod
  annotations:
    kubernetes.io/psp: restricted
    spark-app-name: Zeppelin-spark-shared-process
  namespace: ag
  labels:
    app: zeppelin
    int-app-selector: shell-123
spec:
  containers:
  - name: ag-csf-zep
    image: bcmt-registry:5000/zep-spark2.2:9
    imagePullPolicy: IfNotPresent
    command: ["/bin/bash"]
    args: ["-c","echo Hi && sleep 60 && echo Done"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
  securityContext:
    fsGroup: 2000
    runAsUser: 1510
  serviceAccount: csfzeppelin
  serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
  name: zeppelin-service
  namespace: ag
  labels:
    app: zeppelin
spec:
  type: NodePort
  ports:
  - name: zeppelin-service
    port: 30099
    protocol: TCP
    targetPort: 8080
  selector:
    app: zeppelin

You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:
// Load the YAML into a list of Kubernetes resources
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply the Kubernetes resources
client.resourceList(result).inNamespace(namespace).createOrReplace();

Related

How to pass data in configmap as secret Kubernetes

I am deploying a GitLab runner in a K8s cluster, and I need to pass the data (environment, token, url) in the ConfigMap as a Secret. Below are the Deployment and ConfigMap manifest files.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      hostNetwork: true
      serviceAccountName: default
      containers:
      - args:
        - run
        image: gitlab/gitlab-runner:latest
        imagePullPolicy: Always
        name: gitlab-runner
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "100m"
        volumeMounts:
        - name: config
          mountPath: /etc/gitlab-runner/config.toml
          readOnly: true
          subPath: config.toml
      volumes:
      - name: config
        configMap:
          name: gitlab-runner-config
      restartPolicy: Always
config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  namespace: gitlab-runner
data:
  config.toml: |-
    concurrent = 4
    [[runners]]
      name = "Kubernetes Runner"
      url = "gitlab url"
      token = "secrettoken"
      executor = "kubernetes"
      environment = ["TEST_VAR=THIS_IS_TEST_VAR_PRINTING", "SECOUND_TEST_VAR=This_is_2nd_test_var"]
      [runners.kubernetes]
        namespace = "gitlab-runner"
        privileged = true
        poll_timeout = 600
        cpu_request = "1"
        service_cpu_request = "200m"
        [[runners.kubernetes.volumes.host_path]]
          name = "docker"
          mount_path = "/var/run/docker.sock"
          host_path = "/var/run/docker.sock"
In the above ConfigMap YAML I need to pass the environment, token, and url as Kubernetes Secrets.
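One common pattern (a sketch, not from the original manifests; the Secret name, keys, and environment-variable names are hypothetical) is to move the sensitive values into a Secret and expose them to the runner container as environment variables. Note that a config.toml mounted from a ConfigMap is used verbatim; the runner does not expand environment variables inside it, so these values are typically consumed at registration time or templated into the file by an entrypoint script:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret     # hypothetical name
  namespace: gitlab-runner
type: Opaque
stringData:                      # stringData spares you manual base64-encoding
  gitlab-url: "gitlab url"
  runner-token: "secrettoken"
---
# Referenced from the Deployment's container spec (variable names are illustrative):
# env:
# - name: GITLAB_URL
#   valueFrom:
#     secretKeyRef:
#       name: gitlab-runner-secret
#       key: gitlab-url
# - name: RUNNER_TOKEN
#   valueFrom:
#     secretKeyRef:
#       name: gitlab-runner-secret
#       key: runner-token
```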

Connection Refused on Port 9000 for Logstash Deployment on Kubernetes

I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins#jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
  labels:
    app: jenkins-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
      - name: jenkins
        env:
        - name: LOGSTASH_HOST
          value: logstash
        - name: LOGSTASH_PORT
          value: "5044"
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        image: jenkins/jenkins:lts
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "500Mi"
            cpu: "500m"
        ports:
        - name: httpport
          containerPort: 8080
        - name: jnlpport
          containerPort: 50000
        livenessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
Here is my jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
    k8s-app: jenkins-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
Here is my logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: monitoring-observability
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        env:
        - name: JENKINS_HOST
          value: jenkins-server
        - name: JENKINS_PORT
          value: "8080"
        image: docker.elastic.co/logstash/logstash:6.3.0
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
          - key: logstash.conf
            path: logstash.conf
Here is my logstash-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: monitoring-observability
  labels:
    app: logstash
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "logstash"
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
  type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        host => "jenkins-server.devops-tools"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port 5044 and get the same results. It seems as though my logstash instance is not actually listening on the containerPort. Why might this be?
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
Note that all references to the Jenkins host have been removed. In a tcp input, host is the address Logstash itself binds to; pointing it at jenkins-server.devops-tools asked Logstash to bind a listener to a non-local address, which produced the "Cannot assign requested address" bind error above and left nothing actually listening on port 9000.

Kubernetes: Replace file by configmap

Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: chart-1591249502-zeppelin
  namespace: ra-iot-dev
  labels:
    helm.sh/chart: zeppelin-0.1.0
    app.kubernetes.io/name: zeppelin
    app.kubernetes.io/instance: chart-1591249502
    app.kubernetes.io/version: "0.9.0"
    app.kubernetes.io/managed-by: Helm
data:
  log4j.properties: |-
    log4j.rootLogger = INFO, dailyfile
    log4j.appender.stdout = org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
    log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
    log4j.appender.dailyfile.DEBUG = INFO
    log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
    log4j.appender.dailyfile.File = ${zeppelin.log.file}
    log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
    log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
    log4j.logger.org.apache.zeppelin.python=DEBUG
    log4j.logger.org.apache.zeppelin.spark=DEBUG
I'm trying to mount this file at /zeppelin/conf/log4j.properties inside the pod.
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-1591249502-zeppelin
  labels:
    helm.sh/chart: zeppelin-0.1.0
    app.kubernetes.io/name: zeppelin
    app.kubernetes.io/instance: chart-1591249502
    app.kubernetes.io/version: "0.9.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: chart-1591249502
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: chart-1591249502
    spec:
      serviceAccountName: chart-1591249502-zeppelin
      securityContext:
        {}
      containers:
      - name: zeppelin
        securityContext:
          {}
        image: "apache/zeppelin:0.9.0"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        resources:
          {}
        env:
        - name: ZEPPELIN_PORT
          value: "8080"
        - name: ZEPPELIN_K8S_CONTAINER_IMAGE
          value: apache/zeppelin:0.9.0
        - name: ZEPPELIN_RUN_MODE
          value: local
        volumeMounts:
        - name: log4j-properties-volume
          mountPath: /zeppelin/conf/log4j.properties
      volumes:
      - name: log4j-properties-volume
        configMap:
          name: chart-1591249502-zeppelin
          items:
          - key: log4j.properties
            path: keys
I'm getting this error event in kubernetes:
Error: failed to start container "zeppelin": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\"/var/lib/origin/openshift.local.volumes/pods/63ac209e-a626-11ea-9e39-0050569f5f65/volumes/kubernetes.io~configmap/log4j-properties-volume\\" to rootfs \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged\\" at \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged/zeppelin/conf/log4j.properties\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Bear in mind that I only want to replace an existing file. That is, the /zeppelin/conf/ directory contains several files, and I only want to replace /zeppelin/conf/log4j.properties.
Any ideas?
From the logs I saw that you are working on OpenShift; however, I was able to do it on GKE.
I deployed the plain Zeppelin Deployment from your example.
zeppelin#chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ cat log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
...
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
...
# limitations under the License.
#
log4j.rootLogger = INFO, stdout
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
zeppelin#chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$
If you want to replace one specific file, you need to use subPath. There is also an article with another example, which can be found here.
Issue 1. ConfigMap belongs to namespace
Your Deployment did not contain any namespace, so it was deployed in the default namespace, while the ConfigMap included namespace: ra-iot-dev.
$ kubectl api-resources
NAME          SHORTNAMES   APIGROUP   NAMESPACED   KIND
...
configmaps    cm                      true         ConfigMap
...
If you keep this namespace, you will probably get an error like:
MountVolume.SetUp failed for volume "log4j-properties-volume" : configmap "chart-1591249502-zeppelin" not found
Issue 2. subPath to replace file
I've changed one part of the deployment (added subPath):
        volumeMounts:
        - name: log4j-properties-volume
          mountPath: /zeppelin/conf/log4j.properties
          subPath: log4j.properties
      volumes:
      - name: log4j-properties-volume
        configMap:
          name: chart-1591249502-zeppelin
and another in the ConfigMap (removed the namespace and set the proper names):
apiVersion: v1
kind: ConfigMap
metadata:
  name: chart-1591249502-zeppelin
  labels:
    helm.sh/chart: zeppelin-0.1.0
    app.kubernetes.io/name: zeppelin
    app.kubernetes.io/instance: chart-1591249502
    app.kubernetes.io/version: "0.9.0"
    app.kubernetes.io/managed-by: Helm
data:
  log4j.properties: |-
    ...
After that, the content of the file looks like:
$ kubectl exec -ti chart-1591249502-zeppelin-64495dcfc8-ccddr -- /bin/bash
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~$ cd conf
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ ls
configuration.xsl log4j.properties log4j_yarn_cluster.properties zeppelin-env.cmd.template zeppelin-site.xml.template
interpreter-list log4j.properties2 shiro.ini.template zeppelin-env.sh.template
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ cat log4j.properties
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUGzeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: application-config-test
  namespace: ***
  labels:
    app: test
    environment: ***
    tier: backend
data:
  application.properties: |-
    ulff.kafka.configuration.acks=0
    ulff.kafka.configuration[bootstrap.servers]=IP
    ulff.kafka.topic=test-topic
    ulff.enabled=true
    logging.level.com.anurag.gigthree.ulff.kafka=DEBUG
    management.port=9080
    management.security.enabled=false
    management.endpoints.web.exposure.include= "metrics,health,threaddump,prometheus,heapdump"
    management.endpoint.prometheus.enabled=true
    management.metrics.export.prometheus.enabled=true
    ## For apigee PROD
    apigee.url=****
    ### Secrets in Kubernetes accessed by ENV variables
    apigee.clientID=apigeeClientId
    apigee.clientsecret=apigeeClientSecret
    spring.mvc.throw-exception-if-no-handler-found=true
    # OAuth details for apigee
    oauth2.config.clientId=${apigee.clientID}
    oauth2.config.clientSecret=${apigee.clientsecret}
    oauth2.config.authenticationScheme=form
    oauth2.config.scopes[0]=test_INTEGRATION_ALL
    oauth2.config.accessTokenUri=${apigee.url}/oauth2/token
    oauth2.config.requestTimeout=55000
    oauth2.restTemplateBuilder.enabled=true
    # spring jackson properties
    spring.jackson.default-property-inclusion=always
    spring.jackson.generator.ignore-unknown=true
    spring.jackson.mapper.accept-case-insensitive-properties=true
    spring.jackson.deserialization.fail-on-unknown-properties=false
    # service urls for apply profile
    services.apigeeIntegrationAPI.doProfileChangeUrl=${apigee.url}/v1/testintegration
    services.apigeeIntegrationAPI.modifyServiceOfSubscriberUrl=${apigee.url}/v1/testintegration/subscribers
    # service urls for retrieve profile
    services.apigeeIntegrationAPI.getProfileUrl=${apigee.url}/v1
    services.apigeeIntegrationAPI.readKeyUrl=${apigee.url}/v1/testintegration
    test.acfStatusConfig[1].country-prefix=
    test.acfStatusConfig[1].country-code=
    test.acfStatusConfig[1].profile-name=
    test.acfStatusConfig[1].adult=ON
    test.acfStatusConfig[1].hvw=ON
    test.acfStatusConfig[1].ms=ON
    test.acfStatusConfig[1].dc=ON
    test.acfStatusConfig[1].at=OFF
    test.acfStatusConfig[1].gambling=
    test.acfStatusConfig[1].dating=OFF
    test.acfStatusConfig[1].sex=OFF
    test.acfStatusConfig[1].sn=OFF
    logging.pattern.level=%X{ulff.transaction-id:-} -%5p
    logging.config=/app/config/log4j2.yml
  log4j2.yml: |-
    Configuration:
      name: test-ms
      packages :
      Appenders:
        Console:
        - name: sysout
          target: SYSTEM_OUT
          PatternLayout:
            pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
        - name: syserr
          target: SYSTEM_ERR
          PatternLayout:
            pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
          Filters:
            ThresholdFilter :
              level: "WARN"
              onMatch: "ACCEPT"
        Kafka:
          name : kafkaAppender
          topic: af.prod.ms.test.tomcat
          JSONLayout:
            complete: "false"
            compact: "false"
            eventEol: "true"
            includeStacktrace: "true"
            properties: "true"
          Property:
            name: "bootstrap.servers"
            value: ""
      Loggers:
        Root:
          level: INFO
          AppenderRef:
          - ref: sysout
          - ref: syserr
        #### test 1 test 2 Separate kafka log from application log
        Logger:
        - name: com.anurag
          level: INFO
          AppenderRef:
          - ref: kafkaAppender
        - name: org.springframework
          level: INFO
          AppenderRef:
          - ref: kafkaAppender

How to fix "bad certificate error" in traefik 2.0?

I'm setting up Traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but now I'm stuck on a "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" error in the container logs.
Connections via HTTP work fine. When I try to connect via HTTPS, Traefik returns a 404 with its own default certificates.
I found the same problem for Traefik v1 on GitHub. The solution was to add the following to the config:
InsecureSkipVerify = true
passHostHeader = true
It doesn't help me.
Here is my configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-configmap
  namespace: kube-system
data:
  traefik.toml: |
    [Global]
    sendAnonymousUsage = true
    debug = true
    logLevel = "DEBUG"

    [ServersTransport]
    InsecureSkipVerify = true

    [entrypoints]
      [entrypoints.web]
      address = ":80"
      [entryPoints.web-secure]
      address = ":443"
      [entrypoints.mongo-port]
      address = ":11111"

    [providers]
      [providers.file]

    [tcp] # YAY!
      [tcp.routers]
        [tcp.routers.everything-to-mongo]
        entrypoints = ["mongo-port"]
        rule = "HostSNI(`*`)" # Catches every request
        service = "database"
      [tcp.services]
        [tcp.services.database.LoadBalancer]
          [[tcp.services.database.LoadBalancer.servers]]
          address = "mongodb-service.default.svc:11111"

    [http]
      [http.routers]
        [http.routers.for-jupyterx-https]
        entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
        [http.routers.for-jupyterx.tls]
        [http.routers.for-jupyterx-http]
        entryPoints = ["web"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
      [http.services]
        [http.services.jupyterx.LoadBalancer]
        PassHostHeader = true
        # InsecureSkipVerify = true
          [[http.services.jupyterx.LoadBalancer.servers]]
          url = "http://jupyter-service.default.svc/"
          weight = 100

    [acme] # every router with TLS enabled will now be able to use ACME for its certificates
    email = "account#mail.com"
    storage = "acme.json"
    # onHostRule = true # dynamic generation based on the Host() & HostSNI() matchers
    caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
      entryPoint = "web" # used during the challenge
And DaemonSet yaml:
# ---
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: traefik-ingress-controller
#   namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      # - name: traefik-ui-tls-cert
      #   secret:
      #     secretName: traefik-ui-tls-cert
      - name: traefik-ingress-configmap
        configMap:
          name: traefik-ingress-configmap
      containers:
      - image: traefik:2.0 # The official v2.0 Traefik docker image
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: web-secure
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
        - name: mongodb
          containerPort: 11111
        volumeMounts:
        - mountPath: "/config"
          name: "traefik-ingress-configmap"
        args:
        - --api
        - --configfile=/config/traefik.toml
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 443
    name: web-secure
  - protocol: TCP
    port: 8080
    name: admin
  - port: 11111
    protocol: TCP
    name: mongodb
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
Does anyone have suggestions on how to fix this?
Due to the lack of manuals for traefik 2.0-alpha, the config file was written using only the manual from the official Traefik page.
There is a "routers for HTTP & HTTPS" configuration example at https://docs.traefik.io/v2.0/routing/routers/ that looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
  [http.routers.Router-1.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
But a working config looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
  [http.routers.Router-1-https.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
So, in my config, the line
[http.routers.for-jupyterx.tls]
should be changed to
[http.routers.for-jupyterx-https.tls]

cluster created with Kops - deploying one pod by node with DaemonSet avoiding master node

I'm trying to deploy one pod per node. This works fine with the DaemonSet kind when the cluster is created with kube-up. But we migrated cluster creation to kops, and with kops the master node is part of the cluster.
I noticed the master node is defined with a specific label: kubernetes.io/role=master
and with a taint: scheduler.alpha.kubernetes.io/taints: [{"key":"dedicated","value":"master","effect":"NoSchedule"}]
But that does not prevent a DaemonSet pod from being deployed on it.
So I tried to add scheduler.alpha.kubernetes.io/affinity:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
    annotations:
      scheduler.alpha.kubernetes.io/affinity: >
        {
          "nodeAffinity": {
            "requiredDuringSchedulingRequiredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "kubernetes.io/role",
                      "operator": "NotIn",
                      "values": ["master"]
                    }
                  ]
                }
              ]
            }
          }
        }
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
        - env:
          - name: "SERVICE_DNS"
            value: "elasticsearch-cluster"
          - name: "NODE_MASTER"
            value: "false"
          image: "essearch/ess-elasticsearch:1.7.6"
          name: elasticsearch
          imagePullPolicy: Always
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - mountPath: "/usr/share/elasticsearch/data"
            name: task-pv-storage
        volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
        nodeSelector:
          minion: true
But it does not work. Does anyone know why?
The workaround I have for now is to use nodeSelector and add a label to the minion-only nodes, but I would rather avoid adding a label during cluster creation, because it's an extra step; if I could avoid it, that would be for the best :)
EDIT:
I changed it to the following (given the answer), and I think it's right, but it doesn't help; a pod is still deployed on the master:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "nodeAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": {
                  "nodeSelectorTerms": [
                    {
                      "matchExpressions": [
                        {
                          "key": "kubernetes.io/role",
                          "operator": "NotIn",
                          "values": ["master"]
                        }
                      ]
                    }
                  ]
                }
              }
            }
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
        - env:
          - name: "SERVICE_DNS"
            value: "elasticsearch-cluster"
          - name: "NODE_MASTER"
            value: "false"
          image: "essearch/ess-elasticsearch:1.7.6"
          name: elasticsearch
          imagePullPolicy: Always
          ports:
          - containerPort: 9300
            name: transport
          volumeMounts:
          - mountPath: "/usr/share/elasticsearch/data"
            name: task-pv-storage
        volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
Just move the annotation into the pod template: section (under metadata:).
Alternatively taint the master node (and you can remove the annotation):
kubectl taint nodes nameofmaster dedicated=master:NoSchedule
I suggest you read up on taints and tolerations.
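To make the mechanics concrete, here is a minimal sketch of the two sides: the taint set by the command above, and the toleration a pod would need in order to opt back in to the master. (In the question's cluster version taints and tolerations were still alpha annotations; newer Kubernetes exposes them as the first-class fields shown here.)

```yaml
# kubectl taint nodes nameofmaster dedicated=master:NoSchedule
# keeps every pod WITHOUT a matching toleration (your DaemonSet included) off the master.
# A pod that SHOULD still run there would declare, in its pod spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "master"
  effect: "NoSchedule"
```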