My Jenkins agent is deployed on Kubernetes; here is the agent pod YAML:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
    cluster: dev-monitor-platform
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: jenkins
            operator: In
            values:
            - ci
  securityContext:
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
  containers:
  - name: slave-docker
    image: harbor.mycompany.net/jenkins/docker:19.03-git
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 50m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 512Mi
    securityContext:
      privileged: true
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
    - mountPath: /root/.m2
      name: jenkins-maven-m2
    - mountPath: /home/jenkins/
      name: workspace-volume
      readOnly: false
  - name: jnlp
    image: harbor.mycompany.net/jenkins/inbound-agent:alpine-jdk11
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 50m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 512Mi
    volumeMounts:
    - mountPath: /home/jenkins/
      name: workspace-volume
      readOnly: false
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: workspace-volume
    emptyDir: {}
  - name: jenkins-maven-m2
    nfs:
      path: /export/mid-devops/jenkins/m2
      server: xxx.xxx.xxx.xxx
Checking out code on the master itself is fast. However, when the agent checks out the pipeline code, it always gets stuck for about one and a half minutes, while checking out the application code on the agent is fast.
This happens every time, and I have no idea why.
I expect the agent not to get stuck when checking out code.
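To narrow down where the ninety seconds go, one thing I can try (a diagnostic sketch only, assuming the checkout uses the command-line git client inside the jnlp container) is to enable git tracing on that container and compare the timestamps in the build console:
  - name: jnlp
    image: harbor.mycompany.net/jenkins/inbound-agent:alpine-jdk11
    env:
    # Standard git trace variables; each phase of the clone (credential lookup,
    # HTTP(S) negotiation, pack transfer) is then logged with timestamps, so
    # the phase that stalls for ~90 seconds becomes visible.
    - name: GIT_TRACE
      value: "1"
    - name: GIT_CURL_VERBOSE
      value: "1"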
I've got a Kubernetes cluster with the ECK operator deployed. I also deploy Filebeat to the cluster. Here's the manifest:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: logging-dev
spec:
  type: filebeat
  version: 8.2.0
  elasticsearchRef:
    name: elastic-logging-dev
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
  daemonSet:
    podTemplate:
      metadata:
        annotations:
          co.elastic.logs/enabled: "false"
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          resources:
            requests:
              memory: 500Mi
              cpu: 100m
            limits:
              memory: 500Mi
              cpu: 200m
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
It's working very well, but I also want to parse Kubernetes component logs, e.g.:
E0819 18:57:51.309161 1 watcher.go:327] failed to prepare current and previous objects: conversion webhook for minio.min.io/v2, Kind=Tenant failed: Post "https://operator.minio-operator.svc:4222/webhook/v1/crd-conversion?timeout=30s": dial tcp 10.233.8.119:4222: connect: connection refused
How can I do that?
In Fluentd it's quite simple:
<filter kubernetes.var.log.containers.kube-apiserver-*_kube-system_*.log>
  @type parser
  key_name log
  format /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)$/
  time_format %m%d %H:%M:%S.%N
  types pid:integer
  reserve_data true
  remove_key_name_field false
</filter>
But I cannot find any example or tutorial showing how to do this with Filebeat.
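The closest I can guess at (the pipeline name klog-parser is a placeholder, and I am not sure whether templates and hints can be combined in a single provider) is to scope an autodiscover template to the kube-apiserver container and hand its events to an Elasticsearch ingest pipeline whose grok processor carries the same regex as the Fluentd filter above, roughly:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
          # Extra template scoped to the apiserver; everything else keeps using hints.
          - condition:
              equals:
                kubernetes.container.name: kube-apiserver
            config:
            - type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
              # klog-parser is a hypothetical Elasticsearch ingest pipeline that
              # would hold a grok pattern equivalent to the Fluentd regex above.
              pipeline: klog-parser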
I'm running into a missing resources issue when submitting a Workflow. The Kubernetes namespace my-namespace has a quota enabled, and for whatever reason the pods being created after submitting the workflow are failing with:
pods "hello" is forbidden: failed quota: team: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
I'm submitting the following Workflow:
apiVersion: "argoproj.io/v1alpha1"
kind: "Workflow"
metadata:
name: "hello"
namespace: "my-namespace"
spec:
entrypoint: "main"
templates:
- name: "main"
container:
image: "docker/whalesay"
resources:
requests:
memory: 0
cpu: 0
limits:
memory: "128Mi"
cpu: "250m"
Argo is running on Kubernetes 1.19.6 and was deployed with the official Helm chart version 0.16.10. Here are my Helm values:
controller:
  workflowNamespaces:
    - "my-namespace"
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
  # See https://argoproj.github.io/argo-workflows/workflow-executors/
  # docker container runtime is not present in the TKGI clusters
  containerRuntimeExecutor: "k8sapi"
workflow:
  namespace: "my-namespace"
  serviceAccount:
    create: true
  rbac:
    create: true
server:
  replicas: 2
  secure: false
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
  pdb:
    enabled: true
executer:
  resources:
    requests:
      memory: 0
      cpu: 0
    limits:
      memory: 500Mi
      cpu: 0.5
Any ideas on what I may be missing? Thanks, Weldon
Update 1: I tried another namespace without quotas enabled and got past the missing-resources issue. However, I now see: Failed to establish pod watch: timed out waiting for the condition. Here's what the spec looks like for this pod. You can see the wait container is missing resources; this is the container causing the issue reported in this question.
spec:
  containers:
  - command:
    - argoexec
    - wait
    env:
    - name: ARGO_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ARGO_CONTAINER_RUNTIME_EXECUTOR
      value: k8sapi
    image: argoproj/argoexec:v2.12.5
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /argo/podmetadata
      name: podmetadata
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-v4jlb
      readOnly: true
  - image: docker/whalesay
    imagePullPolicy: Always
    name: main
    resources:
      limits:
        cpu: 250m
        memory: 128Mi
      requests:
        cpu: "0"
        memory: "0"
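For reference, I believe the wait sidecar's resources can be given defaults through the executor section of the workflow-controller ConfigMap; a rough sketch (the values are placeholders and the ConfigMap is normally managed by the chart, so this is only illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    executor:
      # Placeholder values: these become the wait/init container's resources,
      # so they would also have to satisfy the namespace quota.
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 512Mi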
Try deploying the workflow in another namespace if you can, and verify whether it works there.
If possible, also try removing the quota from the namespace in question.
Instead of a quota you can also use a LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 250m
    defaultRequest:
      cpu: 50m
      memory: 64Mi
    type: Container
With this in place, any container that does not specify its own requests or limits gets the defaults: a request of 50m CPU / 64Mi memory and a limit of 250m CPU / 512Mi memory.
https://kubernetes.io/docs/concepts/policy/limit-range/
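To illustrate (this demo pod is mine, not from the question): with the LimitRange above in the namespace, a container that declares no resources at all is still admitted and gets the defaults filled in, which also satisfies the quota's must-specify rule:
apiVersion: v1
kind: Pod
metadata:
  name: limit-range-demo   # hypothetical pod, for illustration only
  namespace: my-namespace
spec:
  containers:
  - name: main
    image: docker/whalesay
    # No resources block here: the LimitRange admission controller injects
    # requests of cpu 50m / memory 64Mi and limits of cpu 250m / memory 512Mi.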
I would like to add a fluent-bit agent as a sidecar container to an existing Istio Ingress Gateway Deployment that is generated via external tooling (istioctl). I figured using ytt and its overlays would be a good way to accomplish this since it should let me append an additional container to the Deployment and a few extra volumes while leaving the rest of the generated YAML intact.
Here's a placeholder Deployment that approximates an istio-ingressgateway to help visualize the structure:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  template:
    metadata:
      labels:
        app: istio-ingressgateway
    spec:
      containers:
      - args:
        - example-args
        command: ["example-command"]
        image: gcr.io/istio/proxyv2
        imagePullPolicy: Always
        name: istio-proxy
      volumes:
      - name: example-volume-secret
        secret:
          secretName: example-secret
      - name: example-volume-configmap
        configMap:
          name: example-configmap
I want to add a container to this that looks like:
- name: fluent-bit
  image: fluent/fluent-bit
  resources:
    limits:
      memory: 100Mi
    requests:
      cpu: 10m
      memory: 10Mi
  volumeMounts:
  - name: fluent-bit-config
    mountPath: /fluent-bit/etc
  - name: varlog
    mountPath: /var/log
  - name: dockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
and volumes that look like:
- name: fluent-bit-config
  configMap:
    name: ingressgateway-fluent-bit-forwarder-config
- name: varlog
  hostPath:
    path: /var/log
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
I managed to hack something together by modifying the overlay files example in the ytt playground; it looks like this:
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
      #@overlay/append
      - name: fluent-bit
        image: fluent/fluent-bit
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 10Mi
        volumeMounts:
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      volumes:
      #@overlay/append
      - name: fluent-bit-config
        configMap:
          name: ingressgateway-fluent-bit-forwarder-config
      #@overlay/append
      - name: varlog
        hostPath:
          path: /var/log
      #@overlay/append
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
What I am wondering, though, is what is the best, most idiomatic way of using ytt to do this?
Thanks!
What you have now is good! The one suggestion I would make is that, if the volumes and containers always need to be added together, they be combined into the same overlay, like so:
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
      #@overlay/append
      - name: fluent-bit
        image: fluent/fluent-bit
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 10Mi
        volumeMounts:
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      #@overlay/append
      - name: fluent-bit-config
        configMap:
          name: ingressgateway-fluent-bit-forwarder-config
      #@overlay/append
      - name: varlog
        hostPath:
          path: /var/log
      #@overlay/append
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
This will guarantee that any time the container is added, the appropriate volumes are included as well.
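As a usage sketch (the file names here are placeholders, not from the original answer), the combined overlay can then be applied to whatever istioctl generates in a single pass:
istioctl manifest generate > istio-generated.yaml
ytt -f istio-generated.yaml -f fluent-bit-sidecar-overlay.yaml | kubectl apply -f -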
My pod specification file:
- name: temp1-cont
  image: temp1-img:v2
  env:
  - name: CONFIG_MODE
    value: "manager"
  securityContext:
    privileged: true
  volumeMounts:
  - mountPath: /bin/tipc-config
    name: tipc-vol
  volumeMounts:
  - mountPath: /etc/
    name: config-vol
  resources:
    limits:
      cpu: 100m
      memory: "100Mi"
    requests:
      cpu: 100m
      memory: "100Mi"
  command: ["/etc/init.d/docker-init"]
volumes:
- name: tipc-vol
  hostPath:
    path: /opt/tipc-config
    type: FileOrCreate
- name: config-vol
  hostPath:
    path: /opt/config/
    type: DirectoryOrCreate
I am using two hostPath volumes, namely tipc-vol and config-vol.
But when I create the pod, only one volume is mounted, which happens to be the last volumeMounts entry declared on the container:
temp1-cont:
    Container ID:
    Image:          temp1-img:v2
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      /etc/init.d/docker-init
    State:          Running
      Started:      Tue, 09 Jun 2020 09:36:57 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      CONFIG_MODE:  manager
    Mounts:
      /etc/ from config-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g2ltz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tipc-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/tipc-config
    HostPathType:  FileOrCreate
  config-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/config/
    HostPathType:  DirectoryOrCreate
  default-token-g2ltz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g2ltz
    Optional:    false
QoS Class:       Guaranteed
Nothing like this is mentioned in the Kubernetes docs.
I am only trying to test my application, so I am using hostPath volumes rather than persistent volumes.
Any help would be appreciated.
Thanks in advance.
Update the pod's volumeMounts like the following. A container spec can have only one volumeMounts key; when it appears twice, the YAML parser keeps just the last occurrence, which is why only config-vol was mounted:
volumeMounts:
- mountPath: /bin/tipc-config
  name: tipc-vol
- mountPath: /etc/
  name: config-vol
So your pod YAML will be:
- name: temp1-cont
  image: temp1-img:v2
  env:
  - name: CONFIG_MODE
    value: "manager"
  securityContext:
    privileged: true
  volumeMounts:
  - mountPath: /bin/tipc-config
    name: tipc-vol
  - mountPath: /etc/
    name: config-vol
  resources:
    limits:
      cpu: 100m
      memory: "100Mi"
    requests:
      cpu: 100m
      memory: "100Mi"
  command: ["/etc/init.d/docker-init"]
volumes:
- name: tipc-vol
  hostPath:
    path: /opt/tipc-config
    type: FileOrCreate
- name: config-vol
  hostPath:
    path: /opt/config/
    type: DirectoryOrCreate
Remove the second volumeMounts: key and put both mounts under a single one. The below should work:
volumeMounts:
- mountPath: /bin/tipc-config
  name: tipc-vol
- mountPath: /etc/
  name: config-vol
My pods cannot resolve the outside world (for example for mail, etc.). How can I add Google's nameservers to the cluster? For reference, the host resolves external names without any problem and has nameservers configured.
The problem was that the liveness check was making SkyDNS fail; I changed it as below.
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v10
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v10
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v10
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v10
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.12
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 15
        #readinessProbe:
        #  httpGet:
        #    path: /healthz
        #    port: 8080
        #    scheme: HTTP
        #  initialDelaySeconds: 1
        #  timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
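To verify that pods can now resolve external names, a throwaway test pod like the following can be used (the pod name and image tag are just placeholders, not from the original post); its logs should show google.com resolving if external queries are forwarded correctly:
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28
    # Looks up an external name through the cluster DNS configured above.
    command: ["nslookup", "google.com"]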