Pod crash CrashLoopBackOff - kubernetes

I am getting "CrashLoopBackOff" after running:
kubectl get pods
This is my YAML file:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: cloudtrail-pipe
spec:
  template:
    metadata:
      labels:
        app: cloudtrail-pipe
    spec:
      hostname: cloudtrail-pipe
      containers:
      - name: cloudtrail-pipe
        ports:
        - containerPort: 5047
          name: filebeat
        - containerPort: 9600
          name: logstash
        image: docker.elastic.co/logstash/logstash:6.5.4
        volumeMounts:
        - name: cloudtrail-pipe-config
          mountPath: /usr/share/logstash/pipeline/
        - name: logstash-jvm-options
          mountPath: /usr/share/logstash/config/
        command:
        - logstash
      volumes:
      - name: cloudtrail-pipe-config
        configMap:
          name: cloudtrail-pipe
          items:
          - key: cloudtrail.conf
            path: cloudtrail.conf
      - name: logstash-output-log
        configMap:
          name: logstash-output-log
          items:
          - key: cloudtrail.log
            path: cloudtrail.log
      - name: logstash-jvm-options
        configMap:
          name: logstash-jvm-options
          items:
          - key: jvm.options
            path: jvm.options
---
kind: Service
apiVersion: v1
metadata:
  name: cloudtrail-pipe
spec:
  type: NodePort
  selector:
    app: cloudtrail-pipe
  ports:
  - protocol: TCP
    port: 5047
    targetPort: 5047
    nodePort: 30104
    name: filebeat
  - protocol: TCP
    port: 9600
    targetPort: 9600
    name: logstash
And this is the output of
kubectl --v=8 logs cloudtrail-pipe-59bbd75b44-5wcgv --namespace=default -p
I0826 09:17:00.060776 28458 round_trippers.go:416] GET https://xx.xx.xx.xx:6443/api/v1/namespaces/default/pods/cloudtrail-pipe-59bbd75b44-5wcgv
I0826 09:17:00.060800 28458 round_trippers.go:423] Request Headers:
I0826 09:17:00.060811 28458 round_trippers.go:426] Accept: application/json, */*
I0826 09:17:00.060821 28458 round_trippers.go:426] User-Agent: kubectl/v1.15.3 (linux/amd64) kubernetes/2d3c76f
I0826 09:17:00.067284 28458 round_trippers.go:441] Response Status: 200 OK in 6 milliseconds
I0826 09:17:00.067300 28458 round_trippers.go:444] Response Headers:
I0826 09:17:00.067307 28458 round_trippers.go:447] Content-Type: application/json
I0826 09:17:00.067313 28458 round_trippers.go:447] Content-Length: 3772
I0826 09:17:00.067319 28458 round_trippers.go:447] Date: Mon, 26 Aug 2019 09:17:00 GMT
I0826 09:17:00.067356 28458 request.go:947] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"cloudtrail-pipe-59bbd75b44-5wcgv","generateName":"cloudtrail-pipe-59bbd75b44-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/cloudtrail-pipe-59bbd75b44-5wcgv","uid":"ebb671b8-0840-4874-9a03-15bf6a01da62","resourceVersion":"97628","creationTimestamp":"2019-08-26T09:12:45Z","labels":{"app":"cloudtrail-pipe","pod-template-hash":"59bbd75b44"},"annotations":{"kubernetes.io/limit-ranger":"LimitRanger plugin set: memory request for container cloudtrail-pipe; memory limit for container cloudtrail-pipe"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"cloudtrail-pipe-59bbd75b44","uid":"697b2314-921b-416f-91ea-0cc295916283","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"cloudtrail-pipe-config","configMap":{"name":"cloudtrail-pipe","items":[{"key":"cloudtrail.conf","path":"cloudtrail.conf"}],"defaultMode":420}},{"name":"logstash-output-log","configMap":{"name":"logstash-output-log","items":[{"key":" [truncated 2748 chars]
I0826 09:17:00.071390 28458 round_trippers.go:416] GET https://xx.xx.xx.xx:6443/api/v1/namespaces/default/pods/cloudtrail-pipe-59bbd75b44-5wcgv/log?previous=true
I0826 09:17:00.071408 28458 round_trippers.go:423] Request Headers:
I0826 09:17:00.071415 28458 round_trippers.go:426] Accept: application/json, */*
I0826 09:17:00.071422 28458 round_trippers.go:426] User-Agent: kubectl/v1.15.3 (linux/amd64) kubernetes/2d3c76f
I0826 09:17:30.073747 28458 round_trippers.go:441] Response Status: 500 Internal Server Error in 30002 milliseconds
I0826 09:17:30.073775 28458 round_trippers.go:444] Response Headers:
I0826 09:17:30.073785 28458 round_trippers.go:447] Content-Type: application/json
I0826 09:17:30.073792 28458 round_trippers.go:447] Content-Length: 252
I0826 09:17:30.073799 28458 round_trippers.go:447] Date: Mon, 26 Aug 2019 09:17:30 GMT
I0826 09:17:30.073834 28458 request.go:947] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Get https://xx.xx.xx.xx:10250/containerLogs/default/cloudtrail-pipe-59bbd75b44-5wcgv/cloudtrail-pipe?previous=true: dial tcp xx.xx.xx.xx:10250: i/o timeout","code":500}
I0826 09:17:30.074166 28458 helpers.go:199] server response object: [{
"metadata": {},
"status": "Failure",
"message": "Get https://xx.xx.xx.xxx:10250/containerLogs/default/cloudtrail-pipe-59bbd75b44-5wcgv/cloudtrail-pipe?previous=true: dial tcp xx.xx.xx.xx:10250: i/o timeout",
"code": 500
}]
F0826 09:17:30.074198 28458 helpers.go:114] Error from server: Get https://xx.xx.xx.xx:10250/containerLogs/default/cloudtrail-pipe-59bbd75b44-5wcgv/cloudtrail-pipe?previous=true: dial tcp xx.xx.xx.xx:10250: i/o timeout
Could you help me find the error, please?
EDIT
Below is the ConfigMap logstash-jvm-options, which maps the file jvm.options. After commenting out the logstash-jvm-options volume in the YAML, the deployment works fine.
-Xms2g
-Xmx2g
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djruby.compile.invokedynamic=true
-Djruby.jit.threshold=0
-Djava.security.egd=file:/dev/urandom
-XX:+HeapDumpOnOutOfMemoryError

I solved this problem by increasing the memory resources:
resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 250Mi
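For reference, a sketch of where this block sits in the Deployment above (container name and image copied from the manifest). A likely reason the pod was crashing, though the thread doesn't confirm it: the mounted jvm.options sets -Xms2g/-Xmx2g, a 2 GiB heap that can never fit under a 1Gi container limit, so the JVM gets OOM-killed on startup; dropping that mount falls back to Logstash's default 1g heap, which fits.

```yaml
# Sketch: resources belong under the container entry of the Deployment.
# Assumption (not confirmed in the thread): with the custom jvm.options
# mounted, -Xms2g demands a 2 GiB heap, which exceeds a 1Gi memory limit.
spec:
  containers:
  - name: cloudtrail-pipe
    image: docker.elastic.co/logstash/logstash:6.5.4
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 250Mi
```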

Related

Redirect from www to non www duplicates query params kubernetes

I am trying to redirect from https://example.nl to https://www.example.nl. This works perfectly. However, when I add query params, they get duplicated.
For example, whenever I go to example.nl?test=a, it redirects to www.example.nl?test=a?test=a.
How do I prevent this duplication of query params?
I use Kubernetes on DigitalOcean. My Kubernetes Ingress file looks as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "example"
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host = 'example.nl' ) {
        rewrite ^ https://www.example.nl$request_uri permanent;
      }
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip on;
      gzip_disable "msie6";
      gzip_vary on;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_buffers 16 8k;
      gzip_http_version 1.1;
      gzip_min_length 256;
      gzip_types
        application/atom+xml
        application/geo+json
        application/javascript
        application/x-javascript
        application/json
        application/ld+json
        application/manifest+json
        application/rdf+xml
        application/rss+xml
        application/xhtml+xml
        application/xml
        font/eot
        font/otf
        font/ttf
        image/svg+xml
        text/css
        text/javascript
        text/plain
        text/xml;
spec:
  tls:
  - hosts:
    - www.example.nl
    - example.nl
    secretName: main-example-tls
  rules:
  - host: www.example.nl
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-end
            port:
              number: 3000
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: back-end
            port:
              number: 8000
  - host: example.nl
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: front-end
            port:
              number: 3000
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: back-end
            port:
              number: 8000
If any additional information is required, please let me know.
Try something like:
annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    rewrite / https://test.app.example.com$uri permanent;
You can also refer to my article: https://medium.com/#harsh.manvar111/kubernetes-ingress-domain-redirect-4595e9030a2c
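For the query-param duplication specifically, a plausible cause (my reading, not verified against this cluster): $request_uri already contains the query string, and nginx's rewrite appends the original args again unless the replacement ends in "?". A sketch of the question's snippet with that suffix:

```yaml
# Sketch: the trailing "?" stops nginx from re-appending the query string
# that $request_uri already carries, avoiding ?test=a?test=a.
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'example.nl') {
    rewrite ^ https://www.example.nl$request_uri? permanent;
  }
```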

Istio: Add a custom request header to outbound HTTP requests

We want to add a custom request header to HTTP requests to a specific external endpoint, and created the following Istio configuration:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-se
spec:
  exportTo:
  - .
  hosts:
  - httpbin.org
  location: MESH_EXTERNAL
  ports:
  - name: 443-port
    number: 443
    protocol: TLS
  resolution: NONE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-vs
spec:
  hosts:
  - httpbin.org
  http:
  - route:
    - destination:
        host: httpbin.org
      weight: 100
      headers:
        request:
          add:
            test-header: xyz
Tried testing from a container, but the test-header is not getting added:
bash-4.2$ curl -X GET "https://httpbin.org/headers" -H "accept: application/json"
{
  "headers": {
    "Accept": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.29.0",
    "X-Amzn-Trace-Id": "Root=1-63062d1c-2d6808374e71f1ef16713fe8"
  }
}
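One explanation worth checking (an assumption, not confirmed in the thread): the ServiceEntry declares port 443 with protocol TLS, so the sidecar treats the traffic as opaque TLS passthrough and cannot inject an HTTP header into an encrypted stream. A common pattern is to call the host over plain HTTP from the app and let the sidecar originate TLS, sketched here:

```yaml
# Hedged sketch: pair this with a ServiceEntry exposing an HTTP port for
# httpbin.org so the sidecar sees plaintext and can add test-header, then
# originate TLS toward the external host.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin-dr
spec:
  host: httpbin.org
  trafficPolicy:
    tls:
      mode: SIMPLE  # the sidecar performs the TLS handshake to httpbin.org
```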

Ambassador rate limiting not working properly

I am trying to do rate limiting with Ambassador, following this tutorial. I am using minikube and a local Docker image. After deploying to Kubernetes, every API responds correctly; only the rate-limiting function isn't working.
Here is my deploy.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-deployment
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 443
    targetPort: 3000
  selector:
    app: nodejs-deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
spec:
  selector:
    matchLabels:
      app: nodejs-deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nodejs-deployment
    spec:
      containers:
      - name: nodongo
        image: soham/nodejs-starter
        imagePullPolicy: "Never"
        ports:
        - containerPort: 3000
Here is my rate-limit.yaml:
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: nodejs-backend
spec:
  prefix: /delete/
  service: nodejs-deployment
  labels:
    ambassador:
    - request_label_group:
      - delete
---
apiVersion: getambassador.io/v2
kind: RateLimit
metadata:
  name: backend-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: delete}]
    rate: 1
    unit: minute
    injectResponseHeaders:
    - name: "x-test-1"
      value: "my-rl-test"
When I execute the command curl -vLk 10.107.60.125/delete/, it returns:
* Trying 10.107.60.125:80...
* TCP_NODELAY set
* Connected to 10.107.60.125 (10.107.60.125) port 80 (#0)
> GET /delete/ HTTP/1.1
> Host: 10.107.60.125
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/html; charset=utf-8
< Content-Length: 11
< ETag: W/"b-CgqQ9sWpkiO3HKmStsUvuC/rZLU"
< Date: Tue, 03 Nov 2020 17:13:00 GMT
< Connection: keep-alive
<
* Connection #0 to host 10.107.60.125 left intact
Delete User
The response I am getting is 200; however, I am expecting a 429 error code.
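Two things worth checking, both assumptions since the question doesn't show them: the response above carries X-Powered-By: Express and no Envoy headers, which suggests 10.107.60.125 may be the nodejs Service's ClusterIP rather than the Ambassador service, in which case Ambassador (and its rate limiting) is bypassed entirely. Also, the RateLimit resource is an Ambassador Edge Stack feature; the open-source Ambassador API Gateway ships no rate-limit backend and would need a RateLimitService pointing at one you deploy yourself, along these lines:

```yaml
# Hypothetical sketch: "ratelimit" is a placeholder Service for a
# rate-limit backend you would have to deploy; Edge Stack ships its own.
apiVersion: getambassador.io/v2
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: "ratelimit:5000"
```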

Setting "Cache-Control" header with Kubernetes ingress

I have a Kubernetes cluster running in AWS and am trying to modify the Cache-Control headers via the Kubernetes Ingress, as such:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress-lab-static
  namespace: lab
  annotations:
    ingress.kubernetes.io/rewrite-target: /$1
    ingress.kubernetes.io/enable-cors: "true"
    ingress.kubernetes.io/cors-allow-methods: GET, POST, PUT, OPTIONS, DELETE, HEAD, PATCH
    ingress.kubernetes.io/cors-allow-headers: >-
      Accept-Charset, Accept-Encoding, Access-Control-Request-Headers, Access-Control-Request-Method, Authorization,
      Cache-Control, Connection, Content-Encoding, Content-Type, Content-Length, DNT, Date, Host, If-Modified-Since,
      Keep-Alive, Origin, Referer, Server, TokenIssueTime, Transfer-Encoding, User-Agent, Vary, X-CustomHeader, X-Requested-With,
      password, username, x-request-id, x-ratelimit-app, x-auth-id, x-auth-key, x-guest-token, X-HTTP-Method-Override,
      x-oesp-username, x-oesp-token, x-cus, x-dev, X-Client-Id, X-Device-Code, X-Language-Code, UserRole, x-session-id, x-entitlements-token
    ingress.kubernetes.io/configuration-snippet: |
      more_set_headers 'Access-Control-Allow-Origin:$origin';
    ingress.kubernetes.io/proxy-buffering: "on"
    ingress.kubernetes.io/proxy-buffer-size: "2048k"
    ingress.kubernetes.io/server-snippet: |
      chunked_transfer_encoding off;
      location ((https|http):\/\/.*\/test-service\/images\/.*\/imageName.*) {
        more_set_headers 'Cache-Control: public, max-age=14400';
      }
spec:
  rules:
  - host: static-url-lab.lab.cdn.example.com
    http:
      paths:
      - path: /test-service/(.*)
        backend:
          serviceName: test-service
          servicePort: 80
However, this does not seem to be working. When I curl a resource matching that pattern, I get the default values back:
# Example curl - not exact
curl -v "https://static-url-lab.lab.cdn.example.com/test-service/intent/test/image_name" -o /dev/null 2>&1 | grep -E "(Cache-Control: max|X-Cache)"
< Cache-Control: max-age=172800, public
As far as I can tell the regex should be matching, but no change is taking place. What am I missing?
Try something like this; it is working for me:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/configuration-snippet: |
    if ($request_uri ~* \.(js|css|gif|jpe?g|png)) {
      expires 1M;
      add_header Cache-Control "public";
    }
  nginx.ingress.kubernetes.io/proxy-body-size: 50m
  nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
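A likely reason the original server-snippet never fires (my reading, not confirmed in the thread): nginx location blocks match only the URI path; the scheme and host never appear in it, so a pattern anchored on (https|http):// cannot match anything. A sketch that keeps the original Cache-Control header but matches on the path alone:

```yaml
# Sketch: match the request path only; "~*" makes it a case-insensitive
# regex location, and scheme/host are dropped from the pattern.
ingress.kubernetes.io/server-snippet: |
  chunked_transfer_encoding off;
  location ~* /test-service/images/.*/imageName.* {
    more_set_headers 'Cache-Control: public, max-age=14400';
  }
```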

kube-dns manual install - requested resource not found

I am trying to install the kube-dns add-on manually, following Kubernetes The Hard Way for AWS.
I am getting an error when creating the kube-dns Service from this YAML:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Executing:
kubectl create -f services/kubedns.yaml
Error from server (NotFound): the server could not find the requested resource
Any idea what the missing resource is that kubectl can't find?
Update:
Detailed output with --v=8
$ ~/aws/hardway kubectl --v=8 create -f services/kubedns.yaml
I0323 09:21:40.408869 94700 loader.go:357] Config loaded from file /Users/aplsek/.kube/config
I0323 09:21:40.410962 94700 round_trippers.go:414] GET https://kubernetes-***********.amazonaws.com:6443/swagger-2.0.0.pb-v1
I0323 09:21:40.410986 94700 round_trippers.go:421] Request Headers:
I0323 09:21:40.410998 94700 round_trippers.go:424] Accept: application/json, */*
I0323 09:21:40.411008 94700 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (darwin/amd64) kubernetes/bee2d15
I0323 09:21:40.931119 94700 round_trippers.go:439] Response Status: 404 Not Found in 520 milliseconds
I0323 09:21:40.931141 94700 round_trippers.go:442] Response Headers:
I0323 09:21:40.931147 94700 round_trippers.go:445] Date: Fri, 23 Mar 2018 16:21:40 GMT
I0323 09:21:40.931153 94700 round_trippers.go:445] Content-Type: application/json
I0323 09:21:40.931159 94700 round_trippers.go:445] Content-Length: 1307
I0323 09:21:40.938145 94700 request.go:873] Response Body: {
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1beta1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2alpha1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/settings.k8s.io",
"/apis/settings.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/ping",
"/healthz/poststarthook/b [truncated 283 chars]
I0323 09:21:40.939987 94700 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 404
}]
F0323 09:21:40.940029 94700 helpers.go:119] Error from server (NotFound): the server could not find the requested resource