Stages:
Connect to a container's shell.
curl www.xxx.com (this always hangs)
...
Then I ran tcpdump on the host machine, filtering by IP:
tcpdump -i eth0 host ip
3 11:05:05 2019/12/2 133.5701630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=......S., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843476, Ack=0, Win=29200 ( Negotiating scale factor 0x7 ) = 29200
4 11:05:05 2019/12/2 133.5704230 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156738, Ack=126843477, Win=2896 ( Negotiated scale factor 0x9 ) = 1482752
5 11:05:05 2019/12/2 133.5704630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=...A...., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843477, Ack=3228156739, Win=229 (scale factor 0x7) = 29312
6 11:05:05 2019/12/2 133.5705430 10.171.162.231 111.111.222.333 HTTP HTTP:Request, GET /api/test, Query:debug
7 11:05:05 2019/12/2 133.5707110 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156739, Ack=126843596, Win=6 (scale factor 0x9) = 3072
The TCP flag sequence is:
src -> dst syn
dst -> src syn/ack
src -> dst ack
src -> dst ack/push
dst -> src ack
The curl command waits for a long time and then throws a timeout error. In a normal request there would be a dst -> src ack/push packet carrying the response, but I never receive one.
I don't know why this happens or how to resolve it.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-dep
  labels:
    app: test-app
    version: stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
      version: stable
  template:
    metadata:
      labels:
        app: test-app
        version: stable
    spec:
      containers:
        - image: test-app
          name: test-app
          livenessProbe:
            httpGet:
              path: /health/status
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 10
          ports:
            - containerPort: 80
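For reference, the stages above correspond roughly to the following commands (the pod name and URL are placeholders):
# open a shell in one of the test-app pods
kubectl exec -it test-app-dep-<pod-id> -- /bin/sh
# inside the container: the request that hangs
curl -v http://www.xxx.com
# on the host node: capture the traffic towards that destination
tcpdump -i eth0 -nn host www.xxx.com and tcp port 80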
I have a problem when trying to mount 2 files into a pod.
Here is the StatefulSet manifest (the volumes part is at the bottom):
# Source: squid/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: squid-dev
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
spec:
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  serviceName: squid-dev
  selector:
    matchLabels:
      app: squid
      chart: squid-0.4.1
      release: squid-dev
      heritage: Helm
  volumeClaimTemplates:
    - metadata:
        name: squid-dev-cache
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
  template:
    metadata:
      annotations:
        checksum: checksum
        checksum/config: e51a4d6e552f890604aaa4c47c522653c25cad7ffec5680f67bbaadba6d3c3b2
        checksum/secret: secret
      labels:
        app: squid
        chart: squid-0.4.1
        release: squid-dev
        heritage: Helm
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - "squid"
      containers:
        - name: squid
          image: "honestica/squid:4-ff434982-c47b-47c3-b705-b2adb2730978"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf
              subPath: squid.conf
            - name: squid-dev-config
              mountPath: /etc/squid/squid.conf.backup
              subPath: squid.conf.backup
            - name: squid-dev-cache
              mountPath: /var/cache/squid
          ports:
            - name: port3128
              containerPort: 3128
              protocol: TCP
            - name: port8080
              containerPort: 8080
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 3128
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            {}
      volumes:
        - name: squid-dev-config
          configMap:
            name: squid-dev
And this is the manifest of the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-dev-config
  labels:
    app: squid
    chart: squid-0.4.1
    release: squid-dev
    heritage: Helm
data:
  squid.conf: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7 # RFC 4193 local private network range
    acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80 # http
    acl Safe_ports port 21 # ftp
    acl Safe_ports port 443 # https
    acl Safe_ports port 70 # gopher
    acl Safe_ports port 210 # wais
    #acl Safe_ports port 1025–9999 # unregistered ports
    acl Safe_ports port 280 # http-mgmt
    acl Safe_ports port 488 # gss-http
    acl Safe_ports port 591 # filemaker
    acl Safe_ports port 777 # multiling http
    acl CONNECT method CONNECT
    ...
  squid.conf.backup: |
    acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
    acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
    acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
    acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
    acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
    acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
    acl localnet src fc00::/7 # RFC 4193 local private network range
    acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443 8443 8448 8248 8280
    acl Safe_ports port 80 # http
    acl Safe_ports port 21 # ftp
    acl Safe_ports port 443 # https
    acl Safe_ports port 70 # gopher
    acl Safe_ports port 210 # wais
    #acl Safe_ports port 1025–9999 # unregistered ports
    acl Safe_ports port 280 # http-mgmt
    acl Safe_ports port 488 # gss-http
    acl Safe_ports port 591 # filemaker
    acl Safe_ports port 777 # multiling http
    acl CONNECT method CONNECT
    ...
After installing with Helm, I exec into the pod and list the /etc/squid folder; the result is below:
/ # ls -la /etc/squid/
total 388
drwxr-xr-x 1 root root 31 Mar 25 19:09 .
drwxr-xr-x 1 root root 19 Mar 25 19:09 ..
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf
-rw-r--r-- 1 root root 692 Oct 30 23:43 cachemgr.conf.default
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css
-rw-r--r-- 1 root root 1800 Oct 30 23:43 errorpage.css.default
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf
-rw-r--r-- 1 root root 12077 Oct 30 23:43 mime.conf.default
-rw-r--r-- 1 root root 3598 Mar 25 19:09 squid.conf
drwxrwxrwx 2 root root 6 Mar 25 19:09 squid.conf.backup
-rw-r--r-- 1 root root 2526 Oct 30 23:43 squid.conf.default
-rw-r--r-- 1 root root 344566 Oct 30 23:43 squid.conf.documented
Why is squid.conf a file while squid.conf.backup is a folder? I have changed the name squid.conf.backup to other names, but it still creates a folder instead of a file; and if I choose a name matching an existing file in this folder, e.g. cachemgr.conf, I get:
Warning Failed 3s (x3 over 15s) kubelet Error: failed to start container "squid": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/411dc966-ed7d-494c-b9b7-4abfe1639f00/volume-subpaths/squid-dev-config/squid/0" to rootfs at "/etc/squid/cachemgr.conf" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Only squid.conf can be mounted as a file; anything else is mounted as a folder.
How can I fix this? Can anyone explain this behavior, please?
I have searched on Google, and the fluent-bit Helm chart can mount 2 files into pods using only 1 ConfigMap: https://github.com/fluent/helm-charts/tree/main/charts/fluent-bit
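For comparison, a minimal sketch of mounting the whole ConfigMap as a directory instead of using subPath, so that every data key becomes a regular file (the mount path is illustrative, not taken from that chart):
# container section
volumeMounts:
  - name: squid-dev-config
    mountPath: /etc/squid/conf.d   # squid.conf and squid.conf.backup both appear here as files
# pod spec section
volumes:
  - name: squid-dev-config
    configMap:
      name: squid-dev-config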
Kubectl version:
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
I'm new to Kubernetes and am trying a simple project that connects MySQL and phpMyAdmin, using Kubernetes on my Ubuntu 20.04 machine. I created the components needed; here they are.
mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-root-password
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-user-username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-user-password
            - name: MYSQL_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: mysql-configmap
                  key: mysql-database
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
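The mysql-secret and mysql-configmap referenced above are not shown in the question; a rough sketch of what they might look like (the secret values are placeholders, mysql-database is taken from the logs further down, and database_url presumably points at the MySQL Service):
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:
  mysql-root-password: changeme
  mysql-user-username: changeme
  mysql-user-password: changeme
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
data:
  mysql-database: testing-database
  database_url: mysql-service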
phpmyadmin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
  labels:
    app: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin
          ports:
            - containerPort: 3000
          env:
            - name: PMA_HOST
              valueFrom:
                configMapKeyRef:
                  name: mysql-configmap
                  key: database_url
            - name: PMA_PORT
              value: "3306"
            - name: PMA_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-user-username
            - name: PMA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: mysql-user-password
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  selector:
    app: phpmyadmin
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 3000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: phpmyadmin-service
      port:
        number: 8080
  rules:
    - host: test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: phpmyadmin-service
                port:
                  number: 8080
when I execute microk8s kubectl get ingress ingress-service, the output is:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service public test.com 127.0.0.1 80 45s
and when I try to access test.com, I get a 502 error.
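To narrow down where the 502 comes from, it can help to check whether the Service has ready endpoints and whether the backend actually answers on its targetPort (a sketch; names are taken from the manifests above):
microk8s kubectl get endpoints phpmyadmin-service
microk8s kubectl describe service phpmyadmin-service
# forward the Service's targetPort and try it directly
microk8s kubectl port-forward deployment/phpmyadmin 3000:3000
curl -v http://localhost:3000/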
My kubectl version:
Client Version: v1.22.2-3+9ad9ee77396805
Server Version: v1.22.2-3+9ad9ee77396805
My microk8s' client and server version:
Client:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
Go version: go1.15.15
Server:
Version: v1.5.2
Revision: 36cc874494a56a253cd181a1a685b44b58a2e34a
UUID: b2bf55ad-6942-4824-99c8-c56e1dee5949
As for microk8s' own version, I followed the installation instructions from here, so it should be 1.21/stable. (I couldn't find a way to check the exact version; if someone knows how, please tell me.)
mysql.yaml logs:
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.26-1debian10 started.
2021-10-14 07:05:38+00:00 [Note] [Entrypoint]: Initializing database files
2021-10-14T07:05:38.960693Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.26) initializing of server in progress as process 41
2021-10-14T07:05:38.967970Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:39.531763Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:40.591862Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:40.592247Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:40.670594Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Database files initialized
2021-10-14 07:05:45+00:00 [Note] [Entrypoint]: Starting temporary server
2021-10-14T07:05:45.362827Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 90
2021-10-14T07:05:45.486702Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:45.845971Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:46.022043Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:46.022189Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:46.023446Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:46.023728Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:46.026088Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:46.044967Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:46.045036Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2021-10-14 07:05:46+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating database testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Creating user testinguser
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Giving user testinguser access to schema testing-database
2021-10-14 07:05:48+00:00 [Note] [Entrypoint]: Stopping temporary server
2021-10-14T07:05:48.422053Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.26).
2021-10-14T07:05:50.543822Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.26) MySQL Community Server - GPL.
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: Temporary server stopped
2021-10-14 07:05:51+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
2021-10-14T07:05:51.711889Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.26) starting as process 1
2021-10-14T07:05:51.725302Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-10-14T07:05:51.959356Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-10-14T07:05:52.162432Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1 is enabled for channel mysql_main
2021-10-14T07:05:52.162568Z 0 [Warning] [MY-013746] [Server] A deprecated TLS version TLSv1.1 is enabled for channel mysql_main
2021-10-14T07:05:52.163400Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-10-14T07:05:52.163556Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-10-14T07:05:52.165840Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-10-14T07:05:52.181516Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-10-14T07:05:52.181562Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.26' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
phpmyadmin.yaml logs:
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.114.139. Set the 'ServerName' directive globally to suppress this message
[Thu Oct 14 03:57:32.653011 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/7.4.24 configured -- resuming normal operations
[Thu Oct 14 03:57:32.653240 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
Here is also the Allocatable section from the describe nodes command:
Allocatable:
cpu: 4
ephemeral-storage: 113289380Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5904508Ki
pods: 110
and the Allocated resources:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (13%) 200m (5%)
memory 270Mi (4%) 370Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Any help? Thanks in advance.
Turns out it was my own mistake: I specified the phpmyadmin container's port as 3000, while the default image listens on port 80. After changing the containerPort and the phpmyadmin-service's targetPort to 80, the phpMyAdmin page opens.
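In other words, the relevant parts of phpmyadmin.yaml now look roughly like this:
# phpmyadmin container: the image listens on 80, not 3000
ports:
  - containerPort: 80
# phpmyadmin-service
ports:
  - protocol: TCP
    port: 8080
    targetPort: 80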
So sorry to kkopczak and AndD for the fuss, and also big thanks for trying to help! :)
I can't access a service from a pod. When I run the curl serviceIP:port command from my pod's console, I get the following error:
root@strongswan-deployment-7bc4c96494-qmb46:/# curl -v 10.111.107.133:80
* Trying 10.111.107.133:80...
* TCP_NODELAY set
* connect to 10.111.107.133 port 80 failed: Connection timed out
* Failed to connect to 10.111.107.133 port 80: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to 10.111.107.133 port 80: Connection timed out
Here is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strongswan-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strongswan
  template:
    metadata:
      labels:
        app: strongswan
    spec:
      containers:
        - name: strongswan-container
          image: 192.168.39.1:5000/mystrongswan
          ports:
            - containerPort: 80
          command: ["/bin/bash", "-c", "--"]
          args: ["while true; do sleep 30; done;"]
          securityContext:
            privileged: true
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: strongswan-service
spec:
  selector:
    app: strongswan
  ports:
    - port: 80          # Port exposed to the cluster
      protocol: TCP
      targetPort: 80    # Port on which the pod listens
I tried with an Nginx pod and this time it works: I am able to connect to the Nginx service with the curl command.
I don't see where the problem comes from, since it works for the Nginx pod. What did I do wrong?
I use minikube:
user#user-ThinkCentre-M91p:~/minikube$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
EDIT
My second pod's yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: godart-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: godart
  template:
    metadata:
      labels:
        app: godart
    spec:
      containers:
        - name: godart-container
          image: 192.168.39.1:5000/mygodart
          ports:
            - containerPort: 9020
      imagePullSecrets:
        - name: dockerregcred
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: godart-service
spec:
  selector:
    app: godart
  ports:
    - port: 9020          # Port exposed to the cluster
      protocol: TCP
      targetPort: 9020    # Port on which the pod listens
The error:
[root@godart-deployment-648fb8757c-6mscv /]# curl -v 10.104.206.191:9020
* About to connect() to 10.104.206.191 port 9020 (#0)
* Trying 10.104.206.191...
* Connection timed out
* Failed connect to 10.104.206.191:9020; Connection timed out
* Closing connection 0
curl: (7) Failed connect to 10.104.206.191:9020; Connection timed out
The Dockerfile:
FROM centos/systemd
ENV container docker
RUN yum -y update; yum clean all
RUN yum -y install systemd; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
COPY /godart* /home
RUN yum install -y /home/GoDart-3.3-b10.el7.x86_64.rpm
RUN yum install -y /home/GoDartHmi-3.3-b10.el7.x86_64.rpm
CMD ["/usr/sbin/init"]
EDIT EDIT:
I solved my problem by adding a file that can respond to an HTTP request; this is the file:
var http = require('http');
var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(9020, "0.0.0.0");
To make it work you must have a Node.js environment installed.
Run the script with the command: node filename.js
After that I am able to curl my services.
I don't really understand why it works now; does anyone have an explanation?
Thank you
Your strongswan-container container uses bash -c -- "while true; do sleep 30; done;" as its command.
The sleep command obviously does not listen on any port.
When you try to curl your service on port 80, a TCP connection is attempted towards the Pod on port 80, but since nothing is listening on that port in the Pod, the connection attempt fails.
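One way to confirm this is to check for listening sockets inside the Pod (a sketch; it assumes ss is available in the image):
kubectl exec -it strongswan-deployment-7bc4c96494-qmb46 -- ss -lnt
# An empty listener list (in particular, nothing on :80) explains why curl to the Service times out.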
How can I fix this error without using the sleep command?
If I understand your question correctly, I know of two solutions to your problem. First, you can look at how CrashLoopBackOff works; then you can change the container restart policy. The most important field is lastProbeTime, which is the timestamp of when the Pod condition was last probed.
The second solution would be to create a readiness probe. You can read more about it here as well.
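A minimal readiness probe sketch for the strongswan container could look like this (the port and probe type are assumptions; note that a probe only reports readiness, it does not itself start a listener on the port):
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10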
I am trying to write a cron job which hits a REST endpoint of the application whose image it pulls.
Below is the sample code:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Chart.Name }}-cronjob
  labels:
    app: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
spec:
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  startingDeadlineSeconds: 1800
  jobTemplate:
    spec:
      template:
        metadata:
          name: {{ .Chart.Name }}-cronjob
          labels:
            app: {{ .Chart.Name }}
        spec:
          restartPolicy: OnFailure
          containers:
            - name: demo
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              command: ["/bin/sh", "-c", "curl http://localhost:8080/hello"]
              readinessProbe:
                httpGet:
                  path: "/healthcheck"
                  port: 8081
                initialDelaySeconds: 300
                periodSeconds: 60
                timeoutSeconds: 30
                failureThreshold: 3
              livenessProbe:
                httpGet:
                  path: "/healthcheck"
                  port: 8081
                initialDelaySeconds: 300
                periodSeconds: 60
                timeoutSeconds: 30
                failureThreshold: 3
              resources:
                requests:
                  cpu: 200m
                  memory: 2Gi
                limits:
                  cpu: 1
                  memory: 6Gi
  schedule: "*/5 * * * *"
But I keep running into curl: (7) Failed to connect to localhost port 8080: Connection refused.
I can see from the events that it creates the container and immediately throws: Back-off restarting failed container.
I already have pods of the demo app running and they work fine; it is just when I try to point to this existing app and hit a REST endpoint that I start running into connection refused errors.
Exact output when viewing the logs:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8080: Connection refused
Event Logs:
Container image "wayfair/demo:728ac13-as_test_cron_job" already present on machine
9m49s Normal Created pod/demo-cronjob-1619108100-ndrnx Created container demo
6m17s Warning BackOff pod/demo-cronjob-1619108100-ndrnx Back-off restarting failed container
5m38s Normal SuccessfulDelete job/demo-cronjob-1619108100 Deleted pod: demo-cronjob-1619108100-ndrnx
5m38s Warning BackoffLimitExceeded job/demo-cronjob-1619108100 Job has reached the specified backoff limit
Being new to K8s, any pointers are helpful!
You are trying to connect to localhost:8080 with your curl, which doesn't make sense from what I understand of your CronJob definition.
From the docs (at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod )
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image. If you define args, but do not define a command, the default command is used with your new arguments.
Note: The command field corresponds to entrypoint in some container runtimes. Refer to the Notes below.
If you define a command for the image, then even if the image would start a REST application on port 8080 on localhost with its default entrypoint (or command, depending on the container type you are using), the command overrides the entrypoint and no application is started.
If you need to both start the application and then perform other operations, like curls and so on, I suggest using a .sh script or something similar, depending on what the Job's objective is.
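For illustration, such a script could look roughly like this (entirely a sketch; /app/run-server is a hypothetical start command, and the ports are taken from the manifest above):
#!/bin/sh
# start.sh - start the app, wait until the healthcheck answers, then hit the endpoint
/app/run-server &
until curl -sf http://localhost:8081/healthcheck; do
  sleep 2
done
curl http://localhost:8080/hello
The CronJob's command would then be something like ["/bin/sh", "-c", "/start.sh"].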
I would like to deploy the java petstore on Kubernetes. In order to achieve this I have 2 simple deployments. The first one is the Java web app and the second one is a MySQL database.
When Istio is disabled, the connection between the app and the DB works well.
Unfortunately, when the Istio sidecar is injected, the communication between the two stops working.
Here is the deployment file of the web app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jpetstoreweb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jpetstoreweb
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: jpetstoreweb
          image: wingardiumleviosa/petstore:v7
          env:
            - name: VERSION
              value: "1"
            - name: DB_URL
              value: "jpetstoredb-service"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "jpetstore"
            - name: DB_USERNAME
              value: "jpetstore"
            - name: DB_PASSWORD
              value: "foobar"
          ports:
            - containerPort: 9080
          readinessProbe:
            httpGet:
              path: /
              port: 9080
            initialDelaySeconds: 10
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: jpetstoreweb-service
spec:
  selector:
    app: jpetstoreweb
  ports:
    - port: 80
      targetPort: 9080
---
And next, the deployment file of the MySQL database:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jpetstoredb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jpetstoredb
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: jpetstoredb
          image: wingardiumleviosa/petstoredb:v1
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "foobar"
            - name: MYSQL_DATABASE
              value: "jpetstore"
            - name: MYSQL_USER
              value: "jpetstore"
            - name: MYSQL_PASSWORD
              value: "foobar"
---
apiVersion: v1
kind: Service
metadata:
  name: jpetstoredb-service
spec:
  selector:
    app: jpetstoredb
  ports:
    - port: 3306
      targetPort: 3306
Finally, the error logs from the web app trying to connect to the DB:
Exception thrown by application class 'org.springframework.web.servlet.FrameworkServlet.processRequest:488'
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Communication link failure: java.io.EOFException, underlying cause: null ** BEGIN NESTED EXCEPTION ** java.io.EOFException STACKTRACE: java.io.EOFException at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1395) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:1539) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:1930) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1168) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1279) at com.mysql.jdbc.MysqlIO.sqlQuery(MysqlIO.java:1225) at com.mysql.jdbc.Connection.execSQL(Connection.java:2278) at com.mysql.jdbc.Connection.execSQL(Connection.java:2237) at com.mysql.jdbc.Connection.execSQL(Connection.java:2218) at com.mysql.jdbc.Connection.setAutoCommit(Connection.java:548) at org.apache.commons.dbcp.DelegatingConnection.setAutoCommit(DelegatingConnection.java:331) at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setAutoCommit(PoolingDataSource.java:317) at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:221) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:350) at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:261) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy28.getCategory(Unknown Source) at org.springframework.samples.jpetstore.web.spring.ViewCategoryController.handleRequest(ViewCategoryController.java:31) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:874) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:808) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:476) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1255) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:743) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:440) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:182) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:93) at com.ibm.ws.security.jaspi.JaspiServletFilter.doFilter(JaspiServletFilter.java:56) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:90) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:996) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1134) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1005) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:927) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1023) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:417) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:376) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:532) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:466) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:331) at com.ibm.ws.http.channel.internal.inbound.HttpICLReadCallback.complete(HttpICLReadCallback.java:70) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:501) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:571) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:926) at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1015) at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:232) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.lang.Thread.run(Thread.java:812) ** END NESTED EXCEPTION **
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:488)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
Extract: Could not open JDBC Connection for transaction
Additional info:
1) I can curl the DB from the web app container using curl and it answers correctly.
2) I use Cilium instead of Calico
3) I installed Istio using HELM
4) Kubernetes is installed on bare metal (no cloud provider)
5) kubectl get pods -n istio-system all istio pods are running
6) kubectl get pods -n kube-system all cilium pods are running
7) Istio is injected using kubectl apply -f <(~/istio-1.0.5/bin/istioctl kube-inject -f ~/jpetstore.yaml) -n foo. If I use any other method, Istio does not inject itself into the web pod (but it works for the DB pod, god knows why).
8) The DB pod is always happy and working well
9) Logs of the istio-proxy container inside the WebApp pod : kubectl logs jpetstoreweb-84c7d8964-s642k istio-proxy -n myns
2018-12-28T03:52:30.610101Z info Version root#6f6ea1061f2b-docker.io/istio-1.0.5-c1707e45e71c75d74bf3a5dec8c7086f32f32fad-Clean
2018-12-28T03:52:30.610167Z info Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.233.72.142", ID:"jpetstoreweb-84c7d8964-s642k.myns", Domain:"myns.svc.cluster.local", Metadata:map[string]string(nil)}
2018-12-28T03:52:30.611217Z info Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15007
discoveryRefreshDelay: 1s
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: jpetstoreweb
zipkinAddress: zipkin.istio-system:9411
2018-12-28T03:52:30.611249Z info Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2018-12-28T03:52:30.611829Z info Starting proxy agent
2018-12-28T03:52:30.611902Z info Received new config, resetting budget
2018-12-28T03:52:30.611912Z info Reconciling configuration (budget 10)
2018-12-28T03:52:30.611926Z info Epoch 0 starting
2018-12-28T03:52:30.613236Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster jpetstoreweb --service-node sidecar~10.233.72.142~jpetstoreweb-84c7d8964-s642k.myns~myns.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warn --v2-config-only]
[2018-12-28 03:52:30.630][20][info][main] external/envoy/source/server/server.cc:190] initializing epoch 0 (hot restart version=10.200.16384.256.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=4882536)
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:192] statically linked extensions:
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:194] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:197] filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,istio_authn,jwt-auth,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:200] filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:203] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.rbac,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:205] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:207] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:210] transport_sockets.downstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:213] transport_sockets.upstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.634][20][info][config] external/envoy/source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:30.638][20][info][config] external/envoy/source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:94] loading tracing configuration
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:103] loading tracing driver: envoy.zipkin
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-12-28 03:52:30.640][20][info][main] external/envoy/source/server/server.cc:432] starting main dispatch loop
[2018-12-28 03:52:32.010][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:32.011][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:38.483][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:130] cm init: initializing cds
[2018-12-28 03:53:01.596][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||kubernetes.default.svc.cluster.local during init
...
[2018-12-28T04:09:09.561Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40318 10.233.72.142:9080 10.233.72.1:43098
[2018-12-28T04:09:14.555Z] - 115 1548 8 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40350 10.233.72.142:9080 10.233.72.1:43130
[2018-12-28T04:09:19.556Z] - 115 1548 5 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40364 10.233.72.142:9080 10.233.72.1:43144
[2018-12-28T04:09:24.558Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40378 10.233.72.142:9080 10.233.72.1:43158
10) Using Istio 1.0.5 and kubernetes 1.13.0
All ideas are welcome ;-)
Thx
So there really is an issue with Istio 1.0.5 and MySQL's JDBC driver.
The temporary solution is to delete the mesh policy resource in the following way:
kubectl delete meshpolicies.authentication.istio.io default
As stated here and referencing this.
(FYI: I deleted the resource BEFORE deploying my petstore app.)
As of Istio 1.1.1 there is more data on this problem in the FAQ.
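For later Istio versions, a commonly suggested alternative (a sketch, not quoted from the FAQ) is to disable mTLS only for the database service with a DestinationRule instead of deleting the whole mesh policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: jpetstoredb-no-mtls
spec:
  host: jpetstoredb-service
  trafficPolicy:
    tls:
      mode: DISABLE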