I am trying to set up a local environment with Docker for my Magento 2 Commerce Cloud project.
The containers all seem to be running fine.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
917897513a55 magento/magento-cloud-docker-elasticsearch:7.7-1.1 "/tini -- /usr/local…" 7 days ago Up 3 minutes (healthy) 9200/tcp, 9300/tcp nvmcloud_elasticsearch_1
3f222162cfb0 mariadb:10.4 "docker-entrypoint.s…" 7 days ago Up 3 minutes (healthy) 0.0.0.0:32769->3306/tcp nvmcloud_db_1
f1393e33a973 magento/magento-cloud-docker-tls:latest-1.1 "/entrypoint.sh" 7 days ago Up About a minute 0.0.0.0:443->443/tcp nvmcloud_tls_1
75fd66a30fce magento/magento-cloud-docker-varnish:latest-1.1 "/entrypoint.sh" 7 days ago Up About a minute 80/tcp nvmcloud_varnish_1
592c338aae5a magento/magento-cloud-docker-nginx:latest-1.1 "/docker-entrypoint.…" 7 days ago Up 2 minutes (healthy) 443/tcp, 0.0.0.0:8080->80/tcp nvmcloud_web_1
592f532e50c4 magento/magento-cloud-docker-php:7.3-fpm-1.1 "/docker-entrypoint.…" 7 days ago Up 2 minutes (healthy) 9000/tcp nvmcloud_fpm_1
5aa703aa8ede redis:5.0 "docker-entrypoint.s…" 7 days ago Up 3 minutes (healthy) 0.0.0.0:32768->6379/tcp nvmcloud_redis_1
5239984fe942 mailhog/mailhog:latest "MailHog" 7 days ago Up About an hour 0.0.0.0:1025->1025/tcp, 0.0.0.0:8025->8025/tcp nvmcloud_mailhog_1
When I run docker-compose run deploy cloud-deploy I get this error.
Could not open input file: ./vendor/bin/ece-tools returned non-zero exit status 1
I am a newbie with Docker and my research has not led to any solution yet.
Thanks!
To all appearances, the ece-tools package isn't installed yet.
Please follow the Magento 2 Cloud DevDocs:
If you still use a version of Magento Commerce Cloud that does not
contain the ece-tools package, then your project requires an upgrade.
We deprecated the magento/magento-cloud-configuration and
magento/ece-patches packages in favor of the ece-tools package.
Reference: https://devdocs.magento.com/cloud/project/ece-tools-upgrade-project.html
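As a rough sketch of that upgrade (assuming a standard Composer-based Cloud project; the packages to remove are the deprecated ones quoted above):
composer remove magento/magento-cloud-configuration magento/ece-patches --no-update
composer require magento/ece-tools --no-update
composer update
# verify the binary now exists
./vendor/bin/ece-tools list
After that, docker-compose run deploy cloud-deploy should be able to find ./vendor/bin/ece-tools.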
Related
I want to restart my Kubernetes pods every 21 days, probably with a cron job that can do that. That job should be able to restart pods from all of my deployments.
I don't want to write any extra script for that; is that possible?
Thank you.
If you want it to run every 21 days regardless of the day of the month, then cron (like most other similar scheduling systems) doesn't support a simple, direct translation of that requirement.
In that specific case you'd have to track the days yourself and calculate the next run time by adding 21 days to the current date whenever the job runs.
My suggestion is:
If you want it to run at 00:00 on the 21st day of every month:
Cron job schedule: 0 0 21 * * (at 00:00 on day-of-month 21).
If you want it to run at 12:00 on the 21st day of every month:
Cron job schedule: 0 12 21 * * (at 12:00 on day-of-month 21).
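If you want this handled inside the cluster itself, a Kubernetes CronJob on that schedule can run kubectl for you. This is only a sketch: the restart-deployments ServiceAccount, the default namespace, and the bitnami/kubectl image are assumptions, and the ServiceAccount needs RBAC permission to patch deployments.
apiVersion: batch/v1          # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: restart-deployments
spec:
  schedule: "0 0 21 * *"      # 00:00 on day-of-month 21
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: restart-deployments   # assumed SA allowed to restart deployments
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest         # any image that ships kubectl will do
              command:
                - /bin/sh
                - -c
                - kubectl rollout restart deployment -n default   # restarts every deployment in the namespace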
Alternatively, if you want to handle the 21-day interval yourself, you can use setInterval to schedule a function to run at a fixed interval, as below:
const waitMilliseconds = 21 * 24 * 60 * 60 * 1000; // 21 days, in milliseconds (setInterval expects ms)
function scheduledFunction() {
  // Do something
}
// Run now
scheduledFunction();
// Run again every 21 days
setInterval(scheduledFunction, waitMilliseconds);
For more information, refer to this SO link on how to schedule a pod restart.
EDIT: the weird DNS behavior was some kind of transient issue, and now RHEL/Podman works the same way as Ubuntu/Podman. I can't reproduce the issue, which makes most (though not all) of this question moot.
I am trying to use Podman and docker-compose to create a compose stack with multiple replicas of backend container, and having a hard time with it.
I use Podman because I have to (it comes with the Red Hat platform), and picked docker-compose because it is familiar and I use it on my local dev host, too. I know there are alternatives (podman-compose etc.). I learned that Podman 4.1 supports Docker Compose, so this sounded like a good candidate.
As an example I have docker-compose.yml with one frontend container and 3 backend containers:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
ports:
- "8180-8280:8080"
scale: 3
Note: this stack is just an example. Its purpose is only to highlight the networking aspects of a multi-replica docker-compose setup. I could use something other than nginx:latest, and using e.g. Traefik can solve some of the problems...but sometimes you wish to connect directly from one container to a service with multiple container replicas.
Docker & docker-compose (Ubuntu)
Running this on a host which has Docker and docker-compose is straightforward.
Backend containers get assigned random ports from the range 8180-8280.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98941117708c nginx:latest "/docker-entrypoint.…" 25 seconds ago Up 22 seconds 0.0.0.0:3000->80/tcp, :::3000->80/tcp example-docker-compose-frontend-1
3d749e25eaba tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8193->8080/tcp, :::8193->8080/tcp example-docker-compose-backend-1
854ba8f60cb3 tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8192->8080/tcp, :::8192->8080/tcp example-docker-compose-backend-2
e57e32181e8e tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8194->8080/tcp, :::8194->8080/tcp example-docker-compose-backend-3
Logging into the frontend container, the service name backend resolves to all 3 backend containers:
dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31602
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 600 IN A 172.19.0.3
backend. 600 IN A 172.19.0.2
backend. 600 IN A 172.19.0.4
curl backend:8080 works
Podman & docker-compose (RHEL 8)
Podman came preinstalled; I added docker-compose (standalone) and podman-docker:
# curl -SL https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
# chmod a+x /usr/local/bin/docker-compose
# sudo yum install podman-docker
And activated the rootless Podman socket so that Podman and docker-compose can talk to each other:
# systemctl --user enable podman.socket
# systemctl --user start podman.socket
# systemctl --user status podman.socket
# export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
I also switched the network backend to netavark; DNS did not work without that change:
$ podman info |grep -i networkbackend
networkBackend: netavark
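For reference, switching the backend can be done roughly like this (a sketch; note that podman system reset wipes existing containers, images and networks):
mkdir -p ~/.config/containers
cat >> ~/.config/containers/containers.conf <<EOF
[network]
network_backend = "netavark"
EOF
# the backend change only takes effect after a reset
podman system reset
podman info | grep -i networkbackend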
1. Port ranges are not supported
Podman does not like ports: - "8180-8280:8080" due to this bug: https://github.com/containers/podman/issues/15111
[+] Running 1/0
⠿ Network example-docker-compose_default Created 0.0s
⠋ Container example-docker-compose-backend-3 Creating 0.0s
⠋ Container example-docker-compose-backend-1 Creating 0.0s
⠋ Container example-docker-compose-backend-2 Creating 0.0s
Error response from daemon: make cli opts(): strconv.Atoi: parsing "8180-8280": invalid syntax
2. Without port range, address already in use conflict
I changed docker-compose.yml to remove the port range:
ports:
- "8080:8080"
This results in a port conflict; all 3 backend containers try to bind to 8080:
Catalina.startup.Catalina.start Server startup in [190] milliseconds
Error response from daemon: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
3. Scale = 1
Let's try with just one backend container. The system starts up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
217c3e726431 docker.io/library/tomcat:latest catalina.sh run 7 seconds ago Up 6 seconds ago 0.0.0.0:8080->8080/tcp example-docker-compose-backend-1
9c3f86676bde docker.io/library/nginx:latest nginx -g daemon o... 6 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend container looks weird. What are all these different backend IP addresses, 10.89.0.3 - 10.89.0.12? Only the last of them works when I curl 10.89.0.x. Still, curl backend:8080 works fine?
4. Scale up from command line
I remove ports and scale from docker-compose.yml and start compose stack with scale=3 option:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
$ docker-compose up --scale backend=3
[+] Running 4/4
⠿ Container example-docker-compose-backend-2 Recreated 0.3s
⠿ Container example-docker-compose-backend-3 Recreated 0.2s
⠿ Container example-docker-compose-backend-1 Recreated 0.3s
⠿ Container example-docker-compose-frontend-1 Recreated 0.3s
Attaching to example-docker-compose-backend-1, example-docker-compose-backend-2, example-docker-compose-backend-3, example-docker-compose-frontend-1
Now the compose stack starts nicely with 3 backend containers:
$ docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b81c44246ae7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-3
814dc5d307f7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-2
fb0a5090a456 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-1
c0219d7fded4 docker.io/library/nginx:latest nginx -g daemon o... 22 minutes ago Up 22 minutes ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend has even more entries:
# dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29526
;; flags: qr rd ad; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 410008265c2b0edb (echoed)
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 86400 IN A 10.89.0.3
backend. 86400 IN A 10.89.0.4
backend. 86400 IN A 10.89.0.6
backend. 86400 IN A 10.89.0.7
backend. 86400 IN A 10.89.0.15
backend. 86400 IN A 10.89.0.16
backend. 86400 IN A 10.89.0.18
backend. 86400 IN A 10.89.0.19
backend. 86400 IN A 10.89.0.20
backend. 86400 IN A 10.89.0.21
backend. 86400 IN A 10.89.0.22
And curl backend:8080 does not work (not sure which port I should use now)
Questions
What's going on here?
Can I achieve a setup of 3 backend containers, so that DNS name backend would resolve to those, with podman & docker-compose?
Podman seems to support docker-compose (or vice versa), but only to a degree. Is there documentation that tells which docker-compose features are supported on Podman and which are not?
Podman is my container runtime of choice...unless I'm working with docker-compose, in which case I have found it to be "close but not quite" in terms of its docker API support.
However, for what you're trying to do, you could replace Nginx with Traefik and let Traefik handle the load balancing. Traefik is a dynamic proxy that uses the Docker API and container labels to discover containers and configure the proxy rules.
For example:
version: "3"
services:
frontend:
image: "docker.io/traefik:v2.8"
ports:
- "3000:80"
- "127.0.0.1:3080:8080"
command:
- --api.insecure=true
- --providers.docker
volumes:
- /run/user/$UID/podman/podman.sock:/var/run/docker.sock
backend:
labels:
traefik.http.routers.backend.rule: Host(`localhost`)
image: "quay.io/larsks/demoserver:latest"
scale: 3
Here we're mapping all requests for Host: localhost to our backend containers. This is just for the purposes of a demonstration (since I'll be running curl localhost/...); a more realistic configuration would use specific hostnames, paths, etc. You can read more in the Routing configuration section of Traefik's Docker documentation, and also in the general Router documentation.
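For instance, a more specific rule could be expressed through the same label like this (the hostname and path are made-up placeholders):
backend:
  labels:
    traefik.http.routers.backend.rule: Host(`api.example.com`) && PathPrefix(`/api`)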
With this configuration, we see the following containers running:
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c29de1d137e2 docker.io/library/traefik:v2.8 --api.insecure=tr... 5 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp, 127.0.0.1:3080->8080/tcp demoserver_frontend_1
f4fac24cb494 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 6 seconds ago demoserver_backend_2
d9d388202be2 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_1
5a3e6330739d quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_3
And we can see that requests on port 3000 cycle between the available backends. Running this script:
for i in {1..10}; do
curl http://localhost:3000/hostname
done
Produces as output:
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
I am trying to connect 2 Spring microservices using a Kafka connector, running in Docker containers.
Refer to this GitLab link for project details.
I have 2 Spring containers:
s1-pledgeservice
s1-donorservice
First, I need to run s1-pledgeservice (through startup.sh in the corresponding project folder) and it works fine.
But for the second step, when I run s1-donorservice (through startup.sh in the corresponding project folder), I get exceptions in both s1-pledgeservice and s1-donorservice.
s1-pledgeservice exception:
kafka-connect | 2020-12-23 05:53:31,677 ERROR || Uncaught exception in REST call to /connectors/ [org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper]
kafka-connect | java.lang.NullPointerException
s1-donorservice exception:
curl: (3) URL using bad/illegal format or missing URL
", ning: Couldn't read data from file "sql/postgres-connect-config.json
Looking at the Docker containers, everything seems to be running fine:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c149865fd1ac debezium/connect:1.2 "/docker-entrypoint.…" 59 minutes ago Up 12 minutes 8778/tcp, 9092/tcp, 0.0.0.0:8083->8083/tcp, 9779/tcp kafka-connect
58b557f8d277 oawofolu/ssm-pledgeservice:v1 "java -jar /app.jar" 59 minutes ago Up 12 minutes 0.0.0.0:8101->8080/tcp pledge-consumer
1da693b393f8 debezium/kafka:1.2 "/docker-entrypoint.…" 59 minutes ago Up 12 minutes 8778/tcp, 9779/tcp, 0.0.0.0:9092->9092/tcp kafka
30df710d8120 debezium/postgres:12 "docker-entrypoint.s…" 59 minutes ago Up 12 minutes 0.0.0.0:5432->5432/tcp postgres
aa4b9d124f5b debezium/zookeeper:1.2 "/docker-entrypoint.…" 59 minutes ago Up 12 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 8778/tcp, 0.0.0.0:3888->3888/tcp, 9779/tcp zookeeper
Note that I am running Docker from a Debian distribution in WSL2.
I have encountered some mysterious network errors in our Kubernetes cluster. Although I originally encountered these errors using ingress, there are even more errors when I bypass our load balancer, bypass kube-proxy, and bypass nginx-ingress. The most errors are present when going directly to services and straight to the pod IPs. I believe this is because the load balancer and nginx have better error handling than the raw iptables routing.
To test the error I use Apache Bench from a VM on the same subnet, any concurrency level, no keep-alive, connecting to the pod IP, and use a high enough request number to give me time to either scale up or scale down a deployment. The odd thing is that it doesn't matter at all which deployment I modify, since it always causes the same sets of errors even when it's not related to the pod I am testing. ANY additions or removals of pods will trigger Apache Bench errors. Manual deletions, scaling up/down, and auto-scaling all trigger errors. If there are no pod changes while the ab test is running, then no errors get reported. Note that keep-alive does seem to greatly reduce, if not eliminate, the errors, but I only tested that a handful of times and never saw an error.
Other than some bizarre iptables conflict, I really don't see how deleting pod A can affect the network connections of pod B. Since the errors are brief and go away within seconds, it seems more like a brief network outage.
Sample ab test: ab -n 5000 -c 2 https://10.112.0.24/
Errors when using HTTPS:
SSL handshake failed (5).
SSL read failed (5) - closing connection
Errors when using HTTP:
apr_socket_recv: Connection reset by peer (104)
apr_socket_recv: Connection refused (111)
Example ab output. I hit Ctrl-C after encountering the first errors:
$ ab -n 5000 -c 2 https://10.112.0.24/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.112.0.24 (be patient)
Completed 500 requests
Completed 1000 requests
SSL read failed (5) - closing connection
Completed 1500 requests
^C
Server Software: nginx
Server Hostname: 10.112.0.24
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /
Document Length: 2575 bytes
Concurrency Level: 2
Time taken for tests: 21.670 seconds
Complete requests: 1824
Failed requests: 2
(Connect: 0, Receive: 0, Length: 1, Exceptions: 1)
Total transferred: 5142683 bytes
HTML transferred: 4694225 bytes
Requests per second: 84.17 [#/sec] (mean)
Time per request: 23.761 [ms] (mean)
Time per request: 11.881 [ms] (mean, across all concurrent requests)
Transfer rate: 231.75 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 5 15 9.8 12 82
Processing: 1 9 9.0 6 130
Waiting: 0 8 8.9 6 129
Total: 7 23 14.4 19 142
Percentage of the requests served within a certain time (ms)
50% 19
66% 24
75% 28
80% 30
90% 40
95% 54
98% 66
99% 79
100% 142 (longest request)
Current sysctl settings that may be relevant:
net.netfilter.nf_conntrack_tcp_be_liberal = 1
net.nf_conntrack_max = 131072
net.netfilter.nf_conntrack_buckets = 65536
net.netfilter.nf_conntrack_count = 1280
net.ipv4.ip_local_port_range = 27050 65500
I didn't see any conntrack "full" errors. As best I could tell, there isn't packet loss. We recently upgraded from 1.14 and didn't notice the issue then, but I can't say for certain it wasn't there. I believe we will be forced to migrate away from romana soon since it doesn't seem to be maintained anymore, and as we upgrade to kube 1.16.x we are encountering problems with it starting up.
I have searched the internet all day today looking for similar problems, and the closest one that resembles our problem is https://tech.xing.com/a-reason-for-unexplained-connection-timeouts-on-kubernetes-docker-abd041cf7e02, but I have no idea how to implement the iptables masquerade --random-fully option given that we use romana, and I read (https://github.com/kubernetes/kubernetes/pull/78547#issuecomment-527578153) that random-fully is the default for Linux kernel 5, which we are using (a quick way to check is sketched below, after the version details). Any ideas?
kubernetes 1.15.5
romana 2.0.2
centos7
Linux kube-master01 5.0.7-1.el7.elrepo.x86_64 #1 SMP Fri Apr 5 18:07:52 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
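As a sanity check (sketch only), listing the NAT rules should show whether the MASQUERADE rules already carry that flag:
# any output here means --random-fully is already applied to some MASQUERADE rules
sudo iptables -t nat -S | grep -i masquerade | grep -i random-fully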
====== Update Nov 5, 2019 ======
It has been suggested to test an alternate CNI. I chose Calico since we used that in an older Debian-based kube cluster. I rebuilt a VM with our most basic CentOS 7 template (vSphere), so there is a little baggage coming from our customizations. I can't list everything we customized in our template, but the most notable change is the kernel 5 upgrade (yum --enablerepo=elrepo-kernel -y install kernel-ml).
After starting up the VM these are the minimal steps to install kubernetes and run the test:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce-3:18.09.6-3.el7.x86_64
systemctl start docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
yum install -y kubeadm-1.15.5-0 kubelet-1.15.5-0 kubectl-1.15.5-0
systemctl enable --now kubelet
kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
cat <<EOF > /tmp/test-deploy.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginxdemos/hello
        ports:
        - containerPort: 80
EOF
# wait for control plane to become healthy
kubectl apply -f /tmp/test-deploy.yml
Now the setup is ready and this is the ab test:
$ docker run --rm jordi/ab -n 100 -c 1 http://192.168.4.4/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.4.4 (be patient)...apr_pollset_poll: The timeout specified has expired (70007)
Total of 11 requests completed
The ab test gives up after this error. If I decrease the number of requests to avoid the timeout, this is what you would see:
$ docker run --rm jordi/ab -n 10 -c 1 http://192.168.4.4/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.4.4 (be patient).....done
Server Software: nginx/1.13.8
Server Hostname: 192.168.4.4
Server Port: 80
Document Path: /
Document Length: 7227 bytes
Concurrency Level: 1
Time taken for tests: 0.029 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 74140 bytes
HTML transferred: 72270 bytes
Requests per second: 342.18 [#/sec] (mean)
Time per request: 2.922 [ms] (mean)
Time per request: 2.922 [ms] (mean, across all concurrent requests)
Transfer rate: 2477.50 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.8 1 3
Processing: 1 2 1.2 1 4
Waiting: 0 1 1.3 0 4
Total: 1 3 1.4 3 5
Percentage of the requests served within a certain time (ms)
50% 3
66% 3
75% 4
80% 5
90% 5
95% 5
98% 5
99% 5
100% 5 (longest request)
This issue is technically different from the original issue I reported, but this is a different CNI and there are still network issues. It does have the timeout error in common with the kube/romana cluster when I run the same test there (the ab test on the same node as the pod): both encounter the same timeout error, but with romana I could get a few thousand requests to finish before hitting the timeout, while Calico encounters the timeout error before reaching a dozen requests.
Other variants or notes:
- net.netfilter.nf_conntrack_tcp_be_liberal=0/1 doesn't seem to make a difference
- higher -n values sometimes work but it is largely random.
- running the 'ab' test at low -n values several times in a row can sometimes trigger the timeout
At this point I am pretty sure it is some issue with our centos installation but I can't even guess what it could be. Are there any other limits, sysctl or other configs that could cause this?
====== Update Nov 6, 2019 ======
I observed that we had an older kernel installed, so I upgraded my kube/calico test VM to the same newer kernel, 5.3.8-1.el7.elrepo.x86_64. After the update and a few reboots I can no longer reproduce the "apr_pollset_poll: The timeout specified has expired (70007)" timeout errors.
Now that the timeout error is gone, I was able to repeat the original test where I load test pod A and kill pod B on my vSphere VMs. In the romana environments the problem still existed, but only when the load test runs on a different host than where pod A is located. If I run the test on the same host, there are no errors at all. Using Calico instead of romana, there are no load test errors on either host, so the problem is gone. There may still be some setting to tweak that could help romana, but I think this is "strike 3" for romana, so I will start transitioning a full environment to Calico and do some acceptance testing there to ensure there are no hidden gotchas.
You mentioned that if there are no pod changes while the ab test is running, then no errors get reported. So it means that errors occur when you add or delete a pod.
This is normal behaviour: when a pod gets deleted, it takes time for iptables rule changes to propagate. It may happen that the container has been removed but the iptables rules haven't changed yet, so packets are still being forwarded to the nonexistent container, and this causes errors (it is sort of like a race condition).
The first thing you can do is always create a readiness probe, as it will make sure that traffic is not forwarded to the container until it is ready to handle requests (a minimal example is sketched below).
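A minimal sketch of such a probe (the path and port are placeholders; point them at whatever health endpoint your app exposes):
readinessProbe:
  httpGet:
    path: /healthz   # placeholder health endpoint
    port: 8080       # placeholder container port
  initialDelaySeconds: 5
  periodSeconds: 10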
The second thing to do is to handle deleting the container properly. This is a bit harder because it may be handled at many levels, but the easiest thing you can do is add a preStop hook to your container, like this:
lifecycle:
  preStop:
    exec:
      command:
        - sh
        - -c
        - "sleep 5"
The preStop hook gets executed at the moment of the pod deletion request. From this moment, k8s starts changing the iptables rules and should stop forwarding new traffic to the container that's about to be deleted. While sleeping, you give k8s some time to propagate the iptables changes in the cluster without interrupting already existing connections. After the preStop handler exits, the container will receive the SIGTERM signal.
My suggestion would be to apply both of these mechanisms together and check if it helps.
You also mentioned that bypassing ingress causes more errors. I would assume that this is because ingress has a retry mechanism implemented. If it's unable to open a connection to a container, it will try several times and hopefully reach a container that can handle its request.
I'm trying to measure our Cisco router availability based on the downtime that has occurred. I assume that in show version, the time mentioned under "System returned to ROM" is when the reload started, and the next line, "System restarted at", is when the router came back online. The only confusing thing is that the RouterA node shows the time it returned to ROM while RouterB does not.
Does anyone know the difference? And how can I make RouterB show the exact time the router reloaded the next time I run the show version command?
Thanks
Hadit
RouterA#show version
Cisco IOS Software, s72033_rp Software (s72033_rp-ITPK9V-M), Version 12.2(33)IRI, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2012 by Cisco Systems, Inc.
Compiled Fri 04-May-12 14:04 by prod_rel_team
ROM: System Bootstrap, Version 12.2(17r)S4, RELEASE SOFTWARE (fc1)
RouterA uptime is 4 years, 11 weeks, 5 days, 17 hours, 27 minutes
Uptime for this control processor is 4 years, 11 weeks, 5 days, 17 hours, 19 minutes
System returned to ROM by reload at 00:01:24 JAVT Thu Oct 11 2012 (SP by reload)
System restarted at 00:05:58 JAVT Thu Oct 11 2012
System image file is "bootdisk:s72033-itpk9v-mz.122-33.IRI.bin"
Last reload type: Normal Reload
RouterB#show version
Cisco IOS Software, s72033_rp Software (s72033_rp-ITPK9V-M), Version 12.2(33)IRI, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2012 by Cisco Systems, Inc.
Compiled Fri 04-May-12 14:04 by prod_rel_team
ROM: System Bootstrap, Version 12.2(17r)S4, RELEASE SOFTWARE (fc1)
RouterB uptime is 4 years, 12 weeks, 6 days, 17 hours, 51 minutes
Uptime for this control processor is 4 years, 12 weeks, 6 days, 17 hours, 54 minutes
System returned to ROM by reload (SP by reload)
System restarted at 00:16:21 JAVT Wed Oct 3 2012
System image file is "bootdisk:s72033-itpk9v-mz.122-33.IRI.bin"
Last reload type: Normal Reload
Do not rely on those commands to measure availability; you are much better off using a proper uptime/monitoring tool to handle this for you. There are several free ones like Zabbix and PRTG, and commercial ones like SolarWinds, HP NNM, and others.
The output of those commands may change without notice, hence it is not a good idea to rely on them for things that can be automated using proper tools.
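Most such tools track availability by polling the device (ICMP and/or SNMP) rather than parsing CLI output; for example, the standard SNMP sysUpTime object can be queried directly, assuming SNMP is enabled on the router and using 'public' as a placeholder community string (note that sysUpTime wraps after roughly 497 days, so long-lived devices need regular polling):
# sysUpTime (OID 1.3.6.1.2.1.1.3.0) resets on reload, so a drop between polls reveals a restart
snmpget -v2c -c public RouterB 1.3.6.1.2.1.1.3.0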