Can't get version of the global server state file '/var/state/haproxy/global' - kubernetes

I have installed HAProxy-Ingress in my Kubernetes cluster,
but I keep getting 404 errors when trying to resolve paths.
The cause is certainly this error, logged by the controller:
2022/11/22 23:49:18 INFO controller.go:165 HAProxy reloaded
[NOTICE] (275) : haproxy version is 2.6.6-274d1a4
[WARNING] (275) : config : config: Can't get version of the global server state file '/var/state/haproxy/global'.
[WARNING] (345) : Proxy healthz stopped (cumulated conns: FE: 8, BE: 0).
[WARNING] (345) : Proxy http stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (345) : Proxy https stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (345) : Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
[NOTICE] (275) : New worker (356) forked
[NOTICE] (275) : Loading success.
[NOTICE] (275) : haproxy version is 2.6.6-274d1a4
[WARNING] (275) : Former worker (345) exited with code 0 (Exit)
I tried copying the file from the Docker zone into /var/state/haproxy/, but this doesn't help...
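For what it's worth, HAProxy prints this warning when load-server-state-from-file is enabled but the state file does not exist (or is empty) yet, which is normal right after a first start, so it may be a red herring: with haproxy-ingress, a 404 more often means that no Ingress host/path matched the request. A hedged sketch of both checks (the namespace and pod name are placeholders, not taken from the question):
kubectl exec -n ingress-controller <haproxy-ingress-pod> -- \
  sh -c 'mkdir -p /var/state/haproxy && touch /var/state/haproxy/global'   # silence the warning on the next reload
kubectl get ingress --all-namespaces                                       # verify an Ingress actually matches the failing host/path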

Related

Need help connecting to libera chat

I am having trouble connecting to libera.chat and irc.libera.chat using Konversation Version 1.8.21123 on Jammy Jellyfish (fully updated). I have worked through the steps given on https://userbase.kde.org/Konversatio...tication#step5 and still cannot connect. The repeating log is shown below.
[12:44] [Info] Looking for server irc.libera.chat (port 6697)...
[12:44] [Info] Server found, connecting...
[12:44] [Info] Negotiating capabilities with server...
[12:44] [Notice] -lithium.libera.chat- *** Checking Ident
[12:44] [Notice] -lithium.libera.chat- *** Looking up your hostname...
[12:44] [Notice] -lithium.libera.chat- *** Couldn't look up your hostname
[12:45] [Notice] -lithium.libera.chat- *** No Ident response
[12:45] [Capabilities] account-notify away-notify chghost extended-join multi-prefix sasl=PLAIN,ECDSA-NIST256P-CHALLENGE,EXTERNAL tls account-tag cap-notify echo-message server-time solanum.chat/identify-msg solanum.chat/oper solanum.chat/realhost
[12:45] [Info] Requesting capabilities: account-notify away-notify chghost extended-join multi-prefix sasl cap-notify server-time
[12:45] [Info] SASL capability acknowledged by server, attempting SASL PLAIN authentication...
[12:45] [Error] SASL authentication attempt failed.
[12:45] [Info] Closing capabilities negotiation.
[12:45] [Error] Connection to server irc.libera.chat (port 6697) lost: The TLS/SSL connection has been closed.
[12:45] [Info] Trying to reconnect to irc.libera.chat (port 6697) in 10 seconds.
[12:45] [Info] Looking for server irc.libera.chat (port 6697)... <-- Log repeats from this line.
Is there something blatant that I have overlooked?
Is there some web page that I need to visit in order to register my ident/hostname/whatever (!)?
Stuart
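In case it helps the next reader: the key line is "SASL authentication attempt failed", which means the server rejected the account/password pair. SASL PLAIN on Libera requires a nickname that is already registered with NickServ, so if you have never registered, connect once with SASL disabled and register with /msg NickServ REGISTER YourPassword you@example.com (password and e-mail are placeholders), confirm with the verification code Libera e-mails you, and then make sure the account name and password in Konversation's identity settings match the NickServ account exactly. The "No Ident response" and hostname-lookup notices are normal and not the problem.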

What are the different log levels for a Kubernetes pod, and what are their roles?

There are log levels for a Kubernetes pod like WARNING, CRITICAL, ERROR, TRACE, and DEBUG. Can someone list all the log levels in Kubernetes and their functions, like ERROR for error messages?
link for Kubernetes pod logs doc
# kubectl logs carts-78f46c5569-cv5wq -n sock-shop
OpenJDK 64-Bit Server VM warning: ignoring option PermSize=32m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=64m; support was removed in 8.0
2020-07-16 12:58:54.877 INFO [bootstrap,,,] 6 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@51521cc1: startup date [Thu Jul 16 12:58:54 GMT 2020]; root of context hierarchy
2020-07-16 12:58:55.867 INFO [bootstrap,,,] 6 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$b894f39] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
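Strictly speaking, Kubernetes does not define log levels for pod logs: kubectl logs just streams whatever the container writes to stdout/stderr, so levels like INFO, WARN, and ERROR in the output above come from the application itself (here Spring Boot and the JVM). A hedged sketch of how this is usually handled (pod name taken from the question):
# kubectl has no built-in level filter, so filter client-side:
kubectl logs carts-78f46c5569-cv5wq -n sock-shop | grep -E 'WARN|ERROR'
# Kubernetes components themselves (kubelet, kube-apiserver, ...) use klog's
# numeric verbosity instead of named levels, e.g. --v=2 for useful steady-state
# information up to --v=9 and beyond for very verbose debugging output.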

kubespray stops in the middle of the process, https://127.0.0.1:6443/healthz, Request failed: <urlopen error Tunnel connection failed: 403 Forbidden>

I want to install Kubernetes on 3 masters, 3 etcd nodes, and 2 worker nodes with Kubespray, but the Kubespray playbook stops in the middle of the process.
At one point it printed this message, but the process continued:
TASK [kubernetes/kubeadm : Join to cluster with ignores] *
fatal: [lsrv-k8s-node1]: FAILED! => {"changed": true, "cmd": ["timeout", "-k", "120s", "120s", "/usr/local/bin/kubeadm", "join", "--config", "/etc/kubernetes/kubeadm-client.conf", "--ignore-preflight-errors=all"], "delta": "0:01:03.639553", "end": "2020-04-25 23:08:51.163709", "msg": "non-zero return code", "rc": 1, "start": "2020-04-25 23:07:47.524156", "stderr": "W0425 23:07:47.569297 49639 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.\nW0425 23:07:47.570267 49639 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.\n\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/\n\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings\nerror execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0425 23:07:47.569297 49639 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.", "W0425 23:07:47.570267 49639 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.", "\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty", "\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/", "\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings", "error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[preflight] Running pre-flight checks", "stdout_lines": ["[preflight] Running pre-flight checks"]}
fatal: [lsrv-k8s-node2]: FAILED! => {"changed": true, "cmd": ["timeout", "-k", "120s", "120s", "/usr/local/bin/kubeadm", "join", "--config", "/etc/kubernetes/kubeadm-client.conf", "--ignore-preflight-errors=all"], "delta": "0:01:03.644100", "end": "2020-04-25 23:08:51.182100", "msg": "non-zero return code", "rc": 1, "start": "2020-04-25 23:07:47.538000", "stderr": "W0425 23:07:47.583487 30148 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.\nW0425 23:07:47.584414 30148 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.\n\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty\n\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/\n\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings\nerror execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0425 23:07:47.583487 30148 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.", "W0425 23:07:47.584414 30148 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.", "\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty", "\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/", "\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings", "error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[preflight] Running pre-flight checks", "stdout_lines": ["[preflight] Running pre-flight checks"]}
Saturday 25 April 2020 23:08:51 +0430 (0:01:03.866) 0:06:53.654
TASK [kubernetes/kubeadm : Display kubeadm join stderr if any] *
ok: [lsrv-k8s-node1] => {
"msg": "Joined with warnings\n['W0425 23:07:47.569297 49639 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.', 'W0425 23:07:47.570267 49639 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use \\'kubeadm config migrate --old-config old.yaml --new-config new.yaml\\', which will write the new, similar spec using a newer API version.', '\\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty', '\\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/', '\\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings', \"error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake\", 'To see the stack trace of this error execute with --v=5 or higher']\n"
}
ok: [lsrv-k8s-node2] => {
"msg": "Joined with warnings\n['W0425 23:07:47.583487 30148 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when --control-plane flag is not set.', 'W0425 23:07:47.584414 30148 common.go:77] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta1\". Please use \\'kubeadm config migrate --old-config old.yaml --new-config new.yaml\\', which will write the new, similar spec using a newer API version.', '\\t[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty', '\\t[WARNING IsDockerSystemdCheck]: detected \"cgroupfs\" as the Docker cgroup driver. The recommended driver is \"systemd\". Please follow the guide at https://kubernetes.io/docs/setup/cri/', '\\t[WARNING HTTPProxy]: Connection to \"https://192.168.72.133\" uses proxy \"https://192.168.70.145:3128\". If that is not intended, adjust your proxy settings', \"error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.72.133:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: proxyconnect tcp: tls: first record does not look like a TLS handshake\", 'To see the stack trace of this error execute with --v=5 or higher']\n"
}
Saturday 25 April 2020 23:08:51 +0430 (0:00:00.082) 0:06:53.737
Saturday 25 April 2020 23:08:51 +0430 (0:00:00.050) 0:06:53.787
But eventually it stopped at this point:
PLAY [kube-master] *
TASK [kubespray-defaults : Configure defaults] *
ok: [lsrv-k8s-mstr1] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [lsrv-k8s-mstr2] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [lsrv-k8s-mstr3] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
Saturday 25 April 2020 23:09:41 +0430 (0:00:00.044) 0:07:44.209
Saturday 25 April 2020 23:09:41 +0430 (0:00:00.043) 0:07:44.253
Saturday 25 April 2020 23:09:41 +0430 (0:00:00.044) 0:07:44.297
FAILED - RETRYING: Kubernetes Apps | Wait for kube-apiserver (20 retries left).
FAILED - RETRYING: Kubernetes Apps | Wait for kube-apiserver (19 retries left).
...
FAILED - RETRYING: Kubernetes Apps | Wait for kube-apiserver (2 retries left).
FAILED - RETRYING: Kubernetes Apps | Wait for kube-apiserver (1 retries left).
TASK [kubernetes-apps/ansible : Kubernetes Apps | Wait for kube-apiserver] *
fatal: [lsrv-k8s-mstr1]: FAILED! => {"attempts": 20, "changed": false, "content": "", "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error Tunnel connection failed: 403 Forbidden>", "redirected": false, "status": -1, "url": "https://127.0.0.1:6443/healthz"}
NO MORE HOSTS LEFT *
PLAY RECAP *
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
lsrv-k8s-etcd1 : ok=152 changed=8 unreachable=0 failed=0 skipped=213 rescued=0 ignored=0
lsrv-k8s-etcd2 : ok=142 changed=8 unreachable=0 failed=0 skipped=206 rescued=0 ignored=0
lsrv-k8s-etcd3 : ok=142 changed=8 unreachable=0 failed=0 skipped=206 rescued=0 ignored=0
lsrv-k8s-mstr1 : ok=626 changed=48 unreachable=0 failed=1 skipped=747 rescued=0 ignored=0
lsrv-k8s-mstr2 : ok=464 changed=40 unreachable=0 failed=0 skipped=605 rescued=0 ignored=0
lsrv-k8s-mstr3 : ok=466 changed=40 unreachable=0 failed=0 skipped=603 rescued=0 ignored=0
lsrv-k8s-node1 : ok=385 changed=22 unreachable=0 failed=1 skipped=334 rescued=1 ignored=0
lsrv-k8s-node2 : ok=385 changed=22 unreachable=0 failed=1 skipped=334 rescued=1 ignored=0
Saturday 25 April 2020 23:10:07 +0430 (0:00:25.764) 0:08:10.061
===============================================================================
kubernetes/kubeadm : Join to cluster - 64.06s
kubernetes/kubeadm : Join to cluster with ignores - 63.87s
kubernetes-apps/ansible : Kubernetes Apps | Wait for kube-apiserver - 25.76s
kubernetes/preinstall : Update package management cache (APT) - 17.29s
etcd : Gen_certs | Write etcd master certs - 11.07s
kubernetes/master : Master | wait for kube-scheduler - 7.76s
Gather necessary facts - 6.35s
kubernetes-apps/ingress_controller/cert_manager : Cert Manager | Remove legacy namespace - 5.64s
container-engine/docker : ensure docker packages are installed - 5.14s
kubernetes-apps/ingress_controller/ingress_nginx : NGINX Ingress Controller | Create manifests - 4.48s
kubernetes/master : kubeadm | write out kubeadm certs - 4.41s
kubernetes-apps/ingress_controller/cert_manager : Cert Manager | Create manifests - 3.99s
etcd : Gen_certs | Gather etcd master certs - 3.70s
bootstrap-os : Fetch /etc/os-release - 3.63s
bootstrap-os : Install dbus for the hostname module - 3.29s
kubernetes-apps/external_provisioner/local_path_provisioner : Local Path Provisioner | Create manifests - 3.11s
kubernetes-apps/ingress_controller/ingress_nginx : NGINX Ingress Controller | Apply manifests - 3.05s
kubernetes/client : Generate admin kubeconfig with external api endpoint - 2.70s
kubernetes/master : kubeadm | Check if apiserver.crt contains all needed SANs - 2.68s
download : download | Download files / images - 2.67s
It seems that the health check doesn't work and returns 403:
fatal: [lsrv-k8s-mstr1]: FAILED! => {"attempts": 20, "changed": false, "content": "", "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error Tunnel connection failed: 403 Forbidden>", "redirected": false, "status": -1, "url": "https://127.0.0.1:6443/healthz"}
Please guide me.
The problem was caused by https_proxy being set in the /etc/environment file on the worker nodes.
After removing the https_proxy and http_proxy lines, the problem was solved.
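A hedged sketch of that check and fix (the file path comes from the answer above; the sed pattern and the no_proxy example are assumptions for a proxied environment, and 10.233.0.0/18 is Kubespray's default service subnet):
grep -i proxy /etc/environment                        # see which proxy variables are set system-wide
sudo sed -i '/^https\?_proxy=/Id' /etc/environment    # drop the http_proxy/https_proxy lines (GNU sed; re-login to take effect)
# Alternatively, keep the proxy but exempt cluster traffic, e.g.:
# no_proxy=localhost,127.0.0.1,192.168.72.133,10.233.0.0/18
Also worth noting: the join error "proxyconnect tcp: tls: first record does not look like a TLS handshake" usually means the proxy URL uses an https:// scheme while the proxy on 192.168.70.145:3128 only speaks plain HTTP, so http://192.168.70.145:3128 may have been the intended value.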
Your error messages indicate you may have authentication problems as the root issue. Make sure you did not miss or mis-configure any pre-installation steps.
These commands will give some info regarding your cluster state:
kubectl get componentstatuses
kubectl get nodes
kubectl get pods --all-namespaces
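For reference, on a healthy cluster the first command prints something like the following (an illustration only; the exact output varies by Kubernetes version, and componentstatuses is deprecated in newer releases):
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}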

Error while running Spring Boot 1.5.10: s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization

While running my Spring Boot application I am getting the error below:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.10.RELEASE)
2018-02-10 15:27:24.209 INFO 14692 --- [ main] c.i.S.SpringBootSpringDataJpaApplication : Starting SpringBootSpringDataJpaApplication on LAPTOP-J6O7ENAD with PID 14692 (C:\Users\Srinu\Downloads\SpringBoot_SpringDataJPA\target\classes started by Srinu in C:\Users\Srinu\Downloads\SpringBoot_SpringDataJPA)
2018-02-10 15:27:24.211 INFO 14692 --- [ main] c.i.S.SpringBootSpringDataJpaApplication : No active profile set, falling back to default profiles: default
2018-02-10 15:27:24.252 INFO 14692 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@221af3c0: startup date [Sat Feb 10 15:27:24 IST 2018]; root of context hierarchy
Sat Feb 10 15:27:24 IST 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[the WARN line above is repeated nine more times]
2018-02-10 15:27:25.131 INFO 14692 --- [ main] j.LocalContainerEntityManagerFactoryBean : Building JPA container EntityManagerFactory for persistence unit 'default'
2018-02-10 15:27:25.148 INFO 14692 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [
name: default
...]
2018-02-10 15:27:25.241 INFO 14692 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.0.12.Final}
2018-02-10 15:27:25.244 INFO 14692 --- [ main] org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found
2018-02-10 15:27:25.247 INFO 14692 --- [ main] org.hibernate.cfg.Environment : HHH000021: Bytecode provider name : javassist
2018-02-10 15:27:25.302 INFO 14692 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
2018-02-10 15:27:25.501 INFO 14692 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
2018-02-10 15:27:25.685 INFO 14692 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2018-02-10 15:27:25.705 WARN 14692 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'registrationService': Unsatisfied dependency expressed through field 'userRepository'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'com.irs.repository.UserRepository' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
2018-02-10 15:27:25.705 INFO 14692 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2018-02-10 15:27:25.712 INFO 14692 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2018-02-10 15:27:25.796 ERROR 14692 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Field userRepository in com.service.RegistrationService required a bean of type 'com.repository.UserRepository' that could not be found.
Action:
Consider defining a bean of type 'com.repository.UserRepository' in your configuration.
*****************UserRepository.java**********************
package com.repository;
import org.springframework.data.jpa.repository.JpaRepository;
import com.entity.UserEntity;
public interface UserRepository extends JpaRepository<UserEntity, String>{
}
****-------------------RegisterService------------******
@Service
public class RegistrationService {
    @Autowired
    private UserRepository userRepository;
    public String registerUser(User user) throws Exception {
        validateUser(user);
        UserEntity userEntity = userRepository.findOne(user.getUserId());
        ------ ---- ------ --------- - ------------- ------------
}}
***---------------SpringBootMain Class-----------------------****
@SpringBootApplication(scanBasePackages="com")
@PropertySource(value={"classpath:configuration.properties"})
public class SpringBootSpringDataJpaApplication implements CommandLineRunner {
    @Autowired
    private Environment environment;
    @Autowired
    ApplicationContext applicationContext;
    public static void main(String[] args) {
        SpringApplication.run(SpringBootSpringDataJpaApplication.class, args);
    }
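A likely cause, judging from the logs rather than anything stated outright in the question: the main class appears to live under com.irs.* (the logger prints c.i.S.SpringBootSpringDataJpaApplication) while UserRepository is in com.repository. scanBasePackages only widens component scanning; Spring Boot's Spring Data auto-configuration still looks for repository interfaces starting from the main class's own package, so com.repository is never scanned. (The logs mention both com.irs.repository and com.repository, so the snippets and stack trace are probably from different runs; the package names below follow the code snippets.) A hedged sketch of one common fix:
package com.irs; // assumed from the abbreviated logger name c.i.S. in the log

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication(scanBasePackages = "com")
@EnableJpaRepositories(basePackages = "com.repository") // where the JpaRepository interfaces live
@EntityScan(basePackages = "com.entity")                // where the @Entity classes live
public class SpringBootSpringDataJpaApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringBootSpringDataJpaApplication.class, args);
    }
}
Alternatively, moving the main class up to the shared root package (com) lets the default scanning cover both com.repository and com.entity without the extra annotations.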

Rancher continuously restarts MongoDB

After upgrading Rancher from 1.1 to 1.3, we have a problem running our MongoDB cluster.
Rancher suddenly, for no apparent reason, keeps restarting at least one node of the MongoDB cluster, claiming that the service is incomplete.
Below you can find a fragment of the Rancher log (it is in reverse order, so start reading at the end):
01:19:17 PM INFO service.trigger.info Requested: 3, Created: 3, Unhealthy: 0, Bad: 0, Incomplete: 0
01:19:17 PM INFO service.trigger.info Service already reconciled
01:19:16 PM INFO service.trigger Re-evaluating state
01:19:16 PM INFO service.trigger (1 sec) Re-evaluating state
01:19:16 PM INFO service.trigger.info Service reconciled: Requested: 3, Created: 3, Unhealthy: 0, Bad: 0, Incomplete: 0
01:19:16 PM INFO service.update.info Service already reconciled
01:19:16 PM INFO service.update Updating service
01:19:16 PM INFO service.update.info Requested: 3, Created: 3, Unhealthy: 0, Bad: 0, Incomplete: 0
01:19:16 PM INFO service.trigger.exception Busy processing [SERVICE.280] will try later
01:19:03 PM INFO service.update Updating service
01:19:03 PM INFO service.update.exception Busy processing [SERVICE.280] will try later
01:19:02 PM INFO service.trigger.wait (14 sec) Waiting for instances to start
01:19:02 PM INFO service.instance.create Creating extra service instance
01:19:02 PM INFO service.instance.create Creating extra service instance
01:19:01 PM INFO service.trigger (15 sec) Re-evaluating state
01:19:01 PM INFO service.trigger.info Requested: 3, Created: 3, Unhealthy: 0, Bad: 0, Incomplete: 1
The problem always starts with Requested: 3, Created: 3, Unhealthy: 0, Bad: 0, Incomplete: 1.
However, at the very same time nothing interesting is happening in MongoDB; it is suddenly restarted by something external, i.e. Rancher (this log is in natural order):
2017-01-22T13:06:11.957+0000 I NETWORK [conn2362] end connection 10.42.191.72:55615 (24 connections now open)
2017-01-22T13:06:14.848+0000 I NETWORK [initandlisten] connection accepted from 10.42.191.72:55635 #2363 (25 connections now open)
2017-01-22T13:06:14.849+0000 I NETWORK [conn2363] end connection 10.42.191.72:55635 (24 connections now open)
(nothing unusual until here; look at the next line) ->
2017-01-22T13:06:15.243+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2017-01-22T13:06:15.244+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2017-01-22T13:06:15.253+0000 I REPL [signalProcessingThread] Stopping replication applier threads
2017-01-22T13:06:15.556+0000 I STORAGE [conn105] got request after shutdown()
2017-01-22T13:06:15.871+0000 I STORAGE [conn91] got request after shutdown()
2017-01-22T13:06:15.874+0000 I STORAGE [conn86] got request after shutdown()
2017-01-22T13:06:15.887+0000 I STORAGE [conn82] got request after shutdown()
2017-01-22T13:06:15.941+0000 I STORAGE [conn83] got request after shutdown()
2017-01-22T13:06:16.009+0000 I STORAGE [conn85] got request after shutdown()
2017-01-22T13:06:16.020+0000 I STORAGE [conn84] got request after shutdown()
2017-01-22T13:06:16.108+0000 I STORAGE [conn75] got request after shutdown()
2017-01-22T13:06:16.133+0000 I STORAGE [conn87] got request after shutdown()
Any idea what could be wrong with Rancher? I even tried creating a clean MongoDB with no clients; same story, it gets restarted by Rancher at least twice an hour, sometimes more often.
Any workaround?
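One way to confirm from the host side that the SIGTERM really comes from Rancher's reconciliation rather than from Docker or the kernel (the container name is a placeholder; run this on the host where the MongoDB container is scheduled):
docker events --filter container=<mongo-container-name>    # watch stop/kill events against the container in real time
docker inspect --format '{{.State.ExitCode}} {{.State.FinishedAt}} {{.State.OOMKilled}}' <mongo-container-name>   # how the previous run ended
If the restarts line up with the "Incomplete: 1" reconcile entries, the usual suspects in Rancher 1.x are the service's health check marking a container unhealthy or the scheduler failing to satisfy the service's scale/affinity, so relaxing the health check timeouts is a common thing to try (a suggestion, not a confirmed fix).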