Spring Cloud Data Flow server on Kubernetes: error on dashboard loading - spring-cloud

Trying to run the Spring Cloud Data Flow server on Kubernetes. When I try to open the dashboard URL (https://scdfserverurl/dashboard/#/apps) in the browser, it loads only partially and gives the error below in the logs. The other component, Skipper, is running fine and I am able to access its URL.
Error stack trace:
2019-10-29 23:12:20.855+0000 [http-nio-9393-exec-9] ERROR o.s.c.d.s.c.RestControllerAdvice - Caught exception while handling a request
org.springframework.web.client.RestClientException: Could not extract response: no suitable HttpMessageConverter found for response type [org.springframework.hateoas.Resources] and content type [text/html]
at org.springframework.web.client.HttpMessageConverterExtractor.extractData(HttpMessageConverterExtractor.java:121)
at org.springframework.web.client.RestTemplate$ResponseEntityResponseExtractor.extractData(RestTemplate.java:995)
at org.springframework.web.client.RestTemplate$ResponseEntityResponseExtractor.extractData(RestTemplate.java:978)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:737)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:710)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:628)
at org.springframework.hateoas.client.Traverson$TraversalBuilder.toObject(Traverson.java:344)
at org.springframework.cloud.skipper.client.DefaultSkipperClient.listDeployers(DefaultSkipperClient.java:335)
at org.springframework.cloud.dataflow.server.stream.SkipperStreamDeployer.platformList(SkipperStreamDeployer.java:610)
at org.springframework.cloud.dataflow.server.service.impl.DefaultStreamService.platformList(DefaultStreamService.java:339)
at org.springframework.cloud.dataflow.server.service.impl.DefaultStreamService$$FastClassBySpringCGLIB$$89697014.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at org.springframework.cloud.dataflow.server.service.impl.DefaultStreamService$$EnhancerBySpringCGLIB$$a40fcfc9.platformList()
at org.springframework.cloud.dataflow.server.controller.StreamDeploymentController.platformList(StreamDeploymentController.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:634)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.ForwardedHeaderFilter.doFilterInternal(ForwardedHeaderFilter.java:157)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:117)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:106)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:679)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Kubernetes config YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: scdf-server-network-policy
spec:
  podSelector:
    matchLabels:
      app: scdf-server
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              gkp_namespace: ingress-nginx
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: v1
kind: Secret
metadata:
  name: poc-pull-secret
data:
  .dockerconfigjson: ewogICJhdXRocyI6IHsKI
type: kubernetes.io/dockerconfigjson
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scdf-server
  template:
    metadata:
      labels:
        app: scdf-server
      annotations:
        kubernetes.io/psp: nonroot
    spec:
      containers:
        - name: scdf-server
          image: <quay_url>/scdf-server:0.0.9-scdf
          imagePullPolicy: Always
          ports:
            - containerPort: 9393
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 2Gi
            requests:
              cpu: 25m
              memory: 1Gi
          securityContext:
            runAsUser: 99
          env:
            - name: SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI
              value: "<skipper_server_url>/api"
            - name: SPRING_CLOUD_DATAFLOW_FEATURES_SKIPPER_ENABLED
              value: "true"
      imagePullSecrets:
        - name: poc-pull-secret
      serviceAccount: spark
      serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
  name: scdf-server
  labels:
    app: scdf-server
spec:
  ports:
    - port: 80
      targetPort: 9393
      protocol: TCP
      name: http
  selector:
    app: scdf-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: scdf-server
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow.http: "false"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: "<app_url>"
      http:
        paths:
          - path: /
            backend:
              serviceName: scdf-server
              servicePort: 80
  tls:
    - hosts:
        - "<app_url>"
pom.xml for the Data Flow server
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>pocgroup</groupId>
  <artifactId>scdf-server</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <name>Spring Cloud Data Flow :: Server</name>
  <packaging>jar</packaging>
  <description>Spring Cloud Dataflow Server</description>
  <!-- Spring Boot Dependency -->
  <parent>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-dataflow-parent</artifactId>
    <version>2.1.2.RELEASE</version>
    <relativePath/>
  </parent>
  <properties>
    <java.version>1.8</java.version>
    <revision>0.0.0-SNAPSHOT</revision>
    <jacoco.skip.instrument>true</jacoco.skip.instrument>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.jacoco</groupId>
        <artifactId>org.jacoco.agent</artifactId>
        <version>0.8.2</version>
        <classifier>runtime</classifier>
        <scope>test</scope>
      </dependency>
      <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.8</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-dataflow-server</artifactId>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
    </dependency>
    <dependency>
      <groupId>org.jacoco</groupId>
      <artifactId>org.jacoco.agent</artifactId>
      <classifier>runtime</classifier>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.sybase.jconnect</groupId>
      <artifactId>jconn4</artifactId>
      <version>7.07-27307</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <systemPropertyVariables>
            <jacoco-agent.destfile>${project.build.directory}/jacoco.exec</jacoco-agent.destfile>
          </systemPropertyVariables>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.21.0</version>
          </dependency>
        </dependencies>
      </plugin>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.8.2</version>
        <executions>
          <execution>
            <id>default-instrument</id>
            <goals>
              <goal>instrument</goal>
            </goals>
            <configuration>
              <skip>${jacoco.skip.instrument}</skip>
            </configuration>
          </execution>
          <execution>
            <id>default-restore-instrumented-classes</id>
            <goals>
              <goal>restore-instrumented-classes</goal>
            </goals>
            <configuration>
              <skip>${jacoco.skip.instrument}</skip>
            </configuration>
          </execution>
          <execution>
            <id>report</id>
            <phase>prepare-package</phase>
            <goals>
              <goal>report</goal>
            </goals>
            <configuration>
              <skip>${jacoco.skip.instrument}</skip>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>build-info</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>pl.project13.maven</groupId>
        <artifactId>git-commit-id-plugin</artifactId>
        <configuration>
          <failOnNoGitDirectory>false</failOnNoGitDirectory>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <version>3.0.0</version>
        <executions>
          <execution>
            <id>add-it-test-source</id>
            <phase>process-resources</phase>
            <goals>
              <goal>add-test-source</goal>
              <goal>add-test-resource</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/it/java</source>
              </sources>
              <resources>
                <resource>
                  <directory>src/it/resources</directory>
                </resource>
              </resources>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-checkstyle-plugin</artifactId>
        <executions>
          <execution>
            <id>checkstyle-validation</id>
            <phase>none</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <profiles>
    <profile>
      <id>integration-test</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <executions>
              <execution>
                <id>run-integration-tests</id>
                <goals>
                  <goal>integration-test</goal>
                  <goal>verify</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
</project>

This was due to the HTTPS-to-HTTP redirection, because of which the Data Flow server was not able to hit the Skipper/deployer REST endpoint: the Traverson client asked for JSON but received an HTML redirect page instead, hence the "no suitable HttpMessageConverter ... content type [text/html]" error. The link below has the details.
Link
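As a minimal sketch of that kind of fix, assuming the placeholder host from the YAML above: pointing the Data Flow server at Skipper's HTTPS endpoint directly means the nginx force-ssl-redirect never answers with an HTML redirect that the client cannot parse.

# Hedged sketch, not the verified fix from the link: call Skipper over HTTPS
# so the ingress does not turn the /api response into a text/html redirect.
env:
  - name: SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI
    value: "https://<skipper_server_url>/api"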

Related

Disable localhost_access_log from tomcat kubernetes container

I have an application in a Tomcat 9 container on Kubernetes. I want to disable localhost_access_log-yyyy-mm-dd.txt in /usr/local/tomcat/logs. I know that it is possible to comment out this part of server.xml:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
But if I try to do the sed in the Dockerfile and comment out that part when I start the container, it ends up in an Error status. Is there any possibility to do this without touching server.xml?
I do not think you really need to override the config with sed. sed might be a way in a pure Docker environment, but why sed when there is a better option available through a ConfigMap in Kubernetes?
Create a ConfigMap from the server.xml file:
kubectl create configmap server-config --from-file=server.xml
and server.xml
<?xml version="1.0" encoding="UTF-8"?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
      </Host>
    </Engine>
  </Service>
</Server>
and the deployment file with the ConfigMap volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  selector:
    matchLabels:
      app: tomcat
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:latest
          volumeMounts:
            - name: config-volume
              mountPath: /usr/local/tomcat/conf/
          ports:
            - containerPort: 8080
      volumes:
        - name: config-volume
          configMap:
            name: server-config
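One caveat with the mount above: mounting the ConfigMap at /usr/local/tomcat/conf/ replaces the entire directory, so the other stock files (web.xml, tomcat-users.xml, and so on) disappear from the container. A sketch of a subPath mount that overlays only server.xml and leaves the rest of conf/ intact (the volume and ConfigMap names match the ones above):

volumeMounts:
  - name: config-volume
    # Overlay a single file instead of shadowing the whole conf/ directory
    mountPath: /usr/local/tomcat/conf/server.xml
    subPath: server.xml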

Unable to create clusters in Hazelcast over Kubernetes

I am trying to use Hazelcast on Kubernetes. For that, Docker is installed on Windows and the Kubernetes environment is simulated on Docker. Here is the config file, hazelcast.xml:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast
    xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
    xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <properties>
    <property name="hazelcast.discovery.enabled">true</property>
  </properties>
  <network>
    <join>
      <multicast enabled="false" />
      <tcp-ip enabled="false"/>
      <discovery-strategies>
        <discovery-strategy enabled="true"
            class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
          <!--
          <properties>
            <property name="service-dns">cobrapp.default.endpoints.cluster.local</property>
            <property name="service-dns-timeout">10</property>
          </properties>
          -->
        </discovery-strategy>
      </discovery-strategies>
    </join>
  </network>
</hazelcast>
The problem is that it is unable to form the cluster in the simulated environment. According to my deployment file it should create three members. Here is the deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          imagePullPolicy: Never
          image: testapp:latest
          ports:
            - containerPort: 5701
            - containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  type: LoadBalancer
  ports:
    - name: hazelcast
      port: 5701
    - name: test
      protocol: TCP
      port: 8085
      targetPort: 8085
The output upon executing the deployment file
Members [1] {
    Member [10.1.0.124]:5701 this
}
However, the expected output is a cluster with three members in it, as per the deployment file. Can anybody help?
Hazelcast's default multicast discovery doesn't work on Kubernetes out of the box. You need an additional plugin for that. Two alternatives are available: Kubernetes API and DNS lookup.
Please check the relevant documentation for more information.
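For illustration, a sketch of the Kubernetes API mode in the same discovery-strategy style the question already uses; the service-name value is an assumption and should name the Service that selects the Hazelcast pods (test-service above), and the hazelcast-kubernetes plugin JAR plus RBAC permissions to read endpoints must also be in place:

<discovery-strategies>
  <discovery-strategy enabled="true"
      class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
    <properties>
      <!-- assumption: the Service fronting the Hazelcast pods -->
      <property name="service-name">test-service</property>
    </properties>
  </discovery-strategy>
</discovery-strategies>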

No available Hazelcast instance: "HazelcastCachingProvider.HAZELCAST_CONFIG_LOCATION"

Any idea about this error?
No available Hazelcast instance. Please specify your Hazelcast configuration file path via "HazelcastCachingProvider.HAZELCAST_CONFIG_LOCATION"
It works fine with this config in a local Kubernetes, but I always get this error when I set kubernetes = true and multicast = false. I'm trying to use it for Liberty on IBM Cloud Kubernetes.
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               https://hazelcast.com/schema/config/hazelcast-config-3.12.xsd">
  <group>
    <name>cluster</name>
  </group>
  <network>
    <join>
      <multicast enabled="true"/>
      <kubernetes enabled="false"/>
    </join>
  </network>
</hazelcast>
server.xml
<httpSessionCache libraryRef="jCacheVendorLib"
                  uri="file:${server.config.dir}hazelcast-config.xml" />
<library id="jCacheVendorLib">
  <file name="${shared.config.dir}/lib/global/hazelcast-3.12.6.jar" />
</library>
This is what I did:
I have a Docker image using Liberty; in the Liberty configuration I set the following to use Hazelcast:
<server>
  <featureManager>
    ...
    <feature>sessionCache-1.0</feature>
    ...
  </featureManager>
  ...
  <httpSessionCache libraryRef="jCacheVendorLib"
                    uri="file:${server.config.dir}hazelcast-config.xml" />
  <library id="jCacheVendorLib">
    <file name="${shared.config.dir}/lib/global/hazelcast-3.12.6.jar" />
  </library>
  ...
</server>
Then I set the configuration in hazelcast-config.xml. I only get the error when I set kubernetes = true and multicast = false. If I leave kubernetes = false and multicast = true, it works fine on my local Kubernetes, but Hazelcast can't find the other pods when I deploy it on the cloud (it looks like the IPs are on a different network):
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               https://hazelcast.com/schema/config/hazelcast-config-3.12.xsd">
  <group>
    <name>cluster</name>
  </group>
  <network>
    <join>
      <multicast enabled="true"/>
      <kubernetes enabled="false"/>
    </join>
  </network>
</hazelcast>
I also ran the RBAC YAML, and then ran the following YAML to deploy it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: employee-service
  template:
    metadata:
      labels:
        app: employee-service
    spec:
      containers:
        - name: myapp
          image: myapp
          ports:
            - name: http
              containerPort: 8080
            - name: multicast
              containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  type: NodePort
  selector:
    app: employee-service
  ports:
    - protocol: TCP
      port: 9080
      targetPort: 9080
      nodePort: 31234
If you use the Hazelcast Kubernetes plugin for discovery, please make sure that you have configured RBAC, for example with the following command:
kubectl apply -f https://raw.githubusercontent.com/hazelcast/hazelcast-kubernetes/master/rbac.yaml
Please also make sure that the default parameters work for you (you run your Hazelcast in the same namespace, etc.).
If that does not help, please share the full stack trace logs.
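As a rough sketch under those assumptions (Hazelcast 3.12 declarative config, the hazelcast-kubernetes plugin on the classpath, RBAC applied), the join section would flip to something like this; the namespace and service-name values are placeholders to adjust:

<network>
  <join>
    <multicast enabled="false"/>
    <!-- placeholders: set these to the actual namespace and Service name -->
    <kubernetes enabled="true">
      <namespace>default</namespace>
      <service-name>service</service-name>
    </kubernetes>
  </join>
</network>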

SAXParseException in OrientDB Kubernetes cluster

I have an OrientDB database set up in distributed mode on Kubernetes. I defined PersistentVolumeClaims in a StatefulSet this way:
volumeClaimTemplates:
  - metadata:
      name: orientdb-databases
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 40Gi
  - metadata:
      name: orientdb-backup
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
When I do not define PersistentVolumes, they are created with generic names and the cluster works as expected. However, when I define my own PersistentVolumes, the pods start to crash and I get errors such as:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
  <network>
    <protocols>
      <protocol implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" name="binary"/>
    </protocols>
    <listeners>
      <listener protocol="binary" socket="default" port-range="2424-2430" ip-address="127.0.0.1"/>
    </listeners>
  </network>
  <storages/>
  <security>
    <users/>
    <resources/>
  </security>
  <isAfterFirstTime>false</isAfterFirstTime>
</orient-server>
I checked my orientdb-server-config.xml file and I can see the reason for this behavior. I generate the config file with a bash script:
</network>
<storages>
</storages>
<users>
</users>
<properties>
  <entry value=\"1\" name=\"db.pool.min\"/>
  <entry value=\"50\" name=\"db.pool.max\"/>
  <entry value=\"true\" name=\"profiler.enabled\"/>
</properties>
</orient-server>
And this is how it looks on the pod:
<storages/>
<users>
  <user resources="*" password="{password}" name="root"/>
  <user resources="connect,server.listDatabases,server.dblist" password="{password}" name="guest"/>
</users>
<properties>
  <entry value="1" name="db.pool.min"/>
  <entry value="50" name="db.pool.max"/>
  <entry value="true" name="profiler.enabled"/>
</properties>
<isAfterFirstTime>true</isAfterFirstTime>
</orient-server>
How can I fix this error?
Here is how I defined the PersistentVolumes. As I have 3 nodes and 2 volumes for each of them, the volume name and the host-path directory name only change by the number (from 1 to 6):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-db-1"
  labels:
    service: orientdb
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: "40Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /orientdb-volumes/databases-1
  persistentVolumeReclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-backup-1"
  labels:
    service: orientdb
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: "20Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /orientdb-volumes/databases-2
  persistentVolumeReclaimPolicy: Delete

Unable to mount persistent storage on Minikube

I am currently trying to deploy the following on Minikube.
I updated the configuration files to use a hostPath as persistent storage on the Minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  capacity:
    storage: "20Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "orientdb-pv-claim"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
      name: orientdbservice
      labels:
        run: orientdbservice
        test: orientdbservice
    spec:
      containers:
        - name: orientdbservice
          image: orientdb:latest
          env:
            - name: ORIENTDB_ROOT_PASSWORD
              value: "rootpwd"
          ports:
            - containerPort: 2480
              name: orientdb
          volumeMounts:
            - name: orientdb-config
              mountPath: /data/orientdb/config
            - name: orientdb-databases
              mountPath: /data/orientdb/databases
            - name: orientdb-backup
              mountPath: /data/orientdb/backup
      volumes:
        - name: orientdb-config
          persistentVolumeClaim:
            claimName: orientdb-pv-claim
        - name: orientdb-databases
          persistentVolumeClaim:
            claimName: orientdb-pv-claim
        - name: orientdb-backup
          persistentVolumeClaim:
            claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: orientdbservice
  labels:
    run: orientdbservice
spec:
  type: NodePort
  selector:
    run: orientdbservice
  ports:
    - protocol: TCP
      port: 2480
      name: http
which results in the following:
#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume 20Gi RWO Retain Available 4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO standard 4h
#kubectl get pod
NAME READY STATUS RESTARTS AGE
orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 3h
#kubectl describe pod orientdbservice-458328598-zsmw5
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3h 41s 26 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
It seems that the volumes are not able to mount for the pod. Is there something wrong with the way I am creating a persistent volume on my node?
Appreciate all the help.
A few questions before I tell you what worked for me:
Does the directory /data on the Minikube machine have the right set of permissions?
In Minikube you don't need to worry about setting up volumes; in other words, don't worry about PersistentVolume anymore. Just enable the volume provisioner addon using the following command. Once you do that, every PersistentVolumeClaim that tries to claim storage will get whatever it needs.
minikube addons enable default-storageclass
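Before creating the claims, it may be worth confirming the provisioner is actually active; a quick check (exact output varies by Minikube version):

# Verify that a default StorageClass exists so PVCs get dynamically provisioned
kubectl get storageclass
minikube addons list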
So here is what worked for me:
I removed the PersistentVolume.
I changed the mountPath to match what is given in the upstream docs: https://hub.docker.com/_/orientdb/
I added a separate PersistentVolumeClaim each for databases and backup.
I changed the config from a PersistentVolumeClaim to a ConfigMap, so you don't need to care about how to get the config into the running cluster; you do it using a ConfigMap, because the config comes from a set of config files.
Here is the config that worked for me:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    app: orientdbservice
  name: orientdb-databases
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    app: orientdbservice
  name: orientdb-backup
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: orientdbservice
  name: orientdbservice
spec:
  ports:
    - name: orientdbservice-2480
      port: 2480
      targetPort: 0
    - name: orientdbservice-2424
      port: 2424
      targetPort: 0
  selector:
    app: orientdbservice
  type: NodePort
status:
  loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: orientdbservice
  name: orientdbservice
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: orientdbservice
      name: orientdbservice
    spec:
      containers:
        - env:
            - name: ORIENTDB_ROOT_PASSWORD
              value: rootpwd
          image: orientdb
          name: orientdbservice
          resources: {}
          volumeMounts:
            - mountPath: /orientdb/databases
              name: orientdb-databases
            - mountPath: /orientdb/backup
              name: orientdb-backup
            - mountPath: /orientdb/config
              name: orientdb-config
      volumes:
        - configMap:
            name: orientdb-config
          name: orientdb-config
        - name: orientdb-databases
          persistentVolumeClaim:
            claimName: orientdb-databases
        - name: orientdb-backup
          persistentVolumeClaim:
            claimName: orientdb-backup
For the ConfigMap, I had to go to the directory examples/3-nodes-compose/var/odb3/config in the GitHub repository orientechnologies/orientdb-docker for a sample config.
Go to that directory (or the directory you have saved the config in) and run the following command:
kubectl create configmap orientdb-config --from-file=.
If you want to see what is being created automatically and deployed, run the following:
kubectl create configmap orientdb-config --from-file=. --dry-run -o yaml
Here is the ConfigMap I used for my deployment:
apiVersion: v1
data:
  automatic-backup.json: |-
    {
      "enabled": true,
      "mode": "FULL_BACKUP",
      "exportOptions": "",
      "delay": "4h",
      "firstTime": "23:00:00",
      "targetDirectory": "backup",
      "targetFileName": "${DBNAME}-${DATE:yyyyMMddHHmmss}.zip",
      "compressionLevel": 9,
      "bufferSize": 1048576
    }
  backups.json: |-
    {
      "backups": []
    }
  default-distributed-db-config.json: |
    {
      "autoDeploy": true,
      "readQuorum": 1,
      "writeQuorum": "majority",
      "executionMode": "undefined",
      "readYourWrites": true,
      "newNodeStrategy": "static",
      "servers": {
        "*": "master"
      },
      "clusters": {
        "internal": {
        },
        "*": {
          "servers": [
            "<NEW_NODE>"
          ]
        }
      }
    }
  events.json: |-
    {
      "events": []
    }
  hazelcast.xml: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- ~ Copyright (c)
    2008-2012, Hazel Bilisim Ltd. All Rights Reserved. ~ \n\t~ Licensed under the
    Apache License, Version 2.0 (the \"License\"); ~ you may \n\tnot use this file
    except in compliance with the License. ~ You may obtain \n\ta copy of the License
    at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ \n\t~ Unless required by applicable
    law or agreed to in writing, software ~ distributed \n\tunder the License is distributed
    on an \"AS IS\" BASIS, ~ WITHOUT WARRANTIES \n\tOR CONDITIONS OF ANY KIND, either
    express or implied. ~ See the License for \n\tthe specific language governing
    permissions and ~ limitations under the License. -->\n\n<hazelcast\n\txsi:schemaLocation=\"http://www.hazelcast.com/schema/config
    hazelcast-config-3.3.xsd\"\n\txmlns=\"http://www.hazelcast.com/schema/config\"
    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n\t<group>\n\t\t<name>orientdb</name>\n\t\t<password>orientdb</password>\n\t</group>\n\t<network>\n\t\t<port
    auto-increment=\"true\">2434</port>\n\t\t<join>\n\t\t\t<multicast enabled=\"true\">\n\t\t\t\t<multicast-group>235.1.1.1</multicast-group>\n\t\t\t\t<multicast-port>2434</multicast-port>\n\t\t\t</multicast>\n\t\t</join>\n\t</network>\n\t<executor-service>\n\t\t<pool-size>16</pool-size>\n\t</executor-service>\n</hazelcast>\n"
  orientdb-client-log.properties: |
    #
    # /*
    # * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
    # *
    # * Licensed under the Apache License, Version 2.0 (the "License");
    # * you may not use this file except in compliance with the License.
    # * You may obtain a copy of the License at
    # *
    # * http://www.apache.org/licenses/LICENSE-2.0
    # *
    # * Unless required by applicable law or agreed to in writing, software
    # * distributed under the License is distributed on an "AS IS" BASIS,
    # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # * See the License for the specific language governing permissions and
    # * limitations under the License.
    # *
    # * For more information: http://www.orientechnologies.com
    # */
    #
    # Specify the handlers to create in the root logger
    # (all loggers are children of the root logger)
    # The following creates two handlers
    handlers = java.util.logging.ConsoleHandler
    # Set the default logging level for the root logger
    .level = ALL
    com.orientechnologies.orient.server.distributed.level = FINE
    com.orientechnologies.orient.core.level = WARNING
    # Set the default logging level for new ConsoleHandler instances
    java.util.logging.ConsoleHandler.level = WARNING
    # Set the default formatter for new ConsoleHandler instances
    java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OLogFormatter
  orientdb-server-config.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <orient-server>
      <handlers>
        <handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
          <parameters>
            <parameter value="${distributed}" name="enabled"/>
            <parameter value="${ORIENTDB_HOME}/config/default-distributed-db-config.json" name="configuration.db.default"/>
            <parameter value="${ORIENTDB_HOME}/config/hazelcast.xml" name="configuration.hazelcast"/>
            <parameter value="odb3" name="nodeName"/>
          </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.handler.OJMXPlugin">
          <parameters>
            <parameter value="false" name="enabled"/>
            <parameter value="true" name="profilerManaged"/>
          </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.handler.OAutomaticBackup">
          <parameters>
            <parameter value="false" name="enabled"/>
            <parameter value="${ORIENTDB_HOME}/config/automatic-backup.json" name="config"/>
          </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
          <parameters>
            <parameter value="true" name="enabled"/>
            <parameter value="SQL" name="allowedLanguages"/>
          </parameters>
        </handler>
        <handler class="com.orientechnologies.orient.server.plugin.livequery.OLiveQueryPlugin">
          <parameters>
            <parameter value="false" name="enabled"/>
          </parameters>
        </handler>
      </handlers>
      <network>
        <sockets>
          <socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="ssl">
            <parameters>
              <parameter value="false" name="network.ssl.clientAuth"/>
              <parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
              <parameter value="password" name="network.ssl.keyStorePassword"/>
              <parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
              <parameter value="password" name="network.ssl.trustStorePassword"/>
            </parameters>
          </socket>
          <socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="https">
            <parameters>
              <parameter value="false" name="network.ssl.clientAuth"/>
              <parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
              <parameter value="password" name="network.ssl.keyStorePassword"/>
              <parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
              <parameter value="password" name="network.ssl.trustStorePassword"/>
            </parameters>
          </socket>
        </sockets>
        <protocols>
          <protocol implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" name="binary"/>
          <protocol implementation="com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpDb" name="http"/>
        </protocols>
        <listeners>
          <listener protocol="binary" socket="default" port-range="2424-2430" ip-address="0.0.0.0"/>
          <listener protocol="http" socket="default" port-range="2480-2490" ip-address="0.0.0.0">
            <commands>
              <command implementation="com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetStaticContent" pattern="GET|www GET|studio/ GET| GET|*.htm GET|*.html GET|*.xml GET|*.jpeg GET|*.jpg GET|*.png GET|*.gif GET|*.js GET|*.css GET|*.swf GET|*.ico GET|*.txt GET|*.otf GET|*.pjs GET|*.svg GET|*.json GET|*.woff GET|*.woff2 GET|*.ttf GET|*.svgz" stateful="false">
                <parameters>
                  <entry value="Cache-Control: no-cache, no-store, max-age=0, must-revalidate\r\nPragma: no-cache" name="http.cache:*.htm *.html"/>
                  <entry value="Cache-Control: max-age=120" name="http.cache:default"/>
                </parameters>
              </command>
            </commands>
            <parameters>
              <parameter value="utf-8" name="network.http.charset"/>
              <parameter value="true" name="network.http.jsonResponseError"/>
              <parameter value="Access-Control-Allow-Origin:*;Access-Control-Allow-Credentials: true" name="network.http.additionalResponseHeaders"/>
            </parameters>
          </listener>
        </listeners>
      </network>
      <storages/>
      <users>
        <user resources="*" password="{PBKDF2WithHmacSHA256}8B5E4C8ABD6A68E8329BD58D1C785A467FD43809823C8192:BE5D490BB80D021387659F7EF528D14130B344D6D6A2D590:65536" name="root"/>
        <user resources="connect,server.listDatabases,server.dblist" password="{PBKDF2WithHmacSHA256}268A3AFC0D2D9F25AB7ECAC621B5EA48387CF2B9996E1881:CE84E3D0715755AA24545C23CDACCE5EBA35621E68E34BF2:65536" name="guest"/>
      </users>
      <properties>
        <entry value="1" name="db.pool.min"/>
        <entry value="50" name="db.pool.max"/>
        <entry value="true" name="profiler.enabled"/>
      </properties>
      <isAfterFirstTime>true</isAfterFirstTime>
    </orient-server>
  orientdb-server-log.properties: |
    #
    # /*
    # * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
    # *
    # * Licensed under the Apache License, Version 2.0 (the "License");
    # * you may not use this file except in compliance with the License.
    # * You may obtain a copy of the License at
    # *
    # * http://www.apache.org/licenses/LICENSE-2.0
    # *
    # * Unless required by applicable law or agreed to in writing, software
    # * distributed under the License is distributed on an "AS IS" BASIS,
    # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # * See the License for the specific language governing permissions and
    # * limitations under the License.
    # *
    # * For more information: http://www.orientechnologies.com
    # */
    #
    # Specify the handlers to create in the root logger
    # (all loggers are children of the root logger)
    # The following creates two handlers
    handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
    # Set the default logging level for the root logger
    .level = INFO
    com.orientechnologies.level = INFO
    com.orientechnologies.orient.server.distributed.level = INFO
    # Set the default logging level for new ConsoleHandler instances
    java.util.logging.ConsoleHandler.level = INFO
    # Set the default formatter for new ConsoleHandler instances
    java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OAnsiLogFormatter
    # Set the default logging level for new FileHandler instances
    java.util.logging.FileHandler.level = INFO
    # Naming style for the output file
    java.util.logging.FileHandler.pattern=../log/orient-server.log
    # Set the default formatter for new FileHandler instances
    java.util.logging.FileHandler.formatter = com.orientechnologies.common.log.OLogFormatter
    # Limiting size of output file in bytes:
    java.util.logging.FileHandler.limit=10000000
    # Number of output files to cycle through, by appending an
    # integer to the base file name:
    java.util.logging.FileHandler.count=10
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: orientdb-config
Here is what it looks like for me:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
po/orientdbservice-4064909316-pzxhl 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/orientdbservice 10.0.0.185 <nodes> 2480:31058/TCP,2424:30671/TCP 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/orientdbservice 1 1 1 1 1h
NAME DESIRED CURRENT READY AGE
rs/orientdbservice-4064909316 1 1 1 1h
$ kubectl get cm
NAME DATA AGE
orientdb-config 8 1h
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-backup Bound pvc-9c1507ea-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
orientdb-databases Bound pvc-9c00ca83-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
The config generated above might look a little different; it was auto-generated by the tool called Kedge. See the instructions for how I did it in this gist: https://gist.github.com/surajssd/6bbe43a1b2ceee01962e0a1480d8cb04