We followed the article https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment#creating-service and set up the Ignite cluster in Kubernetes.
We were unable to establish a connection to this Ignite cluster using TcpDiscoveryKubernetesIpFinder from a client application deployed in a different pod.
We saw the following errors on the Ignite cluster node:
[SEVERE][main][TcpDiscoverySpi] Failed to get registered addresses from IP finder (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries) [maxTimeout=0] class org.apache.ignite.spi.IgniteSpiException:
Failed to retrieve Ignite pods IP addresses. at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:80)
Here is the spring.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="ignite-namespace" />
<property name="serviceName" value="ignite-service" />
</bean>
</property>
</bean>
</property>
</bean>
</beans>
We specified the namespace and service name in the deployment files.
cluster-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ignite
namespace: ignite-namespace
rules:
- apiGroups:
- ""
resources: # Here are the resources you can access
- pods
- endpoints
verbs: # That is what you can do with them
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ignite
roleRef:
kind: ClusterRole
name: ignite
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: ignite
namespace: ignite-namespace
ignite-service.yaml:
apiVersion: v1
kind: Service
metadata:
# The name must be equal to KubernetesConnectionConfiguration.serviceName
name: ignite-service
# The name must be equal to KubernetesConnectionConfiguration.namespace
namespace: ignite-namespace
labels:
app: ignite
spec:
type: LoadBalancer
ports:
- name: rest
port: 8080
targetPort: 8080
- name: thinclients
port: 10800
targetPort: 10800
# The pod-to-service routing is required for apps that are not deployed in Kubernetes
sessionAffinity: ClientIP
selector:
# Must be equal to the label set for pods.
app: ignite
status:
loadBalancer: {}
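Note: the ClusterRoleBinding above grants the permissions to a ServiceAccount named ignite, and the linked guide also creates that account and runs the Ignite pods under it. If the pods fall back to the default service account, the IP finder's calls to the Kubernetes API can be rejected with exactly this kind of error. A minimal sketch of those two pieces (the Deployment fragment below is illustrative, not one of the manifests above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: ignite-namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite-cluster   # assumed name, for illustration only
  namespace: ignite-namespace
spec:
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      # Run the Ignite pods under the account the ClusterRoleBinding targets,
      # so TcpDiscoveryKubernetesIpFinder can list pods and endpoints.
      serviceAccountName: ignite
      containers:
      - name: ignite
        image: apacheignite/ignite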
I am trying to use Hazelcast on Kubernetes. For that, Docker is installed on Windows and the Kubernetes environment is simulated with Docker. Here is the config file hazelcast.xml:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast
xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<join>
<multicast enabled="false" />
<tcp-ip enabled="false"/>
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<!--
<properties>
<property name="service-dns">cobrapp.default.endpoints.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
-->
</discovery-strategy>
</discovery-strategies>
</join>
</network>
</hazelcast>
The problem is that it is unable to form a cluster in the simulated environment. According to my deployment file, it should create a cluster of three members. Here is the deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test
spec:
replicas: 3
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
imagePullPolicy: Never
image: testapp:latest
ports:
- containerPort: 5701
- containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
name: test-service
spec:
selector:
app: test
type: LoadBalancer
ports:
- name: hazelcast
port: 5701
- name: test
protocol: TCP
port: 8085
targetPort: 8085
Here is the output after applying the deployment file:
Members [1] {
Member [10.1.0.124]:5701 this
}
However, the expected output should show three members in the cluster, as per the deployment file. Can anybody help?
Hazelcast's default multicast discovery doesn't work on Kubernetes out of the box. You need an additional plugin for that. Two alternatives are available: Kubernetes API and DNS lookup.
Please check the relevant documentation for more information.
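For example, the DNS lookup mode resolves a headless service to the member pod IPs. A minimal sketch of such a service, assuming the app: test labels from the deployment above (the name test-hazelcast is illustrative and would go into the service-dns property as test-hazelcast.default.svc.cluster.local):

apiVersion: v1
kind: Service
metadata:
  name: test-hazelcast   # assumed name, for illustration only
spec:
  # Headless service: DNS returns the pod IPs directly instead of a cluster IP
  clusterIP: None
  selector:
    app: test
  ports:
  - name: hazelcast
    port: 5701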
I set up the Hazelcast Kubernetes configuration as per the explanation given in the link below:
https://vertx.io/docs/vertx-hazelcast/java/#_configuring_for_kubernetes
However, Hazelcast can only identify the members on one node; it is not able to find members across the other nodes in the cluster.
Please help us to solve this issue.
Following is the headless service file for Hazelcast (type ClusterIP with clusterIP: None):
apiVersion: v1
kind: Service
metadata:
name: cb-hazelcast-service
spec:
selector:
component: cb-hazelcast-service
type: ClusterIP
clusterIP: None
ports:
- name: hz-port-name
port: 5701
protocol: TCP
Following is the deployment file for microservice 1:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cb-agent-service
spec:
replicas: 1
minReadySeconds: 30
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app: cb-agent-service
template:
metadata:
labels:
app: cb-agent-service
component: cb-hazelcast-service
spec:
containers:
- name: cb-agent-service
image: <docker-image-hub>/agent-service:hz-dns-001
#imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /usr/data/logs
name: shared-logs
# Both container ports under a single ports key (YAML forbids duplicate keys)
ports:
- name: cbport
  containerPort: 8085
- name: hazelcast
  containerPort: 5701
volumes:
- name: shared-logs
hostPath:
path: /usr/data/logs
---
apiVersion: v1
kind: Service
metadata:
name: cb-agent-service
labels:
vertx-cluster: "true"
spec:
type: NodePort
ports:
- port: 80
targetPort: 8085
selector:
app: cb-agent-service
Following is the deployment file for another microservice:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cb-transaction-service
spec:
replicas: 1
minReadySeconds: 30
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app: cb-transaction-service
template:
metadata:
labels:
app: cb-transaction-service
component: cb-hazelcast-service
spec:
containers:
- name: cb-transaction-service
image: <docker-image-hub>/transaction-service:hz-dns-001
#imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /usr/data/logs
name: shared-logs
# Both container ports under a single ports key (YAML forbids duplicate keys)
ports:
- name: cbport
  containerPort: 8085
- name: hazelcast
  containerPort: 5701
nodeSelector:
service: transaction
volumes:
- name: shared-logs
hostPath:
path: /usr/data/logs
---
apiVersion: v1
kind: Service
metadata:
name: cb-transaction-service
labels:
vertx-cluster: "true"
spec:
type: NodePort
ports:
- port: 80
targetPort: 8085
selector:
app: cb-transaction-service
Following is the cluster.xml file for all the microservices:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.memcache.enabled">false</property>
<property name="hazelcast.wait.seconds.before.join">0</property>
<property name="hazelcast.logging.type">slf4j</property>
<property name="hazelcast.health.monitoring.delay.seconds">2</property>
<property name="hazelcast.max.no.heartbeat.seconds">5</property>
<property name="hazelcast.max.no.master.confirmation.seconds">10</property>
<property name="hazelcast.master.confirmation.interval.seconds">10</property>
<property name="hazelcast.member.list.publish.interval.seconds">10</property>
<property name="hazelcast.connection.monitor.interval">10</property>
<property name="hazelcast.connection.monitor.max.faults">2</property>
<property name="hazelcast.partition.migration.timeout">10</property>
<property name="hazelcast.migration.min.delay.on.member.removed.seconds">3</property>
<!-- at the moment the discovery needs to be activated explicitly -->
<property name="hazelcast.discovery.enabled">true</property>
<property name="hazelcast.rest.enabled">false</property>
</properties>
<network>
<port auto-increment="true" port-count="10000">5701</port>
<outbound-ports>
<ports>0</ports>
</outbound-ports>
<join>
<multicast enabled="false"/>
<tcp-ip enabled="false"/>
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<properties>
<property name="service-dns">cb-hazelcast-service.default.svc.cluster.local</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<map name="__vertx.subs">
<backup-count>1</backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<max-idle-seconds>0</max-idle-seconds>
<max-size policy="PER_NODE">0</max-size>
<eviction-percentage>25</eviction-percentage>
<merge-policy>com.hazelcast.map.merge.LatestUpdateMapMergePolicy</merge-policy>
</map>
<semaphore name="__vertx.*">
<initial-permits>1</initial-permits>
</semaphore>
</hazelcast>
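One detail worth checking with DNS-based discovery, offered as a hedged suggestion rather than a confirmed fix: a headless service only publishes DNS records for pods that are Ready, so members that are still starting up may not see each other. A sketch of the service with not-ready addresses published (the publishNotReadyAddresses field assumes Kubernetes 1.9+; older clusters used the service.alpha.kubernetes.io/tolerate-unready-endpoints annotation):

apiVersion: v1
kind: Service
metadata:
  name: cb-hazelcast-service
spec:
  selector:
    component: cb-hazelcast-service
  type: ClusterIP
  clusterIP: None
  # Publish pod IPs before readiness so Hazelcast members can discover
  # each other while they are still starting up.
  publishNotReadyAddresses: true
  ports:
  - name: hz-port-name
    port: 5701
    protocol: TCP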
I used this guide to set up an OrientDB cluster in Kubernetes. However, it seems that the node on each pod creates its own cluster instead of joining a shared one, so the logs on each pod display a message like this:
Members [1] {
Member [pod-ip]:5701 - generated id
}
What could cause such a problem?
My orientdb-server-config file looks like this:
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<parameter value="true" name="enabled"/>
<parameter value="orientdb/config/default-distributed-db-config.json" name="configuration.db.default"/>
<parameter value="orientdb/config/hazelcast.xml" name="configuration.hazelcast"/>
<parameter name="nodeName" value="$pod_dns" />
</parameters>
</handler>
My hazelcast.xml file looks like this (pod_dns is the name of the pod stored in env):
<properties>
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<join>
<multicast enabled="false"/>
<tcp-ip enabled="false"/>
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<properties>
<property name="service-dns">pod_dns.default.svc.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
</network>
Here is the Kubernetes StatefulSet. Bash scripts for the hazelcast and orientdb-server-config files are mounted and executed (to update the settings depending on the env values on each pod):
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: orientdbservice
spec:
serviceName: orientdbservice
replicas: 3
podManagementPolicy: Parallel
selector:
matchLabels:
service: orientdb
type: container-deployment
template:
metadata:
labels:
service: orientdb
type: container-deployment
spec:
containers:
- name: orientdbservice
image: orientdb:2.2.36
command: ["/bin/sh","-c", " cp /configs/* /orientdb/config/ ; chmod +x /orientdb/config/hazelcast_template.sh ; chmod +x /orientdb/config/server_config_template.sh ; sh /orientdb/config/hazelcast_template.sh ; sh /orientdb/config/server_config_template.sh ; /orientdb/bin/server.sh -Ddistributed=true" ]
env:
- name: ORIENTDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: orientdb-password
key: password.txt
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: 2424
name: port-binary
- containerPort: 2480
name: port-http
- containerPort: 5701
name: hazelcast
volumeMounts:
- name: config
mountPath: /orientdb/config
- name: orientdb-config-template-hazelcast
mountPath: /configs/hazelcast_template.sh
subPath: hazelcast_template.sh
- name: orientdb-config-template-server
mountPath: /configs/server_config_template.sh
subPath: server_config_template.sh
- name: orientdb-config-distributed
mountPath: /configs/default-distributed-db-config.json
subPath: default-distributed-db-config.json
- name: orientdb-databases
mountPath: /orientdb/databases
- name: orientdb-backup
mountPath: /orientdb/backup
volumes:
- name: config
emptyDir: {}
- name: orientdb-config-template-hazelcast
configMap:
name: orientdb-configmap-template-hazelcast
- name: orientdb-config-template-server
configMap:
name: orientdb-configmap-template-server
- name: orientdb-config-distributed
configMap:
name: orientdb-configmap-distributed
volumeClaimTemplates:
- metadata:
name: orientdb-databases
labels:
service: orientdb
type: pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
- metadata:
name: orientdb-backup
labels:
service: orientdb
type: pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
The problem is in the hazelcast-kubernetes plugin configuration. First of all, it is necessary to update OrientDB to the latest version, 3.0.10, which embeds the newest Hazelcast version. Also, I mounted the hazelcast-kubernetes.jar dependency file directly into the /orientdb/lib folder, and it started to work properly. The problem was not with the config files but with the dependency setup for OrientDB.
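A sketch of how that jar mount can be wired into the StatefulSet above (the plugin version and download URL are assumptions; pin whichever release matches the embedded Hazelcast):

spec:
  template:
    spec:
      # Fetch the hazelcast-kubernetes plugin jar before OrientDB starts
      # (version and URL are illustrative, not taken from the original setup).
      initContainers:
      - name: fetch-hazelcast-kubernetes
        image: busybox
        command: ["wget", "-O", "/deps/hazelcast-kubernetes.jar",
                  "https://repo1.maven.org/maven2/com/hazelcast/hazelcast-kubernetes/1.3.1/hazelcast-kubernetes-1.3.1.jar"]
        volumeMounts:
        - name: extra-libs
          mountPath: /deps
      containers:
      - name: orientdbservice
        volumeMounts:
        # Place the jar on OrientDB's classpath under /orientdb/lib
        - name: extra-libs
          mountPath: /orientdb/lib/hazelcast-kubernetes.jar
          subPath: hazelcast-kubernetes.jar
      volumes:
      - name: extra-libs
        emptyDir: {}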
I am currently trying to deploy the following on Minikube.
I updated the configuration files to use a hostPath volume as persistent storage on the minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
name: "pv-volume"
spec:
capacity:
storage: "20Gi"
accessModes:
- "ReadWriteOnce"
hostPath:
path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "orientdb-pv-claim"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb-config
mountPath: /data/orientdb/config
- name: orientdb-databases
mountPath: /data/orientdb/databases
- name: orientdb-backup
mountPath: /data/orientdb/backup
volumes:
- name: orientdb-config
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: orientdbservice
labels:
run: orientdbservice
spec:
type: NodePort
selector:
run: orientdbservice
ports:
- protocol: TCP
port: 2480
name: http
which results in the following:
#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume 20Gi RWO Retain Available 4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO standard 4h
#kubectl get pods
NAME READY STATUS RESTARTS AGE
orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 3h
#kubectl describe pod orientdbservice-458328598-zsmw5
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath TypeReason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3h 41s 26 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
It seems that the volumes are not able to mount for the pod. Is there something wrong with the way I am creating a persistent volume on my node?
I appreciate all the help.
A few questions before I tell you what worked for me:
Does the directory /data on the minikube machine have the right set of permissions?
In minikube you don't need to worry about setting up volumes; in other words, don't worry about PersistentVolume objects anymore. Just enable the volume provisioner addon using the following command. Once you do that, every PersistentVolumeClaim that tries to claim storage will get whatever it needs:
minikube addons enable default-storageclass
So here is what worked for me:
I removed the PersistentVolume
I also changed the mountPath to match what is given in the upstream docs: https://hub.docker.com/_/orientdb/
I have added separate PersistentVolumeClaim for databases and backup
I changed the config from a PersistentVolumeClaim to a configMap, so you don't need to worry about how to get the config into the running cluster; you do it with a configMap, since the config comes from a set of config files.
Here is the config that worked for me:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdb-databases
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdb-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
ports:
- name: orientdbservice-2480
port: 2480
targetPort: 0
- name: orientdbservice-2424
port: 2424
targetPort: 0
selector:
app: orientdbservice
type: NodePort
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
containers:
- env:
- name: ORIENTDB_ROOT_PASSWORD
value: rootpwd
image: orientdb
name: orientdbservice
resources: {}
volumeMounts:
- mountPath: /orientdb/databases
name: orientdb-databases
- mountPath: /orientdb/backup
name: orientdb-backup
- mountPath: /orientdb/config
name: orientdb-config
volumes:
- configMap:
name: orientdb-config
name: orientdb-config
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-databases
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-backup
For the configMap, I had to go to the directory examples/3-nodes-compose/var/odb3/config in the GitHub repository orientechnologies/orientdb-docker for sample config files.
Go to that directory (or the directory where you have saved the config) and run the following command:
kubectl create configmap orientdb-config --from-file=.
If you want to see what is being created and deployed, run the following:
kubectl create configmap orientdb-config --from-file=. --dry-run -o yaml
Here is the configMap I have used to do my deployment:
apiVersion: v1
data:
automatic-backup.json: |-
{
"enabled": true,
"mode": "FULL_BACKUP",
"exportOptions": "",
"delay": "4h",
"firstTime": "23:00:00",
"targetDirectory": "backup",
"targetFileName": "${DBNAME}-${DATE:yyyyMMddHHmmss}.zip",
"compressionLevel": 9,
"bufferSize": 1048576
}
backups.json: |-
{
"backups": []
}
default-distributed-db-config.json: |
{
"autoDeploy": true,
"readQuorum": 1,
"writeQuorum": "majority",
"executionMode": "undefined",
"readYourWrites": true,
"newNodeStrategy": "static",
"servers": {
"*": "master"
},
"clusters": {
"internal": {
},
"*": {
"servers": [
"<NEW_NODE>"
]
}
}
}
events.json: |-
{
"events": []
}
hazelcast.xml: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- ~ Copyright (c)
2008-2012, Hazel Bilisim Ltd. All Rights Reserved. ~ \n\t~ Licensed under the
Apache License, Version 2.0 (the \"License\"); ~ you may \n\tnot use this file
except in compliance with the License. ~ You may obtain \n\ta copy of the License
at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ \n\t~ Unless required by applicable
law or agreed to in writing, software ~ distributed \n\tunder the License is distributed
on an \"AS IS\" BASIS, ~ WITHOUT WARRANTIES \n\tOR CONDITIONS OF ANY KIND, either
express or implied. ~ See the License for \n\tthe specific language governing
permissions and ~ limitations under the License. -->\n\n<hazelcast\n\txsi:schemaLocation=\"http://www.hazelcast.com/schema/config
hazelcast-config-3.3.xsd\"\n\txmlns=\"http://www.hazelcast.com/schema/config\"
xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n\t<group>\n\t\t<name>orientdb</name>\n\t\t<password>orientdb</password>\n\t</group>\n\t<network>\n\t\t<port
auto-increment=\"true\">2434</port>\n\t\t<join>\n\t\t\t<multicast enabled=\"true\">\n\t\t\t\t<multicast-group>235.1.1.1</multicast-group>\n\t\t\t\t<multicast-port>2434</multicast-port>\n\t\t\t</multicast>\n\t\t</join>\n\t</network>\n\t<executor-service>\n\t\t<pool-size>16</pool-size>\n\t</executor-service>\n</hazelcast>\n"
orientdb-client-log.properties: |
#
# /*
# * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
# *
# * Licensed under the Apache License, Version 2.0 (the "License");
# * you may not use this file except in compliance with the License.
# * You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# *
# * For more information: http://www.orientechnologies.com
# */
#
# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler
# Set the default logging level for the root logger
.level = ALL
com.orientechnologies.orient.server.distributed.level = FINE
com.orientechnologies.orient.core.level = WARNING
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = WARNING
# Set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OLogFormatter
orientdb-server-config.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
<handlers>
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<parameter value="${distributed}" name="enabled"/>
<parameter value="${ORIENTDB_HOME}/config/default-distributed-db-config.json" name="configuration.db.default"/>
<parameter value="${ORIENTDB_HOME}/config/hazelcast.xml" name="configuration.hazelcast"/>
<parameter value="odb3" name="nodeName"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OJMXPlugin">
<parameters>
<parameter value="false" name="enabled"/>
<parameter value="true" name="profilerManaged"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OAutomaticBackup">
<parameters>
<parameter value="false" name="enabled"/>
<parameter value="${ORIENTDB_HOME}/config/automatic-backup.json" name="config"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
<parameters>
<parameter value="true" name="enabled"/>
<parameter value="SQL" name="allowedLanguages"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.plugin.livequery.OLiveQueryPlugin">
<parameters>
<parameter value="false" name="enabled"/>
</parameters>
</handler>
</handlers>
<network>
<sockets>
<socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="ssl">
<parameters>
<parameter value="false" name="network.ssl.clientAuth"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
<parameter value="password" name="network.ssl.keyStorePassword"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
<parameter value="password" name="network.ssl.trustStorePassword"/>
</parameters>
</socket>
<socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="https">
<parameters>
<parameter value="false" name="network.ssl.clientAuth"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
<parameter value="password" name="network.ssl.keyStorePassword"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
<parameter value="password" name="network.ssl.trustStorePassword"/>
</parameters>
</socket>
</sockets>
<protocols>
<protocol implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" name="binary"/>
<protocol implementation="com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpDb" name="http"/>
</protocols>
<listeners>
<listener protocol="binary" socket="default" port-range="2424-2430" ip-address="0.0.0.0"/>
<listener protocol="http" socket="default" port-range="2480-2490" ip-address="0.0.0.0">
<commands>
<command implementation="com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetStaticContent" pattern="GET|www GET|studio/ GET| GET|*.htm GET|*.html GET|*.xml GET|*.jpeg GET|*.jpg GET|*.png GET|*.gif GET|*.js GET|*.css GET|*.swf GET|*.ico GET|*.txt GET|*.otf GET|*.pjs GET|*.svg GET|*.json GET|*.woff GET|*.woff2 GET|*.ttf GET|*.svgz" stateful="false">
<parameters>
<entry value="Cache-Control: no-cache, no-store, max-age=0, must-revalidate\r\nPragma: no-cache" name="http.cache:*.htm *.html"/>
<entry value="Cache-Control: max-age=120" name="http.cache:default"/>
</parameters>
</command>
</commands>
<parameters>
<parameter value="utf-8" name="network.http.charset"/>
<parameter value="true" name="network.http.jsonResponseError"/>
<parameter value="Access-Control-Allow-Origin:*;Access-Control-Allow-Credentials: true" name="network.http.additionalResponseHeaders"/>
</parameters>
</listener>
</listeners>
</network>
<storages/>
<users>
<user resources="*" password="{PBKDF2WithHmacSHA256}8B5E4C8ABD6A68E8329BD58D1C785A467FD43809823C8192:BE5D490BB80D021387659F7EF528D14130B344D6D6A2D590:65536" name="root"/>
<user resources="connect,server.listDatabases,server.dblist" password="{PBKDF2WithHmacSHA256}268A3AFC0D2D9F25AB7ECAC621B5EA48387CF2B9996E1881:CE84E3D0715755AA24545C23CDACCE5EBA35621E68E34BF2:65536" name="guest"/>
</users>
<properties>
<entry value="1" name="db.pool.min"/>
<entry value="50" name="db.pool.max"/>
<entry value="true" name="profiler.enabled"/>
</properties>
<isAfterFirstTime>true</isAfterFirstTime>
</orient-server>
orientdb-server-log.properties: |
#
# /*
# * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
# *
# * Licensed under the Apache License, Version 2.0 (the "License");
# * you may not use this file except in compliance with the License.
# * You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# *
# * For more information: http://www.orientechnologies.com
# */
#
# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
# Set the default logging level for the root logger
.level = INFO
com.orientechnologies.level = INFO
com.orientechnologies.orient.server.distributed.level = INFO
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = INFO
# Set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OAnsiLogFormatter
# Set the default logging level for new FileHandler instances
java.util.logging.FileHandler.level = INFO
# Naming style for the output file
java.util.logging.FileHandler.pattern=../log/orient-server.log
# Set the default formatter for new FileHandler instances
java.util.logging.FileHandler.formatter = com.orientechnologies.common.log.OLogFormatter
# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit=10000000
# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count=10
kind: ConfigMap
metadata:
creationTimestamp: null
name: orientdb-config
Here is what it looks like for me:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
po/orientdbservice-4064909316-pzxhl 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/orientdbservice 10.0.0.185 <nodes> 2480:31058/TCP,2424:30671/TCP 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/orientdbservice 1 1 1 1 1h
NAME DESIRED CURRENT READY AGE
rs/orientdbservice-4064909316 1 1 1 1h
$ kubectl get cm
NAME DATA AGE
orientdb-config 8 1h
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-backup Bound pvc-9c1507ea-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
orientdb-databases Bound pvc-9c00ca83-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
The config generated above might look a little different; it was auto-generated with a tool called kedge. See the instructions for how I did it in this gist: https://gist.github.com/surajssd/6bbe43a1b2ceee01962e0a1480d8cb04