Defining defaultMode on a Kubernetes volume inside a Deployment can be tricky.
The field expects a decimal (base-10) integer corresponding to the octal UNIX permission bits.
As an example, to mount the ConfigMap with permissions r-------- (octal 400), you'd need to specify 256, since octal 400 is 4 × 64 = 256 in decimal.
apiVersion: apps/v1
kind: Deployment
metadata:
name: foo
namespace: foo
spec:
replicas: 1
selector:
matchLabels:
app: foo
template:
metadata:
labels:
app: foo
spec:
containers:
- image: php-fpm:latest
volumeMounts:
- name: phpini
mountPath: /usr/local/etc/php/conf.d/99-settings.ini
readOnly: true
subPath: 99-settings.ini
volumes:
- configMap:
defaultMode: 256
name: phpini-configmap
optional: false
name: phpini
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: foo
namespace: foo
name: phpini-configmap
data:
99-settings.ini: |
; Enable Zend OPcache extension module
zend_extension = opcache
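Once the Deployment above is applied, you can confirm the resulting mode on the mounted file (a quick check, assuming kubectl access to the cluster; replace <pod-name> with the actual pod):
kubectl exec -n foo <pod-name> -- ls -l /usr/local/etc/php/conf.d/99-settings.ini
# the permissions column should read r-------- when defaultMode is 256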
Use the following table:
unix octal   unix readable   binary equivalent   defaultMode (decimal)
400          r--------       100000000           256
440          r--r-----       100100000           288
444          r--r--r--       100100100           292
600          rw-------       110000000           384
640          rw-r-----       110100000           416
660          rw-rw----       110110000           432
664          rw-rw-r--       110110100           436
666          rw-rw-rw-       110110110           438
700          rwx------       111000000           448
770          rwxrwx---       111111000           504
777          rwxrwxrwx       111111111           511
A more direct way to do this is to use a base-8 to base-10 converter like this one.
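If you prefer the command line, the conversion can also be done with a plain shell one-liner (standard POSIX printf; nothing Kubernetes-specific is assumed):
# Print the decimal value of an octal mode; the leading 0 marks the argument as octal.
printf '%d\n' 0400    # 256
printf '%d\n' 0440    # 288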
I'm trying to deploy Guacamole on a Kubernetes cluster. Initially I ran into the well-known "Authentication type 10 not supported" issue. According to the issue ticket on their Jira it is already fixed in their GitHub repo, but it has yet to be released.
So @Stavros Kois pointed out in the comments that they're using a temporary workaround until the next Guacamole release. I've copied the two scripts, 3-temp-hack and 4-temp-hack, from their file and used them in my own deployment.
The deployment file looks like this:
apiVersion: v1
kind: Service
metadata:
name: guacamole
namespace: $NAMESPACE
labels:
app: guacamole
spec:
ports:
- name: http
port: 80
targetPort: 8080
selector:
app: guacamole
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: guacd
namespace: $NAMESPACE
labels:
app: guacamole
spec:
ports:
- name: http
port: 4822
targetPort: 4822
selector:
app: guacamole
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: guacamole
namespace: $NAMESPACE
labels:
app: guacamole
spec:
replicas: 1
selector:
matchLabels:
app: guacamole
template:
metadata:
labels:
app: guacamole
spec:
containers:
- name: guacd
image: docker.io/guacamole/guacd:$GUACAMOLE_GUACD_VERSION
env:
- name: GUACD_LOG_LEVEL
value: "debug"
ports:
- containerPort: 4822
securityContext:
runAsUser: 1000
runAsGroup: 1000
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
- name: guacamole
image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
env:
- name: GUACD_HOSTNAME
value: "localhost"
- name: GUACD_PORT
value: "4822"
- name: POSTGRES_HOSTNAME
value: "database-url.nl"
- name: POSTGRES_PORT
value: "5432"
- name: POSTGRES_DATABASE
value: "guacamole"
- name: POSTGRES_USER
value: "guacamole_admin"
- name: POSTGRES_PASSWORD
value: "guacamoleadmin"
- name: HOME
value: "/home/guacamole"
- name: LOGBACK_LEVEL
value: "debug"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
securityContext:
runAsUser: 1001
runAsGroup: 1001
allowPrivilegeEscalation: false
runAsNonRoot: true
# TEMPORARY WORKAROUND UNTIL GUACAMOLE RELEASE AN IMAGE WITH THE UPDATED DRIVER
volumeMounts:
- name: temphackalso
mountPath: "/opt/guacamole/postgresql"
volumes:
- name: temphack
persistentVolumeClaim:
claimName: temphack
- name: temphackalso
persistentVolumeClaim:
claimName: temphackalso
# TEMPORARY WORKAROUND UNTIL GUACAMOLE RELEASE AN IMAGE WITH THE UPDATED DRIVER
# https://issues.apache.org/jira/browse/GUACAMOLE-1433
# https://github.com/apache/guacamole-client/pull/655
initContainers:
- name: temp-hack-1
image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
securityContext:
runAsUser: 1001
runAsGroup: 1001
volumeMounts:
- name: temphack
mountPath: "/opt/guacamole/postgresql-hack"
command: ["/bin/sh", "-c"]
args:
- |-
echo "Checking postgresql driver version..."
if [ -e /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar ]; then
echo "Version found is correct."
exit 0
else
echo "Old version found. Will try to download a known-to-work version."
echo "Downloading (postgresql-42.2.24.jre7.jar)..."
curl -L "https://jdbc.postgresql.org/download/postgresql-42.2.24.jre7.jar" >"/opt/guacamole/postgresql-hack/postgresql-42.2.24.jre7.jar"
if [ -e /opt/guacamole/postgresql-hack/postgresql-42.2.24.jre7.jar ]; then
echo "Downloaded successfully!"
cp -r /opt/guacamole/postgresql/* /opt/guacamole/postgresql-hack/
if [ -e /opt/guacamole/postgresql-hack/postgresql-9.4-1201.jdbc41.jar ]; then
echo "Removing old version... (postgresql-9.4-1201.jdbc41.jar)"
rm "/opt/guacamole/postgresql-hack/postgresql-9.4-1201.jdbc41.jar"
if [ $? -eq 0 ]; then
echo "Removed successfully!"
else
echo "Failed to remove."
exit 1
fi
fi
else
echo "Failed to download."
exit 1
fi
fi
- name: temp-hack-2
image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
securityContext:
runAsUser: 1001
runAsGroup: 1001
volumeMounts:
- name: temphack
mountPath: "/opt/guacamole/postgresql-hack"
- name: temphackalso
mountPath: "/opt/guacamole/postgresql"
command: ["/bin/sh", "-c"]
args:
- |-
echo "Copying postgres driver into the final destination."
cp -r /opt/guacamole/postgresql-hack/* /opt/guacamole/postgresql/
if [ -e /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar ]; then
echo "Driver copied successfully!"
else
echo "Failed to copy the driver"
fi
So I've successfully copied the correct PostgreSQL driver. The following command shows the driver as expected: ls -lah /opt/guacamole/postgresql:
drwxrwxrwx 4 root root 4.0K Jul 28 15:46 .
drwxr-xr-x 1 root root 184 Dec 29 2021 ..
-rw-r--r-- 1 guacamole guacamole 5.9M Jul 28 17:06 guacamole-auth-jdbc-postgresql-1.4.0.jar
drwx------ 2 root root 16K Jul 28 15:45 lost+found
-rw-r--r-- 1 guacamole guacamole 0 Jan 1 1970 postgresql-42.2.24.jre7.jar
drwxr-xr-x 3 guacamole guacamole 4.0K Jul 28 15:46 schema
And ls -lah /home/guacamole/.guacamole/lib:
drwxr-xr-x 2 guacamole guacamole 41 Jul 28 17:06 .
drwxr-xr-x 4 guacamole guacamole 82 Jul 28 17:06 ..
lrwxrwxrwx 1 guacamole guacamole 53 Jul 28 17:06 postgresql-42.2.24.jre7.jar -> /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar
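For completeness, the jar itself can also be sanity-checked from inside the container (assuming standard coreutils are present in the image):
ls -l /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar      # size should be a few hundred KB, not 0
head -c 2 /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar  # a valid jar/zip starts with the bytes "PK"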
But I'm still getting ClassNotFoundException: org.postgresql.Driver. Does anybody have ideas?
The pod YAML:
containers:
- name: kiada
image: kiada-0.1
volumeMounts:
- name: my-test
subPath: my-app.conf
mountPath: /html/my-app.conf
volumes:
- name: my-test
configMap:
name: kiada-config
The ConfigMap:
➜ v5-kubernetes git:(master) ✗ k get cm kiada-config -oyaml
apiVersion: v1
data:
key: value\n
status-message: This status message is set in the kiada-config config map2\n
kind: ConfigMap
metadata:
creationTimestamp: "2022-05-18T03:01:15Z"
name: kiada-config
namespace: default
resourceVersion: "135185128"
uid: 8c8875ce-47f5-49d4-8bc7-d8dbc2d7f7ba
The pod has my-app.conf:
root@kiada2-7cc7bf55d8-m97tt:/# ls -al /html/my-app.conf/
total 12
drwxrwxrwx 3 root root 4096 May 21 02:29 .
drwxr-xr-x 1 root root 4096 May 21 02:29 ..
drwxr-xr-x 2 root root 4096 May 21 02:29 ..2022_05_21_02_29_41.554311630
lrwxrwxrwx 1 root root 31 May 21 02:29 ..data -> ..2022_05_21_02_29_41.554311630
lrwxrwxrwx 1 root root 10 May 21 02:29 key -> ..data/key
lrwxrwxrwx 1 root root 21 May 21 02:29 status-message -> ..data/status-message
If I add subPath in the pod YAML:
spec:
containers:
- name: kiada
image: kiada-0.1
volumeMounts:
- name: my-test
subPath: my-app.conf
mountPath: /html/my-app.conf
volumes:
- name: my-test
configMap:
name: kiada-config
The result:
root@kiada2-c89749c8-x9qwq:/# ls -al html/my-app.conf/
total 8
drwxrwxrwx 2 root root 4096 May 21 02:36 .
drwxr-xr-x 1 root root 4096 May 21 02:36 ..
Why, when I use subPath, does the ConfigMap key not exist? What's wrong?
In order to produce a file called my-app.conf containing your application config in your Pod's file system, you would have to ensure that this file exists as a key in your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: kiada-config
data:
my-app.conf: |
key: value
status-message: This status message is set in the kiada-config config map2
Then, you can mount it into your Pod like this:
apiVersion: v1
kind: Pod
metadata:
labels:
run: kiada
name: kiada
spec:
containers:
- name: kiada
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
volumeMounts:
- mountPath: /html/
name: my-test
volumes:
- name: my-test
configMap:
name: kiada-config
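Once that Pod is running, you can verify the file shows up as expected (a quick check, assuming kubectl access to the cluster):
kubectl exec kiada -- ls /html
kubectl exec kiada -- cat /html/my-app.conf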
The subPath field is not required in this scenario. It would be useful if you want to remap my-app.conf to a different name...
apiVersion: v1
kind: Pod
metadata:
labels:
run: kiada
name: kiada
spec:
containers:
- name: kiada
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
volumeMounts:
- mountPath: /html/my-app-new-name.conf
name: my-test
subPath: my-app.conf
volumes:
- name: my-test
configMap:
name: kiada-config
...or if you had multiple config files in your ConfigMap and just wanted to map one of them into your Pod:
apiVersion: v1
kind: ConfigMap
metadata:
name: kiada-config
data:
my-app.conf: |
key: value
status-message: This status message is set in the kiada-config config map2
my-second-app.conf: |
error: not in use
apiVersion: v1
kind: Pod
metadata:
labels:
run: kiada
name: kiada
spec:
containers:
- name: kiada
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
volumeMounts:
- mountPath: /html/my-app.conf
name: my-test
subPath: my-app.conf
volumes:
- name: my-test
configMap:
name: kiada-config
There is no file in your ConfigMap. I would suggest checking out https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
SPECIAL_LEVEL: very
SPECIAL_TYPE: charm
Pod deployment:
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
SPECIAL_LEVEL
SPECIAL_TYPE
If you want to project a ConfigMap key under a different file name, you can use items:
items:
- key: SPECIAL_LEVEL
path: keys
Example: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-specific-path-in-the-volume
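For context, here is roughly where items sits in the volume definition, reusing the special-config ConfigMap from above (a minimal sketch):
volumes:
- name: config-volume
  configMap:
    name: special-config
    items:
    - key: SPECIAL_LEVEL
      path: keys   # SPECIAL_LEVEL will appear as /etc/config/keys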
I am pulling an image from a private registry in a Kubernetes (v1.15.2) pod, and I am using this config to authenticate the pull for my containers:
"imagePullSecrets": [
{
"name": "regcred"
}
]
but it does not seem to work with the init containers. When pulling the init container image, the pod throws this error:
Failed to pull image "registry.cn-hangzhou.aliyuncs.com/app_k8s/fat/alpine-bash:3.8": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
So how do I specify the auth info for the init containers of a Kubernetes pod to make it work? This is my full config:
kind: Deployment
apiVersion: apps/v1
metadata:
name: deployment-apollo-portal-server
namespace: sre
selfLink: /apis/apps/v1/namespaces/sre/deployments/deployment-apollo-portal-server
uid: bc8e94bb-524d-487e-b9bb-90624cfcace3
resourceVersion: '479747'
generation: 4
creationTimestamp: '2020-05-31T05:34:08Z'
labels:
app: deployment-apollo-portal-server
annotations:
deployment.kubernetes.io/revision: '4'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"deployment-apollo-portal-server"},"name":"deployment-apollo-portal-server","namespace":"sre"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"pod-apollo-portal-server"}},"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"app":"pod-apollo-portal-server"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app","operator":"In","values":["pod-apollo-portal-server"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}},"containers":[{"env":[{"name":"APOLLO_PORTAL_SERVICE_NAME","value":"service-apollo-portal-server.sre"}],"image":"apollo-portal-server:v1.0.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"initialDelaySeconds":120,"periodSeconds":15,"tcpSocket":{"port":8070}},"name":"container-apollo-portal-server","ports":[{"containerPort":8070,"protocol":"TCP"}],"readinessProbe":{"initialDelaySeconds":10,"periodSeconds":5,"tcpSocket":{"port":8070}},"securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/apollo-portal-server/config/application-github.properties","name":"volume-configmap-apollo-portal-server","subPath":"application-github.properties"},{"mountPath":"/apollo-portal-server/config/apollo-env.properties","name":"volume-configmap-apollo-portal-server","subPath":"apollo-env.properties"}]}],"dnsPolicy":"ClusterFirst","initContainers":[{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-dev.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-dev"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-alpha.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-alpha"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-beta.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-beta"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-prod.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-prod"}],"restartPolicy":"Always","volumes":[{"configMap":{"items":[{"key":"application-github.properties","path":"application-github.properties"},{"key":"apollo-env.properties","path":"apollo-env.properties"}],"name":"configmap-apollo-portal-server"},"name":"volume-configmap-apollo-portal-server"}]}}}}
spec:
replicas: 3
selector:
matchLabels:
app: pod-apollo-portal-server
template:
metadata:
creationTimestamp: null
labels:
app: pod-apollo-portal-server
spec:
volumes:
- name: volume-configmap-apollo-portal-server
configMap:
name: configmap-apollo-portal-server
items:
- key: application-github.properties
path: application-github.properties
- key: apollo-env.properties
path: apollo-env.properties
defaultMode: 420
initContainers:
- name: check-service-apollo-admin-server-dev
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120 service-apollo-admin-server-dev.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-alpha
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-alpha.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-beta
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-beta.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-prod
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120 service-apollo-admin-server-prod.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
containers:
- name: container-apollo-portal-server
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/apollo-portal-server:v1.0.0
ports:
- containerPort: 8070
protocol: TCP
env:
- name: APOLLO_PORTAL_SERVICE_NAME
value: service-apollo-portal-server.sre
resources: {}
volumeMounts:
- name: volume-configmap-apollo-portal-server
mountPath: /apollo-portal-server/config/application-github.properties
subPath: application-github.properties
- name: volume-configmap-apollo-portal-server
mountPath: /apollo-portal-server/config/apollo-env.properties
subPath: apollo-env.properties
livenessProbe:
tcpSocket:
port: 8070
initialDelaySeconds: 120
timeoutSeconds: 1
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 8070
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: regcred
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- pod-apollo-portal-server
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 4
replicas: 4
updatedReplicas: 2
unavailableReplicas: 4
conditions:
- type: Available
status: 'False'
lastUpdateTime: '2020-05-31T05:34:08Z'
lastTransitionTime: '2020-05-31T05:34:08Z'
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2020-05-31T06:08:24Z'
lastTransitionTime: '2020-05-31T06:08:24Z'
reason: ProgressDeadlineExceeded
message: >-
ReplicaSet "deployment-apollo-portal-server-dc65dcf6b" has timed out
progressing.
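Stripped down to the parts relevant for image pulls, the pod template above has this shape; note that imagePullSecrets sits at the pod level, next to both initContainers and containers (a minimal sketch, with most fields omitted):
spec:
  template:
    spec:
      imagePullSecrets:
      - name: regcred
      initContainers:
      - name: check-service-apollo-admin-server-dev
        image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
      containers:
      - name: container-apollo-portal-server
        image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/apollo-portal-server:v1.0.0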
I have a strange issue where I am trying to apply a podAntiAffinity rule to make sure that no two pods of a specific DeploymentConfig ever end up on the same node.
I attempt to edit the dc with:
spec:
replicas: 1
selector:
app: server-config
deploymentconfig: server-config
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
app: server-config
deploymentconfig: server-config
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- server-config
topologyKey: "kubernetes.io/hostname"
but on saving that, I get:
"/tmp/oc-edit-34z56.yaml" 106L, 3001C written
deploymentconfig "server-config" skipped
and the changes don't stick. My OpenShift/Kubernetes versions are:
[root@master1 ~]# oc version
oc v1.5.1
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Thanks in advance.
This seems to work; the syntax is wildly different, and the scheduler.alpha.kubernetes.io/affinity annotation needs to be added for it to take effect:
spec:
replicas: 1
selector:
app: server-config
deploymentconfig: server-config
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/affinity: |
{
"podAntiAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": [{
"labelSelector": {
"matchExpressions": [{
"key": "app",
"operator": "In",
"values":["server-config"]
}]
},
"topologyKey": "kubernetes.io/hostname"
}]
}
}
Working as intended and spreading out properly between nodes:
[root@master1 ~]# oc get pods -o wide |grep server-config
server-config-22-4ktvf 1/1 Running 0 3h 10.1.1.73 10.0.4.101
server-config-22-fz31j 1/1 Running 0 3h 10.1.0.3 10.0.4.100
server-config-22-mrw09 1/1 Running 0 3h 10.1.2.64 10.0.4.102