I'm launching a GlassFish pod on my Kubernetes cluster, and I'm trying to copy some .war files from a folder on my host, but the cp command always seems to fail.
My YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: glassfish
spec:
  # replicas: 2
  selector:
    matchLabels:
      app: glassfish
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: glassfish
    spec:
      containers:
      - image: glassfish:latest
        name: glassfish
        ports:
        - containerPort: 8080
          name: glassfishhttp
        - containerPort: 4848
          name: glassfishadmin
        command: ["/bin/cp"]
        args: #["/mnt/apps/*","/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/"]
        - /mnt/apps/
        - /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
        volumeMounts:
        - name: glassfish-persistent-storage
          mountPath: /mount
        - name: app
          mountPath: /mnt/apps
      volumes:
      - name: glassfish-persistent-storage
        persistentVolumeClaim:
          claimName: fish-mysql-pvc
      - name: app
        hostPath:
          path: /mnt/nfs
          type: Directory
I'm trying to use the following command in my container:
cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy
What am I doing wrong?
So far I've tried it with the trailing /, without it, and using /*.
When I use apps/* I see "item or directory not found"; when I use apps/ I get "directory omitted". I only need what's inside the folder, not the folder itself, so -r doesn't really help either.
Two things to note here:
If you want to copy a directory using cp, you have to provide the -a or -R flag to cp:
-R If source_file designates a directory, cp copies the directory and the entire subtree connected at
that point. If the source_file ends in a /, the contents of the directory are copied rather than
the directory itself. This option also causes symbolic links to be copied, rather than indirected
through, and for cp to create special files rather than copying them as normal files. Created
directories have the same mode as the corresponding source directory, unmodified by the process'
umask.
In -R mode, cp will continue copying even if errors are detected.
If you use /bin/cp as your entrypoint in the pod, then this command is not executed in a shell. The * in /path/to/*, however, is a shell feature (glob expansion), so it gets passed to cp literally.
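To illustrate the difference (a rough sketch using the paths from your manifest):

# Without a shell, cp receives the literal string '/mnt/apps/*' and looks for
# a file literally named '*', which does not exist.
/bin/cp '/mnt/apps/*' /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/

# With a shell, sh expands the glob into the individual .war files before cp runs.
/bin/sh -c 'cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/'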
(The same applies to initContainers: their command and args are not run in a shell either.)
To make this work, use /bin/sh as the command instead:
command:
- /bin/sh
- -c
- cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
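Also note that overriding command on the glassfish container replaces the image's normal startup command (see the next answer). If you would rather leave the image's entrypoint untouched, a sketch of an alternative, based on your manifest and assuming the autodeploy directory needs nothing from the image itself, is to let an initContainer copy the wars from the hostPath into an emptyDir and mount that emptyDir over the autodeploy path:

      initContainers:
      - name: copy-wars
        image: busybox
        # copy the wars from the hostPath volume into the shared emptyDir
        command: ["sh", "-c", "cp /mnt/apps/*.war /autodeploy/"]
        volumeMounts:
        - name: app
          mountPath: /mnt/apps
        - name: autodeploy
          mountPath: /autodeploy
      containers:
      - image: glassfish:latest
        name: glassfish
        volumeMounts:
        - name: autodeploy
          mountPath: /usr/local/glassfish4/glassfish/domains/domain1/autodeploy
      volumes:
      - name: autodeploy
        emptyDir: {}
      - name: app
        hostPath:
          path: /mnt/nfs
          type: Directory

The copy-wars name, the busybox image, and the autodeploy volume name are arbitrary choices here; any small image with a shell would do.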
What am I doing wrong?
Here is the correct command to execute:
command: ["sh", "-c", "cp -r /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/ && asadmin start-domain --verbose"]
With your cp command you effectively overwrite the legitimate command that is required to start everything. You can see the original one (which runs fine without your cp command) if you inspect the container. Initially, the container is started with:
...
"Cmd": [
"/bin/sh",
"-c",
"asadmin start-domain --verbose"
],
...
Simply adding the original command after the copy command solves your issue.
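If you want to check an image's default startup command yourself, inspecting the image works too (shown here with docker; the image name is the one from your manifest):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' glassfish:latest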
Related
I have a pod with multiple init containers and one main container. One of the init containers creates a sh file with some export commands like:
export Foo=Bar
I want to source the file so it creates the env variables, like this:
containers:
- name: test
  command:
    - "bash"
    - "-c"
  args:
    - "source /path/to/file"
It doesn't create the env variable. But if I run the source command directly in the container it works. What is the best way to do this using the command option in the pod definition?
If you are looking to create the sh file in the init container with the variables and then use it in the "main container", here is a quick example:
manifest
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  initContainers:
  - name: my-init-container
    image: alpine:latest
    command: ["sh", "-c", "echo export Foo=bar > /shared/script.sh && chmod +x /shared/script.sh"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: mycontainer
    image: mycustomimage
    resources:
      limits:
        memory: "32Mi"
        cpu: "100m"
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}  # assuming a plain emptyDir here so both containers can share /shared
Dockerfile
FROM alpine:latest
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD ...
entrypoint.sh
#!/bin/sh
. /shared/script.sh
env
exec "$#"
logs
$ kubectl logs pod/mypod
<...>
Foo=bar
<...>
As you can see, we created a script file in the init container with the Foo=bar variable and sourced it in the "main container"; the script is available there because the shared volume is mounted in both containers.
In most situations we use ConfigMaps/Secrets/vaults and inject those as variables into the containers, as the other answers mentioned. I recommend checking whether those can solve your problem first.
A Kubernetes ConfigMap can be used to expose key-value pairs as environment variables inside a container.
Instead of using the init container, you can directly use a ConfigMap or Secret to inject the variables as environment variables into the pod.
Your script will then be able to access those variables directly.
Example : https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
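For reference, a minimal sketch of that approach (the ConfigMap name, key, and pod are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-config
data:
  Foo: bar
---
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
  - name: test
    image: alpine:latest
    command: ["sh", "-c", "env && sleep 3600"]
    # envFrom turns every key in the ConfigMap into an environment variable
    envFrom:
    - configMapRef:
        name: my-env-config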
I have read some tutorials on how to mount a volume in a container and run a script on the host/node directly. This is the example given.
DaemonSet pod spec
hostPID: true
nodeSelector:
  cloud.google.com/gke-local-ssd: "true"
volumes:
- name: setup-script
  configMap:
    name: local-ssds-setup
- name: host-mount
  hostPath:
    path: /tmp/setup
initContainers:
- name: local-ssds-init
  image: marketplace.gcr.io/google/ubuntu1804
  securityContext:
    privileged: true
  volumeMounts:
  - name: setup-script
    mountPath: /tmp
  - name: host-mount
    mountPath: /host
  command:
  - /bin/bash
  - -c
  - |
    set -e
    set -x
    # Copy setup script to the host
    cp /tmp/setup.sh /host
    # Copy wait script to the host
    cp /tmp/wait.sh /host
    # Wait for updates to complete
    /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/wait.sh
    # Give execute priv to script
    /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/setup.sh
    # Wait for Node updates to complete
    /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/wait.sh
    # If the /tmp folder is mounted on the host then it can run the script
    /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/setup.sh
containers:
- image: "gcr.io/google-containers/pause:2.0"
  name: pause
(There is a ConfigMap for composing the .sh files; I just skip that here.)
What does "/usr/bin/nsenter -m/proc/1/ns/mnt" mean? Is this a command to run something on the host? What is "/proc/1/ns/mnt"?
Let's start with namespaces to understand this in detail.
Namespaces help isolate resources between processes: the kernel uses them to control which resources a process can see, which provides strong isolation between the different containers that may run on a system.
Having said that, these access restrictions also make some things more complicated. This is where the nsenter command comes in: it gives you access to another process's namespaces, somewhat like the way sudo gives you access to another user's privileges.
This command can give us access to the mount, UTS, IPC, network, PID, user, cgroup, and time namespaces.
The -m in your example is --mount, which enters the mount namespace referenced by the given file; /proc/1/ns/mnt is the mount namespace of process 1.
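As a rough illustration of what the commands in the init container boil down to (this only works in a privileged container with hostPID: true, as in the spec above, because then PID 1 inside the pod is the host's init process and /proc/1/ns/mnt is the host's mount namespace):

# Run 'ls /etc' inside the mount namespace of PID 1, i.e. with the host's
# filesystem view instead of the container's.
/usr/bin/nsenter -m/proc/1/ns/mnt -- ls /etc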
I'm planning to have an initcontainer that will handle some crypto stuff and then generate a source file to be sourced by a container.
The source file will be dynamically generated and the VARS will be dynamic, which means I will never know the VAR names or their contents. This also means I cannot use k8s env.
The file name will always be the same.
I know I can change the Dockerfile of my applications and include an entrypoint that sources the file before running the workload, but still, is this the only option?
There's no way to achieve this in k8s?
My container can mount the dir where the file was created by the initcontainer. But it can't, somehow, source the file?
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
  namespace: default
spec:
  nodeSelector:
    env: sm
  initContainers:
  - name: genenvfile
    image: busybox
    imagePullPolicy: Always
    command: ["/bin/sh"]
    # just an example, there will be a software here that will translate some encrypted stuff into VARS and then append'em to a file
    args: ["-c", "echo MYVAR=func > /tmp/sm/filetobesourced"]
    volumeMounts:
    - mountPath: /tmp/sm
      name: tmpdir
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    imagePullPolicy: IfNotPresent
    name: mypod-cm
    tty: true
    volumeMounts:
    - mountPath: /tmp/sm
      readOnly: true
      name: tmpdir
  volumes:
  - name: tmpdir
    emptyDir:
      medium: Memory
The step-by-step I'm thinking of would be:
initcontainer mounts /tmp/sm and generates a file called /tmp/sm/filetobesourced
container mounts the /tmp/sm
container sources /tmp/sm/filetobesourced
workload runs using all the vars sourced by the last step
Am I missing something to get the third step done?
Change the command and/or args on the main container to be more like bash -c 'source /tmp/sm/filetobesourced && exec whatevertheoriginalcommandwas'.
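Applied to the pod above, that could look roughly like this (the gcloud --version at the end is just a stand-in for whatever the container is really supposed to run):

  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    imagePullPolicy: IfNotPresent
    name: mypod-cm
    command: ["/bin/bash", "-c"]
    # source the generated file, then replace the shell with the real workload
    args: ["source /tmp/sm/filetobesourced && exec gcloud --version"]
    volumeMounts:
    - mountPath: /tmp/sm
      readOnly: true
      name: tmpdir

The exec makes the real workload replace the shell, so it keeps PID 1 and receives signals directly.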
For example, I want to place an application configuration file inside:
/opt/webserver/my_application/config/my_config_file.xml
I create a ConfigMap from file and then place it in a volume like:
/opt/persistentData/
The idea is to afterwards run a script that does something like:
cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/
But it could be any startup.sh script that performs the needed actions.
How do I run this command/script? (during Pod initialization before Tomcat startup).
I would first try whether this works.
spec:
  containers:
  - volumeMounts:
    - mountPath: /opt/webserver/my_application/config/my_config_file.xml
      name: config
      subPath: my_config_file.xml
  volumes:
  - configMap:
      items:
      - key: KEY_OF_THE_CONFIG
        path: my_config_file.xml
      name: YOUR_CONFIGMAP_NAME
    name: config
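The ConfigMap referenced above can be created from your file with something like this (the names are the same placeholders as in the snippet, and the path is illustrative):

kubectl create configmap YOUR_CONFIGMAP_NAME --from-file=KEY_OF_THE_CONFIG=/path/to/my_config_file.xml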
If not, add an init container to copy the file.
spec:
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
How about mounting the ConfigMap where you actually want it instead of copying over?
Update:
The init container @ccshih mentioned should do, but one can try other options too:
Build a custom image modifying the base one, using a Dockerfile. The example below takes a java+tomcat7 openshift image and adds an additional folder to the app classpath, so you can mount your ConfigMap to /mnt/config without overwriting anything, keeping both folders available.
FROM openshift/webserver31-tomcat7-openshift:1.2-6
# add classpaths to config
RUN sed -i 's/shared.loader=/shared.loader=\/mnt\/config/' \
    /opt/webserver/conf/catalina.properties
Change the ENTRYPOINT of the application, either by modifying the image or via DeploymentConfig hooks; see: https://docs.okd.io/latest/dev_guide/deployments/deployment_strategies.html#pod-based-lifecycle-hook
With the hooks one just needs to remember to call the original entrypoint or launch script after all the custom stuff is done.
spec:
  containers:
  - name: my-app
    image: 'image'
    command:
    - /bin/sh
    args:
    - '-c'
    - cp /wherever/you/have/your-config.xml /wherever/you/want/it/ && /opt/webserver/bin/launch.sh
I am trying to copy files from a container to a local/host directory, running my experiments on minikube. I tried starting minikube with a mount, as in minikube mount /tmp/export:/data/export, and it still does not work.
I have a single pod, that upon startup runs a simple script:
timeout --signal=SIGINT 10s clinic bubbleprof -- node index.js >> /tmp/clinic.output.log && \
cp -R `grep "." /tmp/clinic.output.log | tail -1 | grep -oE '[^ ]+$'`* /data/export/ && \
echo "Finished copying clinic run generated files"
Once my script finishes its run, the container dies. This happens because bash is the process with PID 1. I don't mind this. My problem is that /tmp/export is empty after the files should have been copied out.
My pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: clinic-testapp
spec:
  containers:
  - name: clinic-testapp
    image: username/container-image:0.0.11
    ports:
    - containerPort: 80
    volumeMounts:
    - name: clinic-storage
      mountPath: /data/export
  volumes:
  - name: clinic-storage
    hostPath:
      path: /tmp/export
Am I doing something wrong? Please advise.