Pipe file from container in k8s to local machine, edit and copy back to container in k8s? - kubernetes

I need to update a file in a container running in k8s using my local editor and save back the updates to the original file in the container without restarting/redeploying the container.
Right now I do:
$ kubectl exec tmp-shell -- cat /root/motd > motd && vi motd && kubectl cp motd tmp-shell:/root/motd
Is there some better way to do this?
I have looked at:
https://github.com/ksync/ksync
but seems heavyweight for something this "simple".
Notice:
I don't want to rely on an editor inside the container, since an editor is not guaranteed to be available there.

One option that might be available is ephemeral debug containers; however, they are an alpha feature, so they are probably not enabled for you at the time of writing. Barring that, yes, what you have is an option. It probably goes without saying, but this is a very bad idea: it might not work at all if the target file isn't writable (which it shouldn't be in most cases), either because of file permissions or because the container is running with an immutable (read-only) filesystem. Also, this only helps if whatever uses the file picks up the change without a restart.
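For reference, the debug-container flow looks roughly like this; this is only a sketch, since the command was kubectl alpha debug in the alpha releases and is plain kubectl debug in newer versions, so the exact flags may differ for you:
# attach an ephemeral busybox container to the existing pod (alpha-era syntax)
# with --target (and process namespace sharing) you can reach the app container's files via /proc/<pid>/root
kubectl alpha debug -it tmp-shell --image=busybox --target=tmp-shell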
A better medium-term plan would be to store the content in a ConfigMap and mount it into place. That would let you update it whenever you want.
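A minimal sketch of that ConfigMap approach, assuming the file is still the motd from your example (all names here are placeholders):
# create the ConfigMap from the local file, or update it in place on later runs
kubectl create configmap motd --from-file=motd=./motd --dry-run=client -o yaml | kubectl apply -f -
(on older kubectl versions the flag is just --dry-run). You then mount it via a configMap volume in the pod spec; once mounted, edits to the ConfigMap show up inside the container after a short sync delay, unless the key is mounted with subPath.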

You can use the https://github.com/waverage/kubelink utility. It is like ksync but simpler, and it is based on the kubectl cp and kubectl exec commands.
Install it with pip install kubelink and then use it like this:
kubelink create --name mypreset --source /Users/bob/myproject --destination /code --namespace default --selector "app=backend"
kubelink watch mypreset
Kubelink watches your local files and uploads them to Kubernetes pods when they change.

Related

How can I change the config file of the mongo running on ECS

I changed the mongod.conf.orig of the mongo running on ECS, but when I restart, the changes are gone.
Here's the details:
I have a mongodb running on ECS, and it always crashes due to running out of memory.
I have found the reason: I set the ECS memory to 8G, but because mongo is running in a container, it detects more memory than that.
When I run db.hostInfo()
I get a memSizeMB higher than 16G.
As a result, when I run db.serverStatus().wiredTiger.cache
I get a "maximum bytes configured" higher than 8G,
so I need to reduce wiredTigerCacheSizeGB in the config file.
I used the command copilot svc exec -c /bin/sh -n mongo to connect to it.
Then I found a file named mongod.conf.orig.
I ran apt-get install vim to install an editor and edited this mongod.conf.orig file.
But after I restart the mongo task, all my changes are gone, including the vim I just installed.
Did anyone meet the same problem? Any information will be appreciated.
ECS containers have ephemeral storage. In your case, you could create an EFS volume and mount it in the container, then keep the configuration there.
If you use CloudFormation, look at mount points.
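If the only setting you need to change is the cache size, a hedged alternative to editing the file at all (this is not the EFS approach described above) is to override the container command in the task definition, since mongod accepts the value as a flag:
# hypothetical command override for the mongo container in the ECS task definition; 4 GB is just an example value
mongod --wiredTigerCacheSizeGB 4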

VSCode: How to open repositories that have been cloned to a local volume

Using the remote container extension in VSCode I have opened a repository in a dev container. I am using vscode/docker on windows/wsl2 to develop in linux containers. In order to improve disk performance I chose the option to clone the repository into a docker volume instead of bind mounting it from windows. This works great. I did some editing, but before my change was finished I closed my VSCode window to perform some other tasks. Now I have some uncommitted state in a git repository that is cloned into a docker volume; I can inspect the volume using the docker extension.
My question is, how do I reconnect to that project?
One way is, if the container is still running, to reconnect to it, then do File > Open Folder and navigate to the volume mounted inside the container. But what if the container itself has been deleted? If the files were on my windows filesystem I could say "open folder" on the windows directory and then run "Remote-Container: Reopen in dev container" or whatever, but I can't open a folder that lives in a volume, can I?
If I understood correctly, you cloned a repository directly into a container volume using the "Clone Repository in Container Volume..." option.
Assuming the local volume is still there you should still be able to recover the uncommitted changes you saved inside it.
First make sure the volume is still there: unless you named it something in particular, it is usually named <your-repository-name>-<hex-unique-id>. Use this docker command to list the volumes and their labels:
docker volume ls --format "{{.Name}}:\t{{.Labels}}"
Notice I included the Labels property; this should help you locate the right volume, which should have a label that looks like vsch.local.repository=<your-repository-clone-url>. You can even use the filter mode of the previous command if you remember the exact URL used for cloning in the first place, like this:
docker volume ls --filter label=vsch.local.repository=<your-repository-clone-url>
If you still struggle to locate the exact volume, you can find more about the docker volume ls command in the Official docker documentation and also use docker volume inspect to obtain detailed information about volumes.
Once you know the name of the volume, open an empty folder on your local machine and create the necessary devcontainer file .devcontainer/devcontainer.json. Choose the image most suitable to your development environment, but in order to recover your work by performing a simple git commit, any image with git installed should do (even images that are not specifically designed to be devcontainers; here I am using Alpine because it occupies relatively little space).
Then set the workspaceFolder and workspaceMount variables to mount your volume in your new devcontainer, like this:
{
"image": "mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13",
"workspaceFolder": "/workspaces/<your-repository-name>",
"workspaceMount": "type=volume,source=<your-volume-name>,target=/worskpaces"
}
If you need something more specific there you can find exhaustive documentation about the possible devcontainer configuration in the devcontainer.json reference page.
You can now use VSCode git tools and even continue the work from where you left the last time you "persisted" your file contents.
This is, by the way, the only way I know to work with VSCode devcontainers if you are using Docker through TCP or SSH with a non-local context (a.k.a. the docker VM is not running on your local machine), since your local file system is not directly available to the docker machine.
If you look at the container log produced when you ask VSCode to spin up a devcontainer for you, you will find that the actual docker run command executed by the IDE is something along these lines:
docker run ... type=bind,source=<your-local-folder>,target=/workspaces/<your-local-folder-or-workspace-basename>,consistency=cached ...
meaning that if you omit the workspaceMount variable in devcontainer.json, VSCode will fill it in for you as if you had written this:
// file: .devcontainer/devcontainer.json
{
// ...
"workspaceFolder": "/worspaces/${localWorkspaceFolderBasename}",
"workspaceMount": "type=bind,source=${localWorkspaceFolder},target=/worspaces/${localWorkspaceFolderBasename},consistency=cached"}
// ...
}
Where ${localWorkspaceFolder} and ${localWorkspaceFolderBasename} are dynamic variables available in the VSCode devcontainer.json context.
Alternatively
If you just want to commit the changes and throw away the volume afterwards you can simply spin up a docker container with git installed (even the tiny Alpine linux one should do):
docker run --rm -it --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash
Then, if you are familiar with the git command line tool, you can git add and git commit all your modifications. Alternatively you can run the git commands directly instead of manually using a shell inside the container:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git commit -m 'Recovered from devcontainer'"
You can find a full list of devcontainers provided by MS in the VSCode devcontainers repository.
Devcontainers are an amazing tool to help you keep your environment clean and flexible. I hope this answer helped you solve your problem and expand your knowledge of this tool a bit.
Cheers
You can also use Remote-Container: Clone Repository in Container Volume... again.
The volume and your changes will still be there.

bash TAB completion does not work on centos 8

I run a centos 8 distro on docker and I would like to have bash TAB completion with dnf package manager. According to other posts, I did the following once my docker container is started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restart the container but there is still no bash completion. I also tried to source directly the bash completion file by doing:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need BASH Completion in a Docker container. The only time you should be manually connecting to a shell inside a Linux container is to troubleshoot why the process running in the container is behaving abnormally. In fact, some container design advice might even go as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you is due to the way in which Linux containers operate. A container is simply a namespaced process that is managed by the kernel installed on the host OS. This process cannot be modified or interrupted, or the container will be destroyed, since the process will be sent a SIGTERM. When you attempt to source the bash_completion.sh script, you are attempting to pass new configuration to your existing namespaced process managed by Docker.
If you really wanted to do this, the best way would be to create a new container image based on the original CentOS 8 base image, and from there install the bash-completion package and add an echo command that appends the source line to your user's .bashrc file.
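A hedged sketch of such a Dockerfile (the package names and sqlite come from the question; the base tag is an assumption, adjust to taste):
FROM centos:8
# install completion support in the image itself, not in a running container
RUN dnf install -y bash-completion sqlite && dnf clean all
# make sure interactive shells pick it up
RUN echo "source /etc/profile.d/bash_completion.sh" >> /root/.bashrc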
EDIT:
With regard to the additional questions asked by the OP in the comments of this answer, I have added more information below.
Why shouldn't I need bash completion in a container?
The reason you do not need bash completion in a container is that containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specifically configured criteria. Containers aren't meant to be used to create dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming to me because you are not supposed to be manually interacting with a container at all. This includes package installation. Instead, you should generate a new Container Image from the older Base Image and add additional RUN statements to the Dockerfile to update the system and install these desired packages.
Cannot believe it is not possible
It is possible if you create a new Dockerfile that purposely installs it on a new layer of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place to even get to a point where you could need something like bash completion!
Here is a great summary on the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run, and only run, processes.

Can I run aws-xray on the same ECS container?

I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services which serve only analytical/logging functions. I already have a logstash container I'm not happy about, and my feeling is that apps themselves should be able to do this sort of stuff.
While we have the Dockerhub image of the X-Ray Daemon, you can absolutely run the daemon in the same docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see link for an example of that), but I ended up just making a very simple script:
# start.sh
/usr/bin/xray &                      # start the X-Ray daemon in the background
$CATALINA_HOME/bin/catalina.sh run   # run Tomcat in the foreground so the container keeps running
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.

How am I supposed to use a Postgresql docker image/container?

I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official docker postgres 9.5 image as base to build my own (my Dockerfile only adds plpython on top of it, and installs a custom python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it on the running container. I did this using pgadmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the official debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once installed I ran (still from within the container) psql, created a table in the default postgres database, and populated it with 1 record. Then I exited the shell and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, then attached to it and finally ran psql again. I found everything is persisted since the first run. Postgresql is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists.
Correct. You named it (--name some-postgres) hence before starting a new one, the old one has to be deleted, e.g. docker rm -f some-postgres
So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
No, it is by no means normal for docker. Docker containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
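In other words, the ephemeral workflow for the command quoted above looks like this (a quick sketch; without a volume the data in the old container is lost, see the next point):
docker rm -f some-postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres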
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Yes, having separate containers for separate concerns is the good way to go. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (that is in particular where the data container comes into play).
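A minimal sketch of that data-container pattern (the container names are just examples): one container does nothing but own the volume, and the database container mounts its volumes:
# one-off container whose only job is to own /var/lib/postgresql/data
docker create -v /var/lib/postgresql/data --name some-postgres-data postgres /bin/true
# the actual database container borrows that volume
docker run --name some-postgres --volumes-from some-postgres-data -e POSTGRES_PASSWORD=mysecretpassword -d postgres
# some-postgres can now be removed, upgraded and recreated without losing the data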
Is there anything else I should pay attention to?
Once acquainted with the docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some gimmicks which can be configured through environment variables. With docker run you create a container. The container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, inside the parent Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything including volumes is lost with docker rm -v.
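Concretely, the two removal variants behave like this (a sketch, reusing the container name from the question):
docker rm some-postgres      # removes the container, keeps its anonymous volumes, so the data survives
docker rm -v some-postgres   # would remove the container together with its anonymous volumes, so the data is gone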
It's easier to show than to explain. See this readme with Docker commands on Github and read how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. Also, a data container is a cheap trick for being able to remove, rebuild and renew the container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those nasty docker run ... commands and to be able to manage multi-container setups and data containers.
Note: You need a blocking process in order to keep a container running! This command must either be set explicitly inside the Dockerfile (see CMD) or given at the end of the docker run command, e.g. docker run -d ... /usr/bin/myexamplecommand. If your command is non-blocking, e.g. /bin/bash, then the container will stop immediately after executing it.
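For example (a quick sketch of both cases):
docker run -d debian /bin/bash           # bash exits immediately without a TTY, so the container stops right away
docker run -d debian tail -f /dev/null   # tail blocks forever, so the container keeps running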