Copying a folder from a running container using podman cp

I have a Jenkins container running inside a Linux instance using podman.
I want to take a backup of the jenkins_home folder from within the container to my Linux server.
The container name is jenkins.
I am using the following command:
podman cp jenkins:/var/jenkins_home /home/ec2-user/
But I am facing the following error:
Error: 1 error occurred:
* error determining run uid: user: unknown user error looking up user "jenkins"
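One common workaround, sketched under two assumptions not stated in the question (the container is running, and its image ships a tar binary): stream the directory out through podman exec instead, which sidesteps the host-side user lookup that podman cp performs.

# Copy /var/jenkins_home out of the running container without podman cp:
# tar the directory inside the container and unpack it on the host.
podman exec jenkins tar -C /var -cf - jenkins_home | tar -C /home/ec2-user -xf -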

Related

Path expansion for volume overlays uses root with Rancher Desktop (OSX)

We're trying to use Rancher Desktop and nerdctl to bring up some compose stacks. The issue we've run into is that when you have a mount similar to the following, nerdctl expands either ${HOME} or ~/ to /root, even though I'm running the command as my normal user.
volumes:
- ${HOME}/.config/conf:/opt/confpath
FATA[0000] failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/root/.config/conf" to rootfs at "/opt/confpath" caused: stat /root/.config/conf: no such file or directory: unknown
FATA[0000] error while creating container u42-docs: exit status 1
Is there a way to prevent this expansion to the root user's home directory, and instead have it use my own user's?
Rancher Desktop was installed via brew install --cask rancher
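A possible workaround, assuming nerdctl compose honors a .env file placed next to the compose file (standard compose behavior, but an assumption here): write the real home directory into .env so ${HOME} never has to be expanded inside the VM.

# Pin HOME to the real macOS home directory before bringing the stack up;
# the shell expands $HOME at write time, so .env holds the literal path.
echo "HOME=$HOME" > .env
nerdctl compose up -d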

Why does buildah fail running inside a kubernetes container?

Hey, I'm creating a GitLab pipeline and I have a runner in Kubernetes.
In my pipeline I am trying to build the application as a container.
I'm building the container with buildah, which runs inside a Kubernetes pod. While the pipeline is running, kubectl get pods --all-namespaces shows the buildah pod:
NAMESPACE       NAME                                             READY   STATUS    RESTARTS   AGE
gitlab-runner   runner-wyplq6-h-project-6157-concurrent-0qc9ns   2/2     Running   0          7s
The pipeline runs
buildah login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY} and
buildah bud -t ${CI_REGISTRY_IMAGE}/${CI_COMMIT_BRANCH}:${CI_COMMIT_SHA} .
with the Dockerfile using FROM parity/parity:v2.5.13-stable.
buildah bud, however, fails and prints:
Login Succeeded!
STEP 1: FROM parity/parity:v2.5.13-stable
Getting image source signatures
Copying blob sha256:d1983a67e104e801fceb1850a375a71fe6b62636ba7a8403d9644f308a6a43f9
Copying blob sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83
Copying blob sha256:49ac0bbe6c8eeb959337b336ceaa5c3bbbae81e316025f9b94ede453540f2377
Copying blob sha256:72d77d7d5e84353d77d8a8f97d250120afe3650b85010137961560bce3a327d5
Copying blob sha256:1a0f3a523f04f61db942018321ae122f90d8e3303e243b005e8de9817daf7028
Copying blob sha256:4aae9d2bd9a7a79a688ccf753f0fa9bed5ae66ab16041380e595a077e1772b25
Copying blob sha256:8326361ddc6b9703a60c5675d1e9cc4b05dbe17473f8562c51b78a1f6507d838
Copying blob sha256:92c90097dde63c8b1a68710dc31fb8b9256388ee291d487299221dae16070c4a
Copying config sha256:36be05aeb6426b5615e2d6b71c9590dbc4a4d03ae7bcfa53edefdaeef28d3f41
Writing manifest to image destination
Storing signatures
time="2022-02-08T10:40:15Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: permission denied"
error creating build container: The following failures happened while trying to pull image specified by "parity/parity:v2.5.13-stable" based on search registries in /etc/containers/registries.conf:
* "localhost/parity/parity:v2.5.13-stable": Error initializing source docker://localhost/parity/parity:v2.5.13-stable: pinging docker registry returned: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "docker.io/parity/parity:v2.5.13-stable": Error committing the finished image: error adding layer with blob "sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83": ApplyLayer exit status 1 stdout: stderr: permission denied
...
I am thinking of two possible causes:
First, the container is built and then stored inside the Kubernetes pod before being transferred to the container registry. Since the pod does not have any persistent storage, the write fails, hence this error.
Second, the container is built and pushed to the container registry, but for some reason buildah has no permission to push and fails.
Which one is it? And how do I fix it?
If it is the first reason, do I need to add persistent volume rights to the service account running the pod?
The GitLab runner needs root privileges; add this line to the [runners.kubernetes] section of the GitLab runner configuration:
privileged = true
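For context, a minimal sketch of where that line lives in the runner's config.toml; the section names follow the GitLab runner Kubernetes executor docs, and the runner name is illustrative.

# config.toml of the GitLab runner
[[runners]]
  name = "kubernetes-runner"     # illustrative
  executor = "kubernetes"
  [runners.kubernetes]
    privileged = true            # lets buildah apply image layers inside the pod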

Unable to run unsupported-workflow: Error 1193: Unknown system variable 'transaction_isolation'

When running the unsupported-workflow command on Cadence 16.1 against MySQL 5.7 (Aurora 2.07.2), I'm encountering the following error:
Error: connect to SQL failed
Error Details: Error 1193: Unknown system variable 'transaction_isolation'
I've set $MYSQL_TX_ISOLATION_COMPAT=true. Are there other settings I need to modify in order for this to run?
This was just fixed in https://github.com/uber/cadence/pull/4226 but is not in a release yet.
You can use the fix either by building the tool yourself or by using the Docker image:
update the Docker image via docker pull ubercadence/cli:master
run the command docker run --rm ubercadence/cli:master --address <> adm db unsupported-workflow --conn_attrs tx_isolation=READ-COMMITTED --db_type mysql --db_address ...
For the SQL tool:
cadence-sql-tool --connect-attributes tx_isolation=READ-COMMITTED ...

Map host user into postgres container

I was trying to run postgres 12.0 alpine with an arbitrary user in an attempt to have easier access to the mounted drives. However, I get the following error. I was following the instructions from the official Docker Hub page here:
docker run -it --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro postgres:12.0-alpine
I get: initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
Then I tried initializing the target directory separately, which needs a restart in between. This also fails with the same error, but this time the container starts as the root user.
Has anyone had success running the postgres alpine container with an arbitrary user?
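A sketch of the usual fix from the arbitrary-user notes on the postgres Docker Hub page: the image's default anonymous volume for /var/lib/postgresql/data is owned by root, so initdb cannot change its permissions; bind-mounting a host directory that your user owns avoids that (the host path below is an assumption).

# Give initdb a data directory the arbitrary user actually owns.
mkdir -p "$HOME/pgdata"
docker run -it --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v "$HOME/pgdata":/var/lib/postgresql/data \
  postgres:12.0-alpine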

ImagePullBackOff Error

I'm using Minikube on a Windows machine. On the same machine, I also have docker-machine set up.
I've pointed the docker client towards Minikube's docker environment. This way, I can see the Docker environment inside Kubernetes.
I can build docker images and run docker containers from the Minikube VM without issues. However, when I try to start any docker container via kubectl (from PowerShell), it fails to start, apparently because kubectl can't find the docker image, giving the following error:
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If docker run can access the image, why can't kubectl?
Here is my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
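For reference, a sketch of how such an image is typically built against Minikube's Docker daemon from PowerShell; the tag is illustrative, and the docker-env step is the standard Minikube mechanism the question describes as pointing the docker client at Minikube.

# Point the local docker client at Minikube's daemon, then build with a concrete tag.
minikube docker-env | Invoke-Expression
docker build -t myapp:1.0.0 .
docker run -p 9002:9002 myapp:1.0.0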
Make sure the image path in your yaml is correct. That image should exist on your local machine, and it should be tagged with a version number, not "latest".
Have this in your deployment yaml:
image: redis:1.0.48
Run docker images to see the list of images on your machine.