How to run Popper on HPC without sudo rights

I want to execute a Popper workflow on a Linux HPC (High-performance computing) cluster. I don’t have admin/sudo rights. I know that I should use Singularity instead of Docker because Singularity is designed to not need sudo to run.
However, singularity build needs sudo privileges unless it is executed in fakeroot/rootless mode.
This is what I have done in the HPC login node:
I installed Spack (0.15.4) and Singularity (3.6.1):
git clone --depth=1 https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack install singularity
spack load singularity
I installed Popper (2.7.0) in a virtual environment:
python3 -m venv ~/popper
~/popper/bin/pip install popper
I created an example workflow in ~/test/wf.yml:
steps:
- uses: "docker://alpine:3.11"
  args: ["echo", "Hello world!"]
- uses: "./my_image/"
  args: ["Hello number two!"]
With ~/test/my_image/Dockerfile:
FROM alpine:3.11
ENTRYPOINT ["echo"]
I tried to run the two steps of the Popper workflow in the login node:
$ cd ~/test
$ ~/popper/bin/popper run --engine singularity --file wf.yml 1
[1] singularity pull popper_1_4093d631.sif docker://alpine:3.11
[1] singularity run popper_1_4093d631.sif ['echo', 'Hello world!']
ERROR : Failed to create user namespace: user namespace disabled
ERROR: Step '1' failed ('1') !
$ ~/popper/bin/popper run --engine singularity --file wf.yml 2
[2] singularity build popper_2_4093d631.sif /home/bikfh/traylor/test/./my_image/
[sudo] password for traylor:
So both steps fail.
My questions:
For an image from Docker Hub: How do I enable “user namespace”?
For a custom image: How do I build an image without sudo and run the container?

For an image from Docker Hub: How do I enable “user namespace”?
I found that the user namespace feature needs to be already enabled on the host machine; you can check whether it is enabled as sketched below.
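A quick sketch of such a check (the sysctl paths differ between distributions, so treat these as assumptions to verify for your kernel):
cat /proc/sys/kernel/unprivileged_userns_clone   # Debian/Ubuntu kernels: 1 means enabled
cat /proc/sys/user/max_user_namespaces           # RHEL/CentOS kernels: nonzero means available
unshare --user --map-root-user echo "user namespaces work"   # direct functional test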
In the case of the cluster computer I am using (Frankfurt Goethe HLR), user namespaces are only enabled in the computation nodes, not the login node.
That’s why it didn’t work for me.
So I need to send the job with SLURM (here only the first step with a container from Docker Hub):
~/popper/bin/popper run --engine singularity --file wf.yml --config popper_config.yml 1
popper_config.yml defines the options for SLURM's sbatch (see the Popper docs). They depend on your cluster; in my case it looks like this:
resource_manager:
  name: slurm
  options:
    "1": # The default step ID is a number and needs quotes here.
      nodes: 1
      mem-per-cpu: 10 # MB
      ntasks: 1
      partition: test
      time: "00:01:00"
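Once submitted, the job can be monitored with the usual SLURM tooling, for example (where <jobid> is a placeholder for the ID that sbatch reports):
squeue -u "$USER"    # show the queued/running Popper step
sacct -j <jobid>     # inspect it after completion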
For a custom image: How do I build an image without sudo and run the container?
Trying to apply the same procedure to step 2, which has a custom Dockerfile, fails with this message:
FATAL: could not use fakeroot: no mapping entry found in /etc/subuid
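As far as I understand, fakeroot relies on subordinate UID/GID mappings that only an administrator can add. A small sketch for checking whether your user has any (the commented range is purely illustrative):
grep "$USER" /etc/subuid /etc/subgid
# An admin would need to add lines like: traylor:100000:65536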
I tried to create the .sif file (Singularity image) with Popper on another computer and copy it from ~/.cache/popper/singularity/... over to the cluster machine.
Unfortunately, Popper seems to clear that cache folder, so the .sif image doesn’t persist.
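One workaround I can sketch (not something the Popper docs prescribe, so verify the details): build the .sif through Sylabs' remote builder, which needs no local privileges, from a definition file equivalent to the Dockerfile, and keep the resulting image outside Popper's cache:
# my_image.def, roughly equivalent to the Dockerfile above:
#   Bootstrap: docker
#   From: alpine:3.11
#   %runscript
#       exec echo "$@"
singularity remote login   # requires a free Sylabs Cloud token
singularity build --remote my_image.sif my_image.def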

Related

requested access to the resource is denied [duplicate]

I am using Laravel 4.2 with Docker. I set it up locally and it worked without any problem, but when I try to set it up online using the same procedure, I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Do I need to create a repository at https://cloud.docker.com/, or do I need to run docker login on the command line?
After days of study I am still not able to figure out what the fix is in this case, or what the right steps are.
I have the complete code; I can paste parts of it here if needed.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'. I struggled with this until I realized the image no longer exists at https://hub.docker.com.
Just make sure to write the image name correctly! In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct docker name is:
FROM ubuntu:16.04
The message usually comes when you use the wrong image name. Check whether your image exists in the Docker repository with the correct tag. In my case, a typo was the cause:
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry, so I had to create a secret and then add the image pull secret to the deployment YAML as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
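For reference, the regcred secret itself can be created with kubectl, following the standard Kubernetes documentation; the angle-bracket values are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>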
November 2020 and later
If this error is new, and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
from CircleCI and other similar tools that use Docker Hub. Or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image:
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker Hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Turns out the image on Docker Hub had been moved to a different name, so I would suggest re-checking it on Docker Hub.
I solved this by adding the language (base image) name at the front of the image reference:
FROM python:3.7-alpine
I had the same error message but for a totally different reason.
Being new to docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But, the run command wants the id of an image, not a container.
To start a container, docker wants
docker start -i <crypticalId>
In my case I was using a custom image with the Docker daemon baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
Because the custom image was only present locally after I'd explicitly built it in the minikube docker environment.
I had this because I inadvertently removed the AS tag from my first image:
ex:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full path ("myrepo/myimage") to the docker-compose YAML,
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might indicate something else entirely. In my case I had defined another Docker image elsewhere, from which the current one inherited its settings (docker-compose.yml):
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall, the previously built image was gone, so docker-compose up could not find it. I had to rebuild it first with:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case, you might similarly have defined your own PHP image that needs rebuilding.
If the repository is private, you need permission to download from it. You have two options: run the docker login command, or reuse the credentials file (~/.docker/config.json) that is generated once you log in.
If you have more than one stage in your Docker build, read this solution: the error message is completely misleading here.
In a multi-stage Dockerfile, if you want to copy data from an earlier stage into a later one, you must label the earlier stage (e.g. build) and access it by that label:
# stage 1
FROM <image> AS build
...
# stage 2
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data. So you'll have to reauthenticate with your registry provider. With AWS for example:
aws ecr get-login --region us-west-2 --no-include-email
Then copy and paste the resulting "docker login ..." command to reauthenticate Docker.
Source: Amazon ECR Registries
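Note that aws ecr get-login is removed in AWS CLI v2; a hedged equivalent using the newer get-login-password (account ID and region are placeholders):
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com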
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to agree to terms separately on their site, as is the case with Oracle's Docker registry. It lets you run docker login fine, but pulling the image still won't work until you go to their site and agree to their terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect; on Docker Hub the image name is mongo.
If you don't have an image with that name locally, Docker will try to pull it from Docker Hub, and there is no image called mongodb there.
Or simply try docker login.
If you are using multiple Dockerfiles, you should not forget to run build for all of them. That was my case.
I had to run docker pull first, then docker-compose up again, and then it worked:
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running docker pull scrapinghub/splash multiple times in PowerShell eventually solved the issue for me.
If this happens with AWS EC2 and ECR, it may be due to a naming issue (common with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the image URI, which is available in the ECR console row itself via "Copy URI":
docker pull Image_URI
I saw this message and thought something was wrong with my Docker authentication. However, I realized that Docker Hub allows only one private repository on the free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
I got the same problem, but nothing worked. Then I understood that I needed to run the .sh (.ps1) build script first, before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
I had to first run docker-build.sh on a Unix (Mac) machine, or docker-build.ps1 on Windows:
sh docker-build.sh
In my case, that builds the image. Only after the image has been built can I run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f $DIR/Dockerfile -t $ARCH/$APP:$VERSION .
I had misspelled nginx as nignx in my Dockerfile.
In my case the solution was to recreate the Dockerfile through Visual Studio, and everything worked perfectly.
I hit the same issue. I solved it by logging in:
docker login -u your_user_name
I was then prompted for my Docker Hub password, and the rest of the commands worked perfectly after a successful login.
Someone might come across this error for different reasons than those already presented, so let me share mine:
I got the same error when using Docker multi-stage builds (multiple FROM <> AS <> statements) and forgot to remove a leftover COPY --from=<> <>.
After removing that COPY, it worked fine.
Exceeded Docker Hub's Limit on Free Repos:
Despite first executing:
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the webgui in Settings > Visibility Settings I remarked:
Using 2 of 1 private repositories.
Which told me that I had exceeded the limit on Docker Hub's free account limits. However, removing a previous image didn't clear the error...
The Fix:
Indeed, the error message in my case was a red herring; it was not related to authentication issues at all.
Deleting just the images exceeding the allowed limit did NOT clear the error, however!
To get past the error, you need to delete ALL the images in your free Docker Hub account, then run a new build and push the image to your account.
Your pull command will now succeed.

Podman images not showing with podman image ls

I am trying to set up a build server in a Red Hat Enterprise Linux 8 (CentOS 8) virtual machine.
I installed podman by running sudo dnf install -y @container-tools
I then ran sudo podman pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim to pull a container image:
Trying to pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim...Getting image source signatures
Copying blob e936bd534ffb done
Copying blob caf64655bcbb done
Copying blob 4156e490f05f done
Copying blob 68ced04f60ab done
Copying blob 7064c3d93b4a done
Copying config e2cd20adb1 done
Writing manifest to image destination
Storing signatures
e2cd20adb1292ef24ca70de7abaddaadd57a5c932d3852b972e43b6f05a03dea
This looks successful to me. And if I run it again, I get told that the layers "already exists". But then I run:
podman image ls
and I get an empty list back:
REPOSITORY TAG IMAGE ID CREATED SIZE
I also tried the following commands to get a list:
podman image ls -a
podman image list
podman image list -a
podman images
podman images ls
podman images ls -a
podman images list
podman images list -a
They all give an empty list.
How can I see the container image that I pulled down?
Update: I ran sudo podman run --rm --name=linuxconfig-test -p 80:80 httpd and (on another machine) browsed to the IP address of my Linux machine, and "It works!" was shown. So podman is working, at least in part.
Unlike Docker, Podman stores images in the home directory of the user. The default path is ~/.local/share/containers/storage, and it can be verified by running podman info. Since you executed podman pull as root, the pulled image was stored in the root user's home directory. This is why no images are listed when you run podman image ls without sudo.
The main idea behind Podman is that it can run entirely in user mode, without connecting to a privileged daemon. Ideally, all podman commands should be run without sudo.
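A quick way to compare the two storage locations (a sketch; the key casing may vary between Podman versions):
podman info | grep -i graphroot        # storage path for the current user
sudo podman info | grep -i graphroot   # storage path for root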
Turns out I had to run it with sudo. I ran:
sudo podman image ls
and it returned the list of container images.
You can use the --root option to give the path to where the images are stored, in case you really need to run as root. An important part of using Podman, though, is that you do not need to run as root or with sudo.
Note: the moment you run this, Podman will change the owner of certain folders and files in the overlay location, and when you later run without sudo you will need to chown them back. So this is not recommended:
sudo podman images --root /home/xxx/.local/share/containers/storage

keda func deploy from a dir which contains spaces is failing

I am using Visual Studio Code with Azure Functions Core Tools to deploy a container to a K8s cluster which has KEDA installed, but I am seeing the Docker error below. The error occurs because docker build is run without double quotes around the path.
$ func kubernetes deploy --name bollaservicebusfunc --registry sbolladockerhub --python
Running 'docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space'....done
Error running docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space.
output:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
(.venv)
20835918#CROC1LWPF1S99JJ MINGW64 ~/work/welcome to space (master)
I know there is a known bug ("Spaces in directory"), but I am posting to see if there is a workaround. This is important, as I have everything in "OneDrive - Company Name", which has spaces in it.
Looking into the code for func, you could specify --image-name instead of --registry, which seems to skip building the container.
You would have to build your Docker container manually using the same command shown in the output, and then pass the value you used for the -t argument of the docker command as --image-name to the func command.
Also, since func would then not push your Docker container either, make sure to push it before running the func command (see the sketch below).
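A sketch of the full workaround, reusing the names from the question (verify the flags against your func version):
docker build -t sbolladockerhub/bollaservicebusfunc "C:\Users\20835918\work\welcome to space"
docker push sbolladockerhub/bollaservicebusfunc
func kubernetes deploy --name bollaservicebusfunc --image-name sbolladockerhub/bollaservicebusfunc --python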

error with docker container with postgresql

I am unable to install PostgreSQL on an existing Jenkins Docker image. Below are the steps I have followed:
Step 1: Download the Jenkins image and name the volume container jenkins-home, as described in the article below:
http://www.catosplace.net/blog/2015/02/11/running-jenkins-in-docker-containers/
Using the command below, download the image and specify the volume:
docker create -v /var/jenkins_home --name jenkins-home jenkins
Step 2: Update the Dockerfile (see below); I added PostgreSQL installation commands from postgresql_dockerfile to the Dockerfile.
Step 3: Build the Docker image:
docker build -t ci_jenkins_docker .
Step 4: Now run the ci_jenkins_docker image:
docker run -p 8085:8080 --volumes-from jenkins-home ci_jenkins_docker
I get the error message below after running the command above:
touch : cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied.
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions ?
What am I doing wrong ?
When you mount an external volume, that happens at run time, and the ownership and permissions of the mounted volume override whatever was set earlier in the image. To make the jenkins_home directory writable, you will probably have to change the permissions in an entrypoint script.
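A minimal sketch of such an entrypoint, assuming the container starts as root and that /usr/local/bin/jenkins.sh is the stock entrypoint of the Jenkins base image (both assumptions to verify against your image):
#!/bin/sh
# entrypoint.sh (hypothetical): fix volume ownership, then drop privileges.
chown -R jenkins:jenkins /var/jenkins_home
# Hand off to the stock Jenkins entrypoint as the jenkins user.
exec su jenkins -s /bin/sh -c /usr/local/bin/jenkins.sh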

Can't clone public repo from within Dockerfile

I have the following line in my Dockerfile:
RUN git clone https://github.com/assafg/youtube-remote.git ./youtube-remote
When executing sudo docker build -t 'yremote' .
I get the following error:
Cloning into './youtube-remote'... fatal: unable to access
'https://github.com/assafg/youtube-remote.git/': Could not resolve
host: github.com The command '/bin/sh -c git clone
https://github.com/assafg/youtube-remote.git ./youtube-remote'
returned a non-zero code: 128
Running the clone command from the command line works fine.
This can happen if your container can't connect to the internet, possibly because it was started with an unusual networking option. Run this command to check internet connectivity from a default container (note the quotes, so that the chained commands run inside the container, and the apt-get update that Ubuntu images need before installing):
docker run ubuntu bash -c "apt-get update && apt-get install -y git && git clone https://github.com/assafg/youtube-remote.git ./youtube-remote"
If that container successfully pulls down the repo, it probably means the first container has a networking problem. Try to restart, or change networking settings.
Docker networking only recently became a first-class citizen in the Docker ecosystem, and it is a fast-moving project; this advice applies to v1.8.
This is not a very scientific answer, but sometimes restarting Docker helps, especially in cases connected with Docker networking.
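A hedged example of that restart on a systemd-based host (the service name can differ by distribution):
sudo systemctl restart docker    # restart the Docker daemon
sudo docker build -t 'yremote' .   # then retry the build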