Docker container can't download WordPress installation files automatically - docker-compose

I have the following compose file:
version: "3"
services:
  wordpress:
    image: visiblevc/wordpress
    privileged: true
    network_mode: bridge # should help?
    # required for mounting bindfs
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    # required on certain cloud hosts
    security_opt:
      - apparmor:unconfined
    ports:
      - 8080:80
      - 8081:443
    ...WP parameters omitted for brevity
When I run it, I get the following error:
When I use the CLI and curl the same URL manually, I can download the file just fine.
What should I change in the yaml to make it work automatically?
UPDATE:
As curl works manually, I don't think it is a DNS resolution error, but to make sure, I modified the resolv.conf file inside the container to use a valid nameserver address.
Unfortunately, it didn't solve the issue.
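(For reference, had DNS been the culprit, the compose-level way to override resolvers would be the dns: option on the service rather than editing resolv.conf inside the container; this is only a minimal sketch, using public resolvers as an example:)
services:
  wordpress:
    dns:
      - 8.8.8.8
      - 1.1.1.1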

Try specifying the image like this:
image: visiblevc/wordpress:latest
You need to specify the tag.

We checked the initialization shell script of the Docker image.
We couldn't determine why name resolution fails when wp core download is called, so we created a workaround.
We added a pre-init section at the start of the script that runs the explicit curl commands which worked from the CLI. As a side effect, the necessary files are already downloaded, so the installer no longer needs to fetch them and the potential resolve timeout is avoided.
Note: we needed several curl calls against the API, as resolution kept failing after only one call.
This is how the new script starts:
h1 'Pre-init'
# Warm up the WordPress.org API and pre-download the files the installer needs,
# so the later wp core download step does not have to resolve the hosts itself.
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
curl "https://downloads.wordpress.org/release/wordpress-5.9.3.zip" --output wordpress-5.9.3.zip
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
curl "https://downloads.wordpress.org/plugin/classic-editor.1.6.2.zip" --output classic-editor.1.6.2.zip
curl "https://downloads.wordpress.org/translation/core/5.9.3/hu_HU.zip" --output hu_HU.zip
curl "https://api.wordpress.org/core/version-check/1.7/?locale=en_US" >/dev/null
h1 'Begin WordPress Installation'
Using this script, the installation succeeded.

requested access to the resource is denied [duplicate]

I am using Laravel 4.2 with Docker. I set it up locally and it worked without any problem, but when I try to set it up online using the same procedure, I get this error:
pull access denied for <projectname>/php, repository does not exist or may require 'docker login'
Is this about creating a repository at https://cloud.docker.com/, or do I need to run docker login on the command line?
After days of study I am still not able to figure out what the fix is in this case and what the right steps are.
I have the complete code and can paste it here if certain parts need checking.
Please note that the error message from Docker is misleading.
$ docker build deploy/.
Sending build context to Docker daemon 5.632kB
Step 1/16 : FROM rhel7:latest
pull access denied for rhel7, repository does not exist or may require 'docker login'
It says that it may require 'docker login'.
I struggled with this. I realized the image does not exist at https://hub.docker.com any more.
Just make sure to write the image name correctly!
In my case, I wrote (notice the extra 'u'):
FROM ubunutu:16.04
The correct docker name is:
FROM ubuntu:16.04
The message usually appears when you use the wrong image name. Please check that your image exists in the Docker registry with the correct tag.
This helped me; in my case I had misspelled nginx as ngnix:
docker run -d -p 80:80 --name ngnix ngnix:latest
Unable to find image 'ngnix:latest' locally
docker: Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
$ docker run -d -p 80:80 --name nginx nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
I had the same issue. In my case it was a private registry. So I had to create a secret as shown here
and then we have to add the image pull secret to the deployment.yaml file as shown below.
pods/private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
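For reference, a minimal sketch of what the regcred secret referenced above might look like (it can be created with kubectl create secret docker-registry regcred --docker-server=... --docker-username=... --docker-password=..., or written by hand from your local ~/.docker/config.json; the values below are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded contents of ~/.docker/config.json>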
November 2020 and later
If this error is new and pulling from Docker Hub worked in the past, note that Docker Hub introduced rate limiting in November 2020.
You will frequently see messages like:
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
From Circle CI and other similar tools that use Docker Hub. Or:
Error response from daemon: pull access denied for cimg/mongo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
You'll need to specify the credentials used to fetch the image:
For CircleCI users:
- image: circleci/mongo:4.4.2
  # Needed to pull down Mongo images from Docker Hub
  # Get from https://hub.docker.com/
  # Set up at https://app.circleci.com/pipelines/github/org/sapp
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
I had the same issue
pull access denied for microsoft/mmsql-server-linux, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
It turns out the image on Docker Hub had been moved to a different name.
So I would suggest you double-check the name on Docker Hub.
I solved this by putting the language image name at the front of the image reference:
FROM python:3.7-alpine
I had the same error message but for a totally different reason.
Being new to docker, I issued
docker run -it <crypticalId>
where <crypticalId> was the id of my newly created container.
But, the run command wants the id of an image, not a container.
To start a container, docker wants
docker start -i <crypticalId>
In my case I was using a custom image with the Docker daemon baked into Minikube on my local machine.
I had specified the pull policy incorrectly:
imagePullPolicy: Always
But it should have been:
imagePullPolicy: IfNotPresent
Because the custom image was only present locally after I'd explicitly built it in the Minikube Docker environment.
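(In practice that means building the image against Minikube's Docker daemon, e.g. by running eval $(minikube docker-env) in the shell before docker build, so the image ends up where the kubelet looks for it.)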
I had this because I inadvertently removed the AS tag from my first image:
ex:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
should have been:
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64 AS installer
...
.. etc ...
...
FROM mcr.microsoft.com/windows/servercore:1607-KB4546850-amd64
COPY --from=installer ["/dotnet", "/Program Files/dotnet"]
... etc ...
I had the same issue when working with docker-compose. In my case it was an Amazon AWS ECR private registry. It seems to be a bug in docker-compose:
https://github.com/docker/compose/issues/1622#issuecomment-162988389
After adding the full path "myrepo/myimage" to the docker-compose YAML,
image: xxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/myrepo:myimage
it was all fine.
This error message might possibly indicate something else.
In my case I had defined another Docker image elsewhere, from which the current one inherited its settings (docker-compose.yml):
FROM my_own_image:latest
The error message I got:
qohelet$ docker-compose up
Building web
Step 1/22 : FROM my_own_image:latest
ERROR: Service 'web' failed to build: pull access denied for my_own_image, repository does not exist or may require 'docker login'
Due to a reinstall, the previously built image was gone, so docker-compose up could not find it until I rebuilt it with this command:
sudo docker build -t my_own_image:latest -f MyOwnImage.Dockerfile .
In your specific case you might have defined your own php-docker.
If the repository is private you have to grant yourself access to download from it. You have two options: run the docker login command, or place the file generated when you log in into ~/.docker/config.json.
If you have two or more stages in the Docker build process, read this solution:
This error message is completely misleading.
If you have a multi-stage Dockerfile and want to copy some data from the first stage to the second, you must label the first stage (e.g. build) and access it by that label:
# stage (1)
FROM <image> AS build
...
# stage (2)
FROM <image>
COPY --from=build /sourceDir /destinationDir
Docker might have lost the authentication data. So you'll have to reauthenticate with your registry provider. With AWS for example:
aws ecr get-login --region us-west-2 --no-include-email
Then copy and paste the resulting "docker login ..." command to authenticate Docker.
Source: Amazon ECR Registries
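(Note: get-login is from the older AWS CLI; on AWS CLI v2 the equivalent is aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com, with your own account ID filled in.)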
If you're downloading from somewhere other than your own registry or Docker Hub, you might have to accept a separate agreement of terms on their site, as is the case with Oracle's Docker registry. It lets you run docker login just fine, but pulling the image still won't work until you go to their site and agree to their terms.
Make sure the image exists on Docker Hub. In my case, I was trying to pull MongoDB using the command docker run mongodb, which is incorrect. On Docker Hub, the image name is mongo.
If you don't have an image with that name locally, Docker will try to pull it from Docker Hub, but no such image exists there.
Or simply try "docker login".
If you are using multiple Dockerfiles, don't forget to run the build for all of them. That was my case.
I had to run docker pull first, then running docker-compose up again and then it worked.
docker pull index.docker.io/youruser/yourrepo:latest
Try this in your docker-compose.yml file
image: php:rc-zts-alpine
Running the command docker pull scrapinghub/splash multiple times in PowerShell solved the issue for me.
If it was caused by AWS EC2 and ECR due to a naming issue (happens with beginners!):
Error response from daemon: pull access denied for my-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
When using docker pull, use the image URI of the image, which is available in the ECR row itself as Copy URI:
docker pull Image_URI
I have seen this message and thought something was wrong with my Docker authentication. However, I realized that Docker Hub only allows one private repository on the free plan. So it is quite possible that you are trying to pull your private repository and see this error because you have not upgraded your plan.
I got the same problem and nothing worked. Then I understood that I needed to run the .sh (.ps1) build script before running docker-compose.
So I have the following files:
docker-compose.yml
docker-build.sh
docker-build.ps1
Dockerfile
And I had to first run docker-build.sh on a Unix (Mac) machine, or docker-build.ps1 on Windows:
sh docker-build.sh
In my case, it builds the image.
Only then, after the image has been built, can I run:
docker-compose up --build
For reference, here is my docker-compose file:
version: '3.8'
services:
  api-service:
    image: x86_64/prediction-service:0.8.1
    container_name: api-service
    expose:
      - 8060
    ports:
      - "8060:80"
And here is docker-build.sh:
VERSION="0.8.1"
ARCH="x86_64"
APP="prediction-service"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
docker build -f $DIR/Dockerfile -t $ARCH/$APP:$VERSION .
I had misspelled nginx as nignx in my Dockerfile.
In my case the solution was to re-create the Dockerfile through Visual Studio, and everything worked perfectly.
I had the same issue.
I solved it by logging in:
docker login -u your_user_name
Then I was prompted to enter my Docker Hub password.
The rest of the commands worked perfectly after a successful login.
Someone might come across the same error for different reasons than what is already presented, so let me share:
I got the same error when using Docker multi-stage builds (multiple FROM <image> AS <stage> lines).
I had forgotten to remove one COPY --from=<stage> <path> instruction.
After removing that COPY then it worked fine.
Exceeded Docker Hub's Limit on Free Repos:
Despite first executing:
docker login -u <dockerhub uname>
and "Login Succeeded" being returned, I received the error in this question.
In the web GUI under Settings > Visibility Settings I noticed:
Using 2 of 1 private repositories.
This told me that I had exceeded Docker Hub's free-account limit. However, removing a previous image didn't clear the error...
The Fix:
Indeed, the error message in my case was a red herring; it had nothing to do with authentication.
However, deleting just the images exceeding the allowed limit did NOT clear the error!
To get past the error you need to delete ALL the images in your FREE Docker Hub account, then run a new build pushing the image to your account.
Your pull command will now succeed.

Helm Chart - Camunda

Good Morning.
I'm currently using a Helm chart to deploy Camunda inside an OpenShift namespace/cluster.
For your information, Camunda has a default process called "Invoice", and that process is responsible for creating a default user called "demo".
I would like to avoid that user creation. I was able to do this through Docker with the following command:
docker run -d --name camunda -p 8080:8080 \
  -v /tmp/empty:/camunda/webapps/camunda-invoice \
  camunda/camunda-bpm-platform:latest
But now my Helm chart uses a custom values.yaml that references the Camunda image and then issues a command to start it:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
So is it possible to get the same behavior as the docker command shown above, i.e. to empty the "webapps" directory after calling camunda.sh?
I know that I can pass the "--webapps" argument through args: [ ], but the issue is that it removes the "tasklist" and "cockpit" web apps that allow users to access the Camunda UI.
Thank you everyone.
Have a nice day!
EDIT:
While speaking with the Camunda team, I learned that I can pass the "--webapps --swaggerui --rest" arguments in order to start the application without the default BPMN process (Invoice).
So I'm currently trying to use multiple arguments in my Helm chart values.yaml like this:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ["--webapps", "--rest", "--swaggerui"]
Unfortunately, it's not working this way. What am I doing wrong?
If I send just one argument like "--webapps", it reads the arguments and creates the container.
But if I send multiple arguments, like in the example shown above, it just doesn't create the container.
Am I doing something wrong?
The different start arguments for the Camunda 7 RUN distribution are documented here: https://docs.camunda.org/manual/7.18/user-guide/camunda-bpm-run/#start-script-arguments
Here is a Helm values file example using these parameters:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ['--production', '--webapps', '--rest', '--swaggerui']
extraEnvs:
  - name: DB_VALIDATE_ON_BORROW
    value: "false"

Datajoint LabBook - how to change ports

I am running DataJoint LabBook through the provided Docker container (https://datajoint.github.io/datajoint-labbook/user.html#installation) and wondered whether there is a way to move it away from the default port (80?). I am not sure I understand the instructions in the .yaml (docker-compose-deploy.yaml): it seems to me that there is a pharus endpoint (5000) and then there are two port definitions (443:443, 80:80) further down, and I am not sure what those refer to.
Yes, you can move the DataJoint LabBook service to a different port, however, a few changes will be necessary for it to function properly.
TL;DR
Assuming that you are accessing DataJoint LabBook locally, follow these steps:
Add the line 127.0.0.1 fakeservices.datajoint.io to your hosts file. Verify the hosts file location in your file system.
Modify the ports configuration in docker-compose-deploy.yaml as:
ports:
  - "3000:443" # replace 3000 with the port of your choosing
  #- "80:80"   # disables HTTP -> HTTPS redirect
Navigate in your Google Chrome browser to https://fakeservices.datajoint.io:3000
Detailed Explanation
Let me first speak a bit on the architecture and then describe the relevant changes as we go along.
Below is the Docker Compose file presented in the documentation. I'll make the assumption that you are attempting to run this locally.
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml pull
# PHARUS_VERSION=0.1.0 DJLABBOOK_VERSION=0.1.0 docker-compose -f docker-compose-deploy.yaml up -d
#
# Intended for production deployment.
# Note: You must run both commands above for minimal outage.
# Make sure to add an entry into your /etc/hosts file as `127.0.0.1 fakeservices.datajoint.io`
# This serves as an alias for the domain to resolve locally.
# With this config and the configuration below in NGINX, you should be able to verify it is
# running properly by navigating in your browser to `https://fakeservices.datajoint.io`.
# If you don't update your hosts file, you will still have access at `https://localhost`
# however it should simply display 'Not secure' since the cert will be invalid.
version: "2.4"
x-net: &net
  networks:
    - main
services:
  pharus:
    <<: *net
    image: datajoint/pharus:${PHARUS_VERSION}
    environment:
      - PHARUS_PORT=5000
  fakeservices.datajoint.io:
    <<: *net
    image: datajoint/nginx:v0.0.16
    environment:
      - ADD_zlabbook_TYPE=STATIC
      - ADD_zlabbook_PREFIX=/
      - ADD_pharus_TYPE=REST
      - ADD_pharus_ENDPOINT=pharus:5000
      - ADD_pharus_PREFIX=/api
      - HTTPS_PASSTHRU=TRUE
    entrypoint: sh
    command:
      - -c
      - |
        rm -R /usr/share/nginx/html
        curl -L $$(echo "https://github.com/datajoint/datajoint-labbook/releases/download/\
        ${DJLABBOOK_VERSION}/static-djlabbook-${DJLABBOOK_VERSION}.zip" | tr -d '\n' | \
        tr -d '\t') -o static.zip
        unzip static.zip -d /usr/share/nginx
        mv /usr/share/nginx/build /usr/share/nginx/html
        rm static.zip
        /entrypoint.sh
    ports:
      - "443:443"
      - "80:80"
    depends_on:
      pharus:
        condition: service_healthy
networks:
  main:
First, the Note in the header comment above is important and seems to have been missed in the DataJoint LabBook documentation (I've filed this issue to update it). Make sure to follow the instruction in the Note as 'secure' access is required from pharus (more on this below).
From the Docker Compose file, you will note 2 services:
pharus - A DataJoint REST API backend service. This service is configured to listen on port 5000, however, it is not actually exposed to the host. This means that it will not conflict and does not require any change as it is entirely contained within a local, virtual docker network.
fakeservices.datajoint.io - A proxying service that is exposed to the host and thus accessible locally and publicly against the host. Its primary purpose is to either:
a) forward requests beginning with /api to pharus, or
b) resolve other requests to the DataJoint LabBook GUI.
DataJoint LabBook's GUI is a static web app, which means it can be served both insecurely (HTTP, typically port 80) and securely (HTTPS, typically port 443). Because of the secure requirement from pharus, requests made to port 80 are simply redirected to 443; port 80 is only exposed for convenience. Therefore, if we want to move DataJoint LabBook to a new port, we simply change the mapping of 443 to a new port on the host and disable the 80 -> 443 redirect. The port update would look like so:
ports:
  - "3000:443" # replace 3000 with the port of your choosing
  #- "80:80"   # disables HTTP -> HTTPS redirect
Finally, after configuring and bringing up the services, you should be able to confirm the port change by navigating to https://fakeservices.datajoint.io:3000 in your Google Chrome browser.

Failing to connect to a Postgres Container with psycopg2?

I have a docker-compose file that looks like this
version: "3.7"
services:
  app:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: app.Dockerfile
    volumes:
      - ${HOST_SAVE_DIRC}:${CONTAINER_SAVE_DIRC}
    depends_on:
      - postgres
  postgres:
    image: 'postgres'
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST_AUTH_METHOD=trust
    restart: always
    expose:
      - "5432"
where variables like POSTGRES_USER are entries from an env file. app.Dockerfile looks like:
FROM python:3.8.3-slim-buster
COPY src /src/
COPY init.sql .
COPY .env .
COPY run.sh run.sh
COPY requirements.txt .
RUN ls -a
RUN pip install --no-cache-dir -r requirements.txt
The containers are created, then the user is logged into the app container with the main function of the program being called; this is when the database calls are made.
From the app container I am attempting to connect to the postgres container via psycopg2. However when I attempt to do so, I receive the following error:
psycopg2.OperationalError: could not connect to server: No route to host
Is the server running on host "postgres" (172.22.0.2) and accepting
TCP/IP connections on port 5432?
using a psycopg2 call that looks like
with psy.connect(host='postgres', port=5432, user='postgres', password='postgres') as conn:
...
The entries of this psycopg2 call match the env file given to docker-compose.
My understanding is that Postgres uses port 5432 by default, and that when docker-compose creates the two containers it creates a Docker network for them named DIR_default (where DIR is the name of the directory the docker-compose file lives in), on which each container can be reached using the name listed in the docker-compose file ('postgres' and 'app' in this case).
Among various tries:
I've checked and the database isn't going down between the container being created and the user being exec'd in.
I've tried various little changes like changing the container names, postgres login info, etc.
I've tried linking the postgres container name explicitly with link: "postgres:postgres".
Other solutions suggested here
Any help would be greatly appreciated! I see no reason why something as simple as this should be occurring, but also here I am.
Edit:
Pinging the Postgres container from the app container appears to work when running docker exec app ping postgres_container_name. Is this a sign that the Docker network is set up correctly and the issue is something on my side?
Edit 2:
Tried clearing all images and containers, then restarting the Docker daemon and afterwards my PC. No change in either case.
For reference, the ping command looked like
docker exec python-app ping name_given_to_postgres_container
returning various statements which looked like
64 bytes from name_given_to_postgres_container.project_name_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.090 ms
which, unless I am mistaken, signals a successful ping.
The top level .env file provided to docker-compose
HOST_SAVE_DIRC=~/python_projects/project_directory/directory_in_project
CONTAINER_SAVE_DIRC=/pdfs
POSTGRES_DB=project_name # same as project_directory
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_PORT=5432
Here is the requirements.txt file for the Python app as well
certifi==2020.4.5.1
chardet==3.0.4
idna==2.9
psycopg2-binary==2.8.5
read-env==1.1.0
requests==2.23.0
urllib3==1.25.9
Exec-ing into the Postgres container with docker exec -it container_id bash and running psql -U postgres appears to be successful, even with restart: always removed. I can also see that the database named in the docker-compose file has been created. I feel confident in saying this container isn't dying spontaneously.
However, hitting port 5432 on the Postgres container with netcat via nc name_given_to_postgres_container 5432-5433 returns an error similar to the one returned by psycopg2:
arxivist_postgres_1 [172.22.0.3] 5433 (?) : No route to host
arxivist_postgres_1 [172.22.0.3] 5432 (postgresql) : No route to host
The same error is also returned with curl. So my guess is that the issue isn't with the Postgres container directly, psycopg2, or the hostname, but something with the port?
Edit 3:
As a last attempt to fix this project, the full project this post is referring to is posted at this link. If anyone would like to download the repo and try building the docker containers themselves via ./start.sh - that might be just what is needed to find a solution!
I thought I had Docker set up on my machine, which runs Fedora 32. However, as I came to realize from this article, setting up Docker on Fedora 32 requires some extra steps I was not previously aware of.
Specifically for this issue, the article lists a command to whitelist Docker on the local network's firewall:
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
So I believe the root cause of my issue was simply that my app container was being blocked from accessing the postgres container by the firewall. Making the above change finally made the program work!
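(If you apply the same fix, keep in mind that rules added with firewall-cmd --permanent only take effect after a sudo firewall-cmd --reload or a reboot, so reload the firewall before retesting.)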

Kubernetes Container Command

I'm working with Neo4j in Kubernetes.
For a showcase, I want to fill the Neo4j instance in the pods with initial data, which I can do with a Cypher file I have in the /bin folder, using the cypher-shell.
So basically I start the container and run cat bin/initialData.cypher | bin/cypher-shell.
I've validated that this works by running it in a shell opened with kubectl exec -it <pod> /bin/bash.
However, no matter how I try to map this to spec.containers.command, it fails.
Currently my best guess is
spec:
  containers:
    command:
      - /bin/bash
      - -c
      - |
        cd bin
        ls
        cat initialData.cypher | cypher-shell
which does not work. It displays the ls output correctly but throws a connection refused error afterwards, and I have no idea where it's coming from.
edit: Updated
What you did is a valid spec, but the syntax is wrong.
Try it like this:
spec:
  containers:
    command: ["/bin/bash"]
    args: ["-c", "cat import/initialData.cypher | bin/cypher-shell"]
Update:
In your neo4j.conf you have to uncomment the lines related to using the neo4j-shell
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces).
dbms.shell.host=127.0.0.1
# The port the shell will listen on, default is 1337.
dbms.shell.port=1337
Exec seems like the better way to handle this but you wouldn’t usually use both command and args. In this case, probably just put the whole thing in command.
I've found out what my problem was.
I did not realize that the command is not tied to the initialisation lifecycle, meaning the command was executed before Neo4j was started in the container.
Basically, using the command was the wrong approach for me.
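For anyone hitting the same thing: because command replaces the image's entrypoint, Neo4j itself never starts, so cypher-shell has nothing to connect to. One possible alternative (only a sketch under that assumption, not what I ended up using) is to keep the default entrypoint and run the seeding from a postStart lifecycle hook with a retry loop, since postStart runs alongside the main process:
spec:
  containers:
    - name: neo4j
      image: neo4j:4.4 # example image/tag, adjust to your deployment
      lifecycle:
        postStart:
          exec:
            command:
              - /bin/bash
              - -c
              - |
                # keep retrying until the server accepts connections, then load the data
                # (add -u/-p to cypher-shell if authentication is enabled)
                until cat bin/initialData.cypher | bin/cypher-shell; do
                  sleep 5
                done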