Metadata fetch failed with Stackdriver logging on Google Compute Engine - docker-compose

I am integrating my Go application with Stackdriver logging via cloud.google.com/go/logging. My application works perfectly fine when deployed on GCP in the App Engine flexible environment. However, when I run my app locally, as soon as I hit localhost:8080 I get the following error on my console and the application gets killed automatically:
Metadata fetch failed: Get http://metadata/computeMetadata/v1/instance/attributes/gae_project: dial tcp: lookup metadata on 127.0.0.11:53: server misbehaving
My understanding is that when running locally, the code should not try to access Google's internal metadata service, which is what is happening above. (The 127.0.0.11:53 in the error is Docker's embedded DNS resolver, which cannot resolve the metadata hostname outside GCP.) I dug deeper, and it looks like this part is handled in cloud.google.com/go/compute/metadata/metadata.go. I might be wrong here, but it looks like I have to set an environment variable for the code to work properly. Pasting from the documentation in metadata.go:
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This is variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
If all of my understanding is true, what should I set GCE_METADATA_HOST to? If I am wrong about my understanding, why am I seeing this error? Is it possible that this error has something to do with my Docker and not with Stackdriver logging?
I am running my app in a container with docker-compose. I perform go install, which generates the binary, and then I simply execute the binary.
EDIT: This is my compose file:
version: '3'
services:
  dev:
    image: <gcr_image>
    entrypoint:
      - /bin/sh
      - -c
      - "cat ./config-scripts/config.sh >> /root/.bashrc; bash"
    command: bash
    stdin_open: true
    tty: true
    working_dir: /code
    environment:
      - ENV1=value1
      - ENV2=value2
    ports:
      - "8080:8080"
    volumes:
      - .:/code
      - ~/.npmrc:/root/.npmrc
      - ~/.config/gcloud:/root/.config/gcloud
      - /var/run/docker.sock:/var/run/docker.sock
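In case it helps, this is the kind of local-only override I was thinking of trying in the compose file. Just a sketch: GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS are the standard Google client library variables rather than anything from my app, my-project-id is a placeholder, and I am not certain this avoids every metadata lookup:
    environment:
      - ENV1=value1
      - ENV2=value2
      # Local-only: give the client the project ID and credentials explicitly,
      # so it should not need the GCE metadata server. The gcloud volume
      # mounted above provides application default credentials at this path.
      - GOOGLE_CLOUD_PROJECT=my-project-id
      - GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json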

Related

How to create a storage account with Azurite and Docker-compose and connect to it via Storage Explorer

I am creating an Azure Function that must be connected to a local storage account. It's for study purposes. The problem does not exist if I run the function with the "default" options, the ones that are set when I create an Azure Function that connects to a containerized local storage.
But now I want to customize my project using docker compose. Forget the function; at this moment it is not the problem and I don't care about it. Here is the compose file:
version: '3.4'
services:
  functionapp4:
    image: ${DOCKER_REGISTRY-}functionapp4
    container_name: MyFunction
    build:
      context: .
      dockerfile: FunctionApp4/Dockerfile
  storage:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: MyStorage
    restart: always
    ports:
      - 127.0.0.1:10000:10000
      - 127.0.0.1:10001:10001
      - 127.0.0.1:10002:10002
    environment:
      - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
    volumes:
      - azurite:/data
volumes:
  azurite:
When I run the project, both containers (function and storage) start. But here I can see a problem immediately:
the services are listening on http://0.0.0.0 even though I set 127.0.0.1 in the compose file. I also tried with "127.0.0.1:{portNumber}".
Now, I open the Storage Explorer, where I created the storage with the same name and key I set in the compose:
Now, when I click on queue I get this error:
{
  "name": "RestError",
  "message": "Invalid storage account.\nRequestId:a20dea2a-2535-4098-950e-33a7f44ceca1\nTime:2023-02-08T07:36:52.554Z",
  "code": "InvalidOperation",
  "statusCode": 400,
  "request": {
    "streamResponseStatusCodes": {},
    "url": "http://127.0.0.1:10001/devst*****?timeout=30",
    ...
  }
}
I also tried to set the command in docker compose file:
command: 'azurite'
In this case, the service starts listening on the correct host, but it is worse, because then I cannot connect to the storage account at all (most likely because a bare azurite command binds to 127.0.0.1 inside the container, which is unreachable through the published ports):
The problem seems to be in my environment variable:
environment:
  - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
But it is correctly set:
I tried both with quotation marks and without them. No change.
If I remove the env variable, I can connect to the default storage account correctly.
What's wrong with my configuration? Any suggestions, please?
Thank you
Just one small error in my configuration.
This line
- AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
must be
- "AZURITE_ACCOUNTS=devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
Please note the quotation marks.
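To double-check how compose passed the value, one can print the variable inside the running container (a sketch, using the service name from the compose file above):
docker compose exec storage env | grep AZURITE_ACCOUNTS
With the wrong quoting, the quotation marks show up as part of the value, which is why Azurite could not match the account name.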

Is there any way to use compose v2 in a Bitbucket pipeline?

First of all - no, I cannot switch from Bitbucket Pipelines to something more appropriate; unfortunately, it is a direct requirement.
I have searched other SO questions and Google; the following two questions are related:
Bitbucket Pipeline - docker compose error (no answer)
How to use docker compose V2 in Bitbucket Pipelines (answer not working even when literally copied to pipeline definition for one of reasons below)
Working v1 main pipeline (only the significant step and job; of course, the real one is larger):
image: python:3.10
definitions:
  steps:
    - step: &run-tests
        name: Test
        image: docker/compose:debian-1.29.2
        caches:
          - docker
        services:
          - docker
        script:
          - COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
          # - ... (wait until ready and run tests, ignored, because the error happens earlier)
pipelines:
  default:
    - parallel:
        - step: *run-tests
Encountered errors
I'll refer to them multiple times, so let's define short aliases:
403
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
listing workers for Build: failed to list workers: Unavailable: connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 403"
privileged
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.8s done
#1 creating container buildx_buildkit_default 0.0s done
#1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
------
> [internal] booting buildkit:
------
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
Unfortunately, there is no docker/compose v2 image, and our deployment uses v2, so some inconsistencies happen. I'm trying to use v2 in the pipeline now. I replaced docker-compose references with docker compose and am trying to prevent this command from crashing. Important thing to note: I need Docker BuildKit and cannot go without it, because I'm using Dockerfile.name.dockerignore files, which are separate for prod and dev, and docker without BuildKit does not support them (builds will simply fail).
Things I tried (debug statements like docker version and docker compose version always worked OK in these cases):
- using image: linuxserver/docker-compose:2.10.2-v2. Result: 403.
- using image: library/docker:20.10.18:
  - with no other changes. Result: privileged.
  - adding docker buildx create --driver-opt image=moby/buildkit:v0.10.4-rootless --use as a step. Result: privileged (logs show that this image is actually used: pulling image moby/buildkit:v0.10.4-rootless 6.3s done).
- using no explicit image (relying on the Bitbucket docker installation):
  - with the official compose installation method (result: 403; the full step is sketched after this list):
    - mkdir -p /usr/local/lib/docker/cli-plugins/
    - wget -O /usr/local/lib/docker/cli-plugins/docker-compose https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
    - chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
  - with the solution from the 2nd link above (result: 403, but with some portion of success: it downloaded the two services that do not require building - postgres and redis - and failed only then)
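Assembled, the no-image variant with the official installation method was roughly this step (a sketch of the attempt above, using the same compose version and flags; it still ended in 403):
- step:
    name: Test (compose v2)
    caches:
      - docker
    services:
      - docker
    script:
      # Install the compose v2 CLI plugin, then run it via `docker compose`.
      - mkdir -p /usr/local/lib/docker/cli-plugins/
      - wget -O /usr/local/lib/docker/cli-plugins/docker-compose https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
      - chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
      - COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build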
If it is important, here is the compose file for CI (only healthchecks trimmed, everything else untouched):
# We need this file without volumes due to bitbucket limitations.
version: '3.9'
services:
  db:
    image: mariadb:10.8.3-jammy
    env_file: .env.ci
    volumes:
      - ./tests/db_init/:/docker-entrypoint-initdb.d
    networks:
      - app_network
  redis:
    image: redis:alpine
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - app_network
  app:
    build:
      context: .
      args:
        - APP_USER=reporting
        - APP_PORT
    env_file: .env.ci
    depends_on:
      - db
      - redis
    networks:
      - app_network
  nginx:
    build:
      context: .
      dockerfile: configs/Dockerfile.nginx
    env_file: .env.ci
    environment:
      - APP_HOST=app
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
For now I have reverted everything and keep using v1. The limitations of Bitbucket Pipelines drive me mad: I can easily run the same stuff in GitHub Actions, but now I have to remove one service (it uses docker directory mounting, so it cannot run on Bitbucket) and spend a whole day trying to upgrade compose. Sorry for the tone; this really makes me want to quit Bitbucket forever and never touch it again.

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple django webapp with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here on SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand correctly, you can disable with the --renew-anon-volumes flag).
However, in my case the volume is not persisted. Or maybe it is, but my data is gone.
With docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, docker volume inspect shows that the CreatedAt value has changed. This makes me think it's a different volume with the same name and my data is already gone, but I don't know how to confirm that.
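For completeness, the check itself (a sketch, with my_volume as the placeholder name from above):
docker volume inspect my_volume
The output includes CreatedAt and Mountpoint; if CreatedAt survives a docker-compose down / docker-compose up -d cycle unchanged, the same volume was reused rather than recreated.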
This SO answer suggests mounting the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint in which the following command was executed: python manage.py flush, which clears all data in the database. As this was executed every time the app container started, the database was emptied on every restart. It had nothing to do with docker-compose.
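For anyone hitting the same thing, a minimal sketch of a safer entrypoint (the script name and contents here are illustrative, not the original): run only idempotent commands on startup.
#!/bin/sh
# entrypoint.sh - executed on every container start, so keep it idempotent.
# Applying migrations is safe to re-run; 'python manage.py flush' wipes
# the database and must never live here.
python manage.py migrate --noinput
exec "$@"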

my docker-compose is failing on this line. Why?

I need to copy a php.ini file that I have (with xdebug enabled) to /bitnami/php-fpm/conf/. I am using a Bitnami docker container, and I want to use xdebug to debug the PHP code in my app; therefore I must enable xdebug in the php.ini file. The bitnami/php-fpm repository has this comment in its changelog:
5.5.30-0-r01 (2015-11-10)
php.ini is now exposed in the volume mounted at /bitnami/php-fpm/conf/ allowing users to change the defaults as per their requirements.
So I am trying to copy my php.ini file to /bitnami/php-fpm/conf/php.ini in the docker-compose.yml. Here is the php-fpm section of the .yml:
php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    - php.ini:/bitnami/php-fpm/conf
  networks:
    - net
volumes:
  database_data:
    driver: local
networks:
  net:
Here is the error I get: ERROR: Named volume "php.ini:/bitnami/php-fpm/conf:rw" is used in service "php-fpm" but no declaration was found in the volumes section.
Any idea how to fix this?
I will assume that your indentation is correct; otherwise you probably wouldn't get that error. Always run your YAML through a lint tool such as http://www.yamllint.com/.
In terms of your volume mounts, the first one has the correct syntax, but the second one doesn't, so Docker thinks it is a named volume.
Assuming php.ini is in the root directory next to your docker-compose.yml, prefix it with ./ so Docker treats it as a bind mount, and mount it onto the full file path (mounting a single file onto an existing directory would fail):
volumes:
  - ./app:/app
  - ./php.ini:/bitnami/php-fpm/conf/php.ini
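To verify that the mounted file is actually picked up, one can list the configuration files PHP loads inside the running container (a sketch, using the service name from the compose file):
docker-compose exec php-fpm php --ini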

Create database and user in mongoDB official docker image

I want to have a MongoDB service running in Docker in order to serve a Flask app. What I've tried is creating a container using this docker-compose.yml:
my_mongo_service:
  image: mongo
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USER}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD}
    - MONGO_INITDB_DATABASE=${MY_DATABASE_NAME}
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
  command: mongod
Imagine we have an .env file like this:
MONGO_ROOT_USER=my_fancy_username
MONGO_ROOT_PASSWORD=my_fancy_password
MY_DATABASE_NAME=my_fancy_database
What I would expect (reading the doc) is that a database matching the MY_DATABASE_NAME value is created, that a user matching MONGO_ROOT_USER is created too, and that I can authenticate with the pair (MONGO_ROOT_USER, MONGO_ROOT_PASSWORD).
OK, I launch my container with docker-compose up and enter it with docker exec -it <container-id> bash. I type mongo in the console, and when I try to authenticate it fails:
> use my_fancy_database
switched to db my_fancy_database
> db.auth('my_fancy_username','my_fancy_password')
Error: Authentication failed.
0
In the log, the error I find is the following:
[...] authentication failed for my_fancy_username on my_fancy_database from client [...] ; UserNotFound: Could not find user my_fancy_username#my_fancy_database
The docker-compose.yml configuration (as posted in the official documentation) is not working. What am I doing wrong?
Thanks in advance.
I don't get it. Are you using environment variables that are not actually set in the environment? It sure looks like it.
If you do echo $MY_DATABASE_NAME in your terminal and see empty output, then there is the answer to your question. You either first have to define the variables with export (or source them from a file), or redefine your docker-compose.yml.
For the latter, it's best to use the env_file directive:
my_mongo_service:
  image: mongo
  env_file:
    - .env
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
And set your .env like this:
MONGO_INITDB_ROOT_USERNAME=my_fancy_username
MONGO_INITDB_ROOT_PASSWORD=my_fancy_password
MONGO_INITDB_DATABASE=my_fancy_database
Side note: command: mongod is not necessary; the base image already runs it.
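One more detail worth knowing once the variables reach the container: the root user created from MONGO_INITDB_ROOT_USERNAME lives in the admin database, so authenticate against admin rather than your application database. A quick check (a sketch, using the service name and the values from the .env above):
docker-compose exec my_mongo_service mongo -u my_fancy_username -p my_fancy_password --authenticationDatabase admin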