Pass metadata to a container's command in GCE (container-vm)

I'd like to store an API key (e.g., a Twitter key) in instance metadata, then use it on the command line of a container running on that instance.
#api-keys.yaml:
TWITTER-CONSUMER-KEY: blahah
TWITTER-CONSUMER-SECRET: blihih
...
#google-container-manifest.yaml:
version: v1beta2
containers:
  - name: tweet-thing-one
    image: gcr.io/my-project/tweetfeed
    command:
      - --consumer-key=$TWITTER-CONSUMER-KEY
      - --consumer-secret=$TWITTER-CONSUMER-SECRET
      - --params-for=one
  - name: tweet-thing-two
    image: gcr.io/my-project/tweetfeed
    command:
      - --consumer-key=$TWITTER-CONSUMER-KEY
      - --consumer-secret=$TWITTER-CONSUMER-SECRET
      - --params-for=two
...
So I could run
$ gcloud compute instances create containervm-test-1 \
    --image container-vm \
    --metadata-from-file api-keys=api-keys.yaml,google-container-manifest=google-container-manifest.yaml \
    --zone us-central1-a \
    --machine-type f1-micro
Thanks!

Looks like two separate issues:
(1) the api-keys.yaml file isn't translated into environment variables within the VM; instead, those need to be queried from the metadata server directly. See e.g. Instance environment variables and https://developers.google.com/compute/docs/metadata#querying.
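For example, from inside the VM the value stored under the api-keys key could be read like this (a minimal sketch; the key name matches the --metadata-from-file flag in the question):
# Query a custom metadata attribute from inside the instance.
# The Metadata-Flavor header is required by the metadata server.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/api-keys"
The response is the raw file content, so the container would still need to parse the two YAML keys out of it itself.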
(2) If you specify "command:" in the manifest, you need to specify the entire command you want to run, not just the flags. From https://cloud.google.com/compute/docs/containers/container_vms:
containers[].command[] (list of string): "The command line to run. If this is omitted, the container is assumed to have a command embedded in it."
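Put together, a corrected manifest entry would look something like this (a sketch; /tweetfeed stands in for whatever binary the image actually runs, and the key values would have to be fetched from the metadata server at startup rather than expanded from $VARIABLES):
version: v1beta2
containers:
  - name: tweet-thing-one
    image: gcr.io/my-project/tweetfeed
    command:
      - /tweetfeed            # hypothetical path to the entrypoint binary
      - --consumer-key=...    # value fetched from the metadata server, not from $VARS
      - --params-for=one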

Related

First argument is not passed to image in Kubernetes deployments

I have a Docker image with the below entrypoint.
ENTRYPOINT ["sh", "-c", "python3 -m myapp ${*}"]
I tried to pass arguments to this image in my Kubernetes deployments so that ${*} is replaced with them, but after checking the logs it seems that the first argument was ignored.
I tried to reproduce the result regardless of image, and applied the below pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: postgres # or any image you may like
      command: ["bash -c /bin/echo ${*}"]
      args:
        - sth
        - serve
        - arg
When I check the logs, I just see serve arg, and sth is completely ignored.
Any idea on what went wrong or what should I do to pass arguments to exec-style entrypoints instead?
First, your command has quoting problems -- you are effectively running bash -c echo.
Second, you need to closely read the documentation for the -c option (emphasis mine):
If the -c option is present, then commands are read from
the first non-option argument command_string. If there
are arguments after the command_string, the first argument
is assigned to $0 and any remaining arguments are assigned
to the positional parameters. The assignment to $0 sets
the name of the shell, which is used in warning and error
messages.
So you want:
command: ["bash", "-c", "echo ${*}", "bash"]
Given your pod definition, this would set $0 to bash, and then $1 to sth, $2 to serve, and $3 to arg.
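You can reproduce this locally without Kubernetes, which makes the off-by-one easy to see:
# The first word after the -c string becomes $0, not $1:
$ bash -c 'echo ${*}' sth serve arg
serve arg
# Passing "bash" as a throwaway $0 keeps all three arguments positional:
$ bash -c 'echo ${*}' bash sth serve arg
sth serve arg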
There are some subtleties around using sh -c here. For the examples you show, it's not necessary. The important things to remember are that the ENTRYPOINT and CMD are combined together into a single command (or, in Kubernetes, command: and args:), and that sh -c generally takes only a single string argument and acts on it.
The examples you show don't use any shell functionality and you can break the commands into their constituent words as YAML list items.
command:
  - /bin/echo
  - sth
  - serve
  - arg
For the Dockerfile case, there is a pattern of using ENTRYPOINT to specify a command and CMD for its arguments, which parallels Kubernetes's syntax here. For this to work well, I'd avoid sh -c (including the implicit sh -c from the ENTRYPOINT shell form); just provide the first set of words in JSON-array form.
ENTRYPOINT ["python", "-m", "myapp"]
# don't override command:, the image's ENTRYPOINT is right, but do add
args:
  - foo
  - bar
  - baz
(If your entrypoint setup is complex enough to require shell operators, it's typically easier to write and debug to move it into a dedicated script and make that script be the ENTRYPOINT or CMD, rather than trying to figure out sh -c semantics and YAML quoting.)
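For example (a sketch of such a wrapper; entrypoint.sh is a name made up here, not something from the question):
#!/bin/sh
# entrypoint.sh: do any shell-level setup here, then replace the shell
# with the application, forwarding the container's arguments verbatim.
set -e
exec python3 -m myapp "$@"
with the Dockerfile ending in:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]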

Google Cloud Endpoint Error when creating service config

I am trying to configure Google Cloud Endpoints using Cloud Functions. For this I am following the instructions from: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
I have followed the steps given and have come to the point of building the service config into a new ESPv2 Beta Docker image. When I give the command:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
after replacing the hostname, config ID, and project ID, I get the following error:
> -c service-host-name-xxx -p project-id
Using base image: gcr.io/endpoints-release/endpoints-runtime-serverless:2
++ mktemp -d /tmp/docker.XXXX
+ cd /tmp/docker.5l3t
+ gcloud endpoints configs describe service-host-name-xxx.run.app --project=project-id --service=service-host-name-xxx.app --format=json
ERROR: (gcloud.endpoints.configs.describe) NOT_FOUND: Service configuration 'services/service-host-name-xxx.run.app/configs/service-host-name-xxx' not found.
+ error_exit 'Failed to download service config'
+ echo './gcloud_build_image: line 46: Failed to download service config (exit 1)'
./gcloud_build_image: line 46: Failed to download service config (exit 1)
+ exit 1
Any idea what I am doing wrong? Thanks!
My bad. I repeated the steps and got it working. So I guess there must have been some mistake I made while trying it out. The document works as it states.
I had the same error. When running the script twice it works. This means you have to already have a service endpoint configured, which does not exist yet when the script tries to fetch the endpoint information with:
gcloud endpoints configs describe service-host-name-xxx.run.app
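As a quick sanity check before running the build script, you can list which configs actually exist for the service (a hedged aside; configs list is a standard gcloud endpoints subcommand):
gcloud endpoints configs list --service=CLOUD_RUN_HOSTNAME
If this comes back empty, the describe call inside gcloud_build_image has nothing to download yet.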
What I would do (in Cloud Build) is to deploy some sort of "empty" container first. I used the following example at the top of my cloudbuild.yaml; the "grep . ||" part makes the deploy run only when the services list comes back empty, i.e. when the service doesn't exist yet:
gcloud run services list \
--platform managed \
--project ${PROJECT_ID} \
--region europe-west1 \
--filter=${PROJECT_ID}-esp-svc \
--format yaml | grep . ||
gcloud run deploy ${PROJECT_ID}-esp-svc \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
--allow-unauthenticated \
--platform managed \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=120

Send arguments to a Job

I have a Docker image that basically runs a one-time script. That script takes 3 arguments. My Dockerfile is:
FROM <some image>
ARG URL
ARG USER
ARG PASSWORD
RUN apt update && apt install curl -y
COPY register.sh .
RUN chmod u+x register.sh
CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"]
When I spin up the container using docker run -e URL=someUrl -e USER=someUser -e PASSWORD=somePassword -itd <IMAGE_ID>, it works perfectly fine.
Now I want to deploy this as a job.
My basic Job looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: register
spec:
  template:
    spec:
      containers:
        - name: register
          image: registeration:1.0
          args: ["someUrl", "someUser", "somePassword"]
      restartPolicy: Never
  backoffLimit: 4
But the pod errors out with:
Error: failed to start container "register": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"someUrl\": executable file not found in $PATH"
Looks like it is taking my args as commands and trying to execute them. Is that correct? What can I do to fix this?
In the Dockerfile as you've written it, two things happen:
The URL, username, and password are fixed in the image. Anyone who can get the image can run docker history and see them in plain text.
The container startup doesn't take any arguments; it just runs the single command with its fixed set of arguments.
Especially since you're planning to pass these arguments in at execution time, I wouldn't bother trying to include them in the image. I'd reduce the Dockerfile to:
FROM ubuntu:18.04
RUN apt update \
&& DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
curl
COPY register.sh /usr/bin
RUN chmod u+x /usr/bin/register.sh
ENTRYPOINT ["register.sh"]
When you launch it, the Kubernetes args: get passed as command-line parameters to the entrypoint. (It is the same thing as the Docker Compose command: and the free-form command at the end of a plain docker run command.) Making the script be the container entrypoint will make your Kubernetes YAML work the way you expect.
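The equivalence is easy to check with plain Docker (using the image tag from the question):
# These words land in the script as $1, $2, and $3,
# exactly like the Kubernetes args: list does:
docker run --rm registeration:1.0 someUrl someUser somePassword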
In general I prefer using CMD to ENTRYPOINT. (Among other things, it makes it easier to docker run --rm -it ... /bin/sh to debug your image build.) If you do that, then the Kubernetes args: need to include the name of the script it's running:
args: ["./register.sh", "someUrl", "someUser", "somePassword"]
Use:
args: ["sh", "-c", "./register.sh someUrl someUser somePassword"]
(This works with the original Dockerfile because args: replaces the image's CMD, and there is no ENTRYPOINT to get in the way.)

Docker in Docker executor in GitLab Runner does not work (Cannot connect to the Docker daemon)

So I recently tried Docker and GitLab Runner, but it seems I can't get it to work.
This is the log I have:
Running with gitlab-runner 10.0.2 (a9a76a50)
on my-docker (c588e5e2)
Using Docker executor with image docker:git ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:b9145b364a203c0afc538ca615b3470e41729edfb7338017f5d4eeb5b13b2d90 for docker service...
Waiting for services to be up and running...
Using docker image sha256:7961fbf38d6f827265aed22fe41a1db889c54913283b678a8623efdda9573977 for predefined container...
Pulling docker image docker:git ...
Using docker image docker:git ID=sha256:5917639be9495ab183f357e8bafafea82449f0c4b12b745eef8bd23d474220ca for build container...
Running on runner-c588e5e2-project-1-concurrent-0 via gitlabServer...
Cloning repository...
Cloning into '<Project name>'...
Checking out ed0ce69e as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Building Heroku-based application using gliderlabs/herokuish docker image...
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
ERROR: Job failed: exit code 125
What could be the error? Docker itself seems to be running, but the Docker inside does not seem to work.
This is my .toml file:
[[runners]]
  name = "my name"
  url = "my url"
  token = "my token"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
Thanks in advance for your help!
Edit: That's what "docker ps" gave as output:
CONTAINER ID  IMAGE                        COMMAND                  CREATED         STATUS                 PORTS     NAMES
e66e844481b7  7961fbf38d6f                 "gitlab-runner-ser..."   2 seconds ago   Up Less than a second            runner-73520410-project-1-concurrent-0-docker-0-wait-for-service
4f659dba7bac  b9145b364a20                 "dockerd-entrypoin..."   2 seconds ago   Up 1 second            2375/tcp  runner-73520410-project-1-concurrent-0-docker-0
73776d4638b9  gitlab/gitlab-runner:latest  "/usr/bin/dumb-ini..."   19 minutes ago  Up 19 minutes                    gitlab-runner
Edit 2: My gitlab-ci.yaml
# ruby 2.2
rspec:ruby2.2:
  image: ruby:2.2
  script:
    - bundle exec rspec spec
  tags:
    - ruby
  except:
    - tags

# ruby 2.1
rspec:ruby2.1:
  image: ruby:2.1
  script:
    - bundle exec rspec spec
  tags:
    - ruby
  except:
    - tags

.go: &go_definition
  before_script:
    - apt-get update -qq && apt-get install -y ruby
    - ruby -v
  script:
    - go version
    - which go
    - bin/compile
    - support/go-test
    - support/go-format check

go:1.8:
  <<: *go_definition
  image: golang:1.8

codeclimate:
  before_script: []
  image: docker:latest
  variables:
    DOCKER_DRIVER: overlay
  services:
    - docker:dind
  script:
    - docker pull codeclimate/codeclimate
    - docker run --env CODECLIMATE_CODE="$PWD" --volume "$PWD":/code --volume /var/run/docker.sock:/var/run/docker.sock --volume /tmp/cc:/tmp/cc codeclimate/codeclimate analyze -f json > codeclimate.json
  artifacts:
    paths: [codeclimate.json]
This is my registration command. I think you are missing passing --docker-privileged during registration; also make sure the gitlab-runner user is part of the docker group:
gitlab-runner register \
--template-config /tmp/gitlab-config.toml \
--config /etc/gitlab-runner/config.toml \
--non-interactive \
--url "$gitlab_url" \
--registration-token "$runner_registration_token" \
--name "$runner_name" \
--tag-list "$runner_tags" \
--run-untagged="$runner_run_untagged" \
--locked="$runner_locked" \
--access-level="$runner_access" \
--maximum-timeout="$maximum_timeout" \
--executor "docker" \
--docker-privileged \
--docker-volumes "/cache" \
--docker-volumes "/certs/client" \
--docker-image "$runner_image"
sudo usermod -aG docker gitlab-runner
# concurrent global can't be setup in registration
# See: https://gitlab.com/gitlab-org/gitlab/-/issues/332497
sed -i "s/concurrent.*/concurrent = $concurrent/" /etc/gitlab-runner/config.toml
# prometheus port for GitLab Runner is 9252 as defined here https://github.com/prometheus/prometheus/wiki/Default-port-allocations
echo -e "listen_address = \":9252\"\n$(cat /etc/gitlab-runner/config.toml)" > /etc/gitlab-runner/config.toml

Google Cloud's gcloud compute instances create gives error "The resource projects/{ourID}/global/images/family/debian-8 was not found"

We are using a server I created on Google Cloud Platform to create and manage the other servers there. But when trying to create a new server from the Linux command line with gcloud compute instances create, we receive the following error:
marco#ans-mgmt-01:~/gcloud$ ./create_gcloud_instance.sh app-tst-04 tst,backend-server,bootstrap home-tst 10.20.22.104
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/REMOVED_OUR_PROJECTID/global/images/family/debian-8' was not found
Our script looks like this:
#!/bin/bash
if [ "$#" -ne 4 ]; then
  echo "Usage: create_gcloud_instance <instance_name> <tags> <subnet_name> <server_ip>"
  exit 1
fi
set -e
INSTANCE_NAME=$1
TAGS=$2
SERVER_SUBNET=$3
SERVER_IP=$4
gcloud compute --project "REMOVED OUR PROJECT ID" instances create "$INSTANCE_NAME" \
  --zone "europe-west1-c" \
  --machine-type "f1-micro" \
  --network "cloudnet" \
  --subnet "$SERVER_SUBNET" \
  --no-address \
  --private-network-ip="$SERVER_IP" \
  --maintenance-policy "MIGRATE" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --service-account "default" \
  --tags "$TAGS" \
  --image-family "debian-8" \
  --boot-disk-size "10" \
  --boot-disk-type "pd-ssd" \
  --boot-disk-device-name "bootdisk-$INSTANCE_NAME"
./clean_known_hosts.sh $INSTANCE_NAME
On the Google Cloud console (console.cloud.google.com) I enabled the Cloud API access scopes for the ans-mgmt-01 server and also tried to create a server from there. That works without problems.
The problem is that gcloud is looking for the image family in your project and not the debian-cloud project where it really exists.
This can be fixed by simply using --image-project debian-cloud.
This way instead of looking for projects/{yourID}/global/images/family/debian-8, it will look for projects/debian-cloud/global/images/family/debian-8.
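In the script from the question, that means adding one flag next to --image-family:
  --image-family "debian-8" \
  --image-project "debian-cloud" \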
For me the problem was that debian-8 (and now debian-9) reached end of life and is no longer supported. Updating to debian-10 or debian-11 fixed the issue.
For me the problem was that debian-9 eventually came to its end of life; updating to debian-10 fixed the issue.
You could run the command below to see if the image is available:
gcloud compute images list | grep debian
Below is the result of the command:
NAME: debian-10-buster-v20221206
PROJECT: debian-cloud
FAMILY: debian-10
NAME: debian-11-bullseye-arm64-v20221102
PROJECT: debian-cloud
FAMILY: debian-11-arm64
NAME: debian-11-bullseye-v20221206
PROJECT: debian-cloud
FAMILY: debian-11
So you can get an idea of what's currently available from your own result.