Google Cloud Endpoint Error when creating service config - google-cloud-endpoints-v2

I am trying to configure Google Cloud Endpoints using Cloud Functions, following the instructions at: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
I have followed the steps given and have reached the point of building the service config into a new ESPv2 Beta Docker image. When I run the command:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
After replacing the hostname, config ID, and project ID, I get the following error:
> -c service-host-name-xxx -p project-id
Using base image: gcr.io/endpoints-release/endpoints-runtime-serverless:2
++ mktemp -d /tmp/docker.XXXX
+ cd /tmp/docker.5l3t
+ gcloud endpoints configs describe service-host-name-xxx.run.app --project=project-id --service=service-host-name-xxx.app --format=json
ERROR: (gcloud.endpoints.configs.describe) NOT_FOUND: Service configuration 'services/service-host-name-xxx.run.app/configs/service-host-name-xxx' not found.
+ error_exit 'Failed to download service config'
+ echo './gcloud_build_image: line 46: Failed to download service config (exit 1)'
./gcloud_build_image: line 46: Failed to download service config (exit 1)
+ exit 1
Any idea what I am doing wrong? Thanks.

My bad. I repeated the steps and got it working, so I guess I made some mistake the first time around. The document works as it states.

I had the same error; running the script twice works. The script expects a service endpoint to already be configured, but it does not exist yet the first time the script tries to fetch the endpoint information with:
gcloud endpoints configs describe service-host-name-xxx.run.app
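One way to check whether a config exists yet is the corresponding list command (a sketch, reusing the placeholder service name from above):
# empty until a config has been deployed for this endpoints service
gcloud endpoints configs list --service=service-host-name-xxx.run.app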
What I would do (in Cloud Build) is deploy some sort of "empty" container first. I used the following example at the top of my cloudbuild.yaml; the grep . || idiom only runs the deploy when the services list comes back empty:
gcloud run services list \
--platform managed \
--project ${PROJECT_ID} \
--region europe-west1 \
--filter=${PROJECT_ID}-esp-svc \
--format yaml | grep . ||
gcloud run deploy ${PROJECT_ID}-esp-svc \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
--allow-unauthenticated \
--platform managed \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=120


What parameter(s) do I have to pass `gsutil` to access a Google Cloud local storage? (storage-testbench)

For test purposes, I want to run the storage-testbench simulator. It allows me to send REST commands to a local server which is supposed to work like a Google Cloud Storage facility.
In my tests, I want to copy 3 files from my local hard drive to that local GCS-like storage facility using gsutil cp .... I found out that in order to connect to that specific server, I need additional options on the command line, as follows:
gsutil \
-o "Credentials:gs_json_host=127.0.0.1" \
-o "Credentials:gs_json_port=9000" \
-o "Boto:https_validate_certificates=False" \
cp -p test my-file.ext gs://bucket-name/my-file.ext
See .boto for details on defining the credentials.
Unfortunately, I get this error:
CommandException: No URLs matched: test
The name at the end (test) is the project identifier (-p test). There is an example in the README.md of the storage-testbench project, although it's just a variable in a URI.
How do I make the cp command work?
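(For what it's worth, the gsutil cp documentation lists -p as "preserve ACLs", taking no value, so test here would be parsed as an extra source URL; that would explain the No URLs matched: test message. A sketch of the same copy without that flag:)
# same connection options, but no -p: gsutil cp has no project flag,
# so the command now has exactly one source and one destination
gsutil \
-o "Credentials:gs_json_host=127.0.0.1" \
-o "Credentials:gs_json_port=9000" \
-o "Boto:https_validate_certificates=False" \
cp my-file.ext gs://bucket-name/my-file.ext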
Note:
The gunicorn process shows that the first GET from the cp command works as expected. It returns a 200. So the issue seems to be inside gsutil. Also, I'm able to create the bucket just fine:
gsutil \
-o "Credentials:gs_json_host=127.0.0.1" \
-o "Credentials:gs_json_port=9000" \
-o "Boto:https_validate_certificates=False" \
mb -p test gs://bucket-name
Trying the mb a second time gives me a 509 as expected.
More links:
gsutil global options
gsutil cp ...

How Can I Debug apiserver Startup When No Logs Are Generated?

I am trying to install the aws-encryption-provider following the steps at https://github.com/kubernetes-sigs/aws-encryption-provider. After I added the --encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml parameter to /etc/kubernetes/manifests/kube-apiserver.yaml, the apiserver process did not restart, nor did I see any error messages.
What technique can I use to see errors created when apiserver starts?
Realizing that the apiserver runs inside a Docker container, I connected to one of my controller nodes using SSH. Then I started a container with the following command, to get a shell prompt in the same Docker image that the apiserver uses:
docker run \
-it \
--rm \
--entrypoint /bin/sh \
--volume /etc/kubernetes:/etc/kubernetes:ro \
--volume /etc/ssl/certs:/etc/ssl/certs:ro \
--volume /etc/pki:/etc/pki:ro \
--volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro \
--volume /etc/pki/tls:/etc/pki/tls:ro \
--volume /etc/ssl/etcd/ssl:/etc/ssl/etcd/ssl:ro \
--volume /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro \
--volume /var/run/kmsplugin:/var/run/kmsplugin \
k8s.gcr.io/kube-apiserver:v1.18.5
Once inside that container, I could run the same command that is set up in kube-apiserver.yaml. This command was:
kube-apiserver \
--encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml \
--advertise-address=10.250.203.201 \
...
--service-node-port-range=30000-32767 \
--storage-backend=etcd3
I elided the bulk of the command since you'll need to get specific values from your own kube-apiserver.yaml file.
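If yq is available, one way to pull the full flag list straight out of the manifest (a sketch, assuming the standard static pod layout and yq v4):
# print the apiserver command and all of its flags, one per line
yq '.spec.containers[0].command[]' /etc/kubernetes/manifests/kube-apiserver.yaml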
Using this technique showed me the error message:
Error: error while parsing encryption provider configuration file
"/etc/kubernetes/aws-encryption-provider-config.yaml": error while parsing
file: resources[0].providers[0]: Invalid value:
config.ProviderConfiguration{AESGCM:(*config.AESConfiguration)(nil),
AESCBC:(*config.AESConfiguration)(nil), Secretbox:(*config.SecretboxConfiguration)
(nil), Identity:(*config.IdentityConfiguration)(nil), KMS:(*config.KMSConfiguration)
(nil)}: provider does not contain any of the expected providers: KMS, AESGCM,
AESCBC, Secretbox, Identity
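That error indicates the providers list in the config file was effectively empty, which is often an indentation mistake. For reference, a minimal sketch of a config that does name a provider; the plugin name and socket path are assumptions based on the aws-encryption-provider README:
# write a minimal EncryptionConfiguration that names the KMS plugin
cat > /etc/kubernetes/aws-encryption-provider-config.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: aws-encryption-provider
          endpoint: unix:///var/run/kmsplugin/socket.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}
EOF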

MongoDB Atlas: is it safe to whitelist all IPs, given that someone attempting to access the database still needs a password?

I have a Google App Engine app running my Express server, and my DB is in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code for my Express server running on Google Cloud, so presumably any attacker trying to get into the database would still need the username and password from the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere. They must be connecting from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person in the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application and plug that range into the whitelist.
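Google publishes the IP ranges available to its cloud customers; here is a sketch for pulling the IPv4 prefixes (whether these fully cover your App Engine app's egress traffic is something to verify):
# fetch Google Cloud's published IP ranges and print the IPv4 prefixes
curl -s https://www.gstatic.com/ipranges/cloud.json \
| jq -r '.prefixes[].ipv4Prefix // empty'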
Here is an answer I left elsewhere; hope it helps someone who comes across this:
This script will be kept up to date on my gist.
why
Mongo Atlas provides reasonably priced access to a managed Mongo DB. The CSPs where containers are hosted charge too much for their managed Mongo DBs, and they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster. This is obviously ridiculous.
This entrypoint script is surgical, maintaining least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile (see the sketch after this list)
run in cloud-init / VM startup if not using a container (and delete the last line, exec "$@", since that is just for containers)
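A minimal Dockerfile sketch of that entrypoint wiring (the base image, file name, and final command are all assumptions):
# Dockerfile sketch
FROM node:18-slim
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# the script's final exec "$@" hands control to this command
CMD ["node", "server.js"]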
behavior
uses the Mongo Atlas project IP access list endpoints
detects the hosted IP address of the container and whitelists it with the cluster using the Mongo Atlas API
if the service has no whitelist entry, one is created
if the service has an existing whitelist entry that matches the current IP, nothing changes
if the service IP has changed, the old entry is deleted and a new one is created
when a whitelist entry is created, the service sleeps for 60s to wait for Atlas to propagate access to the cluster
env
setup
1. create an API key for the org
2. add the API key to the project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to the project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service (illustrated below):
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
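For illustration, the resulting env of the container service might look like this (all values are placeholders):
export SERVICE_NAME='my-express-api'
export MONGO_ATLAS_API_PK='public-key-from-step-3'
export MONGO_ATLAS_API_SK='secret-key-from-step-3'
export MONGO_ATLAS_API_PROJECT_ID='project-id-from-step-4'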
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash

# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #

set -e

mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'

check_for_deps() {
  deps=(
    bash
    curl
    jq
  )
  for dep in "${deps[@]}"; do
    if [ ! "$(command -v "$dep")" ]; then
      echo "dependency [$dep] not found. exiting"
      exit 1
    fi
  done
}

make_mongo_api_request() {
  local request_method="$1"
  local request_url="$2"
  local data="$3"

  curl -s \
    --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
    --header "Accept: application/json" \
    --header "Content-Type: application/json" \
    --request "$request_method" "$request_url" \
    --data "$data"
}

get_access_list_endpoint() {
  echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}

get_service_ip() {
  echo -n "$(curl https://ipinfo.io/ip -s)"
}

get_previous_service_ip() {
  local access_list_endpoint=`get_access_list_endpoint`
  # select the entry whose comment tags this service with "[$SERVICE_NAME]"
  local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
    | jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
      '.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
  echo "$previous_ip"
}

whitelist_service_ip() {
  local current_service_ip="$1"
  local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"

  if (( "${#comment}" > 80 )); then
    echo "comment field value will be above 80 char limit: \"$comment\""
    echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
    echo "change comment format or service name then retry. exiting to avoid mongo API failure"
    exit 1
  fi

  echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
  response=`make_mongo_api_request \
    'POST' \
    "$(get_access_list_endpoint)?pretty=true" \
    "[
      {
        \"comment\" : \"$comment\",
        \"ipAddress\": \"$current_service_ip\"
      }
    ]" \
    | jq -r 'if .error then . else empty end'`

  if [[ -n "$response" ]]; then
    echo 'API error whitelisting service'
    echo "$response"
    exit 1
  else
    echo "whitelist request successful"
    echo "waiting 60s for whitelist to propagate to cluster"
    sleep 60s
  fi
}

delete_previous_service_ip() {
  local previous_service_ip="$1"
  echo "deleting previous service IP address of [$SERVICE_NAME]"
  make_mongo_api_request \
    'DELETE' \
    "$(get_access_list_endpoint)/$previous_service_ip"
}

set_mongo_whitelist_for_service_ip() {
  local current_service_ip=`get_service_ip`
  local previous_service_ip=`get_previous_service_ip`

  if [[ -z "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] has not yet been whitelisted"
    whitelist_service_ip "$current_service_ip"
  elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] IP has not changed"
  else
    echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
    delete_previous_service_ip "$previous_service_ip"
    whitelist_service_ip "$current_service_ip"
  fi
}

check_for_deps
set_mongo_whitelist_for_service_ip

# run CMD
exec "$@"

ERROR: (gcloud.firebase.test.android.run) 'Pixel' is not a valid model

I tried to launch FireBase Test Lab from the command line but I got an error:
ERROR: (gcloud.firebase.test.android.run) 'Pixel' is not a valid model
Here is how I tried to run the command:
gcloud firebase test android run \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--timeout 30m \
--results-bucket "locusmaps-android-sdk" \
--test-targets "com.locuslabs.android.sdk.TestUITest#testTapMapLabelRentalCarCenter" \
--use-orchestrator \
--device model=Pixel,version=27,locale=en_US,orientation=portrait \
--num-flaky-test-attempts 2 \
--environment-variables numShards=2,shardIndex=0
The only reference I could find to this error is some source code, but no solution anyone has articulated.
How do I find the correct model number?
According to the gcloud firebase test android run documentation, you can find the list of MODEL_ID values with the following command:
gcloud firebase test android models list
So use the --device model=Pixel2 parameter instead of model=Pixel.
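To double-check before rerunning, you can narrow the model list down (a sketch; the MODEL_ID column holds the value to pass to --device):
# list available devices and keep only the Pixel-family rows
gcloud firebase test android models list | grep -i pixel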

Google Cloud's gcloud compute instance create gives error "The resource projects/{ourID}/global/images/family/debian-8 was not found"

We are using a server I created on Google Cloud Platform to create and manage our other servers there. But when trying to create a new server from the Linux command line with gcloud compute instances create, we receive the following error:
marco#ans-mgmt-01:~/gcloud$ ./create_gcloud_instance.sh app-tst-04 tst,backend-server,bootstrap home-tst 10.20.22.104
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/REMOVED_OUR_PROJECTID/global/images/family/debian-8' was not found
Our script looks like this:
#!/bin/bash
if [ "$#" -ne 4 ]; then
echo "Usage: create_gcloud_instance <instance_name> <tags> <subnet_name> <server_ip>"
exit 1
fi
set -e
INSTANCE_NAME=$1
TAGS=$2
SERVER_SUBNET=$3
SERVER_IP=$4
gcloud compute --project "REMOVED OUR PROJECT ID" instances create "$INSTANCE_NAME" \
--zone "europe-west1-c" \
--machine-type "f1-micro" \
--network "cloudnet" \
--subnet "$SERVER_SUBNET" \
--no-address \
--private-network-ip="$SERVER_IP" \
--maintenance-policy "MIGRATE" \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--service-account "default" \
--tags "$TAGS" \
--image-family "debian-8" \
--boot-disk-size "10" \
--boot-disk-type "pd-ssd" \
--boot-disk-device-name "bootdisk-$INSTANCE_NAME"
./clean_known_hosts.sh "$INSTANCE_NAME"
In the Google Cloud console (console.cloud.google.com) I enabled the Cloud API access scope for the ans-mgmt-01 server and also tried creating a server from there. That works without problems.
The problem is that gcloud is looking for the image family in your project and not the debian-cloud project where it really exists.
This can be fixed by simply using --image-project debian-cloud.
This way instead of looking for projects/{yourID}/global/images/family/debian-8, it will look for projects/debian-cloud/global/images/family/debian-8.
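Applied to the script above, the fix is one extra flag (a sketch; note that debian-8 has since reached end of life, so a currently supported family such as debian-11 is used here):
# tell gcloud which project hosts the public image family
gcloud compute instances create "$INSTANCE_NAME" \
--zone "europe-west1-c" \
--image-family "debian-11" \
--image-project "debian-cloud"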
For me the problem was that debian-8 (and now debian-9) reached end of life and is no longer supported. Updating to debian-10 or debian-11 fixed the issue.
For me the problem was that debian-9 had likewise come to an end after so much time; updating to debian-10 fixed the issue.
You could run the command below to see whether the image is available:
gcloud compute images list | grep debian
Below is the result of that command:
NAME: debian-10-buster-v20221206
PROJECT: debian-cloud
FAMILY: debian-10
NAME: debian-11-bullseye-arm64-v20221102
PROJECT: debian-cloud
FAMILY: debian-11-arm64
NAME: debian-11-bullseye-v20221206
PROJECT: debian-cloud
FAMILY: debian-11
Your own result should give you an idea of which image families are currently available.