I submitted a Dockerfile to Google Cloud Build. The build completed successfully.
The artifact URL is:
gcr.io/XXX/api/v1:abcdef017e651ee2b713828662801b36fc2c1
How can I check the image size (MB/GB)?
There isn't an API for this, but I have a workaround using this Linux command line:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://gcr.io/v2/XXX/api/v1/manifests/abcdef017e651ee2b713828662801b36fc2c1 2>/dev/null | \
jq ".layers[].size" | \
awk '{s+=$1} END {print s}'
Details, line by line:
1. Create a curl request with a secure token from the gcloud CLI.
2. Get the image manifest, which describes all the layers and their sizes.
3. Keep only the layers' sizes.
4. Sum the sizes.
The result is in bytes.
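Since the question asks for MB/GB, the conversion can be folded into the final awk step. A minimal sketch of the same pipeline with only the last command changed:

# sum the layer sizes and convert bytes to MB (1 MB = 1024*1024 bytes)
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://gcr.io/v2/XXX/api/v1/manifests/abcdef017e651ee2b713828662801b36fc2c1 2>/dev/null | \
jq ".layers[].size" | \
awk '{s+=$1} END {printf "%.2f MB\n", s/1024/1024}'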
I want to passively check the permissions (scopes) of a GitHub access token, i.e. without pushing anything to a repository. I tried the following command, replacing your_username and your_access_token with my own values and using the URL of my repo:
curl -u your_username:your_access_token \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/octocat/hello-world/collaborators/USERNAME/permission
But it shows the error:
curl: (3) URL using bad/illegal format or missing URL
If the goal is to determine which scopes a token has access to, check the x-oauth-scopes response header (using curl with -I):
$ GITHUB_TOKEN=ghp_DefineYourOwnToken
$ curl -sS -f -I -H "Authorization: token ${GITHUB_TOKEN}" https://api.github.com | grep ^x-oauth-scopes: | cut -d' ' -f2- | tr -d "[:space:]" | tr ',' '\n'
Note that tr -d "[:space:]" above is essential for removing some unusual whitespace; without it, a subsequent exact-match command such as grep -x doesn't work correctly.
Sample output 1:
gist
repo
workflow
Sample output 2:
delete:packages
public_repo
read:packages
repo:invite
repo:status
Credit: answer by VK
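Returning to the command in the question: the collaborator-permission endpoint itself is valid, so once the token is passed in a header and a real username is substituted in the path, it should return a permission field. A sketch (USERNAME and the octocat/hello-world repo are the question's own placeholders):

# query the permission level of a collaborator on a repository
curl -sS -H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token ${GITHUB_TOKEN}" \
https://api.github.com/repos/octocat/hello-world/collaborators/USERNAME/permission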
I'm using this script:
wget -O C:\FlairnetLab\output\x.csv --http-user='[My User]' --http-password='[Password]' --no-check-certificate https://ca-test.adyen.com/reports/download/MerchantAccount/FlairnetECOM/payments_accounting_report_2021_06_10.csv
But I get a "no file found" response. However, if I type the URL in the browser using the same credentials, I can download the file.
Can someone help me?
The problem is likely to be the encoding of the credentials (both the Adyen-generated username and password include several special characters).
An option is to generate the base64-encoded string username:password on the command line (or with an online generator)
# example on Mac
$ echo -n '<username>:<password>' | openssl base64
cmVwb3J0.......5LX4=
then pass it in the Authorization header
# example with wget
wget --header "Authorization: Basic cmVwb3J0.......5LX4=" https://ca-test.adyen.com/reports/download/MerchantAccount/MyMerchantAccount/payments_accounting_report_2021_01_01.csv
# example with curl
curl -H "Authorization: Basic cmVwb3J0.......5LX4=" -X GET https://ca-test.adyen.com/reports/download/MerchantAccount/MyMerchantAccount/payments_accounting_report_2021_01_01.csv
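Alternatively, curl can build the Basic Authorization header for you from --user; as long as the credentials are single-quoted so the shell doesn't interpret the special characters, this should be equivalent (a sketch, not verified against Adyen):

# curl base64-encodes the credentials itself and sets the Authorization header
curl --user '<username>:<password>' https://ca-test.adyen.com/reports/download/MerchantAccount/MyMerchantAccount/payments_accounting_report_2021_01_01.csv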
PUBLIC_DNS=$(aws ec2 describe-instances --region ${AWS_DEFAULT_REGION} --filters 'Name=tag:Name,Values=udapeople-backend-ec2-*' --query "Reservations[*].Instances[0].PublicDnsName" --output text)
echo ${PUBLIC_DNS}
curl -H "Content-Type: text/plain" \
-H "token: ${CIRCLE_WORKFLOW_ID}" \
--request PUT \
--data ${PUBLIC_DNS} \
https://api.memstash.io/values/public_dns
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
Exited with code exit status 2
CircleCI received exit code 2
Your error isn't with CircleCI but with your curl command. The error message says curl doesn't have a URL to PUT to. I do see that you included a URL in your curl command, so the problem may be in your line endings. Try removing the line endings and running your CircleCI job again. You can also try running the command from your local command line.
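A minimal sketch of that suggestion: put the command on one line and quote the variable expansions, so a broken backslash continuation or a stray carriage return can't split the arguments (URL and token exactly as in the question):

# single line, quoted expansions, no continuations to break
curl -H "Content-Type: text/plain" -H "token: ${CIRCLE_WORKFLOW_ID}" --request PUT --data "${PUBLIC_DNS}" https://api.memstash.io/values/public_dns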
This is not an issue in your code: memstash.io is no longer operating as a website or web service. memstash acted as a memory cache for CD jobs, so you can either find another CD caching service or, a good option, use CircleCI's own caching; search the CircleCI docs for details.
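As a sketch of the CircleCI-native route (the file path and layout below are my own assumption, not from the question; persist_to_workspace and attach_workspace are CircleCI's workspace steps): write the value to a file and share it between jobs through a workspace instead of memstash:

# in the job that computes the value: write it to a workspace directory,
# then persist that directory with CircleCI's persist_to_workspace step
mkdir -p workspace
echo "${PUBLIC_DNS}" > workspace/public_dns.txt
# in a later job, after an attach_workspace step, read it back
PUBLIC_DNS=$(cat workspace/public_dns.txt)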
I am trying to configure Google Cloud Endpoints using Cloud Functions, following the instructions from: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
I have followed the given steps and have reached the point of building the service config into a new ESPv2 Beta Docker image. When I run the command:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
after replacing the hostname, config ID, and project ID, I get the following error:
> -c service-host-name-xxx -p project-id
Using base image: gcr.io/endpoints-release/endpoints-runtime-serverless:2
++ mktemp -d /tmp/docker.XXXX
+ cd /tmp/docker.5l3t
+ gcloud endpoints configs describe service-host-name-xxx.run.app --project=project-id --service=service-host-name-xxx.app --format=json
ERROR: (gcloud.endpoints.configs.describe) NOT_FOUND: Service configuration 'services/service-host-name-xxx.run.app/configs/service-host-name-xxx' not found.
+ error_exit 'Failed to download service config'
+ echo './gcloud_build_image: line 46: Failed to download service config (exit 1)'
./gcloud_build_image: line 46: Failed to download service config (exit 1)
+ exit 1
Any idea what I am doing wrong? Thanks!
My bad. I repeated the steps and got it working, so I guess I made some mistake the first time through. The document works as it states.
I had the same error. Running the script twice makes it work: a service endpoint must already be configured, but it does not yet exist the first time the script tries to fetch the endpoint information with:
gcloud endpoints configs describe service-host-name-xxx.run.app
What I would do (in Cloud Build) is to supply some sort of an "empty" container first. I used the following example at the top of my cloudbuild.yaml:
gcloud run services list \
--platform managed \
--project ${PROJECT_ID} \
--region europe-west1 \
--filter=${PROJECT_ID}-esp-svc \
--format yaml | grep . ||
gcloud run deploy ${PROJECT_ID}-esp-svc \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
--allow-unauthenticated \
--platform managed \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=120
We are using a server I created on Google Cloud Platform to create and manage the other servers there. But when trying to create a new server from the Linux command line with gcloud compute instances create, we receive the following error:
marco#ans-mgmt-01:~/gcloud$ ./create_gcloud_instance.sh app-tst-04 tst,backend-server,bootstrap home-tst 10.20.22.104
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/REMOVED_OUR_PROJECTID/global/images/family/debian-8' was not found
Our script looks like this:
#!/bin/bash
if [ "$#" -ne 4 ]; then
echo "Usage: create_gcloud_instance <instance_name> <tags> <subnet_name> <server_ip>"
exit 1
fi
set -e
INSTANCE_NAME=$1
TAGS=$2
SERVER_SUBNET=$3
SERVER_IP=$4
gcloud compute --project "REMOVED OUR PROJECT ID" instances create "$INSTANCE_NAME" \
--zone "europe-west1-c" \
--machine-type "f1-micro" \
--network "cloudnet" \
--subnet "$SERVER_SUBNET" \
--no-address \
--private-network-ip="$SERVER_IP" \
--maintenance-policy "MIGRATE" \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--service-account "default" \
--tags "$TAGS" \
--image-family "debian-8" \
--boot-disk-size "10" \
--boot-disk-type "pd-ssd" \
--boot-disk-device-name "bootdisk-$INSTANCE_NAME"
# no trailing backslash above, otherwise the next line is passed to gcloud as arguments
./clean_known_hosts.sh $INSTANCE_NAME
On the Google Cloud console (console.cloud.google.com) I enabled the Cloud API access scope for the ans-mgmt-01 server and also tried to create a server from there. That works without problems.
The problem is that gcloud is looking for the image family in your project, not in the debian-cloud project where it actually exists.
This can be fixed by simply adding --image-project debian-cloud.
This way, instead of looking for projects/{yourID}/global/images/family/debian-8, it will look for projects/debian-cloud/global/images/family/debian-8.
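Applied to the script from the question, the change is one extra flag next to --image-family (a sketch showing only the relevant flags; keep the rest as they were):

# point gcloud at the project that hosts the public Debian images
gcloud compute instances create "$INSTANCE_NAME" \
--project "REMOVED OUR PROJECT ID" \
--zone "europe-west1-c" \
--image-family "debian-8" \
--image-project "debian-cloud"
# (remaining flags unchanged from the original script)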
For me the problem was that debian-8 (and by now debian-9 as well) had reached end of life and is no longer supported. Updating to debian-10 or debian-11 fixed the issue.
For me the problem was that debian-9 had reached end of life after all this time; updating to debian-10 fixed the issue.
You could run the command below to see whether the image is available:
gcloud compute images list | grep debian
Below is the result of the command:
NAME: debian-10-buster-v20221206
PROJECT: debian-cloud
FAMILY: debian-10
NAME: debian-11-bullseye-arm64-v20221102
PROJECT: debian-cloud
FAMILY: debian-11-arm64
NAME: debian-11-bullseye-v20221206
PROJECT: debian-cloud
FAMILY: debian-11
So your result gives you an idea of which image families are currently available.
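If you only care about one family, gcloud's standard --filter flag narrows the list without the grep; a sketch (debian-11 is just an example):

# list only images whose family is debian-11
gcloud compute images list --filter="family:debian-11"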