Creating new Cloud Run service revision via API - rest

Do I understand correctly that you cannot create a new Cloud Run service revision via the API?
Looking under Revisions (https://cloud.google.com/run/docs/reference/rest/v1/namespaces.revisions): "Revisions are created by updates to a Configuration."
If we then check Configurations (https://cloud.google.com/run/docs/reference/rest/v1/namespaces.configurations), there are only GET and LIST methods.

It's not that hard, but you first need to understand how the Cloud Run service works. Cloud Run implements the Knative API, and one of Cloud Run's strengths is that it guarantees portability between Cloud Run and any Kubernetes cluster that runs Knative on top.
Thus:
You have the base URL. Choose your region first.
Then use the create-service URL for the creation. Be aware that the namespace_id on Cloud Run (fully managed) is your Project_ID.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-type: application/json" \
-X POST -d #knative.json \
https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/<Project_ID>/services
If you want to update the service (which creates a new revision), use the replace-service API. Same thing, but you need to add the name of the service to the URL and use PUT.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-type: application/json" \
-X PUT -d #knative.json \
https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/<Project_ID>/services/<ServiceName>
Because Cloud Run is compliant with Knative, the content of the file knative.json is a Knative Service document (in JSON in this example):
{
  "apiVersion": "serving.knative.dev/v1",
  "kind": "Service",
  "metadata": {
    "name": "<ServiceName>",
    "namespace": "<Project_ID>"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "image": "gcr.io/knative-samples/helloworld-go",
            "env": [
              {
                "name": "TARGET",
                "value": "Go Sample v2"
              }
            ]
          }
        ]
      }
    }
  }
}
Note the service name and the namespace (project ID) in this document; you need to change both.
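To confirm that the PUT actually produced a new revision, you can list revisions through the same Knative-compatible API; a minimal sketch, reusing the region and <Project_ID> placeholder from above:

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/<Project_ID>/revisions

Each successful update of the service should add one entry to this list.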

Flutter Web: Not loading Images from the FirebaseStorage (Blocked by CORS policy) [duplicate]

I'm trying to download files from Firebase Storage through an XMLHttpRequest, but Access-Control-Allow-Origin is not set on the resource, so it's not possible. Is there any way to set this header on the storage server?
(let [xhr (js/XMLHttpRequest.)]          ; url is the Firebase Storage download URL
  (.open xhr "GET" url)
  (aset xhr "responseType" "arraybuffer")
  (aset xhr "onload" #(js/console.log "bin" (.-response xhr)))
  (.send xhr))
Chrome error message:
XMLHttpRequest cannot load
https://firebasestorage.googleapis.com/[EDITED]
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:3449' is therefore not allowed
access.
From this post on the firebase-talk group/list:
The easiest way to configure your data for CORS is with the gsutil command line tool.
The installation instructions for gsutil are available at https://cloud.google.com/storage/docs/gsutil_install.
Once you've installed gsutil and authenticated with it, you can use it to configure CORS.
For example, if you just want to allow object downloads from your custom domain, put this data in a file named cors.json (replacing "https://example.com" with your domain):
[
  {
    "origin": ["https://example.com"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
Then, run this command (replacing "exampleproject.appspot.com" with the name of your bucket):
gsutil cors set cors.json gs://exampleproject.appspot.com
and you should be set.
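To verify that the configuration took effect, you can read it back from the bucket (same placeholder bucket name as above):

gsutil cors get gs://exampleproject.appspot.com

This should print the JSON document you just set.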
If you need a more complicated CORS configuration, check out the docs at https://cloud.google.com/storage/docs/cross-origin#Configuring-CORS-on-a-Bucket.
The above is now also included in the Firebase documentation on CORS Configuration
Google Cloud now has an inline editor to make this process even easier. No need to install anything on your local system.
Open the GCP console and start a cloud terminal session by clicking the >_ icon button in the top navbar. Or search for "cloud shell editor" in the search bar.
Click the pencil icon to open the editor, then create the cors.json file.
Run gsutil cors set cors.json gs://your-bucket
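If you want to stay entirely in the Cloud Shell terminal, the file can also be created inline with a heredoc instead of the editor; a small sketch, with https://example.com and your-bucket as placeholders to replace:

cat > cors.json <<'EOF'
[
  {
    "origin": ["https://example.com"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
EOF
gsutil cors set cors.json gs://your-bucket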
Just want to add to the answer. Just go to your project in the Google console (console.cloud.google.com/home) and select your project. There, open the terminal, create the cors.json file (touch cors.json), and then follow the answer and edit this file (vim cors.json) as suggested by @frank-van-puffelen.
This worked for me. Cheers!
I am working on a project using Firebase Storage, and the end user needs a way to download the file they uploaded. I was getting a CORS error when the user tried to download the file, but after some research I solved the issue.
Here is how I figured it out:
Download Google Cloud CLI
Log in using the CLI
Create cors.json file in the project directory and type the code below.
[
  {
    "origin": ["*"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
Navigate to the directory containing cors.json with the Google Cloud CLI
In the CLI type: gsutil cors set cors.json gs://<app_name>.appspot.com
Another approach is to use the Google JSON API.
Step 1: Get an access token to use with the JSON API.
To get a token, go to https://developers.google.com/oauthplayground/
Then search for "JSON API" or "Storage".
Select the required scopes, i.e. read, write, full_access (tick the ones you need).
Follow the process to get an access token, which will be valid for an hour.
Step 2: Use the token to call the Google JSON API and update CORS.
Sample curl:
curl -X PATCH \
  'https://www.googleapis.com/storage/v1/b/your_bucket_id?fields=cors' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
    "cors": [
      {
        "maxAgeSeconds": 360000000,
        "method": [
          "GET",
          "HEAD",
          "DELETE"
        ],
        "origin": [
          "*"
        ],
        "responseHeader": [
          "Content-Type"
        ]
      }
    ]
  }'
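To check that the bucket actually picked up the new policy, the same API can read the CORS configuration back; a sketch reusing the placeholder bucket ID and token from above:

curl -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  'https://www.googleapis.com/storage/v1/b/your_bucket_id?fields=cors'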

Logs are not coming in logDNA Console

I have configured the lite version of LogDNA in IBM Cloud,
but whenever I send logs using the curl command, the logs are not getting captured.
The HOST does show up under All Sources, and under All Apps I see myapp.
Here is my curl command:
curl "https://logs.au-syd.logging.cloud.ibm.com/logs/ingest?hostname=EXAMPLE_HOST&mac=C0:FF:EE:C0:FF:EE&ip=10.0.1.101&now=$(date +%s)" \
-u INGETION_KEY: \
-H "Content-Type: application/json; charset=UTF-8" \
-d \
'{
"lines": [
{
"line":"This is an my log line",
"app":"myapp",
"level": "INFO",
"env": "DEV",
"meta": {
"customfield": {
"nestedfield": "nestedvalue"
}
}
}
]
}'
After posting this curl command from my system, I get this response back:
{"status":"ok","batchID":"a9fc7347-e4ef-42f7-9241-63291911cee6:20274:ld62"}
Any help would be appreciated.
TIA
Did you whitelist your domain in the LogDNA settings?
Also, the LogDNA lite version on IBM Cloud only has live tail and does not store any logs. It can be easy to miss the logs if you refresh the browser.

How to access IBM logDNA without going through ibm-console login

I'm trying to access the IBM-provided LogDNA without logging in to the IBM console and navigating to the LogDNA dashboard location.
I have no clue how to proceed with this.
curl "https://logs.logdna.com/logs/ingest?hostname=EXAMPLE_HOST&mac=C0:FF:EE:C0:FF:EE&ip=10.0.1.101&now=$(date +%s)" \
-u INSERT_INGESTION_KEY: \
-H "Content-Type: application/json; charset=UTF-8" \
-d \
'{
"lines": [
{
"line":"This is an awesome log statement",
"app":"myapp",
"level": "INFO",
"env": "production",
"meta": {
"customfield": {
"nestedfield": "nestedvalue"
}
}
}
]
}'
In the above snippet the URL used is a generic one. Instead, I want a URL for my IBM LogDNA instance that is accessible through an access token, so that I can use a snippet like the one above to push logs to LogDNA directly from my code.
Currently, to open the LogDNA dashboard, I log in to the IBM Cloud UI and go to the Observability section.
Is there a way to access this through tokens and have a custom URL I can use for it?
The steps to obtain the dashboard URL using the command line are described in the related IBM Cloud Logging service documentation.
This works for the Activity Tracker the same way:
$ ibmcloud resource service-instance your-instance-name --output json | jq -r '.[0].dashboard_url'
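If you want to jump straight from the terminal into the dashboard, that command can be wrapped with a browser opener; a small sketch, assuming macOS's open (use xdg-open on Linux) and your-instance-name replaced with your instance:

# look up the dashboard URL for the instance and open it in the default browser
open "$(ibmcloud resource service-instance your-instance-name --output json | jq -r '.[0].dashboard_url')"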
LogDNA documentation about REST ingestion is amazing...
But their service is really amazing. And this works for me:
In the LogDNA dashboard, under Settings -> Organization -> API Keys, you can find your key.
Let's say your key is 77777haha77777777777hoho.
In your curl command, replace the second line with this:
-u "77777haha77777777777hoho:77777haha77777777777hoho" \
Entire tested command:
curl "https://logs.logdna.com/logs/ingest?
hostname=EXAMPLE_HOST&mac=C0:FF:EE:C0:FF:EE&ip=10.0.1.101&now=1610830847530" \
-u "77777haha77777777777hoho:77777haha77777777777hoho" \
-H "Content-Type: application/json; charset=UTF-8" \
-d \
'{
"lines":[
{
"timestamp":1610830847530,
"line":"This is an awesome log statement",
"file":"example.log"
}
]
}'

How can I bulk remove expired certificates from Vault

Our Vault storage keeps cluttering up with a massive amount of expired certificates.
There is an option to revoke a certificate using the API or a lease ID, but revoked certificates are still available and can be queried.
The following will only revoke a certificate:
$ curl \
  --header "X-Vault-Token: ..." \
  --request POST \
  --data @payload.json \
  http://127.0.0.1:8200/v1/pki/revoke
Is there a way to permanently remove expired certificates?
There is an endpoint for it:
tidy
This endpoint allows tidying up the storage backend and/or CRL by removing certificates that have expired and are past a certain buffer period beyond their expiration time.
So, to remove all expired certificates, make a POST request to https://<vault-api-url>:<api-port>/v1/<pki-role>/tidy with "tidy_cert_store": true as the payload,
using cURL,
curl -X POST \
  https://<vault-api-url>:<api-port>/v1/<pki-role>/tidy \
  -H 'content-type: application/json' \
  -H 'x-vault-token: c32165c4-212f-2dc2e-cd9f-acf63bdce91c' \
  -d '{
    "tidy_cert_store": true
  }'
The syntax Sufiyan provided appears incorrect (or for an old version). In Vault >1.2 (and maybe earlier) it should be:
curl -X POST \
  -H "X-Vault-Token: $VAULT_TOKEN" \
  -d '{"tidy_cert_store":true}' \
  $VAULT_ADDR/v1/$pki_engine/tidy
This should start the tidy process and return the following response:
{
  "request_id": "",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": [
    "Tidy operation successfully started. Any information from the operation will be printed to Vault's server logs."
  ],
  "auth": null
}
Latest docs on Tidy are https://www.vaultproject.io/api/secret/pki/index.html#tidy
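Recent Vault versions also expose a tidy-status endpoint (added around Vault 1.3; treat the exact version as an assumption and check the docs for your release) to see whether the background tidy run has finished; a sketch using the same variables as above:

# query the status of the most recent tidy operation
curl -H "X-Vault-Token: $VAULT_TOKEN" \
  $VAULT_ADDR/v1/$pki_engine/tidy-status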

ERROR 401 on creation of External (Internal) Master Instance on GCP Cloud SQL

I started the creation of a MySQL 1st Gen read replica with Google Cloud SQL. However, after 5 hours of waiting I gave up. Now I can't delete the instance, and trying to delete it from the cloud terminal just returns an HTTP 401 error.
I gave up and deleted the project...
Meanwhile back to the first part of the script -
ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{"name": "food22",
    "region": "us-central1",
    "databaseVersion": "MYSQL_5_5",
    "onPremisesConfiguration": {"hostPort": "xxx.xxx.xxx.xxx:3306"}}' \
  -X POST \
  https://www.googleapis.com/sql/v1beta4/projects/[Project ID]/instances
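When an instance operation hangs like this, listing the operations for the instance can sometimes show which step is pending or errored; a sketch against the same v1beta4 API, reusing the [Project ID] placeholder and the food22 instance name from above:

curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  "https://www.googleapis.com/sql/v1beta4/projects/[Project ID]/operations?instance=food22"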
(image: orange warning symbol shown against the instance)
I have a nasty feeling that Google has abandoned this on-premises link after I saw this - https://serverfault.com/questions/835108/gcp-configure-an-external-master-issue/835266 - and looked at answer two...
Come on Google - get your act together!