In Google, images are hosted on a CDN-style URL. I tried to download an image from that CDN URL in C#, but it throws an error. I used the C# code below.
using (var webClient = new WebClient())
{
    // imageUrl holds the CDN URL shown below
    byte[] imageBytes = webClient.DownloadData(imageUrl);
    System.IO.File.WriteAllBytes(@"E:\Temp\img2.jpeg", imageBytes);
}
URL: https://lh6.googleusercontent.com/vpsleVfq12ZnALrwbIUqCTa0Fpqa5C8IUViGkESOSqvHshQpKCyOq4wsRfTcadG2WYgcW3m0yq_6M2l_IrSM3qr35spIML9iyIHEULwRu4mWw4CUjCwpVfiWnd5MXPImMw=w1280
Thanks in advance.
In GCP Cloud CDN, you can use signed URLs or signed cookies to authorize users and give them a time-limited token for accessing your protected content. Cloud CDN does not block requests that lack a signature query parameter or a Cloud-CDN-Cookie HTTP cookie; it rejects requests with invalid (or otherwise malformed) request parameters. Because of this, I suggest reviewing your browser/client security settings and how authentication is managed for your CDN URL; some clients store cookies by default if the security policy allows it. Also review how your CDN URL security ingress is configured, because when you use CDN URLs with signed cookies, responses to signed and unsigned requests are cached separately, so a successful response to a valid signed request is never used to serve an unsigned request.
On the other hand, if you are using a signed CDN URL to give secure access to a file for a limited amount of time, there are some steps you need to follow first:
Ensure that Cloud CDN is enabled.
If necessary, update to the latest version of the Google Cloud CLI:
`gcloud components update`
Creating keys for your signed URLs
To create keys, follow these steps.
1. In the Google Cloud Console, go to the Cloud CDN page.
2. Click Add origin.
3. Select an HTTP(S) load balancer as the origin.
4. Select backend services or backend buckets. For each one:
   - Click Configure, and then click Add signing key.
   - Under Name, give the new signing key a name.
   - Under Key creation method, select Automatically generate or Let me enter.
   - If you're entering your own key, type the key into the text field.
   - Click Done.
   - Under Cache entry maximum age, provide a value and select a Unit of time from the drop-down list. You can choose among second, minute, hour, and day. The maximum amount of time is three (3) days.
5. Click Save.
6. Click Add.
Configuring Cloud Storage permissions.
Before you run the following command, add at least one key to a backend bucket in your project; otherwise, the command fails with an error because the Cloud CDN cache fill service account is not created until you add one or more keys for the project. Replace PROJECT_NUM with your project number and BUCKET with your storage bucket.
gsutil iam ch \
  serviceAccount:service-PROJECT_NUM@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer \
  gs://BUCKET
To list the keys on a backend service or backend bucket, run one of the following commands:
gcloud compute backend-services describe BACKEND_NAME
gcloud compute backend-buckets describe BACKEND_NAME
Sign URLs and distribute them.
You can sign URLs by using the gcloud compute sign-url command or by using code that you write yourself. If you need many signed URLs, custom code provides better performance (see the sketch after the gcloud example below).
This command reads and decodes the base64url encoded key value from KEY_FILE_NAME, and then outputs a signed URL that you can use for GET
or HEAD requests for the given URL.
gcloud compute sign-url \
"https://example.com/media/video.mp4" \
--key-name my-test-key \
--expires-in 30m \
--key-file sign-url-key-file
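If you do need to generate many signed URLs from code, here is a minimal C# sketch of the signing scheme the Cloud CDN documentation describes: append Expires and KeyName to the URL, compute an HMAC-SHA1 over that string with the base64url-decoded key from your key file, and append the base64url-encoded digest as Signature. Treat it as an illustration of the algorithm rather than a drop-in utility; the class and method names are my own.

using System;
using System.Security.Cryptography;
using System.Text;

static class CdnUrlSigner
{
    // Decodes a base64url ("web-safe base64") string, e.g. the key stored in your key file.
    static byte[] DecodeBase64Url(string s)
    {
        string padded = s.Replace('-', '+').Replace('_', '/');
        padded = padded.PadRight(padded.Length + (4 - padded.Length % 4) % 4, '=');
        return Convert.FromBase64String(padded);
    }

    static string EncodeBase64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).Replace('+', '-').Replace('/', '_');

    // Returns URL?Expires=...&KeyName=...&Signature=... valid for the given lifetime.
    public static string Sign(string url, string keyName, string base64UrlKey, TimeSpan lifetime)
    {
        long expires = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
        string separator = url.Contains("?") ? "&" : "?";
        string urlToSign = $"{url}{separator}Expires={expires}&KeyName={keyName}";

        using var hmac = new HMACSHA1(DecodeBase64Url(base64UrlKey));
        byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(urlToSign));
        return $"{urlToSign}&Signature={EncodeBase64Url(digest)}";
    }
}

For example, CdnUrlSigner.Sign("https://example.com/media/video.mp4", "my-test-key", keyFromKeyFile, TimeSpan.FromMinutes(30)) mirrors the gcloud invocation above, with keyFromKeyFile being the base64url string read from your key file.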
You can find more info related to signed URLs and signed cookies in the Cloud CDN documentation.
You can't download an image that way, since you need to provide an OAuth token, and you need to have the profile scope enabled:
var GoogleAuth; // Google Auth object.
function initClient() {
  gapi.client.init({
    'apiKey': 'YOUR_API_KEY',
    'clientId': 'YOUR_CLIENT_ID',
    'scope': 'https://www.googleapis.com/auth/userinfo.profile',
    'discoveryDocs': ['https://discovery.googleapis.com/discovery/v1/apis']
  }).then(function () {
    GoogleAuth = gapi.auth2.getAuthInstance();
    // Listen for sign-in state changes.
    GoogleAuth.isSignedIn.listen(updateSigninStatus);
  });
}
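Once you have an access token from a sign-in flow like the one above, you can attach it to the download request from C#. This is only a hedged sketch of the general pattern (HttpClient with an Authorization: Bearer header); whether the googleusercontent CDN accepts the token depends on how that particular content is protected, and the token value and paths below are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AuthorizedImageDownload
{
    static async Task Main()
    {
        // Placeholder: a token obtained from your OAuth sign-in flow.
        string accessToken = "YOUR_ACCESS_TOKEN";
        string imageUrl = "https://lh6.googleusercontent.com/...=w1280"; // the CDN URL from the question

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Throws HttpRequestException if the server still answers with a non-success status.
        byte[] imageBytes = await client.GetByteArrayAsync(imageUrl);
        await System.IO.File.WriteAllBytesAsync(@"E:\Temp\img2.jpeg", imageBytes);
    }
}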
Related
Response status code does not indicate success: 401 (Unauthorized) When calling Azure Blockchain
I am getting the above error when calling the Azure Blockchain REST API. I have checked all the permissions and I am getting the access token correctly, but when I call the API to get the applications, I get the above-mentioned error.
The issue was fixed by following the steps below:
1) Go to Azure Portal-> App Service -> [Your App Service]-Api
2) Go to Authentication/Authorization Blade
Allow App Service Authentication, choose Allow anonymous requests (no action), and choose Azure Active Directory as the Authentication Provider.
Click on Advanced and fill in the options as follows:
Client ID is the same as App ID in the AzureAD Application Registration
Client Secret is the same as the API Key you generated from the Keys section in the AzureAD Application Registration
Issuer URL is https://sts.windows.net/{AZUREADTENANTID} (you can get the Azure AD tenant ID from the Azure AD Properties)
Allowed Token Audiences should have the following value:
https://{YOURBLOCKCHAINAPIURL-API}.azurewebsites.net/.auth/login/aad/callback
3) Go to Azure Active Directory
Now, go to Azure AD, navigate to App Registrations, and click BlockChain API (or the name you chose for your Azure AD App Registration when you configured blockchain the first time):
Click Settings and click Reply URLs.
Add the following URLs there:
Blockchain Workbench URL (it should be already there)
Blockchain API Base URL (the one that has -API in it)
Blockchain API Base URL with Callback (as indicated below, but use your own URL and add /.auth/login/aad/callback)
The getpostman.com/oauth2/callback URL will be used later to test the API using the Postman app (an app used to test APIs), so please add it as well.
4) Save your settings and then go to the Manifest (next to Settings in the Blockchain API AzureAD App Properties)
Set the manifest entry oauth2AllowImplicitFlow to true
Save Configuration.
That's it. Now we need to test the API. You can download Postman to test it, but the configuration of Postman is a little bit long, so I would suggest that you sign up for free on this service: https://www.wintellectnow.com/Videos/Watch?videoId=blockchain-on-azure
Use the code FREETRIAL to sign up. It will require a credit card but it won't be charged; use any prepaid or postpaid card. Watching this video is highly recommended, especially at minute 53, as it explains how to use the API with Postman (remember to disable the trial to prevent the card from being charged after 7 days 😊).
Here is a sample token generated using the built-in auth sample code that comes with the Azure Blockchain Samples on GitHub (you can download it from https://github.com/Azure-Samples/blockchain/tree/master/blockchain-workbench/auth-samples/bearer-token-retrieval/static). If you are going to test with it, you must add http://localhost to the Reply URLs above, and you have to host it on your localhost IIS.
(The screenshots in the original post show the token added to the request header in Postman, the response returned before the authorization token is supplied, the successful response once the API is called with the token, and the response headers after a successful authorization.)
To sum it up, the Blockchain API requires an OAuth2 authentication token. This token isn't passed by the Swagger UI or the application you built; you need to modify your application to authenticate against AAD OAuth2 and obtain a token from Azure AD, then pass that token in the request header when you call the API.
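As a rough illustration of that flow, the C# sketch below obtains a token from Azure AD with the OAuth2 client-credentials grant and then sends it in the Authorization header. The tenant ID, client ID, secret, resource URI, and API URL are all placeholders for your own registration and deployment, not values from this post.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class BlockchainApiCall
{
    static async Task Main()
    {
        // Placeholders from your Azure AD app registration and Blockchain API deployment.
        string tenantId     = "AZUREADTENANTID";
        string clientId     = "APP_ID";
        string clientSecret = "API_KEY";
        string resource     = "APP_ID_URI_OF_THE_BLOCKCHAIN_API"; // assumption: the AAD resource your API expects
        string apiUrl       = "https://YOURBLOCKCHAINAPIURL-API.azurewebsites.net/api/v1/applications";

        using var http = new HttpClient();

        // 1. Obtain a token from Azure AD (v1 endpoint, client-credentials grant).
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"]    = "client_credentials",
            ["client_id"]     = clientId,
            ["client_secret"] = clientSecret,
            ["resource"]      = resource
        });
        var tokenResponse = await http.PostAsync(
            $"https://login.microsoftonline.com/{tenantId}/oauth2/token", form);
        using var tokenJson = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        string accessToken = tokenJson.RootElement.GetProperty("access_token").GetString();

        // 2. Call the API with the token in the request header.
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var apiResponse = await http.GetAsync(apiUrl);
        Console.WriteLine(apiResponse.StatusCode); // 200 OK instead of 401 once the token is accepted
    }
}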
One additional tip: don't select the default machine size while creating the workbench; choose a better-performing machine, such as VM size Standard F2s_v2 (2 vCPUs, 4 GB memory), approx. 50 USD per month.
Discussion on Microsoft tech community site
The original Post of the Author
We are currently experiencing an outage in South Central US
https://azure.microsoft.com/en-us/status/
Azure AD is also impacted.
You will want to monitor the Azure Status Page for further updates. Unfortunately we cannot do anything until the problem has been mitigated by engineering.
After the issue has been mitigated, let us know if you are still seeing issues.
I am trying to give the Google CDN service account access to my bucket as said here: https://cloud.google.com/cdn/docs/using-signed-urls
gsutil iam ch serviceAccount:service-{PROJECT_NUMBER}@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer gs://{BUCKET}
But the response is:
BadRequestException: 400 Invalid argument
Adding it via the cloud console is also impossible, it says "Email addresses and domains must be associated with an active Google Account or Google Apps account."
Am I missing something or is this a bug?
The Cloud CDN cache fill service account is created when you enable signed URLs. The error message suggests there's a problem with the project number or you haven't yet enabled signed URLs for that project. You can enable signed URLs by following the instructions at https://cloud.google.com/cdn/docs/using-signed-urls#creatingkeys. Make sure you enable signed URLs for a backend service or backend bucket in the same project you specify in the gsutil command.
Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
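If it helps to see the same idea outside Ruby, here is a rough C# sketch using the Google.Cloud.Storage.V1 client, matching the language of the first question above. It is an assumption-laden illustration: FromServiceAccountPath is the factory method I recall from older versions of that library (newer releases expose credential-based factories instead), so verify the exact API against the version you use.

using System;
using System.Net.Http;
using Google.Cloud.Storage.V1;

class SignedUrlFromKeyFile
{
    static void Main()
    {
        // Path to the full service account key file you downloaded; the ambient
        // App Engine credentials alone are not enough for signing.
        var signer = UrlSigner.FromServiceAccountPath("path/to/keyfile.json");

        // Bucket and object names are placeholders.
        string url = signer.Sign("your-bucket", "some/object.jpeg",
                                 TimeSpan.FromHours(1), HttpMethod.Get);
        Console.WriteLine(url);
    }
}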
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution as of google-cloud-storage gem version 1.27 in August 2020 is documented here. My Google::Auth.get_application_default as in the documentation returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash as the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: it must be the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your Service Account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I think these are project-global rights; unfortunately I couldn't get it to work with more fine-grained permissions (i.e. one app being admin only for a certain Storage bucket).
For every Google Compute instance, there is a default service account like this:
1234567890123-compute#developer.gserviceaccount.com
I can create my instance with the proper scope (i.e. https://www.googleapis.com/auth/devstorage.full_control) and use this account to make API requests.
On this page: https://cloud.google.com/storage/docs/authentication#service_accounts it says:
Every project has a service account associated with it, which may be used for authentication and to enable advanced features such as Signed URLs and browser uploads using POST.
This implies that I can use this service account to created Signed URLs. However, I have no idea how to create a signed URL with this service account since I can't seem to get the private key (.p12 file) associated with this account.
I can create a new, separate service account from the developer console, and that has the option of downloading a .p12 file for signing, but the project level service accounts do not appear under the "APIs and auth / Credentials" section. I can see them under "Project / Permissions", but I can't do anything with them there.
Am I missing some other way to retrieve the private key for these default accounts, or is there no way to sign urls when using them?
You can use the .p12 key of any of your service accounts while you're authenticated through your main account, a GCE service account, or another service account that has appropriate permissions on the bucket and the file.
In this case, just create a service account, download its .p12 key, and use the following command to sign your URL:
$ gsutil signurl -d 10m privatekey.p12 gs://bucket/foo
You can also authenticate as a different service account using the following command:
gcloud auth activate-service-account service-account-email --key-file key.p12
You can list and switch your accounts using these commands:
$ gcloud auth list
$ gcloud config set account ACCOUNT
We have a build system on our network that frequently hits the github api limit for our company's IP address. This, of course, also blocks local developers.
The readme indicates that we should be able to authenticate for more requests, but I can't see how.
The Github API has a 60-requests-per-hour rate-limit for
non-authenticated use. You'll likely never hit this as TSD uses local
caching and the definition files are downloaded from Github RAW urls.
If you need some more then a scope-limited Github OAuth token can be
used to boost the limit to 5000.
From the tsd page on npm:
.tsdrc
This is an optional JSON-encoded file to define global settings. TSD looks for it in the user's home directory (e.g. %USERPROFILE% on Windows, $HOME / ~ on Linux), and in the current working directory.
"proxy" - Use a http proxy
Any standard http-proxy as supported by the request package.
{
"proxy": "http://proxy.example.com:88"
}
"token" - Github OAuth token:
The OAuth token can be used to boost the GitHub API rate-limit from 60 to 5000 (non-cached) requests per hour. This token needs just 'read-only access to public information', so no additional OAuth scopes are necessary.
{
"token": "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
}
You can create this token on Github.com:
Go to https://github.com/settings/tokens/new
Deselect all scopes to create a token with just basic authentication.
(verify you really deselected all scopes)
(wonder why these presets were set??)
Enter an identifying name, something like "TSD Turbo 5000"
Create the token.
Copy the hex-string to the token element in the .tsdrc file.
Verify enhanced rate-limit using $ tsd rate
Change or revoke the token at any time on https://github.com/settings/applications
Note: keep in mind the .tsdrc file is not secured. Don't use a token with additional scope unless you know what you are doing.
The bare 'no scope' token is relatively harmless, as it gives 'read-only access to public information', the same as any non-authenticated access. But it does identify any requests done with it as being yours, so it is still your responsibility to keep the token private.
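If you want to confirm outside tsd that a token actually raises the limit, you can call GitHub's rate-limit endpoint directly and send the token in the Authorization header. A minimal C# sketch, using the same placeholder token value as above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GithubRateLimitCheck
{
    static async Task Main()
    {
        string token = "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"; // placeholder token

        using var client = new HttpClient();
        client.DefaultRequestHeaders.UserAgent.ParseAdd("rate-limit-check"); // GitHub requires a User-Agent
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("token", token);

        // The "limit" field should read 5000 instead of 60 when the token is accepted.
        string json = await client.GetStringAsync("https://api.github.com/rate_limit");
        Console.WriteLine(json);
    }
}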