I think I just created a signed URL with gsutil, but it never asked me for the private key's password; it just dropped back to a command prompt after I issued the command, without any output.
I tried to access the bucket using commondatastorage.googleapis.com as the URL and the secret strings as username/password, but I get 403 Forbidden errors.
How do I access my buckets through my S3 browser app?
Thanks.
In Google, images are hosted on a CDN-type URL, and I tried to download an image from that CDN, but it throws an error in C#. I used the C# code attached below.
using (var webClient = new WebClient())
{
    // Download the raw image bytes from the CDN URL and write them to disk.
    byte[] imageBytes = webClient.DownloadData(imageUrl);
    System.IO.File.WriteAllBytes(@"E:\Temp\img2.jpeg", imageBytes);
}
URL: https://lh6.googleusercontent.com/vpsleVfq12ZnALrwbIUqCTa0Fpqa5C8IUViGkESOSqvHshQpKCyOq4wsRfTcadG2WYgcW3m0yq_6M2l_IrSM3qr35spIML9iyIHEULwRu4mWw4CUjCwpVfiWnd5MXPImMw=w1280
Thanks in advance.
In GCP Cloud CDN, you can use a signed URL or signed cookies to authorize users and provide them with a time-limited token for accessing your protected content. Cloud CDN blocks requests that lack a signature query parameter or a Cloud-CDN-Cookie HTTP cookie, and it rejects requests with invalid (or otherwise malformed) request parameters. Because of this, I suggest reviewing your browser client's security settings and how authentication is managed for your CDN URL; some clients store cookies by default if the security policy allows it. Also review how ingress security is configured for your CDN URL: when you use CDN URLs with signed cookies, responses to signed and unsigned requests are cached separately, so a successful response to a valid signed request is never used to serve an unsigned request.
On the other hand, if you are using a signed CDN URL to provide time-limited secure access to a file, there are some steps that you need to follow first:
Ensure that Cloud CDN is enabled.
If necessary, update to the latest version of the Google Cloud CLI:
`gcloud components update`
Creating keys for your signed URLs
To create keys, follow these steps.
1. In the Google Cloud Console, go to the Cloud CDN page.
2. Click Add origin.
3. Select an HTTP(S) load balancer as the origin.
4. Select backend services or backend buckets. For each one:
- Click Configure, and then click Add signing key.
- Under Name, give the new signing key a name.
- Under Key creation method, select Automatically generate or Let me enter.
- If you're entering your own key, type the key into the text field (a sketch for generating a suitable key value is shown after this list).
- Click Done.
- Under Cache entry maximum age, provide a value, and select a Unit of time from the drop-down list. You can choose among second, minute, hour, and day. The maximum amount of time is three (3) days.
5. Click Save.
6. Click Add.
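If you choose Let me enter and want to generate a key value yourself, the documentation generates it as 16 random bytes (128 bits) encoded with base64url. A minimal Python sketch of the same idea; the file name sign-url-key-file is just an example:
import base64
import secrets

# 16 random bytes (128 bits), base64url-encoded, which is the format Cloud CDN expects for a key value.
key_value = base64.urlsafe_b64encode(secrets.token_bytes(16)).decode("utf-8")

# Write it to a file so it can be passed later to gcloud compute sign-url via --key-file.
with open("sign-url-key-file", "w") as key_file:
    key_file.write(key_value)

print(key_value)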
Configuring Cloud Storage permissions.
Before you run the following command, add at least one key to a backend bucket in your project; otherwise, the command fails with an error because the Cloud CDN cache fill service account is not created until you add one or more keys for the project. Replace PROJECT_NUM with your project number and BUCKET with your storage bucket.
gsutil iam ch \
  serviceAccount:service-PROJECT_NUM@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer \
  gs://BUCKET
To list the keys on a backend service or backend bucket, run one of the following commands:
gcloud compute backend-services describe BACKEND_NAME
gcloud compute backend-buckets describe BACKEND_NAME
Sign URLs and distribute them.
You can sign URLs by using the gcloud compute sign-url command or by using code that you write yourself. If you need many signed URLs, custom code provides better performance (a sketch of such code follows the example command below).
The gcloud compute sign-url command reads and decodes the base64url-encoded key value from KEY_FILE_NAME, and then outputs a signed URL that you can use for GET or HEAD requests for the given URL.
gcloud compute sign-url \
"https://example.com/media/video.mp4" \
--key-name my-test-key \
--expires-in 30m \
--key-file sign-url-key-file
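If you go the custom-code route instead of gcloud, the general idea per the Cloud CDN documentation is to append Expires and KeyName query parameters to the URL, compute an HMAC-SHA1 of that string with the decoded key, and append the base64url-encoded signature. A minimal Python sketch; the URL, key name, and key file below are just the same example values as above:
import base64
import datetime
import hashlib
import hmac

def sign_cdn_url(url, key_name, key_file_path, expires_in):
    # Read the base64url-encoded key and decode it to raw bytes.
    with open(key_file_path) as f:
        key_bytes = base64.urlsafe_b64decode(f.read().strip())
    # Expiration time as a Unix timestamp.
    expires = int((datetime.datetime.now(datetime.timezone.utc) + expires_in).timestamp())
    separator = "&" if "?" in url else "?"
    url_to_sign = f"{url}{separator}Expires={expires}&KeyName={key_name}"
    # Sign the URL and append the base64url-encoded signature.
    signature = hmac.new(key_bytes, url_to_sign.encode("utf-8"), hashlib.sha1).digest()
    return url_to_sign + "&Signature=" + base64.urlsafe_b64encode(signature).decode("utf-8")

signed = sign_cdn_url("https://example.com/media/video.mp4", "my-test-key",
                      "sign-url-key-file", datetime.timedelta(minutes=30))
print(signed)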
In this link, you can find more info related to signed URLs and signed cookies.
You can't download an image that way, since you need to provide an OAuth token, and you need to have the profile scope enabled:
var GoogleAuth; // Google Auth object.
function initClient() {
  gapi.client.init({
    'apiKey': 'YOUR_API_KEY',
    'clientId': 'YOUR_CLIENT_ID',
    'scope': 'https://www.googleapis.com/auth/userinfo.profile',
    'discoveryDocs': ['https://discovery.googleapis.com/discovery/v1/apis']
  }).then(function () {
    GoogleAuth = gapi.auth2.getAuthInstance();
    // Listen for sign-in state changes.
    GoogleAuth.isSignedIn.listen(updateSigninStatus);
  });
}
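If the issue really is a missing OAuth token, then once you have obtained one (the snippet above uses the JavaScript client for that), the download itself is just an HTTP GET with an Authorization header. A rough Python sketch, where access_token and image_url are placeholders and the requests package is assumed:
import requests

access_token = "ya29.your-oauth-access-token"  # placeholder token
image_url = "https://lh6.googleusercontent.com/...=w1280"  # placeholder CDN URL

# Fetch the image with the token attached and save it to disk.
response = requests.get(image_url, headers={"Authorization": f"Bearer {access_token}"})
response.raise_for_status()
with open("img2.jpeg", "wb") as f:
    f.write(response.content)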
I am trying to create a signed URL for a private object stored in cloud storage.
The storage client is being created using a service account that has the Storage Admin role:
import datetime
from google.cloud import storage

storage_client = storage.Client.from_service_account_json('service.json')
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(blob_name)
url = blob.generate_signed_url(
    version="v4",
    # This URL is valid for 15 minutes.
    expiration=datetime.timedelta(minutes=15),
    # Allow GET requests using this URL.
    method="GET",
)
This generates a URL that when accessed via a browser gives this error:
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details>
</Error>
What am I missing here? The service account has no problem interacting with the bucket or blob normally - I can download it/etc. It's just the Signed URL that doesn't work. I can make the object public and then download it - but that defeats the purpose of being able to generate a signed URL.
All of the other answers I've found seem to focus on issues using application default credentials or are very old examples from the v2 API.
Clearly there's something about how I'm using the service account - do I need to explicitly give it permissions on that particular object? Is the Storage Admin role not enough in this context?
Going crazy with this. Please help!
I am trying to give the Google CDN service account access to my bucket as said here: https://cloud.google.com/cdn/docs/using-signed-urls
gsutil iam ch serviceAccount:service-{PROJECT_NUMBER}@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer gs://{BUCKET}
But the response is:
BadRequestException: 400 Invalid argument
Adding it via the cloud console is also impossible, it says "Email addresses and domains must be associated with an active Google Account or Google Apps account."
Am I missing something or is this a bug?
The Cloud CDN cache fill service account is created when you enable signed URLs. The error message suggests there's a problem with the project number or you haven't yet enabled signed URLs for that project. You can enable signed URLs by following the instructions at https://cloud.google.com/cdn/docs/using-signed-urls#creatingkeys. Make sure you enable signed URLs for a backend service or backend bucket in the same project you specify in the gsutil command.
I have a repo with a shell script and want to put a single command in the readme file to run it, like:
bash <(curl -L <path_to_raw_script_file>)
Raw file URLs for GitHub Enterprise look like this: https://raw.github.ibm.com/<user>/<repo>/<branch>/<path_to_file>?token=<token>, where <token> is unique to the file and generated when accessing it via the Raw button in the repository or with the ?raw=true suffix in the URL.
The problem is that tokens get invalidated after a few days or when the file is updated, and I wouldn't like to update the command above each time the token becomes invalid. Is there a way to deal with it?
I know there is a way for the user to create a personal token and use it to log in to GitHub from the machine they're running the script from, but I wanted to keep it as simple as possible.
I was thinking of something like auto-generating that raw file URL (since a user reading the readme file on GitHub surely has access to the script located in the same repo), but I am not sure if that's possible.
No input, one-liner.
You can get this link by clicking the raw button in the GHE UI, just remove the token query param at the end.
curl -sfSO https://${USER}:${TOKEN}#${GHE_DOMAIN}/raw/${REPO_OWNER}/${REPO_NAME}/${REF}/${FILE}
I believe you'll always need the tokens; however, if you'd like to automate the process, you can dynamically request tokens associated with a GitHub OAuth app rather than with any particular user profile.
https://developer.github.com/enterprise/2.13/apps/building-oauth-apps/authorizing-oauth-apps/
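As a rough sketch of the web application flow described on that page: after a user authorizes the OAuth app and your redirect URI receives a code, you exchange it for an access token. In Python, with GHE_DOMAIN, CLIENT_ID, CLIENT_SECRET, and code as placeholders and the requests package assumed:
import requests

GHE_DOMAIN = "github.example.com"               # placeholder GitHub Enterprise host
CLIENT_ID = "your-oauth-app-client-id"          # placeholder
CLIENT_SECRET = "your-oauth-app-client-secret"  # placeholder
code = "code-received-on-the-redirect-uri"      # placeholder

# Exchange the authorization code for an access token.
response = requests.post(
    f"https://{GHE_DOMAIN}/login/oauth/access_token",
    data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET, "code": code},
    headers={"Accept": "application/json"},
)
response.raise_for_status()
token = response.json()["access_token"]
# The token can then be used when fetching the raw file, as in the curl one-liner above.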
I know there is a way for the user to create a personal token and use it to log in to GitHub from the machine they're running the script from, but I wanted to keep it as simple as possible.
Actually, when using GCM (Git Credential Manager), the PAT will be provided when accessing the raw.xxx URL.
But only with GCM v2.0.692, which supports those URLs. See PR 599.
Fix GitHub Enterprise API URL for raw source code links
This is a simple fix of #598 for GitHub Enterprise instances that use a raw. hostname prefix for raw source code links.
I've verified this fix locally by swapping out the GitHub.dll that is used by Visual Studio.
So it now checks for 'raw.' in the hostname and removes it to get the correct GHE API URL.
I have been using the Google Speech API to transcribe audio to text from my PHP app (using the Google Cloud PHP Client) for several months without any problem. But my calls have now started to return 403 errors with status "PERMISSION_DENIED" and message "The caller does not have permission".
I'm using the Speech API together with Google Storage. I'm authenticating using a service account and sending my audio data to Storage. That's working; the file gets uploaded. So I understand - but I might be wrong? - that "the caller" does not have permission to then read the audio data from Storage.
I've been playing with permissions through the Google Console without success. I've read the docs but am quite confused. The service account I am using (I guess this is "the caller"?) has owner permissions on the project. And everything used to work fine, I haven't changed a thing.
I'm not posting code because if I understand correctly my app code isn't the issue - it's rather my Google Cloud settings. I'd be grateful for any idea or clarifications of concepts!
Thanks.
Being an owner of the project doesn't necessarily imply that the service account has read permission on the object. It's possible that the object was uploaded by another account that specified a private ACL or similar.
Make sure that the service account has access to the object by giving it the right permissions on the entire bucket or on the specific object itself.
You can do so using gsutil acl. More information and additional methods may be found in the official documentation.
For instance, the following command gives READ permission on an object to your service account:
gsutil acl ch -u serviceAccount@domain.com:R gs://bucket/object
And this command gives READ permission on all objects in a bucket to your service account:
gsutil acl ch -r -u serviceAccount@domain.com:R gs://bucket
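The same grant can also be made from a client library if that fits your app better. A small Python sketch using the google-cloud-storage package, with the bucket, object, and service account email as placeholders:
from google.cloud import storage

client = storage.Client()
blob = client.bucket("your-bucket").blob("path/to/audio.flac")

# Give the service account READ access to this one object.
blob.acl.user("my-service-account@my-project.iam.gserviceaccount.com").grant_read()
blob.acl.save()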
In Google Cloud Vision, when you're creating credentials with a service account key, you have to create a role, set it to Owner, and grant it full access permissions.