I have developed an app that uses AWS services, but I could not get it working. I have an "access key" and a "secret key", but when I run the S3 uploader I get the error "No such bucket" and a message saying I am not signed up. I think that when I created the account, I did not complete the "payment method" step.
Does AWS not provide a test mode? I am confused; please suggest the right way to do this.
Thanks in advance.
I help maintain the AWS SDK for iOS. Building off the suggestions from Brad:
Make sure you can access the S3 console from the AWS website. This will ensure you have an active and valid account.
Make sure you have correctly copied the access and secret keys into Constants.h in the S3_Uploader sample application.
The sample creates a unique bucket name based on your access key. If this is failing for some reason, you can update Constants.m in the sample to use your own custom name (or use a bucket that you've already created via the console).
It sounds like you don't have an active AWS account.
Do you have one? Can you access your bucket from a regular PC? I am guessing you can't. Make sure you can access your account and bucket from a regular desktop before doing it on your iPhone. You need to go into the Management Console and create an S3 bucket; if you don't, you will get that error. (Either that, or you are trying to access the wrong one.)
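If you want a quick way to verify the account and bucket from a desktop before touching the iPhone app, a short script can do it. A minimal sketch, assuming the boto3 library and a hypothetical bucket name:

```python
# Sanity-check credentials and bucket access from a regular machine.
# "my-test-bucket" is a hypothetical name; substitute your own.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

try:
    s3.head_bucket(Bucket="my-test-bucket")
    print("Bucket exists and is accessible.")
except ClientError as e:
    # A 404 here is the same "no such bucket" condition the app reports.
    print("Problem:", e.response["Error"]["Code"])
```

If this fails with a 403 or a "not signed up" style error, the problem is the account itself, not the iOS code.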
I followed this tutorial: https://developers.google.com/assistant/transactions/digital/dev-guide-digital-consumables
Everything works great up until the point when it's time to consume the product.
When calling https://actions.googleapis.com/v3/conversations/{sessionId}/entitlement:consume
it returns:
{"error":{"code":403,"message":"The caller does not have permission","status":"PERMISSION_DENIED"}}
I tried with the same JWT as I used to make a purchase as well as with a newly generated one. I'm also sure that entitlement.purchaseToken is successfully retrieved.
Any ideas?
I figured it out!
My app is using a Service Account key from another project to generate the JWT, due to a common app in the Google Play Console, which enables me to share purchase entitlements across all my Actions.
As it turned out, this key cannot be used when calling the consume endpoint: https://actions.googleapis.com/v3/conversations/${convId}/entitlement:consume
To call it successfully, I have to use the Service Account key from the project that the Action belongs to and generate a new JWT, and voila! Everything works as it should.
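For anyone hitting the same wall, here is a minimal sketch of that working flow in Python: mint an access token from the Service Account key of the project the Action belongs to, then call the consume endpoint. The scope and request body shown follow the dev guide linked above, but treat them as assumptions and verify against the current docs:

```python
# Consume a digital-goods entitlement using the Action project's own
# service account key (NOT the shared Play Console project's key).
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

SCOPE = "https://www.googleapis.com/auth/actions.purchases.digital"

creds = service_account.Credentials.from_service_account_file(
    "action-project-key.json",  # key from the Action's own project
    scopes=[SCOPE],
)
creds.refresh(Request())  # obtains a short-lived access token

def consume(conv_id: str, purchase_token: str) -> None:
    url = (f"https://actions.googleapis.com/v3/conversations/"
           f"{conv_id}/entitlement:consume")
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {creds.token}"},
        json={"purchaseToken": purchase_token},
    )
    resp.raise_for_status()  # a 403 here suggests the wrong project's key
```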
I am facing the challenge of querying the Bing Ads API to get a couple of metrics from it.
I am using Apache Airflow DAGs hosted on a remote Kubernetes cluster to do so. It is a nice way to automate and schedule tasks.
Now, the documentation is rather light on the point of gaining access to the API.
I have followed this https://learn.microsoft.com/en-us/advertising/guides/authentication-oauth-identity-platform?view=bingads-13#registerapplication
and the official SDK docs https://github.com/BingAds/BingAds-Python-SDK/.
I am failing at authenticating when querying, since I am lacking a couple of pieces of information.
When authenticating using a "refresh token" and "redirect URI", I have neither. (Class OAuthWebAuthCodeGrant here: https://github.com/BingAds/BingAds-Python-SDK/blob/294d01eea57d80ba381a42cde8d006fc318af056/bingads/authorization.py#L566)
When using a different method (Class OAuthDesktopMobileAuthCodeGrant here: https://github.com/BingAds/BingAds-Python-SDK/blob/294d01eea57d80ba381a42cde8d006fc318af056/bingads/authorization.py#L532), I fail with:
AADSTS700016: Application with identifier '<someidentifier>' was not found in the directory '<somethingelse>'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
Thank you very much in advance! If you need more details, let me know!
Also, about the documentation in general: if I can help make it more "newb"-friendly, let me know!
Edit 1:
Sadly, while there has been some traffic on this question, nobody seems to be able to answer.
I will specify the setup a bit further.
We use Airflow DAGs to request daily updates from the API. For this, we need to authenticate. The authentication comes from a "new device" every time, since the code runs on a k8s cluster, which allocates the jobs dynamically to its pods.
For authentication, we ventured into different solutions, but all require some form of human interaction to get the refresh token into the DAG.
Is there any solution that allows for hands-free, daemon-like server-to-server communication?
This link sheds some light on what we are looking for: https://learn.microsoft.com/en-us/azure/active-directory/develop/scenario-daemon-app-registration#api-permissions---app-permissions-and-admin-consent
Sadly, the Bing Ads API does not show up there.
What key piece of information are we missing?
Bing Ads, like Google Ads, uses OAuth for its API.
If you reference the Getting Started page, it mentions that you need a developer token, complete with links.
You can follow these steps to get a developer token for production.
Sign in with Super Admin credentials at the Microsoft Advertising Developer Portal account tab.
Choose the user that you want associated with the developer token. Typically an application only needs one universal token, regardless of how many users will be supported.
Click on the Request Token button.
Regarding your specific scenario (an application running in the cloud without an interface), you should know that OAuth requires you to interact with it once to set things up. So run your app locally ONCE, or at least the getting_started code from your language's walkthrough: https://learn.microsoft.com/en-us/advertising/guides/walkthrough-desktop-application-python?view=bingads-13
Running it locally will take you through the authentication process in your browser and generate a refresh token (in the file refresh.txt by default). Store this file with your code. It will have to be on the server that's making the requests, and since it's in Kubernetes, you'll have to keep it with your container image.
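To make the one-time bootstrap concrete, here is a minimal sketch using the SDK classes linked in the question; CLIENT_ID is a placeholder for your registered Azure application, and the exact flow should be checked against the walkthrough above:

```python
# One-time, local generation of a refresh token with the bingads SDK.
from bingads.authorization import OAuthDesktopMobileAuthCodeGrant

CLIENT_ID = "<your-azure-app-client-id>"  # placeholder

oauth = OAuthDesktopMobileAuthCodeGrant(client_id=CLIENT_ID)

# Open this URL in a browser and sign in as the Microsoft Advertising user.
print("Sign in here:", oauth.get_authorization_endpoint())

# After consent, the browser lands on a redirect URI; paste it back here.
response_uri = input("Paste the full redirect URI: ")
oauth.request_oauth_tokens_by_response_uri(response_uri=response_uri)

with open("refresh.txt", "w") as f:
    f.write(oauth.oauth_tokens.refresh_token)

# Later, on the server (e.g. inside the Airflow DAG), no browser is needed:
#   oauth.request_oauth_tokens_by_refresh_token(open("refresh.txt").read())
```

Note that the token response typically includes a new refresh token, so consider persisting the latest one (a Kubernetes Secret or an Airflow Variable is a common place) rather than relying on the baked-in file forever.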
We are integrating filepicker.io with Google Cloud Storage (i.e., asking Filepicker to write the uploaded files to our Google Cloud Storage account). Their documentation is pretty clear; however, I found that I have to give the service account "Editor" access to the whole project, which is a security concern for us: it means that if somebody gets access to the access tokens used by Filepicker, they can do whatever they want with our Google Cloud project instead of just having access to the files. Trying to use more restrictive permissions (like "Storage Object Creator" + "Storage Object Viewer") makes Filepicker fail.
Has anyone managed to configure the Google Cloud Storage integration of Filepicker.io with something less than "Editor" access to the project?
Just wanted to note that in the meantime I implemented a workaround: I created a separate Google Cloud project and gave Filepicker "Storage Admin" access to that one only. Then I gave the accounts from the other projects permission to use the storage bucket in this "upload" project. This way, at least, any token leaked on Filepicker's end is limited to accessing this "upload" project.
I just tested this and it seems that the minimum required role to store to GCS using Filestack/Filepicker is either the "Project Editor" role, or the "Storage Admin" role. I will submit a feature request to allow more varied role options.
I followed this tutorial to allow uploading files from a GWT frontend directly to Google Cloud Storage using signed URLs. I extended the Java example by specifying the content type, which worked just fine. Then I saw that files uploaded this way weren't publicly readable. To get this working, I tried the following:
I set up a default ACL for newly uploaded objects: gsutil defacl set public-read gs://<bucket>. Uploaded the file again - no luck, still not visible.
Then I tried to set the ACL on that object directly: gsutil acl set public-read gs://<bucket>/<file>, but it gave me AccessDeniedException: 403 Forbidden. That makes sense, since gsutil is authenticated with my Google account, while the signed URL is created with the service account and its P12 key.
I tried to set the ACL at the upload phase, so I added the "x-goog-acl:public-read\n" canonicalized extension header and the appropriate query string parameter to pass the signature check. Damn, still no luck!
My assumption is that maybe the extension header I'm using is wrong? According to the documentation, all authenticated requests to GCS apply a private ACL by default.
Anyway, why can't I make these files publicly readable from the Google Console when I'm logged in as the project owner? I can do so for all files uploaded through the console (I know that in that case the owner is the project owner and not the service account).
What am I doing wrong? How can I make these files publicly readable by anyone?
Thanks in advance!
I think if you go through the given docs, they clearly mention that if you need the user to download the object without using a Google account, this method provides a signed URL, valid for a specific time, for the user to download the object. I am assuming it might not be possible to make those objects publicly available, since they are signed. If you still need that functionality, I would recommend you look into the resumable upload or the simple upload of the object.
Also, try putting the service account of your project as the owner in the "Edit default permissions of objects" setting in the Developer Console, to the right of your bucket name.
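For comparison with the Java example in the question, here is a minimal sketch of the x-goog-acl approach using the Python google-cloud-storage client (bucket and object names are hypothetical). The key point is that the header must be baked into the signature and then sent verbatim on the actual PUT, or the signature check fails:

```python
# Sign an upload URL that sets the object ACL to public-read.
from datetime import timedelta
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account.json")
blob = client.bucket("my-bucket").blob("my-file.png")  # hypothetical

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="PUT",
    content_type="image/png",
    headers={"x-goog-acl": "public-read"},  # included in the signature
)

# The uploader must send exactly the same headers with the PUT:
#   Content-Type: image/png
#   x-goog-acl: public-read
```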
I'm quite new to cloud storage solutions, and I'm currently researching options to upgrade our current setup (we currently just upload to an SVN server).
What I have is a native application running on client computers, which will upload data to the cloud storage. Afterwards, clients should be able to download and browse their data (the source is not set in stone; it could be a website or other applications). They should not be able to access other users' data.
I'm not sure how I'm supposed to proceed. As far as I understand, the native application will upload using a Native Application Credential, using JSON.
Do I need multiple credentials to track multiple users? That seems wrong to me. Besides, when they come back as 'users' through the web interface, they wouldn't be using that authentication, would they?
Do I need to change the ACL of the uploaded files afterwards?
Should I just not give read/write access to any particular user, handle read requests through signed URLs, and deal with permission details myself using something else on the side? (Not forcing a Google account is probably a requirement.)
Sorry if this is too many questions, and thanks!
Benjamin
The "individual credentials per instance of an app" question has come up before, and unfortunately there's not a great answer. If you want every user to have different permissions, you need every user to be associated with a different account.
Like you point out, the best current answer, other than requiring users to have Google accounts, is to have a centralized service that vends signed URLs to the end applications. That service would be the only owner of all of the objects and would give out permission to read or upload as needed.
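As a rough illustration of that vending-service idea, here is a minimal sketch assuming the Python google-cloud-storage client and a per-user object prefix convention (both of which are assumptions, not part of the question):

```python
# Central service that owns all objects and vends short-lived signed URLs.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()  # runs as the single owning service account

def vend_download_url(user_id: str, object_name: str) -> str:
    # Enforce your own permission model here; a "<user_id>/..." prefix
    # is one simple way to keep users out of each other's data.
    if not object_name.startswith(f"{user_id}/"):
        raise PermissionError("not your object")
    blob = client.bucket("app-uploads").blob(object_name)  # hypothetical
    return blob.generate_signed_url(
        version="v4", expiration=timedelta(minutes=15), method="GET"
    )
```

The same pattern works for uploads by switching the method to "PUT" and pinning a content type.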