GCS: Access denied - service not available in your location - google-cloud-storage

Since a few days ago we have been getting an "Access forbidden" error when trying to download any resources uploaded to Google Cloud Storage from some of our servers.
These resources are shared publicly and are available from all the other locations we've tested.
For example, this URL https://migoa.storage.googleapis.com/static/r45001/images/cancel.png should be accessible worldwide. We tested it and it works. But when trying to download it from one of our servers, instead of the image we get this XML:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>We're sorry, but this service is not available in your location</Details>
</Error>
[javi@OK ~]$ lwp-request https://migoa.storage.googleapis.com/static/r45001/images/cancel.png -dS
GET https://migoa.storage.googleapis.com/static/r45001/images/cancel.png --> 200 OK
[javi@notOK ~]$ lwp-request https://migoa.storage.googleapis.com/static/r45001/images/cancel.png -dS
GET https://migoa.storage.googleapis.com/static/r45001/images/cancel.png --> 403 Forbidden
If we add the host "migoa.storage.googleapis.com" to the server's hosts file, pointing it at an IP from another location (Google Cloud Storage acts as a CDN, using the best server per location), the image downloads without problems. So it seems to be a problem with the resolved Google server, in our case this one:
dig +short migoa.storage.googleapis.com
storage-ugc.l.googleusercontent.com.
74.125.136.132
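For anyone wanting to reproduce the workaround, here is a minimal sketch; the placeholder IP must be replaced by an address resolved from an unaffected location:
# on an unaffected machine, resolve the hostname and note the IP
dig +short migoa.storage.googleapis.com
# on the failing server, pin the hostname to that IP
echo "<ip-from-unaffected-location> migoa.storage.googleapis.com" | sudo tee -a /etc/hosts
# retry; a 200 OK now implicates the frontend the server originally resolved
lwp-request https://migoa.storage.googleapis.com/static/r45001/images/cancel.png -dS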
In fact, we don't know how many locations may be affected. Also, we've been using GCS for almost a year without any problems until now.
Any idea about how to solve this?
Regards,

Related

Nginx Proxy Manager redirects hosted website to 502 Bad Gateway

I have a website running where I use Nginx Proxy Manager to redirect to this website. However, as soon as I hit my website I get the following message: 502 Bad Gateway.
Does anyone have a clue what is happening here?
Finally, I came to the following conclusion. From my experience it can mean one of two things:
Either your website/Docker container is not running,
Or Nginx cannot find an index.html file at the root of the web address 'example.com'.
However, hostinger.com points out the following:
Unresolved domain name
Server overload
Browser issues
Home-network equipment error
Firewall blocks
So make sure that index.html is present for your website, and that you have troubleshooted your container to the point where you are 100% sure the Docker container has no exceptions or errors and runs perfectly fine. Try something like 'docker-compose logs' from the directory where docker-compose.yml is located (this only works for a running container); see the sketch below.
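A minimal sketch of that check, assuming the compose project lives in a directory such as /srv/mysite (the path is a placeholder, not from the original post):
cd /srv/mysite                  # directory containing docker-compose.yml
docker-compose ps               # confirm the container is actually up
docker-compose logs --tail=100  # scan recent output for exceptions or errors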

MainPageSuffix for static site not working

I have searched for this question, but none of the responses help me.
Following the tutorial, I have created a new bucket (www.stepwiserefinement.co.uk) and it contains a static site, including index.html and error.html.
I have used the Console to set these as defaults for the base url and unknown files.
When I access the http://www.stepwiserefinement.co.uk URL, I get an XML listing of the files; I should be seeing index.html.
gsutil correctly reports
{"mainPageSuffix": "/index.html", "notFoundPage": "/error.html"}
but if I access the domain with no path, the response is
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
</Details>
</Error>
No HTTPS, and no load balancer needed.
I must be missing something.
Suggestions, please.
There are multiple issues here.
Your site still loads over HTTPS when you put it in the browser; the connection is somehow being upgraded to SSL. And if you're on SSL, you need the load balancer, as opposed to these instructions without a load balancer. Maybe you have SSL turned on with your registrar or somewhere else.
I only get a 404 error, so I'm not sure how you got "Access denied". But it could also be a secondary issue, because when the bucket is set up properly, no per-object access control is present. For example, it says here under step 3 "selected Uniform for Access Control"; this removes per-object access control.
Let us know if you followed the last article completely.
Edit: Also, out of curiosity, if the above doesn't work, try making the bucket public (without Uniform).
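For reference, a hedged sketch of the gsutil commands that set the website configuration and grant public read access (the bucket name is taken from the question; run only what matches your setup):
gsutil web set -m index.html -e error.html gs://www.stepwiserefinement.co.uk
gsutil iam ch allUsers:objectViewer gs://www.stepwiserefinement.co.uk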

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using the gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, not a zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
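If in doubt about which location names are regions rather than zones, one way to list region names (assuming, as is generally the case, that GCS regional locations match Compute Engine region names) is:
gcloud compute regions list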
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
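For reference, the relevant section of a .boto file looks roughly like this (the project ID below is a placeholder):
[GSUtil]
default_project_id = my-project-id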
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not appear to violate naming conventions at first glance, although note that bucket names cannot contain the string "google" or close misspellings, which may explain it.
Playing with the bucket name is worth trying if anyone else is running into this issue.
Got this error, and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
Using it on Windows, btw...
This can happen if you are logged into the management console (storage browser); it is possibly a locking/contention issue.
It may be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch out for underscores; the abstraction scheme seems to use them to map folders. All objects in the same project are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment, anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little renaming may be required due to the vagaries of the headers returned from the browser interface. The minimum real support level is still $150/month.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added the dollar sign $ to the variable reference, writing MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
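A corrected sketch of that sequence (the final gsutil mb line is an assumption; the original post showed only the variable assignments):
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1   # the missing $ here was the cause of the 400
MY_REGION=us-central1
gsutil mb -l $MY_REGION gs://$MY_BUCKET_NAME_2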
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$

What's the cause of "Failed to preload gadget..." for Sharebox gadgets in IBM Connections

I've followed the procedure documented at "Adding new ways to share content"
...but keep getting an error:
Failed to preload gadget http://....
Detailed error: 400 Gadget is not trusted to render in this container. cre.iruntime:cre.iwidget.event:cre.wire:cre.iwidget:cre.iwidget.itemset:cre….ibm.connections.ee:ibm.connections.ee:container.nongadget:open-views.js:4
sharebox error http://i7.minus.com/ibiLz4SSWA5EL8.png
This looks like some sort of trust problem with external servers, but my other gadgets (embedded experience & home page gadgets) on the same external host are all working fine.
What have I missed out in the configuration?
OK, shamefully, it looks like I missed swapping the security attribute whitelistEnabled="true" to whitelistEnabled="false" in:
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/connectionswwCell01/LotusConnections-config/opensocial-config.xml
Here:
<security whitelistEnabled="false" featureAdminEnabled="true">
More details in this slide: How to add your own OpenSocial Gadgets to IBM Connections.
Of course, in a production system you'll have to check out the opensocial config using wsadmin.sh, edit it, check it back in, and restart; a sketch follows.
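A hedged sketch of that flow (paths and working directory are placeholders; the LCConfigService calls follow IBM's documented Connections admin pattern, so adjust to your environment):
cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython
# inside the wsadmin session:
execfile("connectionsConfig.py")
LCConfigService.checkOutConfig("/tmp", AdminControl.getCell())
# edit /tmp/opensocial-config.xml, setting whitelistEnabled="false"
LCConfigService.checkInConfig()
# then restart the affected applications so the change takes effect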

Azure deployment virtual directory [duplicate]

This question already has answers here:
Azure deployment with 2 websites is cycling for a long time
(2 answers)
Closed 8 years ago.
I added a new Azure deployment project to my web application and the deployment was successful.
After adding a virtual directory to ServiceDefinition.csdef the application remained cycling, so I deleted the instance using the Azure console and deployed again, successfully and with the virtual directory in place.
When I access the site I get a page with:
Service Unavailable
HTTP Error 503. The service is unavailable.
After analysing with IntelliTrace, I got this message:
https://picasaweb.google.com/112383217404623421937/Dropbox#5748710219235327730
In event viewer:
Warnings:
The application '/' belonging to site '1' has an invalid AppPoolId 'DefaultAppPool' set. Therefore, the application will be ignored.
Site 1 was disabled because the root application defined for the site is invalid. See the previous event log message for information about why the root application is invalid.
File Server Resource Manager failed to enumerate share paths or DFS paths. Mappings from local file paths to share and DFS paths may be incomplete or temporarily unavailable. FSRM will retry the operation at a later time.
Help?
This is what you shared, and I think there are a couple of concerns. First, check that your directory paths are still correct once the app is running on Azure, and that you have actually added content to your project at those locations:
<Site name="PT" physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes">
<VirtualDirectory name="images" physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes\imgpt" />
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="pt.consultaclick.com" />
</Bindings>
</Site>
Next, because you are routing the request on the host header, which splits the website into separate applications, it is best to add a Virtual Application setting along with it, as sketched below.
Otherwise you really need at least two sites with two bindings: one binding for your pt.consultaclick.com and another for any remaining host header; otherwise your site will serve only a very limited set of requests based on the host header.
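A hedged sketch of what that could look like in ServiceDefinition.csdef (the VirtualApplication name and physical directory are placeholders, not taken from the original post):
<Site name="PT" physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes">
  <VirtualApplication name="app" physicalDirectory="..\RIS2048.ConsultaClick.SomeApp" />
  <VirtualDirectory name="images" physicalDirectory="..\RIS2048.ConsultaClick.WWWPacientes\imgpt" />
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="pt.consultaclick.com" />
  </Bindings>
</Site>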
I like this blog post, which explains the topic in serious detail and will surely help you; my own blog also has some information in this regard.