I uploaded images to a Google Storage bucket and I am now trying to get the CDN working through the load balancer.
Storage status:
Bucket permissions: Storage Object Viewer (reader) assigned to allUsers,
Storage Legacy Bucket Reader assigned to allUsers
File status:
Share publicly is set and there is a public link
Load balancer:
Set to the path /creatives/* on the host name
but I always get this message:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
What I notice is that as soon as I create the /creatives/* path rule, another /* path rule is created that points directly to the backend service of the autoscaling group.
Am I missing any settings here?
As I discovered, Google CDN + the Google HTTP load balancer work differently from other CDNs.
With a regular CDN you can point the origin at your bucket's HTTP address and work from the / structure.
For example:
Google Cloud Storage bucket URL:
storage.googleapis.com/my-bucket
Folder structure:
/1/1.jpg
A normal CDN origin will point to storage.googleapis.com/my-bucket
and you will get a new service endpoint for the CDN, like:
my-bucket.fastly.cdnservice.com
and this call will work:
my-bucket.fastly.cdnservice.com/1/1.jpg
But on Google Cloud, what you are setting up is a path rule that is connected to the CDN-enabled backend service you created.
So this is the big difference. Let's assume you created this path rule:
host: www.googlecdnnonexplainedfeautres.com
path: /images/*
service: yourbackendservice (connected to the bucket you want to cache)
So you might assume this should work:
www.googlecdnnonexplainedfeautres.com/images/1/1.jpg
But no.
After digging through the logs you will find a 404 on the bucket, because Google will look up this path in the bucket:
storage.googleapis.com/my-bucket/images/1/1.jpg
Wait, where did the /images prefix come from? I thought it was just a routing hook. No: Google treats the request path as a path relative to the bucket root (like the static-website option you can switch on and off on S3, except here it is effectively mandatory).
So how should this work?
Modify the folder structure to be like this:
Google Cloud Storage bucket URL:
storage.googleapis.com/my-bucket
Folder structure:
images/1/1.jpg
And now you are good.
This link should now work:
www.googlecdnnonexplainedfeautres.com/images/1/1.jpg
So before you commit a bucket to be used as a CDN source on Google, add a top-level folder that matches the path you set on the load balancer.
And of course: permissions, allUsers, read access, and so on.
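For example, a rough gsutil sketch of both steps, using the bucket and prefix from the example above (adjust the names to your own setup):
# Move the existing objects under a top-level prefix that matches the LB path rule
gsutil -m mv gs://my-bucket/1 gs://my-bucket/images/1
# Make the objects publicly readable so the backend bucket / CDN can serve them
gsutil iam ch allUsers:objectViewer gs://my-bucket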
Enjoy!
You can use URL Rewrites to solve this problem.
This may not have existed back in 2017 but there's an option at least as of 2021.
Explanation:
The load balancer's default behavior is to pass the entire path after the host to Cloud Storage. This may seem incorrect, but it's a sane default (how is the load balancer supposed to know which part of the path you want to include or exclude?).
I was facing the same issue.
I connected mydomain.com/static/* to my cloud storage bucket static-assets.
Upon visiting mydomain.com/static/static-asset.jpg, the load balancer would ask Cloud Storage for an object with the key static-assets/static/static-asset.jpg (note the extra static segment).
Since the object's actual key is static-assets/static-asset.jpg, this would return a NoSuchKey response.
The fix was to rewrite the path prefix /static to /.
One way to configure this is through the Cloud Console load balancer UI: we can add an advanced path rule to rewrite the prefix.
Please note:
Important: The rewrite is prepended to the path as is. Full path rewrites are not supported. HTTP(S) Load Balancing only implements path prefix rewrites. For example, you can rewrite host.name/path1/resource1 to host.name/path2/resource1. You cannot rewrite host.name/path1/resource1 to host.name/path1/resource2.
Read about URL Rewrites here.
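If you prefer to configure it outside the Console, a rough sketch of the equivalent URL map, imported with gcloud, could look like the following (my-url-map, my-default-service, static-assets and mydomain.com are placeholders, and the exact resource paths and fields may differ in your project):
# urlmap.yaml - rewrite the /static prefix to / before querying the backend bucket
defaultService: global/backendServices/my-default-service
hostRules:
- hosts: [mydomain.com]
  pathMatcher: static-matcher
pathMatchers:
- name: static-matcher
  defaultService: global/backendServices/my-default-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /static/
    service: global/backendBuckets/static-assets
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /
Then import it with: gcloud compute url-maps import my-url-map --source=urlmap.yaml (add --global if your URL map is global).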
I am working on a webapp for transferring files with Aspera. We are using AoC for the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see it in the bucket, and I see the new file in the directory when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory because the documentation says it is faster compared to the /files/browse endpoint. After the upload is complete, when I run the /files/{id}/files GET request, the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard everything updates right away.
Thanks,
Tim
Yes, the file-id based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded using Aspera. But for files moved directly on the storage, there is a daemon that periodically scans for and finds new files.
If you want to bypass the cache and have the API read the storage, you can add this header to the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading /files/{id} for the folder ID.
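For example, the request could look like this (the node API host and bearer token are placeholders; only the X-Aspera-Cache-Control header itself comes from the answer above):
# Ask the files API to bypass its cache and read the storage directly
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
     -H "X-Aspera-Cache-Control: no-cache" \
     "https://<node-api-host>/files/{id}/files"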
I have searched for this question, but none of the responses help me.
Following the tutorial, I have created a new bucket (www.stepwiserefinement.co.uk) and it contains a static site, including index.html and error.html.
I have used the Console to set these as the defaults for the base URL and for unknown files.
When I access the http://www.stepwiserefinement.co.uk URL, I get an XML listing of the files; I should be seeing index.html.
gsutil correctly reports
{"mainPageSuffix": "/index.html", "notFoundPage": "/error.html"}
but if I access the domain with no path, the response is
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
</Details>
</Error>
No HTTPS, no load balancer needed.
I am missing something.
Suggestions please.
There are multiple issues here.
Your site still loads over HTTPS when you put it in the browser. The connection is somehow upgrading you to SSL, and if you're on SSL you need the load balancer, as opposed to these instructions without a load balancer. Maybe you have SSL turned on with your registrar or somewhere else.
I only get a 404 error; not sure how you got "Access denied". But it could also be a secondary issue, because when it is enabled properly, no per-object access control is present. For example, it says here under step 3 "selected Uniform for Access Control", which removes per-object access control.
Let us know if you followed the last article completely.
Edit: Also, out of curiosity, if the above doesn't work, try making the bucket public (without Uniform access control).
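If it helps, a rough gsutil sketch of the two settings discussed above, using the bucket name from the question (adjust if your setup differs):
# Grant public read on all objects (this also works with Uniform bucket-level access)
gsutil iam ch allUsers:objectViewer gs://www.stepwiserefinement.co.uk
# Point website serving at index.html and error.html
gsutil web set -m index.html -e error.html gs://www.stepwiserefinement.co.uk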
I am following the steps for setting up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line with "Create a Cloud Storage bucket and make it publically readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second line sets its default ACL. When I run the commands, the second one returns an error:
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands for setting or getting ACLs, but they all return the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly. I didn't notice whether the first command succeeded or not, and apparently it did not.
For a newbie like me, it is important to note that things like bucket names and project names are globally unique across their namespace. What happened was that the name I used to create the new bucket was already taken by someone else, so no wonder I did not have permission to access that bucket.
A better way to deal with this is to choose the bucket name wisely, for example by prefixing it with the project name and application name.
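In other words, something along these lines (the bucket name here is just an illustration):
# Bucket names are global, so prefix them with something unique such as your project ID
gsutil mb gs://my-project-id-django-static
gsutil defacl set public-read gs://my-project-id-django-static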
I just joined. I created a bucket for my static website. The index.html (or main page, per Google terminology) is in a different folder in the same bucket. I set my index.html as the page to show up automatically (www.example.com).
I get the website in my browser if I type the exact HTTP URL including my file name - it works. But if I type the domain name alone in the browser, I get the following error:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
I did set up my main page in the bucket options. Here is my gsutil web get output:
{"mainPageSuffix": "gs://www.example.co/NewHome/index.html"}
But index.html does not show up when I type only example.co in the browser. Any suggestions?
Found the answer: the path is relative to the home (bucket) root.
So this works:
{"mainPageSuffix": "NewHome/index.html"}
In case the Google Cloud developers for the web are listening to this...
Even though the system picks up the index.html in a different folder, the images, JS, etc. are not picked up. The current working directory still remains the home directory. (In the web case this could be relaxed so that the CWD changes based on where the main page suffix is pointing; just an opinion.)
I'm trying to store some data on Bluemix. I searched many wiki pages but I couldn't figure out how to proceed. Can anyone tell me how to store images and files in Bluemix storage through code in any language (Java, Node.js)?
You have several options at your disposal for storing files in your app. None of them include doing it in the app container file system as the file space is ephemeral and will be recreated from the droplet each time a new instance of your app is created.
You can use services like MongoLab, Cloudant, Object Storage, and Redis to store all kinds of blob data.
Assuming that you're using Bluemix to write a Cloud Foundry application, another option is sshfs. At your app's startup time, you can use sshfs to create a connection to a remote server that is mounted as a local directory. For example, you could create a ./data directory that points to a remote SSH server and provides a persistent storage location for your app.
Here is a blog post explaining how this strategy works and a source repo showing it used to host a Wordpress blog in a Cloud Foundry app.
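As a rough illustration of that strategy (the host, user and remote path below are placeholders), the mount created at startup could look something like:
# Mount a directory from a remote SSH server as ./data inside the app container
mkdir -p ./data
sshfs user@remote-host:/srv/app-data ./data -o reconnect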
Note that as others have suggested, there are a number of services for storing object data. Go to the Bluemix Catalog [1] and select "Data Management" in the left hand margin. Each of those services should have sufficient documentation to get you started, including many sample applications and tutorials. Just click on a service tile, and then click on the "View Docs" button to find the relevant documentation.
[1] https://console.ng.bluemix.net/?ace_base=true/#/store/cloudOEPaneId=store
Check out https://www.ng.bluemix.net/docs/#services/ObjectStorageV2/index.html#gettingstarted. The storage service in Bluemix is OpenStack Swift running in Softlayer. Check out this page (http://docs.openstack.org/developer/swift/) for docs on Swift.
Here is a page that lists some clients for Swift.
https://wiki.openstack.org/wiki/SDKs
As I searched, there was a service named Object Storage that was also created by IBM. But at the moment I can't see it in the Bluemix Catalog. I guess they pulled it back and will publish a new service in the future.
Be aware that the object store in Bluemix is now S3 compatible. So, for instance, you can use boto or boto3 (for the Python folks); it is 100% API compatible.
See some examples here: https://ibm-public-cos.github.io/crs-docs/crs-python.html
This script helps you recursively list all objects in all buckets:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint)
# Walk every bucket and print each object key
for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to specify your credentials, this would be:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
# Replace the placeholder credentials with the keys generated on your Bluemix dashboard
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YouRACCessKeyGeneratedOnYouBlueMixDAShBoard', aws_secret_access_key='TheSecretKeyThatCOmesWithYourAccessKey', use_ssl=True)
for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to create a "hello.txt" file in a new bucket:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
# Replace the placeholder credentials with the keys generated on your Bluemix dashboard
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YouRACCessKeyGeneratedOnYouBlueMixDAShBoard', aws_secret_access_key='TheSecretKeyThatCOmesWithYourAccessKey', use_ssl=True)
my_bucket = s3.create_bucket(Bucket='my-new-bucket')
s3.Object(my_bucket.name, 'hello.txt').put(Body=b"I'm a test file")
If you want to upload a file to a new bucket:
import time
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
# Replace the placeholder credentials, local file path and object key with your own values
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YouRACCessKeyGeneratedOnYouBlueMixDAShBoard', aws_secret_access_key='TheSecretKeyThatCOmesWithYourAccessKey', use_ssl=True)
my_bucket = s3.create_bucket(Bucket='my-new-bucket')
timestampstr = str(time.time())  # for example, use the current time as the metadata value
my_bucket.upload_file('<location of your file>', '<your file name>', ExtraArgs={"ACL": "public-read", "Metadata": {"METADATA1": "resultat", "METADATA2": "1000", "gid": "blabala000", "timestamp": timestampstr}})