Remove the auto download of a URL that points to a google bucket - google-cloud-storage

I have files stored on Google Cloud Storage. Google provides me a URL to access these files, but when I access this URL the file is automatically downloaded. I would like to know if it is possible to prevent this automatic download when accessing the file URL. ;)

The download feature is handled by the client.
Calling the URL for an object stored in a bucket will return the object in the body of the HTTP response, and the client will choose what to do with this data.
If you use a web browser, whether a file is downloaded or displayed is usually determined by the Content-Type header. Some MIME types are rendered in the browser itself (according to Chrome's help, videos, images, PDFs and web pages are displayed directly), while others trigger a download.
To make the browser behave the way you want, modify the MIME type of the files stored in the bucket by changing their metadata.
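For example, with the Node.js client library the metadata patch could look roughly like the following (bucket and object names are placeholders); the same change can also be made with gsutil setmeta or from the Cloud Console:

```typescript
import {Storage} from "@google-cloud/storage";

// Placeholder names; credentials with write access to the bucket are assumed.
const storage = new Storage();

async function fixContentType(): Promise<void> {
  const file = storage.bucket("my-bucket").file("report.pdf");
  // Patches only the object's metadata; the stored data itself is untouched.
  await file.setMetadata({contentType: "application/pdf"});
}

fixContentType().catch(console.error);
```

After the change, browsers that can render the given MIME type (PDF in this sketch) should display the object inline instead of downloading it.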

Related

How do I rewrite URL to drop file extension for pdf on github pages?

Imagine my website is hosted on GitHub Pages and has a custom domain website.com. I can access a pdf at website.com/mypdf.pdf
Is there a way where I can make it work at website.com/mypdf?
As mentioned in the comments, if you are using a static website hosted by a third party like GitHub Pages, you don't really get much control over the HTTP server. I would tentatively say you cannot control URL rewrite rules on GitHub.
What you could potentially do instead is host a page with a bit of JavaScript that starts the download on a given event (button click, page load, etc.). This way you could mask your actual download URL with this HTML page (which, by convention, comes with no file extension).
UPD: and sure enough, someone has done this already: http://lea.verou.me/2016/11/url-rewriting-with-github-pages/. The post is about having nice URLs, but I believe file downloads could be implemented similarly.
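A minimal sketch of that idea: a page served at an extension-less path whose script starts the download when a button is clicked. The file path and element id below are hypothetical:

```typescript
// Script for a page served at, say, /mypdf (no extension). The real file
// lives at a conventional path; both names are made up for illustration.
const realUrl = "/assets/mypdf.pdf";

document.getElementById("download-btn")?.addEventListener("click", () => {
  // Create a temporary <a download> element and click it programmatically.
  const a = document.createElement("a");
  a.href = realUrl;
  a.download = "mypdf.pdf";
  document.body.appendChild(a);
  a.click();
  a.remove();
});
```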
Yes, you could build your website with an MVC structure. Make a controller and load the PDF file in its Index action.
Then, when the action is called, your PDF will be loaded at a URL like:
Students/AllResult etc.
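As a rough sketch of that controller idea (shown here with a hypothetical Express route in TypeScript rather than ASP.NET MVC, but the shape is the same): serve the PDF from an extension-less route and set the content type yourself.

```typescript
import express from "express";
import path from "path";

const app = express();

// Extension-less route; the route and file names are hypothetical.
app.get("/mypdf", (_req, res) => {
  res.type("application/pdf"); // lets the browser render the PDF inline
  res.sendFile(path.join(__dirname, "files", "mypdf.pdf"));
});

app.listen(3000);
```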

Why can't the raw URL of a PDF file on GitHub be opened directly in the browser instead of being downloaded?

I tried PDF, TXT and PNG file URLs; only the PDF URL can't be opened in the browser when I click it, but triggers a download instead.
I googled this but only found workarounds, like embedding with Google Docs instead, using pdf.js, or other HTML code.
What is the reason? The website? Forgive me, I have little idea of website architecture.
When you get a file from a website, it has a content type sent along with it. Depending on the content type, the browser may choose to display it. For example, content type "application/pdf" might be shown in a browser, but "application/octet-stream" will be downloaded.
The raw URL on GitHub has content type "application/octet-stream" (a binary file), so it will be downloaded.
The only way around this, since you can't change GitHub's code that sets the content type, is to fetch the data with JavaScript and handle it there, for example by using pdf.js or something similar.
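A minimal sketch of that workaround, assuming the raw URL is reachable cross-origin and the page has an <iframe id="viewer"> (both assumptions): fetch the bytes, re-label them as application/pdf, and point the iframe at an object URL so the browser's own viewer renders them.

```typescript
// Fetch the raw bytes and hand them to the browser as application/pdf.
async function showPdf(rawUrl: string): Promise<void> {
  const response = await fetch(rawUrl);
  const bytes = await response.arrayBuffer();
  const blob = new Blob([bytes], {type: "application/pdf"});
  const objectUrl = URL.createObjectURL(blob);
  // Hypothetical <iframe id="viewer"> element on the page.
  (document.getElementById("viewer") as HTMLIFrameElement).src = objectUrl;
}

showPdf("https://raw.githubusercontent.com/user/repo/main/doc.pdf")
  .catch(console.error);
```

pdf.js works the same way conceptually, except the fetched bytes are parsed and drawn onto a canvas instead of being handed to the browser's built-in viewer.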

Example for google drive simple upload rest API

As per the documentation, if I wish to upload only media without any metadata, the simple upload will do. And the documentation says:
So, as per the documentation, I formed the request as follows and the body of the request is binary data:
But I am not able to figure out where to set the parent directory information for the media being uploaded if the body consists only of the media.
Do I need to submit two requests, one for metadata and one for media? For that, we are provided with the multipart upload.
Can anyone please help me with a working example of a simple upload?
The form-data section in this Postman docs page might help you with entering the file location.
I found a YouTube video on the subject too.
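For the parent-folder question specifically, the multipart upload (uploadType=multipart) is what lets you send the folder ID and the media in a single request. A rough sketch with fetch, where the access token, folder ID, file name and contents are all placeholders:

```typescript
// Multipart upload to Drive API v3: part 1 is JSON metadata (name, parents),
// part 2 is the raw media. Everything passed in here is a placeholder.
async function uploadToFolder(accessToken: string, folderId: string,
                              fileName: string, data: Uint8Array): Promise<void> {
  const boundary = "drive_upload_boundary";
  const metadata = JSON.stringify({name: fileName, parents: [folderId]});

  const body = new Blob([
    `--${boundary}\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n`,
    metadata,
    `\r\n--${boundary}\r\nContent-Type: application/octet-stream\r\n\r\n`,
    data,
    `\r\n--${boundary}--`,
  ]);

  await fetch("https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": `multipart/related; boundary=${boundary}`,
    },
    body,
  });
}
```

With the plain simple upload (uploadType=media) there is nowhere to put the parent folder, so the file lands in the root of My Drive and would need a separate metadata update afterwards.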

After uploading an image to google cloud, how can I get a link to that image?

After uploading an image, I get back metadata that has a mediaDownloadLink that will download the file when accessed. Is there a way to get a link that will display the image in the browser without downloading it?
In general, any object you set to be publicly accessible (which presumably you want, since you're using it to host images on a website) can then be accessed at https://storage.googleapis.com/<bucket>/<object>. You can also see this link if you go to the Cloud Console, make an object publicly viewable, and look for the Public link you can click.
If you have problems with the link downloading instead of displaying in the browser, you may need to make sure the Content-Type header is set correctly; for example, if using ByteArrayContent to upload data with the Java API, you'll want to set a string like "image/jpeg" in its constructor for "type".
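A rough sketch of the same thing with the Node.js client instead of the Java one (bucket and file names are hypothetical): upload with an explicit content type, make the object public, and build the storage.googleapis.com URL.

```typescript
import {Storage} from "@google-cloud/storage";

const storage = new Storage();

async function uploadImage(): Promise<string> {
  const bucket = storage.bucket("my-bucket");
  const [file] = await bucket.upload("photo.jpg", {
    destination: "images/photo.jpg",
    metadata: {contentType: "image/jpeg"}, // so browsers display it inline
  });
  await file.makePublic(); // assumes the bucket allows per-object ACLs
  return `https://storage.googleapis.com/${bucket.name}/${file.name}`;
}

uploadImage().then(console.log).catch(console.error);
```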

An object in Google Cloud Storage which acts as a "redirect" or "symlink"

I'm looking to move an existing website to Google Cloud Storage. However, that existing website has changed its URL structure a few times in the past. These changes are currently handled by Apache: for example, the URL /days/000233.html redirects to /days/new-post-name and /days/new-post-name redirects to /days/2002/01/01/new-post-name. Similarly, /index.rss redirects to /feed.xml, and so on.
Is there a way of marking an object in GCS so that it acts as a "symlink" to another GCS object in the same bucket? That is, when I add website configuration to a bucket, requesting an object (ideally) generates a 301 redirect header to a different object, or (less ideally) serves the content of the other object as its own?
I don't want to simply duplicate the object at each URL, because that would triple my storage space. I also can't use meta refresh tags inside the object content, because some of the redirected objects are not HTML documents (they are images, or RSS feeds). For similar reasons, I can't handle this inside the NotFound 404.html with JavaScript.
Unfortunately, symlink functionality is currently not supported by Google Cloud Storage. It's a good idea though and worth considering as a future feature.