We would like to store uploaded images on a CDN server for our website. We are thinking of adding a fallback to our system in case the CDN service goes down, so that images start uploading to the server's hard disk instead. We can enable this fallback through a general settings option.
What is an efficient way to store uploaded image paths in our database? Should we store the full image URL or just the image name, so that we can recover easily in the fallback situation?
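For illustration, here is a minimal sketch of the usual approach: store only the relative key in the database and build the full URL at read time from a configurable base. The names cdnEnabled, CDN_BASE_URL and LOCAL_BASE_URL are hypothetical, standing in for the "general setting" described above:

    // Store only the relative key (e.g. "2024/05/avatar-123.png") in the DB;
    // the base URL is resolved at read time from a setting, so flipping the
    // fallback flag never requires rewriting rows in the database.
    const CDN_BASE_URL = 'https://cdn.example.com/uploads'; // placeholder
    const LOCAL_BASE_URL = '/uploads'; // served from the server's hard disk

    let cdnEnabled = true; // the general-settings flag from the question

    function imageUrl(imageKey) {
      const base = cdnEnabled ? CDN_BASE_URL : LOCAL_BASE_URL;
      return `${base}/${imageKey}`;
    }

    console.log(imageUrl('2024/05/avatar-123.png'));
    // cdnEnabled = true  -> https://cdn.example.com/uploads/2024/05/avatar-123.png
    // cdnEnabled = false -> /uploads/2024/05/avatar-123.png

Storing full URLs would bake the CDN hostname into every row, which is exactly what makes the fallback painful.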
I'm trying to build a website on Squarespace in which the site links to a database file. It's stored in a standard file system on a server tower with a cluster; no SQL architecture or anything that I explicitly know of. Unfortunately, Google Drive isn't an option due to the size of the file (> 200 GB). I'm rather lost due to the size constraint. Does anyone have an idea about how to do this? Can I set up some sort of server query using a link on the site? Can I upload the file from my computer and store it somewhere in the backend? Thanks.
"...the size of the file ( > 200 GB)..."
Unfortunately, Squarespace's own upload limitations are far below this for the two places where files like that can be stored: file storage (20MB) and the developer-mode '/assets' folder (10MB). The CSS-/style-related storage only supports images (and likely has a limit of less than 20MB). Digital download products can be 300MB (still too small for your file) and likely can't be linked to and accessed as you'd need for your application.
"...Can I set up some sort of server query using a link on the site?..."
If you mean a query on some other service besides Squarespace which connects to the file hosted on your Squarespace site, the answer is no, simply because there's no way to upload the file to Squarespace due to its size. If, however, you mean a query from your Squarespace site to the file hosted elsewhere, then this must be done using JavaScript, entirely client-side, due to Squarespace's lack of support for server-side languages.
"...Can I upload the file from my computer and store it somewhere in the backend?..."
See the options mentioned above, though all have file size limits below that of your file.
If you are able to utilize the file on your site using client-side/front-end JavaScript only, then perhaps you could host the file on Amazon S3 or another such provider and access it that way.
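A client-side fetch of an externally hosted file might look like the sketch below. The URL is a made-up placeholder, and the bucket would need a CORS policy allowing your Squarespace domain; note that for a 200+ GB file you would only ever fetch ranges or a pre-extracted subset, never the whole file:

    // Hypothetical example: load an externally hosted data file from
    // client-side JavaScript on a Squarespace page. The URL is a placeholder.
    fetch('https://my-bucket.s3.amazonaws.com/data/records.json')
      .then((response) => {
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        return response.json();
      })
      .then((data) => {
        // Render the data into the page however your template needs.
        console.log('Loaded', data.length, 'records');
      })
      .catch((err) => console.error('Failed to load data file:', err));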
I would like to host my images on a special CDN server that can serve images with preprocessing. For example, the images uploaded by clients may be in JPEG, PNG, or GIF format, and they may be placed at a different resolution in the final document; when the document is viewed, the images should be served in WebP format, at exactly the resolution set in the final document.
So the original uploaded image is stored as-is on the server, but the image served is in WebP format and at the size required for the final document.
A Chinese cloud service provider, Qiniu.com, provides such a service. They offer an API called imageMogr2, and with this API the images hosted on their special servers can be pulled in any format and any size, with other alterations.
If anyone knows of similar services from Amazon or other providers, I would be glad to hear more.
Their API documentation is available at developer.qiniu.com, but I am still reluctant to host on Chinese servers due to the Great Firewall and access issues to the outside world.
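As a rough illustration of what such a service does under the hood, here is a minimal sketch of an on-the-fly transform endpoint using Node, Express and the sharp library. This is my own sketch, not Qiniu's imageMogr2 API; the route, the `w` query parameter and the originals directory are invented for illustration:

    // Sketch: original stays untouched on disk, response is WebP resized to
    // the requested width, e.g. GET /img/photo.jpg?w=640
    const express = require('express');
    const sharp = require('sharp');
    const path = require('path');

    const app = express();
    const ORIGINALS_DIR = path.join(__dirname, 'originals'); // hypothetical store

    app.get('/img/:name', async (req, res) => {
      try {
        // path.basename guards against "../" path traversal in the name.
        const file = path.join(ORIGINALS_DIR, path.basename(req.params.name));
        let pipeline = sharp(file);
        const width = parseInt(req.query.w, 10);
        if (width > 0) pipeline = pipeline.resize({ width }); // keeps aspect ratio
        const webp = await pipeline.webp({ quality: 80 }).toBuffer();
        res.type('image/webp').send(webp);
      } catch (err) {
        res.sendStatus(404); // missing file or bad image
      }
    });

    app.listen(3000);

Real CDNs cache the transformed output at the edge, so the conversion cost is paid only on the first request for each size.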
I am looking for the best way to upload an image from a mobile phone to my server. I am currently using HTML5 to open the camera and take the picture, then I convert the file into a base64 string, send it to the server, and save it in MongoDB.
I am expecting around 1,000 to 1,500 image-upload requests per day, so I have the following questions:
Is it a good way to do it?
Should I compress the base64, and if so, how?
Should I use a specific server to handle this task?
My backend is Node/Express and the front end is ReactJS.
Thanks
It all depends on your situation. Reading and writing images from a CDN, e.g. via streams, is usually faster than reading and writing binary representations of images (e.g. base64) from a database. However, your read speed from a CDN will obviously be affected by which service you use. Today, companies like Amazon offer storage at a very low price, so unless you are building a hobby app for something like a student project, you can usually afford it.

Storing a binary representation of an image actually ends up a little bigger in size than storing the image itself. You don't compress the base64; you compress the image before converting it. However, if you can't afford a storage account and you know your users won't upload that many images, it is usually enough to store binary representations of the images in a database. MongoDB Atlas, for example, offers 512 MB for free on their database clusters.

Separating tasks of your app, such as database requests and CDN services, from your main application is usually a good choice if possible. This way you divide the CPU, memory, etc. of your hardware, and it will lead to faster reading and writing for the user.
There are a lot of different modules for doing this in Node. JIMP is a pretty nice one, with loads of built-in functions like resizing images and converting them to binary, either as a Buffer or as base64.
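For instance, compressing an upload before storing it (the "compress the image, not the base64" point above) might look roughly like this with JIMP's promise API; the 800px width and quality 70 are arbitrary choices for illustration:

    const Jimp = require('jimp');

    // Takes the raw upload as a Buffer, returns both encodings so you can
    // stream the Buffer to storage or keep base64 as the question does.
    async function processUpload(inputBuffer) {
      const image = await Jimp.read(inputBuffer); // accepts a Buffer or a path
      image.resize(800, Jimp.AUTO) // cap width at 800px, keep aspect ratio
           .quality(70);           // JPEG quality, applied when encoding
      const buffer = await image.getBufferAsync(Jimp.MIME_JPEG);
      const base64 = await image.getBase64Async(Jimp.MIME_JPEG);
      return { buffer, base64 };
    }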
I must provide a solution where users can upload files that must be stored together with some metadata, and this may grow really big.
Access to these files must be controlled, so they want me to just store them in DB BLOBs, but I fear PostgreSQL won't handle it properly over time.
My first idea was to use some NoSQL DB solution, but I couldn't find any that would replace a good RDBMS and elegantly store files alongside it. Then I thought of just saving these files on disk somewhere the web server won't serve them from, naming them after their table ID, loading them into RAM, and sending them with the proper content type.
Could anyone suggest a better solution for this?
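For reference, a minimal sketch of the idea described above, assuming Express; requireAuth and findFileMeta are hypothetical stubs for the real access check and metadata lookup (e.g. a PostgreSQL query):

    const express = require('express');
    const fs = require('fs');
    const path = require('path');

    const app = express();
    const PRIVATE_DIR = '/var/app-files'; // outside the web server's docroot

    // Hypothetical stubs: replace with your real session check and DB lookup.
    function requireAuth(req, res, next) { next(); }
    async function findFileMeta(id) { return { contentType: 'image/png' }; }

    app.get('/files/:id', requireAuth, async (req, res) => {
      const meta = await findFileMeta(req.params.id);
      if (!meta) return res.sendStatus(404);
      res.setHeader('Content-Type', meta.contentType);
      // Stream from disk rather than loading the whole file into RAM;
      // files are named after their table ID, as the question suggests.
      fs.createReadStream(path.join(PRIVATE_DIR, req.params.id)).pipe(res);
    });

    app.listen(3000);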
I had the requirement to store many images (with some metadata) and allow controlled access to them; here is what I did.
To the cloud™
I save the image files in Amazon S3. My local database holds the metadata with the S3 location of the file as one column. When an authenticated and authorized user needs to see the file they hit a URL in my system (where the authentication and authorization checks occur) which then generates a pre-signed, expiring URL for the image and sends a redirect back to the browser. The browser is then able to load the image for a given amount of time (as specified in the signature within the URL.)
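In Node terms, that redirect endpoint could look roughly like this. This is a sketch using the AWS SDK v2's getSignedUrl; the bucket name and the requireAuth middleware are placeholders:

    const express = require('express');
    const AWS = require('aws-sdk');

    const app = express();
    const s3 = new AWS.S3(); // credentials come from the environment / IAM role

    // Hypothetical stand-in for the real authentication/authorization checks.
    function requireAuth(req, res, next) { next(); }

    app.get('/images/:key', requireAuth, (req, res) => {
      // Pre-signed URL valid for 60 seconds; after that the link stops working.
      const url = s3.getSignedUrl('getObject', {
        Bucket: 'my-image-bucket', // placeholder bucket name
        Key: req.params.key,
        Expires: 60,
      });
      res.redirect(url); // the browser loads the image straight from S3
    });

    app.listen(3000);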
With this solution I have user level access to the resources and I don't have to store them as BLOBs or anything like that which may grow unwieldy over time. I also don't use MY bandwidth to stream the files to the client and get cheap, redundant storage for them. Obviously the suitability of this solution will depend on the nature of the binary files you are looking to store and your level of trust in Amazon. The world doesn't end if there is a slip and someone sees an image from my system they shouldn't. YMMV.
I am developing an iPhone app which retrieves information via NSURLRequest and displays it through a UIWebView.
I want to ship initial data (such as HTML pages and images) as a cache, so that users of my app can access the data without network costs the first time.
Then, if data on my web server are updated, I would download them and update the cache.
For performance reasons, I think it is better to store the data on the file system than in Core Data.
Yet, I think it's not possible to release a new app with data already written to disk.
So, I am about to store the initial data (the initial cache) in Core Data, and when users launch my app for the first time, copy that data to disk (like the /Library folder).
Is it, do you think, a good approach?
Or... hmm, can I access Core Data using NSURLRequest?
One more question,
I might access the file system using NSURL, the same way as data on the web (right?).
My app would compare the version of the cache with the version of the data on my web server, and if the cache is old, retrieve the new data.
After that, my app will access only the file system.
All the data are actually HTML pages (including scripts) and images, and I want to cache them.
Could you suggest a better design?
Thank you.
Is it, do you think, a good approach? Or... hmm, can I access Core Data using NSURLRequest?
No.
One more question: I might access the file system using NSURL, the same way as data on the web (right?). My app would compare the version of the cache with the version of the data on my web server, and if the cache is old, retrieve the new data. After that, my app will access only the file system. All the data are actually HTML pages (including scripts) and images, and I want to cache them.
Yes.
But you could also be more clever. And by "more clever" I mean "Matt Gallagher." Take a look at his very interesting approach in Substituting local data for remote UIWebView requests.