Is there a way to serve files for tasks (like images) from somewhere other than Yandex Disk?
I think the Disk is fine only as a start, because it's personal and limited to 10 GB.
In addition to Yandex Disk, you can use Yandex.Cloud (Object Storage) for uploading images. It looks like these are all the possible options, according to the docs: https://yandex.com/support/toloka-requester/concepts/use-object-storage.html
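If you go the Object Storage route, it exposes an S3-compatible API, so a standard S3 client can upload to it. Below is a minimal sketch in Go using aws-sdk-go; the bucket name and file path are placeholders, and credentials are assumed to come from the usual AWS environment variables (a static access key created in Yandex.Cloud):

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Yandex Object Storage speaks the S3 protocol, so the regular AWS SDK
	// works once it is pointed at the Yandex endpoint.
	sess, err := session.NewSession(&aws.Config{
		Endpoint: aws.String("https://storage.yandexcloud.net"),
		Region:   aws.String("ru-central1"),
	})
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Open("task-image.png") // placeholder local file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// "my-toloka-bucket" is a placeholder bucket name.
	_, err = s3.New(sess).PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-toloka-bucket"),
		Key:    aws.String("images/task-image.png"),
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded")
}
```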
As follows from the notification displayed, I have used almost all of my available Google Drive space (96%), while the total size of my files is only 3.5 GB. An additional 1 GB was deleted and is sitting in the bin. What is the reason, and how can I fix it? Also, I have a lot of files shared with me from other accounts, but according to the Google Drive documentation those should not be taken into account. Additionally, I have 0.8 GB in Gmail and no files in Google Photos.
Go to this link and check which files are consuming the most storage.
Delete the files in your bin, as they still count towards your quota.
After you delete the files, it usually takes some time for the space on your Drive to update. It's a propagation matter.
Make sure you don't have a lot of photos eating into your account's quota.
I faced high CPU and I/O usage when I tried to upload 100 GB of small files (PNG images) to an S3 bucket via a very simple Go S3 uploader.
Is there any way to limit bandwidth (e.g. via the aws-sdk-go config) or something else to make the uploading process less intensive (or more efficient :)) and reduce CPU and I/O usage?
I've tried nice (for CPU) and ionice (for I/O), but they don't actually help.
Have you tried S3Manager (https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/)? From the docs:
Package s3manager provides utilities to upload and download objects from S3 concurrently. Helpful for when working with large objects.
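That could look roughly like the following -- a minimal sketch assuming aws-sdk-go v1, with placeholder region, bucket, and file names. The Concurrency field is the main lever for trading upload throughput against CPU and I/O load:

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // placeholder region
	}))

	// Lowering Concurrency throttles how many parts are uploaded in
	// parallel, which in turn caps CPU and I/O pressure on the host.
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.Concurrency = 2            // default is 5
		u.PartSize = 5 * 1024 * 1024 // 5 MB, the minimum part size
	})

	f, err := os.Open("image.png") // placeholder file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("my-bucket"), // placeholder bucket
		Key:    aws.String("images/image.png"),
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Note that with 100 GB of small PNGs, most objects will fit in a single part anyway, so the bigger lever is usually how many files you upload at once -- e.g. a fixed-size worker pool or semaphore around the Upload calls.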
I'm looking for the best way to upload an image from a mobile phone to my server. I'm currently using HTML5 to open the camera and take the picture, then I convert the file into a base64 string, send it to the server, and save it in MongoDB.
I'm expecting around 1,000 to 1,500 user requests per day (image uploads), so I have the following questions:
Is this a good way to do it?
Should I compress the base64, and if so, how?
Should I use a specific server to handle this task?
My backend is Node/Express and the front end is ReactJS.
Thanks
It all depends on your situation. Reading and writing images from a CDN via streams, for instance, is usually faster than reading and writing binary representations of images (e.g. base64) from a database. However, your read speed from a CDN will obviously be affected by which service you use. Today, companies like Amazon offer storage at a very cheap price, so unless you are building a hobby app for something like a student project, you can usually afford it.

Storing a binary representation of an image actually ends up a little bigger than storing the image itself. You don't compress the base64; you compress the image before converting it. That said, if you can't afford a storage account and you know your users won't upload that many images, it is usually enough to store binary representations of the images in a database. Mongo Atlas, for example, offers 512 MB for free on their database clusters.

Separating tasks such as database requests and CDN services from your main application is usually a good choice if possible. This way you spread the CPU, memory, etc. across your hardware, and it will lead to faster reading and writing for the user.
There are a lot of different modules for doing this in Node. JIMP is a pretty nice one, with loads of built-in functions like resizing images and converting them to binary, either as a Buffer or as base64.
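The "compress the image, not the base64" point is language-agnostic. As a rough illustration (sketched in Go with only the standard library, since your actual stack would use Node/JIMP for the same steps), it's the re-encode at lower quality that shrinks the payload; the file name and quality setting here are arbitrary:

```go
package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"image"
	"image/jpeg"
	_ "image/png" // register the PNG decoder so image.Decode understands PNGs
	"os"
)

func main() {
	f, err := os.Open("upload.png") // placeholder input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	img, _, err := image.Decode(f)
	if err != nil {
		panic(err)
	}

	// Compress the *image* (re-encode as JPEG at quality 60), then
	// base64-encode the result. Compressing the base64 string itself
	// gains almost nothing.
	var buf bytes.Buffer
	if err := jpeg.Encode(&buf, img, &jpeg.Options{Quality: 60}); err != nil {
		panic(err)
	}
	b64 := base64.StdEncoding.EncodeToString(buf.Bytes())
	fmt.Printf("%d base64 chars after compression\n", len(b64))
}
```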
I have a fairly image-intensive iPhone app, and I'm looking to store remotely downloaded images locally in the app's sandbox tmp directory to avoid unnecessary network requests. Is there a limit to the total size of the files stored in an app's directories, or does the app need to manage that? How would the app determine the size of the files in the tmp directory?
Also, if the app needs to manage the size of the cache, I'd like to implement some kind of cache policy to determine which files get invalidated. How would I go about doing this? If I want to implement a basic LRU caching policy - invalidating the files that have been used least recently - it seems like I would need to keep access counts for each image and store those on disk as well, which seems kind of funky. I suppose an easy size management policy would be to simply wipe the entire cache each time the application terminates.
Also, what's the difference between using the directory from NSCachesDirectory versus NSTemporaryDirectory? The Apple docs mention both, but don't talk about which one to use for what type of files. I'm thinking the NSTemporaryDirectory is more like a Unix /var/tmp directory, and used for ephemeral data that can be wiped out at anytime. Seems to me the NSCachesDirectory is more appropriate for storing cached images, since the files could be needed across multiple app lifecycles.
All temporary directories are local to your application; any of them will work and there is no artificial limit to the size of their contents.
A persistent LRU cache policy should be both sufficient and relatively easy to implement.
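On the "easy to implement" part: one common trick is to approximate LRU with file modification times -- bump a file's mtime whenever you read it, then evict oldest-first once the cache exceeds a size budget -- which avoids the "funky" separate access counts. Here is a sketch of the eviction pass (written in Go for brevity; an iOS version built on NSFileManager follows the same shape, and the directory and size cap are placeholders):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"sort"
	"time"
)

// evictLRU walks a cache directory and deletes the least recently used
// files until the total size drops under maxBytes. "Recently used" is
// approximated by modification time: callers bump a file's mtime (e.g.
// via os.Chtimes) whenever they read it, so no separate access counter
// needs to live on disk.
func evictLRU(dir string, maxBytes int64) error {
	type entry struct {
		path string
		size int64
		used time.Time
	}
	var files []entry
	var total int64

	err := filepath.Walk(dir, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		files = append(files, entry{p, info.Size(), info.ModTime()})
		total += info.Size()
		return nil
	})
	if err != nil {
		return err
	}

	// Oldest first.
	sort.Slice(files, func(i, j int) bool { return files[i].used.Before(files[j].used) })

	for _, f := range files {
		if total <= maxBytes {
			break
		}
		if err := os.Remove(f.path); err == nil {
			total -= f.size
		}
	}
	return nil
}

func main() {
	// Cap the cache at 50 MB; "cache" is a placeholder directory.
	if err := evictLRU("cache", 50*1024*1024); err != nil {
		log.Fatal(err)
	}
}
```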
What approach is considered best for storing and managing video files? Databases are used for small textual data, but are they good enough to handle huge amounts of video/audio data? Are databases a viable solution here at all?
Apart from the hard disk space required for centrally managing video/audio/image content, what are the requirements for hosting such a server?
I would not store big files, like videos, in the database; instead, I would:
store the files on disk, in the filesystem
and only store the name of the file (plus some metadata like content type and size) in the database.
You have to consider, at least, these few points:
a database is generally harder to scale than disks:
having a DB that is several dozen/hundred or more GB in size because of videos will make many things (backups, for example) really hard.
do you want to put more read load on your DB servers just to serve... files?
same thing when writing "files" to your DB, btw
serving files (like, say, videos) from the filesystem is something that webservers do pretty well -- you can even use something lighter (like lighttpd, nginx, ...) than the webserver used to run your application (Apache, IIS, ...), if needed
it allows your application and/or some background scripts to do some tasks (like generating previews/thumbnails, for example) without involving the DB server
Using plain old files will also probably make things much easier the day you want to use some kind of CDN to distribute your videos to users.
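To make the "files on disk, metadata in DB" split concrete, here is a minimal sketch in Go. The SQLite driver, table name, and directory are placeholder choices, and the videos table and media directory are assumed to already exist; in production you would let nginx/lighttpd or a CDN serve the /media/ path instead of the app:

```go
package main

import (
	"database/sql"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"

	_ "github.com/mattn/go-sqlite3" // placeholder driver choice
)

var db *sql.DB

// upload writes the video to disk and stores only its name and some
// metadata (content type, size) in the database.
func upload(w http.ResponseWriter, r *http.Request) {
	file, header, err := r.FormFile("video")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer file.Close()

	dst, err := os.Create(filepath.Join("media", filepath.Base(header.Filename)))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer dst.Close()

	size, err := io.Copy(dst, file)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Only the file name, content type, and size go into the DB.
	_, err = db.Exec(`INSERT INTO videos(name, content_type, size) VALUES(?, ?, ?)`,
		header.Filename, header.Header.Get("Content-Type"), size)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	var err error
	db, err = sql.Open("sqlite3", "meta.db") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/upload", upload)
	// The files themselves are served straight from the filesystem --
	// exactly the job you would hand to a lighter web server or CDN.
	http.Handle("/media/", http.StripPrefix("/media/", http.FileServer(http.Dir("media"))))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```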
And here are a couple of other questions/answers that might interest you:
Storing Images in DB - Yea or Nay?
Storing Images: DB or File System
Store images (jpg, gif, png) in filesystem or DB?
Store image in database or in a system file?
Those questions/answers are about images, but the idea is exactly the same for videos -- except that videos are much bigger than images!