chrome.fileSystem.retainEntry increase 500 limit - google-chrome-app

According to the chrome.fileSystem documentation, there is a limit of 500 FileEntry objects that can be retained and restored.
The app I am developing is a document management system that links to local files. Over time I expect that the user will have over 500 links.
Any ideas on how to increase the 500 limit, or an alternative strategy for keeping a long list of local file links?

Related

Google Drive Hits

I want to share some pdf files publicly.
While there are many sophisticated ways of doing this, I currently have a bit.ly shortlink pointing to Google Drive.
Traffic is highly variable, but the maximum number of hits in a single day has been 500.
File sizes are around 1-5 MB. So a single hit should average around 10-20 MB of download.
To what extent will this method work?
5000 hits per day?
50,000?
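
For a rough sense of scale, here is a back-of-the-envelope sketch of the implied daily transfer, using the ~15 MB-per-hit midpoint of the estimate above (this is only arithmetic; Google Drive's actual sharing quotas are not published in these terms):

    # Rough daily bandwidth estimate.
    # Assumption: ~15 MB served per hit, the midpoint of the 10-20 MB figure above.
    MB_PER_HIT = 15

    for hits_per_day in (500, 5_000, 50_000):
        gb_per_day = hits_per_day * MB_PER_HIT / 1024
        print(f"{hits_per_day:>6} hits/day -> ~{gb_per_day:,.0f} GB/day")

At 50,000 hits per day that is already on the order of 700 GB of transfer daily, which is the kind of number to keep in mind when deciding whether a plain Drive share link is still appropriate.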

Hitting TooManyRequests 429 when using GoogleCloudStorageComposeOperator

When using GoogleCloudStorageComposeOperator in Google's Cloud Composer we've started hitting TooManyRequests, HTTP 429.
The rate of change requests to the object path/file.csv exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
What limit are we hitting? I think it's this limit but I'm not sure:
There is a write limit to the same object name of once per second, so rapid writes to the same object name won't scale.
Does anyone have a sane way around this issue? It usually works on retry, but it would be nice not to have to rely on that.
It's hard to say without more details, but this is a Cloud Storage issue rather than a Composer issue. It is described in the Troubleshooting guide for Cloud Storage.
There you can find more references to dig into it further. On the Quotas and limits page I found:
When a project's bandwidth exceeds quota in a location, requests to affected buckets can be rejected with a retryable 429 error or can be throttled. See Bandwidth usage for information about monitoring your bandwidth.
It seems that this error is intended to be retried, so implementing a retry (try/catch) mechanism might be a solution.
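
As one possible sketch, Airflow's built-in retry parameters on the task can implement that retry for you. The import path and the operator's own arguments below are written from memory for Airflow 1.10-era Cloud Composer and may differ in your version, so treat this as an outline rather than a drop-in fix:

    from datetime import timedelta

    # Assumption: import path as in Airflow 1.10.x contrib; newer versions moved
    # the GCS operators into the Google provider package.
    from airflow.contrib.operators.gcs_operator import GoogleCloudStorageComposeOperator

    compose = GoogleCloudStorageComposeOperator(
        task_id="compose_csv_parts",
        bucket="my-bucket",                                   # illustrative names only
        source_objects=["path/part-001.csv", "path/part-002.csv"],
        destination_object="path/file.csv",
        # Standard BaseOperator retry settings: retry the 429s with exponential
        # backoff instead of failing the task on the first attempt.
        retries=5,
        retry_delay=timedelta(seconds=10),
        retry_exponential_backoff=True,
        max_retry_delay=timedelta(minutes=5),
    )

If several tasks compose into the same destination object, it may also help to serialize them or compose into distinct temporary objects first, since the one-write-per-second limit quoted above applies per object name.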

Magento 2 website goes down every day and need to restart server

I have one e-commerce website on Magento 2.2.2 and it keeps going down almost every day. Whenever it goes down, users get a "site took too long to respond" error and the page never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50GB, of which the whole website is around 18GB (11GB of media files plus vendor files, etc.). Here are the things I cannot figure out:
a) The server shows that 33GB has been used, although it should only be around 18GB. I have checked everywhere and I can't find what is consuming the additional 15GB of space; the complete HTML folder is only 18GB.
b) When I checked the log files, they show the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2GB, but this problem keeps coming back.
The server is an Amazon EC2 instance and Magento is in production mode. Will allocating 100GB instead of 50GB solve the problem?
Update: I increased innodb_buffer_pool_size to 10GB and the logs no longer show the error, but the server still goes down every day. Since our server only has 4GB of RAM, could that be the main cause? Everyone seems to suggest at least 8GB of RAM.
Try the things below.
Magento 2 generates large log files and cache data, so the contents of your var folder may be growing.
You should also check whether your site has more than 3000 products with large product images, all stored on the server itself.
My suggestion: if your site has that many products, as mentioned above, use a CDN for better performance, so that all images are served from the third party.
Next, set up Cloudflare to reduce downtime errors and the impact on customers. You can serve a static index page while the server is down, and you should write a script to restart the site automatically when it goes down.
On the server side, check the PHP memory limit; it is better to give it 2G.
On the MySQL side, check whether there are sleeping queries. If they come from a custom extension, ask your developer to optimize the code.
For example, the code may be loading a whole collection just to fetch a single item.
You can use a tool like New Relic.
If everything is fine on the development side, try to optimize the server side: memory limits, killing stuck MySQL queries, etc.
Meanwhile, Magento is a big platform for the e-commerce sector, so it covers a lot by default. It is better to remove unwanted modules from your live site, for example by disabling the core modules you are not using.
For an average site, use 16GB of RAM.
Did you restart MySQL for the change to take effect?
Also, you need to set that buffer up to 20971520000, which is around 20GB.
Magento uses a lot of sessions and cache.

What is the upload limit on SoundCloud?

I sometimes get the error: { error_message: 'Sorry, you\'ve exceeded your upload limit.' } when I post sound files to SoundCloud using their HTTP API.
I couldn't find any explanation for this 'upload limit' in their documentation.
Does anyone know if it's a daily limit? Or a size limit? Or a combination of both?
Thanks
Sparko is mostly right. The only difference is that you can tell how much remaining time you have by requesting the current user details (GET /me); the response will contain a key called upload_seconds_remaining.
Free users get 2 hours. Pro gets 4 hours. Pro Unlimited is unlimited. Regardless of the plan, individual tracks also cannot be longer than ~6.5 hours (I forget the exact number).
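
For example, a minimal sketch of checking that value with the HTTP API (assuming you already have an OAuth access token for the account; SoundCloud expects it in an "Authorization: OAuth <token>" header):

    import requests

    ACCESS_TOKEN = "your-oauth-token"  # assumption: token obtained elsewhere

    resp = requests.get(
        "https://api.soundcloud.com/me",
        headers={"Authorization": f"OAuth {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    me = resp.json()

    # 'upload_seconds_remaining' is the key mentioned above; 'plan' tells you
    # which tier (Free / Pro / Pro Unlimited) the limit comes from.
    print(me.get("plan"), me.get("upload_seconds_remaining"))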
Individual files cannot exceed 500MB (see "Uploading Audio Files").
However, I'd imagine this relates to your overall limit for uploading audio to SoundCloud based on the plan attached to the account you're posting to, i.e. exceeding the 2 hours provided by the free plan.
The API doesn't appear to provide a property for the user's remaining upload time, although you could infer it from [user]plan and by looping through all of their tracks and summing each [track]duration (although probably not advised).
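
If you did want to estimate it that way despite that caveat, a rough sketch could page through the user's tracks and sum the durations, which the API reports in milliseconds (the pagination details here follow the usual linked_partitioning/next_href convention and should be verified against the current docs):

    import requests

    ACCESS_TOKEN = "your-oauth-token"  # assumption: token obtained elsewhere
    headers = {"Authorization": f"OAuth {ACCESS_TOKEN}"}

    total_ms = 0
    url = "https://api.soundcloud.com/me/tracks?limit=200&linked_partitioning=1"
    while url:
        page = requests.get(url, headers=headers).json()
        total_ms += sum(t.get("duration", 0) for t in page.get("collection", []))
        url = page.get("next_href")  # missing on the last page, which ends the loop

    print(f"Uploaded so far: ~{total_ms / 1000 / 3600:.1f} hours")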

Email migration API limits

The documentation states that the API is limited to one email at a time per user, and that we should create threads and process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB/hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit of one request per second per user. If you exceed this, you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm to handle the errors and retry the request.
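
A minimal sketch of such a backoff loop, with illustrative names only (TransientServerError stands in for whatever exception your client library raises on a 503, and client.migrate_message is a hypothetical upload call):

    import random
    import time

    class TransientServerError(Exception):
        """Placeholder for the 503-style error your API client raises."""

    def with_backoff(request_fn, max_attempts=6):
        """Call request_fn, retrying transient failures with exponential backoff plus jitter."""
        for attempt in range(max_attempts):
            try:
                return request_fn()
            except TransientServerError:
                if attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus jitter

    # Usage (hypothetical client call):
    # with_backoff(lambda: client.migrate_message(user, rfc822_message))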