Running Worklight 6.2, I'm not able to upload application artifacts to the Application Center if they are larger than 64MB. I have other instances of Application Center where size is not an issue. All of my instances are using DB2 as their database.
How do we remove the size limitation for this instance of Application Center?
The key to resolving this issue lies in how the DB2 transaction log is configured. The LOGFILSIZ database configuration parameter can be used to raise the maximum transaction size, which is what needs to change to lift the ceiling on the size of apps that can be uploaded when DB2 is the Application Center database.
To change the LOGFILSIZ parameter, see this page in the MobileFirst 7.1 Knowledge Center.
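If it helps, here is a rough sketch of the change from the DB2 command line; the database name APPCNTR and the value are assumptions, so adjust them for your environment:
db2 update database configuration for APPCNTR using LOGFILSIZ 16384
# LOGFILSIZ is measured in 4KB pages, so 16384 pages is 64MB per log file
# the new size takes effect once all connections close and the database is reactivated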
I'd like to use OpenMapTiles server with Leaflet in my web app.
The OpenMapTiles server runs on an EC2 instance on AWS.
I also have my web app, which uses Leaflet to present the data.
I'm serving XYZ PNG tiles in a couple of color styles.
What I've noticed is that serving tiles from my own server is much slower than serving them from services like MaptilerCloud.
When I first load the map in a particular style, it can take up to 20 seconds to load fully. After that, by the way, searching for another city in the same style is much faster.
When I paste a single URL into my browser to fetch one map tile from my own server, it loads immediately.
Do you know what the problem could be? The EC2 instance's CPU usage stays below 15%, and it has up to 10Gbps of burst network bandwidth...
I have an e-commerce website on Magento 2.2.2 and it keeps going down almost every day. Whenever it goes down, users get a "site took too long to respond" error and it never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50GB, of which the whole website is around 18GB (11GB of media files, plus vendor files, etc.). Here are the things I cannot figure out:
a.) The server shows that 33GB has been used, although it should show only about 18GB. I have checked everywhere and I can't find what is consuming the additional 15GB of space; the complete HTML folder is only 18GB.
b.) When I checked the log files, they show the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2GB, but this warning keeps coming back.
The server is an Amazon EC2 instance and Magento is in production mode. Would allocating 100GB instead of 50GB solve the problem?
Update: I increased innodb_buffer_pool_size to 10GB and the logs no longer show the error, but the server still goes down every day. Since our server has only 4GB of RAM, could that be the main cause? Everyone seems to suggest at least 8GB of RAM.
Try the things below.
Magento 2 generates large log files and cache data, so the files in your var folder may be what is growing; you can check this as shown below.
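A quick way to confirm where the space is going (paths assume a standard Magento 2 install, run from the Magento root):
du -sh var/log var/cache var/page_cache var/session var/report
# clear generated caches safely through the Magento CLI
php bin/magento cache:clean
php bin/magento cache:flush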
Also check whether your site has more than 3000 products with large product images, and whether you are storing all of them on the server itself.
My suggestion is: if your site has that many products, use a CDN for better performance, so that all images are served from the third party.
Next, set up Cloudflare to reduce the impact of downtime on your customers; you can have a cached index page served while the server is down. You should also write a script that restarts the site automatically when it goes down.
On the server side, check the memory limit for PHP; it's better to give it 2G.
On the MySQL side, check whether there are sleeping queries (see the check below). If they come from a custom extension, ask your developer to optimize the code.
For example, the code may be loading a whole collection just to fetch a single item.
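One way to check for sleeping or long-running queries (the credentials are placeholders):
mysql -u root -p -e "SHOW FULL PROCESSLIST;"
# look for many connections stuck in the Sleep state, or queries with a large Time value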
You can use a tool like New Relic.
If everything is fine on the development side, try optimizing at the server level: PHP memory limits, killing runaway MySQL queries, and so on.
Meanwhile, Magento is a big platform for the e-commerce sector, so it covers a lot of ground by default. It's better to remove unwanted modules from your active site, for example by disabling core modules you are not using.
For an average site, use 16GB of RAM.
Did you restart MySQL to make the change take effect?
Also, you need to set that buffer up to 20971520000, which is around 20GB.
Magento uses a lot of sessions and cache.
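For the buffer pool change mentioned above, a rough sketch (the config path and value are assumptions; keep the value within the RAM you actually have):
# in /etc/mysql/my.cnf (or your distribution's equivalent), under [mysqld]:
#   innodb_buffer_pool_size = 2G
sudo systemctl restart mysql
# on MySQL 5.7+ it can also be changed at runtime without a restart:
mysql -u root -p -e "SET GLOBAL innodb_buffer_pool_size = 2147483648;"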
I have an application which requires more than 30GB of memory and more than 4GB of disk space.
Can I run the app in any Cloud Foundry environment (PCF or Bluemix, with an enterprise account)?
Please help me with this query.
The default Bluemix quota plan does not meet your requirement, since it allows only 8GB per instance (512GB max). You would need to open a ticket to change your organization's quota plan.
Either way, to check which quota plan your organization is using, go to Manage > Account > Organization > Select Organization > Edit Org.
In the quota section, look at the quota plan, then log in with the cf CLI and list the quota details:
cf login
cf quota QUOTA_PLAN
This link can give you a little more help.
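If you have the necessary permissions (usually a platform administrator), the quota plans can also be listed and adjusted from the CLI; the plan name and limits below are only placeholders:
cf quotas
cf update-quota large-apps -i 32G -m 512G
# -i sets the per-instance memory limit, -m the total memory for the org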
This depends entirely on the Cloud Foundry provider that you're using and the limits that they put in place.
Behind the scenes, it also depends on the types of VMs being used for Diego Cells in the platform. The Cells are where your application code will run and there must be enough space on a Cell to run your app instance. As an example, if you have a total of 16G of RAM on your Diego Cells then it wouldn't be possible for a provider to support your use case of 30G for one application instance since there would be no Cells with that much free space. If you had Cells with 32G of RAM, that might work, depending on overhead and placement of other apps, but you'd probably need something even larger like 64G per Cell.
I mention all this because at the end of the day, if you're willing to run your own Cloud Foundry installation you can pretty much do whatever you want, so running with 30G or 100G isn't a problem as long as you configure and scale your platform accordingly.
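If the platform's quotas do allow it, requesting the resources is just a matter of push flags; the app name and values here are placeholders:
cf push my-big-app -m 30G -k 4G
# -m is the memory limit per instance, -k the disk limit per instance
# note that many platforms cap the disk limit (often at 2G) unless the operator raises it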
Hope that helps!
I created a Java web app on IBM Bluemix. The application shares session objects among instances via the Session Cache service.
I understand how to program my application with the session cache, but I could not find any description of what happens if the total amount of cached data exceeds the cache space (e.g. with the starter plan, I can use 1GB of cache space).
These are my questions.
Q1. Is there any trigger that removes cached data from the cache space?
Q2. When the cache space is exceeded, which data will be removed? Is there a cache strategy such as Least Recently Used, Least Frequently Used, and so on?
The Session Cache service on IBM Bluemix is based on WebSphere Extreme Scale. Hence a lot of background information is provided in the Knowledge Center of WebSphere Extreme Scale. The standard Liberty profile for the Session Cache uses a Least Recently Used (LRU) algorithm to manage the space. I haven't tried it yet, but the first linked document describes how to monitor the cache and obtain statistics.
I thought of migrating from Joomla 1.5.23 to 1.7 and, like almost everyone, I too ran into problems (good thing I backed up my site).
The problem I am facing is that jUpgrade gets stuck at 'Migrating undefined'. Joomla 1.7 downloads completely and also extracts correctly. I think I am facing this problem because I somehow run out of space during the installation. What I wanted to know is: how much disk space does the migration require?
I have about 25MB free on my server, and I am allowed only 100MB in total.
Thank you.
By the way, I also unchecked the 'skip downloads' option; that didn't work for me.
You will probably need more disk space than you have available. Your current site, plus the downloaded zip file, plus space for extracting the files plus any backups you have on the server are likely to exceed your 100MB.
I'd recommend taking a backup of your site, setting up the site on a localhost server (XAMPP, WAMP, etc.) on your own machine, and running the migration there. This has the benefit of not hitting the arbitrary limits of what sounds like a very low-budget web host.
Obviously you'll have the extra complexity of setting up your own server on your PC, but there are many tutorials out there that will walk you through the process, and learning new skills is always good.
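For the backup step, a rough sketch assuming a typical shared-host layout; the paths, database name, and credentials are placeholders:
# archive the site files
tar -czf joomla-backup.tar.gz public_html/
# dump the Joomla database
mysqldump -u DB_USER -p DB_NAME > joomla-backup.sql
# download both files to your own machine before running the migration locally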