How can I allocate more space to a Solana program in order to upgrade it?

When I try to upgrade a Solana program on mainnet using a buffer, I run into a size limit: when a program is first deployed on Solana, the space allocated for it is 2x the original program size, so each upgrade needs to fit inside the space the original program account was given.
When that size limit is reached, the upgrade fails with this error:
Program returned error: "account data too small for instruction"
Is there any way to allocate more space to the original program account, or any other way to keep upgrading my program as much as I need?
PS: I don't want to redeploy it at a new address just to upgrade.
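For reference, this is a quick way to check how much space the program data account currently has (the "Data Length" field); the program ID is a placeholder:
# Inspect the deployed program; "Data Length" is the allocated size of the
# program data account. <PROGRAM_ID> is a placeholder for your program.
solana program show <PROGRAM_ID> --url https://api.mainnet-beta.solana.com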

Currently you cannot increase the account size. This is a known issue that is planned to be fixed in 1.11: https://github.com/solana-labs/solana/issues/26385
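Once that change lands, the CLI is expected to get an extend subcommand for the upgradeable loader. A hedged sketch of what that should look like, based on the linked issue; the program ID, byte count, and .so path are placeholders, so verify the exact subcommand and flags with solana program --help on your CLI version:
# Grow the program data account by an extra 50,000 bytes, then retry the
# upgrade as usual. Confirm the subcommand exists in your CLI version first.
solana program extend <PROGRAM_ID> 50000 --url https://api.mainnet-beta.solana.com
solana program deploy target/deploy/my_program.so --program-id <PROGRAM_ID>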

Related

Google cloud sql incorrect innodb_buffer_pool_size?

I upgraded my Cloud SQL machine from a 'db-f1-micro' 0.6GB RAM machine to a 'db-n1-standard-1' 3.75GB RAM machine last week. Running:
SELECT @@innodb_buffer_pool_size;
The output is:
1375731712
which I believe is 1.38GB. Here's the memory utilization for the primary and replica:
This seems oddly low for this machine type, but from my research (How to set innodb_buffer_pool_size in mysql in google cloud sql?) it doesn't appear that I can alter innodb_buffer_pool_size myself. Is it somehow set dynamically and slowly increasing over time? It doesn't appear to be near the 75-80% range Google seems to aim for on these machines.
What are the values of innodb_buffer_pool_chunk_size and innodb_buffer_pool_instances?
innodb_buffer_pool_size must always be a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances, and will be automatically adjusted to be so. The chunk size can only be modified at startup, as explained in the docs page for InnoDB Buffer Pool Size configuration.
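As a quick check of how the rounding works out on your instance, something like this shows the three values side by side (the host and user are placeholders):
# Compare the configured pool size against chunk_size * instances; MySQL
# rounds innodb_buffer_pool_size to a multiple of that product.
mysql --host=<INSTANCE_IP> --user=root -p -e "SELECT @@innodb_buffer_pool_size, @@innodb_buffer_pool_chunk_size, @@innodb_buffer_pool_instances;"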
For Google Cloud SQL in particular, not only the absolute but also the relative size of innodb_buffer_pool_size depends on the instance type. I work for GCP support, and after some research in our documentation, I can tell you that the pool size is automatically configured based on an internal formula, which is subject to change. Improvements are being made to make instances more resilient against OOMs, and the buffer pool size plays an important role in this.
So it is expected behaviour that with your new instance type, and possibly different innodb_buffer_pool_chunk_size and innodb_buffer_pool_instances values, you may see quite different memory usage. Currently, the user does not have control over innodb_buffer_pool_size.

Google Cloud SQL PG11 : could not resize shared memory segment

I recently upgraded a Postgres 9.6 instance to 11.1 on Google Cloud SQL. Since then I've begun to notice a large number of the following error across multiple queries:
org.postgresql.util.PSQLException: ERROR: could not resize shared
memory segment "/PostgreSQL.78044234" to 2097152 bytes: No space left
on device
From what I've read, this is probably due to changes that came in PG10, and the typical solution involves increasing the instance's shared memory. To my knowledge this isn't possible on Google Cloud SQL though. I've also tried adjusting work_mem with no positive effect.
This may not matter, but for completeness, the instance is configured with 30 gigs of RAM, 120 gigs of SSD space and 8 CPUs. I'd assume that Google would provide an appropriate shared memory setting for those specs, but perhaps not? Any ideas?
UPDATE
Setting the database flag random_page_cost to 1 appears to have reduced the impact of the issue. This isn't a full solution though, so I would still love to get a proper fix if one is out there.
Credit goes to this blog post for the idea.
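For anyone who wants to apply the same flag, a sketch with gcloud (the instance name is a placeholder; note that --database-flags replaces the instance's entire flag list, so include any other flags you already rely on):
# Set random_page_cost=1 on a Cloud SQL Postgres instance. This overwrites
# the existing flag list, so repeat any other flags you have configured.
gcloud sql instances patch my-postgres-instance --database-flags=random_page_cost=1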
UPDATE 2
The original issue report was closed and a new internal issue that isn't viewable by the public was created. According to a GCP Account Manager's email reply, however, a fix was rolled out by Google on 8/11/2019.
This worked for me. I think Google needs to change a flag on how they start the Postgres container on their end, which we can't influence from inside Postgres.
https://www.postgresql.org/message-id/CAEepm%3D2wXSfmS601nUVCftJKRPF%3DPRX%2BDYZxMeT8M2WwLSanVQ%40mail.gmail.com
Bingo. Somehow your container tech is limiting shared memory. That error is working as designed. You could figure out how to fix the mount options, or you could disable parallelism with max_parallel_workers_per_gather = 0.
show max_parallel_workers_per_gather;
-- 2
-- Run your query: it fails
alter user ${MY_PROD_USER} set max_parallel_workers_per_gather = 0;
-- Run the query again: it should now work
alter user ${MY_PROD_USER} set max_parallel_workers_per_gather = 2;
-- Run the query again: it fails again
You may consider increasing the tier of the instance; that will influence the machine memory, vCPU cores, and resources available to your Cloud SQL instance. Check the available machine types.
In Google Cloud SQL for PostgreSQL it is also possible to change database flags that influence memory consumption:
max_connections: some memory resources can be allocated per-client, so the maximum number of clients suggests the maximum possible memory use
shared_buffers: determines how much memory is dedicated to PostgreSQL for caching data
autovacuum: should be on.
I recommend lowering these limits to reduce memory consumption; a quick way to inspect the current values is sketched below.
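Assuming you can connect with psql, this shows what the instance is currently using (the connection details are placeholders):
# Show the current values of the settings mentioned above.
psql "host=<INSTANCE_IP> dbname=postgres user=postgres" -c "SELECT name, setting, unit FROM pg_settings WHERE name IN ('max_connections', 'shared_buffers', 'autovacuum', 'work_mem');"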

Matlab incorrect out of disk error when compiling script

I have built a treebagger model in Matlab which I am trying to compile in R2016a using the application compiler. I did this successfully a few days ago, despite the fact that the file for the treebagger model was about 2GB.
I retrained my model because I had made some small changes to my data and now when I try to compile I get an error saying that I am out of disk space, even though I have around 250GB free disk space on the drive. More precisely, the message was "Out of disk space. Failed to create CTF file. Please free -246249051088 bytes and re-run Compiler."
I even retried on a drive with about 2.5TB of free space and got the same issue. Any ideas? Thanks for any help.

AIX: piping the application command output to more results in malloc failure in application

I have a command line application which, when run from a shell, lists output read from a database. It gets this information in chunks, for which memory is allocated and freed repeatedly.
When I execute the command (whose output spans around 6000 pages) it lists the data correctly.
But, only on AIX, when I run 'command | more', after a random number of pages the memory allocation in the part of the application that fetches the data in chunks starts failing.
(The same command piped to more works fine on Linux for the same data.)
Any idea why it fails on AIX? Does anybody know the memory allocation criteria on AIX? Why does piping the output to more cause a memory allocation failure in the application?
It is not clear exactly what the failure is. Are you getting a segfault, or is the call to malloc returning 0 (NULL), indicating that you are out of memory?
The fault could be in an AIX library but it could just as easily be within your application.
Go here: http://pic.dhe.ibm.com/infocenter/aix/v6r1/index.jsp (or the page that is appropriate for your level)
Search for "malloc debug". These facilities are not bleeding edge but they are fairly good and complete. With some time and care you can track down memory leaks and using memory after it has been freed (which sounds like the case here).
It's also good to review the available APARs for your level, looking for matches that sound similar.
There are also 3rd-party tools to help out, like zero fault http://www.zerofault.com/index.html and Purify (which it looks like IBM purchased) http://www-01.ibm.com/software/awdtools/purify/unix/sysreq/.
Good luck

Setting SHMMAX etc values on MAC OS X 10.6 for PostgreSQL

I'm trying to startup my PostgreSQL server on my local machine.
But I got an error message saying:
FATAL: could not create shared memory segment: Invalid argument
DETAIL: Failed system call was shmget(key=5432001, size=9781248, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 9781248 bytes), reduce PostgreSQL's shared_buffers parameter (currently 1024) and/or its max_connections parameter (currently 13).
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.
I searched and looked at the docs, but nothing I tried for setting kern.sysv.shmmax and kern.sysv.shmall is working.
What are the right settings on Snow Leopard? I installed postgres with macports.
You have to increase the kernel's maximum shared memory allowance to something higher than what Postgres is trying to allocate. (You could also decrease the shared buffers or maximum connections settings in postgresql.conf to make Postgres ask for less memory, but the default values are already rather small for most use cases.)
To do this as a one-off, lasting until next reboot:
sudo sysctl -w kern.sysv.shmmax=12582912
sudo sysctl -w kern.sysv.shmall=12582912
Change the exact number as appropriate for your Postgres settings; it has to be larger than whatever Postgres says it's asking for in the log file. After doing both of these, you should be able to start Postgres.
To make the change persist across reboots, edit /etc/sysctl.conf and set the same values there.
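A sketch of what that file might contain, matching the one-off commands above (adjust both numbers to your own Postgres request; on OS X, kern.sysv.shmall is commonly quoted in 4 KB pages rather than bytes, so double-check the units for your release):
# /etc/sysctl.conf
kern.sysv.shmmax=12582912
kern.sysv.shmall=12582912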
Apple documents how to adjust them for Snow Leopard here:
http://support.apple.com/kb/HT4022: Mac OS X Server v10.6: Adjusting Shared memory segment values
sysctl does allow you to change them temporarily.
I know this is an old question, but I think it might be worth noting that if you're just using PostgreSQL for development purposes, you might be safe to just go into postgresql.conf (in the data directory created when you ran initdb) and change the shared_buffers variable to something a little lower. It defaults to 28MB or something.
But this way you're not messing around with system shared memory variables.
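For example, something along these lines (the data directory path is a placeholder; adjust it to wherever initdb put yours):
# In <DATA_DIR>/postgresql.conf, lower the setting, e.g.:
#   shared_buffers = 16MB
# then restart so the smaller shared memory request takes effect:
pg_ctl -D <DATA_DIR> restart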
Warning: this answer has been made obsolete by newer versions of OS X. Please reference Paul Legato's answer below.
In Mac OS X you cannot change shmmax after the system has booted. You need to edit /etc/rc or /etc/sysctl.conf, and keep in mind that it needs to be a multiple of 4096. See here
http://www.postgresql.org/docs/8.4/static/kernel-resources.html