How to change the postgresql.conf parameter "max_connections" on Google Cloud SQL?
When I exceed 100 connections I get the error: "FATAL: remaining connection slots are reserved for non-replication superuser connections"
Normally you would do it via CloudSQL flags API (or UI): https://cloud.google.com/sql/docs/postgres/flags
However, max_connections is not a parameter we currently support. We (the Postgres team in Cloud SQL) are aware that a low max_connections is a problem for some (many?) applications and will address the issue in one of the next releases.
Please follow issue 37271935 on our public issue tracker for updates.
Years later, it seems like it's supported now.
For the Terraform gang, you can update the parameter this way:
resource "google_sql_database_instance" "main" {
  name             = "main-instance"
  database_version = "POSTGRES_14"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"

    database_flags {
      name  = "max_connections"
      value = "100"
    }
  }
}
Note that at the time of writing, the default max_connections for db-f1-micro is 25; see https://cloud.google.com/sql/docs/postgres/flags#postgres-m
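If you're not using Terraform, the same flag can be set from the command line with gcloud; a minimal sketch, assuming the instance is named main-instance as above (the syntax mirrors the patch command shown further down for work_mem):

gcloud sql instances patch main-instance --database-flags=max_connections=100

Keep in mind that --database-flags replaces the whole set of flags on the instance, so include any other flags you have already set, and that the patch may restart the instance.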
I am writing a Python app to run as a lambda function and want to connect to an RDS DB instance without making it publicly accessible.
The RDS DB instance was already created under the default VPC with security group "sg-abcd".
So I have:
created a lambda function under the same default VPC
created a role with the AWSLambdaVPCAccessExecutionRole policy attached and assigned it to the lambda function, as in https://docs.aws.amazon.com/lambda/latest/dg/services-rds-tutorial.html
set sg-abcd as the lambda function's security group
added sg-abcd as source in the security group's inbound rules
added the CIDR range of the lambda function's subnet as source in the security group's inbound rules
However, when I invoke the lambda function it times out.
I can connect to the RDS DB from my laptop (after setting my IP as source in the sg's inbound rules), so I know that it is not an authentication problem. Also, for the RDS DB "Publicly Accessible" is set to "Yes".
Here's part of the app's code (where I try to connect):
import logging
import sys

import psycopg2

import rds_config  # local module holding the DB credentials

rds_host = "xxx.rds.amazonaws.com"
port = xxxx
name = rds_config.db_username
password = rds_config.db_password
db_name = rds_config.db_name

logger = logging.getLogger()
logger.setLevel(logging.INFO)

try:
    conn = psycopg2.connect(host=rds_host, database=db_name, user=name, password=password, connect_timeout=20)
except psycopg2.Error as e:
    logger.error("ERROR: Unexpected error: Could not connect to PostgreSQL instance.")
    logger.error(e)
    sys.exit()
I really can't understand what I am missing. Any suggestion is welcome, please help me figure it out!
Edit: the inbound rules that I have set look like this:
Security group rule ID: sgr-123456789
Type: PostgreSQL
Protocol: TCP
Port range: 5432
Source: sg-abcd OR IP or CIDR range
This document should help you out. Just make sure to follow the suggestions for your specific scenario, i.e. whether the lambda function and the RDS instance are in the same VPC or not.
In my case the lambda function and the RDS instance are in the same VPC, and both also use the same subnets and SGs. But do follow the instructions in that document for the configuration needed in your scenario.
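As a quick sanity check, you can also confirm from the CLI which VPC, subnets and security groups the function is actually attached to (my-function below is a placeholder for your function name):

aws lambda get-function-configuration --function-name my-function --query 'VpcConfig'

The SubnetIds and SecurityGroupIds returned there are the ones that the inbound rules on sg-abcd need to allow.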
I've got an RDS instance (db.t2.micro - 1 vCPU, 1GiB RAM) that I'm spamming with connection attempts to mimic high load on the DB over a short period of time, and I'm consistently hitting a DB connection limit of ~100 regardless of the DB instance class (I've tried db.t2.large - 4 vCPU, 16GiB RAM), of setting the max_connections parameter as part of a custom parameter group, and of using an RDS proxy for connection pooling.
I do notice that the red line on the DB connections graph below disappears when I increase the DB instance class, which suggests that more connections should be available, but as can be seen in the graph the connection limit stays fixed at ~100.
I've read threads where people have DB connections into the thousands and even tens of thousands, so I'm convinced I'm missing something here on the configuration side of things. Any ideas?
Edit: I am able to exceed ~100 connections when using the JDBC library, but when I mimic our production system, which is a REST API running as a service on AWS ECS, I max out at ~100 with an HTTP 500 error.
The CloudWatch log indicates 'rate exceeded'. The REST API is built using Microsoft.NET.Sdk.Web. In my use case the server needs to be able to handle ~500 API requests a second every 15 minutes.
I suspect that your API, which you have already identified as the REST API (it could be the only one you are using, I can't tell from the info), is being throttled.
First, to identify whether or not it is throttling, go to your CloudTrail console and create a table for a CloudTrail trail.
Then open the Athena console, select New query, and run the query below, replacing the table name with the CloudTrail table you created.
SELECT eventname, errorcode, eventsource, awsregion, useragent, COUNT(*) AS count
FROM your-cloudtrail-table
WHERE errorcode = 'ThrottlingException'
  AND eventtime BETWEEN '2020-10-11T03:00:08Z' AND '2020-10-12T07:15:08Z'
GROUP BY errorcode, awsregion, eventsource, useragent, eventname
ORDER BY count DESC;
Once you have identified for sure that your API is throttling, you can ask the AWS team to bump up the limit if the throttling is due to a limit (which they should be able to confirm).
See this thread for a limit-related conversation:
https://forums.aws.amazon.com/thread.jspa?threadID=226764
Also check out the quota doc for the limits on the ECS service:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html
Secondly, go ahead and check on the PostgreSQL end what connection limit it reports. You can psql into it and run:
show max_connections;
or
postgres=> select * from pg_settings where name='max_connections';
-[ RECORD 1 ]---+-----------------------------------------------------
name | max_connections
setting | 83
unit |
category | Connections and Authentication / Connection Settings
short_desc | Sets the maximum number of concurrent connections.
extra_desc |
context | postmaster
vartype | integer
source | configuration file
min_val | 1
max_val | 262143
enumvals |
boot_val | 100
reset_val | 83
sourcefile | /rdsdbdata/config/postgresql.conf
sourceline | 33
pending_restart | f
Hope this helps!
This will tell you the max_connections limit for that particular instance. There is no fixed limit as such (there is a theoretical limit); the connection limit is dynamic in PG, depending on the memory of your instance/cluster.
If you go to RDS and then, on the left side, "Parameter groups", you can search for max_connections and check the "Values" column, which shows:
LEAST({DBInstanceClassMemory/9531392},5000)
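To get a feel for what that formula produces, you can evaluate it yourself; DBInstanceClassMemory is in bytes, so for the instance classes mentioned in the question it works out roughly to:

select least(1073741824 / 9531392, 5000) as t2_micro,    -- 1 GiB  -> 112
       least(17179869184 / 9531392, 5000) as t2_large;   -- 16 GiB -> 1802

The 112 is in the same ballpark as the 83 reported by pg_settings above, while the db.t2.large works out to roughly 1800, so if you are still capped at ~100 on the larger instance the bottleneck is most likely on the client side rather than in this parameter.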
I really know very little about Microsoft and .NET but it sounds like your application has a default connection pool of 100 connections.
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
In your DB connection string try adding Max Pool Size=200;
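I don't know which ADO.NET provider your connection string is for, but as a sketch, if the service talks to PostgreSQL through Npgsql (an assumption on my part), the keyword is Maximum Pool Size and it also defaults to 100 connections, which would line up with the ~100 ceiling you're seeing:

Host=<your-rds-endpoint>;Port=5432;Database=<db>;Username=<user>;Password=<password>;Maximum Pool Size=200

The angle-bracket values are just placeholders; the important part is the pool size keyword for whichever provider you actually use.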
I have a query I'd like to run regularly in Redshift. I've set up an AWS Data Pipeline for it.
My problem is that I cannot figure out how to access Redshift. I keep getting "Unable to establish connection" errors. I have an Ec2Resource and I've tried including a subnet from our cluster's VPC and using the Security Group Id that Redshift uses, while also adding that sg-id to the inbound part of the rules. No luck.
Does anyone have a from-scratch way to set up a data pipeline to run against Redshift?
How I currently have my pipeline set up
RedshiftDatabase
Connection String: jdbc:redshift://[host]:[port]/[database]
Username, Password
Ec2Resource
Resource Role: DataPipelineDefaultResourceRole
Role: DataPipelineDefaultRole
Terminate after: 20 minutes
SqlActivity
Database: [database] (from Connection String)
Runs on: Ec2Resource
Script: SQL query
Error message
Unable to establish connection to jdbc:postgresql://[host]:[port]/[database] Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Ok, so the answer lies in Security Groups. I had to find the Security Group my Redshift cluster is in, and then add that as a value to "Security Group" parameter on the Ec2Resource in the DataPipeline.
Ec2Resource
Resource Role: DataPipelineDefaultResourceRole
Role: DataPipelineDefaultRole
Terminate after: 20 minutes
Security Group: sg-XXXXX [pull from Redshift]
Try opening inbound rules to all sources, just to narrow down possible causes. You've probably done this, but make sure you've set up your jdbc driver and configurations according to this.
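If it still fails, a quick way to separate a networking problem from a driver problem is to test raw TCP reachability from an EC2 instance in the same subnet the Ec2Resource uses (the endpoint below is a placeholder; 5439 is Redshift's default port):

nc -zv your-cluster.xxxxxx.us-east-1.redshift.amazonaws.com 5439

If that times out or is refused, it's security groups / subnets; if it connects, focus on the JDBC driver and connection string instead.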
I want to tune a Google Cloud SQL for PostgreSQL instance. Currently, I'm trying to eliminate sorting speed degradation:
Sort Method: external merge Disk: 39592kB
Right now work_mem is set to 4MB, and it seems that is too small. After reading the docs, I didn't find a way to change this setting. It's not possible via the Web GUI, nor via the command line:
$ gcloud sql instances patch reporting-dev --database-flags work_mem=128MB
The following message will be used for the patch API method.
{"project": "xxx-153410", "name": "reporting-dev", "settings": {"databaseFlags": [{"name": "work_mem", "value": "128MB"}]}}
WARNING: This patch modifies a value that requires your instance to be
restarted. Submitting this patch will immediately restart your
instance if it's running.
Do you want to continue (Y/n)? Y
ERROR: (gcloud.sql.instances.patch) HTTPError 404: Flag requested cannot be set.
Any thoughts on that?
You can change it by user or by database.
alter database db1 set work_mem='64MB';
alter user stan set work_mem='32MB';
User overrides db, db overrides postgresql.conf / cluster settings. Both override alter system set ..., which you might not be able to use due to security settings.
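A minimal sketch of how to check that it took effect, using the example names above (per-database and per-user settings only apply to new sessions):

-- reconnect to db1 as the application user, then:
show work_mem;                        -- should now report 64MB
select current_setting('work_mem');   -- same information

-- you can also override it for a single session or transaction:
set work_mem = '256MB';
set local work_mem = '256MB';         -- inside a transaction block only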
Currently I am building an application with microservices. I have three instances which are actually interacting with the database, i.e. PostgreSQL 9.4.4.
Below are my connection properties with Slick 3.0:
dev {
  # Development Database configuration
  # ~~~~~
  dbconf {
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties {
      user = "xyz"
      password = "dev#xyz"
      databaseName = "dev_xyz"
      serverName = "localhost"
    }
    numThreads = 10
  }
}
The problem is that I am getting this FATAL: sorry, too many clients already error. max_connections in PostgreSQL is 100, which is the default. As per the discussions on the web, I might have to use a connection pool for this, which I am already doing by using Slick's default connection pool, HikariCP. I am quite confused right now; what steps should I take to resolve this issue?
Add the maxConnections parameter to your configuration.
dbconf {
  numThreads = 10
  maxConnections = 10
}
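With the setup described in the question (three service instances sharing one database), that caps the application at roughly 3 × 10 = 30 pooled connections, comfortably below the default max_connections of 100. As a rule of thumb, keep maxConnections at least as large as numThreads so Slick's worker threads never have to wait for a free connection.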