I'm looking for help with TypeORM and PostgreSQL. To avoid long-running queries, I would like to set a statement timeout at the connection level.
How can I do this?
TypeORM documentation
maxQueryExecutionTime - If query execution time exceeds this given max execution time (in milliseconds) then the logger will log this query.
If that doesn't do what you want, you can use extra to pass configuration to the underlying postgres driver.
extra - Extra connection options to be passed to the underlying driver. Use it if you want to pass extra settings to the underlying database driver.
Connection config that's working for me:
{
    name: "default",
    // .....
    extra: {
        application_name: "your_app_name",
        statement_timeout: 30000 // 30s
    }
}
We can check how these extra options are used in the driver:
https://github.com/typeorm/typeorm/blob/68a5c230776f6ad4e3ee7adea5ad4ecdce033c7e/src/driver/postgres/PostgresDriver.ts#L1361
Available options are listed here: https://node-postgres.com/api/pool
And here: https://node-postgres.com/api/client
The config passed to the pool is also passed to every client instance within the pool when the pool creates that client.
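As a sanity check outside TypeORM, here is a minimal sketch using node-postgres directly (host and database names are placeholders): the same options TypeORM forwards via extra are accepted by the Pool constructor and inherited by every client the pool creates.

import { Pool } from "pg";

// The same options TypeORM forwards via `extra`; the pool hands
// them down to every client it creates.
const pool = new Pool({
    host: "localhost",                 // placeholder
    database: "mydb",                  // placeholder
    application_name: "your_app_name", // visible in pg_stat_activity
    statement_timeout: 30000,          // cancel statements running longer than 30s
});

async function demo(): Promise<void> {
    // Should fail with "canceling statement due to statement timeout".
    await pool.query("SELECT pg_sleep(60)");
}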
Related
In the PostgreSQL documentation (https://www.postgresql.org/docs/10/libpq-connect.html) it is said that multiple hosts can be specified in a single connection string, so that each host is tried in order until a connection succeeds.
But when I tried to use the same setting in the connection string in my ASP.NET web.config file, it throws a "no such host name" error. I am using the Npgsql provider to connect to the PostgreSQL database.
I need to add multiple server names to the connection string so that if server#1 fails, it immediately tries the next server, server#2, and so on in order until one succeeds.
Can you please suggest how multiple hosts can be provided in the connection string?
The Npgsql driver does not currently support this functionality. The issue tracking this is https://github.com/npgsql/npgsql/issues/732; I'm still hoping we can get this into the next release, but there's a lot going on.
Load balancing and failover are available in Npgsql version 6. At the time of writing, v6 is in preview.
Simple failover example (server2 is only used if a connection could not be established to server1):
Host=server1,server2;Username=test;Password=test
Example with load balancing (round robin):
Host=server1,server2,server3,server4,server5;Username=test;Password=test;Load Balance Hosts=true;Target Session Attributes=prefer-standby
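Here Load Balance Hosts=true makes Npgsql rotate through the host list round-robin instead of always starting from the first host, and Target Session Attributes=prefer-standby routes connections to a standby server when one is available, falling back to the primary otherwise (see the docs below).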
https://www.npgsql.org/doc/failover-and-load-balancing.html
I have a query I'd like to run regularly in Redshift. I've set up an AWS Data Pipeline for it.
My problem is that I cannot figure out how to access Redshift. I keep getting "Unable to establish connection" errors. I have an Ec2Resource and I've tried including a subnet from our cluster's VPC and using the Security Group Id that Redshift uses, while also adding that sg-id to the inbound part of the rules. No luck.
Does anyone have a from-scratch way to set up a data pipeline to run against Redshift?
How I currently have my pipeline set up
RedshiftDatabase
Connection String: jdbc:redshift://[host]:[port]/[database]
Username, Password
Ec2Resource
Resource Role: DataPipelineDefaultResourceRole
Role: DataPipelineDefaultRole
Terminate after: 20 minutes
SqlActivity
Database: [database] (from Connection String)
Runs on: Ec2Resource
Script: SQL query
Error message
Unable to establish connection to jdbc:postgresql://[host]:[port]/[database] Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Ok, so the answer lies in Security Groups. I had to find the Security Group my Redshift cluster is in, and then add that as a value to "Security Group" parameter on the Ec2Resource in the DataPipeline.
Ec2Resource
Resource Role: DataPipelineDefaultResourceRole
Role: DataPipelineDefaultRole
Terminate after: 20 minutes
Security Group: sg-XXXXX [pull from Redshift]
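If you'd rather script that inbound rule than click through the console, here is a rough sketch with the AWS SDK for JavaScript v3; the group IDs, region, and the default Redshift port 5439 are assumptions you'd replace with your own values.

import { EC2Client, AuthorizeSecurityGroupIngressCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" }); // assumed region

async function allowPipelineToReachRedshift(): Promise<void> {
    // Add an inbound rule to the Redshift cluster's security group that
    // allows traffic from the Ec2Resource's security group on the cluster port.
    await ec2.send(new AuthorizeSecurityGroupIngressCommand({
        GroupId: "sg-XXXXX", // the Redshift cluster's security group
        IpPermissions: [{
            IpProtocol: "tcp",
            FromPort: 5439, // default Redshift port; use your cluster's port
            ToPort: 5439,
            UserIdGroupPairs: [{ GroupId: "sg-YYYYY" }], // hypothetical Ec2Resource group
        }],
    }));
}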
Try opening the inbound rules to all sources, just to narrow down possible causes. You've probably done this, but make sure you've set up your JDBC driver and configuration according to this.
How do I establish a connection to MongoDB from JMeter using a JSR223 Sampler? Whenever I try to establish a connection, it fails without any response. I suspect this is due to the auth mechanism.
Any help with the necessary changes on the JMeter side is much appreciated.
Whenever you face an issue with your script, always check the jmeter.log file; it should normally contain the root cause, or at least enough information to guess it.
If you're looking for a built-in JMeter way of load testing MongoDB, you will need to add the next line to the user.properties file (an empty value overrides the not_in_menu property that hides the deprecated MongoDB elements):
not_in_menu=
This way you will get the MongoDB Source Config element back and will be able to specify your MongoDB host, port and other connection parameters. Later, in a JSR223 Sampler, you will be able to get the db object like:
def db = MongoDBHolder.getDBFromSource('sourceName', 'databaseName') // 'sourceName' must match your MongoDB Source Config element
or if you need to supply the credentials:
def db = MongoDBHolder.getDBFromSource('sourceName', 'databaseName', 'username', 'password') // overload that authenticates with the given credentials
More information: How to Load Test MongoDB with JMeter
We are using the Sails framework for our web application and MongoDB as the database.
Now we are calling the web app's services from mobile.
There can be around 200-300 concurrent users calling the web service.
I observed that only around 5-6 service calls are executed, and the rest fail with a timeout exception.
I read somewhere that sails-mongo has a default connection pool size of 5.
How can I change it?
Here is the config file, though the connection pool size is not changing:
mongodb: {
    adapter: 'sails-mongo',
    url: 'mongodb://127.0.0.1:27017/mydb?poolSize=200'
},
I found the poolSize configuration in the sails-mongo documentation.
Can you try something like the below?
someMongoDb: {
    adapter: 'sails-mongo',
    host: 'localhost', // defaults to `localhost` if omitted
    port: 27017, // defaults to 27017 if omitted
    user: 'username_here', // or omit if not relevant
    password: 'password_here', // or omit if not relevant
    database: 'database_name_here', // or omit if not relevant
    poolSize: 10 // or omit if not relevant
}
It looks like the Sails framework limits concurrent requests. I removed the MongoDB data fetching and just made the method empty, without sending a response. I observed that it executes 4 requests and makes the other requests wait; if I kill one request, it takes up another waiting request.
Sails/Node/MongoDB are not the problem, as they can handle thousands of simultaneous requests. Node.js is configured to accept an unlimited number of sockets by default: https://nodejs.org/api/http.html#http_agent_maxsockets
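You can verify the Node.js side yourself; this snippet needs nothing beyond a stock Node.js install:

import http from "http";

// Node's default global agent places no cap on concurrent sockets.
console.log(http.globalAgent.maxSockets); // prints: Infinity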
Most likely your browser or HTTP client is limiting the number of requests per server. Refer to https://stackoverflow.com/a/985704/401025 or look up the maximum number of connections in your HTTP client's manual.
Currently I am building an application with microservices. I have three instances which interact with the database, i.e. PostgreSQL 9.4.4.
Below is my connection configuration with Slick 3.0:
dev {
    # Development Database configuration
    # ~~~~~
    dbconf {
        dataSourceClass="org.postgresql.ds.PGSimpleDataSource"
        properties {
            user="xyz"
            password="dev#xyz"
            databaseName="dev_xyz"
            serverName="localhost"
        }
        numThreads=10
    }
}
The problem is that I am getting a "FATAL: sorry, too many clients already" error. max_connections in PostgreSQL is 100, which is the default. From discussions on the web, it seems I need a connection pool, which I already have: Slick's default connection pool, HikariCP. I am quite confused right now; what steps should I take to resolve this issue?
Add the maxConnections parameter to your configuration:
dbconf {
    numThreads=10
    maxConnections=10
}