Sails framework + MongoDB: how to set the connection pool size

We are using the Sails framework for our web application and MongoDB as the database.
Now we are calling the web app's services from mobile clients.
There can be around 200-300 concurrent users calling the web services.
I observed that only around 5-6 requests are executed and the rest fail with a timeout exception.
I read somewhere that sails-mongo has a default connection pool size of 5.
How can I change it?
Here is the config file, though the connection pool size is not changing:
mongodb: {
  adapter: 'sails-mongo',
  url: 'mongodb://127.0.0.1:27017/mydb?poolSize=200'
},

I found the poolSize configuration in the sails-mongo documentation.
Try something like the below:
someMongoDb: {
  adapter: 'sails-mongo',
  host: 'localhost', // defaults to `localhost` if omitted
  port: 27017, // defaults to 27017 if omitted
  user: 'username_here', // or omit if not relevant
  password: 'password_here', // or omit if not relevant
  database: 'database_name_here', // or omit if not relevant
  poolSize: 10 // or omit if not relevant
}

It looks like the Sails framework limits concurrent requests. I removed
the code that fetches data from MongoDB and just made the method empty,
without sending a response. I observed that it executes 4 requests and
makes the other requests wait. If I kill one request, it picks up another
waiting request.
Sails/Node/MongoDB are not the problem, as they can handle thousands of simultaneous requests. Node.js is configured to accept an unlimited number of sockets by default: https://nodejs.org/api/http.html#http_agent_maxsockets.
Most likely your browser or HTTP client is limiting the number of requests per server. Refer to https://stackoverflow.com/a/985704/401025 or look up the maximum number of concurrent requests in the manual of your HTTP client.
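For reference, here is a minimal sketch of raising the per-host socket limit in a Node.js HTTP client (the host, port, and path are placeholders; maxSockets is a standard http.Agent option):

const http = require('http');

// Allow up to 300 concurrent sockets per host instead of the agent default.
const agent = new http.Agent({ keepAlive: true, maxSockets: 300 });

http.get({ host: '127.0.0.1', port: 1337, path: '/service', agent }, (res) => {
  res.resume(); // drain the response so the socket is released
});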

Related

HikariCP - Amazon IAM Authentication - is there a need to evict existing connections in the pool?

I am using IAM authentication to connect to Amazon RDS. Since the password expires every 15 minutes, I am using HikariConfigMXBean to update the credentials every 14 minutes (1 minute before the auth-token actually expires).
I have been referring to code from other folks, and I have seen people calling softEvictConnections() after refreshing the credentials.
As per the documentation, softEvictConnections() will basically remove all pre-existing connections and create a new pool of connections using the fresh credentials.
When I tested, I was able to verify that an older connection created with the old auth-token (which had since expired) continues to work.
For reference, below is the piece of code:
void updateHikariCredentials(HikariDataSource dataSource, String userName, String password)
{
    // Update username & password.
    HikariConfigMXBean configBean = dataSource.getHikariConfigMXBean();
    configBean.setUsername(userName);
    configBean.setPassword(password);

    HikariPoolMXBean pool = dataSource.getHikariPoolMXBean();
    if (pool != null) {
        pool.softEvictConnections(); // <-- Why is this needed?
    }
}
I am trying to understand: why do the existing connections need to be evicted?
I already set maxConnectionAge for my connections. Is there any additional advantage to forcefully evicting old connections on a password update, which I am missing?
Why do the existing connections need to be evicted? [...] Is there any additional advantage to forcefully evicting old connections on a password update, which I am missing?
There isn't.
Authentication happens when the database connection is established. Once established, the connection is not re-authenticated, so by throwing away all established connections you are just adding overhead.
The goal of short-lived credentials is to prevent an attacker from exfiltrating the credentials and then using them to open a connection. If they manage to open that connection during the time the credentials are valid, there is nothing you can do to stop them from accessing your data, short of manually killing the back-end process.

TypeORM PostgreSQL statement timeout

I'm looking for help with TypeORM and PostgreSQL. To avoid long-running queries, I would like to set a statement timeout at the connection level.
How can I do this?
From the TypeORM documentation:
maxQueryExecutionTime - If query execution time exceeds this given max
execution time (in milliseconds), then the logger will log this query.
Note that this setting only logs slow queries; it does not cancel them. If that doesn't do what you want, you can use extra to send postgres driver configuration.
extra - Extra connection options to be passed to the underlying
driver. Use it if you want to pass extra settings to underlying
database driver.
Connection config that's working for me:
{
  name: "default",
  // .....
  extra: {
    application_name: "your_app_name",
    statement_timeout: 30000 // 30 s
  }
}
We can check how these extra options are used in the driver:
https://github.com/typeorm/typeorm/blob/68a5c230776f6ad4e3ee7adea5ad4ecdce033c7e/src/driver/postgres/PostgresDriver.ts#L1361
Available options are listed here: https://node-postgres.com/api/pool
And here: https://node-postgres.com/api/client
The config passed to the pool is also passed to every client instance within the pool when the pool creates that client.
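For comparison, here is a minimal sketch of the same setting applied directly with node-postgres (pg), the driver TypeORM delegates to; the connection details are placeholders:

const { Pool } = require('pg');

// statement_timeout is sent to the server; any query running longer is
// cancelled with "canceling statement due to statement timeout".
const pool = new Pool({
  host: 'localhost',
  database: 'mydb',
  statement_timeout: 30000 // 30 s, same value as the extra block above
});

pool.query('SELECT pg_sleep(60)')
  .catch((err) => console.error(err.message)) // times out after 30 s
  .finally(() => pool.end());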

How to establish a connection with MongoDB in JMeter using the JSR223 sampler?

How can I establish a connection with MongoDB from JMeter using a JSR223 sampler? Whenever I try to establish the connection, it fails without any response. I suspect this is due to the auth mechanism.
Any help with the necessary changes to JMeter is much appreciated.
Whenever you face an issue with your script, always check the jmeter.log file; it should normally contain the root cause, or at least enough information to guess it.
If you're looking for a built-in JMeter way of load testing MongoDB, you will need to add the next line to the user.properties file (setting not_in_menu to empty restores the deprecated MongoDB test elements to the menu):
not_in_menu=
This way you will have the MongoDB Source Config element back and will be able to specify your MongoDB host, port, and other connection parameters. Later, in a JSR223 Sampler, you will be able to get the db object like:
def db = MongoDBHolder.getDBFromSource('sourceName', 'databaseName')
or if you need to supply the credentials:
def db = MongoDBHolder.getDBFromSource('sourceName', 'databaseName', 'username', 'password')
More information: How to Load Test MongoDB with JMeter

Does Google Cloud Functions reconnect to my MongoDB Client for each HTTP request?

I am trying to migrate my Node/Express REST API to Google Cloud Functions, and I noticed some performance issues. I receive 404 errors on all my API routes while waiting for my Functions to "spin up" after a period of inactivity. I am curious whether this is related to my implementation. Here is my Express serverless "server", written in TypeScript (index.ts):
import * as functions from 'firebase-functions'
import * as express from 'express'
import { MyApi } from './server'
const app: express.Application = MyApi.bootstrap().app
export const myApp = functions.https.onRequest(app)
Next, here is server.ts
import * as express from 'express'
import * as mongodb from 'mongodb'
require('dotenv').config({ path: '.env' })

export class MyApi {
  app: express.Application = express()
  mongoDbUri: string = process.env.MONGO_URI

  static bootstrap(): MyApi {
    return new MyApi()
  }

  constructor() {
    this.connectToDb(this.mongoDbUri)
  }

  connectToDb(uri: string) {
    mongodb.MongoClient.connect(uri, (err, db) => {
      if (err) {
        this.handleNoDbError(err)
      }
      setApiRoutes(this.app, db)
    })
  }
}
I've stripped a lot of the redundant code for the sake of simplicity, but hopefully this is enough for you to get the idea. I am asking Functions to expose some API endpoints. First, I make a single Mongo connection (relying on connection pooling), then I pass that connection (db) down to my routes. These route endpoints in turn make find() requests to my MongoDB Atlas database and pass the results on to my app.
I deployed a version of this code and it is functioning okay, in that it fetches results properly. However, I am concerned about the slow performance and the intermittent 404 errors (compared to a dedicated Node/Express server on Heroku, for example).
I was wondering if my problems are related to the Mongo client. Is it connecting to my database every time a request is made to Functions? Once my Functions wake up after inactivity, do they persist the same MongoDB connection across all future requests? I'm new to serverless, so I guess I'm confused about whether my Functions start up and stay up during execution, then "shut down" after going idle.
I can supply live links if needed.
The first time your function is executed in a new instance, it will have to connect to the Mongo server.
This means that a reconnect will happen at least:
After a period of inactivity, if Cloud Functions has spun down your instances.
When there is an increase in activity, as Cloud Functions spins up additional instances.
It may also happen intermittently if your client library does connection management. But since that doesn't depend on the Cloud Functions environment, I can't comment on it.
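A common mitigation, sketched below under the assumption of the Node mongodb driver v3+ (the database name, collection, and MONGO_URI are placeholders), is to cache the client in a module-level variable so warm instances reuse the connection across invocations:

const functions = require('firebase-functions');
const { MongoClient } = require('mongodb');

let client = null; // module scope: survives across invocations on a warm instance

async function getDb() {
  if (!client) {
    // Only a cold instance pays the connection cost.
    client = await MongoClient.connect(process.env.MONGO_URI);
  }
  return client.db('mydb');
}

exports.myApp = functions.https.onRequest(async (req, res) => {
  const db = await getDb();
  const docs = await db.collection('items').find({}).toArray();
  res.json(docs);
});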

Postman: socket hang up

I just started using Postman. I got the error "Error: socket hang up" when executing a collection runner. I've read a few posts regarding socket hang up; they mention a request being sent with no response from the server side, probably a timeout. How do I extend the request timeout in Postman's Collection Runner?
For me it was because my application had switched to https and my Postman requests still had http in them. Changing Postman to https fixed it.
In my experience, socket hang up can be a port-related error: it comes up when you try to connect to the database on a port that is already in use by another service.
E.g. if port 6455 is dedicated to some other service or connection, you cannot use the same port (6455) to make a database connection on the same server.
Sometimes this error arises when a client waits a very long time for a response. This can be resolved using the 202 (Accepted) HTTP code: you tell the server to start the job you want done, and then poll every so often to check whether it has finished.
If you are the one who wrote the server, this is relatively easy to implement. If not, check the documentation of the server you're using.
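A minimal sketch of this pattern in Express (the route names, the in-memory job store, and the simulated 60-second job are illustrative, not from the answer above):

const express = require('express');
const app = express();

const jobs = {}; // illustrative in-memory job store
let nextId = 1;

app.post('/jobs', (req, res) => {
  const id = String(nextId++);
  jobs[id] = { done: false, result: null };
  // Kick off the long-running work without blocking the response.
  setTimeout(() => { jobs[id] = { done: true, result: 'ok' }; }, 60000);
  res.status(202).location('/jobs/' + id).json({ id }); // 202 Accepted
});

app.get('/jobs/:id', (req, res) => {
  const job = jobs[req.params.id];
  if (!job) return res.sendStatus(404);
  res.json(job); // the client polls this until done === true
});

app.listen(3000);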
Postman was giving "Could not get response" / "Error: socket hang up".
I solved this problem by adding the Content-Length HTTP header to my request.
Are you using nodemon or some other file watcher? In my case, I was generating some local files, uploading them, then sending the URL back to my user. Unfortunately, nodemon would see the "changes" to the project and trigger a restart before a response was sent. I excluded the build directories from my file watcher and that solved the issue.
Here is the Nodemon readme on ignoring files: https://github.com/remy/nodemon#ignoring-files
I just faced the same problem, and I fixed it by closing my VPN. So I guess it's a network-agent problem; check whether you have a network proxy turned on.
This happened when the client waited a long time for a response. Try to sync your API requests from Postman, then make the login POST and you are done.
I had defined an Authenticate method to generate a token and declared its return type as a nullable string:
public string? Authenticate(string username, string password)
{
    if (!users.Any(u => u.Key == username && u.Value == password))
    {
        return null;
    }
    var tokenHandler = new JwtSecurityTokenHandler();
    var tokenKey = Encoding.ASCII.GetBytes(key);
    var tokenDescriptor = new SecurityTokenDescriptor()
    {
        Subject = new ClaimsIdentity(new Claim[]
        {
            new Claim(ClaimTypes.Name, username)
        }),
        Expires = DateTime.UtcNow.AddHours(1),
        SigningCredentials = new SigningCredentials(
            new SymmetricSecurityKey(tokenKey),
            SecurityAlgorithms.HmacSha256Signature)
    };
    var token = tokenHandler.CreateToken(tokenDescriptor);
    return tokenHandler.WriteToken(token);
}
Changing the nullable string return type to a plain string fixed the "socket hang up" issue for me!
If Postman doesn't get a response within a specified time, it will throw the "socket hang up" error.
I was doing something like the following to achieve a delay between each scenario in a collection:
GET https://postman-echo.com/delay/10
Pre-request script:
setTimeout(function(){}, [50000]);
I reduced the time duration:
setTimeout(function(){}, [20000]);
After that I stopped getting this error.
I solved this problem by disconnecting my VPN. You should check whether a VPN is connected.
What helped for me was replacing 'localhost' in the URL with http://127.0.0.1, or whatever other address your local machine has assigned to localhost.
A socket hang up error can also be due to a wrong URL for the API you are trying to reach in Postman; please check the URL carefully.
It's possible two things are happening at the same time:
The URL contains a port which is not commonly used, AND
you are using a VPN or proxy that does not support that port.
I had this problem. My server port was 45860 and I was using the pSiphon anti-filter VPN. In that condition Postman reported "connection hang-up" only when the server's reply was an error status code. (It was fine when some text was returned from the server with no error code.)
When I changed my web service port to 8080 on the server, it worked, even though the pSiphon VPN was connected.
Following on Abhay's answer: double-check the scheme. A server that is secured may disconnect if you call an https endpoint with http.
This happened to me while debugging an ASP.NET Core API running on localhost with the local cert. It took me a while to figure out, since it was inside a Postman environment; also, it was a Monday.
In my case, adding the "Content-Length" header to the request did the job.
My environment:
Mac [sw_vers]: macOS 12.0.1 (Monterey), build 21A559
MySQL [mysql --version]: Ver 8.0.27 for macos11.6 on x86_64 (Homebrew)
Apache [httpd -v]: Apache/2.4.48 (Unix), built Oct 1 2021 20:08:18
Laravel [php artisan --version]: Laravel Framework 8.76.2
Postman: Version 9.1.5
A socket hang up error can also occur due to the backend API's handling logic.
For example, I was trying to create an Nginx config file and restart the Nginx service using data from the incoming API request body. This temporarily disconnected the Nginx service while the API request was being handled and resulted in a socket hang up.
If you have tried all the steps mentioned in other comments and still face the issue, I suggest you check the API handler code thoroughly.
I handled the example above by calling the Nginx reset method with a delay, plus a separate API to check the status of the previous reset request.
For me it was giving the "socket hang up" error only while running the Collection Runner, not with a single request.
Adding a small delay (100-300 ms) in the Collection Runner solved the issue for me.
In my case, I had to provide --ssl-client-key and --ssl-client-cert files to overcome these errors.
Great error; it is so general that something different helps for everyone.
In my case I was not able to fix it, and what is really funny is that I am expecting a multipart file on one endpoint. When I prepare the request in Postman, I get "Error: socket hang up". If I switch to another endpoint (even a non-existent one), I get exactly the same error. But when I call any endpoint without a body, that request works, and after that all subsequent attempts work perfectly.
In my case this is purely a Postman issue; a request using curl never gives that error.
For me the issue was a mismatch of the HTTP versions on the client and the server.
The client assumed HTTP/2 while the server (Spring Boot/Tomcat) was on HTTP/1.
When I configured the server for HTTP/2, the issue was resolved in one go.
In Spring Boot you can enable HTTP/2 as below:
server.http2.enabled=true
Note: the scenario involved a client auth mechanism (i.e. mTLS). Without client auth/mTLS it worked without issues, but with client auth the version setting in Spring Boot was the important rescue point.
"socket hang up" is proxy related issue. when we run same collection with the help of newman on jenkins then all test are passed.
change the proxy setting
https://docs.cloudfoundry.org/cf-cli/http-proxy.html
I had the same issue ("Error: socket hang up" when sending a request to store a file), and the backend logs mentioned a timeout, as you described. In my case I was using MongoDB, and the real problem was that my collection's array capacity was full. When I cleared the documents in that collection, the error went away. Hope this helps someone who faces a similar scenario.
"Socket Hung Up" can be on-premise issue some time's, because, of bottle neck in %temp% folder, try to free up the "temp" folder and give a try
I fixed this issue by disabling the Postman token header.
I faced the same issue when calling a SOAP API with Postman. Adding the following header fixed it for me:
Key: Content-Length
Value: <calculated when request is sent>
In my case, I was incorrectly using a port reserved for the https version of my API.
For example, I was supposed to use https://localhost:6211, but I was using http://localhost:6211.
It can be a port-related error. I was trying to hit the API on the wrong port.
If it helps anybody: in my case, I had simply forgotten to use the JSON parser (const jsonParser = express.json();) to get access to the JSON objects sent to the server from the client. Be careful and don't waste your time =)
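For context, a minimal sketch of wiring up the built-in JSON body parser in Express 4.16+ (the route and port are placeholders):

const express = require('express');
const app = express();

app.use(express.json()); // parse application/json request bodies

app.post('/login', (req, res) => {
  // Without express.json(), req.body would be undefined here.
  res.json({ received: req.body });
});

app.listen(3000);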
This happened to me while I was learning ASP.NET Web API.
In my case it was because of SSL certificate verification. I was using VS Code, so I overlooked SSL certificate verification, and the project came with the https protocol.
I solved this by testing my endpoints over the http protocol.
Another approach is simply disabling SSL certificate verification in the Postman settings.
This error was coming up for me because the request URL was not correct: my URL was missing the colon after "http".
The URL I was using was http//localhost:9090/someApi.
Solution: after adding the colon, the new URL is http://localhost:9090/someApi and the socket error no longer occurs.
This is just my case; your case may be totally different, as mentioned in the other answers :)