Amazon RDS PostgreSQL Performance input/output

I am facing a complex challenge with an RDS PostgreSQL instance and am almost out of ideas on how to handle it. I am launching an app (React + Go + PostgreSQL) for which I expect around 250-300 users simultaneously making the same API GET request for as long as they wish to use it.
It is a questionnaire kind of app: users retrieve one question from the database and answer it, the server saves the answer in the DB, and then the user can press next to fetch the next question. I tested my API endpoint with k6 using 500 virtual users for 2 minutes, and the database returns dial: i/o timeout or sometimes connection rejected, usually once it reaches around 6000 requests, leaving me with about 93% success. I tried to fine-tune the RDS instance with the tcp_keepalives parameters but without any luck; I still cannot get 100% of the requests to pass. I also tried increasing the storage from the 20 GB minimum to 100 GB in RDS and switching from the free db.t3.micro to the db.t3.medium size.
Any hint would be much appreciated. It should be possible for a normal Go server with Postgres to handle these requests concurrently, shouldn't it? It is just a regular SELECT * FROM x WHERE y statement.
EDIT (CODE SAMPLE):
I use a dependency injection pattern, so I have only one instance of the DB passed to all the other repositories, including the API package. The DB repo looks like this:
func NewRepository() (DBRepository, error) {
	dbname := getenv("POSTGRES_DATABASE", "")
	username := getenv("POSTGRES_ROOT_USERNAME", "")
	password := getenv("POSTGRES_ROOT_PASSWORD", "")
	host := getenv("POSTGRES_HOST", "")
	port := getenv("POSTGRES_PORT", "")

	dsn := fmt.Sprintf("host=%s user=%s password=%s"+
		" dbname=%s port=%s sslmode=disable"+
		" TimeZone=Europe/Bucharest",
		host, username, password, dbname, port)

	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}

	db.AutoMigrate(
		// migrated tables are listed here
	)

	return &dbRepository{
		db: db,
	}, nil
}
Currently the TCP keepalive parameters used in RDS are:
tcp_keepalives_count 30
tcp_keepalives_idle 1000
tcp_keepalives_interval 1000
and I have also tried different values.
The query I am doing is a simple .Find() call from the gorm package, but the query itself does not seem to be the issue, since the request blocks as soon as it hits the first query/connection to the DB. There are two queries executed in the endpoint I am testing, and it gets stuck on the first one. If more info is needed I will update, but this issue is getting really frustrating.
My k6 test is the following:
import http from 'k6/http';
import { check } from 'k6';
import { sleep } from 'k6';

export const options = {
	insecureSkipTLSVerify: true,
	stages: [
		{ target: 225, duration: '2m' },
	],
};

const access_tokens = []
let random_token = access_tokens[Math.floor(Math.random() * access_tokens.length)];

const params = {
	headers: { 'Authorization': `Bearer ${random_token}` },
};

export default function () {
	let res = http.get('endpoint here', params);
	check(res, { 'Message': (r) => r.status === 202 });
	sleep(1);
}
The DB tables are also indexed, and the queries have been checked with EXPLAIN.

Related

How to handle postgres DB connection timeout/drop in gorm

My DB connection and its getter are as follows:
var db *gorm.DB

func connectDB() (*gorm.DB, error) {
	db, err := gorm.Open(postgres.Open(dbURL), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	return db, nil
}

func GetDB() (*gorm.DB, error) {
	if db == nil {
		return connectDB()
	}
	return db, nil
}
I use GetDB() in my code to do operations on the database. My app runs for about 15 minutes. How can I make sure the connection db *gorm.DB will not time out during all that time? And even if it does not time out within 15 minutes, how do I reconnect gracefully if the connection drops due to a network error, etc.?
GORM uses database/sql to maintain a connection pool. The connection pool can handle connection timeouts and errors, and can be configured as below:
sqlDB, err := db.DB()

// SetMaxIdleConns sets the maximum number of connections in the idle connection pool.
sqlDB.SetMaxIdleConns(10)

// SetMaxOpenConns sets the maximum number of open connections to the database.
sqlDB.SetMaxOpenConns(100)

// SetConnMaxLifetime sets the maximum amount of time a connection may be reused.
sqlDB.SetConnMaxLifetime(time.Hour)
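Applied to the connectDB function from the question, a sketch might look like this (the numbers are illustrative starting points, not tuned values; size the pool to what your database instance accepts):

func connectDB() (*gorm.DB, error) {
	db, err := gorm.Open(postgres.Open(dbURL), &gorm.Config{})
	if err != nil {
		return nil, err
	}

	sqlDB, err := db.DB()
	if err != nil {
		return nil, err
	}

	// Illustrative values; tune to the connection limit of the DB instance.
	sqlDB.SetMaxIdleConns(10)
	sqlDB.SetMaxOpenConns(100)
	sqlDB.SetConnMaxLifetime(time.Hour)

	return db, nil
}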
I also suggest you use the generic database interface *sql.DB and its Ping() function: https://gorm.io/docs/generic_interface.html
Ping verifies a connection to the database is still alive, establishing a connection if necessary.
So whenever you make a new request to your database (or at least for the requests you know will be executed after a long idle period), you can ping the DB first to make sure the connection is still active (if it is not, the ping reconnects automatically), and then run your request.
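A minimal sketch of that pattern, assuming the package-level db and GetDB() from the question (the User model and query are illustrative only):

func doQuery() error {
	gormDB, err := GetDB()
	if err != nil {
		return err
	}

	// Grab the underlying *sql.DB to reach the connection pool.
	sqlDB, err := gormDB.DB()
	if err != nil {
		return err
	}

	// Ping re-establishes the connection if it has been dropped.
	if err := sqlDB.Ping(); err != nil {
		return err
	}

	// Illustrative query; substitute your own model and filter.
	var users []User
	return gormDB.Find(&users).Error
}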

cadence go-client: client call to reach server for fetching workflow results panics

First-time user of Cadence:
Scenario
I have a cadence server running in my sandbox environment.
The intent is to fetch the workflow status.
I am trying to use this cadence client
go.uber.org/cadence/client
on my local host to talk to my sandbox cadence server.
This is my simple code snippet:
var cadClient client.Client

func main() {
	wfID := "01ERMTDZHBYCH4GECHB3J692PC" // I got this from cadence-ui
	ctx := context.Background()
	wf := cadClient.GetWorkflow(ctx, wfID, "") // <<< Panic hits here
	log.Println("Workflow RunID: ", wf.GetID())
}
I am sure I am getting something wrong, because the client does not know how to reach the cadence server.
I referred to https://cadenceworkflow.io/docs/go-client/ to find the correct usage but could not find any reference (it is possible that I missed it).
Any help in how to resolve/implement this will be much appreciated.
I am not sure what panic you got. Based on the code snippet, it's likely that you haven't initialized the client.
To initialize it, follow the sample code here: https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/common/sample_helper.go#L82
and
https://github.com/uber-common/cadence-samples/blob/aac75c7ca03ec0c184d0f668c8cd0ea13d3a7aa4/cmd/samples/common/factory.go#L113
ch, err := tchannel.NewChannelTransport(
	tchannel.ServiceName(_cadenceClientName))
if err != nil {
	b.Logger.Fatal("Failed to create transport channel", zap.Error(err))
}

b.Logger.Debug("Creating RPC dispatcher outbound",
	zap.String("ServiceName", _cadenceFrontendService),
	zap.String("HostPort", b.hostPort))

b.dispatcher = yarpc.NewDispatcher(yarpc.Config{
	Name: _cadenceClientName,
	Outbounds: yarpc.Outbounds{
		_cadenceFrontendService: {Unary: ch.NewSingleOutbound(b.hostPort)},
	},
})

if b.dispatcher != nil {
	if err := b.dispatcher.Start(); err != nil {
		b.Logger.Fatal("Failed to create outbound transport channel: %v", zap.Error(err))
	}
}

client := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))
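Once the dispatcher is up, a minimal sketch of turning that service client into a client.Client and fetching the workflow could look like this (the domain name here is an assumption; use the domain the workflow was started in):

service := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))

// "your-domain" is a placeholder for the Cadence domain of the workflow.
cadClient := client.NewClient(service, "your-domain", &client.Options{})

ctx := context.Background()
wf := cadClient.GetWorkflow(ctx, "01ERMTDZHBYCH4GECHB3J692PC", "")
log.Println("Workflow RunID: ", wf.GetRunID())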

Mongo-go-driver: context deadline exceeded

I have recently upgraded to the newer and official golang mongo driver for an app I am working on.
All works perfectly in my local development, however when I hook it up and point it to my backend server I get a 'context deadline exceeded' error when calling the client.Ping(...) method.
The old driver code still works fine, and I can also print out the connection string, copy and paste it into the Compass app, and it works without issues.
However, for the life of me I can't work out why this new code returns a context timeout. The only difference is that mongo runs on the non-standard port 32680 and I am also using the mgm package; however, it just uses the official mongo driver under the hood.
Mongo version is: 4.0.12 (locally and remote)
Connection code is here:
// NewClient creates a mongo database connection
func NewClient(cfg config.Mongo) (*Client, error) {
	// create database connection string
	conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s", cfg.Username, cfg.Password, cfg.Host, cfg.Port)

	// set mgm config, i.e. the ctxTimeout value
	conf := mgm.Config{CtxTimeout: cfg.CtxTimeout}

	// set up the mgm / database connection
	err := mgm.SetDefaultConfig(&conf, cfg.Database, options.Client().ApplyURI(conStr))
	if err != nil {
		return nil, errors.Wrapf(err, "failed to connect to mongodb. cfg: %+v. conStr: %+v.", cfg, conStr)
	}

	// get access to the underlying mongodb client driver, db and mgm config.
	// Needed for adding additional tools like seeding/migrations/etc.
	mgmCfg, client, db, err := mgm.DefaultConfigs()
	if err != nil {
		return nil, errors.Wrap(err, "failed to return mgm.DefaultConfigs")
	}

	// NOTE: fails here!
	if err := client.Ping(mgm.Ctx(), readpref.Primary()); err != nil {
		return nil, errors.Wrapf(err, "Ping failed to mongodb. cfg: %+v. conStr: %+v. mgmCfg: %+v", cfg, conStr, mgmCfg)
	}

	return &Client{
		cfg:    cfg,
		mgmCfg: mgmCfg,
		client: client,
		db:     db,
	}, nil
}
HELP! I have no idea how I can debug this any further than I already have.
Try adding your authSource to your DSN,
something like
mongodb://USER:PASSWORD@HOST:PORT/DBNAME?authSource=AUTHSOURCE
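Applied to the NewClient function above, that would mean appending the auth database to the generated connection string (here admin is only an assumed example for the auth source):

// "admin" is an assumed auth source; use the database that holds your user.
conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s/%s?authSource=admin",
	cfg.Username, cfg.Password, cfg.Host, cfg.Port, cfg.Database)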

Run cron in Golang while having different databases

I am working on a SaaS-based project in which merchants can subscribe to set up their online store.
Project Overview
I am using Golang (backend), MongoDB as the database service, and Angular 4 (frontend) to build the system. I have multiple merchants that can set up their stores. Each merchant has its own URL (with its business name as the subdomain) used to connect to its database.
For routing, I am using Golang's Gin framework at the back end.
Problem
I want to run cron jobs against each merchant-specific database. Some operations in these cron jobs need to connect to the database, but in my routing the database is not set until an API route is called. Ultimately, the cron does not run with the proper data.
Code
cron.go
package cron

import (
	"gopkg.in/robfig/cron.v2"

	"controllers"
)

func RunCron() {
	c := cron.New()
	c.AddFunc("@every 0h1m0s", controllers.ExpireProviderInvitation)
	c.Start()
}
Controller function
func ExpireProviderInvitation() {
	bookingAcceptTimeSetting, _ := models.GetMerchantSetting(bson.M{"section": "providers", "option_name": "bookings_accept_time"})
	if bookingAcceptTimeSetting.OptionValue != nil {
		allInvitations, _ := models.GetAllBookingInvitations(bson.M{"status": 0, "send_type": "invitation", "datetime": bson.M{"$lte": float64(time.Now().Unix()) - bookingAcceptTimeSetting.OptionValue.(float64)}})
		if len(allInvitations) > 0 {
			for _, invitationData := range allInvitations {
				_ = GetNextAvailableProvider(invitationData.Bid, invitationData.Pid)
			}
		}
	}
}
router.go
func NewRouter() {
	router := gin.Default()
	router.Use(gin.Recovery())
	router.Use(SetMerchantDatabase)

	public := router.Group("/api/v1")
	for _, route := range publicRoutes {
		switch route.Method {
		case "GET":
			public.GET(route.Pattern, route.HandlerFunc)
		case "POST":
			public.POST(route.Pattern, route.HandlerFunc)
		case "PUT":
			public.PUT(route.Pattern, route.HandlerFunc)
		case "DELETE":
			public.DELETE(route.Pattern, route.HandlerFunc)
		default:
			public.GET(route.Pattern, func(c *gin.Context) {
				c.JSON(200, gin.H{
					"result": "Specify a valid http method with this route.",
				})
			})
		}
	}

	router.NoRoute(controllers.UnauthorizedAccessResponse)
	router.Run(":8080")
}
func SetMerchantDatabase(c *gin.Context) {
	subdomain := strings.Split(c.Request.Host, ".")
	if len(subdomain) > 0 {
		config.Database = subdomain[0]
		config.CurrentBusinessName = subdomain[0]
	} else {
		errMsg := "Failed: Invalid domain in headers."
		response := controllers.ResponseController{
			config.FailureCode,
			config.FailureFlag,
			errMsg,
			nil,
		}
		controllers.GetResponse(c, response)
		c.Abort()
	}
	c.Next()
}
main.go
package main

import (
	"cron"
)

func main() {
	cron.RunCron()
	NewRouter()
}
Explanation of above code
An example route can be:
Route{ "AddCustomer", "POST", "/customer", controllers.SaveCustomer },
An example API url can be:
http://business-name.main-domain.com/api/v1/customer
Where "business-name" is the database which is set whenever an API is called.
I want to run my cron without calling an API route.
Alternative approach
In a shell script, we can run a cron job by hitting a URL as a command, so I could create a URL and run it as a command. But this is only a theoretical approach, and I also don't know how I would get the different merchant databases.
I am not sure if this approach will work. Any kind of help will be greatly appreciated.
You need to adapt SetMerchantDatabase to work independently of your router. Then you can have it set things for Cron just as well.
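For instance, a rough sketch of that idea, assuming the merchant names can be loaded from a master list (the merchants slice here is hypothetical; in this setup the names would come from config or a central collection rather than the request host):

func RunCron() {
	c := cron.New()
	c.AddFunc("@every 0h1m0s", func() {
		// Hypothetical master list of merchant databases.
		merchants := []string{"business-name-1", "business-name-2"}
		for _, m := range merchants {
			// The same fields SetMerchantDatabase sets, but without a request.
			config.Database = m
			config.CurrentBusinessName = m
			controllers.ExpireProviderInvitation()
		}
	})
	c.Start()
}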

Postgres Error running query on database: Could not detect default username

Hi, I am trying to monitor PostgreSQL with Prometheus. For this purpose I am using this exporter: https://github.com/wrouesnel/postgres_exporter
I am starting the exporter in my docker-compose.yml like this:
exporter-postgres:
  image: wrouesnel/postgres_exporter
  ports:
    - 9113:9113
  environment:
    - DATA_SOURCE_NAME="postgresql://user:pass@localhost:5432/?sslmode=disable"
When the exporter tries to access the database, errors like these are thrown:
Error running query on database: pg_stat_database pg: Could not detect default username. Please provide one explicitly. file="postgres-exporter.go" line=490
and
Error scanning runtime variable: pg_stat_database pg: Could not detect default username. Please provide one explicitly. file="postgres-exporter.go" line=464
I am not really sure what this message could mean. I am also not sure whether the issue originates in my docker-compose file or in the exporter.
The lines which throw the error in the postgres-exporter.go are:
// Use SHOW to get the value
row := db.QueryRow(fmt.Sprintf("SHOW %s;", columnName))

var val interface{}
err := row.Scan(&val)
if err != nil {
	log.Errorln("Error scanning runtime variable:", columnName, err)
	continue
}
and
query, er := queryOverrides[namespace]
if er == false {
	query = fmt.Sprintf("SELECT * FROM %s;", namespace)
}

// Don't fail on a bad scrape of one metric
rows, err := db.Query(query)
if err != nil {
	log.Println("Error running query on database: ", namespace, err)
	e.error.Set(1)
	return
}
https://github.com/wrouesnel/postgres_exporter/blob/master/postgres_exporter.go
I am thankful for any help!
Edit:
Here is the connection to the database:
db, err := sql.Open("postgres", e.dsn)
Whereas e.dsn is generated like this:
dsn := os.Getenv("DATA_SOURCE_NAME")
The connection doesn't throw any error.
Hey, for anyone having a similar issue in the future:
The problem was this line in the docker-compose.yml:
- DATA_SOURCE_NAME="postgresql://user:pass@localhost:5432/?sslmode=disable"
Changing it to
- DATA_SOURCE_NAME=postgresql://user:pass@localhost:5432/?sslmode=disable
(without the quotes) made everything work :)