My DB connection and its getter are as follows:
func connectDB() (*gorm.DB, error) {
    db, err := gorm.Open(postgres.Open(dbURL), &gorm.Config{})
    if err != nil {
        return nil, err
    }
    return db, nil
}

func GetDB() (*gorm.DB, error) {
    if db == nil {
        return connectDB()
    }
    return db, nil
}
I use GetDB() in my code to do operations on the database. My app runs for about 15 minutes. How can I make sure the connection db *gorm.DB will not time out during all that time? And even if it does not time out within 15 minutes, how do I reconnect gracefully if the connection happens to drop due to a network error, etc.?
GORM uses database/sql to maintain a connection pool. The connection pool can handle connection timeouts and errors, and it can be configured as below:
sqlDB, err := db.DB()
// SetMaxIdleConns sets the maximum number of connections in the idle connection pool.
sqlDB.SetMaxIdleConns(10)
// SetMaxOpenConns sets the maximum number of open connections to the database.
sqlDB.SetMaxOpenConns(100)
// SetConnMaxLifetime sets the maximum amount of time a connection may be reused.
sqlDB.SetConnMaxLifetime(time.Hour)
I suggest you use the generic database interface's *sql.DB Ping() function: https://gorm.io/docs/generic_interface.html
Ping verifies a connection to the database is still alive, establishing a connection if necessary.
So whenever you make a new request to your database (or at least for the requests you know will run after a long idle period), you can ping the db first to make sure it is still alive (otherwise the ping re-establishes the connection automatically), and then run your query.
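For example, here is a minimal sketch of a GetDB that keeps the connection in a package-level variable and pings it before reuse (dbURL and connectDB are from your snippet; note this version also assigns the package-level db, which your connectDB never does, and it is not safe for concurrent use without a mutex):

var db *gorm.DB

func GetDB() (*gorm.DB, error) {
    if db != nil {
        sqlDB, err := db.DB()
        if err == nil && sqlDB.Ping() == nil {
            return db, nil // existing connection is still alive
        }
        // Ping failed: fall through and reconnect.
    }
    newDB, err := connectDB()
    if err != nil {
        return nil, err
    }
    db = newDB
    return db, nil
}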
I am facing a complex challenge with an RDS PostgreSQL instance and I am almost out of ideas for how to handle it. I am launching an app (React + Go + PostgreSQL) for which I expect around 250-300 users simultaneously making the same API GET request, for as long as they wish to use it.
It is a questionnaire kind of app: a user retrieves one question from the database and answers it, the server saves the answer in the DB, and then the user can press next to fetch the next question. I tested my API endpoint with k6 using 500 virtual users for 2 minutes, and the database returns dial: i/o timeout or sometimes even connection rejected, usually once it reaches around 6000 requests, and I get about 93% success. I tried to fine-tune the RDS instance with tcp_keepalives parameters but without any luck; I still cannot get 100% of the requests to pass. I also tried increasing the general storage from the 20gb minimum to 100gb in RDS and switching from the free db.t3.micro to db.t3.medium size.
Any hint would be much appreciated. It should be possible for a normal golang server with postgres to handle these requests at the same time, shouldn't it? It is just a regular select * from x where y statement.
EDIT (CODE SAMPLE):
I use a dependency injection pattern, so I have only one instance of the DB, passed to all the other repositories, including the API package. The db repo looks like this:
func NewRepository() (DBRepository, error) {
    dbname := getenv("POSTGRES_DATABASE", "")
    username := getenv("POSTGRES_ROOT_USERNAME", "")
    password := getenv("POSTGRES_ROOT_PASSWORD", "")
    host := getenv("POSTGRES_HOST", "")
    port := getenv("POSTGRES_PORT", "")

    dsn := fmt.Sprintf("host=%s user=%s password=%s"+
        " dbname=%s port=%s sslmode=disable TimeZone=Europe/Bucharest",
        host, username, password, dbname, port)

    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        return nil, err
    }

    db.AutoMigrate(
        // migrated tables are here
    )

    return &dbRepository{
        db: db,
    }, nil
}
Currently the parameters used in RDS for TCP keepalive are:
tcp_keepalives_count 30
tcp_keepalives_idle 1000
tcp_keepalives_interval 1000
and I also tried with different numbers.
The query I am doing is a simple .Find() call from the gorm package, but it seems like this is not the issue, since it gets blocked whenever it hits the first query/connection with the db. There are 2 queries executed in the endpoint I am testing, but it gets stuck on the first one. If more info is needed I will update, but this issue is getting frustrating.
My k6 test is the following:
import http from 'k6/http';
import { check } from 'k6';
import { sleep } from 'k6';

export const options = {
    insecureSkipTLSVerify: true,
    stages: [
        { target: 225, duration: '2m' },
    ],
};

const access_tokens = []

let random_token = access_tokens[Math.floor(Math.random() * access_tokens.length)];

const params = {
    headers: { 'Authorization': `Bearer ${random_token}` },
};

export default function () {
    let res = http.get('endpoint here', params);
    check(res, { 'Message': (r) => r.status === 202 });
    sleep(1);
}
The DB tables are also indexed, and tested with the EXPLAIN statement.
I am playing around with the database/sql package, trying to see how it works and to understand what would happen if you don't call rows.Close(), etc.
So I wrote the following piece of code for inserting a model to database:
func (db Database) Insert(m model.Model) (int32, error) {
    var id int32
    quotedTableName := m.TableName(true)
    // Get insert query
    q, values := model.InsertQuery(m)
    rows, err := db.Conn.Query(q, values...)
    if err != nil {
        return id, err
    }
    for rows.Next() {
        err = rows.Scan(&id)
        if err != nil {
            return id, err
        }
    }
    return id, nil
}
I don't call rows.Close() on purpose to see the consequences. When setting up the database connection I set some properties such as:
conn.SetMaxOpenConns(50)
conn.SetMaxIdleConns(2)
conn.SetConnMaxLifetime(time.Second*60)
Then I attempt to insert 10000 records:
for i := 0; i < 10000; i++ {
    lander := models.Lander{
        // ...struct fields with random data on each iteration
    }
    go func() {
        Insert(&lander)
    }()
}
(It lacks error checking, context timeouts, etc., but for the purpose of playing around it gets the job done.) When I execute the piece of code above, I expect to see at least some errors regarding database connections; however, the data gets inserted without problems (all 10000 records). When I check Stats() I see the following:
{MaxOpenConnections:50 OpenConnections:1 InUse:0 Idle:1 WaitCount:9951 WaitDuration:3h9m33.896466243s MaxIdleClosed:48 MaxLifetimeClosed:2}
Since I didn't call rows.Close(), I expected to see more OpenConnections or more InUse connections, because I am never releasing the connection (I may be wrong, but isn't releasing a connection and returning it to the pool the purpose of Close()?).
So my question is simply: what do these Stats() mean, and why are there no errors whatsoever when doing the insertion? Also, why aren't there more OpenConnections or InUse ones, and what are the real consequences of not calling Close()?
According to the docs for Rows:
If Next is called and returns false and there are no further result sets, the Rows are closed automatically and it will suffice to check the result of Err.
Since you iterate over all of the results, the result set is closed automatically.
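Note that this auto-close only happens when the result set is fully iterated. If you return early, say on a Scan error, without closing, the underlying connection stays checked out of the pool. Here is a defensive sketch of the same Insert body (names taken from the snippet above; the defer and the final rows.Err() are the only changes):

rows, err := db.Conn.Query(q, values...)
if err != nil {
    return id, err
}
defer rows.Close() // Close is idempotent, so this is safe even after the auto-close
for rows.Next() {
    if err = rows.Scan(&id); err != nil {
        return id, err // without the defer, this early return would leak the connection
    }
}
return id, rows.Err() // surface any error that ended the iteration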
First time user of Cadence:
Scenario
I have a cadence server running in my sandbox environment.
The intent is to fetch the workflow status.
I am trying to use this cadence client
go.uber.org/cadence/client
on my localhost to talk to my sandbox cadence server.
This is my simple code snippet:
var cadClient client.Client

func main() {
    wfID := "01ERMTDZHBYCH4GECHB3J692PC" // I got this from cadence-ui
    ctx := context.Background()
    wf := cadClient.GetWorkflow(ctx, wfID, "") // <-- panic hits here
    log.Println("Workflow RunID: ", wf.GetID())
}
I am sure I am getting it wrong because the client does not know how to reach the cadence server.
I referred to https://cadenceworkflow.io/docs/go-client/ to find the correct usage but could not find any reference (it is possible that I missed it).
Any help with how to resolve/implement this would be much appreciated.
I am not sure what panic you got. Based on the code snippet, it's likely that you haven't initialized the client.
To initialize it, follow the sample code here: https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/common/sample_helper.go#L82
and
https://github.com/uber-common/cadence-samples/blob/aac75c7ca03ec0c184d0f668c8cd0ea13d3a7aa4/cmd/samples/common/factory.go#L113
ch, err := tchannel.NewChannelTransport(
    tchannel.ServiceName(_cadenceClientName))
if err != nil {
    b.Logger.Fatal("Failed to create transport channel", zap.Error(err))
}

b.Logger.Debug("Creating RPC dispatcher outbound",
    zap.String("ServiceName", _cadenceFrontendService),
    zap.String("HostPort", b.hostPort))

b.dispatcher = yarpc.NewDispatcher(yarpc.Config{
    Name: _cadenceClientName,
    Outbounds: yarpc.Outbounds{
        _cadenceFrontendService: {Unary: ch.NewSingleOutbound(b.hostPort)},
    },
})

if b.dispatcher != nil {
    if err := b.dispatcher.Start(); err != nil {
        b.Logger.Fatal("Failed to create outbound transport channel: %v", zap.Error(err))
    }
}

client := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))
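From there, a minimal sketch of wiring that service client into go.uber.org/cadence/client and fetching your workflow (the domain string is a placeholder, and I rename the variable to service to avoid shadowing the client package; the rest follows the sample above):

service := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))
cadClient := client.NewClient(service, "your-domain", &client.Options{})
wf := cadClient.GetWorkflow(context.Background(), wfID, "") // an empty runID targets the latest run
log.Println("Workflow RunID:", wf.GetRunID())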
I have recently upgraded to the newer official golang mongo driver for an app I am working on.
Everything works perfectly in my local development, but when I hook it up and point to my backend server I get a 'context deadline exceeded' error when calling the client.Ping(...) method.
The old driver code still works fine, and I can also print out the connection string, copy and paste it into the Compass app, and it works without issues.
However, for the life of me, I can't work out why this new code is returning a context timeout. The only difference is that mongo is running on the non-standard port 32680, and I am also using the mgm package. However, it just uses the official mongo driver under the hood.
Mongo version is: 4.0.12 (locally and remote)
Connection code is here:
// NewClient creates a mongo database connection
func NewClient(cfg config.Mongo) (*Client, error) {
    // create database connection string
    conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s", cfg.Username, cfg.Password, cfg.Host, cfg.Port)

    // set mgm conf, i.e. ctxTimeout value
    conf := mgm.Config{CtxTimeout: cfg.CtxTimeout}

    // set up mgm / database connection
    err := mgm.SetDefaultConfig(&conf, cfg.Database, options.Client().ApplyURI(conStr))
    if err != nil {
        return nil, errors.Wrapf(err, "failed to connect to mongodb. cfg: %+v. conStr: %+v.", cfg, conStr)
    }

    // get access to the underlying mongodb client driver, db and mgm config.
    // Needed for adding additional tools like seeding/migrations/etc.
    mgmCfg, client, db, err := mgm.DefaultConfigs()
    if err != nil {
        return nil, errors.Wrap(err, "failed to return mgm.DefaultConfigs")
    }

    // NOTE: fails here!
    if err := client.Ping(mgm.Ctx(), readpref.Primary()); err != nil {
        return nil, errors.Wrapf(err, "Ping failed to mongodb. cfg: %+v. conStr: %+v. mgmCfg: %+v", cfg, conStr, mgmCfg)
    }

    return &Client{
        cfg:    cfg,
        mgmCfg: mgmCfg,
        client: client,
        db:     db,
    }, nil
}
HELP! I have no idea how to debug this any further than I already have.
Try adding your authSource to your DSN, something like:
mongodb://USER:PASSWORD@HOST:PORT/DBNAME?authSource=AUTHSOURCE
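In the NewClient function above, that would mean including the database name and auth source in the format string, for example (authSource=admin is an assumption; use whichever database your user was created in):

conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s/%s?authSource=admin",
    cfg.Username, cfg.Password, cfg.Host, cfg.Port, cfg.Database)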
Suppose I had a TCP server on Linux; it would create a new goroutine for each new connection. When I want to write data to the TCP connection, should I do it just like this:
conn.Write(data)
or do it in a goroutine dedicated to writing, like this:
func writeRoutine(sendChan chan []byte) {
    for {
        select {
        case msg := <-sendChan:
            conn.Write(msg)
        }
    }
}
just in case the network is busy.
In short, do I need a write buffer in Go when writing to a socket, just like in C/C++?
PS: Maybe I didn't explain the problem clearly.
1. I was talking about the server, meaning a TCP server running on Linux. It creates a new goroutine for each new connection, like this:
listener, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
    log.Error(err.Error())
    os.Exit(-1)
}

for {
    conn, err := listener.AcceptTCP()
    if err != nil {
        continue
    }
    log.Debug("Accept a new connection ", conn.RemoteAddr())
    go handleClient(conn)
}
2. I think my problem isn't really about the code. As we know, when we use ssize_t write(int fd, const void *buf, size_t count); to write to a socket fd in C/C++, a TCP server necessarily needs a per-socket write buffer in your code, or maybe only some of the data gets written successfully. I mean, do I have to do the same in Go?
You are actually asking two different questions here:
1) Should you use a goroutine per accepted client connection in your TCP server?
2) Given a []byte, how should I write to the connection?
For 1), the answer is yes. This is the type of pattern that Go is most suited for. If you take a look at the source code for net/http, you will see that it spawns a goroutine for each connection.
As for 2), you should do the same as you would in a C/C++ server: write, check how much was written, and keep writing until you're done, always checking for errors. Here is a code snippet showing how to do it:
func writeConn(data []byte) error {
    var start, c int
    var err error
    for {
        if c, err = conn.Write(data[start:]); err != nil {
            return err
        }
        start += c
        if c == 0 || start == len(data) {
            break
        }
    }
    return nil
}
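Note that net.Conn follows the io.Writer contract, which requires Write to return a non-nil error whenever it writes fewer than len(data) bytes, so for a plain TCP connection the retry loop above is defensive rather than strictly required; the error check is the part you cannot skip.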
server [...] creates a new goroutine for each new connection
This makes sense because the handler goroutines can block without delaying the server's accept loop.
If you handled each request serially, any blocking syscall would essentially lock up the server for all clients.
goroutine dedicated to writing
This would only make sense in use cases where you're writing either a really big chunk of data or to a very slow connection and you need your handler to continue unblocked, for instance.
Note that this is not what is commonly understood as a "write buffer".
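If you do want a dedicated writer, here is a minimal sketch of a per-connection writer goroutine (handleClient and the channel capacity are illustrative assumptions, not from the original post):

func handleClient(conn net.Conn) {
    defer conn.Close()

    sendChan := make(chan []byte, 16) // buffered so the handler rarely blocks on sends
    defer close(sendChan)             // closing the channel lets the writer goroutine exit

    go func() {
        for msg := range sendChan {
            if _, err := conn.Write(msg); err != nil {
                return // connection broken; a production version would signal this back to the handler
            }
        }
    }()

    // ... read loop goes here, queueing responses with: sendChan <- reply ...
}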