Hi, I am trying to monitor PostgreSQL with Prometheus. For this purpose I am using this exporter: https://github.com/wrouesnel/postgres_exporter
I am starting the exporter in my docker-compose.yml like this:
exporter-postgres:
image: wrouesnel/postgres_exporter
ports:
- 9113:9113
environment:
- DATA_SOURCE_NAME="postgresql://user:pass@localhost:5432/?sslmode=disable"
When the exporter is trying to access the database errors like this are thrown:
Error running query on database: pg_stat_database pg: Could not detect default username. Please provide one explicitly. file="postgres-exporter.go" line=490
and
Error scanning runtime variable: pg_stat_database pg: Could not detect default username. Please provide one explicitly. file="postgres-exporter.go" line=464
I am not really sure what this message means. I am also not sure whether the issue originates in my docker-compose file or in the exporter itself.
The lines which throw the error in the postgres-exporter.go are:
// Use SHOW to get the value
row := db.QueryRow(fmt.Sprintf("SHOW %s;", columnName))
var val interface{}
err := row.Scan(&val)
if err != nil {
log.Errorln("Error scanning runtime variable:", columnName, err)
continue
}
and
query, er := queryOverrides[namespace]
if er == false {
query = fmt.Sprintf("SELECT * FROM %s;", namespace)
}
// Don't fail on a bad scrape of one metric
rows, err := db.Query(query)
if err != nil {
log.Println("Error running query on database: ", namespace, err)
e.error.Set(1)
return
}
https://github.com/wrouesnel/postgres_exporter/blob/master/postgres_exporter.go
I am thankful for any help!
Edit:
Here is the connection to the database:
db, err := sql.Open("postgres", e.dsn)
Whereas e.dsn is generated like this:
dsn := os.Getenv("DATA_SOURCE_NAME")
The connection itself doesn't throw any error.
Hey, for anyone having a similar issue in the future:
The problem was this line in the docker-compose.yml
- DATA_SOURCE_NAME="postgresql://user:pass@localhost:5432/?sslmode=disable"
Changing it to
- DATA_SOURCE_NAME=postgresql://user:pass@localhost:5432/?sslmode=disable
(Without the quotes) made everything work :) With the list syntax for environment in docker-compose, the quotes are passed through as part of the value, so the driver no longer recognizes the value as a URL, cannot find a username in it, and falls back to detecting the OS user, which fails inside the container.
Related
I am trying to create some concurrent indexes using the command CREATE INDEX CONCURRENTLY ... through migrations in my golang project. But whenever I run that particular migration, it just takes infinitely long and never executes.
I went and checked the logs of the Postgres DB and found this:
The weird thing is that only in migrations am I unable to create concurrent indexes; if I write code in my main.go to execute the query directly, it executes successfully, and even in Golang's DB query console the index is created concurrently without issues.
Here is my migration package code:
func NewGorm(d *gorm.DB) *GORM {
return &GORM{db: d}
}
func (g *GORM) Run(m Migrator, app, name, methods string, logger log.Logger) error {
g.txn = g.db.Begin()
ds := &datastore.DataStore{ORM: g.db}
var err error // declared here; the snippet as posted used err without declaring it
if methods == UP {
err = m.Up(ds, logger)
} else {
err = m.Down(ds, logger)
}
if err != nil {
g.rollBack()
return &errors.Response{Reason: "error encountered in running the migration", Detail: err}
}
g.commit()
return nil
}
I know it has something to do with transactions, but I also tried disabling them by passing the flag SkipDefaultTransaction: true when initializing the connection with GORM; that didn't work either and the results were the same.
Please help: how can I create concurrent indexes in migrations using GORM?
Let me try to help you with the issue. First, you should update the question with all of the source code so we can better understand what's going on and help you more accurately. Anyway, I'll try to help with what we have so far. I was able to achieve your goal with the following code:
package main
import (
"fmt"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
type Post struct {
Id int
Title string `gorm:"index:idx_concurrent,option:CONCURRENTLY"`
}
type GORM struct {
db *gorm.DB
}
func main() {
dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable"
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
if err != nil {
panic(err)
}
db.AutoMigrate(&Post{})
m := db.Migrator()
if idxFound := m.HasIndex(&Post{}, "idx_concurrent"); !idxFound {
fmt.Println("idx missing")
return
}
fmt.Println("idx already present")
}
To achieve what you need, it should be enough to add a gorm annotation next to the field where you want the index (e.g. Title). In this annotation you can specify that the index should be created concurrently, to avoid locking the table. Then, I used the gorm.Migrator to check for the index's existence.
If you already have the table, you can simply add the annotation to the model struct definition and gorm will take care of it when you run the AutoMigrate method.
Thanks to this, you should be able to cover all of the possible scenarios you might face.
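One more note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and it also waits for any other open transactions touching the table, which is likely why a migration runner that wraps everything in g.db.Begin() appears to hang. If you want to keep your Run-style runner instead, a minimal sketch (the table and index names are placeholders for illustration) would execute that one statement on the non-transactional handle:

func (g *GORM) createConcurrentIndex() error {
    // Use g.db, not g.txn: CREATE INDEX CONCURRENTLY is rejected inside
    // an explicit transaction block, so it must run on its own connection.
    return g.db.Exec("CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_posts_title ON posts (title)").Error
}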
Let me know if this solves your issue or if you need something else, thanks!
I am very new to the Go world. I have some DB functions that I need to test.
So first I have a database.go file that connects to a postgres db:
import (
"fmt"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"os"
)
var DB *gorm.DB
var err error
func Open() error {
dsn := fmt.Sprintf("host=%s user=%s password=%s dbname=%s port=%s sslmode=disable", os.Getenv("HOST"), os.Getenv("USER"),
os.Getenv("PASSWORD"), os.Getenv("DB"), os.Getenv("PORT"))
DB, err = gorm.Open(postgres.Open(dsn), &gorm.Config{})
if err != nil {
return err
}
return nil
}
Then I have a customers.go file with functions that interact with that db:
import (
"customers/cmd/database"
"time"
)
type Customers struct {
ID int
CustomerName string
Active bool
Balance float32
ActiveSince time.Time
}
func Get(id int) (Customers, error) {
var customer Customers
result := database.DB.First(&customer, id)
if result.Error != nil {
return Customers{}, result.Error
} else {
return customer, nil
}
}
This is all running in Docker; there is a customers container and a postgres container. Now the question is: how do I test my Get(id int) function? I was researching dockertest, but that spins up a different DB while my Get function uses the one I specified in database.go. So is there a standard Go way to test these functions?
It is a Docker networking problem, not a Golang one:
you can create a Docker network and run both containers on the same network (see the Docker docs),
or use --network=host,
or publish the postgres container's port to localhost with -p xx:xx and have the customers container connect via localhost.
There is a standard way of unit testing in Go; please refer to testing and testify/assert. Unit tests are typically written in an xxx_test.go file next to the code.
Coming to unit testing of DB access layer code, one option would be to have a testenv helper and use it along these lines.
customers_test.go:
package dbaccess
import "testing"
func TestDbAccessLayer(t *testing.T) {
// testUtils is a hypothetical helper package, not shown in this answer
testEnv := testUtils.NewTestEnv()
// testEnv should do the required initialization e.g.
// start any mocked services, connection to database, global logger etc.
if err := testEnv.Start(); err != nil {
t.Fatal(err)
}
// at the end of the test, Stop needs to reset state
// e.g. clear any entries created by test in db
defer testEnv.Stop()
// add test code
// add required assertion to validate
}
Have a separate docker-compose.yml file and use it with the docker compose command to start/stop services like the postgres DB.
The go test command may be used to run the tests; refer to the command docs for details.
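A minimal sketch of such a test, assuming the database.Open and Get functions from the question, with HOST, USER, PASSWORD, DB and PORT pointing at the compose-managed Postgres (the package name and the seeded row are assumptions):

package customers

import (
    "testing"

    "customers/cmd/database"
)

// TestGet assumes `docker compose up` has already started Postgres, the
// connection env vars are exported, and a customer row with ID 1 is seeded.
func TestGet(t *testing.T) {
    if err := database.Open(); err != nil {
        t.Fatalf("could not connect to test database: %v", err)
    }
    got, err := Get(1)
    if err != nil {
        t.Fatalf("Get(1) returned error: %v", err)
    }
    if got.ID != 1 {
        t.Errorf("expected ID 1, got %d", got.ID)
    }
}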
I'd like to start using Postgres as my main DBMS for some development I am doing, and my research suggests that pgx is the database driver of choice at this time. I have set up a connection package:
package database
import (
"fmt"
_ "github.com/jackc/pgx/v5"
"net/url"
"github.com/jmoiron/sqlx"
log "github.com/sirupsen/logrus"
)
func PgConnect(username, password, host, app string) *sqlx.DB {
s := fmt.Sprintf("postgres://%s:%s@%s?app+name=%s&sslmode=allow", url.QueryEscape(username), url.QueryEscape(password), host, app)
db, err := sqlx.Connect("postgres", s)
if err != nil {
log.Panicf("failed to connect to sqlserver://%s:%s#%s err: %s", username, "xxxxxxxxxxxx", host, err)
}
return db
}
The URL evaluates to:
postgres://user:password@database-host:5432/database-name?app+name=app-name&sslmode=allow
and I've also tried the prefix of postgresql here because I've seen both in places on the internet.
For the driver name in the sqlx.Connect("postgres", s) call I have tried postgres, postgresql and pgx.
In all cases the call to connect fails with the error:
sql: unknown driver "postgres" (forgotten import?)
The same code (with the mssql driver and an mssql URL) works to connect to Microsoft SQL Server.
Can anyone help me get this going? Or alternatively, what additional imports do I need? I've found very little on the internet for this combination of language/driver/sqlx/postgres.
From the documentation, we can see that the driver's name is pgx:
A database/sql connection can be established through sql.Open.
db, err := sql.Open("pgx", "postgres://pgx_md5:secret@localhost:5432/pgx_test?sslmode=disable")
if err != nil {
return err
}
So you'll need to do two things:
Use the proper driver name:
db, err := sqlx.Connect("pgx", s)
Import the stdlib compatibility package:
import (
_ "github.com/jackc/pgx/v5/stdlib" // Standard library bindings for pgx
)
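Putting the two together, the connection function from your question would look something like this (same parameters; note that libpq's URI parameter is application_name, which I've used in place of app+name):

package database

import (
    "fmt"
    "net/url"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver with database/sql
    "github.com/jmoiron/sqlx"
    log "github.com/sirupsen/logrus"
)

func PgConnect(username, password, host, app string) *sqlx.DB {
    s := fmt.Sprintf("postgres://%s:%s@%s?application_name=%s&sslmode=allow",
        url.QueryEscape(username), url.QueryEscape(password), host, app)
    db, err := sqlx.Connect("pgx", s) // "pgx" is the registered driver name
    if err != nil {
        log.Panicf("failed to connect to postgres://%s:xxxxxxxxxxxx@%s err: %s", username, host, err)
    }
    return db
}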
I am facing a complex challenge with an RDS PostgreSQL instance and am almost out of ideas for how to handle it. I am launching an app (React+Go+PostgreSQL) for which I expect around 250-300 users simultaneously making the same API GET request for as long as they wish to use it.
It is a questionnaire kind of app: users retrieve one question from the database and answer it, the server saves the answer in the DB, and then the user can press next to fetch the next question. I tested my API endpoint with k6 using 500 virtual users for 2 minutes, and the database returns dial: i/o timeout or sometimes even connection rejected, usually once it reaches around 6000 requests, leaving me at about 93% success. I tried to fine-tune the RDS instance with the tcp_keepalives parameters but without any luck; I still cannot get 100% of the requests to pass. I also tried increasing the general storage from the 20 GB minimum to 100 GB and switching from the free db.t3.micro to the db.t3.medium size.
Any hint would be much appreciated. It should be possible for a normal Golang server with Postgres to handle these requests at the same time, shouldn't it? It is just a regular select * from x where y statement.
EDIT (CODE SAMPLE):
I use a dependency injection pattern, so I have only one instance of the DB passed to all the other repositories, including the API package. The DB repo looks like this:
func NewRepository() (DBRepository, error) {
dbname := getenv("POSTGRES_DATABASE", "")
username := getenv("POSTGRES_ROOT_USERNAME", "")
password := getenv("POSTGRES_ROOT_PASSWORD", "")
host := getenv("POSTGRES_HOST", "")
port := getenv("POSTGRES_PORT", "")
dsn := fmt.Sprintf("host=%s user=%s password=%s"+
" dbname=%s port=%s sslmode=disable TimeZone=Europe/Bucharest",
host, username, password, dbname, port)
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
if err != nil {
return nil, err
}
db.AutoMigrate(
//migrate tables are here
)
return &dbRepository{
db: db,
}, nil
}
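I have also been looking at capping the underlying database/sql pool on this single shared *gorm.DB; a sketch of what I mean, placed just before the return (the limits are illustrative, not tuned values, and "time" would need to be imported):

sqlDB, err := db.DB() // underlying *sql.DB behind the gorm handle
if err != nil {
return nil, err
}
sqlDB.SetMaxOpenConns(50)                  // stay below the RDS max_connections limit
sqlDB.SetMaxIdleConns(25)                  // keep some connections warm between bursts
sqlDB.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically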
Currently the parameters used in RDS for TCP keepalive are:
tcp_keepalives_count 30
tcp_keepalives_idle 1000
tcp_keepalives_interval 1000
and I also tried with different numbers.
The query I am doing is a simple .Find() statement from the gorm package, but that doesn't seem to be the issue, since it blocks on the very first query/connection with the DB. There are 2 queries executed in this endpoint I am testing, but it gets stuck on the first. If more info is needed I will update; this issue is getting so frustrating.
My k6 test is the following:
import http from 'k6/http';
import { check } from 'k6';
import { sleep } from 'k6';
export const options = {
insecureSkipTLSVerify: true,
stages: [
{ target: 225, duration: '2m' },
],
};
const access_tokens = []
let random_token = access_tokens[Math.floor(Math.random()*access_tokens.length)];
const params = {
headers: {'Authorization': `Bearer ${random_token}`}
};
export default function () {
let res = http.get('endpoint here', params);
check(res, {'Message': (r)=> r.status === 202});
sleep(1);
}
The DB tables are also indexed and tested with the EXPLAIN statement.
I have recently upgraded to the newer, official Golang Mongo driver for an app I am working on.
Everything works perfectly for my local development; however, when I hook it up and point it at my backend server I get a 'context deadline exceeded' error when calling the client.Ping(...) method.
The old driver code still works fine, and I can also print out the connection string, copy and paste it into the Compass app, and it works without issues.
However, for the life of me I can't work out why this new code is returning a context timeout. The only differences are that Mongo is running on the non-standard port 32680 and that I am also using the mgm package. But that just uses the official Mongo driver under the hood.
Mongo version is: 4.0.12 (locally and remote)
Connection code is here:
// NewClient creates a mongo DateBase connection
func NewClient(cfg config.Mongo) (*Client, error) {
// create database connection string
conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s", cfg.Username, cfg.Password, cfg.Host, cfg.Port)
// set mgm conf ie ctxTimeout value
conf := mgm.Config{CtxTimeout: cfg.CtxTimeout}
// setup mgm / DateBase connection
err := mgm.SetDefaultConfig(&conf, cfg.Database, options.Client().ApplyURI(conStr))
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to mongodb. cfg: %+v. conStr: %+v.", cfg, conStr)
}
// get access to underlying mongodb client driver, db and mgmConfig. Need for adding additional tools like seeding/migrations/etc
mgmCfg, client, db, err := mgm.DefaultConfigs()
if err != nil {
return nil, errors.Wrap(err, "failed to return mgm.DefaultConfigs")
}
// NOTE: fails here!
if err := client.Ping(mgm.Ctx(), readpref.Primary()); err != nil {
return nil, errors.Wrapf(err, "Ping failed to mongodb. cfg: %+v. conStr: %+v. mgmCfg: %+v", cfg, conStr, mgmCfg)
}
return &Client{
cfg: cfg,
mgmCfg: mgmCfg,
client: client,
db: db,
}, nil
}
HELP! I have no idea how to debug this any further.
Try adding your authSource to your DSN,
something like
mongodb://USER:PASSWORD@HOST:PORT/DBNAME?authSource=AUTHSOURCE
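Applied to the NewClient function above, that would look something like this (authSource=admin is only an example; use whichever database your user is actually defined in):

// Sketch: include the database name and authSource in the connection string.
conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s/%s?authSource=admin",
cfg.Username, cfg.Password, cfg.Host, cfg.Port, cfg.Database)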