I'm trying to get the results of a SELECT query in comma-separated format (I don't want the results written to a file). The following code works fine if I use stmt := fmt.Sprintf("SELECT * FROM table"), but the code below fails to produce any results because rows.Next() never returns true. How can I resolve this issue?
func (db *Dbq) Getresults() []interface{} {
	stmt := fmt.Sprintf("COPY (select * from table) TO STDOUT WITH CSV")
	var results []interface{}
	rows, err := db.conn.Query(db.context, stmt)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		values, err := rows.Values()
		if err != nil {
			log.Fatal(err)
		}
		results = append(results, values)
	}
	return results
}
I ran the code with both queries and debugged it to see where it fails.
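For what it's worth, COPY ... TO STDOUT is not an ordinary row-returning statement: the CSV output is streamed over PostgreSQL's copy protocol, so Query has nothing for rows.Next() to iterate. Below is a minimal sketch (not the original code), assuming db.conn is a *pgx.Conn (pgx v4/v5) and db.context is a context.Context, that reads the CSV through pgconn's CopyTo instead; it needs "bytes" in the import list.
func (db *Dbq) GetResultsCSV() (string, error) {
	var buf bytes.Buffer
	// CopyTo writes the COPY output (CSV here) into the supplied io.Writer.
	_, err := db.conn.PgConn().CopyTo(db.context, &buf,
		"COPY (SELECT * FROM table) TO STDOUT WITH CSV")
	if err != nil {
		return "", err
	}
	return buf.String(), nil
}
If db.conn is a pgxpool.Pool rather than a single connection, you would first acquire a connection from the pool and call PgConn() on that connection.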
I'm trying to query from multiple databases. Each database is connected using the following function:
func connectDB(dbEnv string) *sql.DB {
	// Load environment variables from the local .env file
	err := godotenv.Load(dbEnv)
	if err != nil {
		log.Fatalf("Some error occurred. Err: %s", err)
	}
	dialect := os.Getenv("DIALECT")
	host := os.Getenv("HOST")
	dbPort := os.Getenv("DBPORT")
	user := os.Getenv("USER")
	dbName := os.Getenv("NAME")
	password := os.Getenv("PASSWORD")

	// Database connection string
	dbURI := fmt.Sprintf("port=%s host=%s user=%s password=%s dbname=%s sslmode=disable", dbPort, host, user, password, dbName)

	// Create database object
	db, err := sql.Open(dialect, dbURI)
	if err != nil {
		log.Fatal(err)
	}
	return db
}
type order struct {
	OrderID string `json:"orderID"`
	Name    string `json:"name"`
}
type book struct {
	BookID string `json:"bookID"`
	Name   string `json:"name"`
}
func getOrders(db *sql.DB) []order {
	var (
		orderID string
		name    string
	)
	var allRows = []order{}

	query := `
		SELECT orderID, name
		FROM orders.orders;
	`
	// Get rows using the query
	rows, err := db.Query(query)
	if err != nil { // Log if error
		log.Fatal(err)
	}
	defer rows.Close()

	// Add each row into the "allRows" slice
	for rows.Next() {
		err := rows.Scan(&orderID, &name)
		if err != nil {
			log.Fatal(err)
		}
		// Create a new order struct with the received data
		row := order{
			OrderID: orderID,
			Name:    name,
		}
		allRows = append(allRows, row)
	}

	// Log if error
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}
	return allRows
}
func getBooks(db *sql.DB) []book {
	var (
		bookID string
		name   string
	)
	var allRows = []book{}

	query := `
		SELECT bookID, name
		FROM books.books;
	`
	// Get rows using the query
	rows, err := db.Query(query)
	if err != nil { // Log if error
		log.Fatal(err)
	}
	defer rows.Close()

	// Add each row into the "allRows" slice
	for rows.Next() {
		err := rows.Scan(&bookID, &name)
		if err != nil {
			log.Fatal(err)
		}
		// Create a new book struct with the received data
		row := book{
			BookID: bookID,
			Name:   name,
		}
		allRows = append(allRows, row)
	}

	// Log if error
	if err = rows.Err(); err != nil {
		log.Fatal(err)
	}
	return allRows
}
func main() {
	ordersDB := connectDB("ordersDB.env")
	booksDB := connectDB("booksDB.env")

	orders := getOrders(ordersDB)
	books := getBooks(booksDB)
	fmt.Println(len(orders), len(books))
}
The issue is that when I use ordersDB first, the program only recognizes the table in ordersDB, and when I use booksDB first, it only recognizes the table in booksDB.
When I try to query a table in booksDB after using ordersDB, I get a "relation "books.books" does not exist" error. When I try to query a table in ordersDB after using booksDB, I get "relation "orders.orders" does not exist".
Is there a better way to connect to multiple databases?
You are using github.com/joho/godotenv to load the database configuration from the environment. Summarising (and cutting out a lot of detail) what you are doing is:
godotenv.Load("ordersDB.env")
host := os.Getenv("HOST")
// Connect to DB
godotenv.Load("booksDB.env")
host := os.Getenv("HOST")
// Connect to DB 2
However, as stated in the docs, "Existing envs take precedence of envs that are loaded later". The README also puts it more clearly: "It's important to note that it WILL NOT OVERRIDE an env variable that already exists".
So your code loads the first .env file, populates the environment variables, and connects to the database. You then load the second .env file but, because the environment variables are already set, they are not changed and you connect to the same database a second time.
As a workaround you could use Overload. However, it is probably better to reconsider your use of environment variables (and perhaps use different variable names for the second connection).
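For example, here is a sketch of a connectDB variant that never writes to the process environment at all: godotenv.Read loads a .env file into its own map, so the two configurations cannot collide (variable names reused from the question).
func connectDB(dbEnv string) *sql.DB {
	// Read the .env file into a map instead of mutating os.Environ.
	cfg, err := godotenv.Read(dbEnv)
	if err != nil {
		log.Fatalf("could not read %s: %s", dbEnv, err)
	}
	dbURI := fmt.Sprintf("port=%s host=%s user=%s password=%s dbname=%s sslmode=disable",
		cfg["DBPORT"], cfg["HOST"], cfg["USER"], cfg["PASSWORD"], cfg["NAME"])
	db, err := sql.Open(cfg["DIALECT"], dbURI)
	if err != nil {
		log.Fatal(err)
	}
	return db
}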
Golang, pgx: I am trying to get all rows from t_example (currently 20 items), but for some reason only one is returned (the first one). I tried to debug, and rows.Next() returns false after the first iteration.
Could you please help me with advice?
I'm a newbie, but I've tried to find similar cases here in advance :)
My code:
func (ts *exampleStorage) GetAll() *[]Example {
	q := `SELECT id, name FROM t_example`
	rows := ts.client.Query(context.Background(), q)

	example := make([]Example, 0)
	for rows.Next() {
		var ex Example
		rows.Scan(&ex.Id, &ex.Name)
		example = append(example, ex)
	}
	return &example
}
Your code doesn't check for errors:
rows.Scan(&ex.Id, &ex.Name) can return an error (and, in the pgx implementation, this error is fatal to the rows iteration):
err := rows.Scan(&ex.Id, &ex.Name)
if err != nil {
	fmt.Printf("*** rows.Scan error: %s", err)
	return nil, err
}
there is a gotcha with sql.Rows / pgx.Rows error checking: you should check whether an error occurred after exiting the for rows.Next() { loop:
for rows.Next() {
	...
}

// check rows.Err() after the last rows.Next():
if err := rows.Err(); err != nil {
	// on top of errors triggered by bad conditions on the 'rows.Scan()' call,
	// there could also be some bad things like a truncated response because
	// of some network error, etc ...
	fmt.Printf("*** iteration error: %s", err)
	return nil, err
}
return example, nil
a side note: in the vast majority of cases you don't want to return a pointer to a slice (e.g. *[]Example) but a slice (e.g. []Example)
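Putting those points together, here is a sketch of GetAll with the checks in place and a plain slice return, assuming ts.client.Query follows the usual pgx signature and returns (pgx.Rows, error):
func (ts *exampleStorage) GetAll() ([]Example, error) {
	q := `SELECT id, name FROM t_example`
	rows, err := ts.client.Query(context.Background(), q)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	examples := make([]Example, 0)
	for rows.Next() {
		var ex Example
		if err := rows.Scan(&ex.Id, &ex.Name); err != nil {
			return nil, err
		}
		examples = append(examples, ex)
	}
	// rows.Err() reports anything that ended the iteration early.
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return examples, nil
}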
I'm using SELECT * in a db.Query() call to return columns from a table. Typically, I would rows.Scan() the rows into a pre-declared struct{} for further manipulation, but in this case the table columns change frequently, so I'm not able to use a declared struct{} as part of my Scan().
I've been struggling to figure out how I might dynamically build a struct{} based on the column results of db.Query(), which I could subsequently use in Scan(). I've read a little about reflect, but I'm struggling to determine whether it's right for my use case or if I need to think about something else.
Any pointers would be greatly appreciated.
You can get the column names from the resulting row set and prepare a slice of scan targets.
Example (https://go.dev/play/p/ilYmEIWBG5S):
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/DATA-DOG/go-sqlmock"
)

func main() {
	// mock db
	db, mock, err := sqlmock.New()
	if err != nil {
		log.Fatal(err)
	}
	columns := []string{"id", "status"}
	mock.ExpectQuery("SELECT \\* FROM table").
		WillReturnRows(sqlmock.NewRows(columns).AddRow(1, "ok"))

	// actual code
	rows, err := db.Query("SELECT * FROM table")
	if err != nil {
		log.Fatal(err)
	}
	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	data := make([]interface{}, len(cols))
	strs := make([]sql.NullString, len(cols))
	for i := range data {
		data[i] = &strs[i]
	}
	for rows.Next() {
		if err := rows.Scan(data...); err != nil {
			log.Fatal(err)
		}
		for i, d := range data {
			fmt.Printf("%s = %+v\n", cols[i], d)
		}
	}
}
This example reads all columns as strings. To detect column types, you can use the rows.ColumnTypes method.
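As a rough sketch of that idea, reusing the rows variable from the example above, the scan target for each column can be chosen from the driver-reported type (the DatabaseTypeName strings vary by driver, so the cases below are only illustrative):
colTypes, err := rows.ColumnTypes()
if err != nil {
	log.Fatal(err)
}
data := make([]interface{}, len(colTypes))
for i, ct := range colTypes {
	// Pick a nullable scan target per column based on its reported type.
	switch ct.DatabaseTypeName() {
	case "INT4", "INT8", "INTEGER", "BIGINT":
		data[i] = new(sql.NullInt64)
	case "BOOL", "BOOLEAN":
		data[i] = new(sql.NullBool)
	default:
		data[i] = new(sql.NullString)
	}
}
for rows.Next() {
	if err := rows.Scan(data...); err != nil {
		log.Fatal(err)
	}
	for i, d := range data {
		fmt.Printf("%s = %+v\n", colTypes[i].Name(), d)
	}
}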
I am inserting some data into a table using sqlx, like this:
func (*RideRepositoryImpl) insert(entity interface{}, tx persistence.Transaction) (sql.Result, error) {
	ride := entity.(*model.Ride)
	placeHolders := repository.InsertPlaceholders(len(rideColumns))
	sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s)", TableName, strings.Join(Columns, ","), placeHolders)
	return tx.Exec(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)
}
and I am calling this function like this:
func (p *RideRepositoryImpl) Save(ride *model.Ride, tx persistence.Transaction) error {
	return repository.Save(ride, p.insert, tx)
}
Now I want to get the UUID of the saved record immediately after saving it. Is there a clean way to do this?
PostgreSQL has the RETURNING clause for this.
Sometimes it is useful to obtain data from modified rows while they
are being manipulated. The INSERT, UPDATE, and DELETE commands
all have an optional RETURNING clause that supports this. Use of
RETURNING avoids performing an extra database query to collect the
data, and is especially valuable when it would otherwise be difficult
to identify the modified rows reliably.
// add the RETURNING clause to your INSERT query
sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s) RETURNING <name_of_uuid_column>", TableName, strings.Join(Columns, ","), placeHolders)

// use QueryRow instead of Exec
row := tx.QueryRow(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)

// scan the result of the query
var uuid string
if err := row.Scan(&uuid); err != nil {
	panic(err)
}
// ...
For additional INSERT-specific info related to RETURNING you can go to the INSERT documentation and search the page for "returning" with CTRL/CMD+F.
If, in addition, you need your function to still return an sql.Result value to satisfy some requirement, then you can return your own implementation.
var _ sql.Result = sqlresult{} // compiler check

type sqlresult struct{ lastid, nrows int64 }

func (r sqlresult) LastInsertId() (int64, error) { return r.lastid, nil }
func (r sqlresult) RowsAffected() (int64, error) { return r.nrows, nil }

func (*RideRepositoryImpl) insert(entity interface{}, tx persistence.Transaction) (sql.Result, error) {
	ride := entity.(*model.Ride)
	placeHolders := repository.InsertPlaceholders(len(rideColumns))
	sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s) RETURNING <name_of_uuid_column>", TableName, strings.Join(Columns, ","), placeHolders)
	row := tx.QueryRow(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)
	if err := row.Scan(&ride.<NameOfUUIDField>); err != nil {
		return nil, err
	}
	return sqlresult{0, 1}, nil
}
I am using Go and Postgres. Postgres has an advanced feature where it can return query results in JSON format. What I want to do is take that JSON result and return it, but I am having trouble because it has to be a string before I can return it. This is my code:
package main

import (
	"database/sql"
	"io"
	"log"
	"net/http"

	_ "github.com/lib/pq"
)

func HelloServer(w http.ResponseWriter, req *http.Request) {
	db, err := sql.Open("postgres", "user=postgres password=password dbname=name sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query("SELECT to_json(t) FROM (SELECT * FROM cars) t")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// I want to write the JSON held in rows here instead of the placeholder text.
	io.WriteString(w, "hello, world!\n")
}

func main() {
	http.HandleFunc("/hello", HelloServer)
	log.Fatal(http.ListenAndServe(":12345", nil))
}
The rows value holds a JSON array; how can I turn it into a string? In C# or Java I would just call .ToString() on it. As you can see from the code above, io.WriteString takes a string as its second parameter, so I want to convert the rows to a string once the JSON has been returned and display it in the browser by passing it to that method. In other words, I want to replace the "hello, world!" output with that string.
rows is an *sql.Rows value. To use the data returned by your database query, you have to iterate over the rows.
An example from the docs:
age := 27
rows, err := db.Query("SELECT name FROM users WHERE age=?", age)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
	var name string
	if err := rows.Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s is %d\n", name, age)
}
if err := rows.Err(); err != nil {
	log.Fatal(err)
}
In your case you should instead use QueryRow, because you are expecting the database to return a single result. Either way, once you have used Scan to put the data into your own variable, you can either parse the JSON or print it out.
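As a sketch of that suggestion, assuming you wrap the query in json_agg so the database returns a single JSON array for QueryRow to scan (connection string and table name taken from the question):
func HelloServer(w http.ResponseWriter, req *http.Request) {
	db, err := sql.Open("postgres", "user=postgres password=password dbname=name sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// json_agg folds every row into one JSON array, so QueryRow yields a single value;
	// COALESCE covers the empty-table case, where json_agg returns NULL.
	var payload string
	err = db.QueryRow("SELECT COALESCE(json_agg(t), '[]') FROM (SELECT * FROM cars) t").Scan(&payload)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	io.WriteString(w, payload)
}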