Read rows from postgresql database using golang [duplicate]

This question already has answers here:
How to make scanning DB rows in Go DRY?
(1 answer)
How to call the Scan variadic function using reflection
(4 answers)
Closed 8 months ago.
I am reading all rows from a postgresql database using golang. This works perfectly, but I need to reorganise the code in a more productive way. Any help is much appreciated.
Here is the code:
type Books struct {
	Name        string
	ISBN        string
	Author      string
	PublishedOn string
}
func GetBooks() []Books {
	db := Connection() // connection to the postgres database
	var books []Books
	rows, err := db.Query("SELECT * FROM books")
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	for rows.Next() {
		book := Books{}
		if err = rows.Scan(&book.Name, &book.ISBN, &book.Author, &book.PublishedOn); err != nil {
			panic(err)
		}
		books = append(books, book)
	}
	return books
}
Here, let us consider that the struct has 20 fields and the row also has 20 columns. What is an effective method that avoids listing &book.Name, &book.ISBN, &book.Author, &book.PublishedOn (or however many columns there are) and instead passes just ONE single value?
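The linked duplicates cover this in depth; the essence is to build the Scan arguments dynamically with reflection instead of naming each field. A minimal sketch, assuming the SELECT returns columns in the same order as the struct's exported fields (the helper name scanStruct is illustrative, not from the original post):

	import (
		"database/sql"
		"reflect"
	)

	// scanStruct builds the rows.Scan arguments from the fields of the
	// struct that dest points to, so the call site does not have to list
	// every column by hand. Columns must match the field order, and all
	// fields must be exported.
	func scanStruct(rows *sql.Rows, dest interface{}) error {
		v := reflect.ValueOf(dest).Elem()
		fields := make([]interface{}, v.NumField())
		for i := range fields {
			fields[i] = v.Field(i).Addr().Interface()
		}
		return rows.Scan(fields...)
	}

	// inside the loop above:
	// book := Books{}
	// if err := scanStruct(rows, &book); err != nil { panic(err) }

In practice a mapping library such as sqlx (db.Select(&books, "SELECT * FROM books")) does this bookkeeping for you, matching columns to fields by name rather than by position.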

Related

Bulk insert csv data using pgx.CopyFrom into a postgres database

I'm once again trying to push lots of csv data into a postgres database.
In the past I've created a struct to hold the data and unpacked each column into the struct before bumping the lot into the database table. That works fine, but I've just found pgx.CopyFrom* and it seems I should be able to make it work better.
So far I've got the column headings for the table into a slice of strings and the csv data into another slice of strings, but I can't work out the syntax to push this into the database.
I've found this post which sort of does what I want, but it uses a [][]interface{} rather than a []string.
The code I have so far is
// loop over the lines and find the first one with a timestamp
for {
	line, err := csvReader.Read()
	if err == io.EOF {
		break
	} else if err != nil {
		log.Error("Error reading csv data", "Loading Loop", err)
	}
	// see if we have a string starting with a timestamp
	_, err = time.Parse(timeFormat, line[0])
	if err == nil {
		// we have a data line
		_, err = db.CopyFrom(
			context.Background(),
			pgx.Identifier{"emms", "scada_crwf", "pwr_active"}, // one element per name part
			col_headings,
			pgx.CopyFromRows(line), // does not compile: line is a []string
		)
	}
}
But pgx.CopyFromRows expects [][]interface{} not []string.
What should the syntax be? Am I barking up the wrong tree?
I recommend reading your CSV and creating a []interface{} for each record you read, appending the []interface{} to a collection of rows ([][]interface{}), then passing rows on to pgx.
var rows [][]interface{}
// read header outside of CSV "body" loop (error ignored for brevity)
header, _ := reader.Read()
// inside your CSV reader "body" loop...
row := make([]interface{}, len(record))
// use your logic/gate-keeping from here
row[0] = record[0] // timestamp
// convert the floats
for i := 1; i < len(record); i++ {
	val, _ := strconv.ParseFloat(record[i], 64) // bitSize is 64, not 10
	row[i] = val
}
rows = append(rows, row)
...
copyCount, err := conn.CopyFrom(
	pgx.Identifier{"floaty-things"},
	header,
	pgx.CopyFromRows(rows),
)
I can't mock up the entire program, but here's a full demo of converting the CSV to [][]interface{}: https://go.dev/play/p/efbiFN2FJMi
And check the documentation: https://pkg.go.dev/github.com/jackc/pgx/v4

Proper way to query to check if credentials already exist [duplicate]

This question already has answers here:
Scan function by reference or by value
(1 answer)
I want to check if record exist and if not exist then i want to insert that record to database using golang
(4 answers)
Closed 2 years ago.
I currently have:
func foo(w http.ResponseWriter, req *http.Request) {
	// the SELECT list is empty here; it should be SELECT 1 (or *), see the answer below
	chekr := `SELECT FROM public."Users" WHERE email=$1`
	err := db.QueryRow(chekr, usr.Email).Scan()
	if err != sql.ErrNoRows {
		data, err := json.Marshal("There is already a user with this email")
		if err == nil {
			w.Write(data)
		}
	}
	// code that should run if the email isn't found
}
However, I find it never works and always enters the if block.
As the comment above stated, I forgot the * (or 1) in the SELECT list. QueryRow works; I just had another error somewhere else. As others have noted there are other errors too; this is just a single case for testing.
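For completeness, a pattern that avoids the empty-SELECT pitfall entirely is SELECT EXISTS, which always returns exactly one row, so there is no ErrNoRows case to juggle. A minimal sketch, assuming the same db, usr, and table as above:

	var exists bool
	err := db.QueryRow(
		`SELECT EXISTS (SELECT 1 FROM public."Users" WHERE email = $1)`,
		usr.Email,
	).Scan(&exists)
	if err != nil {
		// handle the error
	}
	if exists {
		// a user with this email already exists
	}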

How to insert multiple rows by one query? [duplicate]

This question already has answers here:
How do I insert multiple values into a postgres table at once?
(6 answers)
How to insert multiple data at once
(8 answers)
Closed 3 years ago.
In PostgreSQL I have a pretty simple table where I store information about relationships between users and games. Here is a working function which I use to insert data. As you can see, it makes multiple SQL queries to the database, which I don't think is elegant. What do I need to change to insert multiple rows with one query?
var CreateRelationship = func(responseWriter http.ResponseWriter, request *http.Request) {
	userID := mux.Vars(request)["user_id"]
	type RequestBody struct {
		Games []int `json:"games"`
	}
	requestBody := RequestBody{}
	decoder := json.NewDecoder(request.Body)
	if err := decoder.Decode(&requestBody); err != nil {
		utils.ResponseWithError(responseWriter, http.StatusBadRequest, err.Error())
		return
	}
	for i := 0; i < len(requestBody.Games); i++ {
		if _, err := database.DBSQL.Exec("INSERT INTO users_games_relationship (user_id, game_id) VALUES ($1, $2);", userID, requestBody.Games[i]); err != nil {
			utils.ResponseWithError(responseWriter, http.StatusInternalServerError, err.Error())
			return
		}
	}
	utils.ResponseWithSuccess(responseWriter, http.StatusOK, "All new records successfully created.")
}
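The linked duplicates give the full answers; one common approach is to build a single INSERT with numbered placeholders and send every (user_id, game_id) pair in one Exec. A minimal sketch under that assumption (the helper insertRelationships and its signature are illustrative, not part of the original code):

	import (
		"database/sql"
		"fmt"
		"strings"
	)

	// insertRelationships issues one statement of the form
	// INSERT INTO users_games_relationship (user_id, game_id)
	// VALUES ($1, $2), ($3, $4), ...
	func insertRelationships(db *sql.DB, userID string, gameIDs []int) error {
		if len(gameIDs) == 0 {
			return nil
		}
		placeholders := make([]string, 0, len(gameIDs))
		args := make([]interface{}, 0, 2*len(gameIDs))
		for i, gameID := range gameIDs {
			placeholders = append(placeholders, fmt.Sprintf("($%d, $%d)", 2*i+1, 2*i+2))
			args = append(args, userID, gameID)
		}
		query := "INSERT INTO users_games_relationship (user_id, game_id) VALUES " +
			strings.Join(placeholders, ", ")
		_, err := db.Exec(query, args...)
		return err
	}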

Creating an array/slice to store DB Query results in Golang

I'm just getting started with golang and I'm attempting to read several rows from a Postgres users table and store the result as an array of User structs that model the row.
type User struct {
	Id    int
	Title string
}

func Find_users(db *sql.DB) {
	// Query the DB
	rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
	if err != nil {
		log.Fatal(err)
	}
	// Initialize array slice of all users. What size do I use here?
	// I don't know the number of results beforehand
	var users = make([]User, ????)
	// Loop through each result record, creating a User struct for each row
	defer rows.Close()
	var id int
	var title string
	for i := 0; rows.Next(); i++ {
		err := rows.Scan(&id, &title)
		if err != nil {
			log.Fatal(err)
		}
		log.Println(id, title)
		users[i] = User{Id: id, Title: title}
	}
	// .... do stuff
}
As you can see, my problem is that I want to initialize an array or slice beforehand to store all the DB records, but I don't know ahead of time how many records there are going to be.
I was weighing a few different approaches, and wanted to find out which of the following was most used in the golang community -
Create a really large array beforehand (e.g. 10,000 elements). Seems wasteful
Count the rows explicitly beforehand. This could work, but I need to run 2 queries - one to count and one to get the results. If my query is complex (not shown here), that's duplicating that logic in 2 places. Alternatively I can run the same query twice, but first loop through it and count the rows. All this would work, but it just seems unclean.
I've seen examples of expanding slices. I don't quite understand slices well enough to see how it could be adapted here. Also if I'm constantly expanding a slice 10k times, it definitely seems wasteful.
Go has a built-in append function for exactly this purpose. It takes a slice and one or more elements and appends those elements to the slice, returning the new slice. Additionally, the zero value of a slice (nil) is a slice of length zero, so if you append to a nil slice, it will work. Thus, you can do:
type User struct {
	Id    int
	Title string
}

func Find_users(db *sql.DB) {
	// Query the DB
	rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		var id int
		var title string
		err := rows.Scan(&id, &title)
		if err != nil {
			log.Fatal(err)
		}
		log.Println(id, title)
		users = append(users, User{Id: id, Title: title})
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
	// ...
}
Another example of appending to a slice:
type DeviceInfo struct {
	DeviceName     string
	DeviceID       string
	DeviceUsername string
	Token          string
}

func QueryMultiple(db *sql.DB) {
	var device DeviceInfo
	sqlStatement := `SELECT "deviceName", "deviceID", "deviceUsername",
		token FROM devices LIMIT 10`
	rows, err := db.Query(sqlStatement)
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	var deviceSlice []DeviceInfo
	for rows.Next() {
		// scan in the same order as the SELECT list
		if err := rows.Scan(&device.DeviceName, &device.DeviceID,
			&device.DeviceUsername, &device.Token); err != nil {
			panic(err)
		}
		deviceSlice = append(deviceSlice, device)
	}
	fmt.Println(deviceSlice)
}
I think that what you are looking for is the capacity.
The following allocates a backing array that can hold 10,000 items:
users := make([]User, 0, 10_000)
but the slice itself is still empty (len(users) == 0).
Now you can add at least 10,000 items before the backing array needs to grow. For that purpose append() works as expected:
users = append(users, User{...})
Maps grow by doubling in size, starting at 1, so their size stays a power of two. I'm not sure whether slices grow the same way (in powers of two). If so, the allocation above would be:
math.Pow(2, math.Ceil(math.Log(10_000)/math.Log(2)))
which is 2^14, i.e. 16,384.
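Rather than guessing at the growth factor, you can probe it; here is a small self-contained example (the exact factors are a runtime implementation detail and have changed across Go versions) that prints the capacity each time append reallocates:

	package main

	import "fmt"

	func main() {
		var s []int
		prevCap := cap(s)
		for i := 0; i < 10_000; i++ {
			s = append(s, i)
			if c := cap(s); c != prevCap {
				fmt.Println("len:", len(s), "cap:", c)
				prevCap = c
			}
		}
	}

On recent Go versions small slices roughly double while large ones grow by a smaller factor (about 1.25x), and the result is rounded up to an allocation size class, so the final capacity is not necessarily a power of two.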
Note: if your query uses a compatible INDEX, i.e. the WHERE clause matches the INDEX one to one, then an extra SELECT COUNT(*) ... can be cheap, since the count can be answered from the index without scanning all the rows of your tables.

Golang mgo result into simple slice

I'm fairly new to both Go and MongoDB. I'm trying to select a single field from the DB and save it in an int slice, without any luck.
userIDs := []int64{}
coll.Find(bson.M{"isdeleted": false}).Select(bson.M{"userid": 1}).All(&userIDs)
The above prints out an empty slice. However, if I create a struct with a single ID field that is int64 with marshalling then it works fine.
All I am trying to do is work with a simple slice containing IDs that I need instead of a struct with a single field. All help is appreciated.
Because mgo queries return documents, a few lines of code are required to accomplish the goal:
var result []struct {
	UserID int64 `bson:"userid"`
}
err := coll.Find(bson.M{"isdeleted": false}).Select(bson.M{"userid": 1}).All(&result)
if err != nil {
	// handle error
}
userIDs := make([]int64, len(result))
for i := range result {
	userIDs[i] = result[i].UserID
}