I'm using Go and the uuid package to generate a UUID of type [16]byte. However, when I try to insert that UUID into my Postgres column of type uuid, I get the error converting argument $1 type: unsupported type [16]uint8, a array. So apparently I should convert the UUID on the client before inserting it into the db. How should I do that? What type should I convert it to?
In short: what Go data type will work with uuid in Postgres?
Thanks to the link from #sberry, I found success. Here are snippets of the code for your benefit (with a PostgreSQL 9.5 database):
import (
    "database/sql"
    "net/http"

    "github.com/google/uuid"
)

type Thing struct {
    ID   uuid.UUID `json:"-" sql:",type:uuid"`
    Name string    `json:"name"`
}
// For a database table created as such:
// CREATE TABLE things (
//     id   UUID PRIMARY KEY DEFAULT gen_random_uuid(),
//     name TEXT DEFAULT ''::text
// )

// db is a package-level *sql.DB, opened elsewhere.
func selectThingsSQL() ([]Thing, error) {
    things := make([]Thing, 0)
    rows, err := db.Query("SELECT id, name FROM things")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    for rows.Next() {
        t := &Thing{}
        if err := rows.Scan(&t.ID, &t.Name); err != nil {
            return nil, err
        }
        things = append(things, *t)
    }
    return things, nil
}
I am pretty new to Go and I am trying to find the best way to set up my db communication. I remember from a previous workplace that in PHP you could create a class representing a SQL table; when you needed to insert data into your db, you would create an object of that class with all the necessary data, call insert(), pass your object, and it would insert that data into the corresponding table without you writing any SQL code. update() worked in a very similar way, except it updated instead of inserting. Unfortunately, I don't remember the name of that PHP framework, but maybe someone knows a way to achieve something like that in Go, or is it not a thing?
Let's say I have a struct:

type Patients struct {
    ID    int
    Name  string
    Image string
}

Now I want to have a function that takes a Patients object as a parameter and inserts it into a patients Postgres table, automatically converting the patient into what Postgres expects:

func (patients *Patients) insert(patient Patients) {
}
And then update() would take a Patients object and basically perform this chunk of code without me writing it:

stmt := `update patients set
    name = $1,
    image = $2
    where id = $3
`
_, err := db.ExecContext(ctx, stmt,
    patient.Name,
    patient.Image,
    patient.ID,
)
You are looking for something called an ORM (Object Relational Mapper). There are a few in Go, but the most popular is GORM. It's a bit of a controversial topic, but I think it's a good idea to use an ORM if you're new to Go and/or databases. It will save you a lot of time and effort.
The alternative is to use the database/sql package and write your own SQL queries. This is a good idea if you're an experienced Go developer and/or database administrator. It will give you more control over your queries and will be more efficient. Recommended reading: https://www.alexedwards.net/blog/organising-database-access. Recommended libraries for this approach include sqlx and pgx.
Here is what your struct would look like as a GORM model:
type Patient struct {
ID int `gorm:"primaryKey"`
Name string
Image string
}
And here is an example program for how to insert a patient into the database:
package main

import (
    "fmt"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Patient struct {
    ID    int `gorm:"primaryKey"`
    Name  string
    Image string
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable TimeZone=UTC"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    db.AutoMigrate(&Patient{})

    patient := Patient{
        Name:  "John Smith",
        Image: "https://example.com/image.png",
    }
    result := db.Create(&patient)
    if result.Error != nil {
        panic(result.Error)
    }
    fmt.Println(patient)
}
If instead you wanted to use database/sql directly (sqlx works the same way, with a bit less boilerplate), you would write something like this:
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

type Patient struct {
    ID    int
    Name  string
    Image string
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable TimeZone=UTC"
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    _, err = db.Exec(`
        CREATE TABLE IF NOT EXISTS patients (
            id    SERIAL PRIMARY KEY,
            name  TEXT,
            image TEXT
        )
    `)
    if err != nil {
        log.Fatal(err)
    }

    patient := Patient{
        Name:  "John Smith",
        Image: "https://example.com/image.png",
    }
    _, err = db.Exec(`
        INSERT INTO patients (name, image) VALUES ($1, $2)
    `, patient.Name, patient.Image)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(patient)
}
Of course, managing your database schema is a bit more complicated with an ORM. GORM's AutoMigrate covers simple cases, but for real schema management I prefer a migration tool called goose. It takes a little setup, but it's very powerful and flexible. Here is an example of how to use it:
package main

import (
    "fmt"
    "log"

    "github.com/pressly/goose"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Patient struct {
    ID    int `gorm:"primaryKey"`
    Name  string
    Image string
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable TimeZone=UTC"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic("failed to connect database")
    }

    // gorm.DB wraps a *sql.DB, which is what goose operates on.
    sqlDB, err := db.DB()
    if err != nil {
        log.Fatal(err)
    }

    goose.SetDialect("postgres")
    goose.SetTableName("schema_migrations")
    if err := goose.Run("up", sqlDB, "migrations"); err != nil {
        log.Fatal(err)
    }

    patient := Patient{
        Name:  "John Smith",
        Image: "https://example.com/image.png",
    }
    result := db.Create(&patient)
    if result.Error != nil {
        panic(result.Error)
    }
    fmt.Println(patient)
}
where your migrations directory looks like this (goose keeps the up and down steps in one file, marked with annotations):

migrations/
    00001_create_patients.sql

and your migration looks like this:

-- 00001_create_patients.sql

-- +goose Up
CREATE TABLE patients (
    id    SERIAL PRIMARY KEY,
    name  TEXT,
    image TEXT
);

-- +goose Down
DROP TABLE patients;
I hope this helps! Let me know if you have any questions.
I think what you're looking for is an ORM. An ORM is a library that does essentially this: it takes language structures and automatically handles the SQL logic for you.
The most popular library for this in Go is GORM. Here's a link to their home page: https://gorm.io/. I've used it heavily in production and it's been a good experience!
The docs have a good example of what it'll look like.
Hope this helps.
I am trying to update a record in a postgres table with an array (slice) of values. The table has the following DDL:
CREATE TABLE slm_files (
    id               uuid DEFAULT gen_random_uuid() PRIMARY KEY,
    filename         character varying NOT NULL,
    status           character varying NOT NULL,
    original_headers text[]
);
and the Go code I have is as follows:

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "os"
    "strings"

    "github.com/lib/pq"
)

type message struct {
    ID              string   `json:"id"`
    Filename        string   `json:"filename"`
    Status          string   `json:"status"`
    UpdatedAt       string   `json:"UpdatedAt"`
    OriginalHeaders []string `json:"OriginalHeaders"`
}

func main() {
    host := os.Getenv("PGhost")
    port := 5432
    user := os.Getenv("PGuser")
    password := os.Getenv("PGpassword")
    dbname := os.Getenv("PGdbname")
    pgConString := fmt.Sprintf("port=%d host=%s user=%s "+
        "password=%s dbname=%s sslmode=disable",
        port, host, user, password, dbname)

    msgBody := `update_headers___
    {
        "id": "76b67119-d8c1-4a20-b53e-49e4972e2f19",
        "filename": "SLM1171_inputData_preNCOA-5babc88b-1d14-468d-bf6e-c3b36ce90d95.csv",
        "status": "Submitted",
        "OriginalHeaders": [
            "city",
            "state",
            "zipcode",
            "full_name",
            "individual_id"
        ]
    }`
    fmt.Println("Processing file", msgBody)
    queryMethod := strings.Split(msgBody, "___")[0]
    fieldDict := strings.Split(msgBody, "___")[1]

    db, err := sql.Open("postgres", pgConString)
    if err != nil {
        panic(err)
    }
    fmt.Println("Connected Successfully")
    defer db.Close()

    body := message{}
    json.Unmarshal([]byte(fieldDict), &body)
    fmt.Println(queryMethod)
    fmt.Println(body)

    var sqlStatement string
    switch queryMethod {
    case "update_ncoa":
        sqlStatement = fmt.Sprintf(`UPDATE slm_files SET status = '%s', updated_at = '%s' where id = '%s';`,
            body.Status,
            body.UpdatedAt,
            body.ID,
        )
    case "update_headers":
        sqlStatement = fmt.Sprintf(`UPDATE slm_files SET original_headers = '%s', updated_at = '%s' where id = '%s';`,
            pq.Array(body.OriginalHeaders),
            body.UpdatedAt,
            body.ID,
        )
    }
    fmt.Println(sqlStatement)

    _, err = db.Query(sqlStatement)
    if err != nil {
        fmt.Println("Failed to run query", err)
        return
    }
    fmt.Println("Query executed!")
}
but I keep getting the error
pq: malformed array literal: "&[first_name last_name city state zipcode full_name individual_id]": Error
null
I have read a few things on the internet that led me to pq.Array(), but that doesn't seem to work.
I have read about the difference in format between Go slices and Postgres arrays, so I had hoped that the pq.Array function would sort it out, but apparently not.
As Peter advised, there's a lot to fix up with that database handling. And it's definitely worth redoing those SQL statements to not use Sprintf to make the query.
But in terms of just getting something working with postgres arrays and the pq library, you need to use the Value() method of pq.Array to get the postgres format. Change your update statement for the headers to something like this:
arrayVal, _ := pq.Array(body.OriginalHeaders).Value()
sqlStatement = fmt.Sprintf(`UPDATE slm_files SET original_headers = '%s', updated_at = '%s' where id = '%s';`,
    arrayVal,
    body.UpdatedAt,
    body.ID,
)
And it's worth checking the return from the Value() method to make sure there are no errors, I just ignored it for the sake of a simple example.
I have a table in Postgres with a jsonb column:
Create Table Business(
    id serial not null primary key,
    id_category integer not null,
    name varchar(50) not null,
    owner varchar(200) not null,
    coordinates jsonb not null,
    reason varchar(300) not null,
    foreign key(id_category) references Category(id)
);
as you can see I store the coordinates as jsonb, e.g.:

Insert into Business(id_category, name, owner, coordinates, reason)
values
(1, 'MyName', 'Owner', '[{"latitude": 12.1268142, "longitude": -86.2754}]', 'Description')
the way that I extract the data and assign it is like this.
type Business struct {
    ID          int           `json:"id,omitempty"`
    Name        string        `json:"name,omitempty"`
    Owner       string        `json:"owner,omitempty"`
    Category    string        `json:"category,omitempty"`
    Departments []string      `json:"departments,omitempty"`
    Location    []Coordinates `json:"location,omitempty"`
    Reason      string        `json:"reason,omitempty"`
}

type Coordinates struct {
    Latitude  float64 `json:"latitude,omitempty"`
    Longitude float64 `json:"longitude,omitempty"`
}

func (a Coordinates) Value() (driver.Value, error) {
    return json.Marshal(a)
}

func (a *Coordinates) Scan(value []interface{}) error {
    b, ok := value.([]byte)
    if !ok {
        return errors.New("type assertion to []byte failed")
    }
    return json.Unmarshal(b, &a)
}
However, I keep receiving this message.
sql: Scan error on column index 3, name "coordinates": unsupported
Scan, storing driver.Value type []uint8 into type *models.Coordinates
And the controller that I use to extract the information is this.
func (b *BusinessRepoImpl) Select() ([]models.Business, error) {
    business_list := make([]models.Business, 0)
    rows, err := b.Db.Query("SELECT business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category FROM business INNER JOIN category ON category.id = business.id_category GROUP BY business.id, business.name, business.owner, business.coordinates, business.reason_froggy, category.category")
    if err != nil {
        return business_list, err
    }
    defer rows.Close()
    for rows.Next() {
        business := models.Business{}
        err := rows.Scan(&business.ID, &business.Name, &business.Owner, &business.Location, &business.Reason, &business.Category)
        if err != nil {
            break
        }
        business_list = append(business_list, business)
    }
    err = rows.Err()
    if err != nil {
        return business_list, err
    }
    return business_list, nil
}
Can anyone please tell me how to solve this issue? Retrieve the json array of objects and assign it to the coordinates field inside Business.
1.
As you can see from the documentation the Scanner interface, to be satisfied, requires the method
Scan(src interface{}) error
But your *Coordinates type implements a different method
Scan(value []interface{}) error
The types interface{} and []interface{} are two very different things.
2.
The Scanner interface must be implemented on the type of the field which you want to pass as an argument to rows.Scan. That is, you've implemented your Scan method on *Coordinates but the type of the Location field is []Coordinates.
Again, same thing, the types *Coordinates and []Coordinates are two very different things.
So the solution is to implement the interface properly and on the proper type.
Note that since Go doesn't allow adding methods to unnamed types, and []Coordinates is an unnamed type, you need to declare a new type that you'll then use in place of []Coordinates.
type CoordinatesSlice []Coordinates

func (s *CoordinatesSlice) Scan(src interface{}) error {
    switch v := src.(type) {
    case []byte:
        return json.Unmarshal(v, s)
    case string:
        return json.Unmarshal([]byte(v), s)
    }
    return errors.New("type assertion failed")
}

// ...

type Business struct {
    // ...
    Location CoordinatesSlice `json:"location,omitempty"`
    // ...
}
NOTE: If the business location will always have only one pair of coordinates, store it in the db as a jsonb object (not an array), change the Location field's type from CoordinatesSlice to Coordinates, and accordingly move the Scanner implementation from *CoordinatesSlice to *Coordinates.
I know that this solution is really unoptimized, but it was the only way I got it to work: basically I scan the jsonb column into a plain string and then unmarshal that string into the Location attribute.

var location string
// in rows.Scan, pass &location in place of &business.Location
if err := json.Unmarshal([]byte(location), &business.Location); err != nil {
    panic(err)
}
I try to read and write and delete data from a Go application with the official mongodb driver for go (go.mongodb.org/mongo-driver).
Here is the struct I want to use:

type Contact struct {
    ID      xid.ID `json:"contact_id" bson:"contact_id"`
    SurName string `json:"surname" bson:"surname"`
    PreName string `json:"prename" bson:"prename"`
}

// xid is https://github.com/rs/xid

I omit the code to add to the collection as this is working fine.
I can get a list of contacts with a specific contact_id using the following code (abbreviated):
filter := bson.D{}
cur, err := contactCollection.Find(nil, filter)
for cur.Next(context.TODO()) {
    ...
}
This works and returns the documents. I thought about doing the same for delete or a matched get:
// delete - abbreviated
filter := bson.M{"contact_id": id}
result, err := contactCollection.DeleteMany(nil, filter)
// result.DeletedCount is always 0, err is nil
if err != nil {
    sendError(c, err) // helper function
    return
}
c.JSON(200, gin.H{
    "ok":      true,
    "message": fmt.Sprintf("deleted %d patients", result.DeletedCount),
}) // will be called; this is part of a webservice done with gin
// get - complete
func Get(c *gin.Context) {
    defer c.Done()
    id := c.Param("id")
    filter := bson.M{"contact_id": id}
    cur, err := contactCollection.Find(nil, filter)
    if err != nil {
        sendError(c, err) // helper function
        return
    } // no error
    contacts := make([]types.Contact, 0)
    for cur.Next(context.TODO()) { // nothing returned
        // create a value into which the single document can be decoded
        var elem types.Contact
        err := cur.Decode(&elem)
        if err != nil {
            sendError(c, err) // helper function
            return
        }
        contacts = append(contacts, elem)
    }
    c.JSON(200, contacts)
}
Why does the same filter does not work on delete?
Edit: Insert code looks like this:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
    ID:      "abcdefg",
    SurName: "Demo",
    PreName: "on stackoverflow",
})
Contact.ID is of type xid.ID, which is a byte array:
type ID [rawLen]byte
So the insert code you provided where you use a string literal to specify the value for the ID field would be a compile-time error:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
    ID:      "abcdefg",
    SurName: "Demo",
    PreName: "on stackoverflow",
})
Later in your comments you clarified that the above insert code was just an example, and not how you actually do it. In your real code you unmarshal the contact (or its ID field) from a request.
xid.ID has its own unmarshaling logic, which might interpret the input data differently, and might result in an ID representing a different string value than your input. ID.UnmarshalJSON() defines how the string ID will be converted to xid.ID:
func (id *ID) UnmarshalJSON(b []byte) error {
    s := string(b)
    if s == "null" {
        *id = nilID
        return nil
    }
    return id.UnmarshalText(b[1 : len(b)-1])
}
As you can see, the first and last bytes (the surrounding JSON quotes) are cut off, and ID.UnmarshalText() does even more "magic" on it (check the source if you're interested).
All-in-all, to avoid such "transformations" happen in the background without your knowledge, use a simple string type for your ID, and do necessary conversions yourself wherever you need to store / transmit your ID.
For the ID field, you should use the primitive.ObjectID type provided by the bson package:
"go.mongodb.org/mongo-driver/bson/primitive"
ID primitive.ObjectID `json:"_id" bson:"_id"`
In a PostgreSQL database I have 2 tables:
CREATE TABLE WIDGET_TYPE(
    WIDGET_TYPE_ID   SERIAL PRIMARY KEY NOT NULL,
    WIDGET_TYPE_NAME VARCHAR NOT NULL UNIQUE
);

CREATE TABLE QUESTION(
    QUESTION_ID    SERIAL PRIMARY KEY NOT NULL,
    QUESTION_TEXT  TEXT NOT NULL UNIQUE,
    WIDGET_TYPE_ID INT NOT NULL,
    FOREIGN KEY (WIDGET_TYPE_ID) REFERENCES WIDGET_TYPE (WIDGET_TYPE_ID)
);
As you can see, each question has only one widget type for its offered answers.
After that step I am trying to design the models in my Golang application. I use the GORM library for this task. I have a problem when trying to create a new entry in the question table. In the body of the POST request I send this JSON object:
{
    "question_text": "NEW QUESTION TEXT HERE",
    "widget_type_id": 2
}
ERROR:
pq: insert or update on table "question" violates foreign key constraint "question_widget_type_id_fkey"
models.go:
package models

type WidgetType struct {
    WidgetTypeID   int    `gorm:"primary_key" json:"widget_type_id"`
    WidgetTypeName string `gorm:"not null;unique" json:"widget_type_name"`
}

func (WidgetType) TableName() string {
    return "widget_type"
}

type Question struct {
    QuestionID   int        `gorm:"primary_key" json:"question_id"`
    QuestionText string     `gorm:"not null;unique" json:"question_text"`
    WidgetType   WidgetType `gorm:"foreignkey:WidgetTypeID"`
    WidgetTypeID uint
}

func (Question) TableName() string {
    return "question"
}
handlers.go:
var CreateQuestion = func(responseWriter http.ResponseWriter, request *http.Request) {
    question := models.Question{}
    decoder := json.NewDecoder(request.Body)
    if err := decoder.Decode(&question); err != nil {
        utils.ResponseWithError(responseWriter, http.StatusBadRequest, err.Error())
        return
    }
    defer request.Body.Close()
    if err := database.DBGORM.Save(&question).Error; err != nil {
        utils.ResponseWithError(responseWriter, http.StatusInternalServerError, err.Error())
        return
    }
    utils.ResponseWithSuccess(responseWriter, http.StatusCreated, "The new entry successfully created.")
}
Where did I make a mistake?
I added GORM's built-in logger. In the console it shows me the following SQL statement:
INSERT INTO "question" ("question_text","widget_type_id") VALUES ('NEW QUESTION TEXT HERE',0) RETURNING "question"."question_id"
As you can see, the widget_type_id value is 0. Why?