I have a method that takes a list of userIDs and returns a list of users' details.
func ListUser(userIDs []interface{}) (users []User, err error) {
	// query on DB based on userID and return list of users
	return users, nil
}
Now I want to expose an API endpoint for it. So, I am trying to get the userIDs from my URL.
func ListUserProfile(w http.ResponseWriter, r *http.Request) {
	// I know I can get a single value using
	// r.URL.Query().Get("user-id")
	// but ListUser takes []interface{} as an argument.
	users, err := users.ListUser(userIDs)
}
Is there any way I can get the list of userIDs from my URL?
Parse the form and copy the []string in the form to an []interface{}. The Go FAQ explains why copy is required.
r.ParseForm()
userIDs := make([]interface{}, len(r.Form["user-id"]))
for i, s := range r.Form["user-id"] {
	userIDs[i] = s
}
users, err := users.ListUser(userIDs)
(The question and a comment disagree on the name of the query parameter. Adjust the code in the answer to match the actual name.)
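Putting it together, a minimal handler sketch, assuming the parameter is repeated in the URL (e.g. /users?user-id=1&user-id=2) and that users is the package containing ListUser; names and error responses are illustrative:

func ListUserProfile(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// r.Form["user-id"] holds every occurrence of the parameter as a []string.
	ids := r.Form["user-id"]
	userIDs := make([]interface{}, len(ids))
	for i, s := range ids {
		userIDs[i] = s
	}

	list, err := users.ListUser(userIDs)
	if err != nil {
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(list)
}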
Related
I am writing a web app in go and using the GORM for my ORM. I need to be able to retrieve all the metrics of a certain user and return it via JSON to be displayed on the front end. The query seems to run successfully but I only see a memory address when printing the results and receive an error when trying to cast the results the standard way.
Here is my current code
func DisplayData(w http.ResponseWriter, r *http.Request) {
	//Get the data from the database
	var metric models.Metric
	results := db.Where("user_id = ?", "1").Find(&metric)
	//Write a json response
	w.WriteHeader(http.StatusCreated)
	w.Header().Set("Content-Type", "application/json")
	resp := make(map[string]string)
	resp["message"] = results
	jsonResp, err := json.Marshal(resp)
	if err != nil {
		log.Fatalf("Error happened in JSON marshal. Err: %s", err)
	}
	w.Write(jsonResp)
	return
}
This results in the error
controllers/statsCont.go:116:18: cannot use results (type *gorm.DB) as type string in assignment
note: module requires Go 1.17
When I try to cast by surrounding result in string() it gives the following error.
controllers/statsCont.go:116:26: cannot convert results (type *gorm.DB) to type string
note: module requires Go 1.17
As stated by @BaytaDarell, the query result is stored in the variable passed to the Find method.
The return value of Find depends on what it is called on: when called on a *gorm.DB it returns (tx *DB), and when called through an association it returns an error.
To solve the issue, remove the lines below:
resp := make(map[string]string)
resp["message"] = results
and replace them with:
resp := map[string]interface{}{"message": metric}
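For context, a sketch of the whole handler with that change applied, assuming db is a *gorm.DB available to the package (error handling added; names follow the question):

func DisplayData(w http.ResponseWriter, r *http.Request) {
	// Get the data from the database.
	var metric models.Metric
	if err := db.Where("user_id = ?", "1").Find(&metric).Error; err != nil {
		http.Error(w, "database error", http.StatusInternalServerError)
		return
	}

	// Write a JSON response; headers must be set before the body is written.
	w.Header().Set("Content-Type", "application/json")
	resp := map[string]interface{}{"message": metric}
	if err := json.NewEncoder(w).Encode(resp); err != nil {
		log.Printf("error encoding JSON: %s", err)
	}
}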
I've made a POST request where I'm sending data as JSON, and this code creates a new row in the DB.
json.NewDecoder(r.Body).Decode(&user)
DB.Create(&user)
json.NewEncoder(w).Encode(user)
But parsing the form data shows this error
I figured this is how I would read every individual value
for key, value := range r.PostForm {
	fmt.Printf("Key:%s, Value:%s\n", key, value)
}
My model looks like this
type User struct {
	gorm.Model
	FirstName string `json:"firstname"`
	LastName  string `json:"lastname"`
	Email     string `json:"email"`
}
How would I convert this to user and insert to DB?
Another way to do it (a bit overengineered; using r.FormValue directly is better) that avoids typing each field again, such as r.FormValue("firstname"), etc., if you already know the fields and their types.
func handleUser(w http.ResponseWriter, r *http.Request) {
	user := User{}
	userType := reflect.TypeOf(user)
	for i := 0; i < userType.NumField(); i++ {
		// Get the JSON tag of the field and use it for r.FormValue.
		f := userType.Field(i).Tag.Get("json")
		if f != "" {
			reflect.ValueOf(&user).Elem().FieldByName(
				userType.Field(i).Name).SetString(r.FormValue(f))
		}
	}
	// DB.Create(&user) needs a reference to DB here.
	json.NewEncoder(w).Encode(user)
}
Note: this is just an example that will work for string fields; it may panic otherwise because it calls SetString. Additionally, a field with that name may not exist, in which case FieldByName returns an invalid value and setting it panics. But I think your question was geared toward not having to type each field, so I'm leaving it as an example.
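If you did want to keep the reflective version, a hedged sketch of the extra guards (checking that the field exists, is settable, and is a string before calling SetString) could replace the direct call inside the loop:

// Inside the loop over the struct fields, instead of calling SetString directly:
fv := reflect.ValueOf(&user).Elem().FieldByName(userType.Field(i).Name)
if fv.IsValid() && fv.CanSet() && fv.Kind() == reflect.String {
	fv.SetString(r.FormValue(f))
}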
The right way to do it, as Cerise Limón pointed out, would be
func handleUser(w http.ResponseWriter, r *http.Request) {
	user := User{}
	user.FirstName = r.FormValue("firstname")
	user.LastName = r.FormValue("lastname")
	user.Email = r.FormValue("email")
	// DB.Create(&user) needs a reference to DB here.
	json.NewEncoder(w).Encode(user)
}
Note that you can also add a method for populating the user with the form if you do this in several places, for example
func (u *User) setFormData(r *http.Request) {
	u.FirstName = r.FormValue("firstname")
	u.LastName = r.FormValue("lastname")
	u.Email = r.FormValue("email")
}
And then use it like this
func handleUser(w http.ResponseWriter, r *http.Request) {
	user := User{}
	user.setFormData(r)
	// DB.Create(&user)
	json.NewEncoder(w).Encode(user)
}
I am looking at this example.
I would never have come up with a solution like this; I would go for bson.Raw.
type Movie struct {
	ID        bson.ObjectId `json:"id" bson:"_id,omitempty"`
	Name      string        `json:"name" bson:"name"`
	Year      string        `json:"year" bson:"year"`
	Directors []string      `json:"directors" bson:"directors"`
	Writers   []string      `json:"writers" bson:"writers"`
	BoxOffice BoxOffice     `json:"boxOffice" bson:"boxOffice"`
}
The GetMovie function reads data from MongoDB and returns JSON:
func (db *DB) GetMovie(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	w.WriteHeader(http.StatusOK)
	var movie Movie
	err := db.collection.Find(bson.M{"_id": bson.ObjectIdHex(vars["id"])}).One(&movie)
	if err != nil {
		w.Write([]byte(err.Error()))
	} else {
		w.Header().Set("Content-Type", "application/json")
		response, _ := json.Marshal(movie)
		w.Write(response)
	}
}
I do not understand how the generic map bson.M was created. Why did the author use bson.ObjectIdHex(vars["id"])?
bson.M is a map under the hood:
type M map[string]interface{}
And this:
bson.M{"_id": bson.ObjectIdHex(vars["id"])}
Is a composite literal creating a value of type bson.M. It has a single pair whose key is "_id" and whose value is the bson.ObjectId returned by the function bson.ObjectIdHex().
The document ID to look up and return is most likely coming as a hexadecimal string in vars["id"], and bson.ObjectIdHex() converts (parses) this into an ObjectId.
Tip: to query a document by ID, it's easier to use Collection.FindId, e.g.:
err := db.collection.FindId(bson.ObjectIdHex(vars["id"])).One(&movie)
Also to avoid a runtime panic in case an invalid ID is stored in vars["id"], you could use bson.IsObjectIdHex() to check it first. For details, see Prevent runtime panic in bson.ObjectIdHex.
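For example, a guarded lookup could look like this (a sketch; the error responses are illustrative):

id := vars["id"]
if !bson.IsObjectIdHex(id) {
	http.Error(w, "invalid id", http.StatusBadRequest)
	return
}

var movie Movie
if err := db.collection.FindId(bson.ObjectIdHex(id)).One(&movie); err != nil {
	http.Error(w, err.Error(), http.StatusNotFound)
	return
}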
Also, marshaling the result into a byte slice and then writing it to the response is inefficient; the response could be streamed to the output using json.Encoder. For details, see Output json to http.ResponseWriter with template.
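A sketch of the streaming variant, encoding the movie directly to the ResponseWriter:

w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(movie); err != nil {
	// The response may already be partially written at this point, so just log it.
	log.Println("encoding response:", err)
}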
Now I'm doing:
sess := mongodb.DB("mybase").C("mycollection")
var users []struct {
	Username string `bson:"username"`
}
err = sess.Find(nil).Select(bson.M{"username": 1, "_id": 0}).All(&users)
if err != nil {
	fmt.Println(err)
}
var myUsers []string
for _, user := range users {
	myUsers = append(myUsers, user.Username)
}
Is there a more effective way to get a slice of usernames from Find (or another search function) directly, without the struct and the range loop?
The result of a MongoDB find() is always a list of documents. So if you want a list of values, you have to convert it manually just as you did.
Using a custom type (derived from string)
Also note that if you create your own type (derived from string), you can override its unmarshaling logic and "extract" just the username from the document.
This is how it could look:
type Username string

func (u *Username) SetBSON(raw bson.Raw) (err error) {
	doc := bson.M{}
	if err = raw.Unmarshal(&doc); err != nil {
		return
	}
	*u = Username(doc["username"].(string))
	return
}
And then querying the usernames into a slice:
c := mongodb.DB("mybase").C("mycollection") // Obtain collection
var uns []Username
err = c.Find(nil).Select(bson.M{"username": 1, "_id": 0}).All(&uns)
if err != nil {
	fmt.Println(err)
}
fmt.Println(uns)
Note that []Username is not the same as []string, so this may or may not be sufficient for you. Should you need the user name as a string value instead of Username when processing the result, you can simply convert a Username to string, as in the snippet below.
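For example, converting the result to a plain []string is a short loop:

// Convert the []Username result into a []string.
names := make([]string, len(uns))
for i, u := range uns {
	names[i] = string(u)
}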
Using Query.Iter()
Another way to avoid the slice copying would be to call Query.Iter(), iterate over the results, and extract and store the username manually, similarly to how the above custom unmarshaling logic does.
This is how it could look:
var uns []string
it := c.Find(nil).Select(bson.M{"username": 1, "_id": 0}).Iter()
defer it.Close()
for doc := (bson.M{}); it.Next(&doc); {
	uns = append(uns, doc["username"].(string))
}
if err := it.Err(); err != nil {
	fmt.Println(err)
}
fmt.Println(uns)
I don't see what could be more effective than a simple range loop with appends. Without all the Mongo stuff, your code is basically this, and that's exactly how I would do it.
package main

import (
	"fmt"
)

type User struct {
	Username string
}

func main() {
	var users []User
	users = append(users, User{"John"}, User{"Jane"}, User{"Jim"}, User{"Jean"})
	fmt.Println(users)

	// Interesting part starts here.
	var myUsers []string
	for _, user := range users {
		myUsers = append(myUsers, user.Username)
	}
	// Interesting part ends here.

	fmt.Println(myUsers)
}
https://play.golang.com/p/qCwENmemn-R
I'm just getting started with golang and I'm attempting to read several rows from a Postgres users table and store the result as an array of User structs that model the row.
type User struct {
	Id    int
	Title string
}

func Find_users(db *sql.DB) {
	// Query the DB
	rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
	if err != nil { log.Fatal(err) }

	// Initialize array slice of all users. What size do I use here?
	// I don't know the number of results beforehand
	var users = make([]User, ????)

	// Loop through each result record, creating a User struct for each row
	defer rows.Close()
	for i := 0; rows.Next(); i++ {
		err := rows.Scan(&id, &title)
		if err != nil { log.Fatal(err) }
		log.Println(id, title)
		users[i] = User{Id: id, Title: title}
	}

	// .... do stuff
}
As you can see, my problem is that I want to initialize an array or slice beforehand to store all the DB records, but I don't know ahead of time how many records there are going to be.
I was weighing a few different approaches and wanted to find out which of the following is most commonly used in the Go community:
Create a really large array beforehand (e.g. 10,000 elements). Seems wasteful
Count the rows explicitly beforehand. This could work, but I need to run 2 queries - one to count and one to get the results. If my query is complex (not shown here), that's duplicating that logic in 2 places. Alternatively I can run the same query twice, but first loop through it and count the rows. All this would work, but it just seems unclean.
I've seen examples of expanding slices. I don't quite understand slices well enough to see how it could be adapted here. Also if I'm constantly expanding a slice 10k times, it definitely seems wasteful.
Go has a built-in append function for exactly this purpose. It takes a slice and one or more elements and appends those elements to the slice, returning the new slice. Additionally, the zero value of a slice (nil) is a slice of length zero, so if you append to a nil slice, it will work. Thus, you can do:
type User struct {
	Id    int
	Title string
}

func Find_users(db *sql.DB) {
	// Query the DB
	rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		// Scan each row into a fresh User and append it to the slice.
		var u User
		if err := rows.Scan(&u.Id, &u.Title); err != nil {
			log.Fatal(err)
		}
		log.Println(u.Id, u.Title)
		users = append(users, u)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
	// ...
}
Use append on a slice:
type DeviceInfo struct {
	DeviceName     string
	DeviceID       string
	DeviceUsername string
	Token          string
}

func QueryMultiple(db *sql.DB) {
	sqlStatement := `SELECT "deviceName", "deviceID", "deviceUsername",
		token FROM devices LIMIT 10`
	rows, err := db.Query(sqlStatement)
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	var deviceSlice []DeviceInfo
	for rows.Next() {
		var device DeviceInfo
		// Scan in the same order as the columns appear in the SELECT.
		if err := rows.Scan(&device.DeviceName, &device.DeviceID,
			&device.DeviceUsername, &device.Token); err != nil {
			panic(err)
		}
		deviceSlice = append(deviceSlice, device)
	}
	fmt.Println(deviceSlice)
}
I think that what you are looking for is the capacity.
The following allocates a slice whose backing array can hold 10,000 items:
users := make([]User, 0, 10_000)
but the slice is, itself, still empty (len(users) == 0).
Now you can append at least 10,000 items before the backing array needs to grow. For that purpose, append() works as expected:
users = append(users, User{...})
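A quick illustration of the len/cap distinction (assuming the User type from the question):

users := make([]User, 0, 10_000)
fmt.Println(len(users), cap(users)) // 0 10000

users = append(users, User{Id: 1, Title: "first"})
fmt.Println(len(users), cap(users)) // 1 10000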
Maps are grown by doubling their size, starting at 1, so the size remains a power of two. I'm not sure whether slices are grown the same way (in powers of two); if so, the allocated size above would be:
math.Pow(2, math.Ceil(math.Log(10_000)/math.Log(2)))
which is 2^14, i.e. 16,384.
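Rather than guessing the growth factor (it is an implementation detail of the runtime), you can observe it with a small sketch that prints the capacity whenever it changes:

s := make([]int, 0)
prevCap := cap(s)
for i := 0; i < 20_000; i++ {
	s = append(s, i)
	if c := cap(s); c != prevCap {
		fmt.Println("len", len(s), "-> cap", c)
		prevCap = c
	}
}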
Note: if your query uses a compatible INDEX, i.e. the WHERE clause matches the INDEX one to one, then an extra SELECT COUNT(*) ... is cheap, since the number of matching rows can be determined from the index without scanning all the rows of your tables.
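If you do go with the count-first approach (option 2 in the question), a sketch might look like this; the table and query are illustrative:

var n int
if err := db.QueryRow(`SELECT COUNT(*) FROM users u`).Scan(&n); err != nil {
	log.Fatal(err)
}
// Pre-size the slice so appends inside the scan loop never reallocate.
users := make([]User, 0, n)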