I have this code:
u := models.Users{}
u = u.FindByEmail(login.Email)
password := []byte(login.Password)
hashedPassword, err := bcrypt.GenerateFromPassword(password, bcrypt.DefaultCost)
if err != nil {
	panic(err)
}
err = bcrypt.CompareHashAndPassword(hashedPassword, []byte(u.Password))
fmt.Println(err)
I end up getting this error: crypto/bcrypt: hashedPassword is not the hash of the given password
However, I previously saved my model with the hash of "admin", but when I run my application, it tells me the passwords are not equal.
Re-read the docs carefully.
CompareHashAndPassword compares a bcrypt hashed password with its possible plaintext equivalent. Returns nil on success, or an error on failure.
Basically, it is saying that you should compare the hash you have stored against the plain-text password. Hashing the login password again, as the code above does, can never match: bcrypt generates a fresh random salt on every call to GenerateFromPassword, so hashing "admin" twice produces two different hashes.
You probably want:
u := models.Users{}
u = u.FindByEmail(login.Email)
plainPassword := []byte(login.Password)
// Assumes that u.Password is the actual hash and that you didn't store the plain-text password.
err := bcrypt.CompareHashAndPassword([]byte(u.Password), plainPassword)
fmt.Println(err)
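For completeness: the hash in u.Password should be produced exactly once, when the password is first set, and only the hash should be persisted. A minimal sketch of that registration-time step (newPassword stands in for whatever plaintext the user chose; the surrounding persistence code is assumed):
// Hash the plaintext once, at registration, and persist only the hash.
hash, err := bcrypt.GenerateFromPassword([]byte(newPassword), bcrypt.DefaultCost)
if err != nil {
	panic(err)
}
u.Password = string(hash) // store the bcrypt hash, never the plaintext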
I am using crypto in Go to take a password, encrypt it with a passphrase, and then store it as a string in a Postgres database. The encryption works fine, but when I try to add the result to my database I get an error that seems to indicate that converting the []byte to a string mangles the encrypted password.
func Encrypt(password string, passphrase string) string {
	data := []byte(password)
	block, _ := aes.NewCipher([]byte(createHash(passphrase)))
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err.Error())
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
		panic(err.Error())
	}
	ciphertext := gcm.Seal(nonce, nonce, data, nil)
	return string(ciphertext)
}

func createHash(key string) string {
	hasher := md5.New()
	hasher.Write([]byte(key))
	return hex.EncodeToString(hasher.Sum(nil))
}
When I run this code and try to add the row to the database, I get the error:
ERROR #22021 invalid byte sequence for encoding "UTF8"
I am just looking for an easy way in Go to encrypt a string password and save the encrypted string into a Postgres table. The column is of type VARCHAR, but I am willing to change that as well if it is somehow the issue. Thanks for your time!
Base64 encode it:
return base64.StdEncoding.EncodeToString(ciphertext)
Even though a Go string can hold arbitrary bytes, not every sequence of bytes is valid UTF-8, and Postgres only accepts valid UTF-8 in a VARCHAR column. Base64 encoding is commonly used to store binary data as text. Just remember to base64-decode the stored value before decrypting it; see the sketch below.
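For the read path you need the reverse: base64-decode the stored string, then decrypt. A minimal sketch, reusing the createHash key derivation from the question and assuming the nonce-prefixed layout that Encrypt produces:
func Decrypt(stored string, passphrase string) (string, error) {
	ciphertext, err := base64.StdEncoding.DecodeString(stored)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher([]byte(createHash(passphrase)))
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(ciphertext) < gcm.NonceSize() {
		return "", errors.New("ciphertext too short")
	}
	// Encrypt prepended the nonce to the ciphertext, so split it off first.
	nonce, data := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	plaintext, err := gcm.Open(nil, nonce, data, nil)
	if err != nil {
		return "", err
	}
	return string(plaintext), nil
}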
In my Golang project, which uses GORM as the ORM and Postgres as the database, in some situations when I begin a transaction that changes three tables and then commit, only one of the tables changes; the data in the other two tables does not change. Any idea how this might happen? You can see an example below:
// db is a *gorm.DB
tx := db.Begin()

invoice.Number = 1
if err := tx.Save(&invoice).Error; err != nil {
	tx.Rollback()
	return err
}

receipt.Ref = "1331"
if err := tx.Save(&receipt).Error; err != nil {
	tx.Rollback()
	return err
}

payment.Status = "succeed"
if err := tx.Save(&payment).Error; err != nil {
	tx.Rollback()
	return err
}

if err := tx.Commit().Error; err != nil {
	return err
}
Only the payment data changes, and I don't get any error.
Apparently you are mistakenly using savepoints! In PostgreSQL we can have nested transactions, in the sense that defining savepoints splits a transaction into parts. I am not a Golang programmer and my primary language is not Go, but my guess is that the problem is tx.Save, which creates a SavePoint rather than saving the data to the database. A SavePoint starts a new section of the transaction, and thus only the last table's change gets committed.
If you are familiar with Node.js, then you know any async function's callback receives an error as its first argument. Go follows a similar convention with explicit error return values, so make sure you check them.
https://medium.com/rungo/error-handling-in-go-f0125de052f0
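Whatever the root cause turns out to be, GORM also provides a Transaction helper that removes the manual Begin/Commit/Rollback bookkeeping: it commits when the callback returns nil and rolls back when it returns an error. A minimal sketch, assuming GORM v2 and the same three models as in the question:
err := db.Transaction(func(tx *gorm.DB) error {
	invoice.Number = 1
	if err := tx.Save(&invoice).Error; err != nil {
		return err // returning an error rolls the whole transaction back
	}
	receipt.Ref = "1331"
	if err := tx.Save(&receipt).Error; err != nil {
		return err
	}
	payment.Status = "succeed"
	if err := tx.Save(&payment).Error; err != nil {
		return err
	}
	return nil // returning nil commits
})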
I am slightly confused by the output I'm receiving from my Postgres database when querying it from Go.
Since I am very new to this, I have a hard time even forming the right question for this problem, so I'll just leave a code block here, with the output I'm receiving and what I expected to happen. I hope this makes it more understandable.
The connection to the Postgres DB seems to work fine:
rows, err := db.Query("SELECT title FROM blogs;")
fmt.Println("output", rows)
However, this is the output I am receiving.
output &{0xc4200ea180 0x4c0e20 0xc42009a3c0 0x4b4f90 <nil> {{0 0} 0 0 0 0} false <nil> []}
As I said, I am new to Postgres and Go, and I have no idea what I am dealing with here.
I was expecting my entire table to return in a somewhat readable format.
"I was expecting my entire table to return in a somewhat readable format."
It does not come back in a "readable" format; why would it?
Query returns a *sql.Rows value that you can use to iterate through the rows that matched the query.
Adapting the example in the docs to your case, and assuming your title field is a VARCHAR, something like this should work for you:
rows, err := db.Query("SELECT title FROM blogs;")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
	var title string
	if err := rows.Scan(&title); err != nil {
		log.Fatal(err)
	}
	fmt.Println(title)
}
if err := rows.Err(); err != nil {
	log.Fatal(err)
}
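If you select more than one column, Scan takes one destination per column, in order. A minimal sketch, assuming the blogs table also has an integer id column:
rows, err := db.Query("SELECT id, title FROM blogs;")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
	var (
		id    int
		title string
	)
	// One destination pointer per selected column, in SELECT order.
	if err := rows.Scan(&id, &title); err != nil {
		log.Fatal(err)
	}
	fmt.Println(id, title)
}
if err := rows.Err(); err != nil {
	log.Fatal(err)
}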
I am using GOB encoding for my project, and I figured out (after a long fight) that empty strings are not encoded/decoded correctly. In my code I use an error message (string) to report any problems; this error message is empty most of the time. If I encode an empty string, it becomes nothing, and this gives me a problem when decoding. I don't want to alter the encoding/decoding, because these parts are used the most.
How can I tell Go how to encode/decode empty strings?
Example:
Playground working code.
Playground not working code.
The problem isn't the encoding/gob module, but instead the custom MarshalBinary/UnmarshalBinary methods you've declared for Msg, which can't correctly round trip an empty string. There are two ways you could go here:
Get rid of the MarshalBinary/UnmarshalBinary methods and rely on GOB's default encoding for structures. This change alone won't be enough, because the fields of the structure aren't exported. If you're happy to export the fields, then this is the simplest option (a minimal sketch of this appears after the second option below): https://play.golang.org/p/rwzxTtaIh2
Use an encoding that can correctly round trip empty strings. One simple option would be to use GOB itself to encode the struct fields:
func (m Msg) MarshalBinary() ([]byte, error) {
	var b bytes.Buffer
	enc := gob.NewEncoder(&b)
	if err := enc.Encode(m.x); err != nil {
		return nil, err
	}
	if err := enc.Encode(m.y); err != nil {
		return nil, err
	}
	if err := enc.Encode(m.z); err != nil {
		return nil, err
	}
	return b.Bytes(), nil
}

// UnmarshalBinary modifies the receiver so it must take a pointer receiver.
func (m *Msg) UnmarshalBinary(data []byte) error {
	dec := gob.NewDecoder(bytes.NewBuffer(data))
	if err := dec.Decode(&m.x); err != nil {
		return err
	}
	if err := dec.Decode(&m.y); err != nil {
		return err
	}
	return dec.Decode(&m.z)
}
You can experiment with this example here: https://play.golang.org/p/oNXgt88FtK
The first option is obviously easier, but the second might be useful if your real example is a little more complex. Be careful with custom encoders though: GOB includes a few features that are intended to detect incompatibilities (e.g. if you add a field to a struct and try to decode old data), which are missing from this custom encoding.
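For reference, the first option boils down to exporting the struct fields so that GOB's default encoding can see them. A minimal sketch, assuming the Msg fields are strings:
type Msg struct {
	X, Y, Z string // exported (capitalized) so encoding/gob can encode them
}

var b bytes.Buffer
// Zero-value fields, including empty strings, round-trip fine this way.
if err := gob.NewEncoder(&b).Encode(Msg{}); err != nil {
	log.Fatal(err)
}
var out Msg
if err := gob.NewDecoder(&b).Decode(&out); err != nil {
	log.Fatal(err)
}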
noob Golang and Sinatra person here. I have hacked a Sinatra app to accept an uploaded file posted from an HTML form and save it to a hosted MongoDB database via GridFS. This seems to work fine. I am writing the same app in Golang using the mgo driver.
Functionally it works fine. However, in my Golang code I read the file into memory and then write it from memory to MongoDB using mgo. This appears much slower than my equivalent Sinatra app. I get the sense that the interaction between Rack and Sinatra does not perform this "middle" or "interim" step.
Here's a snippet of my Go code:
func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
	// Capture multipart form file information
	file, handler, err := req.FormFile("filename")
	if err != nil {
		fmt.Println(err)
	}
	// Read the file into memory
	data, err := ioutil.ReadAll(file)
	// ... check err value for nil
	// Specify the Mongodb database
	my_db := mongo_session.DB("... database name...")
	// Create the file in the Mongodb Gridfs instance
	my_file, err := my_db.GridFS("fs").Create(unique_filename)
	// ... check err value for nil
	// Write the file to the Mongodb Gridfs instance
	n, err := my_file.Write(data)
	// ... check err value for nil
	// Close the file
	err = my_file.Close()
	// ... check err value for nil
	// Write a log type message
	fmt.Printf("%d bytes written to the Mongodb instance\n", n)
	// ... other statements redirecting to rest of user flow...
}
Question:
Is this "interim" step needed (data, err := ioutil.ReadAll(file))?
If so, can I execute this step more efficiently?
Are there other accepted practices or approaches I should be considering?
Thanks...
No, you should not read the file entirely into memory at once, as that will break when the file is too large. The second example in the documentation for GridFS.Create avoids this problem:
file, err := db.GridFS("fs").Create("myfile.txt")
check(err)
messages, err := os.Open("/var/log/messages")
check(err)
defer messages.Close()
_, err = io.Copy(file, messages)
check(err)
err = file.Close()
check(err)
As for why it's slower than something else, hard to tell without diving into the details of the two approaches used.
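Applied to the handler in the question, that means piping the multipart file straight into GridFS with io.Copy and dropping the ioutil.ReadAll step. A minimal sketch, assuming the same mongo_session and unique_filename as above:
func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
	file, _, err := req.FormFile("filename")
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer file.Close()

	// Create the GridFS file and stream the upload straight into it.
	my_file, err := mongo_session.DB("... database name...").GridFS("fs").Create(unique_filename)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	n, err := io.Copy(my_file, file) // no in-memory copy of the whole file
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if err := my_file.Close(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Printf("%d bytes written to the Mongodb instance\n", n)
}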
Once you have the file from the multipart form, it can be saved into GridFS using the function below. I tested this against huge files as well (up to 570MB).
// ... code inside the handler func
for _, fileHeaders := range r.MultipartForm.File {
	for _, fileHeader := range fileHeaders {
		file, err := fileHeader.Open()
		if err != nil {
			//errorResponse(w, err, http.StatusInternalServerError)
			return
		}
		if gridFile, err := db.GridFS("fs").Create(fileHeader.Filename); err != nil {
			//errorResponse(w, err, http.StatusInternalServerError)
			return
		} else {
			gridFile.SetMeta(fileMetadata)
			gridFile.SetName(fileHeader.Filename)
			if err := writeToGridFile(file, gridFile); err != nil {
				//errorResponse(w, err, http.StatusInternalServerError)
				return
			}
		}
	}
}
func writeToGridFile(file multipart.File, gridFile *mgo.GridFile) error {
	reader := bufio.NewReader(file)
	defer func() { file.Close() }()
	// make a buffer to hold the chunks that are read
	buf := make([]byte, 1024)
	for {
		// read a chunk
		n, err := reader.Read(buf)
		if err != nil && err != io.EOF {
			return errors.New("could not read the input file")
		}
		if n == 0 {
			break
		}
		// write a chunk
		if _, err := gridFile.Write(buf[:n]); err != nil {
			return errors.New("could not write to GridFS for " + gridFile.Name())
		}
	}
	// Close flushes the remaining data and finalizes the GridFS document.
	return gridFile.Close()
}