I am new to the Go language, and I want to create a REST API web server for file uploading...
I am stuck on the main part: uploading a file via a POST request to my server...
I have this line for calling the upload function:
router.POST("/upload", UploadFile)
and this is my upload function:
func UploadFile(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
    io.WriteString(w, "Upload files\n")
    postFile(r.Form.Get("file"), "/uploads")
}
func postFile(filename string, targetUrl string) error {
    bodyBuf := &bytes.Buffer{}
    bodyWriter := multipart.NewWriter(bodyBuf)
    // this step is very important
    fileWriter, err := bodyWriter.CreateFormFile("file", filename)
    if err != nil {
        fmt.Println("error writing to buffer")
        return err
    }
    // open file handle
    fh, err := os.Open(filename)
    if err != nil {
        fmt.Println("error opening file")
        return err
    }
    // iocopy
    _, err = io.Copy(fileWriter, fh)
    if err != nil {
        panic(err)
    }
    bodyWriter.FormDataContentType()
    bodyWriter.Close()
    return err
}
but I can't see any uploaded files in my /upload/ directory...
So what am I doing wrong?
P.S. I am getting the second error => "error opening file", so I think something is wrong with the file upload, or with getting the file in the UploadFile function; am I right? If yes, then how can I transfer or get the file from that function to the postFile function?
The multipart.Writer generates multipart messages; it is not something you want to use for receiving a file from a client and saving it to disk.
Assuming you're uploading the file from a client, e.g. a browser, with Content-Type: multipart/form-data, you should use r.FormFile instead of r.Form.Get. FormFile returns a multipart.File value containing the content of the file the client sent, which you can then write to disk with io.Copy or the like.
os.Open opens an existing file for reading; since the file doesn't exist on the server, you get an error.
Use os.Create instead; it creates a new file and opens it. (ref: https://golang.org/pkg/os/#Open)
func Open
func Open(name string) (*File, error)
Open opens the named file for reading. If successful, methods on the returned file can be used for reading; the associated file descriptor has mode O_RDONLY. If there is an error, it will be of type *PathError.

func Create
func Create(name string) (*File, error)
Create creates the named file with mode 0666 (before umask), truncating it if it already exists. If successful, methods on the returned File can be used for I/O; the associated file descriptor has mode O_RDWR. If there is an error, it will be of type *PathError.
EDIT
I made a new handler as an example. It also uses OpenFile, as mentioned in: GoLang send file via POST request
func Upload(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "Upload files\n")
    file, handler, err := r.FormFile("file")
    if err != nil {
        panic(err) // don't do this
    }
    defer file.Close()

    // copy example
    f, err := os.OpenFile(handler.Filename, os.O_WRONLY|os.O_CREATE, 0666)
    if err != nil {
        panic(err) // please don't
    }
    defer f.Close()
    io.Copy(f, file)
}
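For completeness, multipart.Writer belongs on the other side of the wire: it is what a Go client would use to build the multipart/form-data request that the handler above consumes. A minimal sketch, assuming a local file name, the form field "file", and the URL http://localhost:8080/upload (all of which are illustrative):

package main

import (
    "bytes"
    "io"
    "log"
    "mime/multipart"
    "net/http"
    "os"
)

func postFile(filename, targetURL string) error {
    body := &bytes.Buffer{}
    writer := multipart.NewWriter(body)

    // Add the file as a multipart form field named "file".
    part, err := writer.CreateFormFile("file", filename)
    if err != nil {
        return err
    }
    f, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer f.Close()
    if _, err := io.Copy(part, f); err != nil {
        return err
    }
    // Closing the writer writes the trailing multipart boundary.
    if err := writer.Close(); err != nil {
        return err
    }

    resp, err := http.Post(targetURL, writer.FormDataContentType(), body)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return nil
}

func main() {
    if err := postFile("local.txt", "http://localhost:8080/upload"); err != nil {
        log.Fatal(err)
    }
}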
Related
I am creating an HTTP REST server in Go using gin-gonic.
My code is:
func main() {
    var port string
    if len(os.Args) == 1 {
        port = "8080"
    } else {
        port = os.Args[1]
    }
    router := gin.Default()
    router.GET("/:a/*b", func(c *gin.Context) {
        // My custom code to get "download" reader from some third-party cloud storage.

        // Create the file at the server side
        out, err := os.Create(b)
        if err != nil {
            c.String(http.StatusInternalServerError, "Error in file creation at server side\n")
            return
        }
        c.String(http.StatusOK, "File created at server side\n")
        _, err = io.Copy(out, download)
        if err != nil {
            c.String(http.StatusInternalServerError, "Some error occurred while downloading the object\n")
            return
        }
        // Close the file at the server side
        err = out.Close()
        if err != nil {
            c.String(http.StatusInternalServerError, "Some error occurred while closing the file at server side\n")
        }
        // Download the file from server side at client side
        c.String(http.StatusOK, "Downloading the file at client side\n")
        c.FileAttachment(objectPath, objectPath)
        c.String(http.StatusOK, "\nFile downloaded at the client side successfully\n")
        c.String(http.StatusOK, "Object downloaded successfully\n")
    })

    // Listen and serve
    router.Run(":" + port)
}
When I run the curl command at the client-side command prompt, the file gets downloaded onto my REST server, but it is not downloaded on my client side. However, the gin-gonic godoc says:
func (*Context) File
func (c *Context) File(filepath string)
File writes the specified file into the body stream in an efficient way.
func (*Context) FileAttachment
func (c *Context) FileAttachment(filepath, filename string)
FileAttachment writes the specified file into the body stream in an efficient way. On the client side, the file will typically be downloaded with the given filename.
func (*Context) FileFromFS
func (c *Context) FileFromFS(filepath string, fs http.FileSystem)
FileFromFS writes the specified file from http.FileSystem into the body stream in an efficient way.
But on closer observation of my command prompt output, it printed the content of the txt file, which I need to save on my client side instead.
So, I would like to stream that download from that 3rd party storage to the client side command prompt or browser, via my custom REST API server.
Am I missing something here?
Thanks
UPDATE: I tried writing the Content-Disposition & Content-Type headers to the response as follows:
package main

import (
    "context"
    "fmt"
    "net/http"
    "os"

    "github.com/gin-gonic/gin"
)

func main() {
    var port string
    if len(os.Args) == 1 {
        port = "8080"
    } else {
        port = os.Args[1]
    }
    router := gin.Default()
    router.GET("/:a/*b", func(c *gin.Context) {
        param_a := c.Param("a")
        param_b := c.Param("b")
        reqToken := c.GetHeader("my_custom_key")
        // My custom code to get "download" reader from some third-party cloud storage.

        c.String(http.StatusOK, "Downloading the object\n")
        c.Writer.Header().Add("Content-Disposition", fmt.Sprintf("attachment; filename=%s", param_b))
        c.Writer.Header().Add("Content-Type", c.GetHeader("Content-Type"))
        c.File(param_b)
        c.String(http.StatusOK, "Object downloaded successfully\n")
        err = download.Close()
        if err != nil {
            c.String(http.StatusInternalServerError, "Error in closing the download of the object\n")
            return
        }
        c.String(http.StatusOK, "Object downloaded completed & closed successfully\n")
    })

    // Listen and serve
    router.Run(":" + port)
}
Now it displays an error, but also the success messages, as follows; the file is still not downloaded at the client side:
404 page not found
Object downloaded successfully
Object downloaded completed & closed successfully
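For reference, a minimal sketch of streaming a reader straight to the client with gin's DataFromReader, with nothing written to the body before the file (mixing c.String output with the file content corrupts the download). Here a strings.Reader stands in for the third-party download stream, and the route and content type are assumptions:

package main

import (
    "fmt"
    "net/http"
    "strings"

    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()
    router.GET("/:a/*b", func(c *gin.Context) {
        name := c.Param("b")
        // Stand-in for the reader returned by the third-party storage SDK.
        download := strings.NewReader("file contents would come from the cloud storage")
        size := int64(download.Len())

        extraHeaders := map[string]string{
            "Content-Disposition": fmt.Sprintf(`attachment; filename="%s"`, name),
        }
        // DataFromReader copies the reader into the response body; the body
        // contains only the file, so the client saves exactly that content.
        c.DataFromReader(http.StatusOK, size, "application/octet-stream", download, extraHeaders)
    })
    router.Run(":8080")
}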
I am working on a Go project where I need to serve files stored in MongoDB. The files are stored in GridFS. I use gopkg.in/mgo.v2 as the package to connect to and query the DB.
I can retrieve the file from the DB; that is not hard.
f, err := s.files.OpenId(id)
But how can I serve that file over HTTP?
I work with the JulienSchmidt router to handle all the other RESTful requests.
The solutions I find always use static files, not files from a DB.
Thanks in advance
Tip: Recommended to use github.com/globalsign/mgo instead of gopkg.in/mgo.v2 (the latter is not maintained anymore).
The mgo.GridFile type implements io.Reader, so you could use io.Copy() to copy its content into the http.ResponseWriter.
But since mgo.GridFile also implements io.Seeker, you may take advantage of http.ServeContent(). Quoting its doc:
The main benefit of ServeContent over io.Copy is that it handles Range requests properly, sets the MIME type, and handles If-Match, If-Unmodified-Since, If-None-Match, If-Modified-Since, and If-Range requests.
Example handler serving a file:
func serveFromDB(w http.ResponseWriter, r *http.Request) {
    var gridfs *mgo.GridFS // Obtain GridFS via Database.GridFS(prefix)

    name := "somefile.pdf"
    f, err := gridfs.Open(name)
    if err != nil {
        log.Printf("Failed to open %s: %v", name, err)
        http.Error(w, "something went wrong", http.StatusInternalServerError)
        return
    }
    defer f.Close()

    http.ServeContent(w, r, name, time.Now(), f) // Use proper last mod time
}
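Since the question mentions the JulienSchmidt router, the handler above could be wired in with a thin adapter, e.g. (a sketch; it assumes import "github.com/julienschmidt/httprouter", and the route, port, and use of the name parameter are illustrative):

router := httprouter.New()
router.GET("/files/:name", func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
    // ps.ByName("name") could replace the hard-coded file name inside serveFromDB.
    serveFromDB(w, r)
})
log.Fatal(http.ListenAndServe(":8080", router))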
It's old, but I have another solution using the official Go MongoDB driver, by importing
"go.mongodb.org/mongo-driver/mongo/gridfs"
var bucket *gridfs.Bucket // creates a bucket

dbConnection, err := db.GetDBCollection() // connect to the DB with your own helper
if err != nil {
    log.Fatal(err)
}
bucket, err = gridfs.NewBucket(dbConnection)
if err != nil {
    log.Fatal(err)
}

name := "br100_update.txt"
downloadStream, err := bucket.OpenDownloadStreamByName(name)
if err != nil {
    log.Printf("Failed to open %s: %v", name, err)
    http.Error(w, "something went wrong", http.StatusInternalServerError)
    return
}
defer func() {
    if err := downloadStream.Close(); err != nil {
        log.Fatal(err)
    }
}()

// Use SetReadDeadline to force a timeout if the download does not succeed in
// 2 seconds.
if err = downloadStream.SetReadDeadline(time.Now().Add(2 * time.Second)); err != nil {
    log.Fatal(err)
}

// the code below is used to read the file
fileBuffer := bytes.NewBuffer(nil)
if _, err := io.Copy(fileBuffer, downloadStream); err != nil {
    log.Fatal(err)
}
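If this code runs inside an http.HandlerFunc, the intermediate bytes.Buffer is arguably unnecessary: the download stream can be copied straight to the ResponseWriter. A short sketch, assuming w is the handler's http.ResponseWriter:

w.Header().Set("Content-Disposition", "attachment; filename="+name)
// Stream the GridFS file directly to the client instead of buffering it first.
if _, err := io.Copy(w, downloadStream); err != nil {
    log.Printf("streaming %s failed: %v", name, err)
}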
I am trying to write a REST API which has basic file upload and download. I am able to do the upload part just fine, but I am having a hard time downloading a file from GridFS. Any suggestions?
UPDATE: I think I figured out how to do it. I am curious if anyone has any other suggestions:
Here is how it looks for me right now:
func DownloadRecord(w http.ResponseWriter, filename string) error {
    if !fileExists(filename) {
        return errors.New("File doesn't exist. Nothing to download")
    }
    session := sqlconnecter.GetMongoDBConnection()
    fileDb := session.DB("mydatabase")
    file, err := fileDb.GridFS("fs").Open(filename)
    if err != nil {
        return err
    }
    defer file.Close() // only defer the Close after the error check

    fileHeader := make([]byte, 512)
    file.Read(fileHeader)
    fileContentType := http.DetectContentType(fileHeader)
    fileSize := file.Size()

    w.Header().Set("Content-Disposition", "attachment; filename="+filename)
    w.Header().Set("Content-Type", fileContentType)
    w.Header().Set("Content-Length", strconv.FormatInt(fileSize, 10))

    file.Seek(0, 0)
    io.Copy(w, file)
    return err
}
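Since mgo's GridFile implements io.ReadSeeker, another option (as in the earlier answer) is to let http.ServeContent do the content-type detection, seeking, and Range handling. A hedged sketch reusing the question's own names; the extra *http.Request parameter is an addition for ServeContent's sake:

func DownloadRecord(w http.ResponseWriter, r *http.Request, filename string) error {
    session := sqlconnecter.GetMongoDBConnection()
    file, err := session.DB("mydatabase").GridFS("fs").Open(filename)
    if err != nil {
        return err
    }
    defer file.Close()
    // ServeContent sniffs the Content-Type, seeks as needed, and handles Range requests.
    http.ServeContent(w, r, filename, file.UploadDate(), file)
    return nil
}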
I need to implement a web service in Go that processes tar.gz files, and I wonder what the correct way is: what content type I need to define, etc.
Plus, I found that a lot of things are handled automatically: on the client side I just post a gzip reader as the request body and the Accept-Encoding: gzip header is added automatically, and on the server side I do not need to gunzip the request body, it is already extracted to a tar. Does that make sense?
Can I rely on it being like this with any client?
Server:
func main() {
router := mux.NewRouter().StrictSlash(true)
router.Handle("/results", dataupload.NewUploadHandler()).Methods("POST")
log.Fatal(http.ListenAndServe(*address, router))
}
Uploader:
package dataupload
import (
"errors"
log "github.com/Sirupsen/logrus"
"io"
"net/http"
)
// UploadHandler responds to /results http request, which is the result-service rest API for uploading results
type UploadHandler struct {
uploader Uploader
}
// NewUploadHandler creates UploadHandler instance
func NewUploadHandler() *UploadHandler {
return &UploadHandler{
uploader: TarUploader{},
}
}
func (uh UploadHandler) ServeHTTP(writer http.ResponseWriter, request *http.Request) {
retStatus := http.StatusOK
body, err := getBody(request)
if err != nil {
retStatus = http.StatusBadRequest
log.Error("Error fetching request body. ", err)
} else {
_, err := uh.uploader.Upload(body)
}
writer.WriteHeader(retStatus)
}
func getBody(request *http.Request) (io.ReadCloser, error) {
requestBody := request.Body
if requestBody == nil {
return nil, errors.New("Empty request body")
}
var err error
// this part is commented out since somehow the body is already gunzipped - no need to extract it.
/*if strings.Contains(request.Header.Get("Accept-Encoding"), "gzip") {
requestBody, err = gzip.NewReader(requestBody)
}*/
return requestBody, err
}
Client
func main() {
f, err := os.Open("test.tar.gz")
if err != nil {
log.Fatalf("error openning file %s", err)
}
defer f.Close()
client := new(http.Client)
reader, err := gzip.NewReader(f)
if err != nil {
log.Fatalf("error gzip file %s", err)
}
request, err := http.NewRequest("POST", "http://localhost:8080/results", reader)
_, err = client.Do(request)
if err != nil {
log.Fatalf("error uploading file %s", err)
}
}
The code you've written for the client is sending the already-decompressed tar file, because of this code:
reader, err := gzip.NewReader(f)
...
request, err := http.NewRequest("POST", "http://localhost:8080/results", reader)
If you sent the .tar.gz file content directly, then you would need to gunzip it on the server. E.g.:
request, err := http.NewRequest(..., f)
I think that's closer to the behavior you should expect third-party clients to exhibit.
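For reference, a minimal sketch of what that server side could look like if the client posts the raw .tar.gz bytes; the handler name and error handling here are illustrative, not part of the original code:

import (
    "archive/tar"
    "compress/gzip"
    "io"
    "log"
    "net/http"
)

func handleResults(w http.ResponseWriter, r *http.Request) {
    // Gunzip the request body ourselves, since nothing does it for us.
    gz, err := gzip.NewReader(r.Body)
    if err != nil {
        http.Error(w, "body is not valid gzip", http.StatusBadRequest)
        return
    }
    defer gz.Close()

    // Walk the tar entries inside the decompressed stream.
    tr := tar.NewReader(gz)
    for {
        hdr, err := tr.Next()
        if err == io.EOF {
            break // end of archive
        }
        if err != nil {
            http.Error(w, "invalid tar archive", http.StatusBadRequest)
            return
        }
        log.Printf("received %s (%d bytes)", hdr.Name, hdr.Size)
        // read the entry's content from tr here
    }
    w.WriteHeader(http.StatusOK)
}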
Clearly not, but maybe...
Go provides very good support for the HTTP client (and server). It is one of the first languages to support HTTP/2, and the design of the API clearly shows the concern for fast HTTP.
This is why they add Accept-Encoding: gzip automatically. That dramatically reduces the size of the server response and thus optimizes the transfer.
But gzip remains optional in HTTP/1, and not all clients will send this header to your server.
Note that Content-Type describes the type of data you are sending (here a tar.gz, but it could be application/json, text/javascript, ...), whereas Accept-Encoding describes the way the data has been encoded for transport.
Go will take care of transparently handling the Accept-Encoding for you because it is responsible for transporting the data. It is then up to you to handle the Content-Type, because only you know how to make sense of the content you received.
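For example, the client from the question, posting the raw archive, could declare the payload type explicitly (the media type chosen here is an assumption; application/gzip is a common choice):

request, err := http.NewRequest("POST", "http://localhost:8080/results", f)
if err != nil {
    log.Fatalf("error building request %s", err)
}
// Content-Type says what the body is; Content-Encoding / Accept-Encoding only
// describe how it is (or may be) compressed in transit.
request.Header.Set("Content-Type", "application/gzip")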
Noob Golang and Sinatra person here. I have hacked a Sinatra app to accept an uploaded file posted from an HTML form and save it to a hosted MongoDB database via GridFS. This seems to work fine. I am writing the same app in Golang using the mgo driver.
Functionally it works fine. However, in my Golang code I read the file into memory and then write the file from memory to MongoDB using mgo. This appears much slower than my equivalent Sinatra app. I get the sense that the interaction between Rack and Sinatra does not execute this "middle" or "interim" step.
Here's a snippet of my Go code:
func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
// Capture multipart form file information
file, handler, err := req.FormFile("filename")
if err != nil {
fmt.Println(err)
}
// Read the file into memory
data, err := ioutil.ReadAll(file)
// ... check err value for nil
// Specify the Mongodb database
my_db := mongo_session.DB("... database name...")
// Create the file in the Mongodb Gridfs instance
my_file, err := my_db.GridFS("fs").Create(unique_filename)
// ... check err value for nil
// Write the file to the Mongodb Gridfs instance
n, err := my_file.Write(data)
// ... check err value for nil
// Close the file
err = my_file.Close()
// ... check err value for nil
// Write a log type message
fmt.Printf("%d bytes written to the Mongodb instance\n", n)
// ... other statements redirecting to rest of user flow...
}
Question:
Is this "interim" step needed (data, err := ioutil.ReadAll(file))?
If so, can I execute this step more efficiently?
Are there other accepted practices or approaches I should be considering?
Thanks...
No, you should not read the file entirely in memory at once, as that will break when the file is too large. The second example in the documentation for GridFS.Create avoids this problem:
file, err := db.GridFS("fs").Create("myfile.txt")
check(err)
messages, err := os.Open("/var/log/messages")
check(err)
defer messages.Close()
err = io.Copy(file, messages)
check(err)
err = file.Close()
check(err)
As for why it's slower than something else, it's hard to tell without diving into the details of the two approaches used.
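Applied to the handler in the question, that means streaming the multipart file straight into GridFS with io.Copy instead of buffering it with ioutil.ReadAll. A sketch reusing the question's own names (mongo_session, the "filename" form field, and the error handling are assumptions):

func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
    file, handler, err := req.FormFile("filename")
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    defer file.Close()

    gridFile, err := mongo_session.DB("... database name...").GridFS("fs").Create(handler.Filename)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Stream straight from the request to GridFS; no in-memory copy needed.
    n, err := io.Copy(gridFile, file)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if err := gridFile.Close(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Printf("%d bytes written to the Mongodb instance\n", n)
}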
Once you have the file from the multipart form, it can be saved into GridFS using the function below. I tested this against huge files as well (up to 570 MB).
//....code inside the handlerfunc
for _, fileHeaders := range r.MultipartForm.File {
    for _, fileHeader := range fileHeaders {
        file, _ := fileHeader.Open()
        if gridFile, err := db.GridFS("fs").Create(fileHeader.Filename); err != nil {
            //errorResponse(w, err, http.StatusInternalServerError)
            return
        } else {
            gridFile.SetMeta(fileMetadata)
            gridFile.SetName(fileHeader.Filename)
            if err := writeToGridFile(file, gridFile); err != nil {
                //errorResponse(w, err, http.StatusInternalServerError)
                return
            }
        }
    }
}
func writeToGridFile(file multipart.File, gridFile *mgo.GridFile) error {
reader := bufio.NewReader(file)
defer func() { file.Close() }()
// make a buffer to keep chunks that are read
buf := make([]byte, 1024)
for {
// read a chunk
n, err := reader.Read(buf)
if err != nil && err != io.EOF {
return errors.New("Could not read the input file")
}
if n == 0 {
break
}
// write a chunk
if _, err := gridFile.Write(buf[:n]); err != nil {
return errors.New("Could not write to GridFs for "+ gridFile.Name())
}
}
gridFile.Close()
return nil
}