Change HTTP routing by server status - mongodb

I'm building a REST server in Go and using MongoDB as my database (but this question actually applies to any other external resource).
I want my server to start and respond even if the database is not up yet (not the case where the database goes down after the server has started; that is a different, much easier issue).
So my dao package includes a connection goroutine that receives a boolean channel and writes true to the channel when it has successfully connected to the database. If the connection fails, the goroutine keeps trying every X seconds.
When I use this package in another program I wrote, which is just a command-line tool, I use select with a timeout:
dbConnected := make(chan bool)
storage.Connect(dbConnected)
timeout := time.After(time.Minute)
select {
case <-dbConnected:
    createReport()
case <-timeout:
    log.Fatalln("Can't connect to the database")
}
I want to use the same package in a server, but I don't want to fail the whole server. Instead, I want to start the server with a handler that returns 503 SERVER BUSY until the server is connected to the database, and then start serving requests normally. Is there a simple way to implement this logic with the Go standard library? Using solutions like gorilla is an option, but the server is simple with very few APIs, and gorilla is a bit of an overkill.
== edited: ==
I know I can use a middleware, but I don't know how to do that without sharing data between the main method and the handlers. That's why I'm using the channel in the first place.

I have something working, but it is based on shared data. However, the data is a single boolean, so I guess it's not so dramatic. I would love to get comments on this solution:
In the dao package, I have this Connect method, which returns a boolean channel. The private connect goroutine writes true and exits when it succeeds:
func Connect() chan bool {
    connected := make(chan bool)
    go connect(mongoUrl, connected)
    return connected
}
I also added a Ping() method to the dao package; it runs forever and monitors the database status. It reports the status to a new channel and tries to reconnect if needed:
func Ping() chan bool {
    status := make(chan bool)
    go func() {
        for {
            if err := session.Ping(); err != nil {
                session.Close()
                status <- false
                <-Connect()
                status <- true
            }
            time.Sleep(time.Second)
        }
    }()
    return status
}
In the main package, I have this simple type:
type Connected struct {
    isConnected bool
}

// this one is called as go-routine
func (c *Connected) check(dbConnected chan bool) {
    // first connection, on server boot
    c.isConnected = <-dbConnected
    // monitor the database status
    status := dao.Ping()
    for {
        c.isConnected = <-status
    }
}

// the middleware
func (c *Connected) checkDbHandleFunc(next http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !c.isConnected {
            w.Header().Add("Retry-After", "10")
            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(503)
            respBody := `{"error":"The server is busy; Try again soon"}`
            w.Write([]byte(respBody))
        } else {
            next.ServeHTTP(w, r)
        }
    })
}
Middleware usage:
...
connected := Connected{
    isConnected: false,
}
dbConnected := dao.Connect()
go connected.check(dbConnected)
mux := http.NewServeMux()
mux.HandleFunc("/", mainPage)
mux.HandleFunc("/some-db-required-path/", connected.checkDbHandleFunc(someDbRequiredHandler))
...
log.Fatal(http.ListenAndServe(addr, mux))
...
Does it make sense?

Related

Request created with http.NewRequestWithContext() loses context when passed to middleware

In the program below I have two routers. One runs at localhost:3000 and acts as a public access point. It may also send requests with data to another local address, localhost:8000, where the data is processed. The second router runs at localhost:8000 and handles processing requests for the first router.
Problem
The first router sends a request with a context to the second using the http.NewRequestWithContext() function. A value is added to the context and the context is added to the request. When the request arrives at the second router, it does not have the value that was added previously.
Some things, like error handling, are omitted so as not to post a wall of code here.
package main

import (
    "bytes"
    "context"
    "net/http"

    "github.com/go-chi/chi"
    "github.com/go-chi/chi/middleware"
)

func main() {
    go func() {
        err := http.ListenAndServe(
            "localhost:3000",
            GetDataAndSolve(),
        )
        if err != nil {
            panic(err)
        }
    }()
    go func() {
        err := http.ListenAndServe( // in GetDataAndSolve() we send requests
            "localhost:8000", // with data for processing
            InternalService(),
        )
        if err != nil {
            panic(err)
        }
    }()
    // interrupt := make(chan os.Signal, 1)
    // signal.Notify(interrupt, syscall.SIGTERM, syscall.SIGINT)
    // <-interrupt // just a cool way to close the program, uncomment if you need it
}

func GetDataAndSolve() http.Handler {
    r := chi.NewRouter()
    r.Use(middleware.Logger)
    r.Get("/tasks/str", func(rw http.ResponseWriter, r *http.Request) {
        // receiving data for processing...
        taskCtx := context.WithValue(r.Context(), "str", "strVar") // the value is being
        postReq, err := http.NewRequestWithContext(                // stored to context
            taskCtx, // context is being given to request
            "POST",
            "http://localhost:8000/tasks/solution",
            bytes.NewBuffer([]byte("something")),
        )
        postReq.Header.Set("Content-Type", "application/json") // specifying for endpoint
        if err != nil {                                         // what we are sending
            return
        }
        resp, err := http.DefaultClient.Do(postReq) // running actual request
        // pls, proceed to Solver()
        // do stuff to resp
        // also despite arriving to middleware without right context
        // here resp contains a request with correct context
    })
    return r
}

func Solver(next http.Handler) http.Handler { // here we end up after sending postReq
    return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
        if r.Context().Value("str").(string) == "" {
            return // the request arrives without "str" in its context
        }
        ctxWithResult := context.WithValue(r.Context(), "result", mockFunc(r.Context()))
        next.ServeHTTP(rw, r.Clone(ctxWithResult))
    })
}

func InternalService() http.Handler {
    r := chi.NewRouter()
    r.Use(middleware.Logger)
    r.With(Solver).Post("/tasks/solution", emptyHandlerFunc)
    return r
}
Your understanding of context is not correct.
Context (simplifying to an extent, and in reference to the NewRequestWithContext API) is just an in-memory object with which you can control the lifetime of the request (handling/triggering cancellation).
However, your code is making an HTTP call, which goes over the wire (marshaled) using the HTTP protocol. This protocol doesn't understand Go's context or its values.
In your scenario, both /tasks/str and /tasks/solution happen to run on the same server, but they could just as well be on different servers, possibly in different languages and application servers, so the context cannot be sent across.
Since the APIs are within the same server, maybe you can avoid making a full-blown HTTP call and directly invoke the API/method instead. It might turn out to be faster as well.
If you still want to send additional values from the context, then you'll have to make use of other attributes like HTTP headers, params, or the body to send across the required information; that is, the data from the context has to be serialized into the HTTP request itself.
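For illustration, a rough sketch of carrying the value in an HTTP header instead of the context (the X-Str header name and the error handling are arbitrary choices, not part of the original code):
// Sender side (inside the /tasks/str handler): the context is kept for
// cancellation only; the value travels as a header.
postReq, err := http.NewRequestWithContext(
    r.Context(),
    "POST",
    "http://localhost:8000/tasks/solution",
    bytes.NewBuffer([]byte("something")),
)
if err != nil {
    return
}
postReq.Header.Set("Content-Type", "application/json")
postReq.Header.Set("X-Str", "strVar") // the value goes over the wire as a header
resp, err := http.DefaultClient.Do(postReq) // then use resp as before

// Receiver side: Solver reads the header and restores the value into the
// request context for downstream handlers.
func Solver(next http.Handler) http.Handler {
    return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
        str := r.Header.Get("X-Str")
        if str == "" {
            http.Error(rw, "missing X-Str header", http.StatusBadRequest)
            return
        }
        next.ServeHTTP(rw, r.WithContext(context.WithValue(r.Context(), "str", str)))
    })
}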

How to send a response back if a function is locked using mutex.lock() in Golang?

I have this function.
func (s *eS) Post(param *errorlogs.Q) (*errorlogs.Error, *errors.RestErr) {
    //sub := q.Get("sub")
    s.mu.Lock()
    utime := int32(time.Now().Unix())
    // Open our jsonFile
    jsonFile, errFile := getlist(param.Id)
    // if we os.Open returns an error then handle it
    if errFile != nil {
        return nil, errFile
    }
    jsonFile, err := os.Open(dir + "/File.json")
    // if we os.Open returns an error then handle it
    if err != nil {
        return nil, errors.NewNotFoundError("Bad File request")
    }
    // read our opened jsonFile as a byte array.
    byteValue, _ := ioutil.ReadAll(jsonFile)
    // we initialize our model
    var errorFile errorlogs.Error_File
    // we unmarshal our byteArray which contains our
    // jsonFile's content into '' which we defined above
    json.Unmarshal(byteValue, &errorFile)
    // defer the closing of our jsonFile so that we can parse it later on
    defer jsonFile.Close()
    // An object to copy the required data from the response
    var id int32
    if len(errorFile.Error) == 0 {
        id = 0
    } else {
        id = errorFile.Error[len(errorFile.Error)-1].ID
    }
    newValue := &errorlogs.Error{
        ID:    id + 1,
        Utime: utime,
    }
    errorFile.Error = append(errorFile.Error, *newValue)
    file, err := json.Marshal(errorFile)
    if err != nil {
        return nil, errors.NewInternalServerError("Unable to json marshal file")
    }
    err = ioutil.WriteFile(dir+"/File.json", file, 0644)
    if err != nil {
        return nil, errors.NewInternalServerError("Unable to write file")
    }
    s.mu.Unlock()
    return newValue, nil
}
Here I am locking this function against concurrent requests, so that if one client is already writing to the file, another client cannot write to it at the same time. But now I'm confused about what mutex.Lock() does to all the other requests while the lock is held. Does it make the other clients wait? Or does it just ignore them? Is there any way to send those clients back some kind of response? Or should I let the other clients wait and then allow them to access this function?
When a mutex is locked, all other calls to Mutex.Lock() will block until Mutex.Unlock() is called first.
So while your handler is running (and holding the mutex), all other requests will get blocked at the Lock() call.
Note: if your handler doesn't complete normally because you return early (using a return statement), or it panics, your mutex will remain locked, and hence all further requests will block.
A good practice is to use defer to unlock a mutex, right after it is locked:
s.mu.Lock()
defer s.mu.Unlock()
This ensures Unlock() will be called no matter how your function ends (whether it ends normally, returns early, or panics).
Try to hold the lock for as little time as possible to minimize blocking time of other requests. While it may be convenient to lock right as you enter the handler and only unlock before return, if you don't use the protected resources for the "lifetime" of the handler, only lock and unlock when you use the shared resource. For example if you want to protect concurrent access to a file, lock the mutex, read / write the file, and as soon as you're done with it, unlock the mutex. What you do with the read data and how you assemble and send your response should not block other requests. Of course when using defer to unlock, that may not run as early as it should be (when you're done with the shared resource). So in some cases it may be OK not to use defer, or the code accessing shared resources may be moved to a named or unnamed (anonymous) function to still be able to use defer.
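For example, a rough sketch of that idea applied to the shape of the Post method from the question (not a drop-in replacement), using an anonymous function so defer still releases the lock as early as possible:
func (s *eS) Post(param *errorlogs.Q) (*errorlogs.Error, *errors.RestErr) {
    var newValue *errorlogs.Error

    // Keep the lock only around the file's read-modify-write.
    err := func() error {
        s.mu.Lock()
        defer s.mu.Unlock() // released as soon as this anonymous function returns

        // open File.json, unmarshal, append the new entry (setting newValue),
        // marshal and write File.json back -- i.e. the body of the original method
        return nil
    }()
    if err != nil {
        return nil, errors.NewInternalServerError("Unable to update file")
    }

    // Assembling and returning the response happens outside the critical section.
    return newValue, nil
}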
sync.Mutex does not support "peeking" the status, nor "try-lock" operation. This means using sync.Mutex you cannot signal the client that it has to wait because processing the request is waiting another request to complete. If you'd need such functionality, you could use channels. A buffered channel with a capacity of 1 could fulfil this functionality: the "lock" operation is sending a value on the channel, the "unlock" operation is receiving a value from the channel. So far so good. The "try-lock" operation could be a "conditional" send operation: using a select statement with a default case, you could detect that you can't lock now because it is already locked, and you could do something else instead or meanwhile, and retry locking later.
Here's an example of how it could look:
var lock = make(chan struct{}, 1)

func handler(w http.ResponseWriter, r *http.Request) {
    // Try locking:
    select {
    case lock <- struct{}{}:
        // Success: proceed
        defer func() { <-lock }() // Unlock deferred
    default:
        // Another handler would block us, send back an "error"
        http.Error(w, "Try again later", http.StatusTooManyRequests)
        return
    }

    time.Sleep(time.Second * 2) // Simulate long computation
    io.WriteString(w, "Done")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
The above simple example returns an error immediately if another request holds the lock. You could choose to do different things here: you could put it in a loop and retry a few times before giving up and returning an error (sleeping a little between iterations). You could use a timeout when attempting to lock, and only accept "failure" if you can't get the lock for some time (see time.After() and context.WithTimeout()). Of course if we're using a timeout of some sort, the default case must be removed (the default case is chosen immediately if none of the other cases can proceed immediately).
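For illustration, a rough sketch of the retry-in-a-loop variant mentioned above (the attempt count and sleep duration are arbitrary):
// Try a few times before giving up; each failed attempt sleeps a little.
acquired := false
for i := 0; i < 5 && !acquired; i++ {
    select {
    case lock <- struct{}{}:
        acquired = true
    default:
        time.Sleep(200 * time.Millisecond)
    }
}
if !acquired {
    http.Error(w, "Try again later", http.StatusTooManyRequests)
    return
}
defer func() { <-lock }() // Unlock deferred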
And while we're at it (the timeout), since we're already using select, it's a bonus that we can incorporate monitoring the request's context: if it's cancelled, we should terminate and return early. We may do so by adding a case receiving from the context's done channel, like case <-r.Context().Done():.
Here's an example of how the timeout and context monitoring could be done simply with a select:
var lock = make(chan struct{}, 1)

func handler(w http.ResponseWriter, r *http.Request) {
    // Wait 1 sec at most:
    ctx, cancel := context.WithTimeout(r.Context(), time.Second)
    defer cancel()

    // Try locking:
    select {
    case lock <- struct{}{}:
        // Success: proceed
        defer func() { <-lock }() // Unlock deferred
    case <-ctx.Done():
        // Timeout or context cancelled
        http.Error(w, "Try again later", http.StatusTooManyRequests)
        return
    }

    time.Sleep(time.Second * 2) // Simulate long computation
    io.WriteString(w, "Done")
}

pg-go RunInTransaction not rolling back the transaction

I'm trying to roll back a transaction in my unit tests, between scenarios, to keep the database empty and not make my tests dirty. So I'm trying:
for _, test := range tests {
    db := connect()
    _ = db.RunInTransaction(func() error {
        t.Run(test.name, func(t *testing.T) {
            for _, r := range test.objToAdd {
                err := db.PutObj(&r)
                require.NoError(t, err)
            }
            objReturned, err := db.GetObjsWithFieldEqualsXPTO()
            require.NoError(t, err)
            require.Equal(t, test.queryResultSize, len(objReturned))
        })
        return fmt.Errorf("returning error to clean up the database rolling back the transaction")
    })
}
I was expecting the transaction to roll back at the end of each scenario, so the next iteration of the loop would have an empty database, but when I run it, the data is never rolled back.
I believe I'm trying to do what the doc suggested: https://pg.uptrace.dev/faq/#how-to-test-mock-database, am I right?
More info: I noticed that my interface implements a layer over RunInTransaction as:
func (gs *DB) RunInTransaction(fn func() error) error {
    f := func(*pg.Tx) error { return fn() }
    return gs.pgDB.RunInTransaction(f)
}
I don't know what the problem is yet, but I strongly suspect it's related to that (because the Tx is encapsulated entirely inside the RunInTransaction implementation).
go-pg uses connection pooling (in common with most go database packages). This means that when you call a database function (e.g. db.Exec) it will grab a connection from the pool (establishing a new one if needed), run the command and return the connection to the pool.
When running a transaction you need to run BEGIN, whatever updates etc you require, followed by COMMIT/ROLLBACK, on a single connection dedicated to the transaction (any commands sent on other connections are not part of the transaction). This is why Begin() (and effectively RunInTransaction) provide you with a pg.Tx; use this to run commands within the transaction.
example_test.go provides an example covering the usage of RunInTransaction:
incrInTx := func(db *pg.DB) error {
    // Transaction is automatically rollbacked on error.
    return db.RunInTransaction(func(tx *pg.Tx) error {
        var counter int
        _, err := tx.QueryOne(
            pg.Scan(&counter), `SELECT counter FROM tx_test FOR UPDATE`)
        if err != nil {
            return err
        }
        counter++
        _, err = tx.Exec(`UPDATE tx_test SET counter = ?`, counter)
        return err
    })
}
You will note that this only uses the pg.DB when calling RunInTransaction; all database operations use the transaction tx (a pg.Tx). tx.QueryOne will be run within the transaction; if you ran db.QueryOne then that would be run outside of the transaction.
So RunInTransaction begins a transaction and passes the relevant Tx in as a parameter to the function you provide. You wrap this with:
func (gs *DB) RunInTransaction(fn func() error) error {
    f := func(*pg.Tx) error { return fn() }
    return gs.pgDB.RunInTransaction(f)
}
This effectively ignores the pg.Tx and you then run commands using other connections (e.g. err := db.PutObj(&r)) (i.e. outside of the transaction). To fix this you need to use the transaction (e.g. err := tx.PutObj(&r)).
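For illustration, a rough sketch of a wrapper that exposes the transaction instead of discarding it (this assumes the go-pg v9-style RunInTransaction signature shown above; putObjTx is a hypothetical tx-based variant of PutObj):
// Pass the *pg.Tx through to the caller instead of swallowing it.
func (gs *DB) RunInTransaction(fn func(tx *pg.Tx) error) error {
    return gs.pgDB.RunInTransaction(fn)
}

// Callers then run every statement through tx, so it is part of the transaction:
_ = db.RunInTransaction(func(tx *pg.Tx) error {
    if err := putObjTx(tx, &someObj); err != nil { // hypothetical tx-based helper
        return err
    }
    return fmt.Errorf("force a rollback to keep the test database clean")
})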

Download occurs at my custom gin-golang based REST server side, but not at the client side

I am creating an HTTP REST server in golang using gin-gonic
My code is:
func main() {
    var port string
    if len(os.Args) == 1 {
        port = "8080"
    } else {
        port = os.Args[1]
    }
    router := gin.Default()
    router.GET("/:a/*b", func(c *gin.Context) {
        // My custom code to get "download" reader from some third party cloud storage.

        // Create the file at the server side
        out, err := os.Create(b)
        if err != nil {
            c.String(http.StatusInternalServerError, "Error in file creation at server side\n")
            return
        }
        c.String(http.StatusOK, "File created at server side\n")
        _, err = io.Copy(out, download)
        if err != nil {
            c.String(http.StatusInternalServerError, "Some error occured while downloading the object\n")
            return
        }
        // Close the file at the server side
        err = out.Close()
        if err != nil {
            c.String(http.StatusInternalServerError, "Some error occured while closing the file at server side\n")
        }
        // Download the file from server side at client side
        c.String(http.StatusOK, "Downloading the file at client side\n")
        c.FileAttachment(objectPath, objectPath)
        c.String(http.StatusOK, "\nFile downlaoded at the client side successfully\n")
        c.String(http.StatusOK, "Object downloaded successfully\n")
    })
    // Listen and serve
    router.Run(":" + port)
}
When I run the curl command at the client-side command prompt, the file is downloaded on my REST server, but not on my client side. However, the gin-gonic godoc says:
func (*Context) File
func (c *Context) File(filepath string)
File writes the specified file into the body stream in an efficient way.
func (*Context) FileAttachment
func (c *Context) FileAttachment(filepath, filename string)
FileAttachment writes the specified file into the body stream in an efficient way. On the client side, the file will typically be downloaded with the given filename.
func (*Context) FileFromFS
func (c *Context) FileFromFS(filepath string, fs http.FileSystem)
FileFromFS writes the specified file from http.FileSystem into the body stream in an efficient way.
But on closer observation of my command prompt output, I saw that it printed the content of the txt file, which I need to save on my client side.
So, I would like to stream that download from that 3rd party storage to the client side command prompt or browser, via my custom REST API server.
Am I missing something here?
Thanks
UPDATE: I tried writing the Content-Disposition & Content-Type headers to the response as follows:
package main

import (
    "context"
    "fmt"
    "net/http"
    "os"

    "github.com/gin-gonic/gin"
)

func main() {
    var port string
    if len(os.Args) == 1 {
        port = "8080"
    } else {
        port = os.Args[1]
    }
    router := gin.Default()
    router.GET("/:a/*b", func(c *gin.Context) {
        param_a := c.Param("a")
        param_b := c.Param("b")
        reqToken := c.GetHeader("my_custom_key")
        // My custom code to get "download" reader from some third party cloud storage.

        c.String(http.StatusOK, "Downloading the object\n")
        c.Writer.Header().Add("Content-Disposition", fmt.Sprintf("attachment; filename=%", param_b))
        c.Writer.Header().Add("Content-Type", c.GetHeader("Content-Type"))
        c.File(param_b)
        c.String(http.StatusOK, "Object downloaded successfully\n")
        err = download.Close()
        if err != nil {
            c.String(http.StatusInternalServerError, "Error in closing the download of the object\n")
            return
        }
        c.String(http.StatusOK, "Object downloaded completed & closed successfully\n")
    })
    // Listen and serve
    router.Run(":" + port)
}
Now it displays an error, but also the success messages, as follows; the file is still not downloaded at the client side:
404 page not found
Object downloaded successfully
Object downloaded completed & closed successfully

Golang - Scaling a websocket client for multiple connections to different servers

I have a websocket client. In reality, it is far more complex than the basic code shown below.
I now need to scale this client code to open connections to multiple servers. Ultimately, the tasks that need to be performed when a message is received from the servers are identical.
What would be the best approach to handle this?
As I said above, the actual code that runs when a message is received is far more complex than shown in the example.
package main

import (
    "flag"
    "log"
    "net/url"
    "os"
    "os/signal"
    "time"

    "github.com/gorilla/websocket"
)

var addr = flag.String("addr", "localhost:1234", "http service address")

func main() {
    flag.Parse()
    log.SetFlags(0)

    interrupt := make(chan os.Signal, 1)
    signal.Notify(interrupt, os.Interrupt)

    // u := url.URL{Scheme: "ws", Host: *addr, Path: "/echo"}
    u := url.URL{Scheme: "ws", Host: *addr, Path: "/"}
    log.Printf("connecting to %s", u.String())

    c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    defer c.Close()

    done := make(chan struct{})

    go func() {
        defer close(done)
        for {
            _, message, err := c.ReadMessage()
            if err != nil {
                log.Println("read:", err)
                return
            }
            log.Printf("recv: %s", message)
        }
    }()

    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-done:
            return
        case t := <-ticker.C:
            err := c.WriteMessage(websocket.TextMessage, []byte(t.String()))
            if err != nil {
                log.Println("write:", err)
                return
            }
        case <-interrupt:
            log.Println("interrupt")

            // Cleanly close the connection by sending a close message and then
            // waiting (with timeout) for the server to close the connection.
            err := c.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
            if err != nil {
                log.Println("write close:", err)
                return
            }
            select {
            case <-done:
            case <-time.After(time.Second):
            }
            return
        }
    }
}
Modify the interrupt handling to close a channel on interrupt. This allows multiple goroutines to wait on the event by waiting for the channel to close.
shutdown := make(chan struct{})
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)

go func() {
    <-interrupt
    log.Println("interrupt")
    close(shutdown)
}()
Move the per-connection code to a function. This code is a copy and paste from the question with two changes: the interrupt channel is replaced with the shutdown channel; the function notifies a sync.WaitGroup when the function is done.
func connect(u string, shutdown chan struct{}, wg *sync.WaitGroup) {
    defer wg.Done()

    log.Printf("connecting to %s", u)
    c, _, err := websocket.DefaultDialer.Dial(u, nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    defer c.Close()

    done := make(chan struct{})

    go func() {
        defer close(done)
        for {
            _, message, err := c.ReadMessage()
            if err != nil {
                log.Println("read:", err)
                return
            }
            log.Printf("recv: %s", message)
        }
    }()

    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()

    for {
        select {
        case <-done:
            return
        case t := <-ticker.C:
            err := c.WriteMessage(websocket.TextMessage, []byte(t.String()))
            if err != nil {
                log.Println("write:", err)
                return
            }
        case <-shutdown:
            // Cleanly close the connection by sending a close message and then
            // waiting (with timeout) for the server to close the connection.
            err := c.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
            if err != nil {
                log.Println("write close:", err)
                return
            }
            select {
            case <-done:
            case <-time.After(time.Second):
            }
            return
        }
    }
}
Declare a sync.WaitGroup in main(). For each websocket endpoint that you want to connect to, increment the WaitGroup and start a goroutine to connect that endpoint. After starting the goroutines, wait on the WaitGroup for the goroutines to complete.
var wg sync.WaitGroup
// endpoints is []string where elements are URLs of endpoints to connect to.
for _, u := range endpoints {
    wg.Add(1)
    go connect(u, shutdown, &wg)
}
wg.Wait()
The code above with an edit to make it run against Gorilla's echo example server is posted on the playground.
Is the communication with each server completely independent of the other servers? If yes, I would go about it in a fashion like this:
In main, create a context with a cancellation function.
Create a WaitGroup in main to track the fired-up goroutines.
For every server, add to the WaitGroup and fire up a new goroutine from the main function, passing the context and the WaitGroup references.
main goes into a for/select loop listening for signals; if one arrives, it calls the cancel function and waits on the WaitGroup.
main can also listen on a result channel from the goroutines and print the results itself if the goroutines shouldn't do it directly.
Every goroutine, as we said, has references to the WaitGroup, the context, and possibly a channel to return results on. Now the approach splits depending on whether the goroutine must do one thing only, or a sequence of things. For the first approach:
If only one thing is to be done, we follow an approach like the one described here (observe that, to be asynchronous, it would in turn fire up a new goroutine to perform the DoSomething() step, which returns the result on the channel).
That allows it to accept the cancellation signal at any time. It is up to you to determine how non-blocking you want to be and how promptly you want to respond to cancellation signals. Also, the benefit of passing a context to the goroutines is that you can call the context-enabled versions of most library functions. For example, if you want your dials to have a timeout of, say, 1 minute, you would create a new context with a timeout from the one passed in and then DialContext with that. This allows the dial to stop either on the timeout or when the parent context's cancel function (the one you created in main) is called.
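For example, a small sketch of that dial-with-timeout idea, assuming a parent ctx passed in from main and gorilla/websocket's DialContext:
// Derive a 1-minute dial timeout from the parent context; the dial stops on
// whichever happens first: the timeout, or main's cancel function being called.
dialCtx, cancel := context.WithTimeout(ctx, time.Minute)
defer cancel()

c, _, err := websocket.DefaultDialer.DialContext(dialCtx, u, nil)
if err != nil {
    log.Println("dial:", err)
    return
}
defer c.Close()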
If more things need to be done, I usually prefer to have each goroutine do one thing, invoke a new goroutine with the next step to be performed (passing all the references down the pipeline), and exit.
This approach scales well with cancellations, makes it possible to stop the pipeline at any step, and easily supports contexts with deadlines for steps that can take too long.
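Putting those pieces together, a minimal sketch of the context/WaitGroup wiring described above (the endpoint list and the runClient body are placeholders):
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    var wg sync.WaitGroup
    results := make(chan string) // goroutines report back here instead of printing

    endpoints := []string{"ws://localhost:1234/", "ws://localhost:5678/"} // placeholders
    for _, u := range endpoints {
        wg.Add(1)
        go runClient(ctx, &wg, u, results)
    }

    interrupt := make(chan os.Signal, 1)
    signal.Notify(interrupt, os.Interrupt)

    for {
        select {
        case res := <-results:
            log.Println("result:", res)
        case <-interrupt:
            cancel()  // tell every client goroutine to stop
            wg.Wait() // wait for their cleanup to finish
            return
        }
    }
}

// runClient would dial, read and write much like the code in the question, but
// select on <-ctx.Done() instead of an interrupt channel; any send on results
// should also select on <-ctx.Done() so shutdown never blocks.
func runClient(ctx context.Context, wg *sync.WaitGroup, u string, results chan<- string) {
    defer wg.Done()
    // ... dial u (e.g. with DialContext as sketched above), then loop until a
    // read/write error or <-ctx.Done() ...
}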