Golang channel in select not receiving

I am currently working on a small program that uses channels, select and goroutines, and I really don't understand why it doesn't run as I expect.
I have 2 channels that all my goroutines listen on.
I pass the channels to each goroutine, where a select must choose between the 2 depending on which one delivers data first.
The problem is that no goroutine ever falls into the second case. I can receive 100 jobs one after the other and see everything in the log: the first case does what is asked and then sends the job into the second channel (assuming the condition holds), but after that I get no more logs.
I just don't understand why...
If someone can enlighten me :)
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    wg := new(sync.WaitGroup)
    in := make(chan *Job)
    out := make(chan *Job)
    results := make(chan *Job)

    for i := 0; i < 50; i++ {
        go work(wg, in, out, results)
    }
    wg.Wait()

    // Finally we collect all the results of the work.
    for elem := range results {
        fmt.Println(elem)
    }
}
func Work(wg *sync.WaitGroup, in chan *Job, out chan *Job, results chan *Job) {
    wg.Add(1)
    defer wg.Done()
    for {
        select {
        case job := <-in:
            ticker := time.Tick(10 * time.Second)
            select {
            case <-ticker:
                // DO stuff
                if condition { // pseudo-code: some condition on the job
                    out <- job
                }
            case <-time.After(5 * time.Minute):
                fmt.Println("Timeout")
            }
        case job := <-out:
            ticker := time.Tick(1 * time.Minute)
            select {
            case <-ticker:
                // DO stuff
                if condition { // pseudo-code: some condition on the job
                    results <- job
                }
            case <-quitOut:
                fmt.Println("Job completed")
            }
        }
    }
}
I create a number of workers that listen on 2 channels and send the final results to a 3rd.
A worker does something with the received job and, if the job meets a given condition, passes it to the next channel; if it then meets another condition, it passes the job into the results channel.
So in my head, with 5 workers for example, the pipeline looked like this: 3 jobs arrive in channel IN and 3 workers immediately take them; if all 3 jobs meet the condition, they are sent into channel OUT, where 2 workers take two of them and the 3rd is picked up by one of the first 3 workers, and so on...
Now I hope you have a better understanding of my first code. But in my code, I never get to the second case.

I think your solution might be a bit overcomplicated. Here is a simplified version. Bear in mind that there are numerous possible implementations. A good article to read:
https://medium.com/smsjunk/handling-1-million-requests-per-minute-with-golang-f70ac505fcaa
Or even better, straight from Go by Example:
https://gobyexample.com/worker-pools (which I think may be what you were aiming for)
Anyway, the code below serves as a different kind of example. There are a few ways to go about solving this problem.
package main

import (
    "context"
    "log"
    "os"
    "sync"
    "time"
)

type worker struct {
    wg   *sync.WaitGroup
    in   chan job
    quit context.Context
}

type job struct {
    message int
}

func main() {
    numberOfJobs := 50
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    w := worker{
        wg:   &sync.WaitGroup{},
        in:   make(chan job),
        quit: ctx,
    }

    for i := 0; i < numberOfJobs; i++ {
        go func(i int) {
            w.in <- job{message: i}
        }(i)
    }

    counter := 0
    for {
        select {
        case j := <-w.in:
            counter++
            log.Printf("Received job %+v\n", j)
            // DO SOMETHING WITH THE RECEIVED JOB
            // WORKING ON IT
            x := j.message * j.message
            log.Printf("job processed, result %d", x)
        case <-w.quit.Done():
            log.Printf("Received quit, timeout reached. Number of jobs queued: %d, Number of jobs complete: %d\n", numberOfJobs, counter)
            os.Exit(0)
        default:
            // TODO
        }
    }
}

Your quitIn and quitOut channels are basically useless: you create them and try to receive from them, which can never succeed, because nobody can write to these channels; nobody even knows they exist. I cannot say more because I do not understand what the code is supposed to do.

Because your function is named Work but you are calling work; Go identifiers are case-sensitive.


Get transaction id from new relic during tests

I am using New Relic to get insights on my golang app. I am trying to test a middleware that will log whenever a request comes in with a proper New Relic header ("Newrelic": "eyXXXXXXX").
This is my test :
func TestGetNewRelicTraceID(t *testing.T) {
    w := httptest.NewRecorder()
    req := httptest.NewRequest("GET", "/test", nil)
    req.Header.Add("Newrelic", "eyJ2IjpbMCwxXSwiZCI6eyJ0eSI6IkFwcCIsImFwIjoiNDk1Njg4OTcwIiwiYWMiOiIxMzA5OTAiLCJ0eCI6IjE3MGNmYjRiNTBiMTQ2MGIiLCJpZCI6IjQ1NGY0MTFmOWNjYjA1MDgiLCJ0ciI6IjE3MGNmYjRiNTBiMTQ2MGI0MmQ0N2ZkZmQ3MTg2NzM3IiwicHIiOjEuMTI3NTUxLCJzYSI6dHJ1ZSwidGkiOjE2MjEwMTAwMjcwMjIsInRrIjoiMzQ2MDgwIn19")

    app, _ := newrelic.NewApplication(
        newrelic.ConfigAppName("test"),
        newrelic.ConfigLicense("1TI35kweH5xJjYLvDgp6gX1LGbYvJ130n0E5Jecs"),
        newrelic.ConfigDistributedTracerEnabled(true),
        func(cfg *newrelic.Config) {
            cfg.ErrorCollector.RecordPanics = true
        },
    )

    _, fn := newrelic.WrapHandleFunc(app, "/test", func(w http.ResponseWriter, r *http.Request) {
        txn2 := newrelic.FromContext(r.Context())
        nrTraceID := fmt.Sprintf("%s", txn2.GetTraceMetadata().TraceID)
        w.Write([]byte(nrTraceID))
    })
    fn(w, req)

    assert.Equal(t, http.StatusOK, w.Code)
    assert.Equal(t, "170cfb4b50b1460b42d47fdfd7186737", string(w.Body.Bytes()))
}
No matter what I do, the test never passes as every run creates a new trace id, instead of using the one coming with the header.
What am I doing incorrectly?
Well, I found my problem. It seems that I have to wait for the connection to the New Relic server to actually be established before running the test, which means I had to replace 1TI35kweH5xJjYLvDgp6gX1LGbYvJ130n0E5Jecs with a REAL key. This seems to be very important. Additionally, I had to add this to the test:
err := app.WaitForConnection(time.Second * 10)
require.Nil(t, err)
And now it works as expected.

Understanding database/sql

I am playing around with the database/sql package, trying to see how it works and to understand what happens if you don't call rows.Close(), etc.
So I wrote the following piece of code for inserting a model to database:
func (db Database) Insert(m model.Model) (int32, error) {
    var id int32
    quotedTableName := m.TableName(true)

    // Get insert query
    q, values := model.InsertQuery(m)

    rows, err := db.Conn.Query(q, values...)
    if err != nil {
        return id, err
    }
    for rows.Next() {
        err = rows.Scan(&id)
        if err != nil {
            return id, err
        }
    }
    return id, nil
}
I don't call rows.Close() on purpose to see the consequences. When setting up the database connection I set some properties such as:
conn.SetMaxOpenConns(50)
conn.SetMaxIdleConns(2)
conn.SetConnMaxLifetime(time.Second*60)
Then I attempt to insert 10000 records:
for i := 0; i < 10000; i++ {
    lander := models.Lander{
        // ...struct fields with random data on each iteration
    }
    go func() {
        Insert(&lander)
    }()
}
(It lacks error checking, context timeouts, etc., but for the purpose of playing around it gets the job done.) When I execute the piece of code above, I expect to see at least some errors regarding database connections; however, the data gets inserted without problems (all 10000 records). When I check Stats() I see the following:
{MaxOpenConnections:50 OpenConnections:1 InUse:0 Idle:1 WaitCount:9951 WaitDuration:3h9m33.896466243s MaxIdleClosed:48 MaxLifetimeClosed:2}
Since I didn't call rows.Close(), I expected to see more OpenConnections or more InUse connections, because I am never releasing the connection (I might be wrong, but that is the purpose of Close(): to release a connection and return it to the pool).
So my question is simply: what do these Stats() mean, and why are there no errors whatsoever during insertion? Also, why aren't there more OpenConnections or InUse ones, and what are the real consequences of not calling Close()?
According to the docs for Rows:
If Next is called and returns false and there are no further result sets, the Rows are closed automatically and it will suffice to check the result of Err.
Since you iterate all the results, the result set is closed.

Execute SparkCore function using Gobot.io and sleepy RESTful Framework for Go

I have the following bit of code where I'm using the RESTful framework for Go called sleepy.
I can successfully start the service at http://localhost:3000; however, when I try to access http://localhost:3000/temperature, I expect my SparkCore function dht to execute.
I'm using the Gobot.io Spark platform to execute this function based on this example, which I've implemented in my own code.
The problem is that the code never gets past the gobot.Start() call inside the Get() function, so I can't actually return the result data.
I'm setting the data value hoping that I can do:
return 200, data, http.Header{"Content-type": {"application/json"}}
But it never gets called because of gobot.Start().
I'm very new to Go, so any help would be greatly appreciated.
package main

import (
    "fmt"
    "net/http"
    "net/url"

    "github.com/dougblack/sleepy"
    "github.com/hybridgroup/gobot"
    "github.com/hybridgroup/gobot/platforms/spark"
)

var gbot = gobot.NewGobot()
var sparkCore = spark.NewSparkCoreAdaptor("spark", "device_id", "auth_token")

type Temperature struct{}

func (temperature Temperature) Get(values url.Values, headers http.Header) (int, interface{}, http.Header) {
    work := func() {
        if result, err := sparkCore.Function("dht", ""); err != nil {
            fmt.Println(err)
        } else {
            data := map[string]string{"Temperature": result}
            fmt.Println("result from \"dht\":", result)
        }
    }

    robot := gobot.NewRobot("spark",
        []gobot.Connection{sparkCore},
        work,
    )
    gbot.AddRobot(robot)
    gbot.Start()

    return 200, data, http.Header{"Content-type": {"application/json"}}
}

func main() {
    api := sleepy.NewAPI()
    temperatureResource := new(Temperature)
    api.AddResource(temperatureResource, "/temperature")
    fmt.Println("Listening on http://localhost:3000/")
    api.Start(3000)
}
gbot.Start() is a blocking call.
In this context, you are expected to call it as:
go gbot.Start()
This launches it in a goroutine (think thread) and then lets your app continue.
When you look at the gobot example app, it doesn't run Start in the background, because there it is called from main: if main ran everything in the background and didn't wait for anything, the app would exit immediately with no apparent effect.

Are golang net.UDPConn and net.TCPConn thread safe? Can I read or write on a single UDPConn object from multiple threads?

1. Can we call send from one thread and recv from another on the same net.UDPConn or net.TCPConn object?
2. Can we call multiple sends in parallel from different threads on the same net.UDPConn or net.TCPConn object?
I am unable to find good documentation on this either.
Is the golang socket API thread safe?
I find it hard to test whether it is thread safe.
Any pointers in this direction will be helpful.
My test code is below:
package main

import (
    "fmt"
    "net"
    "sync"
)

func udp_server() {
    // create listener
    conn, err := net.ListenUDP("udp", &net.UDPAddr{
        IP:   net.IPv4(0, 0, 0, 0),
        Port: 8080,
    })
    if err != nil {
        fmt.Println("listen fail", err)
        return
    }
    defer conn.Close()

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(socket *net.UDPConn) {
            defer wg.Done()
            for {
                // read data
                data := make([]byte, 4096)
                read, remoteAddr, err := socket.ReadFromUDP(data)
                if err != nil {
                    fmt.Println("read data fail!", err)
                    continue
                }
                fmt.Println(read, remoteAddr)
                fmt.Printf("%s\n\n", data)

                // send data
                senddata := []byte("hello client!")
                _, err = socket.WriteToUDP(senddata, remoteAddr)
                if err != nil {
                    fmt.Println("send data fail!", err)
                    return
                }
            }
        }(conn)
    }
    wg.Wait()
}

func main() {
    udp_server()
}
Is this test code OK?
The documentation for net.Conn says:
Multiple goroutines may invoke methods on a Conn simultaneously.
My interpretation of the doc above is that nothing catastrophic will happen if you invoke Read and Write on a net.Conn from multiple goroutines, and that calls to Write on a net.Conn from multiple goroutines will be serialised, so that the bytes from 2 separate calls to Write will not be interleaved as they are written to the network.
The problem with the code you have presented is that there is no guarantee that Write will write the whole byte slice given to it in one go; you are ignoring the indication of how many bytes were actually written.
_, err = socket.WriteToUDP(senddata, remoteAddr)
So to make sure you write everything, you would need to loop and call Write until all of senddata is sent. But net.Conn only ensures that data from a single call to Write is not interleaved. Given that you could be sending a single block of data with multiple calls to Write, there is no guarantee that the single block of data would reach its destination intact.
So, for example, 3 "hello client!" messages could arrive in the following form:
"hellohellohello client! client! client!"
So if you want reliable message writing on a net.Conn from multiple goroutines, you will need to synchronise those goroutines to ensure that single messages are written intact.
If I wanted to do this, as a first attempt I would have a single goroutine reading from one or more message channels and writing to the net.Conn, and then multiple goroutines can write to those message channels.

How to call a goroutine inside of a select case that runs in the scope of the select's parent

I am building a data tool that collects data in a stream and operates on it. I have a main routine, a "process manager", which is responsible for creating new goroutines running an accumulation function. The manager is told to create the goroutines via a channel-receive select case, which it runs in an infinite for loop (I already have my cancel logic for the manager itself and for all of the goroutines it creates). The problem is that the manager needs to be able to run the accumulator goroutines in its main scope so that they can operate outside of the select and for loop's scope (I want them to keep running while the manager accepts new cases).
cancel := make(chan struct{})
chanchannel := make(chan chan datatype)

func operationManager(chanchannel chan chan datatype, cancel chan struct{}) {
    for {
        select {
        case newchan := <-chanchannel:
            go runAccum(newchan, cancel)
        case <-cancel:
            return
        }
    }
}

func runAccum(inchan chan datatype, cancel chan struct{}) {
    for {
        select {
        case data := <-inchan:
            // do something
        case <-cancel:
            return
        }
    }
}
This is a very, very dumbed-down example of my use case, but I hope it illustrates my problem's component pieces. Let me know whether this is possible, feasible, reasonable or inadvisable. And no, this is not how I implemented my teardown, haha.
There is no "scope" to goroutines. All goroutines are equal.
There is "scope" for closures, but your goroutines do not involve closures.
So every goroutine spawned by go runAccum(newchan, cancel) will be like any other goroutine you spawn, no matter from where.
I assume you did not test your solution?