When to use selector.AddReceive, selector.Select - cadence-workflow

Would appreciate some clarification on when I should use selector.AddReceive and selector.Select. This might not be a Cadence problem, but perhaps I'm missing some knowledge with regards to Golang.
For selector.Select I think the basic idea is that we wait for the next output from a channel. I'm not entirely sure what selector.AddReceive does.
For example, in the Cadence samples, the local_activity example is linked and pasted below:
func signalHandlingWorkflow(ctx workflow.Context) error {
logger := workflow.GetLogger(ctx)
ch := workflow.GetSignalChannel(ctx, SignalName)
for {
var signal string
if more := ch.Receive(ctx, &signal); !more {
logger.Info("Signal channel closed")
return cadence.NewCustomError("signal_channel_closed")
}
logger.Info("Signal received.", zap.String("signal", signal))
if signal == "exit" {
break
}
cwo := workflow.ChildWorkflowOptions{
ExecutionStartToCloseTimeout: time.Minute,
// TaskStartToCloseTimeout must be larger than all local activity execution time, because DecisionTask won't
// return until all local activities completed.
TaskStartToCloseTimeout: time.Second * 30,
}
childCtx := workflow.WithChildOptions(ctx, cwo)
var processResult string
err := workflow.ExecuteChildWorkflow(childCtx, processingWorkflow, signal).Get(childCtx, &processResult)
if err != nil {
return err
}
logger.Sugar().Infof("Processed signal: %v, result: %v", signal, processResult)
}
return nil
}
We don't use selector.AddReceive at all.
But in this example, which also uses signal channels: Changing the uber cadence sleeptime based on external input
I'll also paste the code here:
func SampleTimerWorkflow(ctx workflow.Context, timerDelay time.Duration) error {
logger := workflow.GetLogger(ctx)
resetCh := workflow.GetSignalChannel(ctx, "reset")
timerFired := false
delay := timerDelay
for !timerFired {
selector := workflow.NewSelector(ctx)
logger.Sugar().Infof("Setting up a timer to fire after: %v", delay)
timerCancelCtx, cancelTimerHandler := workflow.WithCancel(ctx)
timerFuture := workflow.NewTimer(timerCancelCtx, delay)
selector.AddFuture(timerFuture, func(f workflow.Future) {
logger.Info("Timer Fired.")
timerFired = true
})
selector.AddReceive(resetCh, func(c workflow.Channel, more bool) {
logger.Info("Reset signal received.")
logger.Info("Cancel outstanding timer.")
cancelTimerHandler()
var t int
c.Receive(ctx, &t)
logger.Sugar().Infof("Reset delay: %v seconds", t)
delay = time.Second * time.Duration(t)
})
logger.Info("Waiting for timer to fire.")
selector.Select(ctx)
}
workflow.GetLogger(ctx).Info("Workflow completed.")
return nil
}
You can see there is a selector.AddReceive here; I'm not entirely sure what its purpose is or when I should use it.
I am trying to send a signal to my workflow that allows me to extend an expiration time, i.e. it would delay the call to an ExpirationActivity.
When following this example (combined with my current code), as soon as I send the reset signal, timerFired seems to get set to true immediately.
My current code is below (I've taken out some irrelevant if statements). Previously I was using only one instance of selector.Select, but somewhere my code wasn't behaving properly.
func Workflow(ctx workflow.Context) (string, error) {
// local state per bonus workflow
bonusAcceptanceState := pending
logger := workflow.GetLogger(ctx).Sugar()
logger.Info("Bonus workflow started")
timerCreated := false
timerFired := false
delay := timerDelay
// To query state in Cadence GUI
err := workflow.SetQueryHandler(ctx, "bonusAcceptanceState", func(input []byte) (string, error) {
return bonusAcceptanceState, nil
})
if err != nil {
logger.Info("SetQueryHandler failed: " + err.Error())
return "", err
}
info := workflow.GetInfo(ctx)
executionTimeout := time.Duration(info.ExecutionStartToCloseTimeoutSeconds) * time.Second
// decisionTimeout := time.Duration(info.TaskStartToCloseTimeoutSeconds) * time.Second
decisionTimeout := time.Duration(info.ExecutionStartToCloseTimeoutSeconds) * time.Second
maxRetryTime := executionTimeout // retry for the entire time
retryPolicy := &cadence.RetryPolicy{
InitialInterval: time.Second,
BackoffCoefficient: 2,
MaximumInterval: executionTimeout,
ExpirationInterval: maxRetryTime,
MaximumAttempts: 0, // unlimited, bound by maxRetryTime
NonRetriableErrorReasons: []string{},
}
ao := workflow.ActivityOptions{
TaskList: taskList,
ScheduleToStartTimeout: executionTimeout, // time until a task has to be picked up by a worker
ScheduleToCloseTimeout: executionTimeout, // total execution timeout
StartToCloseTimeout: decisionTimeout, // time that a worker can take to process a task
RetryPolicy: retryPolicy,
}
ctx = workflow.WithActivityOptions(ctx, ao)
selector := workflow.NewSelector(ctx)
timerCancelCtx, cancelTimerHandler := workflow.WithCancel(ctx)
var signal *signalType
for {
signalChan := workflow.GetSignalChannel(ctx, signalName)
// resetCh := workflow.GetSignalChannel(ctx, "reset")
selector.AddReceive(signalChan, func(c workflow.Channel, more bool) {
c.Receive(ctx, &signal)
})
selector.Select(ctx)
if signal.Type == "exit" {
return "", nil
}
// We can check the age and return an appropriate response
if signal.Type == "ACCEPT" {
if bonusAcceptanceState == pending {
logger.Info("Bonus Accepted")
bonusAcceptanceState = accepted
var status string
future := workflow.ExecuteActivity(ctx, AcceptActivity)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
// Start expiration timer
if !timerCreated {
timerCreated = true
timerFuture := workflow.NewTimer(timerCancelCtx, delay)
selector.AddFuture(timerFuture, func(f workflow.Future) {
logger.Info("Timer Fired.")
timerFired = true
})
}
}
}
if signal.Type == "ROLLOVER_1X" && bonusAcceptanceState == accepted {
var status string
future := workflow.ExecuteActivity(ctx, Rollover1x)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
selector.Select(ctx)
}
if signal.Type == "ROLLOVER_COMPLETE" && bonusAcceptanceState == accepted {
var status string
future := workflow.ExecuteActivity(ctx, RolloverComplete)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
return "", err
}
// Workflow is terminated on return result
return status, nil
}
for !timerFired && bonusAcceptanceState == accepted && signal.Type == "RESET" {
cancelTimerHandler()
i, err := strconv.Atoi(signal.Value)
if err != nil {
logger.Infow("error in converting")
}
logger.Infof("Reset delay: %v seconds", i)
delay = time.Minute * time.Duration(i)
timerFuture := workflow.NewTimer(timerCancelCtx, delay)
selector.AddFuture(timerFuture, func(f workflow.Future) {
logger.Info("Timer Fired.")
timerFired = true
})
selector.Select(ctx)
}
if timerFired {
var status string
future := workflow.ExecuteActivity(ctx, ExpirationActivity)
if err := future.Get(ctx, &status); err != nil {
logger.Errorw("Activity failed", "error", err)
}
return status, nil
}
}
}

TL;DR:
You only use selector.AddReceive when you need a selector to listen on a channel, as in your 2nd code snippet. If you only need to process signals from a channel directly, without a selector, then you don't need it.
selector.Select lets the code block until one of the registered events happens, so that you don't have to busy-loop while waiting.
More details on when to use them
Essentially, this is exactly the same concept as Go's select statement, which allows you to wait on timers and channels. Go doesn't need a selector.Select() call simply because select is baked into the language itself, whereas Cadence is a library.
So, just as in Go, you don't have to use a select statement to use a timer or a channel. You only need it when you have to listen on multiple event sources at once.
For example, suppose you have two channels and you want some common logic to run whenever either of them receives something, e.g. incrementing a counter. The counter doesn't belong to either channel; it's shared. Then using a selector looks nice.
chA := workflow.GetSignalChannel(ctx, SignalNameA)
chB := workflow.GetSignalChannel(ctx, SignalNameB)
counter := 0
selector := workflow.NewSelector(ctx)
selector.AddReceive(chA, func(c workflow.Channel, more bool) { c.Receive(ctx, nil) })
selector.AddReceive(chB, func(c workflow.Channel, more bool) { c.Receive(ctx, nil) })
for {
selector.Select(ctx)
counter += 1
}
The workflow code with selector looks very similar to this in Golang:
counter := 0
for {
select {
case <-chA:
counter += 1
case <-chB:
counter += 1
}
}
Otherwise you may have to use two goroutines to listen on each channel, and do the counting. The golang code looks like this:
counter := 0
go func(){
for{
<-chA
counter += 1
}
}()
go func(){
for{
<-chB
counter += 1
}
}()
This can be a race condition, unless the counter is implemented in a thread-safe way.
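As an aside, a minimal sketch of making that plain-Go counter thread-safe (my illustration, assuming the sync/atomic package; not part of the original answer):
var counter int64
go func() {
	for range chA {
		// atomic increment avoids a data race between the two goroutines
		atomic.AddInt64(&counter, 1)
	}
}()
go func() {
	for range chB {
		atomic.AddInt64(&counter, 1)
	}
}()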
And in Cadence workflow code, it's something like this:
chA := workflow.GetSignalChannel(ctx, SignalNameA)
chB := workflow.GetSignalChannel(ctx, SignalNameB)
counter := 0
workflow.Go(ctx, func(ctx workflow.Context) {
for {
chA.Receive(ctx, nil)
counter += 1
}
})
workflow.Go(ctx, func(ctx workflow.Context) {
for {
chB.Receive(ctx, nil)
counter += 1
}
})
However, there is no such race condition in Cadence, because Cadence coroutines (started by workflow.Go()) are not truly concurrent; both workflow versions above work correctly.
Cadence still provides a selector just like Go's select, mostly because the first version is more natural to write.

Also, check the future's result inside the timer callback. A cancelled timer still completes its future (with an error), so without this check timerFired gets set even when the timer was cancelled rather than fired:
selector.AddFuture(timerFuture, func(f workflow.Future) {
err := f.Get(ctx, nil)
if err == nil {
logger.Info("Timer Fired.")
timerFired = true
}
})
ref: https://github.com/uber-go/cadence-client/blob/0256258b905b677f2f38fcacfbda43398d236309/workflow/deterministic_wrappers.go#L128-L129
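Putting the pieces together, here is a rough sketch (my own, hedged illustration combining the SampleTimerWorkflow pattern above with the reset handling from your code; not a drop-in fix) of a reset loop that re-arms the timer and only treats it as fired when it actually completed:
for !timerFired {
	selector := workflow.NewSelector(ctx) // fresh selector each iteration, as in SampleTimerWorkflow
	timerCtx, cancelTimer := workflow.WithCancel(ctx)
	timerFuture := workflow.NewTimer(timerCtx, delay)
	selector.AddFuture(timerFuture, func(f workflow.Future) {
		// A cancelled timer resolves its future with an error, so only
		// flag the timer as fired when Get returns no error.
		if err := f.Get(ctx, nil); err == nil {
			timerFired = true
		}
	})
	selector.AddReceive(resetCh, func(c workflow.Channel, more bool) {
		var minutes int
		c.Receive(ctx, &minutes)
		cancelTimer() // cancel the outstanding timer; the loop re-arms it with the new delay
		delay = time.Minute * time.Duration(minutes)
	})
	selector.Select(ctx)
}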

Related

How to read all data from a TCP socket server and execute an operation afterwards

After spending many hours on this, I can't find a way to read all the data coming from a TCP socket server and then perform an operation, because I can't find a way to break the loop.
The socket server sends texts containing many lines ending with "\n". The client should read all those lines and then make a POST request with the data, but the loop always hangs and there is no way to break it: the client just keeps waiting for more data, so a three-second timeout could serve as the stop condition.
I have tried different solutions (Scanner, ReadString, ReadLine, ReadAll) but it always hangs and the loop never finishes.
The last line in the code is never printed.
conn, err := net.Dial("tcp", "127.0.0.1:15000")
reader := bufio.NewReader(conn)
message := ""
for {
line, err := reader.ReadString('\n')
if err == io.EOF {
break
}
message += line
}
log.Println(message)
If your only option is to read lines until a timeout, you can set a read deadline on the connection after the first read completes. You can then intercept the timeout error, and convert it to an EOF for the buffered reader to correctly interpret your intent.
type timeoutReader struct {
net.Conn
once sync.Once
}
func (r *timeoutReader) Read(b []byte) (int, error) {
n, err := r.Conn.Read(b)
// Set a read deadline only after the first Read completes
r.once.Do(func() {
r.Conn.SetReadDeadline(time.Now().Add(3 * time.Second))
})
// If we got a timeout, treat it as an io.EOF so the bufio.Scanner handles
// the error as if it was the normal end of the stream.
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
return n, io.EOF
}
return n, err
}
func main() {
conn, err := net.Dial("tcp", "127.0.0.1:15000")
if err != nil {
log.Fatal(err)
}
scanner := bufio.NewScanner(&timeoutReader{Conn: conn})
message := ""
for scanner.Scan() {
message += scanner.Text()
}
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
log.Println(message)
}
If the criterion is a timeout of 3 seconds after the first line is received, another solution is to close the socket 3 seconds after the first line arrives.
var firstLineReceived bool
conn, err := net.Dial("tcp", "127.0.0.1:15000")
reader := bufio.NewReader(conn)
message := ""
for {
line, err := reader.ReadString('\n')
if err != nil {
// After conn.Close() fires below, ReadString returns a non-EOF
// "use of closed network connection" error, so break on any error.
break
}
message += line
if !firstLineReceived {
firstLineReceived = true
go func(){
time.Sleep(3*time.Second)
conn.Close()
}()
}
}
log.Println(message)

How to set ToS field in IP header for a TCP connection using Golang

I am trying to create a TCP server and client using Golang where I am able to set the Type of Service field in the IP header in order to prioritise different traffic flows.
The client and servers are able to communicate but I can not figure out how to set the ToS field.
I have tried using the ipv4 Golang package with the method described here: https://godoc.org/golang.org/x/net/ipv4#NewConn
A simplified server example:
func main () {
ln, err := net.Listen("tcp4", "192.168.0.20:1024")
if err != nil {
// error handling
}
defer ln.Close()
for {
c, err := ln.Accept()
if err != nil {
// error handling
}
go func(c net.Conn) {
defer c.Close()
if err := ipv4.NewConn(c).SetTOS(0x28); err != nil {
fmt.Println("Error: ", err.Error())
}
}(c)
}
}
And the corresponding client (also simplified)
func main () {
conn, err := net.Dial("tcp4", "192.168.0.20:1024")
if err != nil {
fmt.Println(err)
}
for {
writer := bufio.NewWriter(conn)
// Create "packet"
Data := make([]byte, 1200)
endLine := "\r\n"
//Set packetLength
length := strconv.FormatInt(int64(1200), 10)
copy(Data[0:], length)
//Set ID
idString := strconv.FormatInt(int64(1), 10)
if strings.Contains(idString, "\r") || strings.Contains(idString, "\n") || strings.Contains(idString, "\r\n") {
fmt.Println("This is gonna result in an error in the id string.")
}
idbuf := []byte(idString)
copy(Data[15:], idbuf)
//Set timestamp
timestamp0 := time.Now().UnixNano()
timestampString := strconv.FormatInt(timestamp0, 10)
if strings.Contains(timestampString, "\r") || strings.Contains(timestampString, "\n") || strings.Contains(timestampString, "\r\n") {
fmt.Println("This is gonna result in an error in the timestamp string.")
}
buf := []byte(timestampString)
copy(Data[50:], buf)
copy(Data[int(1200)-2:], endLine)
if len(Data) != int(1200) {
fmt.Println("This is also gonna be an error. Length is: ", len(Data))
}
//Send the data and flush the writer
writer.Write(Data)
writer.Flush()
}
//time.Sleep(1*time.Nanosecond)
}
I have also tried creating my own dialer with a control function that passes a syscall in order to set the socket like this:
dialer := &net.Dialer{
Timeout: 5 * time.Second,
Deadline: time.Time{},
LocalAddr: tcpAddr,
DualStack: false,
FallbackDelay: 0,
KeepAlive: 0,
Resolver: nil,
Control: highPrio,
}
func highPrio(network, address string, c syscall.RawConn) error {
return c.Control(func(fd uintptr) {
// set the socket options
err := syscall.SetsockoptInt(syscall.Handle(fd), syscall.IPPROTO_IP, syscall.IP_TOS, 128)
if err != nil {
log.Println("setsocketopt: ", err)
}
})
}
I am verifying that it does not work by inspecting the traffic with Wireshark and am using Windows 10 Pro as my OS.
I tried your ToS-setting method at Dial() with Go 1.15.5 and it worked:
dialer := net.Dialer{
Timeout: this.TcpWaitConnectTimeout,
}
dialer.Control = func(network, address string, c syscall.RawConn) error {
var err error
c.Control(func(fd uintptr) {
err = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_TOS, 0x80)
})
return err
}
c, err := dialer.Dial("tcp", this.serverAddr)
tcpdump shows me the right ToS.
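For the listening side, a hedged sketch of the equivalent approach (my assumption, not from the answer above; assumes Linux and the "context", "log", "net", and "syscall" imports) is to set IP_TOS through net.ListenConfig.Control before the socket starts listening. Whether accepted connections inherit the value is platform-dependent, so setting it per accepted connection with ipv4.NewConn(c).SetTOS, as in the question, remains the portable route:
lc := net.ListenConfig{
	Control: func(network, address string, c syscall.RawConn) error {
		var sockErr error
		err := c.Control(func(fd uintptr) {
			// Linux-style call; on Windows the fd must be wrapped in syscall.Handle instead.
			sockErr = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_TOS, 0x28)
		})
		if err != nil {
			return err
		}
		return sockErr
	},
}
ln, err := lc.Listen(context.Background(), "tcp4", "192.168.0.20:1024")
if err != nil {
	log.Fatal(err)
}
defer ln.Close()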

Why are database connections automatically closed?

I'm having an issue with Gorm / Psql where my database connection get automatically closed.
I never call defer dbInstance.Close() in main.go (not anymore for now, I've removed it, since that's the only place in my code where I felt the connection could be wrongfully closed) nor was it ever anywhere else.
The way I'm initializing my db is with a "db" package that looks like this:
package db
import (
"fmt"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/postgres"
)
var DbInstance *gorm.DB
func Init() *gorm.DB {
if DbInstance != nil {
return DbInstance
}
fmt.Println("Opening Db")
db, err := gorm.Open("postgres", "host=localhost port=5432 user=admin dbname=dbname sslmode=disable password=")
if err != nil {
fmt.Println(err)
panic("failed to connect database")
}
return db
}
then I call db.Init() in my main.go and then only call the db from "db.dbInstance" from the rest of my program.
As I've previously mentioned I used to call defer db.DbInstance.Close() from main.go but have tried removing it to see if it fixed the issue but it didn't.
What's strange is that the db connection will work for hours and hours in many different calls/function but always end up closing at some point.
From what I understand it should work:
gorm.Open() uses https://golang.org/pkg/database/sql/, which is
threadsafe and handles connection pooling for you. Don't call
gorm.Open() every request. Call it once when setting up your
application, and make sure you use defer db.Close() so connections are
cleanly ended when your program exits.
Lastly, I need to add that it seems (I'm not 100% sure) that the connection closes after I do batch inserts, but again the .Close() function is never called anywhere in my program.
I'm a bit lost as to what could be happening? Garbage collector (doesn't make sense the var is global)? psql driver closing in the background? Configuration issue?
I'm adding the batch function for reference just in case:
func InsertWithPostGresLimitSizeV2(DB *gorm.DB, array []interface{}) {
if len(array) == 0 {
return
}
numberOfParams := len(DB.NewScope(array[0]).Fields())
// postgres is limited to 65535 params.
maxStructPerBulk := int(65535 / numberOfParams)
currentIndex := 0
if len(array) > maxStructPerBulk {
for len(array) > currentIndex {
if (maxStructPerBulk + currentIndex) < len(array) {
slice := array[currentIndex:(currentIndex + maxStructPerBulk)]
currentIndex += maxStructPerBulk
_, err := DB.BatchInsert(slice)
log.Println(err)
} else {
slice := array[currentIndex:len(array)]
currentIndex = len(array)
_, err := DB.BatchInsert(slice)
log.Println(err)
}
}
} else {
_, err := DB.BatchInsert(array)
log.Println(err)
}
}
func BatchInsert(db *gorm.DB,objArr []interface{}) (int64, error) {
if len(objArr) == 0 {
return 0, errors.New("insert a slice length of 0")
}
mainObj := objArr[0]
mainScope := db.NewScope(mainObj)
mainFields := mainScope.Fields()
quoted := make([]string, 0, len(mainFields))
for i := range mainFields {
if (mainFields[i].IsPrimaryKey && mainFields[i].IsBlank) || (mainFields[i].IsIgnored) {
continue
}
quoted = append(quoted, mainScope.Quote(mainFields[i].DBName))
}
placeholdersArr := make([]string, 0, len(objArr))
for _, obj := range objArr {
scope := db.NewScope(obj)
fields := scope.Fields()
placeholders := make([]string, 0, len(fields))
for i := range fields {
if (fields[i].IsPrimaryKey && fields[i].IsBlank) || (fields[i].IsIgnored) {
continue
}
var vars interface{}
if (fields[i].Name == "CreatedAt" || fields[i].Name == "UpdatedAt") && fields[i].IsBlank {
vars = gorm.NowFunc()
} else {
vars = fields[i].Field.Interface()
}
placeholders = append(placeholders, mainScope.AddToVars(vars))
}
placeholdersStr := "(" + strings.Join(placeholders, ", ") + ")"
placeholdersArr = append(placeholdersArr, placeholdersStr)
mainScope.SQLVars = append(mainScope.SQLVars, scope.SQLVars...)
}
mainScope.Raw(fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
mainScope.QuotedTableName(),
strings.Join(quoted, ", "),
strings.Join(placeholdersArr, ", "),
))
if err := mainScope.Exec().DB().Error; err != nil {
return 0, err
}
return mainScope.DB().RowsAffected, nil
}
One last thing: I was thinking of "fixing" the issue by routing my db access through the function below, but the ping would slow down each of my calls:
func getDb() *gorm.DB {
err := DbInstance.DB().Ping()
if err != nil {
fmt.Println("Connection to db closed opening a new one")
return Init()
}
return DbInstance
}
You can global-search for DbInstance.Close() to make sure you never call it and close the connection yourself.
If that's not it, you can set a longer db timeout and raise the number of idle db connections.
Finally, it's most important to support an auto-reconnecting db data source.
Here is part of my auto-reconnecting code that you might refer to:
var DB *gorm.DB
func init() {
dbConfig = fmt.Sprintf("host=%s user=%s dbname=%s sslmode=%s password=%s",
"localhost",
"postgres",
"dbname",
"disable",
"password",
)
db, err := gorm.Open("postgres",
dbConfig,
)
db.SingularTable(true)
db.LogMode(true)
db.DB().SetConnMaxLifetime(10 * time.Second)
db.DB().SetMaxIdleConns(30)
DB = db
// auto-connect,ping per 60s, re-connect on fail or error with intervels 3s, 3s, 15s, 30s, 60s, 60s ...
go func(dbConfig string) {
var intervals = []time.Duration{3 * time.Second, 3 * time.Second, 15 * time.Second, 30 * time.Second, 60 * time.Second,
}
for {
time.Sleep(60 * time.Second)
if e := DB.DB().Ping(); e != nil {
L:
for i := 0; i < len(intervals); i++ {
e2 := RetryHandler(3, func() (bool, error) {
var e error
DB, e = gorm.Open("postgres", dbConfig)
if e != nil {
return false, errorx.Wrap(e)
}
return true, nil
})
if e2 != nil {
fmt.Println(e.Error())
time.Sleep(intervals[i])
if i == len(intervals)-1 {
i--
}
continue
}
break L
}
}
}
}(dbConfig)
}
By the way:
// Try f() n times on fail and one time on success
func RetryHandler(n int, f func() (bool, error)) error {
ok, er := f()
if ok && er == nil {
return nil
}
if n-1 > 0 {
return RetryHandler(n-1, f)
}
return er
}
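As a lighter-weight first step (my suggestion, not part of the answer above, and assuming Init() really does assign the opened connection to DbInstance): database/sql already re-establishes broken connections in its pool, so tuning the pool on the *sql.DB that gorm wraps is often enough. The numbers below are illustrative only:
sqlDB := DbInstance.DB() // the *sql.DB underneath jinzhu/gorm
sqlDB.SetMaxOpenConns(20)                  // cap concurrent connections to the server
sqlDB.SetMaxIdleConns(10)                  // keep some idle connections warm
sqlDB.SetConnMaxLifetime(30 * time.Minute) // recycle connections before the server or a proxy drops them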

Go - Sending simultaneous emails through goroutines times out - connection reset by peer

I start 50 worker goroutines that all read from a channel. The main process then loops forever, reading 100 database records at a time and sleeping for 10 seconds between database calls. As it loops through the 100 email records to send, it passes each record through the channel to one of the 50 workers, which then sends the email. The problem is, after it goes through about 1000 emails, I start getting errors like this:
gomail: could not send email 1: read tcp 10.2.30.25:56708->216.###.##.###:25: read: connection reset by peer
I have to send out about 50k emails per day. What do you recommend? Here's the main code that processes the email queue and passes each record to the worker through the channel:
func main() {
MaxWorkers := 50
println("Creating: " + strconv.Itoa(MaxWorkers) + " workers..")
batchChannel := make(chan EmailQueue.EmailQueueObj)
for i := 0; i < MaxWorkers; i++ {
go startWorker(batchChannel)
}
for {
println("Getting queue..")
data, _ := EmailQueue.GetQueue() //returns 100 database records
println("Reading through " + strconv.Itoa(len(data)) + " records..")
for _, element := range data {
batchChannel <- element
}
time.Sleep(10 * time.Second)
}
}
func startWorker(channel chan EmailQueue.EmailQueueObj) {
var s gomail.SendCloser
var err error
open := false
for obj := range channel {
if !open {
s, err = dialer.Dial()
if err != nil {
fmt.Println(err.Error())
return
} else {
sendEmail(obj, &s)
}
} else {
sendEmail(obj, &s)
}
open = true
}
s.Close()
}
func sendEmail(obj EmailQueue.EmailQueueObj, s *gomail.SendCloser) {
m := gomail.NewMessage()
m.SetHeader("From", "example#example.com")
m.SetHeader("To", obj.Recipient)
m.SetHeader("Subject", obj.Subject.String)
m.SetBody("text/html", obj.HTML.String)
// Send the email
response := ""
status := ""
if err := gomail.Send(*s, m); err != nil {
response = err.Error()
status = "error"
} else {
response = "Email sent"
status = "sent"
}
m.Reset()
return
}
I am using the library gomail to send the emails. I am open to anything, even a new library or method to send these emails. But, what I'm doing currently is not working. Any help is greatly appreciated!

No buffer space available (tcp.cpp:69) when setting SNDBUF and RCVBUF ZeroMQ, golang, MacOSX

I have zeromq (stable 4.1.4) installed using brew on macOS and have written a simple PUB/SUB program to test it. But when I run the sample program with --bufsize > 5 (i.e. a buffer larger than 5 MB, e.g. go run go_zmq_pubsub.go --bufsize=6), it aborts with the following error:
No buffer space available (tcp.cpp:69)
SIGABRT: abort
PC=0x7fff9911c286 m=0
signal arrived during cgo execution
Below is the program I used to test the zeromq4.x
package main
import (
"fmt"
"flag"
"strconv"
"sync"
log "github.com/Sirupsen/logrus"
zmq "github.com/pebbe/zmq4"
"time"
)
var _ = fmt.Println
func main(){
var port int
var bufsize int
flag.IntVar(&port, "port", 7676, "server's zmq tcp port")
flag.IntVar(&bufsize, "bufsize", 0, "socket kernel buffer size")
flag.Parse();
publisher, err := zmq.NewSocket(zmq.PUB)
if(err != nil) {
log.Fatal(err)
}
//set publisher kernel transmit buffer size
//convert into bytes
if err := publisher.SetSndbuf(bufsize * 1000000); err != nil {
log.Fatal(err)
}
defer publisher.Close()
publisher.Bind("tcp://*:" + strconv.Itoa(port))
//SETUP subscriber
subscriber, err := zmq.NewSocket(zmq.SUB)
if(err != nil) {
log.Fatal(err)
}
//set subscriber kernel receive buffer size
if err := subscriber.SetRcvbuf(bufsize * 1000000); err != nil {
log.Fatal(err)
}
defer subscriber.Close()
subscriber.Connect("tcp://127.0.0.1:" + strconv.Itoa(port))
subscriber.SetSubscribe("")
var wg sync.WaitGroup
wg.Add(2)
idx := 0
go func(wg *sync.WaitGroup) {
//start streaming messages
ticker := time.NewTicker(1 * time.Second)
go func() {
for {
select {
case <-ticker.C:
_, err = publisher.Send("PKG:"+strconv.Itoa(idx), 0)
idx++;
if(err != nil) {
log.Error(err)
}
}
}
}()
}(&wg)
//receiver
go func(wg *sync.WaitGroup) {
go func(){
for {
payload, err := subscriber.Recv(0)
_ = payload
if err != nil {
log.Error(err)
break
}
//now sending into worker pool
log.Info("RECEIVE:" + payload)
}
}()
}(&wg)
wg.Wait()
}
On CentOS 7 with libzeromq built from source, the above code works without problems.
I'm not sure if it's due to libzeromq or the OS itself.
Thanks.
A buffer size of > 5MB is pointless. Anything beyond the bandwidth-delay product of the link concerned is wasted space.
Moderate your requirements.
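For a rough sense of scale (illustrative numbers, not from the answer): the bandwidth-delay product is bandwidth × round-trip time. On a 1 Gbit/s link with a 40 ms RTT that is about 125 MB/s × 0.04 s ≈ 5 MB, while on a loopback or LAN path with sub-millisecond RTT it is on the order of hundreds of kilobytes at most, so a multi-megabyte buffer buys nothing for data that is actually in flight on the link.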