Cadence go-client: client panics when reaching server to fetch workflow results - cadence-workflow

First-time user of Cadence:
Scenario
I have a Cadence server running in my sandbox environment.
The intent is to fetch the workflow status.
I am trying to use the Cadence client
go.uber.org/cadence/client
on my local host to talk to my sandbox Cadence server.
This is my simple code snippet:
var cadClient client.Client

func main() {
	wfID := "01ERMTDZHBYCH4GECHB3J692PC" // I got this from cadence-ui
	ctx := context.Background()
	wf := cadClient.GetWorkflow(ctx, wfID, "") // <<< panic hits here
	log.Println("Workflow RunID: ", wf.GetRunID())
}
I am sure I am getting this wrong because the client does not know how to reach the Cadence server.
I referred to https://cadenceworkflow.io/docs/go-client/ to find the correct usage but could not find any reference (it is possible that I missed it).
Any help on how to resolve/implement this would be much appreciated.

I am not sure what panic you got. Based on the code snippet, it's likely that you haven't initialized the client.
To initialize it, follow the sample code here: https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/common/sample_helper.go#L82
and
https://github.com/uber-common/cadence-samples/blob/aac75c7ca03ec0c184d0f668c8cd0ea13d3a7aa4/cmd/samples/common/factory.go#L113
ch, err := tchannel.NewChannelTransport(
	tchannel.ServiceName(_cadenceClientName))
if err != nil {
	b.Logger.Fatal("Failed to create transport channel", zap.Error(err))
}
b.Logger.Debug("Creating RPC dispatcher outbound",
	zap.String("ServiceName", _cadenceFrontendService),
	zap.String("HostPort", b.hostPort))
b.dispatcher = yarpc.NewDispatcher(yarpc.Config{
	Name: _cadenceClientName,
	Outbounds: yarpc.Outbounds{
		_cadenceFrontendService: {Unary: ch.NewSingleOutbound(b.hostPort)},
	},
})
if b.dispatcher != nil {
	if err := b.dispatcher.Start(); err != nil {
		b.Logger.Fatal("Failed to create outbound transport channel: %v", zap.Error(err))
	}
}
client := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))
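Putting it together for the original snippet, here is a minimal sketch of initializing the client before calling GetWorkflow. The host/port, domain, and constant names below are assumptions, not values from the post:

package main

import (
	"context"
	"log"

	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/client"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/tchannel"
)

const (
	cadenceClientName      = "cadence-client"
	cadenceFrontendService = "cadence-frontend"
	hostPort               = "127.0.0.1:7933" // assumed sandbox frontend address
	domain                 = "my-domain"      // assumed domain name
)

func main() {
	ch, err := tchannel.NewChannelTransport(tchannel.ServiceName(cadenceClientName))
	if err != nil {
		log.Fatalln("failed to create transport channel:", err)
	}
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: cadenceClientName,
		Outbounds: yarpc.Outbounds{
			cadenceFrontendService: {Unary: ch.NewSingleOutbound(hostPort)},
		},
	})
	if err := dispatcher.Start(); err != nil {
		log.Fatalln("failed to start dispatcher:", err)
	}
	service := workflowserviceclient.New(dispatcher.ClientConfig(cadenceFrontendService))

	// Only now is the client usable; calling GetWorkflow on an uninitialized
	// client.Client is what causes the panic in the original snippet.
	cadClient := client.NewClient(service, domain, &client.Options{})

	wfID := "01ERMTDZHBYCH4GECHB3J692PC"
	wf := cadClient.GetWorkflow(context.Background(), wfID, "")
	log.Println("Workflow RunID:", wf.GetRunID())
}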

Related

How to use the Kubernetes client-go server side apply functionality properly?

When I run the below code using the client-go library, I get an inscrutable error. What am I doing wrong?
ctx := context.TODO()
ns := applycorev1.NamespaceApplyConfiguration{
	ObjectMetaApplyConfiguration: &applymetav1.ObjectMetaApplyConfiguration{
		Name: to.StringPtr("foobar"),
	},
}
if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, &ns, v1.ApplyOptions{}); err != nil {
	panic(err)
}
Yields the very unhelpful error:
panic: PatchOptions.meta.k8s.io "" is invalid: fieldManager: Required value: is required for apply patch
What is the correct way to send an Apply operation to the API server in Kube using client-go?
At the very least, you should set FieldManager in your ApplyOptions.
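For example, a minimal sketch of the same Apply call with a field manager set (the manager name "my-controller" is just an assumed placeholder):

if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, &ns, v1.ApplyOptions{
	FieldManager: "my-controller", // assumed name; use something unique to your controller/tool
	Force:        true,            // optional: resolve conflicts by taking ownership of the fields
}); err != nil {
	panic(err)
}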
I am also trying this out; for now I am referring to https://ymmt2005.hatenablog.com/entry/2020/04/14/An_example_of_using_dynamic_client_of_k8s.io/client-go

Amazon RDS PostgreSQL Performance input/output

I am facing a complex challenge with an RDS PostgreSQL instance and am almost out of ideas on how to handle it. I am launching an app (React + Go + PostgreSQL) for which I expect around 250-300 users simultaneously making the same API GET request for as long as they wish to use it.
It is a questionnaire kind of app: users retrieve one question from the database and answer it, the server saves the answer in the DB, and then the user can press next to fetch the next question. I tested my API endpoint with k6 using 500 virtual users for 2 minutes, and the database returns dial: i/o timeout or sometimes even connection rejected, usually when it reaches around 6000 requests, giving roughly 93% success. I tried to fine-tune the RDS instance with the tcp_keepalives parameters but without any luck; I still cannot get 100% of the requests to pass. I also tried to increase the general storage from the 20 GB minimum to 100 GB in RDS and to switch from the free db.t3.micro to the db.t3.medium instance size.
Any hint would be much appreciated. It should be possible for a normal Go server with Postgres to handle these requests at the same time, shouldn't it? It is just a regular select * from x where y statement.
EDIT (CODE SAMPLE):
I use a dependency injection pattern and so I have only one instance of the DB passed to all the other repositories including the API package. The db repo looks like this:
func NewRepository() (DBRepository, error) {
	dbname := getenv("POSTGRES_DATABASE", "")
	username := getenv("POSTGRES_ROOT_USERNAME", "")
	password := getenv("POSTGRES_ROOT_PASSWORD", "")
	host := getenv("POSTGRES_HOST", "")
	port := getenv("POSTGRES_PORT", "")
	dsn := fmt.Sprintf("host=%s user=%s password=%s"+
		" dbname=%s port=%s sslmode=disable TimeZone=Europe/Bucharest",
		host, username, password, dbname, port)
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		return nil, err
	}
	db.AutoMigrate(
	// migrated tables are here
	)
	return &dbRepository{
		db: db,
	}, nil
}
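Not part of the original post, but one knob worth checking under this kind of load is the pool of the underlying *sql.DB that gorm wraps. A hedged sketch that could go right after gorm.Open in NewRepository (gorm v2 assumed, the numbers are placeholders rather than recommendations, and "time" must be imported):

	// Tune the underlying database/sql connection pool (gorm v2 exposes it via db.DB()).
	sqlDB, err := db.DB()
	if err != nil {
		return nil, err
	}
	sqlDB.SetMaxOpenConns(50)                  // cap concurrent connections; size against the RDS max_connections setting
	sqlDB.SetMaxIdleConns(25)                  // keep some idle connections warm between requests
	sqlDB.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically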
Currently the parameters used in RDS for TCP keepalive are:
tcp_keepalives_count 30
tcp_keepalives_idle 1000
tcp_keepalives_interval 1000
and I also tried with different numbers.
The query I am doing is a simple .Find() statement from the gorm package, but it seems like this is not the issue, since it gets blocked whenever it hits the first query/connection with the DB. There are 2 queries executed in the endpoint I am testing, but it gets stuck on the first. If more info is needed I will update, but this issue is getting really frustrating.
My k6 test is the following:
import http from 'k6/http';
import { check } from 'k6';
import { sleep } from 'k6';

export const options = {
	insecureSkipTLSVerify: true,
	stages: [
		{ target: 225, duration: '2m' },
	],
};

const access_tokens = []
let random_token = access_tokens[Math.floor(Math.random() * access_tokens.length)];
const params = {
	headers: { 'Authorization': `Bearer ${random_token}` }
};

export default function () {
	let res = http.get('endpoint here', params);
	check(res, { 'Message': (r) => r.status === 202 });
	sleep(1);
}
The DB tables are also indexed and tested with the explain statement.

Mongo-go-driver: context deadline exceeded

I have recently upgraded to the newer and official golang mongo driver for an app I am working on.
Everything works perfectly for my local development, however when I hook it up and point it to my backend server I get a 'context deadline exceeded' error when calling the client.Ping(...) method.
The old driver code still works fine, and I can also print out the connection string, copy and paste it into the Compass app, and it works without issues.
However, for the life of me I can't work out why this new code returns a context timeout. The only different thing is that mongo is running on a non-standard port of 32680, and I am also using the mgm package. However, it just uses the official mongo driver under the hood.
Mongo version is: 4.0.12 (locally and remote)
Connection code is here:
// NewClient creates a mongo database connection
func NewClient(cfg config.Mongo) (*Client, error) {
	// create database connection string
	conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s", cfg.Username, cfg.Password, cfg.Host, cfg.Port)
	// set mgm conf, i.e. the CtxTimeout value
	conf := mgm.Config{CtxTimeout: cfg.CtxTimeout}
	// set up the mgm / database connection
	err := mgm.SetDefaultConfig(&conf, cfg.Database, options.Client().ApplyURI(conStr))
	if err != nil {
		return nil, errors.Wrapf(err, "failed to connect to mongodb. cfg: %+v. conStr: %+v.", cfg, conStr)
	}
	// get access to the underlying mongodb client driver, db and mgm config.
	// Needed for adding additional tools like seeding/migrations/etc.
	mgmCfg, client, db, err := mgm.DefaultConfigs()
	if err != nil {
		return nil, errors.Wrap(err, "failed to return mgm.DefaultConfigs")
	}
	// NOTE: fails here!
	if err := client.Ping(mgm.Ctx(), readpref.Primary()); err != nil {
		return nil, errors.Wrapf(err, "Ping failed to mongodb. cfg: %+v. conStr: %+v. mgmCfg: %+v", cfg, conStr, mgmCfg)
	}
	return &Client{
		cfg:    cfg,
		mgmCfg: mgmCfg,
		client: client,
		db:     db,
	}, nil
}
HELP! I have no idea how I can debug this any further.
Try adding your authsource in your DSN,
something like
mongodb://USER:PASSWORD@HOST:PORT/DBNAME?authSource=AUTHSOURCE
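Applied to the connection code above, a hedged sketch of the DSN with an explicit authSource ("admin" and the database path segment are assumptions, not values from the post):

	// Build the URI with an explicit authSource; "admin" is only an example,
	// use whatever database the user was actually created in.
	conStr := fmt.Sprintf("mongodb://%s:%s@%s:%s/%s?authSource=admin",
		cfg.Username, cfg.Password, cfg.Host, cfg.Port, cfg.Database)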

Do I need a write buffer for a socket in Go?

Suppose I had a TCP server on Linux; it would create a new goroutine for each new connection. When I want to write data to the TCP connection, should I do it just like this
conn.Write(data)
or do it in a goroutine dedicated to writing, like this
func writeRoutine(conn net.Conn, sendChan chan []byte) {
	for {
		select {
		case msg := <-sendChan:
			conn.Write(msg)
		}
	}
}
just in case the network is busy.
In short, do I need a write buffer in Go, just like in C/C++, when writing to a socket?
PS: maybe I didn't explain the problem clearly.
1. I talked about the server, meaning a TCP server running on Linux. It would create a new goroutine for each new connection, like this:
listener, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
	log.Error(err.Error())
	os.Exit(-1)
}
for {
	conn, err := listener.AcceptTCP()
	if err != nil {
		continue
	}
	log.Debug("Accept a new connection ", conn.RemoteAddr())
	go handleClient(conn)
}
2. I think my problem isn't really about the code. As we know, when we use ssize_t write(int fd, const void *buf, size_t count); to write to a socket fd in C/C++, a TCP server necessarily needs a write buffer per socket in your code, or maybe only some of the data gets written successfully. I mean, do I have to do the same in Go?
You are actually asking two different questions here:
1) Should you use a goroutine per accepted client connection in your TCP server?
2) Given a []byte, how should you write it to the connection?
For 1), the answer is yes. This is the type of pattern that Go is most suited for. If you take a look at the source code of net/http, you will see that it spawns a goroutine for each connection.
As for 2), you should do the same as you would in a C/C++ server: write, check how much was written, and keep writing until you're done, always checking for errors. Here is a code snippet showing how to do it:
func writeConn(data []byte) error {
	var start, c int
	var err error
	for {
		if c, err = conn.Write(data[start:]); err != nil {
			return err
		}
		start += c
		if c == 0 || start == len(data) {
			break
		}
	}
	return nil
}
server [...] create a new goroutine for a new connection
This makes sense because the handler goroutines can block without delaying the server's accept loop.
If you handled each request serially, any blocking syscall would essentially lock up the server for all clients.
goroutine especially for writing
This would only make sense in use cases where you're writing either a really big chunk of data or to a very slow connection and you need your handler to continue unblocked, for instance.
Note that this is not what is commonly understood as a "write buffer".
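For completeness, a minimal sketch of that dedicated-writer pattern (the buffered channel size and helper name are assumptions; it needs "log" and "net" imported):

// startWriter owns all writes to conn; other goroutines just send on the
// returned channel and only block once the buffer is full.
func startWriter(conn net.Conn) chan<- []byte {
	sendChan := make(chan []byte, 64) // assumed buffer size
	go func() {
		for msg := range sendChan {
			if _, err := conn.Write(msg); err != nil {
				log.Println("write error:", err)
				return
			}
		}
	}()
	return sendChan
}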

Errors when many clients connect to Go server

The full code can be downloaded at https://groups.google.com/forum/#!topic/golang-nuts/e1Ir__Dq_gE
Could anyone help me improve this sample code so that it has zero bugs?
I think it will help us develop bug-free client/server code.
My development steps:
Create a server which can handle multiple connections, one goroutine per connection.
Build a client which works fine with a simple protocol.
Expand the client to simulate multiple clients (with the option -n=1000 clients as default).
TODO: try to reduce locking in the server
TODO: try to use bufio to enhance throughput
I found this code to be very unstable, with three problems:
Launch 1000 clients: one of them gets an EOF when reading from the server.
Launch 1050 clients: I soon get too many open files (no clients opened at all).
Launch 1020 clients: I get a runtime error with long stack traces.
Start pollServer: pipe: too many open files
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x28 pc=0x4650d0]
Here I paste my simplified code.
const ClientCount = 1000

func main() {
	srvAddr := "127.0.0.1:10000"
	var wg sync.WaitGroup
	wg.Add(ClientCount)
	for i := 0; i < ClientCount; i++ {
		go func(i int) {
			client(i, srvAddr)
			wg.Done()
		}(i)
	}
	wg.Wait()
}

func client(i int, srvAddr string) {
	conn, e := net.Dial("tcp", srvAddr)
	if e != nil {
		log.Fatalln("Err:Dial():", e)
	}
	defer conn.Close()
	conn.SetTimeout(proto.LINK_TIMEOUT_NS)
	l1 := proto.L1{uint32(i), uint16(rand.Uint32() % 10000)}
	log.Println(conn.LocalAddr(), "WL1", l1)
	e = binary.Write(conn, binary.BigEndian, &l1)
	if e == os.EOF {
		return
	}
	if e != nil {
		return
	}
	// ...
}
This answer on Server Fault [1] suggests that for servers that need to handle a lot of connections, setting a higher ulimit is the thing to do. Also check for memory leaks or file descriptor leaks in the application using lsof.
ulimit -n 99999
[1] https://serverfault.com/a/48820/110909
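If you would rather raise the limit from inside the Go process instead of relying on the shell, here is a hedged sketch using syscall.Setrlimit on Linux (the target of 99999 mirrors the ulimit suggestion above; the helper name is made up):

package main

import (
	"log"
	"syscall"
)

// raiseFDLimit bumps the soft open-file limit, capped at the hard limit.
// It is the in-process equivalent of running `ulimit -n` before starting.
func raiseFDLimit(target uint64) error {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return err
	}
	if target > rl.Max {
		target = rl.Max
	}
	rl.Cur = target
	return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl)
}

func main() {
	if err := raiseFDLimit(99999); err != nil {
		log.Println("could not raise fd limit:", err)
	}
	// ... start the 1000+ clients here ...
}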