Handle REST requests in golang GRPC server

Is it possible for a GRPC server written in golang to also handle REST requests?
I've found grpc-gateway, which enables turning an existing proto schema into a REST endpoint, but I don't think that suits my needs.
I've written a GRPC server, but I also need to serve webhook requests from an external service (like GitHub or Stripe). I'm thinking of writing a second, REST-based server to accept these webhooks (and possibly translate/forward them to the GRPC server), but that seems like a code smell.
Ideally, I'd like my GRPC server to also be able to handle REST requests at an endpoint like /webhook or /event, but I'm not sure whether that's possible and, if it is, how to configure it.

Looks like I asked my question before putting in enough effort to resolve it on my own. Here's an example of serving REST requests alongside GRPC requests:
func main() {
	lis, err := net.Listen("tcp", ":6789")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	// here we register an HTTP handler and start an HTTP server on the listener.
	// note that we do this in a goroutine so that the rest of main can execute;
	// this probably isn't ideal, since the HTTP and GRPC servers now both accept
	// connections from the same listener.
	http.HandleFunc("/event", Handle)
	go http.Serve(lis, nil)

	// now we set up GRPC
	grpcServer := grpc.NewServer()
	// this is a GRPC service defined in a proto file and then generated with protoc
	pipelineServer := Server{}
	pipeline.RegisterPipelinesServer(grpcServer, pipelineServer)

	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %s", err)
	}
}

func Handle(response http.ResponseWriter, request *http.Request) {
	log.Println("handling")
}
With the above, sending a POST to localhost:6789/event will cause the handling log line to be emitted.
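For completeness, a variant that avoids having the HTTP and GRPC servers both accept from the same listener is to give each protocol its own port. This is just a sketch under the assumption that the webhook endpoint can live on a separate port; the port numbers are arbitrary and the generated pipeline registration is commented out because its import path depends on the project:

package main

import (
	"log"
	"net"
	"net/http"

	"google.golang.org/grpc"
)

func Handle(w http.ResponseWriter, r *http.Request) {
	log.Println("handling webhook")
}

func main() {
	// REST/webhook listener on its own port
	httpLis, err := net.Listen("tcp", ":6790")
	if err != nil {
		log.Fatalf("failed to listen for http: %v", err)
	}
	mux := http.NewServeMux()
	mux.HandleFunc("/event", Handle)
	go func() {
		if err := http.Serve(httpLis, mux); err != nil {
			log.Fatalf("http serve: %v", err)
		}
	}()

	// GRPC listener on a separate port
	grpcLis, err := net.Listen("tcp", ":6789")
	if err != nil {
		log.Fatalf("failed to listen for grpc: %v", err)
	}
	grpcServer := grpc.NewServer()
	// pipeline.RegisterPipelinesServer(grpcServer, Server{}) // generated service, import path omitted
	if err := grpcServer.Serve(grpcLis); err != nil {
		log.Fatalf("failed to serve grpc: %v", err)
	}
}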

Related

ZMQ: Message gets lost in Dealer Router Dealer pattern implementation

I have a working setup where multiple clients send messages to multiple servers. Each message targets only one server. The client knows the ids of all possible servers and only sends a message if the target server is actually connected. Each server connects to the socket on startup. There are multiple server workers which bind to an inproc router socket. Communication is always initiated by the client. The messages are sent asynchronously to each server.
This is achieved using a DEALER->ROUTER->DEALER pattern. My problem is that when the number of client & server workers increases, the "ack" sent by the server to the client (Step # 7 below) is never delivered to the client. Thus, the client is stuck waiting for the acknowledgement whereas the server is waiting for more messages from the client. Both systems hang and never come out of this condition unless restarted. Details of the configuration and communication flow are mentioned below.
I've checked the system logs and nothing evident is coming out of them. Any help or guidance to triage this further would be helpful.
At startup, the client connects as a DEALER to the broker's IP:port:
requester, _ := zmq.NewSocket(zmq.DEALER)
The dealers connect to the broker, which connects the frontend (client workers) to the backend (server workers). The frontend is bound to a TCP socket while the backend is bound as inproc.
// Frontend dealer workers
frontend, _ := zmq.NewSocket(zmq.DEALER)
defer frontend.Close()

// For workers local to the broker
backend, _ := zmq.NewSocket(zmq.DEALER)
defer backend.Close()

// Frontend should always use TCP
frontend.Bind("tcp://*:5559")

// Backend should always use inproc
backend.Bind("inproc://backend")

// Initialize Broker to transfer messages
poller := zmq.NewPoller()
poller.Add(frontend, zmq.POLLIN)
poller.Add(backend, zmq.POLLIN)

// Switching messages between sockets
for {
	sockets, _ := poller.Poll(-1)
	for _, socket := range sockets {
		switch s := socket.Socket; s {
		case frontend:
			for {
				msg, _ := s.RecvMessage(0)
				workerID := findWorker(msg[0]) // Get server workerID from message for which it is intended
				log.Println("Forwarding Message:", msg[1], "From Client: ", msg[0], "To Worker: ")
				if more, _ := s.GetRcvmore(); more {
					backend.SendMessage(workerID, msg, zmq.SNDMORE)
				} else {
					backend.SendMessage(workerID, msg)
					break
				}
			}
		case backend:
			for {
				msg, _ := s.RecvMessage(0)
				// Register new workers as they come and go
				fmt.Println("Message from backend worker: ", msg)
				clientID := findClient(msg[0]) // Get client workerID from message for which it is intended
				log.Println("Returning Message:", msg[1], "From Worker: ", msg[0], "To Client: ", clientID)
				frontend.SendMessage(clientID, msg, zmq.SNDMORE)
			}
		}
	}
}
Once the connection is established:
The client sends a set of messages on the frontend socket. The messages contain metadata about all the messages that will follow
requester.SendMessage(msg)
Once these messages are sent, the client waits for an acknowledgement from the server
reply, _ := requester.RecvMessage(0)
The router transfers these messages from the frontend to the backend workers based on the logic defined above
The backend dealers process these messages & respond back over the backend socket asking for more messages
The broker then transfers the message from the backend inproc to the frontend socket
The client processes this message and sends the required messages to the server. The messages are sent as a group (batch) asynchronously
The server receives and processes all of the messages sent by the client
After processing all the messages, the server sends an "ack" back to the client to confirm all the messages are received
Once all the messages are sent by the client and processed by the server, the server sends a final message indicating that the transfer is complete
The communication ends here
This works great when there is a limited set of workers and messages transferred. The implementation has multiple dealers (clients) sending messages to a router. The router in turn sends these messages to another set of dealers (servers) which process the respective messages. Each message contains the Client & Server Worker IDs for identification.
We have configured the following limits for the send & receive queues.
Broker HWM: 10000
Dealer HWM: 1000
Broker Linger Limit: 0
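As an aside, with the pebbe/zmq4 binding (assumed from the snippets above) these limits correspond to per-socket options; a sketch of how they might be applied:

import (
	zmq "github.com/pebbe/zmq4"
)

// applyQueueLimits is a sketch of setting the limits listed above.
func applyQueueLimits(brokerSock, dealerSock *zmq.Socket) error {
	// Broker HWM: 10000
	if err := brokerSock.SetSndhwm(10000); err != nil {
		return err
	}
	if err := brokerSock.SetRcvhwm(10000); err != nil {
		return err
	}
	// Broker Linger Limit: 0 (discard pending messages immediately on Close)
	if err := brokerSock.SetLinger(0); err != nil {
		return err
	}
	// Dealer HWM: 1000
	if err := dealerSock.SetSndhwm(1000); err != nil {
		return err
	}
	return dealerSock.SetRcvhwm(1000)
}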
Some more findings:
This issue is prominent when the server processing (step 7 above) takes more than 10 minutes.
The client and server are running on different machines; both are Ubuntu 20 LTS with ZMQ version 4.3.2.
Environment
libzmq version (commit hash if unreleased): 4.3.2
OS: Ubuntu 20LTS
Eventually, the fix turned out to be configuring heartbeats for the zmq sockets. See the documentation here: http://api.zeromq.org/4-2:zmq-setsockopt
The following parameters were configured:
ZMQ_HANDSHAKE_IVL: Set maximum handshake interval
ZMQ_HEARTBEAT_IVL: Set interval between sending ZMTP heartbeats
ZMQ_HEARTBEAT_TIMEOUT: Set timeout for ZMTP heartbeats
Configure the above parameters appropriately to ensure that there is a constant liveness check between the client and server dealers. That way, even if one side is delayed in processing, the other doesn't time out abruptly.
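Using the same binding, the heartbeat options are set per socket before connecting; a sketch with placeholder durations that should be tuned to the expected processing time:

import (
	"time"

	zmq "github.com/pebbe/zmq4"
)

// newDealerWithHeartbeat is a sketch of creating a DEALER socket with
// ZMTP heartbeats enabled; the durations are placeholders.
func newDealerWithHeartbeat(endpoint string) (*zmq.Socket, error) {
	soc, err := zmq.NewSocket(zmq.DEALER)
	if err != nil {
		return nil, err
	}
	// ZMQ_HANDSHAKE_IVL: maximum time allowed for the ZMTP handshake
	if err := soc.SetHandshakeIvl(30 * time.Second); err != nil {
		return nil, err
	}
	// ZMQ_HEARTBEAT_IVL: send a ZMTP PING on idle connections every 10s
	if err := soc.SetHeartbeatIvl(10 * time.Second); err != nil {
		return nil, err
	}
	// ZMQ_HEARTBEAT_TIMEOUT: consider the peer gone if no reply within 60s
	if err := soc.SetHeartbeatTimeout(60 * time.Second); err != nil {
		return nil, err
	}
	return soc, soc.Connect(endpoint)
}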

How to connect to a local mongodb instance from wasm module?

I'm trying to store some data in my local MongoDB instance using Go compiled to WebAssembly. The problem is, I cannot even connect to it. The mongod instance doesn't react in any way to connections from the wasm module. This problem arises only when connecting from the wasm module. The same code compiled the ordinary way works fine, as does connecting from the mongo shell. The running mongod instance has no password protection.
My OS is Windows 10, in case that matters.
I've tried changing the mongod bind_ip parameter from localhost to the actual local address of my machine and using different browsers (Chrome 75.0.3770.80, Opera 60.0.3255.109).
Changing the timeout duration doesn't do the trick either.
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func connectToMongo(URI string, timeout time.Duration) *mongo.Client {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(URI))
	if err != nil {
		log.Fatal(err)
	}

	err = client.Ping(ctx, readpref.Primary())
	if err != nil {
		log.Fatal(err) // It fails here
	}
	return client
}

func main() {
	_ = connectToMongo("mongodb://localhost:27017", 20*time.Second)
}
<html>
  <head>
    <script type="text/javascript" src="./wasm_exec.js"></script>
    <script>
      const go = new Go();
      WebAssembly.instantiateStreaming(fetch('main.wasm'), go.importObject).then(res => {
        go.run(res.instance)
      })
    </script>
  </head>
</html>
I run mongod.exe without any parameters, so it is bound to localhost.
I expected my code to connect to mongod instance, but actually I get the following error in browser console: "context deadline exceeded".
I'm still learning Go and a total newbie in JavaScript so I might be missing something very simple. Any help would be greatly appreciated.
You are trying to connect from WebAssembly to a local server, most likely using a protocol which isn't allowed from the browser WASM sandbox.
WebAssembly can't, for instance, open low-level network sockets from inside the WASM sandbox; you're mainly constrained to the same things that you can do with JavaScript in terms of file, system and network access when you're running WASM in a browser.
It's worth reading up on the constraints that WebAssembly has around security and system access when used in a browser context. It's also worth noting that it's not WebAssembly itself that's blocking your connection here; it's the browser that's running the WebAssembly.
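As an illustration of what does work from inside the sandbox: Go's net/http client is implemented on top of the browser's fetch API when compiled for js/wasm, so a common pattern is to put a small HTTP API in front of MongoDB and call that from the wasm module instead of dialing the database directly. A rough sketch (the URL is a placeholder, and the backend would need to allow CORS for the page serving the wasm):

// Built with: GOOS=js GOARCH=wasm go build -o main.wasm
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// The wasm module talks HTTP to a small backend service; that service
	// (a normal Go binary) is the one that connects to MongoDB.
	resp, err := http.Get("http://localhost:8080/api/items") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("backend replied:", string(body))
}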

How can I debug the following Go code, which tries to make a TCP connection to an IP address and port?

I am getting an IP address and port number from a Bittorrent tracker, for a specific torrent file. It represents a peer on the bittorrent network. I am trying to connect to the peer using this code. The connection always times out (getsockopt: operation timed out). I think I am missing something very fundamental here, because I tried the same code in python with the exact same result, operation timed out. It happens for every single peer IP address.
I downloaded this bittorrent client - https://github.com/jtakkala/tulva
which is able to connect to peers from my system using this type of code (Line 245, peer.go). I have also been able to use similar code for connecting to a tcp server running on localhost.
Edited details after JimB's comment and Kenny Grant's answer
package main

import (
	"fmt"
	"net"
)

func main() {
	raddr := net.TCPAddr{IP: []byte{} /*This byte slice contains the IP*/, Port: int( /*Port number here*/ )}
	conn, err := net.DialTCP("tcp4", nil, &raddr)
	if err != nil {
		fmt.Println("Error while connecting", err)
		return
	}
	fmt.Println("Connected to ", raddr, conn)
}
Try it with a known good address and you'll see your code works fine (with a 4-byte IPv4 address, for SO say). Bittorrent peers are transient, so it probably just went away; when testing, you should use your own IPs that you know are stable.
raddr := net.TCPAddr{IP: net.IPv4(151, 101, 1, 69), Port: int(80)}
...
-> Connected to {151.101.1.69 80 }
If you're trying to connect to 187.41.59.238:10442, as jimb says, it's not available. For IPs, see the docs:
https://sourcegraph.com/github.com/golang/go#9fd359a29a8cc55ed665542d2a3fe9fef8baaa7d/-/blob/src/net/ip.go#L32:6-32:8
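When debugging unreachable peers it can also help to dial with an explicit timeout so a dead address fails fast instead of hanging for the OS default; a small sketch using net.DialTimeout (the address is a placeholder):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	addr := net.JoinHostPort("151.101.1.69", "80") // substitute the peer's IP and port
	conn, err := net.DialTimeout("tcp4", addr, 5*time.Second)
	if err != nil {
		// transient bittorrent peers will frequently end up here
		log.Printf("peer unreachable: %v", err)
		return
	}
	defer conn.Close()
	fmt.Println("Connected to", conn.RemoteAddr())
}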

How to setup Kamailio as a simple relay

I have a number of simple SIP endpoints that can be registered on backend SIP registrars. They can be configured to register on only one of the call processing engines.
I want to use Kamailio to relay REGISTER (and later INVITE) requests to the backend.
So far I have the following config:
route[REGISTRAR] {
	if (is_method("REGISTER")) {
		rewritehost("1.2.3.4");
		xlog("Registering $(fu{uri.user}) with 1.2.3.4\n");
		$var(frst) = "sip:" + $(fu{uri.user}) + "@1.2.3.4";
		$var(scnd) = "sip:" + $(fu{uri.user}) + "@2.3.4.5";
		uac_replace_from("$var(frst)");
		uac_replace_to("$var(frst)");
		if (!t_relay_to_tcp("1.2.3.4", "5060")) {
			rewritehost("2.3.4.5");
			uac_replace_from("$var(scnd)");
			uac_replace_to("$var(scnd)");
			xlog("Registering $(fu{uri.user}) with 2.3.4.5\n");
			if (!t_relay_to_tcp("2.3.4.5", "5060")) {
				sl_reply_error();
			}
		}
		exit;
	}
	else return;
}
This route[REGISTRAR] is called from the main SIP request routing block. If 1.2.3.4 is up, my test endpoint registers and is available for calls from other endpoints (though I still have to handle INVITE from the test endpoint as well). But when 1.2.3.4 is down I get
ERROR: <core> [tcp_main.c:4249]: tcpconn_main_timeout(): connect 1.2.3.4:5060 failed (timeout)
in /var/log/syslog. I thought that if t_relay_to_tcp fails I could repeat the mangling of the From and To headers and relay everything to 2.3.4.5, but this doesn't happen.
It might be because of the asynchronous nature of the transmission: the kamailio script goes further while the relayed tcp session is hanging in some background thread.
How should I edit route[REGISTRAR] to relay to 2.3.4.5 in case of a tcp timeout?
Maybe the whole idea of relaying messages that way is wrong?
Some forums show examples of registering endpoints on kamailio itself, but that doesn't suit me. I believe that kamailio is powerful enough to solve my problem.
Looks like Kamailio doesn't work this way, so I changed my config like this:
route[REGISTRAR] {
	if (is_method("REGISTER")) {
		rewritehost("1.2.3.4");
		xlog("Registering $(fu{uri.user}) with 1.2.3.4\n");
		$var(frst) = "sip:" + $(fu{uri.user}) + "@1.2.3.4";
		uac_replace_from("$var(frst)");
		uac_replace_to("$var(frst)");
		t_on_failure("REGISTERBACKUP");
		t_relay_to_tcp("1.2.3.4", "5060");
	}
	else return;
}

failure_route[REGISTERBACKUP] {
	rewritehost("2.3.4.5");
	xlog("Registering $(fu{uri.user}) with 2.3.4.5\n");
	# Edited to relay to 2.3.4.5
	t_relay_to_tcp("2.3.4.5", "5060");
}
When 1.2.3.4 is down, my endpoint registers on 2.3.4.5. When 1.2.3.4 is up, it of course registers on it.

Delphi 7, Indy TCP proxy without remote server

Is it possible to implement something like IdMappedPortTCP without connecting to a remote proxy server?
What I need is
a) a way to edit every HTTP header (for example, change the User-Agent for each request) for every request sent from my computer, without having to connect to a remote server. And
b) if possible, I would also like to capture all the HTTP traffic in Delphi without the need for a third-party application like Proxifier.
What I have tried so far:
a) IdMappedPortTCP bound to a remote proxy server; I then modify AThread.NetData for each request in the IdMappedPortTCPExecute method.
b) using Proxifier to capture all HTTP traffic on the computer.
I have also tried mapping with IdMappedPortTCP to a local proxy server (e.g. Squid, Delegate, Fiddler, CCProxy) and creating my own proxy server (using Indy 10). All of these worked great for HTTP connections, but they require installing a root certificate to modify HTTPS requests, which is undesired. If it's possible to implement a local proxy without having to install root certificates, that would be awesome!
I have also tried to modify the TCP redirector code, but being fairly new to programming, I haven't been successful. I figured I could change the
procedure TForm1.IdTCPServer1Execute(AThread: TIdPeerThread);
var
  Cli: TIdTCPClient;
  Len: Cardinal;
  Data: string;
begin
  try
    Cli := nil;
    try
      { Create & Connect to Server }
      Cli := TIdTCPClient.Create(nil);
      Cli.Host := 'www.borland.com';
      Cli.Port := 80;
      { Connect to the remote server }
      Cli.Connect;
      ..............
such that I would extract the host and port from the request and then assign Cli.Host and Cli.Port dynamically for each request. I don't know how viable that is. For example, would it cause the computer to hang because of connecting to too many remote hosts?
Update: with TIdMappedPortTCP, I used AThread.Connection.Capture(myheaders, ''); so now I can assign my host from myheaders.Values['host'], and if AThread.Connection.ReadLn returns 'CONNECT' I set the port to 443, otherwise I set it to 80. Am I on the right track?
procedure TForm1.IdMappedPortTCP1Connect(AThread: TIdMappedPortThread);
var
  myheaders: TIdHeaderList;
  method: string;
begin
  myheaders := TIdHeaderList.Create;
  try
    method := AThread.Connection.ReadLn;
    AThread.Connection.Capture(myheaders);
    if myheaders.Count <> 0 then begin
      if Pos('CONNECT', method) <> 0 then begin
        with TIdTCPClient(AThread.OutboundClient) do begin
          Host := myheaders.Values['host'];
          Port := 443;
        end;
      end else begin
        with TIdTCPClient(AThread.OutboundClient) do begin
          Host := myheaders.Values['host'];
          Port := 80;
        end;
      end;
      TIdMappedPortThread(AThread).NetData := method + #13#10 + myheaders.Text + #13#10 + #13#10;
    end else begin
      TIdMappedPortThread(AThread).NetData := method + #13#10 + #13#10;
      outs.Lines.Add(TIdMappedPortThread(AThread).NetData);
    end;
  finally
    myheaders.Free;
  end;
end;
I have put that code in the OnConnect event but it does not seem to be working. What have I done wrong?