I want a Kafka message (a CloudEvent) to be passed through Kafka Source -> Broker -> ASP.NET Core service with the headers from the initial Kafka message intact.
Right now I can put a message with body and headers on Kafka, and it is consumed by the Kafka Source, but the headers from the Kafka message are replaced somewhere between Kafka and my service.
Initial headers:
correlationid = {guid}
ce-specversion = 1.0
ce-id = {guid}
ce-source = {differentRelativeUriThanBelow}
ce-type = {com.company.product.request.amqp.asynchronous:v1}
Content-Type = application/cloudevents
Received in service:
correlationid =
ce-specversion = 1.0
ce-id = partition:0/offset:52
ce-source = /apis/v1/namespaces/myNamespace/kafkasources/kafka-source-myNamespace#myKafkaTopic
ce-type = dev.knative.kafka.event
Content-Type = application/cloudevents
Is there any way I can prevent this behaviour, or at least configure it so that my headers are included in the HTTP request received by the service?
I have a working setup where multiple clients send messages to multiple servers. Each message targets only one server. The client knows the IDs of all possible servers and only sends a message if that server is actually connected. Each server connects to the socket on startup. There are multiple server workers which bind to an inproc ROUTER socket. Communication is always initiated by the client, and the messages are sent to each server asynchronously.
This is achieved using a DEALER -> ROUTER -> DEALER pattern. My problem is that when the number of client & server workers increases, the "ack" sent by the server to the client (step 7 below) is never delivered to the client. Thus, the client is stuck waiting for the acknowledgement while the server is waiting for more messages from the client. Both systems hang and never come out of this condition unless restarted. Details of the configuration and communication flow are given below.
I've checked the system logs and nothing evident is coming out of them. Any help or guidance to triage this further would be appreciated.
At startup, the client connects as a DEALER to the server's IP:port:
requester, _ := zmq.NewSocket(zmq.DEALER)
The dealers connect to the broker. The broker connects the frontend (client workers) to the backend (server workers). The frontend is bound to a TCP socket, while the backend is bound as inproc.
// Frontend dealer workers
frontend, _ := zmq.NewSocket(zmq.DEALER)
defer frontend.Close()
// For workers local to the broker
backend, _ := zmq.NewSocket(zmq.DEALER)
defer backend.Close()
// Frontend should always use TCP
frontend.Bind("tcp://*:5559")
// Backend should always use inproc
backend.Bind("inproc://backend")
// Initialize broker to transfer messages
poller := zmq.NewPoller()
poller.Add(frontend, zmq.POLLIN)
poller.Add(backend, zmq.POLLIN)
// Switch messages between sockets
for {
    sockets, _ := poller.Poll(-1)
    for _, socket := range sockets {
        switch s := socket.Socket; s {
        case frontend:
            for {
                msg, _ := s.RecvMessage(0)
                workerID := findWorker(msg[0]) // Get the server workerID for which the message is intended
                log.Println("Forwarding Message:", msg[1], "From Client:", msg[0], "To Worker:", workerID)
                if more, _ := s.GetRcvmore(); more {
                    backend.SendMessage(workerID, msg, zmq.SNDMORE)
                } else {
                    backend.SendMessage(workerID, msg)
                    break
                }
            }
        case backend:
            for {
                msg, _ := s.RecvMessage(0)
                // Register new workers as they come and go
                fmt.Println("Message from backend worker:", msg)
                clientID := findClient(msg[0]) // Get the client workerID for which the message is intended
                log.Println("Returning Message:", msg[1], "From Worker:", msg[0], "To Client:", clientID)
                if more, _ := s.GetRcvmore(); more {
                    frontend.SendMessage(clientID, msg, zmq.SNDMORE)
                } else {
                    frontend.SendMessage(clientID, msg)
                    break
                }
            }
        }
    }
}
Once the connection is established:
1. The client sends a set of messages on the frontend socket. These messages contain metadata about all the messages that will follow.
requester.SendMessage(msg)
2. Once these messages are sent, the client waits for an acknowledgement from the server.
reply, _ := requester.RecvMessage(0)
3. The router transfers these messages from the frontend to the backend workers based on the logic defined above.
4. The backend dealers process these messages and respond over the backend socket, asking for more messages.
5. The broker then transfers the messages from the backend inproc socket to the frontend socket.
6. The client processes this message and sends the required messages to the server. The messages are sent as a group (batch), asynchronously.
7. The server receives and processes all of the messages sent by the client. After processing all the messages, the server sends an "ack" back to the client to confirm that all the messages were received.
8. Once all the messages have been sent by the client and processed by the server, the server sends a final message indicating that the whole transfer is complete. The communication ends here.
This works great when there is a limited set of workers and messages transferred. The implementation has multiple dealers (clients) sending messages to a router, which in turn sends these messages to another set of dealers (servers) that process the respective messages. Each message contains the client & server worker IDs for identification.
We have configured the following limits for the send & receive queues:
Broker HWM: 10000
Dealer HWM: 1000
Broker Linger Limit: 0
Some more findings:
This issue is prominent when the server processing (step 7 above) takes more than 10 minutes.
The client and server are running on different machines; both are Ubuntu 20.04 LTS with ZMQ version 4.3.2.
Environment
libzmq version (commit hash if unreleased): 4.3.2
OS: Ubuntu 20.04 LTS
Eventually, it turned out to be a matter of configuring heartbeats for the ZMQ sockets. I referred to the documentation here: http://api.zeromq.org/4-2:zmq-setsockopt
I configured the following parameters:
ZMQ_HANDSHAKE_IVL: Set maximum handshake interval
ZMQ_HEARTBEAT_IVL: Set interval between sending ZMTP heartbeats
ZMQ_HEARTBEAT_TIMEOUT: Set timeout for ZMTP heartbeats
Configure the above parameters appropriately to ensure that there is a constant liveness check between the client and server dealers. That way, even if one side is delayed in processing, the other one doesn't time out abruptly.
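For illustration, a minimal sketch of setting these options, here using the JeroMQ Java binding (the question itself uses the Go binding, where the equivalent Set* methods exist); the interval values below are illustrative assumptions, not recommendations:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

try (ZContext ctx = new ZContext()) {
    ZMQ.Socket requester = ctx.createSocket(SocketType.DEALER);
    // All values are in milliseconds; set them before connect()/bind()
    requester.setHandshakeIvl(30000);      // ZMQ_HANDSHAKE_IVL: abort handshakes slower than 30 s
    requester.setHeartbeatIvl(60000);      // ZMQ_HEARTBEAT_IVL: send a ZMTP PING every 60 s
    requester.setHeartbeatTimeout(120000); // ZMQ_HEARTBEAT_TIMEOUT: declare the peer dead after 120 s of silence
    requester.connect("tcp://server:5559");
}
With heartbeats enabled, the otherwise idle connection keeps being verified even while one side spends more than 10 minutes processing.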
I configured an HTTPS website on AWS which only allows visits from a whitelist of IPs.
My local machine runs with a VPN connection whose IP is in the whitelist.
I can visit the website from a web browser, or via the java.net.http package with the code below:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("https://mywebsite/route"))
.GET() // GET is default
.build();
HttpResponse<Void> response = client.send(request,
HttpResponse.BodyHandlers.discarding());
But if I replace that code with a Vert.x implementation from the io.vertx.ext.web.client package, I get a 403 Forbidden response from the same website.
WebClientOptions options = new WebClientOptions().setTryUseCompression(true).setTrustAll(true);
HttpRequest<Buffer> request = WebClient.create(vertx, options)
.getAbs("https://mywebsite/route")
.ssl(true).putHeaders(headers);
request.send(asyncResult -> {
if (asyncResult.succeeded()) {
HttpResponse<Buffer> response = asyncResult.result();
}
});
Does anyone have an idea why the Vertx implementation is rejected?
Finally I got the root cause. I had started a local server that accepts the testing request and forwards it to the server on AWS. The testing client sends the request to localhost, and thus "Host=localhost:8080/..." is in the request headers. In the Vert.x implementation, a new header entry "Host=localhost:443/..." is wrongly put into the request headers. I haven't debugged the Vert.x implementation, so I have no idea why it behaves like this. But the AWS firewall then rejected the request based on a rule that a request may not come from localhost.
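A possible workaround (untested, and assuming the rejected Host value is the only problem) would be to put the Host header explicitly on the Vert.x request so the derived value is not used:
import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.ext.web.client.HttpRequest;
import io.vertx.ext.web.client.WebClient;
import io.vertx.ext.web.client.WebClientOptions;

Vertx vertx = Vertx.vertx();
WebClientOptions options = new WebClientOptions().setTryUseCompression(true).setTrustAll(true);
HttpRequest<Buffer> request = WebClient.create(vertx, options)
        .getAbs("https://mywebsite/route")
        .ssl(true)
        .putHeader("Host", "mywebsite"); // set the Host header explicitly
request.send(asyncResult -> {
    if (asyncResult.succeeded()) {
        System.out.println(asyncResult.result().statusCode());
    }
});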
Is it possible to send a gzipped SOAP request?
I added an HTTP Header Manager with the following headers:
Content-Type: application/soap+xml; charset=Utf-8
Content-Encoding: gzip
I added a Beanshell PreProcessor as a child of the request which needs to be encoded, and I defined the following script:
import org.apache.commons.io.IOUtils;
import java.util.zip.GZIPOutputStream;
// This only works for the HTTP Request, not Soap Request.
// String bodyString = sampler.getArguments().getArgument(0).getValue();
String bodyString = ctx.getCurrentSampler().getXmlData();
byte [] requestBody = bodyString.getBytes();
ByteArrayOutputStream out = new ByteArrayOutputStream(requestBody.length);
GZIPOutputStream gzip = new GZIPOutputStream(out);
gzip.write(requestBody);
gzip.close();
// This only works for the HTTP Request, not Soap Request.
// sampler.getArguments().getArgument(0).setValue(out.toString(0));
ctx.getCurrentSampler().setXmlData(???);
My problem is the last line: how can I set xmlData?
JMeter version: 3.1
1. Replace the SOAP/XML-RPC Sampler, which is deprecated, with the HTTP Request sampler.
2. Upgrade to JMeter 5.0 (or whatever the latest version available at the JMeter Downloads page is).
3. Switch from Beanshell to Groovy (e.g. a JSR223 PreProcessor).
4. Use sampler.getArguments().getArgument(0).setValue(out.toString(0)); in order to generate the request body.
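Putting it together, a sketch of the resulting JSR223 PreProcessor script (Groovy accepts the Java syntax as-is; the UTF-8 charset is an assumption about the request body):
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

// Read the body of the parent HTTP Request sampler (argument 0 holds the body data)
String bodyString = sampler.getArguments().getArgument(0).getValue();
byte[] requestBody = bodyString.getBytes("UTF-8"); // assumed charset
ByteArrayOutputStream out = new ByteArrayOutputStream(requestBody.length);
GZIPOutputStream gzip = new GZIPOutputStream(out);
gzip.write(requestBody);
gzip.close();
// Write the gzipped bytes back as the sampler's request body
sampler.getArguments().getArgument(0).setValue(out.toString(0));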
I am new to Confluent/Kafka and I want to find metadata information from Kafka.
I want to know:
a list of producers
a list of topics
schema information for a topic
The Confluent version is 5.0.
What are the classes (methods) that can give this information?
Are there any REST APIs for the same?
Also, is a ZooKeeper connection necessary to get this information?
1) I don't think that Kafka brokers are aware of the producers that produce messages to topics, and therefore there is no command-line tool for listing them. However, an answer to this SO question suggests that you can list producers by viewing the MBeans over JMX.
2) In order to list the topics you need to run:
kafka-topics --zookeeper localhost:2181 --list
Otherwise, if you want to list the topics using a Java client, you can call the listTopics() method of KafkaConsumer.
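A minimal sketch of that (the bootstrap server address is an assumption; error handling omitted):
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // adjust to your brokers
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    // listTopics() returns a map of topic name -> partition metadata
    consumer.listTopics().forEach((topic, partitions) ->
            System.out.println(topic + " (" + partitions.size() + " partitions)"));
}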
You can also fetch the list of topics through ZooKeeper
ZkClient zkClient = new ZkClient("zkHost:zkPort");
List<String> topics = JavaConversions.asJavaList(ZkUtils.getAllTopics(zkClient));
3) To get the schema information for a topic you can use Schema Registry API
In particular, you can fetch all subjects by calling:
GET /subjects HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
which should give a response similar to the one below:
HTTP/1.1 200 OK
Content-Type: application/vnd.schemaregistry.v1+json
["subject1", "subject2"]
You can then get all the versions of a particular subject:
GET /subjects/subject-name/versions HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
And finally, you can get a specific version of the schema registered under this subject
GET /subjects/subject-name/versions/1 HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
Or just the latest registered schema:
GET /subjects/subject-name/versions/latest HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
In order to perform such actions in Java, you can either prepare your own GET requests (see how to do it here) or use Confluent's Schema Registry Java Client. You can see the implementation and the available methods in their GitHub repo.
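For instance, a minimal sketch using that client (the registry URL and the cache size of 100 are assumptions; error handling omitted):
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;

CachedSchemaRegistryClient client =
        new CachedSchemaRegistryClient("http://schemaregistry.example.com:8081", 100);
for (String subject : client.getAllSubjects()) {
    // Fetch the latest registered schema for each subject
    SchemaMetadata latest = client.getLatestSchemaMetadata(subject);
    System.out.println(subject + " v" + latest.getVersion() + ": " + latest.getSchema());
}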
Regarding your question about Zookeeper, note that ZK is a requirement for Kafka.
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
Hi, I'm sending some HTTP requests from a Java Spring MVC web app, and when I have Fiddler open I don't see any outgoing requests.
I'm using this code to send to an address similar to http:///getstuff?stuff="whatever":
HttpGet httpget = new HttpGet(url);
log.info("Executing request " + httpget.getURI());
// Create a response handler...
ResponseHandler<String> responseHandler = new BasicResponseHandler();
responseBody = httpclient.execute(httpget, responseHandler);
Does anybody know why these outgoing calls are not showing up in Fiddler?
Did you remember to configure your JVM to proxy its HTTP requests? http://fiddler2.com/documentation/Configure-Fiddler/Tasks/ConfigureJavaApp
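For example (a sketch; Fiddler listens on 127.0.0.1:8888 by default, and Apache HttpClient 4.x ignores the JVM proxy properties unless built with useSystemProperties()):
import org.apache.http.HttpHost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Option 1: JVM-wide proxy properties (honoured by java.net and HttpClients.createSystem())
System.setProperty("http.proxyHost", "127.0.0.1");
System.setProperty("http.proxyPort", "8888");
System.setProperty("https.proxyHost", "127.0.0.1");
System.setProperty("https.proxyPort", "8888");

// Option 2: set the proxy explicitly on the Apache HttpClient you execute requests with
HttpHost proxy = new HttpHost("127.0.0.1", 8888);
CloseableHttpClient httpclient = HttpClients.custom()
        .setProxy(proxy)
        .build();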