How to check if the Kafka server is running in PHP / Lumen / Laravel? - apache-kafka

I need help with Kafka. In our Lumen API, once a post has been created successfully, we connect to Kafka and push a message to it. I'm using the code below to send it.
$conf = new \RdKafka\Conf();
$conf->set('metadata.broker.list', $this->_getHost() . ':' . $this->_getPort());
$producer = new \RdKafka\Producer($conf);
// $producer = $this->connect();
$topic = $producer->newTopic($kafkaTopic);
// produce() only enqueues the message locally; delivery happens asynchronously.
$topic->produce(RD_KAFKA_PARTITION_UA, 0, json_encode([
    'body' => json_encode($kafkaMessage),
    'properties' => [],
    'headers' => [],
]));
// flush() waits up to 10 s for outstanding deliveries and returns an error code;
// anything other than RD_KAFKA_RESP_ERR_NO_ERROR means messages may have been lost.
$result = $producer->flush(10000);
However, when we call Kafka after a post has been created, we sometimes get an error because the Kafka server is down.
So I need a methodology / strategy for checking whether Kafka is running before creating the post.
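One common approach is to ask the cluster for metadata before producing: if no broker answers within a timeout, treat Kafka as down and skip (or queue) the push. Below is a minimal sketch assuming the php-rdkafka extension; the helper name isKafkaAvailable and the 2000 ms timeout are illustrative, not part of any framework.

function isKafkaAvailable(string $brokerList): bool
{
    $conf = new \RdKafka\Conf();
    $conf->set('metadata.broker.list', $brokerList);
    $producer = new \RdKafka\Producer($conf);
    try {
        // getMetadata() does a real round trip to the cluster and throws
        // an \RdKafka\Exception if no broker responds within the timeout.
        $producer->getMetadata(false, null, 2000);
        return true;
    } catch (\RdKafka\Exception $e) {
        return false;
    }
}

Note that this check is only a snapshot: the broker can still go down between the check and the produce call, so you should also inspect the return value of flush() (as in the comments above) and handle failures there.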

Related

Clickhouse: kafka broken messages error handling

Clickhouse version 21.12.3.32. I'm following this PR (https://github.com/ClickHouse/ClickHouse/pull/21850) to handle incorrect messages from a Kafka topic, but after some investigation I've found that if a single message contains broken data, the whole batch of received messages cannot be parsed, which can lead to data loss.
Kafka engine table:
CREATE TABLE default.kafka_engine (message String)
ENGINE = Kafka
SETTINGS
kafka_broker_list = 'kafka:9092',
kafka_topic_list = 'topic',
kafka_group_name = 'group',
kafka_format = 'JSONAsString',
kafka_row_delimiter = '\n',
kafka_num_consumers = 1,
kafka_handle_error_mode ='stream';
Example of broken message: [object Object]
First message error: JSON object must begin with '{'.: (at row 1).
Other messages error: Cannot parse input: expected ']' at end of stream..
Is it possible to skip just that broken message and correctly parse other messages in a batch received from kafka topic?
Changing the kafka_format to RawBLOB fixed my issue. RawBLOB performs no parsing at all, each message arrives as raw bytes in a single String column, so one malformed message can no longer break the rest of the batch; you can parse and validate the JSON downstream instead.

Getting "java.lang.IllegalStateException: Tried to lookup lag for unknown task 3_0" after upgrading Kafka Stream from 2.5.1 to 2.6.2

I just upgraded our Kafka Stream application from 2.5.1 to 2.6.2. It used to work, now it doesn't.
Here is the troublesome topology (I have omitted the irrelevant Serdes):
val builder = new StreamsBuilder()

val contractEventStream: KStream[TariffId, ContractEvent] =
  builder.stream[String, ContractUpsertAvro](settings.contractsTopicName)
    .flatMap { (_, contractAvro) =>
      ContractEvent.from(contractAvro)
        .map(contractEvent => (contractEvent.tariffId, contractEvent))
    }

val tariffsTable: KTable[TariffId, Tariff] =
  builder.stream[String, TariffUpdateEventAvro](settings.tariffTopicName)
    .flatMapValues(Tariff.fromAvro(_))
    .selectKey((_, tariff) => tariff.tariffId)
    .toTable(Materialized.`with`(tariffIdSerde, tariffSerde)) // Materialized.as also throws the same IllegalStateExceptions

contractEventStream
  .join(tariffsTable)(JourneyStep.from(_, _).asInstanceOf[ContractCreated])(Joined.`with`(tariffIdSerde, contractEventSerde, tariffSerde))
  .selectKey((_, contractUpdated) => contractUpdated.accountId)
  .foreach((_, journeyStep) => println(journeyStep))
The join gives the following exception:
java.lang.IllegalStateException: Tried to lookup lag for unknown task 3_0
at org.apache.kafka.streams.processor.internals.assignment.ClientState.lagFor(ClientState.java:306)
at java.util.Comparator.lambda$comparingLong$6043328a$1(Comparator.java:511)
at java.util.Comparator.lambda$thenComparing$36697e65$1(Comparator.java:216)
at java.util.TreeMap.compare(TreeMap.java:1295)
at java.util.TreeMap.put(TreeMap.java:538)
at java.util.TreeSet.add(TreeSet.java:255)
at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
at java.util.TreeSet.addAll(TreeSet.java:312)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.getPreviousTasksByLag(StreamsPartitionAssignor.java:1275)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assignTasksToThreads(StreamsPartitionAssignor.java:1189)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.computeNewAssignment(StreamsPartitionAssignor.java:940)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:399)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:589)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:684)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:111)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:597)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:560)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1160)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1135)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:624)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
I can't see what I am doing wrong. The code above works with Kafka 2.5.1. Does anyone have an idea what is going on?
The problem is caused by the local state that Kafka Streams keeps on disk. This state is specific to the Kafka Streams version and to the topology you use (i.e. a change in your topology could also lead to this error).
The state directory is usually found in /tmp, or elsewhere if you passed the "state.dir" property to Kafka Streams. Clear that directory and you should be able to start cleanly again.

How to continually consume messages from Apache Pulsar?

How do you continually consume messages from Apache Pulsar using Akka Streams and print each message?
Below is sample code I found in the pulsar4s library. Instead of publishing the messages to another topic, how do you print the consumed messages?
val consumerFn = () => client.consumer(ConsumerConfig(Seq(intopic), Subscription("mysub")))
val producerFn = () => client.producer(ProducerConfig(outtopic))

val control = source(consumerFn, Some(MessageId.earliest))
  .map { consumerMessage => ProducerMessage(consumerMessage.data) }
  .to(sink(producerFn))
  .run()
You can simply use Sink.foreach(println).
For example:
source(consumerFn, Some(MessageId.earliest))
  .runWith(Sink.foreach(println))

google-api-client suddenly comes back with "invalid request"

I've been running Ruby scripts for weeks now using a Service Account, but today I'm getting an "Invalid Request" when I try to build the client using the following function:
def build_client(user_email)
  client = Google::APIClient.new
  client.authorization = Signet::OAuth2::Client.new(
    :token_credential_uri => 'https://accounts.google.com/o/oauth2/token',
    :audience => 'https://accounts.google.com/o/oauth2/token',
    :scope => 'https://www.googleapis.com/auth/calendar',
    :issuer => SERVICE_ACCOUNT_EMAIL,
    :signing_key => Google::APIClient::KeyUtils.load_from_pkcs12(SERVICE_ACCOUNT_PKCS12_FILE_PATH, "notasecret"),
    :person => user_email
  )
  client.authorization.fetch_access_token!
  return client
end
Is there a lifespan on Service Accounts? I tried creating another Service Account and using that but I get the same result:
Authorization failed. Server message: (Signet::AuthorizationError)
{
"error" : "invalid_request"
}
Stumped!
OK, I figured it out. It was all down to the user_email: I was reading it from a file and forgot to chomp the linefeed off, so the server was objecting to a malformed email address.

Magento SOAP 2 API Fatal error: Procedure 'login' not present

I am getting: Fatal error: Procedure 'login' not present in /chroot/home/mystore/mystore.com/html/lib/Zend/Soap/Server.php on line 832
This is where the error is coming from
$soap = $this->_getSoap();
ob_start();
if ($setRequestException instanceof Exception) {
    // Send SOAP fault message if we've catched exception
    $soap->fault("Sender", $setRequestException->getMessage());
} else {
    try {
        $soap->handle($request);
    } catch (Exception $e) {
        $fault = $this->fault($e);
        $soap->fault($fault->faultcode, $fault->faultstring);
    }
}
Any ideas on how to fix the error?
I had the same issue, and what I did to fix it was to go to System / Configuration / Magento Core API and set the WS-I Compliance value to 'No'.
I'm working with a web service that consumes the Magento V2 API; I don't recall whether I generated the web reference with this value set to 'Yes'. I'm working with a C# web service in VS 2010.
I had a similar problem and did not want to change the API version. Deleting the WSDL cache helped me.
Run this to get the WSDL cache folder:
php -i | grep soap
From the result you can see that the WSDL cache is enabled and stored in /tmp:
soap
soap.wsdl_cache => 1 => 1
soap.wsdl_cache_dir => /tmp => /tmp
soap.wsdl_cache_enabled => 1 => 1
soap.wsdl_cache_limit => 5 => 5
soap.wsdl_cache_ttl => 86400 => 86400
Remove the cache and run it again:
sudo rm -rf /tmp/*
I found the clue in this article - http://artur.ejsmont.org/blog/content/php-soap-error-procedure-xxx-not-present
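If you cannot (or would rather not) wipe files on the server, you can also disable WSDL caching from PHP itself so a stale WSDL is never reused. A minimal sketch; the endpoint URL and the credentials are placeholders for your own store:

// Disable WSDL caching globally for this process...
ini_set('soap.wsdl_cache_enabled', '0');
// ...and/or per client via the cache_wsdl option.
$client = new SoapClient('https://mystore.com/api/v2_soap?wsdl=1', array(
    'cache_wsdl' => WSDL_CACHE_NONE,
));
// Magento SOAP v2 login call; replace with your real API user and key.
$session = $client->login('apiUser', 'apiKey');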