CloudWatch Logs Insights: how to find the most repeated logs - amazon-cloudwatchlogs

I have a few CloudWatch log groups that print a lot of logs, and since the ingestion volume is large I want to cut some of them. My log level is already INFO, so now I need to try to comment out a few of the INFO logs.
Since I can't comment them all out, I need to figure out which log patterns are repeated the most and check whether I can remove any of those logs. Is there a way to get the log patterns with the highest repeat counts using Logs Insights?
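One possible approach (a sketch, not verified against your log groups): Logs Insights can group and count identical messages with stats, for example:

stats count(*) as occurrences by @message
| sort occurrences desc
| limit 20

If the messages contain variable parts (timestamps, request IDs), grouping by the raw @message will split them into separate rows; in that case you can parse out the constant prefix first, and newer versions of Logs Insights also offer a pattern command (pattern @message) that clusters similar messages automatically, if it is available in your console.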

Related

Need some clarification regarding error handling in Kafka Connect

I'm looking over the following KIP: KIP-298: Error Handling in Connect.
In example 2, what will the following configuration do? A bit more information or an example would help me understand it:
# retry for at most 10 minutes times waiting up to 30 seconds between consecutive failures
errors.retry.timeout=600000
errors.retry.delay.max.ms=30000
One more thing: when dealing with a sink connector, if I get errors due to duplicate records, it keeps retrying for a certain period. How do I set my own limit on the number of retries?
I tried setting errors.retry.timeout=0; the duplicate-key error still retried a certain number of times, but if the error is caused by the schema or the serializer it does not retry.
And finally, when errors.log.enable is true, where are these logs stored? I was checking the Connect log, but I was not able to see any difference between the default log and the log with errors.log.enable set to true.
Not sure how to fix your problem, but when errors.log.enable=true you should see two additional topics created for your connector, yourconnector-error and yourconnector-success, and you should be able to see the connector failure messages in the yourconnector-error topic.
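For reference, here is a hedged sketch of how the KIP-298 retry and logging properties are typically combined in a connector config (the values are illustrative, not taken from the question):

# keep the task running on bad records instead of failing it
errors.tolerance=all
# retry a failed operation for at most 5 minutes;
# 0 disables retries entirely, -1 retries indefinitely
errors.retry.timeout=300000
errors.retry.delay.max.ms=30000
# write details about failed operations to the Connect worker log
errors.log.enable=true
errors.log.include.messages=true

Note that errors.retry.timeout is a time budget in milliseconds rather than a retry count, so there is no direct property for "retry at most N times"; the budget together with errors.retry.delay.max.ms bounds how many attempts fit in the window.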

Kafka - different configuration settings

I am going through the documentation, and there seem to be a lot of moving parts with respect to message processing, like exactly-once processing and at-least-once processing, and the relevant settings are scattered here and there. There doesn't seem to be a single place that documents, even roughly, the properties that need to be configured for exactly-once and at-least-once processing.
I know there are many moving parts involved and it always depends. However, as I mentioned before, what are the settings that need to be configured, at a minimum, to provide exactly-once, at-most-once, and at-least-once processing?
You might be interested in the first part of the Kafka FAQ, which describes some approaches to avoiding duplication during data production (i.e. on the producer side):
Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
Use a single-writer per partition, and every time you get a network error, check the last message in that partition to see if your last write succeeded.
Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position, then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically, it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn't require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
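As a rough illustration only (my own summary, not part of the FAQ; the property names come from the standard Java clients and exact defaults depend on your client version), these are the settings usually associated with the different delivery guarantees:

# producer: at-least-once without reordering-induced duplicates
acks=all
enable.idempotence=true
max.in.flight.requests.per.connection=5
# producer: exactly-once across partitions/topics via transactions
# (the transactional.id value is illustrative)
transactional.id=my-transactional-producer
# consumer: read only committed transactional data and commit offsets manually
isolation.level=read_committed
enable.auto.commit=false
# Kafka Streams shortcut for end-to-end exactly-once
# (older broker/client versions use exactly_once instead)
processing.guarantee=exactly_once_v2

For at-most-once you would instead commit offsets before processing (or use acks=0/1 on the producer), accepting possible data loss rather than duplicates.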

Kafka Connect error handling and improved logging

I was trying to leverage some enhancements to Kafka Connect in the 2.0.0 release, as specified by this KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect, and I came across this good blog post by Robin: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues.
Here are my questions:
I have set errors.tolerance=all in my connector config. If I understand correctly, it will not fail for bad records and will move forward. Is my understanding correct?
In my case, the consumer doesn't fail and stays in the RUNNING state (which is expected), but the consumer offsets don't move forward for the partitions with the bad records. Any guess why this may be happening?
I have set errors.log.include.messages and errors.log.enable to true for my connector, but I don't see any additional logging for the failed records. The logs are similar to what I used to see before enabling these properties. I didn't see any message like this: https://github.com/apache/kafka/blob/5a95c2e1cd555d5f3ec148cc7c765d1bb7d716f9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/LogReporter.java#L67
Some Context:
In my connector, I do some transformations and validations for every record, and if any of these fail, I throw a RetriableException. Earlier I was throwing a RuntimeException, but I changed it to RetriableException after reading the comments on the RetryWithToleranceOperator class.
I have tried to keep it brief, but let me know if any additional context is required.
Thanks so much in advance!
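For comparison, a hedged sketch of the dead-letter-queue variant of the same KIP-298 settings (topic name and replication factor are illustrative); routing the tolerated records to a DLQ topic makes them visible even when the worker log looks unchanged:

errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
# sink connectors only: send failed records to a separate topic
errors.deadletterqueue.topic.name=dlq-my-connector
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true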

Capturing or logging subjob messages in Talend

I have been working on the logging part in Talend. I followed this tutorial https://www.talendforge.org/tutorials/tutorial.php?idTuto=33 and was able to successfully log errors as well as job stats (i.e. the begin and end times of the job), but I also want to capture the following in my logs:
the information/messages of the subjobs, like "csv file: 3 rows in 0.01s, 375 rows/s". How do I record or capture this information?
See Stats & Logs in job properties. There you can store this information into files or into a database.
Keep in mind you might need to activate Monitor this connection in the row settings.
In the premium model there is also an Advanced Monitoring Console available, which can be used to visualize those logs from a database.

How does 'client_min_messages' setting affect an application using libpq?

From the PostgreSQL documentation:
client_min_messages (enum)
Controls which message levels are sent to the client. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, LOG, NOTICE, WARNING, ERROR, FATAL, and PANIC. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The default is NOTICE. Note that LOG has a different rank here than in log_min_messages.
I am assuming that these messages are not the same as the results (PGresult) of the commands executed. If so, how do I read these messages through libpq? And would these messages have an impact on the application's performance?
Messages are sent as a distinct message type in the PostgreSQL protocol, usually interleaved with the result stream. libpq sees them, picks them out, and passes them to a notice-handling callback that you can examine or override; by default they are simply printed to stderr.
See the manual.
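A minimal libpq sketch (the connection string, query, and the DEBUG1 level are placeholders) showing how these messages can be captured via PQsetNoticeProcessor instead of the default handler, which just prints them to stderr:

#include <stdio.h>
#include <libpq-fe.h>

/* Invoked by libpq for every NOTICE/WARNING/DEBUG message the server sends. */
static void my_notice_processor(void *arg, const char *message)
{
    (void) arg;  /* unused user pointer */
    fprintf(stderr, "server message: %s", message);  /* message already ends with '\n' */
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* placeholder connection string */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Route server messages to our handler instead of the default stderr printer. */
    PQsetNoticeProcessor(conn, my_notice_processor, NULL);

    /* Ask the server to send this session everything down to DEBUG1. */
    PGresult *res = PQexec(conn, "SET client_min_messages = DEBUG1");
    PQclear(res);

    /* NOTICE/DEBUG output arrives through my_notice_processor,
       separately from the PGresult that PQexec returns. */
    res = PQexec(conn, "DO $$ BEGIN RAISE NOTICE 'hello from the server'; END $$;");
    PQclear(res);

    PQfinish(conn);
    return 0;
}

The callback runs client-side for each message, so the main cost is the extra protocol traffic and handling when verbose levels such as the DEBUG* ones are enabled; at the default NOTICE level the impact is usually negligible.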