I am using the AuditSink object to get audit logs.
I didn't find any documentation/API regarding a retry option for audit logs.
What happens in case the web server / service is not available?
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#auditsink-v1alpha1-auditregistration-k8s-io
The fine source implies there is a retry mechanism, and thus the need for configuring its backoff, but aside from whatever you can find by surfing around in the source, I don't know that any promises have been made about deliverability. If you need such guarantees, you may be happier sending audit events to stdout or to disk and then egressing them the way you would any other log content.
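For context, the dynamic backend is declared with an AuditSink manifest along these lines (the endpoint URL is a placeholder); note that the throttle block rate-limits delivery on the client side and is not a retry/backoff policy:

```yaml
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: example-sink
spec:
  policy:
    level: Metadata
    stages:
    - ResponseComplete
  webhook:
    throttle:
      qps: 10    # client-side rate limit, not a retry policy
      burst: 15
    clientConfig:
      url: "https://audit.example.com/events"  # placeholder endpoint
```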
I need to simply monitor whether my Kafka cluster is up. Occasionally the machines running Kafka are shut down. I want to send an email alert if the cluster is not available.
I can create a producer and consumer to send and receive dummy messages periodically. Is there a simpler way to do it?
You can use https://github.com/obsidiandynamics/kafdrop
It won't send you emails, but it's much easier than sending dummy messages.
Actually, knowing whether a cluster is up is not easy at all. There has been discussion in the community about the best practice for deciding whether a Kafka cluster is up and active, but there is currently no single good way to get this information. Kafka is a distributed system: you might have a big cluster where one or more brokers are down while the cluster still provides highly available service without affecting data integrity. You might also have problems with one topic while other topics work fine.
One suggestion I read, which might give you the most certainty, is to produce "dummy" messages to your applicative topics and "skip" these messages on consumption; that guarantees the path your application actually uses is working. I don't like this approach very much, as it requires sending junk to your main topics.
Other approaches are, as you say, to produce/consume to/from a test/healthcheck topic, but that might not fully guarantee that your application would work; it is a lot like a SELECT FROM dual check in the database world... if that is good enough for them...
Another suggestion is to use the AdminClient to read cluster metrics; if metrics are returned, that usually means the cluster is healthy, but again this is not a strong guarantee...
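For example, a minimal probe with the Kafka AdminClient might look like the following (the broker list and timeout are placeholders), with the caveat that a reachable broker does not imply every topic is healthy:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class KafkaHealthProbe {
    public static boolean isReachable(String bootstrapServers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            // describeCluster() answers only if at least one broker responds;
            // it says nothing about per-topic or per-partition health.
            Collection<Node> nodes = admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
            return nodes != null && !nodes.isEmpty();
        } catch (Exception e) {
            return false; // timeout or connection failure: treat as "down"
        }
    }
}
```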
I asked in a comment which language you are using; maybe you use something like Spring, which has a HealthIndicator to check component status, but for your case it would be a little different.
First of all, you should know that Kafka is designed to be highly available, so while building the cluster you should follow the broad lines of best practice and ensure that you have replicas across machines. If you build it that way, availability is a reasonable assumption to rely on, which spares you from implementing all of this.
But if you want to check the health of a cluster, you can use an admin process: with the AdminClient and some helper utilities you can check the list of topics, consumer groups, and so on. This is not a 100% guarantee, but it is a good workaround.
You can do that, as you mentioned, with a periodic scheduler, and send an email based on what you find. But again, this is not the ideal solution; an HA cluster infrastructure, built correctly from the beginning, should save you most of this effort.
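If you do go the periodic-check route, the scheduling part is simple; here is a sketch where the actual check and the email hook are placeholders you would fill in:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClusterMonitor {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the health check every minute and alert on failure.
        scheduler.scheduleAtFixedRate(() -> {
            if (!isClusterUp()) {
                sendAlertEmail("Kafka cluster appears to be down");
            }
        }, 0, 1, TimeUnit.MINUTES);
    }

    private static boolean isClusterUp() {
        // Placeholder: an AdminClient check like the one sketched earlier,
        // or a produce/consume round-trip against a healthcheck topic.
        return true;
    }

    private static void sendAlertEmail(String message) {
        // Placeholder: wire up JavaMail, an HTTP webhook, or similar here.
        System.err.println("ALERT: " + message);
    }
}
```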
I have an Informatica BDM system (note: Big Data Management, not PowerCenter) and am having a problem with dropped connections when communicating with a third-party web service. This fails the REST web service transformation, which in turn kills our batch job.
Rather than fail the entire job, I would like each REST call to potentially be retried several times first.
I looked at the documentation, but I see no option to set a retry on the REST Web Service Consumer transformation. Did I miss it? Or does one have to construct a retry loop around it in some other way?
Bad news: Informatica BDM has yet to come up with a retry option. The need for this feature has already been raised by many users, and it may arrive in an upcoming release.
For now, we can only keep track of all the requests and responses manually and hope for the best.
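If you do end up building the retry yourself outside the tool, the usual shape is a retry-with-backoff wrapper around the call; here is a generic, tool-agnostic sketch in Java (the wrapped call is whatever invokes your endpoint):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Retries the call up to maxAttempts times (maxAttempts >= 1),
    // doubling the delay between attempts.
    public static <T> T withBackoff(Callable<T> call, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff
                }
            }
        }
        throw last; // all attempts failed: let the job decide what to do
    }
}
```

Something like `Retry.withBackoff(() -> callRestService(request), 3, 1000)` would then retry a hypothetical callRestService up to three times before letting the failure propagate.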
I'm trying to find a messaging system that supports the following use case.
Producer registers a topic namespace
Client subscribes to a topic
The first client subscription triggers the producer to start producing
A new client subscribing to the same topic receives data (potentially conflated, similar to hot/cold observables in the Rx world)
When the last client goes away (unsubscribes or crashes), the producer is notified to stop producing to said topic
I am aware that, according to the pub/sub pattern, a producer is defined to be blissfully unaware of the existence of consumers, meaning that my use case simply does not fit the pub/sub paradigm.
So far I have looked into Kafka, Redis, NATS.io and Amazon SQS, but without much success. I've been thinking about a few possible ways to solve this, but haven't found anything that satisfies my needs yet.
One option that springs to mind, for bullet 2), is to model a request/reply pattern, as detailed among others on the NATS page, to have the producer listen to clients. A client would then publish a 'subscribe' message into the system, which the producer would pick up on a wildcard subscription. This, however, leaves one big problem: unsubscribing. Assuming the consumer stops as it should, publishing an 'unsubscribe' message just as with the subscribe would work. But in the case of a crash or similar, this won't work.
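For what it's worth, the listening side of that idea might look like this with the java-nats client (the URL and subject names are made up, and, as noted, this still does not detect crashes):

```java
import io.nats.client.Connection;
import io.nats.client.Dispatcher;
import io.nats.client.Nats;

public class PresenceAwareProducer {
    public static void main(String[] args) throws Exception {
        Connection nc = Nats.connect("nats://localhost:4222");

        // Clients publish to control.subscribe.<topic> / control.unsubscribe.<topic>;
        // the producer picks both up via wildcard subscriptions.
        Dispatcher d = nc.createDispatcher(msg -> {
            String[] parts = msg.getSubject().split("\\.");
            String action = parts[1]; // "subscribe" or "unsubscribe"
            String topic = parts[2];
            // Track reference counts per topic and start/stop producing here.
            // A crashed consumer never sends "unsubscribe", which is exactly
            // the gap described above: you still need heartbeats or leases.
            System.out.println(action + " for topic " + topic);
        });
        d.subscribe("control.subscribe.*");
        d.subscribe("control.unsubscribe.*");
    }
}
```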
I'd be grateful for any ideas, references or architectural patterns/best practices that satisfy the above.
I've been doing quite a bit of research over the past week but haven't come across any satisfying Q&A or articles. Either I'm approaching it entirely wrong, or there just doesn't seem to be much out there, which would surprise me, as this appears to be a fairly common scenario that applies to many domains.
thanks in advance
Chris
//edit
An actual simple use case that I have at hand is stock quote distribution.
Quotes come from an external source
Subscribe to stock A quotes from the external system when the first end-user looks at stock A
Stop receiving quotes for stock A from the external system when no more end-users are looking at said stock
RabbitMQ has internal events you can use with the Event Exchange Plugin. Events such as consumer.created or consumer.deleted can be used to trigger actions at your server level: for example, checking the remaining number of consumers via the RabbitMQ Management API and taking an action, such as closing a topic, based on your use case.
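For illustration, a listener for those internal events might look like this with the RabbitMQ Java client (this assumes the rabbitmq_event_exchange plugin is enabled, which provides the amq.rabbitmq.event exchange; host and handling logic are placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConsumerEventListener {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Bind a server-named queue to the event exchange for consumer events.
        String queue = channel.queueDeclare().getQueue();
        channel.queueBind(queue, "amq.rabbitmq.event", "consumer.#");

        channel.basicConsume(queue, true, (consumerTag, delivery) -> {
            // The routing key is consumer.created or consumer.deleted;
            // event details (queue name, etc.) arrive as message headers.
            String event = delivery.getEnvelope().getRoutingKey();
            System.out.println("Event: " + event);
            // Here you could query the Management API for the remaining
            // consumer count and stop the producer when it reaches zero.
        }, consumerTag -> { });
    }
}
```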
I don't know of any messaging system with consumer-presence-based publishing built in. It gets even worse because you'll need some kind of heartbeat mechanism to handle consumer crashes.
So here are my two cents: I'm not sure if you're looking for an out-of-the-box solution, but if not, you could wrap your application around a ZooKeeper cluster to handle all of your use cases.
Simply use watchers on ephemeral nodes to detect when you have no more consumers (including crashes), and put a watcher on a 'consumers' path to be notified when consumers arrive.
On the consumer side, you would register an ephemeral ZooKeeper node whenever a consumer starts.
It's not that complicated to do, and ZooKeeper is not the only option for this; you might use other consensus technologies as well.
A starting point for ZooKeeper:
https://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html
(I strongly advise using the Curator API, which handles a lot of the recipes in a smooth way.)
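For illustration, a rough Curator-based sketch of both sides (the connection string, paths and IDs are made up):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class PresenceTracker {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Consumer side: an ephemeral node that ZooKeeper deletes automatically
        // when the session dies, which is how crashes are covered.
        client.create().creatingParentsIfNeeded()
                .withMode(CreateMode.EPHEMERAL)
                .forPath("/consumers/stockA/consumer-1");

        // Producer side: watch the children of the topic's consumers path.
        PathChildrenCache cache = new PathChildrenCache(client, "/consumers/stockA", true);
        cache.getListenable().addListener((c, event) -> {
            int consumers = cache.getCurrentData().size();
            if (consumers == 0) {
                System.out.println("last consumer gone: stop producing stockA");
            } else {
                System.out.println(consumers + " consumer(s): keep producing");
            }
        });
        cache.start();
    }
}
```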
Yannick
Unfortunately you haven't specified the business use case that you are trying to solve with these requirements. From the sound of it, you want not a pub/sub system but an orchestration one.
I would recommend checking out the Cadence Workflow that is capable of supporting your listed requirements and many more orchestration use cases.
Here is a strawman design that satisfies your requirements (a code sketch follows the list):
Any new subscriber sends a signal to a workflow, using the TopicName as the workflowID, to subscribe. If a workflow with the given ID doesn't exist, it is started automatically.
Any subscriber sends another signal to unsubscribe.
When no subscribers are left, the workflow exits.
A publisher sends an event to the workflow for delivery to subscribers.
The workflow delivers the event to the subscribers using an activity.
If no workflow with the given TopicName is running, publishing an event to it will fail.
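In the Cadence Java client, that strawman could be sketched roughly as below. Interface names, timeouts and the delivery activity are all illustrative; subscribers would use signalWithStart so that the first subscribe creates the workflow, while publishers use a plain signal so that publishing to a non-running topic fails, matching the last bullet.

```java
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.workflow.SignalMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

import java.time.Duration;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public interface TopicWorkflow {
    @WorkflowMethod
    void run(String topicName); // workflowID == TopicName

    @SignalMethod
    void subscribe(String subscriberId);

    @SignalMethod
    void unsubscribe(String subscriberId);

    @SignalMethod
    void publish(String event);
}

class TopicWorkflowImpl implements TopicWorkflow {
    private final Set<String> subscribers = new HashSet<>();
    private final Queue<String> pending = new ArrayDeque<>();
    private boolean sawFirstSubscriber = false;

    // Illustrative activity stub; delivery failures get Cadence's retry handling.
    private final DeliveryActivities delivery = Workflow.newActivityStub(
            DeliveryActivities.class,
            new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofMinutes(1))
                    .build());

    @Override
    public void run(String topicName) {
        // Block until the first subscriber arrives, then loop until none remain.
        Workflow.await(() -> sawFirstSubscriber);
        while (!subscribers.isEmpty()) {
            Workflow.await(() -> !pending.isEmpty() || subscribers.isEmpty());
            while (!pending.isEmpty()) {
                delivery.deliver(topicName, pending.poll()); // fan-out via an activity
            }
        }
        // Returning here ends the workflow: "when no subscribers are left, exit".
    }

    @Override public void subscribe(String id) { subscribers.add(id); sawFirstSubscriber = true; }
    @Override public void unsubscribe(String id) { subscribers.remove(id); }
    @Override public void publish(String event) { pending.add(event); }
}

interface DeliveryActivities {
    void deliver(String topicName, String event); // illustrative delivery activity
}
```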
Cadence offers a lot of other advantages over using queues for task processing.
Built-in exponential retries with an unlimited expiration interval
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running heartbeating operations
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, when using queues, all you know is whether there are messages in a queue, and you need an additional DB to track overall progress; with Cadence, every event is recorded.
Ability to cancel an update in flight.
Distributed CRON support
See the presentation that goes over Cadence programming model.
I'm starting out using Rebus with MSMQ, but I cannot seem to find the requirements for MSMQ.
So in (Roles | Programs and Features), which options do I need to enable, and what is their impact with regard to Rebus?
I'm pretty sure I need the Message Queuing Server ;-) But what about the others:
Directory Service Integration
HTTP Support
Message Queuing Triggers
Multicast Support
Routing Service
I think none of these are needed, and none of the extra features are supported by Rebus.
I don't think you'll ruin anything by checking one or more of those extra options, but the only requirement for Rebus to work is a single checkmark next to the top-level Message Queuing Server.
You're absolutely right that I ought to document this on the wiki - it's almost too easy :)
We have a logging issue where we need to write log entries to the database. The process runs inside a transaction, however, and on rollback our new log entries are deleted along with everything else. Can I write to the DB outside of the transaction, something like writing to a temp-table with the NO-UNDO option, so that the new log entries remain in the DB?
Another possibility would be to use an AppServer. Transactions in AppServer sessions are independent of transactions in the original session (that's what the optional and redundant "DISTINCT TRANSACTION" syntax is all about).
Another option would be to use a simple messaging system. One that is very easy to set up and use is STOMP. It is platform-neutral and very easy to get going with.
Julian Lyndon-Smith posted the following on the PEG about a month ago, and it really is as easy to set up and use as he says (I've tried it; I used ActiveMQ, which is also very easy to set up and use):
Following on from presentations in Boston and Finland, dot.r is pleased to announce the open source Stomp project, available immediately.
Download from either http://www.dotr.com or https://bitbucket.org/jmls/stomp; the dot.r stomp programs allow you to connect your Progress session to any other application or service that is connected to the same message broker.
Open source, free message brokers that support Stomp are:
Fuse (http://fusesource.com/products/fuse-mq-enterprise/) [a Progress company, now owned by Red Hat Inc.]
Fuse MQ Enterprise is a standards-based, open source messaging platform that deploys with a very small footprint. The lack of license fees, combined with high-performance, reliable messaging that can be used with any development environment, provides a solution that supports integration everywhere.
ActiveMQ
Apache ActiveMQ (http://activemq.apache.org/) is the most popular and powerful open source messaging and Integration Patterns server. Apache ActiveMQ is fast, supports many cross-language clients and protocols, and comes with easy-to-use Enterprise Integration Patterns and many advanced features, while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.
RabbitMQ
RabbitMQ is a message broker. The principal idea is pretty simple: it accepts and forwards messages. You can think of it as a post office: when you send mail to the post box, you're pretty sure that Mr. Postman will eventually deliver the mail to your recipient. In this metaphor, RabbitMQ is a post box, a post office and a postman.
The major difference between RabbitMQ and the post office is that it doesn't deal with paper; instead it accepts, stores and forwards binary blobs of data: messages.
Please feel free to log any issues on the https://bitbucket.org/jmls/stomp issue system, and fork the project in order to commit back all those new features that you are going to add ...
dot.r Stomp uses the permissive MIT licence (http://en.wikipedia.org/wiki/MIT_License).
Have fun, enjoy!
Julian
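To give a feel for how lightweight STOMP is, here is a hedged sketch that sends a log message to a broker over a raw socket (host, port, destination and the log line are placeholders; a real setup would use a proper STOMP client library and read the broker's replies):

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StompLogSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 61613)) { // common STOMP port
            OutputStream out = socket.getOutputStream();

            // STOMP frames are plain text: a command line, headers,
            // a blank line, an optional body, and a terminating NUL byte.
            String connect = "CONNECT\naccept-version:1.2\nhost:/\n\n\0";
            String send = "SEND\ndestination:/queue/app.logs\n\n"
                        + "ERROR something happened in the batch run\0";
            String disconnect = "DISCONNECT\n\n\0";

            out.write(connect.getBytes(StandardCharsets.UTF_8));
            out.write(send.getBytes(StandardCharsets.UTF_8));
            out.write(disconnect.getBytes(StandardCharsets.UTF_8));
            out.flush();
            // A robust sender would read the CONNECTED frame and handle errors;
            // this is only meant to show the shape of the protocol.
        }
    }
}
```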
Every change to the database must be part of a transaction. If you do not explicitly start one it will be implicitly started for you and scoped to the next outer block with transaction capabilities.
However, and although I would not recommend it, you can work with sub-transactions. You can invoke a sub-transaction by explicitly specifying DO TRANSACTION within the transaction scope. Although the database will never know about it, the client can roll back the sub-transaction while the database commits the transaction.
But in order to implement something like this you must master the concepts of transaction scope, block behavior and error handling.
RealHeavyDude.
Write your log entries to a NO-UNDO temp-table.
When the code commits a transaction, or when no transaction is active (transactionID = ?), have your code write the log entries out.
I don't think there is any way to do this in ABL as you planned, either efficiently (sprinkling temp-table flushes and other tidbits all over the place is gross) or reliably (what if the application crashes with an un-flushed temp-table?), as others have mentioned. I would suggest making your logging less coupled to your app by making the database writes asynchronous, occurring outside of your application if possible.
Since you're on Windows, you could change your logging to use the .NET log4net library instead of ABL constructs. log4net has a few appenders that would be useful:
AdoNetAppender which lets you log directly to a database
RemoteSyslogAppender which uses the syslog protocol, letting you log to an external Unix syslog or rsyslog daemon (rsyslog supports writing log messages to databases)
UDPAppender which sends the log messages via UDP packets somewhere else to be handled (e.g. a logFaces server, which supports writing to databases)
If you must do it in ABL, you could use a named output stream specifically for your log messages (OUTPUT TO STREAM) that writes to a specific location where an external process is listening. This file could be a pipe created by something like mkfifo, or just a regular text file that is monitored for changes with inotify (I'm not sure what the Windows equivalents of these are). The external process would handle parsing the messages and writing them to the database (basically re-inventing rsyslog).
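As a sketch of what that external process might look like, here is a minimal Java version that tails a plain text file and inserts each line via JDBC (the file path, table, JDBC URL and credentials are all placeholders):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LogFileToDb {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/logs", "loguser", "secret");
             BufferedReader reader = new BufferedReader(
                     new FileReader("/var/log/app/audit.log"));
             PreparedStatement insert = db.prepareStatement(
                     "INSERT INTO log_entries (message) VALUES (?)")) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    Thread.sleep(500); // no new data yet: poll like a simple tail -f
                    continue;
                }
                insert.setString(1, line);
                insert.executeUpdate(); // each entry commits independently of the app
            }
        }
    }
}
```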
I like the NO-UNDO temp-table idea; just be sure to put the database write part in a FINALLY block in case of unhandled exceptions.