I would like to send all data received by Fuse on a specific topic to a Business Process in BPM Studio. Is there any way to do this?
Example:
I send a value to 'testTopic' in Fuse. Fuse then sends this value to a Business Process (or the Business Process retrieves it), and the Business Process acts on the value received, for example by sending another value to another topic.
Is something of this kind possible?
Yes, it most definitely is possible, although you would need to route from 'testTopic' to one of the JMS queues that jBPM can listen on, and transform the message into a valid jBPM command. The general principle is described in the documentation at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e12149. The real power becomes clear when you look at all the jBPM commands you can send, found in the packages
org.drools.core.command.runtime.process (Maven: org.drools:drools-core)
and
org.jbpm.services.task.commands (Maven: org.jbpm:jbpm-human-task-core).
When talking to a process from the outside world, you would typically need to identify a correlationKey, which is basically the "business key" that uniquely identifies a process instance, e.g. an 'ApplicationNumber' for an application process. It can then be used to identify which process instance you want to signal/abort/etc.
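To make that concrete, here is a minimal, hedged sketch of a Camel route that forwards the topic payload to jBPM's JMS command queue. The queue name KIE.SESSION, the deployment id, the process id and the marshalling details are assumptions based on the jBPM 6 Remote API docs; verify them against your installation.

    import java.io.StringWriter;
    import javax.xml.bind.JAXBContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.drools.core.command.runtime.process.StartProcessCommand;
    import org.kie.services.client.serialization.jaxb.impl.JaxbCommandsRequest;

    public class TopicToJbpmRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("activemq:topic:testTopic")
                .process(exchange -> {
                    String value = exchange.getIn().getBody(String.class);
                    // Wrap the payload in a jBPM command, here a StartProcessCommand
                    StartProcessCommand cmd = new StartProcessCommand("com.example.MyProcess");
                    cmd.setParameters(java.util.Collections.singletonMap("receivedValue", (Object) value));
                    // The JMS interface expects a JaxbCommandsRequest marshalled to XML
                    JaxbCommandsRequest request = new JaxbCommandsRequest("my-deployment", cmd);
                    StringWriter xml = new StringWriter();
                    JAXBContext.newInstance(JaxbCommandsRequest.class)
                               .createMarshaller().marshal(request, xml);
                    exchange.getIn().setBody(xml.toString());
                })
                .to("activemq:queue:KIE.SESSION"); // jBPM's session command queue -- verify the name
        }
    }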
Since you are working in Fuse, you should probably also consider routing that message to the jBPM REST API described at http://docs.jboss.org/jbpm/v6.0/userguide/jBPMRemoteAPI.html#d0e10088. This may simplify your code a bit because it is a more synchronous API. The drawback, however, is that a REST-over-HTTP invocation typically does not take part in the local transaction.
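For comparison, a hedged sketch of the REST variant in plain Java; the URL pattern and the map_ query-parameter prefix follow the jBPM 6 docs, while the host, credentials, deployment id and process id are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.util.Base64;

    public class StartProcessViaRest {
        public static void main(String[] args) throws Exception {
            String value = "42"; // the payload taken from 'testTopic'
            // Start a process instance, passing the value as a process variable
            URL url = new URL("http://localhost:8080/jbpm-console/rest/runtime/"
                    + "my-deployment/process/com.example.MyProcess/start"
                    + "?map_receivedValue=" + URLEncoder.encode(value, "UTF-8"));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                    .encodeToString("user:password".getBytes("UTF-8")));
            System.out.println("HTTP " + conn.getResponseCode()); // 200 = process started
        }
    }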
Related
I know that messaging systems are non-blocking and scalable and are often recommended in a microservices environment.
The use case I am questioning is this:
Imagine there is an admin dashboard client responsible for sending an API request to create an Item object. There is a microservice that provides the API endpoint and uses a MySQL database to store the Item. There is another microservice which uses Elasticsearch for text-search purposes.
Should this admin dashboard client:
A. Send two API calls, one to the MySQL service and another to the Elasticsearch service,
or
B. Send a message to a topic that is consumed by both the MySQL service and the Elasticsearch service?
What are the pros and cons when considering A or B?
I'm thinking it's a little overkill when only two microservices consume this topic. Also, the frequency with which the admin creates Item objects is very low.
Like many things in software architecture, it depends. Your requirements, SLAs and business needs should make it clearer.
As you noted, a messaging system is non-blocking and much more scalable, but API communication has its pluses as well.
In general, REST APIs are best suited to request/response interactions where the client application sends a request to the API backend over HTTP.
Message streaming is best suited for notifications when new data or events occur that you may want to take action upon.
In your specific case, I would go with a messaging system, which is more scalable and non-blocking.
Your approach A couples the "routing" logic into your application. Suppose you later need an extra API call to audit your requests: you would have to change the code and add another call to your application logic. As you said, the approach is synchronous, and unless you add threading logic your calls will be serialized and won't scale, i.e. call MySQL --> wait for the response, then call Elasticsearch --> wait for the response, and so on.
That said, you may prefer this approach if you need immediate consistency, i.e. when the result of one action feeds the next action.
Approach B decouples that routing logic, so any other service interested in the event can subscribe to the topic and perform the expected action (see the sketch below). It is fully asynchronous and scalable, but you get eventual consistency and you have to recover from any possible failure yourself.
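A minimal sketch of approach B with plain JMS and ActiveMQ; the broker URL, topic name and payload are assumptions. Each consuming service would hold its own (ideally durable) subscription on the topic:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PublishItemCreated {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection conn = factory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                        session.createProducer(session.createTopic("item.events"));
                // One event; the MySQL service and the Elasticsearch service
                // each consume it through their own subscription.
                producer.send(session.createTextMessage("{\"type\":\"ItemCreated\",\"id\":42}"));
            } finally {
                conn.close();
            }
        }
    }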
I have been working on a project that is basically an e-commerce site. It is a multi-tenant application in which every client has its own domain, and the website adjusts itself based on the client's configuration.
If the client already has software that manages their inventory, like an ERP, I need a medium through which, when the e-commerce site generates an order, external applications like the ERP can be notified and take action in response. It would be like raising events across different applications.
I thought about storing these events in a database and having the client poll at a short interval to fetch the data, but polling over a REST API for this seems hackish.
Then I thought about using WebSockets, but if the client is offline for some reason when the event is generated, delivery cannot be assured.
Then I encountered message queues, RabbitMQ to be specific. With a message queue, modeling the problem in a simplistic manner, the e-commerce site would produce events on one end and push them to a queue that a client's worker would process as events arrive.
I honestly don't know what the best approach is, and would love for some of you experienced developers to give me a hand with this.
I do agree with Steve: using a message queue in your situation is ideal. Message queueing allows web servers to respond to requests quickly instead of being forced to perform resource-heavy procedures on the spot. You can put your events on the queue and let the consumer/worker handle each request when it has time to do so.
I recommend CloudAMQP for RabbitMQ, it's easy to try out and you can get started quickly. CloudAMQP is a hosted RabbitMQ service in the cloud. I also recommend this RabbitMQ guide: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Your idea of using a message queue is a good one, and better than a database or WebSockets for the reasons you describe. With the message-queue approach (RabbitMQ, or another server/broker-based system such as Apache Qpid) you should consider putting the broker in a "DMZ" sort of network location, so that your internal e-commerce system can push events out to it and your external clients can reach it without being given direct access to your core business systems. You could also run a separate broker per client.
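As a hedged sketch of that setup with the RabbitMQ Java client (host, queue name and payload are placeholders): the e-commerce side publishes a persistent event and the client worker consumes it. Because the queue is durable, events simply wait while the client is offline, which addresses the WebSocket concern:

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DefaultConsumer;
    import com.rabbitmq.client.Envelope;
    import com.rabbitmq.client.MessageProperties;

    public class OrderEvents {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("broker.example.com"); // the broker in the DMZ
            Connection conn = factory.newConnection();
            final Channel channel = conn.createChannel();

            // Durable per-client queue: events survive broker restarts
            // and wait while the client is offline
            channel.queueDeclare("orders.client-42", true, false, false, null);

            // Producer side (e-commerce): publish the order event as a persistent message
            channel.basicPublish("", "orders.client-42",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "{\"event\":\"OrderCreated\",\"orderId\":123}".getBytes("UTF-8"));

            // Consumer side (client worker): ack only after processing succeeds
            channel.basicConsume("orders.client-42", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String tag, Envelope env,
                        AMQP.BasicProperties props, byte[] body) throws java.io.IOException {
                    System.out.println("ERP hook gets: " + new String(body, "UTF-8"));
                    channel.basicAck(env.getDeliveryTag(), false);
                }
            });
        }
    }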
Here are the details of my use case:
What's my data..
There will be user-experience data, error reports, state info and so on. The data is fragmented and may change in the future, so I plan to use NoSQL, maybe MongoDB, to store the data on the server.
What are the clients..
They are clients written in different languages, like C#, C++, LabVIEW and so on. Some don't even have access to a MongoDB driver, so communicating with the database directly is not an option, and a framework like the one below is needed:
Clients -> (Some protocol) -> Broker -> Database.
As those clients are not web clients, a common web server using HTTP may not suit my case, right? Are there any suggestions for the protocol, broker and database, or even a whole framework?
My goal is to let the clients send data as conveniently as possible.
Thank you!
This is not really new; it is a message-driven application, which is a well-understood pattern.
I did this mostly in Java, so I will stick to this language here.
A broker alone would not be enough here. Let's say you use Apache ActiveMQ as your message broker: you would still need to get your data into the database, since MQ is... ...a message queue. So you need a part which gets the messages out of MQ, processes them according to your business rules and stores them in the (correct) database instance and the correct collection/bucket/table. Of course you could write this part by hand, but that would be pretty much reinventing the wheel. There is a notion of a "message routing and mediation engine", and the one most commonly suggested here is Apache Camel, which has quite a few components to communicate with databases and other so-called consumers and producers.

And that is the key point: in general, if possible, your clients should send their data to the message broker directly. But if they can't, they can simply send text files or make REST calls; there are actually too many options to list here. This incoming data can be preprocessed and normalized to your standard format by a "route" in Apache Camel (a set of a consumer, conversion rules and a producer, in its simplest form) and sent as an AMQP message to MQ. From there, another Camel route can process the AMQP messages, apply your business rules and store the data in the database... ...or do whatever else may come to your mind (for example, sending an email).
So this solution supports a multitude of protocols for incoming and outgoing messages (as long as they are supported by Camel), and your business rules live in a centralized, well-defined location.
To implement this, I'd strongly suggest using Apache ServiceMix, which is a distribution of ActiveMQ, Camel and a system to manage the components and business rules.
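To illustrate, a minimal sketch of the two routes described above, in Camel's Java DSL. The endpoint URIs, the normalize() step and the Mongo database/collection names are assumptions; camel-jetty, camel-jms and camel-mongodb (with a MongoClient registered as "mongoClient") would be needed on the classpath:

    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;

    public class IngestRoutes extends RouteBuilder {
        @Override
        public void configure() {
            // Route 1: accept raw client data over HTTP, normalize it, hand it to the broker
            from("jetty:http://0.0.0.0:8181/ingest")
                .process(this::normalize)
                .to("activemq:queue:incoming.data");

            // Route 2: take normalized messages off the queue and store them in MongoDB
            from("activemq:queue:incoming.data")
                .to("mongodb:mongoClient?database=telemetry&collection=events&operation=insert");
        }

        // Placeholder for your conversion rules: turn whatever the client sent
        // into your standard format (e.g. a JSON document)
        private void normalize(Exchange exchange) {
            exchange.getIn().setBody(exchange.getIn().getBody(String.class));
        }
    }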
In the end, I think a web server using the HTTP protocol can suit the use case after all.
What I mostly want is a universal API through which different kinds of clients can save data to the cloud. HTTP has the methods GET, POST, PUT and DELETE, so a RESTful API is naturally suited to operating on data.
My final solution is Node.js (Express) + MongoDB (a quite common pairing): a RESTful API is provided via the Express web server, and clients can use HTTP to operate on data conveniently. It is also quite lightweight and easy to get started with.
Here is a tutorial: http://cwbuecheler.com/web/tutorials/2013/node-express-mongo/
We are working on a logging feature where we need to write log entries to the DB. But the process runs in a transaction, and on rollback our new log entries are deleted as well. Can I write to the DB outside of the transaction? Something like writing to a temp-table with the NO-UNDO option, so that the new log entries still remain in the DB?
Another possibility would be to use an AppServer. Transactions in AppServer sessions are independent of transactions in the original session (that's what the optional and redundant "DISTINCT TRANSACTION" syntax is all about).
Another option would be to use a simple messaging system. One that is very easy to set up and use is STOMP. It is platform-neutral and very easy to get going with.
Julian Lyndon-Smith posted the following on PEG about a month ago, and it really is as easy to set up and use as he says (I've tried it; I used Apache ActiveMQ, which is also very easy to set up and use):
Following on from presentations in Boston and Finland, dot.r is pleased to announce the open source Stomp project, available immediately.

Download from either http://www.dotr.com or https://bitbucket.org/jmls/stomp. The dot.r Stomp programs allow you to connect your Progress session to any other application or service that is connected to the same message broker.

Open source, free message brokers that support Stomp are:

Fuse (http://fusesource.com/products/fuse-mq-enterprise/) [a Progress company, now owned by Red Hat Inc.]
Fuse MQ Enterprise is a standards-based, open source messaging platform that deploys with a very small footprint. The lack of license fees combined with high-performance, reliable messaging that can be used with any development environment provides a solution that supports integration everywhere.

ActiveMQ (http://activemq.apache.org/)
Apache ActiveMQ is the most popular and powerful open source messaging and Integration Patterns server. Apache ActiveMQ is fast, supports many cross-language clients and protocols, comes with easy-to-use Enterprise Integration Patterns and many advanced features, and fully supports JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.

RabbitMQ
RabbitMQ is a message broker. The principal idea is pretty simple: it accepts and forwards messages. You can think about it as a post office: when you send mail to the post box you're pretty sure that Mr. Postman will eventually deliver the mail to your recipient. Using this metaphor, RabbitMQ is a post box, a post office and a postman. The major difference between RabbitMQ and the post office is the fact that it doesn't deal with paper; instead it accepts, stores and forwards binary blobs of data - messages.

Please feel free to log any issues on the https://bitbucket.org/jmls/stomp issue system, and fork the project in order to commit back all those new features that you are going to add ...

dot.r Stomp uses the permissive MIT licence (http://en.wikipedia.org/wiki/MIT_License).

Have fun, enjoy!

Julian
Every change to the database must be part of a transaction. If you do not explicitly start one, it will be started implicitly for you and scoped to the next outer block with transaction capabilities.
However, although I would not recommend it, you can work with sub-transactions. You can start a sub-transaction by explicitly specifying DO TRANSACTION within the transaction scope. Although the database will never know about it, the client can roll back the sub-transaction while the database commits the transaction.
But in order to implement something like this you must master the concepts of transaction scope, block behavior and error handling.
RealHeavyDude.
Write your log entries to a NO-UNDO temp-table.
When the code commits a transaction, or when no transaction is active (transactionID = ?), have your code write the log entries out.
I don't think there is any way to do this in ABL as you planned, either efficiently (sprinkling temp-table flushes or other tidbits all over the place is gross) or reliably (what if the application crashes with an unflushed temp-table?), as others have mentioned. I would suggest making your complicated logging less coupled to your app by making the database writes asynchronous, occurring outside of your application if possible.
Since you're on Windows, you could change your logging to use the .NET log4net library instead of ABL constructs. log4net has a few appenders that would be useful:
AdoNetAppender, which lets you log directly to a database
RemoteSyslogAppender, which uses the syslog protocol, letting you log to an external Unix syslog or rsyslog daemon (rsyslog supports writing log messages to databases)
UDPAppender, which sends the log messages via UDP packets somewhere else to be handled (e.g. a logFaces server, which supports writing to databases)
If you must do it in ABL, then you could use a named output stream dedicated to your log messages (OUTPUT TO STREAM) which writes to a specific location where an external process is listening; a sketch of such a process follows below. This file could be a pipe created by something like mkfifo, or just a regular text file that is monitored for changes with inotify (I'm not sure what the Windows equivalents of these are). The external process would handle parsing the messages and writing them to the database (basically reinventing rsyslog).
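A hedged sketch of that external process in Java; the pipe path, JDBC URL, credentials and table are placeholders, and the blocking-read behavior assumes a fifo:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class LogShipper {
        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection(
                    "jdbc:mysql://localhost/logs", "loguser", "secret");
            PreparedStatement insert =
                    db.prepareStatement("INSERT INTO logentry (msg) VALUES (?)");
            BufferedReader in = new BufferedReader(new FileReader("/var/log/app/log.pipe"));
            while (true) {
                String line = in.readLine(); // blocks on a fifo until the writer sends more
                if (line != null) {
                    insert.setString(1, line);
                    insert.executeUpdate(); // its own transaction, independent of the ABL one
                } else {
                    Thread.sleep(250); // plain-file case: poll for newly appended lines
                }
            }
        }
    }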
I like the NO-UNDO temp-table idea; just be sure to put the database-write part in a "FINALLY" block in case of unhandled exceptions.
Does anybody have experience with configuring distribution lists (sending a message to one queue and having that message be forwarded to several other queues) for WebSphere MQ v7? I want to configure it on my queue manager rather than have the client know all the queues to send the messages to. Also, I would prefer not to use a topic, because I want to be able to manage each queue separately. Is there some configuration file, or some way to use WebSphere MQ Explorer, to do this?
Thanks
A program that uses a distribution list doesn't have to "know" the queues it sends to in the sense of hard-coding the names, but it does have to supply the list of queue names. Typically you place these in a namelist and have the sending program retrieve them from there. When the program calls PUT, it must also be prepared to parse a structure of per-queue return codes rather than a single MQRC, as in the sketch below.
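A hedged sketch with the WebSphere MQ classes for Java, from memory of the classic com.ibm.mq API; check class and field names against the v7 javadoc, and treat the queue manager and queue names as placeholders:

    import com.ibm.mq.MQC;
    import com.ibm.mq.MQDistributionList;
    import com.ibm.mq.MQDistributionListItem;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueueManager;

    public class DistListPut {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1");

            // One item per destination queue; in practice you would read
            // these names from a namelist rather than hard-coding them
            MQDistributionListItem[] items = { new MQDistributionListItem(),
                                               new MQDistributionListItem() };
            items[0].queueName = "APP.QUEUE.A";
            items[1].queueName = "APP.QUEUE.B";

            MQDistributionList dlist =
                    new MQDistributionList(qmgr, items, MQC.MQOO_OUTPUT, null);

            MQMessage msg = new MQMessage();
            msg.writeString("hello");
            dlist.put(msg, new MQPutMessageOptions());

            // One completion/reason code per destination instead of a single MQRC
            for (MQDistributionListItem item : items) {
                if (item.completionCode != MQC.MQCC_OK) {
                    System.err.println(item.queueName + " failed: " + item.reasonCode);
                }
            }
            dlist.close();
            qmgr.disconnect();
        }
    }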
However, you really should reconsider using a topic. You can create administrative subscriptions for each destination queue, which lets you send the publications to any local or remote queue that you like. It also has the advantage that you can add or delete destinations without having to restart, or worse, recompile, the sending application.
You can use WMQ Explorer either to manage a namelist or to manage the topic and administrative subscriptions. The topic/subscriptions method is the only one that works purely through configuration; using distribution lists requires a program specifically designed for the purpose.