Credit card messages on AS400 DATAQ - DB2

How can I export alert messages from an AS400 data queue (DTAQ), once received, into a PF table so they are ready to be sent to a remote Oracle table, given that the DTAQ deletes a message once it has been received?
Is it possible to create a trigger on the DTAQ that exports the message content and creates/populates the table as soon as a message is received?
Please help.

Given the follow-up replies in the comments, it seems the inquiry may be mostly resolved by using the Receive Data Queue (QRCVDTAQ) API; i.e. the arrival of a new message/entry on the data queue (DTAQ) triggers its receipt by the program that invoked the API to wait for and receive that new message. That feature is also available from Java via a class in the JTOpen/JT400 Toolbox (Programming -> Java -> IBM Toolbox for Java -> IBM Toolbox for Java classes -> Access classes -> Data queues).
FWiW: Like other commenters on the OP, I cannot figure out what is meant by the mention of a PF or Oracle; the OP text seems to have been produced by a web language translator from non-English text into [seemingly incoherent] English, so if that is not how the text originated, then consider rephrasing [or rewriting] it, as one of the comments suggested. Please understand I am not trying to taunt anyone about their skill with English by saying that; I am only trying to clarify why the request is made. Sometimes, for programmers composing an OP, writing what does occur or needs to occur procedurally, in pseudo-code, can make the OP much clearer to more readers than sentences and paragraphs can.
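A minimal sketch of that wait-and-receive flow using the Toolbox DataQueue class might look like the following; the system name, credentials, library, and queue name are placeholders, and writing the received entry to a PF (e.g. via a JDBC INSERT) is only indicated by a comment:

    import com.ibm.as400.access.AS400;
    import com.ibm.as400.access.DataQueue;
    import com.ibm.as400.access.DataQueueEntry;

    public class DtaqListener {
        public static void main(String[] args) throws Exception {
            // Connect to the system (host, user, password are placeholders).
            AS400 system = new AS400("MYAS400", "MYUSER", "MYPWD");

            // IFS-style path to the data queue; library and queue name are assumptions.
            DataQueue dtaq = new DataQueue(system, "/QSYS.LIB/MYLIB.LIB/ALERTQ.DTAQ");

            while (true) {
                // read(-1) waits indefinitely until an entry arrives,
                // mirroring the wait behavior of QRCVDTAQ.
                DataQueueEntry entry = dtaq.read(-1);
                String message = entry.getString();

                // The entry has now been removed from the queue, so persist it
                // immediately (e.g. insert it into the PF via JDBC) before looping.
                System.out.println("Received: " + message);
            }
        }
    }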

Related

kdb - customized data streaming/ticker plant?

We've been using kdb to handle a number of calculations focused more on traditional desktop sources. We have deployed our web application and are looking to make the leap as to how best to pick up data changes and re-calculate them in kdb to render a "real-time" view of the data as it changes.
From what I've been reading, the use of data loaders (feed handlers) feeding our own equivalent of a "ticker plant" as a data store is the most documented ideal solution. So far, we have been "pushing" data into kdb directly and calculating as part of a script, so we are trying to make the leap from calculation-on-demand to a "live" calculation as data inputs are edited by the user.
I'm trying to understand how to manage the feed handlers and the timing of updates. We really only want to move data when it changes (this is a web front end, so we're trying to figure out how best to "trigger" updates when things change, such as on save or on lost focus in an editable data grid). We are also considering using our database as the "ticker plant" itself, which may minimize feed handlers.
I found the reference below, and it looks like it's running a forever loop, which feels excessive, though I understand the original use case for kdb and streaming data.
Feedhandler - sending data to tickerplant
Does this sound like a solid workflow?
Many thanks in advance!
Resources we've been referencing:
Official manual: https://github.com/KxSystems/kdb/blob/master/d/tick.htm
kdb+ Tick overview: http://www.timestored.com/kdb-guides/kdb-tick-data-store
Source code: https://github.com/KxSystems/kdb-tick
There's a lot to parse here but some general thoughts/ideas:
Yes, most examples of feedhandlers are set up as forever loops, but this is often just for convenience when demoing.
Ideally a live data flow should work based on event handling, i.e. on-event triggers. Kdb/q has this out of the box in the form of the .z handlers. Other languages should have similar concepts of event handling.
Some more examples of python/java feeders are here: https://github.com/exxeleron
There's also some details on the official Kx site: https://code.kx.com/q/wp/capi/#publishing-to-a-kdb-tickerplant
It still might be a viable option to have a forever loop, or at least a short timer in the event you want to batch data.
Depending on the amount of dataflow a tickerplant might be overkill for your use-case, but a tickerplant is still useful for (a) separating your processing from the processing of dataflow (i.e. data can still flow through the tickerplant while another process is consuming/calculating) and (b) logging data for recovery purposes.
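As a rough illustration of the event-driven-plus-short-timer approach described above, here is a hedged Java sketch. The TickerplantConnection interface, the "edits" table name, and the timings are illustrative stand-ins for whatever feedhandler client you end up using (e.g. the qJava classes linked above); the point is only the shape of the flow - edits from the web front end enqueue rows, and a short timer flushes them as a batch, rather than spinning in a busy forever loop.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class BatchingFeedhandler {

        // Hypothetical stand-in for a real tickerplant client (e.g. the qJava library above).
        public interface TickerplantConnection {
            void publish(String table, List<Object[]> rows);
        }

        private final ConcurrentLinkedQueue<Object[]> pending = new ConcurrentLinkedQueue<>();
        private final TickerplantConnection tp;
        private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        public BatchingFeedhandler(TickerplantConnection tp) {
            this.tp = tp;
            // Flush on a short timer so rapid edits are batched into one message,
            // instead of publishing per keystroke or running a forever loop.
            timer.scheduleAtFixedRate(this::flush, 100, 100, TimeUnit.MILLISECONDS);
        }

        // Call this from the web tier's "save" / "lost focus" handler.
        public void onCellEdited(Object... row) {
            pending.add(row);
        }

        private void flush() {
            if (pending.isEmpty()) return;
            List<Object[]> batch = new ArrayList<>();
            Object[] row;
            while ((row = pending.poll()) != null) batch.add(row);
            tp.publish("edits", batch);   // table name "edits" is an assumption
        }

        public void stop() {
            flush();
            timer.shutdown();
        }

        public static void main(String[] args) throws InterruptedException {
            // Stub connection that just prints; swap in a real feedhandler client.
            BatchingFeedhandler fh = new BatchingFeedhandler((table, rows) ->
                    System.out.println("publish " + rows.size() + " row(s) to " + table));
            fh.onCellEdited("AAPL", 101.25);   // e.g. fired from an editable grid
            fh.onCellEdited("AAPL", 101.30);
            Thread.sleep(250);                 // let the timer fire once
            fh.stop();
        }
    }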

Lagom | Return Values from read side processor

We are using Lagom to develop our set of microservices. The trick here is that although we are using event sourcing and persisting events into Cassandra, we also have to store the data in a graph DB, since that is what will be serving most of the queries because of our use case.
As per Lagom's documentation, all insertion into the graph database (or any other database) has to be done in a ReadSideProcessor after the command handler persists the events into Cassandra, following the philosophy of CQRS.
Now here is the problem we are facing. We believe that the ReadSideProcessor is a listener which gets triggered after the events are generated and persisted. What we want is to return a response from the ReadSideProcessor back to the ServiceImpl. For example, when a user is added to the system, the unique id generated by the graph has to be returned as one of the response headers. How can that be achieved in Lagom, given that the response is constructed in setCommandHandler and not in the ReadSideProcessor?
Also, we need to make sure that if there is any error on the graph side, the API notifies the client that the request has failed; but again, exceptions occurring in the ReadSideProcessor are not propagated to either the PersistentEntity or the ServiceImpl class. How can that be achieved as well?
Any help is much appreciated.
The read side processor is not a listener that is attached to the command - it is actually completely disconnected from the persistent entity, it may be running on a different node, at a different time, perhaps even years in the future if you add a new read side processor that first comes up to speed with all the old events in history. If the read side processor were connected synchronously to the command, then it would not be CQRS, there would not be segregation between the command and the query side.
Read side processors essentially poll the database for new events, processing them as they detect them. You can add a new read side processor at any time, and it will get all events from all of history, not just the new ones that are added. This is one of the great things about event sourcing: you don't need to anticipate all your query needs from the start, you can add them as the query need arises.
To further explain why you don't want a connection between the two - what happens if the event persist succeeds, but the update on the graph db fails? Perhaps the graph db has crashed. Does the command have to retry? Does the event have to be deleted? What happens if the node doing the update itself crashes before it has an opportunity to fix the problem? Now your read side is in an inconsistent state from your entities. Connecting them leads to inconsistency in many failure scenarios - for example, when you update your address with a utility company, but your bills still go to the old address, and you contact them, and they say "yes, your new address is updated in our system", yet the bills still go to the old address - that's the sort of terrible user experience that you are signing your users up for if you try to connect your read side and write side together. Disconnecting them allows Lagom to ensure consistency between the events you have emitted on the write side and the consumption of them on the read side.
So to address your specific concerns: ID generation should be done on the write side, or, if a subsequent ID is generated on the read side, it should also provide a way of mapping the IDs on the write side to the read side ID. And as for handling errors on the read side - all validation should be done on the write side - the write side should ensure that it never emits an event that is invalid.
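To make the first point concrete, here is a minimal, framework-free Java sketch of write-side ID generation; the class and method names are illustrative, not Lagom API. The service generates the ID before persisting, includes it in the event, and can therefore return it in the response without waiting for the read side; the read side processor later keys the graph node by that same ID (or records a mapping from it to the graph-generated ID).

    import java.util.UUID;

    public class WriteSideIdExample {

        // Illustrative event class; it carries the write-side generated ID.
        static final class UserCreated {
            final String userId;
            final String name;
            UserCreated(String userId, String name) { this.userId = userId; this.name = name; }
        }

        // Roughly what the command handler / service call does on the write side.
        static String addUser(String name) {
            String userId = UUID.randomUUID().toString();   // generated before persisting
            UserCreated event = new UserCreated(userId, name);
            persist(event);                                  // stand-in for ctx.thenPersist(...)
            return userId;                                   // returned to the caller immediately
        }

        // Stand-in for the event journal; the real persistence is done by Lagom/Cassandra.
        static void persist(UserCreated event) {
            System.out.println("persisted UserCreated for " + event.userId);
        }

        public static void main(String[] args) {
            String id = addUser("alice");
            System.out.println("response header id: " + id);
            // The ReadSideProcessor will later see UserCreated and create the graph
            // node keyed by the same userId (or store a mapping to the graph's own ID).
        }
    }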
Now if the read side processor encounters something that is invalid, then it has two options. One option is it could fail. In many cases, this is a good option, since if something is invalid or inconsistent, then it's likely that either you have a bug or some form of corruption. What you don't want to do is continue processing as if everything is happy, since that might make the data corruption or inconsistency even worse. Instead the read side processor stops, your monitoring should then detect the error, and you can go in and work out either what the bug is or fix the corruption. Of course, there are downsides to doing this, your read side will start lagging behind the write side while it's unable to process new events. But that's also an advantage of CQRS - the write side is able to continue working, continue enforcing consistency, etc, the failure is just isolated to the read side, and only in updating the read side. Instead of your whole system going down and refusing to accept new requests due to this bug, it's isolated to just where the problem is.
The other option that the read side has is it can store the error somewhere - eg, store the event in a dead letter table, or raise some sort of trouble ticket, and then continue processing. This way, you can go and fix the event after the fact. This ensures greater availability, but does come at the risk that if that event that it failed to process was important to the processing of subsequent events, you've potentially just got yourself into a bigger mess.
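As a rough, framework-free sketch of that second option (the GraphDb and DeadLetterStore interfaces here are made-up placeholders, not Lagom API): the handler tries to apply the event to the graph DB, and on failure parks it somewhere for later repair instead of halting the processor.

    public class GraphUpdateHandler {

        // Made-up placeholder interfaces; in a real read side these would be the
        // graph DB client and whatever table/queue you use for failed events.
        interface GraphDb { void applyEvent(Object event); }
        interface DeadLetterStore { void store(Object event, Exception cause); }

        private final GraphDb graph;
        private final DeadLetterStore deadLetters;

        GraphUpdateHandler(GraphDb graph, DeadLetterStore deadLetters) {
            this.graph = graph;
            this.deadLetters = deadLetters;
        }

        // Called once per event by the read side processor.
        void handle(Object event) {
            try {
                graph.applyEvent(event);
            } catch (Exception e) {
                // Option 1 would be to rethrow here and let the processor stop.
                // Option 2: park the event for later repair and keep processing,
                // accepting the risk that later events may depend on this one.
                deadLetters.store(event, e);
            }
        }
    }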
Now this does introduce specific constraints on what you can and can't do, but I can't really anticipate those without specific knowledge of your use case to know how to address them. A common constraint is set validation - for example, how do you ensure that email addresses are unique to a single user in your system? Greg Young (the CQRS guy) wrote this blog post about those types of problems:
http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/

Quotes and HandlInst <21> field in FIX protocol

I'm trying to understand the variety of message types and data fields in FIX protocol.
I did understand most of the things but I'm still not sure about Quotes and HandlInst.
When a dealer or a broker wants to trade in the market, he has a list of all available products (e.g. USD/EUR, USD/JPY, ...). Each product has a sell price and a buy price, which are updated at a high rate. Which message type carries these values? The Quote message type (quote request and quote response)?
A broker has the option to decide, for each one of his dealers, whether he automatically responds as the counter-party to the dealer's orders or the orders go out to the market for a trade. Which field in the order message types indicates that? I was thinking about HandlInst <21>, but I'm not quite sure...
Thanks for your help
These are vendor-specific questions. FIX just gives you a pile of messages and fields you can use, and some recommendations for how to use them. Some counterparties even follow those recommendations, but nearly all of them add their own weird customizations or use certain fields in weird ways.
If you are connecting to an external counterparty, you need to read their docs for their specific FIX interface. That will tell you which fields they use, how they use them, and what they expect from you.
So, get your counterparty's docs and read them.
OK Kitsune, here are my answers for you (you should pay me for this!)
Yes - Quote, and QuoteCancel (usually).
I think you are talking about an internal broker decision that's not to do with FIX. HandlInst can be used by a client sending their order to the broker to specify manual or automatic execution, but I think that is in specific limit order cases only, not the usual fill or kill stuff. If the order was $100mio then maybe a client would specify manual execution. Again, as in your other post, the ExecutionReport can specify manual or auto execution as well. Need to look at that...
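To make tag 21 concrete, here is a hedged sketch using the open-source QuickFIX/J library (my choice of library, not something from the question); the symbol, quantities, and the decision to send '3' are purely illustrative, and whether your counterparty honors tag 21 at all depends on their rules of engagement, so check their FIX spec.

    import quickfix.field.ClOrdID;
    import quickfix.field.HandlInst;
    import quickfix.field.OrdType;
    import quickfix.field.OrderQty;
    import quickfix.field.Price;
    import quickfix.field.Side;
    import quickfix.field.Symbol;
    import quickfix.fix44.NewOrderSingle;

    public class HandlInstExample {
        public static void main(String[] args) {
            // FIX 4.4 NewOrderSingle; other fields a real session would require
            // (e.g. TransactTime) are omitted for brevity.
            NewOrderSingle order = new NewOrderSingle();
            order.set(new ClOrdID("ORD-0001"));
            order.set(new Symbol("EUR/USD"));
            order.set(new Side(Side.BUY));
            order.set(new OrderQty(1000000));
            order.set(new OrdType(OrdType.LIMIT));
            order.set(new Price(1.1050));

            // Tag 21 (HandlInst), per the FIX spec:
            //   '1' = automated execution, private, no broker intervention
            //   '2' = automated execution, public, broker intervention OK
            //   '3' = manual order, best execution
            order.set(new HandlInst('3'));

            System.out.println(order.toString());   // raw FIX wire format
        }
    }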

BEST approach to share messages between data tier and application tier

I was looking for suggestions or a good approach for handling messages between the data and application tiers. What I mean by this is that when using stored procedures, or even direct SQL statements in the application, there should be a way for the data tier to notify the upper layers about statement/operation results in an organized way.
What I commonly use is two variables in every stored procedure:
    @code INT,
    @message VARCHAR(1024)
I put the DML statements inside a TRY...CATCH block and, at the end of the TRY block, set both variables to a status meaning everything went OK. As you might be thinking, I do the opposite in the CATCH block: set both variables to some failure code and, if needed, perform some error handling.
After the TRY...CATCH block I return the result as a record:
    SELECT @code AS code, @message AS message
These two variables are used in the upper tiers for validation purposes, such as sending messages to the user, and also for committing or rolling back transactions.
Perhaps I'm missing important features like RAISERROR, or not considering better, more optimal, and more secure approaches.
I would appreciate your advice and good practices. I'm not asking for cookbook recipes - there's no need, just the idea - but if you decide to include examples they'd be more than welcome.
Thanks
The generally accepted approach for dealing with this is to have your sproc set a return value and use RAISERROR, as has been previously suggested.
The key thing that I suspect you are missing is how to get this data back on the client side.
If you are in a .NET environment, you can check for the return value as part of the ADO.NET code. The RAISERROR messages can be captured by catching the SqlException object and iterating through its Errors collection.
Alternatively, you could create an event handler and attach it to the connection's InfoMessage event. This would allow you to get the messages as they are generated by SQL Server instead of at the completion of the batch or stored procedure.
Another option, which I wouldn't recommend, would be to collect all of the things you care about into an XML message and return that from the stored proc; this gives you a little more structure than your free-text-field approach.
My experience has been to rely on the inherent workings of the procedure mechanisms. What I mean by this is that if a procedure fails, I rely on either the implicitly raised error (if not using TRY/CATCH - simply CRUD stuff) or an explicit RAISERROR. This means that if the code falls into the CATCH block and I want to notify the application of the error, I re-raise the error with RAISERROR. In the past, I've created a custom rethrow procedure so I can trap and log the error information. One could use the rethrow as an opportunity to throw a custom message/code that could guide the application down a different decision path. If the calling application doesn't receive an error on execution, it's safe to assume that it succeeded - so explicit transaction committing can proceed.
I've never attempted to write my own communication mechanism for interaction between the app and the stored proc - the objects in the languages I work with have events or mechanisms to detect an error raised by the procedure. It sounds to me like your approach doesn't necessarily require much more code than standard error handling, but I do find it confusing, as it seems to be a reinventing-the-wheel type of scenario.
I hope I've not misunderstood what you are doing, nor do I want to come off as critical; I do believe, however, that you have this feedback mechanism available with the standard API and are probably making this too complicated. I would recommend reading up on RAISERROR and seeing if it's going to meet your needs (see the Using RAISERROR link for this at the bottom of the page as well).
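The answers above are framed around ADO.NET, but the same plumbing exists in other client stacks. As a rough, analogous Java/JDBC sketch (my own addition, assuming Microsoft's SQL Server JDBC driver; the connection string and procedure name are placeholders): RAISERROR with severity 11 or higher surfaces as a SQLException, lower-severity messages and PRINT output arrive as SQLWarnings on the statement, and the return value and result set are read normally.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.SQLWarning;

    public class SprocMessagesExample {
        public static void main(String[] args) {
            // Connection string and procedure name are placeholders.
            String url = "jdbc:sqlserver://localhost;databaseName=MyDb;user=app;password=secret";

            try (Connection con = DriverManager.getConnection(url);
                 CallableStatement cs = con.prepareCall("{? = call dbo.MyProc}")) {

                cs.registerOutParameter(1, java.sql.Types.INTEGER);   // the proc's RETURN value

                if (cs.execute()) {
                    try (ResultSet rs = cs.getResultSet()) {
                        while (rs.next()) {
                            // e.g. the SELECT @code AS code, @message AS message row
                            System.out.println(rs.getInt("code") + ": " + rs.getString("message"));
                        }
                    }
                }

                // Low-severity RAISERROR / PRINT messages arrive as warnings.
                for (SQLWarning w = cs.getWarnings(); w != null; w = w.getNextWarning()) {
                    System.out.println("info: " + w.getMessage());
                }

                System.out.println("return value: " + cs.getInt(1));

            } catch (SQLException e) {
                // RAISERROR with severity >= 11 (or any other failure) lands here;
                // walk the chain to see every error raised in the batch.
                for (SQLException err = e; err != null; err = err.getNextException()) {
                    System.err.println("error " + err.getErrorCode() + ": " + err.getMessage());
                }
            }
        }
    }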

machine learning and code generator from strings

The problem: given a set of hand-categorized strings (or a set of ordered vectors of strings), generate a categorization function to categorize more input. In my case, that data (or most of it) is not natural language.
The question: are there any tools out there that will do that? I'm thinking of some kind of reasonably polished, download-install-and-go kind of thing, as opposed to some library or a brittle academic program.
(Please don't get stuck on details as the real details would restrict answers to less generally useful responses AND are under NDA.)
As an example of what I'm looking at: the input I want to filter is computer-generated status strings pulled from logs - error messages, for example, being filtered based on who needs to be informed or what action needs to be taken.
Doing Things Manually
If the error messages are being generated automatically and the list of exceptions behind the messages is not terribly large, you might just want to have a table that directly maps each error message type to the people who need to be notified.
This should make it easy to keep track of exactly who/which-groups will be getting what types of messages and to update the routing of messages should you decide that some of the messages are being misdirected.
Typically, a small fraction of the types of errors make up a large fraction of error reports. For example, Microsoft noticed that 80% of crashes were caused by 20% of the bugs in their software. So, to get something useful, you wouldn't even need to start with a complete table covering every type of error message. Instead, you could start with just a list that maps the most common errors to the right person and routes everything else to a person for manual routing. Each time an error is routed manually, you could then add an entry to the routing table so that errors of that type are handled automatically in the future.
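As a rough sketch of that incremental routing-table idea (the error types, recipients, and fallback address below are made-up placeholders): route known message types automatically, send everything else to a person, and record each manual decision so future messages of that type are handled automatically.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ErrorRouter {

        // Maps an error-message type (however you key it) to the people to notify.
        private final Map<String, List<String>> routes = new HashMap<>();
        private final String fallback;

        public ErrorRouter(String fallbackRecipient) {
            this.fallback = fallbackRecipient;
            // Seed with the most common error types first; everything else
            // goes to a human for manual routing.
            routes.put("DISK_FULL", List.of("ops@example.com"));
            routes.put("AUTH_FAILURE", List.of("security@example.com"));
        }

        public List<String> recipientsFor(String errorType) {
            return routes.getOrDefault(errorType, List.of(fallback));
        }

        // When a message was routed manually, record the decision so that
        // future messages of the same type are handled automatically.
        public void learn(String errorType, List<String> recipients) {
            routes.put(errorType, recipients);
        }

        public static void main(String[] args) {
            ErrorRouter router = new ErrorRouter("triage@example.com");
            System.out.println(router.recipientsFor("DISK_FULL"));       // known type
            System.out.println(router.recipientsFor("WEIRD_NEW_ERROR")); // falls back to manual triage
            router.learn("WEIRD_NEW_ERROR", List.of("dev-team@example.com"));
            System.out.println(router.recipientsFor("WEIRD_NEW_ERROR")); // now automatic
        }
    }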
Document Classification
Unless the error messages are being editorialized by the people who submit them and you want to use this information when routing them, I wouldn't recommend treating this as a document classification task. However, if this is what you want to do, here's a list of reasonably good packages for document classification, organized by programming language:
Python - To do this using the Python based Natural Language Toolkit (NLTK), see the Document Classification section in the freely available NLTK book.
Ruby - If Ruby is more of your thing, you can use the Classifier gem. Here's sample code that detects whether Family Guy quotes are funny or not-funny.
C# - C# programmers can use nBayes. The project's home page has sample code for a simple spam/not-spam classifier.
Java - Java folks have Classifier4J, Weka, Lucene Mahout, and, as adi92 mentioned, Mallet.
Learning Rules with Weka - If rules are what you want, Weka might be of particular interest, since it includes a rule set based learner. You'll find a tutorial on using Weka for text categorization here.
Mallet has a bunch of classifiers which you can train and deploy entirely from the commandline
Weka is nice too because it has a huge number of classifiers and preprocessors for you to play with
Have you tried spam or email filters? By using text files that have been marked with appropriate categories, you should be able to categorize further text input. That's what those programs do, anyway, but instead of labeling your outputs as 'spam' and 'not spam', you could use other categories.
You could also try something involving AdaBoost for a more hands-on approach to rolling your own. This library from Google looks promising, but probably doesn't meet your ready-to-deploy requirements.