We are going to run Sphinx with a main+delta scheme on top of a NoSQL source, so we are going to use xmlpipe2. To implement the main+delta strategy we have to maintain a marker that distinguishes "main rows" from "new rows".
The problem is that, unlike with SQL sources, with xmlpipe2 we can't tell (at least we don't know how) whether the indexing succeeded or not. If we simply update the marker at the end of the main feed generator and indexing fails for whatever reason, the setup ends up in an inconsistent state.
For SQL sources we have the sql_query_post_index hook; how can we achieve something similar with xmlpipe2?
You could have a wrapper around indexer. That wrapper runs indexer, captures the output, and updates your 'marker' only if the indexing worked.
Call this wrapper from cron, rather than calling indexer directly.
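A minimal shell sketch of such a wrapper (index name, marker path, and log location are placeholders; it treats any nonzero exit status from indexer as failure, so adjust if you want to accept Sphinx's "completed with warnings" exit code):

#!/bin/sh
# Wrapper: only advance the main/delta marker when indexing succeeds.
NEW_MARKER=$(date +%s)              # candidate marker, e.g. a timestamp
export NEW_MARKER                   # the feed generator (a child of indexer) can read it

if indexer --rotate main > /tmp/indexer.log 2>&1; then
    # indexer exited 0: commit the marker so the next delta starts here
    echo "$NEW_MARKER" > /var/lib/sphinx/main.marker
else
    # indexing failed: leave the old marker so the next run retries the same rows
    echo "indexing failed, marker left untouched" >&2
    exit 1
fi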
I was wondering if it's currently possible to have an 'external' (.so/.dylib) LLVM plugin (module) pass scheduled at LTO time? The reason for wanting this is an inter-modular optimization I want to add.
I also found this topic: How to write a custom intermodular pass in LLVM?
But a separate tool is not an option for me.
Thanks
I think the most helpful thing here might be to understand how passes are run and what the state of the code is during LTO.
First of all, when optimization passes are run by the compiler, they are run as a set that has been added to a PassManager. This means that when LLVM/Clang is passed something like -O3, it will create a PassManager and populate it with the set of passes expected to provide the O3 level of optimization. This is very different from what you are doing with an external library, which must be loaded manually and cannot normally be slotted into the pass pipeline.
Then we have the state of things when doing LTO. During Link Time Optimization, all of the individual translation units have been consolidated and are now a single Module. This means that an optimization which runs on each function will run on every function in the code base. Similarly, a per-module optimization will run on the full Module and therefore offer inter-procedural analysis/optimization.
If you're looking to use an intra-modular pass, then there is no reason to do this at LTO time; instead you can simply make a ModulePass and run it on each translation unit.
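Either way, the skeleton of such a pass looks the same. A minimal sketch using the legacy pass manager (class and registration names are placeholders; how the resulting .so gets scheduled at LTO time depends on your linker and toolchain):

// my_pass.cpp -- build as a shared object and load it as a plugin
#include "llvm/Pass.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct MyInterModularPass : public ModulePass {
  static char ID;
  MyInterModularPass() : ModulePass(ID) {}

  bool runOnModule(Module &M) override {
    // At LTO time this Module is the merged program, so iterating its
    // functions effectively visits every function in the code base.
    for (Function &F : M)
      errs() << "visiting: " << F.getName() << "\n";
    return false; // no IR was modified in this sketch
  }
};
} // end anonymous namespace

char MyInterModularPass::ID = 0;
static RegisterPass<MyInterModularPass>
    X("my-inter-modular", "Example inter-modular pass", false, false);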
I've been working with Stateful Sessions (KieSession) so far and have managed to get my project running as desired using Scala with a few Java wrappers. I am now trying to switch over to StatelessKieSessions. Based on the documentation I found, I've managed to run the following to insert objects/collections into the session, fire the rules on them and update the facts:
val cmd = CommandFactory.newInsert(myObject, "myObject")
val result = ksession.execute(cmd)
When I print result (which is of class org.drools.core.common.DefaultFactHandle), it shows the structure of the desired fact, updated as expected, preceded by something like "fact 0:1:2050275256:1971742898:2:DEFAULT:NON_TRAIT:".
The documentation says that I should be able to write something like result.getValue("myObject"); however, this option doesn't seem to be available in Scala. (https://docs.jboss.org/drools/release/6.0.0.Beta1/kie-api-javadoc/org/kie/api/runtime/StatelessKieSession.html)
I understand that Scala-Drools interoperability hasn't been provided in full, however does anyone know of a way to extract updated facts from within a StatelessKieSession or a DefaultFactHandle containing it?
What you get from this execute command is the fact handle of the newly inserted fact. The object therein would still be the one you have inserted, updated or not. You'll have to investigate whether this is something you can use in Scala or not.
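Note that getValue lives on ExecutionResults, which you only get back when you execute a batch; a bare insert command returns the fact handle, whose getObject exposes the fact. A hedged Scala sketch (assuming kie-internal's CommandFactory as in your snippet; the generics may need small adjustments depending on the Drools version):

import org.kie.api.command.Command
import org.kie.api.runtime.ExecutionResults
import org.kie.internal.command.CommandFactory

// Run insert + fireAllRules as one batch so execute returns ExecutionResults,
// whose getValue("myObject") reads the (possibly rule-modified) fact back out.
val cmds = new java.util.ArrayList[Command[_]]()
cmds.add(CommandFactory.newInsert(myObject, "myObject"))
cmds.add(CommandFactory.newFireAllRules())

val results = ksession.execute(CommandFactory.newBatchExecution(cmds))
  .asInstanceOf[ExecutionResults]
val updated = results.getValue("myObject")

// Alternatively, the handle from your single-insert execute already holds the object:
// result.asInstanceOf[org.drools.core.common.DefaultFactHandle].getObject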
There is no command to retrieve all facts that have been changed during the execution of a session. You'll have to monitor this yourself, using one of the available techniques (for instance, an event listener).
There's not much to be gained by running a "Stateless Session". If you can achieve what you want using a regular (stateful) session, leave it at that. The stateless session may have its advantages, but don't grapple with it from Scala.
Not really a critical question, but I'm curious.
I am working on a form, and sometimes the generated function module name is /1BCDWB/SF00000473 and sometimes /1BCDWB/SF00000472. This goes back and forth.
Does anyone know the idea behind this? I'm quite sure it's not a bug (or I might be wrong on that).
It is not a bug. You always have to use SSF_FUNCTION_MODULE_NAME to determine the actual function module name and call it dynamically using CALL FUNCTION l_function_module.
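A minimal ABAP sketch of that pattern (the form name is a placeholder):

DATA l_function_module TYPE rs38l_fnam.

* Look up the FM generated for this Smartform in the current system
CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    formname           = 'Z_MY_SMARTFORM'
  IMPORTING
    fm_name            = l_function_module
  EXCEPTIONS
    no_form            = 1
    no_function_module = 2
    OTHERS             = 3.

IF sy-subrc = 0.
* Call the generated FM dynamically instead of hardcoding /1BCDWB/SF...
  CALL FUNCTION l_function_module
*   EXPORTING / TABLES: your form interface parameters go here
    EXCEPTIONS
      OTHERS = 1.
ENDIF.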
Smartform FMs are tracked by internal numbering, and that is saved in the table STXFADMI. You will always notice a different number in the development system if you have deleted an existing form. Similarly, you will notice different numbers in your quality system, depending on the sequence in which the forms are imported into QAS (test forms are not migrated to QAS).
Similar behavior also applies to the FMs generated for Adobe Forms.
You need to understand that every Smartform has a different interface, and hence the automatically generated function module needs different import parameters.
For this reason the 'SSF*' FMs generate an FM specific to your Smartform. The name of the generated FM changes when you migrate from one system to another, and that's why you should use a variable when calling the generated FM rather than hardcoding it.
The same goes for Adobe Forms, as someone rightly said in this thread.
Let us say that I have a Matlab function and I change its signature (i.e. add a parameter). As Matlab does not 'compile', is there an easy way to determine which other functions do not use the right signature (i.e. fail to pass the additional parameter)? I do not want to determine this at runtime (i.e. get an error message) or have to do text searches. Hope this makes sense. Any feedback would be very much appreciated. Many thanks.
If I understand you correctly, you want to change a function's signature and find all functions/scripts/classes that call it in the "old" way, and change it to the "new" way.
You also indicated you don't want to do it at runtime, or do text searches, but there is no way to detect "incorrect" calls at "parse-time", so I'm afraid these demands leave no option at all to detect old function calls...
What I would do in that case is temporarily add a few lines to the new function:
function myFunc(param1, param2, newParam) % <-- the NEW signature
    if nargin == 2
        % an old-style call: the caller did not pass the new third argument
        clc, error('old call detected.');
    end
and then run the main script/function/whatever in which this function is used. You'll get an error each time something calls the function incorrectly, along with the error stack in the Matlab command window.
It is then a matter of clicking on the link in the bottom of the error stack, correct the function call, and repeat from the top until no more errors occur.
Don't forget to remove these lines when you're done, or better, replace the word error with warning just to capture anything that was missed.
Better yet: if you're on Linux, a text search would be a matter of
$ grep -l 'myFunc(.*,.*)' *.m
which will list all the files containing the "incorrect" call (or at least every call with two or more arguments; refine the pattern as needed). That's not too difficult, I'd say... You can probably do a similar thing with the standard Windows search, but I can't test that right now.
This is more or less what the dependency report was invented for. Using that tool, you can find what functions/scripts call your altered function. Then it is just a question of manually inspecting every occurrence.
However, I'd advise making your changes to the function signature such that backwards compatibility is maintained. You can do so by specifying default values for new parameters and/or issuing a warning in those scenarios. That way, your code will keep running, and you will get run-time hints of deprecated code (which is more or less a necessary evil in interpreted/dynamic languages).
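A minimal sketch of that idea (the default value and message are placeholders):

function result = myFunc(param1, param2, newParam)
% Backward-compatible signature change: default the new parameter and warn,
% so old two-argument call sites keep working but surface at runtime.
if nargin < 3
    warning('myFunc:oldSignature', ...
        'two-argument call is deprecated; using default newParam');
    newParam = 0;  % placeholder default
end
result = param1 + param2 + newParam;  % stand-in for the real body
end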
For many dynamic languages (and MATLAB specifically) it is generally impossible to fully inspect the code without the interpreter executing the code. Just imagine the following piece of code:
x = magic(10);
In general, you'd say that the magic function is called. However, magic could map to something else entirely (even a variable) by the time that line runs. This could be done in ways that are invisible to a static analysis tool (such as the dependency report): e.g. eval('magic = 1:100;');.
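For instance:

% After the eval, "magic" is a variable, so magic(10) is indexing, not a call:
eval('magic = 1:100;');
x = magic(10);   % x is now 10, not a 10-by-10 magic square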
The only way is to go through your whole code base, either inspecting every occurrence manually (occurrences can be found easily with a text search) or running a test that fully covers your code base.
edit:
There is however a way to access intermediate outputs of the MATLAB parser. These can be accessed using the undocumented and unsupported mtree function (which can be called like this: t = mtree(file, '-file'); for every file in your code base). Using the resulting structure you might be able to find calls with a certain number of parameters.
I am trying to use Windows Workflow and have a model that looks similar to the image in the link below:
After each of the Send Activities (GetSomthing, GetSomthingElse, GetSomeMoreStuff) the same custom activity is called (LogSomthingBadHappened).
While it might not look so bad in this picture, in my real model the custom activity is a SequenceActivity with quite a few nodes, and when it's repeated three times it starts to make the workflow look very ugly.
I would like to do something like this:
Can the IfElse branches be merged like this?
Should I be using a State Machine workflow instead (I haven't figured these out yet)?
Use a FaultHandler on the workflow and throw a specific exception type that the handler will catch. Not the most graceful, but I think it should work.
In sequential workflows all steps must appear in a specific order, and the execution path is regulated exclusively by control structures (IF, WHILE). Altering the execution path in the way you describe would be like using a GOTO statement in imperative code, which we know leads to unnecessary complexity.
If the activities contained in the SequenceActivity that you need to execute at different stages of your workflow are exactly the same, you could embed them in a custom activity. This way it is easier to manage them, since they are contained in a single logical unit. In imperative code, this would be like refactoring a portion of duplicated code out into a method, which is then invoked from multiple places.
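A hedged C# sketch of such a custom activity (WF 3.x; all names are placeholders). The shared steps would be composed inside it in the designer or constructor, and a dependency property lets each usage bind its own input:

using System.Workflow.ComponentModel;
using System.Workflow.Activities;

// Reusable composite activity wrapping the shared logging steps;
// each IfElse branch drops in one instance instead of duplicating the sequence.
public class LogFailureActivity : SequenceActivity
{
    // WF 3.x dependency property so every usage can bind its own message
    public static readonly DependencyProperty MessageProperty =
        DependencyProperty.Register("Message", typeof(string), typeof(LogFailureActivity));

    public string Message
    {
        get { return (string)GetValue(MessageProperty); }
        set { SetValue(MessageProperty, value); }
    }
}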
Another alternative that might work is to put your LogSomthingBadHappened activity into a custom workflow and include that several times. A couple of things to watch out for: sub-workflows are executed asynchronously, and if the LogSomthingBadHappened activity needs state information from the main workflow, copying it to the sub-workflow might be hard.
I have not tried this, so it might not even work.
I think the answer by gbanfill points to the right direction.
To generalize, I define the problem as:
Is there a way to define a group of activities that will be executed in several places of a workflow?
Further requirements are:
The group of activities should be defined in XAML only, i.e. no code.
The type of input to this group will, of course, be fixed, but the actual values should depend on the call (like calling a function).
Maybe the way to do it is to define sub-workflows and build a custom activity that would instantiate the sub-workflow and wait for it to complete before continuing.
This custom activity should have at least two parameters: the sub-workflow id and the input parameters. A host-side sketch of the "run and wait" part is below.
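As a hedged sketch of just the "start and wait" mechanics on the WF 3.x hosting side (the type and dictionary contents are placeholders; a real implementation should match completion events against the instance id):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Workflow.Runtime;

static class SubWorkflowRunner
{
    // Starts a sub-workflow and blocks until the runtime reports it finished.
    public static void RunAndWait(WorkflowRuntime runtime, Type subWorkflowType,
                                  Dictionary<string, object> inputs)
    {
        using (AutoResetEvent done = new AutoResetEvent(false))
        {
            // Note: these handlers fire for every instance; filter on
            // e.WorkflowInstance.InstanceId in real code.
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.WorkflowTerminated += delegate { done.Set(); };

            WorkflowInstance instance = runtime.CreateWorkflow(subWorkflowType, inputs);
            instance.Start();
            done.WaitOne();
        }
    }
}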