I can't seem to find this in any of the documentation.
I have a script that is triggered by a layout's OnRecordCommit script trigger.
I'm curious whether it will also be triggered during a data import into that table?
I was curious too, so I just tested it in FM11 and no, the trigger doesn't fire.
After a successful import you could run a script that loops through all of the imported records and performs the same actions as the triggered script.
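For example, a minimal sketch of such a post-import script, assuming the trigger's logic lives in a subscript named "Process Record" (the name is just an example):

Go to Record/Request/Page [ First ]
Loop
  Perform Script [ "Process Record" ]
  Go to Record/Request/Page [ Next; Exit after last ]
End Loop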
I currently have an Azure Devops install that I am configuring for automated build and testing. I would like to enable a Continuous Integration trigger for the build process, however our check-in standards require different parts of our code to be checked in separate from each other.
For example: we are using nettiers auto generated code, so whenever a ticket requires a database change, the nettiers code base gets updated. Because that is auto generated code it gets checked in separately from manual modifications with a comment indicating that it is an auto generated check-in.
A build will fail if it does not have both the nettiers and the manual modifications checked in. However with Continuous Integration turned on, the first check-in will trigger a build to begin that will be missing the second half of the changes which are checked in a couple minutes later.
The ideal way I would like to fix this would be to implement a 5 minute delay between when the CI build first gets triggered, and when it actually begins its work. Even better would be if each successive check-in would cancel the first build and start a new timer with its own build to account for any subsequent check-ins.
An alternative way to solve the issue might be to gate on a work item query. However, I have been unsuccessful in figuring out how to implement either of these ideas, or in coming up with other options; gates based on queries only seem to be available in Release pipelines, not Builds.
Has anyone out there solved a similar problem, or have thoughts on how to solve or work around this issue?
Azure Devops delayed Continuous Integration build
I am afraid there is no out-of-the-box setting to configure continuous integration this way for your case.
As a workaround, you could have nettiers write the generated code to a specific folder, like \NettiersGenerated.
Then exclude that folder with the Path filters under the Enable continuous integration setting:
In this case, the generated code will not trigger the build pipeline.
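If the pipeline is defined in YAML rather than through the designer, the equivalent exclusion could look like this (the branch name and folder path are assumptions):

trigger:
  branches:
    include:
    - master
  paths:
    exclude:
    - NettiersGenerated/*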
Update:
It would require that the nettiers code always gets checked in first
(which would be hard to enforce)
Yes, agree with you. If the build fails when it does not have both the nettiers and the manual modifications checked in, my first workaround is indeed not suitable.
As another workaround, you could use an Azure DevOps counter and check its remainder in a PowerShell script, letting the build proceed only when the counter is odd (i.e., on every second trigger) and cancelling it otherwise, like:
Counter expression:
variables:
  internalBuildNumber: 1
  semanticBuildNumber: $[counter(variables['internalBuildNumber'], 0)]
PowerShell script:
$value = $(semanticBuildNumber)
switch ($value)
{
    # odd counter value: second trigger of the pair, let the build continue
    {($_ % 2) -ne 0} { "Go on build pipeline" }
    # even counter value: first trigger of the pair, cancel this build
    {($_ % 2) -eq 0}
    {
        Write-Host "##vso[task.setvariable variable=agent.jobstatus;]canceled"
        Write-Host "##vso[task.complete result=Canceled;]DONE"
    }
}
In this case, the build is cancelled on the first trigger and only proceeds when it is triggered the second time.
Hope this helps.
I'm currently having trouble applying a logon script (PowerShell) on Windows servers.
The logon script has a line that sets user environment variables, but judging by the output of the set command at a command prompt, the variables are not applied immediately.
I've been watching the behavior through Process Monitor while logging on to a new session, and I finally found that the newly created variables only become visible once the RegenerateUserEnvironment function in shell32.dll has been called; after that, set shows the correct result.
So I was wondering whether there is a way to trigger the RegenerateUserEnvironment function, and it needs to be executed from PowerShell.
Can you shed some light on this?
Best Regards,
Haewon
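As a side note, a documented way to get a similar effect inside a new PowerShell session, without calling the undocumented shell32.dll export, is to rebuild the process environment from the registry; note this only refreshes the current process, not the whole logon session:

# Re-read Machine and User environment variables from the registry
# and apply them to the current PowerShell process only.
# Caveat: merged variables such as PATH are simply overwritten here.
foreach ($scope in 'Machine', 'User') {
    $vars = [System.Environment]::GetEnvironmentVariables($scope)
    foreach ($name in $vars.Keys) {
        Set-Item -Path "Env:$name" -Value $vars[$name]
    }
}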
I am trying to trigger a process on the backend when data gets changed.
Here is a working trigger that I am currently using.
xquery version "1.0-ml";
import module namespace trgr = "http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

(: guard: make sure this runs against the triggers database :)
if (xdmp:database() eq xdmp:database("nbcu-test-ml-triggers"))
then ()
else fn:error((), 'NOTTRIGGERSDB', xdmp:database())
,
trgr:create-trigger(
  "typeahead_modify",
  "Update Typeahead Document",
  trgr:trigger-data-event(
    trgr:directory-scope("/triplestore/", "1"),
    trgr:document-content("modify"),
    trgr:post-commit()),
  trgr:trigger-module(
    xdmp:database("nbcu-test-ml-modules"),
    "/ext/",
    "sample-trigger.xqy"),
  fn:true(),
  xdmp:default-permissions(),
  fn:true())
However, at the end of the module it triggers, I would like to call xdmp:spawn-function in order to do some asynchronous processing.
I am pretty new to permission management; I tried adding an xdmp:privilege to the set of permissions, but that didn't work.
Can someone please advise how to grant the xdmp:spawn execute privilege to this trigger?
Thanks
Edit: I use mlgradle to deploy the /ext/sample-trigger.xqy
The trigger module runs as the user that caused the insert/update/delete/property-change on the document. The only exception to this rule is the database online event, for which you explicitly define a user.
Therefore, the xdmp:spawn execute privilege must be attached to a role that is assigned (directly or indirectly) to the user described above.
To troubleshoot, you could add xdmp:log(xdmp:get-current-user()) to the trigger module to confirm which user invokes the code. Then add the xdmp:spawn privilege to one of that user's roles.
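For reference, a minimal sketch of granting that privilege through the security API; run it against the Security database, and note that "typeahead-role" is a hypothetical role assigned to the user identified above:

xquery version "1.0-ml";
import module namespace sec = "http://marklogic.com/xdmp/security"
  at "/MarkLogic/security.xqy";
(: grant the xdmp:spawn execute privilege to a role :)
sec:privilege-add-roles(
  "http://marklogic.com/xdmp/privileges/xdmp-spawn",
  "execute",
  "typeahead-role")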
I'm working on a rather simple script which should handle new values in the spreadsheet and then send emails to specified addresses, but I ran into a problem. My code is listed below:
function onEdit(e) {
// part of the code that checks e.range so only updated values are processed
sendEmail();
}
function sendEmail() {
// arguments omitted, for demo purposes only
GmailApp.sendEmail();
}
While using a simple trigger, my function sendEmail() works only if I start it from the script editor. The first time, I authorized sending email on my behalf, and after that the function worked fine. But if I change a value in the spreadsheet, onEdit(e) processes the new data but sendEmail() does nothing.
I partly solved this problem by using an installable trigger from the "current project's triggers" menu. In that case sendEmail() works properly, but I have no access to the information about the update.
For my purposes I could settle for the second way and find new values "manually" every time, but I would like to optimize this.
So, my questions are:
Is the process I described above correct, or did I make a mistake somewhere?
If the process is correct, is there a way to combine both cases?
Thanks!
You correctly understood that (as the docs say) simple triggers cannot send an email, because they run without authorization. An installable trigger, created via Resources menu, can: it has the same rights as the user who created the trigger. If it is set to fire on edit, it will get the same type of event object as a simple trigger does.
So, a minimal example would be like this, set to run "on edit":
function sendMail(e) {
MailApp.sendEmail('user@gmail.com', 'sheet edited', JSON.stringify(e));
}
It emails the whole event object in JSON format.
Aside: if your script only needs to send email but not read it, use MailApp instead of GmailApp to keep the scope of permissions more narrow.
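If you would rather create the installable trigger from code than through the Resources menu, here is a minimal one-time setup sketch (the function name matches the example above):

// Run once: installs an "on edit" trigger for sendMail, which will then
// run with the authorization of the user who ran this function.
function createOnEditTrigger() {
  ScriptApp.newTrigger('sendMail')
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onEdit()
    .create();
}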
I need to pass data from task1 (the form of task1) to another task (the form of task2), and see this data in the form of task2. I use an aspect for this, and I have the following code (in part) in the task listener (event: complete) of task1:
execution.setVariable('wf_data1', task.getVariable('wf_data1'));
In task2, in share-config-custom.xml, I have wf_data1 in the form, but it shows up empty.
Why does this happen? How can I see wf_data1 in task2?
UPDATE:
The reason this was not working is that in the service-context.xml file the redeploy key was set to "false". I changed it to "true" and now everything works.
Greetings,
Arak.
I'm not going to dive into your model and the way you are showing it. Alfresco keeps track of the workflow history. I'm not sure to what level of detail (with/without aspects) it is available, but it's quite easy to find out.
With this you can access workflow data in a later task. Just create a custom workflow form controller which retrieves the data.
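As a sketch of the listener-based alternative, assuming an Activiti task listener and that wf_data1 was stored on the execution as in the question, a listener on task2's create event could copy the value into the task so the form can render it:

// task listener on task2, event "create":
// copy the process-scoped variable into the task's local scope
task.setVariableLocal('wf_data1', execution.getVariable('wf_data1'));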