IzPack: Require at least one of multiple packs - plugins

I've got a piece of software that consists of different plugins that can be selected during installation in IzPack. These plugins provide different features to the software: input, processing, output. The software needs at least one input and one output plugin to work.
How do I specify in the PacksPanel that at least one plugin providing a certain feature must be selected?

I believe that this was implemented in izpack v5.0.0-rc5 and newer. PacksPanel does not allow you to continue if you have deselected all the options.
Based on your comment, I would solve this using a condition validator.
Basically, add a condition for each of your packs:
<condition type="packselection" id="pack1inputselected">
  <name>Pack 1 input</name>
</condition>
Then make OR conditions with groups of your packs (input, processing, output), e.g. like this:
<condition type="or" id="inputgroup">
  <condition type="ref" refid="pack1inputselected" />
  <condition type="ref" refid="pack2inputselected" />
</condition>
Then add a final AND validation condition (the id is crucial: it must start with the word conditionvalidator, because the ConditionValidator class validates every condition whose id starts with conditionvalidator):
<condition type="and" id="conditionvalidator.packsselected">
  <condition type="ref" refid="inputgroup" />
  <condition type="ref" refid="processinggroup" />
  <condition type="ref" refid="outputgroup" />
</condition>
Add the ConditionValidator to the PacksPanel in the panels element:
<panel classname="PacksPanel" id="panel.packs">
  <validator classname="com.izforge.izpack.installer.validator.ConditionValidator" />
</panel>
There. Every time the condition validated by the ConditionValidator (when clicking Next) is not true, i.e. when the correct packs are not selected, it will show a message and will not allow you to continue. You can change the message by adding a string to the CustomLangPack whose id is the condition id plus .error.message (in this example, conditionvalidator.packsselected.error.message).
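For a custom message, a CustomLangPack entry along these lines should work (a sketch; the id must be the condition id plus .error.message, the message text is up to you):
<str id="conditionvalidator.packsselected.error.message" txt="Please select at least one input, one processing and one output plugin." />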

Related

How to Add additional columns to links page to ExternalLink types

How to add columns to ExternalLink on the "Links" page of an Azure DevOps work item?
Answered: not possible, see the answer below.
Pull Request is not like Code Review Request, it's not a work item type, and we cannot see it in the exported process template. So I don't think we can customize its columns like the common work item types. – Andy Li-MSFT
After going through the following links
link1
link2
and trying the workaround discussed here, I have failed to add more columns to links of type ExternalLink.
I have added the following code as described:
<Page Label="Links" LayoutMode="FirstColumnWide">
  <Section>
    <Group Label="links">
      <Control Type="LinksControl" Name="links">
        <LinksControlOptions>
          <LinkFilters>
            <ExternalLinkFilter Type="Build" />
            <ExternalLinkFilter Type="Integrated in build" />
            <ExternalLinkFilter Type="Pull Request" />
            <ExternalLinkFilter Type="Branch" />
            <ExternalLinkFilter Type="Fixed in Commit" />
            <ExternalLinkFilter Type="Fixed in Changeset" />
            <ExternalLinkFilter Type="Source Code File" />
            <ExternalLinkFilter Type="Found in build" />
            <ExternalLinkFilter Type="GitHub Pull Request" />
            <ExternalLinkFilter Type="GitHub Commit" />
          </LinkFilters>
          <Columns>
            <Column Name="System.State" />
            <Column Name="System.ChangedDate" />
            <Column Name="System.PullRequest.IsFork" />
          </Columns>
        </LinksControlOptions>
      </Control>
    </Group>
  </Section>
</Page>
But the results still show only the original columns.
The problem is that the field/column you added (<Column Name="System.PullRequest.IsFork" />) is not a valid work item field/column. The workaround is only available for work item types, because the columns depend on work item fields.
You need to add a valid work item field/column here. You can get all the available work item fields by calling the Get Work Item REST API with the parameter $expand=Fields added to the URL for a specific work item:
GET https://{instance}/{collection}/{project}/_apis/wit/workitems/{id}?$expand=Fields&api-version=4.1
For example, the response for my Task work item shows all the available fields. (It depends on how you defined the fields; if you defined a custom field, you can also see it in the response body.)
After that, we can add the columns (System.CreatedBy and Microsoft.VSTS.Common.Priority, for example, in this sample).
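For instance, extending the <Columns> element from the layout above with the two fields mentioned here:
<Columns>
  <Column Name="System.CreatedBy" />
  <Column Name="Microsoft.VSTS.Common.Priority" />
</Columns>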
Then check the behavior in a Task work item.
Please note that Pull Request is not a work item type. We cannot get valid work item fields by calling the Pull Requests REST API, so in this case I don't think we can customize the columns like the common work item types.

Azure Dev Ops add field Remaining work in backlog

Is there a possibility to add the field "Microsoft.VSTS.Scheduling.RemainingWork" at the backlog level? I would like to have a column with the remaining work in the backlog, so that the work can be entered there directly without creating a work item.
example image
Azure DevOps Server version: Version Dev17.M153.3
I added the following lines in Bug.xml:
<FIELD name="Verbleibende Arbeit" refname="Microsoft.VSTS.Scheduling.RemainingWork" type="Double" reportable="measure" formula="sum" />

<Group Label="Details">
  <Control Label="Verbleibende Arbeit" Type="FieldControl" FieldName="Microsoft.VSTS.Scheduling.RemainingWork" />
  <Control Label="Aufwand" Type="FieldControl" FieldName="Microsoft.VSTS.Scheduling.Effort" />
  <Control Label="Schweregrad" Type="FieldControl" FieldName="Microsoft.VSTS.Common.Severity" />
</Group>
On the backlog, you can specify the columns and the quick add panel configuration (see the sketch after these links). Check this:
Set default columns
Customize the quick add panel
Use witadmin to import and export process configuration.
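For an on-premises XML process, both of these live in ProcessConfiguration.xml. A rough sketch of the relevant backlog section (the element names follow the process configuration schema, but the category, fields and widths here are placeholders; export your own configuration with witadmin first and only adjust the AddPanel and Columns parts):
<RequirementBacklog category="Microsoft.RequirementCategory" pluralName="Backlog items" singularName="Backlog item">
  <AddPanel>
    <!-- fields offered by the quick add panel at the top of the backlog -->
    <Fields>
      <Field refname="System.Title" />
      <Field refname="Microsoft.VSTS.Scheduling.RemainingWork" />
    </Fields>
  </AddPanel>
  <Columns>
    <!-- default columns shown on the backlog -->
    <Column refname="System.Title" width="400" />
    <Column refname="Microsoft.VSTS.Scheduling.RemainingWork" width="100" />
  </Columns>
  <!-- keep the States and other settings from your exported configuration -->
</RequirementBacklog>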

How to add pre-defined values on "Found in Environment" field

Azure DevOps has a field called "Found In Environment".
I want to add this to my work items, but I would like to set up a predefined list of values that the user should select from.
I can do this by adding a new custom field, but it would be strange to add a custom field for a field that Azure DevOps already has.
Question: Is there any way to configure a predefined set of values for an already existing field?
Actually, Azure DevOps only supports customizing the values of system picklists (except the Reason field), such as Severity, Activity, Priority, etc. You can refer to here for information about customizing system picklist values.
In my test, the Found in Environment field only supports a default value, not picklist items.
So, to answer your question: you can configure a set of values for some already existing fields, but not for the Found in Environment field.
Alternatively, you can use XML to customize your own process and field values according to your rules. Below is a simple sample of how to define picklist items for a work item field in XML.
Note that you only have the option to edit the XML after migrating from TFS to Azure DevOps. For more about XML, you can refer to here.
<FIELD name="Priority" refname="Microsoft.VSTS.Common.Priority" type="Integer" reportable="dimension">
  <HELPTEXT>Business importance. 1=must fix; 4=unimportant.</HELPTEXT>
  <ALLOWEDVALUES expanditems="true">
    <LISTITEM value="1" />
    <LISTITEM value="2" />
    <LISTITEM value="3" />
    <LISTITEM value="4" />
  </ALLOWEDVALUES>
  <DEFAULT from="value" value="2" />
</FIELD>
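If you are on the on-premises XML process model, the same pattern could in principle be applied to the Found in Environment field. Assuming its reference name is Microsoft.VSTS.CMMI.FoundInEnvironment (verify with witadmin listfields before relying on it; the list values below are just examples), the definition would look roughly like this:
<FIELD name="Found In Environment" refname="Microsoft.VSTS.CMMI.FoundInEnvironment" type="String" reportable="dimension">
  <ALLOWEDVALUES expanditems="true">
    <LISTITEM value="Development" />
    <LISTITEM value="Test" />
    <LISTITEM value="Staging" />
    <LISTITEM value="Production" />
  </ALLOWEDVALUES>
  <DEFAULT from="value" value="Test" />
</FIELD>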

phing nested if conditions

I am having trouble understanding the Phing documentation regarding multiple conditions for a given <if> tag. It implies you cannot have multiple conditions unless you use the <and> tag, but there are no examples of how to use it. Consequently I nested two <if> tags, but I feel silly doing this when I know there is a better way. Does anyone know how I can use the <and> tag to accomplish the following?
<if>
  <equals arg1="${deployment.host.type}" arg2="unrestricted" />
  <then>
    <if>
      <equals arg1="${db.adapter}" arg2="PDO_MYSQL" />
      <then>
        <!-- Code Here -->
      </then>
    </if>
  </then>
</if>
I find it very surprising that no one has had any experience with this. Phing is an implementation of the ANT build tool in PHP instead of Java. It is very useful for PHP developers who feel the lack of a simple and powerful deployment tool. Java's ability to package a self-contained web project into a single file, or to package multiple web project files into a yet bigger file, is an amazing capability. ANT or Phing does not get PHP to that point, but it's a definite step in the right direction, and leaps and bounds easier to understand and use than GNU Make ever was or will be.
According to the Phing documentation:
The <or> element doesn't have any attributes and accepts an arbitrary number of conditions as nested elements. This condition is true if at least one of its contained conditions is, conditions will be evaluated in the order they have been specified in the build file.
It may sound confusing at first, especially with no handy examples available, but the key phrase to note is "accepts an arbitrary number of conditions as nested elements." If you try the following build snippet, you should easily see how to use <or> and <and> conditions:
<if>
  <or>
    <equals arg1="foo" arg2="bar" />
    <equals arg1="baz" arg2="baz" />
  </or>
  <then>
    <echo message="Foo equals bar, OR baz equals baz!" />
  </then>
</if>

<if>
  <or>
    <equals arg1="foo" arg2="bar" />
    <equals arg1="baz" arg2="bam" />
  </or>
  <then>
    <echo message="Foo equals bar, OR baz equals baz!" />
  </then>
  <else>
    <echo message="No match to OR found." />
  </else>
</if>
<fail />
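To answer the original question directly, the same pattern works with <and>; a sketch using the two checks from the question (standard Phing condition syntax, untested against your build file):
<if>
  <and>
    <equals arg1="${deployment.host.type}" arg2="unrestricted" />
    <equals arg1="${db.adapter}" arg2="PDO_MYSQL" />
  </and>
  <then>
    <!-- Code Here: runs only when both conditions are true -->
  </then>
</if>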

jBPM, concurrent execution and process variables

When a process in jBPM forks into concurrent paths, each of these paths gets its own copy of the process variables, so that they run isolated from each other.
But what happens when the paths join again?
Obviously there could be conflicting updates.
Does the context revert back to the state before the fork?
Can I choose to copy individual variables from the separate tracks?
I think you have to configure the Task Controllers of your tasks. In some cases it is enough to set the access attribute in a way that does not result in conflicts (e.g. read access for the first path and read,write access for the second path). If that is not enough, you can implement your own TaskControllerHandler and implement the method void submitTaskVariables(TaskInstance taskInstance, ContextInstance contextInstance, Token token) with your custom logic. Please see: Task Controllers.
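A sketch of the first option (the element and attribute names follow jPDL 3 task controller syntax; the task names and handler class are made up for illustration):
<task-node name="left">
  <task name="left-task">
    <controller>
      <!-- the left path only reads the shared variable, so it cannot overwrite it -->
      <variable name="shared" access="read" />
      <variable name="left" access="read,write" />
    </controller>
  </task>
  <transition to="join" />
</task-node>

<task-node name="right">
  <task name="right-task">
    <!-- or plug in a custom handler that decides which values are written back -->
    <controller class="com.example.MyTaskControllerHandler" />
  </task>
  <transition to="join" />
</task-node>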
I tried a little experiment:
<fork name="fork1">
  <transition to="right" />
  <transition to="left" />
</fork>

<node name="left">
  <event type="node-enter">
    <script>
      <expression>
        left = "left";
        shared = left;
      </expression>
      <variable name='left' access='write' />
      <variable name='shared' access='write' />
    </script>
  </event>
  <transition to="join" />
</node>

<node name="right">
  <event type="node-enter">
    <script>
      <expression>
        right = "right";
        token.parent.processInstance.contextInstance.setVariable("fromRight", "woot!");
        shared = right;
      </expression>
      <variable name='right' access='write' />
      <variable name='shared' access='write' />
    </script>
  </event>
  <transition to="join" />
</node>

<join name="join">
  <transition to="done" />
</join>

<end-state name="done" />
At the end I had access to three variables: shared, right, and fromRight, which the script set explicitly on the parent.
The shared variable took its value from the right fork; the changes made on the left seemed to disappear.
Note that the transitions aren't actually asynchronous for me, so the whole experiment will have run in one transaction; these factors may affect the outcome.