I need to implement a "winning-configuration-property" algorithm in my application.
For example, for the property dashboard_material, I would create a file dashboard_material.yml (I am planning to represent each property as a file; this is slightly negotiable) with the following content. I believe this format presents a consolidated view of the variants of the property and is more suitable for impact analysis when someone changes values in a particular scenario:
car_type=luxury&car_subtype=luxury_sedan&special_features=none : leather
car_type=luxury&car_subtype=luxury_sedan&special_features=limited_edition : premium_leather
car_type=economy : pseudo_leather
default : pseudo_leather
I need the closest match. A luxury car can be a sedan or a compact.
I assume these are "decorators" of a car in object-oriented terms, but I have not found any useful implementation of the above problem among sample decorator patterns.
For example, an API:
GET /configurations/dashboard_material should return the following values based on input parameters:
car_type=economy : pseudo_leather
car_type=luxury & car_subtype=luxury_sedan : leather
car_type=luxury : pseudo_leather (from default. Error or null if value does not exist)
This looks very similar to the "specifications" or "queryDSL" problem with GET APIs, in terms of slicing and dicing based on criteria,
but basically I am looking for run-time determination of a config value within a single microservice
(which is a Spring config client; I use git2consul to push values from Git into the Consul KV store, and the Spring config client is tied to the Consul KV. I am open to any equivalent or better alternatives).
I would ideally like to do all the processing as a post-processing step after the configuration is read (from either a Spring config server or the Consul KV),
so that no real processing happens after a query is received. The same post-processing will also have to run after every Spring config client "refresh" triggered by configuration property updates.
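If you stay on Spring Cloud, the "re-run the post-processing after every refresh" part can hang off the environment-change event. A minimal sketch; RuleIndex and its rebuild method are hypothetical placeholders for your consolidated lookup structure:

import java.util.Set;

import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Hypothetical interface for the pre-computed lookup structure that the
// query path reads from.
interface RuleIndex {
    void rebuild(Set<String> changedKeys);
}

@Component
class WinningConfigReloader {

    private final RuleIndex ruleIndex;

    WinningConfigReloader(RuleIndex ruleIndex) {
        this.ruleIndex = ruleIndex;
    }

    // Spring Cloud publishes EnvironmentChangeEvent after a refresh when
    // property values from the config server / Consul KV have changed;
    // rebuilding the index here keeps the query path computation-free.
    @EventListener
    void onConfigChange(EnvironmentChangeEvent event) {
        ruleIndex.rebuild(event.getKeys());
    }
}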
I have previously seen such an implementation (as an ops engineer) built on Netflix Archaius, but again I am not finding any suitable text on the Archaius pages.
My trivial/naive solution would be to create multiple maps/trees to store the data and then consolidate them into a single map per API request, effectively overriding some of the values from the lower-priority maps.
I am looking for open-source implementations or references so that I can avoid writing new code.
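To make the intended matching semantics concrete, here is a minimal sketch of the naive best-match lookup. ConfigRule, bestMatch, and the parsing are illustrative, not from an existing library, and the sketch assumes (from the expected outputs above) that a criterion value of "none" also matches a request that omits that key:

import java.util.*;

public class WinningConfig {

    record ConfigRule(Map<String, String> criteria, String value) {}

    // Parses one line such as "car_type=luxury&car_subtype=luxury_sedan : leather".
    // The "default" line becomes a rule with zero criteria.
    static ConfigRule parse(String line) {
        String[] parts = line.split("\\s*:\\s*", 2);
        Map<String, String> criteria = new LinkedHashMap<>();
        if (!parts[0].trim().equals("default")) {
            for (String pair : parts[0].trim().split("&")) {
                String[] kv = pair.split("=", 2);
                criteria.put(kv[0].trim(), kv[1].trim());
            }
        }
        return new ConfigRule(criteria, parts[1].trim());
    }

    // A criterion is satisfied by an equal request value, or (per the
    // expected outputs above) by an absent key when its value is "none".
    static boolean satisfied(Map.Entry<String, String> criterion, Map<String, String> request) {
        String actual = request.get(criterion.getKey());
        return actual != null ? actual.equals(criterion.getValue())
                              : criterion.getValue().equals("none");
    }

    // The winning rule is the matching rule with the most criteria, so the
    // zero-criteria "default" only wins when nothing more specific matches.
    static Optional<String> bestMatch(List<ConfigRule> rules, Map<String, String> request) {
        return rules.stream()
                .filter(r -> r.criteria().entrySet().stream().allMatch(c -> satisfied(c, request)))
                .max(Comparator.comparingInt(r -> r.criteria().size()))
                .map(ConfigRule::value);
    }

    public static void main(String[] args) {
        List<ConfigRule> rules = List.of(
                parse("car_type=luxury&car_subtype=luxury_sedan&special_features=none : leather"),
                parse("car_type=luxury&car_subtype=luxury_sedan&special_features=limited_edition : premium_leather"),
                parse("car_type=economy : pseudo_leather"),
                parse("default : pseudo_leather"));

        System.out.println(bestMatch(rules, Map.of("car_type", "economy")).orElse(null));  // pseudo_leather
        System.out.println(bestMatch(rules, Map.of("car_type", "luxury")).orElse(null));   // pseudo_leather (default)
        System.out.println(bestMatch(rules, Map.of("car_type", "luxury",
                "car_subtype", "luxury_sedan")).orElse(null));                             // leather
    }
}

Rebuilding the parsed rule list on every refresh (as in the reloader above) keeps per-request work down to the scoring pass.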
I have a question and am trying to get suggestions. I am working on a project where business admins go to a UI and set some rules. For example, suppose JIRA had a feature like: if a ticket belongs to some arbitrary board "XYZ" and the type of the ticket is "Task", then a label should be added.
The JIRA admin sets these kinds of rules through the admin screens (how he sets them, keep aside for now). Now, when a user creates a ticket under that board and sets that type without a label, then based on the rule it should throw an error saying the label should be set.
There are two parts to implementing this feature:
1) While the admin sets this up through the screen, we need to create the rule and store it somewhere.
2) While the user creates the ticket, we need to run the stored rules and say whether the ticket is valid or not.
I am looking for a Java framework with which this can be done easily: one that creates a rule it can understand for part 1 and runs those rules for part 2 (a minimal sketch of both parts follows).
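For what it's worth, Drools (suggested elsewhere on this page) can cover both parts if the admin screen generates and stores DRL text. A minimal sketch, with the Ticket fact type declared inside the DRL itself to keep it self-contained (in practice it would more likely be a Java class from your domain model):

import java.util.ArrayList;
import java.util.List;

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class TicketRuleCheck {
    public static void main(String[] args) throws Exception {
        // Part 1: the rule text your admin screen would generate and store
        // (database, file, anywhere).
        String drl =
            "package rules;\n" +
            "global java.util.List errors;\n" +
            "declare Ticket\n" +
            "    board : String\n" +
            "    type  : String\n" +
            "    label : String\n" +
            "end\n" +
            "rule \"Task on board XYZ needs a label\"\n" +
            "when\n" +
            "    Ticket( board == \"XYZ\", type == \"Task\", label == null )\n" +
            "then\n" +
            "    errors.add(\"Label must be set\");\n" +
            "end\n";

        // Part 2: at validation time, compile the stored rule and run it
        // against the ticket the user is creating.
        KieBase kieBase = new KieHelper().addContent(drl, ResourceType.DRL).build();
        FactType ticketType = kieBase.getFactType("rules", "Ticket");
        Object ticket = ticketType.newInstance();
        ticketType.set(ticket, "board", "XYZ");
        ticketType.set(ticket, "type", "Task"); // label intentionally left null

        KieSession session = kieBase.newKieSession();
        List<String> errors = new ArrayList<>();
        session.setGlobal("errors", errors);
        session.insert(ticket);
        session.fireAllRules();
        session.dispose();

        System.out.println(errors); // [Label must be set]
    }
}

In a real system the DRL would be assembled from the admin's form fields, persisted, and loaded at validation time, and the compiled KieBase should be cached rather than rebuilt on every check.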
Does anyone have any suggestions on this?
Thanks,
Swati
I've created an enterprise web service in Maximo that uses extsys1. In extsys1 I've created a duplicate of MXPERSONInterface and managed to create a query from it (sync was the default). Now that I've finished my web service, I can successfully query Maximo from a SoapUI client and get all the person data, but what I'd like to know is: can I select which data I want to export in my response? Like ignoring everything except name/last name/email, or anything like that.
If anyone has done this, or knows how with any other MBO, any help would be very much appreciated. The thing is, I don't want all the raw data in my response; I want to make it as user-friendly as I can.
There is a way to do import/export of data via Web Services that are
dynamically accessed from external applications.
Another thing to note when you're accessing pre-defined object structures in
this way is that the response will always contain every single field that exists
in that object structure.
I will write down a brief tutorial on how to filter that data so that when
you query your object structure, you only get a subset of the data in the response.
For the sake of this tutorial I will use MXPERSON and will export Firstname, Lastname, City,
Country and Postalcode.
First go to Integration > Object Structures > Create New Object Structure.
Name it My_MXPERSON, set Consumed By to INTEGRATION, set Authorized Application to PERSON, add a new row under Source Objects, and select PERSON from the object list. Now you can go to More Actions > Include/Exclude Fields. Here you should un-check everything except Firstname, Lastname, City, Country and Postalcode (only these should remain CHECKED). Click Save.
Now we need to create an enterprise service by going to Integration > Enterprise Services > New Enterprise Service. Call your service My_MXPERSON_ES, set Operation to QUERY, and for Object Structure select the My_MXPERSON you created earlier. Click Save.
The next step is to create a publish channel by going to Integration > Publish Channels > New Publish Channel. Name it My_MXPERSON_PC and for Object Structure select your My_MXPERSON (if you can't find it in the list, go to your object structure and un-check the "Query Only" box). Click Save.
Now you have everything set up to create your external system: Integration > External Systems > New External System. Name it My_MXPERSON_EXTSYS and set End Point to whichever format you want your response in; I use MXXMLFILE. On the left side there are three types of queues you need to set up; I had one option for the first two and two for the last one (select the upper one, which ends with cqin). Check Enabled.
Within your external system, go to Publish Channels, select your My_MXPERSON_PC, and enable it.
Within your external system, go to Enterprise Services, select your My_MXPERSON_ES, and enable it. Click Save.
The last thing you need to do is create your web service: go to Integration >
Web Services > New Web Service from Enterprise Service. Name it My_MXPERSON_Query and select My_MXPERSON_EXTSYS_My_MXPERSON_ES from the list. Then select your web service from the list and go to More Actions > Deploy.
Once your web service is deployed, you can access the WSDL file at servername/meaweb/wsdl/webservicename.wsdl .
Here we will use SoapUI to test the WSDL file.
Create a new SOAP project and copy/paste the URL of the WSDL file. If it loads successfully, paste this into the XML request field:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:max="http://www.ibm.com/maximo">
   <soapenv:Header/>
   <soapenv:Body>
      <max:QueryMy_MXPERSON baseLanguage="EN" transLanguage="EN">
         <max:My_MXPERSONQuery>
            <max:PERSON>
               <max:Firstname> Name you want to query </max:Firstname>
            </max:PERSON>
         </max:My_MXPERSONQuery>
      </max:QueryMy_MXPERSON>
   </soapenv:Body>
</soapenv:Envelope>
Remember to swap "Name you want to query" with the actual name in your table.
Hope this guide helped.
Using Maximo 7.5.0.5, Go To > Integration > External Systems
In External Systems, pick your system that you want to filter records for
Go to the Publish Channels tab
Click on Data Export
In the Export Condition field, enter your where clause to filter your record set
I referenced these steps from IBM Help:
http://publib.boulder.ibm.com/infocenter/tivihelp/v27r1/index.jsp?topic=%2Fcom.ibm.itam.doc%2Fmigrating%2Ft_asset_disposal_export_data.html
Normally I would just reference the link, but in my experience IBM's web site frequently changes URL structure and occasionally goes offline for "maintenance". For accessibility, I am including the text here. No copyright infringement intended.
Exporting asset disposal data
To provide information for review or for a company that you hire to dispose of assets, you can use the integration framework applications to export a data file with information about assets that you are planning to dispose of.
Before you begin
Before you attempt to export a file, check that the following tasks are completed:
JMS queues are configured. You can use either continuous queue or sequential queue, depending on your business process.
The external system for asset disposal integration is enabled.
The publish channel is enabled.
About this task
The following procedure explains how to export asset disposal data.
Procedure
1) On the navigation bar, click Go To > Integration > External Systems.
2) On the List tab, select the TAMITEXTSYS external system.
3) On the Publish Channels tab of the External Systems application, select the ITASSETDISPOSAL publish channel and click Data Export.
4) In the Export Condition field in the Data Export window, enter an SQL statement that is appropriate for the Maximo® database that you use. This statement specifies the export condition.
Typically conditions filter by location, by site ID, and by status, as shown in the following example.
location = 'DISPOSAL' and siteid = 'BEDFORD' and status not in ('DECOMMISSIONED','DISPOSED')
The SQL statement must use the database names for attributes as shown in the field help. To view the field help, position the cursor in a field and press Alt+F1. The field help displays the database table and column (attribute) in the following format: ASSET.SITEID, where SITEID is the attribute name.
5) Click OK to export the asset data.
What to do next
The location to which the file is exported depends on the global directory set for the system and on the filedir parameter for the endpoint of the external system. If no global directory is set, look in the root of the application server folder. If no filedir parameter is set for the external system, look in the 'flatfiles' sub-directory. For example,
C:\bea\user_projects\domains\maximo_database\flatfiles\TAMITEXTSYS_ITASSETDISPOSALInterface_1236264695765361846.dat
Another way to locate the file is to search the operating system file structure for TAMITEXTSYS_ITASSETDISPOSALInterface*.dat.
According to this IBM article about backwards compatibility in SOAP, new fields cannot be added to output types without breaking the contract. The relevant snippet is from the section titled "New, optional fields in an existing data type":
You can add an element to an existing complexType as long as you make it optional (using the minOccurs="0" attribute). But be careful. Adding an optional element is a minor change only if its enclosing complexType is received as input to the new service. The new service cannot return a complexType with new fields. If an old client were to receive the new field, the client deserialization would fail because the client would not know about the new field.
This was written in 2004 against the WSDL 1.1 spec. Is this still true under the current WSDL 1.2 spec? Is there no way to define a default behavior of "ignore" for new unknown fields? This statement also seems implementation-specific; or is that behavior mandated by the spec?
I am trying to contend with the issue of evolving a SOAP service that returns complex business objects. New fields will be added as consumers find use cases for them. I would like to avoid having to keep N versions of the service around simply for adding new fields.
From my personal experience, this is still the case. I think your main concern is the versioning methodology. You can look at http://www.ibm.com/developerworks/webservices/library/ws-version/ or, closer to home, Web Services API Versioning.
Has anyone tried integrating Apache Camel with Drools Fusion, or just Drools?
Following is my use case.
Get data from an external service using REST.
Filter the data (using rules defined in Drools).
The data from the external service could also be a stream of information (e.g., a Twitter feed, or the real-time location of a user).
Any help or pointers would be appreciated.
Thanks.
Drools has a Camel component. Using it is not much different from using any other Camel component.
source: https://github.com/droolsjbpm/droolsjbpm-integration/tree/master/drools-camel
binary (in the droolsjbpm-integration bundle): http://www.jboss.org/drools/downloads.html
The only thing to be "aware" of is that Drools can treat Camel messages as:
commands
regular facts
as-is objects, and re-route them
Some articles:
http://blog.athico.com/search?q=camel
Unfortunately, the documentation only describes the "command" use case (1):
http://docs.jboss.org/drools/release/5.4.0.Beta2/droolsjbpm-integration-docs/html/ch01.html
Some test cases you can use as examples for the use cases (2) and (3) above:
https://github.com/droolsjbpm/droolsjbpm-integration/tree/master/drools-camel/src/test/java/org/drools/camel/component
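For the "command" case (1), a route ends up looking roughly like the sketch below. This is based on the 5.x documentation linked above and assumes, as described there, that a session has been registered in the Camel registry under the name ksession1 and that an "xstream" data format is available; the "direct:filter" endpoint name is arbitrary:

import org.apache.camel.builder.RouteBuilder;
import org.drools.camel.component.DroolsPolicy;

public class DroolsFilterRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Message bodies arriving here are XML-marshalled Drools commands
        // (insert, fire-all-rules, ...), per the "command" use case.
        from("direct:filter")
            .policy(new DroolsPolicy())     // lets the route execute Drools commands
            .unmarshal("xstream")           // XML body -> command objects
            .to("drools://node/ksession1")  // run against the registered session
            .marshal("xstream");            // results back to XML
    }
}

For the streaming part of your use case, the consumer endpoint in from(...) could be whatever Camel component delivers the feed (camel-twitter, an HTTP poller, etc.), with the rule-evaluation steps unchanged.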
Hope this helps.