Maximo Web Service Data Filter - SOAP

I've created an enterprise web service in Maximo that uses extsys1. In extsys1 I've created a duplicate of MXPERSONInterface and managed to create a query from it (sync was the default). Now that the web service is finished I can successfully query Maximo from a SoapUI client and get all the person data, but what I'd like to know is: can I select which data I want to export in my response? Like ignoring everything except name/lastname/email or anything like that.
If anyone has done that, or knows how with any other MBO, any help would be very much appreciated. The thing is, I don't want all the raw data in my response; I want to make it as user-friendly as I can.

There is a way to do import/export of data via web services that are dynamically accessed from external applications.
One thing to note when you access pre-defined object structures this way is that the response will always contain every single field that exists in that object structure.
Below is a brief tutorial on how to filter that data so that when you query your object structure you only get a subset of the fields in the response. For the sake of this tutorial I will use MXPERSON and will export Firstname, Lastname, City, Country and Postalcode.
First go to Integration > Object Structures > Create New Object Structure.
Name it My_MXPERSON, set Consumed By to INTEGRATION, set Authorized Application to PERSON, add a new row under Source Objects and select Person from the object list. Now go to More Actions > Include/Exclude Fields. Here you should un-check everything except Firstname, Lastname, City, Country and Postalcode (only those should remain checked). Click Save.
Now we need to create an enterprise service by going to Integration > Enterprise Services > New Enterprise Service. Call your service My_MXPERSON_ES, set the Operation to QUERY and for Object Structure select the My_MXPERSON you created earlier. Click Save.
Next, create a publish channel by going to Integration > Publish Channels > New Publish Channel. Name it My_MXPERSON_PC and for Object Structure select your My_MXPERSON (if you can't find it in the list, go back to your object structure and uncheck the "Query Only" box). Click Save.
Now you have everything set up to create your external system: Integration > External Systems > New External System. Name it My_MXPERSON_EXTSYS and set the End Point to whichever format you want your response in; I use MXXMLFILE. On the left side there are three types of queue you need to set up; I have one option for the first two and two for the last one (select the upper one, the one ending in cqin). Check Enabled.
Within your external system go to Publish Channels, select your My_MXPERSON_PC and enable it.
Within your external system go to Enterprise Services, select your My_MXPERSON_ES and enable it. Click Save.
The last thing you need to do before you're done is to create your web service: go to Integration > Web Services > New Web Service from Enterprise Service. Name it My_MXPERSON_Query and select My_MXPERSON_EXTSYS_My_MXPERSON_ES from the list, then select your web service in the list and go to More Actions > Deploy.
Once your web service is deployed you can access the WSDL file at servername/meaweb/wsdl/webservicename.wsdl.
For testing we will use SoapUI against the WSDL file.
Create a new SOAP project and copy/paste the URL of the WSDL file. If it loads successfully, paste this in the XML request field:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:max="http://www.ibm.com/maximo">
   <soapenv:Header/>
   <soapenv:Body>
      <max:QueryMy_MXPERSON baseLanguage="EN" transLanguage="EN">
         <max:My_MXPERSONQuery>
            <max:PERSON>
               <max:Firstname> Name you want to query </max:Firstname>
            </max:PERSON>
         </max:My_MXPERSONQuery>
      </max:QueryMy_MXPERSON>
   </soapenv:Body>
</soapenv:Envelope>
Remember to replace "Name you want to query" with an actual name from your table.
Hope this guide helped.

Using Maximo 7.5.0.5, go to Go To > Integration > External Systems
In External Systems, pick the system that you want to filter records for
Go to the Publish Channels tab
Click on Data Export
In the Export Condition field, enter your WHERE clause to filter your record set
I referenced these steps from IBM Help:
http://publib.boulder.ibm.com/infocenter/tivihelp/v27r1/index.jsp?topic=%2Fcom.ibm.itam.doc%2Fmigrating%2Ft_asset_disposal_export_data.html
Normally, I just reference the link. In my experience though, IBM's web site frequently changes URL structure and occasionally goes offline for "maintenance". For accessibility, I am including the text here. No offense to copyright.
Exporting asset disposal data
To provide information for review or for a company that you hire to dispose of assets, you can use the integration framework applications to export a data file with information about assets that you are planning to dispose of.
Before you begin
Before you attempt to export a file, check that the following tasks are completed:
JMS queues are configured. You can use either continuous queue or sequential queue, depending on your business process.
The external system for asset disposal integration is enabled.
The publish channel is enabled.
About this task
The following procedure explains how to export asset disposal data.
Procedure
1) On the navigation bar, click Go To > Integration > External Systems.
2) On the List tab, select the TAMITEXTSYS external system.
3) On the Publish Channels tab of the External Systems application, select the ITASSETDISPOSAL publish channel and click Data Export.
4) In the Export Condition field in the Data Export window, enter an SQL statement that is appropriate for the Maximo® database that you use. This statement specifies the export condition.
Typically conditions filter by location, by site ID, and by status, as shown in the following example.
location = 'DISPOSAL' and siteid = 'BEDFORD' and status not in ('DECOMMISSIONED','DISPOSED')
The SQL statement must use the database names for attributes as shown in the field help. To view the field help, position the cursor in a field and press Alt+F1. The field help displays the database table and column (attribute) in the following format: ASSET.SITEID, where SITEID is the attribute name.
5) Click OK to export the asset data.
What to do next
The location to which the file is exported depends on the global directory set for the system and on the filedir parameter for the endpoint of the external system. If no global directory is set, look in the root of the application server folder. If no filedir parameter is set for the external system, look in the 'flatfiles' sub-directory. For example,
C:\bea\user_projects\domains\maximo_database\flatfiles\TAMITEXTSYS_ITASSETDISPOSALInterface_1236264695765361846.dat
Another way to locate the file is to search the operating system file structure for TAMITEXTSYS_ITASSETDISPOSALInterface*.dat.

Related

Get list of projects with their process names

I am trying to get all the projects with their process names. I tried https://learn.microsoft.com/en-us/rest/api/azure/devops/core/projects/list?view=azure-devops-rest-5.1, which lists all projects but without the process name.
I know that we can get the process of each project by requesting it with capabilities included, but with this approach I need to hit the API once for every project.
I want to avoid this and get the project details together with the process with minimal API hits, ideally a single one. Is there a way to get both easily?
There isn't a documented REST API to do this, but you can get it by calling
https://dev.azure.com/{org}/_settings/projects?__rt=fps&__ver=2
The project results are under fps > dataProviders > data > ms.vss-admin-web.organization-projects-data-provider > Projects and include the process template name.
This is the request made by the Organization Settings > Projects page (https://dev.azure.com/{org}/_settings/projects).
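If you want to call it from code, here is a minimal sketch (assuming a personal access token with read access; the organization name and token are placeholders, and the JSON path in the comment just mirrors the structure described above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ProjectsWithProcess {
    public static void main(String[] args) throws Exception {
        String org = "your-org";   // placeholder: your organization name
        String pat = "your-pat";   // placeholder: a personal access token with read access

        // Undocumented settings endpoint that backs the Organization Settings > Projects page.
        String url = "https://dev.azure.com/" + org + "/_settings/projects?__rt=fps&__ver=2";

        // Azure DevOps accepts a PAT as HTTP basic auth with an empty user name.
        String basic = Base64.getEncoder().encodeToString((":" + pat).getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + basic)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The project list (with process template names) sits under:
        // fps > dataProviders > data > ms.vss-admin-web.organization-projects-data-provider > Projects
        System.out.println(response.body());
    }
}

Keep in mind this is not an official API, so the payload shape could change without notice.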

Dynamic rule creation with a Java rule engine

I have a question and am looking for suggestions. I am working on a project where business admins go to a UI and set some rules. For example, imagine JIRA had a feature like: if a ticket belongs to some arbitrary board "XYZ" and its type is "Task", then a label should be added.
The JIRA admin sets rules like this through the admin screens (how he sets them, keep aside for now). Now when a user creates a ticket under that board with that type but without a label, then based on the rule it should throw an error saying the label should be set.
There are two parts to implementing this feature:
1) While the admin sets the rule through the screen, we need to create the rule and store it somewhere.
2) While the user creates the ticket, run the stored rules and say whether it is valid or not.
I am looking for any Java framework with which this can be done easily: one that lets me create a rule for 1) in a form the framework understands, and run those rules for 2).
Does anyone have any suggestions on this?
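To make the two parts concrete, here is a rough, engine-agnostic sketch of what I have in mind (the Ticket and Rule classes and their fields are only illustrative, not tied to any particular rule engine):

import java.util.List;
import java.util.function.Predicate;

// Illustrative ticket model -- field names are made up for this example.
class Ticket {
    String board;
    String type;
    List<String> labels;

    Ticket(String board, String type, List<String> labels) {
        this.board = board;
        this.type = type;
        this.labels = labels;
    }
}

// Part 1: a stored rule is a condition plus the requirement that must hold when the condition matches.
// In a real system both parts would be persisted (e.g. as JSON or a DSL string) when the admin saves
// the rule, and loaded back into objects like this before evaluation.
class Rule {
    String name;
    Predicate<Ticket> condition;   // e.g. board == "XYZ" && type == "Task"
    Predicate<Ticket> requirement; // e.g. at least one label present
    String errorMessage;

    Rule(String name, Predicate<Ticket> condition, Predicate<Ticket> requirement, String errorMessage) {
        this.name = name;
        this.condition = condition;
        this.requirement = requirement;
        this.errorMessage = errorMessage;
    }
}

public class RuleRunner {
    public static void main(String[] args) {
        Rule labelRule = new Rule(
                "XYZ tasks need a label",
                t -> "XYZ".equals(t.board) && "Task".equals(t.type),
                t -> t.labels != null && !t.labels.isEmpty(),
                "A label must be set for Task tickets on board XYZ");

        Ticket ticket = new Ticket("XYZ", "Task", List.of());

        // Part 2: run every stored rule when the user creates a ticket.
        for (Rule rule : List.of(labelRule)) {
            if (rule.condition.test(ticket) && !rule.requirement.test(ticket)) {
                throw new IllegalStateException(rule.errorMessage);
            }
        }
    }
}

Whatever framework I pick would need to cover both sides: persisting the condition/requirement when the admin saves it (point 1) and evaluating it at create time (point 2).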
Thanks,
Swati

Talend - Stats and Logs - On database - error

I have a job that inserts data from SQL Server into MySQL. I have set the project settings as follows:
Checked the boxes for Use statistics (tStatCatcher), Use logs (tLogCatcher) and Use volumetrics (tFlowMeterCatcher).
Selected 'On Databases' and put in the table names (stats_table, logs_table, flowmeter_table) as well. These tables were created beforehand; their schemas were determined using the tCreateTable component.
The problem is that when I run the job, data is inserted into stats_table but not into flowmeter_table.
My job is as follows:
tMSSqlInput --> tMap --> tMysqlOutput
I have not included tStatCatcher, tLogCatcher or tFlowMeterCatcher; the stats and logs for this job are taken from the project settings.
My question: why is there no data entered in flowmeter_table? Should I include tStatCatcher, tLogCatcher and tFlowMeterCatcher explicitly in the job for it to run fine?
I am using TOS
Thanks in advance
Rathi
Using the flow meter requires you to manually configure the flows you want to monitor.
On every flow you want to monitor, right-click on the row > Parameters > Advanced settings > Monitor connection.
Then you should be able to see data in your flow table.
If you are using the project settings, you don't need to add the *Catcher components to your job.
Otherwise, you need to use the tStatCatcher, tLogCatcher and tFlowMeterCatcher components in the job directly. The components already have their schemas defined, so you just need to add a tMap and redirect the rows into the table you want.
Moreover, in order to use tLogCatcher you need to put some tDie or tWarn components in your job.

How to import users in CRM 2011 with source GUID

We have three organization tenants, Dev, Test and Live, all hosted on premise (CRM 2011. [5.0.9690.4376] [DB 5.0.9690.4376]).
Because of the way dialogs use GUIDs to reference records in lookups, we aim to keep the GUIDs of static records the same across all three tenants.
While all other entities are working fine, I am failing to import USERS and maintain their GUIDs. I am using Export/Import to get the data from the master tenant (Dev) into the Test and Live tenants. It is very similar to what the 'configuration migration tool' does in CRM 2013.
The issue I am facing is that for all other entities I can see the GUID field and hence map it during the import wizard, but no such field shows up for the SystemUser entity while running the import wizard. For example, with Account, I export an account, amend the CSV file and import it into the target tenant. When I do this, I map AccountId (from the target) to the Account of the source, and as a result this account's AccountId will be the same in both source and target.
At this point I am about to give up, but that will mean all dialogs that use a User lookup will fail.
Thank you for your help,
Try the following steps. I would strongly recommend trying this on an old, out-of-use tenant before trying it on a live system. I am not sure if this is supported by MS, but it works for me. (Another thing: you will have to manually assign BU and Roles after the import.)
Create an Advanced Find. Include all required fields for the SystemUser record. Add criteria that select the list of users you would like to move across.
Export
Save the file as CSV (this will show the first few hidden columns in Excel)
Rename the Primary Key field (in this case User) and remove all other fields with Do Not Modify.
Import the file and map this User column (with the GUID) to the User from CRM
Import the file and check the GUIDs in both tenants.
Good luck.
My only suggestion is that you could try to write a small console application that connects to both your source and destination organisations.
Using that you can duplicate the user records from the source to the destination, preserving the IDs in the process.
I can't say 100% it'll work, but I can't immediately think of a reason why it wouldn't. This is assuming all of the users you're copying over don't already exist in your target environments.
I prefer to resolve these issues by creating custom workflow activities. For example, you could create a custom workflow activity that returns a user record for an input domain name given as a string.
This means your dialogs contain only shared configuration values, e.g. mydomain\james.wood, which are used to dynamically find the record you need. Your dialog is still linked to a specific record, but without having to encode the source GUID.

How to organise REST endpoints for a drop-wizard application?

I am new to Dropwizard and REST.
My sample application is an order viewing system. Currently I am working on a page that consists of a set of order search criteria, a search button, and a link to download the search result as CSV. The download link is displayed only after a successful search. The application has to write the search result to a CSV file, and the file location returned will be used to download the file.
I need help organising the endpoints for this.
Initially I thought of an endpoint GET /orders (text/JSON) with the search criteria passed in as query params. But since I will actually be creating the CSV for every GET request, I am wondering if I am violating the REST constraints for GET (a resource should not be created by a GET). Or, since the actual resource is Order and not the CSV, is it OK to have the endpoint as GET?
Or do I need multiple endpoints, adhering to the REST constraints and conventions, interacting with each other to produce the required result?
Like:
1. POST /orders/csv (text/JSON, file name): creates the CSV file of orders and returns the file name as JSON.
2. GET /orders/csv/<file_name>: gets the file to download.
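To make option 2 concrete, here is a rough JAX-RS sketch of how those two endpoints might look in a Dropwizard resource (the class name, export directory and the body of the search/CSV-writing step are placeholders, not a real implementation):

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.io.File;
import java.util.Map;
import java.util.UUID;

@Path("/orders")
public class OrdersResource {

    // 1. POST /orders/csv - runs the search, writes the CSV and returns the file name as JSON.
    @POST
    @Path("/csv")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response createCsv(Map<String, String> searchCriteria) {
        String fileName = UUID.randomUUID() + ".csv";
        // ... run the order search with searchCriteria and write the result to the file ...
        return Response.ok(Map.of("fileName", fileName)).build();
    }

    // 2. GET /orders/csv/{fileName} - streams the previously created file for download.
    @GET
    @Path("/csv/{fileName}")
    @Produces("text/csv")
    public Response downloadCsv(@PathParam("fileName") String fileName) {
        File file = new File("/tmp/exports", fileName); // placeholder export directory
        return Response.ok(file)
                .header("Content-Disposition", "attachment; filename=\"" + fileName + "\"")
                .build();
    }
}

This split would keep the GET free of side effects, with only the POST creating anything.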
Many thanks for your help.