OK, so I want to import sales orders from a flat file or Excel file delivered via an FTP connection. I have a copy of the sales import SP (stored procedure) that our ERP uses, but I want to repurpose it to read a file directly from a folder location rather than importing it through the ERP UI. I'm not sure how to go about doing this.
I have read about flat files and bulk inserts, but I don't think this is the correct route.
I would like to find a way to repurpose this SP; I have not included it here as it's a bit large.
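In case it helps frame an answer, here is roughly what I understand the bulk-insert route to look like; the server, database, staging table, file path and procedure names below are all placeholders, not our real ones:
# Rough sketch: load the dropped file into a staging table with BULK INSERT,
# then hand the staged rows to the existing import procedure.
# All object names and the UNC path are placeholders.
sqlcmd -S MyErpServer -d MyErpDb -Q "
  BULK INSERT dbo.SalesOrderStaging
  FROM '\\\\ftpserver\\drop\\orders.csv'
  WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
  EXEC dbo.usp_ImportSalesOrders;
"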
I would like to import external csv file to maillist box.
Is there any way to import external csv file to Unica maillist box? Thank you.
In the edit mode of your flowchart, select the Admin drop down and choose Tables.
Map your .csv data as a base record table from your data source.
Make sure that the audience level is the same as your Mail List's so that your data will show as an option.
Then you should be able to run your Mail List process using your data.
We are deploying Tableau for a bank.
We had created six test dashboards using dummy data on a staging database over a SQL connection; let's say it has the IP 10.10.10.10.
Now we need to use the same views we built on the dummy data against live data, but over a different connection, again a SQL engine, with an IP of, let's say, 20.20.20.20. All the variable names and other properties are the same; the only difference is that the live data will not have the calculated fields, which we can deploy in the live environment.
The challenge is: the LIVE data, being a bank's, is highly confidential and cannot be used from outside the operations site; rather, we need to deploy it from an ODC [a restricted environment]. Hence we simply cannot do a replace data source.
So we are planning to move the twbx files and data extracts for each of these views to the ODC using a shared folder. The process would then be as below:
As the live SQL database is different from the dummy one, we will get an error
We will select Edit Data Connection
We will select the Tableau data extract for each sheet and dashboard
We will then select the Replace Data Source option and choose the live SQL database
We will extract the new data
The visualization should then work fine
Earlier we had just moved the twbx files, hence it failed. Is there a different approach to this?
I did something similar to this.
For that, you must:
have the same schema in the live database and the dummy database
not change the name of any source table or column
create your viz
send it in .twb form, which is an editable XML format
Now the hard part: open your .twb in Notepad and replace all the connection details with the new ones (a scripted version of this swap is sketched below).
Save and open it in Tableau.
Tell me if it didn't work.
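To illustrate the swap — treat this as a sketch, not documented Tableau tooling; the attribute layout inside a .twb varies by version and connection type, so check your own file first. The IPs are the placeholders from the question:
# Swap the staging server address for the live one inside the workbook XML.
# Assumes the host is stored in a server='...' attribute of the <connection> element.
sed -i.bak "s/server='10.10.10.10'/server='20.20.20.20'/g" dashboard.twb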
One method would be to modify the hosts file on your local computer, pointing the production server name at the staging instance of the database. For example, let's say your production database is prod.url.com and you have a staging reporting db server instance called reportstage.otherurl.com.
Open your hosts file and add an entry for prod.url.com, pointing it to reportstage.otherurl.com (see the sketch after these steps).
Develop the report in Desktop, with the db connection string to prod.url.com.
When you publish the twb file to Server, no connection string changes are needed.
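On Linux or macOS, that hosts entry could be appended like this; 192.0.2.15 stands in for whatever reportstage.otherurl.com actually resolves to (on Windows, edit C:\Windows\System32\drivers\etc\hosts by hand instead):
# Alias the production hostname to the staging reporting server's IP.
echo '192.0.2.15    prod.url.com' | sudo tee -a /etc/hosts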
Another, easier way is to publish the twb to Server with your staging connection string and then edit the connection string in the data source on Server.
Develop the twb file on your local computer against your staging database.
Publish the twb file to Server.
Go to the workbook on Server and instead of looking at the views, click on Data Sources.
Edit the data source(s) connection information. This allows you to edit the server name, port, username, or password.
I've used this second method quite a bit. We have an environment where we can't hit the production db outside of the data center. Our staging environment doesn't have that restriction. We develop against the stage db, deploy, and edit the server name in the data source.
I need to re-create a database from one dashDB Bluemix service in another, and I need to automate this procedure in bash scripts.
The best I can think of is a dashDB REST API that allows me to export the content of the entire database into JSON format (or any other format you can think of), and a corresponding API that allows me to re-import the content into a different database on the same service or on a different service, possibly in a different Bluemix space. Thanks.
I assume you want to do a one-time move and that this is not about continuous replication. In that case, simply sign up at http://datascience.ibm.com, navigate to DataWorks, select "Load Data" from the navigation panel (open it by clicking top left) and then select Cloud Database as the source type.
(Screenshot: DataWorks load data from dashDB to dashDB.)
If you would still prefer to write your own app or script to do the data movement, and you want a REST API to export JSON data, then I recommend writing a simple R script that reads the data from a table (using ibmdbR) and writes it to stdout, deploying the script into dashDB (POST /home), and running the R script from your app/script by calling the /rscript endpoint: https://developer.ibm.com/clouddataservices/wp-content/themes/projectnext-clouddata/dashDB/#/
For Db2 on Cloud and Db2 Warehouse on Cloud, there is a REST API available that allows you to export data from a table in CSV format (up to 100,000 rows) and then load the data back. It requires a few requests, such as:
POST /auth/tokens
GET /sql_query_export
POST /home_content/{path}
POST /load_jobs
GET /load_jobs/{id}
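A bash sketch of the first two calls is below; the remaining steps follow the same pattern. The endpoint paths are the ones listed above, but the request/response field names (userid, password, token) are assumptions — check the dbapi reference for your instance:
#!/usr/bin/env bash
HOST='https://<HOSTNAME>/dbapi/v3'

# 1. POST /auth/tokens -- exchange credentials for a bearer token
#    (field names here are assumptions).
TOKEN=$(curl -s -X POST "$HOST/auth/tokens" \
  -H 'Content-Type: application/json' \
  -d '{"userid":"<USERID>","password":"<PASSWORD>"}' | jq -r '.token')

# 2. GET /sql_query_export -- export a query result as CSV (up to 100,000 rows).
curl -s -G "$HOST/sql_query_export" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode 'query=SELECT * FROM SRC_SCHEMA.SRC_TABLE' > export.csv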
I've implemented a client npm module for this API, db2-rest-client, and you can export a statement result to a JSON file like this:
export DB_USERID='<USERID>';export DB_PASSWORD='<PASSWORD>';export DB_URI='https://<SOURCE_HOSTNAME>/dbapi/v3'
db2-rest-client query --query="SELECT * FROM SRC_SCHEMA.SRC_TABLE" > test.json
Then you can transform that data into a .csv file (one possible approach is sketched after the load command below) and use the load job:
export DB_USERID='<USERID>';export DB_PASSWORD='<PASSWORD>';export DB_URI='https://<TARGET_HOSTNAME>/dbapi/v3'
db2-rest-client load --file=transformed.csv --table='DEST_TABLE' --schema='DEST_SCHEMA'
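For the transform step, if the exported JSON turns out to be a flat array of row objects — an assumption; verify against the actual db2-rest-client output — a jq one-liner can produce the CSV:
# Emit a header row from the first object's keys, then one CSV line per row.
jq -r '(.[0] | keys_unsorted) as $k | ([$k] + map([.[$k[]]]))[] | @csv' test.json > transformed.csv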
We have three organization tenants, Dev, Test and Live, all hosted on premise (CRM 2011 [5.0.9690.4376] [DB 5.0.9690.4376]).
Because of the way dialogs use GUIDs to reference records in lookups, we aim to keep the GUIDs of static records the same across all three tenants.
While all other entities are working fine, I am failing to import USERS and maintain their GUIDs. I am using Export/Import to get the data from the master tenant (Dev) into the Test and Live tenants. It is very similar to what the Configuration Migration tool does in CRM 2013.
The issue I am facing is that for all other entities I can see the GUID field and can therefore map it in the import wizard, but no such field shows up for the SystemUser entity when running the import wizard. For example, with Account, I export an account, amend the CSV file and import it into the target tenant. When I do this, I map AccountId (in the target) to the Account of the source, and as a result this account's AccountId is the same in both source and target.
At this point I am about to give up, but that will mean every dialog that uses a User lookup will fail.
Thank you for your help.
Try the following steps. I would strongly recommend trying this on an old, out-of-use tenant before trying it on a live system. I am not sure whether this is supported by MS, but it works for me. (Another thing: you will have to manually assign BUs and roles after the import.)
Create an Advanced Find. Include all required fields for the SystemUser record. Add criteria that select the list of users you would like to move across.
Export.
Save the file as CSV (this will reveal the first few hidden columns in Excel).
Rename the primary key field (in this case, User) and remove all other fields marked Do Not Modify.
Import the file and map this User column (with the GUID) to the User from CRM.
Import the file and check the GUIDs in both tenants.
Good luck.
My only suggestion is that you could try writing a small console application that connects to both your source and destination organisations.
Using that, you can duplicate the user records from the source to the destination, preserving the IDs in the process.
I can't say 100% that it'll work, but I can't immediately think of a reason why it wouldn't. This assumes that none of the users you're copying over already exist in your target environments.
I prefer to resolve these issues by creating custom workflow activities. For example, you could create a custom workflow activity that returns a user record for a domain name passed in as a string input.
This means your dialogs contain only shared configuration values, e.g. mydomain\james.wood, which are used to dynamically find the record you need. Your dialog is then linked to a specific record, but without having to encode the source GUID.
I've created an enterprise web service in Maximo that uses extsys1. In extsys1 I've created a duplicate of MXPERSONInterface and managed to create a query from it (sync was the default). Now that I've finished my web service, I can successfully query Maximo from a SoapUI client and get all the person data, but what I'd like to know is: can I select which data I want to export in my response? Something like ignoring everything except name/last name/email.
If anyone has done this, or knows how to with any other MBO, any help would be very much appreciated. The thing is, I don't want all the raw data in my response; I want to make it as user-friendly as I can.
There is a way to do import/export of data via web services that are dynamically accessed from external applications.
Another thing to note when you're accessing pre-defined object structures this way is that the response will always contain every single field that exists in that object structure.
Below is a brief tutorial on how to filter that data so that when you query your object structure you only get a subset of the data in the response. For the sake of this tutorial I will use MXPERSON and will export Firstname, Lastname, City, Country and Postalcode.
First go to Integration > Object Structures > Create New Object Structure.
Name it My_MXPERSON, set it to be consumed by INTEGRATION, set the authorized application to PERSON, and add a new row under Source Objects, selecting Person from the object list. Now go to More Actions > Include/Exclude Fields. Here you should un-check everything except Firstname, Lastname, City, Country and Postalcode (only these need to be CHECKED). Click Save.
Now we need to create an enterprise service by going to Integration > Enterprise Services > New Enterprise Service. Call your service My_MXPERSON_ES, set Operation to QUERY, and for Object Structure select the My_MXPERSON you created earlier. Click Save.
Next, create a publish channel by going to Integration > Publish Channels > New Publish Channel. Name it My_MXPERSON_PC and for Object Structure select your My_MXPERSON (if you can't find it in the list, go to your object structure and uncheck the "Query Only" box). Click Save.
Now you have everything set up to create your external system: Integration > External Systems > New External System. Name it My_MXPERSON_EXTSYS and set the End Point to the format you want your response in; I use MXXMLFILE. On the left side there are three types of queues you need to set up; I had one option for the first two and two options for the last one (select the upper one, which ends with cqin). Check Enabled.
Within your external system, go to Publish Channels, select your My_MXPERSON_PC and enable it.
Within your external system, go to Enterprise Services, select your My_MXPERSON_ES and enable it. Click Save.
The last thing you need to do is create your web service: go to Integration > Web Services > New Web Service from Enterprise Service. Name it My_MXPERSON_Query and select My_MXPERSON_EXTSYS_My_MXPERSON_ES from the list, then select your web service in the list and go to More Actions > Deploy.
Once your web service is deployed, you can access the WSDL file at servername/meaweb/wsdl/webservicename.wsdl.
To test the WSDL file, we will use SoapUI here.
Create a new SOAP project and copy/paste the URL of the WSDL file. If it loads successfully, paste this into the XML request field:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:max="http://www.ibm.com/maximo">
   <soapenv:Header/>
   <soapenv:Body>
      <max:QueryMy_MXPERSON baseLanguage="EN" transLanguage="EN">
         <max:My_MXPERSONQuery>
            <max:PERSON>
               <max:Firstname>Name you want to query</max:Firstname>
            </max:PERSON>
         </max:My_MXPERSONQuery>
      </max:QueryMy_MXPERSON>
   </soapenv:Body>
</soapenv:Envelope>
Remember to swap "Name you want to query" with the actual name in your table.
Hope this guide helped.
Using Maximo 7.5.0.5, Go To > Integration > External Systems
In External Systems, pick your system that you want to filter records for
Go to the Publish Channels tab
Click on Data Export
In the Export Condition field, enter your where clause to filter your record set
I referenced these steps from IBM Help:
http://publib.boulder.ibm.com/infocenter/tivihelp/v27r1/index.jsp?topic=%2Fcom.ibm.itam.doc%2Fmigrating%2Ft_asset_disposal_export_data.html
Normally, I would just reference the link. In my experience, though, IBM's web site frequently changes URL structure and occasionally goes offline for "maintenance". For accessibility, I am including the text here. No offense to copyright intended.
Exporting asset disposal data
To provide information for review or for a company that you hire to dispose of assets, you can use the integration framework applications to export a data file with information about assets that you are planning to dispose of.
Before you begin
Before you attempt to export a file, check that the following tasks are completed:
JMS queues are configured. You can use either a continuous queue or a sequential queue, depending on your business process.
The external system for asset disposal integration is enabled.
The publish channel is enabled.
About this task
The following procedure explains how to export asset disposal data.
Procedure
1) On the navigation bar, click Go To > Integration > External Systems.
2) On the List tab, select the TAMITEXTSYS external system.
3) On the Publish Channels tab of the External Systems application, select the ITASSETDISPOSAL publish channel and click Data Export.
4) In the Export Condition field in the Data Export window, enter an SQL statement that is appropriate for the Maximo® database that you use. This statement specifies the export condition.
Typically, conditions filter by location, by site ID, and by status, as shown in the following example:
location = 'DISPOSAL' and siteid = 'BEDFORD' and status not in ('DECOMMISSIONED','DISPOSED')
The SQL statement must use the database names for attributes as shown in the field help. To view the field help, position the cursor in a field and press Alt+F1. The field help displays the database table and column (attribute) in the following format: ASSET.SITEID, where SITEID is the attribute name.
5) Click OK to export the asset data.
What to do next
The location to which the file is exported depends on the global directory set for the system and on the filedir parameter for the endpoint of the external system. If no global directory is set, look in the root of the application server folder. If no filedir parameter is set for the external system, look in the 'flatfiles' sub-directory. For example,
C:\bea\user_projects\domains\maximo_database\flatfiles\TAMITEXTSYS_ITASSETDISPOSALInterface_1236264695765361846.dat
Another way to locate the file is to search the operating system file structure for TAMITEXTSYS_ITASSETDISPOSALInterface*.dat.
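On a Unix-like application server, that search might look like the following (Windows users can use the built-in file search instead; this command is only an illustration):
# Search the filesystem for the exported .dat file, discarding permission errors.
find / -name 'TAMITEXTSYS_ITASSETDISPOSALInterface*.dat' 2>/dev/null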