MDS import data queue - master-data-services

I am following this guidance: https://www.mssqltips.com/sqlservertutorial/3806/sql-server-master-data-services-importing-data/
The instructions say after we load data into the staging tables, we go into the MDS integration screen and select "START BATCHES".
Is this a manual step to begin the process, or is there a way to automatically queue up a batch to begin?
Thanks!

Alternative way to run the staging process
After you load the staging table with the required data, call/execute the staging UDP.
Basically, staging UDPs are stored procedures, one per entity in the MDS database (automatically created by MDS), that follow the naming convention:
stg.udp_<EntityName>_Leaf
You have to provide values for a few parameters. Here is a sample of how to call one:
USE [MDS_DATABASE_NAME]
GO
EXEC [stg].[udp_entityname_Leaf]
    @VersionName = N'VERSION_1',
    @LogFlag = 1,
    @BatchTag = N'batch1',
    @UserName = N'domain\user'
GO
For more details, look at:
Staging Stored Procedure (Master Data Services).
Do remember that the @BatchTag value has to match the value that you initially populated in the staging table.
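For illustration, a row loaded into the auto-generated leaf staging table might look like this (a hedged sketch — the entity name is hypothetical, real staging tables also carry one column per leaf attribute, and ImportStatus_ID = 0 marks the row as ready to process):

-- Hypothetical entity "entityname"; adapt table and columns to your model
INSERT INTO stg.entityname_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name)
VALUES (0, 0, N'batch1', N'CODE-001', N'Sample member');
-- N'batch1' here must match the @BatchTag passed to the UDP above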
Automating the Staging process
The simplest way to do that would be to schedule a SQL Agent job that executes something like the code above to call the staging UDP.
Please note that you would need to get creative about figuring out how the Job will know the correct Batch Tag.
That said, a lot of developers just create a single SSIS package that loads the data into the staging table (as step 1) and then executes the staging UDP (as the final step).
This SSIS package is then executed through a scheduled SQL Agent job.
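For illustration, a minimal sketch of that SQL Agent job, assuming a hypothetical job name, a nightly schedule, and a fixed batch tag (in practice you would derive the tag dynamically, as noted above):

USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'MDS - Run staging batch';
EXEC dbo.sp_add_jobstep
    @job_name = N'MDS - Run staging batch',
    @step_name = N'Call staging UDP',
    @subsystem = N'TSQL',
    @database_name = N'MDS_DATABASE_NAME',
    @command = N'EXEC [stg].[udp_entityname_Leaf]
                     @VersionName = N''VERSION_1'',
                     @LogFlag = 1,
                     @BatchTag = N''batch1'',
                     @UserName = N''domain\user'';';
EXEC dbo.sp_add_schedule
    @schedule_name = N'Nightly 2 AM',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 020000;
EXEC dbo.sp_attach_schedule
    @job_name = N'MDS - Run staging batch',
    @schedule_name = N'Nightly 2 AM';
EXEC dbo.sp_add_jobserver
    @job_name = N'MDS - Run staging batch';
GO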

Related

Making a determination whether the DB/server is down before I kick off a pipeline

I want to check whether the database/server is online before I kick off a pipeline. If the database is down, I want to cancel the pipeline processing. I also would like to log the results in a table.
Format (columns): DBName Status Date
If the DB/server is down, I want to send an email to the concerned team with a formatted table showing which DB/servers are down.
Approach:
Run a query on each of the servers. If there is a result, format the output as shown above. I am using an ADF pipeline to achieve this. My issue is how to combine the various outputs from the different servers.
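The per-server query itself could be as simple as the following (a minimal sketch, assuming SQL Server's sys.databases catalog view; the database name is a placeholder):

SELECT name AS DBName,
       state_desc AS Status,            -- ONLINE, OFFLINE, RECOVERING, ...
       CONVERT(date, GETDATE()) AS runDate
FROM sys.databases
WHERE name = N'A';                      -- placeholder database name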
For example:
Server1:
DBName: A Status: ONLINE runDate:xx/xx/xxxx
Server2:
DBName: B Status: ONLINE runDate:xx/xx/xxxx
I would like to combine them as follows:
Server  DBName  Status  runDate
1       A       ONLINE  xx/xx/xxxx
2       B       ONLINE  xx/xx/xxxx
I would use this to update the logging table, and also in the email if I were to send one out.
Is this possible using the Pipeline activities or do I have to use mapping dataflows?
I did similar work a few weeks ago. We built an API where we put all the server-related settings or URL endpoints that we need to ping.
You don't need to store the SQL Server username/password at all. When you ping the SQL Server, the request will time out if it isn't online; if it is online, it will return a password-related error. This way you can easily figure out whether it's up and running.
AFAIK, if you are using Azure DevOps, you can use your service account to log into the SQL Server. If you have set up AD to log into DevOps, this can be done in the build script.
Either way, you will be able to make sure whether SQL Server is up and running or not.
You can have all the actions as tasks in a YAML pipeline.
You need something like below:
steps:
  - task: Check database status
    register: result
  - task: Add results to a file
    shell: "echo text >> filename"
  - task: send e-mail
    when: some condition is met
There are several modules to achieve what you need; you need to find the right ones. You can control the flow of tasks by registering results and using the when clause.

boltdb scramble for MOCK / DEV Purposes

I have a SOAR that uses BoltDB to host its incidents.
I want to take a copy of that BoltDB over to the DEV environment and leverage its data without compromising PROD data.
I'm new to BoltDB; are there tools available for me to review/query a BoltDB database? Ultimately I'm looking to see if I can script a solution to scramble certain values within the BoltDB.

Differentiate between production, staging and test environments in Websphere Commerce

I am new to a WebSphere Commerce Enterprise v6.0 environment that has already been set up. I was wondering what would be the most definitive way for me to determine which servers are used as Production, which are used as Staging and which are used for Testing?
To my knowledge, WCS has so far not included a DB entry or a script that can return the nature of a WCS server. If there is one, IBM will need to clearly document it.
In out-of-the-box WCS installations, the best way to find out the nature of a WCS server is probably a query like this:
SELECT CASE
           WHEN COUNT(1) > 0 THEN 'STAGING'
           WHEN COUNT(1) = 0 THEN 'PRODUCTION'
       END AS WCS_TYPE
FROM STAGLOG
WHERE STGPROCESSED = 1;
(Note: A simpler check could just rely on the existence of the STAGLOG table, but I've seen many WCS servers that have this table without being a Staging server.)
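For completeness, that simpler (less reliable) existence check could look like this — a hedged DB2 sketch against the SYSCAT catalog:

SELECT COUNT(*) AS HAS_STAGLOG
FROM SYSCAT.TABLES
WHERE TABSCHEMA = CURRENT_SCHEMA
  AND TABNAME = 'STAGLOG';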
The other option is to add a proprietary/custom system property to each WCS server.
A non-staging environment will never have the staging triggers:
select * from syscat.triggers where trigschema = CURRENT_SCHEMA and trigname like 'STAG%';
It depends on how you set it up: http://www.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.admin.doc/tasks/tsscreatestagingserver.htm
The way to find out from the DB whether the environment is LIVE or STAGING is to query the STAGLOG table.
If we find entries in the STAGLOG table, then that is a STAGING environment. These entries are created by triggers on the staged database tables.
In LIVE we will not have entries in the STAGLOG table.
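That check could be as simple as the following (a minimal sketch; a non-zero count suggests a STAGING environment):

SELECT COUNT(*) AS STAGLOG_ENTRIES
FROM STAGLOG;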

How to insert data into my SQLDB service instance (Bluemix)

I have created an SQLDB service instance and bound it to my application. I have created some tables and need to load data into them. If I write an INSERT statement into RUN DDL, I receive a SQL -104 error. How can I run INSERT statements against my SQLDB service instance?
If you need to run your SQL from an application, there are several examples (sample code included) of how to accomplish this at the site listed below:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#run-a-query-in-java
Additionally, you can execute SQL in the SQL Database Console by navigating to Manage -> Work with Database Objects. More information can be found here:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#sqldb_005
s.executeUpdate("CREATE TABLE MYLIBRARY.MYTABLE (NAME VARCHAR(20), ID INTEGER)");
s.executeUpdate("INSERT INTO MYLIBRARY.MYTABLE (NAME, ID) VALUES ('BlueMix', 123)");
Most people do initial database population or migrations when they deploy their application. Often these database commands are programming-language specific. The poster didn't include the programming language. You can accomplish this in two ways:
Append a bash script that calls the database scripts you uploaded. This project shows how you can call that bash script from within your manifest file as part of doing a cf push.
Some languages offer a file type or service that will automatically be used to populate the database on initial deploy or when you migrate/sync the DB. For example, Python's Django offers "fixtures" files that automatically take a JSON file and populate your database tables.

Specify a connection string when building sqlproj

We have started using local SQL Servers (SQL 2012) for development. We have a tool that calls MSBuild to deploy a SQL project (.sqlproj) to our local, dev & test databases.
A requirement has come up where we want to use that tool to deploy to other local databases - it's a rare thing to do but needed.
We have setup a .publish.xml file for each normal environment (dev.publish.xml, test.publish.xml, local.publish.xml, where local points to (local)\SQL2012).
We normally run:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"
That works fine as it takes the connection string from the local.publish.xml file and deploys the sql project to our local database.
I'm not sure how to override the publish profile to make it point to a different database.
I've tried:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" /p:TargetConnectionString="Data Source=SomeOtherPC\SQL2012;Integrated Security=True;Pooling=False" "c:\workspaces\greg\...\databaseProject.sqlproj"
but it still points to (local)\sql2012 instead of SomeOtherPC\SQL2012
Create a different publish profile for this and populate it with the required details (SomeOtherPC, SQL2012, etc.):
SomeOtherPC.publish.xml
And pass that as the parameter to MSBuild:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="SomeOtherPC.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"