I have a SOAR platform that uses BoltDB to store its incidents.
I want to take a copy of that BoltDB over to the DEV environment and use its data without compromising PROD data.
I'm new to BoltDB; are there tools available for reviewing / querying a BoltDB database? Ultimately I'm looking to see whether I can script a solution to scramble certain values within the BoltDB file.
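For what it's worth, BoltDB's maintained fork, go.etcd.io/bbolt, ships a small bbolt command-line tool (bbolt buckets, bbolt keys, bbolt get) that is handy for inspecting a copy of the file, and because BoltDB is an embedded Go key/value store, scrambling values is easiest from a short Go program. Below is a minimal sketch, meant to run against the DEV copy only; the file name, the incidents bucket name and the scramble logic are placeholders for whatever the SOAR actually stores.

package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Always work on the copy destined for DEV, never the PROD file.
	db, err := bolt.Open("incidents-dev.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("incidents")) // placeholder bucket name
		if b == nil {
			return fmt.Errorf("bucket not found")
		}
		// Collect the keys first; a bucket must not be modified from inside ForEach.
		var keys [][]byte
		if err := b.ForEach(func(k, _ []byte) error {
			keys = append(keys, append([]byte(nil), k...))
			return nil
		}); err != nil {
			return err
		}
		for _, k := range keys {
			masked := scramble(b.Get(k)) // your masking logic here
			if err := b.Put(k, masked); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}

// scramble is a placeholder: in practice you would parse the stored value
// (often JSON) and overwrite only the sensitive fields.
func scramble(v []byte) []byte {
	out := make([]byte, len(v))
	for i := range v {
		out[i] = 'x'
	}
	return out
}

Because everything happens in one Update transaction, a failure rolls the whole scramble back and leaves the copy unchanged.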
I'm new to Prisma 2 but have got a database working. I have used prisma init and prisma migrate dev to create database tables for my model, and I can interact with the database using the Prisma client (Prisma 2.22.1).
Usually for a project, I'd have dev, test and prod environments and use env-cmd to set the relevant differences, e.g. connection details for getting to the database.
With prisma 2 however, it seems like there's a single .env file that is used for the database connection details, so I cannot see how to proceed for the different environments.
Note that I'm not meaning different types of database - in this example all are postgresql.
I can see possibilities for getting past this hurdle, for example having a script write a suitable .env file for the required environment as part of running the app, but calling that 'not ideal' really doesn't give the idea the review it deserves. Or getting more computers.
Any suggestions please for using different databases from the same project? Am I missing something basic or is it deliberately blocked?
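For what it's worth, a sketch of the kind of setup described above, assuming one env file per environment (two separate files are shown together below), each defining the DATABASE_URL that schema.prisma references via env("DATABASE_URL"), and loaded with env-cmd's -f flag instead of having a script rewrite .env on the fly:

# .env.development
DATABASE_URL="postgresql://user:pass@localhost:5432/myapp_dev"

# .env.production
DATABASE_URL="postgresql://user:pass@prod-host:5432/myapp"

# pick the environment when running Prisma or the app
npx env-cmd -f .env.development npx prisma migrate dev
npx env-cmd -f .env.production node dist/server.js

Prisma also auto-loads a plain .env from the project root, so it is least confusing to keep only the per-environment files and no bare .env.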
I have an Azure Data Factory with a pipeline that I'm using to pick up data from an on-premise database and copy to CosmosDB in the cloud. I'm using a data flow step at the end to delete documents that don't exist in the source from the sink.
I have 3 integration runtimes set up:
AutoResolveIntegrationRuntime (default set up by Azure)
Self hosted integration runtime (I set this up to connect to the on-premise database so it's used by the source dataset)
Data flow integration runtime (I set this up to be used by the data flow step with a TTL setting)
The issue I'm seeing is that when I trigger the pipeline, the AutoResolveIntegrationRuntime is the one being used, so I'm not getting the optimisation I need from the data flow integration runtime with the TTL.
Any thoughts on what might be going wrong here?
In my experience, only the AutoResolveIntegrationRuntime (the default set up by Azure) supports that optimization.
When you choose to run the data flow on a non-default integration runtime, the optimization isn't available, and once an integration runtime has been created, its settings can't be changed.
The Data Factory documentation doesn't say much about this, but when I ran the pipeline I found that the data flow integration runtime wasn't used.
That means that no matter which integration runtime you use to connect to the dataset, the data flow will always run on the default Azure integration runtime.
A self-hosted integration runtime (SHIR) doesn't support data flow execution.
Currently I have a VM running and have installed the binaries needed for fabric-ca, and I have a docker-compose file set up for the CA.
I have some questions regarding this:
The docker-compose file will create one container; if I want it for more organizations, do I need to copy/paste this and change the port number? (I don't want to use intermediate CAs.)
When registering/enrolling an identity, it overrides the existing material because it always puts the material for the new identity in /etc/hyperledger/fabric-ca-client. So when creating multiple identities (orderer, peers, users etc.), how should I organize them? What's the best practice?
In the image you can see that both the server and the client are specified. Is this a good approach, or should the client and the server be separate containers?
More than one CA in a Docker Compose file - you can look at the Build your first network tutorial in the Fabric docs, which has a two-org network and various configuration files, including Docker Compose (see the sketch after the enroll example below).
Combined client/server container - this might be convenient for testing, but definitely not in a production scenario, for security and operational-integrity reasons.
Overwriting Identities - the enroll command writes a tree of data to the location specified by the environment variable FABRIC_CA_CLIENT_HOME but you can use --home to redirect the tree to a different location:
fabric-ca-client enroll -u http://Jane:janepw@myca.example.com:7054 --home /home/test/Jane/
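Coming back to the first point, here is a minimal sketch of the two-CA layout in the Build your first network style of Compose file; the image, volume paths and the admin:adminpw bootstrap credentials are placeholders, and the point is simply that each organization gets its own fabric-ca-server container with its own port and state directory:

version: '2'

services:
  ca_org1:
    image: hyperledger/fabric-ca
    container_name: ca_org1
    ports:
      - "7054:7054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_PORT=7054
    volumes:
      - ./fabric-ca/org1:/etc/hyperledger/fabric-ca-server   # placeholder path
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'

  ca_org2:
    image: hyperledger/fabric-ca
    container_name: ca_org2
    ports:
      - "8054:8054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org2
      - FABRIC_CA_SERVER_PORT=8054
    volumes:
      - ./fabric-ca/org2:/etc/hyperledger/fabric-ca-server   # placeholder path
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'

So rather than only copy/pasting and changing the port, each CA also gets its own CA name and its own mounted directory, and the clients stay outside these containers, keeping their material apart by running fabric-ca-client with a different --home (or FABRIC_CA_CLIENT_HOME) per identity, as in the enroll example above.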
I am following this guidance: https://www.mssqltips.com/sqlservertutorial/3806/sql-server-master-data-services-importing-data/
The instructions say after we load data into the staging tables, we go into the MDS integration screen and select "START BATCHES".
Is this a manual step to begin the process, or is there a way to automatically queue up a batch to begin?
Thanks!
Alternative way to run the staging process
After you load the staging table with the required data, call/execute the staging UDP.
Basically, Staging UDPs are different Stored Procedures for every entity in the MDS database (automatically created by MDS) that follow the naming convention:
stg.udp_<EntityName>_Leaf
You have to provide values for some parameters. Here is sample code showing how to call these.
USE [MDS_DATABASE_NAME]
GO
EXEC [stg].[udp_entityname_Leaf]
@VersionName = N'VERSION_1',
@LogFlag = 1,
@BatchTag = N'batch1',
@UserName = N'domain\user'
GO
For more details look at:
Staging Stored Procedure (Master Data Services).
Do remember that the @BatchTag value has to match the value that you initially populated in the staging table.
Automating the Staging process
The simplest way for you to do that would be to schedule a job in SQL Agent which would execute something like the code above to call the staging UDP.
Please note that you would need to get creative about figuring out how the Job will know the correct Batch Tag.
That said, a lot of developers just create a single SSIS package which loads the data into the staging table (as step 1) and then executes the staging UDP (as the final step).
This SSIS package is then executed through a scheduled SQL Agent job.
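To sketch the SQL Agent route in T-SQL (the job, schedule and database names below are placeholders, and the hard-coded batch tag is exactly the part you would need to get creative about, as noted above):

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'MDS staging load';
EXEC dbo.sp_add_jobstep @job_name = N'MDS staging load',
@step_name = N'Run staging UDP',
@subsystem = N'TSQL',
@database_name = N'MDS_DATABASE_NAME',
@command = N'EXEC stg.udp_entityname_Leaf @VersionName = N''VERSION_1'', @LogFlag = 1, @BatchTag = N''batch1'', @UserName = N''domain\user'';';
-- run the job every day at 02:00
EXEC dbo.sp_add_schedule @schedule_name = N'MDS staging nightly',
@freq_type = 4, @freq_interval = 1, @active_start_time = 020000;
EXEC dbo.sp_attach_schedule @job_name = N'MDS staging load',
@schedule_name = N'MDS staging nightly';
EXEC dbo.sp_add_jobserver @job_name = N'MDS staging load';
GO

In the SSIS variant, the final Execute SQL Task runs the same EXEC, typically with the batch tag passed in as a package variable so that it matches whatever the load step wrote to the staging table.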
I am new to a WebSphere Commerce Enterprise v6.0 environment that has already been set up. I was wondering what would be the most definitive way for me to determine which servers are used as Production, which are used as Staging and which are used for Testing?
To my knowledge, WCS has so far not included a DB entry or a script that can return the nature of a WCS server; if there is one, IBM will need to document it clearly.
The best way to find out the nature of a WCS server in an out-of-the-box installation is probably a query like this:
SELECT CASE
WHEN count(1)>0 THEN 'STAGING'
WHEN count(1)=0 THEN 'PRODUCTION'
END AS WCS_TYPE
FROM STAGLOG WHERE STGPROCESSED = 1;
(Note: A simpler check could just rely on the existence of the STAGLOG table, but I've seen many WCS servers that have this table without being a Staging server.)
The other option is to add a proprietary/custom system property to the WCS server.
Non-staging will never have staging triggers.
select * from syscat.triggers where trigschema = CURRENT_SCHEMA and trigname like 'STAG%';
It depends on how you set it up [http://www.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.admin.doc/tasks/tsscreatestagingserver.htm]
One way to find from the DB whether the environment is LIVE or STAGING is to query the STAGLOG table.
If you find entries in the STAGLOG table, then it is a STAGING environment; these entries are created by triggers on the staging database tables.
In LIVE there will be no entries in the STAGLOG table.