Is it possible to assign graphs/collections to TDE-generated triples? - marklogic-10

I know that TDE generates unmanaged triples, and unmanaged triples are not part of graphs by default. Is there any way to put these TDE-generated triples into specific graphs or collections?

Related

Synapse mapping data flow with a parameterized dynamic source needs to import the projection dynamically

I am trying to build a cloud data warehouse where I have staged the on-prem tables as Parquet files in a data lake.
I implemented a metadata-driven incremental load.
In the data flow above, I am trying to implement a merge query, passing the table name as a parameter so that the data flow dynamically locates the respective Parquet files for the full and incremental data, and then goes through some ETL steps to implement the merge query.
The merge query works fine, but I found that the projection is not correct. Because the source files are dynamic, I also want to "import projection" dynamically at run time, so that the same data flow can be used to implement the merge query for any table.
In the picture, you can see it shows 104 columns (a static projection imported at development time). For this table it should actually be 38 columns.
Can I assign the projection dynamically (i.e., at run time)? If so, how?
Or does anyone have a suggestion regarding this?
Thanks,
Muntasir Joarder
Enable schema drift in your source transformation when the metadata changes often. This removes or adds columns at run time.
The source projection displays what was imported at design time, but the effective schema changes based on the source schema at run time.
Refer to this document for more details with examples.

How to publish blended data sources?

I have a workbook with blended data sources and calculated fields referencing these data sources.
I get an error alert when publishing to Tableau Server.
How do I publish the data sources?
I use Tableau 2019.
Create an extract of the blended data source, and publish that extract to the server. This extract will include all the calculated fields that are included in the current workbook.
Generally, for this type of use case I suggest publishing your data extracts separately. They can contain your calculated fields. Publishing the calculated fields is actually more performant anyway, as many of them become materialised.
You can connect to both server data sources in Desktop and perform the data blending within the Tableau workbook using the published data sources. You can create additional calculated fields, including those based on data blends, within the workbook. You are then able to publish the workbook, and those calculated fields will remain at the workbook level, not in the data source.
Other users connecting to the data sources will only see the calculations published in the extract, not the calculations published with the workbook.
Hope that makes sense.

Store a .sav file in an RDBMS, including metadata

I want to know the best approach for storing the data from a .sav file in an RDBMS database without losing any of the metadata model or the actual response data.
Note first that you can save all the metadata in a .sav file from which you have deleted all the data, and then reapply the metadata to a new, similar .sav file using APPLY DICTIONARY.
Otherwise, you would need to create tables in the database for the various attributes; a possible layout is sketched below. That's easy for variable labels, formats, measurement levels, and missing value codes. For value labels it would take a bit more work.
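As a rough sketch of what such metadata tables might look like (the table and column names here are hypothetical, not from any standard):

```sql
-- Hypothetical layout for storing the SPSS dictionary alongside the data.
-- Scalar, one-per-variable attributes live in a single row.
CREATE TABLE variable_metadata (
    variable_name   VARCHAR(64)  NOT NULL PRIMARY KEY,
    variable_label  VARCHAR(256),
    print_format    VARCHAR(16),   -- e.g. F8.2, A20
    measure_level   VARCHAR(10),   -- nominal / ordinal / scale
    missing_values  VARCHAR(64)    -- discrete codes or a range, stored as text
);

-- Value labels are one-to-many per variable, hence the separate table
-- (the "bit more work" mentioned above).
CREATE TABLE value_labels (
    variable_name  VARCHAR(64)  NOT NULL REFERENCES variable_metadata (variable_name),
    value_code     VARCHAR(32)  NOT NULL,
    value_label    VARCHAR(256),
    PRIMARY KEY (variable_name, value_code)
);
```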
One possible approach would be to use OMS to capture the output from CODEBOOK (without any statistics) as data files and then export those files to the database.

Combining two data sources with the exact same schema in Tableau

I have two data sources containing lists of orders with the exact same field structure (one is the archive, the other is the active database). I'm accessing them through an OData connection in Tableau.
What I want is to combine these two data sources so that the Tableau chart will display all order numbers and information (as opposed to just the active ones, which is what I get with a single data source).
The two tables don't overlap (since whatever is archived is by definition not active), so I cannot join or blend with the primary key Order No. (or any key for that matter).
How can I combine these data sources? Does the fact that the connection is OData make any difference?
For relational databases, the solution is to define custom SQL with two back-to-back SELECT statements (one for each table) separated by the SQL UNION ALL keywords, as in the sketch below.
I don't know whether OData sources support UNION ALL.
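A minimal sketch of that custom SQL, assuming hypothetical table names orders_active and orders_archive with identical columns:

```sql
-- Combine active and archived orders into one result set.
-- UNION ALL keeps all rows (the two tables don't overlap, so there is
-- nothing to deduplicate) and is cheaper than UNION, which would sort.
SELECT order_no, order_date, customer_id, status
FROM   orders_active

UNION ALL

SELECT order_no, order_date, customer_id, status
FROM   orders_archive
```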
Create a TDE file for the archive data and add this file to the current data extract using the option "Append data from file" under Data --> Extract.

Migrating a schema from one database to another

As part of a requirement, I need to migrate a schema from an existing database to a new schema in a different database. Part of it is already done, and now I need to compare the two schemas and make changes in the new schema based on the gaps found.
I am not using a tool and was trying to get the details from the SYSCAT catalog views, but without much success.
Any pointers on the best way to solve this?
Regards,
Ramakant
A tool really is the best way to solve this – IBM Data Studio is free and can compare schemas between databases.
Assuming you are using DB2 for Linux/UNIX/Windows, you can do a rudimentary compare by looking at selected columns in SYSCAT.TABLES and SYSCAT.COLUMNS (for table definitions), and SYSCAT.INDEXES (for indexes). Exporting this data to files and using diff may be the easiest method. However, doing this for more complex structures (tables with range or database partitioning, foreign keys, etc) will become very complex very quickly as this information is spread across a lot of different system catalog tables.
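As a hedged sketch of such a rudimentary compare (the catalog views and columns below are real DB2 catalog objects, but which attributes are worth comparing depends on your schemas), run this against both databases with the output spooled to a file, then diff the two files:

```sql
-- Diff-friendly, ordered description of every base table and column
-- in one schema; run against both databases and diff the outputs.
SELECT t.tabname,
       c.colno,
       c.colname,
       c.typename,
       c.length,
       c.scale,
       c.nulls
FROM   syscat.tables  t
JOIN   syscat.columns c
       ON  c.tabschema = t.tabschema
       AND c.tabname   = t.tabname
WHERE  t.tabschema = 'MYSCHEMA'
AND    t.type = 'T'                -- base tables only
ORDER  BY t.tabname, c.colno;
```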
An alternative method would be to extract DDL using the db2look utility. However, you can't specify the order that db2look outputs objects (db2look extracts DDL based on the objects' CREATE_TIME), so you can't extract DDL for an entire schema into a file and expect to use diff to compare. You would need to extract DDL into a separate file for each table.
Use SchemaCrawler for IBM DB2, a free open-source tool designed to produce text output that is meant to be diffed. You can get very detailed information about your schema, including view and stored procedure definitions. All of the information that you need will be output to a single file, and can be compared very easily using a standard diff tool.
Sualeh Fatehi, SchemaCrawler
Unfortunately, as per company policy, we cannot use these tools at this point in time, so I am writing a program using JDBC to get the details and do the comparison.
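A hedged sketch of the kind of catalog query such a JDBC program might run for the index definitions mentioned above (SYSCAT.INDEXES and its columns are real; 'MYSCHEMA' is a placeholder):

```sql
-- Diff-friendly listing of the index definitions in one schema,
-- to be fetched over JDBC from each database and compared in code.
-- UNIQUERULE: D = duplicates allowed, U = unique, P = primary key.
SELECT tabname,
       indname,
       uniquerule,
       colnames               -- key columns as a +/- prefixed list
FROM   syscat.indexes
WHERE  tabschema = 'MYSCHEMA'
ORDER  BY tabname, indname;
```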