Creating a database-like metadata model in IGC using Open IGC Bundles - ibm-information-server

I am trying to create a database-like metamodel in IGC using Bundles, which will have Tables, Columns, Physical and Logical Models, and Primary and Foreign Keys. How can I create relationships between Keys and Columns using custom assets, like the existing ones in IGC?

IGC allows you to extend the Catalog and introduce new types of Assets, where you can manage their structure and display. Such new Assets are managed via the supported IGC REST API, and you can use the IGC REST Explorer to initially test publishing bundles and posting details of such Assets. The default URL for the IGC REST Explorer is: https://YOURSERVER:9443/ibm/iis/igc-rest-explorer/
If you expand the Bundles section, you can use the action POST /bundles/ to create a new set of Assets, e.g. Terminal Applications. Use the attached bundle archive.
Additionally, you can use the action POST /bundles/assets to create instances of such Assets. Copy the contents of the attached file.
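For scripted use outside the REST Explorer, the same two calls can be made with any HTTP client. Below is a minimal Python sketch, assuming the v1 endpoints shown in the Explorer; the server, credentials, and file names are placeholders:

import requests

# A sketch only: server, credentials, and file names are placeholders, and the
# paths assume the IGC REST v1 endpoints shown in the REST Explorer.
BASE = "https://YOURSERVER:9443/ibm/iis/igc-rest/v1"
AUTH = ("isadmin", "password")

# 1) Register the bundle archive (a zip with asset type definitions, labels, icons).
with open("TerminalApplications.zip", "rb") as f:
    r = requests.post(BASE + "/bundles", auth=AUTH, verify=False,
                      files={"file": ("TerminalApplications.zip", f, "application/zip")})
    r.raise_for_status()

# 2) Create instances of the new asset types from an asset XML document.
with open("assets.xml", "rb") as f:
    r = requests.post(BASE + "/bundles/assets", auth=AUTH, verify=False,
                      files={"file": ("assets.xml", f, "text/xml")})
    r.raise_for_status()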

Related

Copy and paste Typo3 Sites between 2 backends

I am managing around 12 TYPO3 backends with almost identical content. Is it possible to copy and paste a site between independent backends? Right now I'm creating 12 sites with the same content by hand. There has to be an easier way.
Well, there is not much I could try. Within TYPO3 I don't see any option to export/import sites from other backends.
First of all you should merge those 12 sites into one backend with multiple root sites and trees. Then you can easily handle different domains and/or languages via the site configurations for those roots.
Of course you can then make use of shared sys_folder pages containing the content elements that should be available to multiple sites. To make them available to a specific site, you can use references.
You can export a page tree and import it into another instance.
On the other hand you can duplicate an instance by copying the complete database and original files.
That necessarily includes the fileadmin/ and uploads/ folders.
typo3conf/ should be duplicated by deployment but might differ in the files typo3conf/LocalConfiguration.php and typo3conf/AdditionalConfiguration.php (e.g. each instance should use its own database).
You can use the core extension impexp to import/export content parts. There is even a context menu entry for it.
Be aware of some drawbacks:
if assets are exported and imported along with the content, you can hit your PHP memory_limit
take extra care about which uids should be used; forcing uids can lead to conflicts
Of course there are other options as well like:
create a custom extension which exports/imports the content on the fly, using either a custom endpoint or direct DB access if possible (see the sketch after this list)
use one installation, as discussed already
if you use e.g. ext:news, something like RSS feeds can serve as a poor man's import/export with ext:news_importicsxml
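To illustrate the direct-DB option, here is a minimal Python sketch. It assumes direct database access to both instances and identical schema versions; hosts, credentials, and page uids are placeholders. It copies the content elements of one page and lets the target assign fresh uids:

import pymysql

# A sketch only: assumes direct DB access to both TYPO3 instances and
# identical schema versions. Hosts, credentials, and page uids are placeholders.
SRC = dict(host="src-db", user="typo3", password="secret", database="typo3_src")
DST = dict(host="dst-db", user="typo3", password="secret", database="typo3_dst")
SOURCE_PID, TARGET_PID = 42, 7  # page uid in the source / target instance

src, dst = pymysql.connect(**SRC), pymysql.connect(**DST)
with src.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT * FROM tt_content WHERE pid = %s AND deleted = 0", (SOURCE_PID,))
    rows = cur.fetchall()

with dst.cursor() as cur:
    for row in rows:
        row.pop("uid")           # let the target assign a fresh uid (avoids conflicts)
        row["pid"] = TARGET_PID  # reparent the element to the target page
        columns = ", ".join("`%s`" % c for c in row)
        marks = ", ".join(["%s"] * len(row))
        cur.execute("INSERT INTO tt_content (%s) VALUES (%s)" % (columns, marks),
                    list(row.values()))
dst.commit()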

How to link assets to imported dashboard in Nessus Security Center?

I have a configured dashboard in my Nessus Security Center. For each component, I have set an asset, for example the asset of my Linux machines. Now I want to create the same dashboard with the asset of my Windows machines.
When exporting to XML, I have the choice of three methods:
Keep references
Replace references with placeholder
Delete references
If I take the second option, I find no way to replace the placeholders with a reference to my Windows asset list.
The only way I found was going into each cell of each component and setting the asset in the "Target Filters" option.
Is there no general setting for the whole dashboard to configure the assets?
P.S.: The component definitions in the exported XML are unusable if you can't decode them.
The first selection should be "Keep references", depending on the version you are running. When you import the new dashboard, it should keep everything. If you're building out a completely new system and importing things, you will have to build new assets or import those as well.
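If you do end up with a placeholder export, one workaround is to rewrite the XML before importing. The following Python sketch is purely illustrative: the placeholder token and the asset list ID are hypothetical and must be checked against your actual export file:

import xml.etree.ElementTree as ET

# A sketch only: the placeholder token and the asset list ID are hypothetical
# and must be checked against your actual export file.
PLACEHOLDER = "%asset_placeholder%"  # hypothetical token from the export
WINDOWS_ASSET_ID = "1234"            # ID of your Windows asset list

tree = ET.parse("dashboard_export.xml")
for elem in tree.iter():
    if elem.text and PLACEHOLDER in elem.text:
        elem.text = elem.text.replace(PLACEHOLDER, WINDOWS_ASSET_ID)
tree.write("dashboard_windows.xml", encoding="utf-8", xml_declaration=True)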

Add custom metadata fields for DAM assets in a particular folder

I created custom metadata fields for DAM Assets in CQ 5.6.1 using the steps detailed here. However, as described in the document, these changed fields are available for ALL assets in the DAM.
I need these metadata fields to be made available to only a specific folder, say /content/dam/foo instead of every asset.
How can I achieve this?
To my knowledge, there is no straightforward way to achieve this, but there is one trick to handle it.
AEM DAM has the notion of metadata schema editors. These editors are tied to the asset file type (jpeg, mov, etc.). For an asset MIME type, you can define the metadata and its associated form.
AEM 6.0 provides a Coral UI interface to achieve this:
http://localhost:4502/libs/dam/gui/content/metadataschemaeditor/schemalist.html/dam/content/schemaeditors/forms
I am not aware of any such interface in AEM 5.6.1. The nodes tied to this are at /libs/dam/content/asseteditors/image/jpeg/formitems. You could overlay them under /apps to add the required metadata fields.
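As a rough illustration of that overlay, the following Python sketch uses the default Sling POST servlet on a local author instance to create one extra field under /apps; the node name and widget properties are illustrative, not a definitive 5.6.1 form item definition:

import requests

# A sketch only: uses the default Sling POST servlet on a local author
# instance; the node name and widget properties are illustrative.
AEM = "http://localhost:4502"
AUTH = ("admin", "admin")
OVERLAY = "/apps/dam/content/asseteditors/image/jpeg/formitems/myCustomField"

resp = requests.post(AEM + OVERLAY, auth=AUTH, data={
    "jcr:primaryType": "nt:unstructured",
    "xtype": "textfield",    # classic UI widget type
    "fieldLabel": "My Custom Field",
    "name": "./jcr:content/metadata/myCustomField",
})
resp.raise_for_status()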
Coming to the question: the trick is to add the file type to your DAM content hierarchy, e.g. /content/dam/jpeg/foo, /content/dam/png/foo, etc. This way, you get different metadata for different folders of the DAM.
Adobe recommends this approach, as AEM 6.0 introduced the concept of processing profiles which you can attach to folders. This way, your different file types can get different treatments.

Upload in Alfresco Repository

I'm new to Alfresco. What I'm trying to do is upload a file through the REST API into a folder that I created using the Alfresco Web Administration Interface. I have a few problems:
1) I can see a set of folders, but how are they managed by Alfresco? As far as I know, those folders don't really exist physically; they are virtual. How does Alfresco manage the folder structure and files?
2) I have seen many examples of using the REST API to upload a file. Anyway, the destination is set by something like this:
workspace://SpacesStore/aae3b33fd-23d4-4091-ae64-44a8e332091341
I can't understand: what exactly is a SpacesStore? And does the last part of the code refer to a specific folder? How can I get those codes for the folders I see in the Alfresco Web Admin Interface?
1) I can see a set of folders, but how are they managed by Alfresco? As far as I know, those folders don't really exist physically; they are virtual. How does Alfresco manage the folder structure and files?
Alfresco is an implementation of the Java Content Repository (JCR); this means that all content is managed using a logical structure similar to a graph of nodes. Storing and manipulating content must be done using the repository API; that's why you don't see anything at the storage level.
Each content in Alfresco is a node connected to at least another node: the parent.
The storage of Alfresco is based on two components:
File system for storing binaries and search indexes
Database for storing metadata and associations
How Alfresco stores content is not important for you, because you typically want to access it using the Alfresco API. You can create your own logical structure in the repository using any kind of folder tree and content associations.
2) I have seen many examples of using the REST API to upload a file. Anyway, the destination is set by something like this: workspace://SpacesStore/aae3b33fd-23d4-4091-ae64-44a8e332091341. I can't understand: what exactly is a SpacesStore? And does the last part of the code refer to a specific folder?
A repository typically consists of a set of JCR workspaces; the SpacesStore is one of the workspaces in Alfresco, and it is the logical partition holding content in its latest version.
Alfresco also contains other workspaces:
userStore: contains person nodes
archiveStore: contains removed nodes
version2Store: contains the version history of nodes
How can I get those codes for the folders I see in the Alfresco Web Admin Interface?
That code is the node reference, which is the unique identifier for each node in the repository; as you can see, it consists of three parts:
workspace: the store protocol
SpacesStore: the store identifier
uuid: the UUID related to the node
The store reference consists of the store protocol combined with the store identifier, and it identifies the workspace where the node lives. The UUID identifies the content inside the workspace.
These node references are the IDs of nodes, and you can see all this information using the Node Browser in the Alfresco Explorer Administration Console, navigating your repository starting from Company Home.
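To tie this back to the upload question, here is a minimal Python sketch, assuming the classic Alfresco upload web script (/alfresco/service/api/upload) and a folder nodeRef taken from the Node Browser; the URL, credentials, and nodeRef are placeholders:

import requests

# A sketch only: assumes the classic Alfresco upload web script and a folder
# nodeRef taken from the Node Browser; URL, credentials, and nodeRef are placeholders.
UPLOAD_URL = "http://localhost:8080/alfresco/service/api/upload"
FOLDER_NODEREF = "workspace://SpacesStore/aae3b33fd-23d4-4091-ae64-44a8e332091341"

with open("report.pdf", "rb") as f:
    resp = requests.post(
        UPLOAD_URL,
        auth=("admin", "admin"),
        files={"filedata": ("report.pdf", f)},
        data={"destination": FOLDER_NODEREF,  # protocol + store identifier + UUID
              "filename": "report.pdf"},
    )
resp.raise_for_status()
print(resp.json())  # the response contains the nodeRef of the new content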
Hope this helps.

Set up own DBpedia server to create new mappings

I want to extend the mappings database of DBpedia. Therefore I want to run my own extraction framework instance on my computer. Although the latter is easily done, I cannot figure out how to feed the framework with newly created mappings.
What I found out so far:
In "config.properties" I can define my own dump-folder.
Some output directory can be defined as well. But what exactly is stored there?
In "Configuration.scala" the url of a mappings page is defined. Does that mean that the framework expects a web page as input which will then be searched for mappings?
My goal is to define some mappings in a plain text file and then tell the extraction framework somehow to use this file as the source of all mappings.
If everything works smoothly I am going to contribute my results to the dbpedia team.
Thanks for your help!
Some output directory can be defined as well. But what exactly is stored there?
The extraction framework outputs N-Triples and N-Quads of all the extracted data, mapping-based and others (see also the files at http://dbpedia.org/Downloads).
In "Configuration.scala" the url of a mappings page is defined. Does that mean that the framework expects a web page as input which will then be searched for mappings?
The mappings are loaded from http://mappings.dbpedia.org/, which is a wiki for creating and editing mappings. You can get an account and editor rights there and write your own mappings. They will then be loaded when you run the extraction framework (and the data extracted using the mappings will be available in the next release).
My goal is to define some mappings in a plain text file and then tell the extraction framework somehow to use this file as the source of all mappings. If everything works smoothly I am going to contribute my results to the dbpedia team.
You could go ahead and make the framework read the wiki code of mappings from local text files, but I think it would be better to edit them directly on the wiki. Your contribution will be instantly available.
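If you want to experiment locally anyway, the mappings wiki is a standard MediaWiki, so you can pull the raw wikitext of a mapping page into a local file. A minimal Python sketch, assuming the usual MediaWiki api.php endpoint and an example page title:

import requests

# A sketch only: assumes the standard MediaWiki api.php endpoint on the
# mappings wiki; the page title is an example.
WIKI_API = "http://mappings.dbpedia.org/api.php"
PAGE = "Mapping en:Infobox person"

resp = requests.get(WIKI_API, params={
    "action": "query",
    "titles": PAGE,
    "prop": "revisions",
    "rvprop": "content",
    "format": "json",
})
resp.raise_for_status()
pages = resp.json()["query"]["pages"]
wikitext = next(iter(pages.values()))["revisions"][0]["*"]

with open("infobox_person.mapping.txt", "w", encoding="utf-8") as f:
    f.write(wikitext)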