Creating two Application Interface Framework (AIF) interfaces on receiving one IDoc

I want to create multiple AIF interfaces on a single received IDoc, chosen based on basic type, message type, variant, or any other criteria.

If you're asking whether an IDoc can trigger two AIF interfaces simultaneously, the answer is no.
What you can do is assign an IDoc to multiple AIF interfaces and then choose which one is executed based on the IDoc content.
In transaction /AIF/CUST choose
SAP Application Interface Framework -> Interface Development ->
Additional Interface Properties -> Assign IDoc Types
Here you can assign two or more interfaces to a message/IDoc type.
Then choose
SAP Application Interface Framework -> System Configuration ->
Interface Determination -> Define Interface Determination for IDoc
Interfaces
Here you can determine which interface to execute when an IDoc is received, based on a few fields (in this example I'm using the sender partner from the control record):
choose the IDoc field used as a condition
choose which value of the field triggers each interface

Mapping between AssetTypes and DomainTypes in Collibra

From the Collibra documentation (https://university.collibra.com/knowledge/collibra-body-of-knowledge/data-governance-operating-model/organizational-concepts/domain-types/):
A Domain Type formally defines which types of Assets can be created in the Domain. In other words it serves as a template. By assigning asset types to a domain type, you can specify which asset types can be created within which domain type.
Is there any such strict mapping in place? Can a user create a mapping between a domain type and an asset type? Is there any way, through REST APIs / Java connectors, that these mappings can be created or retrieved?
In Collibra, a given domain type has certain asset types under it. For example, if a domain is a Physical Data Dictionary, it can contain Column assets, Code assets, and so on. To create a separate mapping, you need System Admin rights. If you have them, the first step is to go to Settings -> Domain type -> select the domain type you want to map -> go to Characteristics in that domain -> then check the mark on whichever asset types you want to map.
The second step is to create a relationship between that domain and the assets if you want to retrieve any data or hierarchy. You can create relationships from Settings only, where you also need System Admin rights.
Let me know if you have any follow up on this.
(Share a screenshot or details if you still have a query.)
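Regarding the REST API part of the question: as a rough illustration only, retrieving the asset-type-to-domain-type assignments could look like the sketch below. The endpoint path, the bearer-token authentication, and the base URL are all assumptions about the Collibra REST API v2 "assignments" resource; verify them against your instance's API documentation before relying on this.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CollibraAssignments {

    // Hypothetical endpoint and auth scheme: Collibra's v2 REST API exposes
    // asset-type/domain-type assignments, but the exact path and parameters
    // must be checked against your instance's API docs.
    static HttpRequest buildListAssignmentsRequest(String baseUrl, String bearerToken) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/rest/2.0/assignments"))
                .header("Authorization", "Bearer " + bearerToken)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildListAssignmentsRequest("https://example.collibra.com", "TOKEN");
        // Send with java.net.http.HttpClient in a real application.
        System.out.println(req.uri());
    }
}
```

The response would then be JSON describing which asset types are assigned to which domain types, which you could filter client-side.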

Archimate - Application Layer - Interfaces + Database

I am quite new to ArchiMate 3.0 and I am trying to build my model in it. I put an example below. My goal is to model a stream of data: a Source System creates output files in a specific format -> next, a processing component pulls the data and looks up some values in a connected DB -> the final step is to deliver this data (pushed by the processing component) to the Target Systems.
Q1: What is the correct relationship between an Application Component and an Interface? In the picture it is Triggering (but maybe Flow fits better)?
Q2: Is the database joined via an Access relationship?
Q3: For my purposes the diagram will need to hold information about the DB structure (columns + types + notes); any tips on how to manage this in ArchiMate?
Diagram Example Here:
First, it would be nice to read the specification (http://pubs.opengroup.org/architecture/archimate3-doc/chap09.html#_Toc489946063).
Are you sure you should use only the application layer? If so, then the interface is not defined correctly; the interface you specified belongs rather to the technology layer.
So:
Q1: The interface is wrong (IMHO). Figure 67, the Application Layer Metamodel of the spec, shows how elements can be linked on this layer.
Q2: The database can be represented as a component, not a DataObject.
Q3: In my experience there is no good way. Use a standard reverse-engineering mechanism, and associate the resulting UML objects with the ArchiMate elements if you need to.

Mark a field as transient for one data source but not another in Spring Data JPA

I've got an app that sends data to SQL Server, and we'd like to expand it to also write to another data source (possibly Amazon S3, possibly a regular database). The issue is that this new data source only needs a subset of the fields in my entity class.
Is there a way that I can mark a field as being transient for one datasource but not another? Or should I be doing something on the Repository level? I'm using Spring Data JPA, and had been using a Spring-generated JpaRepository.
public interface JobRepository extends JpaRepository<MyPojo, Long>{}
It is possible to create two different repository interfaces for the two data sources. In this case you will need two different entities, one for each data source, and to convert between them in your services.
For Data Source A: AEntity, ARepository
For Data Source B: BEntity, BRepository
And in your services, you create a method:
public AEntity createAEntityFromBEntity(BEntity bEntity);
To be able to do this, you will need to mark one of your data sources as @Primary. Please check this link to see how to create two different data source connections, with configuration details.
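A minimal sketch of the entity-pair approach (the names AEntity/BEntity are the placeholders from the answer; JPA annotations and the per-datasource repository wiring are omitted so the sketch stays self-contained — here the conversion goes from the full entity to the subset, but the reverse direction works the same way):

```java
// Full entity for the primary data source (SQL Server). In a real Spring
// Data JPA setup this would be an @Entity registered with the primary
// EntityManagerFactory and used by ARepository.
class AEntity {
    Long id;
    String name;
    String internalNotes; // only persisted by data source A

    AEntity(Long id, String name, String internalNotes) {
        this.id = id;
        this.name = name;
        this.internalNotes = internalNotes;
    }
}

// Slimmed-down entity holding only the subset of fields the second data
// source needs; it would be used by BRepository.
class BEntity {
    Long id;
    String name;

    BEntity(Long id, String name) {
        this.id = id;
        this.name = name;
    }
}

public class EntityMappingService {

    // The conversion method the answer describes: copy only the shared
    // fields, so nothing marked "A-only" ever reaches data source B.
    public BEntity createBEntityFromAEntity(AEntity a) {
        return new BEntity(a.id, a.name);
    }

    public static void main(String[] args) {
        AEntity a = new AEntity(1L, "job-42", "do not replicate");
        BEntity b = new EntityMappingService().createBEntityFromAEntity(a);
        System.out.println(b.id + " " + b.name);
    }
}
```

A service method would typically call `aRepository.save(a)` and then `bRepository.save(createBEntityFromAEntity(a))`, keeping each repository bound to its own data source configuration.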

Simulink model interface to external C++ application

I have a fairly complex Simulink model with multiple referenced models that I am trying to interface with external C++ code. I am using 2014a and generating code using Simulink Coder. Below are two important requirements I would like to satisfy while doing this:
1. Have the ability to create multiple instances of the model in my external C++ code.
2. Have the ability to overwrite data that is input to the model and have it reflected to the external application.
Given these requirements, what is the best method at the top level to interface with external IO: data stores or IO ports?
From my understanding, by using data stores (or Simulink.Signal objects) and specifying the appropriate storage class I may be able to satisfy requirement 2 above, but in doing so I would have to specify signal names, and this would not allow me to satisfy requirement 1.
If I use IO port blocks at the top level, I may be able to satisfy requirement 1 but not 2.
Update: constraint 2 has been removed due to a design change, hence the IO port approach works for me.

What are the differences between GemfireRepository and CrudRepository in spring data gemfire

GemfireRepository is a GemFire-specific extension of CrudRepository, but the Spring Data GemFire reference guide says that if we use GemfireRepository then we need to have our domain classes correctly mapped to the configured Regions, as the bootstrap process will fail otherwise. Does that mean we need the @Region annotation on the domain classes? And if we use CrudRepository, is the @Region annotation not required because CrudRepository is not dependent on a Region?
I am using GemfireRepository, and I have a CacheLoader configured as a plug-in to a Region; the CacheLoader depends on the GemfireRepository to fetch data from an RDBMS. According to the reference documentation, if GemfireRepository is internally dependent on the Region, does that create a circular dependency?
The SDG GemfireRepository interface extends SDC's CrudRepository interface and adds a couple of methods (findAll(:Sort) and an overloaded save(:Wrapper):T method). See:
http://docs.spring.io/spring-data-gemfire/docs/current/api/org/springframework/data/gemfire/repository/GemfireRepository.html
The GemfireRepository interface is "backed" by the SimpleGemfireRepository class.
Whether your application-specific Repository interface extends GemfireRepository, CrudRepository, or even just org.springframework.data.repository.Repository does not really matter. Extending a Repository interface provided by the framework only determines which methods will be exposed in the backing implementation by the proxy created by the framework.
E.g., if you wanted to create a read-only Repo, you would directly extend org.springframework.data.repository.Repository and copy only the "read-only" methods from the CrudRepository interface into your application-specific Repository interface (e.g. findOne(:ID), findAll(), exists(:ID)), i.e. no data-store-mutating methods such as save(:S):S.
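A sketch of such a read-only repository (the marker interface below is a stand-in so the snippet compiles without Spring on the classpath; in a real project you would extend org.springframework.data.repository.Repository directly, and the method names follow the older Spring Data naming used in this answer):

```java
import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.Set;

// Stand-in for org.springframework.data.repository.Repository.
interface Repository<T, ID> { }

// Read-only repository: only the query methods copied from CrudRepository,
// no save/delete, so the framework proxy never exposes mutating operations.
interface ReadOnlyRepository<T, ID> extends Repository<T, ID> {
    T findOne(ID id);
    Iterable<T> findAll();
    boolean exists(ID id);
}

public class ReadOnlyRepoSketch {
    public static void main(String[] args) {
        // Verify by reflection that no mutating method leaked in.
        Set<String> names = new HashSet<>();
        for (Method m : ReadOnlyRepository.class.getDeclaredMethods()) {
            names.add(m.getName());
        }
        System.out.println(names);
    }
}
```

An application repository would then be declared as, e.g., `interface CustomerRepository extends ReadOnlyRepository<Customer, Long> { }`.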
But, by using the namespace element in your Spring config, you are instructing the framework to use SDG's Repository infrastructure to handle persistence operations for your application domain objects into GemFire, and specifically into Regions. Therefore, either the application domain object must be annotated with @Region, or now SDG also allows the application Repository interface to be annotated with @Region, for cases where your application domain object needs to be stored in multiple GemFire Regions. See section 8.1, Entity Mapping, in the SDG Reference Guide for further details:
http://docs.spring.io/spring-data-gemfire/docs/1.4.0.RELEASE/reference/html/mapping.html#mapping.entities
Regarding the "circular dependency"... yes, it creates a circular dependency:
Region A -> CacheLoader A -> ARepository -> Region A
And it will lead to an...
org.springframework.beans.factory.BeanCurrentlyInCreationException
You need to break the cycle.
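One common way to break such a cycle is to defer the repository lookup until the CacheLoader actually needs it, rather than injecting it at construction time; in Spring you could achieve this with @Lazy injection or an ApplicationContext lookup. The sketch below shows the idea with plain Java stand-ins for the beans involved (the type names are illustrative, not SDG APIs):

```java
import java.util.function.Supplier;

// Stand-in for the application Repository the CacheLoader depends on.
interface JobRepository {
    String fetchFromRdbms(String key);
}

class CacheLoaderA {
    // Instead of taking the repository (and, transitively, the Region it
    // depends on) in the constructor, take a Supplier that is resolved
    // lazily on the first load; by then the Region has finished
    // initializing, so no bean is requested while still in creation.
    private final Supplier<JobRepository> repositorySupplier;

    CacheLoaderA(Supplier<JobRepository> repositorySupplier) {
        this.repositorySupplier = repositorySupplier;
    }

    String load(String key) {
        return repositorySupplier.get().fetchFromRdbms(key);
    }
}

public class BreakTheCycle {
    public static void main(String[] args) {
        // The Supplier stands in for a lazy bean lookup (e.g. an @Lazy
        // injected proxy, or applicationContext.getBean(JobRepository.class)).
        CacheLoaderA loader = new CacheLoaderA(() -> key -> "row-for-" + key);
        System.out.println(loader.load("42"));
    }
}
```

In a Spring config this corresponds to injecting the repository into the CacheLoader bean via an @Lazy proxy, so Region A -> CacheLoader A no longer forces ARepository (and thus Region A) to be created eagerly.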