Tridion OutboundEmail - Contact synchronization from several Presentation servers?

I'm facing a problem with OutboundEmail Synchronization for Contacts.
We have the following scenario: two load-balanced CMS servers and three load-balanced CDE web servers located in different data centers.
Each CDE web server will have its own SQL Server for the broker DB and the OutboundEmail Subscription + Tracking DB.
If I install a local OutboundEmail Subscription + Tracking DB on each CDE server, how can I run Contact Synchronization against all three CDE servers, knowing that for a specific Tridion publication you can only specify one synchronization target containing one URL to profilesync.aspx?
The same applies to Tracking Synchronization.
I must be missing something ...
Any suggestions, please?

This scenario is currently not supported; we do support multiple presentation servers, but as you mentioned you can only specify one synchronization target under a publication.
Without going into detail, there were compelling reasons not to support this scenario at that point in time, but it is on our backlog.
I can think of a couple of options to solve it in this version:
use one database, though I'm guessing the reason to split it up over 3 data centers is fail-over/redundancy and/or geographical reasons
set up synchronization/tracking on one server and replicate the data to the other 2 databases; note that the replication needs to be bi-directional


Good architectural design for server and gateway user management in kdb

I have two gateways which connect to the server where user details are logged.
I can think of two ways to log users accessing the server through a gateway.
First way:
The logging is done on the server side i.e
Server(port 5001) Code:
au:([user:`$()]; tim:`timestamp$()); /- Table to maintain logged users
.z.pw:{[user;pswd] `au upsert (user;.z.p); 1b} /- Log the user with a timestamp and accept the login; .z.p (a timestamp) matches the tim column, unlike .z.n (a timespan)
Gateway 1:
h:hopen `::5001:a:uts1 /- connect as user `a, password uts1
Gateway 2:
h:hopen `::5001:b:uts2 /- connect as user `b, password uts2
Second way:
The logging is done from the gateway i.e
Server(port 5001) Code:
au:([user:`$()]; tim:`timestamp$()); /- Table to maintain logged users
Gateway 1:
q)h:hopen `::5001:a:uts1
q)h"`au upsert (`a;.z.p)" /- runs on the server: logs gateway `a with the server's current timestamp
Gateway 2:
q)h:hopen `::5001:b:uts2
q)h"`au upsert (`b;.z.p)" /- runs on the server: logs gateway `b
Hence, is it better to write the user-logging code on the server side, or on the client side (the gateway in this case)? Or is there a better/standard way to do the same?
EDIT - What if we add a middleware (user manager) between multiple gateways and multiple servers? In that case, would it be better to write the user-logging code in the middleware (user manager) or on the client side (the gateway in this case)?
If users are connecting through a gateway to a number of servers, I would implement the logging of users and authentication at the GW level. This further abstracts sensitive data away from potentially unauthorized users, and simplifies keeping a master record of account activity, as all users need to go through the GW. Recording account activity on the server side would instead require aggregation over multiple servers to get the full picture of system activity.

SaaS: Single-instance vs Multi-instance vs Single-tenant vs Multi-tenant?

I've been reading about instances and tenants in the SaaS architecture. My questions are as follows (please correct anything that you notice I've gotten wrong with any of the following terms):
1) Instance: Is an instance of a piece of software just a copy of that software with its own database? Is there anything more to it than that?
2) Tenant: Is a tenant a user / group of users that share a common set of access privileges to an individual instance?
3) Single-instance: If a Saas provider offers single-instance service, does this mean that they create only a single instance of their software? Or does it mean that there could be multiple instances, but that each instance can serve multiple tenants? If so, is single-instance the same as multi-tenant?
4) Multi-instance: Does this mean that each instance can serve only one tenant, or can there be multiple instances that each serve multiple tenants? I.e., can a multi-instance service be either single-tenant or multi-tenant?
5) Single-tenant: Does this just mean that an individual instance can serve only one tenant, or does it also imply that there are multiple instances? I.e., can a single-tenant service be both single-instance and multi-instance?
6) Multi-tenant: Does this just mean that an individual instance can serve multiple tenants, or does it imply that there is only a single instance? I.e., can a multi-tenant service be both single-instance and multi-instance?
7) To sum up: Can you have single-instance+single-tenant, single-instance+multi-tenant, multi-instance+single-tenant, and multi-instance+multi-tenant?
I'm going to write from my direct experience:
1) simple answer is 'yes'.
2) nearly yes: there will probably be refined access rights, say an administrator or two, and general users.
3) they're providing you with just one instance of that module, which will be single tenant.
4) they're providing you with multiple instances of that module, which will be single tenant.
5) I would use single-tenant to mean that the server hosting the instances is used by only one tenant. This might be done for perceived security benefits, or because the server runs on a time zone that is non-standard for the SaaS provider, like staying on UTC all year round.
6) I would use multi-tenant to mean that the server hosting the instances is used by more than one tenant. This tends to be more cost-effective and probably just as secure as single-tenant.
7) yes, no, yes, yes.

Dedicate a node to a stream - Security rules

Can anyone let me know how to show a stream only on a specific node?
I have a 2-node cluster, and I would like to dedicate RIM01 to Stream1 and RIM02 to Stream2, meaning any request for those streams, or for the apps in those streams, should go to their nodes.
So, if I go to RIM01, Stream2 should be hidden, etc.
Central node
RIM02 -- Repository + Engine
RIM03 -- Repository + Engine + Scheduler
I tried a lot of security rules, like
Filter : ServerNodeConfiguration_,Stream_
(node.#NodeUse="dev") and (node.#NodeType=stream.#StreamType and !resource.stream.Empty())
or
Filter : ServerNodeConfiguration_,Stream_
((resource.resourcetype = "Nodes" and resource.name="RIM01")) and ((resource.name="test"))
but none of them work :/
Thanks
So, at present, load balancing in Qlik Sense applies to apps, not streams. Load balancing routes apps to servers, whereas security rules govern stream visibility. And, unfortunately, there is no clean mechanism to use node metadata in security rules. All in all, there isn't a solution for hiding a stream on a given server.
I have the same issue. You can designate apps as only opening on a single node via a load balancing rule (a sketch follows below), so depending on how your users' stream rights are configured, some users may see an empty stream on the node where the app cannot be accessed.
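As an illustration only, a custom load balancing rule in the QMC could look something like the following. This is a sketch: it reuses the node and stream names from the question, and it assumes load balancing rules accept the same resource property syntax as security rules.
Filter : App_*
((node.name="RIM01" and resource.stream.name="Stream1") or (node.name="RIM02" and resource.stream.name="Stream2"))
Again, this only controls which node the apps open on; it does not hide the streams themselves.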
There's some interesting stuff happening with the multi-cloud capability, where the concept of streams becomes collections, which gives lots more flexibility around this type of thing. Alas, that QEFE capability has only just arrived with June 2018, and access is limited to certain use cases / customers.

Make Orion fetch data from Cosmos and publish

I have set up a subscription between Orion ContextBroker and Cosmos BigData using Cygnus, and data is properly persisted in Cosmos when an update is made to Orion.
But I want to analyze the data in Cosmos and return the results to Orion, and finally access the result data in Orion from "outside".
How would one do this? Of course, I would like the solution I build to be as "automated" as possible, but mostly I just want to solve this problem.
Any advice is much appreciated!
As a general response (as the question is also very general ;), what you need is a process that accesses the information stored in Cosmos (either using HDFS APIs -such as WebHDFS or HttpFs-, Hive queries, general MapReduce jobs on top of Hadoop, etc.), then implements the client side of the NGSI API that Orion exposes, in order to inject context elements into Orion based on the information retrieved from Cosmos. The key operation for doing so in the Orion API is updateContext.
The automation degree would depend on how you implement that process. It can be as automated as you want.
EDIT: considering the comments on this answer, I will try to add more detail.
What I mean is to develop a piece of software (let's call it APOS, A Piece Of Software) implementing the following behaviour:
APOS will grab data from Cosmos using any of the interfaces provided by Cosmos, i.e. WebHDFS/HttpFs, Hive, MapReduce jobs, etc.
APOS will process the data to produce some result
APOS will inject that result into Orion, using the Orion REST API described in the Orion user manual. The updateContext operation is particularly useful for that task. From a client-server point of view, Orion is a server exposing a REST API, and APOS is the client interacting with that server.
It is completely up to you how to implement this APOS and how to orchestrate the flow from 1 to 3 (e.g. it can run in batch mode every midnight, be triggered by user interaction on a web portal, etc.). A minimal sketch of steps 1 and 3 is shown below.
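To make steps 1 and 3 concrete, here is a minimal sketch in Python using the requests package. The endpoints, the HDFS path, the user name and the entity names are hypothetical placeholders, not values from any real deployment; the payload follows the NGSI v1 updateContext format.
import requests

# Hypothetical endpoints and names - replace with your deployment's values
HTTPFS = "http://cosmos.example.org:14000"       # HttpFs endpoint of the Cosmos cluster
ORION = "http://orion.example.org:1026"          # Orion ContextBroker endpoint
HDFS_PATH = "/user/myuser/analysis/result.txt"   # file written by your Hive/MapReduce job

# Step 1: read the analysis result from Cosmos through WebHDFS/HttpFs
resp = requests.get(HTTPFS + "/webhdfs/v1" + HDFS_PATH,
                    params={"op": "OPEN", "user.name": "myuser"})
result = resp.text.strip()

# Step 3: inject the result into Orion via the NGSI v1 updateContext operation
payload = {
    "contextElements": [{
        "type": "AnalysisResult",    # hypothetical entity type
        "isPattern": "false",
        "id": "Result1",             # hypothetical entity id
        "attributes": [{"name": "value", "type": "string", "value": result}]
    }],
    "updateAction": "APPEND"
}
requests.post(ORION + "/v1/updateContext", json=payload)
Scheduling a script like this (e.g. from cron every midnight) would give the batch-mode automation mentioned above.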
At the present moment, FI-WARE doesn't provide any generic enabler to convert Cosmos data to NGSI, given that each particular realization of steps 1 to 3 above is different and depends on the use case. However, note that there is a software component named Cygnus which implements the other way around: from NGSI to Cosmos.

Are there any lightweight OPC servers which can have multiple instances running on the same machine?

I want to run performance testing of an OPC client against 100 servers, but I do not want to run 100 machines.
Is there any way to set this up for testing purposes?
If they are OPC-DA (or A&E, HDA) servers, and you are really interested in just the performance of the client part, and do not mind actually using a single OPC server that just "looks" like 100 of them, there is a chance that you can use the following trick:
Using REGEDIT, under HKEY_CLASSES_ROOT, find an entry with the ProgID of your server. Its purpose is to point to the server's CLSID.
Make 100 copies of this entry, each time giving it a different ProgID, but still pointing to the same CLSID.
Configure your client to connect to 100 different OPC servers, using the ProgIDs you created.
If your client uses just ProgIDs for recognizing the "identity" of the server, it will think it is connecting to 100 different servers, and it will treat them as such, i.e. create 100 separate connections. If, however, the client first converts the ProgID to a CLSID, it will see the same CLSID, and the trick won't work. Obviously, even if this works, the performance test will be skewed as far as the server side is concerned, because there will be just one server process, not 100 of them.
Note that the CLSID cannot be changed in this way, because it is also hard-coded in the COM class factory inside the server.
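By way of illustration, here is a minimal sketch of that registry duplication in Python (Windows only, needs administrator rights; the ProgID and CLSID below are hypothetical placeholders for your server's actual values):
import winreg

CLSID = "{12345678-1234-1234-1234-123456789ABC}"  # placeholder: your server's real CLSID

for i in range(1, 101):
    progid = f"MyOpc.Server.Alias{i}"  # hypothetical alias ProgIDs for the client to target
    # Create HKCR\<progid>\CLSID and point its default value at the one real server
    with winreg.CreateKey(winreg.HKEY_CLASSES_ROOT, progid) as key:
        winreg.SetValue(key, "CLSID", winreg.REG_SZ, CLSID)
The client is then configured against MyOpc.Server.Alias1 through MyOpc.Server.Alias100, all of which resolve to the same CLSID.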
Another wild idea that comes to mind is to use a product such as an OPC tunneller (e.g. MatrikonOPC Tunneller). In such a product you can create any number of "servers", which in fact connect by tunnelling to a true OPC server somewhere else. Therefore you might be able to configure 100 tunnels (looking like separate servers), all connecting to one target OPC server.