Is it possible to divert a subset of a service's incidents to a different person? - pagerduty

I have a service for which a subset of the incidents should go to an engineer different from the one on-call. Is there a way to do this on PagerDuty? I know this is possible:
divert all incidents of a service to a different team by having a separate escalation policy and schedule
I want to do this for a subset of the incidents of the service. Is there a way to do it? Can Rulesets be used?
Ideally, I would like to route the incidents in that subset to a particular user directly, without having to create a schedule and an escalation policy for them.
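To make the matching concrete, here is roughly the kind of event we send today, sketched in Python against the Events API v2. The routing key and the custom_details field are placeholders. My understanding is that a ruleset condition could match on something like custom_details.subsystem and route those events to a second service, whose escalation policy could target the engineer directly (no schedule needed) — but I'd like confirmation that this is the intended pattern:

```python
import requests

EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

event = {
    "routing_key": "YOUR_INTEGRATION_KEY",  # placeholder integration key
    "event_action": "trigger",
    "payload": {
        "summary": "Disk usage above 90% on reporting pipeline",
        "source": "reporting-worker-3",     # placeholder source host
        "severity": "warning",
        # Hypothetical field a ruleset condition could match on to divert
        # this subset of incidents to a different service/person.
        "custom_details": {"subsystem": "reporting"},
    },
}

requests.post(EVENTS_URL, json=event, timeout=10).raise_for_status()
```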

Related

Is using deployments to isolate clients in Kubernetes a good idea?

We’re in the process of migrating our aging monolith to a more robust solution and landed on Kubernetes as the most appropriate platform to achieve what we’re looking for. At the same time, we’re looking to split out and isolate our client data for security and improved privacy.
What we’re considering is ultimately having one database per customer, and embedding those connection details into a deployment for each of them. We’d then build a routing service of some kind that would link a client’s request to their respective deployment/service.
Because our individual clients vary wildly in size (some generate thousands of requests per minute, others hundreds per day), we like having the ability to scale them independently through the ReplicaSets on their deployments.
However, I have some concerns about the upper limit on how many deployments can exist and be successfully managed within a cluster, as we'd be looking at potentially hundreds of different clients, a number that will keep growing. I also have concerns about cost, and how having dedicated resources (essentially an entire VM) for our smaller clients might impact our budget.
So my questions are:
is this a good idea at all? Why or why not, and if not, are there alternative architectures we could look at to achieve the same thing?
is this solution more expensive than it needs to be?
I’d appreciate any insights you could offer, thank you!
I can think of a couple of options for this situation:
Deploying separate clusters for each customer. This also lets you size each customer's cluster properly and configure autoscaling accordingly. The drawback is that each cluster has a management fee of $0.10 per hour, but you get a full guarantee that everything is isolated, and you can use the cluster autoscaler to make sure that only the VMs each customer actually needs are running. For smaller customers, you may want to combine this with small (and cheap) machine types.
Another option would be to, as mentioned in the comments, use namespaces; see the sketch after this list. However, you would have to configure the cluster carefully, since by default there are ways of reaching services in other namespaces.
Implementing customer isolation in your own software running on a shared cluster. This would mean forcing your software to access only the database for a given customer, but I would not recommend going this route.
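A minimal sketch of the namespace option, using the official `kubernetes` Python client: one namespace per customer, plus a NetworkPolicy that only admits traffic from pods in the same namespace. Names like `customer-acme` are placeholders, and the policy only takes effect if your CNI plugin enforces NetworkPolicies:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

def isolate_customer(customer_id: str) -> None:
    ns = f"customer-{customer_id}"
    client.CoreV1Api().create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=ns))
    )
    # Allow ingress only from pods within this namespace; traffic from other
    # namespaces is dropped (requires a CNI that enforces NetworkPolicy).
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="same-namespace-only", namespace=ns),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector()
                )]
            )],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(ns, policy)

isolate_customer("acme")
```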

Eclipse milo - How to handle data (nodes) visibility in OPCUA so that different users see different data?

I am in the process of analyzing how to set up an OPC UA server in the cloud, and one of the challenges is data visibility. By data visibility, I mean that a user/customer can only see the data/devices that belong to them, and the same applies to every other user.
So the node creation process will depend on who the connected user is.
How can this be implemented in the best way according to OPC UA, and specifically with Eclipse Milo? Should there be a different namespace for each customer? Any suggestion will be appreciated.
Different namespaces per customer would be an okay approach, but whether or not you do that, you ultimately need to examine the Session during the execution of Browse, Read, Write, and other services to determine which user is connected and what rights they have.
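Milo itself is Java, so the following is only a language-neutral sketch (written as Python) of that session check; `session`, `node`, and the `customer_id` attribute are stand-ins for whatever your identity validator attaches to the session, not Milo API:

```python
def browse_children(session, node):
    # Determine who is connected from the session's validated identity.
    customer = session.identity.customer_id  # hypothetical attribute

    # Only surface nodes that belong to the connected customer; the same
    # filter applies in Read/Write handlers before touching a node.
    return [child for child in node.children if child.owner == customer]
```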

client, account and transactions, how to design it in a microservices approach?

I am designing a solution and would like to double check if this is according to the microservices architecture.
We have clients, accounts and transactions like a normal bank account.
Clients have basic data like name and address.
Accounts might be savings or current accounts
Transactions are money transfers between 2 accounts
So I am designing the following way:
1 microservice to manage client data (it will manage just basic client data and their addresses)
1 microservice to manage account data (it will manage basic account data; the client ID is part of the account data)
1 microservice to manage money data (it will hold each account's balance and all transfers)
Please let me know if this is according to the microservice architecture and if you have another understanding.
As I understand it, the main goal of a microservices architecture is to support faster, continuous releases of different parts of a big system without the parts waiting on each other. There are two approaches to designing a new system: microservices-first and microservices-later. In the first approach, the system is designed from the ground up as microservices: it is broken into services from the start, and the services talk to each other, typically over HTTP and REST. In the other approach, the system is initially built as a single application containing all the modules. Both approaches have their pros and cons, which is a separate discussion.
In your case, you are taking the first approach and designing the new system with a separate service for each area of functionality. I am not an expert in the banking domain, but from what I understand, the client (customer) system can definitely be a separate service responsible for maintaining customer master data. The account service can be responsible for maintaining account data and serving out account-related information. However, the account balance is an integral property of an account and should always be associated with it. Finally, the transfer can be a separate service that records transfers between accounts. Whenever there is a transfer, it can query the accounts for their current balance and, if the transfer is a valid one, record it.
However, as this involves financial transactions, you would have to make sure each transaction follows the ACID rules. Maintaining ACID properties across distributed systems is tricky; one way to mitigate this is to use ACID transactions only in the most critical areas and eventual consistency elsewhere. For example, banks do not immediately show all transactions to the customer as "completed"; instead they show a message saying "pending to be processed" (eventual consistency) so that the customer is aware of the exact status.
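A minimal sketch of that transfer flow, assuming a hypothetical REST endpoint on the account service and an abstract `db` handle owned by the transfer service:

```python
import requests

ACCOUNTS_URL = "http://account-service"  # hypothetical internal endpoint

def record_transfer(db, source_account: str, target_account: str, amount: int):
    # Ask the account service for the current balance before accepting.
    balance = requests.get(
        f"{ACCOUNTS_URL}/accounts/{source_account}", timeout=5
    ).json()["balance"]
    if balance < amount:
        raise ValueError("insufficient funds")

    # Record the transfer as pending; a later settlement step updates both
    # balances and marks it completed (eventual consistency rather than a
    # distributed ACID transaction).
    return db.insert("transfers", {
        "source": source_account,
        "target": target_account,
        "amount": amount,
        "status": "PENDING",
    })
```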

Adding Data from UI to different microservices

Imagine you have a user registration form, where you fill in the form:
First Name, Last Name, Age, Address, Preferred way of communication: Sms, Email (radio buttons). You have 2 microservices:
UserManagement service
Communication service
When a user is registered, we should create 2 aggregates in 2 services: a User in the UserManagement context and UserCommunicationSettings in Communication. There are three ways I can think of to achieve this:
Perform 2 different requests from UI. What if one of them fails?
Put all that data in User, then raise an integration event carrying all of it and catch it in the Communication context. That means fat events, though, and integration events shouldn't contain domain data, just the IDs of aggregates.
Put the message in the queue, so both contexts would have the adapters to take the needed info.
What is the best option to split this data and save the information?
Perform 2 different requests from UI. What if one of them fails?
I don't think this would do. You are going to end up in inconsistent state.
I'm for approach #3:
User is persisted (created) in your user store.
UserRegistered event is sent around, containing the ID of the user.
All interested parties handle UserRegistered event.
I'd opt for slim events, because your services may need different user data, and it's better to let them fetch this data on their own rather than putting everything into the event.
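A sketch of that slim-event flow; `store`, `bus`, `user_api`, and `settings_store` are stand-ins for your persistence, messaging, and lookup infrastructure:

```python
# Producer side: the UserManagement service persists the user and publishes
# a slim integration event carrying only the aggregate ID.
def register_user(store, bus, form_data: dict) -> None:
    user_id = store.create_user(form_data)
    bus.publish("UserRegistered", {"userId": user_id})

# Consumer side: the Communication service fetches only the data it needs
# and creates its own aggregate in its own datastore.
def on_user_registered(event: dict, user_api, settings_store) -> None:
    user = user_api.get_user(event["userId"])
    settings_store.save_settings(
        user_id=event["userId"],
        preferred_channel=user.get("preferredChannel", "Email"),
    )
```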
Since you mentioned storing communication settings, I'm assuming communication data is not part of the UserManagement service's bounded context.
I see a couple of problems with approach #3 as proposed in the other answer, though I think approach #3 from the original question is quite similar to my answer.
What if more communication modes are added? Naturally, that should only cause the Communication service to change, not the UserManagement service (single responsibility principle). The communication-settings microservice should store all communication-settings data in its own datastore.
What if a user updates only their communication preferences? Why should the user management service carry the burden of handling that? A communication-settings change should just trigger changes in its corresponding microservice, the Communication service in our case.
I also find it better to use natural keys to identify and correlate entities across microservices, rather than internal DB-generated IDs. Consider that tomorrow you might decide on a completely different strategy for creating user IDs in the UserManagement service (e.g., non-numeric IDs or a different ID-generation algorithm). I would want to keep the other microservices unaffected by any such decision.
Proposed approach:
Include API Gateway in architecture. Frontend always talks to API Gateway.
The API Gateway sends commands such as RegisterUser to a message queue, to be consumed by the interested microservices.
If you want to keep the architecture simple, you can publish a single message with all the data that can be consumed by any interested microservice; see the sketch below. If you strictly want each microservice to see only its relevant data, create a message queue per unique data structure expected by the consuming services.
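A sketch of such a RegisterUser command as the gateway might publish it; the queue name and the choice of email address as the natural correlation key are my assumptions:

```python
def publish_register_user(bus, form: dict) -> None:
    # Published by the API Gateway; UserManagement and Communication each
    # consume the parts relevant to their own bounded context.
    command = {
        "type": "RegisterUser",
        # Natural key used to correlate the user across services, instead of
        # a DB-generated internal ID.
        "correlationKey": form["email"],
        "profile": {k: form[k] for k in ("firstName", "lastName", "age", "address")},
        "communication": {"preferredChannel": form["preferredChannel"]},
    }
    bus.publish("registration-commands", command)  # 'bus' is a stand-in client
```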

How do I limit a Kafka client to only access one customer's data?

I'm evaluating Apache Kafka for publishing event streams (and commands) between my services, which run on a number of machines.
However, most of those machines are owned by customers, on their premises, and connected to their networks.
I don't want a machine owned by one customer to have access to another customer's data.
I see that Kafka has an access control module, which looks like it lets you restrict a client's access based on topic.
So, I could create a topic per customer and restrict each customer to just their own topic. This seems like a design I could regret in the future, though, because I've seen recommendations to keep the number of Kafka topics in the low thousands at most.
Another design is to create a partition per customer. However, I don't see a way to restrict access if I do that.
Is there a way out of this quandary?
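For concreteness, here is roughly what I'd expect the topic-per-customer ACLs to look like, sketched with the `confluent-kafka` Python AdminClient. The principal and topic naming scheme are made up, and this doesn't make the topic-count concern go away:

```python
from confluent_kafka.admin import (
    AdminClient, AclBinding, AclOperation, AclPermissionType,
    ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "broker:9092"})

def allow_customer_read(customer_id: str) -> None:
    # Allow this customer's principal to read only its own topic. Kafka ACLs
    # apply per topic (or per topic prefix); there is no per-partition ACL,
    # which is why the partition-per-customer design can't be locked down.
    binding = AclBinding(
        ResourceType.TOPIC,
        f"customer-{customer_id}-events",      # hypothetical topic name
        ResourcePatternType.LITERAL,
        f"User:customer-{customer_id}",        # hypothetical principal
        "*",                                   # any host
        AclOperation.READ,
        AclPermissionType.ALLOW,
    )
    for future in admin.create_acls([binding]).values():
        future.result()  # raises if the broker rejected the ACL

allow_customer_read("acme")
```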