Is it possible to manage service dependencies through an API or Terraform? - PagerDuty

Can the service dependencies on PagerDuty be managed through an API or Terraform? Does PagerDuty have a standard Terraform service provider for this purpose?
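Recent versions of the official PagerDuty Terraform provider (`PagerDuty/pagerduty`) include a `pagerduty_service_dependency` resource for exactly this; the REST API also exposes service-dependency endpoints. A minimal sketch, assuming that resource is available in your provider version (the service IDs below are placeholders):

```hcl
terraform {
  required_providers {
    pagerduty = {
      source = "PagerDuty/pagerduty"
    }
  }
}

# Declares that one service depends on another.
# "PXXXXXX"/"PYYYYYY" are placeholder IDs; in practice you would
# reference pagerduty_service resources or data sources instead.
resource "pagerduty_service_dependency" "web_depends_on_db" {
  dependency {
    dependent_service {
      id   = "PXXXXXX" # the service that depends on the other
      type = "service"
    }
    supporting_service {
      id   = "PYYYYYY" # the service being depended on
      type = "service"
    }
  }
}
```

Check the provider's documentation for the exact schema in your version; the block names here follow the published resource docs but may differ across releases.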

Related

How does WSO2 deploy on Kubernetes without using Google Cloud?

I want to deploy WSO2 API Manager with Kubernetes.
Should I use Google Cloud?
Is there another way?
The Helm charts [1] for APIM can be deployed on GKE, AKS, EKS, etc. You can even deploy the all-in-one simple deployment pattern [2] in a local Kubernetes cluster such as minikube.
You might have to use a cloud provider for the more advanced patterns, since they require more resources to run.
All of these charts are provided as samples to give an idea of the deployment patterns. It is not recommended to deploy them as-is in real production scenarios, as the resource requirements and infrastructure vary according to the use case.
[1] - https://github.com/wso2/kubernetes-apim
[2] - https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single
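As a rough sketch of trying the simple pattern locally (the chart path, release name, and namespace below are assumptions — the repository layout changes between releases, so check its README first):

```shell
# Assumes kubectl points at a running local cluster (e.g. minikube)
# and Helm 3 is installed.
git clone https://github.com/wso2/kubernetes-apim.git
cd kubernetes-apim/simple/am-single

# Install the all-in-one chart; "apim" and "wso2" are arbitrary names.
helm install apim . --namespace wso2 --create-namespace

# Watch the pods start up.
kubectl get pods -n wso2 -w
```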

How to configure an AKS cluster to use secrets from an external Vault installed on a different AKS cluster

I have two Kubernetes clusters running on Azure AKS:
One cluster named APP-Cluster, which hosts the application pods.
One cluster named Vault-Cluster, on which HashiCorp Vault is installed.
I installed HashiCorp Vault with Consul in HA mode following the official tutorial below. The installation was successful.
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
But I am quite lost on how to connect to the Vault cluster and retrieve its secrets from the other cluster. I would like to use Vault's sidecar injection method so that my app cluster can communicate with the Vault cluster. I tried to follow the steps in the official document below, but that document uses minikube rather than a public cloud Kubernetes service. How do I set the "EXTERNAL_VAULT_ADDR" variable for AKS, as described in the document for minikube? Is it the API server DNS address that I can get from the Azure portal?
https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes
The way you interact with Vault is via its HTTP(S) API. That means you need to expose the vault service running in your Vault-Cluster using one of the usual methods.
As an example, you could:
use a Service of type LoadBalancer (this works because you are running Kubernetes on a cloud provider that supports this feature);
install an ingress controller, expose it (again with a load balancer), and define an Ingress resource for your vault service; or
use a NodePort Service.
The EXTERNAL_VAULT_ADDR value depends on which strategy you want to use.
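For instance, with the LoadBalancer approach on AKS, a sketch of exposing Vault could look like this (the namespace and selector labels are assumptions based on the defaults of the official Vault Helm chart — verify them against your install):

```yaml
# Exposes the Vault pods in Vault-Cluster behind an Azure load balancer.
apiVersion: v1
kind: Service
metadata:
  name: vault-external
  namespace: vault
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault
  ports:
    - name: http
      port: 8200
      targetPort: 8200
```

Once Azure assigns an external IP (`kubectl get svc vault-external -n vault`), EXTERNAL_VAULT_ADDR would be `http://<external-ip>:8200` (or `https://` if you have TLS configured) — not the AKS API server address shown in the Azure portal.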

CodeDeploy Blue/Green Deployment listener port at API Gateway?

I am working on a complicated architecture on AWS, which includes an API Gateway for users connecting to a website located inside a VPC. In this VPC, I plan to use an ALB to load-balance the traffic from outside across different ECS Fargate tasks.
For this purpose, I plan to use a CodeDeploy Blue/Green Deployment for the services running on ECS Fargate. From the AWS documentation, I understand that the listener ports for the production and test environments can be set up at the load balancer.
I would like to know whether the listener port can instead be set up at the API Gateway. As I hope to use CloudFormation for this approach, an answer related to it would be better. Thanks!
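For reference, in an ECS blue/green deployment the production and test listeners are attached to the CodeDeploy deployment group (pointing at ALB listeners), not to API Gateway; API Gateway would typically reach the ALB's production listener via a VPC link. A hedged CloudFormation sketch, where all listener/target-group names and references are placeholders and the property names should be verified against the `AWS::CodeDeploy::DeploymentGroup` reference:

```yaml
# Sketch only: assumes the ALB listeners and target groups already exist
# elsewhere in the template.
BlueGreenDeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    ApplicationName: !Ref CodeDeployApplication
    ServiceRoleArn: !GetAtt CodeDeployServiceRole.Arn
    DeploymentStyle:
      DeploymentType: BLUE_GREEN
      DeploymentOption: WITH_TRAFFIC_CONTROL
    LoadBalancerInfo:
      TargetGroupPairInfoList:
        - TargetGroups:
            - Name: blue-target-group
            - Name: green-target-group
          ProdTrafficRoute:
            ListenerArns:
              - !Ref ProdListener   # e.g. port 443 on the ALB
          TestTrafficRoute:
            ListenerArns:
              - !Ref TestListener   # e.g. port 8443 on the ALB
```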

How to automate Azure DevOps Kubernetes Service Connection to Cluster?

To deploy services via Azure DevOps to my Kubernetes cluster, I need to create a Kubernetes Service Connection manually. I want to automate this by creating the service connection dynamically in Azure DevOps, so I can delete and recreate both the cluster and the deployment. Is this possible? How can I do this?
You can create the service endpoint using the Azure DevOps REST API; see the Service Endpoints API documentation for details.
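A hedged sketch of such a call — the organization, project, token, cluster URL, and even the exact JSON field names below are placeholders/assumptions; the authoritative shape is in the Service Endpoints REST API reference:

```shell
# Create a Kubernetes service connection via the Azure DevOps REST API.
# ORG, PROJECT, and PAT (a personal access token) are placeholders.
curl -u ":${PAT}" \
  -H "Content-Type: application/json" \
  -X POST \
  "https://dev.azure.com/${ORG}/_apis/serviceendpoint/endpoints?api-version=7.0" \
  -d '{
    "name": "my-aks-connection",
    "type": "kubernetes",
    "url": "https://my-cluster.example.azmk8s.io",
    "authorization": {
      "scheme": "Token",
      "parameters": { "apiToken": "<service-account-token>" }
    },
    "data": { "authorizationType": "ServiceAccount" },
    "serviceEndpointProjectReferences": [
      {
        "projectReference": { "name": "'"${PROJECT}"'" },
        "name": "my-aks-connection"
      }
    ]
  }'
```

The same operation is also available through the `az devops service-endpoint` commands in the Azure CLI extension, which may be simpler to script.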

Can this work - Google Cloud Endpoints as API Management layer and Istio as Service Mesh on Kubernetes (GKE)

We would like to use Kubernetes for Microservices and Google Cloud Endpoints as API Management layer.
If I understand correctly, to get Google Cloud Endpoints functionality we need a sidecar proxy alongside the real microservice (image: gcr.io/endpoints-release/endpoints-runtime:1).
So if we were to use Istio as the service mesh technology, how would the Envoy proxy work alongside the Google Cloud Endpoints proxy? Would it actually proxy even the Cloud Endpoints container?
Or is this a bad strategy?