WSO2 APIM 3 docker persist API data - docker-compose

I installed WSO2 APIM 3 using docker-compose and created some APIs. When I do a docker-compose down followed by a docker-compose up, the API data is gone. How do I persist the data?
Thanks.

Basically, you need to mount storage for the databases and the API Manager nodes. For the APIM nodes, mount the APIM/repository/deployment/server folder to a volume. You can find details at https://docs.wso2.com/display/AM260/Common+Runtime+and+Configuration+Artifacts
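For example, here is a minimal docker-compose sketch of those mounts. The service names, image tag, and container paths are assumptions; check them against your own compose file and the WSO2 image documentation.

```yaml
version: "3"
services:
  api-manager:
    image: wso2/wso2am:3.0.0          # assumed image/tag
    ports:
      - "9443:9443"
      - "8243:8243"
    volumes:
      # Persist deployed APIs and other runtime artifacts across restarts.
      # The container path is an assumption for the 3.0.0 image.
      - apim-deployment:/home/wso2carbon/wso2am-3.0.0/repository/deployment/server
  mysql:
    image: mysql:5.7                  # assumed database image
    volumes:
      # Persist the API Manager databases so they survive docker-compose down
      - mysql-data:/var/lib/mysql
volumes:
  apim-deployment:
  mysql-data:
```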

Related

Can we configure AWS Secrets Manager to integrate with an on-premises k8s cluster

I set up an EKS cluster and integrated AWS Secrets Manager into it following the steps in https://github.com/aws/secrets-store-csi-driver-provider-aws, and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps, as they appear to be specific to AWS EKS-based clusters.
I googled around a bit and found that you can call Secrets Manager programmatically using one of the approaches in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but that approach won't work for us.
Is there a k8s-native way to connect directly to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.
You can set up external OIDC providers with AWS and also set up K8s with OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which lets you authenticate using host-based certificates, but you will still have to call the Secrets Manager APIs yourself.
If you are willing to have secrets stored in etcd (Kubernetes stores Secret objects base64-encoded on the cluster), you can look at the open-source External Secrets solution.
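For example, here is a rough sketch of an External Secrets Operator setup that syncs an entry from AWS Secrets Manager into a Kubernetes Secret using static credentials. All names, keys, and the remote secret path are placeholders; check the operator's documentation for the API version your install uses.

```yaml
# SecretStore pointing at AWS Secrets Manager, authenticating with an
# access key pair stored in a pre-created Kubernetes Secret.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: awssm-credentials
            key: access-key-id
          secretAccessKeySecretRef:
            name: awssm-credentials
            key: secret-access-key
---
# ExternalSecret that pulls one value and materializes it as a k8s Secret.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: db-credentials          # resulting Kubernetes Secret
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/password     # placeholder Secrets Manager entry
```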

Kong: replicate data across multiple servers

I am using the Kong API gateway. How do I use the same data across multiple Kong server instances?
I am deploying Kong in Docker containers. I tried committing all the PostgreSQL data and generating a new Docker PostgreSQL image from it. I thought reusing this image would solve the problem, but that did not help.
Use a database to store the configuration; all your Kong nodes will get their data from this single source.
You can use decK to automate your deployment.
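For example, a rough sketch (hostnames, credentials, and file names are placeholders; the decK flags shown are from decK 1.x):

```sh
# Point every Kong container at the same PostgreSQL instance instead of
# baking data into an image. Run "kong migrations bootstrap" once first.
docker run -d --name kong \
  -e KONG_DATABASE=postgres \
  -e KONG_PG_HOST=shared-postgres.internal \
  -e KONG_PG_USER=kong \
  -e KONG_PG_PASSWORD=kong \
  kong:2.8

# Export the current configuration from one node...
deck dump --kong-addr http://localhost:8001 -o kong.yaml

# ...and re-apply it declaratively whenever it changes. Since all nodes
# share the database, syncing against any node updates the whole cluster.
deck sync --kong-addr http://localhost:8001 -s kong.yaml
```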

Backup a HashiCorp Vault server and use the backup to build a new server

We are using HashiCorp Vault with Consul as storage, and we want to implement a robust backup and recovery strategy for Vault.
We are particularly looking to back up all the Vault data and use that backup as the storage when building a new Vault server.
I have done a fair amount of research but have not been able to find a convincing solution.
Please provide any suggestions.
This is what we followed in our production environment for the high availability of the Vault server.
As you are using Consul as the backend, make sure Consul/the backend is highly available, since all the data/secrets are stored in it.
Just to check the behavior, try running two Vault server instances pointing to the same Consul backend. You will observe that both instances, when the UI is opened in a browser, show the same data, because the backend is the same.
When Vault is backed by persistent, highly available storage, Vault itself can be considered just a front-end/UI service that displays the data/secrets/policies.
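For example, a minimal Vault config sketch that both instances could share, differing only in their listener/cluster addresses (the Consul address and TLS settings are placeholders):

```hcl
# vault.hcl — both Vault instances point at the same Consul cluster,
# so they serve the same data and can form an HA pair.
storage "consul" {
  address = "127.0.0.1:8500"   # local Consul agent (placeholder)
  path    = "vault/"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1              # enable TLS in production
}
```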
Vault High Availability with Consul is what Here_2_learn was talking about.
Also, if you are using Consul as the storage backend for Vault, you can use consul snapshot for backing up your data.
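For example (the snapshot file name is arbitrary):

```sh
# Take a point-in-time snapshot of Consul's data, which includes
# Vault's encrypted storage.
consul snapshot save vault-backup.snap

# Optionally verify the snapshot.
consul snapshot inspect vault-backup.snap

# Restore into a new Consul cluster, then point a new Vault server at it.
# Vault will start sealed and must be unsealed with the original unseal keys.
consul snapshot restore vault-backup.snap
```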

How do I run HA MongoDB in my Kubernetes cluster without Portworx?

I want to run MongoDB as a service, following the database-per-service microservice architecture model.
Right now I am using Helm charts to deploy MongoDB, defining a persistent volume and persistent volume claims.
But I want to deploy MongoDB with high availability, storing the data on EBS or similar.
When I checked online for a solution, everything suggested Portworx. But is there a way to do it without using Portworx?
Any help appreciated.
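For reference, one common approach without Portworx is a MongoDB replica set whose members each get a dynamically provisioned EBS-backed volume. A sketch using the Bitnami chart (the chart name, parameters, and the gp2 StorageClass are assumptions; check the chart's documentation and your cluster's storage classes):

```sh
helm repo add bitnami https://charts.bitnami.com/bitnami

# Three-member replica set; each pod claims its own PersistentVolume
# through the EBS-backed StorageClass, so data survives pod rescheduling.
helm install mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set replicaCount=3 \
  --set persistence.storageClass=gp2 \
  --set persistence.size=10Gi
```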

How to create dedicated service catalog per OpenShift project?

Is it possible to create a dedicated service catalog for each project/namespace in OpenShift? I am hosting a multi-tenant OpenShift cluster. When tenants log in to the OpenShift cluster, they should only be able to see the services that are relevant to them in the service catalog.
For example, Tenant-A should only see MySQL and Apache services, and Tenant-B should only see Elasticsearch and Ruby services. Is it possible to do this kind of isolation?