How can a Spring Cloud Data Flow app use a private Docker repository on Kubernetes?

I have a Spring Cloud Data Flow server deployed on a Kubernetes cluster (not a local SCDF server run from a jar). Registering apps requires Docker images, but my private Docker repository needs credentials for authentication.
Does anyone know in which configuration item/file I should put my private Docker repository credentials?
Thanks a lot!

There's no special handling required from the SCDF perspective.
As far as the Kubernetes cluster and its backing VMs are concerned, if the Docker daemon is logged into the private registry, then at app-resolution time SCDF will run the image on that same Docker daemon, so everything should work automatically.
In other words, it is a setup concern between the Kubernetes cluster and the private registry - nothing specific to SCDF.
For example, PKS and Harbor integration comes out of the box with this setup.
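For instance, a minimal sketch of that setup, assuming a placeholder registry and nodes where the kubelet picks up the Docker daemon's cached credentials:
# run on each node, or bake into the node image
docker login registry.example.com
# credentials are cached in ~/.docker/config.json and used for subsequent pulls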
EDIT
If the above setup doesn't work, there's the option to create a Secret in Kubernetes that holds the private registry's credentials - see the docs here.
Once you have that configured, you can pass it to SCDF via the spring.cloud.deployer.kubernetes.imagePullSecret property. Going by the example in the Kubernetes docs, the value for this property would be regcred.
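As a minimal sketch, assuming a placeholder registry and credentials, the Secret and property would look like this:
# create a registry secret named "regcred" (server, user, and password are placeholders)
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com

# then point the SCDF Kubernetes deployer at it, e.g. as a server property
spring.cloud.deployer.kubernetes.imagePullSecret=regcred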

Related

How to authenticate to a GKE cluster without using the gcloud CLI

I've got a container inside a GKE cluster and I want it to be able to talk to the Kubernetes API of another GKE cluster to list some resources there.
This works well if I run the following command in a separate container to proxy the connection for me:
gcloud container clusters get-credentials MY_CLUSTER --region MY_REGION --project MY_PROJECT; kubectl --context MY_CONTEXT proxy --port=8001 --v=10
But this requires me to run a separate container that, due to the size of the gcloud CLI, is more than 1 GB in size.
Ideally I would like to talk directly from my primary container to the other GKE cluster, but I can't figure out how to determine the IP address and set up the authentication required for the connection.
I've seen a few questions:
How to Authenticate GKE Cluster on Kubernetes API Server using its Java client library
Is there a golang sdk equivalent of "gcloud container clusters get-credentials"
But it's still not really clear to me whether, or how, this would work with the Java libraries, if it's possible at all.
Ideally I would write something like this:
var info = gkeClient.getClusterInformation(...);
var auth = gkeClient.getAuthentication(info);
...
// using the io.fabric8.kubernetes.client.ConfigBuilder / DefaultKubernetesClient
var config = new ConfigBuilder()
    .withMasterUrl(info.url())
    .withNamespace(null)
    // certificate or other authentication mechanism
    .build();
return new DefaultKubernetesClient(config);
Does that make sense, is something like that possible?
There are multiple ways to connect to your cluster without using the gcloud CLI. Since you are trying to access the cluster from another cluster within the cloud, you can use the Workload Identity authentication mechanism. Workload Identity is the recommended way for workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services in a secure and manageable way. For more information, refer to this official document; it details a step-by-step procedure for configuring Workload Identity and provides reference links for the code libraries.
This is drafted based on information provided in google official documentation.
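For illustration, a hedged sketch of the usual Workload Identity wiring with gcloud and kubectl (project, namespace, and account names are placeholders):
# allow the Kubernetes service account to impersonate a Google service account
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# annotate the Kubernetes service account so pods using it get the Google identity
kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

With that in place, a client library inside the pod can obtain tokens from the metadata server and call the other cluster's API without shipping the gcloud CLI.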

How to handle authentication to external registries using the Artifactory pull-through capability

I have a Kubernetes cluster, with Artifactory as my internal registry and proxy to pull images from external private registries. How do I authenticate to these external private registries when I want to pull an image from my Kubernetes cluster? Normally in Kubernetes this is done using image pull secrets; however, it is not clear whether Artifactory is able to handle the secret to authenticate to the external registry. What alternatives do I have?
Taking the comment by @Ivonet as the base - you configure the auth against the remote repository source in Artifactory itself. You can see the docs here.
Once Artifactory is set up, you set your imagePullSecret to auth against Artifactory itself. See some examples in this knowledge-base article.
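As a rough sketch, assuming a placeholder Artifactory hostname, the pull secret points at Artifactory rather than the upstream registry:
# credentials here are for Artifactory, which in turn authenticates to the remote registry
kubectl create secret docker-registry artifactory-cred \
  --docker-server=myorg.jfrog.io \
  --docker-username=myuser \
  --docker-password=my-api-key

# attach it to the default service account so pods pick it up automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "artifactory-cred"}]}'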

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for it in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS will take care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also you can use Cloudwatch to store container logs, so that you don't have to connect to instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
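As a minimal sketch of the deployment step with the AWS CLI (cluster, service, and task-definition names are placeholders):
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task:1 \
  --desired-count 2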

Migrating Cockroach DB from local machine to GCP Kubernetes Engine

I followed the instructions here to create a local 3-node secure cluster, and got the Go example app running with the following DB connection string to connect to the secure cluster:
sql.Open("postgres", "postgresql://root@localhost:26257/dbname?sslmode=verify-full&sslrootcert=<location of ca.crt>&sslcert=<location of client.root.crt>&sslkey=<location of client.root.key>")
CockroachDB worked well locally, so I decided to move the DB (as in the DB solution, not the actual data) to GCP Kubernetes Engine using the instructions here.
Everything worked fine: pods were created and I could use the built-in SQL client from the Cloud Console.
Now I want to use the previous example app to connect to this new cloud DB. I created a load balancer using the kubectl expose command and got a public IP to use in the code.
How do I get the new ca.crt, client.root.crt, client.root.key files to use in my connection string for the DB running on GCP?
We have 5+ developers, and the idea is to have them write code on their local machines and connect to the cloud DB using the connection strings and the certificates.
Or is there a better way to let 5+ developers use a single DEV DB cluster running on GCP?
The recommended way to run against a Kubernetes CockroachDB cluster is to have your apps run in the same cluster. This makes certificate generation fairly simple. See the built-in SQL client example and its config file.
The config above uses an init container to send a CSR for client certificates and makes them available to the container (in this case just the cockroach sql client, but it could be anything else).
If you wish to run a client outside the Kubernetes cluster, the simplest way is to copy the generated certs directly from the client pod (see the sketch after the notes below). It's recommended to use a non-root user:
create the user through the SQL command
modify the client-secure.yaml config for your new user and start the new client pod
approve the CSR for the client certificate
wait for the pod to finish initializing
copy the ca.crt, client.<username>.crt and client.<username>.key from the pod onto your local machine
Note: the public DNS or IP address of your kubernetes cluster is most likely not included in the node certificates. You either need to modify the list of hostnames/addresses before bringing up the nodes, or change your connection URL to sslmode=verify-ca (see client connection parameters for details).
Alternatively, you could use password authentication in which case you would only need the CA certificate.
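A hedged sketch of the copy step, assuming the client pod is named cockroachdb-client-secure, the certs live under /cockroach-certs (as in the example config), and the new user is maxroach:
kubectl cp cockroachdb-client-secure:/cockroach-certs/ca.crt ./certs/ca.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.maxroach.crt ./certs/client.maxroach.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.maxroach.key ./certs/client.maxroach.key

# developers can then point the app at the load balancer's public IP, e.g.
# postgresql://maxroach@<load-balancer-ip>:26257/dbname?sslmode=verify-ca&sslrootcert=./certs/ca.crt&sslcert=./certs/client.maxroach.crt&sslkey=./certs/client.maxroach.key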

Kong Enterprise Installation on Kubernetes

I've followed the instructions given in this link to set up Kong in a Kubernetes container on my local machine. I'm able to access APIs behind Kong through the Kubernetes (minikube) IP. Now I have the enterprise edition (trial version) of Kong. Without Kubernetes, I've downloaded the Kong Enterprise image and am able to run Kong on my local machine. But my question is how to set up an enterprise Kong installation in a Kubernetes container. I assume I have to tweak the "image" section in the .yaml to pull the enterprise Kong image, but I'm not sure how to do that. Can you advise how to go about an enterprise Kong installation on a Kubernetes container?
There are (at least) two answers to your question:
set up a private docker registry -- even one inside your own kubernetes cluster -- and push the image to it, then point the image: at the internal registry (see the sketch below)
that assumes your enterprise purchase didn't come with access to an authenticated registry hosted by Mashape, which would absolutely be the preferred mechanism for that problem
or I think you can pre-load the docker image onto the nodes via PodSpec:initContainers: in any number of ways: ftp, http, s3api, nfs, etc. Because the initContainer runs before the Pod's container, I would expect kubelet to delay the image pull of the container until the initContainer has finished. If I had a working cluster in front of me, I'd try it out, so take this one with a grain of salt.
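For the first option, a rough sketch assuming a placeholder internal registry address and a Deployment named kong:
# tag and push the enterprise image into the private registry
docker tag kong-enterprise-edition:latest registry.example.com/kong-enterprise-edition:latest
docker push registry.example.com/kong-enterprise-edition:latest

# then point the Deployment's image at the internal registry
kubectl set image deployment/kong kong=registry.example.com/kong-enterprise-edition:latest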