Playing around with Kong in DB-less mode in a Docker container, trying to figure out whether we can use it as a gateway at the company I work for. I currently mount a local folder into the Docker container and pass the path to the kong.yml file to Kong when it starts. When I need to update the configuration, I do a POST to the /config endpoint.
All good so far.
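For reference, the kong.yml I'm pushing is just a declarative config along these lines (the service and route names here are placeholders, and the _format_version depends on the Kong release):

_format_version: "2.1"
services:
  - name: orders-service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders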
However, my concern is: how am I supposed to handle a Kong restart? The configuration will be generated by a separate microservice from a PostgreSQL database.
Kong will be running as an Ingress controller in our Kubernetes cluster. One thing I could do is expose an endpoint that generates a kong.yml config file based on my data in PostgreSQL, which Kong could hit on startup; I think I could make that part of its start command.
Anyway, this seems like a bit of a hack. I was wondering, are there any best practices around this? I'm sure other people have faced this problem before :-)
Thanks!
Answer
Configuring Kong on Kubernetes is done through Kubernetes native resources (e.g. Ingress) and Kong Custom Resources (e.g. KongConsumer, KongPlugin, KongIngress).
The Kong Ingress Controller will make all necessary changes based on changes to those resources through the Kubernetes API Server.
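As a rough illustration (exact apiVersions and annotation keys depend on the Kong Ingress Controller and Kubernetes versions you run, and the names here are placeholders), a plugin attached to a route might look something like this:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  annotations:
    konghq.com/plugins: rate-limit-example
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80

The controller watches these objects and reconciles Kong's in-memory configuration itself, so you never have to POST to /config or regenerate a kong.yml on restart.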
Additional Info
I highly recommend going through these guides. They are comprehensive and highly educational.
Make sure to keep an eye on the logs coming out of the Kong Ingress Controller pod because this will tell you whether it has successfully reconciled changes based on those resources or not.
Also feel free to take a look at this project where we manage Kong CRs through an on-cluster REST API Microservice.
Related
I am currently working on a project that uses the Prometheus Server (Not the Prometheus Operator).
We're looking to introduce a way of modifying the PrometheusRules without having to redeploy Prometheus.
I'm completely new to containers and Kubernetes and a little over my head, so I'm hoping someone can let me know whether I'm wasting my time trying to make this work.
What I have thought of doing so far is:
1. Store the PrometheusRules in a ConfigMap (a rough sketch follows below).
2. Apply that ConfigMap of rules to the Prometheus Server configuration.
3. Create a sidecar to the Prometheus Server that can modify this ConfigMap.
4. The sidecar will expose an API so users have CRUD functionality for the rules.
5. After successfully modifying a rule, the sidecar will trigger the reload endpoint on the Prometheus Server, which causes it to reload its configuration file without restarting the container.
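A rough sketch of the ConfigMap I have in mind (the names, namespace, and rule itself are placeholders; the sidecar would update it through the API server and then call Prometheus' /-/reload endpoint, which requires --web.enable-lifecycle):

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: monitoring
data:
  alerting-rules.yml: |
    groups:
      - name: example
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 5m
            labels:
              severity: critical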
Thanks
Your initial use case seems valid, though I would say there are better ways of achieving this.
For points 1 and 2, I would suggest you use the Prometheus Helm chart for ease of use and better config management and deployment. This keeps the Prometheus configuration as one single unit rather than maintaining the rule files separately.
For points 3 and 4: making direct, untracked changes to the live configuration does not seem safe or secure. Using the Helm chart mentioned above, I would suggest you make the changes before deploying to the cluster (use a VCS like Git to track them).
Best-case scenario: also set up CI/CD pipelines to deploy changes automatically.
Use the reload API, as mentioned, to pick up the newly released config.
Explore more about Helm
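For illustration only, with the community Prometheus chart the alerting rules can live in the same values file as the rest of the configuration (exact key names differ between chart versions):

serverFiles:
  alerting_rules.yml:
    groups:
      - name: example
        rules:
          - alert: HighErrorRate
            expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
            for: 10m
            labels:
              severity: warning

A helm upgrade then rolls the new rules out, and the chart's config-reload sidecar (or the /-/reload endpoint) picks them up without a restart.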
I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy microservices on it. What I need is one common IP, with each microservice reachable via a virtual path.
Like this:
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
The Service and Deployment are OK. However, I'm confused by the other terms. Should I use a Route or an Ingress? Should I use a VirtualService as in this link? I've also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate any information about these terms.
Thanks in advance, Best Regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so, using Route from OpenShift as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so how you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, then Ingress might be a good option.
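For the path layout you describe, a plain Ingress would look roughly like this (service names and ports are placeholders; on a cluster as old as OpenShift 4.2 the apiVersion would be networking.k8s.io/v1beta1 with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
    - http:
        paths:
          - path: /microservice1
            pathType: Prefix
            backend:
              service:
                name: microservice1
                port:
                  number: 8080
          - path: /microservice2
            pathType: Prefix
            backend:
              service:
                name: microservice2
                port:
                  number: 8080

On OpenShift you can get the same result with path-based Routes (one Route per path prefix), and Ingress objects are mapped to Routes for you anyway.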
VirtualService and Istio belong to a service mesh, which is not necessary for external access to an app. A service mesh brings complexity; unless its capabilities are really needed for your use case, there is no reason to use it.
I'm investigating using Traefik as a reverse proxy within a Service Fabric cluster running Dockerized microservices. The official way of running Traefik within Service Fabric is to use the Service Fabric provider. According to the GitHub README, running Traefik within a Docker container is not recommended because you can't reach the Service Fabric API through localhost:19080 and instead have to reach it through its public IP. This requires you to install SSL certs to talk to the API securely, which is a bit of a hassle.
I'm curious whether there's any advantage to using the Traefik Service Fabric provider (which requires complex setup) rather than just running Traefik in a container with the file provider. Provided your services have ServiceDnsName attributes in the ApplicationManifest, allowing Service Fabric DNS to find them, it seems like a much simpler approach. The Traefik configuration would look something like:
[frontends]
  [frontends.api]
  backend = "api"
  passHostHeader = true
    [frontends.api.routes.forwarder]
    rule = "PathPrefixStrip: /api/"
  [frontends.someservice]
  backend = "someservice"
  passHostHeader = true
    [frontends.someservice.routes.forwarder]
    rule = "PathPrefixStrip: /SomeService/"

[backends]
  [backends.api]
    [backends.api.servers.endpoint]
    url = "http://Api:11080"
  [backends.someservice]
    [backends.someservice.servers.endpoint]
    url = "http://SomeService:12080"
You'd map port 80 to your Traefik service, which would then dispatch the HTTP call to the appropriate internal service based on the URL prefix.
Advantages:
No need to talk to the Service Fabric API, which is somewhat hacky to do from within a container.
Possibly more secure; everything is internal, so there's no need to worry about installing certificates.
Disadvantages:
Your service routing is now tied to a TOML file within a Docker container rather than integrated into the service and application manifest files. There's no way to modify it without redeploying that container.
I don't believe this would work unless all services were running in a container (though I believe that if you enable the reverse proxy, you can resolve a service by name using http://localhost:19081/AppName/ServiceName instead).
To me, it seems like a cleaner approach provided you're not changing and adding services all the time. Usually, the stuff stays fairly static though.
Are there any gotchas that I'm not considering, or any strong argument against doing this?
I will add my two cents to your considerations:
It is a preference between dynamic and static configuration.
The benefit of the Service Fabric provider is that Traefik reloads its configuration every time a service with Traefik settings comes up. Today you say "it won't change", but in a few months, weeks, or perhaps days you might face a new requirement and have to update the file; if the system evolves quickly, as microservices solutions tend to, you will soon face the hard work of updating this file by hand.
If you are sure it will hardly ever change, don't limit yourself to the Service Fabric or file providers; Traefik can load its rules from REST, etcd, DynamoDB, and many other configuration providers.
For the file approach, you can create the file in an Azure file share and mount it as a volume in the container, so you don't need to rebuild the container for changes to take effect.
Then you can just update the file and either:
Restart the container to reload the file, or,
Even better, use the file provider's watch setting: you don't need to restart the container at all, because Traefik will watch for any changes to the file.
This is about applications which are programmed to use the Kubernetes API.
Should we assume that OpenShift Container Platform, from a Kubernetes standpoint, matches all the standards that OpenShift Origin (and Kubernetes) does?
Background
Compatibility testing for cloud-native apps that are shipped can involve a large matrix. It seems that, as a baseline, if OCP is meant to be a pure Kubernetes distribution with add-ons, then testing against it is trivial, so long as you are only using core Kubernetes features.
Alternatively, if shipping an app with support on OCP means you must test on OCP, that would to me imply that either (1) the app uses OCP functionality, or (2) the app uses Kubernetes functionality which may not work correctly on OCP, which should be considered a bug.
In practice you should be able to regard OpenShift Container Platform (OCP) as being the same as OKD (previously known as Origin). This is because it is effectively the same software and setup.
In comparing both of these to plain Kubernetes there are a few things you need to keep in mind.
The OpenShift distribution of Kubernetes is set up as a multi-tenant system, with a clear distinction between normal users and administrators. This means RBAC is set up so that a normal user is restricted in what they can do out of the box. A normal user cannot, for example, create arbitrary resources which affect the whole cluster. They also cannot run images that only work when run as root, because workloads run under a default service account which isn't granted such rights. That default service account also has no access to the REST API. A normal user has no privileges to enable the ability to run such images as root. A user who is a project admin could allow an application to use the REST API, but what it could do via the REST API would still be restricted to the project/namespace it runs in.
So if you develop an application on Kubernetes expecting to have full admin access and to be able to create any resources you want, or assuming there is no RBAC/SCC in place that restricts what you can do, you can have issues getting it running.
This doesn't mean you can't get it working; it just means that you need to take extra steps so that you or your application are granted the extra privileges needed.
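For example, if the application needs to call the REST API within its own namespace, a project admin can grant the namespace's default service account the built-in view role; a minimal sketch (the namespace name is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-view
  namespace: myproject
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: myproject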
This is the main area where people have issues, and it is because OpenShift is set up to be more secure out of the box, to suit a multi-tenant environment with many users, or even just to separate different applications so that they cannot interfere with each other.
The next thing worth mentioning is Ingress. When Kubernetes first came out, it had no concept of Ingress. To fill that hole, OpenShift implemented the concept of Routes. Ingress only came much later, and was based in part on what was done in OpenShift with Routes. That said, there are things you can do with Routes which I believe you still can't do with Ingress.
Obviously, if you use Routes, that only works on OpenShift, as a plain Kubernetes cluster only has Ingress. If you want to use Ingress, you need to be running OpenShift 3.10 or later. In 3.10 there is an automatic mapping of Ingress to Route objects, so I believe Ingress should work even though OpenShift actually implements Ingress under the covers using Routes and its HAProxy router setup.
There are obviously other differences as well. OpenShift has DeploymentConfig because Kubernetes originally had no Deployment. Again, there are things you can do with DeploymentConfig that you can't do with Deployment, but the Deployment object from Kubernetes is supported. One difference with DeploymentConfig is how it integrates with ImageStream objects in OpenShift, which don't exist in Kubernetes. Stick with Deployment/StatefulSet/DaemonSet, and avoid the OpenShift objects that were created back when Kubernetes lacked such features, and you should be fine.
Do note, though, that OpenShift takes a conservative approach to some resource types, so they may not be enabled by default. This applies to things that are still regarded as alpha, or are otherwise in very early development and subject to change. You should avoid things which are still in development even when using plain Kubernetes.
That all said, for the core Kubernetes bits, OpenShift is verified for conformance against CNCF tests for Kubernetes. So use what is covered by that and you should be okay.
https://www.cncf.io/certification/software-conformance/
I have a fully functioning Kubernetes cluster with one master and one worker, running on CoreOS.
Everything is working and my pods and services are running fine. Now I have no clue how to proceed with my webserver idea.
Before I go further: I have no configs yet for the idea I'm about to explain; I've just done a lot of research.
When you set up a pod (nginx) with a service, you get the default nginx page. After that you can set up a volume mount with a host volume (a volume mapping from host to container).
But let's say I want to separate every site (multiple sites, each in its own pod): how can I let my users add files to their pod's nginx document root? Running FTP on the CoreOS node goes against the Kubernetes way and adds security vulnerabilities.
If someone can help me shed some light on this issue, that would be great.
Thanks for your time.
I'm assuming that you want to have multiple nginx servers running. The content of each nginx server is managed by a different admin (you called them users).
TL;DR:
Option 1: Each admin builds their own nginx Docker image every time the static files change and deploys that new image. This is the approach if you consider these static files part of the source code of the nginx application.
Option 2: Use a persistent volume for nginx; the init script for the nginx image should sync all its files from something like S3 and then start nginx.
Before you proceed with building an application on Kubernetes, the most important thing is to separate your services into two conceptual categories, and give up your desire to touch the underlying nodes directly:
1) Stateless: These are services that are built by the developers and can be released. They can be stopped, started, and moved from one node to another, their filesystem can be reset during restart, and they will work perfectly fine. The majority of your web services will fit this category.
2) Stateful: These services cannot be stopped and restarted willy-nilly like the ones above. Primarily, their underlying filesystem must be persistent and remain the same across runs of the service. Databases, file servers, and similar services are in this category. These need special care and should use k8s persistent volumes and, nowadays, StatefulSets.
Typical application:
nginx: build the nginx.conf into the Docker image and deploy it as a stateless service
rails/nodejs/python service: build the source code into the Docker image, configure with env vars, deploy as a stateless service
database: mount a persistent volume, configure with env vars, deploy as a stateful service.
Separate sites:
Typically, I think at the level of a k8s Deployment and a k8s Service. Each site can be one Deployment-plus-Service pair. You can then expose them in separate ways (different external DNS names/IPs).
Application users storing files:
This is firmly in the category of a stateful service. Use a persistent volume mounted at a /media kind of directory.
Developers changing files:
Say developers or admins want to use FTP to change the files that nginx serves. The correct pattern is to build a Docker image with the new files and then use that image. If there are too many files, and you don't consider those files part of the 'source' of the nginx application, then use something like S3 and a persistent volume. In your Docker image's init script, don't start nginx directly: contact S3, sync all your files onto your persistent volume, then start nginx.
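An equivalent way to express that sync step on Kubernetes is an init container rather than the image's own init script; a rough sketch, where the bucket, claim, and image names are placeholders and AWS credentials are assumed to be provided separately:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: site-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site-nginx
  template:
    metadata:
      labels:
        app: site-nginx
    spec:
      volumes:
        - name: webroot
          persistentVolumeClaim:
            claimName: site-content        # placeholder PVC
      initContainers:
        - name: sync-content
          image: amazon/aws-cli            # any image with the aws CLI
          command: ["aws", "s3", "sync", "s3://my-site-bucket", "/data"]
          volumeMounts:
            - name: webroot
              mountPath: /data
      containers:
        - name: nginx
          image: nginx:1.25
          volumeMounts:
            - name: webroot
              mountPath: /usr/share/nginx/html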
While the options and reasoning listed by iamnat are right, there's at least one more option to add to the list. You could consider using ConfigMap objects: maintain your files within the ConfigMap and mount them into your containers.
A good example can be found in the official documentation: check the Real World Example configuring Redis section to get some actionable input.
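A minimal sketch of that idea (names and the file content are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: site-content
data:
  index.html: |
    <html><body>Hello from a ConfigMap</body></html>
---
apiVersion: v1
kind: Pod
metadata:
  name: site-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
  volumes:
    - name: content
      configMap:
        name: site-content

Keep in mind that ConfigMaps are capped at roughly 1 MiB, so this suits small amounts of content; a mounted ConfigMap is also refreshed inside the pod a short while after you edit it.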