I am trying to fully automate the deployment to my Kubernetes cluster with Bazel and rules_k8s.
But I don't know how to apply external configurations to my cluster.
Usually I would run a command like
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
But I want this to happen automatically when I run my
k8s_objects(
    name = "kubernetes_deployment",
    objects = [
        "//kubernetes:nginx",
        "//services/gateway:k8s",
        "//services/ideas:k8s",
        # ...
    ],
)
rule to deploy everything to Kubernetes.
Try this in your BUILD file. I'm not sure it's the best way, as it will be re-run on every build; it would be nicer to use an http_file here instead of a genrule (a sketch of that follows the code below).
genrule(
    name = "extyaml",
    srcs = [],
    outs = ["certman-k8s.yaml"],
    cmd = "curl -L https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml > $@",
)

k8s_object(
    name = "certman",
    cluster = "minikube",
    template = ":certman-k8s.yaml",
)
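As a hedged sketch of the http_file idea (the repository name is hypothetical and the sha256 must be filled in with the real checksum), you could fetch the manifest in your WORKSPACE so Bazel verifies and caches the download instead of re-running it:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")

http_file(
    name = "cert_manager_manifest",  # hypothetical repository name
    urls = ["https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml"],
    sha256 = "...",  # fill in the real checksum so the download is verified and cached
)

and then reference the downloaded file from the BUILD file:

k8s_object(
    name = "certman",
    cluster = "minikube",
    template = "@cert_manager_manifest//file",
)

Adding :certman to the objects list of your k8s_objects rule should then apply it together with everything else.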
Context:
I'm reusing Terraform modules, and I deploy microservices using the Helm provider within Terraform.
Problem:
I'm trying to translate this line into Terraform code, to get the current image tag live from prod (in the interest of reusing it). I'm already using the Kubernetes provider's auth, and it doesn't make sense to pull kubectl into my CI just for this.
k get deploy my-deployment -n staging -o jsonpath='{$.spec.template.spec.containers[:1].image}'
The Kubernetes Terraform provider doesn't seem to support data blocks, and the Helm provider doesn't have output blocks.
Does anyone know how we could get (read) the image tag of a deployment using Terraform?
EDIT:
My deployment looks like this:
resource "helm_release" "example" {
name = "my-redis-release"
repository = "https://charts.bitnami.com/bitnami"
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "image.tag"
value = "latest"
}
}
The tag will be a hash that changes often and is passed in from another repo.
latest in this case should be replaced by the tag currently running in the cluster. I can get it with kubectl, using the line above, but I'm not sure how to do it in Terraform.
It turns out there are multiple ways of doing it; the easiest one for me is to reference the set argument of the helm_release resource:
output "helm_image_tag" {
value = [ for setting in helm_release.example.set : setting.value if setting.name == "image.tag" ]
}
The output will then be a list, which you can reference in a shell script (or another scripting language):
+ helm_image_tag = [
    + "latest",
  ]
If the list format does not suit you, you can create a map output:
output "helm_image_tag" {
value = { for setting in helm_release.example.set : setting.name => setting.value if setting.name == "image.tag" }
}
This produces the following output:
+ helm_image_tag = {
    + "image.tag" = "latest"
  }
By using terraform output helm_image_tag you can access this output value and decide what to do with it in the CI.
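For example, a minimal sketch of consuming the map-style output in a CI shell step (assuming jq is available):

# read the map output and extract the tag
TAG=$(terraform output -json helm_image_tag | jq -r '."image.tag"')
echo "current tag: ${TAG}"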
I am running a Celery executor and I'm trying to run a Python script in the KubernetesPodOperator. Below are examples of what I have tried that didn't work. What am I doing wrong?
Running the script
org_node = KubernetesPodOperator(
    namespace='default',
    image="python",
    cmds=["python", "somescript.py", "-c"],
    arguments=["print('HELLO')"],
    labels={"foo": "bar"},
    image_pull_policy="Always",
    name=task,
    task_id=task,
    is_delete_operator_pod=False,
    get_logs=True,
    dag=dag
)
Running function load_users_into_table()
def load_users_into_table(postgres_hook, schema, path):
    gdf = read_csv(path)  # read_csv presumably imported from pandas
    gdf.to_sql('users', con=postgres_hook.get_sqlalchemy_engine(), schema=schema)

org_node = KubernetesPodOperator(
    namespace='default',
    image="python",
    cmds=["python", "somescript.py", "-c"],
    arguments=[load_users_into_table],
    labels={"foo": "bar"},
    image_pull_policy="Always",
    name=task,
    task_id=task,
    is_delete_operator_pod=False,
    get_logs=True,
    dag=dag
)
The script somescript.py must be inside the Docker image.
Step-1: Let's create an image (see https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).
FROM python:3.8
# copy requirements.txt from local to container
COPY requirements.txt requirements.txt
# install dependencies into container (geopandas, sqlalchemy)
RUN pip install -r requirements.txt
# copy the python script from local to container
COPY somescript.py somescript.py
ENTRYPOINT [ "python", "somescript.py"]
Step-2: Build and push the image to a public Docker repository (https://hub.docker.com).
NB: in this setup, KubernetesPodOperator pulls the image from a public Docker repo.
# build image
docker build -t my-python-img:latest .
# test if your image works perfectly
docker run my-python-img:latest
# tag and push the image
docker tag my-python-img username/my-python-img
docker push username/my-python-img
# (optional) verify that the pushed image can be pulled
docker pull username/my-python-img
Step-3: Let's create the k8s task.
p = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',
    labels={'dag-id': dag.dag_id},
    name='airflow-my-image-pod',
    task_id='load-users',
    in_cluster=False,  # False: Airflow runs outside the cluster; True: inside it
    cluster_context='microk8s',
    config_file='/usr/local/airflow/include/.kube/config',
    is_delete_operator_pod=True,
    get_logs=True,
    dag=dag
)
If you don't understand where the configuration file comes from, look here: https://www.astronomer.io/docs/cloud/stable/develop/kubepodoperator-local.
Finally, I want to mention something important when working with databases (credentials): Kubernetes offers Secrets to secure sensitive information. See https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html
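As a minimal sketch of wiring a Secret into the operator (the Secret name db-credentials and its key are hypothetical, and import paths vary across Airflow versions):

from airflow.kubernetes.secret import Secret

# expose the "password" key of the "db-credentials" Secret as the DB_PASSWORD env var
db_password = Secret(
    deploy_type='env',
    deploy_target='DB_PASSWORD',
    secret='db-credentials',
    key='password',
)

p = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',
    name='airflow-my-image-pod-secure',
    task_id='load-users-secure',
    secrets=[db_password],
    get_logs=True,
    dag=dag
)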
KubernetesPodOperator launches a Kubernetes pod that runs a container as specified in the operator's arguments.
First Example
In the first example, the following happens:
KubernetesPodOperator instructs K8s to launch a pod and prepare to run a container in it using the python image (the image parameter) from hub.docker.com (the default image registry)
The ENTRYPOINT of the python image is replaced by ["python", "somescript.py", "-c"] (the cmds parameter)
The CMD of the python image is replaced by ["print('HELLO')"] (the arguments parameter)
...
The container is run
So, the complete command that is run in the container is
python somescript.py -c print('HELLO')
Obviously, the official Python image from Docker Hub does not have somescript.py in its working directory. Even if it did, it probably would not be the one that you wrote. That is why the command fails with something like:
python: can't open file 'somescript.py': [Errno 2] No such file or directory
Second Example
In the second example, pretty much the same happens as in the first example, but the command run in the container (again built from the cmds and arguments parameters) is roughly
python somescript.py -c <function load_users_into_table at 0x...>
(the function object is passed without being called, so its string representation, not its return value, ends up in the command).
This command fails for the same reasons as in the first example.
How It Could be Done (a Sketch)
You could build a Docker image with somescript.py and all its dependencies. Push the image to an image registry. Specify the image, ENTRYPOINT, and CMD in the corresponding parameters of KubernetesPodOperator.
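A minimal sketch of that setup, reusing the image built in the previous answer (the image name and the script's flags are assumptions; task and dag come from your snippets):

org_node = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',  # image that contains somescript.py and its dependencies
    cmds=['python', 'somescript.py'],       # replaces the image's ENTRYPOINT
    arguments=['--schema', 'public'],       # replaces CMD; hypothetical flags for the script
    name=task,
    task_id=task,
    get_logs=True,
    dag=dag
)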
I am new to Terraform and DevOps in general. First, I need to get the SSH host key from a URL into known_hosts, to later use for Flux.
data "helm_repository" "fluxcd" {
name = "fluxcd"
url = "https://charts.fluxcd.io"
}
resource "helm_release" "flux" {
name = "flux"
namespace = "flux"
repository = data.helm_repository.fluxcd.metadata[0].name
chart = "flux"
set {
name = "git.url"
value = "git.project"
}
set {
name = "git.secretName"
value = "flux-git-deploy"
}
set {
name = "syncGarbageCollection.enabled"
value = true
}
set_string {
name = "ssh.known_hosts"
value = Need this value from url
}
}
Then I need to generate a key and use it to create a Kubernetes secret to communicate with the GitLab repository.
resource "kubernetes_secret" "flux-git-deploy" {
metadata {
name = "flux-git-deploy"
namespace = "flux"
}
type = "Opaque"
data = {
identity = tls_private_key.flux.private_key_pem
}
}
resource "gitlab_deploy_key" "flux_deploy_key" {
title = "Title"
project = "ProjectID"
key = tls_private_key.flux.public_key_openssh
can_push = true
}
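The key itself comes from a tls_private_key resource along these lines (a sketch; algorithm and key size are assumptions, as this resource is referenced above but not shown):

resource "tls_private_key" "flux" {
  algorithm = "RSA"
  rsa_bits  = 4096
}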
I am not sure if I am on the right track. Any advice will help.
There are a few approaches you could use. They fall into two categories:
generate the ssh known_hosts entries manually and pass the output in through variables or files
create the file on the machine where you're running Terraform with the command ssh-keyscan <git_domain> and set the path as the value for ssh.known_hosts.
You can also use the file function directly in the variable, or use the file output directly as an environment variable. Personally, I would not recommend that, because the value is then saved directly in the Terraform state; in this case it is not a critical issue, but it would be critical if you were handling SSH keys or credentials.
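A minimal sketch of the file-based variant (the domain and path are assumptions; run ssh-keyscan gitlab.com > known_hosts before terraform apply):

# inside the helm_release "flux" block from the question:
set_string {
  name  = "ssh.known_hosts"
  value = file("${path.module}/known_hosts")  # file generated by ssh-keyscan beforehand
}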
Another approach would be to use the local-exec provisioner with a null_resource before you create the Helm release for Flux, and create the file directly from Terraform. But in addition to that, you have to take care of accessing the file you created and of managing the triggers that re-run the command when a setting changes. For example:
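A hedged sketch of that approach (var.git_domain and the output path are assumptions):

resource "null_resource" "known_hosts" {
  # re-run the scan when the git domain changes
  triggers = {
    git_domain = var.git_domain
  }

  provisioner "local-exec" {
    command = "ssh-keyscan ${var.git_domain} > ${path.module}/known_hosts"
  }
}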
In general, I would not use Terraform for such things. It is fine for providing infrastructure like AWS resources or services that are directly bound to the infrastructure, but in order to create and run services you need a provisioning tool like Ansible, where you can run commands like ssh-keyscan directly as a module. In the end you want a stable pipeline where you run Ansible (or your favorite provisioning tool) after a Terraform change.
But if you want to use only Terraform, you're on the right track.
We have a project that consists of more than 20 small services, all residing in the same repository and built with Bazel.
To reduce management overhead we would like to automagically generate as much as possible, including our images and k8s deployments.
So the question is:
Is there a way to avoid setting the image key in the k8s_deploy step by a rule or function?
We already have a rule that templates the image name (and k8s object name) into our manifest based on the label:
_TEMPLATE = "//k8s:deploy.yaml"

def _template_manifest_impl(ctx):
    name = '{}'.format(ctx.label).replace("//cmd/", "").replace("/", "-").replace(":manifest", "")
    ctx.actions.expand_template(
        template = ctx.file._template,
        output = ctx.outputs.source_file,
        substitutions = {
            "{NAME}": name,
        },
    )

template_manifest = rule(
    implementation = _template_manifest_impl,
    attrs = {
        "_template": attr.label(
            default = Label(_TEMPLATE),
            allow_single_file = True,
        ),
    },
    outputs = {"source_file": "%{name}.yaml"},
)
This way the service under //cmd/endpoints/customer/log would result in the image eu.gcr.io/project/endpoints-customer-log.
While this works fine so far, we still have to manually set the images dict for k8s_deploy like this:
k8s_deploy(
    name = "dev",
    images = {
        "eu.gcr.io/project/endpoints-customer-log:dev": ":image",
    },
    template = ":manifest",
)
It would be great to get rid of this, but I have not found a way yet.
Using a rule does not work because images does not take a label, and using a function does not work because I found no way of accessing the context in there.
Am I missing something?
The solution I found to get the container registry names out of the build step was to use Bazel for build and Skaffold for deploy. Both steps are performed in the same CI pipeline.
My skaffold.yaml is very simple and provides the mapping of Bazel targets to GCR names.
apiVersion: skaffold/v2alpha4
kind: Config
metadata:
  name: my_services
build:
  tagPolicy:
    gitCommit:
      variant: AbbrevCommitSha
  artifacts:
    - image: gcr.io/jumemo-dev/service1
      bazel:
        target: //server1/src/main/java/server1:server1.tar
    - image: gcr.io/jumemo-dev/service2
      bazel:
        target: //server2/src/main/java/server2:server2.tar
It is invoked using:
$ skaffold build
When running this command:
kubectl apply -f tenten
I get this error:
unable to decode "tenten\.angular-cli.json": Object 'Kind' is missing in '{
  "project": {
    "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
    "name": "tenten"
  },
  "apps": [{
    "root": "src/main/webapp/",
    "outDir": "target/www/app",
    "assets": [
      "content",
      "favicon.ico"
    ],
    "index": "index.html",
    "main": "app/app.main.ts",
    "polyfills": "app/polyfills.ts",
    "test": "",
    "tsconfig": "../../../tsconfig.json",
    "prefix": "jhi",
    "mobile": false,
    "styles": [
      "content/scss/vendor.scss",
      "content/scss/global.scss"
    ],
    "scripts": []
  }],
It looks like you're running this from the parent directory of your applications. You should 1) create a directory that's parallel to your applications and 2) run yo jhipster:kubernetes in it. Then run kubectl apply -f tenten in that directory after you've built and pushed your Docker images. For example, here's the output when I run it from the kubernetes directory in my jhipster-microservices-example project.
± yo jhipster:kubernetes
_-----_
| | ╭──────────────────────────────────────────╮
|--(o)--| │ Update available: 2.0.0 (current: 1.8.5) │
`---------´ │ Run npm install -g yo to update. │
( _´U`_ ) ╰──────────────────────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
⎈ [BETA] Welcome to the JHipster Kubernetes Generator ⎈
Files will be generated in folder: /Users/mraible/dev/jhipster-microservices-example/kubernetes
WARNING! kubectl 1.2 or later is not installed on your computer.
Make sure you have Kubernetes installed. Read http://kubernetes.io/docs/getting-started-guides/binary_release/
Found .yo-rc.json config file...
? Which *type* of application would you like to deploy? Microservice application
? Enter the root directory where your gateway(s) and microservices are located ../
2 applications found at /Users/mraible/dev/jhipster-microservices-example/
? Which applications do you want to include in your configuration? (Press <space> to select, <a> to toggle all, <i> to inverse selection) blog, store
JHipster registry detected as the service discovery and configuration provider used by your apps
? Enter the admin password used to secure the JHipster Registry admin
? What should we use for the Kubernetes namespace? default
? What should we use for the base Docker repository name? mraible
? What command should we use for push Docker image to repository? docker push
Checking Docker images in applications' directories...
ls: no such file or directory: /Users/mraible/dev/jhipster-microservices-example/blog/target/docker/blog-*.war
identical blog/blog-deployment.yml
identical blog/blog-service.yml
identical blog/blog-postgresql.yml
identical blog/blog-elasticsearch.yml
identical store/store-deployment.yml
identical store/store-service.yml
identical store/store-mongodb.yml
conflict registry/jhipster-registry.yml
? Overwrite registry/jhipster-registry.yml? overwrite this and all others
force registry/jhipster-registry.yml
force registry/application-configmap.yml
WARNING! Kubernetes configuration generated with missing images!
To generate Docker image, please run:
./mvnw package -Pprod docker:build in /Users/mraible/dev/jhipster-microservices-example/blog
WARNING! You will need to push your image to a registry. If you have not done so, use the following commands to tag and push the images:
docker image tag blog mraible/blog
docker push mraible/blog
docker image tag store mraible/store
docker push mraible/store
You can deploy all your apps by running:
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
Use these commands to find your application's IP addresses:
kubectl get svc blog
See the end of my blog post Develop and Deploy Microservices with JHipster for more information.