Following the docs to create a Deployment, I have a .yaml file like this:
apiVersion: extensions/v1beta1
kind: Deployment
...
I wasn't sure what to make of the "extensions/v1beta1", so I ended up here in the API docs.
That makes it sound like I should use a value of "v1", but that doesn't seem to be valid when I try to kubectl apply my .yaml file.
Could someone help me to better understand what the apiVersion values mean and how I can determine the best value to use for each component?
Oh, and I'm using minikube, and kubectl version reports GitVersion "v1.3.0" for both client and server.
The docs you linked to are from before the release of Kubernetes 1.0 (a year ago). At that time, we had beta versions of the API and were migrating to the v1 API. Since then, we have introduced multiple API groups, and each API group can have a different version. The version indicates the maturity of the API (alpha is under active development, beta means it will have compatibility/upgradability guarantees, and v1 means it's stable). The deployment API is currently in the second category, so using extensions/v1beta1 is correct.
From the documentation suggested by @Vern DeHaven:
extensions/v1beta1
This version of the API includes many new, commonly used features of Kubernetes. Deployments, DaemonSets, ReplicaSets, and Ingresses all received significant changes in this release.
Note that in Kubernetes 1.6, some of these objects were relocated from extensions to specific API groups (e.g. apps). When these objects move out of beta, expect them to be in a specific API group like apps/v1.
Using extensions/v1beta1 is being deprecated; try to use the specific API group where possible, depending on your Kubernetes cluster version.
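For illustration, here is how the apiVersion of the same Deployment changes once it moves out of extensions (a minimal sketch; the name is a placeholder, and apps/v1 assumes a cluster at version 1.9 or later):

# before the move: Deployment in the extensions group
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-deployment

# after the move: Deployment stable in the apps group
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment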
Related
For a Pod we use apiVersion: v1.
For a ReplicaSet we use apiVersion: apps/v1.
The question is: does apps/v1 contain all the objects of v1 as well, or what is the hierarchy? Can someone please explain?
The apiVersion is composed of two components: the group and the version.
The version indicates the level of stability and support: if the version contains alpha, the software may contain bugs and the feature may be dropped in a future release; if the version contains beta, the feature is considered well tested and is enabled by default; the feature will not be dropped, but some details may change.
Groups were introduced to ease development and maintenance of k8s. The API group is also part of the REST path used when accessing the k8s API. The full list of groups is located at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#-strong-api-groups-strong-
So there is no hierarchy between v1 and apps/v1.
The first API resources introduced in Kubernetes do not have groups. So you will use the apiVersion: v1.
Later resources are linked to a group. For example, Jobs and CronJobs are both in the batch group, so their apiVersion is batch/v1. Deployments and ReplicaSets are in the apps group and use apiVersion: apps/v1.
You can list all API resources using the command kubectl api-resources.
See also: https://kubernetes.io/docs/reference/using-api/#api-groups
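A few commands can help determine the right apiVersion for a given kind (a sketch; deployment is just an example resource):

kubectl api-versions                     # list every group/version the API server serves
kubectl api-resources | grep -i deploy   # show the group and kind for deployment-related resources
kubectl explain deployment               # print the KIND and VERSION (and fields) of the resource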
I am setting up my Kubernetes cluster using kubectl apply -k (kustomize). Like any other such arrangement, I depend on some secrets during deployment. The route I want to go is to use the secretGenerator feature of kustomize to fetch my secrets from files or environment variables.
However managing said files or environment variables in a secure and portable manner has shown itself to be a challenge. Especially since I have 3 separate namespaces for test, stage and production, each requiring a different set of secrets.
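For reference, the plain file-based form I am using today looks roughly like this (a sketch; the name, file, and literal are placeholders):

secretGenerator:
- name: juicy-environment-config
  files:
  - juicy-test-secret.txt
  literals:
  - SOME_KEY=some-value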
So I thought surely there must be a way for me to manage the secrets in my cloud provider's official way (Google Cloud Platform's Secret Manager).
So what would a secretGenerator that accesses secrets stored in Secret Manager look like?
My naive guess would be something like this:
secretGenerator:
- name: juicy-environment-config
  google-secret-resource-id: projects/133713371337/secrets/juicy-test-secret/versions/1
  type: some-google-specific-type
Is this at all possible?
What would the example look like?
Where is this documented?
If this is not possible, what are my alternatives?
I'm not aware of a plugin for that. The plugin system in Kustomize is somewhat new (added about 6 months ago), so there aren't a ton in the wild so far, and Secret Manager itself is only a few weeks old. You can find docs at https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins for writing one, though. That page links to a few Go plugins for secrets management, so you could probably take one of those and rework it to the GCP API.
There is a Go plugin for this (I helped write it), but plugins weren't supported until more recent versions of Kustomize, so you'll need to install Kustomize directly and run it like kustomize build <path> | kubectl apply -f - rather than kubectl apply -k. This is a good idea anyway, IMO, since newer versions of Kustomize have a lot of useful features that the one built into kubectl lacks.
As seen in the examples, after you've installed the plugin (or you can run it within Docker, see readme) you can define files like the following and commit them to version control:
my-secret.yaml
apiVersion: crd.forgecloud.com/v1
kind: EncryptedSecret
metadata:
  name: my-secrets
  namespace: default
source: GCP
gcpProjectID: my-gcp-project-id
keys:
- creds.json
- ca.crt
In your kustomization.yaml you would add
generators:
- my-secret.yaml
and when you run kustomize build it'll automatically retrieve your secret values from Google Secret Manager and output Kubernetes Secret objects.
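Putting it together, the build-and-apply step then looks something like this (a sketch; the overlay path is a placeholder, and older Kustomize releases may require plugin support to be explicitly enabled):

kustomize build ./overlays/test | kubectl apply -f -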
I'm trying the leader-election code example provided with the go client (here) in a GKE cluster v1.13.7.
That requires a Lease resource in the groupVersion coordination.k8s.io/v1, but that group/version isn't available in my cluster. I know that Lease was promoted to v1 in k8s 1.14 (not yet available with GKE), but I expected to find the v1beta1 version.
When I try
kubectl proxy
curl -X GET localhost:8001/apis/coordination.k8s.io
I get
404 page not found
Although the feature is v1 in Kubernetes 1.14, GKE has not incorporated it yet.
Since GKE is a fully managed product, the engineering team decides which features to incorporate into the GKE offering.
I recommend opening a feature request through the Google Public Issue Tracker and providing your use case, so the feature can be integrated in a future release.
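In the meantime, a quick way to confirm which versions of the coordination.k8s.io group (if any) your cluster serves is a sketch like this:

kubectl api-versions | grep coordination                  # prints e.g. coordination.k8s.io/v1beta1 if the group is served
kubectl get leases.coordination.k8s.io --all-namespaces   # fails if the Lease resource is not available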
I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all the environments, including the local dev env...
The first limitation I see is related to service discovery, since I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that minikube has as a runtime, compared to production k8s.
So, although one can use the same single yaml file in different environments, typically that's not what one wants.
What one usually wants is to have the general architectural shape of the solution be the same across environments, have differences extracted into minimalist configuration, then rendered using templates into environment-specific files to be used at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With helm, you can use the same base set of resource files, extract environment differences into config files, and then use helm to deploy as appropriate to each environment.
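As a minimal sketch (the chart layout, names, and values files are all hypothetical), the Service type can be extracted into values so that minikube falls back to NodePort while the cloud uses LoadBalancer:

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-web
spec:
  type: {{ .Values.service.type }}
  selector:
    app: web
  ports:
  - port: 80

# values.yaml (cloud default)
service:
  type: LoadBalancer

# values-minikube.yaml (local override)
service:
  type: NodePort

Deploying locally is then just helm install my-release ./mychart -f values-minikube.yaml (Helm 3 syntax), while the cloud environments use the default values or their own overrides.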
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files in one bundle and then create all the deployments, services, etc. with kubectl apply -f bundle.yml.
To create this bundle, you need to separate every manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2
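Extending the same bundle, you can mix kinds freely; for example, a Deployment placed in namespace-1 (a sketch; names and image are placeholders, assuming a cluster recent enough for apps/v1):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: namespace-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.21

A single kubectl apply -f bundle.yml then creates both namespaces and the deployment in one go.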
After some intense Google and SO searching, I couldn't find any document that mentions both rolling update and set image and stresses the difference between the two.
Can anyone shed light? When would I rather use either of those?
EDIT: It's worth mentioning that I'm already working with deployments (rather than replication controllers directly) and that I'm using yaml configuration files. It would also be nice to know if there's a way to perform any of those using configuration files rather than direct commands.
In older k8s versions the ReplicationController was the only resource to manage a group of replicated pods. To update the pods of a ReplicationController you use kubectl rolling-update.
Later, k8s introduced the Deployment which manages ReplicaSet resources. The Deployment could be updated via kubectl set image.
Working with Deployment resources (as you already do) is the preferred way. I guess the ReplicationController and its rolling-update command are mainly still there for backward compatibility.
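For comparison, the two commands look roughly like this (a sketch; the resource, container, and image names are placeholders):

# old approach: replace the pods of a ReplicationController one at a time
kubectl rolling-update my-rc --image=my-image:v2

# current approach: change the image of a Deployment, which rolls out new ReplicaSets
kubectl set image deployment/my-deployment my-container=my-image:v2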
UPDATE: As mentioned in the comments:
To update a Deployment I used kubectl patch, as it can also change things like adding new env vars, whereas kubectl set image is rather limited and can only change the image version. Also note that patch can be applied to all k8s resources and is not restricted to Deployments.
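As a sketch of that approach (the deployment, container, and variable names are placeholders), a strategic merge patch can add an env var to an existing container:

kubectl patch deployment my-deployment \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","env":[{"name":"LOG_LEVEL","value":"debug"}]}]}}}}'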
Later, I shifted my deployment processes to use helm, a really neat and k8s-native package management tool. I can highly recommend having a look at it.