I'm using Traefik with its IngressRoute custom resource.
With kubectl api-resources it is defined as:
NAME            SHORTNAMES   APIVERSION                     NAMESPACED   KIND
...
ingressroutes                traefik.containo.us/v1alpha1   true         IngressRoute
...
My problem is that the Kubernetes Dashboard can only display Ingress resources, so IngressRoute resources are not shown.
How can I make it show IngressRoute resources instead of Ingresses?
Kubernetes Dashboard cannot display Traefik IngressRoute resources the way it shows Ingresses without changes to its source code.
If you want, you can create a feature request in the Dashboard GitHub repo and follow the Improve resource support #5232 issue. Maybe such a feature will be added in the future.
In the meantime, you can use Traefik's own dashboard.
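For example, Traefik v2's dashboard can be exposed with an IngressRoute of its own. A minimal sketch; the hostname, namespace, and entrypoint are placeholders, and it assumes the dashboard is enabled (e.g. via --api.dashboard=true):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.example.com`)
      kind: Rule
      services:
        - name: api@internal   # Traefik's built-in dashboard/API service
          kind: TraefikService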
Related
I'm adding an Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
    - host: cheddar.213.215.191.78.nip.io
      http:
        paths:
          - backend:
              service:
                name: cheddar
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
but the logs complain:
W0205 15:14:07.482439 1 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
time="2021-02-05T15:14:07Z" level=info msg="Updated ingress status" namespace=default ingress=cheddar
W0205 15:18:19.104225 1 warnings.go:67] networking.k8s.io/v1beta1 IngressClass is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 IngressClassList
Why? What's the correct yaml to use?
I'm currently on microk8s 1.20
I have analyzed your issue and come to the following conclusions:
The Ingress will work, and the warnings you see just inform you about the available API versioning. You don't have to worry about them. I've seen the same warnings:
#microk8s:~$ kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
As for the "why" this is happening even when you use apiVersion: networking.k8s.io/v1, I have found the following explanation:
This is working as expected. When you create an ingress object, it can
be read via any version (the server handles converting into the
requested version). kubectl get ingress is an ambiguous request,
since it does not indicate what version is desired to be read.
When an ambiguous request is made, kubectl searches the discovery docs
returned by the server to find the first group/version that contains
the specified resource.
For compatibility reasons, extensions/v1beta1 has historically been
preferred over all other api versions. Now that ingress is the only
resource remaining in that group, and is deprecated and has a GA
replacement, 1.20 will drop it in priority so that kubectl get ingress would read from networking.k8s.io/v1, but a 1.19 server
will still follow the historical priority.
If you want to read a specific version, you can qualify the get
request (like kubectl get ingresses.v1.networking.k8s.io ...) or can
pass in a manifest file to request the same version specified in the
file (kubectl get -f ing.yaml -o yaml)
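For example, applied to your cheddar Ingress:
kubectl get ingresses.v1.networking.k8s.io cheddar -o yaml
# or request the version pinned in your manifest:
kubectl get -f ing.yaml -o yaml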
Long story short: despite using the proper apiVersion, the deprecated one is still treated as the default, which generates the warning you see.
I can also see that changes have been made recently, so I assume this is still being worked on.
I had the same issue and was unable to update the k8s cluster, which was subscribed to a release channel.
One source of these log warnings is the ClusterRole definition of external-dns. external-dns keeps querying the ingresses in the k8s cluster according to the rules defined in its ClusterRole:
- apiGroups: ["extensions", "networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
Found in the Helm chart here.
It queries the deprecated extensions ingress API as well, which keeps generating those logs. Please update cert-manager as well.
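If you control the chart's RBAC, you can narrow the rule so only the networking.k8s.io group is queried; a minimal sketch of the updated rule:
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]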
I want to use the same hostname, e.g. example.com, in two different namespaces with different paths: in namespace A I want example.com/clientA, and in namespace B I want example.com/clientB. Any ideas on how to achieve this?
nginxinc's ingress controller has a Cross-Namespace Configuration feature that allows you to do exactly what you described.
You can also find there prepared examples with deployments, services, etc.
The only thing you most probably won't like: nginxinc is not free.
Also look here:
Cross-namespace Configuration You can spread the Ingress configuration
for a common host across multiple Ingress resources using Mergeable
Ingress resources. Such resources can belong to the same or different
namespaces. This enables easier management when using a large number
of paths. See the Mergeable Ingress Resources example on our GitHub.
As an alternative to Mergeable Ingress resources, you can use
VirtualServer and VirtualServerRoute resources for cross-namespace
configuration. See the Cross-Namespace Configuration example on our
GitHub.
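A sketch of the Mergeable Ingress approach, assuming nginxinc's kubernetes-ingress controller; service names and namespaces are placeholders. The master defines the host, and each namespace contributes a minion with its own path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-master
  namespace: a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "master"
spec:
  rules:
    - host: example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-minion-clienta
  namespace: a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "minion"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /clientA
            pathType: Prefix
            backend:
              service:
                name: clienta-svc   # placeholder service in namespace a
                port:
                  number: 80
A second minion in namespace b serving /clientB would follow the same pattern.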
If you do not want to change your default ingress controller (nginx-ingress), another option is to define a Service of type ExternalName in your default namespace that points to the full internal DNS name of the service in the other namespace.
Something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-svc
  name: webapp
  namespace: default
spec:
  externalName: my-svc.my-namespace.svc # <-- put your service name with namespace here
  type: ExternalName
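Your Ingress in the default namespace can then reference webapp like any local service. A sketch, assuming the networking.k8s.io/v1 API; the host and port are placeholders, and note that some controllers want a port declared even on an ExternalName Service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /clientB
            pathType: Prefix
            backend:
              service:
                name: webapp   # the ExternalName service defined above
                port:
                  number: 80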
Using kubectl api-resources you are able to get a list of all resources inside Kubernetes.
Nevertheless, I'd like to know which controller handles which resources.
For example, I've just installed traefik and I see some unknown installed resources:
NAME               SHORTNAMES   APIGROUP              NAMESPACED   KIND
ingresses          ing          extensions            true         Ingress
ingresses          ing          networking.k8s.io     true         Ingress
ingressroutes                   traefik.containo.us   true         IngressRoute
ingressroutetcps                traefik.containo.us   true         IngressRouteTCP
Why are there two resources with the same name and different APIGROUPs?
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old API is deprecated and eventually removed.
The networking.k8s.io API group was introduced in v1.14. Currently Ingress exists in both extensions and networking.k8s.io, for backward compatibility and to give ingress controller implementations enough time to transition from extensions to networking.k8s.io. Ingress will be removed from extensions in v1.22.
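You can see this for yourself by reading the resource through each group explicitly (on clusters before v1.22, where both groups still serve Ingress):
kubectl get ingresses.v1beta1.extensions --all-namespaces
kubectl get ingresses.v1.networking.k8s.io --all-namespaces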
In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent a pod in ns2 from executing curl http://deployment.ns1, but apparently it's possible.
So my question is: how do I allow/deny such cross-namespace requests? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
Good that you are working with namespace isolation.
Deploy a NetworkPolicy in ns1 that allows all ingress; the documentation shows how to define an ingress policy allowing all inbound traffic.
Likewise for ns2, create and deploy a NetworkPolicy that denies ingress from other namespaces; again, the docs will help with the YAML construct. Sketches of both follow.
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
    - from:
        - namespaceSelector: {}
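And for ns2, a sketch that admits traffic only from pods in the same namespace (the policy name is a placeholder, and enforcement requires a CNI plugin that supports NetworkPolicy):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: deny-from-other-namespaces
spec:
  podSelector: {}            # selects every pod in ns2
  ingress:
    - from:
        - podSelector: {}    # allows only pods from the same namespace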
This may not be the answer you want, but I can point you to the feature that implements your requirements.
AFAIK Kubernetes can define network policies to limit network access.
Refer to Declare Network Policy for more details on NetworkPolicy.
Default policies
See Setting a Default NetworkPolicy for New Projects in the case of OpenShift.
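For plain Kubernetes, the canonical default-deny example from the docs looks like this (apply it per namespace; the namespace shown is a placeholder):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ns2
spec:
  podSelector: {}
  policyTypes:
    - Ingress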
I use the Kubernetes ServiceAccount admission plugin to automatically inject a ca.crt and token into my pods. This is useful for applications such as kube2sky which need to access the API server.
However, I run many hundreds of other pods that don't need this token. Is there a way to stop the ServiceAccount plugin from injecting the default-token in to these pods (or, even better, have it off by default and turn it on explicitly for a pod)?
As of Kubernetes 1.6+ you can disable automounting of API credentials for a particular pod, as stated in the Kubernetes Service Accounts documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
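In 1.6+ you can also opt out at the ServiceAccount level, so every pod using that account skips the automount unless its own spec overrides it; a sketch:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false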
Right now there isn't a way to enable a service account for some pods but not others, although you can use ABAC with some service accounts to restrict access to the apiserver.
This is being discussed in https://github.com/kubernetes/kubernetes/issues/16779; I'd encourage you to add your use case to that issue and see when it will be implemented.