I have installed NGINX on Azure AKS, as default, from the repository, and set it up to handle HTTP and TCP traffic inside the namespace where the controller and services are installed. URLs are mapped to internal services. This works fine.
I then created another namespace and installed the same application there, with identically named services. The installation seems to work.
I then tried to install another NGINX controller, this time in the new namespace, to control the services located there.
I used Helm and added --set controller.ingressClass="custom-class-nginx" and --set controller.ingressClassResource.name="custom-class-nginx" to the helm upgrade --install command line.
I also changed the rules configuration to use "custom-class-nginx" as the ingressClassName value.
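For illustration, the rules in the second namespace follow this shape (the host and service names here are placeholders, not my real ones):
# (placeholder names) Ingress in the second namespace, using the custom class
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: second-namespace
spec:
  ingressClassName: custom-class-nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80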
I now see both ingress controllers, each in its own namespace.
The first instance, installed as default with class name "nginx", works fine.
The second instance does not load balance, and I get the NGINX 404 error when I try to go to the configured URLs.
Also, when I look at the logs of the 1st (default) and 2nd (custom) controllers in K9s, I see they both show events from the 1st controller. Yes, even in the custom controller's logs.
What am I missing? What am I doing wrong? I read as much info as I could, and it is supposed to be easy.
What configuration am I missing? What do I need to make the 2nd controller respond to the incoming URLs and route traffic?
Thanks in advance.
Moshe
Hey, I have a question.
I'm using logback-more-appenders (the Fluency plugin) to send logs to an EFK stack (fluent-bit) running in a Kubernetes cluster, but the logs lack Kubernetes metadata (like node/pod names).
I know I can use <additionalField></additionalField> in logback.xml to add the service name (because that is static), but I cannot do it for dynamic parts like the node or pod name.
I tried to do it on the fluent-bit side using the kubernetes filter, but that works only with the tail/systemd inputs, not the forward one (it parses the tag, which contains the filename with the namespace and pod name). I'm using the forward plugin to send logs from the Java application to Elasticsearch, and in logback.xml I cannot enter a dynamic pod name (or I don't know if I can).
Any tips on how I can do it? I'd prefer to send logs using Fluency instead of sniffing the host's container logs.
In my case, the best I could think of was to change from the forward plugin to the tail plugin with structured logging (in JSON).
Have you tried passing the pod ID and node name as environment variables and adding them as additional fields in logback.xml, so that you can attach that metadata to the log events?
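A minimal sketch of that, assuming the Downward API is used and with made-up variable names; logback's variable substitution (e.g. ${POD_NAME}) could then reference them in the <additionalField> entries:
# (sketch) exposing pod and node metadata as environment variables via the Downward API
containers:
  - name: app
    image: example/app:latest   # hypothetical image
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName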
I have 2 pods inside a single Kubernetes deployment YAML:
one for the code base and php-fpm together
one for nginx
How do I share the code base folder with nginx?
I don't expect to see any answer that uses an init command to copy the folder from pod to pod.
EDIT
I also tried to split the frontend service (nginx) and the backend service (fpm and the code), but since the application itself requires complex nginx rewrite rules, that won't work for my case.
BUT according to this gist -> https://gist.github.com/matthewpalmer/741dc7a4c418318f85f2fa8da7de2ea1
it seems it is not possible to do it without COPY, but copying is super slow if you have a large file base.
I want to do the same kind of thing as a docker-compose volume.
As far as I can suggest based on the information you've given, you have the following options:
a. Build a new image with Nginx as your BASE image and copy all your source code into that image. Then reference that image in the Kubernetes Deployment.
OR
b. Add your source code to a ConfigMap and mount that in as a volume (see the sketch after this list).
OR
c. Use an initContainer (which you've already said you don't want to do).
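For (b), a rough sketch of what that could look like; the names are made up, and a ConfigMap is capped at roughly 1 MiB, so it only suits a small code base:
# (sketch of option b) mounting source files from a ConfigMap into the nginx container
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-code
data:
  index.php: |
    <?php phpinfo();
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          volumeMounts:
            - name: code
              mountPath: /var/www/html
      volumes:
        - name: code
          configMap:
            name: app-code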
All of these seem wrong from my perspective. I think it might be better to revisit why you're doing this and look at whether there are other options.
EDIT (More context now):
You don't need your code added to the Nginx container.
You just need the host to be resolvable. This can be achieved by adding a Service that points at your PHP code, with the same name as you've defined in the upstream of your Nginx config.
Look at this article: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend
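For example, something along these lines, assuming your nginx config points its upstream/fastcgi_pass at a host called php-backend on port 9000 (the name, port and selector here are assumptions):
# (sketch) a Service whose name matches the upstream host used in the nginx config
apiVersion: v1
kind: Service
metadata:
  name: php-backend
spec:
  selector:
    app: php
  ports:
    - port: 9000
      targetPort: 9000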
I'm running Traefik on a Kubernetes cluster to manage Ingress, which has been running ok for a long time.
I recently implemented cluster autoscaling, which works fine, except that on one Node (newly created by the autoscaler) Traefik won't start. It sits in CrashLoopBackOff, and when I check the Pod's logs I get: [date] [time] command traefik error: field not found, node: redirect.
Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look.
My best guess is that it has something to do with the RedirectRegex Middleware configured in Traefik's config file:
[entryPoints.http.redirect]
regex = "^http://(.+)(:80)?/(.*)"
replacement = "https://$1/$3"
Traefik actually still works - I can still access all of my apps at their URLs in my browser, even those on the Node with the dead Traefik Pod.
The other Traefik Pods on other Nodes still run happily, and the Nodes are (at least in theory) identical.
After further googling, I found this on Reddit. It turns out Traefik released v2.0 a few days ago, which is not backwards compatible.
Only this Pod had the issue, because it was the only one for which a new (v2.0) image was pulled (it being on the only recently created Node).
I reverted to v1.7 until I have time to fix it properly. I had to update the DaemonSet to use v1.7, then kill the Pod so it could be recreated from the old image.
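Roughly, that just meant pinning the image tag in the DaemonSet spec so a floating tag can no longer drift to v2.0 (the container name here is an assumption):
# (sketch) DaemonSet pod template excerpt with the image pinned to v1.7
spec:
  template:
    spec:
      containers:
        - name: traefik
          image: traefik:v1.7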
The devs have a Migration Guide that looks like it may help.
"redirect" is gone but now there is "RedirectScheme" and "RedirectRegex" as a new concept of "Middlewares".
It looks like they are moving to a pipeline approach, so you can define a chain of "middlewares" to apply to an "entrypoint" to decide how to direct it and what to add/remove/modify on packets in that chain. "backends" are now "providers", and they have a clearer, modular concept of configuration. It looks like it will offer better organization than earlier versions.
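For illustration, a rough v2 equivalent of the old entrypoint redirect, assuming the Kubernetes CRD provider is used (the resource name is made up):
# (sketch) Traefik v2 Middleware roughly replacing [entryPoints.http.redirect]
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true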
In one of my HTTP(S) Load Balancers, I wish to change my backend configuration to increase the timeout from 30s to 60s (we have a few 502s that do not have any logs server-side, and I wish to check whether they come from the LB).
But, as I validate the change, I get an error saying
Invalid value for field 'namedPorts[0].port': '0'. Must be greater
than or equal to 1
even though I didn't change the namedPort.
This issue seems to be the same, but the only solution is a workaround that does not work in my case:
Thanks for your help,
I faced the same issue and #tmirks's fix didn't work for me.
After experimenting with GCE for a while, I realised that the issue is with the Service.
By default all Services are type: ClusterIP unless you specify otherwise.
Long story short, if your Service isn't exposed as type: NodePort, then the GCE load balancer won't route traffic to it.
From the official Kubernetes project:
nodeport is a requirement of the GCE Ingress controller (and cloud controllers in general). "On-prem" controllers like the nginx ingress controllers work with clusterip:
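In practice that means something like this for the Service backing the Ingress (the names and ports are assumptions):
# (sketch) the backing Service exposed as NodePort so the GCE Ingress controller can use it
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  type: NodePort
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080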
I'm sure the OP has resolved this by now, but for anyone else pulling their hair out, this might work for you:
There's a bug of sorts in the GCE Load Balancer UI. If you add an empty frontend IP/Port combo by accident, it will create a named port in the Instance Group called port0 with a value of 0. You may not even realize this happened because you won't see the empty frontend mapping in the console.
To fix the problem, edit your instance group and remove port0 from the list of port name mappings.
After many different attempts I simply deleted the Ingress object and recreated it, and the problem went away. There must be a bug somewhere that leaves artifacts when an Ingress is updated.
Background:
Let's say I have a replication controller with some pods. When these pods were first deployed, they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example, by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can route traffic to pods that use different port numbers.
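A minimal sketch of that, with made-up names (the pod template would declare a containerPort named "http"):
# (sketch) Service referencing the pod's named port instead of a fixed number
# pod template excerpt:
#   ports:
#     - name: http
#       containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 8080
      targetPort: http   # resolved against the pod's named port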
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
In that case you can create a second service to expose the second port; it won't conflict with the existing one and you'll have no downtime.
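Roughly like this, assuming the same selector as the existing service (the names are placeholders and the 8081 port follows the question):
# (sketch) a second Service for the new port, pointing at the same pods
apiVersion: v1
kind: Service
metadata:
  name: my-service-extra
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 8081
      targetPort: 8081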
If you have more than one pod running for the same service, you can use Kubernetes Engine within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. On that screen, click "EDIT", then update and save your replication controller details.
Under "Discovery & Load Balancing", select your Service. On that screen, click "EDIT", then update and save your service details. If you changed ports, you should see them reflected under the "Endpoints" column when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you want to update the pods with a new configuration or container image:
Under "Workloads", select your Replication Controller. On that screen, scroll down to "Managed pods". Select a pod, then on that screen click "KUBECTL" -> "Delete". Note that you can do the same from the command line: kubectl delete pod <podname>. This deletes the pod and restarts it with the newly downloaded configuration and container image. Delete the pods one at a time, making sure to wait until a pod has fully restarted and is working (i.e. check its logs, debug, etc.) before deleting the next.