I'm looking for some guidance on exposing UDP services on GKE using the ingress-nginx controller. After following the instructions at https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ I was able to reach the services when deploying to a local minikube VM using the ConfigMap method. However, when I deploy to GKE, the services are unreachable via the IP of the ingress controller service.
I see the ports (1053 and 15353) on the controller are mapped correctly:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx ingress-nginx-controller LoadBalancer 10.51.252.115 <redacted> 80:32307/TCP,443:30514/TCP,1053:32764/UDP,15353:31385/UDP 54d
The cluster itself was created using the google_container_cluster Terraform module with default settings, and the controller handles HTTPS traffic without issue. One thing I did notice is that the auto-generated firewall rules omit UDP for the specified ports and allow TCP instead. Manually adding a UDP firewall rule for those ports didn't help.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
k8s-fw-a62a982b26a034e0e97258af6717b8b0 cluster-network-labs-us-west1 INGRESS 1000 tcp:80,tcp:443,tcp:1053,tcp:15353 False
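For reference, the manual rule I added was created along these lines (a sketch, not verbatim; the rule name and source range are illustrative, the network name is taken from the listing above):

```shell
# Allow UDP 1053/15353 ingress to the cluster nodes
gcloud compute firewall-rules create allow-ingress-nginx-udp \
  --network=cluster-network-labs-us-west1 \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=udp:1053,udp:15353 \
  --source-ranges=0.0.0.0/0
```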
I've deployed a simple UDP ping-pong server which works both locally on bare metal and on minikube as a Kubernetes service behind the ingress-nginx controller. That same controller, with an identical configuration on GKE, causes client requests to time out.
Server
package server

import (
	"fmt"
	"net"
	"os"
	"time"

	"github.com/spf13/cobra"
)

func response(udpServer net.PacketConn, addr net.Addr, buf []byte) {
	fmt.Println("msg", string(buf))
	now := time.Now().Format(time.ANSIC)
	responseStr := fmt.Sprintf("%v. msg: %v", now, string(buf))
	udpServer.WriteTo([]byte(responseStr), addr)
}

var Command = &cobra.Command{
	Use:   "server",
	Short: "Debug UDP server.",
	Long:  `Provides a UDP server endpoint which responds to pings.`,
	RunE: func(cmd *cobra.Command, args []string) error {
		udpServer, err := net.ListenPacket("udp", fmt.Sprintf(":%d", serverPort))
		if err != nil {
			return err
		}
		defer udpServer.Close()
		fmt.Fprintf(os.Stdout, "listening :%d\n", serverPort)
		for {
			buf := make([]byte, 1024)
			n, addr, err := udpServer.ReadFrom(buf)
			if err != nil {
				continue
			}
			// Pass only the bytes actually read, not the whole 1024-byte buffer.
			go response(udpServer, addr, buf[:n])
		}
	},
}

var serverPort int

func init() {
	Command.PersistentFlags().IntVar(&serverPort, "port", 1053, "Port to open and listen for UDP packets")
}
Client
package client

import (
	"fmt"
	"net"

	"github.com/spf13/cobra"
)

var Command = &cobra.Command{
	Use:   "client",
	Short: "Debug UDP client.",
	Long:  `Provides a UDP client which sends pings and prints the response.`,
	RunE: func(cmd *cobra.Command, args []string) error {
		serverAddr, err := net.ResolveUDPAddr("udp", fmt.Sprintf("%s:%d", queryHost, queryPort))
		if err != nil {
			return err
		}
		conn, err := net.DialUDP("udp", nil, serverAddr)
		if err != nil {
			return err
		}
		defer conn.Close()
		if _, err = conn.Write([]byte(msg)); err != nil {
			return err
		}
		received := make([]byte, 1024)
		// Print only the bytes actually received.
		n, err := conn.Read(received)
		if err != nil {
			return fmt.Errorf("read data failed: %w", err)
		}
		fmt.Println(string(received[:n]))
		return nil
	},
}

var msg string
var queryHost string
var queryPort int

func init() {
	Command.PersistentFlags().StringVar(&msg, "msg", "echo", "Message used to send ping/pong requests over UDP")
	Command.PersistentFlags().StringVar(&queryHost, "host", "127.0.0.1", "Host used to send ping/pong requests over UDP")
	Command.PersistentFlags().IntVar(&queryPort, "port", 1053, "Port used to send ping/pong requests over UDP")
}
Has anyone seen something similar, or have any ideas on where I can dig in further?
Thanks
Versions:
ingress-nginx helm chart - 4.4.0
ingress-nginx - 1.5.1
Kubernetes - v1.24.5-gke.600
registry.terraform.io/hashicorp/google - v4.43.0
As per this SO1 & the official docs, it is not possible to expose a UDP service externally on GKE.
But as per the document you referenced, it is possible using the NGINX Ingress controller.
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY].
This guide describes how it can be achieved using minikube, but doing this on an on-premises Kubernetes cluster is different and requires a few more steps.
Please go through this SO thread for more information.
Check which type of load balancer your exposed service creates. With the NGINX ingress controller, by default it may create a TCP-only load balancer, which supports TCP; I'm not sure whether the NGINX ingress load balancer handles UDP.
Network Load Balancer is a good option if you want to expose the UDP service directly as type:LoadBalancer, as it supports the UDP/TCP both.
Ref : https://stackoverflow.com/a/69709859/5525824
Ref for LB service
apiVersion: v1
kind: Service
metadata:
  name: udp-service
spec:
  selector:
    log: "true"
  ports:
    - name: udp-input
      port: 3333
      protocol: UDP
      targetPort: 3333
  type: LoadBalancer
For NGINX, you could give this a try: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ OR https://github.com/kubernetes/ingress-nginx/issues/4370
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
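One more thing worth checking: besides the ConfigMap, the ingress-nginx controller Service itself must list the UDP ports with protocol: UDP, and some cloud load balancers (GCP's included, historically) may not accept a single LoadBalancer Service mixing TCP and UDP ports. A sketch of the extra port entries, using the ports from the question (port names are illustrative):

```yaml
# Additional entries under spec.ports of the ingress-nginx-controller Service
- name: udp-1053
  port: 1053
  protocol: UDP
  targetPort: 1053
- name: udp-15353
  port: 15353
  protocol: UDP
  targetPort: 15353
```

If the cloud provider rejects mixed protocols, a separate UDP-only LoadBalancer Service pointing at the same controller pods is a common workaround.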
I am using HAProxy as the load balancer for my application, and to make it highly available I am using keepalived with a floating IP address. But whenever my primary load balancer server goes down (removed from the network or turned off), all my services go down instead of failing over to the secondary load balancer.
My keepalived.conf for master server is,
global_defs {
    # Keepalived process identifier
    lvs_id haproxy_DH
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}

# Virtual interface
vrrp_instance VI_01 {
    state MASTER
    interface eno16777984   # name of the network interface
    virtual_router_id 51
    priority 101
    # The virtual IP address shared between the two load balancers
    virtual_ipaddress {
        172.16.231.162
    }
    track_script {
        check_haproxy
    }
}
For the backup server it is:
global_defs {
    # Keepalived process identifier
    lvs_id haproxy_DH_passive
}

# Script used to check if HAProxy is running
vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}

# Virtual interface
vrrp_instance VI_01 {
    state BACKUP
    interface eno16777984   # name of the network interface
    virtual_router_id 51
    priority 100
    # The virtual IP address shared between the two load balancers
    virtual_ipaddress {
        172.16.231.162
    }
    track_script {
        check_haproxy
    }
}
The virtual IP address is assigned and working when both load balancers are up. But whenever machine goes down, my service also goes down. I am using CentOS7, Please help.
Use this:
global_defs {
    router_id ovp_vrrp
}

vrrp_script haproxy_check {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance OCP_EXT {
    interface ens192
    virtual_router_id 51
    priority 100
    state MASTER
    virtual_ipaddress {
        10.19.114.231 dev ens192
    }
    track_script {
        haproxy_check
    }
    authentication {
        auth_type PASS
        auth_pass 1cee4b6e-2cdc-48bf-83b2-01a96d1593e4
    }
}
More info: https://www.openshift.com/blog/haproxy-highly-available-keepalived
I have an issue where my Metricbeat traffic is caught by my HTTP pipeline.
Logstash, Elasticsearch and Metricbeat are all running in Kubernetes.
Metricbeat is set up to send to Logstash on port 5044, which logs to a file in /tmp. This works fine. But whenever I create a pipeline with an http input, it also catches the Metricbeat input and sends it to the test2 index in Elasticsearch, as defined in the http pipeline.
Why does it behave like this?
/usr/share/logstash/pipeline/http.conf
input {
  http {
    port => "8080"
  }
}
output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["http://my-host.com:9200"]
    index => "test2"
  }
}
/usr/share/logstash/pipeline/beats.conf
input {
  beats {
    port => "5044"
  }
}
output {
  file {
    path => '/tmp/beats.log'
    codec => "json"
  }
}
/usr/share/logstash/config/logstash.yml
pipeline.id: main
pipeline.workers: 1
pipeline.batch.size: 125
pipeline.batch.delay: 50
http.host: "0.0.0.0"
http.port: 9600
config.reload.automatic: true
config.reload.interval: 3s
/usr/share/logstash/config/pipeline.yml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
Even if you have multiple config files, Logstash reads them as a single pipeline, concatenating the inputs, filters and outputs. If you need to run them as separate pipelines, you have two options.
First, change your pipelines.yml and create different pipeline.ids, each one pointing to one of the config files.
- pipeline.id: beats
  path.config: "/usr/share/logstash/pipeline/beats.conf"
- pipeline.id: http
  path.config: "/usr/share/logstash/pipeline/http.conf"
Or you can use tags in your input, filter and output, for example:
input {
  http {
    port => "8080"
    tags => ["http"]
  }
  beats {
    port => "5044"
    tags => ["beats"]
  }
}
output {
  if "http" in [tags] {
    elasticsearch {
      hosts => ["http://my-host.com:9200"]
      index => "test2"
    }
  }
  if "beats" in [tags] {
    file {
      path => '/tmp/beats.log'
      codec => "json"
    }
  }
}
Using the pipelines.yml file is the recommended way to run multiple pipelines.
I'm new to Akka and wanted to connect two PCs using Akka remoting, just to run some code on both as two actors. I tried the example in the Akka docs, but when I add the two IP addresses to the config file I always get the errors below.
First machine give me this error:
[info] [ERROR] [11/20/2018 13:58:48.833]
[ClusterSystem-akka.remote.default-remote-dispatcher-6]
[akka.remote.artery.Association(akka://ClusterSystem)] Outbound
control stream to [akka://ClusterSystem@192.168.1.2:2552] failed.
Restarting it. Handshake with [akka://ClusterSystem@192.168.1.2:2552]
did not complete within 20000 ms
(akka.remote.artery.OutboundHandshake$HandshakeTimeoutException:
Handshake with [akka://ClusterSystem@192.168.1.2:2552] did not
complete within 20000 ms)
And second machine:
Exception in thread "main"
akka.remote.RemoteTransportException: Failed to bind TCP to
[192.168.1.3:2552] due to: Bind failed because of
java.net.BindException: Cannot assign requested address: bind
Config file content :
akka {
  actor {
    provider = cluster
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@192.168.1.3:2552",
      "akka://ClusterSystem@192.168.1.2:2552"]
    # auto downing is NOT safe for production deployments.
    # you may want to use it during development, read more about it in the docs.
    auto-down-unreachable-after = 120s
  }
}
# Enable metrics extension in akka-cluster-metrics.
akka.extensions=["akka.cluster.metrics.ClusterMetricsExtension"]
# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder=${user.dir}/target/native
First of all, you don't need cluster configuration for Akka remoting. Both PCs (nodes) should enable remoting with a concrete port instead of 0; that way you know which port to connect to.
Have below configurations
PC1
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 19000
    }
  }
}
PC2
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.4"
      canonical.port = 18000
    }
  }
}
Use the actor path below to connect to any remote actor on PC2 from PC1:
akka://<PC2-ActorSystem>@192.168.1.4:18000/user/<actor deployed in PC2>
Use this path to connect from PC2 to PC1:
akka://<PC1-ActorSystem>@192.168.1.3:19000/user/<actor deployed in PC1>
Port numbers and IP addresses are samples.
I have 2 nodes with keepalived and haproxy services (CentOS 7).
If I shut down one node, everything keeps working fine. But I also want the VIPs to fail over when haproxy goes down.
This is 1st node config:
vrrp_script ha_check {
    script "/etc/keepalived/haproxy_check"
    interval 2
    weight 21
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777984
    virtual_router_id 151
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111
    }
    virtual_ipaddress {
        10.0.100.233
    }
    smtp_alert
    track_script {
        ha_check
    }
}
2nd node:
vrrp_script ha_check {
    script "/etc/keepalived/haproxy_check"
    interval 2
    fall 2
    rise 2
    timeout 1
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777984
    virtual_router_id 151
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111
    }
    virtual_ipaddress {
        10.0.100.233
    }
    smtp_alert
    track_script {
        ha_check
    }
}
cat /etc/keepalived/haproxy_check
systemctl status haproxy | grep "inactive"
When I stop haproxy it still does not failover the VIPs to the next
host.
[root@cks-hatest1 keepalived]# tail /var/log/messages
Nov 30 10:35:24 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) failed
Nov 30 10:35:33 cks-hatest1 systemd: Started HAProxy Load Balancer.
Nov 30 10:35:45 cks-hatest1 systemd: Stopping HAProxy Load Balancer...
Nov 30 10:35:45 cks-hatest1 systemd: Stopped HAProxy Load Balancer.
Nov 30 10:35:46 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) succeeded
What I am doing wrong? Thank you in advance!
In your script you are checking whether the output of
systemctl status haproxy
contains the keyword "inactive". Is that the value you actually get when you stop the haproxy service manually?
Also, as soon as the haproxy service is stopped, your log shows it being started again. Can you verify that?
Also, try replacing the script with:
script "killall -0 haproxy"
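Note why the grep-based check misbehaves: keepalived treats an exit status of 0 from the track script as success, and grep exits 0 exactly when it finds "inactive", i.e. when haproxy is down, so the check succeeds at precisely the wrong time. A quick demonstration of the inverted exit codes:

```shell
# grep -q exits 0 on a match, non-zero otherwise, so a check script that
# greps for "inactive" reports success precisely when haproxy is stopped.
echo "Active: inactive (dead)" | grep -q "inactive"; echo "haproxy down -> exit $?"
echo "Active: active (running)" | grep -q "inactive"; echo "haproxy up -> exit $?"
```

This matches your log: the script "failed" while haproxy was running and "succeeded" right after it was stopped.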
It's easy. Try this for example:
vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}
At the end of the config you should add the following part too:
track_script {
    check_haproxy
}