Istio egress gateway gives HTTP 503 error - kubernetes

I have the following manifests for deploying an Istio egress gateway route:
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: REDACTED-egress-se
spec:
  hosts:
  - sahfpxa.REDACTED
  ports:
  - number: 8080
    name: http-port
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sahfpxa-REDACTED-egress-gw
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - sahfpxa.REDACTED
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-sahfpxa-REDACTED
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: sahfpxa
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-sahfpxa-REDACTED-through-egress-gateway
spec:
  hosts:
  - sahfpxa.REDACTED
  gateways:
  - REDACTED/REDACTED-egress-gw
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 8080
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: sahfpxa
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - REDACTED/sahfpxa-REDACTED-egress-gw
      port: 8080
    route:
    - destination:
        host: sahfpxa.REDACTED
        port:
          number: 8080
      weight: 100
But I get a connection refused from the istio-proxy sidecar container of the Pod in the affected namespace, and an HTTP 503 error in the workload container in that namespace.
Any ideas what could be wrong with the configuration, or how I can debug it?

There were a few errors in your deployment manifests, for example the DestinationRule was not pointing at your ServiceEntry.
You can try to match yours against these manifests:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: etth
spec:
  hosts:
  - etth.pl
  ports:
  - number: 8080
    name: http-port
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - etth.pl
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: etth
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - etth.pl
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: etth
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: etth.pl
        port:
          number: 8080
      weight: 100
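After applying the manifests, one way to check the path end to end is to send a request from any pod that has an Istio sidecar injected. A minimal sketch, assuming a test pod labeled app=sleep exists in the mesh (the label, container name and port are placeholders to adapt):
# Curl the external host from an in-mesh pod; with the config above the
# request should leave the mesh through the egress gateway.
export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$SOURCE_POD" -c sleep -- curl -sI http://etth.pl:8080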
You can check whether the routes are present with:
$ istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json
[
    {
        "name": "http.80",
        "virtualHosts": [
            {
                "name": "etth.pl:80",
                "domains": [
                    "etth.pl",
                    "etth.pl:80"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/",
                            "caseSensitive": true
                        },
                        "route": {
                            "cluster": "outbound|8080||etth.pl",
                            "timeout": "0s",
                            "retryPolicy": {
                                "retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
                                "numRetries": 2,
                                "retryHostPredicate": [
                                    {
                                        "name": "envoy.retry_host_predicates.previous_hosts"
                                    }
                                ],
                                "hostSelectionRetryMaxAttempts": "5",
                                "retriableStatusCodes": [
                                    503
                                ]
                            },
                            "maxGrpcTimeout": "0s"
                        },
                        "metadata": {
                            "filterMetadata": {
                                "istio": {
                                    "config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/direct-cnn-through-egress-gateway"
                                }
                            }
                        },
                        "decorator": {
                            "operation": "etth.pl:8080/*"
                        },
                        "typedPerFilterConfig": {
                            "mixer": {
                                "@type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
                                "disableCheckCalls": true,
                                "mixerAttributes": {
                                    "attributes": {
                                        "destination.service.host": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.name": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.namespace": {
                                            "stringValue": "default"
                                        }
                                    }
                                },
                                "forwardAttributes": {
                                    "attributes": {
                                        "destination.service.host": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.name": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.namespace": {
                                            "stringValue": "default"
                                        }
                                    }
                                }
                            }
                        }
                    }
                ]
            }
        ],
        "validateClusters": false
    },
    {
        "virtualHosts": [
            {
                "name": "backend",
                "domains": [
                    "*"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/stats/prometheus"
                        },
                        "route": {
                            "cluster": "prometheus_stats"
                        }
                    }
                ]
            }
        ]
    }
]
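If the route shown above is present but requests still return 503, the egress gateway's access log and its listener/cluster config are the next things to look at; a hedged sketch using the same pod selection as the command above:
# Tail the egress gateway logs while reproducing the failing request.
kubectl logs -f -n istio-system \
  $(kubectl get pods -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}')

# Inspect the gateway's listeners and clusters the same way as the routes.
istioctl pc listeners $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system
istioctl pc clusters $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system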

Related

Containerized logic app not working when deployed to AKS

We are trying to deploy a logic app as a containerized workload in AKS. Following is our Dockerfile:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0.14492-appservice
ENV AzureWebJobsStorage=<StorageAccount connection string>
ENV AZURE_FUNCTIONS_ENVIRONMENT Development
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV FUNCTIONS_V2_COMPATIBILITY_MODE=true
COPY ./bin/release/netcoreapp3.1/publish/ /home/site/wwwroot
Following is our deployment manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pfna-pgt-sf-pdfextract
  template:
    metadata:
      labels:
        app: pfna-pgt-sf-pdfextract
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: pfna-pgt-sf-pdfextract
        image: "image_link"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: AzureBlob_connectionString
          value: <connection_string>
        - name: AzureWebJobsStorage
          value: <connection_string>
      imagePullSecrets:
      - name: sbx-acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: pfna-pgt-sf-pdfextract
  namespace: canary
  labels:
    app: pfna-pgt-sf-pdfextract
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http-pfna-pgt-sf-pdfextract
  selector:
    app: pfna-pgt-sf-pdfextract
Following is connections.json:
{
"serviceProviderConnections": {
"AzureBlob": {
"parameterValues": {
"connectionString": "#appsetting('AzureWebJobsStorage')"
},
"serviceProvider": {
"id": "/serviceProviders/AzureBlob"
},
"displayName": "localAzureBlob"
}
},
"managedApiConnections": {}
}
Following is the host.json:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
"version": "[1.*, 2.0.0)"
},
"extensions": {
"workflow": {
"settings": {
"Runtime.Backend.VariableOperation.MaximumStatelessVariableSize": "5000000"
}
}
}
}
The image runs successfully in Docker Desktop, but when deployed to AKS we get 'Function host is not running'.
Please help resolve this.
You need to specify WEBSITE_HOSTNAME as well (it does not matter what the value is, it just needs to be set).
That being said, as of today there is another issue that causes the function host not to start (libadvapi32.dll).
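For reference, a minimal sketch of what that could look like in the container spec of the Deployment above; the value shown is an arbitrary placeholder, since the answer only requires the variable to exist:
        env:
        - name: AzureBlob_connectionString
          value: <connection_string>
        - name: AzureWebJobsStorage
          value: <connection_string>
        - name: WEBSITE_HOSTNAME
          # Placeholder value; per the answer it only needs to be set.
          value: "logicapp"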

Cannot GET localhost:9000/main.js frontend using Kubernetes / Ingress-NGINX

I have a SPA and a microservices architecture. I am running the program locally on my machine using skaffold dev and Kubernetes with Google Cloud Platform (GCP). I am connecting my frontend to my backend using Ingress-NGINX. When I go to the hostname mavata.dev in my browser (configured in my local hosts file), the site no longer loads. I get a "Cannot GET localhost:9000/main.js" error with net::ERR_CONNECTION_REFUSED. See below for my config:
Kubernetes Config:
(ingress-srv.yaml)
# RUN: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
# for GCP run: kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
# then run: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    # certmanager.k8s.io/cluster-issuer: core-prod
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    # nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    # nginx.ingress.kubernetes.io/rewrite-target: /
    # nginx.ingress.kubernetes.io/secure-backends: "true"
    # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/websocket-services: core-service
    # nginx.org/websocket-services: core-service
    #---
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    # nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # nginx.ingress.kubernetes.io/server-snippets: |
    #   location / {
    #     proxy_set_header Upgrade $http_upgrade;
    #     proxy_http_version 1.1;
    #     proxy_set_header X-Forwarded-Host $http_host;
    #     proxy_set_header X-Forwarded-Proto $scheme;
    #     proxy_set_header X-Forwarded-For $remote_addr;
    #     proxy_set_header Host $host;
    #     proxy_set_header Connection "upgrade";
    #     proxy_cache_bypass $http_upgrade;
    #   }
spec:
  rules:
  - host: mavata.dev # need to update 'hosts' file on local machine (in VS Code) C:\Windows\System32\drivers\etc
    http:
      paths:
      # - backend:
      #     pathType: Prefix
      #     serviceName: tornado-socket
      #     servicePort: 8000
      # - path: /api/company/create
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: company-clusterip-srv
      #       port:
      #         number: 4000
      # - path: /api/company/?(.*)
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: company-srv
      #       port:
      #         number: 4000
      # - path: /api/companies
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: companies-srv
      #       port:
      #         number: 4000
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-server-srv
            port:
              number: 4000
      # - path: /api/permissions/?(.*)
      #   pathType: Prefix
      #   backend:
      #     service:
      #       name: permissions-srv
      #       port:
      #         number: 4000
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 9000
(client-depl.yaml)
apiVersion: apps/v1
kind: Deployment # type of k8s object we want to create
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client-container
        image: us.gcr.io/mavata/frontend
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
  - name: client-container
    protocol: TCP
    port: 9000
    targetPort: 9000
SPA Webpack Dev Config:
(webpack.dev.config)
const { merge } = require('webpack-merge');
// const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');
const commonConfig = require('./webpack.common');
// const packageJson = require('../package.json');
const globals = require('../src/data-variables/global');
const port = globals.port;
const devConfig = {
mode: 'development',
output: {
publicPath: `http://localhost:${port}/` // don't forget the slash at the end
},
devServer: {
host: '0.0.0.0',
port: port,
allowedHosts: ['mavata.dev'],
historyApiFallback: {
index: 'index.html',
},
},
// plugins: [
// new ModuleFederationPlugin({
// name: 'container',
// filename: 'remoteEntry.js',
// remotes: {
// marketingMfe: 'marketingMod@http://localhost:8081/remoteEntry.js',
// authMfe: 'authMod@http://localhost:8082/remoteEntry.js',
// companyMfe: 'companyMod@http://localhost:8083/remoteEntry.js',
// dataMfe: 'dataMod@http://localhost:8084/remoteEntry.js'
// },
// exposes: {
// './Functions': './src/functions/Functions',
// './Variables': './src/data-variables/Variables'
// },
// shared: {...packageJson.dependencies, ...packageJson.peerDependencies},
// }),
// ],
};
module.exports = merge(commonConfig, devConfig);
(webpack.common.config)
const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');
module.exports = {
module: {
rules: [
{
test: /\.m?js$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
presets: ['@babel/preset-react', '@babel/preset-env'],
plugins: ['@babel/plugin-transform-runtime'],
},
},
},
{
test: /\.css$/,
use: ['style-loader', 'css-loader']
},
{
test: /\.(scss)$/,
use: [
{
// Adds CSS to the DOM by injecting a `<style>` tag
loader: 'style-loader'
},
{
// Interprets `@import` and `url()` like `import/require()` and will resolve them
loader: 'css-loader'
},
{
// Loads a SASS/SCSS file and compiles it to CSS
loader: 'sass-loader'
},
]
},
// {
// test: /\.s[ac]ss$/i,
// use: [
// // Creates `style` nodes from JS strings
// "style-loader",
// // Translates CSS into CommonJS
// "css-loader",
// // Compiles Sass to CSS
// "sass-loader",
// ],
// },
{
test: /\.svg$/,
use: ['@svgr/webpack'],
},
{
test: /\.(woff(2)?|ttf|eot)(\?v=\d+\.\d+\.\d+)?$/,
exclude: /node_modules/,
use: [
{
loader: 'file-loader',
options: {
name: '[path][name].[ext]',
outputPath: 'fonts/'
}
}
]
},
{
// Load all images as base64 encoding if they are smaller than 8192 bytes
test: /\.(png|jpg|jpeg|gif|ico|svg|webp)$/,
use: [
{
// loader: 'url-loader',
loader: 'file-loader',
options: {
// On development we want to see where the file is coming from, hence we preserve the [path]
name: '[path][name].[ext]?hash=[hash:20]',
//name: '[path][name].[ext]',
limit: 8192
},
},
],
},
{
// Load all icons
test: /\.(eot|woff|woff2|svg|ttf)([\?]?.*)$/,
use: [
{
loader: 'file-loader',
}
],
},
{
test: /\.(ttf|eot|woff|woff2)$/,
loader: 'file-loader',
options: {
name: 'fonts/[name].[ext]',
},
},
],
},
plugins: [
new HtmlWebpackPlugin({
template: './public/index.html',
}),
],
resolve: {
extensions: ['', '.js', '.jsx', '.scss', '.eot', '.ttf', '.svg', '.woff'],
modules: ['node_modules', 'src', 'scripts', 'images', 'fonts'],
alias: {
Navbar: path.resolve(__dirname, '../src/components/navbar/'),
containerMfe: path.resolve(__dirname, '../src/'),
Variables: path.resolve(__dirname, '../src/data-variables/Variables.js'),
Functions: path.resolve(__dirname, '../src/functions/Functions.js')
}
},
};
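One way to narrow down where the "Cannot GET localhost:9000/main.js" comes from is to test the client Service and the Ingress separately; a rough sketch, assuming the resource names from the manifests above and the default ingress-nginx install namespace from the commands in the comments:
# Bypass the Ingress and hit the client service directly.
kubectl port-forward service/client-srv 9000:9000
curl -I http://localhost:9000/main.js

# Then check what the Ingress controller knows about mavata.dev.
kubectl get ingress ingress-srv
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50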

telepresence: error: workload "xxx-service.default" not found

I have this chart of a personal project deployed in minikube:
---
# Source: frontend/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: xxx-app-service
spec:
  selector:
    app: xxx-app
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
---
# Source: frontend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2022-06-19T21:57:01Z'
  generation: 3
  labels:
    app: xxx-app
  name: xxx-app
  namespace: default
  resourceVersion: '43299'
  uid: 7c43767a-abbd-4806-a9d2-6712847a0aad
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxx-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: xxx-app
    spec:
      containers:
      - image: "registry.gitlab.com/esavara/xxx/wm:staging"
        name: frontend
        imagePullPolicy: Always
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 3
        env:
        - name: PORT
          value: "3000"
        resources: {}
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
# Source: frontend/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: '2022-06-19T22:28:58Z'
  generation: 1
  name: xxx-app
  namespace: default
  resourceVersion: '44613'
  uid: b58dcd17-ee1f-42e5-9dc7-d915a21f97b5
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: "xxx-app-service"
            port:
              number: 3000
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 192.168.39.80
---
# Source: frontend/templates/gitlab-registry-sealed.json
{
"kind": "SealedSecret",
"apiVersion": "bitnami.com/v1alpha1",
"metadata": {
"name": "regcred",
"namespace": "default",
"creationTimestamp": null
},
"spec": {
"template": {
"metadata": {
"name": "regcred",
"namespace": "default",
"creationTimestamp": null
},
"type": "kubernetes.io/dockerconfigjson",
"data": null
},
"encryptedData": {
".dockerconfigjson": "AgBpHoQw1gBq0IFFYWnxlBLLYl1JC23TzbRWGgLryVzEDP8p+NAGjngLFZmtklmCEHLK63D9pp3zL7YQQYgYBZUjpEjj8YCTOhvnjQIg7g+5b/CPXWNoI5TuNexNJFKFv1mN5DzDk9np/E69ogRkDJUvUsbxvVXs6/TKGnRbnp2NuI7dTJ18QgGXdLXs7S416KE0Yid9lggw06JrDN/OSxaNyUlqFGcRJ6OfGDAHivZRV1Kw9uoX06go3o+AjVd6eKlDmmvaY/BOc52bfm7pY2ny1fsXGouQ7R7V1LK0LcsCsKdAkg6/2DU3v32mWZDKJgKkK5efhTQr1KGOBoLuuHKX6nF5oMA1e1Ii3wWe77lvWuvlpaNecCBTc7im+sGt0dyJb4aN5WMLoiPGplGqnuvNqEFa/nhkbwXm5Suke2FQGKyzDsMIBi9p8PZ7KkOJTR1s42/8QWVggTGH1wICT1RzcGzURbanc0F3huQ/2RcTmC4UvYyhCUqr3qKrbAIYTNBayfyhsBaN5wVRnV5LiPxjLbbOmObSr/8ileJgt1HMQC3V9pVLZobaJvlBjr/mmNlrF118azJOFY+a/bqzmtBWwlcvVuel/EaOb8uA8mgwfnbnShMinL1OWTHoj+D0ayFmUdsQgMYwRmStnC7x/6OXThmBgfyrLguzz4V2W8O8qbdDz+O5QoyboLUuR9WQb/ckpRio2qa5tidnKXzZzbWzFEevv9McxvCC1+ovvw4IullU8ra3FutnTentmPHKU2OPr1EhKgFKIX20u8xZaeIJYLCiZlVQohylpjkHnBZo1qw+y1CTiDwytunpmkoGGAXLx++YQSjEJEo889PmVVlSwZ8p/Rdusiz1WbgKqFt27yZaOfYzR2bT++HuB5x6zqfK6bbdV/UZndXs"
}
}
}
I'm trying to use Telepresence to redirect traffic from the deployed application to a Docker container that has my project mounted inside it with hot-reloading, so I can continue developing it but inside Kubernetes. However, running telepresence intercept xxx-app-service --port 3000:3000 --env-file /tmp/xxx-app-service.env fails with the following error:
telepresence: error: workload "xxx-app-service.default" not found
Why is this happening and how do I fix it?
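A hedged first step when intercept reports a missing workload is to ask Telepresence what it can actually see; the error suggests the name is being resolved as a workload, while the Deployment above is named xxx-app and the intercept was given the Service name:
# List the workloads Telepresence considers interceptable in this namespace.
telepresence list

# Sketch only: intercepting by the Deployment name from the manifest above;
# the env-file path is a placeholder.
telepresence intercept xxx-app --port 3000:3000 --env-file /tmp/xxx-app.env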

Skipper https rest end point requests returning http urls

I am trying a PoC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard does not load. I debugged the issue and found the root cause: when the dashboard loads, it hits the Skipper REST endpoint /api, which returns a response containing the URLs of the other Skipper endpoints, but the returned URLs are all http. How can I force Skipper to return https URLs instead of http? Below is the response when I curl the same endpoint.
C:>curl -k https:///api
RESPONSE FROM SKIPPER
{
"_links" : {
"repositories" : {
"href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
"templated" : true
},
"deployers" : {
"href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
"templated" : true
},
"releases" : {
"href" : "http://<skipper_url>/api/releases{?page,size,sort}",
"templated" : true
},
"packageMetadata" : {
"href" : "**http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}**",
"templated" : true
},
"about" : {
"href" : "http://<skipper_url>/api/about"
},
"release" : {
"href" : "http://<skipper_url>/api/release"
},
"package" : {
"href" : "http://<skipper_url>/api/package"
},
"profile" : {
"href" : "http://<skipper_url>/api/profile"
}
}
}
Kubernetes deployment YAML:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: skipper-server-network-policy
spec:
  podSelector:
    matchLabels:
      app: skipper-server
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          gkp_namespace: ingress-nginx
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: v1
kind: Secret
metadata:
  name: poc-secret
data:
  .dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skipper-server
  template:
    metadata:
      labels:
        app: skipper-server
      annotations:
        kubernetes.io/psp: nonroot
    spec:
      containers:
      - name: skipper-server
        image: <image_path>
        imagePullPolicy: Always
        ports:
        - containerPort: 7577
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 2Gi
          requests:
            cpu: 25m
            memory: 1Gi
        securityContext:
          runAsUser: 99
      imagePullSecrets:
      - name: poc-secret
      serviceAccount: spark
      serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
  name: skipper-server
  labels:
    app: skipper-server
spec:
  ports:
  - port: 80
    targetPort: 7577
    protocol: TCP
    name: http
  selector:
    app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/ingress.allow.http: true
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: "<skipper_url>"
    http:
      paths:
      - path: /
        backend:
          serviceName: skipper-server
          servicePort: 80
  tls:
  - hosts:
    - "<skipper_url>"
SKIPPER APPLICATION.properties
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning http URLs for the deployers, with the Kubernetes ingress then trying to redirect and getting blocked with a 308 error. Adding the properties below to the Skipper environment fixed the issue.
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: skipper-server
spec:
  containers:
    env:
    - name: "server.tomcat.internal-proxies"
      value: ".*"
    - name: "server.use-forward-headers"
      value: "true"
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: skipper-server
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
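To confirm the change, the /api endpoint from the question can be curled again; with the forward-headers properties in place the returned hrefs should now be https (the placeholder URL matches the one used in the question):
# The href values should now start with https://<skipper_url>/...
curl -k https://<skipper_url>/api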

Kafka behind Traefik on Kubernetes

I am trying to configure a Kafka cluster behind Traefik, but my producers and clients (which are outside Kubernetes) don't connect to the bootstrap servers. They keep saying:
"no resolvable bootstrap servers in the given url"
Actually here is the Traefik ingress:
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"name": "nppl-ingress",
"annotations": {
"kubernetes.io/ingress.class": "traefik",
"traefik.frontend.rule.type": "PathPrefixStrip"
}
},
"spec": {
"rules": [
{
"host": "" ,
"http": {
"paths": [
{
"path": "/zuul-gateway",
"backend": {
"serviceName": "zuul-gateway",
"servicePort": "zuul-port"
}
},
{
"path": "/kafka",
"backend": {
"serviceName": "kafka-broker",
"servicePort": "kafka-port"
}
[..]
}
What I give to the kafka consumers/producers is the public IP of Traefik.
Here is the flow: [Kafka producers/consumers] -> Traefik(exposed as Load Balancer) -> [Kafka-Cluster]
Is there any solution? Otherwise I was thinking of adding a Kafka REST proxy (https://docs.confluent.io/current/kafka-rest/docs/index.html) between Traefik and the Kafka brokers, but I don't think that is the ideal solution.
I did this; you can refer to it. In the Kubernetes deployment kafka.yaml:
env:
- name: KAFKA_BROKER_ID
  value: "1"
- name: KAFKA_CREATE_TOPICS
  value: "test:1:1"
- name: KAFKA_ZOOKEEPER_CONNECT
  value: "zookeeper:2181"
- name: KAFKA_ADVERTISED_LISTENERS
  value: "INSIDE://:9092,OUTSIDE://kafka-com:30322"
- name: KAFKA_LISTENERS
  value: "INSIDE://:9092,OUTSIDE://:30322"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: "INSIDE"
The Kafka Service, i.e. the address that external services call (or the Traefik proxy address):
---
kind: Service
apiVersion: v1
metadata:
  name: kafka-com
  namespace: dev
  labels:
    k8s-app: kafka
spec:
  selector:
    k8s-app: kafka
  ports:
  - port: 9092
    name: innerport
    targetPort: 9092
    protocol: TCP
  - port: 30322
    name: outport
    targetPort: 30322
    protocol: TCP
    nodePort: 30322
  type: NodePort
Ensure that the Kafka external port and the nodePort are consistent; other services then call kafka-com:30322. I wrote this up on my blog as config_kafka_in_kubernetes; hope it helps!
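For completeness, a sketch of what an external client would then point at; the property name is the standard Kafka client setting and the address is the one advertised in the answer above, assuming kafka-com:30322 is resolvable and reachable from outside the cluster:
# client.properties for a producer/consumer running outside Kubernetes
bootstrap.servers=kafka-com:30322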