Kubernetes WebSockets using Socket.io, Express.js and Nginx Ingress

I want to connect a React Native application using Socket.io to a server that is inside a Kubernetes Cluster hosted on Google Cloud Platform (GKE).
There seems to be an issue with the Nginx Ingress Controller declaration but I cannot find it.
I have tried adding nginx.org/websocket-services; rewriting my backend code to use a separate Node.js HTTP server on port 3004 and exposing it via the Ingress controller under a different path from the one on port 3003; and several other suggestions from other SO questions and GitHub issues.
Information that might be useful:
Cluster master version: 1.15.11-gke.15
I use a Load Balancer managed with Helm (stable/nginx-ingress) with RBAC enabled
All deployments and services are within the namespace gitlab-managed-apps
The error I receive when trying to connect to socket.io is: Error: websocket error
For the front-end part, the code is as follows:
App.js
const socket = io('https://example.com/app-sockets/socketns', {
  reconnect: true,
  secure: true,
  transports: ['websocket', 'polling']
});
I expect the above to connect me to a socket.io namespace called socketns.
The backend code is:
app.js
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const redis = require('socket.io-redis');
io.set('transports', ['websocket', 'polling']);
io.adapter(redis({
  host: process.env.NODE_ENV === 'development' ? 'localhost' : 'redis-cluster-ip-service.gitlab-managed-apps.svc.cluster.local',
  port: 6379
}));
io.of('/').adapter.on('error', function(err) { console.log('Redis Adapter error! ', err); });
const nsp = io.of('/socketns');
nsp.on('connection', function(socket) {
  console.log('connected!');
});
server.listen(3003, () => {
  console.log('App listening to 3003');
});
The ingress service is:
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.org/websocket-services: "app-sockets-cluster-ip-service"
  name: ingress-service
  namespace: gitlab-managed-apps
spec:
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-prod
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: app-cms-cluster-ip-service
          servicePort: 3000
        path: /?(.*)
      - backend:
          serviceName: app-users-cluster-ip-service
          servicePort: 3001
        path: /app-users/?(.*)
      - backend:
          serviceName: app-sockets-cluster-ip-service
          servicePort: 3003
        path: /app-sockets/?(.*)
      - backend:
          serviceName: app-sockets-cluster-ip-service
          servicePort: 3003
        path: /app-sockets/socketns/?(.*)

The solution is to remove the nginx.ingress.kubernetes.io/rewrite-target: /$1 annotation.
Here is a working configuration (please note that the apiVersion has changed since the question was asked):
Ingress configuration
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: ingress-service
  namespace: default
spec:
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-prod
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: app-sockets-cluster-ip-service
            port:
              number: 3003
        path: /app-sockets/?(.*)
        pathType: Prefix
On the service (Express.js):
app.js
const redisAdapter = require('socket.io-redis');
const io = require('socket.io')(server, {
  path: `${ global.NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
  cors: {
    origin: '*',
    methods: ['GET', 'POST'],
  },
});
io.adapter(redisAdapter({
  host: global.REDIS_HOST,
  port: 6379,
}));
io.of('/').adapter.on('error', err => console.log('Redis Adapter error! ', err));
io.on('connection', () => {
  //...
});
The global.NODE_ENV === 'development' ? '' : '/app-sockets' bit handles the difference between the development and production paths. If you change it here, you must also change it in the frontend snippet below.
In development the service is under http://localhost:3003 (sockets endpoint is http://localhost:3003/sockets).
In production the service is under https://example.com/app-sockets (sockets endpoint is https://example.com/app-sockets/sockets).
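One way to keep the two in sync is a small shared constant; what follows is only a sketch, assuming an environment flag comparable to NODE_ENV is available on both sides (the file name socket-path.js and its exports are made up for illustration):
// socket-path.js -- hypothetical shared module; the '/app-sockets' prefix must match the ingress path
const IS_DEV = process.env.NODE_ENV === 'development';

// ''             in development -> http://localhost:3003/sockets/
// '/app-sockets' in production  -> https://example.com/app-sockets/sockets/
const SOCKETS_PREFIX = IS_DEV ? '' : '/app-sockets';
const SOCKETS_PATH = `${SOCKETS_PREFIX}/sockets/`;

module.exports = { SOCKETS_PREFIX, SOCKETS_PATH };
Both the server's path option above and the client's path option below could then use SOCKETS_PATH instead of repeating the ternary.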
On frontend
connectToWebsocketsService.js
/**
 * Connect to a websockets service
 * @param tokens {Object}
 * @param successCallback {Function}
 * @param failureCallback {Function}
 */
export const connectToWebsocketsService = (tokens, successCallback, failureCallback) => {
  //SOCKETS_URL = NODE_ENV === 'development' ? 'http://localhost:3003' : 'https://example.com/app-sockets'
  const socket = io(`${ SOCKETS_URL.replace('/app-sockets', '') }`, {
    path: `${ NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
    reconnect: true,
    secure: true,
    transports: ['polling', 'websocket'], //required
    query: {
      // optional
    },
    auth: {
      ...generateAuthorizationHeaders(tokens), //optional
    },
  });
  socket.on('connect', () => successCallback(socket));
  socket.on('reconnect', () => successCallback(socket));
  socket.on('connect_error', failureCallback);
};
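A call site might look roughly like this (a sketch only; the token shape and the event emitted on connect are placeholders):
// hypothetical usage of connectToWebsocketsService
connectToWebsocketsService(
  { accessToken: '...' },                                  // tokens
  socket => socket.emit('joinRoom', { room: 'general' }),  // successCallback, runs on connect/reconnect
  err => console.log('Sockets connection failed ', err),   // failureCallback, runs on connect_error
);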
Note: I wasn't able to verify this on the project mentioned in the question, but I did on another project, which is hosted on EKS rather than GKE. Feel free to confirm whether this works for you on GKE as well.

Just change the annotation to
nginx.ingress.kubernetes.io/websocket-services: "app-sockets-cluster-ip-service"
instead of
nginx.org/websocket-services: "app-sockets-cluster-ip-service"
This will most likely resolve your issue.

Related

Example Nginx plus ingress for sticky sessions during canary deployment

I’m deploying two services to Kubernetes pods which simply echo a version number: echo-v1 and echo-v2.
echo-v2 is considered the canary deployment, and I can demonstrate sticky sessions as the canary weight is reconfigured from 0 to 100 using the canary and canary-weight annotations.
Two ingresses are used:
The first routes to echo-v1 with a session cookie annotation.
The second routes to echo-v2 with canary true, canary weight and session cookie annotations.
I can apply the second ingress without impacting sessions started on the first ingress, and new sessions follow the canary weighting as expected.
However, I’ve since learned that those annotations are for the Nginx community controller and won’t work with Nginx Plus.
How can I achieve the same using ingress(es) with nginx plus?
Here is the ingress configuration that works for me, using the Nginx community controller vs. Nginx Plus.
Nginx community:
(coffee-v1 service)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
  name: ingress-coffee
spec:
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Exact
        backend:
          service:
            name: coffee-v1
            port:
              number: 80
(coffee-v2 'canary' service)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
  name: ingress-coffee-canary
spec:
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Exact
        backend:
          service:
            name: coffee-v2
            port:
              number: 80
Nginx Plus:
(coffee-v1 & coffee-v2 as kind 'VirtualServer', not 'Ingress')
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cloudbees-training.group.net
  tls:
    secret: cloudbees-trn.aks.group.net-tls
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
    sessionCookie:
      enable: true
      name: srv_id_v1
      path: /coffee
      expires: 2h
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
    sessionCookie:
      enable: true
      name: srv_id_v2
      path: /coffee
      expires: 2h
  routes:
  - path: /coffee
    matches:
    - conditions:
      - cookie: srv_id_v1
        value: ~*
      action:
        pass: coffee-v1
    - conditions:
      - cookie: srv_id_v2
        value: ~*
      action:
        pass: coffee-v2
    # 3 options to handle new sessions below:
    #
    # 1) All new sessions to v1:
    # action:
    #   pass: coffee-v1
    #
    # 2) All new sessions to v2:
    # action:
    #   pass: coffee-v2
    #
    # 3) Split new sessions by weight
    # Note: 0,100 / 100,0 weightings cause sessions
    # to drop for the 0-weighted service:
    # splits:
    # - weight: 50
    #   action:
    #     pass: coffee-v1
    # - weight: 50
    #   action:
    #     pass: coffee-v2

Websocket connection fails for microfrontend (using Kubernetes ingress-nginx and port forwarding)

I am building a microfrontend-microservices app where the microfrontend is written in React and the backend utilizes a Kubernetes cluster, in particular an ingress-nginx service. There are many services (auth, company, companies, event bus, permissions, ingress and query services) with a few corresponding micro (client) apps that are built using Webpack Module Federation and webpack dev server. I am using skaffold as a dev tool.
When I run skaffold dev --port-forwarding, I am able to load the app in the browser using the host domain mavata.dev, but with errors (which eventually make certain pages that rely on Module Federation fail to load entirely). I get the following errors in Chrome dev tools:
WebSocket connection to 'wss://mavata.dev:9000/ws' failed
WebSocket connection to 'wss://mavata.dev:8081/ws' failed
WebSocket connection to 'wss://mavata.dev:8083/ws' failed
How do I fix the WebSocket connection? Below are the relevant code snippets.
(Client) Company App Deployment: client-company-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-company-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-company
  template:
    metadata:
      labels:
        app: client-company
    spec:
      containers:
      - name: client-company
        image: bridgetmelvin/client-company
---
apiVersion: v1
kind: Service
metadata:
  name: client-company-srv
spec:
  selector:
    app: client-company
  ports:
  - name: client-company
    protocol: TCP
    port: 8083
    targetPort: 8083
(There is a similar deployment and service for the client container app, but with a port and targetPort of 9000, and another for the client marketing app with a port and targetPort of 8081; a sketch of the container one follows below.)
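For completeness, the container one might look roughly like this; a sketch only, where the selector label is assumed to mirror the pattern above (the service name client-container-srv is the one referenced by the ingress below):
# hypothetical client-container service, mirroring client-company-srv but on port 9000
apiVersion: v1
kind: Service
metadata:
  name: client-container-srv
spec:
  selector:
    app: client-container
  ports:
  - name: client-container
    protocol: TCP
    port: 9000
    targetPort: 9000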
Ingress Service Config: ingress-srv.yaml:
# RUN: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    certmanager.k8s.io/cluster-issuer: core-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: core-service
    nginx.org/websocket-services: core-service
spec:
  rules:
  - host: mavata.dev # need to update 'hosts' file on local machine (in VS Code) C:\Windows\System32\drivers\etc
    http:
      paths:
      - path: /company/create
        pathType: Prefix
        backend:
          service:
            name: company-clusterip-srv
            port:
              number: 4000
      - path: /company
        pathType: Prefix
        backend:
          service:
            name: query-srv
            port:
              number: 4002
      - path: /company/?(.*)
        pathType: Prefix
        backend:
          service:
            name: company-srv
            port:
              number: 4000
      - path: /companies
        pathType: Prefix
        backend:
          service:
            name: companies-srv
            port:
              number: 4002
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-server-srv
            port:
              number: 4004
      - path: /permissions/?(.*)
        pathType: Prefix
        backend:
          service:
            name: permissions-srv
            port:
              number: 4001
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: client-container-srv
            port:
              number: 9000
(Client) Container webpack.dev.js:
const { merge } = require('webpack-merge');
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');
const commonConfig = require('./webpack.common');
const packageJson = require('../package.json');
const globals = require('../src/data-variables/global');
const port = globals.port;
const devConfig = {
  mode: 'development',
  output: {
    publicPath: `http://localhost:${port}/` // don't forget the slash at the end
  },
  devServer: {
    host: '0.0.0.0',
    port: port,
    allowedHosts: ['mavata.dev'],
    historyApiFallback: {
      index: 'index.html',
    },
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'container',
      filename: 'remoteEntry.js',
      remotes: {
        marketingMfe: 'marketingMod@http://localhost:8081/remoteEntry.js',
        authMfe: 'authMod@http://localhost:8082/remoteEntry.js',
        companyMfe: 'companyMod@http://localhost:8083/remoteEntry.js',
        dataMfe: 'dataMod@http://localhost:8084/remoteEntry.js'
      },
      exposes: {
        './Functions': './src/functions/Functions',
        './Variables': './src/data-variables/Variables'
      },
      shared: {...packageJson.dependencies, ...packageJson.peerDependencies},
    }),
  ],
};
module.exports = merge(commonConfig, devConfig);
(Client) Company webpack.dev.js:
const { merge } = require('webpack-merge');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');
const commonConfig = require('./webpack.common');
const packageJson = require('../package.json');
const path = require('path');
const globals = require('../src/variables/global')
const port = globals.port
const devConfig = {
  mode: 'development',
  output: {
    publicPath: `http://localhost:${port}/`, // don't forget the slash at the end
  },
  devServer: {
    port: port,
    historyApiFallback: {
      index: 'index.html',
    },
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'companyMod',
      filename: 'remoteEntry.js',
      exposes: {
        './CompanyApp': './src/bootstrap',
      },
      remotes: {
        containerMfe: 'container@http://localhost:9000/remoteEntry.js',
        dataMfe: 'dataMod@http://localhost:8084/remoteEntry.js'
      },
      shared: {...packageJson.dependencies, ...packageJson.peerDependencies},
    }),
    new HtmlWebpackPlugin({
      template: './public/index.html',
    }),
  ],
};
module.exports = merge(commonConfig, devConfig);

URL of remoteEntry in Kubernetes Cluster

I am trying to build a series of Micro-Frontends using Webpack 5 and the ModuleFederationPlugin.
In the webpack config of my container app I have to configure how the container reaches out to the other micro-frontends so that I can make use of them.
This all works fine when I am serving locally, without Docker, Kubernetes and my ingress controller.
However, because I am using Kubernetes and an ingress controller, I am unsure what the remote host should be.
Link to Repo
Here is my container webpack.dev.js file
const { merge } = require("webpack-merge");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8080,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true,
},
plugins: [
new ModuleFederationPlugin({
name: "container",
remotes: {
marketing:
"marketing#https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
and here is my Ingress Config
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /marketing?(.*)
        pathType: Prefix
        backend:
          service:
            name: marketing-srv
            port:
              number: 8081
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: container-srv
            port:
              number: 8080
and here is my marketing webpack.dev.js file
const { merge } = require("webpack-merge");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8081,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true, // That solved it
},
plugins: [
new ModuleFederationPlugin({
name: "marketing",
filename: "remoteEntry.js",
exposes: {
"./core": "./src/bootstrap",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
I am totally stumped as to what the remote host should be to reach my marketing micro-frontend.
Serving it as usual, without running it in a Docker container or Kubernetes cluster, the remote host would be
https://localhost:8081/remoteEntry.js
but that doesn't work in a Kubernetes cluster.
I tried using the ingress controller and namespace, but that too does not work:
https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js
This is the error I get
https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js
If your client and the node are on the same network (e.g. they can ping each other), run kubectl get service ingress-nginx --namespace ingress-nginx and take note of the nodePort# (TYPE=NodePort, PORT(S) 443:<nodePort#>/TCP). Your remote entry will be https://<any of the worker node IPs>:<nodePort#>/remoteEntry.js
If your client is on the Internet and your worker node has a public IP, your remote entry will be https://<public IP of the worker node>:<nodePort#>/remoteEntry.js
If your client is on the Internet and your worker node doesn't have a public IP, you need to expose your ingress-nginx service with a LoadBalancer. Run kubectl get service ingress-nginx --namespace ingress-nginx and take note of the EXTERNAL IP. Your remote entry becomes https://<EXTERNAL IP>/remoteEntry.js
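In the container's webpack.dev.js that address simply replaces the cluster-local hostname in the remotes entry; a minimal sketch with placeholder values (substitute the node IP/nodePort or EXTERNAL IP you noted from kubectl, these are not real values):
// webpack.dev.js (container) -- sketch only
const { merge } = require("webpack-merge");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");

// e.g. "https://<worker node IP>:<nodePort#>" (NodePort) or "https://<EXTERNAL IP>" (LoadBalancer)
const MARKETING_ORIGIN = "https://<EXTERNAL IP>";

module.exports = merge(commonConfig, {
  mode: "development",
  plugins: [
    new ModuleFederationPlugin({
      name: "container",
      remotes: {
        marketing: `marketing@${MARKETING_ORIGIN}/remoteEntry.js`,
      },
      shared: packageJson.dependencies,
    }),
  ],
});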

GKE ingress resource returns 404 although it is connected to a service

I added a default nginx-ingress deployment with a regional IP that I got from GCP.
helm install nginx-ingress \
nginx-stable/nginx-ingress \
--set rbac.create=true \
--set controller.service.loadBalancerIP="<GCP Regional IP>"
I have a dockerized Node app with a single .js file, which I deployed with a basic Helm chart. The service is called node-app-blue-helm-chart.
const http = require('http');
const hostname = '0.0.0.0';
const port = 80;
const server = http.createServer((req, res) => {
  if (req.url == '/another-page') {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>another page</h1>');
  } else {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>Hello World</h1>');
  }
});
server.listen(port, hostname, () => {
  console.log('Server running at http://%s:%s/', hostname, port);
});
process.on('SIGINT', function() {
  console.log('Caught interrupt signal and will exit');
  process.exit();
});
I deployed the following ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: node-app-blue-helm-chart
            port:
              number: 80
Although the ingress resource acquires an IP and an endpoint, it still returns a 404 error. What can be wrong? Can the host: "*.example.com" section be a problem?
More info:
kubectl describe ing ingress-resource
Name: ingress-resource
Namespace: default
Address: <GCP Regional IP>
Default backend: default-http-backend:80 (10.0.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
*.example.com
/* node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
kubectl describe svc node-app-blue-helm-chart
Name: node-app-blue-helm-chart
Namespace: default
Labels: app.kubernetes.io/instance=node-app-blue
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=helm-chart
app.kubernetes.io/version=1.16.0
helm.sh/chart=helm-chart-0.1.0
Annotations: meta.helm.sh/release-name: node-app-blue
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/instance=node-app-blue,app.kubernetes.io/name=helm-chart
Type: ClusterIP
IP Families: <none>
IP: 10.3.248.13
IPs: 10.3.248.13
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.0.0.15:80
Session Affinity: None
Events: <none>
What I tried:
Removing the * from /* in the ingress resource. This didn't fix the issue.
kubectl describe ing ingress-resource
Name: ingress-resource
Namespace: default
Address: W.X.Y.Z
Default backend: default-http-backend:80 (10.0.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
*.example.com
/ node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated <invalid> nginx-ingress-controller Configuration for default/ingress-resource was added or updated
Try to edit your Ingress. You have set path: /*, which may not be what you meant to do. A plain / should do:
[...]
spec:
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: node-app-blue-helm-chart
            port:
              number: 80

Problem Sub Path Ingress Controller for Backend Service

I have a problem setting up ingress controller paths for a backend service. For example, I want to set up:
a frontend app with Angular (path: /)
a backend service with Node.js (path: /webservice)
Node.js: Index.js
const express = require('express')
const app = express()
const port = 4000
app.get('/', (req, res) => res.send('Welcome to myApp!'))
app.use('/data/office', require('./roffice'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Another route: roffice.js
var express = require('express')
var router = express.Router()
router.get('/getOffice', async function (req, res) {
  res.send('Get Data Office')
});
module.exports = router
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-stack
spec:
  selector:
    matchLabels:
      run: ws-stack
  replicas: 2
  template:
    metadata:
      labels:
        run: ws-stack
    spec:
      containers:
      - name: ws-stack
        image: wsstack/node/img
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4000
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-wsstack
  labels:
    run: service-wsstack
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30009
    targetPort: 4000
  selector:
    run: ws-stack
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack # frontend
          servicePort: 80
      - path: /webservice
        backend:
          serviceName: service-wsstack # backend
          servicePort: 80
I set up the deployment, service and ingress successfully, but when I call the service with curl:
curl http://<minikubeip>/webservice --> 'Welcome to myApp!' => correct
curl http://<minikubeip>/webservice/data/office/getOffice --> 'Welcome to myApp!' => not correct
If I call any other route, the result is the same: 'Welcome to myApp'. But if I use the NodePort,
curl http://<minikubeip>:30009/data/office/getOffice => 'Get Data Office', it works properly.
What is the problem? Any solution? Thank you
TL;DR
nginx.ingress.kubernetes.io/rewrite-target: /$2
path: /webservice($|/)(.*)
Explanation
The problem comes from this line in your ingress:
nginx.ingress.kubernetes.io/rewrite-target: /
You're telling nginx to rewrite your URL to /, whatever it matched:
/webservice => /
/webservice/data/office/getOffice => /
To do what you're trying to do, use a regex; here is a simple example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack # frontend
          servicePort: 80
      - path: /webservice($|/)(.*)
        backend:
          serviceName: service-wsstack # backend
          servicePort: 80
This way you're asking nginx to rewrite your URL with the second matching group.
Finally, it gives you:
/webservice => /
/webservice/data/office/getOffice => /data/office/getOffice
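So, assuming the Index.js and roffice.js routes from the question, the earlier curl checks should now behave as intended (an expectation based on the rewrites above, not re-tested here):
curl http://<minikubeip>/webservice => 'Welcome to myApp!'
curl http://<minikubeip>/webservice/data/office/getOffice => 'Get Data Office'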