I am trying to connect to the default mongo image in Kubernetes. I am running Docker Desktop for Windows. Below is the YAML configuration for the mongo service, which I use in the server code to connect.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27107
I am trying to connect to this Kubernetes instance with the code below.
import express from 'express'
import { json } from 'body-parser'
import mongoose from 'mongoose'

const app = express()
app.use(json())

const start = async () => {
  console.log('Starting servers...!')
  try {
    await mongoose.connect('mongodb://auth-mongo-srv:27017/auth', {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useCreateIndex: true
    })
    console.log('Connected to MongoDB !')
  } catch (err) {
    console.log(err)
  }

  app.listen(3000, () => {
    console.log('Listening on port :3000 !')
  })
}

start()
Service and endpoint details:
PS D:\auth> kubectl get service
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
auth-mongo-srv   ClusterIP   10.103.93.74   <none>        27017/TCP   5h27m
auth-srv         ClusterIP   10.101.151.7   <none>        3000/TCP    5h27m
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP     17h
PS D:\auth> kubectl get ep auth-mongo-srv
NAME             ENDPOINTS         AGE
auth-mongo-srv   10.1.0.15:27107   5h28m
I am getting the errors below:
MongooseServerSelectionError: connect ECONNREFUSED 10.103.93.74:27017
    at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:846:32)
    at /app/node_modules/mongoose/lib/index.js:351:10
    at /app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
    at new Promise (<anonymous>)
    at promiseOrCallback (/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
    at Mongoose._promiseOrCallback (/app/node_modules/mongoose/lib/index.js:1149:10)
    at Mongoose.connect (/app/node_modules/mongoose/lib/index.js:350:20)
    at /app/src/index.ts:28:20
    at step (/app/src/index.ts:33:23)
    at Object.next (/app/src/index.ts:14:53)
    at /app/src/index.ts:8:71
    at new Promise (<anonymous>)
    at __awaiter (/app/src/index.ts:4:12)
    at start (/app/src/index.ts:25:15)
    at Object.<anonymous> (/app/src/index.ts:43:1)
    at Module._compile (node:internal/modules/cjs/loader:1109:14) {
  reason: TopologyDescription {
    type: 'Single',
    setName: null,
    maxSetVersion: null,
    maxElectionId: null,
    servers: Map(1) { 'auth-mongo-srv:27017' => [ServerDescription] },
    stale: false,
    compatible: true,
    compatibilityError: null,
    logicalSessionTimeoutMinutes: null,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    commonWireVersion: null
  }
Do I need to run the MongoDB service locally to connect to the Kubernetes mongo image?
Use the containerPort field in your StatefulSet or Deployment to open the port.
Deployment example:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      name: http
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        version: v1
    spec:
      containers:
        - name: mongo
          image: mongo:latest
          ports:
            - containerPort: 27017
StatefulSet example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    app: mongodb
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-volume
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
The above is an example StatefulSet for MongoDB.
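As a quick sanity check for this kind of wiring, here is a diagnostic sketch (reusing the names from the question) that compares the Service's targetPort with the port mongod actually listens on (27017 by default):

kubectl get service auth-mongo-srv -o jsonpath='{.spec.ports[0].targetPort}'
kubectl get endpoints auth-mongo-srv

The endpoint port should match the pod's containerPort; note that the endpoint output in the question reads 10.1.0.15:27107.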
I'd like to deploy a NestJS backend server and Redis to my k8s cluster.
To split the user service out of the NestJS core service, I would like to run the user service as a k8s service, and likewise run the cache server for the user DB (which the user service references) as a k8s service.
To do that, I set up the user service's database config module like this.
import { Module } from '@nestjs/common'
import { TypeOrmModule, TypeOrmModuleAsyncOptions, TypeOrmModuleOptions } from '@nestjs/typeorm'
import { SnakeNamingStrategy } from 'typeorm-naming-strategies'

let DATABASE_NAME = 'test'
if (process.env.NODE_ENV) {
  DATABASE_NAME = `${DATABASE_NAME}_${process.env.NODE_ENV}`
}

const DB_HOST: string = process.env.DB_HOST ?? 'localhost'
const DB_USERNAME: string = process.env.DB_USERNAME ?? 'user'
const DB_PASSWORD: string = process.env.DB_PASSWORD ?? 'password'
const REDIS_HOST: string = process.env.REDIS_HOST ?? 'localhost'

const databaseConfig: TypeOrmModuleAsyncOptions = {
  useFactory: (): TypeOrmModuleOptions => ({
    type: 'mysql',
    host: DB_HOST,
    port: 3306,
    username: DB_USERNAME,
    password: DB_PASSWORD,
    database: DATABASE_NAME,
    autoLoadEntities: true,
    synchronize: true,
    namingStrategy: new SnakeNamingStrategy(),
    logging: false,
    cache: {
      type: 'redis',
      options: {
        host: REDIS_HOST,
        port: 6379,
      },
    },
    timezone: '+09:00',
  }),
}

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      ...databaseConfig,
    }),
  ],
})
export class DatabaseModule {}
To implement the k8s setup, I used Helm.
Helm's template folders are as follows.
- configmap
- deployment
- pod
- service
The files under those folders are as follows.
// configmap/redis.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    maxmemory 20mb
    maxmemory-policy allkeys-lru
// deployment/user_service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
  namespace: default
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: user-service
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - image: {{ .Values.user_service.image }}:{{ .Values.user_service_version }}
          imagePullPolicy: Always
          name: user-service
          ports:
            - containerPort: 50051
              protocol: TCP
          env:
            - name: COGNITO_CLIENT_ID
              value: "some value"
            - name: COGNITO_USER_POOL_ID
              value: "some value"
            - name: DB_HOST
              value: "some value"
            - name: DB_PASSWORD
              value: "some value"
            - name: DB_USERNAME
              value: "some value"
            - name: NODE_ENV
              value: "test"
            - name: REDIS_HOST
              value: "10.100.77.0"
// pod/redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
    - name: redis
      image: redis:latest
      command:
        - redis-server
        - "/redis-master/redis.conf"
      env:
        - name: MASTER
          value: "true"
      ports:
        - containerPort: 6379
          name: redis
      volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-config
        items:
          - key: redis-config
            path: redis.conf
// service/user_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  clusterIP: 10.100.88.0
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 50051
      targetPort: 50051
// service/redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: 10.100.77.0
  selector:
    app: redis
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
With the above YAML files, I installed a Helm chart named test.
After installing, the output of kubectl get svc,po,deploy,configmap looks like this:
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
service/kubernetes     ClusterIP   10.100.0.1    <none>        443/TCP     4d4h
service/user-service   ClusterIP   10.100.88.0   <none>        50051/TCP   6s
service/redis          ClusterIP   10.100.77.0   <none>        6379/TCP    6s

NAME                                READY   STATUS              RESTARTS   AGE
pod/user-service-78548d4d8f-psbr2   0/1     ContainerCreating   0          6s
pod/redis                           0/1     ContainerCreating   0          6s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/user-service   0/1     1            0           6s

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      4d4h
configmap/redis-config       1      6s
But when I checked the logs of the user-service deployment, this error occurred:
[Nest] 1 - 02/07/2023, 7:15:32 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
Error: connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
I also confirmed via console logging that the REDIS_HOST environment variable is 10.100.77.0 in the user service's database config, yet the error above refers to localhost.
Is there an error in my setup?
You can use the Service to connect to Redis. For this, use redis.redis as the REDIS_HOST in your application.
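For example, a minimal sketch of the user-service env using the Service's DNS name instead of a hard-coded clusterIP (the bare service name resolves within the same namespace; a qualified form like redis.<namespace> works across namespaces):

env:
  - name: REDIS_HOST
    # Kubernetes DNS name of the redis Service, resolved by cluster DNS
    value: "redis"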
I have a problem with the connection between the services. Here is my YAML file for the first app's deployment:
apiVersion: v1
kind: Service
metadata:
  name: k8s-to-nginx
spec:
  type: LoadBalancer
  selector:
    app: k8s-to-nginx
  ports:
    - port: 3333
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-to-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-to-nginx
  template:
    metadata:
      labels:
        app: k8s-to-nginx
    spec:
      containers:
        - name: k8s-to-nginx
          image: spyfrommars/k8s-web-to-nginx
          resources:
            limits:
              memory: '128Mi'
              cpu: '250m'
          ports:
            - containerPort: 3000
and here is the second:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: '128Mi'
              cpu: '250m'
          ports:
            - containerPort: 80
In my app I have two endpoints:
app.get('/', (req, res) => {
  const helloMessage = `Hello from the ${os.hostname()}`
  console.log(helloMessage)
  res.send(helloMessage)
})

app.get('/nginx', async (req, res) => {
  const url = 'https://nginx'
  const response = await fetch(url)
  const body = await response.text()
  res.send(body)
})
I can reach the root endpoint without a problem via ServiceIP:port (the Service with type LoadBalancer), but when I try to reach the second endpoint via ServiceIP:port/name_of_service, nothing happens.
Can you help me please?
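One way to narrow this down is to test service-to-service reachability from inside the cluster; a diagnostic sketch (the curlimages/curl image is an assumption, any image with curl works):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sv http://nginx

Note that the nginx Service above only exposes port 80, so the in-cluster URL would use plain http.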
kubectl get namespace
NAME              STATUS   AGE
default           Active   3h33m
ingress-nginx     Active   3h11m
kube-node-lease   Active   3h33m
kube-public       Active   3h33m
kube-system       Active   3h33m
kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.205.190   localhost     80:31378/TCP,443:31888/TCP   3h12m
ingress-nginx-controller-admission   ClusterIP      10.103.97.209    <none>        443/TCP                      3h12m
When I make the request from the Next.js getInitialProps to http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser, it throws Error: connect ECONNREFUSED 127.0.0.1:443.
LandingPage.getInitialProps = async () => {
  if (typeof window === "undefined") {
    const { data } = await axios.get(
      "http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser",
      {
        headers: {
          Host: "ticketing.dev",
        },
      }
    );
    return data;
  } else {
    const { data } = await axios.get("/api/users/currentuser");
    return data;
  }
};
My auth-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: sajeebxn/auth
          env:
            - name: MONGO_URI
              value: 'mongodb://tickets-mongo-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
And my ingress-srv.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
    - hosts:
        - ticketing.dev
      # secretName: e-ticket-secret
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-srv
                port:
                  number: 3000
Try using http://ingress-nginx-controller.ingress-nginx/api/users/currentuser.
This worked for me.
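For reference, a sketch of the same server-side call with that shorter <service>.<namespace> form (names taken from the question):

const { data } = await axios.get(
  "http://ingress-nginx-controller.ingress-nginx/api/users/currentuser",
  { headers: { Host: "ticketing.dev" } }
);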
I am facing a weird issue with initContainers in a Kubernetes YAML file. It shows that my initContainer runs successfully, but it stays in a not-ready state forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: graphql-engine
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: graphql-engine
    spec:
      initContainers:
        # GraphQL
        - env:
            - name: HASURA_GRAPHQL_ADMIN_SECRET
              value: devsecret
            - name: HASURA_GRAPHQL_DATABASE_URL
              value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
            - name: HASURA_GRAPHQL_ENABLE_CONSOLE
              value: "true"
            - name: HASURA_GRAPHQL_JWT_SECRET
              value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
            - name: HASURA_GRAPHQL_LOG_LEVEL
              value: debug
            - name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
              value: public
            - name: PVX_MLCRAFT_ACTIONS_URL
              value: http://pvx-mlcraft-actions:3010
          image: hasura/graphql-engine:v2.10.1
          name: graphql-engine
          ports:
            - containerPort: 8080
          resources: {}
      restartPolicy: Always
      containers:
        - env:
            - name: AUTH_CLIENT_URL
              value: http://localhost:3000
            - name: AUTH_EMAIL_PASSWORDLESS_ENABLED
              value: "true"
            - name: AUTH_HOST
              value: 0.0.0.0
            - name: AUTH_LOG_LEVEL
              value: debug
            - name: AUTH_PORT
              value: "4000"
            - name: AUTH_SMTP_HOST
              value: smtp.gmail.com
            - name: AUTH_SMTP_PASS
              value: fahkbhcedmwolqzp
            - name: AUTH_SMTP_PORT
              value: "587"
            - name: AUTH_SMTP_SENDER
              value: noreplaypivoxnotifications@gmail.com
            - name: AUTH_SMTP_USER
              value: noreplaypivoxnotifications@gmail.com
            - name: AUTH_WEBAUTHN_RP_NAME
              value: Nhost App
            - name: HASURA_GRAPHQL_ADMIN_SECRET
              value: devsecret
            - name: HASURA_GRAPHQL_DATABASE_URL
              value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
            - name: HASURA_GRAPHQL_GRAPHQL_URL
              value: http://graphql-engine:8080/v1/graphql
            - name: HASURA_GRAPHQL_JWT_SECRET
              value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
            - name: POSTGRES_PASSWORD
              value: postgres
          image: nhost/hasura-auth:latest
          name: auth
          ports:
            - containerPort: 4000
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  type: LoadBalancer
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: auth
spec:
  ports:
    - name: "4000"
      port: 4000
      targetPort: 4000
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
I expected the init container to reach the ready state.
The Status field of the initContainer is not relevant here. What you need is for your initContainer to terminate. Currently your initContainer keeps running, because the image used is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and provides an API.
What are you trying to accomplish with this graphql-engine pod?
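For contrast, a minimal sketch of an initContainer that does terminate: a hypothetical wait-for-Postgres step (the postgres:15-alpine image is an assumption; it ships pg_isready):

initContainers:
  - name: wait-for-postgres
    image: postgres:15-alpine
    # Polls the database from the manifest above until it accepts
    # connections, then exits 0 so the main containers can start
    command:
      - sh
      - -c
      - until pg_isready -h 10.192.250.55 -p 5432; do echo waiting for postgres; sleep 2; done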
I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following URL: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
  labels:
    app: jenkins-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          env:
            - name: LOGSTASH_HOST
              value: logstash
            - name: LOGSTASH_PORT
              value: "5044"
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-logging
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
          image: jenkins/jenkins:lts
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
Here is my jenkins-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
    k8s-app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
Here is my logstash-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: monitoring-observability
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          env:
            - name: JENKINS_HOST
              value: jenkins-server
            - name: JENKINS_PORT
              value: "8080"
          image: docker.elastic.co/logstash/logstash:6.3.0
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
Here is my logstash-service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: monitoring-observability
  labels:
    app: logstash
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "logstash"
spec:
  selector:
    app: logstash
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        host => "jenkins-server.devops-tools"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port 5044 and get the same results. It seems as though my logstash instance is not actually listening on the containerPort. Why might this be?
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
Note that all references to the Jenkins host have been removed. The tcp input's host option is the local address Logstash binds to, so pointing it at the remote Jenkins hostname caused the Cannot assign requested address bind error seen in the Logstash logs.
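Equivalently (a sketch, not part of the original fix), the tcp input could bind to all interfaces explicitly:

input {
  tcp {
    port => "9000"
    codec => "json"
    # host is the local address Logstash binds to; 0.0.0.0 listens on all interfaces
    host => "0.0.0.0"
  }
}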