No connection between services in Kubernetes

I have a problem with the connection between two Services. Here is my YAML file for the first app's deployment:
apiVersion: v1
kind: Service
metadata:
  name: k8s-to-nginx
spec:
  type: LoadBalancer
  selector:
    app: k8s-to-nginx
  ports:
    - port: 3333
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-to-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-to-nginx
  template:
    metadata:
      labels:
        app: k8s-to-nginx
    spec:
      containers:
        - name: k8s-to-nginx
          image: spyfrommars/k8s-web-to-nginx
          resources:
            limits:
              memory: '128Mi'
              cpu: '250m'
          ports:
            - containerPort: 3000
and here is the second:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: '128Mi'
              cpu: '250m'
          ports:
            - containerPort: 80
My app has two endpoints:
app.get('/', (req, res) => {
  const helloMessage = `Hello from the ${os.hostname()}`
  console.log(helloMessage)
  res.send(helloMessage)
})

app.get('/nginx', async (req, res) => {
  const url = 'https://nginx'
  const response = await fetch(url)
  const body = await response.text()
  res.send(body)
})
I can reach the root endpoint without a problem via ServiceIP:port (the one with the LoadBalancer Service type), but when I try to reach the second endpoint via ServiceIP:port/name_of_service, nothing happens.
Can you help me please?
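One likely culprit, assuming the stock nginx image (which serves plain HTTP on port 80 and has no TLS configured): the handler fetches `https://nginx`, so the TLS handshake fails and the request never returns a body. A hedged sketch of the handler using plain HTTP and surfacing errors — `buildServiceUrl` and `proxyNginx` are hypothetical helper names, not part of the original app, and global `fetch` assumes Node 18+:

```javascript
// Build an in-cluster URL for a Service. Kubernetes DNS resolves the
// Service name; the nginx Service above listens on plain HTTP port 80.
function buildServiceUrl(serviceName, port = 80) {
  return `http://${serviceName}:${port}`;
}

// Sketch of the /nginx handler: use http:// (not https://) and report
// fetch failures instead of letting the request hang silently.
async function proxyNginx(req, res) {
  try {
    const response = await fetch(buildServiceUrl('nginx'));
    res.send(await response.text());
  } catch (err) {
    res.status(502).send(`fetch failed: ${err.message}`);
  }
}
```

Wiring it up would just be `app.get('/nginx', proxyNginx)`.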

Related

Pods not communicating with Gossip Router and hence not forming JGroups single cluster

I have a StatefulSet as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups-leader-poc
  labels:
    app: jgroups-leader-poc
spec:
  serviceName: jgroups-leader-poc
  replicas: 3
  selector:
    matchLabels:
      app: jgroups-leader-poc
  template:
    metadata:
      labels:
        app: jgroups-leader-poc
    spec:
      containers:
        - name: jgroups-leader-poc-container
          image: localhost:5001/jgroups-leader-ui:1.0
          imagePullPolicy: Always
          env:
            - name: jgroups.gossip_routers
              value: "localhost[12001]"
            - name: jgroups.tcp.ip
              value: "site_local,match-interface:eth0"
            - name: jgroups.tcp.ntfnport
              value: "7800"
            - name: JGROUPS_EXTERNAL_ADDR
              value: "match-interface:eth0"
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - containerPort: 7800
              name: k8sping-port
TcpGossip.xml is as follows:
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
    <TCP external_addr="${JGROUPS_EXTERNAL_ADDR:match-interface:eth0}"
         bind_addr="${jgroups.tcp.ip}" bind_port="${jgroups.tcp.ntfnport:0}"
         sock_conn_timeout="300"
         max_bundle_size="60000"
         enable_diagnostics="false"
         thread_naming_pattern="cl"
         thread_pool.enabled="true"
         thread_pool.min_threads="1"
         thread_pool.max_threads="25"
         thread_pool.keep_alive_time="5000" />
    <TCPGOSSIP initial_hosts="${jgroups.gossip_routers:localhost[12001]}" reconnect_interval="3000"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD_ALL timeout="60000" interval="15000" timeout_check_interval="5000"/>
    <FD_HOST check_timeout="5000" interval="15000" timeout="60000"/>
    <VERIFY_SUSPECT timeout="5000"/>
    <pbcast.NAKACK2 use_mcast_xmit="false" use_mcast_xmit_req="false" xmit_interval="1000"/>
    <UNICAST3 xmit_table_max_compaction_time="3000" />
    <pbcast.STABLE desired_avg_gossip="50000" max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
    <MFC max_credits="2M"/>
    <FRAG2 frag_size="30K"/>
    <pbcast.STATE buffer_size="1048576" max_pool="10"/>
    <pbcast.FLUSH timeout="0"/>
</config>
And I have started the Gossip Router as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gossiprouter
  labels:
    run: gossiprouter
spec:
  replicas: 1
  selector:
    matchLabels:
      run: gossiprouter
  template:
    metadata:
      labels:
        run: gossiprouter
    spec:
      containers:
        - image: belaban/gossiprouter:latest
          name: gossiprouter
          ports:
            - containerPort: 8787
            - containerPort: 9000
            - containerPort: 12001
          env:
            - name: LogLevel
              value: "TRACE"
---
apiVersion: v1
kind: Service
metadata:
  name: gossiprouter
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 8787
      targetPort: 8787
      name: debug
      protocol: TCP
    - port: 9000
      targetPort: 9000
      name: netcat
      protocol: TCP
    - port: 12001
      targetPort: 12001
      name: gossiprouter
      protocol: TCP
  selector:
    run: gossiprouter
kubectl get pods and kubectl get svc output (screenshots omitted).
Source code for reference:
public class Chatter extends ReceiverAdapter {
    JChannel channel;

    @Value("${app.jgroups.config:jgroups-config.xml}")
    private String jGroupsConfig;

    @Value("${app.jgroups.cluster:chat-cluster}")
    private String clusterName;

    @PostConstruct
    public void init() {
        try {
            channel = new JChannel(jGroupsConfig);
            channel.setReceiver(this);
            channel.connect(clusterName);
            checkLeaderStatus();
            channel.getState(null, 10000);
        } catch (Exception ex) {
            log.error("registering the channel in JMX failed: {}", ex);
        }
    }

    public void close() {
        channel.close();
    }

    public void viewAccepted(View newView) {
        log.info("view: " + newView);
        checkLeaderStatus();
    }

    private void checkLeaderStatus() {
        Address address = channel.getView().getMembers().get(0);
        if (address.equals(channel.getAddress())) {
            log.info("Nodes are started, I'm the master!");
        } else {
            log.info("Nodes are started, I'm a slave!");
        }
    }
}
The issue is that none of the pods connect to the running Gossip Router, even though localhost[12001] is given as jgroups.gossip_routers in the StatefulSet. As a result, each pod forms its own separate JGroups cluster instead of joining a single cluster.
Gossip Router Service details (screenshot omitted).
Sorry for the delay! Do you have a public image for jgroups-leader-ui:1.0?
I suspect that the members don't communicate with each other because they connect directly to each other, possibly some issue with external_addr.
Why are you using TCP:TCPGOSSIP instead of TUNNEL:PING?
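For reference, a hedged sketch of what a TUNNEL:PING stack could look like in place of TCP:TCPGOSSIP. Note that the router address should be the gossiprouter Service's DNS name, not localhost, since localhost inside a pod resolves to the pod itself; the `default` namespace in the name below is an assumption:

```xml
<!-- Sketch: with TUNNEL, all traffic is relayed through the GossipRouter,
     so members need no direct connectivity to each other. -->
<TUNNEL gossip_router_hosts="${jgroups.gossip_routers:gossiprouter.default.svc.cluster.local[12001]}"/>
<PING/>
```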
OK, so I tried this out with a sample app/image (chat.sh) / belaban/jgroups.
Try the following steps:
kubectl apply -f gossiprouter.yaml (your existing YAML)
Find the eth0 address of the GossipRouter pod (in my case below: 172.17.0.3)
kubectl apply -f jgroups.yaml (the YAML is pasted below)
kubectl exec jgroups-0 probe.sh should list 3 members:
/Users/bela$ kubectl exec jgroups-0 probe.sh
#1 (180 bytes):
local_addr=jgroups-1-42392
physical_addr=172.17.0.5:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#2 (180 bytes):
local_addr=jgroups-0-13785
physical_addr=172.17.0.4:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#3 (180 bytes):
local_addr=jgroups-2-14656
physical_addr=172.17.0.6:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
3 responses (3 matches, 0 non matches)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups
  labels:
    run: jgroups
spec:
  replicas: 3
  selector:
    matchLabels:
      run: jgroups
  serviceName: "jgroups"
  template:
    metadata:
      labels:
        run: jgroups
    spec:
      containers:
        - image: belaban/jgroups
          name: jgroups
          command: ["chat.sh"]
          args: ["-props gossip.xml"]
          env:
            - name: DNS_QUERY
              value: "jgroups.default.svc.cluster.local"
            - name: DNS_RECORD_TYPE
              value: A
            - name: TCPGOSSIP_INITIAL_HOSTS
              value: "172.17.0.3[12001]"
---
apiVersion: v1
kind: Service
metadata:
  # annotations:
  #   service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: jgroups
  labels:
    run: jgroups
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - name: ping
      port: 7800
      protocol: TCP
      targetPort: 7800
    - name: probe
      port: 7500
      protocol: UDP
      targetPort: 7500
    - name: debug
      port: 8787
      protocol: TCP
      targetPort: 8787
    - name: stomp
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    run: jgroups
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---

Iceberg on kubernetes, rest container problem

I'm trying to run Iceberg on Kubernetes.
Here are the files that I'm using:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: mc
  name: mc
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mc
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: mc
    spec:
      containers:
        - command:
            - /bin/sh
            - -c
            - ' until (/usr/bin/mc config host add minio http://minio:9000 admin password) do echo ''...waiting...'' && sleep 1; done; /usr/bin/mc rm -r --force minio/warehouse; /usr/bin/mc mb minio/warehouse; /usr/bin/mc policy set public minio/warehouse; exit 0; '
          env:
            - name: AWS_ACCESS_KEY_ID
              value: admin
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_SECRET_ACCESS_KEY
              value: password
          image: minio/mc
          name: mc
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: minio
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: minio
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: minio
    spec:
      containers:
        - args:
            - server
            - /data
            - --console-address
            - :9001
          env:
            - name: MINIO_ROOT_PASSWORD
              value: password
            - name: MINIO_ROOT_USER
              value: admin
          image: minio/minio
          name: minio
          ports:
            - containerPort: 9001
            - containerPort: 9000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: minio
  name: minio
spec:
  type: LoadBalancer
  ports:
    - name: "9001"
      port: 9001
      targetPort: 9001
    - name: "9000"
      port: 9000
      targetPort: 9000
  selector:
    io.kompose.service: minio
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: rest
  name: rest
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: rest
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: rest
    spec:
      containers:
        - env:
            - name: AWS_ACCESS_KEY_ID
              value: admin
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_SECRET_ACCESS_KEY
              value: password
            - name: CATALOG_IO__IMPL
              value: org.apache.iceberg.aws.s3.S3FileIO
            - name: CATALOG_S3_ENDPOINT
              value: http://minio:9000
            - name: CATALOG_WAREHOUSE
              value: s3a://warehouse/wh/
          image: tabulario/iceberg-rest:0.2.0
          name: rest
          ports:
            - containerPort: 8181
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: rest
  name: rest
spec:
  type: LoadBalancer
  ports:
    - name: "8181"
      port: 8181
      targetPort: 8181
  selector:
    io.kompose.service: rest
status:
  loadBalancer: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: spark-iceberg-claim0
  name: spark-iceberg-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: spark-iceberg-claim1
  name: spark-iceberg-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: spark-iceberg
  name: spark-iceberg
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: spark-iceberg
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.27.0 (b0ed6a2c9)
      creationTimestamp: null
      labels:
        io.kompose.service: spark-iceberg
    spec:
      containers:
        - env:
            - name: AWS_ACCESS_KEY_ID
              value: admin
            - name: AWS_REGION
              value: us-east-1
            - name: AWS_SECRET_ACCESS_KEY
              value: password
          image: tabulario/spark-iceberg
          name: spark-iceberg
          ports:
            - containerPort: 8888
            - containerPort: 8080
          resources: {}
          volumeMounts:
            - mountPath: /home/iceberg/warehouse
              name: spark-iceberg-claim0
            - mountPath: /home/iceberg/notebooks/notebooks
              name: spark-iceberg-claim1
      restartPolicy: Always
      volumes:
        - name: spark-iceberg-claim0
          persistentVolumeClaim:
            claimName: spark-iceberg-claim0
        - name: spark-iceberg-claim1
          persistentVolumeClaim:
            claimName: spark-iceberg-claim1
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.27.0 (b0ed6a2c9)
  creationTimestamp: null
  labels:
    io.kompose.service: spark-iceberg
  name: spark-iceberg
spec:
  type: LoadBalancer
  ports:
    - name: "8882"
      port: 8882
      targetPort: 8888
    - name: "8081"
      port: 8081
      targetPort: 8080
  selector:
    io.kompose.service: spark-iceberg
status:
  loadBalancer: {}
After deploying, all pods work fine except the rest pod (kubectl get pod output omitted). When I check its log with kubectl logs, I see this message:
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://10.101.184.144:8181"
How can I get the rest pod to work properly?
I tried changing the image of the rest Deployment and changing some ports of the Service.
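One likely cause, offered as an assumption since the pod's environment isn't shown: Kubernetes injects Docker-link-style variables for every Service, so a Service named rest produces REST_PORT=tcp://&lt;cluster-ip&gt;:8181 in every pod in the namespace; if the image reads a REST_PORT variable expecting a plain number, it would fail with exactly this NumberFormatException. Renaming the Service, or disabling the injection, avoids the clash — a sketch:

```yaml
# Sketch: disable Docker-link-style Service env injection for the rest pod,
# so the auto-generated REST_PORT=tcp://... variable no longer shadows
# anything the image reads. (Alternatively, rename the `rest` Service.)
spec:
  template:
    spec:
      enableServiceLinks: false
      containers:
        - image: tabulario/iceberg-rest:0.2.0
          name: rest
```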

Kubernetes init container hanging (Init container is running but not ready)

I am facing a weird issue with initContainers in my Kubernetes YAML file. My initContainer runs successfully but is never marked ready, and it stays that way forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: graphql-engine
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: graphql-engine
    spec:
      initContainers:
        # GraphQL
        - env:
            - name: HASURA_GRAPHQL_ADMIN_SECRET
              value: devsecret
            - name: HASURA_GRAPHQL_DATABASE_URL
              value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
            - name: HASURA_GRAPHQL_ENABLE_CONSOLE
              value: "true"
            - name: HASURA_GRAPHQL_JWT_SECRET
              value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
            - name: HASURA_GRAPHQL_LOG_LEVEL
              value: debug
            - name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
              value: public
            - name: PVX_MLCRAFT_ACTIONS_URL
              value: http://pvx-mlcraft-actions:3010
          image: hasura/graphql-engine:v2.10.1
          name: graphql-engine
          ports:
            - containerPort: 8080
          resources: {}
      restartPolicy: Always
      containers:
        - env:
            - name: AUTH_CLIENT_URL
              value: http://localhost:3000
            - name: AUTH_EMAIL_PASSWORDLESS_ENABLED
              value: "true"
            - name: AUTH_HOST
              value: 0.0.0.0
            - name: AUTH_LOG_LEVEL
              value: debug
            - name: AUTH_PORT
              value: "4000"
            - name: AUTH_SMTP_HOST
              value: smtp.gmail.com
            - name: AUTH_SMTP_PASS
              value: fahkbhcedmwolqzp
            - name: AUTH_SMTP_PORT
              value: "587"
            - name: AUTH_SMTP_SENDER
              value: noreplaypivoxnotifications@gmail.com
            - name: AUTH_SMTP_USER
              value: noreplaypivoxnotifications@gmail.com
            - name: AUTH_WEBAUTHN_RP_NAME
              value: Nhost App
            - name: HASURA_GRAPHQL_ADMIN_SECRET
              value: devsecret
            - name: HASURA_GRAPHQL_DATABASE_URL
              value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
            - name: HASURA_GRAPHQL_GRAPHQL_URL
              value: http://graphql-engine:8080/v1/graphql
            - name: HASURA_GRAPHQL_JWT_SECRET
              value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
            - name: POSTGRES_PASSWORD
              value: postgres
          image: nhost/hasura-auth:latest
          name: auth
          ports:
            - containerPort: 4000
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: graphql-engine
spec:
  type: LoadBalancer
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: graphql-engine
  name: auth
spec:
  ports:
    - name: "4000"
      port: 4000
      targetPort: 4000
  selector:
    io.kompose.service: graphql-engine
status:
  loadBalancer: {}
The init container is expected to reach the ready state.
The Status field of the initContainer is not relevant here. What you need is for your initContainer to be deterministic: it must run its process and then exit with exit code 0. Currently your initContainer keeps running because the image it uses is built to run indefinitely — graphql-engine is a container that runs forever and serves an API.
What are you trying to accomplish with this graphql-engine pod?

Connection Refused on Port 9000 for Logstash Deployment on Kubernetes

I'm attempting to use the Statistics Gathering Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: http://logstash.monitoring-observability:9000. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:
2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:
jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
I also get the following error in the logstash logs:
[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
Here is my jenkins-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
  labels:
    app: jenkins-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          env:
            - name: LOGSTASH_HOST
              value: logstash
            - name: LOGSTASH_PORT
              value: "5044"
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-logging
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
          image: jenkins/jenkins:lts
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
Here is my jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
    k8s-app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000
Here is my logstash-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: monitoring-observability
  labels:
    app: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 1
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          env:
            - name: JENKINS_HOST
              value: jenkins-server
            - name: JENKINS_PORT
              value: "8080"
          image: docker.elastic.co/logstash/logstash:6.3.0
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
Here is my logstash-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: monitoring-observability
  labels:
    app: logstash
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "logstash"
spec:
  selector:
    app: logstash
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP
Here is my logstash configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        host => "jenkins-server.devops-tools"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port 5044 and get the same results. It seems as though my logstash instance is not actually listening on the containerPort. Why might this be?
I resolved this error by updating the configmap to this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: monitoring-observability
data:
  logstash.yml: |
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      tcp {
        port => "9000"
        codec => "json"
        ssl_verify => "false"
      }
    }
    filter {
      if [message] =~ /^\{.*\}$/ {
        json {
          source => "message"
        }
      }
      if [ClientHost] {
        geoip {
          source => "ClientHost"
        }
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch-logging:9200" ]
      }
    }
Note that all references to the jenkins host have been removed.
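The reason this works, as far as I can tell: the tcp input's host option is the local bind address for the listening socket, not a filter on which clients may connect. Setting it to jenkins-server.devops-tools asks Logstash to bind a socket on Jenkins's IP, which is not local to the Logstash pod — hence "Cannot assign requested address" and nothing ever listening on 9000. Binding to all interfaces is equivalent to dropping the option entirely (0.0.0.0 is the plugin's default, assuming Logstash 6.x behavior):

```conf
input {
  tcp {
    # host is the *bind* address; 0.0.0.0 (the default) listens on all of
    # the pod's interfaces so the Service can reach port 9000.
    host => "0.0.0.0"
    port => 9000
    codec => "json"
  }
}
```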

Failed to connect mongo-express to mongoDb in k8s

I configured MongoDB with a user name and password, and deployed MongoDB and mongo-express.
The problem is that I'm getting the following error in the mongo-express logs:
Could not connect to database using connectionString: mongodb://username:password@mongodb://lc-mongodb-service:27017:27017/"
I can see that the connection string contains the 27017 port twice, as well as a "mongodb://" in the middle that should not be there.
This is my mongo-express deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lc-mongo-express
  labels:
    app: lc-mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lc-mongo-express
  template:
    metadata:
      labels:
        app: lc-mongo-express
    spec:
      containers:
        - name: lc-mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: lc-configmap
                  key: DATABASE_URL
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: lc-secret
                  key: MONGO_ROOT_USERNAME
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: lc-secret
                  key: MONGO_ROOT_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
  name: lc-mongo-express-service
spec:
  selector:
    app: lc-mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
And my mongoDb deployment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lc-mongodb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  storageClassName: gp2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: lc-mongodb
  labels:
    app: lc-mongodb
spec:
  replicas: 1
  serviceName: lc-mongodb-service
  selector:
    matchLabels:
      app: lc-mongodb
  template:
    metadata:
      labels:
        app: lc-mongodb
    spec:
      volumes:
        - name: lc-mongodb-storage
          persistentVolumeClaim:
            claimName: lc-mongodb-pvc
      containers:
        - name: lc-mongodb
          image: "mongo"
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: lc-secret
                  key: MONGO_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: lc-secret
                  key: MONGO_ROOT_PASSWORD
          command:
            - mongod
            - --auth
          volumeMounts:
            - mountPath: '/data/db'
              name: lc-mongodb-storage
---
apiVersion: v1
kind: Service
metadata:
  name: lc-mongodb-service
  labels:
    name: lc-mongodb
spec:
  selector:
    app: lc-mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
What am I doing wrong?
Your connection string format is wrong. You should be trying something like:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
Now suppose you are using Node.js:
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<username>:<password>@<Mongo service name>/<database name>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
  // creating collection
  const collection = client.db("test").collection("devices");
  // perform actions on the collection object
  client.close();
});
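Separately, regarding the doubled mongodb:// and :27017 in the error: mongo-express assembles the connection string itself from its ME_CONFIG_MONGODB_* variables, so ME_CONFIG_MONGODB_SERVER should hold only a host name. Assuming the DATABASE_URL key in lc-configmap currently holds a full mongodb://...:27017 URI, a sketch of the fix:

```yaml
# Sketch: give mongo-express just the Service host name; it builds
# mongodb://<user>:<pass>@<server>:27017 on its own.
env:
  - name: ME_CONFIG_MONGODB_SERVER
    value: lc-mongodb-service   # host only - no scheme, no port
```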
You are also missing the DB path args: ["--dbpath","/data/db"] in the command when using the PVC and configuring the path:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          args: ["--dbpath", "/data/db"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-creds
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-creds
                  key: password
          volumeMounts:
            - name: "mongo-data-dir"
              mountPath: "/data/db"
      volumes:
        - name: "mongo-data-dir"
          persistentVolumeClaim:
            claimName: "pvc"