How to measure Hyperledger Fabric performance using Hyperledger Caliper in a Kubernetes setting

My Fabric network is deployed in a local Kubernetes cluster (Vagrant) by following this tutorial: https://medium.com/swlh/how-to-implement-hyperledger-fabric-external-chaincodes-within-a-kubernetes-cluster-fd01d7544523
The pods are up and running, and I was able to insert/read marbles from fabric-cli.
However, I have not been able to configure Caliper to measure the performance of this deployment. I run the Caliper 0.4.2 Docker image in the same 'hyperledger' namespace.
The Caliper deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: caliper
  name: caliper
  namespace: hyperledger
spec:
  selector:
    matchLabels:
      app: caliper
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: caliper
    spec:
      containers:
        - env:
            - name: CALIPER_BIND_SUT
              value: fabric:2.2
            - name: CALIPER_BENCHCONFIG
              value: benchmarks/myAssetBenchmark.yaml
            - name: CALIPER_NETWORKCONFIG
              value: networks/networkConfig3.yaml
            - name: CALIPER_FABRIC_GATEWAY_ENABLED
              value: "true"
            - name: CALIPER_FLOW_ONLY_TEST
              value: "true"
          image: hyperledger/caliper:0.4.2
          name: caliper
          command:
            - caliper
          args:
            - launch
            - manager
          tty: true
          volumeMounts:
            - mountPath: /hyperledger/caliper/workspace
              name: caliper-workspace
            - mountPath: /hyperledger/caliper/fabric-samples
              name: fabric-workspace
          workingDir: /hyperledger/caliper/workspace
      restartPolicy: Always
      volumes:
        - name: caliper-workspace
          hostPath:
            path: /home/vagrant/caliper-workspace
            type: Directory
        - name: fabric-workspace
          hostPath:
            path: /home/vagrant/fabric-external-chaincodes/
            type: Directory
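One caveat about the two hostPath volumes above: hostPath is node-local, so the pod only sees /home/vagrant/caliper-workspace if it is scheduled onto the node that actually contains that directory. If the Vagrant cluster has more than one node, pinning the pod is one way to make that explicit; a sketch (the hostname below is only a placeholder, check real names with kubectl get nodes):

# hypothetical addition to the Deployment's pod template spec above
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1   # node that holds the workspace directories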
I followed the https://hyperledger.github.io/caliper/v0.4.2/fabric-tutorial/tutorials-fabric-existing/ documentation on running Caliper alongside an existing Fabric network.
The networkConfig3.yaml file:
name: Fabric
version: '2.0.0'
mutual-tls: true
caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: true
channels:
  - channelName: mychannel
    contracts:
      - id: marbles
organizations:
  - mspid: org1MSP
    identities:
      certificates:
        - name: 'Admin'
          admin: true
          clientPrivateKey:
            path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/Admin#org1/msp/keystore/priv_sk'
          clientSignedCert:
            path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/Admin#org1/msp/signcerts/Admin#org1-cert.pem'
        - name: 'User1'
          clientPrivateKey:
            path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/User1#org1/msp/keystore/priv_sk'
          clientSignedCert:
            path: '../fabric-samples/crypto-config/peerOrganizations/org1/users/User1#org1/msp/signcerts/User1#org1-cert.pem'
    connectionProfile:
      path: 'networks/profile-org1.yaml'
      discover: true
The org1 connection profile looks like this:
name: Fabric
version: '1.0.0'
client:
  organization: org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  org1:
    mspid: org1MSP
    peers:
      - peer0-org1
peers:
  peer0-org1:
    url: grpcs://peer0-org1:7051
    grpcOptions:
      ssl-target-name-override: peer0-org1
      grpc.keepalive_time_ms: 600000
    tlsCACerts:
      path: ../fabric-samples/crypto-config/peerOrganizations/org1/peers/peer0-org1/msp/tlscacerts/tlsca.org1-cert.pem
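A side note on the URL above: grpcs://peer0-org1:7051 is only resolvable from the Caliper pod if a Service named peer0-org1 exists in the same namespace and selects the peer pod. The tutorial's manifests normally create one; purely as a hedged sketch of what such a Service looks like (the app label is an assumption and may differ from the tutorial's actual labels):

apiVersion: v1
kind: Service
metadata:
  name: peer0-org1
  namespace: hyperledger
spec:
  selector:
    app: peer0-org1   # assumed label; must match the peer pod's labels
  ports:
    - name: grpc
      port: 7051
      targetPort: 7051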
The myAssetBenchmark.yaml file:
test:
  name: marble-benchmark
  description: test benchmark
  workers:
    type: local
    number: 2
  rounds:
    - label: initMarble
      description: init marbles benchmark
      txNumber: 100
      rateControl:
        type: fixed-load
        opts:
          tps: 25
      workload:
        module: workload/init.js
monitor:
  type:
    - none
observer:
  type: local
  interval: 1
Caliper is failing because the connection to the peers is not going through.
2021-01-05T04:37:55.592Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer0-org1 due to Error: Failed to connect before the deadline on Endorser- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
Some more error logs:
2021-01-04T01:08:35.466Z - error: [DiscoveryService]: send[mychannel] - no discovery results
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0-org1 url:grpcs://peer0-org1:7051 timeout:3000
2021-01-04T01:08:38.473Z - error: [ServiceEndpoint]: ServiceEndpoint grpcs://peer0-org1:7051 reset connection failed :: Error: Failed to connect before the deadline on Discoverer- name: peer0-org1, url:grpcs://peer0-org1:7051, connected:false, connectAttempted:true
What are the issues with my current configuration?
Is there any blog or documentation I can look at for more detail?

Your connection profile doesn't look correct, as you haven't specified the tlsCACerts information correctly. Since you need a connection profile that works with the node SDK 2.2, the following might work:
name: Fabric
organizations:
  org1:
    mspid: org1MSP
    peers:
      - peer0-org1
peers:
  peer0-org1:
    url: grpcs://peer0-org1:7051
    tlsCACerts:
      path: ../fabric-samples/crypto-config/peerOrganizations/org1/peers/peer0-org1/msp/tlscacerts/tlsca.org1-cert.pem
There are some details about the connection profile format expected by the node SDK 2.2 here, although I'm not sure how accurate they are: https://hyperledger.github.io/fabric-sdk-node/release-2.2/tutorial-commonconnectionprofile.html
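Since the logged error is a plain connection timeout rather than a TLS or certificate failure, it is also worth confirming basic reachability of peer0-org1:7051 from inside the caliper pod before reworking the profile. A rough check, assuming the caliper image has nslookup and nc available (flag support varies between nc builds); if it does not, the throwaway busybox pod shown second works as well:

kubectl -n hyperledger exec deploy/caliper -- sh -c 'nslookup peer0-org1 && nc -vz -w 3 peer0-org1 7051'
kubectl -n hyperledger run nettest --rm -it --image=busybox --restart=Never -- sh -c 'nslookup peer0-org1 && nc -vz -w 3 peer0-org1 7051'

If the name does not resolve or the port is closed, the problem is the Kubernetes Service/DNS wiring rather than the Caliper network configuration.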

Related

Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED ! in Kafka

I want to deploy Kafka in a k8s cluster. I have a ZooKeeper which deploys without any issue, but the Kafka deployment never reaches the running state and gets stuck in CrashLoopBackOff. I checked its logs; here they are:
===> User uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
And here is my deployment file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka
      id: "0"
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:6.2.2
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://127.0.0.1:9092
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "0"
            - name: KAFKA_CREATE_TOPICS
              value: input:1:1, output:1:1
          volumeMounts:
            - mountPath: /var/lib/kafka/data
              name: kafka-data
      volumes:
        - name: kafka-data
          persistentVolumeClaim:
            claimName: kafka-data
I found this link, but all the solutions talk about deploying with Docker and there isn't any solution for Kubernetes. I also tried to add user: root or user: "0:0", but I couldn't figure out how to add it.
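For what it's worth, the "user" setting being looked for here lives under securityContext in the Kubernetes pod spec rather than a user: key. A hedged sketch of what that might look like (runAsUser: 0 is the equivalent of user: root in docker-compose, and fsGroup lets Kubernetes chown the mounted volume to the appuser gid shown in the preflight log; whether this is appropriate depends on the image and your security requirements):

spec:
  securityContext:
    fsGroup: 1000          # appuser gid from the preflight log
  containers:
    - name: kafka
      image: confluentinc/cp-kafka:6.2.2
      securityContext:
        runAsUser: 0       # docker-compose "user: root" equivalent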

k3s with skaffold: container can't be pulled

I'm creating a microservice API that I want to run on a local Raspberry Pi k3s cluster.
The aim is to use skaffold to deploy during development.
The problem is that every time I use skaffold dev I get the same error:
deployment/epos-auth-deploy: container epos-auth is waiting to start: 192.168.1.10:8080/epos_auth:05ea8c1#sha256:4e7f7c7224ce1ec4831627782ed696dee68b66929b5641e9fc6cfbfc4d2e82da can't be pulled
I've set up a local Docker registry, which is defined with this docker-compose.yaml file:
version: '2.0'
services:
  registry:
    image: registry:latest
    volumes:
      - ./registry-data:/var/lib/registry
    networks:
      - registry-ui-net
  ui:
    image: joxit/docker-registry-ui:static
    ports:
      - 8080:80
    environment:
      - REGISTRY_TITLE=Docker Registry
      - REGISTRY_URL=http://registry:5000
    depends_on:
      - registry
    networks:
      - registry-ui-net
networks:
  registry-ui-net:
It's working on http://192.168.1.10:8080 on my local network.
It seems ok when it builds and pushes the image.
I have also set /etc/docker/daemon.json on my local computer:
{
  "insecure-registries": ["192.168.1.10:8080"],
  "registry-mirrors": ["http://192.168.1.10:8080"]
}
I've set the /etc/rancher/k3s/registries.yaml on all the nodes:
mirrors:
  docker.io:
    endpoint:
      - "http://192.168.1.10:8080"
skaffold.yaml looks like this:
apiVersion: skaffold/v2alpha3
kind: Config
metadata:
  name: epos-skaffold
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/skaffold/*
build:
  local:
    useBuildkit: true
  artifacts:
    - image: epos_auth
      context: epos-auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
And ./infra/k8s/skaffold/epos-auth-deploy.yaml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: epos-auth-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: epos-auth
  template:
    metadata:
      labels:
        app: epos-auth
    spec:
      containers:
        - name: epos-auth
          image: epos_auth
          env:
            - name: NATS_URL
              value: http://nats-srv:4222
            - name: NATS_CLUSTER_ID
              value: epos
            - name: NATS_CLIENT_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-keys
                  key: key
            - name: JWT_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-keys
                  key: private_key
            - name: JWT_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-keys
                  key: public_key
            - name: MONGO_URI
              value: mongodb://mongodb-srv:27017/auth-service
---
apiVersion: v1
kind: Service
metadata:
  name: epos-auth-srv
spec:
  selector:
    app: epos-auth
  ports:
    - name: epos-auth
      protocol: TCP
      port: 3000
      targetPort: 3000
I ran:
skaffold config set default-repo 192.168.1.10:8080
skaffold config set insecure-registries 192.168.1.10:8080
I really don't know what's wrong with this.
Do you have any clue, please?
Please change the docker.io reference, as it is confusing:
mirrors:
  docker.io:
    endpoint:
      - "http://192.168.1.10:8080"
And use the image with the full registry path, for example:
registry.local:5000/epos_auth
https://devopsspiral.com/articles/k8s/k3d-skaffold/
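As a hedged illustration of that suggestion, using the registry address from the question (adjust names to taste): key the k3s mirror by the registry's own address instead of docker.io, and reference images by their full path.

# /etc/rancher/k3s/registries.yaml on each node
mirrors:
  "192.168.1.10:8080":
    endpoint:
      - "http://192.168.1.10:8080"

The image in the deployment manifest (or the skaffold default-repo) would then be referenced as 192.168.1.10:8080/epos_auth.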
I installed docker-ce on the master node, then created the master node with the --docker parameter in the command.
When I created the k3s node, I defined a context. I had forgotten to replace the default context in ~/.skaffold/config.
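If it helps anyone hitting the same thing, the context-related settings can be double-checked roughly like this (the context name k3s-cluster is only a placeholder):

kubectl config get-contexts                         # list available contexts
kubectl config use-context k3s-cluster              # switch to the k3s context
skaffold config list --kube-context k3s-cluster     # show skaffold settings scoped to that context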

Chaincode instantiation fails in AWS EKS network

I have a simple network with one orderer and one peer deployed on AWS Elastic Kubernetes Service (EKS). I created the cluster using the official EKS documentation. I am able to bring up the orderer as well as the peer within the pods. The peer is able to create and join the channel, and the anchor peer update transaction succeeds as well.
High-level configuration for the pods:
Started as StatefulSets
Pods use dynamic PVs with a StorageClass configured for aws-ebs
Exposed using a Service of type LoadBalancer
Uses the Docker-in-Docker (DIND) approach for chaincode
Detailed Configuration for Orderer Pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-orderer
  labels:
    app: allparticipants-orderer
spec:
  serviceName: orderer
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-orderer
  template:
    metadata:
      labels:
        app: allparticipants-orderer
    spec:
      containers:
        - name: allparticipants-orderer
          image: <docker-hub-url>/orderer:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "orderer"]
          ports:
            - containerPort: 7050
          env:
            - name: FABRIC_LOGGING_SPEC
              value: DEBUG
            - name: ORDERER_GENERAL_LOGLEVEL
              value: DEBUG
            - name: ORDERER_GENERAL_LISTENADDRESS
              value: 0.0.0.0
            - name: ORDERER_GENERAL_GENESISMETHOD
              value: file
            - name: ORDERER_GENERAL_GENESISFILE
              value: /var/hyperledger/orderer/orderer.genesis.block
            - name: ORDERER_GENERAL_LOCALMSPID
              value: OrdererMSP
            - name: ORDERER_GENERAL_LOCALMSPDIR
              value: /var/hyperledger/orderer/msp
            - name: ORDERER_GENERAL_TLS_ENABLED
              value: "false"
            - name: ORDERER_GENERAL_TLS_PRIVATEKEY
              value: /var/hyperledger/orderer/tls/server.key
            - name: ORDERER_GENERAL_TLS_CERTIFICATE
              value: /var/hyperledger/orderer/tls/server.crt
            - name: ORDERER_GENERAL_TLS_ROOTCAS
              value: /var/hyperledger/orderer/tls/ca.crt
          volumeMounts:
            - name: allparticipants-orderer-ledger
              mountPath: /var/ledger
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-orderer-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-orderer-sc
        resources:
          requests:
            storage: 1Gi
Detailed configuration for Peer along with DIND in the same pod:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: allparticipants-peer0
  labels:
    app: allparticipants-peer0
spec:
  serviceName: allparticipants-peer0
  replicas: 1
  selector:
    matchLabels:
      app: allparticipants-peer0
  template:
    metadata:
      labels:
        app: allparticipants-peer0
    spec:
      containers:
        - name: docker
          env:
            - name: DOCKER_TLS_CERTDIR
              value:
          securityContext:
            privileged: true
          image: "docker:stable-dind"
          ports:
            - containerPort: 2375
          volumeMounts:
            - mountPath: /var/lib/docker
              name: dockervolume
        - name: allparticipants-peer0
          image: <docker-hub-url>/peer0:0.1
          imagePullPolicy: Always
          command: ["sh", "-c", "peer node start"]
          ports:
            - containerPort: 7051
          env:
            - name: CORE_VM_ENDPOINT
              value: http://localhost:2375
            - name: CORE_PEER_CHAINCODELISTENADDRESS
              value: 0.0.0.0:7052
            - name: FABRIC_LOGGING_SPEC
              value: debug
            - name: CORE_LOGGING_PEER
              value: debug
            - name: CORE_LOGGING_CAUTHDSL
              value: debug
            - name: CORE_LOGGING_GOSSIP
              value: debug
            - name: CORE_LOGGING_LEDGER
              value: debug
            - name: CORE_LOGGING_MSP
              value: info
            - name: CORE_LOGGING_POLICIES
              value: debug
            - name: CORE_LOGGING_GRPC
              value: debug
            - name: CORE_LEDGER_STATE_STATEDATABASE
              value: goleveldb
            - name: GODEBUG
              value: "netdns=go"
            - name: CORE_PEER_TLS_ENABLED
              value: "false"
            - name: CORE_PEER_GOSSIP_USELEADERELECTION
              value: "true"
            - name: CORE_PEER_GOSSIP_ORGLEADER
              value: "false"
            - name: CORE_PEER_GOSSIP_SKIPHANDSHAKE
              value: "true"
            - name: CORE_PEER_PROFILE_ENABLED
              value: "true"
            - name: CORE_PEER_COMMITTER_ENABLED
              value: "true"
            - name: CORE_PEER_TLS_CERT_FILE
              value: /etc/hyperledger/fabric/tls/server.crt
            - name: CORE_PEER_TLS_KEY_FILE
              value: /etc/hyperledger/fabric/tls/server.key
            - name: CORE_PEER_TLS_ROOTCERT_FILE
              value: /etc/hyperledger/fabric/tls/ca.crt
            - name: CORE_PEER_ID
              value: allparticipants-peer0
            - name: CORE_PEER_ADDRESS
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LISTENADDRESS
              value: 0.0.0.0:7051
            - name: CORE_PEER_EVENTS_ADDRESS
              value: 0.0.0.0:7053
            - name: CORE_PEER_GOSSIP_BOOTSTRAP
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
              value: <peer0-load-balancer-url>:7051
            - name: CORE_PEER_LOCALMSPID
              value: AllParticipantsMSP
            - name: CORE_PEER_MSPCONFIGPATH
              value: <path-to-msp-of-peer0>
            - name: ORDERER_ADDRESS
              value: <orderer-load-balancer-url>:7050
            - name: CORE_PEER_ADDRESSAUTODETECT
              value: "true"
            - name: CORE_VM_DOCKER_ATTACHSTDOUT
              value: "true"
            - name: FABRIC_CFG_PATH
              value: /etc/hyperledger/fabric
          volumeMounts:
            - name: allparticipants-peer0-ledger
              mountPath: /var/ledger
            - name: dockersock
              mountPath: /host/var/run/docker.sock
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: dockervolume
          persistentVolumeClaim:
            claimName: docker-pvc
  volumeClaimTemplates:
    - metadata:
        name: allparticipants-peer0-ledger
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: allparticipants-peer0-sc
        resources:
          requests:
            storage: 1Gi
I am not including the StorageClass and Service configuration for the pods, as they seem to work fine.
So, as stated earlier, the peer is able to create and join the channel, and I am able to install the chaincode on the peer. However, while instantiating the chaincode from within the peer, I get the following error:
Error: error endorsing chaincode: rpc error: code = Unavailable desc = transport is closing
Apart from this, the Docker (DIND) container started within peer0's pod has the following warning logs:
level=warning msg="887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/887409d917bd7ef74ebe8617b8dcbedc8197741662a14b988491c54085e9acfd/mounts/shm, flags: 0x2: no such file or directory"
Additionally, there are no logs in this Docker container when I submit the following instantiation request:
peer chaincode instantiate -o $ORDERER_ADDRESS -C carchannel -n fabcar \
  -l node -v 0.1.1 -c '{"Args":[]}' -P "OR ('AllParticipantsMSP.member','AllParticipantsMSP.peer', 'AllParticipantsMSP.admin', 'AllParticipantsMSP.client')"
I tried searching for similar issues but was not able to find one that matches this EKS setup.
Is there an issue with the pod configuration or the EKS configuration? I am not able to get past this one. Can someone please point me in the right direction? I am quite new to K8s.
Update 1:
I updated the Service type to LoadBalancer, keeping the rest of the configuration the same. I still get the same error.
Update 2:
Configured DIND approach for the chaincode container.
Update 3:
Mounted a PV and a PVC for the DIND container, and updated the CORE_VM_ENDPOINT access protocol from tcp to http.
The issue here was the image used for the Docker-in-Docker container. I downgraded the docker-in-docker image from docker:stable-dind to docker:18-dind. The stable image version has TLS enabled by default. I tried setting the environment variable DOCKER_TLS_CERTDIR to blank, but that did not work out.
Attaching the snippet of the DIND configuration:
containers:
  - name: docker
    securityContext:
      privileged: true
    image: "docker:18-dind"
    ports:
      - containerPort: 2375
        protocol: TCP
    volumeMounts:
      - mountPath: /var/lib/docker
        name: dockervolume
Note: While I have been able to work around this, I am not marking this as the accepted answer, since TLS might be required to instantiate the chaincode and there should be a way to use it; I would be open to, and am looking out for, such answers.
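For anyone who wants to keep TLS enabled instead of downgrading, a rough, untested sketch of how that could be wired follows. The environment variable names come from the vm.docker.tls section of the peer's core.yaml; the shared emptyDir certs volume and the paths under /certs (where the dind image generates its client certificates when DOCKER_TLS_CERTDIR is set) are assumptions, not something verified in this setup.

containers:
  - name: docker
    image: "docker:stable-dind"
    securityContext:
      privileged: true
    env:
      - name: DOCKER_TLS_CERTDIR
        value: /certs                  # dind generates CA/server/client certs here
    ports:
      - containerPort: 2376            # TLS port (2375 is the plaintext port)
    volumeMounts:
      - mountPath: /var/lib/docker
        name: dockervolume
      - mountPath: /certs
        name: dind-certs
  - name: allparticipants-peer0
    # ...existing peer container configuration...
    env:
      - name: CORE_VM_ENDPOINT
        value: tcp://localhost:2376
      - name: CORE_VM_DOCKER_TLS_ENABLED
        value: "true"
      - name: CORE_VM_DOCKER_TLS_CA_FILE
        value: /certs/client/ca.pem
      - name: CORE_VM_DOCKER_TLS_CERT_FILE
        value: /certs/client/cert.pem
      - name: CORE_VM_DOCKER_TLS_KEY_FILE
        value: /certs/client/key.pem
    volumeMounts:
      - mountPath: /certs
        name: dind-certs
        readOnly: true
volumes:
  - name: dind-certs
    emptyDir: {}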

How to fix "connection refused" error during filebeat start in kubernetes

Error starting Filebeat as a DaemonSet in Kubernetes.
I run Filebeat as a Kubernetes DaemonSet in a cluster with autodiscover turned on, and I get the following errors in the Filebeat log:
"ERROR kubernetes/util.go:90 kubernetes: Querying for pod failed with error: performing request: Get https://10.96.0.1:443/api/v1/namespaces/default/pods/filebeat-daemon-7nkqt: dial tcp 10.96.0.1:443: connect: connection refused
ERROR kubernetes/watcher.go:183 kubernetes: Performing a resource sync err performing request: Get https://10.96.0.1:443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=0:
dial tcp 10.96.0.1:443: connect: connection refused for *v1.PodList".
When I add a "sleep 5" to the Filebeat start command, the errors disappear.
Does anyone have a solution for this problem?
Here is a filebeat manifest:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-json
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.registry_file: "/data/logs/elog/filebeat/registry"
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          include_annotations: ["logging"]
          templates:
            - condition:
                equals:
                  kubernetes.annotations.logging: "json"
              config:
                - type: docker
                  json:
                    keys_under_root: false
                    add_error_key: false
                    ignore_decoding_error: true
                    message_key: message
                    overwrite_keys: true
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  processors:
                    - drop_event: # drop others if any
                        when:
                          not.equals:
                            kubernetes.annotations.logging: "json"
                    - add_cloud_metadata: ~
                    - add_kubernetes_metadata:
                        in_cluster: true
    processors:
      - drop_fields:
          fields: ["host"]
    output.logstash:
      hosts: ["logstash-host:5044"]
    logging:
      to_files: true
      files:
        path: "/data/logs/elog/filebeat"
        name: filebeat.log
        rotateeverybytes: 10485760 # = 10MB
        keepfiles: 7
      level: info
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-daemon
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      name: filebeat
      k8s-app: filebeat
  template:
    metadata:
      labels:
        name: filebeat
        k8s-app: filebeat
    spec:
      containers:
        - name: filebeat-container
          image: docker-registry.company.com/elog/filebeat-test2
          command: ['/bin/bash', '-c', 'ln -f -s /data/configs/filebeat/filebeat.yml /data/elog/filebeat/current/filebeat.yml;sleep 5;/startup_filebeat.sh']
          volumeMounts:
            - mountPath: /var/lib/docker/containers
              name: filebeat-log-source
            - name: filebeat-config
              mountPath: /data/configs/filebeat
            - name: varlog
              mountPath: /var/log
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: filebeat-log-source
          hostPath:
            path: /var/lib/docker/containers
            type: DirectoryOrCreate
        - name: filebeat-config
          configMap:
            defaultMode: 0644
            name: filebeat-config-json
        - name: varlog
          hostPath:
            path: /var/log
            type: DirectoryOrCreate
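Not a definitive answer, but since the sleep 5 workaround suggests Filebeat simply starts before the node can reach the API server address from the error (10.96.0.1:443), one way to make the wait explicit instead of a fixed sleep is an initContainer that blocks until that endpoint accepts connections. A rough sketch, placed under the DaemonSet's pod spec next to containers (image, address, and timings are assumptions taken from the error log; nc flag support varies between builds):

      initContainers:
        - name: wait-for-apiserver
          image: busybox:1.36
          command:
            - sh
            - -c
            - |
              # Block until the in-cluster API server VIP from the error log is reachable.
              until nc -z -w 2 10.96.0.1 443; do
                echo "waiting for kube-apiserver..."
                sleep 2
              done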

Kubernetes pod unreachable

I've got this bizarre issue where a pod in one of my environments isn't allowing requests through on the specified port.
From another pod:
/app # wget 10.12.2.22:6000
Connecting to 10.12.2.22:6000 (10.12.2.22:6000)
wget: error getting response
From the service:
/app # wget services:6000
Connecting to services:6000 (10.15.249.82:6000)
wget: error getting response: Resource temporarily unavailable
From inside itself:
/app # wget localhost:6000
converted 'http://localhost:6000' (ANSI_X3.4-1968) -> 'http://localhost:6000' (UTF-8)
--2018-10-03 01:26:39-- http://localhost:6000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:6000... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-10-03 01:26:39 ERROR 404: Not Found.
Here's my YAML:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: services
  namespace: production
  labels:
    name: services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: services
      version: production
  template:
    metadata:
      name: services
      labels:
        app: services
        version: production
    spec:
      containers:
        - name: services
          image: gcr.io/myproject/services:v1.6.0
          imagePullPolicy: Always
          volumeMounts:
            - name: "gc-serviceaccount"
              readOnly: true
              mountPath: "/app/secrets"
          securityContext:
            privileged: true
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
            - name: sockets
              containerPort: 6000
              protocol: TCP
          env:
            - name: NODE_ENV
              value: "local"
            - name: PORTAL_HOST
              value: "portal.myproject.com"
            - name: PORTAL_HOSTNAME
              value: "http://portal"
            - name: DESIGNER_HOSTNAME
              value: "http://designer"
            - name: FILE_HOST
              value: "https://files.myproject.com"
            - name: STORAGE_BUCKET
              value: "production"
            - name: DATA_PATH
              value: ""
            - name: MONGO_DB_NAME
              value: "MyProject"
            - name: MONGO_REPLICA_SET
              value: "myproject"
            - name: PORT
              value: "3000"
            - name: REDIS_HOSTNAME
              value: "redis"
            - name: REDIS_PORT
              value: "6379"
            - name: SOCKETCLUSTER_WS_ENGINE
              value: "ws"
            - name: SOCKETCLUSTER_PORT
              value: "6000"
            - name: CLIENT_HOST
              value: "myproject.com"
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            periodSeconds: 1
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 10
      volumes:
        - name: "gc-serviceaccount"
          secret:
            secretName: "gc-serviceaccount"
---
apiVersion: v1
kind: Service
metadata:
  name: services
  namespace: production
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: sockets
      port: 6000
      targetPort: sockets
  selector:
    app: services
It's worth noting that I have no issues connecting to the pod from another pod on port 3000.
So far I have created a new environment and deployed to it, but I'm receiving the same error.
Strangely enough, my other, older environments work fine, so I'm wondering whether it's a Kubernetes issue.
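Not an answer as such, but given that localhost:6000 responds while the pod IP and the Service do not, one thing worth checking is which address the socket server actually binds to inside the container: a process listening only on 127.0.0.1 (or ::1) behaves exactly like this, while 0.0.0.0 or :: means all interfaces. A quick check, assuming netstat (or ss) is available in the image and substituting the real pod name:

kubectl -n production exec -it <services-pod-name> -- netstat -tln

Look at the Local Address column for port 6000 to see which interface it is bound to.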