Nginx Ingress getting 504 gateway time-out - kubernetes

I'm quite new to k8s in general; I've only used it for smaller projects, but I made those work. I hope this is the right channel to ask questions (in this case about ingress-nginx). I'm trying to set up a cluster with an API gateway and a few microservices (all written in NestJS). For a little background: I first had everything in docker-compose, and my entry point was also an Nginx container with Let's Encrypt. The whole Docker setup works great locally.
This was the config used for my Nginx container:
upstream equmedia-api {
    server equmedia-api:3000;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name localhost;

    return 301 https://$server_name$request_uri;
}

server {
    listen 80;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    keepalive_timeout 70;

    server_name subdomain.example.com;

    ssl_session_cache shared:SSR:10m;
    ssl_session_timeout 10m;
    ssl_certificate /etc/letsencrypt/live/equmedia.pixeliner.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/equmedia.pixeliner.com/privkey.pem;

    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx.error.log;

    location / {
        proxy_pass http://equmedia-api;
        # proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
As you can see, it upstreamed to my API container.
Eventually I wanted to move the whole deployment to k8s. It seemed like good follow-up practice after the small projects.
I learned about ingress-nginx and gave it my first try, but I seem to have hit a wall.
Here is the setup I'm trying to achieve:
Through DigitalOcean the setup will be behind a LoadBalancer.
Here is my Ingress resource for ingress-nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: equmedia-ingress-api
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/proxy-protocol: "true"
    nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https"
spec:
  tls:
    - hosts:
        - subdomain.example.com
      secretName: quickstart-example-tls
  rules:
    - host: subdomain.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: equmedia-api
                port:
                  number: 3000
And my api service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: equmedia-api
  name: equmedia-api
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    io.kompose.service: equmedia-api
status:
  loadBalancer: {}
When I try to access "https://subdomain.example.com/api/health", I get a 504 Gateway Time-out. Looking at the ingress controller logs I get the following:
2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
Can anyone point me in the right direction to fix this issue?
EDIT
The output of kubectl get pods -l io.kompose.service=equmedia-api:
NAME READY STATUS RESTARTS AGE
equmedia-api 1/1 Running 0 2d2h
kubectl get svc:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
equmedia-api ClusterIP 10.245.173.11 <none> 3000/TCP 23h
equmedia-api-rabbitmq ClusterIP 10.245.17.225 <none> 5672/TCP,15673/TCP 2d17h
equmedia-api-redis ClusterIP 10.245.120.11 <none> 6379/TCP 2d17h
equmedia-auth-db ClusterIP 10.245.94.21 <none> 5432/TCP 2d17h
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d17h
quickstart-ingress-nginx-controller LoadBalancer 10.245.36.216 179.128.139.106 80:31194/TCP,443:31609/TCP 2d16h
quickstart-ingress-nginx-controller-admission ClusterIP 10.245.232.77 <none> 443/TCP 2d16h
EDIT2:
I've requested my domain https://subdomain.example.com/api/health through browser, curl and postman. All of them return timeouts.
kubectl get pods -A -o wide | grep 10.244.0.228 returns:
default equmedia-api 1/1 Running 0 2d4h 10.244.0.228 temp-pool-qyhii <none> <none>
kubectl get svc -A | grep 10.244.0.228 returns nothing
EDIT3:
Here is the logs of my API:
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [NestFactory] Starting Nest application...
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AppModule dependencies initialized +136ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] RedisCacheModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] UtilsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AxiosWrapperModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PassportModule dependencies initialized +32ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] JwtModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] TerminusModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] DiscoveryModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +0ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ScheduleModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +61ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +17ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +7ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HealthModule dependencies initialized +8ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CacheModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] MailModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HttpModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +24ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullQueueModule dependencies initialized +7ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PaymentModule dependencies initialized +8ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CustomerModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ContentModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AdserveModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AuthModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] OpenIdModule dependencies initialized +65ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] HealthController {/api/health}: +18ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health, GET} route +5ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-ping, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-disk, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-memory, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-microservice/:name, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] OpenIdController {/api/open-id}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/login, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/user, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/callback, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/logout, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AuthController {/api/auth}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signup, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signin, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signout, POST} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/refresh, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UserController {/api/user}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-user-id/email?, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-authenticated-user, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/change-user-password, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/delete-user-account, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/confirm/:token, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/forgot-password, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/set-new-password/:token, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UsersController {/api/users}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PaymentController {/api/payment}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/create/:id, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CustomerController {/api/customer}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/profile/:id, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/create, POST} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/delete/:id, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/update/:id, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ContentController {/api/content}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/create, POST} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/update/:contentId, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/delete/:contentId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/category/:categoryId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/slug/:slug, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CategoryController {/api/category}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/update/:categoryId, PUT} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/delete/:categoryId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] WidgetController {/api/widget}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/create, POST} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/update/:widgetId, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/delete/:widgetId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdvertiserController {/api/adserve/advertiser}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/assign, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdserveController {/api/adserve}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/serve, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/redirect, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PublisherController {/api/adserve}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId/zone/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ReportController {/api/adserve/report}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/report, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [NestApplication] Nest application successfully started +58ms
-- API GATEWAY RUNNING - PORT: 3000 --
No errors are logged, and through a port-forward I also see my api working.
EDIT4:
Here is the gist with all pods/services/claims/...
https://gist.github.com/pixeliner/2c89048294197155b0d4833ab4045f3c

Your error output:
2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
(repeated every 5 seconds) implies the request is timing out against the upstream at 10.244.0.228:3000.
Things to check:
- Check whether 10.244.0.228 is your service's IP: kubectl get svc equmedia-api (it will likely be of type ClusterIP).
- Port-forward to the service directly: kubectl port-forward svc/equmedia-api 3000:3000, then try to access localhost:3000 from another terminal or your browser. Does it respond, error, or time out?
- Check the pods your service selector matches: kubectl get pods -l io.kompose.service=equmedia-api. Does this return any pods? If so, are they in a Ready state or are they erroring? Is their restart count greater than 0?
- Check the pod logs with kubectl logs -f {pod-name} to see whether the app is unable to start, or is restarting repeatedly.
UPDATE 1
Please add the output of the following commands to your question. Wrap the output with three backticks (`) on a single line before and after to preserve formatting:
kubectl get pods -l io.kompose.service=equmedia-api
kubectl get svc
UPDATE 2
Since the IP your controller is trying to reach is 10.244.0.228, see whether any of your pods or services actually have that IP. Please add the output of these commands:
kubectl get pods -A -o wide | grep 10.244.0.228
kubectl get svc -A | grep 10.244.0.228
UPDATE 3
I've yet to try deploying the gist, but I have noticed something: you have network policies set up, and you have labelled your pod:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.network/backend: "true" # <<--- HERE
    io.kompose.service: equmedia-api
  name: equmedia-api-pod
spec:
  ...
This matches your network policy here:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: backend
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/backend: "true"
  podSelector:
    matchLabels:
      io.kompose.network/backend: "true"
Now, this network policy reads (based on the information in this link):
"Allow connections to pods matching the label io.kompose.network/backend="true" (the last three lines) only from pods that also have the label io.kompose.network/backend="true" (the ingress.from.podSelector bit)."
So, assuming I'm reading this correctly, the reason the ingress controller is not able to talk to the pod is that the controller pod does not have the label io.kompose.network/backend="true". Since you did not include it in your gist, I'm assuming you're using the ingress controller chart as a subchart/dependency, and out of the box that chart won't have this label. This would explain why we were able to port-forward to the pod and the service directly, but the controller pod was not able to reach the pod.
An easy way to verify this is to either delete the backend network policy, or modify it to allow all ingress traffic as a test (something like the example here).
If this works, it will confirm the network policy is blocking the traffic.
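For reference, an allow-all test policy might look like the following sketch (the name backend-allow-all is illustrative, not from the gist); an empty ingress rule matches traffic from every source:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-all   # illustrative name
spec:
  # Still select the same backend pods...
  podSelector:
    matchLabels:
      io.kompose.network/backend: "true"
  ingress:
    - {}   # ...but the empty rule allows ingress from all sources on all ports
```

Applying this in place of the restrictive backend policy (or alongside it, since policies are additive) should let the controller pod reach the api pod if the policy is indeed the culprit.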

Related

Docker compose with NestJS PostgreSQL not connecting

I have tried to dockerize a NestJS app with PostgreSQL. Postgres is refusing the connection and also shows a log line saying database system was shut down at <<some timestamp>>. Here are my docker-compose.yml and the logs.
version: '3'
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_DB=gm
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=admin
      - PGADMIN_LISTEN_PORT=5050
    ports:
      - "5050:5050"
  api:
    image: gm-server
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/home/node
    ports:
      - '8081:4001'
    depends_on:
      - postgres
    env_file: .env
    command: npm run start:prod
volumes:
  pgdata:
server-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
server-postgres-1 |
server-postgres-1 | 2023-01-04 09:36:45.249 UTC [1] LOG: starting PostgreSQL 15.0 (Debian 15.0-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
server-postgres-1 | 2023-01-04 09:36:45.250 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
server-postgres-1 | 2023-01-04 09:36:45.250 UTC [1] LOG: listening on IPv6 address "::", port 5432
server-postgres-1 | 2023-01-04 09:36:45.255 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
server-postgres-1 | 2023-01-04 09:36:45.261 UTC [29] LOG: database system was shut down at 2023-01-04 09:36:27 UTC
server-postgres-1 | 2023-01-04 09:36:45.274 UTC [1] LOG: database system is ready to accept connections
server-api-1 |
server-api-1 | > nestjs-app@0.0.1 start:prod
server-api-1 | > node dist/main
server-api-1 |
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [NestFactory] Starting Nest application...
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] MulterModule dependencies initialized +61ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] MulterModule dependencies initialized +1ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +0ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] AppModule dependencies initialized +1ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM LOG [InstanceLoader] ConfigModule dependencies initialized +0ms
server-api-1 | [Nest] 19 - 01/04/2023, 9:36:47 AM ERROR [ExceptionHandler] connect ECONNREFUSED 127.0.0.1:5432
server-api-1 | Error: connect ECONNREFUSED 127.0.0.1:5432
server-api-1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1300:16)
server-api-1 exited with code 1
And I have also tried most of the relevant answers (before Stack Overflow starts marking me as a duplicate) and they didn't work. Yes, I have tried changing the host to host.docker.internal as suggested previously.
For more clarity, here is my TypeORM datasource config in NestJS:
import { DataSource } from 'typeorm';

export const typeOrmConnectionDataSource = new DataSource({
  type: 'postgres',
  host: 'host.docker.internal',
  port: 5432,
  username: 'postgres',
  password: 'postgres',
  database: 'gm',
  entities: [__dirname + '/**/*.entity{.ts,.js}'],
  migrations: [__dirname + '/migrations/**/*{.ts,.js}'],
  logging: true,
  synchronize: false,
  migrationsRun: false,
});
Why is this problem different from the other "duplicate" questions?
This problem is different because:
- the other threads don't actually solve the issue;
- even if we assume they do, their solutions didn't work for me.
More evidence?
I have tried all the solutions for this one.
Another solution that didn't work.
Another one, where I even followed the chat.
Apparently the problem lay in NestJS not compiling properly because of my customizations to its scripts. Check whether that's your issue too.
After fixing this, just follow the instructions to use "postgres" as the host, and it will work (if you are facing the same issue).
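As a sketch of that last step (a plain options object is shown for illustration; the real config would pass these fields to DataSource), the host must be the compose service name, postgres, since that is the hostname routable inside the compose network:

```typescript
// Sketch only: the docker-compose.yml above names the Postgres service
// "postgres", so inside the compose network that service name (not
// localhost or host.docker.internal) is the hostname the api container
// can actually reach.
const dataSourceOptions = {
  type: 'postgres' as const,
  host: 'postgres', // compose service name
  port: 5432,
  username: 'postgres',
  password: 'postgres',
  database: 'gm',
};
```

host.docker.internal only works for reaching the host machine from a container on Docker Desktop, which is why it kept failing here.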

NestJs GRPC Kubernetes: Microservice stuck on Initialization when run inside kubernetes cluster

Please help me troubleshoot:
Starting the grpc microservice container in a docker environment produces these logs:
[Nest] 24352 - 04.06.2022, 11:36:37 LOG [InstanceLoader] UserMessageModule dependencies initialized +3ms
[Nest] 24352 - 04.06.2022, 11:36:37 LOG [InstanceLoader] GraphQLModule dependencies initialized +3ms
[Nest] 24352 - 04.06.2022, 11:36:37 LOG [NestMicroservice] Nest microservice successfully started +132ms
[Nest] 24352 - 04.06.2022, 11:36:37 LOG [RoutesResolver] AppController {/}: +10ms
[Nest] 24352 - 04.06.2022, 11:36:37 LOG [RouterExplorer] Mapped {/health, GET} route +5ms
When running it inside a kubernetes cluster the logs seem to be stuck at line 2:
[Nest] 18 - 06/04/2022, 9:04:38 AM LOG [InstanceLoader] UserMessageModule dependencies initialized +1ms
[Nest] 18 - 06/04/2022, 9:04:38 AM LOG [InstanceLoader] GraphQLModule dependencies initialized +0ms
Any ideas why this would happen without any error message (timeout or similar)?

NestJS app build with docker can't access postgres database in the same docker network: ECONNREFUSED 127.0.0.1:5432

I want to run my app on my local machine within Docker. I don't want to optimize the size of my docker app or build it for production now.
Docker builds my backend-api app and postgres-db successfully, but I can only access the Docker Postgres database from outside Docker, e.g. with DBeaver installed on my computer. Also, if I start my backend-api app WITHOUT Docker with "npm run start", the app can access the database without any errors and can also write into the Postgres db. Only when I build the backend-api with Docker and launch the app inside Docker do I get this error. Since this only happens within Docker, I assume that something important is missing in my Dockerfile.
My docker-compose.yml:
version: '3'
services:
  backend:
    build: .
    container_name: backend-api
    command: npm run start
    restart: unless-stopped
    ports:
      - 3000:3000
    volumes:
      - .:/usr/src/backend
    networks:
      - docker-network
    depends_on:
      - database
  database:
    image: postgres:latest
    container_name: backend-db
    ports:
      - 5432:5432
    volumes:
      - postgresdb/:/var/lib/postgresql/data/
    networks:
      - docker-network
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpw
      POSTGRES_DB: devdb
volumes:
  postgresdb:
networks:
  docker-network:
    driver: bridge
My Dockerfile:
FROM node:16.15.0-alpine
WORKDIR /usr/src/backend
COPY . .
RUN npm install
sudo docker-compose up --build output:
Starting backend-db ... done
Recreating backend-api ... done
Attaching to backend-db, backend-api
backend-db |
backend-db | PostgreSQL Database directory appears to contain a database; Skipping initialization
backend-db |
backend-db | 2022-05-03 20:35:46.065 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
backend-db | 2022-05-03 20:35:46.066 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
backend-db | 2022-05-03 20:35:46.066 UTC [1] LOG: listening on IPv6 address "::", port 5432
backend-db | 2022-05-03 20:35:46.067 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
backend-db | 2022-05-03 20:35:46.071 UTC [26] LOG: database system was shut down at 2022-05-03 20:35:02 UTC
backend-db | 2022-05-03 20:35:46.077 UTC [1] LOG: database system is ready to accept connections
backend-api |
backend-api | > backend@0.0.1 start
backend-api | > nodemon
backend-api |
backend-api | [nodemon] 2.0.15
backend-api | [nodemon] to restart at any time, enter `rs`
backend-api | [nodemon] watching path(s): src/**/*
backend-api | [nodemon] watching extensions: ts
backend-api | [nodemon] starting `IS_TS_NODE=true ts-node -r tsconfig-paths/register src/main.ts`
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [NestFactory] Starting Nest application...
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +73ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] AppModule dependencies initialized +0ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
backend-api | [Nest] 30 - 05/03/2022, 8:35:50 PM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (1)...
backend-api | Error: connect ECONNREFUSED 127.0.0.1:5432
backend-api | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1187:16)
backend-api | [Nest] 30 - 05/03/2022, 8:35:53 PM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (2)...
My ormconfig.ts:
import { ConnectionOptions } from 'typeorm';

const config: ConnectionOptions = {
  type: 'postgres',
  host: 'localhost',
  port: 5432,
  username: 'devuser',
  password: 'devpw',
  database: 'devdb',
  entities: [__dirname + '/**/*.entity{.ts,.js}'],
  synchronize: false,
  migrations: [__dirname + '/**/migrations/**/*{.ts,.js}'],
  cli: {
    migrationsDir: 'src/migrations',
  },
};

export default config;
Please let me know what other information you would like me to provide. I'm new to the Docker world.
When you run a Docker container, localhost resolves to the container's own localhost, not your machine's, so it's not the right host to use. Since you're using docker-compose, a Docker network is automatically created, with the service names as hostnames. So instead of using localhost as your database host, use database, and the Docker network will route the request to the database service as defined in your docker-compose.yml file.
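A minimal sketch of that idea (the resolveDbHost helper and the DATABASE_HOST variable name are illustrative, not from the question): read the Postgres host from the environment so one config works both inside and outside Compose.

```typescript
// Illustrative helper: Compose can set DATABASE_HOST=database (the service
// name, routable on the compose network); outside Docker we fall back to
// localhost, where the published 5432 port is reachable.
function resolveDbHost(env: Record<string, string | undefined>): string {
  return env.DATABASE_HOST ?? 'localhost';
}

// In the TypeORM options you would then use:
//   host: resolveDbHost(process.env)
```

This avoids hard-coding either hostname, so the same ormconfig runs under "npm run start" on the host and under docker-compose.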
I ran into the same issue, and this docker-compose.yml setup worked for me:
version: "3.8"
services:
  database:
    container_name: db
    image: postgres:14.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_HOST_AUTH_METHOD
  backend:
    container_name: api
    build:
      dockerfile: Dockerfile
      context: .
    restart: on-failure
    depends_on:
      - database
    ports:
      - "3000:3000"
    environment:
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - DATABASE_USER=postgres
      - DATABASE_PASSWORD
      - DATABASE_NAME
      - DATABASE_SYNCHRONIZE
      - NODE_ENV
The two main notable changes were:
- changed the host to database, so the Docker network routes the request to the database service as defined in your docker-compose.yml file;
- added postgres as the default PostgreSQL user.

Nestcloud with docker-compose consul micro-service connection refused

I've dockerized a nestcloud app with Consul and a bunch of microservices.
However, I'm unable to start any of the microservices (for example test-service below), as it seems Consul is refusing any TCP connection:
[Nest] 1 - 11/23/2021, 9:47:34 PM [NestFactory] Starting Nest application...
[Nest] 1 - 11/23/2021, 9:50:56 PM [ConfigModule] Unable to initial ConfigModule, retrying...
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] ServiceRegistryModule dependencies initialized +114ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] LoggerModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] BootModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] HttpModule dependencies initialized +6ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] AppModule dependencies initialized +2ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [InstanceLoader] ConsulModule dependencies initialized +0ms
[Nest] 1 - 11/23/2021, 9:47:34 PM [ConfigModule] Unable to initial ConfigModule, retrying... +39ms
Error: consul: kv.get: connect ECONNREFUSED 127.0.0.1:8500
Here's my docker-compose.yaml :
version: "3.2"
services:
  test-service:
    build:
      context: .
      dockerfile: apps/test-service/Dockerfile
      args:
        NODE_ENV: development
    image: "test-service:latest"
    restart: always
    depends_on:
      - consul
    environment:
      - CONSUL_HOST=consul
      - DISCOVERY_HOST=localhost
    ports:
      - 50054:50054
  consul:
    container_name: consul
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:53/udp"
    image: consul
    command: ["agent", "-server", "-bootstrap", "-ui", "-client", "0.0.0.0"]
    labels:
      kompose.service.type: nodeport
      kompose.service.expose: "true"
      kompose.image-pull-policy: "Always"
the Dockerfile for my microservice test-service :
FROM node:12-alpine
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
RUN mkdir -p /usr/src/app
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN yarn global add @nestjs/cli
RUN yarn install --production=false
# Build production files
RUN nest build test-service
# Bundle app source
COPY . .
EXPOSE 50054
CMD ["node", "dist/apps/test-service/main.js"]
and the bootstrap-development.yaml used by nestcloud to launch the microservice
consul:
host: localhost
port: 8500
config:
key: mybackend/config/${{ service.name }}
service:
discoveryHost: localhost
healthCheck:
timeout: 1s
interval: 10s
tcp: ${{ service.discoveryHost }}:${{ service.port }}
maxRetry: 5
retryInterval: 5000
tags: ["v1.0.0", "microservice"]
name: io.mybackend.test.service
port: 50054
loadbalance:
ruleCls: RandomRule
logger:
level: info
transports:
- transport: console
level: debug
colorize: true
datePattern: YYYY-MM-DD h:mm:ss
label: ${{ service.name }}
- transport: file
name: info
filename: info.log
datePattern: YYYY-MM-DD h:mm:ss
label: ${{ service.name }}
# 100M
maxSize: 104857600
json: false
maxFiles: 10
- transport: dailyRotateFile
filename: info.log
datePattern: YYYY-MM-DD-HH
zippedArchive: true
maxSize: 20m
maxFiles: 14d
I can successfully ping the consul container from the microservice container with :
docker exec -ti test-service ping consul
Do you see anything wrong with my config, and if so, can you please tell me how to get it to work?
You are trying to access Consul via localhost from the NestJS service, but that service runs in a different container, and its localhost is not Consul's localhost. You need to reach Consul using the container name as the host.
Change the consul host to ${{ CONSUL_HOST }} in bootstrap-development.yaml.
CONSUL_HOST is defined in docker-compose.yaml as CONSUL_HOST=consul.
new bootstrap-development.yaml:
consul:
host: ${{CONSUL_HOST}}
port: 8500
...
and rebuilt the container with : docker-compose --project-directory=. -f docker-compose.dev.yml up --build
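The fix relies on the ${{ ... }} placeholders in bootstrap-development.yaml being filled from the container's environment, so ${{CONSUL_HOST}} becomes "consul" inside docker-compose. A rough illustration of that substitution (nestcloud's real template engine may differ in detail):

```typescript
// Sketch of env-based placeholder expansion: replaces ${{ NAME }} tokens
// with values from the given environment map.
function expandPlaceholders(
  template: string,
  env: Record<string, string | undefined>,
): string {
  return template.replace(/\$\{\{\s*(\w+)\s*\}\}/g, (_, name) => env[name] ?? '');
}
```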

Hyperledger - MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority

I am currently working from the hyperledger fabric-samples. I've successfully run first-network and fabcar based on available tutorials. I'm now trying to combine the two to make a network with 3 peers in a single org and use the node sdk to query, etc. A repo of my current fabric-samples dir is available here. I've been able to use byfn.sh to build the network, enrollAdmin.js, and registerUser.js. When attempting to query or invoke I have run into this issue:
Store path:/home/victor/fabric-samples/first-network/hfc-key-store
Successfully loaded user1 from persistence
Assigning transaction_id: bc0240f672d075de2f84d50b292ed0e2214dacc0ef2888d0fa7e25d872a99b03
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: 2 UNKNOWN: access denied: channel [mychannel] creator org [Org1MSP]
at new createStatusError (/home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:64:15)
at /home/victor/fabric-samples/first-network/node_modules/grpc/src/client.js:583:15
HERE
Transaction proposal was bad
Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Failed to invoke successfully :: Error: Failed to send Proposal or receive valid response. Response null or status is not 200. exiting...
Using docker logs I looked at the logs for one of the peers and found this:
2018-04-24 19:05:09.370 UTC [msp] getMspConfig -> INFO 001 Loading NodeOUs
2018-04-24 19:05:09.392 UTC [nodeCmd] serve -> INFO 002 Starting peer:
Version: 1.1.0
Go version: go1.9.2
OS/Arch: linux/amd64
Experimental features: false
Chaincode:
Base Image Version: 0.4.6
Base Docker Namespace: hyperledger
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2018-04-24 19:05:09.392 UTC [ledgermgmt] initialize -> INFO 003 Initializing ledger mgmt
2018-04-24 19:05:09.393 UTC [kvledger] NewProvider -> INFO 004 Initializing ledger provider
2018-04-24 19:05:12.811 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 005 Created state database _users
2018-04-24 19:05:13.215 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 006 Created state database _replicator
2018-04-24 19:05:14.086 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 007 Created state database _global_changes
2018-04-24 19:05:14.433 UTC [kvledger] NewProvider -> INFO 008 ledger provider Initialized
2018-04-24 19:05:14.433 UTC [ledgermgmt] initialize -> INFO 009 ledger mgmt initialized
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00a Auto-detected peer address: 172.18.0.9:7051
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00b Returning peer0.org1.example.com:7051
2018-04-24 19:05:14.433 UTC [peer] func1 -> INFO 00c Auto-detected peer address: 172.18.0.9:7051
2018-04-24 19:05:14.434 UTC [peer] func1 -> INFO 00d Returning peer0.org1.example.com:7051
2018-04-24 19:05:14.435 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00e Entering computeChaincodeEndpoint with peerHostname: peer0.org1.example.com
2018-04-24 19:05:14.436 UTC [nodeCmd] computeChaincodeEndpoint -> INFO 00f Exit with ccEndpoint: peer0.org1.example.com:7052
2018-04-24 19:05:14.436 UTC [nodeCmd] createChaincodeServer -> WARN 010 peer.chaincodeListenAddress is not set, using peer0.org1.example.com:7052
2018-04-24 19:05:14.436 UTC [eventhub_producer] start -> INFO 011 Event processor started
2018-04-24 19:05:14.437 UTC [chaincode] NewChaincodeSupport -> INFO 012 Chaincode support using peerAddress: peer0.org1.example.com:7052
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 013 system chaincode cscc(github.com/hyperledger/fabric/core/scc/cscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 014 system chaincode lscc(github.com/hyperledger/fabric/core/scc/lscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 015 system chaincode escc(github.com/hyperledger/fabric/core/scc/escc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 016 system chaincode vscc(github.com/hyperledger/fabric/core/scc/vscc) registered
2018-04-24 19:05:14.438 UTC [sccapi] registerSysCC -> INFO 017 system chaincode qscc(github.com/hyperledger/fabric/core/chaincode/qscc) registered
2018-04-24 19:05:14.440 UTC [gossip/service] func1 -> INFO 018 Initialize gossip with endpoint peer0.org1.example.com:7051 and bootstrap set [peer1.org1.example.com:7051]
2018-04-24 19:05:14.442 UTC [msp] DeserializeIdentity -> INFO 019 Obtaining identity
2018-04-24 19:05:14.444 UTC [gossip/discovery] NewDiscoveryService -> INFO 01a Started {peer0.org1.example.com:7051 [] [98 55 107 77 184 123 189 240 183 227 157 211 146 161 226 74 43 48 67 169 32 99 66 147 109 71 222 49 249 172 59 136] peer0.org1.example.com:7051 <nil>} incTime is 1524596714444440316
2018-04-24 19:05:14.444 UTC [gossip/gossip] NewGossipService -> INFO 01b Creating gossip service with self membership of {peer0.org1.example.com:7051 [] [98 55 107 77 184 123 189 240 183 227 157 211 146 161 226 74 43 48 67 169 32 99 66 147 109 71 222 49 249 172 59 136] peer0.org1.example.com:7051 <nil>}
2018-04-24 19:05:14.447 UTC [gossip/gossip] start -> INFO 01c Gossip instance peer0.org1.example.com:7051 started
2018-04-24 19:05:14.449 UTC [cscc] Init -> INFO 01d Init CSCC
2018-04-24 19:05:14.449 UTC [sccapi] deploySysCC -> INFO 01e system chaincode cscc/(github.com/hyperledger/fabric/core/scc/cscc) deployed
2018-04-24 19:05:14.449 UTC [sccapi] deploySysCC -> INFO 01f system chaincode lscc/(github.com/hyperledger/fabric/core/scc/lscc) deployed
2018-04-24 19:05:14.450 UTC [escc] Init -> INFO 020 Successfully initialized ESCC
2018-04-24 19:05:14.450 UTC [sccapi] deploySysCC -> INFO 021 system chaincode escc/(github.com/hyperledger/fabric/core/scc/escc) deployed
2018-04-24 19:05:14.450 UTC [sccapi] deploySysCC -> INFO 022 system chaincode vscc/(github.com/hyperledger/fabric/core/scc/vscc) deployed
2018-04-24 19:05:14.451 UTC [qscc] Init -> INFO 023 Init QSCC
2018-04-24 19:05:14.451 UTC [sccapi] deploySysCC -> INFO 024 system chaincode qscc/(github.com/hyperledger/fabric/core/chaincode/qscc) deployed
2018-04-24 19:05:14.451 UTC [nodeCmd] initSysCCs -> INFO 025 Deployed system chaincodes
2018-04-24 19:05:14.451 UTC [nodeCmd] serve -> INFO 026 Starting peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
2018-04-24 19:05:14.452 UTC [nodeCmd] serve -> INFO 027 Started peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[peer0.org1.example.com:7051]
2018-04-24 19:05:14.452 UTC [nodeCmd] func7 -> INFO 028 Starting profiling server with listenAddress = 0.0.0.0:6060
2018-04-24 19:05:16.371 UTC [ledgermgmt] CreateLedger -> INFO 029 Creating ledger [mychannel] with genesis block
2018-04-24 19:05:16.409 UTC [fsblkstorage] newBlockfileMgr -> INFO 02a Getting block information from block storage
2018-04-24 19:05:16.757 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 02b Created state database mychannel_
2018-04-24 19:05:16.945 UTC [kvledger] CommitWithPvtData -> INFO 02c Channel [mychannel]: Committed block [0] with 1 transaction(s)
2018-04-24 19:05:17.557 UTC [ledgermgmt] CreateLedger -> INFO 02d Created ledger [mychannel] with genesis block
2018-04-24 19:05:17.626 UTC [cscc] Init -> INFO 02e Init CSCC
2018-04-24 19:05:17.627 UTC [sccapi] deploySysCC -> INFO 02f system chaincode cscc/mychannel(github.com/hyperledger/fabric/core/scc/cscc) deployed
2018-04-24 19:05:17.628 UTC [sccapi] deploySysCC -> INFO 030 system chaincode lscc/mychannel(github.com/hyperledger/fabric/core/scc/lscc) deployed
2018-04-24 19:05:17.628 UTC [escc] Init -> INFO 031 Successfully initialized ESCC
2018-04-24 19:05:17.628 UTC [sccapi] deploySysCC -> INFO 032 system chaincode escc/mychannel(github.com/hyperledger/fabric/core/scc/escc) deployed
2018-04-24 19:05:17.629 UTC [sccapi] deploySysCC -> INFO 033 system chaincode vscc/mychannel(github.com/hyperledger/fabric/core/scc/vscc) deployed
2018-04-24 19:05:17.629 UTC [qscc] Init -> INFO 034 Init QSCC
2018-04-24 19:05:17.629 UTC [sccapi] deploySysCC -> INFO 035 system chaincode qscc/mychannel(github.com/hyperledger/fabric/core/chaincode/qscc) deployed
2018-04-24 19:05:27.629 UTC [deliveryClient] try -> WARN 036 Got error: rpc error: code = Canceled desc = context canceled , at 1 attempt. Retrying in 1s
2018-04-24 19:05:27.629 UTC [blocksProvider] DeliverBlocks -> WARN 037 [mychannel] Receive error: Client is closing
2018-04-24 19:05:28.925 UTC [gossip/service] updateEndpoints -> WARN 038 Failed to update ordering service endpoints, due to Channel with mychannel id was not found
2018-04-24 19:05:29.302 UTC [kvledger] CommitWithPvtData -> INFO 039 Channel [mychannel]: Committed block [1] with 1 transaction(s)
2018-04-24 19:05:33.013 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 03a Created state database mychannel_lscc
2018-04-24 19:05:33.016 UTC [lscc] executeInstall -> INFO 03b Installed Chaincode [fabcar] Version [1.0] to peer
2018-04-24 19:05:34.803 UTC [golang-platform] GenerateDockerBuild -> INFO 03c building chaincode with ldflagsOpt: '-ldflags "-linkmode external -extldflags '-static'"'
2018-04-24 19:05:34.804 UTC [golang-platform] GenerateDockerBuild -> INFO 03d building chaincode with tags:
2018-04-24 19:06:06.351 UTC [cceventmgmt] HandleStateUpdates -> INFO 03e Channel [mychannel]: Handling LSCC state update for chaincode [fabcar]
2018-04-24 19:06:06.868 UTC [couchdb] CreateDatabaseIfNotExist -> INFO 03f Created state database mychannel_fabcar
2018-04-24 19:06:07.278 UTC [kvledger] CommitWithPvtData -> INFO 040 Channel [mychannel]: Committed block [2] with 1 transaction(s)
2018-04-24 19:06:15.476 UTC [protoutils] ValidateProposalMessage -> WARN 041 channel [mychannel]: MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority
2018-04-24 19:06:15.689 UTC [protoutils] ValidateProposalMessage -> WARN 042 channel [mychannel]: MSP error: the supplied identity is not valid: x509: certificate signed by unknown authority
These errors lead me to believe I have the certificates configured incorrectly, but searching for information on the issue hasn't been fruitful so far. How can I locate the source of this error? I'll post my docker-compose-cli.yaml here:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
volumes:
orderer.example.com:
ca.example.com:
peer0.org1.example.com:
peer1.org1.example.com:
peer2.org1.example.com:
networks:
byfn:
services:
ca.example.com:
image: hyperledger/fabric-ca:x86_64-1.1.0
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca.example.com
- FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.example.com-cert.pem
- FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/4239aa0dcd76daeeb8ba0cda701851d14504d31aad1b2ddddbac6a57365e497c_sk
ports:
- "7054:7054"
command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
volumes:
- ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
container_name: ca.example.com
networks:
- byfn
orderer.example.com:
extends:
file: base/docker-compose-base.yaml
service: orderer.example.com
container_name: orderer.example.com
networks:
- byfn
peer0.org1.example.com:
container_name: peer0.org1.example.com
extends:
file: base/docker-compose-base.yaml
service: peer0.org1.example.com
depends_on:
- orderer.example.com
- couchdb0
networks:
- byfn
peer1.org1.example.com:
container_name: peer1.org1.example.com
extends:
file: base/docker-compose-base.yaml
service: peer1.org1.example.com
depends_on:
- orderer.example.com
- couchdb1
networks:
- byfn
peer2.org1.example.com:
container_name: peer2.org1.example.com
extends:
file: base/docker-compose-base.yaml
service: peer2.org1.example.com
depends_on:
- orderer.example.com
- couchdb2
networks:
- byfn
couchdb0:
container_name: couchdb0
image: hyperledger/fabric-couchdb
# Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
# for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 5984:5984
networks:
- byfn
couchdb1:
container_name: couchdb1
image: hyperledger/fabric-couchdb
# Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
# for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 6984:5984
networks:
- byfn
couchdb2:
container_name: couchdb2
image: hyperledger/fabric-couchdb
# Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password
# for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode.
environment:
- COUCHDB_USER=
- COUCHDB_PASSWORD=
ports:
- 7984:5984
networks:
- byfn
cli:
container_name: cli
image: hyperledger/fabric-tools:$IMAGE_TAG
tty: true
stdin_open: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
#- CORE_LOGGING_LEVEL=DEBUG
- CORE_LOGGING_LEVEL=INFO
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: /bin/bash
volumes:
- /var/run/:/host/var/run/
- ./../chaincode/:/opt/gopath/src/github.com/chaincode
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- orderer.example.com
- peer0.org1.example.com
- peer1.org1.example.com
- peer2.org1.example.com
networks:
- byfn
Your SDK is getting its certificate from a CA that is not configured properly.
Suggestions:
1. Check that your CA server is started with the correct .pem certificate file.
2. Check that the _sk private key file is the correct one.
If you are using cryptogen, both files are inside the corresponding organisation's folder; point the CA bootstrap at the correct files and it will work fine.
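Concretely, every time cryptogen regenerates the crypto material, the private key file name under ca/ changes, so the hard-coded ..._sk value in docker-compose-cli.yaml must be updated to match. A sketch of the relevant fragment (`<NEW_KEY>_sk` is a placeholder you would replace with the actual file name in your own tree):

```yaml
ca.example.com:
  environment:
    - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.example.com-cert.pem
    # Replace <NEW_KEY>_sk with the file name actually present under
    # crypto-config/peerOrganizations/org1.example.com/ca/
    - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/<NEW_KEY>_sk
```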
You must go into the container that is complaining about the certificates, open a terminal there, and add the CA's certificate to the system's trusted CA store.
For example, on Ubuntu:
Go to /usr/local/share/ca-certificates/
Create a new folder, e.g. "sudo mkdir HyperledgerCerts"
Copy the .crt file into the "HyperledgerCerts" folder
Make sure the permissions are OK (755 for the folder, 644 for the file)
Run "sudo update-ca-certificates"
This should solve the problem.
Hope this helps
Make sure that FABRIC_CA_CLIENT_HOME is set to the correct directory, especially when using fabric-ca-client outside docker containers.
For example, before calling fabric-ca-client register or fabric-ca-client enroll, you should set
export FABRIC_CA_CLIENT_HOME=/path/to/organizations/peerOrganizations/org1.example.com/