I wonder if any Helm gurus can shed some light / point me in the right direction...
I'm testing a PoC where we're using Moleculer to build an app with a few services linked in the back end (ticket, notification). We're using NATS as the transport layer and have managed to get our services to talk to each other when running the environment either with docker-compose or with a simple k8s manifest we run on minikube.
Now I'm trying to bring this to our k8s cluster using Helm, and I'm struggling to get NATS to talk to the services. I've tried setting the transport env var on the services to nats://nats:4222 and have advertised 4222 as the container port on the NATS chart.
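For reference, a minimal sketch of the wiring described above (the Service definition and env var name are illustrative, not the actual chart values):

# Service so that clients can resolve nats://nats:4222
apiVersion: v1
kind: Service
metadata:
  name: nats
spec:
  selector:
    app: nats
  ports:
    - name: client
      port: 4222        # NATS client protocol port
      targetPort: 4222

# ...and on each consuming service's container:
env:
  - name: TRANSPORTER   # illustrative name for the transport env var
    value: "nats://nats:4222"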
However, once I run my helm upgrade --install and check the NATS pod status, it is failing with the following output:
[1] 2021/03/19 14:12:21.843799 [INF] STREAM: Streaming Server is ready
[1] 2021/03/19 14:12:23.569211 [ERR] 127.0.0.1:42594 - cid:5 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:32.679854 [ERR] 127.0.0.1:42800 - cid:6 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:33.568306 [ERR] 127.0.0.1:42830 - cid:7 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:42.679902 [ERR] 127.0.0.1:43040 - cid:8 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:43.568204 [ERR] 127.0.0.1:43066 - cid:9 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:52.680184 [ERR] 127.0.0.1:43270 - cid:10 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:53.570613 [ERR] 127.0.0.1:43288 - cid:11 - Client parser ERROR, state=0, i=0: proto='"GET / HTTP/1.1\r\nhost: 10.202.211"...'
[1] 2021/03/19 14:12:55.655059 [INF] STREAM: Shutting down.
[1] 2021/03/19 14:12:55.655252 [INF] Initiating Shutdown...
[1] 2021/03/19 14:12:55.655536 [INF] Server Exiting..
stream closed
Not sure what I'm missing. Should I be exposing the NATS address via an Ingress?
Any guidance greatly appreciated
k8s version: client v1.17.2, server v1.16.13
Helm version: v3.1.1
Image: nats-streaming:latest (I'm not using the full functionality of streaming, so I could downgrade to regular NATS if that's easier)
If you already have a working Docker Compose YAML, I recommend using the Kompose tool to convert it to a Helm chart with the command below.
Documentation link: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Command:
kompose convert -c
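For example, assuming the Compose file is named docker-compose.yaml (the release name and generated chart directory below are illustrative):

# -f points at the Compose file; -c emits a Helm chart instead of raw manifests
kompose convert -f docker-compose.yaml -c
# install the generated chart (the directory name depends on your project)
helm install my-app ./my-chart-dir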
Related
I am trying to set up a sharded MongoDB deployment using the Bitnami Helm chart. I just tried to set the rootPassword, but it seems the server no longer starts. In the config server logs, I see:
{"t":{"$date":"2022-10-23T10:44:04.360+00:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"mongo-mongodb-sharded-shard-0","host":"mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)"},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":"mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017","success":false,"errorMessage":"HostUnreachable: Error connecting to mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-mongodb-sharded-shard0-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)"}}}}
{"t":{"$date":"2022-10-23T10:44:04.361+00:00"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received error response","attr":{"host":"mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017","error":"HostUnreachable: Error connecting to mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)","replicaSet":"mongo-mongodb-sharded-shard-1","response":{}}}
{"t":{"$date":"2022-10-23T10:44:04.361+00:00"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"mongo-mongodb-sharded-shard-1","host":"mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)"},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":"mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017","success":false,"errorMessage":"HostUnreachable: Error connecting to mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017 :: caused by :: Could not find address for mongo-mongodb-sharded-shard1-data-0.mongo-mongodb-sharded-headless.default.svc.cluster.local:27017: SocketException: Host not found (authoritative)"}}}}
Then I try to look at the shards:
mongodb 10:44:47.02 INFO ==> Trying to connect to MongoDB server mongo-mongodb-sharded...
timeout reached before the port went into state "inuse"
And the sharded mongos logs:
mongodb 10:44:42.82 INFO ==> Trying to connect to MongoDB server mongo-mongodb-sharded-configsvr-0.mongo-mongodb-sharded-headless.default.svc.cluster.lo
mongodb 10:44:42.83 INFO ==> Found MongoDB server listening at mongo-mongodb-sharded-configsvr-0.mongo-mongodb-sharded-headless.default.svc.cluster.loca
MongoServerError: Authentication failed.
MongoServerError: Authentication failed.
Seems like the sharded mongos is down because there's an auth error? But why?
My Chart.yaml
apiVersion: v2
name: experiment-mongo-sharding
description:
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mongodb-sharded
    version: 6.0.2
    repository: https://charts.bitnami.com/bitnami
values.yaml
mongodb-sharded:
  auth:
    rootUser: root
    rootPassword: password
    replicaSetKey: somesecret
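For completeness, with a subchart declared in Chart.yaml like this, a typical install sequence looks as follows (the release name is illustrative):

# fetch the declared subchart into charts/ before installing
helm dependency update .
helm upgrade --install mongo .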
What's wrong here?
Also, once it's up, how do I connect to it?
I am trying to move away from running the Cloud SQL proxy as a sidecar and run it as a separate service instead. I followed all the steps mentioned here: https://github.com/GoogleCloudPlatform/cloud-sql-proxy/tree/main/examples/k8s-service. After deploying to the Kubernetes cluster, I am getting the error below:
Logs (pygeno/pgbouncer-xxxxxxx-xxxxx:pgbouncer):
pgbouncer 04:58:23.60
pgbouncer 04:58:23.61 Welcome to the Bitnami pgbouncer container
pgbouncer 04:58:23.61 Subscribe to project updates by watching https://github.com/bitnami/containers
pgbouncer 04:58:23.62 Submit issues and feature requests at https://github.com/bitnami/containers/issues
pgbouncer 04:58:23.62
pgbouncer 04:58:23.65 INFO ==> ** Starting PgBouncer setup **
pgbouncer 04:58:23.67 INFO ==> Validating settings in PGBOUNCER_* env vars...
pgbouncer 04:58:23.69 INFO ==> Initializing PgBouncer...
pgbouncer 04:58:23.74 INFO ==> Waiting for PostgreSQL backend to be accessible
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
cannot resolve host "postgresql": lookup postgresql on xx.0.xx.xx:xx: no such host
I got stuck here and was not able to move further.
I resolved it by adding the environment variable POSTGRESQL_HOST and assigning it the value 127.0.0.1.
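For reference, a minimal sketch of that setting on the pgbouncer container (the container layout and image tag are illustrative; 127.0.0.1 works because the Cloud SQL proxy is reachable on localhost from PgBouncer):

containers:
  - name: pgbouncer
    image: bitnami/pgbouncer:latest   # illustrative tag
    env:
      - name: POSTGRESQL_HOST         # the host the Bitnami image waits for
        value: "127.0.0.1"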
After deploying Strapi to Google App Engine, I can see it running from the following log messages:
2022-09-28 04:38:51 default[20220928t143613] ┌────────────────────┬──────────────────────────────────────────────────┐
2022-09-28 04:38:51 default[20220928t143613] │ Time │ Wed Sep 28 2022 04:38:51 GMT+0000 (Coordinated … │
2022-09-28 04:38:51 default[20220928t143613] │ Launched in │ 4328 ms │
2022-09-28 04:38:51 default[20220928t143613] │ Environment │ production │
2022-09-28 04:38:51 default[20220928t143613] │ Process PID │ 11 │
2022-09-28 04:38:51 default[20220928t143613] │ Version │ 4.3.9 (node v16.17.0) │
2022-09-28 04:38:51 default[20220928t143613] │ Edition │ Community │
2022-09-28 04:38:51 default[20220928t143613] └────────────────────┴──────────────────────────────────────────────────┘
2022-09-28 04:38:51 default[20220928t143613] Actions available
2022-09-28 04:38:51 default[20220928t143613] Welcome back!
2022-09-28 04:38:51 default[20220928t143613] To manage your project 🚀, go to the administration panel at:
2022-09-28 04:38:51 default[20220928t143613] http://0.0.0.0:8081/admin
2022-09-28 04:38:51 default[20220928t143613] To access the server ⚡️, go to:
2022-09-28 04:38:51 default[20220928t143613] http://0.0.0.0:8081
I also have Cron Jobs running successfully and appearing in the log messages:
2022-09-28 05:08:10 default[20220928t150116] CRON_JOB_PRODUCT_IMPORT_START
2022-09-28 05:08:10 default[20220928t150116] CRON_JOB_PRODUCT_IMPORT_END
After the deployment, I attempt to launch the website using gcloud app browse, and the request eventually times out with a 404 Not Found. I see no activity in the logs for the requests to that URL.
Other useful information:
app.yaml
runtime: nodejs16
instance_class: B1
basic_scaling:
  idle_timeout: 5m
  max_instances: 1
build_env_variables:
  API_HOST: project-id.uc.r.appspot.com
  NODE_ENV: 'production'
env_variables:
  ADMIN_JWT_SECRET: 'XXXXXXXXXXXXX'
  API_TOKEN_SALT: 'XXXXXXXXXXXXX'
  APP_KEYS: 'XXXXXXXXXXXXX'
  DATABASE_HOST: '/cloudsql/project-id:us-central1:database-id'
  DATABASE_PORT: '5432'
  DATABASE_NAME: 'database-id'
  DATABASE_USERNAME: 'postgres'
  DATABASE_PASSWORD: 'XXXXXXXXXXXXX'
  DATABASE_SSL: 'false'
  HOST: '0.0.0.0'
  JWT_SECRET: 'XXXXXXXXXXXXX'
  NODE_ENV: 'production'
  PORT: '1337'
beta_settings:
  cloud_sql_instances: 'project-id:us-central1:database-id'
Dockerfile
FROM node:16
# Installing libvips-dev for sharp compatibility
RUN apt-get update && apt-get install libvips-dev -y
# Node Environment (development/production)
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
RUN echo "NODE_ENV: $NODE_ENV"
ARG API_HOST=todo
ENV API_HOST=${API_HOST}
RUN echo "API_HOST: $API_HOST"
# Root directory
WORKDIR /opt/
COPY ./ .
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn config set network-timeout 600000 -g && yarn ci
WORKDIR /opt/app
RUN yarn build
EXPOSE 1337
CMD ["yarn", "start"]
I can see the service starting multiple times in the logs, even though max_instances is set to 1. Is there a /health status check that could be failing?
I got to the end of writing this question and noticed something in the logs:
No entrypoint specified, using default entrypoint: /serve
This led me to someone's comment about exposing their Node.js application on port 8080. I opened the application on port 8080 and now Strapi is loading.
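For reference, that observation maps to a one-line change in app.yaml. App Engine's default entrypoint serves traffic on port 8080, so this is a sketch assuming Strapi reads the PORT override from env_variables:

env_variables:
  PORT: '8080'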
Kinda lost here. I have a single-node minikube instance running Kubernetes v1.23.8 with Docker as the driver. I also use Pulumi to deploy code-based infrastructure.
I started to see erratic errors when deploying different resources, like "etcdserver: request timed out" and "timed out waiting to be Ready".
I tried to debug the api-server in minikube to see if I could get more info, and got errors like:
│ {"level":"warn","ts":"2022-08-09T14:39:44.235Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.000487575s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"} │
│ WARNING: 2022/08/09 14:39:44 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" │
│ {"level":"warn","ts":"2022-08-09T14:39:46.244Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.000336072s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
I'm completely lost on how to move forward troubleshooting this.
This Adonis application fails to compile frontend assets while running inside a Docker container. The app was scaffolded using the yarn utility container defined in the docker-compose.yaml file, by running docker compose run --rm yarn create adonis-ts-app app and selecting true for the ESLint, Prettier, and Encore choices.
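For context, a minimal sketch of what that yarn utility container setup looks like in docker-compose.yaml (this is an assumption about the layout, not the exact file from the repository):

services:
  app:
    image: node:16-alpine
    working_dir: /home/node/app
    command: yarn run dev
    volumes:
      - ./:/home/node/app      # bind mount of the project source
    ports:
      - "3333:3333"
  yarn:
    image: node:16-alpine
    working_dir: /home/node/app
    entrypoint: ["yarn"]       # so `docker compose run --rm yarn <args>` works
    volumes:
      - ./:/home/node/app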
% What Actually Happens %
The application fails to compile assets. No sign of error from the running container:
$ docker compose logs app
docker-bind-mount-issue-app-1 | yarn run v1.22.17
docker-bind-mount-issue-app-1 | $ node ace serve --watch
docker-bind-mount-issue-app-1 | [ info ] building project...
docker-bind-mount-issue-app-1 | [ info ] starting http server...
docker-bind-mount-issue-app-1 | [ encore ] Running webpack-dev-server ...
docker-bind-mount-issue-app-1 | [ info ] watching file system for changes
docker-bind-mount-issue-app-1 | [1645110239969] INFO (app/39 on b105e817d618): started server on 0.0.0.0:3333
docker-bind-mount-issue-app-1 | ╭────────────────────────────────────────────────────────╮
docker-bind-mount-issue-app-1 | │ │
docker-bind-mount-issue-app-1 | │ Server address: http://127.0.0.1:3333 │
docker-bind-mount-issue-app-1 | │ Watching filesystem for changes: YES │
docker-bind-mount-issue-app-1 | │ Encore server address: http://localhost:8080 │
docker-bind-mount-issue-app-1 | │ │
docker-bind-mount-issue-app-1 | ╰────────────────────────────────────────────────────────╯
docker-bind-mount-issue-app-1 | [ encore ] DONE Compiled successfully in 14380ms3:04:08 PM
docker-bind-mount-issue-app-1 | [ encore ] webpack compiled successfully
% What I Expect %
Assets should be compiled by Webpack Encore and I should get a confirmation like the following:
yarn run v1.22.15
$ node ace serve --watch
[ info ] building project...
[ info ] starting http server...
[ encore ] Running webpack-dev-server ...
[ info ] watching file system for changes
[1645110644372] INFO (app/9208 on msrumon): started server on 0.0.0.0:3333
╭────────────────────────────────────────────────────────╮
│ │
│ Server address: http://127.0.0.1:3333 │
│ Watching filesystem for changes: YES │
│ Encore server address: http://localhost:8080 │
│ │
╰────────────────────────────────────────────────────────╯
UPDATE: public\assets\manifest.json
UPDATE: public\assets\entrypoints.json
[ encore ] DONE Compiled successfully in 1449ms9:10:46 PM
[ encore ] webpack compiled successfully
% Reproduction Steps %
Clone the above repository.
Install dependencies by running docker compose run --rm yarn install.
Start the development server by running docker compose up --detach app.
Browse http://localhost:3333 from any browser.
% Extra Note %
When I install the dependencies using the yarn utility container (docker compose run --rm yarn install) and then start the server directly from the host machine (yarn run dev), I get an 'encore' is not recognized as an internal or external command, operable program or batch file. error:
$ docker compose run --rm yarn install
[+] Running 1/0
- Network docker-bind-mount-issue_default Created 0.0s
yarn install v1.22.17
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 159.92s.
$ yarn run dev
yarn run v1.22.15
$ node ace serve --watch
[ info ] building project...
[ info ] starting http server...
[ encore ] 'encore' is not recognized as an internal or external command,
operable program or batch file.
[ warn ] Underlying encore dev server died with "1 code"
[1645111433640] INFO (app/27364 on msrumon): started server on 0.0.0.0:3333
[ info ] watching file system for changes
╭────────────────────────────────────────────────────────╮
│ │
│ Server address: http://127.0.0.1:3333 │
│ Watching filesystem for changes: YES │
│ Encore server address: http://localhost:8080 │
│ │
╰────────────────────────────────────────────────────────╯
Apparently node_modules/.bin was empty, but it wasn't when I ran yarn install directly on the host machine. So, out of curiosity, I attempted to replicate the exact same issue with an Express app and the nodemon binary, to see whether that directory would stay empty or not. I found that it wasn't empty, so I couldn't figure out where the problem was. I'd appreciate it if anybody could help me.
I tried multiple combinations of different solutions, and so far this seems to be the most consistent "workaround" I've got. I'm still open to solutions.