I'm using the mongox fork on my Elixir server.
It used to work well until today, when I started getting the error below:
GenServer #PID<0.23055.0> terminating
** (stop) exited in: GenServer.call(Mongo.PBKDF2Cache, {"a9f40827e764c2e9d74318e934596194", <<88, 76, 231, 25, 765, 153, 136, 68, 54, 109, 131, 126, 543, 654, 201, 250>>, 10000}, 5000)
** (EXIT) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
(elixir) lib/gen_server.ex:766: GenServer.call/3
(mongox) lib/mongo/connection/auth/scram.ex:66: Mongo.Connection.Auth.SCRAM.second_message/5
(mongox) lib/mongo/connection/auth/scram.ex:25: Mongo.Connection.Auth.SCRAM.conversation_first/6
(mongox) lib/mongo/connection/auth.ex:29: anonymous fn/3 in Mongo.Connection.Auth.run/1
(elixir) lib/enum.ex:2914: Enum.find_value_list/3
(mongox) lib/mongo/connection/auth.ex:28: Mongo.Connection.Auth.run/1
(mongox) lib/mongo/connection.ex:206: Mongo.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
Last message: nil
State: %{auth: [{"my-user", "mypassword"}], database: "my-db", opts: [backoff: 1000, hosts: [{"xxxxxx.mlab.com", "17060"}], port: 17060, hostname: 'xxxxxx.mlab.com', name: MongoPool, max_overflow: 500], queue: %{}, request_id: 0, socket: nil, tail: nil, timeout: 5000, wire_version: nil, write_concern: [w: 1]}
After digging into the code, I figured out that it fails on this line (in the dependency):
https://github.com/work-capital/mongox/blob/feature/nan_type_support/lib/mongo/connection/auth/scram.ex#L66
when it tries to call pbkdf2, which makes a GenServer call:
def pbkdf2(password, salt, iterations) do
  GenServer.call(@name, {password, salt, iterations})
end
Is this an error connecting to the Mongo instance (which is on mLab), or is it an issue with the code?
Here are my configs:
config.exs:
config :mongo_app, MongoApp,
  host: "xxxx.mlab.com",
  database: "my-db",
  username: "my-user",
  password: "mypass**",
  port: "17060",
  pool_size: "100",
  max_overflow: "500"
mix.exs:
def application do
  [extra_applications: [:logger, :poolboy],
   mod: {MongoApp.Application, []}]
end

defp deps do
  [{:mongox, git: "https://github.com/work-capital/mongox.git", branch: "feature/nan_type_support"},
   {:poolboy, "~> 1.5"}]
end
application.ex:
defmodule MongoApp.Application do
  use Application

  def start(_type, _args) do
    MongoApp.connect

    import Supervisor.Spec, warn: false

    children = []

    opts = [strategy: :one_for_one, name: MongoApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
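Since the error says the Mongo.PBKDF2Cache process is not alive "possibly because its application isn't started", one thing worth checking is whether the driver application (which supervises that cache) is actually running before MongoApp.connect is called. A minimal sketch, assuming the fork's OTP application name is :mongox (confirm it against the app: entry in the fork's mix.exs):

defmodule MongoApp.Application do
  use Application

  def start(_type, _args) do
    # Assumption: the driver's OTP app is :mongox. This starts it (and its
    # supervisor, which owns Mongo.PBKDF2Cache) if it isn't already running.
    {:ok, _apps} = Application.ensure_all_started(:mongox)

    MongoApp.connect

    import Supervisor.Spec, warn: false
    children = []

    opts = [strategy: :one_for_one, name: MongoApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

If that match fails, the driver application (or one of its dependencies) isn't booting, which would explain the missing Mongo.PBKDF2Cache process rather than a problem reaching mLab.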
Related
I have a custom-made Docker image for the backend of my app, and a YAML file that runs my app image and a Mongo image. However, when I use docker-compose on the YAML file, I get the following error (about 20 seconds after the containers start running):
(node:33) [MONGOOSE] DeprecationWarning: Mongoose: the `strictQuery` option will be switched back to `false` by default in Mongoose 7. Use `mongoose.set('strictQuery', false);` if you want to prepare for this change. Or use `mongoose.set('strictQuery', true);` to suppress this warning.
(Use `node --trace-deprecation ...` to show where the warning was created)
Server listening on port 3000
/cloudband/node_modules/mongoose/lib/connection.js:825
const serverSelectionError = new ServerSelectionError();
^
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
at Connection.openUri (/cloudband/node_modules/mongoose/lib/connection.js:825:32)
at /cloudband/node_modules/mongoose/lib/index.js:409:10
at /cloudband/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5
at new Promise (<anonymous>)
at promiseOrCallback (/cloudband/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)
at Mongoose._promiseOrCallback (/cloudband/node_modules/mongoose/lib/index.js:1262:10)
at Mongoose.connect (/cloudband/node_modules/mongoose/lib/index.js:408:20)
at Object.<anonymous> (/cloudband/server/server.js:15:4)
at Module._compile (node:internal/modules/cjs/loader:1239:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1293:10) {
reason: TopologyDescription {
type: 'Unknown',
servers: Map(1) {
'localhost:27017' => ServerDescription {
address: 'localhost:27017',
type: 'Unknown',
hosts: [],
passives: [],
arbiters: [],
tags: {},
minWireVersion: 0,
maxWireVersion: 0,
roundTripTime: -1,
lastUpdateTime: 28094812,
lastWriteDate: 0,
error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
at connectionFailureError (/cloudband/node_modules/mongodb/lib/cmap/connect.js:387:20)
at Socket.<anonymous> (/cloudband/node_modules/mongodb/lib/cmap/connect.js:310:22)
at Object.onceWrapper (node:events:628:26)
at Socket.emit (node:events:513:28)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
cause: Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 27017
},
[Symbol(errorLabels)]: Set(1) { 'ResetPool' }
},
topologyVersion: null,
setName: null,
setVersion: null,
electionId: null,
logicalSessionTimeoutMinutes: null,
primary: null,
me: null,
'$clusterTime': null
}
},
stale: false,
compatible: true,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
setName: null,
maxElectionId: null,
maxSetVersion: null,
commonWireVersion: 0,
logicalSessionTimeoutMinutes: null
},
code: undefined
}
Here are my files:
Dockerfile:
FROM node:19.4.0
WORKDIR /cloudband
COPY package.json /cloudband/
COPY package-lock.json /cloudband/
RUN npm ci
COPY .env /cloudband/
COPY server /cloudband/server/
EXPOSE 3000
CMD ["npm", "run", "dev:server"]
YAML file:
version: '3'
services:
  mongo:
    image: mongo
    container_name: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  cloudband:
    image: cloudband
    container_name: cloudband
    ports:
      - 3000:3000
    command: npm run dev:server
networks:
  app:
I expected my application and Mongo DB to start running in their respective containers and to be able to communicate (i.e. create documents, find documents, etc.).
What I have already tried:
- Making sure they are in the same network (they are)
- Making sure they can ping each other (they can)
- Adding links to my app in the YAML file
- Checking the configuration, which I think is OK (port, host, IP)
- Switching my URI to the following values:
# MONGO_URI_=mongodb://admin:password@localhost:27017/dbname
MONGO_URI_=mongodb://localhost:27017/dbname
# MONGO_URI_=mongodb://127.0.0.1:27017/dbname
Things to consider:
Node v18.12.0 is installed on my computer.
In a container, localhost means the container itself.
Docker Compose creates a Docker network in which the containers can talk to each other using their service names or container names as host names.
So, instead of
MONGO_URI_=mongodb://localhost:27017/dbname
you need to use
MONGO_URI_=mongodb://mongo:27017/dbname
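If you prefer to set the URI in the compose file rather than in the .env that the Dockerfile copies, the relevant part might look like the sketch below (MONGO_URI_ is the variable name from the question; depends_on only controls start order, so the app should still retry until Mongo is ready):

services:
  cloudband:
    image: cloudband
    container_name: cloudband
    ports:
      - 3000:3000
    depends_on:
      - mongo                                     # start the mongo service first
    environment:
      - MONGO_URI_=mongodb://mongo:27017/dbname   # service name "mongo", not localhost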
I'm having trouble getting my code to work in Docker. Could you please help me?
I'm running my application in one Docker container together with Mongo in another, but when I run the script inside Docker it doesn't connect to the Mongo container and fails with a timeout.
My docker-compose file:
version: "3.4"
services:
  mongo_db:
    image: mongo:6.0
    ports:
      - "27017:27017"
    volumes:
      - ./mongo_db:/data/db
    container_name: mongo_db
  mongo_app:
    image: mongo_img_db:latest
    links:
      - mongo_db
    command: python3 /app/main.py
    container_name: mongo_app
Error:
### Connection: Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'app')
Traceback (most recent call last):
File "/app/main.py", line 216, in <module>
db_calc.update_storage_stats(db_calc.calculate_artifacts_size())
File "/app/main.py", line 43, in calculate_artifacts_size
artifacts_size = self.db_client.builds.aggregate(
File "/usr/local/lib/python3.9/site-packages/pymongo/collection.py", line 2428, in aggregate
with self.__database.client._tmp_session(session, close=False) as s:
File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1757, in _tmp_session
s = self._ensure_session(session)
File "/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1740, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "/usr/local/lib/python3.9/site-packages/pymongo/mongo_client.py", line 1685, in __start_session
self._topology._check_implicit_session_support()
File "/usr/local/lib/python3.9/site-packages/pymongo/topology.py", line 538, in _check_implicit_session_support
self._check_session_support()
File "/usr/local/lib/python3.9/site-packages/pymongo/topology.py", line 554, in _check_session_support
self._select_servers_loop(
File "/usr/local/lib/python3.9/site-packages/pymongo/topology.py", line 238, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 63542415a868649d12f2a966, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection refused')>]>
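The client inside mongo_app is pointed at localhost:27017, which inside that container is the container itself, not the mongo_db service. A minimal pymongo sketch of connecting via the compose service name instead (the database name "app" is taken from the Connection line in the log above):

from pymongo import MongoClient

# "mongo_db" is the compose service/container name from the file above.
client = MongoClient("mongodb://mongo_db:27017/")
db = client["app"]

# Quick connectivity check; raises ServerSelectionTimeoutError if unreachable.
print(db.list_collection_names())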
I have a Redis cluster deployed in Kubernetes with the bitnami/redis-cluster chart. I connect to the cluster from Node.js using the redis-clustr npm package. After the connection is made, I try to set thousands of keys continuously; the whole process is expected to take about ten minutes, but after roughly two minutes the Redis connection is lost.
If it is a connectivity problem, what do I have to do? A change in config, or something else?
What could be the exact reason for the connection loss, in my scenario or in common scenarios?
My code:
const redis = require('redis');
const redisCluster = require('redis-clustr');

const redisClient = new redisCluster({
  servers: [
    { host: "IP", port: 6379 },
    { host: "IP", port: 6379 },
    { host: "IP", port: 6379 },
    { host: "IP", port: 6379 },
    { host: "IP", port: 6379 },
    { host: "IP", port: 6379 }
  ],
  createClient: (port, host) => {
    return redis.createClient({
      port,
      host,
      auth_pass: 'PASSWORD',
    });
  }
});
<My data processing code to set thousands of keys>
(node:15827) UnhandledPromiseRejectionWarning: AbortError: Redis connection lost and command aborted. It might have been processed.
at RedisClient.flush_and_error (/root/data-pipeline/node_modules/redis/index.js:362:23)
at RedisClient.connection_gone (/root/data-pipeline/node_modules/redis/index.js:664:14)
at Socket.<anonymous> (/root/data-pipeline/node_modules/redis/index.js:293:14)
at Object.onceWrapper (events.js:313:30)
at emitNone (events.js:111:20)
at Socket.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
(node:15827) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:15827) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
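For reference, a sketch of how the underlying clients created in createClient could be given an explicit error handler and a retry strategy (node_redis 2.x options; host and password are placeholders as in the question). This won't by itself stop in-flight commands from being aborted when the socket drops, but it makes connection errors visible instead of only surfacing as an unhandled rejection, and it keeps reconnecting:

const redis = require('redis');
const redisCluster = require('redis-clustr');

const redisClient = new redisCluster({
  servers: [{ host: 'IP', port: 6379 }],   // placeholder, as above
  createClient: (port, host) => {
    const client = redis.createClient({
      port,
      host,
      auth_pass: 'PASSWORD',               // placeholder
      // Keep reconnecting with a short backoff (return value = ms until the next attempt).
      retry_strategy: (options) => Math.min(options.attempt * 100, 3000),
    });
    // node_redis emits 'error' on connection problems; logging it here makes the
    // disconnect visible rather than only showing up as an aborted command.
    client.on('error', (err) => console.error('Redis error:', err));
    return client;
  }
});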
I'm trying to build a Phoenix application with Distillery; however, when I try to start the compiled application I keep getting an error about not being able to find the database.
17:44:50.925 [error] GenServer #PID<0.1276.0> terminating
** (KeyError) key :database not found in: [hostname: "localhost", username: "martinffx", types: Ecto.Adapters.Postgres.TypeModule, port: 5432, name: Glitchr.Repo.Pool, otp_app: :glitchr, repo: Glitchr.Repo, adapter: Ecto.Adapters.Postgres, pool_size: 10, ssl: true, pool_timeout: 5000, timeout: 15000, adapter: Ecto.Adapters.Postgres, pool_size: 10, ssl: true, pool: DBConnection.Poolboy]
(elixir) lib/keyword.ex:343: Keyword.fetch!/2
(postgrex) lib/postgrex/protocol.ex:76: Postgrex.Protocol.connect/1
(db_connection) lib/db_connection/connection.ex:134: DBConnection.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Last message: nil
Now, the details above aren't what I've provided in prod.exs:
config :glitchr, Glitchr.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "localhost", port: {:system, "PORT"}],
  cache_static_manifest: "priv/static/manifest.json",
  server: true,
  root: ".",
  version: Mix.Project.config[:version],
  secret_key_base: System.get_env("SECRET_KEY_BASE")

config :glitchr, Glitchr.Repo,
  adapter: Ecto.Adapters.Postgres,
  url: System.get_env("DATABASE_URL"),
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
  ssl: true
where my database URL is DATABASE_URL=postgres://glitchr:password@db:5433/glitchr.
Why is that? How can I debug what I'm not getting right?
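One thing that might be relevant: with Distillery, the Mix config files are evaluated when the release is built, so System.get_env("DATABASE_URL") is read on the build machine, not when the release starts. A sketch of deferring the read to runtime, assuming an Ecto 2.x-era repo config that accepts the {:system, ...} tuple (the same form the Endpoint config above already uses for PORT):

config :glitchr, Glitchr.Repo,
  adapter: Ecto.Adapters.Postgres,
  # Read DATABASE_URL when the release boots rather than at build time.
  url: {:system, "DATABASE_URL"},
  # Note: POOL_SIZE here is still read at build time.
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
  ssl: true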
I have a task in my Ansible playbook to open a TCP port on a remote machine, but when I run the playbook it throws an error. Yet when I run "firewall-cmd --permanent --zone=public --add-port=1234/tcp" and "firewall-cmd --reload" manually, I can see the port is added to the public zone.
Environment
Ansible local: OS x El Capitan
Ansible remote: AWS Centos 7 minimum version
Ansible version: 2.1.1.0
Remote python version: 2.7.5
My task
- name: open management console port
firewalld: port=1234/tcp zone=public permanent=true state=enabled immediate=yes
The error I am getting
fatal: [X.X.X.X]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_MojhHQ/ansible_module_firewalld.py\", line 605, in <module>\r\n main()\r\n File \"/tmp/ansible_MojhHQ/ansible_module_firewalld.py\", line 456, in main\r\n is_enabled = get_port_enabled_permanent(zone, [port, protocol])\r\n File \"/tmp/ansible_MojhHQ/ansible_module_firewalld.py\", line 170, in get_port_enabled_permanent\r\n fw_zone = fw.config().getZoneByName(zone)\r\n File \"<string>\", line 2, in getZoneByName\r\n File \"/usr/lib/python2.7/site-packages/slip/dbus/polkit.py\", line 103, in _enable_proxy\r\n return func(*p, **k)\r\n File \"<string>\", line 2, in getZoneByName\r\n File \"/usr/lib/python2.7/site-packages/firewall/client.py\", line 52, in handle_exceptions\r\n return func(*args, **kwargs)\r\n File \"/usr/lib/python2.7/site-packages/firewall/client.py\", line 1505, in getZoneByName\r\n path = dbus_to_python(self.fw_config.getZoneByName(name))\r\n File \"/usr/lib64/python2.7/site-packages/dbus/proxies.py\", line 70, in __call__\r\n return self._proxy_method(*args, **keywords)\r\n File \"/usr/lib/python2.7/site-packages/slip/dbus/proxies.py\", line 50, in __call__\r\n return dbus.proxies._ProxyMethod.__call__(self, *args, **kwargs)\r\n File \"/usr/lib64/python2.7/site-packages/dbus/proxies.py\", line 145, in __call__\r\n **keywords)\r\n File \"/usr/lib64/python2.7/site-packages/dbus/connection.py\", line 651, in call_blocking\r\n message, timeout)\r\ndbus.exceptions.DBusException: org.fedoraproject.slip.dbus.service.PolKit.NotAuthorizedException.org.fedoraproject.FirewallD1.config: \r\n", "msg": "MODULE FAILURE", "parsed": false}
- name: Install firewalld
  yum:
    name: firewalld
    state: latest
  notify:
    - start firewalld

- name: start firewalld
  service:
    name: firewalld
    state: started
    enabled: yes
  become: yes

- name: enable 1234
  firewalld:
    zone: public
    port: 1234/tcp
    permanent: true
    state: enabled
  become: yes
Do it this way. It will work.
dbus.exceptions.DBusException: org.fedoraproject.slip.dbus.service.PolKit.NotAuthorizedException.org.fedoraproject.FirewallD1.config indicates there's some sort of permissions error. The task probably needs to elevate its privileges with become: yes.
See the become documentation for more details.
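Applied to the task from the question, that would look something like the sketch below (same firewalld options as in the original task, just with privilege escalation added):

- name: open management console port
  firewalld:
    port: 1234/tcp
    zone: public
    permanent: true
    immediate: yes
    state: enabled
  become: yes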