We get the following error when we try to push a backup using wal-e:
2020-07-16T21:18:55Z <Greenlet at 0x7f2a59fadc48: <wal_e.worker.upload.PartitionUploader object at 0x7f2a59f96cc0>([ExtendedTarInfo(submitted_path='/var/lib/postgres)> failed with PermissionError
wal_e.operator.backup WARNING MSG: blocking on sending WAL segments
DETAIL: The backup was not completed successfully, but we have to wait anyway. See README: TODO about pg_cancel_backup
STRUCTURED: time=2020-07-16T21:18:55.651073-00 pid=19697
wal_e.main CRITICAL MSG: An unprocessed exception has avoided all error handling
DETAIL: Traceback (most recent call last):
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/operator/backup.py", line 197, in database_backup
    **kwargs)
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/operator/backup.py", line 500, in _upload_pg_cluster_dir
    pool.put(tpart)
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/worker/upload_pool.py", line 108, in put
    self._wait()
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/worker/upload_pool.py", line 65, in _wait
    raise val
  File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/worker/upload.py", line 96, in __call__
    gpg_key=self.gpg_key) as pl:
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/pipeline.py", line 92, in __enter__
    self.stdin = pipebuf.NonBlockBufferedWriter(stdin)
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/pipebuf.py", line 225, in __init__
    _setup_fd(self._fd)
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/pipebuf.py", line 62, in _setup_fd
    set_buf_size(fd)
  File "/var/lib/postgresql/virtualenvs/wal-e/lib/python3.5/site-packages/wal_e/pipebuf.py", line 53, in set_buf_size
    fcntl.fcntl(fd, fcntl.F_SETPIPE_SZ, OS_PIPE_SZ)
PermissionError: [Errno 1] Operation not permitted
It's not clear why this fcntl call would lead to a PermissionError.
PostgreSQL version: 9.6, Python: 3.5, wal-e: 1.1.1 (also tried 1.0.3 and 1.1.0).
It was working previously and stopped working at some point, without any noticeable changes.
Well, I'm late to the game. See https://github.com/wal-e/wal-e/issues/270
I've worked around it by patching wal-e not to set the pipe size.
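For reference, the workaround amounts to tolerating EPERM from F_SETPIPE_SZ, which the kernel returns when the requested size exceeds fs.pipe-max-size (common inside containers). A minimal sketch of the idea; the constants mirror what I believe wal-e uses, so treat them as assumptions:

import fcntl

# F_SETPIPE_SZ is not exposed by the stdlib on older Pythons;
# 1031 is the Linux fcntl constant.
F_SETPIPE_SZ = getattr(fcntl, 'F_SETPIPE_SZ', 1031)
OS_PIPE_SZ = 65536  # assumption: the pipe buffer size wal-e requests

def set_buf_size(fd):
    try:
        fcntl.fcntl(fd, F_SETPIPE_SZ, OS_PIPE_SZ)
    except PermissionError:
        # EPERM: can't enlarge the pipe; fall back to the kernel default.
        pass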
I use IntelliJ IDEA's bundled database client (DataGrip) to manage my database connections, both local and remote, and I use Docker to run postgres with the following settings:
services:
  postgresql:
    image: postgres:11
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=$user
      - POSTGRES_PASSWORD=$pass
      - POSTGRES_DB=k$db
After upgrading from Catalina to Big Sur, the connection to the local db fails, and it just shows a connection error message as follows:
[08001] Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
java.net.ConnectException: Connection refused (Connection refused).
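A raw TCP probe, independent of DataGrip, can confirm whether anything is listening on the published port; a minimal sketch, with host and port taken from the compose file above:

import socket

# Attempt a raw TCP connection to the published postgres port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    try:
        s.connect(('127.0.0.1', 5432))
        print('port 5432 is open, something is listening')
    except (ConnectionRefusedError, socket.timeout):
        print('refused/timed out: the container (or Docker itself) is probably down')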
When I run docker-compose up, I get the following error:
Traceback (most recent call last):
  File "site-packages/urllib3/connectionpool.py", line 677, in urlopen
  File "site-packages/urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1252, in request
  File "http/client.py", line 1298, in _send_request
  File "http/client.py", line 1247, in endheaders
  File "http/client.py", line 1026, in _send_output
  File "http/client.py", line 966, in send
  File "site-packages/docker/transport/unixconn.py", line 43, in connect
ConnectionRefusedError: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "site-packages/requests/adapters.py", line 449, in send
  File "site-packages/urllib3/connectionpool.py", line 727, in urlopen
  File "site-packages/urllib3/util/retry.py", line 403, in increment
  File "site-packages/urllib3/packages/six.py", line 734, in reraise
  File "site-packages/urllib3/connectionpool.py", line 677, in urlopen
  File "site-packages/urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1252, in request
  File "http/client.py", line 1298, in _send_request
  File "http/client.py", line 1247, in endheaders
  File "http/client.py", line 1026, in _send_output
  File "http/client.py", line 966, in send
  File "site-packages/docker/transport/unixconn.py", line 43, in connect
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "site-packages/docker/api/client.py", line 205, in _retrieve_server_version
  File "site-packages/docker/api/daemon.py", line 181, in version
  File "site-packages/docker/utils/decorators.py", line 46, in inner
  File "site-packages/docker/api/client.py", line 228, in _get
  File "site-packages/requests/sessions.py", line 543, in get
  File "site-packages/requests/sessions.py", line 530, in request
  File "site-packages/requests/sessions.py", line 643, in send
  File "site-packages/requests/adapters.py", line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 67, in main
  File "compose/cli/main.py", line 123, in perform_command
  File "compose/cli/command.py", line 69, in project_from_options
  File "compose/cli/command.py", line 132, in get_project
  File "compose/cli/docker_client.py", line 43, in get_client
  File "compose/cli/docker_client.py", line 170, in docker_client
  File "site-packages/docker/api/client.py", line 188, in __init__
  File "site-packages/docker/api/client.py", line 213, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))
[1269] Failed to execute script docker-compose
Somehow, connections to remote dbs are not broken; they still work. Has anyone else come across this problem?
After the Big Sur upgrade, I also got a warning whenever I opened a new terminal:
zsh compinit: insecure directories, run compaudit for list.
Ignore insecure directories and continue [y] or abort compinit [n]?
I did not think they were related, but the solution to this problem, explained in this thread, also solved the main postgresql connection issue for me. However, my computer keeps restarting occasionally, and after such a restart I again hit the main problem when I run docker-compose up inside the IDE's terminal. I then restart manually and it works. Although not a permanent solution, this solved my problem for now.
I was orchestrating two Dataflow jobs with Cloud Composer and it was working fine for months. Suddenly the two jobs stopped working with the following error message:
in download_blob
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/client.py", line 399, in get_bucket
    retry=retry,
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/bucket.py", line 1002, in reload
    retry=retry,
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/_helpers.py", line 225, in reload
    retry=retry,
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/_http.py", line 63, in api_request
    return call()
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/usr/local/lib/python3.6/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/usr/local/lib/python3.6/site-packages/google/cloud/_http.py", line 479, in api_request
    timeout=timeout,
  File "/usr/local/lib/python3.6/site-packages/google/cloud/_http.py", line 337, in _make_request
    method, url, headers, data, target_object, timeout=timeout
  File "/usr/local/lib/python3.6/site-packages/google/cloud/_http.py", line 374, in _do_request
    return self.http.request(
  File "/usr/local/lib/python3.6/site-packages/google/cloud/_http.py", line 157, in http
    return self._client._http
  File "/usr/local/lib/python3.6/site-packages/google/cloud/client.py", line 187, in _http
    self._http_internal.configure_mtls_channel(self._client_cert_source)
AttributeError: 'AuthorizedSession' object has no attribute 'configure_mtls_channel'
In the jobs I download a file from Google Cloud Storage with the storage client. I assumed it was some dependency issue. In the Composer environment I had installed google-cloud-storage without specifying a version. I tried specifying different versions of the package, but nothing seems to work.
Thanks!
This seems to be related to this issue.
Try pinning google-cloud-core to 1.5.0. Then I highly recommend you Drain your jobs once you get them working again (assuming they are streaming jobs).
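To see which versions the environment actually resolved, a quick check like this can help; a sketch, where the package list is just my guess at the relevant libraries from the traceback:

import pkg_resources

# Print the installed versions of the libraries implicated in the traceback.
for name in ('google-cloud-storage', 'google-cloud-core',
             'google-api-core', 'google-auth'):
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, 'not installed')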
I am using the latest master branch of the git repo https://github.com/celery/librabbitmq, installing librabbitmq==2.0.0 for Python 3.6 by following the instructions in the readme:
Using the development version
You can clone the repository by doing the following:
$ git clone git://github.com/celery/librabbitmq.git
Then install it by doing the following:
$ cd librabbitmq
$ make install # or make develop
This works fine (after installing certain binaries for C compilation in the OS), but when I then create a small a+b add task and call it with add.delay(2, 2), it fails with the following error. I looked it up and saw that Celery 4 uses json as the serializer, so clearly it is not because of pickle serialization.
Changing from the librabbitmq to the pyamqp broker makes it work normally.
The exact same situation occurs on both macOS and Ubuntu 16.
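For reference, the task is just the canonical add example; the module name and broker URL below are placeholders, not my exact project layout:

# tasks.py
from celery import Celery

# With librabbitmq installed, a plain amqp:// URL picks the C transport.
app = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')

@app.task
def add(x, y):
    return x + y

# Called from a shell/REPL as: add.delay(2, 2)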
[2018-04-30 23:40:02,956: CRITICAL/MainProcess] Unrecoverable error: SystemError(' returned a result with an error set',)
Traceback (most recent call last):
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/kombu/messaging.py", line 624, in _receive_callback
    return on_m(message) if on_m else self.receive(decoded, message)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 570, in on_task_received
    callbacks,
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/strategy.py", line 145, in task_message_handler
    handle(req)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/worker.py", line 221, in _process_task_sem
    return self._quick_acquire(self._process_task, req)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/kombu/async/semaphore.py", line 62, in acquire
    callback(*partial_args, **partial_kwargs)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/worker.py", line 226, in _process_task
    req.execute_using_pool(self.pool)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/request.py", line 531, in execute_using_pool
    correlation_id=task_id,
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/concurrency/base.py", line 155, in apply_async
    **options)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/billiard/pool.py", line 1486, in apply_async
    self._quick_put((TASK, (result._job, None, func, args, kwds)))
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/concurrency/asynpool.py", line 813, in send_job
    body = dumps(tup, protocol=protocol)
TypeError: can't pickle memoryview objects

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/bootsteps.py", line 370, in start
    return self.obj.start()
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 320, in start
    blueprint.start(self)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 596, in start
    c.loop(*c.loop_args())
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/celery/worker/loops.py", line 88, in asynloop
    next(loop)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/kombu/async/hub.py", line 354, in create_loop
    cb(*cbargs)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/kombu/transport/base.py", line 236, in on_readable
    reader(loop)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/kombu/transport/base.py", line 218, in _read
    drain_events(timeout=0)
  File "/Users/somghosh/.virtualenvs/ctdb/lib/python3.6/site-packages/librabbitmq-2.0.0-py3.6-macosx-10.6-intel.egg/librabbitmq/__init__.py", line 227, in drain_events
    self._basic_recv(timeout)
SystemError: returned a result with an error set
This library is not recommended for use as the rabbitmq broker with celery. Instead, please try py-amqp, which is more maintained and less buggy.
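Concretely, the switch is just the broker scheme; pyamqp:// forces the pure-Python transport even when librabbitmq is installed (a sketch, with the same placeholder credentials as above):

from celery import Celery

# 'pyamqp://' pins kombu to the py-amqp transport, bypassing librabbitmq.
app = Celery('tasks', broker='pyamqp://guest:guest@localhost:5672//')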
Following the steps to configure the Titan server:
bin/titan.sh
Forking Cassandra...
Running `nodetool statusthrift`... OK (returned exit status 0 and printed string "running").
Forking Elasticsearch...
Connecting to Elasticsearch (127.0.0.1:9300)... OK (connected to 127.0.0.1:9300).
Forking Gremlin-Server...
Connecting to Gremlin-Server (127.0.0.1:8182)... OK (connected to 127.0.0.1:8182).
Run gremlin.sh to connect.
The server started perfectly, but when I connect with Python and run the script, I get the error mentioned below:
Traceback (most recent call last):
  File "/home/admin-12/Documents/bitbucket/ecodrone/ecodrone/GremlinConnector.py", line 28, in <module>
    data = (execute_query("""g.V()"""))
  File "/home/admin-12/Documents/bitbucket/ecodrone/ecodrone/GremlinConnector.py", line 22, in execute_query
    results = future_results.result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/resultset.py", line 81, in cb
    f.result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/connection.py", line 77, in _receive
    self._protocol.data_received(data, self._results)
  File "/home/admin-12/.local/lib/python3.6/site-packages/gremlin_python/driver/protocol.py", line 71, in data_received
    result_set = results_dict[request_id]
KeyError: None
Versions I am using:
titan - 1.0.0
gremlin-python - 3.3.2
apache-tinkerpop-gremlin-server-3.3.1
Titan supports an extremely old version of TinkerPop, and I'm sure you'll find some incompatibility there if you try to use gremlin-python 3.3.2. As Titan is no longer supported, I suggest you upgrade to JanusGraph, a more current and maintained fork of Titan.
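Once the server side matches the driver (for example, JanusGraph ships with a Gremlin Server based on a recent TinkerPop), a gremlin-python 3.3.x connection looks roughly like this; the host, port, and the 'g' traversal source are assumptions based on the default server config:

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a Gremlin Server whose TinkerPop version matches the driver.
graph = Graph()
g = graph.traversal().withRemote(
    DriverRemoteConnection('ws://127.0.0.1:8182/gremlin', 'g'))
print(g.V().limit(5).toList())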
I'm using mongo-connector and neo4j_doc_manager to sync mongodb's data to neo4j. It used to work perfectly, but today it started giving the following error:
2016-07-29 17:18:59,558 [CRITICAL] mongo_connector.oplog_manager:549 - Exception during collection dump
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/mongo_connector/oplog_manager.py", line 501, in do_dump
    upsert_all(dm)
  File "/usr/local/lib/python2.7/site-packages/mongo_connector/oplog_manager.py", line 485, in upsert_all
    dm.bulk_upsert(docs_to_dump(namespace), mapped_ns, long_ts)
  File "/usr/local/lib/python2.7/site-packages/mongo_connector/util.py", line 38, in wrapped
    reraise(new_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/site-packages/mongo_connector/util.py", line 32, in wrapped
    return f(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/mongo_connector/doc_managers/neo4j_doc_manager.py", line 89, in bulk_upsert
    tx.commit()
  File "/usr/local/lib/python2.7/site-packages/py2neo/cypher/core.py", line 306, in commit
    return self.post(self.__commit or self.__begin_commit)
  File "/usr/local/lib/python2.7/site-packages/py2neo/cypher/core.py", line 261, in post
    raise self.error_class.hydrate(error)
  File "/usr/local/lib/python2.7/site-packages/py2neo/cypher/error/core.py", line 54, in hydrate
    error_cls = getattr(error_module, title)
Neo4jOperationFailed: 'module' object has no attribute 'ConstraintValidationFailed'
2016-07-29 17:18:59,563 [ERROR] mongo_connector.oplog_manager:557 - OplogThread: Failed during dump collection cannot recover!
You're trying to insert data which doesn't match the constraints of your Neo4j schema (uniqueness or existence), and apparently the code doesn't know how to handle that type of error, though it does give its name:
ConstraintValidationFailed
You should maybe enable some logging to see the data it is trying to insert, or the Cypher query it is trying to execute.
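If the constraint is a uniqueness one, a quick aggregation on the Mongo side can surface the offending documents before the dump; a sketch with pymongo, where the database, collection, and 'email' field are hypothetical placeholders:

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
coll = client['mydb']['mycollection']  # placeholder: the synced namespace

# Group on the field backing the unique constraint and keep duplicates.
pipeline = [
    {'$group': {'_id': '$email', 'count': {'$sum': 1}}},
    {'$match': {'count': {'$gt': 1}}},
]
for dup in coll.aggregate(pipeline):
    print('duplicate value:', dup['_id'], 'appears', dup['count'], 'times')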