I'm trying to run supervisord on a read-only filesystem.
I have tried to stop supervisord from writing log and pid files using the following configuration:
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/dev/null
However, when I attempt to start supervisord, I still receive the following error:
Traceback (most recent call last):
File "/usr/bin/supervisord", line 11, in <module>
load_entry_point('supervisor==3.3.4', 'console_scripts', 'supervisord')()
File "/usr/lib/python2.7/site-packages/supervisor/supervisord.py", line 349, in main
options = ServerOptions()
File "/usr/lib/python2.7/site-packages/supervisor/options.py", line 428, in __init__
existing_directory, default=tempfile.gettempdir())
File "/usr/lib/python2.7/tempfile.py", line 275, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/lib/python2.7/tempfile.py", line 217, in _get_default_tempdir
("No usable temporary directory found in %s" % dirlist))
IOError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Is it possible to start/run supervisord on a read-only filesystem?
Set the TMPDIR environment variable to a directory on a read-write (rw) volume mount; Python's tempfile module checks TMPDIR (then TEMP and TMP) before falling back to the directories listed in the traceback.
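For example, a minimal sketch (the paths, config file, and image name here are illustrative, not taken from the question):

# point Python's tempfile module at a directory on a writable mount
export TMPDIR=/run/supervisor-tmp
mkdir -p "$TMPDIR"
supervisord -c /etc/supervisord.conf

# or, in a container with a read-only root filesystem, add a writable
# tmpfs and set TMPDIR through the environment
docker run --read-only --tmpfs /run -e TMPDIR=/run my-supervisord-image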
Related
I deleted the anaconda directory under my home directory along with the related .bashrc configuration.
Now I need to install it again, but on Linux a problem occurs even when the installer overwrites the previous, unsuccessful installation.
Should I delete some additional config files? How can I handle this?
sh Downloads/Anaconda3-2022.10-Linux-x86_64.sh -u -p /home/user/anaconda3/
PREFIX=/home/user/anaconda3
Unpacking payload ...
concurrent.futures.process._RemoteTraceback:
'''
Traceback (most recent call last):
File "concurrent/futures/process.py", line 384, in wait_result_broken_or_wakeup
File "multiprocessing/connection.py", line 256, in recv
TypeError: __init__() missing 1 required positional argument: 'msg'
'''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "entry_point.py", line 69, in <module>
File "concurrent/futures/process.py", line 559, in _chain_from_iterable_of_lists
File "concurrent/futures/_base.py", line 608, in result_iterator
File "concurrent/futures/_base.py", line 445, in the result
File "concurrent/futures/_base.py", line 390, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
[8382] Failed to execute script entry_point
Make sure you have deleted the .conda directory under your home directory and that you have enough disk space.
There is no need to delete .cache or any bin libraries.
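As a minimal sketch of that cleanup and retry, reusing the installer command and paths from the question:

rm -rf ~/.conda        # remove leftover conda state under the home directory
df -h ~                # confirm there is enough free disk space for the installer
sh Downloads/Anaconda3-2022.10-Linux-x86_64.sh -u -p /home/user/anaconda3/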
I'm converting my Main.py Python code to a .exe application, and I need to load ids.txt, which contains some values line by line. However, after the conversion is done, the auto-py-to-exe app (a free GitHub tool) shows this error message:
Traceback (most recent call last):
File "main.py", line 37, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\janka\\AppData\\Local\\Temp\\_MEI200202\\base_library.zip\\ids.txt'
By the way, ids.txt is located in another directory and is opened relative to the application path as follows:
with open(os.path.join(sys.path[0], "ids.txt"), "r") as f:
devIds = f.read().splitlines()
I am working on VirtualBox 6.0 with Python 3.5. I am trying to install Airflow from GitHub using the requirements-python3.5.txt file (https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt). However, when I try to download this file from the command line, I get a read-only file system error:
vagrant@learnairflow:~$ source .sandbox/bin/activate
(.sandbox) vagrant@learnairflow:~$ wget https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt
--2020-06-13 15:47:54-- https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.232.48.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.232.48.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6210 (6.1K) [text/plain]
requirements-python3.5.txt: Read-only file system
Cannot write to ‘requirements-python3.5.txt’ (Success).
Subsequently, when I try to install airflow I get the following error:
(.sandbox) vagrant@learnairflow:~$ pip install "apache-airflow[celery, crypto, mysql, rabbitmq, redis]"==1.10.10 --constraint requirements-python3.5.txt
WARNING: The directory '/home/vagrant/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
ERROR: Exception:
Traceback (most recent call last):
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 188, in _main
status = self.run(options, args)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cli/req_command.py", line 185, in wrapper
return func(self, options, args)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/commands/install.py", line 288, in run
wheel_cache = WheelCache(options.cache_dir, options.format_control)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cache.py", line 296, in __init__
self._ephem_cache = EphemWheelCache(format_control)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cache.py", line 265, in __init__
globally_managed=True,
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/utils/temp_dir.py", line 137, in __init__
path = self._create(kind)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/utils/temp_dir.py", line 185, in _create
tempfile.mkdtemp(prefix="pip-{}-".format(kind))
File "/usr/local/lib/python3.5/tempfile.py", line 358, in mkdtemp
prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir)
File "/usr/local/lib/python3.5/tempfile.py", line 130, in _sanitize_params
dir = gettempdir()
File "/usr/local/lib/python3.5/tempfile.py", line 296, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/local/lib/python3.5/tempfile.py", line 231, in _get_default_tempdir
dirlist)
FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/home/vagrant']
I've tried using the sudo command but it doesn't work either. Do you have any idea of what might be causing this error and how to fix it? Thank you in advance!
I followed the instructions from here: http://www.azerothcore.org/wiki/Install-with-Docker
I used the v8 data.
When I run docker-compose up I get the following:
Building ac-worldserver
Traceback (most recent call last):
File "site-packages/docker/utils/build.py", line 97, in create_archive
File "tarfile.py", line 1972, in addfile
File "tarfile.py", line 250, in copyfileobj
File "tempfile.py", line 481, in func_wrapper
OSError: [Errno 28] No space left on device
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "bin/docker-compose", line 6, in <module>
File "compose/cli/main.py", line 72, in main
File "compose/cli/main.py", line 128, in perform_command
File "compose/cli/main.py", line 1077, in up
File "compose/cli/main.py", line 1073, in up
File "compose/project.py", line 548, in up
File "compose/service.py", line 367, in ensure_image_exists
File "compose/service.py", line 1106, in build
File "site-packages/docker/api/build.py", line 160, in build
File "site-packages/docker/utils/build.py", line 31, in tar
File "site-packages/docker/utils/build.py", line 100, in create_archive
OSError: Can not read file in context: /home/azerothcore/wotlk/azerothcore-wotlk/docker/worldserver/data/mmaps/5332641.mmtile
[21981] Failed to execute script docker-compose
It is likely disk-space related. I had the same error, and there is an error further up (OSError: [Errno 28] No space left on device) indicating the build ran out of disk space. It works after clearing space; in my case the build uses over 10 GB.
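To see and reclaim space on the Docker host, something along these lines can help (a sketch; docker system prune deletes unused containers, networks, images, and build cache, so only run it if you do not need them):

df -h                # free space on the host filesystems
docker system df     # how much space Docker itself is using
docker system prune  # reclaim space used by unused Docker objects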
I had the same error when I tried to mount a large file. The solution for me was to create a .dockerignore file containing the name of the directory where the large file was saved.
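As a sketch, assuming the build context is the directory containing the data/mmaps files from the traceback above (adjust the relative path to your own layout):

# exclude the large data directory from the build context
echo "data/mmaps/" >> .dockerignore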
Has anyone seen this error from gsutil or know how to fix it? I get it when I try to run any gsutil command, but here is an example trying to use ls on a bucket in my google cloud project.
$ gsutil ls gs://BUCKET/FOLDER
Traceback (most recent call last):
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 68, in <module>
bootstrapping.PrerunChecks(can_be_gce=True)
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 279, in PrerunChecks
CheckCredOrExit(can_be_gce=can_be_gce)
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 167, in CheckCredOrExit
cred = c_store.Load()
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/googlecloudsdk/core/credentials/store.py", line 206, in Load
cred = store.get()
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/client.py", line 350, in get
self.acquire_lock()
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/multistore_file.py", line 222, in acquire_lock
self._multistore._lock()
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/multistore_file.py", line 281, in _lock
self._file.open_and_lock()
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/locked_file.py", line 370, in open_and_lock
self._opener.open_and_lock(timeout, delay)
File "/home/gmcinnes/bin/google-cloud-sdk/bin/bootstrapping/../../lib/oauth2client/locked_file.py", line 211, in open_and_lock
raise e
IOError: [Errno 37] No locks available
Thanks
Figured it out. The filesystem on that machine was full. I cleaned it up and it works now; the df output below shows the full disk, and a quick way to track down what is using the space is sketched after it.
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10079084 9678804 0 100% /
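A rough sketch for finding what is filling the disk (GNU coreutils; adjust the starting path to the full mount):

# largest directories on the root filesystem, biggest first
du -xh / 2>/dev/null | sort -rh | head -20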