OSError: [Errno 99] Cannot assign requested address - CentOS

I am trying to run jupyter notebook on CentOS 7. It comes back with:
OSError: [Errno 99] Cannot assign requested address
And the stack trace:
[user@desktop ~]$ jupyter notebook
Traceback (most recent call last):
File "/home/use/anaconda3/bin/jupyter-notebook", line 6, in <module>
sys.exit(notebook.notebookapp.main())
File "/home/user/anaconda3/lib/python3.6/site-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/home/user/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/home/user/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/home/user/anaconda3/lib/python3.6/site-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/home/user/anaconda3/lib/python3.6/site-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/home/user/anaconda3/lib/python3.6/site-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/home/user/anaconda3/lib/python3.6/site-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address

jupyter notebook --ip=127.0.0.1 --port=8888
I simply had to set the ip/port parameters explicitly. The issue was most likely that the default ip/port it was previously trying to bind was not available!
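As a quick check before picking a port, you can see whether anything is already listening on it; this is only a diagnostic sketch, assuming the ss tool that ships with CentOS 7 and Jupyter's default port 8888:
ss -tlnp | grep :8888
An empty result means the port itself is free, which would point at the IP rather than the port being the problem.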

On a remote VM, I solved the issue with:
$ jupyter-notebook --ip=0.0.0.0 --port=8888
...
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://0.0.0.0:8888/?token=1234567890abcdefghijklmnopqrstuvwxyz (the token shown here is just a demo value)
...
Note: do not bind to a specific IP here; 0.0.0.0 makes Jupyter listen on all interfaces.
I can then connect to the notebook from my own machine via:
http://your_vm_ip:8888/?token=1234567890abcdefghijklmnopqrstuvwxyz
(i.e. replace 0.0.0.0 in the printed URL with your_vm_ip)

Here is a permanent solution.
Create a configuration file for Jupyter by entering the following in the terminal: jupyter notebook --generate-config
This command will create the file /home/USER/.jupyter/jupyter_notebook_config.py
Open the file jupyter_notebook_config.py and edit the variable c.NotebookApp.ip as follows:
# c.NotebookApp.ip = 'localhost'
c.NotebookApp.ip = '127.0.0.1'
Then start Jupyter again from the terminal: jupyter notebook
Remark: you may first need to chmod the config file to grant yourself permission to edit it.
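If editing the file fails due to permissions, a minimal sketch of the fix, assuming your own user owns the file, is:
chmod u+w ~/.jupyter/jupyter_notebook_config.py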

If you've tried several ports already (using --port XXXX), and none work:
Check that the localhost entry in /etc/hosts is not set to something other than 127.0.0.1.
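A quick way to check is to print the relevant lines; on a default CentOS 7 install you should see 127.0.0.1 mapped to localhost (plus an ::1 line for IPv6):
grep -i localhost /etc/hosts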

Related

AWX-Web error when installing awx-operator on Kubernetes

I am currently installing the awx-operator and have run into an issue while trying to expose the application to the outside world.
Specifically, I hit an error with the awx-web container within the awx-5b58db49c-9r4hp pod. When I run kubectl logs pod/awx-5b58db49c-9r4hp -c awx-web, I get the following output:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 81, in _ctit_db_wrapper
yield
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 411, in __getattr__
value = self._get_local(name)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 355, in _get_local
setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 653, in first
for obj in (self if self.ordered else self.order_by('pk'))[:1]:
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 274, in __iter__
self._fetch_all()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/sql/compiler.py", line 1140, in execute_sql
cursor = self.connection.cursor()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 256, in cursor
return self._cursor()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 233, in _cursor
self.ensure_connection()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
self.connection = self.get_new_connection(conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
connection = Database.connect(**conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: FATAL: password authentication failed for user "awx"
2021-05-12 14:28:54,478 ERROR [-] awx.conf.settings Database settings are not available, using defaults.
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/oauth2_provider/settings.py", line 138, in __getattr__
val = self.user_settings[attr]
KeyError: 'OAUTH2_VALIDATOR_CLASS'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
self.connection = self.get_new_connection(conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
connection = Database.connect(**conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: password authentication failed for user "awx"
I am not too sure whether this is a big deal or just a red herring. I am just in need of some clarification. If I need to provide any more information to aid troubleshooting, please let me know!
As per the AWX 19.0.0 issue "password authentication failed for user "awx"", the problem no longer occurs with minikube v1.20.0 and awx-operator 0.9.0, so the advice for now is to use those versions.
I was also seeing the same error message, password authentication failed for user "awx".
I was running awx-operator version 0.10.0 on a Kubernetes cluster created with kubeadm, not with minikube.
I hosted the persistent volume needed by the AWX postgres pod on a worker node whose hostPath still contained stale data migrated from a different Kubernetes cluster. I had to clean up that leftover data on the worker node hostPath where the persistent volume was mounted and do a fresh install, so postgres started with fresh data, and the password authentication error never came back.
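For illustration only, assuming the persistent volume used a hostPath such as /data/postgres (a hypothetical path; substitute whatever your PV actually points at), the cleanup on the worker node amounted to something like:
# run on the worker node, with the old AWX deployment scaled down or removed
sudo rm -rf /data/postgres/*    # wipe the stale data left over from the previous cluster
# then redeploy AWX so the postgres pod initialises a fresh data directory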

Why am I getting a read-only file system from GitHub and an error when trying to install Apache Airflow?

I am working on VirtualBox 6.0 with Python 3.5. I am trying to install Airflow from GitHub using the requirements-python3.5.txt file (https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt). However, when I try to download this file from the command line, I get a read-only file system error:
vagrant@learnairflow:~$ source .sandbox/bin/activate
(.sandbox) vagrant@learnairflow:~$ wget https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt
--2020-06-13 15:47:54-- https://raw.githubusercontent.com/apache/airflow/v1-10-stable/requirements/requirements-python3.5.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.232.48.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.232.48.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6210 (6.1K) [text/plain]
requirements-python3.5.txt: Read-only file system
Cannot write to ‘requirements-python3.5.txt’ (Success).
Subsequently, when I try to install airflow I get the following error:
(.sandbox) vagrant@learnairflow:~$ pip install "apache-airflow[celery, crypto, mysql, rabbitmq, redis]"==1.10.10 --constraint requirements-python3.5.txt
WARNING: The directory '/home/vagrant/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
ERROR: Exception:
Traceback (most recent call last):
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 188, in _main
status = self.run(options, args)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cli/req_command.py", line 185, in wrapper
return func(self, options, args)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/commands/install.py", line 288, in run
wheel_cache = WheelCache(options.cache_dir, options.format_control)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cache.py", line 296, in __init__
self._ephem_cache = EphemWheelCache(format_control)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/cache.py", line 265, in __init__
globally_managed=True,
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/utils/temp_dir.py", line 137, in __init__
path = self._create(kind)
File "/home/vagrant/.sandbox/lib/python3.5/site-packages/pip/_internal/utils/temp_dir.py", line 185, in _create
tempfile.mkdtemp(prefix="pip-{}-".format(kind))
File "/usr/local/lib/python3.5/tempfile.py", line 358, in mkdtemp
prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir)
File "/usr/local/lib/python3.5/tempfile.py", line 130, in _sanitize_params
dir = gettempdir()
File "/usr/local/lib/python3.5/tempfile.py", line 296, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/local/lib/python3.5/tempfile.py", line 231, in _get_default_tempdir
dirlist)
FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/home/vagrant']
I've tried using the sudo command but it doesn't work either. Do you have any idea of what might be causing this error and how to fix it? Thank you in advance!
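One diagnostic worth running with a symptom like this (a sketch of a check, not a fix, assuming findmnt from util-linux is present) is to confirm whether the home directory really is mounted read-only inside the VM:
findmnt -T ~    # the OPTIONS column shows 'ro' if the filesystem is read-only
touch ~/writetest && rm ~/writetest    # fails with 'Read-only file system' if it is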

Running supervisord on a read-only filesystem

I'm trying to run supervisord on a read-only filesystem.
I have tried to stop supervisord from writing log and pid files using the following configuration:
[supervisord]
nodaemon=true
user=root
logfile=/dev/stdout
logfile_maxbytes=0
pidfile=/dev/null
However, when I attempt to start it, I still receive the following error:
Traceback (most recent call last):
File "/usr/bin/supervisord", line 11, in <module>
load_entry_point('supervisor==3.3.4', 'console_scripts', 'supervisord')()
File "/usr/lib/python2.7/site-packages/supervisor/supervisord.py", line 349, in main
options = ServerOptions()
File "/usr/lib/python2.7/site-packages/supervisor/options.py", line 428, in __init__
existing_directory, default=tempfile.gettempdir())
File "/usr/lib/python2.7/tempfile.py", line 275, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/lib/python2.7/tempfile.py", line 217, in _get_default_tempdir
("No usable temporary directory found in %s" % dirlist))
IOError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Is it possible to start/run supervisord on a read-only filesystem?
Set the TMPDIR environment variable to a rw volume mount (Python's tempfile module, which supervisord uses to pick its temp directory, honours TMPDIR, TEMP and TMP).
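A minimal sketch, assuming a writable volume is mounted at /run/supervisor (a hypothetical mount point; substitute your own):
TMPDIR=/run/supervisor supervisord -c /etc/supervisord.conf
In a container image the same thing can be set once with ENV TMPDIR=/run/supervisor in the Dockerfile.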

ERROR: gcloud crashed (CannotConnectToMetadataServerException): <urlopen error [Errno -2] Name does not resolve>

I am having issues configuring my container to point to my Kubernetes cluster with the command gcloud container clusters get-credentials. I get the following error.
ERROR: gcloud crashed (CannotConnectToMetadataServerException): <urlopen error [Errno -2] Name does not resolve>
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
Enhanced logging:
CannotConnectToMetadataServerException: <urlopen error [Errno -2] Name does not resolve>
2018-04-10 18:00:42,625 ERROR ___FILE_ONLY___ BEGIN CRASH STACKTRACE
Traceback (most recent call last):
File "/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 147, in main
gcloud_cli.Execute()
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 818, in Execute
self._HandleAllErrors(exc, command_path_string, specified_arg_names)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 856, in _HandleAllErrors
exceptions.HandleError(exc, command_path_string, self.__known_error_handler)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/exceptions.py", line 526, in HandleError
core_exceptions.reraise(exc)
File "/google-cloud-sdk/lib/googlecloudsdk/core/exceptions.py", line 111, in reraise
six.reraise(type(exc_value), exc_value, tb)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 792, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 751, in Run
self._parent_group.RunGroupFilter(tool_context, args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 692, in RunGroupFilter
self._parent_group.RunGroupFilter(context, args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 693, in RunGroupFilter
self._common_type().Filter(context, args)
File "/google-cloud-sdk/lib/surface/container/__init__.py", line 71, in Filter
context['api_adapter'] = api_adapter.NewAPIAdapter('v1')
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 147, in NewAPIAdapter
return NewV1APIAdapter()
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 151, in NewV1APIAdapter
return InitAPIAdapter('v1', V1Adapter)
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 172, in InitAPIAdapter
api_client = core_apis.GetClientInstance('container', api_version)
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/apis.py", line 297, in GetClientInstance
api_name, api_version, no_http, _CheckResponse, enable_resource_quota)
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/apis_internal.py", line 153, in _GetClientInstance
http_client = http.Http(enable_resource_quota=enable_resource_quota)
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/http.py", line 64, in Http
creds = store.LoadIfEnabled()
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 281, in LoadIfEnabled
return Load()
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 348, in Load
cred = STATIC_CREDENTIAL_PROVIDERS.GetCredentials(account)
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 162, in GetCredentials
cred = provider.GetCredentials(account)
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 214, in GetCredentials
if account in c_gce.Metadata().Accounts():
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/gce.py", line 127, in Accounts
gce_read.GOOGLE_GCE_METADATA_ACCOUNTS_URI + '/')
File "/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 289, in DecoratedFunction
exceptions.reraise(to_reraise[1], tb=to_reraise[2])
File "/google-cloud-sdk/lib/googlecloudsdk/core/exceptions.py", line 111, in reraise
six.reraise(type(exc_value), exc_value, tb)
File "/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 159, in TryFunc
return func(*args, **kwargs), None
File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/gce.py", line 52, in _ReadNoProxyWithCleanFailures
raise CannotConnectToMetadataServerException(e)
CannotConnectToMetadataServerException: <urlopen error [Errno -2] Name does not resolve>
To give some color: we kick off a build on CircleCI every time we push code to GitHub. We have a container, internally called belushi, that we use to run our entire infrastructure; it has gcloud installed in it. CircleCI's infrastructure is on AWS, and when they spin up the belushi container we run gcloud get-credentials to point it at our project in Google Cloud, which has a Kubernetes cluster configured; we run all of our functional CI testing in that cluster. So we need that belushi pod to be configured against the CI project before we can move forward.
The weird thing is that the belushi:latest image always configures properly; however, when we are working on belushi we often branch and build a new image to run tests. For example, I create a branch of belushi with a new hash of 1234567, we spin up the belushi:1234567 image, and the first thing we do is configure it to point to the CI project; that is when we hit the metadata resolve issue.
I feel like it is DNS related, or maybe the metadata server isn't allowing the new belushi image to communicate with it right away. After I retry a bunch of times it eventually configures properly (without any code changes). So I wonder whether the metadata server is rejecting it for some reason, or whether the name simply isn't resolving on AWS for some unknown reason.
The first thing you can do to troubleshoot, when you get this error, is to attempt this:
curl -H "Metadata-Flavor:Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/
The metadata server should respond straight away with your service account metadata.
Is your container behind any kind of HTTP proxy?
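Since the exception is a name-resolution failure, it is also worth checking whether the metadata hostname resolves at all from inside the container (standard tools, though a minimal image may not have them installed):
getent hosts metadata.google.internal
nslookup metadata.google.internal
If neither returns an address, the failure is happening at the DNS level rather than at the metadata service itself.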

OpenERP Server Error Access denied

After installing Odoo, I went to the web panel, where it asked me to create a new database.
When I entered the details I got an error. I can change the master password successfully.
I have already created the database via PuTTY, and there is no openerp-server.conf file under the /etc/ folder.
OpenERP Server Error
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 500, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 517, in dispatch
result = self._call_function(**self.params)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 284, in _call_function
return self.endpoint(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 733, in __call__
return self.method(*args, **kw)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 376, in response_wrap
response = f(*args, **kw)
File "/usr/lib/python2.7/dist-packages/openerp/addons/web/controllers/main.py", line 714, in create
params['create_admin_pwd'])
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 807, in proxy_method
result = dispatch_rpc(self.service_name, method, args)
File "/usr/lib/python2.7/dist-packages/openerp/http.py", line 100, in dispatch_rpc
result = dispatch(method, params)
File "/usr/lib/python2.7/dist-packages/openerp/service/db.py", line 62, in dispatch
security.check_super(passwd)
File "/usr/lib/python2.7/dist-packages/openerp/service/security.py", line 33, in check_super
raise openerp.exceptions.AccessDenied()
AccessDenied: Access denied.
The following command will give you the location of the .conf file:
locate openerp-server.conf
Now go to that path, open the file, and check whether the master password in it is the same as the one you entered while creating the new database.
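If locate is not available or its database is stale, a plain filesystem search (nothing Odoo-specific about it) works as a fallback:
sudo find / -name 'openerp-server.conf' 2>/dev/null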
@Danish the master password should be "admin"; it is required to create your new database.
The master password that you are using to create the database is not the same as the one you have set for the PostgreSQL server.
@Danish
Check the tools -> config file and see which username and password are set for accessing the database.
Check the pg_hba.conf file under /etc/postgresql/9.1/main and make sure it allows connections for the database user specified in the tools -> config file.
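For illustration, a pg_hba.conf line that permits password authentication for a local OpenERP database user might look like the following (the user name odoo here is an assumption; use whatever user your config file specifies):
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       odoo  127.0.0.1/32   md5
After editing pg_hba.conf, restart PostgreSQL, e.g. sudo service postgresql restart.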