ubuntu openstack ocata - Discovering versions from the identity service failed - ubuntu-16.04

command:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
error:
Discovering versions from the identity service failed when creating
the password plugin. Attempting to determine version from URL.
Internal Server Error (HTTP 500)
Error in keystone.log:
2018-06-12 10:40:05.888577 mod_wsgi (pid=16170): Target WSGI script '/usr/bin/keystone-wsgi-admin' cannot be loaded as Python module.
2018-06-12 10:40:05.888611 mod_wsgi (pid=16170): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-admin'.
2018-06-12 10:40:05.888634 Traceback (most recent call last):
2018-06-12 10:40:05.888656 File "/usr/bin/keystone-wsgi-admin", line 51, in <module>
2018-06-12 10:40:05.888688 application = initialize_admin_application()
2018-06-12 10:40:05.888702 File "/usr/lib/python2.7/dist-packages/keystone/server/wsgi.py", line 129, in initialize_admin_application
2018-06-12 10:40:05.888726 config_files=_get_config_files())
2018-06-12 10:40:05.888739 File "/usr/lib/python2.7/dist-packages/keystone/server/wsgi.py", line 53, in initialize_application
2018-06-12 10:40:05.888759 common.configure(config_files=config_files)
2018-06-12 10:40:05.888772 File "/usr/lib/python2.7/dist-packages/keystone/server/common.py", line 30, in configure
2018-06-12 10:40:05.888792 keystone.conf.configure()
2018-06-12 10:40:05.888805 File "/usr/lib/python2.7/dist-packages/keystone/conf/__init__.py", line 126, in configure
2018-06-12 10:40:05.888826 help='Do not monkey-patch threading system modules.'))
2018-06-12 10:40:05.888839 File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2288, in __inner
2018-06-12 10:40:05.888860 result = f(self, *args, **kwargs)
2018-06-12 10:40:05.888872 File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2478, in register_cli_opt
2018-06-12 10:40:05.888892 raise ArgsAlreadyParsedError("cannot register CLI option")
2018-06-12 10:40:05.888915 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option
error.log:
[Tue Jun 12 10:12:18.510745 2018] [mpm_event:notice] [pid 29892:tid 139804806121344] AH00491: caught SIGTERM, shutting down
[Tue Jun 12 10:12:29.674244 2018] [wsgi:warn] [pid 16158:tid 139690338350976] mod_wsgi: Compiled for Python/2.7.11.
[Tue Jun 12 10:12:29.674304 2018] [wsgi:warn] [pid 16158:tid 139690338350976] mod_wsgi: Runtime using Python/2.7.12.
[Tue Jun 12 10:12:29.676957 2018] [mpm_event:notice] [pid 16158:tid 139690338350976] AH00489: Apache/2.4.18 (Ubuntu) mod_wsgi/4.3.0 Python/2.7.12 configured -- resuming normal operations
[Tue Jun 12 10:12:29.676985 2018] [core:notice] [pid 16158:tid 139690338350976] AH00094: Command line: '/usr/sbin/apache2'
Can somebody please help me solve this issue?

Issue solved.
According to the log, the error came from mod_wsgi. The Web Server Gateway Interface (WSGI) middleware pipeline for the Identity service is configured in the keystone-paste.ini file, so I verified my file against the sample keystone-paste.ini in the OpenStack docs, fixed the pipeline configuration, and the issue was solved.
I edited the /etc/keystone/keystone-paste.ini file.
Under [pipeline:public_api]
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id
changed the above line to:
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
I edited [pipeline:admin_api] the same way:
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id
changed pipeline to:
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
I also made changes in [pipeline:api_v3]:
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id
changed the above line to:
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
After making these changes, the issue was solved.
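The edit can be double-checked mechanically. A minimal sketch (the sample file name and the awk check are my own illustration, not part of the original fix; on a real node you would point the awk command at /etc/keystone/keystone-paste.ini):

```shell
# Write the three corrected pipeline sections to a local sample file,
# then print the last factory in each pipeline -- after the fix each
# pipeline should end in its dispatcher app.
cat > keystone-paste.sample.ini <<'EOF'
[pipeline:public_api]
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service

[pipeline:admin_api]
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service

[pipeline:api_v3]
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
EOF
awk '/^pipeline = / { print $NF }' keystone-paste.sample.ini
```

This prints public_service, admin_service, and service_v3 in turn, one dispatcher per API, which the broken pipelines above were missing.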

Related

Getting 404 for jupyterLab on jupyterHub

So I'm trying to set up a JupyterHub service with JupyterLab on it in production mode (CentOS). For simplicity, I have chosen system user authentication (PAM). So now I have several users with the ability to run individual servers. The problem is that I need to set up JupyterLab so it works properly for them.
I did everything (and probably a little more) that the JupyterHub docs tell you to do:
Enable jupyterlab extension system-wide (jupyter serverextension enable --py jupyterlab --sys-prefix)
Added the needed config option (c.Spawner.cmd = ["jupyter-labhub"])
Some other useless stuff
But the /lab URL still returns a 404 error.
Related console output:
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: Message: '404 GET /user/nataliya/lab (nataliya#::ffff:127.0.0.1) 57.16ms'
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: Arguments: ()
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: --- Logging error ---
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: Traceback (most recent call last):
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: File "/usr/local/lib64/python3.8/site-packages/tornado/web.py", line 1681, in _execute
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: result = self.prepare()
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: File "/usr/local/lib/python3.8/site-packages/notebook/base/handlers.py", line 697, in prepare
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: raise web.HTTPError(404)
Jun 29 14:48:51 jupyter-infra-1 pipenv[11806]: tornado.web.HTTPError: HTTP 404: Not Found
Probably not related console output:
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: raise web.HTTPError(404, u'Kernel does not exist: %s' % kernel_id)
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: tornado.web.HTTPError: HTTP 404: Not Found (Kernel does not exist: d85c98fa-2998-4fb3-85e2-8dc6ecbd093a)
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: During handling of the above exception, another exception occurred:
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: Traceback (most recent call last):
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: File "/usr/lib64/python3.8/logging/__init__.py", line 1081, in emit
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: msg = self.format(record)
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: File "/usr/lib64/python3.8/logging/__init__.py", line 925, in format
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: return fmt.format(record)
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: File "/usr/local/lib64/python3.8/site-packages/tornado/log.py", line 196, in format
Jun 29 14:39:36 jupyter-infra-1 pipenv[11806]: formatted = self._fmt % record.__dict__
I'm out of ideas, so I would appreciate any.
So, I fixed this by installing the jupyterlab package with pip3.
And this is how it went:
1. I tried to enable jupyterlab for a user with /home/jupyterhub/.local/bin/pipenv run jupyter serverextension enable --py jupyterlab --user
but that returned an exception:
ModuleNotFoundError: No module named 'jupyterlab'
I re-checked that my Pipfile.lock includes the jupyterlab package... and it did!
2. I tried to install jupyterlab the most standard way:
pip3 install jupyterlab
And it did the thing!
I'm still not sure why the jupyterlab installed with pipenv didn't work.
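A quick way to see whether a given interpreter can find the package at all (my own check, not from the post; run it with whichever interpreter JupyterHub actually spawns, e.g. via pipenv run):

```shell
# Prints True if the jupyterlab package is importable by this
# interpreter, False otherwise -- the ModuleNotFoundError above means
# the pipenv environment would have printed False here.
python3 -c 'import importlib.util; print(importlib.util.find_spec("jupyterlab") is not None)'
```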

FreeIPA Server Error - ipa: ERROR: No valid Negotiate header in server response

I have recently installed FreeIPA on RHEL7. It seems to run well for a few hours, and then calls to ipa start to fail with the following error.
ipa: ERROR: No valid Negotiate header in server response
==================================================
[root ~]# ipa -v user-find --all
ipa: INFO: trying https://xxx.xxx.xxx.xxx/ipa/json
ipa: INFO: [try 1]: Forwarding 'user_find/1' to json server 'https://xxx.xxx.xxx.xxx/ipa/json'
ipa: ERROR: No valid Negotiate header in server response
==================================================
[I have masked the hostnames with 'xxx']
In /var/log/httpd/error_log - I see the following error.
[Thu Dec 14 15:50:23.413286 2017] [auth_gssapi:error] [pid 10694] [client xxx.xxx.xxx.xxx:50198] GSS ERROR In Negotiate Auth: gss_accept_sec_context() failed: [Unspecified GSS failure. Minor code may provide more information ( Request ticket server HTTP/xxx.xxxx.xxxx.xxx#EC2.INTERNAL kvno 2 not found in keytab; keytab is likely out of date)], referer: https://xxx.xxx.xxx.xxx/ipa/xml
What is the possible cause? It looks like some misconfiguration.
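No answer was posted, but the httpd log itself points at the cause ("keytab is likely out of date"). A hedged diagnostic sketch based on that message; the paths and principal below are the FreeIPA defaults and may differ on your host:

```shell
# Compare the key version number (kvno) the KDC currently issues for the
# HTTP service with the kvno entries stored in Apache's keytab.
kinit admin
kvno HTTP/$(hostname -f)               # kvno issued by the KDC
klist -kt /etc/httpd/conf/ipa.keytab   # kvno entries in the keytab
# If they disagree, refresh the keytab (note: this generates a new key,
# invalidating any other copies of the HTTP keytab):
ipa-getkeytab -s $(hostname -f) -p HTTP/$(hostname -f) -k /etc/httpd/conf/ipa.keytab
```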

getting timeout error when committing large files on svn

We are using SVN on Ubuntu 14.04 with Eclipse Subversion and Apache. It works fine when we commit small files, but when we try to commit a large file, it gives the following error.
eclipse error:
Some of selected resources were not committed.
Some of selected resources were not committed.
svn: E175002: Commit failed (details follow):
svn: E175002: Commit failed (details follow):
svn: E175002: can not read HTTP status line
svn: E175002: PUT request failed on '/svn/test/!svn/wrk/953b88fa-5601-0010-8146-c3b0661fb4b6/trunk/*/TokenManagerImpl.java'
apache error.log
[Mon Sep 05 19:12:18.533736 2016] [dav:error] [pid 26083:tid 140002512074496] (70007)The timeout specified has expired: [client 182.75.153.50:56725] Timeout reading the body (URI: /svn/test/!svn/wrk/953b88fa-5601-0010-8146-c3b0661fb4b6/trunk/*/TokenManagerImpl.java) [408, #0]
[Mon Sep 05 19:12:18.533851 2016] [dav:error] [pid 26083:tid 140002512074496] [client 182.75.153.50:56725] mod_dav_svn close_stream: error closing write stream [500, #185004]
[Mon Sep 05 19:12:18.533876 2016] [dav:error] [pid 26083:tid 140002512074496] [client 182.75.153.50:56725] Unexpected end of svndiff input [500, #185004]
Below is the Apache timeout configuration (apache.conf):
Timeout 300
KeepAlive On
MaxKeepAliveRequests 250
KeepAliveTimeout 20
LimitRequestBody 0
Below is reqtimeout.conf:
<IfModule reqtimeout_module>
RequestReadTimeout header=200-400,minrate=5000
RequestReadTimeout body=1000,minrate=5000
</IfModule>
It seems that an invalid RequestReadTimeout configuration is the root cause. You should not put a hard timeout on the request body.
Apache Subversion transfers commit data as the request body, and its size depends on the size of the commit. Therefore, with the configuration you currently have, any commit operation that takes more than ~1000 seconds will fail for your users.
Just in case someone else ends up here with:
svn commit via http
Large binary file ~80MB.
Transmitting file data .
svn: E185004: Commit failed (details follow):
svn: E185004: Unexpected end of svndiff input
Change the header setting in the apache2 reqtimeout.conf file as well.
(The symptom is that the svn commit croaks after about 40 seconds...)
RequestReadTimeout header=200-400,minrate=500
That fixed it for me. "Your mileage may vary..."
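Putting the two answers together, a relaxed reqtimeout.conf might look like the sketch below. The exact numbers are assumptions to tune for your clients' upload speed, not values from either answer; the idea is to lower minrate (so slow uploads keep extending the window) and to avoid a hard upper bound on total body-read time:

```apache
<IfModule reqtimeout_module>
    # Allow 200s for headers, extendable up to 400s while data arrives
    # at >= 500 bytes/s
    RequestReadTimeout header=200-400,minrate=500
    # Short initial body timeout, extended indefinitely (no upper bound
    # given) as long as the client keeps sending >= 500 bytes/s
    RequestReadTimeout body=10,minrate=500
</IfModule>
```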

Error with Schedulers and Inbound Emails in sugarcrm

We use SugarCRM CE 6.5.16 on CentOS 6.5.
I am getting this error:
Wed Apr 9 15:37:10 2014 [10389][1][ERROR] Unable to load custom logic file: include/SugarSearchEngine/SugarSearchEngineQueueManager.php
The real problem is that I don't receive emails from my inbound email account.
It is all set up, and I added the cron job to the crontab. Well, actually I do receive some emails, maybe 3 or 4 out of 100. In the Schedulers module the job status is "running" and the last successful run is "Never".
Every other scheduler job has status "Done" and a last successful run.
I repaired inbound emails and scheduler jobs, but with no effect.
The only thing I found is this:
http://suitecrm.com/forum/search?query=SugarSearchEngineQueueManager&searchdate=all&childforums=1
So I commented out this code, and I no longer get the error, but I still don't receive emails.
I don't know what else to do.
Please help me if you can! Thanks!
EDIT
I found this:
"This file is only included in the PRO version and is useless in Community Edition.
Code fix:
1. Comment out the code in /custom/Extension/application/Ext/LogicHooks/SugarFTSHooks.php
2. Do a Fast Rebuild from Administration (index.php?module=Administration&action=repair). This process rebuilds /custom/application/Ext/LogicHooks/logichooks.ext.php, the generated file through which SugarCRM calls the nonexistent SugarSearchEngineQueueManager file."
So I commented out the code and did the rebuild (yes, I had done it before, but now I know for sure that this file should not be in SugarCRM CE).
The error doesn't show anymore, but my scheduler still stays "running" and nothing happens, except:
When I did what Matthew Poer said, I received 4 emails, just like before... so something is still causing a problem.
delete from job_queue where scheduler_id = 'THE_SCHEDULER_ID';
update schedulers set last_run = subdate(now(),360) where id = 'THE_SCHEDULER_ID';
EDIT 2:
This is from php error_log
[Sun Apr 13 03:34:27 2014] [notice] Digest: generating secret for digest authentication ...
[Sun Apr 13 03:34:27 2014] [notice] Digest: done
[Sun Apr 13 03:34:33 2014] [notice] Apache/2.2.15 (Unix) DAV/2 mod_nss/2.2.15 NSS/3.14.0.0 Basic ECC PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips mod_wsgi/3.2 Python/2.6.6 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
[Sun Apr 13 12:22:52 2014] [error] [client 122.155.18.51] File does not exist: /usr/share/phpMyAdmin/translators.html
[Sun Apr 13 13:45:31 2014] [error] [client 122.155.18.51] File does not exist: /usr/share/phpMyAdmin/translators.html
[Sun Apr 13 15:43:39 2014] [error] [client 66.249.66.74] File does not exist: /opt/otrs/var/httpd/htdocs/js/js-cache/ModuleJS_784dc12bf89d72db064caa6e8690168b.js
[Sun Apr 13 15:43:40 2014] [error] [client 66.249.66.74] File does not exist: /opt/otrs/var/httpd/htdocs/skins/Customer/default/css-cache/CommonCSS_b1f924c426a0e1a9f1553197a2ce25a4.css
[Sun Apr 13 15:43:41 2014] [error] [client 66.249.66.74] File does not exist: /opt/otrs/var/httpd/htdocs/js/js-cache/CommonJS_7f98ddff2f339e3b515f7901d82600bb.js
[Mon Apr 14 11:09:04 2014] [error] [client 192.168.10.1] PHP Warning: file_get_contents(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/share/phpMyAdmin/version_check.php on line 16, referer: http://support.expert-m.net/phpmyadmin/main.php?token=d2e60372f8b5d6d53f0c3c80a536be27
[Mon Apr 14 11:09:04 2014] [error] [client 192.168.10.1] PHP Warning: file_get_contents(http://www.phpmyadmin.net/home_page/version.json): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/share/phpMyAdmin/version_check.php on line 16, referer: http://support.expert-m.net/phpmyadmin/main.php?token=d2e60372f8b5d6d53f0c3c80a536be27
[Mon Apr 14 12:45:25 2014] [error] [client 178.235.72.68] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
This is from the logs folder of SugarCRM, error.log
[Mon Apr 14 08:58:59 2014] [error] [client 192.168.10.1] PHP Notice: Undefined index: 8854a79c-6171-036c-e7df-534548e8bc81 in /var/www/sugarcrm/public_html/modules/Emails/EmailUIAjax.php on line 879, referer: http://sugarcrm.support.expert-m.net/index.php?module=Emails&action=index&parentTab=All
[Mon Apr 14 08:58:59 2014] [error] [client 192.168.10.1] PHP Notice: Undefined index: 8854a79c-6171-036c-e7df-534548e8bc81 in /var/www/sugarcrm/public_html/modules/Emails/EmailUIAjax.php on line 880, referer: http://sugarcrm.support.expert-m.net/index.php?module=Emails&action=index&parentTab=All
[Mon Apr 14 11:22:17 2014] [error] [client 192.168.10.1] PHP Notice: Undefined index: 8854a79c-6171-036c-e7df-534548e8bc81 in /var/www/sugarcrm/public_html/modules/Emails/EmailUIAjax.php on line 879, referer: http://sugarcrm.support.expert-m.net/index.php?module=Emails&action=index&parentTab=All
[Mon Apr 14 11:22:17 2014] [error] [client 192.168.10.1] PHP Notice: Undefined index: 8854a79c-6171-036c-e7df-534548e8bc81 in /var/www/sugarcrm/public_html/modules/Emails/EmailUIAjax.php on line 880, referer: http://sugarcrm.support.expert-m.net/index.php?module=Emails&action=index&parentTab=All
[Mon Apr 14 11:24:47 2014] [error] [client 192.168.10.1] File does not exist: /var/www/sugarcrm/public_html/favicon.ico
I didn't paste all the info from the logs. There is more, but the errors just repeat.
The file include/SugarSearchEngine/SugarSearchEngineQueueManager.php won't exist on your system because it's a Pro+ feature.
To reset a scheduler entry that got "stuck," delete the scheduler's entries from the job queue in the database and reset its last_run value. Find the scheduler's ID from the URL within SugarCRM or with select id,name from schedulers. Once you have the ID, run these two queries:
delete from job_queue where scheduler_id = 'THE_SCHEDULER_ID';
update schedulers set last_run = subdate(now(),360) where id = 'THE_SCHEDULER_ID';
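Since the remaining jobs depend on cron actually firing, it may also be worth re-checking the crontab entry itself. For reference, the standard SugarCRM cron line (the install path /var/www/sugarcrm/public_html is an assumption taken from the logs above):

```shell
* * * * * cd /var/www/sugarcrm/public_html; php -f cron.php > /dev/null 2>&1
```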

Perl Catalyst and FastCgi error logging issues

I have a Catalyst app running through FastCGI, and the Apache error logs are useless.
Example:
[Thu Oct 13 08:44:35 2011] [error] [client {IP}] FastCGI: server "/usr/local/www/handprints2/script/handprints2_fastcgi.pl" stderr: | -> handprints2::View::json->process | 0.000523s |, referer: https://[SERVER]/handprints2/
[Thu Oct 13 08:44:35 2011] [error] [client {IP}] FastCGI: server "/usr/local/www/handprints2/script/handprints2_fastcgi.pl" stderr: | /end | 0.000324s |, referer: https://[SERVER]handprints2/
[Thu Oct 13 08:44:35 2011] [error] [client {IP}] FastCGI: server "/usr/local/www/handprints2/script/handprints2_fastcgi.pl" stderr: '------------------------------------------------------------+-----------', referer: https://[SERVER]/handprints2/
Is there a way to fix this?
You can configure your own log feed and format in Apache using the TransferLog and LogFormat directives:
TransferLog /tmp/sample.log
LogFormat "bazinga -> %U"
See Apache 2.0 Logging Directives or Apache 1.3 Logging Directives
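A slightly fuller sketch of that approach (the file path and format string here are illustrative, not from the answer): give the format a nickname with LogFormat and attach it to a dedicated file with CustomLog, so the app's lines stop mixing into the main error log:

```apache
# Named log format plus a dedicated access log for the FastCGI app
LogFormat "%h %t \"%r\" %>s %b" handprints2_fmt
CustomLog /var/log/apache2/handprints2_access.log handprints2_fmt
```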
I had the same problem and didn't find the Apache log config route all that convenient.
This does the job pretty well though: https://metacpan.org/pod/Catalyst::Plugin::Log::Handler
Description from CPAN:
If your Catalyst project logs many messages, logging via standard
error to Apache's error log is not very clean: The log messages are
mixed with other web applications' noise; and especially if you use
mod_fastcgi, every line will be prepended with a long prefix.
An alternative is logging to a file. But then you have to make sure
that multiple processes won't corrupt the log file. The module
Log::Handler by Jonny Schulz does exactly this, because it supports
message-wise flocking.
This module is a wrapper for said Log::Handler.