Not able to start localstack docker on Mac - localstack

I have installed LocalStack on my local machine, using the Docker version below.
Docker version: 18.09.1
I also created a tmp directory on my Mac:
TMPDIR=/private$TMPDIR
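For reference, the full sequence I run looks like this (a minimal recap of my shell setup; the /private remapping follows the LocalStack README's note for Mac):
export TMPDIR=/private$TMPDIR   # remap TMPDIR so the Docker daemon can mount the host temp folder
localstack start --docker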
LocalStack works fine when run without Docker. Since I am writing test cases, I thought I would start the Docker container and run my tests against it.
Here is my log output. Can you help me figure out what could be wrong?
localstack start --docker
Starting local dev environment. CTRL-C to quit.
docker run -it -e LOCALSTACK_HOSTNAME="localhost" -p 8080:8080 -p 443:443 -p 4567-4584:4567-4584 -p 4590-4593:4590-4593 -v "/private/var/folders/_t/t3f89j5n6r1cdsztj2pc9rg00000gq/T/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/private/var/folders/_t/t3f89j5n6r1cdsztj2pc9rg00000gq/T/localstack" "localstack/localstack"
2019-02-15 19:40:44,638 CRIT Supervisor running as root (no user in config file)
2019-02-15 19:40:44,641 INFO supervisord started with pid 1
2019-02-15 19:40:45,646 INFO spawned: 'dashboard' with pid 8
2019-02-15 19:40:45,649 INFO spawned: 'infra' with pid 9
(. .venv/bin/activate; bin/localstack web)
(. .venv/bin/activate; exec bin/localstack start)
2019-02-15 19:40:46,667 INFO success: dashboard entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-02-15 19:40:46,668 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Starting local dev environment. CTRL-C to quit.
2019-02-15T19:40:46:INFO:werkzeug: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
2019-02-15T19:40:46:INFO:werkzeug: * Restarting with stat
2019-02-15T19:40:47:WARNING:werkzeug: * Debugger is active!
2019-02-15T19:40:47:INFO:werkzeug: * Debugger PIN: 411-291-669
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock CloudWatch (http port 4582)...
Starting mock CloudFormation (http port 4581)...
Starting mock SSM (http port 4583)...
Starting mock SQS (http port 4576)...
Starting mock Secrets Manager (http port 4584)...
Starting local Elasticsearch (http port 4571)...
Starting mock SNS (http port 4575)...
Starting mock STS (http port 4592)...
Starting mock DynamoDB Streams service (http port 4570)...
Starting mock Firehose service (http port 4573)...
Starting mock Route53 (http port 4580)...
Starting mock ES service (http port 4578)...
Starting mock Lambda service (http port 4574)...
2019-02-15T19:41:14:WARNING:infra.pyc: Service "dynamodb" not yet available, retrying...
2019-02-15T19:41:45:WARNING:infra.pyc: Service "dynamodb" not yet available, retrying...
2019-02-15T19:41:59:WARNING:infra.pyc: Service "dynamodb" not yet available, retrying...
2019-02-15T19:42:47:WARNING:infra.pyc: Service "s3" not yet available, retrying...
2019-02-15T19:43:22:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:25:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:29:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:32:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:35:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:39:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:42:ERROR:localstack.services.es.es_starter: Elasticsearch health check failed (retrying...): TransportError(502, u'')
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/es/es_starter.py", line 59, in check_elasticsearch
out = es.cat.aliases()
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 76, in _wrapped
return func(*args, params=params, **kwargs)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/elasticsearch/client/cat.py", line 23, in aliases
'aliases', name), params=params)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/elasticsearch/transport.py", line 314, in perform_request
status, headers_response, data = connection.perform_request(method, url, params, body, headers=headers, ignore=ignore, timeout=timeout)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/elasticsearch/connection/http_requests.py", line 90, in perform_request
self._raise_error(response.status_code, raw_data)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
TransportError: TransportError(502, u'')
2019-02-15T19:43:42:WARNING:infra.pyc: Service "elasticsearch" not yet available, retrying...
2019-02-15T19:43:42:ERROR:infra.pyc: Error checking state of local environment (after some retries):
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/infra.py", line 353, in check_infra
raise e
AssertionError
Error starting infrastructure:
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/infra.py", line 491, in start_infra
check_infra(apis=apis)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 362, in check_infra
check_infra(retries - 1, expect_shutdown=expect_shutdown, apis=apis, additional_checks=additional_checks)
File "/opt/code/localstack/localstack/services/infra.py", line 360, in check_infra

Related

AWX-Web error when installing awx-operator on Kubernetes

I am currently installing the awx-operator, and I have come across an issue while trying to expose the application to the outside world: an error with the awx-web container within the awx-5b58db49c-9r4hp pod.
When I run kubectl logs pod/awx-5b58db49c-9r4hp -c awx-web, I get the following output:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 81, in _ctit_db_wrapper
yield
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 411, in __getattr__
value = self._get_local(name)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/awx/conf/settings.py", line 355, in _get_local
setting = Setting.objects.filter(key=name, user__isnull=True).order_by('pk').first()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 653, in first
for obj in (self if self.ordered else self.order_by('pk'))[:1]:
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 274, in __iter__
self._fetch_all()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/models/sql/compiler.py", line 1140, in execute_sql
cursor = self.connection.cursor()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 256, in cursor
return self._cursor()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 233, in _cursor
self.ensure_connection()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
self.connection = self.get_new_connection(conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
connection = Database.connect(**conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: FATAL: password authentication failed for user "awx"
2021-05-12 14:28:54,478 ERROR [-] awx.conf.settings Database settings are not available, using defaults.
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/oauth2_provider/settings.py", line 138, in __getattr__
val = self.user_settings[attr]
KeyError: 'OAUTH2_VALIDATOR_CLASS'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
self.connect()
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 195, in connect
self.connection = self.get_new_connection(conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
connection = Database.connect(**conn_params)
File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: password authentication failed for user "awx"
I am not too sure whether this is a big deal or just a red herring, and I am in need of some clarification. If you need any more information from me to aid troubleshooting, please let me know!
As per AWX 19.0.0: password authentication failed for user "awx", the issue is no longer present with minikube v1.20.0 and awx-operator 0.9.0, so my advice is to try those versions for now.
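If it helps, pinning the operator version looked roughly like this for me (a sketch only; the raw-manifest path is how the pre-kustomize operator releases were installed, so treat the exact URL as an assumption):
# deploy the awx-operator release that reportedly no longer shows the issue
kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/0.9.0/deploy/awx-operator.yaml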
I was also observing the same error message => password authentication failed for user "awx".
I was running awx-operator version 0.10.0 on my kubernetes cluster, created using kubeadm rather than minikube.
I hosted the postgres persistent volume needed for the awx postgres pod on my worker node, which already had some stale data migrated from a different kubernetes cluster. I had to clean up that leftover data at the hostPath on my worker node where I mounted my persistent volume and do a fresh install with fresh data from the postgres pod, and the password authentication error never came back.
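Roughly, the cleanup looked like this (a hedged sketch; the PVC name and hostPath are illustrative, yours will differ):
# release the old volume so postgres can initialize fresh data
kubectl delete pvc postgres-awx-postgres-0 -n awx   # illustrative PVC name
# on the worker node, wipe the stale data backing the hostPath volume
sudo rm -rf /mnt/data/postgres/*                    # illustrative hostPath
# then re-deploy the operator and AWX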

Connection to local postgresql database fails after upgrade to Big Sur

I use IntelliJ IDEA's bundled database client (DataGrip) to manage my database connections, both local and remote, and I use Docker to run postgres with the following settings:
services:
  postgresql:
    image: postgres:11
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=$user
      - POSTGRES_PASSWORD=$pass
      - POSTGRES_DB=k$db
After upgrading from Catalina to Big Sur, the connection to the local db fails with the following connection error:
[08001] Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
java.net.ConnectException: Connection refused (Connection refused).
When I run docker-compose up, I get the following error:
Traceback (most recent call last):
File "site-packages/urllib3/connectionpool.py", line 677, in urlopen
File "site-packages/urllib3/connectionpool.py", line 392, in _make_request
File "http/client.py", line 1252, in request
File "http/client.py", line 1298, in _send_request
File "http/client.py", line 1247, in endheaders
File "http/client.py", line 1026, in _send_output
File "http/client.py", line 966, in send
File "site-packages/docker/transport/unixconn.py", line 43, in connect
ConnectionRefusedError: [Errno 61] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "site-packages/requests/adapters.py", line 449, in send
File "site-packages/urllib3/connectionpool.py", line 727, in urlopen
File "site-packages/urllib3/util/retry.py", line 403, in increment
File "site-packages/urllib3/packages/six.py", line 734, in reraise
File "site-packages/urllib3/connectionpool.py", line 677, in urlopen
File "site-packages/urllib3/connectionpool.py", line 392, in _make_request
File "http/client.py", line 1252, in request
File "http/client.py", line 1298, in _send_request
File "http/client.py", line 1247, in endheaders
File "http/client.py", line 1026, in _send_output
File "http/client.py", line 966, in send
File "site-packages/docker/transport/unixconn.py", line 43, in connect
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "site-packages/docker/api/client.py", line 205, in _retrieve_server_version
File "site-packages/docker/api/daemon.py", line 181, in version
File "site-packages/docker/utils/decorators.py", line 46, in inner
File "site-packages/docker/api/client.py", line 228, in _get
File "site-packages/requests/sessions.py", line 543, in get
File "site-packages/requests/sessions.py", line 530, in request
File "site-packages/requests/sessions.py", line 643, in send
File "site-packages/requests/adapters.py", line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker-compose", line 3, in <module>
File "compose/cli/main.py", line 67, in main
File "compose/cli/main.py", line 123, in perform_command
File "compose/cli/command.py", line 69, in project_from_options
File "compose/cli/command.py", line 132, in get_project
File "compose/cli/docker_client.py", line 43, in get_client
File "compose/cli/docker_client.py", line 170, in docker_client
File "site-packages/docker/api/client.py", line 188, in __init__
File "site-packages/docker/api/client.py", line 213, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))
[1269] Failed to execute script docker-compose
Connections to remote dbs are somehow not broken; they work. Has anyone come across this problem?
After the Big Sur upgrade, I also got a warning whenever I opened a new terminal:
zsh compinit: insecure directories, run compaudit for list.
Ignore insecure directories and continue [y] or abort compinit [n]?
I did not think they were related, but the solution to this problem, explained in this thread, also solved the main postgresql connection issue for me. However, my computer occasionally restarts on its own, and after such a restart I again hit the main problem when I run docker-compose up inside the IDE's terminal. I then restart manually and it works. Although not a permanent solution, this solved my problem for now.
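For completeness, the fix from that thread amounts to tightening permissions on whatever compaudit reports (standard zsh advice):
# remove group/world write from the insecure completion directories
compaudit | xargs chmod g-w,o-w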

Not able to run OpenDistro for Elastic in kubernetes as non-root - supervisord error

I am setting up OpenDistro for Elastic in Kubernetes. The cluster has pod security in place that will not allow privileged pods. When I start the cluster, the logs indicate a permission issue with /usr/share/supervisor/supervisord.log.
I have a securityContext set on the deployment:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
The error message from kubectl logs es-master-0 is:
/usr/share/elasticsearch/config/elasticsearch.yml seems to be already configured for Security. Quit.
Traceback (most recent call last):
File "/usr/bin/supervisord", line 9, in <module>
load_entry_point('supervisor==4.0.2', 'console_scripts', 'supervisord')()
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/supervisord.py", line 358, in main
go(options)
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/supervisord.py", line 368, in go
d.main()
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/supervisord.py", line 70, in main
self.options.make_logger()
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/options.py", line 1472, in make_logger
backups=self.logfile_backups,
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/loggers.py", line 417, in handle_file
handler = RotatingFileHandler(filename, 'a', maxbytes, backups)
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/loggers.py", line 212, in __init__
FileHandler.__init__(self, filename, mode)
File "/usr/lib/python2.7/site-packages/supervisor-4.0.2-py2.7.egg/supervisor/loggers.py", line 159, in __init__
self.stream = open(filename, mode)
IOError: [Errno 13] Permission denied: '/usr/share/supervisor/supervisord.log'
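One workaround I am considering is mounting a writable emptyDir over the directory supervisord writes its log to, so the non-root user can create the file. This is an untested sketch against my deployment spec, not something from the OpenDistro docs:
containers:
  - name: elasticsearch
    volumeMounts:
      - name: supervisor-logs
        mountPath: /usr/share/supervisor   # writable for runAsUser 1000
volumes:
  - name: supervisor-logs
    emptyDir: {}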

ansible k8s module failing to connect to cluster with 503 - appends /version/openshift to non-OpenShift cluster

I'm trying to use the new ansible k8s module (based on k8s_raw from 2.6) to maintain an AKS k8s cluster.
While I can work with the cluster via kubectl, any command using the k8s module fails with a 503 error.
For example, this task:
- name: deploy kured daemonset
  k8s:
    state: present
    context: "{{ cluster_name }}"
    host: "redacted" # tried specifying this, but does not help
    kubeconfig: "~/.kube/config"
    src: "aks/utils/kured-ds.yaml"
And the failure:
Traceback (most recent call last):
File "/home/alonisser/.ansible/tmp/ansible-tmp-1549320815.98-157731551192134/AnsiballZ_k8s.py", line 113, in <module>
_ansiballz_main()
File "/home/alonisser/.ansible/tmp/ansible-tmp-1549320815.98-157731551192134/AnsiballZ_k8s.py", line 105, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/alonisser/.ansible/tmp/ansible-tmp-1549320815.98-157731551192134/AnsiballZ_k8s.py", line 48, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/tmp/ansible_k8s_payload_IYmGFG/__main__.py", line 233, in <module>
File "/tmp/ansible_k8s_payload_IYmGFG/__main__.py", line 229, in main
File "/tmp/ansible_k8s_payload_IYmGFG/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py", line 131, in execute_module
File "/tmp/ansible_k8s_payload_IYmGFG/ansible_k8s_payload.zip/ansible/module_utils/k8s/common.py", line 172, in get_api_client
File "/home/alonisser/.local/lib/python2.7/site-packages/openshift/dynamic/client.py", line 103, in __init__
self.__init_cache()
File "/home/alonisser/.local/lib/python2.7/site-packages/openshift/dynamic/client.py", line 113, in __init_cache
self.__resources.update(self.parse_api_groups())
File "/home/alonisser/.local/lib/python2.7/site-packages/openshift/dynamic/client.py", line 169, in parse_api_groups
new_group[version] = self.get_resources_for_api_version(prefix, group['name'], version, preferred)
File "/home/alonisser/.local/lib/python2.7/site-packages/openshift/dynamic/client.py", line 181, in get_resources_for_api_version
resources_response = load_json(self.request('GET', path))['resources']
File "/home/alonisser/.local/lib/python2.7/site-packages/openshift/dynamic/client.py", line 363, in request
_return_http_data_only=params.get('_return_http_data_only', True)
File "/home/alonisser/.local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/home/alonisser/.local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/home/alonisser/.local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/home/alonisser/.local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/home/alonisser/.local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (503)
Reason: Service Unavailable
Ansible version: 2.7/8 (dev)
What am I missing?
UPDATE:
When I added print statements to the libraries used by the module, I found that somewhere in the pipeline /version/openshift is appended to the host name, which of course fails because it's a non-OpenShift cluster.
Is there any workaround for this bug?
Answer: it turned out there were two failing requests. The first, to /version/openshift, is caught by the client and doesn't cause the crash. The crash actually happened because of an error with my cluster's metrics server, which, while not really needed by the k8s client used by ansible, still fails a request made to it.
So if anyone bumps into this, the above might be helpful.
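A quick way to spot a broken aggregated API like this, in hindsight (the metrics APIService name below is the usual one, treat it as an assumption):
# any APIService stuck at Available=False breaks client-side API discovery
kubectl get apiservices | grep -i false
# in my case the culprit was the metrics server, e.g. v1beta1.metrics.k8s.io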

elasticsearch-curator k8s Helm chart cannot connect to HTTPS

I am using the following Helm chart: https://github.com/kubernetes/charts/tree/master/incubator/elasticsearch-curator and passing the following in my values.yaml file:
config:
  elasticsearch:
    hosts:
      - my-es-aws-endpoint
    port: 443
    ssl: True
In the pods logs I see the following exception:
Preparing Action ID: 1, "delete_indices"
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
six.raise_from(e, None)
File "<string>", line 2, in raise_from
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.6/http/client.py", line 1331, in getresponse
response.begin()
File "/usr/local/lib/python3.6/http/client.py", line 297, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.6/http/client.py", line 266, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
It seems like it is trying to connect over HTTP, not HTTPS. I have tested the connection from my k8s cluster to es:443 and it works.
Do you know if HTTPS is not supported, or am I doing something wrong?
...
It looks like I was passing the config in the wrong section, so it was not being picked up properly. I passed it here instead and it works:
# Having config_yml WILL override the other config
config_yml: |-
  ---
  client:
    hosts:
      - my-es-aws-endpoint
    port: 443
    use_ssl: True
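For anyone else hitting this, a usage sketch of installing the chart with that values file (Helm 2 syntax, matching the era of the incubator chart; the release name is arbitrary):
helm install --name curator incubator/elasticsearch-curator -f values.yaml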