How to install collections in an AWX container - ansible-awx

Is it possible to install collections in an AWX container?
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4d255148cc8 netboxcommunity/netbox:latest "/opt/netbox/docker-…" 5 days ago Up 5 days 0.0.0.0:8000->8080/tcp, :::8000->8080/tcp netbox-docker_netbox_1
ac0784c16861 netboxcommunity/netbox:latest "/opt/netbox/venv/bi…" 5 days ago Up 5 days netbox-docker_netbox-worker_1
31b850bf8d12 redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis-cache_1
df0977f446f4 postgres:12-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 5432/tcp netbox-docker_postgres_1
983b698274af redis:6-alpine "docker-entrypoint.s…" 5 days ago Up 5 days 6379/tcp netbox-docker_redis_1
4150e6ae71cc ansible/awx:17.1.0 "/usr/bin/tini -- /u…" 6 days ago Up 6 days 8052/tcp awx_task
5583bbf60d45 ansible/awx:17.1.0 "/usr/bin/tini -- /b…" 6 days ago Up 6 days 0.0.0.0:80->8052/tcp, :::80->8052/tcp awx_web
c9d92412d1cd redis "docker-entrypoint.s…" 6 days ago Up 6 days 6379/tcp awx_redis
71b99bde5d41 postgres:12 "docker-entrypoint.s…" 6 days ago Up 6 days 5432/tcp awx_postgres

Got the solution:
Create a folder named collections, and inside it create a file named requirements.yml with the following content:
collections:
- name: community.general
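With the AWX 17 Docker deployment, a collections/requirements.yml placed at the root of the project is normally picked up automatically when the project is synced. As a hedged fallback, the collection can also be installed manually inside the task container (awx_task comes from the docker ps output above; the projects path and the <project> name are placeholders and depend on how your project directories are mounted):

```shell
# Manual install inside the running task container (hedged sketch;
# /var/lib/awx/projects is the default AWX projects directory and
# <project> is a placeholder for your project folder).
docker exec -it awx_task \
  ansible-galaxy collection install -r /var/lib/awx/projects/<project>/collections/requirements.yml
```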

Related

How to properly query Kafka REST Proxy?

I'm running a dockerized distribution of the Confluent Platform:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e6963904b485 confluentinc/cp-enterprise-control-center:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9021->9021/tcp, :::9021->9021/tcp control-center
49ade0e752b4 confluentinc/cp-ksqldb-cli:7.0.1 "/bin/sh" 11 hours ago Up 11 hours ksqldb-cli
95b0982c0159 confluentinc/ksqldb-examples:7.0.1 "bash -c 'echo Waiti…" 11 hours ago Up 11 hours ksql-datagen
e28e3b937f6e confluentinc/cp-ksqldb-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp ksqldb-server
af92bfb84cb1 confluentinc/cp-kafka-rest:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8082->8082/tcp, :::8082->8082/tcp rest-proxy
318a999e76dc cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8083->8083/tcp, :::8083->8083/tcp, 9092/tcp connect
0c299fbda7c5 confluentinc/cp-schema-registry:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp schema-registry
a33075002386 confluentinc/cp-server:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 0.0.0.0:9092->9092/tcp, :::9092->9092/tcp, 0.0.0.0:9101->9101/tcp, :::9101->9101/tcp broker
135f832fbccb confluentinc/cp-zookeeper:7.0.1 "/etc/confluent/dock…" 11 hours ago Up 11 hours 2888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 3888/tcp zookeeper
Kafka REST Proxy is running on port 8082
When I issue an HTTP GET request against the REST Proxy:
curl --silent -X GET http://10.0.0.253:8082/kafka/clusters/ | jq
All I get is:
{
"error_code": 404,
"message": "HTTP 404 Not Found"
}
Given my configuration, what can I change to actually get some useful information out of Kafka REST Proxy?
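Assuming a stock cp-kafka-rest deployment with no custom path prefix, one likely cause is the URL path: the standalone REST Proxy serves its v3 Admin API under /v3, not /kafka, so the cluster listing would be reached like this (host and port taken from the question):

```shell
# The /kafka prefix belongs to the REST endpoints embedded in Confluent
# Server, not to the standalone cp-kafka-rest proxy; against the proxy
# the v3 API lives under /v3 (assumption: default configuration).
curl --silent http://10.0.0.253:8082/v3/clusters | jq
```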

Keycloak LetsEncrypt Nginx Reverse Proxy Docker Compose

I am trying to set up a Keycloak instance with an SSL connection behind an nginx reverse proxy. My docker ps output:
d7fd473cc77b jboss/keycloak "/opt/jboss/tools/do…" 34 minutes ago Up 8 minutes 0.0.0.0:8080->8080/tcp, 8443/tcp auth
76e757bbe129 mariadb "sh -c ' echo 'CREAT…" 34 minutes ago Up 8 minutes 0.0.0.0:3306->3306/tcp backend-database
d99e23470955 stilliard/pure-ftpd:hardened-latest "/bin/sh -c '/run.sh…" 34 minutes ago Up 8 minutes 0.0.0.0:21->21/tcp, 0.0.0.0:30000->30000/tcp, 30001-30009/tcp ftp-server
95f4fbdea0de wordpress:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 80/tcp wordpress
b3e40ca6de48 mariadb:latest "docker-entrypoint.s…" 35 minutes ago Up 35 minutes 3306/tcp database
e5c12bb5ba52 nginx "/docker-entrypoint.…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-web
c0ac90a6c408 jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/entr…" 37 minutes ago Up 37 minutes nginx-letsencrypt
33ae7de5f598 jwilder/docker-gen "/usr/local/bin/dock…" 37 minutes ago Up 37 minutes nginx-gen
As you can see in the console output above, I am also running a WordPress instance in a Docker container, and that works like a charm: no problems with unsigned or invalid SSL certificates, everything is fine. But when I try to reach the Keycloak web interface over the domain with the corresponding port (in my case: 8080), I get the following error:
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
And when I try to reach the web interface over the IP address, also with the corresponding port, I get a message that the connection isn't secure.
Hopefully this is enough information for you to figure out what I've done wrong.
So far,
Daniel
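For reference, SSL_ERROR_RX_RECORD_TOO_LONG typically means the browser spoke HTTPS to a port that is serving plain HTTP. With the jwilder/nginx-proxy plus letsencrypt-companion setup shown above, TLS should terminate at nginx and Keycloak should be reached through the proxy rather than on port 8080 directly. A hedged sketch of the environment variables involved, under the assumption that this compose stack is in use (the domain and email are placeholders):

```yaml
# Hypothetical excerpt for the Keycloak service in docker-compose.yml.
auth:
  image: jboss/keycloak
  environment:
    # Tell nginx-proxy / letsencrypt-companion how to route and request certs:
    VIRTUAL_HOST: auth.example.com        # placeholder domain
    VIRTUAL_PORT: "8080"                  # Keycloak's internal HTTP port
    LETSENCRYPT_HOST: auth.example.com
    LETSENCRYPT_EMAIL: admin@example.com  # placeholder
    # Make Keycloak trust X-Forwarded-* headers from the proxy:
    PROXY_ADDRESS_FORWARDING: "true"
```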

Kubeadm - no port 6443 after cluster creation

I'm trying to create a Kubernetes HA cluster using kubeadm.
kubeadm version: v1.11.1
I'm following these instructions: kubeadm ha
Everything passed OK, except the final point: nodes can't see each other on port 6443.
sudo netstat -an | grep 6443
Shows nothing.
In journalctl -u kubelet I see the following error:
reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://<LB>:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-19-111-200.ec2.internal&limit=500&resourceVersion=0: dial tcp 172.19.111.200:6443: connect: connection refused
List of docker runs on instance:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3eabb527a92 0e4a34a3b0e6 "kube-scheduler --ad…" 19 hours ago Up 19 hours k8s_kube-scheduler_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_1
123e78fa73c7 55b70b420785 "kube-controller-man…" 19 hours ago Up 19 hours k8s_kube-controller-manager_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_1
e0aa05e74fb9 1d3d7afd77d1 "/usr/local/bin/kube…" 19 hours ago Up 19 hours k8s_kube-proxy_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
f5eac0b8fe7b k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-proxy-xh5dg_kube-system_f6bc49bc-959e-11e8-be29-0eaa4481e274_0
541011b3e83a k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_etcd-ip-172-19-111-200.ec2.internal_kube-system_84d934eebaace20c70e0f268eb100028_0
a5e203947686 k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-scheduler-ip-172-19-111-200.ec2.internal_kube-system_31eabaff7d89a40d8f7e05dfc971cdbd_0
89dbcdda659c k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-apiserver-ip-172-19-111-200.ec2.internal_kube-system_4202bb793950ae679b2a433ea8711d18_0
5948e629d90e k8s.gcr.io/pause:3.1 "/pause" 19 hours ago Up 19 hours k8s_POD_kube-controller-manager-ip-172-19-111-200.ec2.internal_kube-system_85384ca66dd4dc0adddc63923e2425a8_0
Forwarding in sysctl exists:
sudo sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.ip_forward = 1
It seems like your API server is not running.
The error :6443: connect: connection refused points toward your API server not running.
This is further confirmed by your list of running Docker containers on the instance: the API server container is missing. Note that you have the related "/pause" container, but no container running "kube-apiserver --…". Your scheduler and controller-manager appear to run correctly, but the API server does not.
Now you have to dig in and find out what prevented your API server from starting properly. Check the kubelet logs on all control-plane nodes.
This also happens if your Linux kernel is not configured to handle IPv4/IPv6 transparently: an IPv4 address configured while the kube-apiserver listens on an IPv6 interface breaks the connection.
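A hedged troubleshooting sketch for the steps above, using the kubeadm default paths (adjust if your install differs): confirm the static pod manifest for the apiserver exists, and inspect why kubelet failed to start it.

```shell
# Static pod manifests live here on a default kubeadm control plane:
ls /etc/kubernetes/manifests/kube-apiserver.yaml

# Look for apiserver-related errors in the kubelet logs:
sudo journalctl -u kubelet --no-pager | grep -i apiserver | tail -n 20

# If the container started and crashed, it may still show up as exited:
sudo docker ps -a | grep kube-apiserver
```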

Deploying in GCE through jenkins creates 20 octet-stream files, every time

I have just gotten my Jenkins VM working in GCE so that I can deploy through a URL, and it's working nicely.
However, every time I deploy something, 20 files are saved in my bucket, all named some gibberish with the type application/octet-stream:
009d705c4df3f1dad977db3848777703330f221b 641.43 KB application/octet-stream 6 minutes ago
0f29dadc1db1c1e3bd68b5e87c87030b28ff737e 51 B application/octet-stream 6 minutes ago
12179b7c0898cca08d1f2724b0e789ae77b539f4 3.55 KB application/octet-stream 6 minutes ago
14876ba2bedf347151e0268c8dde15e71c88b388 6.12 KB application/octet-stream 6 minutes ago
3a4948f3c6b79bf9240222609241f881c408d04d 1.63 KB application/octet-stream 6 minutes ago
3bc3db8eb76aaced6191a7dcf812796a6fa5057a 2.5 KB application/octet-stream 6 minutes ago
6b4646e0ae099703f738053bfaeeede3a1f8a67e 46.1 KB application/octet-stream 6 minutes ago
6d77bca129e58bbf053bbabc86c23b9103bdea0d 194 B application/octet-stream 6 minutes ago
8059e3541a420a5a2f60d99c46d8cc4a5bba3b8f 48.14 KB application/octet-stream 6 minutes ago
92b798df48237e525d34018efdb7f2aace4fdbb4 523.43 KB application/octet-stream 6 minutes ago
b1f22e54252cdb6a84e92414709340d668c33d3a 1,022 B application/octet-stream 6 minutes ago
bad8545d6a001b02f6225c2aade36b2100581d0d 2.83 KB application/octet-stream 6 minutes ago
bd80270cee4f7e90baed299f1d6ae1be55e7b4a5 10.45 KB application/octet-stream 6 minutes ago
c626a57d3f004800b634679fa1963d7c09ae585b 2.19 KB application/octet-stream 6 minutes ago
ce093c434a0f35df34034e6fc58d1889364cfdc2 1.66 KB application/octet-stream 6 minutes ago
d7a8d655f068e92a18971d74b410963e35251c8a 422 B application/octet-stream 6 minutes ago
d8dfb95d6e41de19f4112d99d1485a628096848d 185 B application/octet-stream 6 minutes ago
e06cd18064609994d27a83438b0e3dcbfebc5c67 1.39 KB application/octet-stream 6 minutes ago
f20ec52431df2c411bcb60965b3a2c212405f747 7.72 KB application/octet-stream 6 minutes ago
fb51d1d331190dfc3f2b6756bea79ff3ff92755d 815 B application/octet-stream 6 minutes ago
Does anyone know what's causing this in Jenkins?
I opened one of the files and there was some text about Linotype and VeriSign?
When Jenkins runs the build, test, and deploy steps for your application, it uploads the logs and any build artifacts to Google Cloud Storage, where you can view them, as mentioned in this Help Center article [1].
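To see what each uploaded object actually is, one hedged approach is to inspect a few of them with gsutil (the bucket name is a placeholder; the object name is taken from the listing above):

```shell
# List the objects with sizes (my-bucket is a placeholder).
gsutil ls -l gs://my-bucket/

# Download one object to stdout and let file(1) guess its type.
gsutil cat gs://my-bucket/009d705c4df3f1dad977db3848777703330f221b | file -
```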

docker revert changes to container

I'm trying to snapshot my docker container so that I can revert back to a single point in time.
I've looked at docker save and docker export but neither of these seems to do what I'm looking for. Am I missing something?
You might want to use docker commit. This command will create a new docker image from one of your docker containers. This way you can easily create a new container later on based on that new image.
Be aware that the docker commit command won't save any data stored in Docker data volumes. For those you need to make backups.
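Since docker commit skips volumes, data such as /data below has to be backed up separately. A hedged sketch of the classic tar-over---volumes-from pattern ("mycontainer" is a placeholder for the container whose volume you want to back up):

```shell
# Archive a container's /data volume to the host's current directory.
# The busybox image is only used as a throwaway shell with tar.
docker run --rm --volumes-from mycontainer \
  -v "$(pwd)":/backup busybox \
  tar cvf /backup/data-backup.tar /data
```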
For instance if you are working with the following Dockerfile which declares a volume and will write the date every 5 seconds to two files (one being in the volume, the other not):
FROM base
VOLUME /data
CMD while true; do date >> /data/foo.txt; date >> /tmp/bar.txt; sleep 5; done
Build an image from it:
$ docker build --force-rm -t so-26323286 .
and run a new container from it:
$ docker run -d so-26323286
Wait a bit so that the running container has a chance to write the date to the two files a couple of times.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
07b094be1bb2 so-26323286:latest "/bin/sh -c 'while t 5 seconds ago Up 5 seconds agitated_lovelace
Then commit your container into a new image so-26323286:snapshot1:
$ docker commit agitated_lovelace so-26323286:snapshot1
You can now see that you have two images available:
$ docker images | grep so-26323286
so-26323286 snapshot1 03180a816db8 19 seconds ago 175.3 MB
so-26323286 latest 4ffd141d7d6f 9 minutes ago 175.3 MB
Now let's verify that a new container run from so-26323286:snapshot1 would have the /tmp/bar.txt file:
$ docker run --rm so-26323286:snapshot1 cat /tmp/bar.txt
Sun Oct 12 09:00:21 UTC 2014
Sun Oct 12 09:00:26 UTC 2014
Sun Oct 12 09:00:31 UTC 2014
Sun Oct 12 09:00:36 UTC 2014
Sun Oct 12 09:00:41 UTC 2014
Sun Oct 12 09:00:46 UTC 2014
Sun Oct 12 09:00:51 UTC 2014
And witness that such a container does not have any /data/foo.txt file (as /data is a data volume):
$ docker run --rm so-26323286:snapshot1 cat /data/foo.txt
cat: /data/foo.txt: No such file or directory
Finally, if you want to access the /data/foo.txt file in the first (still running) container, you can use the docker run --volumes-from option:
$ docker run --rm --volumes-from agitated_lovelace base cat /data/foo.txt
Sun Oct 12 09:00:21 UTC 2014
Sun Oct 12 09:00:26 UTC 2014
Sun Oct 12 09:00:31 UTC 2014
Sun Oct 12 09:00:36 UTC 2014
Sun Oct 12 09:00:41 UTC 2014
Sun Oct 12 09:00:46 UTC 2014
Sun Oct 12 09:00:51 UTC 2014
Sun Oct 12 09:00:56 UTC 2014
Sun Oct 12 09:01:01 UTC 2014
Sun Oct 12 09:01:06 UTC 2014
Sun Oct 12 09:01:11 UTC 2014
Sun Oct 12 09:01:16 UTC 2014
Here is an example of how to do it with the hello-world image from Docker Hub.
First run the hello-world image, thereby downloading the image:
docker run hello-world
Then get the hash of the image you want to tag:
docker history hello-world
You will see something like:
IMAGE CREATED
fce289e99eb9 15 months ago
fce289e99eb9 is your hash.
To tag this image, you run:
docker tag fce289e99eb9 hello-world:SNAPSHOT-1.0
To list all the tags for a repository, use:
docker image ls hello-world
And you will get something like:
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world SNAPSHOT-1.0 fce289e99eb9 15 months ago 1.84kB
hello-world latest fce289e99eb9 15 months ago 1.84kB