The hostname is set in /etc/hostname, but when I run hostname -F /etc/hostname I get:
hostname: set hostname: Operation not permitted
I get the same response as root and with sudo, and I also changed /etc/sysctl.conf to grant the permission to the user. This is the iSH app, and I'm trying to set up network interfaces. Is there any solution to resolve this issue?
Because I'm too stupid to run sysbox like this excellent article suggests, I'm trying to mount /var/run/docker.sock directly into the container from the host via hostPath to build Docker images in a Jenkins container.
Simple as this:
volumeMounts:
  - name: docker-sock
    mountPath: /var/run/docker.sock
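For completeness, the matching pod-level volumes entry (not shown above; a sketch of the usual hostPath form, with type: Socket as an assumption) would be:

volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket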
This works, but when trying to run the Docker CLI inside the container I get:
$ docker version
Client:
Version: 20.10.16
// ...
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix /var/run/docker.sock: connect: permission denied
The user inside the container is:
$ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
But the file has these permissions:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root 998 0 Jun 6 15:23 /var/run/docker.sock
I do have root on the host node, and security is not much of a concern since Jenkins can only be accessed via VPN.
Thanks for any suggestion!
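A common fix for this gid mismatch (a sketch: it assumes 998 is the gid of the host's docker group, as the ls -la output suggests) is to grant the pod that group via the pod-level securityContext, so the jenkins user (uid 1000) gets write access to the socket:

securityContext:
  # 998 must match the group that owns /var/run/docker.sock on the host node
  supplementalGroups: [998]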
Hi, I keep getting this error when using Ansible via Kubespray, and I am wondering how to overcome it:
TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)] ********************************************************************************************************************************************************************************************************
task path: /home/dc/xcp-projects/kubespray/roles/bootstrap-os/tasks/main.yml:50
<192.168.10.55> (1, b'\x1b[1;31m==== AUTHENTICATING FOR org.freedesktop.hostname1.set-hostname ===\r\n\x1b[0mAuthentication is required to set the local host name.\r\nMultiple identities can be used for authentication:\r\n 1. test\r\n 2. provision\r\n 3. dc\r\nChoose identity to authenticate as (1-3): \r\n{"msg": "Command failed rc=1, out=, err=\\u001b[0;1;31mCould not set property: Connection timed out\\u001b[0m\\n", "failed": true, "invocation": {"module_args": {"name": "node3", "use": null}}}\r\n', b'Shared connection to 192.168.10.55 closed.\r\n')
<192.168.10.55> Failed to connect to the host via ssh: Shared connection to 192.168.10.55 closed.
<192.168.10.55> ESTABLISH SSH CONNECTION FOR USER: provision
<192.168.10.55> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="provision"' -o ConnectTimeout=10 -oStrictHostKeyChecking=no -o ControlPath=/home/dc/.ansible/cp/c6d70a0b7d 192.168.10.55 '/bin/sh -c '"'"'rm -f -r /home/provision/.ansible/tmp/ansible-tmp-1614373378.5434802-17760837116436/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.10.56> (0, b'', b'')
fatal: [node2]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "name": "node2",
            "use": null
        }
    },
    "msg": "Command failed rc=1, out=, err=\u001b[0;1;31mCould not set property: Method call timed out\u001b[0m\n"
}
My inventory file is as follows:
all:
  hosts:
    node1:
      ansible_host: 192.168.10.54
      ip: 192.168.10.54
      access_ip: 192.168.10.54
    node2:
      ansible_host: 192.168.10.56
      ip: 192.168.10.56
      access_ip: 192.168.10.56
    node3:
      ansible_host: 192.168.10.55
      ip: 192.168.10.55
      access_ip: 192.168.10.55
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
I also have a playbook which provisions the users in the following manner:
- name: Add a new user named provision
  user:
    name: provision
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add a new user named dc
  user:
    name: dc
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/provision"
    content: "provision ALL=(ALL) NOPASSWD: ALL"

- name: Add dc user to the sudoers
  copy:
    dest: "/etc/sudoers.d/dc"
    content: "dc ALL=(ALL) NOPASSWD: ALL"

- name: Disable Root Login
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: "PermitRootLogin no"
    state: present
    backup: yes
  notify:
    - Restart ssh
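As a quick sanity check that these sudoers drop-ins take effect (a sketch, run on one of the nodes as the provision user; the listing is sudo's standard output):

provision@node$ sudo -l
...
User provision may run the following commands on node:
    (ALL) NOPASSWD: ALL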
I have run the Ansible command in the following manner:
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" kubespray/cluster.yml -vvv
as well as
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" --become-user="provision" kubespray/cluster.yml -vv
Both yield the same error; interestingly, escalation seems to succeed at earlier points.
After reading this article:
https://askubuntu.com/questions/542397/change-default-user-for-authentication
I decided to add the users to the sudo group, but the error still persists.
Looking at the position in main.yml indicated by the error, this seems to be the code possibly causing the issue:
# Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
  setup:
    gather_subset: '!all'
    filter: ansible_*

- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)
  hostname:
    name: "{{ inventory_hostname }}"
  when:
    - override_system_hostname
    - ansible_os_family not in ['Suse', 'Flatcar Container Linux by Kinvolk', 'ClearLinux'] and not is_fedora_coreos
The hosts run Ubuntu 20.04.2 Server.
Is there anything more I can do?
From the Kubespray documentation:
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
As stated, --become is mandatory: it enables the privilege escalation needed for most of the system modifications (like setting the hostname) that Kubespray performs.
With --user=provision you're just setting the SSH user, but it will need privilege escalation anyway.
With --become-user=provision you're just saying that privilege escalation will escalate to the 'provision' user (but you would still need --become to do the privilege escalation).
In both cases, unless the 'provision' user has root permissions (putting it in the root group is probably not enough), it won't work.
For the 'provision' user to be enough, you need to make sure that it can perform hostnamectl set-hostname <some-new-host> without being asked for authentication.
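One way to achieve that (a sketch, assuming polkit 0.105 as shipped with Ubuntu 20.04, and a hypothetical file name) is a local authority rule for the org.freedesktop.hostname1.set-hostname action that shows up in the error above:

# /etc/polkit-1/localauthority/50-local.d/10-hostname.pkla (hypothetical path)
[Allow provision to set the hostname]
Identity=unix-user:provision
Action=org.freedesktop.hostname1.set-hostname
ResultAny=yes

That said, the simpler route is the documented one: run with --become and let escalation go through sudo, which the NOPASSWD entries in your sudoers drop-ins already allow.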
I have an Ubuntu 18.04 VM running on GCP and I have a problem connecting remotely after changing the default MongoDB port.
After installing MongoDB I followed a few steps to enable remote access on the default port: in /etc/mongodb.conf I changed bindIp to 0.0.0.0, opened the default port on the GCP firewall, and I was able to connect to MongoDB.
But I want to change the default MongoDB port from 27017 to, for example, 38018.
I changed the port in /etc/mongodb.conf from 27017 to 38018, restarted the mongo service, and opened the new port on the GCP firewall.
After changing the port I'm able to connect from the terminal with the following command:
mongo --port 38018 -u "user" -p "pass" --authenticationDatabase "admin"
But when I try to connect from outside on the new port with MongoDB Compass, the connection is refused. What am I missing here?
I've also checked whether mongod is running on the new port with:
sudo netstat -tulpn | grep 38018
and I get the following output:
tcp 0 0 0.0.0.0:38018 0.0.0.0:* LISTEN 6644/mongod
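So mongod is listening on all interfaces. A quick way to tell a firewall problem from a mongod problem (a sketch, run from a machine outside GCP, with <external-ip> standing in for the VM's external address):

nc -vz <external-ip> 38018

If mongod is listening but this times out, the traffic is being dropped before it reaches the VM.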
Ubuntu 18.04.4 LTS
mongod --version: db version v4.2.7
UFW: inactive
Here is my mongo config file:
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 38018
  bindIp: 0.0.0.0

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
I executed the following command, as @YasBES said:
mongod -f your_config_file.conf
and restarted the mongo process, but the process was unable to start.
After checking the log I found this error:
Failed to start up WiredTiger under any compatibility version.
I found that the following command fixes the error:
sudo chown -R mongodb:mongodb /var/lib/mongodb/
Next I removed the old 27017 .sock file:
sudo rm /tmp/mongodb-27017.sock
and gave proper ownership to the newly created socket file:
sudo chown mongodb:mongodb mongodb-38018.sock
After executing those commands the process started successfully.
When I look at mongod.log I get these messages:
2020-05-28T17:13:06.807+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2020-05-28T17:13:06.807+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2020-05-28T17:13:06.811+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
2020-05-28T17:13:06.812+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
2020-05-28T17:13:06.812+0000 I NETWORK [listener] Listening on /tmp/mongodb-38018.sock
2020-05-28T17:13:06.812+0000 I NETWORK [listener] Listening on 0.0.0.0
2020-05-28T17:13:06.812+0000 I NETWORK [listener] waiting for connections on port 38018
2020-05-28T17:13:07.003+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
Now the process starts and says it's listening on 38018, but I still can't connect remotely:
2020-05-28T17:13:06.812+0000 I NETWORK [listener] Listening on 0.0.0.0
2020-05-28T17:13:06.812+0000 I NETWORK [listener] waiting for connections on port 38018
After some research I decided to test the same setup on Azure, with the same Mongo version and configuration as on GCP.
After changing the default mongo port from 27017 to 38018, I opened the new 38018 port in the Azure firewall and the remote connection was established.
That is the moment I discovered that something was wrong with the firewall rules on GCP.
I had created a lot of forwarding rules on GCP for other machines and services, but this one was strange.
I tried a couple of different types of rules with different priorities, IP ranges, etc.
The one that worked has logging on, priority 100, source IP range 0.0.0.0/0, and protocols and ports tcp:38018, udp:38018.
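For reference, an equivalent rule created from the CLI might look like this (a sketch, assuming the default network and a hypothetical rule name):

gcloud compute firewall-rules create allow-mongodb-38018 \
    --direction=INGRESS \
    --priority=100 \
    --network=default \
    --action=ALLOW \
    --rules=tcp:38018,udp:38018 \
    --source-ranges=0.0.0.0/0 \
    --enable-logging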
I'm currently running a REST API on a docker container with the following Dockerfile:
FROM python:2.7
WORKDIR /app
RUN pip install uwsgi
COPY awf/requirements.txt ./awf/requirements.txt
RUN pip install -r awf/requirements.txt
COPY ./ ./
# Call collectstatic (customize the following line with the minimal environment variables needed for manage.py to run):
RUN python manage.py collectstatic --noinput
EXPOSE 8000
ENTRYPOINT [ "uwsgi" ]
CMD [ "--wsgi-file", "awf/wsgi.py", "--ini", "uwsgi.ini" ]
The Python REST API has the following DATABASE config in settings.yaml:
DATABASE: {
  engine: 'django.contrib.gis.db.backends.postgis',
  name: 'name',
  user: 'user',
  password: 'pass',
  host: '192.168.99.100',
  port: '5432',
}
I have set host: '192.168.99.100' because this is the output of docker-machine ip.
When I run the docker container without mapping port 5432, I get the following error:
OperationalError: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting TCP/IP
connections on port 5432?
But then I map ports while running the docker container:
docker run -p 8000:8000 -p 5432:5432 img
And I get the following error:
OperationalError: server closed the connection unexpectedly This
probably means the server terminated abnormally before or while
processing the request.
I don't know if I missed some configuration, but I added the IP address range 172.17.0.0/16 to pg_hba.conf and also configured PostgreSQL to listen for connections on all IPs, according to this solution:
Allow docker container to connect to a local/host postgres database
EDIT
pg_hba.conf:
# IPv4 local connections:
host all all 127.0.0.1/32 trust
host all all 192.168.99.0/16 trust
host all all 172.17.0.0/16 trust
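For completeness, the "listen on all IPs" change mentioned above looks like this in postgresql.conf (a sketch; the exact path varies, e.g. /etc/postgresql/<version>/main/postgresql.conf):

# listen on all interfaces, not just localhost
listen_addresses = '*'

Note that -p 5432:5432 on the API container publishes the container's own port 5432 to the host; it does not give the container access to the host's PostgreSQL, so that mapping can be dropped.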
I've built a Docker container running a MongoDB instance that should be exposed to the host.
However, when I want to connect from the host to the MongoDB container, the connection is denied.
This is my Dockerfile:
FROM mongo:latest
RUN mkdir -p /var/lib/mongodb && \
touch /var/lib/mongodb/.keep && \
chown -R mongodb:mongodb /var/lib/mongodb
ADD mongodb.conf /etc/mongodb.conf
VOLUME [ "/var/lib/mongodb" ]
EXPOSE 27017
USER mongodb
WORKDIR /var/lib/mongodb
ENTRYPOINT ["/usr/bin/mongod", "--config", "/etc/mongodb.conf"]
CMD ["--quiet"]
And this is the config file for MongoDB (/etc/mongodb.conf), where I explicitly bind 0.0.0.0, since answers here on SO suggested that binding 127.0.0.1 could be the root cause of my issue (but it isn't):
systemLog:
  destination: file
  path: /var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /var/lib/mongodb
net:
  bindIp: 0.0.0.0
The docker container is running, but a connection from the host is not possible:
host$ docker run -p 27017:27017 -d --name mongodb-test mongodb-image
host$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6ec958034a6f mongodb-image "/usr/bin/mongod --co" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp mongodb-test
Find the IP address:
host$ docker inspect 6ec958034a6f |grep IPA
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAMConfig": null,
"IPAddress": "172.17.0.2",
Try to connect:
host$ mongo 172.17.0.2:27017
MongoDB shell version v3.4.0
connecting to: mongodb://172.17.0.2:27017
2016-12-16T15:53:40.318+0100 W NETWORK [main] Failed to connect to 172.17.0.2:27017 after 5000 milliseconds, giving up.
2016-12-16T15:53:40.318+0100 E QUERY [main] Error: couldn't connect to server 172.17.0.2:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:234:13
#(connect):1:6
exception: connect failed
When I ssh into the container, I can connect to mongo and list the test database successfully.
Use host.docker.internal with the exposed port: host.docker.internal:27017
Using localhost instead of the container IP allows the connection.
Combine it with the exposed port: localhost:27017
I tested the solution as stated in the comments, and it works.
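For example, with the port published as above (-p 27017:27017), connecting from the host looks like:

host$ mongo mongodb://localhost:27017

This works because Docker publishes the container's port on the host's own interfaces, whereas the bridge IP 172.17.0.2 is internal to Docker and (notably on Docker for Mac/Windows, where containers run inside a VM) may not be reachable from the host at all.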