Can't enable mongo service - mongodb

I am trying to enable the mongo service using Ansible on my AWS AMI. Here are the tasks from the playbook:
- name: Mongodb repo
  yum_repository:
    name: mongodb
    description: mongodb
    baseurl: https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
    gpgkey: https://www.mongodb.org/static/pgp/server-3.4.asc

- name: Install mongodb
  yum:
    name: mongodb-org
    state: present

- name: Enable mongodb
  service:
    name: mongodb-org
    enabled: true
and here is the error:
TASK [mongodb_ami : Enable mongodb] ********************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Could not find the requested service mongodb-org: host"}
The first two tasks are okay, but the last one (enabling) doesn't work. How can I resolve this?

Are you sure the service name is mongodb-org? I think the service name is mongod:
- name: Enable mongodb
  service:
    name: mongod
    enabled: true
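If in doubt about the unit name, you can let Ansible list the services on the host before enabling anything. Here is a minimal sketch using service_facts; the debug lookup assumes the package registered a mongod.service unit:

- name: Gather service facts
  service_facts:

- name: Show the mongod unit, if it is registered
  debug:
    var: ansible_facts.services['mongod.service']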

Related

ansible and postgres not allowing configuration: LOG: provided user name (postgres) and authenticated user name (boop) do not match

I'm trying to configure Postgres with Ansible. I have two VMs running Ubuntu 22.04.1 on an internal network. They are happy to use standard Ansible commands. However, upon using the standard Ansible commands I get the
unable to connect to database: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: peer authentication failed for user "postgres"
message.
The log says:
LOG: provided user name (postgres) and authenticated user name (boop) do not match
I used the following playbook:
---
- name: Setup
  hosts: postgres_primaries
  become: true
  tasks:
    - name: Install dependencies for PostgreSQL
      apt:
        name: "{{ item.name }}"
        update_cache: true
        state: latest
      with_items:
        - { name: bash }
        - { name: openssl }
        - { name: libssl-dev }
        - { name: libssl-doc }

    - name: Install PostgreSQL
      package:
        name: "{{ item.name }}"
        update_cache: true
        state: present
      with_items:
        - { name: postgresql }
        - { name: postgresql-contrib }
        - { name: libpq-dev }
        - { name: python3-psycopg2 }

    - name: Ensure the PostgreSQL service is running
      service:
        name: postgresql
        state: started
        enabled: yes

    - name: Daemon-reload for Postgres in case of config change
      systemd:
        state: restarted
        daemon_reload: yes
        name: postgresql

- name: work on database
  hosts: postgres_primaries
  become_user: postgres
  vars_files:
    - vars.yml
  tasks:
    - name: create database
      postgresql_user:
        name: test1
        password: boop
I tried mapping boop to postgres with the following user_name map:
postgres boop postgres
I tried editing my pg_hba.conf to a more catch-all condition:
local all all peer
This should give me a user, but instead the aforementioned error turns up. If I try to add become: yes to the final task, I get an error related to moving files as an unprivileged user.
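Assuming the "moving files" message is Ansible's usual temporary-file permissions failure when becoming an unprivileged user, a common fix is to install acl on the target and combine become: true with become_user: postgres. A minimal sketch under that assumption (note also that a pg_ident.conf map only takes effect where a pg_hba.conf entry references it with map=):

- name: Install acl so Ansible can hand files to an unprivileged become_user
  apt:
    name: acl
    state: present
  become: true

- name: create database user
  become: true           # escalate to root first...
  become_user: postgres  # ...then drop to postgres so peer auth matches
  postgresql_user:
    name: test1
    password: boop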

Kibana - Elastic - Fleet - APM - failed to listen:listen tcp bind: can't assign requested address

Having set up Kibana and a Fleet Server, I have now attempted to add APM.
When going through the general setup, I always get an error no matter what is done:
failed to listen:listen tcp *.*.*.*:8200: bind: can't assign requested address
This is when following the steps for setting up APM after creating the Fleet Server.
This is all being launched in Kubernetes, and the documentation has been gone through several times to no avail.
We did discover that we can hit the /intake/v2/events etc. endpoints when shelled into the container, but get 404 for everything else. It's close but no cigar so far following the instructions.
As it turned out, the general walkthrough is soon to be deprecated in its current form.
Setup is far simpler in a Helm file, where it is actually possible to configure Kibana with a package reference for your named APM service:
xpack.fleet.packages:
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: fleet_server
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server on ECK policy
    id: eck-fleet-server
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    package_policies:
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
  - name: Elastic Agent on ECK policy
    id: eck-agent
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    is_default: true
    package_policies:
      - name: system-1
        id: system-1
        package:
          name: system
      - package:
          name: apm
        name: apm-1
        inputs:
          - type: apm
            enabled: true
            vars:
              - name: host
                value: 0.0.0.0:8200
Making sure these are set in the Kibana Helm file will allow any spun-up Fleet Server to automatically register as having APM.
The missing key in seemingly all the documentation is the need for an APM Service, the simplest example of which is here:
Example yaml scripts
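For reference, a minimal sketch of such a Service, assuming ECK-managed Fleet Server Agent pods; the metadata name and the pod selector label are assumptions and must match your own deployment:

apiVersion: v1
kind: Service
metadata:
  name: apm   # hypothetical name; point the APM host setting at it
spec:
  selector:
    agent.k8s.elastic.co/name: fleet-server  # assumption: the label ECK puts on your Agent pods
  ports:
    - name: apm
      port: 8200
      protocol: TCP
      targetPort: 8200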

How to wait until env for appid is created in jelastic manifest installation?

I have the following manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  baseUrl: https://raw.githubusercontent.com/shopozor/services/dev
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: version
        type: string
        caption: Version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - enableSubDomains
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cmd
          cmd: |-
            curl -fsSL ${baseUrl}/scripts/install_k8s.sh | /bin/bash
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.version}
          jaeger: false
    enableSubDomains:
      - jelastic.env.binder.AddDomains[cp]:
          domains: staging,api-staging,assets-staging,api,assets
Unfortunately, when I run that manifest, the k8s cluster gets installed, but the subdomains cannot be created (yet), because:
[15:26:28 Shopozor.cluster:3]: enableSubDomains: {"action":"enableSubDomains","params":{}}
[15:26:29 Shopozor.cluster:4]: api [cp]: {"method":"jelastic.env.binder.AddDomains","params":{"domains":"staging,api-staging,assets-staging,api,assets"},"nodeGroup":"cp"}
[15:26:29 Shopozor.cluster:4]: ERROR: api.response: {"result":2303,"source":"JEL","error":"env for appid [5ce25f5a6988fbbaf34999b08dd1d47c] not created."}
What jelastic API methods can I use to perform the necessary waiting until subdomain creation is possible?
My current workaround is to split that manifest into two manifests: one cluster installation manifest and one update manifest creating the subdomains. However, I'd like to have everything in the same manifest.
Please change this:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      domains: staging,api-staging,assets-staging,api,assets
to:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      envName: ${settings.envName}
      domains: staging,api-staging,assets-staging,api,assets
With the explicit envName, the AddDomains call targets the environment that installKubernetes just created instead of the installer's own appid, so there is nothing to wait for.

Ansible and postgresql error with psycopg2

I'm trying to configure PostgreSQL with Ansible on a VPS.
Looking for a solution, I tried changing peer to md5 and also to trust in the Postgres conf.
My role:
- name: Install PostgreSQL
  become: yes
  apt:
    name: ['libpq-dev', 'python3-dev', 'postgresql', 'postgresql-contrib']

- name: Install psycopg2
  become: yes
  pip:
    name: psycopg2-binary
    executable: pip3

- name: ensure postgresql is running
  service:
    name: postgresql
    state: started
    enabled: yes

- name: ensure database is created
  become: true
  become_user: postgres
  postgresql_db:
    name: "{{ db_name }}"
Tasks 1, 2 and 3 are OK, but on task 4, "ensure database is created", I receive this error:
psycopg2.OperationalError: FATAL: role "postgresql" does not exist
My playbook:
- hosts: dev
  remote_user: develop
  roles:
    - update_apt
    - nginx
    - webapp
    - postgresql
    - git
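The error complains about a role named postgresql, not postgres, which suggests the module connects as the wrong role. A minimal sketch that pins the login role explicitly, assuming peer authentication for postgres works locally:

- name: ensure database is created
  become: true
  become_user: postgres
  postgresql_db:
    name: "{{ db_name }}"
    login_user: postgres  # assumption: be explicit rather than relying on defaults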

Ansible Enabling Postgresql Services

I'm trying to enable these PostgreSQL services with an Ansible playbook, but I get this error every time:
TASK [postgresql : enabling postgresql services] ******************************************************************************************************
fatal: [some-remote-server]: FAILED! => {
    "changed": false,
    "cmd": "'systemctl enable postgresql-9.6.service' 'systemctl start postgresql-9.6.service'",
    "rc": 2
}
MSG:
[Errno 2] No such file or directory
This is my task:
- name: enabling postgresql services
  check_mode: no
  command:
    args:
      argv:
        - systemctl enable postgresql-9.6.service
        - systemctl start postgresql-9.6.service
  become: yes
You might want to use the service module instead. As an aside, the command module's argv list is the argument vector of a single command, so the task above tries to execute a program literally named systemctl enable postgresql-9.6.service, which is why it fails with [Errno 2] No such file or directory.
- name: enabling postgresql services
  service:
    name: postgresql
    state: started
    enabled: yes
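If the remote host actually runs the versioned unit from the PGDG packages, as the failing task suggests, the service name has to match it. A sketch under that assumption:

- name: enabling postgresql services
  service:
    name: postgresql-9.6  # assumption: the versioned unit seen in the failing task
    state: started
    enabled: yes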