Looking into the Confluent docs for installation, the installation commands seem to need sudo permission. I have a couple of questions regarding this:
Is it sudo root privileges that are needed here, or should we be sudo'ing to some specific confluent user like cp-kafka to install the platform? I presume we need sudo root privileges for install.
Will the platform create all the necessary service user accounts for each of the individual components like Kafka, ZooKeeper etc? Or should they be created upfront and kept ready before installation is initiated?
What should be the user group that confluent needs / creates?
Thanks
should we be sudo'ing to some specific confluent user like cp-kafka to install the platform?
So, to clarify, "sudo to a user" is not a phrase in Linux; su is the command to "switch user". If you do not currently have sudo privileges, then you will need to actually log out and log back in as that other user. This could also be done with su from the current account (for which you will be prompted for a password), but not with sudo su.
Those users do not exist prior to installation, and installing the software is typically just sudo yum install, for example, assuming your current user is allowed to do so (via /etc/sudoers).
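As a rough sketch of that (assuming the Confluent yum repository has already been added as described in the docs, and noting that the exact package name can vary between Confluent versions), the install itself is just an ordinary yum transaction run through sudo:
sudo yum clean all
sudo yum install confluent-platform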
Will the platform create all the necessary service user accounts for each of the individual components like Kafka, ZooKeeper etc?
It should, yes. Maybe except for a ZooKeeper service user, if I recall correctly, because ZooKeeper is "installed" within the Kafka directory, which is owned by cp-kafka.
What should be the user group that confluent needs / creates?
Nothing needs to be created beforehand. The scripts will create cp-kafka, cp-schema-registry, cp-kafka-connect, cp-ksql, etc. for each component.
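If you want to see which service accounts actually got created after installation, a quick check (nothing Confluent-specific here, just standard account lookups) is:
getent passwd | grep '^cp-'
id cp-kafka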
For easily installing on a cluster, it is recommended to try the Ansible playbooks
Related
I am attempting to start a PostgreSQL server on Amazon Linux using the command
sudo service postgresql start
I installed the server using this method; I have added it here for simplicity:
sudo rpm -i https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-ami201503-96-9.6-2.noarch.rpm
and then
sudo yum install postgresql96-server.x86_64
after which I did this to install the command-line tools for Postgres:
sudo yum install postgresql96.x86_64 postgresql96-libs.x86_64
Any suggestions on how I can start the server? I usually start the server using the command
sudo service postgresql start
however, it's not working in this case as it says "Unrecognized service".
I then tried this
postgres -D /usr/local/pgsql/data
postgres: could not access directory "/usr/local/pgsql/data": No such file or directory. Run initdb or pg_basebackup to initialize a PostgreSQL data directory.
Having the same issue, or a similar one. Maybe I installed pgsql from source, I don't remember. We could make our own service start files. How? Let's find out! >>RTFM<< starting with what we already know:
man service
which leads us to chkconfig(8), so
man chkconfig
and it gives us an option
chkconfig --add ${svcname}
to add a brand new service under a name we choose!
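As a sketch of what such a hand-rolled start file could look like before registering it (every name and path below is made up for illustration, and it assumes pg_ctl is on the postgres user's PATH), save something like this as /etc/init.d/mypostgres:
#!/bin/sh
# chkconfig: 345 85 15
# description: hand-rolled PostgreSQL instance
case "$1" in
  start)  su - postgres -c "pg_ctl -D /var/lib/pgsql/data -l /var/lib/pgsql/startup.log start" ;;
  stop)   su - postgres -c "pg_ctl -D /var/lib/pgsql/data stop" ;;
  status) su - postgres -c "pg_ctl -D /var/lib/pgsql/data status" ;;
  *)      echo "Usage: $0 {start|stop|status}"; exit 1 ;;
esac
Then chkconfig --add mypostgres registers it; the "# chkconfig: 345 85 15" header is what chkconfig reads for runlevels and start/stop priorities.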
But before we do, we might actually want to check what's already there. With
service --status-all
we get a list of all known services and their run status. And I found "postmaster" in my list; as you might know, the PostgreSQL master server process used to be called "postmaster". Yet, when I try
service postmaster status
it also tells me it doesn't know such a service. OK, forget it for now and let's just move on with making our own! But I still want to peek at what there is in runlevel 3 (the normal server runlevel). So I go
ls -1 /etc/rc.d/rc3.d |fgrep post
and there I find: "K36postgresql95"! So, accordingly our service name should be "postgresql95". Trying that:
service postgresql95 status
it now says "postmaster is stopped". Confusingly, the name the service reports for itself, both in service --status-all and when we inquire about it individually, is different from the name used to actually address it in the service command. Good to know. It's easy enough to search /etc/rc.d for the name of interest.
service postgresql95 start
now starts the service. And check with
psql -U ${pguser} ${pgdb}
and I find that working. So now all I need to do is enable that service at system boot to auto-start
chkconfig --level 3 postgresql95 on
and that works, doesn't it?
PS: It doesn't matter that I happen to run version 9.5
I recently installed PostgreSQL 9.2.24 on Amazon Linux 2 and I had to initialize the database manually before being able to create ROLE and DATABASE as I normally would on Ubuntu.
# initialize the database after installing with yum
$ sudo postgresql-setup initdb
# start the service
$ sudo systemctl start postgresql.service
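A hedged follow-up to the above (standard systemd and PostgreSQL client tools only; the role and database names are just examples): enable the service at boot, then create a role and database as the postgres OS user:
# enable auto-start at boot
$ sudo systemctl enable postgresql.service
# create a role and a database owned by it (names are examples)
$ sudo -u postgres createuser --pwprompt myapp
$ sudo -u postgres createdb -O myapp myapp_db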
I am using the following configuration: Ubuntu 16.04, Apache 2, PHP 7.0, ownCloud 10.0.3. I think I made an error when I set up ownCloud. The data directory lives in /var/www/owncloud/data (I believe that owncloud.log resides in this folder). I have deployed fail2ban, and the issue I am having is that fail2ban cannot access the data folder because I ran sudo chown -R www-data:www-data /var/www/owncloud/. The only way I can access the log file is through the ownCloud GUI (Settings > General > Log), where I can see the failed login attempts made by me. I cannot seem to get fail2ban to read the ownCloud log.
I am new to Ubuntu and ownCloud; can anyone advise how to rectify this issue? ownCloud is working fine, and I am using IP addresses to restrict access to it. Fail2ban was supposed to make the server secure so that I could open up ownCloud to the internet.
Regards
Steve
You should change the permissions of the log file so that it can be read by everyone but written only by the PHP process. Do a 'chmod 644 /var/www/owncloud/data/owncloud.log' (adjust the path if your log lives elsewhere, e.g. /var/log/owncloud/owncloud.log).
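Once the log file is readable, fail2ban still needs a jail and a filter pointed at it. The following is only a sketch: the jail/filter name "owncloud" is my own choice, and the failregex depends on your ownCloud version's log format, so test it with fail2ban-regex before relying on it. In /etc/fail2ban/jail.local:
[owncloud]
enabled  = true
port     = http,https
filter   = owncloud
logpath  = /var/www/owncloud/data/owncloud.log
maxretry = 3
and in /etc/fail2ban/filter.d/owncloud.conf:
[Definition]
failregex = Login failed.*Remote IP.*<HOST>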
By the way, I suggest that you migrate from ownCloud to Nextcloud. It is a full replacement, fully open source, with more features and better security. And it has a fail2ban-equivalent brute-force protection already built in :-)
I wonder whether there is a way to set the user in the Vagrant configuration, so that the box will be provisioned from a non-root account. The thing is that I want to run chef-client on boxes as a specific user (deployer), not root, but for that I need to run a provisioner that creates this user first, and that provisioner runs under the root user.
As I understand it, one solution is to run a provisioner that creates the deployer user, then change all Chef-related files and directories on the box to be owned by the deployer user, and then run the actual provisioning from the Chef server.
Is there some better solution?
Forgive me if I'm just restating the second half of your question, but it seems like you may want to create a minimal starting provisioner (which runs as root) and then spawn another provisioner as your intended user.
Here is an example of how I install my dotfiles as my SSH user (vagrant):
# ... in shell provision script...
su -c "cd /home/vagrant/.dotfiles && bash install.bash" vagrant
Similar Vagrant github issue
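Another option worth checking in your Vagrant version: the shell provisioner accepts a privileged option (it defaults to true), so a provisioner can be run directly as the SSH user instead of root. A minimal sketch, with the script contents purely illustrative:
Vagrant.configure("2") do |config|
  # first provisioner runs as root (the default) and could create the deployer user
  config.vm.provision "shell", inline: "useradd -m deployer || true"
  # second provisioner runs as the vagrant SSH user, not root
  config.vm.provision "shell", privileged: false, inline: "cd ~/.dotfiles && bash install.bash"
end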
With a freshly installed version of Postgres 9.2 via the yum repository on CentOS 6, how do you run Postgres as a different user when it is configured to run as 'postgres:postgres' (user:group) out of the box?
In addition to AndrewPK's explanation, I'd like to note that you can also start new PostgreSQL instances as any user by stopping and disabling the system Pg service, then using:
initdb -D /path/to/data/directory
pg_ctl start -D /path/to/data/directory
This won't auto-start the server on boot, though. For that you must integrate it into your init system. On CentOS 6, a simple System V-style init script in /etc/init.d/ and a suitable symlink into the rc directory for your default runlevel (e.g. /etc/rc3.d/) is sufficient.
If running more than one instance at a time, they must be on different ports. Change the port directive in postgresql.conf in the data directory, or set it on startup with pg_ctl -o "-p 5433" .... You may also need to override unix_socket_directories (called unix_socket_directory before 9.3) if your user doesn't have write permission to the default socket directory.
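For example, a second throwaway instance on port 5433, run entirely as an unprivileged user (the directory and port here are arbitrary):
initdb -D ~/pg5433
pg_ctl -D ~/pg5433 -o "-p 5433 -k /tmp" -l ~/pg5433/server.log start
psql -h /tmp -p 5433 postgres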
See also: pg_ctl, initdb
This is only for a fresh installation (as it pertained to my situation) as it involves blowing away the data dir.
The steps I took to resolve this issue while utilizing the packaged startup scripts for a fresh installation:
Remove the postgres data dir /var/lib/pgsql/9.2/data if you've already gone through the initdb process with the postgres user:group configured as default.
Modify the startup script (/etc/init.d/postgresql-9.2) to replace all instances of postgres:postgres with NEWUSER:NEWGROUP.
Modify the startup script to replace all instances of postgres in any $SU -l postgres lines with NEWUSER (a sed sketch for these two steps follows the list).
Run /etc/init.d/postgresql-9.2 initdb to regenerate the cluster using the new username.
Make sure any logs created are owned by the new user, or remove old logs if initdb errors out (the configuration file in my case was found in /var/lib/pgsql/9.2/data/postgresql.conf).
Start up Postgres and it should now be running under the new user/group.
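For the two script-modification steps above, a sketch of the replacement (assuming the stock script path, with NEWUSER/NEWGROUP as placeholders for your actual names; review the script afterwards, since a blanket replace can touch more lines than intended):
sudo sed -i 's/postgres:postgres/NEWUSER:NEWGROUP/g' /etc/init.d/postgresql-9.2
sudo sed -i 's/-l postgres/-l NEWUSER/g' /etc/init.d/postgresql-9.2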
I understand this might not be what other people are looking for if they have existing postgres db's and want to restart the server to run as a different user/group combo - this was not my case, and I didn't see an answer posted anywhere for a 'fresh' install utilizing the pre-packaged startup scripts.
I'm trying to set up Capistrano to do our deployments, but I have now stumbled upon what seems to be a common assumption of Capistrano users: that the user you SSH into the remote host as will have permission to write to the deployment directory.
Here, administrators are common users with a single distinction: they can sudo. At first, I thought that would be enough, since there are some configurations related to sudo, but it seems that's not the case after all.
Is there a way around this? Creating a user shared by everyone doing deployment is not an acceptable solution.
Edit: to make it clear, no deploy action should happen without calling sudo -- that's the gateway point that checks whether the user is allowed to deploy or not, and it should be a mandatory checkpoint.
The presently accepted answer does not fit that criterion. It works around sudo by granting extra permissions to the user. I'm accepting it anyway because I've come to the conclusion that Capistrano is fundamentally broken in this regard.
I assume you are deploying to a Linux distro. The easiest way to resolve your issue is to create a group, say, deployers, and add each user who should have the permissions to deploy to that group. Once the group is created and the users are in the group, change the ownership and permissions on the deployment path.
Depending on the distro, the syntax will vary slightly. Here it is for ubuntu/debian:
Create the group:
$ sudo groupadd deployers
Add users to group:
$ sudo usermod -a -G deployers daniel
The last argument there is the username.
Next, update the ownership of the deployment path:
$ sudo chown -R root:deployers /deploy/to/path/
The syntax for chown is user:group. Here I am assuming that the user that currently owns the path is root; update it to whichever user should own the directory.
Finally, change the permissions on the deployment path:
$ sudo chmod -R 0775 /deploy/to/path/
That will allow users in the deployers group to read, write, and traverse all files and directories beneath /deploy/to/path.
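One follow-up worth considering, sketched here rather than prescribed: set the setgid bit on the directories so that files and directories created during future deploys inherit the deployers group automatically (and remember that users added to the group need to log out and back in before the new membership takes effect):
$ sudo find /deploy/to/path/ -type d -exec chmod g+s {} +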