I followed the steps below to set up the KDC and Kerberos. Now, while running kinit, I face the issue shown at the end.
OS - SUSE 11
1. zypper install krb5 krb5-server krb5-client
2. Updated krb5.conf with proper realm details.
3. kdb5_util create -s <kerberos database created. password provided on prompt>
4. echo "*/admin@EMEA.EBS.CORPINTRA.NET" >> /var/lib/kerberos/krb5kdc/kadm5.acl <grants permissions to admin principals>
5. rckrb5kdc restart & then rckadmind restart
6. kadmin.local -q "addprinc admin/admin" <creating principal>
7. kadmin.local -q "list_principals" <verified principals>
8. kinit admin/admin@EMEA.EBS.CORPINTRA.NET <initialise>
9. klist
Create kerberos user:
10. kadmin.local
- addprinc himansu@EMEA.EBS.CORPINTRA.NET
(provide password, when prompted)
11. ktutil
- addent -password -p himansu@EMEA.EBS.CORPINTRA.NET -k 1 -e RC4-HMAC
- wkt himansu.keytab
- q
12. ls -lrt himansu.keytab
13. kinit -kt himansu.keytab himansu@EMEA.EBS.CORPINTRA.NET
EXCEPTION:
kinit(v5): Key table entry not found while getting initial credentials
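In case it helps anyone hitting the same error: "Key table entry not found" usually means the entry in the keytab does not match what the KDC holds for that principal (wrong kvno, wrong encryption type, or a typo in the principal name). A minimal sketch of how one might verify the keytab and, if needed, regenerate it straight from the KDC with kadmin.local (using -norandkey so the existing password keeps working):
# compare kvno and enctype in the keytab with what the KDC reports
klist -kte himansu.keytab
kadmin.local -q "getprinc himansu@EMEA.EBS.CORPINTRA.NET"
# regenerate the keytab from the KDC without changing the principal's key
kadmin.local -q "ktadd -norandkey -k himansu.keytab himansu@EMEA.EBS.CORPINTRA.NET"
kinit -kt himansu.keytab himansu@EMEA.EBS.CORPINTRA.NET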
I'm trying to create a custom DDEV Provider, to import the current database and also user generated files from the web server.
I want to use it with TYPO3 projects, where I develop the EXT locally with DDEV (because it's awesome :) ), and I want to update my local database and also the "fileadmin" files with the help of the ddev pull function.
I've read the docs: Introduction to Hosting Provider Integration and I tested the bash commands locally within the DDEV Container (ddev ssh) and I'm able to connect to the remote Webserver and make a database dump and transfer it to the local DDEV container.
So I added the bash commands to my custom provider .yaml file in the /provider/ folder.
Here is the current file:
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: password
  HOST_IP: 11.11.11.11
  SSH_USERNAME: username
  SSH_PASSWORD: password
  SSH_PORT: 22

db_pull_command:
  command: |
    # Create the .downloads folder if it doesn't exist
    mkdir -p /var/www/html/.ddev/.downloads
    # Execute the mysqldump on the remote webserver via SSH
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} 'mysqldump -h 127.0.0.1 -u ${DB_USER} -p ${DB_PASSWORD} ${DB_NAME} > /tmp/${DB_NAME}.sql.gz'
    # Download the sql file to the ddev folder
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql.gz /var/www/html/.ddev/.downloads/db.sql.gz
If I execute the pull with ddev pull my-provider I get the following Error:
Downloading database...
bash: 03: command not found
Pull failed: Failed to exec mkdir -p /var/www/html/.ddev/.downloads
I assumed that the commands are executed like I would within the DDEV Container (with ddev ssh). What am I missing?
My Environment:
TYPO3 v10.4.20
Windows 10 (WSL)
Docker Desktop 3.5.2
DDEV-Local version v1.17.7
architecture amd64
db drud/ddev-dbserver-mariadb-10.3:v1.17.7
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.17.0
docker 20.10.7
docker-compose 1.29.2
The web server is running on Plesk.
Note: I only tried to implement the db pull command so far.
UPDATE 09.11.21:
So I've gotten far enough that I'm able to update the database and also download the files. However, I'm only able to do it if I hardcode the variables. Every time I try to set up the environment_variables: I get the following error when I run ddev pull myProvider:
Downloading database...
bash: 03: command not found
Here is my current .yaml file with the environment_variables:, which currently doesn't work. I've tested all the commands within ddev ssh, and they work if I call them manually.
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: 'Password$'
  HOST_IP: 10.10.10.10
  SSH_USERNAME: username
  SSH_PORT: 21

auth_command:
  command: |
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )

db_pull_command:
  command: |
    mkdir -p /var/www/html/.ddev/.downloads
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} "mysqldump -h 127.0.0.1 -u ${DB_USER} -p'${DB_PASSWORD}' ${DB_NAME} > /tmp/${DB_NAME}.sql"
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql /var/www/html/.ddev/.downloads/db.sql
    gzip -f /var/www/html/.ddev/.downloads/db.sql

files_pull_command:
  command: |
    scp -P ${SSH_PORT} -r ${SSH_USERNAME}@${HOST_IP}:/path/to/public/fileadmin/user_upload /var/www/html/.ddev/.downloads/files
Do I declare the variables the wrong way? Or what is it that I'm missing?
For anyone who has trouble connecting via ssh without the password prompt, you can run the following commands:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@host
Afterwards you should be able to connect without a password prompt. Try the following: ssh -p 22 username@host
Before you try to ddev pull you have to execute ddev auth ssh.
Thanks to @rfay for pointing me in the right direction.
The problem was that my password contained a special character (not a $ though) which needed to be escaped.
After escaping it correctly, like so:
environment_variables:
  DB_PASSWORD: 'Password\&\'
the ddev pull works.
I hope my .yaml file helps someone else that needs to pull from a webserver.
I am trying to use the 'dotnet dev-certs' tool to export an https certificate to include with a Docker image. Right now I am using:
dotnet dev-certs https -v -ep $(HOME)\.aspnet\https -p <password>
and I get the error:
Exporting the certificate including the private key.
Writing exported certificate to path 'xxx\.aspnet\https'.
Failed writing the certificate to the target path
Exception message: Access to the path 'xxx\.aspnet\https' is denied.
An error ocurred exporting the certificate.
Exception message: Access to the path 'xxx\.aspnet\https' is denied.
There was an error exporting HTTPS developer certificate to a file.
The problem I see is that no matter what path I supply to export the certificate to I get the same 'Access to the path is denied' error. What am I missing? I know this command has been suggested in numerous places. But I cannot seem to get it to work.
Thank you.
The export path should specify a file, not a directory. This fixed the issue for me on Mac:
dotnet dev-certs https -v -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p <password>
For Ubuntu users:
install libnss3-tools:
sudo apt-get update -y
sudo apt-get install -y libnss3-tools
create or verify if the folder below exists on machine:
$HOME/.pki/nssdb
export the certificate:
dotnet dev-certs https -v -ep ${HOME}/.aspnet/https/aspnetapp.pfx
Run the following commands:
certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n localhost -i /home/<REPLACE_WITH_YOUR_USER>/.aspnet/https/aspnetapp.pfx
certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n localhost -i /home/<REPLACE_WITH_YOUR_USER>/.aspnet/https/aspnetapp.pfx
exit and restart the browser
Source: https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-5.0&tabs=visual-studio#ssl-linux
For me the problem was that I was using .NET 5 under CentOS 7.8. Uninstalling .NET 5 and using the .NET Core 3.1 SDK instead solved the problem.
What I'm trying to do is to convert this installing script for webodm (https://gist.github.com/lkpanganiban/5226cc8dd59cb39cdc1946259c3fea6e) written in bash to be used in tcsh shell under a freenas jail.
I have now reached a part where I can't find a solution, and my hope is that someone can enlighten me about what to do next.
The line that is triggering the problem is:
su - postgres -c "psql -d webodm_dev -c "\""CREATE EXTENSION postgis;"\"" "
The whole error line:
ERROR: could not load library "/usr/local/lib/postgresql/plpgsql.so": dlopen (/usr/local/lib/postgresql/plpgsql.so) failed: /usr/local/lib/postgresql/plpgsql.so: Undefined symbol "MakeExpandedObjectReadOnly"
pkg info gives:
postgis24-2.4.5_1 Geographic objects support for PostgreSQL databases
postgresql95-client-9.5.15_2 PostgreSQL database (client)
postgresql95-contrib-9.5.15_2 The contrib utilities from the PostgreSQL distribution
postgresql95-server-9.5.15_2 PostgreSQL is the most advanced open-source database available anywhere
And yes the file exists:
root@webodm2:~ # ls -l /usr/local/lib/postgresql/plpgsql.so
-rwxr-xr-x 1 root wheel 195119 Feb 7 18:16 /usr/local/lib/postgresql/plpgsql.so
root@webodm2:~ #
So, does anyone have an idea?
I faced this issue after the upgrade from Postgres 11 to 12; here is how to fix it for Linux and Mac (without brew):
$ sudo su postgres
$ /usr/lib/postgresql/12/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/11/main \
--new-datadir=/var/lib/postgresql/12/main \
--old-bindir=/usr/lib/postgresql/11/bin \
--new-bindir=/usr/lib/postgresql/12/bin \
--old-options '-c config_file=/etc/postgresql/11/main/postgresql.conf' \
--new-options '-c config_file=/etc/postgresql/12/main/postgresql.conf'
You can add --check to do a dry test upgrade without changing anything in your Postgres installation.
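For example, the same invocation as above with --check only reports whether the two clusters are compatible and makes no changes:
/usr/lib/postgresql/12/bin/pg_upgrade --check \
  --old-datadir=/var/lib/postgresql/11/main \
  --new-datadir=/var/lib/postgresql/12/main \
  --old-bindir=/usr/lib/postgresql/11/bin \
  --new-bindir=/usr/lib/postgresql/12/bin \
  --old-options '-c config_file=/etc/postgresql/11/main/postgresql.conf' \
  --new-options '-c config_file=/etc/postgresql/12/main/postgresql.conf'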
For Mac users with a brew installation:
After the upgrade, run the following command:
$ brew postgresql-upgrade-database
That error message means that you have a plpgsql.so from PostgreSQL 9.5 or earlier and try to use it with PostgreSQL 9.6 or later.
Either you are picking up the wrong library, or you copied files around.
Anyway, the problem has nothing to do with PostGIS.
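A quick way to check whether the running server and the plpgsql.so on disk actually come from the same installation (a hedged sketch; it assumes pg_config from the same package set is on the PATH):
psql -c 'SHOW server_version;'            # version of the server you are connected to
pg_config --version                        # version of the installed binaries
pg_config --pkglibdir                      # directory the server loads plpgsql.so from
ls -l "$(pg_config --pkglibdir)/plpgsql.so"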
It might be that your database storage is on an outdated version; try to run the checks before running brew postgresql-upgrade-database, or try to restart your service with brew services restart postgres.
psql --version # 11.4 <--- psql cli version
psql -c 'select version();' postgres # 10.2 <--- db version in storage
brew info postgres # check pg info <--- found solution
brew postgresql-upgrade-database # upgrade db version in storage and fixed the issue
I just installed PostgreSQL on a MacBook with brew install postgresql. Then I try to run psql, but it asks for a password and then shows psql: FATAL: password authentication failed for user "<myname>".
I have not set up anything, and entering my Mac password does nothing. What should I do now?
So your username probably does not exist, as the default username that ships with the db is postgres.
Further, I was prevented from submitting an empty password, which is blank by default for the postgres user.
You might try
cd ~/
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'postgres';"
Password: YOUR_LOGIN_PWD_HERE (required for sudo)
and then to use
psql -U postgres
password: postgres
I'm not 100% sure of which SO answer I got this from, perhaps here. Hope this helps.
The above did not work for me.
The below steps worked for me:
Step 1: Uninstall Postgres using the following steps:
sudo /Library/PostgreSQL/10/uninstall-postgresql.app/Contents/MacOS/uninstall-postgresql
PS: my postgres version is 10
Step 2: Remove Postgresql user
System Preferences > Users & Groups > Unlock > remove the postgresql user by clicking "-"
Step 3: Remove existing databases
rm -rf /usr/local/var/postgres/*
Step 4: Install and Start Postgres using brew
brew update
brew install postgresql
brew services start postgresql
Step 5: Create database
initdb /usr/local/var/postgres -E utf8
You can start accessing postgres
psql -h localhost -d postgres
The answer by Pramida almost worked for me... the difference is I was using 9.6 Postgres.
So I ran:
sudo /Library/PostgreSQL/9.6/uninstall-postgresql.app/Contents/MacOS/installbuilder.sh
and somehow that got rid of my username and almost all of the postgres user, I think.
I then blew away the directory
sudo rm -rf /Library/PostgreSQL/9.6
And then I installed using brew above.
In my case (MacBook, Big Sur v11) you should create /usr/local/var/postgresql@12,
then open a terminal in /opt/homebrew/Cellar/postgresql@12/12.8/bin and run
/opt/homebrew/Cellar/postgresql@12/12.8/bin/initdb -D /usr/local/var/postgresql@12
then run in the terminal
echo 'export PATH="/opt/homebrew/Cellar/postgresql@12/12.8/bin:$PATH"' >> ~/.zshrc
then
psql -h localhost -p 5432 -d postgres
and enjoy creating users.
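If you need a role to connect with afterwards, a minimal sketch (role, password, and database names here are only placeholders):
psql -h localhost -p 5432 -d postgres -c "CREATE ROLE myuser WITH LOGIN PASSWORD 'change-me';"   # hypothetical role name
psql -h localhost -p 5432 -d postgres -c "CREATE DATABASE mydb OWNER myuser;"                    # hypothetical database name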
rm -rf /usr/local/var/postgres && initdb /usr/local/var/postgres -E utf8
logrotate (3.8.6) on RHEL7 is giving me PAM auth rejections when running the postrotate script with a substitute user.
The same configuration works fine when I force logrotate from the shell as root, but it fails with the error below when logrotate is executed by cron.
logrotate config
/var/log/rabbitmq/*.log {
    su rabbitmq rabbitmq
    daily
    dateext
    dateyesterday
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        su rabbitmq -s /bin/sh "echo"-c
    endscript
}
Content in /var/log/secure:
May 3 22:57:01 ip-10-6-78-5 su: pam_unix(su:auth): auth could not identify password for [rabbitmq]
May 3 22:57:01 ip-10-6-78-5 su: pam_unix(su:auth): auth could not identify password for [rabbitmq]
May 3 22:57:01 ip-10-6-78-5 su: pam_succeed_if(su:auth): requirement "uid >= 1000" not met by user "rabbitmq"
May 3 22:57:01 ip-10-6-78-5 su: pam_succeed_if(su:auth): requirement "uid >= 1000" not met by user "rabbitmq"
May 3 22:57:01 ip-10-6-78-5 su: FAILED SU (to rabbitmq) root on none
May 3
Could you try adding sudo before su and using the -c parameter for the su command, so your command looks like:
sudo su rabbitmq -c '/bin/sh "echo"-c'
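If sudo is not an option in that environment, a hedged alternative (assuming the goal is simply to run the post-rotation command as rabbitmq from root's cron run of logrotate without interactive PAM password authentication) would be runuser, which is meant for running commands as another user from root-owned scripts:
# sketch only: this would replace the su line inside postrotate/endscript
runuser -u rabbitmq -- /bin/sh -c 'echo'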