Cracking/changing htpasswd - brute-force

I have an old website with a folder that's protected by htpasswd. However, it's over a decade old, and I have since forgotten the password. I wish to access the contents of the folder. I'm able to view the directory contents via the control panel, but I'm unable to access the individual protected files.
I have access to the htpasswd file, and it has lines of user:password, where the password appears to be hashed (13 characters, uppercase/lowercase/digits). I tried loading it into John and it detected the hashes as CRYPT, but it was unable to crack them even after a few hours. Are there better ways of accessing the files? Given server access, can I reset or remove the password protection? Or, failing that, are there better/faster ways of brute-forcing the password hash?

So you have SSH access to the server, but you don't remember the passwords stored in the htpasswd file, so you can't access the protected files via the control panel?
You could just login and rename (disable) the .htaccess file:
$ mv protected_dir/.htaccess protected_dir/.old.htaccess
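Since the question also asks about resetting: another option is to overwrite the user's entry in the .htpasswd file with a freshly generated hash. A minimal sketch using openssl (the username alice, the password newpassword, and the file path are placeholders; Apache accepts the $apr1$ MD5 format shown here):

```shell
# Generate an Apache-compatible $apr1$ (MD5) hash for the new password
HASH=$(openssl passwd -apr1 newpassword)

# Print the new htpasswd line; redirect it into the real file to apply it,
# e.g.  echo "alice:$HASH" > protected_dir/.htpasswd
echo "alice:$HASH"
```

If the httpd utilities are installed, `htpasswd -b file user password` does the same thing in one step.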
If you want to crack the old password, first read this to understand the password format: https://www.slashroot.in/how-are-passwords-stored-linux-understanding-hashing-shadow-utils
Then you can use John. A 13-character hash of that shape is traditional DES crypt(3); the first two characters of each hash are the salt, and John reads the salt from the hash itself, so there is no need to pre-process the file:
$ john --format=descrypt --wordlist=passwords.txt passwords_to_crack.txt
Good luck!

Related

upgrade from postgres 12 to 13 causes user authority problem

I have a Windows install of Postgres 12.6-1 running on port 6432. I have installed a newer version on port 9432 to test the database against our application.
First I tried to dump the globals from the 12 cluster to SQL and load the user list into the 13 cluster. This was a disaster, as all the users, including the superuser, became inaccessible.
So I read the release notes, and they say to use pg_upgrade. After a lot of pain I got it to run, but it appears to have just run pg_dumpall like I did.
pg_upgrade failed at the point of creating databases as the local superuser, because the user load had damaged the passwords and the database could no longer be accessed.
I have checked the SQL output from the pg_dumpall command with and without --binary-upgrade, and it appears to be identical in its generation of MD5 hash data from the database.
Do I need another tool?
Am I doing something wrong?
The 13 database is empty, so any drastic action would be OK.
The v13 installation defaults the authentication method in pg_hba.conf to scram-sha-256. If you have loaded passwords stored with that method, keep it. If you (like me) unknowingly loaded md5-hashed passwords, just change the method to md5 on the relevant lines of pg_hba.conf and restart Postgres.
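For illustration, the edit looks like this (the host line shown is the stock v13 local default; your addresses and databases may differ):

```text
# pg_hba.conf -- before (v13 default)
host    all    all    127.0.0.1/32    scram-sha-256

# pg_hba.conf -- after: accepts the md5 hashes loaded from the v12 dump
host    all    all    127.0.0.1/32    md5
```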
If you wish to keep scram-sha-256, then I suspect there is no alternative but to edit the pg_dumpall output, change the syntax to a plain-text password entry, and reset the passwords on the new db. I know this works because I just tried loading a sample of the file with a plain-text password, and I was able to log in as the new user.
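As a sketch of that edit (the role name and hash below are invented), a dumped line like the first one is replaced with the second; on load, the server re-hashes the plain-text password using its configured password_encryption, i.e. scram-sha-256 on a default v13:

```sql
-- line as emitted by pg_dumpall (md5 hash from the v12 cluster)
ALTER ROLE appuser WITH LOGIN PASSWORD 'md5a3556571e93b0d20722ba62be61e8c2d';

-- edited: plain-text password, to be reset properly after first login
ALTER ROLE appuser WITH LOGIN PASSWORD 'temporary-password';
```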
Thanks to Adrian Klaver and jjanes.

PostgreSQL local security (pg_hba.conf)

In PostgreSQL we can just change local md5 to trust in pg_hba.conf; then we can access all the data in the database using psql without needing a password. So anyone who can access the local machine can change this line.
So, is there a way to password-protect our database even if someone changes pg_hba.conf to trust?
(I want to create an offline app and need to protect the client database. I need something like MS Access: once we set the password, it always asks for the password.)
As long as the client has root/administrator access on the computer, you can't do much about pg_hba.conf. You could make it read-only, but root can override anything. You could mount the config file on a read-only file system, but this is too complicated.
The solution can only be at the database level (not the OS or application): encrypted data, plus triggers where you implement supplementary security.
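A minimal sketch of the encrypted-data idea using the pgcrypto extension's symmetric encryption (the table name and key are invented; crucially, the key must come from the application and never be stored in the database, otherwise trust auth exposes it too):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE secrets (
    id      serial PRIMARY KEY,
    payload bytea  -- ciphertext; useless without the key
);

-- encrypt on insert; the key is supplied by the application
INSERT INTO secrets (payload)
VALUES (pgp_sym_encrypt('sensitive text', 'application-supplied-key'));

-- decrypt on read; fails without the correct key, even under "trust" auth
SELECT pgp_sym_decrypt(payload, 'application-supplied-key') FROM secrets;
```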
I don't think PostgreSQL is the answer for your requirement; maybe SQLite is the right one.

Postgres Data Encryption at Rest Using LUKS with dm-crypt

We are trying to encrypt Postgres data at rest, but can't find any documentation on encrypting the Postgres data folder using LUKS with dm-crypt.
No special instructions are necessary – PostgreSQL will use the opened encrypted filesystem just like any other file system. Just point initdb to a directory in the opened file system, and it will create a PostgreSQL cluster there.
Automatic server restarts will fail, because someone has to enter the passphrase to open the encrypted volume.
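A hedged sketch of the usual sequence (the device /dev/sdb1, the mapper name pgcrypt, and the mount point are placeholders; all commands need root, and luksFormat destroys any existing data on the device):

```shell
# Format the block device as a LUKS container (DESTROYS existing data)
cryptsetup luksFormat /dev/sdb1

# Open it; the cleartext device appears as /dev/mapper/pgcrypt
cryptsetup open /dev/sdb1 pgcrypt

# Create a file system on it and mount it
mkfs.ext4 /dev/mapper/pgcrypt
mkdir -p /var/lib/pgsql/encrypted
mount /dev/mapper/pgcrypt /var/lib/pgsql/encrypted
chown postgres:postgres /var/lib/pgsql/encrypted

# Initialize the cluster inside the opened, encrypted file system
sudo -u postgres initdb -D /var/lib/pgsql/encrypted/data
```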
Of all the ways to protect a database, encrypting the file system is the least useful:
Usually, attacks on a database happen via the client, normally with SQL injection. Encrypting the file system won't help there.
The other common attack vector is backups: backups taken with pg_dump or pg_basebackup are not encrypted.
But I guess you know why you need it.

Mercurial Keyring Prompts for Password Every time

I am using the Mercurial keyring extension to store the password for my remote repository on Bitbucket, so I don't have to enter it every time I push to the remote repository. Ironically, it asks me for the password to unlock the keyring every time I need to access it, thereby completely defeating its purpose. What am I doing wrong?
In my global mercurial config (~/.hgrc) I have the following lines:
[extensions]
hgext.mercurial_keyring = /etc/mercurial/mercurial_keyring.py
In my repo mercurial config (.hg/hgrc), I have:
[paths]
default = https://username@bitbucket.org/username/repo
Example:
> hg out
> comparing with https://username@bitbucket.org/username/repo
> Please enter password for encrypted keyring:
I have tried uninstalling the keyring and trying again. I've also played about with configuration settings I've found online to no avail. I also couldn't find anything on encrypted keyring and non-encrypted keyring in regards to mercurial.
How can I get it so that I don't have to enter a password at all when I perform actions to the remote repo?
I don't know if this was already the case at the time the question was asked, but now the solution is directly explained in the keyring extension wiki link in your question.
Just enabling the keyring extension is not enough, you also need to tell Mercurial the remote repo and the username in the config file.
Quote from the link:
3.2. Repository configuration (HTTP)
Edit repository-local .hg/hgrc and save there the remote repository
path and the username, but do not save the password. For example:
[paths]
myremote = https://my.server.com/hgrepo/someproject
[auth]
myremote.schemes = http https
myremote.prefix = my.server.com/hgrepo
myremote.username = mekk
Simpler form with url-embedded name can also be used:
[paths]
bitbucket = https://User@bitbucket.org/User/project_name/
Note: if both the username and password are given in .hg/hgrc, the
extension will use them without using the password database. If the
username is not given, extension will prompt for credentials every
time, also without saving the password. So, in both cases, it is
effectively reverting to the default behaviour.
Note that you don't need to specify all the information shown in those examples.
On my machine (Mercurial 5.0.2 on Windows), I'm using a simpler form which also works for multiple repos.
This is a 1:1 copy from my actual config file:
[extensions]
mercurial_keyring =
[auth]
bb.prefix = https://bitbucket.org/
bb.username = christianspecht
This uses the keyring extension to save the password for the user christianspecht, for all remote repos whose URL starts with https://bitbucket.org/.
The prefix bb can be freely picked, so you can use this to save multiple URLs/usernames at once.
This works perfectly well (at least until Bitbucket drops Mercurial support in a few weeks...) - it asks for the password once, then it's automatically saved and it never asks again.
"it asks me for the password to unlock the key-ring. What am I doing wrong?"
Nothing. As the keyring docs explain, the password for accessing the keyring must be provided once per session.

How to send a file using scp from a perl script using only ftp hostname, login and password, WITHOUT ppk file

This is what I use when I do have ppk file.
open scp://username@hostname -privatekey="\path\to\ppkfile\ppkfile.ppk"
put filename.csv /home/destination_folder
exit
Thank you!
One possible solution is to configure the destination machine for passwordless SSH and then use the following command to transfer or copy the file:
scp $source_filepath username@machinename:$destination_filepath
Suic's recommendation looks like exactly what you're looking for, though personally I haven't used that particular module.
I've had good success with Net::SFTP::Foreign, which is a Perl wrapper around an SFTP client. It supports password-based login in most situations (see its documentation for details). In my experience SFTP is usually available whenever SCP is, and it gives a greater level of control.
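A minimal sketch with Net::SFTP::Foreign (the host, credentials, and paths are placeholders; password-based login requires an SSH server that permits it, and on some systems the IO::Pty module):

```perl
use strict;
use warnings;
use Net::SFTP::Foreign;

# Placeholder credentials -- substitute your own
my $sftp = Net::SFTP::Foreign->new(
    'hostname',
    user     => 'username',
    password => 'secret',
);
$sftp->die_on_error("SFTP connection failed");

# Upload the file to the destination directory
$sftp->put('filename.csv', '/home/destination_folder/filename.csv')
    or die "put failed: " . $sftp->error;
```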