vault permission denied from remote host - hashicorp-vault

I have a very basic Vault server with out-of-the-box policies.
I can unseal the server from a remote host, but when I call "vault secrets list" from the remote host with the root token, I get "permission denied".
At the same time, when I call this command from the local host, everything works.
Vault version - 1.2.3
What is wrong with my configuration?

It turned out to be my own odd mistake - I had a VAULT_TOKEN env variable set to an old, invalid token.
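For anyone hitting the same thing, the fix is just to check for and clear the stale variable on the remote host (a minimal sketch; the token value is a placeholder):
echo $VAULT_TOKEN    # shows the old token that overrides the token helper
unset VAULT_TOKEN
vault login <root-token>
vault secrets list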

Related

cloud_sql_proxy uses unspecified credential file instead of gcloud auth login

My computer rebooted, and for reasons unknown to me, cloud_sql_proxy tries to use a credential file instead of my gcloud auth login:
cloud_sql_proxy --instances=instance:region:orcadb=tcp:54321
2022/09/06 12:43:07 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/09/06 12:43:07 using credential file for authentication; email=example@example.iam.gserviceaccount.com
2022/09/06 12:43:08 errors parsing config:
googleapi: Error 403: The client is not authorized to make this request., notAuthorized
I checked that my gcloud login is correct by running gcloud auth login and then verifying with gcloud config list account.
I also tried adding the flag --enable_iam_login to the command.
My permissions are already set to owner.
How can I use cloud_sql_proxy without the credential file? Thanks! :)
If no credential file is passed to cloud_sql_proxy, it first takes the file path from the GOOGLE_APPLICATION_CREDENTIALS env var.
Check whether this env var has a value (the syntax below is for a Windows shell):
echo %GOOGLE_APPLICATION_CREDENTIALS%
If it does, clear it:
set GOOGLE_APPLICATION_CREDENTIALS=
Now cloud_sql_proxy should pick up the current gcloud account.
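On Linux or macOS, the equivalent check and cleanup would be (assuming a POSIX shell):
echo $GOOGLE_APPLICATION_CREDENTIALS    # print the path, if any
unset GOOGLE_APPLICATION_CREDENTIALS    # remove it for the current session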

AWS EC2 403 Forbidden error

I have a development version of my application deployed on ec2 and I'm getting a 403 Forbidden Error on navigating to the given public IPV4 address.
I can start rails c after ssh-ing into the instance and manipulate the data from there.
Since the 403 Forbidden is from nginx, I checked the error logs and found the following:
*191 directory index of "/home/ubuntu/<app-name>/client/" is forbidden, client: <client-ip>, server: _, request: "GET / HTTP/1.1", host: "<host-ip>"
Which is clearly the error I'm getting. Checking my psql logs shows me the following:
So the error is in how my credentials are set up.
I tried to go to my pg_hba.conf and my navigation route was cd /var/lib/postgresql/9.5/main, but I can't cd into main after that since it says permission denied.
I tried to view pg_hba.conf by running:
sudo vim /var/lib/postgresql/9.5/main/pg_hba.conf, but it tries to create a new file, so clearly the file doesn't exist at that path.
I already ensured that my credentials are correct by doing sudo -u postgres psql
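(Side note: Postgres can report where pg_hba.conf actually lives, which avoids guessing at paths - a sketch assuming the postgres OS user exists:)
sudo -u postgres psql -c 'SHOW hba_file;'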
Also, the request from my front end is being made on port 80, and I checked that I have that open in the security configuration for my EC2 server.
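The nginx message itself ("directory index ... is forbidden") usually means there is no index file in /home/ubuntu/<app-name>/client/ and autoindex is off. A minimal server block for serving a built front end might look like the following (a sketch only; the root path comes from the error message above, everything else is an assumption):
server {
    listen 80;
    server_name _;

    # the built front end must actually contain an index.html here
    root /home/ubuntu/<app-name>/client;
    index index.html;

    location / {
        # fall back to index.html for client-side routing
        try_files $uri $uri/ /index.html;
    }
}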

Client secret not provided in request [unauthorized_client]

Here is what I tried: I logged in to the server where Keycloak is deployed, went to the /keycloak/bin/ directory,
and ran the following command:
./kcadm.sh config credentials --server https://<IP ADRESS>:8666/auth --realm master --user admin --password admin
But this command throws an error:
Client secret not provided in request [unauthorized_client]
Why is client information required? I have to do the following through the Admin CLI:
Log in to Keycloak
Create a new realm
Create a user and user group.
So, in my view, no client secret or any such information should be required in this case, but the admin-cli command is complaining about exactly that.
Here is the solution to the above problem. After installing Keycloak, it will by default create a few clients (account, admin-cli, broker, master-realm, security-admin-console), and among these the admin-cli client comes with access-type=public. So if you are trying to log in through Keycloak, you have to run the below command from the /keycloak/bin directory:
./kcadm.sh config credentials --server https://<IP ADDRESS>:8666/auth --realm master --user admin --password admin --client admin-cli
As I am using https, you may get the below error as well:
Failed to send request - sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
To overcome this issue, generate the certificate, put it inside the /keycloak/security/ssl folder, and then run this command:
kcadm.sh config truststore --trustpass $PASSWORD ~/.keycloak/truststore.jks
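If you don't already have a truststore, one way to build it is with keytool from the exported server certificate (a sketch; the certificate file name and alias are placeholders):
keytool -import -alias keycloak -file keycloak.crt -keystore ~/.keycloak/truststore.jks -storepass $PASSWORD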
Now, to the question of how to create the realm: after logging in through the admin-cli client, use the below command:
./kcadm.sh create realms -s realm=demorealm -s enabled=true
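The remaining steps from the question (user and user group) follow the same pattern; a hedged sketch using the documented kcadm subcommands, with placeholder names:
./kcadm.sh create users -r demorealm -s username=demouser -s enabled=true
./kcadm.sh set-password -r demorealm --username demouser --new-password changeme
./kcadm.sh create groups -r demorealm -s name=demogroup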

Private Github Repositories with Envoy

Has anybody had problems deploying with Laravel's Envoy when using private GitHub repos?
When manually cloning my repo from the production server, the ssh key seems to be accessible, but when using Envoy, I always get a "Permission denied (publickey)" error.
Thanks
It is probably because the ssh key on your remote server requires a password.
If you change the Envoy.blade.php to perform some other task you should be able to establish whether you are connecting to your remote correctly.
@servers(['web' => 'user@domain.com'])
@task('deploy')
cd /path/to/site
git status
@endtask
Should return something like:
[user@domain.com]: On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
If you are connecting from a Mac or Linux machine, you probably don't have to enter your password, because your terminal is using ssh-agent, which silently handles your authentication.
Wikipedia article on ssh-agent
When connecting over ssh, ssh-agent isn't running, and the script is being prompted for a password, which is where it is failing.
To get around this you could generate a new key on the remote machine that doesn't use a password.
If you want to restrict the ssh key to a single repository on GitHub, have a look at deploy keys.
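For example, something like this on the remote machine (a sketch; key type, path, and comment are placeholders - the empty -N "" is what makes the key passwordless):
ssh-keygen -t ed25519 -f ~/.ssh/deploy_key -N "" -C "deploy key for my repo"
cat ~/.ssh/deploy_key.pub    # paste this into the repository's deploy keys on GitHub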
You need to pass -A in your ssh string (as per the man page: "Enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file").
You will also need to add your ssh key for agent forwarding (on the machine which can access the git remote, which I assume is your localhost):
ssh-add -K ~/.ssh/your_private_key
Something like this:
@servers(['web' => '-A user@domain.com'])
@task('deploy')
cd /path/to/site
git status
@endtask
Git remote commands should now work.
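Since the man page mentions the per-host option, the same thing can be configured in ~/.ssh/config instead of the command string (a sketch; the host name is a placeholder):
Host domain.com
    ForwardAgent yes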

Permission denied to new ssh user when pushing

I'm using the terminal. I want to delete the ssh keys from an old user (old_username) and set up a new one (new_username). I have followed this tutorial.
When I run ssh -T git@github.com I get the correct message:
"Hi new_username! You've successfully authenticated."
But when I try to push a repository I get denied:
remote: Permission to new_username/test2.git denied to old_username.
fatal: unable to access 'https://github.com/new_username/test2/': The requested URL returned error: 403
I've tried deleting the .ssh folder and setting up ssh again, but the problem persists.
Using an https URL means your ssh connection is not used. At all.
Try switching to ssh:
git clone git@github.com:new_username/test2
That will actually use your ssh credentials, meaning your public and private keys stored in ~/.ssh/id_rsa(.pub).
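If the repository is already cloned, there is no need to re-clone; just point the existing remote at the ssh URL (repo name taken from the question):
git remote set-url origin git@github.com:new_username/test2.git
git remote -v    # verify the new URL
git push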
If on Linux or OSX, check a file called ~/.netrc, which contains username/password information that most apps will use when connecting to remote servers. Yes, it even affects git via the https protocol. If you're using a frontend to connect to github, you probably need to clear its preferences so it stops trying to use the old username.
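A quick way to check for a stale entry (a sketch; the machine name assumes GitHub over https):
grep -A 2 'machine github.com' ~/.netrc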