OS: macOS Sierra
Browser: Safari v11.0.3
Problem: Cannot launch safaridriver even though safaridriver --enable has been run.
Error
'safaridriver could not launch because it is not configured correctly or you need to authenticate. Re-run safaridriver(1) and pass the '--enable' flag to configure and/or authenticate. For more information, consult the safaridriver(1) man page.'
Error Log
qa01:~ svctest$ safaridriver --enable
Password:
qa01:~ svctest$ safaridriver -p 0
ERROR: safaridriver could not launch because it is not configured
correctly or you need to authenticate. Re-run safaridriver(1) and
pass the '--enable' flag to configure and/or authenticate.
For more information, consult the safaridriver(1) man page.
qa01:~ svctest$
You need to run it as a superuser; that will correctly save the new configuration:
sudo safaridriver --enable
Many bugs have been fixed in safaridriver --enable since this question was asked, including fixes for running under sudo. Please close the question.
The issue is the logged-in account's privileges. Even though an admin password was used to enable the Safari driver, the logged-in account was not an admin.
This works for me:
sudo -u <your user> safaridriver --enable
After that, you can see that Develop -> Allow Remote Automation is checked in the Safari menu.
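For completeness, a minimal way to verify the fix, assuming you are running as the account that will drive Safari (the port 4444 and the curl check against the WebDriver /status endpoint are my own additions, not from the answers above):
# enable Remote Automation for the logged-in user (prompts for an admin password)
sudo -u "$USER" safaridriver --enable
# start safaridriver on an explicit port and ask for its status
safaridriver -p 4444 &
curl http://localhost:4444/status
If the enable step took effect, the status call should report the driver as ready instead of printing the ERROR shown above.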
Related
I have installed, uninstalled, and reinstalled gcloud on macOS Monterey (M1 chip), and I'm facing the following situation: when I run gcloud auth login in Terminal, it displays this message:
WARNING: Failed to start a local webserver listening on any port between 8085 and 8184. Please check your firewall settings or locally running programs that may be blocking or using those ports.
WARNING: Defaulting to --no-browser mode.
You are authorizing gcloud CLI without access to a web browser. Please run the following command on a machine with a web browser and copy its output back here. Make sure the installed gcloud version is 372.0.0 or newer.
I have tried installing it in many ways; the last attempt was this:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
But I am still facing that message.
Can anybody help me with this?
This happens because my Internet provider has blocked these ports. Some fixes will have to be made to the router.
A workaround for this:
gcloud auth login --no-launch-browser
Follow the instructions given in the Terminal.
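As a rough sketch of that workaround (the exact prompt wording depends on the gcloud version, so treat this as an outline rather than exact output):
# print an authorization URL instead of trying to open a local browser
gcloud auth login --no-launch-browser
# open the printed URL in a browser on any machine, sign in, and paste the
# response gcloud asks for back into the waiting Terminal prompt
# confirm the credentials were stored
gcloud auth list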
Hi everyone,
I hope that someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu 18.04.4 LTS (Bionic Beaver).
I have docker-engine installed according to the installation instructions for Bionic on the GitHub page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that because I am running with sudo, the user trying to access this directory is root, not myuser, and so it fails. However, if I leave off the sudo and run
docker-compose pull
I get this error instead:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
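Group membership only takes effect in new sessions, so after running the command above either log out and back in or, as a small sketch, pick the group up in the current shell:
# start a subshell with the new docker group active (or log out and back in)
newgrp docker
# verify: the group should be listed and the daemon reachable without sudo
id -nG
docker version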
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
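If you are not sure which files ended up owned by root, a hedged way to find and reclaim them (paths taken from this setup; the group choice is as uncertain as noted above) is:
# list anything under the two directories not owned by the current user
find ~/.docker ~/.password-store ! -user "$USER" -ls
# reclaim ownership of both trees
sudo chown -R "$USER":"$USER" ~/.docker ~/.password-store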
Then, the magic that fixed it all. From
https://ask.csdn.net/questions/5153956
you have to do this:
export GPG_TTY=$(tty)
It is this last step that makes gpg work.
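To make that stick across sessions and to confirm gpg can actually prompt on your terminal, something like the following should work (assuming bash; the signing test is just an illustration):
# persist the setting for future shells
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc
# quick test: this should raise a passphrase prompt on the terminal
# rather than erroring out
echo test | gpg --clearsign > /dev/null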
Then, you can log in to your repos, if you have to, without using sudo:
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
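One way to check what Docker is actually using for credentials is to inspect config.json and query the helper directly; this is only a sketch, and the exact output depends on how the guide set things up:
# show the configured credential helper (expected to be "pass")
grep credsStore ~/.docker/config.json
# list the registries the helper has stored; this should not need sudo
docker-credential-pass list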
Once you log in, you should be able to do a
docker-compose pull
without sudo, from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this, because I had already unlocked the key by following the steps in the link above to check whether docker-credential-pass had the right credential store password stored.
And that should do it.
I am no longer able to do sudo su - to get root access. I am logged in as the test user. I mapped test from unconfined_u to the SELinux user staff_u. I restarted the VM on Google Cloud. I can log in to the VM as the test user, but if I do sudo su -, I get the following error:
su: avc.c:74: avc_context_to_sid_raw: Assertion `avc_running' failed.
Aborted
It seems to be a known bug in CentOS and Red Hat:
0011249: su: avc.c:74: avc_context_to_sid_raw: Assertion `avc_running' failed.
Description: Whenever I execute a su - command after switching into the sysadm_r role, I get this fault. The system is in enforcing mode when I get this error. When in permissive mode, this does not trigger the error. I attempted to address the problem with audit2allow to see if that could temporarily resolve the problem while I looked for a permanent solution, but that did not work either.
Here are the two references:
CentOS bug
Red Hat bug
I changed the following file in CentOS:
/etc/sysconfig/selinux
and set it to
SELINUX=disabled
and ran the reboot command.
But now I cannot log in to my server. It shows the error
Permission denied, please try again.
when I log in with my username and password.
How can I get back into my server?
Disabling SELinux and rebooting seems to set the UsePAM option in your sshd_config to no, which explicitly does not work in RHEL:
# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several problems
If you are able to detach the volume and access it externally, you can set this back to yes and you should be able to SSH in again.
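A rough sketch of that repair, assuming the boot disk has been attached to a rescue VM as /dev/sdb1 (the device name will differ on your system):
# mount the detached boot disk on the rescue VM
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue
# flip UsePAM back on in the broken system's sshd_config
sudo sed -i 's/^UsePAM no/UsePAM yes/' /mnt/rescue/etc/ssh/sshd_config
# unmount, detach, reattach to the original VM, and boot
sudo umount /mnt/rescue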
OK, I got the solution.
When I log in with the domain after disabling SELinux, let's say
ssh user@domain
I cannot log in even with the correct password,
but I can log in as
ssh user@ip
I don't know the reason, but this actually worked for me.
I have a capistrano deployment recipe I've been using for some time to deploy my web app and then restart apache/nginx using the sudo command. Recently cap deploy is hanging when I try to execute these sudo commands. I see the output:
"[sudo] password for "
with my server name and the remote login, but this is not a secure login prompt. The cap shell just hangs, waiting for more output, and does not allow me to type my password in to complete the remote sudo command.
Is there a way to fix this, or a decent workaround? I did not want to remove the sudo password prompt of my remote user for web restart commands.
This seems to happen when connecting to CentOS machines as well. Add the following line to your Capistrano deploy file:
default_run_options[:pty] = true
Also make sure to use the sudo helper instead of executing sudo in your run commands directly. For example:
# not
run "sudo chown root:root /etc/my.cnf"
# but
sudo "chown root:root /etc/my.cnf"
The other advice may be sound, but I found that once I updated to Capistrano 2.5.3, the problem went away. I have to make sure I stop running the default versions of the tools that came with my OS.
# prevent sudo prompting for password
set :sudo_prompt, ""