I am trying to have Cloudera Manager run a check on a Kudu cluster, which eventually boils down to the following command, run as the kudu user:
kudu cluster ksck master_host
The output of this command is:
Not authorized: leader master liveness check error: Could not connect to the cluster: Client connection negotiation failed: client connection to 10.x.y.z:7051: server requires authentication, but client does not have Kerberos credentials available
If I run this command manually from the command line, as kudu, I get the same error. If I try to run kinit, I am asked for a password for the kudu user, but as far as I understand it, all the "backend" users are passwordless.
If I update $HOME/.klogin to allow my user with ksu, I do get a Kerberos ticket (klist shows it), but it is still not a ticket for the kudu user, and I end up with the same error message.
My Kerberos-fu is weak, but as far as I can tell the cluster is well configured: Spark/Impala/Kudu work together without authorisation issues, the host inspector is all green, and there are Kudu credentials for all hosts of the cluster.
How can I get this command to run properly from Cloudera Manager?
Half answer:
To run the command from the command line, you can run it from the account of a user who is listed in Kudu's superuser_acl setting. Then, as this user, run kinit, and then you can run the kudu cluster ksck command, as sketched below.
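For example, a minimal sketch of that workaround (the principal and realm below are placeholders and will differ per cluster):

# obtain a ticket for a principal that is listed in Kudu's superuser_acl
kinit some_superuser@EXAMPLE.COM
# then run the consistency check against the master
kudu cluster ksck master_host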
This does not explain why the same user still cannot run the rebalance from Cloudera Manager, but at least I have a workaround.
I got the following error when trying to use a Notary client to get the digest of a signed image in my IBM Container Registry. Can anyone advise how to solve it?
# notary -s https://us.icr.io:4443 lookup us.icr.io/securek8s/hello-world latest
* fatal: unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.
BTW, I built the Notary client from https://github.com/theupdateframework/notary
Notary uses your credentials from your Docker login cache. The error message that you received suggests that your login to us.icr.io isn't valid. This usually means that your credentials have expired.
If you have the ibmcloud CLI and the container-registry plugin installed, you can refresh your login by making sure that you're targeting the US South registry (ibmcloud cr region-set us.icr.io) and then logging in with ibmcloud cr login.
If you don't have the CLI plugin installed, you can log in using Docker commands directly. For more information, see Automating access to IBM Cloud Container Registry in the IBM Cloud docs.
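A minimal sequence with the CLI plugin, assuming the registry and image from the question (the login method may differ in your account):

ibmcloud login
ibmcloud cr region-set us.icr.io
ibmcloud cr login
# retry the lookup once the Docker credentials have been refreshed
notary -s https://us.icr.io:4443 lookup us.icr.io/securek8s/hello-world latest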
The application runs fine in normal mode, but it fails when I run it as a task using cf run-task ".java-buildpack/open_jdk_jre/bin/java org.springframework.boot.loader.JarLauncher" --name task1. The error is:
c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: Error requesting access token.
Basically, it is not able to read the SPRING_PROFILES_ACTIVE profile value.
I think it was not able to connect to the PCF server and get the access token that is required to connect to the config server. This problem can arise when the application runs in a network behind a firewall and has no direct connection to the internet or to the PCF server.
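A quick way to test that diagnosis is to check connectivity from inside the app container (a sketch; the app name and config server URL are placeholders):

# open a shell in the app container and see whether the config server is reachable at all
cf ssh my-app -c "curl -v https://config-server.example.com"

If that fails, the task will fail the same way, since tasks run in the same space and are subject to the same egress rules as the app.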
We have an on-premises TFS 2017 setup hosted on our internal network. Let's call it tfs.OurInternalDomain.com.
The TFS application and its build controllers and agents are all hosted on our internal network.
Our production servers are hosted on a separate domain (data center) for security reasons.
I am trying to deploy TFS build artifacts (files and folders) from our internal network onto our production server using a TFS Release Management definition.
I am able to copy the files onto a folder on our production server (which is on a separate domain) from our internal network using the "Copy files from" task, with a separate ID passed via $(AdminLogin) and $(Password). This user ID is a local admin on the production server. The TFS services run under a separate ID on our domain.
These are the variables for task: "Copy files from"
Source=$(System.DefaultWorkingDirectory)/$(BuildDefinitionName)/$(BuildArtifactName)
Machines=$(ServerOneOnSeparateDomain)
Admin Login=$(AdminLogin)
Password=$(Password)
Destination Folder=$(BuildDropLocation)
So far so good.
The next task is to run a PowerShell script on the target machine, and that is where the build agent on our internal network fails to execute the script. I tried both the HTTP and HTTPS protocols. Below is the error log when HTTP was selected.
Executing the powershell script: D:\TFS2017Build\Agent1\tasks\PowerShellOnTargetMachines\1.0.41\PowerShellOnTargetMachines.ps1
Deployment started for machine: '<ServerOneOnSeparateDomain>.com:5985'
##[debug]Deployment logs for Deployment operation on <ServerOneOnSeparateDomain>:5985
##[debug]Permission denied while trying to connect to the target machine <ServerOneOnSeparateDomain> on the port:5985 via power shell remoting. Please check the following link for instructions: https://go.microsoft.com/fwlink/?LinkID=390236System.Management.Automation.Remoting.PSRemotingTransportException: Connecting to remote server <ServerOneOnSeparateDomain> failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport.
Note that computers in the TrustedHosts list might not be authenticated.
Below is the output when I execute winrm on the production server:
winrm quickconfig
WinRM service is already running on this machine.
WinRM is already set up for remote management on this computer.
Is there a way to fix this without disturbing the existing TFS architecture (the TFS application, build controller, and agents hosted on the internal domain) while still being able to execute a PowerShell script on a separate domain? If not, is there any other way to fix this?
My end objective is to be able to deploy code to production via TFS, which is hosted on our internal network.
I can provide more details if required.
According to this part of the error info:
##[debug]Permission denied while trying to connect to the target machine <ServerOneOnSeparateDomain> on the port:5985 via power shell remoting. Please check the following link for instructions: https://go.microsoft.com/fwlink/?LinkID=390236System.Management.Automation.Remoting.PSRemotingTransportException: Connecting to remote server <ServerOneOnSeparateDomain> failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request.
Permission denied: the account used here must have permission to connect via PowerShell remoting.
To establish a PSSession or run a command on a remote computer, the user must have permission to use the session configurations on the remote computer.
By default, only members of the Administrators group on a computer have permission to use the default session configurations. Therefore, only members of the Administrators group can connect to the computer remotely.
To allow other users to connect to the local computer, give the user Execute permissions to the default session configurations on the local computer.
The following command opens a property sheet that lets you change the security descriptor of the default Microsoft.PowerShell session configuration on the local computer.
Set-PSSessionConfiguration Microsoft.PowerShell -ShowSecurityDescriptorUI
If that fails, try adding the target machine to the TrustedHosts list on the machine that initiates the connection (here, the build agent). You can read how here: http://technet.microsoft.com/en-us/library/hh847850.aspx.
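For example, on the build agent (the host name is a placeholder):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "ServerOneOnSeparateDomain.com" -Concatenate -Force
# verify the entry was added
Get-Item WSMan:\localhost\Client\TrustedHosts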
If you want to use HTTPS, you need to configure WinRM to listen on port 5986.
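A sketch of creating the HTTPS listener on the target machine, run from an elevated prompt (it requires a server certificate whose subject matches the host name; the thumbprint is a placeholder):

winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="ServerOneOnSeparateDomain.com";CertificateThumbprint="<certificate thumbprint>"}

Remember to open TCP port 5986 in the firewall on the target machine as well.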
For more details, please refer to the similar issue and tutorial below:
Release Management Error - Permission denied while trying to connect to the target machine
Configuring WinRM over HTTPS to enable PowerShell remoting
I am setting up an integration-test build where I am just trying to start up a Windows service.
I have used InvokeProcess to run a PowerShell script which just does the following:
Start-Service ServiceName
The script fails when I run it as part of the build process, but when I execute the same script outside TFS it works. I get the following error in the TFS logs:
Start-Service : Service 'ServiceName (ServiceName)' cannot be started due to the following error: Cannot open ServiceName
service on computer '.'.
Then I tried changing the way I start the service and used SC.exe with the arguments "start ServiceName" in InvokeProcess, and I get an Access Denied error in TFS as follows:
SC start ServiceName.
[SC] StartService: OpenService FAILED 5:
Access is denied.
I am using the Network Service account to run the build.
After searching for a while, I have come to the conclusion that I have to run InvokeProcess with elevated privileges, but I don't know how I would do that within TFS.
Any help is much appreciated.
We run our build agent as a custom service account and give that domain account admin access on the servers we deploy to.
I have resolved the issue by adding the Network Service account to the Administrators group. I might not go with this solution, as it seems wrong to assign administrative rights to the Network Service account, but I don't know how to assign service start/stop permissions to Network Service without adding this account to the Administrators group.
In short, I agree with the answer that a custom service account must be used to run the build with appropriate privileges.
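For what it's worth, service start/stop rights can be granted to a specific account without making it an administrator by editing the service's security descriptor with sc.exe. This is only a sketch: take the output of sc sdshow and add a single ACE for NETWORK SERVICE (SID alias NS) granting start (RP), stop (WP), and query-status (LC) rights; the full descriptor below is an illustration and must be replaced with your service's actual one:

sc.exe sdshow ServiceName
sc.exe sdset ServiceName "D:(A;;RPWPLCRC;;;NS)(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)"

After that, Network Service should be able to start and stop that one service without being in the Administrators group.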
We have a SQL Server machine (name: SQL) that launches an SSIS job, consisting of multiple steps, with proxy credentials (a service account).
One of these steps requires files to be put in a local folder on a remote machine (name: VM) and then executes a program that securely copies those files to a service on the net. I have successfully run both PowerShell and WinRM commands to do this (as administrator), but I need to find a way to run them without being an admin on SQL.
All of these steps work fine when the service account is a local administrator of both SQL and VM. However, we do not want the service account to be a local admin on SQL.
The command I run is:
Invoke-Command -ComputerName vm.fqdn -ScriptBlock {E:\Share\ThirdParty\FTP_Admin\FtpUpload.bat}
I found a Google post suggesting I need to give access to the root/CIMV2 WMI namespace. I gave the service account full control and restarted the WinRM service.
When it fails (NOT running as administrator), the security log gets populated with event ID 4656 entries.
Any idea what I can try? I have been stumped on this for a while.
Here is the link on the Microsoft technet forums:
http://social.technet.microsoft.com/Forums/en-US/ITCG/thread/70a5a870-b911-4b1a-9c68-e7d91142e511
Long story short: if you are running into these problems, make sure the server (Server 2008 R2) has been patched to at least post-SP1.
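A quick way to check the patch level on the server (assuming PowerShell 3.0 or later; on older hosts use Get-WmiObject instead of Get-CimInstance):

Get-CimInstance Win32_OperatingSystem | Select-Object Caption, ServicePackMajorVersion, BuildNumber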
By default, only administrators have access to the (default) runspace you are connecting to:
On the vm.fqdn, try running:
Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI
and grant full control to the service account. Restart the WinRM service (just confirm when asked).
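Putting it together, a sketch of the whole fix (run the first command on vm.fqdn; the script block is the one from the question):

# on vm.fqdn: open the security descriptor UI and grant the service account Full Control
Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI
# confirm the WinRM restart when prompted, then re-test from SQL as the (non-admin) service account
Invoke-Command -ComputerName vm.fqdn -ScriptBlock { E:\Share\ThirdParty\FTP_Admin\FtpUpload.bat }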