TestConfiguration.ps1 error: FabricDataRoot system drive could not be confirmed for machine XXXX - azure-service-fabric

We have a Microsoft Dynamics 365 Finance + Operations (on-premises) cluster, but its self-signed certificates have expired, so I followed the Microsoft manual to do the certificate rotation.
I cleaned up the existing environment and want to redeploy Service Fabric. When I run TestConfiguration.ps1 to test the cluster configuration, it fails with the error below, saying the FabricDataRoot system drive could not be confirmed, even though I have defined the FabricDataRoot value in clusterConfig.json.
Thanks
command output:
PS C:\D365Install\Microsoft.Azure.ServiceFabric.WindowsServer.8.2.1571.9590> .\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json
Trace folder already exists. Traces will be written to existing trace folder: C:\D365Install\Microsoft.Azure.ServiceFabric.WindowsServer.8.2.1571.9590\DeploymentTraces
Running Best Practices Analyzer...
Opening TraceWriter SFDeployer, path C:\D365Install\Microsoft.Azure.ServiceFabric.WindowsServer.8.2.1571.9590\DeploymentTraces\SFDeployer-637862273938616676.trace
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.5.
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.9.
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.7.
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.6.
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.11.
FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.12.
System drive for FabricDataRoot could not be confirmed for a machine in the configuration. See DeploymentTraces for details.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
Closing TraceWriter SFDeployer, path C:\D365Install\Microsoft.Azure.ServiceFabric.WindowsServer.8.2.1571.9590\DeploymentTraces\SFDeployer-637862273938616676.trace
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : True
FirewallAvailable : True
RpcCheckPassed : True
NoDomainController : True
NoConflictingInstallations : True
FabricInstallable : True
DataDrivesAvailable : False
DrivesEnoughAvailableSpace :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
clusterConfig.json (excerpt):
"name": "Setup",
"parameters": [
    {
        "name": "FabricDataRoot",
        "value": "C:\\SF"
    },
    {
        "name": "FabricLogRoot",
        "value": "C:\\SF\\Log"
    }
]
Trace log:
2022/04/22-04:55:26.559,Verbose,3628,SystemFabricDeployer.SFDeployer,Fabric is set up on machine 192.168.99.9: False
2022/04/22-04:55:26.606,Error,3100,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.9.
2022/04/22-04:55:26.606,Error,5452,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.8.
2022/04/22-04:55:26.606,Error,3628,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.5.
2022/04/22-04:55:26.637,Error,3100,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.6.
2022/04/22-04:55:26.637,Error,3628,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.12.
2022/04/22-04:55:26.637,Error,5452,SystemFabricDeployer.SFDeployer,FabricSettings FabricDataRoot system drive could not be confirmed for machine 192.168.99.11.
2022/04/22-04:55:26.653,Error,5452,SystemFabricDeployer.SFDeployer,System drive for FabricDataRoot could not be confirmed for a machine in the configuration. See DeploymentTraces for details.
2022/04/22-04:55:26.653,Error,5452,SystemFabricDeployer.SFDeployer,Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.

Thanks to Rob for helping reformat the question.
I have solved the problem: the antivirus software had disabled the computers' default administrative shares. When I disabled the antivirus, the script worked normally and the check now reports:
DataDrivesAvailable : True
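For anyone who hits the same DataDrivesAvailable : False result, here is a hedged sketch of how to check from the deployment machine that each node's default administrative share is reachable (node IPs taken from the trace above; it assumes the BPA relies on SMB and the C$ admin share, which matches the antivirus explanation):

# Check SMB (TCP 445) and the default C$ administrative share for each cluster node.
# Node list is taken from the trace output above; adjust to your cluster.
$nodes = '192.168.99.5','192.168.99.6','192.168.99.7','192.168.99.9','192.168.99.11','192.168.99.12'
foreach ($node in $nodes) {
    $smbOpen = (Test-NetConnection -ComputerName $node -Port 445).TcpTestSucceeded
    $shareOk = Test-Path "\\$node\C$"
    '{0}: SMB 445 open = {1}, \\{0}\C$ reachable = {2}' -f $node, $smbOpen, $shareOk
}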

Related

Having Issues Checking Workstation Bitlocker Status Remotely

I've created a .ps1 file that runs on our UTIL server against all workstations on our domain. It checks whether each computer is online, skips offline computers, checks BitLocker status, formats the results, and writes them to a CSV file.
The script essentially uses manage-bde -cn $Computer -status C: and works great on most machines. However, there are a few machines that are confirmed on the network and online that do not reply with the status.
I ran the same command manually in PowerShell on the UTIL server against the affected machines and got: "ERROR: An error occurred while connecting to the BitLocker management interface. Check that you have administrative rights on the computer and the computer name is correct." If I connect to the machine and check the status locally, it displays the results with no problem.
I'm logged into the UTIL server as an admin, running PowerShell as admin. My question is: what would cause some computers to return results successfully while others fail to connect to the BitLocker management interface? Has anyone seen this before?
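For reference, a minimal sketch of the kind of per-computer loop described above (the input list, the C: target, and the CSV path are assumptions, not the asker's actual script):

# Sketch: ping each workstation, skip offline ones, query BitLocker status remotely, write a CSV.
$computers = Get-Content 'C:\Scripts\workstations.txt'   # assumed input list
$results = foreach ($computer in $computers) {
    if (-not (Test-Connection -ComputerName $computer -Count 1 -Quiet)) { continue }   # skip offline
    $status = manage-bde -cn $computer -status C: 2>&1 | Out-String
    [pscustomobject]@{ Computer = $computer; Status = $status.Trim() }
}
$results | Export-Csv 'C:\Scripts\BitLockerStatus.csv' -NoTypeInformation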
What process is executing your script when you're not in an interactive session? A scheduled task, a service? What security context does that process run in?
Based on some other threads I have seen on this, you should check these items:
Not running the command as an admin
Not having a compatible TPM
The TPM being disabled in the BIOS (which it is on many computers)
The TPM or BitLocker services not being started.
A TPM reporting as a 1.2 TPM when in fact it is a 1.1 TPM.
I had the same issue on my network.
I solved it by setting up one rule in the remote clients' Windows Firewall. The rule allows WMI (Windows Management Instrumentation) access to the remote machine (see this thread for further info: https://social.technet.microsoft.com/Forums/lync/en-US/a2f2abb3-35f6-4c1a-beee-d09f311b4507/group-policy-to-allow-wmi-access-to-remote-machine?forum=winservergen). A PowerShell sketch of this change follows below.
Regards
Andrea
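For completeness, a hedged sketch of making that change with PowerShell on a remote client instead of Group Policy (it uses the built-in WMI firewall rule group and assumes Windows 8 / Server 2012 or later with the NetSecurity module):

# Enable the built-in "Windows Management Instrumentation (WMI)" firewall rule group
# so remote tools such as manage-bde can reach the machine over WMI.
Enable-NetFirewallRule -DisplayGroup 'Windows Management Instrumentation (WMI)'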

VSTS "Windows Machine File Copy" task failed: ErrorMessage: 'The network path was not found'

In VSTS, is it possible to copy files built in VSTS to my local PC? I found a task called "Windows Machine File Copy" and tried to use it to copy files to my local PC.
There is a machine field:
[screenshot: machine field]
I followed the instructions and entered my PC's computer name there. Then I shared a folder named "test1". I want to copy the files built in VSTS to the "test1" folder on my PC, but it fails with this error:
[screenshot: error]
Could someone who has experience with this task provide me with some help? Thank you.
The error message 'The network path was not found' points out the problem clearly.
Copying files to your local PC requires that the agent machine can reach it over the network, and the Hosted Agent obviously cannot reach your local PC.
As a workaround, you can set up a private build agent on a machine in your local network (see Deploy an agent on Windows), grant the build agent's service account permission to access the target PC, and then you can copy the files.
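Once the private agent is in place, a quick hedged sanity check from the agent machine that the share is actually reachable (the computer name below is a placeholder; "test1" is the share from the question):

# Run on the private build agent machine, ideally under the agent's service account.
Test-Path '\\MY-PC\test1'   # MY-PC is a placeholder for the target computer name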

Why can't I see C:\ProgramData\Amazon folder on Microsoft 2016 free ware instance? I need to run a script to attach additional volume to that instance

I created a Windows 2016 instance on AWS (free tier). I created a "Cold HDD" volume and attached it to the Windows 2016 instance through the Management Console. So far so good.
I am able to RDP into the instance after getting the Administrator password, but I can't see the attached "Cold HDD" volume when I log into the Windows 2016 instance.
So I launched "Disk Management" on the instance and enabled the new volume.
I googled and learned that we need to run a PowerShell script to enable all attached volumes at the start of the instance.
Script is:
<powershell>
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
</powershell>
But I can't find C:\ProgramData\Amazon folder at all on C drive of the windows 2016 instance.
I don't know what to do.
https://aws.amazon.com/premiumsupport/knowledge-center/secondary-volumes-windows-server-2016/
The InitializeDisks.ps1 script is part of EC2Launch.
To accommodate the change from .NET Framework to .NET Core, the EC2Config service has been deprecated on Windows Server 2016 AMIs and replaced by EC2Launch. EC2Launch is a bundle of Windows PowerShell scripts that perform many of the tasks performed by the EC2Config service.
This should be installed by default on the Windows 2016 AMI, but note that the C:\ProgramData\Amazon directory is hidden.
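A hedged way to confirm the script is really there despite the hidden folder (path taken from the user data above):

# -Force includes hidden items; Test-Path works regardless of the hidden attribute.
Get-ChildItem -Path 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts' -Force
Test-Path 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1'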
If for some reason it is not installed, you should be able to install it manually as follows:
To download and install the latest version of EC2Launch
If you have already installed and configured EC2Launch on an instance, make a backup of the EC2Launch configuration file. The installation process does not preserve changes in this file. By default, the file is located in the following directory: C:\ProgramData\Amazon\EC2-Windows\Launch\Config.
Download EC2Launch.zip from the following location to a directory on the instance:
https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/EC2-Windows-Launch.zip
Download the Install.ps1 PowerShell script from the following location to the same directory where you downloaded EC2Launch.zip:
https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/install.ps1
Run Install.ps1 (a PowerShell sketch of these steps follows the source link below)
Replace your backup of the EC2Launch configuration file in the C:\ProgramData\Amazon\EC2-Windows\Launch\Config directory.
Source: http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html
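A hedged PowerShell sketch of those manual steps (URLs as listed above; the working directory is an assumption, and install.ps1 should be run from an elevated session):

# Download EC2Launch.zip and install.ps1, then run the installer.
$dir = 'C:\Temp\EC2Launch'   # assumed working directory
New-Item -ItemType Directory -Path $dir -Force | Out-Null
Invoke-WebRequest 'https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/EC2-Windows-Launch.zip' -OutFile "$dir\EC2-Windows-Launch.zip"
Invoke-WebRequest 'https://s3.amazonaws.com/ec2-downloads-windows/EC2Launch/latest/install.ps1' -OutFile "$dir\install.ps1"
& "$dir\install.ps1"
# Afterwards, restore any backed-up config to C:\ProgramData\Amazon\EC2-Windows\Launch\Config.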
As a test, I just deployed the Windows 2016 Base AMI template and can confirm that C:\ProgramData\Amazon does exist (ProgramData is a hidden directory, so go to View > Show Hidden Files to see it).
I also added a Cold Storage HDD and (as you noted) the following User Data (under the "Advanced Details" section of the "Configure Instance Details" page) to my instance on launch:
<powershell>
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
</powershell>
And can confirm that on boot of the VM the Cold HDD was correctly/automatically initialized and available as the D:\ drive.
If you didn't add the required User Data when you first launched your instance, you can add it later by selecting your instance and going to Actions > Instance Settings > View/Change User Data.
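If the instance is already running and you would rather not touch the user data, a hedged alternative is to run the script once yourself from an elevated PowerShell session on the instance:

# One-off manual run of the same script the user data would invoke at boot.
& 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1'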

Kerberos: kinit on Windows 8.1 leads to empty ticket cache

I installed Kerberos for Windows on a newly set-up Windows 8.1 machine.
Domain: not set
Workgroup: WORKGROUP
I edited the krb5.ini file in the C:\ProgramData\MIT\Kerberos5 directory like this:
[libdefaults]
default_realm = HSHADOOPCLUSTER.DE
[realms]
HSHADOOPCLUSTER.DE = {
admin_server = had-job.server.de
kdc = had-job.server.de
}
After a restart, I ran kinit -kt daniel.keytab daniel to authenticate against the realm from the console. Getting a ticket by user and password via the Kerberos Ticket Manager also seems to work fine, as the ticket is shown in the UI.
What I'm wondering about is that when I call klist I get an empty list back, which says something like "cached tickets: 0".
This doesn't seem normal to me, as my Ubuntu computer shows valid tickets with klist after a kinit.
What am I doing wrong? Is there some more configuration to do? Sometimes I read about a ksetup tool, but I don't know which settings are necessary here and which are not...
UPDATE:
After I set
[libdefaults]
...
default_ccache_name = FILE:C:/ProgramData/Kerberos/krb5cc_%{uid}
in my krb5.conf, the kinit command via console and via Kerberos Ticket Manager creates a file in the specified path. So far everything looks good.
But: the kinit command creates ticket files with very different file names (long vs. short), depending on whether I run the console as "admin" (short name) or not (long name); see the screenshot below. The Kerberos Ticket Manager only shows one of the tickets:
If run as admin:
Shows the ticket I created via admin console
Creates ticket files with short file names
If run as normal:
Shows the ticket I created via "normal" console
Creates ticket files with long file names
The klist command still doesn't show the cached tickets, regardless of whether the console was opened as admin or not.
The MIT Kerberos documentation states that...
There are several kinds of credentials cache supported in the MIT Kerberos library. Not all are supported on every platform ...
FILE caches are the simplest and most portable. A simple flat file format is used to store one credential after another. This is the default...
API is only implemented on Windows. It communicates with a server process that holds the credentials in memory... The default credential cache name is determined by ...
The KRB5CCNAME environment variable...
The default_ccache_name profile variable in [libdefaults]
The hardcoded default, DEFCCNAME
But AFAIK, on Windows the hard-coded default cache is API: and that's what you can manage with the UI. kinit also uses that protocol by default.
I personally could never get klist to use that protocol, even with the "standard" syntax, i.e. either
  klist -c API:
or
  set KRB5CCNAME=API:
  klist
On the other hand, if you point KRB5CCNAME to a FILE:***** then you can kinit then klist the ticket; but it will not show in the UI and will not be available to web browsers and the like.
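As a concrete, hedged example of that last point, using the realm, principal, and keytab from the question (the cache path is an assumption; run from a PowerShell console, or use set KRB5CCNAME=... in cmd as shown above):

$env:KRB5CCNAME = 'FILE:C:\Users\daniel\krb5cc_daniel'   # assumed cache file path
kinit -kt daniel.keytab daniel
klist   # make sure this resolves to MIT's klist, see the next answer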
If the klist command doesn't show the tickets even after setting an environment variable like KRB5CCNAME (e.g. set KRB5CCNAME=C:\kerberos_cache\cache\krb5cache; note this is a file, not a directory, and you have to create the parent directory manually), then chances are that the klist you're running is not the one from the MIT Kerberos for Windows installation in C:\Program Files\MIT\Kerberos\bin, but rather the klist built into Windows in C:\Windows\system32.
You can check that by running which klist if you have the Cygwin tools. In that case, the simplest solution is to copy the MIT klist.exe within the MIT Kerberos installation's bin directory to a new name, e.g. klist_mit.exe; the cache entries should then be shown when you run klist_mit.
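Without Cygwin, a hedged way to see which klist wins on PATH (where.exe lists matches in PATH order; the MIT path is the default install location mentioned above):

where.exe klist   # first hit is what a bare "klist" runs; System32 means the Windows tool
& 'C:\Program Files\MIT\Kerberos\bin\klist.exe'   # call MIT's klist explicitly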

Setting up a VM for Selenium tests in online TFBuild

EDIT: I overlooked "Prerequisites for executing build definitions is to have your build agent ready, here are steps to setup your build agent, you can find more details in this blog." from these steps. I'm currently trying to get that build agent up and running on an Azure VM and will report back.
I'm following these steps to try to get CD and Selenium tests running through my Visual Studio Online TFBuild. I've had some helpful hints after sending feedback via email, but I'm still not able to get past the file copy step.
I've created a Windows 10 Enterprise VM.
I've correctly set the ip address in my build test machines and am able to RDP into the machine.
I've successfully (after several attempts) gotten remote PowerShell working (though I'm not 100% certain about winrm set winrm/config/client '@{TrustedHosts="Hosted Agent"}'). I got the name from https://{}.visualstudio.com/DefaultCollection/_admin/_AgentQueue or Build > edit build > General > Default Queue > Manage.
PS C:\users\cdd\Desktop> winrm quickconfig
WinRM service is already running on this machine.
WinRM is already set up for remote management on this computer.
This seems to be ready; I first had to get past
PS remoting is not supported when network connection type is public. Please check http://blogs.msdn.com/b/powershell/archive/2009/04/03/setting-network-location-to-private.aspx.
and then ran:
echo "setting executionpolicy"
powershell -command "& Set-ExecutionPolicy -executionpolicy unrestricted -force"
echo "setting remoting"
powershell -command "& Enable-PSRemoting -force"
That's a lot of details, but I'm still stuck after that:
Copy started for - '{ip}:5985'
Copy status for machine '{ip}:5985' : 'Failed'
Failed to execute the powershell script. Consult the logs below for details of the error.
Failed to connect to the path \{ip} with the user cdd for copying.System error 53 has occurred.
The network path was not found.
For more info please refer to http://aka.ms/windowsfilecopyreadme
I have a few questions:
Do I have the correct name of the VM?
Do you have steps on how to get the VM setup to allow FileCopy?
I'm probably missing something else; I'm not familiar with PowerShell or with getting this set up. What can I try to make the path available for the cdd administration user that I set up when I created the VM?
To copy files to an Azure VM, you should use the "Azure File Copy" step provided in the VSO build definition. It provides detailed settings for accessing your Azure VM.
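If you do still need the Windows Machine File Copy route, "System error 53 / The network path was not found" usually means the SMB path itself is unreachable from the machine doing the copy; a hedged check (the IP below is a placeholder for the {ip} in your logs):

$vmIp = '203.0.113.10'   # placeholder for the target machine's IP
Test-NetConnection -ComputerName $vmIp -Port 445   # is SMB reachable at all?
Test-Path "\\$vmIp\C$"                             # does the admin share answer?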