Updating custom rules in SCOM when moving to new version of Active Directory

We are currently in the process of upgrading our Active Directory environment from 2012 to 2016.
As such, we have several custom rules set up within SCOM to alert on changes to certain AD groups.
These rules are set up in SCOM to target 'Active Directory Domain Controller Server 2012 R2 Computer Role'.
Is there any way to edit these rules so that they now target the 'Active Directory Domain Controller Server 2016 Computer Role'?
If not, is there a way to copy rules that would allow me to recreate them easily whilst changing the target?
Additionally, is there another target I could pick so that when we do upgrade to AD 2019 I don't need to revisit this task?

If you run the following PowerShell command, it will show you all versioned DC role classes and their immediate parent classes:
Get-SCOMClass -DisplayName "*Domain Controller Server * Computer Role" | ForEach-Object { Write-Output "$($_.DisplayName) => $($_.Base.Identifier.Path)" }
My output looks like:
Active Directory Domain Controller Server 2016 Computer Role => Microsoft.Windows.Server.AD.Library.DomainControllerRole
Active Directory Domain Controller Server 2012 R2 Computer Role => Microsoft.Windows.Server.AD.Library.DomainControllerRole
Active Directory Domain Controller Server 2000 Computer Role => Microsoft.Windows.Server.AD.DomainControllerRole
Active Directory Domain Controller Server 2008 Computer Role => Microsoft.Windows.Server.AD.DomainControllerRole
Active Directory Domain Controller Server 2012 Computer Role => Microsoft.Windows.Server.AD.Library.DomainControllerRole
Active Directory Domain Controller Server 2003 Computer Role => Microsoft.Windows.Server.AD.DomainControllerRole
So DCs for 2008 and below are based on Microsoft.Windows.Server.AD.DomainControllerRole, while everything above is based on Microsoft.Windows.Server.AD.Library.DomainControllerRole. Presumably you don't need the older versions, so your target should be Microsoft.Windows.Server.AD.Library.DomainControllerRole, unless MS changes the inheritance path again.
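If you want to confirm that the library class covers every domain controller you care about, a quick check (a minimal sketch, assuming the OperationsManager module is loaded and you are connected to your management group) is to list its instances:
Get-SCOMClass -Name Microsoft.Windows.Server.AD.Library.DomainControllerRole |
    Get-SCOMClassInstance |
    Select-Object DisplayName, HealthState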
Next, how to change your rules. It's not possible to change a rule's target in the SCOM Console, even if the rule's management pack is not sealed. However, if your MP is not sealed, you can:
Export your MP as XML.
Change the reference from the current versioned MP to the library (Microsoft.Windows.Server.AD.Class.Library).
Replace all current targets.
Import the MP back (a PowerShell sketch of the export/import steps follows the XML below).
Your new reference should look like this (run Get-SCOMManagementPack -Name Microsoft.Windows.Server.AD.Class.Library to verify the version in your environment):
<Reference Alias="MPAlias">
<ID>Microsoft.Windows.Server.AD.Class.Library</ID>
<Version>10.0.2.1</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
Then your rules' targets should look like:
<Rule ID="your_ID" Target="MPAlias!Microsoft.Windows.Server.AD.Library.DomainControllerRole" ...>
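To round out the export and import steps, a minimal PowerShell sketch; the MP display name and file paths are placeholders, not values from the original post:
Import-Module OperationsManager
# Export the unsealed MP that holds the custom rules
$mp = Get-SCOMManagementPack -DisplayName "My Custom AD Group Rules"   # hypothetical display name
$mp | Export-SCOMManagementPack -Path C:\Temp
# Edit C:\Temp\<MPName>.xml: swap the <Reference> and the rule targets as shown above,
# then import the edited file back
Import-SCOMManagementPack -FullName "C:\Temp\My.Custom.AD.Group.Rules.xml"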

Related

What does 'System.ConfigItem.ObjectStatusEnum.Active' represent in SCOM

I query the following SCOM endpoint: OperationsManager/data/objectInformation/<object id>
Among the response properties, I receive the following property:
<MonitoringObjectProperty>
<name>Object Status</name>
<value>System.ConfigItem.ObjectStatusEnum.Active</value>
</MonitoringObjectProperty>
I want to know what this property represents. I am looking for a way to query the API to figure out whether a given server is running or not (crashed, network disconnected, etc.) and am wondering if this property represents that attribute.
It is not used in SCOM; it's a leftover from System Center Service Manager (SCSM). Back in 2012, when SCSM was built, it reused the code base from SCOM 2012. The updated SCSM code was then merged back into SCOM (for some unknown reason), which created a bunch of unused properties and tables in the SCOM DB.
Many of these fields can still be updated manually with PowerShell, but I would not recommend it.
Here is a link for more information: Using the Asset Status Property in SCOM
Here is how you can use the API to get server status: SCOM REST API to get Windows/Linux machine's availability (whether the server is running & reachable)?
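If the OperationsManager PowerShell module is an option alongside the REST API, a minimal sketch for checking whether an agent-managed server is reachable (the computer name below is a placeholder):
Import-Module OperationsManager
Get-SCOMClass -Name Microsoft.Windows.Computer |
    Get-SCOMClassInstance |
    Where-Object { $_.DisplayName -eq 'SERVER01.contoso.com' } |
    Select-Object DisplayName, IsAvailable, HealthState, InMaintenanceMode
# IsAvailable is the availability flag SCOM tracks for the object; HealthState is the rolled-up monitor state.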

How to read a local csv file using Azure Data Factory and a self-hosted runtime?

I have a Windows Server VM with the ADF Integration Runtime installed running under a local account called deploy. This account is a member of the local admins group. The server is not domain-joined.
I created a new linked service (File System) and pointed it to a csv file on the root of the C drive as a test. When I test the connection I get Connection failed.
Error occurred when trying to access the file in Folder 'C:\etr.csv', File filter: ''. The directory name is invalid. Activity ID: 1b892702-7cc3-48d5-83c7-c680d6d15afd.
Any ideas on a fix?
The linked service needs to point to a folder on the target machine. In your screenshot, change C:\etr.csv to C:\ and then define a new dataset that uses the linked service and selects etr.csv.
The dataset represents the structure of the data within the linked data store, and the linked service defines the connection to the data source. So the linked service should point to the folder instead of the file: it should be C:\ rather than C:\etr.csv.
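For illustration, a minimal sketch of that split deployed with the Az.DataFactory module; the resource group, factory, integration runtime name and credentials are assumptions, not values from the original post:
# Linked service: points at the folder (C:\), not the file, and connects through the self-hosted IR
@'
{
  "name": "OnPremFileSystem",
  "properties": {
    "type": "FileServer",
    "typeProperties": {
      "host": "C:\\",
      "userId": "deploy",
      "password": { "type": "SecureString", "value": "<password>" }
    },
    "connectVia": { "referenceName": "SelfHostedIR", "type": "IntegrationRuntimeReference" }
  }
}
'@ | Set-Content .\OnPremFileSystem.json
Set-AzDataFactoryV2LinkedService -ResourceGroupName MyRG -DataFactoryName MyFactory -Name OnPremFileSystem -DefinitionFile .\OnPremFileSystem.json

# Dataset: selects the individual file inside the folder the linked service exposes
@'
{
  "name": "EtrCsv",
  "properties": {
    "linkedServiceName": { "referenceName": "OnPremFileSystem", "type": "LinkedServiceReference" },
    "type": "DelimitedText",
    "typeProperties": {
      "location": { "type": "FileServerLocation", "fileName": "etr.csv" },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    }
  }
}
'@ | Set-Content .\EtrCsv.json
Set-AzDataFactoryV2Dataset -ResourceGroupName MyRG -DataFactoryName MyFactory -Name EtrCsv -DefinitionFile .\EtrCsv.json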

Is it possible to have multiple Kerberos tickets on the same machine?

I have a use case where I need to connect to 2 different DBs using 2 different accounts, and I am using Kerberos for authentication.
Is it possible to create multiple Kerberos tickets on the same machine?
kinit account1@DOMAIN.COM (first ticket)
kinit account2@DOMAIN.COM (second ticket)
Whenever I run klist, I only see the most recently created ticket. It doesn't show all the tickets.
Next, I have a job that needs to first use the ticket for account1 (for the connection to DB1) and then use the ticket for account2 (for DB2).
Is that possible? How do I tell the DB connection which ticket to use?
I'm assuming MIT Kerberos and linking to those docs.
Try klist -A to show all tickets in the ticket cache. If there is only one, try switching your ccache type to DIR as described here:
DIR points to the storage location of the collection of the credential caches in FILE: format. It is most useful when dealing with multiple Kerberos realms and KDCs. For release 1.10 the directory must already exist. In post-1.10 releases the requirement is for parent directory to exist and the current process must have permissions to create the directory if it does not exist. See Collections of caches for details. New in release 1.10. The following residual forms are supported:
DIR:dirname
DIR::dirpath/filename - a single cache within the directory
Switching to a ccache of the latter type causes it to become the primary for the directory.
You do this by specifying the default ccache name as DIR:/path/to/cache in one of the ways described here (a short example follows the quoted list below).
The default credential cache name is determined by the following, in descending order of priority:
The KRB5CCNAME environment variable. For example, KRB5CCNAME=DIR:/mydir/.
The default_ccache_name profile variable in [libdefaults].
The hardcoded default, DEFCCNAME.
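A minimal shell sketch of the flow, assuming MIT Kerberos 1.10 or later; the cache directory and principals are placeholders:
export KRB5CCNAME=DIR:/home/me/krb5cc    # use a directory collection instead of a single FILE cache
kinit account1@DOMAIN.COM                # first ticket, stored as its own cache inside the collection
kinit account2@DOMAIN.COM                # second ticket, stored alongside the first
klist -A                                 # both tickets are now listed
kswitch -p account1@DOMAIN.COM           # make account1 the primary cache, then connect to DB1
# ... run the DB1 part of the job ...
kswitch -p account2@DOMAIN.COM           # switch the primary to account2, then connect to DB2
# ... run the DB2 part of the job ...
Alternatively, if the job runs the two connections as separate processes, you can skip the collection entirely and give each process its own FILE cache via KRB5CCNAME (for example KRB5CCNAME=FILE:/tmp/krb5cc_db1 for the first step), since GSSAPI-based DB clients generally pick up the default cache from that variable.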

TFSSecurity with vstfs:///Classification IDs

I'm trying to script some permission removals from a user list using PowerShell. The problem is that when I perform a command to list the user groups associated with the user, I get a generic result that cannot be used when I perform the command to remove the user from that group.
To get the groups for the user:
tfssecurity /im <domain>\<username> /server:<tfsserver>:8080/tfs
Results:
The target Team Foundation Server is
http://:8080/tfs/. Resolving identity
"\username"...
SID: S-1-5-21-3609080306-XXXXXXXXXX-XXXXXXXXX-5728
DN: CN=LastName, FirstName,OU=Disabled Users,DC=company,DC=com
Identity type: Windows user Logon name: \ Mail
address: username@domain.com Display name: lastname, firstname
Description: TFS User
Member of 1 group(s): [A] [TeamProject]\Developers
Done.
The Problem: When I try to remove the user from the group returned:
tfssecurity /g- "[TeamProject]\Developers" <domain>\<username> /collection:http://tfsserver:8080/tfs/collection/
I get:
The target Team Foundation Server is
http://tfsserver:8080/tfs/collection. Resolving identity
"[TeamProject]\Developers"...
Error: The identity cannot be resolved.
What I'm looking for, is something like:
vstfs:///Classification/TeamProject/af89c143-2f5e-4f5b-974e-903e8db86f73\Developers
I know that the TFS UI can provide those group SIDs, but I'd like to see if I can get them from TFSSecurity or another command-line tool that can be leveraged by PowerShell.
C:\Program Files (x86)\Microsoft Visual Studio 14.0>tfssecurity /g-
"[Archive Projects]\Developers" \
/server:http://:8080/tfs/ Microsoft (R) TFSSecurity - Team
Foundation Server Security Tool Copyright (c) Microsoft Corporation.
All rights reserved.
The target Team Foundation Server is
http://tfs-na.ihs.com:8080/tfs. Resolving identity
"[Archive Projects]\Developers"...
Error: Multiple identities found matching '[Archive
Projects]\Developers'. Please specify one of the following identities:
[Archive Projects]\Developers (vstfs:///Classification/TeamProject/8153b33c-addc-48c2-81c0-XxXXXxxxXXXX\Developers)
[Archive Projects]\Developers (vstfs:///Classification/TeamProject/f3d25cfe-41b3-4f30-a329-BBBbbBBBbbbb\Developers)
[Archive Projects]\Developers (vstfs:///Classification/TeamProject/c0820b8e-2af0-416c-88b5-CCcccCCCccCC\Developers)
There is no need to use the SID with the tfssecurity /g- command. Your command is right.
tfssecurity /g- "[TeamProject]\Developers" <domain>\<username> /collection:http://tfsserver:8080/tfs/collection/
According to the error The identity cannot be resolved, this looks more like a connectivity problem with the domain server. With a direct connection between the Team Foundation Server and the AD server, all identities can be resolved. Besides, if your account and the TFS server are in two different domains, make sure they trust each other; for details take a look at this question: TFSSecurity Unable to Resolve Identity
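Once the identity resolves, scripting the removals from a user list is mostly a loop around tfssecurity; a minimal sketch, where the file name, domain format and collection URL are placeholders. The "Multiple identities found" output above also suggests that when a group name is ambiguous across projects you can pass the vstfs:///Classification/... identity tfssecurity reports instead of the friendly name.
$collection = "http://tfsserver:8080/tfs/collection/"
$group = "[TeamProject]\Developers"            # or a vstfs:///Classification/... identity if the name is ambiguous
Get-Content .\users.txt | ForEach-Object {     # one DOMAIN\username per line
    & tfssecurity /g- $group $_ /collection:$collection
}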

How to deploy with Release Management to remote datacenter

We are running TFS and Release Management on premises, and I want to deploy my applications to a remote datacenter.
Access is over the internet, so there are no Windows shares available.
I am using the vNext templates, and as far as I know RM only supports UNC paths over Windows shares.
How can I use Release Management to deploy software to this datacenter?
I'm working on this solution:
Use WebDAV on an IIS server located inside the datacenter.
The RM server and the target can use the WebDAV client built into Windows and access it via a UNC path.
I haven't gotten this to work yet, as RM won't use the correct credentials to log on to the WebDAV server.
Updated with my solution
This is only a proof of concept, and is not production tested.
Set up a WebDAV site accessible from both the RM server and the target server.
Install the "Desktop Experience" feature on both servers.
Build the following DLL:
using System;
using System.ComponentModel.Composition;
using System.Diagnostics;
using System.IO;
using Microsoft.TeamFoundation.Release.Common.Helpers;
using Microsoft.TeamFoundation.Release.Composition.Definitions;
using Microsoft.TeamFoundation.Release.Composition.Services;

namespace DoTheNetUse
{
    [PartCreationPolicy(CreationPolicy.Shared)]
    [Export(typeof(IThreadSafeService))]
    public class DoTheNetUse : BaseThreadSafeService
    {
        public DoTheNetUse() : base("DoTheNetUse")
        {}

        protected override void DoAction()
        {
            Logger.WriteInformation("DoAction: [DoTheNetUse]");
            try
            {
                Logger.WriteInformation("# DoTheNetUse.Start #");
                Logger.WriteInformation("{0}, {1}", Environment.UserDomainName, Environment.UserName);
                {
                    Logger.WriteInformation("Net use std");
                    var si = new ProcessStartInfo("cmd.exe", @"/c ""net use \\sharedwebdavserver.somewhere\DavWWWRoot\ /user:webdavuser webdavuserpassword""");
                    si.UseShellExecute = false;
                    si.RedirectStandardOutput = true;
                    si.RedirectStandardError = true;
                    var p = Process.Start(si);
                    p.WaitForExit();
                    Logger.WriteInformation("Net use output std:" + p.StandardOutput.ReadToEnd());
                    Logger.WriteInformation("Net use output err:" + p.StandardError.ReadToEnd());
                }
                //##########################################################
                Logger.WriteInformation("# Done #");
            }
            catch (Exception e)
            {
                Logger.WriteError(e);
            }
        }
    }
}
Name it "ReleaseManagementMonitor2.dll"
Place it in the a subfolder to The service "ReleaseManagementMonitor"
Configure the shared path as the solution below states.
DO NOT OVERWITE THE EXISTING "ReleaseManagementMonitor2.dll"
The reason that this works is MEF.
The ReleaseManagementMonitor service tries to load the dll "ReleaseManagementMonitor2.dll" from all subfolders.
This dll implements a service interface that RM recognises.
It the runs "net use" to apply the credentials to the session that the service runs under, and thereby grants access to the otherwise inaccessible webdav server.
This solution is certified "Works on my machine"
RM does only work with UNC paths, you are right about that.
You can leverage that to make your scenario work -
In Theory
Create a boundary machine on the RM domain, where your drops can be copied.
The deploy action running in your datacenter can then copy bits from this boundary machine, using credentials that have access on that domain. (These credentials are provided by you in the WPF console.)
How this works
1. Have a dedicated machine on the RM server domain (say D1) that will be used as a boundary machine.
2. Define this machine as a boundary machine in RM by specifying a shared path that your datacenter will use. Go to the Settings tab in your WPF console and create a new variable - { Key = RMSharedUNCPath, Value = \\BoundaryMachine\DropsLocation }. RM now understands you want to use this machine as your boundary machine.
3. Make sure you take care of these permissions:
The RM server should have write permissions on the \\BoundaryMachine\DropsLocation share.
Pass down credentials of domain D1 to the target machine in the datacenter (domain D2), which can be used to access the share.
4. Credentials can be passed down from the WPF console; you will have to define the following two config variables in the Settings tab again:
Key = RMSharedUNCPathUser ; Value = domain D1 user name
Key = RMSharedUNCPathPwd ; Value = password for the user defined above.
PS - Variable names are case sensitive.
Also, to let RM know that you want to use the shared UNC mechanism, check the corresponding checkbox for the RM server and connect to it via IP rather than DNS name, since the servers are in different domains.
Try using Get-Content on the local server and then Set-Content on the remote server, passing the file contents over; you could package everything into an archive of some kind first.
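A minimal sketch of that idea over PowerShell remoting; the computer name, credential prompt and paths are placeholders:
$bytes = [System.IO.File]::ReadAllBytes('C:\Drops\MyApp.zip')
$session = New-PSSession -ComputerName target.remote.datacenter -Credential (Get-Credential)
Invoke-Command -Session $session -ScriptBlock {
    param($data, $path)
    [System.IO.File]::WriteAllBytes($path, $data)   # write the bytes back out on the remote side
} -ArgumentList $bytes, 'C:\Drops\MyApp.zip'
# On PowerShell 5+ the same copy is a one-liner:
# Copy-Item 'C:\Drops\MyApp.zip' -Destination 'C:\Drops\' -ToSession $session
Remove-PSSession $session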
Release Management copies VisualStudioRemoteDeployer.exe to the C:\Windows\DtlDownloads\VisualStudioRemoteDeployer folder on the target server, then copies the scripts from the specified location to the target server using robocopy.
So you have to grant the target server permissions on your scripts location.
Release Management update 4 supports "Build drops stored on TFS servers"
http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx