Customizing EC2 Windows instances without using a custom AMI - powershell

We are currently setting up a CloudFormation stack based on the template created by the AWS Toolkit for Visual Studio when deploying with the "Load balanced template". We need a script that customizes the EC2 instances somewhat. More specifically, we want to:
1. Install two certificates into the certificate store.
2. Configure IIS to use one of the certificates.
3. Enable TLS 1.2 on IIS.
We need to install these certs on IIS, rather than on the load balancer, because we need to support client certificate authentication.
We'd like to achieve this without creating a custom AMI, because we want to be able to easily update the AMI as new versions arrive. We are using ami-f6803f9f (the default used by the template).
We therefore want to apply these customizations as part of the CloudFormation template. To verify that the scripting works, I tried to create a simple file using the "AWS::CloudFormation::Init" part of the template. However, when I launch the stack, the file never gets created. The part of the template that is supposed to create the file looks like this:
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"files" : {
"C:/ClientCA.pfx" : {
"content" : { "Fn::Join" : ["", [
"test1\n",
"test2\n"
]]}
}
}
}
}
}
My questions are therefore:
1. Why is the file not being created? Is there something wrong with the template, or does this AMI not support these kinds of init scripts?
2. We are planning to download the certs from S3 using "AWS::CloudFormation::Init" and then install them using a PowerShell script that we add to UserData. Is this a good approach, or should we do it differently?

I've just tested your snippet with a current Windows Server 2012 AMI and it worked just fine. My best guess is therefore that ami-f6803f9f is already a custom AMI (at least I can't find it anywhere official) and lacks the orchestration required for Deploying Applications with AWS CloudFormation (this is the generic explanation for Unix/Linux; see Bootstrapping AWS CloudFormation Windows Stacks for a short Windows-oriented example):
AWS CloudFormation includes a set of helper applications (cfn-init,
cfn-signal, cfn-get-metadata, and cfn-hup) that are based on
cloud-init. These helper applications not only provide functionality
similar to cloud-init, but also allow you to update your metadata
after your instance and applications are up and running. [...] [emphasis mine]
The emphasized applications are the ones responsible for reading and acting on the metadata defined in the template, i.e. creating C:/ClientCA.pfx in your example. These helper applications are nowadays included in all current Amazon EBS-Backed Windows Server 2012 RTM AMIs, but were usually not included in the Amazon EBS-Backed Windows Server 2008 R2 AMIs, except for dedicated ones like the Amazon EBS-Backed Windows Server 2008 R2 English 64-bit - Base for CloudFormation.
Obviously you can also install these CloudFormation Helper Scripts on a custom AMI and move on from there, but unless you have a specific reason to do so, I highly recommend starting with a current Amazon EBS-Backed Windows Server 2012 RTM AMI, which provides these and a few other similarly desirable administrative productivity components out of the box (e.g. Windows PowerShell 3.0 and the new AWS Tools for Windows PowerShell).
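For what it's worth, here is a minimal PowerShell sketch of the certificate part of your question, runnable from UserData or a cfn-init command once the helper scripts are in place. The PFX path, password, site name, and binding below are placeholder assumptions; the PKI and WebAdministration modules ship with Windows Server 2012:
# Hypothetical sketch: import a downloaded PFX, bind it to IIS on 443, and enable TLS 1.2.
$pfxPassword = ConvertTo-SecureString -String "placeholder" -AsPlainText -Force
$cert = Import-PfxCertificate -FilePath "C:\ClientCA.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
Import-Module WebAdministration
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
New-Item "IIS:\SslBindings\0.0.0.0!443" -Value $cert
# TLS 1.2 is controlled machine-wide through the SCHANNEL registry keys (reboot required).
$tls12 = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server"
New-Item -Path $tls12 -Force | Out-Null
New-ItemProperty -Path $tls12 -Name Enabled -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $tls12 -Name DisabledByDefault -Value 0 -PropertyType DWord -Force | Out-Null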

Old question, but I'm tipping that the reason the file is not being created is that the CloudFormation stack is not executing cfn-init.
The key part is to ensure that you've updated the UserData script...
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<script>\n",
"powershell.exe add-windowsfeature web-webserver -includeallsubfeature -logpath $env:temp\\webserver_addrole.log \n",
"powershell.exe add-windowsfeature web-mgmt-tools -includeallsubfeature -logpath $env:temp\\mgmttools_addrole.log \n",
"cfn-init.exe -v -s ", {"Ref" : "AWS::StackId"}, " -r WebServerLaunchConfiguration --region ", {"Ref" : "AWS::Region"}, "\n",
"</script>\n",
"<powershell>\n",
"new-website -name", {"Ref" : "Name"}, " -port 80 -physicalpath c:\\inetpub\\", {"Ref" : "Name"}, " -ApplicationPool \".NET v4.5\" -force \n",
"remove-website -name \"Default Web Site\" \n",
"start-website -name ", {"Ref" : "Name"}, " \n",
"</powershell>"
The above script adds the web server features and the management tools, and then kicks off cfn-init. It's cfn-init that is responsible for parsing the metadata.
There are more details about bootstrapping IIS on AWS on my Kloud blog.
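If you want to confirm whether cfn-init actually ran, the Windows helper scripts write their logs on the instance; as far as I know the default location is C:\cfn\log:
# Check the cfn-init log for evidence that the metadata was fetched and processed.
Get-Content C:\cfn\log\cfn-init.log -Tail 50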

Related

Is it possible to use CloudFormation to deploy a Cloud9 IDE on an EC2 image that is not obsolete?

Apparently Cloud9 out of the box ships on an essentially obsolete EC2 instance, as it does not have a current, recent, or even viable version of the AWS CLI.
$ aws --version
aws-cli/1.19.112 Python/2.7.18 Linux/4.14.296-222.539.amzn2.x86_64 botocore/1.20.112
As far as I can tell, Amazon recommends using version 2.9.1,
but even the most recent 1.x series version is 1.27.19.
Is there any way of using CloudFormation to deploy Cloud9 on a more contemporary EC2 instance? I want to roll Cloud9 out to a dev organization, but it is distressing to me that it seems to be deployed crippled (and yes, I need to use more recent CLI options for the initial configuration of each new IDE).
Have you tried specifying the identifier of the Amazon Machine Image (AMI)?
That identifier is used to create the EC2 instance. To declare this entity in your AWS CloudFormation template, use this syntax in your JSON file:
{
  "Type" : "AWS::Cloud9::EnvironmentEC2",
  "Properties" : {
    "AutomaticStopTimeMinutes" : Integer,
    "ConnectionType" : String,
    "Description" : String,
    "ImageId" : String,
    "InstanceType" : String,
    "Name" : String,
    "OwnerArn" : String,
    "Repositories" : [ Repository, ... ],
    "SubnetId" : String,
    "Tags" : [ Tag, ... ]
  }
}
Then, to choose an AMI for the instance, you must specify a valid AMI alias or a valid AWS Systems Manager (SSM) path; the default AMI is used if the parameter isn't explicitly assigned a value in the request.
The entire process is described in the AWS Cloud9 EnvironmentEC2 documentation.
AMI aliases
Amazon Linux (default): amazonlinux-1-x86_64
Amazon Linux 2: amazonlinux-2-x86_64
Ubuntu 18.04: ubuntu-18.04-x86_64
SSM paths
Amazon Linux (default): resolve:ssm:/aws/service/cloud9/amis/amazonlinux-1-x86_64
Amazon Linux 2: resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64
Ubuntu 18.04: resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64
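Sticking with PowerShell, a minimal sketch of creating an environment on a newer image via the SSM path (assuming a recent AWS.Tools.Cloud9 module that exposes -ImageId, configured credentials, and placeholder name and instance type):
# Hypothetical sketch: create a Cloud9 environment on Amazon Linux 2.
New-C9EnvironmentEC2 -Name "dev-ide" -InstanceType "t3.small" -ImageId "resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64"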

How to use powershell to get EC2 instance public IP and private IP address? And list the IPs for these instances

If I know the EC2 instance ID and EC2 instance name,
how can I use a PowerShell script to get the instance's public and private IP addresses? And how can I list the IPs for these instances?
If you have not already done so, how about downloading and installing the AWS Tools for Windows PowerShell and using their native cmdlets to extract this information?
AWS Tools for Windows PowerShell
The AWS Tools for Windows PowerShell lets developers and
administrators manage their AWS services from the Windows PowerShell
scripting environment. Now you can manage your AWS resources with the
same Windows PowerShell tools you use to manage your Windows
environment
https://aws.amazon.com/powershell
AWS Tools for Windows PowerShell Users Guide
The AWS Tools for Windows PowerShell are a set of PowerShell cmdlets
that are built on top of the functionality exposed by the AWS SDK for
.NET. The AWS Tools for Windows PowerShell enable you to script
operations on your AWS resources from the PowerShell command line.
Although the cmdlets are implemented using the service clients and
methods from the SDK, the cmdlets provide an idiomatic PowerShell
experience for specifying parameters and handling results. For
example, the cmdlets for the PowerShell Tools support PowerShell
pipelining—that is, you can pipeline PowerShell objects both into and
out of the cmdlets.
The AWS Tools for Windows PowerShell are flexible in how they enable
you to handle credentials including support for the AWS Identity and
Access Management (IAM) infrastructure; you can use the tools with IAM
user credentials, temporary security tokens, and IAM roles. The AWS
Tools for Windows PowerShell support the same set of services and
regions as supported by the SDK.
http://awsdocs.s3.amazonaws.com/powershell/latest/aws-pst-ug.pdf
(Get-EC2Instance -Filter $filter_reservation).Instances
InstanceId : i-5203422c
ImageId : ami-7527031c
State : Amazon.EC2.Model.InstanceState
PrivateDnsName : ip-10-251-50-12.ec2.internal
PublicDnsName : ec2-198-51-100-245.compute-1.amazonaws.com
StateTransitionReason :
KeyName : myPSKeyPair
AmiLaunchIndex : 0
ProductCodes : {}
InstanceType : t1.micro
LaunchTime : 12/11/2013 6:47:22 AM
Placement : Amazon.EC2.Model.Placement
KernelId :
RamdiskId :
Platform : Windows
Monitoring : Amazon.EC2.Model.Monitoring
SubnetId :
VpcId :
PrivateIpAddress : 10.251.50.12
PublicIpAddress : 198.51.100.245
StateReason :
Architecture : x86_64
RootDeviceType : ebs
RootDeviceName : /dev/sda1
BlockDeviceMappings : {/dev/sda1}
VirtualizationType : hvm
InstanceLifecycle :
SpotInstanceRequestId :
License :
ClientToken :
Tags : {}
SecurityGroups : {myPSSecurityGroup}
SourceDestCheck : False
Hypervisor : xen
NetworkInterfaces : {}
IamInstanceProfile :
EbsOptimized : False
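To cover the instance-name part of the question: an EC2 instance's name is just its Name tag, so you can filter on the tag and list the IPs for every match. A short sketch (the tag value is a placeholder):
# Hypothetical sketch: find instances by Name tag and list their IPs.
$filter = @{ Name = "tag:Name"; Values = "my-instance-name" }
(Get-EC2Instance -Filter $filter).Instances |
    Select-Object InstanceId, PrivateIpAddress, PublicIpAddress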
See also:
AWS EC2 Windows Instance – Get instance details
https://aaronsaikovski.wordpress.com/2015/01/05/aws-ec2-windows-instance-get-instance-details/
How to get the instance id from within an ec2 instance?
To view an individual EC2 Instance's Private IP, run this, substituting your specific InstanceId and region:
(Get-Ec2Instance -InstanceId i-9999999999999999 -Region us-east-1).Instances.PrivateIpAddress
For the public (if it has one) use:
(Get-Ec2Instance -InstanceId i-9999999999999999 -Region us-east-1).Instances.PublicIpAddress
If the EC2 instance has a public IP and you want to know whether it is an Elastic IP (static) or assigned from the AWS public IP pool, you can check the OwnerId of the NetworkInterface association. For Elastic IPs, the OwnerId will be your account id; for public IPs assigned from the AWS IP pool, it will be something with "amazon", such as "amazon-ebs" or just "amazon":
# Current account id, via STS GetCallerIdentity
$AccountId = (Get-STSCallerIdentity).Account
$ec2 = (Get-Ec2Instance -InstanceId i-99999999999999999 -Region us-east-1).Instances
if ($ec2.PublicIpAddress) {
    if ($ec2.NetworkInterfaces.Association.IpOwnerId -like $AccountId) {
        Write-Output ("Elastic IP: {0}" -f $ec2.PublicIpAddress)
    }
    else {
        Write-Output ("AWS Public IP Pool: {0}" -f $ec2.PublicIpAddress)
    }
}
Be aware that if your EC2 instance is using the AWS public IP pool, no address is assigned while the instance is powered off: the IP is released at power-off and a new one is assigned when the instance is powered back on. Refer to Amazon EC2 Instance IP Addressing for more details.

failed to create service fabric cluster on win-server 2012 R2

I am trying to create a standalone Service Fabric cluster in an on-prem environment, using Windows Server 2012 R2. After I run CreateServiceFabricCluster.ps1, I get the following error in the PowerShell window:
System.Fabric.FabricDeployer.ClusterManifestValidationException:
Cluster manifest validation failed with exception
System.ArgumentException: IP address is not allowed for credential
type 'Windows' when fabric runs as NetworkService, please use
hostnames.
How to update the json config file?
Had the same problem; the Microsoft documentation seems not to mention this. I fixed it by modifying the JSON so the iPAddress properties are the same as the nodeName properties, like this:
"nodes":[
{
"nodeName":"cl1m1",
"iPAddress":"cl1m1",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD0"
},
{
"nodeName":"cl1m2",
"iPAddress":"cl1m2",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD1"
},
{
"nodeName":"cl1m3",
"iPAddress":"cl1m3",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD2"
}
After modifying the config, just running the cluster setup again worked for me.
Inside the \SfDevCluster\Data directory you have the clusterManifest.xml file. There you can change the IPAddressOrFQDN property for your nodes and put hostnames there.
On a development machine, you can go to C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\[particular folder], edit ClusterManifestTemplate.xml, and have the same setting applied every time you deploy a new cluster.
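A rough sketch of automating that edit, assuming the usual Node elements with NodeName and IPAddressOrFQDN attributes (verify the path and XML structure against your own template before relying on this):
# Hypothetical sketch: point each node's IPAddressOrFQDN at its hostname.
$path = "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\<your folder>\ClusterManifestTemplate.xml"
[xml]$manifest = Get-Content $path
foreach ($node in $manifest.SelectNodes("//*[local-name()='Node']")) {
    $node.IPAddressOrFQDN = $node.NodeName
}
$manifest.Save($path)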

Azure Service Fabric - change config settings for a deployed Application

How do I change settings for a deployed application in Service Fabric?
I have a provisioned cluster and an application deployed to the cluster with two services. I would like to be able to change my services' settings and have them pick up those changes, but I don't see how I can do that.
Previously, we did all of our services with worker roles in Cloud Services, and the portal allows for changing configurations there, but it does not appear to do so for Service Fabric. From the Service Fabric Explorer I can drill down to the service, go to MANIFEST, and view the XML with the settings. I just don't see a way to edit or change it, and I've struggled to find anything in the SF documentation addressing this.
The portal doesn't expose a way to do this. It needs to be done via an upgrade of the application. Just change the settings in your settings XML file and perform an upgrade. In the VS publish dialog for your application project, you can update your version numbers appropriately by changing the config package version, which will automatically bubble up to update the containing service and application versions.
Building on Matt Thalman's answer, here's documentation on modifying the settings in the application or service manifest XML files, updating the version numbers, and performing an application upgrade: Service Fabric application upgrade tutorial using Visual Studio. You can also perform the app upgrade using PowerShell.
In addition to the above answers, here is some PowerShell code.
You can use the code below to connect to Service Fabric from PowerShell, get the application parameters, update a specific parameter, and redeploy:
### Change the connection settings here (from Profile-Cloud.xml)
$ConnectArgs = @{
    ConnectionEndpoint = "devxxxxxx.westus.cloudapp.azure.com:19000"
    X509Credential = $true
    ServerCertThumbprint = "52BFxxxxxxxxxx"
    FindType = "FindByThumbprint"
    FindValue = "EF3A2xxxxxxxxxxxxxx"
    StoreLocation = "CurrentUser"
    StoreName = "My"
}
Connect-ServiceFabricCluster @ConnectArgs
$myApplication = Get-ServiceFabricApplication -ApplicationName fabric:/ABC.MyService
$appParamCollection = $myApplication.ApplicationParameters
### Copy the existing parameters into a hashtable
$applicationParameterMap = @{}
foreach ($pair in $appParamCollection)
{
    $applicationParameterMap.Add($pair.Name, $pair.Value)
}
### Update your parameter here
$applicationParameterMap.ElasticSearch_Username = "sachin2"
### Start updating
Start-ServiceFabricApplicationUpgrade -ApplicationName $myApplication.ApplicationName.OriginalString -ApplicationTypeVersion $myApplication.ApplicationTypeVersion -ApplicationParameter $applicationParameterMap -Monitored -FailureAction Rollback -ForceRestart $true
### Check the status until it is Ready
(Get-ServiceFabricApplication -ApplicationName fabric:/ABC.MyService).ApplicationStatus
### Check the parameters to confirm those're updated
Get-ServiceFabricApplication -ApplicationName fabric:/ABC.MyService
You may change or remove -ForceRestart as per your requirements.

How to deploy with Release Management to remote datacenter

We are running TFS and Release Management on premises, and I want to deploy my applications to a remote datacenter.
Access is over the internet, so there are no Windows shares available.
I am using the vNext templates, and AFAIK RM seems to only support UNC paths over Windows shares.
How can I use Release Management to deploy software to this datacenter?
I'm working on this solution:
Use WebDAV on an IIS server located inside the datacenter.
The RM server and target can use the WebDAV client built into Windows and access it via a UNC path.
I haven't gotten this to work yet, as RM won't use the correct credentials to log on to the WebDAV server.
Updated with my solution
This is only a proof of concept, and is not production tested.
Set up a WebDAV site accessible from both the RM server and the target server
Install the "Desktop Experience" feature on both servers
Make the following DLL
using System;
using System.ComponentModel.Composition;
using System.Diagnostics;
using System.IO;
using Microsoft.TeamFoundation.Release.Common.Helpers;
using Microsoft.TeamFoundation.Release.Composition.Definitions;
using Microsoft.TeamFoundation.Release.Composition.Services;

namespace DoTheNetUse
{
    [PartCreationPolicy(CreationPolicy.Shared)]
    [Export(typeof(IThreadSafeService))]
    public class DoTheNetUse : BaseThreadSafeService
    {
        public DoTheNetUse() : base("DoTheNetUse")
        {}

        protected override void DoAction()
        {
            Logger.WriteInformation("DoAction: [DoTheNetUse]");
            try
            {
                Logger.WriteInformation("# DoTheNetUse.Start #");
                Logger.WriteInformation("{0}, {1}", Environment.UserDomainName, Environment.UserName);
                {
                    Logger.WriteInformation("Net use std");
                    // Map the WebDAV share into the service's session so later copies succeed.
                    var si = new ProcessStartInfo("cmd.exe", @"/c ""net use \\sharedwebdavserver.somewhere\DavWWWRoot\ /user:webdavuser webdavuserpassword""");
                    si.UseShellExecute = false;
                    si.RedirectStandardOutput = true;
                    si.RedirectStandardError = true;
                    var p = Process.Start(si);
                    p.WaitForExit();
                    Logger.WriteInformation("Net use output std:" + p.StandardOutput.ReadToEnd());
                    Logger.WriteInformation("Net use output err:" + p.StandardError.ReadToEnd());
                }
                //##########################################################
                Logger.WriteInformation("# Done #");
            }
            catch (Exception e)
            {
                Logger.WriteError(e);
            }
        }
    }
}
Name it "ReleaseManagementMonitor2.dll"
Place it in a subfolder of the "ReleaseManagementMonitor" service
Configure the shared path as the solution below states.
DO NOT OVERWRITE THE EXISTING "ReleaseManagementMonitor2.dll"
The reason that this works is MEF.
The ReleaseManagementMonitor service tries to load the dll "ReleaseManagementMonitor2.dll" from all subfolders.
This dll implements a service interface that RM recognises.
It then runs "net use" to apply the credentials to the session that the service runs under, and thereby grants access to the otherwise inaccessible WebDAV server.
This solution is certified "Works on my machine"
RM does indeed work only with UNC paths; you are right about that.
You can leverage that to make your scenario work -
In Theory
Create a boundary machine on the RM domain, where your drops can be copied.
The deploy action running on your datacenter can then copy bits from this boundary machine, using credentials that have access on that domain. (These credentials are provided by you in the WPF console)
How this works
1. Have a dedicated machine on the RM server domain (say D1) that will be used as a boundary machine.
2. Define this machine as a boundary machine in RM by specifying a shared path that will be used by your datacenter. Go to the settings tab in your WPF console and create a new variable: { Key = RMSharedUNCPath, Value = \\BoundaryMachine\DropsLocation }. RM now understands that you want to use this machine as your boundary machine.
3. Make sure you take care of these permissions:
RM Server should have write permissions on the \\BoundaryMachine\DropsLocation share.
Pass down credentials of domain D1 to the target machine in the data centre (Domain D2), that can be used to access the share.
4. Credentials can be passed down from the WPF console; you will have to define the following two config variables in the settings tab again.
Key = RMSharedUNCPathUser ; Value = domain D1 user name
Key = RMSharedUNCPathPwd ; Value = password for the user defined above.
PS - Variable names are case sensitive.
Also, to let RM know that you want to use the shared UNC mechanism, check the corresponding checkbox for the RM server and connect to it via IP and not DNS name, as these must be in different domains.
Try using Get-Content on the local server, then Set-Content on the remote server, passing the file contents over.
You could package everything into an archive of some kind.
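A rough sketch of that idea over PowerShell remoting, assuming WinRM connectivity from RM to the target; machine names and paths are placeholders:
# Hypothetical sketch: push a drop to the remote server through a PSSession.
$session = New-PSSession -ComputerName "target.datacenter.example" -Credential (Get-Credential)
$bytes = [System.IO.File]::ReadAllBytes("C:\Drops\app.zip")
Invoke-Command -Session $session -ScriptBlock {
    param($data)
    [System.IO.File]::WriteAllBytes("C:\Deploy\app.zip", $data)
} -ArgumentList (,$bytes)
Remove-PSSession $session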
Release Management copies VisualStudioRemoteDeployer.exe to the C:\Windows\DtlDownloads\VisualStudioRemoteDeployer folder on the target server, then copies the scripts from the specified location to the target server using robocopy.
So you have to grant the target server access to your scripts location.
Release Management update 4 supports "Build drops stored on TFS servers"
http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx