Is it possible to use CloudFormation to deploy a Cloud9 IDE on an EC2 image that is not obsolete? - aws-cloudformation

Apparently Cloud9 out of the box ships on an essentially obsolete EC2 image, as it does not have a current, recent, or viable version of the AWS CLI.
$ aws --version
aws-cli/1.19.112 Python/2.7.18 Linux/4.14.296-222.539.amzn2.x86_64 botocore/1.20.112
As far as I can tell, Amazon recommends using version 2.9.1, but even the most recent 1.x release is 1.27.19.
Is there any way of using CloudFormation to deploy Cloud9 on a more contemporary EC2 image? I want to roll Cloud9 out to a dev organization, but it is distressing to me that it seems to be deployed crippled (and yes, I need more recent CLI options for the initial configuration of each new IDE).

Have you tried specifying the identifier of the Amazon Machine Image (AMI) used to create the EC2 instance? To declare this resource in your AWS CloudFormation template, you need to use this syntax in your JSON file:
{
  "Type" : "AWS::Cloud9::EnvironmentEC2",
  "Properties" : {
    "AutomaticStopTimeMinutes" : Integer,
    "ConnectionType" : String,
    "Description" : String,
    "ImageId" : String,
    "InstanceType" : String,
    "Name" : String,
    "OwnerArn" : String,
    "Repositories" : [ Repository, ... ],
    "SubnetId" : String,
    "Tags" : [ Tag, ... ]
  }
}
Then, to choose an AMI for the instance, you must specify a valid AMI alias or a valid AWS Systems Manager path. The default AMI is used if ImageId isn't explicitly assigned a value in the request.
The entire process is covered in the AWS::Cloud9::EnvironmentEC2 documentation.
AMI aliases
Amazon Linux (default): amazonlinux-1-x86_64
Amazon Linux 2: amazonlinux-2-x86_64
Ubuntu 18.04: ubuntu-18.04-x86_64
SSM paths
Amazon Linux (default): resolve:ssm:/aws/service/cloud9/amis/amazonlinux-1-x86_64
Amazon Linux 2: resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64
Ubuntu 18.04: resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64
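For example, here is a minimal sketch of a resource pinned to Amazon Linux 2 via the SSM path; the name, instance type, and subnet ID below are placeholders:

{
  "Type" : "AWS::Cloud9::EnvironmentEC2",
  "Properties" : {
    "Name" : "team-dev-ide",
    "InstanceType" : "t3.small",
    "ImageId" : "resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64",
    "AutomaticStopTimeMinutes" : 30,
    "SubnetId" : "subnet-0123456789abcdef0"
  }
}

Note that a newer image does not guarantee a current AWS CLI; you may still want to upgrade the CLI as part of each IDE's initial configuration.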

Related

Unable to connect to MongoDb while I running a job from Jenkins on docker

I have Docker installed on a Linux machine, with a Jenkins container that triggers a job; the final step is to run tests. One of the first steps in my NUnit tests is to connect to MongoDB, which is also part of my Docker stack.
From the Jenkins log I got the following error:
A timeout occured after 30000ms selecting a server using
CompositeServerSelector{ Selectors =
MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector,
LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000
} }. Client view of cluster state is { ClusterId : "1", ConnectionMode
: "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{
ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/"my AWS
host":27017
Please note:
1) MongoDB and Jenkins containers are located on the same network.
2) I can get a curl from Jenkins container to Mongo's full IP address.
3) If I run from my local PC and point to the remote machine (the same Docker stack), the Mongo connection works.
4) In my AWS console, all traffic and ports are open on both sides.
I had a very similar issue. In my case, using the public DNS name caused the problem. Consider changing from the public DNS name to the public IP.
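For example, if the connection string in the test configuration currently uses the instance's public DNS name, the change would look something like this (both values are placeholders):

Before (public DNS): mongodb://ec2-203-0-113-10.compute-1.amazonaws.com:27017
After (public IP):   mongodb://203.0.113.10:27017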

How to use PowerShell to get an EC2 instance's public IP and private IP address? And list the IPs for these instances

If I know the EC2 instance ID and EC2 instance name, how can I use a PowerShell script to get the instance's public IP and private IP address from this information? And how do I list the IPs for these instances?
If you have not already done so, how about downloading and installing the AWS Tools for Windows PowerShell and using their native cmdlets to extract this information?
AWS Tools for Windows PowerShell
The AWS Tools for Windows PowerShell lets developers and
administrators manage their AWS services from the Windows PowerShell
scripting environment. Now you can manage your AWS resources with the
same Windows PowerShell tools you use to manage your Windows
environment
https://aws.amazon.com/powershell
AWS Tools for Windows PowerShell Users Guide
The AWS Tools for Windows PowerShell are a set of PowerShell cmdlets
that are built on top of the functionality exposed by the AWS SDK for
.NET. The AWS Tools for Windows PowerShell enable you to script
operations on your AWS resources from the PowerShell command line.
Although the cmdlets are implemented using the service clients and
methods from the SDK, the cmdlets provide an idiomatic PowerShell
experience for specifying parameters and handling results. For
example, the cmdlets for the PowerShell Tools support PowerShell
pipelining—that is, you can pipeline PowerShell objects both into and
out of the cmdlets.
The AWS Tools for Windows PowerShell are flexible in how they enable
you to handle credentials including support for the AWS Identity and
Access Management (IAM) infrastructure; you can use the tools with IAM
user credentials, temporary security tokens, and IAM roles. The AWS
Tools for Windows PowerShell support the same set of services and
regions as supported by the SDK.
http://awsdocs.s3.amazonaws.com/powershell/latest/aws-pst-ug.pdf
(Get-EC2Instance -Filter $filter_reservation).Instances
InstanceId : i-5203422c
ImageId : ami-7527031c
State : Amazon.EC2.Model.InstanceState
PrivateDnsName : ip-10-251-50-12.ec2.internal
PublicDnsName : ec2-198-51-100-245.compute-1.amazonaws.com
StateTransitionReason :
KeyName : myPSKeyPair
AmiLaunchIndex : 0
ProductCodes : {}
InstanceType : t1.micro
LaunchTime : 12/11/2013 6:47:22 AM
Placement : Amazon.EC2.Model.Placement
KernelId :
RamdiskId :
Platform : Windows
Monitoring : Amazon.EC2.Model.Monitoring
SubnetId :
VpcId :
PrivateIpAddress : 10.251.50.12
PublicIpAddress : 198.51.100.245
StateReason :
Architecture : x86_64
RootDeviceType : ebs
RootDeviceName : /dev/sda1
BlockDeviceMappings : {/dev/sda1}
VirtualizationType : hvm
InstanceLifecycle :
SpotInstanceRequestId :
License :
ClientToken :
Tags : {}
SecurityGroups : {myPSSecurityGroup}
SourceDestCheck : False
Hypervisor : xen
NetworkInterfaces : {}
IamInstanceProfile :
EbsOptimized : False
See also:
AWS EC2 Windows Instance – Get instance details
https://aaronsaikovski.wordpress.com/2015/01/05/aws-ec2-windows-instance-get-instance-details/
How to get the instance id from within an ec2 instance?
To view an individual EC2 Instance's Private IP, run this, substituting your specific InstanceId and region:
(Get-Ec2Instance -InstanceId i-9999999999999999 -Region us-east-1).Instances.PrivateIpAddress
For the public (if it has one) use:
(Get-Ec2Instance -InstanceId i-9999999999999999 -Region us-east-1).Instances.PublicIpAddress
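To list the IPs for several instances at once (for example, all instances whose Name tag matches a given value), a sketch along these lines should work; the tag value and region are placeholders:

# Find instances by Name tag and list their IDs and IP addresses
$filter = @{ Name = "tag:Name"; Values = "my-instance-name" }
(Get-EC2Instance -Filter $filter -Region us-east-1).Instances |
    Select-Object InstanceId, PrivateIpAddress, PublicIpAddress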
If the EC2 instance has a public IP and you want to know whether it is an Elastic IP (static) or assigned from the AWS public IP pool, you can check the OwnerId of the NetworkInterface Association. For Elastic IPs, the OwnerId will be your account ID; for public IPs assigned from the AWS IP pool, it will be something with "amazon", such as "amazon-ebs" or just "amazon":
# Account ID of the caller, via STS
$AccountId = (Get-STSCallerIdentity).Account
$ec2 = (Get-EC2Instance -InstanceId i-99999999999999999 -Region us-east-1).Instances
if ($ec2.PublicIpAddress) {
    if ($ec2.NetworkInterfaces.Association.IpOwnerId -like $AccountId) {
        Write-Output ("Elastic IP: {0}" -f $ec2.PublicIpAddress)
    }
    else {
        Write-Output ("AWS Public IP Pool: {0}" -f $ec2.PublicIpAddress)
    }
}
Be aware that if your EC2 instance uses the AWS public IP pool, no address is assigned while the instance is powered off: the address is released at power-off and a new one is assigned when the instance is powered back on. Refer to Amazon EC2 Instance IP Addressing for more details.

IBM Cloud Object Storage credentials

I am trying to setup a Raspberry Pi that connects to an Object Storage service on IBM Cloud. In all tutorials on Object Storage, credentials are of this format:
{
  "auth_url": "https://identity.open.softlayer.com",
  "project": "object_storage_xxxxxxxx_xxxx_xxxx_b35a_6d007e3f9118",
  "projectId": "512xxxxxxxxxxxxxxxxxxxxxe00fe4e1",
  "region": "dallas",
  "userId": "e8c19efxxxxxxxxxxxxxxxxxxx91d53e",
  "username": "admin_1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa66",
  "password": "fTxxxxxxxxxxw8}l",
  "domainId": "15xxxxxxxxxxxxxxxxxxxxxxxxxxxx2a",
  "domainName": "77xxx3",
  "role": "admin"
}
According to here, for example, where the following comment is given:
Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloudfoundry or Kubernetes) the credentials are also available via the VCAP environment variable
However, I am not running my Python script on IBM Cloud, rather on a RPi that sends data to it. In my Object Storage service, there is a "service credentials" tab, which has the following form:
{
  "apikey": "XXXXXX-_XXXXXXXXXXXXXXXXXX_XXXXXX",
  "endpoints": "https://cos-service.bluemix.net/endpoints",
  "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "iam_apikey_name": "auto-generated-apikey-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX::serviceid:ServiceId-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
So how do I find the credentials needed so I can use the SWIFT protocol in Python to send data from my Raspberry Pi to my Object Storage service?
Instead of Swift, which I don't think is supported, you can use IBM's flavour of the S3 object storage protocol. There is a Python library you can use to make this easy.
For example, to connect to COS S3:
import ibm_boto3
from ibm_botocore.client import Config

api_key = 'API_KEY'
service_instance_id = 'RESOURCE_INSTANCE_ID'
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'

s3 = ibm_boto3.resource('s3',
                        ibm_api_key_id=api_key,
                        ibm_service_instance_id=service_instance_id,
                        ibm_auth_endpoint=auth_endpoint,
                        config=Config(signature_version='oauth'),
                        endpoint_url=service_endpoint)
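As a quick sanity check after connecting, you can list the buckets in the service instance using the same resource object:

# List all buckets visible to this service instance
for bucket in s3.buckets.all():
    print(bucket.name)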
The ibm_boto3 library is very similar to the boto3 library that is used to connect to Amazon S3 object storage. The main difference is in setting up the initial connection, which I have shown above. After you have done that, you can find plenty of examples online for using boto3; here is one:
# Upload a new file
data = open('test.jpg', 'rb')
s3.Bucket('my-bucket').put_object(Key='test.jpg', Body=data)
From: http://boto3.readthedocs.io/en/latest/guide/quickstart.html
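To tie this back to the "service credentials" JSON shown in the question, the fields map onto the connection parameters like this (the file name here is hypothetical):

import json

# Load the service credentials saved from the IBM Cloud console
# (file name is a placeholder)
with open('cos_credentials.json') as f:
    creds = json.load(f)

api_key = creds['apikey']                            # -> ibm_api_key_id
service_instance_id = creds['resource_instance_id']  # -> ibm_service_instance_id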
You might want to look at question/answer I list below. Basically what you need is an access key and secret key to add in your Python code to connect to your Cloud Object Storage account.
https://stackoverflow.com/a/48936053/9392933

Failed to create Service Fabric cluster on Windows Server 2012 R2

I am trying to create a standalone Service Fabric cluster in an on-prem environment using Windows Server 2012 R2. After I run CreateServiceFabricCluster.ps1, I get the following error in the PowerShell window:
System.Fabric.FabricDeployer.ClusterManifestValidationException:
Cluster manifest validation failed with exception
System.ArgumentException: IP address is not allowed for credential
type 'Windows' when fabric runs as NetworkService, please use
hostnames.
How do I update the JSON config file?
I had the same problem; the Microsoft documentation doesn't seem to mention this. I fixed it by modifying the JSON so the iPAddress properties are the same as the nodeName properties, like this:
"nodes":[
{
"nodeName":"cl1m1",
"iPAddress":"cl1m1",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD0"
},
{
"nodeName":"cl1m2",
"iPAddress":"cl1m2",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD1"
},
{
"nodeName":"cl1m3",
"iPAddress":"cl1m3",
"nodeTypeRef":"NodeType0",
"faultDomain":"fd:/cl1",
"upgradeDomain":"UD2"
}
After modifying the config, just running the cluster setup again worked for me.
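For reference, rerunning the setup with the standalone package looks something like this, assuming the standard script name and your own config file path:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA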
Inside the \SfDevCluster\Data directory, you have the clusterManifest.xml file. There you can change the IPAddressOrFQDN property for your nodes and put hostnames there.
On a development machine, you can go to C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\[particular folder] and edit ClusterManifestTemplate.xml to have this setting the same every time you deploy a new cluster.

Customizing EC2 Windows instances without using a custom AMI

We are currently setting up a CloudFormation stack based on the template created by the AWS Toolkit for Visual Studio when deploying using the "Load balanced template". We need to create a script that customizes the EC2 instances somewhat. More specifically we want to:
1. Install two certificates into the certificate store.
2. Configure IIS to use one of the certificates.
3. Enable TLS 1.2 on IIS.
We need to install these certs at the IIS, instead of the load balancer, because we need to support client cert authentication.
We'd like to achieve this without having to create a custom AMI, because we want to be able to easily update the AMI as new versions arrive. We are using the following: ami-f6803f9f (which is the default used by the template).
We therefore want to do these customizations as part of the CloudFormation template. I've tried to create a simple file (just to make sure the scripting works) by using the "AWS::CloudFormation::Init" part of the template. However, when I launch the stack the file never gets created. The part of the template that is supposed to create the file looks like this:
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"files" : {
"C:/ClientCA.pfx" : {
"content" : { "Fn::Join" : ["", [
"test1\n",
"test2\n"
]]}
}
}
}
}
}
My questions are therefore:
1. Why is the file not being created? Is it because there's something wrong with the template, or does this AMI not support these types of init scripts?
2. We are planning on downloading the certs from S3 using "AWS::CloudFormation::Init" and then installing them using a PowerShell script that we add to UserData. Is this a good approach or should we do it differently?
I've just tested your snippet with a current Windows Server 2012 AMI and it worked just fine. Therefore my best guess is that ami-f6803f9f is already a custom AMI (at least I can't find it anywhere official) and lacks the required orchestration for Deploying Applications with AWS CloudFormation (this is the generic explanation for Unix/Linux; see Bootstrapping AWS CloudFormation Windows Stacks for a short Windows-oriented example):
AWS CloudFormation includes a set of helper applications (cfn-init,
cfn-signal, cfn-get-metadata, and cfn-hup) that are based on
cloud-init. These helper applications not only provide functionality
similar to cloud-init, but also allow you to update your metadata
after your instance and applications are up and running. [...] [emphasis mine]
The emphasized applications are those responsible for reading and acting on the metadata defined in the template, i.e. creating C:/ClientCA.pfx in your example. These helper applications are nowadays included in all current Amazon EBS-Backed Windows Server 2012 RTM AMIs, but usually weren't in the Amazon EBS-Backed Windows Server 2008 R2 AMIs, except for dedicated ones like the Amazon EBS-Backed Windows Server 2008 R2 English 64-bit - Base for CloudFormation.
Obviously you can also install these CloudFormation Helper Scripts on a custom AMI and move on from there, but if you don't have any specific reason to do so, I highly recommend starting with a current Amazon EBS-Backed Windows Server 2012 RTM AMI, which provides these and a few other likewise desired administrative productivity components out of the box (e.g. Windows PowerShell 3.0 and the new AWS Tools for Windows PowerShell).
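Regarding the second question: downloading the certificates from S3 via AWS::CloudFormation::Init and installing them from UserData is a reasonable approach. A minimal sketch of the metadata side follows; the bucket name, object key, and role name are hypothetical, and the instance role needs read access to the bucket:

"Metadata" : {
  "AWS::CloudFormation::Authentication" : {
    "S3AccessCreds" : {
      "type" : "S3",
      "roleName" : { "Ref" : "InstanceRole" },
      "buckets" : [ "my-cert-bucket" ]
    }
  },
  "AWS::CloudFormation::Init" : {
    "config" : {
      "files" : {
        "C:/ClientCA.pfx" : {
          "source" : "https://my-cert-bucket.s3.amazonaws.com/ClientCA.pfx",
          "authentication" : "S3AccessCreds"
        }
      }
    }
  }
}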
Old question, but I'm tipping that the reason the file is not being created is that the CloudFormation script is not executing cfn-init.
The key part is to ensure that you've updated the userdata scripts...
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<script>\n",
"powershell.exe add-windowsfeature web-webserver -includeallsubfeature -logpath $env:temp\\webserver_addrole.log \n",
"powershell.exe add-windowsfeature web-mgmt-tools -includeallsubfeature -logpath $env:temp\\mgmttools_addrole.log \n",
"cfn-init.exe -v -s ", {"Ref" : "AWS::StackId"}, " -r WebServerLaunchConfiguration --region ", {"Ref" : "AWS::Region"}, "\n",
"</script>\n",
"<powershell>\n",
"new-website -name", {"Ref" : "Name"}, " -port 80 -physicalpath c:\\inetpub\\", {"Ref" : "Name"}, " -ApplicationPool \".NET v4.5\" -force \n",
"remove-website -name \"Default Web Site\" \n",
"start-website -name ", {"Ref" : "Name"}, " \n",
"</powershell>"
The above script adds the web server features and the management tools, and then kicks off cfn-init. It's cfn-init that is responsible for parsing the metadata.
There are more details about bootstrapping IIS on AWS on my Kloud blog.