AWS CloudFormation - cfn-init failed to run command

I am using CloudFormation to install Elasticsearch.
I am downloading and extracting a tar.gz archive.
The following is my EC2 instance section:
"masterinstance": {
"Type": "AWS: : EC2: : Instance",
"Metadata": {
"AWS: : CloudFormation: : Init": {
"configSets" : {
"ascending" : [ "config1" , "config2" ]
},
"config1": {
"sources": {
"/home/ubuntu/": "https: //s3.amazonaws.com/xxxxxxxx/elasticsearch.tar.gz"
},
"files": {
"/home/ubuntu/elasticsearch/config/elasticsearch.yml": {
"content": {
"Fn: : Join": [
"",
[
xxxxxxxx
]
]
}
}
}
},
"config2" : {
"commands": {
"runservice": {
"command": "~/elasticsearch/bin/elasticsearch",
"cwd" : "~",
"test" : "~/elasticsearch/bin/elasticsearch > test.txt",
"ignoreErrors" : "false"
}
}
}
}
},
"Properties": {
"ImageId": "ami-xxxxxxxxxx",
"InstanceType": {
"Ref": "InstanceTypeParameter"
},
"Tags": [
xxxxxxxx
],
"KeyName": "everybody",
"NetworkInterfaces": [
{
"GroupSet": [
{
"Ref": "newSecurity"
}
],
"AssociatePublicIpAddress": "true",
"DeviceIndex": "0",
"SubnetId": {
"Ref": "oneSubnet"
}
}
],
"UserData": {
"Fn: : Base64": {
"Fn: : Join": [
"",
[
"#!/bin/bash\n",
"sudo add-apt-repository-yppa: webupd8team/java\n",
"sudo apt-get update\n",
"echo'oracle-java8-installershared/accepted-oracle-license-v1-1selecttrue'|sudo debconf-set-selections\n",
"sudo apt-getinstall-yoracle-java8-installer\n",
"apt-get update\n",
"apt-get-y installpython-setuptools\n",
"easy_installhttps: //s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"/usr/local/bin/cfn-init",
"--stack Elasticsearch",
"--resource masterinstance",
"--configsets ascending",
"-v\n"
]
]
}
}
}
}
I am using AWS::CloudFormation::Init for configuration and other settings.
After extracting the tar, I want to start Elasticsearch, which I am doing through the commands section in AWS::CloudFormation::Init.
But after the stack has been completely created, when I SSH into my instance, I cannot see my Elasticsearch service running.
All other things, like extracting the tar and creating the file, work correctly.
I have gone through cfn-init.log; it gives me the following information:
2016-07-19 05:53:15,776 P2745 [INFO] Test for Command runservice
2016-07-19 05:53:15,778 P2745 [INFO] -----------------------Command Output-----------------------
2016-07-19 05:53:15,778 P2745 [INFO] /bin/sh: 1: ~/elasticsearch/bin/elasticsearch: not found
2016-07-19 05:53:15,778 P2745 [INFO] ------------------------------------------------------------
2016-07-19 05:53:15,779 P2745 [ERROR] Exited with error code 127
If I run the command ~/elasticsearch/bin/elasticsearch directly on my instance, it works perfectly.
What am I doing wrong here?
Thank you.

I'm guessing that the home directory (~) evaluates to a different user's home (not ubuntu's) when cfn-init tries to run Elasticsearch. cfn-init runs as the root user rather than as ubuntu/ec2-user. Try changing the paths in the config2 command block to fully qualified paths (/home/ubuntu/elasticsearch).
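For example, config2 might become something like this (a sketch, assuming the archive really extracts to /home/ubuntu/elasticsearch; the test command is simplified here to just check that the binary exists rather than running Elasticsearch itself):

"config2": {
    "commands": {
        "runservice": {
            "command": "/home/ubuntu/elasticsearch/bin/elasticsearch",
            "cwd": "/home/ubuntu",
            "test": "test -x /home/ubuntu/elasticsearch/bin/elasticsearch",
            "ignoreErrors": "false"
        }
    }
}

Note that a foreground elasticsearch process will block cfn-init until it exits, so in practice you may want to start it with the -d flag or as a service.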

Related

EMR - Airflow to run a Scala jar file - airflow.exceptions.AirflowException

I am trying to run a Scala jar file from Airflow using EMR; the jar file is designed to read from mssql-jdbc and postgresql.
From Airflow, I'm able to create the cluster.
My SPARK_STEPS looks like:
SPARK_STEPS = [
    {
        'Name': 'Trigger_Source_Target',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['spark-submit',
                     '--master', 'yarn',
                     '--jars', '/mnt/MyScalaImport.jar',
                     '--class', 'org.classname',
                     's3://path/SNAPSHOT.jar',
                     'SQL_Pwd', 'PostgreSQL_PWD', 'loadtype'],
        }
    }
]
After this I have JOB_FLOW_OVERRIDES defined:
JOB_FLOW_OVERRIDES = {
    "Name": "pfdt-cluster-airflow",
    "LogUri": "s3://path/elasticmapreduce/",
    "ReleaseLabel": "emr-6.4.0",
    "Applications": [
        {"Name": "Spark"},
    ],
    "Instances": {
        "InstanceGroups": [
            {
                "Name": "Master nodes",
                "Market": "ON_DEMAND",
                "InstanceRole": "MASTER",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 1,
            }
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "TerminationProtected": False,
        "Ec2KeyName": "pem_file_name",
        "Ec2SubnetId": "subnet-123",
    },
    "BootstrapActions": [
        {
            "Name": "import custom Jars",
            "ScriptBootstrapAction": {
                "Path": "s3://path/subpath/copytoolsjar.sh",
                "Args": [],
            }
        }
    ],
    "Configurations": [
        {
            "Classification": "spark-defaults",
            "Properties": {
                "spark.jars": "s3://jar_path/mssql-jdbc-8.4.1.jre8.jar"
            }
        }
    ],
    "VisibleToAllUsers": True,
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
    "Tags": [
        {"Key": "Environment", "Value": "Development"},
    ],
}
To copy the Scala .jar file from S3 to the Airflow local filesystem, I have a shell script which does the work. Path: s3://path/subpath/copytoolsjar.sh
aws s3 cp s3://path/SNAPSHOT.jar /mnt/MyScalaImport.jar
On triggering the Airflow DAG, it fails at the watch_step node.
The errors I'm getting are:
stdout.gz =>
stderr.gz =>
22/04/08 13:38:23 INFO CodeGenerator: Code generated in 25.5907 ms
Exception in thread "main" java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$2(JDBCOptions.scala:108)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:108)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:38)
How do I resolve this issue? I have my jars at:
s3://path/subpath/mssql-jdbc-8.4.1.jre8.jar
s3://path/subpath/postgresql-42.2.24.jar
To upload the jar files (mssql-jdbc-8.4.1.jre8.jar, postgresql-42.2.24.jar) to the Airflow local filesystem, use the bootstrap step:
'BootstrapActions': [
    {
        'Name': 'import custom Jars',
        'ScriptBootstrapAction': {
            'Path': 's3://path/subpath/copytoolsjar.sh',
            'Args': []
        }
    }
]
In the copytoolsjar.sh file, write the command as:
aws s3 cp s3://path/SNAPSHOT.jar /mnt/MyScalaImport.jar && bash -c "sudo aws s3 cp s3://path/subpath/mssql-jdbc-8.4.1.jre8.jar /usr/lib/spark/jars/" && bash -c "sudo aws s3 cp s3://path/subpath/postgresql-42.2.24.jar /usr/lib/spark/jars/"
The jars end up in /usr/lib/spark/jars/, which is on Spark's classpath, so DriverManager can find a suitable driver and the work will be done.
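Spelled out as a plain script, copytoolsjar.sh might look like this (a sketch using the same S3 paths as above; the bash -c wrappers are not strictly needed):

#!/bin/bash
# Copy the application jar to where spark-submit's --jars flag expects it
aws s3 cp s3://path/SNAPSHOT.jar /mnt/MyScalaImport.jar
# Put the JDBC drivers on Spark's classpath so DriverManager can find a suitable driver
sudo aws s3 cp s3://path/subpath/mssql-jdbc-8.4.1.jre8.jar /usr/lib/spark/jars/
sudo aws s3 cp s3://path/subpath/postgresql-42.2.24.jar /usr/lib/spark/jars/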

Error connecting to environment 1 Org Local Fabric: Error querying channels: 14 UNAVAILABLE: failed to connect to all addresses

I am unable to run my IBM eVote blockchain application in Hyperledger Fabric. I am using IBM eVote in VS Code (v1.39) on Ubuntu 16. When I start my local Fabric (1 Org Local Fabric), I am facing the above error.
The following is my local_fabric_connection.json file:
{
    "name": "local_fabric",
    "version": "1.0.0",
    "client": {
        "organization": "Org1",
        "connection": {
            "timeout": {
                "peer": {
                    "endorser": "300"
                },
                "orderer": "300"
            }
        }
    },
    "organizations": {
        "Org1": {
            "mspid": "Org1MSP",
            "peers": [
                "peer0.org1.example.com"
            ],
            "certificateAuthorities": [
                "ca.org1.example.com"
            ]
        }
    },
    "peers": {
        "peer0.org1.example.com": {
            "url": "grpc://localhost:17051"
        }
    },
    "certificateAuthorities": {
        "ca.org1.example.com": {
            "url": "http://localhost:17054",
            "caName": "ca.org1.example.com"
        }
    }
}
and the following is the snapshot:
Based on your second image, it doesn't look like your 1 Org Local Fabric started properly in the first place (you have no gateways, and for some reason your wallets aren't grouped together!).
If you tear down your 1 Org Local Fabric and then start it again, hopefully it'll work.
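As a quick sanity check (assuming the extension's usual Docker-based local runtime), you can list the containers; a healthy 1 Org Local Fabric shows peer, orderer and CA containers running:

docker ps --format '{{.Names}}\t{{.Status}}'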

Launch an EC2 instance with CloudFormation

I am trying to launch an EC2 instance using CloudFormation. I created this JSON template, but I get the error "Template format error: At least one Resources member must be defined".
{
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "ImageId": "ami-08ddb3f251a88cf33",
        "InstanceType": "t2.micro ",
        "KeyName": "Stagingkey",
        "LaunchTemplate": {
            "LaunchTemplateId": "jen1",
            "LaunchTemplateName": "Launchinstance",
            "Version": "V1"
        },
        "SecurityGroupIds": [ "sg-055f49a32efd4238b" ],
        "SecurityGroups": [ "jenkins_group" ],
    }
}
What am I doing wrong?
Is there any other template for the ap-south-1 region that I could use? Any help would be appreciated.
The error says it all: At least one Resources member must be defined.
The major sections of a template are:
Parameters
Mappings
Resources
Outputs
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "My Stack",
    "Resources": {
        "MyInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-08ddb3f251a88cf33",
                "InstanceType": "t2.micro",
                "KeyName": "Stagingkey",
                "LaunchTemplate": {
                    "LaunchTemplateId": "jen1",
                    "LaunchTemplateName": "Launchinstance",
                    "Version": "V1"
                },
                "SecurityGroupIds": [
                    "sg-055f49a32efd4238b"
                ],
                "SecurityGroups": [
                    "jenkins_group"
                ]
            }
        }
    }
}
You'll need to test it. For example, it is unlikely that you will define both SecurityGroupIds and SecurityGroups.
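As a starting point for that testing, here is a trimmed sketch that keeps only SecurityGroupIds (assuming the instance launches into a VPC) and drops SecurityGroups and the LaunchTemplate block:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "My Stack",
    "Resources": {
        "MyInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-08ddb3f251a88cf33",
                "InstanceType": "t2.micro",
                "KeyName": "Stagingkey",
                "SecurityGroupIds": [ "sg-055f49a32efd4238b" ]
            }
        }
    }
}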
All the properties you have entered are properties of an EC2 resource, which you need to declare. You have no Resources block or logical name for your resource, like so:
"Resources": {
"MyTomcatName": {
"Type": "AWS::EC2::Instance",
"Properties": {
[...]

I cannot see the properties/values using Spring Cloud Config and Git

I am using the sample project
https://github.com/spring-cloud-samples/configserver
I run the project, and when I point my browser to
http://localhost:8888/foo/development/
I get the following values:
{
    "name": "foo",
    "profiles": [
        "development"
    ],
    "label": "master",
    "propertySources": [
        {
            "name": "overrides",
            "source": {
                "eureka.instance.nonSecurePort": "${CF_INSTANCE_PORT:${PORT:${server.port:8080}}}",
                "eureka.instance.hostname": "${CF_INSTANCE_IP:localhost}",
                "eureka.client.serviceUrl.defaultZone": "http://localhost:8761/eureka/"
            }
        }
    ]
}
But I do not get the values in the file foo-development.properties in
https://github.com/spring-cloud-samples/config-repo
I am new to Spring Cloud Config. Could somebody point me in the right direction to the values of the property file?
Thank you.
I ran the config server in Ubuntu and everything works there as expected. This must be a problem on Windows only. The output I get in Ubuntu is the following:
{
    "name": "foo",
    "profiles": [
        "development"
    ],
    "label": "master",
    "propertySources": [
        {
            "name": "overrides",
            "source": {
                "eureka.instance.nonSecurePort": "${CF_INSTANCE_PORT:${PORT:${server.port:8080}}}",
                "eureka.instance.hostname": "${CF_INSTANCE_IP:localhost}",
                "eureka.client.serviceUrl.defaultZone": "http://localhost:8761/eureka/"
            }
        },
        {
            "name": "https://github.com/spring-cloud-samples/config-repo/foo-development.properties",
            "source": {
                "bar": "spam"
            }
        },
        {
            "name": "https://github.com/spring-cloud-samples/config-repo/foo.properties",
            "source": {
                "foo": "bar"
            }
        },
        {
            "name": "https://github.com/spring-cloud-samples/config-repo/application.yml",
            "source": {
                "info.description": "Spring Cloud Samples",
                "info.url": "https://github.com/spring-cloud-samples",
                "eureka.client.serviceUrl.defaultZone": "http://user:${eureka.password:}@localhost:8761/eureka/",
                "invalid.eureka.password": "<n/a>"
            }
        }
    ]
}
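If the git-backed property sources are missing on Windows, it is also worth checking that the server is actually pointed at the sample repo. A minimal application.yml sketch for the config server (spring.cloud.config.server.git.uri is the standard Spring Cloud Config property; treating it as the sample's exact setup is an assumption):

server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/spring-cloud-samples/config-repo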

Chef: Trying to get build-essential to install on our node before Postgres

Here's our node configuration:
{
    "run_list": [
        "recipe[apt]",
        "recipe[build-essential]",
        [
            "rackbox"
        ]
    ],
    "rackbox": {
        "jenkins": {
            "job": "job1",
            "git_repo": "https://github.com/hayesmp/railsgirls-app.git",
            "command": "bundle exec rake",
            "ip_address": "192.237.181.154",
            "host": "subocean-southerner"
        },
        "ruby": {
            "versions": [
                "2.0.0-p247"
            ],
            "global_version": "2.0.0-p247"
        },
        "apps": {
            "unicorn": [
                {
                    "appname": "app1",
                    "hostname": "app1"
                }
            ]
        },
        "db_root_password": "iloverandompasswordsbutthiswilldo",
        "databases": {
            "postgresql": [
                {
                    "database_name": "app1_production",
                    "username": "app1",
                    "password": "app1_pass"
                }
            ]
        }
    }
}
I'm just not sure where to insert the build-essential compiletime = true attribute in my configuration.
This is the sample code from the Stack Overflow post "Chef: Why are resources in an include_recipe step being skipped?":
name "myapp"
run_list(
"recipe[build-essential]",
"recipe[myapp]"
)
default_attributes(
"build_essential" => {
"compiletime" => true
}
)
Paste this into your node configuration:
"build_essential": {
"compiletime": true
}
BTW: you should use recipe[rackbox] instead of [rackbox] in your run_list.
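Putting both fixes together, the top of the node configuration would look something like this (a sketch; the existing rackbox attributes from the question stay as they are):

{
    "run_list": [
        "recipe[apt]",
        "recipe[build-essential]",
        "recipe[rackbox]"
    ],
    "build_essential": {
        "compiletime": true
    }
}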