I have a freestyle project in Jenkins, and I want to run a pre-build script (bash) that gets the list of running instances (from AWS EC2) and loads them into a multi-select parameter in Jenkins.
I have the script ready, but I can't figure out where to put it.
No matter what I do, I can't get this right.
Can someone please explain what I'm missing?
This is old but just in case someone is looking for a solution:
Use the Extensible Choice Parameter Plugin to
create a parametrised build. As the Choice Provider option, choose System Groovy Choice Parameter. To fetch a list of running instances, use the following Groovy script:
// Ask the AWS CLI for the IDs of running instances
def command = "aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query Reservations[*].Instances[*].InstanceId --output text"
def proc = command.execute()
proc.waitFor()
// Each whitespace-separated token of the output is one instance ID, i.e. one choice
def instances = proc.in.text.tokenize()
return instances
Of course, you need to have AWS access configured on your Jenkins server.
I'm trying to clean up some testing for IaC using InSpec, but hardcoding security_group_ids is a no-go for obvious reasons.
I'm trying to use the Ruby SDK instead to pull down the ID based on a name (i.e. like you do with Terraform data resources).
But we work with AWS named profiles, and while InSpec can connect to a named profile when I run the test, i.e.:
inspec exec . -t aws://prod_account
is it possible from InSpec to link that call to AWS named profiles to Ruby code within a control?
Since InSpec is written in Ruby, you can embed any Ruby code within your spec files. For instance, you can have Ruby code that builds an array and, for each element of the array, generates a spec block.
Thus, you can implement the logic for collecting the security group IDs and then iterate over them.
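For illustration only, here is a minimal sketch of that lookup logic in Python with boto3 (the profile name "prod_account" and group name "my-app-sg" are placeholders); inside an InSpec control you would write the equivalent calls with the Ruby aws-sdk:
# Sketch: resolve security group IDs by group name through an AWS named profile.
import boto3

session = boto3.Session(profile_name="prod_account")  # the named profile to use
ec2 = session.client("ec2")

response = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": ["my-app-sg"]}]  # placeholder group name
)
security_group_ids = [sg["GroupId"] for sg in response["SecurityGroups"]]

# Iterate over the collected IDs; in InSpec you would emit one describe block per ID.
for sg_id in security_group_ids:
    print(sg_id)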
I've got two projects (backoffice and frontoffice) deployed using CloudFormation.
In the frontoffice, I import some DynamoDB table names from the backoffice stack as Environment variables for my Lambdas.
To run some acceptance tests, I sometimes need to deploy the frontoffice without deploying the backoffice. Therefore, the frontoffice will try to do an ImportValue of an Export that doesn't exist.
Is there any pattern that would allow me to get the frontoffice deployed anyway, and then handle the lack of value in my code?
You could pass an additional Parameter to frontoffice indicating whether you are going to deploy with backoffice or not.
Based on the value of the parameter, you could use DependsOn and/or Fn::If to either import the DynamoDB table names or not.
For a fully automated solution without any extra Parameter, you would have to use a custom resource. The resource would be backed by a Lambda function, which would use the AWS SDK to query the CloudFormation exports and check whether backoffice is deployed.
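A minimal sketch of such a handler, assuming the backoffice stack exposes an export named backoffice-TableName (a made-up name). A real custom resource must also send its result back to CloudFormation via the event's ResponseURL (for example with the cfn-response helper), which is omitted here:
# Sketch of the Lambda behind the custom resource; "backoffice-TableName" is hypothetical.
import boto3

cfn = boto3.client("cloudformation")

def handler(event, context):
    # Collect every CloudFormation export in the account/region.
    exports = {}
    for page in cfn.get_paginator("list_exports").paginate():
        for export in page["Exports"]:
            exports[export["Name"]] = export["Value"]

    # Empty string if the backoffice stack (and therefore its export) is not deployed.
    table_name = exports.get("backoffice-TableName", "")

    # Signalling success/failure back to CloudFormation (cfn-response) is omitted here.
    return {"Data": {"TableName": table_name}}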
I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the cloudasset.googleapis.com service with gcloud. For this, they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
Error in the service enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
Error on running the directory command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal because in Qwiklabs they create a new temporary project for every lab, so this would only have worked if you were still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the argument --project. According to the documentation, the aforementioned argument is optional, and if it is not provided the command will use the default project from your configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the policy-library/samples directory anymore. There seems to be a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion, you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all this.
Adding this for later versions:
In Cloud Shell, you can set a temporary variable for the current project ID with:
PROJECT_ID="$(gcloud config get-value project)"
and then use it like:
--project ${PROJECT_ID}
I am trying to create a new service using the Create Service step of the Service Control Manager plugin in uDeploy, but my step is failing while executing. This is what I see in the output window:
sc.exe create 'MyServiceName' '/binPath=MyServicePath\n/start=auto\n/'
The link https://developer.ibm.com/urbancode/plugindoc/ibmucd/microsoft-windows-services/1-2/steps/#create_service states that I have to pass the arguments as a newline-separated list, but as you can see my arguments are being passed as one big single-quoted string, and I am not sure how to address this. Any help is appreciated.
I was able to resolve this by passing the arguments as shown below in the argument box of the Create Service step in the uDeploy UI:
binPath=
MyServicePath
start=
auto
(press Enter after each value so that every argument is on its own line)
Make sure there are no extra spaces at the end of each line.
I want to run some PySpark jobs using Google Dataproc with different project IDs, without success so far. I'm a newbie with PySpark and Google Cloud, but I've followed this example and it runs well (if the BigQuery dataset is either public or belongs to my GCP project, which is ProjectA). The input parameters look like this:
bucket = sc._jsc.hadoopConfiguration().get('fs.gs.system.bucket')
projectA = sc._jsc.hadoopConfiguration().get('fs.gs.project.id')
input_directory = 'gs://{}/hadoop/tmp/bigquery/pyspark_input'.format(bucket)

conf = {
    # Input Parameters
    'mapred.bq.project.id': projectA,
    'mapred.bq.gcs.bucket': bucket,
    'mapred.bq.temp.gcs.path': input_directory,
    'mapred.bq.input.project.id': 'projectA',
    'mapred.bq.input.dataset.id': 'my_dataset',
    'mapred.bq.input.table.id': 'my_table',
}

# Load data in from BigQuery.
table_data = sc.newAPIHadoopRDD(
    'com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat',
    'org.apache.hadoop.io.LongWritable',
    'com.google.gson.JsonObject',
    conf=conf)
But what I need is to run a job against a BQ dataset in ProjectB (I have credentials to query it). So when I set the input parameters like this:
conf = {
    # Input Parameters
    'mapred.bq.project.id': projectA,
    'mapred.bq.gcs.bucket': bucket,
    'mapred.bq.temp.gcs.path': input_directory,
    'mapred.bq.input.project.id': 'projectB',
    'mapred.bq.input.dataset.id': 'the_datasetB',
    'mapred.bq.input.table.id': 'the_tableB',
}
and try to load the data in from BQ, my script keeps running indefinitely. How should I set it up properly?
FYI, after running the example I mentioned before, I can see that two folders (shard-0 and shard-1) are created in Google Storage and contain the corresponding BQ data, but with my job only shard-0 is created and it's empty.
I talked to my co-worker Dennis and here is his suggestion:
"Hmm, not sure, it should work. They might want to test with "bq" CLI inside the master node to manually try some "bq extract" job of the projectB table into their GCS bucket since that's all the connector is doing under the hood.
If I had to guess I'd suspect they only meant their personal username has the credentials to query projectB, but the default service account of projectA might not have the query permissions. Everything inside Dataproc VMs act on behalf of the compute service account assigned to the VMs, not the end-user.
They can
gcloud compute instances describe -m
and somewhere in there it lists the service-account email address."
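If it helps, the active service account can also be checked from inside the master node itself; here is a small sketch that just reads the GCE metadata server (it is not part of the connector):
# Sketch: print the service account the Dataproc VM acts as, from inside the master node.
import requests

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/email")

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
print("This VM acts as:", resp.text)
Whatever account that prints is the one that needs BigQuery read access on projectB.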