Bamboo Deployment compose variable name in ssh task - deployment

With Bamboo version 9 you can run an SSH task on multiple hosts within one step,
so you can define it to run on the machines host1 and host2.
I want to use one deployment to roll this out to both machines, but with a slightly different config.
So my idea was to configure variables that hold a specific value for each system.
I tried to set up my variables like this:
Variable name     Value
host1_feature1    true
host2_feature1    false
Within the SSH task I would do something like: feature1Var=$(echo "bamboo.${HOSTNAME}_feature1")
Using echo $feature1Var results in bamboo.host1_feature1 or bamboo.host2_feature1, depending on the host it's currently deploying to.
But when I try to access that variable in the script with echo ${!feature1Var}, which should result in true or false, bash tells me bash: line 2: bamboo.host1_feature1: invalid variable name (and similarly for host2).
Does anyone have a clue how to solve this, besides multiple deployments or putting the config in the SSH task?
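A sketch of one possible workaround, assuming Bamboo's usual behaviour of also exposing variables to scripts as environment variables with a bamboo_ prefix and dots replaced by underscores (so bamboo.host1_feature1 arrives as bamboo_host1_feature1, which is a valid bash name, unlike the dotted form):

```shell
#!/usr/bin/env bash
# Simulated injected variables -- in a real deployment Bamboo would set
# these in the environment of the SSH task's script.
bamboo_host1_feature1=true
bamboo_host2_feature1=false

host=host1                               # in the real task: host=$(hostname)
feature1Var="bamboo_${host}_feature1"    # braces keep "_feature1" out of the name
feature1=${!feature1Var}                 # bash indirect expansion
echo "$feature1"                         # prints "true" on host1
```

The key points are using the underscore form of the variable name and wrapping the hostname in braces so bash does not try to expand a variable literally named HOSTNAME_feature1.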


Failure/timeout invoking Lambda locally with SAM

I'm trying to get a local environment to run/debug Python Lambdas with VS Code (Windows). I'm using the provided HelloWorld example to get the hang of this, but I'm not able to invoke it.
Steps used to setup SAM and invoke the Lambda:
I have Docker installed and running
I have installed the SAM CLI
My AWS credentials are in place and working
I have no connectivity issues and I'm able to connect to AWS normally
I created the SAM application (HelloWorld) with all the files and resources; I didn't change anything.
I ran "sam build" and it finished successfully.
I ran "sam local invoke" and it failed with a timeout. I increased the timeout to 10 s, and it still times out. The HelloWorld Lambda code only prints and does nothing else, so I'm guessing the code isn't the problem but something else relating to the container or the SAM environment itself.
C:\xxxxxxx\lambda-python3.8>sam build
Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in AWS SAM. It could result in unexpected behaviors and is not recommended.
Building codeuri: C:\xxxxxxx\lambda-python3.8\hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
Running PythonPipBuilder:ResolveDependencies
Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam\build
Built Template   : .aws-sam\build\template.yaml

C:\xxxxxxx\lambda-python3.8>sam local invoke
Invoking app.lambda_handler (python3.8)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-python3.8:rapid-1.51.0-x86_64.
Mounting C:\xxxxxxx\lambda-python3.8\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
Function 'HelloWorldFunction' timed out after 10 seconds
No response from invoke container for HelloWorldFunction
Any hints on what's missing here?
Thanks.
Mostly, a Lambda function gets timed out because of some resource dependency. Are you using any external resource, maybe a DB connection or a REST API call?
Please put more print statements in lambda_handler (your function handler) before calling any resource; then you will know where exactly it is waiting. Also increase the timeout to one minute or more, because most external resource calls over HTTPS have 30-second timeouts.
The log suggests that either the container wasn't started, or SAM couldn't connect to it.
Sometimes the hostname resolution on Windows can be affected by hosts file or system settings.
Try running the invoke command as follows (this will make the container ports bind to all interfaces):
sam local invoke --container-host-interface 0.0.0.0
...additionally try setting the container-host parameter (set to localhost by default):
sam local invoke --container-host-interface 0.0.0.0 --container-host host.docker.internal
The next piece of the puzzle is incorporating these settings into VS Code. This can be done in two places:
create samconfig.toml in the root dir of the project with the following contents; this allows running sam local invoke from the terminal without having to add the command-line argument:
version=0.1
[default.local_invoke.parameters]
container_host_interface = "0.0.0.0"
update the launch configuration as follows to enable VS Code debugging:
...
"sam": {
    "localArguments": ["--container-host-interface", "0.0.0.0"]
}
...

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently a hardcoded API URL).
What I'd like is to have the config rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api part can be static.
I've tried modifying the pipeline variable's value to https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (it does not populate the machine name in that variable).
Since variables are the source of the values written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
Using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines)
Can't just use localhost, as the configuration is read into a rich JavaScript app that would have JS trying to connect to localhost instead of to the server.
I'm interested in any ways I could solve this problem
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
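Since the transform only consumes pipeline variables, one possible approach (a sketch, not from the original answer) is to set the variable at runtime on the agent itself with the task.setvariable logging command; the variable name API_URL and the domain are taken from the question, and the inline-script step itself is an assumption:

```shell
# Inline script step running on the deployment agent: derive the API URL
# from the machine's hostname and publish it as a pipeline variable that
# later tasks (e.g. the config transform) can consume as $(API_URL).
host=$(hostname)                              # e.g. TEST-1
api_url="https://${host}.somedomain.com/api"  # .somedomain.com/api stays static
echo "##vso[task.setvariable variable=API_URL]${api_url}"
```

Any task after this step then sees $(API_URL) with the machine-specific value, so the transform no longer needs a hardcoded URL.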

How do I make pytest fail fast as a user level configuration?

I want to always run pytest in a fail-fast mode like --maxfail=1, regardless of the code repository I am testing.
Mainly I am looking for a config item, such as an environment variable or a config file in the user's home directory, that would make it fail fast.
The following environment variable should do the job:
export PYTEST_ADDOPTS="-x"
More info:
How to change command line options defaults
Failure options

Bluemix Continuous Delivery deploy script pass env variables

I need to pass some environment variables to the deploy script, like user names, spaces, service plans, etc. The idea was to use env in the manifest.yml file, but I can't get that working; it seems like I can only use the predefined CF_APP etc.
Any tips on passing stuff to the deploy script?
Espen
You can define your own environment variables in the manifest.yml file:
env:
  <var-name>: <value>
  <var-name>: <value>
Check out https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html to see the fields available in manifest.yml in detail.
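As a concrete illustration, a minimal manifest.yml fragment might look like this (the app name and values are hypothetical):

```yaml
# Hypothetical manifest.yml: keys under env become environment variables
# visible to the app and to scripts reading the Cloud Foundry environment.
applications:
- name: my-app
  env:
    DEPLOY_USER: espen
    SERVICE_PLAN: lite
```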

Rundeck winrm configuration

I have been trying to use Rundeck to send powershell commands to windows boxes.
I am using "rundeck-winrm-plugin"
https://github.com/rundeck-plugins/rundeck-winrm-plugin
It says to configure it in either project.properties or framework.properties file.
Here is what my /var/rundeck/projects/SecureCloud/etc/project.properties file looks like.
project.name=Cloud
project.ssh-authentication=privateKey
project.ssh.user=Domain\\rundeck-user
service.NodeExecutor.default.provider=jsch-ssh
project.ssh-keypath=/var/lib/rundeck/.ssh/id_rsa
resources.source.1.config.url=http\://localhost\:4567/puppetdb
resources.source.1.config.timeout=30
service.FileCopier.default.provider=jsch-scp
resources.source.1.type=url
resources.source.1.config.cache=true
service.NodeExecutor.default.provider=overthere-winrm
winrm-user=Domain\\rundeck-user
winrm-password-storage-path=keys/ldap-rundeck-user-pass
I can't figure out how to define the username and password according to this document:
https://github.com/rundeck-plugins/rundeck-winrm-plugin
I already have winrm-user defined, so I don't know if I still have to define rundeck-user#Domain; if yes, then how (I am using Kerberos)?
project.username=rundeck-user#Domain ?
How do I define the hostname here?
project.hostname=machine-name ?
Should I even use the /var/rundeck/projects/SecureCloud/etc/project.properties file when I have already declared there:
service.NodeExecutor.default.provider=jsch-ssh
while this doc says to put this line there:
service.NodeExecutor.default.provider=overthere-winrm
If not, then where should I put my configuration?
Username and password
There are two ways you can define authentication.
Basic:
You can use a Secure option in the Rundeck job, with an option name that matches the option name referenced in your node definition. You can set the username in the node definition as well.
Kerberos:
This is how you define the username (make sure you use caps for the domain, as defined in the krb5.conf file):
username="user#YOUR_DOMAIN.COM"
Hostname is defined in the node definition. To define a node you can do it under /var/rundeck/projects/SecureCloud/etc/resources.xml. For example:
<node name="YOURSERVER" connectionType="WINRM_NATIVE" node-executor="overthere-winrm" winrm-password-option="winrmPassword" winrm-protocol="https" winrm-auth-type="basic" username="YOURUSER" winrmPassword="winrmPassword" hostname="YOURHOSTNAME:PORT"/>
You don't need to define the node executor in your node definition if overthere-winrm is already set as the default node executor in the Configuration/Plugins/NodeExecutor section of the project in the Rundeck GUI
You can follow Rundeck Windows Nodes Configuration for the steps for overthere-winrm configuration.