How can I run a task once? - deployment

There is a playbook for deploying to many hosts.
I need to send the value of a variable ONCE (run local_action once?!) to a REST service after the deploy.
How can I accomplish this?

As playbooks are meant to be idempotent, I would say that the easiest way to do this would be to run a bash script that:
1. Checks if a file, let's say /var/lock/foobar, exists
2. Executes the call to your web service IF the file does not exist
3. Writes /var/lock/foobar
That way your script is idempotent and can be called numerous times while making the call only once.
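The same guard can also be expressed natively in Ansible with the shell module's creates argument, combined with run_once and delegate_to so the call happens on the control machine for a single host. A minimal sketch, where rest_url is a hypothetical variable:

- name: Call the REST service once, guarded by a lock file
  shell: curl -fsS -X POST {{ rest_url }} && touch /var/lock/foobar
  args:
    creates: /var/lock/foobar   # skip the task if the lock file already exists
  delegate_to: localhost        # run on the control machine, like local_action
  run_once: true                # execute for only one host in the play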

Why not just add a task to the end of the deploy playbook?
- hosts: rest_service_target
  tasks:
    - name: Post to REST
      local_action: command curl {{ url_of_rest_service_target }} {{ curl_arguments }}
You could add some error handling so the call is made only when the deploy is successful, send an email when the deploy fails, etc.: http://docs.ansible.com/playbooks_error_handling.html

Detecting deploy failure from Ansistrano Deploy

We are using Ansistrano Deploy
roles:
  - role: ansistrano.deploy
We want to be able to detect when the deploy fails for any reason (or succeeds), so we can send a Slack notification.
How can we get a return code or similar on this to know the result of the deploy?
Not being familiar with Ansistrano under the hood, and more specifically with the error handling already in place inside the role, I'm not entirely sure this will work out of the box.
But my first natural attempt would be to use a block with error handling. This requires changing the way you call the role: use import_role instead of the play-level roles: keyword.
Here is a pseudo-playbook example to give you the general idea:
- hosts: my_deploy_hosts
  tasks:
    - name: deploy my_app with some error control
      block:
        - name: run the ansistrano deploy role
          import_role:
            name: ansistrano.deploy
        - name: If we got here, the above ran successfully
          debug:
            msg: "You should send a ++ message to slack"
      rescue:
        - name: If we get here, something went wrong
          debug:
            msg: "Houston. Houston. We have a problem."

GitHub Actions: get URL of test build

I can't seem to find out how to get the URL of a workflow run anywhere in the docs. I have a simple job which runs tests, and on failure it needs to post the URL of the failed run to another web service.
I was expecting this to be in the default environment variables, but apparently not.
Thanks,
${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
This was originally suggested in this post on the GitHub forum.
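Applied to the failure case in the question, a step guarded by if: failure() can post that URL when an earlier step fails. A sketch, where https://example.com/report stands in for the hypothetical receiving web service:

- name: Report failed run URL
  if: failure()   # run this step only when a previous step has failed
  run: |
    curl -X POST --data-urlencode \
      "url=${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
      https://example.com/report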
I'm convinced there is nothing about this in the docs, but I eventually found that this works:
https://github.com/<name>/<repo>/commit/$GITHUB_SHA/checks
You can get the GitHub Actions URL for a particular commit by formulating the URL as in the example below for a shell script step.
- name: Run shell cmd
  run: echo "https://github.com/${{ github.repository }}/commit/${{ github.sha }}/checks/${{ github.run_id }}"
Alternatively, GitHub Actions provides the environment variables GITHUB_REPOSITORY, GITHUB_SHA, and GITHUB_RUN_ID in each step, so you only need to construct the URL in the above pattern.
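For instance, the same URL built from those default environment variables instead of expression syntax (a sketch following the pattern above):

- name: Print the checks URL from default env vars
  run: echo "https://github.com/$GITHUB_REPOSITORY/commit/$GITHUB_SHA/checks/$GITHUB_RUN_ID"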

Provide Proxy Information to Job

Wondering if anyone has come across this: is it possible to provide proxy information to a Concourse job? Something along the lines of this:
- name: bosh-deploy-0
  ...
  jobs:
    - name: deploybosh
      properties:
        http_proxy_url: <http_proxy_url>:<http_proxy_port>
        https_proxy_url: <https_proxy_url>:<http_proxy_port>
        no_proxy:
          - localhost
          - 127.0.0.1
If anyone has a working example, I'd be very much appreciative!!
You can only set these properties per worker: https://github.com/concourse/concourse-bosh-release/blob/v4.2.1/jobs/worker/spec#L142-L153
If you want a job to run with specific proxy information set, you need to:
1. Deploy a worker with those properties set, and with some worker tag.
2. Configure every step of the job with that same tag, as in the sketch below.
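A minimal pipeline sketch of step 2, assuming the proxy-enabled worker was deployed with a hypothetical tag named proxied:

jobs:
  - name: deploybosh
    plan:
      - get: my-repo          # hypothetical resource
        tags: [proxied]       # pin this step to the proxy-configured worker
      - task: deploy
        tags: [proxied]       # every step of the job needs the same tag
        file: my-repo/ci/deploy.yml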
You could also set the proxy settings at the beginning of your job's task (and optionally pass the proxy endpoint via parameters or a config server backend). That's maybe not the nicest way; however, it works quite well.
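A sketch of that approach, exporting the standard proxy environment variables through task params; the ((...)) variable names, the image, and the deploy script are assumptions:

- task: deploybosh
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: alpine}      # hypothetical image
    params:
      http_proxy: ((http_proxy_url))    # assumed pipeline variables
      https_proxy: ((https_proxy_url))
      no_proxy: localhost,127.0.0.1
    run:
      path: sh
      args: ["-c", "env | grep -i proxy; ./deploy.sh"]   # proxy vars are now set for the task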

How to write e2e test automation for an application containing Kafka, Postgres, and REST API Docker containers

I have an app which is set up by docker-compose. The app contains Docker containers for Kafka, Postgres, and the REST API endpoints.
One test case is to post data to the endpoints. In the data, there is a field called callback URL. The app will parse the data and send it to the callback URL.
I am curious whether there is any test framework for similar test cases, and how to verify that the callback URL is hit with the data.
Docker Compose support has been added to endly. In the pipeline workflow for the app (app.yaml), you can add a "deploy" task and start the Docker services by invoking docker-compose up.
Once the test task has completed and your callback URL has been invoked, your validation task can check whether it was called with the expected data. For this you can utilize endly's recording feature and replay it to validate the callback request.
Below is an example of an ETL application app.yaml using docker-compose with endly to start the Docker services. Hope it helps.
tasks: $tasks
defaults:
  app: $app
  version: $version
  sdk: $sdk
  useRegistry: false
pipeline:
  build:
    name: Build GBQ ETL
    description: Using an endly shared workflow to build
    workflow: app/docker/build
    origin:
      URL: ./../../
      credentials: localhost
    buildPath: /tmp/go/src/etl/app
    secrets:
      github: git
    commands:
      - apt-get -y install git
      - export GOPATH=/tmp/go
      - export GIT_TERMINAL_PROMPT=1
      - cd $buildPath
      - go get -u .
      - $output:/Username/? ${github.username}
      - $output:/Password/? ${github.password}
      - export CGO_ENABLED=0
      - go build -o $app
      - chmod +x $app
    download:
      /$buildPath/${app}: $releasePath
      /$buildPath/startup.sh: $releasePath
      /$buildPath/docker-entrypoint.sh: $releasePath
      /$buildPath/VERSION: $releasePath
      /$buildPath/docker-compose.yaml: $releasePath
  deploy:
    start:
      action: docker:composeUp
      target: $target
      source:
        URL: ${releasePath}docker-compose.yaml
In your question, where is Kafka involved? Both steps sound like HTTP calls:
1) Post data to the endpoint
2) The app sends data to the callback URL
"One test case is to post data to endpoints. In the data, there is a field called callback URL. The app will parse the data and send the data to the callback URL."
Assuming the callback URL is an HTTP endpoint (e.g. REST or SOAP) with a POST/PUT API, it's better to expose a GET endpoint on the same resource. In that case, when the callback POST/PUT is invoked, the server-side state/data changes; next, use the GET API to verify the data is correct. The output of the GET API is the Kafka data which was sent to the callback URL (this assumes your first POST message was to a Kafka topic).
You can achieve this the traditional JUnit way, using a bit of code, or in a declarative way where you can completely bypass coding.
The example has dockerized Kafka containers to bring up locally and run the tests.
This section, Kafka with REST APIs, explains an automated way of testing REST APIs in combination with Kafka data streams.
e.g.
---
scenarioName: Kafka and REST api validation example
steps:
  - name: produce_to_kafka
    url: kafka-topic:people-address
    operation: PRODUCE
    request:
      recordType: JSON
      records:
        - key: id-lon-123
          value:
            id: id-lon-123
            postCode: UK-BA9
    verify:
      status: Ok
      recordMetadata: "$NOT.NULL"
  - name: verify_updated_address
    url: "/api/v1/addresses/${$.produce_to_kafka.request.records[0].value.id}"
    operation: GET
    request:
      headers:
        X-GOVT-API-KEY: top-key-only-known-to-secu-cleared
    verify:
      status: 200
      value:
        id: "${$.produce_to_kafka.request.records[0].value.id}"
        postCode: "${$.produce_to_kafka.request.records[0].value.postCode}"
Idaithalam is a low-code test automation framework, developed using Java and Cucumber. It leverages Behavior Driven Development (BDD). Testers can create test cases/scripts in simple Excel with an API spec. Excel is a simplified way to create JSON-based test scripts in Idaithalam. Test cases can be created quickly and tested in minutes.
As a tester, you need to create the Excel file and pass it to the Idaithalam framework.
First, it generates the JSON-based test scripts (a Virtualan collection) from the Excel file. During test execution, this test script collection can be used directly.
Then it generates feature files from the Virtualan collection and executes them.
Lastly, it generates test reports in BDD/Cucumber style.
This provides complete testing support for REST APIs, GraphQL, RDBMS DBs, and Kafka event messages.
Refer to the following link for more information on setup and execution:
https://tutorials.virtualan.io/#/Excel
How to create test scripts using Excel

Capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard Git deploy implementation with Capistrano v3 deploys the same repository on all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need to deploy any code.
How do I tackle such a problem in Capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example, in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature from Cap 2 back into Cap 3.
You can of course take the .rake files out of the gem and load those in your Capfile, which is a perfectly valid way to use the tool, and modify them for your own needs.
The general approach is: if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to prevent a server from deploying the repository code.
I needed to do this so I could specifically run a restart task on a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent of the deploy procedure.
For example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true

namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'

# Example of a task for release_roles() only; wrapped in a namespace so the
# 'composer:update' hook below resolves
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/