I'm using AWS CodeDeploy for automatic deployment of my REST services. The deployment script has a lot of steps that copy, configure, and do other stuff. If any of the steps fails, the entire deployment fails for that server and I get a clear notification about it. So the last step I need is a health check: a validation that the configuration was correct and everything is up and running.
Of course, I can make a couple of curl POSTs, parse their results, and use some extracted values in further curl POSTs to get some sanity coverage, but all this parsing sounds like reinventing the wheel.
Is there any convenient testing framework/tool that can easily be "packed" and invoked in scripts without installing a huge testing suite on each of my production servers?
Given that you're doing REST, you can probably rely on status codes instead of parsing the body: if you get a code that's not in the 2xx range, something is wrong.
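For the simplest case that can be a one-liner around curl. Here is a minimal sketch (the fuller script below adds a retry loop and a version check), assuming a hypothetical /health endpoint:
# curl -f turns HTTP errors (4xx/5xx) into a non-zero exit code
if curl -fsS -o /dev/null "http://localhost:8080/health"; then
    echo "Service is up"
else
    echo "Health check failed"
    exit 1
fi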
If you want a more elaborate check, you could add a special endpoint that runs some DB queries and maybe sends some harmless requests to its integrations.
The most complicated option would be to implement smart post-deployment steps that follow some workflow procedure. For that you'd need either elaborate bash scripting or a more advanced language and framework (like REST Assured in Java or RestClient in Groovy).
Don't forget to wrap the health check in a loop with a timeout, since your first request may be sent too early, while the app is still being deployed.
Here is an example of a simple bash script that checks the status and the version of the app:
#!/usr/bin/env bash
# Helps to determine whether an application deployment was successful by
# checking a connection to an HTTP resource. If the page loads and the
# response is 200 or 201, the script finishes successfully. If the connection
# is refused or the response is 503 (Service Unavailable), the script keeps
# trying to connect until the timeout expires. Otherwise the script fails.
#
# Takes a required url parameter plus optional timeout (180 seconds by
# default) and artifact version parameters. If the version parameter is given
# and the response is 200 or 201, the script also checks that the deployed
# version (fetched from $url, e.g. a /version endpoint) equals the passed
# version. If not, the script fails. Example of usage in a bash script:
#
# sh post_deployment_test.sh http://blah.com/version 100 1.0.102-20160404.101644-5
# result=$?
#
# If $result equals 0, the connection was successfully established;
# otherwise, it was not.
url=$1
timeout=$2
version=$3

if [ -z "$timeout" ]; then
    timeout=180
fi

counter=0
delay=3

while [ $counter -le $timeout ]; do
    command="curl -L -s -o /dev/null -w %{http_code} $url"
    echo "Executing: $command"
    status_code=$($command)
    curl_code=$?

    # Curl exit code CURLE_COULDNT_CONNECT (7) means it failed to connect to the
    # host or proxy; this happens, in particular, when the connection is refused.
    if [ $curl_code -ne 0 ] && [ $curl_code -ne 7 ]; then
        echo "Connection is not established"
        exit 1
    fi

    if [ "$curl_code" = "7" ] || [ "$status_code" = "503" ]; then
        echo "Connection has not been established yet (connection refused or service unavailable). Trying to connect again"
        sleep $delay
        let counter=$counter+$delay
        continue
    elif [ "$status_code" = "200" ] || [ "$status_code" = "201" ]; then
        if [ -z "$version" ]; then
            echo "Connection is successfully established"
            exit 0
        else
            grep_result=$(curl -L -s "$url" | grep "$version")
            if [ -z "$grep_result" ]; then
                echo "$(curl -L -s "$url")"
                echo "Deployed version doesn't match the expected one"
                exit 1
            else
                echo "Connection is successfully established"
                exit 0
            fi
        fi
    else
        echo "Connection is not established"
        exit 1
    fi
done

echo "Connection is not established"
exit 1
I've found the kind of thing I was looking for: jasmine-node as the test runner plus frisby.js as the validation scripting tool.
It's both really portable (I just run npm install during the deployment) and really convenient in terms of scripting, e.g. (the official example from frisby):
var frisby = require('frisby');

frisby.create('Get Brightbit Twitter feed')
  .get('https://api.twitter.com/1/statuses/user_timeline.json?screen_name=brightbit')
  .expectStatus(200)
  .expectHeaderContains('content-type', 'application/json')
  .expectJSON('0', {
    place: function(val) { expect(val).toMatchOrBeNull("Oklahoma City, OK"); }, // Custom matcher callback
    user: {
      verified: false,
      location: "Oklahoma City, OK",
      url: "http://brightb.it"
    }
  })
  .expectJSONTypes('0', {
    id_str: String,
    retweeted: Boolean,
    in_reply_to_screen_name: function(val) { expect(val).toBeTypeOrNull(String); }, // Custom matcher callback
    user: {
      verified: Boolean,
      location: String,
      url: String
    }
  })
  .toss();
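In the deployment script itself the test step stays small. Here is a minimal sketch of the final validation hook, assuming the spec above is saved as spec/api_spec.js and package.json lists frisby and jasmine-node (the paths and file names are just examples):
#!/usr/bin/env bash
set -e
# install the test dependencies on the box being validated
npm install
# run the frisby specs with the locally installed jasmine-node runner
./node_modules/.bin/jasmine-node spec/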
I have a simple script to deploy a pubsub application.
This script runs on every deploy of my Cloud Run service, and it has a line with:
gcloud pubsub topics create some-topic
I want my script to handle the case where the topic already exists. Currently, if I run the script, the output is:
ERROR: Failed to create topic [projects/project-id/topics/some-topic]: Resource already exists in the project (resource=some-topic).
ERROR: (gcloud.pubsub.topics.create) Failed to create the following: [some-topic].
I tried the flag --no-user-output-enabled, but with no success.
Is there a way to ignore the error if the resource already exists, or a way to check whether it exists before creating it?
Yes.
You can simply repeat the operation: if the command succeeds, the topic did not exist beforehand and it does now; if it fails, the topic most likely already exists.
You can swallow stderr (with 2>/dev/null) and then check whether the previous command ($?) succeeded (0):
gcloud pubsub topics create do-something 2>/dev/null
if [ $? -eq 0 ]
then
    # Command succeeded, topic did not exist
    echo "Topic ${TOPIC} did not exist, created."
else
    # Command did not succeed, topic may (!) not have existed
    echo "Failure"
fi
NOTE: This approach misses the case where the command fails even though the topic didn't exist (i.e. some other issue).
Alternatively (more accurately and more expensively!) you can enumerate the topics first and then try (!) to create it if it doesn't exist:
TOPIC="some-topic"

RESULT=$(\
    gcloud pubsub topics list \
    --filter="name.scope(topics)=${TOPIC}" \
    --format="value(name)" 2>/dev/null)

if [ "${RESULT}" == "" ]
then
    echo "Topic ${TOPIC} does not exist, creating..."
    gcloud pubsub topics create ${TOPIC}
    if [ $? -eq 0 ]
    then
        # Command succeeded, topic created
        echo "Topic ${TOPIC} created"
    else
        # Command did not succeed, topic was not created
        echo "Failed to create topic ${TOPIC}"
    fi
fi
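A lighter-weight existence check is also possible with gcloud pubsub topics describe, which exits non-zero when the topic does not exist. A minimal sketch:
TOPIC="some-topic"
# describe exits with a non-zero status if the topic is missing
if ! gcloud pubsub topics describe "${TOPIC}" >/dev/null 2>&1
then
    gcloud pubsub topics create "${TOPIC}"
fi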
Depending on the complexity of your needs, you can automate using:
any of Google's Pub/Sub client libraries, which provide better error handling and retry capabilities;
Terraform, e.g. the google_pubsub_topic resource.
I had this same issue, so I thought I'd try to give a full-fledged function to address it. Building on what @DazWilkin posted, below is a bash function that takes two inputs:
Project you want to point to
Topic/subscription name (in this example the topic and subscription names are the same; it's straightforward to add a third input for the subscription name)
The function will:
Check whether the current working project matches the given one; if not, set it
Check whether the topic exists in the project; if not, attempt to create it and wait for the response
Check whether the subscription exists in the project; if not, attempt to create it as well and wait for the response
function create_pubsub() {
    # Get the current project
    current_project=$(gcloud config get-value project)
    echo "Current Project is: ${current_project}"

    # Check if the current project matches the specified project
    if [[ "$current_project" != "$1" ]]; then
        gcloud config set project $1
    else
        echo "The project provided matches the current working project"
    fi

    # Check if the topic exists in the project
    _topic=$(gcloud pubsub topics list \
        --filter="name.scope(topics)=$2" \
        --format="value(name)" 2>/dev/null)

    # React accordingly
    if [[ "${_topic}" != "" ]]; then
        echo "Topic $2 already exists in project ${current_project}"
    else
        echo "The topic '$2' does not exist in project ${current_project}. Creating it now..."
        gcloud pubsub topics create $2
        # Check if the command executed successfully
        if [ $? -eq 0 ]; then
            echo "Topic $2 was created successfully"
        else
            echo "An error occurred. Topic was NOT created"
        fi
    fi

    # Check if the subscription exists in the project
    _subscription=$(gcloud pubsub subscriptions list \
        --filter="name=projects/$1/subscriptions/$2" \
        --format="value(name)" 2>/dev/null)

    # React accordingly
    if [[ "${_subscription}" != "" ]]; then
        echo "Subscription $2 already exists in project ${current_project}"
    else
        echo "The subscription '$2' does not exist in project ${current_project}. Creating it now..."
        gcloud pubsub subscriptions create $2 --topic=$2
        # Check if the command executed successfully
        if [ $? -eq 0 ]; then
            echo "Subscription $2 was created successfully"
        else
            echo "An error occurred. Subscription was NOT created"
        fi
    fi
}
After adding this function to your .bashrc or .zshrc file, you would call it from the terminal as create_pubsub <PROJECT_ID> <TOPIC_ID>.
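For instance, with hypothetical project and topic names:
create_pubsub my-gcp-project my-topic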
Hope this is helpful.
I want to create a web service for my PhoneGap Android application, which will in turn call a Progress 4GL 9.1D procedure.
Does anyone have any idea how to create a web service for this?
That will be a struggle. You CAN create a server that listens to a socket but you will have to handle everything yourself!
Look at this example.
However, you are likely better off writing the web service in a language with better support and then finding another way of getting the data out of the DB. If you're really stuck on a 10+ year old version, you should seriously consider migrating to something else.
You don't have to upgrade everything -- you could just obtain a license for a version 10 client. V10 clients can connect to v9 databases (the rule is that the client can be up to one major release higher) so you could use that to build a SOAP service. Or you could get a v10 "webspeed" license.
Or you could write a simple enough CGI wrapper to some 4GL code if you have those sorts of skills. I occasionally toss together something like this:
#!/bin/bash
#
LOGFILE=/tmp/myservice.log
SVC=sample
# if a FIFO does not exist for the specified service then create it in /tmp
#
# $1 = direction -- in or out
# $2 = unique service name
#
pj_fifo() {
  if [ ! -p /tmp/$2.$1 ]
  then
    echo `date` "Creating FIFO $2.$1" >> ${LOGFILE}
    rm -f /tmp/$2.$1 >> ${LOGFILE} 2>&1
    /bin/mknod -m 666 /tmp/$2.$1 p >> ${LOGFILE} 2>&1
  fi
}
if [ "${REQUEST_METHOD}" = "POST" ]
then
  read QUERY_STRING
fi
# header must include a blank line
#
# we're returning XML
#
echo "Content-type: text/xml" # or text/html or text/plain
echo
# debugging echo...
#
# echo $QUERY_STRING
#
# echo "<html><head><title>Sample CGI Interface</title></head><body><pre>QUERY STRING = ${QUERY_STRING}</pre></body></html>"
# ensure that the FIFOs exist
#
pj_fifo in $SVC
pj_fifo out $SVC
# make the request
#
echo "$QUERY_STRING" > /tmp/${SVC}.in
# send the response back to the requestor
#
cat /tmp/${SVC}.out
# all done!
#
echo `date` "complete" >> ${LOGFILE}
Then you just arrange for a background session to be reading /tmp/sample.in:
/* sample.p
 *
 * mbpro dbname -p sample.p > /tmp/sample.log 2>&1 &
 *
 */

define variable request as character no-undo.
define variable result  as character no-undo.

input from value( "/tmp/sample.in" ).
output to value( "/tmp/sample.out" ).

do while true:

  import unformatted request.

  /* parse it and do something with it... */

  result = '<?xml version="1.0"?>~n<status>~n'.
  result = result + "ok".          /* or whatever turns your crank... */
  result = result + "</status>~n".

  put unformatted result skip.     /* spit the answer back out to the FIFO */

end.
When input arrives, parse the line and do whatever is needed, spit the answer back out to /tmp/sample.out, and loop. It's not very fancy, but if your needs are modest it is easy to do. If you need more scalability, robustness, or security you might ultimately need something more sophisticated, but this will at least let you get started prototyping.
I am writing a Nagios plugin to check authentication against an HTTPS site and then search for a text string in the response HTML body to confirm a successful login. I have created the following plugin:
#!/bin/bash
add_uri='--no-check-certificate https://'
end_uri='/'
result=$(wget -O- $add_uri$1$end_uri --post-data=$2)
flag=`echo $result|awk '{print match($0,"QC Domain")}'`;
echo $flag
echo "Nagios refreshes properly1"
if [[ $flag -gt 0 ]] ; then
    echo 'ALL SEEMS FINE!!'
    exit 0
else
    echo 'Some Problem'
    exit 2
fi
When I execute this plugin directly from command line
./check_nhttps <url here> '<very long post data with credential information>'
The plugin works as expected (for both positive and negative test cases) and there seem to be no issues.
But when the plugin runs from Nagios,
check_command check_nhttps! <url here> '<very long post data with credential information>'
it always shows a critical error (and prints the else-branch text "Some Problem").
P.S.: I also tried sending the post data wrapped in double quotes.
Please help!!!
I'd think it's very probable that your post data contains some characters that confuse Nagios, maybe a space or even a ! (which Nagios uses as its argument separator). Better to put the post data into a file and use wget's --post-file option, as sketched below. Also, you might insert echo "$2" > /tmp/this_is_my_post_data_when_executed_by_nagios into your script and check whether the post data arrives intact.
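A minimal sketch of the plugin reworked to read the post data from a file rather than from the command line (the "QC Domain" marker is carried over from the original; treating the second argument as a file path is an assumption):
#!/bin/bash
add_uri='--no-check-certificate https://'
end_uri='/'
post_file=$2   # path to a file containing the (already URL-encoded) post data

result=$(wget -O- ${add_uri}${1}${end_uri} --post-file="$post_file")
flag=$(echo "$result" | awk '{print match($0, "QC Domain")}')

if [[ $flag -gt 0 ]]; then
    echo 'ALL SEEMS FINE!!'
    exit 0
else
    echo 'Some Problem'
    exit 2
fi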
Is there a way to discover which server app.psgi is running under?
For example, I am looking for an idea of how to implement the following code fragment from app.psgi:
#app.psgi
use Modern::Perl;
use Plack::Builder;
my $app = sub { ... };
my $server = MyApp::GetServerType(); # <--- I need some idea for how to write this...
given ($server) {
    when (/plackup/) { ... do something ... };
    when (/Starman/) { ... do something other ... };
    default { die "Unknown" };
}
$app;
Checking the PLACK_ENV environment variable is not a solution...
Short answer: inspect the caller:
#app.psgi
# use Modern::Perl;
use feature qw(switch say);
use Carp qw(longmess);
use Plack::Builder;

my $app = sub {
    return [ 200, [ 'Content-Type' => 'text/plain' ], [ 'Hello World' ] ];
};

# This hack gets what we need out of the call stack
my $stack = longmess("Stack:");
# say STDERR $stack;

given ($stack) {
    when (/plackup/) { say STDERR "Server: plackup" };
    when (/Starman/) { say STDERR "Server: starman" };
    default { die "Server: Unknown" };
}

return $app;
However, doing this in app.psgi will make your code less portable: if you die on an unknown server, people won't be able to run your code in a new environment.
Also, be aware that this code may be run multiple times depending on how the server is implemented so any side effects may occur multiple times.
For example, here is the output for plackup:
plackup --app /usr/lusers/bburnett/dev/trunk/getserver.psgi
Server: plackup
HTTP::Server::PSGI: Accepting connections at http://0:5000/
So far so good. But here is the output for starman:
starman --app /usr/lusers/bburnett/dev/trunk/getserver.psgi
2014/02/21-16:09:46 Starman::Server (type Net::Server::PreFork) starting! pid(27365)
Resolved [*]:5000 to [0.0.0.0]:5000, IPv4
Binding to TCP port 5000 on host 0.0.0.0 with IPv4
Setting gid to "15 15 0 0 15 20920 20921 20927"
Server: starman
Server: starman
Server: starman
Server: starman
Server: starman
Here it gets run once for the master and once per child (defaults to four children).
If you really want something different to happen for these different servers, a more robust way may be to subclass them yourself, put the code into each subclass, and pass -s My::Starman::Wrapper to plackup and starman as needed.
If you really want a switch statement and to put the code in one place, you could look into writing some code that calls Plack::Loader or Plack::Runner. Take a look at the source for plackup, and you'll see how it wraps Plack::Runner. Take a look at the source for Plack::Loader, and you'll see how it gets the backend to run and then loads the appropriate server class.
I am using Net::Appliance::Session to log in to a remote Unix server, but I am not able to connect. Below is my code and the debug output:
my $s = Net::Appliance::Session->new({
    personality => 'Bash',
    transport   => 'SSH',
    host        => $host,
});

$s->set_global_log_at('debug');

try {
    print "Trying to connect\n";
    $s->connect({ username => $user, password => $pass });
    print "Executing command\n";
    print $s->cmd($cmd);
}
catch {
    warn "failed to execute command: $_";
}
finally {
    $s->close;
};
And the output is:
Trying to connect
[ 0.019420] pr finding prompt
[ 0.028553] tr creating Net::Telnet wrapper for ssh
[ 0.031377] tr connecting with: ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -l user ...
[ 3.151205] du SEEN:
Warning: Permanently added '...' (RSA) to the list of known hosts.
[ 3.183935] pr failed: [Can't call method "isa" on an undefined value at /usr/lib/perl5/site_perl/5.14/Net/CLI/Interact/Phrasebook.pm line 247.
], sending WAKE_UP and trying again
[ 3.184943] pr finding prompt
[ 4.898408] du SEEN:
Warning: Permanently added '...' (RSA) to the list of known hosts.
Password:
[ 4.920447] pr failed to find prompt! wrong phrasebook?
failed to execute command: Warning: Permanently added '...' (RSA) to the list of known hosts.
Password:
...propagated at /usr/lib/perl5/site_perl/5.14/Net/CLI/Interact/Role/Prompt.pm line 127.
When I login through Putty, I get the following response and can login successfully:
login as: user
Using keyboard-interactive authentication.
Password:
I cannot figure out what I am doing wrong. Any help is appreciated.
EDIT: I think I should mention that I am using Cygwin for this. I have manually logged in to the remote server, and the keys in my .ssh/known_hosts file are set, but I still get the RSA warning when running this program under Cygwin. I saw this question on SO: "Warning: Permanently added to the list of known hosts" message from Git, and added the line UserKnownHostsFile ~/.ssh/known_hosts to my config file, but the error refuses to go away.
EDIT2: When I use the -vvv option in the above program, I get the following output:
Trying to connect
[ 0.020327] pr finding prompt
[ 0.062541] tr creating Net::Telnet wrapper for ssh
[ 0.063709] tr connecting with: ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -vvv -l user 1.1.1.1
[ 0.731041] du SEEN:
OpenSSH_6.2p2
[ 0.851829] pr failed: [Can't call method "isa" on an undefined value at /usr/lib/perl5/site_perl/5.14/Net/CLI/Interact/Phrasebook.pm line 247.
], sending WAKE_UP and trying again
[ 0.852459] pr finding prompt
[ 0.852748] du SEEN:
OpenSSH_6.2p2, OpenSSL 1.0.1e 11 Feb 2013
[ 0.863739] pr failed to find prompt! wrong phrasebook?
failed to execute command: OpenSSH_6.2p2, OpenSSL 1.0.1e 11 Feb 2013
...propagated at /usr/lib/perl5/site_perl/5.14/Net/CLI/Interact/Role/Prompt.pm line 127.
The Net::Appliance::Session module uses a set of matching patterns called a "phrasebook" to recognize the password prompt, the command prompt, and so on.
In your case, there are two major issues and one minor/cosmetic one:
Net::Appliance::Session relies on a connection profile (personality). The correct one is named "bash", not "Bash".
The default bash phrasebook (located in "~site_perl/Net/CLI/Interact/phrasebook/unix/bash/pb") targets ssh/bash-based appliances and does not match your everyday Unix server's behavior:
prompt user
match /[Uu]sername: $/
prompt pass
match /password(?: for \w+)?: $/
prompt generic
match /\w+#.+\$ $/
prompt privileged
match /^root#.+# $/
macro begin_privileged
send sudo su -
match pass or privileged
macro end_privileged
send exit
match generic
macro disconnect
send logout
As you can see, neither the "generic" nor the "pass" prompt matches your usual Linux password prompt and shell prompt. You will need to adjust the phrasebook to your needs:
create a library structure by creating a nested directory: "mylib/mybash/"
make a copy of the "bash" phrasebook into that nested directory and edit it to match your Unix server's behaviour (see the sketch just below).
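A minimal sketch of that setup, assuming the stock phrasebook lives at the path quoted above ("~site_perl" is just the placeholder used earlier for your Perl site library path):
# create the custom phrasebook library next to your script
mkdir -p mylib/mybash
# start from a copy of the stock bash phrasebook, then adjust its prompts
cp ~site_perl/Net/CLI/Interact/phrasebook/unix/bash/pb mylib/mybash/pb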
There is also the ssh warning output:
Warning: Permanently added '...' (RSA) to the list of known hosts.
You just need to turn ssh warnings off by adding either "-q" or "-o LogLevel=quiet" to the options ssh is called with.
So, in the end, your code would look like this:
my $s = Net::Appliance::Session->new({
    add_library     => 'mylib',
    personality     => 'mybash',
    transport       => 'SSH',
    host            => $host,
    connect_options => { opts => [ '-q', ], },
});

$s->set_global_log_at('debug');

try {
    print "Trying to connect\n";
    $s->connect({ username => $user, password => $pass });
    print "Executing command\n";
    print $s->cmd($cmd);
}
catch {
    warn "failed to execute command: $_";
}
finally {
    $s->close;
};
With a phrasebook like this one (quickly tuned to my FreeBSD server):
prompt user
match /[Uu]sername: $/
prompt pass
match /[Pp]assword:\s*$/
prompt generic
match /\w+#.+[\$>] $/
prompt privileged
match /^root#.+# $/
macro begin_privileged
send sudo su -
match pass or privileged
macro end_privileged
send exit
match generic
macro disconnect
send logout
macro paging
send terminal length %s
NOTE:
About "Net::Appliance::Session" vs "Net::OpenSSH":
both modules handle ssh connections fine;
"Net::Appliance::Session" is more "Cisco/whatever-appliance" oriented, but it easily lets you connect to a server, execute one command, get its result, then go root and execute another command (very handy if you don't have direct root access over ssh);
"Net::OpenSSH" handles command execution through ssh on a one-command-only basis: it executes a command, gets its result, and exits. There is no direct way to set up an environment, go root, and then execute a command (you need a wrapper like Net::Telnet on top of it to do that);
"Net::OpenSSH" requires a fairly recent version of the OpenSSH client and does not work on Windows, not even under Cygwin (see the "Net::OpenSSH" manual).
Try using Net::OpenSSH instead. It would be easier to use and more reliable when talking to a Unix server.