REST API testing from the command line

I am preparing an SDK, and as of now the SDK does not have a separate CI system.
I want to test some REST endpoints which should be available when a user builds software with the SDK and runs it with our framework.
I have written all the manual steps in a shell script and plan to schedule the script with crontab to run every few hours.
Now, for the REST endpoint testing, I was thinking of just using curl and checking whether we get data back, but this can turn into a lot of work as we expand the functionality. I looked into the frisby framework, which kind of suits my needs.
Is there any recommendation for how to test REST services once the framework software is started?
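For context, the kind of curl check I have in mind is roughly the following (the endpoint URL and path are just placeholders, not real parts of our framework):
#!/usr/bin/env bash
# Minimal smoke test: fail if the endpoint is unreachable or returns a non-200 status.
# BASE_URL and /health are hypothetical examples.
BASE_URL="http://localhost:8080"

status=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL/health")
if [ "$status" -ne 200 ]; then
  echo "Health check failed: HTTP $status" >&2
  exit 1
fi
echo "Health check OK"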

Probably swat is exactly what you need. Reasons:
it is a DSL for web and REST service test automation
it uses the curl command-line API to create HTTP requests
it is both a DSL and a command-line tool to run test scenarios written in the DSL
it is configurable both from bash-style scripts and general configs
it is very easy to start with
in your case, the existing curl-based test cases could probably be converted into the swat DSL format quite easily
(*) Disclosure: I am the author of swat.

I have created a very small bash script to test JSON APIs which might be useful. It uses jq and curl as dependencies: curl for making requests and jq for JSON processing. It is only designed to test JSON APIs.
Link: api-test
Every API call you want to run is stored in a JSON file with the format below:
{
  "name": "My API test",
  "testCases": {
    "test_case_1": {
      "path": "/path_1",
      "method": "POST",
      "description": "Best POST api",
      "body": {
        "value": 1
      },
      "header": {
        "X-per": "1"
      }
    }
  },
  "url": "http://myapi.com"
}
To run a test case:
api-test -f test.json run test_case_1
api-test -f test.json run all    # run all API calls at once
It will produce output in an organized way:
Running Case: test_case_1
Response:
200 OK
{
  "name": "Ram",
  "full_name": "Ram Shah"
}
META:
{
  "ResponseTime": "0.078919s",
  "Size": "235 Bytes"
}
It also supports automated testing of APIs with jq-based JSON comparison and plain equality/subset comparisons.
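If you prefer to stay with plain curl and jq, the underlying idea is roughly this (a sketch of the technique, not api-test's actual syntax; the URL and expected value are placeholders):
#!/usr/bin/env bash
# Call a JSON endpoint and assert on a single field with jq.
# URL and expected value are hypothetical examples.
response=$(curl -s "http://myapi.com/path_1")

actual=$(echo "$response" | jq -r '.name')
if [ "$actual" = "Ram" ]; then
  echo "PASS: name is $actual"
else
  echo "FAIL: expected 'Ram', got '$actual'" >&2
  exit 1
fi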

Related

Azure Batch and API Job ADD: upload file in wd directory with

I'm using API Job Add to create one Job with one Task in Azure Batch.
This is my test code:
{
  "id": "20211029-1540",
  "priority": 0,
  "poolInfo": {
    "poolId": "pool-test"
  },
  "jobManagerTask": {
    "id": "task2",
    "commandLine": "cmd /c dir",
    "resourceFiles": [
      {
        "storageContainerUrl": "https://linkToMyStorage/MyProject/StartTask.txt"
      }
    ]
  }
}
To execute the API call I'm using Postman, and to monitor the result I'm using BatchExplorer.
The job and its task are created correctly, but the 'wd' folder that is generated automatically is empty.
If I understood correctly, I should see the linked file from storage there, right?
Maybe some other parameter is needed in the JSON body?
Thank you!
A task state of completed does not necessarily indicate success. From your JSON body, you most likely have an error:
"resourceFiles": [
{
"storageContainerUrl": "https://linkToMyStorage/MyProject/StartTask.txt"
}
You've specified a storageContainerUrl, but pointed it at a single file rather than a container. Also ensure you have provided proper permissions (either via SAS or a user managed identity).
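For a single blob, the resource file would typically use httpUrl (plus a SAS token if the blob is not publicly readable) together with a filePath instead of storageContainerUrl. A rough sketch against the placeholder URL from the question, where <SAS-token> and the filePath value are assumptions:
"resourceFiles": [
  {
    "httpUrl": "https://linkToMyStorage/MyProject/StartTask.txt?<SAS-token>",
    "filePath": "StartTask.txt"
  }
]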

GitHub Actions: how to access the log of the current build via the terminal

I'm trying to get familiar with GitHub Actions. I have configured my workflow so that every time I push my code to GitHub, the code is automatically built and pushed to Heroku.
How can I access the build log information in the terminal without going to github.com?
With the latest cli/cli tool named gh (1.9.0+), you can simply do
(from your terminal, without going to github.com):
gh run view <jobId> --log
# or
gh run view <jobId> --log-failed
See "Work with GitHub Actions in your terminal with GitHub CLI"
With the new gh run list, you receive an overview of all types of workflow runs, whether they were triggered via a push, pull request, webhook, or manual event.
To drill down into the details of a single run, you can use gh run view, optionally going into as much detail as the individual steps of a job.
For more mysterious failures, you can combine a tool like grep with gh run view --log to search across a run's entire log output.
If --log is too much information, gh run view --log-failed will output only the log lines for individual steps that failed.
This is great for getting right to the logs for a failed step instead of having to run grep yourself.
And with GitHub CLI 2.4.0 (Dec. 2021), gh run list comes with a --json flag for JSON export.
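Putting that together, a typical terminal session might look like this (the run ID and the search string are examples):
# list the most recent workflow runs and note the run ID
gh run list --limit 5

# search a run's full log for a specific string
gh run view 1234567890 --log | grep -i "error"

# show only the log lines of the failed steps
gh run view 1234567890 --log-failed

# with GitHub CLI 2.4.0+, export run metadata as JSON
gh run list --json databaseId,status,conclusion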
Use
curl \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/<github-user>/<repository>/actions/workflows/<workflow.yaml>/runs
https://docs.github.com/en/free-pro-team@latest/rest/reference/actions#list-workflow-runs
This will return a JSON response with the following structure:
{
  "total_count": 1,
  "workflow_runs": [
    {
      "id": 30433642,
      "node_id": "MDEyOldvcmtmbG93IFJ1bjI2OTI4OQ==",
      "head_branch": "master",
      "head_sha": "acb5820ced9479c074f688cc328bf03f341a511d",
      "run_number": 562,
      "event": "push",
      "status": "queued",
      "conclusion": null,
      "workflow_id": 159038,
      "url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642",
      "html_url": "https://github.com/octo-org/octo-repo/actions/runs/30433642",
      "pull_requests": [],
      "created_at": "2020-01-22T19:33:08Z",
      "updated_at": "2020-01-22T19:33:08Z",
      "jobs_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/jobs",
      "logs_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/logs",
      "check_suite_url": "https://api.github.com/repos/octo-org/octo-repo/check-suites/414944374",
      "artifacts_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/artifacts",
      "cancel_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/cancel",
      "rerun_url": "https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/rerun",
      "workflow_url": "https://api.github.com/repos/octo-org/octo-repo/actions/workflows/159038",
      "head_commit": {...},
      "repository": {...},
      "head_repository": {...}
    }
  ]
}
Access the jobs_url with a PAT that has repository admin rights.
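For example, to download the archived logs of a run via its logs_url (the endpoint responds with a redirect to a ZIP archive, hence -L; the token variable and output file name are placeholders):
# assumes GITHUB_TOKEN holds a personal access token with access to the repository
curl -L \
  -H "Accept: application/vnd.github.v3+json" \
  -H "Authorization: token $GITHUB_TOKEN" \
  -o run-logs.zip \
  https://api.github.com/repos/octo-org/octo-repo/actions/runs/30433642/logs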

How to create a secret-text type of credential for Jenkins using Jenkins API?

So far, I'm using the credentials plugin on Jenkins and I do a POST to {JENKINS_URL}/credentials/store/system/domain/_/createCredentials using a credentials.xml that looks like this:
<com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
  <scope>GLOBAL</scope>
  <id>my-test-cred</id>
  <description>This is an example from REST API</description>
  <username>xyz-test</username>
  <password>xyz-yay</password>
</com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl>
and it successfully creates a credential of type username:password.
But suppose I want to create a credential of type secret text, which would hold a token or a secret, say a GitHub token. How can I make a credentials.xml for that kind? I've searched high and low and I cannot find a definitive guide here :-(
I successfully used the class from @rkparmar's proposal in XML format to create a secret-text credential in Jenkins.
<org.jenkinsci.plugins.plaincredentials.impl.StringCredentialsImpl>
  <scope>GLOBAL</scope>
  <id>testID</id>
  <secret>thisIsAtest</secret>
  <description>TEST</description>
</org.jenkinsci.plugins.plaincredentials.impl.StringCredentialsImpl>
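For reference, posting that XML with curl would look roughly like this (the file name and authentication variables are placeholders; the endpoint is the one described in the question):
# assumes JENKINS_URL, JENKINS_USER and JENKINS_API_TOKEN are set appropriately
curl -X POST "$JENKINS_URL/credentials/store/system/domain/_/createCredentials" \
  --user "$JENKINS_USER:$JENKINS_API_TOKEN" \
  -H "Content-Type: application/xml" \
  --data-binary @secret-text-credential.xml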
Agreed! It is difficult to find the XML for a secret-text credential. If you would rather use JSON than XML, the JSON below creates a secret-text credential:
curl -X POST 'http://user:token@jenkins_server:8080/credentials/store/system/domain/_/createCredentials' \
--data-urlencode 'json={
  "": "0",
  "credentials": {
    "scope": "GLOBAL",
    "id": "myid",
    "secret": "mysecret",
    "description": "mydescription",
    "$class": "org.jenkinsci.plugins.plaincredentials.impl.StringCredentialsImpl"
  }
}'

Why does the Google Drive REST API files.list not return all the files?

I am using the following curl command to retrieve all my Google Drive files; however, it only lists a very limited part of the whole set of files. Why?
curl -H "Authorization: Bearer ya29.hereshouldbethemaskedaccesstokenvalue" https://www.googleapis.com/drive/v3/files
result
{
  "kind": "drive#fileList",
  "incompleteSearch": false,
  "files": [
    {
      "kind": "drive#file",
      "id": "2fileidxxxxxxxx",
      "name": "testnum",
      "mimeType": "application/vnd.google-apps.folder"
    },
    {
      "kind": "drive#file",
      "id": "1fileidxxxxxxx",
      "name": "test2.txt",
      ...
    }
The token scopes include:
https://www.googleapis.com/auth/drive.file
https://www.googleapis.com/auth/drive.appdata
I am also facing the same issue when using the Android SDK.
Any help would be appreciated.
Results from files.list are paginated -- your response should include a "nextPageToken" field, and you'll have to make another call for the next page of results. See the documentation here about the files.list call. You may want to use one of the client libraries to make this call (see the examples at the bottom of the page).
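In curl terms, paging looks roughly like this (the access token is masked as in the question, and pageSize is just an example value):
# first page
curl -H "Authorization: Bearer ya29.maskedaccesstoken" \
  "https://www.googleapis.com/drive/v3/files?pageSize=100"

# take nextPageToken from the previous response and request the next page
curl -H "Authorization: Bearer ya29.maskedaccesstoken" \
  "https://www.googleapis.com/drive/v3/files?pageSize=100&pageToken=NEXT_PAGE_TOKEN"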
I had the same problem when trying to get the list of files in a Google Drive folder. The folder has more than 5000 files, but the API returned only two of them. The problem is that when files in a folder are shared with "anyone with the link", they are not actually shared with you until you open them. The owner of the folder must add you as a viewer explicitly.

Run a MapReduce job via the REST API

I use Hadoop 2.7.1's REST APIs to run a MapReduce job from outside the cluster. This example, "http://hadoop-forum.org/forum/general-hadoop-discussion/miscellaneous/2136-how-can-i-run-mapreduce-job-by-rest-api", really helped me. But when I submit a POST request, some strange things happen:
Looking at "http://master:8088/cluster/apps", a single POST request produces two jobs, as in the following picture:
[screenshot: one request produces two jobs]
After waiting a long time, the job which I defined in the HTTP request body fails with a FileAlreadyExistsException. The reason is that the other job creates the output directory first, so "Output directory hdfs://master:9000/output/output16 already exists".
This is my request body:
{
  "application-id": "application_1445825741228_0011",
  "application-name": "wordcount-demo",
  "am-container-spec": {
    "commands": {
      "command": "{{HADOOP_HOME}}/bin/hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /data/ /output/output16"
    },
    "environment": {
      "entry": [{
        "key": "CLASSPATH",
        "value": "{{CLASSPATH}}<CPS>./*<CPS>{{HADOOP_CONF_DIR}}<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/*<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*<CPS>./log4j.properties"
      }]
    }
  },
  "unmanaged-AM": false,
  "max-app-attempts": 2,
  "resource": {
    "memory": 1024,
    "vCores": 1
  },
  "application-type": "MAPREDUCE",
  "keep-containers-across-application-attempts": false
}
and this is my command:
curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' 'http://master:8088/ws/v1/cluster/apps?user.name=hadoop' -d @post-json.txt
Can anybody help me? Thanks a lot.
When you run the MapReduce job, make sure the output folder does not already exist, as the job will not run if it is present. You can write your program so that it deletes the folder if it exists, or manually delete it before calling the REST API. This behavior exists to prevent data loss by avoiding overwriting the output of another job.
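Since the job is already being submitted over HTTP, the output directory can also be removed over WebHDFS before each submission; a rough sketch (the namenode host and WebHDFS port are assumptions based on Hadoop 2.x defaults):
# delete the previous output directory via WebHDFS (50070 is the default namenode HTTP port in Hadoop 2.x)
curl -i -X DELETE \
  "http://master:50070/webhdfs/v1/output/output16?op=DELETE&recursive=true&user.name=hadoop"

# alternatively, from a node with the Hadoop client installed:
# hdfs dfs -rm -r -f /output/output16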