I'm working with a private cloud platform that is used for creating and testing Virtual Machines. It has a rich API that lets me create VMs:
{
"name": "WIN2016-01",
"description": "This is a new VM",
"vcpus": 4,
"memory": 2147483648,
"templateUuid": "sdsdd66-368c-4663-82b5-dhsg7739smm",
...
}
I need to automate the process of creating machines by simply incrementing the -01 part, so it becomes:
"name": "WIN2016-01",
"name": "WIN2016-02",
"name": "WIN2016-03"
etc.
I tried to use the Postman Collection Runner and build a workflow (https://learning.getpostman.com/docs/postman/collection_runs/building_workflows/), but with no luck: I'm not sure what syntax I need to use in the Tests tab.
This is one way of doing it.
Create a collection and your POST request.
In your request's Pre-request Script tab, add the following:
/* As this will be run through the Collection Runner, this extracts
   the number of the current iteration. We're adding +1, as the iteration starts from 0. */
let count = Number(pm.info.iteration) + 1;

// Convert the current iteration number to a '00' number format (will be a string)
let countString = (count < 10) ? '0' + count.toString() : count.toString();

// Set an environment variable, which can be used anywhere
pm.environment.set("countString", countString);
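As an aside, if your Postman version's script sandbox supports ES2017, String.prototype.padStart does the same zero-padding in one line; a minimal sketch:

// Equivalent zero-padding with padStart (assumes the sandbox supports ES2017)
let countString = String(pm.info.iteration + 1).padStart(2, '0');
pm.environment.set("countString", countString);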
In your POST request body, do something like this:
{
"name": "WIN2016-{{countString}}",
...
}
Now, run your collection through the Collection Runner and enter the number of Iterations (i.e. how many times you want the collection to run). You can also add a Delay if your API imposes rate limits.
Finally, click Run.
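If you'd rather cap the run from the Tests tab instead of relying on the iteration count (the workflow approach from the link above), a hedged sketch is below; 'maxMachines' is an assumed environment variable you would set beforehand:

// Tests tab: end the collection run once enough machines are created.
// 'maxMachines' is an assumed environment variable, e.g. set to 10.
let max = Number(pm.environment.get("maxMachines"));
if (pm.info.iteration + 1 >= max) {
    postman.setNextRequest(null); // stop the run
}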
I have a Google Form and a sheet that collects the responses, which of course always appear at the bottom. I have been using the following script to copy the last response (which is always on the last row) from the response sheet (Form Responses 2) to row two of another sheet (All Responses). When run by a trigger on Form Submit, the script inserts a blank row into All Responses, then puts the copied values into another row above the blank row. Can you help me understand why, and how I might change the script so the blank row is not added?
function CopyLastrowformresponse() {
  var ss = SpreadsheetApp.getActive();
  var AR = ss.getSheetByName("All Responses");
  var FR = ss.getSheetByName("Form responses 2");
  var FRlastrow = FR.getLastRow();
  AR.insertRowBefore(2);
  FR.getRange(FRlastrow, 1, FRlastrow, 22).copyTo(AR.getRange("A2"), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false);
}
A few things could be going on here.
You're getting a number of rows equal to FRlastrow, when I think you only want to be getting 1 row.
Apps Script has buggy behavior with onFormSubmit() triggers, so you may need to check for duplicate triggers (see this answer).
The script isn't fully exploiting the event object provided by onFormSubmit(). Specifically, rather than reading the last row from one sheet, you could use e.values, which carries the same data.
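For illustration, for a form with three questions the handler would receive something like this (the values here are hypothetical): e.values = ["4/20/2011 17:07:12", "Alice", "alice@example.com", "Yes"], where index 0 is the form's timestamp and the rest follow the sheet's column order.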
I would change the script to be something like this:
function CopyLastrowformresponse(e) {
  if (e.values && e.values[1] != "") { // assuming e.values[1] (the first question) is required
    SpreadsheetApp.getActive()
      .getSheetByName("All Responses")
      .insertRowBefore(2)
      .getRange(2, 1, 1, e.values.length)
      .setValues([e.values]);
  }
}
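Note that this version only receives the event object from an installable trigger, not from a manual run. A minimal sketch of installing the trigger from the script editor (run it once by hand):

// One-time setup: install an installable on-form-submit trigger
// that passes the event object to CopyLastrowformresponse.
function installTrigger() {
  ScriptApp.newTrigger("CopyLastrowformresponse")
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onFormSubmit()
    .create();
}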
But, ultimately, if all you want to do is simply reverse the order of the results, then I'd ditch Apps Script altogether and just use the =SORT() function.
=SORT('Form responses 2'!A:V, 'Form responses 2'!A:A, FALSE)
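Placed in, say, A1 of All Responses, this sorts the full response range by the timestamp column (A) in descending order, so the newest response is always at the top, with no script or trigger needed.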
I am having trouble with object comparison.
I have an application which spits out JSON data. Each time I run my script, I get the current values and import the stored values from the previous run (from either a .json or .xml file). At the end of the script, I overwrite the stored values with the current values. The data looks like this:
Stored values:
id : 6549888
description : Windows CPU via WMI
name : Test
dataPoints : {#{id=6314; dataSourceId=6549888; name=CPUBusyPercent; description=%
of Busy CPU; alertTransitionInterval=8; alertClearTransitionInterval=0; type=2; dataType=2;
maxDigits=4; postProcessorMethod=expression;
postProcessorParam=100-(PercentProcessorTime/100000); rawDataFieldName=; maxValue=; minValue=0;
userParam1=; userParam2=; userParam3=; alertForNoData=3; alertExpr=>= 50 60 70; alertSubject=CPU
alert on ##HOST##; alertBody=The host ##HOST## is in state ##LEVEL##. CPU is ##VALUE## percent
busy - it has been in this state since ##START##, or for ##DURATION##}}
Current values:
id : 6549888
description : Windows CPU via WMI
name : Test
dataPoints : {#{id=6314; dataSourceId=6549888; name=CPUBusyPercent; description=%
of Busy CPU; alertTransitionInterval=8; alertClearTransitionInterval=0; type=2; dataType=2;
maxDigits=4; postProcessorMethod=expression;
postProcessorParam=100-(PercentProcessorTime/100000); rawDataFieldName=; maxValue=; minValue=0;
userParam1=; userParam2=; userParam3=; alertForNoData=3; alertExpr=>= 90 95 98; alertSubject=CPU
alert on ##HOST##; alertBody=The host ##HOST## is in state ##LEVEL##. CPU is ##VALUE## percent
busy - it has been in this state since ##START##, or for ##DURATION##}}
In this example, someone changed the alertExpr from "50 60 70" to "90 95 98" and I need to catch that (along with changes made to the rest of the properties).
When I run a foreach loop, I am not getting back any differences, but when I look at a single property, PowerShell does report the difference. So the following returns nothing:
foreach ($dataSource in $currentDataSources) {
    foreach ($recordedDatasource in $previousDataSources) {
        if ($dataSource.id -eq $recordedDatasource.id) {
            Compare-Object $dataSource $recordedDatasource
        }
    }
}
But looking at index 1034 (that's my test item) does show a difference:
Compare-Object $currentDataSources[1034].datapoints.alertexpr $previousDatasources[1034].datapoints.alertexpr
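From what I can tell, Compare-Object only compares each object's string form unless you pass -Property, so inside the if block above I suspect I need something like this sketch (the property names are taken from the sample data above):

# Compare the nested dataPoints objects property by property;
# -Property makes Compare-Object inspect the actual values.
$props = 'alertExpr', 'alertSubject', 'alertBody', 'postProcessorParam'
Compare-Object $dataSource.dataPoints $recordedDatasource.dataPoints -Property $props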
What is the best way to check each of the properties between my two sets of data?
Thanks.
I have the following code, which reads the task name I passed to gulp (release or test) and decides which task group to load from the files based on that.
var argv = require('yargs').argv;
var group = argv._[0];
var groups = {
"release": ["tasks/release/*.js", , "tasks/release/deps.json"],
"test": ["tasks/test/*.js", "tasks/test/deps.json"]
};
require("gulp-task-file-loader").apply(null, groups[group]);
Isn't there a better way to get the invoked tasks from gulp itself instead of using yargs?
I found a great tutorial about tools for building CLIs. It recommends commander, so that's what I use; I find it much better than yargs. Another possible solution is to read process.argv[2] in this case, but it is much better to use a proper parser in the long term.
var program = require("commander");
program.parse(process.argv);
var group = program.args[0];
var groups = {
"release": ["tasks/release/*.js", , "tasks/release/deps.json"],
"test": ["tasks/test/*.js", "tasks/test/deps.json"]
};
require("gulp-task-file-loader").apply(null, groups[group]);
I'm profiling my application locally (using the dev server) to get more information about how GAE works. My tests compare the common full-entity query with the projection query. Both tests run the same query, but the projection query is limited to 2 properties. The test kind has 100 properties, all with the same value for each entity, for a total of 10 entities. An image with the Datastore viewer and the Appstats-generated data is shown below. In the Appstats image, Request 4 is a memcache flush, Request 3 is the test database creation (it already existed, so no costs here), Request 2 is the full-entity query, and Request 1 is the projection query.
I'm surprised that both queries resulted in the same number of reads. My guess is that small and read operations are being reported identically by Appstats. If that is the case, I want to separate them in the reports. These are the query-related functions:
// Full Entity Query
public ReturnCodes doQuery() {
    DatastoreService dataStore = DatastoreServiceFactory.getDatastoreService();
    for (int i = 0; i < numIters; ++i) {
        Filter filter = new FilterPredicate(DBCreation.PROPERTY_NAME_PREFIX + i,
                FilterOperator.NOT_EQUAL, i);
        Query query = new Query(DBCreation.ENTITY_NAME).setFilter(filter);
        PreparedQuery prepQuery = dataStore.prepare(query);
        Iterable<Entity> results = prepQuery.asIterable();
        for (Entity result : results) {
            log.info(result.toString());
        }
    }
    return ReturnCodes.SUCCESS;
}
// Projection Query
public ReturnCodes doQuery() {
    DatastoreService dataStore = DatastoreServiceFactory.getDatastoreService();
    for (int i = 0; i < numIters; ++i) {
        String projectionPropName = DBCreation.PROPERTY_NAME_PREFIX + i;
        Filter filter = new FilterPredicate(projectionPropName,
                FilterOperator.NOT_EQUAL, i);
        Query query = new Query(DBCreation.ENTITY_NAME).setFilter(filter);
        query.addProjection(new PropertyProjection(DBCreation.PROPERTY_NAME_PREFIX + 0, Integer.class));
        query.addProjection(new PropertyProjection(DBCreation.PROPERTY_NAME_PREFIX + 1, Integer.class));
        PreparedQuery prepQuery = dataStore.prepare(query);
        Iterable<Entity> results = prepQuery.asIterable();
        for (Entity result : results) {
            log.info(result.toString());
        }
    }
    return ReturnCodes.SUCCESS;
}
Any ideas?
EDIT: To get a better overview of the problem, I created another test which runs the same query but keys-only. In that case, Appstats correctly shows DATASTORE_SMALL operations in the report. I'm still pretty confused about the behavior of the projection query, which should also be reported as DATASTORE_SMALL operations. Please help!
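For reference, the keys-only test follows the same pattern as the code above; only the query construction changes (a sketch):

// Keys-only query: same filter, but only entity keys are fetched,
// which Appstats reports as DATASTORE_SMALL operations.
Query query = new Query(DBCreation.ENTITY_NAME).setFilter(filter).setKeysOnly();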
[I wrote the Go port of appstats, so this is based on my experience and recollection.]
My guess is this is a bug in appstats, which is a relatively unmaintained program. Projection queries are new, so appstats may not be aware of them, and treats them as normal read queries.
For some background: calculating costs is difficult. For write ops, the costs are returned with the results, as they must be, since the app has no way of knowing what changed (which is where the write costs happen). For reads and small ops, however, there is a formula to calculate the cost. Each appstats implementation (Python, Java, Go) must implement this calculation, including reflection or whatever is needed over the request object to determine what's going on. The APIs for doing this are not entirely obvious, and there are lots of little things, so it's easy to get it wrong and annoying to get it right.
Let's say I have a text file with lines like these:
[4/20/11 17:07:12:875 CEST] 00000059 FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on D:/Prgs/testing/WebSphere/AppServer/profiles/ProcCtr01/logs/ffdc/server1_3d203d20_11.04.20_17.07.12.8755227341908890183253.txt com.test.testserver.management.cmdframework.CmdNotificationListener 134
[4/20/11 17:07:27:609 CEST] 0000005d wle E CWLLG2229E: An exception occurred in an EJB call. Error: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
com.lombardisoftware.core.TeamWorksException: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
at com.lombardisoftware.server.ejb.persistence.CommonDAO.assertNotNull(CommonDAO.java:70)
Is there any way to easily import a data source such as this into Protovis? If not, what would be the easiest way to parse it into JSON? For example, the first entry might be parsed like so:
[
    {
        "Date": "4/20/11 17:07:12:875 CEST",
        "Status": "00000059",
        "Msg": "FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I"
    }
]
Thanks, David
Protovis itself doesn't offer any utilities for parsing text files, so your options are:
Use JavaScript to parse the text into an object, most likely using regexes.
Pre-process the text using the text-parsing language or utility of your choice, exporting a JSON file.
Which you choose depends on several factors:
Is the data somewhat static, or are you going to be running this on a new or dynamic file each time you look at it? With static data, it might be easiest to pre-process; with dynamic data, this may add an annoying extra step.
How much data do you have? Parsing a 20K text file in JavaScript is totally fine; parsing a 2MB file will be really slow, and will cause the browser to hang while it's working (unless you use Workers).
If there's a lot of processing involved, would you rather put that load on the server (by using a server-side script for pre-processing) or on the client (by doing it in the browser)?
If you wanted to do this in JavaScript, based on the sample you provided, you might do something like this:
// Assumes var text = 'your text';
// use the utility of your choice to load your text file into the
// variable (e.g. jQuery.get()), or just paste it in.
var lines = text.split(/[\r\n\f]+/),
    // regex to match the beginning of a log entry; the second capture
    // is the thread ID, which is hex in your sample (e.g. "0000005d")
    patt = /^\[(\d\d?\/\d\d?\/\d\d? \d\d:\d\d:\d\d:\d{3} [A-Z]+)\] ([0-9a-f]{8})/,
    items = [],
    currentItem;

// loop through the lines in the file
lines.forEach(function(line) {
    // look for the beginning of a log entry
    var initialData = line.match(patt);
    if (initialData) {
        // start a new item, using the captured matches
        currentItem = {
            Date: initialData[1],
            Status: initialData[2],
            Msg: line.substr(initialData[0].length + 1)
        };
        items.push(currentItem);
    } else if (currentItem) {
        // this is a continuation of the last item (e.g. a stack-trace line);
        // the guard skips stray lines before the first entry
        currentItem.Msg += "\n" + line;
    }
});
// items now contains an array of objects with your data
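If you'd rather pre-process (option 2 above), the same parsing logic runs fine in Node; a sketch, with placeholder file names:

// Node.js variant: read the log, build `items` with the code above,
// then write the JSON out for Protovis to load.
var fs = require('fs');
var text = fs.readFileSync('server.log', 'utf8');
// ... run the parsing code above to populate `items` ...
fs.writeFileSync('log.json', JSON.stringify(items, null, 2));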