Let's say I have a scenario in my demo.feature file:
Scenario Outline: Gather and load all submenus
    Given I will login using <username> and <password>
    When I will click all links

    Examples:
      | username | password |
      | user1    | pass1    |
      | user2    | pass2    |
Let's say I also have a file called users.json.
How can I get those usernames and passwords from that external file into my demo.feature?
Can I pick the file up by passing a parameter to my npm script, like below?
npm run cucumber -- --params.environment.file=usernames.json
I recommend having the login step read that JSON file inside the step definition (see the sketch after the list below). Just make sure the file itself is never checked into the repository: expect it to exist at a known local path, but keep it out of source control.
Doing it this way is useful for a few reasons:
- An engineer running your tests does not need to know that a param must be passed in from the command line
- The code is self-descriptive in that step as to how it logs in
- You can add better error handling
- You can use multiple user files if need be, by having hooks define the file paths based on tags
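As a rough sketch of what that step definition could look like (assuming cucumber-js, a local users.json shaped like { "user1": "pass1", "user2": "pass2" }, and an Examples table trimmed down to just a username column; the file path, JSON shape, and step wording here are assumptions, not a prescribed layout):

// features/step_definitions/login.steps.js -- minimal sketch, not a drop-in solution
const { Given } = require('@cucumber/cucumber');
const fs = require('fs');
const path = require('path');

Given('I will login using {word}', function (username) {
  // Resolve the credentials file locally; it should never be committed to the repo
  const usersFile = process.env.USERS_FILE || path.resolve(__dirname, '../../users.json');
  const users = JSON.parse(fs.readFileSync(usersFile, 'utf8'));

  const password = users[username];
  if (!password) {
    // Clearer error handling than a silent bad login attempt
    throw new Error(`No credentials for "${username}" in ${usersFile}`);
  }

  // Keep the resolved credentials on the World for the actual login code to use
  this.credentials = { username, password };
});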
Before running the code, install the ibm-watson and ibm-cloud-sdk-core packages, and also pip install PyJWT==1.7.1.
I found in the IBM documentation that "For a Python script you can run to export logs and convert them to CSV format, download the export_logs_py.py file from the Watson Assistant GitHub repository."
But I don't really know where and how I should modify it in order to connect to my IBM skill.
There is no demo or instruction about where I can find those arguments.
I can only find this information in the skill API details, but it seems the script needs more.
Does anyone have an example of how to use the .py file they provided?
(I'm a coding beginner and don't really understand every line in the .py file.)
The .py file shows an error when I run it without modification:
runfile('C:/export_logs.py', wdir='C:/Users/admin/Downloads')
usage: export_logs.py [-h] [--logtype {ASSISTANT,WORKSPACE,DEPLOYMENT}]
[--language LANGUAGE] [--filetype {CSV,TSV,XLSX,JSON}]
[--url URL] [--version VERSION]
[--totalpages TOTALPAGES] [--pagelimit PAGELIMIT]
[--filter FILTER] [--strip STRIP]
apikey id filename
export_logs.py: error: the following arguments are required: apikey, id, filename
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
The conversation I want to download:
First of all, Workspaces in IBM Watson Assistant are now called Skills.
To understand what arguments (positional and optional) you need to pass to the Python script, run the command below:
python export_logs_py.py -h
Wherever you see workspace, you can replace it with skill.
To export the logs in CSV format, run the command below:
python export_logs_py.py --filetype CSV --url <URL> <API_KEY> <SKILL_ID> output.csv
Replace the placeholders <URL>, <API_KEY> and <SKILL_ID> with the appropriate values described below.
<URL> & <API_KEY> - You can find them on the Manage page of your Watson Assistant service
<SKILL_ID> - The same as the one in the image you uploaded. Check this StackOverflow answer for more info.
For Assistant logs, add --logtype ASSISTANT. The default is WORKSPACE.
You can also find the logs in the UI under the Analytics section of your Skill.
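If it helps to see how the connection to a skill is made in code, here is a minimal sketch of roughly what the script does internally with those three required values, using the ibm-watson SDK mentioned in the question (the version date and printed fields here are assumptions; substitute your own values for the placeholders):

# Minimal sketch, not the export_logs_py.py script itself
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('<API_KEY>')        # from the Manage page
assistant = AssistantV1(version='2020-04-01', authenticator=authenticator)
assistant.set_service_url('<URL>')                   # also from the Manage page

# workspace_id is your skill ID; logs come back one page at a time
response = assistant.list_logs(workspace_id='<SKILL_ID>', page_limit=100).get_result()
for log in response['logs']:
    print(log['log_id'], log['request_timestamp'])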
As you can see, the script reported an error and said that you have to provide the apikey, the id and the (presumably output) filename as parameters. It also showed that additional parameters can be specified.
usage: export_logs.py [-h] [--logtype {ASSISTANT,WORKSPACE,DEPLOYMENT}]
[--language LANGUAGE] [--filetype {CSV,TSV,XLSX,JSON}]
[--url URL] [--version VERSION]
[--totalpages TOTALPAGES] [--pagelimit PAGELIMIT]
[--filter FILTER] [--strip STRIP]
apikey id filename
Your next step could now be to invoke the script again, but provide an API key for Watson Assistant, the skill ID, and a filename as additional parameters. Next, I would try something like specifying the output type:
export_logs.py --filetype CSV myapikey skillID output.csv
I am not the author of that script, but that is how I would approach it if I wanted to use it.
I want to inspect a list of all routes that my Vapor app is serving. Is there a script or a run-time command that will generate the list for me?
I'm looking for something similar to rake routes in Ruby on Rails.
Run vapor run routes from the command line after executing vapor build
Another alternative to get the routes is to use the following command. It will build your project and display all routes registered to the Application's Router in an ASCII-formatted table.
$ swift run Run routes
+------+------------------+
| GET | /search |
+------+------------------+
| GET | /hash/:string |
+------+------------------+
A colon preceding a path component indicates a variable parameter. A colon with no text following is a parameter whose result will be discarded.
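For reference, here is a minimal sketch of route registrations that would produce the two rows above (assuming Vapor 4 syntax; adjust to your Vapor version):

import Vapor

func routes(_ app: Application) throws {
    // Shown by the routes command as: GET /search
    app.get("search") { req in
        "search results"
    }

    // Shown as: GET /hash/:string -- ":string" is a named path parameter
    app.get("hash", ":string") { req -> String in
        req.parameters.get("string") ?? ""
    }
}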
I'm trying to create a Project and a Task in TAC using the MetaServletCaller.bat file.
I'm able to create a project using the bat file, but I can't figure out how to link or assign jobs to that project.
How can I create a project with its jobs using the MetaServletCaller.bat file?
The Talend MetaServletCaller API doesn't provide any command for creating a job from an export file. The only way to do this would be to do it in Talend Studio, or programmatically using the commandline importItems command, which allows you to import an exported job (while logged in to the project):
| importItems source (dir|.zip) imports items |
| -if (--item-filter) filterExpr item filter expression |
| -im (--implicit) import implicit |
| -o (--overwrite) overwrite existing items |
| -s (--status) import the status |
| -sl (--statslogs) import stats & logs params |
You can find the commandline API reference here.
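For example (the path is a placeholder), once you are logged on to the project in the Talend CommandLine shell, importing an exported job archive and overwriting any existing items would look like:

importItems /path/to/exported_job.zip -o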
I'm trying to set up a deploy process that targets 16 websites, each hosting an instance of the same application.
Websites and AppPools are named as such:
appServer1:
app10.site.com
app11.site.com
app12.site.com
app13.site.com
appServer2:
app20.site.com
app21.site.com
app22.site.com
app23.site.com
etc.
etc.
...with each website having a correspondingly named AppPool.
I am desperately trying to determine how to use a single Deploy NuGet Package step to target all of these websites/app pools, using variables and a combination of PowerShell scripts if possible.
I'd like to have a single step where I can variable-substitute the website and app pool names, as these are the only differences. I basically need the equivalent of being able to loop the NuGet package step, passing it a list of website and app pool names. I cannot simply use variables because variable scoping only resolves down to the machine level.
Essentially: create a list of all website and AppPool names, then iterate over it, passing each value to a step for execution; a ForEach processing step, for lack of a better term.
I do have the ability to rename the AppPools if need be for a more consistent pattern, but I cannot change the website names.
Any ideas would be greatly appreciated.
http://help.octopusdeploy.com/discussions/questions/3481-every-website-in-the-deploy-has-a-different-apppool-and-website-name-how-to-deal-no-other-differences
There's a lot to your question, but I'm going to take a stab at explaining our approach, in hopes of jogging your creative juices.
tl;dr
Simply put, use your own PowerShell scripts to install the web application. In there you can set the app pool name on a per-website basis.
For starters, we do use a separate deployment step for each project. The scripts we use would allow you to do all deployments from a single deploy.ps1 (including unique appPool names), but we find that a step per project really helps keep each deployment nice and lean, and easy to manage. Each project gets its own nupkg, which contains the predeploy.ps1, deploy.ps1, and postdeploy.ps1, as well as a folder of build/deploy scripts that we've open sourced, and a folder of environment config XML files.
A sample environment config would be this; the file name is simply [envName].xml:
<!-- environments\Production.xml -->
<environmentSettings>
    <webSites>
        <app>
            <physicalPathRoot>c:\inetpub</physicalPathRoot>
            <physicalFolderPrefix>appname</physicalFolderPrefix>
            <siteProtcol>https</siteProtcol>
            <siteName>appname.tld</siteName>
            <siteHost>appname.tld</siteHost>
            <portNumber>443</portNumber>
            <appPath>/</appPath>
            <appPool>
                <name>appname.tld</name>
                <!-- valid identityTypes are: [LocalSystem, LocalService, NetworkService, SpecificUser, ApplicationPoolIdentity] -->
                <identityType>NetworkService</identityType>
                <!-- Set this value to the User the Service will run under in the format DOMAIN\username -->
                <!-- If Running as 'NetworkService' then 'NT AUTHORITY\Network Service' is used -->
                <userName>NT AUTHORITY\Network Service</userName>
                <!-- Leave blank unless using SpecificUser -->
                <password></password>
                <maxWorkerProcesses>5</maxWorkerProcesses>
            </appPool>
        </app>
    </webSites>
    <serverDatabase>
        <name>database_name</name>
        <connectionString>REPLACED BY OCTOPUS</connectionString>
        <providerName>System.Data.SqlClient</providerName>
    </serverDatabase>
</environmentSettings>
You can see in the corresponding Get-EnvironmentSettings.ps1 how we load up the config and then update it with any Octopus variables. This is the trickiest part, because we use dot-notation to update the paths (case sensitive); there is a rough sketch of the idea after the table below.
Our Octopus variables really only contain information that is secret, as everything else lives in [environment].xml:
| Name | Value | Scope
--------------------------------------------------------------------------
| webSites.app.appPool.password | supersecret | Production
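To make the dot-notation idea concrete, here is a rough sketch of what that update amounts to. This is not the actual module code; it just assumes the $OctopusParameters dictionary that Octopus exposes to PowerShell scripts and the [environment].xml layout shown above:

# Sketch only: apply Octopus variables named with dot-notation onto the loaded XML
$settings = [xml](Get-Content "$currentDir\environments\$environment.xml")

foreach ($name in $OctopusParameters.Keys) {
    if ($name -notlike "*.*") { continue }               # only dot-notation names apply
    $xpath = "//environmentSettings/" + ($name -replace '\.', '/')
    $node  = $settings.SelectSingleNode($xpath)          # XPath is case sensitive
    if ($node -ne $null) {
        $node.InnerText = $OctopusParameters[$name]      # e.g. webSites.app.appPool.password
    }
}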
So now a typical deployment script simply imports the modules, grabs the environment settings, updates the config, and installs the web app.
# Top of the script, get Octopus environment and version
param(
[string] $version = $OctopusPackageVersion,
[string] $environment = $OctopusEnvironmentName
)
# Make sure a failed deployment actually fails
$ErrorActionPreference = "Stop"
# Import the modules
$currentDir = Split-Path $script:MyInvocation.MyCommand.Path
$moduleDir = "$currentDir\modules"
Import-Module BuildDeployModules
# Grab the environment settings
$environmentSettings = Get-EnvironmentSettings $environment "//environmentSettings"
$databaseSettings = $environmentSettings.serverDatabase
$websiteSettings = $environmentSettings.webSites.app
# update the config
Update-XmlConfigValues $currentDir\website\Web.config "//appSettings/add[@key='databaseName']" $($databaseSettings.name) "value"
Update-XmlConfigValues $currentDir\website\Web.config "//connectionStrings/add[@name='databaseConnection']" $($databaseSettings.connectionString) "connectionString"
Update-XmlConfigValues $currentDir\website\Web.config "//connectionStrings/add[@name='databaseConnection']" $($databaseSettings.providerName) "providerName"
# Install the web application
Install-WebApplication $environment $websiteSettings $version "anonymousAuthentication"
In doing all of this, the web application is installed into IIS with a specific application pool, and appropriate config transforms without relying on any unknowns.
Our nupkg structure looks something like this:
appname.1.2.3.4.nupkg
    environments
        dev.xml
        staging.xml
        qual.xml
        production.xml
    modules
        [all of our build modules]
    website
        [all of our website files]
This is super repeatable, easy to maintain, and makes the config easy to edit. Hope it helps.
I am trying to specify a rather complicated labeling rule in the VCS settings of TeamCity. I am not sure whether what I am trying to do is possible.
This is the directory structure I have inherited:
mysvn/abc/repos
|
-TestDomain
-TestSystem
    |
    -MyFrameWork
    -MySoftware
        |
        -MySoftwareDevices
        -MySoftwareFiles
            |
            -branches
            -tags
            -trunk1
        -MySoftwareDriver
            |
            -branches
            -tags
            -trunk2
I want to specify a rule such that the TeamCity checkout directory has a structure like this:
TeamCity checkout directory
|
-FolderA
-FolderB
where FolderA has the contents of trunk1 and FolderB has the contents of trunk2.
Is this possible?
An SVN URL like:
mysvn/abc/repos/TestSystem/MySoftware/MySoftwareFiles/trunk1
does give me trunk1, but I need the contents of trunk1 and trunk2 under two different folders in the same build checkout directory.
The labeling rule I have been using: trunk=>tags
OK, I got this working. It was somewhat trial and error, so I'm hoping someone comes up with a proper explanation.
The labeling rule still has:
trunk=>tags
The SVN URL: mysvn/abc/repos/TestSystem
It is the checkout rules I added that did the trick: exclude the repository root, then map each trunk into its own folder:
-:.
+:MySoftware/MySoftwareFiles/trunk1=>MyDir/FolderA
+:MySoftware/MySoftwareDriver/trunk2=>MyDir/FolderB