Setting headless as CLI arg - protractor

My Protractor suite generally uses Chrome in non-headless mode so the tests can be monitored visually, but I tend to switch often between headless and normal mode while writing tests. Constantly changing the conf.js file is a hassle, so I'd like to be able to do this via a command-line argument. Something like the following:
npm test -- --headless
npm test-headless
As you can see I'm running Protractor via npm, so a complex argument construction is not a problem here.
I haven't been able to find a way to do this using uncle Google. Can someone point me in the right direction?

Keep it simple. Create two Protractor conf files:
- one for local (non-headless) runs - protractor.local.conf.js
- another for headless runs - protractor.headless.conf.js
Then add npm scripts that run each one, for example:
"scripts": {
  "test-headless": "protractor ./config/protractor.headless.conf.js",
  "test-local": "protractor ./config/protractor.local.conf.js"
}
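If you go this route, the headless conf can simply require the local conf and override only the browser flags, so the two files never drift apart. A minimal sketch, assuming the file names above and the standard chromeOptions capability:
// protractor.headless.conf.js - reuse the local conf, add headless Chrome flags
const baseConfig = require('./protractor.local.conf.js').config;

baseConfig.capabilities = {
  browserName: 'chrome',
  chromeOptions: {
    args: ['--headless', '--disable-gpu', '--window-size=1920,1080']
  }
};

exports.config = baseConfig;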

A somewhat hackish solution, but it works! Check for this flag (or any other) in your configuration file using node's process.argv global. You can then dynamically configure Protractor accordingly. This is one of the great perks of JS config files.
For example:
const isHeadless = process.argv.includes('--headless');

exports.config = {
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: [
        isHeadless && '--headless'
      ].filter(Boolean)
    }
  }
};
This does raise a warning in the Protractor CLI:
Ignoring unknown extra flags: headless. This will be an error in future versions, please use --disableChecks flag to disable the Protractor CLI flag checks.
As with all hacks, you use them at some level of risk. The --disableChecks flag will get rid of the warning and the future error, but may introduce other issues.
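With that in place you can toggle modes straight from the command line, assuming your npm test script invokes protractor:
npm test -- --headless --disableChecks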

Related

Go Stackdriver debugger error loading program

I am trying to set up Stackdriver debugging with Go. Using the article and this great Medium post I came up with this solution.
Key parts, in cloudbuild.yaml:
- name: gcr.io/cloud-builders/wget
  args: [
    "-O",
    "go-cloud-debug",
    "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug"
  ]
...
In my Dockerfile I have:
...
COPY gopath/bin/stackdriver-demo /stackdriver-demo
ADD go-cloud-debug /
ADD source-context.json /
CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"]
...
However, the pods keep crashing; the container logs show this error:
Error loading program: decoding dwarf section info at offset 0x0: too short
EDIT: Using https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug may be outdated, as I haven't seen it used outside Daz's Medium post. The official docs use the package cloud.google.com/go/cmd/go-cloud-debug-agent
I have updated the cloudbuild.yaml file to install this package:
- name: 'gcr.io/cloud-builders/go'
  args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
- name: 'gcr.io/cloud-builders/go'
  args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
And in the Dockerfile I can access the binary at gopath/bin/go-cloud-debug-agent.
When I execute go-cloud-debug-agent with my own program as an argument:
/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo
I get another opaque error:
Error loading program: AttrStmtList not present or not int64 for unit 88
So, in short, neither the cloud-debug binary from https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug nor the cloud-debug-agent binary from the package cloud.google.com/go/cmd/go-cloud-debug-agent works; they just fail with different errors.
Would appreciate any tips on what I'm doing wrong and how to fix it.
OK :-)
Yes, you should follow the current Stackdriver documentation, e.g. go-cloud-debug-agent
Unfortunately, there are now various issues with my post including a (currently broken) gcr.io/cloud-builders/kubectl for regions.
I think your issue pertains to your use of golang:alpine. Alpine uses musl rather than the glibc you find on most other Linux distros, so you really must compile for Alpine to ensure your binaries reference the correct libc.
I'm able to get your solution working primarily by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:
FROM golang:alpine
RUN apk add git
RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent
ADD main.go src
RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go
ADD source-context.json /
CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"]
I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.
I've made my version of your image publicly available (and will retain it for a few days for you):
gcr.io/dazwilkin-190402-55473323/roberson34#sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d
You may wish to reference it (by that digest) in your deployment.yaml
NB For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.

How do I make pytest fail fast as a user level configuration?

I want to always run pytest in a fail-fast mode like --maxfail=1, regardless of the code repository I am testing.
Mainly I am looking for a config option that could be set as an environment variable or in a config file in my home directory, and that would make pytest fail fast.
The following environment variable should do the job:
export PYTEST_ADDOPTS="-x"
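If you want this to apply to every shell session, one option (assuming a bash-style shell) is to put the export in your profile:
echo 'export PYTEST_ADDOPTS="-x"' >> ~/.bashrc
Note that -x is shorthand for --maxfail=1, so either spelling works.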
More info:
How to change command line options defaults
Failure options

Deploy and run a Go API server on Ubuntu/Centos

I just finished my first backend in Go using the Iris framework, but now I need to put it into production so I can use it with the Slack app I built.
To test the code locally I just run the file with go run main.go and use ngrok to test against the Slack API; it's working and it's finished.
I have a droplet with Ubuntu 16.04.3 and another one with CentOS 7... I was searching for something like pm2 for Go, running the server and using nginx to point to that port, but I read that with Go it's different and I have to use something like this: https://fabianlee.org/2017/05/21/golang-running-a-go-binary-as-a-systemd-service-on-ubuntu-16-04/
But that's a very long configuration for a simple server, and my questions are:
Is this the usual way to configure APIs with Go?
Apart from DigitalOcean, do you recommend a different service to run my API?
This is really my first time with Go and I just want to learn more; I am a backend developer with Laravel and Node.js.
You can use pm2 if you want. When you build a Go project it creates a binary executable, let's say backend-server, which you can run from the terminal to start the app like this:
$ ./backend-server
If it's not executable or you get a permission-denied error, add the execute permission to it:
$ chmod +x backend-server
Your binary should be ready to run. I like to do it with a JSON config file (process.json) so that I can pass extra env variables as well and don't have to type a lot in the terminal.
My process.json looks something like this:
{
  "apps": [{
    "name": "backend-app",
    "script": "./backend-server",
    "env": {
      "DB_USER": "db_user",
      "PORT": 8080
    }
  }]
}
Finally you can start the app using pm2 like this:
$ pm2 start process.json
More details about the JSON config can be found in the official docs.
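If you'd rather skip the JSON file for a quick start, pm2 can also launch the binary directly (the process name here is just an example):
$ pm2 start ./backend-server --name backend-app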
I think most people use Supervisor for this purpose, including me.
To make it very easy for you, just take a look at my Golang project, isaac-racing-server and use it as a template for yours by replacing isaac-racing-server with the name of your app. (The Supervisor files are in a subdirectory.)
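For reference, a minimal Supervisor program entry for a Go binary looks something like the sketch below; the program name, paths, and log locations are assumptions you would adapt to your setup:
[program:backend-app]
command=/home/deploy/backend-server
directory=/home/deploy
autostart=true
autorestart=true
stdout_logfile=/var/log/backend-app.out.log
stderr_logfile=/var/log/backend-app.err.log
On Ubuntu this typically goes into /etc/supervisor/conf.d/, after which supervisorctl reread followed by supervisorctl update picks it up.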

Persistent Karma test runner with autoWatch=false

I am trying to run Karma via Node (gulp, specifically) persistently in the background, but to re-run the tests manually, i.e. I have autoWatch set to false. I start the server with something like:
karma_server = new karma.Server(config.karma)
karma_server.start()
Then elsewhere I would like to trigger a test run when files are updated outside Karma. The API method one would expect to work is server.refreshFiles(), but it does not do this.
Internally it seems like executor.schedule() might do the trick, but it seems to be undocumented, private, and inaccessible.
So when autoWatch is turned off, how does one trigger Karma testing with an existing server? I'm sure I must be missing something obvious as otherwise the autoWatch option would always need to be true for the server to be useful.
If you have a server already running you can use the karma runner to communicate with it:
var runner = require('karma').runner,
    karmaConfig = {/* The karma config here */};

runner.run(karmaConfig, callback);
The grunt-karma plugin works like this; you can check it out for more info:
https://github.com/karma-runner/grunt-karma/blob/master/tasks/grunt-karma.js
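For example, from a gulp task you could trigger a run against the already-running server along these lines (the configFile path is an assumption; point it at your own Karma config):
var runner = require('karma').runner;

// Ask the running Karma server to execute the tests once and report the result.
runner.run({
  configFile: __dirname + '/karma.conf.js'
}, function (exitCode) {
  console.log('Karma run finished with exit code ' + exitCode);
});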

How do I configure grunt-karma to run tests directly when starting a watch?

I have configured my grunt/karma setup according to https://github.com/karma-runner/grunt-karma
I'm also using it together with grunt-contrib-watch as described under https://github.com/karma-runner/grunt-karma#karma-server-with-grunt-watch
Almost everything works great, but how do I configure Karma to run all tests immediately when the watch is started?
I start it with karma:unit:start watch, but then I must first change a file before the tests are run.
I have stared at the Karma config params at http://karma-runner.github.io/0.8/config/configuration-file.html but still cannot find the correct param.
I haven't used grunt-karma before, but the easiest option is probably to configure your watch task so that it runs its tasks at startup. This can be done via options.atBegin. So if you take the example from the grunt-karma documentation, you would write:
watch: {
  karma: {
    files: ['app/js/**/*.js', 'test/browser/**/*.js'],
    tasks: ['karma:unit:run'],
    options: {
      atBegin: true
    }
  }
},
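Note that the watch task only triggers runs; the Karma server itself still has to be started first, e.g. with a combined task along the lines of the grunt-karma background example (the task name is just an example):
grunt.registerTask('devmode', ['karma:unit:start', 'watch']);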