AWS Device Farm: Protractor scripts are running on local machine instead of AWS cloud machines

https://docs.aws.amazon.com/devicefarm/latest/testgrid/testing-frameworks-nodejs.html
After following the above link, I am trying to run Protractor scripts on Device Farm; however, the scripts are being executed on my local machine instead of on the AWS cloud machines (i.e. the browser instance opens on my personal laptop). Please advise what changes I need to make so that the tests run on the cloud virtual machines.
Below is my conf.js code:
exports.config = {
  specs: ['ABC.js'],
  hostname: "testgrid-devicefarm.us-west-2.amazonws.com",
  port: 443,
  path: "xyz../wd/hub",
  protocol: "https"
};
Below is the output:
Selenium standalone server started at http://xxx.yyy.x.y:xxxxx/wd/hub
Started
1 spec, 0 failures
Finished in 6.941 seconds
[21:17:42] I/local - Shutting down selenium standalone server.
[21:17:42] I/launcher - 0 instance(s) of WebDriver still running
[21:17:42] I/launcher - firefox #01 passed
(Note: I was able to run Selenium scripts successfully on Device Farm. The only issue I am facing is with Protractor.)

NOTE: Let me preface this by saying that Protractor v5 and v7 do not support the W3C WebDriver specification / Selenium 4 (see the GitHub issues [1] and [2]), and thus cannot be used with AWS Device Farm, as the service requires W3C-compliant clients.
To connect to a remote address using the URL AWS Device Farm provides, the seleniumAddress parameter can be used in the conf.js.
Here's an example of a config for Protractor v6:
exports.config = {
  framework: 'jasmine',
  seleniumAddress: '<Your Remote URL>',
  specs: ['spec.js'],
  capabilities: {
    browserName: 'firefox'
  }
};
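The remote URL here is the pre-signed test grid URL that Device Farm generates for your project. Assuming the AWS CLI is configured, it can be created with something along these lines (the project ARN below is a placeholder for your own):
aws devicefarm create-test-grid-url \
    --project-arn "arn:aws:devicefarm:us-west-2:<account-id>:testgrid-project:<project-id>" \
    --expires-in-seconds 300
The returned url expires after the given number of seconds, so it needs to be regenerated (or scripted) for each run and dropped into seleniumAddress.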

Related

Grpc server not listening to port 5001 when run as a Windows service

I created the GrpcGreeter and GrpcGreeterClient projects in Visual Studio 2019 from the following page:
https://learn.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-5.0&tabs=visual-studio
The only change I made to these examples was that, in order for the GrpcGreeter app to run as a Windows service, I added ".UseWindowsService()" to the IHostBuilder in CreateHostBuilder. I published both to local folders from VS and selected Self Contained for the Deployment Mode.
The server and client work fine using https://localhost:5001 when run from either the VS environment or when running the published GrpcGreeter.exe and GrpcGreeterClient.exe directly.
I then used "Sc create" to successfully create a Windows service with GrpcGreeter.exe. Then on the Services window I started the service.
The problem is that when run as a Windows service the GrpcGreeter.exe does not listen on port 5001, as shown with netstat -anb (it does listen to port 5354, apparently). And of course when I then run GrpcGreeterClient.exe it does not connect. When GrpcGreeter.exe is run not as a Windows service netstat shows that it is listening to 5001, and GrpcGreeterClient.exe talks to it just fine.
A look at Event Viewer shows 3 errors happening immediately whenever I start the service on the Services window. I'm abbreviating them below.
1st:
Faulting application name: GrpcGreeter.exe, version: 1.0.0.0, time stamp: 0x5f6b3846
Faulting module name: ntdll.dll, version: 10.0.19041.546, time stamp: 0xd49544eb
Exception code: 0xc0000374
Fault offset: 0x000e6763
...
2nd:
Fault bucket , type 0
Event Name: FaultTolerantHeap
Response: Not available
Cab Id: 0
Problem signature:
P1: GrpcGreeter.exe
...
3rd:
Fault bucket 2242750238749681031, type 1
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: GrpcGreeter.exe
...
Please help. Thank you.
This is a very old post, but I also came across this issue when deploying a Windows service with gRPC. Not sure whether it will solve your problem, but my issue was that when you deploy as a Windows service, the app needs to have a certificate configured. This is stated in the documentation here, under the "Set HTTPS certificates by using configuration" part.
So I created a self-signed certificate using openssl (you can refer here too), added the .pfx file to the Kestrel configuration as shown in the Microsoft documentation, then built and published the app as a Windows service. After that, just proceed with the normal service creation procedure using
sc create
// and then
sc start
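For reference, the Kestrel certificate section mentioned above lives in appsettings.json and looks roughly like this (a sketch only; the endpoint name, .pfx path and password are placeholders for your own values):
{
  "Kestrel": {
    "Endpoints": {
      "HttpsDefault": {
        "Url": "https://localhost:5001"
      }
    },
    "Certificates": {
      "Default": {
        "Path": "grpcgreeter.pfx",
        "Password": "<your pfx password>"
      }
    }
  }
}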
The Windows service should now be running the gRPC server without any issue (in my case, at least). One thing to note: because this is a self-signed certificate, which is not inherently trusted, the frontend will get an error about the cert when it attempts to communicate with the server. You just need to trust it and it will be fine.
In a browser, just go to the address that is hosting the gRPC server, for example https://localhost:5001, click Advanced and trust it.
In my case, I was using Electron + Angular, so I just needed to add this code snippet, which I got from here. Now my frontend can communicate with the gRPC server in the Windows service normally.
// ignore self signed certificate in dev mode
if (process.env.NODE_ENV === 'development') {
  // SSL/TLS: this is the self signed certificate support
  app.on('certificate-error', (event, webContents, url, error, certificate, callback) => {
    // On certificate error we disable the default behaviour (stop loading the page)
    // and we then say "it is all fine - true" to the callback
    event.preventDefault();
    callback(true);
  });
}

Vapor cloud deploy failed: Sockets Error: Failed trying to connect to http://redis.eu.vapor.cloud:6379

I set up a Vapor project manually with Swift Package Manager, following the documentation.
It builds and runs successfully on my local machine, for both debug and release builds.
But it fails to deploy to Vapor Cloud:
....
....
env: development
db: none
replicas: 1
replica size: free
branch: development
build: clean
Creating deployment [Done]
Connecting to build logs ...
Waiting in Queue [Failed]
Error: Sockets Error: Failed trying to connect to http://redis.eu.vapor.cloud:6379
Identifier: Sockets.SocketsError.connectFailed
Here are some possible causes:
- The hostname or port is not valid
Does anyone know what caused this error?
I opened an issue on GitHub and got this response:
Hi, it's usually caused by either a firewall or a proxy preventing the connection to our Redis cluster, which provides log feedback to the terminal.
We have seen it a couple of times, and are working on allowing log output to be viewed in the dashboard for these kinds of situations :)
Still cannot find the solution.
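One way to confirm whether a firewall or proxy on your network is the culprit is to test the connection to the Redis endpoint from the same machine, for example with netcat (shown here as a suggestion, using the host and port from the error message):
nc -vz redis.eu.vapor.cloud 6379
If that connection is refused or times out while other outbound traffic works, the block is on your network (or a corporate proxy) rather than in the Vapor toolbox itself.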

"host not allowed" error when deploying a play framework application to Amazon AWS with Boxfuse

I am trying to deploy a simple web application, written using Play Framework in Scala, to Amazon Web Services.
The web application runs OK in development mode and production mode on my local machine, and I've changed its default port to 80.
I used Boxfuse to deploy to AWS, as suggested.
I first ran "sbt dist"
and then "boxfuse run -env=prod".
Things went well as desired. The image was fused and pushed to AWS, the AMI was created, the instance was started and my application was running.
i-0f696ff22df4a2b71 => 2017-07-13 01:28:23.940 [info] play.api.Play - Application started (Prod)
Then came the error message:
WARNING: Healthcheck (http://35.156.38.90/) returned 400 instead of 200. Retrying for the next 300 seconds ...
i-0f696ff22df4a2b71 => 2017-07-13 01:28:24.977 [info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:80
i-0f696ff22df4a2b71 => 2017-07-13 01:28:25.512 [warn] p.f.h.AllowedHostsFilter - Host not allowed: 35.156.38.90
The instance was terminated after repeated retries over 3 minutes. It gave a warning like:
Ensure your application responds with an HTTP 200 at / on port 80
But I've made sure the application responds on my local machine; I tried both Windows and Ubuntu, and everything works well.
Also, when running "boxfuse run" on my local machine, I can connect to it using "http://localhost", but I still get the error.
Hope someone with experience can give me some suggestions. Thanks in advance.
PS: not sure if it's relevant, but I added these settings to application.conf:
http {
  address = 0.0.0.0
  port = 80
}
Judging from the error message, it looks like the problem might be related to play.filters.hosts.allowed not being set up in application.conf. The filter lets you configure which hosts can access your application. More details about the Play filter are available here.
Here's a configuration example:
play.filters.hosts {
  allowed = ["."]
}
Note that allowed = ["."] matches all hosts hence would not be recommended in a production environment.
As stated in the Boxfuse Play Documentation:
If your application uses the allowed hosts filter you must ensure play.filters.hosts.allowed in application.conf allows connections from anywhere as this filter otherwise causes ELB healthchecks to fail. For example:
play.filters.hosts {
  allowed = ["."]
}
More info in the official Play documentation.
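For completeness, if you were not constrained by the ELB health check described above, a stricter configuration (a sketch only; the domain below is a placeholder for your own) would list the hosts you actually serve:
play.filters.hosts {
  # A leading dot matches the domain and all of its subdomains
  allowed = [".example.com", "localhost"]
}
With Boxfuse's ELB health checks addressing the instance by IP, though, such a strict list would fail the check, which is why the documentation quoted above recommends allowing connections from anywhere in this setup.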

Configuring Meteor deployment to Google Compute Engine VM using mupx

Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't see any apparent issues when running
sudo mupx deploy
My mup.json is as follows
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],
  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,
  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,
  // WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
  "nodeVersion": "0.10.36",
  // Install PhantomJS in the server
  "setupPhantom": true,
  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance in Shippable CI
  "enableUploadProgressBar": true,
  // Application name (No spaces)
  "appName": "simonlayfield",
  // Location of app (local directory)
  "app": ".",
  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },
  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template; however, the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can find out specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates; please rerun your deployment.
Also, as a side note: I always specify a port in my mup / mupx files:
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
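As for checking logs on the server: mupx runs the app inside a Docker container, so (assuming the default setup, where the container is named after appName) you should be able to tail the app's output either through mupx from your local machine or directly over ssh on the VM:
sudo mupx logs -f
# or, on the VM itself:
sudo docker logs -f simonlayfield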

strongloop slc deploy env var complications

I've been deploying a loopback app via a custom init.d/app.conf script, using slc run --detach --cluster "cpu", but want to move to using strong-pm, as recommended.
But I've come across some limitations and am looking for any guidance on how to replicate the setup with which I'm currently familiar.
Currently I set app-specific configuration inside server/config.local.js and server/datasources.local.js, most importantly the PORT on which the app should listen for connections. This works perfectly using slc run for local development and remote deploying for staging; all I do is set different env vars for each distinct app:
datasources.local.js:
module.exports = {
  "mysqlDS": {
    name: "mysqlDS",
    connector: "mysql",
    host: process.env.PROTEUS_MYSQL_HOST,
    port: process.env.PROTEUS_MYSQL_PORT,
    database: process.env.PROTEUS_MYSQL_DB,
    username: process.env.PROTEUS_MYSQL_USER,
    password: process.env.PROTEUS_MYSQL_PW
  }
};
config.local.js:
module.exports = {
  port: process.env.PROTEUS_API_PORT
};
When I deploy using strong-pm, I am not able to control this port, and it always gets set to 3000+N, where N is just incremented based on the service ID assigned to the app when it's deployed.
So even when I deploy and then set env using
slc ctl -C http://localhost:8701 env-set proteus-demo PROTEUS_API_PORT=3033 PROTEUS_DB=demo APP_DOMAIN=demo.domain.com
I see that strong-pm completely ignores PROTEUS_API_PORT when it redeploys with the new env vars:
ENV has changed, restarting
Service "1" listening on 0.0.0.0:3001
Restarting next commit Runner: commit 1/deploy/default/demo-deploy
Start Runner: commit 1/deploy/default/demo-deploy
Request (status) of current Runner: child 20066 commit 1/deploy/default/demo-deploy
Request {"cmd":"status"} of Runner: child 20066 commit 1/deploy/default/demo-deploy
3001! Not 3033 as I want, and as specified in config.local.js. Is there a way to control this explicitly? I don't want to have to run an slc inspection command to determine the port for my nginx upstream block each time I deploy an app. It would be awesome to be able to specify the listen PORT by service name, too.
FWIW, this is on an AWS instance that will host demo and staging apps pointing to separate DBs and listening on different PORTs.
strong-pm only sets a PORT environment variable, which the app is responsible for honouring.
Based on loopback-boot/lib/executor:109, it appears that loopback actually prefers the PORT environment variable over the value in the config file. In that case it seems your best bet is to either:
- pass a port in to app.listen() yourself
- set one of the higher-priority environment variables such as npm_config_port (which would normally be set via npm start --port 1234).
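For the first option, here is a minimal sketch of what that could look like in server/server.js, assuming a standard LoopBack 2.x layout and reusing the PROTEUS_API_PORT variable from the question (this is not a strong-pm feature, just an explicit override in app code):
// server/server.js (sketch)
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

boot(app, __dirname, function (err) {
  if (err) throw err;

  // Prefer the app-specific variable over whatever PORT strong-pm injects
  var port = process.env.PROTEUS_API_PORT || app.get('port');

  app.listen(port, function () {
    app.emit('started');
    console.log('Web server listening on port %d', port);
  });
});
This keeps the nginx upstream port stable across redeploys, at the cost of diverging slightly from the generated server.js.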