AWS CDK Glue Job + Trigger created but won't run - aws-cloudformation

I have the following AWS CDK configuration in TypeScript (abridged):
const jobProps = {
  command: {
    name: 'glueetl',
    pythonVersion: '3',
    scriptLocation: `s3://${s3bucket.bucketName}/${this.scriptName}`,
  },
  connections: { connections: [connectionName] },
  defaultArguments: { },
  description: idEnv + '-job',
  executionProperty: {
    maxConcurrentRuns: 1,
  },
  glueVersion: '2.0',
  maxRetries: 0,
  name: idEnv + '-job',
  numberOfWorkers: 2,
  role: glueServiceRole.roleArn,
  timeout: 180, // minutes
  workerType: 'Standard',
};
const job = new CfnJob(this, idEnv, jobProps);
const trigger = new CfnTrigger(this, idEnv + '-trigger', {
  type: 'SCHEDULED',
  description: 'Scheduled run for ' + job.name,
  schedule: this.JOB_SCHEDULE,
  actions: [
    {
      jobName: job.name,
    },
  ],
});
The trigger is created, it is visible in the Console, and it is linked to the Job. But it just won't run (a manual Job run is OK). What am I missing?

You need to add "startOnCreation: true" to the CfnTrigger props, so the trigger status will be enabled by default.
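For illustration, a minimal sketch of the trigger from the question with that property added (everything else left as in the original snippet):
const trigger = new CfnTrigger(this, idEnv + '-trigger', {
  type: 'SCHEDULED',
  description: 'Scheduled run for ' + job.name,
  schedule: this.JOB_SCHEDULE,
  startOnCreation: true, // without this, a SCHEDULED trigger is created but never activated
  actions: [
    {
      jobName: job.name,
    },
  ],
});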

Related

Running Sequelize migration with umzug through GitHub CI/CD

I'm using Sequelize with umzug. Migrations work locally, but when I create a CI job for them, it cannot find the necessary modules.
I have a migrator.js file:
const { migrator } = require('./iumzug.js');
migrator.runAsCLI()
And an iumzug.ts file as well, which is configured like this:
const { Sequelize } = require('sequelize');
const { envVar } = require('./src/utilities/env-var')
const { Umzug, SequelizeStorage } = require("umzug")

const sequelize = new Sequelize({
  database: envVar.DB_DATABASE,
  host: envVar.DB_HOST,
  port: 5432,
  schema: ["TEST"].includes(envVar.NODE_ENV) ? 'test' : 'public',
  username: envVar.DB_USERNAME,
  password: envVar.DB_PASSWORD,
  dialect: 'postgres',
  ssl: true,
  dialectOptions: {
    ssl: {
      require: true,
    },
  },
});

const migrator = new Umzug({
  migrations: {
    glob: ["./src/database/*.ts", { cwd: __dirname }],
    resolve: ({ name, path, context }) => {
      // eslint-disable-next-line @typescript-eslint/no-var-requires
      const migration = require(path);
      return {
        // adjust the parameters Umzug will
        // pass to migration methods when called
        name,
        up: async () => migration.up(context, Sequelize),
        down: async () => migration.down(context, Sequelize)
      };
    }
  },
  context: sequelize.getQueryInterface(),
  storage: new SequelizeStorage({
    sequelize,
    modelName: "migration_meta"
  }),
  logger: console
});

module.exports = { migrator }
I created a migration job in my GitHub workflow YAML file as follows:
migrations:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - uses: actions/checkout@v3
    - name: migrations step
      run: |
        node migrator.js up
When I run the GitHub Action, I get this error.
I'm looking for alternatives / directions to fix it.
Let me know if I need to add any more code / pictures of the process.

Tanka/Jsonnet - How to loop over templates/imports?

I am trying to create multiple Grafana instances with slightly different config files with Tanka. The following works, as long as the configmap.grafana_ini is in place. But this becomes very unreadable with a growing config, so I am looking for a way to move the configmaps to their own file and import them.
But if I move that to its own file and use an import/importstr, I get a "computed imports are not allowed" error, or the instance variable becomes unknown.
local tanka = import 'github.com/grafana/jsonnet-libs/tanka-util/main.libsonnet';
local helm = tanka.helm.new(std.thisFile);
local k = import 'github.com/grafana/jsonnet-libs/ksonnet-util/kausal.libsonnet';
(import 'config.libsonnet') +
{
local configMap = k.core.v1.configMap,
local container = k.core.v1.container,
local stateful = k.apps.v1.statefulSet,
local ingrdatasourcesess = k.networking.v1.ingress,
local port = k.core.v1.containerPort,
local service = k.core.v1.service,
local pvc = k.core.v1.persistentVolumeClaim,
local ports = [port.new('http', 3000)],
grafana: {
g(instance):: {
local this = self,
deployment:
stateful.new(
name='grafana-' + instance.handle,
replicas=1,
containers=[
container.new(
name='grafana-' + instance.handle,
image=$._config.grafana.image + instance.theme + ':' + $._config.grafana.version
)
+ container.withPorts(ports),
],
)
+ stateful.metadata.withLabels({ 'io.kompose.service': 'grafana-' + instance.handle })
+ stateful.configMapVolumeMount(this.configMaps.grafana_ini, '/etc/grafana/grafana.ini', k.core.v1.volumeMount.withSubPath('grafana.ini'))
+ stateful.spec.withServiceName('grafana-' + instance.handle)
+ stateful.spec.selector.withMatchLabels({ 'io.kompose.service': 'grafana-' + instance.handle })
+ stateful.spec.template.metadata.withLabels({ 'io.kompose.service': 'grafana-' + instance.handle })
+ stateful.spec.template.spec.withImagePullSecrets({
name: 'registry.gitlab.com',
})
+ stateful.spec.template.spec.withRestartPolicy('Always'),
service:
k.util.serviceFor(self.deployment)
+ service.mixin.spec.withType('ClusterIP'),
configMaps: {
grafana_ini:
configMap.new(
'grafana-ini-' + instance.handle, {
'grafana.ini': std.manifestIni(
{
main: {
app_mode: 'production',
instance_name: instance.handle,
},
sections: {
server: {
protocol: 'http',
http_port: '3000',
domain: 'dashboard.' + $._config.ingress.realm + '.' + $._config.ingress.tld + '/' + instance.handle + '/',
root_url: $._config.ingress.protocol + 'dashboard.' + $._config.ingress.realm + '.' + $._config.ingress.tld + '/' + instance.handle + '/',
serve_from_sub_path: true,
},
},
}
),
}
),
},
},
deploys: [self.g(instance) for instance in $._config.grafana.instances],
},
}
Here is the config-part:
{
_config+:: {
grafana+: {
image: 'registry.gitlab.com/xxx/frontend/grafana/',
version: 'v7.3.7',
client_secret: 'xyz',
adminusername: 'admin',
adminpassword: 'admin',
instances: [
{
name: "xxx's Grafana",
handle: 'xyz',
theme: 'xxx',
alerting: 'false',
volume_size: '200M',
default: true,
allow_embedding: false,
public: 'false',
secret_key: 'xxxx',
email: {
host: '',
user: '',
password: '',
from_address: '',
from_name: '',
},
datasources: [
{
name: 'xxx Showcase',
type: 'influxdb',
access: 'proxy',
url: 'http://influx:8086',
database: 'test123',
user: 'admin',
password: 'admin',
editable: false,
isDefault: false,
version: 1
},
],
dashboards: [
{
src: 'provisioning/dashboards/xxx_showcase_dashboard.json',
datasource: 'xxx Showcase',
title: 'xxx office building',
template: true,
},
],
},
],
},
},
}
EDITED, as suggested by 2nd post.
greetings,
strowi
Thanks for clarifying; find below a possible solution. Note I trimmed down parts of your original files for better readability.
I think the main highlight here is the iniFile() function, so that we can explicitly pass (config, instance) to it.
main.jsonnet
local tanka = import 'github.com/grafana/jsonnet-libs/tanka-util/main.libsonnet';
local helm = tanka.helm.new(std.thisFile);
local k = import 'github.com/grafana/jsonnet-libs/ksonnet-util/kausal.libsonnet';
(import 'config.libsonnet') +
{
local configMap = k.core.v1.configMap,
local container = k.core.v1.container,
local stateful = k.apps.v1.statefulSet,
local ingrdatasourcesess = k.networking.v1.ingress,
local port = k.core.v1.containerPort,
local service = k.core.v1.service,
local pvc = k.core.v1.persistentVolumeClaim,
local ports = [port.new('http', 3000)],
grafana: {
g(instance):: {
local this = self,
/* <snip...> */
configMaps: {
local inilib = import 'ini.libsonnet',
grafana_ini:
configMap.new(
'grafana-ini-' + instance.handle, inilib.iniFile($._config, instance)
),
},
},
deploys: [self.g(instance) for instance in $._config.grafana.instances],
},
}
config.libsonnet
{
_config+:: {
// NB: added below dummy ingress field
ingress:: {
realm:: 'bar',
tld:: 'foo.tld',
protocol:: 'tcp',
},
grafana+: {
image: 'registry.gitlab.com/xxx/frontend/grafana/',
version: 'v7.3.7',
client_secret: 'xyz',
adminusername: 'admin',
adminpassword: 'admin',
instances: [
{
name: "xxx's Grafana",
handle: 'xyz',
/* <snip...> */
},
],
},
},
}
ini.libsonnet
{
iniFile(config, instance):: {
'grafana.ini': std.manifestIni(
{
main: {
app_mode: 'production',
instance_name: instance.handle,
},
sections: {
server: {
protocol: 'http',
http_port: '3000',
domain: 'dashboard.' + config.ingress.realm + '.' + config.ingress.tld + '/' + instance.handle + '/',
root_url: config.ingress.protocol + 'dashboard.' + config.ingress.realm + '.' + config.ingress.tld + '/' + instance.handle + '/',
serve_from_sub_path: true,
},
},
}
),
},
}

Deploy a FargateService to an ECS cluster that lives within a different Stack (project)

1- I have a project core-infra that encompasses all the core infra related components (VPCs, Subnets, ECS Cluster, etc.)
2- I have microservice projects with independent stacks, each used for deployment
I want to deploy a FargateService from a microservice project stack A to the already existing ECS cluster living within the core-infra stack
Affected area/feature
Pulumi Service
ECS
Deploy microservice
FargateService
Pulumi github issue link
Pulumi Stack References are the answer here:
https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences
Your core-infra stack would output the ECS cluster ID and then stack B consumes that output so it can, for example, deploy an ECS service to the given cluster
(https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/).
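As a rough sketch (the output name ecsClusterId, the stack path org/core-infra, and the cluster variable are illustrative assumptions, not taken from the actual projects), the two sides could look like this:
// core-infra stack (producer): export the ECS cluster ID as a stack output
export const ecsClusterId = cluster.id;

// microservice stack (consumer): reference the core-infra stack and read that output
const infra = new pulumi.StackReference(`org/core-infra/${pulumi.getStack()}`);
const ecsClusterId = infra.getOutput('ecsClusterId');
// ...then pass ecsClusterId as the `cluster` of the aws.ecs.Service created in this stack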
I was able to deploy using aws classic.
PS: The setup is way more complex than with awsx; the docs and resources aren't exhaustive.
Now I have a few issues:
The load balancer isn't reachable and keeps loading forever
I don't have any logs in the CloudWatch LogGroup
Not sure how to use the LB Listener with the ECS service / not sure about the port mapping
Here is the complete code for reference (for people who are hustling), and I'd appreciate it if you could suggest any improvements/answers.
// Capture the EnvVars
const appName = process.env.APP_NAME;
const namespace = process.env.NAMESPACE;
const environment = process.env.ENVIRONMENT;
// Load the Deployment Environment config.
const configMapLoader = new ConfigMapLoader(namespace, environment);
const env = pulumi.getStack();
const infra = new pulumi.StackReference(`org/core-datainfra/${env}`);
// Fetch ECS Fargate cluster ID.
const ecsClusterId = infra.getOutput('ecsClusterId');
// Fetch DeVpc ID.
const deVpcId = infra.getOutput('deVpcId');
// Fetch DeVpc subnets IDS.
const subnets = ['subnet-aaaaaaaaaa', 'subnet-bbbbbbbbb'];
// Fetch DeVpc Security Group ID.
const securityGroupId = infra.getOutput('deSecurityGroupId');
// Define the Networking for our service.
const serviceLb = new aws.lb.LoadBalancer(`${appName}-lb`, {
internal: false,
loadBalancerType: 'application',
securityGroups: [securityGroupId],
subnets,
enableDeletionProtection: false,
tags: {
Environment: environment
}
});
const serviceTargetGroup = new aws.lb.TargetGroup(`${appName}-t-g`, {
port: configMapLoader.configMap.service.http.externalPort,
protocol: configMapLoader.configMap.service.http.protocol,
vpcId: deVpcId,
targetType: 'ip'
});
const http = new aws.lb.Listener(`${appName}-listener`, {
loadBalancerArn: serviceLb.arn,
port: configMapLoader.configMap.service.http.externalPort,
protocol: configMapLoader.configMap.service.http.protocol,
defaultActions: [
{
type: 'forward',
targetGroupArn: serviceTargetGroup.arn
}
]
});
// Create AmazonECSTaskExecutionRolePolicy
const taskExecutionPolicy = new aws.iam.Policy(
`${appName}-task-execution-policy`,
{
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [
{
Effect: 'Allow',
Action: [
'ecr:GetAuthorizationToken',
'ecr:BatchCheckLayerAvailability',
'ecr:GetDownloadUrlForLayer',
'ecr:BatchGetImage',
'logs:CreateLogStream',
'logs:PutLogEvents',
'ec2:AuthorizeSecurityGroupIngress',
'ec2:Describe*',
'elasticloadbalancing:DeregisterInstancesFromLoadBalancer',
'elasticloadbalancing:DeregisterTargets',
'elasticloadbalancing:Describe*',
'elasticloadbalancing:RegisterInstancesWithLoadBalancer',
'elasticloadbalancing:RegisterTargets'
],
Resource: '*'
}
]
})
}
);
// IAM role that allows Amazon ECS to make calls to the load balancer
const taskExecutionRole = new aws.iam.Role(`${appName}-task-execution-role`, {
assumeRolePolicy: JSON.stringify({
Version: '2012-10-17',
Statement: [
{
Effect: 'Allow',
Principal: {
Service: ['ecs-tasks.amazonaws.com']
},
Action: 'sts:AssumeRole'
},
{
Action: 'sts:AssumeRole',
Principal: {
Service: 'ecs.amazonaws.com'
},
Effect: 'Allow',
Sid: ''
},
{
Action: 'sts:AssumeRole',
Principal: {
Service: 'ec2.amazonaws.com'
},
Effect: 'Allow',
Sid: ''
}
]
}),
tags: {
name: `${appName}-iam-role`
}
});
new aws.iam.RolePolicyAttachment(`${appName}-role-policy`, {
role: taskExecutionRole.name,
policyArn: taskExecutionPolicy.arn
});
// New image to be pulled
const image = `${configMapLoader.configMap.service.image.repository}:${process.env.IMAGE_TAG}`;
// Set up Log Group
const awsLogGroup = new aws.cloudwatch.LogGroup(`${appName}-awslogs-group`, {
name: `${appName}-awslogs-group`,
tags: {
Application: `${appName}`,
Environment: 'production'
}
});
const serviceTaskDefinition = new aws.ecs.TaskDefinition(
`${appName}-task-definition`,
{
family: `${appName}-task-definition`,
networkMode: 'awsvpc',
executionRoleArn: taskExecutionRole.arn,
requiresCompatibilities: ['FARGATE'],
cpu: configMapLoader.configMap.service.resources.limits.cpu,
memory: configMapLoader.configMap.service.resources.limits.memory,
containerDefinitions: JSON.stringify([
{
name: `${appName}-fargate`,
image,
cpu: parseInt(
configMapLoader.configMap.service.resources.limits.cpu
),
memory: parseInt(
configMapLoader.configMap.service.resources.limits.memory
),
essential: true,
portMappings: [
{
containerPort: 80,
hostPort: 80
}
],
environment: configMapLoader.getConfigAsEnvironment(),
logConfiguration: {
logDriver: 'awslogs',
options: {
'awslogs-group': `${appName}-awslogs-group`,
'awslogs-region': 'us-east-2',
'awslogs-stream-prefix': `${appName}`
}
}
}
])
}
);
// Create a Fargate service task that can scale out.
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
name: `${appName}-fargate`,
cluster: ecsClusterId,
taskDefinition: serviceTaskDefinition.arn,
desiredCount: 5,
loadBalancers: [
{
targetGroupArn: serviceTargetGroup.arn,
containerName: `${appName}-fargate`,
containerPort: configMapLoader.configMap.service.http.internalPort
}
],
networkConfiguration: {
subnets
}
});
// Export the Fargate Service Info.
export const fargateServiceName = fargateService.name;
export const fargateServiceUrl = serviceLb.dnsName;
export const fargateServiceId = fargateService.id;
export const fargateServiceImage = image;

assumed-role is not authorized to perform: route53:ListHostedZonesByName; Adding a Route53 Policy to a CodePipeline CodeBuildAction's Assumed Role

My goal is to create a website at subdomain.mydomain.com pointing to a CloudFront CDN distributing a Lambda running Express that's rendering an S3 website. I'm using AWS CDK to do this.
I have an error that says
[Error at /plants-domain] User: arn:aws:sts::413025517373:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-39a582bf-8b89-447e-a6b4-b7f7f13c9db1 is not authorized to perform: route53:ListHostedZonesByName
It means:
[Error at /plants-domain] - error in the stack called plants-domain
User: arn:aws:sts::1234567890:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-39a582bf-8b89-447e-a6b4-b7f7f13c9db is the ARN of the Assumed Role associated with my object in the plants-pipeline executing route53.HostedZone.fromLookup() (but which object is it??)
is not authorized to perform: route53:ListHostedZonesByName - the Assumed Role needs additional Route53 permissions
I believe this policy will permit the object in question to lookup the Hosted Zone:
const listHostZonesByNamePolicy = new IAM.PolicyStatement({
  actions: ['route53:ListHostedZonesByName'],
  resources: ['*'],
  effect: IAM.Effect.ALLOW,
});
The code using Route53.HostedZone.fromLookup() is in the first stack domain.ts. My other stack consumes the domain.ts template using CodePipelineAction.CloudFormationCreateUpdateStackAction (see below)
domain.ts
// The addition of this zone lookup broke CDK
const zone = route53.HostedZone.fromLookup(this, 'baseZone', {
  domainName: 'domain.com',
});
// Distribution I'd like to point my subdomain.domain.com to
const distribution = new CloudFront.CloudFrontWebDistribution(this, 'website-cdn', {
  // more stuff goes here
});
// Create the subdomain aRecord pointing to my distribution
const aRecord = new route53.ARecord(this, 'aliasRecord', {
  zone: zone,
  recordName: 'subdomain',
  target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
});
pipeline.ts
const pipeline = new CodePipeline.Pipeline(this, 'Pipeline', {
  pipelineName: props.name,
  restartExecutionOnUpdate: false,
});
// My solution to the missing AssumedRole synth error: Create a Role, add the missing Policy to it (and the Pipeline, just in case)
const buildRole = new IAM.Role(this, 'BuildRole', {
  assumedBy: new IAM.ServicePrincipal('codebuild.amazonaws.com'),
  path: '/',
});
const listHostZonesByNamePolicy = new IAM.PolicyStatement({
  actions: ['route53:ListHostedZonesByName'],
  resources: ['*'],
  effect: IAM.Effect.ALLOW,
});
buildRole.addToPrincipalPolicy(listHostZonesByNamePolicy);
pipeline.addStage({
  // This is the action that fails, when it calls `cdk synth`
  stageName: 'Build',
  actions: [
    new CodePipelineAction.CodeBuildAction({
      actionName: 'CDK',
      project: new CodeBuild.PipelineProject(this, 'BuildCDK', {
        projectName: 'CDK',
        buildSpec: CodeBuild.BuildSpec.fromSourceFilename('./aws/buildspecs/cdk.yml'),
        role: buildRole, // this didn't work
      }),
      input: outputSources,
      outputs: [outputCDK],
      runOrder: 10,
      role: buildRole, // this didn't work
    }),
    new CodePipelineAction.CodeBuildAction({
      actionName: 'Assets',
      // other stuff
    }),
    new CodePipelineAction.CodeBuildAction({
      actionName: 'Render',
      // other stuff
    }),
  ]
})
pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    // This is the action calling the compiled domain stack template
    new CodePipelineAction.CloudFormationCreateUpdateStackAction({
      actionName: 'Domain',
      templatePath: outputCDK.atPath(`${props.name}-domain.template.json`),
      stackName: `${props.name}-domain`,
      adminPermissions: true,
      runOrder: 50,
      role: buildRole, // this didn't work
    }),
    // other actions
  ]
});
With the above configuration, unfortunately, I still receive the same error:
[Error at /plants-domain] User: arn:aws:sts::413025517373:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-957b18fb-909d-4e22-94f0-9aa6281ddb2d is not authorized to perform: route53:ListHostedZonesByName
With the Assumed Role ARN, is it possible to track down the object missing permissions? Is there another way to solve my IAM/AssumedUser role problem?
Here is the answer from the official docs: https://docs.aws.amazon.com/cdk/api/latest/docs/pipelines-readme.html#context-lookups
TL;DR:
The pipeline by default cannot do lookups -> 2 options:
synth on a dev machine (make sure the dev has permissions)
add a policy for lookups
new CodePipeline(this, 'Pipeline', {
  synth: new CodeBuildStep('Synth', {
    input: // ...input...
    commands: [
      // Commands to load cdk.context.json from somewhere here
      '...',
      'npm ci',
      'npm run build',
      'npx cdk synth',
      // Commands to store cdk.context.json back here
      '...',
    ],
    rolePolicyStatements: [
      new iam.PolicyStatement({
        actions: ['sts:AssumeRole'],
        resources: ['*'],
        conditions: {
          StringEquals: {
            'iam:ResourceTag/aws-cdk:bootstrap-role': 'lookup',
          },
        },
      }),
    ],
  }),
});
Based on the error, it is the pipeline role that needs this permission (although it would also work at the stage or action level).
By default a new role is being created for the pipeline:
role?
Type: IRole (optional, default: a new IAM role will be created.)
The IAM role to be assumed by this Pipeline.
Instead, when you are constructing your pipeline, add the buildRole there:
const pipeline = new CodePipeline.Pipeline(this, 'Pipeline', {
  pipelineName: props.name,
  restartExecutionOnUpdate: false,
  role: buildRole
});
Based on your pipeline you never assigned the role to the relevant stage action according to the docs:
pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    // This is the action calling the compiled domain stack template
    new CodePipelineAction.CloudFormationCreateUpdateStackAction({
      ...
      role: buildRole, // this didn't work
    }),
    // other actions
  ]
});
Should be:
pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    // This is the action calling the compiled domain stack template
    new CodePipelineAction.CloudFormationCreateUpdateStackAction({
      ....
      deploymentRole: buildRole
    }),
  ]
});
Why it is deploymentRole instead of just role, no one knows.

Getting Protractor Tests to Run on SauceLabs

I am trying to launch some tests with protractor going to SauceLabs.
I have my SauceConnect up and running. I believe I have my protractor.config.js set up correctly, but when I run the tests on my machine with ng e2e --suite smoke, they just run on my local machine and don't go through the tunnel. Any suggestions? I have been following this "tutorial" and it has been going pretty well, but I am just not seeing anything going through the tunnel.
Here is my protractor.config.js file:
const baseUrl = '<BASEURL>';
const maxNumberOfInstances = process.env.NUMBER_OF_INSTANCES ? process.env.NUMBER_OF_INSTANCES : 1;
const reportPath = 'protractor/report';
const HtmlScreenshotReporter = require('protractor-jasmine2-screenshot-reporter');
const screenShotReporter = new HtmlScreenshotReporter({
dest: reportPath,
filename: 'artemis-e2e-report.html'
});
const SAUCELABS_USERNAME = '<SAUCEUSERNAME>';
const SAUCELABS_AUTHKEY = '<SAUCEKEY>';
const chromeArgs = process.env.IS_LOCAL ? ['--no-sandbox', '--test-type=browser', '--lang=en', '--window-size=1680,1050'] : ['--disable-gpu', '--no-sandbox', '--test-type=browser', '--lang=en', '--window-size=1680,1050'];
const browserCapabilities = [{
sauceUser: SAUCELABS_USERNAME,
sauceKey: SAUCELABS_AUTHKEY,
browserName: 'chrome',
tunnelIdentifier: '<SAUCETUNNEL>',
shardTestFiles: true,
maxInstances: maxNumberOfInstances,
platform: 'Windows 10',
version: '73.0',
screenResolution: '1280x1024',
chromeOptions: {
args: chromeArgs,
prefs: {
'credentials_enable_service': false,
'profile': {
'password_manager_enabled': false
},
download: {
prompt_for_download: false,
directory_upgrade: true,
default_directory: 'C:\\downloads\\'
},
},
},
loggingPrefs: {
browser: 'SEVERE'
},
}, ];
// Protractor config
exports.config = {
baseUrl: baseUrl,
directConnect: true,
allScriptsTimeout: 2 * 60 * 1000,
jasmineNodeOpts: {
defaultTimeoutInterval: 3 * 60 * 1000
},
getPageTimeout: 2 * 60 * 1000,
suites: {
smoke: 'protractor/smokeTests/*.scenario.ts',
},
multiCapabilities: browserCapabilities,
framework: 'jasmine2',
onPrepare: function () {
browser.waitForAngularEnabled(true);
require('ts-node').register({
project: 'protractor/tsconfig.json',
});
const jasmineReporters = require('jasmine-reporters');
const jUnitXMLReporter = new jasmineReporters.JUnitXmlReporter({
consolidateAll: false,
savePath: reportPath,
filePrefix: 'xmloutput'
});
const JasmineConsoleReporter = require('jasmine-console-reporter');
const consoleReporter = new JasmineConsoleReporter({
colors: 1,
cleanStack: 1,
verbosity: 4,
listStyle: 'indent',
activity: true,
emoji: true,
beep: true,
timeThreshold: {
ok: 10000,
warn: 15000,
ouch: 30000,
}
});
jasmine.getEnv().addReporter(jUnitXMLReporter);
jasmine.getEnv().addReporter(screenShotReporter);
jasmine.getEnv().addReporter(consoleReporter);
browser.get(browser.baseUrl);
},
beforeLaunch: function () {
return new Promise(function (resolve) {
screenShotReporter.beforeLaunch(resolve);
});
},
afterLaunch: function (exitCode) {
return new Promise(function (resolve) {
screenShotReporter.afterLaunch(resolve.bind(this, exitCode));
});
},
};
First of all, you are mentioning this:
"it is just running on my local machine and not going through the tunnel. Any suggestions"
This is not related to the tunnel, but related to:
You still have directConnect: true, remove it from your config.
You added the Sauce Labs credentials to your capabilities, but you should use them in your config file at the root level. Here's an example (it's written for TypeScript, but it should give you an idea about how to set up your config file); see also the sketch below. The tunnel identifier is correct, you only need to be sure that you are getting the correct tunnel id, as @fijiaaron mentioned.
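As a rough sketch of what that could look like (only the Sauce-related parts are shown, the credential values are placeholders, and the rest of your existing config stays as it is):
// protractor.config.js (excerpt, assumed layout)
exports.config = {
  // Sauce Labs credentials belong at the root level of the config
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,
  // directConnect: true,  // removed - it forces a local browser and bypasses Sauce Labs

  multiCapabilities: [{
    browserName: 'chrome',
    platform: 'Windows 10',
    version: '73.0',
    tunnelIdentifier: '<SAUCETUNNEL>', // for a named tunnel, this must match the name passed to `sc -i`
  }],
  // ...rest of your existing config (suites, framework, onPrepare, ...)
};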
Hope this helps
Where are you getting your tunnelIdentifier from?
You want to make sure:
The tunnel is running
You can access the tunnel from where you are testing
If you have a named tunnel (e.g. sc -i myTunnel) then "myTunnel" should be the tunnelIdentifier, not the tunnel id that is shown in the console output (i.e. not Tunnel ID: cdceac0e33db4d5fa44093e191dfdfb0)
If you have an unnamed tunnel then you should not need to specify a tunnelIdentifier for it to be used.
If you appear to be using the tunnel but cannot access your local environment, try a manual test session in Sauce Labs and select the tunnel to see if it works there.