Scenario
I am attempting to run ask deploy to deploy an Alexa Skill from an ubuntu-20.04 release agent in Azure DevOps. I'm using an AWS Shell Script task, which, according to its description:
Runs a shell script in Bash, setting AWS credentials and region
information into the shell environment using the standard environment
keys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN and
AWS_REGION.
I have the credentials stored in DevOps as a Service Connection, so the task really does have secure access to those values. And since the task exposes my access key, secret, and region, I simply added a script to export the two remaining env vars needed for (I thought) successful execution:
export ASK_REFRESH_TOKEN="$(ASK_REFRESH_TOKEN)"
export ASK_VENDOR_ID="$(ASK_VENDOR_ID)"
Just to make sure it's working, I added ask smapi get-vendor-list to the script, and I get the expected output showing my vendor ID, name, and roles. Cool -- it must be working.
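For context, the inline script in the task ends up looking roughly like this (a sketch reconstructed from the release log below; the install step and exact paths are assumptions):
#!/bin/bash
set -e

# ASK CLI is installed globally on the hosted agent (its install output appears in the log below)
sudo npm install -g ask-cli

# AWS_* variables are injected by the AWS Shell Script task; the ASK_* values come from pipeline variables
export ASK_REFRESH_TOKEN="$(ASK_REFRESH_TOKEN)"
export ASK_VENDOR_ID="$(ASK_VENDOR_ID)"

# Work from the extracted build artifact
cd /home/vsts/work/r1/a/BuildArtifact/drop/projectname
pwd && ls -la

ask smapi get-vendor-list   # sanity check: the ASK credentials work
ask deploy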
Problem
Now, when I run ask deploy, I get the following error:
CliError: Skill package src is not found in ask-resources.json
and I don't seem to be able to circumvent it.
I've Already Tried...
Downloading the build artifact and running ask deploy locally. That works, so I know there's not an issue with the composition of the build or the skill package.
Updating the refresh token
Verifying that the command is invoked from the correct directory
Verifying that the directory contains the right files (see the sketch after this list)
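That last check amounts to something like the following, run from the artifact directory (a sketch; jq availability on the hosted agent is an assumption):
cd /home/vsts/work/r1/a/BuildArtifact/drop/projectname
jq -r '.profiles.default.skillMetadata.src' ask-resources.json   # prints ./skill-package
test -d ./skill-package && echo "skill package present"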
Question
How can I solve that error and successfully ask deploy from a release agent?
Reference
I'm on ASK-CLI v2.16.0
Here's my ask-resources.json in case that's useful. The error mentions this file.
{
  "askcliResourcesVersion": "2020-03-31",
  "profiles": {
    "default": {
      "skillMetadata": {
        "src": "./skill-package"
      },
      "code": {
        "default": {
          "src": "./lambda"
        }
      },
      "skillInfrastructure": {
        "userConfig": {
          "runtime": "nodejs12.x",
          "handler": "index.handler",
          "templatePath": "./infrastructure/cfn-deployer/skill-stack.yaml",
          "awsRegion": "us-east-1"
        },
        "type": "@ask-cli/cfn-deployer"
      }
    }
  }
}
Release job definition:
Release Log:
2020-09-29T12:27:05.4760730Z ==============================================================================
2020-09-29T12:27:05.6858528Z Configuring credentials for task
2020-09-29T12:27:05.6863319Z ...configuring AWS credentials from service endpoint 'redacted'
2020-09-29T12:27:05.6863708Z ...endpoint defines standard access/secret key credentials
2020-09-29T12:27:05.6879415Z Configuring region for task
2020-09-29T12:27:05.6882489Z ...configured to use region us-east-1, defined in task.
2020-09-29T12:27:05.6917304Z [command]/usr/bin/bash /home/vsts/work/_temp/awsshellscript_2266.sh
2020-09-29T12:27:09.2056323Z npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
2020-09-29T12:27:15.3303791Z npm WARN deprecated har-validator@5.1.5: this library is no longer supported
2020-09-29T12:27:20.4277268Z /usr/local/bin/ask -> /usr/local/lib/node_modules/ask-cli/bin/ask.js
2020-09-29T12:27:20.4466577Z
2020-09-29T12:27:20.4475010Z > dtrace-provider@0.8.8 install /usr/local/lib/node_modules/ask-cli/node_modules/dtrace-provider
2020-09-29T12:27:20.4476929Z > node-gyp rebuild || node suppress-error.js
2020-09-29T12:27:20.4477558Z
2020-09-29T12:27:20.8448748Z gyp WARN EACCES current user ("nobody") does not have permission to access the dev dir "/root/.cache/node-gyp/12.18.4"
2020-09-29T12:27:20.8507717Z gyp WARN EACCES attempting to reinstall using temporary dev dir "/usr/local/lib/node_modules/ask-cli/node_modules/dtrace-provider/.node-gyp"
2020-09-29T12:27:20.8519281Z gyp WARN install got an error, rolling back install
2020-09-29T12:27:20.8535248Z gyp WARN install got an error, rolling back install
2020-09-29T12:27:20.8551818Z gyp ERR! configure error
2020-09-29T12:27:20.8555760Z gyp ERR! stack Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/ask-cli/node_modules/dtrace-provider/.node-gyp'
2020-09-29T12:27:20.8559344Z gyp ERR! System Linux 5.4.0-1025-azure
2020-09-29T12:27:20.8560532Z gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
2020-09-29T12:27:20.8571334Z gyp ERR! cwd /usr/local/lib/node_modules/ask-cli/node_modules/dtrace-provider
2020-09-29T12:27:20.8572691Z gyp ERR! node -v v12.18.4
2020-09-29T12:27:20.8573700Z gyp ERR! node-gyp -v v5.1.0
2020-09-29T12:27:20.8574490Z gyp ERR! not ok
2020-09-29T12:27:20.9460739Z
2020-09-29T12:27:20.9462350Z > ask-cli@2.16.1 postinstall /usr/local/lib/node_modules/ask-cli
2020-09-29T12:27:20.9463103Z > node postinstall.js
2020-09-29T12:27:20.9463477Z
2020-09-29T12:27:20.9839474Z
2020-09-29T12:27:20.9840246Z ================================================================================
2020-09-29T12:27:20.9844532Z ASK CLI collects telemetry to better understand customer needs. You can
2020-09-29T12:27:20.9845738Z OPT OUT and disable telemetry by setting the 'share_usage' key to 'false'
2020-09-29T12:27:20.9846574Z in '~/.ask/cli_config'.
2020-09-29T12:27:20.9846928Z
2020-09-29T12:27:20.9847554Z Learn more: https://developer.amazon.com/docs/alexa/smapi/ask-cli-telemetry.html
2020-09-29T12:27:20.9848513Z ================================================================================
2020-09-29T12:27:20.9848861Z
2020-09-29T12:27:20.9933073Z + ask-cli@2.16.1
2020-09-29T12:27:20.9933710Z added 233 packages from 243 contributors in 13.497s
2020-09-29T12:27:21.2144126Z /home/vsts/work/r1/a/BuildArtifact/drop/projectname
2020-09-29T12:27:21.2154300Z total 44
2020-09-29T12:27:21.2155074Z drwxr-xr-x 6 vsts docker 4096 Sep 29 12:27 .
2020-09-29T12:27:21.2155522Z drwxr-xr-x 3 vsts docker 4096 Sep 29 12:27 ..
2020-09-29T12:27:21.2156597Z drwxr-xr-x 2 vsts docker 4096 Sep 29 12:27 .ask
2020-09-29T12:27:21.2157145Z -rw-r--r-- 1 vsts docker 46 Sep 29 12:27 .gitignore
2020-09-29T12:27:21.2157583Z -rw-r--r-- 1 vsts docker 11358 Sep 29 12:27 LICENSE.txt
2020-09-29T12:27:21.2158041Z -rw-r--r-- 1 vsts docker 537 Sep 29 12:27 ask-resources.json
2020-09-29T12:27:21.2158482Z drwxr-xr-x 3 vsts docker 4096 Sep 29 12:27 infrastructure
2020-09-29T12:27:21.2158917Z drwxr-xr-x 3 vsts docker 4096 Sep 29 12:27 lambda
2020-09-29T12:27:21.2159407Z drwxr-xr-x 4 vsts docker 4096 Sep 29 12:27 skill-package
2020-09-29T12:27:22.9397766Z {
2020-09-29T12:27:22.9398441Z "vendors": [
2020-09-29T12:27:22.9398789Z {
2020-09-29T12:27:22.9399540Z "id": "***",
2020-09-29T12:27:22.9399972Z "name": "redacted",
2020-09-29T12:27:22.9400496Z "roles": [
2020-09-29T12:27:22.9400870Z "ROLE_DEVELOPER"
2020-09-29T12:27:22.9401186Z ]
2020-09-29T12:27:22.9401481Z }
2020-09-29T12:27:22.9402201Z ]
2020-09-29T12:27:22.9402497Z }
2020-09-29T12:27:23.9621180Z [Error]: CliError: Skill package src is not found in ask-resources.json.
Related
I've been trying to install a custom pack on a single-node K8s cluster using these links:
https://github.com/StackStorm/st2packs-dockerfiles/
https://github.com/stackstorm/stackstorm-ha
StackStorm installs successfully with the default dashboard, but when I try to build a custom pack and helm upgrade, it doesn't work.
Here are my StackStorm pack directory and image Dockerfile:
/home/manisha.tanwar/st2packs-dockerfiles # ll st2packs-image/packs/st2_chef/
total 28
drwxr-xr-x. 4 manisha.tanwar domain users 4096 Apr 28 16:11 actions
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 aliases
-rwxr-xr-x. 1 manisha.tanwar domain users 211 Apr 28 16:11 pack.yaml
-rwxr-xr-x. 1 manisha.tanwar domain users 65 Apr 28 16:11 README.md
-rwxr-xr-x. 1 manisha.tanwar domain users 293 Apr 28 17:47 requirements.txt
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 rules
drwxr-xr-x. 2 manisha.tanwar domain users 4096 Apr 28 16:11 sensors
/home/manisha.tanwar/st2packs-dockerfiles # cat st2packs-image/Dockerfile
ARG PACKS="file:///tmp/stackstorm-st2"
FROM stackstorm/st2packs:builder AS builder
COPY packs/st2_chef /tmp/stackstorm-st2/
RUN ls -la /tmp/stackstorm-st2
RUN git config --global http.sslVerify false
# Install custom packs
RUN /opt/stackstorm/st2/bin/st2-pack-install ${PACKS}
###########################
# Minimize the image size. Start with alpine:3.8,
# and add only packs and virtualenvs from builder.
FROM stackstorm/st2packs:runtime
The image is created using the command:
docker build -t st2_chef:v0.0.2 st2packs-image
And then I changed values.yaml as below:
packs:
  configs:
    packs.yaml: |
      ---
      # chef pack
  image:
    name: st2_chef
    tag: 0.0.1
    pullPolicy: Always
And then ran:
helm upgrade <release-name>
but it doesn't show anything on the dashboard or on the command line.
Please help; we are planning to upgrade from standalone StackStorm to StackStorm HA, and I need to get a POC done for that.
Thanks in advance!!
Got it working with the help of the community. Here's the link if anyone wants to follow:
https://github.com/StackStorm/stackstorm-ha/issues/128
I wasn't pushing the image to a Docker registry and referencing it from the Helm configuration.
Updated values.yaml as:
packs:
  # Custom StackStorm pack configs. Each record creates a file in '/opt/stackstorm/configs/'
  # https://docs.stackstorm.com/reference/pack_configs.html#configuration-file
  configs:
    core.yaml: |
      ---
  image:
    # Uncomment the following block to make the custom packs image available to the necessary pods
    #repository: your-remote-docker-registry.io
    repository: manishatanwar
    name: st2_nagios
    tag: "0.0.1"
    pullPolicy: Always
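To make that concrete, the flow is roughly as follows (a sketch; the image name, tag, and registry account come from the values above, while the chart reference is an assumption):
# Build and push the packs image somewhere the cluster can pull from
docker build -t manishatanwar/st2_nagios:0.0.1 st2packs-image
docker push manishatanwar/st2_nagios:0.0.1

# Roll the release so the pods pick up the new image
helm upgrade <release-name> stackstorm/stackstorm-ha -f values.yaml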
I'm running Postgres 9.4 on Ubuntu 16.04.3. Postgres was installed using apt-get, and I downloaded the sources and dependencies with apt-get too. I checked out the pg_rewind REL9_4_STABLE branch and built it. When I try to run my pg_rewind command I get the following:
The servers diverged at WAL position 0/6148D50 on timeline 1.
Rewinding from Last common checkpoint at 0/5000098 on timeline 1
SQL command failed
CREATE OR REPLACE FUNCTION rewind_support.rewind_support_ls_dir(text, boolean) RETURNS SETOF text AS '$libdir/pg_rewind_support' LANGUAGE C STRICT;
ERROR: could not access file "$libdir/pg_rewind_support": No such file or directory
Failure, exiting
I found the pg_rewind_support.so library file and placed it in the locations returned by pg_config --libdir and --pkglibdir, with no success. I even created a copy without the .so extension.
$ ls -la $(pg_config --pkglibdir)/pg_rewind_support*
-rw-r--r-- 1 root root 18768 Jul 16 17:59 /usr/lib/postgresql/9.4/lib/pg_rewind_support
-rw-r--r-- 1 root root 18768 Jul 16 17:50 /usr/lib/postgresql/9.4/lib/pg_rewind_support.so
$ ls -la $(pg_config --libdir)/pg_rewind_support*
-rw-r--r-- 1 root root 18768 Jul 16 17:59 /usr/lib/x86_64-linux-gnu/pg_rewind_support
-rw-r--r-- 1 root root 18768 Jul 16 17:44 /usr/lib/x86_64-linux-gnu/pg_rewind_support.so
Any ideas how I can make my apt-get-installed Postgres recognize the pg_rewind library? I don't want to end up running a full Postgres in production that was packaged and built in-house.
In working through this with the OP, the steps to build pg_rewind were:
Download the appropriate PostgreSQL 9.4.18 tarball, unpack.
Download pg_rewind, move into contrib/
Configure PostgreSQL to match the directory layout that Debian/Ubuntu uses:
./configure --libdir=/usr/lib/postgresql/9.4/lib --bindir=/usr/lib/postgresql/9.4/bin
Do a "make" on PostgreSQL.
Do a "make" and a "sudo make install" on pg_rewind.
pg_rewind must be installed on both the source system (so that the .so is available there) and on the target system (so the pg_rewind binary is available there).
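Condensed into commands, the sequence looks roughly like this (a sketch; the tarball name and working paths are illustrative):
tar xf postgresql-9.4.18.tar.gz && cd postgresql-9.4.18
# pg_rewind's REL9_4_STABLE sources were moved into contrib/pg_rewind in step 2
./configure --libdir=/usr/lib/postgresql/9.4/lib --bindir=/usr/lib/postgresql/9.4/bin
make                          # build the tree that pg_rewind links against
cd contrib/pg_rewind
make && sudo make install     # installs the pg_rewind binary and pg_rewind_support.so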
I'm trying to run a simple Spring Batch task on Spring Cloud Data Flow for YARN. Unfortunately, when I run it I get the following error in the ResourceManager UI:
Application application_1473838120587_5156 failed 1 times due to AM Container for appattempt_1473838120587_5156_000001 exited with exitCode: 1
For more detailed output, check application tracking page:http://ip-10-249-9-50.gc.stepstone.com:8088/cluster/app/application_1473838120587_5156Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1473838120587_5156_01_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
Appmaster.stderr gives more information:
Log Type: Appmaster.stderr
Log Upload Time: Mon Nov 07 12:59:57 +0000 2016
Log Length: 106
Error: Unable to access jarfile spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.BUILD-SNAPSHOT.jar
As for Spring Cloud Data Flow, this is what I'm running in dataflow-shell:
app register --type task --name simple_batch_job --uri https://github.com/spring-cloud/spring-cloud-dataflow-samples/raw/master/tasks/simple-batch-job/batch-job-1.0.0.BUILD-SNAPSHOT.jar
task create foo --definition "simple_batch_job"
task launch foo
It's really hard to tell why this error occurs. I'm sure the connection from the dataflow-server to YARN works fine, because some files (servers.yml, jars with jobs and utilities) were copied to the standard HDFS location (/dataflow); yet somehow they are inaccessible.
My servers.yml config:
logging:
  level:
    org.apache.hadoop: DEBUG
    org.springframework.yarn: DEBUG
maven:
  remoteRepositories:
    springRepo:
      url: https://repo.spring.io/libs-snapshot
spring:
  main:
    show_banner: false
  hadoop:
    fsUri: hdfs://HOST:8020
    resourceManagerHost: HOST
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: HOST:8030
  datasource:
    url: jdbc:h2:tcp://localhost:19092/mem:dataflow
    username: sa
    password:
    driverClassName: org.h2.Driver
I'll be glad to hear any information or spring-yarn tips & tricks to make this work.
PS: As the Hadoop environment I'm using Amazon EMR 5.0.
EDIT: Recursive listing from HDFS:
drwxrwxrwx - user hadoop 0 2016-11-07 15:02 /dataflow/apps
drwxrwxrwx - user hadoop 0 2016-11-07 15:02 /dataflow/apps/stream
drwxrwxrwx - user hadoop 0 2016-11-07 15:04 /dataflow/apps/stream/app
-rwxrwxrwx 3 user hadoop 121 2016-11-07 15:05 /dataflow/apps/stream/app/application.properties
-rwxrwxrwx 3 user hadoop 1177 2016-11-07 15:04 /dataflow/apps/stream/app/servers.yml
-rwxrwxrwx 3 user hadoop 60202852 2016-11-07 15:04 /dataflow/apps/stream/app/spring-cloud-deployer-yarn-appdeployerappmaster-1.0.0.RELEASE.jar
drwxrwxrwx - user hadoop 0 2016-11-04 14:22 /dataflow/apps/task
drwxrwxrwx - user hadoop 0 2016-11-04 14:24 /dataflow/apps/task/app
-rwxrwxrwx 3 user hadoop 121 2016-11-04 14:25 /dataflow/apps/task/app/application.properties
-rwxrwxrwx 3 user hadoop 2101 2016-11-04 14:24 /dataflow/apps/task/app/servers.yml
-rwxrwxrwx 3 user hadoop 60198804 2016-11-04 14:24 /dataflow/apps/task/app/spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.RELEASE.jar
drwxrwxrwx - user hadoop 0 2016-11-04 14:25 /dataflow/artifacts
drwxrwxrwx - user hadoop 0 2016-11-07 15:06 /dataflow/artifacts/cache
-rwxrwxrwx 3 user hadoop 12323493 2016-11-04 14:25 /dataflow/artifacts/cache/https-c84ea9dc0103a4754aeb9a28bbc7a4f33c835854-batch-job-1.0.0.BUILD-SNAPSHOT.jar
-rwxrwxrwx 3 user hadoop 22139318 2016-11-07 15:07 /dataflow/artifacts/cache/log-sink-rabbit-1.0.0.BUILD-SNAPSHOT.jar
-rwxrwxrwx 3 user hadoop 12590921 2016-11-07 12:59 /dataflow/artifacts/cache/timestamp-task-1.0.0.BUILD-SNAPSHOT.jar
There's clearly a mix of wrong versions: HDFS has spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.RELEASE.jar, while the error complains about spring-cloud-deployer-yarn-tasklauncherappmaster-1.0.0.BUILD-SNAPSHOT.jar.
Not sure how you got snapshots, unless you built the distribution manually?
I'd recommend picking 1.0.2 from http://cloud.spring.io/spring-cloud-dataflow-server-yarn. See "Download and Extract Distribution" in the reference docs. Also delete the old /dataflow directory from HDFS.
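Cleaning out the old state is standard HDFS CLI usage, e.g.:
# remove the stale /dataflow tree so the new server re-copies matching jars
hdfs dfs -rm -r /dataflow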
I have the following in my mup.json
// Install MongoDB in the server, does not destroy local MongoDB on future setup
"setupMongo": true,
// WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
"setupNode": true,
// WARNING: If nodeVersion omitted will setup 0.10.36 by default. Do not use v, only version number.
"nodeVersion": "0.10.43",
// Install PhantomJS in the server
"setupPhantom": false,
// Show a progress bar during the upload of the bundle to the server.
// Might cause an error in some rare cases if set to true, for instance in Shippable CI
"enableUploadProgressBar": true,
// Application name (No spaces)
"appName": "myapp",
// Location of app (local directory)
"app": "/path/to/myapp",
// Configure environment
"env": {
"PORT": 5555,
"ROOT_URL": "http://myserver.com"
},
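For reference, the log below comes from the usual mup flow, run from the directory containing mup.json:
mup setup    # provisions MongoDB and Node.js on the server per the flags above
mup deploy   # bundles the app, uploads it, and starts it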
I got this in the deploy log:
Started TaskList: Deploy app 'myapp' (linux)
[myserver.com] - Uploading bundle
[myserver.com] - Uploading bundle: SUCCESS
[myserver.com] - Setting up Environment Variables
[myserver.com] - Setting up Environment Variables: SUCCESS
[myserver.com] - Invoking deployment process
[myserver.com] x Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
eding commands with `sudo`, or if
npm WARN deprecated on Windows, run them from an Administrator prompt.)
npm WARN deprecated
npm WARN deprecated If you're running the version of npm bundled with
npm WARN deprecated Node.js 0.10 LTS, be aware that the next version of 0.10 LTS
npm WARN deprecated will be bundled with a version of npm@2, which has some small
npm WARN deprecated backwards-incompatible changes made to `npm run-script` and
npm WARN deprecated semver behavior.
npm WARN package.json meteor-dev-bundle@0.0.0 No description
npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
npm WARN package.json meteor-dev-bundle@0.0.0 No README data
js-bson: Failed to load c++ bson extension, using pure JS version
/usr/lib/node_modules/wait-for-mongo/bin/wait-for-mongo:14
throw err;
^
Error: TIMEOUTED_WAIT_FOR_MONGO
at null._onTimeout (/usr/lib/node_modules/wait-for-mongo/lib/waitForMongo.js:20:14)
at Timer.listOnTimeout [as ontimeout] (timers.js:121:15)
-----------------------------------STDOUT-----------------------------------
.1:27017]
wait-for-mongo: failed to connect to [127.0.0.1:27017]
wait-for-mongo: failed to connect to [127.0.0.1:27017]
wait-for-mongo: failed to connect to [127.0.0.1:27017]
wait-for-mongo: failed to connect to [127.0.0.1:27017]
When I ssh to the EC2 server, it looks like mongod is not started:
/opt/myapp$ ps -aux | grep mongod
ubuntu 9566 0.7 2.1 661524 22144 ? Sl 22:32 0:00 node /usr/bin/wait-for-mongo mongodb://127.0.0.1/myapp 300000
ubuntu 9569 0.0 0.0 10464 916 pts/0 S+ 22:33 0:00 grep --color=auto mongod
/opt/myapp$ mongo myapp
MongoDB shell version: 2.6.12
connecting to: myapp
2016-04-05T22:44:07.802+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-04-05T22:44:07.803+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
I'm not sure how to gain access to mongo on the server, given that I "handed over" responsibility to meteor-up with "setupMongo": true.
Any ideas would be appreciated.
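For what it's worth, the generic checks from an ssh session look like this (a sketch; the MongoDB service name and log path vary between releases):
sudo service mongod status || sudo service mongodb status
sudo netstat -tlnp | grep 27017          # is anything listening on the default port?
tail -n 50 /var/log/mongodb/mongod.log   # adjust the path to your install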
Update
I switched to mupx and set "deployCheckWaitTime": 300; mongo is now loading correctly. But now I am getting this error:
/bundle/bundle/programs/server/node_modules/fibers/future.js:278
throw(ex);
^
MongoError: driver is incompatible with this server version
at Object.Future.wait (/bundle/bundle/programs/server/node_modules/fibers/future.js:398:15)
at [object Object].MongoConnection._ensureIndex (packages/mongo/mongo_driver.js:790:1)
at [object Object].Mongo.Collection._ensureIndex (packages/mongo/collection.js:635:1)
It seems to be related to these issues:
https://github.com/arunoda/meteor-up/issues/841
https://github.com/meteor/meteor/issues/5809
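A quick way to confirm the mismatch those issues describe (a sketch; the server version also appears in the mongo shell output above):
mongo --eval 'db.version()'   # MongoDB server version (2.6.12 here)
meteor --version              # the Meteor release pins the bundled mongo driver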
SOLVED
I must have had incompatible versions on my system from using mup and also running meteor create; meteor mongo earlier on the EC2 server. I linked /opt to an empty folder on a different partition (I had space problems) and ran mupx setup/deploy again from scratch. This time it worked fine (with Meteor 1.3).
Strangely, I noticed there was no /opt/nodejs folder, which was probably a leftover from my first attempt with mup.
Also, the Docker daemon doesn't seem to be running, but I can connect to my MongoDB from an ssh session the mup way:
$ mongo myapp // works fine
$ docker exec -it mongodb mongo myapp
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
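Since mupx runs everything in containers, the daemon has to be up before that docker exec can work; a minimal check (a sketch, assuming Ubuntu's service commands):
sudo service docker status || sudo service docker start
docker ps                             # the 'mongodb' container should be listed once it's up
docker exec -it mongodb mongo myapp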
I'm using ThoughtWorks Go for a build pipeline, as shown below:
The "Test" stage fetches artefacts from the build stage and runs each of it's jobs in parallel (unit tests, integration test, acceptance tests, package) on different ages. However, each of those jobs is a shell script.
When those tasks are run on a different agent they are failing because permission is denied. Each job is a shell script, and when I ssh into the agent I can see it does not have executable permissions as shown below:
drwxrwxr-x 2 go go 4096 Mar 4 09:48 .
drwxrwxr-x 8 go go 4096 Mar 4 09:48 ..
-rw-rw-r-- 1 go go 408 Mar 4 09:48 aa_tests.sh
-rw-rw-r-- 1 go go 443 Mar 4 09:48 Dockerfile
-rw-rw-r-- 1 go go 121 Mar 4 09:48 run.sh
However, in the git repository they have executable permission and they seem to execute fine on the build agent that clones the git repository.
I solved the problem by executing the script with bash, e.g. "bash scriptname.sh" as the command for the task.
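For completeness, both ways of running a fetched script that has lost its executable bit (the script name is taken from the listing above):
bash ./aa_tests.sh                          # run via the interpreter; no exec bit needed
chmod +x ./aa_tests.sh && ./aa_tests.sh     # or restore the permission first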