I'm trying to add a custom GitLab runner. Here is my config.toml file:
concurrent = 1
check_interval = 0
log_level = "debug"

[session_server]
  session_timeout = 7200

[[runners]]
  name = "MyName"
  url = "MyUrl"
  id = 180
  token = "MyToken"
  token_obtained_at = 2022-09-07T11:19:22Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "custom"
  shell = "pwsh"
  builds_dir = "/builds"
  cache_dir = "/cache"
  [runners.custom]
    prepare_exec = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
    prepare_args = [ "C:\\GitLab-Runner\\prepare.ps1" ]
    prepare_exec_timeout = 200
    run_exec = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
    run_args = [ "C:\\GitLab-Runner\\run.ps1" ]
It executes these PowerShell scripts, but I also need it to run the script from my gitlab-ci.yml file, as happens with other executor types. It does not run that script, nor does it copy it or any other files to my PC; it only works with the ps1 scripts configured above. My gitlab-ci.yml file:
stages:
  - test

test:
  stage: test
  tags:
    - my-test
  script:
    - .\my-test.cmd
my-test.cmd:
echo Hello > C:\Temp\cmd-test.txt
How can I make the runner execute this script, or at least copy the files to my PC so I can run them manually with PowerShell?
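For reference, the custom executor's run stage receives two extra arguments: the path to a job script that the runner generates from .gitlab-ci.yml, and the stage name. run.ps1 has to execute that script itself; if it ignores its arguments, the job script (and the files it would create) never runs. A minimal sketch of run.ps1, with the pwsh invocation and error handling being my own assumptions:

# run.ps1 - minimal sketch, not a drop-in solution.
# GitLab calls: powershell.exe C:\GitLab-Runner\run.ps1 <script_path> <stage>
param(
    [string]$ScriptPath,  # path to the script generated from .gitlab-ci.yml
    [string]$Stage        # stage name, e.g. "build_script"
)
# shell = "pwsh" in config.toml means the generated script targets PowerShell,
# so run it with pwsh (assumed to be on PATH) and propagate the exit code.
& pwsh -NoProfile -File $ScriptPath
exit $LASTEXITCODE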
I have successfully created an EMR cluster using Terraform. The Terraform documentation shows how to submit a step to EMR as a JAR:
https://www.terraform.io/docs/providers/aws/r/emr_cluster.html#step-1
step {
  action_on_failure = "TERMINATE_CLUSTER"
  name              = "Setup Hadoop Debugging"

  hadoop_jar_step {
    jar  = "command-runner.jar"
    args = ["state-pusher-script"]
  }
}
whereas documentation for adding a PySpark script as a step is missing.
Does anyone have experience adding a PySpark script as an EMR step using Terraform?
A common way to do this is to copy a script from S3 and use command-runner.jar to execute the script. (I don't know that it's ideal...)
step = [
  {
    name              = "Copy script"
    action_on_failure = "CONTINUE"

    hadoop_jar_step {
      jar  = "command-runner.jar"
      args = ["aws", "s3", "cp", "s3://path/to/script.py", "/home/hadoop/"]
    }
  },
  {
    name              = "Run script"
    action_on_failure = "CONTINUE"

    hadoop_jar_step {
      jar = "command-runner.jar"
      # spark-submit, since a PySpark .py file is not a bash script
      args = ["spark-submit", "/home/hadoop/script.py"]
    }
  },
]
Here is a working example of using hadoop_jar_step.
step = [
  {
    action_on_failure = "TERMINATE_CLUSTER"
    name              = "Setup Hadoop Debugging"

    hadoop_jar_step = [
      {
        jar        = "command-runner.jar"
        args       = ["state-pusher-script"]
        main_class = ""
        properties = {}
      }
    ]
  }
]
Contains workaround for https://github.com/hashicorp/terraform-provider-aws/issues/20911
Change args to your spark-submit command.
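For example, to run a PySpark script that is already on the master node, the step could look like this (the script path is illustrative):

hadoop_jar_step = [
  {
    jar        = "command-runner.jar"
    args       = ["spark-submit", "/home/hadoop/script.py"]
    main_class = ""
    properties = {}
  }
]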
I have this BUILD file:
package(default_visibility = ["//visibility:public"])

load("@npm_bazel_typescript//:index.bzl", "ts_library")

ts_library(
    name = "lib",
    srcs = glob(
        include = ["**/*.ts"],
        exclude = ["**/*.spec.ts"],
    ),
    deps = [
        "//packages/enums/src:lib",
        "//packages/hello/src:lib",
        "@npm//faker",
        "@npm//@types/faker",
        "@npm//express",
        "@npm//@types/express",
    ],
)

load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "server",
    data = [":lib"],
    entry_point = ":index.ts",
)

load("@io_bazel_rules_docker//container:container.bzl", "container_push")

container_push(
    name = "push_server",
    image = ":server",
    format = "Docker",
    registry = "gcr.io",
    repository = "learning-bazel-monorepo/server",
    tag = "dev",
)

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "k8s_deploy",
    kind = "deployment",
    namespace = "default",
    template = ":server.yaml",
    images = {
        "deploy_server:do_not_delete": ":server",
    },
)
But when running the k8s_deploy rule I get this error:
INFO: Analyzed target //services/server/src:k8s_deploy (1 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //services/server/src:k8s_deploy up-to-date:
bazel-bin/services/server/src/k8s_deploy.substituted.yaml
bazel-bin/services/server/src/k8s_deploy
INFO: Elapsed time: 0.276s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
2019/12/22 07:45:14 Unable to publish images: unable to publish image deploy_server:do_not_delete
The lib, server and push_server rules work fine, so I don't know what the issue is; there is no specific error message.
A snippet out of my server.yaml file:
spec:
  containers:
  - name: server
    image: deploy_server:do_not_delete
You can try it yourself by running bazel run //services/server/src:k8s_deploy on this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/de898eb1bb4edf0e0b1b99c290ff7ab57db81988
Have you pushed images using this syntax before?
I'm used to using the full repository tag for both the server.yaml and the k8s_object images.
So, instead of just "deploy_server:do_not_delete", try "gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete".
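Applied to the BUILD file above, that would look something like this (untested sketch; the registry path mirrors what the push_server rule publishes):

k8s_object(
    name = "k8s_deploy",
    kind = "deployment",
    namespace = "default",
    template = ":server.yaml",
    images = {
        # Full registry path instead of the bare "deploy_server:do_not_delete".
        "gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete": ":server",
    },
)

The image: field in server.yaml would then reference the same full path.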
Bazel fails to deploy a sample Kubernetes deployment (deployment.yaml) to my k8s tenant.
I followed https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy and tried a sample deployment.yaml file to deploy the application to the k8s tenant. I already have a k8s tenant configured on the build machine. To deploy the application I executed:
bazel run //main:dev.create
But the bazel command fails with the error below:
[root@localhost t2]# bazel run //main:dev.create
Starting local Bazel server and connecting to it...
INFO: Analyzed target //main:dev.create (68 packages loaded, 6876 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /root/.cache/bazel/_bazel_root/5ad59170e5ff426844f68e5dd9f66fb3/sandbox
Target //main:dev.create up-to-date:
  bazel-bin/main/dev.create
INFO: Elapsed time: 33.497s, Critical Path: 2.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /usr/local/bin/kubectl --cluster=kubernetes --context= --user= create -f -
error: error parsing STDIN: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
This is my WORKSPACE file:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "io_bazel_rules_go",
remote = "https://github.com/bazelbuild/rules_go.git",
tag = "0.18.5"
)
git_repository(
name = "bazel_gazelle",
remote = "https://github.com/bazelbuild/bazel-gazelle.git",
tag = "0.17.0",
)
load("#io_bazel_rules_go//go:deps.bzl", "go_download_sdk","go_register_toolchains","go_rules_dependencies")
go_download_sdk(
name = "gosdk",
sdks = {
......
},
urls = [....],
)
go_register_toolchains(
"#//:gosdk",
)
go_rules_dependencies()
load("#bazel_gazelle//:deps.bzl", "gazelle_dependencies", "go_repository")
gazelle_dependencies()
git_repository(
name = "io_bazel_rules_docker",
commit = "e12e276a9a6ded09363a6c1f0de46c573bd6096c",
remote = "https://github.com/xxxxx/rules_docker.git",
)
load(
"#io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load("#io_bazel_rules_docker//container:container.bzl", "container_pull")
load(
"#io_bazel_rules_docker//go:image.bzl",
go_image_repos = "repositories",
)
go_image_repos()
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_k8s",
sha256 = "91fef3e6054096a8947289ba0b6da3cba559ecb11c851d7bdfc9ca395b46d8d8",
strip_prefix = "rules_k8s-0.1",
urls = ["https://github.com/bazelbuild/rules_k8s/releases/download/v0.1/rules_k8s-v0.1.tar.gz"],
)
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")
k8s_repositories()
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_defaults")
k8s_defaults(
name = "k8s_deploy",
kind = "deployment",
cluster = "kubernetes",
)
BUILD.bazel file:
load("#io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
go_binary(
name = "hello_go",
embed = [":go_default_library"],
visibility = ["//visibility:public"],
)
go_library(
name = "go_default_library",
srcs = ["main.go"],
)
load("#io_bazel_rules_docker//go:image.bzl", "go_image")
go_image(
name = "go-image",
base = ":test",
embed = [":go_default_library"],
)
load("#io_bazel_rules_docker//container:image.bzl", "container_image")
container_image(
name = "test",
base = "#go_image_base//image",
user = "101",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "dev",
kind = "deployment",
template = ":deployment.yaml",
cluster = "kubernetes",
images = {
"xxxxx.net/test/new:v1": ":go-image",
},
)
deployment.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: staging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: xxxxx.net/test/new:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 50051
On the same server, I have the kubeconfig file at /root/.kube/config.
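In case it helps with debugging: the exact YAML that gets piped to kubectl can be inspected first (the substituted file name is inferred from rules_k8s's usual output and may differ):

# Build the target, then look at the substituted template that is piped to kubectl:
bazel build //main:dev
cat bazel-bin/main/dev.substituted.yaml
# Optional parse check (on older kubectl the flag is just --dry-run):
kubectl create --dry-run=client -f bazel-bin/main/dev.substituted.yaml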
I have GitLab cloud connected to a Kubernetes cluster running on Google (GKE).
The cluster was created via Gitlab cloud.
I want to customise the config.toml because I want to fix the cache on k8s as suggested in this issue.
I found the config.toml configuration in the runner-gitlab-runner ConfigMap.
I updated the ConfigMap to contain this config.toml setup:
config.toml: |
  concurrent = 4
  check_interval = 3
  log_level = "info"
  listen_address = '[::]:9252'
  [[runners]]
    executor = "kubernetes"
    cache_dir = "/tmp/gitlab/cache"
    [runners.kubernetes]
      memory_limit = "1Gi"
      [runners.kubernetes.node_selector]
        gitlab = "true"
      [[runners.kubernetes.volumes.host_path]]
        name = "gitlab-cache"
        mount_path = "/tmp/gitlab/cache"
        host_path = "/home/core/data/gitlab-runner/data"
To apply the changes I deleted the runner-gitlab-runner-xxxx-xxx pod so a new one gets created with the updated config.toml.
However, when I look into the new pod, the /home/gitlab-runner/.gitlab-runner/config.toml now contains 2 [[runners]] sections:
listen_address = "[::]:9252"
concurrent = 4
check_interval = 3
log_level = "info"

[session_server]
  session_timeout = 1800

[[runners]]
  name = ""
  url = ""
  token = ""
  executor = "kubernetes"
  cache_dir = "/tmp/gitlab/cache"
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = ""
    namespace = ""
    namespace_overwrite_allowed = ""
    privileged = false
    memory_limit = "1Gi"
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    [runners.kubernetes.node_selector]
      gitlab = "true"
    [runners.kubernetes.volumes]
      [[runners.kubernetes.volumes.host_path]]
        name = "gitlab-cache"
        mount_path = "/tmp/gitlab/cache"
        host_path = "/home/core/data/gitlab-runner/data"

[[runners]]
  name = "runner-gitlab-runner-xxx-xxx"
  url = "https://gitlab.com/"
  token = "<my-token>"
  executor = "kubernetes"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "ubuntu:16.04"
    namespace = "gitlab-managed-apps"
    namespace_overwrite_allowed = ""
    privileged = true
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    [runners.kubernetes.volumes]
The file /scripts/config.toml contains the configuration exactly as I created it in the ConfigMap. So I suspect /home/gitlab-runner/.gitlab-runner/config.toml is somehow updated when the GitLab runner registers with GitLab cloud.
If changing the config.toml via the ConfigMap does not work, how should I change the configuration? I cannot find anything about this in GitLab or the GitLab documentation.
Inside the ConfigMap's entrypoint script, you can try appending the volume and the extra configuration parameters:
# Add docker volumes
cat >> /home/gitlab-runner/.gitlab-runner/config.toml << EOF
[[runners.kubernetes.volumes.host_path]]
name = "var-run-docker-sock"
mount_path = "/var/run/docker.sock"
EOF
I deployed the runner using a Helm chart; I guess you did the same. In the following link you will find more information about the approach I mention: https://gitlab.com/gitlab-org/gitlab-runner/issues/2578
If your pod is not able to start after appending the config, check the logs. When I tested the appending approach I had some errors like "Directory not Found", which turned out to be because I was appending at the wrong path; after fixing that, the runner worked fine.
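To confirm the appended block actually landed, you can print the file from the running pod (pod name pattern and namespace taken from the question; adjust to yours):

kubectl -n gitlab-managed-apps exec runner-gitlab-runner-xxxx-xxx -- \
  cat /home/gitlab-runner/.gitlab-runner/config.toml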
It seems to me you should be modifying config.template.toml (within your relevant ConfigMap, that is).
If you want to modify the existing config.toml in /home/gitlab-runner/.gitlab-runner, you need to set environment variables in the deployment. For example, this is the default set of variables when gitlab-runner has been installed by pressing the install button in GitLab:
Environment:
CI_SERVER_URL: http://git.example.com/
CLONE_URL:
RUNNER_REQUEST_CONCURRENCY: 1
RUNNER_EXECUTOR: kubernetes
REGISTER_LOCKED: true
RUNNER_TAG_LIST:
RUNNER_OUTPUT_LIMIT: 4096
KUBERNETES_IMAGE: ubuntu:16.04
KUBERNETES_PRIVILEGED: true
KUBERNETES_NAMESPACE: gitlab-managed-apps
KUBERNETES_POLL_TIMEOUT: 180
KUBERNETES_CPU_LIMIT:
KUBERNETES_CPU_LIMIT_OVERWRITE_MAX_ALLOWED:
KUBERNETES_MEMORY_LIMIT:
KUBERNETES_MEMORY_LIMIT_OVERWRITE_MAX_ALLOWED:
KUBERNETES_CPU_REQUEST:
KUBERNETES_CPU_REQUEST_OVERWRITE_MAX_ALLOWED:
KUBERNETES_MEMORY_REQUEST:
KUBERNETES_MEMORY_REQUEST_OVERWRITE_MAX_ALLOWED:
KUBERNETES_SERVICE_ACCOUNT:
KUBERNETES_SERVICE_CPU_LIMIT:
KUBERNETES_SERVICE_MEMORY_LIMIT:
KUBERNETES_SERVICE_CPU_REQUEST:
KUBERNETES_SERVICE_MEMORY_REQUEST:
KUBERNETES_HELPER_CPU_LIMIT:
KUBERNETES_HELPER_MEMORY_LIMIT:
KUBERNETES_HELPER_CPU_REQUEST:
KUBERNETES_HELPER_MEMORY_REQUEST:
KUBERNETES_HELPER_IMAGE:
Modify existing values or add new ones; they will appear in the correct section of config.toml.
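For instance, a hypothetical one-liner to change one of these values on the deployment (deployment name inferred from the pod names above):

kubectl -n gitlab-managed-apps set env deployment/runner-gitlab-runner \
  KUBERNETES_MEMORY_LIMIT=1Gi

After the pod restarts, the value should show up under [runners.kubernetes] in the generated config.toml.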
StarCluster seems to use IPython 0.13.1 by default. Is there a way to upgrade this to IPython 2.3.1? Can it be done via the config file? Or manually after the cluster is started?
Here is my config, with only minor security changes:
[global]
DEFAULT_TEMPLATE=iptemplate
REFRESH_INTERVAL=5
[aws info]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_region_name = us-west-2
aws_region_host = ec2.us-west-2.amazonaws.com
[keypair starcluster]
key_location = starcluster.pem
[plugin ipcluster]
SETUP_CLASS = starcluster.plugins.ipcluster.IPCluster
ENABLE_NOTEBOOK = True
NOTEBOOK_PASSWD = XXXX
[plugin ipclusterstop]
SETUP_CLASS = starcluster.plugins.ipcluster.IPClusterStop
[plugin ipclusterrestart]
SETUP_CLASS = starcluster.plugins.ipcluster.IPClusterRestartEngines
[plugin pypackages]
setup_class = starcluster.plugins.pypkginstaller.PyPkgInstaller
packages = scikit-learn, psutil, pandas
# Base configuration for IPython.parallel cluster
[cluster iptemplate]
KEYNAME = starcluster
CLUSTER_SIZE = 1
CLUSTER_USER = ipuser
CLUSTER_SHELL = bash
#REGION = us-east-1
NODE_IMAGE_ID = ami-706afe40 # REGION and NODE_IMAGE_ID go in pair
NODE_INSTANCE_TYPE = c1.xlarge # 8 CPUs
DISABLE_QUEUE = True # We don't need SGE, faster cluster startup
PLUGINS = pypackages, ipcluster
You can do it by updating setup.py. Add "ipython==2.3.1" to install_requires and rerun the setup command. It will update ipython to the version specified.
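Alternatively, since your config already loads the PyPkgInstaller plugin, you could try pinning the version in its package list (untested; this assumes the plugin passes the spec straight to pip):

[plugin pypackages]
setup_class = starcluster.plugins.pypkginstaller.PyPkgInstaller
packages = scikit-learn, psutil, pandas, ipython==2.3.1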