I have this BUILD file:
package(default_visibility = ["//visibility:public"])

load("@npm_bazel_typescript//:index.bzl", "ts_library")

ts_library(
    name = "lib",
    srcs = glob(
        include = ["**/*.ts"],
        exclude = ["**/*.spec.ts"],
    ),
    deps = [
        "//packages/enums/src:lib",
        "//packages/hello/src:lib",
        "@npm//faker",
        "@npm//@types/faker",
        "@npm//express",
        "@npm//@types/express",
    ],
)

load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "server",
    data = [":lib"],
    entry_point = ":index.ts",
)

load("@io_bazel_rules_docker//container:container.bzl", "container_push")

container_push(
    name = "push_server",
    image = ":server",
    format = "Docker",
    registry = "gcr.io",
    repository = "learning-bazel-monorepo/server",
    tag = "dev",
)

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "k8s_deploy",
    kind = "deployment",
    namespace = "default",
    template = ":server.yaml",
    images = {
        "deploy_server:do_not_delete": ":server",
    },
)
But when running the k8s_deploy rule I get this error:
INFO: Analyzed target //services/server/src:k8s_deploy (1 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //services/server/src:k8s_deploy up-to-date:
bazel-bin/services/server/src/k8s_deploy.substituted.yaml
bazel-bin/services/server/src/k8s_deploy
INFO: Elapsed time: 0.276s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
2019/12/22 07:45:14 Unable to publish images: unable to publish image deploy_server:do_not_delete
The lib, server, and push_server targets work fine, so I don't know what the issue is, as there is no specific error message.
A snippet out of my server.yaml file:
spec:
  containers:
  - name: server
    image: deploy_server:do_not_delete
You can try it yourself by running bazel run //services/server/src:k8s_deploy on this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/de898eb1bb4edf0e0b1b99c290ff7ab57db81988
Have you pushed images using this syntax before?
I'm used to using the full repository tag for both the server.yaml and the k8s_object images.
So, instead of just "deploy_server:do_not_delete", try "gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete".
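For comparison, the images dict and the manifest would then line up like this (a sketch reusing the registry and repository from the push_server rule above; pick whatever image path you actually push to):

k8s_object(
    name = "k8s_deploy",
    kind = "deployment",
    namespace = "default",
    template = ":server.yaml",
    images = {
        # Full image reference, matching what server.yaml uses.
        "gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete": ":server",
    },
)

with the server.yaml snippet pointing at the same reference:

spec:
  containers:
  - name: server
    image: gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete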
I'm trying to add a custom GitLab runner. Here is the config.toml file:
concurrent = 1
check_interval = 0
log_level = "debug"

[session_server]
  session_timeout = 7200

[[runners]]
  name = "MyName"
  url = "MyUrl"
  id = 180
  token = "MyToken"
  token_obtained_at = 2022-09-07T11:19:22Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "custom"
  shell = "pwsh"
  builds_dir = "/builds"
  cache_dir = "/cache"
  [runners.custom]
    prepare_exec = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
    prepare_args = [ "C:\\GitLab-Runner\\prepare.ps1" ]
    prepare_exec_timeout = 200
    run_exec = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
    run_args = [ "C:\\GitLab-Runner\\run.ps1" ]
It executes the PowerShell scripts, but I also need it to run the script from my gitlab-ci.yml file, as happens with other types of executors. It does not run this file, and it does not copy this script or any other files to my PC; it only works with the ps1 scripts. My gitlab-ci.yml file:
stages:
  - test

test:
  stage: test
  tags:
    - my-test
  script:
    - .\my-test.cmd
my-test.cmd:
@echo Hello > C:\Temp\cmd-test.txt
How can I make the runner execute this script, or at least copy the files to my PC for manual running with PowerShell?
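In case it helps: with the custom executor, the runner does not execute the job script itself. Per the custom executor docs, it appends two arguments to run_exec/run_args, the path of the generated job script (which contains the commands from the script: section of gitlab-ci.yml) and the stage name, and your run.ps1 is expected to execute that script. A rough, untested sketch of what C:\GitLab-Runner\run.ps1 could do:

# run.ps1 -- minimal sketch of a custom-executor run stage.
# GitLab appends the generated job script's path and the stage name
# to run_args, so they arrive here as positional parameters.
param (
    [string]$ScriptPath,
    [string]$Stage
)

Write-Output "Running stage '$Stage' via $ScriptPath"

# Execute the generated script; this is what ends up running
# `.\my-test.cmd` from the gitlab-ci.yml above.
& $ScriptPath
exit $LASTEXITCODE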
Hi, I am trying to run a Kubeflow pipeline.
Two steps run in parallel and dump data to two different folders of a PVC; then a third component collects the data from those two folders, merges it, and dumps the merged data to another PVC folder.
Here are my pipeline codes:
vop = dsl.VolumeOp(
    name='no2-pvc',
    resource_name="no2-pvc",
    size="100Gi",
    modes=dsl.VOLUME_MODE_RWO
)

##LOADING POSITIVE DATA##
load_positive_data = dsl.ContainerOp(
    name='load_positive_data',
    image=load_positive_data_image,
    command="python",
    arguments=[
        "/app/load_positive_data.py",
    ],
    pvolumes={"/mnt/positive/": vop.volume}).apply(gcp.use_gcp_secret("user-gcp-sa"))

##LOADING NEGATIVE DATA##
load_negative_data = dsl.ContainerOp(
    name='load_negative_data',
    image=load_negative_data_image,
    command="python",
    arguments=[
        "/app/load_negative_data.py",
    ],
    pvolumes={"/mnt/negative/": vop.volume}).apply(gcp.use_gcp_secret("user-gcp-sa"))

##MERGING POSITIVE AND NEGATIVE DATA##
marge_pos_neg_data = dsl.ContainerOp(
    name='marge_pos_neg_data',
    image=marged_data_image,
    command="python",
    arguments=[
        "/app/merge_neg_pos.py"
    ],
    pvolumes={"/mnt/positive/": load_negative_data.pvolume, "/mnt/negative/": load_positive_data.pvolume}
    # volumes={'/mnt': vop.after(load_negative_data, load_positive_data)}
).apply(gcp.use_gcp_secret("user-gcp-sa")).after(load_positive_data, load_negative_data)

##PROCESSING MARGED DATA##
process_marged_data = dsl.ContainerOp(
    name='process_data',
    image=perpare_merged_data_image,
    command="python",
    arguments=[
        "/app/prepare_all_dataset.py"
    ],
    pvolumes={"/mnt/pos_neg": marge_pos_neg_data.pvolume}
).apply(gcp.use_gcp_secret("user-gcp-sa")).after(marge_pos_neg_data)
load-positive-data and load-negative-data are working fine but the marge-pos-neg-data step is giving the following error:
This step is in Error state with this message:
task 'no2-pipeline-x5kpd.marge-pos-neg-data'
errored: Pod "no2-pipeline-x5kpd-2954674781" is invalid:
spec.volumes[3].name: Duplicate value: "no2-pvc"
Hoping for your help to resolve the issue.
pvolumes={"/mnt/positive/": vop.volume} and pvolumes={"/mnt/negative/": vop.volume} were creating two separate volume entries backed by the same PVC, so the merge step, which mounts both upstream pvolumes, ended up with the duplicate "no2-pvc" volume name in its pod spec.
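A sketch of one possible fix: mount the shared volume once via vop.volume in the merge step instead of attaching both upstream pvolumes, and keep the ordering with .after(...). This assumes the load scripts write into positive/ and negative/ subfolders of the same PVC; untested:

marge_pos_neg_data = dsl.ContainerOp(
    name='marge_pos_neg_data',
    image=marged_data_image,
    command="python",
    arguments=["/app/merge_neg_pos.py"],
    # Mount the shared PVC once instead of attaching both upstream
    # pvolumes, which referenced the same "no2-pvc" volume twice.
    pvolumes={"/mnt/": vop.volume},
).apply(gcp.use_gcp_secret("user-gcp-sa")).after(load_positive_data, load_negative_data)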
I am trying to set up a simple Flink application from scratch using Bazel. I've bootstrapped the project by running
sbt new tillrohrmann/flink-project.g8
and after that I added some files in order for Bazel to take control of the building (i.e., migrate from sbt). This is what the WORKSPACE looks like:
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

skylib_version = "1.0.3"

http_archive(
    name = "bazel_skylib",
    sha256 = "1c531376ac7e5a180e0237938a2536de0c54d93f5c278634818e0efc952dd56c",
    type = "tar.gz",
    url = "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/{}/bazel-skylib-{}.tar.gz".format(skylib_version, skylib_version),
)

rules_scala_version = "5df8033f752be64fbe2cedfd1bdbad56e2033b15"

http_archive(
    name = "io_bazel_rules_scala",
    sha256 = "b7fa29db72408a972e6b6685d1bc17465b3108b620cb56d9b1700cf6f70f624a",
    strip_prefix = "rules_scala-%s" % rules_scala_version,
    type = "zip",
    url = "https://github.com/bazelbuild/rules_scala/archive/%s.zip" % rules_scala_version,
)

# Stores the Scala version and other configuration.
# 2.12 is the default version; other versions can be used by passing them explicitly:
load("@io_bazel_rules_scala//:scala_config.bzl", "scala_config")
scala_config(scala_version = "2.12.11")

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")
scala_repositories()

load("@io_bazel_rules_scala//scala:toolchains.bzl", "scala_register_toolchains")
scala_register_toolchains()

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_binary", "scala_test")

# Optional: set up the ScalaTest toolchain and dependencies.
load("@io_bazel_rules_scala//testing:scalatest.bzl", "scalatest_repositories", "scalatest_toolchain")
scalatest_repositories()
scalatest_toolchain()

load("//vendor:workspace.bzl", "maven_dependencies")
maven_dependencies()

load("//vendor:target_file.bzl", "build_external_workspace")
build_external_workspace(name = "vendor")
and this is the BUILD file
package(default_visibility = ["//visibility:public"])

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_test")

scala_library(
    name = "job",
    srcs = glob(["src/main/scala/**/*.scala"]),
    deps = [
        "@vendor//vendor/org/apache/flink:flink_clients",
        "@vendor//vendor/org/apache/flink:flink_scala",
        "@vendor//vendor/org/apache/flink:flink_streaming_scala",
    ],
)
I'm using bazel-deps for vendoring the dependencies (placed in the vendor folder). I have this in my dependencies.yaml file:
options:
  buildHeader: [
    "load(\"@io_bazel_rules_scala//scala:scala_import.bzl\", \"scala_import\")",
    "load(\"@io_bazel_rules_scala//scala:scala.bzl\", \"scala_library\", \"scala_binary\", \"scala_test\")",
  ]
  languages: [ "java", "scala:2.12.11" ]
  resolverType: "coursier"
  thirdPartyDirectory: "vendor"
  resolvers:
    - id: "mavencentral"
      type: "default"
      url: https://repo.maven.apache.org/maven2/
  strictVisibility: true
  transitivity: runtime_deps
  versionConflictPolicy: highest

dependencies:
  org.apache.flink:
    flink:
      lang: scala
      version: "1.11.2"
      modules: [clients, scala, streaming-scala] # provided
    flink-connector-kafka:
      lang: java
      version: "0.10.2"
    flink-test-utils:
      lang: java
      version: "0.10.2"
For downloading the dependencies, I'm running
bazel run //:parse generate -- --repo-root ~/Projects/bazel-flink-scala --sha-file vendor/workspace.bzl --target-file vendor/target_file.bzl --deps dependencies.yaml
This runs just fine, but then when I try to build the project
bazel build //:job
I'm getting this error
Starting local Bazel server and connecting to it...
ERROR: Traceback (most recent call last):
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/WORKSPACE", line 44, column 25, in <toplevel>
build_external_workspace(name = "vendor")
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/vendor/target_file.bzl", line 258, column 91, in build_external_workspace
return build_external_workspace_from_opts(name = name, target_configs = list_target_data(), separator = list_target_data_separator(), build_header = build_header())
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/vendor/target_file.bzl", line 251, column 40, in list_target_data
"vendor/org/apache/flink:flink_clients": ["lang||||||scala:2.12.11","name||||||//vendor/org/apache/flink:flink_clients","visibility||||||//visibility:public","kind||||||import","deps|||L|||","jars|||L|||//external:jar/org/apache/flink/flink_clients_2_12","sources|||L|||","exports|||L|||","runtimeDeps|||L|||//vendor/commons_cli:commons_cli|||//vendor/org/slf4j:slf4j_api|||//vendor/org/apache/flink:force_shading|||//vendor/com/google/code/findbugs:jsr305|||//vendor/org/apache/flink:flink_streaming_java_2_12|||//vendor/org/apache/flink:flink_core|||//vendor/org/apache/flink:flink_java|||//vendor/org/apache/flink:flink_runtime_2_12|||//vendor/org/apache/flink:flink_optimizer_2_12","processorClasses|||L|||","generatesApi|||B|||false","licenses|||L|||","generateNeverlink|||B|||false"],
Error: dictionary expression has duplicate key: "vendor/org/apache/flink:flink_clients"
ERROR: error loading package 'external': Package 'external' contains errors
INFO: Elapsed time: 3.644s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
Why is that? Can anyone help? It would be great to have detailed instructions and project templates for Flink/Scala applications using Bazel. I've put everything together in the following repo: https://github.com/salvalcantara/bazel-flink-scala, feel free to send a PR or whatever.
Bazel failed to deploy a sample k8s deployment (deployment.yaml) file to a k8s tenant.
I followed the link https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy and tried a sample deployment.yaml file to deploy the application in the k8s tenant. I have one k8s tenant already configured on the build machine. To deploy the application I executed:
bazel run //main:dev.create
But the bazel command is failing with below error:
[root@localhost t2]# bazel run //main:dev.create
Starting local Bazel server and connecting to it...
INFO: Analyzed target //main:dev.create (68 packages loaded, 6876 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /root/.cache/bazel/_bazel_root/5ad59170e5ff426844f68e5dd9f66fb3/sandbox
Target //main:dev.create up-to-date:
  bazel-bin/main/dev.create
INFO: Elapsed time: 33.497s, Critical Path: 2.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
$ /usr/local/bin/kubectl --cluster=kubernetes --context= --user= create -f -
error: error parsing STDIN: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
This is my WORKSPACE file:
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "io_bazel_rules_go",
remote = "https://github.com/bazelbuild/rules_go.git",
tag = "0.18.5"
)
git_repository(
name = "bazel_gazelle",
remote = "https://github.com/bazelbuild/bazel-gazelle.git",
tag = "0.17.0",
)
load("#io_bazel_rules_go//go:deps.bzl", "go_download_sdk","go_register_toolchains","go_rules_dependencies")
go_download_sdk(
name = "gosdk",
sdks = {
......
},
urls = [....],
)
go_register_toolchains(
"#//:gosdk",
)
go_rules_dependencies()
load("#bazel_gazelle//:deps.bzl", "gazelle_dependencies", "go_repository")
gazelle_dependencies()
git_repository(
name = "io_bazel_rules_docker",
commit = "e12e276a9a6ded09363a6c1f0de46c573bd6096c",
remote = "https://github.com/xxxxx/rules_docker.git",
)
load(
"#io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load("#io_bazel_rules_docker//container:container.bzl", "container_pull")
load(
"#io_bazel_rules_docker//go:image.bzl",
go_image_repos = "repositories",
)
go_image_repos()
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_k8s",
sha256 = "91fef3e6054096a8947289ba0b6da3cba559ecb11c851d7bdfc9ca395b46d8d8",
strip_prefix = "rules_k8s-0.1",
urls = ["https://github.com/bazelbuild/rules_k8s/releases/download/v0.1/rules_k8s-v0.1.tar.gz"],
)
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_repositories")
k8s_repositories()
load("#io_bazel_rules_k8s//k8s:k8s.bzl", "k8s_defaults")
k8s_defaults(
name = "k8s_deploy",
kind = "deployment",
cluster = "kubernetes",
)
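For reference, the aliasing pattern from the rules_k8s README linked above would let a BUILD file pick up these defaults instead of repeating kind and cluster on every rule; a sketch based on that README:

load("@k8s_deploy//:defaults.bzl", "k8s_deploy")

k8s_deploy(
    name = "dev",
    template = ":deployment.yaml",
)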
BUILD.bazel file:
load("#io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
go_binary(
name = "hello_go",
embed = [":go_default_library"],
visibility = ["//visibility:public"],
)
go_library(
name = "go_default_library",
srcs = ["main.go"],
)
load("#io_bazel_rules_docker//go:image.bzl", "go_image")
go_image(
name = "go-image",
base = ":test",
embed = [":go_default_library"],
)
load("#io_bazel_rules_docker//container:image.bzl", "container_image")
container_image(
name = "test",
base = "#go_image_base//image",
user = "101",
)
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "dev",
kind = "deployment",
template = ":deployment.yaml",
cluster = "kubernetes",
images = {
"xxxxx.net/test/new:v1": ":go-image",
},
)
deployment.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: staging
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: xxxxx.net/test/new:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 50051
On the same server, I have the kubeconfig file kept at /root/.kube/config.
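One way to narrow down such a YAML parse error is to build the target without .create and inspect the resolved manifest that gets piped to kubectl. The .substituted.yaml path below is an assumption, following the output pattern k8s_object printed in the first question above:

bazel build //main:dev
cat bazel-bin/main/dev.substituted.yaml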
I tried to configure PostgreSQL as the node's database in Corda 3.0 and in Corda 4.0.
I have added the following to the build.gradle file (Testdb1 is the database name; I have also tried with postgres):
node {
    ...
    // this part I have added
    extraConfig = [
        jarDirs: ['path'],
        'dataSourceProperties': [
            'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
            '"dataSource.url"': 'jdbc:postgresql://127.0.0.1:5432/Testdb1',
            '"dataSource.user"': 'postgres',
            '"dataSource.password"': 'admin@123'
        ],
        'database': [
            'transactionIsolationLevel': 'READ_COMMITTED'
        ]
    ]
    // till here
}
and this part in the reference.conf file:
dataSourceProperties = {
    dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
    dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
    dataSource.user = postgres
    dataSource.password = "admin@123"
}
database = {
    transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
I got the following error while deploying the nodes:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':java-source:deployNodes'.
The node-info-gen.log file showed a CAPSULE EXCEPTION. I then updated my JDK to 8u191 but still got the same error.
I have gone through the following to get things done; one can use them as a reference:
https://docs.corda.net/node-database.html
https://github.com/corda/corda/issues/4037
How can the Corda node be extended to work with databases other than H2?
You need to add those properties to node.conf in each of your Corda nodes, after you run "deployNodes".
Once you have added these properties to the node.conf file, just run the Corda jar; it will start automatically. But before that you need to create the tables (the migration to other DBs is already covered in the Corda documentation).
I have added the following to the .conf file of each node and to one reference.conf file. I have granted all the privileges for the user postgres that are mentioned in the Corda documentation:
https://docs.corda.r3.com/node-database.html
(Previously I used the postgresql-42.2.5.jar file, but that didn't work, so I used a downgraded version of it, postgresql-42.1.4.jar. One can download jar files from https://jdbc.postgresql.org/download.html.)
After the nodes are deployed successfully, add the following:
dataSourceProperties = {
    dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
    dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
    dataSource.user = postgres
    dataSource.password = "admin@123"
}
database = {
    transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
(path = the jar file's location.) After adding this configuration, run the file called runnodes.bat.
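For completeness, the database-side setup referred to above might look roughly like the following psql sketch. The database name and user come from the config above; treat this as an illustration only and take the authoritative statements from the linked Corda documentation:

-- Sketch only: create the node database and grant the configured user access.
CREATE DATABASE "Testdb1";
GRANT ALL PRIVILEGES ON DATABASE "Testdb1" TO postgres;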