I am using Bazel to compile Scala. Right now, my scala_test target looks like this:
scala_test(
    name = "sample",
    srcs = [
        "a.scala",
        "b.scala",
        "c.scala",
        "d.scala",
    ],
    deps = [
        "//src/main/scala/.../dep1",
        "//src/main/scala/.../dep2",
        "//src/main/scala/.../dep3",
        "//src/main/scala/.../dep4",
    ],
)
In this case, Bazel cannot parallelize across these srcs because they are grouped into a single scala_test target. To enable automatic parallel testing, I would like to separate the srcs into different scala_test targets, like:
scala_test(
    name = "sample1",
    srcs = [
        "a.scala",
    ],
    deps = [
        "//src/main/scala/.../dep1",
        "//src/main/scala/.../dep2",
        "//src/main/scala/.../dep3",
        "//src/main/scala/.../dep4",
    ],
)
scala_test(
    name = "sample2",
    srcs = [
        "b.scala",
    ],
    deps = [
        "//src/main/scala/.../dep1",
        "//src/main/scala/.../dep2",
        "//src/main/scala/.../dep3",
        "//src/main/scala/.../dep4",
    ],
)
...
The problem is that, I guess, Bazel tries to compile the deps for every scala_test. Is there any way to group dependencies and reuse them across different scala_test targets, such as through a scala_library?
Sorry, I realize Bazel caches the dependencies, so I don't have to worry about them being compiled again when running all the tests.
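For reference, a minimal sketch of that refactor (the dependency labels are the truncated placeholders from the question): the shared deps can be declared once in a list, and one scala_test per source file can be generated with a list comprehension, so Bazel schedules the tests in parallel while compiling each dependency only once:
# Shared dependencies, declared once.
COMMON_DEPS = [
    "//src/main/scala/.../dep1",
    "//src/main/scala/.../dep2",
    "//src/main/scala/.../dep3",
    "//src/main/scala/.../dep4",
]

# One scala_test target per source file.
[
    scala_test(
        name = "sample_" + f[:-len(".scala")],
        srcs = [f],
        deps = COMMON_DEPS,
    )
    for f in [
        "a.scala",
        "b.scala",
        "c.scala",
        "d.scala",
    ]
]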
I am porting an Objective-C CocoaPod over to Swift Package Manager. It has a bundle of fonts in the package. In the original podspec, there was a line of code that created the bundle:
s.resource_bundles = {
  'mathFonts' => [ 'fonts/*.otf', 'fonts/*.plist' ]
}
In the Swift package, I created a static bundle called mathFonts.bundle and added it to the package. In the manifest file, I tried to process the resources as shown below:
import PackageDescription

let package = Package(
    name: "iosMath",
    platforms: [
        .iOS(.v8)
    ],
    products: [
        .library(
            name: "iosMath",
            targets: ["iosMath"]),
    ],
    dependencies: [
    ],
    targets: [
        .target(
            name: "iosMath",
            resources: [.process("mathFonts.bundle")]), // <- this is where I am adding the resources
        .testTarget(
            name: "iosMathTests",
            dependencies: ["iosMath"],
            cSettings: [
                .headerSearchPath("iosMath") // 5
            ]),
    ]
)
I'm getting a runtime crash here:
return [NSBundle bundleWithURL:[[NSBundle bundleForClass:[self class]] URLForResource:@"mathFonts" withExtension:@"bundle"]];
When I manually copy the bundle into the directory of the project that I am importing this package into, it works just fine. How do I include this bundle so that it is made accessible when the package is installed?
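For reference, a hedged sketch of one common approach: since Swift 5.3, SwiftPM generates a resource-bundle accessor for targets that declare resources, and Objective-C code can reach it through the SWIFTPM_MODULE_BUNDLE macro (guarded by SWIFT_PACKAGE so the CocoaPods build keeps working). Separately, for a pre-built .bundle directory, .copy("mathFonts.bundle") preserves the bundle as-is, whereas .process may flatten its contents.
// Sketch: resolve the resource container differently under SwiftPM vs. CocoaPods.
#if SWIFT_PACKAGE
    // Generated by SwiftPM for targets that declare resources.
    NSBundle *container = SWIFTPM_MODULE_BUNDLE;
#else
    NSBundle *container = [NSBundle bundleForClass:[self class]];
#endif
    return [NSBundle bundleWithURL:[container URLForResource:@"mathFonts"
                                              withExtension:@"bundle"]];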
I'm trying to add the finagle-http library to my new Bazel project as an external Maven dependency, but I am getting the following errors. I assume I'm doing something wrong in creating the build without fully understanding it; I'm still learning. I'd appreciate any help on this.
error: object Service is not a member of package com.twitter.finagle
error: object util is not a member of package com.twitter
error: type Request is not a member of package com.twitter.finagle.http
error: object Response is not a member of package com.twitter.finagle.http
error: Symbol 'type com.twitter.finagle.Client' is missing from the classpath. This symbol is required by 'object com.twitter.finagle.Http'.
error: not found: value Await
The same code works with sbt. Below is the code.
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http
import com.twitter.util.{Await, Future}

object HelloWorld extends App {
  val service = new Service[http.Request, http.Response] {
    def apply(req: http.Request): Future[http.Response] =
      Future.value(http.Response(req.version, http.Status.Ok))
  }
  val server = Http.serve(":8080", service)
  Await.ready(server)
}
WORKSPACE file
maven_install(
    artifacts = [
        "org.apache.spark:spark-core_2.11:2.4.4",
        "org.apache.spark:spark-sql_2.11:2.4.1",
        "org.apache.spark:spark-unsafe_2.11:2.4.1",
        "org.apache.spark:spark-tags_2.11:2.4.1",
        "org.apache.spark:spark-catalyst_2.11:2.4.1",
        "com.twitter:finagle-http_2.12:21.8.0",
    ],
    repositories = [
        "https://repo.maven.apache.org/maven2/",
        "https://repo1.maven.org/maven2/",
    ],
)
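(For context: maven_install comes from rules_jvm_external, so after fetching that repository with http_archive, the WORKSPACE would also load it above this snippet, roughly like:)
load("@rules_jvm_external//:defs.bzl", "maven_install")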
BUILD file
load("#io_bazel_rules_scala//scala:scala.bzl", "scala_binary")
package(default_visibility = ["//visibility:public"])
scala_binary(
name="helloworld",
main_class="microservices.HelloWorld",
srcs=[
"Main.scala",
],
deps = ["spark],
)
java_library(
name = "spark",
exports = [
"#maven//:com_twitter_finagle_http_2_12_21_8_0",
],
)
The sbt dependency that was working in my initial sbt project:
libraryDependencies += "com.twitter" %% "finagle-http" % "21.8.0"
Figured out the issue: unlike with sbt, in Bazel I had to individually add the related dependencies. I modified the WORKSPACE as below.
maven_install(
    artifacts = [
        "com.twitter:finagle-http_2.12:21.8.0",
        "com.twitter:util-core_2.12:21.8.0",
        "com.twitter:finagle-core_2.12:21.8.0",
        "com.twitter:finagle-base-http_2.12:21.8.0",
        "com.fasterxml.jackson.module:jackson-module-scala_2.12:2.11.2",
        "com.fasterxml.jackson.core:jackson-databind:2.11.2",
    ],
    repositories = [
        "https://repo.maven.apache.org/maven2/",
        "https://repo1.maven.org/maven2/",
    ],
)
BUILD file:
java_library(
    name = "finagletrial",
    exports = [
        "@maven//:com_twitter_finagle_http_2_12_21_8_0",
        "@maven//:com_twitter_util_core_2_12_21_8_0",
        "@maven//:com_twitter_finagle_core_2_12_21_8_0",
        "@maven//:com_twitter_finagle_base_http_2_12_21_8_0",
        "@maven//:com_fasterxml_jackson_module_jackson_module_scala_2_12_2_11_2",
        "@maven//:com_fasterxml_jackson_core_jackson_databind_2_11_2",
    ],
)
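With that library target in place, the scala_binary from before just needs to depend on it; a minimal sketch reusing the earlier names:
scala_binary(
    name = "helloworld",
    main_class = "microservices.HelloWorld",
    srcs = ["Main.scala"],
    deps = [":finagletrial"],
)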
I am trying to set up a simple Flink application from scratch using Bazel. I've bootstrapped the project by running
sbt new tillrohrmann/flink-project.g8
and after that I added some files in order for Bazel to take control of the building (i.e., migrate from sbt). This is what the WORKSPACE looks like:
# WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

skylib_version = "1.0.3"

http_archive(
    name = "bazel_skylib",
    sha256 = "1c531376ac7e5a180e0237938a2536de0c54d93f5c278634818e0efc952dd56c",
    type = "tar.gz",
    url = "https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/{}/bazel-skylib-{}.tar.gz".format(skylib_version, skylib_version),
)

rules_scala_version = "5df8033f752be64fbe2cedfd1bdbad56e2033b15"

http_archive(
    name = "io_bazel_rules_scala",
    sha256 = "b7fa29db72408a972e6b6685d1bc17465b3108b620cb56d9b1700cf6f70f624a",
    strip_prefix = "rules_scala-%s" % rules_scala_version,
    type = "zip",
    url = "https://github.com/bazelbuild/rules_scala/archive/%s.zip" % rules_scala_version,
)

# Stores Scala version and other configuration.
# 2.12 is the default version; other versions can be used by passing them explicitly:
load("@io_bazel_rules_scala//:scala_config.bzl", "scala_config")
scala_config(scala_version = "2.12.11")

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")
scala_repositories()

load("@io_bazel_rules_scala//scala:toolchains.bzl", "scala_register_toolchains")
scala_register_toolchains()

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_binary", "scala_test")

# optional: set up the ScalaTest toolchain and dependencies
load("@io_bazel_rules_scala//testing:scalatest.bzl", "scalatest_repositories", "scalatest_toolchain")
scalatest_repositories()
scalatest_toolchain()

load("//vendor:workspace.bzl", "maven_dependencies")
maven_dependencies()

load("//vendor:target_file.bzl", "build_external_workspace")
build_external_workspace(name = "vendor")
and this is the BUILD file
package(default_visibility = ["//visibility:public"])

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library", "scala_test")

scala_library(
    name = "job",
    srcs = glob(["src/main/scala/**/*.scala"]),
    deps = [
        "@vendor//vendor/org/apache/flink:flink_clients",
        "@vendor//vendor/org/apache/flink:flink_scala",
        "@vendor//vendor/org/apache/flink:flink_streaming_scala",
    ],
)
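For actually running the job during development, a binary target could sit next to the library. A minimal sketch, where the main class name is an assumption rather than something taken from the template:
scala_binary(
    name = "job_runner",
    main_class = "org.example.Job",  # hypothetical entry point
    deps = [":job"],
)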
I'm using bazel-deps for vendoring the dependencies (put into the vendor folder). I have this in my dependencies.yaml file:
options:
  buildHeader: [
    "load(\"@io_bazel_rules_scala//scala:scala_import.bzl\", \"scala_import\")",
    "load(\"@io_bazel_rules_scala//scala:scala.bzl\", \"scala_library\", \"scala_binary\", \"scala_test\")",
  ]
  languages: [ "java", "scala:2.12.11" ]
  resolverType: "coursier"
  thirdPartyDirectory: "vendor"
  resolvers:
    - id: "mavencentral"
      type: "default"
      url: https://repo.maven.apache.org/maven2/
  strictVisibility: true
  transitivity: runtime_deps
  versionConflictPolicy: highest

dependencies:
  org.apache.flink:
    flink:
      lang: scala
      version: "1.11.2"
      modules: [clients, scala, streaming-scala] # provided
    flink-connector-kafka:
      lang: java
      version: "0.10.2"
    flink-test-utils:
      lang: java
      version: "0.10.2"
For downloading the dependencies, I'm running
bazel run //:parse generate -- --repo-root ~/Projects/bazel-flink-scala --sha-file vendor/workspace.bzl --target-file vendor/target_file.bzl --deps dependencies.yaml
Which runs just fine, but then when I try to build the project
bazel build //:job
I'm getting this error
Starting local Bazel server and connecting to it...
ERROR: Traceback (most recent call last):
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/WORKSPACE", line 44, column 25, in <toplevel>
build_external_workspace(name = "vendor")
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/vendor/target_file.bzl", line 258, column 91, in build_external_workspace
return build_external_workspace_from_opts(name = name, target_configs = list_target_data(), separator = list_target_data_separator(), build_header = build_header())
File "/Users/salvalcantara/Projects/me/bazel-flink-scala/vendor/target_file.bzl", line 251, column 40, in list_target_data
"vendor/org/apache/flink:flink_clients": ["lang||||||scala:2.12.11","name||||||//vendor/org/apache/flink:flink_clients","visibility||||||//visibility:public","kind||||||import","deps|||L|||","jars|||L|||//external:jar/org/apache/flink/flink_clients_2_12","sources|||L|||","exports|||L|||","runtimeDeps|||L|||//vendor/commons_cli:commons_cli|||//vendor/org/slf4j:slf4j_api|||//vendor/org/apache/flink:force_shading|||//vendor/com/google/code/findbugs:jsr305|||//vendor/org/apache/flink:flink_streaming_java_2_12|||//vendor/org/apache/flink:flink_core|||//vendor/org/apache/flink:flink_java|||//vendor/org/apache/flink:flink_runtime_2_12|||//vendor/org/apache/flink:flink_optimizer_2_12","processorClasses|||L|||","generatesApi|||B|||false","licenses|||L|||","generateNeverlink|||B|||false"],
Error: dictionary expression has duplicate key: "vendor/org/apache/flink:flink_clients"
ERROR: error loading package 'external': Package 'external' contains errors
INFO: Elapsed time: 3.644s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
Why is that? Can anyone help? It would be great to have detailed instructions and project templates for Flink/Scala applications using Bazel. I've put everything together in the following repo: https://github.com/salvalcantara/bazel-flink-scala; feel free to send a PR or whatever.
When I run my Scala application using the 'sbt run' command, it sends Kamon metrics to the graphite/grafana container. Then I created a Docker image for my Scala application and ran it as a Docker container. Now it is not sending metrics to the graphite/grafana container. Both my application container and the graphite/grafana container are running on the same Docker network.
The command I used to run the grafana image is: docker run --network smart -d -p 80:80 -p 81:81 -p 2003:2003 -p 8125:8125/udp -p 8126:8126 8399049ce731
The Kamon configuration in application.conf is:
kamon {
  auto-start = true

  metric {
    tick-interval = 1 seconds

    filters {
      akka-actor {
        includes = ["*/user/*"]
        excludes = ["*/system/**", "*/user/IO-**", "**/kamon/**"]
      }
      akka-router {
        includes = ["*/user/*"]
        excludes = ["*/system/**", "*/user/IO-**", "**/kamon/**"]
      }
      akka-dispatcher {
        includes = ["*/user/*"]
        excludes = ["*/system/**", "*/user/IO-**", "*kamon*", "*/kamon/*", "**/kamon/**"]
      }
      trace {
        includes = ["**"]
        excludes = []
      }
    }
  }

  # needed for "[error] Exception in thread "main" java.lang.ClassNotFoundException: local"
  internal-config {
    akka.actor.provider = "akka.actor.LocalActorRefProvider"
  }

  statsd {
    hostname = "127.0.0.1"
    port = 8125

    # Subscription patterns used to select which metrics will be pushed to StatsD.
    # Note that first, metrics collection for your desired entities must be
    # activated under the kamon.metrics.filters settings.
    subscriptions {
      histogram = ["**"]
      min-max-counter = ["**"]
      gauge = ["**"]
      counter = ["**"]
      trace = ["**"]
      trace-segment = ["**"]
      akka-actor = ["**"]
      akka-dispatcher = ["**"]
      akka-router = ["**"]
      system-metric = ["**"]
      http-server = ["**"]
    }

    metric-key-generator = kamon.statsd.SimpleMetricKeyGenerator

    simple-metric-key-generator {
      application = "my-application"
      include-hostname = true
      hostname-override = none
      metric-name-normalization-strategy = normalize
    }
  }

  modules {
    kamon-scala.auto-start = yes
    kamon-statsd.auto-start = yes
    kamon-system-metrics.auto-start = yes
  }
}
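One thing worth double-checking when the application itself runs in a container (a hedged observation, not something from the original post): inside the application container, 127.0.0.1 refers to that container, not to the graphite/grafana container, so StatsD packets never leave the app. Since both containers share the smart network, the StatsD hostname would typically be the peer container's name; a sketch assuming the container was started with --name grafana:
kamon {
  statsd {
    # "grafana" is an assumed container name on the shared "smart" network;
    # use whatever --name the graphite/grafana container was given.
    hostname = "grafana"
    port = 8125
  }
}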
Your help will be very much appreciated.
It is necessary to add the AspectJ weaver as a Java agent when you start the application: -javaagent:aspectjweaver.jar
You can add the following settings to your project's sbt configuration:
.settings(
  retrieveManaged := true,
  libraryDependencies += "org.aspectj" % "aspectjweaver" % aspectJWeaverV
)
The AspectJ weaver JAR will then be copied to ./lib_managed/jars/org.aspectj/aspectjweaver/aspectjweaver-[aspectJWeaverV].jar in your project root.
Then you can refer to this JAR in your Dockerfile:
COPY ./lib_managed/jars/org.aspectj/aspectjweaver/aspectjweaver-*.jar /app-workdir/aspectjweaver.jar
WORKDIR /app-workdir
CMD ["java", "-javaagent:aspectjweaver.jar", "-jar", "app.jar"]
My karma.conf.js includes:
plugins: [
    'karma-jasmine',
    'karma-phantomjs-launcher',
    'karma-ng-html2js-preprocessor'
],
preprocessors: {
    '../../mypath/*.html': ['ng-html2js']
},
ngHtml2JsPreprocessor: {
    moduleName: 'templates'
},
(I've tried without specifying any plugins, too.)
My devDependencies include:
"karma-ng-html2js-preprocessor": "^0.2.0"`
My tests include:
beforeEach(module('templates'));
These give the error:
Module 'templates' is not available!
Running karma with --log-level debug, I do not see any [preprocessor.html2js] entries. (I do get Loading plugin karma-ng-html2js-preprocessor.)
What am I doing wrong?
The issues were that the templates must be listed under files as well, and that the glob pattern in preprocessors must match. This is implied by the documentation.
files: [
    '../../Scripts/angular-app/directives/*.html',
    // .js files
],
preprocessors: {
    '../../Scripts/angular-app/**/*.html': ['ng-html2js']
},
Note that **/*.html does not match parent directories of the basePath.
karma start --log-level debug will display DEBUG [preprocessor.html2js] entries when everything is correct.
I was also able to remove the plugins section.
To get the correct cache ID, I used:
ngHtml2JsPreprocessor: {
    // Load this module in your tests of directives that have a templateUrl.
    moduleName: 'templates',
    cacheIdFromPath: function (filepath) {
        return filepath.substring(filepath.indexOf('/Scripts/angular-app/'));
    }
},
If a template references a custom filter, the filter must be loaded in files and the filter's module must be loaded in your directive tests.
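To illustrate that last point, a small sketch (the filter file name and the myAppFilters module name are made up for the example):
// karma.conf.js: serve the filter's source alongside the templates
files: [
    '../../Scripts/angular-app/filters/my-filter.js', // registers the custom filter
    '../../Scripts/angular-app/directives/*.html',
],

// in the directive spec
beforeEach(module('templates'));    // template cache module from ng-html2js
beforeEach(module('myAppFilters')); // module that registers the custom filter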