How to pass multiple parameters to --detect.maven.build.command for Black Duck Hub scanning of a Maven project using Jenkins - PowerShell

I am passing the command below as a PowerShell step in Jenkins:
powershell "[Net.ServicePointManager]::SecurityProtocol = 'tls12'; irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex; detect" --blackduck.url=$env:HUB_URL --blackduck.trust.cert=true --blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN --detect.project.name=$env:HUB_PROJECT_NAME --detect.project.version.name=$env:VERSION --detect.maven.include.plugins=true --detect.included.detector.types=maven --detect.maven.build.command="E:\apache-maven-3.0.3\bin\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"`
But when Detect executes, --detect.maven.build.command only picks up the first token of the Maven command, as highlighted below:
> "C:\Users\a900565\AppData\Local\Temp/synopsys-detect-6.5.0.jar"
> "--blackduck.url=https://blackduckhub.deere.com"
> "--blackduck.trust.cert=true" "--blackduck.api.token=********"
> "--detect.project.name=**********"
> "--detect.project.version.name=master"
> "--detect.maven.include.plugins=true"
> "--detect.included.detector.types=maven"
> "--detect.maven.build.command=E:\apache-maven-3.0.3\bin\mvn.cmd" "-f"
> "pom.xml" "-s" "settings.xml" "-gs" "settings.xml" "clean" "install"
> "-DIafConfigSuffix=Cert" 07:58:14 Java Source:
> JAVA_HOME/bin/java=C:\Program Files\Amazon
> Corretto\jdk1.8.0_202/bin/java 07:58:15 ______ _ _
> 07:58:15 | _ \ | | | | 07:58:15 | | | |___| |_ ___ ___|
> |_ 07:58:15 | | | / _ \ __/ _ \/ __| __| 07:58:15 | |/ / __/ || __/
> (__| |_ 07:58:15 |___/ \___|\__\___|\___|\__| 07:58:15 07:58:17
> 07:58:17 Detect Version: 6.5.0 07:58:17 07:58:17 2020-09-11 07:58:17
> INFO [main] --- 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> Current property values: 07:58:17 2020-09-11 07:58:17 INFO [main] ---
> --property = value [notes] 07:58:18 2020-09-11 07:58:17 INFO [main] --- ------------------------------------------------------------ 07:58:18 2020-09-11 07:58:17 INFO [main] --- blackduck.api.token =
> **************************************************************************************************** [cmd] 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> blackduck.trust.cert = true [cmd] 07:58:18 2020-09-11 07:58:17 INFO
> [main] --- blackduck.url = ************* [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.included.detector.types = maven [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> **detect.maven.build.command = E:\apache-maven-3.0.3\bin\mvn.cmd [cmd]** 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.maven.include.plugins = true [cmd] 07:58:18 2020-09-11
> 07:58:17 INFO [main] --- detect.project.name = ********* [cmd]
> 07:58:18 2020-09-11 07:58:17 INFO [main] ---
> detect.project.version.name = master [cmd]
How can I pass multiple parameters to detect.maven.build.command?

Solution
The issue is caused by your nested quotation marks and missing escape characters. I've taken your PowerShell command and reformatted the string with the appropriate escape characters.
powershell label: '', script: '''
[Net.ServicePointManager]::SecurityProtocol = \'tls12\';
irm https://detect.synopsys.com/detect.ps1?$(Get-Random) | iex;
detect
--blackduck.url=$env:HUB_URL
--blackduck.trust.cert=true
--blackduck.api.token=$env:BLACKDUCK_HUB_TOKEN
--detect.project.name=$env:HUB_PROJECT_NAME
--detect.project.version.name=$env:VERSION
--detect.maven.include.plugins=true
--detect.included.detector.types=maven
--detect.maven.build.command="E:\\apache-maven-3.0.3\\bin\\mvn.cmd -f pom.xml -s settings.xml -gs settings.xml clean install -DIafConfigSuffix=Cert"
'''
Pipeline Syntax Generator
You can auto-generate this script by using the Pipeline Syntax generator in Jenkins. Configure a pipeline and click the Pipeline Syntax link at the bottom of the configuration page.
From there you can enter the PowerShell script and click Generate Pipeline Script.
Post Script
I noticed that your PowerShell script has some seemingly misplaced quotation marks. If my script doesn't run, please post the PowerShell script that you run directly from the PowerShell console and I will update my answer.

See quoting and escaping info here.
Since Detect is only grabbing the first "word" of your Maven command, consider putting a backtick ` before each of the spaces (and possibly some other special characters) in your Maven command.
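For illustration only, a rough sketch of what a backtick-escaped build command might look like (untested; the paths and flags are the ones from the question, with a backtick placed before each space so PowerShell keeps the whole value as one argument):
--detect.maven.build.command=E:\apache-maven-3.0.3\bin\mvn.cmd` -f` pom.xml` -s` settings.xml` -gs` settings.xml` clean` install` -DIafConfigSuffix=Cert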
Other useful options, though maybe not useful here:
-hv
--logging.level.com.synopsys.integration=TRACE

Related

Kafka Spark Streaming Error - java.lang.NoClassDefFoundError: org/apache/spark/sql/connector/read/streaming/ReportsSourceMetrics

I'm using Spark 3.1.2, Kafka 2.8.1 & Scala 2.12.1
I'm getting the error below while integrating Kafka and Spark Streaming:
java.lang.NoClassDefFoundError: org/apache/spark/sql/connector/read/streaming/ReportsSourceMetrics
Spark-shell command with Dependency - spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2
org.apache.spark#spark-sql-kafka-0-10_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-3643b83d-a2f8-43d1-941f-a125272f3905;1.0
confs: [default]
found org.apache.spark#spark-sql-kafka-0-10_2.12;3.1.2 in central
found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.1.2 in central
found org.apache.kafka#kafka-clients;2.6.0 in central
found com.github.luben#zstd-jni;1.4.8-1 in central
found org.lz4#lz4-java;1.7.1 in central
found org.xerial.snappy#snappy-java;1.1.8.2 in central
found org.slf4j#slf4j-api;1.7.30 in central
found org.spark-project.spark#unused;1.0.0 in central
found org.apache.commons#commons-pool2;2.6.2 in central
:: resolution report :: resolve 564ms :: artifacts dl 9ms
:: modules in use:
com.github.luben#zstd-jni;1.4.8-1 from central in [default]
org.apache.commons#commons-pool2;2.6.2 from central in [default]
org.apache.kafka#kafka-clients;2.6.0 from central in [default]
org.apache.spark#spark-sql-kafka-0-10_2.12;3.1.2 from central in [default]
org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.1.2 from central in [default]
org.lz4#lz4-java;1.7.1 from central in [default]
org.slf4j#slf4j-api;1.7.30 from central in [default]
org.spark-project.spark#unused;1.0.0 from central in [default]
org.xerial.snappy#snappy-java;1.1.8.2 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 9 | 0 | 0 | 0 || 9 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-3643b83d-a2f8-43d1-941f-a125272f3905
confs: [default]
0 artifacts copied, 9 already retrieved (0kB/15ms)
21/12/28 17:46:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/12/28 17:46:28 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://*******:4041
Spark context available as 'sc' (master = local[*], app id = local-1640693788919).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.1.2
/_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_292)
Type in expressions to have them evaluated.
Type :help for more information.
val df = spark.readStream.format("kafka").option("kafka.bootstrap.servers", "127.0.1.1:9092").option("subscribe", "Topic").option("startingOffsets", "earliest").load()
df.printSchema()
import org.apache.spark.sql.types._
val schema = new StructType().add("id",IntegerType).add("fname",StringType).add("lname",StringType)
val personStringDF = df.selectExpr("CAST(value AS STRING)")
val personDF = personStringDF.select(from_json(col("value"), schema).as("data")).select("data.*")
personDF.writeStream.format("console").outputMode("append").start().awaitTermination()
Exception in thread "stream execution thread for [id = 44e8f8bf-7d94-4313-9d2b-88df8f5bc10f, runId = 3b4c63c4-9062-4288-a681-7dd6cfb836d0]" java.lang.NoClassDefFoundError: org/apache/spark/sql/connector/read/streaming/ReportsSourceMetrics
Spark_version 3.1.2
Scala_version 2.12.10
Kafka_version 2.8.1
Note: The versions are very important when we use --packages org.apache.spark:spark-sql-kafka-0-10_2.12:V.V.V with either spark-shell or spark-submit, where V.V.V must match your Spark version.
I followed the steps below, as given at spark-kafka-example:
Start the producer:
$ kafka-console-producer.sh --broker-list Kafka-Server-IP:9092 --topic kafka-spark-test
You should see the > prompt on the console. Enter some test data in the producer.
>{"name":"foo","dob_year":1995,"gender":"M","salary":2000}
>{"name":"bar","dob_year":1996,"gender":"M","salary":2500}
>{"name":"baz","dob_year":1997,"gender":"F","salary":3500}
>{"name":"foo-bar","dob_year":1998,"gender":"M","salary":4000}
Start the spark-shell as follows:
spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2
Note: I have used 3.1.2. On a successful start you will see something like the following:
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.1.2
/_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 11.0.13)
Type in expressions to have them evaluated.
Type :help for more information.
Enter the imports, create the DataFrame, and print the schema.
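The post does not list the imports themselves; the ones needed for the schema types and the from_json/col helpers used later are presumably:
import org.apache.spark.sql.types._          // StructType, StringType, IntegerType
import org.apache.spark.sql.functions._      // from_json, col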
val df = spark.readStream.
format("kafka").
option("kafka.bootstrap.servers", "Kafka-Server-IP:9092").
option("subscribe", "kafka-spark-test").
option("startingOffsets", "earliest").
load()
df.printSchema()
Successful execution should print the following:
scala> df.printSchema()
root
|-- key: binary (nullable = true)
|-- value: binary (nullable = true)
|-- topic: string (nullable = true)
|-- partition: integer (nullable = true)
|-- offset: long (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampType: integer (nullable = true)
Convert the DataFrame's binary value column to a string. The command with its output:
scala> val personStringDF = df.selectExpr("CAST(value AS STRING)")
personStringDF: org.apache.spark.sql.DataFrame = [value: string]
Create a schema for the DataFrame. The command with its output:
scala> val schema = new StructType().
| add("name",StringType).
| add("dob_year",IntegerType).
| add("gender",StringType).
| add("salary",IntegerType)
schema: org.apache.spark.sql.types.StructType = StructType(StructField(name,StringType,true), StructField(dob_year,IntegerType,true), StructField(gender,StringType,true), StructField(salary,IntegerType,true))
Select the data
scala> val personDF = personStringDF.select(from_json(col("value"), schema).as("data")).select("data.*")
personDF: org.apache.spark.sql.DataFrame = [name: string, dob_year: int ... 2 more fields]
Write the stream to the console:
scala> personDF.writeStream.
| format("console").
| outputMode("append").
| start().
| awaitTermination()
You will see the following output:
-------------------------------------------
Batch: 0
-------------------------------------------
+-------+--------+------+------+
| name|dob_year|gender|salary|
+-------+--------+------+------+
| foo| 1981| M| 2000|
| bar| 1982| M| 2500|
| baz| 1983| F| 3500|
|foo-bar| 1984| M| 4000|
+-------+--------+------+------+
If your Kafka producer is still running, you can enter a new row and you will see the new data in Batch: 1, and so on each time you enter new data in the producer.
This is a typical example of entering data from the console producer and consuming it in the Spark shell.
Good luck! :)
I had nearly the same problem - same exception but in spark-submit.
I solved it by upgrading Spark to version 3.2.0.
I also used version 3.2.0 of org.apache.spark:spark-sql-kafka-0-10_2.12 with the full command being:
$SPARK_HOME/bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0 script.py
Just check your Spark version with spark.version and adjust the package version as suggested in the other answers.
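For example, a quick check from the running shell (the version printed here is only illustrative); the connector passed to --packages should then carry the same version, e.g. org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 for Spark 3.1.2:
scala> spark.version
res1: String = 3.1.2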

minikube service is failing to expose URL

F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-app-deploy-68698d9757-wrs9z 1/1 Running 0 14m 172.17.0.3 minikube <none> <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>minikube service my-app-svc
|-----------|------------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------|-------------|-----------------------------|
| default | my-app-svc | 80 | http://172.30.105.146:30365 |
|-----------|------------|-------------|-----------------------------|
* Opening service default/my-app-svc in default browser...
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl describe service my-app-svc
Name: my-app-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-app
Type: NodePort
IP: 10.98.9.115
Port: <unset> 80/TCP
TargetPort: 9001/TCP
NodePort: <unset> 30365/TCP
Endpoints: 172.17.0.3:9001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
F:\Udemy\GitRepo\Kubernetes-Tutorial>kubectl logs my-app-deploy-68698d9757-wrs9z
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
2021-08-21 13:37:21.046 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Starting DockerpublishApplication v0.0.3 on my-app-deploy-68698d9757-wrs9z with PID 1 (/app.jar started by root in /)
2021-08-21 13:37:21.050 INFO 1 --- [ main] c.d.d.DockerpublishApplication : No active profile set, falling back to default profiles: default
2021-08-21 13:37:22.645 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9091 (http)
2021-08-21 13:37:22.659 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-08-21 13:37:22.660 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.36]
2021-08-21 13:37:22.785 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-08-21 13:37:22.785 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1646 ms
2021-08-21 13:37:23.302 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-08-21 13:37:23.496 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9091 (http) with context path ''
2021-08-21 13:37:23.510 INFO 1 --- [ main] c.d.d.DockerpublishApplication : Started DockerpublishApplication in 3.279 seconds (JVM running for 4.077)
F:\Udemy\GitRepo\Kubernetes-Tutorial>
Everything seems to be fine, but it is not working.
The browser shows a "refused to connect" error when opening the URL from:
minikube service my-app-svc
Your application is listening on a different port than the one your Service targets, which is why you get connection refused.
Spring Boot is running on 9091: Tomcat started on port(s): 9091 (http) with context path ''
But your Service is forwarding traffic to TargetPort: 9001/TCP.
Your target port should be 9091 instead of 9001.
You will access the application via the node's IP and NodePort; the request reaches the Kubernetes Service and is forwarded to TargetPort: 9091/TCP, where the application is running.
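As a minimal sketch, assuming the names, selector, and ports shown in the question, the corrected Service might look like this (nodePort can be omitted to let Kubernetes pick one):
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 9091   # must match the port Tomcat/Spring Boot listens on
      nodePort: 30365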

Running docker compose causes "Connection to localhost:5432 refused." exception

I've looked at SO posts related to this question here, here, here, and here, but I haven't had any luck with the fixes proposed. Whenever I run docker-compose -f stack.yml up, I receive the following stack trace:
Attaching to weg-api_db_1, weg-api_weg-api_1
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-07-04 14:57:15.384 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-07-04 14:57:15.388 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-07-04 14:57:15.402 UTC [23] LOG: database system was interrupted; last known up at 2018-07-04 14:45:24 UTC
db_1 | 2018-07-04 14:57:15.513 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo starts at 0/16341E0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: invalid record length at 0/1634218: wanted 24, got 0
db_1 | 2018-07-04 14:57:15.515 UTC [23] LOG: redo done at 0/16341E0
db_1 | 2018-07-04 14:57:15.525 UTC [1] LOG: database system is ready to accept connections
weg-api_1 |
weg-api_1 | . ____ _ __ _ _
weg-api_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
weg-api_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
weg-api_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
weg-api_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
weg-api_1 | =========|_|==============|___/=/_/_/_/
weg-api_1 | :: Spring Boot :: (v1.5.3.RELEASE)
weg-api_1 |
weg-api_1 | 2018-07-04 14:57:16.908 INFO 7 --- [ main] api.ApiKt : Starting ApiKt v0.0.1-SNAPSHOT on f9c58f4f2f27 with PID 7 (/app/spring-jpa-postgresql-spring-boot-0.0.1-SNAPSHOT.jar started by root in /app)
weg-api_1 | 2018-07-04 14:57:16.913 INFO 7 --- [ main] api.ApiKt : No active profile set, falling back to default profiles: default
weg-api_1 | 2018-07-04 14:57:17.008 INFO 7 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#6e5e91e4: startup date [Wed Jul 04 14:57:17 GMT 2018]; root of context hierarchy
weg-api_1 | 2018-07-04 14:57:19.082 INFO 7 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8080 (http)
weg-api_1 | 2018-07-04 14:57:19.102 INFO 7 --- [ main] o.apache.catalina.core.StandardService : Starting service Tomcat
weg-api_1 | 2018-07-04 14:57:19.104 INFO 7 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.14
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
weg-api_1 | 2018-07-04 14:57:19.215 INFO 7 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2211 ms
weg-api_1 | 2018-07-04 14:57:19.370 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
weg-api_1 | 2018-07-04 14:57:19.375 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.376 INFO 7 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
weg-api_1 | 2018-07-04 14:57:19.867 ERROR 7 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
weg-api_1 |
weg-api_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I thought that my .yml file was brain-dead-simple, but I must be missing something vital for the internal routing between the two containers to fail.
EDIT
My stack.yml is below:
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"
EDIT
My Spring Boot application properties are below:
spring.datasource.url=jdbc:postgresql://db:5432/weg
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.generate-ddl=true
I'm at a loss as to how to proceed.
Your database is running in the db container, not on localhost inside your weg-api container. Therefore, you have to change
spring.datasource.url=jdbc:postgresql://localhost:5432/weg
to
spring.datasource.url=jdbc:postgresql://db:5432/weg
I would also suggest giving a container_name to each of your containers so the container names are always the same. Otherwise you might sometimes get different names depending on your configuration.
version: '3'
services:
  db:
    image: postgres
    restart: always
    container_name: db
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: weg
    ports:
      - "5432:5432"
  weg-api:
    image: weg-api
    restart: always
    container_name: weg-api
    ports:
      - "8080:8080"
    depends_on:
      - "db"

IntelliJ scala worksheet: Reduce debug logging

When using the worksheet with Slick, it logs so much debug output that I can't see the actual result of what I'm doing. I've been trying to figure out how to disable the debug logging for hours now, but I can't work it out.
How do I disable the (slick)/(worksheet) logging?
from the worksheet:
db.run(
countries.take(5)
.map(_.country)
.result
)
Which outputs ~200 lines of:
countries: slick.lifted.TableQuery[Country] = Rep(TableExpansion)
z: slick.lifted.Query[Country,Country#TableElementType,Seq] = Rep(Filter #968224334)
res0: java.sql.Connection = org.postgresql.jdbc4.Jdbc4Connection#40d24bbd
BEFORE
22:17:58.192 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - Source:
| Bind
| from s2: Take
| from: TableExpansion
| table s3: Table country
| columns: ProductNode
| 1: Path s3.id : String'
| 2: Path s3.name : String'
| count: LiteralNode 5 (volatileHint=false)
| select: Pure t4
| value: Path s2.name : String'
22:17:58.193 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.AssignUniqueSymbols - Detected features: UsedFeatures(false,false,false,false)
22:17:58.194 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase assignUniqueSymbols:
| Bind
| from s5: Take
| from: TableExpansion
| table s6: Table country
| columns: ProductNode
| 1: Path s6.id : String'
| 2: Path s6.name : String'
| count: LiteralNode 5 (volatileHint=false)
| select: Pure t8
| value: Path s5.name : String'
22:17:58.195 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase inferTypes: (no change)
22:17:58.196 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - Found Selects for NominalTypes: #t7
22:17:58.197 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - With correct table types:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.197 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.ExpandTables - Table expansions: #t7 -> (s6,ProductNode)
22:17:58.198 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandTables:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.199 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase forceOuterBinds:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.200 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase removeMappedTypes: (no change)
22:17:58.200 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandSums: (no change)
22:17:58.201 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase expandRecords:
| Bind : Vector[t8<String'>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<String'>]
| value: Path s5.name : String'
22:17:58.202 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattening projection t8
22:17:58.202 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Analyzing s5.name with symbols
| Path s5.name : String'
22:17:58.203 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Translated s5.name to:
| Path s5.name
22:17:58.203 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattening node at Path
| Path s5.name
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Adding definition: s9 -> Path s5.name
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Adding translation for t8: (Map(List() -> s9), UnassignedType)
22:17:58.204 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.FlattenProjections - Flattened projection to
| Pure t8
| value: StructNode
| s9: Path s5.name
22:17:58.205 [NGSession 241: 127.0.0.1: compile-server] DEBUG slick.compiler.QueryCompiler - After phase flattenProjections:
| Bind : Vector[t8<{s9: String'}>]
| from s5: Take : Vector[#t7<{id: String', name: String'}>]
| from: Table country : Vector[#t7<{id: String', name: String'}>]
| count: LiteralNode 5 (volatileHint=false) : Long
| select: Pure t8 : Vector[t8<{s9: String'}>]
| value: StructNode : {s9: String'}
| s9: Path s5.name : String'
(and it goes on and on and on...)
So how do I turn off this logging?
Assuming that you are using the default sbt directory structure, you need to configure logback to log only what you want. To do so, add a file src/main/resources/logback.xml with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="slick.lifted" level="INFO" />
  <root level="debug">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
Of course, you can configure a level other than INFO, and also set different levels for specific Slick packages.
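For instance, the DEBUG output shown in the question comes from the slick.compiler loggers, so an extra logger element like this one (the level is just a suggestion) would quiet it:
<logger name="slick.compiler" level="INFO" />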
And finally, if you are using the Play Framework directory structure, put the logback.xml file inside the conf directory instead.
I had a similar problem with a Scala worksheet with Spark code where it wrote literally hundreds of DEBUG messages to the log - all starting with [NGSession 241: 127.0.0.1: compile] DEBUG ...
The project uses several modules and no matter where I put the adjusted logback.xml file, it didn't get picked up.
I noticed that this NGSession relates to the Scala compile server. By specifically telling the server which logback.xml to use, it was picked up:
Add the parameter...
After that I restarted the compile server (Stop/Start) and it worked.
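The parameter itself was shown in a screenshot in the original answer. Logback is normally pointed at a specific config file via the logback.configurationFile system property, so the JVM option presumably looked something like this (the path is illustrative), added to the Scala Compile Server's JVM options in the IntelliJ settings:
-Dlogback.configurationFile=/path/to/logback.xml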

Deployment issue in Fabric for code using camel-cxf and camel-http

I am getting the following error while trying to deploy the test-ext feature of the test-ext-profile in JBoss Fuse Fabric. The other feature in the same profile, ticktock, deploys and works fine. I am deploying the two profiles to the child container with the command "container-change-profile test-child-container-1 feature-camel test-ext-profile". Please help.
----------------------------------------------------------------------------------------
ERROR --
-------------------------------------------------------------------------------------------------
2015-01-05 16:06:47,125 | INFO | admin-4-thread-1 | FabricConfigAdminBridge | figadmin.FabricConfigAdminBridge 173 | 67 - io.fabric8.fabric-configadmin - 1.0.0.redhat-379 | Updating configuration io.fabric8.agent
2015-01-05 16:06:47,140 | INFO | admin-4-thread-1 | FabricConfigAdminBridge | figadmin.FabricConfigAdminBridge 142 | 67 - io.fabric8.fabric-configadmin - 1.0.0.redhat-379 | Deleting configuration org.ops4j.pax.logging
2015-01-05 16:06:47,140 | INFO | o.fabric8.agent) | DeploymentAgent | io.fabric8.agent.DeploymentAgent 243 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | DeploymentAgent updated with {hash=ProfileImpl[id='default', version='1.0']-, org.ops4j.pax.url.mvn.defaultrepositories=file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/system#snapshots#id=karaf-default,file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/local-repo#snapshots#id=karaf-local, feature.karaf=karaf, feature.jolokia=jolokia, resolve.optional.imports=false, feature.fabric-core=fabric-core, fabric.zookeeper.pid=io.fabric8.agent, org.ops4j.pax.url.mvn.repositories=http://repo1.maven.org/maven2#id=central, https://repo.fusesource.com/nexus/content/groups/public#id=fusepublic, https://repository.jboss.org/nexus/content/repositories/public#id=jbosspublic, https://repo.fusesource.com/nexus/content/repositories/releases#id=jbossreleases, https://repo.fusesource.com/nexus/content/groups/ea#id=jbossearlyaccess, http://repository.springsource.com/maven/bundles/release#id=ebrreleases, http://repository.springsource.com/maven/bundles/external#id=ebrexternal, https://oss.sonatype.org/content/groups/scala-tools#id=scala, repository.fabric8=mvn:io.fabric8/fabric8-karaf/1.0.0.redhat-379/xml/features, patch.repositories=https://repo.fusesource.com/nexus/content/repositories/releases, https://repo.fusesource.com/nexus/content/groups/ea, service.pid=io.fabric8.agent, feature.fabric-jaas=fabric-jaas, feature.fabric-agent=fabric-agent, feature.fabric-web=fabric-web, feature.fabric-git-server=fabric-git-server, feature.fabric-git=fabric-git, repository.karaf-standard=mvn:org.apache.karaf.assemblies.features/standard/2.3.0.redhat-610379/xml/features, optional.ops4j-base-lang=mvn:org.ops4j.base/ops4j-base-lang/1.4.0}
2015-01-05 16:07:12,344 | INFO | o.fabric8.agent) | DeploymentAgent | io.fabric8.agent.DeploymentAgent 243 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | DeploymentAgent updated with {feature.ticktock=ticktock, hash=ProfileImpl[id='test-ext-profile', version='1.0']----, org.ops4j.pax.url.mvn.defaultrepositories=file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/system#snapshots#id=karaf-default,file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/local-repo#snapshots#id=karaf-local, feature.karaf=karaf, repository.file:c:_goutam_osgitest2_unsolr_features.xml=file:C:/manish/osgitest2/testsolr/features.xml, feature.jolokia=jolokia, repository.karaf-spring=mvn:org.apache.karaf.assemblies.features/spring/2.3.0.redhat-610379/xml/features, feature.camel-blueprint=camel-blueprint, resolve.optional.imports=false, feature.camel-core=camel-core, feature.test-ext=test-ext, feature.camel-cxf_0.0.0=camel-cxf/0.0.0, feature.fabric-core=fabric-core, repository.karaf-enterprise=mvn:org.apache.karaf.assemblies.features/enterprise/2.3.0.redhat-610379/xml/features, fabric.zookeeper.pid=io.fabric8.agent, feature.fabric-camel=fabric-camel, org.ops4j.pax.url.mvn.repositories=http://repo1.maven.org/maven2#id=central, https://repo.fusesource.com/nexus/content/groups/public#id=fusepublic, https://repository.jboss.org/nexus/content/repositories/public#id=jbosspublic, https://repo.fusesource.com/nexus/content/repositories/releases#id=jbossreleases, https://repo.fusesource.com/nexus/content/groups/ea#id=jbossearlyaccess, http://repository.springsource.com/maven/bundles/release#id=ebrreleases, http://repository.springsource.com/maven/bundles/external#id=ebrexternal, https://oss.sonatype.org/content/groups/scala-tools#id=scala, repository.fabric8=mvn:io.fabric8/fabric8-karaf/1.0.0.redhat-379/xml/features, feature.fabric-jaas=fabric-jaas, patch.repositories=https://repo.fusesource.com/nexus/content/repositories/releases, https://repo.fusesource.com/nexus/content/groups/ea, service.pid=io.fabric8.agent, feature.fabric-agent=fabric-agent, feature.fabric-web=fabric-web, feature.fabric-git-server=fabric-git-server, feature.camel-http_0.0.0=camel-http/0.0.0, feature.fabric-git=fabric-git, repository.apache-camel=mvn:org.apache.camel.karaf/apache-camel/2.12.0.redhat-610379/xml/features, repository.karaf-standard=mvn:org.apache.karaf.assemblies.features/standard/2.3.0.redhat-610379/xml/features, optional.ops4j-base-lang=mvn:org.ops4j.base/ops4j-base-lang/1.4.0, attribute.parents=feature-camel}
2015-01-05 16:07:13,141 | ERROR | agent-1-thread-1 | DeploymentAgent | .fabric8.agent.DeploymentAgent$2 255 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | Unable to update agent
org.osgi.service.resolver.ResolutionException: Unable to resolve dummy/0.0.0: missing requirement [dummy/0.0.0] osgi.identity; osgi.identity=test-ext; type=karaf.feature; version=0
at org.apache.felix.resolver.Candidates.populateResource(Candidates.java:285)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at org.apache.felix.resolver.Candidates.populate(Candidates.java:153)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:148)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentBuilder.resolve(DeploymentBuilder.java:226)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentAgent.doUpdate(DeploymentAgent.java:521)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentAgent$2.run(DeploymentAgent.java:252)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)[:1.7.0_71]
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_71]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_71]
-------------------------------------------------------------------------------------------------
THE PROFILE DISPLAY DETAILS ARE AS FOLLOWS -
-------------------------------------------------------------------------------------------------
JBossFuse:karaf#root> profile-display test-ext-profile
Profile id: test-ext-profile
Version : 1.0
Attributes:
parents: feature-camel
Containers: test-child-container-1
Container settings
----------------------------
Repositories :
file:C:/manish/osgitest2/testsolr/features.xml
Features :
camel-http/0.0.0
camel-cxf/0.0.0
test-ext
ticktock
Configuration details
----------------------------
Other resources
----------------------------
-------------------------------------------------------------------------------------------------
THE features.xml LOOKS LIKE THIS -
-------------------------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<features name="my-features">
  <feature name="ticktock">
    <bundle>file:C:/manish/osgitest2/testsolr/osgitest_tick2.jar</bundle>
    <bundle>file:C:/manish/osgitest2/testsolr/osgitest_tock2.jar</bundle>
  </feature>
  <feature name="test-ext">
    <bundle>file:C:/manish/osgitest2/testsolr/standard-ext-api-1.0.0-SNAPSHOT.jar</bundle>
  </feature>
</features>
-------------------------------------------------------------------------------------------------
MANIFEST.MF of standard-ext-api-1.0.0-SNAPSHOT.jar is as below. This jar uses camel-cxf and camel-http.
-------------------------------------------------------------------------------------------------
Manifest-Version: 1.0
Bnd-LastModified: 1420491685490
Build-Jdk: 1.7.0_71
Built-By: manish
Bundle-ManifestVersion: 2
Bundle-Name: Camel Blueprint Route for test ext Query
Bundle-SymbolicName: standard-ext-api
Bundle-Version: 1.0.0.SNAPSHOT
Created-By: Apache Maven Bundle Plugin
Export-Package: org.apache.cxf;uses:="org.apache.cxf.feature,org.apache.
cxf.interceptor,org.apache.cxf.common.i18n,org.apache.cxf.common.loggin
g,org.apache.cxf.common.util,org.apache.cxf.common.classloader";version
="2.7.0.redhat-610379"
Import-Package: javax.ws.rs;version="[2.0,3)",javax.ws.rs.core;version="
[2.0,3)",javax.xml.bind.annotation,org.apache.camel;version="[2.12,3)",
org.apache.camel.builder;version="[2.12,3)",org.apache.camel.model;vers
ion="[2.12,3)",org.apache.camel.processor.aggregate;version="[2.12,3)",
org.apache.cxf.common.classloader;version="[2.7,3)",org.apache.cxf.comm
on.i18n;version="[2.7,3)",org.apache.cxf.common.logging;version="[2.7,3
)",org.apache.cxf.common.util;version="[2.7,3)",org.apache.cxf.feature;
version="[2.7,3)",org.apache.cxf.interceptor;version="[2.7,3)",org.osgi
.service.blueprint;version="[1.0.0,2.0.0)"
Tool: Bnd-1.50.0
This problem was solved after some manual intervention, and not entirely by using the fabric8 OBR. The appropriate features needed to be declared in the POM file and installed through the POM file as well. This was a pretty involved exercise; Red Hat should provide a better way in JBoss Fuse to bring the deployment effort down.
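For reference only, and as a hedged sketch rather than the poster's exact fix: a Karaf features descriptor can declare the Camel features that the bundle's Import-Package relies on as feature dependencies, so the resolver installs them together with the bundle:
<feature name="test-ext" version="1.0.0">
  <feature>camel-cxf</feature>
  <feature>camel-http</feature>
  <bundle>file:C:/manish/osgitest2/testsolr/standard-ext-api-1.0.0-SNAPSHOT.jar</bundle>
</feature>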