Using CASE in PostgreSQL to search for each value in a column and return filtered output - postgresql

There is a column with different values like below:
Ava avtar 18.1.100-33_HF305143
app agent 19.9.0.99 (root-2021-323)
BOOST:1.3.0.0-12345 FUSE:2.9.4 ora_dw05_plm10
BOOST:1.3.0.0-12345 FUSE:2.9.4 tar
BOOST:1.3.0.0-12345 FUSE:2.9.4 scp
BOOST:1.3.0.0-12345 FUSE:2.9.7 /usr/pgsql-10/bin/pg_dump
BOOST:1.3.0.1-12345 CBFS 6.1 CVMountd.exe
PP-19.9.0-13-18087
______ ddrmaint 7.5.0-183
________ app agent 4.5.0.0 (52)
When I query with SELECT, the output after filtering the column should be as below:
Ava avtar 18.1.100
app agent 19.9.0.99
BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.7
BOOST:1.3.0.1-12345 CBFS 6.1
PP-19.9.0-13
______ ddrmaint 7.5.0
________ app agent 4.5.0.0
I'm trying to use SELECT with CASE for this but am not making progress. Please let me know the solution.
select
  CASE
    WHEN 'PP-19.9.0-13-18087' ~ 'PP' THEN split_part('PP-19.9.0-13-18087', '-', 1)
    WHEN 'BOOST:1.3.0.0-12345 FUSE:2.9.4 scp' ~ 'BOOST' THEN split_part('BOOST:1.3.0.0-12345 FUSE:2.9.4 scp', ':', 1)
  END

Here is a solution using various tools: split_part, substr, left, and reverse.
create table t ( value varchar(100));
insert into t values
('Ava avtar 18.1.100-33_HF305143'),
('app agent 19.9.0.99 (root-2021-323)'),
('BOOST:1.3.0.0-12345 FUSE:2.9.4 ora_dw05_plm10'),
('BOOST:1.3.0.0-12345 FUSE:2.9.4 tar'),
('BOOST:1.3.0.0-12345 FUSE:2.9.4 scp'),
('BOOST:1.3.0.0-12345 FUSE:2.9.7 /usr/pgsql-10/bin/pg_dump'),
('BOOST:1.3.0.1-12345 CBFS 6.1 CVMountd.exe'),
('PP-19.9.0-13-18087'),
('______ ddrmaint 7.5.0-183'),
('________ app agent 4.5.0.0 (52)');
10 rows affected
select
  value,
  length(value),
  case left(value, 2)
    when 'Av' then split_part(value, '-', 1)
    when 'ap' then split_part(value, '(', 1)
    when 'BO' then case
      when value ~ 'BOOST:1(\.\d)+-\d+ FUSE' then left(value, 30)
      else left(value, 28)
    end
    when 'PP' then reverse(substr(reverse(value), strpos(reverse(value), '-') + 1, length(value)))
    when '__' then case
      when value ~ 'ddrmaint' then left(value, 21)
      else left(value, 26)
    end
    else value
  end as processed_value
from t;
value | length | processed_value
:------------------------------------------------------- | -----: | :---------------------------
Ava avtar 18.1.100-33_HF305143 | 30 | Ava avtar 18.1.100
app agent 19.9.0.99 (root-2021-323) | 35 | app agent 19.9.0.99
BOOST:1.3.0.0-12345 FUSE:2.9.4 ora_dw05_plm10 | 45 | BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.4 tar | 34 | BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.4 scp | 34 | BOOST:1.3.0.0-12345 FUSE:2.9.4
BOOST:1.3.0.0-12345 FUSE:2.9.7 /usr/pgsql-10/bin/pg_dump | 56 | BOOST:1.3.0.0-12345 FUSE:2.9.7
BOOST:1.3.0.1-12345 CBFS 6.1 CVMountd.exe | 41 | BOOST:1.3.0.1-12345 CBFS 6.1
PP-19.9.0-13-18087 | 18 | PP-19.9.0-13
______ ddrmaint 7.5.0-183 | 25 | ______ ddrmaint 7.5.0
________ app agent 4.5.0.0 (52) | 31 | ________ app agent 4.5.0.0
db<>fiddle here
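The fixed-length left() calls above are tied to these exact strings; a pattern-based variant may generalize better if new rows follow the same shapes. A minimal sketch, assuming the sample rows are representative of the data:
select
  value,
  case
    -- BOOST ... FUSE rows: keep the first two space-separated tokens
    when value ~ '^BOOST.*FUSE' then regexp_replace(value, '^(\S+\s+\S+).*', '\1')
    -- remaining BOOST rows (CBFS): keep the first three tokens
    when value ~ '^BOOST' then regexp_replace(value, '^(\S+\s+\S+\s+\S+).*', '\1')
    -- rows ending in a parenthesised suffix: cut at the '('
    when value ~ '\(' then trim(split_part(value, '(', 1))
    -- 'Ava avtar' rows: cut at the first '-'
    when value ~ '^Ava' then split_part(value, '-', 1)
    -- remaining rows (PP, ddrmaint): strip the trailing '-<digits>' segment
    else regexp_replace(value, '-[0-9]+$', '')
  end as processed_value
from t;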

Returning null individual values with postgres tablefunc crosstab()

I am trying to incorporate the null values within the returned lists, such that:
batch_id |test_name |test_value
-----------------------------------
10 | pH | 4.7
10 | Temp | 154
11 | pH | 4.8
11 | Temp | 152
12 | pH | 4.5
13 | Temp | 155
14 | pH | 4.9
14 | Temp | 152
15 | Temp | 149
16 | pH | 4.7
16 | Temp | 150
would return:
batch_id | pH |Temp
---------------------------------------
10 | 4.7 | 154
11 | 4.8 | 152
12 | 4.5 | <null>
13 | <null> | 155
14 | 4.9 | 152
15 | <null> | 149
16 | 4.7 | 150
However, it currently returns this:
batch_id | pH |Temp
---------------------------------------
10 | 4.7 | 154
11 | 4.8 | 152
12 | 4.5 | <null>
13 | 155 | <null>
14 | 4.9 | 152
15 | 149 | <null>
16 | 4.7 | 150
This is an extension of a prior question -
Can the categories in the postgres tablefunc crosstab() function be integers? - which led to this current query:
SELECT *
FROM crosstab('SELECT lab_tests_results.batch_id, lab_tests.test_name, lab_tests_results.test_result::FLOAT
FROM lab_tests_results, lab_tests
WHERE lab_tests.id=lab_tests_results.lab_test AND (lab_tests.test_name LIKE ''Test Name 1'' OR lab_tests.test_name LIKE ''Test Name 2'')
ORDER BY 1,2'
) AS final_result(batch_id VARCHAR, test_name_1 FLOAT, test_name_2 FLOAT);
I also know that I am not the first to ask this question generally, but I have yet to find a solution that works for these circumstances. For example, this one - How to include null values in `tablefunc` query in postgresql? - assumes the same Batch IDs each time. I do not want to specify the Batch IDs, but rather all that are available.
This leads into the other set of solutions I've found out there, which address a null list result from specified categories. Since I'm just taking what's already there, however, this isn't an issue. It's the null individual values causing the problem and resulting in a pivot table with values shifted to the left.
Any suggestions are much appreciated!
Edit: With Klin's help, I got it sorted out. Something to note is that the VALUES section must match the actual lab_tests.test_name values you're after, such that:
SELECT *
FROM crosstab(
$$
SELECT lab_tests_results.batch_id, lab_tests.test_name, lab_tests_results.test_result::FLOAT
FROM lab_tests_results, lab_tests
WHERE lab_tests.id = lab_tests_results.lab_test
AND (
lab_tests_results.lab_test = 1
OR lab_tests_results.lab_test = 2
OR lab_tests_results.lab_test = 3
OR lab_tests_results.lab_test = 4
OR lab_tests_results.lab_test = 5
OR lab_tests_results.lab_test = 50 )
ORDER BY 1 DESC, 2
$$,
$$
VALUES('Mash pH'),
('Sparge pH'),
('Final Lauter pH'),
('Wort pH'),
('Wort FAN'),
('Original Gravity'),
('Mash Temperature')
$$
) AS final_result(batch_id VARCHAR,
ph_mash FLOAT,
ph_sparge FLOAT,
ph_final_lauter FLOAT,
ph_wort FLOAT,
FAN_wort FLOAT,
original_gravity FLOAT,
mash_temperature FLOAT);
Thanks for the help!
Use the second form of the function:
crosstab(text source_sql, text category_sql) - Produces a "pivot table" with the value columns specified by a second query.
With an explicit category list, every output row gets one column per category, so a missing (batch_id, category) pair becomes NULL instead of the remaining values shifting left.
E.g.:
SELECT *
FROM crosstab(
$$
SELECT lab_tests_results.batch_id, lab_tests.test_name, lab_tests_results.test_result::FLOAT
FROM lab_tests_results, lab_tests
WHERE lab_tests.id=lab_tests_results.lab_test
AND (
lab_tests.test_name LIKE 'Test Name 1'
OR lab_tests.test_name LIKE 'Test Name 2')
ORDER BY 1,2
$$,
$$
VALUES('pH'), ('Temp')
$$
) AS final_result(batch_id VARCHAR, "pH" FLOAT, "Temp" FLOAT);
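To see why the category query fixes the shifted values, here is a self-contained sketch using the sample data from the question (table and column names are assumed; crosstab() requires the tablefunc extension):
create temp table lab (batch_id varchar, test_name text, test_value float);
insert into lab values
('10', 'pH', 4.7), ('10', 'Temp', 154),
('11', 'pH', 4.8), ('11', 'Temp', 152),
('12', 'pH', 4.5),
('13', 'Temp', 155),
('14', 'pH', 4.9), ('14', 'Temp', 152),
('15', 'Temp', 149),
('16', 'pH', 4.7), ('16', 'Temp', 150);

SELECT *
FROM crosstab(
$$ SELECT batch_id, test_name, test_value FROM lab ORDER BY 1, 2 $$,
$$ VALUES ('pH'), ('Temp') $$
) AS final_result(batch_id VARCHAR, "pH" FLOAT, "Temp" FLOAT);
-- batches 12, 13 and 15 now show NULL in the missing column
-- instead of their values shifting left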

Best way to join multiple small tables with a big table in Spark SQL

I'm joining multiple tables using Spark SQL. One of the tables is very big and the others are small (10-20 records). Really, I want to replace values in the biggest table using the other tables, which contain key-value pairs.
i.e.
Bigtable:
| Col 1 | Col 2 | Col 3 | Col 4 | ....
--------------------------------------
| A1 | B1 | C1 | D1 | ....
| A2 | B1 | C2 | D2 | ....
| A1 | B1 | C3 | D2 | ....
| A2 | B2 | C3 | D1 | ....
| A1 | B2 | C2 | D1 | ....
.
.
.
.
.
Table2:
| Col 1 | Col 2
----------------
| A1 | 1a
| A2 | 2a
Table3:
| Col 1 | Col 2
----------------
| B1 | 1b
| B2 | 2b
Table4:
| Col 1 | Col 2
----------------
| C1 | 1c
| C2 | 2c
| C3 | 3c
Table5:
| Col 1 | Col 2
----------------
| D1 | 1d
| D2 | 2d
Expected table is
| Col 1 | Col 2 | Col 3 | Col 4 | ....
--------------------------------------
| 1a | 1b | 1c | 1d | ....
| 2a | 1b | 2c | 2d | ....
| 1a | 1b | 3c | 2d | ....
| 2a | 2b | 3c | 1d | ....
| 1a | 2b | 2c | 1d | ....
.
.
.
.
.
My question is: which is the best way to join the tables? (Assume there are 100 or more small tables.)
1) Collecting the small dataframes, transforming them into maps, broadcasting the maps, and transforming the big dataframe in a single step
bigdf.transform(ds.map(row => (small1.get(row.col1),.....)
2) Broadcasting the tables and joining using the select method.
spark.sql("
select *
from bigtable
left join small1 using(id1)
left join small2 using(id2)")
3) Broadcasting the tables and concatenating multiple joins
bigtable.join(broadcast(small1), bigtable("col1") === small1("col1")).join...
Thanks in advance
You might:
broadcast all the small tables (done automatically by setting spark.sql.autoBroadcastJoinThreshold slightly above the small tables' size in bytes), then
run a SQL query that joins the big table, such as:
val df = spark.sql("""
select *
from bigtable
left join small1 using(id1)
left join small2 using(id2)""")
EDIT:
Choosing between the SQL and the Spark "dataframe" syntax:
The SQL syntax is more readable and less verbose than the Spark syntax (from a database user's perspective).
From a developer's perspective, the dataframe syntax may be more readable.
The main advantage of the "dataset" syntax is that the compiler can catch some errors at compile time; with any string-based syntax, such as SQL or column names (col("mycol")), mistakes are only spotted at run time.
The best way, as already written in the other answers, is to broadcast all the small tables. This can also be done in SparkSQL using the BROADCAST hint:
val df = spark.sql("""
select /*+ BROADCAST(t2, t3) */
*
from bigtable t1
left join small1 t2 using(id1)
left join small2 t3 using(id2)
""")
If the data in your small tables is under the threshold size and the physical files are in Parquet format, Spark will automatically broadcast the small tables. But if you are reading the data from other sources (JDBC, PostgreSQL, etc.), Spark sometimes does not broadcast the table automatically.
If you know that the tables are small and their size is not expected to increase (as with lookup tables), you can explicitly broadcast the data frame or table, and in this way you can efficiently join the larger table with the small tables.
You can verify that the small table is being broadcast using the explain command on the data frame, or from the Spark UI.
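For example, in Spark SQL the plan can be inspected directly; a BroadcastHashJoin node confirms the hint took effect (a sketch; table and column names are assumed):
EXPLAIN
SELECT /*+ BROADCAST(small1) */ *
FROM bigtable
LEFT JOIN small1 USING (id1);
-- the physical plan should show BroadcastHashJoin rather than SortMergeJoin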

Postgresql - Split a string by hyphen and group by the second part of the string

I have the data stored in the below format:
resource_name  | readiops | writeiops
---------------+----------+----------
90832-00:29:3E |     3.21 |      4.00
90833-00:30:3E |     2.12 |      3.45
90834-00:31:3E |     2.33 |      2.78
90832-00:29:3E |     4.21 |      6.00
I want to split the resource_name column on "-" and group by the second part of the split, so that the above data looks like below:
array_serial | ldev     | readiops  | writeiops
-------------+----------+-----------+----------
90832        | 00:29:3E | 3.21,4.21 | 4.00,6.00
90833        | 00:30:3E | 2.12      | 3.45
90834        | 00:31:3E | 2.33      | 2.78
The resource_name is split into array_serial and ldev.
I have tried the below query, only to get an error:
SELECT
SUBSTRING(resource_name, 0, STRPOS(resource_name, ':')) AS array_serial,
SUBSTRING(resource_name,1, STRPOS(resource_name, ':')) AS ldev
FROM table
GROUP BY SUBSTRING(resource_name, 0, STRPOS(resource_name, ':'))
I am new to Postgres, so kindly help.
Use split_part():
with my_table(resource_name, readiops, writeiops) as (
values
('90832-00:29:3E', 3.21, 4.00),
('90833-00:30:3E', 2.12, 3.45),
('90834-00:31:3E', 2.33, 2.78),
('90832-00:29:3E', 4.21, 6.00)
)
select
split_part(resource_name::text, '-', 1) as array_serial,
split_part(resource_name::text, '-', 2) as ldev,
string_agg(readiops::text, ',') as readiops,
string_agg(writeiops::text, ',') as writeiops
from my_table
group by 1, 2;
array_serial | ldev | readiops | writeiops
--------------+----------+-----------+-----------
90832 | 00:29:3E | 3.21,4.21 | 4.00,6.00
90833 | 00:30:3E | 2.12 | 3.45
90834 | 00:31:3E | 2.33 | 2.78
(3 rows)
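One caveat: string_agg() concatenates in an unspecified order unless you add ORDER BY inside the aggregate. If the order of the collected values matters, a variant like the following (same query, only the aggregates changed) pins it down:
select
  split_part(resource_name::text, '-', 1) as array_serial,
  split_part(resource_name::text, '-', 2) as ldev,
  -- order the aggregated values so the output is deterministic
  string_agg(readiops::text, ',' order by readiops) as readiops,
  string_agg(writeiops::text, ',' order by writeiops) as writeiops
from my_table
group by 1, 2;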

How to handle NaTs with pandas sqlalchemy and psycopg2

I have a dataframe with NaTs, like so, that gives me DataError: (psycopg2.DataError) invalid input syntax for type timestamp: "NaT" when I try inserting the values into a Postgres DB.
The dataframe
import datetime

import pandas as pd
import sqlalchemy
from sqlalchemy import MetaData
from sqlalchemy.dialects.postgresql import insert
tst_df = pd.DataFrame({'colA': ['a', 'b', 'c', 'a', 'z', 'q'],
                       'colB': pd.date_range(end=datetime.datetime.now(), periods=6),
                       'colC': ['a1', 'b2', 'c3', 'a4', 'z5', 'q6']})
tst_df.loc[5, 'colB'] = pd.NaT
insrt_vals = tst_df.to_dict(orient='records')
engine = sqlalchemy.create_engine("postgresql://user:password@localhost/postgres")
connect = engine.connect()
meta = MetaData(bind=engine)
meta.reflect(bind=engine)
table = meta.tables['tstbl']
insrt_stmnt = insert(table).values(insrt_vals)
do_nothing_stmt = insrt_stmnt.on_conflict_do_nothing(index_elements=['colA','colB'])
The code generating the error
results = engine.execute(do_nothing_stmt)
DataError: (psycopg2.DataError) invalid input syntax for type timestamp: "NaT"
LINE 1: ...6-12-18T09:54:05.046965'::timestamp, 'z5'), ('q', 'NaT'::tim...
One possibility mentioned here is to replace the NaTs with Nones, but as the previous author said, it seems a bit hackish.
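For context on why that replacement works at all: Postgres accepts NULL for a missing timestamp but rejects the literal string 'NaT' (a minimal sketch against a throwaway table, not the table from the question):
create temp table tstbl_demo (colA text, colB timestamp, colC text);
insert into tstbl_demo values ('q', NULL, 'q6');   -- works: NULL is a valid timestamp value
insert into tstbl_demo values ('q', 'NaT', 'q6');  -- fails: invalid input syntax for type timestamp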
sqlalchemy 1.1.4
pandas 0.19.1
psycopg2 2.6.2 (dt dec pq3 ext lo64)
Did you try to use the Pandas to_sql method?
It works for me with a MySQL DB (I presume it'll also work for PostgreSQL):
In [50]: tst_df
Out[50]:
colA colB colC
0 a 2016-12-14 19:11:36.045455 a1
1 b 2016-12-15 19:11:36.045455 b2
2 c 2016-12-16 19:11:36.045455 c3
3 a 2016-12-17 19:11:36.045455 a4
4 z 2016-12-18 19:11:36.045455 z5
5 q NaT q6
In [51]: import pymysql
...: import sqlalchemy as sa
...:
In [52]:
In [52]: db_connection = 'mysql+pymysql://user:password@mysqlhost/db_name'
...:
In [53]: engine = sa.create_engine(db_connection)
...: conn = engine.connect()
...:
In [54]: tst_df.to_sql('zzz', conn, if_exists='replace', index=False)
On the MySQL side:
mysql> select * from zzz;
+------+---------------------+------+
| colA | colB | colC |
+------+---------------------+------+
| a | 2016-12-14 19:11:36 | a1 |
| b | 2016-12-15 19:11:36 | b2 |
| c | 2016-12-16 19:11:36 | c3 |
| a | 2016-12-17 19:11:36 | a4 |
| z | 2016-12-18 19:11:36 | z5 |
| q | NULL | q6 |
+------+---------------------+------+
6 rows in set (0.00 sec)
PS: unfortunately I don't have PostgreSQL for testing.

Deployment issue in Fabric for code using camel-cxf and camel-http

I am getting the following error while trying to deploy the test-ext feature of the test-ext-profile in JBoss Fuse Fabric. The other feature of the same profile, ticktock, deploys and works fine. I am trying to deploy the two profiles into the child container with the command "container-change-profile test-child-container-1 feature-camel test-ext-profile". Please help.
----------------------------------------------------------------------------------------
ERROR --
-------------------------------------------------------------------------------------------------
2015-01-05 16:06:47,125 | INFO | admin-4-thread-1 | FabricConfigAdminBridge | figadmin.FabricConfigAdminBridge 173 | 67 - io.fabric8.fabric-configadmin - 1.0.0.redhat-379 | Updating configuration io.fabric8.agent
2015-01-05 16:06:47,140 | INFO | admin-4-thread-1 | FabricConfigAdminBridge | figadmin.FabricConfigAdminBridge 142 | 67 - io.fabric8.fabric-configadmin - 1.0.0.redhat-379 | Deleting configuration org.ops4j.pax.logging
2015-01-05 16:06:47,140 | INFO | o.fabric8.agent) | DeploymentAgent | io.fabric8.agent.DeploymentAgent 243 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | DeploymentAgent updated with {hash=ProfileImpl[id='default', version='1.0']-, org.ops4j.pax.url.mvn.defaultrepositories=file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/system#snapshots#id=karaf-default,file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/local-repo#snapshots#id=karaf-local, feature.karaf=karaf, feature.jolokia=jolokia, resolve.optional.imports=false, feature.fabric-core=fabric-core, fabric.zookeeper.pid=io.fabric8.agent, org.ops4j.pax.url.mvn.repositories=http://repo1.maven.org/maven2#id=central, https://repo.fusesource.com/nexus/content/groups/public#id=fusepublic, https://repository.jboss.org/nexus/content/repositories/public#id=jbosspublic, https://repo.fusesource.com/nexus/content/repositories/releases#id=jbossreleases, https://repo.fusesource.com/nexus/content/groups/ea#id=jbossearlyaccess, http://repository.springsource.com/maven/bundles/release#id=ebrreleases, http://repository.springsource.com/maven/bundles/external#id=ebrexternal, https://oss.sonatype.org/content/groups/scala-tools#id=scala, repository.fabric8=mvn:io.fabric8/fabric8-karaf/1.0.0.redhat-379/xml/features, patch.repositories=https://repo.fusesource.com/nexus/content/repositories/releases, https://repo.fusesource.com/nexus/content/groups/ea, service.pid=io.fabric8.agent, feature.fabric-jaas=fabric-jaas, feature.fabric-agent=fabric-agent, feature.fabric-web=fabric-web, feature.fabric-git-server=fabric-git-server, feature.fabric-git=fabric-git, repository.karaf-standard=mvn:org.apache.karaf.assemblies.features/standard/2.3.0.redhat-610379/xml/features, optional.ops4j-base-lang=mvn:org.ops4j.base/ops4j-base-lang/1.4.0}
2015-01-05 16:07:12,344 | INFO | o.fabric8.agent) | DeploymentAgent | io.fabric8.agent.DeploymentAgent 243 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | DeploymentAgent updated with {feature.ticktock=ticktock, hash=ProfileImpl[id='test-ext-profile', version='1.0']----, org.ops4j.pax.url.mvn.defaultrepositories=file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/system#snapshots#id=karaf-default,file:C:\manish\Work - Consulting\installers\jboss-fuse-6.1.0.redhat-379/local-repo#snapshots#id=karaf-local, feature.karaf=karaf, repository.file:c:_goutam_osgitest2_unsolr_features.xml=file:C:/manish/osgitest2/testsolr/features.xml, feature.jolokia=jolokia, repository.karaf-spring=mvn:org.apache.karaf.assemblies.features/spring/2.3.0.redhat-610379/xml/features, feature.camel-blueprint=camel-blueprint, resolve.optional.imports=false, feature.camel-core=camel-core, feature.test-ext=test-ext, feature.camel-cxf_0.0.0=camel-cxf/0.0.0, feature.fabric-core=fabric-core, repository.karaf-enterprise=mvn:org.apache.karaf.assemblies.features/enterprise/2.3.0.redhat-610379/xml/features, fabric.zookeeper.pid=io.fabric8.agent, feature.fabric-camel=fabric-camel, org.ops4j.pax.url.mvn.repositories=http://repo1.maven.org/maven2#id=central, https://repo.fusesource.com/nexus/content/groups/public#id=fusepublic, https://repository.jboss.org/nexus/content/repositories/public#id=jbosspublic, https://repo.fusesource.com/nexus/content/repositories/releases#id=jbossreleases, https://repo.fusesource.com/nexus/content/groups/ea#id=jbossearlyaccess, http://repository.springsource.com/maven/bundles/release#id=ebrreleases, http://repository.springsource.com/maven/bundles/external#id=ebrexternal, https://oss.sonatype.org/content/groups/scala-tools#id=scala, repository.fabric8=mvn:io.fabric8/fabric8-karaf/1.0.0.redhat-379/xml/features, feature.fabric-jaas=fabric-jaas, patch.repositories=https://repo.fusesource.com/nexus/content/repositories/releases, https://repo.fusesource.com/nexus/content/groups/ea, service.pid=io.fabric8.agent, feature.fabric-agent=fabric-agent, feature.fabric-web=fabric-web, feature.fabric-git-server=fabric-git-server, feature.camel-http_0.0.0=camel-http/0.0.0, feature.fabric-git=fabric-git, repository.apache-camel=mvn:org.apache.camel.karaf/apache-camel/2.12.0.redhat-610379/xml/features, repository.karaf-standard=mvn:org.apache.karaf.assemblies.features/standard/2.3.0.redhat-610379/xml/features, optional.ops4j-base-lang=mvn:org.ops4j.base/ops4j-base-lang/1.4.0, attribute.parents=feature-camel}
2015-01-05 16:07:13,141 | ERROR | agent-1-thread-1 | DeploymentAgent | .fabric8.agent.DeploymentAgent$2 255 | 60 - io.fabric8.fabric-agent - 1.0.0.redhat-379 | Unable to update agent
org.osgi.service.resolver.ResolutionException: Unable to resolve dummy/0.0.0: missing requirement [dummy/0.0.0] osgi.identity; osgi.identity=test-ext; type=karaf.feature; version=0
at org.apache.felix.resolver.Candidates.populateResource(Candidates.java:285)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at org.apache.felix.resolver.Candidates.populate(Candidates.java:153)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:148)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentBuilder.resolve(DeploymentBuilder.java:226)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentAgent.doUpdate(DeploymentAgent.java:521)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at io.fabric8.agent.DeploymentAgent$2.run(DeploymentAgent.java:252)[60:io.fabric8.fabric-agent:1.0.0.redhat-379]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)[:1.7.0_71]
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_71]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_71]
-------------------------------------------------------------------------------------------------
THE PROFILE DISPLAY DETAILS ARE AS FOLLOWS -
-------------------------------------------------------------------------------------------------
JBossFuse:karaf#root> profile-display test-ext-profile
Profile id: test-ext-profile
Version : 1.0
Attributes:
parents: feature-camel
Containers: test-child-container-1
Container settings
----------------------------
Repositories :
file:C:/manish/osgitest2/testsolr/features.xml
Features :
camel-http/0.0.0
camel-cxf/0.0.0
test-ext
ticktock
Configuration details
----------------------------
Other resources
----------------------------
-------------------------------------------------------------------------------------------------
THE features.xml LOOKS LIKE THIS -
-------------------------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<features name="my-features">
<feature name="ticktock">
<bundle>file:C:/manish/osgitest2/testsolr/osgitest_tick2.jar</bundle>
<bundle>file:C:/manish/osgitest2/testsolr/osgitest_tock2.jar</bundle>
</feature>
<feature name="test-ext">
<bundle>file:C:/manish/osgitest2/testsolr/standard-ext-api-1.0.0-SNAPSHOT.jar</bundle>
</feature>
</features>
-------------------------------------------------------------------------------------------------
MANIFEST.MF of standard-ext-api-1.0.0-SNAPSHOT.jar is as below. This jar uses camel-cxf and camel-http.
-------------------------------------------------------------------------------------------------
Manifest-Version: 1.0
Bnd-LastModified: 1420491685490
Build-Jdk: 1.7.0_71
Built-By: manish
Bundle-ManifestVersion: 2
Bundle-Name: Camel Blueprint Route for test ext Query
Bundle-SymbolicName: standard-ext-api
Bundle-Version: 1.0.0.SNAPSHOT
Created-By: Apache Maven Bundle Plugin
Export-Package: org.apache.cxf;uses:="org.apache.cxf.feature,org.apache.
cxf.interceptor,org.apache.cxf.common.i18n,org.apache.cxf.common.loggin
g,org.apache.cxf.common.util,org.apache.cxf.common.classloader";version
="2.7.0.redhat-610379"
Import-Package: javax.ws.rs;version="[2.0,3)",javax.ws.rs.core;version="
[2.0,3)",javax.xml.bind.annotation,org.apache.camel;version="[2.12,3)",
org.apache.camel.builder;version="[2.12,3)",org.apache.camel.model;vers
ion="[2.12,3)",org.apache.camel.processor.aggregate;version="[2.12,3)",
org.apache.cxf.common.classloader;version="[2.7,3)",org.apache.cxf.comm
on.i18n;version="[2.7,3)",org.apache.cxf.common.logging;version="[2.7,3
)",org.apache.cxf.common.util;version="[2.7,3)",org.apache.cxf.feature;
version="[2.7,3)",org.apache.cxf.interceptor;version="[2.7,3)",org.osgi
.service.blueprint;version="[1.0.0,2.0.0)"
Tool: Bnd-1.50.0
This problem was solved after some manual intervention, not entirely by using the OBR of fabric8. The appropriate features need to be included in the POM file, and they also need to be installed through the POM file. This was a pretty involved exercise; Red Hat should provide a better way in JBoss Fuse to bring down the deployment time.
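For reference, Karaf also lets a feature declare the features it depends on directly in features.xml, so the resolver installs camel-cxf and camel-http together with the bundle. A sketch only, not the exact fix used here:
<feature name="test-ext">
    <!-- declare the Camel features this bundle needs, so the resolver
         pulls them in with the bundle (version constraints omitted) -->
    <feature>camel-cxf</feature>
    <feature>camel-http</feature>
    <bundle>file:C:/manish/osgitest2/testsolr/standard-ext-api-1.0.0-SNAPSHOT.jar</bundle>
</feature>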