Good morning,
I'm looking for some help, please; I'm only a beginner.
I am using CXF DOSGi from the DOSGi Apache Karaf Feature distribution.
I want to make transparent use of services between two remote machines, so I have a Karaf container on each of the two machines.
I tested this example: first with two Karaf containers hosted on the same machine, then I changed the configuration to test with two containers hosted on two different remote machines. And it works great!
So now I want to do the same thing to export my web services. I am using Spring DM, so I do this on the server side:
<osgi:service id="osgi-service" ref="myservice" interface="org.apache.camel.Endpoint">
    <osgi:service-properties>
        <entry key="name" value="service"/>
        <entry key="service.exported.interfaces" value="*"/>
    </osgi:service-properties>
</osgi:service>
I did the installation as in the tutorial, with CXF DOSGi version 1.6, but I get this error:
16:25:53,256 | ERROR | pool-21-thread-3 | w.service.RemoteServiceAdminCore 193 | 184 - cxf-dosgi-ri-dsw-cxf - 1.6.0 | failed to create server for interface org.apache.camel.Endpoint
java.lang.NullPointerException
at java.lang.reflect.Array.newArray(Native Method)[:1.7.0_79]
at java.lang.reflect.Array.newInstance(Array.java:70)[:1.7.0_79]
at org.apache.cxf.aegis.type.TypeUtil.getTypeRelatedClass(TypeUtil.java:259)
at org.apache.cxf.aegis.type.AbstractTypeCreator.createTypeForClass(AbstractTypeCreator.java:108)
at org.apache.cxf.aegis.type.AbstractTypeCreator.createType(AbstractTypeCreator.java:402)
at org.apache.cxf.aegis.type.basic.BeanTypeInfo.getType(BeanTypeInfo.java:192)
at org.apache.cxf.aegis.type.basic.BeanType.getDependencies(BeanType.java:547)
at org.apache.cxf.aegis.databinding.AegisDatabinding.addDependencies(AegisDatabinding.java:394)
at org.apache.cxf.aegis.databinding.AegisDatabinding.addDependencies(AegisDatabinding.java:399)
at org.apache.cxf.aegis.databinding.AegisDatabinding.addDependencies(AegisDatabinding.java:399)
at org.apache.cxf.aegis.databinding.AegisDatabinding.initializeMessage(AegisDatabinding.java:371)
at org.apache.cxf.aegis.databinding.AegisDatabinding.initializeOperation(AegisDatabinding.java:283)
at org.apache.cxf.aegis.databinding.AegisDatabinding.initialize(AegisDatabinding.java:242)
at org.apache.cxf.service.factory.AbstractServiceFactoryBean.initializeDataBindings(AbstractServiceFactoryBean.java:86)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.buildServiceFromClass(ReflectionServiceFactoryBean.java:490)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.initializeServiceModel(ReflectionServiceFactoryBean.java:550)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.create(ReflectionServiceFactoryBean.java:265)
at org.apache.cxf.frontend.AbstractWSDLBasedEndpointFactory.createEndpoint(AbstractWSDLBasedEndpointFactory.java:102)
at org.apache.cxf.frontend.ServerFactoryBean.create(ServerFactoryBean.java:159)
at org.apache.cxf.dosgi.dsw.handlers.AbstractPojoConfigurationTypeHandler.createServerFromFactory(AbstractPojoConfigurationTypeHandler.java:191)
at org.apache.cxf.dosgi.dsw.handlers.PojoConfigurationTypeHandler.createServer(PojoConfigurationTypeHandler.java:119)
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminCore.exportInterfaces(RemoteServiceAdminCore.java:184)
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminCore.exportService(RemoteServiceAdminCore.java:140)
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminInstance$1.run(RemoteServiceAdminInstance.java:59)
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminInstance$1.run(RemoteServiceAdminInstance.java:57)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_79]
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminInstance.exportService(RemoteServiceAdminInstance.java:57)[184:cxf-dosgi-ri-dsw-cxf:1.6.0]
at org.apache.cxf.dosgi.dsw.service.RemoteServiceAdminInstance.exportService(RemoteServiceAdminInstance.java:41)[184:cxf-dosgi-ri-dsw-cxf:1.6.0]
at org.apache.cxf.dosgi.topologymanager.exporter.TopologyManagerExport.exportServiceUsingRemoteServiceAdmin(TopologyManagerExport.java:185)[183:cxf-dosgi-ri-topology-manager:1.6.0]
at org.apache.cxf.dosgi.topologymanager.exporter.TopologyManagerExport.doExportService(TopologyManagerExport.java:168)[183:cxf-dosgi-ri-topology-manager:1.6.0]
at org.apache.cxf.dosgi.topologymanager.exporter.TopologyManagerExport$3.run(TopologyManagerExport.java:143)[183:cxf-dosgi-ri-topology-manager:1.6.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_79]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_79]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_79]
Do you have any idea what is wrong?
Ouch... what are you doing with 1.4-SNAPSHOT? First, it is not a release, and second, it is quite old.
Another thing that looks suspicious is service.exported.interfaces=myInterface. It should be a fully qualified Java interface name. To start with, try service.exported.interfaces=* instead.
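Once the wildcard export works, you can narrow it to the concrete interface. A minimal sketch, where com.example.MyService is a hypothetical fully qualified interface name standing in for your own:

<osgi:service id="osgi-service" ref="myservice" interface="com.example.MyService">
    <osgi:service-properties>
        <entry key="service.exported.interfaces" value="com.example.MyService"/>
    </osgi:service-properties>
</osgi:service>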
You should start with my CXF DOSGi tutorial. The code there should work out of the box, and you can then add your changes to the config. That is easier than starting completely from scratch.
I have the code below for connecting to Kafka using the Python Beam SDK. I know that the ReadFromKafka transform is run in a Java SDK harness (Docker container), but I have not been able to figure out how to make ssl.truststore.location and ssl.keystore.location accessible inside the SDK harness's Docker environment. The job_endpoint argument points to java -jar beam-runners-flink-1.10-job-server-2.27.0.jar --flink-master localhost:8081.
pipeline_args.extend([
    '--job_name=paul_test',
    '--runner=PortableRunner',
    '--sdk_location=container',
    '--job_endpoint=localhost:8099',
    '--streaming',
    "--environment_type=DOCKER",
    f"--sdk_harness_container_image_overrides=.*java.*,{my_beam_sdk_docker_image}:{my_beam_docker_tag}",
])

with beam.Pipeline(options=PipelineOptions(pipeline_args)) as pipeline:
    kafka = pipeline | ReadFromKafka(
        consumer_config={
            "bootstrap.servers": "bootstrap-server:17032",
            "security.protocol": "SSL",
            "ssl.truststore.location": "/opt/keys/client.truststore.jks",  # how do I make this available to the Java SDK harness?
            "ssl.truststore.password": "password",
            "ssl.keystore.type": "PKCS12",
            "ssl.keystore.location": "/opt/keys/client.keystore.p12",  # how do I make this available to the Java SDK harness?
            "ssl.keystore.password": "password",
            "group.id": "group",
            "basic.auth.credentials.source": "USER_INFO",
            "schema.registry.basic.auth.user.info": "user:password"
        },
        topics=["topic"],
        max_num_records=2,
        # expansion_service="localhost:56938"
    )
    kafka | beam.Map(lambda x: print(x))
I tried specifying the image override option as --sdk_harness_container_image_overrides='.*java.*,beam_java_sdk:latest' - where beam_java_sdk:latest is a Docker image I based on apache/beam_java11_sdk:2.27.0 and that pulls the credentials in its entrypoint.sh. But Beam does not appear to use it; I see
INFO org.apache.beam.runners.fnexecution.environment.DockerEnvironmentFactory - Still waiting for startup of environment apache/beam_java11_sdk:2.27.0 for worker id 1-1
in the logs, which is soon inevitably followed by
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /opt/keys/client.keystore.p12 of type PKCS12
In conclusion, my question is this: in Apache Beam, is it possible to make files available inside the Java SDK harness Docker container from the Python Beam SDK? If so, how might it be done?
Many thanks.
Currently, there is no straightforward way to achieve this. There is ongoing discussion, and there are tracking issues to provide support for this kind of expansion service customization (see here, here, BEAM-12538 and BEAM-12539). That is the short answer.
The long answer is yes, you can do that. You would have to copy & paste ExpansionService.java into your codebase and build your custom expansion service, where you specify the default environment (DOCKER) and the default environment config (your image) here. You then have to run this expansion service manually and specify its address using the expansion_service parameter of ReadFromKafka.
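Roughly, the launch could look like this — a sketch only, where my-custom-expansion-service.jar is a hypothetical name for the jar you build from the modified ExpansionService, and 8097 is an arbitrary free port (the stock org.apache.beam.sdk.expansion.service.ExpansionService takes the port as its first argument):

# start the customized expansion service (jar name is hypothetical)
java -cp my-custom-expansion-service.jar \
    org.apache.beam.sdk.expansion.service.ExpansionService 8097

Then uncomment the expansion_service argument shown in the pipeline above and point it at localhost:8097, so ReadFromKafka uses your service instead of spawning the default one.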
I am using Neo4j Enterprise 3.0.3 for Windows. Following the Operations Manual 3.0, I have installed the Neo4j service with bin\neo4j install-service, but I can't start it with bin\neo4j start. It said:
Invoke-Neo4j : Failed to start service 'Neo4j Graph Database - neo4j (neo4j)'.
And I can't start the Neo4j service from the Windows Services panel either. Has anyone encountered this case before?
I had the same problem: I am using Neo4j Community 3.1.2 for Windows and installed the service with the neo4j.bat file without any problems. Then I wanted to start the service with neo4j.bat and got the same error as you.
I found a solution that worked for me. My Neo4j files were in a folder whose path contained spaces (C:\Program Files\Neo4j), so I moved the folder one level up (C:\Neo4j).
After that I could start the service without problems.
Maybe this solution helps.
I am running Neo4j on Windows, and in my case the crux of the issue was an incompatibility between the installed Java version (32-bit) and the OS version (64-bit). The biggest clue that led me to this was the following set of lines in the neo4j-service.2018-08-03 log file:
[2018-08-03 14:55:42] [info] [ 1432] Starting service...
[2018-08-03 14:55:42] [error] [ 1432] %1 is not a valid Win32 application.
[2018-08-03 14:55:42] [error] [ 1432] Failed creating java C:\JavaNew\bin\server\jvm.dll
[2018-08-03 14:55:42] [error] [ 1432] %1 is not a valid Win32 application.
[2018-08-03 14:55:42] [error] [ 1432] ServiceStart returned 1
There are a fair number of potential issues, and I have made an attempt to compile all of them here.
Windows services cannot deal with service names in folders that have spaces, especially if there is another folder with the same name as the one with spaces.
For example, C:\Program Files... will have issues if C:\Program\Something... also exists.
To work around this, I put Neo4j in the root folder C:\Neo4j.
Get-Java.ps1 (under the ..\bin\Neo4j-Management folder) looks in the path variables for JAVA_HOME (usually found in *nix environments). If it does not find it there, it keeps looking in the registry, and finally throws up its hands!
To deal with this, I simply put in a JAVA_HOME path variable (see the sketch after this step). For good measure, I also uninstalled Java and re-installed it in the root folder under C:\JavaNew.
In retrospect, this step is probably not part of the problem and hence can be ignored, but I am leaving it here for completeness' sake.
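Setting the variable from a command prompt is enough — a sketch, assuming Java was re-installed under C:\JavaNew as described above:

setx JAVA_HOME "C:\JavaNew"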
Invoke-Neo4j.ps1 (also under the ..\bin\Neo4j-Management folder) has code that determines whether the OS is 32-bit or 64-bit. Based on this, it determines whether it should run prunsrv-i386.exe (32-bit) or prunsrv-amd64.exe (64-bit).
This has to match the installed Java version.
Upon running java -XshowSettings:all and inspecting the sun.arch.data.model value (32, in my case), I realized that my OS is 64-bit but my Java version is 32-bit.
To deal with this, I put in the code below (very klugey!). I am sure there are much better ways to get to the same outcome, but this is what I used.
switch ( (Get-WMIObject -Class Win32_Processor | Select-Object -First 1).AddressWidth ) {
    32 { $PrunSrvName = 'prunsrv-i386.exe' }   # 4 bytes = 32-bit
    #64 { $PrunSrvName = 'prunsrv-amd64.exe' } # 8 bytes = 64-bit -- COMMENTED as a workaround!!!
    64 { $PrunSrvName = 'prunsrv-i386.exe' }   # 8 bytes = 64-bit OS, but force the 32-bit binary to match 32-bit Java
}
Now uninstall the Neo4j service, reinstall it, and start it, as shown below.
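Using the standard batch commands (run from the Neo4j home folder):

bin\neo4j uninstall-service
bin\neo4j install-service
bin\neo4j start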
Hope this works for you.
neo4j console
Posting for recent versions (4.x and later).
I had the same issue using neo4j start; neo4j console is the command I was looking for. It brings up the web-based graph interface that acts as an interactive tutorial.
I had the same problem: after Neo4j had worked for a few weeks, it stopped working (without any change on my part).
I set JAVA_HOME, uninstalled and re-installed, and now it works.
neo4j-enterprise-3.3.4
I was also having a weird issue: there was no error, but the Neo4j service did not start.
[xx@ss1 bin]$ ./neo4j console
[xx@ss1 bin]$ .
The problem was with the permissions on the Java directory. I tried
chmod -R 777 jdk_directory
and the problem got solved.
I really want to study how Restcomm works in Clearwater as a Telephony Application Server.
I followed the guide at:
http://telestax.com/wp-content/uploads/2013/12/ClearWater-RestComm-Integration-2013.pdf
But seemingly the version of Restcomm in this article is too old (TelScale-Restcomm-JBoss-AS7-7.1.2-GA), and I am using a newer version (Restcomm-JBoss-AS7-7.7.0.900).
I could not follow the guide in this article because of some configuration differences between the two versions.
I set up Clearwater successfully; I could make a SIP call in Clearwater.
When I set up Restcomm (version Restcomm-JBoss-AS7-7.7.0.900),
I changed the local-address of the media server in the file standalone/deployments/restcomm.war/WEB-INF/conf/restcomm.xml
as follows:
<media-server-manager>
...
<local-address>192.168.0.117</local-address>
...
</media-server-manager>
(192.168.0.117 is my local IP address)
I did not change the references to 127.0.0.1:8080 in the restcomm.xml file to point to 192.168.0.117:8180,
because there are no references to 127.0.0.1:8080.
I think that may be a difference between the two versions.
I also did not edit the JAVA_OPTS in the bin/standalone.conf file, because I did not understand that step.
I edited the file mediaserver/deploy/server-beans.xml as follows:
<property name="bindAddress">192.168.0.117</property>
<property name="localBindAddress">127.0.0.1</property>
<property name="externalAddress"><null/></property>
<property name="localNetwork">192.168.0.0</property>
<property name="localSubnet">255.255.255.0</property>
After that, I started the media server:
$ cd ${JBOSS_HOME}/mediaserver/bin
$ ./run.sh
The media server started successfully.
Then I started the Restcomm JBoss server:
$ cd ${JBOSS_HOME}/bin
$ sudo ./standalone.sh -Djboss.socket.binding.port-offset=100 -b 192.168.0.117
It got errors on startup (screenshot not included).
But the JBoss server still works when I go to http://192.168.0.117:8180.
However, I cannot access the Restcomm management interface.
I also tried to modify some settings as in the article:
- Modify the default app standalone/deployments/restcomm.war/demos/hello-play.xml:
<Response>
    <Play>http://192.168.0.117:8180/restcomm/audio/demo-prompt.wav</Play>
</Response>
- Configure the IMS core through the Ellis configuration file:
{
"Restcomm" :
"<InitialFilterCriteria><Priority>1</Priority><TriggerPoint> <ConditionTypeCNF></ConditionTypeCNF><SPT><ConditionNegated>0</ConditionNegated><Group>0</Group><Method>INVITE</Method><Extension></Extension></SPT></TriggerPoint><ApplicationServer><ServerName>sip:192.168.0.117:5180</ServerName><DefaultHandling>0</DefaultHandling></ApplicationServer></InitialFilterCriteria>"
}
- Bind the number to the default app:
curl -X POST http://ACae6e420f425248d6a26948c17a9e2acf:77f8c12cc7b8f8423e5c38b035249166@192.168.0.117:8180/restcomm/2012-04-24/Accounts/ACae6e420f425248d6a26948c17a9e2acf/IncomingPhoneNumbers.json -d "PhoneNumber=4321" -d "VoiceUrl=http://192.168.0.117:8180/restcomm/demos/hello-play.xml"
It got the error:
Those are my problems.
Thank you very much for supporting me.
Best Regards,
Indeed, those steps are way too old and probably won't work on the new version.
I would recommend starting Restcomm with Docker instead and configuring the JVM options and port offset (see http://docs.telestax.com/restcomm-docker-environment-variables/) in the docker run command.
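Something along these lines — a sketch only; the image name and port mappings are assumptions based on the standard Restcomm Docker image, and the environment variables from the linked page go in as -e flags:

# HTTP on 8080, SIP on 5080; add -e VAR=value flags per the linked docs
docker run -d --name restcomm \
    -p 8080:8080 \
    -p 5080:5080/udp \
    restcomm/restcomm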
The rest of the description for configuring Clearwater should still be valid.
I need to monitor WAS Liberty profiles. I made some configuration changes in server.xml:
<feature>restConnector-1.0</feature>
<feature>jsp-2.2</feature>
<feature>appSecurity-1.0</feature>
<feature>ssl-1.0</feature>
<feature>monitor-1.0</feature>
But when I connect over the REST port, I only get the following WebSphere MBeans:
WebSphere
WebSphere:feature=restConnector,type=FileService,name=FileService
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=WLProject
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint-ssl
WebSphere:feature=restConnector,type=FileTransfer,name=FileTransfer
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=kohls
WebSphere:service=com.ibm.ws.kernel.filemonitor.FileNotificationMBean
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightadmin
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightconsole
WebSphere:name=com.ibm.ws.jmx.mbeans.generatePluginConfig
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_analytics
WebSphere:name=com.ibm.ws.config.serverSchemaGenerator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_MobileBrowserSimulator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=nsecom
I am not able to get the thread pool or web container MBeans. Is there any configuration I have to do?
Maybe update to the latest Liberty version and test with JConsole. I'm running v8.5.5.3 and it works fine. I'm using the following command to start JConsole using the REST connector (all on one line, formatted for readability):
jconsole
  -J-Djava.class.path=C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\jconsole.jar;C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\tools.jar;C:\IBM\WebSphere\wlp\clients\restConnector.jar
  -J-Djavax.net.ssl.trustStore=C:/IBM/WebSphere/wlp/usr/servers/monitoringServer/resources/security/key.jks
  -J-Djavax.net.ssl.trustStorePassword=password
  -J-Djavax.net.ssl.trustStoreType=jks
  -J-Duser.language=en
I can see ThreadPoolStats and ServletStats. For SessionStats or ConnectionPoolStats, your application actually needs to use the corresponding feature (e.g. sessions, or a connection to a database) for the MBean to exist and be visible in JConsole.
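For the REST connector itself to accept the JConsole connection, the server also needs an administrative user and a keystore defined alongside the features above — a minimal server.xml sketch, where the user name, password, and keystore password are placeholders:

<quickStartSecurity userName="admin" userPassword="adminpwd"/>
<keyStore id="defaultKeyStore" password="keystorepwd"/>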
I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on my Windows machine within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows; I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (source code on page 157) to
inputfile outputdir foo (deliberately bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf, but it seems to be ignored; there is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference.
I've tried inserting -D mapreduce.framework.name=local.
I've tried specifying the input and output with the file: scheme.
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation; I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
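One possibility I have not ruled out (my own assumption, not something from the book): Cluster discovers ClientProtocolProvider implementations via Java's ServiceLoader, and LocalClientProtocolProvider lives in the hadoop-mapreduce-client-common artifact, so it would only be tried if that jar is on the Eclipse classpath. With Maven that dependency would be:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.4.0</version>
</dependency>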
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem
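For anyone who wants to stay on Hadoop 2.x, the local job runner can also be forced programmatically. A minimal sketch — the class name is mine, and the mapper/reducer wiring from the book's driver is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalRunnerExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Run the job in-process with the local job runner instead of YARN.
        conf.set("mapreduce.framework.name", "local");
        // Read and write the local filesystem rather than HDFS.
        conf.set("fs.defaultFS", "file:///");

        Job job = Job.getInstance(conf, "max-temperature-local");
        job.setJarByClass(LocalRunnerExample.class);
        // ... set mapper, reducer, and output key/value classes as in the book ...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}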