WSO2 CEP 4.1.0 sample is not working - CentOS

To start the first sample application, I tried to complete the prerequisite part by using this link. According to that, after running ./wso2cep-samples.sh -sn 0101, a stream-definitions.xml file should be created inside /repository/conf/data-bridge/. But that didn't happen. I tried on both Linux and Windows, with the same result. What can I do about this?

Apparently the documentation is not updated properly and needs to be fixed. Please refer to the sample 0101 documentation [1]. In this sample, the output is printed in the terminal by a logger publisher.
Also note that from WSO2 CEP 4.0.0 onwards, sample artifacts are stored under the samples directory, so the stream definition of sample 0101 is stored at CEP_HOME/samples/artifacts/0101/eventstreams.
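For illustration, event stream definitions in CEP 4.x are JSON files. A rough sketch of the general shape follows; the stream and attribute names are illustrative only, not the actual 0101 artifact, so check the file under CEP_HOME/samples/artifacts/0101/eventstreams for the real definition:
{
  "name": "org.wso2.event.sensor.stream",
  "version": "1.0.0",
  "nickName": "",
  "description": "",
  "metaData": [
    { "name": "timestamp", "type": "LONG" }
  ],
  "payloadData": [
    { "name": "value", "type": "FLOAT" }
  ]
}
(The field names here follow the CEP 4.x JSON stream-definition format as I remember it; treat this as a sketch, not a reference.)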
[1] https://docs.wso2.com/pages/viewpage.action?pageId=49777902

Accessing AEM 6.2 error logs over HTTP

In previous versions of AEM, certainly in CQ 5.6 and AEM 6.0, it was possible to tail the error logs over HTTP, without connecting to the server over SSH.
For example, I could get the last 1000 lines from the error log of my AEM author instance by calling:
http://localhost:4502/bin/crxde/logs?tail=1000
This no longer seems to be possible in AEM 6.2; the path does not resolve to anything.
Is there another way I could still tail the log over HTTP?
A colleague answered this question for me in a chat, so I'm putting it here to make it easier to find in the future.
There's now a neat utility in the OSGi console that allows one to view the logs as well as configure the various loggers. You can find it at http://localhost:4502/system/console/slinglog
The Appender tab provides links to the various log files that can be used to load logs over HTTP.
Here's an example request it makes:
http://localhost:4502/system/console/slinglog/tailer.txt?tail=1000&name=%2Flogs%2Ferror.log
As you can see, both the log file name and the tail parameter can be specified. You can also use grep with both simple phrases and regular expressions.
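For instance, a request combining tail and grep might look like the following (the grep value here is just an illustrative search term; it accepts a regular expression as well):
http://localhost:4502/system/console/slinglog/tailer.txt?tail=1000&grep=OutOfMemoryError&name=%2Flogs%2Ferror.log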
This is a built-in feature of Apache Sling.
In addition, FYI, you can also find status-slinglogs at /system/console/status-slinglogs, where you can download the log files as a zip and the logger actions as a txt:
http://localhost:4502/system/console/status-slinglogs
The direct URLs for downloading these zip files are as below:
http://localhost:4502/system/console/status-slinglogs.zip
http://localhost:4502/system/console/status-slinglogs/configuration-status-20170126-183246.zip (where 20170126-183246 is a timestamp)
You should not be looking at log files via CRXDE Lite.
Log files in 6.2 are project specific; it is better to open them in a text editor.
Hope this helps!
Regards,
Prince
You can curl the log with e.g.:
curl -u admin:admin 'http://localhost:4502/system/console/slinglog/tailer.txt?tail=4000&name=%2Flogs%2Ferror.log'
where 4000 is the number of lines you want to get.
I recently wrote a tool named "Log Tailer Plus" to solve exactly this problem. It's entirely free and open source. Take a look at a post describing its usage here: https://blogs.perficientdigital.com/2019/05/14/introducing-aem-logtailerplus/
TL;DR: You can grab an AEM package from here (https://github.com/prftryan/LogTailerPlus), install it on your machine, and access it via http://localhost:4502/log-tailer-plus (if local) or http://server:port/log-tailer-plus.
This tool will allow you to follow any number of logs at once by leveraging the out-of-the-box logging endpoint (/system/console/tailer) as well as dynamically checking active OSGi logger configurations. Currently, highlighting is supported, but only for relatively standard logging patterns (it's done via regex).
This is a new release and works on AEM 6.2+. Enjoy!

LabVIEW MongoDB

In a LabVIEW application, I want to write some data to a MongoDB.
I found the C# driver for LabVIEW under the following link: https://decibel.ni.com/content/docs/DOC-41766
When I open the LV project and try to run an example, I get many errors.
Mainly, the driver's classes can't be included/loaded.
.NET is installed on the system.
Does anyone have an idea, or can anyone give instructions, to get the driver running in LabVIEW?
Did you create a LabVIEW.exe.config file (there's an example in the MongoDB-driver package) and store it in the same folder as LabVIEW.exe and relaunch? That did the job for me!
http://zone.ni.com/reference/en-XX/help/371361K-01/lvhowto/configuring_clr_version/
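For reference, a minimal sketch of such a LabVIEW.exe.config, assuming the driver targets the .NET 4.0 CLR (the exact supportedRuntime version depends on the driver build, so check the example file shipped with the package):
<?xml version="1.0"?>
<configuration>
  <!-- Tell the CLR hosted by LabVIEW which runtime to load;
       v4.0.30319 is an assumption, adjust to match your driver build -->
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319"/>
  </startup>
</configuration>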

Restful DDS execution

I downloaded the restful-dds-1.0-src.tgz file from http://code.google.com/p/restful-dds/downloads/list. I am using a Linux environment. Following the ReadMe.txt file, I got the chatter application (CHATROOM TEST) working up to scripts/startRESTfulDDS.sh and can also view the HTML file at http://ipaddress:8182/static/ajaxTest.html. The next step is to "run the Chatter application in the Tutorial directory by running scripts/Chatter.{sh,bat}." This is where my problem arises: I cannot find a scripts folder or a Chatter.sh file inside the Tutorial folder. Please help me figure out what I did wrong.
I am using OpenSplice DDS v5.5,
GWT 2.4.0,
JDK 1.6,
Restlet v2.0.14,
Gson v2.2.2
"I am not able to see scripts folder and chatter.sh file inside the Tutorial folder"
The Tutorial folder that is created is an exact copy of the OpenSpliceDDS tutorial, found in $OSPL_HOME/examples/dcps/standalone/Java/Tutorial. There seems to be a mismatch between the description in the restful-dds README and this tutorial, because indeed there is no Chatter.sh. However, there is a README.txt inside the Tutorial directory which explains how to run Chatter:
Chatter [userid] [username]
userid: an integer number that uniquely identifies the sender of a message
(Transmit a message with userid = -1 to terminate the MessageBoard.)
username: the user-name other chatters will see when they receive one of your
chat messages.
The executable classes are located in the chatroom package, but should be
started from the current directory in the following way:
...
java -classpath $OSPL_HOME/jar/dcpssaj.jar:bld chatroom.Chatter 1 Bill
Following this procedure, you should be able to run Chatter. Of course, you should first run ospl start to initialize the infrastructure.
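Put together, a typical session from inside the Tutorial directory might look like this (a sketch only; chatroom.MessageBoard is the message sink from the same tutorial, run here in the background so Chatter has something to talk to):
ospl start
java -classpath $OSPL_HOME/jar/dcpssaj.jar:bld chatroom.MessageBoard &
java -classpath $OSPL_HOME/jar/dcpssaj.jar:bld chatroom.Chatter 1 Bill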
By the way, you are not required to run the Java version of the tutorial -- any supported language will do. The OpenSpliceDDS installation itself should give you more information about running Chatter in different languages. The restful DDS webservice will pick up any data found on the DDS bus and expose it via HTTP, no matter what language the originating process was written in.

Running a mapreduce jar on Hadoop cluster

I'm trying to run the MapReduce implementation of the quadratic sieve algorithm on Hadoop. For this purpose I'm using the Karmasphere Hadoop community plugin with NetBeans. The program works fine using the plugin, but I'm unable to run it on an actual cluster.
I'm running this command:
bin/hadoop jar MRIF.jar 689
where MRIF.jar is the jar file made by building the NetBeans project and 689 is the number to be factored. The input and output directories are hard-coded in the program itself. When running on the actual cluster, it appears that the inner Java classes are not being processed, as reduce reaches 100% while map is still at 0%, and the input and output files are created with no content.
But this works fine when running with the Karmasphere plugin.
Try running it as bin/hadoop -jar MRIF.jar 689. The -jar forces it to run locally and displays information on the console as well as logging to that machine. You can also check the Hadoop logs to see if they have any indicators of why it's not happening correctly.
When using -jar you can use System.out.println(...); to display information on the console, further helping to debug.
You can also use Hadoop Counters (link is a random blog post I found) to assist in troubleshooting when running (pseudo-)distributed; see the sketch below.
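For example, here is a minimal sketch of bumping a custom counter inside a mapper (SieveMapper and the counter names are hypothetical, not taken from the actual MRIF code), so the job's console summary can confirm whether your map code ran at all:
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SieveMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Custom counters are aggregated across tasks and printed in the
        // job summary, so a total of zero means the mapper never saw input.
        context.getCounter("Sieve", "MapRecordsSeen").increment(1);
        context.write(new Text("candidate"), value);
    }
}
If the counter stays at zero in the job output, the problem is likely in the job setup (input paths, main class) rather than inside the map logic.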
I admit this post isn't a 'solution' to the problem; without more information about what is happening and where, there is a wide range of things that could be going on. If it is, as you mention, not processing the 'inside java classes', then it would likely be your implementation, which we can't see, so we can't make suggestions, etc.
More data about the issue, such as logs, errors or output, will likely assist in getting more solution-y responses instead of debugging tips. :)
EDIT: Thanks for the link to the files. I think your command is missing a component: the main class name.
I looked in run.sh and think this might get it to work for you:
bin/hadoop jar mrif.jar com.javiertordable.mrif.MapReduceQuadraticSieve 689

Not able to run the Windows Workflow Foundation sample HiringRequest

I am currently exploring the possibilities of WF, so I downloaded some samples from here. I wanted to take a look at the hiring request sample application, which is also shown in one of the webcasts from endpoint.tv.
When I open the project and look at HiringRequestProcess.xaml, I get errors.
It says that x:String, x:TypeArguments, etc. cannot be resolved.
Does anyone have any ideas how I can get the sample running? I'm running VS2010 Ultimate as an administrator.
I got it working now: I extracted all the files to the root of my drive, and now it works.