TwitterPopularTags Scala Spark not able to access OAuth information I think - scala

I have been trying to perform a static pull of some information using Scala, Spark, and IntelliJ IDEA, and I have been running into this error for quite some time. I have already added the streaming dependency and all the required jar files, but I keep getting this error.
I've spent some time playing around with the variables, trying to manually input my OAuth information (I believe that's what's causing this error), and I've tried making a twitter4j.properties file in my Spark root, project root, and even my project source root.
Usage: TwitterPopularTags <consumer key> <consumer secret> <access token> <access token secret> [<filters>]
Process finished with exit code 1
That is the error I keep getting. A screenshot is attached.
Also, once I get this working (OAuth), how can I modify the information I pull from Twitter and potentially store it in a local SQL database, or even a CSV file?
Thanks! Screenshot of the error: http://imgur.com/sEEiIiT
Source code for TwitterPopularTags: https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/TwitterPopularTags.scala

For anyone trying to figure this out, you can provide the arguments that the example expects your OAuth information in as follows:
1) Edit your run configuration (click the configuration drop-down and select "Edit Configurations").
2) Under the "Configuration" tab you will see a "Program arguments" field.
3) In this field, enter the four arguments required by the code, separated by whitespace.
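For reference, the argument handling below is paraphrased from the linked TwitterPopularTags source; the CSV-saving lines at the end are a hedged sketch for the storage sub-question (topCounts60 is the example's own DStream, the tag,count mapping is an assumption):

// Inside the example's main(args: Array[String]). Without the four program
// arguments it prints the usage line above and exits with code 1.
if (args.length < 4) {
  System.err.println("Usage: TwitterPopularTags <consumer key> <consumer secret> " +
    "<access token> <access token secret> [<filters>]")
  System.exit(1)
}
val Array(consumerKey, consumerSecret, accessToken, accessTokenSecret) = args.take(4)

// The example hands the credentials to twitter4j via system properties,
// which is why a misplaced twitter4j.properties file is never consulted here.
System.setProperty("twitter4j.oauth.consumerKey", consumerKey)
System.setProperty("twitter4j.oauth.consumerSecret", consumerSecret)
System.setProperty("twitter4j.oauth.accessToken", accessToken)
System.setProperty("twitter4j.oauth.accessTokenSecret", accessTokenSecret)

// Storage sketch (not from the example): any DStream can be written out as
// CSV-style text parts, e.g. the example's windowed hashtag counts.
// topCounts60.map { case (count, tag) => s"$tag,$count" }
//   .saveAsTextFiles("hashtag-counts", "csv")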

Related

Eclipse plugin Incubator's "Web Templates (Advanced)" plugin (with secured Redmine): Failed to parse RSS feed / invalid xml

While trying to connect a restricted Redmine instance to our Eclipse Mylyn environment, it worked in the beginning, but later re-imports failed with the error "Failed to parse RSS feed".
I stumbled across the Eclipse Mylyn ticket #246440, where the suggested workaround was to recreate the Task Repository, including the Task List Queries, by hand.
But this is not a nice solution.
So I played around a bit more and found the following, which solved our import issues:
Most likely relevant for your needs: remove the key value (or other security-relevant data) from the exported <task list query>.xml.zip / tasklist.xml, since the queries contain user-dependent authentication data (e.g. an API key), which matters if they are shared with other users.
It should anyway be configured on your related Task Repository for all dependent queries, and will be re-applied automatically on a later import.
Make sure (e.g. via a formatter, CTRL + F, or manual formatting) that there are no whitespaces in text-value XML nodes, because otherwise the queries may stop working after import:
e.g.
<Attribute Key="Regexp">^({Id}\d+);({Type}[^;]*);...$
</Attribute>
should be:
<Attribute Key="Regexp">^({Id}\d+);({Type}[^;]*);...$</Attribute>
Go to Task List -> <your imported query> -> right click -> Properties -> Finish, so some internal magic "fixes" your query.
Another debugging hint: you can always check the retrieved files (and the Query Pattern regexp, using the Preview button) via <your query> -> Properties -> Advanced Configuration -> Open, which should put the unparsed query result in e.g. c:\Users\<loginname>\AppData\Local\Temp\mylyn-web-connector4155864524987884464.html.
By the way: (if you are at the above point, this may well be useful for you or your team) using the web connector, I found the integration via the API key in combination with the .../issues.csv... format much more useful and configurable than the .../issues.xml... variant.
We used something like the following for parsing the CSV (and generated the params, their order, etc. via the normal filter dialogs): ^({Id}\d+);({Type}[^;]*);({Status}[^;]*);"?({Owner}[^";]*)"?;({Description}[^;]*)$.
Advantages are: an easier regexp, concatenable data for Description via column ordering, and fetching all data without paging (=> we could skip page, per_page, limit, offset).
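A minimal Scala sketch (assuming Scala 2.13) of how that pattern parses one CSV row, with Mylyn's {Name} capture syntax translated into standard named groups; the sample line is made up:

object MylynCsvRegexDemo extends App {
  // Mylyn's {Name} placeholders rewritten as standard (?<Name>...) named groups
  val pattern =
    """^(?<Id>\d+);(?<Type>[^;]*);(?<Status>[^;]*);"?(?<Owner>[^";]*)"?;(?<Description>[^;]*)$""".r

  // A made-up line in the shape of Redmine's .../issues.csv export
  val line = """4711;Bug;New;"jdoe";Login page throws 500"""

  // Scala 2.13: named groups declared inline can be read back by name
  pattern.findFirstMatchIn(line).foreach { m =>
    println(s"Id=${m.group("Id")}  Type=${m.group("Type")}  Status=${m.group("Status")}")
    println(s"Owner=${m.group("Owner")}  Description=${m.group("Description")}")
  }
}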

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine on the workspace (let's call it WorkSpace1).
Now I need to move this experiment as-is to another workspace (call it WorkSpace2) using PowerShell, and need to be able to run the experiment there.
I am currently using this library - https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from WorkSpace1 to WorkSpace2, the experiment and all its properties get copied, except the Azure Table account key.
The experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from WorkSpace2 as JSON, insert the AccountKey into the JSON file, and import (Import-AmlExperiment) it back into WorkSpace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error : 500".
When running on studio.azureml.net, I get the notification "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the AccountKey. When I enter it manually, it's converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey, it remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. In the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the pwd is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that this is a plain-text password, not an encrypted password that you are setting. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason why you are unable to import the credentials back.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals'. Hence it makes sense to change them to Literals instead of Placeholders.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration.
Changing the Placeholder to a Literal:
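(A hypothetical before/after sketch; the field names here are illustrative, not the exact schema of the exported Azure ML graph JSON.)

// Before: the copied experiment carries the key as a placeholder
// (field names are illustrative, not the exact exported schema)
{ "Name": "Account key", "Value": "", "ValueType": "Placeholder" }

// After: inject the plain-text key and mark it as a literal value
{ "Name": "Account key", "Value": "<your-storage-account-key>", "ValueType": "Literal" }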

OrientDB Could not access the security JSON file

Following my upgrade from OrientDB 2.1.16 to 2.2.0, I have started to get the following messages during initialisation:
2016-05-19 09:28:38:690 SEVER ODefaultServerSecurity.loadConfig() Could not access the security JSON file: /config/security.json [ODefaultServerSecurity]
2016-05-19 09:28:39:142 SEVER ODefaultServerSecurity.onAfterActivate() Configuration document is empty [ODefaultServerSecurity]
The database launches, but I don't like the warnings. I've looked through the docs but I can't find anything specifically pertaining to this. There are some links on Google that lead to dead GitHub pages.
First of all, I need to get hold of either a copy of the security.json it is expecting, or the docs explaining the expected structure.
Secondly, I need to know how and where to set it.
There are 3 ways to specify the location and name of the security.json file used by the new OrientDB security module.
1) Set the environment variable ORIENTDB_HOME, and it will look for the file at:
"${ORIENTDB_HOME}/config/security.json"
2) Set the "server.security.file" property in the orientdb-server-config.xml file.
3) Pass the location by setting the system property -Dserver.security.file on startup.
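For example, a minimal sketch of options 2 and 3 (assuming the standard <properties> section of orientdb-server-config.xml; the exact placement may vary by version):

<!-- Option 2: orientdb-server-config.xml, inside the <properties> section -->
<properties>
    <entry name="server.security.file" value="${ORIENTDB_HOME}/config/security.json"/>
</properties>

<!-- Option 3: pass the JVM system property on startup, e.g.
     java -Dserver.security.file=/path/to/security.json ... -->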
Here's the documentation on the new features + a link to the configuration format.
https://github.com/orientechnologies/orientdb-docs/blob/master/Security-OrientDB-New-Security-Features.md
-Colin
OrientDB LTD
The Company behind OrientDB

JSON schema validation failed: resource: String does not match pattern:^/[^/~!#\$%^|\s`#&*()\-+={}\[\]:;"'<>,?/\|\\]+(/[^/~!#\

I'm trying to embed Jasper Reports into an APEX app. I'm able to load the sample reports (from the JasperCommunity website); however, when I try to load reports created by me and my colleagues, I get the same error every time:
JSON schema validation failed: resource: String does not match pattern: ^/[^/~!#\$%^|\s#&*()\-+={}\[\]:;"'<>,?/\|\\]+(/[^/~!#\$%^|\s#&()-+={}[]:;"'<>,?/\|\]+)$.
In other words, I can't get any of our reports, apart from the samples (e.g. /public/Samples/Reports/03._Store_Segment_Performance_Report). I think the path to the report is wrong, but I've tried all possible and impossible options and none of them works. Anyone any ideas, please? Thanks
P.S. APEX 4.2.6, JasperServer 6.0. And finally, I can get the sample reports ONLY as the JasperAdmin user; a simple user always gets an 'Access Denied' error. Why?!
Sorted.
If anyone is interested: to get the actual path, in Jasper right-click on the Report (or Dashboard, Ad Hoc View, etc.) and copy the path from there, as it differs from the one shown when you hover over it (e.g. underscores are added). Then paste this actual path into the JavaScript code in the HTML section of your web app. Thanks

Cruise Control .net Changing Log File appearance

I would like to change the appearance of the log file generated by CCNet. It is useful that the error messages are separated from the original log messages, but for debugging it is a bit tricky to see when an error really happened. Our PowerShell script runs for 6-8 hours and creates about 38k lines in the log file, so I would really appreciate a way to list the errors inline with the other lines in the log file. Additionally, it would be nice if all the errors still appeared separately as well.
So far I have not found much documentation explaining how to change the log file output...
Simon
Not sure how this is logged, but in the end, logs produced during the build are put into the build-log file, which you will find in the artifacts folder.
These logs are then transformed into HTML output using XSL transforms. If none of the built-in reports is useful to you, you can create a custom XSL file and plug it in; see the dashboard.config file. The following section allows adding additional XSL transforms:
<buildPlugins>
  <buildReportBuildPlugin>...</buildReportBuildPlugin>
  <xslReportBuildPlugin description="MSBuild Log" actionName="MSBuildBuildReport" xslFileName="xsl\MSBuild4Log.xsl"/>
  ...
</buildPlugins>
If you know what the error messages are going to be, you can parse them with an XSL file and generate some HTML that will show up in the build emails. The following goes in ccservice.exe.config:
<xslFiles>
<file name="c:\path\to\custom_errors.xsl"/>
</xslFiles>
custom_errors.xsl is an XSL file that finds the error messages in the raw build-log XML and then generates HTML from them; this HTML will show up in the build emails. You have to create custom_errors.xsl yourself. It is a significant amount of work to get it working the first time, especially if you're new to XML/XSL/HTML/CSS. If you undertake this, I suggest doing all the testing outside of CCNet using an XSL transformer, with a sample CCNet build log as input. CCNet uses a CSS file to style the HTML, so be aware of that; you can edit this too (see the sketch at the end of this answer).
Note that you have to restart the CCNet service after editing ccservice.exe.config.
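A minimal sketch of what custom_errors.xsl might look like; the <error> element name is an assumption and must be adapted to whatever your PowerShell task actually emits into the build log:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Root of the CCNet build log; collect every error node (element name assumed) -->
  <xsl:template match="/cruisecontrol">
    <xsl:if test="//error">
      <h3>Errors (<xsl:value-of select="count(//error)"/>)</h3>
      <ul>
        <xsl:for-each select="//error">
          <li><xsl:value-of select="."/></li>
        </xsl:for-each>
      </ul>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>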