Situation:
I want to offer a Neo4j demo for users who have a freshly installed Neo4j on their Windows machines. The users are not able (for lack of knowledge) to use the console (import tools).
Wish:
What I wish is to load my Cypher export file into the Neo4j Browser. What I don't want is to spend hours programming an SQL-like CSV export file (because that's exactly what I'm happy not to need, since I use a graph database).
What I tried and learned:
1) The webadmin tool does not exist anymore.
2) The LOAD CSV command expects CSV only.
Question:
Is there a way or work-around that I have missed?
You might want to take a look at APOC's import/export procedures. This should give you some options.
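For example, APOC has a procedure for running a whole Cypher script file (apoc.cypher.runFile; if I remember correctly it needs apoc.import.file.enabled=true in the config), so the demo file could be executed with a single CALL from the Browser. Failing that, here is a rough alternative work-around - not APOC, just the official Python driver replaying the export file. The URL, credentials and file name are placeholders:

```python
# Alternative work-around (not APOC): replay the exported .cypher file with
# the official Neo4j Python driver. Naive sketch -- it assumes statements are
# separated by ";" and that no string literal contains a semicolon.
# URL, credentials and file name are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with open("demo-export.cypher", encoding="utf-8") as f:
    statements = [s.strip() for s in f.read().split(";") if s.strip()]

with driver.session() as session:
    for stmt in statements:
        session.run(stmt)   # execute each statement from the export file

driver.close()
```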
Related
We have a big database with a lot of stuff in it, and I want to use version control (Git) to manage changes.
There are a lot of articles on how to do it step by step, but one piece is missing for me.
Is there a standard or recommended file structure for the whole database (data excluded), and how can it be obtained from an existing database?
There is a lot of source: procedures, functions, packages, etc.
Version control articles show how to manage a few files from a version control perspective, but they suggest that each file should be selected and saved to the file system separately.
Is there a way to export/import all of this into some preorganized structure?
Good IDEs have such structures defined by their languages or products, but it looks to me like SQL Developer doesn't have one.
It also looks to me like SQL Developer may have only one repository, with no concept of projects that can combine different databases into separate units.
Should I invent the whole structure myself and use something like
**project/Abc/DB1/Packages/packageXyz/source1.sql**
for each source? Sure, I can do this, but I worry that I may miss something.
Any advice?
Yes, SQL Developer can unload a schema to files for you. And then you could take such files to your SVN or Git projects.
Tools - Database Export.
I set the output to multiple directories - so one directory per schema object type.
Then I select my application schema and proceed to Finish/OK.
The output ends up as one folder per object type, with a script file for each object.
I talk about this in more detail here.
What is the relation between Gremlin and Groovy? I put Groovy into Eclipse and it works, but I guess Gremlin is a bit different (I can't write in the Eclipse editor and run it the way I do in the Gremlin shell). For example, writing 5 + 4 and running it as a "Groovy Shell" configuration in Eclipse doesn't do the job. How do I go about this?
Edit: What I'm looking forward to doing is creating a social graph (with data inside) from about half a million tweets that I have and then running queries on it. I tried Neo4j, but the Browser has a limit on the size of the DB, I guess. Is there any Neo4j IDE (with Cypher as well as graph visualization)?
Then I found Gremlin, which is amazingly easy and straightforward, but then again there is no IDE to run it in!
This isn't a great answer but perhaps it will put you on the path to solving your problem. Gremlin uses a variation of the Groovy Shell, so perhaps it does not quite work in the same manner as the Groovy Shell plugin to Eclipse. You can evaluate Gremlin in the standard Groovy Shell if you follow these instructions:
https://github.com/tinkerpop/gremlin/wiki/Using-Gremlin-through-Groovy#use-from-groovy-shell
As a follow-on to your edit - In my view, the Gremlin Shell is about all you need to load and analyze graph data. Consider this blog post:
http://thinkaurelius.com/2013/02/04/polyglot-persistence-and-query-with-gremlin/
Gremlin is still there for you once you get past ad-hoc analysis. Build your applications over the graph with it. Here's another post:
http://thinkaurelius.com/2013/07/25/developing-a-domain-specific-language-in-gremlin/
One thing TinkerPop does not have is a built-in visualization function. Rexster's Dog House has such a feature, but it's not terribly advanced, especially as compared to Neo4j's console. The typical workflow for visualization that I recommend when using TinkerPop and Gremlin is to dump your graph, or a portion of it, to GraphML or GML, and then import it into tools like Gephi or Cytoscape for visualization needs.
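Since the edit says the raw material is tweets rather than an existing graph store, the dump-to-GraphML half of that workflow can be illustrated without Gremlin at all. Purely as a rough sketch (Python with networkx, not Gremlin; the tweet fields are invented for the example), it might look like:

```python
# Illustrative only: build a small "mention" graph from tweets and dump it to
# GraphML for Gephi/Cytoscape. The tweet structure (user / mentions keys) is
# an assumption for the example, not a real Twitter API schema.
import networkx as nx

tweets = [
    {"user": "alice", "mentions": ["bob", "carol"]},
    {"user": "bob",   "mentions": ["alice"]},
]

G = nx.DiGraph()
for t in tweets:
    for mentioned in t["mentions"]:
        # add (or reinforce) a directed "mentions" edge between users
        if G.has_edge(t["user"], mentioned):
            G[t["user"]][mentioned]["weight"] += 1
        else:
            G.add_edge(t["user"], mentioned, weight=1)

nx.write_graphml(G, "tweets.graphml")   # open this file in Gephi
```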
I need to automate some web browser operations. Basically, I need to import the results of some SQL queries via phpMyAdmin (I can't run SQL directly because my provider doesn't allow it, and I also tried using cURL and wget but couldn't get them to work). Anyway, as it's always the same files, I thought I could use a macro. I considered Vimperator/Pentadactyl, but that doesn't work because I need the macro to record file selection, etc.
So what would be the best (most popular) plugin to do the job? I was thinking of Selenium, but I've seen other plugins that could do it.
Alternatively, a full CLI solution allowing me to execute SQL remotely would be amazing too.
You could use web automation testing to do the work instead of a macro.
As far as I know, macros use VBScript; some programs that support macros are Access, Excel, and MAPICS (AS/400).
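Since Selenium was mentioned in the question, here is a very rough sketch of what driving the phpMyAdmin import with Selenium's Python bindings could look like. The URL, credentials and element locators are assumptions for illustration - inspect your own phpMyAdmin pages for the real field names before relying on anything like this.

```python
# Rough sketch with Selenium's Python bindings. The URL, credentials and
# element locators below are placeholders -- check your phpMyAdmin pages
# for the real field names/IDs before using this.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/phpmyadmin/")          # hypothetical URL

    # Log in (field names assumed; inspect the login form to confirm).
    driver.find_element(By.NAME, "pma_username").send_keys("dbuser")
    driver.find_element(By.NAME, "pma_password").send_keys("secret")
    driver.find_element(By.ID, "input_go").click()

    # Open the Import page and attach the SQL file (locators assumed).
    driver.get("https://example.com/phpmyadmin/index.php?route=/import")
    driver.find_element(By.NAME, "import_file").send_keys("/path/to/queries.sql")
    driver.find_element(By.ID, "buttonGo").click()
finally:
    driver.quit()
```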
I would like to access a Redmine task base via a simple text-based interface, and I'm wondering what the shortest path would be (minimum investment/development).
Right now, this boils down to two use cases/phases:
Import a batch of tasks into Redmine from a simple, wiki-based, bulleted TODO list, i.e. plain-text content. This is more of a one-off task, so a quick-and-dirty solution would be fine.
Later, some smooth two-way synchronisation would be great, e.g. editing loads of tasks as friendly plain text (or XML) in an editor, or scripting where I could manipulate all of them with simple text processing, then synchronising with Redmine and committing them back.
Any ideas on the easiest way to achieve these?
I'd prefer an external solution (i.e. not touching the server), especially for the one-off import case; something like a neat IDE/editor/client, or a standalone Ruby script (e.g. using the RM API).
If an appropriate RM plugin were available, I would not resist giving it a try (I can get root access from our lovely IT support :).
Current ideas:
Emacs/Org-mode looks like a great combination of a cool task-manager UI and full plain-text power. It seems rich enough to capture tags and states as well. This article looks promising - Orgmode and Roundup: Bridging public bugtrackers and local tasklists - although it's not exactly a perfect match.
An org-mode parser in Ruby could be used in a script with Redmine API access, or - worst case (for me, right now) - in a newly developed RM plugin. This looks like a good start: org-ruby
Export RM->XML, process the file, import XML->RM... I'm not sure whether this is supported?
I guess it's always possible to talk to the DB directly, but I'd prefer to avoid that.
Actually, I'm also interested in a similar solution for Bugzilla.
At the simplest level, you could write an RM/Rails plugin that parses an Org-mode task list, updating the corresponding issues in the RM model.
Equally, you could build a view for Redmine (again as a Rails plugin) to generate an org list of the current issues (or a subset of them).
For Bugzilla I think you would be best off using the XML-RPC interface to do your issue comparison/update sync, so you'd have to take a very different approach from Redmine.
If you have any specific questions, please update your question, it's quite broad at the moment.
Update
At the moment, there are a few plugins which will probably help you figure out your solution, for example Nick Bolton's XML import and Martin Liu's Redmine CSV Import Plugin, but neither of these is going to completely solve the problem for you; they just give you a useful starting point.
On the other hand, if you write a script that interacts with Redmine's REST API, it doesn't need to be in any specific language; in fact, you could do it in Emacs Lisp. If the target users of the script are all Emacs-aware, then this might well be the best way to do the job (it would certainly be the most appealing option to me).
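For instance, the one-off import could be a short standalone script against the REST API. Here is a minimal sketch in Python (rather than Ruby or Emacs Lisp); the URL, API key, project identifier and TODO file format are placeholders/assumptions:

```python
# Minimal sketch of the one-off import via Redmine's REST API, in Python
# rather than Ruby or Emacs Lisp. The URL, API key and project identifier
# are placeholders; the TODO file is assumed to contain one "- task" per line.
import requests

REDMINE_URL = "https://redmine.example.com"   # hypothetical instance
API_KEY = "your-api-key"                      # from "My account" in Redmine
PROJECT_ID = "demo-project"                   # hypothetical project identifier

def import_todo(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line.startswith("- "):     # skip non-bullet lines
                continue
            subject = line[2:].strip()
            resp = requests.post(
                f"{REDMINE_URL}/issues.json",
                headers={"X-Redmine-API-Key": API_KEY},
                json={"issue": {"project_id": PROJECT_ID, "subject": subject}},
            )
            resp.raise_for_status()
            print("created issue", resp.json()["issue"]["id"])

import_todo("todo.txt")
```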
Maybe this can be useful: https://github.com/fukamachi/redmine-el
Currently I go into phpMyAdmin, export my database as a text file, and then save it with the application files before I commit things to SVN (or Git). Then, of course, I've got to import it into production.
Is there a better way?
It depends on the language you use; RoR has it built in. Currently, for a project I'm doing in ASP.NET MVC, I have two files in the project, in a folder called database: one contains the structure of the database, and the other contains some dummy data for testing. I must say it is a cumbersome way of sharing your database, since when you update something you have to let the others know that they need to rerun the (updated) SQL structure script.
The structure script drops tables if they exist and re-creates them, plus adds any new tables.
I could not find a better way, like db:migrate in Ruby on Rails.
If you don't have something like Rails migrations - say you're in a Java environment or anything else - check out Liquibase. It's pretty cool if you need that much flexibility. We just track .sql files which set up the entire database.
Generally, I would create a script that is able to generate the database (i.e., all the tables, users, views, indexes, etc.) and another script that populates the DB with data. Then I'd use DBDeploy (similar to RoR's migrations) to handle all DB modifications. Finally, I would create build targets for all these scripts in Ant, NAnt, Buildr, etc. This way everything is versioned and in text files, so it works with any SCM.
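To make the migrations idea concrete, here is a toy sketch of what tools like DBDeploy automate: versioned .sql files applied in order, with a changelog table recording what has already run. It uses Python and sqlite3 only to stay self-contained; the directory and file names are made up.

```python
# Toy illustration of the migration idea (DBDeploy/Liquibase-style):
# numbered .sql files live in version control, and a changelog table
# records which ones have already been applied. Uses sqlite3 only to
# keep the sketch self-contained; paths and file names are hypothetical.
import os
import sqlite3

MIGRATIONS_DIR = "db/migrations"   # e.g. 001_create_users.sql, 002_add_index.sql

def migrate(db_path="app.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (filename TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT filename FROM schema_version")}

    for name in sorted(os.listdir(MIGRATIONS_DIR)):
        if not name.endswith(".sql") or name in applied:
            continue
        with open(os.path.join(MIGRATIONS_DIR, name)) as f:
            conn.executescript(f.read())           # run the migration
        conn.execute("INSERT INTO schema_version (filename) VALUES (?)", (name,))
        conn.commit()
        print("applied", name)

    conn.close()

if __name__ == "__main__":
    migrate()
```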
If you're looking for migrations similar to db:migrate in Rails, but you're not in Rails, there are other options. There's migrate4j, which is similar to db:migrate but written in/for Java. There's also Liquibase, which is very flexible and (AFAIK) language-independent, but it does make you write everything in XML (which is kind of the opposite of "the Rails way").
If you look at Apache ODE, they have an h2.rake task for Buildr that builds a database for testing automatically.