I have issues with the RESTXQ implementation in exist-db.
I think it might be the RestXQTrigger which is not working correctly.
The problem: I deleted (via the Dashboard) a collection including RESTXQ services inside several .xqm files. However, the services are not unregistered and are still available, even after restarting eXist.
Is there any way to force this unregistering, other than recreating the previous collections/files and deleting each .xqm file one by one (that way, the trigger seems to work)?
At the moment RESTXQ in eXist only implements the Document Trigger events and not the Collection Trigger events. This is simply a limitation that will be resolved when there is time to implement it.
There is an XQuery module provided with eXist in the namespace http://exquery.org/ns/restxq/exist. The functions in this module enable you to manually manipulate the RESTXQ Registry. You can enable it in $EXIST_HOME/conf.xml. If you then restart eXist and re-build the function documentation, you should be able to see these functions in the function browser app. In particular, you most likely want:
exrest:deregister-module(xs:anyURI("/db/my-module.xqm"))
exrest:register-module(xs:anyURI("/db/my-module.xqm"))
There are also functions for registering and deregistering individual functions from a module, named register-resource-function and deregister-resource-function. They are similar to the above but take a second argument: a function signature (as an xs:string) in the form qname#arity, e.g. "fn:substring#2".
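For example, a small XQuery along these lines should force the cleanup (a hedged sketch: it assumes the module has been enabled in conf.xml, and local:get-item#1 is a hypothetical resource function):

xquery version "3.0";
import module namespace exrest = "http://exquery.org/ns/restxq/exist";

(: drop every resource function that was registered from the deleted module... :)
exrest:deregister-module(xs:anyURI("/db/my-module.xqm"))
(: ...or, more surgically, drop a single function by its qname#arity signature:
   exrest:deregister-resource-function(xs:anyURI("/db/my-module.xqm"), "local:get-item#1") :)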
You can also stop the database and manually remove the registry file $EXIST_HOME/webapp/WEB-INF/data/restxq.registry.
I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the line below, but haven't found a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do this in a Puppet file?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts. That's a link into the Facter 3 manual, but much the same applies to earlier versions. Do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
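For instance, a custom fact along these lines could expose the directory listing to your manifests (a minimal sketch; the fact name java_dirs and the path are illustrative assumptions):

# modules/mymodule/lib/facter/java_dirs.rb
Facter.add(:java_dirs) do
  confine :kernel => 'windows'   # only meaningful on Windows agents
  setcode do
    dir = 'C:\Program Files\Java'
    # return the entries, minus the '.' and '..' pseudo-entries
    Dir.exist?(dir) ? Dir.entries(dir).reject { |e| e.start_with?('.') } : []
  end
end

With pluginsync in place, your class can then read it like any other fact, e.g. $dirs = $facts['java_dirs'].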
On the other hand, information about the machine providing catalog-building services -- the master in a master / agent setup -- can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.
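Should you ever need the master-side variant, a custom function might look roughly like this (a sketch using the Puppet 4.x function API; list_files is a made-up name):

# modules/mymodule/lib/puppet/functions/list_files.rb
Puppet::Functions.create_function(:list_files) do
  dispatch :list_files do
    param 'String', :dir
    return_type 'Array[String]'
  end

  # Runs on the master during catalog compilation, so it lists
  # files on the master, not on the agent being configured.
  def list_files(dir)
    Dir.exist?(dir) ? Dir.entries(dir).reject { |e| e.start_with?('.') } : []
  end
end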
The context of the problem is this: we create workflows and save them; after a while, a new implementation request comes in and we change an activity. After this, the workflow instances that were saved can no longer run. We get this error:
StateMachine Error : Cannot convert object 'True' to type 'System.String'.
It seems that the newly added argument breaks the serialization order?
You'll have to implement Dynamic Update in some fashion.
We are currently in the process of getting some infrastructure set up to update existing instances, and having lots of issues. Hopefully your scenario is easier to solve than ours!
Start here: https://msdn.microsoft.com/en-us/library/hh314052(v=vs.110).aspx
Word of caution: I've found various issues with Microsoft's provided code that required a lot of investigation to fix.
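In outline, the update flow looks roughly like this (a hedged sketch of the .NET 4.5 Dynamic Update API; it assumes instanceId, instanceStore, newDefinition and newIdentity are already in hand, and MyWorkflow.xaml is a placeholder):

using System.Activities;
using System.Activities.DynamicUpdate;
using System.Activities.XamlIntegration;
using System.Xaml;

// 1. Before editing the definition: load it and mark it for update.
ActivityBuilder builder = XamlServices.Load(
    ActivityXamlServices.CreateBuilderReader(
        new XamlXmlReader("MyWorkflow.xaml"))) as ActivityBuilder;
DynamicUpdateServices.PrepareForUpdate(builder);
// ... save the prepared XAML, make the change (e.g. add the new
// argument), then reload the modified definition into 'builder' ...

// 2. Create a map describing how persisted instances translate
//    from the old definition to the new one.
DynamicUpdateMap map = DynamicUpdateServices.CreateUpdateMap(builder);

// 3. Apply the map to each persisted instance as it is loaded.
WorkflowApplicationInstance instance =
    WorkflowApplication.GetInstance(instanceId, instanceStore);
WorkflowApplication app = new WorkflowApplication(newDefinition, newIdentity);
app.Load(instance, map);
app.Unload(); // re-persists the instance, migrated to the new definition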
Update (TL;DR): there seems to be no built-in way to achieve this, so a custom task is the easy solution.
Capistrano provides facilities to share files and directories across all releases. This is convenient, and it even provides some safety for files that should not be changed casually (or must remain the same across releases), e.g. a database configuration file.
But when it comes to replacing or just updating one of these shared files, I end up doing it manually, directly on the target machine. I would like to improve on that, for instance by asking Capistrano to overwrite some or all shared files when deploying: a kind of --force flag with some granularity.
I am not aware of any such facility, and my search has come up empty so far. Any pointers?
Thinking about it
One of the reasons why this facility does not exist (apart from my simply not having found it!) may be that it is harder than it looks. For example, let's assume we have a shared database configuration file, which we exclude from version control for security reasons (a common practice). The current release relies on version 1 of the DB configuration. The next release requires version 2. If the deployment goes well, everything's good. It gets harder when rolling back after some error with the new release (e.g. a regression), as version 1 must then be available again.
Such automation would be cool and convenient, but dangerous as well. Yet I have practical use cases at hand.
I created a template method to do this. For example, I could have a task like this:
task :create_database_yml do
  on roles(:app, :db) do
    within(shared_path) do
      template "local/path/to/database.yml.erb",
               "config/database.yml",
               :mode => "600"
    end
  end
end
And then I have a database.yml.erb template that uses things like fetch(:database_password) to fill in appropriate values. You can use the ask method in Capistrano to prompt for these values so they are never committed.
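For illustration, the two pieces might look like this (hypothetical keys and values; adapt them to your app):

# in deploy.rb: prompt at deploy time so the value is never committed
ask :database_password, nil

# local/path/to/database.yml.erb
production:
  adapter: postgresql
  database: myapp_production
  password: <%= fetch(:database_password) %>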
The implementation of template can be very simple: you just need to read the file, pass it through ERB, and then use Capistrano's upload! to place the results on the server.
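A minimal sketch of such a helper, assuming it is called inside an on block so SSHKit's upload! and execute are available (the :mode handling mirrors the task above):

require "erb"
require "stringio"

def template(from, to, options = {})
  # Render the local ERB file; Capistrano helpers such as fetch()
  # are visible to the template through the binding.
  rendered = ERB.new(File.read(from)).result(binding)
  upload! StringIO.new(rendered), to
  execute :chmod, options.fetch(:mode, "600"), to
end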
My version is a little more complicated than yours probably needs to be, but in case you are curious:
https://github.com/mattbrictson/capistrano-mb/blob/7600440ecd3331945d03e059368b75849857f1fb/lib/capistrano/mb/dsl.rb#L104
One approach is to use a system configuration tool like Chef or Puppet to deploy the configuration files distinctly from Capistrano.
Another approach is to create a custom task to do this: https://coderwall.com/p/wgs6gw/copy-local-files-to-remote-server-using-capistrano-3
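Such a task can stay very small; a sketch (paths and roles are placeholders):

namespace :config do
  desc "Force-push the local database.yml to the shared directory"
  task :push_database_yml do
    on roles(:app, :db) do
      upload! "config/database.yml", "#{shared_path}/config/database.yml"
    end
  end
end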
I personally don't change on-server configs often enough or on enough servers yet to have tried to automate it. Crafting an scp command which copies the desired config file to all of the required servers has sufficed in the past.
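For example (hostnames and paths purely illustrative):

for h in app1.example.com app2.example.com; do
  scp config/database.yml deploy@"$h":/var/www/myapp/shared/config/database.yml
done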
I have a modular Sinatra app without a DB. In order to test memcache, I have some test files that need to be created and deleted on the file system. I would like to generate these files in an AfterConfiguration hook, using some helper methods (which live in a module shared with RSpec, which also needs to create/delete these files for testing). I only want to create them once, at the start of Cucumber.
I do not seem to be able to access the helpers from within AfterConfiguration, which lives in "support/hooks.rb." The helpers are accessible from Cucumber's steps, so I know they have been loaded properly.
This previous post seems to have an answer: Want to load seed data before running cucumber
The second example in this answer seems to say my modules should be accessible to my AfterConfiguration block, but I get "undefined method `foo' for nil:NilClass" when attempting to call helper method "foo".
I can pull everything out into a rakefile and run it that way, but I'd like to know what I'm missing here.
After digging around in the code, it appears that AfterConfiguration not only runs before any features are loaded, but before World is instantiated. Running self.class inside the AfterConfiguration block returns NilClass; running it inside any other hook, such as a Before, returns MyWorldName. In retrospect this makes sense, as every feature is run in a separate instance of World.
This is why helpers defined as instance methods (i.e. def method_name) are unknown. Changing my methods to module methods (i.e. def ModuleName.method_name) allows them to function, since they really are module methods anyway.
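In code, the fix looks like this (a sketch; FileHelpers and the file names are illustrative):

# features/support/file_helpers.rb
require "fileutils"

module FileHelpers
  TEST_FILES = %w[tmp/cache_a.txt tmp/cache_b.txt]

  # Module methods are callable before World exists.
  def self.create_test_files
    FileUtils.mkdir_p("tmp")
    TEST_FILES.each { |f| File.write(f, "fixture") }
  end

  def self.delete_test_files
    TEST_FILES.each { |f| File.delete(f) if File.exist?(f) }
  end
end

# features/support/hooks.rb
AfterConfiguration do |config|
  FileHelpers.create_test_files   # runs once, before any feature is loaded
end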
(I am also using .NET 4.0 and VS 2010.)
I created a function import returning a complex type, as explained at http://msdn.microsoft.com/en-us/library/bb896231.aspx. The function import and new complex type appear in my .edmx file and in the Designer.cs file. However, the function does not appear when I view the service in the browser, and when I add or update a service reference in the client project, the function does not appear there either - as is to be expected, given the first result.
Creating an imported function and using it seems conceptually very simple and straightforward, and one would think it would just work, as Microsoft's step-by-step instructions appear to suggest: http://msdn.microsoft.com/en-us/library/cc716672.aspx#Y798 (which article shows the SP returning entity types - I tried this also, and it doesn't work for me either).
This blog post shows the addition of a method to the DataService class, which Microsoft's instructions omit: http://www.codegain.com/articles/wcf/miscellaneous/how-to-use-stored-procedure-in-wcf-data-service.aspx
I tried adding one method returning a list of entity types and another returning a list of complex types, but had no success: I still could not access the functions, either directly via the browser or from the client application via a service reference.
Thanks in advance for any help with this.
config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
MS would do well to add a note to the walkthroughs stating that the above bit of code must be there. (It may be better to enable each operation explicitly rather than to use "*".)
http://www.codegain.com/articles/wcf/miscellaneous/how-to-use-stored-procedure-in-wcf-data-service.aspx shows that line of code. Also, a version of it is there, commented out, in the code generated when one creates the WCF Data Service. Some of us like to delete commented-out code that we aren't using and that seems irrelevant, perhaps doing so a bit prematurely, sometimes.
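For context, that line belongs in the service's InitializeService method, and each imported function needs a wrapper operation to be reachable (a sketch; MyEntities, Item and GetItemsByName are placeholder names):

using System.Data.Services;
using System.Linq;
using System.ServiceModel.Web;

public class MyDataService : DataService<MyEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // Without this rule the operation is never exposed:
        config.SetServiceOperationAccessRule("GetItemsByName", ServiceOperationRights.All);
    }

    // The wrapper method the blog post describes adding:
    [WebGet]
    public IQueryable<Item> GetItemsByName(string name)
    {
        return CurrentDataSource.GetItemsByName(name).AsQueryable();
    }
}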