I'm testing Alembic for a Python project. The autogeneration is really nice, but dropping is not very helpful if you need to work on customer databases with many different versions, for example.
Being able to activate or deactivate dropping for different scenarios would be the best solution.
I made my own configuration in env.py so I can use more than one base script. But if I add a new script (defining a new table) and autogenerate a migration script, it contains an autogenerated drop of all previously migrated tables.
I already looked at the Mako template file. How could a restriction be integrated there?
I found a way to filter my list of migration operations:
hand a filter function over to the "process_revision_directives" hook (all of this is configured in env.py).
from alembic.operations import ops

def process_revision_directives(context, revision, directives):
    script = directives[0]

    # process both "def upgrade()" and "def downgrade()"
    for directive in (script.upgrade_ops, script.downgrade_ops):
        # rewrite the list of "ops" so that DropColumnOp and DropTableOp
        # are removed; needs a recursive function
        directive.ops = list(
            _filter_drop_elm(directive.ops)
        )
def _filter_drop_elm(directives):
    # filter Drop ops out of the list of directives and yield the result
    for directive in directives:
        if isinstance(directive, ops.DropTableOp):
            continue
        elif isinstance(directive, ops.ModifyTableOps):
            # ModifyTableOps is a container of ALTER TABLE types of
            # commands; process those in place recursively
            directive.ops = list(
                _filter_drop_elm(directive.ops)
            )
            if not directive.ops:
                continue
        elif isinstance(directive, ops.DropColumnOp):
            continue
        # otherwise, if not filtered out, yield the directive
        yield directive
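For completeness, a minimal sketch of how the hook above gets handed to Alembic inside env.py's run_migrations_online(); apart from the process_revision_directives keyword, this is just the boilerplate from the generated env.py (config, target_metadata, engine_from_config, pool and context are assumed to be defined there as usual):

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # pass the filter defined above to autogenerate
            process_revision_directives=process_revision_directives,
        )

        with context.begin_transaction():
            context.run_migrations()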
I am trying to test a view created in Postgres, but it is returning an empty result set. However, when testing out the view in an Elixir interactive shell, I get back the expected data. Here are the steps I have taken to create and test the view:
Create a migration:
def up do
  execute """
  CREATE VIEW example_view AS
  ...
Create the schema:
import Ecto.Changeset

schema "test_view" do
  field(:user_id, :string)
Test:
describe "example loads" do
setup [
:with_example_data
]
test "view" do
query = from(ev in Schema.ExampleView)
IO.inspect Repo.all(query)
end
end
The response I get back is an empty list: [].
Is there a setting that I am missing to allow for views to be tested in test?
As pointed out in one of the comments:
iex, mix phx.server... run on the :dev environment and the dev DB
tests use the :test environment and run on a separate DB
It actually makes a lot of sense, because you want your test suite to be reproducible and independent of whatever records you might create/edit in your dev env.
You can open iex in the :test environment to confirm that your query returns the empty array here too:
MIX_ENV=test iex -S mix
What you'll need is to populate your test DB with some known records before querying. There are at least 2 ways to achieve that: fixtures and seeds.
Fixtures:
define some helper functions to create records in test/support/test_helpers.ex (typically: takes some attrs, adds some defaults and calls some create_ function from your context)
def foo_fixture(attrs \\ %{}) do
  {:ok, foo} =
    attrs
    |> Enum.into(%{name: "foo", bar: " default bar"})
    |> MyContext.create_foo()

  foo
end
call them within your setup function or test case before querying (see the sketch after this list)
side note: you should use DataCase for tests involving the DB. With DataCase, each test is wrapped in its own transaction and any fixture that you created will be rolled back at the end of the test, so tests are isolated and independent from each other.
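For example, a minimal sketch of a test module that calls such a fixture from setup before querying the view (the app, helper and schema module names are taken from above or hypothetical; it assumes the standard Phoenix DataCase, which provides Repo):

defmodule MyApp.ExampleViewTest do
  use MyApp.DataCase

  import Ecto.Query
  import MyApp.TestHelpers   # where foo_fixture/1 is defined

  setup do
    # insert a known record so the view has something to return
    foo_fixture()
    :ok
  end

  test "view returns the seeded rows" do
    refute Repo.all(from(ev in Schema.ExampleView)) == []
  end
end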
Seeds:
If you want to include some "long-lasting" records as part of your "default state" (e.g. for a list of countries, categories...), you could define some seeds in priv/repo/seeds.exs.
The file should have been created by the Phoenix generator and shows how to add seeds (typically using Repo.insert!/1).
By default, mix will run those seeds whenever you run mix ecto.setup or mix ecto.reset, just after your migrations (whatever env is used).
To apply any changes in seeds.exs, you can run the following:
# reset dev DB
mix ecto.reset
# reset test DB
MIX_ENV=test mix ecto.reset
If you need some seeds to be environment specific, you can always introduce different seed files (e.g. dev_seeds.exs) and modify your mix.exs to configure ecto.setup, for example like the sketch below.
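A hedged sketch of such an aliases block in mix.exs (the dev_seeds.exs file name and the env_seeds helper are hypothetical; the rest is the standard Phoenix alias setup):

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"] ++ env_seeds(),
    "ecto.reset": ["ecto.drop", "ecto.setup"]
  ]
end

# hypothetical helper: only run the extra seed file in the :dev environment
defp env_seeds do
  if Mix.env() == :dev, do: ["run priv/repo/dev_seeds.exs"], else: []
end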
Seeds can be very helpful not only for tests but for dev/staging in the early stage of a project, while you are still tinkering a lot with your schema and you are dropping the DB frequently.
I usually find myself using a mix of both approaches.
I'm restructuring my Wagtail app to remove an IndexPage that only has a single item in it, and moving that item to be a child of the current IndexPage's parent.
basically moving from this:
Page--|
      |--IndexPage--|
                    |--ChildPages (there's only ever 1 of these)
to this:
Page--|
      |--ChildPage
I've made the changes to the models so that this structure is used for creating new content and fixed the relevant views to point to the ChildPage directly. But now I want to migrate the current data to the new structure and I'm not sure how to go about it... Ideally this would be done in a migration so that we would not have to do any of this manipulation by hand.
Is there a way to move these ChildPages up the tree programmatically during a migration?
Unfortunately there's a hard limitation that (probably) rules out the possibility of doing page tree adjustments within migrations: tree operations such as inserting, moving and deleting pages are implemented as methods on the Page model, and within a migration you only have access to a 'dummy' version of that model, which only gives you access to the database fields and basic ORM methods, not those custom methods.
(You might be able to work around this by putting from wagtail.wagtailcore.models import Page in your migration and using that instead of the standard Page = apps.get_model("wagtailcore", "Page") approach, but I wouldn't recommend that - it's liable to break if the migration is run at a point in the migration sequence where the Page model is still being built up and doesn't match the 'real' state of the model.)
Instead, I'd suggest writing a Django management command to do the tree manipulation - within a management command it is safe to import the Page model from wagtailcore, as well as your specific page models. Page provides a method move(target, pos) which works as per the Treebeard API - the code for moving your child pages might look something like:
from myapp.models import IndexPage

# ...

for index_page in IndexPage.objects.all():
    for child_page in index_page.get_children():
        child_page.move(index_page, 'right')

    index_page.delete()
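Since a management command is the recommended route, here is a minimal sketch of wrapping that loop in one; the app name, file path (e.g. myapp/management/commands/flatten_index_pages.py) and command name are hypothetical:

from django.core.management.base import BaseCommand

from myapp.models import IndexPage


class Command(BaseCommand):
    help = "Move each IndexPage's children up one level, then delete the IndexPage."

    def handle(self, *args, **options):
        for index_page in IndexPage.objects.all():
            for child_page in index_page.get_children():
                # move the child to be a sibling of its old parent (Treebeard API)
                child_page.move(index_page, 'right')
            self.stdout.write("Flattened %s" % index_page.title)
            index_page.delete()

It can then be run with python manage.py flatten_index_pages on each environment.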
Theoretically it should be possible to build a move() using the same sort of manipulations that Daniele Miele demonstrates in Django-treebeard and Wagtail page creation. It'd look something like this Python pseudocode:
def move(page, target):
    # assuming pos='last_child' but other cases follow similarly,
    # just with more bookkeeping

    # first, cut it out of its old tree
    page.parent.numchild -= 1
    for sib in page.right_siblings:  # i.e. those with a greater path
        old = sib.path
        new = sib.path[:-4] + f"{int(sib.path[-4:]) - 1:04}"
        sib.path = new
        for nib in sib.descendants:
            nib.path = nib.path.replace_prefix(old, new)

    # now, update itself
    old_path = page.path
    new_path = target.path + f"{target.numchild + 1:04}"
    page.path = new_path

    old_url_path = page.url_path
    new_url_path = target.url_path + page.url_path.last
    page.url_path = new_url_path

    old_depth = page.depth
    new_depth = target.depth + 1
    page.depth = new_depth

    # and its descendants
    depth_change = new_depth - old_depth
    for descendant in page.descendants:
        descendant.path = descendant.path.replace_prefix(old_path, new_path)
        descendant.url_path = descendant.url_path.replace_prefix(old_path, new_path)
        descendant.depth += depth_change

    # finally, update its new parent
    target.numchild += 1
The core concept that makes this manipulation simpler than it looks is: when a node gets reordered or moved, all its descendants need to be updated, but the only update they need is the exact same update their ancestor got. It's applied as a prefix replacement (if str) or a difference (if int), neither of which requires knowing anything about the descendant's exact value.
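To make the prefix replacement concrete, a tiny illustration with made-up 4-character path segments of the kind Treebeard stores for Wagtail pages:

old_parent_path = "000100020003"              # moved page's old path (depth 3)
new_parent_path = "00010005"                  # its new path under the target (depth 2)

descendant_path = "0001000200030001"          # a child of the moved page
new_descendant_path = new_parent_path + descendant_path[len(old_parent_path):]

assert new_descendant_path == "000100050001"  # same suffix, new prefix, depth reduced by 1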
That said, I haven't tested it; it's complex enough to be easy to mess up; and there's no way of knowing if I updated every invariant that Wagtail cares about. So there's something to be said for the management command way as well.
I have been tinkering with this trigger for hours now, and I think I have pinpointed the issue.
I have set up an example trigger like the one in the ML8 documentation.
Now I have modified it into a more real-world action.
The issue seems to be that I use a library module that holds my own functions in a lib.xqy. I have tested the lib itself in Query Console; all functions run fine.
The alert action itself also runs fine in QC.
The simpleTrigger works ok.
The more complex one runs IF I REMOVE the function that uses my own lib.
It seems that the trigger is run by a user, or from a place, where it cannot find my module (which is in the modules DB). I have set the trigger DB to point to the content DB.
The triggers look at a directory for new documents (document create).
If I want to use my own lib function the Error thrown is:
[1.0-ml] XDMP-MODNOTFOUND: (err:XQST0059) xdmp:eval("xquery version
"1.0-ml";
let $uri := '/marklo...", (),
<options xmlns="xdmp:eval"><database>12436607035003930594</database>
<modules>32519102440328...</options>)
-- Module /lib/sccss-lib.xqy not found
The module is in the modules-db...
Another thing that bothers me is that the example in the ML documentation does a
xdmp:document-insert("/modules/log.xqy",
text{ "
xquery version '1.0-ml';
..."
}, xdmp:permission('app-user', 'execute'))
What does the permission app-user do in this case?
Anyway main question: Why does the trigger not run if I use a custom module in the trigger action?
I have seen this question and think it is related but I do not understand the answer there...
EDIT start, more information on the trigger create statement:
xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

trgr:create-trigger("sensorTrigger",
  "Simple trigger for connection systems sensor, the action checks how long this device is around the sensor",
  trgr:trigger-data-event(
    trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
    trgr:document-content("create"),
    trgr:post-commit()),
  trgr:trigger-module(xdmp:database("cluey-app-content"), "/triggers/", "check-time-at-sensor.xqy"),
  fn:true(), xdmp:default-permissions())
Also, the trigger is indeed created from QC, so as admin (I still have to figure out how to do that by adding code to app_specific.rb). The trigger action is also loaded from QC with a document-insert statement equivalent to the trigger example in the docs.
For completeness, I added this to app_specific.rb per Geert's suggestion:
alias_method :original_deploy_modules, :deploy_modules

def deploy_modules()
  original_deploy_modules

  # and apply correct permissions
  r = execute_query %Q{
      xquery version "1.0-ml";
      for $uri in cts:uris()
      return (
        $uri,
        xdmp:document-set-permissions($uri, (
          xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
          xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
        ))
      )
    },
    { :db_name => @properties["ml.modules-db"] }
end
For testing I also loaded it as part of the content (using ./ml local deploy content to load it). As said before, the action is there and it will run, so there seems to be no issue with the permissions of the action document itself. What I do not understand is that as soon as I try to use my own module in the action, it fails to find the module or (see David's comment) does not have the right permissions on the module, so the trigger action fails to run... The module is loaded with Roxy under /src/lib/lib.xqy.
SECOND EDIT
I added all the trigger stuff to Roxy by adding the following to app_specific.rb:
# HK: for using modules that have no REST permissions in a REST extension
alias_method :original_deploy_modules, :deploy_modules

def deploy_modules()
  original_deploy_modules

  # Create triggers
  r = execute_query(%Q{
      xquery version "1.0-ml";
      import module namespace trgr="http://marklogic.com/xdmp/triggers"
        at "/MarkLogic/triggers.xqy";

      xdmp:log("Installing triggers.."),
      try {
        trgr:remove-trigger("sensorTrigger")
      } catch ($ignore) {
      };

      xquery version "1.0-ml";
      import module namespace trgr="http://marklogic.com/xdmp/triggers"
        at "/MarkLogic/triggers.xqy";

      trgr:create-trigger("sensorTrigger", "Trigger to check duration at sensor",
        trgr:trigger-data-event(
          trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
          trgr:document-content("create"),
          trgr:post-commit()
        ),
        trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy"),
        fn:true(),
        xdmp:default-permissions(),
        fn:false()
      )
    },
    ######## THIRD EDIT ###############
    # { :app_name => @properties["ml.app-name"] }
    { :db_name => @properties["ml.modules-db"] }
  )

  # and apply correct permissions
  r = execute_query %Q{
      xquery version "1.0-ml";
      for $uri in cts:uris()
      return (
        $uri,
        xdmp:document-set-permissions($uri, (
          xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
          xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
        ))
      )
    },
    { :db_name => @properties["ml.modules-db"] }
end
As you can see, the root path is now "/" in the line
trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy")
I also added permissions by hand, but still, as soon as I add the line pointing to sccs-lib.xqy, my trigger fails...
There are a number of criteria that need to be met for a trigger to work properly. David already mentioned some of them. Let me try to complete the list:
You need to have a database that contains the trigger definition. That is the database against which the trgr:create-trigger was executed. Typically Triggers or some app-triggers.
That trigger database needs to be assigned as triggers-database to the content database (not the other way around!).
You point to a trigger module that contains the code that will get executed as soon as a trigger event occurs. The trgr:trigger-module explicitly points to the uri of the module, and the database in which it is contained.
Any libraries used by that trigger module, need to be in the same database as the trigger module. Typically both trigger module, and related libraries are stored within Modules or some app-modules.
With regard to permissions, the following applies:
First of all, you need privileges to insert the document (uri, and collection).
Then, to be able to execute the trigger, the user that does the insert needs to have a role (directly, inherited, or via amps) that has read and execute permission on the trigger module, as well as on all related libraries.
Then that same user needs to have privileges to do whatever the trigger modules needs to do.
Looking at your create-trigger statement, I notice that the trigger module is pointing to the app-content database. That means it will be looking for libraries in the app-content database as well. I would recommend putting the trigger module and its libraries in the app-modules database.
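As a hedged sketch of that layout (the library namespace URI and function name are hypothetical): the trigger module and the library it imports both live in the modules database, and trgr:trigger-module then points at that database instead of app-content.

xquery version "1.0-ml";
(: trigger module /triggers/check-time-at-sensor.xqy, stored in the modules database :)

import module namespace trgr = "http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

(: hypothetical namespace URI; /lib/sccss-lib.xqy must sit in the same modules database :)
import module namespace lib = "http://example.com/sccss-lib"
  at "/lib/sccss-lib.xqy";

(: the trigger framework supplies the URI of the document that fired the trigger :)
declare variable $trgr:uri as xs:string external;

lib:check-time-at-sensor($trgr:uri)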
Also, regarding app-user execute permission: that is just a convention. The nobody user has the app-user role. That is typically used to allow the nobody user to run rewriter code.
HTH!
Could you please provide a bit more information - like perhaps the entire trigger create statement?
For creating the trigger, keep in mind:
the trigger database that you insert the trigger into has to be the one defined in the content database you refer to in the trigger and
trgr:trigger-module allows you to define the module database and the module to run. With this defined properly, I cannot see how /lib/sccss-lib.xqy is not found - unless it is a permissions issue...
Now on to the other item in your question: you test stuff in Query Console, which runs with the roles of the logged-in user - often people run it as admin... MarkLogic also gives a 'not found' message if a document is there but you simply do not have access to it. So it is possible that there is a problem with the permissions of the documents in your modules database.
There is a suitable method in sbt.Extracted to add a TaskKey to the current state. Assume I have inState: State:
val key1 = TaskKey[String]("key1")
Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
I have run into strange behavior when I do it twice. I get an exception in the following example:
val key1 = TaskKey[String]("key1")
val key2 = TaskKey[String]("key2")
val st1: State = Project.extract(inState).append(Seq(key1 := "key1 value"), inState)
val st2: State = Project.extract(st1).append(Seq(key2 := "key2 value"), st1)
Project.extract(st2).runTask(key1, st2)
leads to:
java.lang.RuntimeException: */*:key1 is undefined.
The question is - why does it work like this? Is it possible to add several TaskKeys while executing a particular task by making several calls to sbt.Extracted.append?
The example sbt project is sbt.Extracted append-example; to reproduce the issue just run sbt fooCmd.
Josh Suereth posted the answer to sbt-dev mail list. Quote:
The append function is pretty dirty/low-level. This is probably a bug in its implementation (or the lack of documentation), but it blows away any other appended setting when used.
What you want to do (I think) is append into the current "Session" so things will stick around and the user can remove what you've done via the "session clear" command.
Additionally, the settings you're passing are in "raw" or "fully qualified" form. If you'd like the setting you write to work exactly the same as it would from a build.sbt file, you need to transform it first, so the Scopes match the current project, etc.
We provide a utility in sbt-server that makes it a bit easier to append settings into the current session:
https://github.com/sbt/sbt-remote-control/blob/master/server/src/main/scala/sbt/server/SettingUtil.scala#L11-L29
I have tested the proposed solution and that works like a charm.
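Based on that quote, a minimal sketch of the simplest workaround: since each append blows away previously appended settings, append both keys in a single call instead of chaining two appends.

val key1 = TaskKey[String]("key1")
val key2 = TaskKey[String]("key2")

// one append with all the settings, instead of two appends that overwrite each other
val st: State = Project.extract(inState).append(
  Seq(key1 := "key1 value", key2 := "key2 value"),
  inState
)

Project.extract(st).runTask(key1, st) // key1 is still defined here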
I have many web pages that are clones of each other. They have the exact same database structure, just different data in different databases (each clone is for a different country, so everything is separated).
I would like to clean up my Sphinx config file so that I don't duplicate the same queries for every site.
I'd like to define a main source (with DB auth info) for every clone, a common source for every table I'd like to search, and then sources & indexes for every table and every clone.
But I'm not sure how exactly I should go about doing that.
I was thinking of something along these lines:
index common_index
{
    # charset_type, stopwords, etc
}

source common_clone1
{
    # sql_host, sql_user, ...
}

source common_clone2
{
    # sql_host, sql_user, ...
}

# ...

source table1
{
    # sql_query, sql_attr_*, ...
}

source clone1_table1 : ???
{
    # ???
}

# ...

index clone1_table1 : common_index
{
    source: clone1_table1
    # path, ...
}

# ...
So you can see where I'm confused :)
I thought I could do something like this:
source clone1_table1 : table1, common_clone1 {}
but it's not working obviously.
Basically what I'm asking is: is there any way to extend from two sources/indexes?
If this isn't possible I'll be "forced" to write a script that will generate my sphinx config file to ease maintenance.
Apparently this isn't possible (don't know if it's in the pipeline for the future). I'll have to resort to generating the config file with some sort of script.
I've created such a script; you can find it on GitHub: sphinx generate config php.
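For illustration, a minimal sketch (with made-up clone/table data and paths) of what such a generator can look like: it loops over the clones and tables and prints the repetitive source and index blocks, so everything is maintained in one place.

<?php
// hypothetical per-clone connection settings and per-table queries
$clones = [
    'clone1' => ['sql_host' => 'localhost', 'sql_db' => 'db_clone1'],
    'clone2' => ['sql_host' => 'localhost', 'sql_db' => 'db_clone2'],
];
$tables = [
    'table1' => 'SELECT id, title, body FROM table1',
];

foreach ($clones as $clone => $db) {
    // shared connection block per clone
    echo "source common_{$clone}\n{\n";
    echo "    sql_host = {$db['sql_host']}\n";
    echo "    sql_db   = {$db['sql_db']}\n";
    echo "}\n\n";

    foreach ($tables as $table => $query) {
        // one source and one index per clone/table combination
        echo "source {$clone}_{$table} : common_{$clone}\n{\n";
        echo "    sql_query = {$query}\n";
        echo "}\n\n";
        echo "index {$clone}_{$table} : common_index\n{\n";
        echo "    source = {$clone}_{$table}\n";
        echo "    path   = /var/lib/sphinx/{$clone}_{$table}\n";
        echo "}\n\n";
    }
}

Redirecting its output into the real config file (and re-running it whenever a clone or table is added) keeps the duplication out of version control.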