I have a model named Product.
I want configurable, across-the-board filtering for this model.
For example: sails.config.field = 2;
When I do GET /Product I want it to essentially do GET /Product?where={"field": 2}
The above works for blueprints by adding a policy, but I want consistent behavior when I use the Waterline ORM:
GET /Product
and Product.find() should return the same thing.
I could override Product.find on the model and it would work perfectly... IF there were some way for me to access the underlying find code.
The code I am using to intercept the blueprint is:
if (!req.query.where) {
  req.query.where = `{"status":{">":0,">=":${sails.config.catalogVersions.status}}}`;
} else {
  const parsedWhere = JSON.parse(req.query.where);
  parsedWhere.status = {
    '>': 0,
    '>=': sails.config.catalogVersions.status,
  };
  req.query.where = JSON.stringify(parsedWhere);
}
I could very easily apply this to a Model.find interceptor.
Is there any way that, once Sails is loaded, I can access the root find method on a model even if it's been overwritten at load time?
Maybe you could look at something like this:
https://github.com/muhammadghazali/sails-hook-pagination
It's a hook and might work for your intended use.
I ended up using the hook:orm:loaded event to run some code that monkeypatched all of the models with a defaultScope stored in each of my models. It works well, as I can easily modify the default criteria on all of the models and get consistent behavior across blueprints and Waterline.
For code see:
Is there a way to override sails.js waterline endpoint with custom controller, but keep built in pagination, and filtering?
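In outline, the approach looks something like this (a minimal, illustrative sketch, not the code from the linked answer; the hook layout, the criteria-object-only signature, and the per-model defaultScope property are assumptions based on the description above):

// api/hooks/default-scope/index.js
module.exports = function defaultScopeHook(sails) {
  return {
    initialize: function (done) {
      sails.on('hook:orm:loaded', function () {
        Object.keys(sails.models).forEach(function (identity) {
          var model = sails.models[identity];
          // `defaultScope` is a per-model property, e.g.
          // { status: { '>': 0, '>=': sails.config.catalogVersions.status } }
          if (!model.defaultScope) { return; }
          var originalFind = model.find;
          model.find = function (criteria) {
            // Merge the default criteria into every find() call; lodash (_) is
            // globally available in classic Sails. Only the criteria-object
            // signature is handled in this sketch.
            var merged = _.merge({}, model.defaultScope, criteria || {});
            return originalFind.call(model, merged);
          };
        });
        done();
      });
    }
  };
};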
I set the model on the view with view.setModel(model), get the model from the view, and request data with model.read("/entitySet('10000')").
The model is then filled up with /entitySet('10000')/properties.
But it is difficult to assign them to view fields, as now in the view, <Text text="{property}"> doesn't work. It has to be <Text text="{/entitySet('10000')/property}">.
On the other hand, if I set view's context binding to "/entitySet('10000')", then <Text text="{property}"> would start working.
Which one is the preferred method? When to use .read?
I almost never use .read if I want to use the results from an OData call directly in a binding context. The only time I use .read is if I want to manipulate the results before doing anything with them.
Look at this example from the sdk for instance: https://ui5.sap.com/#/entity/sap.ui.table.Table/sample/sap.ui.table.sample.OData
The syntax for this kind of binding is similar to read, but with a few differences in events and a few different methods depending on what you want to bind. Binding to a view, for instance, uses bindElement:
this.getView().bindElement("/entitySet('1000')");
After this, fields on that particular entity can be accessed as <Text text="{property}" />.
Here's an example from one of my current apps with events and some other call parameters:
this.getView().bindElement({
  path: `/Orders('${currentOrderNumber}')`,
  parameters: {
    expand: 'Texts'
  },
  events: {
    dataRequested: _ => this.getView().setBusy(true),
    dataReceived: data => {
      if (!this.getView().getBindingContext()) {
        // navigate to `Not Found` view
      }
    },
    change: _ => this.getView().setBusy(false)
  }
});
For a table, it's slightly different, since it depends on the aggregation you wish to bind, such as
oTable.bindRows({
    path: "properties"
});
Which is the same as:
<Table rows="{properties}" />
It's always important to be more expressive. Use the API that is specifically designed to do that one task.
Comparing the two variants:
myModel.read(sPath) with text="{/path/property}"
myControl.bindElement(sPath) with text="{property}"
I'd be perplexed by the 1st call, whereas with the 2nd call I'd know exactly what you want to achieve (you want to bind an element; alternatively, bindObject can also be used).
The same applies to the framework. Since you're telling it exactly what you want to achieve, the framework can improve its behavior based on your intent. E.g. in a (route)PatternMatched handler, when the user navigates to the same page, .bindElement with the same path won't trigger another request since the model has already stored the entity from the previous call. It can show the result immediately.
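For example, a sketch of that (route)PatternMatched case (route, parameter, and entity set names are made up here):

onInit: function () {
  this.getOwnerComponent().getRouter()
    .getRoute("detail") // hypothetical route name
    .attachPatternMatched(this._onPatternMatched, this);
},

_onPatternMatched: function (oEvent) {
  var sOrderId = oEvent.getParameter("arguments").orderId; // hypothetical route parameter
  // Re-binding to a path that was already fetched resolves from the model's
  // stored entity; no second request is sent for the same key.
  this.getView().bindElement("/Orders('" + sOrderId + "')");
}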
With .read, however, the framework doesn't know what you want to achieve, so it sends the request right away regardless of the application state.
Additionally, the 1st variant is anything but future-proof. It relies on the cached results, and it's almost a side effect that it works at all. There is no guarantee that this behavior will continue to work in later versions. Also, there won't be a read method in the V4 ODataModel.
TL;DR
v2.ODataModel#read
Does not create context from the response. Repeating .read("<same path>") always sends a new request.
Less expressive. Encourages app developers to work with a client-side model (e.g. JSONModel).
The application loses context awareness, increasing TCO and making the app less future-proof.
bindElement or bindObject
Creates context from the response and stores it internally so that the same request can return the data immediately.
Clearly expresses the intent; application as well as framework can work with the existing APIs.
More future-proof: v4.ODataModel does not support manual read. Imagine you've built your applications with the v2ODataModel.read-jsonModel.setData approach, and you need to migrate to v4. Have fun. :)
I honestly think that v2.ODataModel#read should never have become a public method. I wouldn't encourage anyone to use .read except when reading the $count value manually.
If the entity values need to be formatted, there are formatters and binding types out of the box which are also easy to extend.
If the application needs to restructure the response body, usually it's a sign of a poor design of the data model or the service not conforming to the OData spec.
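For instance, instead of copying data into a client-side model just to tweak a value, a declarative binding with a custom formatter does the job (property and handler names here are made up):

<!-- In the XML view: -->
<Text text="{ path: 'Status', formatter: '.formatStatus' }"/>

// In the controller:
formatStatus: function (sStatus) {
  // keep presentation logic out of the data; the bound entity stays untouched
  return sStatus === "D" ? "Deleted" : "Active";
}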
I almost agree with Jorg, but not entirely:
It really depends on what you are trying to achieve. If you're looking to display data from the backend, then the easiest way to go is with this.getView().bindElement(),
but if you need to manipulate the data before displaying it (like formatting text, displaying images from base64 strings, etc.), OR if you would like to create a new entity using some of the existing data, OR update the existing data, then this.getModel(sName).read() is the way to go, as in the success callback you can put the read entity, with all its deep entities, into a JSONModel and manipulate it from the local model.
If you use a local model, the data binding is pretty much the same in the view, except you additionally have to give the name of the model to take the data from. So, for example, if in the success callback of your Model.read() you set your data to a model named "localModel":
this.getModel().read(sObjectPath, {
  urlParameters: {
    $expand: ""
  },
  success: function (oData) {
    // "localModel" is the name you gave your new JSONModel in onInit,
    // when setting it on the view: this.getView().setModel(oJSONModel, "localModel")
    this.getModel("localModel").setData(oData);
  }.bind(this) // bind the controller so `this.getModel` works inside the callback
});
then in the XML view you would write:
<Text text="{localModel>/mainPropertyName}"/>
<!-- for displaying deep entities as List items or in a Table -->
<List items="{localModel>/deepEntityName}">
    <StandardListItem title="{localModel>propertyNamefromDeepEntity}" />
</List>
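For completeness, the onInit setup referenced in the success callback's comment above would look something like this:

onInit: function () {
  // create the client-side model that the read() success callback writes into
  this.getView().setModel(new sap.ui.model.json.JSONModel(), "localModel");
}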
From my experience working with apps more complex than read-only ones, I always use Model.read().
I am going to register interest in ALL_KEYS for my Pivotal GemFire client via Spring Data GemFire, but I find that ClientRegionFactoryBean has only this one method:
org.springframework.data.gemfire.client.ClientRegionFactoryBean.setInterests(Interest<MyRegionPojo>[] interests)
In this case, I can only set exact keys, but I want to register interest in all keys. My key is not a simple class like String or Long, but a complex object, MyRegionPojo.
Is there any way to implement the equivalent of the GemFire API call region.registerInterest("ALL_KEYS")?
Your problem statement is a bit vague, but I assume/suspect you are configuring your Spring Data GemFire (SDG) application using Spring JavaConfig?
However, I will quickly add that this is not unlike how you would register interest in all keys using SDG's XML namespace, as shown here.
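For reference, a sketch of that XML namespace configuration (element names per SDG's gfe namespace as I recall them; a regex of ".*" stands in for ALL_KEYS):

<gfe:client-region id="Example" shortcut="PROXY">
    <gfe:regex-interest pattern=".*"/>
</gfe:client-region>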
The JavaConfig approach is similar, but clearly based on "strongly-typed arguments", namely passing 1 or more sub-type instances of the o.s.d.g.client.Interest class to the o.s.d.g.client.ClientRegionFactoryBean.setInterests(:Interest<K>[]) method.
By way of example, you might do the following...
#Bean("Example")
public ClientRegionFactoryBean<?, ?> exampleRegion(GemFireCache gemfireCache) {
ClientRegionFactoryBean<MyRegionKey, MyRegionValue> exampleRegion =
new ClientRegionFactoryBean<>();
RegexInterest regexInterest = new RegexInterest();
regexInterest.setKey(".*");
exampleRegion.setCache(gemfireCache);
exampleRegion.setShortcut(ClientRegionShortcut.PROXY);
exampleRegion.setInterests(new Interest[] { regexInterest });
exampleRegion.setKeyConstraint(MyRegionKey.class);
exampleRegion.setValueConstraint(MyRegionValue.class);
return exampleRegion;
}
NOTE: I updated the example above to reflect the proper way to register (Regex) interests in SDG 1.9 or earlier. Keep in mind that o.s.d.g.client.RegexInterest.getRegex() delegates to getKey(), therefore you can set the regular expression using setKey(:String), as I have shown above.
Notice the o.s.d.g.client.RegexInterest sub-type registration, which is effectively the same as registering interest in "ALL_KEYS", as described here as well.
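For comparison, this corresponds to the plain GemFire API call from your question (assuming an existing client Region reference):

Region<MyRegionKey, MyRegionValue> example = clientCache.getRegion("Example");
example.registerInterestRegex(".*"); // matches every key, like the RegexInterest bean above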
Hope this helps!
-John
I want Administrators to enable/disable logging at runtime by changing the enabled property of the LogEnabledFilter in the config.
There are several threads on SO that explain workarounds, but I want it this way.
I tried to change the Logging Enabled Filter like this:
private static void FileConfigurationSourceChanged(object sender, ConfigurationSourceChangedEventArgs e)
{
    var fcs = sender as FileConfigurationSource;
    System.Diagnostics.Debug.WriteLine("----------- FileConfigurationSourceChanged called --------");

    LoggingSettings currentLogSettings = e.ConfigurationSource.GetSection("loggingConfiguration") as LoggingSettings;
    var fdtl = currentLogSettings.TraceListeners.Where(tld => tld is FormattedDatabaseTraceListenerData).FirstOrDefault();
    var currentLogFileFilter = currentLogSettings.LogFilters.Where(lfd => { return lfd.Name == "Logging Enabled Filter"; }).FirstOrDefault();
    var filterNewValue = (bool)currentLogFileFilter.ElementInformation.Properties["enabled"].Value;

    var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
    runtimeFilter.Enabled = filterNewValue;

    var test = Logger.Writer.IsLoggingEnabled();
}
But test always reveals the initially loaded config value; it does not change.
I thought that when changing the value in the config, the changes would be propagated automatically to the runtime configuration. But this isn't the case!
Setting it programmatically, as shown in the code above, doesn't work either.
It's time to rebuild Enterprise Library or shut it down.
You are right that the code you posted does not work. That code is using a config file (FileConfigurationSource) as the method to configure Enterprise Library.
Let's dig a bit deeper and see if programmatic configuration will work.
We will use the Fluent API since it is the preferred method for programmatic configuration:
var builder = new ConfigurationSourceBuilder();

builder.ConfigureLogging()
    .WithOptions
        .DoNotRevertImpersonation()
    .FilterEnableOrDisable("EnableOrDisable").Enable()
    .LogToCategoryNamed("General")
        .WithOptions.SetAsDefaultCategory()
        .SendTo.FlatFile("FlatFile")
            .ToFile(@"fluent.log");

var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);

var defaultWriter = new LogWriterFactory(configSource).Create();

defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;

defaultWriter.Write("Test2", "General");
If you try this code the filter will not be updated -- so another failure.
Let's try to use the "old school" programmatic configuration by using the classes directly:
var flatFileTraceListener = new FlatFileTraceListener(
    @"program.log",
    "----------------------------------------",
    "----------------------------------------"
);

LogEnabledFilter enabledFilter = new LogEnabledFilter("Logging Enabled Filter", true);

// Build Configuration
var config = new LoggingConfiguration();
config.AddLogSource("General", SourceLevels.All, true)
    .AddTraceListener(flatFileTraceListener);
config.Filters.Add(enabledFilter);

LogWriter defaultWriter = new LogWriter(config);

defaultWriter.Write("Test1", "General");

var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;

defaultWriter.Write("Test2", "General");
Success! The second ("Test2") message was not logged.
So, what is going on here? If we instantiate the filter ourselves and add it to the configuration, it works; but when relying on the Enterprise Library configuration, the filter value is not updated.
This leads to a hypothesis: when using Enterprise Library configuration new filter instances are being returned each time which is why changing the value has no effect on the internal instance being used by Enterprise Library.
If we dig into the Enterprise Library code we (eventually) hit on LoggingSettings class and the BuildLogWriter method. This is used to create the LogWriter. Here's where the filters are created:
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter());
So this line is using the configured LogFilterData and calling the BuildFilter method to instantiate the applicable filter. In this case, the BuildFilter method of the configuration class LogEnabledFilterData returns an instance of LogEnabledFilter:
return new LogEnabledFilter(this.Name, this.Enabled);
The issue with this code is that this.LogFilters.Select returns a lazily evaluated enumeration that creates LogFilters, and this enumeration is passed into the LogWriter to be used for all filter manipulation. Every time the filters are referenced, the enumeration is evaluated and a new Filter instance is created! This confirms the original hypothesis.
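Here is a standalone LINQ illustration of that deferred-execution behavior (not EntLib code, just a sketch of the pitfall; requires System, System.Collections.Generic, and System.Linq):

var sources = new[] { 1, 2, 3 };
IEnumerable<object> lazy = sources.Select(n => new object());

// Each enumeration re-runs the Select and builds fresh instances:
Console.WriteLine(ReferenceEquals(lazy.First(), lazy.First()));                 // False

// Materializing with ToList() runs the projection exactly once:
var materialized = lazy.ToList();
Console.WriteLine(ReferenceEquals(materialized.First(), materialized.First())); // True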
To make it explicit: every time LogWriter.Write() is called, a new LogEnabledFilter is created based on the original configuration. When the filters are queried by calling GetFilter(), a new LogEnabledFilter is created based on the original configuration. Any changes to the object returned by GetFilter() have no effect on the internal configuration, since it's a new object instance; internally, Enterprise Library will create yet another instance on the next Write() call anyway.
Firstly, this is just plain wrong; it is also inefficient to create new objects on every call to Write(), which could be invoked many times.
An easy fix for this issue is to evaluate the LogFilters enumeration by calling ToList():
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter()).ToList();
This evaluates the enumeration only once ensuring that only one filter instance is created. Then the GetFilter() and update filter value approach posted in the question will work.
Update:
Randy Levy provided a fix in his answer above.
Implement the fix and recompile Enterprise Library.
Here is the answer from Randy Levy:
Yes, you can disable logging by setting the LogEnabledFilter. The main way to do this would be to manually edit the configuration file -- this is the main intention of that functionality (the developer's guide references administrators tweaking this setting). Other similar approaches to setting the filter are to programmatically modify the original file-based configuration (which is essentially a reconfiguration of the block), or to reconfigure the block programmatically (e.g. using the fluent interface). None of the programmatic approaches are what I would call simple. – Randy Levy
If you try to get the filter and disable it, I don't think it has any effect without a reconfiguration. So the following code still ends up logging: var enabledFilter = logWriter.GetFilter<LogEnabledFilter>(); enabledFilter.Enabled = false; logWriter.Write("TEST"); One non-EntLib approach would be to just manage the enable/disable yourself with a bool property and a helper class. But I think the priority approach is a pretty straightforward alternative. – Randy Levy
Conclusion:
In your custom Logger class, implement an IsLoggingEnabled property and change/check it at runtime; a sketch of this wrapper follows after the snippet below.
This won't work:
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = false/true;
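A minimal sketch of that wrapper approach (class and member names are illustrative, not an EntLib API; Logger.Write is the EntLib static facade):

public static class AppLogger
{
    // Toggled at runtime, e.g. from an admin screen or a config-change handler
    public static bool IsLoggingEnabled = true;

    public static void Write(string message, string category)
    {
        if (!IsLoggingEnabled) return;
        Logger.Write(message, category);
    }
}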
Association auto-population is sexy during the early stages of app development. But as soon as the related models accumulate a high number of associated records, the API calls take a drastic performance hit. SailsJS provides a way to toggle this globally:
module.exports.blueprints.populate = true / false;
The ideal approach would be to disable this option globally and load the related models on demand. Is this possible? (The base use case would be how Laravel handles this with eager loading: http://laravel.com/docs/5.0/eloquent#eager-loading.)
You should be able to override the blueprint configuration per controller (see #/disabling-blueprints-on-a-per-controller-basis in the Sails docs):
You may also override any of the settings from config/blueprints.js on a per-controller basis by defining a '_config' key in your controller definition, and assigning it a configuration object with overrides for the settings in this file.
Try this in the controller where you want to activate populate:
module.exports = {
    _config: {
        populate: true
    }
}
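With populate disabled globally, associations can still be loaded on demand, either per request or in a custom action (model and association names below are hypothetical):

// Blueprint request: populate a specific association for this call only
// GET /product?populate=[categories]

// Waterline, inside a custom controller action:
Product.find({ status: 'active' })
  .populate('categories')
  .exec(function (err, products) {
    // products include their categories; other associations stay unloaded
  });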
My question is this: How can you implement a default server-side "filter" for a navigation property?
In our application we seldom actually delete anything from the database. Instead, we implement "soft deletes" where each table has a Deleted bit column. If this column is true the record has been "deleted". If it is false, it has not.
This allows us to easily "undelete" records accidentally deleted by the client.
Our current ASP.NET Web API returns only "undeleted" records by default, unless a deleted argument is sent as true from the client. The idea is that the consumer of the service doesn't have to worry about specifying that they only want undeleted items.
Implementing this same functionality in Breeze is quite simple, at least for base entities. For example, here is the implementation for the classic Todo example, adding a "Deleted" bit field:
// Note: Will show only undeleted items by default unless you explicitly pass deleted = true.
[HttpGet]
public IQueryable<BreezeSampleTodoItem> Todos(bool deleted = false) {
    return _contextProvider.Context.Todos.Where(td => td.Deleted == deleted);
}
On the client, all we need to do is...
var query = breeze.EntityQuery.from("Todos");
...to get all undeleted Todos, or...
var query = breeze.EntityQuery.from("Todos").withParameters({deleted: true})
...to get all deleted Todos.
But let's say that a BreezeSampleTodoItem has a child collection for the tools that are needed to complete that Todo. We'll call this "Tools". Tools also implements soft deletes. When we perform a query that uses expand to get a Todo with its Tools, it will return all Tools - "deleted" or not.
But how can I filter out these records by default when Todo.Tools is expanded?
It has occurred to me to have separate Web API methods for each item that may need to be expanded, for example:
[HttpGet]
public IQueryable<Todo> TodoAndTools(bool deletedTodos = false, bool deletedTools = false)
{
    return // ...Code to get filtered Todos with filtered Tools
}
I found some example code showing how to do this in another SO post, but it requires hand-coding each property of Todo. The code from the above-mentioned post also returns a List, not an IQueryable. Furthermore, this requires methods to be added for every possible expansion, which isn't cool.
Essentially what I'm looking for is some way to define a piece of code that gets called whenever Todos is queried, and another for whenever Tools is queried, preferably being able to pass an argument that defines whether it should return Deleted items. This could be anywhere on the server-side stack, be it in the Web API method itself or part of Entity Framework (note that filtering Include extensions is not supported in EF).
Breeze cannot do exactly what you are asking for right now, although we have discussed the idea of allowing the filtering of "expands", but we really need more feedback as to whether the community would find this useful. Please add this to the breeze User Voice and vote for it. We take these suggestions very seriously.
Moreover, as you point out, EF does not support this.
But... what you can do is use a projection instead of an expand to do something very similar:
public IQueryable<Object> TodoAndTools(bool deleted = false,
                                       bool deletedTools = false) {
    var baseQuery = _contextProvider.Context.Todos.Where(td => td.Deleted == deleted);
    return baseQuery.Select(t => new {
        Todo = t,
        Tools = t.Tools.Where(tool => tool.Deleted == deletedTools)
    });
}
Several things to note here:
1) We are returning an IQueryable of Object instead of an IQueryable of Todo.
2) Breeze will inspect the returned payload and automatically create breeze entities for any 'entityTypes' returned (even within a projection). So the result of this query will be an array of JavaScript objects, each with two properties, 'Todo' and 'Tools', where Tools is an array of 'Tool' entities. The nice thing is that both Todo and Tool entities returned within the projection will be 'full' breeze entities.
3) You can still pass client side filters based on the projected property names. i.e.
var query = EntityQuery.from("TodoAndTools")
    .where("Todo.Description", "startsWith", "A")
    .using(em);
4) Unlike filtered expands, EF does support this kind of projection.