Here is my scenario: imagine a screen with a dropdown of US states. This list is populated from a single Admin database. Depending on the selected state, the other items on the screen are filled from other databases; we have one database per state, all sharing a single schema. I don't have any issue using DI for the States dropdown. However, I am having trouble getting the selected state into the DbContext registration. I tested hardcoding a state and DI works fine. I would like to use Session for this, but I have read that you can't, and frankly I have not been able to make it work. Any suggestions would be appreciated.
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
services.AddScoped(p => p.GetService<IHttpContextAccessor>()?.HttpContext);
services.AddDbContext<AdminManagement.Data.AdminDataContext>(options =>
options.UseSqlServer(Configuration.GetSection("Connections:myAdmin").Value).UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));
//this is the issue: here I want to be able to pass the selected state
services.AddDbContext<CollectionDataContext>((serviceProvider, builder) =>
{
//I wish I could use this...any alternatives?
//HttpContext.Session.GetString("SelectedState");
//hardcoded for testing purposes. it works ok
var selectedDb = "SC";
//this gets the connection string from app settings, later I will get it from an API
var connectionString = GetConnectionStringFromService(selectedDb);
builder.UseSqlServer(connectionString);
});
//my one admin database Data context
services.AddScoped<AdminManagement.Data.AdminManagementQueries>();
// my multiple per-state database classes that use DI
services.AddScoped<CollectionManagementQueries>();
services.AddScoped<CollectionManagementCommands>();
You need to retrieve the IHttpContextAccessor from the service provider that is passed into the AddDbContext factory. That's done via:
var httpContextAccessor = serviceProvider.GetRequiredService<IHttpContextAccessor>();
Then, you can do something like:
var selectedDb = httpContextAccessor.HttpContext.Session.GetString("SelectedState");
Note, however, that IHttpContextAccessor is not registered by default. You can fix that by adding the following in ConfigureServices:
services.AddHttpContextAccessor();
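Putting the pieces together, the AddDbContext registration from the question could then look like this (a sketch: the "SC" fallback and GetConnectionStringFromService come from the question's own code, and session state must also be configured via services.AddSession() plus app.UseSession() for GetString to return a value):
services.AddDbContext<CollectionDataContext>((serviceProvider, builder) =>
{
    // Resolve the accessor registered by AddHttpContextAccessor().
    var httpContextAccessor = serviceProvider.GetRequiredService<IHttpContextAccessor>();
    // Read the state chosen on the screen; fall back to a default while testing.
    var selectedDb = httpContextAccessor.HttpContext?.Session.GetString("SelectedState") ?? "SC";
    var connectionString = GetConnectionStringFromService(selectedDb);
    builder.UseSqlServer(connectionString);
});
Because AddDbContext registers the context with a scoped lifetime, this factory runs once per request, so each request picks up whatever state its session currently holds.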
I want Administrators to enable/disable logging at runtime by changing the enabled property of the LogEnabledFilter in the config.
There are several threads on SO that explain workarounds, but I want it this way.
I tried to change the Logging Enabled Filter like this:
private static void FileConfigurationSourceChanged(object sender, ConfigurationSourceChangedEventArgs e)
{
var fcs = sender as FileConfigurationSource;
System.Diagnostics.Debug.WriteLine("----------- FileConfigurationSourceChanged called --------");
LoggingSettings currentLogSettings = e.ConfigurationSource.GetSection("loggingConfiguration") as LoggingSettings;
var fdtl = currentLogSettings.TraceListeners.Where(tld => tld is FormattedDatabaseTraceListenerData).FirstOrDefault();
var currentLogFileFilter = currentLogSettings.LogFilters.Where(lfd => { return lfd.Name == "Logging Enabled Filter"; }).FirstOrDefault();
var filterNewValue = (bool)currentLogFileFilter.ElementInformation.Properties["enabled"].Value;
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = filterNewValue;
var test = Logger.Writer.IsLoggingEnabled();
}
But test always reveals the initially loaded config value; it does not change.
I thought that, when changing the value in the config file, the changes would be propagated automatically to the runtime configuration. But this isn't the case!
Setting it programmatically, as shown in the code above, doesn't work either.
It's time to rebuild Enterprise Library or shut it down.
You are right that the code you posted does not work. That code is using a config file (FileConfigurationSource) as the method to configure Enterprise Library.
Let's dig a bit deeper and see if programmatic configuration will work.
We will use the Fluent API since it is the preferred method for programmatic configuration:
var builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
.WithOptions
.DoNotRevertImpersonation()
.FilterEnableOrDisable("EnableOrDisable").Enable()
.LogToCategoryNamed("General")
.WithOptions.SetAsDefaultCategory()
.SendTo.FlatFile("FlatFile")
.ToFile(@"fluent.log");
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
var defaultWriter = new LogWriterFactory(configSource).Create();
defaultWriter.Write("Test1", "General");
var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
If you try this code the filter will not be updated -- so another failure.
Let's try to use the "old school" programmatic configuration by using the classes directly:
var flatFileTraceListener = new FlatFileTraceListener(
#"program.log",
"----------------------------------------",
"----------------------------------------"
);
LogEnabledFilter enabledFilter = new LogEnabledFilter("Logging Enabled Filter", true);
// Build Configuration
var config = new LoggingConfiguration();
config.AddLogSource("General", SourceLevels.All, true)
.AddTraceListener(flatFileTraceListener);
config.Filters.Add(enabledFilter);
LogWriter defaultWriter = new LogWriter(config);
defaultWriter.Write("Test1", "General");
var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
Success! The second ("Test2") message was not logged.
So, what is going on here? If we instantiate the filter ourselves and add it to the configuration it works but when relying on the Enterprise Library configuration the filter value is not updated.
This leads to a hypothesis: when using Enterprise Library configuration new filter instances are being returned each time which is why changing the value has no effect on the internal instance being used by Enterprise Library.
If we dig into the Enterprise Library code we (eventually) hit on LoggingSettings class and the BuildLogWriter method. This is used to create the LogWriter. Here's where the filters are created:
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter());
So this line is using the configured LogFilterData and calling the BuildFilter method to instantiate the applicable filter. In this case, the BuildFilter method of the configuration class LogEnabledFilterData returns an instance of LogEnabledFilter:
return new LogEnabledFilter(this.Name, this.Enabled);
The issue with this code is that this.LogFilters.Select returns a lazily evaluated enumeration that creates the filters on demand, and this enumeration is passed into the LogWriter to be used for all filter manipulation. Every time the filters are referenced the enumeration is re-evaluated and a new filter instance is created! This confirms the original hypothesis.
To make it explicit: every time LogWriter.Write() is called, a new LogEnabledFilter is created based on the original configuration. When the filters are queried by calling GetFilter(), a new LogEnabledFilter is created based on the original configuration. Any changes to the object returned by GetFilter() have no effect on the internal configuration, since it's a new object instance; internally, Enterprise Library will create yet another instance on the next Write() call.
Not only is this behavior wrong, it is also inefficient: new filter objects are created on every call to Write(), which could be invoked many times.
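The pitfall is easy to reproduce in isolation; here is a minimal sketch (standalone, not Enterprise Library's actual code):
var names = new[] { "Logging Enabled Filter" };
// Lazy: the lambda runs again on every enumeration, creating a new object each time.
IEnumerable<LogEnabledFilter> filters = names.Select(n => new LogEnabledFilter(n, true));
var first = filters.First();
first.Enabled = false;              // mutate one instance...
var second = filters.First();       // ...but enumerating again builds a fresh one
Console.WriteLine(second.Enabled);  // True -- the change is lost
Materializing the enumeration up front avoids this, which is exactly the fix shown below.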
An easy fix for this issue is to evaluate the LogFilters enumeration by calling ToList():
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter()).ToList();
This evaluates the enumeration only once ensuring that only one filter instance is created. Then the GetFilter() and update filter value approach posted in the question will work.
Update:
Randy Levy provided a fix in his answer above.
Implement the fix and recompile Enterprise Library.
Here is the answer from Randy Levy:
Yes, you can disable logging by setting the LogEnabledFilter. The main way to do this would be to manually edit the configuration file -- this is the main intention of that functionality (the developer's guide references administrators tweaking this setting). Other similar approaches to setting the filter are to programmatically modify the original file-based configuration (which is essentially a reconfiguration of the block), or to reconfigure the block programmatically (e.g. using the fluent interface). None of the programmatic approaches are what I would call simple. -- Randy Levy
If you try to get the filter and disable it, I don't think it has any effect without a reconfiguration. So the following code still ends up logging: var enabledFilter = logWriter.GetFilter<LogEnabledFilter>(); enabledFilter.Enabled = false; logWriter.Write("TEST"); One non-EntLib approach would be just to manage the enable/disable yourself with a bool property and a helper class. But I think the priority approach is a pretty straightforward alternative. -- Randy Levy
Conclusion:
In your custom Logger class, implement an IsLoggingEnabled property and change/check it at runtime.
This won't work:
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = false/true;
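A minimal sketch of that wrapper approach (illustrative; AppLogger and its members are not part of Enterprise Library):
public static class AppLogger
{
    private static volatile bool isLoggingEnabled = true;
    // Toggle this at runtime, e.g. from an admin screen or a config-file watcher.
    public static bool IsLoggingEnabled
    {
        get { return isLoggingEnabled; }
        set { isLoggingEnabled = value; }
    }
    public static void Write(string message, string category)
    {
        if (isLoggingEnabled)
        {
            Logger.Write(message, category);
        }
    }
}
Callers go through AppLogger.Write instead of Logger.Write, so the enable/disable check is applied consistently in one place.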
I have a very weird problem with Unity here. I have the following:
public class UnityConfig
{
public static void RegisterTypes(IUnityContainer container)
{
container.RegisterType<IDBContext, MyDbContext>(new PerThreadLifetimeManager());
container.RegisterType<IUserDbContext>(new PerThreadLifetimeManager(), new InjectionFactory(c =>
{
var tenantConnectionString = c.Resolve<ITenantConnectionResolver>().ResolveConnectionString();
return new UserDbContext(tenantConnectionString);
}));
}
}
and then in the WebApiConfig.cs file, within the Register method:
var container = new UnityContainer();
UnityConfig.RegisterTypes(container);
config.DependencyResolver = new UnityResolver(container);
Basically, what I want to happen in the above code is this: on every request to the API, I want Unity to new up a UserDbContext based on the current user (a multi-tenant kind of environment). The TenantConnectionResolver is responsible for figuring out the connection string, and I then use that connection string to new up the UserDbContext.
Also note (not shown above) that TenantConnectionResolver takes an IDBContext in its constructor, because it needs that database to figure out the connection string from user information.
But for some reason, the code within the InjectionFactory runs at seemingly random times. For example, if I call //mysite.com/controller/action/1 repeatedly from a browser, the code in the InjectionFactory will occasionally run, but not on each request.
Am I incorrectly configuring Unity? Has anybody encountered anything similar to this?
Thanks in advance
The problem is very likely related to the LifetimeManager you are using. PerThreadLifetimeManager is not suited to a web context: threads are pooled, and a single thread will serve multiple requests in sequence, so a per-thread instance outlives any one request.
PerRequestLifetimeManager is probably what you want to use.
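A sketch of the change, assuming the Unity bootstrapper for ASP.NET Web API is installed (it supplies PerRequestLifetimeManager and the per-request HTTP module that lifetime depends on):
container.RegisterType<IUserDbContext>(new PerRequestLifetimeManager(), new InjectionFactory(c =>
{
    // Same factory as in the question; only the lifetime manager changes.
    var tenantConnectionString = c.Resolve<ITenantConnectionResolver>().ResolveConnectionString();
    return new UserDbContext(tenantConnectionString);
}));
With this lifetime, the factory runs once per HTTP request and the same UserDbContext instance is reused for the remainder of that request.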
Using the unit of work and repository patterns, I recently came across the issue that changes to the unit of work are not reflected in subsequent queries. Example:
var ctx = DIContainer.Current.Resolve<IB2bContext>();
var rep = DIContainer.Current.Resolve<IRepository<Word>>(
new DependencyOverride<IB2bContext>(ctx));
rep.Add(new Word { Text = "One" });
rep.Add(new Word { Text = "Two" });
rep.GetAll().ToList().ForEach(i =>
Console.Write(i.Text)); // nothing seen here
So, in other words, unless I call SaveChanges() to persist the objects to the database, I don't see them. Of course I can fiddle around with the ChangeTracker and/or do things like context.Entry(foo).Property(...).CurrentValue. But does that play well with a DDD-like decoupling of layers? I don't think so. And where is my consistent data view that once was called a database transaction?
Please enlighten me.
Armin
Your repository exposes a GetAll method, and the method itself executes a database query. If you want to see local data that has not yet been inserted into the database, you must add it to the result set yourself, for example like this:
public IEnumerable<Word> GetAll()
{
DbSet<Word> set = context.Set<Word>();
return set.AsEnumerable().Concat(set.Local);
}
The query execution is only responsible for returning persisted (real) data.
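With that change, the question's example would show the added words even before SaveChanges() is called (a sketch, assuming the same repository setup as above):
rep.Add(new Word { Text = "One" });
rep.Add(new Word { Text = "Two" });
rep.GetAll().ToList().ForEach(i =>
    Console.Write(i.Text)); // now prints "OneTwo"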
I'm trying to populate a GXT Grid using data retrieved from an online API (for instance, going to www.example.com/documents returns a JSON array of documents). In addition, I need to paginate the result.
I've read all the various blogs and tutorials, but most of them populate the pagination proxy using something like TestData.GetDocuments(). However, I want to get that info using HTTP GET.
I've managed to populate a grid, but without pagination, using a RequestBuilder + proxy + reader + loader. But it seems as though the actual loading of the data is "put off" until some hidden stage deep inside the GXT code. Pagination requires that data from the start, so I'm not sure what to do.
Can someone provide a simple code example which does what I need?
Thank you.
I managed to get this going, here is what I did:
First I defined the proxy and loader for my data, along with the paging toolbar:
private PagingModelMemoryProxy proxy;
private PagingLoader<PagingLoadResult<ModelData>> loader;
private PagingToolBar toolBar;
Next is the creation of each one, initializing with an empty ArrayList.
proxy = new PagingModelMemoryProxy(new ArrayList<EquipmentModel>());
loader = new BasePagingLoader<PagingLoadResult<ModelData>>(proxy);
loader.setRemoteSort(true);
toolBar = new PagingToolBar(100);
toolBar.bind(loader);
loader.load(0, 100);
Last, I have a set method in my view that gets called when the AJAX call is complete, but you could trigger it anywhere. Here is my entire set method; Equipment and EquipmentModel are my database and view models, respectively.
public void setEquipmentData(List<Equipment> data)
{
Collections.sort(data);
// build a list of models to be loaded
List<EquipmentModel> models = new ArrayList<EquipmentModel>();
for (Equipment equipment : data)
{
EquipmentModel model = new EquipmentModel(equipment);
models.add(model);
}
// load the list of models into the proxy and reconfigure the grid to
// refresh display.
proxy.setData(models);
ListStore<EquipmentModel> equipmentStore = new ListStore<EquipmentModel>(loader);
equipmentGrid.reconfigure(equipmentStore, equipmentColumnModel);
loader.load(0, 100);
}
The key here for me was re-creating the store with the same loader; the column model was pre-created and gets reused.
I am using EF4 Self-Tracking Entities (VS2010 Beta 2 CTP 2 plus the new T4 generator), but when I try to update entity information, the changes are not saved to the database as expected.
I set up two service calls: one is GetResource(int id), which returns a resource object; the second is SaveResource(Resource res). Here is the code:
public Resource GetResource(int id)
{
using (var dc = new MyEntities())
{
return dc.Resources.Where(d => d.ResourceId == id).SingleOrDefault();
}
}
public void SaveResource(Resource res)
{
using (var dc = new MyEntities())
{
dc.Resources.ApplyChanges(res);
dc.SaveChanges();
// Nothing save to database.
}
}
//Windows Console Client Calls
var res = service.GetResource(1);
res.Description = "New Change"; // Not updating...
service.SaveResource(res);
// does not change anything.
It seems to me that ChangeTracker.State always shows as "Unchanged".
Is anything wrong with this code?
This is probably a long shot... but:
I assume your service is actually in another tier? If you are testing in the same tier you will have problems.
Self-Tracking Entities (STEs) don't record changes while they are connected to an ObjectContext; the idea is that if they are connected to an ObjectContext, it can record changes for them, so there is no point doing the same work twice.
STEs start tracking once they are deserialized on the client using WCF, i.e. once they are materialized in a tier without an ObjectContext.
If you look through the generated code you should be able to see how to turn tracking on manually too.
Hope this helps
Alex
You have to share the assembly containing the STEs between the client and the service -- that is the main point. Then, when adding the service reference, make sure that "Reuse types in referenced assemblies" is checked.
The reason for this is that STEs contain logic which cannot be transferred by "Add service reference", so you have to share these types to have the tracking logic on the client as well.
After reading the following tip from Daniel Simmons, the STE starts tracking. Here is the link for the full article. http://msdn.microsoft.com/en-us/magazine/ee335715.aspx
Make certain to reuse the Self-Tracking Entity template’s generated entity code on your client. If you use proxy code generated by Add Service Reference in Visual Studio or some other tool, things look right for the most part, but you will discover that the entities don’t actually keep track of their changes on the client.
So, in the client, make sure you don't use Add Service Reference to get the proxy; instead, access the service through the following code:
var svc = new ChannelFactory<IMyService>("BasicHttpBinding_IMyService").CreateChannel();
var res = svc.GetResource(1);
If you are using STEs without WCF you may have to call StartTracking() manually.
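For example (a sketch; StartTracking() is one of the change-tracking extension methods emitted by the Self-Tracking Entity T4 template, so the exact usage depends on your generated code):
var res = svc.GetResource(1);
res.StartTracking();             // begin recording changes on the client
res.Description = "New Change";
svc.SaveResource(res);           // the change tracker now reports the entity as Modified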
I had the exact same problem and found the solution.
It appears that for the self-tracking entities to automatically start tracking, you need to reference your STE project before adding the service reference.
This way Visual Studio generates some .datasource files, which do the final trick.
I found the solution here:
http://blogs.u2u.be/diederik/post/2010/05/18/Self-Tracking-Entities-with-Validation-and-Tracking-State-Change-Notification.aspx
As for starting the tracking manually, it seems that you do not have these methods on the client-side.
Hope it helps...