Azure Service Fabric - Clone Actors with state

Is there any way to clone an actor and its state and create the exact same actor, with the same actor Id and its corresponding state in another application in Azure Service Fabric? I've looked at the backup and restore, but it doesn't seem to do what I need.
We have several instances of the same application type running actors in production. We need this functionality for 2 reasons:
1. We need to combine 2 of the applications into one, so all of the actors will need to be re-created, with their current state and their current IDs, in the other instance.
2. We would like to be able to clone production into a QA environment, which is on a different Azure server, so we can test upgrades and new code in the exact state production is in.
Any help with this is much appreciated!

I don't think there is built-in support for cloning, but I believe you can implement your own mechanism: iterate over your actors to get their states and then pass them to the other cluster.
To iterate:
var cancellationToken = new CancellationToken();
ContinuationToken continuationToken = null;
var actorProxyFactory = new ActorProxyFactory();
var serviceUri = new Uri("fabric:/MyActorApp/MyActorService");
// partitionKey identifies the partition to scan; repeat for each partition.
var actorServiceProxy = ActorServiceProxy.Create(serviceUri, partitionKey);
do
{
    // Page through all actors in this partition.
    var queryResult = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    foreach (var item in queryResult.Items)
    {
        var actor = actorProxyFactory.CreateActorProxy<IMyActor>(serviceUri, item.ActorId);
        var state = await actor.GetState();
    }
    continuationToken = queryResult.ContinuationToken;
} while (continuationToken != null);
If your actors keep a dynamic set of named state entries, you can get all the names with the GetStateNamesAsync method:
IEnumerable<string> stateNames = await StateManager.GetStateNamesAsync();
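Building on that, here is a minimal sketch of an export method inside the actor implementation (the method name is illustrative, and using object as the state type is only for brevity; real code should use the concrete serializable state types):

public async Task<Dictionary<string, object>> ExportStateAsync(CancellationToken cancellationToken)
{
    var export = new Dictionary<string, object>();
    // Enumerate every named state entry this actor owns.
    IEnumerable<string> stateNames = await this.StateManager.GetStateNamesAsync(cancellationToken);
    foreach (var name in stateNames)
    {
        // Read each value so the caller can persist it to the target cluster.
        export[name] = await this.StateManager.GetStateAsync<object>(name, cancellationToken);
    }
    return export;
}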
Hope this will help.

For #1, you need to use Alex's solution or something similar.
For #2, you can use the existing backup & restore mechanism. The target partition of the service you are restoring doesn't need to be the source partition you created the backup from.
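For reference, a backup can be triggered from a custom ActorService subclass; here is a rough sketch (MyActorService and TakeBackupAsync are illustrative names, and copying the backup folder to external storage is left out):

// Requires Microsoft.ServiceFabric.Actors.Runtime and Microsoft.ServiceFabric.Data.
class MyActorService : ActorService
{
    public MyActorService(StatefulServiceContext context, ActorTypeInformation typeInfo)
        : base(context, typeInfo) { }

    public async Task TakeBackupAsync(CancellationToken cancellationToken)
    {
        // Full backup; the callback runs once the backup folder is ready.
        var description = new BackupDescription(BackupOption.Full, PostBackupCallbackAsync);
        await this.BackupAsync(description, TimeSpan.FromMinutes(10), cancellationToken);
    }

    private Task<bool> PostBackupCallbackAsync(BackupInfo backupInfo, CancellationToken cancellationToken)
    {
        // Copy backupInfo.Directory to external storage here, then restore it
        // into the target cluster's partition.
        return Task.FromResult(true);
    }
}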

Related

Passing a selected database to a DbContext via DI

Here is my scenario. Imagine a screen with a dropdown of US states. This list is populated from a single Admin database. Depending on the selected state, other items on the screen get filled from other databases: we have a database per state, all sharing a single schema. I don't have any issue using DI for the states dropdown. However, I am having trouble getting the selected state into the DbContext registration. I tested hardcoding a state, and DI works fine. I would like to use Session for this, but I have read that you can't, and frankly I have not been able to make it work. Any suggestions would be appreciated.
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
services.AddScoped(p => p.GetService<IHttpContextAccessor>()?.HttpContext);
services.AddDbContext<AdminManagement.Data.AdminDataContext>(options =>
options.UseSqlServer(Configuration.GetSection("Connections:myAdmin").Value).UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));
//this is the issue here I want to be able to pass the selected state
services.AddDbContext<CollectionDataContext>((serviceProvider, builder) =>
{
//I wish I could use this...any alternatives?
//HttpContext.Session.GetString("SelectedState");
//hardcoded for testing purposes. it works ok
var selectedDb = "SC";
//this gets the connection string from app settings, later I will get it from an API
var connectionString = GetConnectionStringFromService(selectedDb);
builder.UseSqlServer(connectionString);
});
//my one admin database Data context
services.AddScoped<AdminManagement.Data.AdminManagementQueries>();
// my multiple-database classes that use DI
services.AddScoped<CollectionManagementQueries>();
services.AddScoped<CollectionManagementCommands>();
You need to retrieve the context from the service provider. That's done via:
var httpContextAccessor = serviceProvider.GetRequiredService<IHttpContextAccessor>();
Then, you can do something like:
var selectedDb = httpContextAccessor.HttpContext.Session.GetString("SelectedState");
Note, however, that IHttpContextAccessor is not registered by default. You can fix that by adding the following in ConfigureServices:
services.AddHttpContextAccessor();
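Putting the pieces together, the registration from the question might then look like this (a sketch; GetConnectionStringFromService and the "SelectedState" session key come from the question, and the "SC" fallback is only for illustration):

services.AddHttpContextAccessor();
services.AddDbContext<CollectionDataContext>((serviceProvider, builder) =>
{
    // Resolve the accessor from the container and read the selected state from session.
    var httpContextAccessor = serviceProvider.GetRequiredService<IHttpContextAccessor>();
    var selectedDb = httpContextAccessor.HttpContext.Session.GetString("SelectedState")
                     ?? "SC"; // fallback while testing
    var connectionString = GetConnectionStringFromService(selectedDb);
    builder.UseSqlServer(connectionString);
});

Also keep in mind that session state itself must be enabled (services.AddSession() in ConfigureServices and app.UseSession() in the pipeline), otherwise GetString will have nothing to return.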

Autofac Multi Tenant override and IEnumerable<T> inject/resolve

Please forgive my non-native English:
In short, What is the best way for a tenant to override default IEnumerable<T> registration?
TL;DR: I have a service ServiceToBeResolved(IEnumerable<IShitty> svcs) that needs an IEnumerable<IShitty> dependency, but not all of our tenants register an implementation of IShitty. So, in our application container, we created a do-nothing NoImplementShitty and registered it as a default IShitty to keep resolution happy: tenants that register their own implementation get the tenant-specific one, and tenants that forgot to register get the default. But we soon found that ServiceToBeResolved receives both the tenant's registered IShitty implementations and the default NoImplementShitty in its IEnumerable. What I really want for the IEnumerable<IShitty> dependency is: if the tenant registered one or more implementations, use only those; if the tenant registered nothing, use the default NoImplementShitty as the IEnumerable<IShitty>. I have played with .OnlyIf(), OnlyIfRegistered(), and .PreventDefault() on the app container, and none of them help, since Autofac builds the default container first and then the tenant's. I could certainly register NoImplementShitty for every tenant that is missing an IShitty registration, but that seems to forgo the multitenant override-the-default feature.
To be more specific, In our base AgreementModule, we have
builder.RegisterType<NoOpAgreementHandler>() //NoOpAgreementHandler is the IShitty
.As<IAgreementHandler>()
.InstancePerLifetimeScope();
In our tenantA, we have
public class TenantAContainerBuilder : ITenantContainerBuilder
{
public virtual object TenantId => "1";
public virtual void Build(ContainerBuilder builder)
{
builder.RegisterType<TenantAAgreementHandler>()
.As<IAgreementHandler>()
.InstancePerLifetimeScope();
}
}
We build container as below:
var appContainer = builder.Build();
var tenantIdentifier = new ManualTenantIdentificationStrategy(); //We have our own strategy here I just use the ManualTenantIdentificationStrategy for example
var multiTenantContainer = new MultitenantContainer(tenantIdentifier, appContainer);
//GetTenantContainerBuilders will basically give you all TenantBuilder like TenantAContainerBuilder above
foreach (IGrouping<object, ITenantContainerBuilder> source in GetTenantContainerBuilders().GroupBy(x => x.TenantId))
{
var configurationActionBuilder = new ConfigurationActionBuilder();
configurationActionBuilder.AddRange(source.Select(x => new Action<ContainerBuilder>(x.Build)));
multiTenantContainer.ConfigureTenant(source.Key, configurationActionBuilder.Build());
}
When try to resolving the service, if we do:
public DisbursementAgreementManager(IEnumerable<IAgreementHandler> agreementHandlers)
{
_agreementHandlers = agreementHandlers;
}
The agreementHandlers will be an IEnumerable containing both NoOpAgreementHandler and TenantAAgreementHandler. It seems weird to get NoOpAgreementHandler; I thought we would only get TenantAAgreementHandler. But if we change DisbursementAgreementManager to
public DisbursementAgreementManager(IAgreementHandler agreementHandler)
{
_agreementHandler = agreementHandler;
}
We will get only the TenantAAgreementHandler which is expected.
The default behavior of Autofac is there for a reason. Asking it to behave differently would be adding application logic at the dependency-injection level, which violates the separation of concerns (DI should only inject dependencies), leads directly to surprising behavior ("Why did DI not inject every available component?"), and undercuts the maintainability of the system.
This may be a non-issue.
The logic is self-contained inside each IAgreementHandler.
If so, at the point where they are invoked by DisbursementAgreementManager, they are all called and each performs its own logic (which may include a decision whether to do anything at all). E.g.:
foreach (var ah in _agreementHandlers) ah.Agree(disbursementInfo);
or maybe something like
foreach (var ah in _agreementHandlers.Where(a => a.ShouldRun(data) || overridingCondition))
{
var agreement = ah.Agree(info);
this.Process(agreement);
}
or whatever. The point is that if NoOpAgreementHandler is doing what it is supposed to (that is, nothing) then it should have no effect when it is called. No problem.
If the situation is other than described, then NoOpAgreementHandler and possibly IAgreementHandler need to be refactored.
There is another point of concern:
The reason we add the no-op is we have unit tests for registration/resolve in order to make sure all registration is properly configured.
Your testing requirements are bleeding into your primary logic. These DI configuration tests should be independent of the production DI configuration; NoOpAgreementHandler shouldn't even be in your primary project, only in the unit test project.
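As a rough sketch of that separation (RegisterProductionModules is a hypothetical helper standing in for your real application registrations), the fallback can live entirely in the test project and use Autofac's IfNotRegistered so it only applies when no real handler was registered:

// In the unit test project only.
var builder = new ContainerBuilder();
RegisterProductionModules(builder); // hypothetical: applies the real registrations

// The fallback lives with the tests, not in the primary project. IfNotRegistered
// makes it apply only when no other IAgreementHandler registration exists.
builder.RegisterType<NoOpAgreementHandler>()
       .As<IAgreementHandler>()
       .IfNotRegistered(typeof(IAgreementHandler));

using (var container = builder.Build())
{
    // Resolution sanity check: succeeds whether or not a tenant registered a handler.
    var handlers = container.Resolve<IEnumerable<IAgreementHandler>>();
}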

Setting up and accessing Flink Queryable State (NullPointerException)

I am using Flink v1.4.0 and I have set up two distinct jobs. The first is a pipeline that consumes data from a Kafka Topic and stores them into a Queryable State (QS). Data are keyed by date. The second submits a query to the QS job and processes the returned data.
Both jobs were working fine with Flink v.1.3.2. But with the new update, everything has broken. Here is part of the code for the first job:
private void runPipeline() throws Exception {
    StreamExecutionEnvironment env = configurationEnvironment();
    QueryableStateStream<String, DataBucket> dataByDate = env.addSource(sourceDataFromKafka())
        .map(NewDataClass::new)
        .keyBy("date") // key by the date field
        .asQueryableState("QSName", reduceIntoSingleDataBucket());
}
and here is the code on client side:
QueryableStateClient client = new QueryableStateClient("localhost", 6123);
// the state descriptor of the state to be fetched.
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}));
JobID jobId = JobID.fromHexString("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
String key = "2017-01-06";
CompletableFuture<ValueState<DataBucket>> resultFuture = client.getKvState(
    jobId,
    "QSName",
    key,
    BasicTypeInfo.STRING_TYPE_INFO,
    descriptor);
try {
    ValueState<DataBucket> valueState = resultFuture.get();
    DataBucket bucket = valueState.value();
    System.out.println(bucket.getLabel());
} catch (IOException | InterruptedException | ExecutionException e) {
    throw new RuntimeException("Unable to query bucket key: " + key, e);
}
I have followed the instructions as per the following link:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html
making sure to enable queryable state on my Flink cluster by copying flink-queryable-state-runtime_2.11-1.4.0.jar from the opt/ folder of the Flink distribution into the lib/ folder, and I checked that it runs in the task manager.
I keep getting the following error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:84)
at org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:253)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:210)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:174)
at com.company.dept.query.QuerySubmitter.main(QuerySubmitter.java:37)
Any idea what is happening? I think my requests don't reach the QS at all... I really don't know what, if anything, I should change. Thanks.
So, as it turned out, two things were causing this error. The first was the use of the wrong constructor for creating the descriptor on the client side. Rather than the one that only takes a name for the QS and a TypeHint, I had to use the one where a serializer and a default value are provided, as below:
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
"QSName",
TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
DataBucket.emptyBucket()); // or anything that can be used as a default value
The second was related to the host and port values. The port is different from v1.3.2, now set to 9069, and the host was also different in my case. You can verify both by checking the logs of any task manager for the line:
Started the Queryable State Proxy Server @ ....
Finally, in case you are here because you are looking to allow a port range for the queryable state client proxy, I suggest you follow the respective issue (FLINK-7788): https://issues.apache.org/jira/browse/FLINK-7788.

Disable logging on FileConfigurationSourceChanged - LogEnabledFilter

I want Administrators to enable/disable logging at runtime by changing the enabled property of the LogEnabledFilter in the config.
There are several threads on SO that explain workarounds, but I want it this way.
I tried to change the Logging Enabled Filter like this:
private static void FileConfigurationSourceChanged(object sender, ConfigurationSourceChangedEventArgs e)
{
var fcs = sender as FileConfigurationSource;
System.Diagnostics.Debug.WriteLine("----------- FileConfigurationSourceChanged called --------");
LoggingSettings currentLogSettings = e.ConfigurationSource.GetSection("loggingConfiguration") as LoggingSettings;
var fdtl = currentLogSettings.TraceListeners.Where(tld => tld is FormattedDatabaseTraceListenerData).FirstOrDefault();
var currentLogFileFilter = currentLogSettings.LogFilters.Where(lfd => { return lfd.Name == "Logging Enabled Filter"; }).FirstOrDefault();
var filterNewValue = (bool)currentLogFileFilter.ElementInformation.Properties["enabled"].Value;
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = filterNewValue;
var test = Logger.Writer.IsLoggingEnabled();
}
But test always reveals the initially loaded config value; it does not change.
I thought that, when changing the value in the config, the changes would be propagated automatically to the runtime configuration. But this isn't the case!
Setting it programmatically, as shown in the code above, doesn't work either.
It's time to rebuild Enterprise Library or shut it down.
You are right that the code you posted does not work. That code is using a config file (FileConfigurationSource) as the method to configure Enterprise Library.
Let's dig a bit deeper and see if programmatic configuration will work.
We will use the Fluent API since it is the preferred method for programmatic configuration:
var builder = new ConfigurationSourceBuilder();
builder.ConfigureLogging()
.WithOptions
.DoNotRevertImpersonation()
.FilterEnableOrDisable("EnableOrDisable").Enable()
.LogToCategoryNamed("General")
.WithOptions.SetAsDefaultCategory()
.SendTo.FlatFile("FlatFile")
.ToFile(@"fluent.log");
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
var defaultWriter = new LogWriterFactory(configSource).Create();
defaultWriter.Write("Test1", "General");
var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
If you try this code the filter will not be updated -- so another failure.
Let's try to use the "old school" programmatic configuration by using the classes directly:
var flatFileTraceListener = new FlatFileTraceListener(
#"program.log",
"----------------------------------------",
"----------------------------------------"
);
LogEnabledFilter enabledFilter = new LogEnabledFilter("Logging Enabled Filter", true);
// Build Configuration
var config = new LoggingConfiguration();
config.AddLogSource("General", SourceLevels.All, true)
.AddTraceListener(flatFileTraceListener);
config.Filters.Add(enabledFilter);
LogWriter defaultWriter = new LogWriter(config);
defaultWriter.Write("Test1", "General");
var filter = defaultWriter.GetFilter<LogEnabledFilter>();
filter.Enabled = false;
defaultWriter.Write("Test2", "General");
Success! The second ("Test2") message was not logged.
So, what is going on here? If we instantiate the filter ourselves and add it to the configuration it works but when relying on the Enterprise Library configuration the filter value is not updated.
This leads to a hypothesis: when using Enterprise Library configuration, new filter instances are returned each time, which is why changing the value has no effect on the internal instance used by Enterprise Library.
If we dig into the Enterprise Library code we (eventually) hit on LoggingSettings class and the BuildLogWriter method. This is used to create the LogWriter. Here's where the filters are created:
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter());
So this line is using the configured LogFilterData and calling the BuildFilter method to instantiate the applicable filter. In this case, the BuildFilter method of the configuration class LogEnabledFilterData returns an instance of LogEnabledFilter:
return new LogEnabledFilter(this.Name, this.Enabled);
The issue with this code is that this.LogFilters.Select returns a lazily evaluated enumeration that creates LogFilters, and this enumeration is passed into the LogWriter to be used for all filter manipulation. Every time the filters are referenced, the enumeration is evaluated and a new filter instance is created! This confirms the original hypothesis.
To make it explicit: every time LogWriter.Write() is called, a new LogEnabledFilter is created based on the original configuration. When the filters are queried by calling GetFilter(), a new LogEnabledFilter is created based on the original configuration. Any changes to the object returned by GetFilter() have no effect on the internal configuration, since it's a new object instance, and internally Enterprise Library will create yet another new instance on the next Write() call anyway.
Not only is this behavior just plain wrong, it is also inefficient: new objects are created on every call to Write(), which could be invoked many times.
An easy fix for this issue is to evaluate the LogFilters enumeration by calling ToList():
var filters = this.LogFilters.Select(tfd => tfd.BuildFilter()).ToList();
This evaluates the enumeration only once ensuring that only one filter instance is created. Then the GetFilter() and update filter value approach posted in the question will work.
Update:
Randy Levy provided a fix in his answer above.
Implement the fix and recompile Enterprise Library.
Here is the answer from Randy Levy:
Yes, you can disable logging by setting the LogEnabledFilter. The main way to do this would be to manually edit the configuration file; this is the main intention of that functionality (the developer's guide references administrators tweaking this setting). Other similar approaches to setting the filter are to programmatically modify the original file-based configuration (which is essentially a reconfiguration of the block), or to reconfigure the block programmatically (e.g. using the fluent interface). None of the programmatic approaches are what I would call simple. – Randy Levy
If you try to get the filter and disable it, I don't think it has any effect without a reconfiguration. So the following code still ends up logging: var enabledFilter = logWriter.GetFilter<LogEnabledFilter>(); enabledFilter.Enabled = false; logWriter.Write("TEST"); One non-EntLib approach would be to manage the enable/disable yourself with a bool property and a helper class. But I think the priority approach is a pretty straightforward alternative. – Randy Levy
Conclusion:
In your custom Logger class, implement an IsLoggingEnabled property and change/check it at runtime.
This won't work:
var runtimeFilter = Logger.Writer.GetFilter<LogEnabledFilter>("Logging Enabled Filter");
runtimeFilter.Enabled = false/true;
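Instead, a minimal sketch of the helper-class approach from the conclusion (AppLogger and its flag are illustrative names, not part of Enterprise Library):

// Hypothetical wrapper that gates Enterprise Library logging behind its own flag.
public static class AppLogger
{
    // Administrators (or config-watching code) can flip this at runtime.
    public static volatile bool IsLoggingEnabled = true;

    public static void Write(string message, string category)
    {
        if (!IsLoggingEnabled)
            return; // bypass the EntLib pipeline entirely while disabled
        Logger.Write(message, category);
    }
}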

Notification between classes using Observable

I have a class with a list of users from a server.
Other classes can manipulate the list on the server e.g. call add or delete operation.
My Core-Class has a reference to the other classes which are manipulating the list on the server.
I would like to:
1. Make an init call in Core-Class to get the list at the beginning.
2. Have the Core-Class be notified by plugins each time the list is manipulated on the server, so the Core-Class gets the list from the server again.
3. Notify other classes that the list was reloaded and forward the new list.
My structure
Core {
    users: [];
    plugin1: Plugin;
    plugin2: Plugin;

    //Get a new list of users from the server
    loadUsers() {
        userService.loadUsers().then(res => {
            this.users = res; // arrow function keeps `this` bound to Core
        });
    }
}
Plugin {
    //sends a request to the server to create a special user,
    //depending on plugin implementation
    createUser();
}
I'm only just starting to use Rx. I understand the factory methods, hot vs. cold observables, and other basic stuff, but I cannot see how to do this with Rx in the right way.
Thanks.
My idea of reactive implementation would be:
In your UserService add the following:
var loadSubject = new Subject();
var usersObservable = loadSubject.flatMap(() =>
    Observable.fromPromise(<your http call that returns promise>)).share();

function loadUsers() {
    loadSubject.next(true);
}
Then each plugin that made changes will simply call:
userService.loadUsers();
Your Core, plugins, and other classes that would like to get updated will simply do:
userService.usersObservable.subscribe(usersFreshValue => {
    this.users = usersFreshValue;
});
Note that I used .share() so that the underlying HTTP call is not duplicated for every subscriber.
Possible improvements:
Instead of the loadUsers method, have each plugin call userService.loadSubject.next(true);
Use backpressure/buffering operators to avoid too-frequent calls, e.g. debounce (CAUTION! used incorrectly, it might lead to starvation) or bufferWithTime + filter (to ensure you don't trigger on empty buffers).
Use behavior/replay subjects (BehaviorSubject/ReplaySubject) so that new subscribers quickly get the latest value.