ServiceStack.Redis cache cleared when application is restarted

I am using Redis for caching, and I also use ICacheClient in one of my services to read and write directly to the cache with Get and Set.
Redis is configured to persist its values to disk, so if I shut Redis down and restart it, the cached data is still there. However, if I run my ServiceStack services in debug mode, stop the website, and restart it, ServiceStack clears the Redis cache on the first call made to the service. How can I switch this off, or is it a bug? The items in the cache are set not to expire, and inspecting them individually confirms no expiry is set, so the problem is not expiration: ServiceStack appears to flush the database on the first call after a restart.
public ICacheClient CacheClient { get; set; }

public object Get(KeyValues request)
{
    var value = CacheClient.Get<object>(request.Key);
    var response = new KeyValuesResponse { Key = request.Key, Value = value };
    return response;
}

public void Put(KeyValuesRecord request)
{
    if (request.Key != null)
    {
        CacheClient.Set(request.Key, request.Value);
    }
}

public void Post(KeyValuesRecord request)
{
    CacheClient.Set(request.Key, request.Value);
}

public void Delete(KeyValues request)
{
    CacheClient.Remove(request.Key);
}
When I make the first Get call, e.g. to the URL http://localhost:4099/api/KeyValues/mykey, the whole Redis db(0) is cleared out.
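For reference, the cache client is wired up in AppHost.Configure() roughly as below (the host address and the pooled manager are example values, shown here only for context); nothing in this registration should issue a FLUSHDB by itself:

public override void Configure(Container container)
{
    // Redis connection manager from ServiceStack.Redis; host is an example value
    container.Register<IRedisClientsManager>(c =>
        new PooledRedisClientManager("localhost:6379"));

    // Resolve ICacheClient from the Redis manager so the injected CacheClient
    // talks to Redis rather than the default in-memory cache
    container.Register<ICacheClient>(c =>
        c.Resolve<IRedisClientsManager>().GetCacheClient());
}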

Related

MongoDB Error - Can't get Player Data

I'm working on a framework for a Minecraft server and I keep running into an error while retrieving player data from a MongoDB database. I have both a proxy plugin and a Spigot plugin that the framework is shaded into.
In the proxy plugin I can fetch and store the player data in my PlayerCache (a map that assigns UUIDs to PlayerData objects), but in the Spigot plugin I cannot: it throws a NullPointerException saying the data is not found in the cache, even though I am sure I called the cachePlayer method.
I believe the database is not being given enough time to fetch the information and cache it before I read it.
Spigot code:
@EventHandler
public void onLogin(PlayerLoginEvent event)
{
    PlayerCache.getInstance().cachePlayer(event.getPlayer().getUniqueId());
}

@EventHandler
public void onJoin(PlayerJoinEvent event)
{
    event.setJoinMessage(null);
    event.getPlayer().setDisplayName(PlayerCache.getInstance().getCachedRank(
        event.getPlayer().getUniqueId()
    ).getColor() + event.getPlayer().getName()); // Throws a NullPointerException.
}
PlayerCache:
public void cachePlayer(UUID uuid)
{
    PlayerData PD = PlayerManager.getInstance().getPlayerData(uuid);
    this.cache.put(uuid, PD);
}

public PlayerRank getCachedRank(UUID uuid)
{
    return this.cache.get(uuid).getRank();
}
Hope someone can help.

Working with TableController in Azure Mobile Apps / Services

I'm trying to understand how to work with TableController in Azure Mobile Apps. Here's the sample TodoItemController:
public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        MobileServiceContext context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems()
    {
        return Query();
    }

    // GET tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
    public SingleResult<TodoItem> GetTodoItem(string id)
    {
        return Lookup(id);
    }

    // PATCH tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
    public Task<TodoItem> PatchTodoItem(string id, Delta<TodoItem> patch)
    {
        return UpdateAsync(id, patch);
    }

    // POST tables/TodoItem
    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }

    // DELETE tables/TodoItem/48D68C86-6EA6-4C25-AA33-223FC9A27959
    public Task DeleteTodoItem(string id)
    {
        return DeleteAsync(id);
    }
}
Ideally, I'd like to avoid passing whole models like TodoItem around, to reduce incoming/outgoing bandwidth and limit clients to only the fields they should care about. If I did that, how would offline sync and the client-side SDKs be affected?
Is TableController intended only for simple CRUD operations like those above? Are there any examples on the Internet with more complex queries?
The Mobile Apps TableController is the basis for an OData-based CRUD interface. You will always transmit the entire model (which derives from EntityData, so it carries four additional fields: version, createdAt, updatedAt and deleted) to the client. However, the client can use an OData query to fetch a specific set of entities. For more information on OData, see http://www.odata.org/
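For example, with the managed client SDK a LINQ query is translated into an OData query string under the hood; the sketch below is illustrative only (it assumes a TodoItem DTO with a bool Complete property and a placeholder service URL):

// Microsoft.WindowsAzure.MobileServices
var client = new MobileServiceClient("https://your-app.azurewebsites.net");
var table = client.GetTable<TodoItem>();

// Sent to the server as roughly: GET /tables/TodoItem?$filter=(complete eq false)
var openItems = await table
    .Where(t => t.Complete == false)
    .ToListAsync();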
In the specific case of Offline Sync and using the Mobile Apps SDK for clients, the client SDK will issue a GET but limit the results to the last update time (which will be zero for the first request and hence will get everything). It will then push up the changes from the client. In certain cases (where the version does not match), it will have to do conflict resolution. Check out "How Offline Sync Works" in their documentation: https://azure.microsoft.com/en-us/documentation/articles/app-service-mobile-offline-data-sync-preview/
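In code, the client-side sync flow looks roughly like the sketch below (local store setup is omitted and the query name is arbitrary):

// Microsoft.WindowsAzure.MobileServices.Sync
var syncTable = client.GetSyncTable<TodoItem>();

// Pull: incremental sync keyed by the query name; only records updated
// since the last successful pull are fetched from the server.
await syncTable.PullAsync("allTodoItems", syncTable.CreateQuery());

// Push: send locally queued inserts/updates/deletes back to the server,
// which is where version-based conflict resolution can kick in.
await client.SyncContext.PushAsync();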

How do I warm up an actor's state from database when starting up?

My requirement is to run a long-running process every night at 1:00 AM that tags all products that have expired. Customers may be accessing some of the products on the website around the time the job runs, so those products already have actor instances; the others live only in the persistent store and have no instances because no customer is accessing them.
Where should I hook up the logic that reads an actor's latest state from the persistent store when creating a brand new actor? Should I make that call in the PreStart override? If so, how can I tell the ProductActor that a new actor is being created?
Or should I send the ProductActor a message like LoadMeFromAzureTable, which loads the state from the persistent store after the actor is created?
There are different ways to do it depending on what you need; there isn't precisely one "right" answer.
You could use a Persistent Actor to recover state from a durable store automatically on startup (or after a crash). Or, if you don't want to use that module (still in beta as of July 2015), you can do it yourself in one of two ways:
1) You could load your state in PreStart, but I'd only go with this if you can make the operation async via your database client and use the PipeTo pattern to send the results back to yourself incrementally (see the sketch after this list). But if you need ALL the state resident in memory before you start doing work, then you need to...
2) Make a finite state machine using behavior switching: start in a gated state, send yourself a message to load your data, and stash everything that comes in. Then switch to a receiving state and unstash all messages once your state is done loading. This is the approach I prefer.
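A minimal sketch of option 1, assuming the loaded state can be applied incrementally as it arrives (InitialState and LoadStateAsync are illustrative names):

public class PreloadingProductActor : ReceiveActor
{
    public PreloadingProductActor()
    {
        Receive<InitialState>(state =>
        {
            // apply the loaded state as it arrives
        });
    }

    protected override void PreStart()
    {
        // kick off the async load and pipe the result back to ourselves
        LoadStateAsync().PipeTo(Self);
    }

    private Task<InitialState> LoadStateAsync()
    {
        // placeholder for the real async database call
        return Task.FromResult(new InitialState());
    }
}

public class InitialState { }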
Example of option 2 (just mocking the DB load with a Task):
public class ProductActor : ReceiveActor, IWithUnboundedStash
{
    public IStash Stash { get; set; }

    public ProductActor()
    {
        // begin in the gated state
        BecomeLoading();
    }

    private void BecomeLoading()
    {
        Become(Loading);
        LoadInitialState();
    }

    private void Loading()
    {
        Receive<DoneLoading>(done =>
        {
            BecomeReady();
        });

        // stash any messages that come in until we're done loading
        ReceiveAny(o =>
        {
            Stash.Stash();
        });
    }

    private void LoadInitialState()
    {
        // load your state here asynchronously & send it back to Self via PipeTo
        Task.Run(() =>
        {
            // database loading task here
            return new Object();
        }).ContinueWith(tr =>
        {
            // do whatever (e.g. error handling)
            return new DoneLoading();
        }).PipeTo(Self);
    }

    private void BecomeReady()
    {
        Become(Ready);

        // our state is ready! put all those stashed messages back in the mailbox
        Stash.UnstashAll();
    }

    private void Ready()
    {
        // handle those unstashed + new messages...
        ReceiveAny(o =>
        {
            // do whatever you need to do...
        });
    }
}
/// <summary>
/// Marker message signalling that the initial load has completed.
/// </summary>
public class DoneLoading { }
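Hypothetical usage (system and actor names are arbitrary): messages sent while the actor is still in the Loading state are stashed and replayed once it switches to Ready.

var system = ActorSystem.Create("shop");
var productActor = system.ActorOf(Props.Create(() => new ProductActor()), "product-1");

// Safe to send immediately; the message sits in the stash until loading completes.
productActor.Tell("tag-expired-products");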

How to get OutArguments from a WorkflowApplication while the workflow waits for a response (bookmark or idle) and is not complete

Accessing OutArguments with WorkflowApplication while the workflow waits for a response (bookmark or idle) and is not yet complete.
I also used Tracking to retrieve the values, but instead of saving them to a database I came up with the following solution.
Make a TrackingParticipant and collect the data from an activity.
You can fine-tune the tracking participant's profile with a specific tracking query.
I added a public property, Outputs, to hold the data taken from the record.
public class CustomTrackingParticipant : TrackingParticipant
{
    // TODO: Fine-tune the profile with the correct query.
    public IDictionary<string, object> Outputs { get; set; }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var customTrackingRecord = record as CustomTrackingRecord;
        if (customTrackingRecord != null)
        {
            Outputs = customTrackingRecord.Data;
        }
    }
}
In your custom activity you can set the values you want to expose for tracking with a CustomTrackingRecord.
Here is a sample to give you an idea.
protected override void Execute(NativeActivityContext context)
{
    var customRecord = new CustomTrackingRecord("QuestionActivityRecord");
    customRecord.Data.Add("Question", Question.Get(context));
    customRecord.Data.Add("Answers", Answers.Get(context).ToList());
    context.Track(customRecord);

    // This creates a bookmark with the display name and the workflow goes idle.
    context.CreateBookmark(DisplayName, Callback, BookmarkOptions.None);
}
On the WorkflowApplication instance you can add the tracking participant to the extensions.
workflowApplication.Extensions.Add(new CustomTrackingParticipant());
I subscribed to the PersistableIdle event of the WorkflowApplication instance with the following method.
In the method I retrieve the tracking participant from the extensions.
Because the outputs were captured in the public property, we can read them there and copy them to a member outside the workflow. See the following example.
private PersistableIdleAction PersistableIdle(WorkflowApplicationIdleEventArgs workflowApplicationIdleEventArgs)
{
    var ex = workflowApplicationIdleEventArgs.GetInstanceExtensions<CustomTrackingParticipant>();
    Outputs = ex.First().Outputs;
    return PersistableIdleAction.Unload;
}
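Putting it together, the wiring on the WorkflowApplication instance looks roughly like this (Run() shown only for completeness):

workflowApplication.Extensions.Add(new CustomTrackingParticipant());
workflowApplication.PersistableIdle = PersistableIdle;
workflowApplication.Run();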
I hope this example helped.
Even simpler: Use another workflow activity to store the value you are looking for somewhere (database, file, ...) before starting to wait for a response!
You could use Tracking.
The required steps would be:
define a tracking profile which queries ActivityStates with the state Closed
implement a TrackingParticipant to save the OutArgument in process memory, a database or a file on disk
hook everything together
The link contains all the information you will need to do this; a rough sketch of such a participant follows below.
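For illustration, a participant following those steps might look like the sketch below; the activity name "MyActivity" is an assumption and should be replaced with the activity whose OutArgument you need:

// System.Activities.Tracking
public class OutArgumentTrackingParticipant : TrackingParticipant
{
    public IDictionary<string, object> Outputs { get; private set; }

    public OutArgumentTrackingParticipant()
    {
        // Only track the Closed state of the activity we care about
        // and extract all of its arguments.
        TrackingProfile = new TrackingProfile
        {
            Queries =
            {
                new ActivityStateQuery
                {
                    ActivityName = "MyActivity",
                    States = { ActivityStates.Closed },
                    Arguments = { "*" }
                }
            }
        };
    }

    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var activityState = record as ActivityStateRecord;
        if (activityState != null)
        {
            Outputs = activityState.Arguments;
        }
    }
}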

Using the Enterprise Library Caching Application Block in ASP.NET MVC 2

I am trying to use the Caching Application Block of the Microsoft Enterprise Library v5.0.
Here is the sample code I wrote, inside the Home controller's Index method.
Person p = new Person(10, "person1");
ICacheManager cacheMgr = CacheFactory.GetCacheManager("myCache");
ViewData["Message"] = "Welcome to ASP.NET MVC!";

if (Session["currsession"] != null)
{
    if (!cacheMgr.Contains(p.pid.ToString()))
    {
        Response.Write("item is not in cache");
        return View(p);
    }
    else
    {
        Response.Write("item is still in cache");
        return View(p);
    }
}
else
{
    Session["currsession"] = 1;
    cacheMgr.Add(p.pid.ToString(), p, CacheItemPriority.High, null, new SlidingTime(TimeSpan.FromSeconds(10)));
    return View(cacheMgr.GetData(p.pid.ToString()));
}
The Person class I use in the model just has a constructor and two public properties; no special features are used.
Now, is this the right way to use the caching provided by the Enterprise Library Caching Application Block? If not, how else could I write this block of code more efficiently?
Also, I only get the "item is not in cache" response after a delay of about 20 seconds. Is there a mistake in my implementation, or is there some caching theory behind this that I'm missing?
Please suggest the best practice for this usage.
Since you specify a sliding expiration of 10 seconds, you will get "item is not in cache" if you wait more than 10 seconds between page reloads.
Load first time in session => no message
Reload => in cache
Reload => in cache
Wait 10 seconds
Reload => not in cache
Is this what you're seeing?
Old answer:
The Enterprise Library Caching Application Block is probably way more than you need. My favorite cache is FubuMVC's Cache (just include FubuCore.dll). You can get a very similar implementation by using a ConcurrentDictionary with a "missing element" delegate, as shown below. Here's an example: http://social.msdn.microsoft.com/Forums/en/parallelextensions/thread/37bbc361-6851-43db-9e90-80cc7e6ac15f
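For instance, a minimal ConcurrentDictionary version looks like this (MyDatabase is a placeholder, just as in the FubuMVC example below):

// System.Collections.Concurrent
public class PersonCache
{
    private readonly ConcurrentDictionary<int, Person> _cache =
        new ConcurrentDictionary<int, Person>();

    public Person GetById(int id)
    {
        // the "missing element" delegate only runs when the key is absent
        return _cache.GetOrAdd(id, key => MyDatabase.People.GetById(key));
    }
}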
FubuMVC Cache example:
public class PersonCache
{
    private readonly Cache<int, Person> _Cache = new Cache<int, Person>();

    public PersonCache()
    {
        _Cache.OnMissing = key => MyDatabase.People.GetById(key);
    }

    public Person GetById(int id)
    {
        return _Cache[id];
    }
}