Infinispan cache preserving insertion order

I'm using Infinispan 5.0.1 for my caching needs.
The problem I'm having is that I need to get data back from the cache in the same order it was put in the cache.
For example:
Cache<String, String> myCache = defaultCacheManager.getCache("myCache");
myCache.put("1", "ONE");
myCache.put("2", "TWO");
myCache.put("3", "THREE");

Set<String> keySet = myCache.keySet();
for (String key : keySet) {
    System.out.println(myCache.get(key));
}
This should print out:
ONE
TWO
THREE

For the time being I've solved this by using ConcurrentLinkedHashMap, and it works well (a rough sketch of that kind of workaround is below).
Still, if someone knows the answer to the original question of how to do it in Infinispan, I'd appreciate the answer.
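
For reference, here is a minimal sketch of that kind of workaround using only JDK classes (a synchronized LinkedHashMap standing in for the third-party ConcurrentLinkedHashMap; the class and variable names are just for illustration):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderCacheSketch {
    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order; synchronizedMap adds coarse-grained
        // thread safety (unlike a true concurrent map, but enough to show the idea).
        Map<String, String> cache = Collections.synchronizedMap(new LinkedHashMap<>());

        cache.put("1", "ONE");
        cache.put("2", "TWO");
        cache.put("3", "THREE");

        // Iteration over a synchronized wrapper must be guarded manually.
        synchronized (cache) {
            for (Map.Entry<String, String> entry : cache.entrySet()) {
                System.out.println(entry.getValue()); // prints ONE, TWO, THREE in order
            }
        }
    }
}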

Why should it print out ONE TWO THREE? keySet() returns a Set, which is an unordered collection. Please see the Javadocs for Set to understand its contract.
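
If you do want to keep using Infinispan itself, one possible workaround (a sketch only, not an Infinispan feature; the wrapper class is made up for illustration) is to track insertion order yourself next to the cache:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.infinispan.Cache;

public class OrderedCacheWrapper {
    private final Cache<String, String> cache;
    private final Queue<String> insertionOrder = new ConcurrentLinkedQueue<>();

    public OrderedCacheWrapper(Cache<String, String> cache) {
        this.cache = cache;
    }

    public void put(String key, String value) {
        // Record the key only on first insertion; assumes put() returns the previous
        // value (the default Map contract), so re-puts don't duplicate the key.
        if (cache.put(key, value) == null) {
            insertionOrder.add(key);
        }
    }

    public void printInInsertionOrder() {
        // Note: entries evicted or expired from the cache would still appear in the
        // queue, so get() can return null for stale keys in a real setup.
        for (String key : insertionOrder) {
            System.out.println(cache.get(key));
        }
    }
}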


How does resource.data.size() work in firestore rules (what is being counted)?

TLDR: What is request.resource.data.size() counting in the firestore rules when writing, say, some booleans and a nested Object to a document? Not sure what the docs mean by "entries in the map" (https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#data, https://firebase.google.com/docs/reference/rules/rules.Map) and my assumptions appear to be wrong when testing in the rules simulator (similar problem with request.resource.data.keys().size()).
Longer version: I'm running into a problem in Firestore rules where I'm not able to update data as expected (despite similar tests working in the rules simulator). I've narrowed the problem down to the point where I can see that the cause is a rule checking that request.resource.data.size() equals a certain number.
An example of the data being passed to the firestore update function looks like
Object {
  "parentObj": Object {
    "nestedObj": Object {
      "key1": Timestamp {
        "nanoseconds": 998000000,
        "seconds": 1536498767,
      },
    },
  },
  "otherKey": true,
}
where the timestamp is generated via firebase.firestore.Timestamp.now().
This appears to work fine in the rules simulator, but not for the actual data when doing
let obj = {}
obj.otherKey = true
// since want to set object key name dynamically as nestedObj value,
// see https://stackoverflow.com/a/47296152/8236733
obj.parentObj = {} // needed for adding nested dynamic keys
obj.parentObj[nestedObj] = {
  key1: fb.firestore.Timestamp.now()
}
firebase.firestore.collection('mycollection')
  .doc('mydoc')
  .update(obj)
Among some other rules, I use request.resource.data.size() == 2, and this appears to be the rule that causes a permission-denied error (commenting out this rule gets things working again). I would have thought that since the object is being passed with 2 top-level keys, request.resource.data.size() would be 2, but this is apparently not the case (nor is it the total number of keys in the passed object; similar problem with request.resource.data.keys().size()). So that's a long example for a short question. It would be very helpful if someone could clarify what is going wrong here.
From my last communications with Firebase support around a month ago, there were issues with request.resource.data.size() and timestamp-based security rules for queries.
I was also told that request.resource.data.size() is the size of the document AFTER a successful write. So if you're writing 2 additional keys to a document with 4 keys, the value you should be checking against is 6, not 2.
Having said all that, I am still having problems with request.resource.data.size() and with alternatives such as request.resource.size(), which seems to be used in this documentation:
https://firebase.google.com/docs/firestore/solutions/role-based-access
I also have some places in my security rules where it seems to work. I personally don't know why that is, though.
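
To make the "size after a successful write" point above concrete, here is a small, hypothetical Java helper (not part of any Firebase SDK; the names are invented) that computes the number the rule would be compared against for a plain update with no field deletes:

import java.util.HashSet;
import java.util.Set;

public class PostWriteSize {
    // request.resource.data.size() is reported to be the field count of the document
    // AFTER the write, i.e. the union of the existing top-level keys and the keys
    // in the update payload (assuming nothing is deleted).
    static int expectedSizeAfterUpdate(Set<String> existingKeys, Set<String> updateKeys) {
        Set<String> merged = new HashSet<>(existingKeys);
        merged.addAll(updateKeys);
        return merged.size();
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("a", "b", "c", "d");    // document already has 4 fields
        Set<String> update = Set.of("parentObj", "otherKey"); // writing 2 new fields
        System.out.println(expectedSizeAfterUpdate(existing, update)); // 6, not 2
    }
}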
Been struggling with that for a few hours and I see now that the doc on Firebase is clear: "the request.resource variable contains the future state of the document". So with ALL the fields, not only the ones being sent.
https://firebase.google.com/docs/firestore/security/rules-conditions#data_validation.
But there is actually another way to ONLY count the number of fields being sent with request.writeFields.size(). The property writeFields is a table with all the incoming fields.
Beware: writeFields is deprecated and may stop working anytime, but I have not found any replacement.
EDIT: writeFields apparently does not work in the simulator anymore...

Web2py - Multiple tables read-only form

I've searched around the web for a way to achieve this and found multiple solutions. Most of them had messy code, and all of them had drawbacks. Some ideas involved setting default values of all the db fields based on a record. Others worked by appending multiple SQLFORMs, which resulted in differences in indentation on the page (because it's 2 HTML tables in 1 form).
I'm looking for a compact and elegant way of providing a read-only representation of a record based on a join on two tables. Surely there must be some simple way to achieve this, right? The Web2py book only contains an example of an insert-form. It's this kind of neat solution I am looking for.
In the future I will probably need multi-table forms that provide update functionality as well, but for now I'll be happy if I can get a simple read-only form for a record.
I would greatly appreciate any suggestions.
This seems to work for me:
def test():
    fields = [db.tableA[field] for field in db.tableA.keys()
              if type(db.tableA[field]) == type(db.tableA.some_field)]
    fields += [db.tableB[field] for field in db.tableB.keys()
               if type(db.tableB[field]) == type(db.tableB.some_field)]
    ff = []
    for field in fields:
        ff.append(Field(field.name, field.type))
    form = SQLFORM.factory(*ff, readonly=True)
    return dict(form=form)
You could add in field.required, field.requires validators, etc. Also, since you're using SQLFORM.factory, you should be able to validate it and do updates/inserts. Just make sure that the form you are building using this method contains all of the necessary information to validate the form for update -- I believe you can add them easily to the Field instantiation above.
EDIT: Oh yeah, and you need to get the values of the record in question to pre-populate the form based on a record id (after the form is defined). Also, I just realized that instead of those list comprehensions, you can just use SQLFORM.factory and provide the two tables:
def test():
    form = SQLFORM.factory(db.tableA, db.tableB, readonly=True)
    record = ... (query for your record, probably based on an id in request.args(0))
    for field in record.keys():
        if (*test if this really is a field*):
            form.vars[field] = record[field]
    return dict(form=form)
Some tweaking will be required since I only provided pseudo-code for the pre-population... but look at http://web2py.com/books/default/chapter/29/7#Pre-populating-the-form and the SQLFORM/SQLFORM.factory sections.

Key value store in Perl

I hope this question is still on topic, but recently I found a key-value store programmed in Perl. It was pretty simple, RAM-based, and I think it had just set and get and also an 'expire' option for keys. I also think it came as both an XS and a pure Perl version.
I have been searching for quite a while now and I'm not sure whether it is on CPAN or whether I saw it on GitHub. Maybe someone knows what I am talking about.
It might be helpful in narrowing things down if you could explain what exactly the module does that is special in that regard. If you're looking to implement something with caching in general, I'd point you towards CHI, which is basically a common API with multiple caching drivers.
Do you mean Cache? It can store key/value pairs in a number of places, including shared memory.
It sounds like you are describing Memcached. There is a Perl interface on CPAN.
I've used Tie::Cache for this in the past with excellent results. It creates a tied hash variable that exhibits LRU behavior when it grows beyond a configured maximum key count.
my $cache_size = 1000;
use vars qw(%cache);
%cache = ();
tie %cache, 'Tie::Cache', $cache_size;
From here, you can store key/value pairs (of course, the value side can be a reference) in %cache, and should its size grow beyond 1000 keys, the LRU keys will be deleted as more are added.
In my usage, I store the right-hand side as an arrayref holding the cached value along with a timestamp of when the entry was cached; my cache reference code checks the timestamp and deletes the key without using it if the entry isn't fresh enough:
sub getCacheMatch {
    my $check_value = shift;
    my $timeout = 600;    # 10 minutes

    # Check cache for a match.
    my ($result, $time_cached);
    my $now = time();
    my $cache_entry = $cache{$check_value};

    if ($cache_entry) {
        ($result, $time_cached) = @{$cache_entry};
        if ($now - $time_cached > $timeout) {
            delete $cache{$check_value};
            return undef;
        } else {
            return $result;
        }
    }
}
And I update the cache elsewhere in the code like so:
$cache{$cache_checkstring} = [$value_to_cache, $now];
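
For comparison only, here is a rough Java sketch of the same pattern (a size-capped LRU map plus a per-entry timestamp); the class and constant names are invented for illustration, and it is not a drop-in replacement for the Perl code above:

import java.util.LinkedHashMap;
import java.util.Map;

public class TimedLruCache<K, V> {
    private static final int MAX_ENTRIES = 1000;        // like $cache_size
    private static final long TIMEOUT_MILLIS = 600_000; // like $timeout (10 minutes)

    private static final class CachedValue<V> {
        final V value;
        final long cachedAt;
        CachedValue(V value, long cachedAt) { this.value = value; this.cachedAt = cachedAt; }
    }

    // accessOrder = true gives LRU iteration order; removeEldestEntry caps the size.
    private final Map<K, CachedValue<V>> map = new LinkedHashMap<K, CachedValue<V>>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, CachedValue<V>> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

    public void put(K key, V value) {
        map.put(key, new CachedValue<>(value, System.currentTimeMillis()));
    }

    public V get(K key) {
        CachedValue<V> entry = map.get(key);
        if (entry == null) {
            return null; // not cached
        }
        if (System.currentTimeMillis() - entry.cachedAt > TIMEOUT_MILLIS) {
            map.remove(key); // stale entry: drop it, like delete $cache{...}
            return null;
        }
        return entry.value;
    }
}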

ObjectContext, Entities and loading performance

I am writing a RIA service, which is also exposed using SOAP.
One of its methods needs to read data from a very big table.
At the beginning I was doing something like:
public IQueryable<MyItem> GetMyItems()
{
    return this.ObjectContext.MyItems.Where(x => x.StartDate >= start && x.EndDate <= end);
}
But then I stopped because I was worried about the performance.
As far as I understand, MyItems is fully loaded and "Where" just filters the elements that were loaded at the first access of the MyItems property. Because MyItems will have a really large number of rows, I don't think this is the right approach.
I tried to google the question a bit, but no interesting results came up.
So, I was thinking I could create a new instance of the context inside the GetMyItems method and load MyItems selectively. Something like:
public IQueryable<MyItems> GetMyItems(string Username, DateTime Start, DateTime End)
{
    using (MyEntities ctx = new MyEntities())
    {
        var objQuery = ctx.CreateQuery<MyItems>(
            "SELECT * FROM MyItems WHERE Username = #Username AND Timestamp >= #Start AND Timestamp <= #End",
            new ObjectParameter("#Username", Username),
            new ObjectParameter("#Start", Start),
            new ObjectParameter("#End", End));
        return objQuery.AsQueryable();
    }
}
But I am not sure at all this is the correct way to do it.
Could you please assist me and point out the right approach to do this?
Thanks in advance,
Cheers,
Gianluca.
As far as I understand, MyItems is fully loaded and "Where" just filters the elements that were loaded at the first access of the MyItems property.
No. That's entirely wrong. With LINQ to Entities, the Where clause becomes part of the query's expression tree and is translated to SQL, so only the matching rows are fetched; nothing is loaded just by accessing the MyItems property. Don't fix "performance problems" until you actually have them. The code you already have is likely to perform better than the code you propose replacing it with. It certainly won't behave in the way you describe. But don't take my word for it. Use the performance profiler. Use SQL Profiler. And test!

Symfony: Model Translation + Nested Set

I'm using Symfony 1.2 with Doctrine. I have a Place model with translations in two languages. This Place model also has nested set behaviour.
I'm having problems now creating a new place that belongs to another node. I've tried two options but both of them fail:
Option 1
$this->mergeForm(new PlaceTranslationForm($this->object->Translation[$lang->getCurrentCulture()]));
If I merge the form, what happens is that the value of the place_id field is an array. I suppose it is because it is expecting a real object with an id. If I try to set place_id = '' there is another error.
Option 2
$this->mergeI18n(array($lang->getCurrentCulture()));
public function mergeI18n($cultures, $decorator = null)
{
    if (!$this->isI18n())
    {
        throw new sfException(sprintf('The model "%s" is not internationalized.', $this->getModelName()));
    }

    $class = $this->getI18nFormClass();
    foreach ($cultures as $culture)
    {
        $i18nObject = $this->object->Translation[$culture];
        $i18n = new $class($i18nObject);
        unset($i18n['id']);
        $i18n->widgetSchema['lang'] = new sfWidgetFormInputHidden();
        $this->mergeForm($i18n); // pass $culture too
    }
}
Now the error is:
Couldn't hydrate. Found non-unique key mapping named 'lang'.
Looking at the SQL, the id is not defined, so it can't be a duplicate record (I have a unique key on (id, lang)).
Any idea of what can be happening?
thanks!
It looks like the issues you are having are related to embedding forms within each other, which can be tricky. You will likely need to do things in the updateObject/bind methods of the parent form to get it to pass its values correctly to its child forms.
This article is worth a read:
http://www.blogs.uni-osnabrueck.de/rotapken/2009/03/13/symfony-merge-embedded-form/comment-page-1/
It gives some good info on how embedding (and merging) forms works. The technique the article uses will probably work for you, but I've not used I18n in sf before, so it may well be that there is a more elegant solution built in.