Magento get language collection

There seems to be a way in Magento to get a language collection, namely via Mage::getSingleton('adminhtml/system_config_source_language'), which I would like to use. However, in my version of Magento (both Enterprise 1.10 and Community 1.4) it results in an error: it expects to read its data from a nonexistent table called core_language.
Has anyone found a good solution or alternative to this? Or has anyone used this and has a table dump for core_language?

Magento is built on Zend, so you can use
Zend_Locale::getTranslationList("language")
which returns an array of language names keyed by their abbreviation (e.g. 'en' => 'English').

Hmm, I looked through the installation files, and apparently the table is created initially but dropped as of version 0.7.5, so it's probably deprecated code. The class file doesn't mention this though, which makes it quite obscure.

Related

Seems like I can't handle the response from MongoDB when using a hyphen in a field name

I didn't see any recommendation against using hyphens in field names at all.
Even with @SerialName it still didn't work:
@SerialName("created-date")
val created_date: String,
but it worked fine with an underscore (which I'm using now).
The reason I used a hyphen in the first place is that I have used a few APIs, most of which use hyphens, and I just wanted to follow the common naming.
If anyone knows why, please kindly tell me. I might be missing some docs or something.
There is a page in the MongoDB documentation on restrictions on field names: https://docs.mongodb.com/manual/reference/limits/#mongodb-limit-Restrictions-on-Field-Names.
MongoDB can store all sorts of field names; you can even have a space in a field name. That is not a problem for MongoDB, but once your application receives MongoDB output, it has to be deserialized. I have never used kotlinx.serialization before, so I'm just guessing, but the problem might be coming from the serialization/deserialization process. You'd better check kotlinx.serialization; maybe something is there.
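To narrow it down, here is a minimal sketch (the Record class and the sample JSON are made up for illustration) showing that a hyphenated @SerialName is accepted by plain kotlinx.serialization JSON decoding; if this works but the MongoDB round trip doesn't, the problem is likely in the driver/BSON layer rather than in the annotation itself:

import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

@Serializable
data class Record(
    // Maps the hyphenated wire name onto a legal Kotlin property name.
    @SerialName("created-date")
    val createdDate: String
)

fun main() {
    val record = Json.decodeFromString<Record>("""{"created-date": "2021-01-01"}""")
    println(record.createdDate) // prints: 2021-01-01
}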

Handling multiple document versions in a Mongo collection

{Hello. I'm not completely satisfied with the title; mods, please help amend it if necessary.}
We are trying to come up with ways to implement MongoDB on the back end of our project. We have to address a couple of concerns that were raised, listed below. Some input from experts in the field would be really helpful.
Removing / adding entirely new fields in the document, given early development changes --> How best can this be accommodated?
As an example, suppose my collection contains about 1000 records, and there is a field that contains 'Address' data. Owing to operational changes, we need to add to (or replace) the 'Address' field with an array of 'Street', 'POBox' etc. and populate them with a certain default value. How best can this be accommodated?
A specific scenario wherein not all of the devices we run would be updated to the latest version. This means that the new "fields" added in the DB would essentially be irrelevant to the devices running the older version of the app. --> How best can this scenario be dealt with?
As an example, let us assume that some devices run an earlier version of the app which only looks for 'Address' as a field. Devices that are updated to the latest app will instead need to refer to the 'Street' and 'POBox' fields. How best can this be handled from Mongo's perspective?
In simple words:
As your development progresses, the document shape can change as necessary.
Depending on the type of change, a structure update can be done with a single update statement (see the sketch below),
or sometimes you will need to use the aggregation framework and save the results to a new collection.
For backward compatibility with the app versions still in use on devices, a document can contain both versions of the fields, so an older version of the application can work with the newer schema.
This can lead to another problem: what happens when a document is updated? The app could be set up to read from the newer schema but not write to it (if possible).
If there is a possibility of using a web API to communicate between the app and Mongo, you can do all the migration on the fly, as the API will be aware of the changes.
Any comments welcome!
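To make the "single update statement" point concrete, here is a minimal sketch using the MongoDB sync driver from Kotlin; the connection string, database and collection names, and default values are assumptions, and only the field names come from the question above. It adds the new 'Street'/'POBox' fields with defaults while keeping the legacy 'Address' field, so older app versions keep working:

import com.mongodb.client.MongoClients
import com.mongodb.client.model.Filters
import com.mongodb.client.model.Updates

fun main() {
    // Hypothetical connection string, database, and collection names.
    MongoClients.create("mongodb://localhost:27017").use { client ->
        val customers = client.getDatabase("appdb").getCollection("customers")

        // One update statement: add the new fields with default values
        // while leaving the legacy 'Address' field in place.
        val result = customers.updateMany(
            Filters.exists("Address"),
            Updates.combine(
                Updates.set("Street", "unknown"),
                Updates.set("POBox", "")
            )
        )
        println("Migrated ${result.modifiedCount} documents")
    }
}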

DocPad and MongoDB

I'm looking for a way to use DocPad with MongoDB. I tried to google for that, but didn't find anything encouraging. Could you please give at least some hints?
Some parts of what I'm developing need to be persisted.
Thanks in advance.
Starting from version 6.55, released last month, DocPad creates a persistent db file in the root of the project called .docpad.db:
https://github.com/bevry/docpad/blob/master/HISTORY.md
I guess it's a first step towards the persistent behaviour you need; the documents could be parsed and inserted into a Mongo database, because behind the scenes DocPad uses QueryEngine, which has an API similar to Mongo's:
https://github.com/bevry/query-engine
More work is on the way regarding your concern. Have a look at this discussion that deals with the future architecture of DocPad, especially the importer / exporter decoupling:
https://github.com/bevry/docpad/issues/705
I've written some code that reads from MongoDB and returns an object that can be rendered into docs. I've also tried to write some code to provide the back end for basic editing of the database, but the regeneration after an update is not working yet (although it may be by the time you read this!). See https://github.com/simonh1000/docpad-plugin-mongo

Solr and custom update handler

I have a question about Solr and the possibility of implementing a custom update handler.
Basically, the scenario is this:
FIELD-A: my main field
FIELD-B and FIELD-C: two copyFields with their source in A
After FIELD-A has its value stored, I need this value to be copied to FIELD-B and C, then processed (let's say by extracting a substring) and stored in FIELD-B and C before indexing time. I'm not using DIH.
Edit: I'm pushing my data via Nutch (forgot to mention that).
As far as I've understood, copyFields trigger after indexing (but I'm not so sure about this).
I've already read through the wiki page and still I don't understand a lot of things:
1) Is a custom update processor an alternative to a conditional copyField, or do both have to exist in my Solr?
2) After creating my conditional copyField jar file, how do I declare it in my schema?
3) How do I have to modify my solrconfig.xml to use my updater?
4) If I'm choosing the wrong way, any suggestion is appreciated, preferably with some examples or well-documented links.
I've read a lot (googling, and the Lucene mailing list on Nabble), but there's not much documentation about this. I just need to create a custom updater for my two copyFields.
Thanks all in advance!
It's not really complicated. The following is an excellent link I came across for writing a custom Solr update handler:
http://knackforge.com/blog/selvam/integrating-solr-and-mahout-classifier
I tested it in my Solr and it works just fine!
If you are using Solr 4 or planning to use it, http://wiki.apache.org/solr/ScriptUpdateProcessor could be an easier solution. Have fun!
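On questions 2 and 3 above: a custom update processor is declared in solrconfig.xml (inside an <updateRequestProcessorChain>), not in the schema. As a rough illustration of the approach, here is a minimal Kotlin sketch against the stock UpdateRequestProcessor API; the factory name and the take(10) substring logic are placeholders, and only the FIELD-A/B/C names come from the question:

import org.apache.solr.request.SolrQueryRequest
import org.apache.solr.response.SolrQueryResponse
import org.apache.solr.update.AddUpdateCommand
import org.apache.solr.update.processor.UpdateRequestProcessor
import org.apache.solr.update.processor.UpdateRequestProcessorFactory

// Copies a processed value of FIELD-A into FIELD-B and FIELD-C
// before the document reaches the index.
class SubstringCopyFactory : UpdateRequestProcessorFactory() {
    override fun getInstance(
        req: SolrQueryRequest,
        rsp: SolrQueryResponse,
        next: UpdateRequestProcessor?
    ): UpdateRequestProcessor = object : UpdateRequestProcessor(next) {
        override fun processAdd(cmd: AddUpdateCommand) {
            val doc = cmd.solrInputDocument
            val value = doc.getFieldValue("FIELD-A")?.toString()
            if (value != null) {
                val processed = value.take(10) // placeholder transformation
                doc.setField("FIELD-B", processed)
                doc.setField("FIELD-C", processed)
            }
            super.processAdd(cmd) // hand the document on down the chain
        }
    }
}

The compiled jar then goes in a lib directory Solr loads from, and the processor chain is registered in solrconfig.xml and referenced from the update request handler.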

Is there a way to parse a T-SQL select statement with the "Oslo" M runtime?

Searching around the Microsoft.M assembly, I found the SourceParser class and a whole set of classes in the Microsoft.TSQL10 namespace that seem related to parsing SQL, but I cannot find examples of how to use them.
I know that you can generate T-SQL easily enough, but can you consume it, manipulate the data structure, and re-output a modified version of the SQL select?
"No" is apparently the answer. I received confirmation of this on the MSDN forums.