SugarCRM: How to retrieve complex SQL statements with sugar internal functions?

In order to retrieve a contact having a cell phone number of 09362724853, I use the following code:
$newSMS_contact = new Contact;
$newSMS_contact->retrieve_by_string_fields(array('phone_mobile'=>'09362724853'));
How about retrieving a contact having a cell phone number of 09362724853 OR 9362724853 OR +989362724853 with sugar internal functions?
This doesn't work:
$newSMS_contact = new Contact;
$newSMS_contact->retrieve_by_string_fields(array('phone_mobile'=>'09362724853', 'phone_mobile'=>'9362724853', 'phone_mobile'=>'+989362724853'));

The thing is that the function you are trying to use was designed for a different purpose. It fetches only one row from the database and fills a bean with it, and the array of parameters is turned into a WHERE clause whose conditions are joined by AND operators. Your case is completely different.
I would suggest using another approach, which is less convenient but more reliable:
$contact_bean = new Contact();
$contacts_list = $contact_bean->get_full_list(null, '(phone_mobile = "09362724853" OR phone_mobile = "9362724853" OR phone_mobile = "+989362724853")');
As a result, you will get an array of beans.
For some modules, you will probably need to use table aliases when referencing fields in the SQL condition.

If I were you, I'd enforce strict rules when phone numbers are entered into the system, so you can be sure they follow a consistent format in the database (something like E.164: http://en.wikipedia.org/wiki/E.164). You can enforce the rules with a custom SugarField (or by overriding an existing one) that has JavaScript and server-side validation.
This way, you won't have to worry about that logic in this piece of code or anywhere else you deal with phone numbers.
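To illustrate only the normalization rule (this is not SugarCRM code, just a minimal sketch in Python, with Iran's +98 assumed as the default country code), the three variants from the question can be folded into one E.164 form before they are ever stored:
import re

def to_e164(raw, default_country_code="98"):
    """Hypothetical normalizer: reduce a phone number to +<country><subscriber>."""
    digits = re.sub(r"\D", "", raw)          # drop '+', spaces, dashes, etc.
    if digits.startswith("00"):              # 00989362724853 -> 989362724853
        digits = digits[2:]
    elif digits.startswith("0"):             # 09362724853    -> 989362724853
        digits = default_country_code + digits[1:]
    elif not digits.startswith(default_country_code):
        digits = default_country_code + digits   # 9362724853 -> 989362724853
    return "+" + digits

# All three variants from the question collapse to the same stored value:
assert to_e164("09362724853") == to_e164("9362724853") == to_e164("+989362724853") == "+989362724853"
With a rule like that applied on save, the retrieve_by_string_fields() call from the question only ever has to match a single canonical value.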

Related

How do I prevent users from using a thousands separator in FileMaker Pro?

In FileMaker Pro, when using a number field, the user can choose whether or not to use a thousands separator. For example, if I have a database with a field for the price of an item, the user can enter either 1,000 or 1000.
I am using my database to generate an XML file that needs to be uploaded. The thing is that my XML schema dictates that only a value of 1000 is allowed and not 1,000. Therefore, I want to either automatically remove the comma, or (my preference in this case) alert the user when they try to enter a value with a thousands separator.
What I tried is the following.
For the field, I am setting Validation options. For example:
Require Strict data type: Numeric Only
Validated by calculation: Position ( Self ; "," ; 1 ; 1 ) = 0
Validated by calculation: Self = Substitute ( Self ; "," ; "" )
Auto-enter calculation: Filter( Self ; "0123456789." )
Unfortunately, none of these work. As the field is defined as a number (and I want to keep it like this, as I am also performing calculations based on this number), the Position function and the Substitute function apparently ignore the thousands separator!
EDIT:
Note that I am generating my XML by concatenating a string, for example:
"<Products><Product><Name>" & Name & "</Name><Price>" & Price & "</Price></Product></Product>"
The reason is that what I am exporting is dependent on the values in my database. Therefore, I am not using the [File][Export records...] function.
Auto-enter calculation will work, but you need to uncheck the box "Do not replace existing value of field" (which is checked by default).
I'd suggest using the calculation GetAsNumber ( Self ) as the auto-enter calc. If it should only contain integers, wrap that in a call to Int().
I am using my database to generate an XML file that needs to be uploaded. The thing is, that my XML scheme dictates that only a value of 1000 is allowed and not 1,000.
If this is only a problem when you export, why not handle it when exporting?
If you are exporting as XML using XSLT, you can add an instruction to your stylesheet to remove the comma from all number fields.
Alternatively, you can export from a layout where the field is formatted to display without the comma, and select the "Apply current layout's data formatting to exported data" option when exporting.
Added:
Perhaps I should have clarified. I am not using the export function to generate the XML, as there is some logic involved in how the XML should be formatted (dependent on the data that I want to export). What I do instead is build a string where I combine XML tags and actual values from the database.
IMHO, you're making a mistake by not taking advantage of the built-in XML/XSLT export option. Any imaginable logic can be implemented this way, without burdening your solution with the fragile task of creating valid XML.
In any case, if you're using the field in a calculation, you can replace all references to it with:
GetAsNumber ( YourField )
to get an unformatted, numeric-only value.
Your question puzzles me. As far as I know, FileMaker does not store the thousands separator, but rather offers it only as a display option.
That's also why those functions can't find it.
Are you sure you are exporting the raw data and not a "formatted as layout" variant?

How can you filter on a custom value created during dehydration?

During dehydration I create a custom value:
def dehydrate(self, bundle):
    bundle.data['custom_field'] = ["add lots of stuff and return an int"]
    return bundle
that I would like to filter on.
/?format=json&custom_field__gt=0...
however I get an error that the "[custom_field] field has no 'attribute' for searching with."
Maybe I'm misunderstanding custom filters, but in both build_filters and apply_filters I can't seem to get access to my custom field to filter on it. In the examples I've seen, it seems like I'd have to redo all the work done in dehydrate inside build_filters, e.g.
for all the items:
    item['custom_field'] = ["add lots of stuff and return an int"]
    filter on item and add to pk_list
orm_filters["pk__in"] = [i.pk for i in pk_list]
which seems wrong, as I'm doing the work twice. What am I missing?
The problem is that dehydration is "per object" by design, while filters are applied per object_list. That's why you will have to filter manually and redo the work done in dehydrate.
You can imagine it like this:
# Whole table
[obj1, obj2, obj3, obj4, obj5, obj6]
# filter operations
[...]
# After filtering
[obj1, obj3, obj5]
# Returning
[dehydrate(obj1), dehydrate(obj3), dehydrate(obj5)]
Also consider efficiency: if filtering returns, say, 100 objects out of a table of 100,000 records, it would be quite wasteful to run dehydrate on the whole table just to filter.
If you plan to do a lot of filtering, ordering, etc. on this value, creating a new column in the model could be a candidate solution. Since it sounds like statistical information, if a new column is not an option, Django aggregation could ease your pain a little.
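If you do go the manual route, a rough sketch of the idea (assuming a tastypie ModelResource over a hypothetical MyModel, with the dehydrate logic factored into a compute_custom_field() helper so it is only written once) might look like this:
from tastypie.resources import ModelResource
from myapp.models import MyModel  # hypothetical model

def compute_custom_field(obj):
    # Stand-in for the "add lots of stuff and return an int" logic,
    # shared by dehydrate() and build_filters() so it is written only once.
    return len(obj.name)  # placeholder computation

class MyResource(ModelResource):
    class Meta:
        queryset = MyModel.objects.all()
        resource_name = 'mymodel'

    def dehydrate(self, bundle):
        bundle.data['custom_field'] = compute_custom_field(bundle.obj)
        return bundle

    def build_filters(self, filters=None):
        if filters is None:
            filters = {}
        orm_filters = super(MyResource, self).build_filters(filters)
        # Translate the virtual filter into a pk__in filter the ORM understands.
        if 'custom_field__gt' in filters:
            threshold = int(filters['custom_field__gt'])
            orm_filters['pk__in'] = [
                obj.pk for obj in MyModel.objects.all()
                if compute_custom_field(obj) > threshold
            ]
        return orm_filters
Note that this still walks the whole table, which is exactly the inefficiency described above; it only avoids duplicating the computation itself.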

Web2py - Multiple tables read-only form

I've searched around the web for a way to achieve this and found multiple solutions. Most of them had messy code, and all of them had drawbacks. Some ideas involved setting default values of all the db fields based on a record. Others worked by appending multiple SQLFORMs, which resulted in differences in indentation on the page (because it's 2 HTML tables in 1 form).
I'm looking for a compact and elegant way of providing a read-only representation of a record based on a join on two tables. Surely there must be some simple way to achieve this, right? The Web2py book only contains an example of an insert-form. It's this kind of neat solution I am looking for.
In the future I will probably need multi-table forms that provide update functionality as well, but for now I'll be happy if I can get a simple read-only form for a record.
I would greatly appreciate any suggestions.
This seems to work for me:
def test():
    fields = [db.tableA[field] for field in db.tableA.keys()
              if type(db.tableA[field]) == type(db.tableA.some_field)]
    fields += [db.tableB[field] for field in db.tableB.keys()
               if type(db.tableB[field]) == type(db.tableB.some_field)]
    ff = []
    for field in fields:
        ff.append(Field(field.name, field.type))
    form = SQLFORM.factory(*ff, readonly=True)
    return dict(form=form)
You could add in field.required, field.requires validators, etc. Also, since you're using SQLFORM.factory, you should be able to validate it and do updates/inserts. Just make sure that the form you are building with this method contains all of the information necessary to validate it for an update -- I believe you can add that easily to the Field instantiation above.
EDIT: Oh yeah, and you need to get the values of the record in question in order to pre-populate the form based on a record id (after the form is defined)... Also, I just realized that instead of those list comprehensions, you can just pass the two tables to SQLFORM.factory:
def test():
    form = SQLFORM.factory(db.tableA, db.tableB, readonly=True)
    record = ...  # query for your record, probably based on an id in request.args(0)
    for field in record.keys():
        if ...:  # test whether this really is a field
            form.vars[field] = record[field]
    return dict(form=form)
Some tweaking will be required since I only provided pseudo-code for the pre-population... but look at http://web2py.com/books/default/chapter/29/7#Pre-populating-the-form and the SQLFORM/SQLFORM.factory sections.
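For reference, one runnable interpretation of that pseudo-code might look roughly like this (the id in request.args(0), the tableb_id link field, and the table names are all assumptions):
def test():
    form = SQLFORM.factory(db.tableA, db.tableB, readonly=True)

    # Assumed: the tableA record id arrives as the first URL argument,
    # and tableB is reached through a hypothetical tableA.tableb_id reference.
    record_a = db.tableA(request.args(0)) or redirect(URL('index'))
    record_b = db.tableB(record_a.tableb_id)

    # Pre-populate the factory form with values from both records.
    for row, table in ((record_a, db.tableA), (record_b, db.tableB)):
        if row is None:
            continue
        for name in table.fields:
            form.vars[name] = row[name]

    return dict(form=form)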

Make Lucene index a value and store another

I want Lucene.NET to store a value while indexing a modified, stripped-down version of that value. For example, consider the value:
this_example-has some/weird (chars) 100%
I want it stored exactly like that (so that I can retrieve it verbatim for display in the results list), but I want Lucene to index it as:
this example has some weird chars 100
(you see, like a "sanitized" version of the original value) for a simplified search.
I figure this would be the job of an analyzer, but I don't want to mess with rolling my own. Ideally, the solution should remove everything that is not a letter, a number or quotes, replacing the removed characters with whitespace before indexing.
Any suggestions on how to implement that?
This is because I am indexing products for an e-commerce search, and some have really creepy names. I think this would improve search accuracy.
Thanks in advance.
If you don't want a custom analyzer, try storing the value as a separate non-indexed field, and use a simple regex to generate the sanitized version.
var input = "this_example-has some/weird (chars) 100%";
var output = Regex.Replace(input, @"[\W_]+", " ");
You mention that you need another Analyzer for some searching functionality. Don't forget the PerFieldAnalyzerWrapper, which allows you to use different analyzers within the same document.
public static void Main() {
    var wrapper = new PerFieldAnalyzerWrapper(defaultAnalyzer: new StandardAnalyzer(Version.LUCENE_29));
    wrapper.AddAnalyzer(fieldName: "id", analyzer: new KeywordAnalyzer());

    IndexWriter writer = null; // TODO: Retrieve these.
    Document document = null;

    writer.AddDocument(document, analyzer: wrapper);
}
You are correct that this is the work of the analyzer. I'd start by using a tool like Luke to see what the standard analyzer does with your terms before deciding what to use -- it tends to do a good job of stripping noise characters and words.

Symfony: Model Translation + Nested Set

I'm using Symfony 1.2 with Doctrine. I have a Place model with translations in two languages. The Place model also has nested set behaviour.
I'm now having problems creating a new place that belongs to another node. I've tried two options, but both of them fail:
Option 1
$this->mergeForm(new PlaceTranslationForm($this->object->Translation[$lang->getCurrentCulture()]));
If I merge the form, what happens is that the value of the place_id field is an array. I suppose this is because it expects a real object with an id. If I try to set place_id = '' there is another error.
Option 2
$this->mergeI18n(array($lang->getCurrentCulture()));
public function mergeI18n($cultures, $decorator = null)
{
    if (!$this->isI18n())
    {
        throw new sfException(sprintf('The model "%s" is not internationalized.', $this->getModelName()));
    }

    $class = $this->getI18nFormClass();
    foreach ($cultures as $culture)
    {
        $i18nObject = $this->object->Translation[$culture];
        $i18n = new $class($i18nObject);
        unset($i18n['id']);
        $i18n->widgetSchema['lang'] = new sfWidgetFormInputHidden();
        $this->mergeForm($i18n); // pass $culture too
    }
}
Now the error is:
Couldn't hydrate. Found non-unique key mapping named 'lang'.
Looking at the SQL, the id is not defined, so it can't be a duplicate record (I have a unique key on (id, lang)).
Any idea what could be happening?
Thanks!
It looks like the issues you are having are related to embedding forms within each other, which can be tricky. You will likely need to do things in the updateObject/bind methods of the parent form to get it to pass its values correctly to its child forms.
This article is worth a read:
http://www.blogs.uni-osnabrueck.de/rotapken/2009/03/13/symfony-merge-embedded-form/comment-page-1/
It gives some good info on how embedding (and merging) forms works. The technique the article uses will probably work for you, but I've not used I18n in sf before, so it may well be that there is a more elegant built-in solution.