I want to move the euro/currency symbol in the SugarCRM quotes PDFs.
It currently appears before the amount, but it needs to be printed after the amount.
Is it possible?
Example:
€4444 <- wrong
4444€ <- correct
Regarding this page:
http://www.evertype.com/standards/euro/formats.html
it seems that the €4444 format is the correct one for Italy.
By the way, I came across your post while looking for the same issue.
Regards.
Here's my first question on this forum, though I've read through a lot of good answers here.
Can anyone tell me what I'm doing wrong with my attempt to do a query import from one sheet to a column in another?
Here's the formula I've tried, but all my adjustments still get me a parsing error.
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V")"WHERE Col8="'&B2&'")")
Note that IMPORTRANGE is only needed for imports between spreadsheets. If you only import from one sheet into another within the same spreadsheet, I would suggest using filter() or query().
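For the same-spreadsheet case, a filter() version might look something like this (column H here is only a guess at whichever column holds the value you compare with B2):
=FILTER('Master Treatment Log (Responses)'!V2:V, 'Master Treatment Log (Responses)'!H2:H = B2)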
Assuming the value in B2 is actually a string (and not a number), you can try
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V"), "WHERE Col8 = '"&B2&"'", 0)
Note the added comma before "WHERE". If you want to import a header row, change 0 to 1.
See if that helps? If not, please share a copy of your spreadsheet (sensitive data erased).
I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes in an hour, so I may be barking up the wrong tree here trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square-bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS: I also found %{NUMBER:duration}, but as far as I can tell it can only parse values with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, and that it should return a result like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I've understood the point correctly, you can use one of the following suggestions:
define your own custom pattern file and add the pattern to it, or
just use the expression in the filter part of the logstash conf file.
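For the second option, a rough sketch of what the filter block could look like, based on your sample line (the field names are just illustrative, pick whatever you like):
filter {
  grok {
    match => {
      "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{NUMBER:TimeSpentMinutes}:%{NUMBER:TimeSpentSeconds}\] \[%{NUMBER:TimeStartedMinutes}:%{NUMBER:TimeStartedSeconds}\] \[%{NUMBER:TimeSinceDownMinutes}:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogInfo}"
    }
  }
}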
Hope this helps.
Ah, there was a space... Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the line causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts where there are two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
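One possible way to handle that later is a small ruby filter; this is just a sketch, assuming Logstash 5+ (with the event get/set API) and made-up names for the total fields:
filter {
  ruby {
    code => "
      # combine each minutes/seconds pair into total seconds so the counters are easy to compare
      event.set('TimeSpentTotalSeconds', event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)
      event.set('TimeStartedTotalSeconds', event.get('TimeStartedMinutes').to_i * 60 + event.get('TimeStartedSeconds').to_i)
      event.set('TimeSinceDownTotalSeconds', event.get('TimeSinceDownMinutes').to_i * 60 + event.get('TimeSinceDownSeconds').to_i)
    "
  }
}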
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)
I'm looking to return the number of results found in an ajax fashion on Algolia instant search.
A little field saying something like "There are X results" that refines as the characters are typed.
I've read that you use 'nbHits', but I'm unsure of how to go about it, being from a design background.
Thanks in advance for your help.
The instantsearch.js stats widget shows the number of results and speed of the search. If you don't want to use the widget, I believe you can still use {{nbHits}} inside of your template wherever you want the number to print.
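For example, to get wording like in your question, something along these lines should work (a sketch assuming instantsearch.js v1/v2, where the stats widget accepts a templates option and exposes nbHits to it; the container selector is just a placeholder):
search.addWidget(
  instantsearch.widgets.stats({
    container: '#stats-container', // any element on your page
    templates: {
      // nbHits is provided to the stats template by instantsearch.js
      body: 'There are {{nbHits}} results'
    }
  })
);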
Very easy when you know how. Thanks for pointing me in the right direction, Josh.
This works:
search.addWidget(
  instantsearch.widgets.stats({
    container: '.no-of-results'
  })
);
First of all, I'd like to apologise for my English. I am a student from Poland and I don't know PHP, but I need something from this code: http://pastebin.com/x0vUhj8V
I have encountered a problem. On my website I can't register with an email address that is shorter than (I'm missing the word, but here is an example) asd@wp.pl, asd@op.pl, asd@vp.pl.
It also concerns the part before the "@" sign (asd) and the part after it (wp.pl, op.pl, vp.pl): three characters is the minimum that is accepted, for example asd@asd.pl.
I think somewhere in the code a minimum length for the email is declared or something, but with my "knowledge" of PHP I can't figure out which part... If someone could explain what I should change, I would be grateful. Please help.
Edit: My fault, here is the code that is used to call PHPMailer: http://pastebin.com/dhGgZkPB
Your English is just fine!
That version of PHPMailer is really old, over 10 years out of date. Get the latest from here. Beyond that, you have not posted the code you're using to call PHPMailer, so we can't say what you're doing wrong - if you need somewhere to start, look at the documentation examples here.
There is no particular lower length limit on email messages or addresses (so long as they are valid) - you can quite reasonably send a message containing 'a' to someone at 'a@a.co'.
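For reference, a minimal sketch of calling a current PHPMailer (assuming version 6 installed via Composer; addresses here are just placeholders):
<?php
use PHPMailer\PHPMailer\PHPMailer;

require 'vendor/autoload.php';

$mail = new PHPMailer(true);            // true = throw exceptions on failure
$mail->setFrom('asd@wp.pl', 'Sender');  // a short address like this is perfectly valid
$mail->addAddress('a@a.co');            // so is a very short recipient address
$mail->Subject = 'Test';
$mail->Body    = 'a';                   // even a one-character body is fine
$mail->send();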
Using the Foursquare API "venue" service, I am parsing nearby venue details like shops, restaurants, etc.
Suppose, as an example, I am using the following link:
https://api.foursquare.com/v1/venues.json?geolat=40.562362&geolong=-111.938689
The issue is that I am getting only 10 nearby venue details. Suppose I am standing in New York; there should be lots of venues, so why am I getting only 10?
Is there another service I should use, or am I following the wrong approach?
Thanks
Add l=n to the query string, where n is your limit. The default limit is set to 30.
https://api.foursquare.com/v1/venues.json?geolat=40.562362&geolong=-111.938689&l=10
https://api.foursquare.com/v1/venues.xml?geolat=40.562362&geolong=-111.938689&l=50&q=coffee
You can change the value of the l="no of results" parameter.
Also, if you want to search for a particular keyword like ATM or coffee,
you can add the parameter q and set its value, e.g. q="atm", q="coffee", etc.
Hope it will be helpful to you.