How to Round and Format a Mail Merge Number (MS Word)

I have an amount that fluctuates from 1 million to over a billion, and I want to show the result as $1.5 million or $1.5 billion using field codes in Word 2013 for a mail merge (i.e. 1,500,000 should display as $1.5 million and 1,500,000,000 as $1.5 billion).
I have this so far:
{=int({MERGEFIELD AreaSales})/100000000 \# $,0.0}
This gets me close to what I'm looking for ($1.5), but it doesn't account for whether the amount is in the millions or the billions, and it doesn't add the proper label. Thanks in advance!

It isn't entirely clear what you're asking - it always helps to provide examples of the output you want. I'm assuming you want to see the word "million" or "billion", as appropriate. This can be done using a separate IF field:
{ IF { MERGEFIELD AreaSales } > 999999.99 "{ IF { MERGEFIELD AreaSales } < 1000000000 "million" "billion" }" "" }
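Putting the scaled number and the label together, a complete nested field might look like the sketch below (insert each brace pair with Ctrl+F9 rather than typing it, and toggle field codes with Alt+F9; note that your original field divides by 100,000,000, which is neither a million nor a billion):
{ IF { MERGEFIELD AreaSales } < 1000000000 "{ ={ MERGEFIELD AreaSales }/1000000 \# $,0.0 } million" "{ ={ MERGEFIELD AreaSales }/1000000000 \# $,0.0 } billion" }
This is a sketch rather than a tested field; amounts below one million would need a third branch.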


Can MongoDB be used to save this format of document, or should I use something else?

I am asking whether MongoDB can be used for the data format below.
**Header**
GSGT Version 1.9.4
Processing Date 12/28/2016 4:07 PM
Content GSAMD-24v1-0_20011747_A1.bpm
Num SNPs 700078
Total SNPs 700078
Num Samples 44
Total Samples 48
File 1 of 44
**Data**
Sample ID SNP Name Allele1 - Plus Allele2 - Plus Allele1 - AB Allele2 - AB
B01 1:100292476 A A A A
B01 1:101064936 A A A A
B01 1:103380393 G G B B
B01 1:104303716 G G B B
B01 1:104864464 C C B B
B01 1:106737318 T T A A
B01 1:109439680 A A A A
...
The above is one data record, and I am going to have millions of such records. I want to find a good database for storing this kind of data, and MongoDB is the one I want to use. One such record could be saved as one document, with the whole dataset in a single collection. Below is a description of the data structure.
One record consists of two parts, a header and the data; the data part usually has about 700,000 lines. To save a record into MongoDB, I propose converting it to JSON like this:
{ "header":{
"GSGT Version": "1.9.4",
"Processing Date" : "12/28/2016 4:07 PM",
...
},
"data" : [{
"Sample ID": "B01",
"SNP Name": "1:100292476",
"Allele1 - Plus" : "A",
...
},{
"Sample ID": "B01",
"SNP Name": "1:100292476",
"Allele1 - Plus" : "A",
...
}
...
}
Since the data part has 700,000 lines, I am not confident about this design. What is a reasonable number of nested entries in one document? If I save each record as one document, will it be good for querying and saving? Or should I split the data into two collections? Or is there a database better suited than MongoDB for this structure?
Yes, you can. MongoDB supports rich documents, you can index fields for faster retrieval, and embedding the data lets you query a record as a unit. But you have to make sure an individual document (record) doesn't grow beyond the 16 megabyte document limit; with about 700,000 entries in the data array, a single record can easily exceed that, so splitting the header and the data lines into separate collections is the safer design.
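A minimal sketch of that split design in the mongo shell (the collection names, _id value, and field names here are illustrative, not prescribed):
// headers: one small document per input file
db.headers.insertOne({
    _id: "file_001",                      // hypothetical record/file ID
    "GSGT Version": "1.9.4",
    "Processing Date": "12/28/2016 4:07 PM",
    "Num SNPs": 700078
});
// calls: one document per data line, linked back by headerId
db.calls.insertMany([
    { headerId: "file_001", sampleId: "B01", snpName: "1:100292476",
      allele1Plus: "A", allele2Plus: "A", allele1AB: "A", allele2AB: "A" },
    { headerId: "file_001", sampleId: "B01", snpName: "1:101064936",
      allele1Plus: "A", allele2Plus: "A", allele1AB: "A", allele2AB: "A" }
]);
// index the link (and sample) so per-record queries stay fast
db.calls.createIndex({ headerId: 1, sampleId: 1 });
Each document stays tiny, nothing approaches the 16 MB limit, and a whole record is still one indexed query away.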

Power Query - remove characters from number values

I have a table field where the data contains our member ID numbers followed by a letter suffix or a letter + number suffix.
For example:
My Data
1234567Z1
2345T10
222222T10Z1
111
111A
Should Become
1234567
2345
222222
111
111
I want to get just the member number (as shown in Should Become above), i.e. all the digits to the LEFT of the first letter.
As the length of the member number differs per person (the first 1 to 7 digits) and the letter suffixes vary (a to z, 0 to 8 characters long), I don't think I can SPLIT the field.
Right now, in Power Query, I do 27 search-and-replace commands to clean this data (e.g. find T10 and replace with nothing, find T20 and replace with nothing, etc.).
Can anyone suggest a better way to achieve this?
I did successfully create a formula for this in Excel, but I am now trying to do it in Power Query and I don't know how to convert the formula - nor am I sure it is the most efficient solution.
=iferror(value(left([MEMBERID],7)),
iferror(value(left([MEMBERID],6)),
iferror(value(left([MEMBERID],5)),
iferror(value(left([MEMBERID],4)),
iferror(value(left([MEMBERID],3)),0)
)
)
)
)
Thanks
There are likely several ways to do this. Here's one way:
Create a query Letters:
let
Source = { "a" .. "z" } & { "A" .. "Z" }
in
Source
Create a query GetFirstLetterIndex:
let
Source = (text) => let
// For each letter, find where it first shows up in the text. If a letter
// doesn't show up, Text.PositionOf returns -1; replace that with the text
// length so it can never win the minimum taken below.
firstLetterIndex = List.Transform(Letters, each let pos = Text.PositionOf(text, _), correctedPos = if pos < 0 then Text.Length(text) else pos in correctedPos),
minimumIndex = List.Min(firstLetterIndex)
in minimumIndex
in
Source
In the table containing your data, add a custom column with this formula:
Text.Range([ColumnWithData], 0, GetFirstLetterIndex([ColumnWithData]))
That formula will take everything from your data text until the first letter.
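For what it's worth, you can likely get the same result with a single custom-column formula and no helper queries, by appending a sentinel letter so Text.PositionOfAny always finds something (a sketch, using the same column name as above):
Text.Start([ColumnWithData], Text.PositionOfAny([ColumnWithData] & "A", {"a".."z"} & {"A".."Z"}))
The appended "A" guarantees a match at the very end of the text, so rows with no letters at all (like 111) come through unchanged.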

pseudocode about registers and clients

I have a project that requires simulating a market with 3 registers. Every second some number of clients arrives at the registers, and we assume that each client takes 4 seconds at the register before leaving. Now let's suppose we get as input all the customers and their arrival times, e.g. 0001122334455, which means that 3 customers enter at second 0, 2 at second 1, etc. What I need to find is the total time needed to serve all the customers, no matter how many there are, and also the average waiting time at the store.
Can someone come up with a pseudocode for this problem?
while (flag) {
    while (i < A.length - 1) {
        if (fifo[tail].isEmpty()) fifo[tail].put(A[i] + 4);
        else {
            temp = fifo[tail].peek();
            fifo[tail].put(A[i] - temp + 4);
            i++;
        }
        if (tail == a - 1) {
            tail = 0;
        } else tail++;
        if (i > 3) {
            for (int q = 0; q < a; q++) {
                temp = fifo[q].peek();
                if (temp == i) {
                    fifo[q].get();
                }
            }
        }
    }
}
where A is the array containing all the customers' arrival times as read from the input, and fifo is the array of registers with get, put and peek (read the tail without removing it) methods. I have no clue, though, how to find the total time and the average waiting time.
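A minimal runnable sketch of one way to get both numbers (Java, to match the attempt above; the arrival array is hard-coded from the example input): track when each of the 3 registers next becomes free, send every arriving customer to the earliest-free register, and accumulate the waits. Whether "waiting time" should also include the 4 seconds of service is up to the assignment; if so, add SERVICE to each wait.
public class MarketSim {
    public static void main(String[] args) {
        // arrival times from the example input "0001122334455"
        int[] arrival = {0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5};
        final int REGISTERS = 3, SERVICE = 4;

        long[] freeAt = new long[REGISTERS]; // when each register next becomes free
        long finish = 0, totalWait = 0;

        for (int t : arrival) {
            // send this customer to the register that frees up earliest
            int best = 0;
            for (int r = 1; r < REGISTERS; r++)
                if (freeAt[r] < freeAt[best]) best = r;

            long start = Math.max(t, freeAt[best]); // wait if that register is busy
            totalWait += start - t;                 // queueing time before service
            freeAt[best] = start + SERVICE;
            finish = Math.max(finish, freeAt[best]);
        }

        System.out.println("Total time to serve everyone: " + finish);
        System.out.println("Average waiting time: " + (double) totalWait / arrival.length);
    }
}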

Random Sampling from Mongo

I have a mongo collection with documents. There is one field in every document which is 0 or 1. I need to randomly sample 1000 records from the database and count the number of documents that have that field set to 1. I need to do this sampling 1000 times. How do I do it?
For people coming to this answer: you should now use the new $sample aggregation stage, introduced in MongoDB 3.2.
https://docs.mongodb.org/manual/reference/operator/aggregation/sample/
db.collection_of_things.aggregate(
[ { $sample: { size: 15 } } ]
)
Then add another stage to count up the 0s and 1s using $group. There is an example in the MongoDB docs linked above.
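For the question as asked (sample 1000 documents, count how many have the field set to 1), the pipeline could look like this sketch, assuming the field is called thefield as in the shell example further down:
db.collection_of_things.aggregate([
    { $sample: { size: 1000 } },
    { $group: { _id: null, ones: { $sum: "$thefield" } } }
])
Because the field is always 0 or 1, summing it counts the 1s; run the pipeline once per repetition of the experiment.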
For MongoDB 3.0 and before, I use an old trick from SQL days (which I think Wikipedia uses for its random page feature). I store a random number between 0 and 1 in every object I need to randomize; let's call that field "r". You then add an index on "r":
db.coll.ensureIndex({r: 1});
Now to get random x objects, you use:
var startVal = Math.random();
db.coll.find({r: {$gt: startVal}}).sort({r: 1}).limit(x);
This gives you random objects in a single find query. Depending on your needs this may be overkill, but if you are going to be doing lots of sampling over time, it is a very efficient way that puts no load on your backend. (One caveat: if startVal lands above the largest "r" in the collection, the query returns fewer than x documents; a common fix is to retry the same query with $lt and a descending sort.)
Here's an example in the mongo shell, assuming a collection named collname and a value of interest in thefield:
var total = db.collname.count();
var count = 0;
var numSamples = 1000;
for (var i = 0; i < numSamples; i++) {
    // pick a uniformly random offset and fetch that one document
    var random = Math.floor(Math.random() * total);
    var doc = db.collname.find().skip(random).limit(1).next();
    if (doc.thefield === 1) {
        count++;
    }
}
I was going to edit my comment on @Stennie's answer with this, but you could also use a separate auto-incrementing ID index here as an alternative, if you were to skip over HUGE numbers of records (talking huge here).
I wrote another answer to a question a lot like this one, where someone was trying to find the nth record of a collection:
php mongodb find nth entry in collection
The second half of my answer basically describes one potential method by which you could approach this problem. You would still need to loop 1000 times to get the random rows, of course.
If you are using mongoengine, you can use a SequenceField to generate an incremental counter.
class User(db.DynamicDocument):
    counter = db.SequenceField(collection_name="user.counters")
Then to fetch a random list of, say, 100, do the following:
import random

def get_random_users(number_requested):
    # sample distinct counter values, then fetch the matching users
    total = User.objects.count()
    users_to_fetch = random.sample(range(1, total + 1), min(number_requested, total))
    return User.objects(counter__in=users_to_fetch)
where you would call
get_random_users(100)

mongodb complex map/reduce - or so I think

I have a mongodb collection that contains every sale and looks like this
{
    _id: '999',
    buyer: { city: 'Dallas', state: 'Texas', ... },
    products: { ... },
    order_value: 1000,
    date: "2011-11-23T11:34:33Z"
}
I need to show stats about order volumes, by state, over the last 30, 60 and 90 days, so as to get something like this:
State Last 30 Last 60 Last 90
Arizona 12000 22000 35000
Texas 5000 9000 16000
how would you do this in a single query?
That's not very difficult:
map = function() {
    // key by state, value is the order total for this sale
    emit(this.buyer.state, this.order_value);
}
reduce = function(key, values) {
    var sum = 0;
    values.forEach(function(o) {
        sum += o;
    });
    return sum;
}
and then you map/reduce your collection with the query {date: {$gt: [today minus 30 days]}} (I don't remember the exact syntax, but you should read the excellent map/reduce docs on the MongoDB site).
To use map/reduce more efficiently, think in terms of incremental map/reduce: query the last 30 days first, then map/reduce again (incrementally) filtering the -60 to -30 day slice to get the last 60 days, and finally run an incremental map/reduce filtering the -90 to -60 day slice to get the last 90 days.
This is not bad: you run 3 queries, but each one only recomputes the aggregation over data you haven't processed yet.
I could provide an example, but you should be able to do it by yourself now.
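For completeness, here is a sketch of what the incremental version could look like in the shell, using out: { reduce: ... } so each pass folds a new slice into the existing totals (the collection names and date arithmetic are illustrative, and it assumes date is stored as a real ISODate rather than a string):
var now = new Date();
function daysAgo(n) { return new Date(now - n * 24 * 3600 * 1000); }

// pass 1: last 30 days into a fresh output collection
db.sales.mapReduce(map, reduce, {
    query: { date: { $gte: daysAgo(30) } },
    out: "volume_30"
});

// pass 2: seed the 60-day totals from the 30-day results,
// then fold in only the 30-60 day slice
db.volume_30.find().forEach(function(d) { db.volume_60.save(d); });
db.sales.mapReduce(map, reduce, {
    query: { date: { $gte: daysAgo(60), $lt: daysAgo(30) } },
    out: { reduce: "volume_60" }
});

// pass 3: same idea for the 60-90 day slice into volume_90
db.volume_60.find().forEach(function(d) { db.volume_90.save(d); });
db.sales.mapReduce(map, reduce, {
    query: { date: { $gte: daysAgo(90), $lt: daysAgo(60) } },
    out: { reduce: "volume_90" }
});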