Take the scenario of SO:
when we click the Questions button it shows all questions regardless of tag;
when we click a tag it shows only the questions carrying that particular tag.
The second case is fine: I will just go to that particular Tag vertex and fetch the sorted data based on the attached edges.
How will I implement the first scenario in sorted order? I will have a Java question vertex, an HTML question vertex, a C++ question vertex, and so on. How will I fetch all of these in sorted order? What will the query be?
Obviously, you are trying to avoid iterating over and sorting all vertices to find recent questions. One way might be to build some sort of date structure into your schema, like:
year --> month --> day --> question
If you have a sufficient number of "questions" in your use case, you might consider breaking time down further into hours, minutes, etc. (or using a higher level of aggregation; maybe you just need year and month). Index the edges between the year, month, day, and question vertices using a reverse sort order on the time, so that you can just do:
g.V('year','2014').out.out.out[0..<10]
which would return the 10 most recent questions. Note that Gremlin nicely compiles this to a vertex query to take advantage of your index:
gremlin> g.V.has('year','2014').out.out.out[0..<10].toString()
==>[GremlinStartPipe, GraphQueryPipe(has,vertex), IdentityPipe, VertexQueryPipe(out,[],vertex), VertexQueryPipe(out,[],vertex), VertexQueryPipe(out,[],range:[0,9],vertex), IdentityPipe]
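For reference, a minimal sketch (plain Blueprints/Gremlin-Groovy, matching the console style above) of wiring one question into such a date tree could look like the following; the property keys and the 'contains' edge label are illustrative assumptions, and a real loader would look up existing year/month/day vertices via the index rather than always creating new ones:
// create the hierarchy vertices (normally fetched first if they already exist)
y = g.addVertex(null); y.setProperty('year', '2014')
m = g.addVertex(null); m.setProperty('month', '2014-03')
d = g.addVertex(null); d.setProperty('day', '2014-03-18')
q = g.addVertex(null); q.setProperty('title', 'How do I sort questions by date?')
// chain them: year --> month --> day --> question
g.addEdge(null, y, m, 'contains')
g.addEdge(null, m, d, 'contains')
g.addEdge(null, d, q, 'contains')
With the edges between these levels indexed in reverse time order, the g.V(...).out.out.out traversal above walks straight to the newest questions.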
I need to submit my thesis in Word in a few days and have heard that I cannot submit it if my reference numbers are not in chronological order.
Word seems to number text references first and then start on the captions in tables and figures. The result is that, for example, on the first page I have a paragraph with reference number 1 at the end, and below it is figure 1 with reference number 59 in the caption instead of reference number 2.
Last year I used EndNote, this year Mendeley, and I had the same problem with both. If you need more information, please let me know. It's quite an emergency; I need to solve this as quickly as possible.
I'm a beginner with Tableau. I want to get single totals for each row, but I get numbers that are counted separately. How can I achieve this?
I've tried an expression like COUNT("Implemented"), but I don't get the result I want.
For example, for the 1st row I want 3 10 10,
not 111 10 112111111.
Here is the worksheet.
My code:
EDIT:
Here is the photo of the implementation opportunities.
As you can see, the status is related to the date; I think that may be why the records are counted one by one.
The situation now is this: I created the calculation, and it is related to the date. If I remove the date from the Marks shelf, it breaks the calculation (since it depends on the date), but if I leave it there, the system will always count record by record. My calculation is not perfect, but I can't find another one that could replace it.
EDIT 2:
In short, what I want is the sum of the remaining opportunities: 10.
Remove DAY from the Marks shelf. That level of detail is producing those separations.
Attaching a workbook with numbers similar to (but not exactly matching, for proprietary reasons) your real data is almost always advised. You will get the right answer a lot sooner than with screenshots alone.
In any case, it seems as if the measure portion of the visualization is being summed by date. Try selecting the measure and manually selecting "Sum" from the drop-down menu. Here is a link for more detail.
Secondly, you can play around with table calculations. Click this link and read up on option 3.
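If the goal is a single summed figure per row regardless of date, a calculated field along these lines may also help. This is only a sketch, and [Status] is an assumed column name:
// hypothetical calculated field: count "Implemented" records as a plain sum
SUM(IF [Status] = "Implemented" THEN 1 ELSE 0 END)
With DAY off the Marks shelf, this aggregates to one number per row instead of one per day.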
I have been struggling to find the new incoming volume per day.
I have these categories: Total Ticket, Resolved, Closed, and Daily Left.
Every day, resolved and closed tickets are moved out of the queue, so the calculation is
'daily left = Total Ticket - (Pending + Closed)'
There is some carry-forward every day, so the total ticket count for the next day includes some volume, i.e. the Daily Left of the previous day.
I am not able to figure out how to show that number. I have tried using PREVIOUS_VALUE but it is not helping. Please suggest. Attaching a screenshot of the data.
For the 3rd, the number of records is 33; however, there is 1 carried forward from the previous day, hence the Fresh Vol should be 32. I have used the following formula to calculate it, but it is not giving the correct result:
sum([Number of Records]) - (PREVIOUS_VALUE([Daily Left Volume]))
This is taking the leftover of the current day, not the previous day.
I am also using the LOOKUP function, but that also does not give the desired output.
The output from Tableau after using the LOOKUP function is attached below as well.
I am new to this community and don't have enough reputation to comment :P, so I'm writing a few possible solutions here:
1) Make sure the data is sorted by date and is unique at the date level. If it is not, PREVIOUS_VALUE or LOOKUP might not work (see the sketch after this list).
2) Another solution is to take a RUNNING_SUM of every field and then apply the operations. This should give the right answer.
3) If that does not work, would it be possible to change the way you import the data?
a) Simply create another field, Date_past = Date - 1, in your raw data.
b) Duplicate your data.
c) Join the two data sets on Date = Date_past.
d) Now you have all of today's data and the previous day's data in one row, and you can perform whatever operations you need.
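As a sketch of option 1, a table calculation along these lines, computed along the date axis, reads the previous day's leftover rather than the current day's (the field names are copied from the question, but treat the formula as an untested suggestion):
// LOOKUP(..., -1) fetches the value from the previous row along the date axis
SUM([Number of Records]) - LOOKUP(SUM([Daily Left Volume]), -1)
The difference from the formula in the question is that PREVIOUS_VALUE returns the prior value of the calculation itself (or the given expression on the first row), whereas LOOKUP(expr, -1) returns the previous row's value of expr, which is what a carry-forward from the previous day needs.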
I will try to explain the problem at an abstract level first:
I have X amount of data as input, which will always have a DATE field. Previously, the dates that came as input were (after some processing) put into a table as output. Now, I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output data for all the missing dates between those two dates. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st, and so on.
I know how to do this programmatically, but in PowerCenter I do not. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields, each holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a full outer join between the original data and the all-zero data we got from the previous Aggregator.
Then a Router picks, with one of its groups, the data that had actual dates (and fields without nulls), and with another group the rows where all fields are null; those fields are then given a 0 and finally written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date field by field. If I were to do this same process for 10 years instead of one, the task would get ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function, that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to find out which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the aggregator is supposed to do, and the same goes for the full outer join; a normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the Joiner? In that case a Lookup configured to return 'all rows', with a less-than-or-equal predicate, can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 for the next 200,000 days, and a primary integer column with values from 1 and up...
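As a hedged illustration of that helper-table idea (Oracle-flavoured SQL; every table and column name here is an assumption, not taken from the question):
-- one output row per calendar day in the window, driven by the helper/date table
SELECT d.date_value
FROM   time_dimension d,
       (SELECT MIN(datefield) AS min_date FROM source_table) s
WHERE  d.date_value BETWEEN s.min_date AND ADD_MONTHS(s.min_date, 12)
A full outer join of the source data against this generated date set then flags exactly the missing days, which can be defaulted to 0.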
You've said you know how to do this programmatically, and to be fair this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means: just feed the 2 dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
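A minimal standalone sketch of that generation logic follows; in an actual Java transformation each date would be emitted as its own output row (e.g. via generateRow()) rather than printed, and the start date here is the one from the question:
import java.time.LocalDate;

public class DateRangeSketch {
    public static void main(String[] args) {
        // day0 = minimum input date; end = one year later
        LocalDate day0 = LocalDate.of(2017, 3, 18);
        LocalDate end = day0.plusYears(1);
        // walk the range one day at a time, producing one record per date
        for (LocalDate d = day0; !d.isAfter(end); d = d.plusDays(1)) {
            System.out.println(d); // stand-in for emitting one output row
        }
    }
}
Extending the window to 10 years is then just day0.plusYears(10) rather than a tenfold increase in mapping steps.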
OK... so you could override your Source Qualifier to achieve this in the selection query itself (I'm giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data comes from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT MIN(tablea.DATEFIELD) + levquery.n - 1 AS Port1
FROM tablea,
     (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP BY levquery.n
(Check whether the query works for you - I don't have access to a PC to test it at the minute.)
Do you know how to get the last non-empty value using an MDX query? After that, I want to count how many null values follow it up to the last date. My main purpose for this is to count how many days a customer has had no transactions.
I have to make a report in SSRS (using the Adventure Works cube) that counts how many days a customer has had no transactions.
Thanks.
Use this article to obtain the set of non-empty dates.
To obtain the count you can:
1) Use member properties (day of year, or something more suitable in this case) and just subtract one value from the other.
2) Find the last date with a non-empty value and the previous one using LAG(1), include them in the range construction {prev.date : last.date}, and use COUNT on that set.
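As a hedged sketch of the second option against the standard Adventure Works cube (the measure, hierarchy, and customer key below are illustrative assumptions, and the range here runs from the last sale day to the calendar's last day):
WITH
  -- last date on which this customer actually had a transaction
  SET [Last Sale Day] AS
    TAIL(NONEMPTY([Date].[Calendar].[Date].MEMBERS,
                  [Measures].[Internet Sales Amount]), 1)
  -- size of the range up to the calendar's last day, excluding the sale day itself
  MEMBER [Measures].[Days Without Transaction] AS
    COUNT([Last Sale Day].ITEM(0).ITEM(0)
          : TAIL([Date].[Calendar].[Date].MEMBERS, 1).ITEM(0).ITEM(0)) - 1
SELECT { [Measures].[Days Without Transaction] } ON COLUMNS
FROM [Adventure Works]
WHERE ( [Customer].[Customer].&[11000] )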
Recently I followed this article; it really helped me.
The most important part is to understand SCOPE usage.
Even if you don't have Enterprise Edition, you can use it.