I have a table that has a couple thousand records.
The table has two fields: [Activity] and [Badge].
The Activity can be any of between 1 and 75 distinct items, and the Badge can appear on any number of records.
I have VBA code that counts how many times each activity occurs in the table, using a Do While loop.
What I need to do, as DCount counts each activity while the loop runs from BOF to EOF, is append the Activity and its count to a blank table.
As an example, DCount counts:
Bowling 45
Golf 35
Fishing 44
Using VBA, I need to append those two fields, the Activity and the count, to a blank table (tblCount) as DCount counts from BOF to EOF.
tblCount would then show:
Activity    Record Count
Bowling     45
Golf        35
Fishing     44
Any help would be appreciated; I have no idea how or where to start.
RexOfNapa
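For what it's worth, a minimal sketch of that loop-and-append step in VBA/DAO might look like the following. The source table name (tblActivities) and the count field in tblCount (ActivityCount) are assumptions, so substitute your own names, and note the criteria string assumes activity names contain no apostrophes.

Sub BuildActivityCounts()
    Dim db As DAO.Database
    Dim rsIn As DAO.Recordset
    Dim rsOut As DAO.Recordset
    Dim n As Long

    Set db = CurrentDb

    ' One row per distinct activity in the source table (tblActivities is an assumed name)
    Set rsIn = db.OpenRecordset("SELECT DISTINCT Activity FROM tblActivities")
    Set rsOut = db.OpenRecordset("tblCount")

    Do While Not rsIn.EOF
        ' Count how many records carry this activity
        n = DCount("*", "tblActivities", "Activity = '" & rsIn!Activity & "'")

        ' Append the activity and its count to tblCount (ActivityCount is an assumed field name)
        rsOut.AddNew
        rsOut!Activity = rsIn!Activity
        rsOut!ActivityCount = n
        rsOut.Update

        rsIn.MoveNext
    Loop

    rsIn.Close
    rsOut.Close
End Sub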
I have a table which looks like this
id login_id trend_type sep oct nov
1 abc#abc.com Billing 10 34 43
1 abc#abc.com Visits 20 43 56
1 abc#abc.com Revenue 30 12 12
1 pqr#pqr.com Billing 40 23 54
1 pqr#pqr.com Visits 50 21 47
1 pqr#pqr.com Revenue 60 98 12
I want to create a dashboard where I can display graphs of all these Trend Types and add a filter for the user so they can select the month for which they want to view the graphs.
I have tried this solution (https://community.tableau.com/thread/228965), but I wasn't successful.
Tableau really likes data that is taller rather than wider. In this case, you need to do a PIVOT on the month data. A pivot will create a column for the months and another column for the values. Your data will have more rows now but fewer columns.
When you bring the data into Tableau, on the Data Source screen, highlight the three month columns and select pivot.
You can also change the name of the Pivot Field Names (to Month) and Pivot Field Values (to Amount or another appropriate name).
Click on the orange Sheet 1 on the bottom left. Next, create a calculated field to create a full date. (Tableau doesn't know what 'sep' is.)
[Pivot Field Names] + "-01-2019"
This field just creates a string that Tableau can parse (eg 'sep-01-2019'). Now tell Tableau it is a Date field by changing the field type (click on the Abc next to the Dimension name).
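Alternatively, if you would rather produce a real date inside the calculation instead of changing the field type afterwards, DATEPARSE is one option; this assumes the pivoted values really are three-letter abbreviations such as 'sep' that the MMM token will accept:

DATEPARSE("MMM-dd-yyyy", [Pivot Field Names] + "-01-2019")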
At this point, you can create a viz and add filters.
I have a trade table of historical execution records containing timestamp, ric, side, price, quantity (the rics are all equities). Additionally, I have aj'ed on the futures price snapshot table at each execution time, so the trade table now contains: timestamp, ric, side, price, quantity, futures_price.
I am trying to create an intra-day backtesting system where, as each execution record is parsed (via { BACKTESTING_LOGIC_HERE } each trade), a different set of logic is used to decide hedging timing.
Is it possible for me to create a hedge table that records the timestamp for executing futures, execution price, trade_qty and cumulative_qty without writing to disk? Basically, I want to see whether it is possible to dynamically update the hedge table (as each execution record is passed) and pass the hedge table along.
I was looking at over or scan, but I wasn't sure if that was the right approach. Can you provide some insight on this?
Thank you!
Yes, it sounds like over is what you need to use in this case.
It allows you to pass in an initial state, update it, and pass it back in for the next iteration.
For example:
q){.[x;(y`sym;`cnt);+;1]}/[([sym:()] cnt:`long$());trades]
sym | cnt
----| ---
ORCL| 114
YHOO| 110
AAPL| 105
IBM | 124
NOK | 120
CSCO| 112
MSFT| 95
DELL| 109
GOOG| 111
In this simple example, a trade table is iterated over (the second argument to over), and the initial state is the simple keyed table ([sym:()] cnt:`long$()).
On each iteration, we simply add 1 to the count for the relevant sym. In real usage, you would perform your backtesting here and return the updated hedge table from the lambda function; this updated table is passed back into the function on the next iteration (in this example, each time the cnt for a sym is increased, the next iteration receives the table with that cnt increased).
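For the hedge-table case in the question, a minimal sketch along the same lines might look like this. The table name trades, the column names timestamp, ric, quantity and futures_price, the column types, and the hedge-on-every-fill rule are all assumptions; your real hedging logic would go where the placeholder comment sits.

/ empty in-memory hedge table; the column types should match your trade table
hedge0:([] time:`timestamp$(); ric:`$(); hedge_px:`float$(); trade_qty:`long$(); cum_qty:`long$())

/ h is the hedge table built so far, t is one execution record (a row dictionary)
step:{[h;t]
  / cumulative hedged quantity for this ric so far (0 if none yet)
  cum:t[`quantity]+0^last exec cum_qty from h where ric=t`ric;
  / BACKTESTING_LOGIC_HERE - this sketch simply records a hedge on every fill
  h upsert (t`timestamp;t`ric;t`futures_price;t`quantity;cum)}

/ thread the hedge table through every execution without touching disk
hedges:step/[hedge0;trades]

Swapping over (/) for scan (\) would instead return the hedge table as it stood after every execution, which can be handy for inspecting intermediate states.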
I have three data sets:
The first is called education.dta. It contains individuals (students) over many years with their attained education for 1990-2000. Originally it is in wide format, but I can easily reshape it to long. It is shown in wide format below:
id educ_90 educ_91 ... educ_00 cohort
1 0 1 1 87
2 1 1 2 75
3 0 0 2 90
The second is called graduate.dta. It contains information on when individuals (students) finished high school. However, this data set does not cover several years; it is only a "snapshot" of each individual when they finish high school, plus characteristics of the individual students such as background (for example, parents' occupation).
id schoolid county cohort ...
1 11 123 87
2 11 123 75
3 22 243 90
The third data set is called teachers.dta. It contains information about all teachers at the high schools, such as their education, whether they work full or part time, gender, etc. This data set is long.
id schoolid county year education
22 11 123 2011 1
21 11 123 2001 1
23 22 243 2015 3
Now I want to merge these three data sets.
First, I want to merge education.dta and graduate.dta on id.
Problem when education.dta is wide: I manage to merge education.dta and graduate.dta. Then I run a loop so that all the variables from graduate.dta take the same value over all years, for example:
forvalues j = 1990/2000 {
    gen county`j' = .
    replace county`j' = county
}
However, afterwards when reshaping to long, Stata reports that variable id does not uniquely identify the observations.
Further, I have tried first reshaping education.dta to long and then merging either 1:m or m:1 with education as the master, using graduate.dta.
However, Stata again reports that id is not unique. How do I deal with this?
In the next step I want to merge the above with teachers.dta on schoolid.
I want my final dataset in long format.
Thanks for your help :)
I am not certain that I have exactly the format of your data; it would be helpful if you gave us a toy dataset to look at using dataex (which could even help you figure out the problem yourself!).
But to start, because you are seeing that id is not unique, you need to figure out why there might be multiple ids in any of the datasets. Can someone in graduate.dta or education.dta appear more than once? help duplicates will probably be useful to explore the data in this way.
Because you want your dataset in long format, I suggest reshaping education.dta to long first, then doing something like merge m:1 id using "graduate.dta" (once you figure out why some observations are showing up more than once), and then finally something like merge 1:1 schoolid year using "teachers.dta", and you will have your final dataset.
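In code form, that route might look roughly like this; the educ_ stub and the two-digit year suffixes on educ_90-educ_00 are assumptions based on the question:

* wide -> long: one row per id and year
use education.dta, clear
reshape long educ_, i(id) j(year)

* many student-years match one graduation record per id
merge m:1 id using "graduate.dta"

* final step: merge with teachers.dta as suggested above, once you have
* checked what uniquely identifies its rows (duplicates report schoolid year)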
I am working in a subreport (Visual Studio 2010, if relevant).
My first table has a unique record for every row in the details section. I am attempting to use a Left Outer Join to join a second table to the first, and then sum those values where there may be more than one row in table two per row in table one.
Right now I am having a problem with row duplication: the values from table two will not sum and instead print out individually. I have tried using groups and Sum formula fields without success.
My first table is unique on a Record Number, Component and Sequence.
My second table also has those three items but may contain no record per the first table, one or multiple.
The Record Number and Component are passed to the report as parameters and are used in the select expert, so really only Sequence is unique to each record in my first table.
I have tried grouping by Record Number, Component and Sequence for table one. I have also tried grouping by Sequence for both tables. I have tried adding a sum formula and even a running total, without success.
Is this something that is possible? Is there something I may have missed trying?
Edit:
What I am currently getting:
Seq T1 Data T2 Data
10 A1 35
20 C2 25
20 C2 15
30 D5 30
40 D6 10
40 D6 50
What I am looking for:
Seq T1 Data T2 Data
10 A1 35
20 C2 40
30 D5 30
40 D6 60
I finally figured it out. I grouped by Table 1 Sequence, then Table 2 Sequence (not sure if grouping by both is necessary). Then I used a Running Total field in the group footer right after the details section (the group for Table 2) and set its Reset value to "On change of group 1" (Table 1).
I'm trying to work on a report for a client. Basically I need something like this:
Group 1: Customer ID
Group 2: Truck ID
CustID Vehicle ID Detention Time
------ ---------- --------------
ABX 100 60
35
20
TOTAL: 115
200 80
15
TOTAL: 95
300 10
TOTAL: 10
TOTALS FOR CUSTOMER ABX
100 115
200 95
300 10
Is there any way to accomplish this without a subreport? I was hoping for some kind of "summary field" that could summarize more than just a single value.
Thanks!
(FYI using Crystal Reports 2008)
Use a crosstab; place it in the report-footer section.
There might be a better way to do this, but the one that comes to mind is to use two arrays: One to store the truck ID and another to store the corresponding total. In each inner grouping (TruckID), just tack on another array element and store its total time. To display, you could cast the values to strings, attach a newline character after each entry, and set the field to "Can Grow". So altogether, you'd need three formulas: one to initialize the arrays (in GH1), one to update the arrays with sum({truck.time},{truck.ID}) (in GF2), and one to display each entry (in GF1).
With that being said, CR has terrible support for containers... You're limited to 1-dimensional, non-dynamic arrays that are gimped at 1000 items max. It doesn't sound like these would be big problems for what you're trying to do, but you will need to redim preserve the arrays unless you know ahead of time how many trucks you'll have per customer.
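To make that concrete, a rough sketch of the three formulas in Crystal syntax could look like the following. The field names follow the sum({truck.time},{truck.ID}) example above, the formula names are made up, and you would suppress the first two formulas and give the display formula "Can Grow".

// {@InitTruckArrays} - place in Group Header 1 (customer)
WhilePrintingRecords;
StringVar Array truckIDs := [""];
NumberVar Array truckTotals := [0];
NumberVar truckCount := 0;
""

// {@StoreTruckTotal} - place in Group Footer 2 (truck)
WhilePrintingRecords;
StringVar Array truckIDs;
NumberVar Array truckTotals;
NumberVar truckCount;
truckCount := truckCount + 1;
Redim Preserve truckIDs[truckCount];
Redim Preserve truckTotals[truckCount];
truckIDs[truckCount] := ToText({truck.ID}, 0);  // drop ToText if {truck.ID} is already a string
truckTotals[truckCount] := Sum({truck.time}, {truck.ID});
""

// {@ShowTruckTotals} - place in Group Footer 1 (customer)
WhilePrintingRecords;
StringVar Array truckIDs;
NumberVar Array truckTotals;
NumberVar truckCount;
NumberVar i;
StringVar out := "";
For i := 1 To truckCount Do (
    out := out + truckIDs[i] + "   " + ToText(truckTotals[i], 0) + Chr(10)
);
out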