distributing pivot graphs to departments through email efficiently - workflow

I work at an institution with a lot of departments and subdivisions. I have an Excel "database" with pivot charts that can show the progress of the different departments and subdivisions, but there are quite a lot of them, and to get through all the graphs (Dep 1 subdivision 1, Dep 1 subdivision 2, etc.) I have to go through quite a few iterations to send out the graphs for each department and subdivision.
I'm considering creating a macro that selects each option in the pivot chart and then exports it to a Word document, but I don't know if there's an easier way to go, since I guess this will take quite some time too.
I'm thinking that someone has probably been in the same situation before, so if anyone has any suggestions as to how this could be solved efficiently, please let me know.
EDIT:
So as I see it, there are three steps to this question that need solving (steps that are struck through are steps I already know how to do):
Iterate through pivot table options
Copy charts to word OR other excel file and save
Attach that file to a mail and send it to the correct department mail address

The general thinking about how to handle a case like yours has changed over the years. Currently I would recommend making the data accessible on an internal website of some kind and allowing each department to generate its own graphs on demand. They would then be able to look at the data whenever they wanted, and you would not have to send out graphs at all. See if Google Drive or MS Office 365 can do this for you.

Firebase analytics - Unity - time spent on a level

Is there any possibility of getting the exact time spent on a certain level in a game via Firebase Analytics? Thank you so much 🙏
I tried to use logEvents.
The best way to do this would be to measure the time on the level within your codebase, then have a dedicated event for level completion, in which you pass the time spent on the level.
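For the measuring part, a minimal sketch could look like this (the class and method names are hypothetical, not part of any Firebase API):

// Hypothetical level timer: record the start, report minutes spent at completion.
class LevelTimer {
    private var startMs = 0L
    fun onLevelStarted() { startMs = System.currentTimeMillis() }
    fun minutesSpent(): Long = (System.currentTimeMillis() - startMs) / 60_000
}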
Let's get into the details. I will use Kotlin as an example, but it should be obvious what I'm doing here, and you can see more language examples here.
// user_id is a user-level dimension, so it's set once, outside the event
firebaseAnalytics.setUserProperty("user_id", userId)

// One dedicated event for level completion, carrying the time spent
firebaseAnalytics.logEvent("level_completed") {
    param("name", levelName)
    param("difficulty", difficulty)
    param("subscription_status", subscriptionStatus)
    param("minutes", minutesSpentOnLevel)
    param("score", score)
}
Now, see how I have a bunch of parameters with the event? These parameters are important, since they will allow you to conduct a more thorough and robust analysis later on and answer more questions. Like: Hey, what is the most difficult level? Do people still have trouble with it when the game difficulty is lower? How many times has this level been rage-quit or lost (for that you'd likely need a level_started event)? What about our paid players, are they having similar trouble on this level as well? How many people have rage-quit the game on this level and never played again? That last one would likely be easier to answer with SQL at this point, taking the latest value of the level name for level_started, grouped by user_id. Or you could have levelName as a user property as well as an event property; then it would be somewhat trivial to answer in the default analytics interface.
Note that you're limited in the number of event parameters you can send per event. The total number of unique parameter names is limited too, as is the number of unique event names you're allowed to have. In our case, the event name would be level_completed. See the limits here.
Because of those limitations, it's important to name your event properties in a somewhat generic way so that you can efficiently reuse them elsewhere. For this reason, I named the parameter minutes and not something like minutes_spent_on_the_level. You could then reuse this property to send the minutes the player spent actively playing, the minutes spent idling, the minutes spent on any info page, the minutes spent choosing upgrades, etc. The same idea applies to having a name property rather than level_name; it could just as well be id.
You need to carefully and thoughtfully stuff your event with event properties. I normally have a wrapper around the Firebase SDK in which I enrich events with dimensions that I always want to be there, like user_id or subscription_status, so that I don't have to add them manually every time I send an event. I also usually have some more adequate logging there, since Firebase Analytics' default logging is completely awful. And I have some sanitizing there too: lowercasing all values unless I'm passing something case-sensitive like base64 values, making sure I don't have double spaces (so replacing \s+ with a single space), and maybe also adding the user's local timestamp as another parameter. The latter is very helpful for spotting time-cheating users, especially if your game is an idler.
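As an illustration, a minimal sketch of such a wrapper could look like the following; the class name, the always-on dimensions, and the sanitizing rules are just my rendering of the ideas above, not anything the Firebase SDK prescribes:

import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.analytics.ktx.logEvent

// Hypothetical wrapper: enriches every event with always-on dimensions
// and sanitizes names and values before they reach Firebase.
class AnalyticsTracker(
    private val firebase: FirebaseAnalytics,
    private val userId: String,
    private val subscriptionStatus: String
) {
    // Collapse runs of whitespace into one space; lowercase unless case-sensitive (e.g. base64).
    private fun sanitize(value: String, caseSensitive: Boolean = false): String {
        val collapsed = value.replace(Regex("""\s+"""), " ").trim()
        return if (caseSensitive) collapsed else collapsed.lowercase()
    }

    fun track(name: String, values: Map<String, Any>, caseSensitiveKeys: Set<String> = emptySet()) {
        firebase.logEvent(sanitize(name)) {
            // Dimensions we always want on every event.
            param("user_id", userId)
            param("subscription_status", subscriptionStatus)
            // The user's local timestamp, to help spot time-cheating players.
            param("client_ts", System.currentTimeMillis())
            for ((key, value) in values) {
                when (value) {
                    is String -> param(sanitize(key), sanitize(value, key in caseSensitiveKeys))
                    is Long -> param(sanitize(key), value)
                    is Int -> param(sanitize(key), value.toLong())
                    is Double -> param(sanitize(key), value)
                    else -> param(sanitize(key), value.toString())
                }
            }
        }
    }
}

With that in place, the level-completed call becomes a one-liner like tracker.track("level_completed", mapOf("name" to levelName, "minutes" to minutesSpentOnLevel)).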
Good. We're halfway there :) Bear with me.
Now you need to go to Firebase and register your eps (event parameters) as cds (custom dimensions and metrics). If you don't register your eps, they won't be counted towards the global cd limit (it's about 50 custom dimensions and 50 custom metrics), but you won't be able to use them in reports either. You register the cds in the Custom Definitions section of Firebase.
Now you need to know whether each of these is a dimension or a metric, as well as the scope of your dimension. It's much easier than it sounds. The rule of thumb is: if you want to be able to run mathematical aggregation functions on it, then it's a metric; otherwise, it's a dimension. So:
firebaseAnalytics.setUserProperty("user_id", userId) <-- dimension
param("name", levelName) <-- dimension
param("difficulty", difficulty) <-- dimension (or can be a metric, depends)
param("subscription_status", subscriptionStatus) <-- dimension (can be a metric too, but even less likely)
param("minutes", minutesSpentOnLevel) <-- metric
param("score", score) <-- metric
Now, another important thing to understand is the scope. Because Firebase and GA4 are still essentially in beta and being actively worked on, you only have user or hit scope for dimensions, and only hit scope for metrics. The scope basically indicates how the value persists. In my example, we only need user_id as a user-scoped cd. Because user_id is a user-level dimension, it is set separately from the logEvent function (although I suspect you can do it there too; I haven't tried).
Now, we're almost there.
Finally, you don't want to use Firebase to look at your data. It's horrible at data presentation, although it's good for debugging, since that's what it was initially intended for. Because of how horrible it is, it's always advised to link it to GA4, which will allow you to look at the Firebase values much more efficiently. Note that you will likely need to re-register your custom dimensions from Firebase in GA4, because GA4 is capable of receiving multiple data streams, of which Firebase is just one source. GA4's cd limits are very close to Firebase's, though. OK, let's be frank: GA4's data model is almost exactly copied from Firebase's, but GA4 has much better analytics capabilities.
Good, you've moved to GA4. Now, GA4 is a very raw, not-officially-beta product, just like Firebase Analytics. Because of that, it's advised to first change your data retention to 12 months and to only use the Explorer for analysis, pretty much ignoring the pre-generated reports. They are just not very reliable at this point.
Finally, you may find it easier to just use SQL to get your analysis done. For that, you can easily copy your data from GA4 to a sandbox instance of BigQuery; it's very easy to do, and it is the most reliable method of using GA4 known at this moment. I mean, advanced analysts do the export into BQ, then ETL the data from BQ into proper storage like Snowflake, or even S3, or Aurora, or whatever you prefer, and then, on top of that, use a proper BI tool like Looker, Power BI, Tableau, etc. A lot of people just stay in BQ, though, and that's fine. Lots of BI tools have BQ connectors; it's just that BQ gets expensive quickly if you do a lot of analysis.
Whew, I hope you'll enjoy analyzing your game's data. Data-driven decisions rock in games. Well... They rock everywhere, to be honest.

Creating custom daily evaluations in Moodle

I've been looking for a solution for kindergarten teachers to submit daily student evaluations (different criteria) in Moodle. So far, the closest solution that I've found is the Attendance plugin.
Does anyone know of a plugin that allows the teacher to submit a daily evaluation?
Another option that I'm looking into is Moodle Competency, which could actually fit the need; however, it looks like competencies are not cumulative ... if I can find a way to make them cumulative, that would be awesome.
For example, one of the competencies we have is "able to read sentences", and the scale is "1 - non-developed", "2 - being developed", and "3 - fully developed". At any point, the teacher or school admin would like to know how competent the student is. In our case, if this is an indicator that is recorded daily, we should be able to take the average and use it to evaluate the student.
The competency framework (to my understanding) doesn't calculate the average; rather, it relies on being rated by the teacher.
Any thoughts where I should continue to look?
Attendance could be a great solution for your needs.
It can be hidden from the users acting as students (I'm not sure the kindergarten kids would be interested in seeing this, though maybe their parents would).
Attendance has full compatibility with course grading.
It can be configured so that each activity counts for a different percentage of the final grade; so, for instance, you can use one attendance activity to keep a registry of personal cleanliness, another to record assessment in math, one more for social assessment, and so on.
Finally, all users with at least teacher-level access (or another role you have defined, for example school administration or scholar control) have facilities to export every grading to a spreadsheet.
I've spent several years using it in a way similar to what you are asking for.
I hope this helps you.

How to implement Association Rules Analysis or Market Basket Analysis from scratch?

I have gone through numerous articles trying to understand what my first step should be to incorporate associative analysis (maybe Market Basket Analysis) into my system. They all go deep into the implementation of the algorithm, but no one talks about how to store the data in the first place.
I would really appreciate it if someone could give me some starting pointers or article links to begin with.
The first thing I want to implement is to track user clicks and provide suggestions based on the tracked data.
E.g. a user clicked on link A and subsequently on link B and link C. I can track this activity with some associated metadata (user, user organization, user role, etc.).
I do not want it to be limited only to links. In the future, I want to add a number of similar use cases to the system and make it smart. E.g. if a user sets specific values for fields A and B, most likely he/she will set value <bla> for field C.
My system may generate several thousand such data points in a day (e.g. user clicks, field selections, etc.).
Below are my questions:
How should I store my data? Should I go SQL or NoSQL? (I briefly looked into MongoDB and it looked promising.)
What tool should I use to perform the associative analysis? Are there any open source tools I can use?
It depends. Is your data suitable for a NoSQL database? To answer this question, it's better to read about the CAP theorem and its case studies: https://en.wikipedia.org/wiki/CAP_theorem or http://robertgreiner.com/2014/06/cap-theorem-explained/. Sometimes you want consistency (depending on your data) and availability, so it's better to use a relational database like MySQL. (Try to read case studies and analyze your data to pick the best tools.)
There is a large number of open source libraries, but in my opinion it's better to first read up on the concepts and algorithms. Try searching for the Apriori, ECLAT, and FP-Growth algorithms and get the concepts behind them; then you can pick a tool or write the code yourself (a minimal sketch follows the list of tools below). Some useful tools (depending on your programming language):
Python: https://github.com/asaini/Apriori, https://github.com/enaeseth/python-fp-growth, https://github.com/enaeseth/python-fp-growth/blob/master/fp_growth.py
PHP: https://github.com/sigidhanafi/fp-growth-php
JAVA: https://github.com/goodinges/FP-Growth-Java, http://www.philippe-fournier-viger.com/spmf/
You can also use Spark: https://spark.apache.org/docs/1.1.1/mllib-guide.html
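To make the concepts concrete, here is a minimal Apriori sketch in Kotlin over toy click data (made up for illustration); a real implementation would add the candidate-pruning step and derive association rules from the frequent itemsets:

// Apriori: find itemsets that appear in at least minSupport transactions.
fun apriori(transactions: List<Set<String>>, minSupport: Int): Map<Set<String>, Int> {
    val frequent = mutableMapOf<Set<String>, Int>()
    // Start with candidate 1-itemsets.
    var candidates: List<Set<String>> = transactions.flatten().toSet().map { setOf(it) }
    while (candidates.isNotEmpty()) {
        // Count the support of each candidate.
        val counts = candidates.associateWith { c -> transactions.count { it.containsAll(c) } }
        val survivors = counts.filterValues { it >= minSupport }
        frequent += survivors
        // Join step: merge surviving k-itemsets into (k+1)-candidates.
        val keys = survivors.keys.toList()
        candidates = keys.indices.flatMap { i ->
            (i + 1 until keys.size).mapNotNull { j ->
                val union = keys[i] + keys[j]
                if (union.size == keys[i].size + 1) union else null
            }
        }.distinct()
    }
    return frequent
}

fun main() {
    // Toy data: each set is the links one user clicked in a session.
    val sessions = listOf(
        setOf("A", "B", "C"),
        setOf("A", "B"),
        setOf("A", "C"),
        setOf("B", "C")
    )
    // Itemsets clicked together in at least two sessions.
    apriori(sessions, minSupport = 2).forEach { (items, count) -> println("$items -> $count") }
}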

Which features should be added for NER in search result snippets

I want to cluster queries with the help of the snippets of the search engine results they currently return. While using the noun phrases in the snippet worked well for Google results, I felt that I should try a different approach for Bing snippets, and hence was going for named entity extraction.
I have identified the following entities that can be extracted as of now using standard tools:
Person Names
Organisation Names
Locations
But I think I should be extracting more entities. Could anyone help me out here to identify more entities that may be useful?
This is an endless list once you get to real data problems.
For example, dates are a common thing to extract. But booking codes such as airline ticket numbers, or tracking codes such as parcel numbers, are also something Google Mail already recognizes and extracts.
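Pattern-shaped entities like these can often be caught with plain regular expressions before you reach for a trained model. A quick illustrative sketch in Kotlin (the patterns are deliberately simplified; real date and tracking formats vary far more):

// Simplified patterns for illustration only.
val datePattern = Regex("""\b\d{4}-\d{2}-\d{2}\b""")          // e.g. 2016-05-01
val trackingPattern = Regex("""\b[A-Z]{2}\d{9}[A-Z]{2}\b""")  // e.g. RB123456785SG (postal-style code)

fun extractPatternEntities(snippet: String): Map<String, List<String>> = mapOf(
    "date" to datePattern.findAll(snippet).map { it.value }.toList(),
    "tracking_code" to trackingPattern.findAll(snippet).map { it.value }.toList()
)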
I don't think this is a very good question for a Q&A site. Plus, you may want to read more of the literature and see what kind of data you can get; which entities you want to extract is clearly data-driven. When analyzing log files, for example, you might be interested in extracting host names, IPs, usernames, and daemon/service names.

Can't use LIKE but need to find related records in SQL Server

I've got a table used for issue tracking (kind of like Stack Overflow :) to log PC-related issues), and for simplicity I'll narrow it down to a few fields, something like the following:
Site  Category         Issue
MI    Office Software  My MS word does not run macros.
CL    Office Hardware  PC memory needs to be upgraded
MX    Office Printer   Printer is out of memory.
MI    Office Software  Office product prompts for allowing macro to run
I want to find related issues when I am looking at, for instance, one particular issue. I can't really use the LIKE operator, since if I do:
SELECT...FROM...WHERE Issue LIKE '%My MS word does not run macros.%'
it would only return the first record. Do I have to figure out how to pull keywords like "macros"? How would I find related records so that my query could, for instance, return records 1 and 4 together, or records 2 and 3 together?
Well, here are three ways to go about it:
1. Best case:
We have the users add 'tags' to each issue. This way users can search issues using tags and find related issues too. (Just like http://stackoverflow.com ;) )
This could be implemented by creating two new tables:
tag_metadata (tag#, name, description, ...)
tag_issue_relationship(tag#, issue#)
We could go a step further and add a weight to each issue entry that will determine its position in the similar-issue search/lookup ranking.
2. Average case:
We add more levels of sub-categories to help further classify the problem. Now, thinking of change control: will your system easily support adding, removing, and re-arranging the category hierarchy over time?
3. Worst case:
Let's say the users are very lazy and don't want to spend a few seconds tagging their issues :). Then you would have to implement an indexing algorithm that picks up keywords (nouns) from the issue description and builds indexes to facilitate finding 'similar' issues. Now, many a time we may have keywords in the description that are not significant and would result in false positives.
[Update]
Basically, the solution you are looking for can be broken up into these modules:
Parser: will extract significant keywords from the issue description. A custom dictionary list of keywords would be used as the lookup table.
Indexer: will index these keywords to make them searchable. This involves maintaining forward and reverse indices!
Search: will use the indexes to locate 'similar' issues (a minimal sketch of all three modules follows below).
There may be an existing commercial/open-source product that does this already.
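As a rough illustration of those three modules, here is a minimal in-memory sketch in Kotlin. The stop-word list and the shared-keyword scoring are simplifications, and a real system would also want stemming so that 'macro' and 'macros' match:

// Simplified stop-word list; a real dictionary would be far larger.
val stopWords = setOf("my", "is", "to", "for", "does", "not", "of", "the", "be", "out")

// Parser: extract significant keywords from an issue description.
fun parse(description: String): Set<String> =
    Regex("[a-z]+").findAll(description.lowercase())
        .map { it.value }
        .filter { it.length > 2 && it !in stopWords }
        .toSet()

class IssueIndex {
    // Indexer: forward index (issue -> keywords) and reverse index (keyword -> issues).
    private val forward = mutableMapOf<Int, Set<String>>()
    private val reverse = mutableMapOf<String, MutableSet<Int>>()

    fun add(issueId: Int, description: String) {
        val keywords = parse(description)
        forward[issueId] = keywords
        for (kw in keywords) reverse.getOrPut(kw) { mutableSetOf() }.add(issueId)
    }

    // Search: rank other issues by the number of shared keywords.
    fun similarTo(issueId: Int): List<Pair<Int, Int>> {
        val keywords = forward[issueId] ?: return emptyList()
        return keywords.flatMap { reverse[it].orEmpty() }
            .filter { it != issueId }
            .groupingBy { it }
            .eachCount()
            .toList()
            .sortedByDescending { (_, shared) -> shared }
    }
}

fun main() {
    val index = IssueIndex()
    index.add(1, "My MS word does not run macros.")
    index.add(2, "PC memory needs to be upgraded")
    index.add(3, "Printer is out of memory.")
    index.add(4, "Office product prompts for allowing macro to run")
    println(index.similarTo(1)) // issue 4 shares "run"; issues 2 and 3 share "memory"
}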