Automatic todo scheduling in emacs org-mode using estimates

I use the fantastic emacs org-mode to manage my todos and also import my calendar into org-agenda.
Most of my todos have a time estimate in org-mode.
Question 1. Is there a way to have emacs suggest/schedule todos automatically? E.g., look at the open todos, take their time estimates, and find slots to put them in.
Question 2. My todos also have deadlines and scheduled dates. Is there a way to make emacs sum up the time estimates of todos with deadlines and mark a time block in my org-agenda indicating that this much time will be taken by those todos?
In general, I would like to use the time estimates and deadlines in a more interesting way to help me plan my tasks.
Many thanks,
Stephan

Related

When to use windowing versus grouping by event time?

Kind of related, as allowing infinitely late data would also solve my problem.
I have a pipeline linked to a Pub/Sub topic, from which data can arrive very late (days) as well as in near real time; I have to perform simple parsing and aggregation tasks on it, based on event time.
As each element holds a timestamp of the event time, my first idea when looking at the programming guide was to use windowing, setting each element's timestamp to its event time.
However, as the processing can happen a long time after the event, I would have to allow very late data, which, based on the answer in the first link I posted, seems not to be such a good idea.
An alternative solution I have come up with would be to simply group the data by event time, without setting it as the element timestamp, and then calculate my aggregates based on that key. As the data for a given event time tends to arrive at around the same time in the pipeline, I could then just use windows of arbitrary length based on processing time, so that my data is ingested as soon as it is available.
This got me wondering: does that mean that windowing is useless as soon as your data comes a bit later than real time? Would it be possible to start a window when the first element is received and close it a bit afterwards?
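A minimal sketch of that alternative, using the Beam Python SDK. The topic path, the JSON payload shape (event_ts as epoch seconds plus a numeric value), the hourly key, and the parse_message helper are all assumptions for illustration:

    import json

    import apache_beam as beam
    from apache_beam.transforms import window

    def parse_message(msg_bytes):
        # Hypothetical payload: {"event_ts": <epoch seconds>, "value": <number>}
        return json.loads(msg_bytes.decode("utf-8"))

    def key_by_event_hour(rec):
        # Event time lives in the key, not in the element timestamp,
        # so arbitrarily late elements are never dropped as "late data"
        event_hour = rec["event_ts"] - rec["event_ts"] % 3600
        return (event_hour, rec["value"])

    with beam.Pipeline() as p:
        aggregates = (
            p
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/my-topic")
            | "Parse" >> beam.Map(parse_message)
            | "KeyByEventHour" >> beam.Map(key_by_event_hour)
            # Pub/Sub sets the element timestamp to publish time, so these
            # fixed windows act like processing-time windows: they only
            # decide how often the grouping fires, not which events belong
            # together
            | "Window" >> beam.WindowInto(window.FixedWindows(60))
            | "Group" >> beam.GroupByKey()
            | "Sum" >> beam.MapTuple(lambda hour, values: (hour, sum(values)))
        )

One consequence of this approach is that the same event-time key can fire in several different processing-time windows, so whatever consumes the aggregates has to combine partial results, e.g. by accumulating into a store keyed on the hour.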

What is the "Tableau" way to deal with changing data?

As a background to this question: I've been using Tableau for some time now, but I've been using code (Python, Swift, etc.) as a crutch for getting some of the more complicated things done. My employer is now making me move what I can away from custom code and into retail software packages because it will make things easier to maintain if I get hit by a bus or something.
The scenario: With code, I find it very easy to deal with constantly changing/growing data by using recursion. I know that this isn't something I can do with Tableau, but I've found that for many problems so far there is a "Tableau way" of thinking/doing that can solve a lot of problems. And, I'm not allowed to use Rserve/TabPy.
I have a batch of transactional data that grows by about 1.6 million records every month. What I would like to do is build something in Tableau that lets me track a complicated rolling total across the data without having to do it manually. In my code of choice, it would have been something like:
Import the data into a frame
For every unique date value in the 'transaction date' field, create a new column with that name
Total the number of transactions in each account for that day
Write the data to the applicable column
Move on to the next day
Then create new columns that store the sum total of transactions for that account over all of the 30-day periods available (date through date + 29 days)
Select the max value of the accounts for a customer for those 30-day sums
Dump all of that 30-day data into a new table based on the customer identifier
It's a lot of steps, but with a couple of nice recursive functions, it's done in a snap with a bit of code. Plus, it can handle the data as it changes.
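For reference, those steps could be sketched roughly as follows in pandas; the file name and the column names (customer_id, account_id, txn_date) are hypothetical:

    import pandas as pd

    df = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

    # Transactions per account per day
    daily = (df.groupby(["customer_id", "account_id",
                         pd.Grouper(key="txn_date", freq="D")])
               .size()
               .rename("txn_count")
               .reset_index())

    def max_forward_30d(group):
        # Fill missing days with 0 so each row is exactly one calendar day
        s = group.set_index("txn_date")["txn_count"].asfreq("D", fill_value=0)
        # Forward-looking window: date through date + 29 days
        fwd = pd.api.indexers.FixedForwardWindowIndexer(window_size=30)
        return s.rolling(fwd).sum().max()

    # Max 30-day transaction total per account, then the max across each
    # customer's accounts
    per_account = (daily.groupby(["customer_id", "account_id"])
                        .apply(max_forward_30d)
                        .rename("max_30d_total"))
    result = per_account.groupby("customer_id").max().reset_index()

Because the dates and accounts come from the data itself, re-running this after each monthly load picks up the new records automatically, which is the property the recursive version had.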
The actual question: How in the world do I approach problems like this within Tableau since my brain goes straight to recursive function land? I can do this manually with Tableau Prep, but it takes manual tweaking every time the data changes. Is there a better way, or is this just not within the realm of what Tableau really does?

Reporting of workflow and times

I have to start moving transactional data into a reporting database, but would like to move towards a more warehouse/data mart design, eventually leveraging SQL Server Analysis Services.
The thing being measured is the time between points of a workflow on a piece of work. How would you model that when the things that can happen do not have a specific order? Also, some work won't have all the actions, or might have the same action multiple times.
It makes me want to put the data into a typical relational design, with one table keyed on the piece of work and another table that has all the actions and times. Is that wrong? The business is going to try to use Tableau for report writing, and I know it can handle all kinds of sources, but again, I would like to move away from transactional designs and into warehousing.
Is the work the dimension, and the actions and times the facts?
Are there any other good online resources for modeling questions?
Thanks
It may seem like splitting hairs, but you don't want to measure the time between points in a workflow; you need to measure time within a point of a workflow. If you change your perspective, it can become much easier to model.
Your OLTP system will likely capture the timestamp of when each event occurred. When you convert that to OLAP, you should turn that into a start & stop time for each event. While you're at it, calculate the duration, in seconds or minutes, and the occurrence number for the event. If the task was sent to "Design" three times, you should have three design events, numbered 1, 2, 3.
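As a rough illustration of that conversion (the table and column names are made up, and pandas stands in for whatever ETL tooling you actually use):

    import pandas as pd

    # Hypothetical OLTP extract: one row per workflow event, with the
    # timestamp at which the piece of work entered each state
    events = pd.DataFrame({
        "work_id": [1, 1, 1, 1],
        "state":   ["Design", "Review", "Design", "Done"],
        "entered": pd.to_datetime(["2020-01-02 09:00", "2020-01-03 14:00",
                                   "2020-01-05 10:00", "2020-01-06 08:00"]),
    }).sort_values(["work_id", "entered"])

    # Start time is when the work entered the state; stop time is when it
    # moved on to the next state
    events["start_ts"] = events["entered"]
    events["stop_ts"] = events.groupby("work_id")["entered"].shift(-1)

    # Duration of each stay, in minutes
    events["duration_min"] = ((events["stop_ts"] - events["start_ts"])
                              .dt.total_seconds() / 60)

    # Occurrence number: a second visit to "Design" is numbered 2
    events["occurrence"] = events.groupby(["work_id", "state"]).cumcount() + 1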
If you want to know how much time a task spent in design, the cube will sum the duration of all three design events to present a total time. You can also define some calculated measures to determine first time in and last time out.
Having the start & stop times of the task allows you to, for example, find all of the tasks that finished design in January.
If you're looking for an average above the event grain, for example the average time in design across all tasks, you'll need a new calculated measure using total time in design divided by the number of tasks (not events).
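Continuing the hypothetical sketch above, the event-versus-task distinction looks like this:

    design = events[events["state"] == "Design"]

    # Total time in design sums over all design *events* ...
    total_design_min = design["duration_min"].sum()

    # ... but the average divides by the number of distinct *tasks*
    avg_design_min_per_task = total_design_min / design["work_id"].nunique()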
Assuming you have more granular states, it is a good idea to define parent states for use in executive reporting. In my company, the operational teams have workflows with 60+ states, but management wanted them rolled up into five summary states. The rollup hierarchy should be part of your workflow states dimension.
Hope that helps.

Writing an Apple Watch Complication that predicts future values and displays time sensitive data

I am in the process of writing an Apple Watch complication for watchOS 2. The particular data I am trying to show arrives (via web request) at intervals ranging from 3 to 6 minutes. I have an algorithm that can predict what the data values will look like. This presents a problem for me.
Because I want to display the data my predictive algorithm has to offer in time travel, I would like to use getTimelineEntriesForComplication (the version that asks for data after a certain date) to supply the future values that my algorithm believes will be true to the timeline. However, when time moves forward (as it tends to do) and we reach the time that one of these predicted data points was set to occur at, the predicted value is no longer accurate.
For instance, let's say it is 12:00 PM, and I currently have an (accurate) data value of A. The predictive algorithm might predict the following data values for the next two hours:
12:30 PM | B
1:00 PM | C
1:30 PM | D
2:00 PM | E
However, when 12:30 PM actually comes around, the actual data value might be F. In addition, the algorithm will generate a new set of predictions all the way to 2:30 PM. I understand I can use reloadTimelineForComplication to indicate that the timeline has to be rebuilt, but I have two problems with this method:
I fear I will exceed the execution time limit rather quickly
reloadTimelineForComplication flushes the entire timeline, which seems wasteful to me considering that all the past data is perfectly valid; it's simply the next 4 or so values that need to be updated.
Is there a better way to handle this problem?
At present, there's no way to alter a specific timeline entry, without reloading the entire timeline. You could submit a feature request to Apple.
Summary
To summarize the details that follow: even though your server updates its predictions every 3-6 minutes, the complication server will only update itself at 10-minute intervals, starting at the top of an hour. Reloading the timeline is your only option, as it will guarantee that all your predictions are updated and accurate within 10 minutes.
Specific findings
What I've found in past tests involving extendTimelineForComplication: using the minimum 10-minute update interval, is that the dataSource is asked for 100 entries before and after a sliding window based on the current time.
The sliding window isn't centered on the current time. For watchOS 2.0.1, it appears to be skewed to ask for more recent future entries (after ~14-27 minutes in the future), and less recent past entries (before ~100 minutes in the past).
Reloading is the only way to update any entries that fall within the ~two hour sliding window.
Issues
In my experience, extendTimelineForComplication has been less reliable than reloading the timeline, as a timeline that is never reloaded needs to be trimmed to discard entries. The fewer entries per hour, the less frequently this occurs, but once the timeline cache grows large enough, the SDK appears to aggressively discard entries from the head and tail of the cache, even if those entries fall within the 72-hour time-travel window. The worst I've seen is only being able to time-travel forward 30 entries, instead of 100.
Having provided those details, I wouldn't suggest that anyone try to take advantage of any behaviors that may change in the future.
Daily budget and battery life
As for the daily budget, it sounds more ominous than it is, but I think you'd have to do some intense calculations before the complication server cuts you off. Even with ten-minute updates, I never exceeded the budget. The real issue is battery use. You'll find that frequent updates can drain the battery before the day is over. This is probably the most significant reason for Apple's recommendation:
Complications should provide as much data as possible during each update cycle, so specify a date as far into the future as you can manage. Do not ask the system to update your complication within minutes. Provide data to last for many hours or for an entire day.

How to append separate datasets to make a combined stacked dataset in Stata without losing information

I'm trying to merge two datasets from two time periods, times 1 and 2, to make a combined repeated-measures dataset. There are some observations in time 1 which do not appear in time 2, as those observations are for participants who dropped out after time 1.
When I use the append command in Stata, it appears to drop the observations from time 1 that don't have corresponding data at time 2. It does, however, append observations for new participants who joined at time 2.
I would like to keep the time 1 data of those participants who dropped out, so that I can still use that information in the combined dataset.
How can I tell Stata not to automatically drop these participants?
Thanks,
Steve
Perhaps the best way of interesting people in advising you on your problems is to respect those who answer your questions. You have been repeatedly advised, even as recently as yesterday, to review https://stackoverflow.com/help/someone-answers and provide the feedback that reflects itself in the reputation scores of those who take the time to help you.
In any event, append does not work as you describe it. If you take the time to work out a small reproducible example, by creating two small datasets, appending them, and then listing the results, you may find the roots of your misunderstanding. Or you will at least be able to provide substantive information from which others can work, should someone be interested in helping you.