Two questions about TradeStation charts

I am evaluating which platform to choose: TWS from Interactive Brokers, or TradeStation (TS). TWS has a demo account but TS does not. I have two questions about TS stock charts.
Does premarket data in charts start from 4am, or later? TWS starts from 4am. I know TS only allows trades after 8am; I am just wondering if premarket data in the chart also starts late.
When premarket data is displayed along with regular-trading-hours data, is the hourly candle aligned to 9am or 9:30am? TWS aligns hourly candles to 9am rather than the market open, which I honestly don't like. I am just wondering if TS does the same thing.
If anyone can answer these two questions, I would really appreciate it!
Thanks,
Jay

You can set when you want TS to show you data from, in terms of sessions; if the data is there, it will display it. So yes, I can for example set my TS chart to show 'premarket' AAPL data from 0300 (or really any other time). I can't trade it before the exchange opens, though, as that would be an OTC/off-exchange trade, which only institutions can do.
Candles are aligned to 0930, but that doesn't matter much; it can be changed in the settings. Note, though, that when backtesting you should use LIBB (Look Intra Bar).
Hope this helps! I strongly recommend you take a close look at MultiCharts too, using IQFeed data.
Also remember, all these systems are buggy as all hell. Managing workflow with them is as much about learning and overcoming their eccentricities as it is doing the work.


Simulate how many incidents would have been generated with different anomaly detection settings

Dear KQL masters/experts,
I've been trying to find the most effective (elegant) solution to achieve what I'm trying to do. I'd like to hear from the community, thank you.
Situation:
Currently we have an anomaly detection rule named "Process execution frequency anomaly" running every hour, and it has generated a lot of false positives.
We would like to tune the analytic rule by changing the "threshold" value in series_decompose_anomalies.
We would like to simulate the analytic rule running with various different settings, to see how many incidents would be generated.
Issue/ Things I tried:
The idea was to simulate the analytic rule as if it had been running every hour for, say, the past 7 days, similar to the "Results simulation" section.
I have been able to create a simulation in Workbooks for simple analytic rules by adding a make-series command at the end of the KQL. However, for this specific anomaly detection rule I haven't been able to recreate it, most likely because the data is produced in memory by the series_decompose_anomalies function.
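To illustrate, here is the kind of query I have in mind: expand each series once per candidate threshold and count how many time bins each threshold would have flagged. This is a rough, untested sketch; SecurityEvent and Process are just placeholders for the rule's actual table and entity column, and the threshold list is arbitrary:

let thresholds = dynamic([1.5, 2.0, 2.5, 3.0]);  // candidate settings to simulate
SecurityEvent
| where TimeGenerated > ago(7d)
| make-series ExecCount = count() default = 0 on TimeGenerated from ago(7d) to now() step 1h by Process
| extend threshold = thresholds
| mv-expand threshold to typeof(real)            // one copy of each series per threshold
| extend flags = series_decompose_anomalies(ExecCount, threshold)
| mv-expand flags to typeof(int)
| where flags != 0                               // anomaly flags are +1 or -1
| summarize SimulatedIncidents = count() by threshold
| order by threshold asc

Each flagged time bin counts as one candidate incident here, which is only an approximation if the real rule groups or suppresses alerts, but it should at least show the relative effect of each threshold.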
Question:
Is it doable?
Did I approach this incorrectly?
Or is it best to change the settings and then evaluate over the next 30 days?
Thank you for your thoughts and suggestions.

How to stop the timeout in a Service block

I am modeling a ticket system with various SLAs. The model must contain several Service blocks with different reaction times (from 2 to 32 hours). Only working hours should be taken into account in the Service block, so the timeout should pause during non-working hours and on weekends. Could you please kindly tell me how I can realize this?
Thank you very much in advance!
I can think of two answers: one simplified approach that works in many cases, and another that is more advanced and probably more accurate.
Simplified approach: I would set the model in hours and keep everything running as-is, without any stops. Then, at the end of the simulation, if the total time is 100 hours and you know that you have 8 hours/day at 5 days/week, you know the total duration is 100 / (8 × 5) = 2.5 weeks. Of course, this has limitations and might become more complex later on if you want day-specific actions (e.g. if you want to differentiate between Monday, Tuesday, etc.).
Advanced, more accurate approach: Create resources whose capacities are defined by a schedule, and assign them to your Service blocks. Create a schedule and specify the working hours in it. Check the link below to learn more about schedules. I call this the more advanced approach because you need to make sure the schedule is defined correctly and that all elements in the model are properly controlled (e.g. non-Service blocks such as Source, delays, etc.).
https://help.anylogic.com/topic/com.anylogic.help/html/data/schedule.html
I personally would use the first approach if the model is rather simple and modeling working hours is enough for analysis. Otherwise, I'd go for option 2.
Finally, another option I'd like to highlight is the "suspend/resume" functions. I am only adding this because you asked how to stop the timeout: these functions specifically stop and resume it, but you'll need to define the times at which they are executed (through an event, for example).

How can I see approximate hours worked during the week with Mercurial?

So I know about the Mercurial activity extension; however, what it seems to plot is the number of commits, and I don't think that is a good metric. Whether you make 20 commits or 1 commit in an hour, one can't deduce much from that. I think it's far safer to say that every time there is a commit, we assume/input some amount of prior work (1-2 or maybe 5 hours, depending on the person), and then plot that in a block-style calendar widget (like GitHub's contribution graph), either by week or by month...
Does anything like what I describe exist?
This at least plots commits by time of day:
https://bitbucket.org/fundacion_jala/stathg/wiki/Home
But it seems to be more of a point cloud than filled-in blocks, and it doesn't give one an idea of how much one has worked as a percentage of the work week. So I'm not going to mark my own answer as correct.

What is the source for historical stock chart prices?

Yahoo does not seem to be using historical close prices, nor historical adjusted close prices, for its charts. For example, if you look at PCG on July 3, 1980, the data looks like this (close of 5.34):
Date,Open,High,Low,Close,Volume,Adj Close
1980-07-03,5.34,5.34,5.31,5.34,99200,0.04
However, on the interactive chart, Yahoo shows this price on July 3, 1980:
2.6695
On Google's chart, it shows 12.188 for that date. It is also missing some dates.
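For what it's worth, 5.34 / 2.6695 ≈ 2.00, so Yahoo's chart price looks like it could be the raw close adjusted for a single 2:1 split but not for dividends, while the Adj Close column (0.04) appears to fold dividends in as well. That's only my guess from the arithmetic, though; I can't confirm it's what Yahoo actually does.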
What is going on? I am beginning to doubt the integrity of stock charts.
Does anyone know what source data Yahoo (or Google) uses for their stock charts?
I recommend http://eoddata.com; it's a very good source of data.

Crystal Reports: Possible to show the full set in one chart, and subsets in separate charts?

In Crystal Reports, is there a way to get both full set charting and subset charting, in the report headers?
I'm working on a report from an erstwhile co-worker and I'm still trying to make things "better".
While I haven't found the solution to accruing time (see "Accruing over time (non-overlapping) - technique?"), I'll press on with how to use the resulting data once I retrieve it.
The report is a Global Availability report for network technologies, and part of the report is graphical:
Chart availability for the different network types over the last "n" months.
Chart availability for each region (for each network type, over "n" months).
She (my co-worker) had a global chart, but for each region she built a separate sub-report containing just that region's chart. The query isn't optimal, and with the sub-reports it is repeated each time.
Is there a way to use a single data set in one report for all five charts, forcing the four regional charts to display only their own region's data?
Additional info:
The charts are all bar charts; the design is:
y-axis: calculated availability
x-axis: grouped by network type (Switches, Trunks, Network), sub-grouped by month
Bad example: [image showing the regional charts, e.g. United States and Europe, laid out side by side]
Let me see if I understand this. In your Report Header, you have 5 Subreports for the 4 regional graphs and the global graph. And you want to collapse this all into 1 Subreport if possible?
Yes, but you can't do it like in your image, where United States & Europe are side by side; they would have to be one per row. The data source also has to be formatted correctly. To do this:
Make a new subreport. Group it by the Region.
In this subreport, make your regional graph in the Group Header section.
In this subreport, also make your global graph in the Report Header section.
Insert this subreport into your main report and you should be done.
Sometimes, the only way out of the fire is through it.
After lots of unsatisfactory refactoring, I spoke with the original requestor (from years ago) and got some good information. I have yet to speak to the most recent requestor again (who didn't have any knowledge of the technical requirements the last several times).
I spoke with the guy who is tending a related db, and I got permission to add some functions, views, stored procedures, etc. to THAT db... within reason and after code/perf review -- something that isn't normally conducted, so I welcome it. I WILL have the ability to do the procedural stuff through... a procedure. Written as a stand-alone, I should be able to re-use it for any of the queries against future needs.
And... Yes, I am pretty much going to have to (read "get to") re-design, and hopefully get rid of most of the sub-reports. Yeay, me.
Thanks for coming along for the ride.