What machines are included in BizSpark?

Can I create one A4 machine (8 cores, 14 GB RAM) and two A0 machines (1 core, 768 MB RAM) and have them all turned on at the same time?
What if I create another A4 machine but keep it turned off, alternating it with the first A4 so that only one is running at a time? Would that exceed the BizSpark subscription limits, and would the subscription be blocked?
Best regards,
Gennady

1) Sorry if this is off-topic; I'm new to Stack Overflow.
2) I think I found the answer. The subscription allows 20 cores, and virtual machines consume cores as shown in the table below.
So I think I can create another A4, as that would be 8 + 8 + 1 + 1 = 18 cores, less than the 20 allowed by the BizSpark subscription.
Best regards,
Gennady


How to stop the timeout in a Service block

I am modeling a ticket system with various SLAs. The model must contain several Service blocks with different reaction times (from 2 to 32 hours). In the Service block, only working hours should be taken into account, so the timeout should pause during non-working hours and on weekends. Could you please tell me how I can achieve this?
Thank you very much in advance!
I can think of two answers: one simplified that works in many cases, the other more advanced and probably more accurate.
Simplified approach: I would set the model time units to hours and keep everything running as-is without any stops. Then, at the end of the simulation, if the total time is 100 hours and you know you work 8 hours/day, 5 days/week, you know the total duration is 100 / (8 × 5) = 2.5 weeks. Of course, this has limitations and might become more complex later on if you want day-specific actions (e.g. you want to differentiate between Monday, Tuesday, etc.); a small sketch of this bookkeeping follows below.
Advanced, more accurate approach: create resources whose capacities are defined by a schedule and assign them to your services. Create a schedule and specify the working hours in it. Check the link below to learn more about schedules. I call this the more advanced approach because you need to make sure the schedule is defined correctly and that all elements in the model are properly controlled (e.g. non-Service blocks such as sources, delays, etc.).
https://help.anylogic.com/topic/com.anylogic.help/html/data/schedule.html?resultof=%22%73%63%68%65%64%75%6c%65%73%22%20%22%73%63%68%65%64%75%6c%22%20
I personally would use the first approach if the model is rather simple and modeling working hours is enough for analysis. Otherwise, I'd go for option 2.
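To illustrate the simplified approach's bookkeeping, here is a minimal plain-Java sketch; the 8 hours/day and 5 days/week figures are taken from the example above, so adjust them to your own calendar:

    /** Converts elapsed working hours to calendar weeks, assuming
     *  the 8 h/day, 5 days/week calendar from the example above. */
    public final class WorkingTime {
        static final double HOURS_PER_DAY = 8.0;
        static final double DAYS_PER_WEEK = 5.0;

        static double toCalendarWeeks(double workingHours) {
            return workingHours / (HOURS_PER_DAY * DAYS_PER_WEEK);
        }

        public static void main(String[] args) {
            System.out.println(toCalendarWeeks(100.0)); // 100 h -> 2.5 weeks
        }
    }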
Finally, another option I'd like to highlight is the "suspend/resume" functions. I am only adding this because you asked "how to stop timeout": these functions specifically stop and resume the timeout, but you'll need to define the times at which they are executed (through an event, for example).
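To make that idea concrete, here is a generic, self-contained sketch of a pausable timeout in plain Java. It illustrates the suspend/resume mechanics only; it is not the AnyLogic API, whose exact function signatures you should check in the help for your version:

    import java.time.Duration;

    /** An SLA timeout whose clock only advances while resumed
     *  (i.e. during working hours). */
    final class PausableTimeout {
        private Duration remaining;
        private long runningSinceMillis = -1; // -1 = currently suspended

        PausableTimeout(Duration total) { this.remaining = total; }

        /** Call at the start of working hours. */
        void resume(long nowMillis) {
            if (runningSinceMillis < 0) runningSinceMillis = nowMillis;
        }

        /** Call at the end of working hours or on weekends. */
        void suspend(long nowMillis) {
            if (runningSinceMillis >= 0) {
                remaining = remaining.minusMillis(nowMillis - runningSinceMillis);
                runningSinceMillis = -1;
            }
        }

        /** True once the working-hours budget is used up. */
        boolean expired(long nowMillis) {
            long running = runningSinceMillis >= 0 ? nowMillis - runningSinceMillis : 0;
            return remaining.minusMillis(running).isNegative();
        }
    }

In the model, two recurring events (one at the start and one at the end of each working day) would call resume and suspend respectively.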

How to create multiple unloading docks using the selectOutput block?

I am working on a warehouse simulation model and I cannot figure out how to increase the number of unloading docks while limiting each dock to one supplier truck. I got it working with two unloading docks using a selectOutput whose condition is true when Unload_dock_1.isBlocked(). Does anyone have tips on how to increase the number of unloading docks while keeping the restriction of one truck per dock until it is released later?
Current model with two unloading docks
Kind regards,
Stefan
You can wrap several blocks into one agent and re-use it as many times as you want. Select the blocks, right-click and select "turn into agent" (or similar). Then you can parameterize that agent and re-use it 2 times, 5 times or 99 times; a routing sketch follows below.
Check the YouTube tutorial on the concept: https://youtu.be/0Nu1a9Te6ac
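As a hedged sketch of the routing condition in plain Java (Dock stands in for the agent type you get from "turn into agent", and isBlocked() mirrors the Unload_dock_1.isBlocked() check from the question):

    import java.util.List;

    final class DockRouter {
        /** Stand-in for the dock agent created via "turn into agent". */
        interface Dock { boolean isBlocked(); }

        /** Returns the index of the first free dock, or -1 if every dock
         *  already holds a truck (the arriving truck should then wait). */
        static int firstFreeDock(List<? extends Dock> docks) {
            for (int i = 0; i < docks.size(); i++) {
                if (!docks.get(i).isBlocked()) return i;
            }
            return -1;
        }
    }

Because the docks live in a collection, the same condition scales to any number of docks instead of chaining one selectOutput per dock.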

Do multiple scripts (projects) contribute to Trigger Aggregate Execution Time?

I have a project, ScriptA, with functions in several files whose triggers all run under UserA and consume about 2 hours of runtime per day.
I have another project, ScriptB, with other functions in other files whose triggers also all run under UserA (the same user as ScriptA) and consume about 3 hours of runtime per day.
Is my Trigger Aggregate Execution Time quota (from the quota page here) aggregated per user or per script? That is, is it:
five hours (2 + 3) for UserA, or
two hours for ScriptA and three hours for ScriptB?
I have seen this answer but it doesn't explicitly address the scoping question I'm asking.
Obviously it is per user, not per script. Otherwise quotas wouldn't make sense.
In the interests of getting some evidence together for this:
At 4m25s in this March 2013 episode of Google Apps Unscripted, Kalyan Reddy says that the quotas are "per account type", and as you can see in the dashboard, the quota table is gridded with columns labelled by those account types too.
I have also done some testing with a script that uses quite a bit of time. It started to max out other scripts running under the same account, and many of that account's triggered scripts started to get "Service using too much computer time for one day" errors. But, interestingly, after a couple of days those errors subsided; I believe on a consumer account I am now getting far more execution time than 1 hour per day.
While not a direct answer to the question, and still a leap of logic/assumption, these two things make me feel that "per account" is more likely to be correct than "per script". I'll keep the question open for a bit longer for any comments (especially from Googlers).

Service broker with SqlNotificationRequest

I am evaluating Service Broker with SQL notification for my project. My requirement is that a user places an order in System A, which updates the Order table; as soon as the order is placed, I need to notify System B. I have done a quick POC with a trigger, Service Broker and SqlNotificationRequest in ADO.NET, and it works as I expected.
What I would like to know from the group:
A) What are the best practices I need to follow for this?
B) What are the disadvantages of the above approach, if any?
C) Are there any disadvantages to using triggers? If so, what are they for the above approach?
The Order table will receive 1,000 to 1,500 orders from System A every day. I would also like to know about the performance of the above approach.
If what you're trying to do is simply push data from System A to System B, then as long as there are no clients connected to System B, you won't need SQL notification.
Instead of using triggers, you may consider Change Tracking. Take a look at the "Real Time Data Integration..." article on the Service Broker Team Blog.
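As a minimal sketch of the Change Tracking side (a JDBC example; the same SQL works from ADO.NET), assuming change tracking is already enabled on dbo.Orders, that OrderId is its primary key, and that lastVersion is persisted between polls:

    import java.sql.*;

    final class OrderChangePoller {
        /** Reads rows of dbo.Orders changed since lastVersion and returns
         *  the new version watermark to use on the next poll. */
        static long poll(Connection con, long lastVersion) throws SQLException {
            long newVersion;
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT CHANGE_TRACKING_CURRENT_VERSION()")) {
                rs.next();
                newVersion = rs.getLong(1); // capture the watermark first
            }
            String sql = "SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION "
                       + "FROM CHANGETABLE(CHANGES dbo.Orders, ?) AS ct";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, lastVersion);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // notify System B about rs.getLong("OrderId") here
                    }
                }
            }
            return newVersion; // persist and pass in on the next poll
        }
    }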

WF performance with 20,000 new persisted workflow instances each month

Windows Workflow Foundation is reportedly slow when persisting workflow instances.
I'm planning a project whose business layer will be based on WF workflows exposed as WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish.
I have been led to believe that, given WF's slow persistence, my scenario would be unworkable for performance reasons.
I have the following questions:
Is this true? Will my performance be poor under that load, given WF's persistence speed limitations?
How can I solve the problem?
We currently have two possible solutions:
1. Each new business process request (e.g. "give me a new driver's license") becomes a new WF instance, and the number of persistence operations is limited by forwarding all status-request operations to saved state values in a separate database.
2. Have only a small number of workflow instances active at any given time, with no persistence whatsoever (except in case of system crashes, etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance currently at that step (e.g. I submit my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow handles all of them simultaneously).
I'm very interested in a solution to this problem. If you want to discuss it, feel free to mail me at nstjelja#gmail.com
The number of hydrated, executing workflows will be determined by environmental factors: memory, server throughput, etc. Persistence issues really only come into play if you are loading and unloading workflows all the time, i.e. in (near) real time; in that case workflow may not be the best solution.
In my current project we also use WF with persistence. We don't have quite the same volume (perhaps ~2,000 instances/month), and they usually don't take as long to complete (they are normally done within 5 minutes, in some cases a few days). We decided to split the main workflow into two parts at the point where the normal waiting state would be. I can't say that I have noticed any performance difference due to this, but it did simplify the system, since it sometimes had problems matching incoming signals to the correct workflow instance (that was an issue in our code, not in WF).
I think that if I were to start a new project based on WF I would rather go for smaller workflows that are invoked in sequence, than to have big workflows handling the full process.
To be honest, I am still investigating the performance characteristics of Workflow Foundation.
However, if it helps, I have heard the WF team made many performance improvements in the new release, WF 4.
Here are a couple of links that might help (if you haven't seen them already):
A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 (discusses performance improvements)
Performance Characteristics of Windows Workflow Foundation (applies to WF 3.0)
WF on 3.5 had a performance problem. WF4 does not: 20,000 WF instances per month is nothing. If you were talking per minute, I'd be worried.