We have a .NET project that checks for the existence of a file on a remote computer.
We need to execute this against multiple remote computers (thousands) within a department, each at a predefined time, every day. The execution times are stored in a database that changes frequently, and the execution time for each remote computer may differ (or some of them could be the same).
To achieve this, we plan to use the Quartz scheduler. Since we are new to Quartz, we would like to know how to achieve it. At a high level, we need these -
The scheduler should start off at a specific time every day - will Quartz do this?
Once started, it should fetch the execution time for each remote computer from the DB
Prepare an XML file with the list of all remote computers and their execution times
Schedule an execution for each of the remote computers
Execute the .NET project at the scheduled time for each remote computer
What types of projects/components will be needed to achieve the above? We already have a .NET class library project that checks the remote computers. How do we integrate it with Quartz?
UPDATE
Thanks a lot, granadaCoder.
As per my understanding, there's a main job that runs once a day (based on quartz.config) and schedules other jobs by fetching details from the DB. In this case the console app needs to be running all the time...
What is your opinion on writing a console app and scheduling it with the Windows Task Scheduler to run at 12 AM daily?
Within the console app, we'll prepare a (custom) XML file containing the list of jobs (with details such as the trigger time and the data needed by the job class) and pass it on to a scheduler module (a class library project) that will start the scheduler and queue up all the jobs in the XML.
After scheduling all these jobs, we'll wait (inside the scheduler module) for job-completion notifications from all the jobs and then shut down the scheduler and exit the console app (a sketch of this wait is below). This may take long, depending on the trigger time of the last job.
Let me know what you think about this approach.
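For reference, a minimal sketch of that wait-for-completion idea, assuming the synchronous Quartz.NET 2.x API. The JobCompletionListener class, its name, and "MyGroup" are hypothetical; you'd size the countdown to the number of jobs you actually scheduled:
using System.Threading;
using Quartz;
using Quartz.Impl.Matchers;
public class JobCompletionListener : IJobListener
{
    private readonly CountdownEvent _countdown;
    public JobCompletionListener(int expectedJobCount)
    {
        _countdown = new CountdownEvent(expectedJobCount);
    }
    public string Name { get { return "JobCompletionListener"; } }
    public void JobToBeExecuted(IJobExecutionContext context) { }
    public void JobExecutionVetoed(IJobExecutionContext context)
    {
        _countdown.Signal(); // a vetoed job will never complete normally
    }
    public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
    {
        _countdown.Signal(); // count the job as done, whether it succeeded or failed
    }
    public void WaitForAllJobs()
    {
        _countdown.Wait(); // blocks until every counted job has run
    }
}
You would attach it with scheduler.ListenerManager.AddJobListener(listener, GroupMatcher<JobKey>.GroupEquals("MyGroup")), call listener.WaitForAllJobs() after scheduling everything, and then scheduler.Shutdown(true).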
In addition, we have multiple departments (4 in total), so I'm thinking of writing 4 console apps - one for each department - and scheduling all of them with Task Scheduler (at the same time, since different timings may not help because each department may have jobs with trigger times spanning the entire day).
Alternatively, I'm also wondering whether it is possible to specify 4 jobs in the quartz.config file with the same trigger time? (I'm not sure how this would work - would it create 4 department-specific scheduler instances so we can queue up department-wise jobs to each scheduler instance?)
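On the multi-scheduler point: a single quartz.config drives one scheduler instance, but nothing stops a single process from creating several schedulers by passing properties programmatically. A hedged sketch, assuming Quartz.NET 2.x; the instance name and thread count are made up:
using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;
// One scheduler per department, each with its own instance name.
NameValueCollection props = new NameValueCollection();
props["quartz.scheduler.instanceName"] = "Dept1Scheduler"; // hypothetical name
props["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
props["quartz.threadPool.threadCount"] = "10";
IScheduler dept1Scheduler = new StdSchedulerFactory(props).GetScheduler();
dept1Scheduler.Start();
// Repeat with "Dept2Scheduler" etc., then queue department-specific jobs to each.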
Write a job, defined in .xml, that does your job cleanup and (re)scheduling ("ScheduleOtherJobsJob" in the code below).
ScheduleOtherJobsJob clears out any old entries, reads some datastore to figure out the list of new jobs it has to perform, and adds these jobs to the scheduler.
I've written a basic example. I don't have fancy when-to-run-based-on-exact-date logic....I just push into the future with WithIntervalInSeconds.
I'm using GROUP_NAME to figure out which jobs to remove from the scheduler.....before re-adding them.
I show the .xml for the ScheduleOtherJobsJob. You could add it to an AdoJobStore, or wherever. The .xml is just simpler for an example.
Keep in mind that you'll need a process that keeps the Scheduler "alive". That is, you can't just add new jobs to the IScheduler and then kill the hosting process. In a console app, you would add a "Console.ReadLine()"...so the program doesn't stop running.
The key points are......one job to schedule the other jobs; clearing out the older jobs based on some filter (GROUP_NAME here); re-adding the jobs based on some datastore......and making sure the host process stays running so all the newly scheduled jobs will run.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl.Matchers;

namespace MyNamespace
{
    public class ScheduleOtherJobsJob : IJob
    {
        private const string GROUP_NAME = "MySpecialGroupName";

        /// <summary>
        /// Called by the <see cref="IScheduler" /> when a
        /// <see cref="ITrigger" /> fires that is associated with
        /// the <see cref="IJob" />.
        /// </summary>
        public virtual void Execute(IJobExecutionContext context)
        {
            JobKey key = context.JobDetail.Key;
            JobDataMap jbDataMap = context.JobDetail.JobDataMap;
            // string jobSays = jbDataMap.GetString("KeyOne");
            JobDataMap trgDataMap = context.Trigger.JobDataMap;
            string triggerParameter001 = trgDataMap.GetString("TriggerFileName");
            JobDataMap mergedMap = context.MergedJobDataMap;
            string whoWins = mergedMap.GetString("DefinedInJobDetailAndTriggerKey");
            string msg = string.Format("ScheduleOtherJobsJob : JobKey='{0}', triggerParameter001='{1}', whoWins='{2}' at '{3}'", key, triggerParameter001, whoWins, DateTime.Now.ToLongTimeString());
            Console.WriteLine(msg);

            /* Remove any previously scheduled jobs in our group */
            context.Scheduler.UnscheduleJobs(GetAllJobTriggerKeys(context.Scheduler));

            /* Schedule Your Jobs */
            List<OtherJobInfo> infos = new OtherJobInfoData().GetOtherJobs();
            foreach (OtherJobInfo info in infos)
            {
                ScheduleAHasParametersJob(context.Scheduler, info);
            }
        }

        private IList<TriggerKey> GetAllJobTriggerKeys(IScheduler scheduler)
        {
            /* Find all current jobs.....filter here if need be */
            IList<TriggerKey> returnItems = new List<TriggerKey>();
            IList<string> jobGroups = scheduler.GetJobGroupNames();
            //IList<string> triggerGroups = scheduler.GetTriggerGroupNames();
            IList<string> filteredJobGroups = jobGroups.Where(g => g.Equals(GROUP_NAME)).ToList();
            foreach (string group in filteredJobGroups)
            {
                var groupMatcher = GroupMatcher<JobKey>.GroupContains(group);
                var jobKeys = scheduler.GetJobKeys(groupMatcher);
                foreach (var jobKey in jobKeys)
                {
                    var detail = scheduler.GetJobDetail(jobKey);
                    var triggers = scheduler.GetTriggersOfJob(jobKey);
                    foreach (ITrigger trigger in triggers)
                    {
                        returnItems.Add(trigger.Key);
                        Console.WriteLine(group);
                        Console.WriteLine(jobKey.Name);
                        Console.WriteLine(detail.Description);
                        Console.WriteLine(trigger.Key.Name);
                        Console.WriteLine(trigger.Key.Group);
                        Console.WriteLine(trigger.GetType().Name);
                        Console.WriteLine(scheduler.GetTriggerState(trigger.Key));
                        DateTimeOffset? nextFireTime = trigger.GetNextFireTimeUtc();
                        if (nextFireTime.HasValue)
                        {
                            Console.WriteLine(nextFireTime.Value.LocalDateTime.ToString());
                        }
                        DateTimeOffset? previousFireTime = trigger.GetPreviousFireTimeUtc();
                        if (previousFireTime.HasValue)
                        {
                            Console.WriteLine(previousFireTime.Value.LocalDateTime.ToString());
                        }
                    }
                }
            }
            return returnItems;
        }

        private static void ScheduleAHasParametersJob(IScheduler sched, OtherJobInfo info)
        {
            IJobDetail hasParametersJobDetail = JobBuilder.Create<FileNameDoSomethingJob>()
                .WithIdentity(info.UniqueIdentifier + "IJobDetailWithIdentity", GROUP_NAME)
                //.UsingJobData("JobDetailFileName", info.FileName)
                .Build();

            ITrigger hasParametersJobTrigger001 = TriggerBuilder.Create()
                .WithIdentity(info.UniqueIdentifier + "ITriggerWithIdentity", GROUP_NAME)
                .UsingJobData("TriggerFileName", info.FileName)
                .StartNow()
                .WithSimpleSchedule(x => x
                    .WithIntervalInSeconds(info.WithIntervalInSeconds) /* You'll have to do something fancier here if you want an exact time - see the sketch after this code */
                    .WithRepeatCount(0))
                .Build();

            sched.ScheduleJob(hasParametersJobDetail, hasParametersJobTrigger001);
        }
    }

    public class FileNameDoSomethingJob : IJob
    {
        public virtual void Execute(IJobExecutionContext context)
        {
            JobKey key = context.JobDetail.Key;
            JobDataMap jbDataMap = context.JobDetail.JobDataMap;
            JobDataMap trgDataMap = context.Trigger.JobDataMap;
            string triggerFileNameParameter = trgDataMap.GetString("TriggerFileName");
            JobDataMap mergedMap = context.MergedJobDataMap;
            string msg = string.Format("FileNameDoSomethingJob : JobKey='{0}', triggerFileNameParameter='{1}' at '{2}'", key, triggerFileNameParameter, DateTime.Now.ToLongTimeString());
            Console.WriteLine(msg);
        }
    }

    public class OtherJobInfoData
    {
        public List<OtherJobInfo> GetOtherJobs()
        {
            /* Stand-in for your database lookup of computers and execution times */
            List<OtherJobInfo> returnItems = new List<OtherJobInfo>();
            OtherJobInfo oji1 = new OtherJobInfo() { UniqueIdentifier = "ABC123", WithIntervalInSeconds = 5, FileName = @"C:\file1.xml" };
            OtherJobInfo oji2 = new OtherJobInfo() { UniqueIdentifier = "DEF234", WithIntervalInSeconds = 5, FileName = @"C:\file2.xml" };
            OtherJobInfo oji3 = new OtherJobInfo() { UniqueIdentifier = "GHI345", WithIntervalInSeconds = 5, FileName = @"C:\file3.xml" };
            returnItems.Add(oji1);
            returnItems.Add(oji2);
            returnItems.Add(oji3);
            return returnItems;
        }
    }

    public class OtherJobInfo
    {
        public string UniqueIdentifier { get; set; }
        public int WithIntervalInSeconds { get; set; }
        public string FileName { get; set; }
    }
}
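Since the interval-based trigger above is just a placeholder, here is one hedged way to fire at an exact time of day instead, inside ScheduleAHasParametersJob. This assumes you extend OtherJobInfo with a run-at time fetched from your database (that property is our addition, not in the code above):
// Fire once at an exact local time instead of after an interval.
DateTimeOffset runAt = DateBuilder.DateOf(14, 30, 0); // today at 14:30; pull hour/minute from your DB row
ITrigger exactTimeTrigger = TriggerBuilder.Create()
    .WithIdentity(info.UniqueIdentifier + "ExactTimeTrigger", GROUP_NAME)
    .UsingJobData("TriggerFileName", info.FileName)
    .StartAt(runAt)
    .WithSimpleSchedule(x => x.WithRepeatCount(0)) // run exactly once
    .Build();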
And here is some .xml to run the ScheduleOtherJobsJob:
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://quartznet.sourceforge.net/JobSchedulingData"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0">
<!-- This value wipes out existing jobs...be very careful with it being "true" -->
<processing-directives>
<overwrite-existing-data>true</overwrite-existing-data>
</processing-directives>
<schedule>
<job>
<name>ScheduleOtherJobsJobName</name>
<group>ScheduleOtherJobsJobGroupName</group>
<description>My Description</description>
<job-type>MyNamespace.ScheduleOtherJobsJob, MyAssembly</job-type>
<durable>true</durable>
<recover>false</recover>
<job-data-map>
</job-data-map>
</job>
<trigger>
<simple>
<name>ScheduleOtherJobsJobTriggerName</name>
<group>ScheduleOtherJobsJobTriggerGroup</group>
<description>My ScheduleOtherJobsJobTriggerName Description</description>
<job-name>ScheduleOtherJobsJobName</job-name>
<job-group>ScheduleOtherJobsJobGroupName</job-group>
<job-data-map>
</job-data-map>
<!--<start-time>1982-06-28T18:15:00.0Z</start-time>-->
<!--<end-time>2020-05-04T18:13:51.0Z</end-time>-->
<misfire-instruction>SmartPolicy</misfire-instruction>
<!-- repeat indefinitely every 5 seconds -->
<repeat-count>-1</repeat-count>
<repeat-interval>5000</repeat-interval>
</simple>
</trigger>
</schedule>
</job-scheduling-data>
EDIT : APPEND
Make sure your app.config (or web.config) is pointing to a quartz-configuration file. This is the top of mine......note, it should be used as a guide.....
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</configSections>
<quartz configSource="MyQuartzConfiguration.config" />
and then the MyQuartzConfiguration.config file needs some entries, the most important being the reference to "Quartz_Jobs_001.xml" (the name itself isn't important, but this file contains the definitions of your jobs and triggers):
<add key="quartz.plugin.jobInitializer.type" value="Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin" />
<add key="quartz.scheduler.instanceName" value="DefaultQuartzScheduler" />
<add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
<add key="quartz.threadPool.threadCount" value="10" />
<add key="quartz.threadPool.threadPriority" value="2" />
<add key="quartz.jobStore.misfireThreshold" value="60000" />
<add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
<add key="quartz.plugin.jobInitializer.fileNames" value="Quartz_Jobs_001.xml" />
<add key="quartz.plugin.jobInitializer.failOnFileNotFound" value="true" />
<add key="quartz.plugin.jobInitializer.scanInterval" value="120" />
Related
We have two models, a Shared model and an Individual model, each with its own context. The Shared model includes a listing of all individual items along with various metadata. Each individual item has a foreign key back to this listing.
When running migrations on the Individual model, we connect back to the Shared context during the "Seed" method to grab some of the listing's metadata for that item. There may be multiple instances of each type of context active at a time, so we sneak pieces of the desired Shared context's connection string into the Individual string by way of the "Application Name" attribute and dynamically generate the Shared one in code. It's a bit of a cheat, but it worked fine for a very long time with Entity Framework 6.0.1; however, a recent upgrade to 6.1.3 seems to have broken this, and I don't know why.
Here's some of our code:
public class IndividualConfiguration : DbMigrationsConfiguration<IndividualContext>
{
    protected override void Seed(IndividualContext context)
    {
        var store = context.Stores.FirstOrDefault();
        var storeListingId = store.StoreListingId;
        var sharedContext = GetSharedContextFromIndividualContext(context);
        var storeListing = sharedContext.StoreListings.FirstOrDefault(p => p.Id == storeListingId);
        /* Use listing metadata for things... */
    }

    public SharedContext GetSharedContextFromIndividualContext(IndividualContext context)
    {
        var connString = context.Database.Connection.ConnectionString;
        var builder = new SqlConnectionStringBuilder(connString);
        /* Derive shared context info from the Application Name, e.g. "SharedDatabase|localhost" */
        var parts = builder.ApplicationName.Split('|');
        var derivedCatalog = parts[0];
        var derivedDataSource = parts[1];
        builder.InitialCatalog = derivedCatalog;
        builder.DataSource = derivedDataSource;
        builder.ApplicationName = string.Empty;
        return new SharedContext(builder.ConnectionString);
    }
}
The migration command I enter into the Nuget Package Manager Console is:
update-database -configuration "IndividualConfiguration"
-ConnectionString "Data Source=localhost;Initial Catalog=IndividualDatabase; Integrated Security=True;
MultipleActiveResultSets=True; Application Name=SharedDatabase|localhost"
-ConnectionProviderName "System.Data.SqlClient" -force –verbose
Notice how "SharedDatabase|localhost" is used in the Application name. Those are the catalog and data source of the matching shared context. Looking at the results via the debugger, I can confirm that "builder.ConnectionString" has built the correct connection string, which would be:
"Data Source=localhost;Initial Catalog=SharedDatabase;Integrated Security=True;MultipleActiveResultSets=True;Application Name="
...and yet, when looking up the "StoreListings" DbSet from the returned sharedContext, the code errors out with "Invalid object name 'dbo.StoreListings'."
While troubleshooting this, I tried modifying the code that gets the SharedContext to see when things go wrong:
var ctx = new SharedContext(builder.ConnectionString);
var connBefore = ctx.Database.Connection.ConnectionString;
string latestMigration = ctx.Database.SqlQuery<string>("SELECT TOP 1 MigrationId FROM [__MigrationHistory] ORDER BY MigrationId DESC").FirstOrDefault();
var connAfter = ctx.Database.Connection.ConnectionString;
throw new Exception("CONN BEFORE: " + connBefore + Environment.NewLine + Environment.NewLine + "CONN AFTER : " + connAfter + Environment.NewLine + Environment.NewLine + "LAST MIGRATION: " + latestMigration);
...and ended up with:
CONN BEFORE: Data Source=localhost;Initial Catalog=SharedDatabase;Integrated Security=True;MultipleActiveResultSets=True;Application Name=
CONN AFTER : Data Source=localhost;Initial Catalog=IndividualDatabase; Integrated Security=True; MultipleActiveResultSets=True; Application Name=SharedDatabase|localhost
LAST MIGRATION: 201607081517538_LatestIndividualMigration
So for some reason, despite the shared context's connection string being correct, upon USING the connection it reverts back to the same individual string I used in the migration command.
Does anyone know why that second context is getting ignored? I can confirm that if I revert to Entity Framework 6.0.1, everything starts working again, so something must have changed from 6.0.1 to 6.1.3 that invalidates our code, but I have no idea what that could be.
We have one MongoDB replica set with two instances (127.0.0.1:27017 - primary, 127.0.0.1:27018 - secondary) and one arbiter (127.0.0.1:27019). When I step down the primary using rs.stepDown(60), it should become secondary, the secondary should become the primary instance, and all write operations should then go to the new primary. But after stepping down, I get the exception "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host."
App.Config has the following keys:
<appSettings>
  <add key="mongoServerIP" value="127.0.0.1" />
  <add key="mongoServerPort" value="27017" />
  <!--<add key="dbname" value="PSLRatingEngine"/>-->
  <add key="MongoDbDatabaseName" value="PSLRatingEngine" />
  <add key="testcollectionname" value="Test" />
</appSettings>
C# code:
using MongoDB.Driver;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ReplicaSetTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string mongoserverip = ConfigurationManager.AppSettings["mongoServerIP"].ToString();
            int port = Convert.ToInt32(ConfigurationManager.AppSettings["mongoServerPort"].ToString());
            string dbname = ConfigurationManager.AppSettings["MongoDbDatabaseName"].ToString();
            string customercollectionname = ConfigurationManager.AppSettings["testcollectionname"].ToString();
            try
            {
                MongoServerSettings settings = new MongoServerSettings();
                settings.Server = new MongoServerAddress(mongoserverip, port);
                // Create server object to communicate with our server
                MongoServer server = new MongoServer(settings);
                // Get our database instance to reach collections and data
                var db = server.GetDatabase(dbname);
                // Get user collection reference
                var collection = db.GetCollection(customercollectionname);
                int i = 0;
                while (true)
                {
                    // Employee is a simple POCO (EmpId, Name, Age, Salary, Dept) defined elsewhere
                    Employee emp = new Employee();
                    emp.EmpId = i + 1000;
                    Random r = new Random();
                    emp.Age = r.Next(20, 70);
                    emp.Salary = r.NextDouble() * 100000;
                    emp.Name = "emp" + i;
                    emp.Dept = "Engineering";
                    collection.Insert<Employee>(emp);
                    i++;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}
Please let me know how I can automatically switch from primary to secondary in C# code if anything happens to the primary instance of the replica set.
Suppose you send an acknowledged write to the primary, and then the primary is fried by an orbital ion cannon after it partially completes the write but before it finishes and acknowledges it. How should you "automatically switch from primary to secondary" in that case? What does it mean for that connection and operation?
Elections take a few seconds to happen, during which there is no primary to send writes to. What does the driver do while there's no primary? How can it maintain the same connections when those connections go to a node that isn't available anymore?
The point is, errors from the application during failover are expected, and your application needs to be able to deal with them. When the primary fails, MongoDB and the driver can't magically make everything work as if nothing happened in all cases. The driver will, after a new primary is elected, automatically switch to the new one, and then it will seem as if nothing happened.
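To make that concrete, here is a hedged sketch of the kind of handling the application needs, using the legacy 1.x driver as in the question. The replica set name "rs0", the retry count, and the back-off are made up; a real application may also need to handle WriteConcernException:
using System;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver;

// List all members so the driver can discover whichever node becomes primary.
var client = new MongoClient(
    "mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/?replicaSet=rs0");
var server = client.GetServer();
var collection = server.GetDatabase("PSLRatingEngine").GetCollection("Test");

const int maxAttempts = 5;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        collection.Insert(new BsonDocument { { "EmpId", 1000 }, { "Name", "emp0" } });
        break; // write succeeded
    }
    catch (MongoConnectionException)
    {
        if (attempt == maxAttempts) throw;
        Thread.Sleep(2000); // give the election time to pick a new primary, then retry
    }
}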
We have implemented recurring tasks using Quartz scheduler in our app. The user may schedule a recurring task starting at any time, even starting in the past. So for example, I can schedule a task to run monthly, starting on the 1st July, even though today is 17th July.
I would expect Quartz to run the first job immediately if its start time is in the past, along with any subsequent jobs I throw at it. However, today I encountered a case where the task didn't get triggered instantly. The task was scheduled for 15th July; today is 17th July. Nothing happened. After I restarted the server and the code that schedules all the tasks in the DB ran, it did get triggered. Why would that happen?
Code for scheduling the task is below. Note that to make it recurring, we just reschedule it with the same code for another date we calculate (but that part of the code doesn't matter for the issue at hand).
Edit: Only the first job gets triggered; any subsequent jobs are not. If I use startNow() instead of startAt(Date), it still doesn't work; it makes no difference.
JobDetail job = JobBuilder.newJob(ScheduledAppJob.class)
        .withIdentity(stringId)
        .build();

Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity(stringId)
        .startAt(date)
        .build();

try
{
    scheduler.scheduleJob(job, trigger);
    dateFormat = new SimpleDateFormat("dd MMM yyyy, HH:mm:ss");
    String recurringTime = dateFormat.format(date);
    logger.info("Scheduling recurring job for " + recurringTime);
}
catch (SchedulerException se)
{
    se.printStackTrace();
}
quartz.properties file, located under src/main (I tried it even in WEB-INF and WEB-INF/classes as suggested in the tutorial, but it made no difference); I even tried a threadCount of 20, still no difference:
org.quartz.scheduler.instanceName = AppScheduler
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
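One thing worth checking: a start time two days in the past is far beyond the default 60-second misfire threshold, so the trigger misfires, and whether it then fires immediately depends on the trigger's misfire instruction. A hedged sketch of making that explicit, shown in Quartz.NET syntax since the rest of this page is C#; the Java builder API is analogous (withMisfireHandlingInstructionFireNow()):
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity(stringId)
    .StartAt(date)
    .WithSimpleSchedule(x => x
        .WithRepeatCount(0)
        .WithMisfireHandlingInstructionFireNow()) // run immediately when the start time has already passed
    .Build();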
Seems to be working now. I haven't run into any more problems. It could have been a config issue, as I have since moved the config file to /src/main/resources.
Also try turning Quartz logging on in order to help with debugging:
log4j.logger.org.quartz=DEBUG
We also added a JobTriggerListener to help with the logs:
private static class JobTriggerListener implements TriggerListener
{
    private String name;

    public JobTriggerListener(String name)
    {
        this.name = name;
    }

    public String getName()
    {
        return name;
    }

    public void triggerComplete(Trigger trigger, JobExecutionContext context,
                                Trigger.CompletedExecutionInstruction triggerInstructionCode)
    {
    }

    public void triggerFired(Trigger trigger, JobExecutionContext context)
    {
    }

    public void triggerMisfired(Trigger trigger)
    {
        logger.warn("Trigger misfired for trigger: " + trigger.getKey());
        try
        {
            logger.info("Currently executing jobs: " + scheduler.getCurrentlyExecutingJobs());
        }
        catch (SchedulerException ex)
        {
            logger.error("Could not get currently executing jobs.", ex);
        }
    }

    public boolean vetoJobExecution(Trigger trigger, JobExecutionContext context)
    {
        return false;
    }
}
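Note that a listener only takes effect once it is registered with the scheduler, e.g. scheduler.getListenerManager().addTriggerListener(new JobTriggerListener("mainTriggerListener")); the listener name here is just an example.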
Check the updated config file:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.instanceName = MyAppScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 25
org.quartz.threadPool.threadPriority = 9
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
The first time I run my application, everything works normally.
I can register items with no problems.
But after closing the application and running it again, the following error occurs:
Full image in: http://i.stack.imgur.com/33Whm.png
Config
Ninject
public class RavenDBNinjectModule : NinjectModule
{
    public override void Load()
    {
        Bind<IDocumentStore>().ToMethod(context =>
        {
            NonAdminHttp.EnsureCanListenToWhenInNonAdminContext(8080);
            var documentStore = new EmbeddableDocumentStore { ConnectionStringName = "RavenDB", UseEmbeddedHttpServer = true };
            return documentStore.Initialize();
        }).InSingletonScope();

        Bind<IDocumentSession>().ToMethod(context => context.Kernel.Get<IDocumentStore>().OpenSession()).InRequestScope();
    }
}
Connection String
<connectionStrings>
  <add name="RavenDB" connectionString="DataDir = ~\App_Data" />
</connectionStrings>
Controller
private readonly IDocumentSession _documentSession;

public PluginsController(IDocumentSession documentSession)
{
    _documentSession = documentSession;
}
It is always the second time I run the app that the error occurs! Why?
Something is modifying the on-disk file after it is created.
Please check whether you have anything there that could cause this.
This may also indicate a logical or physical issue with your hard drive.
We require programmatic access to a SQL Server Express service as part of our application. Depending on what the user is trying to do, we may have to attach a database, detach a database, back one up, etc. Sometimes the service might not be started before we attempt these operations, so we need to ensure the service is started. Here is where we run into problems: apparently ServiceController.WaitForStatus(ServiceControllerStatus.Running) returns prematurely for SQL Server Express. What is really puzzling is that the master database seems to be immediately available, but not the other databases. Here is a console application to demonstrate what I am talking about:
namespace ServiceTest
{
    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;
    using System.ServiceProcess;
    using System.Threading;

    class Program
    {
        private static readonly ServiceController controller = new ServiceController("MSSQL$SQLEXPRESS");
        private static readonly Stopwatch stopWatch = new Stopwatch();

        static void Main(string[] args)
        {
            stopWatch.Start();
            EnsureStop();
            Start();
            OpenAndClose("master");
            EnsureStop();
            Start();
            OpenAndClose("AdventureWorksLT");
            Console.ReadLine();
        }

        private static void EnsureStop()
        {
            Console.WriteLine("EnsureStop enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            if (controller.Status != ServiceControllerStatus.Stopped)
            {
                controller.Stop();
                controller.WaitForStatus(ServiceControllerStatus.Stopped);
                Thread.Sleep(5000); // really, really make sure it stopped ... this has a problem too.
            }
            Console.WriteLine("EnsureStop exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }

        private static void Start()
        {
            Console.WriteLine("Start enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            controller.Start();
            controller.WaitForStatus(ServiceControllerStatus.Running);
            // Thread.Sleep(5000);
            Console.WriteLine("Start exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }

        private static void OpenAndClose(string database)
        {
            Console.WriteLine("OpenAndClose enter, {0:N0}", stopWatch.ElapsedMilliseconds);
            var connection = new SqlConnection(string.Format(@"Data Source=.\SQLEXPRESS;initial catalog={0};integrated security=SSPI", database));
            connection.Open();
            connection.Close();
            Console.WriteLine("OpenAndClose exit, {0:N0}", stopWatch.ElapsedMilliseconds);
        }
    }
}
On my machine, this consistently fails as written. Notice that the connection to "master" has no problems; only the connection to the other database fails. (You can reverse the order of the connections to verify this.) If you uncomment the Thread.Sleep in the Start() method, it works fine.
Obviously I want to avoid an arbitrary Thread.Sleep(). Besides the rank code smell, what arbitrary value would I put there? The only thing we can think of is to open dummy connections to our target database in a while loop, catching the SqlException thrown and trying again until it works. But I'm thinking there must be a more elegant solution out there to know when the service is really ready to be used. Any ideas?
EDIT: Based on feedback provided below, I added a check on the status of the database. However, it is still failing. It looks like even the state is not reliable. Here is the function I am calling before OpenAndClose(string):
private static void WaitForOnline(string database)
{
    Console.WriteLine("WaitForOnline start, {0:N0}", stopWatch.ElapsedMilliseconds);
    using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;initial catalog=master;integrated security=SSPI"))
    using (var command = connection.CreateCommand())
    {
        connection.Open();
        try
        {
            command.CommandText = "SELECT [state] FROM sys.databases WHERE [name] = @DatabaseName";
            command.Parameters.AddWithValue("@DatabaseName", database);
            byte databaseState = (byte)command.ExecuteScalar();
            Console.WriteLine("databaseState = {0}", databaseState);
            const byte OnlineState = 0; // sys.databases.[state]: 0 = ONLINE
            while (databaseState != OnlineState)
            {
                Thread.Sleep(500);
                databaseState = (byte)command.ExecuteScalar();
                Console.WriteLine("databaseState = {0}", databaseState);
            }
        }
        finally
        {
            connection.Close();
        }
    }
    Console.WriteLine("WaitForOnline exit, {0:N0}", stopWatch.ElapsedMilliseconds);
}
I found another discussion dealing with a similar problem. Apparently the solution is to check sys.database_files for the database in question. But that, of course, is a chicken-and-egg problem. Any other ideas?
Service start != database start.
The service is started when the SQL Server process is running and has responded to the SCM that it is alive. After that, the server starts bringing user databases online. As part of this process, it runs recovery on each database to ensure transactional consistency. Recovery of a database can last anywhere from microseconds to whole days, depending on the amount of log to be redone and the speed of the disk(s).
After the SCM reports that the service is running, you should connect to 'master' and check your database's status in sys.databases. Only when the status is ONLINE can you proceed to open it.
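Putting that advice into code, a hedged sketch (the method name is ours; state 0 means ONLINE in sys.databases, and we poll through master because it comes up first):
private static void WaitUntilOnline(string database)
{
    const string masterConnectionString =
        @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI";
    while (true)
    {
        try
        {
            using (var connection = new SqlConnection(masterConnectionString))
            using (var command = connection.CreateCommand())
            {
                connection.Open();
                command.CommandText = "SELECT [state] FROM sys.databases WHERE [name] = @name";
                command.Parameters.AddWithValue("@name", database);
                object state = command.ExecuteScalar();
                if (state != null && (byte)state == 0)
                {
                    return; // ONLINE: safe to open connections to the database
                }
            }
        }
        catch (SqlException)
        {
            // master itself may briefly refuse connections right after service start
        }
        Thread.Sleep(500); // small back-off before polling again
    }
}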