How to suppress verbose error messages in PowerShell?

Please observe the following transcript:
PS C:\Dayforce\854\db\SQL\ClientDB> $job = Start-Job { C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal }
PS C:\Dayforce\854\db\SQL\ClientDB> C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal
Using timeout of 30 minutes per step.
Analyzing target DB state ...
Parsing upgrade steps ...
Found 1 new upgrade step(s). Upgrading the database ...
Dropping 2 runtime code items ...
Failed to apply DFVersion 20000
20000.sql: Divide by zero error encountered. (8134, 1)
Total Step SQL duration = 00:00:00.000
Total Step Round Trip duration = 00:00:00.035, average = 00:00:00.035
Total duration = 00:00:00.357
PS C:\Dayforce\854\db\SQL\ClientDB> Receive-Job $job.Id
Using timeout of 30 minutes per step.
Analyzing target DB state ...
Parsing upgrade steps ...
Found 1 new upgrade step(s). Upgrading the database ...
Dropping 2 runtime code items ...
Failed to apply DFVersion 20000
+ CategoryInfo : NotSpecified: (Failed to apply DFVersion 20000:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
+ PSComputerName : localhost
20000.sql: Divide by zero error encountered. (8134, 1)
Total Step SQL duration = 00:00:00.000
Total Step Round Trip duration = 00:00:00.031, average = 00:00:00.031
Total duration = 00:00:00.339
PS C:\Dayforce\854\db\SQL\ClientDB>
All I am doing is running the same executable: once directly, and once from within a background job. When I receive the output of the background job, extra error information is emitted. How can I get rid of it?
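For reference, one commonly suggested workaround (a sketch, not a definitive fix): the decorated lines are the executable's stderr output being wrapped as error records by the job infrastructure, so merging stderr into stdout inside the script block and flattening everything to strings keeps the message text but drops the NativeCommandError decoration. The paths below are the ones from the transcript; exact redirection behaviour can differ between PowerShell versions:
# Merge stderr into stdout inside the job and convert each item to a plain
# string, so stderr lines are not surfaced as ErrorRecord objects on receive.
$job = Start-Job {
    & C:\Dayforce\854\db\tools\dbupgrade\DbUpgrade.exe -d C:\Dayforce\854\db\SQL\ClientDB -db 854_dfadminportal 2>&1 |
        ForEach-Object { $_.ToString() }
}
Wait-Job $job | Out-Null
Receive-Job $job   # plain text only, no CategoryInfo / FullyQualifiedErrorId noise
# Alternatively, keep the job unchanged and discard its error stream when
# receiving (note this drops the "Failed to apply ..." line entirely):
# Receive-Job $job 2>$null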

Related

AWS Glue job failing with OOM exception when changing column names

I have an ETL job where I load some data from S3 into a dynamic frame, relationalize it, and iterate through the dynamic frames returned. I want to query the result in Athena later, so I want to change the column names by replacing '.' with '_' and lower-casing them. To do this transformation, I convert the DynamicFrame into a Spark dataframe and have been doing it that way. I've also seen in another SO question that there is a reported problem with the AWS Glue rename field transform, so I've stayed away from that.
I've tried a couple of things, including setting a load group size limit of 50 MB, repartitioning the dataframe, using both dataframe.schema.names and dataframe.columns, using reduce instead of loops, and using Spark SQL to change the names, and nothing has worked. I'm fairly certain it's this transformation that's failing, because I've put some print statements in and the print right after the completion of this transformation never shows up. I used a UDF at one point, but that also failed. I've tried the actual transformation using df.toDF(new_column_names) and df.withColumnRenamed(), but it never gets that far because I've never seen it get past retrieving the column names. Here's the code I've been using. I've been changing the actual name transformation as described above, but the rest of it has stayed pretty much the same.
I've seen some people try to use spark.executor.memory, spark.driver.memory, spark.executor.memoryOverhead and spark.driver.memoryOverhead. I've used those and set them to the maximum AWS Glue will allow, but to no avail.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import explode, col, lower, trim, regexp_replace
import copy
import json
import boto3
import botocore
import time
# ========================================================
# UTILITY FUNCTIONS
# ========================================================
def lower_and_pythonize(s=None):
    if s is not None:
        return s.replace('.', '_').lower()
    else:
        return None
# pyspark implementation of renaming
# exprs = [
# regexp_replace(lower(trim(col(c))),'\.' , '_').alias(c) if t == "string" else col(c)
# for (c, t) in data_frame.dtypes
# ]
# ========================================================
# END UTILITY FUNCTIONS
# ========================================================
## #params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
#my params
bucket_name = '<my-s3-bucket>' # name of the bucket. do not include 's3://' thats added later
output_key = '<my-output-path>' # key where all of the output is saved
input_keys = ["<root-directory-i'm using>"] # highest level key that holds all of the desired data
s3_exclusions = "[\"*.orc\"]" # list of strings to exclude. Documentation: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-s3
s3_exclusions = s3_exclusions.replace('\n', '')
dfc_root_table_name = 'root' # name of the root table generated in the relationalize process
input_paths = ['s3://' + bucket_name + '/' + x for x in input_keys] # turn input keys into s3 paths
output_connection_opts = {"path": "s3://" + bucket_name + "/" + output_key} # dict of options. Documentation link found above the write_dynamic_frame.from_options line
s3_client = boto3.client('s3', 'us-east-1') # s3 client used for writing to s3
s3_resource = boto3.resource('s3', 'us-east-1') # s3 resource used for checking if key exists
group_mb = 50 # NOTE: 75 has proven to be too much when running on all of the april data
group_size = str(group_mb * 1024 * 1024)
input_connection_opts = {'paths': input_paths,
                         'groupFiles': 'inPartition',
                         'groupSize': group_size,
                         'recurse': True,
                         'exclusions': s3_exclusions} # dict of options. Documentation link found above the create_dynamic_frame_from_options line
print(sc._conf.get('spark.executor.cores'))
num_paritions = int(sc._conf.get('spark.executor.cores')) * 4
print('Loading all json files into DynamicFrame...')
loading_time = time.time()
df = glueContext.create_dynamic_frame_from_options(connection_type='s3', connection_options=input_connection_opts, format='json')
print('Done. Time to complete: {}s'.format(time.time() - loading_time))
# using the list of known null fields (at least on small sample size) remove them
#df = df.drop_fields(drop_paths)
# drop any remaining null fields. The above covers known problems that this step doesn't fix
print('Dropping null fields...')
dropping_time = time.time()
df_without_null = DropNullFields.apply(frame=df, transformation_ctx='df_without_null')
print('Done. Time to complete: {}s'.format(time.time() - dropping_time))
df = None
print('Relationalizing dynamic frame...')
relationalizing_time = time.time()
dfc = Relationalize.apply(frame=df_without_null, name=dfc_root_table_name, info="RELATIONALIZE", transformation_ctx='dfc', stageThreshold=3)
print('Done. Time to complete: {}s'.format(time.time() - relationalizing_time))
keys = dfc.keys()
keys.sort(key=lambda s: len(s))
print('Writting all dynamic frames to s3...')
writting_time = time.time()
for key in keys:
    good_key = lower_and_pythonize(s=key)
    data_frame = dfc.select(key).toDF()
    # lowercase all the names and remove '.'
    print('Removing . and _ from names for {} frame...'.format(key))
    df_fix_names_time = time.time()
    print('Repartitioning data frame...')
    data_frame = data_frame.repartition(num_paritions)  # repartition returns a new DataFrame; assign it or the call has no effect
    print('Done.')
    #
    print('Changing names...')
    for old_name in data_frame.schema.names:
        data_frame = data_frame.withColumnRenamed(old_name, old_name.replace('.','_').lower())
    print('Done.')
    #
    df_now = DynamicFrame.fromDF(dataframe=data_frame, glue_ctx=glueContext, name='df_now')
    print('Done. Time to complete: {}'.format(time.time() - df_fix_names_time))
    # if a conflict of types appears, make it 2 columns
    # https://docs.aws.amazon.com/glue/latest/dg/built-in-transforms.html
    print('Fixing any type conficts for {} frame...'.format(key))
    df_resolve_time = time.time()
    resolved = ResolveChoice.apply(frame = df_now, choice = 'make_cols', transformation_ctx = 'resolved')
    print('Done. Time to complete: {}'.format(time.time() - df_resolve_time))
    # check if key exists in s3. if not make one
    out_connect = copy.deepcopy(output_connection_opts)
    out_connect['path'] = out_connect['path'] + '/' + str(good_key)
    try:
        s3_resource.Object(bucket_name, output_key + '/' + good_key + '/').load()
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == '404' or 'NoSuchKey' in e.response['Error']['Code']:
            # object doesn't exist
            s3_client.put_object(Bucket=bucket_name, Key=output_key+'/'+good_key + '/')
        else:
            print(e)
    ## https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html
    print('Writing {} frame to S3...'.format(key))
    df_writing_time = time.time()
    datasink4 = glueContext.write_dynamic_frame.from_options(frame = df_now, connection_type = "s3", connection_options = out_connect, format = "orc", transformation_ctx = "datasink4")
    out_connect = None
    datasink4 = None
    print('Done. Time to complete: {}'.format(time.time() - df_writing_time))
print('Done. Time to complete: {}s'.format(time.time() - writting_time))
job.commit()
Here is the error I'm getting:
19/06/07 16:33:36 DEBUG Client:
client token: N/A
diagnostics: Application application_1559921043869_0001 failed 1 times due to AM Container for appattempt_1559921043869_0001_000001 exited with exitCode: -104
For more detailed output, check application tracking page:http://ip-172-32-9-38.ec2.internal:8088/cluster/app/application_1559921043869_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9630,containerID=container_1559921043869_0001_01_000001] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 8.8 GB of 27.5 GB virtual memory used. Killing container.
Dump of the process-tree for container_1559921043869_0001_01_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 9630 9628 9630 9630 (bash) 0 0 115822592 675 /bin/bash -c LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native /usr/lib/jvm/java-openjdk/bin/java -server -Xmx5120m -Djava.io.tmpdir=/mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' '-Djavax.net.ssl.trustStore=ExternalAndAWSTrustStore.jks' '-Djavax.net.ssl.trustStoreType=JKS' '-Djavax.net.ssl.trustStorePassword=amazon' '-DRDS_ROOT_CERT_PATH=rds-combined-ca-bundle.pem' '-DREDSHIFT_ROOT_CERT_PATH=redshift-ssl-ca-cert.pem' '-DRDS_TRUSTSTORE_URL=file:RDSTrustStore.jks' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file runscript.py --arg 'script_2019-06-07-15-29-50.py' --arg '--JOB_NAME' --arg 'tss-json-to-orc' --arg '--JOB_ID' --arg 'j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe' --arg '--JOB_RUN_ID' --arg 'jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233' --arg '--job-bookmark-option' --arg 'job-bookmark-disable' --arg '--TempDir' --arg 's3://aws-glue-temporary-059866946490-us-east-1/zmcgrath' --properties-file /mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001/stderr
|- 9677 9648 9630 9630 (python) 12352 2628 1418354688 261364 python runscript.py script_2019-06-07-15-29-50.py --JOB_NAME tss-json-to-orc --JOB_ID j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe --JOB_RUN_ID jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233 --job-bookmark-option job-bookmark-disable --TempDir s3://aws-glue-temporary-059866946490-us-east-1/zmcgrath
|- 9648 9630 9630 9630 (java) 265906 3083 7916974080 1207439 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx5120m -Djava.io.tmpdir=/mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Djavax.net.ssl.trustStore=ExternalAndAWSTrustStore.jks -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=amazon -DRDS_ROOT_CERT_PATH=rds-combined-ca-bundle.pem -DREDSHIFT_ROOT_CERT_PATH=redshift-ssl-ca-cert.pem -DRDS_TRUSTSTORE_URL=file:RDSTrustStore.jks -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1559921043869_0001/container_1559921043869_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file runscript.py --arg script_2019-06-07-15-29-50.py --arg --JOB_NAME --arg tss-json-to-orc --arg --JOB_ID --arg j_f9f7363e5d8afa20784bc83d7821493f481a78352641ad2165f8f68b88c8e5fe --arg --JOB_RUN_ID --arg jr_a77087792dd74231be1f68c1eda2ed33200126b8952c5b1420cb6684759cf233 --arg --job-bookmark-option --arg job-bookmark-disable --arg --TempDir --arg s3://aws-glue-temporary-059866946490-us-east-1/zmcgrath --properties-file /mnt/yarn/usercache/root/appcache/application_1559921043869_0001/container_1559921043869_0001_01_000001/__spark_conf__/__spark_conf__.properties
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1559921462650
final status: FAILED
tracking URL: http://ip-172-32-9-38.ec2.internal:8088/cluster/app/application_1559921043869_0001
user: root
Here are the log contents from the job
LogType:stdout
Log Upload Time:Fri Jun 07 16:33:36 +0000 2019
LogLength:487
Log Contents:
4
Loading all json files into DynamicFrame...
Done. Time to complete: 59.5056920052s
Dropping null fields...
null_fields [<some fields that were dropped>]
Done. Time to complete: 529.95293808s
Relationalizing dynamic frame...
Done. Time to complete: 2773.11689401s
Writting all dynamic frames to s3...
Removing . and _ from names for root frame...
Repartitioning data frame...
Done.
Changing names...
End of LogType:stdout
As I said earlier, the Done. print after changing the names never appears in the logs. I've seen plenty of people getting the same error I'm seeing and have tried a fair few of their suggestions with no success. Any help you can provide would be much appreciated. Let me know if you need any more information. Thanks
Edit
Prabhakar's comment reminded me that I have tried the memory worker type in AWS Glue and it still failed. As stated above, I have also tried raising the memoryOverhead from 5 to 12, but to no avail. Neither of these made the job complete successfully.
Update
I put in the following code for the column name change, instead of the code above, to make debugging easier:
print('Changing names...')
name_counter = 0
for old_name in data_frame.schema.names:
    print('Name number {}. name being changed: {}'.format(name_counter, old_name))
    data_frame = data_frame.withColumnRenamed(old_name, old_name.replace('.','_').lower())
    name_counter += 1
print('Done.')
And I got the following output
Removing . and _ from names for root frame...
Repartitioning data frame...
Done.
Changing names...
End of LogType:stdout
So it must be a problem with the data_frame.schema.names part. Could it be this line with my loop through all of the DynamicFrames? Am I looping through the DynamicFrames from the relationalize transformation correctly?
Update 2
Glue recently added more verbose logs and I found this
ERROR YarnClusterScheduler: Lost executor 396 on ip-172-32-78-221.ec2.internal: Container killed by YARN for exceeding memory limits. 5.5 GB of 5.5 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
This happens for more than just this executor too; it looks like almost all of them.
I can try to increase the executor memory overhead, but I would like to know why getting the column names results in an OOM error. I wouldn't have thought that something that trivial would take up that much memory.
Update
I attempted to run the job with both spark.driver.memoryOverhead=7g and spark.yarn.executor.memoryOverhead=7g and I again got an OOM error
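For what it's worth, here is a sketch (not an answer from the thread) of the renaming step done as a single select with aliased columns, using the names from the script above; a chain of withColumnRenamed calls adds one projection per column to the Spark plan, whereas one select keeps it flat, and the backticks are needed for column names that contain dots. Whether this avoids the OOM depends on the data:
from pyspark.sql.functions import col
from awsglue.dynamicframe import DynamicFrame

def rename_all_columns(frame):
    """Replace '.' with '_' and lower-case every column name in a single pass."""
    aliased = [col("`{}`".format(c)).alias(c.replace('.', '_').lower())
               for c in frame.columns]
    return frame.select(*aliased)

# usage inside the loop from the question:
# data_frame = rename_all_columns(data_frame)
# df_now = DynamicFrame.fromDF(dataframe=data_frame, glue_ctx=glueContext, name='df_now')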

Oppo Realme 1 Job Scheduler min interval for Periodic Work is 1 hr

Currently I am using WorkManager 1.0.0-alpha05. I set up a periodic work request using the code below.
On an Oppo Realme 1 (Android version 8.1.0, ColorOS version V5.0), when the interval is below 1 hr the job executes at 1 hr; when the interval is greater than 1 hr the job executes at the exact time.
Please let me know if any log or other information is required.
Code for scheduling the periodic job:
PeriodicWorkRequest uploadWork = new PeriodicWorkRequest
        .Builder(LocationUpdatesJobService.class, interval, TimeUnit.MILLISECONDS)
        .addTag(Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC)
        .setConstraints(constraints)
        .build();
WorkManager.getInstance().enqueueUniquePeriodicWork(
        Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC,
        ExistingPeriodicWorkPolicy.REPLACE, uploadWork);
On all other devices the periodic work request interval is respected; on the Oppo Realme 1 the work executes at 1 hr.
Oppo Realme 1: Interval 15 Min
I debugged the job scheduler using the command below:
adb shell dumpsys jobscheduler
JOB #u0a249/18: cc2fc59 com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
u0a249 tag=job/com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
Source: uid=u0a249 user=0 pkg=com.cygneto.field_sales
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h0m0s0ms flex=+21m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Backoff: policy=1 initial=+30s0ms
Has early constraint
Has late constraint
Required constraints: TIMING_DELAY DEADLINE
Satisfied constraints: APP_NOT_IDLE DEVICE_NOT_DOZING
Unsatisfied constraints: TIMING_DELAY DEADLINE
Doze whitelisted: true
Tracking: TIME
Enqueue time: -9m4s617ms
Run time: earliest=+29m55s383ms, latest=+50m55s383ms
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Oppo Realme 1: Interval 1hr 10 Min
Log:
JobInfo:
Service: com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+1h10m0s0ms flex=+1h10m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Extras: mParcelledData.dataSize=180
Doze whitelisted: true
Tracking: TIME
Enqueue time: -4m19s846ms
Run time: earliest=+1h5m39s833ms, latest=+2h15m39s833ms
Last successful run: 2018-07-25 17:01:23
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
Other Device :
Log :
JobInfo:
Service:com.cygneto.field_sales/androidx.work.impl.background.systemjob.SystemJobService
PERIODIC: interval=+15m0s0ms flex=+15m0s0ms
Requires: charging=false batteryNotLow=false deviceIdle=false
Tracking: TIME
Enqueue time: -29s237ms
Run time: earliest=+14m30s690ms, latest=+29m30s690ms
Last successful run: 2018-07-25 17:29:19
Ready: false (job=false user=true !pending=true !active=true !backingup=true comp=true)
I also tried different libraries and found the same behaviour with JobScheduler and Android-Job: the job period length is 15 min but the job executes at 1 hr. When I tried Firebase JobDispatcher, the job executed at the correct 15 min interval.
I debugged JobScheduler and Android-Job using the command below:
adb shell dumpsys jobscheduler
Job Scheduler:
Interval : 15 Min
Output : 1 hr
Log:
JOB #u0a266/1: a0dd846 com.jobscheduler_periodic/com.periodic.JobSchedulerService
u0a266 tag=*job*/com.jobscheduler_periodic/com.periodic.JobSchedulerService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.periodic.JobSchedulerService
PERIODIC: interval=+1h0m0s0ms flex=+15m0s0ms
Android-Job:
Interval : 15 Min
Output : 1 hr:
Log:
JOB #u0a266/3: 10c0d65 com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
u0a266 tag=*job*/com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
Source: uid=u0a266 user=0 pkg=com.jobscheduler_periodic
JobInfo:
Service: com.jobscheduler_periodic/com.evernote.android.job.v21.PlatformJobService
PERIODIC: interval=+1h0m0s0ms flex=+5m0s0ms
Firebase Job Dispatcher:
I debugged Firebase JobDispatcher using the command below:
adb shell "dumpsys activity service GcmService | grep com.jobscheduler_periodic"
Interval : 15 Min
Output : 15 min
Log:
u0|com.jobscheduler_periodic: 1
(scheduled) com.jobscheduler_periodic/com.firebase.jobdispatcher.GooglePlayReceiver{u=0 tag="MyJobService" trigger=window{s
tart=720s,end=900s,earliest=46s,latest=226s} requirements=[NET_CONNECTED,CHARGING] attributes=[RECURRING] scheduled=-673s last_
run=N/A jid=N/A status=PENDING retries=0 client_lib=FIREBASE_JOB_DISPATCHER-1}
This happens to be an OEM bug. Unfortunately, it is very hard to work around these kinds of bugs in a battery-efficient way. If you want a period of 15 mins, I suggest the following workaround:
Use a OneTimeWorkRequest instead of a periodic work request, and upon execution of each work request, schedule the next one from inside the worker with an initialDelay of 15 mins. That will essentially give you what you want.
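A minimal sketch of that workaround, assuming a hypothetical LocationUpdatesWorker in place of the worker from the question and the alpha-era WorkManager API (method names may differ in later releases):
import java.util.concurrent.TimeUnit;

import androidx.work.OneTimeWorkRequest;
import androidx.work.WorkManager;
import androidx.work.Worker;

public class LocationUpdatesWorker extends Worker {

    @Override
    public Result doWork() {
        // ... perform the actual background work here ...

        // Re-enqueue ourselves with a 15 minute delay instead of relying on
        // PeriodicWorkRequest, which this OEM clamps to a 1 hour interval.
        OneTimeWorkRequest next = new OneTimeWorkRequest.Builder(LocationUpdatesWorker.class)
                .setInitialDelay(15, TimeUnit.MINUTES)
                .addTag(Constants.Location.TAG_BACKGROUND_LOCATION_PERIODIC)
                .build();
        WorkManager.getInstance().enqueue(next);

        return Result.SUCCESS;
    }
}
// Kick off the chain once, e.g. from your Application class:
// WorkManager.getInstance().enqueue(
//         new OneTimeWorkRequest.Builder(LocationUpdatesWorker.class).build());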

How to format output to DD, HH, MM

I'm trying to work out a device's uptime using PowerShell. My code is as follows:
$wmi = Get-WmiObject -Class Win32_OperatingSystem
$upTime = $wmi.ConvertToDateTime($wmi.LocalDateTime) - $wmi.ConvertToDateTime($wmi.LastBootUpTime)
When I call $upTime, it returns the following:
Days : 0
Hours : 1
Minutes : 8
Seconds : 5
Milliseconds : 311
Ticks : 40853110010
TotalDays : 0.0472836921412037
TotalHours : 1.13480861138889
TotalMinutes : 68.0885166833333
TotalSeconds : 4085.311001
TotalMilliseconds : 4085311.001
While I can see the uptime, I need to format the output as D:dd, H:hh, M:mm because I will be feeding it automatically into a monitoring system, but after much searching on here, Google, etc., I can't see how to achieve this. Can anyone suggest how to go about it?
Try:
"D:{0:dd}, H:{0:hh}, M:{0:mm}" -f $upTime
alternatively, if you like to escape lots of things:
$upTime.ToString('\D\:dd\,\ \H\:hh\,\ \M\:mm')
Both will format days, hours and minutes as two digits, padding with a leading zero when the value is below 10, e.g.:
"D:01, H:02, M:23"
You can format it in normal .NET style
$UptimeStr = $Uptime.ToString("ddd:dd, H:hh, mmm:mm")
or
$UptimeStr = '{0:ddd:dd, H:hh, mmm:mm}' -f $Uptime
If the above is not quite what you wanted, custom TimeSpan format strings are explained on MSDN.

Execute Process and Redirect Output

I run a process using VBScript. The process usually takes 5-10 minutes to finish, and if I run it standalone it produces output intermittently while running.
I want to achieve the same behaviour when running the process from VBScript. Can someone please tell me how to do that?
Set Process = objSWbemServices.Get("Win32_Process")
result = Process.Create(command, , , intProcessID)
waitTillProcessFinishes objSWbemServices,intProcessID
REM How to get the output when the process has finished or while it is running
You don't have access to the output of processes started via WMI. What you could do is redirect the output to a file:
result = Process.Create("cmd /c " & command & " >C:\out.txt", , , intProcessID)
and read the file periodically:
Set fso = CreateObject("Scripting.FileSystemObject")
linesRead = 0
qry = "SELECT * FROM Win32_Process WHERE ProcessID=" & intProcessID

Do While objSWbemServices.ExecQuery(qry).Count > 0
  Set f = fso.OpenTextFile("C:\out.txt")
  'skip the lines that were already echoed on a previous pass
  For i = 1 To linesRead : f.SkipLine : Next
  Do Until f.AtEndOfStream
    WScript.Echo f.ReadLine
    linesRead = linesRead + 1
  Loop
  f.Close
  WScript.Sleep 100
Loop
The above is assuming the process is running on the local host. If it's running on a remote host you need to read the file from a UNC path.
For local processes the Exec method would be an alternative, since it gives you access to the process' StdOut:
Set sh = CreateObject("WScript.Shell")
Set p = sh.Exec(command)

Do While p.Status = 0
  Do Until p.StdOut.AtEndOfStream
    WScript.Echo p.StdOut.ReadLine
  Loop
  WScript.Sleep 100
Loop

WScript.Echo p.StdOut.ReadAll

Cron Expression - Run every hour after a start time

I want to run a cron job every hour after a certain start time. Currently my cron job expression is
cronExpression = seconds + " " + minutes + " " + hours + "/1" + " * * ? *";
(seconds, minutes, hours are passed in by the user selection)
The job starts at the right time and runs every hour until midnight, but then stops until the start hour on the next day and resumes. How do I get the job to run continuously and not stop at midnight?
I understand I can change the expression to
cronExpression = seconds + " " + minutes + " * * * ? *";
but then it will not take the start time into account; it will just run every hour.
Thanks in advance,
Rich
Do you mean you want the job to start at the given time and then run once hourly forever? If so, I don't think a cron expression is the right approach.
If you're using a scheduler it should be straightforward to start the job and run forever at a given interval. For example, here's a snippet from the Quartz scheduler docs for JobBuilder:
// static imports for the Quartz 2.x fluent API
import static org.quartz.DateBuilder.futureDate;
import static org.quartz.DateBuilder.IntervalUnit.MINUTES;
import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;
import static org.quartz.TriggerKey.triggerKey;

JobDetail job = newJob(MyJob.class)
    .withIdentity("myJob")
    .build();

Trigger trigger = newTrigger()
    .withIdentity(triggerKey("myTrigger", "myTriggerGroup"))
    .withSchedule(simpleSchedule()
        .withIntervalInHours(1)
        .repeatForever())
    .startAt(futureDate(10, MINUTES))
    .build();

scheduler.scheduleJob(job, trigger);
Run a wrapper every hour which runs your job if the current time is after your start time.
For example,
case $(date +%H) in
09 |1[0-6] ) pingcheck ;;
esac
would run pingcheck (whatever that is :-) between 09:00 and 16:59.
Use your own expression
cronExpression = seconds + " " + minutes + " * * * ? *";
together with the startAt(<your start time>) method on the trigger, specifying the start time of the trigger. The start time implicitly tells Quartz when the trigger comes into effect.
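To make that concrete, a rough sketch using the Quartz fluent API (cronExpression, startTime, job and scheduler are placeholders for values built elsewhere in your code, so treat the wiring as illustrative):
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

// cronExpression: e.g. seconds + " " + minutes + " * * * ? *" from the user's selection
// startTime: a java.util.Date for the first run the user picked
Trigger trigger = newTrigger()
        .withIdentity("hourlyAfterStart")
        .withSchedule(cronSchedule(cronExpression))
        .startAt(startTime)   // the trigger only becomes effective from this point on
        .build();

scheduler.scheduleJob(job, trigger);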