I have installed and configured the check_mk tool, and configured its agents on both Windows and Linux servers.
Now I can monitor things like CPU utilization. But I want to check whether JBoss is running on the server or not. If not, it should trigger an email notification to a particular email address.
How can I achieve this?
You can write custom PowerShell scripts on Windows and drop them into the local check_mk folder, usually located at "C:\Program Files (x86)\check_mk\local". Once you've dropped the file into this folder, you'll need to go into the Check_mk dashboard and add the new monitor to the server's inventory.
Below is a basic service monitor. It will allow you to monitor any service that you'd like. States: 0=OK, 1=Warning, 2=Critical, 3=Unknown. Once the check reports CRITICAL, the email alert itself is handled by Check_mk's normal notification rules rather than by the script.
# Check_mk local check output format: <state> <service_name> <perfdata> <status text>
$serviceName = '<any service name>'
$status = Get-Service -Name $serviceName
$jbossStatus = $status.Status
if ($jbossStatus -eq 'Running') {
    Write-Host "0 jboss_service - Status:" $jbossStatus
}
else {
    Write-Host "2 jboss_service - Status:" $jbossStatus
}
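If the service might not be installed at all on some hosts, a slightly more defensive variant can report UNKNOWN instead of throwing an error. This is only a sketch; the service name below is a hypothetical placeholder, so substitute your actual JBoss service name:

$serviceName = 'JBossEAP'   # hypothetical service name - replace with yours
$service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($null -eq $service) {
    # service not installed on this host
    Write-Host "3 jboss_service - Status: service '$serviceName' not found"
}
elseif ($service.Status -eq 'Running') {
    Write-Host "0 jboss_service - Status:" $service.Status
}
else {
    Write-Host "2 jboss_service - Status:" $service.Status
}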
I've created a ps1 file that runs on our UTIL server for all workstations on our domain; it checks if the computer is online, skips offline computers, checks BitLocker status, formats the results, and writes them to a CSV file.
The script essentially uses manage-bde -cn $Computer -status C: and works great on most machines. However, there are a few machines that are confirmed on the network and online that do not reply with the status.
I ran the same command manually in PowerShell on the UTIL server against the affected machines and get the result "ERROR: An error occurred while connecting to the BitLocker management interface. Check that you have administrative rights on the computer and the computer name is correct." If I connect to the computer and check the status on the computer itself, it displays results no problem.
I'm logged into the UTIL server as an admin running powershell as admin. My question is, what would cause some computers to return results successfully and others to have an issue connecting to the Bitlocker management interface? Has anyone seen this before?
What process is executing your script when you're not in an interactive session? A scheduled task, a service? What security context does that process run in?
Based on some other threads I have seen on this, you should check these items (a quick PowerShell sketch for checking a couple of them remotely follows the list):
Not running the command as an admin
Not having a compatible TPM
The TPM being disabled in the BIOS (it is disabled by default on many computers)
The TPM or BitLocker services not being started.
A TPM reporting as a 1.2 TPM when in fact it is a 1.1 TPM.
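A rough way to check a couple of these items remotely without logging on to each machine. This is only a sketch, assuming WMI/RPC is reachable and that $Computer holds the target name:

$Computer = 'PC-NAME'   # hypothetical target name
# Are the BitLocker (BDESVC) and TPM Base Services (TBS) services present and running?
Get-Service -ComputerName $Computer -Name BDESVC, TBS -ErrorAction SilentlyContinue | Select-Object Name, Status
# Is a TPM present and enabled? Win32_Tpm lives in a non-default WMI namespace.
Get-WmiObject -ComputerName $Computer -Namespace 'root\cimv2\Security\MicrosoftTpm' -Class Win32_Tpm |
    Select-Object IsEnabled_InitialValue, IsActivated_InitialValue, SpecVersion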
I had the same issue on my network.
I solved it by setting up one rule in the Windows Firewall of the remote clients. The rule is intended to allow WMI (Windows Management Instrumentation) access to the remote machine (see this link for further info: https://social.technet.microsoft.com/Forums/lync/en-US/a2f2abb3-35f6-4c1a-beee-d09f311b4507/group-policy-to-allow-wmi-access-to-remote-machine?forum=winservergen )
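For reference, the built-in WMI rule group can be enabled from an elevated prompt on the client (or pushed out via GPO as in the linked thread); this is just the one-off, per-machine version:

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes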
Regards
Andrea
I am trying to publish my product on the Azure Marketplace.
I am using Windows Server 2012 R2 Datacenter to create a VM from portal.azure.com. I followed the steps of running sysprep, generalizing it and then creating containers.
After that, when we run Save-AzureRmVMImage to capture the image, I get "The capture action is only supported on a virtual machine with blob based disks. Please use the image resource APIs to create an image from a managed virtual machine."
So I am not able to get the image URL in the container. Is there anything I am doing wrong?
Please guide!
A managed disk is different from an unmanaged disk. We can use PowerShell to create a managed image, but we will not find this new image in our storage account: managed disks are managed by Azure, and we can't manage them directly.
To create a managed image of a VM, we can follow these steps:
Run sysprep to generalize the Windows VM. (This process makes the original virtual machine unusable after it's captured. Prior to capturing an image of an Azure virtual machine, it is recommended that the target virtual machine be backed up.)
$vmName = "myVM"
$rgName = "myResourceGroup"
$location = "EastUS"
$imageName = "myImage"
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
Set-AzureRmVm -ResourceGroupName $rgName -Name $vmName -Generalized
$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rgName
$image = New-AzureRmImageConfig -Location $location -SourceVirtualMachineId $vm.ID
New-AzureRmImage -Image $image -ImageName $imageName -ResourceGroupName $rgName
After it completes, we can find this image under Images in the Azure portal.
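The portal screenshot is not reproduced here; if you prefer, the image can also be confirmed from PowerShell, reusing the variables from the script above:

Get-AzureRmImage -ResourceGroupName $rgName -ImageName $imageName | Select-Object Name, Location, Id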
For more information about creating a managed image, please refer to this link.
By the way, we should use Azure PowerShell 3.7.0 or later.
PS C:\Users> Get-Module -ListAvailable -Name Azure -Refresh
Directory: C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 3.7.0 Azure {Get-AzureAutomationCertificate, Get-AzureAutomationConnec...
@Jason Ye: I was able to solve my problem and publish. I stopped using PowerShell and used only the Azure portal to do all the steps. The MS documentation provided is fragmented, and new and old versions are combined even though some are deprecated. It is written in a way that assumes the reader has already experienced doing these things.
So after a lot of work, I eventually came up with these steps:
Create a new VM with Windows Server 2012 R2 Datacenter under Compute.
Follow the creation steps, but use an unmanaged disk so that you control the storage account. Keep track of all the names you are giving, specifically the VM, the storage account name and the username/password. This authentication will be used in step 5.
Once created, the VHD URL is stored in the storage account you created, which can also be seen in the Disks section under the VM (a 127 GiB size is displayed).
Log in to the VM by using the 'Connect' button on the portal and RDP with the credentials you provided while creating the VM in steps 2/3. If you don't see that button, you will need to add an RDP port in the Inbound rules.
Once logged in, check for and run required Windows updates (leave optional ones), install your software and do whatever is needed for your software.
Once this is done, run sysprep with Generalize and Shutdown options.
Sysprep shuts down the virtual machine, which changes the status of the virtual machine in the Azure portal to Stopped (Deallocated).
Create an image by clicking Images -> Add. Use a new resource group and, in the Storage Blob field, browse for the storage account created in step 3.
Your storage account will have a container with the default name 'vhds'. Your .vhd is inside this container. If you don't see your account, you are either checking with the wrong name or you ended up creating a managed disk.
Once the image is created, create a VM from the newly created image. In the console, if you click on this image, there will be an option 'Create VM' (use the existing resource group created above for the image). This time it's a managed VM, as the VM is created from the image (mentioned in the 'i' section).
You can log in to this newly created (second) VM and check that your data is there in a folder on the C: drive. If you don't see the 'Connect' button, add an RDP port in the Inbound rules.
Stop the VM from step 4/5 by clicking the Stop button in the portal.
Download and install Microsoft Storage Explorer.
Search for the storage account created in steps 2/3. Expand it and go to the lowest level of the hierarchy. Click on it and, in the right-hand pane, you will see your vhd. Right-click it and choose 'Get Shared Access Signature'.
Select the 'Generalize...' check box. Enter a start date one day before the current date and an expiration date one month from the current date (> 7 days from current). Copy the signature URL and save it (a PowerShell alternative to Storage Explorer is sketched after these steps).
Now, go to the publishing portal https://cloudpartner.azure.com and create an offer. Fill in the necessary fields. In the SKUs tab, you have to add a new VM image, where Disk Version can be anything in the number.number.number format and the OS VHD URL will be the signature URL copied above.
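As an alternative to Storage Explorer for generating the read-only SAS URL, the Azure.Storage cmdlets can do the same from PowerShell. This is only a sketch; the storage account name, key, container and blob names below are placeholders for the ones created in the steps above:

$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account key>"
New-AzureStorageBlobSASToken -Container "vhds" -Blob "myvm-osdisk.vhd" -Permission r `
    -StartTime (Get-Date).AddDays(-1) -ExpiryTime (Get-Date).AddMonths(1) -Context $ctx -FullUri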
In Atihska's steps, after sysprep the machine will be shut down and the status will show as Stopped, but not Stopped (deallocated). For deallocation, you need to run the PowerShell commands below.
Stop-AzureRMVM -ResourceGroupName ResourceGroup -Name VMName
Set-AzureRMVM -ResourceGroupName ResourceGroup -Name VMName -Generalized
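To confirm the VM really ended up deallocated rather than just stopped, you can check its power state afterwards (a sketch, using the same placeholder names as above):

(Get-AzureRmVM -ResourceGroupName ResourceGroup -Name VMName -Status).Statuses |
    Where-Object { $_.Code -like 'PowerState/*' }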
I have a 2-node cluster on Windows Server 2012 R2, with version 4.0 of PowerShell installed. One of the roles in the cluster is 'Message Queuing', named 'TESTMSMQ', which has about 20 private queues installed.
In a fresh PowerShell console, I set the environment variable _CLUSTER_NETWORK_NAME_ to be 'TESTMSMQ', using the command
$env:_CLUSTER_NETWORK_NAME_='TESTMSMQ'
When I run Get-MsmqQueue -Name *, I get nothing back. But if I run compmgmt.msc I can see all the queues listed, and if I load the System.Messaging assembly into the PowerShell session, I can see the queues.
[System.Reflection.Assembly]::LoadWithPartialName("System.Messaging")
[System.Messaging.MessageQueue]::Exists('.\private$\MyTestQueue')
returns True
Does anybody have an idea why the MSMQ cmdlets cannot find the queues, but the .NET assembly can, and the Computer Management snap-in sees the queues as well?
Just to be clear, there are no queues defined on the local node or physical nodes. "private$\MyTestQueue" is only defined on the MSMQ installed role "TESTMSMQ".
So, if Exists() is returning True using a localhost name, then I would assume that the environment is the MSMQ role, not the physical node.
A little late, but I couldn't find the answer to this question anywhere. Maybe others are looking too...
Finally pieced together some ideas from a couple of places and got this working on Windows Server 2016.
On one of the cluster nodes:
$env:computername = "MsmqHostName"
Get-MsmqQueue | Format-Table -Property QueueName,MessageCount
Remotely, from outside the cluster:
Invoke-Command -ScriptBlock {$env:computername = "msmqHostName";Get-MsmqQueue | Format-Table -Property QueueName,MessageCount } -ComputerName ClusterNodeName
Sounds like the classic clustered MSMQ problem.
Clustering MSMQ applications – rule #1
You don't specify where you are running your apps. For example, if "Exists('.\private$\MyTestQueue')" returns True, that means the MSMQ service is running locally to your test. So if you ran the test from a command prompt on a node, you are talking to MSMQ on the node, not the cluster. You would need to run the test from a clustered command prompt instead to use the clustered MSMQ service.
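If you just need to enumerate the clustered queues from a node, the System.Messaging API already loaded in the question can also be pointed at the cluster network name directly. A sketch, using the role name from the question; which MSMQ service actually answers still depends on where you run it:

[System.Reflection.Assembly]::LoadWithPartialName("System.Messaging") | Out-Null
# enumerate the private queues on the clustered MSMQ resource rather than on the physical node
[System.Messaging.MessageQueue]::GetPrivateQueuesByMachine('TESTMSMQ') |
    ForEach-Object { $_.QueueName }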
I want to use wsadmin or some other command to list out all the servers in a profile, but looking through the IBM Knowledge Center it's not so straightforward to find.
Can anybody tell what command can be used? I am on a windows 7 system.
You can use the serverStatus command-line tool. It will list all the servers in the profile, with their current status (stopped/started), like below:
C:\IBM\WebSphere\AppServer85\profiles\AppSrv1\bin>serverStatus.bat -all
ADMU0116I: Tool information is being logged in file
C:\IBM\WebSphere\AppServer85\profiles\AppSrv1\logs\serverStatus.log
ADMU0128I: Starting tool with the AppSrv1 profile
ADMU0503I: Retrieving server status for all servers
ADMU0505I: Servers found in configuration:
ADMU0506I: Server name: server1
ADMU0509I: The Application Server "server1" cannot be reached. It appears to be
stopped.
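If you specifically want to do it with wsadmin, the servers defined in the profile's configuration can be listed with a Jython one-liner. A sketch, assuming the same profile path as in the output above; -conntype NONE runs against the local configuration, so no server needs to be started:

cd C:\IBM\WebSphere\AppServer85\profiles\AppSrv1\bin
.\wsadmin.bat -lang jython -conntype NONE -c "print AdminConfig.list('Server')"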
I use a PowerShell script, triggered by TeamCity, to spin up new Windows Server VMs. Currently, when the machine is up and running, I need to log in via the VMM console to make a couple of configuration changes (enable file sharing, network discovery, MSDeploy and remoting over WinRM) in order to allow other TeamCity jobs to deploy enterprise apps to the VM.
I haven't found any way to run my config setup scripts on the new VM other than by using the GUI console in VMM. For VM hosts there is Invoke-SCScriptCommand, but this doesn't work for the virtual machines themselves. Am I missing something, or do I have to alter the template that my VMs are built from in order to get the required config on the VMs?
One way you could achieve what you require is by putting all your config changes in a PowerShell script that sits inside the VM template and is added to the VM's startup scripts.
The script's first step checks whether the config changes have already been applied by looking for some kind of flag (i.e. a file such as c:\deployed.flag), and its last step creates that flag.
if (Test-Path c:\deployed.flag) {
    ## deployment script has run already, do nothing
}
else {
    ## your config changing code block
    New-Item c:\deployed.flag -ItemType File
}
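For reference, the config-changing block mentioned in the question could look roughly like this. A sketch only, assuming the built-in firewall rule group names on Windows Server; adjust for your environment:

## enable PowerShell remoting over WinRM
Enable-PSRemoting -Force
## enable file sharing and network discovery via the built-in firewall rule groups
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes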
In VMware/PowerCLI you can run Invoke-VMScript, which executes a command directly on a VM via VMware Tools, but alas Hyper-V Integration Services don't have such functionality.