I have servers on Azure and I am using OMS to patch the servers, which is working fine. However, there are many machines that are not in Azure, such as laptops. Is it possible to patch the non-Azure VMs from OMS?
Could you please help?
Is it possible to patch the Non-Azure VMs from OMS?
Yes, it is possible. You need to install the OMS agent on your local VMs.
However, if you want to use Update Management, there are some prerequisites that need to be satisfied. You could refer to this link.
1. The solution supports performing update assessments against Windows Server 2008 and higher, and update deployments against Windows Server 2012 and higher. Server Core and Nano Server installation options are not supported.
2. Windows client operating systems are not supported.
3. Windows agents must either be configured to communicate with a Windows Server Update Services (WSUS) server or have access to Microsoft Update.
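If you are not sure whether an agent is pointed at a WSUS server, a quick way to check is to read the standard Windows Update policy registry values (these are the generic Windows Update policy keys, not OMS-specific, and they only exist if a WSUS policy has been applied):
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -Name WUServer
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' -Name UseWUServer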
For more information about connecting Windows computers to OMS, please refer to this link.
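For reference, the Microsoft Monitoring Agent (OMS agent) setup also supports a silent command-line install. A sketch, assuming you have downloaded MMASetup-AMD64.exe and substituted your own workspace ID and key for the placeholders (double-check the switches against the current documentation):
MMASetup-AMD64.exe /c /t:C:\OMSAgent
C:\OMSAgent\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=<your workspace id> OPINSIGHTS_WORKSPACE_KEY=<your workspace key> AcceptEndUserLicenseAgreement=1
The first line extracts the installer; the second runs it quietly and attaches the machine to your workspace.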
I have set up a Prefect backend server on a remote machine. I was able to connect local agents from several other machines to the server by modifying the config.toml in the .prefect folder:
[server]
endpoint = "http://server_ip:port/graphql"
[server.ui]
apollo_url = "http://server_ip:port/graphql"
As it stands, I can create a local agent on each machine, register flows and run them on the respective machines. Now I would like to have a central computer where I can develop and register my flows.
Unfortunately, when I run a flow on Machine B, registered on Machine A, I get a "Module not Found" error message. I have read that the error comes from machines only looking for the flows in their local storage.
Without using Git, GCS, etc., is it possible to use, for example, a NAS where all flows are stored and which all machines can use to access the flows?
If so, how must flows, agents, and storage be configured? Unfortunately, I have not found any good documentation on this.
Many examples use Docker agents and run into similar problems, or use remote storage directly.
There is no native NAS storage interface available in the core library, but we provide recipes and guidance on how you may solve the ModuleNotFoundError - check out this Discourse wiki page, which dives into exactly that.
I was able to find a solution to my problem. The prerequisite is shared storage (e.g. a NAS) that is accessible on all machines under the same path. The flows are stored in this shared storage as .py files. The flows and the local agents used do not need any special preparation.
I simply registered my flows with
prefect register --project "PREFECT_PROJECT_NAME" --path "PATH_TO_.py"
in the CLI.
I was able to deploy all my flows from machine A and execute or schedule them on any other machine.
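To make this concrete, here is a minimal sketch of what one of those .py files can look like (Prefect 1.x API; /mnt/nas/flows is an assumed mount point that must be identical on every machine):

# /mnt/nas/flows/nas_example.py -- hypothetical shared path, mounted identically everywhere
from prefect import Flow, task
from prefect.storage import Local

@task
def say_hello():
    print("Hello from a NAS-stored flow!")

with Flow("nas-example") as flow:
    say_hello()

# stored_as_script=True makes agents load the flow from this script path at run time
flow.storage = Local(path="/mnt/nas/flows/nas_example.py", stored_as_script=True)

Registering it with prefect register --project "PREFECT_PROJECT_NAME" --path "/mnt/nas/flows/nas_example.py" then lets any agent whose machine mounts the share at that same path pick up its runs.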
I have an AWS RDS instance (PostgreSQL) that is inside a private network - only accessible via a VPN and a bastion host.
I am able to establish a connection from Power BI Desktop to the PostgreSQL RDS instance by creating an SSH tunnel from my laptop (localhost) to the bastion host and using the ODBC driver. With this approach, all the data is imported into Power BI Desktop (Import mode).
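For reference, the tunnel is a standard SSH local port forward; the hostnames below are placeholders for my actual endpoints:
ssh -N -L 5432:my-rds-instance.xxxxxxxxxx.eu-west-1.rds.amazonaws.com:5432 ec2-user@bastion-host
Power BI Desktop then connects to localhost:5432.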
But our requirement is to connect through DirectQuery so that the data is refreshed in real time and reports are generated dynamically, which I am not able to achieve.
I entered the database credentials into Power BI Desktop, but it is not working correctly; I get a timeout error.
I must use DirectQuery; I can't use Import.
Any help is appreciated.
The exact error that you are getting would help get to the root cause of the issue. However, a few basic troubleshooting steps that I'd suggest are:
Ensure that you have a compatible version of the Npgsql provider installed on your machine, such as Npgsql 4.0.9. At times, the latest version causes issues.
Ensure that you remove the semicolon at the end of the query.
Once you get the query running successfully in the desktop version and publish it to the web version, the visuals will not be able to connect to the database unless an on-premises data gateway is set up. More details on setting up a data gateway to automatically refresh the dataset for the Power BI web version, once you are successfully able to query directly, are here: Refresh AWS RDS database from Power BI Web.
Problem: System Center Endpoint Protection keeps deploying itself from SCCM to the computers and servers after I manually delete it, even though the SCCM server was completely removed recently. AFAIK the deployment tasks weren't deleted; only the services were stopped and the SCCM-related programs uninstalled. Also, the server (hostname: SCCM_SERVER) was shut down.
If I open one of the servers and go to Configuration Manager, I see that the assigned management point is still SCCM_SERVER.
Question: Not having delved into SCCM administration before, how is this happening? Did it create Windows services on each machine? Could there be an additional SCCM administration server running somewhere else? I checked GPOs/scheduled tasks - nothing. How does the deployment work? And how do I stop it?
Also, if additional information related to the software/hardware/network is required, please ask.
Regards,
Sai
Have you checked the log file EndpointProtectionAgent.log? Maybe it can give us some clues.
If you want to decommission SCCM, you could uninstall the SCCM client.
The correct way is to edit the Client Settings node in the Administration workspace first.
Modify the device setting Install Endpoint Protection client on client computers. Note that choosing False or No for this setting does not uninstall the Endpoint Protection client. To uninstall the Endpoint Protection client, set the Manage Endpoint Protection client on client computers client setting to False or No, and then deploy a package and program to uninstall the Endpoint Protection client.
About client settings in System Center Configuration Manager
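If the goal is simply to get the agents off the machines by hand, the following commands may help as a starting point (run as administrator on each client; the Endpoint Protection path can vary by version, so treat this as a sketch):
"%ProgramFiles%\Microsoft Security Client\Setup.exe" /x /s
C:\Windows\ccmsetup\ccmsetup.exe /uninstall
The first silently uninstalls the Endpoint Protection client; the second removes the Configuration Manager client itself, which stops the machine from trying to reach the old management point.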
Is there a simpler way of deploying Windows services from TFS than using a PowerShell script, run on the TFS server, which:
Stops the existing Windows service on the remote server
Copies the files to a shared folder on the remote server (Copy-Item)
Starts the Windows service on the remote server
If not, can any other continuous integration/deployment tool do this better?
As the TFS server is in a different domain from the remote server, can we share a folder for a specific user? I tried to run the PowerShell script as a user from the target domain, but of course, it is not recognized as a valid user on the TFS server.
Lastly, is there any difference between deploying to a hosted remote server and deploying to the cloud?
Thanks,
In the task-based build system (TFS 2015+), you can try installing the Windows Service Release Tasks extension, which contains tasks to start and stop Windows services as well as change their startup type.
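If you do stay with a script, one common pattern is to drive the three steps over PowerShell remoting rather than a shared folder, which also sidesteps the cross-domain share problem by passing explicit credentials. A minimal sketch (RemoteServer, MyService, and the paths are placeholders; WinRM must be enabled on the target and cross-domain remoting allowed):

# Prompt for credentials valid in the target domain
$session = New-PSSession -ComputerName RemoteServer -Credential (Get-Credential)
Invoke-Command -Session $session { Stop-Service -Name MyService }
# Copy-Item -ToSession requires PowerShell 5.0+ on both ends
Copy-Item -Path .\bin\Release\* -Destination 'C:\Services\MyService' -ToSession $session -Recurse -Force
Invoke-Command -Session $session { Start-Service -Name MyService }
Remove-PSSession $session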
I have downloaded the OpsHub tool for the migration, trying to migrate from on-premises TFS to Visual Studio Team Services. I am getting an error on the migration summary page with the error message:
"Unable to communicate with the required process opshubtfsservice.Because probably it is not running. Restart application & try again"
I verified that the OpsHub service is running as Local Service. I tried restarting it and also tried with Network Service, a local account, etc., but no luck.
Can you please help me with this?
Thank you.
It seems that your machine is behind a proxy and all inbound and outbound traffic is being routed through it (including local traffic).
You will have to bypass local addresses from the proxy as well as enable OVSMU to communicate through the proxy.
Please refer to C:\Program Files\OpsHub Visual Studio Migration Utility\Other_Resources\Resources\ProxyUtility.zip to configure OVSMU to utilize your proxy. There should be a user guide document that lists the steps. Keep the bypass-local-addresses value at its default.
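As a quick sanity check, independent of the OVSMU utility, you can also see whether Windows has a machine-wide WinHTTP proxy configured:
netsh winhttp show proxy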