Azure DevOps deploys Analysis Services with a different IP address every time - PowerShell

I am deploying a Tabular Model via CI/CD, using this approach; however, every time it deploys using a different IP address. I have to disable the firewall rule in Azure Analysis Services for the deployment. Is there any workaround for this, or a specific IP range to whitelist?

For Self-hosted agent:
You can configure one specific self-hosted agent for deployment purposes, so that you only need to add the IP address of that agent machine to the allow list.
For Microsoft-hosted agent:
If you prefer to use cloud-hosted agents, you need to dynamically modify the firewall rule in each continuous deployment.
(Every run gets a new hosted agent instance, and therefore a different IP address.)
You can add an Azure PowerShell task right before the deploy task to configure the firewall rule of your Analysis Services server. For guidance on writing the script, you can refer to AddDevOpsIpToAAS.ps1.
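As a rough, hedged sketch (not the exact script from that repo), the task could do something like the following, assuming the Az.AnalysisServices module is available; 'MyResourceGroup' and 'myaasserver' are placeholder names, and api.ipify.org is just one of several services that echo the caller's public IP:

# Discover this build agent's current public IP (assumes outbound internet access)
$agentIp = (Invoke-RestMethod -Uri 'https://api.ipify.org').Trim()
# Append a rule for this agent to the server's existing firewall rules
$server = Get-AzAnalysisServicesServer -ResourceGroupName 'MyResourceGroup' -Name 'myaasserver'
$rules = @($server.FirewallConfig.FirewallRules)
$rules += New-AzAnalysisServicesFirewallRule -FirewallRuleName 'DevOpsAgent' -RangeStart $agentIp -RangeEnd $agentIp
$config = New-AzAnalysisServicesFirewallConfig -FirewallRule $rules
Set-AzAnalysisServicesServer -ResourceGroupName 'MyResourceGroup' -Name 'myaasserver' -FirewallConfig $config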
Here's a detailed blog from Arthur about "How to add your DevOps IP to Azure Analysis Services Firewall"; it should be helpful for you. (Thanks to Arthur!)

Related

Configure network access to MongoDB cluster from Azure App Service

I'm trying to configure network access for a MongoDB cluster to allow connections from an Azure App Service. I found the outbound IP addresses of my App Service in the Azure portal (see the Azure docs) and entered them in the IP access list according to the MongoDB Atlas docs, appending "/32" to each address to allow only a single host (CIDR notation).
However, when trying to connect on App Service start, I get an error indicating that I should check the IP whitelist of the MongoDB cluster.
This actually does seem to be the problem, because adding 0.0.0.0/0 (allow access from anywhere) makes the error go away.
What could be the problem here?
I have double-checked the outbound IP addresses of the Azure App Service and the IP access list of the MongoDB cluster.
What I did matches the answer to another question, so I think I'm missing something...
Note that /32 is not a valid CIDR range in Azure: the minimum size of a subnet in an Azure VNet is /29.
Even that restricts your range to only 3 usable IPs (not the 8 you would expect), because Azure reserves the first four addresses and the last one of every subnet for its own use.
Please also consider that if you are running the MongoDB cluster inside a private network, and it is not exposed externally via a network appliance (such as an Application Gateway, Load Balancer, Front Door, or Traffic Manager), you will need to enable VNet Integration on the Azure Web App side.
If this is your case, navigate to your app in the portal and open the "Networking" blade.
There you can add VNet Integration, but keep in mind that in this case the minimum size of your subnet is /28 (you cannot use a smaller subnet).
It turned out I had only added the IP addresses listed in the "Outbound IP Addresses" property of my Azure App Service. After also adding the IP addresses listed in the "Additional Outbound IP Addresses" property, the App Service connects to the MongoDB cluster successfully (a sketch for listing both sets follows at the end of this answer).
This is somewhat surprising to me because the documentation on when outbound IPs change says that the "...set of outbound IP addresses for your app changes when you perform one of the following actions:
Delete an app and recreate it in a different resource group (deployment unit may change).
Delete the last app in a resource group and region combination and recreate it (deployment unit may change).
Scale your app between the lower tiers (Basic, Standard, and Premium), the PremiumV2, and the PremiumV3 tier (IP addresses may be added to or subtracted from the set).
..."
None of the above actions happened. 🙄
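For completeness, a minimal hedged sketch for listing both sets with the Az PowerShell module, using placeholder names 'MyResourceGroup' and 'MyApp'; PossibleOutboundIpAddresses is the superset that includes the "additional" addresses:

# Addresses the app is currently using for outbound traffic
$app = Get-AzWebApp -ResourceGroupName 'MyResourceGroup' -Name 'MyApp'
$app.OutboundIpAddresses -split ','
# The full set to whitelist, including "Additional Outbound IP Addresses"
$app.PossibleOutboundIpAddresses -split ','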

Wazuh Agent not connecting

I have two questions. My immediate problem is that the Wazuh agent never connects to the Wazuh manager.
A. That makes me think: while installing the Wazuh manager, where do we provide the WAZUH MANAGER IP?
B. I registered Windows and RHEL machines as agents, but none of them are able to connect - all agents are in NEVER CONNECTED status.
From Windows, this is the error (I am using port 1515 and TCP):
ERROR: (1216): Unable to connect to 'xx.xxx.105.75': 'A connection
attempt failed because the connected party did not properly respond
after a period of time, or established connection failed because
connected host has failed to respond.'
I even tried changing 1515 to 1519 from the Kibana Wazuh app, and added my agent's IP to the whitelist; not sure if that matters.
Answering your questions according to the current Wazuh version as of today, v3.13.1:
[A] While installing Wazuh Manager, where do we provide WAZUH MANAGER IP?
When installing the manager, you don't have to configure any IP unless you are setting up cluster mode. The WAZUH MANAGER IP needs to be configured on the agents.
After installing the agent, you have to:
1. Add the manager's IP address in the agent's configuration file /var/ossec/etc/ossec.conf (the full block is shown after these steps):
<address>MANAGER_IP</address>
2. Register the agent with the manager. The simplest method is:
/var/ossec/bin/agent-auth -m MANAGER_IP
3. Restart the Wazuh agent:
systemctl restart wazuh-agent
Once these steps are applied, you should have your agent connected and reporting to the manager.
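For reference, the relevant block of the agent's ossec.conf looks roughly like this (a hedged sketch for Wazuh 3.x; the port and protocol lines are optional and default to 1514 and udp):

<client>
  <server>
    <address>MANAGER_IP</address>
    <port>1514</port>
    <protocol>udp</protocol>
  </server>
</client>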
[B] I registered Windows and RHEL machines as agents but none of them are able to connect - all agents are in NEVER CONNECTED status.
After performing the steps mentioned above, the agents should be connected to the manager. If not, a troubleshooting process must be followed:
Check that the agent has successfully registered with the manager. You can run /var/ossec/bin/agent_control -l on the manager and see whether the agent is registered.
Check that you have a connection to the manager from the agents. By default, Wazuh uses port 1515/TCP for registration and 1514/UDP for communication. Check that you can reach the manager through these ports (check firewall rules, etc.); a quick probe is sketched after this list.
To avoid possible problems, check that your manager's version is greater than or equal to the agent's version.
Check whether there are errors in the /var/ossec/logs/ossec.log file.
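As a quick check from the Windows agent, a hedged PowerShell sketch (Test-NetConnection exercises TCP only, so it covers the 1515/TCP registration port but cannot probe 1514/UDP):

# Replace MANAGER_IP with your manager's address; TcpTestSucceeded should be True
Test-NetConnection -ComputerName 'MANAGER_IP' -Port 1515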
I hope this information is helpful to you.
Best regards.
A. You will have to edit the ossec.conf file and make sure the MANAGER_IP address is in the right place.
B. After you complete section A, and if ports 1514/1515 are open, you will see your agent on the manager. Do not forget to register your agent with the manager.
I think there are two steps:
1. Edit ossec.conf on the agent and change 'MANAGER_IP' to the real manager IP. This is very important, and it's easy to forget.
2. Restart the agent.

Azure VNET - Accessing VNET Resources from WebApp

I have deployed a VNet on Azure. I have also set up a Point-to-Site connection following this tutorial. I need three things on this network:
1. A VM instance for MongoDB in Docker.
2. A Web App API (Express.js), which should treat (1) as a local address.
3. A connection from my local machine to the VNet to manage my VM instance.
I managed to deploy (1).
I successfully connected my machine (3) to the VPN and can access (1) on the local IP 10.1.0.5:PORT using a MongoDB management tool.
For the Web App API (2), I have followed all the necessary steps mentioned here, and the Azure portal shows that the app is connected properly.
According to this video, I should be able to connect to the VM (1). However, I cannot access the local resources from the Web App API (2).
My Connection String for WebApp API(2) is of the following format:
mongodb://[username]:[password]@10.1.0.5:[port]/[db-name]
What can be the possible reason?
Since this seems to be specific to your setup, I would recommend reaching out to support so the support team can do a thorough investigation.
-- Anavi N [MSFT]

How can I connect virtual networks from different deployment models using PowerShell?

I have googled and searched, but to no avail. I have a VNet that was created in the Azure classic portal, and I want to add a gateway subnet and a gateway in PowerShell to connect it to an existing IaaS v2 VNet. How can I do that in PowerShell? Please let me know if there are any resources on how to do that.
Thank you
If you want to connect VNets in different deployment models, the most important requirement is that the address ranges of the VNets do not overlap with each other, or with any of the ranges for other connections that the gateways may be connected to.
Also, make sure your PC has the latest PowerShell cmdlets installed, both the Service Management (ASM) and the Resource Manager (ARM) sets. Then you can create the VPN connection between the VNets in the different deployment models.
For more information about how to connect VNets in different deployment models, please refer to the link below; a condensed sketch follows it:
https://azure.microsoft.com/en-us/documentation/articles/vpn-gateway-connect-different-deployment-models-powershell/
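Heavily condensed, the ARM half of that walkthrough looks roughly like this; a hedged sketch with the AzureRM cmdlets of that era and placeholder names/addresses ('MyRG', 'ArmVNet', 'x.x.x.x' for the classic gateway's public IP, which must already exist). The classic (ASM) side, creating its own gateway and defining the ARM VNet as a local site, still has to be done with the Service Management cmdlets as shown in the article:

# Add the required GatewaySubnet to the ARM VNet and save the change
$vnet = Get-AzureRmVirtualNetwork -Name 'ArmVNet' -ResourceGroupName 'MyRG'
Add-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet -AddressPrefix '10.1.255.0/27'
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
# Create a route-based VPN gateway on that subnet (this step can take a long time)
$vnet = Get-AzureRmVirtualNetwork -Name 'ArmVNet' -ResourceGroupName 'MyRG'
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$pip = New-AzureRmPublicIpAddress -Name 'GwIp' -ResourceGroupName 'MyRG' -Location 'westus' -AllocationMethod Dynamic
$ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name 'gwipconf' -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
New-AzureRmVirtualNetworkGateway -Name 'ArmGw' -ResourceGroupName 'MyRG' -Location 'westus' -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased
# Describe the classic VNet (its gateway's public IP and address space) as a local network, then connect
$local = New-AzureRmLocalNetworkGateway -Name 'ClassicVNet' -ResourceGroupName 'MyRG' -Location 'westus' -GatewayIpAddress 'x.x.x.x' -AddressPrefix '10.0.0.0/16'
$gw = Get-AzureRmVirtualNetworkGateway -Name 'ArmGw' -ResourceGroupName 'MyRG'
New-AzureRmVirtualNetworkGatewayConnection -Name 'Arm2Classic' -ResourceGroupName 'MyRG' -Location 'westus' -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $local -ConnectionType IPsec -SharedKey 'abc123'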
If you still have questions, welcome to post back here. Thanks.

How to make Windows DNS and WINS settings persist in an Azure VM?

I have a domain controller set up in an Azure VM, and a couple of other servers also set up as VMs. When I set up the server VMs, I configured DNS and WINS to point to the IP address of the DC and joined them to the domain. However, these settings don't survive a shutdown (where the VM is deallocated). When the VM is started back up, DNS and WINS are empty, and domain authentication does not work.
I read that I should provision new VMs via PowerShell cmdlets, specifically setting up domain joining. I tried that, and maybe I got something wrong, but it didn't work: the newly provisioned VM was not joined to the domain and did not have DNS/WINS set to point to the domain controller.
In any event, my question is: is there any way to reconfigure an existing VM to retain network settings through a shutdown, or is my only option to figure out how to provision a brand-new VM married to the domain controller and start from scratch?
Thanks!
You should never use static network configuration on your Azure VM, neither for IP addresses nor for DNS settings! What I recommend instead is a long story you can read here; it is tested, validated, and proven to be effective. A short extract follows:
Set up at least two subnets. Leave one solely for the DNS server (and the AD/DC, if it happens to be the same machine), and put all the rest of the machines in the other subnet. That way you will have a 100% predictable IP address for the DNS server machine. With that in mind, configure DNS for the virtual network via the portal or via PowerShell, explicitly setting the DNS server for that virtual network to the IP address you know it will have.
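With today's Az module, a hedged sketch of the PowerShell route (placeholder names; the classic-portal era this answer dates from used the network configuration file instead):

# Point the whole virtual network at the predictable DNS server at creation time
New-AzVirtualNetwork -Name 'MyVNet' -ResourceGroupName 'MyRG' -Location 'westus' -AddressPrefix '10.0.0.0/16' -DnsServer '10.0.1.4'
# Or update an existing VNet's DHCP options and write the change back
$vnet = Get-AzVirtualNetwork -Name 'MyVNet' -ResourceGroupName 'MyRG'
$vnet.DhcpOptions.DnsServers = '10.0.1.4'
Set-AzVirtualNetwork -VirtualNetwork $vnet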
And please, never manually change network configuration settings inside an Azure VM; doing so is a path to failure.
The above method resolves the DNS issue. Now, for WINS: I don't think you can configure WINS via the virtual network settings. So, if your VM really loses its WINS configuration, you can create a small PowerShell script that runs locally on each VM at boot to configure the WINS settings. You can either make the script more generic by looking up the DHCP-assigned DNS server and using the same IP address for WINS, or just hard-code it, because you know what the DNS server's IP address will be.
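A minimal sketch of such a boot-time script, assuming the DC/WINS role sits at the predictable address from the previous step (10.0.1.4 is a placeholder); it could be registered as a startup scheduled task:

# Set the primary WINS server on every IP-enabled adapter
$winsServer = '10.0.1.4'  # placeholder: the predictable DNS/DC address
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = TRUE' |
    ForEach-Object { $_.SetWINSServer($winsServer, '') | Out-Null }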
Anton presents a clever and perfectly workable solution, but I wanted to understand what exactly I was doing wrong, because Microsoft guidance suggests that it should be perfectly possible to set up and maintain an Active Directory domain in the Azure cloud without putting the DC into its own subnet.
After a lot of trial and error (mostly error), I finally figured it out. This is not well documented, so hopefully this will help someone:
In Windows Azure, cloud service is another term for application, or a set of components that scale together. A cloud service is assigned a single DNS name and a single external IP address. In the context of virtual machines, you typically have a 1:1 correspondence between a cloud service and a virtual machine. You only add additional virtual machines to an existing cloud service when you want Azure to automatically load balance and distribute requests among the VMs inside that cloud service, treating them as if they were one.
This brings me to my mistake. Not fully understanding the above, I was attempting to add a new worker virtual machine to the cloud service in which I had set up my domain controller. That is not a supported configuration. Once I understood that, and properly configured a new VM in its own cloud service with the domain controller as its DNS server, everything worked perfectly.