Can an Azure DevOps proxy server be used to facilitate artifact and pipeline releases for a DMZ environment (e.g. containers)?
For example,
LAN Azure DevOps > Azure DevOps Proxy > Containers in DMZ
When I've looked at the proxy documentation, it seems to relate only to artifact caching.
Any help would be appreciated.
Thanks
You can use a proxy server for that: open firewall rules for the particular port, and use telnet to check that the connection is working.
We used to do the same with a proxy server between the artifact server and the Azure DevOps server by opening firewall rules for the specific port.
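If telnet isn't available, a minimal PowerShell check works the same way (the proxy host name and port below are placeholders for your environment):

    # Hypothetical proxy host and port; confirms the firewall rule lets traffic through.
    Test-NetConnection -ComputerName proxy.corp.local -Port 8080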
Related
We are getting the below error in an Azure DevOps pipeline release via a self-hosted agent when the Azure web app is on a private network. No error is seen when the web app is public.
Error: Error: Failed to deploy web package to App Service. Error: tunneling socket could not be established, statusCode=503
Making the Azure web app private triggers the error; moving it back to public makes it go away.
It seems the self-hosted agent cannot connect to the Azure App Service; this looks like a network issue.
The agent needs a way to connect to the App Service directly. To ensure connectivity is OK, make sure the self-hosted agent is not blocked by NSG rules or App Service networking Access Restrictions; just whitelist the agent machine in your rules.
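As a sketch, assuming the App Service (or its private endpoint) sits behind an NSG and the agent's outbound IP is known, whitelisting it via the Azure CLI might look like this (all names and the IP address are placeholders):

    # Hypothetical resource names and IP; lets the self-hosted agent in over HTTPS.
    az network nsg rule create --resource-group my-rg --nsg-name my-nsg `
        --name allow-devops-agent --priority 100 --direction Inbound `
        --access Allow --protocol Tcp `
        --source-address-prefixes 203.0.113.10/32 --destination-port-ranges 443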
The task uses the Kudu REST API to deploy the application. Check the following App Service networking Access Restrictions to allow deployment from a specific agent (a CLI sketch follows the list):
Make sure the REST site “xxx.scm.azurewebsites.net” has Allow All, i.e. no restriction.
Also, the option “Same restrictions as ***.azurewebsites.net” should be unchecked.
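If you'd rather not leave the SCM site wide open, a hedged alternative is to whitelist only the agent's IP on the SCM site; with the Azure CLI that could look like this (names and IP are placeholders):

    # Hypothetical names/IP; --scm-site true targets xxx.scm.azurewebsites.net.
    az webapp config access-restriction add --resource-group my-rg --name MyWebappname `
        --rule-name allow-agent --action Allow --ip-address 203.0.113.10/32 `
        --priority 100 --scm-site true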
If you are using Private Endpoints for the Azure web app, you must create two records in your Azure DNS private zone or on your custom DNS server; check the DNS documentation for more details.
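For reference, the two records are A records for the app host and its SCM host in the privatelink.azurewebsites.net zone; a sketch with a placeholder app name and private endpoint IP:

    # Hypothetical app name and private endpoint IP address.
    az network private-dns record-set a add-record --resource-group my-rg `
        --zone-name privatelink.azurewebsites.net --record-set-name MyWebappname `
        --ipv4-address 10.0.1.4
    az network private-dns record-set a add-record --resource-group my-rg `
        --zone-name privatelink.azurewebsites.net --record-set-name MyWebappname.scm `
        --ipv4-address 10.0.1.4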
Besides, when a proxy is in place and web API calls and SCM hosts normally bypass it, the same bypass has to be configured explicitly for the Azure Pipelines agent. To bypass specific hosts, follow the steps here and restart the agent.
1. Removed public access.
2. Created private endpoints within the same VNet and subnet as the target VM.
3. Created a new file, .proxybypass, in the self-hosted agent folder C:\Username\Agent.
4. Added the below entries to .proxybypass to bypass the corporate proxy and communicate directly (see the note after the entries):
https://MyWebappname.azurewebsites.net
http://MyWebappname.azurewebsites.net
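Note that, per the agent documentation, each line of .proxybypass is treated as a regular expression matched against request URLs, so a single pattern (app name is a placeholder) can cover both schemes plus the Kudu/SCM host the deployment task talks to:

    https?://MyWebappname(\.scm)?\.azurewebsites\.net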
We have a TFS 2018 instance running inside our intranet and want to deploy to a remote machine outside of it. The TFS is not visible from the outside (it sits behind a firewall and does not have a public IP).
So we came up with this solution, which might work:
Set up a VPN connection between the target machine and our intranet
Create an Azure Pipelines agent on the target machine that uses a personal access token (PAT) to communicate with the TFS
Is there an easier solution to this, which doesn't require a VPN connection?
We thought we could deploy to a web share from TFS and then trigger the Azure Pipelines agent on the target machine to start the deployment. But from Microsoft's documentation it seems the agent has to have direct access to the TFS through HTTPS and only "listens" for jobs in the TFS queue.
That means the only other solution besides a VPN connection from the target machine would be to make our TFS accessible from the internet through HTTPS, right?
Unfortunately, we haven't found much documentation on best practices for this use case so far. That's why I decided to share it here. Thanks!
I am thinking of moving from TFS in our local data center to Azure DevOps, but one of our pipelines deploys to a server that is going to stay in our data center and that is not exposed to the internet. Can I establish a VPN with Azure DevOps so that, in this multi-tenant environment, the pipeline can deploy to our internal server? Or am I stuck with TFS installed here?
It is possible by using deployment groups; the connection is initiated by your servers.
To see how it works, watch the following video (a registration sketch follows the link):
https://www.youtube.com/watch?v=58UfRxxAWhE
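In short, you run a registration script (generated under Pipelines > Deployment groups) on each target server. A trimmed sketch of what that PowerShell script does, with the organization URL, project, group name, and PAT all as placeholders:

    # Placeholders throughout; the real script is generated in the Deployment groups blade.
    .\config.cmd --deploymentgroup --deploymentgroupname "Prod-DC" `
        --url "https://dev.azure.com/my-org" --projectname "MyProject" `
        --auth pat --token "<PAT>" --runasservice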
I can't connect from an Azure resource (an AKS node) to Azure Database for PostgreSQL using pgcli. I also tried directly from the node and got the same error message:
FATAL: Client from Azure Virtual Networks is not allowed to access the server. Please make sure your Virtual Network is correctly configured.
Firewall rules on the resource are set as follows:
Allow access to Azure services: ON
Running the same pgcli login command on my computer and on another Azure resource works fine.
Adding firewall rules for all IPs returns the same error.
curl from the problematic server (host:5432) gets a reply, so it's not an outbound issue.
What does the error mean?
The VM the connection originates from is deployed to a virtual network subnet where the Microsoft.Sql service endpoint is turned on. Per the documentation:
If Microsoft.Sql is enabled in a subnet, it indicates that you only want to use VNet rules to connect. Non-VNet firewall rules of resources in that subnet will not work.
For the connection to succeed, a VNet rule must be added on the PostgreSQL side. At the time the question was asked, VNet service endpoints for Azure Database for PostgreSQL had only just reached public preview, so I assume they might not have been available to the OP.
Solution
As of November 2020, service endpoints for PostgreSQL are GA, so instead of disabling the service endpoint you can add the missing VNet rule to the PostgreSQL server instance, referencing the service endpoint-enabled subnet. This can be done via the Portal or the Azure CLI, as sketched below.
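A hedged CLI sketch of adding such a rule (all names are placeholders, and the subnet is the one with the Microsoft.Sql endpoint enabled):

    # Hypothetical names; references the service endpoint-enabled subnet.
    az postgres server vnet-rule create --resource-group my-rg --server-name my-pg-server `
        --name allow-aks-subnet --vnet-name my-vnet --subnet aks-subnet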
Apparently, the VM is part of a VNet on which the Microsoft.Sql service endpoint was enabled.
I found this answer. To solve the problem, I disabled the service endpoint and added the public IP in the Connection Security section.
I encountered the same problem.
All I did was switch Allow access to Azure services to ON.
I am trying to send email through one of our on-premises servers from one of my web roles hosted on Azure. We've got a Windows Azure Connect endpoint installed on this on-premises server, which also hosts an SMTP server.
We've configured the web role so that it contains an activation code acquired through the Windows Azure portal under our Azure subscription, and the web role has been deployed to Azure with this configuration. Looking in the Virtual Network section of the portal, I can see our on-premises server listed as well as the instance of said web role. I created a group connecting the local endpoint to the web role instance.
The problem I'm having now is figuring out exactly what I have to do for the emails sent from the web role to be relayed through the SMTP server on the on-premises server.
My first thought was to just specify the local endpoint name, as it appears in our Azure portal, as the host when creating my SmtpClient object in code. Of course this didn't work, as I received an SmtpException just saying "Failure Sending Email".
So my question is: once I've set everything up as described above, what do I need to do in my web role code and/or configuration in order to use the local endpoint as the SMTP host for sending out my emails?
How about opening your firewall for SMTP on both your Azure VM and the local server?
As far as I know, the Azure VM firewall disables PING (ICMP), but I don't know whether it blocks all ports except those defined in your CSDEF file.
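Once the ports are open, a minimal PowerShell sketch can confirm the relay works; it uses the same System.Net.Mail.SmtpClient class the web role code would, with the endpoint name and addresses below being placeholders:

    # Hypothetical endpoint name as it appears in the Azure Connect group.
    $client = New-Object System.Net.Mail.SmtpClient("onprem-smtp-endpoint", 25)
    $client.Send("webrole@example.com", "ops@example.com", "Relay test", "Sent via the Azure Connect endpoint")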