We're trying to get our first containerized build running in Azure DevOps Server.
The build runs fine in the container, but, unfortunately, it needs to access resources on another server. As such, I need it to run as a domain user (a gMSA account will work) so that it can authenticate to the network share and access those resources.
I can't seem to find any documentation on running a containerized build as a specific user.
Can anyone point me to how to set up the YAML for passing credentials, or a gMSA account? That would be great.
Thanks in advance
Alright... so I figured it out.
First you have to create a credential spec.
In PowerShell (the New-CredentialSpec cmdlet comes from the CredentialSpec module): New-CredentialSpec -AccountName GMSAAccountName
Then add this in the yml file beneath the container declaration:
options: --security-opt "credentialspec=file://Domain_GMSAAccountName.json"
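For context, that line sits inside an Azure Pipelines container resource roughly like this; the container alias and image name below are placeholders, not my actual values:

resources:
  containers:
  - container: windows_build
    image: mycompany/windows-build:latest
    options: --security-opt "credentialspec=file://Domain_GMSAAccountName.json"

jobs:
- job: Build
  container: windows_build
  steps:
  - script: echo Building as the gMSA identity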
That was it... and now it works.
Have you tried using a PAT (Personal Access Token) to run the build agent?
When setup asks for your authentication type, choose PAT. Then paste the PAT you created into the command prompt window.
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#permissions
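If you run the configuration unattended, the PAT can also be passed on the command line. A minimal sketch; the server URL, pool, and agent name are placeholders:

.\config.cmd --unattended --url https://your-devops-server/DefaultCollection --auth pat --token <your-PAT> --pool Default --agent MyBuildAgent --runAsService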
Related
I am trying to use FTP to upload specific files (not a full release) to an Azure Web App. Essentially I am using a PowerShell script to FTP files up to the web app in Azure. I can add new files, create files and folders but when I try to overwrite or delete a file, I get a 550 Access is denied.
I tried creating a new deployment credential and was able to log in, but the result was the same when trying to delete anything: 550 Access is Denied.
Is there any way to grant more permissions to this user or is this impossible? Thanks!
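For reference, a delete along these lines is what fails with the 550; this is only a sketch, and the URI and credentials below are placeholders:

$uri = "ftp://<ftp-host>.ftp.azurewebsites.windows.net/site/wwwroot/old-file.txt"
$request = [System.Net.WebRequest]::Create($uri)
$request.Method = [System.Net.WebRequestMethods+Ftp]::DeleteFile
$request.Credentials = New-Object System.Net.NetworkCredential("<ftp-user>", "<ftp-password>")
try {
    $response = $request.GetResponse()
    Write-Host "Delete status: $($response.StatusDescription)"
    $response.Close()
}
catch [System.Net.WebException] {
    # The 550 comes back wrapped in a WebException
    Write-Host $_.Exception.Message
}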
Check that you are not using the READ ONLY FTP URL.
The publish profile gives two FTP URLs; the bottom one is read only and will always give a 550 error.
A 550 Denied error indicates that you do not have enough permission to do that.
You could download the Azure publish profile to get the FTP user and password.
You could also follow this tutorial to get the FTP information.
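If you go the publish profile route, the FTP details can be pulled out in PowerShell; a small sketch, assuming the profile was saved as mysite.PublishSettings:

# Parse the downloaded publish profile (the file name is an assumption)
[xml]$publishSettings = Get-Content .\mysite.PublishSettings
$ftpProfile = $publishSettings.publishData.publishProfile | Where-Object { $_.publishMethod -eq 'FTP' }
$ftpProfile.publishUrl   # use this read/write URL, not the read-only one
$ftpProfile.userName
$ftpProfile.userPWD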
or
As zahid Faroq mentioned, you could use the Kudu tool (https://{yoursite}.scm.azurewebsites.net) to do that easily. For more information about Kudu, please refer to this document.
If you can still reproduce the issue, I recommend creating a support ticket to get help from the Azure team.
You can't overwrite a running app. Stop the app first, then upload, then start the app again. You can stop/start the app from the Azure portal or using the az CLI:
az webapp stop --name %AZURE_APP% --resource-group %AZURE_RESOURCE_GROUP%
and
az webapp start --name %AZURE_APP% --resource-group %AZURE_RESOURCE_GROUP%
p.s.: Funny thing is that you can delete a running app. But then you still can't upload to the running app even when it's already deleted.
I restarted the server, then I could delete/alter the app.
I'm trying to remove a VSTS agent from a system, but I no longer possess the Personal Access Token (PAT) originally used during setup. An answer on this thread states that I can just delete the agent from the VSTS web UI, but I don't see that option besides nuking the entire agent pool (which is not a great option for us).
When I try to run config.cmd remove, these are my results:
PS C:\agent> .\config.cmd remove
Removing agent from the server
Enter authentication type (press enter for PAT) >
Enter personal access token >
Enter personal access token > Exiting...
First, it's better to remove the VSTS agent through the config.cmd remove command, and a PAT is required. You don't need to use the original PAT; you can create a new PAT with the Agent Pools (read, manage) scope and use it to remove the agent.
Secondly, without a PAT:
Delete the agent from the server (from the Agent Pools page in the web UI).
Delete the agent service on the local system through the sc command if it is running as a service: sc delete [service name].
After that, you can delete the agent files.
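Putting both options together, the commands look roughly like this; the service name below follows the usual vstsagent.{collection}.{pool}.{agent} pattern but is only an example:

# With a new PAT (Agent Pools read & manage scope):
.\config.cmd remove --auth pat --token <new-PAT>

# Without a PAT, if the agent was installed as a Windows service:
sc delete "vstsagent.DefaultCollection.Default.MyAgent"
# ...then delete the agent folder and remove the agent entry from the pool in the web UI.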
Dears, I have another use case; I've been using the Azure DevOps on-prem server.
I deleted the agent from the DevOps server website. However, that didn't help me out; when I tried to reinstall the agent it told me:
Cannot configure the agent because it is already configured. To
reconfigure the agent, run 'config.cmd remove' or './config.sh remove'
first.
However, I solved it by typing the below:
resolved
I have some PowerShell scripts on my CI server to check the state of some WebJobs.
But I have a few problems.
I'm using a publish settings file, but it expires and my build starts to fail.
I don't want to use a Management Certificate that would expose all management features.
And I don't want to put my user credentials on the CI server, which would also expose all management features.
Is there any way to create a CI user or credential with restricted permissions?
Thanks!
Azure Functions provides a good solution to this problem. You can create a Service Principal account with certificate login, and then restrict that account to whatever actions you need it to allow (via RBAC).
You can then have an Azure PowerShell script running in Functions that is called via a webhook from your CI engine. That way the only credential stored on your CI is the webhook secret, and if your CI engine has a static IP you can verify that commands only come from that address and drop anything else.
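A rough sketch of the restricted service principal piece using the current Az PowerShell module; the role, resource group, and tenant values are placeholders, and property/parameter names vary a bit across Az.Resources versions:

# Create the service principal; attach your certificate to its app registration
# (the certificate upload can be done in the portal if the cmdlet parameters differ in your version)
$sp = New-AzADServicePrincipal -DisplayName "ci-webjobs-checker"

# Grant only what the CI scripts need, scoped to a single resource group
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName "Website Contributor" -ResourceGroupName "my-webjobs-rg"

# The script in Functions then signs in with the certificate instead of user credentials
Connect-AzAccount -ServicePrincipal -ApplicationId $sp.AppId -CertificateThumbprint "<cert-thumbprint>" -Tenant "<tenant-id>"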
I've checked that the TeamCity user has access to the network share in question.
All packages from the public NuGet feed are found correctly while packages available on the network share are not.
We use the network share when building via Visual Studio with the exact same path without a problem.
I've tried using "file://ratchet/NuGetRepository" but that doesn't make a difference.
TeamCity log entries and screenshot of the build step configuration shown below:
NuGet command: E:\BuildAgent01\plugins\nuget-agent\bin\JetBrains.TeamCity.NuGetRunner.exe E:\BuildAgent01\tools\NuGet.CommandLine.DEFAULT.nupkg\tools\NuGet.exe restore E:\BuildAgent01\work\95323b7041b60513\MySolution.sln -Source https://nuget.org/api/v2/ -Source \\ratchet\NuGetRepository\
Was able to solve this by specifying the fully qualified name of the network share, e.g. \\ratchet.hq.local\NuGetRepository.
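For reference, the restore invocation from the log above with the fully qualified share name (only the source path changes):

NuGet.exe restore MySolution.sln -Source https://nuget.org/api/v2/ -Source \\ratchet.hq.local\NuGetRepository\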
Since the accepted answer did not provide a solution for my setup, I'd like to post what did allow TeamCity to access my network share.
First, a very important note: TeamCity Build Agent may either run as a Windows service or directly in command prompt. For my machine, this had the following consequences:
When run as a Windows service, the build agent was logged in as LocalSystem. For our network share, my machine's credentials were not given permissions.
Note: while this SO thread indicates that the network share can be configured to allow the machine's LocalSystem account to have permission, this was NOT an option for me.
When run in command prompt, the build agent will use the security context of whoever runs it (for me, it was my domain user). Again, for our network share, all domain users are given permissions.
The quick solution was to simply run the build agent in command prompt and call it a day; however, I did really want to run the build agent as a Windows service, since I think it is a cleaner approach.
Here's my solution:
First, I needed to grant my domain user the privilege to log on as a service. This is needed to run the service with my domain user's security context. I navigated to User Rights Assignment within Local Security Policy:
Control Panel -> Administrative Tools -> Local Security Policy -> Local Policies -> User Rights Assignment
Next, I added my domain user to the Log on as a service setting. For this, I made sure to include the domain with my user name.
Now that my domain user's security context can be used when starting a service, I navigated to Services (services.msc), located the TeamCity Build Agent, and edited its properties:
Now, when relaunching the TeamCity Build Agent Windows service, it would be able to access the network share since it was using the security context of my domain user. I can now access the NuGet repository on our shared drive and keep the build agent running in the background.
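If you prefer to script that last step instead of using the Services dialog, something like the following works from an elevated prompt; the service name and account are examples, so check services.msc for the name your TeamCity install registered:

sc.exe config "TCBuildAgent" obj= "MYDOMAIN\builduser" password= "<account-password>"
sc.exe stop "TCBuildAgent"
sc.exe start "TCBuildAgent"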
You can include the package sources in the NuGet.targets file. Just find the commented lines and add your path.
<PackageSource Include="https://nuget.org/api/v2/" />
<PackageSource Include="\\ratchet\NuGetRepository\" />
Scenario:
I have a console application that needs to access a network share with read/write permissions.
There are no problems when I run it manually.
The problem:
When I add this application as a job on my Quartz.NET server, it cannot access the share. I do not have access to change permissions on the network share, so basically I need my Quartz job, or if necessary my Quartz server, to run jobs as me (or as a user that has the proper permissions).
Any ideas on how to accomplish this?
Thanks
You need to change the user that the service runs as (so this actually isn't a Quartz.NET issue). Open the service properties in Services and change the user from SYSTEM or NETWORK SERVICE to some named user account that has proper rights to the network share.
You can also use impersonation to change the user you're running as on the fly.
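Before reconfiguring the service, it can be worth confirming that the account you intend to use can actually reach the share; a quick PowerShell check, where the account name and share path are placeholders:

$cred = Get-Credential "MYDOMAIN\quartzuser"
Start-Process powershell.exe -Credential $cred -ArgumentList '-NoExit', '-Command', 'Test-Path \\fileserver\jobshare'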