Playwright works on my machine when I run it through the normal Python interpreter, but when I try to deploy it as an Azure Function I get errors.
I'm trying to follow the instructions here, but I'm getting "webkit" browser was not found. Please complete Playwright installation via running "python -m playwright install", which I think is an error that can't occur when using npm.
I've tried creating an Azure DevOps pipeline that has this step:
- bash: |
    python -m venv worker_venv
    source worker_venv/bin/activate
    pip install -r requirements.txt
    python -m playwright install
  workingDirectory: $(workingDirectory)
  displayName: 'Install application dependencies'
I've also tried just doing it from my code:
os.system('python -m playwright install')
I can see that the PLAYWRIGHT_BROWSERS_PATH environment variable is set to 0.
How can I get this to run on Azure Functions?
As you mentioned, the code works locally but fails when you deploy it to an Azure Function. It seems you haven't added the modules you installed to requirements.txt. When you deploy to an Azure Function, Azure installs the modules listed in requirements.txt. So just add a line like playwright==0.162.1 to your requirements.txt.
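For reference, a minimal requirements.txt for this scenario might look like the following sketch (azure-functions is the usual runtime package for Python function apps; the playwright pin is just the version mentioned above):

# requirements.txt
azure-functions
playwright==0.162.1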
CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If the CI runs a poetry update in the yml file, the job exits with error code 128 when it calls git clone on my internal dependency.
To isolate the problem, I tried simply calling git clone on that same repo. The result: the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies. For that, GitLab provides deploy keys.
So I followed the instructions to create a deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only user that I have not found a way to log in as interactively. I had to make the CI runner perform the SSH key creation steps itself.)
Example .gitlab-ci.yml file:
# Commands in PowerShell
but_first:
  # The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh-agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
I solved my problem of pulling internal dependencies by completely bypassing the SSH pull of the source code and by switching from Poetry to Hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a Python wheel).
I then used GitLab's Packages and Registries offering to host my package. Instead of keeping packages in each source code project, I pushed the packages of all my dependencies to a project I created for this single purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only wheel is built (without a target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was getting pip to find the extra PyPI repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple

[install]
trusted-host = <private gitlab repo>
This is functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
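For reference, the equivalent one-off pip invocation (same placeholders as in the pip.ini above) would be:

pip install <package> --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple --trusted-host <private gitlab repo>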
Since I was using a dependency manager, I was not calling pip directly but through the manager's wrapper. And here is the main reason I decided to change dependency managers: Poetry does not read or recognize pip.ini, so any changes made in that file are ignored.
With pip.ini configured, any dependencies I have in the private package repo will also be found when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]
I would like to use the Docker Compose task in Azure Pipelines, but I am getting the following error:
##[error]Unhandled: Docker Compose was not found. You can provide the path to docker-compose via 'dockerComposePath'
How should I install Docker Compose? Is there a "nice" way, something like the Docker Installer task?
Unfortunately, there does not seem to be a clean step like the Docker Installer task for Azure Pipelines that installs Docker Compose.
However, I have been able to get Docker Compose installed via a shell script task that uses the local package manager (e.g. sudo apt-get install -y docker-compose):
- task: ShellScript@2
  displayName: Install Docker-Compose
  inputs:
    scriptPath: 'docker-compose-install.sh'
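A minimal docker-compose-install.sh along those lines, assuming an apt-based agent such as the hosted Ubuntu images, might be:

#!/bin/bash
# Install Docker Compose with the distribution's package manager
set -e
sudo apt-get update
sudo apt-get install -y docker-compose
# Sanity check: confirm docker-compose is now on PATH
docker-compose --version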
I have a really simple repo in GitHub (/Hooks/), currently containing just one file, a simple Hooks.ts TypeScript file. On my local machine, in my workspace, I've created a project folder, and I can yarn add normal packages like yarn add fuse.js. But when I try to yarn add my private repo using the format yarn add git+ssh://git@github.com:OrganisationName/Hooks.git, I just get Error: couldn't find the binary git. I have permissions to the Hooks repo because I can push/pull from it. I'm on OSX Mojave (10.14.16) and installed Yarn via brew. My Yarn version (yarn -v) is 1.22.10, the latest brew will install after running brew upgrade yarn.
The error "couldn't find the binary git" means that git is not installed in the environment where the installation runs. Are you running these commands inside a container?
You may also need to install openssh; it is required for git+ssh URLs.
For example, in an Alpine container:
apk add --no-cache git openssh
yarn install
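Putting that together as a Dockerfile sketch (the node:16-alpine base image is an assumption; use whichever image you build from):

FROM node:16-alpine
# git and openssh are required for yarn to fetch git+ssh dependencies
RUN apk add --no-cache git openssh
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install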
If you don't want to access the repo through SSH, you can instead use HTTPS with a deploy token; here's a GitLab example:
git+https://<token-name>:<token>@gitlab.com/Username/Repository#<branch|tag>
I'm trying to debug why snaps do not run in Azure Pipelines builds, and what I have found is that "/" is not owned by root during these builds; it is owned by uid 500 (not 0).
Does anybody know why "/" is not owned by root? Is this a bug with Azure Pipelines?
For example, the following pipeline does not work:
pr:
- 1.*

jobs:
- job: ldc2_snap
  timeoutInMinutes: 0
  pool:
    vmImage: ubuntu-16.04
  steps:
  - script: |
      set -x
      snap version
      lxd --version
      sudo apt-get update
      sudo snap install --classic --candidate snapcraft
      export PATH="${PATH}:/snap/bin"
      snapcraft --version
      snapcraft
    displayName: Build ldc2 snap package
This fails because snap-confine (which is run by snapcraft / snapd) refuses to run if "/" is not owned by root. We (the snapd developers) do not want to allow snap-confine to run with a non-root-owned "/" without understanding why this happens, as it looks like a bug in Azure Pipelines.
You can try running your pipeline on the ubuntu-18.04 agent. I can reproduce the same issue on the ubuntu-16.04 agent, but it seems to be gone on ubuntu-18.04.
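For the pipeline above, only the pool section needs to change:

pool:
  vmImage: ubuntu-18.04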
If you want to configure your own self-hosted agent, you can refer to the detailed steps here.
Agents run out of a working directory (exposed as the system variable Pipeline.Workspace / environment variable PIPELINE_WORKSPACE). On a hosted agent you only have access to that working directory. It's not a bug, it's an intentional limitation.
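For example, a step can print the directory the agent actually gives you (PIPELINE_WORKSPACE being the environment-variable form of Pipeline.Workspace):

- script: |
    echo "Workspace is $PIPELINE_WORKSPACE"
    ls -la "$PIPELINE_WORKSPACE"
  displayName: Inspect the agent working directory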
If you need something to access the root of the file system, provision your own private agents.
I would like to add a test stage to the IBM Bluemix DevOps "Build & Deploy" pipeline to test APIs using Postman and Newman, but I don't see how to do that. Any advice on where to look?
In the Build and Deploy pipeline, selecting Add Stage lets you add a new test job that runs after each update to the source code repository.
When configuring the stage, you can add a "Test" job with the "Simple" tester type. This lets you provide shell commands to be executed in the project directory.
Newman itself can be managed with npm. Provided the newman package is listed in your project dependencies, you can set up an npm script command to run the tests as below.
"scripts": {
"test": "newman -c tests.json"
},
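This assumes newman is listed in the same package.json, for example (the version range is illustrative; note that the -c flag above matches Newman's 2.x CLI, while newer releases use newman run <collection>):

"devDependencies": {
  "newman": "2.x"
}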
This would allow you to run the following test stage to execute your tests.
#!/bin/bash
# invoke tests here
npm install
npm test