From an Azure DevOps pipeline, using a self-hosted Linux build agent running in Docker, I get the error below during terraform plan. I have tried many things; running the same steps manually from a bash shell on the build agent worked fine.
Do you have any suggestions?
2019-09-12T13:55:21.8133489Z ##[debug]Evaluating condition for step: 'Terraform plan'
2019-09-12T13:55:21.8134075Z ##[debug]Evaluating: succeeded()
2019-09-12T13:55:21.8134246Z ##[debug]Evaluating succeeded:
2019-09-12T13:55:21.8134443Z ##[debug]=> True
2019-09-12T13:55:21.8134723Z ##[debug]Result: True
2019-09-12T13:55:21.8134976Z ##[section]Starting: Terraform plan
2019-09-12T13:55:21.8138406Z ==============================================================================
2019-09-12T13:55:21.8138526Z Task : Run Terraform
2019-09-12T13:55:21.8138605Z Description : Run a Terraform on the build agent
2019-09-12T13:55:21.8138647Z Version : 2.4.0
2019-09-12T13:55:21.8138688Z Author : Peter Groenewegen - Xpirit
2019-09-12T13:55:21.8138772Z Help : [More Information](https://pgroene.wordpress.com/2016/06/14/getting-started-with-terraform-on-windows-and-azure/)
2019-09-12T13:55:21.8138828Z ==============================================================================
2019-09-12T13:55:21.8347594Z ##[error]The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.
2019-09-12T13:55:21.8361383Z ##[debug]System.Exception: The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.
at Microsoft.VisualStudio.Services.Agent.Worker.TaskRunner.RunAsync()
at Microsoft.VisualStudio.Services.Agent.Worker.StepsRunner.RunStepAsync(IStep step, CancellationToken jobCancellationToken)
2019-09-12T13:55:21.8365066Z ##[section]Finishing: Terraform plan
The pipeline fails on "Terraform plan". Here's the YAML:
variables:
  env: 'environment'

steps:
- task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
  displayName: 'Terraform plan'
  inputs:
    TemplatePath: '$(System.DefaultWorkingDirectory)/_repository/tf'
    Arguments: 'plan -var-file=$(System.DefaultWorkingDirectory)/_repository/tf/$(env)/$(env).tfvars'
    InstallTerraform: true
    UseAzureSub: true
    ConnectedServiceNameARM: 'deploy-sco'
    ManageState: true
    SpecifyStorageAccount: true
    StorageAccountResourceGroup: 'rg-terraform'
    StorageAccountRM: sta
    StorageContainerName: terraform
    InitArguments: '-backend-config=$(System.DefaultWorkingDirectory)/_repository/tf/$(env)/$(env).beconf'
The build agent is a Docker container, built from the following Dockerfile:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl jq git iputils-ping libcurl3 libunwind8 netcat libssl-dev unzip wget apt-utils apt-transport-https make binutils gcc lsb-release gnupg
RUN wget -P /tmp/download https://releases.hashicorp.com/terraform/0.12.7/terraform_0.12.7_linux_amd64.zip
RUN wget -P /tmp/download -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
RUN unzip /tmp/download/terraform_0.12.7_linux_amd64.zip -d /tmp/download/
RUN mv /tmp/download/terraform /usr/local/bin
RUN chmod a+x /usr/local/bin/terraform
RUN apt-get install /tmp/download/packages-microsoft-prod.deb
RUN apt-get update
RUN apt-get -y install powershell
RUN wget -P /tmp/download -q https://curl.haxx.se/download/curl-7.65.3.tar.gz
RUN cd /tmp/download; tar xzf curl-7.65.3.tar.gz
RUN cd /tmp/download/curl-7.65.3; ./configure --prefix=/opt/curl-7.65.3 --disable-ipv6 --with-ssl; make; make install
RUN mv /usr/bin/curl /tmp; ln -s /opt/curl-7.65.3/bin/curl /usr/bin/curl
RUN curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null
RUN AZ_REPO=$(lsb_release -cs); echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | tee /etc/apt/sources.list.d/azure-cli.list
RUN apt-get update
RUN apt-get install azure-cli
RUN pwsh -c "Install-Module -Name Az -Force"
RUN pwsh -c "Install-Module -Name Azure -Force"
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
As you said, Terraform itself is cross-platform and runs on Linux, Windows and macOS. The issue you are facing is that the extension and task you are using, named Run Terraform, can only be executed on an agent installed on Windows. You can see that in its documentation: Getting started with Terraform on Windows and Azure.
There are other extensions, created by Microsoft DevLabs and by an individual developer: Terraform and Terraform Build & Release Tasks. The tasks in these two extensions run on Windows, Linux and macOS.
You'd better switch to the tasks from one of these two extensions.
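For example, with the Microsoft DevLabs extension, the init and plan steps might look roughly like the sketch below. The task and input names are taken from that extension (check which task versions are available in your organization), and the state key is a placeholder rather than a value from your current pipeline:

steps:
- task: TerraformInstaller@0
  displayName: 'Install Terraform'
  inputs:
    terraformVersion: '0.12.7'

- task: TerraformTaskV1@0
  displayName: 'Terraform init'
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/_repository/tf'
    backendServiceArm: 'deploy-sco'
    backendAzureRmResourceGroupName: 'rg-terraform'
    backendAzureRmStorageAccountName: 'sta'
    backendAzureRmContainerName: 'terraform'
    backendAzureRmKey: 'terraform.tfstate'   # placeholder state key

- task: TerraformTaskV1@0
  displayName: 'Terraform plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/_repository/tf'
    environmentServiceNameAzureRM: 'deploy-sco'
    commandOptions: '-var-file=$(System.DefaultWorkingDirectory)/_repository/tf/$(env)/$(env).tfvars'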
I am creating an ECS cluster with the Docker image library/wordpress:latest, and the desired task reaches the running state. But when I build an image using the following Dockerfile, push it to my Docker Hub repo and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script inside the container.
I used a WordPress image and built my own image from it, thinking it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was "service stage-fe-auto has started 1 tasks: task e83587e734c94f77." When I opened the task details, it showed Exit Code 2 and Working directory /home/app, but in my Dockerfile my working directory is different. Not sure what I did wrong.
Could somebody help me run my JMeter script through our GitHub? FYI, the JMeter setup I'm using includes several plugins. Your response is highly appreciated. Thank you so much.
This is how I set up my JMeter machine on a Linux box/playground:
sudo apt-get update
sudo apt install curl -y
sudo apt install -y default-jdk
sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
sudo tar -xvf apache-jmeter-5.3.tgz
cd apache-jmeter-5.3/lib
sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
cd ext/
sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
cd ..
sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
Desired output: JMeter script able to run on GitHub.
What do you mean by "JMeter script able to run on GitHub"? GitHub is one (of many) hosting services for Git repositories; it only stores files and their version history, so you cannot "run" anything there.
If you're talking about GitHub Actions, then just use the run keyword and put your commands there.
An example workflow definition would be something like:
name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup-jmeter
        run: |
          sudo apt-get update
          sudo apt install curl -y
          sudo apt install -y default-jdk
          sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
          sudo tar -xvf apache-jmeter-5.3.tgz
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib/ext && sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
      - name: run-jmeter-test
        run: |
          $GITHUB_WORKSPACE/apache-jmeter-5.3/bin/jmeter.sh -n -t test.jmx -l result.jtl
Also be aware that, according to JMeter Best Practices, you should be using the latest version of JMeter, so consider upgrading to JMeter 5.5 or whatever the latest stable version is, available on the JMeter Downloads page.
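For example (the environment variable below is just an illustration, not part of the workflow above), you could parameterize the JMeter version at the top of the workflow so that a later upgrade is a one-line change:

env:
  JMETER_VERSION: '5.5'

and then reference it inside the run steps:

sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
sudo tar -xvf apache-jmeter-${JMETER_VERSION}.tgz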
I'm trying to deploy on Google Cloud. This is my Dockerfile:
FROM ubuntu:20.04
RUN apt update
RUN apt -y install sudo
RUN apt -y install curl
RUN sudo apt install -y build-essential
RUN curl -O https://storage.googleapis.com/nvidia-drivers-us-public/GRID/GRID13.0/NVIDIA-Linux-x86_64-470.63.01-grid.run
RUN chmod +x NVIDIA-Linux-x86_64-470.63.01-grid.run
RUN sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run
FROM python:3.8
RUN adduser meat
RUN passwd -d meat
USER meat
WORKDIR /home/meat
RUN python3 -m venv meat-env
RUN /bin/bash -c "source meat-env/bin/activate"
RUN /usr/local/bin/python -m pip install --upgrade pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
and I got this error:
Step 8/20 : RUN sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run
---> Running in 811998f9cea8
Verifying archive integrity...
OK
Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86_64 470.63.01
...............
and several dots(.) later
Error opening terminal: unknown.
unable to stream build output: The command '/bin/sh -c sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run' returned a non-zero code: 1
Failed to build the app. Error: unable to stream build output: The command '/bin/sh -c sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run' returned a non-zero code: 1
Did I do something wrong in my Dockerfile? Or is there a different way to run that command?
DazWilkin's comment is correct:
It's not possible to install graphics driver to Cloud Run fully managed because it doesn't expose any GPU.
You should deploy it on Cloud Run for Anthos (GKE), but you'll need to configure your GKE cluster for GPUs. According to the documentation, you should follow these steps (a rough sketch of the commands is shown after this list):
Add a GPU-enabled node pool to your GKE cluster.
In this step, you can check Enable Virtual Workstation (NVIDIA GRID). Please choose a GPU such as NVIDIA Tesla T4, P4 or P100 to enable NVIDIA GRID.
Install NVIDIA's device drivers on the nodes.
You can now create a service that consumes GPUs and deploy the image to Cloud Run for Anthos:
Setting up your service to consume GPUs
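To make the first two steps concrete, here is a rough sketch of the commands; the cluster name, zone, machine type and node pool name are placeholders, and the driver installer DaemonSet manifest is the one referenced in Google's GKE GPU documentation:

# 1. Add a GPU-enabled node pool (NVIDIA Tesla T4 in this example)
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --num-nodes=1

# 2. Install NVIDIA's device drivers on the nodes via Google's installer DaemonSet
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml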
I want to use an npm CLI utility (this one) inside a bash script and run it via a GitHub workflow.
This is my basic script:
folder="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
mkdir -p "$folder"/rawdata
mkdir -p "$folder"/processing
npm list -g --depth=0
rm "$folder"/rawdata/"$reg".png
capture-website --delay 5 --full-page --width 1280 --height 720 --output "$folder"/rawdata/"$reg".png "https://ondata.github.io/vaccinipertutti/?area=SIC"
I run it using this GitHub workflow (it's Ubuntu 20.04.2 LTS), in which I set up the capture-website-cli installation this way:
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
source ~/.profile
NPM_CONFIG_PREFIX=~/.npm-global
npm install -g capture-website
But when the script runs I have this error:
./test.sh: line 13: capture-website: command not found
It seems that it has not been installed in ~/.npm-global.
If I run find /home/runner -executable -name capture-website I get:
/home/runner/.npm-global/lib/node_modules/capture-website
Do you have some advice to solve my problem?
You might need to differentiate between:
capture-website, which is the npm library (library means: no executable in .npm-global/bin)
sindresorhus/capture-website-cli, which provides a CLI (command-line interface) way to use that npm library
You need to install the latter so its executable ends up in your runner's $PATH:
npm install --global capture-website-cli
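With the CLI package installed, the capture-website executable should end up in the global bin directory. A quick sanity check, following the ~/.npm-global prefix set up in the question, might look like:

export PATH=~/.npm-global/bin:$PATH
npm install -g capture-website-cli
which capture-website    # should print /home/runner/.npm-global/bin/capture-website
capture-website --help   # confirms the CLI itself runs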
I'm new to Docker and am trying to create a Docker image with Raspbian base and PowerShell Core installed.
EDIT: Updated the Dockerfile to include the libicu52 package, which resolved the main error (libpsl-native missing or its dependencies not available). Changed the CMD parameters and now have a different error.
Here is my Dockerfile:
# Download the latest RPi3 Debian image
FROM resin/raspberrypi3-debian:latest
# Update the image and install prerequisites
RUN apt-get update && apt-get install -y \
wget \
libicu52 \
libunwind8 \
&& apt-get clean
# Grab the latest tar.gz
RUN wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-rc.2/powershell-6.0.0-rc.2-linux-arm32.tar.gz
# Make folder to put PowerShell
RUN mkdir ~/powershell
# Unpack the tar.gz file
RUN tar -xvf ./powershell-6.0.0-rc.2-linux-arm32.tar.gz -C ~/powershell
# Run PowerShell
CMD pwsh -v
New error:
hostname: you must be root to change the host name
/bin/sh: 1: pwsh: not found
How do I resolve these errors?
Thanks in advance!
Instead of downloading the release archive and extracting it in your container, I'd recommend using the official apt packages from Microsoft's Debian repository in your Dockerfile, as described at:
https://learn.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-macos-and-linux?view=powershell-6#debian-9
So transforming that to Dockerfile format:
# Install PowerShell-related system components
RUN apt-get update \
    && apt-get install -y \
       gnupg curl apt-transport-https \
    && apt-get clean

# Import the public repository GPG keys (no sudo needed; Docker builds run as root)
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -

# Register Microsoft's Debian repository
RUN sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" > /etc/apt/sources.list.d/microsoft.list'

# Install PowerShell
RUN apt-get update \
    && apt-get install -y powershell

# Start PowerShell
CMD pwsh
Alternatively, you can also try to start from one of the original Microsoft PowerShell Docker Linux images, but of course then you need to solve the Raspberry Pi installation yourself:
https://hub.docker.com/r/microsoft/powershell/tags/
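For instance, a minimal sketch based on that image (tag names change over time, so check the tags page above; note that the image must support your target architecture, which is the part you would still need to solve for the Raspberry Pi):

FROM microsoft/powershell:latest
CMD ["pwsh"]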