Installing docker-compose using CoreOS Ignition

I know how to install docker-compose using cloud-config, but since this is a first-boot-only task I would instead like to install it using Ignition.
Here is my attempted configuration (which doesn't work):
"systemd": {
"units": [{
"name": "install-docker-compose.service",
"contents": "[Unit]\nDescription=Install docker-compose\nConditionPathExists=!/opt/bin/docker-compose\n[Service]\nType=oneshot\nRemainAfterExit=yes\nExecStart=/usr/bin/mkdir -p /opt/bin/\nExecStart=/usr/bin/curl --create-dirs -o /opt/bin/docker-compose -sL \"https://github.com/docker/compose/releases/download/1.9.0/docker-compose-linux-x86_64\"\nExecStart=/usr/bin/chmod +x /opt/bin/docker-compose"
}]
}
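A likely reason the unit never runs, assuming the rest of the config is valid: Ignition writes the unit to disk but nothing enables it, so systemd never starts it at boot. Below is a sketch of a fix that adds an enabled flag and an [Install] section (on older Ignition 2.x specs the flag is spelled enable; note also that the compose 1.x release assets are named with a capitalised Linux, e.g. docker-compose-Linux-x86_64, and the After=/Wants=network-online.target lines are an added assumption so curl runs with the network up):
"systemd": {
  "units": [{
    "name": "install-docker-compose.service",
    "enabled": true,
    "contents": "[Unit]\nDescription=Install docker-compose\nConditionPathExists=!/opt/bin/docker-compose\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nRemainAfterExit=yes\nExecStart=/usr/bin/mkdir -p /opt/bin/\nExecStart=/usr/bin/curl -sL -o /opt/bin/docker-compose \"https://github.com/docker/compose/releases/download/1.9.0/docker-compose-Linux-x86_64\"\nExecStart=/usr/bin/chmod +x /opt/bin/docker-compose\n\n[Install]\nWantedBy=multi-user.target"
  }]
}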

Related

Create a mongo image having initial collections with a Dockerfile

I am trying to create a custom mongo image from the official image, with a few custom collections baked into the image.
I have created an init script that runs mongoimport to load JSON into the database.
The Dockerfile builds correctly and I can see the script run successfully.
But when I run a container from the generated image, I am unable to see the added collections.
Dockerfile:
FROM mongo:latest
RUN mkdir -p /data/db2 \
    && echo "dbpath = /data/db2" > /etc/mongodb.conf \
    && chown -R mongodb:mongodb /data/db2
COPY ./upload.json /json/
COPY ./init.sh .
RUN ./init.sh
VOLUME /data/db2
CMD ["mongod", "--config", "/etc/mongodb.conf"]
# docker build -f Dockerfile -t my-mongo/test:1.0 .
init.sh:
#!/usr/bin/env bash
# Start mongod in the background against the custom dbpath,
# import the seed data, then shut it down cleanly.
mongod --fork --logpath /var/log/mongodb.log --dbpath /data/db2
mongoimport -d testStore -c authors --file ./json/upload.json --jsonArray
mongod --dbpath /data/db2 --shutdown
upload.json:
[
  { "Name": "Design Patterns", "Price": 54.93, "Category": "Computers", "Author": "Ralph Johnson" },
  { "Name": "Clean Code", "Price": 43.15, "Category": "Computers", "Author": "Robert C. Martin" }
]
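A quick way to check whether the collections actually made it into the image, as a sketch (newer mongo images ship mongosh rather than mongo, and mongo-test is just a throwaway container name):
docker build -f Dockerfile -t my-mongo/test:1.0 .
docker run -d --name mongo-test my-mongo/test:1.0
docker exec mongo-test mongo testStore --eval 'db.authors.find().pretty()'
docker rm -f mongo-test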

Mongodump on macOS vs Ubuntu

I am trying to get a dump of a specific collection from my database. On macOS, I am running the following command:
/usr/local/bin/mongodump --uri <connection_string> --db admin --collection tenants
and it works perfectly well.
However, when I try to run the same command on Ubuntu, I get the following error:
error parsing command line options: illegal argument combination: cannot specify --db and --uri
I tried adding the /dbname suffix to the connection string, but then I am not able to download a single collection, as it fails with the error Failed: bad option: cannot dump a collection without a specified database.
Replacing --uri with --host seems like another possible solution, but that does not work for me since I only have the connection string and do not have access to the username and password.
Another weird thing I noticed is that on macOS, the command mongodump --version returns:
MongoDB shell version v5.0.6
Build Info: {
    "version": "5.0.6",
    "gitVersion": "212a8dbb47f07427dae194a9c75baec1d81d9259",
    "modules": [],
    "allocator": "system",
    "environment": {
        "distarch": "x86_64",
        "target_arch": "x86_64"
    }
}
However, on Ubuntu I am seeing mongodump version: built-without-version-string.
How do I get a dump of a single collection using a connection string on Ubuntu?
It looks like the installation of mongo-tools was corrupt.
I fixed it by reinstalling with these commands:
wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
sudo apt-get update
sudo apt-get install -y mongodb-org
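With a working install of the official database tools, a single collection can then typically be dumped from just the connection string by putting the database name in the URI path, roughly like this (a sketch; user, password, and host below are placeholders for your actual connection string):
mongodump --uri "mongodb+srv://user:pass@cluster.example.net/admin" --collection tenants --out ./dump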

How do I add Pulumi to my GitHub Codespaces / VSCode .NET devcontainer?

I want to develop and deploy IaC with Pulumi .NET / C# from a VS Code .devcontainer. How can I add this capability to the environment?
I included the build container steps from https://github.com/pulumi/pulumi-docker-containers/blob/main/docker/dotnet/Dockerfile into .devcontainer/Dockerfile:
ARG DOTNET_VARIANT="3.1"
ARG PULUMI_VERSION=latest
ARG INSTALL_NODE="true"
ARG NODE_VERSION="lts/*"
# --------------------------------------------------------------------------------
FROM debian:11-slim AS builder
# ARGs declared before the first FROM must be re-declared inside each stage
# that uses them, otherwise they are empty here.
ARG PULUMI_VERSION
RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install -y \
        curl \
        build-essential \
        git
RUN if [ "$PULUMI_VERSION" = "latest" ]; then \
        curl -fsSL https://get.pulumi.com/ | bash; \
    else \
        curl -fsSL https://get.pulumi.com/ | bash -s -- --version $PULUMI_VERSION ; \
    fi
# --------------------------------------------------------------------------------
FROM mcr.microsoft.com/vscode/devcontainers/dotnetcore:0-${DOTNET_VARIANT}
ARG INSTALL_NODE
ARG NODE_VERSION
RUN if [ "${INSTALL_NODE}" = "true" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
COPY --from=builder /root/.pulumi/bin/pulumi /pulumi/bin/pulumi
COPY --from=builder /root/.pulumi/bin/*-dotnet* /pulumi/bin/
ENV PATH "/pulumi/bin:${PATH}"
and I control the process with this .devcontainer/devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/alpine
{
    "name": "C# (.NET)",
    "build": {
        "dockerfile": "Dockerfile",
        "args": {
            "DOTNET_VARIANT": "3.1",
            "PULUMI_VERSION": "latest",
            "INSTALL_NODE": "true",
            "NODE_VERSION": "lts/*"
        }
    },
    "features": {
        "azure-cli": "latest"
    },
    // Set *default* container specific settings.json values on container create.
    "settings": {},
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "ms-dotnettools.csharp"
    ],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "uname -a",
    // Replace when using a ptrace-based debugger like C++, Go, and Rust
    // "runArgs": [ "--init", "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
    "runArgs": [
        "--init"
    ],
    // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "vscode"
}
Be aware that the rebuild will take a while, and you will probably have to reload the devcontainer once the GitHub Codespaces: Details view indicates success.
After that, pulumi login and e.g. pulumi new azure-csharp should work in the container.
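As a quick sanity check inside the rebuilt container (a sketch; the last two commands are the ones mentioned above):
pulumi version          # confirm the copied binary is on the PATH
pulumi login            # authenticate against your chosen backend
pulumi new azure-csharp # scaffold a new Azure C# project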
You can spin up a codespace and configure the devcontainer from within the codespace. I did it just now:
Access the Command Palette (Shift + Command + P / Ctrl + Shift + P), then start typing "dev container". Select Codespaces: Add Development Container Configuration Files....
Then just follow this guide: https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/setting-up-your-project-for-codespaces
I was inspired by the builder-container approach for .NET mentioned in this thread, and here is how I added Pulumi to my Go codespace Dockerfile.
Dockerfile
ARG GO_VARIANT="1"
ARG PULUMI_VERSION=latest
FROM debian:11-slim AS builder
# Re-declare the ARG so it is visible inside this stage.
ARG PULUMI_VERSION
RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install -y \
        curl \
        build-essential \
        git
RUN if [ "$PULUMI_VERSION" = "latest" ]; then \
        curl -fsSL https://get.pulumi.com/ | bash; \
    else \
        curl -fsSL https://get.pulumi.com/ | bash -s -- --version $PULUMI_VERSION ; \
    fi
FROM mcr.microsoft.com/vscode/devcontainers/go:0-${GO_VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
COPY --from=builder /root/.pulumi/bin/pulumi /pulumi/bin/pulumi
COPY --from=builder /root/.pulumi/bin/*-go* /pulumi/bin/
ENV PATH "/pulumi/bin:${PATH}"
ENV GOOS "linux"
ENV GOARCH "amd64"
Snippet of code from devcontainer.json:
"args": {
    ... // redacted for clarity
    "GO_VARIANT": "1.18",
}
I can log in with Azure and Pulumi, and pulumi up is also working.
Full example here: https://github.com/DevOpsJava/solution-using-secret
I fixed an issue with pulumi up, which turned out to be a resource problem regarding memory. Switching the codespace from 2 to 4 cores also doubled the memory to 8 GB.
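If pulumi up fails in a similar way, the machine size is easy to confirm from inside the codespace with standard Linux tools (a sketch):
nproc     # number of cores (2 vs 4 in the case above)
free -h   # total memory (4 GB vs 8 GB)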

How do I add the experimental language server to a devcontainer for vscode?

I'm doing a pretty basic devcontainer for terraform work in VSCode on Windows. Every time I start it up or rebuild the container for use, it prompts me to install the experimental language server where I end up picking the latest tag for it (v0.0.9).
I have the following setting configured in my default settings.json file
{
    "terraform.languageServer.enabled": true
}
and my .devcontainer/devcontainer.json is taken and minimized from the Azure terraform container.
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or the definition README at
// https://github.com/microsoft/vscode-dev-containers/tree/master/containers/docker-existing-dockerfile
{
    // See https://aka.ms/vscode-remote/devcontainer.json for format details.
    "name": "DevOps Projects IaC With Terraform",
    "context": "..",
    "dockerFile": "Dockerfile",
    "runArgs": [
        "-v", "${env:USERPROFILE}/.ssh:/root/.ssh-localhost:ro",
        "-v", "${env:USERPROFILE}/.aws:/root/.aws:ro"
    ],
    "postCreateCommand": "mkdir -p ~/.ssh && cp -r ~/.ssh-localhost/* ~/.ssh && chmod 700 ~/.ssh && chmod 600 ~/.ssh/*",
    // Add the IDs of any extensions you want installed in the array below.
    "extensions": ["mauve.terraform"]
}
How do I include the experimental language server into my build/devcontainer config?
I've been trying to figure out the answer to this for a while for my own purposes. I decided today that I was going to figure it out, and I believe I have it working (installing Terraform, the LSP and the AWS provider) using:
# Terraform, LSP and AWS Provider
ENV TERRAFORM_VERSION=0.12.24
ENV TERRAFORM_LSP_VERSION=0.0.10
ENV TERRAFORM_AWS_PROVIDER_VERSION=2.59.0
RUN wget -c https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && mv terraform /usr/local/bin \
    && wget -c https://releases.hashicorp.com/terraform-provider-aws/${TERRAFORM_AWS_PROVIDER_VERSION}/terraform-provider-aws_${TERRAFORM_AWS_PROVIDER_VERSION}_linux_amd64.zip \
    && unzip terraform-provider-aws_${TERRAFORM_AWS_PROVIDER_VERSION}_linux_amd64.zip \
    && mv terraform-provider-aws_v${TERRAFORM_AWS_PROVIDER_VERSION}* /usr/local/bin \
    && echo "provider \"aws\" {}" >> /usr/local/bin/providers.tf \
    && wget -c https://github.com/juliosueiras/terraform-lsp/releases/download/v${TERRAFORM_LSP_VERSION}/terraform-lsp_${TERRAFORM_LSP_VERSION}_linux_amd64.tar.gz -O - | tar -zx \
    && mv terraform-lsp /usr/local/bin \
    && rm terraform*.zip
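To confirm the install from inside the container, something like this should be enough (a sketch):
command -v terraform terraform-lsp   # both should resolve under /usr/local/bin
terraform version                    # should report 0.12.24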
Because I'm installing this to /usr/local/bin and I'm creating a containerUser which wouldn't have access to install these components, I needed to add the following to the settings section of my devcontainer.json:
"terraform.indexing": {
"enabled": false
},
"terraform.languageServer": {
"enabled": true,
"installCommonProviders": false,
"pathToBinary": "/usr/local/bin"
},
Obviously you need to make adjustments if you want other providers, want to install it elsewhere, or want different versions of Terraform, the LSP or the AWS provider, but those should all be simple changes.
The latest releases can be found at the following links:
Terraform
Terraform LSP
AWS Provider
Other Providers

Unable to install ansible-awx on Ubuntu 18.04

I am trying to install AWX on Ubuntu 18.04 and I am getting the error below.
I have checked out the latest version of AWX from GitHub and tried running the install using:
ansible-playbook -i inventory install.yml -vvvv
TASK [local_docker : Start the containers] ************************************************************************************************************************************************************************
task path: /temp/awx/installer/roles/local_docker/tasks/compose.yml:25
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/cloud/docker/docker_service.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: sateesh
<localhost> EXEC /bin/sh -c 'echo ~sateesh && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173 `" && echo ansible-tmp-1555964996.64-166348838404173="` echo /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173 `" ) && sleep 0'
<localhost> PUT /home/sateesh/.ansible/tmp/ansible-local-18120SkKEmm/tmpaVUC61 TO /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/ /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_oWaqla/ansible_module_docker_service.py", line 745, in cmd_up
timeout=self.timeout)
File "/usr/local/lib/python2.7/dist-packages/compose/project.py", line 559, in up
'Encountered errors while bringing up the project.'
fatal: [localhost]: FAILED! => {
"changed": false,
"errors": [],
"invocation": {
"module_args": {
"api_version": null,
"build": false,
"cacert_path": null,
"cert_path": null,
"debug": false,
"definition": null,
"dependencies": true,
"docker_host": null,
"files": null,
"filter_logger": false,
"hostname_check": false,
"key_path": null,
"nocache": false,
"project_name": null,
"project_src": "/tmp/awxcompose",
"pull": false,
"recreate": "smart",
"remove_images": null,
"remove_orphans": false,
"remove_volumes": false,
"restarted": false,
"scale": null,
"services": null,
"ssl_version": null,
"state": "present",
"stopped": false,
"timeout": 10,
"tls": null,
"tls_hostname": null,
"tls_verify": null
}
},
"module_stderr": "Creating awx_web ... \r\n\r\u001b[1B",
"module_stdout": "",
"msg": "Error starting project unknown cause"
}
to retry, use: --limit #/temp/awx/installer/install.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=8 changed=0 unreachable=0 failed=1
Not sure why it is failing.
I have the following versions of Ansible, pip & Docker:
ansible 2.5.4
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Docker version 18.03.1-ce, build 9ee9f40
pip 19.0.3
Thanks,
Sateesh
I tried your solution and ran into the same issue.
My problem was that my host was running apache2, so port 80 was already taken. After stopping and removing apache2, the build went through.
Thanks.
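Before re-running the installer, it is easy to check what is holding port 80 with standard tools (a sketch):
sudo ss -ltnp | grep ':80 '     # show the process listening on port 80
sudo systemctl stop apache2     # stop apache2 if it is the culprit
sudo systemctl disable apache2  # keep it from coming back on reboot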
I've been following this question from the start, as I encountered the exact same message you did, but I didn't have a possible solution until now.
I just managed to install the latest version of AWX on my Ubuntu 18.04 server. What I did to solve my issue (and I've tried this many times before) was:
Get the latest AWX version from GitHub
Edit the inventory file located in awx/installer, keeping the path to postgres_data_dir the same as before
Kill all running Docker containers:
docker container kill $(docker container ls -q)
Note!: I don't have any containers running except those used for AWX
Remove all containers on my system:
docker container rm <container>
Note!: Again, I don't have any containers except those used for AWX
I used the TAB key to let bash suggest the container names
Run the Ansible playbook for AWX:
ansible-playbook -i inventory install.yml
And that's it! This time I upgraded to the latest version of AWX. In my case I wanted to update to the latest version; I don't know whether you were updating or installing it for the first time, but this is how I managed to do it, so maybe it works for you as well.
Good luck solving your issue if you haven't already.
P.S. Make sure project_src is not /tmp/awxcompose; I learned that this will cause some issues. It'll work, but if you reboot Ubuntu, AWX will run into a problem: see this link
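For reference, the cleanup-and-reinstall steps above collected into one sketch (this assumes, as noted, that only AWX containers exist on the host):
docker container kill $(docker container ls -q)    # stop every running container
docker container rm $(docker container ls -aq)     # remove all containers
cd awx/installer
ansible-playbook -i inventory install.yml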