For learning purposes, I'm trying to install and set up my own Kubernetes cluster on GCP.
I want to provision my instances on GCP with a bootstrap script.
Here is my google_compute_instance config:
resource "google_compute_instance" "default" {
name = var.vm_name
machine_type = "f1-micro"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = var.network
access_config {
// Include this section to give the VM an external IP address
}
}
provisioner "remote-exec" {
script = var.script_path
connection {
type = "ssh"
host = var.ip_address
user = "root"
}
}
tags = ["node"]
}
I get this issue when I run terraform apply:
Error: Failed to open script 'sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common zsh vim
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl': open sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common zsh vim
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl: no such file or directory
All my instances are created in the cloud. It seems to find the bootstrap script, but it shows this error.
What did I miss? Is there a better way to do it?
Here is the script:
#!/bin/bash
# Base tooling
sudo apt-get update
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common \
  zsh \
  vim
# Docker repository and engine
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# Kubernetes repository and kubeadm tooling
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
You should provide the private_key argument in the connection block of remote-exec.
private_key - The contents of an SSH key to use for the connection. These can be loaded from a file on disk using the file function. This takes preference over the password if provided.
A sample block could be like this:
provisioner "remote-exec" {
script = var.script_path
connection {
host = var.ip_address
type = "ssh"
user = "root"
private_key = fileexists("/temp/private_key") ? file("/temp/private_key") : file("C:/private_key")
}
}
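For this to work the key pair has to exist and its public half has to be known to the VM. A minimal sketch of one way to wire that up on GCE, assuming the /temp/private_key path from the sample above and an instance called my-vm (both placeholders):
# Generate a key pair at the path the connection block expects (placeholder path).
ssh-keygen -t rsa -b 4096 -f /temp/private_key -N ""
# Register the public key in the instance metadata so sshd will accept it.
# Note: GCE images disable direct root SSH by default, so a non-root user
# is usually a safer choice than user = "root".
gcloud compute instances add-metadata my-vm --zone europe-west1-b \
  --metadata ssh-keys="root:$(cat /temp/private_key.pub)"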
For those who are interested, I found an easier solution that avoids SSH altogether, by using the Google metadata available at creation of the resource:
metadata_startup_script = file("./scripts/bootstrap.sh")
resource "google_compute_instance" "default" {
name = var.vm_name
machine_type = "e2-standard-2"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = var.network
access_config {
// Include this section to give the VM an external IP address
}
}
metadata_startup_script = file("./scripts/bootstrap.sh")
tags = ["node"]
}
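One thing worth knowing about this approach: the startup script runs asynchronously on first boot, so Terraform reports success before the script finishes. A couple of standard ways to check its output on a Debian GCE image (the instance name is a placeholder):
# SSH into the instance and inspect the startup-script log.
gcloud compute ssh my-vm --zone europe-west1-b
sudo journalctl -u google-startup-scripts.service
# Debian 9 images also log startup-script output to syslog:
grep startup-script /var/log/syslog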
stage('Deploy Test chart') {
  steps {
    container('ubuntu-kubectl-helm') {
      script {
        kubeconfig(credentialsId: 'kubeconfig-test') {
          sh "which kubectl"
          sh "kubectl config view"
          sh "kubectl get nodes"
          sh "kubectl get pods -n jenkins-new"
          sh "kubectl get pods -n test"
        }
      }
    }
  }
}
We have 3 containers to run the whole pipeline, and we created an Ubuntu container to run Kubernetes commands on it (we have installed kubectl on it). When we run this step as part of the whole pipeline, it gives us errors:
ERROR: Failed to run "kubectl version". Returned status code 127.
stdout:
sh: 47: kubectl: not found
But when we run this step separately as its own pipeline, it works and we get the results. Now we are stuck and unable to figure out how to proceed.
We also checked the environment PATH and verified whether the step was running on the Ubuntu container or not.
You need to install kubectl inside the Jenkins image itself; the same applies if you want to use Docker in your pipeline.
Here is my Dockerfile, which does that:
FROM jenkins/jenkins

ARG HOST_UID=1004
ARG HOST_GID=999

USER root

RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io

RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt-get install -y kubectl

RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins

USER jenkins
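A usage sketch for building and running this image; the tag, the UID/GID values, and the port mappings are placeholders, not part of the original answer:
# Build, matching the container's docker group to the host's so the
# jenkins user can talk to the mounted Docker socket.
docker build -t jenkins-kubectl \
  --build-arg HOST_UID=$(id -u) \
  --build-arg HOST_GID=$(getent group docker | cut -d: -f3) .
docker run -d -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins-kubectl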
I am trying to create a high-availability cluster using the kubeadm tool, and I am trying to install the tools specified in the prerequisites of the kubeadm installation. When I run sudo apt-get install -y kubelet kubeadm kubectl, I get the following error:
Building dependency tree
Reading state information... Done
E: Unable to locate package kubelet
E: Unable to locate package kubeadm
E: Unable to locate package kubectl
My Attempt
I am following the official documentation for preparing the nodes from kubernetes.io. I am referring to this link:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin
And when I continue with the following commands as described in the official documentation,
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Updates
When I tried the answer from Mr. Tummala, I got the following error:
W: Failed to fetch https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease Could not resolve host: apt.kubernetes.io
W: Some index files failed to download. They have been ignored, or old ones used instead.
and the result is the same: unable to locate the packages.
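A side note on that update: the "Could not resolve host" warning points at DNS on the node rather than at apt itself. A few generic checks, none of them specific to the original setup:
# Can the node resolve the repository host at all?
getent hosts apt.kubernetes.io || echo "DNS resolution failed"
# Which resolver is the node using?
cat /etc/resolv.conf
# Is outbound HTTPS working once a name does resolve?
curl -Is https://packages.cloud.google.com | head -n 1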
See if the steps below do the trick for you.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
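If the install succeeds, a quick sanity check with the standard version commands (not part of the original answer):
kubectl version --client
kubeadm version -o short
kubelet --version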
I would refer to the official documentation https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
Then,
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Finally
sudo apt-get update
# Optionally, view versions with
# sudo apt-cache show kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
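The final apt-mark hold pins the three packages so a routine upgrade doesn't move the cluster to a new version unexpectedly. Standard apt-mark usage to inspect or lift the pin later, added here for illustration:
# List currently held packages.
apt-mark showhold
# Release the hold before a deliberate upgrade.
sudo apt-mark unhold kubelet kubeadm kubectl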
Try
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
then
sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF'
After that, just run
sudo apt-get update
and then
apt-cache policy kubelet | head -n 20
Now you can try to install kubectl and kubeadm again.
So I'm having this problem where, for some reason, I can't install any packages on my Ubuntu system.
I'm currently on Ubuntu 16.10.
(screenshot: terminal install logs)
Update:
I've entered those commands and got this:
(screenshot: output after update and apt-cache)
What should I do now?
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
After installing the PostgreSQL database server, it creates a user 'postgres' with the role 'postgres' by default, along with a system account of the same name. So to connect to the Postgres server, log in to your system as the user postgres and connect to a database.
sudo su - postgres
psql
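Once logged in, a couple of generic commands to confirm the server is up (plain psql usage, shown for illustration):
# Run a query without an interactive session:
sudo -u postgres psql -c "SELECT version();"
# List the databases the server knows about:
sudo -u postgres psql -l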
First do
sudo apt-get update
You should get no errors upon updating. If you do, you might have issues with your firewall, or something blocking you from updating repositories. Check the output carefully.
And then search for the correct (exact!) package name using this command:
apt-cache search postgresql
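For example, to restrict the search to package names only (a standard apt-cache flag, not from the original answer):
apt-cache search --names-only '^postgresql-'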
As a last resort you could add an external 3rd-party repository as described in this answer. Just remember to use your distribution's codename instead of "xenial".
It should work.
$ sudo apt-get install postgresql postgresql-client
If you are getting "E: Unable to locate package postgresql-12" while migrating, the following steps may help you:
sudo apt-get -y install bash-completion wget
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql-12 postgresql-client-12
sudo systemctl status postgresql
Ref: install PostgreSQL 12 on Ubuntu 18.04
The following commands worked for me:
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt install postgresql-11 libpq-dev
How do I build a postgresql-9.6 image from postgresql-9.6.1.tar.gz using a Dockerfile?
I tried the Dockerfile below to install postgresql-9.6 on Ubuntu, but I am unable to build a complete image.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install software-properties-common -y && \
    apt-get install wget && \
    add-apt-repository "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" && \
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    apt-get update && \
    apt-get install -y postgresql-9.6 postgresql-client-9.6
EXPOSE 5432
So, as an alternative, I want to create the image from the tarball.
The official Postgres Dockerfile does a build from source on Debian, which should be largely portable to Ubuntu.
However, it will be easier to just use the postgres:9.6 or postgres:9.6.1 image as a separate container alongside your application, rather than trying to manage a build-heavy, monolithic container yourself.
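A minimal sketch of that approach; the container name, password, and volume name are placeholders:
# Run the official image as its own container with a named volume for data.
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:9.6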
I've got a Docker container in which I'm installing MongoDB. After installing, I'm trying to start mongo and restore a MongoDB dump. However, when I start the Docker instance, I see that the user has been switched to root (as per the supervisor instruction), but mongo is not started.
This is the supervisor snippet:
[supervisord]
nodaemon=true
[program:mongodb]
user=root
command=/usr/bin/mongod
This is my setup in the Dockerfile:
RUN apt-get update && sudo apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisor.conf /etc/supervisor/conf.d/supervisor.conf
# Install MongoDB.
RUN \
    apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
    echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
    apt-get update && \
    apt-get install -y mongodb-org && \
    rm -rf /var/lib/apt/lists/*
# Define mountable directories.
VOLUME ["/data/db"]
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["mongod"]
EXPOSE 27017
EXPOSE 28017
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
Am I missing any configuration setting? Any help is appreciated.
You're not able to run MongoDB because some files need authentication while installing it, so just replace apt-get install -y mongodb-org with apt-get install -y --allow-unauthenticated mongodb-org and you'll be able to install MongoDB without any problem.
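In the Dockerfile above, that single change lands in the existing RUN chain; a sketch of just the affected lines, with everything else unchanged:
apt-get update && \
apt-get install -y --allow-unauthenticated mongodb-org && \
rm -rf /var/lib/apt/lists/*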