Installing Kubernetes on CoreOS with rkt and an automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 is using TLS keys, and I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just missing from the documents?
Thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2: I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
But when I run the controller-install.sh script, it repeats the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
So I was guessing it's stuck in that phase because I didn't define the etcd-related TLS certificates in the controller script.
On my MacBook Pro I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
So when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
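The TLS endpoint also answers a direct health check from the controller node itself; a sketch, assuming the client certificates were copied to /etc/ssl/etcd on that node:
curl --cacert /etc/ssl/etcd/ca.pem \
  --cert /etc/ssl/etcd/etcd1.pem \
  --key /etc/ssl/etcd/etcd1-key.pem \
  https://coreos-2.tux-in.com:2379/health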
So I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!

My etcd2 is using TLS keys, and I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
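For orientation, the component these scripts configure that actually talks to etcd is the kube-apiserver, via its manifest, and it accepts etcd client TLS flags. A sketch of what would need to end up there; the certificate paths are assumptions, not something the scripts create for you:
--etcd-servers=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
--etcd-cafile=/etc/ssl/etcd/ca.pem
--etcd-certfile=/etc/ssl/etcd/etcd1.pem
--etcd-keyfile=/etc/ssl/etcd/etcd1-key.pem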
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name; ADVERTISE_IP must be an IP address.
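For example (placeholder values, not taken from the scripts):
export ADVERTISE_IP=192.168.1.10                        # must be an IP address
export CONTROLLER_ENDPOINT=https://coreos-2.tux-in.com  # may be a domain name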
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just missing from the documents?
These scripts include and start the rkt API service. As you can see below, the unit also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service
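So once the script has run, the service should already be up. A quick way to verify, assuming the unit is installed under the name rkt-api.service:
systemctl is-active rkt-api.service
systemctl cat rkt-api.service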

Related

Syndesis (Fuse-online) Integration build failed for unknown host "repo1.maven.org"

We installed fuse-online 7.4 on openshift 3.11. We created an integration containing an OpenApiProvider connection and an SQL connection.
When we publish the integration, the build fails with the following error:
"repo1.maven.org: Name or service not known: Unknown host repo1.maven.org: Name or service not known"
OpenShift is installed behind an enterprise HTTP proxy.
The image registry.access.redhat.com/fuse7/fuse-ignite-s2i is pulled correctly, since docker is configured with the proxy.
The syndesis-server DeploymentConfig has been set with the proxy environment variables.
I suppose that, since the buildconfig for the integration is created dynamically, it is not possible to inject the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY env variables into the build pod.
We read https://docs.openshift.com/container-platform/3.11/install_config/http_proxies.html#s2i-builds but since we don't have any rights to modify the s2i image we cannot proceed.
Is there any way to provide proxy information during the fuse-online integration build?
We finally succeeded in injecting the HTTP proxy environment variables into the dynamically created build pods.
We modified the syndesis-server-config config map, adding the proxy settings to the mavenOptions key like this:
mavenOptions: "-XX:+UseG1GC -XX:+UseStringDeduplication -Xmx310m -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts="
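A minimal sketch of applying that change (the syndesis namespace is an assumption; the DeploymentConfig name comes from the question above):
oc edit configmap syndesis-server-config -n syndesis
# add the -Dhttp.proxyHost/-Dhttps.proxyHost settings to the mavenOptions key, then redeploy:
oc rollout latest dc/syndesis-server -n syndesis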
Thanks for the support.
Let me know if you have any other ideas for resolving the issue.
Can you check the DNS of your network connection? Not sure why, but sometimes I have to use one of the "reliable" DNS servers on my machine (like Google's 8.8.8.8) to make sure repo1.maven.org is reachable.
You can check if this is the problem by trying a simple
$ ping repo1.maven.org
If that doesn't work, you have to check your DNS.
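Comparing your configured resolver against a public one also narrows it down (8.8.8.8 is Google's public DNS):
$ nslookup repo1.maven.org
$ nslookup repo1.maven.org 8.8.8.8
If only the second one resolves, your DNS server is the culprit.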

Issue when deploying basic auth in the Kubernetes dashboard

I'm trying to add basic authentication to my Kubernetes cluster by changing the file
/etc/kubernetes/manifest/kubernetes-apiserver.yaml
There I add 3 flags:
--basic-auth-file=/etc/kubernetes/basic-auth.csv
--authorization-mode=ABAC
--authentication-mode=basic
But when I add those lines and restart my system, Kubernetes freezes and won't start. Is this the right way to add flags to an already running Kubernetes cluster? And is this the right way to add basic authentication to the Kubernetes dashboard?
I used this tutorial for the basic authentication: https://github.com/kubernetes/dashboard/wiki/Access-control#basic
Conceptually you're doing everything right, but the problem is that for modern Kubernetes versions, at least 1.9, authentication-mode is not a valid CLI flag for the API server. You can check all available flags in the documentation.
The documentation in the repo is a bit outdated. Basic authentication is actually enabled as soon as you provide the basic-auth-file option.
So just remove the authentication-mode flag and use only basic-auth-file and authorization-mode. That should help.
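A sketch of the resulting kube-apiserver flags; note that ABAC mode also expects a policy file, and the file paths here are assumptions:
--basic-auth-file=/etc/kubernetes/basic-auth.csv
--authorization-mode=ABAC
--authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
The basic-auth file is a CSV with one password,user,uid entry per line, for example:
mypassword,admin,1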
To enable user/password authorization, based on the Dashboard documentation, you need to add the authentication-mode CLI arg to the Dashboard itself.

Deploying a REST API for IBM Hyperledger Composer Blockchain (bad flag in substitute command: 'U' error)

I'm getting this error trying to deploy a card to a working blockchain on the cloud; any idea? Thanks in advance. I'm using a Mac and following this guide (Kubernetes is installed/configured correctly, I think):
https://ibm-blockchain.github.io/interacting/
./create/create_composer-rest-server.sh --paid --business-network-card /Users/sm/jsblock/tutorial-network/PeerAdmin#fabric-network.card
Configured to setup a paid storage on ibm-cs
Preparing yaml file for create composer-rest-server
sed: 1: "s/%COMPOSER_CARD%//User ...": bad flag in substitute command: 'U'
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
error: no objects passed to create
Composer rest server created successfully
There is an error in that document, and you are also specifying the Peer Admin card when you need to use a Network Admin card.
There are two sets of 'parallel' documents for paid and free clusters. The command you are using has --paid in it in error. If you remove --paid and use the Network Admin card, I think it will solve the problem.
Your command will look something like this: ./create/create_composer-rest-server.sh --business-network-card admin#YOUR-network
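As an aside, the "bad flag in substitute command: 'U'" message itself is macOS sed choking on the unescaped slashes of the card path inside its s/%COMPOSER_CARD%/.../ substitution (the %COMPOSER_CARD% token is visible in the error output above). A sketch of the same substitution with an alternate delimiter that tolerates slashes; the card file name is illustrative:
sed "s|%COMPOSER_CARD%|/Users/sm/jsblock/tutorial-network/admin#fabric-network.card|g" composer-rest-server.yaml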
I tried the install process again (the deployed blockchain instance seemed to work well): https://ibm-blockchain.github.io/simple/
But I noticed that the script:
./create_all.sh
...never ended (mostly an internal Kubernetes problem, I guess), so I reset the installation:
./delete_all.sh
./create_all.sh
I tried again, and now everything is OK. I got my awesome REST API talking to my Hyperledger blockchain, deployed on IBM Cloud. Ready to develop something amazing.

Configuring FQDN for GCE instance on startup

I am trying to start a Google Compute Engine (GCE) instance with a pre-configured FQDN. We intend to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility, gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute, Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key; it has no impact on what:
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" like the below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This appears to work, but doesn't feel "like the Google way".
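A minimal sketch of that startup-script approach, with the FQDN as a placeholder:
#!/bin/bash
# query the new internal IP from the metadata server
IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
# map it to the desired FQDN in /etc/hosts
echo "$IP mynode.example.com mynode" >> /etc/hosts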
Can I configure the FQDN (specifically, a domain name, as the instance name is always the hostname) of an instance, during instance creation, using the metaserver functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
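The same metadata can also be set from the CLI instead of the console (a sketch; the instance name is a placeholder):
gcloud compute instances add-metadata mynode \
  --metadata hostname=your.server.hostname,startup-script='sudo hostnamectl set-hostname your.server.hostname'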
Regards!
According to this article, 'hostname' is part of the default metadata entries that provide information about your instance, and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google team; within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through the use of a startup script, as you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you use a startup script or something else that modifies it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround for your scenario.
Here is a patch for /usr/share/google/set-hostname that sets an FQDN on a GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata by specifying the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing it the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answered questions and found a few things that work, but only with a couple of solutions combined. This thread seems the place to answer.
1) echo example.com > /etc/hostname
2) Add 127.0.1.1 example.com to /etc/hosts
3) Add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) Uncomment this line in /etc/dhcp/dhclient.conf: supersede domain-name "example.com";
5) Profit... it seems to stick after each reboot
(Note: example.com stands for your own domain name, e.g. yourfqdndomain.org)
Also note this is for Ubuntu or Debian; other Unix variants may vary slightly. I've tested this on Ubuntu 16.04.
Regarding the wording "NOT possible to manually edit any of the default metadata pairs": what about the instance-level default metadata under "/scheduling"? We can set those manually, as mentioned in this article.

haproxy - which configuration files

I have an HAProxy install which was configured by someone who left the company. It runs on Ubuntu 10.04 and it seems to use 3 configuration files in the directory /etc/haproxy
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point in using the haproxy.https.cfg file, as I believe (in our configuration) it could all be configured from a single haproxy.http.cfg file, but when I remove that httpS file it complains bitterly and refuses to run. My question:
Is this the standard configuration HAProxy uses? If not (I can't find a reference to the "S" file anywhere), can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your haproxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg, maybe from /etc/init.d/haproxy, but mileage varies depending on your distribution).
If you remove the file, of course it will complain.
This is not particularly standard, but it isn't bad either; it helps structure the conf rather than having one very long file.
The task of the .https version will certainly be to redirect the HTTPS traffic towards a service that can handle HTTPS (stunnel or nginx, usually), since haproxy cannot terminate SSL connections. (stunnel has to be patched; see the haproxy page.)
If you want, you can merge those files into one or two; just find out how haproxy is launched (check init.d, or let us know which distribution) and fix it appropriately.
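A quick way to find out how it is launched (a sketch; file locations vary by distribution):
# show the -f flags the running process was started with
ps aux | grep '[h]aproxy'
# look for the config file list in the init script and its defaults
grep -n 'haproxy\.' /etc/init.d/haproxy /etc/default/haproxy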
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt