Couldn't create application in Spinnaker UI - Kubernetes

I'm using an Oracle IaaS Kubernetes environment. All configuration changes completed successfully, but when I create an application I get the message "Could not create application: Cannot get property 'name' on null object".
Could you please help me solve this issue?
I've configured persistentStorage as oraclebmcs, configured the Docker registry and Kubernetes accounts, and enabled the oraclebmcs provider.
persistentStorage config:
oraclebmcs:
  bucketName: spinnaker_oracle
  namespace: spinnaker
  compartmentId: ocid1.compartment.xxxxx
  region: us-phoenix-1
  userId: ocid1.user.xxxxx
  fingerprint: e4:14:a1:2a:xxxxxx
  sshPrivateKeyFilePath: /home/.oci/oci_api_key.pem
  tenancyId: ocid1.tenancy.xxxxxxxxx
oraclebmcs config:
oraclebmcs:
  enabled: true
  accounts:
  - name: oracle-bmcs
    requiredGroupMembership: []
    compartmentId: ocid1.compartment.xxxxxxx
    userId: ocid1.user.xxxxxxx
    fingerprint: e4:14:a1:2a:xxxxxxx
    sshPrivateKeyFilePath: /home/.oci/oci_api_key.pem
    tenancyId: ocid1.tenancy.xxxxxxx
    region: us-ashburn-1
k8s account:
kubernetes:
  enabled: true
  name: oracle-k8s-automate
  requiredGroupMembership: []
  providerVersion: V1
  dockerRegistries:
  - accountName: docker
    namespaces: []
  configureImagePullSecrets: true
  namespaces: []
  omitNamespaces: []
  kinds: []
  omitKinds: []
  customResources: []
  kubeconfigFile: /home/chalaka/cloud/configuraiton/kubeconfig_auto
  oauthScopes: []
  oAuthScopes: []

Related

Spinnaker deployment in Kubernetes is failing

Background: I have set up a ServiceAccount and spinnaker-role-binding in the default namespace, created the spinnaker namespace for Kubernetes, and deployed services on ports 9000 and 8084.
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/spin-deck-np   LoadBalancer   hidden       <pending>     9000:31295/TCP   9m39s
service/spin-gate-np   LoadBalancer   hidden       <pending>     8084:32161/TCP   9m39s
Created a halyard deployment in the default namespace and configured hal inside it.
Problem: When I run the hal deploy apply command, I get the error below:
Problems in Global:
! ERROR Unexpected exception:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET
at:
https://kubernetes.default/apis/extensions/v1beta1/namespaces/spinnaker/replicasets.
Message: the server could not find the requested resource. Received status:
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null,
kind=null, name=null, retryAfterSeconds=null, uid=null,
additionalProperties={}), kind=Status, message=the server could not find the
requested resource, metadata=ListMeta(resourceVersion=null, selfLink=null,
additionalProperties={}), reason=NotFound, status=Failure,
additionalProperties={}).
Below is my kube config file at /home/spinnaker/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://kubernetes.default
name: default
contexts:
- context:
cluster: default
user: user
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: user
user:
token: *********************
Below is the hal config file at /home/spinnaker/.hal/config
currentDeployment: default
deploymentConfigurations:
- name: default
  version: 1.8.1
  providers:
    appengine:
      enabled: false
      accounts: []
    aws:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
        defaultKeyPairTemplate: '{{name}}-keypair'
        defaultRegions:
        - name: us-west-2
        defaults:
          iamRole: BaseIAMRole
    ecs:
      enabled: false
      accounts: []
    azure:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: azure-linux.json
        baseImages: []
    dcos:
      enabled: false
      accounts: []
      clusters: []
    dockerRegistry:
      enabled: true
      accounts:
      - name: my-docker-registry
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        address: https://index.docker.io
        email: fake.email#spinnaker.io
        cacheIntervalSeconds: 30
        clientTimeoutMillis: 60000
        cacheThreads: 1
        paginateSize: 100
        sortTagsByDate: false
        trackDigests: false
        insecureRegistry: false
        repositories:
        - library/nginx
      primaryAccount: my-docker-registry
    google:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: gce.json
        baseImages: []
        zone: us-central1-f
        network: default
        useInternalIp: false
    kubernetes:
      enabled: true
      accounts:
      - name: my-k8s-account
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        dockerRegistries:
        - accountName: my-docker-registry
          namespaces: []
        configureImagePullSecrets: true
        cacheThreads: 1
        namespaces: []
        omitNamespaces: []
        kinds: []
        omitKinds: []
        customResources: []
        cachingPolicies: []
        kubeconfigFile: /home/spinnaker/.kube/config
        oauthScopes: []
        oAuthScopes: []
      primaryAccount: my-k8s-account
    openstack:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
    oracle:
      enabled: false
      accounts: []
  deploymentEnvironment:
    size: SMALL
    type: Distributed
    accountName: my-k8s-account
    updateVersions: true
    consul:
      enabled: false
    vault:
      enabled: false
    customSizing: {}
    gitConfig:
      upstreamUser: spinnaker
  persistentStorage:
    persistentStoreType: gcs
    azs: {}
    gcs:
      jsonPath: /home/spinnaker/.gcp/gcs-account.json
      project: round-reality
      bucket: spin-94cc2e22-8ece-4bc1-80fd-e9df71c1d9f4
      rootFolder: front50
      bucketLocation: us
    redis: {}
    s3:
      rootFolder: front50
    oracle: {}
  features:
    auth: false
    fiat: false
    chaos: false
    entityTags: false
    jobs: false
  metricStores:
    datadog:
      enabled: false
    prometheus:
      enabled: false
      add_source_metalabels: true
    stackdriver:
      enabled: false
    period: 30
    enabled: false
  notifications:
    slack:
      enabled: false
  timezone: America/Los_Angeles
  ci:
    jenkins:
      enabled: false
      masters: []
    travis:
      enabled: false
      masters: []
  security:
    apiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://External IP of worker:8084
    uiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://External IP of worker:9000
    authn:
      oauth2:
        enabled: false
        client: {}
        resource: {}
        userInfoMapping: {}
      saml:
        enabled: false
      ldap:
        enabled: false
      x509:
        enabled: false
      iap:
        enabled: false
      enabled: false
    authz:
      groupMembership:
        service: EXTERNAL
        google:
          roleProviderType: GOOGLE
        github:
          roleProviderType: GITHUB
        file:
          roleProviderType: FILE
      enabled: false
  artifacts:
    bitbucket:
      enabled: false
      accounts: []
    gcs:
      enabled: false
      accounts: []
    github:
      enabled: false
      accounts: []
    gitlab:
      enabled: false
      accounts: []
    http:
      enabled: false
      accounts: []
    s3:
      enabled: false
      accounts: []
  pubsub:
    google:
      enabled: false
      subscriptions: []
  canary:
    enabled: false
    serviceIntegrations:
    - name: google
      enabled: false
      accounts: []
      gcsEnabled: false
      stackdriverEnabled: false
    - name: prometheus
      enabled: false
      accounts: []
    - name: datadog
      enabled: false
      accounts: []
    - name: aws
      enabled: false
      accounts: []
      s3Enabled: false
    reduxLoggerEnabled: true
    defaultJudge: NetflixACAJudge-v1.0
    stagesEnabled: true
    templatesEnabled: true
    showAllConfigsEnabled: true
I used the commands below inside the halyard deployment to interact with Kubernetes:
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
How can I resolve this error for the Spinnaker deployment?
Thank you.
From your config file, it looks like the kubeconfig context is not set up correctly.
Please use the command below:
# Set a variable for the admin kubeconfig file location (fetch the config file with --admin, if possible)
kubeconfig_path="<my-k8s-account-admin-file-path>"
hal config provider kubernetes account add my-k8s-account --provider-version v2 \
  --kubeconfig-file "$kubeconfig_path" \
  --context $(kubectl config current-context --kubeconfig "$kubeconfig_path")
After executing the command above, you will see a context entry in your config file, which is missing from your current config.
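The missing-context failure described above can also be checked mechanically: a kubeconfig is just YAML whose current-context must name an entry in its contexts list. A minimal sketch in Python (the missing_context helper is hypothetical, and the kubeconfig is shown as an already-parsed dict rather than loaded from disk):

```python
# Hypothetical helper: detect the situation described above, where halyard
# fails because `current-context` is absent or names no entry in `contexts`.
def missing_context(kubeconfig: dict) -> bool:
    names = {c["name"] for c in kubeconfig.get("contexts", [])}
    current = kubeconfig.get("current-context")
    return current is None or current not in names

# A kubeconfig shaped like the one in the question (already parsed to a dict)
cfg = {
    "contexts": [{"context": {"cluster": "default", "user": "user"}, "name": "default"}],
    "current-context": "default",
}
assert not missing_context(cfg)            # context present and referenced
assert missing_context({"contexts": []})   # no current-context at all
```

Running this against the parsed kubeconfig before hal deploy apply is a quick way to confirm the context entry actually exists.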

Spinnaker: AKS account not showing in UI

I've configured the Spinnaker cloud provider as Kubernetes with the commands below:
hal config provider kubernetes enable
kubectl config current-context
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-v2-account --provider-version v2 --context $CONTEXT
hal config features edit --artifacts true
but this account is not visible in the Spinnaker UI,
and the logs show the error below:
Nov 29 12:07:43 47184UW2DDevLVM2 gate[34594]: 2019-11-29 12:07:43.860 ERROR 34594 --- [TaskScheduler-5] c.n.s.g.s.DefaultProviderLookupService : Unable to refresh account details cache, reason: timeout
Please advise. Thanks.
Here's my hal deploy diff command output:
+ Get current deployment
Success
+ Determine config diff
Success
~ EDITED
default.persistentStorage.redis
- port 6379 -> null
- host localhost -> null
~ EDITED
telemetry
I've provisioned a new VM and done the whole installation process from scratch, but I still hit the same issue.
Here is the ~/.kube/config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://xxx:443
  name: xxx
contexts:
- context:
    cluster: xxx
    user: xxx
  name: xxx
current-context: xxx
kind: Config
preferences: {}
users:
- name: xxx
  user:
    client-certificate-data: xxx
    client-key-data: xxx
    token: xxx
and here is the ~/.hal/config file:
currentDeployment: default
deploymentConfigurations:
- name: default
  version: 1.17.2
  providers:
    appengine:
      enabled: false
      accounts: []
    aws:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
        defaultKeyPairTemplate: '{{name}}-keypair'
        defaultRegions:
        - name: xxx
        defaults:
          iamRole: BaseIAMRole
    ecs:
      enabled: false
      accounts: []
    azure:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: azure-linux.json
        baseImages: []
    dcos:
      enabled: false
      accounts: []
      clusters: []
    dockerRegistry:
      enabled: false
      accounts: []
    google:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: gce.json
        baseImages: []
        zone: us-central1-f
        network: default
        useInternalIp: false
    kubernetes:
      enabled: true
      accounts:
      - name: xxx
        requiredGroupMembership: []
        providerVersion: V2
        permissions: {}
        dockerRegistries: []
        context: xxx
        configureImagePullSecrets: true
        cacheThreads: 1
        namespaces: []
        omitNamespaces: []
        kinds: []
        omitKinds: []
        customResources: []
        cachingPolicies: []
        kubeconfigFile: /home/xxx/.kube/config
        oAuthScopes: []
        onlySpinnakerManaged: false
      primaryAccount: xxx
    oracle:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: oci.json
        baseImages: []
    cloudfoundry:
      enabled: false
      accounts: []
  deploymentEnvironment:
    size: SMALL
    type: LocalDebian
    imageVariant: SLIM
    updateVersions: true
    consul:
      enabled: false
    vault:
      enabled: false
    customSizing: {}
    sidecars: {}
    initContainers: {}
    hostAliases: {}
    affinity: {}
    tolerations: {}
    nodeSelectors: {}
    gitConfig:
      upstreamUser: spinnaker
    livenessProbeConfig:
      enabled: false
    haServices:
      clouddriver:
        enabled: false
        disableClouddriverRoDeck: false
      echo:
        enabled: false
  persistentStorage:
    persistentStoreType: azs
    azs:
      storageAccountName: xxx
      storageAccountKey: xxx
      storageContainerName: xxx
    gcs:
      rootFolder: front50
    redis: {}
    s3:
      rootFolder: front50
    oracle: {}
  features:
    auth: false
    fiat: false
    chaos: false
    entityTags: false
    artifacts: true
  metricStores:
    datadog:
      enabled: false
      tags: []
    prometheus:
      enabled: false
      add_source_metalabels: true
    stackdriver:
      enabled: false
    newrelic:
      enabled: false
      tags: []
    period: 30
    enabled: false
  notifications:
    slack:
      enabled: false
    twilio:
      enabled: false
      baseUrl: https://api.twilio.com/
    github-status:
      enabled: false
  timezone: America/Los_Angeles
  ci:
    jenkins:
      enabled: false
      masters: []
    travis:
      enabled: false
      masters: []
    wercker:
      enabled: false
      masters: []
    concourse:
      enabled: false
      masters: []
    gcb:
      enabled: false
      accounts: []
  repository:
    artifactory:
      enabled: false
      searches: []
  security:
    apiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://xxx:8084/
    uiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://xxx:9000/
    authn:
      oauth2:
        enabled: false
        client: {}
        resource: {}
        userInfoMapping: {}
      saml:
        enabled: false
        userAttributeMapping: {}
      ldap:
        enabled: false
      x509:
        enabled: false
      iap:
        enabled: false
      enabled: false
    authz:
      groupMembership:
        service: EXTERNAL
        google:
          roleProviderType: GOOGLE
        github:
          roleProviderType: GITHUB
        file:
          roleProviderType: FILE
        ldap:
          roleProviderType: LDAP
      enabled: false
  artifacts:
    bitbucket:
      enabled: false
      accounts: []
    gcs:
      enabled: false
      accounts: []
    oracle:
      enabled: false
      accounts: []
    github:
      enabled: false
      accounts: []
    gitlab:
      enabled: false
      accounts: []
    gitrepo:
      enabled: false
      accounts: []
    http:
      enabled: false
      accounts: []
    helm:
      enabled: false
      accounts: []
    s3:
      enabled: false
      accounts: []
    maven:
      enabled: false
      accounts: []
    templates: []
  pubsub:
    enabled: false
    google:
      enabled: false
      pubsubType: GOOGLE
      subscriptions: []
      publishers: []
  canary:
    enabled: false
    serviceIntegrations:
    - name: google
      enabled: false
      accounts: []
      gcsEnabled: false
      stackdriverEnabled: false
    - name: prometheus
      enabled: false
      accounts: []
    - name: datadog
      enabled: false
      accounts: []
    - name: signalfx
      enabled: false
      accounts: []
    - name: aws
      enabled: false
      accounts: []
      s3Enabled: false
    - name: newrelic
      enabled: false
      accounts: []
    reduxLoggerEnabled: true
    defaultJudge: NetflixACAJudge-v1.0
    stagesEnabled: true
    templatesEnabled: true
    showAllConfigsEnabled: true
  plugins:
    plugins: []
    enabled: false
    downloadingEnabled: false
    pluginConfigurations:
      plugins: {}
  webhook:
    trust:
      enabled: false
  telemetry:
    enabled: false
    endpoint: https://stats.spinnaker.io
    instanceId: xxx
    connectionTimeoutMillis: 3000
    readTimeoutMillis: 5000
Here are the commands used to install Spinnaker:
az login
az aks get-credentials --resource-group xxx --name xxx
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
sudo bash InstallHalyard.sh --user xxx
hal config provider kubernetes enable
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add xxx \
--provider-version v2 \
--context $CONTEXT
hal config features edit --artifacts true
hal config deploy edit --type localdebian
hal config storage azs edit --storage-account-name xxx --storage-account-key xxx
hal config storage edit --type azs
hal version list
hal config version edit --version 1.17.2
sudo hal deploy apply
echo "host: 0.0.0.0" | tee \
~/.hal/default/service-settings/gate.yml \
~/.hal/default/service-settings/deck.yml
hal config security ui edit \
--override-base-url http://xxx:9000/
hal config security api edit \
--override-base-url http://xxx:8084/
sudo hal deploy apply
I found the exception logs below:
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: 2019-12-02 11:12:07.424 ERROR 23908 --- [1-7002-exec-105] c.n.s.k.w.e.GenericExceptionHandlers : Internal Server Error
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: java.lang.NullPointerException: null
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at com.netflix.spinnaker.clouddriver.kubernetes.health.KubernetesHealthIndicator.health(KubernetesHealthIndicator.java:48) ~[clouddriver-kubernetes-6.4.1-20191111102213.jar:6.4.1-20191111102213]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:95) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.HealthEndpoint.health(HealthEndpoint.java:50) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.HealthEndpointWebExtension.health(HealthEndpointWebExtension.java:53) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
Also, localhost:7002 is not responding:
hexunix#47184UW2DDevLVM2:~$ curl -v http://localhost:7002/credentials
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 7002 (#0)
> GET /credentials HTTP/1.1
> Host: localhost:7002
> User-Agent: curl/7.58.0
> Accept: */*
>
This is how I have done it in my environment:
kubeconfig_path="/home/root/.hal/kube-config"
kubernetes_account="my-account"
docker_registry="docker.io"
hal config provider kubernetes account add $kubernetes_account --provider-version v2 \
  --kubeconfig-file "$kubeconfig_path" \
  --context $(kubectl config current-context --kubeconfig "$kubeconfig_path") \
  --omit-namespaces=kube-system,kube-public \
  --docker-registries "$docker_registry"
Make the necessary updates and apply the changes. It should work.
From the hal config it is clear that the Kubernetes account is added:
kubernetes:
  enabled: true
  accounts:
  - name: xxx
    requiredGroupMembership: []
    providerVersion: V2
    permissions: {}
    dockerRegistries: []
    context: xxx
    configureImagePullSecrets: true
    cacheThreads: 1
    namespaces: []
    omitNamespaces: []
    kinds: []
    omitKinds: []
    customResources: []
    cachingPolicies: []
    kubeconfigFile: /home/xxx/.kube/config
    oAuthScopes: []
    onlySpinnakerManaged: false
  primaryAccount: xxx

Spinnaker unable to communicate with Kubernetes cluster

I am trying to deploy Spinnaker locally with minikube and minio. I have everything set up, and my Kubernetes cluster is up and running with a composed app on it. Details below:
| NAME | READY | UP-TO-DATE | AVAILABLE | AGE |
|---------------------------|-------|------------|-----------|-----|
| deployment.extensions/api | 1/1 | 1 | 1 | 18s |
| deployment.extensions/db | 1/1 | 1 | 1 | 18s |
I configured both Kubernetes and storage in my hal config (pasted below as well). When I try to deploy using "sudo hal deploy apply", I get the following error:
WARNING You have not specified a Kubernetes context in your halconfig; Spinnaker will use "minikube" instead. We recommend explicitly setting a context in your halconfig, to ensure changes to your kubeconfig won't break your deployment.
! ERROR Unable to communicate with your Kubernetes cluster: An error has occurred. Unable to authenticate with your Kubernetes cluster. Try using kubectl to verify your credentials.
Problems in default.security:
WARNING Your UI or API domain does not have override base URLs set even though your Spinnaker deployment is a Distributed deployment on a remote cloud provider. As a result, you will need to open SSH tunnels against that deployment to access Spinnaker. We recommend that you instead configure an authentication mechanism (OAuth2, SAML2, or x509) to make it easier to access Spinnaker securely, and then register the intended Domain and IP addresses that your publicly facing services will be using.
Failed to prep Spinnaker deployment
Here is my hal config:
currentDeployment: default
deploymentConfigurations:
- name: default
  version: ''
  providers:
    appengine:
      enabled: false
      accounts: []
    aws:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
        defaultKeyPairTemplate: '{{name}}-keypair'
        defaultRegions:
        - name: us-west-2
        defaults:
          iamRole: BaseIAMRole
    ecs:
      enabled: false
      accounts: []
    azure:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: azure-linux.json
        baseImages: []
    dcos:
      enabled: false
      accounts: []
      clusters: []
    dockerRegistry:
      enabled: true
      accounts:
      - name: my-docker-registry
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        address: https://index.docker.io
        username: <sensitive> (this is my actual username)
        password: <sensitive> (this is my actual password)
        email: fake.email#spinnaker.io
        cacheIntervalSeconds: 30
        clientTimeoutMillis: 60000
        cacheThreads: 1
        paginateSize: 100
        sortTagsByDate: false
        trackDigests: false
        insecureRegistry: false
        repositories:
        - ericstoppel1/atixlabs
      primaryAccount: my-docker-registry
    google:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: gce.json
        baseImages: []
        zone: us-central1-f
        network: default
        useInternalIp: false
    kubernetes:
      enabled: true
      accounts:
      - name: my-k8s-account
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        dockerRegistries:
        - accountName: my-docker-registry
          namespaces: []
        configureImagePullSecrets: true
        cacheThreads: 1
        namespaces: []
        omitNamespaces: []
        kinds: []
        omitKinds: []
        customResources: []
        cachingPolicies: []
        kubeconfigFile: /home/osboxes/.kube/config
        oAuthScopes: []
        onlySpinnakerManaged: false
      primaryAccount: my-k8s-account
    oracle:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: oci.json
        baseImages: []
    cloudfoundry:
      enabled: false
      accounts: []
  deploymentEnvironment:
    size: SMALL
    type: Distributed
    accountName: my-k8s-account
    updateVersions: true
    consul:
      enabled: false
    vault:
      enabled: false
    customSizing: {}
    sidecars: {}
    initContainers: {}
    hostAliases: {}
    affinity: {}
    nodeSelectors: {}
    gitConfig:
      upstreamUser: spinnaker
    livenessProbeConfig:
      enabled: false
    haServices:
      clouddriver:
        enabled: false
        disableClouddriverRoDeck: false
      echo:
        enabled: false
  persistentStorage:
    persistentStoreType: s3
    azs: {}
    gcs:
      rootFolder: front50
    redis: {}
    s3:
      bucket: spin-763f86d5-10ba-497e-9348-264fc353edec
      rootFolder: front50
      pathStyleAccess: false
      endpoint: https://localhost:9001
      accessKeyId: AKIAIOSFODNN7EXAMPLE
      secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    oracle: {}
  features:
    auth: false
    fiat: false
    chaos: false
    entityTags: false
    jobs: false
  metricStores:
    datadog:
      enabled: false
      tags: []
    prometheus:
      enabled: false
      add_source_metalabels: true
    stackdriver:
      enabled: false
    period: 30
    enabled: false
  notifications:
    slack:
      enabled: false
    twilio:
      enabled: false
      baseUrl: https://api.twilio.com/
  timezone: America/Los_Angeles
  ci:
    jenkins:
      enabled: false
      masters: []
    travis:
      enabled: false
      masters: []
    wercker:
      enabled: false
      masters: []
    concourse:
      enabled: false
      masters: []
    gcb:
      enabled: false
      accounts: []
  repository:
    artifactory:
      enabled: false
      searches: []
  security:
    apiSecurity:
      ssl:
        enabled: false
    uiSecurity:
      ssl:
        enabled: false
    authn:
      oauth2:
        enabled: false
        client: {}
        resource: {}
        userInfoMapping: {}
      saml:
        enabled: false
        userAttributeMapping: {}
      ldap:
        enabled: false
      x509:
        enabled: false
      iap:
        enabled: false
      enabled: false
    authz:
      groupMembership:
        service: EXTERNAL
        google:
          roleProviderType: GOOGLE
        github:
          roleProviderType: GITHUB
        file:
          roleProviderType: FILE
        ldap:
          roleProviderType: LDAP
      enabled: false
  artifacts:
    bitbucket:
      enabled: false
      accounts: []
    gcs:
      enabled: false
      accounts: []
    oracle:
      enabled: false
      accounts: []
    github:
      enabled: false
      accounts: []
    gitlab:
      enabled: false
      accounts: []
    http:
      enabled: false
      accounts: []
    helm:
      enabled: false
      accounts: []
    s3:
      enabled: false
      accounts: []
    maven:
      enabled: false
      accounts: []
    templates: []
  pubsub:
    enabled: false
    google:
      enabled: false
      pubsubType: GOOGLE
      subscriptions: []
      publishers: []
  canary:
    enabled: false
    serviceIntegrations:
    - name: google
      enabled: false
      accounts: []
      gcsEnabled: false
      stackdriverEnabled: false
    - name: prometheus
      enabled: false
      accounts: []
    - name: datadog
      enabled: false
      accounts: []
    - name: signalfx
      enabled: false
      accounts: []
    - name: aws
      enabled: false
      accounts: []
      s3Enabled: false
    reduxLoggerEnabled: true
    defaultJudge: NetflixACAJudge-v1.0
    stagesEnabled: true
    templatesEnabled: true
    showAllConfigsEnabled: true
  webhook:
    trust:
      enabled: false
I have my Kubernetes config and can access it, so separately everything seems to work. What may be wrong?
As per the issue reported:
WARNING You have not specified a Kubernetes context in your halconfig,
Spinnaker will use "minikube" instead.
I don't see any Kubernetes context entry defined in your hal config; see the dedicated chapter in the Spinnaker guideline document.
Try adding the Kubernetes details to the halyard config:
hal config provider kubernetes account add <ACCOUNT>
hal config provider kubernetes enable
This link can be used for reference: https://www.spinnaker.io/reference/halyard/commands/
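For reference, a kubeconfig context entry ties a named cluster and user together, and current-context selects which one halyard uses when no --context is given. A minimal sketch with placeholder names (not taken from the question):

```yaml
contexts:
- context:
    cluster: my-cluster      # must match a name under `clusters:`
    user: my-user            # must match a name under `users:`
  name: my-context
current-context: my-context  # read when no --context is passed to hal
```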

(RKE) FATA[0212] [etcd] Failed to bring up Etcd Plane: [etcd] Etcd Cluster is not healthy

I am trying to set up a small Kubernetes cluster with 2 nodes using RKE. The 2 nodes are Ubuntu Server VMs running in VirtualBox, both with a bridged connection.
ip of vm 1: xx.xx.xx.61
ip of vm 2: xx.xx.xx.67
When I launch the cluster using rke up I get the following error:
FATA[0212] [etcd] Failed to bring up Etcd Plane: [etcd] Etcd Cluster is not healthy
When I subsequently run kubectl commands such as kubectl --kubeconfig kube_config_cluster.yml version, I get the following error:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server xx.xx.xx.67:6443 was refused - did you specify the right host or port?
I'm not sure if these errors are caused by the same underlying issue.
What could be causing this, and how could I troubleshoot it? Are there any particular log files I could look into?
This is what the cluster.yml looks like:
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: xx.xx.xx.61
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
- address: xx.xx.xx.67
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include:
- https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
- https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml
system_images:
  etcd: rancher/coreos-etcd:v3.3.10-rancher1
  alpine: rancher/rke-tools:v0.1.34
  nginx_proxy: rancher/rke-tools:v0.1.34
  cert_downloader: rancher/rke-tools:v0.1.34
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.34
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  coredns: rancher/coredns-coredns:1.3.1
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  kubernetes: rancher/hyperkube:v1.14.3-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
The etcd deployment scale is generally an odd number (1, 3, 5, ...). I see that you have given both nodes the etcd role, which may be causing quorum issues. Can you assign the etcd role to only one of the nodes and run rke up again?
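The quorum arithmetic behind this advice can be sketched in a few lines of Python (the helper name is mine, not part of RKE or etcd): etcd needs a strict majority of members to be healthy, so a 2-node cluster tolerates zero failures while doubling the ways to lose quorum.

```python
def etcd_fault_tolerance(members: int) -> int:
    """How many etcd members can fail while the cluster keeps quorum.

    Quorum is a strict majority of the member count, so the cluster
    survives at most members - quorum failures.
    """
    quorum = members // 2 + 1
    return members - quorum

# A 2-node etcd plane tolerates zero failures -- no better than 1 node.
assert etcd_fault_tolerance(1) == 0
assert etcd_fault_tolerance(2) == 0
assert etcd_fault_tolerance(3) == 1
assert etcd_fault_tolerance(5) == 2
```

This is why even member counts are discouraged: going from 2 to 3 nodes adds fault tolerance, but going from 1 to 2 adds none.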

Configure dashboard via values

As the title indicates, I'm trying to set up Grafana using helmfile, with a default dashboard provided via values.
The relevant part of my helmfile is here:
releases:
...
- name: grafana
  namespace: grafana
  chart: stable/grafana
  values:
  - datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          url: http://prometheus-server.prometheus.svc.cluster.local
          isDefault: true
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  - dashboards:
      default:
        k8s:
          url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider and can then refer to a dashboard by URL. However, when I do as shown above, no dashboard is installed, and when I do as below:
- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message:
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining the datasources, dashboardProviders and dashboards as lists rather than maps, so you need to remove the hyphens, meaning that the values section becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-prometheus-server
        access: proxy
        isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The Grafana chart has them as maps, and using helmfile doesn't change that.
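The "wrong type for value" error in the question also follows from the shape of dashboards itself: the chart's template iterates over dashboards.default and expects each value to be a map (something with a url key), but with the k8s nesting level missing, the value is a bare string. A rough Python analogue of the check the template effectively performs (structure only; the real logic lives in the chart's Go templates):

```python
# Two shapes of the `dashboards` value: missing the named-dashboard level
# (each value under `default` is a string), versus the expected shape
# (each value under `default` is a map with a url key).
dashboards_wrong = {
    "default": {"url": "https://grafana.com/api/dashboards/8588/revisions/1/download"},
}
dashboards_right = {
    "default": {"k8s": {"url": "https://grafana.com/api/dashboards/8588/revisions/1/download"}},
}

def values_are_maps(dashboards: dict) -> bool:
    """True when every entry under `default` is a map, as the template expects."""
    return all(isinstance(v, dict) for v in dashboards["default"].values())

assert not values_are_maps(dashboards_wrong)  # "url" maps to a string -> template errors
assert values_are_maps(dashboards_right)      # "k8s" maps to a dict -> renders fine
```

So each dashboard needs its own named key (k8s above) whose value is the map carrying url.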