How to wait until env for appid is created in jelastic manifest installation?

I have the following manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  baseUrl: https://raw.githubusercontent.com/shopozor/services/dev
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: version
        type: string
        caption: Version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - enableSubDomains
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cmd
          cmd: |-
            curl -fsSL ${baseUrl}/scripts/install_k8s.sh | /bin/bash
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.version}
          jaeger: false
    enableSubDomains:
      - jelastic.env.binder.AddDomains[cp]:
          domains: staging,api-staging,assets-staging,api,assets
Unfortunately, when I run that manifest, the k8s cluster gets installed, but the subdomains cannot be created (yet), because:
[15:26:28 Shopozor.cluster:3]: enableSubDomains: {"action":"enableSubDomains","params":{}}
[15:26:29 Shopozor.cluster:4]: api [cp]: {"method":"jelastic.env.binder.AddDomains","params":{"domains":"staging,api-staging,assets-staging,api,assets"},"nodeGroup":"cp"}
[15:26:29 Shopozor.cluster:4]: ERROR: api.response: {"result":2303,"source":"JEL","error":"env for appid [5ce25f5a6988fbbaf34999b08dd1d47c] not created."}
What Jelastic API methods can I use to wait until subdomain creation is possible?
My current workaround is to split that manifest into two manifests: one cluster installation manifest and one update manifest creating the subdomains. However, I'd like to have everything in the same manifest.

Please change this:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      domains: staging,api-staging,assets-staging,api,assets
to:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      envName: ${settings.envName}
      domains: staging,api-staging,assets-staging,api,assets
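If you prefer to keep the call explicitly scoped to the environment created by installKubernetes, a hedged, untested sketch of an alternative is to wrap the action in a nested update manifest bound to that environment (the nested manifest's name here is arbitrary); this mirrors the nested-update pattern Jelastic support suggests for AttachExtIp elsewhere on this page:
enableSubDomains:
  install:
    envName: ${settings.envName}
    jps:
      type: update
      name: Enable Sub Domains
      onInstall:
        - jelastic.env.binder.AddDomains[cp]:
            domains: staging,api-staging,assets-staging,api,assets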

Related

Kibana - Elastic - Fleet - APM - failed to listen:listen tcp bind: can't assign requested address

Having set up Kibana and a Fleet Server, I have now attempted to add APM.
When going through the general setup, I always get an error no matter what is done:
failed to listen:listen tcp *.*.*.*:8200: bind: can't assign requested address
This is when following the steps for setup of APM having created the fleet server.
This is all being launched in Kubernetes and the documentation has been gone through several times to no avail.
We did discover that we can hit the /intake/v2/events and related endpoints when shelled into the container, but get a 404 for everything else. It's close but no cigar so far following the instructions.
As it turned out, the general walkthrough is soon to be deprecated in its current form.
Setup is far simpler in a Helm values file, where it's actually possible to configure Kibana with a package reference for your named APM service.
xpack.fleet.packages:
  - name: system
    version: latest
  - name: elastic_agent
    version: latest
  - name: fleet_server
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server on ECK policy
    id: eck-fleet-server
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    package_policies:
      - name: fleet_server-1
        id: fleet_server-1
        package:
          name: fleet_server
  - name: Elastic Agent on ECK policy
    id: eck-agent
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    unenroll_timeout: 900
    is_default: true
    package_policies:
      - name: system-1
        id: system-1
        package:
          name: system
      - package:
          name: apm
        name: apm-1
        inputs:
          - type: apm
            enabled: true
            vars:
              - name: host
                value: 0.0.0.0:8200
Making sure these are set in the Kibana Helm values will allow any spun-up Fleet Server to automatically register as having APM.
The missing key in seemingly all the documentation is the need for an APM service.
The simplest example of which is here:
Example yaml scripts
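For illustration only, here is a hedged sketch of what such an APM Service might look like in Kubernetes. The Service name, namespace, and pod selector label are assumptions; match the selector to whatever labels your Elastic Agent pods actually carry in your ECK setup:
apiVersion: v1
kind: Service
metadata:
  name: apm            # assumed name; anything resolvable by your APM clients works
  namespace: default   # assumed namespace
spec:
  selector:
    agent.k8s.elastic.co/name: elastic-agent   # assumption: adjust to your Elastic Agent pod labels
  ports:
    - name: apm
      port: 8200
      targetPort: 8200   # the APM input configured above listens on 0.0.0.0:8200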

What is wrong with my JupyterHub config file? Error when using Helm to install JupyterHub

I am attempting to deploy JupyterHub into my Kubernetes cluster, as I want to have it integrated with a self-hosted GitLab instance running in the same cluster.
I have followed the steps on the GitLab documentation page as shown here.
However, when I attempt to install this Helm chart with this command
helm install jupyterhub/jupyterhub --namespace jupyter --version=1.2.0 --values values.yaml --generate-name ... I get the errors below.
- (root): Additional property gitlab is not allowed
- hub.extraConfig: Invalid type. Expected: object, given: string
- ingress: Additional property host is not allowed
I am using helm chart from https://github.com/jupyterhub/helm-chart. see below for my values.yaml
VALUES.YAML
#-----------------------------------------------------------------------------
# The gitlab and ingress sections must be customized!
#-----------------------------------------------------------------------------
gitlab:
  clientId: <Your OAuth Application ID>
  clientSecret: <Your OAuth Application Secret>
  callbackUrl: http://<Jupyter Hostname>/hub/oauth_callback
  # Limit access to members of specific projects or groups:
  # allowedGitlabGroups: [ "my-group-1", "my-group-2" ]
  # allowedProjectIds: [ 12345, 6789 ]
# ingress is required for OAuth to work
ingress:
  enabled: true
  host: <JupyterHostname>
  # tls:
  #   - hosts:
  #       - <JupyterHostname>
  #     secretName: jupyter-cert
  # annotations:
  #   kubernetes.io/ingress.class: "nginx"
  #   kubernetes.io/tls-acme: "true"
#-----------------------------------------------------------------------------
# NO MODIFICATIONS REQUIRED BEYOND THIS POINT
#-----------------------------------------------------------------------------
hub:
  extraEnv:
    JUPYTER_ENABLE_LAB: 1
  extraConfig: |
    c.KubeSpawner.cmd = ['jupyter-labhub']
    c.GitLabOAuthenticator.scope = ['api read_repository write_repository']

    async def add_auth_env(spawner):
        '''
        We set user's id, login and access token on single user image to
        enable repository integration for JupyterHub.
        See: https://gitlab.com/gitlab-org/gitlab-foss/issues/47138#note_154294790
        '''
        auth_state = await spawner.user.get_auth_state()
        if not auth_state:
            spawner.log.warning("No auth state for %s", spawner.user)
            return
        spawner.environment['GITLAB_ACCESS_TOKEN'] = auth_state['access_token']
        spawner.environment['GITLAB_USER_LOGIN'] = auth_state['gitlab_user']['username']
        spawner.environment['GITLAB_USER_ID'] = str(auth_state['gitlab_user']['id'])
        spawner.environment['GITLAB_USER_EMAIL'] = auth_state['gitlab_user']['email']
        spawner.environment['GITLAB_USER_NAME'] = auth_state['gitlab_user']['name']

    c.KubeSpawner.pre_spawn_hook = add_auth_env

auth:
  type: gitlab
  state:
    enabled: true

singleuser:
  defaultUrl: "/lab"
  image:
    name: registry.gitlab.com/gitlab-org/jupyterhub-user-image
    tag: latest
  lifecycleHooks:
    postStart:
      exec:
        command:
          - "sh"
          - "-c"
          - >
            git clone https://gitlab.com/gitlab-org/nurtch-demo.git DevOps-Runbook-Demo || true;
            echo "https://oauth2:${GITLAB_ACCESS_TOKEN}@${GITLAB_HOST}" > ~/.git-credentials;
            git config --global credential.helper store;
            git config --global user.email "${GITLAB_USER_EMAIL}";
            git config --global user.name "${GITLAB_USER_NAME}";
            jupyter serverextension enable --py jupyterlab_git

proxy:
  service:
    type: ClusterIP
The single-string form of hub.extraConfig was deprecated in chart version 0.6 in favor of a dict of named snippets (which don't conflict with each other). The new structure looks like:
hub:
  extraConfig:
    myConfigName: |
      print("hi", flush=True)
Please use the following issue as a reference: issues/1009.
Regarding the "- (root): Additional property gitlab is not allowed" and "- ingress: Additional property host is not allowed" errors, check the indentation, as it is critical in YAML files. Please use the following threads as a reference: Additional property {property} is not allowed and docker compose file not working.
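For reference, a hedged sketch of how the failing sections could be restructured for chart 1.2.0, assuming the zero-to-jupyterhub schema where authenticator settings live under hub.config, extraConfig is a dict of named snippets, and the ingress host goes in the ingress.hosts list (the placeholder values and the snippet name gitlabAuth are hypothetical):
hub:
  config:
    JupyterHub:
      authenticator_class: gitlab
    GitLabOAuthenticator:
      client_id: <Your OAuth Application ID>
      client_secret: <Your OAuth Application Secret>
      oauth_callback_url: http://<Jupyter Hostname>/hub/oauth_callback
  extraConfig:
    gitlabAuth: |
      c.KubeSpawner.cmd = ['jupyter-labhub']
      # ... the rest of the original extraConfig snippet goes here unchanged
ingress:
  enabled: true
  hosts:
    - <JupyterHostname>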

Is it possible to create a CloudFormation template to deploy to AWS EKS?

I mean, I have an application which is already dockerized, can I provide a cloudformation template to deploy it on the EKS cluster of my client?
I have been using CloudFormation for some time; however, I have never used it for deploying Kubernetes artifacts (and I've never heard of anybody else doing so). I think there is a way to do it (see the AWS Blog), but even that solution seems to be based on Helm.
I would definitely recommend using Helm charts for your use case. Helm charts are straightforward and easy to use, especially if you already know the Kubernetes objects you want to deploy.
You can use cdk8s.io. Here are some examples: https://github.com/awslabs/cdk8s/tree/master/examples
Deploy an Amazon EKS cluster by using the Modular and Scalable Amazon EKS Architecture Quick Start. After the Amazon EKS cluster is deployed, on the Outputs tab, note the following outputs.
HelmLambdaArn
KubeClusterName
KubeConfigPath
KubeGetLambdaArn
The template below installs the WordPress Helm chart the same as if you logged in to the Kubernetes cluster and ran the following command.
helm install stable/wordpress
The following section of the template shows how Helm is used to deploy WordPress. It also creates a load balancer host name, so that you can access the WordPress site.
Resources:
  HelmExample:
    Type: "Custom::Helm"
    Version: '1.0'
    Description: 'This deploys the Helm Chart to deploy wordpress in to the EKS Cluster.'
    Properties:
      ServiceToken: !Ref HelmLambdaArn
      KubeConfigPath: !Ref KubeConfigPath
      KubeConfigKmsContext: !Ref KubeConfigKmsContext
      KubeClusterName: !Ref KubeClusterName
      Namespace: !Ref Namespace
      Chart: stable/wordpress
      Name: !Ref Name
      Values:
        wordpressUsername: !Ref wordpressUsername
        wordpressPassword: !Ref wordpressPassword
  WPElbHostName:
    DependsOn: HelmExample
    Type: "Custom::KubeGet"
    Version: '1.0'
    Properties:
      ServiceToken: !Ref KubeGetLambdaArn
      KubeConfigPath: !Ref KubeConfigPath
      KubeConfigKmsContext: !Ref KubeConfigKmsContext
      Namespace: !Ref Namespace
      Name: !Sub 'service/${Name}-wordpress'
      JsonPath: '{.status.loadBalancer.ingress[0].hostname}'
Modify the helm chart to fit your application and modify the cloudformation template with the values you got from the output previously.
These are the parameters you will have to fill in when deploying the CloudFormation template (see the sketch after this list):
HelmLambdaArn
KubeClusterName
KubeConfigPath
KubeGetLambdaArn
Namespace
Name
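As a hedged sketch, these might be declared in the template's Parameters section along the lines below. The names follow the Quick Start outputs and the resource properties shown above; the defaults are hypothetical:
Parameters:
  HelmLambdaArn:
    Type: String          # from the Quick Start Outputs tab
  KubeClusterName:
    Type: String
  KubeConfigPath:
    Type: String
  KubeGetLambdaArn:
    Type: String
  KubeConfigKmsContext:
    Type: String
    Default: "EKSQuickStart"   # hypothetical default
  Namespace:
    Type: String
    Default: default
  Name:
    Type: String
    Default: myrelease         # hypothetical Helm release name
  wordpressUsername:
    Type: String
    Default: admin
  wordpressPassword:
    Type: String
    NoEcho: true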
You can use the AWS Quick Start extensions to deploy workloads to EKS:
- AWSQS::Kubernetes::Resource
- AWSQS::Kubernetes::Helm
Before you can use the new types, activate them in a helper template:
EKSHelmExtension:
  Type: AWS::CloudFormation::TypeActivation
  Properties:
    AutoUpdate: false
    ExecutionRoleArn: !GetAtt DeployClusterRole.Arn
    PublicTypeArn: !Sub "arn:aws:cloudformation:${AWS::Region}::type/resource/408988dff9e863704bcc72e7e13f8d645cee8311/AWSQS-Kubernetes-Helm"
EKSResourceExtension:
  Type: AWS::CloudFormation::TypeActivation
  Properties:
    AutoUpdate: false
    ExecutionRoleArn: !GetAtt DeployClusterRole.Arn
    PublicTypeArn: !Sub "arn:aws:cloudformation:${AWS::Region}::type/resource/408988dff9e863704bcc72e7e13f8d645cee8311/AWSQS-Kubernetes-Resource"
Then, in the main template, use the new types as follows:
Resources:
  ExampleCm:
    Type: "AWSQS::Kubernetes::Resource"
    Properties:
      ClusterName: my-eks-cluster-name
      Namespace: default
      Manifest: |
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-cm
        data:
          example_key: example_value
Helm:
Resources:
  KubeStateMetrics:
    Type: "AWSQS::Kubernetes::Helm"
    Properties:
      ClusterID: my-cluster-name
      Name: kube-state-metrics
      Namespace: kube-state-metrics
      Repository: https://prometheus-community.github.io/helm-charts
      Chart: prometheus-community/kube-state-metrics
      ValueYaml: |
        prometheus:
          monitor:
            enabled: true

Why can't I attach an external IP through my jelastic installation manifest?

I have a very simple jelastic installation manifest which installs a kubernetes cluster:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - attachIpToWorkerNodes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
    attachIpToWorkerNodes:
      - forEach(node:nodes.cp):
          - jelastic.env.binder.AttachExtIp:
              envName: ${settings.envName}
              nodeId: ${#node.id}
If I install that manifest, I get my cluster up and running, but the worker nodes do not get an IPv4 address attached. After installing that manifest, if I additionally install the following update manifest, then it works:
jpsVersion: 1.3
jpsType: update
application:
  id: attach-ext-ip
  name: Attach external IP
  version: 0.0
  onInstall:
    - attachIpToWorkerNodes
  actions:
    attachIpToWorkerNodes:
      - forEach(node:nodes.cp):
          - jelastic.env.binder.AttachExtIp:
              nodeId: ${#node.id}
What am I doing wrong in the install manifest? Why aren't the IPs attached to my worker nodes, while they are if I perform that action after installation with an update manifest?
Please note that the "public IP binding" feature is not available in production yet. It's under active development and will be officially announced in one of our next releases.
In the current stable version, some of the functionality related to it may not work properly. Right now, it's not recommended for production use, but you can try it for test purposes only.
As for the "attachIpToWorkerNodes" action in the original manifest, the issue was that "nodes.cp" of the environment created wasn't declared in the scope where "forEach" was invoked. The correct version of the action is:
attachIpToWorkerNodes:
  install:
    envName: ${settings.envName}
    jps:
      type: update
      name: Attach IP To Worker Nodes
      onInstall: jelastic.env.binder.AttachExtIp [nodes.cp.join(id,)]
Please let us know if you have any further questions.

gke cluster deployment with custom network

I am trying to create a YAML file to deploy a GKE cluster in a custom network I created. I get an error:
JSON payload received. Unknown name \"network\": Cannot find field.
I have tried a few names for the resources, but I am still seeing the same issue.
resources:
  - name: myclus
    type: container.v1.cluster
    properties:
      network: projects/project-251012/global/networks/dev-cloud
      zone: "us-east4-a"
      cluster:
        initialClusterVersion: "1.12.9-gke.13"
        currentMasterVersion: "1.12.9-gke.13"
        ## Initial NodePool config.
        nodePools:
          - name: "myclus-pool1"
            initialNodeCount: 3
            version: "1.12.9-gke.13"
            config:
              machineType: "n1-standard-1"
              oauthScopes:
                - https://www.googleapis.com/auth/logging.write
                - https://www.googleapis.com/auth/monitoring
                - https://www.googleapis.com/auth/ndev.clouddns.readwrite
              preemptible: true
  ## Duplicates node pool config from v1.cluster section, to get it explicitly managed.
  - name: myclus-pool1
    type: container.v1.nodePool
    properties:
      zone: us-east4-a
      clusterId: $(ref.myclus.name)
      nodePool:
        name: "myclus-pool1"
I expect it to place the cluster nodes in this network.
The network field needs to be part of the cluster spec. The top level of properties should just be zone and cluster; network should be at the same indentation as initialClusterVersion. See more on the container.v1.cluster API reference page.
Your manifest should look more like:
EDIT: there is some confusion in the API reference docs concerning deprecated fields. I offered a YAML that applies to the new API, not the one you are using. I've updated with the correct syntax for the basic v1 API, and further down I've added the newer API (which currently relies on gcp-types to deploy).
resources:
  - name: myclus
    type: container.v1.cluster
    properties:
      projectId: [project]
      zone: us-central1-f
      cluster:
        name: my-clus
        zone: us-central1-f
        network: [network_name]
        subnetwork: [subnet]  ### leave this field blank if using the default network
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 0
            config:
              imageType: cos
  - name: my-pool-1
    type: container.v1.nodePool
    properties:
      projectId: [project]
      zone: us-central1-f
      clusterId: $(ref.myclus.name)
      nodePool:
        name: my-clus-pool2
        initialNodeCount: 0
        version: "1.13"
        config:
          imageType: ubuntu
The newer API (which provides more functionality and allows you to use more features including the v1beta1 API and beta features) would look something like this:
resources:
  - name: myclus
    type: gcp-types/container-v1:projects.locations.clusters
    properties:
      parent: projects/shared-vpc-231717/locations/us-central1-f
      cluster:
        name: my-clus
        zone: us-central1-f
        network: shared-vpc
        subnetwork: local-only  ### leave this field blank if using the default network
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 0
            config:
              imageType: cos
  - name: my-pool-2
    type: gcp-types/container-v1:projects.locations.clusters.nodePools
    properties:
      parent: projects/shared-vpc-231717/locations/us-central1-f/clusters/$(ref.myclus.name)
      nodePool:
        name: my-clus-separate-pool
        initialNodeCount: 0
        version: "1.13"
        config:
          imageType: ubuntu
Another note: you may want to modify your scopes. The current scopes will not allow you to pull images from gcr.io, so some system pods may not spin up properly, and if you are using Google's repository, you will be unable to pull those images (see the sketch below).
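For example, a hedged sketch of a node pool config whose scopes also allow gcr.io pulls; devstorage.read_only is the relevant addition, and the remaining scopes simply mirror the ones already used above rather than being a required set:
config:
  machineType: "n1-standard-1"
  oauthScopes:
    - https://www.googleapis.com/auth/devstorage.read_only   # allows pulling images from gcr.io
    - https://www.googleapis.com/auth/logging.write
    - https://www.googleapis.com/auth/monitoring
    - https://www.googleapis.com/auth/ndev.clouddns.readwrite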
Finally, you don't want to repeat the node pool resource in both the cluster spec and separately. Instead, create the cluster with a basic (default) node pool; for all additional node pools, create them as separate resources so you can manage them without going through the cluster. There are very few updates you can perform on a node pool, aside from resizing.