How to set up kube-aggregator?

I'm using Kubernetes 1.6.4 and want to enable custom metrics for autoscaling, but that requires kube-aggregator. I did not find any documentation on how to set it up and integrate it with the API server. Any help is appreciated.
Thanks

You need Kubernetes 1.7+ to use the API aggregation feature. Follow the GitHub project below, which has all the files required to set up a metrics server and custom metrics using the Prometheus adapter (a sketch of the aggregator registration follows after the link):
https://github.com/pawankkamboj/k8s-custom-metrics
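For reference, the aggregator exposes an extension API server by registering it through an APIService object. Here is a minimal sketch of that registration for a custom-metrics adapter; the Service name and namespace are assumptions and must match however the adapter from the repo above is deployed:

    # Registers a custom-metrics API server with the aggregation layer.
    # apiregistration.k8s.io/v1beta1 is the version available on 1.7-era clusters.
    apiVersion: apiregistration.k8s.io/v1beta1
    kind: APIService
    metadata:
      name: v1beta1.custom.metrics.k8s.io
    spec:
      service:
        name: custom-metrics-apiserver   # assumed Service name
        namespace: monitoring            # assumed namespace
      group: custom.metrics.k8s.io
      version: v1beta1
      insecureSkipTLSVerify: true
      groupPriorityMinimum: 100
      versionPriority: 100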

Related

Has anybody successfully set up a Hyperledger Fabric network on a Minikube environment?

Are there any examples of creating a blockchain network on top of a Minikube environment?
Take a look at the following repositories, where you can find Helm charts that should facilitate the whole deployment process:
https://github.com/hyfen-nl/PIVT
https://github.com/splunk/hyperledger-fabric-k8s
Note that they require Helm 2.11 or newer and Helm 2.16 or newer, respectively, installed in your Kubernetes cluster; a rough install sketch follows below.
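A rough sketch of how a deployment with the first repo typically starts; the chart path and release name here are assumptions, so follow the repo's README for the actual charts and values:

    # Check the Helm client version first (the repos need 2.11+ / 2.16+).
    helm version --short

    # Clone the repo and install its chart (Helm 2 syntax); the chart path
    # and release name are placeholders, check the README.
    git clone https://github.com/hyfen-nl/PIVT
    helm install ./PIVT/fabric-kube/hlf-kube --name hlf-kube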

Using cortex for scraping metrics

I'm running cortex on kubernetes in a dev environment to curate a dataset of metrics from multiple applications.
From what I'm reading, Cortex reuses a lot of Prometheus source code. Is there a way to configure Cortex to scrape metrics the way Prometheus does (based on annotations, maybe?) without having to run instances of Prometheus?
This is just for some research, not for production.
Take a look at the Grafana Cloud Agent. There is also a more lightweight scraper for Prometheus metrics: vmagent.
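Both accept Prometheus-style scrape configuration, so annotation-based scraping looks roughly like this (a minimal sketch; vmagent reads this format via -promscrape.config, and the Grafana Agent embeds the same scrape_configs syntax in its own config file; remote write to Cortex's push endpoint is configured separately):

    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # scrape only pods annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
          # honour a custom metrics path annotation if present
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)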
According to the maintainers, there's no way to do this with Cortex itself.

How to access helm programmatically

I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but there was nothing there. I wrote a bash script, but its output is really just a string, so it's not very useful.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and retrieve Kubernetes objects using an HTTP client. Kubernetes object manifests are available in JSON and YAML formats (a small Python sketch follows at the end of this answer).
If you are OK with using Go, you can use the Helm 3 Go API.
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
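As a small sketch of the "just read the Kubernetes objects" approach (assuming the official Python client, pip install kubernetes, and a working kubeconfig; Helm 3 stores its release records in Secrets labelled owner=helm by default):

    # List the Secrets in which Helm 3 keeps its release records.
    from kubernetes import client, config

    config.load_kube_config()          # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    secrets = v1.list_secret_for_all_namespaces(label_selector="owner=helm")
    for s in secrets.items:
        print(s.metadata.namespace, s.metadata.name)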
I reached this question as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the k8s API directly would work, as some charts can get fairly complex in terms of k8s resources, so I got some inspiration and I think I will develop my own package as a wrapper around the Helm CLI commands, using the -o json param for easier handling of the CLI output.
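The wrapper idea in a minimal sketch (shown in Python rather than npm, but the approach is the same; it assumes a Helm 3 binary on the PATH):

    import json
    import subprocess

    def helm(*args):
        """Run a helm command and return its parsed JSON output."""
        result = subprocess.run(
            ["helm", *args, "-o", "json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(result.stdout)

    # e.g. list the releases in a namespace...
    releases = helm("list", "--namespace", "default")
    print([r["name"] for r in releases])

    # ...or install a chart (path and release name are placeholders)
    # helm("install", "my-release", "./my-chart", "--namespace", "default")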

How can I configure Hazelcast distributed maps using WildFly with hazelcast-jca?

I'm trying to use Hazelcast with Wildfly.
Following the instructions provided on the Hazelcast website, I could start a cluster using hazelcast-jca and hazelcast-jca-rar.
What I don't know is: where do I configure the distributed maps?
You need to configure the maps in hazelcast.xml and put that configuration file on your classpath; a minimal example follows after the links below.
http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#map
Here you can find some code samples for the Hazelcast Resource Adapter:
https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/jca-ra
And the documentation for the Hazelcast Resource Adapter, which you already know:
http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#integrating-into-j2ee
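A minimal sketch of such a hazelcast.xml (the map name and limits are just example values for a Hazelcast 3.x configuration; tune them for your own maps):

    <hazelcast xmlns="http://www.hazelcast.com/schema/config">
        <map name="my-distributed-map">
            <!-- one synchronous backup per entry -->
            <backup-count>1</backup-count>
            <!-- 0 means entries never expire -->
            <time-to-live-seconds>0</time-to-live-seconds>
            <!-- evict least-recently-used entries above 10000 per node -->
            <eviction-policy>LRU</eviction-policy>
            <max-size policy="PER_NODE">10000</max-size>
        </map>
    </hazelcast>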

Google Cloud SDK from a Dataproc Cluster

What is the right way to use/install the Python Google Cloud APIs, such as Pub/Sub, from a Google Dataproc cluster? For example, if I'm using Zeppelin/PySpark on the cluster and I want to use the Pub/Sub API, how should I prepare for it?
It is unclear to me what is and is not installed during default cluster provisioning, and if/how I should try to install Python libraries for the Google Cloud APIs.
I realise there may additionally be scopes/authentication to set up.
To be clear, I can use the APIs locally, but I am not sure what the cleanest way is to make them accessible from the cluster, and I don't want to perform any unnecessary steps.
In general, at the moment, you need to bring your own client libraries for the various Google APIs, unless you are using the Google Cloud Storage connector or the BigQuery connector from Java, or via RDD methods in PySpark, which automatically delegate into the Java implementations.
For authentication, you should simply use --scopes https://www.googleapis.com/auth/pubsub and/or --scopes https://www.googleapis.com/auth/cloud-platform, and the service account on the Dataproc cluster's VMs will be able to authenticate to use Pub/Sub via the default installed-credentials flow.
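For example (a minimal sketch; the project and topic names are placeholders, and it assumes the cluster was created with the Pub/Sub scope and that google-cloud-pubsub has been pip-installed on the nodes, e.g. via an initialization action):

    # Run from a Zeppelin/PySpark notebook or job on the cluster.
    # Assumes: gcloud dataproc clusters create ... --scopes https://www.googleapis.com/auth/pubsub
    # and `pip install google-cloud-pubsub` on the nodes.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()   # picks up the VM service account credentials
    topic_path = publisher.topic_path("my-project", "my-topic")   # placeholder names
    future = publisher.publish(topic_path, b"hello from dataproc")
    print(future.result())                    # prints the message id once published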