Cannot log in to Superset after installing it on AWS Kubernetes

I installed amancevice/superset on AWS Kubernetes.
When I open the load balancer DNS name, I can see the Superset login page, but the default admin/admin login does not work.
Is there anything I missed?
Here's the YAML file I used to install Superset:
superset.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: superset-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: superset
  template:
    metadata:
      labels:
        app: superset
    spec:
      containers:
      - name: superset
        image: amancevice/superset:latest
        ports:
        - containerPort: 8088
kubectl create -f superset.yaml
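As an aside, the manifest above only defines the Deployment; the load balancer DNS name implies a Service in front of it. A minimal sketch of what that might look like, where the name superset-service and the port mapping are assumptions and not part of the original setup:
apiVersion: v1
kind: Service
metadata:
  name: superset-service   # hypothetical name, not from the original manifest
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: superset          # matches the Deployment's pod labels above
  ports:
  - port: 80
    targetPort: 8088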

To initialize the database with an admin user, run the superset-init helper script inside the running container:
kubectl exec -it superset -- superset-init
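Note that the Deployment above generates a pod name like superset-deployment-<hash>, so the pod will not literally be named superset. A minimal sketch of looking it up first, assuming the app=superset label from the manifest:
# Find the pod created by the Deployment, then run the init script inside it
kubectl get pods -l app=superset
kubectl exec -it <pod-name> -- superset-init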

Related

TensorFlow Setting model_config_file runtime argument in YAML file for K8s

I've been having a hell of a time trying to figure out how to serve multiple models using a YAML configuration file for K8s.
I can run it directly in Bash using the following, but I'm having trouble converting it to YAML.
docker run -p 8500:8500 -p 8501:8501 \
[container id] \
--model_config_file=/models/model_config.config \
--model_config_file_poll_wait_seconds=60
I read that model_config_file can be added using a command element, but I'm not sure where to put it, and I keep receiving errors about invalid commands or not being able to find the file.
command:
- '--model_config_file=/models/model_config.config'
- '--model_config_file_poll_wait_seconds=60'
A sample YAML config for K8s is below; where would the command go, referencing the docker run command above?
---
apiVersion: v1
kind: Namespace
metadata:
  name: model-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-test-rw-deployment
  namespace: model-test
spec:
  selector:
    matchLabels:
      app: rate-predictions-server
  replicas: 1
  template:
    metadata:
      labels:
        app: rate-predictions-server
    spec:
      containers:
      - name: rate-predictions-container
        image: aws-ecr-path
        command:
        - --model_config_file=/models/model_config.config
        - --model_config_file_poll_wait_seconds=60
        ports:
        #- grpc: 8500
        - containerPort: 8500
        - containerPort: 8501
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: rate-predictions-service
  name: rate-predictions-service
  namespace: model-test
spec:
  type: ClusterIP
  selector:
    app: rate-predictions-server
  ports:
  - port: 8501
    targetPort: 8501
What you are passing seems to be the arguments, not the command. The command is set as the entrypoint of the container, and arguments should be passed in args. Please see the following link.
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
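For example, a minimal sketch of the container spec under that advice, assuming the image's entrypoint is already the TensorFlow Serving binary (as in the official tensorflow/serving image), so only the flags need to go in args:
containers:
- name: rate-predictions-container
  image: aws-ecr-path
  # Leave command unset so the image's entrypoint is used; pass only the flags
  args:
  - --model_config_file=/models/model_config.config
  - --model_config_file_poll_wait_seconds=60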

Docker Desktop error converting YAML to JSON while trying to deploy the voting app

I am using Docker Desktop to run the voting app. I am following the tutorial, but the link in the command line is deprecated:
kubectl apply -f https://raw.githubusercontent.com/docker/docker-birthday/master/resources/kubernetes-docker-desktop/vote.yaml
So I tried to use the link from this repo:
kubectl apply -f https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml
But this error keeps popping up:
error: error parsing https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml: error converting YAML to JSON: YAML: line 92: mapping values are not allowed in this context
---
apiVersion: v1
kind: Service
metadata:
  name: result
  labels:
    app: result
spec:
  type: LoadBalancer
  ports:
What am I doing wrong?
I tried to fetch the file locally and apply it, but got the same error at line 92, using wget https://github.com/dockersamples/docker-fifth-birthday/blob/master/kubernetes-desktop/kube-deployment.yml. However, when I simply copied and pasted the content, it created the services fine, but there are two issues with the project.
The apiVersion in the deployment is apps/v1beta1; it needs to be apps/v1, as per the documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
There are places where the selectors are missing from the deployments, which is why the deployments are not getting created; you will need to fix that. To elaborate, the selector in a deployment's spec has to match the labels in its pod template, and the service's selector has to match those same labels. Below is a working version of a service/deployment from the project.
Why would you do that? Every deployment maintains a set of identical pods, ensuring that they have the correct config and that the right number are running; to access them you expose a service, and the service looks up the pods based on these labels.
If you are looking for learning material, you can check the official documentation below.
https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
          name: redis

What is the URL to this Kubernetes service/pod/docker thing

I need to hard-code the address of a CouchDB instance into another server in my Kubernetes cluster. I'm not super familiar with Kubernetes, but I know that the IP will change each time the cluster or the pod is rebuilt, so I can't use that.
What is the URL to this Kubernetes service / what should I hard-code into my server Docker image so it will always find the CouchDB server in the system? I think it will be in this format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: orderer
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
If "wget 172.17.0.2:5984" works what should "172.17.0.2" be replaced with
The following is not correct
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.couch-service.default.svc.cluster.local:5984
wget kino-couch-0.kino-couch.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.default.svc.cluster.local:5984
wget kino-couch-0.kino-couchdb.svc.cluster.local:5984
For a StatefulSet you need to create a headless Service to be responsible for the network identity of the Pods, providing stable DNS entries. Notice clusterIP: None in the example below.
apiVersion: v1
kind: Service
metadata:
  name: couch-service
  labels:
    app: kino-couch
spec:
  ports:
  - port: 5984
  clusterIP: None
  selector:
    app: kino-couch
The StatefulSet needs to refer to the above service in serviceName, so the StatefulSet YAML would look like this:
# YAML for launching the server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kino-couch
  labels:
    app: kino-couch
spec:
  serviceName: couch-service
  # Single instance of the Orderer Pod is needed
  replicas: 1
  selector:
    matchLabels:
      app: kino-couch
  template:
    metadata:
      labels:
        app: kino-couch
    spec:
      containers:
      - name: kino-couch
        ports:
        - containerPort: 5984
        # Image used
        image: dpacchain/development:dpaccouch
Then as a client you can access it using couch-service.<namespace>.svc.cluster.local:5984 to connect to any of the CouchDB pods.
If you want to connect to a specific pod, use kino-couch-0.couch-service.<namespace>.svc.cluster.local:5984. This is typically needed for connecting the CouchDB pods to each other to form a cluster.
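For example, assuming the default namespace used elsewhere in the question:
# Any CouchDB pod, via the headless service
wget couch-service.default.svc.cluster.local:5984
# A specific pod (ordinal 0) of the StatefulSet
wget kino-couch-0.couch-service.default.svc.cluster.local:5984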

Converting docker run to YAML for Kubernetes

I'm new to Kubernetes. I'm trying to convert the following docker container run command to YAML for Kubernetes.
docker container run -d -p 80:80 --name MyFirstContainerWeb docker/getting-started:pwd
This is what I have come up with so far. Can someone please help me with the ingress part? I'm using Docker Desktop (which has a Kubernetes cluster). My final goal is to see the website in the browser.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started-deployment
spec:
  selector:
    matchLabels:
      app: getting-started
  replicas: 2
  template:
    metadata:
      labels:
        app: getting-started
    spec:
      containers:
      - name: getting-started-container
        image: docker/getting-started:pwd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: getting-started-service
  namespace: default
  labels:
    app: myfirstcontainer
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: getting-started   # must match the pod template labels, not the service name
You can use port-forward to forward to the service port by running:
$ kubectl port-forward svc/getting-started-service 80
To learn more about port-forwarding, see the Kubernetes documentation.
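For the ingress part that was asked about, here is a minimal sketch, assuming an ingress controller (for example ingress-nginx) is installed in the cluster and the Service above is the backend; the resource name is hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: getting-started-ingress   # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: getting-started-service
            port:
              number: 80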

How to create multiple instances of Mediawiki in a Kubernetes Cluster

I'm about to deploy multiple Mediawiki instances on my Kubernetes cluster.
In my case the YAML deployment file for the DB (MySQL) works as it is supposed to, and the deployment file for Mediawiki deploys as many pods as expected, but I can't access them from outside of the cluster even if I create a Service for them.
If I try to create one single Mediawiki pod and a service to access it from outside of the cluster, it works as it should. If I try to create a deployment file for Mediawiki equal to the one for MySQL, it does create the pods and the required service, but it's not accessible through the external IP assigned to it.
My deployment file for Mediawiki:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
    app: mediawiki
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediawiki
spec:
  replicas: 6
  selector:
    matchLabels:
      app: mediawiki
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mediawiki
    spec:
      containers:
      - image: mediawiki
        name: mediawiki
        ports:
        - containerPort: 80
          name: mediawiki
This is the pod-definition file:
apiVersion: v1
kind: Pod
metadata:
  name: mediawiki-pod
  labels:
    name: mediawiki-pod
    app: mediawiki
spec:
  containers:
  - name: mediawiki
    image: mediawiki
    ports:
    - containerPort: 80
This is the service-definition file:
apiVersion: v1
kind: Service
metadata:
  name: mediawiki-service
  labels:
    name: mediawiki-service
    app: mediawiki
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: mediawiki-pod
The expected result is that I can deploy multiple instances of Mediawiki on my cluster and access them from outside through the external IP.
If you look at kubectl describe service mediawiki-service in both scenarios, I expect you will see that in the single-pod case, there is an Endpoints: list that includes a single IP address (the pod's, but that's an implementation detail) but in the deployment case, it says <none>.
Your Service only matches pods that have both name and app labels:
apiVersion: v1
kind: Service
spec:
  selector:
    name: mediawiki-pod
    app: mediawiki
But the pods deployed by your deployment only have app labels:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: mediawiki
So at that specific point (the labels inside the template for the deployment; also adding them at the top level doesn't hurt, but this embedded point is what's important) you need to add the second label name: mediawiki-pod.
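A minimal sketch of the corrected pod template, under that assumption:
template:
  metadata:
    labels:
      app: mediawiki
      name: mediawiki-pod   # added so the Service selector matches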
If you want to deploy multiple instances of some piece of software on a Kubernetes cluster, it's a good idea to check whether there is a Helm chart for it.
In your case the answer is positive - there is a stable helm chart for Mediawiki.
Creating multiple instances is as easy as creating multiple releases, for example:
helm install --name wiki1 stable/mediawiki
helm install --name wiki2 stable/mediawiki
helm install --name wiki3 stable/mediawiki
To use Helm you have to install it on your local machine and on the k8s cluster; following the quick start guide will be enough.