I have configured Superset in a virtualenv and want to run it as a service.
I have tried the config below, but it is not working:
[Unit]
Description=superset service
After=network.target
[Service]
Type=simple
User=superset
Group=superset
Environment=PATH=/home/ubuntu/code/superset:$PATH
Environment=PYTHONPATH=/var/superset/superset:$PYTHONPATH
ExecStart=/home/ubuntu/code/superset/superset runserver
[Install]
WantedBy=multi-user.target
The virtualenv folder is Superset.
I get the error below:
/etc/init.d/superset: 1: /etc/init.d/superset: [Unit]: not found
Usage: service < option > | --status-all | [ service_name [ command | --full-restart ] ]
/etc/init.d/superset: 5: /etc/init.d/superset: [Service]: not found
Actually, superset runserver is meant for development mode; for production it is highly recommended to use other tools such as gunicorn.
Anyway, the main problem is that the superset executable inside the virtualenv is at $VENV_PATH/bin/superset (in general, applications that ship command-line entry points, such as superset or airflow, install them under $VENV_PATH/bin; the easiest way to find the path of any application on Linux is the which command, so in this case you can run which superset to find the superset path).
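For example (the path below is purely illustrative; yours will be whatever which prints for your virtualenv):
$ which superset
/path/to/your/venv/bin/superset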
This is the Superset service file that I use in production; hope it is useful:
[Unit]
Description = Apache Superset Webserver Daemon
After = network.target
[Service]
PIDFile = /home/superset/superset-webserver.PIDFile
User = superset
Group = superset
Environment=SUPERSET_HOME=/home/superset
Environment=PYTHONPATH=/home/superset
WorkingDirectory = /home/superset
ExecStart =/home/superset/venv/bin/python3.7 /home/superset/venv/bin/gunicorn --workers 8 --worker-class gevent --bind 0.0.0.0:8888 --pid /home/superset/superset-webserver.PIDFile superset:app
ExecStop = /bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target
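Note that the error in the question comes from placing the unit under /etc/init.d/, where it is executed as a SysV init script; a systemd unit file belongs in /etc/systemd/system/ instead. Roughly (assuming you name the file superset.service):
sudo cp superset.service /etc/systemd/system/superset.service
sudo systemctl daemon-reload
sudo systemctl enable --now superset
systemctl status superset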
For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems like I'm having trouble connecting the s3fs client to the MinIO container.
Here's how I created the container and attached the s3fs client (below I describe how I validated that the container is running properly):
import s3fs
import docker
client = docker.from_env()
container = client.containers.run(
    'minio/minio',
    "server /data --console-address ':9090'",
    environment={
        "MINIO_ACCESS_KEY": "minio",
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={
        "9000/tcp": 9000,
        "9090/tcp": 9090,
    },
    volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
    detach=True)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681
fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000"  # tried 127.0.0.1:9000 with no success
    }
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
Bucket doesn't exist exception
Check that the container is running:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e22c19a65 minio/minio "/usr/bin/docker-ent…" 56 seconds ago Up 55 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp hardcore_ride
Check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I proved that the volume binding works by shelling into the container, I expected to see the same results when accessing the container's filesystem via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs, I see that you have to use <bucket_name>/<object_path> syntax to access resources:
fs.ls('my-bucket')
['my-file.txt']
Also, if you look at the docs below, there are a couple of other ways to access it using fs.open. Can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
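A rough sketch of that suggestion, using the same connection settings as in the question (the bucket name my-bucket is just an example): a file dropped straight into the mounted /data directory is not an S3 object by itself, since MinIO generally maps top-level directories under its data directory to buckets (depending on the MinIO version), so you would normally create a bucket first and then read it back with fs.open:
import s3fs

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={'endpoint_url': 'http://localhost:9000'},
)

fs.mkdir('my-bucket')                        # creates the bucket on MinIO
with fs.open('my-bucket/foo.txt', 'wb') as f:
    f.write(b'hello from s3fs')              # uploads an object into the bucket
print(fs.ls('my-bucket'))                    # e.g. ['my-bucket/foo.txt']
with fs.open('my-bucket/foo.txt', 'rb') as f:
    print(f.read())                          # reads the object back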
The above error occurred when installing Kubernetes using Kubespray.
The installation fails, and through journalctl -xe I see the following:
node1 systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Dec 09 23:37:01 node1 dockerd[8296]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: lo
Dec 09 23:37:01 node1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 09 23:37:01 node1 systemd[1]: Failed to start Docker Application Container Engine.
How do I troubleshoot and fix this issue? Is there something that I am missing?
The daemon.json file is as follows:
[root@k8s-master01 kubespray]# cat /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}
The docker.yml file is as follows:
cat inventory/sample/group_vars/all/docker.yml
---
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
## Enable docker_container_storage_setup, it will configure devicemapper driver on Centos7 or RedHat7.
docker_container_storage_setup: false
## It must be define a disk path for docker_container_storage_setup_devs.
## Otherwise docker-storage-setup will be executed incorrectly.
# docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you have more than 3 nameservers, then we'll only use the first 3.
docker_dns_servers_strict: false
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"
# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"
# define docker bin_dir
docker_bin_dir: "/usr/bin"
# keep docker packages after installation; speeds up repeated ansible provisioning runs when '1'
# kubespray deletes the docker package on each run, so caching the package makes sense
docker_rpm_keepcache: 0
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define 172.19.16.11 or mirror.registry.io
# docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registry,example China registry mirror.
# docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
## If non-empty will override default system MountFlags value.
## This option takes a mount propagation flag: shared, slave
## or private, which control whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
# docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
docker_options: >-
The setup.cfg file is as below:
[root@k8s-master01 kubespray]# cat setup.cfg
[metadata]
name = kubespray
summary = Ansible modules for installing Kubernetes
description-file =
README.md
author = Kubespray
author-email = smainklh@gmail.com
license = Apache License (2.0)
home-page = https://github.com/kubernetes-sigs/kubespray
classifier =
License :: OSI Approved :: Apache Software License
Development Status :: 4 - Beta
Intended Audience :: Developers
Intended Audience :: System Administrators
Intended Audience :: Information Technology
Topic :: Utilities
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
data_files =
usr/share/kubespray/playbooks/ =
cluster.yml
upgrade-cluster.yml
scale.yml
reset.yml
remove-node.yml
extra_playbooks/upgrade-only-k8s.yml
usr/share/kubespray/roles = roles/*
usr/share/kubespray/library = library/*
usr/share/doc/kubespray/ =
LICENSE
README.md
usr/share/doc/kubespray/inventory/ =
inventory/sample/inventory.ini
etc/kubespray/ =
ansible.cfg
etc/kubespray/inventory/sample/group_vars/ =
inventory/sample/group_vars/etcd.yml
etc/kubespray/inventory/sample/group_vars/all/ =
inventory/sample/group_vars/all/all.yml
inventory/sample/group_vars/all/azure.yml
inventory/sample/group_vars/all/coreos.yml
inventory/sample/group_vars/all/docker.yml
inventory/sample/group_vars/all/oci.yml
inventory/sample/group_vars/all/openstack.yml
[wheel]
universal = 1
[pbr]
skip_authors = True
skip_changelog = True
[bdist_rpm]
group = "System Environment/Libraries"
requires =
ansible
python-jinja2
python-netaddr
Take a look at the storage driver you have defined in the daemon.json file:
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
At the same time, in the docker.yml file you didn't enable the storage driver options:
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
Please uncomment the docker_storage_options: -s overlay2 line.
Also make sure you have followed every step of this tutorial.
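More generally, the "specified both as a flag and in the configuration file" error means the same directive appears in daemon.json and on the dockerd command line. A quick way to see which flags systemd passes to dockerd and compare them with daemon.json:
systemctl cat docker                           # shows ExecStart and any drop-in files with extra dockerd flags
journalctl -u docker --no-pager | tail -n 20   # full error message, including the duplicated directive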
Here is my zkServer.cmd file:
@echo off
setlocal
call "%~dp0zkEnv.cmd"
set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
echo on
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
endlocal
The zkServer.sh script will run the zkEnv.sh script, which in turn will look for a script ../conf/zookeeper-env.sh.
Create a file in the conf folder called zookeeper-env.sh.
Paste this into the file and restart ZooKeeper:
JMXLOCALONLY=false
JMXDISABLE=false
JMXPORT=4048
JMXAUTH=false
JMXSSL=false
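To confirm that JMX is actually listening after the restart, you can check the port and attach a client, for example:
ss -ltnp | grep 4048        # the JMXPORT value set above
jconsole localhost:4048     # or point any other JMX client at it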
First obtain the hostname (or a reachable IP, e.g. a LAN/public/NAT address):
hostname -i
# or find ip
ip a
Next, add the following options to ZOOMAIN (assuming hostname my.remoteconsole.org and desired port 8989):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=8989
-Djava.rmi.server.hostname=my.remoteconsole.org
More details about the available options are in the Java docs (http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
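You can then attach from a remote machine, for example:
jconsole my.remoteconsole.org:8989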
Add org.apache.zookeeper.server.quorum.QuorumPeerMain to the server start command.
The class org.apache.zookeeper.server.quorum.QuorumPeerMain will start a JMX-manageable ZooKeeper server. This class registers the proper MBeans during initialization to support JMX monitoring and management of the instance.
In addition to the above answer by Marcell du Plessis, if you are running ZooKeeper as a systemd service, you can specify the JMX port as an environment variable.
[Unit]
Description=Apache Kafka Zookeeper
Requires=network.target
After=network.target
[Service]
Type=simple
User=user
Group=users
ExecStart=/your-zookeeper-install-path/bin/zkServer.sh start
ExecStop=/your-zookeeper-install-path/bin/zkServer.sh stop
TimeoutStopSec=180
Restart=on-failure
Environment="JMX_PORT=9999"
[Install]
WantedBy=multi-user.target
Alias=zookeeper.service
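After editing the unit, reload systemd and restart the service, then check that the port from JMX_PORT is listening (a sketch, assuming the unit is installed as zookeeper.service):
sudo systemctl daemon-reload
sudo systemctl restart zookeeper
ss -ltnp | grep 9999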
The Kubernetes version is 1.2.
I want to watch the scheduler's logs. How do I set the kube-scheduler to print its log to a file?
The kube-scheduler's configuration is at this path: /etc/kubernetes/scheduler.
And the global configuration is at this path: /etc/kubernetes/config.
There we can see these notes:
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
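To send the scheduler's log to a file instead of the journal, one option (a sketch based on the standard glog flags; the KUBE_SCHEDULER_ARGS variable name is assumed from the usual packaging of /etc/kubernetes/scheduler) is to turn off logging to stderr and point the scheduler at a log directory:
# in /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=false"
# in /etc/kubernetes/scheduler (variable name assumed from the usual packaging)
KUBE_SCHEDULER_ARGS="--log-dir=/var/log/kubernetes"
Then restart the scheduler (e.g. systemctl restart kube-scheduler) and the log files should appear under /var/log/kubernetes.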
Can you tail the logs of the service (if running under systemd): journalctl -u apiserver -f
Or, if it is a container, find the container ID of the scheduler and tail it with Docker: docker logs -f <container-id>
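Applied to the scheduler itself, that would look roughly like this (unit and container names can differ between setups):
journalctl -u kube-scheduler -f
docker logs -f $(docker ps -q --filter name=kube-scheduler)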
I have the following supervisord config (copied from this answer):
[program:myprogram]
process_name=MYPROGRAM%(process_num)s
directory=/var/www/apps/myapp
command=/var/www/apps/myapp/virtualenv/bin/python index.py --PORT=%(process_num)s
startsecs=2
user=youruser
stdout_logfile=/var/log/myapp/out-%(process_num)s.log
stderr_logfile=/var/log/myapp/err-%(process_num)s.log
numprocs=4
numprocs_start=14000
Can I do the same thing with systemd?
A systemd unit can include specifiers, which may be used to write a generic service unit that can then be instantiated several times.
Example based on your supervisord config, /etc/systemd/system/mydaemon@.service:
[Unit]
Description=My awesome daemon on port %i
After=network.target
[Service]
User=youruser
WorkingDirectory=/var/www/apps/myapp
Type=simple
ExecStart=/var/www/apps/myapp/virtualenv/bin/python index.py --PORT=%i
[Install]
WantedBy=multi-user.target
You may then enable/start as many instances of that service as you like, for example:
# systemctl start mydaemon@4444.service
Article with more examples on Fedora Magazine.org: systemd: Template unit files.
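To mirror the original supervisord config (numprocs=4 starting at 14000), you could, for example, enable four instances in a loop (using the template name defined above):
for port in 14000 14001 14002 14003; do
    sudo systemctl enable --now mydaemon@${port}.service
done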