I'm trying to embed a Superset dashboard into a ReactJS app. When I log in with the default admin user, I can't find the "Embed dashboard" option in the dropdown of the dashboard I have open. I've already changed some settings in Superset's config.py:
SESSION_COOKIE_SAMESITE = None
ENABLE_PROXY_FIX = True
"EMBEDDED_SUPERSET": True
CORS_OPTIONS = {
    'supports_credentials': True,
    'allow_headers': ['*'],
    'resources': ['*'],
    'origins': ['http://localhost:8088', 'http://localhost:8888']
}
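For reference, feature flags are normally declared inside the FEATURE_FLAGS dictionary in superset_config.py rather than as a standalone entry; a minimal sketch, assuming the standard config layout:

# superset_config.py -- minimal sketch assuming the standard Superset config layout
FEATURE_FLAGS = {
    "EMBEDDED_SUPERSET": True,
}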
and in docker-compose.yaml; after changing the settings I restarted the Docker container:
version: '3'
services:
  superset:
    image: apache/superset:latest
    ports:
      - 8088:8088
    container_name: superset
    restart: always
    volumes:
      - ../data/superset:/opt/superset/workspace
    environment:
      - TZ=Asia/Jakarta
      - SUPERSET_FEATURE_EMBEDDED_SUPERSET='true'
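One detail to keep in mind (an assumption about how the value is read, not something confirmed in this post): with list-style environment entries the quotes become part of the value, so the flag is usually written without them, e.g.

environment:
  - TZ=Asia/Jakarta
  - SUPERSET_FEATURE_EMBEDDED_SUPERSET=true  # unquoted, so the value is the literal string "true"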
This is what is currently displayed on my dashboard.
How do I get the "Embed dashboard" option to appear? Are there any mistakes in my settings?
Thanks.
I am trying to set up a local Beam runner for easier testing/developing.
I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here's what my current plan for the entire framework looks like.
Here's my current docker-compose:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
  jobmanager:
    image: flink_image
    command: ['jobmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
    ports:
      - "8081:8081"
  taskmanager:
    image: flink_image
    scale: 1
    depends_on:
      - jobmanager
    command: ['taskmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
  beam-jobserver:
    image: flink_image
    ports:
      - "8097:8097"
      - "8098:8098"
      - "8099:8099"
    entrypoint:
      - java
      - -cp
      - /target/flink/flink-web-upload/beam-runner.jar
      - org.apache.beam.runners.flink.FlinkJobServerDriver
      - --flink-master=jobmanager
      - --job-host=0.0.0.0
And my pipeline looks like this:
import logging

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service
from apache_beam.options.pipeline_options import PipelineOptions

LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--environment_type=LOOPBACK',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--defaultEnvironmentType=EXTERNAL',
    '--defaultEnvironmentConfig=host.docker.internal:5000',
]

with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
    result = (
        pipeline
        | "Kafka Read" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "kafka:9092", "auto.offset.reset": "earliest"},
            topics=["test.topic"],
            with_metadata=False,
            expansion_service=default_io_expansion_service(
                append_args=[
                    '--defaultEnvironmentType=PROCESS',
                    '--defaultEnvironmentConfig={"command":"/opt/apache/beam/java_boot"}',
                    '--experiments=use_deprecated_read',
                ]
            )
        )
        | "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
    )
However, it looks like LOOPBACK tried to open a port on my host machine and asked the task manager to talk to it via localhost:<randomPort>, which is not accessible from inside the container.
Unfortunately, host networking is not supported by Docker on Mac, so I need to find a way to override the LOOPBACK settings so that it connects to host.docker.internal:<dedicated_pool> instead of a random port on my host machine. Or are there other suggested workarounds? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported; LOOPBACK mode mostly targets very simple setups.
You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
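For illustration, the pipeline options from the question would then look roughly like this (a sketch assuming the worker pool above is already running on the host; PORT is a placeholder and the job/artifact endpoints are kept as in the original):

# Sketch: point the Python SDK at a manually started worker pool instead of LOOPBACK.
LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--environment_type=EXTERNAL',
    '--environment_config=host.docker.internal:PORT',  # PORT = --service_port of worker_pool_main
]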
I was just facing similar struggles recently. Luckily, there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation around them currently.
DOCKER_MAC_CONTAINER=1 limits the ports for communication with SDK workers to the range 8100 - 8200 instead of using random ports. Ports of that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node using host.docker.internal / via the docker host instead of using localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
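A very rough sketch of where these variables could go in the compose file from the question (the service names and exact placement are assumptions based on the description above, not a verified configuration):

taskmanager:
  environment:
    DOCKER_MAC_CONTAINER: "1"            # restrict SDK worker communication to ports 8100-8200
  ports:
    - "8100-8200:8100-8200"              # the round-robin range has to be published
beam-worker-pool:                        # hypothetical service running worker_pool_main
  environment:
    BEAM_WORKER_POOL_IN_DOCKER_VM: "1"   # reach the runner via host.docker.internal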
We have a docker-compose.yaml file in which a parameter APP_DEBUG from .env.local sets whether Xdebug is active or not:
php-fpm:
  build:
    context: .
    dockerfile: ./docker/php-fpm/Dockerfile
    args:
      - TIMEZONE=Europe/Berlin
      - WITH_XDEBUG=${APP_DEBUG}
  container_name: ${PROJECT_NAME}-php-fpm
  environment:
    XDEBUG_CONFIG: "remote_host=docker.for.mac.localhost remote_connect_back=0 remote_enable=1 remote_autostart=1 remote_port=9009"
    PHP_IDE_CONFIG: "serverName=docker-server"
  working_dir: /var/www
  volumes:
    - .:/var/www:cached
  ports:
    - ${HOST_WEB_PORT}:80
If I have my container up and running and want to switch Xdebug on or off, is "stop" and "start" enough for the container to react to the change, or do I need to do "down" and "up", or even a new build?
Build config options are applied at build time, so you are going to need to rebuild your image and run the container again.
Source: Build docs
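In practice that usually means something along these lines (a sketch; the service name is taken from the compose file above):

docker-compose build php-fpm
docker-compose up -d php-fpm

or, as a one-liner, docker-compose up -d --build php-fpm.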
I plan to add meta information to my docker-compose files, but I don't know if it's possible or a good way to do it.
Consider this service, with the meta key:
OldMongoDB:
  image: mongo:3.2
  environment:
    - URL: mongodb://localhost:27015
  ports:
    - "27015:27017"
  meta:
    - meta1: "some value useful in tests"
    - meta2: "other value useful in tests"
Is it a good idea to store additional values inside a docker-compose file?
They are meant to be used by test scripts.
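For context, a minimal sketch of one common place such values are kept: service labels, which travel with the container and can be read with docker inspect (the label keys here are made up for illustration):

OldMongoDB:
  image: mongo:3.2
  ports:
    - "27015:27017"
  labels:
    test.meta1: "some value useful in tests"
    test.meta2: "other value useful in tests"

In Compose file format 3.4 and later, top-level keys starting with x- (extension fields) are also ignored by Compose itself and can hold arbitrary metadata for tooling to read.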
Here's my goal: I would like to configure email for my GitLab server. I have followed a lot of tutorials but I can't make it work.
My setup is the following: a reverse proxy in a Docker container, and my GitLab server also in a Docker container.
About the versions:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.16.1, build 6d1ac21
Here's my docker-compose.yml file
version: '3.3'

networks:
  proxy:
    external: true
  internal:
    external: false

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    environment:
      - TZ=Europe/Paris
      - GITLAB_TIMEZONE=Paris
      - IMAP_USER=USER#GMAIL.COM
      - IMAP_PASSWORD=MYGMAILPASS
      - GITLAB_INCOMING_EMAIL_ADDRESS=USERGMAIL+%{key}#gmail.com
    volumes:
      - /srv/gitlab/config:/etc/gitlab
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab
    restart: always
    labels:
      - traefik.backend=gitlab
      - traefik.frontend.rule=Host:git.domain.com
      - traefik.docker.network=proxy
      - traefik.port=80
      - traefik.frontend.entryPoints=http,https
    networks:
      - internal
      - proxy
I followed this tutorial, which seems to be good:
https://github.com/sameersbn/docker-gitlab#available-configuration-parameters
I must be missing something in my configuration, but I can't figure out what it is.
Can anyone help me configure email sending? I also don't know the proper way to test email sending from GitLab.
Is it better to configure this through docker-compose environment variables or directly in the gitlab.rb file?
Any help would be much appreciated.
The instructions you followed are for a different docker image than the one you're actually using. You also set up IMAP, which is for receiving emails. In GitLab's case, it's for replying to issues by email.
What you want are the SMTP settings. The GitLab docker image does not come with sendmail installed, so you will have to follow the instructions here to set up SMTP in GitLab: https://docs.gitlab.com/omnibus/settings/smtp.html#example-configuration
You can dump gitlab.rb configuration right in your docker-compose under the environment section. My Fastmail setup for reference:
environment:
  GITLAB_OMNIBUS_CONFIG: |
    gitlab_rails['smtp_enable'] = true
    gitlab_rails['smtp_address'] = "***"
    gitlab_rails['smtp_port'] = 465
    gitlab_rails['smtp_user_name'] = "***"
    gitlab_rails['smtp_password'] = "***"
    gitlab_rails['smtp_enable_starttls_auto'] = true
    gitlab_rails['smtp_tls'] = true
    gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
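Once the container has been recreated with the new GITLAB_OMNIBUS_CONFIG, you can send a test mail from the Rails console to verify delivery (a sketch; the container name comes from the compose file above and the address is a placeholder):

docker exec -it gitlab gitlab-rails console
Notify.test_email('you@example.com', 'Test subject', 'Test body').deliver_now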
I am trying to build a Stack as follows:
redis:
  image: redis
  ports:
    - '6379'
app:
  build: .
  links:
    - redis
When I push the "Create Stack" button, I get this error:
Oops!
Service 'app': Value {u'build': u'.', u'links': [u'redis'], u'name': u'app'} for field '<obj>' contains additional property 'build' not defined by 'properties' or 'patternProperties' and additionalProperties is False. See 'https://support.tutum.co/support/solutions/articles/5000583471' for more details
Can someone help me with this please?
The Tutum documentation states the following at the bottom of the page:
Docker-compose non-supported keys
Tutum.yml has been designed with docker-compose.yml in mind to maximize compatibility, but the following keys are not supported:
build
external_links
env_file
This clearly states that build is not a supported key, which is what your error message also says. It looks like you'll have to remove the build key from your file.
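For instance, you could build and push the image yourself and reference it instead of using build (a sketch; the image name is a placeholder for whatever you push to a registry):

redis:
  image: redis
  ports:
    - '6379'
app:
  image: myregistry/app:latest   # placeholder: pre-built image pushed to a registry
  links:
    - redis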