Airflow email alert not formatted as expected

I am using "apache-airflow 2.1.2".
I am working on the Airflow email configuration, and I ran into this email formatting problem.
I have set the email subject template in airflow.cfg. The subject_template looks like this:
[airflow] Pipeline '{{ti.dag_id}}' '{{ti.task_id}}' has status {{ti.state}}. {% if ti.state=="failed" %}is CRITICAL{% else %}is OK{% endif %}
Therefore, if everything works fine, I get an email from "airflow-alerts@mycompany.com" and the email subject looks like this:
[airflow] Pipeline 'my_dag' 'my_task' has status failed. is CRITICAL
When I use the "airflow dags test" command, it works fine.
airflow dags test my_dag 2021-01-01
However, when I unpause the DAG in the UI or use the "airflow dags backfill" command,
airflow dags backfill my_dag -s 2021-01-01
the email uses the default format (which is wrong) and it is sent from "airflow@example.com":
Airflow alert: <TaskInstance: my_dag.my_task 2021-01-01T00:00:00+00:00 [failed]>
I have checked the logs and didn't see anything related to this.
I don't know why using different commands leads to different results.
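One way to confirm which configuration each environment actually loads (a diagnostic sketch; airflow config get-value and airflow info are standard Airflow 2.x CLI commands):
# run these where "dags test" works and where the scheduler/backfill runs, then compare
airflow config get-value email subject_template
airflow config get-value smtp smtp_mail_from
# shows which AIRFLOW_HOME and airflow.cfg are in use
airflow info
If the failing environment printed the defaults here (including airflow@example.com), that would suggest it loads a different airflow.cfg than the one I edited.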
My airflow.cfg is as follows.
[email]
# Configuration email backend and whether to
# send email alerts on retry or failure
# Email backend to use
email_backend = airflow.utils.email.send_email_smtp
# Whether email alerts should be sent when a task is retried
default_email_on_retry = True
# Whether email alerts should be sent when a task failed
default_email_on_failure = True
subject_template = /path/to/my/template
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = myhost
smtp_starttls = True
smtp_ssl = False
# Example: smtp_user = airflow
# Example: smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow-alerts@mycompany.com
smtp_timeout = 30
smtp_retry_limit = 5
I have been stuck on this problem for quite a while. I hope you can give me some help. Thanks in advance!

Related

I cannot log in to the Chainlink GUI

I am using this Helm chart: https://artifacthub.io/packages/helm/vulcanlink/chainlink
I managed to launch the Chainlink node and connect it to Postgres with these values:
config:
  # Login Info
  ROOT: /chainlink
  API_LOGIN: |
    API_EMAIL=admin@admin.com
    API_LOGIN=admin
  WALLET_PASSWORD: "9xMR9PN7CTk6Axs" # a random test password based on chainlink's demands
  # HTTP Security
  ALLOW_ORIGINS: "*"
  SECURE_COOKIES: "false"
  CHAINLINK_PORT: "6688"
  CHAINLINK_TLS_PORT: "0"
  # Database
  DATABASE_TIMEOUT: "0"
  DATABASE_URL: postgresql://chainlink:chainlink@pgdb-postgresql:5432/chainlink?sslmode=disable
  # Ethereum
  ETH_URL: wss://rinkeby.infura.io/ws/v3/somerandomnumber # ws://geth:8546
  ETH_CHAIN_ID: "4"
  LINK_CONTRACT_ADDRESS: 0x514910771af9ca656af840dff83e8264ecf986ca # this was here ...
I port-forward the k8s service and I can see the Chainlink UI.
But which combination of the above values should I use to log in?
I have tried them all.
EDIT
In order to change the env vars, I ended up destroying the whole minikube environment. Insane, and I have no idea why...
Now I get this in the logs
There are no accounts, creating a new account with the specified password
There are no P2P keys; creating a new key encrypted with given password
There are no OCR keys; creating a new key encrypted with given password
2022-09-02T10:22:50Z [INFO] API exposed for user API_EMAIL=admin@admin.com cmd/local_client.go:122
2022-09-02T10:23:32Z [INFO] POST /sessions web/router.go:433 body={"email":"admin@admin.com","password":"*REDACTED*"} clientIP=127.0.0.1 errors=Error #01: Invalid email
latency=4.918708ms method=POST path=/sessions servedAt=2022-09-02 10:23:32 status=401
... so I still cannot log in to the GUI. It is frustrating.
EDIT
This is what happens when the instructions are not clear...
The username was the API_EMAIL value (admin@admin.com) and the password was the API_LOGIN value (admin).
Now I can log in... but I am surely going to change them...
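So, in the Helm values above, the API_LOGIN block is what becomes the GUI credentials: the API_EMAIL line is the username and the API_LOGIN line is the password. A sketch with my values (note that comments must stay outside the | literal block, or they would become part of the credentials):
# GUI username = admin@admin.com, GUI password = admin
API_LOGIN: |
  API_EMAIL=admin@admin.com
  API_LOGIN=admin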

What is the correct configuration of mod_ping on ejabberd 18.12.1?

I am using ejabberd server version 18.12.1 with stream management enabled. When a user disconnects from the internet, their presence remains online, so I decided to use mod_ping to kill the connection after a timeout.
I used the following config in the ejabberd.yml file:
mod_ping:
  send_pings: true
  ping_ack_timeout: 32
  timeout_action: kill
This relies on the default value of ping_interval: 60 (seconds).
Ping does not seem to be working with this configuration. Am I missing any other configuration? Should the client enable something to make this work? Is there any ping log that I can check?
Note: on the modules page of the ejabberd web admin, the configured value of ping_ack_timeout for mod_ping seems to be different from the one in the ejabberd.yml file. Why is that?
[{ping_interval,60},
{ping_ack_timeout,32000},
{send_pings,true},
{timeout_action,kill}]
Note: on the modules page of the ejabberd web admin, the configured value of ping_ack_timeout for mod_ping seems to be different from the one in the ejabberd.yml file. Why is that?
That is expected: you set the human-configurable option in seconds, and the internal value is later expressed in milliseconds (the time unit used by Erlang), so 32 seconds shows up as 32000.
Am I missing any other configuration? Should the client enable something to make this work? Is there any ping log that I can check?
That should be enough. Try with other clients, just to check whether that makes any difference. I've installed ejabberd 18.12 and configured it like this:
loglevel: 5
...
mod_ping:
  send_pings: true
  ping_interval: 10
  ping_ack_timeout: 15
  timeout_action: kill
Then I start ejabberd and log in with the Tkabber client (but I think any client is good for testing ping). Every ten seconds, the client receives this query:
<iq to='user1@localhost/tka1'
    from='user1@localhost'
    type='get'
    id='rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004'>
  <ping xmlns='urn:xmpp:ping'/>
</iq>
And at the same time, the ejabberd log file shows several messages, starting with this one:
10:29:30.585 [debug] route:
#iq{id = <<"rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004">>,
type = get,lang = <<>>,
from = #jid{user = <<"user1">>,server = <<"localhost">>,resource = <<>>,
luser = <<"user1">>,lserver = <<"localhost">>,
lresource = <<>>},
to = #jid{user = <<"user1">>,server = <<"localhost">>,
resource = <<"tka1">>,luser = <<"user1">>,
lserver = <<"localhost">>,lresource = <<"tka1">>},
sub_els = [#ping{}],
meta = #{}}
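For completeness: a client that supports XEP-0199 acknowledges each ping with an empty result IQ carrying the same id, and if no reply arrives within ping_ack_timeout, the kill action closes the stream. A sketch of the expected reply, reusing the id and JIDs from the query above:
<iq from='user1@localhost/tka1'
    to='user1@localhost'
    type='result'
    id='rr-1552642185584-13814872912241253802-5xOvCCobbU2TCC/RT4GaqD6M8bo=-55238004'/>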

Airflow CeleryExecutor With AWS SQS

I'm trying to cluster my Airflow setup, and I'm using this article to do so. I just configured my airflow.cfg file to use the CeleryExecutor, pointed my sql_alchemy_conn at my PostgreSQL database running on the same master node, set the broker_url to use SQS (I didn't set the access_key_id or secret_key, since it's running on an EC2 instance it doesn't need those), and set the celery_result_backend to my PostgreSQL server as well. I saved my new airflow.cfg changes, ran airflow initdb, and then ran airflow scheduler, and I'm getting this error from the scheduler:
[2018-06-07 21:07:33,420] {celery_executor.py:101} ERROR - Error syncing the celery executor, ignoring it:
[2018-06-07 21:07:33,421] {celery_executor.py:102} ERROR - Can't load plugin: sqlalchemy.dialects:psycopg2
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 83, in sync
state = async.state
File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 433, in state
return self._get_task_meta()['status']
File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 372, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 344, in get_task_meta
meta = self._get_task_meta_for(task_id)
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 53, in _inner
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 122, in _get_task_meta_for
session = self.ResultSession()
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 99, in ResultSession
**self.engine_options)
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 59, in session_factory
engine, session = self.create_session(dburi, **kwargs)
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 45, in create_session
engine = self.get_engine(dburi, **kwargs)
File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 42, in get_engine
return create_engine(dburi, poolclass=NullPool)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/__init__.py", line 424, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 57, in create
entrypoint = u._get_entrypoint()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/url.py", line 156, in _get_entrypoint
cls = registry.load(name)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 221, in load
(self.group, name))
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:psycopg2
Here is my airflow.cfg file,
[core]
# The home folder for airflow, default is ~/airflow
airflow_home = /home/ec2-user/airflow
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /home/ec2-user/airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /home/ec2-user/airflow/logs
# Airflow can store logs remotely in AWS S3 or Google Cloud Storage. Users
# must supply an Airflow connection id that provides access to the storage
# location.
remote_log_conn_id =
encrypt_s3_logs = False
# Logging level
logging_level = INFO
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
# logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
logging_config_class =
# Log format
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
#executor = SequentialExecutor
executor = CeleryExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
#sql_alchemy_conn = sqlite:////home/ec2-user/airflow/airflow.db
sql_alchemy_conn = postgresql+psycopg2://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool.
sql_alchemy_pool_size = 5
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite.
sql_alchemy_pool_recycle = 3600
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 32
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 128
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = True
# Where your Airflow plugins are stored
plugins_folder = /home/ec2-user/airflow/plugins
# Secret key to save connection passwords in the db
fernet_key = ibwZ5uSASmZGphBmwdJ4BIhd1-5WZXMTTgMF9u1_dGM=
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 30
# The class to use for running task instances in a subprocess
task_runner = BashTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use file task handler.
task_log_reader = file.task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.local_client
endpoint_url = http://localhost:8080
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = Airflow
default_cpus = 1
default_ram = 512
default_disk = 512
default_gpus = 0
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
expose_config = False
# Set to true to turn on authentication:
# http://pythonhosted.org/airflow/security.html#web-authentication
authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
# Default DAG view. Valid values are:
# tree, graph, duration, gantt, landing_times
dag_default_view = tree
# Default DAG orientation. Valid values are:
# LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
# Puts the webserver in demonstration mode; blurs the names of Operators for
# privacy.
demo_mode = False
# The amount of time (in secs) webserver will wait for initial handshake
# while fetching logs from other worker machine
log_fetch_timeout_sec = 5
# By default, the webserver shows paused DAGs. Flip this to hide paused
# DAGs by default
hide_paused_dags_by_default = False
# Consistent page size across all listing views in the UI
page_size = 100
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
#broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
broker_url = sqs://
# Another key Celery setting
#celery_result_backend = db+mysql://airflow:airflow@localhost:3306/airflow
celery_result_backend = db+psycopg2://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 5
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 0
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = /home/ec2-user/airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = True
# This changes the batch size of queries in the scheduling main loop.
# This depends on query length limits and how long you are willing to hold locks.
# 0 for no limit
max_tis_per_query = 0
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 2
authenticate = False
[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
I'm not too sure what's going on here. Is there additional setup I need to do in Celery or anything else? I'm also confused as to how it knows which SQS queue to use on AWS. Does it create a new queue itself, or do I need to create the queue on AWS and put its URL somewhere?
See this question here: https://stackoverflow.com/a/39967889/5191221
Taken from there:
So replace:
celery_result_backend = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
with something like:
celery_result_backend = db+postgresql://username:password@192.168.1.2:5432/airflow
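Applied to the airflow.cfg above, the relevant [celery] lines would look something like this (a sketch; only the result-backend scheme changes, because db+postgresql names a SQLAlchemy dialect that actually exists, while db+psycopg2 does not):
[celery]
broker_url = sqs://
celery_result_backend = db+postgresql://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1
As for the SQS part of the question: as far as I know, Celery's SQS transport creates queues on demand from the Celery queue name (the default_queue setting), so you don't need to pre-create a queue or supply its URL.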

Splunk Kafka Add-on doesn't read Chef-managed configuration files

We are using Chef to manage our infrastructure, and I'm running into an issue where the Splunk TA (Add-on for Kafka) simply refuses to acknowledge that I've dropped a kafka_credentials.conf file in the local directory of the add-on. If I use the Web UI, it generates an entry properly, and the entry shows up in the add-on configuration.
[root@ip-10-14-1-42 local]# ls
app.conf inputs.conf kafka.conf kafka_credentials.conf
[root@ip-10-14-1-42 local]# grep -nr "" *.conf
app.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
app.conf:2:[install]
app.conf:3:is_configured = 1
inputs.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
inputs.conf:2:[kafka_mod]
inputs.conf:3:interval = 60
inputs.conf:4:start_by_shell = false
inputs.conf:5:
inputs.conf:6:[kafka_mod://my_app]
inputs.conf:7:kafka_cluster = default
inputs.conf:8:kafka_topic = log-my_app
inputs.conf:9:kafka_topic_group = my_app
inputs.conf:10:kafka_partition_offset = earliest
inputs.conf:11:index = main
kafka.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka.conf:2:[global_settings]
kafka.conf:3:log_level = INFO
kafka.conf:4:index = main
kafka.conf:5:use_kv_store = 0
kafka.conf:6:use_multiprocess_consumer = 1
kafka.conf:7:fetch_message_max_bytes = 1048576
kafka_credentials.conf:1:# MANAGED BY CHEF. PLEASE DO NOT MODIFY!
kafka_credentials.conf:2:[default]
kafka_credentials.conf:3:kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_credentials.conf:4:kafka_partition_offset = earliest
kafka_credentials.conf:5:index = main
Upon restarting Splunk, the add-on is installed, and the input is even created under the Inputs section, but the cluster itself is "not available", and when examining the logs I see this:
2017-08-09 01:40:25,442 INFO pid=29212 tid=MainThread file=kafka_mod.py:main:168 | Start Kafka
2017-08-09 01:40:30,508 INFO pid=29212 tid=MainThread file=kafka_config.py:_get_kafka_clusters:228 | Clusters: {}
2017-08-09 01:40:30,509 INFO pid=29212 tid=MainThread file=kafka_config.py:__init__:188 | No Kafka cluster are configured
It seems like this plugin only respects clusters created through the Web UI. That is not going to work, as we want to be able to fully configure this through Chef. Short of hacking the REST API and fudging around with the .py files in the add-on directory to force a dictionary in, what are my options?
Wondering if anyone has encountered this before.
If I had to guess, it is silently rejecting the files because # is not traditionally used for comments in INI files. Try a ; instead.
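Under that assumption, the Chef template would emit something like this instead (a sketch of the same file; only the comment marker changes):
; MANAGED BY CHEF. PLEASE DO NOT MODIFY!
[default]
kafka_brokers = 10.14.2.164:9092,10.14.2.194:9092
kafka_partition_offset = earliest
index = main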

qsub: submit error (Unknown queue MSG=requested queue not found)

I am trying to run WIEN2k jobs with the Torque job scheduler (Maui scheduler) on Ubuntu 14.04. The app is installed correctly without any error messages, but when I submit the bash script, I encounter this error: "qsub: submit error (Unknown queue MSG=requested queue not found)". I read about this error on this website.
Can anyone help me, please?
Here is my PBS queue manager configuration after running the qmgr -c "p s" command:
Create queues and set their attributes.
Create and define queue batch
create queue batch
set queue batch queue_type = Execution
set queue batch max_running = 20
set queue batch resources_max.ncpus = 20
set queue batch resources_max.nodes = 20
set queue batch resources_default.ncpus = 1
set queue batch resources_default.nodect = 1
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 76790:53:51
set queue batch enabled = True
set queue batch started = True
set server attributes.
set server scheduling = True
set server acl_hosts = seconduser
set server acl_hosts += firstuser
set server managers = root@localhost
set server managers += secfir@localhost
set server operators = root@localhost
set server operators += secfir@localhost
set server default_queue = batch
set server log_events = 511
set server mail_from = adm
set server scheduler_iteration = 600
set server node_check_rate = 150
set server tcp_timeout = 300
set server job_stat_rate = 45
set server poll_jobs = True
set server mom_job_sync = True
set server keep_completed = 0
set server next_job_number = 47
set server moab_array_compatible = True
set server nppcu = 1
Try running:
qmgr -c "p s"
Look through the config for queue names. If there is a routing queue, try submitting your job with qsub -q queuename jobfile. If a routing queue doesn't exist, you can try the execution queues. Otherwise, it may be best to ask the cluster admin, since they have the power to kill your jobs if you don't submit them properly.
Just had the same problem and fixed it. The fullest way to do a qsub is (described here):
qsub -q queuename scriptname
Your queue name can be found using the command described in the previous answer, as you did:
qmgr -c "p s"
The result for yours (and mine as well) is that the queue name is "batch". Multiple lines indicate this, such as "Create and define queue batch" and "create queue batch", but most assuredly:
set server default_queue = batch
So, submit your job script with:
qsub -q batch scriptname
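For completeness, a minimal Torque job script for this queue might look like the following (a sketch: the job name, resource requests, and run_case.sh are placeholders; only the -q batch directive comes from the qmgr output above):
#!/bin/bash
#PBS -q batch
#PBS -N wien2k_job
#PBS -l nodes=1:ppn=1
#PBS -l walltime=24:00:00
# start in the directory the job was submitted from
cd $PBS_O_WORKDIR
./run_case.sh
With the -q directive inside the script, a plain qsub scriptname also lands in the batch queue.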