How to deploy symfony2 - my dev env works but not prod - deployment

I have read the cookbook entry on deploying my Symfony2 app to a production environment. The app works great in dev mode, but in prod mode it first wouldn't let me sign in (it said "Bad credentials", even though I signed in with those very credentials in dev mode), and later, after an extra run of clearing and warming up the prod cache, I just get an HTTP 500 from my prod route.
I had a look in the config files and wonder if this has anything to do with it:
config_dev.yml:
    imports:
        - { resource: config.yml }

    framework:
        router:   { resource: "%kernel.root_dir%/config/routing_dev.yml" }
        profiler: { only_exceptions: false }

    web_profiler:
        toolbar: true
        intercept_redirects: false

    monolog:
        handlers:
            main:
                type:  stream
                path:  %kernel.logs_dir%/%kernel.environment%.log
                level: debug
            firephp:
                type:  firephp
                level: info

    assetic:
        use_controller: true
config_prod.yml:
    imports:
        - { resource: config.yml }

    #doctrine:
    #    orm:
    #        metadata_cache_driver: apc
    #        result_cache_driver:   apc
    #        query_cache_driver:    apc

    monolog:
        handlers:
            main:
                type:         fingers_crossed
                action_level: error
                handler:      nested
            nested:
                type:  stream
                path:  %kernel.logs_dir%/%kernel.environment%.log
                level: debug
I also noticed that there is a routing_dev.yml but no routing_prod.yml; however, the prod environment works great on my localhost, so... ?

In your production environment, when you run the app/console cache:warmup command, you need to make sure you run it like this:

    app/console cache:warmup --env=prod --no-debug

Also, remember that the command will warm up the cache as the current user, so all files will be owned by that user and not the web server user (e.g. www-data). That is probably why you get a 500 server error. After you warm up the cache, run this:

    chown -R www-data.www-data app/cache/prod

(Be sure to replace www-data with your web server user.)
Make sure your parameters.ini file has the proper settings in place, since it's common for this file not to be checked in to whatever code repository you might be using. Or (and I've even done this) it's possible to simply forget to copy parameters from dev into the prod parameters.ini file.
You'll also need to look in app/logs/prod.log to see what happens when you attempt to log in.
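Putting those steps together, here is a rough deployment sequence for the prod environment. This is only a sketch assuming the standard Symfony 2.x directory layout and www-data as the web server user; adjust paths and users to your setup:

    # from the project root on the production server
    php app/console cache:clear  --env=prod --no-debug
    php app/console cache:warmup --env=prod --no-debug

    # re-own the freshly generated cache and logs so the web server can write to them
    chown -R www-data.www-data app/cache/prod app/logs

    # watch the prod log while reproducing the login failure
    tail -f app/logs/prod.log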

How do I modify existing jobs to switch owner?

I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
    ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but they were; removing the notification settings stopped the error from being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
    - description: Do the actual work with arguments passed
      group: jobs/common
      id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
      name: do_the_work
      notification:
        onstart:
          email:
            attachType: file
            recipients: ops@company.com
            subject: Actual work being started
      notifyAvgDurationThreshold: null
      options:
      - enforced: true
        name: do_the_job
        required: true
        values:
        - yes
        - no
        valuesListDelimiter: ','
      - enforced: true
        name: fail_a_lot
        required: true
        values:
        - yes
        - no
        valuesListDelimiter: ','
      scheduleEnabled: false
      sequence:
        commands:
        - description: The actual work
          script: |-
            #!/bin/bash
            echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
        keepgoing: false
        strategy: node-first
      timeout: '60'
      uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
    - description: Do the job
      group: jobs/individual
      name: do_the_job
      ...
      notification:
        onfailure:
          email:
            recipients: ops@company.com
            subject: '[Rundeck] Failure of ${job.name}'
      notifyAvgDurationThreshold: null
      ...
      sequence:
        commands:
        - description: Call the job that does the work
          jobref:
            args: -do_the_job yes -fail_a_lot no
            group: jobs/common
            name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away. I'm not sure whether sending notifications from an included job is supported. It would be useful to me if it were, so I could keep notification settings in a single place. However, I can understand why it presents a problem for the scheduler/executor.
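To make the fix concrete, here is a trimmed sketch (not a full export) of the common job after the change that made the error disappear: the notification block is stripped out, while the onfailure email on the calling job shown above stays in place.

    - description: Do the actual work with arguments passed
      group: jobs/common
      id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
      name: do_the_work
      # notification: block removed here - alerts stay on the top-level scheduled job
      options:
        ...
      sequence:
        ...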

JHipster/Microservices frontend development hot reload

I have created a JHipster microservices application and want to do some frontend development on it. Launching the whole microservices stack in the ./docker-compose/ directory with docker-compose up -d works as expected. The registry shows all microservices, the gateway and a UAA instance with status 'up'. No exceptions are thrown. Logging in at http://localhost:8080 works as expected.
Launching yarn start in the gateway project directory launches the development server via webpack and browsersync. Hot reload works as expected when pointing the browser at http://localhost:9000.
Now to my problem/question: logging into http://localhost:9000 as user/user doesn't work, because the account cannot be retrieved. The thrown exception isn't very helpful, since it just states that the account is null:
    webpack-internal:///…fesm5/core.js:16064 ERROR Error: Uncaught (in promise):
    TypeError: Cannot read property 'langKey' of null
    TypeError: Cannot read property 'langKey' of null
        at LoginService.setPreferredLanguage (webpack-internal:///… login.service.ts:34)
        ....
But when I point the browser back to http://localhost:8080, I am logged in as 'user', which means the login on the backend worked.
Being new to Docker and microservices, I suspect I'm conceptually missing something (networks/ports/etc.). Any ideas that could point me towards a solution? Or what are the suggested setup/practices for developing the frontend in a JHipster microservices configuration?
The error was in the webpack.dev.js configuration file: because UAA was added to the project later, the UAA module was missing from the contexts to be proxied:
    devServer: {
        contentBase: './build/www',
        proxy: [{
            context: [
                '/microservice1',
                '/microservice2',
                '/microserviceuaa', /* !!! was missing !!! */
                /* jhipster-needle-add-entity-to-webpack - JHipster will add entity api paths here */
                '/api',
                '/management',
                '/swagger-resources',
                '/v2/api-docs',
                '/h2-console',
                '/auth'
            ],
            target: `http${options.tls ? 's' : ''}://127.0.0.1:8080`,
            secure: false,
            changeOrigin: options.tls,
            headers: { host: 'localhost:9000' }
        }],

run ansible task only if tag is NOT specified

Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of these, only the last one will work as I expect in all situations:
    - hosts: all
      tasks:
        - debug:
            msg: "not TAG (won't work if other tags specified)"
          tags: not TAG

        - debug:
            msg: "always, but not if TAG specified (doesn't work; always runs)"
          tags: always,not TAG

        - debug:
            msg: 'ALWAYS, but not if TAG in ansible_run_tags'
          when: "'TAG' not in ansible_run_tags"
          tags: always
Try it with different CLI options and you'll hopefully see why I find this a bit perplexing:
    ansible-playbook tags-test.yml -l HOST
    ansible-playbook tags-test.yml -l HOST -t TAG
    ansible-playbook tags-test.yml -l HOST -t OTHERTAG
Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing?
I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags.
Amendment: it was suggested that I post my actual use case. I'm using Ansible to drive system updates on Debian-family systems. I want to notify at the end if a reboot is required, unless the reboot tag was supplied, in which case the playbook should reboot (and wait for the system to come back up). Here is the relevant snippet:
    - name: check and perhaps reboot
      block:
        - name: Check if a reboot is required
          stat:
            path: /var/run/reboot-required
            get_md5: no
          register: reboot
          tags: always,reboot

        - name: Alert if a reboot is required
          fail:
            msg: "NOTE: a reboot is required to finish updates."
          when:
            - ('reboot' not in ansible_run_tags)
            - reboot.stat.exists
          tags: always

        - name: Reboot the server
          reboot:
            msg: rebooting after Ansible applied system updates
          when: reboot.stat.exists or ('force-reboot' in ansible_run_tags)
          tags: never,reboot,force-reboot
I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
For completeness, and since only @paul-sweeney has offered an alternative solution, I'll answer my own question with my current best solution and let people pick / up-vote their favorite:
    ---
    - name: run only if 'TAG' not specified
      debug:
        msg: 'ALWAYS, but not if TAG in ansible_run_tags'
      when: "'TAG' not in ansible_run_tags"
      tags: always
I know it's an old(ish) question, but I had a similar requirement.
It's probably something best implemented another way ... but ... sometimes it can be useful.
I'd achieve it by setting a fact if the tag IS specified, then outputting the message only if the fact is not set, something like:
    ---
    - name: "test task runs only if tag missing"
      hosts: all
      tasks:
        - name: "suppress message if tag given"
          set_fact: suppress_message=yes
          tags: reboot,never

        - name: "message"
          debug:
            msg: "You didn't say 'reboot'"
          when: suppress_message is not defined
I think we have states for controlling (e.g. started, restarted, stopped), states for installing (present, absent), and components (webserver, db, ...).
Ansible lacks a good separation of those three dimensions, and mixing them in a single tag system leads to confusion.
For example, if you have a 'webserver' tag and a 'db' tag, you may want to restart only the DB and not the webserver using a 'restart' tag.
But that won't work if the restart tasks for the DB and the webserver live in the same tasks file under the same 'restart' tag, because that tag will restart both the DB and the webserver.
So you will probably have to separate the webserver and DB tasks into two files and apply the tag at the level of the include.
Using tags means that you have a tree of options, not a matrix of options.
I like the tag concept, but the fact that you cannot use tags in conditional expressions makes it less appealing.
What I recommend is to declare tags in a role but map them into variables as a first task. So the 'restart' and 'db' tags become boolean variables in the role, and the actual tasks use when: instead of tags:.
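A minimal sketch of that tag-to-variable mapping (the task names, the postgresql service and the want_* variables are only illustrative, not from the original post):

    - name: map requested tags into booleans
      set_fact:
        want_restart: "{{ 'restart' in ansible_run_tags }}"
        want_db: "{{ 'db' in ansible_run_tags }}"
      tags: always

    - name: restart the database only when both tags were given
      service:
        name: postgresql
        state: restarted
      when: want_restart | bool and want_db | bool
      tags: always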
ansible-playbook also has a --skip-tags option. The example from the docs is:

    ansible-playbook example.yml --skip-tags "packages"

https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html
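Applied to the reboot example above, one hypothetical invocation (playbook name and a reboot-notice tag on the alert task are assumptions, not from the original post) would be to skip the notification task explicitly when you do not want it:

    ansible-playbook system-update.yml -l HOST --skip-tags reboot-notice

Note that --skip-tags takes precedence even over tasks tagged always, which is what makes this usable for "run unless this tag is given" cases.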

How to debug JavaScript tests in JHipster applications using Karma?

I have a simple monolithic application generated using JHipster v4.10.1, with the frontend using Angular 4.x. To run the JavaScript unit tests, as suggested in the documentation, I ran:

    ./node_modules/karma/bin/karma start src/test/javascript/karma.conf.js --debug
The command runs the tests, reports the coverage summary and exits, regardless of whether all tests pass or some fail. The test run output does show at one point that the debug server is loaded:

    21 11 2017 13:41:20.616:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
But because the command exits, the Karma debug server cannot be accessed. How do I run the tests so that the Karma console can be used in the browser for debugging?
I figured out that the magic flag is actually --single-run, which seems to be true by default. So the main command to run for JS debugging is:

    yarn test --single-run=false

which in turn runs

    $ karma start src/test/javascript/karma.conf.js --single-run=false
With this, the command will only exit on an explicit kill, e.g. with Ctrl+C or Ctrl+Z. The Karma debug console can then be accessed at http://localhost:9876/debug.html (assuming the default port is not already busy; if it is, the test output should tell you which port was chosen).
Additionally, you need to disable minimization (and also remove the Istanbul config, not sure why) so that you can set breakpoints and step through the .ts code in the debugger easily. I figured out this is done by making the following changes in the webpack/webpack.test.js file:
Remove the following Istanbul config from the module.rules array:
    {
        test: /src[/|\\]main[/|\\]webapp[/|\\].+\.ts$/,
        enforce: 'post',
        exclude: /(test|node_modules)/,
        loader: 'sourcemap-istanbul-instrumenter-loader?force-sourcemap=true'
    }
Add minimize: false to the LoaderOptionsPlugin in the plugins array:
    new LoaderOptionsPlugin({
        minimize: false,
        options: {
            tslint: {
                emitErrors: !WATCH,
                failOnHint: false
            }
        }
    })

Why is the DreamFactory 2.1.1 LOG_LEVEL .env parameter ignored?

Since I upgraded my DreamFactory DSP from 2.0.2 to 2.1.1-2, some configuration parameters seem to be ignored!
DF_LOG_LEVEL is one of them: even if I change it, the value stays at WARNING, the default defined in config/df.php.
Here is part of my .env file:
    ##------------------------------------------------------------------------------
    ## DreamFactory Settings
    ##------------------------------------------------------------------------------
    ## LOG Level. This is hierarchical and goes in the following order.
    ## DEBUG -> INFO -> NOTICE -> WARNING -> ERROR -> CRITICAL -> ALERT -> EMERGENCY
    ## If you set log level to WARNING then all WARNING, ERROR, CRITICAL, ALERT, and EMERGENCY
    ## will be logged. Setting log level to DEBUG will log everything. Default is WARNING.
    DF_LOG_LEVEL=DEBUG
(I've checked that there is no other line about LOG_LEVEL in my .env file.)
Here is the config/df.php section about LOG_LEVEL (where the default is WARNING):
    'version' => '2.1.1',

    // General API version number, 1.x was earlier product and may be supported by most services
    'api_version' => '2.0',

    // Name of this DreamFactory instance. Defaults to server name.
    'instance_name' => env('DF_INSTANCE_NAME', gethostname()),

    // Log level
    'log_level' => env('DF_LOG_LEVEL', 'WARNING'),
When I change DF_LOG_LEVEL to another value in my .env file, even after restarting my server nothing changes in my log file, and in the admin section under Config / System Info I still see:
    DreamFactory Instance
    Admin Application Version: 2.1.5
    DreamFactory Version: 2.1.1
    System Database: mysql
    Install Path: /opt/df2/apps/dreamfactory/htdocs/
    Log Path: /opt/df2/apps/dreamfactory/htdocs/storage/logs/
    Log Mode: single
    Log Level: WARNING
I have noticed the same trouble with other parameters, like DF_ALLOW_FOREVER_SESSIONS=true; that no longer has any effect either.
Any help or suggestions?
After making changes to the .env file in DreamFactory, it is recommended to issue the following commands (from the htdocs folder or the DF2 installation directory) so the changes are read from the .env file:

    php artisan config:clear
    php artisan cache:clear
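For reference, the usual reason Laravel-based apps like DreamFactory ignore .env changes is a cached configuration. A quick check, assuming the install path shown in the question (the exact log file name may differ on your install):

    cd /opt/df2/apps/dreamfactory/htdocs

    # if this file exists, .env values are ignored until the config cache is cleared or rebuilt
    ls -l bootstrap/cache/config.php

    php artisan config:clear
    php artisan cache:clear

    # new log entries should now honor DF_LOG_LEVEL
    tail -f storage/logs/*.log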