UI5 app crashes searching for deployed resources while served locally - sapui5

I am using the UI5 Tooling (version 2.14.9) to serve a project locally along with its custom library. Both projects are developed with OpenUI5 version 1.84.
My ui5.yaml is configured as follows (only the relevant parts are shown):
specVersion: "2.6"
metadata:
  name: "launchpad"
type: module
resources:
  configuration:
    paths:
      /my/app/: packages/my_app/webapp
      /resources/library/: packages/library/src/my_library
server:
  customMiddleware:
    ........
    - name: ui5-middleware-simpleproxy
      afterMiddleware: compression
      mountPath: /XMII/
      configuration:
        baseUri: "http://ip:port/XMII/"
        username: "user"
        password: "pw"
In my index.html:    
<script id="sap-ui-bootstrap" src="..."
    data-sap-ui-resourceroots='{
        "my.app": "./",
        "my.library": "/resources/library"
    }'
    ...
></script>
The app crashes because it tries to load some of the library files from the deployment server rather than locally, giving me a 404 - File not found error. The library is fully served by the "ui5 serve" command (including the files the app tries to fetch from the server), and many of its files are correctly loaded from the local directory, so I can't find an explanation for this behavior.
I'm not in a Fiori environment (I'm using VSCode, not SAP BAS), and I triple-checked the namespaces used when importing library files into the app.
EDIT
I found a partial solution which confuses me a lot: I added a new alias in my bootstrap script pointing to the same resource root and changed the reference (only for the crashing imports), and now all files are loaded locally.
But why?
<script id="sap-ui-bootstrap" src="..."
    data-sap-ui-resourceroots='{
        "my.app": "./",
        "my.library": "/resources/library",
        "library": "/resources/library"
    }'
    ...
></script>

Related

Ansible error using `win_copy`: "Unexpected failure during module execution"

I upgraded to Python 3.5 and the Ansible deployment started failing. I'm not sure if they are related, but here is the info:
Ansible version: 2.3.2
yaml file:
- name: Collect compiled DLLs for publishing
  win_copy:
    src: '{{ download_dir }}/tmp/xxxx/bin/Release/PublishOutput/bin/'
    dest: '{{ work_dir }}\bin'
Error:
{
    "failed": true,
    "msg": "Unexpected failure during module execution.",
    "stdout": ""
}
Upgrade to Ansible 2.5.1 to fix this issue.
Summary from the pull request:
When win_copy copies multiple files it can sometimes delete the local tmp folder that should be used by multiple modules. This means any further modules that need to access this local tmp folder will fail.
We never came across this in ansible-test as we ran a Python module on localhost which causes the ansiballz cache to stop win_copy from successfully deleting this folder.
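If upgrading right away is not feasible, one possible workaround sketch (untested, and purely hypothetical; it reuses the paths from the question) is to copy the files one at a time, so that a single `win_copy` run never handles multiple files:

```yaml
# Hypothetical workaround sketch (untested): copy the DLLs one by one
# so each win_copy invocation handles only a single file. with_fileglob
# expands on the controller, matching win_copy's local src semantics.
- name: Collect compiled DLLs for publishing
  win_copy:
    src: '{{ item }}'
    dest: '{{ work_dir }}\bin\'
  with_fileglob:
    - '{{ download_dir }}/tmp/xxxx/bin/Release/PublishOutput/bin/*'
```

Note the trailing backslash on `dest`, which tells win_copy to treat it as a directory to copy each file into.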

Error reading manifest file in bluemix deploy

I'm having a hard time to deploy this standard ecommerce project on Bluemix:
https://github.com/zallaricardo/ecommerce-devops
I've chosen to do it with a git repository and automatic deploy through the Bluemix pipeline service. After successfully building and fixing a lot of misconfigurations, the root challenge seems to be writing a correct version of the manifest.yml file for the project.
Without the manifest.yml file, the log shows the following error:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
Updating app loja-virtual-devops in org pfc-devops / space Dev as [email account]...
OK
Uploading loja-virtual-devops...
Uploading app files from: /home/pipeline/d38f0184-33da-44da-ba16-4671b491988a
Uploading 384.1M, 1679 files
228.5M uploaded...
Done uploading
OK
Stopping app loja-virtual-devops in org pfc-devops / space Dev as [email account]...
OK
Starting app loja-virtual-devops in org pfc-devops / space Dev as [email account]...
-----> Downloaded app package (452M)
-----> Downloaded app buildpack cache (4.0K)
Staging failed: An application could not be detected by any available buildpack
FAILED
NoAppDetectedError
TIP: Buildpacks are detected when the "cf push" is executed from within the directory that contains the app source code.
Use 'cf buildpacks' to see a list of supported buildpacks.
Use 'cf logs loja-virtual-devops --recent' for more in depth log information.
And with the version of the manifest which I believe (I'm new to this manifest stuff) to be okay and sufficient, the log shows:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
FAILED
Error reading manifest file:
yaml: unmarshal errors:
line 2: cannot unmarshal !!seq into map[interface {}]interface {}
The manifest.yml file is currently written as follows:
---
- name: loja-virtual-devops
  memory: 512M
  buildpack: https://github.com/cloudfoundry/java-buildpack
  domain: mybluemix.net
I'll sincerely appreciate any hint about how to fix the manifest for this application or another way to successfully deploy the project through Bluemix.
Try including the applications heading in your manifest.yml file.
example:
applications:
- name: appname
  host: app_hostname
  buildpack: java_buildpack
  instances: 2
  memory: 512M
  disk_quota: 512M
  path: .
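The unmarshal error ("cannot unmarshal !!seq into map") occurs because the manifest's top level is a YAML sequence, while Cloud Foundry expects a map with an `applications` key. Applying that structure to the manifest from the question (keeping its values, adding nothing new) would look like:

```yaml
---
applications:
- name: loja-virtual-devops
  memory: 512M
  buildpack: https://github.com/cloudfoundry/java-buildpack
  domain: mybluemix.net
```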

Hosting sails.js application from subdirectory

Due to certain restrictions, I need to host my application from a subdirectory (e.g. hostname.domain.com/mysailsapp). The webserver is nginx, set to proxy connections to sails/node.
The issue I'm facing is that grunt automatically builds the links that include frontend javascript/css/images/etc, but the entire sails app expects to be at the root level.
Everything works fine when I connect directly to sails via port 1337, as sails is at the root, but when connecting through the subdirectory URL proxied to sails, this does not work.
I didn't see an easy way to configure sails to change this behavior and did not want to manually update the grunt build path.
Maybe I'm missing something else, but I wanted to open this up on SO to see if there is another way, or if I should open an issue for sails to include such a configuration option.
Edit:
Accessing via http://host.domain.com:1337/, the HTML source includes links like this:
<!--STYLES-->
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap.css">
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap-theme.css">
<link rel="stylesheet" href="/bower_components/font-awesome/css/font-awesome.css">
<!--STYLES END-->
These links work fine since the assets are hosted directly by sails/node.
http://host.domain.com:1337/bower_components/bootstrap/dist/css/bootstrap.css - Returns 200
Hosting the sails app behind an nginx proxy at http://host1.domain.com/mysailsapp/, sails/nginx return this HTML to the browser:
<!--STYLES-->
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap.css">
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap-theme.css">
<link rel="stylesheet" href="/bower_components/font-awesome/css/font-awesome.css">
<!--STYLES END-->
The browser then attempts to load the static assets via:
http://host1.domain.com/bower_components/bootstrap/dist/css/bootstrap.css
This link will not work since the browser is attempting to load the static assets outside of the sub directory.
You can configure nginx to serve the static content from that URL.
location /mysailsapp/ {
    alias /path/to/your/app/assets/;
}
The proper way of doing this is hidden in the official docs for the sails grunt asset-linker plugin: https://github.com/Zolmeister/grunt-sails-linker, where you should use an absolute path, but this doesn't seem to work. So you can use a little hack.
The trick is to put a variable that represents your subdirectory name inside tasks/linkAssets.js, and use it in each fileTmpl, like in this example:
var subdirectory = '/dashboard';

module.exports = function(grunt) {
  grunt.config.set('sails-linker', {
    devJs: {
      options: {
        startTag: '<!--SCRIPTS-->',
        endTag: '<!--SCRIPTS END-->',
        fileTmpl: '<script src="' + subdirectory + '%s"></script>',
        appRoot: '.tmp/public'
      },
      files: {
        '.tmp/public/**/*.html': require('../pipeline').jsFilesToInject,
        'views/**/*.html': require('../pipeline').jsFilesToInject,
        'views/**/*.ejs': require('../pipeline').jsFilesToInject
      }
    },
    devJsRelative: {
      options: {
        startTag: '<!--SCRIPTS-->',
        endTag: '<!--SCRIPTS END-->',
        fileTmpl: '<script src="' + subdirectory + '%s"></script>',
        appRoot: '.tmp/public',
        relative: true
      },
      files: {
        '.tmp/public/**/*.html': require('../pipeline').jsFilesToInject,
        'views/**/*.html': require('../pipeline').jsFilesToInject,
        'views/**/*.ejs': require('../pipeline').jsFilesToInject
      }
    },
and so on for the remaining tasks; I hope you get the point.
Sails (at least the version I'm using: 0.11) includes grunt tasks that link assets with the grunt-sails-linker's option "relative" set to "true", which avoids setting the initial "/" in the asset's URL. Those tasks names have the suffix "Relative", to differentiate them from their "default" counterparts that have "relative" set to "false".
For example, there is:
devStylesRelative
prodStylesRelative
devJsRelative
... and so on
Also, there are alias tasks that run those "relative" versions instead of the "absolute" ones, e.g. linkAssetsBuild.
You can find these tasks being registered in tasks/config/sails-linker.js.
The solution I used (although I'm not sure whether it's the most standard one) is changing the default task to run linkAssetsBuild instead of linkAssets, to ensure that the tasks run when lifting sails are the "relative" ones.
You can do this by modifying the file tasks/register/default.js.

How to set up an external server root for ember-cli server

I am phasing Ember into a project that links its content from the server root (as it does in prod).
E.g. I have HTML files with links like this:
<img src="/content/foo.svg">
How can I set up ember-cli so that when I run ember server these URLs will work, without having to move the ember-cli project to the directory in my file system containing /content? I could get around this by moving content into the ember folder, but I don't want to do this at present.
my folder structure:
/content
/anotherFolder
/theEmberCliApp
    /app
    /etc etc..
but when I run it I get this error:
[Report Only] Refused to connect to 'ws://127.0.0.1:35729/livereload' because it violates the following Content Security Policy directive: "connect-src 'self' ws://localhost:35729 ws://0.0.0.0:35729".
(followed by a stack trace from livereload.js)
I think the issue is this: baseURL: '../../'. How can I get around this? For other non-Ember sites I just point Apache's httpd config to the parent of /content, but I don't want to stick the whole ember-cli project in there.
my environment.js:
/* jshint node: true */

module.exports = function(environment) {
  var ENV = {
    modulePrefix: 'ember-app',
    environment: environment,
    baseURL: '../../',
    locationType: 'auto',
    EmberENV: {
      FEATURES: {
        // Here you can enable experimental features on an ember canary build
        // e.g. 'with-controller': true
      }
    },
    APP: {
      // Here you can pass flags/options to your application instance
      // when it is created
    }
  };

  if (environment === 'production') {
  }

  return ENV;
};

How to deploy symfony2 - my dev env works but not prod

I have read the cookbook on deploying my Symfony2 app to a production environment. It works great in dev mode, but in prod mode it first wouldn't allow signing in (it said bad credentials, though I signed in with those very credentials in dev mode), and later, after an extra run of clearing and warming up the prod cache, I just get an HTTP 500 from my prod route.
I had a look in the config files and wonder if this has anything to do with it:
config_dev.yml:
imports:
    - { resource: config.yml }

framework:
    router: { resource: "%kernel.root_dir%/config/routing_dev.yml" }
    profiler: { only_exceptions: false }

web_profiler:
    toolbar: true
    intercept_redirects: false

monolog:
    handlers:
        main:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
        firephp:
            type: firephp
            level: info

assetic:
    use_controller: true
config_prod.yml:
imports:
    - { resource: config.yml }

#doctrine:
#    orm:
#        metadata_cache_driver: apc
#        result_cache_driver: apc
#        query_cache_driver: apc

monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
I also noticed that there is a routing_dev.yml but no routing_prod; the prod environment works great on my localhost, however, so...?
In your production environment, when you run the app/console cache:warmup command, you need to make sure you run it like this: app/console cache:warmup --env=prod --no-debug. Also, remember that the command will warm up the cache as the current user, so all files will be owned by that user and not the web server user (e.g. www-data). That is probably why you get a 500 server error. After you warm up the cache, run this: chown -R www-data.www-data app/cache/prod (be sure to replace www-data with your web server user).
Make sure your parameters.ini file has all the proper configs in place, since it's common for this file not to be checked in to whatever code repository you might be using. Or (and I've even done this) it's possible to simply forget to copy parameters from the dev parameters.ini file into the prod one.
You'll also need to look in app/logs/prod.log to see what happens when you attempt to log in.