When to use “shadow-cljs release app” vs “shadow-cljs compile app”?

While building the web app for a new Firebase deploy, I have been using:
$ shadow-cljs compile app
I have even been clearing the compiler cache and re-compiling, like so:
$ rm -rf .shadow-cljs
$ shadow-cljs compile app
Apparently, the release command could also be used:
$ shadow-cljs release app
What is the difference between the two? What are the implications of each choice?
On a build before a new deployment, what would be the best practice?
Thanks

The compile command builds a development version of the app and exits: https://shadow-cljs.github.io/docs/UsersGuide.html#_development_mode
The release command builds a production-optimized version (minified via the Closure compiler's :advanced optimizations) and exits: https://shadow-cljs.github.io/docs/UsersGuide.html#_release_mode
For a deployment, release is what you want: a compile build is larger, unoptimized, and intended only for development.
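As a minimal pre-deploy sequence, assuming the build is named app in shadow-cljs.edn and its :output-dir points into the directory Firebase serves (both are assumptions about this particular project):
$ shadow-cljs release app   # optimized :advanced build, safe to deploy
$ firebase deploy           # publish the optimized assets
Clearing .shadow-cljs first is usually unnecessary for a release build.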

Related

yocto opendds does not create the nativesdk files

I am trying to build an application that communicates over DDS using OpenDDS. I am using the OpenDDS layer on kirkstone. The bitbake image is built with the OpenDDS libraries, but when I build the SDK it seems the nativesdk packages are not installed. When I run cmake I get the error "Missing required dependencies OPENDDS_IDL;ACE_GPERF;TAO_IDL".
From opendds.inc I see there is a nativesdk install step. I added a junk line and expected that building opendds or running the SDK populate task would fail, but it seems the nativesdk function is never run.
The build is for an imx8mm Variscite SoM with the command bitbake fsl-image-qt5 -v populate_sdk_ext.
The layer with the problem is meta-opendds (kirkstone branch), building version 3.22.
The layer has a .bb file that requires an opendds.inc file, which contains the nativesdk install:
do_install:append:class-nativesdk() {
    dfdf  # junk line added to trigger a failure
    ln -sf ${bindir}/opendds_idl ${D}${datadir}/dds/bin/opendds_idl
    ln -sf ${bindir}/ace_gperf ${D}${datadir}/ace/bin/ace_gperf
    ln -sf ${bindir}/tao_idl ${D}${datadir}/ace/bin/tao_idl
}
I added the junk line to trigger a failure, but neither building the SDK nor building the image fails.
Why is the nativesdk function not run, and why does the SDK lack the opendds_idl executable?
Thanks
I was able at last to build the nativesdk files. I had to build them explicitly, as they are not built by default: I had to run bitbake nativesdk-opendds manually.
My mistake was believing the nativesdk variant would be built by default. My assumption was that anyone using OpenDDS will most probably build subscriber/publisher applications and will therefore need the opendds_idl executable.
Update: after testing, nativesdk-opendds did not solve the problem.
In general, after adding the OpenDDS layer I cannot build the Messenger example, for lack of the opendds_idl and the other two executables (tao_idl, ace_gperf).
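If the goal is to get those host tools into the generated SDK, one approach (a sketch, assuming the recipe really provides a nativesdk variant via BBCLASSEXTEND, which the install function above suggests) is to add the package to the SDK host task in conf/local.conf and regenerate the SDK:
# conf/local.conf -- include the OpenDDS host tools in the SDK's host sysroot
TOOLCHAIN_HOST_TASK:append = " nativesdk-opendds"
$ bitbake fsl-image-qt5 -c populate_sdk_ext
Note that a class-nativesdk override only runs when the nativesdk variant of the recipe is actually built, which would explain why the junk line never triggered a failure during a plain image build.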

flutter run --no-build does not work together with --release

I run flutter build ios and subsequently flutter run --release --no-build.
As far as I understand, the second command should only run the app and not rebuild the executable, but it always rebuilds anyway. Am I missing something?
Update: When I run flutter run --no-build, it first builds the executable and then says Could not find the built application bundle at build/ios/iphoneos/Runner.app. Error launching application on <my iPhone>. This makes sense because there is no Runner.app but only a <application name>.app in that directory.
I can work around this by running flutter run --use-application-binary.
When running flutter run --no-build --release, --no-build clashes with --release.
Here are the applicable sections of flutter run -h (highlighting is mine):
--[no-]build If necessary, build the app before running.
--release Build a release version of your app.
Apparently, --release will always cause a rebuild. If you omit it, you will see that the debug build does not recompile before running a second time.
This means that it is not possible (at least not like this) to run a release build without compiling. You could potentially file an issue on GitHub about it.
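A possible way to run a release build without recompiling, building on the workaround above (a sketch; the bundle name under build/ios/iphoneos depends on the project, so MyApp.app is a placeholder):
$ flutter build ios --release
$ flutter run --release --use-application-binary=build/ios/iphoneos/MyApp.app
Pointing --use-application-binary at the already-built bundle sidesteps the rebuild that --release otherwise forces.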

How to avoid Edeliver deployment error: "vm.args: No such file or directory"?

Context
We are trying to use edeliver to deploy a "Hot Upgrade" of a Phoenix Web Application to a remote Virtual Machine instance.
Our aim is to build an "upgrade" version of the app each time so that the app can be "hot" upgraded in production without any downtime.
We have succeeded in doing this "hot upgrade" on a "Hello World" phoenix app:
https://github.com/nelsonic/hello_world_edeliver which is automatically deployed from Travis-CI when the build passes. see: https://travis-ci.org/nelsonic/hello_world_edeliver/builds/259965752#L1752
So, in theory this technique should work for our "real" app.
Attempting to Deploy a "Real" Phoenix App using Edeliver
Ran the following command (to build the upgrade):
mix edeliver build upgrade --auto-version=git-revision --from=$(git rev-parse HEAD~) --to=$(git rev-parse HEAD) --verbose
i.e. "build the upgrade from the previous git revision to the current one"
So far, so good. "Release successfully built!"
Error: vm.args: No such file or directory
When we attempt to deploy the upgrade:
mix edeliver deploy upgrade to production --version=1.0.3+86d55eb --verbose
cat: /home/hladmin/healthlocker/releases/1.0.3+86d55eb/vm.args: No such file or directory
Note: we have a little bash script that reads the latest upgrade version available in .deliver/releases and deploys that; see: version.sh
Question:
Is there a way to ignore the absence of the vm.args file and continue the deployment?
Or if the file is required to complete the deployment, is there some documentation on how to create the file?
Note: we have read the distillery "Runtime Configuration" docs: https://github.com/bitwalker/distillery/blob/master/docs/Runtime%20Configuration.md and are sadly none the wiser ...
Additional Info
Environment
Localhost: Mac running Elixir 1.4.2
Build Host: Ubuntu 16.04.2 LTS running Elixir 1.4.5
mix.exs file: https://github.com/healthlocker/healthlocker/blob/continuous-delivery/mix.exs
edeliver version: 1.4.4
Build tool: distillery version: 1.4.0
Umbrella project: yes.
This question was also asked on: https://github.com/edeliver/edeliver/issues/234
As mentioned by others, the vm.args file is necessary for the BEAM to run the release. A default file is created by distillery during the release build process and should be located in releases/<version>/vm.args. From your log output it looks like the expected directory is being checked.
Can you show us the contents of /home/hladmin/healthlocker/releases/?
Can you confirm that the default vm.args file is being created when building the release and extracting it (outside of the upgrade process)?
You also asked:
Or if the file is required to complete the deployment, is there some documentation on how to create the file?
If diagnosing the problem with the default vm.args file doesn't get you anywhere, you can also write your own file and configure distillery to use it instead of the default. The details are in the distillery configuration docs. In short,
add the vm_args setting to your distillery config, which should be at rel/config.exs (relative to your project root), for example:
environment :prod do
  set vm_args: "<path>/vm.args"
  # ...
end
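For reference, a minimal vm.args usually needs only a node name and a cookie (the values below are placeholders, not taken from this project):
## Name of the node
-name myapp@127.0.0.1
## Cookie used for distributed Erlang authentication
-setcookie some_secret_cookie
If REPLACE_OS_VARS=true is exported, distillery will also substitute ${VAR}-style environment variables inside this file at boot, which is what the "Runtime Configuration" docs describe.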

Edeliver failing to start release

When running mix edeliver version production locally, it fails with the following output:
EDELIVER MYAPP WITH VERSION COMMAND
-----> getting release versions from production servers
production node:
user : app_user
host : my_app
path : /home/app_user/my_app.io
response: bash: line 4: bin/my_app: No such file or directory
bash: line 47: bin/my_app: No such file or directory
VERSION DONE!
The error is obvious, as the executable lives in: ~/my_app.io/my_app/_build/prod/rel/my_app/bin
I'm also unable to run any of the start/stop/restart etc. commands.
The deployment was successful, because when I ssh in and run the start command, it works.
I would like to know if anyone can point me to the config parameter I'm missing, as the local commands are a lot more efficient.
Figured out the problem.
I had only built my app, by running: env MIX_ENV=prod mix edeliver build release
I was probably too excited and forgot to actually deploy the release, using something similar to: mix edeliver deploy release to production --version=0.0.1
Hope someone else might benefit from this also.
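For reference, the full cycle is build, deploy, then start (a sketch; 0.0.1 stands in for whatever version the build step reports):
$ env MIX_ENV=prod mix edeliver build release
$ mix edeliver deploy release to production --version=0.0.1
$ mix edeliver start production
$ mix edeliver version production
After the deploy step, bin/my_app exists under the configured deploy path on the production host, so the remote start/stop/restart/version commands work.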

Swift tests pass locally but the build fails on Travis-CI

I'm trying to set up my CocoaPod project to run its tests on Travis-CI after a push. I'm using xctool 0.2.4 to run the tests, and it works fine locally. But as soon as it runs on Travis-CI, the compile build steps fail for various reasons which I can't seem to reproduce locally.
xctool test -project test/MEViewExtensions.xcodeproj -scheme MEViewExtensions -sdk iphonesimulator
Here are two failures which work fine on my machine:
https://travis-ci.org/materik/meviewextensions/builds/68458750
Basic Block in function '_TFE16MEViewExtensionsCSo8UIScreeng5widthV12CoreGraphics7CGFloat' does not have terminator!
label %entry2
LLVM ERROR: Broken function found, compilation aborted!
https://travis-ci.org/materik/meviewextensions/builds/68465719
/Users/travis/build/materik/meviewextensions/test/MEViewExtensionsTests/UIViewTests.swift:22:33: error: type '#autoclosure () -> CGFloat' does not conform to protocol 'FloatLiteralConvertible'
XCTAssertEqual(view2.x, 10.0)
I would at least like to get the same errors on my machine to be able to debug it. Any ideas?
The problem was that I was running the tests locally on Xcode 6.3, with a newer version of Swift, while the Travis builds ran on Xcode 6.1, with an older version that produced different kinds of errors. I had to add osx_image: beta-xcode6.3 to my .travis.yml file to force Travis to use the later version, and then it was fine.
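A minimal .travis.yml reflecting that fix (a sketch; the project path and scheme come from the question, everything else is an assumption about the setup):
language: objective-c
osx_image: beta-xcode6.3  # pin the Xcode/Swift version to match the local toolchain
script:
  - xctool test -project test/MEViewExtensions.xcodeproj -scheme MEViewExtensions -sdk iphonesimulator
With the image pinned, local and CI runs use the same Swift compiler, so failures reproduce on both sides.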