When using the version of fastlane installed from Homebrew, I don't know how to use a development build of a plugin. I see that fastlane add_plugin still generates a Pluginfile. If I try adding gem "fastlane-plugin-xxx", git: "https://github.com/yyy/xxx", or something similar with a :path argument, it always tries to install the version from RubyGems.
I have two specific cases where this makes things inconvenient:
I'm building a new plugin for a client. I want mobile devs to review it internally before it is published.
A user reported an error from a published plugin. I want them to try a dev version from the master branch in order to get more information.
In both cases, I think it's necessary to use Ruby and Bundler. Not everyone has a lot of Ruby experience, so getting someone set up can be an obstacle.
Edited 2017-07-06:
Part of the answer is obvious. When you run fastlane add_plugin, it prompts you if it cannot find the gem:
[jdee#Jimmy-Dees-MacBookPro TestApp]$ fastlane add_plugin my_new_action
[10:46:28]: Seems like the plugin is not available on RubyGems, what do you want to do?
1. Git URL
2. Local Path
3. RubyGems.org ('fastlane-plugin-my_new_action' seems to not be available there)
?
This works well with the fastlane gem, e.g. with RVM:
gem install fastlane
fastlane add_plugin my_new_action
The self-contained binary from Homebrew also prompts you for a Git URL or a local path, but I consistently get build failures from native extensions in the json gem on OS X Sierra. This may be due to plugin dependencies, but I'm not sure. That still makes the use cases above a little awkward, and I'm surprised this fails with the self-contained version, which I'd expect to be more robust than a Ruby install. At least the self-contained version removes the need for bundle install and bundle exec.
The answer here is basically that the self-contained version of Fastlane does not really work with plugins. In particular, when fastlane add_plugin or fastlane install_plugins runs bundle install, it tries to build the json gem, a dependency of the fastlane gem with native extensions, and cannot find <stdio.h>, presumably because /usr/include is screened out of the self-contained bundle to insulate it from the system Ruby. Beyond that point you can run bundle install and bundle exec yourself, but in general a Gemfile is required to work with Fastlane plugins; the CLI will even tell you to run bundle exec once you have a Gemfile.
When using the fastlane gem, fastlane add_plugin will usually work, but again you will have a Gemfile and want to run bundle exec fastlane afterward.
You can simply modify your Pluginfile to use a path or a Git repo and rerun bundle install. There isn't a much easier way to do this at the moment.
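For example, a Pluginfile pointing at a local checkout or at a development branch might look roughly like this (the plugin name, path, and URL below are placeholders):
# fastlane/Pluginfile
# local development checkout (placeholder path)
gem 'fastlane-plugin-my_new_action', path: '../fastlane-plugin-my_new_action'
# or a development branch from Git (placeholder URL)
# gem 'fastlane-plugin-my_new_action', git: 'https://github.com/yyy/fastlane-plugin-my_new_action', branch: 'master'
After editing it, rerun bundle install and invoke fastlane through bundle exec as described above.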
Related
The official way of working with expo-cli is to install it globally; see https://docs.expo.io/get-started/installation/#installing-expo-cli:
Installing Expo CLI
# Install command line tools
npm install --global expo-cli
However, I have never found any explanation of why it is supposed to be global (other than that this simplifies the initial expo init command). To my thinking, having a global package undermines the whole idea behind npm and local node_modules. Essentially, expo-cli is a direct dependency of the project: it's used for running the dev version with expo start and also for creating production builds.
Different versions of expo-cli behave differently; they may even expect different values in app.config.ts. That means it's not safe to upgrade expo-cli globally for one project and then go back to working on an older project that was created and maintained with an older version of the (global) expo-cli.
None of this would have happened if expo-cli were a normal local project dependency like expo (the SDK package).
So, my question is: is there a real reason for keeping expo-cli global? What do I lose if I move it to the local project dependencies? And how come the Expo documentation never even mentions such an option?
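For illustration, the local setup I have in mind would be something like this (assuming npx resolves the project-local binary from node_modules/.bin):
npm install --save-dev expo-cli   # pin a per-project expo-cli in devDependencies
npx expo-cli start                # runs the project's own copy instead of the global one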
I am not a developer, but I had an app built a couple months ago. The developer we had won't help us at all anymore (not sure why).
Please excuse me if I don't use proper terms.
So the project was done on Expo. I no longer have access to the original expo project, but I have all the code he wrote in a Github repository.
Is it possible to take the code from GitHub and paste it into Expo XDE and possibly reproduce the app on Expo? (Or does that sound possible?)
Please let me know.
Yes, you could do this. It is important that you copy all the project files from the GitHub repository into your new Expo project. Don't forget to install all the necessary libraries into your new Expo project, e.g. via npm install.
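A minimal sketch, assuming the repository contains a complete Expo project (package.json, app.json, etc.; the URL is a placeholder):
git clone https://github.com/your-org/your-app.git
cd your-app
npm install   # installs expo and the other dependencies listed in package.json
After that you can open the project folder in Expo XDE (or run it with the expo CLI).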
I'm a complete react native noob, I've been doing this, and I love it:
Develop prototype on https://snack.expo.io
Here I can develop and test on the browser, test on my phones, and on emulators. It's great.
When I'm ready to build, I download the code package from the Snack IDE
This downloads a zip file with everything except Expo and imported libraries.
I unzip and go into the folder with my terminal and install the libraries.
Inside the folder, I run these commands to install Expo and the regular libraries:
$ npm install expo # install expo
$ npm install # install a bunch of required libraries
# Then I run these two lines until my project builds
$ npm run web # try to run - it will tell me which libraries to install, one by one
$ npm install <library> # install each library
Eventually I'll move to using the command line only, but this is both a no-brainer for a noob like me and training wheels for learning npm and expo.
I'm working on building a docker image to be able to run all of our Perl applications. The applications require hundreds of CPAN modules to be installed. The full build of the docker image takes about an hour to complete.
After doing the initial image, I'm not sure how best to handle ongoing updates.
1. We could keep a single Dockerfile in git, modify it as required, and push new builds up to Docker Hub. However, if the person doing the build doesn't have all of the intermediate images, adding a single CPAN module could be an extremely tedious process, and it might take an hour before they even know whether the new module installs correctly. It would also download every CPAN module again, which seems a bit risky, as a newer release of some module might introduce a breaking change.
2. Alternatively, the person doing the build could pull the latest Docker Hub image, install the CPAN module interactively, commit the result, and push the new image to Docker Hub. But then we would only have our Docker Hub images and no master Dockerfile.
3. Another option would be to create a Dockerfile for each new build that references the previous Docker Hub image (see the sketch below). This seems overly complicated though.
Option 1) seems wrong. I'm fairly sure we don't want to be rebuilding the entire image from the base OS just to install one additional module. However being dependent on images without Dockerfiles seems risky as well.
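For what it's worth, option 3 would boil down to a one-line Dockerfile per change, something like this (the image name, tag, and module are hypothetical):
FROM ourorg/perl-apps:2017-05-01
RUN cpanm Some::New::Module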
You could use the standard module installer for your underlying OS on your docker image.
For example, if it's Red Hat then use yum, and only use CPAN when the modules are not available as packages:
FROM centos:centos7
RUN yum -y install gcc perl perl-App-cpanminus perl-Config-Tiny && yum clean all
RUN cpanm Some::Module; rm -fr /root/.cpanm; exit 0
taken from here and modified
I would try to have a base image that the actual applications build on.
I would also avoid doing things interactively (i.e. script everything in a Dockerfile), as you want to be able to repeat the build when upstream dependencies change, which Docker Hub can do for you.
EDIT
You can convert Perl modules into your own Debian packages using dh-make-perl.
You can load these into your own Ubuntu repo using reprepro, or a paid solution such as Artifactory.
These can then be installed using apt-get when you use your repo as a source from within a Dockerfile.
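Roughly, assuming a Debian/Ubuntu build host and an internal apt repo (the repo URL, distribution codename, and module name below are placeholders), the flow looks like this:
# on the build host: turn a CPAN distribution into a .deb (produces libsome-module-perl_*.deb)
dh-make-perl --build --cpan Some::Module
# publish the .deb with reprepro (or Artifactory), then install it from the Dockerfile:
RUN echo "deb http://apt.internal.example/ubuntu xenial main" > /etc/apt/sources.list.d/internal.list \
    && apt-get update \
    && apt-get install -y libsome-module-perl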
When I have tried a similar thing before, there were a few problems:
Your apps don't work with the latest versions of the modules
There are far more dependencies than you expected
Some modules won't package
The benefits are:
You keep the build tools (gcc, etc.) off the app servers
You know much more about your dependencies
I've made a Rails 3.1 proof-of-concept application that also uses Haml, adapting the examples from the railstutorial.org book, and locally everything works fine.
But when I try to push to heroku, therubyracer fails to build on the server (full output):
Installing therubyracer (0.8.2) with native extensions /usr/ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/installer.rb:483:in `build_extensions': ERROR: Failed to build gem native extension. (Gem::Installer::ExtensionBuildError)
My Gemfile is pretty standard, so I would really appreciate if somebody could help me understand what's going wrong, and maybe give me a hand in finding a solution.
These answers are out of date. You can now just use therubyracer in both environments, as long as you have version '>= 0.11.2'.
I should note that I am the author of therubyracer and use it in several production Heroku apps, both at asset compile time and at runtime.
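In Gemfile terms that is simply (version constraint as per the note above):
gem 'therubyracer', '>= 0.11.2'
followed by bundle update therubyracer so the lockfile picks up the newer version before redeploying.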
Heroku no longer requires therubyracer or therubyracer-heroku, and it strongly discourages using them, as these gems use a very large amount of memory.
If you are using them, your next deploy will fail!
You have 2 choices:
Add gem 'therubyracer', :platforms => :ruby to the :assets group and upgrade your Ruby version, then remove your Gemfile.lock and run bundle install (see the snippet after this list).
Run assets:precompile on your local machine and push the compiled assets to heroku (don't forget to remove the therubyracer gems from production).
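For the first option, the Gemfile change would look roughly like this (the surrounding asset gems and versions are only illustrative of a stock Rails 3.1 Gemfile):
group :assets do
  gem 'sass-rails',   '~> 3.1.0'
  gem 'coffee-rails', '~> 3.1.0'
  gem 'uglifier'
  gem 'therubyracer', :platforms => :ruby
end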
The Rails asset pipeline supports the Sass language by default. Instead of the rails-bootstrap gem (LESS) you can use bootstrap-sass-rails.
You need to use therubyracer-heroku.
Just define a pair of groups in your Gemfile to install the correct one where required.
group :development, :test do
gem 'therubyracer'
end
group :production do
gem 'therubyracer-heroku'
end
I've installed REE on CentOS 5 for a very special task (using Rails 2.3.10 and Ruby 1.8), and I really need it to be isolated.
In this case I won't be using Bundler or anything like that.
Everything works fine if I set up every gem manually via
/opt/ree/bin/gem install agem
But when I run
/opt/ree/bin/rake gems:install
in a project prepared for this command, all (or most; I haven't checked every dependency) of the gems are installed via /usr/bin/gem into the common gem path, where I do not need any of them.
This is an issue, and I do not want to install all the gems manually. Has anybody ever run into this issue and perhaps found a solution?
The solution that really helped me was to temporarily replace /usr/bin/gem with a symbolic link to /opt/ree/bin/gem.
With this replacement, /opt/ree/bin/rake gems:install worked as expected: all required gems were installed into the REE path. Restoring /usr/bin/gem to the original gem executable made the system stable again.
This is not a very clean solution, but it works, so it can be used like a hammer in a critical situation.
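Concretely, the swap described above amounts to something like this (run as root; paths as in my setup):
mv /usr/bin/gem /usr/bin/gem.orig      # keep the original system gem aside
ln -s /opt/ree/bin/gem /usr/bin/gem    # point 'gem' at REE
/opt/ree/bin/rake gems:install         # now installs into the REE gem path
rm /usr/bin/gem                        # remove the symlink
mv /usr/bin/gem.orig /usr/bin/gem      # restore the system gem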
There's either a GEM_HOME variable somewhere in the environment, or the Ruby being invoked is not REE. Therefore, I'd suggest at least three things to try:
Start with an almost empty environment (run env -i sh, for example) and run the rake command again to see whether it still installs gems into the common gem path. Be careful: because env -i gives an empty environment, you might see complaints from RubyGems (since HOME and everything else is unset)...
Check that the shebang line (first line of the rake program) really indicates your REE binary and not something else
Finally, do run rake using the REE binary with /opt/ree/bin/ruby /opt/ree/bin/rake gems:install
This should give you an indication of what's going wrong. All in all, I think the environment is the most probable culprit here.
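For reference, the three checks could be run roughly like this (from the project directory):
env -i sh -c '/opt/ree/bin/rake gems:install'      # 1. near-empty environment
head -1 /opt/ree/bin/rake                          # 2. confirm the shebang points at the REE ruby
/opt/ree/bin/ruby /opt/ree/bin/rake gems:install   # 3. force the REE interpreter explicitly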