Grouping rake tasks from other gems - rake

To avoid having to trigger many rake tasks every time I want to reinitialize my project, I created the following task in lib/task/twiddle.rake:
task :twiddle => %w(db:drop db:create railties:install:migrations db:migrate db:seed spree_sample:load)
Triggering each of these commands individually from the command line works.
However, running rake twiddle fails when it reaches spree_sample:load with the following error:
NoMethodError: undefined method `slug' for #<Spree::Product:0x0000000ec9b9f0>
Could someone explain why running this set of tasks works from the CLI but not through the rake task? Should I require some libraries?

The answer's relatively simple, actually: Rails loads schema information from the database only when it boots. (You can call reset_column_information on a model class to force it to reload.)
You generally don't run db:migrate followed immediately by db:seed in the same process, because Rails won't reload the app between the migration and the seeding. When db:seed runs, it still has the stale table information. This is why you see different results when you run the tasks individually: each separate invocation boots Rails afresh, which avoids the catch-22 of trying to work with a schema that is only made available by the previous command.
Also, railties:install:migrations makes no sense in your task. You run it yourself, once, as the developer: it creates several migration files (which you check in to git), and those files then live in your app. You don't re-run railties:install:migrations on a regular basis, since you've already created those migration files. (You do re-run it when you upgrade Spree, but that's a different matter.)
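One way to keep a single twiddle task, then, is to invoke each step explicitly and reset the cached column information between migrating and seeding. A minimal runnable sketch of the invocation pattern; the db:* tasks here are stubs that just record their order, so the sketch runs outside Rails, but in a real app they would be Rails' own tasks:

```ruby
require "rake"

# Stand-in tasks record their invocation order. In a real app the reset
# step would be:
#   ActiveRecord::Base.descendants.each(&:reset_column_information)
steps = []
%w[db:drop db:create db:migrate db:seed].each do |name|
  Rake::Task.define_task(name) { steps << name }
end

Rake::Task.define_task(:twiddle) do
  %w[db:drop db:create db:migrate].each { |t| Rake::Task[t].invoke }
  steps << "reset_column_information"   # reload the schema before seeding
  Rake::Task["db:seed"].invoke
end

Rake::Task["twiddle"].invoke
p steps   # => ["db:drop", "db:create", "db:migrate", "reset_column_information", "db:seed"]
```

Because twiddle calls invoke on each prerequisite itself, there is a hook between db:migrate and db:seed where the reload can happen, which the plain prerequisite list in the question does not give you.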

Related

Sylius: "cache:clear" timeout

I've developed a sylius based site on a local server. I want to deploy it in production on my OVH server.
In the Sylius Cookbook, I did not find any particular procedure, so I followed the normal one:
Upload my code to the production server with a "git clone" of my git repository
Install my vendor dependencies with "composer install"
But this step does not work because it never ends. At the end, I always have something like this:
Executing script cache:clear
[Symfony\Component\Process\Exception\ProcessTimedOutException]
The process "'/usr/local/php7.3/bin/php' '--php-ini=/usr/local/php7.3/etc/php.ini' './bin/console' --ansi cache:clear" exceeded the timeout of 20000 seconds.
I even tried "composer clearcache" before. It hasn't changed anything.
I am now trying COMPOSER_PROCESS_TIMEOUT=50000. The "composer install" was started 12 hours ago and is still not finished ...
Has anyone ever had this problem or know how to find a solution?
Is there a special step to do when working with sylius?
Because I really don't know what to do.
UPDATE:
My main lead at the moment is that the problem comes from Sylius itself, so I am trying to create a fresh install of Sylius with the Symfony 4 structure like this:
composer create-project sylius/sylius-standard
Same result:
Executing script cache:clear
[Symfony\Component\Process\Exception\ProcessTimedOutException]
The process "'/usr/local/php7.3/bin/php' '--php-ini=/usr/local/php7.3/etc/php.ini' './bin/console' --ansi cache:clear" exceeded the timeout of 20000 seconds.
I tried running composer create-project with the --no-scripts flag and then running php bin/console cache:clear separately. The bug reappears with the second command.
You should first check that you have set the permissions correctly on your var folder, as per the Symfony installation instructions.
You might also just be running out of resources on that server. I had the same issue on my last 1.7 project. The problem came from cache:clear's warmup (probably because Sylius has tons of dependencies and I added a bunch more). You might want to try editing the composer.json "scripts" section to:
"scripts": {
    "auto-scripts": {
        "cache:clear --no-warmup": "symfony-cmd",
        "assets:install %PUBLIC_DIR%": "symfony-cmd"
    }
},
Or, as you did per your update, run the install with the --no-scripts flag, followed by bin/console cache:clear --no-warmup (do make sure you install the assets after that).
The cache will then be generated on your first visit to the website instead of during warmup.
This isn't only an install-time problem: you'll have to use this workaround every time you want to clear the cache. My project is in production and working well this way; just remember to visit the site yourself after clearing, so a random user doesn't get a slow page load because the cache hasn't been generated yet.
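Spelled out, the --no-scripts route looks something like the following (a sketch assuming the standard Symfony 4 layout with a public/ directory for assets):

```shell
composer install --no-scripts
php bin/console cache:clear --no-warmup   # skip the expensive warmup
php bin/console assets:install public     # scripts were skipped, so install assets manually
```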

How to make RubyMine use already-running docker-compose service, rather than trying to start it?

I'm using docker-compose on a Rails project, with the main web service being called web.
When I try to run a test from RubyMine, it attempts to run
/usr/local/bin/docker-compose -f /Users/jy/Development/#Rails/project/docker-compose.yml -f /Users/jy/Library/Caches/RubyMine2018.3/tmp/docker-compose.override.35.yml up --exit-code-from web --abort-on-container-exit web
Even though the web container is already up.
This leads to issues with duplicate networks being created, and the web service being stopped afterward thanks to the --abort-on-container-exit.
How can I make RubyMine run my tests using a simple docker-compose exec web bundle exec rspec …, without all the preamble? I know that command works because it works from the command line (but running an individual test involves a lot of typing to fill in --example testname!)
Apparently, RubyMine doesn't support this yet.
https://youtrack.jetbrains.com/issue/RUBY-19849 is the issue that needs resolving in order to make it work properly.

Having sbt to re-run on file changes - The `~ compile` equivalent for `run`

I know it's possible to re-compile or re-run tests on file changes. I want to know if something similar is possible for the run command. ~ run does not work (which makes sense, since run never finishes).
Is there a way to create a task that watches for file changes, quits the running server, and relaunches it?
If not, what other tool would you suggest to get the same behaviour?
You will have to bring in an external plugin, sbt-revolver:
https://github.com/spray/sbt-revolver
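With sbt-revolver installed, run is replaced by reStart, which forks the app so the plugin can stop and relaunch it on each change. A sketch of the plugin setup (the version number here is an assumption; check the project's README for the current release):

```scala
// project/plugins.sbt
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
```

Then sbt ~reStart recompiles on every file change, stops the running app, and starts it again; reStop shuts it down.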

Rake file is seeing old version of database on Heroku

I'm using a rakefile to seed my database. I was seeing weird behavior (see Additional user attributes results in UnknownAttributeError and NoMethodError) and have concluded that it is operating on an old version of my database (at the very least, an old version of my Users table, perhaps more).
Running the rakefile on localhost works fine
On Heroku, printing User.column_names within the rakefile shows the old version of the table
On Heroku, printing User.column_names from within the main app shows the new version of the table
Within Heroku rails console, User.column_names shows the new version of the table
Any ideas how to resolve?
One thing to make sure you do on heroku is restart your dynos correctly. A client of mine once tried something like this:
heroku run rake db:migrate db:seed_data
Heroku's documentation at https://devcenter.heroku.com/articles/rake mentions that you should restart your app in between migrations:
After running a migration you'll want to restart your app with heroku restart to reload the schema and pick up any schema changes.
So the answer might be not to batch them in the same process; i.e., try something like
heroku run rake db:migrate; heroku run rake db:seed_data
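Putting the quoted advice together, the deploy sequence becomes three separate processes: migrate, restart so the app reloads the schema, then seed. (db:seed_data is the app's own task name from the question.)

```shell
heroku run rake db:migrate
heroku restart               # reload the app so it picks up the new schema
heroku run rake db:seed_data
```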

In TeamCity, can I run a command-line application for the duration of a build?

I have a command-line application that I want to run in a build configuration for the duration of the build, then shut it down at the end when all other build steps have completed.
The application is best thought of as a stub server, which will have a client run against it, then report its results. After the tests, I shut down the server. That's the theory anyway.
What I'm finding is that running my stub server as a command line build step shuts down the stub server immediately before going to the next build step. Since the next build step depends on the server running, the whole thing fails.
I've also tried using the custom script option to run both tools one after another in the same step, but that results in the same thing: the server, launched on the first line, is shut down before invoking the second line of the script.
Is it possible to do what I'm asking in TeamCity? If so, how do I do it? Please list any possibilities, right up to creating a plugin (although the easier, the better).
Yes, you can: do it in a NAnt script and have TeamCity run the script; look for spawn and the NAntContrib waitforexit task.
However, I think you would be much better off creating a mock class that the client uses only when running the tests, instead of round-tripping to the server during the build, as that can be a bit problematic: sometimes ports are closed, sometimes the server hangs from the last run, etc. That way you can run the tests and make sure the code does the right thing when the mock returns whatever it needs to return.
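If you do keep a real stub server, the custom-script route from the question can work when the script itself backgrounds the server and kills it at the end, so TeamCity sees one long-running build step. A minimal sketch, assuming a POSIX shell agent; sleep 60 and echo are placeholders for the real server binary and client test run:

```shell
# One TeamCity custom-script build step: start server, run tests, clean up.
sleep 60 &                                   # placeholder for the stub server
SERVER_PID=$!
trap 'kill "$SERVER_PID" 2>/dev/null' EXIT   # shut the server down on exit
echo "client tests ran against server PID $SERVER_PID"
```

The trap guarantees the server is stopped even if the tests fail, which also avoids the "server hangs from the last run" problem the answer mentions.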