Using nx run-many shows: Another process, with id ..., is currently running ngcc - nrwl-nx

We have an Nx monorepo with 10+ Angular apps and 150+ libs. Our CI server runs all builds in Docker containers on Ubuntu. We store and share the computation cache across all build agents. We currently use nx affected:apps to detect which apps need to be built and nx affected:libs to create a list of affected libraries for each app. This approach enables us to run distributed builds: we now have a dedicated build plan for each app and its dependent libraries.
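For illustration, the CI step that produces those lists is roughly the following (the base/head refs are placeholders for our actual values):
# base/head refs below are placeholders for the real CI values
npx nx affected:apps --base=origin/master --head=HEAD --plain
npx nx affected:libs --base=origin/master --head=HEAD --plain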
So, we are using nx affected, computation caching, and distributed builds, but we are still struggling with long build durations because of the large number of tests we need to run.
The next step we took was to use nx run-many to run those tests in parallel, but this did not work for us. Even with 2 parallel processes we see the following error:
Another process, with id ..., is currently running ngcc. Waiting up to 250s for it to finish.
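For reference, the command we run looks roughly like this (the project list and parallel count are just illustrative):
# project names and --parallel value are illustrative
npx nx run-many --target=test --projects=app1,lib-a,lib-b --parallel=2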
We have tried all the suggested workarounds without any success.
If I run the same command inside the same Docker container, but running on my local machine, everything works fine.
So, instead of reducing the build time, this approach adds to the total build duration (if we want to run 4 parallel processes, we need to wait for about 16 minutes before the tests actually start).
Any ideas why this is happening?

I had a similar problem with Nx throwing "ngcc is already running".
What helped me was setting the --parallel flag explicitly, changing the command
from:
npx nx run-many --target=build --prod --all
to:
npx nx run-many --target=build --prod --all --parallel=1
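Note that --parallel=1 effectively serializes the tasks. If you still want real parallelism, another option (my own suggestion, assuming your Angular version still relies on ngcc) is to run ngcc once up front so that the parallel processes find the packages already compiled:
# run ngcc once (flags from the Angular docs), then the parallel tasks; target/parallel values are illustrative
npx ngcc --properties es2015 browser module main --first-only --create-ivy-entry-points
npx nx run-many --target=test --all --parallel=4
Many teams wire that ngcc line into a postinstall script so it runs right after npm install.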

Related

pytest --forked flag processes don't die causing GitLab build to hang

We are using pytest-xdist to run pytest tests with the --forked flag (we are writing integration tests for code that uses Scrapy). However, we noticed that when the tests finish running, some of the created child processes remain alive, which causes our GitLab build to hang.
The Python version we're using is 3.7.9.
I couldn't find other mentions of the issue online. Is anyone familiar with it? Are there any solutions/fixes/workarounds?
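For illustration, a sketch of the kind of invocation described above, plus a diagnostic step to spot surviving workers (the worker count, test path, and the diagnostic itself are my assumptions, not from the question):
# worker count and test path are placeholders
pytest --forked -n 4 tests/
# diagnostic: list any forked workers that outlive the run (the grep pattern avoids matching itself)
ps aux | grep '[p]ytest' || true
Running the second line, e.g. in an after_script step, at least makes any leftover processes visible in the job log.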

Jodconverter: Fails randomly when running tests with libreoffice (docker)

While running conversions in test suites using jodconverter, it was randomly crashing and the tests failed.
We are using LibreOffice with jodconverter, running the tests in Docker. It took too much time to figure this out, so I created this question.
Solution:
Use -PuseLibreOffice with the test command to signal jodconverter to use the LibreOffice libraries. The default is OpenOffice.
./gradlew test -PuseLibreOffice
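As a quick sanity check (my own suggestion, not part of the original answer), you can confirm inside the Docker image that LibreOffice, rather than OpenOffice, is the suite actually on the path:
# should print the soffice path and a LibreOffice version string if LibreOffice is installed
which soffice
soffice --version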

Logs not visible while npm task is running

In Azure DevOps, when running an npm task on a private agent, the logs are not visible until the task completes.
The output pane just appears blank while the task runs.
This happens for all npm tasks, both those that produce a large amount of log output and those that produce very little.
The task in question is a long-running task that uses TestCafe to run functional UI tests against a website, but the same happens for other npm commands such as install.
I tested npm tasks on my side and the logs loaded normally.
Here are a few points you can try to see if it returns to normal:
What version of npm are you using? If it isn't the latest, you can also try the latest version of the npm task.
Judging by your screenshots, you are using an older version of the UI; try enabling Multi-stage pipelines in Preview features.
Try running the task on a different agent and see if it's still the same.
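To help with the first point, a simple diagnostic (my own addition, not from the original answer) is to print the tool versions in a plain script step on the agent:
# run as a plain script step on the agent to see which versions the task is actually using
npm --version
node --version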

'ionic cordova run android' command stuck for 11 minutes

I have two git branches: one is development and the other is lazy-load.
I implemented lazy loading in the lazy-load branch, but the build gets stuck for 11 minutes after 'copy finished in 19.81 s', which makes the build process far too slow.
The development branch, which is not lazy loaded, builds normally, but the app takes too much time to start up.
I want the lazy-load branch's build not to take such a long time.
(Screenshots: the output of ionic cordova run android while it is stuck, and after it finishes building.)
It seems that webpack takes 11 minutes to finish its task.
When I run the app in the development branch, which does not have any lazy loading, webpack takes 34.59 s.
If you want to reduce the amount of work Ionic has to do between builds while you're editing, try using the --livereload flag. It will then only do an incremental webpack update instead of a full webpack run, so you can make edits and test without having to sit through the ~11 minute process every time.
ionic cordova run android --livereload
or
ionic serve
As for the long build time itself, I would need to see more of your code if this only started happening recently. That said, on my own projects a build can also take over 10 minutes, especially when running with the --prod flag.
Also, always make sure you're on the latest version of Ionic:
npm install @ionic/cloud-angular@latest --save
sudo npm update -g cordova
sudo npm update -g ionic
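After updating, a quick way to confirm which versions the CLI is actually using (my own suggestion, not part of the original answer) is:
# prints the Ionic CLI, Cordova, and Node versions in use
ionic info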

Running SBT (Scala) on several (cluster) machines at the same time

So I've been playing with Akka Actors for a while now, and have written some code that can distribute computation across several machines in a cluster. Before I run the "main" code, I need to have an ActorSystem waiting on each machine I will be deploying over, and I usually do this via a Python script that SSH's into all the machines and starts the process by doing something like cd /into/the/proper/folder/ and then sbt 'run-main ActorSystemCode'.
I run this Python script on one of the machines (call it "Machine X"), so I will see the output of SSH'ing into all the other machines in my Machine X SSH session. Whenever I do run the script, it seems all the machines are re-compiling the entire code before actually running it, making me sit there for a few minutes before anything useful is done.
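For illustration, the launcher does something equivalent to this shell loop (the host names are placeholders):
# host names are placeholders for the actual cluster machines
for host in machine1 machine2 machine3; do
  ssh "$host" 'cd /into/the/proper/folder/ && sbt "run-main ActorSystemCode"' &
done
wait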
My questions are:
Why do they need to re-compile at all? The same JVM is available on all machines, so shouldn't it just run immediately?
How do I get around this problem of making each machine compile "its own copy"?
sbt is a build tool, not an application runner. Use sbt-assembly to build an all-in-one JAR, put that JAR on each machine, and run it with the scala or java command.
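A rough sketch of that flow, assuming sbt-assembly is already added to project/plugins.sbt (the JAR name, Scala version, and host/paths are placeholders):
# JAR name, Scala version, and target host/paths are placeholders
sbt assembly
scp target/scala-2.12/my-cluster-app-assembly-0.1.jar user@machine1:/shared/apps/
ssh user@machine1 'java -cp /shared/apps/my-cluster-app-assembly-0.1.jar ActorSystemCode'
This way the code is compiled once, on one machine, and every other node just runs the prebuilt JAR.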
It's usual for a cluster to have a single partition mounted on every node (via NFS or Samba). You just need to copy the artifact onto that partition and it will be directly accessible on each node. If that's not the case, you should ask your sysadmin to set it up.
Then you will need to launch the application. Again, most clusters come with MPI. The tools mpirun (or mpiexec) are not restricted to real MPI applications and will launch any script you want on several nodes.
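For example, something along these lines (the hostfile contents and JAR path are placeholders, assuming the assembly JAR sits on the shared mount):
# launches the same command on the nodes listed in 'hostfile'; JAR path is a placeholder
mpirun -np 4 --hostfile hostfile java -cp /shared/apps/my-cluster-app-assembly-0.1.jar ActorSystemCode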