When adding a new project to a Rush monorepo, is there a way for Rush to automatically insert the dev dependencies into the package.json? For example, I want to use the same test frameworks across projects, so it would be good to have Rush sync the dev dependencies.
No, there is no way to do this. Rush has no idea which package requires which dependencies and, as such, you'll need to add them manually to each one.
However, once you've configured your package.json files accordingly, Rush will help you maintain dependency versioning across your monorepo. The precise behaviour can be configured by:
setting preferredVersions in the common-versions.json file
using a version policy such as lockStepVersion
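For example, a minimal sketch of common/config/rush/common-versions.json that nudges every project toward the same test framework could look like this (the package name and version range are just placeholders):

{
  "preferredVersions": {
    "jest": "~29.0.0"
  }
}

Rush will then prefer that version whenever a project depends on jest, although each project still has to list it in its own devDependencies.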
(I presume you found this answer already, but in case anyone stumbles across this in the future.)
If you run rush add -h you get the usage.
usage: rush add [-h] -p PACKAGE [--exact] [--caret] [--dev] [-m] [-s] [--all]
--dev If specified, the package will be added to the
"devDependencies" section of the package.json
The command you are looking for is
rush add -p PACKAGENAME --dev
I'm trying to use a 3rd-party autotools project in Yocto. Its unit tests are run by 'make check' and require './configure --enable-oe-sdk', but this is not included in the default recipe (from autotools.bbclass). I want the tests built and run, so how do I build a different autoconf target in a Yocto/BitBake recipe? Please note that the unit tests run on the development host, instead of running on the embedded target.
Here is what I have tried: adding extra options to recipes based on Autoconf. But that doesn't say how to build a different target. I added EXTRA_OECONF += '--enable-oe-sdk' and tried to override do_compile() in the recipe, but got the following error.
configure: error: OECORE_TARGET_SYSROOT must be set with --enable-oe-sdk
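For reference, a minimal sketch of what that attempt could look like in a bbappend (the file name and override syntax are assumptions on my part; older Yocto releases use _append, newer ones use :append):

# phosphor-bmc-code-mgmt_%.bbappend  (sketch of the attempt, not a working fix)
EXTRA_OECONF += "--enable-oe-sdk"

do_compile_append() {
    # attempt to also build the 'make check' target
    oe_runmake check
}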
I'm asking a generic question, but the project in question is github.com/openbmc/phosphor-bmc-code-mgmt. Thank you so much!
In openBMC's own repositories there is the meta-phosphor layer, with a ready-made recipe for phosphor-bmc-code-mgmt.
Clone and add meta-phosphor to conf/bblayers.conf and use the phosphor-software-manager recipe.
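For example (the layer path is a placeholder for wherever the layer lives in your checkout):

bitbake-layers add-layer /path/to/meta-phosphor
bitbake phosphor-software-manager

or append the layer path to the BBLAYERS variable in conf/bblayers.conf by hand.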
I have run several openBMC tests, but I am far from an expert.
It looks like the repo you are looking at was designed to be tested using the continuous integration Docker framework.
The instructions on how to run the tests are here.
In the example they are testing "phosphor-hwmon", so instead try testing "phosphor-bmc-code-mgmt".
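The invocation from those instructions looks roughly like the sketch below (the script and variable names are taken from openbmc-build-scripts at the time of writing and may have changed since):

git clone https://github.com/openbmc/openbmc-build-scripts.git
git clone https://github.com/openbmc/phosphor-bmc-code-mgmt.git
WORKSPACE=$(pwd) UNIT_TEST_PKG=phosphor-bmc-code-mgmt ./openbmc-build-scripts/run-unit-test-docker.sh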
If that does not work, I bet someone on the OpenBMC Discord will help you out. https://discord.com/invite/69Km47zH98
When I was going through the Google Cloud tutorial: https://cloud.google.com/python/getting-started/using-pub-sub#running_the_app_on_your_local_machine
I got the following error:
google.auth._default No project ID could be determined from the Cloud SDK configuration. Consider running gcloud config set project or setting the GOOGLE_CLOUD_PROJECT environment variable
I did 'gcloud config set project [my project name]' with no success.
What's the problem?
Update: I've deployed app engines previously without any problem. The problem only happens when I run the psqworker for this Pub/Sub function. I know my project ID and used it before.
The first thing I would try would be:
gcloud info
This will tell you the account and project that gcloud is currently set to.
You may also find the available projects for your account with the following gcloud command:
gcloud projects list
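If no project is set, a quick sketch of the two fixes the error message itself suggests (replace my-project-id with one of the IDs from the list above; use set instead of export on Windows):

gcloud config set project my-project-id
export GOOGLE_CLOUD_PROJECT=my-project-id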
Locate the project ID and project number
There are two ways to identify your project: the project number and project ID.
The project number is automatically assigned when you create a project.
The project ID is a unique identifier for a project. When you first create a project, you can accept the default generated project ID or create your own. A project ID cannot be changed after the project is created, so if you are creating a new project, be sure to choose an ID that you'll be comfortable using for the lifetime of the project.
Note: You should be aware that some resource identifiers (such as project IDs) might be retained beyond the life of your project. For this reason, avoid storing sensitive information in resource identifiers.
To locate your project ID and project number:
Go to the Cloud Platform Console
From the projects list, select the name of your project.
On the left, click Dashboard. The project name and ID are displayed in the Dashboard.
TL;DR
Use virtualenv -p C:/Python27/python.exe name-of-env instead of virtualenv -p C:/Python36/python.exe name-of-env in the tutorial
I ran into a similar issue. Here are the steps I went through and why. Hope it helps!
First I tried to specify the id with the command gcloud config set project name-of-your-project
This resulted in the error
ERROR: Python 3 and later is not compatible with the Google Cloud SDK. Please use a Python 2.7.x version.
If you have a compatible Python interpreter installed, you can use it by setting
the CLOUDSDK_PYTHON environment variable to point to it.
I thought this error was weird because the tutorial tells you to use python3, but it doesn't work. So I created a virtualenv with python2.7 like so:
virtualenv -p C:/Python27/python.exe name-of-env (I have Python 2 and 3, so it's easier to specify the whole path to the .exe file)
Then follow the rest of the tutorial with
name-of-env\scripts\activate
pip install -r requirements.txt
I don't know why the tutorial tells you to use python3 when it doesn't even work.
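As an alternative sketch (untested on my side), the error message above suggests you can keep Python 3 for the app itself and only point the SDK at a 2.7 interpreter via the CLOUDSDK_PYTHON environment variable:

set CLOUDSDK_PYTHON=C:\Python27\python.exe
gcloud config set project name-of-your-project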
I'm working on a project in a very security-conscious place with no access via proxy to all the online repositories SBT usually requires. We'd like to fetch the dependencies and transitive dependencies we need once.
How can sbt be forced to fetch all the dependencies a project needs once and from there on, only work offline? I have tried doing exactly that from home. I then copied over everything under:
~/.ivy2/cache
~/.ivy2/local
$ACTIVATOR_HOME/repository
but SBT, even when executed with sbt "set offline := true" run, still goes and tries to fetch everything online ... which is a pain. It then finally breaks and complains that it cannot find some dependency.
UPDATE: I noticed another source of trouble, but I can't yet conclude that it is the culprit of the broken build described above. I build and fetch the dependencies for the project on a Linux (Ubuntu) box and then copy all the files to the corporate Windows 7 Pro environment. I found that many property files under ~/.ivy2/cache refer to the absolute path of the activator repository directory on Ubuntu, which is of course incorrect in the Windows environment, e.g.
#ivy cached data file for ch.qos.logback#logback-classic;1.1.3
#Fri Mar 10 08:39:37 CET 2017
artifact\:ivy\#ivy.original\#xml\#-1844423371.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/ivys/ivy.xml
artifact\:ivy\#ivy\#xml\#1016118566.is-local=true
artifact\:ivy\#ivy\#xml\#1016118566.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/ivys/ivy.xml
artifact\:ivy\#ivy.original\#xml\#-1844423371.is-local=true
artifact\:ivy\#ivy\#xml\#1016118566.exists=true
artifact\:logback-classic\#jar\#jar\#804750561.is-local=true
artifact\:logback-classic\#jar\#jar\#804750561.location=/opt/dev/activator/1.3.12/repository/ch.qos.logback/logback-classic/1.1.3/jars/logback-classic.jar
artifact\:ivy\#ivy.original\#xml\#-1844423371.exists=true
artifact\:logback-classic\#jar\#jar\#804750561.exists=true
So I went and did a find and replace, but the build still doesn't work. It doesn't look like a brilliant idea to have thousands of property files hardcoding an absolute path to the activator location; I would prefer they used an environment variable for that.
Maybe you could try coursier?
Not only does it offer
better offline mode - one can safely work with snapshot dependencies if these are in cache (SBT tends to try and fail if it cannot check for updates)
but it is also much faster than Ivy thanks to parallel artifact downloads. The project is young but promising.
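A minimal sketch of wiring it into an sbt build (the plugin version is just an example; check the coursier docs for the current one):

// in project/plugins.sbt
addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.0.3")

After that, a single online sbt update populates the coursier cache, which you can then copy to the offline machine (the cache location depends on the coursier version).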
I am currently using Yocto to build an embedded Linux image for the TI AM335x (I am using Hob, since I find it more comfortable than the command line).
I started with the recipe for building 'core-image-base', and here is the selection of packages which are included:
Now I would like to exclude the package alsa-utils-1.0.28-r0 from the build, since it has some problems compiling for my target and I really do not need it... so, as far as I can understand, I have to remove all the dependencies which brought alsa-utils in (that is: alsa-state and packagegroup-base, looking at the following screenshot):
So I move to the Package groups tab and remove packagegroup-base, and then I remove alsa-state and alsa-utils from the Included recipes:
Now it seems that alsa-utils is no longer there... but if I try to build the image, this is the result:
Why? Who is still bringing in alsa-utils? What am I doing wrong? Is there a way (even from the command line) to know why a package is brought in by Yocto?
Use
bitbake -g alsa-utils -u depexp
to display a dependency tree; you should be able to see who is depending on it.
See the openembedded wiki.
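If the depexp UI is not available in your setup, a rough equivalent using the generated graph files is the sketch below (the exact .dot file names vary between Yocto releases, e.g. pn-depends.dot or task-depends.dot):

bitbake -g core-image-base
grep alsa-utils pn-depends.dot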
I'm currently looking at NuGet to solve my dependency problems in TFS, and what I want to do is host my own NuGet server that would take care of internal dependencies. I also want to use NuGet to handle my 3rd-party dependencies as well. I'm trying to set up automated builds for our company and this is one roadblock I'm trying to overcome with NuGet.
So my question is how do I handle this scenario in which I have to retrieve my dependencies from different servers?
Is there a better way to handle internal dependencies? How is everyone else doing this?
Also, just as a note, I intend on using NuGet without committing packages to TFS. I planned on using the method outlined in this article:
http://blog.davidebbo.com/2011/08/easy-way-to-set-up-nuget-to-restore.html
Glad you're looking into the no commit scenario for NuGet packages on TFS. You can take a look at my blog post on this topic where I explain the concept.
EDIT (2012/06/13): NuGetPowerTools has been replaced by NuGet's built-in package restore functionality. However, the same concept of changing the PackageSources element in nuget.targets still applies.
You definitely should take a look at David Fowler's NuGetPowerTools.
After installing this package, you can run Enable-PackageRestore (a newly installed command in the Package Manager Console). Enabling package restore will add MSBuild targets to your project files. These MSBuild targets will trigger nuget.exe in a pre-build step and fetch any packages required by your project.
No need to check NuGet packages into source control; all you need is the packages.config and these MSBuild tasks.
To configure multiple, different package sources, you need to set some settings to be used by these MSBuild tasks. One of them is PackageSources. You can set it by editing the NuGet.targets file, which you will find in the .nuget folder once you have enabled package restore.
Regarding those package sources, you could set up different internal NuGet galleries, or simply set up different network shares to be used. This is a matter of requirements and preference, so you can choose. All you need to do is tell your MSBuild targets to use these package sources. The order in which you define them will also be the order in which packages are looked up.
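As an illustration, the relevant section of NuGet.targets could be edited roughly like this (the feed URLs and share path are placeholders for your own sources; the order of the entries is the lookup order):

<ItemGroup Condition=" '$(PackageSources)' == '' ">
    <PackageSource Include="http://your-internal-server/nuget/" />
    <PackageSource Include="https://nuget.org/api/v2/" />
    <PackageSource Include="\\fileshare\nuget-packages" />
</ItemGroup>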
Good luck!
Xavier
A little update on the accepted answer and question:
When using TFS as a build machine without Visual Studio installed on it, you can do the following so that the build machine automatically uses your custom package sources (more than one in the same solution) without any further configuration of package sources in your solution.
Create a machine default config by placing a NuGet.Config in the root (C:\NuGet.Config), using the sample from: http://docs.nuget.org/docs/reference/nuget-config-file
Comment out the line with: <add key="repositorypath" value="$\External\Packages" />
Otherwise your packages get expanded into C:\$\External\Packages\. When that line is commented out, the config gets chained and the right directory will be used.
Configure your needed package source(s); a sample config is sketched below.
For more info about other options (e.g. user-specific ones), see: http://docs.nuget.org/docs/reference/nuget-config-file (bottom of the page).
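For completeness, a minimal sketch of such a machine-wide C:\NuGet.Config (the keys and URLs are placeholders for your own feeds):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="InternalFeed" value="http://your-internal-server/nuget/" />
    <add key="NuGet official" value="https://nuget.org/api/v2/" />
  </packageSources>
</configuration>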