I have a Nexus server that acts as a repository for NPM, Maven and Docker artifacts.
The problem is that, for legacy reasons, I had to serve all of Nexus from a different root, so for NPM and Maven the URLs are:
[npm]
http://ip:port/nexus/repository/npm/
[maven]
http://ip:port/nexus/repository/maven/
and by extension the Docker repo is http://ip:port/nexus/repository/docker/,
but whenever Docker tries to do anything it automatically requests https://ip:port/v2/, which of course results in a 404.
Is there a way to specify the full URL of the repo in daemon.json rather than just host + port?
If I put anything after the port, I can't start the daemon: parsing "8081/nexus/repository/docker": invalid syntax
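For context, the only form the daemon accepts here is a plain host:port entry, e.g. in /etc/docker/daemon.json (a minimal sketch; the address is a placeholder):
{
  "insecure-registries": ["10.0.0.5:8081"]
}
Anything with a path appended, like "10.0.0.5:8081/nexus/repository/docker", fails with the parsing error above.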
Related
I have a build server with no internet access, and I need to resolve dependencies from both github.com and registry.npmjs.org. The build server has access to Artifactory (JFrog), so I have created an npm repo to proxy registry.npmjs.org and that is working, and I have enabled the dependency-rewrite mechanism on a virtual repo in front of the remote npm repo, as described in this link: Configure npm to resolve dependencies using artifactory as proxy for both npm registry and github
After configuring this I still face the same issue:
node-sass@4.11.0 install /app/jenkins/workspace/uiwidget_smarthome1.0_dev/bwtk/node_modules/node-sass
node scripts/install.js
Downloading binary from https://github.com/sass/node-sass/releases/download/v4.11.0/linux-x64-47_binding.node
Cannot download "https://github.com/sass/node-sass/releases/download/v4.11.0/linux-x64-47_binding.node":
How can I configure npm to resolve from both of these? Since the two repos are different types, I can't aggregate them into a single virtual repo. Can npm be configured to resolve dependencies from both?
Yes, you need to pass the virtual repo URL to the npm command. You can use --registry <virtual repo URL>, or you can set the registry with npm config.
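For example, either of the following (a sketch; the URL is a placeholder for your own virtual repo):
npm install --registry http://artifactory.example.com/artifactory/api/npm/npm-virtual/
npm config set registry http://artifactory.example.com/artifactory/api/npm/npm-virtual/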
I have a build server with no internet access, and I need to resolve dependencies from both github.com and registry.npmjs.org. The build server has access to Artifactory, so I have created an NPM repo to proxy for registry.npmjs.org and that is working, and I just created a VCS repo to proxy for github.com.
How can I configure npm to resolve from both of these? Since the two repos are different types, I can't aggregate them into a single virtual repo. Can NPM be configured to resolve dependencies from both?
VCS repos have no correlation to NPM dependencies. A VCS repo is just a gateway to a set of APIs on the remote git server that helps you cache source binaries (i.e. a zip/tarball of a particular branch/tag or even a release). The npm client is not familiar with the REST endpoints that Artifactory exposes for such repos.
For NPM packages that reference github repos inside their package.json (see URLs as dependencies & Git URLs as Dependencies sections here), you want to look into Artifactory's dependency-rewrite mechanism.
Since your npm client is running on a machine that has no access to the internet, your own package.json files should not use "github dependencies" directly, since these make the client bypass the registry configuration in your ~/.npmrc and go straight to github instead of Artifactory.
When the package.json of one of your project's dependencies uses github dependencies, and that package is resolved via Artifactory, the dependency-rewrite mechanism modifies the package.json on the fly before returning it to the client, so that subsequent requests to resolve such dependencies go through Artifactory rather than github. This is perfect for use cases such as yours.
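To illustrate what gets rewritten, a "github dependency" inside a package.json is a git URL rather than a version range; the package name and tag here are made up:
"dependencies": {
  "some-lib": "git+https://github.com/some-org/some-lib.git#v1.2.3"
}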
In summary, you should stick with NPM repositories on Artifactory specifically, but also utilize the dependency rewrite mechanism of the Virtual Repository in order to avoid direct resolution of dependencies via github.
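On the client side, all that is then needed is for ~/.npmrc to point at that virtual repository (a sketch; the host and repository names are placeholders):
registry=http://artifactory.example.com/artifactory/api/npm/npm-virtual/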
HTH,
I'm trying to set up an environment where the only internet access to external repos is via the Artifactory server. I have followed the Bower instructions on this page: http://www.jfrog.com/confluence/display/RTF/Bower+Repositories
I can successfully do the npm installs of bower-art-resolver as described (using an npm remote repository for npmjs in Artifactory), but the example bower install of bootstrap then fails, because Bower attempts to reach git://github.com/twbs/bootstrap.git and I don't have access to github.com due to firewalls.
How do I make the full Bower workflow work if the Bower registry remote repository is not sufficient on its own? Is there some way the Artifactory VCS functionality comes into play? How would I make Bower use that instead of trying to reach github.com?
This firewall scenario seems like a common use case for a repository server, so I'm sure I'm missing something.
Make sure you are doing the following:
(1) Create a remote repository in Artifactory proxying the Bower registry. Notice that Artifactory will need to access both the Bower registry and Github.
(2) Configure Bower to use the Artifactory repository you created in the previous step as the Bower registry. This should be done in the .bowerrc file, for example:
{
"registry": "http://localhost:8081/artifactory/api/bower/bower-repo"
}
(3) Use bower-art instead of bower when installing packages, for example:
bower-art install bootstrap
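If Bower still tries to reach github.com directly after this, also check that the bower-art-resolver is registered in .bowerrc, as the JFrog documentation linked above describes (a sketch reusing the registry URL from step 2):
{
  "registry": "http://localhost:8081/artifactory/api/bower/bower-repo",
  "resolvers": ["bower-art-resolver"]
}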
I have several Scala applications that I want to deploy in a Docker multi-container environment on Amazon's Elastic Beanstalk.
It seems like the whole process is a bit more complicated than I was expecting, so I'm really looking forward to hearing some feedback on best practices and other ways to improve my entire process and "automate" some steps (if possible).
This is my current process:
(1) To generate my projects' artifacts I'm using the sbt-docker plugin. This plugin generates the project's artifacts (jars and Dockerfile) under [app-route]/target/docker.
(2) I upload these artifacts (jars and Dockerfile) into a git repository (currently doing this "manually").
(3) Since Amazon's Elastic Beanstalk requires it for Docker multi-containers, I need an online repository to "host" the images: it could be Docker Hub or Quay.io. Either requires me to have a git repository in which it can find the artifacts in order to build the project's image.
(4) Having created the multi-container environment in Elastic Beanstalk, I upload the Dockerrun.aws.json file as detailed in Amazon's documentation, and also the .ebextensions/elb-listeners.config file with the settings of the ports, since I'm running multiple apps (see the sketch after this list).
(5) Magic! Amazon generates my environment: same URL, different ports for all my apps (as specified in the configuration files in step 4).
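For reference, the Dockerrun.aws.json from step 4 looks roughly like this (a minimal sketch of the version 2 format; image names, ports and memory values are made up):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app-one",
      "image": "quay.io/myaccount/app-one:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 9000, "containerPort": 9000 }
      ]
    }
  ]
}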
I would love to find a way to automate step 2, since it requires an extra repo per app: I have my apps hosted in a git repo, plus an "extra" repo for each one where I host the artifacts generated in step 1, so that I can do step 3.
If you're willing to use a different SBT plugin for step 1, then you can automate step 2.
Although quay.io supports building your image from GitHub, they do not require it. (You can publish a local Docker image directly to your quay.io repository.)
Use the sbt-native-packager plugin in project/plugins.sbt.
Set up the plugin settings in build.sbt, like: dockerRepository := Some("quay.io/myaccount")
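As a sketch, the two files end up looking roughly like this (the plugin version and account name are placeholders for whatever you use):
// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.0")

// build.sbt
enablePlugins(JavaAppPackaging)
dockerRepository := Some("quay.io/myaccount")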
Your step 1 becomes: sbt docker:stage
Followed by: sbt docker:publishLocal
Check your image names and tags with docker images. The new image should have a name like quay.io/myaccount/app
Before you can publish to quay.io, you must docker login quay.io. Read their tutorial.
Your step 2 becomes sbt docker:publish. Now your quay.io account should contain the same IMAGE ID as your local Docker daemon.
Proceed with steps 3+ on the AWS side...
I am not really familiar with Scala; however, I believe the artifacts could be generated by Jenkins/CircleCI inside a container that is built on Jenkins/CircleCI, with the appropriate image tags then referenced within your Dockerrun.aws.json.
Hope that helps.
I have upgraded my GitLab to version 5.3 and use Nginx instead of Apache. It worked once, and then I saw that GitLab had stopped. So I tried to restart the service with sudo service gitlab start and watched what was happening with htop. I noticed that after 1 or 2 minutes the gitlab service stopped again, and I don't know why...
I'm using an AWS EC2 micro instance.
How can I retrieve all my repositories from GitLab and import them into Bitbucket (or GitHub)?
Thank you.
Environment information
$ sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 1.9.3p327
Gem Version: 1.8.23
Bundler Version:1.2.3
Rake Version: 10.0.4
GitLab information
Version: 5.3.0
Revision: e1c473c
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: https://domaine-name.com
HTTP Clone URL: https://domaine-name.com/some-project.git
SSH Clone URL: git@domaine-name.com:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.4.0
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/bin/git
How can I retrieve all my repositories from GitLab and import them into Bitbucket (or GitHub)?
You can:
log on to your AWS EC2 micro instance (as described in the Bitnami stack or following this installation blog post)
go to where the bare repos are stored (as mentioned in the gitlab.yml config file)
make a bundle for each one (see my answer on git bundle, and the command sketch after this list): that will generate one file per repo, which is easier to copy around
copy those bundle files on your local pc
clone those repos on your local pc (a bundle is an acceptable remote! git clone mybundle works)
add a remote pointing to an empty GitHub repo you have created first
push to GitHub
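A command sketch of that workflow (the repo name matches the project above; the GitHub account is a placeholder):
# on the EC2 instance, for each bare repo under /home/git/repositories/
cd /home/git/repositories/some-project.git
git bundle create /tmp/some-project.bundle --all

# copy the bundle to your local pc (scp, rsync, ...), then locally:
git clone some-project.bundle some-project
cd some-project
git remote add github https://github.com/youraccount/some-project.git
git push github --all
git push github --tags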