I have upgraded my GitLab installation to 5.3 and use Nginx instead of Apache. It worked once, but then I saw that GitLab had stopped. So I restart the service with sudo service gitlab start and watch what happens with htop, and I notice that after 1 or 2 minutes the gitlab service stops again, and I don't know why...
I'm using an AWS EC2 micro instance.
How can I retrieve all my repositories on GitLab and import them to Bitbucket (or GitHub)?
Thank you.
Environment information
$ sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 1.9.3p327
Gem Version: 1.8.23
Bundler Version: 1.2.3
Rake Version: 10.0.4
GitLab information
Version: 5.3.0
Revision: e1c473c
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: https://domaine-name.com
HTTP Clone URL: https://domaine-name.com/some-project.git
SSH Clone URL: git@domaine-name.com:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.4.0
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/bin/git
How can I retrieve all my repositories on GitLab and import them to BitBucket (or GitHub) ?
You can:
log on to your AWS EC2 micro instance (as described in the Bitnami stack or in this installation blog post)
go to where the bare repos are stored (as mentioned in the gitlab.yml config file)
make a bundle for each one (see my answer on git bundle): that will generate one file per repo, which is easier to copy around
copy those bundle files to your local PC
clone those repos on your local PC (a bundle is an acceptable remote! git clone mybundle works)
add a remote pointing to an empty GitHub repo you have created first
push to GitHub (the whole round trip is sketched after this list)
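A minimal sketch of that round trip, assuming a bare repo at /home/git/repositories/some-project.git and an empty GitHub repo created beforehand (hosts, user names and repo names are placeholders):

# on the EC2 instance: bundle the bare repo (one file per repo)
cd /home/git/repositories
git -C some-project.git bundle create /tmp/some-project.bundle --all

# copy the bundle to your local PC
scp user@ec2-host:/tmp/some-project.bundle .

# on your local PC: a bundle is a valid remote, so clone from it
git clone some-project.bundle some-project
cd some-project

# point the clone at the empty GitHub repo and push everything
git remote add github git@github.com:YourUser/some-project.git
git push github --all
git push github --tags

Repeat the bundle/copy/clone/push steps for each repository listed under the repositories path.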
Context
I've installed minikube in GitHub Codespaces, and that works great! With this setup I'm able to port-forward any application running in minikube and reach it at the URL generated by GitHub Codespaces.
Problem
I'd like to use GitHub Actions to deploy an app into the minikube cluster that runs in GitHub Codespaces.
Question
Is it possible, and if so, how do I do it?
It turned out that it is possible. There are two ways you could solve this problem.
Push based
Start a GitHub Codespace with minikube installed in it.
Install and configure a GitHub self-hosted runner in the GitHub Codespace.
Configure and start the self-hosted runner in the Codespace - preferably you should run the self-hosted runner as a service (a rough registration sketch follows the example below).
Run your GitHub Actions workflows on the self-hosted runner:
jobs:
  build:
    runs-on:
      labels:
        - self-hosted
        - self-hosted-runner-label
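A rough sketch of installing and registering the runner inside the Codespace, assuming the standard actions-runner tarball; the repository URL, version, and registration token are placeholders you get from the repository's Settings > Actions > Runners page:

# download and unpack the runner (pick the current version from the Runners settings page)
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L https://github.com/actions/runner/releases/download/v<version>/actions-runner-linux-x64-<version>.tar.gz
tar xzf actions-runner-linux-x64.tar.gz

# register it against your repository with a custom label
./config.sh --url https://github.com/<org>/<repo> --token <registration-token> --labels self-hosted-runner-label

# run it as a service (or ./run.sh for a foreground process)
sudo ./svc.sh install
sudo ./svc.sh start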
Pull based
Start a GitHub Codespace with minikube installed in it.
Install ArgoCD in minikube.
Point ArgoCD towards your GitHub repository (a sample Application manifest is sketched below).
Use GitHub Actions to generate new k8s manifest files.
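A minimal sketch of the ArgoCD Application that points at the repository, assuming the manifests live in a k8s/ folder on the main branch (the app name, repository URL, and path are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<org>/<repo>.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

With automated sync enabled, every manifest change that GitHub Actions commits to the k8s/ folder is picked up and applied by ArgoCD inside minikube.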
CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the yml file, the job console reports an exit with error code 128 when git clone is called on my internal dependency.
To isolate the problem, I tried simply calling a git clone on that same repo. The response is that the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies. For that, GitLab provides deploy keys.
So I followed the instructions to create the deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only account that I have not found a way to log in to as a human. I had to make the CI runner do the SSH key creation steps itself.)
Example .gitlab-ci.yml file:
#Commands in PowerShell
but_first:
  #The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
I solved my problem of pulling internal dependencies by completely bypassing the SSH pull of the source code and by switching from poetry to hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context it was a Python wheel).
Then I used GitLab's Packages and Registries offering to host my package. Instead of having packages in each source-code project, I pushed the packages of all my dependencies to a project I created for this single purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only wheel is built (without target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was having pip find the extra pypi repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, you can run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple
[install]
trusted-host = <private gitlab repo>
This makes it functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
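For comparison, the equivalent one-off command (using the same placeholder token, host, and project number as above) would look roughly like:

pip install dependency-project --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple --trusted-host <private gitlab repo>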
Since I was using a dependency manager, I was not calling pip directly, but the manager's wrapper around pip. And here comes the main reason why I decided to change dependency managers: poetry does not read or recognize pip.ini, so any changes made in that file are ignored.
With the pip.ini file configured, the private package repo is also searched when installing a project's dependencies. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]
I have a really simple repo on GitHub (/Hooks/), currently containing just one file, a simple Hooks.ts TypeScript file. On my local machine, in my workspace, I've created a project folder, and I can yarn add normal packages like yarn add fuse.js. But when I try to yarn add my private repo using the format yarn add git+ssh://git@github.com:OrganisationName/Hooks.git, I just get Error: couldn't find the binary git. I have permissions on the Hooks repo because I can push/pull from it. I'm on OSX Mojave (10.14.16) and installed Yarn via brew. My yarn version (yarn -v) is 1.22.10, which is the latest brew will install after running brew upgrade yarn.
This error "couldn't find the binary git" is related with not having installed git where the installation is made, Are you running these. commands inside a container?
you might as well be installing openssh, is necessary too.
for example in an alpine container
apk add --no-cache git openssh
yarn install
If you don't want to access the repo through SSH, you can access it through HTTPS plus a deploy token; here's a GitLab example:
git+https://<token-name>:<token>@gitlab.com/Username/Repository#<branch|tag>
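Used with yarn, that would look roughly like this (token name, token, and repository path are placeholders):

yarn add git+https://<token-name>:<token>@gitlab.com/Username/Repository#<branch|tag>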
I have a dotnet core web app built on Windows using GitHub Actions workflow steps. The last step is to build and push the container to GitHub Packages (using docker build and docker push commands).
docker push of the Windows container image to GitHub Packages always fails with the message below:
denied: No matching package_file with sha256 "b9e6fec25718aef5ed18d499b27e43adb524f9ee4f2eb3f0fffaea018e7e86b0" found in repository "myrepo/dotnet-ci".
Are Windows containers not supported in GitHub Packages?
It works if I use Linux for GitHub Actions to build the dotnet core app for Linux and build and push a Linux container to GitHub Packages.
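Roughly, the build-and-push step looks like this, assuming the docker.pkg.github.com registry for GitHub Packages (image name, tag, and token are placeholders):

docker login docker.pkg.github.com -u <github-username> -p <personal-access-token>
docker build -t docker.pkg.github.com/myrepo/dotnet-ci/<image-name>:<tag> .
docker push docker.pkg.github.com/myrepo/dotnet-ci/<image-name>:<tag>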
Sadly it appears to be the case that Windows images are not supported by the GitHub registry: https://docs.github.com/en/packages/using-github-packages-with-your-projects-ecosystem/configuring-docker-for-use-with-github-packages
Note: When installing or publishing a docker image, GitHub Packages does not currently support foreign layers, such as Windows images.
Is it possible to deploy an application from VSTS to a Web App on Linux?
My web app is a simple Ruby app, and I'm currently deploying it with the hosted git repo as in the doc: https://learn.microsoft.com/en-us/azure/app-service/containers/quickstart-ruby
git remote add azure <Git deployment URL from above>
git add -A
git commit -m "Initial deployment commit"
git push azure master
Is there a way to do it using a repo in VSTS?
First, the Azure App Service Deploy task supports the Web App on Linux app service type, so you can deploy your Ruby app through this task, for example:
Create a new build definition
Specify the source with the corresponding repository and branch
Add an Archive files task to put the necessary files into a zip file
Add an Azure App Service Deploy task (App type: Linux Web App; Image Source: Built-in Image; Package or folder: [zip file from step 3]; Runtime Stack: Ruby 2.3) - a rough YAML equivalent of these two tasks is sketched below
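A rough YAML sketch of steps 3 and 4, assuming the ArchiveFiles and AzureRmWebAppDeployment tasks; the service connection name, app name, and exact input names may differ between task versions:

steps:
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/app.zip'

- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: '<service connection>'
    appType: 'webAppLinux'
    WebAppName: '<app name>'
    packageForLinux: '$(Build.ArtifactStagingDirectory)/app.zip'
    RuntimeStack: 'RUBY|2.3'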
Note: You can deploy it through Release.
Secondly, if the Deployment option is available, you can configure it in the Azure portal: Continuous Deployment to Azure App Service.
On the other hand, you can also push the source to the server through git commands, as you are doing now:
Add a Command Line task: (Tool: git; Arguments: remote add azure [git deployment URL]; Working folder: $(build.SourcesDirectory)). Note: the git deployment URL should contain the username and password, for example: https://[username]:[password]@[app name].scm.azurewebsites.net/[app name].git (the username can't contain the @ character)
Do the same for the other git commands (add, commit, push), as sketched below.
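Put together, the git commands run by those Command Line tasks would look roughly like this (username, password, and app name are placeholders):

git remote add azure https://[username]:[password]@[app name].scm.azurewebsites.net/[app name].git
git add -A
git commit -m "Deployment from VSTS"
git push azure master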