I want to build a desktop application for Linux with Flutter that will be used by clients in a controlled environment; it is not a public app that anyone can download.
I've thought about building the app with GitLab runners and storing it in the GitLab package registry. Then the clients, managed by me, download new versions through the GitLab API.
I'm using the flutter_dotenv package to load environment variables in my Flutter app. The documentation recommends adding the .env file to the assets and also adding it to .gitignore.
The problem is that when the CI/CD builds the Flutter app for production, the build fails because the .env file does not exist even though it is referenced in the assets. This is the error it shows.
The only idea I have is for the runner itself to detect whether it is building for test or for prod and create the appropriate .env. There is obviously no other option for testing, but I'm not sure whether this is good practice, and whether creating the prod .env is really the GitLab runners' responsibility.
Any ideas?
You don't say what your CI/CD environment is - edit - I see it: GitLab. If it were Azure, you could upload your .env to secure files storage, and in a step of your CI/CD deployment copy it from secure files into the working directory where your app is being built. That is how I have dealt with a missing local.properties file for Android deployments from Azure.
Here is a sample for Azure; maybe it can be adapted to the GitLab format:
- task: DownloadSecureFile@1
  name: local_properties
  displayName: 'Download local properties'
  inputs:
    secureFile: 'local.properties'
- script: |
    echo Installing $(local_properties.securePath) to the android build folder $(Build.SourcesDirectory)/android ...
    sudo cp $(local_properties.secureFilePath) $(Build.SourcesDirectory)/android
    sudo chown -R $(whoami) /home/vsts/work/1/s/android/local.properties
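In GitLab, a rough equivalent is a file-type CI/CD variable (Settings > CI/CD > Variables, type "File"): GitLab writes the variable's contents to a temporary file and exposes the path in the variable. A minimal sketch, assuming the production .env contents live in a file variable named PROD_DOTENV (the variable and job names are assumptions):

build_prod:
  stage: build
  script:
    # $PROD_DOTENV holds the path of the temp file GitLab created from the variable
    - cp "$PROD_DOTENV" .env
    - flutter build linux --release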
CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If the CI runs a poetry update from the yml file, the job exits with error code 128 when it calls git clone on my internal dependency.
To isolate the problem, I tried simply calling git clone on that same repo. The result is that the runner cannot authenticate itself to the GitLab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies. For that, GitLab provides deploy keys.
So I followed the instructions to create a deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only user that I have not found a way to log in as interactively. I had to make the CI runner perform the ssh key creation steps itself.)
Example .gitlab-ci.yml file:
# Commands in PowerShell
but_first:
  # The initial stage, always happens first
  stage: .pre
  script:
    # Start the ssh-agent for deploy keys
    - Start-Service ssh-agent
    # Check that ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
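(A sketch of what "making the runner do the ssh key creation" could look like as a one-off manual job; the key type, path, and empty-passphrase quoting are assumptions and may need adjusting for your PowerShell version:)

make_deploy_key:
  stage: .pre
  when: manual
  script:
    # Generate a key pair with an empty passphrase under the runner user's profile
    - ssh-keygen -t ed25519 -N '""' -f "$env:USERPROFILE\.ssh\id_ed25519"
    # Print the public key so it can be registered as the dependency's deploy key
    - Get-Content "$env:USERPROFILE\.ssh\id_ed25519.pub"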
I solved my problem of pulling internal dependencies by completely bypassing the ssh pull of the source code and by switching from poetry to hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a Python wheel).
I then used GitLab's Packages and Registries offering to host my package. Instead of keeping packages in each source code project, I pushed the packages of all my dependencies to a single project created for this purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only the wheel is built (without a target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was getting pip to find the extra PyPI repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple

[install]
trusted-host = <private gitlab repo>
This is functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
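For a one-off install, that flag form looks like this (same placeholders as above):

pip install <package> --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple --trusted-host <private gitlab repo>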
Since I was using a dependency manager, I was not calling pip directly but through the manager's wrapper. And here is the main reason I decided to change dependency managers: poetry does not read or recognize pip.ini, so any changes made there are ignored.
With the pip.ini file configured, dependencies hosted in the private package repo are also searched when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to the simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]
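In a CI job, the same pip configuration can also be supplied through pip's standard environment variables instead of a pip.ini, using the job-scoped token. A sketch (the job name is arbitrary; PIP_EXTRA_INDEX_URL and PIP_TRUSTED_HOST are pip's environment-variable equivalents of the config keys above):

test:
  stage: test
  variables:
    PIP_EXTRA_INDEX_URL: http://gitlab-ci-token:${CI_JOB_TOKEN}@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple
    PIP_TRUSTED_HOST: <private gitlab repo>
  script:
    # pip now searches the private index as well
    - pip install dependency-project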
I'm trying to use GitHub to trigger a GitLab pipeline on PR.
In practice, when a developer creates a PR in GitHub, their code gets tested against a GitLab pipeline.
I'm trying to follow this user guide: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
and we have a Silver account, but it won't work: when the PR is created, the GitLab pipeline is not triggered.
Anyone with this kind of experience who can help?
Thanks
Joe
I've found the cause of the issue.
For GitHub to trigger GitLab CI/CD, mostly on PRs, you need a Silver/Premium account AND, very important, you must be the root owner.
In any other case, you won't see GitHub in the integrations list on GitLab. The GitLab folks had the brilliant idea of hiding it instead of showing it disabled (which would have been a hint that you needed an upgraded license).
The video in the guide above doesn't explain this.
Firstly, you need to share the content of your .gitlab-ci.yaml file. In your question you ask about GitHub, but you're following GitLab documentation, which is a completely different thing. Both use git commands to commit and push repos, but GitHub and GitLab are different platforms.
For GitHub pipelines, you create a repository and then go to Actions. GitHub will propose configuring a .github/workflows directory containing a YAML file in which you code your pipelines. Depending on your project, GitHub proposes Linux machines with the adequate configuration to run your files (a Java project gets Maven machines, Python gets Python machines, React/Angular gets machines with npm installed, plus Docker and Kubernetes for deployments), and as far as I know you're limited to 4 private projects (check this last point).
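For illustration, a minimal .github/workflows file that runs on pull requests could look like this (a sketch; the file name and steps are arbitrary):

# .github/workflows/ci.yml
name: CI
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the PR's code, then run whatever build/test commands you need
      - uses: actions/checkout@v4
      - run: echo "Building"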
For GitLab you have two options. You can use preconfigured machines as on GitHub, calling them by adding, for example, a tag like npm in your .gitlab-ci.yaml file to get a machine with npm installed, but you need to pay for them. Or you can configure your own runners by following the GitLab documentation (the best option, in my view), but you'll need capable machines and servers to run npm, mvn, python3, and similar commands.
Finally, to answer your question, here is an example .gitlab-ci.yaml file for your GitLab repository, with two simple stages: build & test. The only statement specifies that these pipelines run when there is a merge request (I use GitLab's preconfigured machines as a sample here). More details in my Python GitHub project https://github.com/mehdimaaref7/Scrapping-Sentiment-Analysis and in the GitLab runner docs: https://docs.gitlab.com/runner/
stages:
  - build
  - test

build:
  tags:
    - shell
    - linux
  stage: build
  script:
    - echo "Building"
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/
  only:
    - merge_requests

test:
  tags:
    - shell
    - linux
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
  only:
    - merge_requests
I'm using GitLab CI for my project. When I push to the develop branch, it runs the tests and updates the code on my test environment (a remote server).
But the GitLab runner always reuses the same build folder: builds/a3ac64e9/0/myproject/myproject
I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on.
That way, I could update my website just by changing a symbolic link to point at the latest runner directory.
Is there a way to configure GitLab Runner this way?
While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory:
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here):
[runners.custom_build_dir]
  enabled = true
In your .gitlab-ci.yml file, under variables, set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although with multiple stages each job will have a different job ID. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which gives you a unique folder per commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately there is currently an issue with using $CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to use a workaround like this, if all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
You might want to read the following answer Changing the build intermediate paths for gitlab-runner
I'll repost my answer here:
Conceptually, this approach is not the way to go; the build directory is not a deployment directory. It is a temporary directory to build in or deploy from, even if on a shell executor it happens to stay fixed.
So what you need is to deploy from that directory to the correct deployment directory, with a script as in the gitlab-ci.yml below.
stages:
  - deploy

variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME

deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will copy your project files into /home/ab12/public_html/<project name>.
With projects named project1 .. projectn, all of them can share this same .gitlab-ci.yml file.
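If you still want the symlink flip described in the question, the same deploy job can write each build to its own release directory and atomically repoint a link (a sketch; the releases layout is an assumption):

deploy:
  stage: deploy
  variables:
    RELEASE_DIR: /home/ab12/releases/$CI_PROJECT_NAME/$CI_JOB_ID
  script:
    - mkdir -pv $RELEASE_DIR
    - rsync -r --delete ./ $RELEASE_DIR
    # ln -sfn replaces the symlink itself instead of descending into the old target
    - ln -sfn $RELEASE_DIR /home/ab12/public_html/$CI_PROJECT_NAME
  tags:
    - myrunner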
You cannot achieve this with GitLab Runner configuration alone, but you can create two runners and assign each exclusively to one branch by combining the only and tags keywords.
Assuming your two branches are named master and develop, and the two runners are tagged master_runner and develop_runner, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job merges in your actual job definition, which you can factor out.)
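(For completeness, a sketch of the factored-out job as a hidden template with a YAML anchor; the template name and script are examples:)

.job_template: &your_job
  script:
    # Shared steps both branch jobs run
    - echo "Running the shared build"
    - ./build_and_test.sh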
I'm having some trouble setting up AppVeyor. I'd like to publish the generated web deploy packages to the AppVeyor artifact feed. I've selected building web deploy packages in appveyor.yml:
build:
  project: Apps/MyProject.sln
  publish_wap: true
I can see from the logs that the two web deploy packages get produced:
[00:00:24] Package "Backend.zip" is successfully created as single file at the following location:
[00:00:24] file:///C:/Users/appveyor/AppData/Local/Temp/1/cul57h0ak9
I can push these packages to GitHub releases by simply referring to them by filename:
deploy:
  - provider: GitHub
    tag: v$(appveyor_build_version)
    auth_token:
      secure: stuff
    artifact: api.zip, backend.zip
    force_update: false
    on:
      DEPLOY: true
However, I'm unable to publish these packages to the AppVeyor artifact feed because, unlike with "deployments", it seems I'm required to know the exact path of the artifact(s). AppVeyor seems to use a temp folder when it generates them, so knowing the path is pretty hopeless. I could traverse the build agent user's temp directory looking for them, but that seems a bit hacky.
So, my question is: how do I reliably tell AppVeyor to send my generated zips to the artifact feed?
(Note that I know I can configure a "publish target" in Visual Studio and use that instead, but as far as I understand, the whole idea behind the publish_wap option is not having to do that for every project. I'm trying to achieve a clean separation so that no build-specific config has to live inside my msbuild projects.)
Turns out AppVeyor auto-publishes any generated artifacts, and now I feel stupid.
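(If you ever need to declare artifacts explicitly anyway, e.g. to rename them, appveyor.yml also has an artifacts section; the path pattern below is an assumption:)

artifacts:
  # Collect every generated zip and publish it under a friendly name
  - path: '**\*.zip'
    name: WebDeployPackages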
I'm wondering if there are any convenient ways to automate deployment of Go code to a live server, either with standard built-in methods or otherwise.
I want something Google App Engine-like: I just run a command and it uploads to the server and triggers a restart.
(Ultimately I want a git commit to trigger a rebuild and redeploy, but that's further down the track.)
I recommend Travis CI + Heroku.
You can deploy to Heroku directly with just a git push, but I like to use Travis to build and run the tests first.
There are some guides online, but I'll try to get straight to the point.
What you will need
GitHub account
Travis account (linked with GitHub, free if open source)
Empty Heroku app (the free dyno works great)
Setup
In your GitHub repo, create the following files:
.travis.yml (more info in the Travis CI documentation)
Procfile
.go-dir
After that, go to your Travis account, add your repository, and enable the build for it.
Here is a sample minimal config file content (based on an app that I deploy to Heroku):
.travis.yml
language: go

go:
  - tip

deploy:
  provider: heroku
  buildpack: https://github.com/kr/heroku-buildpack-go.git
  api_key:
    secure: <your heroku api key encrypted with travis encrypt>
  on: master
Procfile
worker: your-app-binary
.go-dir
your-app-binary
Procfile and .go-dir are Heroku configs, so they can vary; if you are deploying a web app, you can read more in the Heroku documentation.
One important and easily missed point is the buildpack; without it, the deploy will not work.
Read the Travis docs to see how to encrypt the heroku key
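A sketch of that step, assuming the travis CLI gem and the heroku CLI are installed:

gem install travis
# Encrypt the Heroku auth token and append it to .travis.yml under deploy.api_key
travis encrypt $(heroku auth:token) --add deploy.api_key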
How does it work?
Basically, every push to your repository triggers the Travis CI build; if it passes, the app is deployed to Heroku. You set this up once, and build + deploy is just a push away ;)
Travis will also build and update the status of all pull requests to your repository automagically.
To see my config and build, please take a look at my Travis build and my repository with my working configs