Can I have several public directories configured for a single Vercel deployment?

I have a monorepo that holds different projects that are related to each other and share some common dependencies, so it makes sense to have them all coupled together inside the same repo. I have a build process that runs on Vercel and iterates through the build step of every workspace. The build cache here helps me rebuild only the projects that need it. Everything is fine here.
However, I haven't found much customization in Vercel for defining multiple public directories. To be more graphical about it:
MONOREPO
-----Workspace A
--------src/
--------dist/
-----Workspace B
--------src/
--------dist/
-----Workspace C
--------src/
--------dist/
So my public directory here would be /**/dist/, but I'm not sure whether a pattern (glob, regex, etc.) is accepted as an input value. I could also have a single public workspace living outside of each workspace and place every build output inside it using a subdirectory structure, but first I would like to know whether using a pattern is possible with Vercel.
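For what it's worth: as far as I can tell, Vercel's Output Directory setting expects a single literal path, not a glob or regex. One workaround sketch (the workspace and folder names below are placeholders) is to let the Vercel build command collect every workspace's dist/ into one top-level folder and point the project's output directory at that folder:

# collect-dist.sh -- run as the last part of the Vercel build command.
# Copies each workspace's build output into a single public/ folder,
# one subdirectory per workspace.
set -e
rm -rf public && mkdir -p public
for ws in workspace-a workspace-b workspace-c; do
  mkdir -p "public/$ws"
  cp -R "$ws/dist/." "public/$ws/"
done

With something like this, the deployment serves workspace A under /workspace-a/, workspace B under /workspace-b/, and so on, without Vercel needing to understand a pattern.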

Related

Monorepo microservices architecture - Lerna powered by Nx

Background
We have a Lerna/Nx monorepo with multiple packages/sub-repos within it, and each one currently has its own associated GitHub workflows.
We are now trying to build out some APIs using a microservice architecture within that monorepo, and have a few snagging queries regarding this.
Question 1 - shared dependencies
We will have a minimum of 5 microservices for our MVP (more planned in the future) all consumed via an API Gateway. These microservices are going to be sharing a lot of dependencies, with a few unique dependencies for specific microservices depending on its purpose, such as 3rd party SDKs/prisma client/etc.
Is there a way we can utilise a "shared" package.json for the shared dependencies, while also populating it at build time with the unique dependencies housed in each microservice's own package.json? (A sketch of one approach follows below.)
See the image below for an example of the folder structure, with a "root" package.json that houses the shared dependencies; within each microservice "sub-package" we would house the microservice-specific dependencies.
OSC-API microservices structure
The purpose of this is to avoid the tedious task of going through each package individually to update the shared dependencies. Instead we would just have to update the “shared” package.json.
If this is something that just screams bad practice or “SHOULDN'T be done” feel free to say so.
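One approach that might fit (a sketch only; the package names and versions below are placeholders) is npm or Yarn workspaces: declare the shared dependencies once in the root package.json and keep only the microservice-specific ones in each sub-package, for example:

Root package.json:
{
  "name": "osc-api",
  "private": true,
  "workspaces": ["packages/*"],
  "dependencies": {
    "express": "^4.18.2",
    "zod": "^3.22.0"
  }
}

packages/microservice-1/package.json:
{
  "name": "@osc/microservice-1",
  "dependencies": {
    "stripe": "^14.0.0"
  }
}

Because workspaces hoist the root dependencies into the root node_modules, every microservice can resolve them, and bumping a shared version is a single edit. Strictly speaking those hoisted packages become undeclared ("phantom") dependencies of the sub-packages, so some teams instead declare them in every package and keep the versions aligned with a tool such as syncpack; either way you avoid hand-editing every package.json per upgrade.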
Question 2 - GitHub workflows
Is there a way we can easily create a single workflow that caters for all microservices in one .yml file?
This would need to individually build the affected packages based on the commits in the PR that triggers the workflow, i.e. if microservice 1 has commits but nothing else does, then only microservice 1 gets built and deployed. The caveat is that if there IS a "shared" package.json, then any change to it should cause all microservices to be rebuilt with the new dependencies. (A workflow sketch follows the example below.)
It is also worth noting that we do not have a great deal of DevOps experience, so what we currently have is the result of a lot of trial and error, and as such we would like to keep things somewhat simple. However, if a more complex suggestion comes with ample explanation, we would not mind trying it out.
The reason we want a single file is that we already have three or four workflows for each package in our monorepo, and as soon as we start building out these microservices, the workflows folder is going to get very bulky very quickly if we have to target each microservice with an individual workflow.yml.
We have considered using the "working-directory" option (https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-set-the-default-shell-and-working-directory), but it does not seem to work with marketplace actions that do not allow you to specify subfolders.
We have also considered moving the individual packages to the root of the monorepo. This results in multiple workflows and seems to work all right, but are there any negatives associated with doing it this way?
e.g.:
- run: rm -v -r ./package.json ./node_modules
- run: mv -v -f packages/osc-package1/* .
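As a rough sketch of the single-workflow idea (the action versions, job names, and paths below are placeholders/assumptions -- adjust to your layout), a path-filter action can decide per PR which services changed and whether the shared package.json changed:

name: microservices
on:
  pull_request:
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      shared: ${{ steps.filter.outputs.shared }}
      service1: ${{ steps.filter.outputs.service1 }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v2   # marketplace action that exposes one output per filter
        id: filter
        with:
          filters: |
            shared:
              - 'package.json'
            service1:
              - 'packages/microservice1/**'
  build-service1:
    needs: changes
    # build this service if its own files changed OR the shared package.json changed
    if: needs.changes.outputs.service1 == 'true' || needs.changes.outputs.shared == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx nx build microservice1

You would repeat the filter entry and the build job per microservice (or generate them with a matrix). Since you are already on Nx, "npx nx affected --target=build" is another way to get the "only rebuild what changed" behaviour without maintaining the path lists by hand.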

VSTS Release Phase Condition Based Off From One of Many Builds

First attempt at automated build and continuous deployment so any process suggestions / improvements are welcome.
I have a repository with different build definitions, one for each of the following: database project, API, and web. (I will add more later for ETL / reports.) Each build has a path filter so it only builds if code in a specific path has changed.
Currently I have separate releases using continuous deployment for each build, so when the code changes, that project builds and auto-deploys. This works, but really isn't practical because of the dependencies.
What I am looking to do is have one release definition that includes all build artifacts. Then have deployment phases that only run conditionally if a specific build artifact was created (something in that project changed). This way all builds / releases don't run every time, but are tied together when there are related changes.
I am going down the path of trying to create a custom condition on the deployment phase, but can't seem to figure out a way to make this work. I appreciate any help with this.
I have a repository with different build definitions. One for each of the following: database project, api, and web. (Will add more later for etl / reports) Each build has a filter so it only builds if code in a specific path has been changed.
Path filters are not to be used in your situation.
Look at Microsoft's own Git repo: they have all the codebase from the Windows and Devices Group (WDG) in one big repo. Each root folder is a separate product and completely unrelated to the rest (e.g. Xbox, HoloLens, Windows OS, etc.).
Path filters make sense there because if I push code to Xbox, I don't want HoloLens code to be built as well.
Web / DB / API projects all need to be built together, packaged together and deployed together.
I am assuming the project uses the .NET stack.
Keep the DB, Web and API projects in the same solution. Create a single build definition that builds the solution and produces multiple artifacts (dacpac, Web Deploy package, etc.) by adding multiple Publish Artifacts steps.
See screenshot of a build with multiple artifacts.
Link the artifacts from this build to the Release Definition and you should be able to deploy.
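If you ever move from the classic designer to YAML pipelines, the same idea looks roughly like this (a sketch only; the staging-directory paths are placeholders and assume your MSBuild publish steps copy the dacpac and packages there):

steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      configuration: 'Release'
  # one publish step per artifact
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)/db'
      ArtifactName: 'database'
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)/api'
      ArtifactName: 'api'
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)/web'
      ArtifactName: 'web'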

Deploy build files from continuous integration

I am working on a project with multiple people, a web application that requires webpack to build, uglify, and concatenate the code into a few files, e.g. app.min.js, style.min.css, etc. As a result of this, in an effort to prevent merge conflicts, we recently added the build folder to .gitignore, under the assumption that we would be able to build during deployment.
When pushing to the Master branch, we automatically "deploy" through Semaphore CI (similar to Travis) which runs composer install, npm install, and finally "npm run build" which triggers the webpack build. This is all built and then tested on the CI side of things, and then Semaphore automatically deploys to Amazon's Elastic Beanstalk where our application is hosted.
The problem with this is that Semaphore doesn't seem to upload the build it has just tested, but rather the Master branch itself, which has no built JS or CSS. I'm wondering if there's a way to push these built files to deployment as well, or if running the entire build process AGAIN on Elastic Beanstalk is the only route. It seems unnecessary to have to do that process essentially three times: locally, on CI, and then on deployment. Every time a step like this is needed on EB, the actual re-instantiation time gets longer, which I'd like to keep as short as possible.
Obviously if building it a 3rd time on EB is the only way to go about this then I'll have to, just wondering if there are better solutions for this whole workflow.
I haven't worked with Semaphore CI, but you might be able to use an .ebignore file.
If you create one, the EB CLI will use it instead of your .gitignore file.
I find in some deployment situations you want the inverse of your .gitignore (all compiled, no src). It essentially lets you pick the files from your project directory that you want to deploy, in the same way as the .gitignore file.
Edit: I just noticed the documentation on AWS is lacking. It only mentions file exclusion, but you can include files too.
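For reference, a minimal .ebignore sketch for that inverse case (as far as I know it follows .gitignore syntax, so the usual "exclude everything, then re-include" trick applies; the folder names are placeholders for whatever your app actually needs at runtime):

# .ebignore -- deploy the built output, not the source tree
/*
!/dist
!/package.json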
Edit 2: I don't think Semaphore supports the use of .ebignore, so right now this solution isn't of any use. :(
I just had a great first experience with https://deploybot.com/. They can deploy directly to Elastic Beanstalk. It might be interesting for you.

Build Workspace mapping

I have two solutions sitting at the same location. These two solutions share some of the projects, along with some dedicated ones.
I have created two separate build definitions with a gated check-in trigger, but the issue is that when I make any change in one solution, it triggers both build definitions.
Can I somehow control the triggering of the build definitions based on the solution that I am checking in?
You need to configure your workspace correctly for this to work. Any change in a build definition's mapped workspace will cause a build to trigger. Because of this, whether it's possible to set up a build that only triggers when something changes that belongs to one solution or the other depends entirely on your source control layout.
This setup will become very hard to manage quite quickly, so I recommend you put each set of projects in its own subfolder; that makes it a lot easier.
To ensure that your build definitions won't both trigger, open the Source Settings panel of your build definition and apply a cloak rule to each file or folder by changing "Active" in the first column to "Cloaked".
To cloak a file you need to enter its full path in TFS; the UI will only offer you a folder picker, but entering a path to a file will work.
These files should not be needed to build the solution, and changes to them should not trigger the build.
Do note that the cloak will cause Team Build to not get these files on the Build agent, so it's not possible to have files your build depends on, but not trigger the build when these files change.
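To make that concrete, the Source Settings mapping for the build of Solution A might end up looking something like this (the server paths are placeholders, and this assumes Solution B's dedicated projects live in their own folders):

Status    Source control folder
Active    $/TeamProject/Src
Cloaked   $/TeamProject/Src/SolutionB.sln
Cloaked   $/TeamProject/Src/ProjectOnlyUsedByB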
You should create gated check-in build definitions per project, not per solution.

Build Flow vs Build Pipeline

I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because it seemed a lot harder with the latter to share jobs between different pipelines (i.e. to share the workspace of the multiple starting jobs with a single compile job).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, and could also be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and run the full suite of integration tests against that QA environment. This step was configured as a parametrized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development (i.e. feature-one.qa.example.com, feature-two.qa.example.com).
Project-PROD would only ever be manually triggered, and would do the full unit and integration test suite, compile and minify the front-end code ( Less, JS, and CSS ), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it would be easy to set up new jobs without having to copy and paste all the build steps (or copy a job and change all the things that need to be unique). This would let us do things like create a copy of Project-DEV but switch out the last job for one that deploys to a staging environment set up in the cloud, or easily create a job that reports test results to a third-party source, e.g. copies the results to a shared network folder. Or any number of things. The goal is basically to use these subtask jobs as building blocks to let us build more complicated jobs, while also making it easier to update how one portion of the build works (for example, maybe we switch to a different technology for compiling, which might change how Jenkins would compile the code).
For example, the Project-PR would be split into the following:
Project-PULL (Build Flow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements and set up the directories required for testing and building. PartialUnitTests would then run and report its results back up to the Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are shared ( in this case, Project-SetupBuildEnv ) can be easily shared between jobs, reducing duplication, and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
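For reference, the Build Flow DSL for that chain might look roughly like this (the job names are the ones above; passing the branch down as a parameter is an assumption about how your jobs are parametrized):

// Project-DEV flow definition (Build Flow DSL); BRANCH is the flow's own parameter
build("Project-SetupBuildEnv", BRANCH: params["BRANCH"])
build("Project-FullUnitTests", BRANCH: params["BRANCH"])
build("Project-Compile", BRANCH: params["BRANCH"])
build("Project-Minify", BRANCH: params["BRANCH"])
build("Project-DeployQA", BRANCH: params["BRANCH"])
build("Project-FullIntegrationTests", BRANCH: params["BRANCH"])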
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory ( eg: /sharedspace/shared_one ), and download the code from GitHub into there. Then it will trigger the DSL, which starts up the 'SetupBuildEnv' job. But instead of working inside the same directory, it will get a directory with a name like "/sharedspace/shared_one#2", and run the build setup task in there. Then when it goes to do the third step ( unit testing ), it fails, because now it's got a third directory ( /sharedspace/shared_one#3 ), but that directory didn't have the setup run, so the required node and composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory and incrementing a counter ( the #N part of the directory name ) and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it's actually only using one directory for each job?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but with the "Local subdirectory for repo (optional)" option in the advanced Git job options to specify the directory to use, work?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, it might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be a thoroughly bad idea for long / complex builds (similar reasons to yours -- but also: scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven / Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular, but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build) then perhaps look at creating useful artefacts at key stages - in your case, minified assets TARs, library JARs, or maybe webjars, or whatever, and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.
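As a sketch of that "publish artefacts at key stages" idea (all coordinates, URLs and repository IDs below are placeholders), a build step could push the minified front-end bundle to an internal Maven repo, and later pipeline stages could pull it back down instead of sharing a workspace:

# package the built assets and deploy them as a versioned artefact
tar -czf frontend-assets.tar.gz -C dist .
mvn deploy:deploy-file \
  -Dfile=frontend-assets.tar.gz \
  -DgroupId=com.example.project \
  -DartifactId=frontend-assets \
  -Dversion=1.0.${BUILD_NUMBER} \
  -Dpackaging=tar.gz \
  -Durl=https://repo.example.com/releases \
  -DrepositoryId=internal-releases

A downstream job can then fetch a specific version with mvn dependency:get (or a plain HTTP download from the repo) and carry on from there.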