I'm working through some tutorials using the MLOps templates to create a SageMaker Project that builds, trains, and deploys with third-party Git repositories using CodePipeline, following this documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-walkthrough-3rdgit.html#sagemaker-proejcts-walkthrough-create-3rdgit.
I have created the connection in the CodeCommit settings and selected my two repositories created on GitHub, one for modelbuild and one for modeldeploy. When creating the project, I entered the URLs from GitHub, the names of the repos (in my case, organization/modelbuild-repo), and the ARN of the connection.
However, during the build, in the seedcodecheckin step, I get the following error:
[GitRepositorySeedCodeBootStrapper.main()] ERROR GitRepositorySeedCodeBootStrapper - Seedcode checkin failed: Invalid remote: origin
I also get the following exception:
Caused by: org.eclipse.jgit.errors.NoRemoteRepositoryException: https://codestar-connections.eu-west-1.amazonaws.com/git-http/XXXXXXXXXXX/eu-west-1/ff5bf7e3-bee9-4dad-a63d-0374c4a96297/ORGANIZATION/sagemaker-tutorial-3-modelbuild.git: https://codestar-connections.eu-west-1.amazonaws.com/git-http/XXXXXXXXXXX/eu-west-1/ff5bf7e3-bee9-4dad-a63d-0374c4a96297/ORGANIZATION/sagemaker-tutorial-3-modelbuild.git/info/refs?service=git-upload-pack not found: Not Found
These errors come from the build logs of git-seedcodecheckin, which I believe has to do with populating the repository with the modelbuild seed code.
Does anyone have an idea what the error might be?
I have seen similar questions related to policies and the need for them, but I believe I have the correct ones attached.
It looks like the repository either does not exist or you are not passing in the right name.
You can follow the guidelines in this blog - https://aws.amazon.com/blogs/machine-learning/create-amazon-sagemaker-projects-using-third-party-source-control-and-jenkins/
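To make the repository-name point concrete: the full name is just organization/repo, with no URL scheme and no .git suffix. Here is a sketch of the three inputs for the modelbuild repository, reusing the values from the exception above (the field names are illustrative, not the exact console labels):

# Hypothetical shape of the project-creation inputs (field names illustrative).
ModelBuild:
  GitHubRepositoryUrl: https://github.com/ORGANIZATION/sagemaker-tutorial-3-modelbuild.git
  RepositoryFullName: ORGANIZATION/sagemaker-tutorial-3-modelbuild   # not the URL, no .git
  CodeStarConnectionArn: arn:aws:codestar-connections:eu-west-1:XXXXXXXXXXX:connection/ff5bf7e3-bee9-4dad-a63d-0374c4a96297

If the full name is wrong (or the repository does not exist under that path), the seed-code checkin builds a remote URL like the one in the exception above and GitHub answers Not Found.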
Related
I'm collaborating on a website using GitHub for source control. The site is hosted on a shared server at Dreamhost. I'd like to set up an easy way for my collaborator and me to see changes that have been merged into the main branch reflected on the staging site, and then also run a couple of other shell commands (composer update, for example) after deploying the changes.
I'm new to this. I've found pieces of relevant documentation but have not been able to tie it all together. So far I am running into at least two issues.
Setting up GitHub environments to point to development and staging environments
I looked into GitHub workflows, but it seemed GitHub Actions might be easier. I set up GitHub Environments called staging and development. When setting up the environments, I saw the option to add environment secrets but didn't know what exactly to add there. So my environments in GitHub have names but don't really point to my development and staging servers. I think the first thing I need to figure out is how to link my GitHub Environments and my actual development and staging environments together. I found Deploying with GitHub Actions but didn't find an answer there.
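One way to tie a GitHub Environment to a real server, sketched here under assumptions: the environment is essentially a named scope for secrets and protection rules, so you store the server's SSH details as environment secrets and reference the environment from a deploy job. The secret names (DEPLOY_HOST, DEPLOY_USER, DEPLOY_SSH_KEY), the directory, and the use of the third-party appleboy/ssh-action are my choices, not a GitHub convention:

name: deploy-staging
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: staging                    # the GitHub Environment, by name
    steps:
      - uses: appleboy/ssh-action@v1.0.3    # third-party SSH action; pin as preferred
        with:
          host: ${{ secrets.DEPLOY_HOST }}      # e.g. the Dreamhost server
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}    # private key stored as an environment secret
          script: |
            cd ~/staging.example.com            # hypothetical site directory
            git pull origin main
            composer update

With environment: staging on the job, only the secrets defined in that environment are available to it, which is what actually links the GitHub Environment to the staging server.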
Invalid workflow file error
I also found an action in the GitHub Marketplace called branch-deploy. I created a YAML file under .github/workflows to test it. When this runs, I see the following error on the workflow in GitHub:
Invalid workflow file
The workflow is not valid. .github/workflows/deploy.yml (Line: 2, Col: 1): Unexpected value 'id'
I'm not sure what is going on with this error, because the "basic usage" example on the Marketplace page uses the same value for id.
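"Line: 2, Col: 1: Unexpected value 'id'" suggests the Marketplace snippet, which is a single step, was pasted at the top level of the file; id is only valid inside a job's steps list. A minimal sketch of a valid wrapper, assuming a recent release of github/branch-deploy (version tags, permissions, and outputs taken from its docs; verify against the current release):

name: branch-deploy
on:
  issue_comment:
    types: [created]            # branch-deploy is driven by PR comments like ".deploy"

permissions:
  pull-requests: write
  deployments: write
  contents: write
  checks: read

jobs:
  deploy:
    if: ${{ github.event.issue.pull_request }}   # only react to comments on PRs
    runs-on: ubuntu-latest
    steps:
      - id: branch-deploy                        # the "basic usage" step from the Marketplace page
        uses: github/branch-deploy@v9
      - uses: actions/checkout@v4
        if: ${{ steps.branch-deploy.outputs.continue == 'true' }}
        with:
          ref: ${{ steps.branch-deploy.outputs.ref }}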
I was exploring the following GitHub page to understand the migration and merging of projects from Azure DevOps Server to Azure DevOps Services:
https://github.com/nkdAgility/azure-devops-migration-tools
The documentation mentions the following feature, but unfortunately I could not find any relevant documentation for it. Please help with this.
Merge many projects into a single project
You do it through the tool's configuration file. It requires tinkering and trial and error to get working, but it is very useful. In the config file you state a source and a target, amend the parameters you want, and then run the tool. It is not very well documented, but it is still a powerful tool.
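To make that concrete: merging many projects into one amounts to running the tool once per source project, pointing every run at the same target project. A rough sketch of the shape of the tool's configuration.json; the exact fields and processor $type names vary between versions, so generate a fresh template with the tool itself and treat this as illustrative:

{
  "Source": {
    "Collection": "https://your-server/tfs/DefaultCollection/",
    "Project": "SourceProjectA"
  },
  "Target": {
    "Collection": "https://dev.azure.com/your-org/",
    "Project": "ConsolidatedProject"
  },
  "Processors": [
    {
      "$type": "WorkItemMigrationConfig",
      "Enabled": true
    }
  ]
}

Run it again with "Project": "SourceProjectB" in Source (same Target) to fold the next project in.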
I'm currently testing Backstage for my company, and I tried various integrations such as GitHub, Jira, Jenkins, and more. But I'm facing an issue with the Jira plugin. Maybe it's just a bad setup.
In my component, I can see the Jira entity, but every time, it says:
failed to fetch data, status 404: Not Found
When I look in the browser's console (network), I can see this 404, and this is the query used:
http://localhost:7007/api/proxy/jira/api/rest/api/latest/project/undefined
Why do I have undefined? Is it because the jira/project-key variable is not in the right place? It's currently in the catalog-info.yaml under metadata.
I followed all the documentation I could find, but one section is not clear enough for me. It's about the annotations. It says "Add annotation to the yaml config file of a component." I created a component yesterday, but I don't see any file for it.
Thanks in advance.
OK, I found the solution.
In the documentation, the file called catalog-info.yaml is not the one at the root of the Backstage repository, but a file in a different repository that will be used as a component template in Backstage.
If you create a new repository (e.g., a fork of Symfony), you will have to add a catalog-info.yaml file with various information if you want to use this repository as a template for your projects.
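A sketch of what that catalog-info.yaml can look like for the Jira plugin, assuming the jira/project-key annotation used by the common Backstage Jira plugin (my-service, my-team, and MYPROJ are placeholders). The undefined in the proxy URL above is consistent with the key sitting directly under metadata instead of under metadata.annotations:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service                # placeholder component name
  annotations:
    jira/project-key: MYPROJ      # the Jira project key, under metadata.annotations
spec:
  type: service
  lifecycle: production
  owner: my-team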
I have been trying to integrate Google Cloud Build with my GitHub account. I have set up working build triggers in the past for other projects on GCP - but with this one, I just can't get it to work reliably. Here is what I did:
Installed the Google Cloud Build app on GitHub and linked it to my Google Cloud account.
Connected to my GitHub repository in Google Cloud Build. As source, I selected "GitHub (Cloud Build GitHub App)".
Let Cloud Build create its default trigger for me - just to make sure that the settings are correct.
Now, when manually running the default trigger, I always receive the following error message after selecting my branch: "Failed to trigger build: Request contains an invalid argument."
The trigger also does not work when invoked through a new commit in the GitHub repository. There are two different errors I have spotted through the GitHub UI:
The GitHub Cloud Build Action essentially reports the same error as Cloud Build itself when manually invoking the build, and immediately fails.
The GitHub Cloud Build Action is queued/started, but never actually does anything. In this case, Cloud Build does not even seem to know about the build that was triggered by GitHub. The action will remain in this state for hours, even though Cloud Build should usually cancel builds after 10 minutes by default.
Here are some things that I've tried so far to mitigate the issue:
Created all sorts of different trigger variations; none of them seems to work, and the error is always the same.
Uninstalled the Cloud Build app on GitHub, unlinked my Google Cloud account, and went through the entire setup process again.
Connected the repository in Cloud Build with "GitHub (mirrored)" as the source instead of the GitHub App.
At this point, I seem to be stuck and I would be super grateful for any advice/tip that could somehow push me in the right direction.
One more thing that I should note: the triggers worked in this project for a while. They stopped working some time after I renamed my master branch on GitHub to "production". I don't know whether that has anything to do with my triggers failing, though.
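If the rename is related, the first thing to check is the trigger's branch filter: a filter still set to ^master$ will no longer match anything after the rename. A sketch of what the relevant part of a trigger looks like when exported (values anonymized, assuming the GitHub App connection; verify the field names against your own export):

# trigger.yaml, as exported with:
#   gcloud builds triggers export my-trigger --destination=trigger.yaml
filename: cloudbuild.yaml
github:
  owner: my-org            # placeholder
  name: my-repo            # placeholder
  push:
    branch: ^production$   # must match the renamed branch, not ^master$
name: default-push-trigger

After editing, it can be re-applied with gcloud builds triggers import --source=trigger.yaml.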
I found that this can be caused by an "invalid" Cloud Build config file (e.g. cloudbuild.yaml).
This threw me off, because it doesn't necessarily mean the file is invalid YAML or JSON, just that it is not what Cloud Build expects.
In my case, I had defined a secretEnv value but removed the step that used it. Apparently, Cloud Build does not allow secretEnv values to go unused, which resulted in the cryptic error message:
Failed to trigger build: Request contains an invalid argument.
In case that isn't clear, here is an example of a config file that will fail:
steps:
- name: "gcr.io/cloud-builders/docker"
  entrypoint: "bash"
  args: ["-c", "docker login --username=user-name --password=$$PASSWORD"]
  secretEnv: ["PASSWORD"]
secrets:
- kmsKeyName: projects/project-id/locations/global/keyRings/keyring-name/cryptoKeys/key-name
  secretEnv:
    PASSWORD: "encrypted-password"
    UNUSED_PASSWORD: "another-encrypted-password"
UNUSED_PASSWORD is never actually used anywhere, so this will fail.
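For contrast, the same config with the unused entry removed should pass validation, since every key under secrets.secretEnv is now referenced by a step:

steps:
- name: "gcr.io/cloud-builders/docker"
  entrypoint: "bash"
  args: ["-c", "docker login --username=user-name --password=$$PASSWORD"]
  secretEnv: ["PASSWORD"]    # references the only defined secret
secrets:
- kmsKeyName: projects/project-id/locations/global/keyRings/keyring-name/cryptoKeys/key-name
  secretEnv:
    PASSWORD: "encrypted-password"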
Since this error message is so vague, I assume there are other scenarios that could cause this same problem, so take this as just an example of the type of mistakes to look for.
I have created a pipeline in VSTS/Azure DevOps. It gets its sources from a repository in Bitbucket. Queueing a build works fine: it builds and the tests succeed.
Now I want a build to run on every commit to the repository on Bitbucket. However, when I edit the pipeline, enable 'Continuous Integration' in the Triggers tab, and click 'Save', I get the following error:
Unable to configure a service on the selected Bitbucket repository. Bitbucket returned the error 'Forbidden: '.
I am confused that I get 'Forbidden' when fetching the source code already works.
What am I doing wrong? Is there something I must configure in VSTS/Azure DevOps or in Bitbucket?
Answering my own question:
It appeared that in Bitbucket I only had 'Writer' rights on the repository. When we changed this to 'Administrator', enabling Continuous Integration worked, and we verified that committing a code change triggered the build.
Good news / bad news.
It looks like, for now, you can configure a pipeline without being a Bitbucket admin on the repo... but not by using the templates.
So you can build an empty pipeline based on a Bitbucket repo (no admin access) and manually add each of the tasks.
Based on further tests: what you cannot do is enable the Continuous Integration trigger, because that requires admin access to set up the webhooks.
I know this is not what you want... but at least there is a way to end up with a working pipeline.
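For completeness, the trigger itself is trivial to declare; what needs Bitbucket admin rights is the webhook Azure DevOps registers on the repository when CI is enabled. A minimal sketch, assuming a YAML pipeline rather than the classic editor used above:

# azure-pipelines.yml: the CI trigger declaration. Enabling it still requires
# Azure DevOps to create a webhook in Bitbucket, hence the admin requirement.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "build and test here"   # placeholder build step
    displayName: Build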
Regards,
Jose