Installing a marathon group as a DCOS package

We are trying to create our own DCOS package to install our application. We created our own universe and host it in S3, we created all the necessary files for the DCOS package (config.json, package.json, marathon.json.mustache), and the index is created correctly; the package is called Atest.
Our marathon.json is a marathon descriptor for a group of apps:
{
    "id": "/{{Atest.id}}",
    "groups": [
        {
            "id": "{{Atest.apps-id}}",
            "apps": [
                {
                    "id": "{{Atest.app-master-id}}",
                    .......
                },
                {
                    "id": "{{Atest.app-slave-id}}",
                    .......
                }
            ]
        }
    ]
}
When we deploy the application through the Marathon API it works fine, but when we try to run dcos package install Atest it fails. If I replace the JSON with only the master app, it is installed without problems.
So can dcos package install custom-package only install single Marathon apps? Or is there a way to install a Marathon group as a DCOS package?

Yes, dcos package install custom-package can only install a single Marathon app; DCOS doesn't have support for accepting a Marathon group JSON.
Marathon itself supports starting multiple apps from the same JSON when it is submitted to the /v2/groups endpoint of the REST API
(https://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-groups).
However, Cosmos (the DC/OS package manager, https://github.com/dcos/cosmos/) doesn't accept that kind of request, because it only sends requests to the /v2/apps endpoint (https://github.com/dcos/cosmos/blob/master/cosmos-server/src/main/scala/com/mesosphere/cosmos/MarathonClient.scala#L20), which starts a single app.
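As a workaround, you can render marathon.json.mustache yourself and submit the resulting group definition straight to Marathon's /v2/groups endpoint. A minimal sketch, assuming Marathon is reachable at marathon.mesos:8080 and the rendered group JSON has been saved as marathon-group.json (both names are placeholders):

# Sketch: POST a rendered Marathon group definition to /v2/groups,
# bypassing the DC/OS package manager.
# The Marathon address and the file name are placeholders.
import json
import urllib.request

MARATHON = "http://marathon.mesos:8080"

with open("marathon-group.json") as f:
    group = json.load(f)

req = urllib.request.Request(
    url=MARATHON + "/v2/groups",
    data=json.dumps(group).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())

Keep in mind that by bypassing Cosmos you lose the package manager's bookkeeping (listing, uninstall), so this is only a stopgap until group support exists.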

Is it possible to use cloud formation to deploy a Cloud9 ide on an EC2 image that is not obsolete?

Apparently Cloud9 out of the box is shipped on an essentially obsolete EC2 image, as it does not have a current, recent, or viable version of the AWS CLI.
$ aws --version
aws-cli/1.19.112 Python/2.7.18 Linux/4.14.296-222.539.amzn2.x86_64 botocore/1.20.112
As far as I can tell, Amazon recommends using version 2.9.1; but even the most recent 1.x release is 1.27.19.
Is there any way of using CloudFormation to deploy Cloud9 on a more contemporary EC2 instance? I want to roll Cloud9 out to a dev organization, but it is distressing to me that it seems to be deployed crippled (and yes, I need to use more recent cli options for the initial configuration of each new IDE).
Have you tried specifying the identifier of the Amazon Machine Image (AMI)?
That's what is used to create the EC2 instance. To declare this entity in your AWS CloudFormation template, you need to use this syntax in your JSON file:
{
    "Type" : "AWS::Cloud9::EnvironmentEC2",
    "Properties" : {
        "AutomaticStopTimeMinutes" : Integer,
        "ConnectionType" : String,
        "Description" : String,
        "ImageId" : String,
        "InstanceType" : String,
        "Name" : String,
        "OwnerArn" : String,
        "Repositories" : [ Repository, ... ],
        "SubnetId" : String,
        "Tags" : [ Tag, ... ]
    }
}
Then, to choose an AMI for the instance, you must specify a valid AMI alias or a valid AWS Systems Manager path; the default AMI is used if the ImageId parameter isn't explicitly assigned a value in the request.
The entire process is described in the AWS::Cloud9::EnvironmentEC2 documentation.
AMI aliases
Amazon Linux (default): amazonlinux-1-x86_64
Amazon Linux 2: amazonlinux-2-x86_64
Ubuntu 18.04: ubuntu-18.04-x86_64
SSM paths
Amazon Linux (default): resolve:ssm:/aws/service/cloud9/amis/amazonlinux-1-x86_64
Amazon Linux 2: resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64
Ubuntu 18.04: resolve:ssm:/aws/service/cloud9/amis/ubuntu-18.04-x86_64
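Putting this together, here is a minimal sketch of a template that pins the environment to the Amazon Linux 2 image via its SSM path, deployed with boto3; the stack name, environment name, instance type, and subnet ID below are placeholders:

# Sketch: create a Cloud9 environment on the Amazon Linux 2 image by setting
# ImageId to its SSM alias path. Stack/environment names, instance type, and
# the subnet ID are placeholders.
import json
import boto3

template = {
    "Resources": {
        "DevIde": {
            "Type": "AWS::Cloud9::EnvironmentEC2",
            "Properties": {
                "Name": "dev-cloud9",
                "InstanceType": "t3.small",
                # Resolves to the current Amazon Linux 2 Cloud9 AMI at deploy time
                "ImageId": "resolve:ssm:/aws/service/cloud9/amis/amazonlinux-2-x86_64",
                "AutomaticStopTimeMinutes": 30,
                "SubnetId": "subnet-0123456789abcdef0"
            }
        }
    }
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="cloud9-dev", TemplateBody=json.dumps(template))

Note that even the Amazon Linux 2 image may still ship AWS CLI v1, so you may still want a bootstrap step that upgrades the CLI once the environment is created.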

Swift Package Collections doesn't work with an Enterprise GitHub account

I am trying to generate a package collection from a GitHub Enterprise account using the command line (following the steps in the official docs):
package-collection-generate packages.json collection.json
When I run this command, the Terminal asks me for my username; once provided, it keeps running without a result until I stop it using Ctrl-C.
The packages.json looks like this:
{
    "name": "Entreprise iOS packages",
    "overview": "This collection contains the entreprise Swift packages.",
    "author": {
        "name": "Swift packages"
    },
    "keywords": [
        "iOS"
    ],
    "packages": [
        {
            "url": "https://github.entreprise.com/[ORGANISATION]/[REPO].git"
        }
    ]
}
I have also tried to integrate my access token and username in the URL, like this:
https://[UserName]:[AccessToken]@https://github.entreprise.com/[ORGANISATION]/[REPO].git
I have also tried to use the SSH url, with no success.
git@github.entreprise.com:[ORGANISATION]/[REPO].git
I can import the same package using Xcode Packages
I have SSH configured on my machine
I have tried to use both Private and Public access to the repo
With the same setup, I can create a collection using a non-entreprise GitHub account.
Maybe I am missing something, or Swift Package Collections don't work with a GitHub Enterprise account!
Can you please advise what to do here?
You talk about a GitHub Enterprise account but give a completely wrong URL in multiple places in your question (including in your packages.json). Double-check that.

Deployment Manager: how to obtain the Google Storage service account in a resource file

I use Deployment Manager and try to describe my resources in Python files (Deployment Manager allows creating configurations using Python or Jinja).
Currently, I create the topic resource like this:
return {
    'name': topic,
    'type': 'pubsub.v1.topic',
    'properties': {
        'topic': topic
    },
    'accessControl': {
        'gcpIamPolicy': {
            'bindings': [
                {
                    'role': 'roles/pubsub.publisher',
                    'members': ['serviceAccount:' + project_name + '@gs-project-accounts.iam.gserviceaccount.com']
                }
            ]
        }
    }
}
The format [project_name]@gs-project-accounts.iam.gserviceaccount.com worked fine several weeks ago, but for a newly created project such a service account is not found.
Is it correct that the format of Google Cloud Storage service accounts was changed? For a newly created project it fails with "service account ... doesn't exist". It was [project-name]@gs-project-accounts.iam.gserviceaccount.com, and currently it is service-[projectId]@gs-project-accounts.iam.gserviceaccount.com.
I checked it via this API, and for newly created projects I get this format: service-[project_Id]@gs-project-accounts.iam.gserviceaccount.com.
How can we fetch the Google Cloud Storage service account dynamically in Deployment Manager config files? As far as I can see here, there are only a few environment variables, like project_name, project_id, time, etc., and there isn't any storage_service_account environment variable.
The GCS service account format recently changed to the following format:
service-[PROJECT_NUMBER]@gs-project-accounts.iam.gserviceaccount.com
Existing projects will continue to work with the previous format.
For new projects, the new format will be the way moving forward.
To verify the format, you can call the projects.serviceAccount.get method of the Cloud Storage JSON API.
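If the project_number environment variable is available in your deployment (I believe it is among the built-in deployment-specific environment variables, alongside project and current_time), you can build the new-style account name dynamically inside a Python template. A minimal sketch; the 'topic' property name is illustrative:

# Sketch of a Deployment Manager Python template that derives the new-style
# Cloud Storage service account from the project_number environment variable
# (assumed to be available); the 'topic' property name is illustrative.
def generate_config(context):
    topic = context.properties['topic']
    gcs_account = ('serviceAccount:service-' + str(context.env['project_number']) +
                   '@gs-project-accounts.iam.gserviceaccount.com')

    return {
        'resources': [{
            'name': topic,
            'type': 'pubsub.v1.topic',
            'properties': {'topic': topic},
            'accessControl': {
                'gcpIamPolicy': {
                    'bindings': [{
                        'role': 'roles/pubsub.publisher',
                        'members': [gcs_account]
                    }]
                }
            }
        }]
    }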

Swagger-ui on GKE 1.9

I am running a Kubernetes cluster on GKE. I have been told that the Kubernetes API server comes integrated with the Swagger UI and that the UI is a friendly way to explore the APIs. However, I am not sure how to enable this on my cluster. Any guidance is highly appreciated. Thanks!
I've researched a bit regarding your question, and I will share with you what I discovered.
This feature is not enabled by default on every Kubernetes installation; you would need to enable the Swagger UI through the flag --enable-swagger-ui, which I believe is what you were looking for.
--enable-swagger-ui Enables swagger ui on the apiserver at /swagger-ui.
The issue is that I believe it is not enabled for Google Kubernetes Engine: the master node does not serve any request for this resource, the port appears to be closed, and since the master is managed, I believe it cannot be enabled.
However, according to the documentation, the master should expose a series of resources that give you access to the API documentation so you can render it with the tool you prefer. This is the case, and the following files are available:
https://master-ip/swagger.json (you can get the master IP by running $ kubectl cluster-info)
{"swagger": "2.0",
"info": {
"title": "Kubernetes",
"version": "v1.9.3"
},
"paths": {
"/api/": {
"get": {
...
https://master-ip/swaggerapi
{"swaggerVersion": "1.2",
"apis": [
{
"path": "/version",
"description": "git code version from which this is built"
},
{
"path": "/apis",
"description": "get available API versions"
},
...
According to this blog post from Kubernetes, you could make use of this file:
From kube-apiserver/swagger.json. This file will have all enabled GroupVersions, routes and models, and will be the most up-to-date file for a specific kube-apiserver. [...] There are numerous tools that work with this spec. For example, you can use the Swagger editor to open the spec file and render documentation, as well as generate clients; or you can directly use swagger-codegen to generate documentation and clients. The clients this generates will mostly work out of the box, but you will need some support for authorization and some Kubernetes-specific utilities. Use the Python client as a template to create your own client.
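For example, a small sketch that pulls the spec through kubectl proxy (assuming the proxy is running at its default 127.0.0.1:8001 address) and saves it so it can be opened in Swagger UI, the Swagger editor, or fed to swagger-codegen:

# Sketch: fetch the API server's Swagger spec through `kubectl proxy`
# (start `kubectl proxy` first; 127.0.0.1:8001 is its default listen address)
# and save it for rendering with Swagger UI / Swagger editor / swagger-codegen.
import json
import urllib.request

PROXY = "http://127.0.0.1:8001"

with urllib.request.urlopen(PROXY + "/swagger.json") as resp:
    spec = json.load(resp)

print(spec["info"]["title"], spec["info"]["version"])

with open("swagger.json", "w") as f:
    json.dump(spec, f, indent=2)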

How to 'trigger' chef-solo and get callback/report?

I'm thinking of using Chef Solo as a PaaS orchestrator.
I'll have my own dashboard which will generate recipes, and my nodes will pull them. I know I can do that by using:
chef-solo -i <interval>
But if I'd like to add more and more attributes, like a list of virtual hosts or MySQL users to deploy, I don't know how I can achieve this.
I'm looking for your ideas; I 'think' Engine Yard is using Chef to deploy PHP and Node.js apps 'on demand'; how did they achieve this?
How do I avoid re-executing an app deployment if that app has already been deployed?
On the first run I'll have:
"websites" : {
"site1": { "username": "dave", "password": "password123" }
},
And then, when a new site is created, the attributes would become:
"websites" : {
"site1": { "username": "dave", "password": "password123" }
"site2": { "username": "bob", "password": "password123" }
}
etc.
And how do I get a report on what chef-solo is doing?
Any ingenious idea is welcome :)
Add Chef Server to your PaaS stack and use knife to push your recipes there. Knife can also be used to initially provision nodes in your PaaS, taking care of installing the Chef client (configured to talk to your Chef server).
The chef-solo client is useful for simple use cases, but it doesn't really scale: it will require additional supporting code for items like monitoring/reporting (your question), and more so when you move to more complex multi-tier deployment scenarios.
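If you do stay with chef-solo, that supporting code can start as a small wrapper that your dashboard (or cron) triggers: it runs chef-solo and posts the outcome back. A minimal sketch, where the dashboard URL and the solo.rb/node.json paths are placeholders:

# Sketch of a chef-solo wrapper: run the client, capture its output, and
# report the result back to a dashboard endpoint. The dashboard URL and the
# solo.rb / node.json paths are placeholders.
import json
import socket
import subprocess
import urllib.request

DASHBOARD = "http://dashboard.example.com/api/chef-runs"

result = subprocess.run(
    ["chef-solo", "-c", "/etc/chef/solo.rb", "-j", "/etc/chef/node.json"],
    capture_output=True, text=True,
)

report = {
    "node": socket.gethostname(),
    "success": result.returncode == 0,
    "output_tail": result.stdout[-4000:],  # last part of the run log
}

req = urllib.request.Request(
    DASHBOARD,
    data=json.dumps(report).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)

Chef also supports report and exception handlers (Ruby classes configured in solo.rb), which may be a more idiomatic place to hook reporting than an external wrapper.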