I created my application using serverless-offline and it works fine, but when I deploy to AWS I get the error below.
The CloudFormation template is invalid: Template format error: Number of resources, 216, is greater than maximum allowed, 200
I researched a lot and found the following suggestions:
Use microservices
Nested Stack
But I have no idea how to apply either of these to my existing project.
Unfortunately this is a CloudFormation limitation. A possible solution would be to split up your large service into multiple smaller ones.
For a more automatic approach, you should look into this plugin.
For more details, follow the link.
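To illustrate the manual split, here is a minimal sketch assuming two hypothetical services, myapp-core and myapp-jobs, where the second consumes a CloudFormation output exported by the first via the framework's ${cf:...} variable syntax (all names below are made up):

```yaml
# core/serverless.yml -- hypothetical first service; owns the queue
service: myapp-core
provider:
  name: aws
  runtime: nodejs18.x
resources:
  Resources:
    JobsQueue:
      Type: AWS::SQS::Queue
  Outputs:
    JobsQueueArn:
      Value:
        Fn::GetAtt: [JobsQueue, Arn]
---
# jobs/serverless.yml -- hypothetical second service; references the
# first stack's output instead of defining the queue again
service: myapp-jobs
provider:
  name: aws
  runtime: nodejs18.x
functions:
  worker:
    handler: handler.worker
    events:
      - sqs:
          arn: ${cf:myapp-core-${opt:stage, 'dev'}.JobsQueueArn}
```

Each service then deploys as its own CloudFormation stack, so each one gets its own 200-resource budget.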
Issue/References Needed
I am looking for a setup where code from one repository can be deployed to multiple clusters, with each cluster running a different configuration, and I want to understand whether this is possible using GitHub.
I tried looking for a solution online but was unable to find any article covering this. Any references would be quite helpful.
Thanks in advance!
I wanted to create multiple deployments from the same container image in the same namespace with different configurations.
These docs helped me:
Doc 1
Doc 2
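In case it helps others, here is a minimal sketch of the pattern, using a hypothetical image myapp:1.0 and two ConfigMaps that hold the per-deployment settings (all names are made up):

```yaml
# Two Deployments in the same namespace, built from the same image,
# differing only in the ConfigMap they load their environment from.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp, variant: blue }
  template:
    metadata:
      labels: { app: myapp, variant: blue }
    spec:
      containers:
        - name: myapp
          image: myapp:1.0        # same image for both deployments
          envFrom:
            - configMapRef:
                name: myapp-blue-config   # variant-specific settings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp, variant: green }
  template:
    metadata:
      labels: { app: myapp, variant: green }
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
          envFrom:
            - configMapRef:
                name: myapp-green-config
```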
I'm currently using Kubernetes and I came across Helm.
Let's say I don't like the idea of "infecting" my Kubernetes cluster with a process that is unrelated to my applications, but I would gladly accept it if it proved beneficial.
So I did some research, but I still can't find anything I can't easily do with my YAML descriptors and kubectl, so for now I can't see a use for it except, maybe, for parameterizing environments.
For example (taking these from guides I read):
you can easily install an application, e.g. helm install nginx -> I add an nginx image to my deployment descriptor, done
repositories -> I have Docker ones (where I pull my images from)
you can easily helm rollback in case of a release failure -> I just change the image version back to the previous one in my Kubernetes descriptor, easy
What bothers me is that, at the level of commands, the effort is pretty much the same (helm upgrade -> kubectl apply).
In exchange I get a lot of boilerplate from keeping the directory structure Helm wants, and I feel like I'm losing the control I have with plain deployment descriptors... what am I missing?
Your question is totally understandable. For small and simple deployments the benefits are not actually that great, but when a deployment is very complex, Helm helps a lot.
Imagine you have a couple of squads developing microservices for some company. If you can make a chart that works for most of them, the deployment of each microservice would differ only by the image and the resources required. This way you get a standardized deployment that is easier for all developers (sketched below).
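As a rough sketch of that idea, each squad would install the same shared chart with a small per-service values file; the chart name and value keys below are hypothetical:

```yaml
# values-orders.yaml -- overrides for one microservice
# (registry, chart, and key names are all made up for illustration)
image:
  repository: registry.example.com/orders
  tag: "1.4.2"
resources:
  requests: { cpu: 250m, memory: 256Mi }
---
# values-billing.yaml -- same shared chart, different image and sizing
image:
  repository: registry.example.com/billing
  tag: "2.0.1"
resources:
  requests: { cpu: 500m, memory: 512Mi }
```

Each service is then deployed with something like helm install orders ./service-chart -f values-orders.yaml, and everything else (labels, probes, service wiring) comes from the shared templates.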
Another use case is deploying applications that require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes, you're probably going to need at least a Deployment and a ConfigMap, then you would need a Service that matches this Deployment. And if you want to expose it to the internet, you need an Ingress too.
So one relatively simple application would require four different YAMLs that you would have to configure manually and double-check; instead, you could do a simple helm install and reuse the configuration that someone has already written, sometimes even the company that created the application.
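For illustration, with a published Grafana chart those four hand-written YAMLs typically collapse into one small values file; the keys below follow common chart conventions, so verify them against the actual chart's values.yaml:

```yaml
# grafana-values.yaml -- replaces a hand-written Deployment, ConfigMap,
# Service, and Ingress (key names are chart-dependent)
ingress:
  enabled: true
  hosts:
    - grafana.example.com   # hypothetical hostname
persistence:
  enabled: true
  size: 10Gi
```

A single helm install run with that values file then brings up all the pieces at once.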
There are a lot of other use cases, but these two are the ones that I would say are the most common.
Here are three suggestions for ways Helm can be useful:
Your continuous deployment system somewhat routinely produces new builds and wants to send them to the Kubernetes cluster. You can use templating to specify the image name and tag in a deployment, and then run helm upgrade ... --set tag=201907211931 to request a specific tag (see the template sketch after this list).
You might have various service-specific controls like the log level or external database hostnames. The Helm values mechanism gives a uniform way to specify them, without having to know the details of the Kubernetes YAML files.
There is a repository of pre-packaged application charts, so if you want replicated PostgreSQL with in-cluster persistent storage, that's already built for you and you can just depend on it, rather than figuring out the right combination of StatefulSets and PersistentVolumeClaims yourself.
You can combine these in interesting (and potentially complex) ways: use an in-cluster database for developer testing but a cloud-hosted and backed-up database for production, for example, and compute the database host name based on which combination of settings is provided.
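Here is a minimal sketch of the first suggestion, with a hypothetical chart whose values.yaml declares image and tag defaults that the CD system overrides at deploy time:

```yaml
# values.yaml -- defaults; the CD pipeline overrides tag with
#   helm upgrade myapp ./chart --set tag=201907211931
image: registry.example.com/myapp   # hypothetical image name
tag: latest
---
# templates/deployment.yaml -- the tag is injected by Helm templating
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image }}:{{ .Values.tag }}"
```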
There are, of course, alternative ways to do all of these things. Kustomize in particular can change the image value fairly straightforwardly, and is notable for having been included in the kubectl tool since Kubernetes 1.14 (see also Declarative Management of Kubernetes Objects Using Kustomize in the Kubernetes documentation). The "operator" pattern gives an alternate path to install software in your cluster, but even more so than Helm you're trusting an arbitrary program with API access.
I ran into the 200-resource limit on CloudFormation when using Serverless. I saw on a blog that using the domain manager would help mitigate this issue by freeing a few resources from API Gateway.
After implementing this, I realized it did nothing to help with the resource limit. Do I need to do something else after this? I am not sure whether I should remove my sls stack and redeploy it.
All Serverless does is transform your abstract configuration into CloudFormation templates (and other providers' templates too).
Adding a plugin to help you reconfigure the same stack won't reduce the number of resources that get generated.
The serverless blog has a great article on this, https://serverless.com/blog/serverless-workaround-cloudformation-200-resource-limit/.
The TL;DR is that there is a per-stack resource limit, so you have to break your stacks up.
You can:
Split your stacks manually, create multiple projects to form your platform and use references between them; or
Use a plugin to split the generated CloudFormation template, via one of the following (a minimal configuration sketch follows the list):
serverless-plugin-additional-stacks
serverless-nested-stack
serverless-plugin-split-stacks
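For example, here is a minimal serverless.yml sketch using serverless-plugin-split-stacks; the splitStacks options shown follow that plugin's documented settings, but verify them against the README for your version:

```yaml
# serverless.yml -- have the plugin migrate resources into nested stacks
service: my-big-service
provider:
  name: aws
  runtime: nodejs18.x

plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: false   # don't create one nested stack per function
    perType: true        # group migrated resources by type instead
```

Each nested stack then gets its own resource limit, while the parent stack only carries one AWS::CloudFormation::Stack resource per nested stack.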
I have set up a Kubernetes cluster using Kubernetes Engine on GCP to work on some data preprocessing and modelling using Dask. I installed Dask using Helm following these instructions.
Right now, I see that there are two folders, work and examples.
I was able to execute the contents of the notebooks in the examples folder, confirming that everything is working as expected.
My questions now are as follows:
What is the suggested workflow to follow when working on a cluster? Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment? Would you just manually move your work to a bucket every time you upgrade (which seems tedious)? Or would you create a simple VM instance, prototype there, and then move everything to the cluster when running on the full dataset?
I'm new to working with data in a distributed environment in the cloud, so any suggestions are welcome.
What is the suggested workflow to follow when working on a cluster?
There are many workflows that work well for different groups. There is no single blessed workflow.
Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
Sure, that would be fine.
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment?
You might save your data to some more permanent store, like cloud storage, or a git repository hosted elsewhere.
Would you just manually move your work to a bucket every time you upgrade (which seems tedious)?
Yes, that would work (and yes, it is tedious).
Or would you create a simple VM instance, prototype there, and then move everything to the cluster when running on the full dataset?
Yes, that would also work.
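One hedged sketch of the persistence side: back the work directory with a standard PersistentVolumeClaim, whose data survives pod restarts and helm upgrade as long as the claim itself isn't deleted. How to mount it into the notebook pod depends on the chart's values, so check the Dask chart's values.yaml; the claim itself is plain Kubernetes:

```yaml
# pvc.yaml -- storage that outlives the pods created by the chart
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: notebook-work
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi   # size is an arbitrary example
```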
In Summary
The Helm chart includes a Jupyter notebook server for convenience and easy testing, but it is no substitute for a full-fledged, long-term persistent productivity suite. For that you might consider a project like JupyterHub (which handles the problems you list above) or one of the many enterprise-targeted variants on the market today. It would be easy to use Dask alongside any of those.
I have an Azure project (Azure 1.3) in VS2010. There are two web roles: one web page project and one WCF project. In debug mode I want the web project to use a web.config for the DEV environment, and when publishing, the web.config for PROD must be used.
What is the best way to do this?
Currently I am facing issues when using a Web.Debug.config with an XSLT transform. It doesn't seem to work in Azure....
Solve your problem a different way. Think of the web.config as always being static and never changing when working with Azure. What does change is your ServiceConfiguration.cscfg.
What we have done is create our own configuration provider that first checks the ServiceConfiguration.cscfg and then falls back to the web.config if the setting/connection string isn't there. This allows us to run servers in IIS/WCF directly during development and then have different settings when deployed to Azure. There are some circumstances where you have to use web.config (yes, I'm referring to WCF here), and in those cases you have to write code and create conventions instead of storing everything in web.config. I have a blog post where I show an example of how I did this when dealing with WIF (Windows Identity Foundation) and Azure.
I agree with Mose, excellent question!
Visual Studio 2010 includes a solution for this type of problem, web.config transforms. If you look at your web role you'll notice it includes Web.Debug.config and Web.Release.config along with the traditional web.config. These files are used to transform the web.config during deployment.
The canonical example is "I need different database connection strings for development and release" but it also fits your situation.
There is an excellent blog post from the Visual Web Developer Team that explains how to use this feature (don't bother with the MSDN docs, I know how it works and still don't understand the docs). Check out http://blogs.msdn.com/b/webdevtools/archive/2009/05/04/web-deployment-web-config-transformation.aspx
I like this question!
For worker roles, I solved this problem by detecting the environment at runtime and launching my 'application' in a new AppDomain with a custom configuration:
bot.cloud.config
bot.dev.config
bot.win.config
This is incredibly efficient!
I'd like to do the same with web projects, because using the Azure-specific configuration is a lot of trouble:
Both configs are not in the same place, which is time-consuming when debugging
You have to learn a new way of writing something that should be standard
Sometimes you'll wonder whether the app fell back to web.config because of a stupid syntax error
I'm still searching for the right way to do this, like in this post
Another possible solution is to have two CloudService projects, each one with a specific ServiceConfiguration.cscfg (dev/prod). Develop using the Dev one, but deploy the Prod one.
Currently I am facing issues when using a Web.Debug.config with transform XSLT. It doesn't seem to work in Azure....
It depends on whether you want to make it work on your local machine or inside continuous integration.
For the local machine I tried to answer here: https://stackoverflow.com/a/9393533/182371
For continuous integration it's even easier. When you build from the command line and specify the Configuration property value, your configs WILL be transformed (no matter what happens when you build inside VS). So properly specifying build configurations for both the cloud and web projects will give you the correct output depending on the build parameters.