In CDK, for most constructs I can also find a Cfn variant, e.g. CfnTableProps vs. TableProps. What is the difference, and which one should we generally use?
The Cfnxx resources are low-level constructs.
"These constructs represent all of the AWS resources that are available in AWS CloudFormation."
As opposed to high-level constructs which "provide the same functionality, but handle much of the details, boilerplate, and glue logic required by CFN constructs".
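To make the difference concrete, here is a rough sketch of the same DynamoDB table defined both ways, written in Scala against the CDK v2 Java bindings; the construct IDs and key names are invented for illustration, not a recommended setup:

    import software.amazon.awscdk.Stack
    import software.amazon.awscdk.services.dynamodb.{Attribute, AttributeType, CfnTable, Table}
    import software.constructs.Construct

    class TableStack(scope: Construct, id: String) extends Stack(scope, id) {

      // High-level (L2) construct: a few lines, sensible defaults for everything else.
      Table.Builder.create(this, "OrdersTable")
        .partitionKey(Attribute.builder().name("pk").`type`(AttributeType.STRING).build())
        .build()

      // Low-level (L1/Cfn) construct: you spell out the raw CloudFormation properties yourself.
      CfnTable.Builder.create(this, "OrdersRawTable")
        .keySchema(java.util.List.of[AnyRef](
          CfnTable.KeySchemaProperty.builder().attributeName("pk").keyType("HASH").build()))
        .attributeDefinitions(java.util.List.of[AnyRef](
          CfnTable.AttributeDefinitionProperty.builder().attributeName("pk").attributeType("S").build()))
        .billingMode("PAY_PER_REQUEST")
        .build()
    }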
More info available here: https://docs.aws.amazon.com/cdk/latest/guide/constructs.html
Personally, I've used these only when there's no API available in the high-level constructs for what I'm trying to achieve.
This question is formulated in Scala 3/Dotty but should generalise to any language NOT in the MetaML family.
The Scala 3 macro tutorial (https://docs.scala-lang.org/scala3/reference/metaprogramming/macros.html) starts with the Phase Consistency Principle, which explicitly states that a free variable defined in one compilation stage CANNOT be used in the next stage, because the object it is bound to cannot be persisted to a different compiler process:
... Hence, the result of the program will need to persist the program state itself as one of its parts. We don’t want to do this, hence this situation should be made illegal
This should be considered a solved problem, given that many distributed computing frameworks demand a similar capability to persist objects across multiple computers. The most common kind of solution (as observed in Apache Spark) uses standard serialisation/pickling (Java standard serialization, Twitter Kryo/Chill) to create snapshots of the bound objects, which can be saved to disk/off-heap memory or sent over the network.
The tutorial itself also suggests this possibility twice:
One difference is that MetaML does not have an equivalent of the PCP - quoted code in MetaML can access variables in its immediately enclosing environment, with some restrictions and caveats since such accesses involve serialization. However, this does not constitute a fundamental gain in expressiveness.
In the end, ToExpr resembles very much a serialization framework
Instead, both Scala 2 and Scala 3 (and their respective ecosystems) largely ignore these out-of-the-box solutions, and only provide default instances for primitive types (Liftable in Scala 2, ToExpr in Scala 3). In addition, existing libraries that use macros rely heavily on manually defined quasiquotes/quotes for this trivial task, making the source much longer and harder to maintain, while not making anything faster (JVM object serialisation is a highly optimised language component).
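For reference, this is roughly what that manual work looks like: a hand-written ToExpr instance for a small case class (a Scala 3 sketch; Point is just an invented example):

    import scala.quoted.*

    case class Point(x: Int, y: Int)

    // A hand-written ToExpr instance: it rebuilds the value as a quoted expression,
    // field by field, instead of serialising the object wholesale.
    given ToExpr[Point] with
      def apply(p: Point)(using Quotes): Expr[Point] =
        '{ Point(${ Expr(p.x) }, ${ Expr(p.y) }) }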
What's the cause of this status quo? How do we improve it?
According to the Kubernetes docs, the kubectl tool has "three kinds of object management":
imperative commands
imperative object configuration
declarative object configuration
While the use cases of the first and the last options are more or less clear, the second one really confuses me.
Moreover, in the concepts section there is a clear distinction of use cases:
use imperative commands for quick creation of (simple) single-container resources
use declarative object configuration for managing (more complex) sets of resources
Also, the imperative style is recommended for the CKA certification, so it seems to be preferred for day-to-day cluster management activities.
But once again, what is the best use case / practice for the "imperative object configuration" option, and what is the root idea behind it?
There are two basic ways to deploy to Kubernetes: imperatively, with kubectl commands, or declaratively, by writing manifests and using kubectl apply. A Kubernetes object should be managed using only one technique; mixing techniques for the same object results in undefined behavior.
Imperative commands operate on live objects
Imperative object configuration operates on individual files
Declarative object configuration operates on directories of files
Imperative object configuration creates, updates and deletes objects using configuration files, which contain fully defined object definitions. You can store object configuration files in source control systems and audit changes more easily than with imperative commands.
You can run kubectl apply, delete, and replace operations with configuration files or directories containing configuration files.
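As a concrete illustration (the file and directory names are made up), the three styles look roughly like this on the command line:

    # Imperative commands: operate directly on live objects
    kubectl create deployment nginx --image=nginx

    # Imperative object configuration: you name both the operation and the file
    kubectl create -f nginx-deployment.yaml
    kubectl replace -f nginx-deployment.yaml

    # Declarative object configuration: kubectl works out the right operation per object
    kubectl apply -f configs/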
Please refer to the official documentation, where everything is fully described with examples. I hope this is helpful.
I am aware of the nested job support (XD-1972) work and am looking forward to it. A question regarding split flow support: is there a plan to support running parallel steps, as defined in split flows, in separate containers?
Would it be as simple as providing a custom implementation of a proper TaskExecutor, or is it something more involved?
I'm not aware of support for splits to be executed across multiple containers being on the roadmap currently. Given the orchestration needs of something like that, I'd probably recommend a more composed approach anyway.
A custom TaskExecutor could be used to farm out the work, but it would be pretty specialized. Each step within the flows in a split is executed within the scope of a job. That scope (and all its rights and responsibilities) would need to be carried over to the "child" containers.
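For context, this is the level where the TaskExecutor plugs in today: the branches of a split are handed to whatever executor you configure, all within one JVM. A rough Scala sketch against the Spring Batch FlowBuilder API (the flow names and the helper method are invented):

    import org.springframework.batch.core.job.builder.FlowBuilder
    import org.springframework.batch.core.job.flow.Flow
    import org.springframework.batch.core.job.flow.support.SimpleFlow
    import org.springframework.core.task.TaskExecutor

    // Build a split flow whose branches run on the supplied TaskExecutor.
    // A custom executor could try to farm the branches out, but the job scope
    // (repository, execution context, restartability) still lives in this container.
    def parallelFlow(left: Flow, right: Flow, executor: TaskExecutor): Flow =
      new FlowBuilder[SimpleFlow]("splitFlow")
        .split(executor)
        .add(left, right)
        .build()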
I am hoping that combinator parsers (http://debasishg.blogspot.com/2008/04/external-dsls-made-easy-with-scala.html) will work for a design that processes the routing rules for a REST service implemented with Scalatra (http://tutorialbin.com/tutorials/80408/infoq-scalatra-a-sinatra-like-web-framework-for-scala).
This REST service is to serve as a proxy so external applications can get access to services within the firewall, as it will have additional layers of security that can be customized for the business requirements of each REST service.
So, if a person wants to access their class schedule there will be less security than if they want to look at someone's transcript.
I would like the rules for where to go to actually get the information, how to return it, and what security is needed, to be expressed in a DSL.
But the first problem is how to dynamically change the routing rules of the REST service based on a DSL. I am trying to create a framework that doesn't require a great deal of recompiling to add new rules: you just write the appropriate scripts and let them be processed.
So, can a DSL be implemented using the Combinator Parser, in Scala, that will allow JAX-RS (http://download.oracle.com/javaee/6/tutorial/doc/giepu.html) to have dynamically changed routing?
UPDATE:
I haven't designed the language yet, but this is what I am trying to do:
route /transcript using action GET to
http://inside.com/transcript/{firstparam}/2011/{secondparam}
return json encrypt with public key from /mnt/publickey.txt
for /education_cost using action GET combine http://combine.com/SOAP/costeducate with
http://combine.com/education_benefit/2010 with
http://combine.com/education_benefit/2011 return html
These are two possible ideas: the rules send a request for a transcript to a different site, such as one inside the firewall, and the data is encrypted and returned.
The second would be more complicated, in that the results of a SOAP request and two REST requests are combined, and there would need to be additional commands describing how they are combined; but the idea is to put all of this in files that can be parsed on the fly.
If I used Groovy then some new classes could be generated for the routing, which would remove some performance hits, but I think using Scala would be the best bet, even if I took a performance hit.
My hope is to make a framework that is more maintainable, so new routing rules can be written by people who don't know any OOP or functional languages, while the specifications could be written using Specs (http://code.google.com/p/specs/) so that the functional side can be certain that their requirements are tested on a regular basis.
UPDATE 2:
When I start working on a design I may intuitively understand some options, but not know why. Today I realized that the reason Groovy may be a better fit for this is that I could generate the classes for routing using its metaprogramming (http://www.justinspradlin.com/programming/groovy-metaprogramming-adding-behavior-dynamically/), and then use Scala or Groovy to dynamically use the routing that was generated. I am not certain how to get Scala to generate the classes if they don't already exist.
In Groovy, as well as some other languages, as shown here (http://langexplr.blogspot.com/2008/02/handling-call-to-missing-method-in.html), if a method is missing you can dynamically generate the method and it will exist from then on, so it is only ever missing once.
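For what it's worth, Scala has a rough counterpart to Groovy's methodMissing in the Dynamic trait: calls to methods that don't exist are routed to applyDynamic at runtime, rather than a class being generated. A minimal sketch (the class and behaviour are invented for illustration):

    import scala.language.dynamics

    // Calls to methods that don't exist on DynamicRouter still compile and are
    // dispatched to applyDynamic at runtime -- no class is generated, unlike Groovy.
    class DynamicRouter extends Dynamic {
      def applyDynamic(name: String)(args: Any*): String =
        s"would route '$name' with args ${args.mkString(", ")}"
    }

    // new DynamicRouter().transcript("GET")  // handled by applyDynamic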
It almost seems that I should be mixing Groovy with Java to make this work, but then the result may be that some of the code is in Scala and some in Java, for the routing of REST services.
Splitting the question in two parts:
can a DSL be implemented using the Combinator Parser
Yes. There are things that cannot be implemented using a combinator parser, or even other kinds of parser. For instance, Perl itself cannot be parsed (it must be evaluated). And combinator parsers are also not particularly good for complex languages (such as Scala -- its compiler is not based on combinator parsers), or if you demand top performance (such as the compilers used to compile hundreds of thousands of lines of code).
If, however, you plan to go to such extremes, choosing the parser is not going to be your main problem. For DSLs of average complexity, they'll do just fine.
that will allow JAX-RS to have dynamically changed routing
Well, I don't know JAX-RS, but if dynamically changed routing can be done with it, then combinator parsers will be able to provide whatever input is needed.
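To give a feel for the size of the job on the first part, here is a rough sketch using the scala-parser-combinators library for something shaped like the first rule in the question; the grammar, the Route case class and the field names are all invented for illustration:

    import scala.util.parsing.combinator.JavaTokenParsers

    // What a parsed rule could be turned into before wiring it to the REST layer.
    case class Route(path: String, method: String, target: String, format: String)

    object RouteDsl extends JavaTokenParsers {
      // e.g.  route /transcript using action GET to http://inside.com/transcript/{id} return json
      def rule: Parser[Route] =
        "route" ~> path ~ ("using" ~ "action" ~> method) ~ ("to" ~> url) ~ ("return" ~> format) ^^ {
          case p ~ m ~ u ~ f => Route(p, m, u, f)
        }

      def path: Parser[String]   = """/[\w/{}]*""".r
      def method: Parser[String] = "GET" | "POST" | "PUT" | "DELETE"
      def url: Parser[String]    = """\S+""".r
      def format: Parser[String] = "json" | "html" | "xml"

      def parseRule(s: String): ParseResult[Route] = parseAll(rule, s)
    }

Extending this toward the second rule's "combine" form is mostly a matter of adding alternatives with |, which is where combinators pay off.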
EDIT
Seeing your example, I think parser combinators are certainly enough. From their results, I expect you could dynamically create BlueEyes binders -- I haven't used BlueEyes, so I'm not sure how dynamic they are.
Another alternative would be go with Lift. Lift's binders are partial functions, and they can be combined in all the usual ways -- f1 orElse f2, f1 andThen f2, etc. I didn't suggest it at first because it is most often used with sessions, but it has a RESTful model which, I think, is stateless.
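The appeal of that model here is that each parsed rule can become one partial function, and the rule set is just their composition. This is only the composition idea, not Lift's actual binder API:

    object Routes {
      // Each parsed rule becomes a partial function from request path to a handler result;
      // the rule set is their orElse-composition, so rules can be appended without recompiling callers.
      type RouteHandler = PartialFunction[String, String]

      val transcript: RouteHandler = { case "/transcript"     => "proxy to http://inside.com/transcript" }
      val education:  RouteHandler = { case "/education_cost" => "combine SOAP and REST results" }

      val all: RouteHandler = transcript orElse education
      // all("/transcript")          // matched by the first rule
      // all.isDefinedAt("/missing") // false -> e.g. return 404
    }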
I don't know Scalatra, so I don't know if it would be adaptable to this or not.
This has baffled me for a long time.
Given basic atomic primitives like compare & swap, I can see how to implement a spin lock (from which I can build mutexes).
However, I don't see how I can build condition variables out of this. How is this done?
It's not particularly simple. The following is a link to a paper by Douglas Schmidt (who is also largely responsible for the ACE libraries) that details several approaches for implementing condition variables on Windows using the synchronization primitives available in Win32 (pre-Vista). The approaches include ones that use only basic primitives generally available on any OS, and the paper discusses the various limitations of each approach:
http://www.cs.wustl.edu/~schmidt/win32-cv-1.html
The bottom line (concluding remarks):
This article illustrates why developing condition variables on Win32 platforms is tricky and error-prone. There are several subtle design forces that must be addressed by developers. In general, the different implementations we've examined vary according to their correctness, efficiency, fairness, and portability. No one solution provides all these qualities optimally.
The SignalObjectAndWait solution in Section 3.4 is a good approach if fairness is paramount. However, this approach is not as efficient as other solutions, nor is it as portable. Therefore, if efficiency or portability are more important than fairness, the SetEvent approach described in Section 3.2 may be more suitable. Naturally, the easiest solution would be for Microsoft to simply provide condition variables in the Win32 API.
Note that starting in Vista, Windows supports condition variables using native APIs:
http://msdn.microsoft.com/en-us/library/ms686903.aspx
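For intuition only, here is a deliberately naive, spin-based sketch in Scala built on a single atomic counter and compare-and-swap. It shows why the primitive is sufficient in principle; real implementations (including those discussed in the paper above) block the waiting thread instead of spinning and address fairness:

    import java.util.concurrent.atomic.AtomicInteger
    import java.util.concurrent.locks.Lock

    // Toy condition variable: waiters spin on a "generation" counter instead of blocking.
    final class SpinCondition {
      private val generation = new AtomicInteger(0)

      // Atomically: release `lock`, wait for a signal, re-acquire `lock`.
      // Callers must re-check their predicate in a loop, as with any condition variable.
      def await(lock: Lock): Unit = {
        val gen = generation.get()   // read the generation before releasing the lock,
        lock.unlock()                // so a signal sent after unlock is never missed
        while (generation.get() == gen) Thread.onSpinWait()
        lock.lock()
      }

      // Wake every current waiter by advancing the generation, using a CAS loop.
      def signalAll(): Unit = {
        var done = false
        while (!done) {
          val g = generation.get()
          done = generation.compareAndSet(g, g + 1)
        }
      }
    }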