Containers (Kubernetes) vs web services (REST APIs)

I have a single-screen desktop application developed in Java. It is a tool to convert files: given a file in .abc format, the tool converts it to .xyz format. Basically, the tool works offline and acts as a translator, converting a file from one form to another.
Now, to improve the infrastructure, there are discussions about moving the tool to Kubernetes or providing a REST service for the file conversion. I have no experience with containers or REST APIs, as I am a front-end developer.
More about the tool: as mentioned, it is a single-screen application, very light, doing a very minimal job, and used by approximately 200 users. Given the shape and size of the application, which approach would be best, and why? Basically, I am looking for a short evaluation of Kubernetes vs a REST service and an architecture recommendation with reasons.

Currently your application is a standalone application, which is quite an old model.
Here are the high-level changes needed to expose your file-conversion logic over a REST API in the Kubernetes world.
You can go through the following areas one by one to get a better design-level understanding:
Your Java code becomes the backend, and the public methods that currently take inputs from UI actions are exposed over a REST API.
There are multiple REST frameworks (Jersey, RESTEasy, etc.; the Spring/Spring Boot framework also provides REST API support); you can go through any of them to get an understanding.
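As a dependency-free illustration of the idea (in a real project you would more likely use Jersey or Spring Boot, and the conversion logic here is just a placeholder), the JDK's built-in com.sun.net.httpserver can expose a Java method over HTTP:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConvertService {

    // Hypothetical stand-in for the real .abc -> .xyz conversion logic.
    static String convert(String abcContent) {
        return abcContent.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port and expose convert() at /convert.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/convert", exchange -> {
            byte[] in = exchange.getRequestBody().readAllBytes();
            byte[] out = convert(new String(in)).getBytes();
            exchange.sendResponseHeaders(200, out.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(out);
            }
        });
        server.start();

        // Call our own endpoint once, the way the desktop UI would.
        int port = server.getAddress().getPort();
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/convert"))
                        .POST(HttpRequest.BodyPublishers.ofString("abc file body"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // ABC FILE BODY
        server.stop(0);
    }
}
```

A framework like Spring Boot would add routing, JSON binding, and error handling on top of this same request/response shape.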
Once your backend is exposed over a REST API, it needs to be containerized, meaning your backend will run inside a container. Go through the Docker documentation and build a sample containerized app; there is a huge amount of material in this area.
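As a sketch of that step (the jar name and base image are assumptions, not taken from the tool itself), a minimal Dockerfile for the Java backend might look like:

```dockerfile
# Assumes the backend has already been packaged as a runnable fat jar
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/converter.jar converter.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "converter.jar"]
```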
Once your backend is containerized, it can be deployed to a Kubernetes cluster.
Kubernetes is basically a container orchestration tool, and it is quite a wide topic; you can go through its official documentation for a basic understanding.
The UI will still run on the client machine, launched from the desktop as it is today, but it will communicate with the Kubernetes cluster where your application is now packaged in a container.
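A minimal sketch of the Kubernetes objects involved, assuming the containerized backend has been pushed to a registry (the image name, replica count, and ports are all hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-converter
spec:
  replicas: 2                     # likely plenty for ~200 light users
  selector:
    matchLabels:
      app: file-converter
  template:
    metadata:
      labels:
        app: file-converter
    spec:
      containers:
        - name: converter
          image: registry.example.com/file-converter:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: file-converter
spec:
  selector:
    app: file-converter
  ports:
    - port: 80          # what the desktop UI calls
      targetPort: 8080  # what the container listens on
```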
References:
Docker: https://docs.docker.com/
Kubernetes: https://kubernetes.io/


How to mock the Kubernetes cluster/server?

Kubernetes OpenAPI specification is hosted here.
https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec
Additionally, various client libraries for Kubernetes are provided here:
https://kubernetes.io/docs/reference/using-api/client-libraries/
Using the OpenAPI specification, I am able to generate the server code that provides the REST services. However, applications using these K8s client APIs (written in any language: Go, Java, etc.) do not use the REST API directly.
My objective is to mock the K8s server to use in the test automation and build a controlled environment to create various test scenarios.
Is there any ready-to-use Kubernetes mock available? If not, how can we interface the client APIs with the REST server generated from the OpenAPI specification above? That way, applications can continue to use the client APIs but will internally communicate with the mocked K8s server rather than the real one.
Please help with the options.
Not really a direct answer to your question, but most solutions I have seen do not try to mock the k8s API; they really use it, through either k3s (from Rancher Labs) or the KinD project (the official way).
You then connect to it like a normal Kubernetes cluster.
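For illustration, a KinD-based test setup might look like this from the command line (assuming the kind and kubectl binaries are installed; the cluster name is made up):

```
kind create cluster --name test-env          # throwaway local cluster
kubectl cluster-info --context kind-test-env # use it like any other cluster
# ... run the test scenarios against it ...
kind delete cluster --name test-env          # tear down after the run
```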

Material on Building a REST api from within a docker container

I'm looking to build an API on an application that is going to run in its own Docker container. It needs to work with some other applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs so that my application runs safely within Docker while still handling whatever communication needs to happen externally?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
REST API implementation is very easy in Go; you can use the built-in net/http package. Here's a tutorial that will help you understand its usage: https://tutorialedge.net/golang/creating-restful-api-with-golang/
Note: if you are planning to develop a production server, the default HTTP client is not recommended; it has no timeout, so frequent calls to a stalled endpoint can pile up and knock the server down. In that case, you have to use a custom HTTP client as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
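A minimal, self-contained sketch of both sides in Go (the endpoint and its JSON body are made up for illustration): a net/http handler, and a client with an explicit timeout instead of the zero-timeout default client:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// status returns the (made-up) JSON body for our example endpoint.
func status() string {
	return `{"status":"ok"}`
}

// statusHandler serves that body over HTTP.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, status())
}

// newClient builds an http.Client with an explicit timeout; the default
// client has none and can hang forever on a stalled peer.
func newClient() *http.Client {
	return &http.Client{Timeout: 5 * time.Second}
}

func main() {
	// httptest runs the handler on a random local port for demonstration;
	// a real server would use http.ListenAndServe(":8080", mux) instead.
	srv := httptest.NewServer(http.HandlerFunc(statusHandler))
	defer srv.Close()

	resp, err := newClient().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints {"status":"ok"}
}
```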
For learning Docker I would recommend the Docker docs; they're very good and cover plenty of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead; same principles, different tech. I would definitely go through https://docs.docker.com/ and implement the examples on your own computer. Then practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used Go myself, but I suspect it shouldn't be too hard to deploy into a Docker container.
The last step, production deployment, will be similar whether or not you use Docker: the VM will need a web server like Apache or nginx to expose the ports you wish to make public, behind which you run the Docker container or the Go server, and then you'll have your system!
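As a hedged sketch of that nginx piece (the domain and backend port are assumptions), a reverse-proxy site config might look like:

```nginx
server {
    listen 80;
    server_name example.com;                 # hypothetical public hostname

    location /api/ {
        proxy_pass http://127.0.0.1:8000/;   # the Go/Docker service, assumed on port 8000
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```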
Hope this helps!

Deploying Vue.js application consuming REST API

I'm trying to deploy my first Vue.js application on Heroku, but I need some clarification.
My app is a very simple client consuming a REST API.
I deployed the REST service on a Heroku dyno and now I need to also deploy my front-end application.
Is it possible to install client app on the same dyno? Is it a good practice, or should I deploy the client as a separate application?
What is the "real-world" production approach?
Note: the REST APIs are built with Java/Spring MVC.
It is certainly possible, and can keep your dyno costs down.
I answered a similar question here. That specific answer may or may not be suitable to your needs depending on what server technology you are using etc., however, the general idea is that you can certainly maintain multiple parts of your app within a single git repo that gets deployed to a single Heroku app.
Such a single Heroku app may or may not consist of multiple Process Types which may each run on one or more dynos.
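One common single-app layout (all names here are hypothetical): build the Vue client and copy its dist/ output into the Spring app's static resources, so one dyno serves both the API and the front end:

```
myapp/
├── Procfile                     # web: java -jar target/myapp-0.1.0.jar
├── pom.xml                      # Spring MVC backend
├── src/main/resources/static/   # receives the Vue build output
└── frontend/                    # Vue app; `npm run build` produces dist/
```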

What is the difference between Cloud Foundry and OpenWhisk?

I see these both in Bluemix, but what is the difference between them?
Cloud Foundry and OpenWhisk are two Bluemix compute models that a developer can use to power an application's workload.
I'll give a very high-level summary of both services and when I would use them...
Cloud Foundry
IBM Bluemix was originally based on Cloud Foundry's open technology. Cloud Foundry is a platform-as-a-service that supports the full application lifecycle, from initial development through all testing stages to deployment.
Cloud Foundry has a CLI program called cf, which is the primary tool for interacting with Bluemix (Bluemix also provides a web GUI).
Cloud Foundry introduces the concepts of Organizations that contain Spaces which you can think of as workspaces. Different spaces typically correspond to different lifecycle stages for an application.
Cloud Foundry introduces the concepts of Services and Applications. A Cloud Foundry service usually performs a particular function (like a database service), and an application usually has services and their keys bound to it.
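As an illustration of these concepts, a typical cf session might look like this (the app, service, and plan names are hypothetical):

```
cf login -a https://api.ng.bluemix.net
cf push my-app                                # deploy the application
cf create-service cloudantNoSQLDB Lite my-db  # provision a database service
cf bind-service my-app my-db                  # bind its credentials to the app
cf restage my-app                             # restart so the app sees the binding
```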
OpenWhisk
OpenWhisk is a new distributed, event-driven compute model developed by IBM.
It has a distributed automatically scaling serverless architecture that executes application logic on events.
OpenWhisk also has a CLI program called wsk which can be used to run your code snippets, or actions, on OpenWhisk.
OpenWhisk introduces the concepts of Triggers, Actions, and Rules.
Triggers are a class of events emitted by event sources.
Actions encapsulate the actual code to be executed. They support multiple language bindings, including Node.js, Swift, and arbitrary binary programs packaged in Docker containers. Actions can invoke any part of an open ecosystem, including existing Bluemix services for analytics, data, and cognitive workloads, or any other third-party service.
Rules are an association between a trigger and an action.
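A minimal Node.js action, for illustration (the parameter and greeting are made up): OpenWhisk invokes main() with the event's parameters and uses the returned object as the action's result:

```javascript
// Minimal OpenWhisk-style action: the platform calls main() with the
// trigger's parameters and expects a JSON-serializable object back.
function main(params) {
  const name = params.name || "stranger";
  return { greeting: "Hello, " + name + "!" };
}

// Local invocation for illustration; on OpenWhisk, a rule wires a trigger
// to this action and the platform performs the call.
console.log(JSON.stringify(main({ name: "Bluemix" })));
```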
Cloud Foundry vs. OpenWhisk
So the question remains: when should you use Cloud Foundry, or when should you use OpenWhisk?
In my limited experience using OpenWhisk, here are my thoughts. I think of OpenWhisk as an easily adopted, automatically scaling architecture that application developers can use without much prior backend experience. I think of Cloud Foundry as a lower level of the software stack, which may give you more customization but will likely take more skill and knowledge to set up.
I would use Cloud Foundry if I...
Was a backend & application developer.
Had experience creating and connecting services together.
Needed functionality that just might not be possible using OpenWhisk.
I would use OpenWhisk if I...
Was an application developer.
Didn't want to worry about a server.
Didn't want to learn different programming languages, etc. to figure out how to set up my server.
Really wanted focus on developing my application and have the backend just work.
Hope that helped.
Cloud Foundry is a PaaS (platform-as-a-service), which means, in a nutshell, that it hosts the platform your application runs on. Examples of such a platform include Node.js or a JVM.
OpenWhisk is a serverless platform; the term FaaS (function-as-a-service) seems to be emerging as well. You upload code, which is executed once an event happens. That event might be anything, from a simple HTTP request to a change happening in your database.
The fundamental difference between the two is the mode of operation. PaaS means you're still running a server process: a long-running process listens for events and executes your logic when one happens. The rest of the time, the process sits idle, still requiring CPU cycles and memory just to listen.
In serverless, the platform takes on the burden of listening for events. Once an event happens, your code is instantiated and executed, and it is shut down afterwards, no longer requiring any resources. That also explains why OpenWhisk actions have a time limit of five minutes: they are not meant to be long-running.
Disclaimer: Both platforms support a lot more than I described here, I tried to keep it down to the most substantial difference between the both.

Choosing between gRPC with endpoints, or REST in a simple app to work like a BackEnd app deployed in GAE

I'm developing an app deployed on GAE; it is simple at the moment. This app is the backend of another app.
Internally, this app has a few modules (not important here) that communicate with each other via REST APIs (for other reasons).
The question I keep coming back to: I was starting to write the external API using gRPC and Endpoints, as the GAE docs recommend, when I wondered whether gRPC would also give me real advantages over REST for the internal calls.
I have spent a lot of time searching for the real advantages gRPC offers over REST, but I cannot find them.
Why does Google recommend gRPC? Is it faster than REST? (From my point of view, it is also simpler to write.)
Do you know of any benchmarks comparing the speed of the two technologies?
Thanks for any help.
You can use gRPC today on App Engine's Managed VM platform as both a client and a server. If you want load balancing, you need to use TCP/IP load balancing and have the gRPC servers terminate TLS themselves.
gRPC does not yet work on App Engine standard, but we're working on it. For more questions, hit up the mailing list.
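On the comparison the question asks about: gRPC's main practical advantages are binary Protocol Buffer encoding (smaller payloads than JSON), HTTP/2 multiplexing, and generated strongly typed client/server stubs, while REST remains easier to debug and to call from browsers. A minimal sketch of a gRPC service definition (all names hypothetical):

```proto
syntax = "proto3";

package converter;

// Hypothetical service: the same call a REST endpoint would expose as
// POST /convert, but with generated, strongly typed stubs on both sides.
service Converter {
  rpc Convert (ConvertRequest) returns (ConvertReply);
}

message ConvertRequest {
  bytes payload = 1;  // input file contents
}

message ConvertReply {
  bytes payload = 1;  // converted file contents
}
```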