Setting up a workspace to build on Apache Atlas

I'm exploring open source data catalog tools that can provide metadata features like:
Open source
Search and discovery
Lineage tracking
Tagging
I found Apache Atlas to be a good candidate to start working on, since it does not currently have connectors for Google Cloud Platform.
I've spent a lot of time figuring out how the platform works, but I need to understand how I can start writing connectors to support Google Cloud Platform. Is there any documentation to get started?
I went through this link: https://atlas.apache.org/#/EclipseSetup, which talks about how to set up the environment to start in Eclipse, but I'm not sure how to actually start building and testing the new code I'm thinking of writing.
I think there are a lot of components at play, and I'm too much of a noob to get started on this.
TL;DR: once I write the code, how do I test that it will work after I package the application?
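For reference, Atlas is a Maven project, and its build docs describe packaging a runnable distribution with `mvn clean -DskipTests package -Pdist`. One low-friction way to check that new connector code works after packaging is to start the bundled server and smoke-test it over the REST API. A minimal Python sketch, assuming a default local server on port 21000 with the stock admin/admin dev credentials (both are assumptions about your setup):

```python
# Hypothetical smoke test against a locally packaged Atlas server.
# Assumes the default endpoint (localhost:21000) and admin/admin credentials.
import requests

ATLAS_URL = "http://localhost:21000/api/atlas/v2"
AUTH = ("admin", "admin")  # default dev credentials; change for real deployments

def list_typedefs():
    """Fetch all type definitions; a new GCP connector would register its types here."""
    resp = requests.get(f"{ATLAS_URL}/types/typedefs", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

def search_entities(type_name):
    """Basic search to verify that entities created by a connector actually landed."""
    resp = requests.get(
        f"{ATLAS_URL}/search/basic", params={"typeName": type_name}, auth=AUTH
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])

if __name__ == "__main__":
    defs = list_typedefs()
    print("entity types registered:", len(defs.get("entityDefs", [])))
```

The same pattern extends to posting test entities and checking lineage, which is a quick way to iterate before wiring a connector into a real GCP source.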

Related

How to deploy code to IBM Cloud using DevOps approach?

Can somebody shed some light on IBM Cloud deployment tools/platforms?
I am new to it, so I am looking at their docs and watching videos, and I am still confused.
What I want to achieve is the typical scenario: fetch code from a repo, build it, test it, and deploy it to the cloud. I found strategies/platforms for achieving that, and I still can't see the differences and advantages/disadvantages between them.
So we have:
toolchain
cloud foundry
code engine
Continuous Delivery (service)
and maybe something more? :)
I was watching a "Cloud Foundry explained" video, and the presenter says that if you don't want to care about the lower layers, like networking, security, and containers, you can choose to deploy using the K8s service. Wtf? So instead of a totally automatic thing, you can now handle parts of it yourself. For me it's a total mix of everything together, and I don't know which tool/platform/strategy to use.
Any comment is appreciated.
It all depends on your requirements.
IBM DevOps/Continuous Delivery/Toolchains is a set of services that can build and deploy your application to a given runtime. You can find various tutorials here: https://www.ibm.com/cloud/architecture/courses/toolchain-tutorials. These tutorials show you various things that you can embed in your build pipelines (like code scanning with CRA, image scanning, signing, etc.).
These runtimes can be different depending on your requirements:
CloudFoundry, where you deploy the app using a buildpack; but this is a fading technology, so I wouldn't recommend it
as a Docker image in a K8s/OpenShift cluster: use this if your organization is planning to adopt, or already utilizes, Docker/Kubernetes/OpenShift. You will need to create the K8s/OpenShift cluster first.
as a serverless app, using IBM Code Engine
If you are just starting out and want to deploy a simple, single app to the cloud, I'd consider using IBM Code Engine and not investigate Toolchains for now. Check out the basic demo here: How to deploy source code with IBM Cloud Code Engine.
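To make that concrete, here is a rough, hedged sketch (not from the answer above) of scripting the Code Engine flow. It assumes the `code-engine` CLI plugin is installed; the project name, app name, and repo URL are placeholders, and flag names can vary between plugin versions, so verify with `ibmcloud ce application create --help`:

```python
# Hypothetical deploy script wrapping the IBM Cloud CLI with the Code Engine
# plugin. Names and the repo URL are placeholders, not real resources.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ibmcloud", "login", "--sso"])            # interactive login
run(["ibmcloud", "target", "-g", "Default"])   # pick a resource group
run(["ibmcloud", "ce", "project", "create", "--name", "demo-project"])
# Build straight from a Git repo and deploy the result as an app:
run([
    "ibmcloud", "ce", "application", "create",
    "--name", "demo-app",
    "--build-source", "https://github.com/your-org/your-repo",  # placeholder
])
```

The appeal over a full toolchain is that Code Engine handles the build, the container registry, and the runtime in one step, which matches the "single simple app" case the answer describes.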

Confluence migration from cloud to server

We have migrated a space from a cloud instance to a server instance. In the cloud instance we were using "PlantUML Diagrams for Confluence", but on the server we are using the "Confluence PlantUML Plugin", so the macro names differ between cloud and server: the cloud macro is "plantumlcloud", but on the server it is "plantuml". As a result, after migration, pages report that "plantumlcloud" is not a valid macro. Kindly help to resolve this.
In general, migrating Confluence spaces to another instance that is not running the same plugins will break any functionality provided by those plugins.
If you migrate hosting platforms and have the equivalent version of the plugin for your new platform, created by the same developer, in most cases you will retain functionality; however, there will often be differences between versions.
These differences show up especially when downgrading, and moving from cloud to server is a very definite example of a downgrade, as cloud will always run the latest version.
In general I would recommend against a migration from cloud to server; when it must be done, time should be spent ensuring compatibility with all plugins, and migration guides and plans should be made and followed.
As commented by @tgdavies, there seems to be an equivalent version of the plugin you were using on cloud, so hopefully that can resolve your issue.
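If editing every affected page by hand is impractical, one hedged option (not mentioned in the thread) is to rewrite the macro name inside each page's storage format through the Confluence Server REST API. This assumes the server plugin accepts the same macro body and parameters as the cloud one, which is worth verifying on a test copy of the space first:

```python
# Hypothetical script that renames the `plantumlcloud` macro to `plantuml`
# in a page's storage format via the Confluence Server REST API.
# Base URL and credentials are placeholders; run against a test copy first.
import requests

BASE = "https://confluence.example.com"   # placeholder server URL
AUTH = ("admin", "secret")                # placeholder credentials

def fix_page(page_id):
    r = requests.get(
        f"{BASE}/rest/api/content/{page_id}",
        params={"expand": "body.storage,version"},
        auth=AUTH,
    )
    r.raise_for_status()
    page = r.json()
    body = page["body"]["storage"]["value"]
    # Macros appear in storage format as <ac:structured-macro ac:name="...">
    new_body = body.replace('ac:name="plantumlcloud"', 'ac:name="plantuml"')
    if new_body == body:
        return  # nothing to change on this page
    requests.put(
        f"{BASE}/rest/api/content/{page_id}",
        json={
            "id": page_id,
            "type": "page",
            "title": page["title"],
            "version": {"number": page["version"]["number"] + 1},
            "body": {"storage": {"value": new_body, "representation": "storage"}},
        },
        auth=AUTH,
    ).raise_for_status()
```

Macro parameters may still differ between the two plugins, so spot-check a few rewritten pages rather than trusting the rename alone.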

The best way to deploy/redeploy PHP code from GitHub to GCP Compute Engine LAMP Stack [Google Click to Deploy]

Hello, overflowers!
Can someone please advise me on the best way to continuously deploy PHP code from GitHub to GCP Compute Engine? Specifically to the GCP Marketplace LAMP Stack, which is the Google Click to Deploy VM. Here is the link to the marketplace.
Your advice is greatly appreciated!
Click to Deploy (C2D) is an excellent way to test-drive solutions, but I'm (admittedly somewhat naively) skeptical that combining C2D with customization is a good approach.
That said, the C2D solutions are published, and you could, with some work, customize a solution as the basis for your own.
In other words, I'd recommend not using the C2D solution as-is but customizing the tools that it uses (!) for your needs.
The README explains how the LAMP VM is built (Cloud Build, Packer, Chef).
Without wishing to in any way impugn your approach, please consider alternative ways to deploy PHP to Google Cloud Platform. Running Apache and MySQL on a VM may be entirely appropriate for your needs, but you will need to maintain the OS, Apache, MySQL, etc. yourself.
If your goal is to deploy a PHP (web) app that needs a MySQL-compatible database and you want to be more "cloud native", you could consider using:
App Engine or Cloud Run to host your PHP app (see link)
Cloud SQL for the database (see link)
The above would require more initial work, but if you want more flexibility, resilience, and less "chore", I think you'd benefit from the investment.
In addition, opening up the app like this would make it easier to leverage Cloud Monitoring, Logging, Debugger, etc.
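As a hedged illustration of that cloud-native route (service names, region, and database tier below are placeholder assumptions, not from the thread), the whole flow can be scripted around the documented `gcloud` commands:

```python
# Hypothetical helper that shells out to gcloud to deploy a PHP app to
# Cloud Run from source and to create a small MySQL Cloud SQL instance.
# All names and the region are placeholders for illustration only.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Cloud Run builds the container from source (Buildpacks detect a PHP app).
run([
    "gcloud", "run", "deploy", "php-app",
    "--source", ".",
    "--region", "us-central1",
    "--allow-unauthenticated",
])

# A small MySQL instance for the app's database.
run([
    "gcloud", "sql", "instances", "create", "php-app-db",
    "--database-version", "MYSQL_8_0",
    "--tier", "db-f1-micro",
    "--region", "us-central1",
])
```

You would still need to wire the app to the database (connection name, credentials), but compared with the Click to Deploy VM there is no OS, Apache, or MySQL patching left to do.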

OpenShift cloud computing configuration: is it possible to do it entirely in the cloud?

Is it possible to build a big data application in the cloud with Red Hat's PaaS OpenShift? I'm looking into how to build a Scala application with Hadoop (HDFS), Spark, and Apache Mahout in the cloud, but I can't find anything about it. I've seen something with Hortonworks, but nothing clear about how to install it in an OpenShift environment or how to add an HDFS node in the cloud either. Is it possible with OpenShift?
It's possible on Amazon, but my question is: is it possible on OpenShift?
It really depends on what you're ultimately trying to achieve. I know you mention building a big data application on OpenShift with Scala, but what will the application ultimately be doing?
I've gotten Hadoop running in a gear before, but if you want a better example, check out this quickstart to get an idea of how it's done: https://github.com/ryanj/flask-hbase-todos. I know it's not Scala, but here's a good article that will show you how to put together a Scala app: https://www.openshift.com/blogs/building-distributed-and-event-driven-applications-in-java-or-scala-with-akka-on-openshift.
What will the application ultimately be doing?
Forecasting football match results for several football leagues: a web application (Ruby), plus statistical computation and data mining calculations in Scala with Apache frameworks (Spark & Mahout). We get the info via CSV files, process it, and save it in a NoSQL DB (Cassandra). And all of this in the cloud (OpenShift); that's the idea.
I've seen the info at https://github.com/ryanj/flask-hbase-todos. I'll try it this way, but with Scala.
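To illustrate that CSV-to-Cassandra ingestion step concretely (the thread is about Scala, but the flow is identical; this is a minimal Python sketch with the DataStax driver, and the keyspace, table, and CSV columns are invented for illustration):

```python
# Minimal sketch of the described pipeline: read match results from CSV and
# store them in Cassandra. Schema and column names are hypothetical; the
# real schema would come from the forecasting model's needs.
import csv
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])       # placeholder contact point
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS football
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS football.results (
        league text, match_date text, home text, away text,
        home_goals int, away_goals int,
        PRIMARY KEY (league, match_date, home, away)
    )
""")

insert = session.prepare(
    "INSERT INTO football.results "
    "(league, match_date, home, away, home_goals, away_goals) "
    "VALUES (?, ?, ?, ?, ?, ?)"
)
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        session.execute(insert, (
            row["league"], row["date"], row["home"], row["away"],
            int(row["home_goals"]), int(row["away_goals"]),
        ))
```

Once the raw results are in Cassandra, the Spark/Mahout jobs can read from there, and the Ruby web app only ever queries the computed tables.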

Eclipse metadata refresh without opening Eclipse

We are working with various cloud platforms (e.g. Salesforce) and we need to sync with the server every day. I would like to know whether there is a way, on our development box, to synchronize all Eclipse projects through some script without opening the IDE, and then open the IDE without much freezing.
This would enable a clean sync (with the cloud server) and a refresh with local files.
This would enable a refresh (for non-cloud servers).
Would running a little Ant script, or some other kind of script, give us a stable, uniform development environment across all developers?
Any help would be appreciated.
It's going to GREATLY depend on which cloud platforms you are using. HOWEVER, I work with the Salesforce platform. They offer (per their dev docs) an Ant API jar that allows you to write Ant scripts that can essentially check out everything in your org.
Essentially you can use it to check out and check back in pieces and parts of the website. Though this of course only works for SFDC; for other platforms you will need to refer to their APIs or write your own tools.
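For the Salesforce case specifically, a nightly headless sync could shell out to the Ant Migration Tool so the project files on disk are already current before Eclipse opens. A hedged sketch follows; `retrieveCode` is a hypothetical target name for a target in your build.xml that wraps the real `sf:retrieve` task from ant-salesforce.jar, and the workspace path is a placeholder:

```python
# Hypothetical nightly sync: run the Salesforce Ant Migration Tool headlessly
# against every project that has a build.xml, so the Eclipse project
# directories are refreshed on disk without opening the IDE.
import subprocess
from pathlib import Path

WORKSPACE = Path.home() / "workspace"   # placeholder Eclipse workspace path

for project in WORKSPACE.iterdir():
    build_file = project / "build.xml"
    if build_file.exists():
        # `retrieveCode` is an assumed target wrapping `sf:retrieve`.
        subprocess.run(
            ["ant", "-buildfile", str(build_file), "retrieveCode"],
            check=True,
        )

# Eclipse will then pick up the changed files on open if
# Preferences > General > Workspace > "Refresh using native hooks or polling"
# is enabled, avoiding the manual F5 refresh and most of the startup freeze.
```

Run from cron (or a scheduled task) before the workday starts, this gives every developer the same freshly synced workspace without anyone touching the IDE.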