How to create a Job in Kubernetes using the Java API

I am able to create a job in the Kubernetes cluster using the CLI (https://kubernetesbyexample.com/jobs/).
Is there a way to create a job inside the cluster using the Java API?

You can use the Fabric8 Kubernetes Java client to create any object, such as a Job. Referring to the example here:
/*
* Creates a simple run-to-completion job that computes π to 2000 places and prints it out.
*/
public class JobExample {
private static final Logger logger = LoggerFactory.getLogger(JobExample.class);
public static void main(String[] args) {
final ConfigBuilder configBuilder = new ConfigBuilder();
if (args.length > 0) {
configBuilder.withMasterUrl(args[0]);
}
try (KubernetesClient client = new DefaultKubernetesClient(configBuilder.build())) {
final String namespace = "default";
final Job job = new JobBuilder()
.withApiVersion("batch/v1")
.withNewMetadata()
.withName("pi")
.withLabels(Collections.singletonMap("label1", "maximum-length-of-63-characters"))
.withAnnotations(Collections.singletonMap("annotation1", "some-very-long-annotation"))
.endMetadata()
.withNewSpec()
.withNewTemplate()
.withNewSpec()
.addNewContainer()
.withName("pi")
.withImage("perl")
.withArgs("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)")
.endContainer()
.withRestartPolicy("Never")
.endSpec()
.endTemplate()
.endSpec()
.build();
logger.info("Creating job pi.");
client.batch().jobs().inNamespace(namespace).createOrReplace(job);
// Get All pods created by the job
PodList podList = client.pods().inNamespace(namespace).withLabel("job-name", job.getMetadata().getName()).list();
// Wait for pod to complete
client.pods().inNamespace(namespace).withName(podList.getItems().get(0).getMetadata().getName())
.waitUntilCondition(pod -> pod.getStatus().getPhase().equals("Succeeded"), 1, TimeUnit.MINUTES);
// Print Job's log
String joblog = client.batch().jobs().inNamespace(namespace).withName("pi").getLog();
logger.info(joblog);
} catch (KubernetesClientException e) {
logger.error("Unable to create job", e);
} catch (InterruptedException interruptedException) {
logger.warn("Thread interrupted!");
Thread.currentThread().interrupt();
}
}
}

If you want to launch a job from a static YAML manifest from inside the cluster, it is straightforward with the official client library.
This code worked for me.
ApiClient client = ClientBuilder.cluster().build(); //create in-cluster client
Configuration.setDefaultApiClient(client);
BatchV1Api api = new BatchV1Api(client);
V1Job job = (V1Job) Yaml.load(new File("/tmp/template.yaml")); // load the static yaml manifest
ApiResponse<V1Job> response = api.createNamespacedJobWithHttpInfo("default", job, "true", null, null);
You can also modify any part of the job before launching it by combining the getters and setters:
// set metadata-name
job.getMetadata().setName("newName");
// set spec-template-metadata-name
job.getSpec().getTemplate().getMetadata().setName("newName");

Related

Launch JobLaunchRequest for each new file in AWS S3 with Spring Batch Integration

I'm following the docs: Spring Batch Integration combined with Spring Integration AWS for polling AWS S3.
But the batch execution per file is not working in some situations.
The AWS S3 polling is working correctly: when I put a new file in the bucket, or when I start the application and there are already files in the bucket, the application syncs them to the local directory:
@Bean
public S3SessionFactory s3SessionFactory(AmazonS3 pAmazonS3) {
return new S3SessionFactory(pAmazonS3);
}
@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer(S3SessionFactory pS3SessionFactory) {
S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(pS3SessionFactory);
synchronizer.setPreserveTimestamp(true);
synchronizer.setDeleteRemoteFiles(false);
synchronizer.setRemoteDirectory("remote-bucket");
//synchronizer.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "simpleMetadataStore"));
return synchronizer;
}
@Bean
@InboundChannelAdapter(value = IN_CHANNEL_NAME, poller = @Poller(fixedDelay = "30"))
public S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource(
S3InboundFileSynchronizer pS3InboundFileSynchronizer) {
S3InboundFileSynchronizingMessageSource messageSource = new S3InboundFileSynchronizingMessageSource(pS3InboundFileSynchronizer);
messageSource.setAutoCreateLocalDirectory(true);
messageSource.setLocalDirectory(new FileSystemResource("files").getFile());
//messageSource.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "fsSimpleMetadataStore"));
return messageSource;
}
#Bean("s3filesChannel")
public PollableChannel s3FilesChannel() {
return new QueueChannel();
}
I followed the tutorial and created the FileMessageToJobRequest; I won't put the code here because it's the same as in the docs.
Then I created the IntegrationFlow and FileMessageToJobRequest beans:
@Bean
public IntegrationFlow integrationFlow(
S3InboundFileSynchronizingMessageSource pS3InboundFileSynchronizingMessageSource) {
return IntegrationFlows.from(pS3InboundFileSynchronizingMessageSource,
c -> c.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(1)))
.transform(fileMessageToJobRequest())
.handle(jobLaunchingGateway())
.log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
.get();
}
@Bean
public FileMessageToJobRequest fileMessageToJobRequest() {
FileMessageToJobRequest fileMessageToJobRequest = new FileMessageToJobRequest();
fileMessageToJobRequest.setFileParameterName("input.file.name");
fileMessageToJobRequest.setJob(delimitedFileJob);
return fileMessageToJobRequest;
}
So I think the problem is in the JobLaunchingGateway:
If I create it like this:
@Bean
public JobLaunchingGateway jobLaunchingGateway() {
SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
simpleJobLauncher.setJobRepository(jobRepository);
simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
return jobLaunchingGateway;
}
Case 1 (Bucket is empty when the application starts):
I upload a new file to AWS S3;
The polling works and the file appears in the local directory;
But the transform/job isn't fired;
Case 2 (Bucket already has one file when application starts):
The job is launched:
2021-01-12 13:32:34.451 INFO 1955 --- [ask-scheduler-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=arquivoDelimitadoJob]] launched with the following parameters: [{input.file.name=files/FILE1.csv}]
2021-01-12 13:32:34.524 INFO 1955 --- [ask-scheduler-1] o.s.batch.core.job.SimpleStepHandler : Executing step: [delimitedFileJob]
If I add a second file to S3, the job isn't launched, just as in case 1.
Case 3 (Bucket has more than one file):
The files are synchronized correctly into the local directory,
but the job is only executed once, for the last file.
So, following the docs, I changed my gateway to:
@Bean
@ServiceActivator(inputChannel = IN_CHANNEL_NAME, poller = @Poller(fixedRate = "1000"))
public JobLaunchingGateway jobLaunchingGateway() {
SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
simpleJobLauncher.setJobRepository(jobRepository);
simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
//JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(jobLauncher());
JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
//jobLaunchingGateway.setOutputChannel(replyChannel());
jobLaunchingGateway.setOutputChannel(s3FilesChannel());
return jobLaunchingGateway;
}
With this new gateway implementation, if I put a new file in S3 the application reacts but doesn't transform it, giving the error:
Caused by: java.lang.IllegalArgumentException: The payload must be of type JobLaunchRequest. Object of class [java.io.File] must be an instance of class org.springframework.batch.integration.launch.JobLaunchRequest
And if there are two files in the bucket when the app starts, FILE1.csv and FILE2.csv, the job runs correctly for FILE1.csv but gives the error above for FILE2.csv.
What's the correct way to implement something like this?
Just to be clear, I will receive thousands of CSV files in this bucket to read and process with Spring Batch, and I also need to pick up every new file from S3 as soon as possible.
Thanks in advance.
The JobLaunchingGateway indeed expects only a JobLaunchRequest as the payload.
Since you have that @InboundChannelAdapter(value = IN_CHANNEL_NAME, poller = @Poller(fixedDelay = "30")) on the S3InboundFileSynchronizingMessageSource bean definition, it is really wrong to then have @ServiceActivator(inputChannel = IN_CHANNEL_NAME) for that JobLaunchingGateway without a FileMessageToJobRequest transformer in between.
Your integrationFlow looks OK to me, but then you really need to remove that @InboundChannelAdapter from the S3InboundFileSynchronizingMessageSource bean and fully rely on the c.poller() configuration.
Another way is to leave that @InboundChannelAdapter, but then start the IntegrationFlow from the IN_CHANNEL_NAME, not from a MessageSource.
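As a rough sketch of that second option (assuming the fileMessageToJobRequest() and jobLaunchingGateway() beans and the IN_CHANNEL_NAME constant from the question), the flow could start from the channel like this:
@Bean
public IntegrationFlow integrationFlowFromChannel() {
    // Start from the channel fed by the @InboundChannelAdapter,
    // transform File -> JobLaunchRequest, then launch the job.
    return IntegrationFlows.from(IN_CHANNEL_NAME)
            .transform(fileMessageToJobRequest())
            .handle(jobLaunchingGateway())
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .get();
}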
Since you have several pollers against the same S3 source, and both of them are based on the same local directory, it is not a surprise to see so many unexpected situations.

How to get a Slack notification on change of Kubernetes pod status?

How can I get a Slack notification whenever a k8s pod's status changes? I can't use kube bots, as they're not allowed in my organisation.
You can use "Alertmanager" from the Prometheus stack for such notifications.
Once you have the Prometheus stack up and running, you can configure custom alerts based on any property of Kubernetes objects and forward them to Slack:
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/alerting.md
Updated:
In case you can't deploy any external tool, you could write a simple shell script that gets the pod status via kubectl.
Something like:
kubectl get pods mypod -ojson | jq .status.phase
You can poll this command and use a Slack incoming webhook to send a notification when the value changes.
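If a shell script is not an option either, the same poll-and-notify idea can be sketched with the Fabric8 Java client used earlier in this thread plus a plain HTTP POST to a Slack incoming webhook (the webhook URL, namespace, and pod name below are placeholders):
// Rough Java sketch: poll one pod's phase and post to a Slack incoming
// webhook when the phase changes. Not a drop-in replacement for the
// shell-script approach above; adjust names and error handling to taste.
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PodPhaseNotifier {
    public static void main(String[] args) throws Exception {
        String webhookUrl = "https://hooks.slack.com/services/XXX/YYY/ZZZ"; // placeholder
        String lastPhase = null;
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            while (true) {
                String phase = client.pods().inNamespace("default").withName("mypod")
                        .get().getStatus().getPhase();
                if (lastPhase != null && !lastPhase.equals(phase)) {
                    String body = "{\"text\":\"Pod mypod changed phase: " + lastPhase + " -> " + phase + "\"}";
                    HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(body))
                            .build();
                    HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
                }
                lastPhase = phase;
                Thread.sleep(30_000); // poll every 30 seconds
            }
        }
    }
}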
I implemented a solution using the Kubernetes API:
https://github.com/kubernetes-client/csharp/tree/master/examples/watch
Basically it checks for pods that are not running and notifies via a Microsoft Teams webhook. It also notifies when a pod that was initially not running comes back to Running status again (a recovered pod).
The C# code snippet below shows the Main and Notify functions.
static async Task Main(string[] args)
{
// Load from the default kubeconfig on the machine.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
// Use the config object to create a client.
var client = new Kubernetes(config);
try
{
var podlistResp = await client.ListNamespacedPodWithHttpMessagesAsync(Namespace, watch: true);
using (podlistResp.Watch<V1Pod, V1PodList>(async (type, item) =>
{
Console.WriteLine(type);
Console.WriteLine("==on watch event==");
var message = $"Namespace: {Namespace} Pod: {item.Metadata.Name} Type: {type} Phase:{item.Status.Phase}";
var remessage = $"Namespace: {Namespace} Pod: {item.Metadata.Name} Type: {type} back to Phase:{item.Status.Phase}";
Console.WriteLine(message);
if (!item.Status.Phase.Equals("Running") && !item.Status.Phase.Equals("Succeeded"))
{ Console.WriteLine("==on watch event==");
await Notify(message);
Console.WriteLine("==on watch event==");
}
if ( type== WatchEventType.Modified && item.Status.Phase.Equals("Running") )
{ Console.WriteLine("==on watch event==");
await Notify(remessage);
Console.WriteLine("==on watch event==");
}
}))
{
Console.WriteLine("press ctrl + c to stop watching");
var ctrlc = new ManualResetEventSlim(false);
Console.CancelKeyPress += (sender, eventArgs) => ctrlc.Set();
ctrlc.Wait();
}
}
catch (System.Exception ex)
{
Console.Error.WriteLine($"An error happened Message: {ex.Message}", ex);
}
}
private static async Task Notify(string message)
{
using (var client = new HttpClient())
{
client.BaseAddress = new Uri("https://outlook.office.com");
var body = new { text = message };
var content = new StringContent(JsonConvert.SerializeObject(body));
var result = await client.PostAsync("https://outlook.office.com/webhook/xxxx/IncomingWebhook/xxx", content);
result.EnsureSuccessStatusCode();
}
}
You can try to use kwatch, which sends Slack notifications on crashes -
https://github.com/abahmed/kwatch

pgjdbc-ng Ill-formed region:

Currently I'm trying to make a module which listens for changes via a trigger on Postgres. I'm using pgjdbc-ng version 0.8.2; I downloaded the JAR from Maven Central and added it as a project reference.
Following is the code that I used:
public class ListenNotify
{
// Create the queue that will be shared by the producer and consumer
private BlockingQueue queue = new ArrayBlockingQueue(10);
// Database connection
PGConnection connection;
public ListenNotify()
{
// Get database info from environment variables
/*
String DBHost = System.getenv("DBHost");
String DBName = System.getenv("DBName");
String DBUserName = System.getenv("DBUserName");
String DBPassword = System.getenv("DBPassword");
*/
String DBHost = "127.0.0.1";
String DBName = "dbname";
String DBUserName = "postgres";
String DBPassword = "postgres";
// Create the listener callback
PGNotificationListener listener = new PGNotificationListener()
{
@Override
public void notification(int processId, String channelName, String payload)
{
// Add event and payload to the queue
queue.add("/channels/" + channelName + " " + payload);
}
};
try
{
// Create a data source for logging into the db
PGDataSource dataSource = new PGDataSource();
dataSource.setHost(DBHost);
dataSource.setPort(5432);
dataSource.setDatabaseName(DBName);
dataSource.setUser(DBUserName);
dataSource.setPassword(DBPassword);
// Log into the db
connection = (PGConnection) dataSource.getConnection();
// add the callback listener created earlier to the connection
connection.addNotificationListener(listener);
// Tell Postgres to send NOTIFY q_event to our connection and listener
Statement statement = connection.createStatement();
statement.execute("LISTEN q_event");
statement.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
/**
* @return shared queue
*/
public BlockingQueue getQueue()
{
return queue;
}
/**
*
* main entry point
*
* @param args
*/
public static void main(String[] args)
{
// Create a new listener
ListenNotify ln = new ListenNotify();
// Get the shared queue
BlockingQueue queue = ln.getQueue();
// Loop forever pulling messages off the queue
while (true)
{
try
{
// queue blocks until something is placed on it
String msg = queue.take().toString();
// Do something with the event
System.out.println(msg);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
}
Upon running, I got the exception:
Ill-formed region: Indonesia [at index 0]
I have read on the official Git repository that it should be fixed in some release.
How do I apply that fix?
Thank you
I know it's a little bit late ;)
I had the same problem and also read that the problem was solved. But it does not seem that way.
Anyway, the problem is this: when the Postgres database was created, LC_COLLATE was probably set to Indonesian_Indonesia.1252. When the driver tries to establish a connection, this value is compared against the Java locales, and because the entry is stored in your language in the Java Locale class, it cannot be found. To work around the problem you can set the default Java locale to English. This is certainly not the best way to solve it, but it works. To be safe, I would restore the original default after the connection is established.
You can set the default value as follows:
Locale.setDefault(Locale.ENGLISH)
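A minimal sketch of that save-and-restore approach, applied around the getConnection() call from the question (assuming java.util.Locale is imported):
// Temporarily switch the default locale to English while connecting,
// then restore the original default afterwards.
Locale originalLocale = Locale.getDefault();
Locale.setDefault(Locale.ENGLISH);
try
{
    connection = (PGConnection) dataSource.getConnection();
    connection.addNotificationListener(listener);
}
finally
{
    Locale.setDefault(originalLocale);
}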

Is RESTEasy RegisterBuiltin.register necessary when using ClientResponse<T>

I am developing a REST client using the JBoss app server and RESTEasy 2.3.6. I've included the following line at the beginning of my code:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
Here's the rest of the snippet:
RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getCredentialsProvider().setCredentials(
new AuthScope(host, port, AuthScope.ANY_REALM), new UsernamePasswordCredentials(userid,password));
ClientExecutor executor = createAuthenticatingExecutor(httpclient, host, port);
String uriTemplate = "http://myhost:8080/webapp/rest/MySearch";
ClientRequest request = new ClientRequest(uriTemplate, executor);
request.accept("application/json").queryParameter("query", searchArg);
ClientResponse<SearchResponse> response = null;
List<MyClass> values = null;
try
{
response = request.get(SearchResponse.class);
if (response.getResponseStatus().getStatusCode() != 200)
{
throw new Exception("REST GET failed");
}
SearchResponse searchResp = response.getEntity();
values = searchResp.getValue();
}
catch (ClientResponseFailure e)
{
log.error("REST call failed", e);
}
finally
{
response.releaseConnection();
}
private ClientExecutor createAuthenticatingExecutor(DefaultHttpClient client, String server, int port)
{
// Create AuthCache instance
AuthCache authCache = new BasicAuthCache();
// Generate BASIC scheme object and add it to the local auth cache
BasicScheme basicAuth = new BasicScheme();
HttpHost targetHost = new HttpHost(server, port);
authCache.put(targetHost, basicAuth);
// Add AuthCache to the execution context
BasicHttpContext localContext = new BasicHttpContext();
localContext.setAttribute(ClientContext.AUTH_CACHE, authCache);
// Create ClientExecutor.
ApacheHttpClient4Executor executor = new ApacheHttpClient4Executor(client, localContext);
return executor;
}
The above is a fairly simple client that employs the ClientRequest/ClientResponse<T> technique, as documented here. The above code does work (I only left out some trivial variable declarations like host and port). It is unclear to me from the JBoss documentation whether I need to run RegisterBuiltin.register first. If I remove the line completely, my code still functions. Do I really need to include the register call given the approach I have taken? The docs say I need to run it once per VM. Secondly, if I am required to call it, is it safe to call it more than once in the same VM?
NOTE: I do understand there are newer versions of RESTEasy for JBoss; we are not there yet.

Pax Exam how to start multiple containers

For a project I'm working on, we need to write Pax Exam integration tests which run over multiple Karaf containers.
The idea would be to find a way to extend/configure Pax Exam to start up one or more Karaf containers, deploy a bunch of bundles there, and then start the test Karaf container which will test the functionality.
We need this for performance tests and other verifications.
Does anyone know anything about that? Is it actually possible in Pax Exam?
I'm answering my own question, after having found this interesting article.
In particular, have a look at the sections "Using the Karaf Shell" and "Distributed integration tests in Karaf":
http://planet.jboss.org/post/advanced_integration_testing_with_pax_exam_karaf
This is basically what the article says:
First of all, you have to change the test probe header to allow dynamic package imports:
@ProbeBuilder
public TestProbeBuilder probeConfiguration(TestProbeBuilder probe) {
probe.setHeader(Constants.DYNAMICIMPORT_PACKAGE, "*;status=provisional");
return probe;
}
After that, the article suggests the following code, which can execute commands in the Karaf shell:
@Inject
CommandProcessor commandProcessor;
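// Note (not in the original article snippet): the method below also assumes an
// ExecutorService field named "executor" and a COMMAND_TIMEOUT constant defined
// elsewhere in the test class.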
protected String executeCommands(final String ...commands) {
String response;
final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
final PrintStream printStream = new PrintStream(byteArrayOutputStream);
final CommandSession commandSession = commandProcessor.createSession(System.in, printStream, System.err);
FutureTask<String> commandFuture = new FutureTask<String>(
new Callable<String>() {
public String call() {
try {
for(String command:commands) {
System.err.println(command);
commandSession.execute(command);
}
} catch (Exception e) {
e.printStackTrace(System.err);
}
return byteArrayOutputStream.toString();
}
});
try {
executor.submit(commandFuture);
response = commandFuture.get(COMMAND_TIMEOUT, TimeUnit.MILLISECONDS);
} catch (Exception e) {
e.printStackTrace(System.err);
response = "SHELL COMMAND TIMED OUT: ";
}
return response;
}
Then the rest is fairly trivial: you will have to implement a layer able to start up the child Karaf instances:
public void createInstances() {
//Install broker feature that is provided by FuseESB
executeCommands("admin:create --feature broker brokerChildInstance");
//Install producer feature that provided by imaginary feature repo.
executeCommands("admin:create --featureURL mvn:imaginary/repo/1.0/xml/features --feature producer producerChildInstance");
//Install producer feature that provided by imaginary feature repo.
executeCommands("admin:create --featureURL mvn:imaginary/repo/1.0/xml/features --feature consumer consumerChildInstance");
//start child instances
executeCommands("admin:start brokerChildInstance");
executeCommands("admin:start producerChildInstance");
executeCommands("admin:start consumerChildInstance");
//You will need to destroy the child instances once you are done.
//Using @After seems the right place to do that.
}
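For completeness, a rough sketch of the corresponding teardown (assuming the executeCommands helper above and the standard Karaf admin commands; adjust the instance names to your setup):
@After
public void destroyInstances() {
    // Stop and remove the child instances created in createInstances()
    executeCommands("admin:stop brokerChildInstance");
    executeCommands("admin:stop producerChildInstance");
    executeCommands("admin:stop consumerChildInstance");
    executeCommands("admin:destroy brokerChildInstance");
    executeCommands("admin:destroy producerChildInstance");
    executeCommands("admin:destroy consumerChildInstance");
}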