I have a task deployed to a Spring Cloud Data Flow server using the UI.
I execute the task a couple of times, each time adding a new platform property through the Parameters box so its content looks like this before I launch the task for the second time:
app.param1=a
app.param2=b
For the third execution of the same task, I would like to remove the platform parameters. I try to do this by editing the content of the Parameters box so it is empty before I launch the task. However, on checking the third execution details, I find that the platform parameters have been retained:
[Screenshot: Third task execution information]
My question is, is it possible to remove parameters (platform properties) from such a task using this UI?
I want to create a change request using scripts and not from the GUI page. How can I achieve that? Also, there is a single sign-on check in my organization too.
A few options:
Create a Scripted REST API - this exposes a URL which can be called by an external system. Tricky if you are new to ServiceNow.
As #Rafay suggests - set up a scheduled event. Check out https://community.servicenow.com/community?id=community_question&sys_id=b32c0765db9cdbc01dcaf3231f961984
Scheduled events can run scripts, generate reports, or create records from templates.
Templates can create tasks without any coding - this may be easier if it meets your requirements. They can be created via the UI and then scheduled to generate records (e.g. a regular server patching activity).
I have a small project that uses Redis for task queue purposes. Here is how it basically works.
I have two components in the system: a desktop client (there can be more than one) and a server-side app. The server-side app has a pool of tasks for the desktop client(s). When a client connects, the first available task from the pool is given to it. As each task has an id, when the desktop client gets back with the results, the server-side app can recognize the task by its id. Basically, I do the following in Redis:
Keep all the tasks as objects.
Keep the queue (pool) of task ids in several lists: queue, provided, processing.
When a task is being provided to the desktop client, I use RPOPLPUSH in Redis to move the id from the queue list to the provided list.
When I get a response from the desktop client, I use LREM to remove the given task id from the provided list (if it fails, I received a task that was not provided, was already processed, or just never existed - so I break the execution). Then I use LPUSH to add the task id to the processing list. Given that I have unique task ids (controlled at the level of my app), I avoid duplicates in the Redis lists.
When the task is finished (the result received from the desktop client is processed and saved somewhere), I remove the task from the processing list and delete the task object from Redis.
If anything goes wrong at any step (e.g. the task gets stuck in the processing or provided list), I can move the task back to the queue list and re-process it.
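A minimal sketch of this flow in Java, assuming the Jedis client; the list names follow the description above, while the class, method names and task key format are purely illustrative:

```java
import redis.clients.jedis.Jedis;

// Sketch of the queue -> provided -> processing flow described above.
public class TaskFlow {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Hand the next task to a desktop client: atomically move its id
    // from "queue" to "provided" and return it (null if the queue is empty).
    public String provideTask() {
        return jedis.rpoplpush("queue", "provided");
    }

    // A client reported back: verify the id really was provided,
    // then move it to "processing".
    public boolean startProcessing(String taskId) {
        if (jedis.lrem("provided", 1, taskId) == 0) {
            return false; // not provided, already processed, or never existed
        }
        jedis.lpush("processing", taskId);
        return true;
    }

    // Task finished: drop it from "processing" and delete the task object.
    public void finishTask(String taskId) {
        jedis.lrem("processing", 1, taskId);
        jedis.del("task:" + taskId);
    }
}
```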
Now, the question: is it possible to do something similar in Apache Kafka? I do not need the exact behavior I have in Redis - all I need is to be able to provide a task to the desktop client (it shouldn't be possible to provide the same task twice) and mark/change its state according to the actual processing status (new, provided, processing), so that I can control the process and restore tasks that were not processed due to some problem. If it's possible, could anyone please describe the applicable workflow?
It is possible for Kafka to act as a standard queue. Check out the consumer group feature.
If the question is about appropriateness, please also refer to Is Apache Kafka appropriate for use as a task queue?
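To illustrate the consumer group idea, here is a rough Java sketch, not a drop-in solution: every worker joins the same group, so each record on the topic is delivered to only one of them, and offsets are committed only after the task has been handled. The broker address, topic name, and group id are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TaskWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "task-workers");          // all workers share this group
        props.put("enable.auto.commit", "false");       // commit only after the task is done
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("tasks"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    processTask(record.key(), record.value()); // your task handling
                }
                consumer.commitSync(); // mark the polled batch as processed
            }
        }
    }

    private static void processTask(String taskId, String payload) {
        // ...
    }
}
```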
We are using Kafka as a task queue; one of the considerations that went in favor of Kafka was that it was already in our application ecosystem, and we found that easier than adding one more component.
In a Microsoft Dynamics CRM plug-in, why is the Pre-Operation stage of the event execution pipeline used for the "Update" message when adding a step in a plug-in? Could anyone elaborate on this?
The plugin pipeline includes the following stages: pre-validation, pre-operation, and post-operation.
Apart from a couple of exceptions, these stages are always available. So for the update message, the pre-operation stage is there because that is how the product is designed to work.
In an update message, the pre-operation stage could be used, for example, to:
Stop plugin execution by throwing an exception.
Inspect values of the record before they are changed.
Alter the plugin Target object to change the update applied to the record.
MSDN elaborates quite a bit about it: see Event Execution Pipeline.
Abstract (copy-pasted from the linked page):
The Microsoft Dynamics CRM event processing subsystem executes plug-ins based on a message pipeline execution model. A user action in the Microsoft Dynamics CRM Web application or an SDK method call by a plug-in or other application results in a message being sent to the organization Web service. The message contains business entity information and core operation information. The message is passed through the event execution pipeline where it can be read or modified by the platform core operation and any registered plug-ins.
I've converted a console app into a scheduled WebJob. All is working well, but I'm having a little trouble figuring out how to accomplish the error logging/emailing I'd like to have.
1.) I am using Console.WriteLine and Console.Error.WriteLine to create log messages. I see these displayed in the portal when I go to WebJob Run Details. Is there any way to have these logs saved to files somewhere? I added my storage account connection string as AzureWebJobsDashboard and AzureWebJobsStorage. But this appears to have just created an "azure-webjobs-dashboard" blob container that only has a "version" file in it.
2.) Is there a way to get line numbers to show up for exceptions in the WebJob log?
3.) What is the best way to send emails from within the WebJob console app? For example, if a certain condition occurs, I may want to have it send me and/or someone else (depending on what the condition is) an email along with logging the condition using Console.WriteLine or Console.Error.WriteLine. I've seen info on triggering emails via a queue or triggering emails on job failure, but what is the best way to just send an email directly in your console app code when it's running as a WebJob?
How is your job being scheduled? It sounds like you're using the WebJobs SDK - are you using the TimerTrigger for scheduling (from the Extensions library)? That extensions library also contains a new SendGrid binding that you can use to send emails from your job functions. We plan on expanding on that to also facilitate failure notifications like you describe, but it's not there yet. Nothing stops you from building something yourself however, using the new JobHostConfiguration.Tracing.Trace to plug in your own TraceWriter that you can use to catch errors/warnings and act as you see fit. All of this is in the beta1 pre-release.
Using that approach of plugging in a custom TraceWriter, I've been thinking of writing one that allows you to specify an error threshold/sliding window, and if the error rate exceeds that threshold, an email or other notification will be sent. All the pieces are there for this; I just haven't done it yet :)
Regarding logging, the job logs (including your Console.WriteLines) are actually written to disk in your Web App (details here). You should be able to see them if you browse your site log directory. However, if you're using the SDK and Dashboard, you can also use the TextWriter/TraceWriter bindings for logging. These logs will be written to your storage account and will show up in the Dashboard Functions page per invocation. Here's an example.
Logs to files: You can use a custom TraceWriter https://gist.github.com/aaronhoffman/3e319cf519eb8bf76c8f3e4fa6f1b4ae
Exception Stack Trace Line Numbers: You will need to make sure your project is built with debug info set to "full" (more info http://aaron-hoffman.blogspot.com/2016/07/get-line-numbers-in-exception-stack.html)
Sending email: SendGrid, Amazon Simple Email Service (SES), etc.
I am writing a manager program in an RCP way with Eclipse, so I want to create a "command center" job which will run until the game is over. It will get input from views and editors, or via a socket channel managed by another job that handles remote servers'/clients' requests, and vice versa. But I do not know how to do it. So, as a summary, I have two problems:
How does a job communicate with a UI part of Eclipse?
How does a job communicate with another job?
I do not think that an Eclipse Job is well suited for this purpose, because jobs are basically meant for elementary but long-running tasks.
I would implement the controller/"command center" you require as a view that the user can use to control the game. In this case, the view can communicate with the internal model using, e.g., the Data Binding API, and with other views using the Selection service.
Or, if you would like to control your application automatically in the background, you could create different event listeners that spawn small jobs to read/write the application's data model.
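If you do go the Job route, here is a rough sketch of both communication directions; the class name, the shared queue, and the handler are placeholders for whatever your "command center" actually uses. Other jobs (e.g. the socket job) can hand commands to this job through an ordinary thread-safe structure, and results are pushed back to the UI thread with Display.asyncExec:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.swt.widgets.Display;

public class CommandCenterJob extends Job {

    // Job-to-job communication: other jobs simply offer commands to this queue.
    private final BlockingQueue<String> commands = new LinkedBlockingQueue<>();

    public CommandCenterJob() {
        super("Command Center");
    }

    public void submit(String command) {
        commands.offer(command);
    }

    @Override
    protected IStatus run(IProgressMonitor monitor) {
        while (!monitor.isCanceled()) {
            String command;
            try {
                command = commands.take();
            } catch (InterruptedException e) {
                return Status.CANCEL_STATUS;
            }
            String result = handle(command);
            // Job-to-UI communication: SWT widgets may only be touched on the
            // UI thread, so hand the result over with Display.asyncExec.
            Display.getDefault().asyncExec(() -> updateView(result));
        }
        return Status.OK_STATUS;
    }

    private String handle(String command) {
        return "handled: " + command;
    }

    private void updateView(String result) {
        // e.g. statusLabel.setText(result);
    }
}
```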