Talend Administration Center: linking a Job to a project

I'm trying to create a Project and a Task in TAC using the MetaServletCaller.bat file.
I'm able to create a project using the bat file, but I can't figure out how to link or assign jobs to that project.
How can I create a project together with its jobs using the MetaServletCaller.bat file?

The Talend MetaServlet API doesn't provide any command for creating a job from an export file. The only way to do this is in Talend Studio, or programmatically using the CommandLine importItems command, which imports an exported job (while logged in to the project):
importItems source (dir|.zip)        imports items
  -if (--item-filter) filterExpr     item filter expression
  -im (--implicit)                   import implicit
  -o  (--overwrite)                  overwrite existing items
  -s  (--status)                     import the status
  -sl (--statslogs)                  import stats & logs params
You can find the commandline API reference here.
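For example, a typical CommandLine session to push an exported job into the newly created project could look like the sketch below. The host, project name, user, and archive path are placeholders, and exact flags vary between Talend versions, so treat this as an assumption and check your own CommandLine help output:

    initRemote http://tac-host:8080/org.talend.administrator
    logonProject -pn MYPROJECT -ul admin@company.com
    importItems /tmp/MyExportedJob.zip -o

The -o flag corresponds to the --overwrite option listed above, so re-running the import replaces existing items instead of failing.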

Related

Airflow duplicated plugin entries in virtualenv

I was setting up Airflow (2.1.4) in a virtual environment, followed by an install of a third-party plugin: pip install simple-dag-editor.
The plugin installed successfully, but upon checking the plugin list there were duplicated entries.
(venv) root@test-server:/opt/airflow$ airflow plugins
name | source | flask_blueprints | appbuilder_views
==================+============================================================+=======================================================+=============================================================
simple_dag_editor | simple-dag-editor==0.1.1: | <flask.blueprints.Blueprint object at 0x7f69e5e427b8> | {'category': 'Admin', 'name': 'Simple DAG editor', 'view':
| EntryPoint(name='simple_dag_editor', value='simple_dag_edi | | <simple_dag_editor.app_builder_view.AppBuilderDagEditorView
| tor.simple_dag_editor:SimpleDagEditor', | | object at 0x7f69e5dd1470>}
| group='airflow.plugins') | |
simple_dag_editor | simple-dag-editor==0.1.1: | <flask.blueprints.Blueprint object at 0x7f69e5e427b8> | {'category': 'Admin', 'name': 'Simple DAG editor', 'view':
| EntryPoint(name='simple_dag_editor', value='simple_dag_edi | | <simple_dag_editor.app_builder_view.AppBuilderDagEditorView
| tor.simple_dag_editor:SimpleDagEditor', | | object at 0x7f69e5dd1470>}
| group='airflow.plugins') | |
The Airflow web UI also showed two entries in the "Admin" section.
Any idea what is happening? I tested the setup again, both in a Docker container and standalone on the server. Neither instance resulted in duplicated entries, so I suspect it is related to running Airflow in a Python virtual environment. The server is running CentOS 7.
I believe you might have the plugin installed twice, in two different places:
- In the "plugins" folder, as a plain Python package
- Installed as a Python package (discovered via an entrypoint)
Airflow allows both types of installation, and if you have both, it will load both.
If you change the airflow log level to verbose, you should be able to see two entries:
"Loading plugins from entrypoints"
"Loading plugins from directory: <DIRECTORY>"
They should be followed by attempts to import the plugins.
The solution would be to remove the plugins from the "plugins" directory.
It's also possible that you have two packages with the same entrypoint: for example, if you previously installed the plugin under a different package name, it could be discovered twice. Airflow scans all available packages and loads any that declare the appropriate entrypoint as a plugin. If you enable the DEBUG logging level, you should see the details. You can easily set the Airflow logging level via a config option (or environment variable):
https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#logging-level
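For example, in airflow.cfg the option sits in the [logging] section, or it can be set through the matching environment variable:

    [logging]
    logging_level = DEBUG

    # or, equivalently:
    export AIRFLOW__LOGGING__LOGGING_LEVEL=DEBUG

With DEBUG enabled, the scheduler and webserver logs show where each plugin was discovered, which tells you which copy to remove.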

VS Code Extension Settings CLI

I want to create an automated script for setting up VS Code.
Part of this is installing the extensions and configuring them as necessary.
I was able to install the extensions via the CLI, but I can't find a way to change the extension settings using only the command line.
For example - I want to change Jest Runner settings. I found this on their readme:
Jest Runner will work out of the box, with a valid Jest config.
If you have a custom setup use the following options to configure Jest Runner:
| Command | Description |
| --- | --- |
| jestrunner.configPath | Jest config path (relative to ${workFolder} e.g. jest-config.json) |
| jestrunner.jestPath | Absolute path to jest bin file (e.g. /usr/lib/node_modules/jest/bin/jest.js) |
| jestrunner.debugOptions | Add or overwrite vscode debug configurations (only in debug mode) (e.g. `"jestrunner.debugOptions": { "args": ["--no-cache"] }`) |
| jestrunner.runOptions | Add CLI Options to the Jest Command (e.g. `"jestrunner.runOptions": ["--coverage", "--colors"]`) https://jestjs.io/docs/en/cli |
| jestrunner.jestCommand | Define an alternative Jest command (e.g. for Create React App and similar abstractions) |
| jestrunner.disableCodeLens | Disable the CodeLens feature |
| jestrunner.codeLensSelector | CodeLens will be shown on files matching this pattern (default: **/*.{test,spec}.{js,jsx,ts,tsx}) |
But I don't know how to change these settings from the command line.
Any thoughts on how to do this?
Thanks!
I was able to find a solution.
It turns out that the settings are actually stored in:
<userFolder>\AppData\Roaming\Code\User\settings.json
From there I can open the JSON file and add the settings specified in the extension's readme.
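For example, the Jest Runner options from the table above go straight into that file as top-level keys (the values here are placeholders):

    {
        "jestrunner.configPath": "jest-config.json",
        "jestrunner.runOptions": ["--coverage", "--colors"]
    }

Since it is plain JSON, a setup script can merge these entries in with any JSON-aware tool instead of editing the file by hand.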

reading from configuration database inside of data factory

I need to create a Data Factory pipeline to move data from SFTP to blob storage. At this point I'm only doing a POC, and I'd like to know how I would read configuration settings and kick off my pipeline based on those settings.
An example of the config settings would be (note that there would be around 1,000 of these):
+--------------------+-----------+-----------+------------------------+----------------+
| sftp server | sftp user | sftp pass | blob dest | interval |
+--------------------+-----------+-----------+------------------------+----------------+
| sftp.microsoft.com | alex | myPass | /myContainer/destroot1 | every 12 hours |
+--------------------+-----------+-----------+------------------------+----------------+
How do you kick off a pipeline using some external configuration file/store?
Take a look at the Lookup activity and parameterized linked services.
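For instance, a Lookup activity can read the rows of your config store, and its output can drive a ForEach over the copy steps. A minimal sketch of the activity JSON, where the activity name, dataset name, query, and source type are all assumptions to adapt to your setup:

    {
        "name": "GetSftpConfig",
        "type": "Lookup",
        "typeProperties": {
            "source": {
                "type": "AzureSqlSource",
                "sqlReaderQuery": "SELECT sftp_server, sftp_user, blob_dest, interval FROM SftpConfig"
            },
            "dataset": { "referenceName": "ConfigDataset", "type": "DatasetReference" },
            "firstRowOnly": false
        }
    }

Downstream activities can then reference the rows with an expression such as @activity('GetSftpConfig').output.value, passing each row's fields into parameterized linked services for the SFTP and blob connections.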

Data Driven testing with cucumber protractor

Let's say I have a scenario in my demo.feature file:
Scenario Outline: Gather and load all submenus
Given I will login using <username> and <password>
When I will click all links
Examples:
| username | password |
| user1 | pass1 |
| use2 | pass2 |
Let's say I have a file called users.json.
How can I get those usernames and passwords from that external file into my demo.feature?
Can I catch the file by passing a parameter to my npm script, like below?
npm run cucumber -- --params.environment.file=usernames.json
I recommend having the login step read that JSON file inside the step definition (see the sketch after the list below). Just make sure not to check the file into the repository; instead, expect it to exist at a known local path.
Doing the above is useful for a couple of reasons:
- An engineer running your tests does not need to know that a param must be passed in from the command line
- The code is self-descriptive in that step as to how it logs in
- You can add better error handling
- You can use multiple user files if need be, by having hooks define paths etc. based on tags
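A minimal sketch of such a step definition in TypeScript; the file location, the users.json shape, and the login helper are all assumptions:

    // steps/login.steps.ts
    import { Given } from '@cucumber/cucumber';
    import * as fs from 'fs';
    import * as path from 'path';

    // users.json is expected locally, outside version control,
    // e.g. { "user1": "pass1", "use2": "pass2" }
    const users: Record<string, string> = JSON.parse(
      fs.readFileSync(path.resolve(__dirname, '../users.json'), 'utf8')
    );

    // Hypothetical login helper; replace with your real Protractor page object.
    async function login(username: string, password: string): Promise<void> {
      // e.g. await browser.get('/login'); then fill the form and submit
    }

    Given('I will login using {word} and {word}', async (username: string, _pass: string) => {
      // Ignore the password placeholder from the feature file and
      // look the real secret up in the local file instead.
      await login(username, users[username]);
    });

This keeps the feature file free of secrets: the Examples table only carries usernames as keys, and the step resolves the actual credentials at runtime.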

How to create a Chrome extension that can run custom Katalon scripts?

I am looking for a way to build a custom Chrome extension that runs Katalon scripts.
From their main page, I am not sure how to process and execute commands like the recorded output below:
| open  | https://katalon-test.s3.amazonaws.com/demo-aut/dist/html/form.html |       |
| click | id=first-name                                                      |       |
| type  | id=first-name                                                      | Alex  |
| type  | id=last-name                                                       | Smith |