IBM API Connect - Can custom connectors be exposed via UI? - ibm-cloud

In SLC ARC the list of connectors available via the UI (when creating datasources and thus generating models) was hard-coded (link to overview of issue). Does the same hold true for API Connect?
Effectively, I'd like to create a fork of the mssql connector to address some issues with how schemas are processed when generating models from existing tables. If I create such a connector, will I be able to install it so that I can use it via the GUI? (Again, I could not in SLC ARC due to the hard-coding.) Any help is greatly appreciated!
EDIT: I've installed the loopback-connector-redis connector into a throwaway project. When I spin up APIC, it does not appear on the data sources screen. So, rephrasing my question: are there settings or other mechanisms that would allow such connectors to be included? Ideally, APIC would scan my project, determine which connectors I have installed, and expose those.

As you've seen, the list is currently fixed and doesn't detect additional installed connectors.
If you want to use your own custom connector, create a new datasource using the API Designer, select the MSSQL connector, and fill in the values as usual.
Next, you'll need to edit a file on your system to point the datasource at your connector.
In your project directory, open ./server/datasources.json and you should see the datasource you just created. Change the connector value to the name of the custom connector you created, save, and continue developing your APIs as normal.
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "DB2 Customers": {
    "host": "database.acme-air.com",
    "port": 50000,
    "database": "customers",
    "password": "",
    "name": "Customer DB",
    "connector": "db2-custom",
    "user": "mhamann@us.ibm.com"
  }
}
Unfortunately, you're now on your own in terms of managing datasources, as they won't show up in the Designer's datasource editor. They will still be usable in other parts of the Designer, so you can connect up your models, etc.
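For example, pointing a model at that datasource is still just an edit to ./server/model-config.json. A minimal sketch, where Customer is a placeholder model name and the dataSource value must match the datasource key used in datasources.json:
{
  "Customer": {
    "dataSource": "DB2 Customers",
    "public": true
  }
}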

Related

KrakenD Grafana dashboard shows no data

I am using krakend-ce 1.4 and InfluxDB 1.x.
I have configured the Grafana dashboard and was hoping to see dashboard panels for all the layers.
There are 4 layers as per the dashboard:
Router
Proxy
Backend
System
I see the router panels' data being charted as expected, but the other panels show empty charts ("No Data to show").
My configuration for the krakend-metrics and influx modules is as follows:
"github_com/devopsfaith/krakend-metrics": {
"collection_time": "30s",
"proxy_disabled": false,
"router_disabled": false,
"backend_disabled": false,
"endpoint_disabled": false,
"listen_address": "127.0.0.1:8090"
},
"github_com/letgoapp/krakend-influx":{
"address":"http://influxdb-service:80",
"ttl":"25s",
"buffer_size":0,
"db": "krakend",
"username": "admin",
"password": "adminadmin"
}
I also added opencensus as follows:
"github_com/devopsfaith/krakend-opencensus": {
"sample_rate": 100,
"reporting_period": 1,
"enabled_layers": {
"backend": true,
"router": true,
"pipe": true
},
"influxdb": {
"address": "http://influxdb-service:80",
"db": "krakend",
"timeout": "1s",
"username": "admin",
"password": "adminadmin"
}
}
I thought maybe my data was not ending up in InfluxDB, so I went in and checked what it has. show measurements returns a list of measurements, and all of them do contain some data.
I am using Grafana dashboard ID 5722, which is specified in the docs.
How can I change my setup so that the panels for proxy, backend, and system show charts?
__________________________
UPDATE
I upgraded KrakenD to version 2.1.
Removed the OpenCensus metrics exporter.
Now using dashboard 15029, per the KrakenD 2.1.2 documentation.
I still do not see the other layers' charts getting populated.
PS: I checked which metrics are exposed on http://krakend-host:8090/__stats and I do see layer.backend and layer.pipe metrics.
__________________________
UPDATE 2
I was also checking for other available dashboards that might work. I stumbled upon this one: https://github.com/letgoapp/krakend-influx/blob/master/examples/grafana-dashboard.json
With it I see 2 more panels showing up, but not all of them.
The Grafana dashboards published by KrakenD work only with the native InfluxDB exporter (github_com/letgoapp/krakend-influx) and NOT with the OpenCensus one.
You should delete the github_com/devopsfaith/krakend-opencensus configuration block; otherwise KrakenD reports the same metrics twice, in different measurements.
The InfluxDB port is usually 8086 (http://influxdb-service:8086), but you show port 80 in your settings; make sure these are not old settings. If you have deliberately changed the port, that's fine.
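Putting those two points together, and assuming the metric namespaces sit under the root-level extra_config of a standard krakend.json for krakend-ce 1.4 (the 8086 address is only the InfluxDB default; keep port 80 if your influxdb-service really maps it), the relevant configuration would reduce to roughly this sketch:
{
  "version": 2,
  "extra_config": {
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "30s",
      "proxy_disabled": false,
      "router_disabled": false,
      "backend_disabled": false,
      "endpoint_disabled": false,
      "listen_address": "127.0.0.1:8090"
    },
    "github_com/letgoapp/krakend-influx": {
      "address": "http://influxdb-service:8086",
      "ttl": "25s",
      "buffer_size": 0,
      "db": "krakend",
      "username": "admin",
      "password": "adminadmin"
    }
  }
}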
Another thing to check is whether the InfluxDB client is started during KrakenD startup (there is a line in the logs about it); old versions had a random bug that prevented the metrics from being sent.
Finally, the maintained dashboard for InfluxDB v1 is now 15029. Try to use that one.
There have been several improvements in metrics and in KrakenD in general; I would avoid staying on the old 1.4, given how simple the transition is (but that is another topic).
Hope this helps.
After hours of debugging, I finally found the solution. The key thing to note is that if your configuration is correct, you should see data showing up in InfluxDB, so first make sure that you see data in InfluxDB. In my case, this was correct.
As I mentioned in the second update to the question, I saw a few more panels getting populated when I used the dashboard from https://github.com/letgoapp/krakend-influx/blob/master/examples/grafana-dashboard.json
This was a huge lead. I went on to compare the JSON blocks of the working panels with those that were missing charts and realized that all of the broken panels had "datasource": null, while the working ones had "datasource": "InfluxDB".
I updated the JSON definition of the dashboard and voila! All the panels started showing their charts.
PS: If you see any panel whose datasource is null, or which does not correspond to your InfluxDB datasource, you should update it to use the InfluxDB data source that is defined in the data sources section of the admin panel.
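As an illustration only, a repaired panel entry in the dashboard JSON ends up looking roughly like the sketch below; the panel title is a placeholder, fields such as targets and gridPos are omitted, and "InfluxDB" must match the name of the data source defined in Grafana:
"panels": [
  {
    "title": "Proxy latency",
    "type": "graph",
    "datasource": "InfluxDB"
  }
]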

Unable to utilize log4j-spring-cloud-config-client when Spring Cloud Config uses a backend other than Git or File Based

Apparently, to use log4j-spring-cloud-config-client with Spring Cloud Config, you need to take advantage of the SearchPathLocator functionality to pull the raw file based on a specific URI. From the spring-cloud-config code it appears that only JGitEnvironmentRepository and NativeEnvironmentRepository implement that interface and offer that functionality.
Running locally, if I hit the following endpoint, I get back a raw log4j2 config file: http://localhost:8088/config-server-properties-poc/default/master/log4j2.xml.
When I try that with an S3 backend, I get a 404, and it doesn't try to search for that specific file. I was able to work around this by naming my file log4j2-default.json (XML is not supported). When I hit the following URL, I can get my properties back, but not in the correct format:
http://localhost:8088/log4j2/default
Format
{
  "name": "log4j2",
  "profiles": ["default"],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [{
    "name": "log4j2",
    "source": {
      "configuration.appenders.appender[0].PatternLayout.Pattern": "${logging_pattern}",
      "configuration.appenders.appender[0].name": "Console",
      "configuration.appenders.appender[0].target": "SYSTEM_OUT",
      "configuration.appenders.appender[0].type": "Console",
      "configuration.loggers.Root.AppenderRef.ref": "Console",
      "configuration.loggers.Root.level": "info",
      "configuration.loggers.logger[0].AppenderRef.ref": "Console",
      "configuration.loggers.logger[0].additivity": "false",
      "configuration.loggers.logger[0].level": "info",
      "configuration.loggers.logger[0].name": "com.paychex",
      "configuration.loggers.logger[1].AppenderRef.ref": "Console",
      "configuration.loggers.logger[1].additivity": "false",
      "configuration.loggers.logger[1].level": "info",
      "configuration.loggers.logger[1].name": "com.paychex.traceability",
      "configuration.loggers.logger[2].AppenderRef.ref": "Console",
      "configuration.loggers.logger[2].level": "WARN",
      "configuration.loggers.logger[2].name": "org.apache.catalina.startup.VersionLoggerListener",
      "configuration.properties.property[0].name": "logging_pattern",
      "configuration.properties.property[0].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX},severity=%p,thread=%t,logger=%c,%X,%m%n",
      "configuration.properties.property[1].name": "traceability_logging_pattern",
      "configuration.properties.property[1].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ},severity=%p,thread=%t,logger=%c,%X,%m%n"
    }
  }]
}
As you can see, the properties are wrapped in the Spring Environment object and flattened into a Map, so peeling this apart and getting log4j2 to parse it would be tricky.
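For comparison, unflattening those keys gives back roughly the raw log4j2-default.json that the Log4j client actually needs. The sketch below is reconstructed from the flattened keys above and abbreviated to a single named logger, so the real file will differ:
{
  "configuration": {
    "properties": {
      "property": [
        { "name": "logging_pattern", "value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX},severity=%p,thread=%t,logger=%c,%X,%m%n" }
      ]
    },
    "appenders": {
      "appender": [
        {
          "type": "Console",
          "name": "Console",
          "target": "SYSTEM_OUT",
          "PatternLayout": { "Pattern": "${logging_pattern}" }
        }
      ]
    },
    "loggers": {
      "logger": [
        { "name": "com.paychex", "level": "info", "additivity": "false", "AppenderRef": { "ref": "Console" } }
      ],
      "Root": { "level": "info", "AppenderRef": { "ref": "Console" } }
    }
  }
}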
Has anyone gotten the log4j client to work with a non-git backend?
You are correct. Log4j's support for Spring Cloud Config relies on SCC's support for serving plain text files.
The latest Spring Cloud Config documentation indicates that plain-text support via URLs only works for Git, SVN, native, and AWS S3, and that for S3 to work, Spring Cloud AWS must be included in the Config Server. This issue indicates that support for serving plain text files from S3 was added in Spring Cloud Config 2.2.1.RELEASE, which was published in Dec 2019. There is still an open issue to add support for a Vault backend.
Log4j's support for SCC was added in the 2.12.0 release in June 2019, when SCC did not yet support AWS S3. I have only tested it with native (for unit/functional testing) and Git, since that is the backend my employer uses. However, according to the documentation, if you can get SCC to serve plain text with an AWS backend, then Log4j should work as well, since all it does is query SCC via URLs.

Connect Azure Data Factory to SAP BW

I have an SSIS package that successfully uses the Microsoft SAP BW connector. The SAP administrator has set up his side so that it uses a process chain and ProgramId as connection criteria. I start my SSIS package and it runs in "Wait" mode until the SAP job executes. This all works great. I now need to replicate this using Azure Data Factory's SAP BW connector, but the Azure connector does not have the same look and feel, so I am attempting to edit the code in the Connections tab for the SAP BW connection to include the Wait mode, etc.
The SAP BW connection to the SAP BW system successfully passes the "Test Connection" in the Data Factory.
In the SSIS SAP BW connector, the advanced properties display these values, which I am trying to replicate (hope this image works):
So I added the "Custom Properties" to the code under Connections -> Linked Services -> SapBw:
{
  "name": "SapBw",
  "type": "Microsoft.DataFactory/factories/linkedservices",
  "properties": {
    "type": "SapBw",
    "typeProperties": {
      "server": "sapdb.compnme.local",
      "systemNumber": "00",
      "clientId": "400",
      "userName": "myUser",
      "encryptedCredential": "abc123"
    },
    "connectVia": {
      "referenceName": "ARuntime",
      "type": "IntegrationRuntimeReference"
    }
  },
  "Custom Properties": {
    "DbTableName": "/BIC/OHCSST_OHD",
    "DestinationName": "CSST_OHD",
    "ExecutionMode": "W",
    "GatewayHost": "sapdb.compnme.local",
    "GatewayService": "sapgw00",
    "ProcessChain": "Z_CS_STAT_OHD",
    "ProgramId": "ProgId_P23",
    "Timeout": "1200"
  }
}
Unfortunately, when I click "Finish" the connection is successfully published, but when I go to view the code my Custom Properties have disappeared. Is there a different process to connect to SAP Open Hub with Azure Data Factory? There does not appear to be anything on the MS website to guide me.
Your image attachment could not be displayed correctly. Based on what I understand, I wonder if you have confused the ADF SSIS-IR and the ADF self-hosted IR.
Because you leveraged the BW connector in SSIS, apparently you were using an SSIS package and deployed it to the ADF SSIS-IR stack. This IR has nothing to do with the self-hosted IR, which is what the ADF Copy activity from SAP BW requires. You mentioned you defined custom properties in the linked service, but the context of linked services is the ADF native BW MDX connection interface; no matter what you define in the ADF linked service, it will not affect the SSIS IR. Also, you need to realize that the ADF native BW interface is for MDX access only, to query BW InfoCube and BEx query cube data. It has nothing to do with Open Hub.
Tactically, you should apply the custom properties to your BW connection inside the SSIS package, but I have a feeling that you may not be deeply familiar with the pros and cons of the SSIS BW connector, the ADF BW connector, Open Hub, and MDX. From real project experience, there are major robustness issues with the SSIS BW connector's integration with Open Hub and process chains: the DTP jobs inside the process chain can fail frequently, and "resetting" the DTP jobs is a frustrating experience. I suggest you describe your requirement before spending too much energy solving a connection property issue.
Did some work with a Microsoft person - the process we wanted was to use an OpenHub connection in the Data Factory. This link to the Microsoft Azure Data Factory forum has a document that talks about how to achieve this.
DataFactory Forum
Unfortunately this process didn't work for me because our SAP version is 4, when it should work with 7.3 13.

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know how to proceed from there. The idea is that I want people to access the service from SCP, and that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a cloud connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the cloud connector destination.
Make a new folder in Web IDE.
Create the file neo-app.json.
Content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called it just google, but you'll put your cloud connector destination there.
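So for a cloud connector destination, the same file might look like the sketch below, where the destination name s4_odata and the /sap path are placeholders; use whatever destination name you configured in the SCP cockpit:
{
  "routes": [{
    "path": "/sap",
    "target": {
      "type": "destination",
      "name": "s4_odata"
    },
    "description": "On-premise OData service via Cloud Connector"
  }],
  "sendWelcomeFileRedirect": false
}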
Deploy.
My test app with destination google going to https://www.google.com came out looking like this: paths are relative, so it doesn't fully work, but Google is clearly being proxied.
You'll still have to handle authentication, etc.

Managing application configuration in a chef environment cookbook

I am new to Chef and have been struggling to find best practices for managing application configuration in an environment cookbook [source #1].
The environment cookbook I'm working on should do the following:
Prepare the node for a custom application deployment by creating directories, users, etc. that are specific to this deployment only.
Add initialization and monitoring scripts specific to the application deployment.
Define the application configuration settings.
This last responsibility has been a particularly tough nut to crack.
An example configuration file of an application deployment might look as follows:
{
  "server": {
    "port": 9090
  },
  "session": {
    "proxy": false,
    "expires": 100
  },
  "redis": [{
    "port": 9031,
    "host": "rds01.prd.example.com"
  }, {
    "port": 9031,
    "host": "rds02.prd.example.com"
  }],
  "ldapConfig": {
    "url": "ldap://example.inc:389",
    "adminDn": "CN=Admin,CN=Users,DC=example,DC=inc",
    "adminUsername": "user",
    "adminPassword": "secret",
    "searchBase": "OU=BigCustomer,OU=customers,DC=example,DC=inc",
    "searchFilter": "(example=*)"
  },
  "log4js": {
    "appenders": [
      {
        "category": "[all]",
        "type": "file",
        "filename": "./logs/myapp.log"
      }
    ],
    "levels": {
      "[all]": "ERROR"
    }
  },
  "otherService": {
    "basePath": "http://api.prd.example.com:1234/otherService",
    "smokeTestVariable": "testVar"
  }
}
Some parts of this deployment configuration file are more stable than others. While this may vary depending on the application and setup, I prefer to keep things like port numbers and usernames the same across environments for simplicity's sake.
Let me classify the configuration settings:
Stable properties
session
server
log4js.appenders
ldapConfig.adminUsername
ldapConfig.searchFilter
otherService.basePath
redis.port
Environment specific properties
log4js.levels
otherService.smokeTestVariable
Partial-environment specific properties
redis.host: rds01.[environment].example.com
otherService.basePath: http://api.[environment].example.com:1234/otherService
Encrypted environment specific properties
ldapConfig.adminPassword
Questions
How should I create the configuration file? Some options: 1) use a file shipped within the application deployment itself, 2) use a cookbook file template, 3) use a JSON blob as one of the attributes [source #2], 4)... other?
There is a great deal of variability in the configuration file; how is this best managed using Chef? Roles, environments, per-node configuration, data bags, encrypted data bags...? Or should I opt for environment variables instead?
Some key concerns in the approach:
I would prefer there to be only one way to set the configuration settings.
Changing the configuration file for a developer should be fairly straightforward (they are using Vagrant on their local machines before pushing to test).
The passwords must be secure.
The Chef cookbook is managed within the same git repository as the source code.
Some configuration settings require a great deal of flexibility; for example, the log4js setting in my example config might contain many more appenders with dozens of fairly unstructured variables.
Any experiences would be much appreciated!
Sources
http://blog.vialstudios.com/the-environment-cookbook-pattern/
http://lists.opscode.com/sympa/arc/chef/2013-01/msg00392.html
http://jtimberman.housepub.org/blog/2013/01/28/local-templates-for-application-configuration/
http://realityforge.org/code/2012/11/12/reusable-cookbooks-revisited.html
Jamie Winsor gave a talk at ChefConf that goes further in explaining the environment cookbook pattern's rationale and usage:
Chefcon: talking about self-contained releases, using chef
Slides
In my opinion, one of the key concepts this pattern introduces is the idea of using Chef environments to control the settings of each application instance. The environment is updated, using Berkshelf, with the run-time versions of the cookbooks being used by the application.
What is less obvious is that if you decide to reserve a Chef environment for the use of a single application instance, then it becomes safe to use that environment to configure the application's global run-time settings.
An example is given in the berkshelf-api installation instructions. There you will see the production environment (for the application) being edited with various run-time settings:
knife environment edit berkshelf-api-production
In conclusion, Chef gives us lots of options. I would make the following generic recommendations:
Capture defaults in the application cookbook
Create an environment for each application instance (as recommended by the pattern)
Set run-time attribute overrides in the environment (see the sketch after this list)
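A minimal sketch of such an environment in JSON form, assuming the application cookbook is called myapp and namespaces its attributes under node['myapp'] (the cookbook name, version pin, and attribute keys are placeholders modeled on the example config above):
{
  "name": "myapp-production",
  "description": "Production instance of myapp",
  "chef_type": "environment",
  "json_class": "Chef::Environment",
  "cookbook_versions": {
    "myapp": "= 1.2.0"
  },
  "override_attributes": {
    "myapp": {
      "log4js": { "levels": { "[all]": "ERROR" } },
      "otherService": {
        "basePath": "http://api.prd.example.com:1234/otherService",
        "smokeTestVariable": "testVar"
      },
      "redis": {
        "hosts": ["rds01.prd.example.com", "rds02.prd.example.com"]
      }
    }
  }
}
Such a file can be uploaded with knife environment from file, while secrets such as ldapConfig.adminPassword stay in an encrypted data bag rather than in the environment.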
Notes:
See also the berksflow tool, designed to make the environment cookbook pattern easier to implement.
I have made no mention of using roles. These can also be used to override attributes at run-time, but it might be simpler to capture everything in a dedicated Chef environment. Roles seem better suited to capturing information peculiar to a component of an application.