Extending S/4HANA OData service to SCP - sapui5

I want to extend a custom OData service created in an S/4HANA system. I have installed a Cloud Connector on my machine, but I don't know where to go from there. The idea is that people will access the service from SCP, so that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?

Ok, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
1. Create the Cloud Connector destination.
2. Make a new folder in Web IDE.
3. Create a file named neo-app.json with this content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
path is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just google, but you'll put your Cloud Connector destination there.
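Adapted to the S/4HANA case, the same file would simply swap in your Cloud Connector destination. A sketch, assuming a destination named S4_ONPREM and a /s4 proxy path (both placeholder names, not anything from the original setup):

{
  "routes": [{
    "path": "/s4",
    "target": {
      "type": "destination",
      "name": "S4_ONPREM"
    },
    "description": "S/4HANA OData via Cloud Connector"
  }],
  "sendWelcomeFileRedirect": false
}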
Deploy.
My test app with destination google pointing to https://www.google.com came back as the Google homepage served through the proxy. The page's relative paths break, so it doesn't fully work, but Google does appear to be proxied.
You'll still have to authenticate etc.
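As for the destination itself, it is maintained in the SCP cockpit under Connectivity > Destinations. A rough sketch of the relevant properties for an on-premise system reached through the Cloud Connector (all values are placeholders; ProxyType=OnPremise is what routes calls through the connector):

Name=S4_ONPREM
Type=HTTP
URL=http://s4-virtual-host:44300
ProxyType=OnPremise
Authentication=BasicAuthentication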


Unable to utilize log4j-spring-cloud-config-client when Spring Cloud Config uses a backend other than Git or File Based

Apparently, to use the log4j-spring-cloud-config-client with Spring Cloud Config, you need to take advantage of the SearchPathLocator functionality to pull the raw file based on a specific URI. From the spring-cloud-config code, it appears that only JGitEnvironmentRepository and NativeEnvironmentRepository implement that interface and offer that functionality.
Running locally, if I hit the following endpoint, I get back a raw log4j2 config file: http://localhost:8088/config-server-properties-poc/default/master/log4j2.xml.
When I try that with an S3 backend, I get a 404, and it doesn't try to search for that specific file. I was able to work around this by naming my file log4j2-default.json (XML is not supported). When I hit the following URL, I can get my properties back, but not in the correct format:
http://localhost:8088/log4j2/default
The response looks like this:
{
  "name": "log4j2",
  "profiles": ["default"],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [{
    "name": "log4j2",
    "source": {
      "configuration.appenders.appender[0].PatternLayout.Pattern": "${logging_pattern}",
      "configuration.appenders.appender[0].name": "Console",
      "configuration.appenders.appender[0].target": "SYSTEM_OUT",
      "configuration.appenders.appender[0].type": "Console",
      "configuration.loggers.Root.AppenderRef.ref": "Console",
      "configuration.loggers.Root.level": "info",
      "configuration.loggers.logger[0].AppenderRef.ref": "Console",
      "configuration.loggers.logger[0].additivity": "false",
      "configuration.loggers.logger[0].level": "info",
      "configuration.loggers.logger[0].name": "com.paychex",
      "configuration.loggers.logger[1].AppenderRef.ref": "Console",
      "configuration.loggers.logger[1].additivity": "false",
      "configuration.loggers.logger[1].level": "info",
      "configuration.loggers.logger[1].name": "com.paychex.traceability",
      "configuration.loggers.logger[2].AppenderRef.ref": "Console",
      "configuration.loggers.logger[2].level": "WARN",
      "configuration.loggers.logger[2].name": "org.apache.catalina.startup.VersionLoggerListener",
      "configuration.properties.property[0].name": "logging_pattern",
      "configuration.properties.property[0].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX},severity=%p,thread=%t,logger=%c,%X,%m%n",
      "configuration.properties.property[1].name": "traceability_logging_pattern",
      "configuration.properties.property[1].value": "%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ},severity=%p,thread=%t,logger=%c,%X,%m%n"
    }
  }]
}
As you can see, the properties are wrapped in the Spring Environment object and flattened into a Map, so peeling this apart and getting Log4j2 to parse it would be tricky.
Has anyone gotten the log4j client to work with a non-git backend?
You are correct. Log4j's support for Spring Cloud Config relies on SCC's support for serving plain text files.
The latest Spring Cloud Config documentation indicates that plain text support via URLs only works for Git, SVN, native, and AWS S3, and that for S3 to work, Spring Cloud AWS must be included in the Config Server. This issue indicates that support for serving plain text files from S3 was added in Spring Cloud Config 2.2.1.RELEASE, published in December 2019. There is still an open issue to add support for a Vault backend.
Log4j's support for SCC was added in the 2.12.0 release in June 2019, when SCC did not yet support AWS S3. I have only tested it with native (for unit/functional testing) and with Git, since that is the backend my employer uses. However, according to the documentation, if you can get SCC to serve plain text with an AWS backend, then Log4j should work as well, since all it does is query SCC via URLs.
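For reference, SCC's plain text endpoint follows the documented pattern /{application}/{profile}/{label}/{path}; the working Git-backed URL from the question is exactly that shape:

http://localhost:8088/config-server-properties-poc/default/master/log4j2.xml

So if the S3 backend can be made to answer that same URL shape (i.e., on 2.2.1.RELEASE or later with Spring Cloud AWS on the classpath), the Log4j client should be able to consume it unchanged.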

Swagger-ui on GKE 1.9

I am running a Kubernetes cluster on GKE. I have been told that the Kubernetes API server comes integrated with the Swagger UI, and that the UI is a friendly way to explore the APIs. However, I am not sure how to enable this on my cluster. Any guidance is highly appreciated. Thanks!
I've done some research on your question, and I will share what I discovered.
This feature is not enabled by default on every Kubernetes installation; you would need to enable it through the flag --enable-swagger-ui, which I believe is what you were looking for:
--enable-swagger-ui Enables swagger ui on the apiserver at /swagger-ui.
The issue is that this does not appear to be enabled for Google Kubernetes Engine: the master node does not serve any request for this resource, the port appears to be closed, and since the master is managed, I believe the flag cannot be enabled.
However, according to the documentation, the master should still expose a series of resources that let you access the API documentation and render it with the tool you prefer. This is indeed the case, and the following files are available:
https://master-ip/swagger.json (you can get the master IP by running $ kubectl cluster-info)
{"swagger": "2.0",
"info": {
"title": "Kubernetes",
"version": "v1.9.3"
},
"paths": {
"/api/": {
"get": {
...
https://master-ip/swaggerapi
{"swaggerVersion": "1.2",
"apis": [
{
"path": "/version",
"description": "git code version from which this is built"
},
{
"path": "/apis",
"description": "get available API versions"
},
...
According to this blog post from Kubernetes, you could make use of this file:
From kube-apiserver/swagger.json. This file will have all enabled GroupVersions routes and models and would be the most up-to-date file for a specific kube-apiserver. [...] There are numerous tools that work with this spec. For example, you can use the swagger editor to open the spec file and render documentation, as well as generate clients; or you can directly use swagger codegen to generate documentation and clients. The clients this generates will mostly work out of the box, but you will need some support for authorization and some Kubernetes-specific utilities. Use the python client as a template to create your own client.
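As a side note, if querying https://master-ip directly runs into certificate or authentication issues, one workaround (assuming kubectl is already configured against the cluster) is to let kubectl proxy supply the credentials and fetch the spec over localhost:

$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/swagger.json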

Failed to load resource while consuming OData service

Hello community, I need some help. I have my OData service already running, and I have a URL like this:
https://myclient:port/sap/opu/odata/SAP/servicename_SRV/MaterialListSet
This is my config, which I suppose is wrong.
manifest.json
"dataSources": {
"invoiceRemote": {
"uri": "https://myclient:port/sap/opu/odata/SAP/servicename_SRV/",
"type": "OData",
"settings": {
"odataVersion": "2.0"
}
}
}
.
.
.
"models": {
...
"invoice": {
"dataSource": "invoiceRemote"
}
}
I get these two errors:
Failed to load resource: the server responded with a status of 401 (Unauthorized)
and
Failed to load https://client:port/sap/opu/odata/SAP/odata_SRV/$metadata?sap-language=ES: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:port' is therefore not allowed access. The response had HTTP status code 401.
This line is not good:
"uri": "https://myclient:port/sap/opu/odata/SAP/servicename_SRV/",
This is because you have to use relative URLs, so it should be:
"uri": "/sap/opu/odata/SAP/servicename_SRV/",
The reason behind that is simple: your customer almost certainly has more than one SAP Gateway/Fiori system, so you shouldn't hard-code the domain of your development or production system.
Assuming you will eventually deploy your UI5 application to the SAP NetWeaver system, that system will contain both the OData service AND the UI5 application. As they will be hosted on the same server, relative URLs will work just fine.
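Applied to the manifest.json from the question, the data source then becomes:

"dataSources": {
  "invoiceRemote": {
    "uri": "/sap/opu/odata/SAP/servicename_SRV/",
    "type": "OData",
    "settings": {
      "odataVersion": "2.0"
    }
  }
}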
However, inside Web IDE this is not enough, because if you use relative URLs, SAP Cloud/Web IDE will assume that you are trying to access a resource in the cloud.
That is why you should add or change the neo-app.json file inside your UI5 project. If you already have it, just change it. If you do not have this file in your project yet, you can easily create it by right-clicking the project name and choosing New >> HTML5 Application Descriptor. This will create the file in the root of your project (outside the webapp folder that is usually present).
Finally, you will have to add a route to this neo-app.json file, like this:
{
  "path": "/sap/opu/odata",
  "target": {
    "type": "destination",
    "name": "NAME_OF_YOUR_SAP_CLOUD_DESTINATION",
    "entryPath": "/sap/opu/odata"
  },
  "description": "SAP Gateway System"
}
This tells Web IDE to forward every request under that path to a different system via the specified destination.
This will only work if you have an SAP Cloud Connector in place linking your SAP Cloud account with your on-premise SAP NetWeaver system.

How to Set IP to Static with PowerShell and Azure

I have an Azure DevTest Lab that I am deploying to Azure via PowerShell. I am able to deploy the ARM templates and join the test domain (not Azure AD) with no issues. The next step is to set the IP to static. I can think of three possible ways to do this: figure out the IP structure beforehand and deploy with those settings; let DHCP assign the settings and try to programmatically change them from dynamic to static using PowerShell DSC; or use some type of preferred lease from the DHCP. These labs are meant to be stood up and torn down ad hoc. The IPs are internal, not public. It is possible for me to know the IPs beforehand. Could someone recommend which approach would make the most sense to pursue?
Well, there are several ways of looking at it. First of all, you can define the IP at deployment time by setting it to static instead of dynamic:
{
  "name": "xxx",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2016-10-01",
  "location": "loc",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "ipgoeshere",
          "subnet": {
            "id": "subnetgoeshere"
          }
        }
      }
    ]
  }
}
but this method is only valid if you know the available IP addresses beforehand; you will have to look those up and pass them to the template.
Another way of doing this is creating the NIC as dynamic, getting its IP address, and then setting it to static. All of this can be done with an ARM template. The example is a bit too long to paste here; you can check it here. Look for the deployments called "[concat(variables('vmNamePrefix'),'setStaticIp')]" and "[concat(variables('vmNamePrefix'),copyIndex(1),'-primaryIp')]" and their corresponding templates, getip and setip.
You can do pretty much the same with PowerShell. I don't have a script handy, but the logic is the same: deploy > get IP > set IP.
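As a rough sketch of that flow (untested, using the Az module cmdlets; the resource names are placeholders, and the original answer did not include a script):

# After deploying the VM/NIC with a dynamic address,
# fetch the NIC once DHCP has assigned an IP
$nic = Get-AzNetworkInterface -Name "myLab-vm-nic" -ResourceGroupName "myLab-rg"

# "getip": inspect the address DHCP handed out
$nic.IpConfigurations[0].PrivateIpAddress

# "setip": pin that same address by switching the allocation method
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic | Set-AzNetworkInterface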

IBM API Connect - Can custom connectors be exposed via UI?

In SLC ARC, the list of connectors available via the UI (when creating data sources and thus generating models) was hard-coded (link to overview of issue). Does the same hold true for API Connect?
Effectively, I'd like to create a fork of the mssql connector to address some issues with how schemas are processed when generating models from existing tables. If I create such a connector, will I be able to install it so that I can use it via the GUI? (Again, I could not via SLC ARC due to the hard-coding.) Any help is greatly appreciated!
EDIT: I've installed the loopback-connector-redis connector into a throwaway project. When I spin up APIC, it does not appear on the data sources screen. So, rephrasing my question: are there settings or similar that would allow such connectors to be included? Ideally, APIC would scan my project, determine what I have installed, and expose those connectors.
As you've seen, the list is currently fixed and doesn't detect additional installed connectors.
If you want to use your own custom connector, create a new datasource using the API Designer, select the MSSQL connector and fill in the values per usual.
Next, you'll need to open a file on your system to tweak the connector target.
In your project directory, open ./server/datasources.json and you should see the datasource you just created. Then, just change the connector value to the name of the custom version you created, save, and continue developing your APIs like normal.
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "DB2 Customers": {
    "host": "datbase.acme-air.com",
    "port": 50000,
    "database": "customers",
    "password": "",
    "name": "Customer DB",
    "connector": "db2-custom",
    "user": "mhamann@us.ibm.com"
  }
}
Unfortunately, you're now on your own in terms of managing datasources, as they won't show up in the Designer's datasource editor. They will still be usable in other parts of the Designer, so you can connect up your models, etc.
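One caveat, based on my understanding of how LoopBack resolves connectors (an assumption on my part, not something from the API Connect docs): the custom connector must also be installed in the project's node_modules so that its name can be resolved via require(), e.g. declared in package.json with a hypothetical package name:

{
  "dependencies": {
    "loopback-connector-db2-custom": "^1.0.0"
  }
}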