Local Web IDE not connecting to ES4 service - sapui5

I have installed SAP Web IDE locally on my machine and am trying to connect to the following services:
https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/?sap-ds-debug=true
http://services.odata.org/v3/northwind/northwind.svc/
I am getting two errors, attached for reference.
Below is my destination file 1:
Description=es4
Type=HTTP
TrustAll=true
Authentication=NoAuthentication
WebIDEUsage=odata_abap
Name=es4
WebIDEEnabled=true
URL=https\://sapes4.sapdevcenter.com\:443
ProxyType=Internet
WebIDESystem=es4
File 2:
Description=es4
Type=HTTP
TrustAll=true
Authentication=NoAuthentication
WebIDEUsage=odata_gen
Name=es4
WebIDEEnabled=true
URL=https\://sapes4.sapdevcenter.com
ProxyType=Internet
WebIDESystem=es4
Is there any configuration needed in my local Cloud Connector?

First, you shouldn't have separate files for the same destination. Keep it in one file and separate the WebIDEUsage values with commas (make sure there are no spaces). More information can be found in the documentation Hofit has added.
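For example, the two files above could be merged into a single destination along these lines (the explicit \:443 is equivalent to the second file's URL, since 443 is the default HTTPS port):
Description=es4
Type=HTTP
TrustAll=true
Authentication=NoAuthentication
WebIDEUsage=odata_abap,odata_gen
Name=es4
WebIDEEnabled=true
URL=https\://sapes4.sapdevcenter.com\:443
ProxyType=Internet
WebIDESystem=es4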
Second, there's no need for a Cloud Connector, as there's no cloud involved here. If you install Web IDE locally, it runs on your local machine and there's no connectivity to the cloud.
I'm sure you can find all the needed information in both the documentation and SAP community.

I just tried to connect to es4 as you did in the first screenshot, and it is working fine. (The name in the service catalog dropdown should be es4, matching the name in destination file 1, not es4123.)
Here is a link to the documentation.

Related

Testing Jetty server of Jasper Reports Integration

I am trying to use JasperReports Integration for the first time. I am using the included Jetty server, Oracle Database XE 18c, and Windows 7.
I am following the quick start guide https://github.com/daust/JasperReportsIntegration/blob/main/src/doc/github/installation-quickstart.md
I downloaded the zip folder and configured database access by adding the schema credentials to the application.properties file as follows:
[datasource:default]
type=jdbc
url=jdbc:oracle:thin:@localhost:1521:XEPDB1
username=hr
password=hr
# this parameter is limiting access to the integration for the specified
# list of ip addresses, e.g.:
# ipAddressesAllowed=127.0.0.1,10.10.10.10,192.168.178.31
# if the list is empty, ALL addresses are allowed.
I then deployed the jri.war file successfully, and started the server successfully as well. But when I tried to test it through http://localhost:8090/, I got the following page, and I do not know whether that's normal or something is wrong...
I need to know whether the test was successful, and what is meant by "context" here?
Thanks
You deployed the jri.war to the context path /jri; this isn't an error and is quite normal.
Just access your webapp via http://localhost:8090/jri/
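A quick way to confirm, assuming the Jetty port from the question (8090):
curl -i http://localhost:8090/jri/
# an HTTP 200 (or a redirect into the JasperReportsIntegration pages) means the webapp is reachable under its context path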

td-agent does not validate google cloud service account credentials

I'm trying to configure fluentd output with td-agent and the fluent-google-cloud plugin. The plugin and all its dependencies are loaded, but fluentd is not outputting to Google Cloud Logging, and the td-agent log states error="Unable to read the credential file specified by GOOGLE_APPLICATION_CREDENTIALS: file /home/$(whoami)/.config/gcloud/service_account_credentials.json does not exist".
However, when I go to the file path, the file does exist, and the $GOOGLE_APPLICATION_CREDENTIALS variable is set to that file path as well. What should I do to fix this?
On the assumption that both the error and you are correct, I suspect (!) that you're checking as your own user account (== whoami) and finding /home/$(whoami)/.config/gcloud, while the agent is running (under systemctl?) as root and looking for the credentials file somewhere else (perhaps /root/.config/gcloud).
It would be helpful if you included more details as to what you've done in order that we can better understand the issue.
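A quick way to test this theory (the td-agent unit name and the target path below are assumptions; adjust them to your setup):
# check which user the agent actually runs as
systemctl show td-agent -p User
ps -o user= -p "$(pgrep -f td-agent | head -n 1)"
# if it runs as root, put the key somewhere that account can read and point the unit at it
sudo systemctl edit td-agent
#   [Service]
#   Environment=GOOGLE_APPLICATION_CREDENTIALS=/etc/td-agent/service_account_credentials.json
sudo systemctl restart td-agent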

How to read a local csv file using Azure Data Factory and a self-hosted runtime?

I have a Windows Server VM with the ADF Integration Runtime installed running under a local account called deploy. This account is a member of the local admins group. The server is not domain-joined.
I created a new linked service (File System) and pointed it to a csv file on the root of the C drive as a test. When I test the connection I get Connection failed.
Error occurred when trying to access the file in Folder 'C:\etr.csv', File filter: ''. The directory name is invalid. Activity ID: 1b892702-7cc3-48d5-83c7-c680d6d15afd.
Any ideas on a fix?
The linked service needs to be a folder on the target machine. In your screenshot, change C:\etr.csv to C:\ and then define a new dataset that uses the linked service to select etr.csv.
The dataset represents the structure of the data within the linked data store, and the linked service defines the connection to the data source. So the linked service should point to the folder instead of the file: C:\ instead of C:\etr.csv.
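For illustration, the resulting linked service and dataset JSON might look roughly like this (names such as LocalFileSystem, EtrCsv, and SelfHostedIR are placeholders, and the credential handling is an assumption):
{
    "name": "LocalFileSystem",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "C:\\",
            "userId": "deploy",
            "password": { "type": "SecureString", "value": "<password>" }
        },
        "connectVia": { "referenceName": "SelfHostedIR", "type": "IntegrationRuntimeReference" }
    }
}
{
    "name": "EtrCsv",
    "properties": {
        "linkedServiceName": { "referenceName": "LocalFileSystem", "type": "LinkedServiceReference" },
        "type": "DelimitedText",
        "typeProperties": {
            "location": { "type": "FileServerLocation", "fileName": "etr.csv" },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}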

Creating and using a custom kafka connect configuration provider

I have installed and tested Kafka Connect in distributed mode; it works now, connecting to the configured sink and reading from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that the only available means of creating a connector is through REST calls, which means I need to send my information over the wire, unprotected.
To secure this, Kafka introduced the new ConfigProvider, seen here.
This is helpful, as it allows you to set properties on the server and then reference them in the REST call, like so:
{
  ...
  "property": "${file:/path/to/file:nameOfThePropertyInFile}"
  ...
}
This works really well, just by adding the property file on the server and the following config to the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
While this solution works, it really does not ease my concerns regarding security, as the information has now gone from being sent over the wire to sitting in a repository, in plain text for everyone to see.
The Kafka team foresaw this issue and allows clients to supply their own configuration providers by implementing the ConfigProvider interface.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entries to the distributed file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However, I am getting an error from Connect stating that a class implementing ConfigProvider, with the name:
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site is not very explicit about how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through these steps to set up a custom ConfigProvider recently. The official documentation is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the jar file whatever you like, but it needs to be packed in jar format, i.e. with a .jar suffix.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully-qualified name is com.my.CustomConfigProvider.MyClass.
1. Create a file META-INF/services/org.apache.kafka.common.config.ConfigProvider. The file content is the fully-qualified class name:
com.my.CustomConfigProvider.MyClass
2. Include your source code and the above META-INF folder to generate a jar package. If you are using Maven, the META-INF/services folder goes under src/main/resources.
3. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder. The default is /usr/share/java (PLUGIN_PATH in the Kafka worker config file).
4. Upload all the dependency jars to PLUGIN_PATH as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the Class-Path for the dependent jars that your code will use.
5. In the Kafka worker config file, create two additional properties:
CONNECT_CONFIG_PROVIDERS: 'mycustom', // Alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS:'com.my.CustomConfigProvider.MyClass',
6. Restart the workers.
7. Update your connector config file by curling a POST to the Kafka REST API. In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here, ConfigData is a HashMap which, in this example, contains {password: 123}.
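For reference, a minimal sketch of such a provider, using the class name from the question (the actual secret lookup is stubbed out, so treat this as an outline rather than a finished implementation):
package com.somename.configproviders;

import java.util.Map;
import java.util.Set;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class CustConfigProvider implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) {
        // receives any config.providers.<alias>.param.* settings from the worker config
    }

    @Override
    public ConfigData get(String path) {
        // return every key available under 'path'
        return new ConfigData(Map.of());
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        // resolve only the requested keys, e.g. fetch or decrypt them from a secure store;
        // for the reference above this would return {password: 123} for the given path
        return new ConfigData(Map.of("password", "123"));
    }

    @Override
    public void close() {
        // release any resources held by the provider
    }
}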
If you are still seeing a ClassNotFound exception, your classpath is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you need to set the worker config via environment variables (as in step 5 above).
• The worker config file and the connector config file are different.
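If you are running a plain (non-containerized) worker, the equivalent entries in the worker .properties file are simply:
config.providers=mycustom
config.providers.mycustom.class=com.my.CustomConfigProvider.MyClass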

How do I configure a webserver for a collective in Bluemix?

I found documentation indicating that I need to set up a web server in my collective environment; however, I cannot determine the correct set of steps. Thoughts?
It would help to see what you've already tried, but consider the following:
Create two or more servers on one or more of the hosts and join them to the collective. Make sure your servers are clusterMembers and collectiveMembers. The following post should help with creating servers and joining them to the collective:
How can I setup a cell and collective in Bluemix
Update the controller's /etc/hosts file with the hostnames of all the hosts in the collective.
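For example (the hostnames and addresses below are purely illustrative):
192.168.0.11   collectivehost1.example.com
192.168.0.12   collectivehost2.example.com
192.168.0.13   collectivehost3.example.com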
Download and follow this guide to generate the plugin-cfg.xml file on the controller:
https://developer.ibm.com/wasdev/downloads/#asset/scripts-jython-Generate_Cluster_Plugin
Copy the generated plugin-cfg.xml file to /opt/IBM/WebSphere/HTTPServer/conf
Edit /opt/IBM/WebSphere/HTTPServer/conf/httpd.conf and uncomment these two lines at the bottom of the file:
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/Profiles/Liberty/servers/controller/pluginConfig/myLibertyCluster-plugin-cfg.xml
Change the WebSpherePluginConfig value to be /opt/IBM/WebSphere/HTTPServer/conf/plugin-cfg.xml
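After the edit, the bottom of httpd.conf should look roughly like this:
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/HTTPServer/conf/plugin-cfg.xml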
Stop and start the HTTP server
sudo ./apachectl stop
sudo ./apachectl start
Verify the application can be reached using the webserver <webserverIP>:80/appname
Generate the plugin again if the application is added or removed.