Implementation of Proxy on Liberty for Java - ibm-cloud

I use "Liberty for Java" app and Statica service(Proxy) on Bluemix.
We set http.proxyHost/http.proxyPort/https.proxyHost/https.proxyPort as system properties in Java code every transactions.
for example:
URL url = new URL(xxx);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
// ...
System.setProperty("http.proxyHost", host);
System.setProperty("http.proxyPort", port);
System.setProperty("https.proxyHost", host);
System.setProperty("https.proxyPort", port);
// ...
DataOutputStream out = new DataOutputStream(connection.getOutputStream());
I have an issue where one transaction went from the app directly to the target server, even though tens of thousands of transactions passed through the proxy.
Question 1:
Does the "Liberty for Java" runtime on Bluemix clear or update the system properties http.proxyHost/http.proxyPort/https.proxyHost/https.proxyPort?
I wonder whether the "Liberty for Java" runtime set them to null in our multi-threaded environment, causing the app to access outside servers directly.
Question 2:
Does the "Liberty for Java" app on Bluemix communicate with outside servers on its own?
I found the following log in Statica.
https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.agents.na.apm.ibmserviceengage.com
https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.gateway.prd.na.ca.ibmserviceengage.com
(I masked part of the URLs.)
P.S. We will change the Java code to use the ProxySelector or Proxy class.

Re #1: No.
Re #2: Potentially yes. In your case, it seems your app is bound to a Monitoring & Analytics service? If so, a data collector is installed and sends collected data to remote servers.
What's the reason that you need to set the proxy system properties in your code? Is it because you want some connections to go through the proxy and others not?
If so, then the way you are doing it is not right, because the proxy system properties are a global, JVM-wide setting, not a thread-scoped one. This means that if one thread sets the proxy properties, all threads will then use that proxy; if one thread unsets them, all threads will then make direct connections. That may explain why you are intermittently seeing some direct connections. The right way is to use an HTTP client library that accepts the proxy as a parameter, like https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.Builder.html#setProxy%28org.apache.http.HttpHost%29
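For example, here is a minimal sketch of a per-connection proxy using the standard java.net.Proxy class (the class your P.S. mentions); the host, port, and target URL are placeholders:
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class PerConnectionProxy {
    public static void main(String[] args) throws IOException {
        // The proxy is scoped to this single connection rather than the whole
        // JVM, so concurrent threads cannot race on a global setting.
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.example.com", 8080));
        URL url = new URL("https://target.example.com/api");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection(proxy);
        try {
            System.out.println("Response code: " + connection.getResponseCode());
        } finally {
            connection.disconnect();
        }
    }
}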
If you want all connections to go through the HTTP proxy, then you should simply set the JAVA_OPTS environment variable to pass in those system properties, e.g., "-Dhttp.proxyHost=x.x.x.x -Dhttp.proxyPort=xx".
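On Bluemix (Cloud Foundry), one way to do that is in your manifest.yml; a minimal sketch, assuming the Liberty buildpack picks JAVA_OPTS up from the environment (the host and port are placeholders):
applications:
- name: my-liberty-app
  env:
    JAVA_OPTS: "-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080"
You can achieve the same without editing the manifest via cf set-env my-liberty-app JAVA_OPTS "..." followed by cf restage my-liberty-app.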

Related

spring data geode pool is not resolvable as a Pool in the application context

I've come back to a @SpringBootApplication project that uses spring-geode-starter version 1.2.4, although the same error happens after upgrading to version 1.5.6.
It sets up a Geode client using
@Component
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
and in order to register interest subscriptions over HTTP, also
@Configuration
@EnableGemFireHttpSession
with a bean
@Bean
public ReactiveSessionRepository<?> reactiveSessionRepository() {
    return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
}
On starting the application, the Spring Data Geode client connects to the server (Geode version 1.14) and automatically copies regions back to the client, which is great.
However, after all the region handles are copied over, there's an error from @EnableGemFireHttpSession, which is:
Error creating bean with name 'ClusteredSpringSessions' defined in class path resource [org/springframework/session/data/gemfire/config/annotation/web/http/GemFireHttpSessionConfiguration.class] and [gemfirePool] is not resolvable as a Pool in the application context
The first info message in the logs is:
org.springframework.session.data.gemfire.config.annotation.web.http.GemFireHttpSessionConfiguration 1109 sessionRegionAttributes: Expiration is not allowed on Regions with a data management policy of PROXY
org.springframework.data.gemfire.support.AbstractFactoryBeanSupport 277 lambda$logInfo$3: Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]
So the client is trying to create a region ClusteredSpringSessions, but it can't. The problem appears to resolve itself if I define a connection pool for HTTP, with a pool bean like this:
@Configuration
@EnableGemFireHttpSession(poolName = "devPool")
public class SessionConfig {

    @Bean
    public ReactiveSessionRepository<?> reactiveSessionRepository() {
        return new ReactiveMapSessionRepository(new ConcurrentHashMap<>());
    }

    @Bean("devPool")
    PoolFactoryBean sessionPool() {
        PoolFactoryBean pool = new PoolFactoryBean();
        ConnectionEndpoint ce = new ConnectionEndpoint("1.2.3.4", 10334);
        pool.setSubscriptionEnabled(true);
        pool.addLocators(ce);
        return pool;
    }
}
There is still the "Expiration is not allowed on Regions with a data management policy of PROXY" info message in the log, but this time the fallback ("Falling back to creating Region [ClusteredSpringSessions] in Cache [Web]") appears to work.
I don't understand why a default pool can't connect.
If a pool is defined, then in version 1.2.4 that can itself cause this issue.
Since you are using Spring Boot for Apache Geode (SBDG), which is an excellent choice (thank you), you can simply include the spring-geode-starter-session dependency on your @SpringBootApplication classpath, which removes the need to explicitly annotate your Spring Boot application with SSDG's @EnableGemFireHttpSession annotation.
See here for more details. I also have a sample application demonstrating the use of SSDG, here. The guide and source code for this example, along with other examples, can be found here.
Also, I would generally advise that users drive the GemFire/Geode cluster configuration from the application, rather than letting the cluster dictate the Regions (and/or other components/configuration) the client gets. However, SDG's @EnableClusterDefinedRegions annotation is provided, and is generally useful when you do not have control over the GemFire/Geode cluster your application is using. Still, in the (HTTP) Session use case, the GemFire/Geode cluster would need a Session Region (which defaults to "ClusteredSpringSessions", as determined by Spring Session for Apache Geode (SSDG) itself) anyway.
OK, now to the problem at hand...
I think what is happening here is that, for backwards compatibility and legacy reasons, Spring Data for Apache Geode (SDG), on which both SSDG and SBDG are based (SBDG additionally pulls in SSDG as well), defined a GemFire/Geode Pool with the name "gemfirePool", specifically when using the SDG XML namespace and defining a DataSource configuration.
So, it is somewhat naively assumed that users would explicitly define a Pool and call it "gemfirePool", rather than simply relying on the "default" Pool connection to the GemFire/Geode cache server (namely "localhost" and 40404, or, if using Locators (recommended), "localhost" and 10334).
However, for development purposes, and in SBDG specifically, I rely on the fact that GemFire/Geode creates a "DEFAULT" Pool anyway (when no explicit Pool is defined), and forgo the strict requirement that a "gemfirePool" should exist. But, SBDG builds on SSDG and SDG and they still rely on the legacy arrangement (for example).
I have filed an Issue ticket in SSDG to change this and better align SSDG with what SBDG prefers going forward, but I simply have not gotten around to it yet. My apologies for your inconvenience.
Anyway, it is a simple change you can make externally from your Spring Boot application, in application.properties like so (see here from the HTTP Session Sample I referenced from SBDG above). This will allow you to configure the Session Region Pool "name".
Also note, when you are using SDG's @EnableClusterDefinedRegions and the Region definition pulled down from the cluster is named differently on the server side, it is possible to change the name of the Session Region used by the client with this property.
Additionally, you can also configure the client Session Region data policy using properties as well (for example).
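As an illustration, the relevant application.properties entries might look like the following; the property names here are assumptions based on SSDG's well-known properties, so verify them against your SSDG version:
# Pool used by the client Session Region (instead of the legacy "gemfirePool")
spring.session.data.gemfire.cache.client.pool.name=devPool
# Name of the client Session Region (to match the Region defined on the cluster)
spring.session.data.gemfire.session.region.name=ClusteredSpringSessions
# Client Session Region data management policy
spring.session.data.gemfire.cache.client.region.shortcut=PROXY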
Regarding the Expiration "info" message you are seeing in the logs...
Since the client Session Region is a PROXY by default, Expiration, Eviction, and other Region data management policies (e.g. Compression) do not actually make much sense.
In fact, SSDG is smart about whether to apply additional Region data management policies locally or not (see here, and specifically, this logic).
The message you are seeing in your application logs does in fact come from SSDG, specifically. It really serves as a reminder that your Session state is actually "managed" on the server side (when the application client is using a PROXY, or even a CACHING_PROXY, Region for that matter) and that the corresponding server-side, or cluster, Sessions Region should be configured manually and appropriately, with Expiration policies and anything else necessary. Otherwise, no Session expiration would actually happen!
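As a sketch of that server-side configuration in gfsh (the Region type and timeout are placeholder choices to adapt):
gfsh> create region --name=ClusteredSpringSessions --type=PARTITION --entry-idle-time-expiration=1800 --entry-idle-time-expiration-action=DESTROY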
I hope all this makes sense.
If you continue to have problems, feel free to file an Issue ticket and provide an example test or small application replicating your problem.

core audio user-space plug-in driver - sandbox preventing data interaction from another process

I'm working on a Core Audio user-space HAL plug-in based on the example at
developer.apple.com/library/mac/samplecode/AudioDriverExamples/Introduction/Intro.html
In the plug-in implementation, I plan to obtain audio data from another process, e.g. via CFMessagePort.
However, I got the following error in the console when trying to create the port with CFMessagePortCreateLocal:
sandboxd[251]: ([2597]) coreaudiod(2597) deny mach-register com.mycompnay.audio
I did some googling and came to this article,
Technical Q&A QA1811
https://developer.apple.com/library/mac/qa/qa1811/_index.html
about adding AudioServerPlugIn_MachServices to the plist, but still no success.
Is there anything else I need to do to make this work (like adding entitlements or code signing), or is this not the correct approach?
I am not sure whether the CFMessagePort mechanism still works under the sandbox. Would XPC services be viable?
Thank you very much for your time. Any help is greatly appreciated.
Update 1:
I should be creating a remote port instead of a local one in the audio plug-in. That said, with the AudioServerPlugIn_MachServices attribute in the plist, there is no longer a sandboxd[559]: ([552]) coreaudiod(552) deny mach-lookup / register message in the console.
However, in my audio HAL plug-in (client side) I have
CFStringRef port_name = CFSTR("com.mycompany.audio.XPCService");
CFMessagePortRef port = CFMessagePortCreateRemote(kCFAllocatorDefault, port_name);
port comes back as 0 (NULL). I tried this in a different app and it works just fine.
This is my server side:
CFStringRef port_name = CFSTR("com.mycompany.audio.XPCService");
CFMessagePortRef port = CFMessagePortCreateLocal(kCFAllocatorDefault, port_name, &callback, NULL, NULL);
CFRunLoopSourceRef runLoopSource = CFMessagePortCreateRunLoopSource(kCFAllocatorDefault, port, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopCommonModes);
CFRunLoopRun();
I did get a console message regarding this:
com.apple.audio.DriverHelper[1314]: The plug-in named SimpleAudioPlugIn.driver requires extending the sandbox for the mach service named com.mycompnay.audio.XPCService
Anyone know why?
Update 2:
I noticed that when I run coreaudiod in debug mode, it does successfully get the object reference of the Mach service. (The same thing happened when I was trying the XPC service approach.)
(Screenshot: project scheme setting.)
Anyone?
I'm pretty sure I was running into the same problems in my AudioServerPlugIn. I could look up and use every Mach service I tried, except for the ones I had created. And the ones I had created worked normally from a regular process.
Eventually I read the Daemonomicon and figured out that coreaudiod (which hosts the HAL plugins) was using the global bootstrap namespace, but my service was being registered in the per-user bootstrap namespace. And since "processes using the global namespace can only see services in the global namespace" my plugin couldn't see my service.
You can use launchctl to test this by having it run the program that registers your service, but with the same bootstrap namespace as coreaudiod. You'll probably need to have rootless disabled.
# launchctl bsexec $(pgrep coreaudiod) your_service_executable
With that running, try to connect from your plugin again.
From Table 2 in the Daemonomicon, you can see that only launchd daemons use the global bootstrap namespace. That explains why coreaudiod uses it. And I think it means that your Mach service needs to be created by a launchd daemon.
To make one, create a launchd.plist for your service in /Library/LaunchDaemons. Set its owner to root:wheel and make it only writable by the owner. In it, set the MachServices key and add the name of your service:
<key>MachServices</key>
<dict>
    <key>com.mycompany.audio.XPCService</key>
    <true/>
</dict>
Then register it:
# launchctl bootstrap system /Library/LaunchDaemons/com.mycompany.audio.XPCService.plist
This is what I ended up with: com.bearisdriving.BGM.XPCHelper.plist.template. Note that without the UserName/GroupName keys your daemon will run as root. (The code for my service and plugin is in that repo as well, in case that's helpful.)
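For reference, a complete launchd.plist along those lines might look like the sketch below; the executable path and the user/group names are placeholders, not values from the linked template:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mycompany.audio.XPCService</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/com.mycompany.audio.XPCService</string>
    </array>
    <key>MachServices</key>
    <dict>
        <key>com.mycompany.audio.XPCService</key>
        <true/>
    </dict>
    <!-- Without UserName/GroupName the daemon runs as root (see above). -->
    <key>UserName</key>
    <string>_mycompanyaudio</string>
    <key>GroupName</key>
    <string>_mycompanyaudio</string>
</dict>
</plist>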
I ended up having to use XPC, unfortunately, but I tried CFMessagePort first and it worked fine.
It also seems to all work fine whether the plugin is signed or not. Though, as you say, you do need the AudioServerPlugIn_MachServices key in your Info.plist.

Configure AppFabric Cache without listing servers in web.config

I am trying to understand how to properly configure AppFabric Caching for a web site. We are planning to use SQL Server as the cache manager and, as far as I understand, SQL Server will contain a list of the cache hosts in the cluster.
However, when running
DataCacheFactory factory = new DataCacheFactory();
I get
Server collection cannot be empty.
which, I guess, is to be expected since I have not added any servers in the web.config.
However, I do not want to maintain a server list on each web server; I want that to be done centrally on the SQL Server. I assume there is a way to point to the SQL Server, but I cannot find information on how to do this.
(I have also tried the XML configuration option, but it cannot even find that file. I have checked the health of the service in PowerShell.)
How do I centralize the server cache host list?
We are planning to use SQL Server as the cache manager and as far as I can understand the SQL will contain a list of the cache hosts in the cluster.
That's not correct. SQL Server can perform cluster management, but only for managing the cache hosts and, ultimately, the cache cluster. It is purely internal management; your clients benefit from this configuration without needing access to SQL Server.
DataCacheFactory factory = new DataCacheFactory();
This code tries to load the default dataCacheClient section from your config. In your case it is empty, which is why you get this error.
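For reference, a minimal dataCacheClient section in web.config would look roughly like this (a sketch; the host name and port are placeholders, and the section also has to be declared under configSections):
<dataCacheClient>
  <hosts>
    <host name="CacheServer1" cachePort="22233" />
  </hosts>
</dataCacheClient>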
You can still configure the cache host in code, like this:
// Declare array for cache host(s).
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[1];
servers[0] = new DataCacheServerEndpoint("CacheServer1", 22233);
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;
DataCacheFactory mycacheFactory = new DataCacheFactory(factoryConfig);
DataCache myDefaultCache = mycacheFactory.GetCache("NamedCache1");
You don't need to specify all host names here, because AppFabric Caching will route requests to the correct cache host, even if it is not in your list.

Calling GWT RPC service

I have been going through the Google tutorial (which I find very good) at
https://developers.google.com/web-toolkit/doc/latest/tutorial/RPC
I have the service up and running on my local server and my JavaScript client can call it fine. OK so far. Now, what I want to do is deploy the service on a remote server, JoeSoapHost:8080.
How do I now tell my client where to send its requests? I can't see any server/URL being created in my RPC call. It just works by magic, but now I want to get under the bonnet and start breaking it.
[Edit]
This is the interface my client uses to know which service on the server is to be called. I know that my web.xml deployment descriptor must have a URL pattern that matches this, and it does, because my server is invoked OK. The problem is, if I now decide to deploy my server elsewhere, how do I tell my client what server/domain name to use?
#RemoteServiceRelativePath("stockPrices")
public interface StockPriceService extends RemoteService
{
StockPrice[] getPrices(String[] symbols);
}
What I want to achieve first is to have a simple GWT client calling into an RPC service. I have this working, but only when the server is localhost.
Next step, I deploy my app to Google App Engine. What must I change now? My RPC service is not being called when I deploy my app to
http://stockwatcherjf.appspot.com/StockWatcher.html
1) Brian Slesinsky's excellent document on RPC: https://docs.google.com/document/d/1eG0YocsYYbNAtivkLtcaiEE5IOF5u4LUol8-LL0TIKU/edit#heading=h.amx1ddpv5q4m
2) @RemoteServiceRelativePath("stockPrices") lets GWT resolve the service URL relative to your host/server/domain, i.e. http://mydomain.com/gwtapp/stockPrices
3) You can search the Google I/O sessions from 2009-2012 for some more in-depth material on GWT RPC usage.
@RemoteServiceRelativePath gives the path of the servlet relative to GWT.getModuleBaseURL() (which is more or less the URL of the *.nocache.js script); it doesn't "just work by magic".
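If you do need to point the client at a different endpoint explicitly, one approach is to cast the async proxy to ServiceDefTarget; a minimal sketch, where the host and module path are placeholders:
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.ServiceDefTarget;

// In the client, e.g. inside onModuleLoad():
StockPriceServiceAsync service = GWT.create(StockPriceService.class);
// Overrides the URL otherwise derived from GWT.getModuleBaseURL()
// plus @RemoteServiceRelativePath("stockPrices").
((ServiceDefTarget) service).setServiceEntryPoint("http://JoeSoapHost:8080/stockwatcher/stockPrices");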
If you deploy your services on a different server than the one serving your client code, then you'll likely hit the Same-Origin Policy. CORS can help here, but you'll lose compatibility with IE (up to and including IE9). You'd better stick to serving everything from the same origin.

Best way to store Database configuration parameters?

I'm developing a web application at the moment. The web application needs to access a Patients database, which for now is a simple MySQL database but may well be replaced by some other DB (or data source) in the future. At the moment everything is hardcoded, but I would like some way to configure the DB connection (that is, the database URL, user, password, etc.).
What would be a simple and straightforward solution? It would be good if I could change the configuration by simple editing of a file.
I've seen there's the Properties API as well as Preferences. Or is there some idiom concerning servlets/web apps?
A servlet is part of a web app, and this web app is deployed in a Java EE container (Tomcat, WebLogic, etc.).
The standard way to get a database connection is to use JNDI to look up a DataSource instance, and to ask this DataSource for a connection. Most of the time the DataSource will pool database connections, to avoid creating and closing too many connections, and thus be much faster:
Context initCtx = new InitialContext();
DataSource dataSource = (DataSource) initCtx.lookup("java:comp/env/jdbc/MyDataSource");
Connection c = dataSource.getConnection();
try {
    // ...
}
finally {
    c.close(); // returns the connection to the pool, making it available to other threads
}
The DataSource will have to be declared in the web.xml file:
<resource-ref>
    <description>Datasource example</description>
    <res-ref-name>jdbc/MyDataSource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
It will also have to be defined (with its URL, number of connections, user, password, other settings, etc.) inside your Java EE container; this part is container-specific.
Read the following explanations for Tomcat: http://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html
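For example, on Tomcat the definition goes in the application's context.xml as a Resource element; a sketch, where the driver, URL, credentials, and pool sizes are placeholders:
<Resource name="jdbc/MyDataSource"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/patients"
          username="dbuser"
          password="secret"
          maxActive="20"
          maxIdle="5"
          maxWait="10000" />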
I think a configuration XML file deployed along with your web application is a good idea. When the application is initialized, the configuration is loaded, and the database connection information is made available from whatever internal context you create.
On IIS this is the standard approach, through the Web.config file.