Not able to read VCAP_SERVICES in Bluemix - ibm-cloud

I used the following code to read the VCAP_SERVICES environment variable of my Liberty application, but I am not getting any values; the result is null or "not found".
private void readECaaSEnvVars() {
    Map<String, String> env = System.getenv();
    String vcap = env.get("VCAP_SERVICES");
    if (vcap == null) {
        System.out.println("No VCAP_SERVICES found");
    } else {
        try {
            JSONObject obj = new JSONObject(vcap);
            String[] names = JSONObject.getNames(obj);
            if (names != null) {
                for (String name : names) {
                    if (name.startsWith("DataCache")) {
                        JSONArray val = obj.getJSONArray(name);
                        JSONObject serviceAttr = val.getJSONObject(0);
                        JSONObject credentials = serviceAttr.getJSONObject("credentials");
                        String username = credentials.getString("username");
                        String password = credentials.getString("password");
                        String endpoint = credentials.getString("catalogEndPoint");
                        String gridName = credentials.getString("gridName");
                        System.out.println("Found configured username: " + username);
                        System.out.println("Found configured password: " + password);
                        System.out.println("Found configured endpoint: " + endpoint);
                        System.out.println("Found configured gridname: " + gridName);
                        break;
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Your parsing code is OK. A few things to check:
In your Bluemix application dashboard, confirm that the DataCache service is bound to your application.
After a new service is bound, you need to restage the application for the environment variable to be updated:
cf restage <appname>
Print the environment variable to confirm the DataCache credentials are actually in there:
System.out.println("VCAP_SERVICES: " + System.getenv("VCAP_SERVICES"));
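As a quick diagnostic you can wrap that check in a tiny helper. This is only a sketch; the class name and messages are hypothetical (they are not part of any Liberty or Bluemix API). It inspects whatever environment map you hand it, so it is easy to test with a plain HashMap:

```java
import java.util.Map;

public class VcapCheck {
    // Returns a short diagnostic describing whether a DataCache
    // service entry appears in the given environment map.
    public static String describe(Map<String, String> env) {
        String vcap = env.get("VCAP_SERVICES");
        if (vcap == null) {
            return "VCAP_SERVICES not set - bind the service and restage";
        }
        if (!vcap.contains("DataCache")) {
            return "VCAP_SERVICES is set, but contains no DataCache entry";
        }
        return "DataCache entry present";
    }
}
```

In the application you would call VcapCheck.describe(System.getenv()) at startup and log the result.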
You should also know that, by default, the Liberty buildpack generates (or updates existing) server.xml configuration stanzas for the Data Cache instance. The bound Data Cache instance can then be accessed by the application using JNDI: the cache instance can either be injected into the application with a @Resource annotation, or looked up by the application with javax.naming.InitialContext.
To see your server.xml on Bluemix for Liberty application:
cf files myLibertyApplication app/wlp/usr/servers/defaultServer/server.xml
You should see something like:
<xsBindings>
    <xsGrid jndiName="wxs/myCache"
            id="myCache"
            gridName="${cloud.services.myCache.connection.gridName}"
            userName="${cloud.services.myCache.connection.username}"
            password="${cloud.services.myCache.connection.password}"
            clientDomain="${cloud.services.myCache.name}"/>
</xsBindings>
where your JNDI name is wxs/myCache. This avoids the need for parsing VCAP_SERVICES.


AEM 6.3 Cannot create groups with service user

Hoping someone on here can help me out of a conundrum.
We are trying to remove all Admin sessions from our application, but are stuck with a few due to JCR Access Denied exceptions. Specifically, when we try to create AEM groups or users with a service user we get an Access Denied exception. Here is a piece of code written to isolate the problem:
private void testUserCreation2() {
    String groupName = "TestingGroup1";
    Session session = null;
    ResourceResolver resourceResolver = null;
    String createdGroupName = null;
    try {
        Map<String, Object> param = new HashMap<String, Object>();
        param.put(ResourceResolverFactory.SUBSERVICE, "userManagementService");
        resourceResolver = resourceResolverFactory.getServiceResourceResolver(param);
        session = resourceResolver.adaptTo(Session.class);
        // Create UserManager object
        final UserManager userManager = AccessControlUtil.getUserManager(session);
        // Create a group
        LOGGER.info("Attempting to create group: " + groupName + " with user " + session.getUserID());
        if (userManager.getAuthorizable(groupName) == null) {
            Group createdGroup = userManager.createGroup(new Principal() {
                @Override
                public String getName() {
                    return groupName;
                }
            }, "/home/groups/testing");
            createdGroupName = createdGroup.getPath();
            session.save();
            LOGGER.info("Group successfully created: " + createdGroupName);
        } else {
            LOGGER.info("Group already exists");
        }
    } catch (Exception e) {
        LOGGER.error("Error while attempting to create group.", e);
    } finally {
        if (session != null && session.isLive()) {
            session.logout();
        }
        if (resourceResolver != null) {
            resourceResolver.close();
        }
    }
}
Notice that I'm using a subservice named userManagementService, which maps to a user named fwi-admin-user. Since fwi-admin-user is a service user, I cannot add it to the administrators group (this seems to be a design limitation in AEM). However, I have confirmed via the useradmin UI that the user has full permissions on the entire repository.
Unfortunately, I still get the following error when I invoke this code:
2020-06-22 17:46:56.017 INFO [za.co.someplace.forms.core.servlets.IntegrationTestServlet] Attempting to create group: TestingGroup1 with user fwi-admin-user
2020-06-22 17:46:56.025 ERROR [za.co.someplace.forms.core.servlets.IntegrationTestServlet] Error while attempting to create group. javax.jcr.AccessDeniedException: OakAccess0000: Access denied
    at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:231)
    at org.apache.jackrabbit.oak.api.CommitFailedException.asRepositoryException(CommitFailedException.java:212)
    at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.newRepositoryException(SessionDelegate.java:670)
    at org.apache.jackrabbit.oak.jcr.delegate.SessionDelegate.save(SessionDelegate.java:496)
Is this an AEM bug, or am I doing something wrong here?
Thanks in advance
So it seems the bug is actually in the old useradmin interface: it was not allowing me to add my system user to the administrators group, but this is possible in the new touch UI admin interface.
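For reference, the subservice-to-user mapping described in the question is usually declared through a Sling Service User Mapper amendment OSGi configuration. A sketch of such a config follows; the bundle symbolic name za.co.someplace.forms.core is an assumption inferred from the package names in the log, not something stated in the question:

```
# OSGi factory config (sketch, bundle symbolic name assumed):
# org.apache.sling.serviceusermapping.impl.ServiceUserMapperImpl.amended
user.mapping=["za.co.someplace.forms.core:userManagementService=fwi-admin-user"]
```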

Service Unavailable 503 Error during File Transfer with Openfire using Smack

I am trying to send a file through chat using openfire on the server and the smack java library.
This is the output I get:
Status :: Error Error :: null Exception :: service-unavailable(503)
Is it done? true
Here are my sender and receiver functions:
public void fileTransfer(String fileName, String destination) throws XMPPException {
    // Create the file transfer manager
    FileTransferManager manager = new FileTransferManager(connection);
    FileTransferNegotiator.setServiceEnabled(connection, true);
    // Create the outgoing file transfer
    OutgoingFileTransfer transfer = manager.createOutgoingFileTransfer(destination);
    // Send the file
    transfer.sendFile(new File(fileName), "You won't believe this!");
    try {
        Thread.sleep(10000);
    } catch (InterruptedException e) {
        // Ignore; we only wait to let the transfer progress before polling its status.
    }
    System.out.println("Status :: " + transfer.getStatus() + " Error :: " + transfer.getError() + " Exception :: " + transfer.getException());
    System.out.println("Is it done? " + transfer.isDone());
}
public void fileReceiver(final boolean accept, final String fileName) {
    // Create the file transfer manager
    final FileTransferManager manager = new FileTransferManager(connection);
    // Create the listener
    manager.addFileTransferListener(new FileTransferListener() {
        public void fileTransferRequest(FileTransferRequest request) {
            // Broadcast something here, e.g. whether users want to accept the file.
            // Check to see if the request should be accepted
            if (accept) {
                // Accept it
                IncomingFileTransfer transfer = request.accept();
                try {
                    // Note: recieveFile is the actual (misspelled) Smack API method name.
                    transfer.recieveFile(new File(fileName));
                    System.out.println("File " + fileName + " received successfully");
                    //InputStream input = transfer.recieveFile();
                } catch (XMPPException ex) {
                    Logger.getLogger(XmppManager.class.getName()).log(Level.SEVERE, null, ex);
                }
            } else {
                // Reject it
                request.reject();
            }
        }
    });
}
I had the same problem; I investigated the stanza and solved it this way.
Many people use "/Smack" or "/Resource" as the resource part of the JID, but it can be done another way.
The resource part changes with every presence change of the user. Let's say we want to send an image to this user:
"user1@mydomain"
You must add the "/Resource" part to this JID, so it becomes:
user1@mydomain/Resource
But the /Resource part changes with presence, so you must follow every presence change to keep the resource part up to date.
The best way to get the user's presence is in a roster listener: in the presenceChanged() method you get the user's latest resource part like this:
Roster roster = getRoster();
roster.addRosterListener(new RosterListener() {
    @Override
    public void entriesAdded(Collection<Jid> addresses) {
        Log.d("entriesAdded", "ug");
        context.sendBroadcast(new Intent("ENTRIES_ADDED"));
    }

    @Override
    public void entriesUpdated(Collection<Jid> addresses) {
        Log.d("entriesUpdated", "ug");
    }

    @Override
    public void entriesDeleted(Collection<Jid> addresses) {
        Log.d("entriesDeleted", "ug");
    }

    @Override
    public void presenceChanged(Presence presence) {
        Log.d("presenceChanged", "ug");
        // Resource from presence
        String resource = presence.getFrom().getResourceOrEmpty().toString();
        // Update the resource part for this user in the DB or preferences
        // ...
    }
});
The resource string will be some generated string like "6u1613j3kv", and the JID will become:
user1@mydomain/6u1613j3kv
That means you must create your outgoing transfer like this:
EntityFullJid jid = JidCreate.entityFullFrom("user1@mydomain/6u1613j3kv");
OutgoingFileTransfer transfer = manager.createOutgoingFileTransfer(jid);
transfer.sendFile(new File("DirectoryPath"), "Description");
And that is how I solved my problem with file transfer on Smack and Openfire.
In your case the JID is destination.
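To make the bare-JID/resource split concrete, here is a minimal string-level sketch; the class and method names are mine for illustration (in real code, Smack's JidCreate/EntityFullJid classes handle this properly):

```java
public class JidParts {
    // "user1@mydomain/6u1613j3kv" -> "user1@mydomain"
    public static String bareJid(String fullJid) {
        int slash = fullJid.indexOf('/');
        return slash < 0 ? fullJid : fullJid.substring(0, slash);
    }

    // "user1@mydomain/6u1613j3kv" -> "6u1613j3kv"
    public static String resource(String fullJid) {
        int slash = fullJid.indexOf('/');
        return slash < 0 ? "" : fullJid.substring(slash + 1);
    }
}
```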
Also note that you must set the following properties on your Openfire server:
xmpp.proxy.enabled = true
xmpp.proxy.externalip = MY_IP_ADDRESS
xmpp.proxy.port = 7777
Just to mention, I am using Openfire 4.0.2 and Smack 4.2.2.
This can also be configured the easy way: just set the resource on the XMPPTCPConnectionConfiguration.Builder, like:
XMPPTCPConnectionConfiguration.Builder configurationBuilder = XMPPTCPConnectionConfiguration.builder();
configurationBuilder.setResource("yourResourceName");

The client is not authorized to make this request -- while trying to get google cloud sql instance by java

I want to get the details of a Google Cloud SQL instance using a Google Cloud service account. I have created a service account with billing enabled. I have successfully used Google Cloud Storage functionality (bucket create, bucket delete, and so on) with this service account from Java code. But when I try the Cloud SQL functionality, I get the following error:
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "The client is not authorized to make this request.",
"reason" : "notAuthorized"
} ],
"message" : "The client is not authorized to make this request."
}
Below is my Java code snippet:
private SQLAdmin authorizeSqlAdmin() throws Exception {
    if (cloudSqlAdmin == null) {
        HttpTransport httpTransport = new NetHttpTransport();
        JsonFactory jsonFactory = new JacksonFactory();
        List<String> scopes = new ArrayList<String>();
        scopes.add(SQLAdminScopes.CLOUD_PLATFORM);
        scopes.add(SQLAdminScopes.SQLSERVICE_ADMIN);
        String propertiesFileName = "/cloudstorage.properties";
        Properties cloudStorageProperties = null;
        try {
            cloudStorageProperties = Utilities.getProperties(propertiesFileName);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
            return null;
        }
        Credential credential = new GoogleCredential.Builder()
                .setTransport(httpTransport)
                .setJsonFactory(jsonFactory)
                .setServiceAccountId(
                        cloudStorageProperties.getProperty(ACCOUNT_ID_PROPERTY))
                .setServiceAccountPrivateKeyFromP12File(
                        new File(cloudStorageProperties.getProperty(PRIVATE_KEY_PATH_PROPERTY)))
                .setServiceAccountScopes(scopes)
                .build();
        cloudSqlAdmin = new SQLAdmin.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName(
                        cloudStorageProperties.getProperty(APPLICATION_NAME_PROPERTY))
                .build();
    }
    return cloudSqlAdmin;
}

public DatabaseInstance getInstanceByInstanceId(String projectId, String instanceId) throws Exception {
    SQLAdmin cloudSql = authorizeSqlAdmin();
    Get get = cloudSql.instances().get(projectId, instanceId);
    DatabaseInstance dbInstance = get.execute();
    return dbInstance;
}
What am I missing here? Somebody please help me.
N.B.: I have added the service account as a member in the permissions tab and gave it CAN EDIT permission.
Solved this issue by replacing the instance id value.
From the console I got the instance id as project-id:instance-name.
I put the whole project-id:instance-name as the instance id, and that's why I got the above error.
After some trials I found that I need to pass just instance-name as instanceId here:
Get get = cloudSql.instances().get(projectId, instanceId);
That solved my problem.
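The fix can be captured in a tiny helper; the class name here is hypothetical and purely illustrative. It strips the project prefix off the project-id:instance-name string shown in the console, leaving just the value the SQL Admin API expects:

```java
public class SqlInstanceName {
    // "my-project:my-instance" -> "my-instance";
    // a bare instance name passes through unchanged.
    public static String fromConnectionName(String connectionName) {
        int colon = connectionName.lastIndexOf(':');
        return colon < 0 ? connectionName : connectionName.substring(colon + 1);
    }
}
```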
Updated answer:
If you are on Terraform and receive this error, it means that your master instance name is set wrongly. The master should refer to an instance name that already exists in Cloud SQL (i.e., whichever instance is to be the master of the instance you are creating):
master_instance_name = "${google_sql_database_instance.master.name}"
It would be the same for a JSON setup:
"masterInstanceName": "source-instance"

Mapping an Azure File Service CloudFileShare as a virtual directory on each instance of a cloud service

I have an Azure cloud service which I am attempting to upgrade for high availability, and I have subscribed to the Microsoft Azure File Service preview, which has been enabled in the preview portal. I have created a new storage account and can see that the storage account now has a Files endpoint located at:
https://<account-name>.file.core.windows.net/
Within my web role I have the following code which looks to see if a share called scorm is created and if not it creates it:
public static void CreateCloudShare()
{
    CloudStorageAccount account = CloudStorageAccount.Parse(System.Configuration.ConfigurationManager.AppSettings["SecondaryStorageConnectionString"].ToString());
    CloudFileClient client = account.CreateCloudFileClient();
    CloudFileShare share = client.GetShareReference("scorm");
    share.CreateIfNotExistsAsync().Wait();
}
This works without issue. My problem is that I am unsure as to how to map the CloudShare that has been created as a virtual directory within my cloud service. On a single instance I was able to do this:
public static void CreateVirtualDirectory(string VDirName, string physicalPath)
{
    try
    {
        if (VDirName[0] != '/')
            VDirName = "/" + VDirName;
        using (var serverManager = new ServerManager())
        {
            string siteName = RoleEnvironment.CurrentRoleInstance.Id + "_" + "Web";
            //Site theSite = serverManager.Sites[siteName];
            Site theSite = serverManager.Sites[0];
            foreach (var app in theSite.Applications)
            {
                if (app.Path == VDirName)
                {
                    // already exists
                    return;
                }
            }
            Microsoft.Web.Administration.VirtualDirectory vDir = theSite.Applications[0].VirtualDirectories.Add(VDirName, physicalPath);
            serverManager.CommitChanges();
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.EventLog.WriteEntry("Application", ex.Message, System.Diagnostics.EventLogEntryType.Error);
        //System.Diagnostics.EventLog.WriteEntry("Application", ex.InnerException.Message, System.Diagnostics.EventLogEntryType.Error);
    }
}
I have looked and seen that it is possible to map this via powershell but I am unsure as to how I could call the code within my web role. I have added the following method to run the powershell code:
public static int ExecuteCommand(string exe, string arguments, out string error, int timeout)
{
    Process p = new Process();
    int exitCode;
    p.StartInfo.FileName = exe;
    p.StartInfo.Arguments = arguments;
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardError = true;
    p.Start();
    error = p.StandardError.ReadToEnd();
    p.WaitForExit(timeout);
    exitCode = p.ExitCode;
    p.Close();
    return exitCode;
}
I know that the command I have to run is:
net use z: \\<account-name>.file.core.windows.net\scorm /u:<account-name> <account-key>
How can I use this from within my web role? My web role code is as follows, but it does not seem to be working:
public override bool OnStart()
{
    try
    {
        CreateCloudShare();
        ExecuteCommand("net.exe", "user " + userName + " " + password + " /add", out error, 10000);
        ExecuteCommand("netsh.exe", "firewall set service type=fileandprint mode=enable scope=all", out error, 10000);
        ExecuteCommand("net.exe", " share " + shareName + "=" + path + " /Grant:" + userName + ",full", out error, 10000);
    }
    catch (Exception ex)
    {
        System.Diagnostics.EventLog.WriteEntry("Application", "CREATE CLOUD SHARE ERROR : " + ex.Message, System.Diagnostics.EventLogEntryType.Error);
    }
    return base.OnStart();
}
Our blog post Persisting connections to Microsoft Azure Files has an example of referencing Azure Files shares from web and worker roles. Please see the "Windows PaaS Roles" section and also take a look at the note under "Web Roles and User Contexts".
The library RedDog.Storage makes it really easy to mount a drive in your cloud service without having to worry about P/Invoke:
Install-Package RedDog.Storage
After the package is installed, you can simply use the extension method "Mount" on your CloudFileShare:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Mount a drive.
        FilesMappedDrive.Mount("P:", @"\\acc.file.core.windows.net\reports", "sandibox", "key");

        // Unmount a drive.
        FilesMappedDrive.Unmount("P:");

        // Mount a drive for a CloudFileShare.
        CloudFileShare share = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"))
            .CreateCloudFileClient()
            .GetShareReference("reports");
        share.Mount("P:");

        // List drives mapped to an Azure Files share.
        foreach (var mappedDrive in FilesMappedDrive.GetMountedShares())
        {
            Trace.WriteLine(String.Format("{0} - {1}", mappedDrive.DriveLetter, mappedDrive.Path));
        }

        return base.OnStart();
    }
}
More information: http://fabriccontroller.net/blog/posts/using-the-azure-file-service-in-your-cloud-services-web-roles-and-worker-role/

I am trying to update status to twitter using twitter4j but it does not work

I succeeded in getting all the credentials (oauth_token, oauth_verifier).
With them, I tried to post a text to a Twitter account, but it always fails with the error message "No authentication challenges found".
I found some suggested solutions like "check the time zone automatically" and "import the latest twitter4j library", but after trying them it still does not work.
Is there anyone who can show me the way?
The code is like below:
public static void updateStatus(final String pOauth_token, final String pOauth_verifier) {
    new Thread() {
        public void run() {
            Looper.prepare();
            try {
                TwitterFactory factory = new TwitterFactory();
                AccessToken accessToken = new AccessToken(pOauth_token, pOauth_verifier);
                Twitter twitter = factory.getInstance();
                twitter.setOAuthConsumer(Cdef.consumerKey, Cdef.consumerSecret);
                twitter.setOAuthAccessToken(accessToken);
                if (twitter.getAuthorization().isEnabled()) {
                    Log.e("btnTwSend", "Auth values are set; calling the API.");
                    Status status = twitter.updateStatus(Cdef.sendText + " #" + String.valueOf(System.currentTimeMillis()));
                    Log.e("btnTwSend", "status:" + status.getText());
                }
            } catch (Exception e) {
                Log.e("btnTwSend", e.toString());
            }
        }
    }.start();
}
"No authentication challenges found"
I think you are missing the access token secret in your code; that is why you are getting this exception.
Try the following:
ConfigurationBuilder configurationBuilder;
Configuration configuration;

// Set the proper configuration parameters
configurationBuilder = new ConfigurationBuilder();
configurationBuilder.setOAuthConsumerKey(TWITTER_CONSUMER_KEY);
configurationBuilder.setOAuthConsumerSecret(TWITTER_CONSUMER_SECRET);
// Access token
configurationBuilder.setOAuthAccessToken(ACCESS_TOKEN);
// Access token secret
configurationBuilder.setOAuthAccessTokenSecret(ACCESS_TOKEN_SECRET);

// Get the configuration object based on the params
configuration = configurationBuilder.build();

// Pass it to the twitter factory to get the proper twitter instance.
twitterFactory = new TwitterFactory(configuration);
twitter = twitterFactory.getInstance();

// Use this instance to update
twitter.updateStatus("Your status");
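Since the root cause is an incomplete credential set, a trivial guard makes the failure mode explicit before you ever touch the network. This helper is hypothetical (not part of twitter4j); it is just a sketch of the pre-check:

```java
public class OAuthCredentials {
    // True only when all four OAuth parameters are non-null and non-empty,
    // which is what twitter4j needs before updateStatus can succeed.
    public static boolean isComplete(String consumerKey, String consumerSecret,
                                     String accessToken, String accessTokenSecret) {
        String[] parts = { consumerKey, consumerSecret, accessToken, accessTokenSecret };
        for (String p : parts) {
            if (p == null || p.isEmpty()) {
                return false;
            }
        }
        return true;
    }
}
```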
I finally found the reason.
I thought the parameters named 'oauth_token' and 'oauth_verifier' were members of the access token, but that was not true.
I had to go through one more step to get the correct key, and that step needs 'oauth_token' and 'oauth_verifier' to obtain the access token.
So the code must add one more line:
mAccessToken = mTwitter.getOAuthAccessToken(REQUEST_TOKEN, OAUTH_VERIFIER);