Unable to integrate CQ 5.6.1 with SiteCatalyst

I'm having difficulty integrating AEM 5.6.1 with SiteCatalyst. I've followed the standard procedure to connect AEM to SiteCatalyst, and the cloud services configuration accepts my login, but the framework setup fails with the browser message 'We were not able to login to SiteCatalyst. Please check your credentials and try again.' Behind the scenes, the server log shows:
12.12.2014 14:10:06.967 *WARN* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.SitecatalystHttpClientImpl Data center 'https://api3.omniture.com/admin/1.3/rest/' responded with errors {"error":{"code":500,"message":"Internal Server Error"}}
12.12.2014 14:10:06.967 *ERROR* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.servlets.SitecatalystServlet Call to SiteCatalyst method 'Company.GetReportSuites' failed com.day.cq.analytics.sitecatalyst.SitecatalystException: not authenticated
I've tried accessing via the API Explorer and it works.
I've tried the troubleshooting guide without success.
I can log in to SiteCatalyst, I'm an admin, and I am in the web services access group.
I've tried a clean install of CQ 5.6.1 with Geometrixx - it doesn't work either.
I've tried this from a server and from a localhost/dev machine with the same results. There is no proxy involved. I've even tried using the shared secret as the password, but then it doesn't connect at all and fails on the configuration screen.
What might cause this to fail?

If it doesn't work with a fresh install and Geometrixx, then it's probably an Adobe bug. That's typically the first thing support will ask you about.
I would also verify using Geometrixx Outdoors, or a more recent demo site, on your fresh install, just to ensure it's not an outdated ClientLib issue.
I know this isn't a direct answer to your question, but honestly, I would approach the integration differently. I've worked with the AEM-SC framework and it's buggy at best. It's very finicky, it doesn't REALLY work the way the documentation claims, and it requires that you're very specific about what Clientlibs are on the page.
Moving forward, I think using Adobe Dynamic Tag Manager is the better approach, for many reasons. My understanding is that it's Adobe's recommendation as well. I'd consider moving to that. In AEM 5.6.1, you'll have to customize your integration with DTM, but it's not very hard.

Solution: add a property on the configuration node for SiteCatalyst (e.g. /etc/cloudservices/sitecatalyst/my-sc-configuration):
server=https://api.omniture.com/admin/1.2/rest/
It also seems to work with newer API versions such as https://api3.omniture.com/admin/1.3/rest/
It would appear that 5.6.1 ignores the OSGi configuration, at least for the configuration screens. With this extra property, the framework page loads without error and allows selection of the report suite ID (RSID).
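In case it helps, the property can also be set over HTTP via the Sling POST servlet instead of through CRXDE. This is only a sketch, assuming a local author instance on the default port 4502, default admin credentials, and the example configuration path above:

curl -u admin:admin \
  -F "server=https://api.omniture.com/admin/1.2/rest/" \
  http://localhost:4502/etc/cloudservices/sitecatalyst/my-sc-configuration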

Can I get the Swagger interface to appear on a deployed Azure web API?

When one creates an ASP.NET Core Web API in Visual Studio 2022 and tests it locally, one gets a convenient Swagger page, built upon an OpenAPI definition, for testing all HTTP endpoints.
However, when the API is deployed and I try to access {path-to-api}/swagger, it returns a 404 Not Found error. On localhost everything works, whether the database sits on my own machine or in the Azure cloud (in which case I put the Azure SQL Database connection string into appsettings.json).
So is there a way to achieve this, preferably without too much hassle? Or am I wrong in wanting this - do developers mostly test their APIs locally? I want the Swagger UI online only for testing.
The problem is getting the Swagger functionality into the cloud. Is it possible, and is it good practice?
If you look at the startup code, you will notice that Swagger is only loaded during development, guarded by an if check. Commenting that check out, or expanding it based on environment, will allow a published version to serve the page on the target host.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
I generally do that for first publishes, or for Dev/Test environments, to see it running. Once it is no longer needed, I put the check back in.
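As a sketch of the "expand it based on environment" option - the Staging environment here is just an assumed example, so substitute whatever environment names your deployments actually use:

if (app.Environment.IsDevelopment() || app.Environment.IsStaging())
{
    // Serve the OpenAPI document and the Swagger UI outside Development too
    app.UseSwagger();
    app.UseSwaggerUI();
}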
It may also be viable to leave it turned on in a Dev or UAT environment, because one is often also publishing the OpenAPI definition to APIM (Azure API Management), which takes the API and generates its own developer portal, separate from the initial publish.
Also, once published it is not the default page; one still has to browse to a path such as .../swagger/index.html.
I'm aborting this mission to deploy the Swagger interface to Azure along with my API. It's bad security practice to make the HTTP request methods so visible to all. So the answer to my question - do developers mostly test their APIs locally? - is apparently yes.
I wondered if I should remove the question, but I would like to let it stand, in case anyone else is contemplating doing the same thing - exposing an API online with the Swagger UI.

ArcGIS Portal inaccessible externally on AWS esri instance

I've been installing our very own ArcGIS Enterprise instance on AWS.
The instance I chose is ArcGIS Enterprise on Ubuntu.
It is important to mention that this installation was conducted without using Cloudbuilder. I know it is a tool that automates the process, but I was introduced to it only after I had already started to attack my current instance's problems head-on. So, please don't advise me to restart the whole process from scratch using it.
The current status of my instance is that ArcGIS Server is working: I can access it, upload services, and we have already started using it in our staging environment.
I have authorized all of the software on the server and verified it is licensed. Portal for ArcGIS is my main problem.
Whenever I try to access it externally (from my office computer), it redirects to the internal IP for some reason and then times out on that request.
For example, typing (from my browser):
https://[dns address]:7443/arcgis/home
redirects to:
https://[internal IP]:7443/arcgis/home
and this times out (a "took too long to respond" error).
The funny thing is that I can access the portaladmin area;
it's only the portal itself that doesn't work.
Another curious thing: if I type the URL without the port, I get a page, but exceptions are thrown in the browser.
For example:
https://[dns address]/arcgis
This leads to a page where the ArcGIS world icon can be seen, but nothing else loads, and there are 404 "resource not found" errors for some of the components of the page.
Any ideas? What further information should I include to help answer this question?
I've looked everywhere, but Esri's documentation is not very forthcoming with examples or information that would help me understand what I did wrong.
Also, I don't think this is an ArcGIS software issue; it looks like it might be a proxy issue. Has anyone else experienced something like this?
Thanks!
I found the solution.
It was a combination of two problems:
Tomcat, which was running the web adaptor service, was crashing because of an entirely different and unrelated issue.
The Portal was missing a web adaptor configuration and therefore did not have its web context property set to the web adaptor URL.
After fixing both of these problems, I was able to access the portal correctly.
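For anyone hitting the same redirect-to-internal-IP symptom: that property is set through the Portal Admin API (System > Properties). A sketch of the JSON, using the property name WebContextURL from current Portal for ArcGIS documentation and reusing the placeholder address from the question:

{
  "WebContextURL": "https://[dns address]/arcgis"
}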

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI, because that is already in wide use in our environment. I have a plug-in and a feature built just from the POI .jars that allow me to build our current POI applications, as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an update site so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (which I wish had existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
One caveat: the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point: I can install my feature to my Designer client from the Eclipse update site and it works great. However, installing fails once I import that update site into our updatesite.nsf database. This means that while the developers can all install from the update site if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the OSGi console and it is largely unhelpful. I am getting the following error as I'm trying to install:
SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know this post is over 5 years old, but for those who find it and are trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
the cause is the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of its site.xml.
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
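For illustration, the Archives entry ends up in site.xml roughly like the following sketch - the server host name, plug-in id, and version here are placeholders, not values from the original post:

<site>
   <archive path="plugins/com.example.poi.wrapper_1.0.0.jar"
            url="http://yourdominoserver/updatesite.nsf/"/>
   <feature url="features/com.example.poi.feature_1.0.0.jar"
            id="com.example.poi.feature" version="1.0.0"/>
</site>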
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a link file: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site - it then picks up the features and plug-ins from there. After a build you need to restart Designer/the server to activate the updated feature.
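A .link file is a one-line plain-text file; as a sketch (the path is just an assumed example):

path=C:/work/my-update-site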
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
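The notes.ini setting referred to is, as far as I know, the dynamic-bundles one; a sketch, assuming the database is named updatesite.nsf in the server's data directory:

OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf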
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate that it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator's computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Login with Facebook via Rails

As a relative newbie to Rails, I'm not sure how to approach this. I am looking to add a basic "Login with Facebook" feature to a practice site I am developing. I am stuck on two fronts:
Most Rails plugins dealing with Facebook seem out of date or poorly documented. I've encountered Facebooker (which seems to have died off, from what I see) and mini_fb (more recent, but with very little documentation). I tried to install mini_fb: I ran gem install mini_fb, then bundle install, and finally added gem 'mini_fb' to my Gemfile, but my server complains with a 'no such file to load' error. Are there any other steps necessary to allow your app to use a gem?
I am confused by how the "Login with Facebook" feature works from an overall bird's-eye view. I understand that my App ID is passed into the login feature, and I ultimately get an access token (after resubmitting with my App Secret Key and an authorization code). But how does this integrate with some kind of user system on a Rails site? Since the access token doesn't last forever, do I need to renew it periodically? Is that done by simply waiting to catch an access-token error from a Graph request and redoing the entire authorization procedure?
Have you tried OmniAuth?
It supports a whole host of external providers, including Facebook.
There are also a number of Railscasts on its use.
The correct order for installing a gem in your application is to first add it to your Gemfile, then run bundle install in your console (note there is no Gemfile.rb; the file is simply called Gemfile). That being said, OmniAuth is probably the best path for you.
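As a minimal sketch of what an OmniAuth setup might look like - the omniauth-facebook strategy gem and the environment variable names are assumptions, not something from the original question:

# Gemfile
gem 'omniauth'
gem 'omniauth-facebook'

# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_APP_SECRET']
end

With this in place, sending the user to /auth/facebook starts the login flow, and the callback at /auth/facebook/callback exposes the authentication data in request.env['omniauth.auth'], which you can use to find or create a local user record.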

Where does GWT's Hosted Mode Jetty Run From?

I'm trying to call a web service from my back-end Java code when it's running in hosted mode. Everything loads fine, the GWT RPC call works and I can see it on the server; then, as soon as it tries to call an external web service (using JAX-WS), the embedded Jetty falls over with an Internal Server Error (500).
I have cranked the log level all the way up to ALL, but I still don't see any stack traces or cause for this error. I just get one line about the 500 error, with the request header and response.
Does anyone know if the internal Jetty keeps a log file somewhere, or how I can go about debugging what's wrong?
I'm running GWT 1.7 on OS X 10.6.1
Edit: I know that I can use the -noserver option, but I'm genuinely interested in finding out where this thing lives!
From the documentation:
You can also use a real production server while debugging in hosted mode. This can be useful if you are adding GWT to an existing application, or if your server-side requirements have become more than the embedded web server can handle. See this article on how to use an external server in hosted mode.
So the simplest solution would be to use the -noserver option and use your own Java server - far fewer limitations that way, without any drawbacks (that I know of).
If you are using the Google Plugin for Eclipse, it's easily set up in the properties of the project. Detailed information on configuration can be found on the official site.
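For reference, with -noserver the GWT 1.7 hosted mode shell is launched against your own server along these lines. This is only a sketch: the class path entries, port, and module name are placeholders for your project's values (gwt-dev-mac.jar being the platform jar on OS X):

java -cp "src:war/WEB-INF/classes:gwt-user.jar:gwt-dev-mac.jar" \
  com.google.gwt.dev.HostedMode \
  -noserver -startupUrl http://localhost:8080/myapp/MyApp.html \
  com.example.MyApp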
Edit: you could try bypassing the Hosted Mode TreeLogger, as described here (http://blog.kornr.net/index.php/2009/01/27/gently-asking-the-gwt-hosted-mode-to-not):
Just create a file called "commons-logging.properties" at the root of your classpath, and add one of the following lines:
To use the Log4j backend:
org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
To use the JDK 1.4 backend:
org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger
To use the SimpleLog backend:
org.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog
Edit 2: the GWT trunk now also supports the -logfile parameter to enable file logging, but it probably won't help in this case, since the problem lies in the way hosted mode treats the exceptions, not in the way it presents them.