ArcGIS Portal inaccessible externally on AWS Esri instance - redirect

I've been installing our very own ArcGIS Enterprise instance on AWS.
The instance I chose is ArcGIS Enterprise on Ubuntu.
It is important to mention that this installation was conducted without using Cloudbuilder. I know it is a tool that automates the process, but I was introduced to it only after I had already started to attack my current instance problems head-on. So, please don't advise me to restart the whole process from scratch using it.
The current status of my instance is that my ArcGIS Server is working. I can access it, upload services, and we have already started using it in our Staging environment.
I have authorized all of the software on the server and verified it is licensed. The Portal for ArcGIS is my main problem.
Whenever I try to access it externally (from my office computer), it seems to redirect to the internal IP for some reason, and then times out on that request.
For example, typing (from my browser):
https://[dns address]:7443/arcgis/home
redirects to:
https://[internal IP]:7443/arcgis/home
and this times out (a "took too long to respond" error).
The funny thing is that I can access the portaladmin area.
It's only the portal itself that doesn't work.
Also, another curious thing is that if I type the URL without the port, a page loads but exceptions are thrown in the browser.
For example:
https://[dns address]/arcgis
This leads to a page where the ArcGIS world icon can be seen, but nothing else loads and there are 404 "resource not found" exceptions for some of the components of the page.
Any ideas? What further information should I include to answer this question?
I've looked everywhere but Esri's documentation is not very forthcoming with examples and information to understand what it is I did wrong.
Also, I don't think this is an ArcGIS software issue. It looks like this might be a proxy issue. Has anyone else experienced something like this?
Thanks!

I found the solution.
It was a combination of two problems:
The Tomcat instance running the Web Adaptor service was crashing because of an entirely different and unrelated issue.
The Portal was missing a web adaptor configuration and therefore did not have the WebContext property set to the web adaptor URL.
After fixing both of these problems, I was able to access the portal correctly.
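For anyone landing here with the same redirect symptom, below is a rough sketch (not Esri's official tooling) of setting the WebContextURL system property through the Portal Administrator Directory, so the portal builds its links from the web adaptor URL instead of the internal IP. The host names, credentials and paths are placeholders from my setup; adjust them to yours.
```typescript
// Rough sketch (Node 18+): set the portal's WebContextURL so external requests
// resolve to the Web Adaptor URL rather than the internal IP.
// Hostnames, credentials and paths below are placeholders.
const portalInternal = "https://internal-host:7443/arcgis"; // internal portal URL
const webAdaptorUrl = "https://dns-address/arcgis";         // public Web Adaptor URL

async function generateToken(): Promise<string> {
  const body = new URLSearchParams({
    username: "portaladmin", // placeholder admin account
    password: "changeit",    // placeholder password
    client: "referer",
    referer: portalInternal,
    f: "json",
  });
  const res = await fetch(`${portalInternal}/sharing/rest/generateToken`, { method: "POST", body });
  const json = await res.json();
  if (!json.token) throw new Error(`Token request failed: ${JSON.stringify(json)}`);
  return json.token;
}

async function setWebContextUrl(): Promise<void> {
  const token = await generateToken();
  const body = new URLSearchParams({
    properties: JSON.stringify({ WebContextURL: webAdaptorUrl }),
    token,
    f: "json",
  });
  const res = await fetch(`${portalInternal}/portaladmin/system/properties/update`, { method: "POST", body });
  // The portal service typically needs a restart for the change to take effect.
  console.log(await res.json());
}

setWebContextUrl().catch(console.error);
```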

Related

Vercel: Task timed out after 10.01 seconds

I recently deployed a Next.js application for a software engineering boot camp. I am using Vercel for hosting the web app. The problem I am having has been spoken about on the internet before. However, I couldn't find much helpful information.
When I look at the real-time logs for my application from my Vercel dashboard, a 504 error gets thrown for multiple API routes I have created. I am aware that Vercel places restrictions on requests depending on the hosting plan someone subscribes to. However, I can't help but wonder if I have overlooked an important step when deploying my application.
When deploying my application, I did the following things:
Connected a session store to my MongoDB database.
Created a password-protected MongoDB Atlas account (credentials are environment variables).
White-listed all IP addresses so that any user can interact with their portion of the database.
I would appreciate help finding out if these errors are my fault and if there is anything I can do about them or if they are solely caused by the restrictions of the "Hobby" plan.
Thank you very much in advance,
-Sam
I had a similar issue. It turned out that I simply had not added the Vercel IP addresses to the Network Access page in MongoDB Atlas, so MongoDB was blocking Vercel from accessing the data.
You need to integrate MongoDB into your project on Vercel.
Go to your project settings in Vercel and open the Integrations tab. Click the Browse Marketplace button and find MongoDB. Click the Add Integration button and follow the instructions.
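Once the integration is set up, it typically exposes a connection string as an environment variable (the MONGODB_URI name below is an assumption, so check your project's environment variables). Here is a minimal sketch of reading it from a shared helper while caching the client across invocations, so reconnecting on every request doesn't eat into the function time limit:
```typescript
// lib/mongo.ts -- hypothetical helper; assumes the "mongodb" driver (v4+) is installed
// and MONGODB_URI is defined in the Vercel project's environment variables.
import { MongoClient } from "mongodb";

let cachedClient: MongoClient | null = null;

// Reuse one client per warm serverless instance instead of reconnecting on every request.
export async function getMongoClient(): Promise<MongoClient> {
  const uri = process.env.MONGODB_URI;
  if (!uri) {
    throw new Error("MONGODB_URI is not set");
  }
  if (!cachedClient) {
    cachedClient = await new MongoClient(uri).connect();
  }
  return cachedClient;
}
```
Any API route can then call `await getMongoClient()` rather than constructing a new client per request.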
Hope this helps... I know this was asked quite a while ago.
You have to troubleshoot the issue by doing the following:
1. Take the Next.js app out of the equation. Using Postman (https://www.postman.com/downloads/), confirm the output of your API and how long it takes. Given that function invocations have a time limit, you need to optimize the API to stay under that threshold.
2. If the API times are fine when called from outside your app, the next step is to troubleshoot the API route itself: remove the DB parts, just echo back a success message, and check the function invocation time (see the sketch below).
3. If step 2 turns out to be the issue, reach out to Vercel support. Another option is to host the API outside Vercel and allow the cross-domain API calls from your application.
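For step 2, here is a minimal sketch of an API route with the database code stripped out, used only to measure the bare invocation time (the file path and payload are arbitrary):
```typescript
// pages/api/echo.ts -- hypothetical route for timing a bare invocation, no DB access.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(_req: NextApiRequest, res: NextApiResponse) {
  // If even this approaches the 10 second limit, the problem isn't the database query.
  res.status(200).json({ ok: true, timestamp: Date.now() });
}
```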

How to get started with vSphere 6.0 and set up the web client?

I've been tasked with setting up some VMs. I've been given some admin details but no further guidance. The server is a fresh install.
My problem is that I'm on Linux/OS X and don't want to run Windows beyond the initial setup, after which I hope to be able to manage things through the web client.
I think there is an ESXi installation; this would be version 6. How do I set up the web client?
I've installed vSphere Client on a local Windows VM, but I'm not sure what to do with it.
The documentation is pretty awful and there hasn't been much useful info on the net. I'm really stuck, as I didn't set these servers up and haven't used servers like this before, so I have no context or understanding of the VMware ecosystem beyond using a virtual machine locally (and even then I've preferred VirtualBox)!
Any advice would be amazing.
P.S. Accessing https://[ipaddress]/vsphere-client does not work. It produces a blank browser page with no HTML served, not even an error.
If you have the name of the server on which the VMs are stored, type it into the address bar of a web browser; that gives you management options, or alternatively use the login screen it serves.

Presence Insights server results in an initialization error

The Presence Insights server on Bluemix has been quite unstable lately, and I cannot get access to the server.
Is there any way to deploy the instance on a SoftLayer server for production?
There were definitely some problems with PI last evening, but the team worked until early this morning to get them resolved. It looks like the system is back up and functioning. Are you still seeing the issue?
Also, as a general reference, this page has service status details on it that may be of help if you notice a problem.

Unable to integrate CQ5.6.1 with Site Catalyst

I'm having difficulty integrating AEM 5.6.1 with SiteCatalyst. It lets me connect successfully in the configuration, but does not work on the framework setup.
I've followed the standard procedure to connect AEM to SiteCatalyst and it accepts my login in the configuration, but it fails on the framework setup with the browser message 'We were not able to login to SiteCatalyst. Please check your credentials and try again.' Behind the scenes, the server log shows:
12.12.2014 14:10:06.967 *WARN* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.SitecatalystHttpClientImpl Data center 'https://api3.omniture.com/admin/1.3/rest/' responded with errors {"error":{"code":500,"message":"Internal Server Error"}}
12.12.2014 14:10:06.967 *ERROR* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.servlets.SitecatalystServlet Call to SiteCatalyst method 'Company.GetReportSuites' failed com.day.cq.analytics.sitecatalyst.SitecatalystException: not authenticated
I've tried accessing via the API Explorer and it works.
I've tried the troubleshooting guide without success.
I can log in to Site Catalyst, I'm an admin, I am in the web services access group.
I've tried using a clean install of CQ5.6.1 with geometrixx - it doesn't work either.
I've tried this from a server and from a localhost/dev machine with the same results. No proxy. I've even tried using the shared secret as the password but then it doesn't connect at all, and fails on the configuration screen.
What might cause this to fail?
If it doesn't work with a fresh install and Geometrixx, then it's probably an Adobe bug. That's typically the first thing support will ask you about.
I would also verify using Geometrixx Outdoors, or a more recent demo site, on your fresh install, just to ensure it's not an outdated ClientLib issue.
I know this isn't a direct answer to your question, but honestly, I would approach the integration differently. I've worked with the AEM-SC framework and it's buggy at best. It's very finicky, it doesn't REALLY work the way the documentation claims, and it requires that you're very specific about what Clientlibs are on the page.
Moving forward, I think using Adobe Dynamic Tag Manager is the better approach, for many reasons. My understanding is that it's Adobe's recommendation as well. I'd consider moving to that. In AEM 5.6.1, you'll have to customize your integration with DTM, but it's not very hard.
Solution: Add a property on the configuration node for SiteCatalyst (e.g. /etc/cloudservices/sitecatalyst/my-sc-configuration):
server=https://api.omniture.com/admin/1.2/rest/
It also seems to work with newer API versions such as https://api3.omniture.com/admin/1.3/rest/
It would appear that for 5.6.1 it ignores the OSGi configuration, at least for the configuration screens. With this extra property, the framework page loads without error and allows selection of the RSID.
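If you'd rather script the change than set the property in CRXDE, here is a rough sketch using AEM's Sling POST servlet; the host, credentials and node path are examples for a local author instance and your configuration node path will differ.
```typescript
// Hypothetical sketch (Node 18+): write the "server" property onto the
// SiteCatalyst configuration node via AEM's Sling POST servlet.
const authorHost = "http://localhost:4502";                               // example author instance
const configNode = "/etc/cloudservices/sitecatalyst/my-sc-configuration"; // example node path

async function setSitecatalystEndpoint(): Promise<void> {
  const body = new URLSearchParams({
    server: "https://api.omniture.com/admin/1.2/rest/",
  });
  const res = await fetch(authorHost + configNode, {
    method: "POST",
    headers: {
      // example credentials for a local dev instance
      Authorization: "Basic " + Buffer.from("admin:admin").toString("base64"),
    },
    body,
  });
  if (!res.ok) {
    throw new Error(`Sling POST failed with HTTP ${res.status}`);
  }
}

setSitecatalystEndpoint().catch(console.error);
```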

How to track down errors when Azure web role starts up?

I have a web role in Azure. It runs fine locally in the development app fabric, but fails silently when deployed to Azure (simply no response at all for any request).
I assume it's some problem with the web.config, but the failure happens so early that it occurs before I can set up the diagnostic code in Global.asax. As I said, it works fine locally, but there is simply no response at all from the Azure system.
How can I find out what specifically is wrong so I can fix it, e.g. get the exception text, stack trace, IIS/application/system error log, or anything else that could point me to the real problem?
The absolute first thing that is run in a Web role is not your application but the OnStart() method in WebRole.cs in your Azure project. This is the place to add code to monitor your Web site.
The standard technique is to copy your application trace logs and the Windows event logs to Azure table storage, together (if appropriate) with instrumentation for CPU usage, IIS statistics and what-have-you.
A good introduction to this is here: http://blog.bareweb.eu/2011/01/beginning-azure-diagnostics/
and a good followup with details on the specifics you'll need in your application is here: http://blog.bareweb.eu/2011/03/implementing-azure-diagnostics-with-sdk-v1-4/
which remains applicable for the Azure SDK 1.5.
Once you are capturing diagnostics, you can either use Visual Studio to view them directly, or you can use a tool like the Cerebrata Azure Diagnostics Manager to graph and filter them automatically. This tool is a little rough around the edges (especially for larger systems with multiple instances: the graphs aren't really useful) but is as good as it gets at the moment.
An alternative approach is to use Remote Desktop to connect to the remote instance and do some spelunking in the Windows event logs and suchlike. You can also use the Internet Explorer browser that's on the remote instance to directly connect locally to the application and see any errors etc. that may otherwise be hidden.
Personally I'd only do this if the diagnostic storage mechanism isn't working: production servers really should have remote desktop access turned off altogether to reduce the possible surface area for external attack.
Setting up diagnostics is the best long-term solution for tracking errors in your application. If you want something a little more ad hoc, you can either catch the errors and write them to blob storage or use your own lightweight trace listener.