Chrome sometimes fails connecting to static resources (js, css). Is this a known bug? - google-chrome-devtools

One of my VIP customers said that he could not open my website.
So I checked his Chrome DevTools remotely and found that some static resources stayed pending until they eventually failed to load (as shown below).
After I turned on the Disable cache option and refreshed the page with Cmd + R, the issue remained the same.
The weird thing is that the page only started working again once I force-closed all Chrome tabs and reopened it.
I'm pretty sure my CDN network is healthy and covers his region (he's in China, so I use Aliyun, which is just about the biggest CDN provider in China).
Also, I opened his terminal and executed:
ping cdn.shulex-voc.com
which showed there was no packet loss.
And
curl -I https://cdn.shulex-voc.com/shulex-voc/5618_1936cc20.js
which showed he could actually load this resource at that moment on the same computer.
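For anyone debugging something similar, one extra check worth running on the client machine is to compare the edge IP the system resolver returns with the one curl actually connects to. This is only a sketch, assuming nslookup and curl are available there:

nslookup cdn.shulex-voc.com
curl -sv -o /dev/null https://cdn.shulex-voc.com/shulex-voc/5618_1936cc20.js 2>&1 | grep -E 'Trying|Connected to|HTTP/'

If both look fine while Chrome still shows the requests as pending, the problem is more likely in the browser's own connection or DNS cache than on the CDN side.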

Related

Unable to add remote node in Rundeck 4.9.0

Following the docs from Rundeck; however, the only button I have under the "Sources" tab is "ResourceModelSource".
When I click that button I get a blank page.
PS: the issue also happened on the previous version. I'm new to Rundeck, so I can't say that it EVER worked.
I tried adding a manual resources.xml in the project directory (which I had to create by hand, which tells me that's another issue) and reloading Rundeck, but that did not seem to work.
While it's not the likely cause, I'll mention it here in case it IS relevant: I'm hosting on port 4440, but I'm using nginx to forward HTTP (not HTTPS) requests on 443 to 4440, due to corporate network security policy (a rough sketch of that forwarding is below).
I'm sure it's some kind of I/O issue on the local host, but I'm not seeing anything in the logs.
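The nginx forwarding is roughly this shape (a sketch with placeholder names; my real server_name differs):

server {
    # corp policy: accept plain HTTP on 443 and pass it to Rundeck on 4440
    listen 443;
    server_name rundeck.example.com;

    location / {
        proxy_pass http://127.0.0.1:4440;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}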
That is a known issue when Rundeck is installed behind a proxy server; take a look at this: https://github.com/rundeck/rundeck/issues/6278. The solution is to set grails.serverURL (in the rundeck-config.properties file) to the external URL defined for Rundeck in your proxy server (e.g. grails.serverURL=http://my_domain/rundeck), then restart the Rundeck service.
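A minimal sketch of that change, assuming a default package install where the file lives at /etc/rundeck/rundeck-config.properties and the proxy exposes Rundeck at http://my_domain/rundeck (adjust the path and URL to your setup):

# /etc/rundeck/rundeck-config.properties
grails.serverURL=http://my_domain/rundeck

# restart so the new URL is picked up
sudo systemctl restart rundeckd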

Gateway Timeout when accessing Bluemix WEB IDE/Node.js logs

I am using the Web IDE and want to see the log by clicking on the arrow. I can only see an empty "Untitled" page. The Node.js app is running normally. Live edit is switched off.
After some minutes:
Gateway Timeout
The proxy server did not receive a timely response from the upstream server.
Reference #1.45bf1402.1511018717.3dddb8b
I'm not sure which Web IDE you are referring to. The only one I'm aware of is the DevOps one (which works for me, as shown below):
It seems to me like this error that you posted would indicate a temporary outage. Is it still an issue?
In any case, I would advise opening a support ticket if you encounter this issue again (more details about your account would help). I think the Bluemix proxy will time out requests if they take too long.
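In the meantime, the same Node.js logs can usually be pulled from the command line with the Cloud Foundry CLI rather than through the Web IDE (a sketch; "my-app" is a placeholder for your application name):

cf logs my-app --recent    # dump the recent log buffer
cf logs my-app             # stream new log lines live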

Tableau Vizql, Backgrounder and Data server down

Yesterday our extracts failed to refresh with the following message (image extract_error):
Failure: Failed 1 time. Sign in failed.
Resolution Details: Check the Data Connection page for necessary updates to an access token or embedded credentials.
I verified that all our passwords were unchanged and tested the connections, which were successful.
The tableau dashboards now give an error message saying:
HTTP 404:
Unable to connect to the server "localhost". Check that the server is running and that you have access privileges to the requested database. (image tableau_error)
Further, when I opened the Server Status page, I saw that one of our two VizQL Server, Backgrounder, and Data Server processes was down. We have two of each, and only one of each is active. (image server_status)
So I decided to remote desktop into the server and run the tabadmin status -v command, and strangely it showed that all processes were running. (image tabadmin_status)
Finally, I opened a case through the Tableau Customer Portal to let them know about this issue (they asked me to send them the log.zip file), but in the meantime I have been trying to troubleshoot it myself. Any help would be really appreciated.
After trying a lot of things, one procedure seemed to work (a rough command-line sketch of these steps follows below):
Stopped the Tableau Server
Configured it to run 1 VizQL Server process instead of 2
Started the server again
Finally, it worked. The status page now shows that all the processes are active.
Hopefully, this helps someone who is facing a similar problem.
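For reference, on a pre-TSM (tabadmin-based) install those steps map roughly to the commands below. This is only a sketch, and the vizqlserver.procs key in particular is an assumption about how the process count was changed; it can also be done through the Configure utility:

tabadmin stop
tabadmin set vizqlserver.procs 1
tabadmin configure
tabadmin start
tabadmin status -v    # confirm everything reports as running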
This may be caused by a firewall issue. Since tabadmin status -v reported everything as "running", the cluster is healthy and this is a false alert. The firewall rules could be allowing just the first port and not the entire range (see https://onlinehelp.tableau.com/current/server/en-us/ports.htm) needed to respond to the requests from the application server that build that fancy table with the green and red boxes.
The firewall can be reverted or altered behind the scenes for a number of reasons, usually Windows updates or regular Group Policy synchronization.
Try disabling the Windows Firewall (https://www.faqforge.com/windows-server-2016/turn-off-firewall-windows-server-2016/), or add an inbound rule allowing access to all ports if your org policy doesn't allow you to turn it off entirely. (Follow the steps here, except use "All Local Ports" instead of "Specific Local Ports": https://www.parallels.com/blogs/ras/configuring-windows-server-firewall-for-parallels-ras/) A command-line equivalent is sketched below.
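If you prefer the command line over the GUI, something along these lines should create an equivalent allow-all inbound TCP rule (a sketch; the rule name is a placeholder, and scoping it to the Tableau port range from the link above is safer than opening every port):

# run in an elevated PowerShell prompt
New-NetFirewallRule -DisplayName "Tableau Server ports" -Direction Inbound -Action Allow -Protocol TCP -LocalPort Any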
I had a similar problem and followed steps similar to the ones Sravee mentioned above to bring all the processes back to active.
Stopped the server
Changed the VizQL Server configuration from 2 to 1
Started the server
Entered the license key (otherwise the server status page shows an unlicensed error)
Note: this does not bring the site back yet; this step is just for 'tricking' the VizQL Server
Stopped the server again
Changed the VizQL configuration from 1 back to 2
Started the server
Entered the license key again
These steps did bring the server back to active for us. Posting in case it helps anyone who faces the same problem. Thank you so much.

Can Selenium IDE deal effectively with Browser alerts

Hi, I am currently writing a test script for an e-commerce site using Selenium IDE; this is in a testing environment served over HTTP. The issue I am having is that the test payment gateway's 3D Secure step runs over HTTPS, so when using Firefox the browser displays the security warning message when returning from the 3D Secure HTTPS pages to the HTTP testing environment.
'Although this page is encrypted, the information you have entered is to be sent over an unencrypted connection and could easily be read by a third party.
Are you sure you want to continue sending this information?'
I have tried the various IDE commands for waitForAlert* and assertAlert*, but this JavaScript alert seems to override any of the commands I use and essentially halts the script until there is manual intervention.
From what I can ascertain from various forums, I am unable to turn this particular alert off in Firefox, as it is considered too important to be switched off; I have tried in Firefox's about:config.
I can obviously switch 3D Secure off to allow the script to run, but I would prefer to test the complete user scenario rather than a test adapted to suit automation.
Many thanks in advance for your time and assistance.
I had exactly the same problem:
I use Selenium WebDriver to test against my local HTTP server, which redirects to an HTTPS service (3DS as well, btw ;)). The problem is not with certs, but with this hard-coded warning about switching between HTTPS and HTTP.
Based on the link from MacGyver's answer and this answer, "Key press in (Ctrl+A) Selenium WebDriver", I tested this and can confirm it closes the "Although this page is encrypted, the information you have entered is to be sent over an unencrypted connection and could easily be read by a third party" dialog:
Alert alert = driver.switchTo().alert();
alert.accept();
The other solution seems to work fine, but you'll get an UnhandledAlertException with the latest Selenium versions (e.g. 2.25.0):
Actions a = new Actions(driver);
a.sendKeys(Keys.ENTER).perform();
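If the warning can take a moment to appear, a hedged variant is to wait for it explicitly before accepting; this sketch assumes the selenium-support classes (WebDriverWait / ExpectedConditions) are on the classpath:

import org.openqa.selenium.Alert;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// wait up to 10 seconds for the security dialog, then accept it
WebDriverWait wait = new WebDriverWait(driver, 10);
Alert alert = wait.until(ExpectedConditions.alertIsPresent());
alert.accept();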
Option #1:
The easiest way is to remove the option in security options for your profile:
http://forums.mozillazine.org/viewtopic.php?f=38&t=665552
Option #2:
Not sure if this applies to an untrusted certificate or to your security warning, but the forum thread seemed to fit. It requires that you use Selenium RC Server.
Profiles are stored here for Firefox: %APPDATA%\Mozilla\Firefox
Profiles can be edited: http://www.dennisplucinik.com/blog/2011/02/04/how-to-install-run-multiple-firefox-versions-in-windows-simultaneously/
Follow the snippet below from this link:
http://old.nabble.com/Security-Warning-on-final-page,-how-to-remove-td22907376.html
If using Firefox 3, see the following post https://developer.mozilla.org/En/Cert_override.txt
The solution I use to get past this security pop-up is only applicable to Firefox 3 browsers and might be more of a hack than a fix, but it works.
Run the selenium test
Select "Accept this certificate permanently" when prompted by popup
Click on the OK button (it might be necessary to have a pause after this, because we need to open Explorer to find a file now)
Open Windows Explorer and navigate to => "C:\Users\xxxxxxxx\AppData\Local\Temp\customProfileDirxxxx"
This is a temporary profile created by Firefox which contains a file called "cert_override.txt"
Copy "cert_override.txt" to your temp directory
Stop your selenium server.
Open your "selenium-server.jar" file from "c:\selenium-remote-control-xxx\selenium-server-xxx" using WinRar
Drag "cert_override.txt" file into the "selenium-server.jar\customProfileDirCUSTFFCHROME" folder in WinRar (do not delete or edit anything in the .jar file!!!!!)
Close WinRar, start selenium and try it again :)

Has anyone encountered "Win32 Error : The network path was not found" trying to copy files with FinalBuilder 6?

I have a FinalBuilder job that, as a final step, deploys the compiled app and DLLs to a network share on another server.
About 50% of the time, it just fails with
Win32 Error : The network path was not found
Changing the target from \\myserver\myshare to \\myserver.mydomain.com\myshare will often fix it temporarily - the first 2-3 runs after modifying the build file will work, after which it'll start failing again.
The FinalBuilder task is running with domain credentials granting admin access on the target box; and copying files to/from shares on that server via Windows Explorer works reliably.
I'm completely stumped.
Finally tracked this down. The target server was a virtual machine, and the Hyper-V host network settings were set to "Virtual Network" instead of "Virtual Teamed Network".
I have no idea what that means, but having changed it to Virtual Teamed Network, it works flawlessly. O_o
The network path was not found.
This is related to DNS/WINS not being able to look up the name. When I have seen this, it was because of problems with our DNS servers.
Adding an entry to the lmhosts file would keep the system from having to look the name up in DNS/WINS.
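A sketch of such an entry (the IP address and host name here are placeholders; the file lives at %SystemRoot%\System32\drivers\etc\lmhosts, typically created by copying lmhosts.sam):

192.168.1.50    MYSERVER    #PRE

The #PRE tag preloads the mapping into the NetBIOS name cache at startup, so the lookup never has to hit DNS/WINS.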
If that does not work, another option to consider is to increase the number of retries on the Action. This can be done from the "Runtime" tab of the action by clicking on "Timing Properties".