I've created a basic application (virtually no content, just a blank page) to test the SSO functionality. It doesn't work: when I test it on the TV, it gives the error error_cp_001.
The config.xml contains, among other things:
<cpauthjs>Authorization8888.js</cpauthjs>
<login>y</login>
The Authorization8888.js file contains:
var Authorization8888 = {};
Authorization8888.checkAccount = function(id, pw, cb) {
cb("TRUE");
};
I have already tried <cpauthjs>Authorization8888</cpauthjs>. I have also tried Authorization without the 8888 in the filename and variable names. It always shows the same error.
I also tried it in the SDK simulator and got the same error as on the real TV. On the simulator I see these extra debug lines:
[JS ALERT]: ####################22222eval(accountCheckFunc) error
[JS ALERT]: Fail to load Account check moudule.
Error : Can't find variable: Authorization8888
I can share the zip file containing the whole application, but it's really simple to reproduce since it has nothing except this basic SSO-related code.
I found the answer, posting it in case someone else hits the same problem.
The problem was deploying via USB. Apparently, an application deployed from a USB stick has limited functionality. Deploying the application via a web server fixed a lot of issues, including this one.
Let's say I have a website with the name website.eu. When I deploy it and try to access a page directly, like website.eu/about, I get the error:
"404 The page you're looking for could not be found. The resource that you are attempting to access does not exist or you don't have the necessary permissions to view it"
When I click a link that takes me to website.eu/about it works fine, but when I type that URL directly into the address bar it fails.
Everything works fine locally.
The project is developed using Vue3.
The project is on GitLab.
If someone can help, I would appreciate it.
Hard to tell without seeing the code, but my guess is your router setup uses web history mode, which relies on the server having certain settings applied.
I believe switching to hash mode (which adds # to the routes) will work.
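A minimal sketch of a hash-mode router, assuming vue-router 4; the About view and paths are placeholders for your own:

// router/index.js — minimal sketch assuming vue-router 4
import { createRouter, createWebHashHistory } from 'vue-router'
import About from '../views/About.vue'   // placeholder view

export default createRouter({
  // hash mode: URLs become website.eu/#/about, so the server
  // only ever has to serve the root index.html
  history: createWebHashHistory(),
  routes: [
    { path: '/about', component: About },
  ],
})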
Alternatively, you can update your server to rewrite all routes to index.html so that history mode works.
example server configurations
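For instance, if the site is served by nginx, the usual fallback looks roughly like this (a sketch; adjust the location and paths for your setup):

location / {
    # serve the file if it exists, otherwise fall back to index.html
    # so vue-router can resolve /about on the client
    try_files $uri $uri/ /index.html;
}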
I'm very new at this. Could someone tell me what the best method is for submitting a form when using PhoneGap and jQuery Mobile? What I want to do is pass the form data to a PHP file and then have the results passed back into the app, so that the user isn't directly accessing the PHP file at any point.
I found the following page, which basically does what I want, but I keep getting "Origin null is not allowed by Access-Control-Allow-Origin" when testing out the code. So I'm guessing this will only work if the app is located on a server as well?
Any help would be great. Thanks <3
To test your solution on your computer you need to launch Chrome from the terminal with the argument --disable-web-security. See this answer: Disable same origin policy in Chrome
In your PhoneGap application, add a line to the config.xml in the www folder: <access origin="*.yourdomain.com" />. Build, and you are now allowed to request all domains and subdomains of yourdomain.com. For more details on whitelisting see http://docs.phonegap.com/en/3.0.0/guide_appdev_whitelist_index.md.html#Domain%20Whitelist%20Guide
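Once the domain is whitelisted, the form submission itself can be an AJAX POST from the app to your server. A rough sketch with jQuery (the URL, form id, and result element are placeholders for your own):

$('#myForm').on('submit', function (e) {
    e.preventDefault(); // stop jQuery Mobile from doing a normal page submit
    $.post('http://www.yourdomain.com/process.php', $(this).serialize())
        .done(function (result) {
            // the PHP response comes back into the app;
            // the user never navigates to process.php directly
            $('#result').text(result);
        })
        .fail(function () {
            $('#result').text('Request failed');
        });
});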
You are not able to make a POST from local files, so yes, you need to have it running on a web server.
But once you deploy your application, it should work both in an emulator and on your device.
Problem: The browsers on Apple's iPad and iPhone don't seem to like dynamically generated manifest files (we constantly get errors involving either missing images, .aspx pages that can't be accessed from the device, or "Application Cache manifest could not be fetched"). We originally had a manifest.ashx acting as our manifest that would dynamically create and pull some pieces from the web server for offline app functionality. This process worked fine for the majority of browsers and mobile devices but failed on the Apple products.
Thoughts: For some reason Safari doesn't seem to register the manifest.ashx correctly (this is where we dynamically create the manifest file) and just gives up on trying to open it. We truly need a dynamic manifest file for the requirements of the project, so switching to a static manifest file would not work. Does anyone have any suggestions for alternative ways to create dynamic manifest files?
Code:
manifest.ashx
using System.Web;

public class Manifest : IHttpHandler
{
    public void ProcessRequest( HttpContext context )
    {
        ManifestGenerator generator = new ManifestGenerator();
        context.Response.ContentType = "text/cache-manifest";
        // Create the dynamic manifest file here (returns the manifest as a string)
        context.Response.Write( generator.GenerateManifest() );
        context.Response.Flush();
    }

    // Required by IHttpHandler
    public bool IsReusable
    {
        get { return false; }
    }
}
Thanks,
Updated Thoughts v1: Leaning towards thinking this may be a device-specific manifest fault, as all the other mobile and desktop devices are accessing the app just fine (including being able to go offline). Currently I have moved back to a dynamically generated manifest (within the manifest.ashx) and the iPad / iPhone still dies when trying to fetch it, but it does get further than it did before (the error was: "Application Cache update failed, because "file path goes here" could not be fetched"). A strange aside to this is that the desktop version of Safari handles the web app just fine (and an install of Chrome on the iPad had no trouble accessing the site on/offline), while the mobile versions do not.
Updated Thoughts v2: Seems that this issue is Safari-specific, as I have the web app running online/offline with Chrome on the Apple products (iPhone/iPad). Still looking for a fix / workaround for the Safari browsers though...
For Safari/iPad, the manifest file must end with .manifest. At least, that's what my tests determined.
So, in order to make this work, you will have to dynamically generate the .manifest file using an HttpHandler, plus some changes in web.config to map cache.manifest to the handler. The idea is that the call to the non-existent cache.manifest would actually get mapped to the handler, which would then send back dynamic content.
This is currently the part I'm stuck at, so I cannot help you here yet.
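For what it's worth, the web.config mapping I have in mind would look roughly like this (an untested sketch; the type attribute must match your handler's actual namespace and assembly):

<!-- web.config (sketch): route requests for cache.manifest to the handler -->
<system.webServer>
  <handlers>
    <!-- "MyApp.Manifest, MyApp" is a placeholder for your handler's type and assembly -->
    <add name="CacheManifest" path="cache.manifest" verb="GET"
         type="MyApp.Manifest, MyApp" resourceType="Unspecified" />
  </handlers>
</system.webServer>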
I'm using Pow.cx for local development servers - Rails, PHP and static. It's working fine locally, but when I try to use the new xip.io functionality to browse from another device I'm getting a different localhost site every time.
This particular incorrectly-served site is not set up in Pow, but I have an older virtual host set up for it.
Put another way:
stm.dev serves the correct site on my desktop.
stm.192.168.1.XXX.xip.io on my iPhone serves up a different site that is not configured in Pow.
I haven't been able to find any mention of a similar problem online, has anyone else come across this? This particular site is static html, if it matters.
So far I have been unable to get Pow to automatically pick up the xip.io addresses. However, I did finally get it working to the point that I can continue building the site.
I followed the instructions from this link http://blogs.adobe.com/shadow/2012/06/19/shadow-xip-io-virtual-hosts-workflow-simplified/ in setting up a vhost alias for the site. I believe that cuts Pow out of the loop, but at least it's working now for testing on the other devices I need.
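Roughly, the vhost alias amounts to something like this in the Apache config (a sketch; the document root and names here are mine, adjust for your setup):

<VirtualHost *:80>
    DocumentRoot "/Users/me/Sites/stm"
    ServerName stm.dev
    # wildcard alias so stm.192.168.1.XXX.xip.io resolves to the same site
    ServerAlias stm.*.xip.io
</VirtualHost>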
I would love to have Pow working as described, so if there are any suggestions on that end I'd love to hear them.
I need to use ZendAMF on a Symfony project and I'm currently working on integrating the two.
I have a frontend app with two modules, one of which is 'gateway' - the AMF gateway. In my frontend app config, I have the following in the configure function:
// load symfony autoloading first
parent::initialize();
// Integrate Zend Framework
require_once('[MY PATH TO ZEND]\Loader.php');
spl_autoload_register(array('Zend_Loader', 'autoload'));
The executeIndex function in my gateway actions.class.php looks like this:
// No Layout
$this->setLayout(false);
// Set MIME Type
$this->getResponse()->setContentType('application/x-amf; charset='.sfConfig::get('sf_charset'));
// Disable cause this is a non-html page
sfConfig::set('sf_web_debug', false);
// Create AMF Server
$server = new Zend_Amf_Server();
$server->setClass('MYCLASS');
echo $server->handle();
return sfView::NONE;
Now when I try to visit the URL for the gateway module, or even the other module, which was working perfectly fine until this attempt, I only see a blank screen, without even the symfony dev bar loaded. Oddly enough, my symfony logs are not being updated either, which suggests that symfony is not even being 'reached'.
So presumably the error has something to do with Zend, but I have no idea how to figure out what the error could be. One thing I do know for sure is that this is not a file path error, because if I change the path in the following line (part of frontendConfiguration, as shown above), I get a Zend_Amf_Server not found error. So the path must be correct. Also, if I comment out this very same line, the second module returns to normal, and my gateway broadcasts a blank x-amf stream.
spl_autoload_register(array('Zend_Loader', 'autoload'));
Does anyone have any tips on how I could approach this problem?
Thanks
P.S. I'm currently running an older version of Zend, which is why I am using Zend_Loader instead of Zend_autoLoader (I think). I've tried switching to the newer library, but the error remains, so it's not a version problem either.
Got it...
I was not using
set_include_path()
while loading Zend. It's still odd that it would give such a cryptic error, but this was the missing piece indeed.
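For anyone else hitting this, the fix in the configure function looks roughly like this (a sketch; the Zend library path is whatever yours is):

// make the Zend library directory visible to the autoloader
set_include_path(implode(PATH_SEPARATOR, array(
    '/path/to/ZendFramework/library',   // directory that contains the Zend/ folder
    get_include_path(),
)));

require_once 'Zend/Loader.php';
spl_autoload_register(array('Zend_Loader', 'autoload'));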