I already have a users schema with an authentication key and want to authenticate against it. I tried implementing authentication via SQL, but because my schema has a different structure I kept getting errors, so I switched to the external-authentication method. The technologies and OS used in my application are:
Node.JS
Ejabberd as XMPP server
MySQL Database
React-Native (Front-End)
OS - Ubuntu 18.04
I implemented the external authentication configuration as described at https://docs.ejabberd.im/admin/configuration/#external-script and took the PHP script https://www.ejabberd.im/files/efiles/check_mysql.php.txt as an example. But I am getting the error shown below in error.log. In ejabberd.yml I have the following configuration.
...
host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/JabberAuth.class.php"
    auth_use_cache: false
...
Also, is there an external-auth JavaScript script anywhere?
Here are the relevant parts of error.log and ejabberd.log:
error.log
2019-03-19 07:19:16.814 [error] <0.524.0>@ejabberd_auth_external:failure:103 External authentication program failed when calling 'check_password' for admin@example.org.co: disconnected
ejabberd.log
2019-03-19 07:19:16.811 [debug] <0.524.0>@ejabberd_http:init:151 S: [{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
2019-03-19 07:19:16.811 [debug] <0.524.0>@ejabberd_http:process_header:307 (#Port<0.13811>) http query: 'POST' <<"/api/register">>
2019-03-19 07:19:16.811 [debug] <0.524.0>@ejabberd_http:process:394 [<<"api">>,<<"register">>] matches [<<"api">>]
2019-03-19 07:19:16.811 [info] <0.364.0>@ejabberd_listener:accept:238 (<0.524.0>) Accepted connection ::ffff:ip -> ::ffff:ip
2019-03-19 07:19:16.814 [info] <0.524.0>@mod_http_api:log:548 API call register [{<<"user">>,<<"test">>},{<<"host">>,<<"example.org.co">>},{<<"password">>,<<"test">>}] from ::ffff:ip
2019-03-19 07:19:16.814 [error] <0.524.0>@ejabberd_auth_external:failure:103 External authentication program failed when calling 'check_password' for admin@example.org.co: disconnected
2019-03-19 07:19:16.814 [debug] <0.524.0>@mod_http_api:extract_auth:171 Invalid auth data: {error,invalid_auth}
Any help regarding this topic will be appreciated.
1) Your auth_method configuration looks good.
2) Here is a Python script I've used and extended for ejabberd external authentication.
#!/usr/bin/python
import sys
from struct import unpack

def openAuth(args):
    (user, server, password) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def openIsuser(args):
    (user, server) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def from_ejabberd():
    # ejabberd sends a 2-byte big-endian length, then "op:user:server[:password]"
    input_length = sys.stdin.read(2)
    (size,) = unpack('>h', input_length)
    return sys.stdin.read(size).split(':')

def to_ejabberd(result):
    # The reply is a 2-byte length (always 2) followed by 1 (success) or 0 (failure)
    if result:
        sys.stdout.write('\x00\x02\x00\x01')
    else:
        sys.stdout.write('\x00\x02\x00\x00')
    sys.stdout.flush()

def loop():
    switcher = {
        "auth": openAuth,
        "isuser": openIsuser,
        "setpass": lambda args: True,
        "tryregister": lambda args: False,
        "removeuser": lambda args: False,
        "removeuser3": lambda args: False,
    }
    while True:
        data = from_ejabberd()
        to_ejabberd(switcher.get(data[0], lambda args: False)(data[1:]))

if __name__ == "__main__":
    try:
        loop()
    except Exception:
        pass
I didn't write the from_ejabberd() and to_ejabberd() communication functions myself, and unfortunately I can't find the original source any more.
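As for the JavaScript question: I'm not aware of an official script, but the same length-prefixed stdin/stdout protocol can be sketched in Node.js. This is only a sketch: the function names (decodeRequest, encodeReply, handle) and the RUN_AS_EXTAUTH environment-variable guard are my own inventions, and the accept-all checkAuth/checkUser bodies are placeholders you'd replace with lookups against your MySQL users table.

```javascript
// Sketch of an ejabberd external-auth script in Node.js.
// Protocol: ejabberd writes a 2-byte big-endian length followed by
// "operation:user:server[:password]"; the script replies with a 2-byte
// length (always 2) and a 16-bit result (1 = success, 0 = failure).

function decodeRequest(buf) {
  // buf holds one complete frame: 2-byte length prefix + payload
  const size = buf.readUInt16BE(0);
  return buf.slice(2, 2 + size).toString('utf8').split(':');
}

function encodeReply(ok) {
  const reply = Buffer.alloc(4);
  reply.writeUInt16BE(2, 0);          // payload length, always 2
  reply.writeUInt16BE(ok ? 1 : 0, 2); // result flag
  return reply;
}

function handle(fields) {
  const [op, user, server, password] = fields;
  switch (op) {
    case 'auth':   return checkAuth(user, server, password);
    case 'isuser': return checkUser(user, server);
    default:       return false; // setpass, tryregister, removeuser, ...
  }
}

// Placeholders: replace with real queries against your users table.
function checkAuth(user, server, password) { return true; }
function checkUser(user, server) { return true; }

// Frame pump, enabled via a made-up env var so the file can also be
// loaded without blocking on stdin. When used as extauth_program you
// would set RUN_AS_EXTAUTH=1 (or simply drop the guard).
if (process.env.RUN_AS_EXTAUTH) {
  let pending = Buffer.alloc(0);
  process.stdin.on('data', (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    // Process every complete frame currently buffered.
    while (pending.length >= 2 && pending.length >= 2 + pending.readUInt16BE(0)) {
      const frameLen = 2 + pending.readUInt16BE(0);
      process.stdout.write(encodeReply(handle(decodeRequest(pending))));
      pending = pending.slice(frameLen);
    }
  });
}
```

The buffering loop matters because stdin chunks are not guaranteed to align with protocol frames.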
I tried to use a public endpoint (e.g. api.openweathermap.org/data/2.5/weather?lat=35&lon=139) instead of localhost when configuring Dredd and ran the tool, but I am not able to connect to the endpoint through Dredd. It throws Error: getaddrinfo EAI_AGAIN.
But when I try to connect to the endpoint using Postman, I am able to connect successfully.
There is no difference in calling a local or remote endpoint.
Some remote endpoints have some sort of authorization requirements.
This is an example of Dredd calling an external endpoint:
dredd.yml configuration file fragment
...
blueprint: doc/api.md
# endpoint: 'http://api-srv:5000'
endpoint: https://private-da275-notes69.apiary-mock.com
As you see, the only change is the endpoint in the Dredd configuration file (created using dredd init).
But, as I mentioned, sometimes you'll need to provide authorization through a header or a query-string parameter.
Dredd has hooks that allow you to change things before each transaction. For instance, suppose you'd like to add an apikey parameter to each URL before executing the request; this code can handle that.
hook.js
// Writing Dredd Hooks In Node.js
// Ref: http://dredd.org/en/latest/hooks-nodejs.html
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  hooks.log('before each');
  // add a query parameter to each transaction here
  var paramToAdd = 'api-key=23456';
  if (transaction.fullPath.indexOf('?') > -1) {
    transaction.fullPath += '&' + paramToAdd;
  } else {
    transaction.fullPath += '?' + paramToAdd;
  }
  hooks.log('before each fullpath: ' + transaction.fullPath);
});
Code at Github gist
Save this hook file anywhere in your project and then run Dredd passing the hook file.
dredd --hookfiles=./hoock.js
That's it; after execution the log will show the actual URL used in each request.
info: Configuration './dredd.yml' found, ignoring other arguments.
2018-06-25T16:57:13.243Z - info: Beginning Dredd testing...
2018-06-25T16:57:13.249Z - info: Found Hookfiles: 0=/api/scripts/dredd-hoock.js
2018-06-25T16:57:13.263Z - hook: before each
2018-06-25T16:57:13.264Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.095Z - pass: GET (200) /notes duration: 2829ms
2018-06-25T16:57:16.096Z - hook: before each
2018-06-25T16:57:16.096Z - hook: before each fullpath: /notes?api-key=23456
"/notes?api-key=23456"
2018-06-25T16:57:16.788Z - pass: POST (201) /notes duration: 691ms
2018-06-25T16:57:16.788Z - hook: before each
2018-06-25T16:57:16.789Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.113Z - pass: GET (200) /notes/abcd1234 duration: 323ms
2018-06-25T16:57:17.114Z - hook: before each
2018-06-25T16:57:17.114Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.353Z - pass: DELETE (204) /notes/abcd1234 duration: 238ms
2018-06-25T16:57:17.354Z - hook: before each
2018-06-25T16:57:17.354Z - hook: before each fullpath: /notes/abcd1234?api-key=23456
"/notes/abcd1234?api-key=23456"
2018-06-25T16:57:17.614Z - pass: PUT (200) /notes/abcd1234 duration: 259ms
2018-06-25T16:57:17.615Z - complete: 5 passing, 0 failing, 0 errors, 0 skipped, 5 total
2018-06-25T16:57:17.616Z - complete: Tests took 4372ms
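The hook above rewrites the query string. If the endpoint expects authorization in a header instead, the same beforeEach mechanism works via transaction.request.headers. A sketch: the helper name and the Bearer token value are made up, and the logic is pulled into a plain function so it can be exercised without Dredd.

```javascript
// Add an Authorization header to every transaction instead of a
// query-string parameter. "Bearer 23456" is a placeholder token.
function addAuthHeader(transaction) {
  transaction.request = transaction.request || {};
  transaction.request.headers = transaction.request.headers || {};
  transaction.request.headers['Authorization'] = 'Bearer 23456';
  return transaction;
}

// In a real hook file you would register it with Dredd's hooks module:
// var hooks = require('hooks');
// hooks.beforeEach(function (transaction) { addAuthHeader(transaction); });
```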
I am using DataNucleus to perform CRUD. I delete an entity and then run a named query; why is the already-deleted entity still in the result list?
First, delete the entity:
MyEntity e = manager.find(MyEntity.class, id);
manager.remove(e);
Then, query:
@NamedQueries({
    @NamedQuery(name = MyEntity.FIND_ALL, query = "SELECT a FROM MyEntity a ORDER BY a.updated DESC")
})
...
public static final String FIND_ALL = "MyEntity.findAll";
TypedQuery<MyEntity> query = manager.createNamedQuery(FIND_ALL, MyEntity.class);
return query.getResultList();
Configure datanucleus.Optimistic in persistence.xml:
<property name="datanucleus.Optimistic" value="true" />
The named query unexpectedly returns a result list that still contains the deleted entity.
However, with datanucleus.Optimistic=false the result is correct. Why doesn't datanucleus.Optimistic=true work?
More details about this case:
Below is the CRUD related log:
1. Log of the Save operation:
DEBUG: DataNucleus.Transaction - Transaction begun for ExecutionContext org.datanucleus.ExecutionContextThreadedImpl#6bc3bf (optimistic=true)
INFO : org.springframework.test.context.transaction.TransactionalTestExecutionListener - Began transaction (1): transaction manager [org.springframework.orm.jpa.JpaTransactionManager#7dfefcef]; rollback [true]
DEBUG: DataNucleus.Persistence - Making object persistent : "com.demo.MyEntity#30a7803e"
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 1 cache [cache size = 0]
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 2 cache
DEBUG: DataNucleus.Persistence - Managing Persistence of Class : com.demo.MyEntity [Table : (none), InheritanceStrategy : superclass-table]
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity#96da65f" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") added to Level 1 cache (loadedFlags="[YNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN]")
DEBUG: DataNucleus.Lifecycle - Object "com.demo.MyEntity#96da65f" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") has a lifecycle change : "HOLLOW"->"P_NONTRANS"
DEBUG: DataNucleus.Persistence - Fetching object "com.demo.MyEntity#96da65f" (id=07cad778-d1c3-4834-ace7-ac2e4ecacc24) fields [entityId,extensions,objectType,openSocial,published,updated,url,actor,appId,bcc,bto,cc,content,context,dc,endTime,generator,geojson,groupId,icon,inReplyTo,ld,links,location,mood,object,odata,opengraph,priority,provider,rating,result,schema_org,source,startTime,tags,target,title,to,userId,verb]
DEBUG: DataNucleus.Datastore.Retrieve - Object "com.demo.MyEntity#96da65f" (id="07cad778-d1c3-4834-ace7-ac2e4ecacc24") being retrieved from HBase
DEBUG: org.apache.hadoop.hbase.zookeeper.ZKUtil - hconnection opening connection to ZooKeeper with ensemble (master.hbase.com:2181)
....
DEBUG: org.apache.hadoop.hbase.client.MetaScanner - Scanning .META. starting at row=MyEntity,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation#25c7f5b0
...
DEBUG: DataNucleus.Cache - Object with id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" being removed from Level 1 cache [current cache size = 1]
DEBUG: DataNucleus.ValueGeneration - Creating ValueGenerator instance of "org.datanucleus.store.valuegenerator.UUIDGenerator" for "uuid"
DEBUG: DataNucleus.ValueGeneration - Reserved a block of 1 values
DEBUG: DataNucleus.ValueGeneration - Generated value for field "com.demo.BaseEntity.entityId" using strategy="custom" (Generator="org.datanucleus.store.valuegenerator.UUIDGenerator") : value=4aa3c4a8-b450-473e-aeba-943dc6ef30ce
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity#30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") added to Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]")
DEBUG: DataNucleus.Transaction - Object "com.demo.MyEntity#30a7803e" (id="4aa3c4a8-b450-473e-aeba-943dc6ef30ce") enlisted in transactional cache
DEBUG: DataNucleus.Persistence - Object "com.demo.MyEntity#30a7803e" has been marked for persistence but its actual persistence to the datastore will be delayed due to use of optimistic transactions or "datanucleus.flush.mode" setting
2. Log of the DELETE operation:
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity#30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1]
DEBUG: DataNucleus.Persistence - Deleting object from persistence : "com.demo.MyEntity#30a7803e"
DEBUG: DataNucleus.Lifecycle - Object "com.demo.MyEntity#30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") has a lifecycle change : "P_NEW"->"P_NEW_DELETED"
3. Log of the named QUERY operation:
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.query.cache.SoftQueryCompilationCache" initialised
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryDatastoreCompilationCache" initialised
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryResultsCache" initialised
DEBUG: DataNucleus.Query - JPQL Single-String with "SELECT a FROM MyEntity a ORDER BY a.updated DESC"
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process started using optimised flush - 0 to delete, 1 to insert and 0 to update
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #7
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #7
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: exists 0
DEBUG: DataNucleus.Datastore.Persist - Object "com.demo.MyEntity#30a7803e" being inserted into HBase with all reachable objects
DEBUG: DataNucleus.Datastore.Native - Object "com.demo.MyEntity#30a7803e" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"}
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #8
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #8
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: multi 2
DEBUG: DataNucleus.Datastore.Persist - Execution Time = 123 ms
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process finished
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC"
DEBUG: DataNucleus.Query - JPQL Query : Compile Time = 13 ms
DEBUG: DataNucleus.Query - QueryCompilation:
[from:ClassExpression(alias=a)]
[ordering:OrderExpression{PrimaryExpression{a.updated} descending}]
[symbols: a type=com.demo.MyEntity]
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC" for datastore
DEBUG: DataNucleus.Query - JPQL Query : Compile Time for datastore = 2 ms
DEBUG: DataNucleus.Query - JPQL Query : Executing "SELECT a FROM MyEntity a ORDER BY a.updated DESC" ...
DEBUG: DataNucleus.Datastore.Native - Retrieving objects for candidate=com.demo.MyEntity and subclasses
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Creating scanner over MyEntity starting at key ''
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Advancing internal scanner to startKey at ''
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #9
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #9
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: openScanner 1
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #10
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #10
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0
DEBUG: DataNucleus.Cache - Object "com.demo.MyEntity#30a7803e" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1]
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #11
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #11
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #12
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #12
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: close 1
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Finished with scanning at {NAME => 'MyEntity,,1457106265917.c6437b9afd33cd225c33e0ed52ff50d4.', STARTKEY => '', ENDKEY => '', ENCODED => c6437b9afd33cd225c33e0ed52ff50d4,}
DEBUG: DataNucleus.Query - JPQL Query : Processing the "ordering" clause using in-memory evaluation (clause = "[OrderExpression{PrimaryExpression{a.updated} descending}]")
DEBUG: DataNucleus.Query - JPQL Query : Processing the "resultClass" clause using in-memory evaluation (clause = "com.demo.MyEntity")
DEBUG: DataNucleus.Query - JPQL Query : Execution Time = 14 ms
Why do the following log lines (a PUT of the entity whose lifecycle state is "P_NEW_DELETED" into the datastore) appear during the QUERY operation, and how can I avoid this behavior?
DEBUG: DataNucleus.Datastore.Persist - Object "com.demo.MyEntity#30a7803e" being inserted into HBase with all reachable objects
DEBUG: DataNucleus.Datastore.Native - Object "com.demo.MyEntity#30a7803e" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"}
You turned on optimistic transactions, so all data write operations happen only at commit. You executed the query before that happened (and didn't set a flush mode for the query), so your delete had not reached the datastore when the query ran.
Call
em.flush()
before executing the query, or set
query.setFlushMode(FlushModeType.AUTO);
I'm following this tutorial:
https://ericswann.wordpress.com/2015/04/24/nozus-js-1-intro-to-sails-with-passport-and-jwt-json-web-token-auth/
I'm running into a problem where the User.create(...) call never returns, so my Postman test times out.
The block of code that I think is causing the problem:
signup: function (req, res) {
  console.log('\n\nCreating a new user...');
  User
    .create(_.omit(req.allParams(), 'id'))
    .then(function (user) {
      console.log('\n\nreturning newly created user...');
      return {
        // TODO: replace with new type of cipher service
        token: CipherService.createToken(user),
        user: user
      };
    })
    .then(res.created)
    .catch(res.serverError);
},
I see the first console output:
info: Starting app...
info:
info: .-..-.
info:
info: Sails <| .-..-.
info: v0.11.0 |\
info: /|.\
info: / || \
info: ,' |' \
info: .-'.-==|/_--'
info: `--'-------'
info: __---___--___---___--___---___--___
info: ____---___--___---___--___---___--___-__
info:
info: Server lifted in `/Users/MacUser/SailsProjects/myApi`
info: To see your app, visit http://localhost:1337
info: To shut down Sails, press <CTRL> + C at any time.
debug: --------------------------------------------------------
debug: :: Sat Sep 12 2015 15:41:21 GMT+0800 (AWST)
debug: Environment : development
debug: Port : 1337
debug: --------------------------------------------------------
Creating a new user...
But I don't see the second console output.
Does anyone know what I'm doing wrong?
OK, my bad... it seems I missed a line when following the tutorial above.
I had this in my User.js model class:
beforeCreate: function (values, next) {
  CipherService.hashPassword(values);
}
When it should be:
beforeCreate: function (values, next) {
  CipherService.hashPassword(values);
  next(); // <--- this line was missing
}
SWFUpload Version: 2.5.0 2010-01-15 Beta 2
Flash Player Version:
Operating System: Windows 7, Windows server 2008R2
Browser(s): IE, Chrome
Description:
Uploading a file of up to 4 MB works without problems; the upload only fails when the size is over 4 MB. Here is the debug info:
---SWFUpload Instance Info---
Version: 2.5.0 2010-01-15 Beta 2
Movie Name: SWFUpload_0
Settings:
use_query_string: false
requeue_on_error: false
http_success:
assume_success_timeout: 0
file_post_name: Filedata
post_params: [object Object]
file_types: .
file_types_description: All Files
file_size_limit: 20 MB
file_upload_limit: 10
file_queue_limit: 1
debug: true
prevent_swf_caching: true
button_placeholder_id: uploadButtonPlaceHolder1
button_placeholder: Not Set
button_width: 200
button_height: 19
button_text: Selecteer
---SWFUpload Instance Info---
Version: 2.5.0 2010-01-15 Beta 2
Movie Name: SWFUpload_1
Settings:
use_query_string: false
requeue_on_error: false
http_success:
assume_success_timeout: 0
file_post_name: Filedata
post_params: [object Object]
file_types: .
file_types_description: All Files
file_size_limit: 20 MB
file_upload_limit: 15
file_queue_limit: 15
debug: true
prevent_swf_caching: true
button_placeholder_id: uploadButtonPlaceHolder2
button_placeholder: Not Set
button_width: 200
button_height: 19
button_text: Selecteer
SWF DEBUG: SWFUpload Init Complete
SWF DEBUG:
SWF DEBUG: ----- SWF DEBUG OUTPUT ----
SWF DEBUG: Version: 2.5.0 2010-02-17 Beta 3
SWF DEBUG: movieName: SWFUpload_0
SWF DEBUG: File Types String: .
SWF DEBUG: Parsed File Types:
SWF DEBUG: HTTP Success: 0
SWF DEBUG: File Types Description: All Files (.)
SWF DEBUG: File Size Limit: 20971520 bytes
SWF DEBUG: File Upload Limit: 10
SWF DEBUG: File Queue Limit: 1
SWF DEBUG: Post Params:
SWF DEBUG: ----- END SWF DEBUG OUTPUT ----
SWF DEBUG:
Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)
SWF DEBUG: SWFUpload Init Complete
SWF DEBUG:
SWF DEBUG: ----- SWF DEBUG OUTPUT ----
SWF DEBUG: Version: 2.5.0 2010-02-17 Beta 3
SWF DEBUG: movieName: SWFUpload_1
SWF DEBUG: File Types String: .
SWF DEBUG: Parsed File Types:
SWF DEBUG: HTTP Success: 0
SWF DEBUG: File Types Description: All Files (.)
SWF DEBUG: File Size Limit: 20971520 bytes
SWF DEBUG: File Upload Limit: 15
SWF DEBUG: File Queue Limit: 15
SWF DEBUG: Post Params:
SWF DEBUG: ----- END SWF DEBUG OUTPUT ----
SWF DEBUG:
Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)
SWF DEBUG: Stage Resize:190 by 18
SWF DEBUG: Stage Resize:190 by 18
SWF DEBUG: Button Image Loaded
SWF DEBUG: Stage Resize:190 by 18
SWF DEBUG: Button Image Loaded
SWF DEBUG: Stage Resize:190 by 18
SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .
SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0
SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1
SWF DEBUG: StartUpload: First file in queue
SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0
SWF DEBUG: StartUpload(): Upload Type: Normal.
SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for standard upload. Starting upload to /File/UploadMain for File ID: SWFUpload_0_0
SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 32768. Total: 4541051
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 65536. Total: 4541051
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 196608. Total: 4541051
.........
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 4489216. Total: 4541051
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 4521984. Total: 4541051
SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 4541051. Total: 4541051
SWF DEBUG: Event: uploadError: HTTP ERROR : File ID: SWFUpload_0_0. HTTP Status: 500.
SWF DEBUG: Event: uploadComplete : Upload cycle complete.
EXCEPTION: description: FileProgress is niet gedefinieerd
EXCEPTION: number: -2146823279
EXCEPTION: stack: ReferenceError: FileProgress is niet gedefinieerd
(The exception message is Dutch for "FileProgress is not defined".)
Open your website/project's web.config and add this:
<system.web>
<httpRuntime executionTimeout="240" maxRequestLength="20480" />
</system.web>
maxRequestLength raises the allowed request size to 20 MB (the value is in kilobytes: 20480 KB = 20 MB); adjust it to your needs. ASP.NET's default is 4096 KB, which is exactly why your uploads fail with HTTP 500 as soon as they exceed 4 MB.
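If the site runs under IIS 7 or later, request filtering enforces its own limit (about 28.6 MB by default, but often lowered), so you may also need to raise it; this value is in bytes, and 20971520 matches the 20 MB above:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="20971520" />
    </requestFiltering>
  </security>
</system.webServer>
```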
For more information, see Large file uploads in ASP.NET.