Required tag missing error on MarketDataRequest query - quickfix

I have a FIX44.MarketDataRequest query based on the TradeClient example from the QuickFIX/n example applications, but I can't manage to get anything useful back from the server with it. I always get the error Required tag missing for this query.
Here is my code:
MDReqID mdReqID = new MDReqID("MARKETDATAID");
SubscriptionRequestType subType = new SubscriptionRequestType(SubscriptionRequestType.SNAPSHOT);
MarketDepth marketDepth = new MarketDepth(0);
QuickFix.FIX44.MarketDataRequest.NoMDEntryTypesGroup marketDataEntryGroup = new QuickFix.FIX44.MarketDataRequest.NoMDEntryTypesGroup();
marketDataEntryGroup.Set(new MDEntryType(MDEntryType.BID));
QuickFix.FIX44.MarketDataRequest.NoRelatedSymGroup symbolGroup = new QuickFix.FIX44.MarketDataRequest.NoRelatedSymGroup();
symbolGroup.Set(new Symbol("EUR/USD"));
QuickFix.FIX44.MarketDataRequest message = new QuickFix.FIX44.MarketDataRequest(mdReqID, subType, marketDepth);
message.AddGroup(marketDataEntryGroup);
message.AddGroup(symbolGroup);
Here is the generated outbound application level message (ToApp):
8=FIX.4.49=15835=V34=249=order.DEMOSUCD.12332150=DEMOSUSD52=20141223-07:02:33.22656=demo.fxgrid128=DEMOSUSD262=MARKETDATAID263=0264=0267=1269=0146=155=EUR/USD10=232
Here is the received ToAdmin message:
8=FIX.4.49=14935=334=249=demo.fxgrid52=20141223-07:02:36.51056=order.DEMOSUCD.12332157=DEMOSUSD115=DEMOSUSD45=258=Required tag missing371=265372=V373=110=136
If I understand correctly, the pair 371=265 (RefTagID=MDUpdateType) after 258=Required tag missing indicates which tag is missing, i.e. MDUpdateType is missing. But this is strange, because that tag is optional for MarketDataRequest.
UPDATE
Here is my FIX config file:
[DEFAULT]
FileStorePath=store
FileLogPath=log
ConnectionType=initiator
ReconnectInterval=60
CheckLatency=N
[SESSION]
BeginString=FIX.4.4
TargetCompID=demo.fxgrid
SenderCompID=XXX
SenderSubID=YYY
StartTime=00:00:00
EndTime=00:00:00
HeartBtInt=30
SocketConnectPort=ZZZ
SocketConnectHost=X.X.X.X
DataDictionary=FIX44.xml
ResetOnLogon=Y
ResetOnLogout=Y
ResetOnDisconnect=Y
Username=AAA
Password=BBB
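One thing worth trying, since the Reject references tag 265: some counterparties enforce MDUpdateType even though the standard FIX 4.4 dictionary marks it optional for MarketDataRequest. A hedged sketch of setting it explicitly on the request (using the QuickFIX/n field constants; whether your server expects 265=0 is an assumption):

```csharp
// Hypothetical workaround: some venues require MDUpdateType (265) even
// though standard FIX 4.4 marks it optional for MarketDataRequest.
// MDUpdateType.FULL_REFRESH corresponds to 265=0.
message.Set(new MDUpdateType(MDUpdateType.FULL_REFRESH));
```

If that clears the Reject, the server's dictionary simply differs from the stock FIX44.xml, and it may be worth obtaining the venue-specific dictionary.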

avc: denied default_android_hwservice, violet neverallow

At first I got logs like this:
11-11 11:11:14.779 2287 2287 E SELinux : avc: denied { add } for interface=vendor.abc.wifi.wifidiagnostic::IWifiDiagnostic sid=u:r:wifidiagnostic:s0 pid=2838 scontext=u:r:wifidiagnostic:s0 tcontext=u:object_r:default_android_hwservice:s0 tclass=hwservice_manager permissive=1
11-11 11:11:14.781 2838 2838 I ServiceManagement: Registered vendor.abc.wifi.wifidiagnostic#1.0::IWifiDiagnostic/default (start delay of 128ms)
11-11 11:11:14.781 2838 2838 I ServiceManagement: Removing namespace from process name vendor.abc.wifi.wifidiagnostic#1.0-service to wifidiagnostic#1.0-service.
But if I add
allow wifidiagnostic default_android_hwservice:hwservice_manager { add };
I get a compile error:
libsepol.report_failure: neverallow on line 511 of system/sepolicy/public/domain.te (or line 11982 of policy.conf) violated by allow wifidiagnostic default_android_hwservice:hwservice_manager { add };
libsepol.check_assertions: 1 neverallow failures occurred
Error while expanding policy
How can I resolve it?
wifidiagnostic is a native service that provides a diagnostic feature. I define the type in wifidiagnostic.te:
# wifidiagnostic service
type wifidiagnostic, domain;
type wifidiagnostic_exec, exec_type, file_type, vendor_file_type;
init_daemon_domain(wifidiagnostic)
allow wifidiagnostic hwservicemanager_prop:file { getattr map open read };
allow wifidiagnostic hwservicemanager:binder { call transfer };
#allow wifidiagnostic default_android_hwservice:hwservice_manager { add };
allow wifidiagnostic hidl_base_hwservice:hwservice_manager { add };
and add the label in file_contexts:
/vendor/bin/hw/vendor.abc.wifi.wifidiagnostic#1.0-service u:object_r:wifidiagnostic_exec:s0
You should also define your own service type.
Try the following:
# hwservice_contexts
vendor.abc.wifi.wifidiagnostic::IWifiDiagnostic u:object_r:vendor_abc_wifi_wifidiagnostic_hwservice:s0
# wifidiagnostic.te
type vendor_abc_wifi_wifidiagnostic_hwservice, hwservice_manager_type;
add_hwservice(wifidiagnostic, vendor_abc_wifi_wifidiagnostic_hwservice)
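For context, the add_hwservice macro (defined in system/sepolicy) grants the add permission on your dedicated hwservice type rather than on default_android_hwservice, which is what keeps you clear of the neverallow in domain.te. It expands to roughly the following (an approximation, not the exact macro body):

```
# Approximate expansion of add_hwservice(wifidiagnostic, vendor_abc_wifi_wifidiagnostic_hwservice)
allow wifidiagnostic vendor_abc_wifi_wifidiagnostic_hwservice:hwservice_manager { add find };
allow wifidiagnostic hidl_base_hwservice:hwservice_manager add;
neverallow { domain -wifidiagnostic } vendor_abc_wifi_wifidiagnostic_hwservice:hwservice_manager add;
```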
To allow a service to access a HAL you can use the hal_client_domain() macro (defined in system/sepolicy/public/te_macros).
I cannot tell from your description what your hal type is. Allowing access to the wifi HAL would look like this:
type wifidiagnostic, domain;
type wifidiagnostic_exec, exec_type, file_type, vendor_file_type;
# Allow context switch from init to wifidiagnostic.
init_daemon_domain(wifidiagnostic)
# Allow accessing wifi HAL.
hal_client_domain(wifidiagnostic, hal_wifi_hwservice)

How to use Spark BigQuery Connector locally?

For test purposes, I would like to use the BigQuery Connector to write Parquet Avro logs to BigQuery. As of this writing there is no way to ingest Parquet directly from the UI, so I'm writing a Spark job to do it.
In Scala, for the time being, the job body is the following:
val events: RDD[RichTrackEvent] =
  readParquetRDD[RichTrackEvent, RichTrackEvent](sc, googleCloudStorageUrl)
val conf = sc.hadoopConfiguration
conf.set("mapred.bq.project.id", "myproject")

// Output parameters
val projectId = conf.get("fs.gs.project.id")
val outputDatasetId = "logs"
val outputTableId = "test"
val outputTableSchema = LogSchema.schema

// Output configuration
BigQueryConfiguration.configureBigQueryOutput(
  conf, projectId, outputDatasetId, outputTableId, outputTableSchema
)
conf.set(
  "mapreduce.job.outputformat.class",
  classOf[BigQueryOutputFormat[_, _]].getName
)

events
  .mapPartitions { items =>
    val gson = new Gson()
    items.map(e => gson.fromJson(e.toString, classOf[JsonObject]))
  }
  .map(x => (null, x))
  .saveAsNewAPIHadoopDataset(conf)
As the BigQueryOutputFormat isn't finding the Google Credentials, it fallbacks on metadata host to try to discover them with the following stacktrace:
2016-06-13 11:40:53 WARN HttpTransport:993 - exception thrown while executing request
java.net.UnknownHostException: metadata
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:160)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:207)
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:72)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.createBigQueryCredential(BigQueryFactory.java:81)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQuery(BigQueryFactory.java:101)
at com.google.cloud.hadoop.io.bigquery.BigQueryFactory.getBigQueryHelper(BigQueryFactory.java:89)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputCommitter.<init>(BigQueryOutputCommitter.java:70)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:102)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:84)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getOutputCommitter(BigQueryOutputFormat.java:30)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1135)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1078)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1078)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:357)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1078)
This is of course expected, but the connector should be able to use my service account and its key, since GoogleCredential.getApplicationDefault() returns the appropriate credentials fetched from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Since the connector seems to read credentials from the Hadoop configuration, what are the keys to set so that it picks up GOOGLE_APPLICATION_CREDENTIALS? Is there a way to configure the output format to use a provided GoogleCredential object?
If I understand your question correctly - you might want to set:
<name>mapred.bq.auth.service.account.enable</name>
<name>mapred.bq.auth.service.account.email</name>
<name>mapred.bq.auth.service.account.keyfile</name>
<name>mapred.bq.project.id</name>
<name>mapred.bq.gcs.bucket</name>
Here, the mapred.bq.auth.service.account.keyfile should point to the full file path to the older-style "P12" keyfile; alternatively, if you're using the newer "JSON" keyfiles, you should replace the "email" and "keyfile" entries with the single mapred.bq.auth.service.account.json.keyfile key:
<name>mapred.bq.auth.service.account.enable</name>
<name>mapred.bq.auth.service.account.json.keyfile</name>
<name>mapred.bq.project.id</name>
<name>mapred.bq.gcs.bucket</name>
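In a Spark job these can also be set programmatically on the Hadoop configuration instead of in an XML file — a sketch with placeholder values (the property names are the ones listed above; the path, project ID, and bucket are assumptions):

```scala
// Sketch: service-account auth for the BigQuery connector, set directly
// on the job's Hadoop configuration. The values below are placeholders.
conf.set("mapred.bq.auth.service.account.enable", "true")
conf.set("mapred.bq.auth.service.account.json.keyfile", "/path/to/service-account.json")
conf.set("mapred.bq.project.id", "myproject")
conf.set("mapred.bq.gcs.bucket", "my-temp-bucket")
```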
Also you might want to take a look at https://github.com/spotify/spark-bigquery, which is a much more civilised way of working with BigQuery and Spark. Its setGcpJsonKeyFile method takes the same JSON file you'd set for mapred.bq.auth.service.account.json.keyfile when using the BigQuery connector for Hadoop.

Steam OpenId and Play Framework

I am having trouble using Steam as an OpenID provider. Everything works fine until the callback to my site is made: I see the Steam login page and I can log in with my user, but when the callback executes I get an exception.
I use Play 2.2 and Scala. The code is very similar to the one found in the Play docs:
def loginPost = Action.async { implicit request =>
  OpenID.redirectURL("http://steamcommunity.com/openid",
    routes.Application.openIDCallback.absoluteURL(),
    realm = Option("http://mydomain.com/"))
    .map(url => Redirect(url))
    .recover { case error => Redirect(routes.Application.login) }
}

def openIDCallback = Action.async { implicit request =>
  OpenID.verifiedId.map(info => Ok(info.id + "\n" + info.attributes))
    .recover {
      case error =>
        println(error.getMessage()) // prints null
        Redirect(routes.Application.login)
    }
}
Stacktrace:
Internal server error, for (GET) [/steam/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=error&openid.error=Invalid+claimed_id+or+identity] ->
play.api.Application$$anon$1: Execution exception[[BAD_RESPONSE$: null]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.3.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.3.jar:na]
Caused by: play.api.libs.openid.Errors$BAD_RESPONSE$: null
at play.api.libs.openid.Errors$BAD_RESPONSE$.<clinit>(OpenIDError.scala) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:111) ~[play_2.10.jar:2.2.1]
at play.api.libs.openid.OpenIDClient.verifiedId(OpenID.scala:92) ~[play_2.10.jar:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:29) ~[classes/:2.2.1]
at controllers.Application$$anonfun$openIDCallback$1.apply(Application.scala:28) ~[classes/:2.2.1]
at play.api.mvc.Action$.invokeBlock(Action.scala:357) ~[play_2.10.jar:2.2.1]
I see the error message openid.error=Invalid+claimed_id+or+identity in the returned URL, but I couldn't find anything related to it.
What am I missing? Thanks.
It's because the Play Framework OpenID classes are not properly generating the redirect URL. Print out the value of the url variable from this line in your code:
.map(url => Redirect(url))
It most likely looks something like this:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.identity=http%3A%2F%2Fsteamcommunity.com%2Fopenid
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
This is incorrect per the OpenID 2.0 spec, specifically at http://openid.net/specs/openid-authentication-2_0.html#discovered_info:
If the end user entered an OpenID Provider (OP) Identifier, there is no Claimed Identifier. For the purposes of making OpenID Authentication requests, the value "http://specs.openid.net/auth/2.0/identifier_select" MUST be used as both the Claimed Identifier and the OP-Local Identifier when an OP Identifier is entered.
Based on that, the generated redirect url variable should be:
https://steamcommunity.com/openid/login?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0
&openid.mode=checkid_setup
&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select
&openid.return_to=http%3A%2F%2Fwww.mydomain.com%2Fsteam%2Flogin
&openid.realm=http%3A%2F%2Fwww.mydomain.com
I've written up an issue for this on the Play Framework issue tracker:
https://github.com/playframework/playframework/issues/3740
In the meantime as a temporary hack/fix, you can use any number of string replacement techniques on your url variable to set the openid.claimed_id and openid.identity parameters to the correct http://specs.openid.net/auth/2.0/identifier_select value.
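One possible shape of that string-replacement hack (a sketch only — the helper name is hypothetical; the parameter values come from the URLs above):

```scala
import java.net.URLEncoder

// Sketch: rewrite the URL-encoded claimed_id/identity parameters to the
// identifier_select value mandated by the OpenID 2.0 spec.
def fixSteamRedirect(url: String): String = {
  val enc = (s: String) => URLEncoder.encode(s, "UTF-8")
  val opId = enc("http://steamcommunity.com/openid")
  val idSelect = enc("http://specs.openid.net/auth/2.0/identifier_select")
  url.replace(s"openid.claimed_id=$opId", s"openid.claimed_id=$idSelect")
     .replace(s"openid.identity=$opId", s"openid.identity=$idSelect")
}
```

You would then call it where the redirect is built, e.g. `.map(url => Redirect(fixSteamRedirect(url)))`.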

AKKA remote (with SSL) can't find keystore/truststore files on classpath

I'm trying to configure an Akka SSL connection to use my keystore and truststore files, and I want it to be able to find them on the classpath.
I tried to set application.conf to:
...
remote.netty.ssl = {
enable = on
key-store = "keystore"
key-store-password = "passwd"
trust-store = "truststore"
trust-store-password = "passwd"
protocol = "TLSv1"
random-number-generator = "AES128CounterSecureRNG"
enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
}
...
This works fine if the keystore and truststore files are in the current directory. In my application these files get packaged into WAR and JAR archives, so I'd like to read them from the classpath.
I tried to use getResource("keystore") in application.conf as described here without any luck. Config reads it literally as a string.
I also tried to parse String conf and force it to read the value:
val conf: Config = ConfigFactory parseString (s"""
...
"${getClass.getClassLoader.getResource("keystore").getPath}"
...""")
In this case it finds proper path on the classpath as file://some_dir/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore which is exactly where it's located (in the jar). However, underlying Netty SSL transport can't find the file given this path, and I get:
Oct 03, 2013 1:02:48 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
WARNING: Failed to initialize an accepted socket.
45a13eb9-6cb1-46a7-a789-e48da9997f0fakka.remote.RemoteTransportException: Server SSL connection could not be established because key store could not be loaded
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:113)
at akka.remote.netty.NettySSLSupport$.initializeServerSSL(NettySSLSupport.scala:130)
at akka.remote.netty.NettySSLSupport$.apply(NettySSLSupport.scala:27)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$.defaultStack(NettyRemoteSupport.scala:74)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:277)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:242)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: file:/some_dir/server/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:118)
... 10 more
I wonder if there is any way to configure this in AKKA without implementing custom SSL transport. Maybe I should configure Netty in the code?
Obviously I can hardcode the path or read it from an environment variable, but I would prefer a more flexible classpath solution.
I decided to look at akka.remote.netty.NettySSLSupport, at the code the exception is thrown from:
def initializeServerSSL(settings: NettySettings, log: LoggingAdapter): SslHandler = {
  log.debug("Server SSL is enabled, initialising ...")
  def constructServerContext(settings: NettySettings, log: LoggingAdapter, keyStorePath: String, keyStorePassword: String, protocol: String): Option[SSLContext] =
    try {
      val rng = initializeCustomSecureRandom(settings.SSLRandomNumberGenerator, settings.SSLRandomSource, log)
      val factory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
      factory.init({
        val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
        val fin = new FileInputStream(keyStorePath)
        try keyStore.load(fin, keyStorePassword.toCharArray) finally fin.close()
        keyStore
      }, keyStorePassword.toCharArray)
      Option(SSLContext.getInstance(protocol)) map { ctx ⇒ ctx.init(factory.getKeyManagers, null, rng); ctx }
    } catch {
      case e: FileNotFoundException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because key store could not be loaded", e)
      case e: IOException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because: " + e.getMessage, e)
      case e: GeneralSecurityException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because SSL context could not be constructed", e)
    }
It looks like it must be a plain filename (String) because that's what FileInputStream takes.
Any suggestions would be welcome!
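Given that FileInputStream constraint, one common workaround (a sketch; the helper below is hypothetical, not part of Akka) is to copy the classpath resource to a temporary file at startup and hand that real path to the config, e.g. via ConfigFactory.parseString as attempted above:

```scala
import java.io.File
import java.nio.file.{Files, StandardCopyOption}

// Sketch: extract a classpath resource to a temp file so APIs that insist
// on a real file path (like FileInputStream) can read it.
def extractToTempFile(resource: String): String = {
  val in = getClass.getClassLoader.getResourceAsStream(resource)
  require(in != null, s"resource $resource not found on classpath")
  val tmp = File.createTempFile("akka-ssl-", ".tmp")
  tmp.deleteOnExit()
  try Files.copy(in, tmp.toPath, StandardCopyOption.REPLACE_EXISTING)
  finally in.close()
  tmp.getAbsolutePath
}
```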
I also got stuck on a similar issue and was getting similar errors. In my case I was trying to hit an HTTPS server with self-signed certificates using akka-http; with the following code I was able to get through:
val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
.withAcceptAnyCertificate(true)
.withDisableHostnameVerification(true)
).withTrustManagerConfig(trustManagerConfig))
val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)
At the time of writing this question there was no way to do it, AFAIK. I'm closing this question, but I welcome updates if new versions provide such functionality or if there are other ways to do it.

Derby.js with Derby Auth errors

I'm just starting with Derby and began by integrating MongoDB and derby-auth. However, I'm having trouble with that and can't figure it out. The following code produces the stack trace below it, and finding out what causes the error is really hard. This is a new project generated with derby -c new project. All I know is that it's related to the auth.store(store) line.
UPDATE: I'm also having this problem with the example/ in derby-auth, but only on my first request, so I guess it's related to the database connection.
http = require 'http'
path = require 'path'
express = require 'express'
gzippo = require 'gzippo'
derby = require 'derby'
app = require '../app'
serverError = require './serverError'
auth = require('derby-auth')
MongoStore = require('connect-mongo')(express)
## SERVER CONFIGURATION ##
expressApp = express()
server = http.createServer expressApp
derby.use(require 'racer-db-mongo')
derby.use(derby.logPlugin)
store = derby.createStore
  listen: server
  db:
    type: 'Mongo'
    uri: 'mongodb://localhost/admin'
module.exports = server
auth.store(store)
ONE_YEAR = 1000 * 60 * 60 * 24 * 365
root = path.dirname path.dirname __dirname
publicPath = path.join root, 'public'
# Authentication strategies (providers actually)
strategies =
  facebook:
    strategy: require('passport-facebook').Strategy
    conf:
      clientID: 'boo'
      clientSecret: 'boo'
# Authentication options
options =
  domain: (if process.env.NODE_ENV is 'production' then "http://www.mydomain.com" else "http://localhost:3000")
expressApp
  .use(express.favicon())
  # Gzip static files and serve from memory
  .use(gzippo.staticGzip publicPath, maxAge: ONE_YEAR)
  # Gzip dynamically rendered content
  .use(express.compress())
  # Uncomment to add form data parsing support
  .use(express.bodyParser())
  .use(express.methodOverride())
  # Uncomment and supply secret to add Derby session handling
  # Derby session middleware creates req.session and socket.io sessions
  .use(express.cookieParser())
  .use(store.sessionMiddleware
    secret: 'mooo'
    cookie: {maxAge: ONE_YEAR}
    store: new MongoStore({ url: 'mongodb://localhost/admin' })
  )
  # Adds req.getModel method
  .use(store.modelMiddleware())
  # Authentication
  .use(auth.middleware(strategies, options))
  # Creates an express middleware from the app's routes
  .use(app.router())
  .use(expressApp.router)
  .use(serverError root)
## SERVER ONLY ROUTES ##
expressApp.all '*', (req) ->
  throw "404: #{req.url}"
Stack Trace:
TypeError: Cannot read property '_at' of undefined
at Model.mixin.proto._createRef ($derbypath/node_modules/derby/node_modules/racer/lib/refs/index.js:158:20)
at Model.mixin.proto.ref ($derbypath/node_modules/derby/node_modules/racer/lib/refs/index.js:124:19)
at $derbypath/lib/app/index.js:18:11
at Object.fail ($derbypath/node_modules/derby/node_modules/racer/lib/descriptor/descriptor.Model.js:268:21)
at Object.module.exports.events.middleware.middleware.subscribe.add._res.fail ($derbypath/node_modules/derby/node_modules/racer/lib/pubSub/pubSub.Store.js:80:40)
at module.exports.events.init.store.eachContext.context.guardReadPath.context.guardReadPath ($derbypath/node_modules/derby/node_modules/racer/lib/accessControl/accessControl.Store.js:25:26)
at next ($derbypath/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at guard ($derbypath/node_modules/derby/node_modules/racer/lib/accessControl/accessControl.Store.js:156:36)
at next ($derbypath/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at run ($derbypath/node_modules/derby/node_modules/racer/lib/middleware.js:10:12)
at $derbypath/node_modules/derby/node_modules/racer/lib/pubSub/pubSub.Store.js:88:11
at next ($derbypath/node_modules/derby/node_modules/racer/lib/middleware.js:7:26)
at Object.run [as subscribe] ($derbypath/node_modules/derby/node_modules/racer/lib/middleware.js:10:12)
at module.exports.server._addSub ($derbypath/node_modules/derby/node_modules/racer/lib/descriptor/descriptor.Model.js:237:26)
at Object.Promise.on ($derbypath/node_modules/derby/node_modules/racer/lib/util/Promise.js:29:7)
at Model.module.exports.server._addSub ($derbypath/node_modules/derby/node_modules/racer/lib/descriptor/descriptor.Model.js:221:29)
at subscribe ($derbypath/node_modules/derby/node_modules/racer/lib/descriptor/descriptor.Model.js:267:9)
at Model.module.exports.proto.subscribe ($derbypath/node_modules/derby/node_modules/racer/lib/descriptor/descriptor.Model.js:106:7)
at $derbypath/lib/app/index.js:17:16
at onRoute ($derbypath/node_modules/derby/lib/derby.server.js:69:7)
at app.router ($derbypath/node_modules/derby/node_modules/tracks/lib/index.js:96:16)
at callbacks ($derbypath/node_modules/derby/node_modules/tracks/node_modules/express/lib/router/index.js:160:37)
at param ($derbypath/node_modules/derby/node_modules/tracks/node_modules/express/lib/router/index.js:134:11)
at param ($derbypath/node_modules/derby/node_modules/tracks/node_modules/express/lib/router/index.js:131:11)
at pass ($derbypath/node_modules/derby/node_modules/tracks/node_modules/express/lib/router/index.js:141:5)
at Router._dispatch ($derbypath/node_modules/derby/node_modules/tracks/node_modules/express/lib/router/index.js:169:5)
at dispatch ($derbypath/node_modules/derby/node_modules/tracks/lib/index.js:43:21)
at Object.middleware [as handle] ($derbypath/node_modules/derby/node_modules/tracks/lib/index.js:58:7)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at app.use.fn ($derbypath/node_modules/express/lib/application.js:121:9)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:127:23)
at pass ($derbypath/node_modules/express/lib/router/index.js:107:24)
at Router._dispatch ($derbypath/node_modules/express/lib/router/index.js:170:5)
at Object.router ($derbypath/node_modules/express/lib/router/index.js:33:10)
at Context.next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Context.actions.pass ($derbypath/node_modules/derby-auth/node_modules/passport/lib/passport/context/http/actions.js:77:8)
at SessionStrategy.authenticate ($derbypath/node_modules/derby-auth/node_modules/passport/lib/passport/strategies/session.js:52:10)
at attempt ($derbypath/node_modules/derby-auth/node_modules/passport/lib/passport/middleware/authenticate.js:243:16)
at Passport.authenticate ($derbypath/node_modules/derby-auth/node_modules/passport/lib/passport/middleware/authenticate.js:244:7)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Passport.initialize ($derbypath/node_modules/derby-auth/node_modules/passport/lib/passport/middleware/initialize.js:69:5)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Object.handle ($derbypath/node_modules/derby-auth/index.js:71:16)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Object.module.exports [as handle] ($derbypath/node_modules/derby-auth/node_modules/connect-flash/lib/flash.js:20:5)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Object.expressInit [as handle] ($derbypath/node_modules/express/lib/middleware.js:31:5)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Object.query [as handle] ($derbypath/node_modules/express/node_modules/connect/lib/middleware/query.js:44:5)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Function.app.handle ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:198:3)
at Object.app.use.fn [as handle] ($derbypath/node_modules/express/lib/application.js:117:11)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at Object.modelMiddleware [as handle] ($derbypath/node_modules/derby/node_modules/racer/lib/session/session.Store.js:119:5)
at next ($derbypath/node_modules/express/node_modules/connect/lib/proto.js:190:15)
at store.get.next ($derbypath/node_modules/derby/node_modules/racer/node_modules/connect/lib/middleware/session.js:313:9)
at $derbypath/node_modules/derby/node_modules/racer/node_modules/connect/lib/middleware/session.js:337:9
at sessStore.load.sessStore.get ($derbypath/node_modules/derby/node_modules/racer/lib/session/session.Store.js:262:11)
at process.startup.processNextTick.process._tickCallback (node.js:245:9)
Check this line in your app: $derbypath/lib/app/index.js:18:11 — per the stack trace, that is where the failing model.ref call originates.