I am new to Flutter and Couchbase and am trying to connect to the sample bucket travel-sample using the fluttercouch plugin, but I am getting the error "unable to set target endpoint to ws://10.0.2.2:8091/travel-sample" when setting the endpoint to ws://10.0.2.2:8091/travel-sample. Could someone explain what the endpoint should be and whether any changes are needed elsewhere? I am testing against the example on the master branch of the fluttercouch repository. Here are the main.dart and the bucket overview.
Couchbase Lite (the library underlying the FlutterCouch implementation) cannot sync directly with Couchbase Server. A Couchbase Sync Gateway has to be deployed between the server and the mobile database, with the following settings in its bucket configuration to allow syncing:
"enable_shared_bucket_access": true,
"import_docs": "continuous",
You can find further documentation about Couchbase Sync Gateway in the Couchbase Documentation.
The primary point of confusion is that you're trying to connect the fluttercouch plugin directly to Couchbase Server, while it's designed to talk to Couchbase Lite, which is what runs on a mobile device. I don't have enough experience with Flutter to tell you exactly what that endpoint should be, but it looks like you're targeting the wrong thing at the moment.
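Once a Sync Gateway is running, the replication endpoint points at its public port (4984 by default) rather than the server's 8091. A rough sketch following the pattern in the fluttercouch README (method names may differ across plugin versions, and the database name is an assumption):

```dart
import 'package:fluttercouch/fluttercouch.dart';

class AppModel with Fluttercouch {
  Future<void> initCouchbase() async {
    await initDatabaseWithName('travel-sample');
    // 10.0.2.2 reaches the host machine from the Android emulator;
    // 4984 is Sync Gateway's default public port.
    setReplicatorEndpoint('ws://10.0.2.2:4984/travel-sample');
    setReplicatorType('PUSH_AND_PULL');
    startReplicator();
  }
}
```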
I deployed Neo4j in Kubernetes, and when I try to access http://<host>:7474/ it shows me this error: "ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver." How can I solve it?
You need to uncomment a single line inside $NEO4J_HOME/conf/neo4j.conf:
dbms.connector.bolt.address=0.0.0.0:7687
The issue is well described here.
If you're using Neo4j Helm, then instead of neo4j.conf you will have a values.yaml; this is also well described here.
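If the ports still aren't reachable from your browser, a quick connectivity test (the service name neo4j is an assumption; substitute the service your Helm release created) is to forward both the HTTP and Bolt ports locally:

```
kubectl port-forward svc/neo4j 7474:7474 7687:7687
```

The browser error above usually means port 7474 is reachable but the Bolt port 7687 is not.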
Currently, I'm developing a native application using React Native. I've decided to go with AWS Amplify because of its real-time updates as well as its authentication.
I also have a web application that runs on a Node.js server with Express. This web application connects to a MongoDB database.
My big problem is that I would like all of my AWS Amplify queries to run against my existing MongoDB instead of the new DynamoDB database that comes with AWS AppSync, but unfortunately I don't know where to start. This would also make it especially easy to add authentication to my existing web application.
My first idea was to just create all my API endpoints on a new Node.js server and have AppSync call those endpoints, but I'm not sure how to implement calling endpoints on an existing server (and this seems counterintuitive to the 'serverless' idea).
My other idea came from this: Can AWS App-Sync be used without dynamoDB
That answer suggests using AWS Lambda to 'pipeline' my data to the existing MongoDB, but I'm not really sure what that entails.
TL;DR - I would like to be able to query an existing MongoDB instead of DynamoDB when using AWS Amplify with AppSync.
I hope this is clear enough and doesn't sound like I'm rambling. Thanks in advance!
I would suggest using either an HTTP datasource to connect to your MongoDB backend or a Lambda function. Here are getting-started tutorials for both:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-http-resolvers.html
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html
If you go the Lambda route, then you can leverage the new @function feature of the GraphQL Transformer in the Amplify CLI: https://aws-amplify.github.io/docs/cli/graphql#function
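To make the Lambda route concrete, here is a rough sketch of a single resolver function backed by MongoDB. Everything in it (the collection names, the MONGO_URI environment variable, the field names) is illustrative rather than a definitive implementation; the event shape shown is the one the @function directive passes to the Lambda:

```typescript
import { MongoClient } from "mongodb";

// Cache the client so warm Lambda invocations reuse the connection.
let client: MongoClient | null = null;

export const handler = async (event: {
  typeName: string;
  fieldName: string;
  arguments: Record<string, any>;
}) => {
  if (!client) {
    client = await MongoClient.connect(process.env.MONGO_URI!);
  }
  const todos = client.db("appdb").collection("todos");

  // Route on the GraphQL field that invoked this function.
  switch (event.fieldName) {
    case "listTodos":
      return todos.find({}).toArray();
    case "getTodo":
      return todos.findOne({ id: event.arguments.id });
    default:
      throw new Error(`Unhandled field: ${event.fieldName}`);
  }
};
```

In the schema you would then attach the function to the relevant fields, e.g. `listTodos: [Todo] @function(name: "mongoResolver-${env}")`.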
When Spark is deployed in YARN cluster mode, how should I issue calls to the Spark monitoring REST API (http://spark.apache.org/docs/latest/monitoring.html)?
Does YARN have an API that takes a REST call such as
http://localhost:4040/api/v1/applications/[app-id]/jobs
(I already know the app-id), proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume that, whether by policy or by design, I cannot talk to the driver machine directly for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it is available.
From this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications, and in the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
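For example, against a locally running application (the application id below is made up):

```
# List the applications this endpoint knows about
curl http://localhost:4040/api/v1/applications

# Fetch the jobs of one application
curl http://localhost:4040/api/v1/applications/app-20190101123456-0001/jobs
```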
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster Linux using Livy
Other options include
Triggering spark jobs with REST
This is what worked for me:
In the YARN Resource Manager UI, click the "ApplicationMaster" link for the running application and note the URL it redirects to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to the URL for the api.
For above case the api url is
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"
Normally, Node-RED flows are stored somewhere in the filesystem, in a file named flows_XXX.json.
When running Node-RED on Bluemix where are they stored?
This could be important if your Node-RED instance doesn't start anymore.
A Node-RED instance on Bluemix, when created from the Node-RED boilerplate, always comes with a Cloudant database service connected.
Open the Cloudant dashboard
Open the database nodered
Open the document <app_name>/flow (Use the edit icon to open it)
You can now copy all the flows from this Node-RED instance.
Simply remove this part from the beginning:
{
"_id": "HUe-IoT-RED/flow",
"_rev": "6-3813d11089aa3e3adb9e704d4251bcdd",
"flow":
and the trailing }
Everything between the [ and ] is the array of flows. It can be imported into another Node-RED instance.
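If you prefer the command line to the dashboard, the same document can be fetched directly from Cloudant's CouchDB-compatible HTTP API (host, credentials, and app name are placeholders; note that the / in the document id must be URL-encoded as %2F):

```
curl -u "$CLOUDANT_USER:$CLOUDANT_PASS" \
  "https://$CLOUDANT_HOST/nodered/<app_name>%2Fflow"
```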
More info on Node-RED website and Node-RED GitHub repo
For the boilerplate install, all data, including flows, is persisted to the bound Cloudant database.
Details can be found in the node-red-bluemix repo - https://github.com/node-red/node-red-bluemix
As described by Harald in a previous answer, once you create an instance of the Node-RED boilerplate, it is bound to a Cloudant NoSQL instance for data instead of the classic JSON file. This is because a file on the filesystem would be reset as soon as your application restarts, while a database service persists.
So if you wish to retrieve your application flows once the app isn't able to start anymore, you have to access the Cloudant NoSQL dashboard and extract the data locally.
Generally, when the Node-RED instance doesn't start anymore (after something was changed, etc.), you can re-push the starter code on top of your old, broken application. The app is then reset as on first deployment, but you don't lose the flows, because they are stored in the Cloudant DB.
How do I add support for a new database (MongoDB) in version 2.6.3 of WSO2 Data Services Server?
You can use DSS (2.6.3) with any database type as long as its connectivity is exposed via JDBC. In other words, if your preferred database provides a JDBC driver/adapter for users to connect to it, then you can use DSS to expose the data stored in it as a web service. Similarly, if MongoDB has a JDBC adapter, you shouldn't have any (or too many :) ) issues integrating it with DSS. There are some exceptions when it comes to exposing flat files such as Google spreadsheets, Excel sheets, and CSV files, as DSS uses the relevant client APIs (the Google GData client API, Apache POI, etc.) to connect to those datasources and extract data. In the general case, however, you need an adapter or a similar mechanism to connect to your datasource via JDBC.
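As a quick sanity check that such an adapter is usable before wiring it into DSS, a plain JDBC connection test is enough. The driver class and URL below are hypothetical stand-ins for whatever the MongoDB JDBC adapter you find actually ships with:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder driver class and URL; replace with the values
        // documented by your JDBC adapter.
        Class.forName("com.example.mongodb.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:example:mongodb://localhost:27017/mydb", "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

If this runs, DSS should be able to use the same driver class and connection URL in its datasource configuration.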
But in the upcoming version of DSS (v3.0.0), it is planned to introduce custom datasource support, so you can easily write an adapter for any datasource and use it with DSS.
Regards,
Prabath
I am not sure about this, but I suppose that if it is not supported by default, you can always download the JAR library for MongoDB, put it in CARBON_HOME\repository\components\lib, and restart. For example, for MySQL I have mysql-connector-java-5.1.7-bin.jar in that folder.
Hope this helps.