Import test users into a Meteor app using an npm script?

We can easily create a user from the Meteor shell like this:
Accounts.createUser({username: 'john', password: '12345'})
Similarly, I just want to add multiple users via an npm script. Any ideas?
In other words, I want to use the fixtures functionality via an npm command, and not only on the initial run.
Thank you.

For normal collections (i.e. other than Meteor.users), you can directly tap into your MongoDB collection: open a Meteor Mongo shell while your project is running in development mode, then type Mongo shell commands directly.
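For example, while the app is running in one terminal, you can open the bundled Mongo shell in a second terminal and insert documents directly (a quick sketch, assuming a hypothetical tasks collection; insertOne requires a reasonably recent bundled Mongo shell, otherwise use insert):
$ meteor mongo
> db.tasks.insertOne({ title: 'Sample task', createdAt: new Date() })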
For the Meteor.users collection, you want to leverage the automatic management provided by the accounts-base and accounts-password packages, so instead of fiddling with MongoDB directly, you want to insert documents / users through your Meteor app.
Unfortunately, your app source files (like your UsersFixtures.js file) are not at all suitable for CLI usage.
The usual solution is to embed a dedicated method within your app server:
// On your server.
// Make sure this method is not available in production.
// When started with `meteor run`, NODE_ENV will be `development` unless set otherwise in your environment variables.
if (process.env.NODE_ENV !== 'production') {
  Meteor.methods({
    addTestUser(username, password) {
      Accounts.createUser({
        username,
        // If you do not want to transmit the clear password even in a dev environment, you can
        // call the method with a 2nd arg of the form {algorithm: "sha-256", digest: sha256function(password)}
        password
      });
    }
  });
}
Then start your Meteor project in development mode (meteor run), access your app in your browser, open your browser console, and directly call the method from there:
Meteor.call('addTestUser', myUsername, myPassword)
You could also use Accounts.createUser directly in your browser console, but it will automatically log you in as the new user.
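And since the question is about seeding several users at once, you can call that same method in a loop from the browser console (a small sketch; the usernames and passwords below are made up):
// Browser console, with the dev-only addTestUser method defined on the server as above.
[
  { username: 'alice', password: 'pass1234' },
  { username: 'bob', password: 'pass1234' },
  { username: 'carol', password: 'pass1234' }
].forEach(function (u) {
  Meteor.call('addTestUser', u.username, u.password, function (err) {
    if (err) console.error('Failed to create', u.username, err);
  });
});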

Related

De-serialize JSON metadata to .qvf using the Qlik Sense API

I am aware of the Qlik Sense serialize-app approach, where we generate a JSON object containing the metadata of a .qvf file using the Qlik Sense API.
I want to do the reverse operation, i.e. generate the .qvf file back from the JSON metadata.
After a lot of research I only found this GitHub link, and it does not have complete information.
Any solution would be helpful.
Technically you can't create a .qvf directly from JSON. You'll have to create an empty .qvf and then use various APIs to import the JSON.
Qlik has a very nice tool for unbuilding/building apps (and more): qlik-cli has dedicated commands for unbuild/build.
If you are looking for something more "programmable", then I've created an enigma.js mixin for the same purpose - enigma-mixin. I still need to perform more detailed testing there, but it was working OK with simpler tests.
Update 08/10/2021
Using qlik-cli
First, set up a context so qlik-cli knows which Qlik environment to talk to (see the context commands in the qlik-cli docs). Then unbuild an app:
qlik app unbuild --app 11111111-2222-3333-4444-555555555555
This will create a new folder in the current directory named <app_name>-unbuild. The folder will contain all the info about the app in JSON and/or YAML files.
Once these files are available, you can use them to build another app. Note that the target app should exist before the build is run:
qlik.exe app build --config ./config.yml --app 55555555-4444-3333-2222-111111111111
The above command will use all available files (specified in config.yml) and update the target app.
If you don't want all files to be used and only want to update the data connections, for example, the build command can be run with different arguments:
qlik.exe app build --connections ./connections.yml --app 55555555-4444-3333-2222-111111111111
This command will only update the data connections in the target app and will not update anything else.

Securing NodeRED dashboard from unwanted access

I'm trying to set up some kind of user authentication to prevent unwanted access to my Node-RED user interface. I've searched online and found 2 solutions that, for some reason, didn't work out. Here they are:
I tried to add the httpNodeAuth: {user:"user", pass:"password"} key to bluemix-settings.js, but after that my dashboard kept prompting me for a username and password, even after I typed the password defined in the pass:"password" field.
I added the user-defined environment variables NODE_RED_USERNAME : username and NODE_RED_PASSWORD : password, but nothing changed.
Those solutions were suggested here: How could I prohibit anonymous access to my NodeRed UI Dashboard on IBM Cloud(Bluemix)?
Thanks for the help, guys!
Here is part of my bluemix-settings.js:
    autoInstallModules: true,
    // Move the admin UI
    httpAdminRoot: '/red',
    // Serve up the welcome page
    httpStatic: path.join(__dirname, "public"),
    // GUI password authentication (ALEX)
    httpNodeAuth: {user:"admin", pass:"$2y$12$W2VkVHvBTwRyGCEV0oDw7OkajzG3mdV3vKRDkbXMgIjDHw0mcotLC"},
    functionGlobalContext: { },
    // Configure the logging output
    logging: {
As described in the Node-RED docs here, you need to add a section like the following to settings.js (or, in the case of Bluemix/IBM Cloud, the bluemix-settings.js file):
...
httpNodeAuth: {user:"user",pass:"$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN."},
...
The pass field is a bcrypt hash of the password. There are 2 ways listed in the docs to generate the hash correctly.
If you have a local copy of Node-RED installed, you can use the following command:
node-red admin hash-pw
As long as you have a local Node.js install, you can use the following:
node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" your-password-here
You may need to install bcryptjs first with npm install bcryptjs.
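If the login prompt keeps rejecting your password (as in the question above), it is worth verifying that the hash you pasted actually matches what you type. bcryptjs can check that locally (a quick sanity check; quote the hash so the shell does not expand the $ characters, and replace both placeholders):
node -e "console.log(require('bcryptjs').compareSync(process.argv[1], process.argv[2]))" your-password-here 'your-bcrypt-hash-here'
This prints true if the password and hash match, false otherwise.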

Deploy and run a Go API server on Ubuntu/Centos

I just finished my first backend in Go using the Iris framework, but now I need to put it in production so I can use it with the Slack app I built.
To test the code locally, I just run my file with go run main.go and use ngrok to test against the Slack API; it's working and it's finished.
I have a droplet with Ubuntu 16.04.3 and another one with CentOS 7... I was searching for something like pm2 for Go, running the server and using nginx to point to that port, but I read that with Go it's different and I have to use something like this: https://fabianlee.org/2017/05/21/golang-running-a-go-binary-as-a-systemd-service-on-ubuntu-16-04/
But that's a very long configuration for a simple server and my questions are:
Is this the usual way to configure APIs with Go?
Apart from DigitalOcean, do you recommend a different service to run my API?
This is really my first time with Go and I just want to learn more; I am a backend developer with Laravel and Node.js.
You can use pm2 if you want. When you build a Go project it creates a binary executable, let's say backend-server, which you can run from the terminal to start the app like this:
$ ./backend-server
If it's not executable or you get a permission-denied error, add the executable permission to it:
$ chmod +x backend-server
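To produce the binary in the first place, a plain go build from the project root is enough (a small sketch; the -o flag only names the output file):
$ go build -o backend-server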
Your binary should be ready to run. I like to start it with a JSON config file (process.json) so that I can pass extra environment variables as well and don't have to type a lot in the terminal.
My process.json looks something like this:
{
  "apps": [{
    "name": "backend-app",
    "script": "./backend-server",
    "env": {
      "DB_USER": "db_user",
      "PORT": 8080
    }
  }]
}
Finally you can start the app using pm2 like this:
$ pm2 start process.json
More details about the JSON config can be found in the official pm2 docs.
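Optionally, if you also want pm2 to bring the app back up after a reboot (assuming a systemd-based distro such as the Ubuntu/CentOS droplets mentioned in the question), you can register a startup script and save the current process list:
$ pm2 startup
$ pm2 save
pm2 startup prints a command to run with root privileges; once that is done, pm2 save records the currently running apps so they are resurrected on boot.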
I think most people use Supervisor for this purpose, including me.
To make it very easy for you, just take a look at my Golang project, isaac-racing-server and use it as a template for yours by replacing isaac-racing-server with the name of your app. (The Supervisor files are in a subdirectory.)
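For reference, a Supervisor program entry for a Go binary is quite short. This is only a minimal sketch with made-up paths (the backend-server binary location, the deploy user, and the log files are placeholders to adjust); drop it into /etc/supervisor/conf.d/ and run supervisorctl reread && supervisorctl update:
[program:backend-app]
command=/home/deploy/backend-app/backend-server
directory=/home/deploy/backend-app
user=deploy
autostart=true
autorestart=true
environment=PORT="8080",DB_USER="db_user"
stdout_logfile=/var/log/backend-app.out.log
stderr_logfile=/var/log/backend-app.err.log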

Setting :deploy_to from server config in Capistrano3

In my Capistrano 3 deployment, I would like to use set :deploy_to, -> { "/srv/www/#{fetch(:application)}" } so that :deploy_to is different for each server it deploys to.
In my staging.rb file I have:
server 'dev.myserver.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/path'
server 'dev.myserver2.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/other/path'
My question is: would it be possible to use the install_path I defined in my :deploy_to? If that's possible, how would you do it?
Finally, after looking around, I came across an issue from one of the developers of Capistrano stating specifically that it can't be done.
Quote from the Github issue:
Not possible, sorry. fetch() (as is documented widely) reads values
set by set(), the only reason to use set() and fetch() over regular
ruby variables is to provide a consistent API between plugins and
extensions, and because set() can take a Proc to be resolved later.
The variables you are setting in the host object via the server()
command belong to an individual host, some of them, user, roles, etc
have special meanings. For more information see
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md#do-something-different-on-one-host-or-another-depending-on-a-host-property.
If you specifically need to deploy to a different directory on each
machine you probably should not be using the built-in tasks (they
don't fit your needs), and rather copy the deploy.rake from the Gem
into your own project, and modify it as you need. Which in this case
might be to not take fetch(:deploy_to), but to read that from a host
property.
You could try to do something where before doing anything that relies
on calling fetch(:deploy_to), you set() it using the value from
host.someproperty but I'm pretty sure that'll break in exciting and
interesting ways.

Connecting to a remote MongoDB using Meteor

Apologies in advance for any failings in my terminology and understanding of Meteor/Mongo; I've just started learning and developing with it.
I am trying to connect my local Meteor app to a remote MongoDB which is hosted elsewhere.
My code looks like this:
Bills = new Mongo.Collection("bills");

if (Meteor.isClient) {
  Meteor.subscribe("bills");

  // This code only runs on the client
  Template.body.helpers({
    documentContent: function () {
      return Bills.find();
    }
  });

  Template.documentBody.helpers({
    documentContent: function () {
      var thingy = Bills.find();
      console.log(thingy);
      return Bills.find({_id: "784576346gf874"});
    }
  });
}
I have connected the app to the remote DB from the terminal using the following:
$ MONGO_URL="mongodb://mysite.net:27017/legislation" meteor
In my browser I receive no errors, and within my defined template I see [object Object]. The console shows a local Minimongo collection but doesn't return any of my documents from the subscribed collection.
I guess what I am asking is; if you were connecting to a remote MongoDB within your local app, how would you do it?
Thank you for taking the time to read; any help is massively appreciated.
Rex, if you're not seeing errors in the browser output or in the console where you're running the server, then you may be set up OK. That's exactly how I'm doing it.
Run meteor list in your project directory and look for insecure and autopublish.
You should understand these two packages; they are for rapid prototyping. If they are present, then keep digging into MongoDB and the connection.
I recommend Robomongo for viewing documents directly in MongoDB.
If they are absent, then you need to go about publishing data (getting it from the server to the client) and securing it (letting clients only modify their data).
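For example, with autopublish removed, the client-side Meteor.subscribe("bills") from the question stays empty until the server publishes that collection; a minimal server-side sketch using the Bills collection defined above:
// Server only: publish the bills collection so the client subscription receives documents.
if (Meteor.isServer) {
  Meteor.publish("bills", function () {
    return Bills.find();
  });
}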
I recommend these two packages for that.
reywood:publish-composite
ongoworks:security
If you haven't read an introduction-to-Meteor book, it's really worth the time. I've been developing for some time and learned Meteor recently; it was invaluable.