I am using Redux Toolkit (v1.8) from the UMD build, as I am working inside a platform that does not support package management integration.
I was trying to access RTK Query, but could not find a way to reach it through the global exported by the UMD script.
I was wondering if:
I missed something and it's there, or
It is necessary to use another script, or
It is not possible to use RTK Query from the UMD script
Thanks!
There is a UMD build at https://unpkg.com/browse/@reduxjs/toolkit@1.8.0/dist/query/rtk-query.umd.js that will populate window.RTKQ.
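For illustration, a minimal sketch of wiring that up from plain script tags (this assumes the core UMD bundle is loaded first and exposes window.RTK; the base URL and endpoint are hypothetical):

<script src="https://unpkg.com/@reduxjs/toolkit@1.8.0/dist/redux-toolkit.umd.min.js"></script>
<script src="https://unpkg.com/@reduxjs/toolkit@1.8.0/dist/query/rtk-query.umd.min.js"></script>
<script>
  // window.RTKQ is populated by the rtk-query UMD bundle
  const api = window.RTKQ.createApi({
    baseQuery: window.RTKQ.fetchBaseQuery({ baseUrl: '/api' }), // hypothetical base URL
    endpoints: (build) => ({
      getPosts: build.query({ query: () => 'posts' }), // hypothetical endpoint
    }),
  });
  // the store comes from the core bundle's global
  const store = window.RTK.configureStore({
    reducer: { [api.reducerPath]: api.reducer },
    middleware: (getDefault) => getDefault().concat(api.middleware),
  });
</script>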
Generally, you should try to use the ESM build instead; most modern environments should be able to load it unbundled. Any kind of UMD build will probably be removed in RTK 2.0, since shipping UMD is simply not up to date any more.
Just out of curiosity: in what environment are you working? I cannot imagine anything that would work with only a UMD build.
I have a project with several routes, most of which use Talend Jobs, some of which use jobs that use tRunJob to call sub-jobs.
This project all worked fine in 6.5, but in 7.0.1 some of the routes do not build. (They compile fine when run in the Studio.)
Looking at the lastGenerated.log files for the routes, there is a step which installs the routeName_jobName component for routes that work. However, for the ones that don't work, it is actually installing routeName_subJobName and then complaining a few lines later that it can't find routeName_jobName.
There is nothing in the route referencing the subJobName directly, so I don't see why Talend is doing this.
There is no error message generated when building the routes that don't work; it just processes for a while, then closes the build window without having built the .kar file.
Is there a way of getting better logging from Talend so I can figure out why it is unhappy building these routes?
I'm currently in the middle of upgrading our API from v0.12 of Sails to v1. Not the easiest task, but it will be worth it.
The current problem I'm having is converting our old "ModelName.query" calls to the new style, which is supposedly "sails.getDatastore". Great, fine.
Except, that when trying to do this in config/bootstrap.js, I constantly get the error "sails.getDatastore is not a function".
Yes, I am using the default sails-hook-orm, the .sailsrc has it turned on explicitly; and yes, I have globals turned on.
Is the problem that the function isn't registered until after bootstrap? Because that is not an option for us; bootstrap is validating our database schema before lift (custom code, using native queries), so our production servers fail to deploy if we missed a database update. It eliminates a ton of human error.
Thanks for taking the 1.0 plunge!
I'm not sure what you mean by the "default" sails-hook-orm -- that hook is installed directly as a dependency on each Sails 1.0 project -- but I can almost guarantee that the version you're using is not correct. I would do:
npm cache clean
npm install sails-hook-orm@beta
in your project to make sure you get the latest (currently v2.0.0-21). It adds getDatastore to the app object when it initializes.
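Once you're on a current version, a bootstrap-time native query along the lines of what you describe might look like this (a minimal sketch; the query is a placeholder for your schema-validation logic):

// config/bootstrap.js
module.exports.bootstrap = function(done) {
  // Uses the default datastore; pass a name to sails.getDatastore('name') otherwise.
  sails.getDatastore()
    .sendNativeQuery('SELECT 1 AS ok')  // placeholder for your schema check
    .exec(function(err, rawResult) {
      if (err) { return done(err); }  // failing here aborts the lift/deploy
      return done();
    });
};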
We are trying to establish a continuous deployment environment and are conflicted about how to do ARM deployments. Deploying all the resources as a group is much better than handling them individually.
ARM has a nice declarative syntax: we state "what we intend to create" without having to write the sequence of programming commands to create it. That is great, but how should we run the templates?
Two options come to my mind:
I. Download the templates and use PowerShell.
II. Trigger the deployment using Azure Automation.
What is the best practice?
Reference
Octopus integration from source code
If you're doing this as part of your CI/CD chain, you probably want to check in the templates and deployment scripts with your source code. That way, the definition of the infrastructure is kept with the code that's intended to run on it.
If this is part of some other workflow, it really depends on the workflow :)
I would suggest using PowerShell or the CLI and just invoking the template from its URI; that is the easiest way of doing it (instead of downloading the template first). This can be run with anything that is capable of running a custom script task, or with CI/CD systems that have built-in steps to deploy an ARM template (VSTS, Octopus, probably others).
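For example, with the AzureRM PowerShell module, deploying a template straight from a URI looks roughly like this (the resource group name and template URI are placeholders):

New-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" -TemplateUri "https://example.com/templates/azuredeploy.json"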
I would advise against Azure Automation for this purpose.
Also, I do suggest keeping application code separate from the ARM templates.
Please note, although my specific example here involves Java/Grails, it really applies to any type of task available in Bamboo.
I have a task that is a part of a Bamboo build where I run a Java/Grails app like so:
grails run-app -Dgrails.env=<ENV>
Where "<ENV>" can be one of several values (dev, prod, staging, etc.). It would be nice to "parameterize" the plan so that, sometimes, it runs like so:
grails run-app -Dgrails.env=dev
And other times, it runs like so:
grails run-app -Dgrails.env=staging
etc. Is this possible, and if so, how? And does the REST API allow me to specify parameter info so I can kick off differently-parameterized builds using cURL or wget?
This seems to be a workaround, but I believe it can help resolve your issue. Atlassian has a free plugin called the Bamboo Inject Variables Plugin. Basically, with this plugin, you can create an "Inject Bamboo Variables from file" task to read a variable from a file.
So the idea here is to have your script write the variable to a specific file and then kick off the build; the build itself will read that variable from the file and use it in the grails task.
UPDATE
After a search, I found that you can use the REST API to change plan variables (NOT global ones). This makes your task simpler: just define a plan variable (in Plan Configuration -> Variables tab), then change it every time you need to. Information on how to change variables is available in the Bamboo Knowledge Base.
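For example, queueing a build while overriding a plan variable looks roughly like this (the host, credentials, plan key, and variable name are all placeholders):

curl -u user:password -X POST "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN?bamboo.variable.grailsEnv=staging"

The grails task can then reference the variable as ${bamboo.grailsEnv}, e.g. grails run-app -Dgrails.env=${bamboo.grailsEnv}.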
I've just picked up CoffeeScript and I'm struggling to understand the deployment workflow. It seems you constantly have to compile the .coffee files before using them. (Yes, I'm aware that you can have it embedded in the browser, but that's not recommended for production applications).
Does one have to constantly (manually) compile the files before deploying? (For example, if using Eclipse, a simple Ctrl+S saves and deploys the .war/.ear on the local machine's server.) Do we have to change the build scripts (for a central, possibly a CI server) to deploy .coffee files? Is there any way to have integrated compiling via the IDEs (Eclipse/NetBeans)?
Any ideas/pointers/examples on this? How/what have you used in the past?
I call browserify in my Cakefile to pre-compile and package my CoffeeScript for the browser. For an example of how I call browserify as well as coffeedoc and coffeedoctest take a look at the Cakefile for my Lumenize project.
If you are using Express or some other Node-based server, you can have your CoffeeScript compiled at request time using tools like NibJS, or, as described in The Little Book on CoffeeScript (Applications chapter), you can use Stitch. BTW, I highly recommend The Little Book. The "Compiling" chapter has information about Cake and compiling that might help you.
Yes, you should have a build script. Most CoffeeScript projects use a Cakefile for this; see, for example, 37signals' pow. With a Cakefile, you can just run
cake build
from the command line to run the build task in the Cakefile.
You can run the Cakefile on a CI server, assuming that you have Node and CoffeeScript installed on that server.
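A minimal Cakefile sketch of that pattern (the src/ and lib/ paths are assumptions; adjust them to your project layout):

# Cakefile
{exec} = require 'child_process'

task 'build', 'Compile src/ CoffeeScript into lib/', ->
  exec 'coffee --compile --output lib/ src/', (err, stdout, stderr) ->
    throw err if err
    console.log stdout + stderr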
Don't deploy the .coffee files; use something like coffee -cwj to constantly watch and compile the .coffee files into JavaScript (.js) files, and deploy those.
The options are c = compile, w = watch, and j = join the files (note that in recent CoffeeScript versions --join also takes an output filename, e.g. coffee -cw -j app.js src/).
See the coffee-script web site for details of the options you can pass in.