I have successfully installed Drools KIE-WB with MySQL (on Tomcat), and I want to achieve the same goal with the dashbuilder.
My dashbuilder package is jbpm-dashbuilder-6.1.0-SNAPSHOT-tomcat-7, and I have tested two different options:
The first option was to execute the script in tomcat-7.0.50\webapps\dashbuilder\WEB-INF\etc\sql\1-create-mysql.sql. It creates some tables, but something seems to be missing, because this error appears when the server starts:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'dashboarddb.processinstancelog' doesn't exist
The second option was to change tomcat-7.0.50\webapps\dashbuilder\META-INF\context.xml to use the following datasource:
<Resource name="jdbc/jbpm" auth="Container"
type="javax.sql.DataSource" username="drools-user" password="pass" driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/dashboarddb?useUnicode=true&amp;characterEncoding=UTF8"
maxActive="8"
/>
And in tomcat-7.0.50\webapps\dashbuilder\WEB-INF\etc\hibernate.cfg.xml I have added the following line:
<property name="hibernate.hbm2ddl.auto">update</property>
This forces Hibernate to create all tables in MySQL. It almost works (several tables are created), but an error appears:
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1-title-Dashboards Showcase' for key 'PRIMARY'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
And I have no idea how to fix it, because I have no visibility into what Hibernate is doing and creating.
I have no more ideas on how to install the dashboard using MySQL. Any clue as to how to achieve it?
The two webapps (Kie-wb and the jBPM dashboard) must share the same database, since the jBPM dashboard feeds from the jBPM history log (more details here: https://github.com/droolsjbpm/jbpm-dashboard/tree/master/jbpm-dashboard-distributions/src/main/tomcat7).
So you first need to deploy and run kie-wb against a given data source. Let's say you named it "kie-wb". Once you have kie-wb running (tables created in the DB), you can proceed to deploy jbpm-dashboard, which must be configured to connect to the same "kie-wb" datasource.
DON'T enable auto DDL update, as it's not recommended in production. You can either run the script 1-create-mysql.sql prior to deploying the app, or just let the app run it on startup (the app actually runs the script itself if it doesn't detect some required tables).
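If you go the manual route, running the script is a one-liner. A minimal sketch, assuming the MySQL user and database names from the question:

mysql -u drools-user -p dashboarddb < tomcat-7.0.50/webapps/dashbuilder/WEB-INF/etc/sql/1-create-mysql.sql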
If you follow the steps above everything should be working fine.
Otherwise, don't hesitate to ask again.
After migrating from WildFly 8.2.0 to WildFly 16.0.0, my JEE application launched normally and displayed the expected data read from the (PostgreSQL) database, but none of the insert/update/delete operations were saved to the database (with no exception thrown)!
I redeployed the same application on the old version of WildFly, and the insert/update/delete operations worked.
What was missing was to add the "eclipselink.jar" file in
"wildfly\modules\system\layers\base\org\eclipse\persistence\main"
and reference it as a "resource-root" in
"wildfly\modules\system\layers\base\org\eclipse\persistence\main\module.xml"
Unfortunately, no exception was thrown describing the missing EclipseLink configuration!
If you want to configure your VS "Load Tests" to write the results to a database server, you use the following instructions.
If you want to run your "Load Tests" through powershell on a separate machine(think TFS 2018 release step), you use the following instructions.
I would like to do both, on multiple machines, in an automated manner, but there's not a great deal of documentation on this. I can run my tests like this:
.\mstest /testcontainer:"C:\XXX\ABC.loadtest"
But the results are kicked out to a "TRX" file rather than being placed into a database (there is some discussion on this). How do I put the results into an external database like when I run it locally (per the instructions above)?
Note: @AdrianHHH points out that the "TRX" file is only a summary and that most of the info is stored locally (in MDF/LDF files) in the user folder of the current user running the load tests.
Update 1
Hmm I wonder where this is persisted:
(Curiously, clicking the "?" icon in the "Manage Test Controller" box also gives nothing...)
It's not in the saved XML:
<RunConfigurations>
<RunConfiguration Name="Run Settings1" Description="" ResultsStoreType="Database" TimingDetailsStorage="AllIndividualDetails" SaveTestLogsOnError="true" SaveTestLogsFrequency="0" MaxErrorDetails="200" MaxErrorsPerType="1000" MaxThresholdViolations="1000" MaxRequestUrlsReported="1000" UseTestIterations="false" RunDuration="10" WarmupTime="0" CoolDownTime="0" TestIterations="100" WebTestConnectionModel="ConnectionPerUser" WebTestConnectionPoolSize="50" SampleRate="5" ValidationLevel="High" SqlTracingConnectString="" SqlTracingConnectStringDisplayValue="" SqlTracingDirectory="" SqlTracingEnabled="false" SqlTracingFileCount="2" SqlTracingRolloverEnabled="true" SqlTracingMinimumDuration="500" RunUnitTestsInAppDomain="true" CoreCount="0" ResourcesRetentionTimeInMinutes="0" AgentDiagnosticsLevel="Warning">
<CounterSetMappings>
<CounterSetMapping ComputerName="[CONTROLLER MACHINE]">
<CounterSetReferences>
<CounterSetReference CounterSetName="LoadTest" />
<CounterSetReference CounterSetName="Controller" />
</CounterSetReferences>
</CounterSetMapping>
<CounterSetMapping ComputerName="[AGENT MACHINES]">
<CounterSetReferences>
<CounterSetReference CounterSetName="Agent" />
</CounterSetReferences>
</CounterSetMapping>
</CounterSetMappings>
<LoadGeneratorLocations>
<GeoLocation Location="Default" Percentage="100" />
</LoadGeneratorLocations>
</RunConfiguration>
</RunConfigurations>
They're not persisted in my default "testsettings" file either:
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" id="02cad612-043b-447d-993e-a9b9b0547c9d"
xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
<Description>These are default test settings for a local test run.</Description>
<Deployment enabled="false" />
<Execution hostProcessPlatform="MSIL">
<TestTypeSpecific />
<AgentRule name="Execution Agents">
</AgentRule>
</Execution>
<Properties>
<Property name="TestSettingsUIType" value="UnitTest" />
</Properties>
</TestSettings>
So I need to find where this configuration information is being persisted; then maybe I can find a way to feed it to MSTest. Does anyone else understand how this works?
Update 2
My TRX file does contain a "connection string", but I don't think it points to my database; my database is empty. Running via PowerShell completes, but all I see is the "TRX" file.
Update 3
This one is tricky. I kept trying various ways to determine where this "Manage Test Controller" data/credentials are being stored. One of the ways I did this was to use Microsoft's Process Monitor. You can actually see where it is initially populated from:
It's from an Application Hive. Of course, that begs the question of where the "Application Hive" got populated from, and that's where things get a bit murky: there are a lot of different calls to many files. A common trend is that the "Temp\Local" folder is often referenced.
I deleted the entire "Temp" folder for my user account (in the process losing all my VS configuration), and upon reopening my solution it appears this had an effect. When I pull up my "LoadTest" file, the "Load test results store" line is now empty. In fact, the entire "Manage Test Controller" window has been restored to its default (empty) state.
I now believe that the configuration for this "Manage Test Controller" window is persisted in the Temp folder. However, I've yet to locate where it is and/or how to change or automatically populate that information with a PowerShell script.
Finally figured this out. Basically, I used several tools to check which files were being modified when I changed the connection string, and the results made it obvious:
privateregistry.bin
Once I found this, it was pretty obvious that VS was maintaining its own little registry hive. It's clearly stated in this post, so I opened it in the way described in the article and found the connection string:
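For anyone retracing this: with Visual Studio closed, the hive can be loaded and inspected from a command prompt. A sketch (the instance folder name is installation-specific, shown here as a placeholder):

reg load HKU\VSPrivate "%LocalAppData%\Microsoft\VisualStudio\<instance>\privateregistry.bin"
rem ...inspect/edit the values under HKEY_USERS\VSPrivate, then:
reg unload HKU\VSPrivate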
This indicated that:
"The SQL Connection String is NOT stored in the loadtest files. The
setting seems to be PC specific so I had to change it on the build
server - in one loadtest file (address.loadtest) as shown, then all
the other loadtests adopt the same connection string."
So that's basically what I did: I logged into each build server and configured them so that they write all their results to my database rather than locally.
Load tests are clearly not designed to make this process easy, and I don't think many people have attempted what I've done. All the articles just tell you to use their cloud service, and I'm pretty sure that only covers web tests. If you're using load testing to test unit tests, you're pretty much out of luck (without this workaround). I really hope this gets official support in the future; it would be really nice to both run and view all types of load tests from TFS. For now, though, I'm going to have to keep using this workaround.
I am running 7.0.0.CR2 of the workbench and server in a Docker container. At first sight they appear to be working together perfectly. However, when I select the Tasks tab in the workbench, I get the following error:
Unable to complete your request. The following exception occurred:
Can't lookup on specified data set: jbpmHumanTasksWithUser.
This led me to this bug: https://issues.jboss.org/browse/JBPM-5432
There they say that this is caused by a user not having the kie-server role. There is no kie-server role in my installation; there is, however, a kie-server group, and the user I am using is a member of this group.
Dockerfile and user and role files can be found here:
https://gist.github.com/martijnburger/c9a1072746d94ffe4beff72830e03ca7
I believe it could be due to a missing login module in your setup. To ensure the role/authentication is passed on to the KIE Server, you need to add a custom login module. Please check this example as a reference: https://github.com/cristianonicolai/kie-wb-dev-docker/blob/master/src/main/resources/standalone-full-kie.xml#L379
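The relevant part of that file looks roughly like the sketch below (based on the linked example; the module attribute must match the actual deployment name of your workbench WAR):

<security-domain name="other" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.kie-wb.war"/>
    </authentication>
</security-domain>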
I am reading data from DB2 table and dumping it into a file.
I execute my simple SELECT query in the chunk listener's beforeChunk() and use the step context to retrieve it in the ItemReader.
In the chunk, I set the checkpoint policy to "item" and the item-count to 5.
The output is the first 5 records being read and written over and over again.
In this sample Java batch code from IBM's site, they have start and end parameters in the query.
Is it necessary to have start and end parameters in your query? Is there no other way to make sure that when the query is run again, it reads the next chunk of data and not the same chunk again and again?
I am using IBM's implementation of JSR 352 on WebSphere Liberty
Try configuring the datasource to use unshareable connections.
If you are following this sample, you'll see it uses the older deployment descriptor XML files. You can edit batch-bonuspayout-application/src/main/webapp/WEB-INF/web.xml to add the line:
<res-sharing-scope>Unshareable</res-sharing-scope>
So in full you'd have:
<web-app id="BonusPayout" version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
<display-name>BonusPayout</display-name>
<description>This is the BonusPayout sample.</description>
<resource-ref>
<description>Bonus Payout DS</description>
<res-ref-name>jdbc/BonusPayoutDS</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
</web-app>
This can also be done with the newer @Resource annotation, but if you've already switched to that then you'll know how to apply this point there too.
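As a sketch, the annotation form could look like this (the shareable attribute defaults to true):

@Resource(name = "jdbc/BonusPayoutDS", shareable = false)
private DataSource bonusPayoutDS;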
With this change, the existing JNDI lookup at location: java:comp/env/jdbc/BonusPayoutDS will now use unshared connections, and the ResultSet will not be closed at the end of each chunk transaction.
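To connect this back to the question: with an unshareable connection, the reader can keep its cursor open across chunks instead of re-running the query, so no start/end parameters are needed. Here is a minimal sketch (hypothetical class name and query; it assumes the unshareable resource reference shown above):

import java.io.Serializable;
import java.sql.Connection;
import java.sql.ResultSet;
import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Named;
import javax.sql.DataSource;

@Named
public class Db2RowReader extends AbstractItemReader {

    // Resource-ref from the web.xml above; Unshareable keeps the physical
    // connection (and its open ResultSet) alive across chunk transactions.
    @Resource(name = "jdbc/BonusPayoutDS")
    private DataSource ds;

    private Connection conn;
    private ResultSet rs;
    private long rowsRead; // checkpoint data: rows consumed so far

    @Override
    public void open(Serializable checkpoint) throws Exception {
        conn = ds.getConnection();
        // Hypothetical query; the cursor itself keeps the position between
        // chunks, so the SQL needs no start/end parameters.
        rs = conn.prepareStatement("SELECT NAME FROM MY.ACCOUNT").executeQuery();
        if (checkpoint != null) {
            // On restart, skip the rows processed before the last checkpoint.
            rowsRead = (Long) checkpoint;
            for (long i = 0; i < rowsRead && rs.next(); i++) { /* skip */ }
        }
    }

    @Override
    public Object readItem() throws Exception {
        if (!rs.next()) {
            return null; // returning null ends the step
        }
        rowsRead++;
        return rs.getString(1);
    }

    @Override
    public Serializable checkpointInfo() {
        return rowsRead; // persisted by the container at each chunk boundary
    }

    @Override
    public void close() throws Exception {
        if (conn != null) {
            conn.close();
        }
    }
}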
This behavior is indirectly documented here in the WebSphere Application Server traditional documentation. (I don't see it in the Liberty documentation; there are some cases like this where the behavior is basically identical in Liberty and the topic is not documented separately for Liberty.) It's a bit indirect for the batch user. It's also hard to describe completely, since, as the doc says, the exact behavior varies by DB and JDBC provider. But this should work for DB2.
UPDATE: In newer versions of Liberty (since 17.0.0.1), an unshareable connection can be obtained without needing to use a resource reference, by configuring the connectionManager with the enableSharingForDirectLookups attribute, e.g.:
<connectionManager ... enableSharingForDirectLookups="false"/>
I have an application developed in Scala Play 2.0. It worked successfully locally, but it failed when deployed to Heroku.
The reason for the failure is that locally I was using an H2 database, and since Heroku uses PostgreSQL, I had to change one of the data types from "clob" to "text".
The problem now is that the database on Heroku is in an "inconsistent state", according to the Play 2.0 documentation.
In DEV mode (locally), you can just click on "Mark it as resolved" when the HTML error page appears.
How do I "mark it as resolved" in the Heroku PROD environment?
http://www.playframework.com/documentation/2.1.1/Evolutions
PS: Note that because it was a new application, I just deleted the database and restarted.
However, here I am asking for the proper way to handle evolutions in the PROD environment;
that is, the "Mark it as resolved" issue for PROD is not explained here: http://www.playframework.com/documentation/2.1.1/Evolutions
Although I couldn't find a way to do it via the play command, you can do it by editing the database directly.
Imagine you're trying to go from 5.sql to 6.sql. Here's what you do:
1. Figure out and fix the problem(s) that caused the database to enter an inconsistent state (i.e. manually apply your !Ups and fix all the problems with them).
2. Manually apply your !Downs so that the database is in the state it was in after 5.sql was applied.
3. Go into your database, find the table called play_evolutions, and look at the row with id 6. It should say something like "applying ups" in the state column and have the error message in the last_problem column.
4. Delete the row with id 6 (see the SQL sketch below). This will make Play think you are in the state you were in with 5.sql.
5. Now you should be able to run play -DapplyEvolutions.default=true start to evolve to 6.sql.
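The delete in step 4 is plain SQL, assuming the default evolutions table name and the id from this example:

DELETE FROM play_evolutions WHERE id = 6;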
Inconsistent state just means that the evolutions could not be applied, and thus the application is blocked. Update your evolution scripts and re-deploy.
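For instance, a fixed evolution for the clob-to-text change mentioned in the question might look like this (hypothetical table; Play delimits the sections with !Ups/!Downs comment markers):

# --- !Ups
CREATE TABLE article (
    id bigint NOT NULL,
    body text, -- "text" instead of "clob" so the script also runs on PostgreSQL
    PRIMARY KEY (id)
);

# --- !Downs
DROP TABLE article;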