My goal is basically to create a very simple backend application with PostgreSQL and Spring Boot. Every time I run my program I need to insert data into my database table again, because for some reason it does not persist. Is this normal behaviour? To be frank, I'm pretty new to PostgreSQL and Spring Boot, so I'm sorry if the answer to this question is obvious.
My configuration file:
@Configuration
public class DatabaseConfig {

    @Bean
    CommandLineRunner commandLineRunner(BlogpostRepository blogrep, CategoryRepository catrep) {
        return args -> {
            // These rows are re-inserted on every startup
            blogPost blog1 = new blogPost(1, "asd", "asd", "asd", "asd");
            blogPost blog2 = new blogPost(2, "asd2", "asd2", "asd2", "asd2");
            Category cat1 = new Category(1, "titles1");
            Category cat2 = new Category(2, "titles2");
            Category cat3 = new Category(3, "titles3");
            blogrep.saveAll(List.of(blog1, blog2));
            catrep.saveAll(List.of(cat1, cat2, cat3));
        };
    }
}
The solution for this problem was in the application.properties file: I changed the setting from create-drop (which drops the schema, and with it the data, when the application shuts down) to:
spring.jpa.hibernate.ddl-auto=update
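For context, the relevant part of application.properties might look like the following (a minimal sketch; the connection values are placeholders, not from the original post):

# Placeholder connection settings; adjust to your environment
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=postgres
spring.datasource.password=secret
# 'update' evolves the schema in place and keeps existing rows;
# 'create-drop' recreates the schema on startup and drops it (data included) on shutdown
spring.jpa.hibernate.ddl-auto=update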
Is there some way to add a post migration method to Code First EF migration?
All the stored procs are in the Visual Studio project. Right now there is an approach to load the stored proc resource from the file and put it into its own migration:
protected override void Up(MigrationBuilder migrationBuilder)
{
    var script = ScriptMgr.LoadStoredProc("StoredProcThatChanged.sql");
    migrationBuilder.Sql(script);
}
There is a weak link in this process: Each time the script changes (StoredProcThatChanged.sql) a new migration needs to be created to make sure it executes again. The problem is the previous migration is also loading the same file. When generating a new script, the process reads in the one file both times, effectively changing the previous migration. Which is a classic no-no.
This would be resolved if there were a post-migration step where ALL stored procs could be reapplied to the DB. Is such a step possible? If so, how is it done?
I have been digging into the efcore source code and it looks like it is possible; not ideal, but there might be a way...
It looks like efcore has an interface called IMigrator, which contains the method string GenerateScript(...). Its implementation, class Migrator, has comments all over the place saying that the implementation of GenerateScript is internal and subject to change. But... it looks to me like I can achieve my end goal:
class MyMigrator : Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator
{
    public override string GenerateScript(
        string? fromMigration = null,
        string? toMigration = null,
        MigrationsSqlGenerationOptions options = MigrationsSqlGenerationOptions.Default)
    {
        // Append the custom post-migration steps to the generated script
        var result = base.GenerateScript(fromMigration, toMigration, options);
        result += MyPostSteps(...);
        return result;
    }
}
Will this work and does anyone know how I might go about replacing the default Migrator with the MyMigrator?
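For what it's worth, EF Core does expose a public hook for swapping internal services, so wiring MyMigrator in might look roughly like this (a sketch based on the ReplaceService API; the provider call and connection string are placeholders, and MyMigrator must still satisfy the internal Migrator constructor, which is subject to change between releases):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Migrations;

public class MyContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("<connection string>")      // placeholder provider/connection
            .ReplaceService<IMigrator, MyMigrator>(); // swap in the custom migrator
    }
}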
I'm aware that all Spring Batch chunked steps need to have a reader, a writer, and optionally a processor. So even though my step only needs a writer, I am also fibbing a reader that does nothing but make Spring happy.
This is based on the solution found here. Is it outdated, or am I missing something?
I have a Spring Batch job that has two chunked steps. My first step, deleteCounts, just deletes all rows from the table so that the second step has a clean slate. This means my first step doesn't need a reader, so I followed the above-linked Stack Overflow solution, created a NoOpItemReader, and added it to my step builder (code at the bottom).
My writer is mapped to a simple SQL statement that deletes all the rows from the table (code at the bottom).
I am expecting deleteCounts to delete all rows from the table, yet it is not, and I suspect it's because of my "fibbed" reader, but I am not sure what I'm doing wrong.
My delete statement:
<delete id="delete">
    DELETE FROM ${schemaname}.DERP
</delete>
My deleteCounts Step:
@Bean
@JobScope
public Step deleteCounts() {
    StepBuilder sb = stepBuilderFactory.get("deleteCounts");
    SimpleStepBuilder<ProcessedCountData, ProcessedCountData> ssb = sb.<ProcessedCountData, ProcessedCountData>chunk(10);
    ssb.reader(noOpItemReader());
    ssb.writer(writerFactory.myBatisBatchWriter(COUNT_DATA_DELETE));
    ssb.startLimit(1);
    ssb.allowStartIfComplete(true);
    return ssb.build();
}
My NoOpItemReader, based on the previously linked Stack Overflow solution:
public NoOpItemReader<? extends ProcessedCountData> noOpItemReader() {
    return new NoOpItemReader<>();
}

// for steps that do not need to read anything
public class NoOpItemReader<T> implements ItemReader<T> {
    @Override
    public T read() throws Exception {
        return null;
    }
}
I left out some of the MyBatis plumbing, since I know that is working (step 2 is much more involved with the MyBatis stuff, and step 2 is inserting rows just fine; deleting is so simple, it must be something with my step config...).
Your NoOpItemReader returns null. An ItemReader returning null indicates that the input has been exhausted. Since, in your case, that's all it returns, the framework assumes that there was no input in the first place.
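The usual workaround (a sketch of the common fix, not taken from the original answer) is a reader that returns exactly one throwaway item and then null, so the step runs a single chunk and the writer gets to issue its DELETE; for one-shot work like this, a Tasklet step is arguably the more idiomatic alternative:

import org.springframework.batch.item.ItemReader;

// Sketch: emits a single item, then signals end-of-input on the next call.
public class SingleItemReader<T> implements ItemReader<T> {
    private final T item;
    private boolean consumed = false;

    public SingleItemReader(T item) {
        this.item = item;
    }

    @Override
    public T read() {
        if (consumed) {
            return null; // input exhausted after the first item
        }
        consumed = true;
        return item;
    }
}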
First, the problem statement:
I am using Spring Batch in my DEV environment without issue. When I move the code to a production environment I run into a problem. In my DEV environment, Spring Batch is able to create its metadata tables in our DB2 database server without problem. That is not an option when we go to PROD, as this is a read-only job.
Attempted solution:
Searching Stack Overflow, I found this posting:
Spring-Batch without persisting metadata to database?
Which sounded perfect, so I added
@Bean
public ResourcelessTransactionManager transactionManager() {
    return new ResourcelessTransactionManager();
}

@Bean
public JobRepository jobRepository(ResourcelessTransactionManager transactionManager) throws Exception {
    MapJobRepositoryFactoryBean mapJobRepositoryFactoryBean = new MapJobRepositoryFactoryBean(transactionManager);
    mapJobRepositoryFactoryBean.setTransactionManager(transactionManager);
    return mapJobRepositoryFactoryBean.getObject();
}
I also added it to my Job by calling .repository(jobRepository).
But I get
Caused by: java.lang.NullPointerException: null
at org.springframework.batch.core.repository.dao.MapJobExecutionDao.synchronizeStatus(MapJobExecutionDao.java:158) ~[spring-batch-core-3.0.6.RELEASE.jar:3.0.6.RELEASE]
So I am not sure what to do here. I am new to Spring, so I am teaching myself as I go. I am open to other solutions, such as an in-memory database, but I have not been able to get those to work either. I do NOT need to save any state or session information between runs, but the database query I am running will return around a million or so rows, so I will need to process them in chunks.
Any suggestions or help would be greatly appreciated.
Add these beans to your application class:
@Bean
public PlatformTransactionManager transactionManager() {
    return new ResourcelessTransactionManager();
}

@Bean
public JobExplorer jobExplorer() throws Exception {
    MapJobExplorerFactoryBean jobExplorerFactory = new MapJobExplorerFactoryBean(mapJobRepositoryFactoryBean());
    jobExplorerFactory.afterPropertiesSet();
    return jobExplorerFactory.getObject();
}

@Bean
public MapJobRepositoryFactoryBean mapJobRepositoryFactoryBean() {
    MapJobRepositoryFactoryBean mapJobRepositoryFactoryBean = new MapJobRepositoryFactoryBean();
    mapJobRepositoryFactoryBean.setTransactionManager(transactionManager());
    return mapJobRepositoryFactoryBean;
}

@Bean
public JobRepository jobRepository() throws Exception {
    return mapJobRepositoryFactoryBean().getObject();
}

@Bean
public JobLauncher jobLauncher() throws Exception {
    SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
    simpleJobLauncher.setJobRepository(jobRepository());
    return simpleJobLauncher;
}
This doesn't directly answer your question, but that is not a good solution: the map-based repository is meant to be used only for testing, and it will grow in memory indefinitely.
I suggest you use an embedded database like SQLite. The main problem with using a separate database for job metadata is that you then have to coordinate transactions between the two databases you use (so that the state of the metadata matches that of the data), but since it seems you're not even writing to the main database, that probably won't be a problem for you.
You could use an in-memory database (for example H2 or HSQL) quite easily. Examples of that you can find for example here: http://www.mkyong.com/spring/spring-embedded-database-examples/.
As for the Map-backed job repository, it does provide a method to clear its contents:
public void clear()
    Convenience method to clear all the map DAOs globally, removing all entities.
Be aware that a Map-based job repository is not fit for use in partitioned steps or other multi-threaded scenarios.
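A minimal usage sketch (assuming you keep a reference to the factory bean; the surrounding test flow is hypothetical):

// Sketch: wiping the in-memory metadata between test runs
MapJobRepositoryFactoryBean factory = new MapJobRepositoryFactoryBean();
// ... run jobs against the repository produced by this factory ...
factory.clear(); // clears all map DAOs, removing stored job/step executions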
The following seems to have done the job for me:
@Bean
public DataSource dataSource() {
    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    EmbeddedDatabase db = builder
        .setType(EmbeddedDatabaseType.HSQL)
        .build();
    return db;
}
Now Spring is not creating tables in our production database, and since state is lost when the JVM exits, nothing seems to be hanging around.
UPDATE: The above code caused concurrency errors for us. We addressed this by abandoning the EmbeddedDatabaseBuilder and declaring the HSQLDB data source this way instead:
@Bean
public BasicDataSource dataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
    dataSource.setUrl("jdbc:hsqldb:mem:testdb;sql.enforce_strict_size=true;hsqldb.tx=mvcc");
    dataSource.setUsername("sa");
    dataSource.setPassword("");
    return dataSource;
}
The primary difference is that we are able to specify MVCC (multiversion concurrency control) in the connection string, which resolves the issue.
Situation:
I read the URL of a file on the internet from the DB. In an ItemProcessor I download this file and I want to save each of its rows to the database. Then processing continues and I want to create some new "summary" object which I want to save to the DB too. How should I configure my job in Spring Batch?
For your use case the job can be defined using this step sequence (done this way, the job is also restartable):
1. Download the file from the URL to disk using a Tasklet: a Tasklet is the strategy for processing a single step; in your case something similar to this post can help. Store the local filename in the JobExecutionContext (see the Tasklet sketch after this list).
2. Process the downloaded file:
2.1 With a FlatFileItemReader<S> (or your own ItemReader/ItemStream implementation), read the downloaded file.
2.2 With an ItemProcessor<S,T>, process each row.
2.3 Write each object processed in 2.2 to the database using a custom MyWriter<T> that does the summary calculation and delegates to an ItemWriter<T> for T's database persistence and to an ItemWriter<Summary> to write the Summary object.
<S> is the bean that contains each file row, and
<T> is the bean you write to the DB.
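As a concrete starting point for step 1, a download Tasklet might look roughly like this (a sketch: the URL, temp-file suffix, and "localFile" key are placeholder choices, not from the original answer):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.*;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

// Sketch: downloads a file and stores the local path in the job's
// ExecutionContext so the next step's reader can pick it up.
public class DownloadTasklet implements Tasklet {
    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        Path target = Files.createTempFile("download", ".csv");
        try (InputStream in = new URL("http://example.com/data.csv").openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        chunkContext.getStepContext().getStepExecution().getJobExecution()
                .getExecutionContext().putString("localFile", target.toString());
        return RepeatStatus.FINISHED;
    }
}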
MyWriter<T> can be used in this way:
class MyWriter<T> implements ItemWriter<T> {
    private ItemWriter<Summary> summaryWriter;
    private ItemWriter<T> tWriter;

    @Override
    public void write(List<? extends T> items) throws Exception {
        List<Summary> summaries = new ArrayList<>(items.size());
        for (T item : items) {
            final Summary summary = /* here create the summary object, reading it
                                     * from the database or creating a new one */
            /* do the summary or update the summary */
            summaries.add(summary);
        }
        /* The code above is trivial: you can group Summary objects using a
         * Map<SummaryKey,Summary> to reduce reads and call
         * summaryWriter.write(summariesMap.values()), for example */
        tWriter.write(items);
        summaryWriter.write(summaries);
    }
}
You need to register both MyWriter.summaryWriter and MyWriter.tWriter as streams on the step for restartability.
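With the step builder, that registration might look like this (a sketch; it assumes both delegates also implement ItemStream, and fileReader/rowProcessor/myWriter are the hypothetical beans from the steps above):

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.*;

// Sketch: registering the delegate writers as streams so their state is
// saved and restored across restarts.
public <S, T> Step processFileStep(StepBuilderFactory stepBuilderFactory,
                                   ItemReader<S> fileReader,
                                   ItemProcessor<S, T> rowProcessor,
                                   MyWriter<T> myWriter,
                                   ItemStream tWriterStream,
                                   ItemStream summaryWriterStream) {
    return stepBuilderFactory.get("processFile")
            .<S, T>chunk(100)
            .reader(fileReader)
            .processor(rowProcessor)
            .writer(myWriter)
            .stream(tWriterStream)       // assumption: tWriter implements ItemStream
            .stream(summaryWriterStream) // assumption: summaryWriter implements ItemStream
            .build();
}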
You can use a CompositeItemWriter.
But perhaps your summary processing should be in another step which reads the rows you previously inserted.
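For reference, a CompositeItemWriter wiring might look like this (a sketch; the two delegate writers are hypothetical and must accept the same item type):

import java.util.Arrays;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.CompositeItemWriter;

// Sketch: CompositeItemWriter fans each chunk out to all delegates in order.
public <T> ItemWriter<T> compositeWriter(ItemWriter<T> first, ItemWriter<T> second) throws Exception {
    CompositeItemWriter<T> writer = new CompositeItemWriter<>();
    writer.setDelegates(Arrays.asList(first, second));
    writer.afterPropertiesSet(); // fails fast if no delegates were set
    return writer;
}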
I'm using Entity Framework 4 Code First in my .NET MVC 2.0 project and I'm having a hard time keeping my DB in sync with my entities. What I want is a page that I can call, for example /DB/Recreate, that would drop my current DB and recreate an empty one. Currently in my global.asax I have:
protected override void OnApplicationStarted()
{
    Database.SetInitializer(new CreateDatabaseOnlyIfNotExists<CorpiqDb>());
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    RegisterAllControllersIn(Assembly.GetExecutingAssembly());
}
I tried to switch my database initializer in my action, but I'm really not sure I'm using the right approach, since it should already have been initialized:
Database.SetInitializer(new AlwaysRecreateDatabase<CorpiqDb>());
var bidon = _session.All<Admin>();
Database.SetInitializer(new CreateDatabaseOnlyIfNotExists<CorpiqDb>());
bidon = _session.All<Admin>();
I don't really know how to do this, thank you for the help!
OK, I found a solution:
var dbContext = new CorpiqDb().Database;
dbContext.Delete();
dbContext.Initialize();
dbContext.EnsureInitialized();
Database.SetInitializer(new CreateDatabaseOnlyIfNotExists<CorpiqDb>());
This will drop my database and create a new one that reflects the latest models; I can even seed the database with some data for my tests.
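A seeding hook might look roughly like this (an assumption on my part: it presumes the initializer exposes a Seed override, as the later DropCreateDatabaseAlways<T>/CreateDatabaseIfNotExists<T> initializers do; SeedingInitializer, the Admins set, and the Name property are hypothetical):

// Sketch: an initializer that seeds test data after creating the database.
public class SeedingInitializer : CreateDatabaseOnlyIfNotExists<CorpiqDb>
{
    protected override void Seed(CorpiqDb context)
    {
        context.Admins.Add(new Admin { Name = "test" }); // hypothetical entity/property
        context.SaveChanges();
    }
}

// Registered the same way as above:
Database.SetInitializer(new SeedingInitializer());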