Add method extremely slow - codefluent

Somewhere along the way (after updating CodeFluent or as the model grew), adding/editing a method got extremely slow. Opening the CFQL window takes almost three minutes. I have a fairly fast desktop (Intel i7-2600 @ 3.5 GHz, 4 GB DDR3 RAM, and an array of 4 SSDs in RAID 0).
I use Visual Studio 2015 Update 1. The only plugin I have is ReSharper 10.1.
If I create a new sample Advertising project, it shows the same latency, so the problem does not seem to depend on the model.
What could be wrong?

Version 838 of CodeFluent Entities seems to solve the issue: "TFS7617 - Modeler: CFQL method editor loading performance has been improved." – meziantou Jan 20 at 8:55
Version 838 DOES indeed solve the issue. Thanks!

Does Firebird have scalability problems in the single-digit GB range?

I'm working on setting up a project with another developer, a very experienced and capable coder whose skill and competence are not in question. Recently I sent him a demo of some work I had done, and he seemed a bit surprised that I had chosen a Firebird database. The conversation went something like this:
Him: Why did you pick Firebird? SQLite would be faster.
Me: SQLite is embedded only, and it doesn't scale well. Plus it's missing a lot of features, including stored proc support.
Him: Firebird has scalability problems too, when your database size gets beyond the amount of RAM available.
Me: What do you mean?
Him (direct quote from his email): Massive slowdowns. Apparently when the indexes + data don't fit RAM (physical RAM, not virtual RAM), it enters a "slow mode" of sorts, we've been able to alleviate it to some extent by increasing the memory usage values of FireBird conf, but then there is a risk of sudden "out of memory" failure if for some reason it can't acquire the memory (as contrarily to MSSQL or MySQL f.i., FireBird doesn't reserve the physical RAM at startup, only progressively). Also somewhere above 8 GB the slowdowns seem to stay regardless of memory, even on 24 GB machines. So we progressively migrate those to Oracle / MSSQL.
Now as I said, this is a very smart, capable developer. But on the other hand, we have the Firebird website's claim that people are using it in production for databases over 11 TB in size, which would be practically impossible if what he says is true.
So I have to wonder, does this problem really exist in Firebird, or is he overlooking something, perhaps some configuration option he's not aware of? Is anyone familiar with the issue he's describing?
As I commented earlier, what the other developer describes could be attributed to a bug arising from the combination of the Windows filesystem cache on 64-bit Windows and the fact that Firebird reads its database file(s) with FILE_FLAG_RANDOM_ACCESS. For some reason this caused the filesystem cache not to release pages from its cache, allowing it to grow beyond the available physical memory (and eventually even beyond the available virtual memory); see this blog post for details. This issue has been fixed in Firebird 2.1.5 and 2.5.2 with CORE-3971.
The case studies on firebirdsql.org list several examples of databases of tens or hundreds of gigabytes, and they don't seem to suffer from performance problems.
A company that offers Firebird recovery and performance optimization services did a test with a 1 terabyte database a while back. That page also lists three examples of relatively large Firebird databases.
Disclosure: I develop a database driver for Firebird, so I am probably a bit biased ;)

Data management in matlab versus other common analysis packages

Background:
I am analyzing large amounts of data using an object-oriented composition structure, for sanity and easy analysis. Often the highest level of my object hierarchy is an object that is about 2 GB when saved. Loading the data into memory is not always an issue, and populating sub-objects and then the higher-level objects from their content is much more memory efficient than simply loading a lot of MAT-files directly.
The Problem:
Saving these objects when they are larger than about 2 GB will often fail. It is a somewhat well-known problem that I have worked around by deleting sub-objects until the total size is below 2-3 GB. This happens regardless of how powerful the computer is; a machine with 16 GB of RAM and 8 cores will still fail to save the objects correctly. Saving to an older file format version does not help either.
Questions:
Is this a problem that others have solved somehow in MATLAB? Is there an alternative I should look into that still offers a lot of high-level analysis tools and will NOT have this problem?
Questions welcome, thanks.
I am not sure this will help, but here goes: do you make sure to use a recent MAT-file version? Check, for instance, the documentation for save. Quoting from that page:
'-v7.3': 7.3 (R2006b) or later. Version 7.0 features plus support for data items greater than or equal to 2 GB on 64-bit systems.
'-v7': 7.0 (R14) or later. Version 6 features plus data compression and Unicode character encoding. Unicode encoding enables file sharing between systems that use different default character encoding schemes.
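So if you are currently saving with the default format (or with '-v7'), explicitly requesting the v7.3 format may be all that is needed. A minimal sketch, assuming your top-level object lives in a workspace variable (the file and variable names here are made up):
% -v7.3 writes an HDF5-based MAT-file and allows variables of 2 GB or more on 64-bit systems
save('analysis.mat', 'topLevelObj', '-v7.3');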
Also, could your object by any chance be, or contain, a graphics handle object? In that case, it is wise to use hgsave.

Need a tool for PostgreSQL monitoring on Windows

I have Postgres running on Windows and I'm trying to investigate some strange behaviour: there are 17 postgres processes, and 8 of those 17 consume ~300 KB of memory each.
Does anybody know what causes such behaviour?
Does anybody know about a tool to investigate the problem?
8 out of those 17 consume ~300K memory each.
Are you 110% sure? Windows doesn't know how much of that memory is used from the shared buffers. Each process could be using just a few KB of its own and sharing the rest of the memory with the other processes.
What problem do you actually have? Using memory is not a problem; memory is there to be used. And at 300 KB each, that's just a few MB altogether, if each process really is using 300 KB.
And don't forget, PostgreSQL is a multi-process system. That's also why it scales so easily on multi-core and multi-processor systems.
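If you want to see what each of those backends is doing, a quick look at pg_stat_activity can help. A minimal sketch (column names vary between PostgreSQL versions; older releases use procpid and current_query instead of pid, state and query):
-- one backend process per client connection, plus a few background processes
SELECT pid, usename, datname, state, query FROM pg_stat_activity;
-- how much memory the shared buffer cache is configured to use
SHOW shared_buffers;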
See pgAdmin: http://www.pgadmin.org/
A tool for analyzing PostgreSQL log output can be found at http://pgfouine.projects.postgresql.org/
pgFouine is a PostgreSQL log analyzer used to generate detailed reports from a PostgreSQL log file. pgFouine can help you to determine which queries you should optimize to speed up your PostgreSQL based application.
I don't think you can find out why you have a lot of processes running, but if you feel that it might be because of database usage, this tool might help you find the cause.

Remove items from SWT tables

This is more of an answer I'd like to share for a problem I had been chasing for some time in an RCP application that uses large SWT tables.
The problem is the performance of the SWT Table.remove(int start, int end) method. It gives really bad performance - about 50 ms per 100 items on my Windows XP machine. But the real showstopper was on Vista and Windows 7, where deleting 100 items would take up to 5 seconds! Looking into the source code of Table shows that a huge number of windowing events fly around during this call, which brings the windowing system to its knees.
The solution was to hide the damn thing during this call:
table.setVisible(false);
table.remove(from, to);
table.setVisible(true);
That does wonders - deleting 500 items on both XP and Windows 7 takes ~15 ms, which is mostly just the overhead of the timestamp printing I used for measuring.
nice :)
Instead of table.setVisible(), you should use table.setRedraw(). This method on Control exists precisely to suppress drawing operations during expensive updates.
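For illustration, a sketch of that approach (table, from and to are the same placeholders as in the snippet above):
// suppress painting while the bulk removal runs, then repaint once
table.setRedraw(false);
try {
    table.remove(from, to);
} finally {
    table.setRedraw(true); // re-enable drawing even if remove() throws
}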

tfs database size - version control

I have TFS installed on a single server and am running out of space on the disk. (We've been using the instance for about 2 years now.)
Looking at the tables in SQL Server, the culprit seems to be the tbl_content table; it is at 70 GB. If I do a get of the entire source tree for all projects, it is only about 8 GB of data.
Is this just the history of all the files? A 10:1 ratio seems like a lot for just history... since I would think the deltas would be very small.
Does anyone know if that is a reasonable size given 8 GB of source (and 2 years of activity)? And if not, what should I look at to 'fix' this?
Thanks
I can't help with the ratio question at the moment, sorry. For a short-term fix you might check to see if there is any space within the DB files that can be freed up. You may have already, but if not:
-- free space inside each database file, in MB
SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
If the statement above shows space you want to recover, you can look into a one-time DBCC SHRINKDATABASE or DBCC SHRINKFILE, along with scheduling a routine SQL maintenance plan that may include defragmenting the database.
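As a one-off, something along these lines could reclaim the space. This is only a sketch: the database name and logical file name below are placeholders for your actual TFS database, and the target values are just examples.
-- shrink the whole database, leaving 10 percent free space
DBCC SHRINKDATABASE (TfsVersionControl, 10);
-- or shrink just that database's data file to roughly 60 GB (target size is in MB)
USE TfsVersionControl;
DBCC SHRINKFILE (TfsVersionControl_Data, 60000);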
DBCC SHRINKDATABASE and DBCC SHRINKFILE aren't things you should do on a regular basis, because SQL Server needs some "swap" space to move things around for optimal performance. So neither should be relied upon as your long term fix, and both could cause some noticeable performance degradation of TFS response times.
JB
Are you seeing data growth every day, even when no activity occurs on the system? If the answer is yes, are you storing any binaries outside of the 8 GB of source somewhere?
The reason I ask is that if TFS is unable to calculate a delta, or if a file exceeds the size limit for delta generation, TFS will store the entire binary file again. I don't have the link with me (it's on my work machine), but it describes this scenario and how to fix it, in case this is the cause of your problems.