Recently we were having an issue at work with MySQL consuming large amounts of memory while importing a dataset that wasn’t inordinately large on disk (about 60GB) but was spread across a huge number of databases – about 6,500 databases with 120 tables each. The import was happening on a KVM guest with 8GB of RAM, which highlighted the memory issue, as the dataset had previously lived on a host with far more memory.

We’d previously slashed InnoDB’s buffer pool in an attempt to counteract the problem and at least get the dataset imported into the guest, but even that wasn’t helping. Whatever the buffer pool was set to – anywhere between 32MB and 6GB – MySQL’s memory usage showed a slow but consistent increase, to the point that two-thirds of the way through the import the guest would exhaust its RAM and swap and the OOM killer would remove the MySQL process (by then MySQL would be consuming about 98% of all available memory, RAM and swap combined).

After speaking with Percona and analysing the InnoDB stats (the ever-useful SHOW ENGINE INNODB STATUS), it appeared that it was actually InnoDB’s data dictionary consuming the memory during the import. This is InnoDB’s internal cache or catalog of the tables it’s operating on. It’s similar to MySQL’s table cache but specific to InnoDB, and information on any table InnoDB accesses must be stored in the dictionary. Traditionally, once tables are added to this dictionary they are never removed, which means that on hosts with a large number of tables that are all accessed regularly (as in our import, where all ~780,000 tables were opened sequentially) MySQL’s memory usage just grows continuously. Percona have exposed a limit on the size of this dictionary in their build of MySQL, which gave us a way to rein it in during the import by setting the following option in my.cnf:

innodb_dict_size_limit = 32M

This sets a soft upper bound on the amount of memory InnoDB will use for storing tables in the dictionary. It’s important to note that it is only a soft upper bound; InnoDB knows what information it needs to keep in the dictionary at any given time, and if MySQL is opening lots of tables in a short space of time your limit may simply not be realistic, in which case InnoDB will consume the memory it needs anyway. The limit is disabled (i.e. unlimited) by default.
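
To keep an eye on this during the import you can check the dictionary’s footprint and confirm the limit is in place from the mysql client. A minimal check, assuming a Percona Server build that exposes the setting (the relevant figure appears as “Dictionary memory allocated” in the BUFFER POOL AND MEMORY section of the status output):

-- Confirm the configured dictionary limit (Percona Server builds only)
SHOW GLOBAL VARIABLES LIKE 'innodb_dict_size_limit';

-- Look for the "Dictionary memory allocated" line in the output
SHOW ENGINE INNODB STATUS\G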

More information on the InnoDB data dictionary is available at Percona’s site.

Recently I was having a problem with NSURLConnection showing memory leaks in Instruments even though it was being correctly allocated and released. On investigation the problem seemed to lie with the cache policy; however, I was using NSURLRequestReloadIgnoringLocalCacheData in the NSURLRequest to bypass caching entirely, and was confused as to why allocation problems would then be associated with caching at all.
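
For illustration, a request with that cache policy might look something like this (the URL and timeout are placeholders, and the convenience constructor is just one way of setting the policy):

NSURLRequest *request =
	[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/data"]
	                 cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
	             timeoutInterval:30.0];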

I’m still not exactly sure what the problem is (according to this thread it may be due to running the NSURLConnection in a method detached from the main thread); however, forcibly setting the capacity of the shared URL cache before creating the connection seems to solve the problem. For example:

// Zero out the shared cache so nothing gets cached in memory or on disk
[[NSURLCache sharedURLCache] setMemoryCapacity:0];
[[NSURLCache sharedURLCache] setDiskCapacity:0];
 
NSURLConnection *connection =
	[[NSURLConnection alloc] initWithRequest:request delegate:self];

Hey presto! No more NSURLConnection leaks.
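
For reference, the “correctly allocated and released” part is just the usual delegate pattern under manual reference counting; a minimal sketch (receivedData is a hypothetical NSMutableData ivar used to accumulate the response):

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
	[receivedData appendData:data];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
	// Do something with receivedData, then clean up
	[connection release];
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
	NSLog(@"Connection failed: %@", error);
	[connection release];
}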