Ivan Sagalaev has created a mysql_cluster database backend for Django, which allows you to configure master and slave servers, and then specify which should be used for a given view with Python decorators. Found via del.icio.us and Simon Willison's blog.
Monday, March 24, 2008
Friday, March 14, 2008
I am not dead
It's been nearly a year since the last post, so you might naturally wonder if I am dead or have stopped development of MySQLdb. Actually I've been sick for the last year and a half or so, and hadn't really been motivated enough to do anything. Here's what's been going on:
John Eikenberry has taken over development of ZMySQLDA, since I pretty much don't do anything with Zope these days. He's made a couple of releases on the way to a 3.0 release. As far as I know, ZMySQLDA is still only useful with Zope 2 as Zope 3 has a different architecture and comes with MySQL support directly.
Monty Taylor from MySQL AB has volunteered to help out on MySQLdb. I believe the way this is going to work out is he's going to be doing maintenance on the 1.2 branch, and get some minor bug fixes out there. In addition, he has a good start on a native (i.e. written in Python) MySQL driver.
I have a lot of work done towards the first 1.3 version, which will be the development branch (SVN trunk) for 2.0. To a large extent, this is a refactoring project, which means there are a lot of internal changes that don't affect most users.
When I first started working on MySQLdb (way back in the last millennium), there was only one option for talking to the MySQL server: Use the C API via libmysqlclient. Then the re-entrant/thread-safe libmysqlclient_r was added. Then the embedded server libmysqld. And now there is the prospect of a native Python version. This makes building more complicated because you currently have to build against one library at a time. It's also more complicated for users: Suppose you want to have both a regular client version and an embedded version on the same system?
1.3/2.0 is going to fix this by building all the possible options as separate drivers, and then you can specify which one to use at connect-time, or you can use the default, which will probably go in the order libmysqlclient_r, libmysqlclient, and native. (The embedded version requires special initialization so it should never be a default.)
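To give a rough idea of what connect-time selection might look like (the driver keyword and driver names here are purely illustrative, nothing is designed yet):

import MySQLdb

# Use the default driver search order: libmysqlclient_r, then
# libmysqlclient, then the native Python driver.
conn = MySQLdb.connect(host="localhost", user="me", db="test")

# Or ask for a particular driver explicitly; the embedded server would
# never be picked by default because it needs special initialization.
conn = MySQLdb.connect(db="test", driver="embedded")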
MySQLdb 1.2 and earlier use a type conversion dictionary to map MySQL column types to functions which convert a string result to a Python type. Additionally, this same dictionary is used to convert Python types into SQL literals. I think in earlier (pre-1.0) versions, these were separate dictionaries, and they were later combined because they have disjoint sets of keys. I'm not sure now if this was a good idea or not.
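For example, customizing the 1.2-style dictionary looks roughly like this (module and constant names as in MySQLdb-1.2, from memory):

import MySQLdb
from MySQLdb.constants import FIELD_TYPE
from MySQLdb.converters import conversions

# Copy the default dictionary and change one entry: return DECIMAL
# columns as floats instead of whatever the default converter produces.
my_conv = conversions.copy()
my_conv[FIELD_TYPE.DECIMAL] = float

# The modified dictionary is passed at connect time.
conn = MySQLdb.connect(db="test", conv=my_conv)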
One of the other complications of this approach is TEXT columns. To the MySQL C API, TEXT columns have the same column type as BLOB columns. The difference is the presence of a flag. This took some kludgy stuff to get to work.
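The idea, roughly, is that the column's flags break the tie; this is a simplified illustration, not the exact entry in MySQLdb.converters:

from MySQLdb.constants import FLAG

def pick_text_or_blob_converter(field_flags, charset="utf-8"):
    # Both TEXT and BLOB report the same column type; the BINARY flag
    # is what tells them apart.
    if field_flags & FLAG.BINARY:
        return str                          # true BLOB: keep the raw bytes
    return lambda s: s.decode(charset)      # TEXT: decode using the charset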
Then unicode came along, not just in Python but in MySQL. (The original target versions were MySQL-3.23 and Python-1.5.) This complicated the type conversion because now it was dependent on the connection's character set, which could be changed, so the converter dictionary had to be tinkered with on the fly. Additionally, there were reference count problems (and maybe still are to an extent) with this approach, due in part to the dictionary being able to be overridden by the user.
I haven't decided entirely how this is going to be fixed, but I will have some method for users to override the type conversion at runtime. I will probably have some hooks that will allow you to use a specific conversion for a column based on column type, column name, table name, database name, or any combination thereof. For example, you could have a rule that said that any column name ending with "_ip" with a column type of UNSIGNED INTEGER could be returned as a user-defined IP object, but stored in the database as a four-byte integer.
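A purely hypothetical sketch of what such a rule might look like (none of these names exist in MySQLdb):

import socket
import struct

class IP(object):
    """Toy user-defined IP object stored as a four-byte unsigned integer."""
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return socket.inet_ntoa(struct.pack("!I", self.value))

def ip_column_rule(database, table, column, column_type):
    # Only applies to UNSIGNED INTEGER columns whose name ends in "_ip";
    # returning None would fall through to the default conversion.
    if column.endswith("_ip") and column_type == "INTEGER UNSIGNED":
        return lambda raw: IP(int(raw))
    return None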
The type conversion from MySQL to Python currently takes place in the low level driver (_mysql). Since there are going to be multiple drivers, this is going to move up into the Python layer. I don't believe this will adversely affect performance. Looking up the right converter is only a dictionary lookup anyway, and only has to be done once per column per query. Once you have a list/tuple of converters for the result set, these can be applied quickly with a list/generator comprehension.
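In other words, something along these lines:

def convert_row(converters, row):
    # converters is built once per result set, one callable per column.
    return tuple(conv(value) for conv, value in zip(converters, row))

# For a result set with an integer column and a float column:
converters = (int, float)
rows = [("1", "2.5"), ("3", "4.0")]
print([convert_row(converters, row) for row in rows])  # [(1, 2.5), (3, 4.0)]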
MySQLdb-1.2 and earlier have several cursor classes which are built with mixin classes. The mixins control things like whether the rows are returned as tuples or dictionaries, or whether a buffered or unbuffered result set is used (i.e. mysql_store_result() vs. mysql_use_result()). This is pretty messy and is going away.
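For reference, the 1.2 cursor classes are put together roughly like this (condensed from MySQLdb.cursors, names from memory): one mixin axis picks buffered vs. unbuffered results, the other picks tuple vs. dictionary rows.

from MySQLdb.cursors import (BaseCursor,
    CursorStoreResultMixIn, CursorUseResultMixIn,
    CursorTupleRowsMixIn, CursorDictRowsMixIn)

class BufferedTupleCursor(CursorStoreResultMixIn, CursorTupleRowsMixIn,
                          BaseCursor):
    """What MySQLdb calls Cursor: mysql_store_result(), rows as tuples."""

class UnbufferedDictCursor(CursorUseResultMixIn, CursorDictRowsMixIn,
                           BaseCursor):
    """What MySQLdb calls SSDictCursor: mysql_use_result(), rows as dicts."""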
The format of the row will probably be controlled by a hook of some sort. I'm inclined to use unbuffered cursors, i.e. SSCursor or mysql_use_result(), by default. The tricky part is that the entire result set must be fetched before another query can be issued, so if there are multiple cursors, there needs to be a mechanism so that only one can use the connection at a time. Rather than locking the connection, there will need to be a way for one cursor to tell the others that they need to buffer the rest of the result set.
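A hypothetical sketch of that handoff (none of this is written yet, and the names are made up):

class CursorStub(object):
    """Stand-in for a cursor that can buffer the rest of its result set."""
    def buffer_remaining_rows(self):
        pass  # would fetch the rest of the pending unbuffered result

class ConnectionStub(object):
    def __init__(self):
        self._active_cursor = None

    def begin_query(self, cursor):
        # Before a new cursor runs a query, ask whichever cursor still
        # has an unbuffered result pending to read the rest of its rows
        # into memory, freeing the connection for the new query.
        other = self._active_cursor
        if other is not None and other is not cursor:
            other.buffer_remaining_rows()
        self._active_cursor = cursor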
Some of this is already done for the trunk, but needs to be committed. In particular, there is no type conversion at all, and the driver selection is not done yet, but I'll see if I have time to work on it more this week at PyCon 2008.