Wednesday, 26 May 2010

MySQL, what are you smoking?

There are a lot of weird things which MySQL does to handle its mix of transactional and non-transactional behaviour, but this one was new to me :)

mysql> create table t1 (ID INT NOT NULL PRIMARY KEY, V INT NOT NULL);
Query OK, 0 rows affected (0.01 sec)

mysql> insert into t1 (ID,V) VALUES (2,NULL);
ERROR 1048 (23000): Column 'V' cannot be null

mysql> insert into t1 (ID,V) VALUES (3,1),(4,1);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0

mysql> insert into t1 (ID,V) VALUES (5,1),(6,NULL);
Query OK, 2 rows affected, 1 warning (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 1

mysql> show warnings;
+---------+------+---------------------------+
| Level   | Code | Message                   |
+---------+------+---------------------------+
| Warning | 1048 | Column 'V' cannot be null |
+---------+------+---------------------------+
1 row in set (0.00 sec)

mysql> select * from t1;
+----+---+
| ID | V |
+----+---+
| 3 | 1 |
| 4 | 1 |
| 5 | 1 |
| 6 | 0 |
+----+---+
4 rows in set (0.00 sec)
What's going on?
MySQL does not consider (some) errors to be errors if they happen on the second or subsequent row of a multi-row insert.

On the other hand, if the same error happens on the first row, it is an error.

Why?

Because non-transactional engines can't roll back to a savepoint. This means that if the server has already inserted one or more rows, raising an error would leave partial data in the table.

No, really why?

I don't know. It isn't consistent with, for example, a unique key violation, which does stop a multi-row insert halfway through, even though non-transactional engines can't roll back.

So if you insert a duplicate, THAT still generates an error on the second or subsequent row. It's not even consistent!

mysql> insert into t1 (ID,V) VALUES (10,1),(10,2);
ERROR 1062 (23000): Duplicate entry '10' for key 'PRIMARY'
mysql> select * from t1;
+----+---+
| ID | V |
+----+---+
| 3 | 1 |
| 4 | 1 |
| 5 | 1 |
| 6 | 0 |
| 10 | 1 |
+----+---+
5 rows in set (0.00 sec)

Of course if we use a transactional engine, it looks better:


mysql> ALTER TABLE t1 ENGINE=InnoDB;
Query OK, 5 rows affected (0.00 sec)
Records: 5 Duplicates: 0 Warnings: 0

mysql> insert into t1 (ID,V) VALUES (20,1),(20,2);
ERROR 1062 (23000): Duplicate entry '20' for key 'PRIMARY'
mysql> select * from t1 WHERE ID=20;
Empty set (0.00 sec)



But that won't change the behaviour of a multi-row insert with NULLs in invalid places.
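
For instance (a hypothetical run against the now-InnoDB table, still with the default non-strict SQL mode), the NULL on the second row is silently turned into 0 with nothing but a warning:

mysql> insert into t1 (ID,V) VALUES (30,1),(31,NULL);
Query OK, 2 rows affected, 1 warning (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 1

mysql> select * from t1 WHERE ID IN (30,31);
+----+---+
| ID | V |
+----+---+
| 30 | 1 |
| 31 | 0 |
+----+---+
2 rows in set (0.00 sec)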

This kind of stuff is nonsense and we need it to GO AWAY NOW.

How?

SET SQL_MODE='STRICT_ALL_TABLES';

And now every error is really an error. Yay! Why can't this be the default? (I know the answer; this is a rhetorical question)
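
Here is a rough sketch of what that changes (assuming the table is still InnoDB after the ALTER above, so the whole statement is rolled back; exact error text may vary by version):

mysql> SET SQL_MODE='STRICT_ALL_TABLES';
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 (ID,V) VALUES (40,1),(41,NULL);
ERROR 1048 (23000): Column 'V' cannot be null

mysql> select * from t1 WHERE ID IN (40,41);
Empty set (0.00 sec)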

Sunday, 25 April 2010

Streaming data from MySQL

How do we stream data from MySQL? Traditionally, developers have thought that they can simply use their API's cursor-equivalent object (e.g. a result set) and move through the results.

Unfortunately, this does not stream in most cases. Most client libraries call mysql_store_result, which reads the entire result into memory, so you're just iterating over an already-in-memory data set. This fails if the result doesn't fit in memory.

Enter mysql_use_result - if your library can use this instead of mysql_store_result, you can step through the records without needing to keep them all in RAM at once.

However, there is some bad news - while going through a result set with mysql_use_result, you can't do any OTHER queries on the connection. An attempt to do so generally violates MySQL's protocol and fails. So if you need to do other queries while you're processing the large data set, you should open another connection.

We've found that in Perl DBI, mysql_use_result performs significantly worse on small data sets (i.e. ones which fit in RAM, say 100 MB - 1 GB) than mysql_store_result (which is used by default). It seems that many more calls have to be made from Perl into the client library.

Enter HANDLER. This is basically an SQL-level object which behaves a bit like a server-side cursor would in some other databases. You can have multiple handlers open on the same connection, and can do other operations while a handler is open. The disadvantage: it cannot be used for arbitrary queries. Only table and index scans of a single table can be done - no joins or sorting. And the client needs to know a bit about the table to choose the best method - the optimiser doesn't get involved.
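
A minimal sketch of the syntax, using the t1 table from the post above (in real code you would keep issuing READ NEXT until it returns no more rows):

HANDLER t1 OPEN;

-- full scan in primary key order, a batch at a time
HANDLER t1 READ `PRIMARY` FIRST LIMIT 100;
HANDLER t1 READ `PRIMARY` NEXT LIMIT 100;

-- or seek within the index and scan onwards from there
HANDLER t1 READ `PRIMARY` >= (5) LIMIT 100;
HANDLER t1 READ `PRIMARY` NEXT LIMIT 100;

HANDLER t1 CLOSE;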

However, in the cases where a lot of data is likely to be returned, we can often either avoid joins altogether, or do them on the client side - if there are small "dimension tables" that can be held in memory on the client.

None of this is great, of course; it would be better if MySQL actually supported server-side cursors. Unfortunately it doesn't (except in stored procedures).
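
For completeness, this is roughly what that stored-procedure-only cursor support looks like - a hypothetical procedure that walks the t1 table from the first post (cursors here are read-only and forward-only):

DELIMITER //

CREATE PROCEDURE sum_v()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE cur_v INT;
  DECLARE total INT DEFAULT 0;
  -- cursor declarations go after variables and before handlers
  DECLARE c CURSOR FOR SELECT V FROM t1;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN c;
  read_loop: LOOP
    FETCH c INTO cur_v;
    IF done THEN
      LEAVE read_loop;
    END IF;
    SET total = total + cur_v;
  END LOOP;
  CLOSE c;

  SELECT total;
END //

DELIMITER ;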