Setting the right packet size in MySQL

I was experimenting with uploading different file types to a MySQL table and I came across an issue where the connection to the server kept getting dropped. I figured out that it was a limitation with the max_allowed_packet variable, which was set to 4194304 bytes (4MB) by default. After I increased the limit to 50MB, the issue went away.
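For reference, this is roughly what I changed (just a sketch; the 50MB number is simply what happened to work for my test, not a recommendation):

```sql
-- check the current limit in bytes; mine was the 4194304 (4MB) default
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it for new connections; 52428800 bytes = 50MB
-- (this doesn't survive a server restart unless it's also set in my.cnf)
SET GLOBAL max_allowed_packet = 52428800;
```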

I'm not very familiar with packets and so I have a few questions.
1. Are there any security concerns with setting a large packet size?
2. The increase from the default 4MB to 50MB seems quite overkill to me, but maybe not. Is there a formula or some kind of rule of thumb for determining what the max packet size should be?
3. Are there any performance hits from setting a larger packet size than you actually need?

Thanks.

store files on filesystem, store rows/structured data in mysql.
innodb pages are 16k, and if a row needs a lot of "extra pages" (as they're called) so its big columns can spill over into other pages, innodb ends up crying. it's not optimized for that, and it never will be.
You could get around this by splitting your data into smaller chunks (sketched below), but then you're just implementing a filesystem on top of a database on top of a filesystem -- there's no benefit to it.
and I have no clue what these giant blobs would do to the transaction undo log in innodb... argh!!
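to be concrete, chunking would mean something like this hypothetical table (names made up), i.e. exactly the filesystem-on-top-of-a-database thing:

```sql
-- hypothetical chunk table: files split into fixed-size pieces,
-- reassembled in order by the application
CREATE TABLE file_chunk (
  file_id  BIGINT UNSIGNED NOT NULL,
  chunk_no INT UNSIGNED    NOT NULL,
  data     BLOB            NOT NULL,  -- at most 64KB per piece, well under the packet limit
  PRIMARY KEY (file_id, chunk_no)
) ENGINE=InnoDB;
```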

however, if you're not using innodb, you probably should be; most other table engines suck from a durability perspective.
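checking and converting a table is easy enough if you want to go look ('mytable' is just a placeholder):

```sql
-- see which engine a table currently uses
SHOW TABLE STATUS LIKE 'mytable';

-- convert it if it's still MyISAM or something else
ALTER TABLE mytable ENGINE=InnoDB;
```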

not really. a bigger packet size makes a server easier to dos / make it run out of ram, but that's true for any mysql setup, and nobody really exposes mysql to the unfiltered internet anyway.

We used to run thousands of servers supporting thousands of devs writing all sorts of different apps with a 2MB packet size (one of the largest mysql installations in the world). The mysql network protocol transfers mysql packets, which are serialized structs of data, over tcp. A single statement cannot span multiple packets. You should only need bigger sizes if you need to use larger statements, or if you need multistatements, but that is generally a sign of some kind of pathological behavior.

The reason large packets are supported is to minimize roundtrips and protocol overhead, since the mysql network protocol doesn't support pipelining and is half duplex (the client sends while the server waits, the server sends while the client waits, and so on), and to support multi-row inserts like the ones mysqldump produces.
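a multi-row insert (what mysqldump calls an extended insert) is just many rows packed into one statement; table and column names here are made up:

```sql
-- one statement, one roundtrip, many rows; this is the usual reason
-- a dump file bumps into the packet limit
INSERT INTO foo (a, b, c) VALUES
  (1, 2, 3),
  (4, 5, 6),
  (7, 8, 9);
```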
How this optimization works: mysql can use something called a multistatement to save on the number of roundtrips between the client and the server. For example, say that within one transaction you want to insert a row into one table, insert another row into a second table, and read something from a third table. Instead of five separate roundtrips, you send the whole batch as one string (see the sketch below). If your web frontend is the mysql client here and it's 10ms away from the server, the multistatement saves you 40ms of latency that the end user won't have to wait for, but it increases your packet size (most likely not to more than 2MB).
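written out, the batch is just the five statements concatenated and sent in one packet (your client library needs multi-statements enabled for this to work):

```sql
-- five statements, one roundtrip; at 10ms per roundtrip this is
-- ~10ms of waiting instead of ~50ms
BEGIN;
INSERT INTO foo VALUES (1, 2, 3);
INSERT INTO bar VALUES (4, 5, 6);
SELECT col1, col2 FROM baz WHERE pkcol = 7;
COMMIT;
```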

bottom line, stick to 2MB or 4MB or whatever the default is (let's not even go into how large packets affect replication if you want to have a backup server). And don't store large files in mysql; it's perfectly feasible to use a filesystem+mysql combination for that, and many people do.
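for what it's worth, the filesystem+mysql split usually looks something like this hypothetical schema (names made up): the bytes live on disk, only the path and metadata go in the table.

```sql
CREATE TABLE uploaded_file (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  original_name VARCHAR(255)    NOT NULL,
  storage_path  VARCHAR(1024)   NOT NULL,  -- where the actual bytes live on disk
  size_bytes    BIGINT UNSIGNED NOT NULL,
  sha256        CHAR(64)        NOT NULL,  -- lets you spot missing or corrupted files
  created_at    TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;
```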


Lots of information to digest, thank you. I will not be using MySQL to store files; I was just messing around with something and stumbled across this issue.

This is interesting; I wasn't aware of it. I'll play around with it.