
Performance & Hard Things | Scaling Postgres 344


In this episode of Scaling Postgres, we discuss new releases, collation speed, ZFS performance, insert benchmarking and pglz vs. lz4 performance.

Transcript

In the U.S. this week, we're having Thanksgiving followed by Black Friday, so by the time you see this show it will be, I guess, time for Cyber Monday and Giving Tuesday and all these different days. So if you celebrate this time, or if you have to work extra hard during this time, I hope this particular season is treating you well. In terms of the show this week, we actually have a number of performance posts, followed by posts explaining why some things are hard in Postgres. But I hope you, your friends, family and co-workers continue to do well.

Our first piece of content is “PostgreSQL 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22 Released!”. These are the follow-up releases to handle primarily two bugs that were discovered after the recent releases last week: the extensions issue that was discovered because the ResultRelInfo struct changed in size, as well as an issue with ALTER ROLE ... SET ROLE and ALTER DATABASE ... SET ROLE. However, surprisingly, they added some more fixes in as well, including fixing cases where a logical replication slot's restart_lsn could go backwards. That doesn't sound particularly good. There's also a fix to avoid deleting still-needed WAL files during pg_rewind, which, if you use pg_rewind, would probably not make you a happy camper either. But if you want more detail, you can review the release notes that are here.

Next piece of content “Talk Fast: The Speed of Various Collations”, this is from thebuild.com. He created a simple table with a single text column holding a million records and wanted to see how fast that table could be selected and sorted. He compared the speeds of different locale providers and locales, and he has the timing benchmark right here. The fastest was the libc provider with the POSIX locale, which as far as I understand is basically just pure ASCII. Not far behind was the new built-in C.UTF-8 locale that came in version 17 of Postgres. So that's great, because this is the one I'm going to want to use as my default moving forward once I move all of my personal apps onto Postgres 17. Following that are the ICU collations, but they're about 300 milliseconds behind the libc and built-in providers on this query. Then finally, libc with a specific language locale was really bad at over three and a half seconds. This gives you some good performance numbers to think about as you're choosing a particular locale you want to use.
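If you want to try the built-in C.UTF-8 provider he lands on, here's a minimal sketch of creating a database with it, assuming Postgres 17 or higher; the database name is just a placeholder.

```sql
-- Requires Postgres 17+; template0 is needed when changing locale settings
CREATE DATABASE myapp
  TEMPLATE template0
  ENCODING 'UTF8'
  LOCALE_PROVIDER = builtin
  BUILTIN_LOCALE = 'C.UTF-8';
```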

Next piece of content “PostgreSQL: Maybe we should give ZFS a chance (3) – testing with an aligned record size”, this is from dbi-services.com. I think it was last week or the week before that he was testing out running Postgres on ZFS, comparing a ZFS file system with compression to an ext4 file system. But someone made a comment on that post saying that changing the ZFS record size would be important to do. So he basically ran some of the tests again and aligned the ZFS record size to the Postgres block size. The Postgres block size is 8 kilobytes, so he set the ZFS record size to 8 kilobytes as well. And then he reran his tests and, amazingly, the performance almost doubled. So this is a really important configuration change to make if you plan on using ZFS, so much so that it looks like it was even a little bit faster than the ext4 file system. And the other benefit of ZFS is you get a lot of disk savings as well if you're using that compression. He even tried a 32 kilobyte block size both on ZFS and on PostgreSQL and saw even greater performance, but he said he would have to run more tests to see if this is something he would want to do, because generally he does not change the block size of PostgreSQL. But if you want to learn more, definitely check out this blog post.
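For reference, here's a minimal sketch of what that alignment looks like on the ZFS side; the pool and dataset names are hypothetical, and recordsize only affects files written after it's set, so it should be in place before loading data.

```shell
# Align the ZFS record size to Postgres' default 8 kB block size
zfs set recordsize=8k tank/pgdata
zfs set compression=lz4 tank/pgdata

# Verify the settings
zfs get recordsize,compression tank/pgdata
```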

Next piece of content “Benchmarking PostgreSQL Batch Ingest”, this is from timescale.com and this is a follow-up to a post last week where he was comparing multirow inserts to inserting arrays using the UNNEST function. He found that using arrays with UNNEST was faster than a multirow insert, but he actually wanted to compare other solutions such as COPY. So he went ahead and compared all these variations, and you can see the results down here. He compared binary COPY, which is basically copying binary data so Postgres doesn't have to parse incoming text; straight text COPY; prepared inserts with UNNEST; general prepared multirow inserts; and plain inserts. The results are ordered by speed, and you can see that the slowest is just inserting multiple values, by a pretty wide margin, while the highest performance is the binary COPY with the largest batch size. So that's definitely the fastest way to do it. But what's interesting is that at low batch sizes like 1,000 records, COPY is actually slower than UNNEST-based inserts or prepared insert statements. It's as batch sizes increase that COPY becomes the clear winner. So if you're interested in learning more, I encourage you to check out this blog post.
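To make the three approaches concrete, here's a minimal sketch; the table and column names are hypothetical, and the exact statements the benchmark used may differ.

```sql
-- Hypothetical target table
CREATE TABLE IF NOT EXISTS readings (ts timestamptz, value double precision);

-- Multirow insert: every literal has to be parsed individually
INSERT INTO readings (ts, value)
VALUES (now(), 1.0), (now(), 2.0), (now(), 3.0);

-- UNNEST insert: one array per column, which the post found faster than
-- multirow VALUES (shown with literals; in practice these would be parameters)
INSERT INTO readings (ts, value)
SELECT * FROM unnest(ARRAY[now(), now()]::timestamptz[], ARRAY[4.0, 5.0]::float8[]);

-- COPY: the clear winner at large batch sizes (binary format avoids text parsing entirely)
COPY readings (ts, value) FROM STDIN (FORMAT csv);
2024-11-25 00:00:00+00,6.0
\.
```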

Next piece of content “Optimizing PostgreSQL Performance & Compression: pglz vs. LZ4”, this is from timescale.com. If you insert a row into Postgres that is just over 2 kilobytes, that usually triggers TOAST, The Oversized-Attribute Storage Technique. When it does that, it typically compresses the value using Postgres' built-in compression mechanism, pglz. However, this post wanted to look at whether you could get better performance using LZ4 instead, and this option is available on PostgreSQL 14 and higher. Looking at, I guess, nine different queries here, you can see that sometimes there wasn't much of a difference, but sometimes there was a significant difference, as much as a 72% improvement in query performance for one particular query. Overall, across all the different queries, the improvement in query speed was about 13%, and LZ4 came out on top. So this seems to be a relatively easy way to maybe get a little bit more performance out of your system: switch your TOAST compression from pglz to LZ4. And how you can do that is you just change the default_toast_compression setting to lz4. (Zstandard is available for WAL compression, but for TOAST the options are currently pglz and lz4.) But if you want to learn more, definitely check out this blog post.
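Here's a minimal sketch of making that switch, assuming Postgres 14+ built with LZ4 support; the table and column names are hypothetical.

```sql
-- Set the default for newly created columns (can also be set in postgresql.conf)
SET default_toast_compression = 'lz4';

-- Or switch an existing column; only values written after the change use LZ4
ALTER TABLE documents ALTER COLUMN body SET COMPRESSION lz4;

-- Check which method was used for stored values
SELECT pg_column_compression(body) AS method, count(*) FROM documents GROUP BY 1;
```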

Next piece of content “Performance archaeology: OLTP”, this is from vondra.me and he wanted to look at performance improvements over time, not just over some recent versions, but basically as far back as he could go. So he compared the performance of 20 different Postgres versions, going back as far as he could before he started having problems actually building the old versions. He was able to get 8.0, which was released about 20 years ago, but he wasn't able to get anything in version 7, and I think even 8.1 had an issue. He used pgbench to do these comparisons at scales of 100, 1,000 and 10,000 and up to 256 clients, and he shows the PostgreSQL config he used in each of those cases. He is using a 40-CPU server, and he makes the observation that the older versions of Postgres, all of version 8 and probably some of the early 9.x releases, didn't have a lot of multiprocessor benefits baked in, so clearly you're going to see some changes from that. If we look at the small data set, you see pretty much what you'd expect: things go up and to the right, essentially. The chart shows transactions per second on the left and the Postgres release along the bottom. You can see that in version 8 everything is sub-100,000 transactions per second, whereas, oh, he's even got version 18, where at 128 clients you've got almost 850,000 transactions per second. So quite a significant performance difference. And you can see the versions at which different performance improvements were made. It looks like he did both simple and prepared statements to compare those. The initial charts were read-only; these are read-write, so you can see the transactions per second are significantly lower, but again you can see the improved performance with higher numbers of clients in the later releases. He goes on to the medium and large data sets as well, and makes some comments on the general improvements in different versions. So I thought this was pretty interesting, so feel free to check it out as well.
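The exact commands aren't reproduced here, but a pgbench run of the kind described might look roughly like this; the database name, thread count and duration are assumptions.

```shell
# Initialize at one of the scales mentioned (100, 1000, 10000)
pgbench -i -s 100 bench

# Read-only run with prepared statements, 128 clients
pgbench -c 128 -j 16 -T 60 -S -M prepared bench

# Read-write run (the default TPC-B-like script) with the simple protocol
pgbench -c 128 -j 16 -T 60 -M simple bench
```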

Next piece of content, “Locale Cooking: Common Scenarios and Suggestions”. This is from thebuild.com. He's written this blog post like a cookbook of sorts, where you have some sort of objective and he chooses what locale you should probably be using. The first one here is: I want maximum speed, I'm running on Postgres version 17 or higher, and it's okay if collation is wacky for non-7-bit-ASCII characters. He says if this is you, you're going to want to use the C.UTF-8 locale from the built-in locale provider. And basically this is the one I'm personally going to be choosing for my own systems, and then just add a different collation if I need distinct language searches. He has about ten different scenarios here, each saying "I'm looking to do this", and he gives a suggestion for each one. Although I do want to call out one of the final ones, where he's saying: I'm on Postgres 17 or higher and I never, ever want to worry about my locale provider library changing on me. And he basically says use the C.UTF-8 locale from the built-in locale provider. So definitely sound advice, but check this blog post out if you want to learn more.
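For the "add a different collation for language-specific searches" part, here's a minimal sketch using an ICU collation on top of a fast default; the collation name, locale, and table are hypothetical.

```sql
-- Create a per-language collation from the ICU provider (requires ICU support)
CREATE COLLATION german (provider = icu, locale = 'de-DE');

-- Use it only where language-aware ordering actually matters
SELECT name
FROM products
ORDER BY name COLLATE german;
```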

Next piece of content “Why PostgreSQL major version upgrades are hard”, this is from peter.eisentraut.org, and it seems this might be a response to the Tembo.io article that was mentioned in Scaling Postgres in the last couple of weeks, talking about the challenges of Postgres upgrades. He says there are basically three ways to do an upgrade: you can use pg_upgrade, you can simply dump and restore all the databases to a new instance, or you can logically replicate to a new instance. Options 2 and 3 are basically identical in that you set up a new instance and transfer the data in some way to that new instance. But pg_upgrade is a little bit different, because what it does is actually take a dump, using pg_dump, of only the schema of the old database and restore it to the new one, and then it just copies the data files directly from the old instance to the new instance. So basically nothing about the data files changes; the only thing that's changing is essentially the system catalogs. Now, when we eventually move from 32-bit transaction IDs to 64-bit transaction IDs, I think that's the point at which the data files are going to change, but historically, for many versions, you've been able to use pg_upgrade and just copy the data files over as they exist. But he says the challenge is the system catalogs, because there's all sorts of stuff to change all over the code base, which really wasn't designed with this use case in mind, being able to easily upgrade one system catalog to another. MySQL has this kind of thing built in, as was mentioned in the Tembo.io article, but it's not something Postgres was really designed to do, and he says it's going to take quite a significant amount of work if the community wants to do it. But if you want to learn more about this, check out this blog post.
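As a reminder of what option 1 looks like in practice, here's a minimal sketch of a pg_upgrade invocation; the paths and versions are hypothetical, and --link hard-links the data files rather than copying them.

```shell
# Run compatibility checks first
pg_upgrade --check \
  --old-bindir  /usr/lib/postgresql/16/bin \
  --new-bindir  /usr/lib/postgresql/17/bin \
  --old-datadir /var/lib/postgresql/16/main \
  --new-datadir /var/lib/postgresql/17/main

# Then the actual upgrade
pg_upgrade --link \
  --old-bindir  /usr/lib/postgresql/16/bin \
  --new-bindir  /usr/lib/postgresql/17/bin \
  --old-datadir /var/lib/postgresql/16/main \
  --new-datadir /var/lib/postgresql/17/main
```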

Next piece of content “DELETEs are difficult”, this is from notsoboringsql.com and he says part of the reason that deletes are difficult is because when you do a delete, eight things have to happen. The first is row identification: your delete statement actually needs to find which rows need to be deleted, so a lot of times, if you want deletes to run fast, you may have to add dedicated indexes for the particular delete you want to do. The second thing is acquiring a lock, so the row can be deleted. Next, any BEFORE DELETE triggers are run, if they exist. Next, the row has to be marked as deleted, because a delete doesn't actually remove the row; it basically just marks it as deleted and vacuum will come along and remove it later. Next, any indexes have to be updated to reflect that the row has been deleted. It then has to carry out cascaded actions, so if you have ON DELETE CASCADE enabled on particular tables, those have to be executed. Then it runs any AFTER DELETE triggers, if they exist. And finally it needs to write to the WAL as well. He also describes the vacuum process that then has to deal with these now-deleted rows. He talks a little bit about soft deletes and how they're not really a permanent solution; they can be used if you want a quick undo, but you probably still want to eventually delete the data. And when you're wanting to delete a lot of rows, the best practice is definitely to batch those deletes: put them in small batches that the system can hardly feel as it does all the steps necessary to delete them, and try to minimize the number of rows locked at one time. So how do you change a delete into a batch-based delete? Well, you can easily do it with a select query pulling the IDs that should be deleted, as shown in the sketch below. This is the solution that I use as well, because you can limit the select, whereas you cannot limit the delete. He also says to make sure that vacuum is configured so that it is running, although at times when I'm deleting a lot of rows, or even updating a lot of rows, I may take a break periodically from those transactions and run a manual vacuum. Then he says, of course, the ultimate way to delete a whole bunch of data is to actually make it a partitioned table, and when that particular partition of data is no longer needed, you can just detach the partition or drop the table. But if you want to learn more, definitely check out this blog post.
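Here's a minimal sketch of that batching pattern; the table, column, and retention interval are hypothetical, and the delete would be repeated until it removes zero rows.

```sql
-- Delete old rows in batches of 5,000, using a SELECT (which can be limited)
-- to pick the IDs for each pass
WITH doomed AS (
  SELECT id
  FROM events
  WHERE created_at < now() - interval '90 days'
  ORDER BY id
  LIMIT 5000
)
DELETE FROM events e
USING doomed d
WHERE e.id = d.id;

-- Between batches, optionally give vacuum a hand
VACUUM (ANALYZE) events;
```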

Next piece of content “Substituting a variable in a SQL script”. This is from, I think, fljd.in, and he's comparing Oracle's way of doing variable substitution in a script with doing it in psql. To do it in psql, you have the -v option. So here you have a SQL script and you can define your variables, like a product ID, start date or end date in this case, and then when you run this script using psql, you can add -v options to supply the values you want substituted in that script. But then he starts getting a little more complicated, using an anonymous PL/SQL block in Oracle, and he says trying to do the same thing in Postgres doesn't really work; he runs into an error. But he actually switched to using a \gset command as well as the psql conditionals to achieve the same thing, so he didn't have to use a PL/pgSQL script; he's still using pure SQL there. But he says there is also a way to do it: in his SQL file he defines some settings at the beginning that accept the variables passed in from the command line, and then uses the current_setting function to read them inside the anonymous PL/pgSQL block. So if you want to learn more about how to do that, definitely check out this blog post.
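Here's a minimal sketch of both pieces, psql variable substitution and reading a value inside a DO block via a custom setting; the file name, variable names, and table are hypothetical, and this may differ from the exact approach in the post.

```sql
-- report.sql, run with something like:
--   psql -v product_id=42 -v start_date='2024-01-01' -f report.sql

-- Plain substitution: :var for bare values, :'var' for quoted literals
SELECT count(*)
FROM sales
WHERE product_id = :product_id
  AND sold_at >= :'start_date';

-- psql variables are not substituted inside a dollar-quoted DO body,
-- so stash the value in a custom setting and read it with current_setting()
SET my.product_id = :'product_id';
DO $$
BEGIN
  RAISE NOTICE 'working on product %', current_setting('my.product_id');
END
$$;
```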

And the last piece of content: there was another episode of Postgres FM last week. This one is on “Is pg_dump a backup tool?”, and Michael and Nikolay had Gulcin and Robert Haas on to discuss this topic, because they wrote blog posts on the subject. So if you want to hear more about this debate, I definitely encourage you to listen to the episode or watch the YouTube video down here.
 
