Practical Partitioning, Understanding pg_repack, Explaining Buffers, Multi-row Constraints | Scaling Postgres 171
In this episode of Scaling Postgres, we discuss practical partitioning, understanding pg_repack, explaining buffers and enforcing multi-row constraints.
Content Discussed
- PostgreSQL 14 Beta 2 Released!
- Postgres Vision 2021
- Practical Partitioning in Production with Postgres [Postgres Vision 2021]
- Understanding pg_repack: What Can Go Wrong – and How to Avoid It
- Explaining the unexplainable – part 6: buffers
- PostgreSQL: Constraints over multiple rows
- From MD5 to scram-sha-256 in PostgreSQL
- A tale of making company-wide standard psqlrc
- Zero-downtime schema migrations in Postgres using views
- Using GeoDjango and PostGIS in Django
- Federico Campoli
- Rubber Duck Dev Show Episode 1 | It's All About The Tests!
YouTube Video
Podcast Audio
Transcript
All right. I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "PostgreSQL 14 Beta 2 Released!". This is from postgresql.org, and they've announced the release of Postgres 14 Beta 2. They mention the changes made since Beta 1, which are predominantly bug fixes, but if you're interested in trying this version to see if it works for your use case, you can go ahead and check it out.
The next piece of content is the website for Postgres Vision, an online conference that happened this past week. You can just go to postgresvision.com and click Watch Now to see the talks that were presented. Related to that, there's another post called "Practical Partitioning in Production with Postgres", a talk given at Postgres Vision 2021. This is from vyruss.org. It's a very simple post, but it links to a YouTube video of the talk. It's a very good talk about partitioning and the things to keep in mind when you're considering moving to it. As a reminder, partitioning is where you take a very large table and break it up into smaller tables.
Maybe you partition a large table by date, time, status, or a particular customer ID. That makes the tables more manageable: vacuum can process each partition more easily, the individual indexes on each partition are smaller, and so are the partitions themselves, so scans only have to look at a small portion of the data at a time. It's a really great talk, including at the tail end, where he describes a real-world scenario: you have a 20 terabyte table, so how do you convert it to a partitioning scheme without significant downtime? He describes a process for doing it. Now, there are two hacks in his description that make me nervous, but it appears they should work. So if you have a particularly large table and want to find a way to move to a partitioning scheme with little or no downtime, definitely check out this presentation from Jimmy Angelakos.
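As a quick refresher on what a date-based partitioning scheme looks like, here is a minimal sketch using declarative range partitioning; the measurements table and its columns are just illustrative and not taken from the talk.

```sql
-- The parent table is partitioned by a date range and holds no data itself.
CREATE TABLE measurements (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    device_id  integer NOT NULL,
    reading    numeric NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- One partition per month; vacuum and indexes work on each partition separately.
CREATE TABLE measurements_2021_06 PARTITION OF measurements
    FOR VALUES FROM ('2021-06-01') TO ('2021-07-01');
CREATE TABLE measurements_2021_07 PARTITION OF measurements
    FOR VALUES FROM ('2021-07-01') TO ('2021-08-01');

-- An index created on the parent is created on every partition as well.
CREATE INDEX ON measurements (device_id, created_at);
```

A query that filters on created_at only touches the matching partitions thanks to partition pruning.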
The next piece of content- "Understanding pg_repack: What Can Go Wrong - and How to Avoid It". This is from percona.com. They're talking about a utility called pg_repack, which allows you to compress your indexes or tables. Although versions Postgres 12 and 13 have implemented concurrent reindexing, I've basically moved to use that utility for indexing and would only use pg_repack for tables. Particularly with version 14 coming up, it even handles partition tables and the reindexing, concurrently. But I would say this is probably one of the best tools to use if you want to compact your tables due to bloat issues because it does do it in an online manner, unlike VACUUM FULL, which would lock the whole tables in CLUSTER. So this does it without bringing the table offline. But there are some things it has to do in terms of locking that can cause issues. So you want to be very careful how you do your pg_repack. He emphasizes that in this post.
It goes through the process of what pg_repack does. I'm not going to describe the whole process here, but he covers the different areas where you could run into problems. Most notably, there are about three places where an ACCESS EXCLUSIVE lock is taken, which means pg_repack must get exclusive access to the table with nothing else touching it, including SELECT statements. So it's critical to run pg_repack when the system is in a lower-traffic state: either shut down workers or do it during off-hours, because it needs multiple opportunities to get these ACCESS EXCLUSIVE locks on the table you're trying to repack. The first is taken when creating an initial empty copy of the objects it needs, the next when switching over, and then again during cleanup. The other thing to keep in mind is that it also cancels running statements in order to acquire these locks.
There is a timeout associated with this, but by default, if canceling those statements doesn't succeed, it moves on to terminating them. He specifically says you're going to want to use the --no-kill-backend option to avoid that termination step. Instead, pg_repack will just skip repacking that table and emit warnings along the lines of "WARNING: timed out, do not cancel conflicting backends" and "INFO: Skipping repack <table name> due to timeout". DDL is also prohibited while pg_repack is running, so any attempt to add a column or an index won't succeed because of the locks pg_repack holds during the repack operation. This is a great post, and if you want to learn more about pg_repack, definitely check it out.
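As a rough sketch of what that invocation might look like, assuming a hypothetical database and table name (check your pg_repack version's documentation for the exact options it supports):

```bash
# Repack a single bloated table online; if the lock cannot be acquired in
# time, skip the table instead of terminating conflicting backends.
pg_repack --dbname=mydb --table=public.orders \
          --wait-timeout=60 --no-kill-backend
```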
The next piece of content- "Explaining the unexplainable- part 6: buffers". This is from depesz.com and he's talking about when you're doing an EXPLAIN or EXPLAIN ANALYZE, you can specify buffers on to give you information with regard to shared buffers and disk access relative to it. So for example, here he's doing an EXPLAIN ANALYZE ON and buffers on and you can see that it outputs buffers and gives you information about it. So this is for the shared memory area. The hit and the cache, or the shared buffers, was 480 blocks, or that's the number of pages. And what was read actually from the disk is 17662. You even hit the buffers at the planning stage. So you could see 19 blocks were hit while doing the planning. Now this is a simple representation of what buffer shows, but it can show all this different variety. So it covers shared, hit, read, dirtied, and written. Local hit, read, dirtied, and written.
temp read and written; and he goes through each one. Basically, a hit means the information was found in shared buffers, a read means it had to go to disk for it, and a write means that, as part of that statement, pages were written out to disk, usually, he mentions, to free up space in shared buffers. Finally, dirtied can happen in certain cases, such as when there are new pages in a table or an index: a page gets marked dirty and must eventually be written to disk. In terms of shared, local, and temp: shared covers the normal database objects like tables, indexes, materialized views, and things of that nature; local covers temporary objects that belong to your session, so it's basically local to the session you're running in; and temp is for temporary data used in sorting, hashing, and things of that nature. So this is a very comprehensive post explaining all of this in detail with examples, and I highly encourage you to check it out.
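For reference, here is a minimal sketch of what turning the option on looks like; the table and query are hypothetical, and the counters you see will depend on your data and cache state.

```sql
-- BUFFERS adds lines such as
--   Buffers: shared hit=... read=...
-- to each plan node, plus a separate Buffers line for the planning stage.
EXPLAIN (ANALYZE ON, BUFFERS ON)
SELECT count(*)
FROM orders
WHERE created_at >= now() - interval '7 days';
```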
The next piece of content- "POSTGRESQL: CONSTRAINTS OVER MULTIPLE ROWS". This is from cybertec-postgresql.com and he's talking about when you want to apply some sort of constraint over multiple rows, not just in one, because it's pretty easy to add a constraint that applies for a single row of data you're trying to INSERT or UPDATE. But what about when you want to constraint over multiple rows? Now, this actually is not an object or something you can create in Postgres, but he explains how you would do the implementation to satisfy this type of constraint. So for example, he has a scenario where you have planes and planes are owned by multiple owners. So he has an owners table that designates the percentage ownership of each plane. So basically the toner ownership should not exceed 100%. So you may have one row where someone owns 100% or two rows where one person owns 50%, and another person owns 50%.
So basically you have a 100% constraint across multiple rows. If two users update the ownership of a particular plane at the same time, you can run into trouble, because the table can end up with more than 100% ownership for that plane, and he shows that in this diagram here. One option is to lock the table in exclusive mode. That's not great, because now you have a really heavy lock and not much concurrency on that table: anytime anybody wants to INSERT into it, the whole table has to be locked. So that's not a great solution, and he describes that here. The most lightweight way to do it is with the SERIALIZABLE transaction isolation level, which makes concurrent transactions behave as if they ran one at a time. So, for example, user one starts its owner assignment with BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE.
It does its INSERTs and then a SELECT to validate that the resulting total is still 100%. So that's good, but it hasn't committed yet. Meanwhile, user two starts another transaction at the serializable isolation level and does its INSERTs. Then the first transaction COMMITs and succeeds, but when the second one is ready to COMMIT, it receives an error that it could not serialize access. So if you need concurrent access to information like this, the serializable isolation level can help you. Now, he says someone proposed SELECT FOR UPDATE, which locks the rows you're about to update, but the problem he mentions is that it only locks existing rows: it doesn't cover future rows added by concurrent INSERTs, whereas this technique does. So it's an interesting use case of enforcing a constraint across multiple rows, and if you're interested in that, definitely check out this blog post.
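Here is a minimal sketch of the pattern; the table and column names are hypothetical stand-ins for the post's plane-and-owner example.

```sql
-- Hypothetical schema: one row per (plane, owner) with a percentage share.
CREATE TABLE plane_owner (
    plane_id integer NOT NULL,
    owner    text    NOT NULL,
    pct      numeric NOT NULL CHECK (pct > 0 AND pct <= 100)
);

-- Each writer runs this pattern.  If two sessions do it concurrently for the
-- same plane, one of them fails with a serialization error (SQLSTATE 40001)
-- instead of silently pushing the total over 100%, and can simply retry.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

INSERT INTO plane_owner (plane_id, owner, pct) VALUES (1, 'alice', 50);

-- Application-level check that the multi-row invariant still holds.
SELECT sum(pct) <= 100 AS ok FROM plane_owner WHERE plane_id = 1;

COMMIT;
```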
The next piece of content- "FROM MD5 TO SCRAM-SHA-256 IN POSTGRESQL". This is from cybertec-postgresql.com and they're talking about moving from MD5 to SCRAM authentication and they explain the reasoning behind it. Basically, there are more vulnerabilities for MD5 cryptography compared to SCRAM-SHA-256 and it's a better implementation of doing passwords. But he has this great list of steps for doing the upgrade. Number one, upgrade your client software because you're going to want to have all your clients be able to support SCRAM as opposed to just MD5. So that's the first step. Make sure you have up-to-date versions of the client software that talks to your Postgres database. Second, change the password encryption parameter for your Postgres configuration.
This is the default hashing method used when a password is set, so from then on, whenever you set a user's password, it will be stored as SCRAM and not MD5. Third, reset all the passwords. There's a psql backslash command you can use for that, or you can use ALTER USER to set the password as well. Fourth, change the authentication method in pg_hba.conf: if you're using password authentication, it will usually say md5 there, and you change that to scram-sha-256. So this is a great post explaining how to move from MD5 authentication to SCRAM, and if you're interested in that, check out this blog post.
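A minimal sketch of the server-side steps, assuming a hypothetical role name (the pg_hba.conf change is shown as a comment):

```sql
-- Step 2: hash newly set passwords with SCRAM from now on.
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();

-- Step 3: re-set each password so it is stored as a SCRAM verifier
-- (in psql, \password does the same without the password appearing in logs).
ALTER USER app_user WITH PASSWORD 'new-secret';

-- Step 4: in pg_hba.conf, change the auth method from md5 to scram-sha-256:
--   host  all  all  10.0.0.0/8  scram-sha-256
-- and reload the configuration.
```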
The next piece of content is "A tale of making company-wide standard psqlrc". This is from depesz.com, and this explains the psqlrc file and how you can configure defaults for working with the psql client. They had a use case where they had thousands of Postgres servers at this company. When they logged into a particular server, they wanted it to output a variety of information to confirm that they were indeed logging into the correct server. Now, they had an existing function in the DBA schema called SMI, and it would output things such as the project related to it, the environment, the cluster, the particular role you're connecting as, et cetera.
So, for example, when you logged in, it would say something like: hey, this is the magic project in the production environment for cluster one, connected as the backup DB role, along with the process ID. Basically, that was sufficient to output what they wanted. But then someone asked if they could connect to the postgres database; that resulted in an error, so he had to rework the file. Then someone asked about connecting through PgBouncer, which ran into another issue, so he had to refactor it again. Then someone asked about connecting to the admin database that PgBouncer itself serves, and he ran into more problems. So basically, the post is an iteration on this psqlrc file and all the modifications needed to get it working everywhere. If you're interested in all the ways you can configure your psqlrc file, definitely check out this post.
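As a minimal sketch of the general idea, here is a generic psqlrc banner built only from psql's built-in variables; the real post builds its output from their internal dba.smi function, which isn't reproduced here.

```sql
-- ~/.psqlrc: print a simple connection banner on every login.
\echo ==================================================
\echo Connected to database :DBNAME as user :USER
\echo Host :HOST port :PORT
\echo ==================================================
```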
The next piece of content- "Zero-downtime schema migrations in Postgres using views". This is from fabianlindfors.se, and he's describing an interesting way of abstracting away table changes from, say, your applications or end users. Because there's a long process for doing things like renaming a column and not disrupting access to the table or the information. But he has a unique way of doing it, combining schemas and views. So basically, the schemas are essentially versioned, and each schema has its own view to access particular tables. So for example, he has a user's table and he has schema A. So this is considered just a versioned schema. And within schema A, he creates a view to access the user's table. So everything is accessed through these views. Then when you're connecting as a user or an application, you set your search path to the A schema.
When you do a SELECT FROM users, you get the data from the table as exposed by the view. Now, how do you rename a column in that table? For example, he does an ALTER TABLE on users and renames the column from id to user_id. The view handles that: a client connected to schema A still sees the column under its old name while the view maps it to the renamed user_id column underneath, so you don't need to change anything for the schema A users. Then you create another versioned schema, which he calls schema B, with a new view that uses the new definition.
So, as he mentions, older clients can still access the old column name through schema A, while new clients use schema B. Now, I wouldn't necessarily implement this myself, because it seems like a fair amount of work, I'm sure there are use cases where it doesn't work out as conveniently, and it's rare to have to make changes like this. I'm not sure the overhead of managing all these schemas and views makes sense, but it's definitely an interesting thought experiment and technique for hiding backend table changes. So check this blog post out if you're interested.
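Here is a minimal sketch of the pattern as I understand it; the users table and column names are illustrative.

```sql
-- Versioned schema A exposes the original interface through a view.
CREATE SCHEMA a;
CREATE VIEW a.users AS SELECT id, name FROM public.users;

-- Clients pin themselves to a schema version via search_path.
SET search_path TO a;
SELECT id, name FROM users;

-- Rename the underlying column.  The view in schema A keeps exposing the
-- column under its original name "id", so schema A clients are unaffected.
ALTER TABLE public.users RENAME COLUMN id TO user_id;

-- A new versioned schema B exposes the new interface for new clients.
CREATE SCHEMA b;
CREATE VIEW b.users AS SELECT user_id, name FROM public.users;
```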
The next piece of content- "Using GeoDjango and PostGIS in Django". This is from pganalyze.com, and they're talking about using these tools with PostGIS and Postgres to work with spatial data. So if you're interested in that type of content, you can definitely check out this blog post.
The next piece of content, the PostgreSQL person of the week is Federico Campoli. So if you're interested in learning more about Federico and his contributions to Postgres, definitely check out this blog post.
The last piece of content is not exclusively related to Postgres, but it is the first episode of a new show I started called The Rubber Duck Dev Show, and the first episode is all about testing. Now, this is a longer format. It's not as highly edited as Scaling Postgres is, so this is about an hour in length. But if you're a developer and are interested in some more long-form content that talks about testing, maybe you'd like to check that out.