
Efficient Pagination, Rotating Passwords, BRIN, Row Level Security | Scaling Postgres 78


In this episode of Scaling Postgres, we discuss efficient pagination, how to rotate passwords, BRIN benefits and Row Level Security.

Transcript

All right. I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "Pagination with Relative Cursors". This is from the engineering.shopify.com blog and they're talking about doing pagination. The way a lot of tools do pagination is with a LIMIT and then an OFFSET, but this is very inefficient once the offsets get large. They show an example of it here. The first query, with an offset of ten, takes around 6, 7, or 8 milliseconds; that's not too bad. Once you get up to an offset of 10,000, it takes about 80 milliseconds. And this apparently is for 14 million products. Once you get up to an offset of 100,000, it's over 2 seconds. And then, they say, it just times out at the 1,000,000th offset, which is probably something north of 30 seconds.

So at that point there's no way to make it performant. Basically, as other articles have mentioned, using this method for pagination does work for small offsets, but once they get large, it's very inefficient. So they're talking about relative cursor pagination. Now, this isn't a cursor as you typically think of it in terms of server-side programming within a database, like a database cursor, but it's still a cursor nonetheless. You basically keep the last ID of what was looked at. So for example, if the last ID someone was looking at as they were paging through was 67890, then you would show the products where the ID is greater than that last value, essentially where the cursor is, and order by the ID in ascending order. This assumes that whatever you're ordering is ordered by ID, and it becomes very efficient.
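To make that concrete, here's a minimal sketch of the two approaches, assuming a products table with an id primary key (the table and column names are illustrative, not necessarily Shopify's actual schema):

-- Offset pagination: the cost grows with the offset, because all of the
-- skipped rows still have to be read and discarded.
SELECT * FROM products ORDER BY id ASC LIMIT 100 OFFSET 100000;

-- Relative cursor (keyset) pagination: remember the last id the client saw
-- and start the next page from there; an index on id keeps this fast no
-- matter how deep you page.
SELECT * FROM products WHERE id > 67890 ORDER BY id ASC LIMIT 100;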

So you can keep paging deeper, essentially looking at the 100,000th row in that 14 million, and you'll still get great performance; the performance doesn't diminish. But of course, maybe you need to order by something other than an ID. Maybe you're sorting the records you're looking at by a particular date or a name. How could you do that? Here they're talking about sorting and skipping records where they want to sort by title and then by product ID, assuming you have duplicate product titles. They use the technique of a WHERE clause where the title is greater than "Pants", since the cursor is at "Pants", or the title equals "Pants" and the ID is larger than the last ID you were looking at.

So look at all titles greater than "Pants", as well as those titles that equal "Pants" where the product ID is greater than two. And then, of course, sort appropriately by the title ascending and then the ID ascending, which allows you to page through these types of records efficiently. Now, one thing I was looking at is this OR statement, and I wonder if they could instead use a WHERE clause that compares two columns at once. You can use parentheses to say (title, ID) greater than, and then parentheses around the values that you're searching for.

I wonder if that is more efficient planning-wise for the Postgres planner. So potentially, if you're investigating a solution like this for your pagination, maybe try that technique as well: comparing two columns at the same time as a combined value may be more efficient than using the OR. They also did the timing of this example; it wasn't as efficient as paging by just the ID, but it still results in pretty consistent performance, and of course you would need a multicolumn index to get the most efficiency out of this type of pagination. So if you're looking to improve your pagination and you're using the offset method, definitely check out this blog post for some ideas to improve that.
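As a rough sketch of the two variants, again with illustrative names, the OR form from the post and the row comparison alternative look like this (the multicolumn index is what makes either of them usable at scale):

-- OR form: rows past the cursor either have a later title, or the same
-- title with a larger id.
SELECT * FROM products
WHERE title > 'Pants'
   OR (title = 'Pants' AND id > 2)
ORDER BY title ASC, id ASC
LIMIT 100;

-- Row comparison form: expresses the same condition and may be easier for
-- the planner to match against the index.
SELECT * FROM products
WHERE (title, id) > ('Pants', 2)
ORDER BY title ASC, id ASC
LIMIT 100;

-- A multicolumn index supports both variants.
CREATE INDEX IF NOT EXISTS products_title_id_idx ON products (title, id);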

The next post is "Rotating PostgreSQL Passwords with no downtime". This is from jannikarndt.de. So this is the case where you have a password for a Postgres user that an application is using and you want to change that password. So we talked about three scenarios here. You could update the database and then update the application, and in between you'll have a short downtime. Make both the old and new password known to the application and let it fall back on the new one once the old one fails. Or create a copy of the database user, update the application, and delete the old user. So, of course, this sounds ideal, but it sounds like it might be a little bit complicated, but this is exactly what this post describes how to do. So essentially they use roles within Postgres. They create a my_app role with no login and that's the one that has the permissions to all the objects in the database. 

Then they create my_app_tom, essentially a different role with no login, as a member of my_app, so my_app is its main role. And then they create my_app_jerry, essentially another username with presumably a different password and the same permissions; it is also a member of my_app. Now, assuming that my_app_tom is the one that's active, you make it WITH LOGIN and give it its password. So essentially the application can log in as this user and get access to all the objects that my_app has access to. Then down the line, if you want to change the password that's being used, you make the secondary role, essentially the inactive one, my_app_jerry, with login, set its new password, and then change the application to use this new username. It begins using it, and then you deactivate the old one by reverting it back to no login. So this is a pretty efficient way to do password changes, coordinating what the application is using with the database, without any downtime. If you're interested in that, it's definitely a blog post to check out. They also talked about using Liquibase; I didn't really cover that part, but it may be of interest to you as well.
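As a condensed sketch of that role setup, based on the description above (the passwords are obviously placeholders):

-- Permission-holding role: has the grants on the database objects,
-- but cannot log in itself.
CREATE ROLE my_app NOLOGIN;

-- Two interchangeable login identities that inherit my_app's permissions.
CREATE ROLE my_app_tom NOLOGIN IN ROLE my_app;
CREATE ROLE my_app_jerry NOLOGIN IN ROLE my_app;

-- Activate tom: the application connects as my_app_tom.
ALTER ROLE my_app_tom WITH LOGIN PASSWORD 'old-secret';

-- Later, to rotate: activate jerry with a new password, point the
-- application at my_app_jerry, then deactivate the old identity once
-- nothing is connecting with it anymore.
ALTER ROLE my_app_jerry WITH LOGIN PASSWORD 'new-secret';
ALTER ROLE my_app_tom WITH NOLOGIN;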

The next post is "Create a BRIN Index at a Fraction of the Normal Size". This is from alibabacloud.com. Now they talk about B-trees, which are essentially the most common index, but also BRIN. They say how useful it can be for the Internet of Things or things using time series data. Because BRIN is essentially a block range index, it indexes things in ranges. So for use cases where you have constantly increasing data, like an incremental ID, like a serial data type, or if you have dates that are constantly increasing, a BRIN index could be beneficial in that type of scenario. Now, we've discussed this before in previous episodes of Scaling Postgres, how the BRIN index is super, super small because it's only storing ranges, not the exact data. 

So when you're doing a query, it targets a range and then has to look within that range to find the exact data. Because it only stores those ranges, it's super, super small, but it's not as performant as a B-tree index for looking up particular individual records. One thing that's interesting about this post is that it also looks at INSERT performance. Because far fewer index entries have to be written for a BRIN index, it's definitely more performant for INSERTs as well. Now, he goes through all the different testing scenarios here, but I want to jump down to the meat of the material in the graphs, where they talk about space. So, for example, as we've heard before, BRINs are super small. Look at this comparison: the table is over 4,000 MB and the B-tree index is 2,500 MB, whereas the BRIN, I think, is 4.5 KB.

So it's super small relative to the table size and the B-tree index. Then he also did range queries and exact queries. You'll see the cost of a full table scan is immensely large for either a range or an exact query. Using the B-tree index is the fastest, around 24 milliseconds for the range query and a fraction of a millisecond for the exact query. With a BRIN index, it's still vastly faster than a full table scan, but not as fast as a B-tree. Given the space savings, though, is that fast enough for your use case? Now, the next part, which I've seen discussed elsewhere but which makes total sense, is how INSERT performance compares. With no index, you get essentially 66,000 rows per second; with a BRIN index, it's 62,000, so an almost negligible impact on INSERTs. Whereas with a B-tree, you're at less than half of your INSERT performance. So if you're using a lot of time series data and things that need fast inserts, definitely check out BRIN indexes and check out this blog post to see how you could potentially incorporate them in your database system.
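As a minimal sketch, assuming a readings table with an ever-increasing created_at timestamp column (illustrative names, not the post's exact test schema):

-- BRIN stores one small summary entry per range of table blocks, so it stays
-- tiny and is cheap to maintain on INSERT, as long as the column's values
-- correlate with the physical order of the table (e.g. append-only time data).
CREATE INDEX readings_created_at_brin ON readings USING brin (created_at);

-- The equivalent B-tree is faster for point lookups, but far larger and more
-- expensive to maintain on every INSERT.
CREATE INDEX readings_created_at_btree ON readings USING btree (created_at);

-- Compare the on-disk sizes after loading data.
SELECT pg_size_pretty(pg_relation_size('readings_created_at_brin'));
SELECT pg_size_pretty(pg_relation_size('readings_created_at_btree'));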

The next post is "POSTGRESQL ROW LEVEL SECURITY, VIEWS AND A LOT OF MAGIC". This is from cybertec-postgresql.com. So this post basically goes over row-level security and what it is and how to set it up and some things to watch out for. So first they create a user, Bob, and Alice, and then they create a table with two different types of data. They allow Bob and Alice to access the schema and the tables so they can essentially query this table. They created a little function to be able to debug exactly what was happening when row-level security was turned on. Then they altered the table, specified the table name t_service, and enabled row-level security. Now, once that happens, no one can essentially query anything. So you need to set a policy. So it creates a bob_pol on this table to allow him to select when the service type was open source. 

Open source there is one of the column values. So when the service type is open source, Bob can select from it, and when the service type is closed source, Alice can select from it. So when you set the role to Bob and select from the service table, you only see the open source service types; when you set the role to Alice, you only see the closed source service types. Okay, that works great. But then what if we're using views? Here he sets the role to Postgres and creates two different views. The first view, v1, both users can select from, but for the second one, v2, the owner is set to Alice. So Alice is the owner, and he grants SELECT on v2 to Bob. Now, when you set the role to Bob and query the first view, you get both sets of data. The reason is that the v1 view is owned by the Postgres user who created it. So that's maybe not what you want.

Essentially, Bob can now see both sets of data because Postgres can see both sets of data, so Bob's policy isn't being applied. Still as Bob, when you look at the second view, you only see the closed source rows, and they say here, quote: "The reason is: v2 belongs to alice and therefore PostgreSQL will check alice's RLS policy". So it's showing what she can see, even though you're Bob, because Bob has SELECT permission on the view. So again, these are corner cases you need to be aware of in how views behave when row-level security is enabled. Then they go through another scenario of setting up row-level security that may have some unexpected behavior. Now, in my own application, I don't use row-level security; I tend to do security at the application layer. However, if you have a need to do it at the database layer, this is definitely a great post to check out, as it explains some of these corner cases, as they describe them, for row-level security in PostgreSQL.
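To make the view behavior above concrete, here is a rough sketch of the two views (again illustrative, not the article's exact SQL):

-- v1 is owned by the superuser who created it, so RLS is not applied when
-- bob reads through it: he sees every row.
SET ROLE postgres;
CREATE VIEW v1 AS SELECT * FROM t_service;
GRANT SELECT ON v1 TO bob, alice;

-- v2 is owned by alice, so reads through it are checked against alice's
-- policy, even when bob is the one querying.
CREATE VIEW v2 AS SELECT * FROM t_service;
ALTER VIEW v2 OWNER TO alice;
GRANT SELECT ON v2 TO bob;

SET ROLE bob;
SELECT * FROM v1;  -- all rows: the policy is bypassed via the superuser-owned view
SELECT * FROM v2;  -- only the closed source rows that alice's policy allows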

The next post is "Managing High Availability in PostgreSQL - Part III: Patroni". This is from scalegrid.io. We've covered the previous two posts. The first one on PostgreSQL Automatic Failover or PAF. The second part is Replication Manager. So this covers Partoni. Essentially another high-availability framework for using PostgreSQL. So it goes over how it works, how it handles the split-brain scenario, talks about pros versus cons, and then describes its whole testing methodology. It also discusses the best PostgreSQL High Availability framework and compares PAF Rep Manager and Patroni So. It has these different tests that they did and compares them. So if you're looking for a High Availability solution for PostgreSQL, definitely a blog post to check out.

The next post is "Developing on many versions of PostgreSQL at once". This is from 2ndquadrant.com. So this is talking about someone who uses many versions of PostgreSQL, I believe doing some development work and support work and how he juggles them. Now there are virtual environments which are akin to Ruby's RVM and Python's Virtual End. So there's a tool he mentions called pg_virtualenv and pgnv so these are tools that allow you to set up multiple environments and switch between different versions of PostgreSQL on the fly. He actually does some things a little bit different that he describes in this blog post. He talks about ccache to be able to nuke your cache when you need to, particularly if you're jumping between different versions. He talks about how he works with Git worktrees as well. So if you need to coordinate multiple versions of PostgreSQL that you're working with, definitely a blog post to check out.

The next post is "CLONE SCHEMA IN POSTGRES". This is from pateldenish.com. So basically there's no out-of-the-box way to clone a schema either, just the schema itself, the objects without data, or objects with data. So this blog post has a link to a tool that allows you to clone a schema and does schema only or schema with data as well. So if you want a tool to be able to clone schemas, maybe this is a blog post and a tool to check out.

The next post is "Percona Distribution for Postgres 11 (Beta) Is Now Available". This is from percona.com and it talks about how it sets it up. I don't believe this is a proprietary setup of PostgreSQL, but it's basically they've put together an open-source package, as it were, with version 11.5, the most recent version and complements it with a selection of extensions. So they say an additional extension supported by PostgreSQL Global Development Group, which looks like it's the contrib library. Pg_repack for being able to do something akin to a VACUUM FULL on tables while they're live. Pgaudit to be able to enhance the object audit logging/PostgreSQL logging. pgBackRest as a backup replacement or an alternative to pg_base_backup. Then Patroni, which we just saw a blog post on, is the high-availability solution for PostgreSQL. So essentially it packages these up together in a cohesive unit. So if this is interesting to you, definitely a blog post to check out.

The next post is "Porting a PostgreSQL Extension from Unix to Windows 10". This is from myrkraverk.com. So this is very specific to Windows and it's how they have ported an extension from a Unix system to Windows 10. So if you do extension development, and particularly want to get something working on Windows, there's a blog post to check out.

The last blog post is "Waiting for PostGIS 3: Parallelism in PostGIS". This is from crunchydata.com. Historically, PostGIS hasn't been able to take much advantage of parallel queries. They say very few queries would parallelize, and you had to force certain configurations to get them to use a parallel plan. But they say, quote, "With PostgreSQL 12 and PostGIS 3, parallel query plans will be generated and executed far more often because of changes to both pieces of software". PostgreSQL 12 includes a new API that extends the ability to modify query plans and add index clauses, and PostGIS 3 has taken advantage of it to enable more parallel plans to run. They have a little example of a particular function where they saw more parallel plans being used, dropping the runtime of a particular query. Although, of course, as you ramp up the number of cores, as with anything parallel, you do get diminishing returns. But this is a boon to people using PostGIS because you'll get more parallel query planning.
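As a quick way to check whether a PostGIS query is being parallelized on PostgreSQL 12 with PostGIS 3 (the table and column names here are hypothetical), you can look at the plan for Gather and parallel scan nodes:

-- An aggregate over a geometry function such as ST_Area should now be able
-- to use a parallel plan; look for "Gather" and "Parallel Seq Scan" in the
-- EXPLAIN output.
EXPLAIN (ANALYZE)
SELECT sum(ST_Area(geom))
FROM parcels;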
