
Citus Open Source, Time Series Performance, Subscripting Updates, target_session_attrs | Scaling Postgres 155


In this episode of Scaling Postgres, we discuss Citus 10 open source, time series performance in native Postgres, using subscripting for updates and new target_session_attrs.


Transcript

All right. I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "Citus 10: Columnar for Postgres, rebalancer, single-node, & more". This is from citusdata.com, and they have released version 10 of the open-source Citus extension, which has a lot of great features. The first feature is columnar storage. So basically it sets up a column store as opposed to a row store, which is the default. That enables very high compression of your data and speeds up analytical-type queries, like when you want to average or sum a single column; that's what column-based storage speeds up for you. You can now also shard on a single node. So basically, if you want to get prepared early for scaling out your database, you can now do it on a single node and then easily move to multiple nodes to scale out. The next feature now available in open source is the shard rebalancer.

So now you can rebalance shards across your nodes, for example when you add or remove nodes. The next feature is joins and foreign keys between local PostgreSQL tables and Citus tables. The fifth is functions to change the way tables are distributed, and more. So, looking at the first feature, columnar storage: you create a table using the columnar access method, once you have the Citus extension installed of course, and it uses zstd compression to achieve a 60-90% reduction in data size, which is huge. However, it does have a number of significant limitations. First, it doesn't support indexes, so it relies on the small on-disk size to scan the table very quickly when doing calculations. Second, UPDATEs and DELETEs are not supported, so it's an append-only table. And third, it's best to avoid single-row inserts.
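
As a rough sketch (the table and column names are my own, not from the post), creating a columnar table with Citus 10 looks like this:

```sql
-- Citus must be in shared_preload_libraries before the extension can be created.
CREATE EXTENSION IF NOT EXISTS citus;

-- Hypothetical append-only table stored column-wise with zstd compression.
CREATE TABLE events_archive (
    event_time timestamptz NOT NULL,
    device_id  int         NOT NULL,
    reading    real        NOT NULL
) USING columnar;

-- Load in batches; single-row inserts compress poorly, and UPDATE/DELETE
-- are not supported on columnar tables in Citus 10.
INSERT INTO events_archive
SELECT now() - make_interval(secs => i), i % 100, random()::real
FROM generate_series(1, 100000) AS i;
```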

Since the compression works best in batches, they recommend loading data into this kind of table in batches rather than row by row. One suggested approach is to use standard row storage for more recent data and then archive older data into columnar storage. The next thing they talked about is being able to start out with Citus on a single node so that you can scale out when you're ready. Again, as I mentioned, you basically add Citus to shared_preload_libraries, create the extension, create a table, and then create a distributed table based on a particular shard key. The next item is the shard rebalancer, which lets you rebalance your load across multiple shards and is now available in the open-source version. Joins and foreign keys now work between Citus tables and Postgres tables.
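
For reference, getting started with Citus on a single node looks roughly like this (names are illustrative; the exact steps are in the Citus docs):

```sql
-- postgresql.conf: shared_preload_libraries = 'citus'   (requires a restart)
CREATE EXTENSION IF NOT EXISTS citus;

-- A hypothetical table to shard by a distribution key.
CREATE TABLE clicks (
    click_id   bigserial,
    ad_id      bigint      NOT NULL,
    clicked_at timestamptz NOT NULL DEFAULT now()
);

-- Distribute by ad_id; on a single node this just creates local shards that can
-- later be spread across worker nodes, e.g. with SELECT rebalance_table_shards();
SELECT create_distributed_table('clicks', 'ad_id');
```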

They give an example where you apply Citus only to particular tables. For example, your main PostgreSQL database has publishers, advertisers, and campaigns tables, but a very large table like clicks is the one you would distribute with Citus, and the ads table that ties those together would become a reference table. The rest would stay stock Postgres tables, so you can get into Citus slowly. They've also added enhancements to alter how tables are distributed, and a lot of other new features. So this was particularly interesting because all of this has now been added to the open-source version of Citus. So if you wanted to try it out for free, you could just download the open-source version and give it a try.
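
As a rough sketch of that mixed setup (assuming an ads table with an ad_id primary key, which is my assumption rather than something spelled out in the post):

```sql
-- Replicate the smaller ads table to every node as a reference table.
SELECT create_reference_table('ads');

-- The distributed clicks table can then keep a foreign key to it, and local
-- tables such as publishers, advertisers, and campaigns can stay plain Postgres
-- tables while still joining with the Citus tables.
ALTER TABLE clicks
    ADD CONSTRAINT clicks_ad_id_fkey
    FOREIGN KEY (ad_id) REFERENCES ads (ad_id);
```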

The next piece of content is "Designing high-performance time series data tables on Amazon RDS for PostgreSQL". This is from aws.amazon.com and even though this post talks about Amazon RDS, you can do this with stock Postgres. What they are proposing here is basically trying to use stock PostgreSQL to do time series data. They have an example here where they are collecting almost append-only data and need to upload it into Postgres. So they have a reading table and a few different indexes on it. Apparently, this was probably a customer because this walks through them making particular transitions, and changing column types to make things more efficient. So for example, one of the first things they cover is data type considerations. 

So it's basically about choosing the smallest data type that fits the data. For example, with regard to the ID, is it best as an integer, a smallint, or a bigint? So they have a discussion about that. They also discussed whether double precision makes the most sense and actually decided to go with the smaller real data type. So basically, make the data as small as possible by choosing the most compact data type that still fits. Doing that reduced the data size by about 5%, and then they looked at the indexes. They did have a standard B-tree index on the time field, but looking at the use case, particularly with time series data, they said a BRIN makes more sense. A BRIN is a block range index, so it doesn't index each specific value like a B-tree does, but rather references ranges of blocks.
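
A minimal sketch of both ideas, with made-up column names rather than the exact schema from the post:

```sql
-- Use the smallest data types that still fit the values being stored.
CREATE TABLE reading (
    device_id integer     NOT NULL,  -- integer instead of bigint if ids fit in 4 bytes
    metric_id smallint    NOT NULL,  -- smallint if there are only a few thousand metrics
    read_at   timestamptz NOT NULL,
    value     real        NOT NULL   -- real instead of double precision if the precision allows
);

-- A BRIN index stores a summary per range of table pages instead of one entry per
-- row, which works well when read_at correlates with the physical insert order.
CREATE INDEX reading_read_at_brin
    ON reading USING brin (read_at) WITH (pages_per_range = 32);
```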

They chose a particular number of pages per range, and that dropped the size down significantly. The table and indexes went from 126 GB down to 101 GB, so about a 20% space savings by moving to BRIN indexes versus B-trees. Now, you have to be aware that you may lose some query performance; it depends on your use case. If you're looking for individual values, you want to stick with a B-tree, but a BRIN works best for ranges of values. Then they looked at partitioning, because they were doing a metric ingestion test where they did a parallel load using multiple threads into the database, I believe about half a billion records, and this is the performance they were seeing: it started off pretty well, then started dropping, and then got very irregular.

Now, what they said is that around the 302 mark is when checkpoints started kicking off, which generated WAL and full-page writes and caused the INSERT performance to struggle. Then later, it's basically running out of memory due to the data size increase: the data exceeded the available memory and had to go to disk. So they mitigated this problem with partitioned tables. They created around 360 partitions for all of this data, showed how they did it, and it resulted in a dramatic increase in performance: ingestion stayed consistent at about 1.6 million metrics per second over the duration of the test. Now, what's interesting is that they don't really specify why partitioning helped this much.
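
For anyone curious what the partitioned setup looks like in stock Postgres, here is a minimal declarative-partitioning sketch; the dates and names are made up, and the post created around 360 partitions rather than the two shown here:

```sql
-- Parent table partitioned by range on the timestamp column.
CREATE TABLE reading_part (
    device_id integer     NOT NULL,
    metric_id smallint    NOT NULL,
    read_at   timestamptz NOT NULL,
    value     real        NOT NULL
) PARTITION BY RANGE (read_at);

-- One partition per day; in practice you would script this or use something
-- like pg_partman to create partitions ahead of time.
CREATE TABLE reading_part_2021_03_01 PARTITION OF reading_part
    FOR VALUES FROM ('2021-03-01') TO ('2021-03-02');
CREATE TABLE reading_part_2021_03_02 PARTITION OF reading_part
    FOR VALUES FROM ('2021-03-02') TO ('2021-03-03');

-- Indexes defined on the parent are created on each partition automatically.
CREATE INDEX ON reading_part USING brin (read_at);
```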

After all, you're inserting the same amount of data, and presumably everything eventually has to get to disk, but they didn't explicitly say why partitioning was such a great improvement. Perhaps it has to do with the load being done in parallel: with a single large table and indexes there was a lot of random I/O touching many different pages, whereas it was much more efficient when broken up. Or maybe, like a previous episode that talked about the vicious circle of WAL writes, autovacuum was a component: because you have to autovacuum the whole table, maybe that was bogging things down, whereas when you have around 360 partitions, each one is a lot smaller and easier to vacuum.

But they didn't really specify why this worked. I would be interested to know how this looks with different numbers of partitions. So if you had more partitions, would you see any difference? If you had fewer partitions, would it start dropping down, and at what point? Then how is that related to loading the data in parallel? So they didn't specify how many threads they used, but does that impact this performance as well? So this is a significant result, but I would be interested to know exactly why this helped the performance so much. But if you want to learn more about how you can use Postgres for time series data without using an extension such as TimescaleDB, you can check out this blog post.

The next piece of content is "How many engineers does it take to make subscripting work?". This is from erthalion.info and basically, he's indicating that there's a patch that's been done that is probably going to be in Postgres 14 where you can actually use subscripting syntax to set keys in JSONB as well as query from it. So for me, this is huge. I love this developer's convenience. I don't know if it's me, but every time I need to update a JSONB column, I need to check the docs on how to do it. So this would be so much easier to work with compared to now. So kudos to everybody who was involved with this. Actually, this post is very long. So this is the TLDR, but it describes all the processes that went through getting this feature into Postgres. So there's a lot of backstory, different developer comments, and different things to be aware of when doing Postgres patches. So if you're interested in the backstory, you can definitely check out this post.

The next piece of content is "NEW TARTET_SESSION_ATTRS SETTINGS FOR HIGH AVAILABILITY AND SCALING IN POSTGRESQL V14". This is from cybertec-postgresql.com and they're talking about target_session_attrs. So basically when you connect up to psql using libpq you can specify more than one server and it will connect to one. If that's not available, it will connect to another, where you can also specify target_session_attrs whether that particular target is read, write, or not. Otherwise classified as any. So this enables you to say just target to the primary server or not kind of. But in Postgres v14 they've added a lot more different attrs that you can use. So now you have primary, you have standby, you have read-only, you have read-write, and then prefer-standby. 

So primary is very similar to read-write; the difference comes down to whether default_transaction_read_only is set or not. The same goes for read-only versus standby: they differ only by that parameter as well. Prefer-standby tries to connect to a standby, and if none is available, it just connects to whatever else is there. He also mentions that the libpq library is, of course, used by Postgres drivers in a lot of different languages; for example, they mention Python, Go, and Rust, and I know Ruby uses it as well. So you can use this syntax today, and it will get even better in version 14. If you're interested in that, you can check out this blog post.
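
As a quick illustration (the host names are placeholders), a multi-host libpq connection string with these values looks something like this:

```
# Before v14: only 'any' and 'read-write' are accepted.
psql "host=pg1,pg2,pg3 dbname=app target_session_attrs=read-write"

# v14 adds primary, standby, read-only, read-write, and prefer-standby.
psql "host=pg1,pg2,pg3 dbname=app target_session_attrs=prefer-standby"
```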

Next piece of content- "Starting with PG - where/how can I set configuration parameters?". This is from depesz.com and he had previously written about locating where the config files are, but this talks about where and all the different places you could make changes to them. So there is of course the base postgresql.conf file. However within that, at the bottom, you can include additional files. So you can include files or give an error or include only if the file exists so it won't give an error. It also, usually by default, includes a directory called conf.d that's personally where I stick all of my configurations here and I usually don't change the postgresql.conf file because it just makes it easier for upgrades and whatnot. 

The other place where you can find configuration is in the data directory: the postgresql.auto.conf file. You should not edit this file manually; it should only be altered with the ALTER SYSTEM command. So from a Postgres prompt, you can say ALTER SYSTEM SET a parameter to a particular value and it will change it there. Now, it won't immediately take effect, because you may need to do a reload or a restart. Then he goes into how you determine which parameters take effect with just a reload and which require a restart. He says you can query the pg_settings view in Postgres, and it will tell you for each setting whether it just requires a reload (such as archive_command) or requires a restart (such as archive_mode).
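
A quick sketch of both pieces; the parameters here are just examples:

```sql
-- ALTER SYSTEM writes to postgresql.auto.conf; never edit that file by hand.
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();  -- reload so sighup-level changes take effect

-- The context column shows what a change needs: 'sighup' means a reload is
-- enough, 'postmaster' means a full restart.
SELECT name, context
FROM pg_settings
WHERE name IN ('archive_command', 'archive_mode');
```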

The next thing he covers is that all the settings in the postgresql.conf file are global, and even the postgresql.auto.conf file is global. However, you can set certain parameters for particular users, specified per user. You can also set them per database, or per user per database, and he shows an example of how that hierarchy is established. The last thing he covers is that you can also set them per session: per connection to Postgres, you can change certain parameters for the session you're in. Lastly, he covers different ways to reset those values to their defaults. So if you're interested in learning more about configuration and the different places you can do it, definitely check out this blog post.
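
The hierarchy he describes looks roughly like this; the role and database names are made up:

```sql
-- Per role, per database, and per role-in-database overrides.
ALTER ROLE reporting_user SET work_mem = '256MB';
ALTER DATABASE analytics SET statement_timeout = '60s';
ALTER ROLE reporting_user IN DATABASE analytics SET search_path = reporting, public;

-- Per session: affects only the current connection.
SET work_mem = '512MB';
RESET work_mem;                              -- back to the non-session value
ALTER ROLE reporting_user RESET work_mem;    -- remove the per-role override
```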

The next piece of content- "Starting with PG - where are the logs?". This is also from depesz.com and he's talking about where you can find the logs in your Postgres system. So he considers different places where it can be logged, like in Windows. You can log it to the event log. In Linux you can do it to the syslog, you can also log to the standard error, or you can also do it using the logging collector or not, as well as to the csvlog. So if you want to learn more about logging in to Postgres, definitely check out this blog post.

The next piece of content is actually a YouTube channel and it's EDB's YouTube channel. In the last week, they've posted a lot of different webinars on Postgres. So if you're interested in video content definitely check out this YouTube channel.

The next piece of content is "Security and Containers in Cloud Native PostgreSQL". This is from enterprisedb.com and they're talking about their cloud-native Postgres offering. Basically, they are bundling Red Hat Linux in a container with Postgres all configured for you to be able to use in things like Kubernetes. They go through the security process that they're doing to deliver secured images via this means. Also how they've actually configured the container itself following the principle of least privilege during execution. Then covered some points about why Immutability is a benefit in that. In this scenario, you generally would not install patches on the server, you would just get a new container, install that, and do a failover process to a standby server to upgrade it, and that's the process you would go through for container upgrades. So if you're interested in that, you can check out this blog post.

The next piece of content is "POSTGRESQL: GETTING STARTED ON UBUNTU". This is from cybertec-postgresql.com. This is a very simple post talking about the best way to install Postgres on Ubuntu, especially if you're looking for a specific version and don't want the Postgres version bundled with your particular version of Ubuntu. So if you're interested in that, you can check out this post.

Next piece of content is "How PostgreSQL Handles Sub Transaction Visibility In Streaming Replication". This is from highgo.ca. So if you're interested in how Postgres works under the covers to accomplish this, definitely check out this post. 

Also from highgo.ca is "How to create a system information function in PostgreSQL". So again, if you want to get more detailed Postgres information, you can check out this blog post.

The next piece of content is "Pgpool-II's Cluster Modes". This is from b-peng.blogspot.com, and she discusses all of the different clustering modes that are available in Pgpool II and how they work and how to enable them. So if you're interested, you can check out this post.

The next piece of content is "Regression Analysis in PostgreSQL with Tensorflow: Part 1 - Getting Started". This is from enterprisedb.com, and this is the first post in a series that just gets Tensorflow set up with Postgres and Python 3 so that you can start using Tensorflow for machine learning. So if you're interested in that, you can check out this blog post.

The last piece of content, the PostgreSQL person of the week is David E. Wheeler. So if you're interested in learning more about David and his contributions to Postgres, definitely check out this blog post.
