Distributed Postgres: XL, XTM, MultiMaster
Stas Kelvich


Page 1: Distributed Postgres

Distributed Postgres: XL, XTM, MultiMaster

Stas Kelvich

Page 2: Distributed Postgres

- Started about a year ago.
- Konstantin Knizhnik, Constantin Pan, Stas Kelvich.

Cluster group in PgPro

Page 3: Distributed Postgres

Started by playing with Postgres-XC. 2ndQuadrant also had a project (now finished) to port XC to 9.5.

- Maintaining a fork is painful;
- How can we bring the functionality of XC into core?

Cluster group in PgPro

Page 4: Distributed Postgres

- Distributed transactions: nothing in-core;
- Distributed planner: fdw, pg_shard, Greenplum planner (?);
- HA/autofailover: can be built on top of logical decoding.

Distributed postgres

Page 5: Distributed Postgres

The goal is proper isolation between transactions that span multiple nodes. Today in Postgres, the start of a write transaction involves:

- Acquire an XID;
- Get the list of running transactions;
- Use that info in visibility checks.

Distributed transactions
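The visibility check in the last step can be sketched as follows. This is toy code in the spirit of XidInMVCCSnapshot, not the actual tqual.c logic; the struct layout and names are illustrative only.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Xid;

/* Toy snapshot: everything below xmin finished before the snapshot,
 * everything at or above xmax started after it, and xip[] lists the
 * transactions that were in progress when the snapshot was taken
 * (cf. GetSnapshotData). */
typedef struct {
    Xid xmin;
    Xid xmax;
    Xid xip[8];
    int xcnt;
} Snapshot;

/* Is xid still "in progress" from this snapshot's point of view,
 * i.e. must its effects be treated as invisible? */
static bool xid_in_snapshot(Xid xid, const Snapshot *snap)
{
    if (xid < snap->xmin)
        return false;           /* finished before the snapshot */
    if (xid >= snap->xmax)
        return true;            /* started after the snapshot */
    for (int i = 0; i < snap->xcnt; i++)
        if (snap->xip[i] == xid)
            return true;        /* was running at snapshot time */
    return false;
}
```

The point of listing these steps is that each of them consults purely node-local state, which is exactly what breaks once a transaction spans several nodes.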

Page 6: Distributed Postgres

transam/clog.c:    GetTransactionStatus, SetTransactionStatus
transam/varsup.c:  GetNewTransactionId
ipc/procarray.c:   TransactionIdIsInProgress, GetOldestXmin, GetSnapshotData
time/tqual.c:      XidInMVCCSnapshot

XTM API: vanilla

Page 7: Distributed Postgres

transam/clog.c:    GetTransactionStatus, SetTransactionStatus
transam/varsup.c:  GetNewTransactionId
ipc/procarray.c:   TransactionIdIsInProgress, GetOldestXmin, GetSnapshotData
time/tqual.c:      XidInMVCCSnapshot

        |
        v
TransactionManager

XTM API: after patch
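The idea of the patch is to route these calls through a table of function pointers that an extension can replace. A minimal sketch of that indirection is shown below; the struct and function names are illustrative, not the exact XTM API.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Hypothetical transaction-manager vtable in the spirit of XTM:
 * core code calls through these pointers instead of the built-in
 * implementations, so a loaded pg_dtm.so can swap them out. */
typedef struct {
    TransactionId (*GetNewTransactionId)(void);
    int  (*GetTransactionStatus)(TransactionId xid);
    void (*SetTransactionStatus)(TransactionId xid, int status);
} TransactionManager;

/* Default ("vanilla") implementation: a local counter and status map. */
static TransactionId next_xid = 100;
static int status_map[1024];

static TransactionId default_get_new_xid(void)  { return next_xid++; }
static int  default_get_status(TransactionId x) { return status_map[x % 1024]; }
static void default_set_status(TransactionId x, int s) { status_map[x % 1024] = s; }

static TransactionManager DefaultTM = {
    default_get_new_xid, default_get_status, default_set_status
};

/* Core holds a single pointer; an extension's _PG_init would
 * repoint it at its own implementation. */
static TransactionManager *TM = &DefaultTM;

TransactionId GetNewTransactionId(void) { return TM->GetNewTransactionId(); }
```

With the vanilla manager installed, behavior is unchanged; the indirection only matters once a distributed transaction manager is loaded, as on the next slide.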

Page 8: Distributed Postgres

transam/clog.c:    GetTransactionStatus, SetTransactionStatus
transam/varsup.c:  GetNewTransactionId
ipc/procarray.c:   TransactionIdIsInProgress, GetOldestXmin, GetSnapshotData
time/tqual.c:      XidInMVCCSnapshot

        |
        v
TransactionManager
        |
        v
pg_dtm.so

XTM API: after TM load

Page 9: Distributed Postgres

- Acquire XIDs centrally (DTMd, the arbiter);
- No local transactions possible;
- DTMd is a bottleneck.

XTM implementations: GTM or snapshot sharing

Page 10: Distributed Postgres

- Paper from the SAP HANA team;
- A central daemon is needed, but only for multi-node transactions;
- Snapshots -> Commit Sequence Numbers;
- DTMd is still a bottleneck.

XTM implementations: Incremental SI
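Replacing snapshots with Commit Sequence Numbers turns the visibility check into a single comparison instead of a scan of running transactions. A toy version of that idea (my illustration, not HANA's or Postgres's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t CSN;

/* Sentinel for a writer that has not committed yet. */
#define CSN_IN_PROGRESS UINT64_MAX

/* CSN-based visibility: a tuple's writer is visible to a reader
 * whose snapshot CSN was taken at or after the writer's commit. */
static bool csn_visible(CSN commit_csn, CSN snapshot_csn)
{
    if (commit_csn == CSN_IN_PROGRESS)
        return false;                   /* writer still running */
    return commit_csn <= snapshot_csn;
}
```

This is why CSNs are attractive for distributed snapshots: a node only needs to agree on one number per commit, not on a full list of in-progress transactions.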

Page 11: Distributed Postgres

- XIDs/CSNs are gathered from all nodes that participate in the transaction;
- No central service;
- Local transactions are possible;
- Communication can be reduced further by using time (Spanner, CockroachDB).

XTM implementations: Clock-SI or tsDTM
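Without a central service, the participants still have to agree on one commit timestamp. In the Clock-SI spirit this can be done by each node proposing a CSN from its local clock during prepare and the coordinator choosing the maximum; the sketch below illustrates that agreement step only and is not the pg_dtm implementation.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t CSN;

/* Each participant proposes a CSN drawn from its local clock during
 * prepare; the commit CSN is the maximum of all proposals, so the
 * chosen timestamp is not in the past of any participant's clock. */
static CSN choose_commit_csn(const CSN *proposals, int nnodes)
{
    CSN max = 0;
    for (int i = 0; i < nnodes; i++)
        if (proposals[i] > max)
            max = proposals[i];
    return max;
}
```

A node whose clock lags behind the chosen CSN simply waits until its clock passes it before making the commit visible, which is where clock quality (Spanner's TrueTime, CockroachDB's bounded skew) pays off.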

Page 12: Distributed Postgres

XTM implementations: tsDTM scalability

(benchmark chart from the slide not reproduced here)

Page 13: Distributed Postgres

The more nodes, the higher the probability of a failure somewhere in the system. Possible problems with nodes:

- A node stopped (and will not come back);
- A node was down for a short time (and we should bring it back into operation);
- Network partitions (avoid split-brain).

If we want to survive network partitions, then we can tolerate no more than ⌈N/2⌉ - 1 failures.

HA/autofailover
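The bound follows from majority quorums: a side of a partition may only keep operating if it still holds a strict majority of the N nodes, so at most ceil(N/2) - 1 nodes may be lost. In integer arithmetic that is (a small check I wrote against the slide's formula):

```c
#include <assert.h>

/* Majority quorum: the surviving side must hold a strict majority
 * of n nodes, so at most ceil(n/2) - 1 == (n - 1) / 2 may fail. */
static int max_failures(int n)
{
    return (n - 1) / 2;
}
```

Note that adding a fourth node to a three-node cluster buys no extra fault tolerance: both configurations survive only a single failure.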

Page 14: Distributed Postgres

Possible uses of such a system:

- Multimaster replication;
- Tables with meta-information in sharded databases;
- Sharding with redundancy.

HA/autofailover

Page 15: Distributed Postgres

By multimaster we mean a strongly coupled one that acts as a single database, with proper isolation and no merge conflicts. Ways to build it:

- Impose a global order on the XLOG (Postgres-R, MySQL Galera);
- Wrap each transaction as a distributed one, which allows parallelism while applying transactions.

Multimaster

Page 16: Distributed Postgres

Our implementation:

- Built on top of pg_logical;
- Makes use of tsDTM;
- A pool of workers for transaction replay;
- Raft-based storage for dealing with failures and for distributed deadlock detection.

Multimaster

Page 17: Distributed Postgres

Our implementation:

- Approximately half the speed of standalone Postgres for writes;
- Same speed for reads;
- Handles node autorecovery;
- Handles network partitions (being debugged right now);
- Can work as an extension (if the community accepts the XTM API in core).

Multimaster