PostgreSQL Distributed Database in Practice

Why Distributed Databases?

  There are many reasons a database may need to scale. 1) The volume of data that requests must access is too large (sheer volume alone is not a reason: data that is never accessed can simply be archived). 2) Server CPU, memory, network, or IO has hit a bottleneck and response times have degraded sharply. 3) For MPP, centralized databases are usually designed to make developers' lives smooth, keeping schema design and SQL as simple as possible, e.g. not requiring tables to declare that they are in fact related by primary and foreign keys, which discounts the effect of parallel execution; or parallel execution was absent at first and only strengthened over time, leaving it with inherent defects (partitioning is the same story). These three are usually the root causes.

Introduction to Citus

  A fairly impartial reference is an article by Citus China, "PG-XL,Citus,GreenPlum如何選擇" (how to choose among PG-XL, Citus, and Greenplum). Unlike other distributed databases such as TiDB, OceanBase, and TDSQL, each of which claims to be the best, Citus analyzes itself fairly objectively: it is well suited to distributed OLTP, not to large-scale ad-hoc analytics. In practice, many systems accumulate large volumes of data and run complex business processes, yet their TPS is not high. Moving such systems onto a distributed database makes physical model design critical; you cannot simply drop the existing schema on top. Decisions such as whether to include a data center or a history store matter a great deal; otherwise, once you have moved up you cannot come back down, and the maintenance cost becomes very high.

  Like other distributed architectures, Citus uses coordinator and worker nodes, i.e. master and worker. Calling this compute-storage separation is not accurate (nor is it accurate when most distributed databases such as OceanBase, GoldenDB, and TDSQL describe themselves that way). The systems that truly approach compute-storage separation are Oracle Exadata and TiDB. The coordinator and the workers alike are ordinary PostgreSQL instances.

  After a SQL statement is parsed, it is intercepted and rewritten during the analyze phase on the coordinator by the Citus extension, which then performs the fork-and-join of the statement across workers. Unlike Greenplum, XL, and XC, Citus is implemented via the extension mechanism (PG exposes a large number of hooks for extensions to use; see "postgresql核心開發必備之extension機制"). Thanks to this, you can regard Citus as essentially on par with Greenplum, XL, and XC in its support for core database behavior such as transactions and SQL syntax and semantics, rather than an implementation wedged into third-party middleware such as pgpool or pgbouncer. It therefore offers better consistency and stability guarantees.

  For distributed transactions, Citus likewise uses the 2PC protocol; its implementation is described at http://citusdb.cn/?p=661

  Note: the strength of the Citus architecture is that it treats distribution as a feature rather than a defining property. The author firmly believes this in every context: 95%+ of systems will never need a microservice architecture, nor a distributed database, because they will never reach that capacity. In theory an application can therefore run unchanged on both a single instance and a distributed cluster, something that databases designed from day one as distributed find very hard to offer.

  Because Citus has no separate GTM node, its coordinator cannot run active-active, so the coordinator easily becomes a single point of failure.

  To mitigate this, Citus provides two parameters, use_secondary_node and writable_standby_coordinator, to support write scale-out and read/write separation across data nodes, so that a standby CN can also serve queries and DML.
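  A minimal sketch, assuming the GUCs are spelled citus.use_secondary_nodes and citus.writable_standby_coordinator as in recent Citus releases (verify against your version):

-- in a session connected to the standby coordinator:
SET citus.writable_standby_coordinator TO on;  -- allow DML through the standby CN
SET citus.use_secondary_nodes TO 'always';     -- route reads to secondary worker nodes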

  Clearly, a reliable distributed database architecture is very complex; without a tightly integrated monitoring and management platform, the maintenance burden is easy to imagine.

Reference: https://blog.csdn.net/weixin_46199817/article/details/117223870

Installing Citus

  Source code and RPMs are available from https://github.com/citusdata/citus; most users can simply run yum install citus101_13-10.1.1.citus-1.el7.x86_64.

[zjh@lightdb1 usr]$ rpm -ql postgresql13-13.3-1PGDG.rhel7.x86_64
/usr/pgsql-13/bin/clusterdb
/usr/pgsql-13/bin/createdb
/usr/pgsql-13/bin/createuser
/usr/pgsql-13/bin/dropdb
/usr/pgsql-13/bin/dropuser
/usr/pgsql-13/bin/pg_basebackup
/usr/pgsql-13/bin/pg_config
/usr/pgsql-13/bin/pg_dump
/usr/pgsql-13/bin/pg_dumpall


[zjh@lightdb1 usr]$ rpm -qa | grep citus
citus_13-10.0.3-1.rhel7.x86_64
[zjh@lightdb1 usr]$ rpm -ql citus_13-10.0.3-1.rhel7.x86_64
/usr/pgsql-13/doc/extension/README-citus.md
/usr/pgsql-13/lib/citus.so
/usr/pgsql-13/share/extension/citus--10.0-1--10.0-2.sql
/usr/pgsql-13/share/extension/citus--10.0-2--10.0-3.sql
/usr/pgsql-13/share/extension/citus--8.0-1--8.0-2.sql
/usr/pgsql-13/share/extension/citus--8.0-1.sql
/usr/pgsql-13/share/extension/citus--8.0-10--8.0-11.sql
/usr/pgsql-13/share/extension/citus--8.0-11--8.0-12.sql
/usr/pgsql-13/share/extension/citus--8.0-12--8.0-13.sql
/usr/pgsql-13/share/extension/citus--8.0-13--8.1-1.sql
/usr/pgsql-13/share/extension/citus--8.0-2--8.0-3.sql
/usr/pgsql-13/share/extension/citus--8.0-3--8.0-4.sql
/usr/pgsql-13/share/extension/citus--8.0-4--8.0-5.sql
/usr/pgsql-13/share/extension/citus--8.0-5--8.0-6.sql
/usr/pgsql-13/share/extension/citus--8.0-6--8.0-7.sql
/usr/pgsql-13/share/extension/citus--8.0-7--8.0-8.sql
/usr/pgsql-13/share/extension/citus--8.0-8--8.0-9.sql
/usr/pgsql-13/share/extension/citus--8.0-9--8.0-10.sql
/usr/pgsql-13/share/extension/citus--8.1-1--8.2-1.sql
/usr/pgsql-13/share/extension/citus--8.2-1--8.2-2.sql
/usr/pgsql-13/share/extension/citus--8.2-2--8.2-3.sql
/usr/pgsql-13/share/extension/citus--8.2-3--8.2-4.sql
/usr/pgsql-13/share/extension/citus--8.2-4--8.3-1.sql
/usr/pgsql-13/share/extension/citus--8.3-1--9.0-1.sql
/usr/pgsql-13/share/extension/citus--9.0-1--9.0-2.sql
/usr/pgsql-13/share/extension/citus--9.0-2--9.1-1.sql
/usr/pgsql-13/share/extension/citus--9.1-1--9.2-1.sql
/usr/pgsql-13/share/extension/citus--9.2-1--9.2-2.sql
/usr/pgsql-13/share/extension/citus--9.2-2--9.2-4.sql
/usr/pgsql-13/share/extension/citus--9.2-4--9.3-2.sql
/usr/pgsql-13/share/extension/citus--9.3-1--9.2-4.sql
/usr/pgsql-13/share/extension/citus--9.3-2--9.4-1.sql
/usr/pgsql-13/share/extension/citus--9.4-1--9.5-1.sql
/usr/pgsql-13/share/extension/citus--9.5-1--10.0-1.sql
/usr/pgsql-13/share/extension/citus.control
/usr/share/doc/citus_13-10.0.3
/usr/share/doc/citus_13-10.0.3/CHANGELOG.md
/usr/share/licenses/citus_13-10.0.3
/usr/share/licenses/citus_13-10.0.3/LICENSE

  Then initialize the PostgreSQL instances normally with initdb: one CN and two DNs, as shown below:

[zjh@lightdb1 pgsql-13]$ ll
total 24
drwxr-xr-x  2 zjh zjh 4096 Jun  1 17:43 bin
drwx------ 21 zjh zjh 4096 Aug 29 00:00 coordinator_1
drwxr-xr-x  3 zjh zjh   23 Jun  1 17:43 doc
drwxr-xr-x  3 zjh zjh 4096 Jun 19 14:58 lib
drwxr-xr-x  7 zjh zjh 4096 Jun  1 17:43 share
drwx------ 21 zjh zjh 4096 Aug 29 00:00 worker_1_13588
drwx------ 21 zjh zjh 4096 Aug 29 00:00 worker_2_23588

  Install the citus extension:

-- configure on both the CN and the DNs
shared_preload_libraries='citus'  -- citus must come first in the list
CREATE EXTENSION citus;  -- installing it as the postgres user is sufficient
SELECT * from citus_add_node('10.0.0.1', 13588);
SELECT * from citus_add_node('10.0.0.1', 23588);

  List the active DNs:

postgres=# SELECT * FROM citus_get_active_worker_nodes();
  node_name   | node_port 
--------------+-----------
 10.0.0.1 |     23588
 10.0.0.1 |     13588
(2 rows)

Concepts

  In Citus, shards and nodes are not in a one-to-one relationship. This differs from Greenplum and is closer to the design of NoSQL stores such as Couchbase; to some degree it also removes the need for further partitioning once Citus is in use (at once an advantage and a disadvantage: the result of a trade-off).

Citus Table Types

  Citus has three table types: 1) distributed (sharded) tables: n shards per DN, with a configurable shard count; typically order and customer tables; 2) broadcast (reference) tables: one full copy on every DN, none on the CN; typically dictionary, product, rate, organization, and permission tables; 3) CN-local tables: exist only on the CN, e.g. system parameter and statistics tables (these may also be broadcast, depending on the case). Local tables are generally not joined with broadcast or distributed tables. A table created on the CN is local by default, and a distributed table can be converted back to a local one with SELECT undistribute_table('github_events'); the data is migrated back first, which also makes this one way to scale in.
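  A minimal sketch of the three types (table names are hypothetical):

CREATE TABLE app_config (k text PRIMARY KEY, v text);          -- stays CN-local by default
CREATE TABLE fee_rates (code text PRIMARY KEY, rate numeric);
SELECT create_reference_table('fee_rates');                    -- broadcast: one full copy per worker
CREATE TABLE orders (customer_id bigint, order_id bigint, amount numeric);
SELECT create_distributed_table('orders', 'customer_id');      -- sharded by customer_id across workers
SELECT undistribute_table('orders');                           -- migrate the data back to the CN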

  Joins between broadcast and distributed tables, and among broadcast tables, are very common.

  Often several business domains coexist in one database: inventory and customers, operation logs and orders, staff accounts alongside menus, functions, and customers, queried both by menu and by staff member. Citus therefore supports grouping related tables (co-location), so that when it builds a distributed plan it knows which tables are related and which are not. For example:

  SELECT create_distributed_table('event', 'tenant_id');
  SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');

  The prerequisite for co-location is that both tables use the same column as their distribution column. Grouping allows SQL optimization to go a step further.

  Sooner or later, though, you will find the inventory table being joined to the customer table through orders, with inventory sharded by product and customers sharded by customer id. What happens then? A sketch of that case follows.
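  A minimal sketch of that situation (table and column names are hypothetical):

CREATE TABLE inventory (product_id bigint, stock int);
SELECT create_distributed_table('inventory', 'product_id');    -- sharded by product
CREATE TABLE customers (customer_id bigint, name text);
SELECT create_distributed_table('customers', 'customer_id');   -- sharded by customer
-- a join between the two (through orders) cannot be pushed down shard-to-shard;
-- Citus will only attempt it as a repartition join:
SET citus.enable_repartition_joins = on;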

  Unlike Greenplum, which supports a DISTRIBUTED BY clause, Citus is implemented as an extension and does not extend PG's grammar, so it uses functions to declare a table distributed.

CREATE TABLE companies (
    id bigserial PRIMARY KEY,
    name text NOT NULL,
    image_url text,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL
);

SELECT create_distributed_table('companies', 'id'); -- companies becomes a distributed table, sharded on id

  Note that the number of Citus shards does not map one-to-one to the number of workers; this differs from GP but resembles what TiDB and OceanBase do today. The mapping can be inspected as shown below.
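  A hedged way to inspect the shard-to-worker mapping (pg_dist_shard and pg_dist_shard_placement are Citus metadata tables/views):

SELECT shardid, nodename, nodeport
FROM pg_dist_shard
JOIN pg_dist_shard_placement USING (shardid)
WHERE logicalrelid = 'companies'::regclass;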

  To create a broadcast table, use the create_reference_table function:

SELECT create_reference_table('geo_ips');  -- replicated to every worker node, not to the CN

  Citus supports most DDL statements and takes care of invoking them across all workers. For instance:
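  An assumed illustration, reusing the companies table from above; each statement is issued once on the CN and Citus executes it on every shard:

CREATE INDEX companies_name_idx ON companies (name);
ALTER TABLE companies ADD COLUMN country text;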

Custom Distribution Algorithm, Replica Count, and Shard Count
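  A hedged sketch of the knobs this heading refers to (GUC and function names per the Citus docs; values illustrative, verify for your version):

SET citus.shard_count = 64;                 -- shards per newly distributed table (default 32)
SET citus.shard_replication_factor = 2;     -- replicas per shard (statement-based replication)
CREATE TABLE t_events (tenant_id bigint, payload jsonb);
SELECT create_distributed_table('t_events', 'tenant_id', 'hash');  -- 'hash' or 'append' distribution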

Citus Function Types

  Whether users care to admit it or not, for the same functionality stored procedures and functions are simply more efficient than the application shipping SQL over the wire, so Citus supports the notion of distributed functions. A sketch follows.
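  A hedged sketch (create_distributed_function has existed since Citus 9.0; process_order and its co-location with the orders table from the earlier sketch are illustrative):

CREATE OR REPLACE FUNCTION process_order(p_customer_id bigint)
RETURNS bigint LANGUAGE sql AS $$ SELECT p_customer_id $$;
-- ship the function to all workers and route each call to the worker
-- holding the shard that matches its first argument:
SELECT create_distributed_function('process_order(bigint)',
                                   distribution_arg_name := '$1',
                                   colocate_with := 'orders');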

Adding Nodes

  A newly added node is not used by default: call rebalance_table_shards so that Citus migrates data onto it; only then does it begin serving requests.

  SELECT rebalance_table_shards('companies');
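  A hedged end-to-end sketch of the workflow (host and port are illustrative):

SELECT citus_add_node('10.0.0.2', 33588);       -- register the new worker
SELECT rebalance_table_shards('companies');     -- migrate a share of the shards onto it
SELECT * FROM citus_get_active_worker_nodes();  -- verify the node now serves traffic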

Execution Plan Analysis

explain (analyze, verbose, buffers)
select count(*) as low_stock
from (
    select s_w_id, s_i_id, s_quantity
    from bmsql_stock
    where s_w_id = 975
      and s_quantity < 12
      and s_i_id in (
          select ol_i_id
          from bmsql_district
          join bmsql_order_line on ol_w_id = d_w_id
              and ol_d_id = d_id
              and ol_o_id >= d_next_o_id - 20
              and ol_o_id < d_next_o_id
          where d_w_id = 975
            and d_id = 9
      )
) as L;

QUERY PLAN                                                                                                                                                                                                                                                     |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=0 width=0) (actual time=9.781..9.782 rows=1 loops=1)                                                                                                                                                       |
  Output: remote_scan.low_stock                                                                                                                                                                                                                                |
  Task Count: 1                                                                                                                                                                                                                                                |
  Tuple data received from nodes: 1 bytes                                                                                                                                                                                                                      |
  Tasks Shown: All                                                                                                                                                                                                                                             |
  ->  Task                                                                                                                                                                                                                                                     |
        Query: SELECT count(*) AS low_stock FROM (SELECT bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity FROM public.bmsql_stock_103384 bmsql_stock WHERE ((bmsql_stock.s_w_id OPERATOR(pg_catalog.=) 975) AND (bmsql_stock.s_quantity OPERATOR(|
        Tuple data received from node: 1 bytes                                                                                                                                                                                                                 |
        Node: host=127.0.0.1 port=13588 dbname=postgres                                                                                                                                                                                                        |
        ->  Aggregate  (cost=25597.32..25597.33 rows=1 width=8) (actual time=1.276..1.277 rows=1 loops=1)                                                                                                                                                      |
              Output: count(*)                                                                                                                                                                                                                                 |
              Buffers: shared hit=810                                                                                                                                                                                                                          |
              ->  Nested Loop  (cost=7612.59..25597.14 rows=73 width=0) (actual time=0.389..1.272 rows=4 loops=1)                                                                                                                                              |
                    Inner Unique: true                                                                                                                                                                                                                         |
                    Buffers: shared hit=810                                                                                                                                                                                                                    |
                    ->  HashAggregate  (cost=7612.16..7646.24 rows=3408 width=4) (actual time=0.163..0.206 rows=186 loops=1)                                                                                                                                   |
                          Output: bmsql_order_line.ol_i_id                                                                                                                                                                                                     |
                          Group Key: bmsql_order_line.ol_i_id                                                                                                                                                                                                  |
                          Batches: 1  Memory Usage: 129kB                                                                                                                                                                                                      |
                          Buffers: shared hit=42                                                                                                                                                                                                               |
                          ->  Nested Loop  (cost=0.71..7603.64 rows=3408 width=4) (actual time=0.055..0.131 rows=189 loops=1)                                                                                                                                  |
                                Output: bmsql_order_line.ol_i_id                                                                                                                                                                                               |
                                Buffers: shared hit=42                                                                                                                                                                                                         |
                                ->  Index Scan using bmsql_district_pkey_103191 on public.bmsql_district_103191 bmsql_district  (cost=0.27..8.30 rows=1 width=12) (actual time=0.014..0.014 rows=1 loops=1)                                                    |
                                      Output: bmsql_district.d_w_id, bmsql_district.d_id, bmsql_district.d_ytd, bmsql_district.d_tax, bmsql_district.d_next_o_id, bmsql_district.d_name, bmsql_district.d_street_1, bmsql_district.d_street_2, bmsql_district.d|
                                      Index Cond: ((bmsql_district.d_w_id = 975) AND (bmsql_district.d_id = 9))                                                                                                                                                |
                                      Buffers: shared hit=3                                                                                                                                                                                                    |
                                ->  Index Scan using bmsql_order_line_pkey_103351 on public.bmsql_order_line_103351 bmsql_order_line  (cost=0.44..7561.26 rows=3408 width=16) (actual time=0.022..0.081 rows=189 loops=1)                                      |
                                      Output: bmsql_order_line.ol_w_id, bmsql_order_line.ol_d_id, bmsql_order_line.ol_o_id, bmsql_order_line.ol_number, bmsql_order_line.ol_i_id, bmsql_order_line.ol_delivery_d, bmsql_order_line.ol_amount, bmsql_order_line.|
                                      Index Cond: ((bmsql_order_line.ol_w_id = 975) AND (bmsql_order_line.ol_d_id = 9) AND (bmsql_order_line.ol_o_id >= (bmsql_district.d_next_o_id - 20)) AND (bmsql_order_line.ol_o_id < bmsql_district.d_next_o_id))        |
                                      Buffers: shared hit=39                                                                                                                                                                                                   |
                    ->  Index Scan using bmsql_stock_pkey_103384 on public.bmsql_stock_103384 bmsql_stock  (cost=0.43..5.27 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=186)                                                                        |
                          Output: bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity, bmsql_stock.s_ytd, bmsql_stock.s_order_cnt, bmsql_stock.s_remote_cnt, bmsql_stock.s_data, bmsql_stock.s_dist_01, bmsql_stock.s_dist_02, bmsql_stock.s_dist_03|
                          Index Cond: ((bmsql_stock.s_w_id = 975) AND (bmsql_stock.s_i_id = bmsql_order_line.ol_i_id))                                                                                                                                         |
                          Filter: (bmsql_stock.s_quantity < 12)                                                                                                                                                                                                |
                          Rows Removed by Filter: 1                                                                                                                                                                                                            |
                          Buffers: shared hit=768                                                                                                                                                                                                              |
            Planning Time: 0.755 ms                                                                                                                                                                                                                            |
            Execution Time: 1.498 ms                                                                                                                                                                                                                           |
Planning:                                                                                                                                                                                                                                                      |
  Buffers: shared hit=3                                                                                                                                                                                                                                        |
Planning Time: 0.324 ms                                                                                                                                                                                                                                        |
Execution Time: 9.796 ms                                                                                                                                                                                                                                       |

  For ordinary SQL like this, the distortion is not too severe (about 9.8 ms end-to-end on the coordinator versus 1.5 ms on the worker).

High Availability

The CN as a Bottleneck

Bypass-CN Mode
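  The heading presumably refers to letting clients run distributed queries through workers rather than through the CN. A hedged sketch of one approach, Citus MX-style metadata syncing (start_metadata_sync_to_node is a real Citus UDF, but verify this workflow for your version):

SELECT start_metadata_sync_to_node('10.0.0.1', 13588);  -- copy distributed metadata to this worker
-- once a worker holds the metadata, applications can connect to it directly
-- and run distributed queries without passing through the CN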

TPC-C Testing with BenchmarkSQL

  Because all TPC-C tables are co-located on warehouse_id, running TPC-C poses no problem. Citus's statement rewriting, however, is frankly a bit clumsy. For example:

2021-10-07 21:21:47.037945T [239675] LOG:  duration: 97782.322 ms  execute <unnamed>: SELECT count(*) AS low_stock FROM (SELECT bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity FROM public.bmsql_stock_103379 bmsql_stock WHERE ((bmsql_stock.s_w_id OPERATOR(pg_catalog.=) $1) AND (bmsql_stock.s_quantity OPERATOR(pg_catalog.<) $2) AND (bmsql_stock.s_i_id OPERATOR(pg_catalog.=) ANY (SELECT bmsql_order_line.ol_i_id FROM (public.bmsql_district_103186 bmsql_district JOIN public.bmsql_order_line_103346 bmsql_order_line ON (((bmsql_order_line.ol_w_id OPERATOR(pg_catalog.=) bmsql_district.d_w_id) AND (bmsql_order_line.ol_d_id OPERATOR(pg_catalog.=) bmsql_district.d_id) AND (bmsql_order_line.ol_o_id OPERATOR(pg_catalog.>=) (bmsql_district.d_next_o_id OPERATOR(pg_catalog.-) 20)) AND (bmsql_order_line.ol_o_id OPERATOR(pg_catalog.<) bmsql_district.d_next_o_id)))) WHERE ((bmsql_district.d_w_id OPERATOR(pg_catalog.=) $3) AND (bmsql_district.d_id OPERATOR(pg_catalog.=) $4)))))) l
2021-10-07 21:21:47.037945T [239675] DETAIL:  parameters: $1 = '974', $2 = '13', $3 = '974', $4 = '10'

  Moreover, once the query reaches the worker nodes the execution plans are far from ideal: some select count(1) statements take tens of seconds where a single instance needed only tens of milliseconds, and tpmC dropped from 200,000 to 60,000.

Management Interfaces

  Beyond standard table creation, a distributed database should at least support:

  Explicit broadcast interfaces: to every primary worker node; to every primary and standby worker node; to every primary shard; to every primary and standby shard.

  Explicit unicast interfaces: to any one worker node; to any one shard.

  See "Manual Query Propagation" (§14.6 of the Citus documentation).
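  A hedged sketch of the UDFs that section documents (these names exist in Citus; %s is substituted with the concrete shard name):

SELECT * FROM run_command_on_workers($$ SELECT version() $$);                         -- every worker
SELECT * FROM run_command_on_shards('companies', $$ SELECT count(*) FROM %s $$);      -- every shard
SELECT * FROM run_command_on_placements('companies', $$ SELECT count(*) FROM %s $$);  -- every placement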

TPC-H Testing

  Citus's TPC-H support is weak; more precisely, its support for complex joins is weak. Almost any join whose join columns do not include the distribution key, or whose tables are not co-located, is unsupported. For example:

Vuser 1:Query Failed : select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = o_orderkey and o_custkey = c_custkey and c_nationkey = n1.n_nationkey and n1.n_regionkey = r_regionkey and r_name = 'AFRICA' and s_nationkey = n2.n_nationkey and o_orderdate between date '1995-01-01' and date '1996-12-31' and p_type = 'STANDARD POLISHED STEEL') all_nations group by o_year order by o_year : ERROR: complex joins are only supported when all distributed tables are co-located and joined on their distribution columns
Vuser 1:Query Failed : select cntrycode, count(*) as numcust, sum(c_acctbal) as totacctbal from ( select substr(c_phone, 1, 2) as cntrycode, c_acctbal from customer where substr(c_phone, 1, 2) in ('23', '32', '17', '18', '16', '20', '25') and c_acctbal > ( select avg(c_acctbal) from customer where c_acctbal > 0.00 and substr(c_phone, 1, 2) in ('23', '32', '17', '18', '16', '20', '25')) and not exists ( select * from orders where o_custkey = c_custkey)) custsale group by cntrycode order by cntrycode : ERROR: direct joins between distributed and local tables are not supported

  Because Citus is packaged as an extension, it was never destined to be a distribution-first MPP the way native GP is. Even after enabling citus.enable_repartition_joins, 10 of the queries still do not run.

Citus Caveats

postgres=# create table t_batch(id int primary key generated always as identity,d1 bigint,d2 bigint,d3 bigint);
CREATE TABLE
postgres=# SELECT create_distributed_table('t_batch','id');
ERROR:  cannot distribute relation: t_batch
DETAIL:  Distributed relations must not use GENERATED ... AS IDENTITY.

Yet bigserial, surprisingly, is supported:

postgres=# create table t_batch(id bigserial primary key,d1 bigint,d2 bigint,d3 bigint);
CREATE TABLE
postgres=# SELECT create_distributed_table('t_batch','id');                             
 create_distributed_table 
--------------------------
 
(1 row)

Sequences, and sequences as column defaults, are supported:

postgres=# alter table bmsql_history 
postgres-#     alter column hist_id set default nextval('bmsql_hist_id_seq');
ALTER TABLE
postgres=# alter table bmsql_history add primary key (hist_id);   -- constraints must be given a name
ERROR:  cannot create constraint without a name on a distributed table
alter table bmsql_history add constraint bmsql_history_pkey primary key (hist_id);
ERROR: cannot create constraint on "bmsql_history"
  Detail: Distributed relations cannot have UNIQUE, EXCLUDE, or PRIMARY KEY constraints that do not include the partition column (with an equality operator if EXCLUDE).
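-- A hedged aside: a named primary key that includes the distribution column is accepted
-- (assuming bmsql_history is distributed on h_w_id, following TPC-C's warehouse co-location):
alter table bmsql_history add constraint bmsql_history_pkey primary key (h_w_id, hist_id);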
postgres=# select pg_size_pretty(citus_relation_size('search_doc_new_ic'));
 pg_size_pretty 
----------------
 10045 MB
(1 row)

Time: 1.367 ms
postgres=# select pg_size_pretty(citus_table_size('search_doc_new_ic'));   -- the difference should not be this large
 pg_size_pretty 
----------------
 216 GB
(1 row)

Time: 14.957 ms
postgres=# select pg_size_pretty(citus_total_relation_size('search_doc_new_ic'));
 pg_size_pretty 
----------------
 243 GB
(1 row)

Primary and Foreign Key Restrictions

tpch=# SELECT create_distributed_table('orders', 'o_orderkey');
NOTICE:  Copying data from local table...
NOTICE:  copying the data has completed
DETAIL:  The local data in the table is no longer visible, but is still on disk.
HINT:  To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.orders$$)
ERROR:  cannot create foreign key constraint since relations are not colocated or not referencing a reference table
DETAIL:  A distributed table can only have foreign keys if it is referencing another colocated hash distributed table or a reference table
tpch=# \dS+ orders
                                                 Table "public.orders"
     Column      |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description 
-----------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 o_orderdate     | timestamp without time zone |           |          |         | plain    |              | 
 o_orderkey      | numeric                     |           | not null |         | main     |              | 
 o_custkey       | numeric                     |           | not null |         | main     |              | 
 o_orderpriority | character(15)               |           |          |         | extended |              | 
 o_shippriority  | numeric                     |           |          |         | main     |              | 
 o_clerk         | character(15)               |           |          |         | extended |              | 
 o_orderstatus   | character(1)                |           |          |         | extended |              | 
 o_totalprice    | numeric                     |           |          |         | main     |              | 
 o_comment       | character varying(79)       |           |          |         | extended |              | 
Indexes:
    "orders_pk" PRIMARY KEY, btree (o_orderkey)
    "order_customer_fkidx" btree (o_custkey)
Foreign-key constraints:
    "order_customer_fk" FOREIGN KEY (o_custkey) REFERENCES customer(c_custkey)
Referenced by:
    TABLE "lineitem" CONSTRAINT "lineitem_order_fk" FOREIGN KEY (l_orderkey) REFERENCES orders(o_orderkey)
Access method: heap


NOTICE:  removing table public.lineitem from metadata as it is not connected to any reference tables via foreign keys

tpch=# SELECT create_distributed_table('part', 'p_partkey');
NOTICE:  Copying data from local table...
NOTICE:  copying the data has completed
DETAIL:  The local data in the table is no longer visible, but is still on disk.
HINT:  To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.part$$)
ERROR:  cannot create foreign key constraint since foreign keys from reference tables and local tables to distributed tables are not supported
DETAIL:  Reference tables and local tables can only have foreign keys to reference tables and local tables