# Multi-Node Cluster Example

Learn how to create and manage a 3-node dqlite cluster with automatic failover and data replication.
## Overview
This example demonstrates:
- Creating multiple dqlite nodes
- Forming a cluster
- Leader election
- Data replication across nodes
- Failover behavior
**Perfect for:** Understanding distributed consensus and high availability with dqlitepy.
## Prerequisites

- Python 3.12 or higher
- uv package manager installed
- dqlitepy installed in your project
- Understanding of basic dqlitepy concepts (see Simple Node)
## Quick Start

Run the example with one command:

```bash
cd examples/multi_node_cluster
./quickstart.sh
```
The script will:
- Install dqlitepy from the repository
- Create a 3-node cluster
- Demonstrate data replication
- Show failover behavior
## Manual Installation

To run manually:

```bash
cd examples/multi_node_cluster
uv sync
uv run python -m multi_node_cluster_example.main
```
## Cluster Architecture
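A rough sketch of the topology this example builds (the leader is whichever node wins the election; node 1 is shown as leader to match the walkthrough):

```text
              writes
   client ───────────▶ Node 1 (leader)    127.0.0.1:9001
                          │
                          │ Raft log replication
              ┌───────────┴───────────┐
              ▼                       ▼
   Node 2 (follower)        Node 3 (follower)
   127.0.0.1:9002           127.0.0.1:9003
```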
## Code Walkthrough

### Creating Multiple Nodes
```python
import tempfile

from dqlitepy import Node

# Create data directories for each node
data_dirs = [
    tempfile.mkdtemp(prefix=f"dqlite_node{i}_")
    for i in range(1, 4)
]

# Initialize three nodes
nodes = []
for i in range(1, 4):
    node = Node(
        node_id=i,
        address=f"127.0.0.1:900{i}",
        data_dir=data_dirs[i - 1]
    )
    node.set_bind_address(f"127.0.0.1:900{i}")
    nodes.append(node)
```

**Key Points:**
- Each node needs a unique ID and port
- All nodes need separate data directories (see the cleanup sketch below)
- Bind addresses must be set before clustering
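The walkthrough creates its data directories with `tempfile.mkdtemp`, which never deletes them for you. A minimal cleanup sketch, to run after every node has been closed (not part of the original example):

```python
import shutil

# Remove each node's data directory; safe to call even if a
# directory is already gone thanks to ignore_errors=True.
for data_dir in data_dirs:
    shutil.rmtree(data_dir, ignore_errors=True)
```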
### Forming the Cluster
```python
# Define the cluster configuration
cluster_config = [
    {"id": 1, "address": "127.0.0.1:9001"},
    {"id": 2, "address": "127.0.0.1:9002"},
    {"id": 3, "address": "127.0.0.1:9003"}
]

# Set the same cluster config on all nodes
for node in nodes:
    node.set_cluster(cluster_config)
```

**Key Points:**
- All nodes must have the same cluster configuration
- The cluster config lists all members
- Raft consensus ensures one leader is elected; election takes a moment, so the first write may need retries (see the sketch below)
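Election is not instantaneous, so the very first write can fail. Rather than sleeping a fixed amount, you can retry until a write commits. This is a hedged sketch: it assumes dqlitepy raises an exception while no leader is available, and `wait_for_leader` and the `_probe` table are hypothetical names, not part of the library:

```python
import time

def wait_for_leader(node, retries=10, delay=0.5):
    """Retry a trivial write until the cluster can commit it."""
    for _ in range(retries):
        try:
            conn = node.open("cluster.db")
            # Even a no-op DDL statement has to go through Raft,
            # so committing it proves a leader exists.
            conn.cursor().execute(
                "CREATE TABLE IF NOT EXISTS _probe (id INTEGER PRIMARY KEY)"
            )
            conn.commit()
            conn.close()
            return
        except Exception:  # assumed: raised while no leader exists
            time.sleep(delay)
    raise RuntimeError("no leader elected in time")

wait_for_leader(nodes[0])
```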
### Writing Data to the Leader
```python
# Connect to node 1 (will be elected leader)
conn = nodes[0].open("cluster.db")
cursor = conn.cursor()

# Create a table
cursor.execute("""
    CREATE TABLE IF NOT EXISTS products (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        price REAL
    )
""")

# Insert data
products = [
    (1, "Laptop", 999.99),
    (2, "Mouse", 29.99),
    (3, "Keyboard", 79.99)
]
cursor.executemany(
    "INSERT INTO products (id, name, price) VALUES (?, ?, ?)",
    products
)
conn.commit()
```

**Key Points:**
- Write operations must go through the leader (one way to locate it is sketched below)
- Data is automatically replicated to followers
- Followers maintain read-only copies
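If you do not know which node currently leads, one simple client-side pattern is to try each node in turn. This sketch assumes a write on a non-leader (or stopped) node raises an exception; `write_to_leader` is a hypothetical helper, and the inserted row is purely illustrative:

```python
def write_to_leader(nodes, sql, params=()):
    """Try each node until one commits the write."""
    for node in nodes:
        try:
            conn = node.open("cluster.db")
            conn.cursor().execute(sql, params)
            conn.commit()
            conn.close()
            return
        except Exception:  # assumed: non-leader/stopped nodes raise
            continue
    raise RuntimeError("no node accepted the write")

write_to_leader(
    nodes,
    "INSERT INTO products (id, name, price) VALUES (?, ?, ?)",
    (5, "Headset", 49.99),
)
```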
### Reading from Followers
```python
import time

# Wait for replication
time.sleep(1)

# Read from each node
for i, node in enumerate(nodes, 1):
    conn = node.open("cluster.db")
    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM products")
    count = cursor.fetchone()[0]
    print(f"Node {i} has {count} products")
    cursor.close()
    conn.close()
```

**Key Points:**
- Allow time for replication to complete (or poll, as sketched below)
- Each node can serve read queries
- Data is eventually consistent across all nodes
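A fixed sleep works for a demo but is fragile under load. A sturdier option is to poll until a node reports the row count you expect; `wait_for_replication` is a hypothetical helper built only from the calls shown above:

```python
import time

def wait_for_replication(node, expected_count, timeout=5.0):
    """Poll a node until the products table reaches the expected size."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        conn = node.open("cluster.db")
        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM products")
        count = cursor.fetchone()[0]
        conn.close()
        if count >= expected_count:
            return
        time.sleep(0.1)
    raise TimeoutError("replication did not catch up in time")

wait_for_replication(nodes[1], expected_count=3)
```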
### Demonstrating Failover
```python
# Simulate leader failure
print("\nSimulating leader failure...")
nodes[0].close()

# Wait for new leader election
time.sleep(2)

# Node 2 or 3 will be the new leader
# Try to write to node 2
conn = nodes[1].open("cluster.db")
cursor = conn.cursor()
cursor.execute(
    "INSERT INTO products (id, name, price) VALUES (?, ?, ?)",
    (4, "Monitor", 299.99)
)
conn.commit()
print("Successfully wrote to new leader!")
```

**Key Points:**
- Cluster remains available if majority (2 of 3) nodes are up
- New leader is elected automatically
- Clients can continue operations with the remaining nodes (see the quorum-loss sketch below)
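To see the flip side of the quorum rule, stop a second node and confirm that writes are refused once only 1 of 3 remains. This sketch assumes a write without quorum raises an exception (the exact error type depends on dqlitepy):

```python
import time

# Stop a second node: only 1 of 3 remains, so quorum is lost
nodes[1].close()
time.sleep(2)

try:
    conn = nodes[2].open("cluster.db")
    conn.cursor().execute(
        "INSERT INTO products (id, name, price) VALUES (?, ?, ?)",
        (6, "Webcam", 59.99),
    )
    conn.commit()
    print("Unexpected: write succeeded without quorum")
except Exception:  # assumed: raised when no quorum is available
    print("Write rejected: only 1 of 3 nodes is up")
```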
## Expected Output

```text
Creating 3-node dqlite cluster...
Initializing nodes:
  Node 1: 127.0.0.1:9001
  Node 2: 127.0.0.1:9002
  Node 3: 127.0.0.1:9003
Forming cluster...
Cluster configuration set on all nodes.
Writing data to leader (Node 1)...
Inserted 3 products.
Replicating data...
Node 1 has 3 products
Node 2 has 3 products
Node 3 has 3 products
Simulating leader failure...
Node 1 shut down.
New leader elected!
Successfully wrote to new leader!
Node 2 has 4 products
Node 3 has 4 products
Example completed successfully!
```
## Cluster Behavior

### Quorum Requirements

- **Writes**: Require a majority (2 of 3 nodes)
- **Reads**: Can be served by any node
- **Leader election**: Requires a majority (see the check below)
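A majority means strictly more than half of the configured members, i.e. `n // 2 + 1` nodes. A one-line check for illustration:

```python
def has_quorum(available, cluster_size=3):
    """A majority is floor(n / 2) + 1 nodes: 2 of 3 in this example."""
    return available >= cluster_size // 2 + 1

assert has_quorum(2)      # 2 of 3: writes and elections proceed
assert not has_quorum(1)  # 1 of 3: reads only
```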
### Failure Scenarios
| Nodes Available | Writes | Reads | Leader Election |
|---|---|---|---|
| 3 of 3 | ✅ | ✅ | ✅ |
| 2 of 3 | ✅ | ✅ | ✅ |
| 1 of 3 | ❌ | ✅ | ❌ |
| 0 of 3 | ❌ | ❌ | ❌ |
### Network Partition Handling
In a network partition:
- The partition with majority can elect a leader and accept writes
- The minority partition becomes read-only
- When the partition heals, nodes sync automatically
## Common Issues

### Nodes Not Forming Cluster
Ensure all nodes have the same cluster configuration:
```python
# Verify cluster config
for i, node in enumerate(nodes, 1):
    print(f"Node {i} cluster: {node.get_cluster()}")
```
### Replication Lag

If reads show stale data, increase the wait time:

```python
import time

time.sleep(2)  # Wait longer for replication
```

If lag persists, poll for the expected row count instead of sleeping, as in the sketch under Reading from Followers.
### Port Conflicts

Change the port range if needed:

```python
node = Node(
    node_id=i,
    address=f"127.0.0.1:910{i}",  # Use 9101-9103
    data_dir=data_dirs[i - 1]
)
```
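If hard-coded ranges keep colliding, you can ask the OS for unused ports instead. A stdlib-only sketch (not part of the original example); note the ports must also go into `cluster_config`, and another process could grab one between discovery and binding:

```python
import socket

def free_port():
    """Bind to port 0 so the OS assigns an unused port, then return it."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

addresses = [f"127.0.0.1:{free_port()}" for _ in range(3)]
```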
## Source Code

The complete source code is available in the `examples/multi_node_cluster` directory of the repository.
## Next Steps

After understanding clustering, explore:

- **Cluster with Client** - Remote cluster management
- **SQLAlchemy ORM** - Use an ORM with clusters
- **FastAPI Integration** - Build APIs on clusters