Architecture

This page describes how the Soil codebase is organized and the role of each layer.

Layered Design

Soil follows the same layered architecture as Substrate. Every blockchain built with Soil consists of two main components:

```mermaid
graph TB
    subgraph Client["Client (off-chain)"]
        CLI[CLI & Configuration]
        Net[Networking]
        Cons[Consensus]
        RPC[RPC Server]
        TxPool[Transaction Pool]
        DB[(Storage Backend)]
    end

    subgraph Runtime["Runtime (on-chain, Wasm)"]
        Exec[Executive]
        Pallets[Pallets]
        Primitives[Primitives & Traits]
    end

    CLI --> Net
    CLI --> RPC
    Net --> Cons
    Cons --> Exec
    RPC --> Exec
    TxPool --> Net
    Exec --> Pallets
    Pallets --> Primitives
```

Runtime is the state transition function — it defines how blocks are validated and how state changes in response to transactions. It compiles to Wasm so that every node executes it deterministically. The runtime is composed of pallets, each responsible for a specific domain (balances, staking, consensus hooks, etc.). The Executive module orchestrates block execution by dispatching calls to the appropriate pallets.

Client is the native node software. It manages peer-to-peer networking, block production and import, consensus protocols (BABE, GRANDPA, Aura, etc.), the transaction pool, an RPC server for external queries, and the on-disk storage backend. The client treats the runtime as a black box that it calls through a well-defined API.

Workspace Layout

The workspace is organized into five top-level directories:

main/

Core framework crates. These are the crates most projects depend on directly. Crate names follow a layered convention:

| Prefix  | Layer                                   | Substrate equivalent |
|---------|-----------------------------------------|----------------------|
| subsoil | Low-level primitives, types, and traits | sp-*                 |
| topsoil | Runtime framework (FRAME)               | frame-*              |
| soil    | Client-side node services               | sc-*                 |
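
To make the convention concrete, a downstream node's Cargo.toml might pull one crate from each layer. This is a sketch: the crate names come from the table above, but the version numbers are placeholders, not published releases.

```toml
[dependencies]
# Low-level primitives, types, and traits (sp-* equivalent)
subsoil = "1.0"
# Runtime framework prelude (frame-* equivalent)
topsoil = "1.0"
# Client-side node services (sc-* equivalent)
soil-service = "1.0"
```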

runtime/

Pallets that ship with the framework, using the plant-* prefix (equivalent to pallet-* in Substrate). These cover essential blockchain functionality: account balances, staking, consensus integration, session management, transaction payments, and assets.

contrib/

Optional and community-contributed pallets maintained within the repository. Governance modules (democracy, referenda, treasury), NFTs, nomination pools, identity, and example pallets live here.

harness/

Test-only crates: mock runtimes, test nodes, and support utilities used by the framework's test suite. Not intended for production use.

library/

Standalone tools and utilities: subkey for key management, substrate-wasm-builder for Wasm compilation, and RPC support crates.

Key Crate Relationships

The following diagram shows how the major crates relate to each other:

```mermaid
graph LR
    subsoil["subsoil<br/>(primitives)"]
    topsoil-core["topsoil-core<br/>(FRAME system)"]
    topsoil["topsoil<br/>(FRAME prelude)"]
    topsoil-executive["topsoil-executive<br/>(block execution)"]
    pallets["plant-*<br/>(pallets)"]
    soil-service["soil-service<br/>(node builder)"]
    soil-client["soil-client<br/>(client)"]
    soil-consensus["soil-consensus<br/>(consensus)"]
    soil-network["soil-network<br/>(p2p networking)"]

    subsoil --> topsoil-core
    topsoil-core --> topsoil
    topsoil --> pallets
    topsoil-core --> topsoil-executive
    pallets --> topsoil-executive
    subsoil --> soil-client
    soil-client --> soil-service
    soil-consensus --> soil-service
    soil-network --> soil-service
    topsoil-executive --> soil-service
```

  • subsoil defines the fundamental types (block headers, hashing, cryptographic primitives, codec traits) that every other crate depends on.
  • topsoil-core provides the FRAME system pallet and the Config trait machinery used to configure pallets.
  • topsoil re-exports the most commonly used items as a convenience prelude.
  • topsoil-executive orchestrates block initialization, extrinsic dispatch, and block finalization.
  • plant-* pallets implement specific on-chain logic (balances, staking, consensus hooks, etc.).
  • soil-service wires everything together into a running node: it creates the client, starts networking, spawns consensus workers, and opens the RPC server.