AI Tensor Matrix

Overview

The AI Tensor Matrix is a multi-dimensional contribution tracking system that records, for each AI, every wallet's contributions across multiple contribution types. It serves as the foundation for fair reward distribution to contributors.

Structure: for each block height, a 2D matrix tracking contributions by (contributor address × contribution type)

Contribution Types

| Type | Description | Required |
| ---- | ----------- | -------- |
| TRAIN | Training data or model improvements | No |
| REFER | Referrals and community growth | No |
| CREATE | AI creation or significant enhancements | Yes |
| PROMPT | Prompt engineering and testing | No |
| REVENUE | Revenue generation contributions | No |
| MARKET_CAP | Market capitalization increase | No |

Note: Additional contribution types can be added via governance proposals as the system evolves.

Constraints:

  • Append-only: New types can only be added at the end

  • Immutable order: Existing types cannot be reordered

  • Immutable names: Type names cannot be changed

  • CREATE required: Must always remain active

These constraints prevent corruption of historical contribution data stored as position-indexed arrays.
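As a sketch, the constraints above could be enforced like this (Python for illustration; names such as `CONTRIBUTION_TYPES` and the helper functions are assumptions, not the chain's actual code):

```python
# Illustrative sketch of the append-only contribution type registry.
# Positions are permanent: tensor rows store values by index, so any
# reordering or renaming would silently re-label historical data.

CONTRIBUTION_TYPES = ["TRAIN", "REFER", "CREATE", "PROMPT", "REVENUE", "MARKET_CAP"]

def add_contribution_type(types: list[str], name: str) -> list[str]:
    """Append-only: a new type may only be added at the end."""
    if name in types:
        raise ValueError(f"type {name!r} already exists")
    return types + [name]  # existing indices are untouched

def deactivate_type(types: list[str], active: set[str], name: str) -> set[str]:
    """Types may be deactivated but never removed; CREATE must stay active."""
    if name == "CREATE":
        raise ValueError("CREATE must always remain active")
    return active - {name}
```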

Tensor Structure

Example: AI Tensor for "genie-ai" at Block 1000

| Address | TRAIN | REFER | CREATE | PROMPT | REVENUE | MARKET_CAP |
| ------- | ----- | ----- | ------ | ------ | ------- | ---------- |
| cosmos1abc... | 100 | 50 | 1 | 25 | 10 | 5 |
| cosmos1xyz... | 200 | 0 | 0 | 10 | 15 | 8 |
| cosmos1def... | 75 | 100 | 0 | 50 | 5 | 3 |

Each cell represents the quantity of a specific contribution type by a contributor.

Data Structures:
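The original data structure definitions are not reproduced in this page; a rough Python equivalent of what the text describes (field and type names are assumptions, not the actual proto schema) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class TensorRow:
    """One contributor's row: quantities indexed by contribution-type position."""
    address: str
    contributions: list[int]  # length must equal the number of contribution types

@dataclass
class AITensor:
    """All contribution rows for one AI at one block height."""
    ai_id: str
    block_height: int
    rows: list[TensorRow] = field(default_factory=list)

    def cell(self, address: str, type_index: int) -> int:
        """Quantity of one contribution type by one contributor (0 if absent)."""
        for row in self.rows:
            if row.address == address:
                return row.contributions[type_index]
        return 0
```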

Contribution Attribution Process

Phase 1: Authorization (Core Grant Application)

Before submitting contributions, requesters must obtain permission:

  1. Requester submits governance proposal for core issuance permission

  2. Community votes on proposal

  3. If approved → Requester receives core allocation quota

  4. Quota has expiry block height (default 1,000 blocks)

Phase 2: Attribution (Request for Core)

Once authorized, requesters submit contribution data:

  1. Submit RfC - Approved requester submits MsgRequestForCore containing contribution tensors

  2. Validate - System validates tensors and checks allocation quota

  3. Record - Contributions recorded immutably at current block height

  4. Mint Cores - Cores minted via EVM contract to contributors

RfC Message:
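The RfC message body is not shown here; a hedged sketch of the fields the text implies, plus the validation step (field names are illustrative, not the real proto definition):

```python
from dataclasses import dataclass

@dataclass
class MsgRequestForCore:
    """Sketch of an RfC: an approved requester attributes contributions.

    Field names are assumptions; the real message is defined on-chain.
    """
    requester: str  # must match the approved CGA requester
    ai_id: str      # target AI, e.g. "genie-ai"
    rows: list[tuple[str, list[int]]]  # (contributor address, per-type quantities)

def validate_rfc(msg: MsgRequestForCore, num_types: int, quota: int) -> None:
    """Mirror of the validation step: array lengths and allocation quota."""
    total = 0
    for addr, values in msg.rows:
        if len(values) != num_types:
            raise ValueError(f"row for {addr}: expected {num_types} values")
        if any(v < 0 for v in values):
            raise ValueError("contribution quantities must be non-negative")
        total += sum(values)
    if total > quota:
        raise ValueError("allocation quota exceeded")
```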

Tensor Merging

Multiple RfCs in the same block are merged by summing contribution values:

Process:

  1. Load existing tensor at current height

  2. For each contributor:

    • If exists: Add contributions element-wise

    • If new: Append row

  3. Store merged tensor

Purpose:

  • Prevents data loss from concurrent submissions

  • Accumulates all contributions within a block

  • Maintains accurate historical records

Example:

Input:

  • RfC 1: Alice TRAIN=100, Bob REFER=50

  • RfC 2: Alice TRAIN=50, Carol CREATE=1

Merged Tensor at Block Height:

| Address | TRAIN | REFER | CREATE | PROMPT | REVENUE | MARKET_CAP |
| ------- | ----- | ----- | ------ | ------ | ------- | ---------- |
| Alice | 150 | 0 | 0 | 0 | 0 | 0 |
| Bob | 0 | 50 | 0 | 0 | 0 | 0 |
| Carol | 0 | 0 | 1 | 0 | 0 | 0 |
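The merge above can be sketched as an element-wise sum keyed by address (a minimal illustration, not the chain's actual code):

```python
def merge_tensors(existing: dict[str, list[int]],
                  incoming: dict[str, list[int]]) -> dict[str, list[int]]:
    """Merge one RfC into the tensor stored at the current height."""
    merged = {addr: vals[:] for addr, vals in existing.items()}
    for addr, vals in incoming.items():
        if addr in merged:
            # Contributor already has a row: add contributions element-wise.
            merged[addr] = [a + b for a, b in zip(merged[addr], vals)]
        else:
            # New contributor: append a fresh row.
            merged[addr] = vals[:]
    return merged

# Reproduce the example: RfC 1 then RfC 2 merged within one block.
rfc1 = {"Alice": [100, 0, 0, 0, 0, 0], "Bob": [0, 50, 0, 0, 0, 0]}
rfc2 = {"Alice": [50, 0, 0, 0, 0, 0], "Carol": [0, 0, 1, 0, 0, 0]}
tensor = merge_tensors(merge_tensors({}, rfc1), rfc2)
# Alice -> [150, 0, 0, 0, 0, 0], Bob -> [0, 50, 0, 0, 0, 0], Carol -> [0, 0, 1, 0, 0, 0]
```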

Storage and Indexing

Storage Keys

Length-prefixed encoding prevents collisions:

Example:
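A sketch of how such a key could be assembled (the exact byte layout here is an assumption, not the documented encoding):

```python
import struct

def tensor_key(ai_id: str, height: int) -> bytes:
    """Length-prefix the AI ID, then append a fixed-width big-endian height.

    Without the length prefix, IDs like "ab" + suffix and "a" + "b"-prefixed
    suffix could produce colliding keys; fixed-width big-endian heights keep
    lexicographic key order equal to numeric height order.
    """
    id_bytes = ai_id.encode("utf-8")
    return struct.pack(">H", len(id_bytes)) + id_bytes + struct.pack(">Q", height)
```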

Benefits:

  • Length-prefix prevents collisions between different AI IDs

  • Enables efficient range queries by AI and height

  • Deterministic ordering across validators

Query Patterns

All contributors for AI at specific height:

Contribution history for a specific wallet: query all tensors for the AI and filter by address in the application layer.

All block heights containing tensors for an AI:
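The original query snippets are not reproduced here; a Python sketch of the three query patterns above, over a toy in-memory store (the store layout is an assumption for illustration):

```python
# Toy store: (ai_id, block_height) -> {address: [per-type quantities]}
store = {
    ("genie-ai", 900):  {"cosmos1abc": [100, 0, 0, 0, 0, 0]},
    ("genie-ai", 1000): {"cosmos1abc": [0, 50, 0, 0, 0, 0],
                         "cosmos1xyz": [200, 0, 0, 0, 0, 0]},
    ("other-ai", 1000): {"cosmos1def": [75, 0, 0, 0, 0, 0]},
}

def contributors_at(store, ai_id: str, height: int) -> list[str]:
    """All contributors for an AI at a specific height."""
    return sorted(store.get((ai_id, height), {}))

def heights_for_ai(store, ai_id: str) -> list[int]:
    """All block heights containing tensors for an AI (range scan by prefix)."""
    return sorted(h for (aid, h) in store if aid == ai_id)

def history_for_wallet(store, ai_id: str, address: str) -> dict[int, list[int]]:
    """Wallet history: scan every height, filter by address in the app layer."""
    return {h: store[(aid, h)][address]
            for (aid, h) in store
            if aid == ai_id and address in store[(aid, h)]}
```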

Usage in Reward Distribution

During inflation distribution, the AI Tensor Matrix determines contributor rewards:

  1. Retrieve tensors from last N blocks (default 1,000)

  2. Tally contributions - Sum all contributions per contributor across historical blocks

  3. Calculate shares - Each contributor's share = their cores / total cores

  4. Distribute rewards - 20% of AI rewards split proportionally

Processing Limits:

  • Max 1,000 historical blocks per AI (max_ai_block_heights)

  • Max 1,000 tensor rows per block (max_tensor_rows_per_block)

  • Prevents unbounded iteration that could halt blockchain

Example:
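A worked sketch of the four steps above. The 20% contributor share comes from the text; the numbers, and treating summed contribution quantities as cores, are illustrative assumptions:

```python
def contributor_rewards(tensors: list[dict[str, list[int]]],
                        ai_reward: float,
                        contributor_share: float = 0.20) -> dict[str, float]:
    """Tally cores per contributor over N historical blocks, then split the pool."""
    totals: dict[str, int] = {}
    for tensor in tensors:
        for addr, vals in tensor.items():
            totals[addr] = totals.get(addr, 0) + sum(vals)
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    pool = ai_reward * contributor_share  # 20% of AI rewards
    # Each contributor's share = their cores / total cores.
    return {addr: pool * t / grand_total for addr, t in totals.items()}

# Two historical blocks; Alice has 150 cores total, Bob 50 -> 75% / 25% of the pool.
blocks = [
    {"Alice": [100, 0, 0, 0, 0, 0], "Bob": [0, 50, 0, 0, 0, 0]},
    {"Alice": [50, 0, 0, 0, 0, 0]},
]
rewards = contributor_rewards(blocks, ai_reward=1000.0)
# -> {"Alice": 150.0, "Bob": 50.0} (a 200.0 pool split 75% / 25%)
```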

Security Features

AI App Authorization

  • Only approved AI App can submit tensors

  • AI App address must match requester in Core Grant Application

  • Prevents unauthorized attribution

Tensor Validation

  • All tensors validated before tallying contributions

  • Invalid tensors skipped during reward distribution

  • Array length must match contribution type count

  • CREATE contribution required for new AI creation

Three-Phase Validation (TEN-8 Fix)

Prevents invalid tensor processing:

  1. Validate CGA - Check CGA exists and is not expired

  2. Validate tensors - Check allocation quota sufficient

  3. Execute atomically - Deduct allocation + store tensors + mint cores

This ensures that either all operations succeed or all fail; no partial state is ever written.

Overflow Protection

  • Safe math operations throughout

  • Contribution quantities checked for overflow

  • Prevents manipulation via extremely large values
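A minimal sketch of what checked arithmetic looks like here (the 64-bit bound is an assumption for illustration):

```python
UINT64_MAX = 2**64 - 1

def safe_add(a: int, b: int) -> int:
    """Checked addition: rejects negatives and results exceeding uint64."""
    if a < 0 or b < 0:
        raise ValueError("contribution quantities must be non-negative")
    result = a + b
    if result > UINT64_MAX:
        raise OverflowError("contribution quantity overflow")
    return result
```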

Integration with Inflation Mechanism

The tensor matrix feeds directly into inflation distribution, as described in "Usage in Reward Distribution" above.

Design Rationale

Why Multi-Dimensional? Different contribution types have different value. Tracking separately enables future weighted distribution.

Why Position-Indexed Arrays? They are gas-efficient to store and avoid string matching on contribution type names.

Why Immutable Constraints? Historical data integrity. Changing order or names would corrupt existing tensor data.

Why Block-Height Indexed? Enables historical queries and recognizes long-term contributors, not just recent activity.

Why Merge Same-Block RfCs? Prevents loss of contribution data when multiple requesters submit simultaneously.

Why Bounded Processing? Prevents DoS attacks via excessive historical data that could halt block production.
