
Blades: Compositional Capability Enhancement Through Hidden State Injection

Andrew Young
Automate Capture Research

Abstract

We introduce Blades, a framework for enhancing neural network capabilities through hidden state injection between specialized models. Unlike fine-tuning, model merging, or ensembling, Blades enables hot-swappable capability composition within a single forward pass, requiring no additional training. Through systematic experimentation across four model pairs, we identify the conditions for successful capability transfer: matched hidden dimensions, late-layer injection at 87.5% of network depth, gated feature selection, and domain-coherent blade stacking. Under these conditions, capability composition can achieve emergent performance exceeding either component model alone. Key finding: injecting reasoning capabilities from Phi-4-mini-reasoning into MediPhi achieves 69.6% accuracy on medical reasoning tasks, compared to 55.4% for MediPhi alone (+14.2 points absolute). We further establish seven validated principles for capability transfer, including the N-4 layer rule for optimal injection depth and domain coherence for multi-blade synergy (+27.8% for same-domain stacking, -27.8% for cross-domain interference).

Introduction

The demand for AI systems with diverse capabilities continuously grows, yet training new models from scratch for each task is computationally prohibitive. Current approaches to capability enhancement each have significant limitations:

  • Fine-tuning: Risk of catastrophic forgetting, requires substantial computational budget
  • Model merging: Parameter averaging approaches often degrade performance (Task Arithmetic, DARE)
  • Ensembles: Multiple forward passes increase inference cost and latency
  • Adapters (LoRA, Prefix Tuning): Additional parameters and potential incompatibilities between modules

We propose Blades, a framework that addresses these limitations by enabling hot-swappable capability injection in a single forward pass. The key insight is that hidden states from specialized models contain structured knowledge that can be transferred to a base model through careful injection at the right layer, with learned gating to select relevant features.

The term “Blades” draws an analogy from mechanical engineering: just as a mechanic swaps engine components to enhance performance, we can inject computational modules (blades) between model layers to dynamically enhance capabilities at runtime.

The Blades Framework

Architecture Overview

Blades consists of three components:

  1. Source Model: A specialized model containing the capability to transfer (e.g., strong reasoning abilities)
  2. Target Model: A base model that lacks or performs poorly on the capability
  3. Injection Mechanism: A gated aggregation layer that inserts source hidden states into target computation

Injection Mechanism

During inference on the target model, at selected layer ℓ:

h_target(ℓ) ← h_target(ℓ) + α · g(w) ⊙ h_source(ℓ)

Where:

  • h_target(ℓ): hidden state from target model at layer ℓ
  • h_source(ℓ): hidden state from source model at the same layer
  • g(w): learned gating function (e.g., sigmoid gate) with parameters w
  • α: scalar weight parameter (range 0.1–0.3)
  • ⊙: element-wise multiplication
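The update rule above can be sketched as a small PyTorch module. This is a minimal illustration, not the paper's implementation: the class name `GatedInjection` and the choice of a per-feature sigmoid gate over a learned parameter vector are our assumptions, consistent with the definitions of α, g(w), and ⊙ given above.

```python
import torch
import torch.nn as nn


class GatedInjection(nn.Module):
    """Sketch of the injection rule:
        h_target <- h_target + alpha * g(w) (elementwise*) h_source
    assuming source and target share hidden dimension `dim`.
    """

    def __init__(self, dim: int, alpha: float = 0.2):
        super().__init__()
        self.alpha = alpha                        # scalar weight, typically 0.1-0.3
        self.w = nn.Parameter(torch.zeros(dim))   # gate parameters w

    def forward(self, h_target: torch.Tensor, h_source: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.w)              # g(w) in (0, 1), per feature
        return h_target + self.alpha * gate * h_source  # broadcasted element-wise product
```

In practice the module would be registered as a forward hook at the injection layer, so the target model's computation continues unchanged past that point.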

The N-4 Layer Rule

We identify the N-4 rule: for a model with N layers, optimal injection occurs at layer N − 4 (equivalently, at 87.5% network depth). For a 32-layer model like Phi-mini, this corresponds to layer 28.

Intuition: Early layers (0–5) learn low-level features; middle layers (10–20) process semantic content; late layers (25–31) prepare for output logits. Injecting at 87.5% depth captures high-level semantic concepts while avoiding interference with output preparation.

Experiments

Phase 1: Capability Transfer Feasibility

We tested hidden state injection across four model pairs to identify conditions for success:

| Exp | Source → Target | Dim Change | Outcome |
|-----|-----------------|------------|---------|
| T01 | CLIP → GPT-2 | 512 → 768 (+50%) | No effect |
| T02 | CLIP → Gemma-270M | 768 → 640 (-17%) | No effect |
| T03 | MediPhi → Gemma-270M | 3072 → 640 (-79%) | Degradation |
| T04 | Phi-4-reasoning → MediPhi | 3072 → 3072 (0%) | +14.2% |

Only same-dimension, same-family transfer (T04) succeeded. Cross-modal (T01, T02) and dimension-mismatch (T03) transfers failed.
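The conditions distilled from T01–T04 can be encoded as a simple gate on whether to attempt a transfer at all. `transfer_feasible` is a hypothetical helper; treating "same family" as a boolean input is a simplification of the experimental finding.

```python
def transfer_feasible(src_dim: int, tgt_dim: int, same_family: bool) -> bool:
    """Heuristic from experiments T01-T04: transfer succeeded only when
    source and target had identical hidden dimensions within the same
    model family."""
    return src_dim == tgt_dim and same_family


# T01: CLIP -> GPT-2, cross-modal and dimension-mismatched -> infeasible
# T04: Phi-4-reasoning -> MediPhi, same dim, same family -> feasible
```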

Phase 2: Layer Optimization

| Layer | Position | Accuracy | vs. Baseline | Notes |
|-------|----------|----------|--------------|-------|
| 24 | 75% | 48.1% | -7.3% | Degradation (early interference) |
| 28 | 87.5% | 67.8% | -1.6% | Near-baseline, stable |
| 30 | 93.75% | 60.5% | -4.9% | Output-preparation interference |

Layer 28 (N-4 for 32-layer models) provides the most stable transfer.

Phase 3: Multi-Blade Synergy

| Blades | Target | Synergy Score | Domain |
|--------|--------|---------------|--------|
| medical + medical_pubmed | MediPhi | +27.8% | Same |
| medical + medical_pubmed | Clinical | +22.2% | Same |
| medical_clinical + medical_pubmed | MediPhi | +16.7% | Same |
| reasoning + medical | MediPhi | -27.8% | Cross |

Same-domain blades synergize; cross-domain blades interfere.
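Stacking several blades at one injection layer amounts to summing their gated contributions. The sketch below (hypothetical `BladeStack` class, one sigmoid gate per blade) shows the mechanics only; per the domain-coherence finding above, a caller would stack same-domain blades and avoid cross-domain mixes.

```python
import torch
import torch.nn as nn


class BladeStack(nn.Module):
    """Applies several gated blades at a single injection layer.

    Each blade contributes a pre-computed source hidden state; contributions
    are accumulated with the same alpha * sigmoid(w) gating as a single blade.
    """

    def __init__(self, dim: int, n_blades: int, alpha: float = 0.2):
        super().__init__()
        self.alpha = alpha
        self.gates = nn.ParameterList(
            [nn.Parameter(torch.zeros(dim)) for _ in range(n_blades)]
        )

    def forward(self, h_target: torch.Tensor, blade_states: list) -> torch.Tensor:
        h = h_target
        for w, h_src in zip(self.gates, blade_states):
            h = h + self.alpha * torch.sigmoid(w) * h_src  # one gated injection per blade
        return h
```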

The Seven Principles of Capability Transfer

  1. N-4 Layer Rule: Optimal injection at layer N-4 (87.5% depth)
  2. Same-Dimension Requirement: Source and target must have identical hidden dimensions
  3. Capability Gap Principle: Improvement ∝ (source_capability − target_capability)
  4. Gated > Identity: Learned gating outperforms direct injection by +8.9%
  5. Same-Domain Synergy: Same-domain blades synergize (+27.8%), cross-domain interfere (-27.8%)
  6. MoE Router Control: Router bias enables domain-selective expert activation (1.67× selectivity)
  7. FFN Projection Feasibility: High-dimensional FFN outputs can be projected to lower dimensions

Connection to Model Garage Toolkit

This work is validated through the Model Garage toolkit, an open-source framework for extracting, composing, and managing specialized model components:

  • Hidden State Extraction: Efficient extraction at selected layers
  • Gating Mechanisms: Pre-implemented learned gating (sigmoid, linear, softmax)
  • Injection Automation: Automated injection at specified layers with configurable parameters
  • Validation Pipelines: Benchmarking against standard tasks (MMLU, MedQA, etc.)

Conclusion

We introduced Blades, a framework for hot-swappable capability injection through hidden state transfer between specialized models. Our key contribution is demonstrating that capability transfer is feasible under specific conditions—matched dimensions, late-layer injection, gated selection, and domain coherence—and that when these conditions align, emergent performance can exceed either component model alone (+14.2 points in our best case).

The practical implication is that AI systems can be enhanced through modular, hot-swappable capability injection without retraining. The Model Garage toolkit makes this approach reproducible and accessible.

Cite this article

Andrew Young (2026). Blades: Compositional Capability Enhancement Through Hidden State Injection. Automate Capture Research.