# Rollout Structure

Transforming how UME handles data won't happen overnight. It requires deliberate phases that build confidence, validate tools, and create advocates along the way.

## Phase 1: Research and Alignment (Current)

Before writing code, we invested in understanding the landscape:

  • Mapped the pain - Conversations with teams across Atendimento, FinOps, Data Engineering, and Data Science surfaced concrete problems: incorrect reconciliation data, expensive runaway queries, duplicated KPIs, security gaps.
  • Created an architecture view - The Architecture and Tools section captures what a governed data platform looks like, layer by layer.
  • Evaluated tools - Each component (storage, ETL, reporting, catalog) was researched with alternatives weighed against UME's context.

The deliverable is this documentation itself, shared with stakeholders for alignment before implementation begins. No surprises, no misaligned expectations.

## Phase 2: MVP Implementation (Next)

With alignment in place, we move to implementation - but scoped tightly:

### Two Parallel Verticals

We start with Customer Support (Atendimento) and Financial Operations (FinOps). These aren't random choices:

| Criteria | Atendimento | FinOps |
|---|---|---|
| Clear pain point | Ungoverned dashboards, expensive queries, no history | Incorrect data sent to fund managers daily |
| Engaged stakeholders | Vinicius, Léo Luiz, operations team | Sarmanho, Deo, credit operations |
| Bounded scope | Dashboards + data sources are known | Reconciliation pipeline is isolated |
| Demonstrable value | Reduced costs, single source of truth | Correct reconciliation, audit trail |

# What "MVP" Means Here

MVP doesn't mean throwaway. It means:

  • Production-grade from day one - What we deploy stays. No "we'll fix it later" shortcuts.
  • Full vertical slice - From data source to consumption layer. Not just a dashboard or just a pipeline, but the complete flow with governance baked in.
  • Tool validation - Each tool choice is proven in a real scenario before we commit to broader adoption.

### Success Criteria

Before expanding, the MVPs must demonstrate:

  1. Data correctness - The governed data matches or exceeds the accuracy of current ad-hoc solutions.
  2. Discoverability - Users can find data and understand its lineage through the catalog.
  3. Operational improvement - Measurable reduction in time spent on data firefighting.
  4. Stakeholder buy-in - The teams using the new platform prefer it and advocate for expansion.
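The first gate, data correctness, can be made concrete as a comparison between the legacy ad-hoc export and the governed output on a shared key. The sketch below is purely illustrative: the function name, row shapes, and key field are assumptions, not part of any existing pipeline.

```python
# Hypothetical data-correctness check: compare legacy and governed rows
# keyed by an id field, reporting keys missing from the governed output
# and keys whose rows differ. All names here are invented for illustration.

def mismatch_report(legacy_rows, governed_rows, key="id"):
    """Return keys missing from governed output and keys with differing rows."""
    legacy = {r[key]: r for r in legacy_rows}
    governed = {r[key]: r for r in governed_rows}
    missing = sorted(set(legacy) - set(governed))
    differing = sorted(
        k for k in set(legacy) & set(governed) if legacy[k] != governed[k]
    )
    return {"missing": missing, "differing": differing}

legacy = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
governed = [{"id": 1, "amount": 100}, {"id": 2, "amount": 255}]
print(mismatch_report(legacy, governed))  # {'missing': [], 'differing': [2]}
```

An empty report on a representative window of production data would be one way to evidence the "matches or exceeds" claim before expanding.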

See First Projects for the specific scope and expected benefits of each MVP.

## Phase 3: Broader Rollout

Once MVPs prove the model, we expand deliberately:

### Creating Organizational Momentum

  • Visibility through the catalog - As more data flows through governed channels, the catalog becomes the default place to find and understand data. Teams outside the MVPs start asking to join.
  • Self-service enablement - With patterns established, other teams can onboard their data following the same blueprints. Data Engineering shifts from firefighting to enabling.
  • Governance as culture - Data ownership, quality checks, and lifecycle management become expected, not exceptional.

### Migration Strategy

For existing assets (dashboards, queries, pipelines), migration is gradual:

  • Flag, don't force - The catalog marks assets as "governed" or "legacy". Users see which sources are trustworthy.
  • Sunset with proof - Legacy sources retire only when governed alternatives demonstrably replace them.
  • Automate discovery - Tools like Metabase metadata exports help map the existing landscape without manual archaeology.
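The "flag, don't force" idea amounts to a classification pass over catalog assets. A minimal sketch, assuming a registry of governed sources; the registry contents, field names, and function are all illustrative, not an existing API:

```python
# Hypothetical tagging pass: mark each catalog asset "governed" or "legacy"
# based on whether its source is in the governed registry. Legacy assets
# are tagged, never removed - users keep access but see the trust level.

GOVERNED_SOURCES = {"finops.reconciliation", "atendimento.tickets"}  # illustrative

def classify_assets(assets):
    """Attach a governance tag to each asset dict without dropping any."""
    tagged = []
    for asset in assets:
        tag = "governed" if asset["source"] in GOVERNED_SOURCES else "legacy"
        tagged.append({**asset, "tag": tag})
    return tagged

assets = [
    {"name": "Daily reconciliation", "source": "finops.reconciliation"},
    {"name": "Old KPI dashboard", "source": "spreadsheet.manual"},
]
for a in classify_assets(assets):
    print(f'{a["name"]}: {a["tag"]}')
```

The same pass could be fed from a Metabase metadata export, so existing dashboards get flagged without manual archaeology.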

### Scaling the Team Model

The current small DE team can't govern everything directly. The rollout creates:

  • Data stewards per domain - Business areas own their data quality, with DE providing the platform and guardrails.
  • Blueprints and templates - Repeatable patterns for common scenarios (onboarding a new source, creating a certified KPI, publishing a dashboard).
  • Training and documentation - Not just tools, but practices that spread.
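A blueprint for onboarding a new source can start as a required-fields gate before the source enters the governed catalog. The field names below are invented for illustration; the real checklist would come from the DE team's guardrails:

```python
# Hypothetical onboarding gate: every new source must declare an owner,
# at least one quality check, and a retention policy. Field names are
# illustrative, not a real schema.

REQUIRED_FIELDS = ("owner", "quality_checks", "retention_days")

def validate_onboarding(spec):
    """Return (ok, missing_fields) for a source-onboarding spec."""
    missing = [f for f in REQUIRED_FIELDS if not spec.get(f)]
    return (len(missing) == 0, missing)

spec = {
    "owner": "finops-team",
    "quality_checks": ["row_count > 0"],
    "retention_days": 365,
}
print(validate_onboarding(spec))  # (True, [])
```

Encoding the checklist this way is what lets other teams self-serve: the gate, not a DE engineer, tells them what is missing.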

## Timeline Perspective

We deliberately avoid hard dates. What matters is the sequence and the gates:

```
Research & Alignment → MVP Implementation → Validation → Broader Rollout
         ^                     ^                ^               ^
   (we are here)         (2 verticals)  (success criteria)  (expand)
```

Each phase completes when its criteria are met, not when a calendar date arrives. This protects against rushing incomplete work into production.

## What Could Go Wrong

Honest risks to watch:

  • Stakeholder attention fades - Mitigate by delivering visible wins early in MVPs.
  • Tool choices don't fit - Mitigate by validating in real verticals, not PoCs in isolation.
  • MVP scope creeps - Mitigate by defining success criteria upfront and protecting boundaries.
  • Legacy systems resist retirement - Mitigate by proving governed alternatives before forcing migration.

The structure isn't a guarantee of success, but it's designed to surface problems early and adapt.