Recruiter Quick-Scan Summary
What: Led UX strategy and end-to-end product design for an agentic-AI-powered iPaaS that helps SMBs safely create, run, monitor, and recover cross-system automations without API/schema literacy.
Context: Internal 0→1 initiative inside a B2B SaaS serving ~4,000 SMBs. We built an SMB-friendly alternative to enterprise iPaaS tools (Zapier, Workato, Celigo) by focusing on what competitors consistently under-serve: live-run safety, rollback, and clear responsibility-sharing.
Why: Competitive teardown + review mining showed the same failure pattern: non-technical users could configure flows, but froze at execution because they couldn’t answer:
- What exactly will happen?
- What’s the blast radius if it goes wrong?
- Can I undo it reliably?
Impact (Design → Business):
- ⏱️ ~40% faster onboarding (18 → 10–11 min median).
- ❌ ~50% fewer configuration errors in first-run flows.
- 🔁 ~35% faster recovery from failed runs (self-serve rollback + guided fixes).
Validated via moderated proxy testing (n≈8–10/round ×3) + 40-account internal beta.
What Shipped (v1)
- Pre-built connectors (high-volume systems) + outcome-based onboarding
- AI data manipulation for mapping (suggestions, transformations, previews)
- One-to-many/many-to-one safeguards (risk flags + mapping patterns)
- Version control + rollback (per-run diffs + scoped revert)
- Error handling + report generator (human-readable impact summaries + exports)
- Hard/soft delete handling (guardrails + intent selection)
- Self-healing agent (suggest-only fixes + replay plans, no silent changes)
My Role: UX Lead — owned risk strategy, interaction paradigm, trust model across setup/execution/monitoring/recovery. Led discovery, IA, prototyping, validation. Built service blueprint, risk heatmaps, journey maps, and decision logs with PM/Eng triads. Used full-stack background (Firebase/NoSQL) to weigh feasibility, cost, latency.
Core Insight: SMBs don’t fail integrations because they’re complex — they fail because they fear irreversible data mistakes and don’t have a safe way back.
1. Product & Org Context
- Timeline: 2Q runway (~6 months) from brief → v1 architecture, plus guarded rollout over 1Q.
- Target User: Non-technical/lightly-technical operators (Ops/RevOps/owners) at SMBs (20–250 employees) running fragmented stacks (HubSpot + QuickBooks + Shopify, etc.) with reconciliation pain.
- Team: 1 PM, 3 Backend Eng (connectors/engine/schema), 1 Frontend Eng (UI/monitoring), Me (UX Lead).
- Key Constraint: Feel dramatically safer than competitors without increasing setup time, infra cost, or price point.
Collaboration
Weekly “Runway Review” (PM + Eng Lead + me) to trade off activation/cost/risk. My Figma service blueprint and risk maps became the single source of truth (including failure modes, rollback contracts, and API expectations).
2. The Market Gap
Competitors optimized for setup speed, but pushed risk onto users at live-run moments:
- Hidden logs, unclear failure states
- Manual rollback (if any), hard to scope
- Weak support for cardinality problems (one-to-many/many-to-one)
- Dangerous operations (deletes/overwrites) treated as normal mappings
Review mining repeatedly surfaced “data loss,” “duplicates,” “overwrites,” “no undo.” The psychological blocker wasn’t “I don’t understand” — it was “I can’t recover.”
2.1 Review Mining: The Competitive Vulnerability
Analysis of 200+ reviews across G2, Capterra, and Reddit revealed a stark pattern: risk-related complaints dominated, while UX-friction complaints were comparatively rare. The market blocker was not configuration complexity but recovery uncertainty. That single insight reframed our strategy: rather than competing on setup speed (where feature parity erodes margins), we would compete on what competitors systematically under-serve: live-run safety, operational reversibility, and blast-radius clarity.
[Image: Review Sentiment Chart — data loss and reversibility are top user pain points; patterns informed design priorities: reversibility, observability, safety-first execution]
2.2 Competitive Feature Analysis
Our analysis across six major competitors revealed consistent gaps: all optimized for setup velocity while neglecting execution safety. This created defensible differentiation. GoFlow's competitive advantage isn't configuration simplicity—it's operational safety. Competitors like Zapier and Elastic.io excel at setup speed but fail at rollback capability, cardinality handling, and error transparency. GoFlow leads in all eight critical dimensions.
[Image: Competitive Feature Matrix]
3. Research & Discovery
Methods
- Competitor teardowns (Zapier, Workato, Celigo, Jitterbit, Elastic.io, Skyvia)
- Review mining (G2/Capterra/Reddit) tagged by risk language
- Proxy usability tests: 3 rounds, n≈8–10/round
- Participants configured CRM → invoicing flows in high-fidelity prototypes
3.1 The Turning Point: 20-Second Hover
A participant configured a HubSpot-to-QuickBooks invoice sync without training. All interactions were smooth until they hovered over the “Run” button—and held. After 20 seconds of hesitation, they said:
“I get it… I just don’t know how to undo this if it goes wrong.”
This single observation shifted the product goal. Users could mentally model the mapping; they couldn't mentally model what would happen if things went wrong.
3.2 User Research & Empathy
We mapped the SMB operator's mental model, emotional journey, and core anxieties. The central insight: users freeze at execution due to fear of irreversible data loss, not configuration complexity. This empathy map became the north star for design priorities. Every feature that shipped addressed one of the pain points or unlocked one of the gains visible in this map.
[Image: User Empathy Map]
4. Problem Statement
The design challenge: How might we enable SMB operators to create and run integrations confidently, without schema/API literacy and without false reassurance? Reality: Confidence came less from simplicity and more from:
- Previewability (see outcomes before they occur)
- Blast-radius clarity (who/what is affected)
- Recoverability (version control + rollback that works)
- Accountability (clear logs, reports, and remediation paths)
5. Major Design Pivots
Each pivot addressed a specific blocker discovered through user testing. The sequence mattered—early pivots created foundation, later pivots built safety layers.
[Image: Design Pivot Timeline]
5.1 The Seven Pivots Explained
Pivot 1 — Pre-built Connectors as the SMB 'Default Path'.
Problem: Template selection and auth setup drove high abandonment.
Solution: Flip the paradigm from 'select connector' to 'select outcome.' Users choose 'Keep deals and invoices in sync' rather than navigating connector menus. The system automatically pairs the optimal connectors (HubSpot → QuickBooks) and chooses the best auth flow.
Impact/Insight: Completion improved ~60% → ~88%, setup time ~18 min → 10–11 min.
Pivot 2 — AI Data Manipulation (Visible, Previewed, Never 'Magic')
Problem: Auto-mapping features in competitors felt opaque. Users distrusted transformations they didn't control.
Solution: Make AI suggestions transparent. Every match recommendation includes a 'why' explanation. Transform previews show before/after for 3–5 sample records. Ghost fields show exactly where data lands pre-commit.
Impact/Insight: Users don't fear AI assistance; they fear hidden AI assistance.
Pivot 3 — One-to-Many / Many-to-One as a First-Class Risk Model
Problem: Cardinality relationships caused silent duplicates and overwrites.
Solution: Flag cardinality risk during mapping. Show blast-radius estimates. Force explicit guardrail choices: Create new child records vs. update existing records; Merge rules (sum, latest, first, custom) with impact preview; Unique ID strategy selection.
Impact/Insight: Eliminated ~70% of silent cardinality errors.
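The cardinality guardrail described above can be sketched in Python. This is an illustrative model only: the record shapes, field names, and risk labels are assumptions, not the shipped GoFlow implementation.

```python
from dataclasses import dataclass

@dataclass
class CardinalityCheck:
    source_key: str
    target_key: str
    risk: str       # "one_to_one" or "many_to_one" (illustrative labels)
    affected: int   # blast-radius estimate: records that would collide

def assess_cardinality(source_rows, source_key, target_key):
    """Flag many-to-one risk by counting source rows that share a key.

    If any key appears more than once, multiple source records would
    land on one target record, so the user must pick an explicit
    merge rule (sum, latest, first, custom) before the run proceeds.
    """
    counts = {}
    for row in source_rows:
        counts[row[source_key]] = counts.get(row[source_key], 0) + 1
    collisions = {k: n for k, n in counts.items() if n > 1}
    if collisions:
        return CardinalityCheck(source_key, target_key, "many_to_one",
                                affected=sum(collisions.values()))
    return CardinalityCheck(source_key, target_key, "one_to_one", affected=0)
```

In practice, a check like this would run during mapping (before execution), and a `many_to_one` result would gate the flow behind the guardrail choices listed above.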
Pivot 4 — Error Handling as Trust Moments
Problem: Failed runs showed cryptic API errors with no root cause, blast radius, or next steps.
Solution: Translate errors into operator language. Show run-level impact summary. Generate exportable CSVs for stakeholder review. Detect common failure patterns and suggest fixes.
Impact/Insight: Errors are opportunities to demonstrate system reliability.
Pivot 5 — Version Control + Rollback as Core Interaction
Problem: No standard way to undo failed or unwanted runs.
Solution: Every run produces an immutable change set with before/after diffs. One-click rollback scoped to a single run. Clear statement of what rollback can/cannot undo. Replay capability to detect schema drift.
Impact/Insight: ~35% faster recovery from failed runs; self-serve remediation reduced L1 support load.
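The per-run change-set and scoped-rollback mechanic can be illustrated with a minimal Python sketch. The data shapes and class names here are hypothetical, chosen only to show the contract: every run records before/after per record, and rollback reverts exactly that run's records and nothing else.

```python
import copy
import uuid

class ChangeSetStore:
    """Minimal sketch: each run yields an immutable change set of
    (record_id, before, after) tuples; rollback is scoped to one run."""

    def __init__(self):
        self.runs = {}  # run_id -> list of (record_id, before, after)

    def record_run(self, changes):
        run_id = str(uuid.uuid4())
        # Deep-copy so the stored change set cannot be mutated later.
        self.runs[run_id] = [(rid, copy.deepcopy(before), copy.deepcopy(after))
                             for rid, before, after in changes]
        return run_id

    def rollback(self, run_id, datastore):
        """Revert only the records touched by this run."""
        for rid, before, _after in reversed(self.runs[run_id]):
            if before is None:
                datastore.pop(rid, None)  # record was created by the run
            else:
                datastore[rid] = copy.deepcopy(before)
```

The key design point mirrored here is scoping: rollback operates on one run's change set, so unrelated records updated by other runs are never disturbed.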
Pivot 6 — Hard/Soft Delete Handling (Explicit Intent, Never Implicit)
Problem: Delete operations are dangerous but were treated as normal field mappings.
Solution: Default to soft delete. Hard delete requires explicit intent, blast-radius warning, and record preview. Suspicious delete detection routes to checkpoint.
Impact/Insight: Prevented accidental mass-deletes; deletes became auditable operations.
Pivot 7 — Self-Healing Agent (Suggest-Only, Auditable, No Silent Changes)
Problem: Automating error recovery without visibility violates trust.
Solution: System detects failure patterns and proposes fixes with explanation, predicted outcome, and risk level. User must approve every fix. All actions logged.
Impact/Insight: Autonomy without transparency erodes trust. Suggest, explain, require confirmation, log everything.
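The suggest-only contract can be sketched as follows. The failure-pattern table, method names, and risk labels are invented for illustration; the point is the invariant: no fix is applied without explicit approval, and every proposal, block, and application is logged.

```python
from dataclasses import dataclass

@dataclass
class FixProposal:
    run_id: str
    diagnosis: str      # plain-language explanation of the failure
    action: str         # proposed remediation
    risk_level: str     # "low" | "medium" | "high"
    approved: bool = False

class SelfHealingAgent:
    """Suggest-only agent: never applies a fix without user approval;
    appends every decision to an audit log."""

    def __init__(self):
        self.audit_log = []

    def propose(self, run_id, error_code):
        # Hypothetical pattern table for illustration.
        patterns = {
            "RATE_LIMIT": ("API rate limit hit", "retry with backoff", "low"),
            "SCHEMA_DRIFT": ("target field renamed", "remap field", "medium"),
        }
        diagnosis, action, risk = patterns.get(
            error_code, ("unknown failure", "escalate to user", "high"))
        proposal = FixProposal(run_id, diagnosis, action, risk)
        self.audit_log.append(("proposed", proposal))
        return proposal

    def apply(self, proposal, approved_by):
        if not proposal.approved:
            self.audit_log.append(("blocked", proposal))
            raise PermissionError("fix requires explicit user approval")
        self.audit_log.append(("applied", proposal, approved_by))
```

Note that the approval gate is enforced in code, not convention: an unapproved `apply` both logs the attempt and refuses to proceed.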
6. End-to-End Flow
The complete user journey is organized into four phases, each with dedicated safety checkpoints:
- Phase 1 (Setup): connector selection → outcome template → auth → AI mapping canvas → cardinality checks → dry-run preview
- Phase 2 (Risk Validation): Review & Preview gate; cardinality risk confirmation; guardrail selection
- Phase 3 (Execution): versioned execution with diffs and real-time anomaly detection
- Phase 4 (Recovery): one-click rollback; self-healing suggestions; report generator exports
[Image: User Journey Flow]
6.1 Key Features by Phase
This feature matrix shows how our seven v1 capabilities map across the four journey phases.
[Image: Feature Matrix by Phase]
7. Backend Architecture
Every operation is immutable and auditable. The architecture enforces reversibility and observability at the systems level: safety is not a UI layer but a systems property.
The architecture flows through four core components:
- Change Set Engine: creates immutable, versioned logs of every run
- Anomaly Detector: real-time pattern analysis flags suspicious behavior before a run completes
- Rollback Contract Engine: defines clear reversibility boundaries for every operation
- Audit Log & Event Stream: immutable record of all actions with audit event IDs
[Image: Backend Architecture Diagram]
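The Audit Log & Event Stream component can be sketched as an append-only log in Python. The hash-chaining shown here is one common way to make tampering detectable; it is an illustrative assumption, not a claim about GoFlow's actual implementation.

```python
import hashlib
import json

class AuditLog:
    """Append-only event stream sketch: each event gets a sequential ID
    and a hash chaining it to the previous event, so any later edit to
    a stored event breaks verification."""

    def __init__(self):
        self.events = []

    def append(self, actor, action, payload):
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {"id": len(self.events), "actor": actor,
                 "action": action, "payload": payload, "prev": prev_hash}
        event["hash"] = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()).hexdigest()
        self.events.append(event)
        return event["id"]

    def verify(self):
        """Recompute the chain; returns False if any event was altered."""
        prev = "0" * 64
        for e in self.events:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A structure like this gives every rollback, fix approval, and delete an audit event ID that downstream reports can reference.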
8. Validation & Testing
Testing Methodology: 3 rounds of moderated proxy tests (n≈8–10/round) + 40-account internal pilots. We stopped after metrics plateaued and qualitative signals converged: “I feel safe enough to turn this on.”
8.1 Before/After Performance Validation
This chart shows validated improvements across four key operational metrics.
[Image: Performance Validation Chart]
| Metric | Early Prototype | Final Flow |
|---|---|---|
| Completion Rate | 60% | 88% (+28 pts) |
| Setup Time | 18 min | 10.5 min (−42%) |
| Configuration Errors | Baseline | ~50% fewer |
| Self-Serve Recovery | 0% | 85% (new capability) |
9. UX → Business Impact
The four pillars of business impact, all validated through testing and beta:
- 40% Faster Onboarding (18 min → 10–11 min): higher first-run completion
- 50% Fewer Config Errors: reduced support load & rework
- 35% Faster Recovery: self-serve remediation instead of escalation
- 6–8pt Attach-Rate Lift: confidence removes the psychological blocker to feature adoption
[Image: Quantified Business Outcomes]
Activation: Confidence-first onboarding increased first-successful-run conversion and reduced abandonment at “Run.”
Attach rate: ~6–8pt lift over 2Q driven by higher activation + reduced perceived risk.
Support cost: Fewer misconfigs + self-serve rollback/reporting reduced L1 load, supporting SMB economics.
Positioning: GoFlow became the default integration layer for its SMB segment because safety + rollback shifted trust from “tool” to “system of record for automation.”
10. What This Proves
- Risk-aware design for automation: high-stakes UX for AI-driven workflows with safety-first execution, not just happy-path flows. Real failure modes, recovery paths, accountability mechanisms.
- Governance translation: translated TRiSM (Trust, Risk, and Security Management) concerns into concrete product mechanics: observability (diffs, logs), auditability (immutable change sets), reversibility (rollback), controlled autonomy (suggest-only).
- Cross-functional problem-solving: resolved conflicts between activation speed, infrastructure cost, and risk mitigation with measurable outcomes.
- User research rigor: one observation (the 20-second hover) reframed the entire product strategy; validated through three rounds of testing (n=28 total) plus a real-world beta (40 accounts).
- Scalable design patterns: connectors, mapping, and rollback contracts designed as reusable primitives, not one-off features; handed off to engineering without loss of intent.
- Business impact ownership: tied every design decision to a measurable outcome: configuration errors −50%, setup time −40%, self-serve recovery 0% → 85%, attach rate +6–8 pts.
Design Principles
1. Observability over simplicity
2. Reversibility as core interaction
3. Blast-radius clarity
4. Explicit intent gates
5. Suggest, don't automate
6. Accountability through audit trails
7. Progressive disclosure
8. Error as trust moment
Deliverables
Core design outputs
- Figma service blueprint
- Risk heatmaps
- Journey maps with emotional curve
- Decision logs
- User empathy maps and research synthesis
Feature ship list v1
- Pre-built connectors + outcome onboarding
- AI data mapping
- Cardinality safeguards
- Version control + rollback
- Error handling + reporting
- Hard/soft delete handling
- Self-healing agent