
Architecture Before Autonomy: The Path To AI-Native Wealth
McKinsey’s recent vision of wealth management in 2035 describes an industry entering the agentic age, where systems reason, recommend, and act under constraints rather than merely automate tasks [6].
The destination is clear.
What is less clear is the path.
There are multiple pathways to pursue that future, and not all of them compound for the benefit of the organization or Financial Advisor. Some plateau at productivity. Some create supervisory and regulatory fragility. Some never move beyond assistive tooling.
The firms that reach 2035 intact, advantaged, and “AI-Native” will not begin with AI compliance overlays or experimental FA or client-facing copilots. They will begin by eliminating friction at the operating level and establishing a core architecture that can safely scale.
The transition from automation to agency is not a feature rollout. It is a staged institutional redesign.
AI Native Phase I: Eliminate Friction and Establish the Core
The instinct in regulated industries is often to start with guardrails. Compliance reviews. Supervisory dashboards.
To date, wealth management's AI efforts have centered on advisor workflows and administrative tasks.
These instincts are understandable. We would argue they are also backwards. These starting points put adoption in the hands of the individual, who must choose to embrace the technology, and history shows that is difficult at best.
Wealth management already runs on formal fiduciary obligations such as duty of care and loyalty [1], written compliance programs [2], codes of ethics [3], and structured disclosures through Form ADV [4]. These are not weaknesses or constraints. They are latent structure and have historically provided a common architecture for the industry.
In most firms, that structure exists as documentation, supervision processes, and institutional memory. It does not exist as clean, interoperable operational architecture.
AI Native Phase I focuses on common but complex operational use cases that transcend individual advisors. Examples include portfolio diagnostics across households, pre-trade constraints as executable permissions, and cross-entity data reconciliation. Most importantly, it eliminates the manual processing between disparate platforms and “middle-to-middle” systems in areas like billing, pricing, and onboarding, finally resolving the morass of unstructured-to-structured data flows that still persists in every wealth organization.
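To make "pre-trade constraints as executable permissions" concrete, consider a minimal sketch in which a constraint is data plus a check function rather than a paragraph in a compliance manual. All names here (`Constraint`, `check_trade`, the concentration limit) are illustrative assumptions, not any firm's actual schema.

```python
from dataclasses import dataclass

# Illustrative only: a pre-trade constraint expressed as executable data,
# so the same rule applies identically across every advisor and account.
@dataclass(frozen=True)
class Constraint:
    name: str
    max_position_pct: float  # max % of portfolio in a single security

def check_trade(portfolio_value: float, current_position: float,
                trade_amount: float, constraint: Constraint) -> tuple[bool, str]:
    """Return (allowed, reason) so every decision carries its justification."""
    new_pct = (current_position + trade_amount) / portfolio_value * 100
    if new_pct > constraint.max_position_pct:
        return False, (f"{constraint.name}: position would be {new_pct:.1f}%, "
                       f"limit is {constraint.max_position_pct:.1f}%")
    return True, f"{constraint.name}: within limit at {new_pct:.1f}%"

# Hypothetical 10% single-security concentration limit
limit = Constraint("single_security_concentration", max_position_pct=10.0)
allowed, reason = check_trade(1_000_000, 80_000, 50_000, limit)
print(allowed, reason)  # False: 13.0% would breach the 10.0% limit
```

The point of the sketch is not the arithmetic but the shape: a permission check that returns both a decision and its reason, which is what makes later auditability cheap.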
This approach resembles centralized platform operating models in which recurring complexity is treated as an institutional capability rather than an individual craft activity.
AI in this phase is not advisory. It is organizational.
The objective is to standardize data structures, establish lineage and auditability, create consistent workflow orchestration, and reduce human coordination overhead.
Human-in-the-loop remains absolute. Fiduciary duties and obligations are untouched. AI assists but does not decide.
Most importantly, Phase I builds the substrate. Without it, later stages cannot scale.
If firms begin instead with compliance checks layered on fragmented systems, they will supervise noise rather than build capacity.
AI Native Phase II: Broaden the Horizon with Structured Human-in-the-Loop
Once the operational core is established, the horizon broadens.
McKinsey’s 2035 vision anticipates AI systems capable of planning across time and interacting conversationally with clients [6]. That future requires firm-wide constraint coherence.
Phase II extends AI across investing, product selection and exploration, broader compliance, and service while maintaining explicit human-in-the-loop governance across all use cases. M&A integration is streamlined to reduce dependence on brittle point-to-point middleware by standardizing interfaces and automating exception handling. AI becomes the “high-value layer” on an increasingly legacy stack.
Agents may generate portfolio adjustment recommendations, surface tax loss harvesting opportunities, flag liquidity risks, or draft client communication. No material action executes without human authorization. In parallel, the regulatory environment will evolve, and best practices and standards are likely to be clarified and hardened.
The firm is not yet pursuing autonomy. It is pursuing reliability and constraint integrity.
The key institutional shift is cross-functional design. An agent operating across portfolio management, operations, and product teams cannot be owned by one department. Constraint logic must be co-designed.
Rule 206(4)-7 requires policies reasonably designed to prevent violations [2]. In Phase II, those policies begin to map directly to system permissions. Codes of ethics under Rule 204A-1 [3] translate into interpretable guardrails. Disclosure obligations described in Form ADV [4] begin aligning with how systems evaluate conflicts and costs.
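What "policies map directly to system permissions" could look like is a guardrail object that carries a pointer back to the written policy it encodes and returns an interpretable verdict. This is a sketch under stated assumptions: the class names, fields, and the policy reference string are hypothetical, not an actual mapping of SEC rules to code.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    actor: str                       # e.g. "agent:rebalancer"
    action: str                      # e.g. "place_order"
    requires_disclosure: bool = False
    human_approved: bool = False

@dataclass
class Guardrail:
    policy_ref: str                  # pointer back to the written policy
    description: str

    def evaluate(self, a: ProposedAction) -> dict:
        """Return an interpretable, auditable verdict, not just a boolean."""
        violations = []
        if a.action == "place_order" and not a.human_approved:
            violations.append("material action lacks human authorization")
        if a.requires_disclosure and not a.human_approved:
            violations.append("undisclosed conflict requires human review")
        return {
            "policy_ref": self.policy_ref,
            "allowed": not violations,
            "violations": violations,
        }

# Hypothetical guardrail encoding a human-in-the-loop policy clause
rail = Guardrail("compliance-manual:trading-approvals",
                 "Material actions require human sign-off")
result = rail.evaluate(ProposedAction("agent:rebalancer", "place_order"))
print(result["allowed"], result["violations"])
```

Because the verdict names the policy and the specific violation, supervision can migrate from retrospective review to embedded constraint validation, as the text describes.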
Supervision migrates from retrospective review to embedded constraint validation.
Human-in-the-loop remains universal. The firm is learning how to encode judgment into structure.
This phase can last years. It should.
The objective is not speed. It is institutional muscle memory.
AI Native Phase III: Deterministic Output and Bounded Agency
Only once the core is established and constraint integrity proven does the firm move to deterministic execution.
Deterministic does not mean simplistic or opaque. It means that given defined inputs, permissions, and governing constraints, the system produces predictable and explainable outputs that the firm can act upon.
Phase III is different because the system begins affecting how the firm itself allocates attention, capital, pricing discipline, supervisory resources, and strategic action.
In this phase, AI may continuously evaluate profitability, service complexity, risk exposure, product penetration, and advisor capacity across the book, then recommend or trigger bounded firm-level actions within approved mandates. That could include reprioritizing service models by segment, reallocating specialist coverage, initiating pricing exception reviews, sequencing remediation across supervisory hotspots, recommending household migration into more appropriate program structures, or coordinating interventions when patterns emerge across books rather than within a single account.
In more mature environments, the system may also govern multi-step actions such as advisor transition workflows during M&A integration, repapering campaigns tied to product rationalization, enterprise cash logistics across custodial relationships, or the identification of issues where fiduciary risk, operational drag, and revenue leakage are compounding together.
Every action is memorialized with embedded justification aligned to fiduciary standards [1]. Evidence becomes continuous rather than periodic.
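A minimal sketch of "memorialized with embedded justification": each firm-level action is recorded with the mandate it ran under and the reasoning behind it, chained by hash so the trail is tamper-evident and evidence is continuous by construction. The `ActionLedger` name, mandate strings, and justification text are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

class ActionLedger:
    """Append-only record of bounded agent actions (illustrative sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, mandate: str, action: str, justification: str) -> dict:
        # Chain each entry to the previous one so tampering is detectable.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "mandate": mandate,            # the approved mandate acted under
            "action": action,
            "justification": justification,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

ledger = ActionLedger()
entry = ledger.record(
    mandate="pricing-exception-review",          # hypothetical mandate
    action="flag_household_for_repricing",
    justification="fee schedule drifted materially from program standard",
)
print(entry["prev_hash"], entry["hash"][:12])
```

The design choice worth noting is that justification is a required field of the record itself, not an after-the-fact annotation; an action without a reason simply cannot be written.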
This is where AI moves beyond operational efficiency.
It begins to influence growth, increase decision quality, expand capacity, and tighten fiduciary evidence.
The difference between assistive AI and deterministic AI is not model sophistication. It is constraint and context maturity. The role of agents may manifest in a few different ways: at the firm level for discretionary actions, or each FA may have an agent executing on their behalf. Time will tell.
McKinsey’s 2035 scenario describes always-on engagement and systems capable of reasoning under constraints [6]. That outcome is plausible only if constraint architecture precedes autonomy.
Otherwise autonomy magnifies variance.
In Phase III, human-in-the-loop evolves. Humans do not disappear. They supervise exception pathways, redesign constraints, and expand permissioning boundaries.
The primary governor of routine complexity becomes architectural rather than conversational.
Multiple Paths to 2035
There are other paths, of course.
Some firms will begin with client-facing copilots and hope experience drives architecture.
Some will focus narrowly on compliance overlays, ensuring AI outputs are reviewable without redesigning the substrate.
Some will remain assistive indefinitely, extracting productivity without expanding institutional capacity.
All of these approaches will deliver some benefit and can survive for a time.
The distinction becomes visible only when firms attempt to grant bounded discretion.
A firm that has built the architecture can safely allow systems to act within defined mandates and evidence every action in real time. A firm without it cannot move beyond suggestion without expanding risk faster than supervision can contain it.
The difference is not model intelligence.
It is constraint and context throughput: the rate at which a firm can safely expand system permissions without expanding risk faster than oversight can manage.
The scarce asset is not intelligence, but permissioned institutional context.

Sources
1. U.S. Securities and Exchange Commission, Commission Interpretation Regarding Standard of Conduct for Investment Advisers (IA-5248, June 5, 2019). https://www.sec.gov/rules-regulations/2019/06/ia-5248
2. U.S. Securities and Exchange Commission, Compliance Programs of Investment Companies and Investment Advisers (Advisers Act Release No. 2204; adopting Rule 206(4)-7, Dec. 17, 2003). https://www.sec.gov/files/rules/final/ia-2204.pdf
3. U.S. Securities and Exchange Commission, Investment Adviser Codes of Ethics (Rule 204A-1; IA-2256, July 6, 2004). https://www.sec.gov/files/rules/final/ia-2256.htm
4. U.S. Securities and Exchange Commission, Form ADV and Investment Adviser Brochure Requirements (Investor Bulletin, June 24, 2016). https://www.investor.gov/introduction-investing/general-resources/news-alerts/alerts-bulletins/investor-bulletins-71
5. U.S. Securities and Exchange Commission, Investment Adviser Public Disclosure (IAPD). https://adviserinfo.sec.gov/
6. McKinsey & Company, US wealth management in 2035: A transformative decade begins (Jan. 29, 2026). https://www.mckinsey.com/industries/financial-services/our-insights/us-wealth-management-in-2035-a-transformative-decade-begins