A field guide from the team building enterprise agentic AI.
We are ASCENDING — an AWS Advanced Consulting Partner that builds Jarvis AI, a governance-first, MCP-native agent platform. This is the public research arm of that team: what we've learned shipping agentic systems for enterprises, written for the people doing the same work.
On our desks
Anthropic's MCP changelog, the NIST AI RMF update, and too much coffee.
Four pillars, one argument.
The enterprise AI conversation in 2026 still circles the same four topics. Each pillar here is a long read, not a landing page. Start where your quarter is burning hottest.
Agentic AI
The operating theory of autonomous agents — where they work, where the hype still outruns the evidence, and which workloads actually cost less than the humans they replace.
Model Context Protocol
MCP went from Anthropic research draft to foundation-backed standard in fourteen months. A reader-friendly reference for clients, servers, and gateways.
AI Governance
Policy templates, approval workflows, and the uncomfortable organizational questions — written alongside CISOs who already filed their ISO 42001 paperwork.
Enterprise RAG
Retrieval is still the hardest part of the stack. A pillar on document pipelines, re-rankers, evals, and when agentic RAG earns its seat.
What we published this quarter
A reader's guide to evaluating MCP gateways
The evaluation criteria we use when readers ask which gateway to pilot: tool-level authorization, credential brokering, per-tool observability, egress enforcement, and policy-as-code. Drawn from the published documentation of the ~15 MCP gateway vendors tracked in this space.
How to measure AI agent ROI without embarrassing yourself
Productivity-minute arithmetic is how the first wave of agent programs embarrassed themselves. A framework from CFO-side reviewers who now require direct P&L impact.
Moveworks vs Glean, after the ServiceNow acquisition
ServiceNow's $2.85B acquisition of Moveworks closed in late 2025. A side-by-side rebuilt from public product documentation, Moveworks' and Glean's own homepages, AWS Marketplace listings, and analyst commentary.
Practitioner-written, openly sponsored.
We are not an independent publication. We are the ASCENDING team that ships Jarvis AI — the same people building the gateway, the governance layer, and the MCP integrations we write about. Writing from inside the problem is the point; pretending otherwise would be dishonest and bad for trust.
Every claim is anchored to a public source we can link to — vendor documentation, standards bodies (ISO, NIST, Linux Foundation), analyst reports (Gartner, Futurum), and peer-reviewed papers.
Every page that discusses Jarvis opens with a disclosure. Every comparison that includes Jarvis marks it clearly. We rank Jarvis honestly in our own tables — where it loses, we say so.
Pricing pages are dated. Comparisons show sources column-by-column. When our reading is directional rather than authoritative, we say so on the page, not in a footnote.
Most read this month
Who writes here
Every piece carries a byline and — where the claim is load-bearing — a separate reviewer. Contributors' LinkedIn profiles are linked from every byline for verification.
Full masthead & editorial policy
Founder and editor of Explore Agentic. Writes across the enterprise agentic AI stack: MCP, governance, and the buying cycles that determine what actually ships.
Covers MCP server implementation patterns, A2A protocol design, and the runtime trade-offs platform teams face when shipping multi-agent systems.
Covers AWS-native agent infrastructure: Bedrock, AgentCore Runtime, and the deployment patterns that survive enterprise security review.
Covers the identity layer of governed AI: OAuth/OIDC for MCP, RBAC propagation, and the on-behalf-of patterns that pass security review.
Writes the Enterprise RAG pillar and the retrieval- and evaluation-heavy glossary entries on Explore Agentic.
Covers natural-language data interfaces: text-to-SQL, semantic layers, and the edge cases that make BI agents production-fragile.
Covers vector search, embedding models, and the evaluation frameworks that separate retrieval that works from retrieval that demos well.
Covers customer programs and case study methodology. The practitioner side of how Jarvis customers actually deploy and what gets measured.
Covers product strategy for enterprise AI: positioning, pricing, and the buyer journey from pilot to procurement. Anchors the comparisons library.
Covers customer outcomes and the storytelling that turns post-deployment data into actionable case studies, including the metrics that don't show up in the dashboard.
Covers AI governance, procurement, and enterprise buying cycles. Reviews every comparison and playbook on Explore Agentic before publication.
Covers go-to-market patterns for enterprise AI: partner ecosystems, channel motions, and the procurement-to-pilot bridge.
Covers AI vendor evaluation, RFP cycles, and the procurement questions enterprise buyers actually negotiate: cost, contractual data terms, and exit clauses.
Advises on the agentic AI pillar and reviews technical claims across the site before publication.
The team writing this ships Jarvis AI
This hub is the editorial layer. Jarvis is where the patterns we cover — governance, registry, guardrails — get deployed. If you're scoping a program rather than just reading, the product page is the next step.
ascendingdc.com/jarvis-ai — ASCENDING's enterprise agentic AI platform.