Adalberto Soares

Building systems
that build systems.

I architect the infrastructure layer that designs, builds, and governs AI agents—the system beneath the systems. 25 years of software engineering led me here: to the architecture that makes agentic AI reliable at scale.

Current: Lead AI Engineer, EasyMate AI
Location: São Paulo, Brazil
Focus: AI Agent Architecture

I work at two levels: production agentic infrastructure at EasyMate AI, and meta-agentic system design on my own time—building architecture where autonomous systems are designed, validated, and managed by other systems.

Agentic Infrastructure at EasyMate AI

Lead AI Engineer · Architect of the agentic development infrastructure

Designed the spec cascade pipeline that structures the full path from product requirements through technical design, solution architecture, and implementation planning to multi-agent orchestrated execution. Built a composable skill-and-agent framework now being adopted across the engineering team to deliver AI-powered applications with repeatable quality.
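A spec cascade like the one described above can be sketched as a pipeline of stages, each consuming the artifact the previous stage produced. This is a minimal illustration only; the stage names mirror the paragraph, but the function names and data shapes are hypothetical, not EasyMate AI's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable

# Stage names taken from the cascade described above; everything else
# (SpecArtifact, run_cascade, the dict payloads) is illustrative.
STAGES = [
    "product_requirements",
    "technical_design",
    "solution_architecture",
    "implementation_plan",
    "orchestrated_execution",
]

@dataclass
class SpecArtifact:
    stage: str
    content: dict

def run_cascade(idea: dict, stage_fns: dict[str, Callable[[dict], dict]]) -> list[SpecArtifact]:
    """Run each stage on the previous stage's output, keeping every artifact
    so the full path from idea to execution plan stays auditable."""
    artifacts, current = [], idea
    for stage in STAGES:
        current = stage_fns[stage](current)
        artifacts.append(SpecArtifact(stage=stage, content=current))
    return artifacts
```

The point of keeping every intermediate artifact is that each handoff between stages can be reviewed or re-run independently, which is what makes the cascade repeatable across a team.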

Spec Cascade Pipeline · Agent Orchestration · LLM Governance · Developer Tooling

Meta-Agentic System Design

Independent research & build

Designing a system that designs, builds, validates, and manages other autonomous systems—and diagnoses its own architectural evolution. Constitutional governance for graduated AI autonomy. Proof-of-work validation for every agent deliverable. Requirement pipelines that go from idea to audited implementation. A maturity framework where the system assesses its own architecture, surfaces the highest-risk gaps, and drafts the requirements to close them.
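"Proof-of-work validation for every agent deliverable" can be made concrete with a small sketch: a deliverable carries structured evidence, gets a content hash for tamper-evident auditing, and is accepted only if every required evidence kind is present. All names here (`ProofOfWork`, `validate`, the `kind:detail` evidence convention) are assumptions for illustration, not the system's real API.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ProofOfWork:
    agent: str
    deliverable: str
    evidence: tuple[str, ...]  # e.g. "tests:passed", "diff:abc123" (hypothetical convention)

    def digest(self) -> str:
        """Content hash so the evidence bundle can be audited later unchanged."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def validate(proof: ProofOfWork, required_kinds: set[str]) -> bool:
    """Accept a deliverable only if every required kind of evidence was captured."""
    kinds = {item.split(":", 1)[0] for item in proof.evidence}
    return required_kinds <= kinds
```

The design choice worth noting is that validation is a property of captured evidence, not of the agent's self-report: a deliverable with no test log simply cannot pass.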

Meta-Agentic Architecture · Constitutional Governance · Proof of Work · Autonomous Systems

Trustworthy AI Governance & Security

Cross-cutting concern

Evidence-based validation frameworks where AI outputs earn trust through captured proof. Multi-model review processes where agents audit each other. Security architecture patterns for agentic systems—guard chains, RBAC/ABAC authorization, permission hooks, and audit reporting. The engineering discipline that makes AI automation safe to trust.

Evidence-Based Validation · Multi-Model Review · Security Architecture · Permission Systems

Trustworthy AI automation.

Not the hype—the engineering discipline. These are the principles behind everything I build.

Quality of evidence is what builds trust.
Without trust, there is no autonomy.

Evidence over faith

Every AI agent output carries proof of its work. Trust is earned through captured evidence and validated outcomes, not assumed by default.

Graduated autonomy

Systems earn independence through demonstrated reliability. Constitutional governance keeps humans in the loop where it matters.
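One way to make "systems earn independence through demonstrated reliability" concrete: map an agent's validated track record to a permitted autonomy level, with a minimum sample size before any independence is granted. The thresholds and level names below are illustrative assumptions, not a prescribed policy.

```python
# Illustrative thresholds: autonomy is a function of demonstrated
# reliability, not a static grant.
LEVELS = [
    (0.00, "suggest_only"),      # every action needs human approval
    (0.95, "act_with_review"),   # acts, human reviews after the fact
    (0.99, "autonomous"),        # acts within constitutional bounds
]

MIN_TRACK_RECORD = 20  # assumed minimum number of validated attempts

def autonomy_level(successes: int, attempts: int) -> str:
    """Map a validated success rate to the highest autonomy level earned."""
    rate = successes / attempts if attempts else 0.0
    level = LEVELS[0][1]
    for threshold, name in LEVELS:
        if attempts >= MIN_TRACK_RECORD and rate >= threshold:
            level = name
    return level
```

The minimum-sample gate is the "humans in the loop where it matters" part: a new agent stays in suggest-only mode no matter how clean its first few runs look.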

Cognitive load management

Human-AI interfaces that respect attention. Progressive disclosure over information flooding. The system reduces the human burden.

Security as architecture

Autonomous systems need security built into their structure—guard chains, permission hooks, audit trails—not bolted on as an afterthought.

  • Software has always been a lever.
    Now, the lever learns.
  • We no longer program algorithms.
    We conduct reasoning.
  • Intelligence without traceability
    is power without purpose.
25+ · Years in Software Engineering
CTO · Former CTO & VP Engineering
USP · Universidade de São Paulo

Thinking in public.

On agentic AI architecture, meta-system design, the principles of trustworthy automation, and what changes—and what stays the same—after 25 years of building software.

Let's connect.

Engineering leaders navigating agentic AI. Builders thinking about the layer beneath the agents. Founders who want to build something that lasts.

Whether you are evaluating agentic AI architecture for your team, building autonomous systems and want to exchange patterns, or exploring a role where this work matters—I would like to hear from you.