KAINDLY × Live Coverage
HumanX 2026 Day 2 conference stage

HUMAN[X]

San Francisco | April 6–9, 2026

Day 2 — The Plain-Language Version

AI stopped being a concept. On Day 2, it started being a coworker.

Day 1 was about understanding what AI is becoming. Day 2 was about what it's already doing — in real companies, right now.

April 7, 2026 | Leanna Baker Williams | KAINDLY Collective

Why This Exists

AI conferences talk to the industry.
We translate it for everyone else.

Most conference coverage assumes you already know the jargon, follow the players, and understand the technical landscape. That leaves out the vast majority of professionals who need to make decisions about AI but didn't grow up in the ecosystem.

KAINDLY's HumanX readouts are written for the person who doesn't have time to attend a four-day conference — but whose budget, team, and strategy depend on understanding what's happening in AI right now. Every term is defined. Every insight comes with a concrete next step. No gatekeeping.

Day 2 of HumanX 2026 just wrapped. If Day 1 showed us where AI is headed, Day 2 showed us where it's already landed. Across 18 sessions, the same message came through again and again: the organizations pulling ahead aren't the ones with the best AI — they're the ones that figured out how to actually use it. Three themes dominated the stage. Here's what they mean for you, in plain English.

Quick Glossary — 10 Terms You'll See Below
AI Agent
Software that doesn't just answer questions — it takes actions. It can book a flight, process a refund, update a database, or coordinate between systems, without a human clicking every button.
Agentic AI
The broader shift from AI that advises to AI that acts. Instead of suggesting what to do, agentic AI goes and does it — within boundaries you set.
Copilot
An AI assistant that works alongside you — suggesting, drafting, summarizing — but waits for you to make every decision. Think autocomplete on steroids. The step before agents.
Human in the Loop
A design choice where AI does the work but a person reviews, approves, or corrects it before anything goes live. A safety net built into the process.
Orchestration
Coordinating multiple AI systems or agents to work together on a task — like a conductor leading an orchestra. One agent might check inventory while another checks schedules.
Governance
The rules, processes, and accountability structures an organization puts around its use of AI. Who decides what AI can do? Who's responsible when it goes wrong?
Data Lineage
Knowing where your data came from, who created it, whether you have the rights to use it, and how it's been transformed along the way. The supply chain of information.
Deterministic
A system that gives the same output every time you give it the same input. Traditional software is deterministic. AI often isn't — which is why oversight matters.
Change Management
The process of helping an organization's people adapt to new tools, roles, or ways of working. The human side of technology adoption.
Digital Worker
An AI agent designed to handle a complete business workflow — not just one task, but a chain of related tasks that used to require a person moving between multiple systems.

What You Need to Know

THREE
TAKEAWAYS

Panel stage: The Future Isn't Autonomous, It's Agentic at HumanX 2026
01

AI Is Learning to Do Things — Not Just Answer Questions

What Happened

Multiple sessions — from AWS CEO Matt Garman's keynote to panels featuring Microsoft, Intercom, and Superhuman — converged on the same message: AI is moving from copilot to AI agent. A copilot suggests things for you to do. An agent actually does them.

The examples were concrete. Lufthansa shared how their AI system handles 400,000 customer conversations in a single day — rebooking flights, issuing refunds worth over 100 million euros, answering questions in six languages. Westshore Home described an AI system that schedules home renovation appointments on the spot by coordinating inventory, permits, crew availability, and customer preferences — all in real time, while the customer is still sitting at the kitchen table.

What This Means for You

The AI tools you've been hearing about — ChatGPT, Copilot, AI assistants — mostly help you write, summarize, and brainstorm. The next wave does actual work: scheduling, purchasing, processing orders, handling customer requests, coordinating across systems. If your team is still thinking of AI as "a better search engine" or "a writing assistant," you're looking at last year's landscape.

The shift matters because agentic AI doesn't just save time — it changes who does the work. Lufthansa's system doesn't assist a customer service agent. It is the first point of contact for thousands of people simultaneously. Westshore Home's scheduling AI doesn't help a scheduler — it replaces the scheduling step entirely. The question is no longer "how can AI help my team work faster?" It's "which parts of the work can AI handle on its own, and which parts still need a person?"

One Thing to Try

Pick one repetitive workflow in your team — one that involves checking multiple systems, copying information between tools, or coordinating with other departments. Ask: "Could an AI agent handle the routine 80% of this and flag only the exceptions for a human?" That's the shift happening right now.

HumanX 2026 Day 2 conference session on scaling AI
02

The Organizations Winning at AI Figured Out How to Deploy — Not Just Experiment

What Happened

Session after session told the same story: the organizations pulling ahead aren't the ones with the most sophisticated AI. They're the ones that figured out how to get it into production. Westshore Home described putting three people in a room — an AI builder, a domain expert who knows the real workflow, and a product manager — and watching real work happen before writing a line of code. They built a working prototype in hours, not months.

Lufthansa scaled from one chatbot for one airline during the pandemic to a platform handling millions of conversations across four airlines, four channels, and six languages. IFS shared a framework for deploying digital workers — AI agents that handle entire workflows in manufacturing and field service. The pattern repeated across every session: start small, prove value in a limited scope, build organizational trust, then expand.

The Pattern That Kept Repeating

01

Watch the real work first

Don't start with documentation. Sit down and watch someone actually do the job. Your documentation is probably lying to you about how the work really gets done.

02

Find your "all-upside" experiment

Start where failure costs nothing. Westshore Home tested conversational AI on old leads they weren't going to contact anyway. Anything they converted was pure upside — it's now generating millions in monthly sales.

03

Shadow-ship before you launch

Run the AI in the background doing the work — but don't let it act on the results yet. Let your team see what it would do before trusting it to do it.

04

Build trust, then scale

Every small win is a deposit in the organizational trust bank. If your first AI rollout goes badly, the next one is dead on arrival. Start narrow. Prove it works. Then widen.

What This Means for You

The biggest barrier to AI isn't technology — it's change management. Multiple speakers said the same thing: when the "why" is rooted in your mission — helping customers, solving real pain points — adoption becomes natural. When it's rooted in a top-down mandate or cost-cutting, you're fighting an uphill battle.

Westshore Home's AI lead put it bluntly: "You can't scale AI adoption in an organization that fears it." Lufthansa echoed this: they hire for passion and curiosity, not technical skills. Their most important team lead joined at 19 with no AI experience. The companies succeeding are treating AI adoption the way you'd build trust with a new employee — start with low-risk work, prove reliability, then gradually increase responsibility.

One Thing to Try

Ask your team: "Have we tried deploying AI anywhere — even in a low-stakes, limited way?" If the answer is no, the first step isn't buying a tool. It's finding one workflow where AI could run in the background — doing the work, but not yet acting on it — so your team can see what it would do before you trust it to do it.

HumanX 2026 Day 2 governance and policy discussions
03

AI Governance Stopped Being Optional

What Happened

A panel featuring Google's Head of AI Research Standards, the CEO of Binti (AI in child welfare), and the CEO of Defined AI (AI data sourcing) made the case that governance isn't a nice-to-have — it's an executive responsibility. The moderator asked the audience how many felt their organization was ready for AI. Two hands went up.

Defined AI's CEO rattled off active lawsuits: Disney vs. OpenAI, Warner Brothers vs. Midjourney and Stability, The New York Times vs. OpenAI and Microsoft, Universal Music vs. multiple AI companies. She described the reality of data lineage: most companies don't actually know whether they have the legal rights to use the data they're feeding into their AI tools. Binti's CEO described deploying AI in government child welfare agencies — where mistakes affect children's lives — by keeping a human in the loop on every decision, maintaining full audit trails, and ensuring AI fills out paperwork but never makes recommendations about a child's welfare.

What This Means for You

If your organization is using AI and you don't have a clear answer to "who is responsible when this goes wrong?" — that's a governance gap. The lawsuits are real and growing. The data rights questions are unresolved. And the regulatory environment is shifting fast.

Google's representative pointed to real, published standards that any organization can use: ISO 42001 for AI management systems and the NIST AI Risk Management Framework. These aren't theoretical — they're frameworks your organization can adopt now, regardless of size. The panelists agreed: in the next 90 days, every executive should audit what data they have, whether they have the rights to use it, and what gaps exist. Not because regulators are coming (though they might be), but because your team is probably already using AI tools without guidance — and that's a risk you're carrying without knowing it.

One Thing to Try

Ask your leadership team: "Do we have a clear policy for how AI tools are used in our organization — and does anyone own what happens when the output is wrong?" If the answer is no, that's your next 90-day priority. Start with the basics: what tools are people using, what data are they feeding in, and who reviews what comes out.

Voices from the Floor

"You can't scale AI adoption in an organization that fears it. When the why is rooted in your mission, adoption becomes natural."

— AI Lead, Westshore Home

"Hire for passion, not for skills. One of our team superstars joined when she was 19 — she'd never worked on AI before. She's 24 now and runs one of our most important teams."

— Conversational AI Lead, Lufthansa Group

"We're not going to let AI say 'therefore, you should do this.' We let it fill out paperwork. But we never let it make a recommendation about a child's welfare."

— CEO, Binti (AI in Child Welfare)

"If you build a bot that has high volume, solves a business problem, but that people don't like — they'll use it once and then find another way to get things fixed."

— Conversational AI Lead, Lufthansa Group

Your Next Step

One question to ask your team this week

"Are we still experimenting with AI — or have we actually deployed anything?"

One decision to sit with

The gap between "we're exploring AI" and "AI is part of how we work" isn't a technology gap. It's a trust gap. And it closes one small, successful deployment at a time. The organizations that figured this out first are already pulling ahead.

That's what KAINDLY helps with. Not selling you AI. Helping you understand it — and build the confidence to use it well.

Try This

Take the Assessment

Not sure where you stand with AI? KAINDLY's AI Readiness Assessment gives you a personalized report on your current comfort level, strengths, and where to focus next.

You don't need to know everything.
You just need to start.

KAINDLY helps professionals build real AI fluency — at your own pace, in your own context, without the jargon.