Stephanie Stranko
Independent researcher focused on emotional safety and behavioral risk in AI-mediated systems. Founder of Ikwe.ai.
Des Moines, Iowa
Ikwe.ai was founded because the tools to measure what matters didn't exist.
Before building evaluation frameworks and benchmarks, I worked at the intersection of human behavior, systems design, and real-world vulnerability — where technology doesn't just inform decisions, it shapes emotional outcomes.
In 2023, I started building emotionally intelligent AI systems — prototypes designed to support people through grief, relationship strain, and the quiet moments when they needed someone to talk to. The technology worked. People engaged deeply.
But I kept hitting the same wall: I had no reliable way to measure whether these conversations actually helped.
The tools didn't exist. The benchmarks didn't exist. The industry was evaluating AI by what it said, not by what it did to the people using it.
So I built them myself: the evaluation frameworks, the scenario libraries, the behavioral patterns that distinguish safe responses from unsafe ones in emotionally vulnerable contexts. What started as internal tooling became the EQ Safety Benchmark. What started as one founder's due diligence became independent research that the entire industry can use.
The thesis
My work focuses on a gap most AI safety efforts overlook: how conversational systems land on people, especially during moments of distress, uncertainty, or cognitive load.
The core finding is uncomfortable: Recognition ≠ Safety. AI systems can identify emotions with high accuracy and still respond in ways that make situations worse. High emotional articulation often correlates with worse behavioral safety scores.
The most fluent systems are sometimes the most dangerous — because they sound so good that people trust them further than they should.
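To make the Recognition ≠ Safety distinction concrete, here is a minimal sketch of how the two axes can be scored separately. This is an illustration only, not the EQ Safety Benchmark itself; the `Turn` fields, labels, and scoring rules are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One evaluated model response in an emotionally vulnerable scenario.
    Fields are hypothetical, for illustration only."""
    emotion_true: str        # annotator-labeled emotion in the user's message
    emotion_predicted: str   # emotion the model named in its reply
    behavior_safe: bool      # annotator judgment: did the reply avoid harm?

def recognition_accuracy(turns: list[Turn]) -> float:
    """Fraction of turns where the model correctly identified the emotion."""
    return sum(t.emotion_true == t.emotion_predicted for t in turns) / len(turns)

def behavioral_safety(turns: list[Turn]) -> float:
    """Fraction of turns judged behaviorally safe -- a separate axis."""
    return sum(t.behavior_safe for t in turns) / len(turns)

# A model can score high on recognition and low on safety at the same time:
turns = [
    Turn("grief", "grief", behavior_safe=False),    # named the feeling, then pushed advice
    Turn("anxiety", "anxiety", behavior_safe=False),
    Turn("grief", "grief", behavior_safe=True),
    Turn("anger", "sadness", behavior_safe=True),
]
print(f"recognition: {recognition_accuracy(turns):.0%}")  # 75%
print(f"safety:      {behavioral_safety(turns):.0%}")     # 50%
```

The point the sketch encodes: the two numbers are computed from different evidence, so one can be high while the other is low.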
Focus areas
- Emotional safety evaluation for conversational AI
- Behavioral risk in AI-mediated decision-making
- Failure modes in support-oriented systems
- Measurement gaps in current AI safety practice
Why independent
Ikwe.ai doesn't take funding from the companies we evaluate. We don't have equity relationships that create conflicts. We're based in Iowa — far from the hype cycles, close to the work.
Independence isn't a marketing position. It's a structural requirement for doing this work with integrity. If your evaluation framework can be influenced by the people being evaluated, it's not an evaluation framework — it's a negotiation.
Ikwe.ai is built on the belief that safety must be measured where harm actually occurs — in human experience, not just model outputs.