During this year’s SiGMA event in Rome, Yogonet sat down with Dmytro Sorysh, AI Domain Product Officer at RedCore, an innovation-driven business group, who shared insights into how to scale processes with AI.
Generative AI is reshaping the way companies operate. In your view, how is it changing not only the tools businesses use but the entire approach to building and scaling new companies?
Generative AI has completely flipped the build equation. What used to take a 30-50 person team can now be achieved by 5-7 people with access to shared model infrastructure - large language models, speech recognition, telemetry, and so on.
It has collapsed both the cost and time of building. Tiny teams can now ship production-grade systems quickly - as long as they measure ruthlessly and design for escalation, not perfection.
At RedCore, we focus on workflows over features. Our design philosophy is built around end-to-end outcomes - say, “contact → offer → objection → conversion” - instead of chasing model novelty.
Every release is evaluation-driven. Each change ships behind automatic tests that assess accuracy, safety, tone, and ROI. And because we build human-in-the-loop systems by design, there are always clear fallback points where people can step in when confidence drops below a threshold. That keeps quality high while ensuring compliance and transparency.
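As an illustration of that kind of confidence-gated fallback, a minimal sketch might look like the following; the threshold value, field names, and routing logic are hypothetical, not RedCore's actual implementation.

```python
# Illustrative only: a confidence-gated handoff between an AI agent and a human.
# The threshold and dataclass fields are assumptions for this sketch.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # below this, the conversation escalates to a person

@dataclass
class AgentReply:
    text: str
    confidence: float  # evaluator-scored confidence in the drafted reply

def route(reply: AgentReply) -> str:
    """Return 'send' to let the AI reply go out, or 'escalate' to hand off to a human."""
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return "send"
    return "escalate"

# Example: a low-confidence objection-handling reply gets routed to a human agent.
print(route(AgentReply(text="I can waive that fee today.", confidence=0.62)))  # escalate
```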
RedCore describes itself as an “innovation group of companies”. What inspired that positioning, and how do you balance corporate operations with the pace of a startup?
We often say we’re an innovation group of companies - born from operational pain, scaled with startup speed but governed with enterprise discipline.
Our origin story is simple: growth started to outpace headcount. AI and automation became the only scalable way to absorb real workloads across areas like support, HR, marketing and outbound.
To balance agility and structure, we run on a two-gear model:
Gear 1 – Explore: Small product pods work on 6-10 week MVPs, with strict guardrails.
Gear 2 – Exploit: Once something proves ROI, we harden it, templatize it and deploy it across brands.
We also operate a shared platform - one stack for logging, evaluations, prompts, telephony, and compliance. That drastically shortens cycle times and reduces risk.
Every idea faces clear gates: it either scales or it’s killed based on objective metrics, not enthusiasm. And we have a simple cultural contract - startup speed is welcome, but production standards are non-negotiable.
When deciding which new projects to launch, how do you choose which ideas move into development and which have potential for broader scaling?
We start with one question: Does it solve a repeatable bottleneck, and can it beat the baseline on cost or outcomes? If yes, it ships. If it generalizes across brands, it scales.
We apply two specific filters: one for what we build and one for what we scale.
Our MVPs are intentionally focused: one job to be done, one persona, one region or language, fully instrumented end-to-end.
To scale, a product must perform across two or three brands with minimal re-prompting, maintain compliance and tone at load, and show that unit economics improve with volume - meaning the cost per successful outcome drops as usage grows.
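To illustrate what "unit economics improve with volume" means in practice, one simple way to track it is cost per successful outcome; the sketch below uses made-up figures, not RedCore data.

```python
# Hypothetical illustration of the "cost per successful outcome" metric:
# unit economics improve when this number falls as volume grows.
def cost_per_outcome(total_cost: float, successful_outcomes: int) -> float:
    return total_cost / successful_outcomes

# Assumed example figures, purely for illustration:
pilot = cost_per_outcome(total_cost=12_000, successful_outcomes=400)       # 30.0 per conversion
at_scale = cost_per_outcome(total_cost=45_000, successful_outcomes=2_500)  # 18.0 per conversion
print(pilot, at_scale)  # the per-outcome cost should drop as usage grows
```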
Once an AI product is launched, how do you measure its success within RedCore? What are the metrics that matter most?
For us, success starts internally. Every AI product is born from our operations and tested on real traffic. If it consistently solves a genuine operational problem and shows potential beyond our business group, that’s when we consider it for external release.
We look for clear improvement over the baseline - faster response times, lower cost per contact, higher conversion rates. It must reduce dependency on manual labor, stay compliant under production load and earn positive feedback from internal teams using it daily.
Once we see consistent value, we move into a market validation phase: to scale, the product has to solve a common, repeatable workflow across multiple operators, maintain tone and accuracy under stress, and deliver measurable ROI while improving unit economics.
In short, every RedCore product is born from our operations, validated on real traffic, and only then shaped into something the broader market can benefit from.
Looking ahead, how do you see innovation-focused corporate groups like RedCore shaping the global AI and automation landscape in the coming years?
The next big wave of AI success will come from operators with real traffic, real governance, and the discipline to productize what works.
We’re moving from demos to duty cycles - where enterprise operators can turn proven workflows into reusable AI products faster than pure startups.
New standards will emerge: governance as code, transparent audit trails and evaluation suites will become the new normal for production AI.
We’ll also see more composable stacks - shared automatic speech recognition (ASR), text-to-speech (TTS) and large language model (LLM) layers - allowing small teams to build new business lines in weeks, not months.
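A rough sketch of what such a composable stack can look like, assuming hypothetical stand-in functions rather than any real platform API, is a thin orchestration layer over shared ASR, LLM and TTS services.

```python
# Illustrative sketch of a composable voice-agent stack (ASR -> LLM -> TTS).
# All three functions are stand-ins for shared platform services, not real APIs.
def transcribe(audio: bytes) -> str:
    # Stand-in for a shared ASR service call (assumed)
    return "caller asks about bonus terms"

def generate_reply(transcript: str) -> str:
    # Stand-in for a shared LLM service call (assumed)
    return f"Reply drafted for: {transcript}"

def synthesize(reply: str) -> bytes:
    # Stand-in for a shared TTS service call (assumed)
    return reply.encode("utf-8")

def handle_call(audio: bytes) -> bytes:
    """A new business line is mostly orchestration over the shared layers."""
    return synthesize(generate_reply(transcribe(audio)))

print(handle_call(b"\x00\x01"))
```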
The human role will shift too: from repetitive execution to supervision, coaching and higher-level campaign design. And corporations that can validate on live traffic will become launchpads for co-built solutions, partnering with startups and vendors alike.
At RedCore, that’s exactly where we’re heading - turning operational excellence into a shared advantage for the entire market.