Where human insight meets AI: investing in the next chapter of product innovation.
The B2B software companies compounding through the current AI wave are not the ones automating empathy out of their product. They are the ones using AI to amplify human judgment - faster cycles, deeper insight, sharper decisions - while keeping the human signal at the centre of every meaningful product call.
The category error in “AI replaces user research”
The dominant narrative in 2026 around AI in product development has settled into a binary: AI either replaces user research or it doesn’t. We think the binary itself is the problem. AI can simulate user journeys, surface behavioural patterns from existing telemetry, and produce real-time feedback loops at a cadence no human-only team can match. None of this replaces the judgment call about which signal matters, which patterns are accidental, and which decisions are reversible enough to ship now versus structural enough to defer.
The companies in our portfolio that are using AI well in product development are doing two things at once: they are running the synthetic feedback loop at machine cadence, and they are preserving a structured human-judgment cadence on top of it. The two are not in conflict. They are complementary.
What “amplification” actually looks like in product
The phrase “AI amplifies human insight” is overused to the point of becoming a slogan. The operational shape underneath it is more specific. Three patterns recur:
- Cycle compression on observation. What used to take a 2-week user-research sprint - recruit, run, transcribe, theme, write up - now takes 48 hours when the AI tooling does the transcription, the thematic surfacing, and the first pass at structuring the findings. The PM still owns the interpretation. The cycle is 7× faster.
- Pattern detection at scale. Behavioural telemetry across a customer base of any size used to be too noisy for product teams to read directly. AI-driven pattern detection surfaces the cohorts and behaviours that warrant human investigation. The product team still decides what to build. The signal-to-noise ratio is dramatically different.
- Real-time feedback at the interaction layer. The product itself becomes adaptive - tutorials adjusting to user behaviour, onboarding adjusting to detected friction, recommendations responding to context. This is AI inside the product, not just AI in the build cycle. Done well, it improves the experience. Done badly, it erodes trust.
Why this matters specifically for the Nordic ecosystem
The Nordic software ecosystem has a recognised strength in user-centred design and in the underlying cultural assumption that the end user is a partner, not a target. That strength is structurally suited to the kind of AI-native product development described above - where the AI does the observation work but the human team still owns the design judgment. The companies that get this right do not lose what made their Nordic product culture distinctive. They speed it up.
This is the operating thesis behind several of our portfolio engagements in AI-native software, including inamo, the AI-powered user-research platform formerly known as FeedBackFrog - a category-defining example of human insight amplified rather than displaced by AI.
What this means for founders building in this space
For founders building AI-native product surfaces in 2026, three structural questions matter more than the tooling choice:
- What is the human-judgment cadence on top of the AI output? If the AI is doing the observation but there is no structured cadence for the human team to interpret what it surfaces, you are accumulating signal you cannot act on.
- Where does the trust boundary sit? Customers will tolerate AI augmentation in some surfaces (search, summarisation, onboarding hints) and refuse it in others (decision support, anything that looks like impersonation). Naming the boundary explicitly is design work, not engineering work.
- How do you evaluate the AI surface in production? Evaluation of AI-augmented product surfaces is still under-invested across the category. The companies pulling ahead have rebuilt their QA function around continuous evaluation rather than one-off functional tests.
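The shift from functional tests to continuous evaluation can be made concrete with a minimal sketch. Everything here is hypothetical illustration, not a reference to any particular product or framework: the case list, the toy model, and the crude substring scoring stand in for production traffic samples, the real AI surface, and a richer grading rubric.

```python
# Hypothetical sketch of a continuous-evaluation harness for an
# AI-augmented product surface. Names (EVAL_CASES, toy_model,
# score_response, run_eval) are illustrative assumptions.

EVAL_CASES = [
    # (user input, substring the response is expected to contain)
    ("reset my password", "password"),
    ("export data as csv", "csv"),
]

def toy_model(prompt: str) -> str:
    """Stand-in for the production AI surface under test."""
    return f"Here is help with: {prompt}"

def score_response(response: str, expected: str) -> float:
    """Crude binary score; real evals use graded or model-based rubrics."""
    return 1.0 if expected in response.lower() else 0.0

def run_eval(model, cases, threshold: float = 0.9) -> dict:
    """Score the model over a case set and gate on aggregate quality.

    Unlike a functional test, this passes or fails on a quality
    threshold and is meant to rerun continuously as the model,
    prompts, or traffic mix changes.
    """
    scores = [score_response(model(query), expected)
              for query, expected in cases]
    mean_score = sum(scores) / len(scores)
    return {"mean_score": mean_score, "passed": mean_score >= threshold}

result = run_eval(toy_model, EVAL_CASES)
```

The design point is the threshold gate: a functional test asserts exact behaviour on fixed inputs, while an eval accepts that individual outputs vary and gates the release on aggregate quality instead.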
The investment thesis
TGC’s investment posture in this space is straightforward: capital deployed alongside engineers who have shipped AI-augmented product surfaces at scale and GTM operators who know how to sell AI-augmented enterprise software through procurement. We back founders building product where the AI amplifies the human, not the other way around - because we believe that’s the durable shape of the category, not the loud shape of it.
Related reading
AI adaptation and the new architecture of hyperscaling · Operator-led growth equity, explained · inamo portfolio page