
What Comes After TDD and BDD? Introducing Context Driven Development (CDD)

Mike Germain · Engineering Leader · 4 min read

Every few years, a new practice reshapes how we build software. The ideas that stick are the ones that help us work better together.

Test Driven Development (TDD) made testing the first step, so we could catch problems early and design better code. It paid off by reducing the cost of defects found late in the process.

Behavior Driven Development (BDD) gave us a shared language for what the system should do, keeping product and engineering aligned. It helped avoid expensive rework by catching misunderstandings before they reached the build stage.

Both practices worked because they made critical thinking visible earlier, and they reduced waste in the process.

Today, we’re facing a new shift.

AI is in the loop. We’re writing, reviewing, and reasoning with tools that depend on understanding why the work exists, not just what it does. At the same time, teams are moving faster than ever, and traditional documentation struggles to keep up.

We need a way to keep our assumptions, constraints, and decisions close to the code. Something structured, lightweight, and version controlled. Something that helps people and tools stay aligned as the work evolves.

It’s the same idea behind Connected Understanding, the first principle of the Connected Engineering Method: build shared understanding before solving the problem.

Now we need to bring that principle back into the codebase and make it part of how we work.

I’m calling this Context Driven Development (CDD).


What is CDD?

Context Driven Development is the practice of creating structured, in-repo documentation, but that’s only where it starts.

The goal is to give everyone involved in building software a shared, version-controlled source of truth. When context is clear, current, and close to the code, engineers, product managers, QA, and AI tools all benefit.

It’s about keeping everyone aligned as the work evolves.

  • For engineers, that means understanding the problem before writing the solution.
  • For product, it means being part of the technical conversation, not just reacting to the outcome.
  • For QA, it means testing against intent, not just requirements.
  • And for AI assistants, it means making suggestions that reflect how the system is supposed to work, not just how it happens to be written.

This isn’t about duplicating Jira tickets or writing documentation no one reads. It’s about capturing the assumptions, constraints, dependencies, and decisions that shape the work and keeping that context available as the code changes.

If the Connected Engineering Method is about building shared understanding before jumping into solutions, then Context Driven Development brings that understanding into the technical workflow where it can do real work.

It turns human insight into a durable part of the system. It lives in the repo, moves with the code, and stays available for the next decision, the next contributor, or the next tool in the loop.
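
As a purely hypothetical illustration (the file names and layout here are mine, not a prescribed CDD format; the next post walks through a real implementation), that context might live in the repo as a small set of version-controlled files:

```text
repo/
  context/
    problem-statement.md    # why this work exists, in plain language
    assumptions.md          # what we believe today, revisited as it changes
    constraints.md          # hard limits: compliance, performance, platform
    decisions/
      0001-short-title.md   # one brief record per significant decision
  src/
  tests/
```

The specifics matter less than the properties: structured, lightweight, version controlled, and living right next to the code it describes.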


Why Now?

Modern AI tools like Cursor and Copilot work best when they understand the intent behind your code. They’re not just autocomplete. They are trying to reason through your decisions.

But without context, they hallucinate.
Without structure, they guess.
Without a clear signal, they lose the thread.

Context Driven Development helps prevent that.
It keeps the reasoning close to the work, version controlled and ready to use. Not just by your teammates, but by the tools that are building with you.


What Comes Next

In the next post, I’ll show you what it looks like to implement Context Driven Development in practice. We’ll use a real-world example from an earlier blog post on AI hallucinations and customer support coaching.

If you’ve read that piece, you’ll remember we broke down a repeatable pattern to help language models stay grounded in source material, validated by human judgment, and designed for feedback.

Next, we’ll take that concept and build out the supporting documentation as if we were applying CDD from the start. We’ll show where the problem statement lives, how decisions and constraints are captured, and how the context stays available for both humans and AI.

Until then, I’d love to hear from you:

Is your team already feeling the need for this shift?

Are you writing your docs for humans, for tools, or for both?


Want to learn more? Stay tuned, or follow along at connectedengineeringmethod.com

Preventing AI Hallucinations Is Only Half the Story

Mike Germain · Engineering Leader · 3 min read

Someone recently asked me a great question:

How do you prevent AI hallucinations when using language models in sensitive use cases?

Let’s use customer support coaching as a real-world example to illustrate a better framing: not just avoiding hallucinations, but building a system that tells the truth and teaches from it.


The Use Case: Coaching Through Conversation

Imagine you’re analyzing customer support calls to understand why some agents consistently de-escalate tough situations or receive higher satisfaction scores.

At a glance, a language model might:

  • Extract direct quotes tied to turning points in the conversation
  • Summarize those quotes to highlight emotional shifts or key decisions
  • Score sentiment for both the agent and the customer, picking up on cues like confidence, empathy, frustration, or receptiveness

All of this is useful. But if you’re going to trust these insights, let alone coach your team based on them, you need a structure that reinforces trust.


A Pattern That Builds Trust

Here’s a repeatable pattern that blends AI with verification, human judgment, and feedback loops:

  1. Constrain the model’s inputs using Retrieval-Augmented Generation (RAG). This limits the model’s response to the actual transcript, not general knowledge, not vibes.
  2. Prompt the model to extract quotes and summarize only from those quotes. Don’t let it get creative.
  3. Validate the quotes using a tool like grep, cat, or fuzzy matching to confirm that what the model pulled actually appears in the source (see the sketch after this list).
  4. Run sentiment scoring on both the quotes and the summary, for both the agent and the customer.
  5. Compare those scores. If a quote sounds confident but the summary reads as uncertain, that mismatch might signal model drift. Internal consistency is a proxy for trust.
  6. Keep a human-in-the-loop during pilots. Experts review and confirm before you move to scale. No exceptions.
  7. Use validated insights for feedback loops. Highlight what language or tone consistently works, and turn that into coaching signals your team can act on.
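
To make steps 3 through 5 concrete, here is a minimal sketch in Python. It covers only the mechanical checks (quote verification and a sentiment consistency gap); the function names and threshold are my own assumptions, and the toy word-list scorer is a stand-in for whatever sentiment model or service your team already uses.

```python
# Illustrative sketch of steps 3-5; names, thresholds, and the toy scorer
# are assumptions, not part of the original pattern.
from difflib import SequenceMatcher

# Stand-in for a real sentiment model or service. Swap this out.
POSITIVE = {"thanks", "great", "appreciate", "resolved", "happy"}
NEGATIVE = {"frustrated", "angry", "cancel", "unacceptable", "upset"}


def score_sentiment(text: str) -> float:
    """Return a rough sentiment in [-1, 1]. Placeholder, not production logic."""
    words = text.lower().split()
    if not words:
        return 0.0
    return (sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)) / len(words)


def quote_in_transcript(quote: str, transcript: str, threshold: float = 0.9) -> bool:
    """Step 3: confirm a model-extracted quote actually appears in the source.

    Exact substring match first, then fuzzy matching over sliding windows
    to tolerate minor punctuation or whitespace drift.
    """
    if quote in transcript:
        return True
    window = len(quote)
    step = max(1, window // 4)
    for start in range(0, max(1, len(transcript) - window + 1), step):
        chunk = transcript[start:start + window]
        if SequenceMatcher(None, quote, chunk).ratio() >= threshold:
            return True
    return False


def sentiment_gap(quotes: list[str], summary: str) -> float:
    """Steps 4-5: compare average quote sentiment against summary sentiment.

    A large gap does not prove drift; it flags the item for human review.
    """
    avg_quote = sum(score_sentiment(q) for q in quotes) / len(quotes)
    return abs(avg_quote - score_sentiment(summary))


def review_report(quotes: list[str], summary: str, transcript: str) -> dict:
    """Bundle the checks into one report for the human in the loop (step 6)."""
    return {
        "unverified_quotes": [q for q in quotes if not quote_in_transcript(q, transcript)],
        "sentiment_gap": sentiment_gap(quotes, summary) if quotes else None,
    }
```

The specific matcher is not the point. The point is that every model-extracted quote gets checked against the source, and a large quote-versus-summary sentiment gap routes the item back to a reviewer instead of straight into coaching.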

From Compliance to Clarity

This pattern doesn’t just reduce hallucinations; it creates clarity.

Quotes give you precision. Summaries give you patterns.

By layering both, you gain the ability to cluster by communication style or emotional tone, even when the words themselves vary.

And when you give teams insights they can see, feel, and trust, they learn faster.

Not because the AI is magic, but because the system is designed to support truth, not just output.

Why This Matters to Connected Engineering

At its core, this is a Connected Engineering problem.

You’re aligning people, tools, and processes to build confidence in complexity.

You’re turning messy, emotionally loaded conversations into structured feedback.

This is what it looks like when you apply:

  • Connected Understanding — Include people in the loop, especially early.
  • Realness — Show your work. Don’t let models summarize behind a curtain.
  • Creative Tension — Trust systems that surface mismatches. That’s where learning lives.

The question isn’t just “How do we prevent hallucinations?”

It’s “How do we create systems where truth becomes teachable?”

Introducing Structured Story Points: A Practical Way to Estimate Engineering Work

Mike Germain · Engineering Leader · 4 min read

What are Structured Story Points?

Structured Story Points (SSP) is a practical evolution of traditional story point estimation. It keeps what’s useful about story points but adds structure to make them actually usable in real-world planning. Instead of guessing a number or debating between a “3” and a “5,” SSP gives teams a shared way to evaluate work using four focused questions.

The method builds on the idea that engineering work is shaped by four key factors: effort, complexity, uncertainty, and collaboration. By scoring against each, teams can size stories quickly and consistently without losing the flexibility that makes story points useful in the first place. A rough sketch of how the answers might roll up into a score follows the four questions below.


The Four Key Questions

1. Effort

How much actual work will this take?
Effort refers to the hands-on time needed to complete the task. That includes coding, testing, documentation, and anything that takes focused execution.

  • Minimal – Can be completed quickly with little effort.
  • Moderate – Requires sustained focus over a day or two.
  • Painstaking – Involves a high level of effort or repetition over a longer period.

2. Complexity

How difficult is the task to reason about?
Complexity is about how mentally or technically challenging the task is. It includes the number of moving parts and how much thinking it takes to do the work safely.

  • Simple – The work is easy to follow and unlikely to break anything.
  • Layered – Requires managing multiple concerns, systems, or logic paths.
  • Convoluted – Hard to understand or untangle without deep thought or review.

3. Uncertainty

How well do we understand what’s being asked?
Uncertainty captures how much is still unknown. It helps teams flag when a story needs clarification or research before work can begin.

  • Clear – The task is well understood and scoped.
  • Murky – Some assumptions or open questions still need to be resolved.
  • Uncharted – We don’t know enough to start and need a research or discovery task first.

4. Collaboration

Who needs to be involved to get this done?
Collaboration measures whether one person can complete the task alone or if it requires help from others inside or outside the company.

  • Solo – One person can complete the task independently.
  • Paired – Requires help from someone on your team.
  • Cross-team – Needs coordination with someone in another team.
  • External – Involves someone outside your organization.
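
Nothing above prescribes how the four answers combine into a single point value; the SSP Calculator linked below produces a score for you. As a purely illustrative sketch, assuming a simple additive mapping from each answer to its rank, a team could prototype the idea like this:

```python
# Illustrative only: the additive mapping below is an assumption for this
# sketch, not the formula behind the SSP Calculator.
EFFORT = {"minimal": 1, "moderate": 2, "painstaking": 3}
COMPLEXITY = {"simple": 1, "layered": 2, "convoluted": 3}
UNCERTAINTY = {"clear": 1, "murky": 2, "uncharted": 3}
COLLABORATION = {"solo": 1, "paired": 2, "cross-team": 3, "external": 4}


def structured_story_points(effort: str, complexity: str,
                            uncertainty: str, collaboration: str) -> int:
    """Combine the four answers into a single point value.

    Each answer contributes its rank; a real team might weight the
    dimensions differently or map the total onto an existing point scale.
    """
    return (EFFORT[effort.lower()]
            + COMPLEXITY[complexity.lower()]
            + UNCERTAINTY[uncertainty.lower()]
            + COLLABORATION[collaboration.lower()])


# Example: a day or two of focused work, layered logic, a few open
# questions, and help needed from a teammate.
print(structured_story_points("moderate", "layered", "murky", "paired"))  # -> 8
```

However the answers roll up, the value comes from discussing the four questions together, not from the arithmetic.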

Why Use Structured Story Points?

I’ve used this method for nearly 15 years, and I keep coming back to it for one reason: it helps teams run clean sprints. Clean sprints build trust with stakeholders because they show consistent follow-through. That trust becomes the foundation for real collaboration and planning.

SSP also gives your team what it needs to own its work. When velocity is consistent, teams can advocate for more autonomy or clearly show when they need more support. Estimation stops being performative and starts being useful.

This method works because it is:

  • Fast – Estimation takes minutes, not meetings
  • Repeatable – Shared criteria make it easy to calibrate across a team
  • Transparent – Everyone knows what’s behind the number
  • Flexible – Works across product, platform, and devops teams

Try the SSP Calculator

To make things even easier, I've built a tool that walks teams through each of the four questions and produces a score. Try it out on our Structured Story Points page.


Final Thoughts

Story points aren’t broken. They just need more structure. Structured Story Points give teams a way to estimate collaboratively, quickly, and with clarity. It’s not about getting the number perfect. It’s about building shared understanding so you can plan and deliver with confidence.

Try it out and let us know how it works for your team.

Welcome to the Connected Engineering Method Blog

Mike Germain · Engineering Leader · One min read

Welcome to the Connected Engineering Method blog! Here, we'll explore topics around engineering leadership, team building, and creating environments where technical innovation thrives through human connection.

Stay tuned for more insights and discussions about:

  • Engineering leadership principles
  • Team dynamics and collaboration
  • Technical innovation strategies
  • Building sustainable engineering cultures