April 2026

Building Agentic Workflows That Actually Work

BDR agents, a Chief of Staff bot, and what I learned about when AI agents help and when they just create more noise.

Over the last year I've built two agentic systems that are actually running in production. One is a fleet of BDR agents that handle outbound prospecting. The other is what I call the Chief of Staff agent, which handles a bunch of the administrative overhead that comes with managing an engineering org. Both work. Neither works the way I expected.

The BDR agent army

The idea was simple: automate the research and initial outreach that business development reps do manually. The agents pull data from intent signals and public sources, enrich it against CRM records, generate personalized email sequences, and manage the follow-up cadence. No human touches the outbound until a prospect responds.
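The stages above can be sketched roughly as follows. This is an illustrative outline, not the production system; every name, field, and default here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Prospect:
    email: str
    company: str
    signals: dict = field(default_factory=dict)   # intent + public-source data
    crm: dict = field(default_factory=dict)       # enrichment from CRM records

def enrich(prospect: Prospect, crm_records: dict) -> Prospect:
    """Merge CRM data into the prospect before any email is drafted."""
    prospect.crm = crm_records.get(prospect.company, {})
    return prospect

def build_cadence(start: date, steps: int = 3, gap_days: int = 4) -> list[date]:
    """Schedule follow-up send dates; no human touches these until a reply."""
    return [start + timedelta(days=gap_days * i) for i in range(steps)]

# Usage: enrich a prospect, then lay out the follow-up schedule.
p = enrich(Prospect("cto@acme.dev", "Acme"), {"Acme": {"stage": "Series B"}})
cadence = build_cadence(date(2026, 4, 1))
```

The key structural point is that enrichment happens before drafting, and the cadence is fixed up front so the follow-up loop needs no human input until a prospect responds.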

The first version was terrible. The emails were technically personalized but felt generic. “I noticed your company recently raised a Series B” is personalized in the same way a form letter with your name at the top is personalized. The agents were doing the easy part of the research (company size, funding, tech stack) and skipping the hard part (what specific problem would make this person respond to a cold email).

The fix was adding more context to the research step. We started feeding the agents recent job postings, engineering blog posts, and conference talks from the target company. That gave them enough signal to write emails that referenced something the recipient actually cared about. Response rates went from under 2% to around 8%, which is strong for cold outbound.
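A minimal sketch of that richer research step: fold the shallow firmographic data and the deeper signals into a single brief that the drafting step consumes. The field names and source labels are assumptions, not the real schema.

```python
def build_research_brief(firmographics: dict, deep_signals: dict) -> str:
    """Combine easy data (size, funding) with hard signals (postings, posts,
    talks) into one text brief for the email-drafting step."""
    lines = [f"{k}: {v}" for k, v in firmographics.items()]
    for source, items in deep_signals.items():
        for item in items:
            lines.append(f"{source}: {item}")
    return "\n".join(lines)

brief = build_research_brief(
    {"headcount": 120, "funding": "Series B"},
    {
        "job_posting": ["Hiring staff SRE to reduce on-call load"],
        "blog_post": ["Why we migrated off our monolith"],
    },
)
```

The deep signals are what let the draft reference a problem the recipient actually has, rather than a fact about their funding round.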

The Chief of Staff agent

As an engineering leader, I was spending hours every week on tasks that were important but repetitive: reading through Slack threads to catch blockers, summarizing PRs for stakeholder updates, pulling together incident timelines, drafting the weekly status email. None of this was hard. All of it ate time.

The Chief of Staff agent connects to Slack, GitHub, and our incident management system. Every morning it produces a summary of what happened overnight: PRs merged, incidents opened or resolved, threads where someone is blocked and waiting for a response. On Fridays it drafts a stakeholder update that I edit and send. It also flags when someone has been waiting more than 24 hours for a review or a decision.

The part that surprised me: the blocker detection is the most valuable feature. Not the summaries, not the status updates. The ability to surface “this person asked a question in a thread 26 hours ago and nobody responded” catches things I would have missed until the next standup. That alone cut the time I spent gathering status by about 60%.
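The blocker-detection heuristic itself is simple. A hedged sketch, assuming a flattened thread shape of `{"asked_at", "replies", "text"}` (the real agent reads Slack's API, not this structure):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(hours=24)  # the 24-hour threshold from the daily report

def find_blockers(threads: list[dict], now: datetime) -> list[dict]:
    """Flag threads where a question has gone unanswered past the threshold."""
    return [
        t for t in threads
        if t["replies"] == 0 and now - t["asked_at"] > STALE_AFTER
    ]

now = datetime(2026, 4, 10, 9, 0)
threads = [
    {"asked_at": now - timedelta(hours=26), "replies": 0,
     "text": "Can I get a review on the auth PR?"},
    {"asked_at": now - timedelta(hours=5), "replies": 0,
     "text": "Thoughts on the rollout plan?"},
    {"asked_at": now - timedelta(hours=30), "replies": 2,
     "text": "Is staging down?"},
]
blocked = find_blockers(threads, now)
```

Only the first thread trips the filter: it is both unanswered and past 24 hours. The five-hour question is still fresh, and the thirty-hour one already got replies.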

When agents don't help

Not everything should be an agent. I tried automating sprint retrospective summaries and it was worse than useless. The agent would summarize what people said without capturing why it mattered. A retro summary that says “the team discussed deployment frequency” tells you nothing. The value of a retro is the emotional context and the commitments people make to each other, and that doesn't compress well.

I also tried having an agent triage incoming support tickets and route them to the right team. It worked 80% of the time. The 20% it got wrong created more work than the 80% saved, because misrouted tickets sit in the wrong queue until someone notices. We went back to a human doing the initial triage with the agent as a suggestion tool rather than a decision maker.

What I've learned so far

Agentic workflows work best when the task has clear inputs, a well-defined success criterion, and a low cost of failure. BDR outbound fits: if an email is slightly off, nothing bad happens. Incident triage doesn't fit as well: if the routing is wrong, you lose time and trust.

Start with the agent as an assistant, not an autonomous actor. Let it draft, suggest, and flag. Let humans approve, edit, and decide. You can increase autonomy over time as you build confidence in the outputs. Trying to go fully autonomous on day one is how you end up with a system nobody trusts and everyone works around.

Have thoughts on this? I'd like to hear them: isser.akhil@gmail.com