Metrics and attribution is one of those marketing subjects that has been talked about a trillion times, yet all that talk hasn't made it any clearer, easier, or more actionable for practitioners. 

And if you are in B2B developer marketing, where attribution is murky at best, conversion paths are long, and cookies and ads get blocked, things are even less obvious. 

Why do I think I have something interesting to say here?

I spent the last few years running a high-functioning, data-driven dev marketing team. And in my previous life, I was a data scientist, wrote both my master's thesis in theoretical physics and my bachelor's thesis in finance on stats-heavy subjects, and worked as a day trader. This mix pushed me to think a lot about risk-taking, data-driven decision-making, metrics, and attribution.  

In this article, I’ll go into:

  • Why you even track marketing metrics
  • What KPIs a developer marketing organization should own
  • Leading vs lagging metrics
  • How to track attribution in developer marketing
  • What metrics to monitor at each stage of the developer journey

Let’s start with this. 

Why do you even track marketing metrics?

I believe there are only two good reasons to do it:

  • You track metrics for compliance and reporting
  • You track metrics to make (better) decisions

I don’t want to talk about the first one: if you track metrics because someone wants to see them, the situation is easy. They will tell you what they want to see and how. 

The more important situation is the second one. And ultimately, when people say they are data-driven or data-informed, they are (hopefully) talking about the second one. 

So what are the decisions you want to make based on developer marketing metrics?

For me, those are the following:

  • How should the marketing budget be allocated across dev marketing programs?
  • Should we continue doing what we are doing (are we making progress)?
  • Which dev marketing programs should be scaled up, down, or killed?

[Image: Marketing program allocation]

Ultimately you should look at data and metrics to make decisions. Otherwise, why do you even gather data in the first place?

So most of the time I treat metrics as quantitative signals that help me make better decisions around a program or activity. I believe signals are all we can realistically ask for, and whoever claims otherwise is likely selling you something. 

My approach is simple:

  • Look at data
  • Refine efficiency of marketing activities and programs
  • Refine allocation across marketing programs
  • Look at data

OK, but what metrics should you look at anyway?

What KPIs should a developer marketing organization own?

I recently saw a post from Kyle Poyar (and even put it in my newsletter) showing which department owns which KPIs. 

[Image: LinkedIn post from Kyle Poyar]

It was based on OpenView’s portfolio, and since OpenView invests mostly in PLG companies, many of which are dev tools, it is relevant imho. 

As for marketing, it seems that most often we own:

  • Acquisition metrics like new signups and leads
  • Self-serve revenue, but not free-to-paid conversion

While the first one is obvious, the second one is interesting.

If self-serve free-to-paid conversion has to happen for any revenue to come in, then why doesn’t marketing own it nearly as often?

I think it comes down to the tools each department can provide.

You get to that first revenue through product onboarding, docs, and initial enablement materials. That feels like product/growth.

Once that happens you need to grow accounts.

And when those accounts don’t talk to sales (and in self-serve they don’t, by definition), you need to keep growing awareness of features, adoption, and usage. Marketing can definitely help with that and often leads the way.

But let’s get back to acquisition. 

From talking to many dev tool startup founders/marketers, I see this as the #1 focus by far. Most often, what the dev tool company (and CEO) wants from marketing is to grow the number of:

  • free developer signups (for product-led motion)
  • demo requests (for sales-led motion)

Those should correlate strongly with revenue. To ensure that happens, you need a solid definition of a “Qualified {signup/demo}”. Imho this is crucial to aligning the org. 

You work on these qualification criteria with sales / growth / product and then focus on that number. The criteria could be a product action, ICP qualification, or something else. As long as you are aligned with the team and growing Qualified signups correlates with what happens down-funnel, you are good.  

Yes, you could look further down the funnel at pipeline or revenue. Yes, there are other metrics, like the # of Product Qualified Accounts in a PLS motion or self-serve upgrades for pure PLG. And yes, developer signups may not be the most meaningful intent signal for you. 

But that is why you qualify it together with other teams: to get a metric that correlates with long-term results but happens early enough in the customer journey that you can connect it to your marketing activities. 

Leading vs Lagging metrics

To me, this is the most crucial part that teams get wrong when doing data-driven optimization of their marketing programs. 

To get things straight: 

  • Lagging metrics are the end results, like Qualified signups/demo requests. You should use those to allocate resources to programs, calculate high-level program ROI, etc. Your board, CEO, and sales team typically only ever care about these.
  • Leading metrics correlate with lagging metrics but happen much earlier in the journey. Think of website visits or engagement on LinkedIn. Use these to optimize the activities within programs. This is what your marketing team cares about and optimizes daily. 

[Image: Leading vs lagging metrics]

Don’t confuse the two, for example:

  • Don’t treat LinkedIn engagement as a lagging metric that you report to your CEO. They don’t really care about it.
  • Don’t make Qualified signups the metric that devrels running weekly live coding sessions optimize for. Neither your devrel team nor your audience will be happy. 

Now, if you could reliably and quickly connect every activity to lagging metrics, you probably should. But the problem is that the end results, especially for enterprise products, happen so much later that you cannot use them to optimize the activity you are actually doing (and have control over). 

“Ok, ok, but I need to attribute my activities to revenue”

Let’s talk about that.

How do you track attribution in developer marketing?

My view on attribution is this:

  • There is a 0% chance of making it perfect
  • You can make it directional at best
  • It should help you make decisions, not defend your budget or existence

But there is this question that everyone running marketing has had to answer at least once… this month. 

How do you prove the ROI of this developer marketing program/activity?

I wrote a short article about just that, but the TLDR is this:

You treat it as a product and measure ROI at the program level. You optimize activities within the programs but don’t report ROI on them. 

Perhaps Rand Fishkin said it better though:

[Image: LinkedIn post from Rand Fishkin]

Yeah, well put. 

Now, that doesn’t mean you don’t track things. It means you track them through the lens of direction and decision-making, not the mirage of cleanly attributing every dollar. 

I personally like to use a combination of software attribution and self-reported attribution to guide my decisions. 

Self-reported attribution 

The famous “How did you find out about us?” field in registration or demo request forms. 

This method gets you the first, last, or most memorable touchpoint. You never really know which.

It is great for finding channels you didn’t know you didn’t know about. For example, this is how we found out that people actually use GPT search tools like perplexity.ai to compare dev tools. 

Implementation-wise, I really like this to be a free-text field with no suggestions apart from “Be as specific as you want”. One problem with this is that you get a lot of junk, a lot of typos, and overall unstructured text. We’ve put a machine learning model on top of it to classify inputs into two-level categories like “word of mouth / friend” or “ads / youtube”. 
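To make the classification step concrete, here is a minimal keyword-rule sketch. This is not the ML model mentioned above; the categories and keywords are made-up examples, but rules like these are a reasonable baseline before training anything.

```python
# Hypothetical two-level categories, keyed by keywords to look for in the
# messy free-text answers. Extend these rules from your own real responses.
RULES = {
    "word of mouth / friend": ["friend", "coworker", "colleague", "recommended"],
    "ads / youtube": ["youtube ad", "yt ad"],
    "search / google": ["google", "googled"],
    "community / reddit": ["reddit", "subreddit"],
}

def classify(answer: str) -> str:
    """Map a raw 'How did you find out about us?' answer to a category."""
    text = answer.lower().strip()
    for category, keywords in RULES.items():
        # First rule whose keyword appears in the text wins.
        if any(keyword in text for keyword in keywords):
            return category
    return "unclassified"

print(classify("A coworker recommended it"))     # word of mouth / friend
print(classify("found it on devops subreddit"))  # community / reddit
```

In practice you would run something like this (or a trained classifier) over exported form responses and aggregate the category counts.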

[Image: How to implement self-reported attribution]

A common objection I hear is that “another question in the form will hurt conversions”. While the “more fields, less conversion” argument is generally true, it mostly applies to low-intent conversion events like ebook downloads. For high-intent conversions like demo requests, it has little to no effect on the people you actually want to attract. After all, do you really see yourself deciding not to evaluate one of five infra platforms because you didn’t want to type “found it on the devops subreddit” into a form field?

Software attribution 

This is what you get from your analytics software based on referral information, landing/exit pages, cookie tracking, and UTM links. 

This method over-indexes on “direct” or “google” traffic.  

There are probably six million ways of using software attribution ranging from super simple to very complex (like marketing mix modeling). 

But over the years, I converged on keeping it as simple as possible while still being able to make decisions. I look at:

  • Landing pages and referrals, to connect the first interaction with programs
  • Traffic to high-intent pages coming from the programs identified above
  • Conversion rates and the # of lagging metrics (Qualified signups) from these programs
  • Program-specific leading metrics that correlate with lagging metrics (YT watch time, # of visitors to blog article groups, etc.)
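The first bullet, connecting landing pages and referrals to programs, can be sketched as a simple mapping function. The program names and URL patterns below are made up for illustration; only the UTM parameter names are the standard ones.

```python
# Assign a visit to a marketing program from its landing URL and referrer.
# A sketch with hypothetical program names and URL patterns.
from urllib.parse import urlparse, parse_qs

def program_for_visit(landing_url: str, referrer: str = "") -> str:
    parsed = urlparse(landing_url)
    utm = {k: v[0] for k, v in parse_qs(parsed.query).items()}

    # UTM tags are the most reliable signal when present.
    if utm.get("utm_medium") == "paid":
        return f"paid / {utm.get('utm_source', 'unknown')}"
    # Fall back to the landing page path...
    if parsed.path.startswith("/blog/"):
        return "content / blog"
    # ...and finally to the referrer domain.
    if "youtube.com" in urlparse(referrer).netloc:
        return "content / youtube"
    return "direct / unknown"

print(program_for_visit(
    "https://example.com/blog/post?utm_medium=paid&utm_source=linkedin"
))  # paid / linkedin
```

Most analytics tools do a version of this for you; the point is that the mapping from raw referral data to your program names is something you define, not something the tool knows.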

Then we optimize against the last-quarter or last-year baseline and focus on the marketing itself. 

Do I get full attribution on everything? No chance. But I can see progress, promising programs, and campaigns that didn’t work. And I can allocate our budget in the way I believe makes the most sense. That is all I need. 

I recently came across this simple framework that I really liked, too. 

Basically, you divide your attribution into three parts.

Signups by Source: 

  • Where did the conversion happen? What converted people? 
  • Things like: Account creation, Demo Request, Content Download, Webinar registration, Partner registration etc.
  • You drill down by invited or not, particular forms, partners etc. 

Signups by Channel: 

  • How did they arrive at my site?
  • Things like: Organic, Direct, Paid Search, Paid Social, etc.
  • You drill down by campaign name, creative, platform name, etc.

Signups by Landing page group: 

  • What attracted them to my site?
  • Things like: homepage, features, blog, templates, white papers, webinars etc. 
  • You drill down by particular landing pages or by subgroups within groups, like blog categories, whitepaper focus, etc.

And then, depending on the question you want to answer or the decision you want to make, you look at it high-level or drill down. 
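The three-part breakdown is really just one list of signups grouped along three dimensions. A minimal sketch with made-up data:

```python
# Group the same signup records by source, channel, and landing page group.
# The records and values here are hypothetical examples.
from collections import Counter

signups = [
    {"source": "Account creation", "channel": "Organic",     "landing_group": "blog"},
    {"source": "Demo Request",     "channel": "Paid Search", "landing_group": "homepage"},
    {"source": "Account creation", "channel": "Direct",      "landing_group": "homepage"},
]

for dimension in ("source", "channel", "landing_group"):
    # One Counter per dimension gives you the high-level view;
    # filtering `signups` first gives you the drill-down.
    counts = Counter(s[dimension] for s in signups)
    print(dimension, dict(counts))
```

In a real setup this would run over your analytics export or warehouse table rather than a hard-coded list, but the shape of the question is the same.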

But there may be other goals you want to optimize for that lead to those signups, right?

What metrics to monitor for each stage of the developer journey

I like to think in the following stages:

  • Awareness
  • Interest/intent/demand
  • Conversion

There are other options for choosing dev journey stages, like exploration / demo / internal sale, that make a ton of sense too; just adjust the metrics you choose accordingly.  

On top of that, you have higher-granularity, program-specific metrics for activity optimization. 

Depending on your goals for the year, it may actually be more helpful to focus on pushing one of these stages rather than always the conversion stage (though indirectly you will always move that too). 

With that in mind here are some solid options to consider.

Awareness:

  • Traffic to home
  • Branded search
  • Share of voice
  • Social mentions
  • Google trends results

Interest/demand:

  • Traffic to high-intent pages like pricing, contact us, demo, or terms of service (coming from your site or other sites)
  • Signups
  • Downloads

Conversion:

  • Activated signups
  • Qualified signups
  • Demo requests
  • Qualified accounts
  • Revenue pipeline (expected deal size x qualified accounts)
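
The revenue pipeline formula above is straightforward arithmetic. A quick worked example with made-up numbers:

```python
# Revenue pipeline = expected deal size x qualified accounts.
# Both inputs here are hypothetical illustration values.
expected_deal_size = 20_000  # average expected deal size in $
qualified_accounts = 15      # accounts qualified this quarter

revenue_pipeline = expected_deal_size * qualified_accounts
print(revenue_pipeline)  # 300000
```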

Some program-specific metrics:

  • Blog: share of search for core keywords, conversion to signups / high-intent pages, traffic to problem/solution/product-aware articles
  • Socials: impressions, ICP followers, # of comments from ICP
  • YouTube: watch time, impression CTR, subscribers

What is next

Now you should have a better idea as to what metrics to look at for each program and stage. 

Start with aligning with the team on the core lagging metric and qualification criteria for your signup/demo request. 

Then, set up program/stage metrics depending on what you are focusing on right now. Keep it as simple as possible, but not simpler ;) 

If you need help with this, reach out and I’d be happy to help you set this up. 

More resources on developer marketing metrics

My favorites

Other

Also, I have a big list of resources like this (400+ and counting). Subscribe to my newsletter and I’ll send it to you in the welcome email.