devtrials
A process proposal for a stepping stone between I2I and I2E
co-conspirators: @reillyg, @costan, @slightlyoff, @panicker, @sshruthi, @ikilpatrick, @petele, @scheib
Status: formally part of the Blink launch process.
TL;DR:
This is a proposal to publicly introduce an optional milestone in the Blink launch process, called Dev Trials, where web developers can access a developer preview of a feature under development (in canaries, behind a flag). It sits between an Intent to Implement (I2I) and an Intent to Experiment (I2E). It decreases the latency to test product market fit from quarters to weeks, and shrinks iteration cycles from months to days. After iterating in Dev Trials, features continue normally towards an Origin Trial and then shipping. Using this milestone should help teams test product market fit faster and with fewer development resources.
Problem Statement
In our launch process, the first publicly observable milestone* after an intent-to-implement is an origin trial, rightfully gated on launch approvals because it hits real users.
(* "Behind a flag" is an official status on chromestatus.com, but is inconsistently used, not documented as part of the launch process, not publicised, and with unclear criteria.)
It takes, on average, 2 quarters to go from I2I to I2E, with a standard deviation of 1-2 quarters [1] [2].
From an Intent to Implement, the challenges to reach an Origin Trial are:
- It conflates two audiences: Origin Trials test both developer and user product market fit. Because they encompass users' acceptance of a feature, they (rightfully) maintain a high bar in terms of launch bits (privacy, security, permissions, etc.).
- Because of the high bar to protect users, it is hard to estimate: it can take anywhere from 2 to 4 quarters (see supporting data) to go from Intent to Implement to an Origin Trial, due to the unknowns one has at the conception of a project: architectural choices, yak-shaving, privacy, security, UI strings, accessibility, TAG reviews and, critically, unknown-unknowns.
- Because of the high bar to protect users, the feedback loop with developers has low cadence and high latency: specifically, six-week periods from branch point to branch point, and a two-month latency from branch point to hitting real users in the stable channel, which bounds the minimum time it takes to respond to feedback. In the best case, a developer gives feedback on the day of the branch cut and we submit a CL that same day, giving a two-month response latency; in the worst case, the developer gives feedback on the first day after a branch cut and waits six weeks for the next branch to be cut plus two months for it to roll out, totaling 3.5 months.
- Because it has low cadence and high latency, its exit is unpredictable: the release cycle makes us slow to react and slow to coordinate. We often line up partners to adopt/implement APIs after an origin trial begins, which can make it unpredictable when the origin trial ends (causing extensions (example) and anxiety).
This causes frustration both for those planning (because features are unpredictable and slip) and for those responsible for giving predictions (early on, Origin Trials are so far off that any milestone is an arbitrary one, e.g. "asking the color of the fence while we are still building the foundation").
Importantly, because of the low cadence and high latency of the stable channel, feature teams engage with partners much later than they need to, which often results in feature development before product market fit is established (e.g. solving the wrong problem).
Proposal
This is a proposal to formalize a stepping stone between Intent to Implement and Origin Trials, which we are calling Dev Trials (see other name options below), to try to:
- Guide sequencing strategies downwards, paving a codified path from institutional knowledge
- Make milestone planning upwards more predictable for planners (e.g. TPMs and TLMs)
- Unblock cross-functional outreach (i.e. PM and BD) earlier
- Increase responsiveness to developer/partnership feedback (from 2.5 months to days)
- Enter Origin Trials with more work done (decreasing chances of extensions)
- Catch blind spots earlier on
In this proposal, we break Origin Trials into two smaller parts with two distinct goals: testing developers and testing users.
Mechanically, a developer preview is the artifact produced to enter Dev Trials. A developer preview is distributed via the canary channel, whose builds are:
- released daily (at least), and
- controlled behind a flag (chrome://flags).
A developer preview is a build in the canary channel that is worth putting in front of a developer (i.e. it has enough meat that a developer using it would produce constructive data points) but, importantly, is not necessarily ready to be put in front of that developer's users (being controlled behind a chrome://flags entry is the mechanism that holds that line).
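To make "behind a flag" concrete, here is a minimal sketch of how a developer sample might hold that line. The fooBar name and navigator.fooBar surface are hypothetical stand-ins, not a real API; "Experimental Web Platform features" is the real chrome://flags toggle commonly used for features behind a flag.

```ts
// Hypothetical sketch: a Dev Trials demo app guarding a preview API that
// only exists in Canary when the relevant chrome://flags entry is enabled.
// "fooBar" is a made-up name standing in for the feature under development.
function initDemo(): void {
  if ('fooBar' in navigator) {
    // The preview API is present: exercise it and collect feedback.
    (navigator as any).fooBar.start();
  } else {
    // Wrong channel or flag is off: point the developer at the HOWTO
    // instead of failing silently.
    console.warn(
        'fooBar is not available. Use Chrome Canary, enable ' +
        '"Experimental Web Platform features" in chrome://flags, ' +
        'and restart the browser.');
  }
}

initDemo();
```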
The developer preview is used to co-design the feature with partners.
Entrance criteria
A team enters Dev Trials when it has collected the following artifacts:
- Minimal (but viable) API surface (e.g. doesn’t cover corner cases / quirks)
- Minimal (but viable) browser implementation (e.g. doesn’t cover all platforms)
- Minimal (but viable) developer sample (e.g. a demo app built with devrel)
- Minimal (but viable) developer instructions (e.g. a HOWTO in a GitHub repo)
- Minimal (but viable) partnership plan (e.g. a conjecture around the players in the space)
- Minimal (but viable) service level agreement in terms of regressions (e.g. automated tests)
- Launch approvals kicked off (but not necessarily granted yet)
Notably, as opposed to Origin Trials, a feature in Dev Trials can lack the following:
- Final UI and/or strings
- A permission model
- A security model
- Final API surface (e.g. TAG reviews)
- Final launch approvals
During Dev Trials, one gets:
- A super low bar to start (e.g. O(weeks) from intent-to-implement)
- A super speedy/daily release channel to address feedback
- A sample of ergonomics
- A sample of demand / incentives
Exit criteria
One exits a developer trial and enters an origin trial when:
- Launch bugs (e.g. privacy, security) have been approved and implemented
- Partners have a working prototype in production (guarded by feature detection and enabled behind a server-side flag; see the sketch below) and are eager to deploy to their users via the Origin Trial mechanism. Expectations about API firmness are the same as at Origin Trials, that is, none: a clear expectation should be set that the API can still change.
At that point one enters Origin Trials with the goal of validating how users, rather than developers, react to the feature. During Dev Trials, partners have already implemented the feature and are only waiting to roll it out to their users (which depends on a production binary release), so most of the remaining feedback is not about incentives/costs/ergonomics but about whether the users of their services will take it well.
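As a sketch of the exit-criteria pattern above (feature detection plus a server-side flag), assuming a hypothetical navigator.fooBar API and a made-up fetchServerFlag helper standing in for the partner's real experiment service:

```ts
// Stand-in for the partner's experiment/config service; the endpoint and
// payload shape are made up for illustration.
async function fetchServerFlag(name: string): Promise<boolean> {
  const res = await fetch(`/flags/${name}`);
  return res.ok && (await res.json()).enabled === true;
}

// Hypothetical sketch: a partner's production code path for a Dev Trials
// feature, deployable before the browser feature ships anywhere.
async function maybeUseFooBar(): Promise<void> {
  // Feature detection keeps the code path dead in browsers (and channels)
  // where the preview API does not exist.
  if (!('fooBar' in navigator)) return;

  // The server-side flag lets the partner ramp the feature up or down
  // without shipping a new release, independently of the browser.
  if (!(await fetchServerFlag('foo-bar-rollout'))) return;

  (navigator as any).fooBar.start();
}
```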
Specifically, one enters origin trials with:
- Sample of Demand: early partnership interest and commitment already in place
- Sample of Ergonomics: early API surface validated
- Rollout: Early implementation available to roll out, including what metrics to collect
Relationship with Origin Trials
Dev Trials have a love relationship with Origin Trials and augment (rather than replace or dismiss) them. We hope Dev Trials can help one enter OTs with more confidence (e.g. with product market fit established) and with more work done (e.g. partners ready to deploy).
When one exits Dev Trials, one enters Origin Trials and proceeds as usual.
Here are a few concrete notable differences and what each gets you:
| Features | Dev Trial | Origin Trial |
| --- | --- | --- |
| Channel | canaries | stable |
| Release latency | 2 days | 2 months |
| Release frequency | 1 day | 6 weeks |
| ETA from I2I | O(weeks) | O(quarters) |
| Goal | Validate developers: incentives, costs, ergonomics, ROI | Validate users: permissions, friction, UX |
| Entrance criteria | Minimal but viable API surface, browser implementation, and partnership interest; launch reviews kicked off | Full API surface and browser implementation; partnership ready to roll out; launch reviews approved |
| Developer audience | 1-5 developers | 1-50 developers |
| User audience | 0 | O(100M) |
What does success look like?
We'll be monitoring the blink-dev channel to quantify whether this procedural device contributes positively to the team.
We don’t know yet exactly how to do this (and we can’t stress this enough), but here are a few conjectures that we’ll be monitoring over the years:
- Will Dev Trials make the time between I2I and I2E more predictable (possibly measured in terms of standard deviation)? It currently averages 2 quarters which, let's agree, is really high.
- Will Dev Trials help us kill projects more quickly, before paying the I2E costs? With that time freed up, can we increase the number of things we work on and increase the volume of I2Ss?
We'll monitor the delta (in time) between I2I and I2E events in blink-dev and see whether the standard deviation decreases over time once Dev Trials are established.
We also plan to monitor the volume of intent-to-X emails in blink-dev and see if projects are dying more quickly (i.e. more effectively): once Dev Trials are established, (1) we expect I2Ps to start showing up and I2Es to start declining (because bad ideas will have died earlier). If engineers stop wasting time on I2Es that should have died earlier, maybe they can increase the volume of ideas they try, increasing I2Is and I2Ps and (2) further decreasing I2Es. At some point (3) we reach an equilibrium again, hopefully with the positive net impact of increasing I2Ss.
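As a rough sketch of that first measurement (pairing I2I and I2E threads per feature is hand-waved here; the IntentPair shape is made up for illustration):

```ts
// Hypothetical sketch of the monitoring above: given I2I/I2E dates paired
// per feature (scraped from blink-dev; the pairing itself is omitted),
// compute the mean and standard deviation of the I2I-to-I2E delta in days.
interface IntentPair {
  feature: string;
  i2i: Date;
  i2e: Date;
}

function deltaStats(pairs: IntentPair[]): { meanDays: number; stdDevDays: number } {
  if (pairs.length === 0) throw new Error('no intent pairs to measure');
  const msPerDay = 24 * 60 * 60 * 1000;
  const deltas = pairs.map(p => (p.i2e.getTime() - p.i2i.getTime()) / msPerDay);
  const mean = deltas.reduce((a, b) => a + b, 0) / deltas.length;
  const variance =
      deltas.reduce((a, d) => a + (d - mean) ** 2, 0) / deltas.length;
  return { meanDays: mean, stdDevDays: Math.sqrt(variance) };
}
```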
These are highly speculative conjectures and we have absolutely no idea how this is going to go. Nevertheless, it seemed useful to show in what terms we are trying to help.