The Web has grown without paying much attention to digital advertising, and conversely ad technology has evolved without paying much heed to the fundamentals of Web architecture. This mutual ignorance has been the source of growing tension. We outline a set of requirements, detailed as a high-level evaluation framework, to resolve these tensions, make advertising a first-class citizen of the Web, and to design advertising technology as Web technology. The end goal is for the Web to be the best advertising platform that it can be, and better than any other.
Digital advertising is a key business model that supports the production of content on the Web and subsidises access to it. As an advertising context, the Web has a solid foundation. It is an environment designed for trust, and a trustworthy environment is a good one to advertise in.
Unfortunately, however, advertising has not been treated as a first-class citizen on the Web, and as a result the system of digital advertising that developed as a side-cart to the Web sits uncomfortably with the Web's overall architecture and with the expectations of its users.
This has led to substantial tension. Digital advertising is largely done adversarially with respect to users, with lip service paid to dubious notions of control or consent but a decidedly cavalier approach to users' computing resources and privacy. The current system is also failing to provide a sustainable business model atop which publishers can create quality content and is proving inefficient for advertisers, much of whose money is lost to fraud as well as opaque layers of intermediation and arbitrage.
The scale of the Web, of advertising, and of what rides on both makes this set of issues challenging. We can, however, fix this.
The Advertising Use Cases document already does a fine job of detailing some features of digital advertising that can usefully be maintained [[ADVERTISING-UC]]. But reproducing the status quo with new technology, even if it offers a light sprinkling of improvements for privacy, is insufficient. We need to go beyond improvements at the margins and work on turning the Web into the best digital advertising platform.
Let us be frank: there is little doubt that every Web technologist who has ever looked under the hood of how ads work has reflexively recoiled in horror. Everyone who has used the Web has been confronted with sites plastered with ads. And that's just the surface. Starting from this frank assessment, it can be hard to see how one may agree with a key assumption in this document: that ads can, in fact, be good.
The Web can, and does, support multiple business models, but none of the alternatives have proven able, on their own, to make up for the shortfall that the disappearance of advertising would constitute. Advertising is here to stay; the question is how to implement it in a way that matches the Web's architectural values. The Web was designed for broad access, and ensuring that advertising can be a key sustainable contributor to the Web is entirely in line with that goal.
If we look at the content and services available as a whole on the Web, we can consider them to constitute an attention commons. It is the collective public sphere to which we pay attention. In this view, advertising can be understood as a tax on the attention commons, the purpose of which is to support the maintenance and improvement of our collective attention commons in return.
This view of advertising comes with consequences. The first is that advertising is for end-users. Improving the attention commons serves no purpose other than to improve the lives of people, and that is what advertising must be for. It must also, eventually, be presented to users. Today's advertising ecosystem treats the user as an adversary, a target whose data, attention, and computing resources intermediaries are entitled to. This breaks the promise of trust which is at the heart of the Web [[RFC8890]]. Browsers occupy the right architectural position to counter this adversarial approach to users. Digital systems, thanks notably to automation, have asymmetric power over users, whose time and cognitive resources are necessarily limited (and who have better things to do with their time than defend their data). The browser needs to act as the user's digital arm, serving as their fiduciary in all its interactions. In order to avoid exceedingly coarse responses from browsers (ad-blocking vs. anything goes), we need to provide them with more machine-readable information about advertising so that they can more readily automate their handling of ads on the Web.
The second is that advertising must be sustainable. It must support and help improve the attention commons. An advertising ecosystem that fails to sustain publishers, which is what we are seeing in today's system, is one that is poorly designed.
This leads directly to a third consideration which is that advertising must be effective. The more effective it is, the less of it is needed. The more attractive it is to advertisers, the better we can improve the attention commons with limited taxation of attention.
Finally, a system of attentional taxation must, by its very nature, have very strong properties of governance and accountability. Users must know how their attention is used, advertisers must know where their money goes, publishers must control what runs on their sites. Today's ecosystem is very far from exhibiting such properties.
Designing technology at scale will always uncover tension between stakeholders. One key tool in the consensus-building tool box of Web governance is to establish a priority of constituencies. The HTML Design Principles famously indicate that in making standards we should consider “users over authors over implementors over specifiers over theoretical purity” [[html-design-principles]]. This is reinforced by the TAG's Ethical Web Principles [[ETHICAL-WEB]]. These provide a useful starting point but they do not map exactly to the list of stakeholders involved in digital advertising.
An appropriate priority of constituencies for advertising on the Web is: users over publishers over advertisers over intermediaries. This is not an arbitrary order. It is an architectural pillar of the Web, and indeed of the Internet as a whole, that users come first [[RFC8890]]. The purpose of advertising is to produce content and services as part of the maintenance and improvement of the attention commons, therefore it is logical to consider publishers next in order. Advertisers are the ones who need to be convinced to put their money into the system, their buy-in is next. Intermediaries come last, which is not, of course, to say that they are unimportant. We do not consider the relative position of specifiers, who are doing this to themselves, or of theoretical purity, who hasn't been seen anywhere near either the Web or digital advertising in decades.
It is worth noting that many of the challenges that confront us, as we try to bring about a world in which advertising and the Web are designed to work with each other, stem from the fact that the priority of constituencies prevailing in today's digital advertising is the exact reverse of the one we need to work from.
The properties in this section are intended to be used to evaluate Web advertising technologies alongside the priority of constituencies described above.
The focus of this initial pass is primarily on topics that have the highest priority, but a number of further properties need to be added, notably some from [[ADVERTISING-UC]].
A key principle of the Web's architecture is that the browser works for the user; it is the user's agent. The power of the browser is important for two reasons. First, users are the most important constituency and must be protected. Second, and most important, users are at a natural disadvantage when faced with complex and pervasive technology. They lack the time and cognitive resources to protect themselves against abusive or dangerous behaviour. The automation and rules offered by the browser are intended to level the playing field in service to the user.
Bringing browsers to the prominence they need to be given in order for digital advertising to evolve towards operating according to the Web's architecture is naturally a source of tension and anxiety.
First, granting users a seat at the table in digital advertising is new, as they currently have none. Since users have a different idea of how advertising should work, this will naturally lead to change.
Second, there is precedent for browsers being used as tools of market dominance. We must be candid that many constituencies have understandable concerns that browser vendors may be tempted to leverage their position in the browser market to develop browsers that act in ways that are more self-dealing than in a user's interests. Users have, in the past, been unwittingly weaponised for market dominance.
A well-governed advertising ecosystem needs to foster trust for the users and amongst stakeholders, and trust can be built with clear binding commitments.
The ecosystem MUST require user agents to act as the users' fiduciaries. This benefits users, whose interests are inherently put first, as well as other stakeholders who can trust that self-dealing at the user-agent level (browser, OS, SSO) is ruled out. We recommend the elaboration of a strong, clear, and binding User / Agent Covenant in order to address this requirement.
The Web is built on trust. This architectural pillar is described particularly well in The Internet is for End Users [[RFC8890]], which is worth citing at length:
User agents act as intermediaries between a service and the end user; rather than downloading an executable program from a service that has arbitrary access into the users' system, the user agent only allows limited access to display content and run code in a sandboxed environment. End users are diverse and the ability of a few user agents to represent individual interests properly is imperfect, but this arrangement is an improvement over the alternative — the need to trust a website completely with all information on your system to browse it.
Defining the user agent role in standards also creates a virtuous cycle; it allows multiple implementations, allowing end users to switch between them with relatively low costs (although there are concerns about the complexity of the Web creating barriers to entry for new implementations). This creates an incentive for implementers to consider the users' needs carefully, which are often reflected into the defining standards. The resulting ecosystem has many remaining problems, but a distinguished user agent role provides an opportunity to improve it.
This articulates a key point: users can trust the Web because they can use it without having to trust each and every website they visit. This trust stems from the fact that the browser is the user's agent and is expected to be loyal to the user's interests, to “consider the users' needs carefully”. To somewhat belabour the point: the Web is an environment of trust because browsers are entrusted with ensuring that sites are as restricted as they can be in their ability to betray the user's trust.
A violation of privacy is a betrayal of trust. Users are adamant that they expect their data not to be shared with third parties [[EUROBAROMETER-443]]. In order to protect users' privacy, Web technology needs to proactively prevent users from being recognised across contexts.
Issues of surveillance in the data economy are well documented ([[SURVEILLANCE-CAPITALISM]], [[PRIVACY-IS-POWER]], [[PRIVACY-PROJECT]]). People's personal data is traded in markets without users having been given anything remotely resembling a genuine say in the matter. On occasion, there are attempts to paper over this issue with so-called "transparency and choice", an approach which Lindsey Barrett aptly describes as "a method of privacy regulation which promises transparency and agency but delivers neither." [[CONFIDING]]
The ecosystem MUST provide strong guarantees that users cannot be recognised across contexts such that their data cannot be reused outside of the first party and its service providers. This benefits users, whose agency is respected, publishers, who remain able to monetise their audience, and advertisers, who can deliver advertising in a safe, trustworthy environment.
One important aspect of interacting with a website is the ability to log into it, as well as to receive asynchronous messages from it (over email or push, typically). Establishing identity with a website, however, currently does not benefit from the expectations of trust that users have when simply browsing. Users hesitate to provide their email address out of (entirely justified) concerns that it may be used for purposes other than those which they expect from that website, such as spam or data leakage when that email address is repurposed as a cross-context identifier. The ecosystem MUST provide a way for users to establish an identified relationship with a website that offers the same guarantees of trust as simply navigating to that site. We recommend the elaboration of a standard for Messageable Opaque Identity (MOI) that enables users to log into websites without risking being reidentified across unrelated contexts but to nevertheless receive messages from this site, with trivial and guaranteed opt outs from these communications.
Modern Web services are complex and often have to be assembled using providers from multiple origins. When these additional origins are service providers that work directly for the first party and may not use any data collected in that interaction for purposes other than serving the first party, they create no privacy issue. However, when they are third parties and free to reuse that data in other contexts, they violate users' privacy. A problem with today's Web technology is that it makes it impossible to distinguish between the two. The Global Privacy Control [[GPC]] helps improve this situation, but it can only apply when jurisdictions render it enforceable, and the expectation remains that users must turn it on. We recommend the development of an HTTP header that an additional origin can use to indicate that it is acting as a service provider. While blocking third-party data controllers entirely would prove difficult (as it may break the Web), other means of pressure could be used (such as speed throttling, switching to VPNs, limiting access to features, or limiting the privacy budget). This enforcement of purpose limitation would do much to render the Web more trustworthy and sustainable.
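As a rough illustration of the header idea above, the sketch below shows how a browser might classify an additional origin depending on whether it declares itself a service provider. The header name (`Sec-Service-Provider`) and its structured value are assumptions for illustration, not an existing standard, and the enforcement responses named in the comments are the ones suggested in the text.

```python
# Hypothetical sketch: browser-side classification of an additional origin
# based on an assumed "Sec-Service-Provider" response header. The header
# name and value are illustrative; no such standard currently exists.

def classify_origin(response_headers: dict) -> str:
    """Decide how the browser might treat this additional origin."""
    if response_headers.get("Sec-Service-Provider") == "?1":
        # Origin declares it works solely on behalf of the first party
        # and may not reuse data in other contexts.
        return "service-provider"
    # No declaration: treat as an independent data controller, to which
    # pressure (throttling, feature limits, privacy budget caps) may apply.
    return "third-party-controller"

assert classify_origin({"Sec-Service-Provider": "?1"}) == "service-provider"
assert classify_origin({}) == "third-party-controller"
```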
The advertising ecosystem is so opaque that no one can agree on exactly how much fraud there is; the only consistent finding across studies is that it is astoundingly high. Juniper Research estimates that ad fraud cost $42 billion in 2019 (growing over 20% year-on-year) and is on pace to exceed $100 billion by 2023 [[JUNIPER-AD-FRAUD]]. Even by the most conservative estimates, ad fraud is a leading form of computer crime.
The impact of this fraud is felt by both advertisers, in lost effectiveness, and by publishers, in lost revenue. Intermediaries make money either way, and not all are working towards the type of increased security and transparency that are needed to combat fraud.
The elimination of fraud may be the biggest reservoir of ad effectiveness that can be tapped. Fraud will always to some degree be a cat-and-mouse game but, given the structure of the market's incentives, the architecture of Web advertising needs to be such that it empowers the parties that suffer most from it — advertisers and publishers — to be in a position to combat it directly.
The ecosystem MUST provide reasonable means to ensure that a user is human and MUST support end-to-end reconciliation of impressions between advertisers and publishers. For instance, trust tokens ([[TRUST-TOKEN-EXPLAINER]]) could be used in order to ascertain the validity of ad impressions, and a single source of truth for this information would be provided to all parties.
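The end-to-end reconciliation requirement can be pictured as both sides comparing impression records against a single source of truth. The sketch below is illustrative only: the impression-ID scheme and record shapes are assumptions, and in practice the comparison would run over attested, token-validated records rather than raw sets.

```python
# Illustrative sketch of impression reconciliation between an advertiser's
# and a publisher's logs, using shared impression IDs (an assumed scheme).
# Discrepancies on either side are candidates for fraud investigation.

def reconcile(advertiser_log: set, publisher_log: set) -> dict:
    return {
        # Seen on both sides: can be billed with confidence.
        "confirmed": advertiser_log & publisher_log,
        # Billed to the advertiser but never rendered by the publisher: suspect.
        "unmatched_advertiser": advertiser_log - publisher_log,
        # Rendered by the publisher but never billed: suspect.
        "unmatched_publisher": publisher_log - advertiser_log,
    }

report = reconcile({"imp-1", "imp-2", "imp-3"}, {"imp-2", "imp-3", "imp-4"})
assert report["confirmed"] == {"imp-2", "imp-3"}
assert report["unmatched_advertiser"] == {"imp-1"}
```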
There are many different types of bad ads: ads that are malicious from a security standpoint (eg. trying to load malware), that attempt to defraud users, that expose users to physical danger (eg. flashing content), that violate a publisher's ad policies, that annoy users, or that make excessive use of the user's computing resources.
The parties with the greatest interest in policing bad ads — users and publishers — are also the parties least equipped under the existing system to act on them.
A primary impediment to accountability in ad creatives is the fact that they are produced through content and script injection, and load arbitrary third-party resources that can be changed dynamically on the server side and react dynamically on the client side. This makes analysing the behaviour of ads particularly difficult, and makes it hard for browsers to control ads beyond very coarse block-or-not decisions. When users see a bad ad it is often either impossible for them to report it (because the context is already gone and they can't rely on dev tools, often just producing a screenshot about which very little can be done) or they have to rely on the ad reporting tools offered by intermediaries, and those are often slow, bureaucratic, and ineffective.
Beyond resilience to bad ads, publishers must be able to guarantee the sourcing of every piece of content that they put in front of users. In the current system, publishers are largely unable to make any guarantees about their supply chain; at best, they can pass a signal indicating their preferred configuration and pray. A trustworthy system requires farm-to-table accountability.
The coming [[DSA]] will require traceability and accountability about how ads are shown, who is paying for them, and who is causing them to be shown. We recommend building a user-first, Web-quality standard to meet this regulatory requirement. In this system, detailed machine-readable information about ads makes it possible to support fine-grained user control over the ads they are exposed to.
Several conclusions can be drawn from this requirement. First, the ecosystem MUST be able to operate without any script injection whatsoever. It is unlikely that we can remove script injection from the Web entirely, but we should strive to make it rare and suspect; it should never be required in the normal course of monetising a website. Furthermore, an ad should not be able to load any content other than a clear pre-bundled set, should only be able to communicate back to the network through a limited set of browser-gated channels, and should not be able to communicate with its embedding context other than for a small set of well-controlled use cases (and it should certainly never be allowed to trigger a top-level navigation other than through the user explicitly clicking on it).
Second, the ecosystem MUST make it easy for browsers to automatically reason about ads and intervene on them in a fine-grained manner. The more a browser knows about ads, the more it can support users' preferences, such as banning a given advertiser from ever showing an ad again or restricting ads that exceed a certain computing-resource budget. Browsers could maintain a log of the ads that were shown to a user recently, and even if a user cannot find a bad ad again (which is a common problem) they could flag the fact that one appeared recently — the intersection between multiple reports containing the same creative would soon narrow down to one.
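The report-intersection idea above can be sketched in a few lines: each report carries the set of creative identifiers recently shown to that user, and intersecting several reports quickly isolates the shared creative. The identifiers below are illustrative.

```python
# Sketch of narrowing down a bad ad from multiple user reports, each
# carrying the set of creative IDs recently shown to that user.
# Creative IDs are illustrative placeholders.

def narrow_down(reports: list[set]) -> set:
    """Intersect recent-ad sets from several reports of the same bad ad."""
    return set.intersection(*reports)

reports = [
    {"cr-17", "cr-42", "cr-98"},   # user A's recently shown creatives
    {"cr-03", "cr-42", "cr-55"},   # user B's
    {"cr-42", "cr-61", "cr-77"},   # user C's
]
assert narrow_down(reports) == {"cr-42"}  # the offending creative
```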
Finally, the ecosystem MUST support a complete know-your-customer chain of custody paper trail so that users can fully inspect what they are being shown. This should be carried with the creative bundle. Browsers should be in a position to intervene on any of the steps listed. For instance, users could block ads that are served through a specific network that is known to be bad at privacy but accept ads from another source that has been audited by a trustworthy party. This would make intermediary behaviour a source of user-level competition.
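To make the chain-of-custody requirement concrete, here is a sketch of a custody record that could travel with a creative bundle, and of the kind of browser rule it enables. The field names and schema are assumptions; the document does not define one.

```python
# Illustrative chain-of-custody record carried with a creative bundle.
# Field names are assumptions, not a defined schema.

custody = {
    "advertiser": "example-brand.example",
    "payer": "example-brand.example",
    "steps": [
        {"role": "agency", "party": "agency.example"},
        {"role": "ad-network", "party": "network.example"},
        {"role": "publisher", "party": "news.example"},
    ],
}

def involves_blocked_party(record: dict, blocked: set) -> bool:
    """A browser rule: reject creatives routed through a blocked intermediary."""
    return any(step["party"] in blocked for step in record["steps"])

# A user blocks one intermediary known to be bad at privacy:
assert involves_blocked_party(custody, {"network.example"})
assert not involves_blocked_party(custody, {"other.example"})
```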
These requirements can be implemented through several improvements to the Web platform. Building ad creatives not with arbitrary Web technology but in bundles that offer opportunities for user and publisher level control would help; that is the SLIC proposal outlined below. Since SLIC is an application of Web Bundles (same origin, no SxG) it would also be possible to embed the traceability information as an entry in the CBOR [[rfc8949]].
In order to make it possible for browsers to reason about ads, they should be embedded exclusively declaratively through a new <ad> element and its accompanying <tracker> element. These would make it possible for the user to dictate rules for the use of their data and resources, and to have the browser enforce them at least at the entry point. Making ads and trackers declarative would enable numerous valuable use cases, for instance: implementing protocol solutions for browser-level frequency capping (eg. relying on 300 responses), enforcing resource quotas, integrating with pooled bad actor reporting systems such as Safe Browsing, enforcing sole controllership by blocking the loading of ads and trackers that involve data controllers other than the first party, or even overlaying an invisible watermark atop ads so that when users report bad ads with screenshots they can still be traced back to their source. It's the only way that ad technology can negotiate with users instead of approaching them as adversaries to be abused. The approach is sketched out in the HOTDAM proposal below.
Because advertising is a tax on the attention commons, every participant in ad-related transactions is entitled to know who benefitted from any given ad being rendered and how much. Today's environment is completely opaque in this regard, and it is typically hard, if not impossible, to audit the flow of money and to get a sense of who profits from the advertising ecosystem. It is common for intermediaries to arbitrage inventory or to provide preferential terms to some partners that decrease monetisation for publishers (and therefore value to users) [[COMPETITION-AD-DISPLAY]], [[TRUST-ME-IM-FAIR]].
Some of the more shameless intermediaries claim that they cannot share this information because of something they have dubbed "advertiser privacy." It is generally agreed by decent people, however, that that's just not a thing.
This opacity makes it easy for intermediaries to help themselves to revenue that would, for both advertisers and publishers, be better spent elsewhere in the ecosystem.
The ecosystem MUST support, for every ad impression, providing both the browser and the publishers with a full financial paper trail. This should allow browsers to report to users how much they have supported various parties with their browsing, publishers and advertisers to understand the flow of money (possibly through publicly shared revenue telemetry), or users to block ads for which the publisher does not get more than a given percentage of the revenue.
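The per-impression financial trail described above would let a browser compute the publisher's share and enforce a user-set floor. The sketch below is illustrative: the record shape, the integer-micros convention, and the 70% floor are all assumptions.

```python
# Sketch: computing the publisher's revenue share from a (hypothetical)
# machine-readable financial trail attached to an impression. Amounts are
# in integer micros of currency to avoid float accumulation issues.

def publisher_share(trail: list[dict]) -> float:
    gross = sum(entry["amount_micros"] for entry in trail)
    to_publisher = sum(
        entry["amount_micros"] for entry in trail
        if entry["role"] == "publisher"
    )
    return to_publisher / gross

trail = [
    {"role": "publisher", "amount_micros": 600_000},
    {"role": "ssp", "amount_micros": 150_000},
    {"role": "dsp", "amount_micros": 250_000},
]

MIN_PUBLISHER_SHARE = 0.70  # an illustrative user-chosen floor
share = publisher_share(trail)
assert share == 0.6
assert share < MIN_PUBLISHER_SHARE  # the browser would block this ad
```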
Today's ad markets are very similar to electronic markets, notably to stock markets and high-frequency trading ([[SUBPRIME-ATTENTION]], [[WHY-GOOGLE-DOMINATES]]). These markets, however, are not regulated in the manner that other electronic markets are and are therefore prone to opaque manipulation, unerodible information advantages, insider trading, and other such practices that people normally go to jail for.
One option is to regulate the ad market in the manner in which other electronic markets are regulated. This approach, however, has at least two primary issues. The first is that it operates on the assumption that personal data becomes traded as a commodity, and that a lot more of it needs to be shared in order to eliminate unerodible information advantages (typically by broadcasting personal identifiers as much as possible). This is evidently incompatible with users' expectations of privacy. The second is that, even when regulated, electronic markets are susceptible to crashes (as Tim Hwang argues in [[SUBPRIME-ATTENTION]]). A crash of the advertising market would be extremely damaging to the attention commons.
A better alternative is to stop operating the ad market with financialised methods. Real-time bidding is only one allocation architecture, and with trading in personal data going away the real-time characteristics might not be quite as necessary as they might have once seemed. The ecosystem MUST NOT be built from the assumption that real-time trading is a requirement. We propose exploring infrastructure to set up campaigns programmatically at scale through publishers in the Advertising Capabilities for Direct Campaigns (ACDC) section and outline some Web-scale allocative approaches in the Trustworthy Unified Matching Market Origin (TUMMO) part.
The current system is built on the assumption that third parties should get free rein to reuse the data they collect on publisher sites for their own independent purposes. This contributes to devaluing publisher audiences since the same valuable people can be targeted in low-CPM contexts (and the difference arbitraged by intermediaries).
In 2007, David Drummond expounded the clear position that intermediaries have "no control over the advertising, no ownership of the data that comes with that that is collected in the process of the advertising. That data is owned by the customers, publishers and advertisers, and DoubleClick or Google cannot do anything with it." [[GOOGLE-DOUBLECLICK-MERGER]] In the intervening decade, the industry has drifted towards the position that intermediaries own the data, a position detrimental to user privacy and to publisher revenue, and that produces no benefits for advertisers (whose audience data also gets reused independently by intermediaries).
The ecosystem MUST build on the expectation — and enforce it where possible — that the first party is the sole data controller. This aligns with users' expectations and with improved audience monetisation. The declarative mechanisms in HOTDAM can bind with legal mechanisms to ensure that sole controllership is made binding. This additionally works in support of purpose limitation, a key expectation of users and fundamental component of privacy and data protection.
This section offers a high level view of concrete work areas that adhere to the vision and requirements described in this document. An explicit goal here is to have pieces of tech that are architecturally orthogonal where possible, intended to work well together even if they might not always shine on their own.
Not every proposal that has been made to date is included here; a lot is missing. Additions are welcome, along with an overview of how they fit into the whole picture. Furthermore, much of what is below consists only of sketches and needs a lot of fleshing out.
It would be valuable to go through the existing proposals that have been made prior to this document, and to list those that help achieve the above goals or changes to existing ones that could support a healthier ecosystem.
As more of the advertising ecosystem shifts towards relying on browser capabilities, there is naturally concern over whether the browser can be a trustworthy market participant, notably in light of a history of less-than-ideal market behaviour from some browser vendors going back at least two decades.
A key solution here is to detail exactly in what sense the browser is expected to work for the user, and to establish the foundation of a legal theory strong enough to render the browser's role legally binding.
The broad regime that seems most adapted to describing this relationship is that of fiduciaries. This is not understood in the more recent (and still somewhat vague) sense of information fiduciaries, but in a more specific and restricted — but also much stronger — sense of fiduciary agency, that supports strong duties of honesty, discretion, protection, and loyalty, as well as ruling out self-dealing. The purpose of the Covenant document is to flesh this position out.
One threat to user privacy and trust on the Web is the use of email for purposes beyond login and direct communication with the user. Email is often used as a login key, and has the valuable property that it can then be used for transactional messaging as well as for newsletters and direct marketing (where respectful). It is, however, increasingly being reused in order to recognise users across contexts to track and target them. This use is contrary to the expectations of users and leads to decreased trust, which means users are less likely to want to share their email even with publishers who only use it for legitimate purposes.
It is a core goal of the Web and of a healthy advertising ecosystem that one can log into a site with full trust, without worrying that this will enable cross-context recognition or spam.
WebID is an interesting proposal in this direction, and may be part of the MOI solution, but it does not include a messaging component. If we are to make it possible for emails to be eliminated from the Web platform, we need to offer a messaging capability that can advantageously eliminate email and any other cross-context identifier.
Using MOI, users are authenticated to sites and identified with an opaque identifier, either from a system like WebID [[WEBID]] or, skipping the notion of having an IDP entirely, directly from credentials generated by the browser and submitted to a standard endpoint (/.well-known/moi/login). This latter option has the advantage of simplicity, but assumes that there is a standard to sync data between different browser vendors to ensure that people don't have to use the same browser account everywhere.
These credentials are rendered messageable by having the browser poll a simple mailbox protocol (blinding IP if possible), limited to list/read/delete operations at /.well-known/moi/inbox. For safety and simplicity, messages are in SLIC format (and open data can be safely channeled through the browser rather than snuck in through images). The browser (or a trusted delegate) can of course build a bridge to email if desired. Unsubscribing is a simple matter of telling one's browser to stop polling.
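The polling loop just described can be sketched as follows. The endpoint path comes from the text; the message shapes, the in-memory inbox stand-in, and the function names are assumptions made for illustration (a real implementation would speak HTTP to /.well-known/moi/inbox).

```python
# Sketch of the MOI polling loop: list/read/delete against an inbox,
# with unsubscribing modelled as simply ceasing to poll. The inbox class
# is an in-memory stand-in for the real /.well-known/moi/inbox endpoint.

class MoiInbox:
    """Stand-in for the list/read/delete mailbox operations."""
    def __init__(self):
        self._messages: dict[str, str] = {}
    def list(self) -> list[str]:
        return sorted(self._messages)
    def read(self, msg_id: str) -> str:
        return self._messages[msg_id]
    def delete(self, msg_id: str) -> None:
        self._messages.pop(msg_id, None)

def poll(inbox: MoiInbox, subscribed: bool) -> list[str]:
    if not subscribed:
        return []  # unsubscribed: the browser just stops polling
    delivered = []
    for msg_id in inbox.list():
        delivered.append(inbox.read(msg_id))
        inbox.delete(msg_id)  # consumed messages are removed
    return delivered

inbox = MoiInbox()
inbox._messages["m1"] = "receipt for your order"
assert poll(inbox, subscribed=True) == ["receipt for your order"]
assert poll(inbox, subscribed=True) == []   # inbox drained
assert poll(inbox, subscribed=False) == []  # unsubscribed: nothing delivered
```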
Under this approach, there is no reason for a website to know one's email, and we should be able to (slowly) reach a stage at which asking for it is suspect. MOI eliminates not only cross-context identification but also a lot of spam surface. As a nice side-effect, it significantly decreases email address switching costs (as opposed to schemes that generate single-origin email addresses).
Web content is hard to control. It can be hard for a browser to know what a given request will be used for before it completes (and even then), and there have often been only very few restrictions on how content gets loaded (something which we have often lamented). This is mostly a positive property of the Web in the sense that it provides a strong, natural resistance to centralising forces. It does, however, make it difficult to control content that gets injected into one's website. From a publisher's standpoint the alternatives are primarily either to accept no injection into your site whatsoever, or to accept that you will with certainty see your website compromised for at least some fraction of requests. It is rarely discussed, because it reads like an admission of guilt, but it is simply a fact of the Web that running advertising on one's site means granting third parties control over that site.
Bundles are a powerful way of controlling content, including other people's content. Ad creatives have proven hard to control because they have been granted close to the full power of the Web, but that is not power that they need. Requiring ad creatives to be bundled would be an effective step towards increased security, accountability, and privacy.
The heart of the SLIC proposal is to build atop Web Bundles and Resource Bundles ( [[WEB-BUNDLES]], [[RESOURCE-BUNDLES]]) and reuse the good work done there. Ad creatives would be bundled together such that:
Every case in which a creative needs to communicate with an external party (eg. for attribution) can then be mediated through the browser, and the browser can reason about the purpose of this communication and how appropriate it is to the user.
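The mediation step can be sketched as a simple policy check. This is a minimal illustration under assumed conventions (the purpose labels and function name are invented here): the creative declares a purpose, the browser decides whether that purpose is appropriate given the user's settings, and only then performs the network interaction itself.

```python
# Sketch (hypothetical policy): the browser mediates a bundled creative's
# outbound communication, forwarding only declared, user-approved purposes.

# Assumed user settings; purpose labels are illustrative.
APPROVED_PURPOSES = {"attribution", "delivery-report"}

def mediate_request(purpose: str, payload: dict) -> bool:
    """Return True if the browser forwards the creative's message."""
    if purpose not in APPROVED_PURPOSES:
        return False  # eg. a "profiling" purpose would be dropped
    # The browser, not the creative, performs the network interaction,
    # so it can strip identifiers, batch, or delay the report.
    return True

assert mediate_request("attribution", {"campaign": "c123"}) is True
assert mediate_request("profiling", {"history": "..."}) is False
```

The design point is that the creative never gets a raw network capability: every externally visible interaction passes through a chokepoint the browser can reason about.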
This is low-hanging fruit for something that is also needed by TURTLEDOVE.
One added benefit is that this renders ad creatives much easier to archive, which is key to public accountability, notably for political campaigns.
A key way to make it possible for browsers to reason usefully about ads is to integrate them through declarative components. One such addition would be to replace the arbitrary (and painfully overloaded) use of iframe for ads with a dedicated ad element. An ad element would provide the correct context into which to load a SLIC, and can offer the required integration with HARP. One useful feature of the ad element is that it can use the metadata in a SLIC creative to overlay an invisible watermark, so that when users report bad ads with screenshots the source of the problem can readily be identified.
By default, all requests made by the ad element have the Sec-GPC header set to 1 [[GPC]]. This prevents third parties from reusing data obtained from that request for their own purposes, independently of the publisher. (It might be a better idea to define a header that revokes any license to reuse the data, even if anonymised, but that will need a legal theory to match.) In rare cases where such data may need to be reused, the header can be removed by adding an attribute to the ad element, in which case the browser will have to obtain the user's consent for data to be shared.
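The default-on signal can be sketched as follows. Note that the ad element and its opt-out attribute are proposals from this document, not shipped APIs, and the function here is purely illustrative; only the Sec-GPC header itself comes from the GPC proposal.

```python
# Sketch: how a browser might decorate requests made by an ad element.
# The consent flow is assumed; only Sec-GPC is defined by [[GPC]].

def ad_request_headers(consent_granted: bool = False) -> dict:
    headers = {"Sec-GPC": "1"}  # default: data may not be reused downstream
    if consent_granted:
        # Only after the browser has obtained the user's consent is the
        # signal removed, permitting the data to be shared.
        del headers["Sec-GPC"]
    return headers

assert ad_request_headers() == {"Sec-GPC": "1"}
assert "Sec-GPC" not in ad_request_headers(consent_granted=True)
```

The important inversion is that opting data out of reuse is the zero-effort path; sharing is what requires an explicit, browser-mediated step.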
The browser can selectively block ads depending on some of their internal characteristics that do not match the user's agreed-upon preferences. Ads included through a declarative entry point are also a lot easier to monitor at scale. The browser can provide proof that the ad was viewable and that the user was human.
There may be a role to play for a tracker element, but it is unclear whether that is still needed in the world to come.
In a privacy-preserving world, the browser will know things that the ad server cannot be told. Putting the browser in charge of certain network interactions will allow it to implement a number of features in a way that cannot be tricked into revealing information.
For instance, SLIC creatives should carry frequency-capping information that the browser could (at least partially) abide by, for instance using a Bloom filter with a maximum number of entries. Giving the browser a way to discover more eligible creatives than it has slots to fill would then support client-side frequency capping.
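The Bloom-filter approach can be sketched concretely. This is a minimal illustration with made-up parameters (filter size, hash count, creative IDs): the browser records which creatives it has shown in a fixed-size filter, so the set of seen ads never grows unboundedly and never leaves the device.

```python
import hashlib

# Sketch: client-side frequency capping with a fixed-size Bloom filter.
# Sizes and creative IDs are illustrative, not from any specification.

class FrequencyCap:
    def __init__(self, bits: int = 1024, hashes: int = 3):
        self.bits = bits
        self.hashes = hashes
        self.filter = 0  # the bit array, held as one big integer

    def _positions(self, creative_id: str):
        # Derive `hashes` independent bit positions from the creative ID.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{creative_id}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def record_impression(self, creative_id: str) -> None:
        for p in self._positions(creative_id):
            self.filter |= 1 << p

    def probably_seen(self, creative_id: str) -> bool:
        # False positives are possible (slight over-capping), false
        # negatives are not — which errs on the side of the user.
        return all(self.filter & (1 << p) for p in self._positions(creative_id))

cap = FrequencyCap()
cap.record_impression("creative-42")
assert cap.probably_seen("creative-42")
```

With more eligible creatives than slots to fill, the browser can simply skip the ones it has probably already shown, capping frequency without reporting impression history to anyone.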
This protocol suite would encompass any necessary reporting, such as private click measurement [[PCM]] or the ability to prove delivery and targeting (possibly with a system like Prio [[PRIO]]).
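The core idea behind Prio-style private aggregation can be shown in a few lines. This is a deliberately simplified sketch (real Prio adds validity proofs; the modulus and values here are illustrative): each browser splits its contribution into additive shares sent to different servers, so no single server ever sees an individual value, yet the shares sum to the true aggregate.

```python
import random

# Sketch of the additive secret sharing underlying [[PRIO]]-style
# aggregation. Real deployments add zero-knowledge validity proofs.

MOD = 2**61 - 1  # a large prime modulus (illustrative choice)

def split(value: int) -> tuple[int, int]:
    """Split a value into two shares, each individually meaningless."""
    share_a = random.randrange(MOD)
    share_b = (value - share_a) % MOD
    return share_a, share_b

def aggregate(shares_a: list[int], shares_b: list[int]) -> int:
    # Each server sums only the shares it holds; combining the two sums
    # reveals the aggregate, never any individual contribution.
    return (sum(shares_a) + sum(shares_b)) % MOD

values = [1, 0, 1, 1]  # eg. "was this ad delivered as targeted?"
pairs = [split(v) for v in values]
total = aggregate([a for a, _ in pairs], [b for _, b in pairs])
assert total == sum(values)
```

This is what makes it possible to prove delivery and targeting in aggregate while keeping each browser's report private from every party, including the servers doing the counting.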
Smaller publishers often lack the scale to produce their own demographics data, which is unfortunate because relatively broad demographics plus contextual information can suffice for a very large class of targeting requirements.
Modern techniques can enable privacy-preserving analysis of data at large scales ([[PRIO]], [[RAPPOR]], [[PROCHLO]]). This, along with users providing demographics to their browser, can allow for websites, URL path prefixes, and in some cases individual pages (depending on total traffic volume) to be mapped to a demographic breakdown that can complement targeting. This data can be provided at scale since it is sufficiently aggregated.
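The aggregation-with-threshold idea can be sketched simply. The threshold value and demographic buckets below are invented for illustration, and a real system would use the privacy-preserving machinery cited above rather than raw counts; the point is only that a breakdown is released for a page or path prefix when, and only when, traffic volume is sufficient.

```python
from collections import Counter

# Sketch: release a demographic breakdown for a page or path prefix only
# above a minimum traffic volume. Threshold and buckets are illustrative;
# a real system would compute this privately (eg. via [[PRIO]]).

MIN_VISITS = 1000

def demographic_breakdown(visits: list[str], min_visits: int = MIN_VISITS):
    """visits: one coarse demographic bucket per visit, eg. '25-34'."""
    if len(visits) < min_visits:
        return None  # not enough traffic to aggregate safely
    counts = Counter(visits)
    return {bucket: n / len(visits) for bucket, n in counts.items()}

assert demographic_breakdown(["25-34"] * 50) is None
large = ["25-34"] * 600 + ["35-44"] * 400
assert demographic_breakdown(large) == {"25-34": 0.6, "35-44": 0.4}
```

A smaller publisher thus gets usable demographics for free from the commons of aggregated browser data, without ever running its own panel or tracking its own readers.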
Buying directly from publishers is an effective way to cut out wasteful spending on intermediaries, but it requires the publisher to be large enough to have a direct sales operation and advertisers to want to go directly to publishers. For small businesses, it is often easier to buy inventory from a large self-serve platform than from a handful of local publishers, even though the latter might be more effective.
An API with which to buy campaigns directly from a publisher, exposed at a predictable .well-known endpoint, would provide a step forward in this direction. Standardising an API means that products could be created by independent companies to buy directly from publishers, and to set up campaigns easily across multiple publishers (eg. all the papers in the county).
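A client for such an API might look like the sketch below. The endpoint path, request schema, and field names are all hypothetical (no such API has been standardised); the sketch only shows why a predictable .well-known location matters: the same client code works against any publisher's origin.

```python
import json

# Sketch (hypothetical endpoint and schema): a standardised direct-buy API
# at a .well-known path lets one independent tool set up campaigns across
# many publishers. All names below are invented for illustration.

def build_campaign_request(publisher_origin: str, campaign: dict) -> dict:
    return {
        "url": f"{publisher_origin}/.well-known/ad-campaigns",
        "method": "POST",
        "body": json.dumps(campaign),
    }

# The same code buys from every paper in the county.
for origin in ["https://county-paper.example", "https://gazette.example"]:
    req = build_campaign_request(
        origin,
        {"creative": "slic:abc123", "budget_usd": 500, "flight_days": 14},
    )
    assert req["url"].endswith("/.well-known/ad-campaigns")
```

Because discovery is just the origin plus a fixed path, no per-publisher integration work is needed, which is what makes tooling for small buyers economically viable.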
With personal data being progressively expunged from the system, the value of real-time bidding is much lower. Replacing real-time bidding with a slower programmatic matching market, over which campaigns (probably with smaller volumes per campaign) are traded, would allow smarter decisions to be made more transparently and without the overhead of running bids at impression time, since only the ad server need be contacted then.
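To make the contrast with real-time bidding concrete, a matching round might look like the sketch below. The matching rule, data shapes, and campaign details are all invented for illustration: the essential property is that matching runs periodically and ahead of time, so no auction happens at impression time.

```python
# Sketch: one periodic (non-real-time) matching round. Campaigns and
# inventory lots are illustrative; because lots are assigned ahead of
# time, impression time involves only the ad server.

def match_round(campaigns: list[dict], lots: list[dict]) -> list[tuple[str, str]]:
    """Greedily assign each inventory lot to the highest-bidding campaign
    whose targeted context matches and whose remaining volume suffices."""
    matches = []
    by_bid = sorted(campaigns, key=lambda c: -c["bid"])
    for lot in lots:
        for c in by_bid:
            if c["context"] == lot["context"] and c["volume"] >= lot["volume"]:
                matches.append((c["id"], lot["id"]))
                c["volume"] -= lot["volume"]
                break
    return matches

campaigns = [
    {"id": "cars", "context": "auto", "bid": 4.0, "volume": 10_000},
    {"id": "shoes", "context": "fashion", "bid": 6.0, "volume": 5_000},
]
lots = [
    {"id": "lot-1", "context": "auto", "volume": 8_000},
    {"id": "lot-2", "context": "fashion", "volume": 5_000},
]
assert match_round(campaigns, lots) == [("cars", "lot-1"), ("shoes", "lot-2")]
```

Running this as a batch process makes every match auditable after the fact, which is precisely the transparency that millisecond-scale auctions make impractical.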
Such a matching market could be operated as a cooperative utility into which buyers and vendors could plug in order to buy and optimise campaigns.
This system would plug into ACDC and SPARTACUS.