Questions & Answers from our ‘How to Make Prebid the Supply Path Buyers Choose’ Webinar
We had some great questions asked during our August 27th, 2020 Webinar on ‘How to Make Prebid the Supply Path Buyers Choose’. Special thanks to our amazing panelists for providing the below answers!
As a publisher, how do we keep up with A/B testing? Do you recommend an analytics provider?
Steph – We use Data Studio (but you can use Tableau or any data visualization tool) to help with analysis of our A/B testing. Some of our business units also use Optimizely which can go beyond just advertising.
Azhar – We push all analytics to our data warehouse (Snowflake) and use Tableau as the visualization tool for the A/B dashboards.
Shobha – We use Looker for our A/B testing analysis and Roxot for Prebid analytics.
Does Prebid plan to integrate or open communication with the publisher adservers like Google AdManager?
We are already integrated with Google Ad Manager from a bidding perspective, and there are prebid.js implementations with several other ad servers. We would like to promote deeper integration with Google — especially around issues like floors and presold line items — and will continue to push for a deeper partnership with them.
Who acts as the floor data provider in the Prebid floor module? And how is it computed?
Any analytics provider should be able to supply flooring information to Prebid.js or Prebid Server. The floors implementation lets the publisher choose a provider and allows floor data to be pushed into Prebid dynamically. Several providers already offer flooring services in beta, and we expect that number to grow in the coming months.
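As a sketch of what this looks like in practice, a publisher using the Prebid.js Price Floors module might fetch dynamic floor data from their chosen provider and keep a static fallback, along the lines below (the provider endpoint URL and the dollar values are hypothetical placeholders; see the Price Floors module documentation for the full schema):

```javascript
// Assumes the priceFloors module is compiled into the Prebid.js build
// and that pbjs is the global Prebid.js object on the page.
pbjs.setConfig({
  floors: {
    auctionDelay: 100, // ms to wait for dynamic floor data before starting the auction
    endpoint: {
      // Hypothetical floor-provider endpoint returning floor data as JSON
      url: 'https://floors.example-provider.com/floors.json'
    },
    // Static fallback used if the dynamic fetch fails or times out
    data: {
      currency: 'USD',
      schema: { fields: ['mediaType'] },
      values: {
        banner: 0.80, // illustrative floors, not recommendations
        video: 1.20
      }
    },
    enforcement: {
      enforceJS: true // drop bids below the floor inside Prebid.js
    }
  }
});
```

The key point is that the provider only has to serve data matching the module's schema; the publisher stays in control of enforcement and fallbacks.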
Considering the floor module doesn’t bake in intelligence of direct campaigns sitting in GAM, how is Prebid planning to bring flooring intelligence out of GAM?
We don’t have a fixed plan to exchange flooring information with GAM — but we certainly have a desire to do so and this has been discussed on a few occasions with Google in the past. The goal would be bi-directional — for GAM to provide direct sold line item information to prebid for flooring purposes (much like it currently does for EB) and for prebid to provide its Prebid flooring values to GAM to floor EB bids. There are several technical ways to achieve this, but all of them require active collaboration between the two parties.
Is there any long-term hope for video ads served via prebid to avoid the VAST redirect round-trip?
Yes, the video task force is exploring ways to eliminate the round trip on VAST calls. There are efficiency mechanisms already in place for VAST calls using RTB, and the hope would be to extend some of those methods to prebid adapters.
With the floor price module in Prebid, whichever adapter you use, could it pass that value as a key-value into the ad server so you can target it as a unified floor price in GAM on the same impression?
Yes, this could work in theory, but that decision is governed by the publisher and the floor provider. Prebid does not necessarily have a role in setting up the key-value transmission.
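For illustration only, a publisher could wire this up themselves by reading the floor out of the Prebid floors config and setting it as a GPT key-value (the `pb_floor` key name is an arbitrary choice and must match whatever key is targeted in GAM; this sketch assumes a mediaType-keyed floors config like the one in the floors module docs):

```javascript
// Hypothetical sketch: expose the banner floor Prebid is using to GAM
// as a key-value so reports or line items can target it.
const floorsConfig = pbjs.getConfig('floors');
const bannerFloor = floorsConfig && floorsConfig.data
  ? floorsConfig.data.values.banner
  : undefined;

googletag.cmd.push(function () {
  if (bannerFloor !== undefined) {
    // GAM key-values are strings; targeting then matches e.g. pb_floor=0.80
    googletag.pubads().setTargeting('pb_floor', bannerFloor.toFixed(2));
  }
});
```

Note this only surfaces the static config value; surfacing per-auction dynamic floors would need support from the floor provider.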
What options do publishers have to host prebid servers?
Here is a list of Prebid members offering a managed service.
Can you provide recommended analytics providers?
Shobha – As mentioned above: Roxot, Optimizely, and STAQ. Many SSPs also provide analytics modules that connect to Prebid.
Why is it an issue to ask for non-standard ad units?
Steph – If there is no in-market demand for an ad unit, you'll be offering DSPs an impression they have no demand to fill. They will see those impressions coming through as “unwinnable,” making your supply path seem less efficient to buyers. If you want to use these ad units specifically for your own campaigns, that makes sense, but offering them to the open exchange makes your Prebid supply paths look like wasted queries to DSPs.
How do I ensure that my engineers are doing all of the recommended code updates or “best practices”?
Azhar – This feels like a multi-part question. “best practices” could apply to a lot of things. I’ll try to break it down below:
prebid – you should set up an engineering pipeline that makes it easy to upgrade Prebid, since we release new features constantly. The cadence for updating your version is probably a case-by-case decision: if you need any features or bug fixes from a release ahead of your current version, you should definitely upgrade. You should also upgrade major versions (v3.x, v4.x, etc.) once the previous major version is no longer supported.
site/ad speed – you should be using tools like PageSpeed Insights, Lighthouse, and GAM’s Ad Speed reports, which outline best practices and provide recommendations for improvements.
What is the p-value on this chart?
Azhar – The p-value is just one method of quantifying statistical significance, which we use as a measure of confidence that the results are actionable: https://en.wikipedia.org/wiki/P-value. As for what value to aim for – that depends on the level of confidence you want. For example, we use 5% as the guideline below which we consider a result good to act on.
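To make the 5% guideline concrete, here is one way (not necessarily the panelists' method) to compute a p-value for an A/B test comparing two win rates, using a standard two-proportion z-test with an error-function approximation; the win/request counts are made-up numbers:

```javascript
// Two-proportion z-test: is variant B's win rate significantly different
// from variant A's? Returns a two-sided p-value.
function twoProportionPValue(winsA, trialsA, winsB, trialsB) {
  const pA = winsA / trialsA;
  const pB = winsB / trialsB;
  const pooled = (winsA + winsB) / (trialsA + trialsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB));
  const z = (pB - pA) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Standard normal CDF via the error function.
function normalCdf(x) {
  return 0.5 * (1 + erf(x / Math.SQRT2));
}

// Abramowitz & Stegun formula 7.1.26 approximation of erf (max error ~1.5e-7).
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

// Example: 4.8% vs 5.6% win rate over 10,000 requests each.
// If p < 0.05 we would treat the lift as actionable under the 5% guideline.
const p = twoProportionPValue(480, 10000, 560, 10000);
```

With these sample counts the lift clears the 5% threshold; with smaller samples the same relative lift often would not, which is exactly what the p-value guards against.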
We filter out bid requests that are unlikely to generate a bid. Therefore, we have a bidstream that is lower QPS than other supply paths to an ad partner. Will a lower QPS affect SPO algorithms in upstream SSPs or DSPs?
Yes, though this really has very little to do with Prebid. Many DSPs and bidders have a volume bias, and restricting transactions could cause some bidders to view publishers as smaller than they really are, which could affect bidding levels. That said, filtering is very common in our industry, and in most cases the cost benefits of filtering far outweigh the downside adjustments of bidding algorithms.
Are there tools available to change granularity by season, like Steph mentioned (X for January, Y for December)?
Steph – I have mostly seen this done manually. If you can in GAM, I would try $0.01 increments on your lowest bids in January and your highest in December, just to cover your bases; then you shouldn’t have to revisit it too often.
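In Prebid.js terms, a seasonal change like this amounts to swapping the custom price granularity config (the bucket boundaries below are purely illustrative, not a recommendation):

```javascript
// Illustrative "January" granularity: penny buckets where most bids land,
// coarser buckets above. A "December" config might instead put the $0.01
// increments at the top of the range to capture high-value bids precisely.
pbjs.setConfig({
  priceGranularity: {
    buckets: [
      { max: 3.00, increment: 0.01 },   // fine-grained on low-value bids
      { max: 10.00, increment: 0.10 },
      { max: 20.00, increment: 0.50 }   // bids above the last max are capped
    ]
  }
});
```

The matching GAM line items would need to cover the same price points, which is why this tends to be set up once per season rather than continuously.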
How do you handle upgrades from 3.0 to 4.0 and when do you decide to implement the latest version?
Azhar – That is probably a case-by-case decision. If you need any features or bug fixes from a release ahead of your current version, you should definitely upgrade. You should also upgrade major versions (v3.x, v4.x, etc.) once the previous major version is no longer supported. Be aware that major version upgrades can include breaking changes, so not all adapters may be ported over to the latest version yet.
Shobha – We stay in close communication with our SSPs on the larger upgrades and watch discrepancies very closely; we’ve passed key-values to one particular partner so that we can evaluate same-day effects on discrepancies or issues within a few hours.
When you refer to “native” what do you mean? Taboola/Outbrain?
Ad units that have the look and feel of your site – typically headline/body text/photo. This can include those content widgets, but also partners like TripleLift, Nativo, Sharethrough, etc.
For publishers without monetization teams who have to outsource the management of their inventory, isn’t there a way for the buy side to know this is a reseller with “full exclusivity”?
Shobha – Ads.txt doesn’t currently account for this unique setup that operates more like an “agent”. There are active conversations within the IAB OpenRTB Working Group on updates to the ads.txt specification to better delineate this type of inventory.
I work for a small network of programmatic-only sites. I have long suspected that we have too many resellers, but have had trouble quantifying the impact of removing some (or all). How might you design an A/B test for this?
Azhar – This will be tricky to test, since it involves updating ads.txt as well. I’m not aware of a clean way to A/B test ideas like this one, where you’re trying to change buying behavior. The reason is that buyers aren’t aware they are in an A/B test, so when they look at the supply coming from your sites in aggregate, they will apply the changes driven by the test treatment across your whole site, meaning the control treatment is affected as well.
We noticed a drop of user-sync when shifting from prebid.js to Prebid Server with our SSPs. Is it something that you also noticed? Is there any way to optimize and/or make sure the user-sync mechanism is optimal?
Shobha – That’s expected behavior. User matching is one of the big benefits of implementing Prebid client-side, and Prebid Server certainly disadvantages SSPs when it comes to user sync. The exact mechanics are documented here: http://docs.prebid.org/prebid-server/developers/pbs-cookie-sync.html.
Notably, it doesn’t allow for iframes that let the bidder further sync with their own buyers, as Prebid.js does. However, a publisher could ask an SSP for an iframe sync that the publisher fires outside the scope of Prebid. Some buyers, e.g. Criteo and The Trade Desk, have ID modules that allow the publisher to establish the buyer ID directly, but most do not. Cookie deprecation may eventually level the playing field between client and server.
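On the client side, the least a publisher can do is verify that syncing isn’t disabled or overly restricted in their Prebid.js userSync config. A sketch of a permissive setup (the numeric limits here are illustrative choices, not defaults to copy):

```javascript
// Assumes pbjs is the global Prebid.js object on the page.
pbjs.setConfig({
  userSync: {
    syncEnabled: true,
    syncsPerBidder: 5,   // allow each bidder a handful of sync pixels
    syncDelay: 3000,     // fire syncs 3s after the auction to protect page load
    filterSettings: {
      iframe: {
        bidders: '*',    // permit iframe syncs, which enable deeper buyer syncing
        filter: 'include'
      }
    }
  }
});
```

Iframe syncs are disabled by default in Prebid.js, so enabling them explicitly (as above) is what preserves the client-side matching advantage the answer describes.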
What is Open Bidding?
Open Bidding is Google’s server-to-server header bidding solution implemented in GAM. It was originally Exchange Bidding Dynamic Allocation (EBDA), was rebranded Exchange Bidding (EB), and is now called Open Bidding (OB). While supply paths can look efficient in OB, SSPs are subject to a 5-15% take, which increases your tech tax.
From a bidder’s point of view, when we bid back to Prebid.js there are lots of steps in between: e.g., the bid wins in Prebid.js, then reaches the main ad server, then wins there, then the ad can be rendered, and only then is the tracker sent to the bidder. How can we give bidders better transparency?
Prebid is in the process of developing auction signals to help bidders perform better and bid more accurately. We are exploring ways to inform bidders and bidder adaptors of the winning bid in prebid auctions (or the second price in cases where an adapter is the winner), and expect to release some of these new tools shortly. Of course, winning the prebid auction does not mean a bidder will win the end auction — which is typically determined by the ad server since direct sold line items and ad server-based bids need to be taken into account. To that end, we are continuously exploring better ways to integrate with ad servers and provide information not only on the prebid auction but the definitive downstream winner. We still have a lot of work to do in this area, but the first step — the prebid auction — is actively being explored.
What is an acceptable win rate?
Steph – You want your win rate to be as high as possible for efficiency – capturing the high-value bids and not losing them through the value chain. You should always optimize towards what you can.
What’s your take on resellers, and what is the best way to manage them? Is there such a thing as too many resellers?
Steph – Resellers can provide value, but you lose control of how your inventory is valued in market. I would limit resellers to those that offer a specific ad unit, a special function, or truly unique demand, rather than letting all your partners resell your inventory, so that your flooring strategy stays consistent across supply paths.
Which contract framework has to be signed to use prebid.org?
No contract – Prebid is open-source and you can download and use the code on your site today.
What is meant by efficient? Do you mean efficient in terms of number of steps, or technically most efficient (minimum data loss), or financially most efficient, or other?
Chris – We think about efficiency as the combination of financial distance and technical distance. A supply path with high financial distance has a high total take rate — either because of a high exchange fee or compounded reselling fees. A supply path with high technical distance has a high risk of auction failure — due to latency, ad serving errors, incorrect bid price translation, incorrect advertiser blocks, or a handful of other technical breakpoints. An efficient supply path has both low financial distance (low fees) and low technical distance (low risk of auction failure).
If you optimize towards ROI, how does the SSP take rate matter?
Steph – The goal is to get all SSPs to compete equally – if they do not compete equally, they will not be evaluated on true ROI. Due to limitations in how different header bidding solutions are set up, they often do not compete equally, and therefore not every competitor in the value chain is subject to the same inefficiency. Since publishers own the inventory, they want to maximize the working dollars that come back into their pocket.