Accept-Push-Policy Header Field

Httpbis Internet-Draft

H. Ruellan, Canon CRF (herve.ruellan@crf.canon.fr)
Y. Fablet, Canon CRF (youenn.fablet@crf.canon.fr)
R. Bellessort, Canon CRF (romain.bellessort@crf.canon.fr)
F. Denoual, Canon CRF (franck.denoual@crf.canon.fr)
F. Maze, Canon CRF (frederic.maze@crf.canon.fr)

The Accept-Push-Policy and Push-Policy header fields enable a client and a
server to negotiate the behaviour of the server regarding the usage of push on
a per-request basis.

HTTP/2, the new version of the HTTP protocol, not only provides significant
improvements compared to HTTP/1.1, but also provides several new features.
Among these is Server Push, which enables a server to send responses
to a client without having received the corresponding requests.

The range of possibilities offered by Server Push is a new domain, wide open for
experimentation. A first usage was foreseen early in the addition of this
feature into HTTP/2, which is to replace the inlining of sub-resources inside a
main resource, by pushing these sub-resources in response to the request for
the main resource. With HTTP/1.1, a web
designer may want to optimize the page load time by packing a whole web page
into a single HTTP response. This can be achieved by inlining the CSS,
JavaScript, and images inside the HTML document. By removing the need for the
client to send requests for these sub-resources, this inlining technique can
reduce the page load time by roughly an RTT. With HTTP/2, the same results can
be obtained by pushing the sub-resources instead of inlining them. Using push
has the advantage of keeping each sub-resource independent.

HTTP/2 provides a few ways of controlling Server Push from the client side.
First, the SETTINGS parameter SETTINGS_ENABLE_PUSH allows a client to
globally enable or disable push on an HTTP/2 connection. In addition, HTTP/2
Flow Control can be used to limit the bandwidth used by pushed resources.

These options provide only a coarse control of the usage of Server Push from
the client side. In some cases, a more fine-grained control would be useful.
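For reference, the connection-wide switch mentioned above travels in an HTTP/2 SETTINGS frame on stream 0. Schematically (following RFC 7540, not byte-exact), a client disabling push sends:

```
SETTINGS frame (type=0x4, flags=0x0, stream identifier=0)
  SETTINGS_ENABLE_PUSH (0x2) = 0    ; push disabled for the whole connection
```

This setting applies to every request on the connection, which is exactly the granularity problem the rest of this document addresses.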
This document describes several use cases where controlling Server Push would
be useful for the client. It then proposes new header fields for realizing this
control.

In this document, the key words “MUST”, “MUST NOT”, “REQUIRED”,
“SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”,
and “OPTIONAL” are to be interpreted as described in BCP 14, RFC 2119,
and indicate requirement levels for compliant implementations.

This document uses the Augmented BNF (ABNF) defined in RFC 5234.

A browser may want to ask the server to adapt its behaviour for pushing
resources depending on the user’s actions. For example, after navigating
through a site for some time, the browser may have many sub-resources in its
cache and may prefer that the server stop pushing sub-resources to
prevent wasting bandwidth. This could be further optimized with the browser
asking the
server to push only response metadata (i.e., the responses pushed by the server
correspond to requests made with the HEAD method instead of requests made with
the GET method). By receiving in advance the list of sub-resources
corresponding to a specific request, the browser would be able to fetch early
on any sub-resource missing from its cache.

As another example, when a user opens many pages on the same site, the browser
may want to receive pushed sub-resources only for the foreground tab and not
for any background tab. This results in a better optimization of the page load
time for the tab that is visible to the user.

A second use case is a load balancer serving both HTTP/1.1 and HTTP/2 clients,
and using HTTP/2 to connect to the back-end servers.

The load balancer uses the same HTTP/2 connection towards a back-end server to
forward the requests received from several clients. When the client is an
HTTP/1.1 client, the load balancer doesn’t want the back-end server to push
any resource in response to the client’s request. In contrast, when the client
is an HTTP/2 client, the load balancer would like the back-end server to push
sub-resources associated with the client’s request.

The load balancer would like to be able to enable or disable push on a
per-request basis. This would enable it to optimize the server behaviour
depending on the client’s capabilities.

Controlling the server behaviour regarding push may also be useful for specific
applications. As an example, MPEG-DASH is a technology for streaming
media content over HTTP. The media content is split into small file-based
segments that can be retrieved through HTTP requests. Potentially, the media
content is made available with different quality levels. A media presentation
description (MPD) describes the organization of the media.

To render media, an MPEG-DASH client needs to first download the MPD,
process it, and then request the necessary media segments. When requesting an
MPD to play the associated media content, it would be useful for a DASH client
to be able to ask the server to push some initial content (for example, the
initialization segments, and possibly the first content segments).

However, there are also cases when it is not useful for the DASH client to
receive in advance this initial content. For example, in a video program guide,
the DASH client may want to download several MPDs corresponding to different
media content, but doesn’t want to receive the initial content for all
of these. Therefore, it is useful for the DASH client to be able to specify in
a request for an MPD whether it wants the server to push some initial content.

In addition, when the DASH client asks the server to push some initial content,
it could be useful for it to have some feedback from the server. This feedback
would indicate whether the server is intending to push this initial content.
The client could adapt its behaviour depending on this indication. For example,
the client could start rendering the media sooner if it knows that the server
is pushing the initial content.

The previous use case can be expanded to the more generic use case of quickly
downloading a web page. For a good user experience, it is important to keep
the perceived latency of loading a web page under 1000 ms. This can be
difficult when using a mobile connection with a high latency. Part of the
solution proposed for HTTP/1.1 is to inline all the sub-resources
necessary for achieving a first rendering of the web page. With HTTP/2, the
inlining of these sub-resources can be replaced by having the server push them.

Therefore, a client detecting that it is using a high-latency network could
improve the user perceived latency by asking the server to push all the
sub-resources necessary for a first display of a web page.

WebPush is a protocol for delivering messages from an application server to a
client through a push server. WebPush uses Server Push for delivering messages
from the push server to the client and receipts from the push server to the
application server.

An application server may want to control the rate of incoming receipts to
avoid being overwhelmed by a sudden burst of receipts. However, as a receipt
consists only of HTTP header fields (the receipt is a 204, “No Content”,
response), HTTP/2 provides no means for controlling the rate of such pushed
resources.

Providing a possibility for a client to control the rate of pushed resources
sent in reference to a request would enable the client to protect itself from
being overwhelmed by too large a burst of pushed resources.

The analysis of these use cases makes it possible to build a list of
requirements for defining a fine-grained control over the usage of push by a
server:

- The client can ask the server not to push any resource in response to a
  request.
- The client can ask the server to only push response metadata.
- The client can ask the server to limit its usage of push.
- The client can ask the server to use an application-defined behaviour
  regarding push.
- The server can indicate to the client its behaviour regarding push when
  processing a request.

A push policy defines the behaviour of an HTTP server regarding push when
processing a request. Different push policies can be used when processing
different requests.

This section defines new HTTP header fields enabling a client and a server to
negotiate the push policy used by the server to process a given request.

The new Accept-Push-Policy header field enables a client to express its
expectations regarding the server’s push policy for processing a request. The
Push-Policy header field enables a server to indicate which push policy it
selected for processing a request.

A client can express the desired push policy for a request by sending an
Accept-Push-Policy header field in the request. The header field value
contains the push policy that the client expects the server to use when
processing the request.

Possibly, the Accept-Push-Policy header field could be extended to support
carrying multiple policies, as a comma-separated list of tokens. The server
could choose its preferred policy among those proposed by the client.

A server can indicate to a client the push policy it used when processing a
request by sending a Push-Policy header field in the corresponding response.
The server MUST follow the indicated push policy when processing the client
request associated with the response.

The Push-Policy header field can be used as an acknowledgement from the
server after receiving a request containing the Accept-Push-Policy header
field. If the Accept-Push-Policy header field can contain a list of push
policy names, the Push-Policy header field can be used to express which push
policy was selected by the server.

The server can also choose a push policy not corresponding to the client’s
expectation as expressed in the Accept-Push-Policy header field, and specify
the selected push policy in the Push-Policy header field.

This section defines some generic push policies. Other push policies can be
standardized for either a generic usage, or for an application-specific usage.
In addition, private push policies can be used by a web application.

TBD: select the form of private push policies (URN, “X-” values…).

The None push policy value indicates that no resource is pushed when
processing a request. For example, a browser sending a request for a
background tab could ask the server not to push any resources in response to
this request by sending an Accept-Push-Policy header field with the None
value.

The Head push policy value indicates that only response metadata is pushed
(the server pushes responses corresponding to requests made with the HEAD
method). For example, a browser may already have many resources from a web
site in its
cache. It could ask the server to push only response metadata. This would
allow the browser to know early on the resources useful for rendering a web
page (i.e., before receiving and parsing the HTML document), without taking
the risk of wasting bandwidth on resources already in its cache. In this
example, the browser’s request would contain an Accept-Push-Policy header
field with the Head value.

The Default push policy value indicates that the server is using its default
behaviour for pushing resources when processing a request. For example, a
server not fulfilling a client’s expectation regarding the push policy could
indicate this by sending a Push-Policy header field with the Default value in
its response.

The Fast-Load push policy value indicates that the sub-resources necessary
for a first rendering of a main resource are pushed alongside the response
containing this main resource.

A server using the Fast-Load push policy while processing a request can push
sub-resources not necessary for a first rendering, but SHOULD prioritize the
sub-resources necessary for this first rendering. For example, a client
detecting that it is using a high-latency network can try to improve the user
perceived latency by asking the server to push the sub-resources necessary for
a first rendering of a main page by including an Accept-Push-Policy header
field with the Fast-Load value.

The Push-Limit push policy value indicates that the specified number is the
maximum number of resources pushed when processing a request. For example, a
client wanting to limit a server to pushing a maximum of 100 resources in
relation to a request can indicate it in the request by including an
Accept-Push-Policy header field with the Push-Limit value.

TBD

TBD

References

- Key words for use in RFCs to Indicate Requirement Levels (BCP 14, RFC 2119)
- Augmented BNF for Syntax Specifications: ABNF (RFC 5234)
- Hypertext Transfer Protocol Version 2 (HTTP/2) (RFC 7540)
- Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing (RFC 7230)
- Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content (RFC 7231)
- Generic Event Delivery Using HTTP Push (WebPush)
- High Performance Browser Networking
- PUSH_PROMISE and load balancers
- Dynamic adaptive streaming over HTTP (DASH)
- Breaking the 1000 ms mobile barrier