werkzeug.urls Namespace Reference

Data Structures

class  BaseURL
 
class  BytesURL
 
class  Href
 
class  URL
 

Functions

def url_parse (url, scheme=None, allow_fragments=True)
 
def url_quote (string, charset="utf-8", errors="strict", safe="/:", unsafe="")
 
def url_quote_plus (string, charset="utf-8", errors="strict", safe="")
 
def url_unparse (components)
 
def url_unquote (string, charset="utf-8", errors="replace", unsafe="")
 
def url_unquote_plus (s, charset="utf-8", errors="replace")
 
def url_fix (s, charset="utf-8")
 
def uri_to_iri (uri, charset="utf-8", errors="werkzeug.url_quote")
 
def iri_to_uri (iri, charset="utf-8", errors="strict", safe_conversion=False)
 
def url_decode (s, charset="utf-8", decode_keys=False, include_empty=True, errors="replace", separator="&", cls=None)
 
def url_decode_stream (stream, charset="utf-8", decode_keys=False, include_empty=True, errors="replace", separator="&", cls=None, limit=None, return_iterator=False)
 
def url_encode (obj, charset="utf-8", encode_keys=False, sort=False, key=None, separator=b"&")
 
def url_encode_stream (obj, stream=None, charset="utf-8", encode_keys=False, sort=False, key=None, separator=b"&")
 
def url_join (base, url, allow_fragments=True)
 

Variables

 safe
 
 unsafe
 

Function Documentation

◆ iri_to_uri()

def werkzeug.urls.iri_to_uri (iri, charset="utf-8", errors="strict", safe_conversion=False)
Convert an IRI to a URI. All non-ASCII and unsafe characters are
quoted. If the URL has a domain, it is encoded to Punycode.

>>> iri_to_uri('http://\\u2603.net/p\\xe5th?q=\\xe8ry%DF')
'http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF'

:param iri: The IRI to convert.
:param charset: The encoding of the IRI.
:param errors: Error handler to use during ``bytes.encode``.
:param safe_conversion: Return the URL unchanged if it only contains
    ASCII characters and no whitespace. See the explanation below.

There is a general problem with IRI conversion with some protocols
that are in violation of the URI specification. Consider the
following two IRIs::

    magnet:?xt=uri:whatever
    itms-services://?action=download-manifest

After parsing, we don't know if the scheme requires the ``//``,
which is dropped if empty, but conveys different meanings in the
final URL if it's present or not. In this case, you can use
``safe_conversion``, which will return the URL unchanged if it only
contains ASCII characters and no whitespace. This can result in a
URI with unquoted characters if it was not already quoted correctly,
but preserves the URL's semantics. Werkzeug uses this for the
``Location`` header for redirects.
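
For example, the problematic IRIs above behave differently with and without
``safe_conversion`` (a sketch based on the behavior described here; output is
indicative):

>>> from werkzeug.urls import iri_to_uri
>>> iri_to_uri("itms-services://?action=download-manifest")
'itms-services:?action=download-manifest'
>>> iri_to_uri("itms-services://?action=download-manifest", safe_conversion=True)
'itms-services://?action=download-manifest'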

.. versionchanged:: 0.15
    All reserved characters remain unquoted. Previously, only some
    reserved characters were left unquoted.

.. versionchanged:: 0.9.6
   The ``safe_conversion`` parameter was added.

.. versionadded:: 0.6

◆ uri_to_iri()

def werkzeug.urls.uri_to_iri (uri, charset="utf-8", errors="werkzeug.url_quote")
Convert a URI to an IRI. All valid UTF-8 characters are unquoted,
leaving all reserved and invalid characters quoted. If the URL has
a domain, it is decoded from Punycode.

>>> uri_to_iri("http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF")
'http://\\u2603.net/p\\xe5th?q=\\xe8ry%DF'

:param uri: The URI to convert.
:param charset: The encoding to encode unquoted bytes with.
:param errors: Error handler to use during ``bytes.encode``. By
    default, invalid bytes are left quoted.

.. versionchanged:: 0.15
    All reserved and invalid characters remain quoted. Previously,
    only some reserved characters were preserved, and invalid bytes
    were replaced instead of left quoted.

.. versionadded:: 0.6

◆ url_decode()

def werkzeug.urls.url_decode (s, charset="utf-8", decode_keys=False, include_empty=True, errors="replace", separator="&", cls=None)
Parse a querystring and return it as :class:`MultiDict`.  There is a
difference in key decoding on different Python versions.  On Python 3
keys will always be fully decoded whereas on Python 2, keys will
remain bytestrings if they fit into ASCII.  On 2.x keys can be forced
to be unicode by setting `decode_keys` to `True`.

If the charset is set to `None` no unicode decoding will happen and
raw bytes will be returned.

By default, a key without a value is included with an empty string as its
value.  If you don't want that behavior you can set `include_empty` to
`False`.

Decoding errors are handled according to `errors`; the default ``'replace'``
substitutes invalid bytes.  You can set `errors` to ``'ignore'`` or
``'strict'`` instead.  In strict mode a `HTTPUnicodeError` is raised.

.. versionchanged:: 0.5
   In previous versions ";" and "&" could be used for url decoding.
   This changed in 0.5 where only "&" is supported.  If you want to
   use ";" instead a different `separator` can be provided.

   The `cls` parameter was added.

:param s: a string with the query string to decode.
:param charset: the charset of the query string.  If set to `None`
                no unicode decoding will take place.
:param decode_keys: Used on Python 2.x to control whether keys should
                    be forced to be unicode objects.  If set to `True`
                    then keys will be unicode in all cases. Otherwise,
                    they remain `str` if they fit into ASCII.
:param include_empty: Set to `False` if you don't want empty values to
                      appear in the dict.
:param errors: the decoding error behavior.
:param separator: the pair separator to be used, defaults to ``&``
:param cls: an optional dict class to use.  If this is not specified
                   or `None` the default :class:`MultiDict` is used.
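
A short usage sketch (outputs are indicative, not part of the original
docstring):

>>> from werkzeug.urls import url_decode
>>> d = url_decode("a=1&b=2&b=3&empty")
>>> d.getlist("b")
['2', '3']
>>> d["empty"]
''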

◆ url_decode_stream()

def werkzeug.urls.url_decode_stream (stream, charset="utf-8", decode_keys=False, include_empty=True, errors="replace", separator="&", cls=None, limit=None, return_iterator=False)
Works like :func:`url_decode` but decodes a stream.  The behavior
of stream and limit follows functions like
:func:`~werkzeug.wsgi.make_line_iter`.  The generator of pairs is
directly fed to the `cls` so you can consume the data while it's
parsed.

.. versionadded:: 0.8

:param stream: a stream with the encoded querystring
:param charset: the charset of the query string.  If set to `None`
                no unicode decoding will take place.
:param decode_keys: Used on Python 2.x to control whether keys should
                    be forced to be unicode objects.  If set to `True`,
                    keys will be unicode in all cases. Otherwise, they
                    remain `str` if they fit into ASCII.
:param include_empty: Set to `False` if you don't want empty values to
                      appear in the dict.
:param errors: the decoding error behavior.
:param separator: the pair separator to be used, defaults to ``&``
:param cls: an optional dict class to use.  If this is not specified
                   or `None` the default :class:`MultiDict` is used.
:param limit: the content length of the URL data.  Not necessary if
              a limited stream is provided.
:param return_iterator: if set to `True` the `cls` argument is ignored
                        and an iterator over all decoded pairs is
                        returned
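
Example sketch using an in-memory bytes stream (output is indicative):

>>> import io
>>> from werkzeug.urls import url_decode_stream
>>> url_decode_stream(io.BytesIO(b"a=1&b=2")).to_dict()
{'a': '1', 'b': '2'}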

◆ url_encode()

def werkzeug.urls.url_encode (obj, charset="utf-8", encode_keys=False, sort=False, key=None, separator=b"&")
URL encode a dict/`MultiDict`.  If a value is `None` it will not appear
in the result string.  By default only values are encoded into the target
charset.  If `encode_keys` is set to ``True``, unicode keys are supported
too (this only matters on Python 2; it is ignored on Python 3).

If `sort` is set to `True` the items are sorted by `key` or the default
sorting algorithm.

.. versionadded:: 0.5
    `sort`, `key`, and `separator` were added.

:param obj: the object to encode into a query string.
:param charset: the charset of the query string.
:param encode_keys: set to `True` if you have unicode keys. (Ignored on
                    Python 3.x)
:param sort: set to `True` if you want parameters to be sorted by `key`.
:param separator: the separator to be used for the pairs.
:param key: an optional function to be used for sorting.  For more details
            check out the :func:`sorted` documentation.
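
Usage sketch; note that the `None` value is skipped and `sort=True` orders
the pairs by key (output is indicative):

>>> from werkzeug.urls import url_encode
>>> url_encode({"q": "fünf", "page": "2", "skip": None}, sort=True)
'page=2&q=f%C3%BCnf'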

◆ url_encode_stream()

def werkzeug.urls.url_encode_stream (obj, stream=None, charset="utf-8", encode_keys=False, sort=False, key=None, separator=b"&")
Like :func:`url_encode` but writes the results to a stream
object.  If the stream is `None` a generator over all encoded
pairs is returned.

.. versionadded:: 0.8

:param obj: the object to encode into a query string.
:param stream: a stream to write the encoded object into or `None` if
               an iterator over the encoded pairs should be returned.  In
               that case the separator argument is ignored.
:param charset: the charset of the query string.
:param encode_keys: set to `True` if you have unicode keys. (Ignored on
                    Python 3.x)
:param sort: set to `True` if you want parameters to be sorted by `key`.
:param separator: the separator to be used for the pairs.
:param key: an optional function to be used for sorting.  For more details
            check out the :func:`sorted` documentation.
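
Sketch writing to an in-memory text stream (assumes the encoded chunks are
native strings, as on Python 3; output is indicative):

>>> import io
>>> from werkzeug.urls import url_encode_stream
>>> out = io.StringIO()
>>> url_encode_stream({"a": "1", "b": "2"}, out)
>>> out.getvalue()
'a=1&b=2'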

◆ url_fix()

def werkzeug.urls.url_fix (s, charset="utf-8")
Sometimes you get a URL from a user that isn't a real URL because it
contains unsafe characters such as ' ' (a space). This function fixes some
of these problems in a way similar to how browsers handle data entered by
the user:

>>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'

:param s: the string with the URL to fix.
:param charset: The target charset for the URL if the URL was given as a
                unicode string.

◆ url_join()

def werkzeug.urls.url_join (base, url, allow_fragments=True)
Join a base URL and a possibly relative URL to form an absolute
interpretation of the latter.

:param base: the base URL for the join operation.
:param url: the URL to join.
:param allow_fragments: indicates whether fragments should be allowed.
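
For example (output is indicative):

>>> from werkzeug.urls import url_join
>>> url_join("http://example.com/docs/intro.html", "guide.html")
'http://example.com/docs/guide.html'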

◆ url_parse()

def werkzeug.urls.url_parse (url, scheme=None, allow_fragments=True)
Parses a URL from a string into a :class:`URL` tuple.  If the URL
is lacking a scheme it can be provided as the second argument; otherwise
the `scheme` argument is ignored.  Optionally fragments can be stripped
from the URL by setting `allow_fragments` to `False`.

The inverse of this function is :func:`url_unparse`.

:param url: the URL to parse.
:param scheme: the default scheme to use if the URL is schemeless.
:param allow_fragments: if set to `False` a fragment will be removed
                        from the URL.
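
A quick sketch of the resulting :class:`URL` tuple (output is indicative):

>>> from werkzeug.urls import url_parse
>>> u = url_parse("https://example.com/path?x=1#frag")
>>> u.scheme, u.netloc, u.path, u.query, u.fragment
('https', 'example.com', '/path', 'x=1', 'frag')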

◆ url_quote()

def werkzeug.urls.url_quote (string, charset="utf-8", errors="strict", safe="/:", unsafe="")
URL encode a single string with a given encoding.

:param string: the string to quote.
:param charset: the charset to be used.
:param safe: an optional sequence of safe characters.
:param unsafe: an optional sequence of unsafe characters.

.. versionadded:: 0.9.2
   The `unsafe` parameter was added.
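
Sketch showing that characters in `safe` (``/`` and ``:`` by default) are
left as-is (output is indicative):

>>> from werkzeug.urls import url_quote
>>> url_quote("foo bar/baz")
'foo%20bar/baz'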

◆ url_quote_plus()

def werkzeug.urls.url_quote_plus (string, charset="utf-8", errors="strict", safe="")
URL encode a single string with the given encoding and convert
whitespace to "+".

:param string: The string to quote.
:param charset: The charset to be used.
:param safe: An optional sequence of safe characters.
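
Sketch; unlike :func:`url_quote`, spaces become ``+`` and ``/`` is quoted
because `safe` defaults to the empty string (output is indicative):

>>> from werkzeug.urls import url_quote_plus
>>> url_quote_plus("foo bar/baz")
'foo+bar%2Fbaz'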

◆ url_unparse()

def werkzeug.urls.url_unparse (components)
The reverse operation to :func:`url_parse`.  This accepts arbitrary tuples
as well as :class:`URL` tuples and returns a URL as a string.

:param components: the parsed URL as tuple which should be converted
                   into a URL string.
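
Sketch with a plain 5-tuple of ``(scheme, netloc, path, query, fragment)``
(output is indicative):

>>> from werkzeug.urls import url_unparse
>>> url_unparse(("https", "example.com", "/path", "x=1", ""))
'https://example.com/path?x=1'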

◆ url_unquote()

def werkzeug.urls.url_unquote (string, charset="utf-8", errors="replace", unsafe="")
URL decode a single string with a given encoding.  If the charset
is set to `None` no unicode decoding is performed and raw bytes
are returned.

:param string: the string to unquote.
:param charset: the charset of the query string.  If set to `None`
                no unicode decoding will take place.
:param errors: the error handling for the charset decoding.
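
For example (output is indicative):

>>> from werkzeug.urls import url_unquote
>>> url_unquote("hello%20world%21")
'hello world!'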

◆ url_unquote_plus()

def werkzeug.urls.url_unquote_plus (s, charset="utf-8", errors="replace")
URL decode a single string with the given `charset` and decode "+" to
whitespace.

Decoding errors are handled according to `errors`; the default ``'replace'``
substitutes invalid bytes.  You can set `errors` to ``'ignore'`` or
``'strict'`` instead.  In strict mode a :exc:`HTTPUnicodeError` is raised.

:param s: The string to unquote.
:param charset: the charset of the query string.  If set to `None`
                no unicode decoding will take place.
:param errors: The error handling for the `charset` decoding.
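
For example (output is indicative):

>>> from werkzeug.urls import url_unquote_plus
>>> url_unquote_plus("foo+bar%2Fbaz")
'foo bar/baz'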
