OpenQuizz
An application for managing educational content
Data Structures

    class Failure
    class Lexer
    class OptionalLStrip
    class Token
    class TokenStream
    class TokenStreamIterator

Functions

    def describe_token(token)
    def describe_token_expr(expr)
    def count_newlines(value)
    def compile_rules(environment)
    def get_lexer(environment)
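These classes and functions cooperate when a template is lexed. As a rough sketch (assuming Jinja2 is installed; `jinja2.lexer` is an internal module and its details can vary between versions), `Environment.lex()` drives the `Lexer` and yields raw `(lineno, token_type, value)` tuples:

```python
from jinja2 import Environment

# Environment.lex() returns a generator of raw
# (lineno, token_type, value) tuples for a template source.
env = Environment()
for lineno, token_type, value in env.lex("{{ 1 + 2 }}"):
    print(lineno, token_type, value)
```

The higher-level `Lexer.tokenize()` wraps the same output in a `TokenStream` of `Token` tuples and filters ignored token types.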
def jinja2.lexer.compile_rules(environment)
    Compiles all the rules from the environment into a list of rules.
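A minimal sketch of what this produces, assuming a default `Environment` (internal API, subject to change): each rule pairs a token type with an escaped delimiter pattern, ordered so that longer delimiters are matched first.

```python
from jinja2 import Environment
from jinja2.lexer import compile_rules

# With default delimiters ({# #}, {% %}, {{ }}) the result covers the
# comment, block, and variable begin tokens.
rules = compile_rules(Environment())
for token_type, pattern in rules:
    print(token_type, pattern)
```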
def jinja2.lexer.count_newlines(value)
    Count the number of newline characters in the string. This is useful for extensions that filter a stream.
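A quick sketch of the behavior (assuming Jinja2 is installed): the count is based on the module's `newline_re`, which treats `\r\n`, `\r`, and `\n` each as a single newline.

```python
from jinja2.lexer import count_newlines

# Mixed line endings are each counted once: one "\n" plus one "\r\n".
n = count_newlines("line one\nline two\r\nline three")
print(n)
```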
def jinja2.lexer.describe_token(token)
    Returns a description of the token.
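As a small illustration (internal API, so the exact descriptions may vary by version): a `Token` is a `(lineno, type, value)` named tuple, and for a `name` token the description is the token's own value rather than its type.

```python
from jinja2.lexer import Token, describe_token

# For a "name" token the value itself is the best description;
# most other token types are described by their type name.
tok = Token(1, "name", "user")
description = describe_token(tok)
print(description)
```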
def jinja2.lexer.describe_token_expr(expr)
    Like `describe_token` but for token expressions.
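A token expression is a string pairing a type and an optional value, e.g. `"name:endfor"`. A brief sketch (same internal-API caveat as above):

```python
from jinja2.lexer import describe_token_expr

# For "name" expressions the value part after the colon is returned
# directly; a bare type like "integer" is described by its type name.
print(describe_token_expr("name:endfor"))
```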
def jinja2.lexer.get_lexer(environment)
    Return a lexer which is probably cached.
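"Probably cached" because lexers are keyed by the environment's lexing-related settings: a sketch under the assumption of two default environments (identical delimiters), which should share one cached `Lexer` instance.

```python
from jinja2 import Environment
from jinja2.lexer import get_lexer

env = Environment()
lexer = get_lexer(env)

# Two environments with identical syntax settings map to the same
# cache key, hence the same lexer object.
assert get_lexer(Environment()) is lexer

# tokenize() yields a TokenStream of (lineno, type, value) Token tuples.
for token in lexer.tokenize("Hello {{ name }}"):
    print(token.type, token.value)
```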
Variables

    check_ident
    float_re
    ignore_if_empty
    ignored_tokens
    integer_re
    key
    name_re
    newline_re
    operator_re
    operators
    reverse_operators
    string_re
    TOKEN_ADD
    TOKEN_ASSIGN
    TOKEN_BLOCK_BEGIN
    TOKEN_BLOCK_END
    TOKEN_COLON
    TOKEN_COMMA
    TOKEN_COMMENT
    TOKEN_COMMENT_BEGIN
    TOKEN_COMMENT_END
    TOKEN_DATA
    TOKEN_DIV
    TOKEN_DOT
    TOKEN_EOF
    TOKEN_EQ
    TOKEN_FLOAT
    TOKEN_FLOORDIV
    TOKEN_GT
    TOKEN_GTEQ
    TOKEN_INITIAL
    TOKEN_INTEGER
    TOKEN_LBRACE
    TOKEN_LBRACKET
    TOKEN_LINECOMMENT
    TOKEN_LINECOMMENT_BEGIN
    TOKEN_LINECOMMENT_END
    TOKEN_LINESTATEMENT_BEGIN
    TOKEN_LINESTATEMENT_END
    TOKEN_LPAREN
    TOKEN_LT
    TOKEN_LTEQ
    TOKEN_MOD
    TOKEN_MUL
    TOKEN_NAME
    TOKEN_NE
    TOKEN_OPERATOR
    TOKEN_PIPE
    TOKEN_POW
    TOKEN_RAW_BEGIN
    TOKEN_RAW_END
    TOKEN_RBRACE
    TOKEN_RBRACKET
    TOKEN_RPAREN
    TOKEN_SEMICOLON
    TOKEN_STRING
    TOKEN_SUB
    TOKEN_TILDE
    TOKEN_VARIABLE_BEGIN
    TOKEN_VARIABLE_END
    TOKEN_WHITESPACE
    whitespace_re
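A short sketch of how these variables relate (assuming Jinja2 is installed; these are internal module attributes): each `TOKEN_*` variable is simply the string name of a token type, and `operators` / `reverse_operators` map operator source text to and from those names.

```python
from jinja2 import lexer

# TOKEN_* constants are plain (interned) strings naming token types;
# `operators` maps source text like "+" to the matching token name,
# and `reverse_operators` is the inverse mapping.
assert lexer.TOKEN_ADD == "add"
assert lexer.operators["+"] == lexer.TOKEN_ADD
assert lexer.reverse_operators[lexer.TOKEN_PIPE] == "|"
```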