Commit 5eee4fe0 authored by Ted Nyman

Latest pygments

parent e87424cb
Showing 4514 additions and 3370 deletions
@@ -6,8 +6,9 @@ Major developers are Tim Hatch <tim@timhatch.com> and Armin Ronacher
Other contributors, listed alphabetically, are:
 
* Sam Aaron -- Ioke lexer
* Kumar Appaiah -- Debian control lexer
* Ali Afshar -- image formatter
* Thomas Aglassinger -- Rexx lexer
* Kumar Appaiah -- Debian control lexer
* Andreas Amann -- AppleScript lexer
* Timothy Armstrong -- Dart lexer fixes
* Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers
@@ -15,6 +16,7 @@ Other contributors, listed alphabetically, are:
* Stefan Matthias Aust -- Smalltalk lexer
* Ben Bangert -- Mako lexers
* Max Battcher -- Darcs patch lexer
* Tim Baumann -- (Literate) Agda lexer
* Paul Baumgart, 280 North, Inc. -- Objective-J lexer
* Michael Bayer -- Myghty lexers
* John Benediktsson -- Factor lexer
@@ -29,20 +31,25 @@ Other contributors, listed alphabetically, are:
* Christian Jann -- ShellSession lexer
* Christopher Creutzig -- MuPAD lexer
* Pete Curry -- bugfixes
* Owen Durni -- haXe lexer
* Bryan Davis -- EBNF lexer
* Owen Durni -- Haxe lexer
* Nick Efford -- Python 3 lexer
* Sven Efftinge -- Xtend lexer
* Artem Egorkine -- terminal256 formatter
* James H. Fisher -- PostScript lexer
* William S. Fulton -- SWIG lexer
* Carlos Galdino -- Elixir and Elixir Console lexers
* Michael Galloy -- IDL lexer
* Naveen Garg -- Autohotkey lexer
* Laurent Gautier -- R/S lexer
* Alex Gaynor -- PyPy log lexer
* Richard Gerkin -- Igor Pro lexer
* Alain Gilbert -- TypeScript lexer
* Alex Gilding -- BlitzBasic lexer
* Bertrand Goetzmann -- Groovy lexer
* Krzysiek Goj -- Scala lexer
* Matt Good -- Genshi, Cheetah lexers
* Michał Górny -- vim modeline support
* Patrick Gotthardt -- PHP namespaces support
* Olivier Guibe -- Asymptote lexer
* Jordi Gutiérrez Hermoso -- Octave lexer
@@ -53,6 +60,7 @@ Other contributors, listed alphabetically, are:
* Greg Hendershott -- Racket lexer
* David Hess, Fish Software, Inc. -- Objective-J lexer
* Varun Hiremath -- Debian control lexer
* Rob Hoelz -- Perl 6 lexer
* Doug Hogan -- Mscgen lexer
* Ben Hollis -- Mason lexer
* Dustin Howett -- Logos lexer
@@ -64,6 +72,7 @@ Other contributors, listed alphabetically, are:
* Igor Kalnitsky -- vhdl lexer
* Pekka Klärck -- Robot Framework lexer
* Eric Knibbe -- Lasso lexer
* Stepan Koltsov -- Clay lexer
* Adam Koprowski -- Opa lexer
* Benjamin Kowarsch -- Modula-2 lexer
* Alexander Kriegisch -- Kconfig and AspectJ lexers
@@ -97,6 +106,7 @@ Other contributors, listed alphabetically, are:
* Mike Nolta -- Julia lexer
* Jonas Obrist -- BBCode lexer
* David Oliva -- Rebol lexer
* Pat Pannuto -- nesC lexer
* Jon Parise -- Protocol buffers lexer
* Ronny Pfannschmidt -- BBCode lexer
* Benjamin Peterson -- Test suite refactoring
@@ -6,6 +6,56 @@ Issue numbers refer to the tracker at
pull request numbers to the requests at
<http://bitbucket.org/birkenfeld/pygments-main/pull-requests/merged>.
 
Version 1.7
-----------
(under development)

- Lexers added:
* Clay (PR#184)
* Perl 6 (PR#181)
* Swig (PR#168)
* nesC (PR#166)
* BlitzBasic (PR#197)
* EBNF (PR#193)
* Igor Pro (PR#172)
* Rexx (PR#199)
* Agda and Literate Agda (PR#203)
- Pygments will now recognize "vim" modelines when guessing the lexer for
a file based on content (PR#118).
- The NameHighlightFilter now works with any Name.* token type (#790).
- Python 3 lexer: add new exceptions from PEP 3151.
- Opa lexer: add new keywords (PR#170).
- Julia lexer: add keywords and underscore-separated number
literals (PR#176).
- Lasso lexer: fix method highlighting, update builtins. Fix
guessing so that plain XML isn't always taken as Lasso (PR#163).
- Objective C/C++ lexers: allow "@" prefixing any expression (#871).
- Ruby lexer: fix lexing of Name::Space tokens (#860).
- Stan lexer: update for version 1.3.0 of the language (PR#162).
- JavaScript lexer: add the "yield" keyword (PR#196).
- HTTP lexer: support for PATCH method (PR#190).
- Koka lexer: update to newest language spec (PR#201).
- Haxe lexer: rewrite and support for Haxe 3 (PR#174).
- Prolog lexer: add different kinds of numeric literals (#864).
- F# lexer: rewrite with newest spec for F# 3.0 (#842).

Version 1.6
-----------
(released Feb 3, 2013)
@@ -259,7 +309,7 @@ Version 1.3
* Ada
* Coldfusion
* Modula-2
* haXe
* Haxe
* R console
* Objective-J
* Haml and Sass
@@ -318,7 +368,7 @@ Version 1.2
* CMake
* Ooc
* Coldfusion
* haXe
* Haxe
* R console
 
- Added options for rendering LaTeX in source code comments in the
@@ -83,6 +83,58 @@ If no rule matches at the current position, the current char is emitted as an
1.
 
 
Adding and testing a new lexer
==============================
To make Pygments aware of your new lexer, you have to perform the following
steps:

First, change into the directory containing the Pygments source code:

.. sourcecode:: console

    $ cd .../pygments-main
Next, make sure the lexer is known from outside of the module. All modules in
the ``pygments.lexers`` package specify ``__all__``. For example, ``other.py``
sets:

.. sourcecode:: python

    __all__ = ['BrainfuckLexer', 'BefungeLexer', ...]

Simply add the name of your lexer class to this list.
Finally, the lexer can be made publicly known by rebuilding the lexer
mapping:

.. sourcecode:: console

    $ make mapfiles
To test the new lexer, store an example file with the proper extension in
``tests/examplefiles``. For example, to test your ``DiffLexer``, add a
``tests/examplefiles/example.diff`` containing a sample diff output.

Now you can use pygmentize to render your example to HTML:

.. sourcecode:: console

    $ ./pygmentize -O full -f html -o /tmp/example.html tests/examplefiles/example.diff
Note that this explicitly calls the ``pygmentize`` in the current directory
by preceding it with ``./``. This ensures your modifications are used;
otherwise an already installed, unmodified version without your new lexer
would be picked up from the system search path (``$PATH``).

To view the result, open ``/tmp/example.html`` in your browser.

Once the example renders as expected, you should run the complete test suite:

.. sourcecode:: console

    $ make test
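For orientation, here is a minimal sketch of the kind of class that gets
registered this way. The lexer, its alias, and its token rules are purely
illustrative, not part of Pygments:

.. sourcecode:: python

    from pygments.lexer import RegexLexer
    from pygments.token import Comment, Keyword, Text

    class ExampleLexer(RegexLexer):
        """Hypothetical lexer, shown only to illustrate registration."""
        name = 'Example'
        aliases = ['example']
        filenames = ['*.example']

        tokens = {
            'root': [
                (r'\s+', Text),                   # whitespace
                (r'#.*?$', Comment.Single),       # line comments
                (r'\b(if|else|end)\b', Keyword),  # a few keywords
                (r'\S+', Text),                   # everything else
            ],
        }

Once such a class is added to a module like ``other.py`` and listed in its
``__all__``, ``make mapfiles`` picks it up as described above.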
Regex Flags
===========
 
@@ -5,13 +5,19 @@
 
This is the shell script that was used to extract Lasso 9's built-in keywords
and generate most of the _lassobuiltins.py file. When run, it creates a file
named "lassobuiltins-9.py" containing the types, traits, and methods of the
currently-installed version of Lasso 9.
named "lassobuiltins-9.py" containing the types, traits, methods, and members
of the currently-installed version of Lasso 9.
 
A partial list of keywords in Lasso 8 can be generated with this code:
A list of tags in Lasso 8 can be generated with this code:
 
<?LassoScript
local('l8tags' = list);
local('l8tags' = list,
'l8libs' = array('Cache','ChartFX','Client','Database','File','HTTP',
'iCal','Lasso','Link','List','PDF','Response','Stock','String',
'Thread','Valid','WAP','XML'));
iterate(#l8libs, local('library'));
local('result' = namespace_load(#library));
/iterate;
iterate(tags_list, local('i'));
#l8tags->insert(string_removeleading(#i, -pattern='_global_'));
/iterate;
@@ -30,9 +36,12 @@ local(f) = file("lassobuiltins-9.py")
#f->writeString('# -*- coding: utf-8 -*-
"""
pygments.lexers._lassobuiltins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Built-in Lasso types, traits, methods, and members.
 
Built-in Lasso types, traits, and methods.
:copyright: Copyright 2006-'+date->year+' by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
 
')
@@ -42,16 +51,16 @@ lcapi_loadModules
// Load all of the libraries from builtins and lassoserver
// This forces all possible available types and methods to be registered
local(srcs =
tie(
dir(sys_masterHomePath + 'LassoLibraries/builtins/')->eachFilePath,
dir(sys_masterHomePath + 'LassoLibraries/lassoserver/')->eachFilePath
)
tie(
dir(sys_masterHomePath + 'LassoLibraries/builtins/')->eachFilePath,
dir(sys_masterHomePath + 'LassoLibraries/lassoserver/')->eachFilePath
)
)
 
with topLevelDir in #srcs
where !#topLevelDir->lastComponent->beginsWith('.')
where not #topLevelDir->lastComponent->beginsWith('.')
do protect => {
handle_error => {
handle_error => {
stdoutnl('Unable to load: ' + #topLevelDir + ' ' + error_msg)
}
library_thread_loader->loadLibrary(#topLevelDir)
@@ -61,60 +70,74 @@ do protect => {
local(
typesList = list(),
traitsList = list(),
methodsList = list()
unboundMethodsList = list(),
memberMethodsList = list()
)
 
// unbound methods
with method in sys_listUnboundMethods
where !#method->methodName->asString->endsWith('=')
where #method->methodName->asString->isalpha(1)
where #methodsList !>> #method->methodName->asString
do #methodsList->insert(#method->methodName->asString)
// types
with type in sys_listTypes
where #typesList !>> #type
do {
#typesList->insert(#type)
with method in #type->getType->listMethods
let name = #method->methodName
where not #name->asString->endsWith('=') // skip setter methods
where #name->asString->isAlpha(1) // skip unpublished methods
where #memberMethodsList !>> #name
do #memberMethodsList->insert(#name)
}
 
// traits
with trait in sys_listTraits
where !#trait->asString->beginsWith('$')
where #traitsList !>> #trait->asString
where not #trait->asString->beginsWith('$') // skip combined traits
where #traitsList !>> #trait
do {
#traitsList->insert(#trait->asString)
with tmethod in tie(#trait->getType->provides, #trait->getType->requires)
where !#tmethod->methodName->asString->endsWith('=')
where #tmethod->methodName->asString->isalpha(1)
where #methodsList !>> #tmethod->methodName->asString
do #methodsList->insert(#tmethod->methodName->asString)
#traitsList->insert(#trait)
with method in tie(#trait->getType->provides, #trait->getType->requires)
let name = #method->methodName
where not #name->asString->endsWith('=') // skip setter methods
where #name->asString->isAlpha(1) // skip unpublished methods
where #memberMethodsList !>> #name
do #memberMethodsList->insert(#name)
}
 
// types
with type in sys_listTypes
where #typesList !>> #type->asString
do {
#typesList->insert(#type->asString)
with tmethod in #type->getType->listMethods
where !#tmethod->methodName->asString->endsWith('=')
where #tmethod->methodName->asString->isalpha(1)
where #methodsList !>> #tmethod->methodName->asString
do #methodsList->insert(#tmethod->methodName->asString)
}
// unbound methods
with method in sys_listUnboundMethods
let name = #method->methodName
where not #name->asString->endsWith('=') // skip setter methods
where #name->asString->isAlpha(1) // skip unpublished methods
where #typesList !>> #name
where #traitsList !>> #name
where #unboundMethodsList !>> #name
do #unboundMethodsList->insert(#name)
 
#f->writeString("BUILTINS = {
'Types': [
")
with t in #typesList
do #f->writeString(" '"+string_lowercase(#t)+"',\n")
do !#t->asString->endsWith('$') ? #f->writeString(" '"+string_lowercase(#t->asString)+"',\n")
 
#f->writeString(" ],
'Traits': [
")
with t in #traitsList
do #f->writeString(" '"+string_lowercase(#t)+"',\n")
do #f->writeString(" '"+string_lowercase(#t->asString)+"',\n")
 
#f->writeString(" ],
'Methods': [
'Unbound Methods': [
")
with t in #methodsList
do #f->writeString(" '"+string_lowercase(#t)+"',\n")
with t in #unboundMethodsList
do #f->writeString(" '"+string_lowercase(#t->asString)+"',\n")
 
#f->writeString(" ],
#f->writeString(" ]
}
MEMBERS = {
'Member Methods': [
")
with t in #memberMethodsList
do #f->writeString(" '"+string_lowercase(#t->asString)+"',\n")
#f->writeString(" ]
}
")
 
#!/usr/bin/env python
#!/usr/bin/env python2
 
import sys, pygments.cmdline
try:
@@ -129,7 +129,7 @@ class KeywordCaseFilter(Filter):
 
class NameHighlightFilter(Filter):
"""
Highlight a normal Name token with a different token type.
Highlight a normal Name (and Name.*) token with a different token type.
 
Example::
 
@@ -163,7 +163,7 @@ class NameHighlightFilter(Filter):
 
def filter(self, lexer, stream):
for ttype, value in stream:
if ttype is Name and value in self.names:
if ttype in Name and value in self.names:
yield self.tokentype, value
else:
yield ttype, value
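A small usage sketch of what the relaxed check enables (the names and the
chosen highlight token are illustrative): ``compute`` below is lexed as
``Name.Function``, which the old ``ttype is Name`` test never matched, while
``ttype in Name`` does.

    from pygments.lexers import PythonLexer
    from pygments.filters import NameHighlightFilter
    from pygments.token import Name

    lexer = PythonLexer()
    # Any Name.* occurrence of 'compute' is now rewritten to Name.Exception.
    lexer.add_filter(NameHighlightFilter(names=['compute'],
                                         tokentype=Name.Exception))
    for ttype, value in lexer.get_tokens('def compute(): pass\n'):
        print(ttype, repr(value))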
@@ -68,6 +68,9 @@ class Formatter(object):
self.full = get_bool_opt(options, 'full', False)
self.title = options.get('title', '')
self.encoding = options.get('encoding', None) or None
if self.encoding == 'guess':
# can happen for pygmentize -O encoding=guess
self.encoding = 'utf-8'
self.encoding = options.get('outencoding', None) or self.encoding
self.options = options
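A short sketch of the effect (hypothetical call): ``guess`` is only meaningful
as an *input* encoding, so when it reaches a formatter it now degrades to
UTF-8 instead of being passed through.

    from pygments.formatters import HtmlFormatter

    # Mirrors what happens for: pygmentize -O encoding=guess ...
    fmt = HtmlFormatter(encoding='guess')
    assert fmt.encoding == 'utf-8'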
 
@@ -15,6 +15,7 @@ import fnmatch
from os.path import basename
 
from pygments.lexers._mapping import LEXERS
from pygments.modeline import get_filetype_from_buffer
from pygments.plugin import find_plugin_lexers
from pygments.util import ClassNotFound, bytes
 
@@ -197,6 +198,16 @@ def guess_lexer(_text, **options):
"""
Guess a lexer by strong distinctions in the text (eg, shebang).
"""
# try to get a vim modeline first
ft = get_filetype_from_buffer(_text)
if ft is not None:
try:
return get_lexer_by_name(ft, **options)
except ClassNotFound:
pass
best_lexer = [0.0, None]
for lexer in _iter_lexerclasses():
rv = lexer.analyse_text(_text)
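From the caller's side the new code path looks like this sketch (the sample
text is invented): a buffer with no other strong signal but a vim modeline
naming a filetype should now resolve through ``get_lexer_by_name``.

    from pygments.lexers import guess_lexer

    text = 'some otherwise anonymous text\n# vim: ft=python\n'
    print(guess_lexer(text).name)  # expected to pick the Python lexer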
@@ -163,7 +163,7 @@ class RowSplitter(object):
def split(self, row):
splitter = (row.startswith('| ') and self._split_from_pipes
or self._split_from_spaces)
for value in splitter(row.rstrip()):
for value in splitter(row):
yield value
yield '\n'
 
# -*- coding: utf-8 -*-
"""
pygments.lexers._stan_builtins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pygments.lexers._stan_builtins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
This file contains the names of functions for Stan used by
``pygments.lexers.math.StanLexer``.
This file contains the names of functions for Stan used by
``pygments.lexers.math.StanLexer``.
 
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
:copyright: Copyright 2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
 
CONSTANTS=[ 'e',
'epsilon',
'log10',
'log2',
'negative_epsilon',
'negative_infinity',
'not_a_number',
'pi',
'positive_infinity',
'sqrt2']
KEYWORDS = ['else', 'for', 'if', 'in', 'lower', 'lp__', 'print', 'upper', 'while']
TYPES = [ 'corr_matrix',
'cov_matrix',
'int',
'matrix',
'ordered',
'positive_ordered',
'real',
'row_vector',
'simplex',
'unit_vector',
'vector']
 
FUNCTIONS=[ 'Phi',
FUNCTIONS = [ 'Phi',
'Phi_approx',
'abs',
'acos',
'acosh',
@@ -30,37 +34,66 @@ FUNCTIONS=[ 'Phi',
'atan',
'atan2',
'atanh',
'bernoulli_cdf',
'bernoulli_log',
'bernoulli_logit_log',
'bernoulli_rng',
'beta_binomial_cdf',
'beta_binomial_log',
'beta_binomial_rng',
'beta_cdf',
'beta_log',
'beta_rng',
'binary_log_loss',
'binomial_cdf',
'binomial_coefficient_log',
'binomial_log',
'binomial_logit_log',
'binomial_rng',
'block',
'categorical_log',
'categorical_rng',
'cauchy_cdf',
'cauchy_log',
'cauchy_rng',
'cbrt',
'ceil',
'chi_square_log',
'chi_square_rng',
'cholesky_decompose',
'col',
'cols',
'cos',
'cosh',
'crossprod',
'cumulative_sum',
'determinant',
'diag_matrix',
'diag_post_multiply',
'diag_pre_multiply',
'diagonal',
'dims',
'dirichlet_log',
'dirichlet_rng',
'dot_product',
'dot_self',
'double_exponential_log',
'eigenvalues',
'double_exponential_rng',
'e',
'eigenvalues_sym',
'eigenvectors_sym',
'epsilon',
'erf',
'erfc',
'exp',
'exp2',
'exp_mod_normal_cdf',
'exp_mod_normal_log',
'exp_mod_normal_rng',
'expm1',
'exponential_cdf',
'exponential_log',
'exponential_rng',
'fabs',
'fdim',
'floor',
@@ -69,85 +102,148 @@ FUNCTIONS=[ 'Phi',
'fmin',
'fmod',
'gamma_log',
'gamma_rng',
'gumbel_cdf',
'gumbel_log',
'gumbel_rng',
'hypergeometric_log',
'hypergeometric_rng',
'hypot',
'if_else',
'int_step',
'inv_chi_square_cdf',
'inv_chi_square_log',
'inv_chi_square_rng',
'inv_cloglog',
'inv_gamma_cdf',
'inv_gamma_log',
'inv_gamma_rng',
'inv_logit',
'inv_wishart_log',
'inv_wishart_rng',
'inverse',
'lbeta',
'lgamma',
'lkj_corr_cholesky_log',
'lkj_corr_cholesky_rng',
'lkj_corr_log',
'lkj_corr_rng',
'lkj_cov_log',
'lmgamma',
'log',
'log10',
'log1m',
'log1m_inv_logit',
'log1p',
'log1p_exp',
'log2',
'log_determinant',
'log_inv_logit',
'log_sum_exp',
'logistic_cdf',
'logistic_log',
'logistic_rng',
'logit',
'lognormal_cdf',
'lognormal_log',
'lognormal_rng',
'max',
'mdivide_left_tri_low',
'mdivide_right_tri_low',
'mean',
'min',
'multi_normal_cholesky_log',
'multi_normal_log',
'multi_normal_prec_log',
'multi_normal_rng',
'multi_student_t_log',
'multi_student_t_rng',
'multinomial_cdf',
'multinomial_log',
'multinomial_rng',
'multiply_log',
'multiply_lower_tri_self_transpose',
'neg_binomial_cdf',
'neg_binomial_log',
'neg_binomial_rng',
'negative_epsilon',
'negative_infinity',
'normal_cdf',
'normal_log',
'normal_rng',
'not_a_number',
'ordered_logistic_log',
'ordered_logistic_rng',
'owens_t',
'pareto_cdf',
'pareto_log',
'pareto_rng',
'pi',
'poisson_cdf',
'poisson_log',
'poisson_log_log',
'poisson_rng',
'positive_infinity',
'pow',
'prod',
'rep_array',
'rep_matrix',
'rep_row_vector',
'rep_vector',
'round',
'row',
'rows',
'scaled_inv_chi_square_cdf',
'scaled_inv_chi_square_log',
'scaled_inv_chi_square_rng',
'sd',
'sin',
'singular_values',
'sinh',
'size',
'skew_normal_cdf',
'skew_normal_log',
'skew_normal_rng',
'softmax',
'sqrt',
'sqrt2',
'square',
'step',
'student_t_cdf',
'student_t_log',
'student_t_rng',
'sum',
'tan',
'tanh',
'tcrossprod',
'tgamma',
'trace',
'trunc',
'uniform_log',
'uniform_rng',
'variance',
'weibull_cdf',
'weibull_log',
'wishart_log']
'weibull_rng',
'wishart_log',
'wishart_rng']
 
DISTRIBUTIONS=[ 'bernoulli',
DISTRIBUTIONS = [ 'bernoulli',
'bernoulli_logit',
'beta',
'beta_binomial',
'binomial',
'binomial_coefficient',
'binomial_logit',
'categorical',
'cauchy',
'chi_square',
'dirichlet',
'double_exponential',
'exp_mod_normal',
'exponential',
'gamma',
'gumbel',
'hypergeometric',
'inv_chi_square',
'inv_gamma',
@@ -159,16 +255,106 @@ DISTRIBUTIONS=[ 'bernoulli',
'lognormal',
'multi_normal',
'multi_normal_cholesky',
'multi_normal_prec',
'multi_student_t',
'multinomial',
'multiply',
'neg_binomial',
'normal',
'ordered_logistic',
'pareto',
'poisson',
'poisson_log',
'scaled_inv_chi_square',
'skew_normal',
'student_t',
'uniform',
'weibull',
'wishart']
 
RESERVED = [ 'alignas',
'alignof',
'and',
'and_eq',
'asm',
'auto',
'bitand',
'bitor',
'bool',
'break',
'case',
'catch',
'char',
'char16_t',
'char32_t',
'class',
'compl',
'const',
'const_cast',
'constexpr',
'continue',
'decltype',
'default',
'delete',
'do',
'double',
'dynamic_cast',
'enum',
'explicit',
'export',
'extern',
'false',
'false',
'float',
'friend',
'goto',
'inline',
'int',
'long',
'mutable',
'namespace',
'new',
'noexcept',
'not',
'not_eq',
'nullptr',
'operator',
'or',
'or_eq',
'private',
'protected',
'public',
'register',
'reinterpret_cast',
'repeat',
'return',
'short',
'signed',
'sizeof',
'static',
'static_assert',
'static_cast',
'struct',
'switch',
'template',
'then',
'this',
'thread_local',
'throw',
'true',
'true',
'try',
'typedef',
'typeid',
'typename',
'union',
'unsigned',
'until',
'using',
'virtual',
'void',
'volatile',
'wchar_t',
'xor',
'xor_eq']
@@ -25,7 +25,7 @@ class GasLexer(RegexLexer):
For Gas (AT&T) assembly code.
"""
name = 'GAS'
aliases = ['gas']
aliases = ['gas', 'asm']
filenames = ['*.s', '*.S']
mimetypes = ['text/x-gas']
 
@@ -244,7 +244,7 @@ class LlvmLexer(RegexLexer):
r'|align|addrspace|section|alias|module|asm|sideeffect|gc|dbg'
 
r'|ccc|fastcc|coldcc|x86_stdcallcc|x86_fastcallcc|arm_apcscc'
r'|arm_aapcscc|arm_aapcs_vfpcc'
r'|arm_aapcscc|arm_aapcs_vfpcc|ptx_device|ptx_kernel'
 
r'|cc|c'
 
@@ -23,13 +23,14 @@ from pygments.scanner import Scanner
from pygments.lexers.functional import OcamlLexer
from pygments.lexers.jvm import JavaLexer, ScalaLexer
 
__all__ = ['CLexer', 'CppLexer', 'DLexer', 'DelphiLexer', 'ECLexer', 'DylanLexer',
'ObjectiveCLexer', 'ObjectiveCppLexer', 'FortranLexer', 'GLShaderLexer',
'PrologLexer', 'CythonLexer', 'ValaLexer', 'OocLexer', 'GoLexer',
'FelixLexer', 'AdaLexer', 'Modula2Lexer', 'BlitzMaxLexer',
'NimrodLexer', 'FantomLexer', 'RustLexer', 'CudaLexer', 'MonkeyLexer',
__all__ = ['CLexer', 'CppLexer', 'DLexer', 'DelphiLexer', 'ECLexer',
'NesCLexer', 'DylanLexer', 'ObjectiveCLexer', 'ObjectiveCppLexer',
'FortranLexer', 'GLShaderLexer', 'PrologLexer', 'CythonLexer',
'ValaLexer', 'OocLexer', 'GoLexer', 'FelixLexer', 'AdaLexer',
'Modula2Lexer', 'BlitzMaxLexer', 'BlitzBasicLexer', 'NimrodLexer',
'FantomLexer', 'RustLexer', 'CudaLexer', 'MonkeyLexer', 'SwigLexer',
'DylanLidLexer', 'DylanConsoleLexer', 'CobolLexer',
'CobolFreeformatLexer', 'LogosLexer']
'CobolFreeformatLexer', 'LogosLexer', 'ClayLexer']
 
 
class CFamilyLexer(RegexLexer):
@@ -231,6 +232,63 @@ class CppLexer(CFamilyLexer):
return 0.1
 
 
class SwigLexer(CppLexer):
"""
For `SWIG <http://www.swig.org/>`_ source code.
*New in Pygments 1.7.*
"""
name = 'SWIG'
aliases = ['Swig', 'swig']
filenames = ['*.swg', '*.i']
mimetypes = ['text/swig']
priority = 0.04 # Lower than C/C++ and Objective C/C++
tokens = {
'statements': [
(r'(%[a-z_][a-z0-9_]*)', Name.Function), # SWIG directives
('\$\**\&?[a-zA-Z0-9_]+', Name), # Special variables
(r'##*[a-zA-Z_][a-zA-Z0-9_]*', Comment.Preproc), # Stringification / additional preprocessor directives
inherit,
],
}
# This is a far from complete set of SWIG directives
swig_directives = (
# Most common directives
'%apply', '%define', '%director', '%enddef', '%exception', '%extend',
'%feature', '%fragment', '%ignore', '%immutable', '%import', '%include',
'%inline', '%insert', '%module', '%newobject', '%nspace', '%pragma',
'%rename', '%shared_ptr', '%template', '%typecheck', '%typemap',
# Less common directives
'%arg', '%attribute', '%bang', '%begin', '%callback', '%catches', '%clear',
'%constant', '%copyctor', '%csconst', '%csconstvalue', '%csenum',
'%csmethodmodifiers', '%csnothrowexception', '%default', '%defaultctor',
'%defaultdtor', '%defined', '%delete', '%delobject', '%descriptor',
'%exceptionclass', '%exceptionvar', '%extend_smart_pointer', '%fragments',
'%header', '%ifcplusplus', '%ignorewarn', '%implicit', '%implicitconv',
'%init', '%javaconst', '%javaconstvalue', '%javaenum', '%javaexception',
'%javamethodmodifiers', '%kwargs', '%luacode', '%mutable', '%naturalvar',
'%nestedworkaround', '%perlcode', '%pythonabc', '%pythonappend',
'%pythoncallback', '%pythoncode', '%pythondynamic', '%pythonmaybecall',
'%pythonnondynamic', '%pythonprepend', '%refobject', '%shadow', '%sizeof',
'%trackobjects', '%types', '%unrefobject', '%varargs', '%warn', '%warnfilter')
def analyse_text(text):
rv = 0.1 # Same as C/C++
# Search for SWIG directives, which are conventionally at the beginning of
# a line. The probability of them being within a line is low, so let another
# lexer win in this case.
matches = re.findall(r'^\s*(%[a-z_][a-z0-9_]*)', text, re.M)
for m in matches:
if m in SwigLexer.swig_directives:
rv = 0.98
break
else:
rv = 0.91 # Fraction higher than MatlabLexer
return rv
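As a sketch of the intended scoring (the sample input is made up), a file that
leads with known directives should now win the guessing contest against the
plain C/C++ lexers:

    from pygments.lexers import guess_lexer

    # '%module' and '%include' are both in swig_directives, so
    # analyse_text returns 0.98 for this text.
    sample = '%module example\n%include "std_string.i"\n'
    print(guess_lexer(sample).name)  # expected: 'SWIG'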
class ECLexer(CLexer):
"""
For eC source code with preprocessor directives.
@@ -266,6 +324,83 @@ class ECLexer(CLexer):
}
 
 
class NesCLexer(CLexer):
"""
For `nesC <https://github.com/tinyos/nesc>`_ source code with preprocessor
directives.
*New in Pygments 1.7.*
"""
name = 'nesC'
aliases = ['nesc']
filenames = ['*.nc']
mimetypes = ['text/x-nescsrc']
tokens = {
'statements': [
(r'(abstract|as|async|atomic|call|command|component|components|'
r'configuration|event|extends|generic|implementation|includes|'
r'interface|module|new|norace|post|provides|signal|task|uses)\b',
Keyword),
(r'(nx_struct|nx_union|nx_int8_t|nx_int16_t|nx_int32_t|nx_int64_t|'
r'nx_uint8_t|nx_uint16_t|nx_uint32_t|nx_uint64_t)\b',
Keyword.Type),
inherit,
],
}
class ClayLexer(RegexLexer):
"""
For `Clay <http://claylabs.com/clay/>`_ source.
*New in Pygments 1.7.*
"""
name = 'Clay'
filenames = ['*.clay']
aliases = ['clay']
mimetypes = ['text/x-clay']
tokens = {
'root': [
(r'\s', Text),
(r'//.*?$', Comment.Singleline),
(r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline),
(r'\b(public|private|import|as|record|variant|instance'
r'|define|overload|default|external|alias'
r'|rvalue|ref|forward|inline|noinline|forceinline'
r'|enum|var|and|or|not|if|else|goto|return|while'
r'|switch|case|break|continue|for|in|true|false|try|catch|throw'
r'|finally|onerror|staticassert|eval|when|newtype'
r'|__FILE__|__LINE__|__COLUMN__|__ARG__'
r')\b', Keyword),
(r'[~!%^&*+=|:<>/-]', Operator),
(r'[#(){}\[\],;.]', Punctuation),
(r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex),
(r'\d+[LlUu]*', Number.Integer),
(r'\b(true|false)\b', Name.Builtin),
(r'(?i)[a-z_?][a-z_?0-9]*', Name),
(r'"""', String, 'tdqs'),
(r'"', String, 'dqs'),
],
'strings': [
(r'(?i)\\(x[0-9a-f]{2}|.)', String.Escape),
(r'.', String),
],
'nl': [
(r'\n', String),
],
'dqs': [
(r'"', String, '#pop'),
include('strings'),
],
'tdqs': [
(r'"""', String, '#pop'),
include('strings'),
include('nl'),
],
}
class DLexer(RegexLexer):
"""
For D source.
@@ -1216,6 +1351,8 @@ def objective(baselexer):
('#pop', 'oc_classname')),
(r'(@class|@protocol)(\s+)', bygroups(Keyword, Text),
('#pop', 'oc_forward_classname')),
# @ can also prefix other expressions like @{...} or @(...)
(r'@', Punctuation),
inherit,
],
'oc_classname' : [
@@ -1471,7 +1608,15 @@ class PrologLexer(RegexLexer):
(r'^#.*', Comment.Single),
(r'/\*', Comment.Multiline, 'nested-comment'),
(r'%.*', Comment.Single),
(r'[0-9]+', Number),
# character literal
(r'0\'.', String.Char),
(r'0b[01]+', Number.Bin),
(r'0o[0-7]+', Number.Oct),
(r'0x[0-9a-fA-F]+', Number.Hex),
# literal with prepended base
(r'\d\d?\'[a-zA-Z0-9]+', Number.Integer),
(r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number.Float),
(r'\d+', Number.Integer),
(r'[\[\](){}|.,;!]', Punctuation),
(r':-|-->', Punctuation),
(r'"(?:\\x[0-9a-fA-F]+\\|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|'
@@ -1522,7 +1667,7 @@ class CythonLexer(RegexLexer):
"""
 
name = 'Cython'
aliases = ['cython', 'pyx']
aliases = ['cython', 'pyx', 'pyrex']
filenames = ['*.pyx', '*.pxd', '*.pxi']
mimetypes = ['text/x-cython', 'application/x-cython']
 
@@ -2581,6 +2726,88 @@ class BlitzMaxLexer(RegexLexer):
}
 
 
class BlitzBasicLexer(RegexLexer):
"""
For `BlitzBasic <http://blitzbasic.com>`_ source code.
*New in Pygments 1.7.*
"""
name = 'BlitzBasic'
aliases = ['blitzbasic', 'b3d', 'bplus']
filenames = ['*.bb', '*.decls']
mimetypes = ['text/x-bb']
bb_vopwords = (r'\b(Shl|Shr|Sar|Mod|Or|And|Not|'
r'Abs|Sgn|Handle|Int|Float|Str|'
r'First|Last|Before|After)\b')
bb_sktypes = r'@{1,2}|[#$%]'
bb_name = r'[a-z][a-z0-9_]*'
bb_var = (r'(%s)(?:([ \t]*)(%s)|([ \t]*)([.])([ \t]*)(?:(%s)))?') % \
(bb_name, bb_sktypes, bb_name)
flags = re.MULTILINE | re.IGNORECASE
tokens = {
'root': [
# Text
(r'[ \t]+', Text),
# Comments
(r";.*?\n", Comment.Single),
# Data types
('"', String.Double, 'string'),
# Numbers
(r'[0-9]+\.[0-9]*(?!\.)', Number.Float),
(r'\.[0-9]+(?!\.)', Number.Float),
(r'[0-9]+', Number.Integer),
(r'\$[0-9a-f]+', Number.Hex),
(r'\%[10]+', Number), # Binary
# Other
(r'(?:%s|([+\-*/~=<>^]))' % (bb_vopwords), Operator),
(r'[(),:\[\]\\]', Punctuation),
(r'\.([ \t]*)(%s)' % bb_name, Name.Label),
# Identifiers
(r'\b(New)\b([ \t]+)(%s)' % (bb_name),
bygroups(Keyword.Reserved, Text, Name.Class)),
(r'\b(Gosub|Goto)\b([ \t]+)(%s)' % (bb_name),
bygroups(Keyword.Reserved, Text, Name.Label)),
(r'\b(Object)\b([ \t]*)([.])([ \t]*)(%s)\b' % (bb_name),
bygroups(Operator, Text, Punctuation, Text, Name.Class)),
(r'\b%s\b([ \t]*)(\()' % bb_var,
bygroups(Name.Function, Text, Keyword.Type,Text, Punctuation,
Text, Name.Class, Text, Punctuation)),
(r'\b(Function)\b([ \t]+)%s' % bb_var,
bygroups(Keyword.Reserved, Text, Name.Function, Text, Keyword.Type,
Text, Punctuation, Text, Name.Class)),
(r'\b(Type)([ \t]+)(%s)' % (bb_name),
bygroups(Keyword.Reserved, Text, Name.Class)),
# Keywords
(r'\b(Pi|True|False|Null)\b', Keyword.Constant),
(r'\b(Local|Global|Const|Field|Dim)\b', Keyword.Declaration),
(r'\b(End|Return|Exit|'
r'Chr|Len|Asc|'
r'New|Delete|Insert|'
r'Include|'
r'Function|'
r'Type|'
r'If|Then|Else|ElseIf|EndIf|'
r'For|To|Next|Step|Each|'
r'While|Wend|'
r'Repeat|Until|Forever|'
r'Select|Case|Default|'
r'Goto|Gosub|Data|Read|Restore)\b', Keyword.Reserved),
# Final resolve (for variable names and such)
# (r'(%s)' % (bb_name), Name.Variable),
(bb_var, bygroups(Name.Variable, Text, Keyword.Type,
Text, Punctuation, Text, Name.Class)),
],
'string': [
(r'""', String.Double),
(r'"C?', String.Double, '#pop'),
(r'[^"]+', String.Double),
],
}
class NimrodLexer(RegexLexer):
"""
For `Nimrod <http://nimrod-code.org/>`_ source code.
@@ -529,7 +529,7 @@ class VbNetAspxLexer(DelegatingLexer):
# Very close to functional.OcamlLexer
class FSharpLexer(RegexLexer):
"""
For the F# language.
For the F# language (version 3.0).
 
*New in Pygments 1.5.*
"""
@@ -540,91 +540,132 @@ class FSharpLexer(RegexLexer):
mimetypes = ['text/x-fsharp']
 
keywords = [
'abstract', 'and', 'as', 'assert', 'base', 'begin', 'class',
'default', 'delegate', 'do', 'do!', 'done', 'downcast',
'downto', 'elif', 'else', 'end', 'exception', 'extern',
'false', 'finally', 'for', 'fun', 'function', 'global', 'if',
'in', 'inherit', 'inline', 'interface', 'internal', 'lazy',
'let', 'let!', 'match', 'member', 'module', 'mutable',
'namespace', 'new', 'null', 'of', 'open', 'or', 'override',
'private', 'public', 'rec', 'return', 'return!', 'sig',
'static', 'struct', 'then', 'to', 'true', 'try', 'type',
'upcast', 'use', 'use!', 'val', 'void', 'when', 'while',
'with', 'yield', 'yield!'
'abstract', 'as', 'assert', 'base', 'begin', 'class', 'default',
'delegate', 'do!', 'do', 'done', 'downcast', 'downto', 'elif', 'else',
'end', 'exception', 'extern', 'false', 'finally', 'for', 'function',
'fun', 'global', 'if', 'inherit', 'inline', 'interface', 'internal',
'in', 'lazy', 'let!', 'let', 'match', 'member', 'module', 'mutable',
'namespace', 'new', 'null', 'of', 'open', 'override', 'private', 'public',
'rec', 'return!', 'return', 'select', 'static', 'struct', 'then', 'to',
'true', 'try', 'type', 'upcast', 'use!', 'use', 'val', 'void', 'when',
'while', 'with', 'yield!', 'yield',
]
# Reserved words; cannot hurt to color them as keywords too.
keywords += [
'atomic', 'break', 'checked', 'component', 'const', 'constraint',
'constructor', 'continue', 'eager', 'event', 'external', 'fixed',
'functor', 'include', 'method', 'mixin', 'object', 'parallel',
'process', 'protected', 'pure', 'sealed', 'tailcall', 'trait',
'virtual', 'volatile',
]
keyopts = [
'!=','#','&&','&','\(','\)','\*','\+',',','-\.',
'->','-','\.\.','\.','::',':=',':>',':',';;',';','<-',
'<','>]','>','\?\?','\?','\[<','\[>','\[\|','\[',
']','_','`','{','\|\]','\|','}','~','<@','=','@>'
'!=', '#', '&&', '&', '\(', '\)', '\*', '\+', ',', '-\.',
'->', '-', '\.\.', '\.', '::', ':=', ':>', ':', ';;', ';', '<-',
'<\]', '<', '>\]', '>', '\?\?', '\?', '\[<', '\[\|', '\[', '\]',
'_', '`', '{', '\|\]', '\|', '}', '~', '<@@', '<@', '=', '@>', '@@>',
]
 
operators = r'[!$%&*+\./:<=>?@^|~-]'
word_operators = ['and', 'asr', 'land', 'lor', 'lsl', 'lxor', 'mod', 'not', 'or']
word_operators = ['and', 'or', 'not']
prefix_syms = r'[!?~]'
infix_syms = r'[=<>@^|&+\*/$%-]'
primitives = ['unit', 'int', 'float', 'bool', 'string', 'char', 'list', 'array',
'byte', 'sbyte', 'int16', 'uint16', 'uint32', 'int64', 'uint64'
'nativeint', 'unativeint', 'decimal', 'void', 'float32', 'single',
'double']
primitives = [
'sbyte', 'byte', 'char', 'nativeint', 'unativeint', 'float32', 'single',
'float', 'double', 'int8', 'uint8', 'int16', 'uint16', 'int32',
'uint32', 'int64', 'uint64', 'decimal', 'unit', 'bool', 'string',
'list', 'exn', 'obj', 'enum',
]
# See http://msdn.microsoft.com/en-us/library/dd233181.aspx and/or
# http://fsharp.org/about/files/spec.pdf for reference. Good luck.
 
tokens = {
'escape-sequence': [
(r'\\[\\\"\'ntbr]', String.Escape),
(r'\\[\\\"\'ntbrafv]', String.Escape),
(r'\\[0-9]{3}', String.Escape),
(r'\\x[0-9a-fA-F]{2}', String.Escape),
(r'\\u[0-9a-fA-F]{4}', String.Escape),
(r'\\U[0-9a-fA-F]{8}', String.Escape),
],
'root': [
(r'\s+', Text),
(r'false|true|\(\)|\[\]', Name.Builtin.Pseudo),
(r'\b([A-Z][A-Za-z0-9_\']*)(?=\s*\.)',
(r'\(\)|\[\]', Name.Builtin.Pseudo),
(r'\b(?<!\.)([A-Z][A-Za-z0-9_\']*)(?=\s*\.)',
Name.Namespace, 'dotted'),
(r'\b([A-Z][A-Za-z0-9_\']*)', Name.Class),
(r'\b([A-Z][A-Za-z0-9_\']*)', Name),
(r'///.*?\n', String.Doc),
(r'//.*?\n', Comment.Single),
(r'\(\*(?!\))', Comment, 'comment'),
(r'@"', String, 'lstring'),
(r'"""', String, 'tqs'),
(r'"', String, 'string'),
(r'\b(open|module)(\s+)([a-zA-Z0-9_.]+)',
bygroups(Keyword, Text, Name.Namespace)),
(r'\b(let!?)(\s+)([a-zA-Z0-9_]+)',
bygroups(Keyword, Text, Name.Variable)),
(r'\b(type)(\s+)([a-zA-Z0-9_]+)',
bygroups(Keyword, Text, Name.Class)),
(r'\b(member|override)(\s+)([a-zA-Z0-9_]+)(\.)([a-zA-Z0-9_]+)',
bygroups(Keyword, Text, Name, Punctuation, Name.Function)),
(r'\b(%s)\b' % '|'.join(keywords), Keyword),
(r'(%s)' % '|'.join(keyopts), Operator),
(r'(%s|%s)?%s' % (infix_syms, prefix_syms, operators), Operator),
(r'\b(%s)\b' % '|'.join(word_operators), Operator.Word),
(r'\b(%s)\b' % '|'.join(primitives), Keyword.Type),
(r'#[ \t]*(if|endif|else|line|nowarn|light)\b.*?\n',
(r'#[ \t]*(if|endif|else|line|nowarn|light|\d+)\b.*?\n',
Comment.Preproc),
 
(r"[^\W\d][\w']*", Name),
 
(r'\d[\d_]*', Number.Integer),
(r'0[xX][\da-fA-F][\da-fA-F_]*', Number.Hex),
(r'0[oO][0-7][0-7_]*', Number.Oct),
(r'0[bB][01][01_]*', Number.Binary),
(r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)', Number.Float),
(r'\d[\d_]*[uU]?[yslLnQRZINGmM]?', Number.Integer),
(r'0[xX][\da-fA-F][\da-fA-F_]*[uU]?[yslLn]?[fF]?', Number.Hex),
(r'0[oO][0-7][0-7_]*[uU]?[yslLn]?', Number.Oct),
(r'0[bB][01][01_]*[uU]?[yslLn]?', Number.Binary),
(r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)[fFmM]?',
Number.Float),
 
(r"'(?:(\\[\\\"'ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))'",
(r"'(?:(\\[\\\"'ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))'B?",
String.Char),
(r"'.'", String.Char),
(r"'", Keyword), # a stray quote is another syntax element
 
(r'"', String.Double, 'string'),
(r'[~?][a-z][\w\']*:', Name.Variable),
],
'dotted': [
(r'\s+', Text),
(r'\.', Punctuation),
(r'[A-Z][A-Za-z0-9_\']*(?=\s*\.)', Name.Namespace),
(r'[A-Z][A-Za-z0-9_\']*', Name, '#pop'),
(r'[a-z_][A-Za-z0-9_\']*', Name, '#pop'),
],
'comment': [
(r'[^(*)]+', Comment),
(r'[^(*)@"]+', Comment),
(r'\(\*', Comment, '#push'),
(r'\*\)', Comment, '#pop'),
(r'[(*)]', Comment),
# comments cannot be closed within strings in comments
(r'@"', String, 'lstring'),
(r'"""', String, 'tqs'),
(r'"', String, 'string'),
(r'[(*)@]', Comment),
],
'string': [
(r'[^\\"]+', String.Double),
(r'[^\\"]+', String),
include('escape-sequence'),
(r'\\\n', String.Double),
(r'"', String.Double, '#pop'),
(r'\\\n', String),
(r'\n', String), # newlines are allowed in any string
(r'"B?', String, '#pop'),
],
'dotted': [
(r'\s+', Text),
(r'\.', Punctuation),
(r'[A-Z][A-Za-z0-9_\']*(?=\s*\.)', Name.Namespace),
(r'[A-Z][A-Za-z0-9_\']*', Name.Class, '#pop'),
(r'[a-z_][A-Za-z0-9_\']*', Name, '#pop'),
'lstring': [
(r'[^"]+', String),
(r'\n', String),
(r'""', String),
(r'"B?', String, '#pop'),
],
'tqs': [
(r'[^"]+', String),
(r'\n', String),
(r'"""B?', String, '#pop'),
(r'"', String),
],
}
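A quick sketch of two of the rewritten rules in action (the snippet is
invented): suffixed integer literals such as ``0x1FUL`` and verbatim
``@"..."`` strings are now tokenized by dedicated rules.

    from pygments.lexers import get_lexer_by_name

    fsharp = get_lexer_by_name('fsharp')
    code = 'let x = 0x1FUL\nlet path = @"C:\\temp"\n'
    for ttype, value in fsharp.get_tokens(code):
        print(ttype, repr(value))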
@@ -16,9 +16,13 @@ from pygments.token import Text, Comment, Operator, Keyword, Name, \
String, Number, Punctuation, Literal, Generic, Error
 
__all__ = ['RacketLexer', 'SchemeLexer', 'CommonLispLexer', 'HaskellLexer',
'LiterateHaskellLexer', 'SMLLexer', 'OcamlLexer', 'ErlangLexer',
'ErlangShellLexer', 'OpaLexer', 'CoqLexer', 'NewLispLexer',
'ElixirLexer', 'ElixirConsoleLexer', 'KokaLexer']
'AgdaLexer', 'LiterateHaskellLexer', 'LiterateAgdaLexer',
'SMLLexer', 'OcamlLexer', 'ErlangLexer', 'ErlangShellLexer',
'OpaLexer', 'CoqLexer', 'NewLispLexer', 'ElixirLexer',
'ElixirConsoleLexer', 'KokaLexer']
line_re = re.compile('.*?\n')
 
 
class RacketLexer(RegexLexer):
@@ -719,7 +723,7 @@ class CommonLispLexer(RegexLexer):
*New in Pygments 0.9.*
"""
name = 'Common Lisp'
aliases = ['common-lisp', 'cl']
aliases = ['common-lisp', 'cl', 'lisp']
filenames = ['*.cl', '*.lisp', '*.el'] # use for Elisp too
mimetypes = ['text/x-common-lisp']
 
@@ -808,6 +812,8 @@ class CommonLispLexer(RegexLexer):
(r'"(\\.|\\\n|[^"\\])*"', String),
# quoting
(r":" + symbol, String.Symbol),
(r"::" + symbol, String.Symbol),
(r":#" + symbol, String.Symbol),
(r"'" + symbol, String.Symbol),
(r"'", Operator),
(r"`", Operator),
@@ -979,6 +985,8 @@ class HaskellLexer(RegexLexer):
(r'\(', Punctuation, ('funclist', 'funclist')),
(r'\)', Punctuation, '#pop:2'),
],
# NOTE: the next four states are shared in the AgdaLexer; make sure
# any change is compatible with Agda as well or copy over and change
'comment': [
# Multiline Comments
(r'[^-{}]+', Comment.Multiline),
@@ -1009,12 +1017,78 @@ class HaskellLexer(RegexLexer):
}
 
 
line_re = re.compile('.*?\n')
bird_re = re.compile(r'(>[ \t]*)(.*\n)')
class AgdaLexer(RegexLexer):
"""
For the `Agda <http://wiki.portal.chalmers.se/agda/pmwiki.php>`_
dependently typed functional programming language and proof assistant.
 
class LiterateHaskellLexer(Lexer):
*New in Pygments 1.7.*
"""
For Literate Haskell (Bird-style or LaTeX) source.
name = 'Agda'
aliases = ['agda']
filenames = ['*.agda']
mimetypes = ['text/x-agda']
reserved = ['abstract', 'codata', 'coinductive', 'constructor', 'data',
'field', 'forall', 'hiding', 'in', 'inductive', 'infix',
'infixl', 'infixr', 'let', 'open', 'pattern', 'primitive',
'private', 'mutual', 'quote', 'quoteGoal', 'quoteTerm',
'record', 'syntax', 'rewrite', 'unquote', 'using', 'where',
'with']
tokens = {
'root': [
# Declaration
(r'^(\s*)([^\s\(\)\{\}]+)(\s*)(:)(\s*)',
bygroups(Text, Name.Function, Text, Operator.Word, Text)),
# Comments
(r'--(?![!#$%&*+./<=>?@\^|_~:\\]).*?$', Comment.Single),
(r'{-', Comment.Multiline, 'comment'),
# Holes
(r'{!', Comment.Directive, 'hole'),
# Lexemes:
# Identifiers
(ur'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved),
(r'(import|module)(\s+)', bygroups(Keyword.Reserved, Text), 'module'),
(r'\b(Set|Prop)\b', Keyword.Type),
# Special Symbols
(r'(\(|\)|\{|\})', Operator),
(ur'(\.{1,3}|\||[\u039B]|[\u2200]|[\u2192]|:|=|->)', Operator.Word),
# Numbers
(r'\d+[eE][+-]?\d+', Number.Float),
(r'\d+\.\d+([eE][+-]?\d+)?', Number.Float),
(r'0[xX][\da-fA-F]+', Number.Hex),
(r'\d+', Number.Integer),
# Strings
(r"'", String.Char, 'character'),
(r'"', String, 'string'),
(r'[^\s\(\)\{\}]+', Text),
(r'\s+?', Text), # Whitespace
],
'hole': [
# Holes
(r'[^!{}]+', Comment.Directive),
(r'{!', Comment.Directive, '#push'),
(r'!}', Comment.Directive, '#pop'),
(r'[!{}]', Comment.Directive),
],
'module': [
(r'{-', Comment.Multiline, 'comment'),
(r'[a-zA-Z][a-zA-Z0-9_.]*', Name, '#pop'),
(r'[^a-zA-Z]*', Text)
],
'comment': HaskellLexer.tokens['comment'],
'character': HaskellLexer.tokens['character'],
'string': HaskellLexer.tokens['string'],
'escape': HaskellLexer.tokens['escape']
}
class LiterateLexer(Lexer):
"""
Base class for lexers of literate file formats based on LaTeX or Bird-style
(prefixing each code line with ">").
 
Additional options accepted:
 
@@ -1022,17 +1096,15 @@ class LiterateHaskellLexer(Lexer):
If given, must be ``"bird"`` or ``"latex"``. If not given, the style
is autodetected: if the first non-whitespace character in the source
is a backslash or percent character, LaTeX is assumed, else Bird.
*New in Pygments 0.9.*
"""
name = 'Literate Haskell'
aliases = ['lhs', 'literate-haskell']
filenames = ['*.lhs']
mimetypes = ['text/x-literate-haskell']
 
def get_tokens_unprocessed(self, text):
hslexer = HaskellLexer(**self.options)
bird_re = re.compile(r'(>[ \t]*)(.*\n)')
def __init__(self, baselexer, **options):
self.baselexer = baselexer
Lexer.__init__(self, **options)
 
def get_tokens_unprocessed(self, text):
style = self.options.get('litstyle')
if style is None:
style = (text.lstrip()[0:1] in '%\\') and 'latex' or 'bird'
@@ -1043,7 +1115,7 @@ class LiterateHaskellLexer(Lexer):
# bird-style
for match in line_re.finditer(text):
line = match.group()
m = bird_re.match(line)
m = self.bird_re.match(line)
if m:
insertions.append((len(code),
[(0, Comment.Special, m.group(1))]))
@@ -1054,7 +1126,6 @@ class LiterateHaskellLexer(Lexer):
# latex-style
from pygments.lexers.text import TexLexer
lxlexer = TexLexer(**self.options)
codelines = 0
latex = ''
for match in line_re.finditer(text):
@@ -1075,10 +1146,56 @@ class LiterateHaskellLexer(Lexer):
latex += line
insertions.append((len(code),
list(lxlexer.get_tokens_unprocessed(latex))))
for item in do_insertions(insertions, hslexer.get_tokens_unprocessed(code)):
for item in do_insertions(insertions, self.baselexer.get_tokens_unprocessed(code)):
yield item
 
 
class LiterateHaskellLexer(LiterateLexer):
"""
For Literate Haskell (Bird-style or LaTeX) source.
Additional options accepted:
`litstyle`
If given, must be ``"bird"`` or ``"latex"``. If not given, the style
is autodetected: if the first non-whitespace character in the source
is a backslash or percent character, LaTeX is assumed, else Bird.
*New in Pygments 0.9.*
"""
name = 'Literate Haskell'
aliases = ['lhs', 'literate-haskell', 'lhaskell']
filenames = ['*.lhs']
mimetypes = ['text/x-literate-haskell']
def __init__(self, **options):
hslexer = HaskellLexer(**options)
LiterateLexer.__init__(self, hslexer, **options)
class LiterateAgdaLexer(LiterateLexer):
"""
For Literate Agda source.
Additional options accepted:
`litstyle`
If given, must be ``"bird"`` or ``"latex"``. If not given, the style
is autodetected: if the first non-whitespace character in the source
is a backslash or percent character, LaTeX is assumed, else Bird.
*New in Pygments 1.7.*
"""
name = 'Literate Agda'
aliases = ['lagda', 'literate-agda']
filenames = ['*.lagda']
mimetypes = ['text/x-literate-agda']
def __init__(self, **options):
agdalexer = AgdaLexer(**options)
LiterateLexer.__init__(self, agdalexer, litstyle='latex', **options)
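A sketch of the shared behaviour (the snippet is invented): both subclasses
now delegate the code portions to the base lexer they were constructed with,
so Bird-style input splits into prose and code like this.

    from pygments.lexers import LiterateHaskellLexer

    src = 'Some prose.\n\n> main :: IO ()\n> main = putStrLn "hi"\n'
    for ttype, value in LiterateHaskellLexer().get_tokens(src):
        print(ttype, repr(value))  # '> ' prefixes come out as Comment.Special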
class SMLLexer(RegexLexer):
"""
For the Standard ML language.
@@ -1663,9 +1780,10 @@ class OpaLexer(RegexLexer):
# but if you color only real keywords, you might just
# as well not color anything
keywords = [
'and', 'as', 'begin', 'css', 'database', 'db', 'do', 'else', 'end',
'external', 'forall', 'if', 'import', 'match', 'package', 'parser',
'rec', 'server', 'then', 'type', 'val', 'with', 'xml_parser',
'and', 'as', 'begin', 'case', 'client', 'css', 'database', 'db', 'do',
'else', 'end', 'external', 'forall', 'function', 'if', 'import',
'match', 'module', 'or', 'package', 'parser', 'rec', 'server', 'then',
'type', 'val', 'with', 'xml_parser',
]
 
# matches both stuff and `stuff`
@@ -2399,7 +2517,7 @@ class ElixirConsoleLexer(Lexer):
 
class KokaLexer(RegexLexer):
"""
Lexer for the `Koka <http://research.microsoft.com/en-us/projects/koka/>`_
Lexer for the `Koka <http://koka.codeplex.com>`_
language.
 
*New in Pygments 1.6.*
@@ -2411,7 +2529,7 @@ class KokaLexer(RegexLexer):
mimetypes = ['text/x-koka']
 
keywords = [
'infix', 'infixr', 'infixl', 'prefix', 'postfix',
'infix', 'infixr', 'infixl',
'type', 'cotype', 'rectype', 'alias',
'struct', 'con',
'fun', 'function', 'val', 'var',
@@ -2450,7 +2568,12 @@ class KokaLexer(RegexLexer):
sboundary = '(?!'+symbols+')'
 
# name boundary: a keyword should not be followed by any of these
boundary = '(?![a-zA-Z0-9_\\-])'
boundary = '(?![\w/])'
# koka token abstractions
tokenType = Name.Attribute
tokenTypeDef = Name.Class
tokenConstructor = Generic.Emph
 
# main lexer
tokens = {
@@ -2458,41 +2581,51 @@ class KokaLexer(RegexLexer):
include('whitespace'),
 
# go into type mode
(r'::?' + sboundary, Keyword.Type, 'type'),
(r'alias' + boundary, Keyword, 'alias-type'),
(r'struct' + boundary, Keyword, 'struct-type'),
(r'(%s)' % '|'.join(typeStartKeywords) + boundary, Keyword, 'type'),
(r'::?' + sboundary, tokenType, 'type'),
(r'(alias)(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
'alias-type'),
(r'(struct)(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
'struct-type'),
((r'(%s)' % '|'.join(typeStartKeywords)) +
r'(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
'type'),
 
# special sequences of tokens (we use ?: for non-capturing group as
# required by 'bygroups')
(r'(module)(\s*)((?:interface)?)(\s*)'
r'((?:[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*\.)*'
r'[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)',
bygroups(Keyword, Text, Keyword, Text, Name.Namespace)),
(r'(import)(\s+)((?:[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*\.)*[a-z]'
r'(?:[a-zA-Z0-9_]|\-[a-zA-Z])*)(\s*)((?:as)?)'
r'((?:[A-Z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)?)',
bygroups(Keyword, Text, Name.Namespace, Text, Keyword,
Name.Namespace)),
(r'(module)(\s+)(interface\s+)?((?:[a-z]\w*/)*[a-z]\w*)',
bygroups(Keyword, Text, Keyword, Name.Namespace)),
(r'(import)(\s+)((?:[a-z]\w*/)*[a-z]\w*)'
r'(?:(\s*)(=)(\s*)((?:qualified\s*)?)'
r'((?:[a-z]\w*/)*[a-z]\w*))?',
bygroups(Keyword, Text, Name.Namespace, Text, Keyword, Text,
Keyword, Name.Namespace)),
(r'(^(?:(?:public|private)\s*)?(?:function|fun|val))'
r'(\s+)([a-z]\w*|\((?:' + symbols + r'|/)\))',
bygroups(Keyword, Text, Name.Function)),
(r'(^(?:(?:public|private)\s*)?external)(\s+)(inline\s+)?'
r'([a-z]\w*|\((?:' + symbols + r'|/)\))',
bygroups(Keyword, Text, Keyword, Name.Function)),
 
# keywords
(r'(%s)' % '|'.join(typekeywords) + boundary, Keyword.Type),
(r'(%s)' % '|'.join(keywords) + boundary, Keyword),
(r'(%s)' % '|'.join(builtin) + boundary, Keyword.Pseudo),
(r'::|:=|\->|[=\.:]' + sboundary, Keyword),
(r'\-' + sboundary, Generic.Strong),
(r'::?|:=|\->|[=\.]' + sboundary, Keyword),
 
# names
(r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?=\.)', Name.Namespace),
(r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?!\.)', Name.Class),
(r'[a-z]([a-zA-Z0-9_]|\-[a-zA-Z])*', Name),
(r'_([a-zA-Z0-9_]|\-[a-zA-Z])*', Name.Variable),
(r'((?:[a-z]\w*/)*)([A-Z]\w*)',
bygroups(Name.Namespace, tokenConstructor)),
(r'((?:[a-z]\w*/)*)([a-z]\w*)', bygroups(Name.Namespace, Name)),
(r'((?:[a-z]\w*/)*)(\((?:' + symbols + r'|/)\))',
bygroups(Name.Namespace, Name)),
(r'_\w*', Name.Variable),
 
# literal string
(r'@"', String.Double, 'litstring'),
 
# operators
(symbols, Operator),
(symbols + "|/(?![\*/])", Operator),
(r'`', Operator),
(r'[\{\}\(\)\[\];,]', Punctuation),
 
@@ -2519,17 +2652,17 @@ class KokaLexer(RegexLexer):
 
# type started by colon
'type': [
(r'[\(\[<]', Keyword.Type, 'type-nested'),
(r'[\(\[<]', tokenType, 'type-nested'),
include('type-content')
],
 
# type nested in brackets: can contain parameters, comma etc.
'type-nested': [
(r'[\)\]>]', Keyword.Type, '#pop'),
(r'[\(\[<]', Keyword.Type, 'type-nested'),
(r',', Keyword.Type),
(r'([a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)(\s*)(:)(?!:)',
bygroups(Name.Variable,Text,Keyword.Type)), # parameter name
(r'[\)\]>]', tokenType, '#pop'),
(r'[\(\[<]', tokenType, 'type-nested'),
(r',', tokenType),
(r'([a-z]\w*)(\s*)(:)(?!:)',
bygroups(Name, Text, tokenType)), # parameter name
include('type-content')
],
 
@@ -2538,23 +2671,23 @@ class KokaLexer(RegexLexer):
include('whitespace'),
 
# keywords
(r'(%s)' % '|'.join(typekeywords) + boundary, Keyword.Type),
(r'(%s)' % '|'.join(typekeywords) + boundary, Keyword),
(r'(?=((%s)' % '|'.join(keywords) + boundary + '))',
Keyword, '#pop'), # need to match because names overlap...
 
# kinds
(r'[EPH]' + boundary, Keyword.Type),
(r'[*!]', Keyword.Type),
(r'[EPHVX]' + boundary, tokenType),
 
# type names
(r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?=\.)', Name.Namespace),
(r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?!\.)', Name.Class),
(r'[a-z][0-9]*(?![a-zA-Z_\-])', Keyword.Type), # Generic.Emph
(r'_([a-zA-Z0-9_]|\-[a-zA-Z])*', Keyword.Type), # Generic.Emph
(r'[a-z]([a-zA-Z0-9_]|\-[a-zA-Z])*', Keyword.Type),
(r'[a-z][0-9]*(?![\w/])', tokenType ),
(r'_\w*', tokenType.Variable), # Generic.Emph
(r'((?:[a-z]\w*/)*)([A-Z]\w*)',
bygroups(Name.Namespace, tokenType)),
(r'((?:[a-z]\w*/)*)([a-z]\w+)',
bygroups(Name.Namespace, tokenType)),
 
# type keyword operators
(r'::|\->|[\.:|]', Keyword.Type),
(r'::|\->|[\.:|]', tokenType),
 
#catchall
(r'', Text, '#pop')
@@ -2562,6 +2695,7 @@ class KokaLexer(RegexLexer):
 
# comments and literals
'whitespace': [
(r'\n\s*#.*$', Comment.Preproc),
(r'\s+', Text),
(r'/\*', Comment.Multiline, 'comment'),
(r'//.*$', Comment.Single)
@@ -2588,11 +2722,10 @@ class KokaLexer(RegexLexer):
(r'[\'\n]', String.Char, '#pop'),
],
'escape-sequence': [
(r'\\[abfnrtv0\\\"\'\?]', String.Escape),
(r'\\[nrt\\\"\']', String.Escape),
(r'\\x[0-9a-fA-F]{2}', String.Escape),
(r'\\u[0-9a-fA-F]{4}', String.Escape),
# Yes, \U literals are 6 hex digits.
(r'\\U[0-9a-fA-F]{6}', String.Escape)
]
}
@@ -888,11 +888,11 @@ class CeylonLexer(RegexLexer):
(r'[^\S\n]+', Text),
(r'//.*?\n', Comment.Single),
(r'/\*.*?\*/', Comment.Multiline),
(r'(variable|shared|abstract|doc|by|formal|actual)',
(r'(variable|shared|abstract|doc|by|formal|actual|late|native)',
Name.Decorator),
(r'(break|case|catch|continue|default|else|finally|for|in|'
r'variable|if|return|switch|this|throw|try|while|is|exists|'
r'nonempty|then|outer)\b', Keyword),
r'variable|if|return|switch|this|throw|try|while|is|exists|dynamic|'
r'nonempty|then|outer|assert)\b', Keyword),
(r'(abstracts|extends|satisfies|adapts|'
r'super|given|of|out|assign|'
r'transient|volatile)\b', Keyword.Declaration),
@@ -900,16 +900,16 @@ class CeylonLexer(RegexLexer):
Keyword.Type),
(r'(package)(\s+)', bygroups(Keyword.Namespace, Text)),
(r'(true|false|null)\b', Keyword.Constant),
(r'(class|interface|object)(\s+)',
(r'(class|interface|object|alias)(\s+)',
bygroups(Keyword.Declaration, Text), 'class'),
(r'(import)(\s+)', bygroups(Keyword.Namespace, Text), 'import'),
(r'"(\\\\|\\"|[^"])*"', String),
(r"'\\.'|'[^\\]'|'\\u[0-9a-fA-F]{4}'", String.Quoted),
(r"`\\.`|`[^\\]`|`\\u[0-9a-fA-F]{4}`", String.Char),
(r'(\.)([a-zA-Z_][a-zA-Z0-9_]*)',
(r"'\\.'|'[^\\]'|'\\\{#[0-9a-fA-F]{4}\}'", String.Char),
(r'".*``.*``.*"', String.Interpol),
(r'(\.)([a-z_][a-zA-Z0-9_]*)',
bygroups(Operator, Name.Attribute)),
(r'[a-zA-Z_][a-zA-Z0-9_]*:', Name.Label),
(r'[a-zA-Z_\$][a-zA-Z0-9_]*', Name),
(r'[a-zA-Z_][a-zA-Z0-9_]*', Name),
(r'[~\^\*!%&\[\]\(\)\{\}<>\|+=:;,./?-]', Operator),
(r'\d{1,3}(_\d{3})+\.\d{1,3}(_\d{3})+[kMGTPmunpf]?', Number.Float),
(r'\d{1,3}(_\d{3})+\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?',
@@ -917,16 +917,19 @@ class CeylonLexer(RegexLexer):
(r'[0-9][0-9]*\.\d{1,3}(_\d{3})+[kMGTPmunpf]?', Number.Float),
(r'[0-9][0-9]*\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?',
Number.Float),
(r'0x[0-9a-fA-F]+', Number.Hex),
(r'#([0-9a-fA-F]{4})(_[0-9a-fA-F]{4})+', Number.Hex),
(r'#[0-9a-fA-F]+', Number.Hex),
(r'\$([01]{4})(_[01]{4})+', Number.Integer),
(r'\$[01]+', Number.Integer),
(r'\d{1,3}(_\d{3})+[kMGTP]?', Number.Integer),
(r'[0-9]+[kMGTP]?', Number.Integer),
(r'\n', Text)
],
'class': [
(r'[a-zA-Z_][a-zA-Z0-9_]*', Name.Class, '#pop')
(r'[A-Za-z_][a-zA-Z0-9_]*', Name.Class, '#pop')
],
'import': [
(r'[a-zA-Z0-9_.]+\w+ \{([a-zA-Z,]+|\.\.\.)\}',
(r'[a-z][a-zA-Z0-9_.]*',
Name.Namespace, '#pop')
],
}