Commit 4f9c1c16 authored by Ted Nyman

Merge pull request #92 from tmm1/pygments-bump

Latest pygments
parents e87424cb 5eee4fe0
Showing with 4514 additions and 3370 deletions
@@ -6,8 +6,9 @@ Major developers are Tim Hatch <tim@timhatch.com> and Armin Ronacher
 Other contributors, listed alphabetically, are:
 
 * Sam Aaron -- Ioke lexer
-* Kumar Appaiah -- Debian control lexer
 * Ali Afshar -- image formatter
+* Thomas Aglassinger -- Rexx lexer
+* Kumar Appaiah -- Debian control lexer
 * Andreas Amann -- AppleScript lexer
 * Timothy Armstrong -- Dart lexer fixes
 * Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers
@@ -15,6 +16,7 @@ Other contributors, listed alphabetically, are:
 * Stefan Matthias Aust -- Smalltalk lexer
 * Ben Bangert -- Mako lexers
 * Max Battcher -- Darcs patch lexer
+* Tim Baumann -- (Literate) Agda lexer
 * Paul Baumgart, 280 North, Inc. -- Objective-J lexer
 * Michael Bayer -- Myghty lexers
 * John Benediktsson -- Factor lexer
@@ -29,20 +31,25 @@ Other contributors, listed alphabetically, are:
 * Christian Jann -- ShellSession lexer
 * Christopher Creutzig -- MuPAD lexer
 * Pete Curry -- bugfixes
-* Owen Durni -- haXe lexer
+* Bryan Davis -- EBNF lexer
+* Owen Durni -- Haxe lexer
 * Nick Efford -- Python 3 lexer
 * Sven Efftinge -- Xtend lexer
 * Artem Egorkine -- terminal256 formatter
 * James H. Fisher -- PostScript lexer
+* William S. Fulton -- SWIG lexer
 * Carlos Galdino -- Elixir and Elixir Console lexers
 * Michael Galloy -- IDL lexer
 * Naveen Garg -- Autohotkey lexer
 * Laurent Gautier -- R/S lexer
 * Alex Gaynor -- PyPy log lexer
+* Richard Gerkin -- Igor Pro lexer
 * Alain Gilbert -- TypeScript lexer
+* Alex Gilding -- BlitzBasic lexer
 * Bertrand Goetzmann -- Groovy lexer
 * Krzysiek Goj -- Scala lexer
 * Matt Good -- Genshi, Cheetah lexers
+* Michał Górny -- vim modeline support
 * Patrick Gotthardt -- PHP namespaces support
 * Olivier Guibe -- Asymptote lexer
 * Jordi Gutiérrez Hermoso -- Octave lexer
@@ -53,6 +60,7 @@ Other contributors, listed alphabetically, are:
 * Greg Hendershott -- Racket lexer
 * David Hess, Fish Software, Inc. -- Objective-J lexer
 * Varun Hiremath -- Debian control lexer
+* Rob Hoelz -- Perl 6 lexer
 * Doug Hogan -- Mscgen lexer
 * Ben Hollis -- Mason lexer
 * Dustin Howett -- Logos lexer
@@ -64,6 +72,7 @@ Other contributors, listed alphabetically, are:
 * Igor Kalnitsky -- vhdl lexer
 * Pekka Klärck -- Robot Framework lexer
 * Eric Knibbe -- Lasso lexer
+* Stepan Koltsov -- Clay lexer
 * Adam Koprowski -- Opa lexer
 * Benjamin Kowarsch -- Modula-2 lexer
 * Alexander Kriegisch -- Kconfig and AspectJ lexers
@@ -97,6 +106,7 @@ Other contributors, listed alphabetically, are:
 * Mike Nolta -- Julia lexer
 * Jonas Obrist -- BBCode lexer
 * David Oliva -- Rebol lexer
+* Pat Pannuto -- nesC lexer
 * Jon Parise -- Protocol buffers lexer
 * Ronny Pfannschmidt -- BBCode lexer
 * Benjamin Peterson -- Test suite refactoring
@@ -6,6 +6,56 @@ Issue numbers refer to the tracker at
 pull request numbers to the requests at
 <http://bitbucket.org/birkenfeld/pygments-main/pull-requests/merged>.
 
+Version 1.7
+-----------
+(under development)
+
+- Lexers added:
+
+  * Clay (PR#184)
+  * Perl 6 (PR#181)
+  * Swig (PR#168)
+  * nesC (PR#166)
+  * BlitzBasic (PR#197)
+  * EBNF (PR#193)
+  * Igor Pro (PR#172)
+  * Rexx (PR#199)
+  * Agda and Literate Agda (PR#203)
+
+- Pygments will now recognize "vim" modelines when guessing the lexer for
+  a file based on content (PR#118).
+
+- The NameHighlightFilter now works with any Name.* token type (#790).
+
+- Python 3 lexer: add new exceptions from PEP 3151.
+
+- Opa lexer: add new keywords (PR#170).
+
+- Julia lexer: add keywords and underscore-separated number
+  literals (PR#176).
+
+- Lasso lexer: fix method highlighting, update builtins. Fix
+  guessing so that plain XML isn't always taken as Lasso (PR#163).
+
+- Objective C/C++ lexers: allow "@" prefixing any expression (#871).
+
+- Ruby lexer: fix lexing of Name::Space tokens (#860).
+
+- Stan lexer: update for version 1.3.0 of the language (PR#162).
+
+- JavaScript lexer: add the "yield" keyword (PR#196).
+
+- HTTP lexer: support for PATCH method (PR#190).
+
+- Koka lexer: update to newest language spec (PR#201).
+
+- Haxe lexer: rewrite and support for Haxe 3 (PR#174).
+
+- Prolog lexer: add different kinds of numeric literals (#864).
+
+- F# lexer: rewrite with newest spec for F# 3.0 (#842).
+
 Version 1.6
 -----------
 (released Feb 3, 2013)
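The modeline entry above is the most visible behavioural change. A quick
sketch of what it means in practice (``guess_lexer`` is the existing API; the
sample text is made up and a standard ``ft=`` vim modeline is assumed):

.. sourcecode:: python

    from pygments.lexers import guess_lexer

    # The body alone is ambiguous, but the trailing modeline names the
    # filetype, so it is consulted before any analyse_text() scoring.
    source = 'puts "hello"\n# vim: set ft=ruby:\n'
    print(guess_lexer(source).name)   # -> 'Ruby'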
@@ -259,7 +309,7 @@ Version 1.3
   * Ada
   * Coldfusion
   * Modula-2
-  * haXe
+  * Haxe
   * R console
   * Objective-J
   * Haml and Sass
@@ -318,7 +368,7 @@ Version 1.2
   * CMake
   * Ooc
   * Coldfusion
-  * haXe
+  * Haxe
   * R console
 
 - Added options for rendering LaTeX in source code comments in the
@@ -83,6 +83,58 @@ If no rule matches at the current position, the current char is emitted as an
    1.
 
 
+Adding and testing a new lexer
+==============================
+
+To make Pygments aware of your new lexer, you have to perform the following
+steps:
+
+First, change into the directory containing the Pygments source code:
+
+.. sourcecode:: console
+
+    $ cd .../pygments-main
+
+Next, make sure the lexer is known from outside the module. All modules in
+the ``pygments.lexers`` package specify ``__all__``. For example, ``other.py`` sets:
+
+.. sourcecode:: python
+
+    __all__ = ['BrainfuckLexer', 'BefungeLexer', ...]
+
+Simply add the name of your lexer class to this list.
+
+Finally, the lexer can be made publicly known by rebuilding the lexer mapping:
+
+.. sourcecode:: console
+
+    $ make mapfiles
+
+To test the new lexer, store an example file with the proper extension in
+``tests/examplefiles``. For example, to test your ``DiffLexer``, add a
+``tests/examplefiles/example.diff`` containing a sample diff output.
+
+Now you can use pygmentize to render your example to HTML:
+
+.. sourcecode:: console
+
+    $ ./pygmentize -O full -f html -o /tmp/example.html tests/examplefiles/example.diff
+
+Note that this explicitly calls the ``pygmentize`` in the current directory
+by preceding it with ``./``. This ensures your modifications are used.
+Otherwise a possibly already installed, unmodified version without your new
+lexer would be called from the system search path (``$PATH``).
+
+To view the result, open ``/tmp/example.html`` in your browser.
+
+Once the example renders as expected, you should run the complete test suite:
+
+.. sourcecode:: console
+
+    $ make test
+
 Regex Flags
 ===========
 
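For illustration alongside the new section, a minimal lexer of the sort those
steps assume might look like this (hypothetical ``ExampleDiffLexer``; the
class name, alias, and patterns are made up):

.. sourcecode:: python

    from pygments.lexer import RegexLexer
    from pygments.token import Generic, Text

    class ExampleDiffLexer(RegexLexer):
        """Minimal sketch; its name would go into __all__ as described."""
        name = 'ExampleDiff'
        aliases = ['examplediff']
        filenames = ['*.exdiff']

        tokens = {
            'root': [
                (r'^\+.*\n', Generic.Inserted),  # added lines
                (r'^-.*\n', Generic.Deleted),    # removed lines
                (r'.*\n', Text),                 # everything else
            ],
        }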
@@ -5,13 +5,19 @@
 
 This is the shell script that was used to extract Lasso 9's built-in keywords
 and generate most of the _lassobuiltins.py file. When run, it creates a file
-named "lassobuiltins-9.py" containing the types, traits, and methods of the
-currently-installed version of Lasso 9.
+named "lassobuiltins-9.py" containing the types, traits, methods, and members
+of the currently-installed version of Lasso 9.
 
-A partial list of keywords in Lasso 8 can be generated with this code:
+A list of tags in Lasso 8 can be generated with this code:
 
 <?LassoScript
 
-    local('l8tags' = list);
+    local('l8tags' = list,
+          'l8libs' = array('Cache','ChartFX','Client','Database','File','HTTP',
+              'iCal','Lasso','Link','List','PDF','Response','Stock','String',
+              'Thread','Valid','WAP','XML'));
+    iterate(#l8libs, local('library'));
+        local('result' = namespace_load(#library));
+    /iterate;
     iterate(tags_list, local('i'));
         #l8tags->insert(string_removeleading(#i, -pattern='_global_'));
     /iterate;
@@ -30,9 +36,12 @@ local(f) = file("lassobuiltins-9.py")
 #f->writeString('# -*- coding: utf-8 -*-
 """
     pygments.lexers._lassobuiltins
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Built-in Lasso types, traits, methods, and members.
 
-    Built-in Lasso types, traits, and methods.
+    :copyright: Copyright 2006-'+date->year+' by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
 """
 
 ')
@@ -42,16 +51,16 @@ lcapi_loadModules
 // Load all of the libraries from builtins and lassoserver
 // This forces all possible available types and methods to be registered
 local(srcs =
     tie(
         dir(sys_masterHomePath + 'LassoLibraries/builtins/')->eachFilePath,
         dir(sys_masterHomePath + 'LassoLibraries/lassoserver/')->eachFilePath
     )
 )
 
 with topLevelDir in #srcs
-where !#topLevelDir->lastComponent->beginsWith('.')
+where not #topLevelDir->lastComponent->beginsWith('.')
 do protect => {
     handle_error => {
         stdoutnl('Unable to load: ' + #topLevelDir + ' ' + error_msg)
     }
     library_thread_loader->loadLibrary(#topLevelDir)
@@ -61,60 +70,74 @@ do protect => {
 local(
     typesList = list(),
     traitsList = list(),
-    methodsList = list()
+    unboundMethodsList = list(),
+    memberMethodsList = list()
 )
 
-// unbound methods
-with method in sys_listUnboundMethods
-where !#method->methodName->asString->endsWith('=')
-where #method->methodName->asString->isalpha(1)
-where #methodsList !>> #method->methodName->asString
-do #methodsList->insert(#method->methodName->asString)
+// types
+with type in sys_listTypes
+where #typesList !>> #type
+do {
+    #typesList->insert(#type)
+    with method in #type->getType->listMethods
+    let name = #method->methodName
+    where not #name->asString->endsWith('=')    // skip setter methods
+    where #name->asString->isAlpha(1)           // skip unpublished methods
+    where #memberMethodsList !>> #name
+    do #memberMethodsList->insert(#name)
+}
 
 // traits
 with trait in sys_listTraits
-where !#trait->asString->beginsWith('$')
-where #traitsList !>> #trait->asString
+where not #trait->asString->beginsWith('$')     // skip combined traits
+where #traitsList !>> #trait
 do {
-    #traitsList->insert(#trait->asString)
-    with tmethod in tie(#trait->getType->provides, #trait->getType->requires)
-    where !#tmethod->methodName->asString->endsWith('=')
-    where #tmethod->methodName->asString->isalpha(1)
-    where #methodsList !>> #tmethod->methodName->asString
-    do #methodsList->insert(#tmethod->methodName->asString)
+    #traitsList->insert(#trait)
+    with method in tie(#trait->getType->provides, #trait->getType->requires)
+    let name = #method->methodName
+    where not #name->asString->endsWith('=')    // skip setter methods
+    where #name->asString->isAlpha(1)           // skip unpublished methods
+    where #memberMethodsList !>> #name
+    do #memberMethodsList->insert(#name)
 }
 
-// types
-with type in sys_listTypes
-where #typesList !>> #type->asString
-do {
-    #typesList->insert(#type->asString)
-    with tmethod in #type->getType->listMethods
-    where !#tmethod->methodName->asString->endsWith('=')
-    where #tmethod->methodName->asString->isalpha(1)
-    where #methodsList !>> #tmethod->methodName->asString
-    do #methodsList->insert(#tmethod->methodName->asString)
-}
+// unbound methods
+with method in sys_listUnboundMethods
+let name = #method->methodName
+where not #name->asString->endsWith('=')        // skip setter methods
+where #name->asString->isAlpha(1)               // skip unpublished methods
+where #typesList !>> #name
+where #traitsList !>> #name
+where #unboundMethodsList !>> #name
+do #unboundMethodsList->insert(#name)
 
 #f->writeString("BUILTINS = {
     'Types': [
 ")
 with t in #typesList
-do #f->writeString("        '"+string_lowercase(#t)+"',\n")
+do !#t->asString->endsWith('$') ? #f->writeString("        '"+string_lowercase(#t->asString)+"',\n")
 
 #f->writeString("    ],
     'Traits': [
 ")
 with t in #traitsList
-do #f->writeString("        '"+string_lowercase(#t)+"',\n")
+do #f->writeString("        '"+string_lowercase(#t->asString)+"',\n")
 
 #f->writeString("    ],
-    'Methods': [
+    'Unbound Methods': [
 ")
-with t in #methodsList
-do #f->writeString("        '"+string_lowercase(#t)+"',\n")
+with t in #unboundMethodsList
+do #f->writeString("        '"+string_lowercase(#t->asString)+"',\n")
 
-#f->writeString("    ],
+#f->writeString("    ]
+}
+MEMBERS = {
+    'Member Methods': [
+")
+with t in #memberMethodsList
+do #f->writeString("        '"+string_lowercase(#t->asString)+"',\n")
+#f->writeString("    ]
 }
 ")
-#!/usr/bin/env python
+#!/usr/bin/env python2
 
 import sys, pygments.cmdline
 try:
@@ -129,7 +129,7 @@ class KeywordCaseFilter(Filter):
 
 class NameHighlightFilter(Filter):
     """
-    Highlight a normal Name token with a different token type.
+    Highlight a normal Name (and Name.*) token with a different token type.
 
     Example::
 
@@ -163,7 +163,7 @@ class NameHighlightFilter(Filter):
 
     def filter(self, lexer, stream):
        for ttype, value in stream:
-            if ttype is Name and value in self.names:
+            if ttype in Name and value in self.names:
                yield self.tokentype, value
            else:
                yield ttype, value
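A short sketch of what the ``in`` membership test buys (standard Pygments
APIs; the filtered name is arbitrary). ``render`` below is tokenized as
``Name.Function``, which the old ``is Name`` comparison skipped:

.. sourcecode:: python

    from pygments import highlight
    from pygments.filters import NameHighlightFilter
    from pygments.formatters import TerminalFormatter
    from pygments.lexers import PythonLexer
    from pygments.token import Name

    lexer = PythonLexer()
    # Now matches any Name.* subtype, not only plain Name tokens.
    lexer.add_filter(NameHighlightFilter(names=['render'],
                                         tokentype=Name.Builtin))
    print(highlight('def render(): pass\n', lexer, TerminalFormatter()))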
@@ -68,6 +68,9 @@ class Formatter(object):
         self.full = get_bool_opt(options, 'full', False)
         self.title = options.get('title', '')
         self.encoding = options.get('encoding', None) or None
+        if self.encoding == 'guess':
+            # can happen for pygmentize -O encoding=guess
+            self.encoding = 'utf-8'
         self.encoding = options.get('outencoding', None) or self.encoding
         self.options = options
   
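A sketch of the added fallback (real formatter options; ``'guess'`` is only
meaningful for input decoding, so the output encoding now degrades to UTF-8
instead of propagating a bogus value):

.. sourcecode:: python

    from pygments.formatters import HtmlFormatter

    # e.g. what the formatter sees after `pygmentize -O encoding=guess`
    fmt = HtmlFormatter(encoding='guess')
    assert fmt.encoding == 'utf-8'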
@@ -15,6 +15,7 @@ import fnmatch
 from os.path import basename
 
 from pygments.lexers._mapping import LEXERS
+from pygments.modeline import get_filetype_from_buffer
 from pygments.plugin import find_plugin_lexers
 from pygments.util import ClassNotFound, bytes
@@ -197,6 +198,16 @@ def guess_lexer(_text, **options):
     """
     Guess a lexer by strong distinctions in the text (eg, shebang).
     """
+
+    # try to get a vim modeline first
+    ft = get_filetype_from_buffer(_text)
+
+    if ft is not None:
+        try:
+            return get_lexer_by_name(ft, **options)
+        except ClassNotFound:
+            pass
+
     best_lexer = [0.0, None]
     for lexer in _iter_lexerclasses():
         rv = lexer.analyse_text(_text)
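The helper can also be called directly; a minimal sketch, assuming a
conventional vim modeline on the file's last line:

.. sourcecode:: python

    from pygments.modeline import get_filetype_from_buffer

    text = 'print "x"\n# vim: ft=python\n'
    print(get_filetype_from_buffer(text))  # -> 'python' (None if absent)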
@@ -163,7 +163,7 @@ class RowSplitter(object):
     def split(self, row):
         splitter = (row.startswith('| ') and self._split_from_pipes
                     or self._split_from_spaces)
-        for value in splitter(row.rstrip()):
+        for value in splitter(row):
             yield value
         yield '\n'
 
 # -*- coding: utf-8 -*-
 """
     pygments.lexers._stan_builtins
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
     This file contains the names of functions for Stan used by
     ``pygments.lexers.math.StanLexer.
 
-    :copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
+    :copyright: Copyright 2013 by the Pygments team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """
 
-CONSTANTS=[ 'e',
-    'epsilon',
-    'log10',
-    'log2',
-    'negative_epsilon',
-    'negative_infinity',
-    'not_a_number',
-    'pi',
-    'positive_infinity',
-    'sqrt2']
+KEYWORDS = ['else', 'for', 'if', 'in', 'lower', 'lp__', 'print', 'upper', 'while']
+
+TYPES = [ 'corr_matrix',
+    'cov_matrix',
+    'int',
+    'matrix',
+    'ordered',
+    'positive_ordered',
+    'real',
+    'row_vector',
+    'simplex',
+    'unit_vector',
+    'vector']
 
-FUNCTIONS=[ 'Phi',
+FUNCTIONS = [ 'Phi',
+    'Phi_approx',
     'abs',
     'acos',
     'acosh',
@@ -30,37 +34,66 @@ FUNCTIONS=[ 'Phi',
     'atan',
     'atan2',
     'atanh',
+    'bernoulli_cdf',
     'bernoulli_log',
+    'bernoulli_logit_log',
+    'bernoulli_rng',
+    'beta_binomial_cdf',
     'beta_binomial_log',
+    'beta_binomial_rng',
+    'beta_cdf',
     'beta_log',
+    'beta_rng',
     'binary_log_loss',
+    'binomial_cdf',
     'binomial_coefficient_log',
+    'binomial_log',
+    'binomial_logit_log',
+    'binomial_rng',
+    'block',
     'categorical_log',
+    'categorical_rng',
+    'cauchy_cdf',
     'cauchy_log',
+    'cauchy_rng',
     'cbrt',
     'ceil',
     'chi_square_log',
+    'chi_square_rng',
     'cholesky_decompose',
     'col',
     'cols',
     'cos',
     'cosh',
+    'crossprod',
+    'cumulative_sum',
     'determinant',
     'diag_matrix',
+    'diag_post_multiply',
+    'diag_pre_multiply',
     'diagonal',
+    'dims',
     'dirichlet_log',
+    'dirichlet_rng',
     'dot_product',
     'dot_self',
     'double_exponential_log',
-    'eigenvalues',
+    'double_exponential_rng',
+    'e',
     'eigenvalues_sym',
+    'eigenvectors_sym',
+    'epsilon',
     'erf',
     'erfc',
     'exp',
     'exp2',
+    'exp_mod_normal_cdf',
+    'exp_mod_normal_log',
+    'exp_mod_normal_rng',
     'expm1',
     'exponential_cdf',
     'exponential_log',
+    'exponential_rng',
     'fabs',
     'fdim',
     'floor',
@@ -69,85 +102,148 @@ FUNCTIONS=[ 'Phi',
     'fmin',
     'fmod',
     'gamma_log',
+    'gamma_rng',
+    'gumbel_cdf',
+    'gumbel_log',
+    'gumbel_rng',
     'hypergeometric_log',
+    'hypergeometric_rng',
     'hypot',
     'if_else',
     'int_step',
+    'inv_chi_square_cdf',
     'inv_chi_square_log',
+    'inv_chi_square_rng',
     'inv_cloglog',
+    'inv_gamma_cdf',
     'inv_gamma_log',
+    'inv_gamma_rng',
     'inv_logit',
     'inv_wishart_log',
+    'inv_wishart_rng',
     'inverse',
     'lbeta',
     'lgamma',
     'lkj_corr_cholesky_log',
+    'lkj_corr_cholesky_rng',
     'lkj_corr_log',
+    'lkj_corr_rng',
     'lkj_cov_log',
     'lmgamma',
     'log',
     'log10',
     'log1m',
+    'log1m_inv_logit',
     'log1p',
     'log1p_exp',
     'log2',
+    'log_determinant',
+    'log_inv_logit',
     'log_sum_exp',
+    'logistic_cdf',
     'logistic_log',
+    'logistic_rng',
     'logit',
     'lognormal_cdf',
     'lognormal_log',
+    'lognormal_rng',
     'max',
+    'mdivide_left_tri_low',
+    'mdivide_right_tri_low',
     'mean',
     'min',
     'multi_normal_cholesky_log',
     'multi_normal_log',
+    'multi_normal_prec_log',
+    'multi_normal_rng',
     'multi_student_t_log',
+    'multi_student_t_rng',
+    'multinomial_cdf',
     'multinomial_log',
+    'multinomial_rng',
     'multiply_log',
     'multiply_lower_tri_self_transpose',
+    'neg_binomial_cdf',
     'neg_binomial_log',
+    'neg_binomial_rng',
+    'negative_epsilon',
+    'negative_infinity',
     'normal_cdf',
     'normal_log',
+    'normal_rng',
+    'not_a_number',
     'ordered_logistic_log',
+    'ordered_logistic_rng',
+    'owens_t',
+    'pareto_cdf',
     'pareto_log',
+    'pareto_rng',
+    'pi',
+    'poisson_cdf',
     'poisson_log',
+    'poisson_log_log',
+    'poisson_rng',
+    'positive_infinity',
     'pow',
     'prod',
+    'rep_array',
+    'rep_matrix',
+    'rep_row_vector',
+    'rep_vector',
     'round',
     'row',
     'rows',
+    'scaled_inv_chi_square_cdf',
     'scaled_inv_chi_square_log',
+    'scaled_inv_chi_square_rng',
     'sd',
     'sin',
     'singular_values',
     'sinh',
+    'size',
+    'skew_normal_cdf',
+    'skew_normal_log',
+    'skew_normal_rng',
     'softmax',
     'sqrt',
+    'sqrt2',
     'square',
     'step',
+    'student_t_cdf',
     'student_t_log',
+    'student_t_rng',
     'sum',
     'tan',
     'tanh',
+    'tcrossprod',
     'tgamma',
     'trace',
     'trunc',
     'uniform_log',
+    'uniform_rng',
     'variance',
     'weibull_cdf',
     'weibull_log',
-    'wishart_log']
+    'weibull_rng',
+    'wishart_log',
+    'wishart_rng']
 
-DISTRIBUTIONS=[ 'bernoulli',
+DISTRIBUTIONS = [ 'bernoulli',
+    'bernoulli_logit',
     'beta',
     'beta_binomial',
-    'binomial_coefficient',
+    'binomial',
+    'binomial_logit',
     'categorical',
     'cauchy',
     'chi_square',
     'dirichlet',
     'double_exponential',
+    'exp_mod_normal',
     'exponential',
     'gamma',
+    'gumbel',
     'hypergeometric',
     'inv_chi_square',
     'inv_gamma',
@@ -159,16 +255,106 @@ DISTRIBUTIONS=[ 'bernoulli',
     'lognormal',
     'multi_normal',
     'multi_normal_cholesky',
+    'multi_normal_prec',
     'multi_student_t',
     'multinomial',
-    'multiply',
     'neg_binomial',
     'normal',
     'ordered_logistic',
     'pareto',
     'poisson',
+    'poisson_log',
     'scaled_inv_chi_square',
+    'skew_normal',
     'student_t',
     'uniform',
     'weibull',
     'wishart']
+
+RESERVED = [ 'alignas',
+    'alignof',
+    'and',
+    'and_eq',
+    'asm',
+    'auto',
+    'bitand',
+    'bitor',
+    'bool',
+    'break',
+    'case',
+    'catch',
+    'char',
+    'char16_t',
+    'char32_t',
+    'class',
+    'compl',
+    'const',
+    'const_cast',
+    'constexpr',
+    'continue',
+    'decltype',
+    'default',
+    'delete',
+    'do',
+    'double',
+    'dynamic_cast',
+    'enum',
+    'explicit',
+    'export',
+    'extern',
+    'false',
+    'false',
+    'float',
+    'friend',
+    'goto',
+    'inline',
+    'int',
+    'long',
+    'mutable',
+    'namespace',
+    'new',
+    'noexcept',
+    'not',
+    'not_eq',
+    'nullptr',
+    'operator',
+    'or',
+    'or_eq',
+    'private',
+    'protected',
+    'public',
+    'register',
+    'reinterpret_cast',
+    'repeat',
+    'return',
+    'short',
+    'signed',
+    'sizeof',
+    'static',
+    'static_assert',
+    'static_cast',
+    'struct',
+    'switch',
+    'template',
+    'then',
+    'this',
+    'thread_local',
+    'throw',
+    'true',
+    'true',
+    'try',
+    'typedef',
+    'typeid',
+    'typename',
+    'union',
+    'unsigned',
+    'until',
+    'using',
+    'virtual',
+    'void',
+    'volatile',
+    'wchar_t',
+    'xor',
+    'xor_eq']
@@ -25,7 +25,7 @@ class GasLexer(RegexLexer):
     For Gas (AT&T) assembly code.
     """
     name = 'GAS'
-    aliases = ['gas']
+    aliases = ['gas', 'asm']
     filenames = ['*.s', '*.S']
     mimetypes = ['text/x-gas']
 
@@ -244,7 +244,7 @@ class LlvmLexer(RegexLexer):
                  r'|align|addrspace|section|alias|module|asm|sideeffect|gc|dbg'
 
                  r'|ccc|fastcc|coldcc|x86_stdcallcc|x86_fastcallcc|arm_apcscc'
-                 r'|arm_aapcscc|arm_aapcs_vfpcc'
+                 r'|arm_aapcscc|arm_aapcs_vfpcc|ptx_device|ptx_kernel'
 
                  r'|cc|c'
 
@@ -23,13 +23,14 @@ from pygments.scanner import Scanner
 from pygments.lexers.functional import OcamlLexer
 from pygments.lexers.jvm import JavaLexer, ScalaLexer
 
-__all__ = ['CLexer', 'CppLexer', 'DLexer', 'DelphiLexer', 'ECLexer', 'DylanLexer',
-           'ObjectiveCLexer', 'ObjectiveCppLexer', 'FortranLexer', 'GLShaderLexer',
-           'PrologLexer', 'CythonLexer', 'ValaLexer', 'OocLexer', 'GoLexer',
-           'FelixLexer', 'AdaLexer', 'Modula2Lexer', 'BlitzMaxLexer',
-           'NimrodLexer', 'FantomLexer', 'RustLexer', 'CudaLexer', 'MonkeyLexer',
-           'DylanLidLexer', 'DylanConsoleLexer', 'CobolLexer',
-           'CobolFreeformatLexer', 'LogosLexer']
+__all__ = ['CLexer', 'CppLexer', 'DLexer', 'DelphiLexer', 'ECLexer',
+           'NesCLexer', 'DylanLexer', 'ObjectiveCLexer', 'ObjectiveCppLexer',
+           'FortranLexer', 'GLShaderLexer', 'PrologLexer', 'CythonLexer',
+           'ValaLexer', 'OocLexer', 'GoLexer', 'FelixLexer', 'AdaLexer',
+           'Modula2Lexer', 'BlitzMaxLexer', 'BlitzBasicLexer', 'NimrodLexer',
+           'FantomLexer', 'RustLexer', 'CudaLexer', 'MonkeyLexer', 'SwigLexer',
+           'DylanLidLexer', 'DylanConsoleLexer', 'CobolLexer',
+           'CobolFreeformatLexer', 'LogosLexer', 'ClayLexer']
 
 
 class CFamilyLexer(RegexLexer):
@@ -231,6 +232,63 @@ class CppLexer(CFamilyLexer):
         return 0.1
 
 
+class SwigLexer(CppLexer):
+    """
+    For `SWIG <http://www.swig.org/>`_ source code.
+
+    *New in Pygments 1.7.*
+    """
+    name = 'SWIG'
+    aliases = ['Swig', 'swig']
+    filenames = ['*.swg', '*.i']
+    mimetypes = ['text/swig']
+    priority = 0.04  # Lower than C/C++ and Objective C/C++
+
+    tokens = {
+        'statements': [
+            (r'(%[a-z_][a-z0-9_]*)', Name.Function),  # SWIG directives
+            ('\$\**\&?[a-zA-Z0-9_]+', Name),  # Special variables
+            (r'##*[a-zA-Z_][a-zA-Z0-9_]*', Comment.Preproc),  # Stringification / additional preprocessor directives
+            inherit,
+        ],
+    }
+
+    # This is a far from complete set of SWIG directives
+    swig_directives = (
+        # Most common directives
+        '%apply', '%define', '%director', '%enddef', '%exception', '%extend',
+        '%feature', '%fragment', '%ignore', '%immutable', '%import', '%include',
+        '%inline', '%insert', '%module', '%newobject', '%nspace', '%pragma',
+        '%rename', '%shared_ptr', '%template', '%typecheck', '%typemap',
+        # Less common directives
+        '%arg', '%attribute', '%bang', '%begin', '%callback', '%catches', '%clear',
+        '%constant', '%copyctor', '%csconst', '%csconstvalue', '%csenum',
+        '%csmethodmodifiers', '%csnothrowexception', '%default', '%defaultctor',
+        '%defaultdtor', '%defined', '%delete', '%delobject', '%descriptor',
+        '%exceptionclass', '%exceptionvar', '%extend_smart_pointer', '%fragments',
+        '%header', '%ifcplusplus', '%ignorewarn', '%implicit', '%implicitconv',
+        '%init', '%javaconst', '%javaconstvalue', '%javaenum', '%javaexception',
+        '%javamethodmodifiers', '%kwargs', '%luacode', '%mutable', '%naturalvar',
+        '%nestedworkaround', '%perlcode', '%pythonabc', '%pythonappend',
+        '%pythoncallback', '%pythoncode', '%pythondynamic', '%pythonmaybecall',
+        '%pythonnondynamic', '%pythonprepend', '%refobject', '%shadow', '%sizeof',
+        '%trackobjects', '%types', '%unrefobject', '%varargs', '%warn', '%warnfilter')
+
+    def analyse_text(text):
+        rv = 0.1  # Same as C/C++
+        # Search for SWIG directives, which are conventionally at the beginning of
+        # a line. The probability of them being within a line is low, so let another
+        # lexer win in this case.
+        matches = re.findall(r'^\s*(%[a-z_][a-z0-9_]*)', text, re.M)
+        for m in matches:
+            if m in SwigLexer.swig_directives:
+                rv = 0.98
+                break
+            else:
+                rv = 0.91  # Fraction higher than MatlabLexer
+        return rv
+
+
 class ECLexer(CLexer):
     """
     For eC source code with preprocessor directives.
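A sketch of how this scoring plays out end to end (hypothetical two-line
interface file; a directive at the start of a line lifts the score to 0.98,
above the C/C++ lexers, so content-based guessing should resolve to SWIG):

.. sourcecode:: python

    from pygments.lexers import guess_lexer

    snippet = '%module example\n%include "std_string.i"\n'
    print(guess_lexer(snippet).name)  # expected: 'SWIG'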
@@ -266,6 +324,83 @@ class ECLexer(CLexer):
     }
 
 
+class NesCLexer(CLexer):
+    """
+    For `nesC <https://github.com/tinyos/nesc>`_ source code with preprocessor
+    directives.
+
+    *New in Pygments 1.7.*
+    """
+    name = 'nesC'
+    aliases = ['nesc']
+    filenames = ['*.nc']
+    mimetypes = ['text/x-nescsrc']
+
+    tokens = {
+        'statements': [
+            (r'(abstract|as|async|atomic|call|command|component|components|'
+             r'configuration|event|extends|generic|implementation|includes|'
+             r'interface|module|new|norace|post|provides|signal|task|uses)\b',
+             Keyword),
+            (r'(nx_struct|nx_union|nx_int8_t|nx_int16_t|nx_int32_t|nx_int64_t|'
+             r'nx_uint8_t|nx_uint16_t|nx_uint32_t|nx_uint64_t)\b',
+             Keyword.Type),
+            inherit,
+        ],
+    }
+
+
+class ClayLexer(RegexLexer):
+    """
+    For `Clay <http://claylabs.com/clay/>`_ source.
+
+    *New in Pygments 1.7.*
+    """
+    name = 'Clay'
+    filenames = ['*.clay']
+    aliases = ['clay']
+    mimetypes = ['text/x-clay']
+
+    tokens = {
+        'root': [
+            (r'\s', Text),
+            (r'//.*?$', Comment.Singleline),
+            (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline),
+            (r'\b(public|private|import|as|record|variant|instance'
+             r'|define|overload|default|external|alias'
+             r'|rvalue|ref|forward|inline|noinline|forceinline'
+             r'|enum|var|and|or|not|if|else|goto|return|while'
+             r'|switch|case|break|continue|for|in|true|false|try|catch|throw'
+             r'|finally|onerror|staticassert|eval|when|newtype'
+             r'|__FILE__|__LINE__|__COLUMN__|__ARG__'
+             r')\b', Keyword),
+            (r'[~!%^&*+=|:<>/-]', Operator),
+            (r'[#(){}\[\],;.]', Punctuation),
+            (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex),
+            (r'\d+[LlUu]*', Number.Integer),
+            (r'\b(true|false)\b', Name.Builtin),
+            (r'(?i)[a-z_?][a-z_?0-9]*', Name),
+            (r'"""', String, 'tdqs'),
+            (r'"', String, 'dqs'),
+        ],
+        'strings': [
+            (r'(?i)\\(x[0-9a-f]{2}|.)', String.Escape),
+            (r'.', String),
+        ],
+        'nl': [
+            (r'\n', String),
+        ],
+        'dqs': [
+            (r'"', String, '#pop'),
+            include('strings'),
+        ],
+        'tdqs': [
+            (r'"""', String, '#pop'),
+            include('strings'),
+            include('nl'),
+        ],
+    }
+
+
 class DLexer(RegexLexer):
     """
     For D source.
@@ -1216,6 +1351,8 @@ def objective(baselexer):
              ('#pop', 'oc_classname')),
             (r'(@class|@protocol)(\s+)', bygroups(Keyword, Text),
              ('#pop', 'oc_forward_classname')),
+            # @ can also prefix other expressions like @{...} or @(...)
+            (r'@', Punctuation),
             inherit,
         ],
        'oc_classname' : [
@@ -1471,7 +1608,15 @@ class PrologLexer(RegexLexer):
             (r'^#.*', Comment.Single),
             (r'/\*', Comment.Multiline, 'nested-comment'),
             (r'%.*', Comment.Single),
-            (r'[0-9]+', Number),
+            # character literal
+            (r'0\'.', String.Char),
+            (r'0b[01]+', Number.Bin),
+            (r'0o[0-7]+', Number.Oct),
+            (r'0x[0-9a-fA-F]+', Number.Hex),
+            # literal with prepended base
+            (r'\d\d?\'[a-zA-Z0-9]+', Number.Integer),
+            (r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number.Float),
+            (r'\d+', Number.Integer),
             (r'[\[\](){}|.,;!]', Punctuation),
             (r':-|-->', Punctuation),
             (r'"(?:\\x[0-9a-fA-F]+\\|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|'
@@ -1522,7 +1667,7 @@ class CythonLexer(RegexLexer):
     """
 
     name = 'Cython'
-    aliases = ['cython', 'pyx']
+    aliases = ['cython', 'pyx', 'pyrex']
     filenames = ['*.pyx', '*.pxd', '*.pxi']
     mimetypes = ['text/x-cython', 'application/x-cython']
 
@@ -2581,6 +2726,88 @@ class BlitzMaxLexer(RegexLexer):
     }
 
 
+class BlitzBasicLexer(RegexLexer):
+    """
+    For `BlitzBasic <http://blitzbasic.com>`_ source code.
+
+    *New in Pygments 1.7.*
+    """
+
+    name = 'BlitzBasic'
+    aliases = ['blitzbasic', 'b3d', 'bplus']
+    filenames = ['*.bb', '*.decls']
+    mimetypes = ['text/x-bb']
+
+    bb_vopwords = (r'\b(Shl|Shr|Sar|Mod|Or|And|Not|'
+                   r'Abs|Sgn|Handle|Int|Float|Str|'
+                   r'First|Last|Before|After)\b')
+    bb_sktypes = r'@{1,2}|[#$%]'
+    bb_name = r'[a-z][a-z0-9_]*'
+    bb_var = (r'(%s)(?:([ \t]*)(%s)|([ \t]*)([.])([ \t]*)(?:(%s)))?') % \
+             (bb_name, bb_sktypes, bb_name)
+
+    flags = re.MULTILINE | re.IGNORECASE
+    tokens = {
+        'root': [
+            # Text
+            (r'[ \t]+', Text),
+            # Comments
+            (r";.*?\n", Comment.Single),
+            # Data types
+            ('"', String.Double, 'string'),
+            # Numbers
+            (r'[0-9]+\.[0-9]*(?!\.)', Number.Float),
+            (r'\.[0-9]+(?!\.)', Number.Float),
+            (r'[0-9]+', Number.Integer),
+            (r'\$[0-9a-f]+', Number.Hex),
+            (r'\%[10]+', Number),  # Binary
+            # Other
+            (r'(?:%s|([+\-*/~=<>^]))' % (bb_vopwords), Operator),
+            (r'[(),:\[\]\\]', Punctuation),
+            (r'\.([ \t]*)(%s)' % bb_name, Name.Label),
+            # Identifiers
+            (r'\b(New)\b([ \t]+)(%s)' % (bb_name),
+             bygroups(Keyword.Reserved, Text, Name.Class)),
+            (r'\b(Gosub|Goto)\b([ \t]+)(%s)' % (bb_name),
+             bygroups(Keyword.Reserved, Text, Name.Label)),
+            (r'\b(Object)\b([ \t]*)([.])([ \t]*)(%s)\b' % (bb_name),
+             bygroups(Operator, Text, Punctuation, Text, Name.Class)),
+            (r'\b%s\b([ \t]*)(\()' % bb_var,
+             bygroups(Name.Function, Text, Keyword.Type, Text, Punctuation,
+                      Text, Name.Class, Text, Punctuation)),
+            (r'\b(Function)\b([ \t]+)%s' % bb_var,
+             bygroups(Keyword.Reserved, Text, Name.Function, Text, Keyword.Type,
+                      Text, Punctuation, Text, Name.Class)),
+            (r'\b(Type)([ \t]+)(%s)' % (bb_name),
+             bygroups(Keyword.Reserved, Text, Name.Class)),
+            # Keywords
+            (r'\b(Pi|True|False|Null)\b', Keyword.Constant),
+            (r'\b(Local|Global|Const|Field|Dim)\b', Keyword.Declaration),
+            (r'\b(End|Return|Exit|'
+             r'Chr|Len|Asc|'
+             r'New|Delete|Insert|'
+             r'Include|'
+             r'Function|'
+             r'Type|'
+             r'If|Then|Else|ElseIf|EndIf|'
+             r'For|To|Next|Step|Each|'
+             r'While|Wend|'
+             r'Repeat|Until|Forever|'
+             r'Select|Case|Default|'
+             r'Goto|Gosub|Data|Read|Restore)\b', Keyword.Reserved),
+            # Final resolve (for variable names and such)
+            # (r'(%s)' % (bb_name), Name.Variable),
+            (bb_var, bygroups(Name.Variable, Text, Keyword.Type,
+                              Text, Punctuation, Text, Name.Class)),
+        ],
+        'string': [
+            (r'""', String.Double),
+            (r'"C?', String.Double, '#pop'),
+            (r'[^"]+', String.Double),
+        ],
+    }
+
+
 class NimrodLexer(RegexLexer):
     """
     For `Nimrod <http://nimrod-code.org/>`_ source code.
@@ -529,7 +529,7 @@ class VbNetAspxLexer(DelegatingLexer):
 
 # Very close to functional.OcamlLexer
 class FSharpLexer(RegexLexer):
     """
-    For the F# language.
+    For the F# language (version 3.0).
 
     *New in Pygments 1.5.*
     """
@@ -540,91 +540,132 @@ class FSharpLexer(RegexLexer):
     mimetypes = ['text/x-fsharp']
 
     keywords = [
-        'abstract', 'and', 'as', 'assert', 'base', 'begin', 'class',
-        'default', 'delegate', 'do', 'do!', 'done', 'downcast',
-        'downto', 'elif', 'else', 'end', 'exception', 'extern',
-        'false', 'finally', 'for', 'fun', 'function', 'global', 'if',
-        'in', 'inherit', 'inline', 'interface', 'internal', 'lazy',
-        'let', 'let!', 'match', 'member', 'module', 'mutable',
-        'namespace', 'new', 'null', 'of', 'open', 'or', 'override',
-        'private', 'public', 'rec', 'return', 'return!', 'sig',
-        'static', 'struct', 'then', 'to', 'true', 'try', 'type',
-        'upcast', 'use', 'use!', 'val', 'void', 'when', 'while',
-        'with', 'yield', 'yield!'
+        'abstract', 'as', 'assert', 'base', 'begin', 'class', 'default',
+        'delegate', 'do!', 'do', 'done', 'downcast', 'downto', 'elif', 'else',
+        'end', 'exception', 'extern', 'false', 'finally', 'for', 'function',
+        'fun', 'global', 'if', 'inherit', 'inline', 'interface', 'internal',
+        'in', 'lazy', 'let!', 'let', 'match', 'member', 'module', 'mutable',
+        'namespace', 'new', 'null', 'of', 'open', 'override', 'private', 'public',
+        'rec', 'return!', 'return', 'select', 'static', 'struct', 'then', 'to',
+        'true', 'try', 'type', 'upcast', 'use!', 'use', 'val', 'void', 'when',
+        'while', 'with', 'yield!', 'yield',
+    ]
+    # Reserved words; cannot hurt to color them as keywords too.
+    keywords += [
+        'atomic', 'break', 'checked', 'component', 'const', 'constraint',
+        'constructor', 'continue', 'eager', 'event', 'external', 'fixed',
+        'functor', 'include', 'method', 'mixin', 'object', 'parallel',
+        'process', 'protected', 'pure', 'sealed', 'tailcall', 'trait',
+        'virtual', 'volatile',
     ]
+
     keyopts = [
-        '!=','#','&&','&','\(','\)','\*','\+',',','-\.',
-        '->','-','\.\.','\.','::',':=',':>',':',';;',';','<-',
-        '<','>]','>','\?\?','\?','\[<','\[>','\[\|','\[',
-        ']','_','`','{','\|\]','\|','}','~','<@','=','@>'
+        '!=', '#', '&&', '&', '\(', '\)', '\*', '\+', ',', '-\.',
+        '->', '-', '\.\.', '\.', '::', ':=', ':>', ':', ';;', ';', '<-',
+        '<\]', '<', '>\]', '>', '\?\?', '\?', '\[<', '\[\|', '\[', '\]',
+        '_', '`', '{', '\|\]', '\|', '}', '~', '<@@', '<@', '=', '@>', '@@>',
     ]
 
     operators = r'[!$%&*+\./:<=>?@^|~-]'
-    word_operators = ['and', 'asr', 'land', 'lor', 'lsl', 'lxor', 'mod', 'not', 'or']
+    word_operators = ['and', 'or', 'not']
     prefix_syms = r'[!?~]'
     infix_syms = r'[=<>@^|&+\*/$%-]'
-    primitives = ['unit', 'int', 'float', 'bool', 'string', 'char', 'list', 'array',
-                  'byte', 'sbyte', 'int16', 'uint16', 'uint32', 'int64', 'uint64'
-                  'nativeint', 'unativeint', 'decimal', 'void', 'float32', 'single',
-                  'double']
+    primitives = [
+        'sbyte', 'byte', 'char', 'nativeint', 'unativeint', 'float32', 'single',
+        'float', 'double', 'int8', 'uint8', 'int16', 'uint16', 'int32',
+        'uint32', 'int64', 'uint64', 'decimal', 'unit', 'bool', 'string',
+        'list', 'exn', 'obj', 'enum',
+    ]
+
+    # See http://msdn.microsoft.com/en-us/library/dd233181.aspx and/or
+    # http://fsharp.org/about/files/spec.pdf for reference. Good luck.
 
     tokens = {
         'escape-sequence': [
-            (r'\\[\\\"\'ntbr]', String.Escape),
+            (r'\\[\\\"\'ntbrafv]', String.Escape),
             (r'\\[0-9]{3}', String.Escape),
-            (r'\\x[0-9a-fA-F]{2}', String.Escape),
+            (r'\\u[0-9a-fA-F]{4}', String.Escape),
+            (r'\\U[0-9a-fA-F]{8}', String.Escape),
         ],
         'root': [
             (r'\s+', Text),
-            (r'false|true|\(\)|\[\]', Name.Builtin.Pseudo),
-            (r'\b([A-Z][A-Za-z0-9_\']*)(?=\s*\.)',
+            (r'\(\)|\[\]', Name.Builtin.Pseudo),
+            (r'\b(?<!\.)([A-Z][A-Za-z0-9_\']*)(?=\s*\.)',
              Name.Namespace, 'dotted'),
-            (r'\b([A-Z][A-Za-z0-9_\']*)', Name.Class),
+            (r'\b([A-Z][A-Za-z0-9_\']*)', Name),
+            (r'///.*?\n', String.Doc),
             (r'//.*?\n', Comment.Single),
             (r'\(\*(?!\))', Comment, 'comment'),
+
+            (r'@"', String, 'lstring'),
+            (r'"""', String, 'tqs'),
+            (r'"', String, 'string'),
+
+            (r'\b(open|module)(\s+)([a-zA-Z0-9_.]+)',
+             bygroups(Keyword, Text, Name.Namespace)),
+            (r'\b(let!?)(\s+)([a-zA-Z0-9_]+)',
+             bygroups(Keyword, Text, Name.Variable)),
+            (r'\b(type)(\s+)([a-zA-Z0-9_]+)',
+             bygroups(Keyword, Text, Name.Class)),
+            (r'\b(member|override)(\s+)([a-zA-Z0-9_]+)(\.)([a-zA-Z0-9_]+)',
+             bygroups(Keyword, Text, Name, Punctuation, Name.Function)),
             (r'\b(%s)\b' % '|'.join(keywords), Keyword),
             (r'(%s)' % '|'.join(keyopts), Operator),
             (r'(%s|%s)?%s' % (infix_syms, prefix_syms, operators), Operator),
             (r'\b(%s)\b' % '|'.join(word_operators), Operator.Word),
             (r'\b(%s)\b' % '|'.join(primitives), Keyword.Type),
-            (r'#[ \t]*(if|endif|else|line|nowarn|light|\d+)\b.*?\n',
+            (r'#[ \t]*(if|endif|else|line|nowarn|light)\b.*?\n',
             Comment.Preproc),
 
             (r"[^\W\d][\w']*", Name),
 
-            (r'\d[\d_]*', Number.Integer),
-            (r'0[xX][\da-fA-F][\da-fA-F_]*', Number.Hex),
-            (r'0[oO][0-7][0-7_]*', Number.Oct),
-            (r'0[bB][01][01_]*', Number.Binary),
-            (r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)', Number.Float),
+            (r'\d[\d_]*[uU]?[yslLnQRZINGmM]?', Number.Integer),
+            (r'0[xX][\da-fA-F][\da-fA-F_]*[uU]?[yslLn]?[fF]?', Number.Hex),
+            (r'0[oO][0-7][0-7_]*[uU]?[yslLn]?', Number.Oct),
+            (r'0[bB][01][01_]*[uU]?[yslLn]?', Number.Binary),
+            (r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)[fFmM]?',
+             Number.Float),
 
-            (r"'(?:(\\[\\\"'ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))'",
+            (r"'(?:(\\[\\\"'ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))'B?",
             String.Char),
             (r"'.'", String.Char),
             (r"'", Keyword),  # a stray quote is another syntax element
 
-            (r'"', String.Double, 'string'),
-
             (r'[~?][a-z][\w\']*:', Name.Variable),
         ],
+        'dotted': [
+            (r'\s+', Text),
+            (r'\.', Punctuation),
+            (r'[A-Z][A-Za-z0-9_\']*(?=\s*\.)', Name.Namespace),
+            (r'[A-Z][A-Za-z0-9_\']*', Name, '#pop'),
+            (r'[a-z_][A-Za-z0-9_\']*', Name, '#pop'),
+        ],
         'comment': [
-            (r'[^(*)]+', Comment),
+            (r'[^(*)@"]+', Comment),
             (r'\(\*', Comment, '#push'),
             (r'\*\)', Comment, '#pop'),
-            (r'[(*)]', Comment),
+            # comments cannot be closed within strings in comments
+            (r'@"', String, 'lstring'),
+            (r'"""', String, 'tqs'),
+            (r'"', String, 'string'),
+            (r'[(*)@]', Comment),
         ],
         'string': [
-            (r'[^\\"]+', String.Double),
+            (r'[^\\"]+', String),
             include('escape-sequence'),
-            (r'\\\n', String.Double),
-            (r'"', String.Double, '#pop'),
+            (r'\\\n', String),
+            (r'\n', String),  # newlines are allowed in any string
+            (r'"B?', String, '#pop'),
         ],
-        'dotted': [
-            (r'\s+', Text),
-            (r'\.', Punctuation),
-            (r'[A-Z][A-Za-z0-9_\']*(?=\s*\.)', Name.Namespace),
-            (r'[A-Z][A-Za-z0-9_\']*', Name.Class, '#pop'),
-            (r'[a-z_][A-Za-z0-9_\']*', Name, '#pop'),
+        'lstring': [
+            (r'[^"]+', String),
+            (r'\n', String),
+            (r'""', String),
+            (r'"B?', String, '#pop'),
+        ],
+        'tqs': [
+            (r'[^"]+', String),
+            (r'\n', String),
+            (r'"""B?', String, '#pop'),
+            (r'"', String),
         ],
     }
@@ -16,9 +16,13 @@ from pygments.token import Text, Comment, Operator, Keyword, Name, \
      String, Number, Punctuation, Literal, Generic, Error
 
 __all__ = ['RacketLexer', 'SchemeLexer', 'CommonLispLexer', 'HaskellLexer',
-           'LiterateHaskellLexer', 'SMLLexer', 'OcamlLexer', 'ErlangLexer',
-           'ErlangShellLexer', 'OpaLexer', 'CoqLexer', 'NewLispLexer',
-           'ElixirLexer', 'ElixirConsoleLexer', 'KokaLexer']
+           'AgdaLexer', 'LiterateHaskellLexer', 'LiterateAgdaLexer',
+           'SMLLexer', 'OcamlLexer', 'ErlangLexer', 'ErlangShellLexer',
+           'OpaLexer', 'CoqLexer', 'NewLispLexer', 'ElixirLexer',
+           'ElixirConsoleLexer', 'KokaLexer']
+
+line_re = re.compile('.*?\n')
 
 
 class RacketLexer(RegexLexer):
@@ -719,7 +723,7 @@ class CommonLispLexer(RegexLexer):
     *New in Pygments 0.9.*
     """
     name = 'Common Lisp'
-    aliases = ['common-lisp', 'cl']
+    aliases = ['common-lisp', 'cl', 'lisp']
     filenames = ['*.cl', '*.lisp', '*.el']  # use for Elisp too
     mimetypes = ['text/x-common-lisp']
 
@@ -808,6 +812,8 @@ class CommonLispLexer(RegexLexer):
             (r'"(\\.|\\\n|[^"\\])*"', String),
             # quoting
             (r":" + symbol, String.Symbol),
+            (r"::" + symbol, String.Symbol),
+            (r":#" + symbol, String.Symbol),
             (r"'" + symbol, String.Symbol),
             (r"'", Operator),
             (r"`", Operator),
@@ -979,6 +985,8 @@ class HaskellLexer(RegexLexer):
             (r'\(', Punctuation, ('funclist', 'funclist')),
             (r'\)', Punctuation, '#pop:2'),
         ],
+        # NOTE: the next four states are shared in the AgdaLexer; make sure
+        # any change is compatible with Agda as well or copy over and change
         'comment': [
             # Multiline Comments
             (r'[^-{}]+', Comment.Multiline),
@@ -1009,12 +1017,78 @@ class HaskellLexer(RegexLexer):
     }
 
 
-line_re = re.compile('.*?\n')
-bird_re = re.compile(r'(>[ \t]*)(.*\n)')
-
-class LiterateHaskellLexer(Lexer):
+class AgdaLexer(RegexLexer):
     """
-    For Literate Haskell (Bird-style or LaTeX) source.
+    For the `Agda <http://wiki.portal.chalmers.se/agda/pmwiki.php>`_
+    dependently typed functional programming language and proof assistant.
+
+    *New in Pygments 1.7.*
+    """
+
+    name = 'Agda'
+    aliases = ['agda']
+    filenames = ['*.agda']
+    mimetypes = ['text/x-agda']
+
+    reserved = ['abstract', 'codata', 'coinductive', 'constructor', 'data',
+                'field', 'forall', 'hiding', 'in', 'inductive', 'infix',
+                'infixl', 'infixr', 'let', 'open', 'pattern', 'primitive',
+                'private', 'mutual', 'quote', 'quoteGoal', 'quoteTerm',
+                'record', 'syntax', 'rewrite', 'unquote', 'using', 'where',
+                'with']
+
+    tokens = {
+        'root': [
+            # Declaration
+            (r'^(\s*)([^\s\(\)\{\}]+)(\s*)(:)(\s*)',
+             bygroups(Text, Name.Function, Text, Operator.Word, Text)),
+            # Comments
+            (r'--(?![!#$%&*+./<=>?@\^|_~:\\]).*?$', Comment.Single),
+            (r'{-', Comment.Multiline, 'comment'),
+            # Holes
+            (r'{!', Comment.Directive, 'hole'),
+            # Lexemes:
+            #  Identifiers
+            (ur'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved),
+            (r'(import|module)(\s+)', bygroups(Keyword.Reserved, Text), 'module'),
+            (r'\b(Set|Prop)\b', Keyword.Type),
+            #  Special Symbols
+            (r'(\(|\)|\{|\})', Operator),
+            (ur'(\.{1,3}|\||[\u039B]|[\u2200]|[\u2192]|:|=|->)', Operator.Word),
+            #  Numbers
+            (r'\d+[eE][+-]?\d+', Number.Float),
+            (r'\d+\.\d+([eE][+-]?\d+)?', Number.Float),
+            (r'0[xX][\da-fA-F]+', Number.Hex),
+            (r'\d+', Number.Integer),
+            # Strings
+            (r"'", String.Char, 'character'),
+            (r'"', String, 'string'),
+            (r'[^\s\(\)\{\}]+', Text),
+            (r'\s+?', Text),  # Whitespace
+        ],
+        'hole': [
+            # Holes
+            (r'[^!{}]+', Comment.Directive),
+            (r'{!', Comment.Directive, '#push'),
+            (r'!}', Comment.Directive, '#pop'),
+            (r'[!{}]', Comment.Directive),
+        ],
+        'module': [
+            (r'{-', Comment.Multiline, 'comment'),
+            (r'[a-zA-Z][a-zA-Z0-9_.]*', Name, '#pop'),
+            (r'[^a-zA-Z]*', Text)
+        ],
+        'comment': HaskellLexer.tokens['comment'],
+        'character': HaskellLexer.tokens['character'],
+        'string': HaskellLexer.tokens['string'],
+        'escape': HaskellLexer.tokens['escape']
+    }
+
+
+class LiterateLexer(Lexer):
+    """
+    Base class for lexers of literate file formats based on LaTeX or Bird-style
+    (prefixing each code line with ">").
 
     Additional options accepted:
 
@@ -1022,17 +1096,15 @@ class LiterateHaskellLexer(Lexer):
         If given, must be ``"bird"`` or ``"latex"``.  If not given, the style
         is autodetected: if the first non-whitespace character in the source
         is a backslash or percent character, LaTeX is assumed, else Bird.
-
-    *New in Pygments 0.9.*
     """
-    name = 'Literate Haskell'
-    aliases = ['lhs', 'literate-haskell']
-    filenames = ['*.lhs']
-    mimetypes = ['text/x-literate-haskell']
 
-    def get_tokens_unprocessed(self, text):
-        hslexer = HaskellLexer(**self.options)
+    bird_re = re.compile(r'(>[ \t]*)(.*\n)')
+
+    def __init__(self, baselexer, **options):
+        self.baselexer = baselexer
+        Lexer.__init__(self, **options)
 
+    def get_tokens_unprocessed(self, text):
         style = self.options.get('litstyle')
         if style is None:
             style = (text.lstrip()[0:1] in '%\\') and 'latex' or 'bird'
@@ -1043,7 +1115,7 @@ class LiterateHaskellLexer(Lexer):
             # bird-style
             for match in line_re.finditer(text):
                 line = match.group()
-                m = bird_re.match(line)
+                m = self.bird_re.match(line)
                 if m:
                     insertions.append((len(code),
                                        [(0, Comment.Special, m.group(1))]))
@@ -1054,7 +1126,6 @@ class LiterateHaskellLexer(Lexer):
             # latex-style
             from pygments.lexers.text import TexLexer
             lxlexer = TexLexer(**self.options)
-
             codelines = 0
             latex = ''
             for match in line_re.finditer(text):
@@ -1075,10 +1146,56 @@ class LiterateHaskellLexer(Lexer):
                     latex += line
             insertions.append((len(code),
                                list(lxlexer.get_tokens_unprocessed(latex))))
-        for item in do_insertions(insertions, hslexer.get_tokens_unprocessed(code)):
+        for item in do_insertions(insertions, self.baselexer.get_tokens_unprocessed(code)):
             yield item
 
 
+class LiterateHaskellLexer(LiterateLexer):
+    """
+    For Literate Haskell (Bird-style or LaTeX) source.
+
+    Additional options accepted:
+
+    `litstyle`
+        If given, must be ``"bird"`` or ``"latex"``.  If not given, the style
+        is autodetected: if the first non-whitespace character in the source
+        is a backslash or percent character, LaTeX is assumed, else Bird.
+
+    *New in Pygments 0.9.*
+    """
+    name = 'Literate Haskell'
+    aliases = ['lhs', 'literate-haskell', 'lhaskell']
+    filenames = ['*.lhs']
+    mimetypes = ['text/x-literate-haskell']
+
+    def __init__(self, **options):
+        hslexer = HaskellLexer(**options)
+        LiterateLexer.__init__(self, hslexer, **options)
+
+
+class LiterateAgdaLexer(LiterateLexer):
+    """
+    For Literate Agda source.
+
+    Additional options accepted:
+
+    `litstyle`
+        If given, must be ``"bird"`` or ``"latex"``.  If not given, the style
+        is autodetected: if the first non-whitespace character in the source
+        is a backslash or percent character, LaTeX is assumed, else Bird.
+
+    *New in Pygments 1.7.*
+    """
+    name = 'Literate Agda'
+    aliases = ['lagda', 'literate-agda']
+    filenames = ['*.lagda']
+    mimetypes = ['text/x-literate-agda']
+
+    def __init__(self, **options):
+        agdalexer = AgdaLexer(**options)
+        LiterateLexer.__init__(self, agdalexer, litstyle='latex', **options)
+
+
 class SMLLexer(RegexLexer):
     """
     For the Standard ML language.
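With the shared base class in place, both literate variants run through the
same Bird/LaTeX splitting; a usage sketch (aliases as registered above,
assuming the lexer mapping has been rebuilt with ``make mapfiles``):

.. sourcecode:: python

    from pygments.lexers import get_lexer_by_name

    lhs = get_lexer_by_name('lhs')      # Literate Haskell, as before
    lagda = get_lexer_by_name('lagda')  # Literate Agda, new in 1.7
    # both now share LiterateLexer's get_tokens_unprocessed()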
@@ -1663,9 +1780,10 @@ class OpaLexer(RegexLexer):
     # but if you color only real keywords, you might just
     # as well not color anything
     keywords = [
-        'and', 'as', 'begin', 'css', 'database', 'db', 'do', 'else', 'end',
-        'external', 'forall', 'if', 'import', 'match', 'package', 'parser',
-        'rec', 'server', 'then', 'type', 'val', 'with', 'xml_parser',
+        'and', 'as', 'begin', 'case', 'client', 'css', 'database', 'db', 'do',
+        'else', 'end', 'external', 'forall', 'function', 'if', 'import',
+        'match', 'module', 'or', 'package', 'parser', 'rec', 'server', 'then',
+        'type', 'val', 'with', 'xml_parser',
     ]
 
     # matches both stuff and `stuff`
@@ -2399,7 +2517,7 @@ class ElixirConsoleLexer(Lexer):
 
 class KokaLexer(RegexLexer):
     """
-    Lexer for the `Koka <http://research.microsoft.com/en-us/projects/koka/>`_
+    Lexer for the `Koka <http://koka.codeplex.com>`_
     language.
 
     *New in Pygments 1.6.*
@@ -2411,7 +2529,7 @@ class KokaLexer(RegexLexer):
     mimetypes = ['text/x-koka']
 
     keywords = [
-        'infix', 'infixr', 'infixl', 'prefix', 'postfix',
+        'infix', 'infixr', 'infixl',
         'type', 'cotype', 'rectype', 'alias',
         'struct', 'con',
         'fun', 'function', 'val', 'var',
@@ -2450,7 +2568,12 @@ class KokaLexer(RegexLexer):
Loading
@@ -2450,7 +2568,12 @@ class KokaLexer(RegexLexer):
     sboundary = '(?!'+symbols+')'
 
     # name boundary: a keyword should not be followed by any of these
-    boundary = '(?![a-zA-Z0-9_\\-])'
+    boundary = '(?![\w/])'
+
+    # koka token abstractions
+    tokenType = Name.Attribute
+    tokenTypeDef = Name.Class
+    tokenConstructor = Generic.Emph
 
     # main lexer
     tokens = {
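The boundary change is the crux of this hunk: new-style Koka qualified names use '/' (as in std/core), so a keyword immediately followed by '/' must be left to the name rules. A sketch of the difference, with the lookaheads copied from the two sides of the diff:

import re

old = re.compile(r'val(?![a-zA-Z0-9_\-])')
new = re.compile(r'val(?![\w/])')

assert old.match('val/length')       # old boundary fires inside a qualified name
assert not new.match('val/length')   # new boundary defers to the name rules
assert new.match('val x = 1')        # a real keyword still matches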
@@ -2458,41 +2581,51 @@ class KokaLexer(RegexLexer):
             include('whitespace'),
 
             # go into type mode
-            (r'::?' + sboundary, Keyword.Type, 'type'),
-            (r'alias' + boundary, Keyword, 'alias-type'),
-            (r'struct' + boundary, Keyword, 'struct-type'),
-            (r'(%s)' % '|'.join(typeStartKeywords) + boundary, Keyword, 'type'),
+            (r'::?' + sboundary, tokenType, 'type'),
+            (r'(alias)(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
+             'alias-type'),
+            (r'(struct)(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
+             'struct-type'),
+            ((r'(%s)' % '|'.join(typeStartKeywords)) +
+             r'(\s+)([a-z]\w*)?', bygroups(Keyword, Text, tokenTypeDef),
+             'type'),
 
             # special sequences of tokens (we use ?: for non-capturing group as
             # required by 'bygroups')
-            (r'(module)(\s*)((?:interface)?)(\s*)'
-             r'((?:[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*\.)*'
-             r'[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)',
-             bygroups(Keyword, Text, Keyword, Text, Name.Namespace)),
-            (r'(import)(\s+)((?:[a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*\.)*[a-z]'
-             r'(?:[a-zA-Z0-9_]|\-[a-zA-Z])*)(\s*)((?:as)?)'
-             r'((?:[A-Z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)?)',
-             bygroups(Keyword, Text, Name.Namespace, Text, Keyword,
-                      Name.Namespace)),
+            (r'(module)(\s+)(interface\s+)?((?:[a-z]\w*/)*[a-z]\w*)',
+             bygroups(Keyword, Text, Keyword, Name.Namespace)),
+            (r'(import)(\s+)((?:[a-z]\w*/)*[a-z]\w*)'
+             r'(?:(\s*)(=)(\s*)((?:qualified\s*)?)'
+             r'((?:[a-z]\w*/)*[a-z]\w*))?',
+             bygroups(Keyword, Text, Name.Namespace, Text, Keyword, Text,
+                      Keyword, Name.Namespace)),
+
+            (r'(^(?:(?:public|private)\s*)?(?:function|fun|val))'
+             r'(\s+)([a-z]\w*|\((?:' + symbols + r'|/)\))',
+             bygroups(Keyword, Text, Name.Function)),
+            (r'(^(?:(?:public|private)\s*)?external)(\s+)(inline\s+)?'
+             r'([a-z]\w*|\((?:' + symbols + r'|/)\))',
+             bygroups(Keyword, Text, Keyword, Name.Function)),
 
             # keywords
             (r'(%s)' % '|'.join(typekeywords) + boundary, Keyword.Type),
             (r'(%s)' % '|'.join(keywords) + boundary, Keyword),
             (r'(%s)' % '|'.join(builtin) + boundary, Keyword.Pseudo),
-            (r'::|:=|\->|[=\.:]' + sboundary, Keyword),
-            (r'\-' + sboundary, Generic.Strong),
+            (r'::?|:=|\->|[=\.]' + sboundary, Keyword),
 
             # names
-            (r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?=\.)', Name.Namespace),
-            (r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?!\.)', Name.Class),
-            (r'[a-z]([a-zA-Z0-9_]|\-[a-zA-Z])*', Name),
-            (r'_([a-zA-Z0-9_]|\-[a-zA-Z])*', Name.Variable),
+            (r'((?:[a-z]\w*/)*)([A-Z]\w*)',
+             bygroups(Name.Namespace, tokenConstructor)),
+            (r'((?:[a-z]\w*/)*)([a-z]\w*)', bygroups(Name.Namespace, Name)),
+            (r'((?:[a-z]\w*/)*)(\((?:' + symbols + r'|/)\))',
+             bygroups(Name.Namespace, Name)),
+            (r'_\w*', Name.Variable),
 
             # literal string
             (r'@"', String.Double, 'litstring'),
 
             # operators
-            (symbols, Operator),
+            (symbols + "|/(?![\*/])", Operator),
             (r'`', Operator),
             (r'[\{\}\(\)\[\];,]', Punctuation),
@@ -2519,17 +2652,17 @@ class KokaLexer(RegexLexer):
   
         # type started by colon
         'type': [
-            (r'[\(\[<]', Keyword.Type, 'type-nested'),
+            (r'[\(\[<]', tokenType, 'type-nested'),
             include('type-content')
         ],
 
         # type nested in brackets: can contain parameters, comma etc.
         'type-nested': [
-            (r'[\)\]>]', Keyword.Type, '#pop'),
-            (r'[\(\[<]', Keyword.Type, 'type-nested'),
-            (r',', Keyword.Type),
-            (r'([a-z](?:[a-zA-Z0-9_]|\-[a-zA-Z])*)(\s*)(:)(?!:)',
-             bygroups(Name.Variable,Text,Keyword.Type)), # parameter name
+            (r'[\)\]>]', tokenType, '#pop'),
+            (r'[\(\[<]', tokenType, 'type-nested'),
+            (r',', tokenType),
+            (r'([a-z]\w*)(\s*)(:)(?!:)',
+             bygroups(Name, Text, tokenType)), # parameter name
             include('type-content')
         ],
@@ -2538,23 +2671,23 @@ class KokaLexer(RegexLexer):
             include('whitespace'),
 
             # keywords
-            (r'(%s)' % '|'.join(typekeywords) + boundary, Keyword.Type),
+            (r'(%s)' % '|'.join(typekeywords) + boundary, Keyword),
             (r'(?=((%s)' % '|'.join(keywords) + boundary + '))',
              Keyword, '#pop'), # need to match because names overlap...
 
             # kinds
-            (r'[EPH]' + boundary, Keyword.Type),
-            (r'[*!]', Keyword.Type),
+            (r'[EPHVX]' + boundary, tokenType),
 
             # type names
-            (r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?=\.)', Name.Namespace),
-            (r'[A-Z]([a-zA-Z0-9_]|\-[a-zA-Z])*(?!\.)', Name.Class),
-            (r'[a-z][0-9]*(?![a-zA-Z_\-])', Keyword.Type), # Generic.Emph
-            (r'_([a-zA-Z0-9_]|\-[a-zA-Z])*', Keyword.Type), # Generic.Emph
-            (r'[a-z]([a-zA-Z0-9_]|\-[a-zA-Z])*', Keyword.Type),
+            (r'[a-z][0-9]*(?![\w/])', tokenType ),
+            (r'_\w*', tokenType.Variable), # Generic.Emph
+            (r'((?:[a-z]\w*/)*)([A-Z]\w*)',
+             bygroups(Name.Namespace, tokenType)),
+            (r'((?:[a-z]\w*/)*)([a-z]\w+)',
+             bygroups(Name.Namespace, tokenType)),
 
             # type keyword operators
-            (r'::|\->|[\.:|]', Keyword.Type),
+            (r'::|\->|[\.:|]', tokenType),
 
             #catchall
             (r'', Text, '#pop')
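One subtlety worth calling out: the keyword rule in this state wraps the whole alternation in (?=...), a zero-width lookahead, so the lexer pops back to the enclosing state without consuming the keyword, which is then re-lexed by the rules there. A sketch of why that works:

import re

m = re.match(r'(?=(fun(?![\w/])))', 'fun f()')
assert m.group(1) == 'fun'  # the lookahead saw the keyword...
assert m.end() == 0         # ...but consumed nothing, so the outer state gets it next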
@@ -2562,6 +2695,7 @@ class KokaLexer(RegexLexer):
   
         # comments and literals
         'whitespace': [
+            (r'\n\s*#.*$', Comment.Preproc),
             (r'\s+', Text),
             (r'/\*', Comment.Multiline, 'comment'),
             (r'//.*$', Comment.Single)
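The added first rule tokenizes '#'-lines (presumably preprocessor-style line directives) as Comment.Preproc; anchoring on the preceding newline keeps a mid-line '#' from matching. A quick check of the pattern (RegexLexer compiles its rules with re.MULTILINE):

import re

line_re = re.compile(r'\n\s*#.*$', re.MULTILINE)
assert line_re.match('\n# 1 "file.kk"')
assert not line_re.match('x # not at the start of a line')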
@@ -2588,11 +2722,10 @@ class KokaLexer(RegexLexer):
             (r'[\'\n]', String.Char, '#pop'),
         ],
         'escape-sequence': [
-            (r'\\[abfnrtv0\\\"\'\?]', String.Escape),
+            (r'\\[nrt\\\"\']', String.Escape),
             (r'\\x[0-9a-fA-F]{2}', String.Escape),
             (r'\\u[0-9a-fA-F]{4}', String.Escape),
             # Yes, \U literals are 6 hex digits.
             (r'\\U[0-9a-fA-F]{6}', String.Escape)
         ]
     }
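End-to-end, the updated lexer can be smoke-tested like any other Pygments lexer. A sketch (assumes a checkout with this patch applied; the Koka snippet is invented for illustration):

from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers.functional import KokaLexer

code = 'fun main() {\n  println("hi \\u0041")\n}\n'
print(highlight(code, KokaLexer(), TerminalFormatter()))

The trimmed escape list above means \a or \v in a Koka string no longer lexes as String.Escape, while \u and the six-digit \U form still do.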
@@ -888,11 +888,11 @@ class CeylonLexer(RegexLexer):
             (r'[^\S\n]+', Text),
             (r'//.*?\n', Comment.Single),
             (r'/\*.*?\*/', Comment.Multiline),
-            (r'(variable|shared|abstract|doc|by|formal|actual)',
+            (r'(variable|shared|abstract|doc|by|formal|actual|late|native)',
              Name.Decorator),
             (r'(break|case|catch|continue|default|else|finally|for|in|'
-             r'variable|if|return|switch|this|throw|try|while|is|exists|'
-             r'nonempty|then|outer)\b', Keyword),
+             r'variable|if|return|switch|this|throw|try|while|is|exists|dynamic|'
+             r'nonempty|then|outer|assert)\b', Keyword),
             (r'(abstracts|extends|satisfies|adapts|'
              r'super|given|of|out|assign|'
              r'transient|volatile)\b', Keyword.Declaration),
@@ -900,16 +900,16 @@ class CeylonLexer(RegexLexer):
              Keyword.Type),
             (r'(package)(\s+)', bygroups(Keyword.Namespace, Text)),
             (r'(true|false|null)\b', Keyword.Constant),
-            (r'(class|interface|object)(\s+)',
+            (r'(class|interface|object|alias)(\s+)',
              bygroups(Keyword.Declaration, Text), 'class'),
             (r'(import)(\s+)', bygroups(Keyword.Namespace, Text), 'import'),
             (r'"(\\\\|\\"|[^"])*"', String),
-            (r"'\\.'|'[^\\]'|'\\u[0-9a-fA-F]{4}'", String.Quoted),
-            (r"`\\.`|`[^\\]`|`\\u[0-9a-fA-F]{4}`", String.Char),
-            (r'(\.)([a-zA-Z_][a-zA-Z0-9_]*)',
+            (r"'\\.'|'[^\\]'|'\\\{#[0-9a-fA-F]{4}\}'", String.Char),
+            (r'".*``.*``.*"', String.Interpol),
+            (r'(\.)([a-z_][a-zA-Z0-9_]*)',
              bygroups(Operator, Name.Attribute)),
             (r'[a-zA-Z_][a-zA-Z0-9_]*:', Name.Label),
-            (r'[a-zA-Z_\$][a-zA-Z0-9_]*', Name),
+            (r'[a-zA-Z_][a-zA-Z0-9_]*', Name),
             (r'[~\^\*!%&\[\]\(\)\{\}<>\|+=:;,./?-]', Operator),
             (r'\d{1,3}(_\d{3})+\.\d{1,3}(_\d{3})+[kMGTPmunpf]?', Number.Float),
             (r'\d{1,3}(_\d{3})+\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?',
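The two replaced string rules track Ceylon's newer literal syntax: character escapes are written \{#XXXX} instead of \uXXXX, and double-quoted strings containing double-backtick spans lex as interpolations. A sketch against sample literals (samples invented):

import re

char_re = re.compile(r"'\\.'|'[^\\]'|'\\\{#[0-9a-fA-F]{4}\}'")
assert char_re.fullmatch("'a'")
assert char_re.fullmatch(r"'\{#0041}'")

interp_re = re.compile(r'".*``.*``.*"')
assert interp_re.fullmatch('"Hello, ``name``!"')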
@@ -917,16 +917,19 @@ class CeylonLexer(RegexLexer):
             (r'[0-9][0-9]*\.\d{1,3}(_\d{3})+[kMGTPmunpf]?', Number.Float),
             (r'[0-9][0-9]*\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?',
              Number.Float),
-            (r'0x[0-9a-fA-F]+', Number.Hex),
+            (r'#([0-9a-fA-F]{4})(_[0-9a-fA-F]{4})+', Number.Hex),
+            (r'#[0-9a-fA-F]+', Number.Hex),
+            (r'\$([01]{4})(_[01]{4})+', Number.Integer),
+            (r'\$[01]+', Number.Integer),
             (r'\d{1,3}(_\d{3})+[kMGTP]?', Number.Integer),
             (r'[0-9]+[kMGTP]?', Number.Integer),
             (r'\n', Text)
         ],
         'class': [
-            (r'[a-zA-Z_][a-zA-Z0-9_]*', Name.Class, '#pop')
+            (r'[A-Za-z_][a-zA-Z0-9_]*', Name.Class, '#pop')
         ],
         'import': [
-            (r'[a-zA-Z0-9_.]+\w+ \{([a-zA-Z,]+|\.\.\.)\}',
+            (r'[a-z][a-zA-Z0-9_.]*',
              Name.Namespace, '#pop')
         ],
     }
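The new numeric rules cover Ceylon's #hex and $binary literals, with underscore grouping in fixed-width chunks, alongside the existing decimal forms and their kMGTP magnitude suffixes. A sketch (samples invented):

import re

assert re.fullmatch(r'#([0-9a-fA-F]{4})(_[0-9a-fA-F]{4})+', '#FFFF_0000')
assert re.fullmatch(r'\$([01]{4})(_[01]{4})+', '$1010_0101')
assert re.fullmatch(r'\d{1,3}(_\d{3})+[kMGTP]?', '1_000_000k')

Note the ordering: the grouped forms are tried before the plain #hex/$binary fallbacks, since the fallbacks would otherwise stop at the first underscore.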