| author | mitchell <unknown> | 2020-04-25 16:26:31 -0400 |
|---|---|---|
| committer | mitchell <unknown> | 2020-04-25 16:26:31 -0400 |
| commit | fad15f79b1230b3076be515d6894c8919562809b (patch) | |
| tree | 72c848ef02c3331de5ca54eff7adaea3a9a6fb88 /lexlua/glsl.lua | |
| parent | 1fd02a367dec125c0b49dd9246a0928433866b96 (diff) | |
| download | scintilla-mirror-fad15f79b1230b3076be515d6894c8919562809b.tar.gz | |
Reformatted the Lua LPeg lexers and added new convenience functions and a new pattern.
`lexer.range()` replaces `lexer.delimited_range()` and `lexer.nested_pair()`.
`lexer.to_eol()` replaces `patt * lexer.nonnewline^0` constructs.
`lexer.number` replaces `lexer.float + lexer.integer`.
Also added unit tests for lexer functions.
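
As a rough illustration of how the new calls slot into a lexer, here is a minimal sketch against a hypothetical `example` lexer; the rule names, the `//` comment prefix, and the quoting choices are illustrative assumptions, not part of this commit's diff:

```lua
-- Minimal sketch: a hypothetical lexer using the new convenience API.
local lexer = require('lexer')
local token = lexer.token

local lex = lexer.new('example') -- placeholder lexer name

-- lexer.range() builds delimited ranges, replacing lexer.delimited_range();
-- here, single- and double-quoted strings.
local sq_str = lexer.range("'")
local dq_str = lexer.range('"')
lex:add_rule('string', token(lexer.STRING, sq_str + dq_str))

-- lexer.to_eol() matches from a prefix to the end of the line, replacing
-- `patt * lexer.nonnewline^0` constructs.
lex:add_rule('comment', token(lexer.COMMENT, lexer.to_eol('//')))

-- lexer.number covers what `lexer.float + lexer.integer` used to.
lex:add_rule('number', token(lexer.NUMBER, lexer.number))

return lex
```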
Diffstat (limited to 'lexlua/glsl.lua')
| -rw-r--r-- | lexlua/glsl.lua | 24 |
1 file changed, 9 insertions, 15 deletions
```diff
diff --git a/lexlua/glsl.lua b/lexlua/glsl.lua
index 31440f0a6..d6bef2b44 100644
--- a/lexlua/glsl.lua
+++ b/lexlua/glsl.lua
@@ -19,21 +19,15 @@ lex:modify_rule('keyword', token(lexer.KEYWORD, word_match[[
 ]]) + lex:get_rule('keyword'))
 
 -- Types.
-lex:modify_rule('type',
-  token(lexer.TYPE,
-    S('bdiu')^-1 * 'vec' * R('24') +
-    P('d')^-1 * 'mat' * R('24') * ('x' * R('24')^-1) +
-    S('iu')^-1 * 'sampler' * R('13') * 'D' +
-    'sampler' * R('12') * 'D' * P('Array')^-1 * 'Shadow' +
-    S('iu')^-1 * 'sampler' * (R('12') * 'DArray' +
-      word_match[[
-        Cube 2DRect Buffer 2DMS 2DMSArray 2DMSCubeArray
-      ]]) +
-    word_match[[
-      samplerCubeShadow sampler2DRectShadow
-      samplerCubeArrayShadow
-    ]]) +
-  lex:get_rule('type') +
+lex:modify_rule('type', token(lexer.TYPE, S('bdiu')^-1 * 'vec' * R('24') +
+  P('d')^-1 * 'mat' * R('24') * ('x' * R('24')^-1) +
+  S('iu')^-1 * 'sampler' * R('13') * 'D' +
+  'sampler' * R('12') * 'D' * P('Array')^-1 * 'Shadow' +
+  S('iu')^-1 * 'sampler' * (R('12') * 'DArray' + word_match[[
+    Cube 2DRect Buffer 2DMS 2DMSArray 2DMSCubeArray
+  ]]) +
+  word_match[[samplerCubeShadow sampler2DRectShadow samplerCubeArrayShadow]]) +
+  lex:get_rule('type') +
 
 -- Functions.
 token(lexer.FUNCTION, word_match[[
```
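The type rule above is built from stock LPeg combinators (`P`, `R`, `S`), which these lexers import from LPeg. The following standalone check of just the vector/matrix alternatives assumes only the `lpeg` module from LuaRocks; note that it makes the whole `x` suffix optional (`(P('x') * R('24'))^-1`), a simplified reading of the rule above, so it also accepts plain `mat2`:

```lua
-- Standalone sanity check of the vector/matrix patterns using plain LPeg.
local lpeg = require('lpeg')
local P, R, S = lpeg.P, lpeg.R, lpeg.S

-- [bdiu]?vec[2-4], e.g. vec2, bvec3, ivec4, uvec2, dvec3.
local vec = S('bdiu')^-1 * P('vec') * R('24')
-- d?mat[2-4](x[2-4])?, e.g. mat3, dmat4x2.
local mat = P('d')^-1 * P('mat') * R('24') * (P('x') * R('24'))^-1
-- Anchor to end of input so partial matches do not pass.
local type_patt = (vec + mat) * -1

assert(type_patt:match('ivec3'))    -- integer vector
assert(type_patt:match('dmat4x2'))  -- double-precision 4x2 matrix
assert(not type_patt:match('vec5')) -- out-of-range dimension
```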
