/* parser generated by jison 0.6.1-215 */

/*
 * Returns a Parser object of the following structure:
 *
 *     Parser: {
 *         yy: {}  The so-called "shared state" or rather the *source* of it;
 *                 the real "shared state" `yy` passed around to
 *                 the rule actions, etc. is a derivative/copy of this one,
 *                 not a direct reference!
 *     }
 *
 *     Parser.prototype: {
 *         yy: {},
 *         EOF: 1,
 *         TERROR: 2,
 *
 *         trace: function(errorMessage, ...),
 *
 *         JisonParserError: function(msg, hash),
 *
 *         quoteName: function(name),
 *             Helper function which can be overridden by user code later on: put suitable
 *             quotes around literal IDs in a description string.
 *
 *         originalQuoteName: function(name),
 *             The basic quoteName handler provided by JISON.
 *             `cleanupAfterParse()` will clean up and reset `quoteName()` to reference this function
 *             at the end of the `parse()`.
 *
 *         describeSymbol: function(symbol),
 *             Return a more-or-less human-readable description of the given symbol, when
 *             available, or the symbol itself, serving as its own 'description' for lack
 *             of something better to serve up.
 *
 *             Return NULL when the symbol is unknown to the parser.
 *
 *         symbols_: {associative list: name ==> number},
 *         terminals_: {associative list: number ==> name},
 *         nonterminals: {associative list: rule-name ==> {associative list: number ==> rule-alt}},
 *         terminal_descriptions_: (if there are any) {associative list: number ==> description},
 *         productions_: [...],
 *
 *         performAction: function parser__performAction(yytext, yyleng, yylineno, yyloc, yystate, yysp, yyvstack, yylstack, yystack, yysstack),
 *
 *             The function parameters and `this` have the following value/meaning:
 *             - `this`    : reference to the `yyval` internal object, which has members (`$` and `_$`)
 *                           to store/reference the rule value `$$` and location info `@$`.
 *
 *                 One important thing to note about `this` a.k.a. `yyval`: every *reduce* action gets
 *                 to see the same object via the `this` reference, i.e. if you wish to carry custom
 *                 data from one reduce action through to the next within a single parse run, then you
 *                 may get nasty and use `yyval` a.k.a. `this` for storing your own semi-permanent data.
 *
 *                 `this.yy` is a direct reference to the `yy` shared state object.
 *
 *                 `%parse-param`-specified additional `parse()` arguments have been added to this `yy`
 *                 object at `parse()` start and are therefore available to the action code via the
 *                 same named `yy.xxxx` attributes (where `xxxx` represents an identifier name from
 *                 the `%parse-param` list).
 *
 *             - `yytext`  : reference to the lexer value which belongs to the last lexer token used
 *                           to match this rule. This is *not* the look-ahead token, but the last token
 *                           that's actually part of this rule.
 *
 *                 Formulated another way, `yytext` is the value of the token immediately preceding
 *                 the current look-ahead token.
 *                 Caveats apply for rules which don't require look-ahead, such as epsilon rules.
 *
 *             - `yyleng`  : ditto as `yytext`, only now for the lexer.yyleng value.
 *
 *             - `yylineno`: ditto as `yytext`, only now for the lexer.yylineno value.
 *
 *             - `yyloc`   : ditto as `yytext`, only now for the lexer.yylloc lexer token location info.
 *
 *                 WARNING: since jison 0.4.18-186 this entry may be NULL/UNDEFINED instead
 *                 of an empty object when no suitable location info can be provided.
 *
 *             - `yystate` : the current parser state number, used internally for dispatching and
 *                           executing the action code chunk matching the rule currently being reduced.
 *
 *             - `yysp`    : the current state stack position (a.k.a. 'stack pointer')
 *
 *                 This one comes in handy when you are going to do advanced things to the parser
 *                 stacks, all of which are accessible from your action code (see the next entries below).
 *
 *                 Also note that you can access this and other stack index values using the new double-hash
 *                 syntax, i.e. `##$ === ##0 === yysp`, while `##1` is the stack index for all things
 *                 related to the first rule term, just like you have `$1`, `@1` and `#1`.
 *                 This is made available to write very advanced grammar action rules, e.g. when you want
 *                 to investigate the parse state stack in your action code, which would, for example,
 *                 be relevant when you wish to implement error diagnostics and reporting schemes similar
 *                 to the work described here:
 *
 *                 + Pottier, F., 2016. Reachability and error diagnosis in LR(1) automata.
 *                   In Journées Francophones des Langages Applicatifs.
 *
 *                 + Jeffery, C.L., 2003. Generating LR syntax error messages from examples.
 *                   ACM Transactions on Programming Languages and Systems (TOPLAS), 25(5), pp.631–640.
 *
 *             - `yyrulelength`: the current rule's term count, i.e. the number of entries occupied on the stack.
 *
 *                 This one comes in handy when you are going to do advanced things to the parser
 *                 stacks, all of which are accessible from your action code (see the next entries below).
 *
 *             - `yyvstack`: reference to the parser value stack. Also accessed via the `$1` etc.
 *                           constructs.
 *
 *             - `yylstack`: reference to the parser token location stack. Also accessed via
 *                           the `@1` etc. constructs.
 *
 *                 WARNING: since jison 0.4.18-186 this array MAY contain slots which are
 *                 UNDEFINED rather than an empty (location) object, when the lexer/parser
 *                 action code did not provide a suitable location info object when such a
 *                 slot was filled!
 *
 *             - `yystack` : reference to the parser token id stack. Also accessed via the
 *                           `#1` etc. constructs.
 *
 *                 Note: this is a bit of a **white lie** as we can statically decode any `#n` reference to
 *                 its numeric token id value, hence that code wouldn't need the `yystack` but *you* might
 *                 want to access this array for your own purposes, such as error analysis as mentioned above!
 *
 *                 Note that this stack stores the current stack of *tokens*, that is the sequence of
 *                 already parsed=reduced *nonterminals* (tokens representing rules) and *terminals*
 *                 (lexer tokens *shifted* onto the stack until the rule they belong to is found and
 *                 *reduced*).
 *
 *             - `yysstack`: reference to the parser state stack. This one carries the internal parser
 *                           *states* such as the one in `yystate`, which are used to represent
 *                           the parser state machine in the *parse table*. *Very* *internal* stuff,
 *                           what can I say? If you access this one, you're clearly doing wicked things.
 *
 *             - `...`     : the extra arguments you specified in the `%parse-param` statement in your
 *                           grammar definition file.
 *
 *         table: [...],
 *             State transition table
 *             ----------------------
 *
 *             index levels are:
 *             - `state`  --> hash table
 *             - `symbol` --> action (number or array)
 *
 *             If the `action` is an array, these are the elements' meaning:
 *             - index [0]: 1 = shift, 2 = reduce, 3 = accept
 *             - index [1]: GOTO `state`
 *
 *             If the `action` is a number, it is the GOTO `state`
 *
 *         defaultActions: {...},
 *
 *         parseError: function(str, hash, ExceptionClass),
 *         yyError: function(str, ...),
 *         yyRecovering: function(),
 *         yyErrOk: function(),
 *         yyClearIn: function(),
 *
 *         constructParseErrorInfo: function(error_message, exception_object, expected_token_set, is_recoverable),
 *             Helper function **which will be set up during the first invocation of the `parse()` method**.
 *             Produces a new errorInfo 'hash object' which can be passed into `parseError()`.
 *             See its use in this parser kernel in many places; example usage:
 *
 *                 var infoObj = parser.constructParseErrorInfo('fail!', null,
 *                     parser.collect_expected_token_set(state), true);
 *                 var retVal = parser.parseError(infoObj.errStr, infoObj, parser.JisonParserError);
 *
 *         originalParseError: function(str, hash, ExceptionClass),
 *             The basic `parseError` handler provided by JISON.
 *             `cleanupAfterParse()` will clean up and reset `parseError()` to reference this function
 *             at the end of the `parse()`.
 *
 *         options: { ... parser %options ... },
 *
 *         parse: function(input[, args...]),
 *             Parse the given `input` and return the parsed value (or `true` when none was provided by
 *             the root action, in which case the parser is acting as a *matcher*).
 *             You MAY use the additional `args...` parameters as per `%parse-param` spec of this grammar:
 *             these extra `args...` are added verbatim to the `yy` object reference as member variables.
 *
 *             WARNING:
 *             Parser's additional `args...` parameters (via `%parse-param`) MAY conflict with
 *             any attributes already added to `yy` by the jison run-time;
 *             when such a collision is detected an exception is thrown to prevent the generated run-time
 *             from silently accepting this confusing and potentially hazardous situation!
 *
 *             The lexer MAY add its own set of additional parameters (via the `%parse-param` line in
 *             the lexer section of the grammar spec): these will be inserted in the `yy` shared state
 *             object and any collision with those will be reported by the lexer via a thrown exception.
 *
 *         cleanupAfterParse: function(resultValue, invoke_post_methods, do_not_nuke_errorinfos),
 *             Helper function **which will be set up during the first invocation of the `parse()` method**.
 *             This helper API is invoked at the end of the `parse()` call, unless an exception was thrown
 *             and `%options no-try-catch` has been defined for this grammar: in that case this helper MAY
 *             be invoked by calling user code to ensure the `post_parse` callbacks are invoked and
 *             the internal parser gets properly garbage collected under these particular circumstances.
 *
 *         yyMergeLocationInfo: function(first_index, last_index, first_yylloc, last_yylloc, dont_look_back),
 *             Helper function **which will be set up during the first invocation of the `parse()` method**.
 *             This helper API can be invoked to calculate a spanning `yylloc` location info object.
 *
 *             Note: %epsilon rules MAY specify no `first_index` and `first_yylloc`, in which case
 *             this function will attempt to obtain a suitable location marker by inspecting the location stack
 *             backwards.
 *
 *             For more info see the documentation comment further below, immediately above this function's
 *             implementation.
 *
 *         lexer: {
 *             yy: {...},  A reference to the so-called "shared state" `yy` once
 *                         received via a call to the `.setInput(input, yy)` lexer API.
 *             EOF: 1,
 *             ERROR: 2,
 *             JisonLexerError: function(msg, hash),
 *             parseError: function(str, hash, ExceptionClass),
 *             setInput: function(input, [yy]),
 *             input: function(),
 *             unput: function(str),
 *             more: function(),
 *             reject: function(),
 *             less: function(n),
 *             pastInput: function(n),
 *             upcomingInput: function(n),
 *             showPosition: function(),
 *             test_match: function(regex_match_array, rule_index, ...),
 *             next: function(...),
 *             lex: function(...),
 *             begin: function(condition),
 *             pushState: function(condition),
 *             popState: function(),
 *             topState: function(),
 *             _currentRules: function(),
 *             stateStackSize: function(),
 *             cleanupAfterLex: function()
 *
 *             options: { ... lexer %options ... },
 *
 *             performAction: function(yy, yy_, $avoiding_name_collisions, YY_START, ...),
 *             rules: [...],
 *             conditions: {associative list: name ==> set},
 *         }
 *     }
 *
 *
 *     token location info (@$, _$, etc.): {
 *         first_line: n,
 *         last_line: n,
 *         first_column: n,
 *         last_column: n,
 *         range: [start_number, end_number]
 *             (where the numbers are indexes into the input string, zero-based)
 *     }
 *
 * ---
 *
 * The `parseError` function receives a 'hash' object with these members for lexer and
 * parser errors:
 *
 *     {
 *         text: (matched text)
 *         token: (the produced terminal token, if any)
 *         token_id: (the produced terminal token numeric ID, if any)
 *         line: (yylineno)
 *         loc: (yylloc)
 *     }
 *
 * parser (grammar) errors will also provide these additional members:
 *
 *     {
 *         expected: (array describing the set of expected tokens;
 *             may be UNDEFINED when we cannot easily produce such a set)
 *         state: (integer (or array when the table includes grammar collisions);
 *             represents the current internal state of the parser kernel.
 *             can, for example, be used to pass to the `collect_expected_token_set()`
 *             API to obtain the expected token set)
 *         action: (integer; represents the current internal action which will be executed)
 *         new_state: (integer; represents the next/planned internal state, once the current
 *             action has executed)
 *         recoverable: (boolean: TRUE when the parser MAY have an error recovery rule
 *             available for this particular error)
 *         state_stack: (array: the current parser LALR/LR internal state stack; this can be used,
 *             for instance, for advanced error analysis and reporting)
 *         value_stack: (array: the current parser LALR/LR internal `$$` value stack; this can be used,
 *             for instance, for advanced error analysis and reporting)
 *         location_stack: (array: the current parser LALR/LR internal location stack; this can be used,
 *             for instance, for advanced error analysis and reporting)
 *         yy: (object: the current parser internal "shared state" `yy`
 *             as is also available in the rule actions; this can be used,
 *             for instance, for advanced error analysis and reporting)
 *         lexer: (reference to the current lexer instance used by the parser)
 *         parser: (reference to the current parser instance)
 *     }
 *
 * while `this` will reference the current parser instance.
 *
 * When `parseError` is invoked by the lexer, `this` will still reference the related *parser*
 * instance, while these additional `hash` fields will also be provided:
 *
 *     {
 *         lexer: (reference to the current lexer instance which reported the error)
 *     }
 *
 * When `parseError` is invoked by the parser due to a **JavaScript exception** being fired
 * from either the parser or lexer, `this` will still reference the related *parser*
 * instance, while these additional `hash` fields will also be provided:
 *
 *     {
 *         exception: (reference to the exception thrown)
 *     }
 *
 * Please do note that in the latter situation, the `expected` field will be omitted as
 * this type of failure is assumed not to be due to *parse errors* but rather due to user
 * action code in either parser or lexer failing unexpectedly.
 *
 * ---
 *
 * You can specify parser options by setting / modifying the `.yy` object of your Parser instance.
 * These options are available:
 *
 * ### options which are global for all parser instances
 *
 *     Parser.pre_parse: function(yy)
 *         optional: you can specify a pre_parse() function in the chunk following
 *         the grammar, i.e. after the last `%%`.
 *     Parser.post_parse: function(yy, retval, parseInfo) { return retval; }
 *         optional: you can specify a post_parse() function in the chunk following
 *         the grammar, i.e. after the last `%%`. When it does not return any value,
 *         the parser will return the original `retval`.
 *
 * ### options which can be set up per parser instance
 *
 *     yy: {
 *         pre_parse: function(yy)
 *             optional: is invoked before the parse cycle starts (and before the first
 *             invocation of `lex()`) but immediately after the invocation of
 *             `parser.pre_parse()`.
 *         post_parse: function(yy, retval, parseInfo) { return retval; }
 *             optional: is invoked when the parse terminates due to success ('accept')
 *             or failure (even when exceptions are thrown).
 *             `retval` contains the return value to be produced by `Parser.parse()`;
 *             this function can override the return value by returning another.
 *             When it does not return any value, the parser will return the original
 *             `retval`.
 *             This function is invoked immediately before `parser.post_parse()`.
 *
 *         parseError: function(str, hash, ExceptionClass)
 *             optional: overrides the default `parseError` function.
 *         quoteName: function(name),
 *             optional: overrides the default `quoteName` function.
 *     }
 *
 *     parser.lexer.options: {
 *         pre_lex: function()
 *             optional: is invoked before the lexer is invoked to produce another token.
 *             `this` refers to the Lexer object.
 *         post_lex: function(token) { return token; }
 *             optional: is invoked when the lexer has produced a token `token`;
 *             this function can override the returned token value by returning another.
 *             When it does not return any (truthy) value, the lexer will return
 *             the original `token`.
 *             `this` refers to the Lexer object.
 *
 *         ranges: boolean
 *             optional: `true` ==> token location info will include a .range[] member.
 *         flex: boolean
 *             optional: `true` ==> flex-like lexing behaviour where the rules are tested
 *             exhaustively to find the longest match.
 *         backtrack_lexer: boolean
 *             optional: `true` ==> lexer regexes are tested in order and, for each matching
 *             regex, the action code is invoked; the lexer terminates the scan when a token
 *             is returned by the action code.
 *         xregexp: boolean
 *             optional: `true` ==> lexer rule regexes are "extended regex format" requiring the
 *             `XRegExp` library. When this `%option` has not been specified at compile time, all lexer
 *             rule regexes have been written as standard JavaScript RegExp expressions.
 *     }
 */

var parser = (function () {

// See also:
// http://stackoverflow.com/questions/1382107/whats-a-good-way-to-extend-error-in-javascript/#35881508
// but we keep the prototype.constructor and prototype.name assignment lines too for compatibility
// with userland code which might access the derived class in a 'classic' way.
function JisonParserError(msg, hash) {
    Object.defineProperty(this, 'name', {
        enumerable: false,
        writable: false,
        value: 'JisonParserError'
    });

    if (msg == null) msg = '???';

    Object.defineProperty(this, 'message', {
        enumerable: false,
        writable: true,
        value: msg
    });

    this.hash = hash;

    var stacktrace;
    if (hash && hash.exception instanceof Error) {
        var ex2 = hash.exception;
        this.message = ex2.message || msg;
        stacktrace = ex2.stack;
    }
    if (!stacktrace) {
        if (Error.hasOwnProperty('captureStackTrace')) { // V8/Chrome engine
            Error.captureStackTrace(this, this.constructor);
        } else {
            stacktrace = (new Error(msg)).stack;
        }
    }
    if (stacktrace) {
        Object.defineProperty(this, 'stack', {
            enumerable: false,
            writable: false,
            value: stacktrace
        });
    }
}

if (typeof Object.setPrototypeOf === 'function') {
    Object.setPrototypeOf(JisonParserError.prototype, Error.prototype);
} else {
    JisonParserError.prototype = Object.create(Error.prototype);
}
JisonParserError.prototype.constructor = JisonParserError;
JisonParserError.prototype.name = 'JisonParserError';
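
/*
 * Illustrative note (not part of the generated kernel): thanks to the prototype
 * wiring above, the subclass behaves like a native Error in userland code:
 *
 *     try {
 *         throw new JisonParserError('unexpected token', { line: 3 });
 *     } catch (e) {
 *         e instanceof JisonParserError;   // true
 *         e instanceof Error;              // true
 *         e.name;                          // 'JisonParserError'
 *         e.hash.line;                     // 3
 *     }
 */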


// helper: reconstruct the productions[] table
function bp(s) {
    var rv = [];
    var p = s.pop;
    var r = s.rule;
    for (var i = 0, l = p.length; i < l; i++) {
        rv.push([
            p[i],
            r[i]
        ]);
    }
    return rv;
}
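
/*
 * Illustrative example (hand-made input, not this grammar's actual tables):
 * `bp()` zips the parallel `pop` and `rule` arrays into the productions table
 * of [nonterminal-id, rule-length] pairs:
 *
 *     bp({ pop: [27, 28, 28], rule: [2, 4, 3] })
 *     // → [[27, 2], [28, 4], [28, 3]]
 */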


// helper: reconstruct the defaultActions[] table
function bda(s) {
    var rv = {};
    var d = s.idx;
    var g = s.goto;
    for (var i = 0, l = d.length; i < l; i++) {
        var j = d[i];
        rv[j] = g[i];
    }
    return rv;
}
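
/*
 * Illustrative example (hand-made input): `bda()` zips the `idx` and `goto`
 * arrays into a sparse state ==> default-action lookup table:
 *
 *     bda({ idx: [0, 4, 9], goto: [11, 13, 17] })
 *     // → { 0: 11, 4: 13, 9: 17 }
 */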


// helper: reconstruct the 'goto' table
function bt(s) {
    var rv = [];
    var d = s.len;
    var y = s.symbol;
    var t = s.type;
    var a = s.state;
    var m = s.mode;
    var g = s.goto;
    for (var i = 0, l = d.length; i < l; i++) {
        var n = d[i];
        var q = {};
        for (var j = 0; j < n; j++) {
            var z = y.shift();
            switch (t.shift()) {
            case 2:
                q[z] = [
                    m.shift(),
                    g.shift()
                ];
                break;

            case 0:
                q[z] = a.shift();
                break;

            default:
                // type === 1: accept
                q[z] = [
                    3
                ];
            }
        }
        rv.push(q);
    }
    return rv;
}
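
/*
 * Illustrative example (hand-made input): one state with three symbol entries,
 * decoded per the `type` stream — a bare GOTO state (type 0), a shift/reduce
 * action with its mode and goto (type 2), and an accept action (type 1):
 *
 *     bt({
 *         len: [3],
 *         symbol: [4, 6, 1],
 *         type: [0, 2, 1],
 *         state: [8],
 *         mode: [1],
 *         goto: [5]
 *     })
 *     // → [{ 4: 8, 6: [1, 5], 1: [3] }]
 */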


// helper: run-length expansion: push `l` copies of code `c`, stepping by increment `a` (default step = 0).
// `this` references an array
function s(c, l, a) {
    a = a || 0;
    for (var i = 0; i < l; i++) {
        this.push(c);
        c += a;
    }
}

// helper: duplicate sequence from *relative* offset and length.
// `this` references an array
function c(i, l) {
    i = this.length - i;
    for (l += i; i < l; i++) {
        this.push(this[i]);
    }
}

// helper: unpack an array using helpers and data, all passed in an array argument 'a'.
function u(a) {
    var rv = [];
    for (var i = 0, l = a.length; i < l; i++) {
        var e = a[i];
        // Is this entry a helper function?
        if (typeof e === 'function') {
            i++;
            e.apply(rv, a[i]);
        } else {
            rv.push(e);
        }
    }
    return rv;
}
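
/*
 * Illustrative examples of the compression helpers cooperating. `u()` walks the
 * packed array; whenever it meets a helper function it applies it (with the
 * result array as `this`) to the *next* element, which holds the helper's
 * argument list; plain values are copied through as-is:
 *
 *     u([27, s, [28, 9], 29])
 *     // → [27, 28, 28, 28, 28, 28, 28, 28, 28, 28, 29]
 *     //   (`s` pushed nine copies of 28, default step 0)
 *
 *     u([1, 2, 3, c, [2, 2]])
 *     // → [1, 2, 3, 2, 3]
 *     //   (`c` re-appended 2 elements starting 2 from the end)
 */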


var parser = {
    // Code Generator Information Report
    // ---------------------------------
    //
    // Options:
    //
    // default action mode: ............. ["classic","merge"]
    // test-compile action mode: ........ "parser:*,lexer:*"
    // try..catch: ...................... true
    // default resolve on conflict: ..... true
    // on-demand look-ahead: ............ false
    // error recovery token skip maximum: 3
    // yyerror in parse actions is: ..... NOT recoverable,
    // yyerror in lexer actions and other non-fatal lexer errors are:
    // .................................. NOT recoverable,
    // debug grammar/output: ............ false
    // has partial LR conflict upgrade:   true
    // rudimentary token-stack support:   false
    // parser table compression mode: ... 2
    // export debug tables: ............. false
    // export *all* tables: ............. false
    // module type: ..................... commonjs
    // parser engine type: .............. lalr
    // output main() in the module: ..... true
    // has user-specified main(): ....... false
    // has user-specified require()/import modules for main():
    // .................................. false
    // number of expected conflicts: .... 0
    //
    //
    // Parser Analysis flags:
    //
    // no significant actions (parser is a language matcher only):
    // .................................. false
    // uses yyleng: ..................... false
    // uses yylineno: ................... false
    // uses yytext: ..................... false
    // uses yylloc: ..................... false
    // uses ParseError API: ............. false
    // uses YYERROR: .................... false
    // uses YYRECOVERING: ............... false
    // uses YYERROK: .................... false
    // uses YYCLEARIN: .................. false
    // tracks rule values: .............. true
    // assigns rule values: ............. true
    // uses location tracking: .......... false
    // assigns location: ................ false
    // uses yystack: .................... false
    // uses yysstack: ................... false
    // uses yysp: ....................... true
    // uses yyrulelength: ............... false
    // uses yyMergeLocationInfo API: .... false
    // has error recovery: .............. false
    // has error reporting: ............. false
    //
    // --------- END OF REPORT -----------

    trace: function no_op_trace() { },
    JisonParserError: JisonParserError,
    yy: {},
    options: {
        type: "lalr",
        hasPartialLrUpgradeOnConflict: true,
        errorRecoveryTokenDiscardCount: 3
    },
    symbols_: {
        "$accept": 0,
        "$end": 1,
        "ADD": 6,
        "ANGLE": 12,
        "CALC": 3,
        "CHS": 19,
        "DIV": 9,
        "EMS": 17,
        "EOF": 1,
        "EXS": 18,
        "FREQ": 14,
        "FUNCTION": 10,
        "LENGTH": 11,
        "LPAREN": 4,
        "MUL": 8,
        "NUMBER": 26,
        "PERCENTAGE": 25,
        "REMS": 20,
        "RES": 15,
        "RPAREN": 5,
        "SUB": 7,
        "TIME": 13,
        "UNKNOWN_DIMENSION": 16,
        "VHS": 21,
        "VMAXS": 24,
        "VMINS": 23,
        "VWS": 22,
        "dimension": 30,
        "error": 2,
        "expression": 27,
        "function": 29,
        "math_expression": 28,
        "number": 31
    },
    terminals_: {
        1: "EOF",
        2: "error",
        3: "CALC",
        4: "LPAREN",
        5: "RPAREN",
        6: "ADD",
        7: "SUB",
        8: "MUL",
        9: "DIV",
        10: "FUNCTION",
        11: "LENGTH",
        12: "ANGLE",
        13: "TIME",
        14: "FREQ",
        15: "RES",
        16: "UNKNOWN_DIMENSION",
        17: "EMS",
        18: "EXS",
        19: "CHS",
        20: "REMS",
        21: "VHS",
        22: "VWS",
        23: "VMINS",
        24: "VMAXS",
        25: "PERCENTAGE",
        26: "NUMBER"
    },
    TERROR: 2,
    EOF: 1,

    // internals: defined here so the object *structure* doesn't get modified by parse() et al,
    // thus helping JIT compilers like Chrome V8.
    originalQuoteName: null,
    originalParseError: null,
    cleanupAfterParse: null,
    constructParseErrorInfo: null,
    yyMergeLocationInfo: null,

    __reentrant_call_depth: 0, // INTERNAL USE ONLY
    __error_infos: [], // INTERNAL USE ONLY: the set of parseErrorInfo objects created since the last cleanup
    __error_recovery_infos: [], // INTERNAL USE ONLY: the set of parseErrorInfo objects created since the last cleanup

    // APIs which will be set up depending on user action code analysis:
    //yyRecovering: 0,
    //yyErrOk: 0,
    //yyClearIn: 0,

    // Helper APIs
    // -----------

    // Helper function which can be overridden by user code later on: put suitable quotes around
    // literal IDs in a description string.
    quoteName: function parser_quoteName(id_str) {
        return '"' + id_str + '"';
    },

    // Return the name of the given symbol (terminal or non-terminal) as a string, when available.
    //
    // Return NULL when the symbol is unknown to the parser.
    getSymbolName: function parser_getSymbolName(symbol) {
        if (this.terminals_[symbol]) {
            return this.terminals_[symbol];
        }

        // Otherwise... this might refer to a RULE token i.e. a non-terminal: see if we can dig that one up.
        //
        // An example of this may be where a rule's action code contains a call like this:
        //
        //     parser.getSymbolName(#$)
        //
        // to obtain a human-readable name of the current grammar rule.
        var s = this.symbols_;
        for (var key in s) {
            if (s[key] === symbol) {
                return key;
            }
        }
        return null;
    },

    // Return a more-or-less human-readable description of the given symbol, when available,
    // or the symbol itself, serving as its own 'description' for lack of something better to serve up.
    //
    // Return NULL when the symbol is unknown to the parser.
    describeSymbol: function parser_describeSymbol(symbol) {
        if (symbol !== this.EOF && this.terminal_descriptions_ && this.terminal_descriptions_[symbol]) {
            return this.terminal_descriptions_[symbol];
        }
        else if (symbol === this.EOF) {
            return 'end of input';
        }
        var id = this.getSymbolName(symbol);
        if (id) {
            return this.quoteName(id);
        }
        return null;
    },
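
    /*
     * Illustrative note, using this parser's own symbol tables (assuming no
     * `terminal_descriptions_` have been generated for this grammar):
     *
     *     parser.describeSymbol(1);    // 'end of input'         (EOF)
     *     parser.describeSymbol(3);    // '"CALC"'               (terminal, quoted)
     *     parser.describeSymbol(28);   // '"math_expression"'    (non-terminal)
     */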

    // Produce a (more or less) human-readable list of expected tokens at the point of failure.
    //
    // The produced list may contain token or token set descriptions instead of the tokens
    // themselves to help turning this output into something that is easier for humans to read,
    // unless the `do_not_describe` parameter is set, in which case a list of the raw, *numeric*,
    // expected terminals and nonterminals is produced.
    //
    // The returned list (array) will not contain any duplicate entries.
    collect_expected_token_set: function parser_collect_expected_token_set(state, do_not_describe) {
        var TERROR = this.TERROR;
        var tokenset = [];
        var check = {};
        // Has this (error?) state been outfitted with a custom expectations description text for human consumption?
        // If so, use that one instead of the less palatable token set.
        if (!do_not_describe && this.state_descriptions_ && this.state_descriptions_[state]) {
            return [
                this.state_descriptions_[state]
            ];
        }
        for (var p in this.table[state]) {
            p = +p;
            if (p !== TERROR) {
                var d = do_not_describe ? p : this.describeSymbol(p);
                if (d && !check[d]) {
                    tokenset.push(d);
                    check[d] = true; // Mark this token description as already mentioned to prevent outputting duplicate entries.
                }
            }
        }
        return tokenset;
    },
    productions_: bp({
        pop: u([
            27,
            s, [28, 9],
            29,
            s, [30, 17],
            s, [31, 3]
        ]),
        rule: u([
            2,
            4,
            s, [3, 5],
            s, [1, 19],
            2,
            2,
            c, [3, 3]
        ])
    }),
|
|
|
    performAction: function parser__performAction(yystate /* action[1] */, yysp, yyvstack) {
        /* this == yyval */

        // the JS engine itself can go and remove these statements when `yy` turns out to be unused in any action code!
        var yy = this.yy;
        var yyparser = yy.parser;
        var yylexer = yy.lexer;

        switch (yystate) {
        case 0:
            /*! Production:: $accept : expression $end */

            // default action (generated by JISON mode classic/merge :: 1,VT,VA,-,-,-,-,-,-):
            this.$ = yyvstack[yysp - 1];
            // END of default action (generated by JISON mode classic/merge :: 1,VT,VA,-,-,-,-,-,-)
            break;

        case 1:
            /*! Production:: expression : math_expression EOF */

            // default action (generated by JISON mode classic/merge :: 2,VT,VA,-,-,-,-,-,-):
            this.$ = yyvstack[yysp - 1];
            // END of default action (generated by JISON mode classic/merge :: 2,VT,VA,-,-,-,-,-,-)

            return yyvstack[yysp - 1];

        case 2:
            /*! Production:: math_expression : CALC LPAREN math_expression RPAREN */
        case 7:
            /*! Production:: math_expression : LPAREN math_expression RPAREN */

            this.$ = yyvstack[yysp - 1];
            break;

        case 3:
            /*! Production:: math_expression : math_expression ADD math_expression */
        case 4:
            /*! Production:: math_expression : math_expression SUB math_expression */
        case 5:
            /*! Production:: math_expression : math_expression MUL math_expression */
        case 6:
            /*! Production:: math_expression : math_expression DIV math_expression */

            this.$ = { type: 'MathExpression', operator: yyvstack[yysp - 1], left: yyvstack[yysp - 2], right: yyvstack[yysp] };
            break;

        case 8:
            /*! Production:: math_expression : function */
        case 9:
            /*! Production:: math_expression : dimension */
        case 10:
            /*! Production:: math_expression : number */

            this.$ = yyvstack[yysp];
            break;

        case 11:
            /*! Production:: function : FUNCTION */

            this.$ = { type: 'Function', value: yyvstack[yysp] };
            break;

        case 12:
            /*! Production:: dimension : LENGTH */

            this.$ = { type: 'LengthValue', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 13:
            /*! Production:: dimension : ANGLE */

            this.$ = { type: 'AngleValue', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 14:
            /*! Production:: dimension : TIME */

            this.$ = { type: 'TimeValue', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 15:
            /*! Production:: dimension : FREQ */

            this.$ = { type: 'FrequencyValue', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 16:
            /*! Production:: dimension : RES */

            this.$ = { type: 'ResolutionValue', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 17:
            /*! Production:: dimension : UNKNOWN_DIMENSION */

            this.$ = { type: 'UnknownDimension', value: parseFloat(yyvstack[yysp]), unit: /[a-z]+$/i.exec(yyvstack[yysp])[0] };
            break;

        case 18:
            /*! Production:: dimension : EMS */

            this.$ = { type: 'EmValue', value: parseFloat(yyvstack[yysp]), unit: 'em' };
            break;

        case 19:
            /*! Production:: dimension : EXS */

            this.$ = { type: 'ExValue', value: parseFloat(yyvstack[yysp]), unit: 'ex' };
            break;

        case 20:
            /*! Production:: dimension : CHS */

            this.$ = { type: 'ChValue', value: parseFloat(yyvstack[yysp]), unit: 'ch' };
            break;

        case 21:
            /*! Production:: dimension : REMS */

            this.$ = { type: 'RemValue', value: parseFloat(yyvstack[yysp]), unit: 'rem' };
            break;

        case 22:
            /*! Production:: dimension : VHS */

            this.$ = { type: 'VhValue', value: parseFloat(yyvstack[yysp]), unit: 'vh' };
            break;

        case 23:
            /*! Production:: dimension : VWS */

            this.$ = { type: 'VwValue', value: parseFloat(yyvstack[yysp]), unit: 'vw' };
            break;

        case 24:
            /*! Production:: dimension : VMINS */

            this.$ = { type: 'VminValue', value: parseFloat(yyvstack[yysp]), unit: 'vmin' };
            break;

        case 25:
            /*! Production:: dimension : VMAXS */

            this.$ = { type: 'VmaxValue', value: parseFloat(yyvstack[yysp]), unit: 'vmax' };
            break;

        case 26:
            /*! Production:: dimension : PERCENTAGE */

            this.$ = { type: 'PercentageValue', value: parseFloat(yyvstack[yysp]), unit: '%' };
            break;

        case 27:
            /*! Production:: dimension : ADD dimension */

            var prev = yyvstack[yysp]; this.$ = prev;
            break;

        case 28:
            /*! Production:: dimension : SUB dimension */

            var prev = yyvstack[yysp]; prev.value *= -1; this.$ = prev;
            break;

        case 29:
            /*! Production:: number : NUMBER */
        case 30:
            /*! Production:: number : ADD NUMBER */

            this.$ = { type: 'Number', value: parseFloat(yyvstack[yysp]) };
            break;

        case 31:
            /*! Production:: number : SUB NUMBER */

            this.$ = { type: 'Number', value: parseFloat(yyvstack[yysp]) * -1 };
            break;
        }
    },
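The reduce actions above build a small AST for CSS `calc()`-style expressions. The following standalone sketch (illustrative only, not part of the generated kernel) shows the node shapes produced for an input like `2px + 10%`, reusing the same `parseFloat` + trailing-letters regex the actions use:

```javascript
// Hypothetical standalone sketch of the node shapes built by the reduce
// actions above (cases 3-6, 12 and 26); helper names are invented here.
function lengthNode(text) {
  // mirrors `dimension : LENGTH` (case 12): numeric part + trailing unit letters
  return { type: 'LengthValue', value: parseFloat(text), unit: /[a-z]+$/i.exec(text)[0] };
}

function percentageNode(text) {
  // mirrors `dimension : PERCENTAGE` (case 26)
  return { type: 'PercentageValue', value: parseFloat(text), unit: '%' };
}

function mathNode(op, left, right) {
  // mirrors `math_expression : math_expression ADD math_expression` (cases 3-6)
  return { type: 'MathExpression', operator: op, left: left, right: right };
}

var ast = mathNode('+', lengthNode('2px'), percentageNode('10%'));
console.log(JSON.stringify(ast));
```

The real parser produces these nodes via `this.$` assignments while reducing; the sketch only illustrates the resulting object shapes.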
    table: bt({
        len: u([
            26, 1, 5, 1, 25,
            s, [0, 19],
            19, 19, 0, 0,
            s, [25, 5],
            5, 0, 0, 18, 18, 0, 0, 6, 6, 0, 0,
            c, [11, 3]
        ]),
        symbol: u([
            3, 4, 6, 7,
            s, [10, 22, 1],
            1, 1,
            s, [6, 4, 1],
            4,
            c, [33, 21],
            c, [32, 4],
            6, 7,
            c, [22, 16],
            30,
            c, [19, 19],
            c, [63, 25],
            c, [25, 100],
            s, [5, 5, 1],
            c, [149, 17],
            c, [167, 18],
            30, 1,
            c, [42, 5],
            c, [6, 6],
            c, [5, 5]
        ]),
        type: u([
            s, [2, 21],
            s, [0, 5],
            1,
            s, [2, 27],
            s, [0, 4],
            c, [22, 19],
            c, [19, 37],
            c, [63, 25],
            c, [25, 103],
            c, [148, 19],
            c, [18, 18]
        ]),
        state: u([
            1, 2, 5, 6, 7, 33,
            c, [4, 3],
            34, 38, 40,
            c, [6, 3],
            41,
            c, [4, 3],
            42,
            c, [4, 3],
            43,
            c, [4, 3],
            44,
            c, [22, 5]
        ]),
        mode: u([
            s, [1, 228],
            s, [2, 4],
            c, [6, 8],
            s, [1, 5]
        ]),
        goto: u([
            3, 4, 24, 25,
            s, [8, 16, 1],
            s, [26, 7, 1],
            c, [27, 21],
            36, 37,
            c, [18, 15],
            35,
            c, [18, 17],
            39,
            c, [57, 21],
            c, [21, 84],
            45,
            c, [168, 4],
            c, [128, 17],
            c, [17, 17],
            s, [3, 4],
            30, 31,
            s, [4, 4],
            30, 31, 46,
            c, [51, 4]
        ])
    }),
    defaultActions: bda({
        idx: u([
            s, [5, 19, 1],
            26, 27, 34, 35, 38, 39, 42, 43, 45, 46
        ]),
        goto: u([
            s, [8, 19, 1],
            29, 1, 27, 30, 28, 31, 5, 6, 7, 2
        ])
    }),
    parseError: function parseError(str, hash, ExceptionClass) {
        if (hash.recoverable) {
            if (typeof this.trace === 'function') {
                this.trace(str);
            }
            hash.destroy(); // destroy... well, *almost*!
        } else {
            if (typeof this.trace === 'function') {
                this.trace(str);
            }
            if (!ExceptionClass) {
                ExceptionClass = this.JisonParserError;
            }
            throw new ExceptionClass(str, hash);
        }
    },
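The `parseError` contract above splits on `hash.recoverable`: recoverable errors are traced and their hash released, while fatal ones throw. A minimal standalone sketch of that contract (hypothetical function names, a plain `Error` standing in for `JisonParserError`):

```javascript
// Hypothetical sketch of the recoverable/fatal split in `parseError` above.
function reportError(str, hash, trace) {
  if (hash.recoverable) {
    if (typeof trace === 'function') trace(str);
    hash.destroy(); // release internal references, as the kernel does
  } else {
    if (typeof trace === 'function') trace(str);
    throw new Error(str); // the kernel throws its `JisonParserError` here
  }
}
```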
    parse: function parse(input) {
        var self = this;
        var stack = new Array(128);  // token stack: stores token which leads to state at the same index (column storage)
        var sstack = new Array(128); // state stack: stores states (column storage)
        var vstack = new Array(128); // semantic value stack

        var table = this.table;
        var sp = 0; // 'stack pointer': index into the stacks

        var symbol = 0;

        var TERROR = this.TERROR;
        var EOF = this.EOF;
        var ERROR_RECOVERY_TOKEN_DISCARD_COUNT = (this.options.errorRecoveryTokenDiscardCount | 0) || 3;
        var NO_ACTION = [0, 47 /* === table.length :: ensures that anyone using this new state will fail dramatically! */];

        var lexer;
        if (this.__lexer__) {
            lexer = this.__lexer__;
        } else {
            lexer = this.__lexer__ = Object.create(this.lexer);
        }

        var sharedState_yy = {
            parseError: undefined,
            quoteName: undefined,
            lexer: undefined,
            parser: undefined,
            pre_parse: undefined,
            post_parse: undefined,
            pre_lex: undefined,
            post_lex: undefined // WARNING: must be written this way for the code expanders to work correctly in both ES5 and ES6 modes!
        };

        var ASSERT;
        if (typeof assert !== 'function') {
            ASSERT = function JisonAssert(cond, msg) {
                if (!cond) {
                    throw new Error('assertion failed: ' + (msg || '***'));
                }
            };
        } else {
            ASSERT = assert;
        }

        this.yyGetSharedState = function yyGetSharedState() {
            return sharedState_yy;
        };

        function shallow_copy_noclobber(dst, src) {
            for (var k in src) {
                if (typeof dst[k] === 'undefined' && Object.prototype.hasOwnProperty.call(src, k)) {
                    dst[k] = src[k];
                }
            }
        }

        // copy state
        shallow_copy_noclobber(sharedState_yy, this.yy);

        sharedState_yy.lexer = lexer;
        sharedState_yy.parser = this;

        // Does the shared state override the default `parseError` that already comes with this instance?
        if (typeof sharedState_yy.parseError === 'function') {
            this.parseError = function parseErrorAlt(str, hash, ExceptionClass) {
                if (!ExceptionClass) {
                    ExceptionClass = this.JisonParserError;
                }
                return sharedState_yy.parseError.call(this, str, hash, ExceptionClass);
            };
        } else {
            this.parseError = this.originalParseError;
        }

        // Does the shared state override the default `quoteName` that already comes with this instance?
        if (typeof sharedState_yy.quoteName === 'function') {
            this.quoteName = function quoteNameAlt(id_str) {
                return sharedState_yy.quoteName.call(this, id_str);
            };
        } else {
            this.quoteName = this.originalQuoteName;
        }

        // set up the cleanup function; make it an API so that external code can re-use this one in case of
        // calamities or when the `%options no-try-catch` option has been specified for the grammar, in which
        // case this parse() API method doesn't come with a `finally { ... }` block any more!
        //
        // NOTE: as this API uses parse() as a closure, it MUST be set again on every parse() invocation,
        // or else your `sharedState`, etc. references will be *wrong*!
        this.cleanupAfterParse = function parser_cleanupAfterParse(resultValue, invoke_post_methods, do_not_nuke_errorinfos) {
            var rv;

            if (invoke_post_methods) {
                var hash;

                if (sharedState_yy.post_parse || this.post_parse) {
                    // create an error hash info instance: we re-use this API in a **non-error situation**
                    // as this one delivers all parser internals ready for access by userland code.
                    hash = this.constructParseErrorInfo(null /* no error! */, null /* no exception! */, null, false);
                }

                if (sharedState_yy.post_parse) {
                    rv = sharedState_yy.post_parse.call(this, sharedState_yy, resultValue, hash);
                    if (typeof rv !== 'undefined') resultValue = rv;
                }
                if (this.post_parse) {
                    rv = this.post_parse.call(this, sharedState_yy, resultValue, hash);
                    if (typeof rv !== 'undefined') resultValue = rv;
                }

                // cleanup:
                if (hash && hash.destroy) {
                    hash.destroy();
                }
            }

            if (this.__reentrant_call_depth > 1) return resultValue; // do not (yet) kill the sharedState when this is a reentrant run.

            // clean up the lingering lexer structures as well:
            if (lexer.cleanupAfterLex) {
                lexer.cleanupAfterLex(do_not_nuke_errorinfos);
            }

            // prevent lingering circular references from causing memory leaks:
            if (sharedState_yy) {
                sharedState_yy.lexer = undefined;
                sharedState_yy.parser = undefined;
                if (lexer.yy === sharedState_yy) {
                    lexer.yy = undefined;
                }
            }
            sharedState_yy = undefined;
            this.parseError = this.originalParseError;
            this.quoteName = this.originalQuoteName;

            // nuke the vstack[] array at least as that one will still reference obsoleted user values.
            // To be safe, we nuke the other internal stack columns as well...
            stack.length = 0; // fastest way to nuke an array without overly bothering the GC
            sstack.length = 0;
            vstack.length = 0;
            sp = 0;

            // nuke the error hash info instances created during this run.
            // Userland code must COPY any data/references
            // in the error hash instance(s) it is more permanently interested in.
            if (!do_not_nuke_errorinfos) {
                for (var i = this.__error_infos.length - 1; i >= 0; i--) {
                    var el = this.__error_infos[i];
                    if (el && typeof el.destroy === 'function') {
                        el.destroy();
                    }
                }
                this.__error_infos.length = 0;
            }

            return resultValue;
        };
        // NOTE: as this API uses parse() as a closure, it MUST be set again on every parse() invocation,
        // or else your `lexer`, `sharedState`, etc. references will be *wrong*!
        this.constructParseErrorInfo = function parser_constructParseErrorInfo(msg, ex, expected, recoverable) {
            var pei = {
                errStr: msg,
                exception: ex,
                text: lexer.match,
                value: lexer.yytext,
                token: this.describeSymbol(symbol) || symbol,
                token_id: symbol,
                line: lexer.yylineno,
                expected: expected,
                recoverable: recoverable,
                state: state,
                action: action,
                new_state: newState,
                symbol_stack: stack,
                state_stack: sstack,
                value_stack: vstack,
                stack_pointer: sp,
                yy: sharedState_yy,
                lexer: lexer,
                parser: this,

                // and make sure the error info doesn't stay due to potential
                // ref cycle via userland code manipulations.
                // These would otherwise all be memory leak opportunities!
                //
                // Note that only array and object references are nuked as those
                // constitute the set of elements which can produce a cyclic ref.
                // The rest of the members are kept intact as they are harmless.
                destroy: function destructParseErrorInfo() {
                    // remove cyclic references added to error info:
                    // info.yy = null;
                    // info.lexer = null;
                    // info.value = null;
                    // info.value_stack = null;
                    // ...
                    var rec = !!this.recoverable;
                    for (var key in this) {
                        if (this.hasOwnProperty(key) && typeof this[key] === 'object') {
                            this[key] = undefined;
                        }
                    }
                    this.recoverable = rec;
                }
            };
            // track this instance so we can `destroy()` it once we deem it superfluous and ready for garbage collection!
            this.__error_infos.push(pei);
            return pei;
        };
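The `destroy()` member above is meant to break reference cycles: object- and array-valued members are cleared while scalar members (and the `recoverable` flag) survive. A standalone sketch of that intent, with an invented helper name:

```javascript
// Hypothetical sketch of the `destroy()` semantics above: null out
// object/array members (potential cycle sources), keep scalars intact.
function destroyInfo(info) {
  var rec = !!info.recoverable;
  for (var key in info) {
    if (Object.prototype.hasOwnProperty.call(info, key) && typeof info[key] === 'object') {
      info[key] = undefined;
    }
  }
  info.recoverable = rec; // always preserved, even though booleans are scalars anyway
  return info;
}
```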
        function getNonTerminalFromCode(symbol) {
            var tokenName = self.getSymbolName(symbol);
            if (!tokenName) {
                tokenName = symbol;
            }
            return tokenName;
        }

        function stdLex() {
            var token = lexer.lex();
            // if token isn't its numeric value, convert
            if (typeof token !== 'number') {
                token = self.symbols_[token] || token;
            }
            return token || EOF;
        }
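`stdLex()` (and `fastLex()` below) normalize whatever the lexer returns: string token names are mapped through the `symbols_` table, and any falsy result (end of input) becomes the EOF token id. A standalone sketch of that normalization, using an invented, illustrative subset of the symbol table:

```javascript
// Hypothetical sketch of the token normalization in stdLex()/fastLex().
// `symbols_` here is an invented subset, not the parser's real table;
// EOF is 1, as documented in the parser prototype.
var EOF = 1;
var symbols_ = { ADD: 6, NUMBER: 14 };

function normalizeToken(raw) {
  var token = raw;
  if (typeof token !== 'number') {
    token = symbols_[token] || token; // map names to numeric ids
  }
  return token || EOF; // falsy (0/undefined) means end of input
}
```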
        function fastLex() {
            var token = lexer.fastLex();
            // if token isn't its numeric value, convert
            if (typeof token !== 'number') {
                token = self.symbols_[token] || token;
            }
            return token || EOF;
        }

        var lex = stdLex;

        var state, action, r, t;
        var yyval = {
            $: true,
            _$: undefined,
            yy: sharedState_yy
        };
        var p;
        var yyrulelen;
        var this_production;
        var newState;
        var retval = false;

        try {
            this.__reentrant_call_depth++;

            lexer.setInput(input, sharedState_yy);

            // NOTE: we *assume* no lexer pre/post handlers are set up *after*
            // this initial `setInput()` call: hence we can now check and decide
            // whether we'll go with the standard, slower, lex() API or the
            // `fast_lex()` one:
            if (typeof lexer.canIUse === 'function') {
                var lexerInfo = lexer.canIUse();
                if (lexerInfo.fastLex && typeof fastLex === 'function') {
                    lex = fastLex;
                }
            }

            vstack[sp] = null;
            sstack[sp] = 0;
            stack[sp] = 0;
            ++sp;

            if (this.pre_parse) {
                this.pre_parse.call(this, sharedState_yy);
            }
            if (sharedState_yy.pre_parse) {
                sharedState_yy.pre_parse.call(this, sharedState_yy);
            }

            newState = sstack[sp - 1];
            for (;;) {
                // retrieve state number from top of stack
                state = newState; // sstack[sp - 1];

                // use default actions if available
                if (this.defaultActions[state]) {
                    action = 2;
                    newState = this.defaultActions[state];
                } else {
                    // The single `!symbol` truthiness check below covers both these
                    // comparisons in a single operation:
                    //
                    // if (symbol === null || typeof symbol === 'undefined') ...
                    if (!symbol) {
                        symbol = lex();
                    }
                    // read action for current state and first input
                    t = (table[state] && table[state][symbol]) || NO_ACTION;
                    newState = t[1];
                    action = t[0];

                    // handle parse error
                    if (!action) {
                        var errStr;
                        var errSymbolDescr = (this.describeSymbol(symbol) || symbol);
                        var expected = this.collect_expected_token_set(state);

                        // Report error
                        if (typeof lexer.yylineno === 'number') {
                            errStr = 'Parse error on line ' + (lexer.yylineno + 1) + ': ';
                        } else {
                            errStr = 'Parse error: ';
                        }
                        if (typeof lexer.showPosition === 'function') {
                            errStr += '\n' + lexer.showPosition(79 - 10, 10) + '\n';
                        }
                        if (expected.length) {
                            errStr += 'Expecting ' + expected.join(', ') + ', got unexpected ' + errSymbolDescr;
                        } else {
                            errStr += 'Unexpected ' + errSymbolDescr;
                        }
                        // we cannot recover from the error!
                        p = this.constructParseErrorInfo(errStr, null, expected, false);
                        r = this.parseError(p.errStr, p, this.JisonParserError);
                        if (typeof r !== 'undefined') {
                            retval = r;
                        }
                        break;
                    }
                }

                switch (action) {
                // catch misc. parse failures:
                default:
                    // this shouldn't happen, unless resolve defaults are off
                    if (action instanceof Array) {
                        p = this.constructParseErrorInfo('Parse Error: multiple actions possible at state: ' + state + ', token: ' + symbol, null, null, false);
                        r = this.parseError(p.errStr, p, this.JisonParserError);
                        if (typeof r !== 'undefined') {
                            retval = r;
                        }
                        break;
                    }
                    // Another case of better safe than sorry: in case state transitions come out of another error recovery process
                    // or a buggy LUT (LookUp Table):
                    p = this.constructParseErrorInfo('Parsing halted. No viable error recovery approach available due to internal system failure.', null, null, false);
                    r = this.parseError(p.errStr, p, this.JisonParserError);
                    if (typeof r !== 'undefined') {
                        retval = r;
                    }
                    break;

                // shift:
                case 1:
                    stack[sp] = symbol;
                    vstack[sp] = lexer.yytext;
                    sstack[sp] = newState; // push state
                    ++sp;
                    symbol = 0;

                    // Pick up the lexer details for the current symbol as that one is not 'look-ahead' any more:
                    continue;

                // reduce:
                case 2:
                    this_production = this.productions_[newState - 1]; // `this.productions_[]` is zero-based indexed while states start from 1 upwards...
                    yyrulelen = this_production[1];

                    r = this.performAction.call(yyval, newState, sp - 1, vstack);

                    if (typeof r !== 'undefined') {
                        retval = r;
                        break;
                    }

                    // pop off stack
                    sp -= yyrulelen;

                    // don't overwrite the `symbol` variable: use a local var to speed things up:
                    var ntsymbol = this_production[0]; // push nonterminal (reduce)
                    stack[sp] = ntsymbol;
                    vstack[sp] = yyval.$;

                    // goto new state = table[STATE][NONTERMINAL]
                    newState = table[sstack[sp - 1]][ntsymbol];
                    sstack[sp] = newState;
                    ++sp;

                    continue;

                // accept:
                case 3:
                    if (sp !== -2) {
                        retval = true;
                        // Return the `$accept` rule's `$$` result, if available.
                        //
                        // Also note that JISON always adds this top-most `$accept` rule (with implicit,
                        // default, action):
                        //
                        //     $accept: <startSymbol> $end
                        //                  %{ $$ = $1; @$ = @1; %}
                        //
                        // which, combined with the parse kernel's `$accept` state behaviour coded below,
                        // will produce the `$$` value output of the <startSymbol> rule as the parse result,
                        // IFF that result is *not* `undefined`. (See also the parser kernel code.)
                        //
                        // In code:
                        //
                        //                  %{
                        //                      @$ = @1;            // if location tracking support is included
                        //                      if (typeof $1 !== 'undefined')
                        //                          return $1;
                        //                      else
                        //                          return true;    // the default parse result if the rule actions don't produce anything
                        //                  %}
                        sp--;
                        if (typeof vstack[sp] !== 'undefined') {
                            retval = vstack[sp];
                        }
                    }
                    break;
                }

                // break out of loop: we accept or fail with error
                break;
            }
        } catch (ex) {
            // report exceptions through the parseError callback too, but keep the exception intact
            // if it is a known parser or lexer error which has been thrown by parseError() already:
            if (ex instanceof this.JisonParserError) {
                throw ex;
            }
            else if (lexer && typeof lexer.JisonLexerError === 'function' && ex instanceof lexer.JisonLexerError) {
                throw ex;
            }

            p = this.constructParseErrorInfo('Parsing aborted due to exception.', ex, null, false);
            retval = false;
            r = this.parseError(p.errStr, p, this.JisonParserError);
            if (typeof r !== 'undefined') {
                retval = r;
            }
        } finally {
            retval = this.cleanupAfterParse(retval, true, true);
            this.__reentrant_call_depth--;
        } // /finally

        return retval;
    }
};
parser.originalParseError = parser.parseError;
parser.originalQuoteName = parser.quoteName;

/* lexer generated by jison-lex 0.6.1-215 */
/*
|
|
|
* Returns a Lexer object of the following structure:
|
|
|
*
|
|
|
* Lexer: {
|
|
|
* yy: {} The so-called "shared state" or rather the *source* of it;
|
|
|
* the real "shared state" `yy` passed around to
|
|
|
* the rule actions, etc. is a direct reference!
|
|
|
*
|
|
|
* This "shared context" object was passed to the lexer by way of
|
|
|
* the `lexer.setInput(str, yy)` API before you may use it.
|
|
|
*
|
|
|
* This "shared context" object is passed to the lexer action code in `performAction()`
|
|
|
* so userland code in the lexer actions may communicate with the outside world
|
|
|
* and/or other lexer rules' actions in more or less complex ways.
|
|
|
*
|
|
|
* }
|
|
|
*
|
|
|
* Lexer.prototype: {
|
|
|
* EOF: 1,
|
|
|
* ERROR: 2,
|
|
|
*
|
|
|
* yy: The overall "shared context" object reference.
|
|
|
*
|
|
|
* JisonLexerError: function(msg, hash),
|
|
|
*
|
|
|
* performAction: function lexer__performAction(yy, yyrulenumber, YY_START),
|
|
|
*
|
|
|
* The function parameters and `this` have the following value/meaning:
|
|
|
* - `this` : reference to the `lexer` instance.
|
|
|
* `yy_` is an alias for `this` lexer instance reference used internally.
|
|
|
*
|
|
|
* - `yy` : a reference to the `yy` "shared state" object which was passed to the lexer
|
|
|
* by way of the `lexer.setInput(str, yy)` API before.
|
|
|
*
|
|
|
* Note:
|
|
|
* The extra arguments you specified in the `%parse-param` statement in your
|
|
|
* **parser** grammar definition file are passed to the lexer via this object
|
|
|
* reference as member variables.
|
|
|
*
|
|
|
* - `yyrulenumber` : index of the matched lexer rule (regex), used internally.
|
|
|
*
|
|
|
* - `YY_START`: the current lexer "start condition" state.
|
|
|
*
|
|
|
* parseError: function(str, hash, ExceptionClass),
|
|
|
*
|
|
|
* constructLexErrorInfo: function(error_message, is_recoverable),
|
|
|
* Helper function.
|
|
|
* Produces a new errorInfo 'hash object' which can be passed into `parseError()`.
|
|
|
* See it's use in this lexer kernel in many places; example usage:
|
|
|
*
|
|
|
* var infoObj = lexer.constructParseErrorInfo('fail!', true);
|
|
|
* var retVal = lexer.parseError(infoObj.errStr, infoObj, lexer.JisonLexerError);
|
|
|
*
|
|
|
* options: { ... lexer %options ... },
|
|
|
*
|
|
|
* lex: function(),
|
|
|
* Produce one token of lexed input, which was passed in earlier via the `lexer.setInput()` API.
|
|
|
* You MAY use the additional `args...` parameters as per `%parse-param` spec of the **lexer** grammar:
|
|
|
* these extra `args...` are added verbatim to the `yy` object reference as member variables.
|
|
|
*
|
|
|
* WARNING:
|
|
|
* Lexer's additional `args...` parameters (via lexer's `%parse-param`) MAY conflict with
|
|
|
* any attributes already added to `yy` by the **parser** or the jison run-time;
|
|
|
* when such a collision is detected an exception is thrown to prevent the generated run-time
|
|
|
* from silently accepting this confusing and potentially hazardous situation!
|
|
|
*
|
|
|
* cleanupAfterLex: function(do_not_nuke_errorinfos),
|
|
|
* Helper function.
|
|
|
*
|
|
|
* This helper API is invoked when the **parse process** has completed: it is the responsibility
|
|
|
* of the **parser** (or the calling userland code) to invoke this method once cleanup is desired.
|
|
|
*
|
|
|
* This helper may be invoked by user code to ensure the internal lexer gets properly garbage collected.
|
|
|
*
|
|
|
* setInput: function(input, [yy]),
|
|
|
*
|
|
|
*
|
|
|
* input: function(),
|
|
|
*
|
|
|
*
|
|
|
* unput: function(str),
|
|
|
*
|
|
|
*
|
|
|
* more: function(),
|
|
|
*
|
|
|
*
|
|
|
* reject: function(),
|
|
|
*
|
|
|
*
|
|
|
* less: function(n),
|
|
|
*
|
|
|
*
|
|
|
* pastInput: function(n),
|
|
|
*
|
|
|
*
|
|
|
* upcomingInput: function(n),
|
|
|
*
|
|
|
*
|
|
|
* showPosition: function(),
|
|
|
*
|
|
|
*
|
|
|
* test_match: function(regex_match_array, rule_index),
|
|
|
*
|
|
|
*
|
|
|
* next: function(),
|
|
|
*
|
|
|
*
|
|
|
* begin: function(condition),
|
|
|
*
|
|
|
*
|
|
|
* pushState: function(condition),
|
|
|
*
|
|
|
*
|
|
|
* popState: function(),
|
|
|
*
|
|
|
*
|
|
|
* topState: function(),
|
|
|
*
|
|
|
*
|
|
|
* _currentRules: function(),
|
|
|
*
|
|
|
*
|
|
|
* stateStackSize: function(),
|
|
|
*
|
|
|
*
|
|
|
* performAction: function(yy, yy_, yyrulenumber, YY_START),
|
|
|
*
|
|
|
*
|
|
|
* rules: [...],
|
|
|
*
|
|
|
*
|
|
|
* conditions: {associative list: name ==> set},
|
|
|
* }
|
|
|
*
|
|
|
*
|
|
|
* token location info (`yylloc`): {
|
|
|
* first_line: n,
|
|
|
* last_line: n,
|
|
|
* first_column: n,
|
|
|
* last_column: n,
|
|
|
* range: [start_number, end_number]
|
|
|
* (where the numbers are indexes into the input string, zero-based)
|
|
|
* }
|
|
|
*
|
|
|
* ---
|
|
|
*
|
|
|
* The `parseError` function receives a 'hash' object with these members for lexer errors:
|
|
|
*
|
|
|
* {
|
|
|
* text: (matched text)
|
|
|
* token: (the produced terminal token, if any)
|
|
|
* token_id: (the produced terminal token numeric ID, if any)
|
|
|
* line: (yylineno)
|
|
|
* loc: (yylloc)
|
|
|
* recoverable: (boolean: TRUE when the parser MAY have an error recovery rule
|
|
|
* available for this particular error)
|
|
|
* yy: (object: the current parser internal "shared state" `yy`
|
|
|
* as is also available in the rule actions; this can be used,
|
|
|
* for instance, for advanced error analysis and reporting)
|
|
|
* lexer: (reference to the current lexer instance used by the parser)
|
|
|
* }
|
|
|
*
|
|
|
* while `this` will reference the current lexer instance.
|
|
|
*
|
|
|
* When `parseError` is invoked by the lexer, the default implementation will
|
|
|
* attempt to invoke `yy.parser.parseError()`; when this callback is not provided
|
|
|
* it will try to invoke `yy.parseError()` instead. When that callback is also not
|
|
|
* provided, a `JisonLexerError` exception will be thrown containing the error
|
|
|
* message and `hash`, as constructed by the `constructLexErrorInfo()` API.
*
* Note that the lexer's `JisonLexerError` error class is passed via the
* `ExceptionClass` argument, which is invoked to construct the exception
* instance to be thrown, so technically `parseError` will throw the object
* produced by the `new ExceptionClass(str, hash)` JavaScript expression.
*
* ---
*
* You can specify lexer options by setting / modifying the `.options` object of your Lexer instance.
* These options are available:
*
* (Options are permanent.)
*
*      yy: {
*          parseError: function(str, hash, ExceptionClass)
*                 optional: overrides the default `parseError` function.
*      }
*
*      lexer.options: {
*          pre_lex:  function()
*                 optional: is invoked before the lexer is invoked to produce another token.
*                 `this` refers to the Lexer object.
*          post_lex: function(token) { return token; }
*                 optional: is invoked when the lexer has produced a token `token`;
*                 this function can override the returned token value by returning another.
*                 When it does not return any (truthy) value, the lexer will return
*                 the original `token`.
*                 `this` refers to the Lexer object.
*
*      WARNING: the next set of options are not meant to be changed. They echo the abilities of
*      the lexer as per when it was compiled!
*
*          ranges: boolean
*                 optional: `true` ==> token location info will include a .range[] member.
*          flex: boolean
*                 optional: `true` ==> flex-like lexing behaviour where the rules are tested
*                 exhaustively to find the longest match.
*          backtrack_lexer: boolean
*                 optional: `true` ==> lexer regexes are tested in order and for each matched
*                 regex the corresponding action code is invoked; the lexer terminates the scan
*                 when a token is returned by the action code.
*          xregexp: boolean
*                 optional: `true` ==> lexer rule regexes are in "extended regex format", requiring the
*                 `XRegExp` library. When this %option has not been specified at compile time, all lexer
*                 rule regexes have been written as standard JavaScript RegExp expressions.
*      }
*/
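The `pre_lex` / `post_lex` option contract documented above can be sketched standalone. This is a hypothetical stub (the `fakeLexer` object and its trivial `next()` driver are invented for illustration); only the hook-invocation contract mirrors the documentation, not the generated lexer's real matching loop:

```javascript
// Minimal stand-in showing how a jison-style `post_lex` hook can override a token.
var fakeLexer = {
    options: {
        pre_lex: function () {
            this.preCalled = true;                // e.g. refill buffers, log, etc.
        },
        post_lex: function (token) {
            // returning a truthy value overrides the produced token;
            // returning nothing keeps the original token.
            return (token === 'RAW_ID' ? 'KEYWORD' : undefined);
        }
    },
    next: function () {
        return 'RAW_ID';                          // pretend the rule engine matched something
    },
    lex: function () {
        if (typeof this.options.pre_lex === 'function') {
            this.options.pre_lex.call(this);
        }
        var token = this.next();
        if (typeof this.options.post_lex === 'function') {
            // falsey return value ==> keep the original token:
            token = this.options.post_lex.call(this, token) || token;
        }
        return token;
    }
};

var produced = fakeLexer.lex();
```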
var lexer = function() {
    /**
     * See also:
     * http://stackoverflow.com/questions/1382107/whats-a-good-way-to-extend-error-in-javascript/#35881508
     * but we keep the prototype.constructor and prototype.name assignment lines too for compatibility
     * with userland code which might access the derived class in a 'classic' way.
     *
     * @public
     * @constructor
     * @nocollapse
     */
    function JisonLexerError(msg, hash) {
        Object.defineProperty(this, 'name', {
            enumerable: false,
            writable: false,
            value: 'JisonLexerError'
        });

        if (msg == null)
            msg = '???';

        Object.defineProperty(this, 'message', {
            enumerable: false,
            writable: true,
            value: msg
        });

        this.hash = hash;
        var stacktrace;

        if (hash && hash.exception instanceof Error) {
            var ex2 = hash.exception;
            this.message = ex2.message || msg;
            stacktrace = ex2.stack;
        }

        if (!stacktrace) {
            if (Error.hasOwnProperty('captureStackTrace')) {
                // V8
                Error.captureStackTrace(this, this.constructor);
            } else {
                stacktrace = new Error(msg).stack;
            }
        }

        if (stacktrace) {
            Object.defineProperty(this, 'stack', {
                enumerable: false,
                writable: false,
                value: stacktrace
            });
        }
    }

    if (typeof Object.setPrototypeOf === 'function') {
        Object.setPrototypeOf(JisonLexerError.prototype, Error.prototype);
    } else {
        JisonLexerError.prototype = Object.create(Error.prototype);
    }

    JisonLexerError.prototype.constructor = JisonLexerError;
    JisonLexerError.prototype.name = 'JisonLexerError';
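The Error-subclassing pattern used for `JisonLexerError` above can be exercised standalone. This is a hedged sketch (the `DemoLexerError` name is invented for illustration), showing why both the `setPrototypeOf` wiring and the classic `prototype.constructor` / `prototype.name` assignments are kept:

```javascript
// Same subclassing pattern, stripped to its essentials:
function DemoLexerError(msg, hash) {
    Object.defineProperty(this, 'name', { enumerable: false, writable: false, value: 'DemoLexerError' });
    Object.defineProperty(this, 'message', { enumerable: false, writable: true, value: msg == null ? '???' : msg });
    this.hash = hash;
    if (Error.captureStackTrace) {
        Error.captureStackTrace(this, this.constructor);   // V8
    }
}

if (typeof Object.setPrototypeOf === 'function') {
    Object.setPrototypeOf(DemoLexerError.prototype, Error.prototype);
} else {
    DemoLexerError.prototype = Object.create(Error.prototype);
}

// kept for userland code which accesses the class in a 'classic' way:
DemoLexerError.prototype.constructor = DemoLexerError;
DemoLexerError.prototype.name = 'DemoLexerError';

var e = new DemoLexerError('bad input', { line: 3 });
```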

    var lexer = {
        // Code Generator Information Report
        // ---------------------------------
        //
        // Options:
        //
        //   backtracking: .................... false
        //   location.ranges: ................. false
        //   location line+column tracking: ... true
        //
        //
        // Forwarded Parser Analysis flags:
        //
        //   uses yyleng: ..................... false
        //   uses yylineno: ................... false
        //   uses yytext: ..................... false
        //   uses yylloc: ..................... false
        //   uses lexer values: ............... true / true
        //   location tracking: ............... false
        //   location assignment: ............. false
        //
        //
        // Lexer Analysis flags:
        //
        //   uses yyleng: ..................... ???
        //   uses yylineno: ................... ???
        //   uses yytext: ..................... ???
        //   uses yylloc: ..................... ???
        //   uses ParseError API: ............. ???
        //   uses yyerror: .................... ???
        //   uses location tracking & editing:  ???
        //   uses more() API: ................. ???
        //   uses unput() API: ................ ???
        //   uses reject() API: ............... ???
        //   uses less() API: ................. ???
        //   uses display APIs pastInput(), upcomingInput(), showPosition():
        //   ................................. ???
        //   uses describeYYLLOC() API: ....... ???
        //
        // --------- END OF REPORT -----------

        EOF: 1,
        ERROR: 2,

        // JisonLexerError: JisonLexerError,        /// <-- injected by the code generator
        // options: {},                             /// <-- injected by the code generator
        // yy: ...,                                 /// <-- injected by setInput()

        __currentRuleSet__: null,                   /// INTERNAL USE ONLY: internal rule set cache for the current lexer state

        __error_infos: [],                          /// INTERNAL USE ONLY: the set of lexErrorInfo objects created since the last cleanup
        __decompressed: false,                      /// INTERNAL USE ONLY: mark whether the lexer instance has been 'unfolded' completely and is now ready for use
        done: false,                                /// INTERNAL USE ONLY
        _backtrack: false,                          /// INTERNAL USE ONLY
        _input: '',                                 /// INTERNAL USE ONLY
        _more: false,                               /// INTERNAL USE ONLY
        _signaled_error_token: false,               /// INTERNAL USE ONLY
        conditionStack: [],                         /// INTERNAL USE ONLY; managed via `pushState()`, `popState()`, `topState()` and `stateStackSize()`
        match: '',                                  /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks input which has been matched so far for the lexer token under construction. `match` is identical to `yytext` except that this one still contains the matched input string after `lexer.performAction()` has been invoked, where userland code MAY have changed/replaced the `yytext` value entirely!
        matched: '',                                /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks entire input which has been matched so far
        matches: false,                             /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks RE match result for last (successful) match attempt
        yytext: '',                                 /// ADVANCED USE ONLY: tracks input which has been matched so far for the lexer token under construction; this value is transferred to the parser as the 'token value' when the parser consumes the lexer token produced through a call to the `lex()` API.
        offset: 0,                                  /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks the 'cursor position' in the input string, i.e. the number of characters matched so far
        yyleng: 0,                                  /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: length of matched input for the token under construction (`yytext`)
        yylineno: 0,                                /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: 'line number' at which the token under construction is located
        yylloc: null,                               /// READ-ONLY EXTERNAL ACCESS - ADVANCED USE ONLY: tracks location info (lines + columns) for the token under construction

        /**
         * INTERNAL USE: construct a suitable error info hash object instance for `parseError`.
         *
         * @public
         * @this {RegExpLexer}
         */
        constructLexErrorInfo: function lexer_constructLexErrorInfo(msg, recoverable, show_input_position) {
            msg = '' + msg;

            // heuristic to determine if the error message already contains a (partial) source code dump
            // as produced by either `showPosition()` or `prettyPrintRange()`:
            if (show_input_position == undefined) {
                show_input_position = !(msg.indexOf('\n') > 0 && msg.indexOf('^') > 0);
            }

            if (this.yylloc && show_input_position) {
                if (typeof this.prettyPrintRange === 'function') {
                    var pretty_src = this.prettyPrintRange(this.yylloc);

                    if (!/\n\s*$/.test(msg)) {
                        msg += '\n';
                    }

                    // reuse the already-computed dump instead of pretty-printing the range twice:
                    msg += '\n  Erroneous area:\n' + pretty_src;
                } else if (typeof this.showPosition === 'function') {
                    var pos_str = this.showPosition();

                    if (pos_str) {
                        if (msg.length && msg[msg.length - 1] !== '\n' && pos_str[0] !== '\n') {
                            msg += '\n' + pos_str;
                        } else {
                            msg += pos_str;
                        }
                    }
                }
            }

            /** @constructor */
            var pei = {
                errStr: msg,
                recoverable: !!recoverable,
                text: this.match,           // This one MAY be empty; userland code should use the `upcomingInput` API to obtain more text which follows the 'lexer cursor position'...
                token: null,
                line: this.yylineno,
                loc: this.yylloc,
                yy: this.yy,
                lexer: this,

                /**
                 * and make sure the error info doesn't stay due to potential
                 * ref cycle via userland code manipulations.
                 * These would otherwise all be memory leak opportunities!
                 *
                 * Note that only array and object references are nuked as those
                 * constitute the set of elements which can produce a cyclic ref.
                 * The rest of the members are kept intact as they are harmless.
                 *
                 * @public
                 * @this {LexErrorInfo}
                 */
                destroy: function destructLexErrorInfo() {
                    // remove cyclic references added to error info:
                    // info.yy = null;
                    // info.lexer = null;
                    // ...
                    var rec = !!this.recoverable;

                    for (var key in this) {
                        // test the member *value*, not the key string, so that the
                        // object/array members get nuked as documented above:
                        if (this.hasOwnProperty(key) && typeof this[key] === 'object') {
                            this[key] = undefined;
                        }
                    }

                    this.recoverable = rec;
                }
            };

            // track this instance so we can `destroy()` it once we deem it superfluous and ready for garbage collection!
            this.__error_infos.push(pei);
            return pei;
        },

        /**
         * handler which is invoked when a lexer error occurs.
         *
         * @public
         * @this {RegExpLexer}
         */
        parseError: function lexer_parseError(str, hash, ExceptionClass) {
            if (!ExceptionClass) {
                ExceptionClass = this.JisonLexerError;
            }

            if (this.yy) {
                if (this.yy.parser && typeof this.yy.parser.parseError === 'function') {
                    return this.yy.parser.parseError.call(this, str, hash, ExceptionClass) || this.ERROR;
                } else if (typeof this.yy.parseError === 'function') {
                    return this.yy.parseError.call(this, str, hash, ExceptionClass) || this.ERROR;
                }
            }

            throw new ExceptionClass(str, hash);
        },
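The delegation order in `parseError` (parser's `parseError` first, then `yy.parseError`, else throw) can be exercised against a minimal stand-in. This is a hedged sketch: the `stub` object is invented; the function body is the one shown above:

```javascript
// Same delegation logic, applied to a hand-built stub lexer:
function demo_parseError(str, hash, ExceptionClass) {
    if (!ExceptionClass) {
        ExceptionClass = this.JisonLexerError;
    }
    if (this.yy) {
        if (this.yy.parser && typeof this.yy.parser.parseError === 'function') {
            return this.yy.parser.parseError.call(this, str, hash, ExceptionClass) || this.ERROR;
        } else if (typeof this.yy.parseError === 'function') {
            return this.yy.parseError.call(this, str, hash, ExceptionClass) || this.ERROR;
        }
    }
    throw new ExceptionClass(str, hash);
}

var calls = [];
var stub = {
    ERROR: 2,
    JisonLexerError: Error,
    parseError: demo_parseError,
    yy: {
        parser: {
            // returns undefined, so the `|| this.ERROR` fallback kicks in:
            parseError: function (str) { calls.push('parser:' + str); }
        },
        // present but never reached: the parser-level handler wins:
        parseError: function (str) { calls.push('yy:' + str); }
    }
};

var rv = stub.parseError('oops', {});
```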

        /**
         * method which implements `yyerror(str, ...args)` functionality for use inside lexer actions.
         *
         * @public
         * @this {RegExpLexer}
         */
        yyerror: function yyError(str /*, ...args */) {
            var lineno_msg = '';

            if (this.yylloc) {
                lineno_msg = ' on line ' + (this.yylineno + 1);
            }

            var p = this.constructLexErrorInfo(
                'Lexical error' + lineno_msg + ': ' + str,
                this.options.lexerErrorsAreRecoverable
            );

            // Add any extra args to the hash under the name `extra_error_attributes`:
            var args = Array.prototype.slice.call(arguments, 1);

            if (args.length) {
                p.extra_error_attributes = args;
            }

            return this.parseError(p.errStr, p, this.JisonLexerError) || this.ERROR;
        },

        /**
         * final cleanup function for when we have completed lexing the input;
         * make it an API so that external code can use this one once userland
         * code has decided it's time to destroy any lingering lexer error
         * hash object instances and the like: this function helps to clean
         * up these constructs, which *may* carry cyclic references which would
         * otherwise prevent the instances from being properly and timely
         * garbage-collected, i.e. this function helps prevent memory leaks!
         *
         * @public
         * @this {RegExpLexer}
         */
        cleanupAfterLex: function lexer_cleanupAfterLex(do_not_nuke_errorinfos) {
            // prevent lingering circular references from causing memory leaks:
            this.setInput('', {});

            // nuke the error hash info instances created during this run.
            // Userland code must COPY any data/references
            // in the error hash instance(s) it is more permanently interested in.
            if (!do_not_nuke_errorinfos) {
                for (var i = this.__error_infos.length - 1; i >= 0; i--) {
                    var el = this.__error_infos[i];

                    if (el && typeof el.destroy === 'function') {
                        el.destroy();
                    }
                }

                this.__error_infos.length = 0;
            }

            return this;
        },
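The error-info teardown loop in `cleanupAfterLex` can be run against plain stand-in objects (no real lexer involved). A hedged sketch, with invented sample objects: each tracked info object gets `destroy()`ed in reverse order, then the tracking array is emptied:

```javascript
// Stand-in for `this.__error_infos` and the cleanup loop above:
var destroyed = [];
var __error_infos = [
    { id: 0, destroy: function () { destroyed.push(this.id); } },
    { id: 1, destroy: function () { destroyed.push(this.id); } },
    { id: 2, destroy: function () { destroyed.push(this.id); } }
];

// walk the list back-to-front, destroying each entry:
for (var i = __error_infos.length - 1; i >= 0; i--) {
    var el = __error_infos[i];
    if (el && typeof el.destroy === 'function') {
        el.destroy();
    }
}
__error_infos.length = 0;
```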

        /**
         * clear the lexer token context; intended for internal use only
         *
         * @public
         * @this {RegExpLexer}
         */
        clear: function lexer_clear() {
            this.yytext = '';
            this.yyleng = 0;
            this.match = '';

            // - DO NOT reset `this.matched`
            this.matches = false;

            this._more = false;
            this._backtrack = false;
            var col = (this.yylloc ? this.yylloc.last_column : 0);

            this.yylloc = {
                first_line: this.yylineno + 1,
                first_column: col,
                last_line: this.yylineno + 1,
                last_column: col,
                range: [this.offset, this.offset]
            };
        },

        /**
         * resets the lexer, sets new input
         *
         * @public
         * @this {RegExpLexer}
         */
        setInput: function lexer_setInput(input, yy) {
            this.yy = yy || this.yy || {};

            // also check if we've fully initialized the lexer instance,
            // including expansion work to be done to go from a loaded
            // lexer to a usable lexer:
            if (!this.__decompressed) {
                // step 1: decompress the regex list:
                var rules = this.rules;

                for (var i = 0, len = rules.length; i < len; i++) {
                    var rule_re = rules[i];

                    // compression: is the RE an xref to another RE slot in the rules[] table?
                    if (typeof rule_re === 'number') {
                        rules[i] = rules[rule_re];
                    }
                }

                // step 2: unfold the conditions[] set to make these ready for use:
                var conditions = this.conditions;

                for (var k in conditions) {
                    var spec = conditions[k];
                    var rule_ids = spec.rules;
                    var len = rule_ids.length;
                    var rule_regexes = new Array(len + 1);    // slot 0 is unused; we use a 1-based index approach here to keep the hottest code in `lexer_next()` fast and simple!
                    var rule_new_ids = new Array(len + 1);

                    for (var i = 0; i < len; i++) {
                        var idx = rule_ids[i];
                        var rule_re = rules[idx];
                        rule_regexes[i + 1] = rule_re;
                        rule_new_ids[i + 1] = idx;
                    }

                    spec.rules = rule_new_ids;
                    spec.__rule_regexes = rule_regexes;
                    spec.__rule_count = len;
                }

                this.__decompressed = true;
            }

            this._input = input || '';
            this.clear();
            this._signaled_error_token = false;
            this.done = false;
            this.yylineno = 0;
            this.matched = '';
            this.conditionStack = ['INITIAL'];
            this.__currentRuleSet__ = null;

            this.yylloc = {
                first_line: 1,
                first_column: 0,
                last_line: 1,
                last_column: 0,
                range: [0, 0]
            };

            this.offset = 0;
            return this;
        },
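Step 1 of the decompression above can be illustrated in isolation: a numeric entry in the `rules[]` table is an xref pointing at another slot and gets replaced by the regex it references. A hedged sketch with an invented sample table:

```javascript
// slot 2 is a compressed xref meaning "same regex as slot 1":
var rules = [/\d+/, /[a-z]+/, 1, /\s+/];

for (var i = 0, len = rules.length; i < len; i++) {
    if (typeof rules[i] === 'number') {
        // resolve the xref to the actual RegExp it points at:
        rules[i] = rules[rules[i]];
    }
}
```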

        /**
         * edit the remaining input via user-specified callback.
         * This can be used to forward-adjust the input-to-parse,
         * e.g. inserting macro expansions and alike in the
         * input which has yet to be lexed.
         * The behaviour of this API contrasts the `unput()` et al
         * APIs as those act on the *consumed* input, while this
         * one allows one to manipulate the future, without impacting
         * the current `yyloc` cursor location or any history.
         *
         * Use this API to help implement C-preprocessor-like
         * `#include` statements, etc.
         *
         * The provided callback must be synchronous and is
         * expected to return the edited input (string).
         *
         * The `cpsArg` argument value is passed to the callback
         * as-is.
         *
         * `callback` interface:
         * `function callback(input, cpsArg)`
         *
         * - `input` will carry the remaining-input-to-lex string
         *   from the lexer.
         * - `cpsArg` is `cpsArg` passed into this API.
         *
         * The `this` reference for the callback will be set to
         * reference this lexer instance so that userland code
         * in the callback can easily and quickly access any lexer
         * API.
         *
         * When the callback returns a non-string-type falsey value,
         * we assume the callback did not edit the input and we
         * will use the input as-is.
         *
         * When the callback returns a non-string-type truthy value, it
         * is converted to a string for lexing via the `"" + retval`
         * operation. (See also why: http://2ality.com/2012/03/converting-to-string.html
         * -- that way any returned object's `valueOf()` and `toString()`
         * methods will be invoked in a proper/desirable order.)
         *
         * @public
         * @this {RegExpLexer}
         */
        editRemainingInput: function lexer_editRemainingInput(callback, cpsArg) {
            var rv = callback.call(this, this._input, cpsArg);

            if (typeof rv !== 'string') {
                if (rv) {
                    this._input = '' + rv;
                }
                // else: keep `this._input` as is.
            } else {
                this._input = rv;
            }

            return this;
        },
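A hedged usage sketch for `editRemainingInput()`: the method body above is applied to a minimal stand-in object, splicing an (invented) include file's text into the not-yet-lexed input, in the C-preprocessor style the documentation suggests:

```javascript
// `stubLexer` only models the `_input` field the API touches:
var stubLexer = {
    _input: '#include "defs"\nrest of file',
    editRemainingInput: function (callback, cpsArg) {
        var rv = callback.call(this, this._input, cpsArg);
        if (typeof rv !== 'string') {
            if (rv) {
                this._input = '' + rv;
            }
            // else: keep `this._input` as is.
        } else {
            this._input = rv;
        }
        return this;
    }
};

// expand the (hypothetical) include directive in the upcoming input:
stubLexer.editRemainingInput(function (input, includes) {
    return input.replace(/#include "(\w+)"/g, function (m, name) {
        return includes[name] || m;
    });
}, { defs: 'var A = 1;' });
```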

        /**
         * consumes and returns one char from the input
         *
         * @public
         * @this {RegExpLexer}
         */
        input: function lexer_input() {
            if (!this._input) {
                //this.done = true;    -- don't set `done` as we want the lex()/next() API to be able to produce one custom EOF token match after this anyhow. (lexer can match special <<EOF>> tokens and perform user action code for a <<EOF>> match, but only does so *once*)
                return null;
            }

            var ch = this._input[0];
            this.yytext += ch;
            this.yyleng++;
            this.offset++;
            this.match += ch;
            this.matched += ch;

            // Count the linenumber up when we hit the LF (or a stand-alone CR).
            // On CRLF, the linenumber is incremented when you fetch the CR or the CRLF combo
            // and we advance immediately past the LF as well, returning both together as if
            // it was all a single 'character' only.
            var slice_len = 1;
            var lines = false;

            if (ch === '\n') {
                lines = true;
            } else if (ch === '\r') {
                lines = true;
                var ch2 = this._input[1];

                if (ch2 === '\n') {
                    slice_len++;
                    ch += ch2;
                    this.yytext += ch2;
                    this.yyleng++;
                    this.offset++;
                    this.match += ch2;
                    this.matched += ch2;
                    this.yylloc.range[1]++;
                }
            }

            if (lines) {
                this.yylineno++;
                this.yylloc.last_line++;
                this.yylloc.last_column = 0;
            } else {
                this.yylloc.last_column++;
            }

            this.yylloc.range[1]++;
            this._input = this._input.slice(slice_len);
            return ch;
        },
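The newline rule in `input()` above (CRLF is consumed as a single step; a stand-alone CR counts as a line break just like LF) can be restated as a tiny standalone helper. This is a hedged re-statement for illustration, not the lexer's own code:

```javascript
// count line breaks the way `input()` does when consuming char by char:
function countLinesLikeLexerInput(text) {
    var lines = 0;
    for (var i = 0; i < text.length; i++) {
        var ch = text[i];
        if (ch === '\n') {
            lines++;
        } else if (ch === '\r') {
            lines++;
            if (text[i + 1] === '\n') {
                i++;                      // swallow the LF half of the CRLF pair
            }
        }
    }
    return lines;
}
```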

        /**
         * unshifts one char (or an entire string) into the input
         *
         * @public
         * @this {RegExpLexer}
         */
        unput: function lexer_unput(ch) {
            var len = ch.length;
            var lines = ch.split(/(?:\r\n?|\n)/g);
            this._input = ch + this._input;
            this.yytext = this.yytext.substr(0, this.yytext.length - len);
            this.yyleng = this.yytext.length;
            this.offset -= len;
            this.match = this.match.substr(0, this.match.length - len);
            this.matched = this.matched.substr(0, this.matched.length - len);

            if (lines.length > 1) {
                this.yylineno -= lines.length - 1;
                this.yylloc.last_line = this.yylineno + 1;

                // Get last entirely matched line into the `pre_lines[]` array's
                // last index slot; we don't mind when other previously
                // matched lines end up in the array too.
                var pre = this.match;
                var pre_lines = pre.split(/(?:\r\n?|\n)/g);

                if (pre_lines.length === 1) {
                    pre = this.matched;
                    pre_lines = pre.split(/(?:\r\n?|\n)/g);
                }

                this.yylloc.last_column = pre_lines[pre_lines.length - 1].length;
            } else {
                this.yylloc.last_column -= len;
            }

            this.yylloc.range[1] = this.yylloc.range[0] + this.yyleng;
            this.done = false;
            return this;
        },

        /**
         * cache matched text and append it on next action
         *
         * @public
         * @this {RegExpLexer}
         */
        more: function lexer_more() {
            this._more = true;
            return this;
        },

        /**
         * signal the lexer that this rule fails to match the input, so the
         * next matching rule (regex) should be tested instead.
         *
         * @public
         * @this {RegExpLexer}
         */
        reject: function lexer_reject() {
            if (this.options.backtrack_lexer) {
                this._backtrack = true;
            } else {
                // when the `parseError()` call returns, we MUST ensure that the error is registered.
                // We accomplish this by signaling an 'error' token to be produced for the current
                // `.lex()` run.
                var lineno_msg = '';

                if (this.yylloc) {
                    lineno_msg = ' on line ' + (this.yylineno + 1);
                }

                var p = this.constructLexErrorInfo(
                    'Lexical error' + lineno_msg + ': You can only invoke reject() in the lexer when the lexer is of the backtracking persuasion (options.backtrack_lexer = true).',
                    false
                );

                this._signaled_error_token = this.parseError(p.errStr, p, this.JisonLexerError) || this.ERROR;
            }

            return this;
        },

        /**
         * retain first n characters of the match
         *
         * @public
         * @this {RegExpLexer}
         */
        less: function lexer_less(n) {
            return this.unput(this.match.slice(n));
        },
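`less(n)` keeps the first `n` characters of the current match and pushes the rest back onto the input. A hedged sketch: the `miniLexer` stand-in is invented and its `unput()` only models the two fields (`match`, `_input`) needed to show the split, not the full location bookkeeping above:

```javascript
// simplified stand-in demonstrating the retain/push-back split of `less(n)`:
var miniLexer = {
    match: 'foobar',
    _input: '!rest',
    unput: function (s) {
        this._input = s + this._input;
        this.match = this.match.substr(0, this.match.length - s.length);
        return this;
    },
    less: function (n) {
        return this.unput(this.match.slice(n));
    }
};

miniLexer.less(3);     // retain 'foo', push 'bar' back onto the input
```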

        /**
         * return (part of the) already matched input, i.e. for error
         * messages.
         *
         * Limit the returned string length to `maxSize` (default: 20).
         *
         * Limit the returned string to the `maxLines` number of lines of
         * input (default: 1).
         *
         * Negative limit values equal *unlimited*.
         *
         * @public
         * @this {RegExpLexer}
         */
        pastInput: function lexer_pastInput(maxSize, maxLines) {
            var past = this.matched.substring(0, this.matched.length - this.match.length);

            if (maxSize < 0)
                maxSize = past.length;
            else if (!maxSize)
                maxSize = 20;

            if (maxLines < 0)
                maxLines = past.length;    // can't ever have more input lines than this!
            else if (!maxLines)
                maxLines = 1;

            // `substr` anticipation: treat \r\n as a single character and take a little
            // more than necessary so that we can still properly check against maxSize
            // after we've transformed and limited the newLines in here:
            past = past.substr(-maxSize * 2 - 2);

            // now that we have a significantly reduced string to process, transform the newlines
            // and chop them, then limit them:
            var a = past.replace(/\r\n|\r/g, '\n').split('\n');

            a = a.slice(-maxLines);
            past = a.join('\n');

            // When, after limiting to maxLines, we still have too much to return,
            // do add an ellipsis prefix...
            if (past.length > maxSize) {
                past = '...' + past.substr(-maxSize);
            }

            return past;
        },

        /**
         * return (part of the) upcoming input, i.e. for error messages.
         *
         * Limit the returned string length to `maxSize` (default: 20).
         *
         * Limit the returned string to the `maxLines` number of lines of input (default: 1).
         *
         * Negative limit values equal *unlimited*.
         *
         * > ### NOTE ###
         * >
         * > *"upcoming input"* is defined as the whole of both
         * > the *currently lexed* input, together with any remaining input
         * > following that. *"currently lexed"* input is the input
         * > already recognized by the lexer but not yet returned with
         * > the lexer token. This happens when you are invoking this API
         * > from inside any lexer rule action code block.
         * >
         *
         * @public
         * @this {RegExpLexer}
         */
        upcomingInput: function lexer_upcomingInput(maxSize, maxLines) {
            var next = this.match;

            if (maxSize < 0)
                maxSize = next.length + this._input.length;
            else if (!maxSize)
                maxSize = 20;

            if (maxLines < 0)
                maxLines = maxSize;    // can't ever have more input lines than this!
            else if (!maxLines)
                maxLines = 1;

            // `substring` anticipation: treat \r\n as a single character and take a little
            // more than necessary so that we can still properly check against maxSize
            // after we've transformed and limited the newLines in here:
            if (next.length < maxSize * 2 + 2) {
                next += this._input.substring(0, maxSize * 2 + 2);    // substring is faster on Chrome/V8
            }

            // now that we have a significantly reduced string to process, transform the newlines
            // and chop them, then limit them:
            var a = next.replace(/\r\n|\r/g, '\n').split('\n');

            a = a.slice(0, maxLines);
            next = a.join('\n');

            // When, after limiting to maxLines, we still have too much to return,
            // do add an ellipsis postfix...
            if (next.length > maxSize) {
                next = next.substring(0, maxSize) + '...';
            }

            return next;
        },

        /**
         * return a string which displays the character position where the
         * lexing error occurred, i.e. for error messages
         *
         * @public
         * @this {RegExpLexer}
         */
        showPosition: function lexer_showPosition(maxPrefix, maxPostfix) {
            var pre = this.pastInput(maxPrefix).replace(/\s/g, ' ');
            var c = new Array(pre.length + 1).join('-');
            return pre + this.upcomingInput(maxPostfix).replace(/\s/g, ' ') + '\n' + c + '^';
        },
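The caret-line layout `showPosition()` produces (dashes under the already-consumed text, a `^` at the error position) can be shown in isolation. A hedged sketch: here the past/upcoming strings are supplied directly instead of being computed from lexer state via `pastInput()`/`upcomingInput()`:

```javascript
// same formatting step as `showPosition()`, with the two halves given directly:
function formatPositionLine(pre, upcoming) {
    pre = pre.replace(/\s/g, ' ');
    var c = new Array(pre.length + 1).join('-');    // one dash per consumed character
    return pre + upcoming.replace(/\s/g, ' ') + '\n' + c + '^';
}

var display = formatPositionLine('var x = ', '@;');
```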

        /**
         * return an YYLLOC info object derived off the given context (actual, preceding, following, current).
         * Use this method when the given `actual` location is not guaranteed to exist (i.e. when
         * it MAY be NULL) and you MUST have a valid location info object anyway:
         * then we take the given context of the `preceding` and `following` locations, IFF those are available,
         * and reconstruct the `actual` location info from those.
         * If this fails, the heuristic is to take the `current` location, IFF available.
         * If this fails as well, we assume the sought location is at/around the current lexer position
         * and then produce that one as a response. DO NOTE that these heuristic/derived location info
         * values MAY be inaccurate!
         *
         * NOTE: `deriveLocationInfo()` ALWAYS produces a location info object *copy* of `actual`, not just
         * a *reference*, hence all input location objects can be assumed to be 'constant' (function has no side-effects).
         *
         * @public
         * @this {RegExpLexer}
         */
        deriveLocationInfo: function lexer_deriveYYLLOC(actual, preceding, following, current) {
            var loc = {
                first_line: 1,
                first_column: 0,
                last_line: 1,
                last_column: 0,
                range: [0, 0]
            };

            if (actual) {
                loc.first_line = actual.first_line | 0;
                loc.last_line = actual.last_line | 0;
                loc.first_column = actual.first_column | 0;
                loc.last_column = actual.last_column | 0;

                if (actual.range) {
                    loc.range[0] = actual.range[0] | 0;
                    loc.range[1] = actual.range[1] | 0;
                }
            }

            if (loc.first_line <= 0 || loc.last_line < loc.first_line) {
                // plan B: heuristic using preceding and following:
                if (loc.first_line <= 0 && preceding) {
                    loc.first_line = preceding.last_line | 0;
                    loc.first_column = preceding.last_column | 0;

                    if (preceding.range) {
                        // read from `preceding`, not `actual`, which may be NULL here:
                        loc.range[0] = preceding.range[1] | 0;
                    }
                }

                if ((loc.last_line <= 0 || loc.last_line < loc.first_line) && following) {
                    loc.last_line = following.first_line | 0;
                    loc.last_column = following.first_column | 0;

                    if (following.range) {
                        // read from `following`, not `actual`, which may be NULL here:
                        loc.range[1] = following.range[0] | 0;
                    }
                }

                // plan C?: see if the 'current' location is useful/sane too:
                if (loc.first_line <= 0 && current && (loc.last_line <= 0 || current.last_line <= loc.last_line)) {
                    loc.first_line = current.first_line | 0;
                    loc.first_column = current.first_column | 0;

                    if (current.range) {
                        loc.range[0] = current.range[0] | 0;
                    }
                }

                if (loc.last_line <= 0 && current && (loc.first_line <= 0 || current.first_line >= loc.first_line)) {
                    loc.last_line = current.last_line | 0;
                    loc.last_column = current.last_column | 0;

                    if (current.range) {
                        loc.range[1] = current.range[1] | 0;
                    }
                }
            }

            // sanitize: fix last_line BEFORE we fix first_line as we use the 'raw' value of the latter
            // or plan D heuristics to produce a 'sensible' last_line value:
            if (loc.last_line <= 0) {
                if (loc.first_line <= 0) {
                    loc.first_line = this.yylloc.first_line;
                    loc.last_line = this.yylloc.last_line;
                    loc.first_column = this.yylloc.first_column;
                    loc.last_column = this.yylloc.last_column;
                    loc.range[0] = this.yylloc.range[0];
                    loc.range[1] = this.yylloc.range[1];
                } else {
                    loc.last_line = this.yylloc.last_line;
                    loc.last_column = this.yylloc.last_column;
                    loc.range[1] = this.yylloc.range[1];
                }
            }

            if (loc.first_line <= 0) {
                loc.first_line = loc.last_line;
                loc.first_column = 0;    // loc.last_column;
                loc.range[0] = loc.range[1];
            }

            if (loc.first_column < 0) {
                loc.first_column = 0;
            }

            if (loc.last_column < 0) {
                loc.last_column = (loc.first_column > 0 ? loc.first_column : 80);
            }

            return loc;
        },
|
|
|
|
|
|
/**
|
|
|
* return a string which displays the lines & columns of input which are referenced
|
|
|
* by the given location info range, plus a few lines of context.
|
|
|
*
|
|
|
* This function pretty-prints the indicated section of the input, with line numbers
|
|
|
* and everything!
|
|
|
*
|
|
|
* This function is very useful to provide highly readable error reports, while
|
|
|
* the location range may be specified in various flexible ways:
|
|
|
*
|
|
|
* - `loc` is the location info object which references the area which should be
|
|
|
* displayed and 'marked up': these lines & columns of text are marked up by `^`
|
|
|
* characters below each character in the entire input range.
|
|
|
*
|
|
|
* - `context_loc` is the *optional* location info object which instructs this
|
|
|
* pretty-printer how much *leading* context should be displayed alongside
|
|
|
* the area referenced by `loc`. This can help provide context for the displayed
|
|
|
* error, etc.
|
|
|
*
|
|
|
* When this location info is not provided, a default context of 3 lines is
|
|
|
* used.
|
|
|
*
|
|
|
* - `context_loc2` is another *optional* location info object, which serves
|
|
|
* a similar purpose to `context_loc`: it specifies the amount of *trailing*
|
|
|
* context lines to display in the pretty-print output.
|
|
|
*
|
|
|
* When this location info is not provided, a default context of 1 line only is
|
|
|
* used.
|
|
|
*
|
|
|
* Special Notes:
|
|
|
*
|
|
|
* - when the `loc`-indicated range is very large (about 5 lines or more), then
|
|
|
* only the first and last few lines of this block are printed while a
|
|
|
* `...continued...` message will be printed between them.
|
|
|
*
|
|
|
* This serves the purpose of not printing a huge amount of text when the `loc`
|
|
|
* range happens to be huge: this way a manageable & readable output results
|
|
|
* for arbitrary large ranges.
|
|
|
*
|
|
|
* - this function can display lines of input which whave not yet been lexed.
|
|
|
* `prettyPrintRange()` can access the entire input!
|
|
|
*
|
|
|
* @public
|
|
|
* @this {RegExpLexer}
|
|
|
*/
|
|
|
    prettyPrintRange: function lexer_prettyPrintRange(loc, context_loc, context_loc2) {
        loc = this.deriveLocationInfo(loc, context_loc, context_loc2);
        const CONTEXT = 3;
        const CONTEXT_TAIL = 1;
        const MINIMUM_VISIBLE_NONEMPTY_LINE_COUNT = 2;
        var input = this.matched + this._input;
        var lines = input.split('\n');
        var l0 = Math.max(1, (context_loc ? context_loc.first_line : loc.first_line - CONTEXT));
        var l1 = Math.max(1, (context_loc2 ? context_loc2.last_line : loc.last_line + CONTEXT_TAIL));
        var lineno_display_width = 1 + Math.log10(l1 | 1) | 0;
        var ws_prefix = new Array(lineno_display_width).join(' ');
        var nonempty_line_indexes = [];

        var rv = lines.slice(l0 - 1, l1 + 1).map(function injectLineNumber(line, index) {
            var lno = index + l0;
            var lno_pfx = (ws_prefix + lno).substr(-lineno_display_width);
            var rv = lno_pfx + ': ' + line;
            var errpfx = new Array(lineno_display_width + 1).join('^');
            var offset = 2 + 1;
            var len = 0;

            if (lno === loc.first_line) {
                offset += loc.first_column;

                len = Math.max(
                    2,
                    ((lno === loc.last_line ? loc.last_column : line.length)) - loc.first_column + 1
                );
            } else if (lno === loc.last_line) {
                len = Math.max(2, loc.last_column + 1);
            } else if (lno > loc.first_line && lno < loc.last_line) {
                len = Math.max(2, line.length + 1);
            }

            if (len) {
                var lead = new Array(offset).join('.');
                var mark = new Array(len).join('^');
                rv += '\n' + errpfx + lead + mark;

                if (line.trim().length > 0) {
                    nonempty_line_indexes.push(index);
                }
            }

            rv = rv.replace(/\t/g, ' ');
            return rv;
        });

        // now make sure we don't print an overly large amount of error area: limit it
        // to the top and bottom line count:
        if (nonempty_line_indexes.length > 2 * MINIMUM_VISIBLE_NONEMPTY_LINE_COUNT) {
            var clip_start = nonempty_line_indexes[MINIMUM_VISIBLE_NONEMPTY_LINE_COUNT - 1] + 1;
            var clip_end = nonempty_line_indexes[nonempty_line_indexes.length - MINIMUM_VISIBLE_NONEMPTY_LINE_COUNT] - 1;
            var intermediate_line = new Array(lineno_display_width + 1).join(' ') + ' (...continued...)';
            intermediate_line += '\n' + new Array(lineno_display_width + 1).join('-') + ' (---------------)';
            rv.splice(clip_start, clip_end - clip_start + 1, intermediate_line);
        }

        return rv.join('\n');
    },
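A standalone sketch of the pretty-print technique used above (hypothetical helper, not part of the generated lexer): prefix each line with a right-aligned line number gutter and underline the reported column span with `^` markers.

```javascript
// Minimal sketch, assuming a single-line error span: render a numbered gutter
// and a caret underline, the same visual layout prettyPrintRange() produces.
function sketchPrettyPrint(lines, firstLine, firstCol, lastCol) {
    var width = String(lines.length).length;
    return lines.map(function (line, i) {
        var lno = String(i + 1).padStart(width, ' ');
        var out = lno + ': ' + line;
        if (i + 1 === firstLine) {
            // pad past the gutter (width + ': ') up to the error column,
            // then mark the error span with carets
            var lead = ' '.repeat(width + 2 + firstCol);
            out += '\n' + lead + '^'.repeat(Math.max(1, lastCol - firstCol));
        }
        return out;
    }).join('\n');
}

console.log(sketchPrettyPrint(['let x = 1;', 'let y = ;'], 2, 8, 9));
```

The real implementation additionally clips very large ranges with a `(...continued...)` marker, as the surrounding code shows.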

    /**
     * helper function, used to produce a human readable description as a string, given
     * the input `yylloc` location object.
     *
     * Set `display_range_too` to TRUE to include the string character index position(s)
     * in the description if the `yylloc.range` is available.
     *
     * @public
     * @this {RegExpLexer}
     */
    describeYYLLOC: function lexer_describe_yylloc(yylloc, display_range_too) {
        var l1 = yylloc.first_line;
        var l2 = yylloc.last_line;
        var c1 = yylloc.first_column;
        var c2 = yylloc.last_column;
        var dl = l2 - l1;
        var dc = c2 - c1;
        var rv;

        if (dl === 0) {
            rv = 'line ' + l1 + ', ';

            if (dc <= 1) {
                rv += 'column ' + c1;
            } else {
                rv += 'columns ' + c1 + ' .. ' + c2;
            }
        } else {
            rv = 'lines ' + l1 + '(column ' + c1 + ') .. ' + l2 + '(column ' + c2 + ')';
        }

        if (yylloc.range && display_range_too) {
            var r1 = yylloc.range[0];
            var r2 = yylloc.range[1] - 1;

            if (r2 <= r1) {
                rv += ' {String Offset: ' + r1 + '}';
            } else {
                rv += ' {String Offset range: ' + r1 + ' .. ' + r2 + '}';
            }
        }

        return rv;
    },
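A hypothetical usage sketch of the description format this helper produces, with a `describeLoc` stand-in implementing the same single-line vs. multi-line branching:

```javascript
// Sketch (assumed names): format a yylloc-shaped object into the same
// "line N, columns A .. B" / "lines A(column X) .. B(column Y)" strings
// that describeYYLLOC() emits.
function describeLoc(yylloc) {
    if (yylloc.last_line === yylloc.first_line) {
        return 'line ' + yylloc.first_line + ', columns ' +
            yylloc.first_column + ' .. ' + yylloc.last_column;
    }
    return 'lines ' + yylloc.first_line + '(column ' + yylloc.first_column +
        ') .. ' + yylloc.last_line + '(column ' + yylloc.last_column + ')';
}

var loc = { first_line: 1, last_line: 1, first_column: 4, last_column: 9 };
console.log(describeLoc(loc)); // "line 1, columns 4 .. 9"
```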

    /**
     * test the lexed token: return FALSE when not a match, otherwise return token.
     *
     * `match` is supposed to be an array coming out of a regex match, i.e. `match[0]`
     * contains the actually matched text string.
     *
     * Also move the input cursor forward and update the match collectors:
     *
     * - `yytext`
     * - `yyleng`
     * - `match`
     * - `matches`
     * - `yylloc`
     * - `offset`
     *
     * @public
     * @this {RegExpLexer}
     */
    test_match: function lexer_test_match(match, indexed_rule) {
        var token, lines, backup, match_str, match_str_len;

        if (this.options.backtrack_lexer) {
            // save context
            backup = {
                yylineno: this.yylineno,

                yylloc: {
                    first_line: this.yylloc.first_line,
                    last_line: this.yylloc.last_line,
                    first_column: this.yylloc.first_column,
                    last_column: this.yylloc.last_column,
                    range: this.yylloc.range.slice(0)
                },

                yytext: this.yytext,
                match: this.match,
                matches: this.matches,
                matched: this.matched,
                yyleng: this.yyleng,
                offset: this.offset,
                _more: this._more,
                _input: this._input,

                //_signaled_error_token: this._signaled_error_token,
                yy: this.yy,

                conditionStack: this.conditionStack.slice(0),
                done: this.done
            };
        }

        match_str = match[0];
        match_str_len = match_str.length;

        // if (match_str.indexOf('\n') !== -1 || match_str.indexOf('\r') !== -1) {
        lines = match_str.split(/(?:\r\n?|\n)/g);

        if (lines.length > 1) {
            this.yylineno += lines.length - 1;
            this.yylloc.last_line = this.yylineno + 1;
            this.yylloc.last_column = lines[lines.length - 1].length;
        } else {
            this.yylloc.last_column += match_str_len;
        }

        // }
        this.yytext += match_str;

        this.match += match_str;
        this.matched += match_str;
        this.matches = match;
        this.yyleng = this.yytext.length;
        this.yylloc.range[1] += match_str_len;

        // previous lex rules MAY have invoked the `more()` API rather than producing a token:
        // those rules will already have moved this `offset` forward matching their match lengths,
        // hence we must only add our own match length now:
        this.offset += match_str_len;

        this._more = false;
        this._backtrack = false;
        this._input = this._input.slice(match_str_len);

        // calling this method:
        //
        //   function lexer__performAction(yy, yyrulenumber, YY_START) {...}
        token = this.performAction.call(
            this,
            this.yy,
            indexed_rule,
            this.conditionStack[this.conditionStack.length - 1] /* = YY_START */
        );

        // otherwise, when the action codes are all simple return token statements:
        //token = this.simpleCaseActionClusters[indexed_rule];

        if (this.done && this._input) {
            this.done = false;
        }

        if (token) {
            return token;
        } else if (this._backtrack) {
            // recover context
            for (var k in backup) {
                this[k] = backup[k];
            }

            this.__currentRuleSet__ = null;
            return false; // rule action called reject() implying the next rule should be tested instead.
        } else if (this._signaled_error_token) {
            // produce one 'error' token as `.parseError()` in `reject()`
            // did not guarantee a failure signal by throwing an exception!
            token = this._signaled_error_token;

            this._signaled_error_token = false;
            return token;
        }

        return false;
    },
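A minimal sketch of the location bookkeeping `test_match()` performs (hypothetical helper name): split the matched text on newlines and advance the line/column counters accordingly.

```javascript
// Sketch of the yylloc update step: a match crossing N newlines advances
// last_line by N and resets last_column to the length of the final segment;
// a single-line match only widens last_column.
function advanceLoc(loc, matchStr) {
    var lines = matchStr.split(/(?:\r\n?|\n)/g);
    if (lines.length > 1) {
        loc.last_line += lines.length - 1;
        loc.last_column = lines[lines.length - 1].length;
    } else {
        loc.last_column += matchStr.length;
    }
    return loc;
}

var loc = { last_line: 1, last_column: 5 };
advanceLoc(loc, 'foo\nbar'); // crosses one newline
console.log(loc); // { last_line: 2, last_column: 3 }
```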

    /**
     * return next match in input
     *
     * @public
     * @this {RegExpLexer}
     */
    next: function lexer_next() {
        if (this.done) {
            this.clear();
            return this.EOF;
        }

        if (!this._input) {
            this.done = true;
        }

        var token, match, tempMatch, index;

        if (!this._more) {
            this.clear();
        }

        var spec = this.__currentRuleSet__;

        if (!spec) {
            // Update the ruleset cache as we apparently encountered a state change or just started lexing.
            // The cache is set up for fast lookup -- we assume a lexer will switch states much less often than it will
            // invoke the `lex()` token-producing API and related APIs, hence caching the set for direct access helps
            // speed up those activities a tiny bit.
            spec = this.__currentRuleSet__ = this._currentRules();

            // Check whether a *sane* condition has been pushed before: this makes the lexer robust against
            // user-programmer bugs such as https://github.com/zaach/jison-lex/issues/19
            if (!spec || !spec.rules) {
                var lineno_msg = '';

                if (this.options.trackPosition) {
                    lineno_msg = ' on line ' + (this.yylineno + 1);
                }

                var p = this.constructLexErrorInfo(
                    'Internal lexer engine error' + lineno_msg + ': The lex grammar programmer pushed a non-existing condition name "' + this.topState() + '"; this is a fatal error and should be reported to the application programmer team!',
                    false
                );

                // produce one 'error' token until this situation has been resolved, most probably by parse termination!
                return this.parseError(p.errStr, p, this.JisonLexerError) || this.ERROR;
            }
        }

        var rule_ids = spec.rules;
        var regexes = spec.__rule_regexes;
        var len = spec.__rule_count;

        // Note: the arrays are 1-based, while `len` itself is a valid index,
        // hence the non-standard less-or-equal check in the next loop condition!
        for (var i = 1; i <= len; i++) {
            tempMatch = this._input.match(regexes[i]);

            if (tempMatch && (!match || tempMatch[0].length > match[0].length)) {
                match = tempMatch;
                index = i;

                if (this.options.backtrack_lexer) {
                    token = this.test_match(tempMatch, rule_ids[i]);

                    if (token !== false) {
                        return token;
                    } else if (this._backtrack) {
                        match = undefined;
                        continue; // rule action called reject() implying a rule MISmatch.
                    } else {
                        // else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
                        return false;
                    }
                } else if (!this.options.flex) {
                    break;
                }
            }
        }

        if (match) {
            token = this.test_match(match, rule_ids[index]);

            if (token !== false) {
                return token;
            }

            // else: this is a lexer rule which consumes input without producing a token (e.g. whitespace)
            return false;
        }

        if (!this._input) {
            this.done = true;
            this.clear();
            return this.EOF;
        } else {
            var lineno_msg = '';

            if (this.options.trackPosition) {
                lineno_msg = ' on line ' + (this.yylineno + 1);
            }

            var p = this.constructLexErrorInfo(
                'Lexical error' + lineno_msg + ': Unrecognized text.',
                this.options.lexerErrorsAreRecoverable
            );

            var pendingInput = this._input;
            var activeCondition = this.topState();
            var conditionStackDepth = this.conditionStack.length;
            token = this.parseError(p.errStr, p, this.JisonLexerError) || this.ERROR;

            if (token === this.ERROR) {
                // we can try to recover from a lexer error that `parseError()` did not 'recover' for us
                // by moving forward at least one character at a time IFF the (user-specified?) `parseError()`
                // has not consumed/modified any pending input or changed state in the error handler:
                if (!this.matches && // and make sure the input has been modified/consumed ...
                    pendingInput === this._input && // ...or the lexer state has been modified significantly enough
                    // to merit a non-consuming error handling action right now.
                    activeCondition === this.topState() && conditionStackDepth === this.conditionStack.length) {
                    this.input();
                }
            }

            return token;
        }
    },
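A standalone sketch of the matching strategy `next()` uses (hypothetical helper, simplified): try each anchored rule regex against the remaining input and keep the longest match, i.e. flex-style longest-match-wins rather than first-match-wins.

```javascript
// Sketch, assuming all rule regexes are ^-anchored like the generated
// `rules` array below: return the longest match and the winning rule index.
function longestMatch(input, regexes) {
    var best = null, bestIndex = -1;
    regexes.forEach(function (re, i) {
        var m = input.match(re);
        if (m && (!best || m[0].length > best[0].length)) {
            best = m;
            bestIndex = i;
        }
    });
    return best && { text: best[0], rule: bestIndex };
}

var rules = [/^[0-9]+/, /^[0-9]+px\b/];
console.log(longestMatch('12px + 3', rules)); // { text: '12px', rule: 1 }
```

This is why `12px` lexes as one dimension token rather than the number `12`: the longer dimension match beats the shorter bare-number match.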

    /**
     * return next match that has a token
     *
     * @public
     * @this {RegExpLexer}
     */
    lex: function lexer_lex() {
        var r;

        // allow the PRE/POST handlers to set/modify the return token for maximum flexibility of the generated lexer:
        if (typeof this.pre_lex === 'function') {
            r = this.pre_lex.call(this, 0);
        }

        if (typeof this.options.pre_lex === 'function') {
            // (also account for a userdef function which does not return any value: keep the token as is)
            r = this.options.pre_lex.call(this, r) || r;
        }

        if (this.yy && typeof this.yy.pre_lex === 'function') {
            // (also account for a userdef function which does not return any value: keep the token as is)
            r = this.yy.pre_lex.call(this, r) || r;
        }

        while (!r) {
            r = this.next();
        }

        if (this.yy && typeof this.yy.post_lex === 'function') {
            // (also account for a userdef function which does not return any value: keep the token as is)
            r = this.yy.post_lex.call(this, r) || r;
        }

        if (typeof this.options.post_lex === 'function') {
            // (also account for a userdef function which does not return any value: keep the token as is)
            r = this.options.post_lex.call(this, r) || r;
        }

        if (typeof this.post_lex === 'function') {
            // (also account for a userdef function which does not return any value: keep the token as is)
            r = this.post_lex.call(this, r) || r;
        }

        return r;
    },
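A standalone sketch of the hook chaining `lex()` performs (hypothetical helper): each pre/post callback may replace the token, and a callback that returns nothing leaves the token unchanged, which is what the `|| r` fallback implements.

```javascript
// Sketch of the pre_lex/post_lex chaining contract: run each hook in order,
// keeping the previous token whenever a hook returns a falsy value.
function applyHooks(token, hooks) {
    hooks.forEach(function (h) {
        token = h(token) || token;
    });
    return token;
}

var hooks = [
    function (t) { /* logging-style hook: returns nothing, token kept */ },
    function (t) { return t === 7 ? 8 : t; } // rewriting hook
];
console.log(applyHooks(7, hooks)); // 8
```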

    /**
     * return next match that has a token. Identical to the `lex()` API but does not invoke any of the
     * `pre_lex()` nor any of the `post_lex()` callbacks.
     *
     * @public
     * @this {RegExpLexer}
     */
    fastLex: function lexer_fastLex() {
        var r;

        while (!r) {
            r = this.next();
        }

        return r;
    },

    /**
     * return info about the lexer state that can help a parser or other lexer API user to use the
     * most efficient means available. This API is provided to aid run-time performance for larger
     * systems which employ this lexer.
     *
     * @public
     * @this {RegExpLexer}
     */
    canIUse: function lexer_canIUse() {
        var rv = {
            fastLex: !(typeof this.pre_lex === 'function' || typeof this.options.pre_lex === 'function' || this.yy && typeof this.yy.pre_lex === 'function' || this.yy && typeof this.yy.post_lex === 'function' || typeof this.options.post_lex === 'function' || typeof this.post_lex === 'function') && typeof this.fastLex === 'function'
        };

        return rv;
    },

    /**
     * backwards compatible alias for `pushState()`;
     * the latter is symmetrical with `popState()` and we advise to use
     * those APIs in any modern lexer code, rather than `begin()`.
     *
     * @public
     * @this {RegExpLexer}
     */
    begin: function lexer_begin(condition) {
        return this.pushState(condition);
    },

    /**
     * activates a new lexer condition state (pushes the new lexer
     * condition state onto the condition stack)
     *
     * @public
     * @this {RegExpLexer}
     */
    pushState: function lexer_pushState(condition) {
        this.conditionStack.push(condition);
        this.__currentRuleSet__ = null;
        return this;
    },

    /**
     * pop the previously active lexer condition state off the condition
     * stack
     *
     * @public
     * @this {RegExpLexer}
     */
    popState: function lexer_popState() {
        var n = this.conditionStack.length - 1;

        if (n > 0) {
            this.__currentRuleSet__ = null;
            return this.conditionStack.pop();
        } else {
            return this.conditionStack[0];
        }
    },

    /**
     * return the currently active lexer condition state; when an index
     * argument is provided it produces the N-th previous condition state,
     * if available
     *
     * @public
     * @this {RegExpLexer}
     */
    topState: function lexer_topState(n) {
        n = this.conditionStack.length - 1 - Math.abs(n || 0);

        if (n >= 0) {
            return this.conditionStack[n];
        } else {
            return 'INITIAL';
        }
    },

    /**
     * (internal) determine the lexer rule set which is active for the
     * currently active lexer condition state
     *
     * @public
     * @this {RegExpLexer}
     */
    _currentRules: function lexer__currentRules() {
        if (this.conditionStack.length && this.conditionStack[this.conditionStack.length - 1]) {
            return this.conditions[this.conditionStack[this.conditionStack.length - 1]];
        } else {
            return this.conditions['INITIAL'];
        }
    },

    /**
     * return the number of states currently on the stack
     *
     * @public
     * @this {RegExpLexer}
     */
    stateStackSize: function lexer_stateStackSize() {
        return this.conditionStack.length;
    },
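A toy model (not the lexer's actual implementation) of the condition-stack contract the methods above implement: pushing and popping states, with the bottom `'INITIAL'` entry never actually popped off.

```javascript
// Sketch of the pushState()/popState()/topState() behaviour: a plain array
// stack whose last remaining entry is retained rather than removed.
function makeStack() {
    return {
        stack: ['INITIAL'],
        push: function (c) { this.stack.push(c); },
        pop: function () {
            return this.stack.length > 1 ? this.stack.pop() : this.stack[0];
        },
        top: function () { return this.stack[this.stack.length - 1]; }
    };
}

var s = makeStack();
s.push('comment');
console.log(s.top()); // 'comment'
s.pop();
s.pop(); // bottom entry is retained
console.log(s.top()); // 'INITIAL'
```

The real methods also null out `__currentRuleSet__` on every state change so `next()` re-resolves the active rule set on its next call.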

    options: {
        trackPosition: true,
        caseInsensitive: true
    },

    JisonLexerError: JisonLexerError,

    performAction: function lexer__performAction(yy, yyrulenumber, YY_START) {
        var yy_ = this;
        var YYSTATE = YY_START;

        switch (yyrulenumber) {
        case 0:
            /*! Conditions:: INITIAL */
            /*! Rule:: \s+ */
            /* skip whitespace */
            break;

        default:
            return this.simpleCaseActionClusters[yyrulenumber];
        }
    },

    simpleCaseActionClusters: {
        /*! Conditions:: INITIAL */
        /*! Rule:: (-(webkit|moz)-)?calc\b */
        1: 3,

        /*! Conditions:: INITIAL */
        /*! Rule:: [a-z][a-z0-9-]*\s*\((?:(?:"(?:\\.|[^\"\\])*"|'(?:\\.|[^\'\\])*')|\([^)]*\)|[^\(\)]*)*\) */
        2: 10,

        /*! Conditions:: INITIAL */
        /*! Rule:: \* */
        3: 8,

        /*! Conditions:: INITIAL */
        /*! Rule:: \/ */
        4: 9,

        /*! Conditions:: INITIAL */
        /*! Rule:: \+ */
        5: 6,

        /*! Conditions:: INITIAL */
        /*! Rule:: - */
        6: 7,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)em\b */
        7: 17,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)ex\b */
        8: 18,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)ch\b */
        9: 19,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)rem\b */
        10: 20,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)vw\b */
        11: 22,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)vh\b */
        12: 21,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)vmin\b */
        13: 23,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)vmax\b */
        14: 24,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)cm\b */
        15: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)mm\b */
        16: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)Q\b */
        17: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)in\b */
        18: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)pt\b */
        19: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)pc\b */
        20: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)px\b */
        21: 11,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)deg\b */
        22: 12,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)grad\b */
        23: 12,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)rad\b */
        24: 12,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)turn\b */
        25: 12,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)s\b */
        26: 13,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)ms\b */
        27: 13,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)Hz\b */
        28: 14,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)kHz\b */
        29: 14,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)dpi\b */
        30: 15,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)dpcm\b */
        31: 15,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)dppx\b */
        32: 15,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)% */
        33: 25,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)\b */
        34: 26,

        /*! Conditions:: INITIAL */
        /*! Rule:: (([0-9]+(\.[0-9]+)?|\.[0-9]+)(e(\+|-)[0-9]+)?)-?([a-zA-Z_]|[\240-\377]|(\\[0-9a-fA-F]{1,6}(\r\n|[ \t\r\n\f])?|\\[^\r\n\f0-9a-fA-F]))([a-zA-Z0-9_-]|[\240-\377]|(\\[0-9a-fA-F]{1,6}(\r\n|[ \t\r\n\f])?|\\[^\r\n\f0-9a-fA-F]))*\b */
        35: 16,

        /*! Conditions:: INITIAL */
        /*! Rule:: \( */
        36: 4,

        /*! Conditions:: INITIAL */
        /*! Rule:: \) */
        37: 5,

        /*! Conditions:: INITIAL */
        /*! Rule:: $ */
        38: 1
    },

    rules: [
        /*  0: */ /^(?:\s+)/i,
        /*  1: */ /^(?:(-(webkit|moz)-)?calc\b)/i,
        /*  2: */ /^(?:[a-z][\d\-a-z]*\s*\((?:(?:"(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*')|\([^)]*\)|[^()]*)*\))/i,
        /*  3: */ /^(?:\*)/i,
        /*  4: */ /^(?:\/)/i,
        /*  5: */ /^(?:\+)/i,
        /*  6: */ /^(?:-)/i,
        /*  7: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)em\b)/i,
        /*  8: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)ex\b)/i,
        /*  9: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)ch\b)/i,
        /* 10: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)rem\b)/i,
        /* 11: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)vw\b)/i,
        /* 12: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)vh\b)/i,
        /* 13: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)vmin\b)/i,
        /* 14: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)vmax\b)/i,
        /* 15: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)cm\b)/i,
        /* 16: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)mm\b)/i,
        /* 17: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)Q\b)/i,
        /* 18: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)in\b)/i,
        /* 19: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)pt\b)/i,
        /* 20: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)pc\b)/i,
        /* 21: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)px\b)/i,
        /* 22: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)deg\b)/i,
        /* 23: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)grad\b)/i,
        /* 24: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)rad\b)/i,
        /* 25: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)turn\b)/i,
        /* 26: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)s\b)/i,
        /* 27: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)ms\b)/i,
        /* 28: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)Hz\b)/i,
        /* 29: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)kHz\b)/i,
        /* 30: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)dpi\b)/i,
        /* 31: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)dpcm\b)/i,
        /* 32: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)dppx\b)/i,
        /* 33: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)%)/i,
        /* 34: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)\b)/i,
        /* 35: */ /^(?:((\d+(\.\d+)?|\.\d+)(e(\+|-)\d+)?)-?([^\W\d]|[ -ÿ]|(\\[\dA-Fa-f]{1,6}(\r\n|[\t\n\f\r ])?|\\[^\d\n\f\rA-Fa-f]))([\w\-]|[ -ÿ]|(\\[\dA-Fa-f]{1,6}(\r\n|[\t\n\f\r ])?|\\[^\d\n\f\rA-Fa-f]))*\b)/i,
        /* 36: */ /^(?:\()/i,
        /* 37: */ /^(?:\))/i,
        /* 38: */ /^(?:$)/i
    ],

    conditions: {
        'INITIAL': {
            rules: [
                0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
                10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
                20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
                30, 31, 32, 33, 34, 35, 36, 37, 38
            ],
            inclusive: true
        }
    }
};

return lexer;
}();
parser.lexer = lexer;

function Parser() {
    this.yy = {};
}
Parser.prototype = parser;
parser.Parser = Parser;

return new Parser();
})();


if (typeof require !== 'undefined' && typeof exports !== 'undefined') {
    exports.parser = parser;
    exports.Parser = parser.Parser;
    exports.parse = function () {
        return parser.parse.apply(parser, arguments);
    };
}