Building a parser (Part I)

Most toolkits split the complete process into two separate parts:

  • lexer (aka. tokenizer)
  • parser (aka. grammar)

The tokenizer splits the input data into tokens. The parser operates only on the token "stream" and builds the structure.

Your question seems to be focused on the tokenizer. But your second solution mixes the grammar parser and the tokenizer into one step. Theoretically this is also possible, but for a beginner it is much easier to do it the same way most other tools/frameworks do: keep the steps separate.

To your first solution: I would tokenize your example like this:

T_KEYWORD_IF   "if"
T_LPAREN       "("
T_IDENTIFIER   "x"
T_GT           ">"
T_LITERAL      "5"
T_RPAREN       ")"
T_KEYWORD_RET  "return"
T_KEYWORD_TRUE "true"
T_TERMINATOR   ";"

In most languages, keywords cannot be used as method names, variable names, and so on. This is reflected already at the tokenizer level (T_KEYWORD_IF, T_KEYWORD_RET, T_KEYWORD_TRUE).
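
To make this concrete, here's a minimal sketch of such a tokenizer in Python. The regex table and helper names are my own assumptions for illustration, not something any particular framework prescribes:

import re

# Hypothetical token table: order matters, keywords must come before identifiers.
TOKEN_SPEC = [
    ("T_KEYWORD_IF",   r"if\b"),
    ("T_KEYWORD_RET",  r"return\b"),
    ("T_KEYWORD_TRUE", r"true\b"),
    ("T_LITERAL",      r"\d+"),
    ("T_IDENTIFIER",   r"[A-Za-z_]\w*"),
    ("T_GT",           r">"),
    ("T_LPAREN",       r"\("),
    ("T_RPAREN",       r"\)"),
    ("T_TERMINATOR",   r";"),
    ("SKIP",           r"\s+"),       # whitespace is dropped, not emitted
]

def tokenize(source):
    """Yield (token_type, text) pairs for the given source string."""
    pos = 0
    while pos < len(source):
        for name, pattern in TOKEN_SPEC:
            match = re.compile(pattern).match(source, pos)
            if match:
                if name != "SKIP":
                    yield (name, match.group())
                pos = match.end()
                break
        else:
            raise SyntaxError("Unexpected character: %r" % source[pos])

# list(tokenize("if (x > 5) return true;")) produces exactly the stream above.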

The next level would take this stream and, by applying a formal grammar, build some data structure (often called an AST, an Abstract Syntax Tree) which might look like this:

IfStatement:
    Expression:
        BinaryExpression:
            Operator:      T_GT
            LeftOperand:
                IdentifierExpression:
                    "x"
            RightOperand:
                LiteralExpression:
                    5
    IfBlock:
        ReturnStatement:
            ReturnExpression:
                LiteralExpression:
                    "true"
    ElseBlock: (empty)
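
In code, such a tree is typically just a handful of small node classes. Here's a minimal sketch in Python; the class and field names are my own assumptions, every compiler picks its own:

from dataclasses import dataclass

# Hypothetical AST node types; a real language needs many more.
@dataclass
class Literal:
    value: object

@dataclass
class Identifier:
    name: str

@dataclass
class BinaryExpression:
    operator: str          # e.g. "T_GT"
    left: object
    right: object

@dataclass
class ReturnStatement:
    expression: object

@dataclass
class IfStatement:
    condition: object
    if_block: list
    else_block: list = None

# The tree above, built by hand:
tree = IfStatement(
    condition=BinaryExpression("T_GT", Identifier("x"), Literal(5)),
    if_block=[ReturnStatement(Literal(True))],
)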

Implementing the parser is usually done with the help of a framework. Implementing something like that by hand, and efficiently, is what the better part of a semester at university is spent on. So you really should use some kind of framework.

The input for a parser framework is usually a formal grammar in some kind of BNF. Your "if" part might look like this:

IfStatement: T_KEYWORD_IF T_LPAREN Expression T_RPAREN Statement ;

Expression: LiteralExpression | BinaryExpression | IdentifierExpression | ... ;

BinaryExpression: LeftOperand BinaryOperator RightOperand ;

...

That's only to give you the idea. Parsing a real-world language like JavaScript correctly is not an easy task. But it is fun.


Generally, you want to separate the functions of the tokeniser (also called a lexer) from other stages of your compiler or interpreter. The reason for this is basic modularity: each pass consumes one kind of thing (e.g., characters) and produces another one (e.g., tokens).

So you’ve converted your characters to tokens. Now you want to convert your flat list of tokens to meaningful nested expressions, and this is what is conventionally called parsing. For a JavaScript-like language, you should look into recursive descent parsing. For parsing expressions with infix operators of different precedence levels, Pratt parsing is very useful, and you can fall back on ordinary recursive descent parsing for special cases.
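
To sketch the Pratt idea: every infix operator gets a binding power, and the expression parser keeps consuming operators as long as the next one binds tighter than the context it was called from. Here's a toy version in Python, with made-up binding powers and a plain token list:

# Hypothetical binding powers: higher binds tighter.
BINDING_POWER = {">": 5, "+": 10, "-": 10, "*": 20}

def parse_expression(tokens, min_bp=0):
    """Consume tokens from the front of the list; return a nested tuple tree."""
    left = tokens.pop(0)                 # assume a literal or identifier
    while tokens and tokens[0] in BINDING_POWER:
        op = tokens[0]
        bp = BINDING_POWER[op]
        if bp <= min_bp:                 # next operator binds no tighter: stop
            break
        tokens.pop(0)                    # consume the operator
        right = parse_expression(tokens, bp)
        left = (op, left, right)
    return left

# parse_expression(["x", ">", "5", "+", "1"]) == (">", "x", ("+", "5", "1"))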

Just to give you a more concrete example based on your case, I’ll assume you can write two functions: accept(token) and expect(token), which test the next token in the stream you’ve created. You’ll make a function for each type of statement or expression in the grammar of your language. Here’s Pythonish pseudocode for a statement() function, for instance:

def statement():

  # "if" <expression> <statement>
  if accept("if"):
    x = expression()
    y = statement()
    return IfStatement(x, y)

  # "return" <expression>
  elif accept("return"):
    x = expression()
    return ReturnStatement(x)

  # "{" <statement> (";" <statement>)* "}"
  elif accept("{"):
    xs = []
    while True:
      xs.append(statement())
      if not accept(";"):
        break
    expect("}")
    return Block(xs)

  else:
    raise SyntaxError("Invalid statement!")

This gives you what’s called an abstract syntax tree (AST) of your program, which you can then manipulate (optimisation and analysis), output (compilation), or run (interpretation).
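
In case it helps: accept and expect are usually thin wrappers around a cursor into the token list. One possible sketch in Python, matching the free-function style of the pseudocode above (the global cursor is just for brevity; a parser class works equally well):

tokens = []   # filled by the tokenizer
pos = 0

def accept(token):
    """Consume the next token and return True if it matches, else do nothing."""
    global pos
    if pos < len(tokens) and tokens[pos] == token:
        pos += 1
        return True
    return False

def expect(token):
    """Like accept, but a mismatch is a syntax error."""
    if not accept(token):
        raise SyntaxError("Expected %r at token %d" % (token, pos))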


Is my way better or worse than the original way? Note that my code will be read and compiled (translated to another language, like PHP) instead of interpreted all the time.

What's the original way? There are many different ways to implement languages, and I think yours is fine, actually. I once tried to build a language myself (the hack programming language) that translated to C#. Many language compilers translate to an intermediate language; it's quite common.

After I tokenize, what exactly do I need to do? I'm really lost at this step!

After tokenizing, you need to parse it. Use some good lexer/parser framework, such as Boost.Spirit or Coco/R; there are hundreds of them. Or you can implement your own parser, but that takes time and resources. There are many ways to parse code; I generally rely on recursive descent parsing.

Next you need to do code generation. That's the most difficult part, in my opinion. There are tools for that too, but you can do it manually if you want to. I tried to do it in my project, but it was pretty basic and buggy; there's some helpful code here and here.
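
Since you mentioned translating to PHP: a first code generator can be as simple as a recursive walk over the AST that returns target-language text. Here's a sketch in Python, reusing the hypothetical node classes from the AST example above:

def emit(node):
    """Return a PHP source fragment for the given AST node (sketch only)."""
    if isinstance(node, Literal):
        return "true" if node.value is True else str(node.value)
    if isinstance(node, Identifier):
        return "$" + node.name                  # PHP variables start with $
    if isinstance(node, BinaryExpression):
        op = {"T_GT": ">"}[node.operator]       # map token names to PHP operators
        return "(%s %s %s)" % (emit(node.left), op, emit(node.right))
    if isinstance(node, ReturnStatement):
        return "return %s;" % emit(node.expression)
    if isinstance(node, IfStatement):
        body = " ".join(emit(s) for s in node.if_block)
        return "if %s { %s }" % (emit(node.condition), body)
    raise TypeError("Unknown node: %r" % node)

# emit(tree) == "if ($x > 5) { return true; }"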

Are there some good tutorials to learn how I can do it?

As I suggested earlier, use tools to do it. There are a lot of pretty good, well-documented parser frameworks. For further information, you can try asking some people who know about this stuff. @DeadMG, over at the Lounge<C++>, is building a programming language called "Wide"; you may try consulting him.