What are tokens used for?

There are some technical differences, but mostly it's just the "idiosyncratic" terminology used by D. Knuth. Almost all systems are parsed in stages that separate lexical analysis (e.g. distinguishing a name from a numeric literal) from parsing proper (identifying the program structure); tokens are the output of lexical analysis.

TeX's version is fairly unique in that you can change the lexical analysis during the run. So in TeX, \foo@bar might be a single token (\foo@bar), or 5 tokens (\foo, @, b, a, r), or 8 tokens (\, f, o, o, @, b, a, r), depending on the catcodes of \ and @ (or yet other numbers of tokens for more exotic catcode settings), whereas most languages use a fixed tokenisation.

This dynamic aspect of tokenisation in TeX means that it plays a far more visible role. In C or Java it is just assumed, and usually left unsaid, that an expression such as 1+abc is three tokens (1, +, abc), and it does not depend on some run-time value whether abc is a single token representing a variable or two tokens a and bc juxtaposed.
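To make that run-time dependence concrete, here is a minimal plain TeX sketch (the macro names and messages are invented for illustration). The catcode of @ at the moment a line is read decides how the same eight characters are lexed:

    % the catcode of @ at read time decides how \foo@bar is lexed
    \catcode`\@=11              % @ is now a "letter" (what LaTeX's \makeatletter does)
    \def\foo@bar{ONE TOKEN}     % defines the single control word \foo@bar
    \message{\foo@bar}          % terminal: ONE TOKEN
    \catcode`\@=12              % @ is now "other": \foo@bar lexes as \foo, @, b, a, r
    \def\foo{FIVE}              % give the shorter control word \foo a meaning
    \message{\foo@bar}          % terminal: FIVE@bar
    \bye

Nothing like the second half of this file is expressible in C or Java: there is no statement that makes the compiler start reading abc as two tokens.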


Let me ignore the eyes–mouth–stomach (and…) terminology mentioned in the question, which is truly unique to Knuth's description of TeX, and focus specifically on the tokens that the question asks about:

I have never seen other languages using this token system. It is not used by Pascal, C, PHP, SQL, LISP, etc. […] Why do other computer languages do without problems?

This is not so. Just a few seconds of searching will show you that the compilers for all these languages use tokens in their implementation:

Pascal:

  • GNU Pascal: The GNU Pascal Manual, Chapter 12: The GPC Source Reference, 12.2 GPC's Lexical Analyzer: “This very first stage of the compiler is responsible for reading what you have written and dividing it into tokens, the “atoms” of a computer language.”
  • Free Pascal: Free Pascal Reference guide, Chapter 1: Pascal Tokens: “Tokens are the basic lexical building blocks of source code: they are the ‘words’ of the language: characters are combined into tokens according to the rules of the programming language.”

C:

  • C tokens: “In a C source program, the basic element recognized by the compiler is the "token." A token is source-program text that the compiler does not break down into component elements.” (Similarly see C++ tokens: A token is the smallest element of a C++ program that is meaningful to the compiler.)

PHP:

  • PHP Manual, Appendix List of Parser Tokens

SQL:

  • PostgreSQL 9.6.3 Documentation, Chapter 4. SQL Syntax, 4.1. Lexical Structure: “SQL input consists of a sequence of commands. A command is composed of a sequence of tokens, terminated by a semicolon (";").”
  • DB2 SQL, Language elements, Tokens: “The basic syntactical units of the SQL language are called tokens.”

Lisp:

  • ANSI Common Lisp standard, Chapter 2 Syntax, 2.3 Interpretation of Tokens
  • Common Lisp the Language, 22.1. Printed Representation of Lisp Objects, 22.1.1. What the Read Function Accepts: “Constituent and escape characters are accumulated to make a token, which is then interpreted as a number or symbol.”

A subtle difference is that when learning these languages you may get away without encountering any mention of tokens, as they can be treated as an implementation detail of the compiler of the language. In TeX's case though, there is strictly speaking no “language” (e.g. a language standard with multiple compilers written for that language): there is a single system, the TeX program, which happens to be written like a compiler.

The TeXbook doesn't use the word “compiler” anywhere, but TeX: The Program (invoke texdoc tex to read it) starts with

1. Introduction. This is TeX, a document compiler intended to produce typesetting of high quality.

So Knuth conceived of TeX as a “document compiler” at least when writing the program. Remember that Knuth's programming background was in writing compilers:

  • as a student at Case Institute of Technology in the late 50s he had co-written a compiler called RUNCIBLE (his second-ever publication, after the one in MAD Magazine) (watch a talk about it starting here),
  • on the strength of this he got a job as a consultant at Burroughs (while still a student) to write an Algol 58 compiler (read Richard Waychoff's account of it in “III. The Summer Of 1960 (Time Spent with Don Knuth)”),
  • …and so on, until in 1962 he was approached by Addison-Wesley to write a book on compilers, which morphed into his (ongoing) life's work, The Art of Computer Programming.
  • In 1977 when he wanted to write TeX, many of the precursors such as PUB (see Knuth's note PUB and pre-TeX history in TUGboat) were written as (or called themselves) compilers: the PUB manual is titled “PUB: The Document Compiler”.

So it's natural that TeX is written like a compiler.

And the overall plan of execution—read characters into tokens (syntax), then turn them into commands (semantics)—is (part of) how most compilers are written. In fact, the terms syntax and semantics aren't even restricted to programming languages: they are used in linguistics, for human languages.
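TeX even exposes this syntax/semantics split to the user. In the minimal plain TeX sketch below (the macro name is invented for illustration), \string prints a token's characters and \meaning prints the command currently attached to it:

    \def\demo{hello}
    \message{\string\demo}    % terminal: \demo          (the token itself: syntax)
    \message{\meaning\demo}   % terminal: macro:->hello  (its current meaning: semantics)
    \let\demo\relax
    \message{\meaning\demo}   % terminal: \relax         (same token, new meaning)
    \bye

The token \demo never changes; only the command it names does, which is exactly the tokens-then-commands pipeline described above.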