Theoretical computer science is the collection of topics within computer science that focuses on the more abstract, logical, and mathematical aspects of computing, such as the theory of computation, the analysis of algorithms, and the semantics of programming languages. Although it is not a single unified topic, its practitioners form a distinct subgroup within the computer science research community.
It is not easy to circumscribe the theory areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT), which describes its mission as the promotion of theoretical computer science, characterizes the field as covering a wide variety of topics unified by an emphasis on mathematical technique and rigor.
Even so, the "theory people" in computer science self-identify as different. Some characterize themselves as doing the "'science' underlying the field of computing"[1], although this neglects the experimental science done in non-theoretical areas such as software system research.
While formal algorithms have existed for millennia (Euclid's algorithm for determining the greatest common divisor of two integers is still used in computation), it was not until 1936 that Alan Turing and Alonzo Church independently formalized the definition of an algorithm in terms of computation. Similarly, while binary and logical systems of mathematics had long existed, it was not until 1703 that Gottfried Leibniz formalized logic with binary values for true and false. The nature of mathematical proof also has an ancient history, but in 1931 Kurt Gödel proved with his incompleteness theorem that there were fundamental limitations on what statements, even if true, could be proved.
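To make the antiquity of the idea concrete, here is a minimal sketch of Euclid's algorithm in Python; the function name and the example inputs are ours, chosen for illustration:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder is zero; the last nonzero value is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21
```

The same repeated-remainder idea appears in Euclid's Elements, Book VII, expressed there in terms of repeated subtraction rather than the modulo operation.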
These developments have led to the modern study of logic and computability, and indeed the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As mounting biological data supported this hypothesis (with some modification), the fields of neural networks and parallel distributed processing were established.
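As a small illustration of Shannon's measure, the following sketch computes the entropy H = -Σ p·log2(p) of a discrete distribution; the example distributions are made up for demonstration:

```python
import math

def entropy(probabilities) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit of information per toss; a biased coin carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.469
```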
With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on many states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, a field that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render insecure the public-key cryptosystems whose security rests on the hardness of factoring or discrete logarithms, such as RSA.
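The quantum speedup in Shor's algorithm comes entirely from the period-finding step; the surrounding reduction from factoring to period finding is classical. Below is a sketch of that classical part, with a brute-force stand-in for the quantum step; the function names and structure are ours, not a definitive implementation:

```python
import math
import random

def find_period(a: int, n: int) -> int:
    """Brute-force order finding: smallest r > 0 with a^r = 1 (mod n).
    This is the step a quantum computer performs in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int) -> int:
    """Classical reduction: turn the period r of a random base a into a factor of n."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g          # lucky draw: a already shares a factor with n
        r = find_period(a, n)
        if r % 2 == 0:
            g = math.gcd(pow(a, r // 2, n) - 1, n)
            if 1 < g < n:
                return g      # nontrivial factor found; otherwise retry

print(shor_factor(15))  # -> 3 or 5
```

The brute-force `find_period` takes exponential time in the bit length of n; replacing it with quantum order finding is exactly what makes the whole procedure polynomial.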
Modern theoretical computer science research builds on these foundational developments, but includes many other mathematical and interdisciplinary problems that have been posed.