The term Computer Science encompasses three different types of research areas: computability, efficiency, and methodology.
Computability deals with the question of what is "mechanically" computable. The most natural way to describe a "problem" is as a numerical function, i.e., an operation that takes numbers as input and produces numbers as output. A crucial observation is that being "computable in an organized fashion," e.g., by a series of rules, is an inherent property of a function. Most numerical functions do not have this property, and the field of computability is concerned with those that do. In order to define "organized fashion" rigorously, one needs to define formal models of computation. The conclusion drawn from the various models developed in the first half of the 20th century is the "Church-Turing Thesis," which states that all reasonable models of computation are equivalent. Thus the property of being "computable" is considered inherent to the function and not dependent on an external computing machine.
Once it is established that a function is computable, it is important to find out whether it can be computed efficiently. Efficiency, too, is inherent in the function, rather than in the machine computing it. A faster machine can only compute a function faster by a constant multiple, whereas a function that is not efficiently computable ceases to be realistically computable on larger inputs, even on a fast machine. Consider, as an example, the sorting problem: given a list of n numbers, we would like to arrange them in ascending order. The naive way of doing it is to find the smallest number and move it to the front, then find the next smallest and place it second, and so on until all the numbers are sorted. This scheme takes on the order of n² operations; thus, sorting 1,000 numbers will require roughly 1,000,000 operations. Now suppose we have two machines, one of which is 10 times faster than the other, and suppose also that someone devises a scheme that sorts n numbers in about n operations rather than n². Using the faster scheme, the slow machine sorts the 1,000 numbers in 1,000 operations. Using the naive scheme, the fast machine needs 1,000,000 operations; being 10 times faster, it finishes in the time the slow machine would take for 100,000 operations. The slow machine thus wins by a factor of 100.
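The naive n² scheme described above is known as selection sort; a minimal sketch follows (the sample list is arbitrary):

```python
# Selection sort: the naive n^2 scheme -- repeatedly pick the
# smallest remaining number and move it to the front.
def selection_sort(nums):
    nums = list(nums)  # work on a copy
    for i in range(len(nums)):
        # index of the smallest element in the unsorted part
        smallest = min(range(i, len(nums)), key=nums.__getitem__)
        # move it to position i
        nums[i], nums[smallest] = nums[smallest], nums[i]
    return nums

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Each of the n passes scans up to n elements, which is where the order-n² operation count comes from.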
We conclude that the computation scheme, and not the machine, is the main contributor to the efficiency of computing a function. This computation scheme is called an "algorithm" in computer science, and the efficiency of the algorithm is called its "complexity."
The fields of Computational Complexity and Design and Analysis of Algorithms are the two main fields of computer science dealing with the efficiency of programs. Computational complexity can be likened to the study of the "forest" of functions, and the different traits causing different classes of complexity. Algorithm design and analysis is the study of methods that can lead to efficient algorithms for specific problems.
The final part of the science of computing is the methodology part. In view of the above discussion, one can study computability and efficiency even in a world without computers and electricity. Nevertheless, the existence of computing machines creates many new problems. A machine that computes functions must deal with numerous peripheral devices and multiple functions being computed at the same time. The best ways of organizing these tasks are studied in the research area called operating systems. Those who write the code for very large and complex algorithms need ways to write it that minimize errors and make it easy to test, maintain, and understand. These topics are researched in the areas of programming languages and software engineering. Dealing with huge data sets requires ways to index, search, and retrieve data efficiently. These methods are studied in the research areas of databases and information retrieval. The field of Natural Language Processing aims at the goal of having computers understand our speech. The desire to have systems that see and react, e.g., for self-driving cars, necessitates the area of computer vision and image processing. The proliferation of computers requires that they communicate, which leads to the areas of networks and communications. Robotics and Artificial Intelligence allow machines to perform a range of tasks autonomously. All of the above research areas are concerned with methods that enable easier, better, and more efficient use of computing machines.
Computer Science in Jewish Sources
It is clear that one will not find too many hints of the methodology part of computer science in Jewish sources, since that branch of computer science evolved around the computer. Artificial life or robotics seems to be hinted at by the golem concept. The Talmud (Sanh. 65b) mentions that Rava created a man and sent him to Rav Zeira. There are additional midrashic and later references to the power of creating "artificial life" by use of the Holy Name. The relationship between these passages and Artificial Intelligence is only superficial. The point made in these passages seems to be the creative power of holiness, rather than the potential of the physical sciences.
A pervasive method in web technologies and digital libraries is the hypertext method. This method has been used very successfully in Jewish literature. The traditional page format of the Vilna Shas, for example, is a pure use of hypertext: the main text is centered in large letters, the main commentators are arranged around it in smaller letters, and links to appropriate passages in the Bible and in the main posekim are suitably incorporated. Many other Jewish texts are printed in a similar format (e.g., Rambam, Shulḥan Arukh). These Jewish texts represent the most extensive use of windows and hypertext technology prior to the end of the 20th century. Some research papers in computer science were motivated by the hypertext in Jewish texts.
Computability and efficiency, especially the algorithmic part, do not require a machine; therefore it is not surprising that such topics were considered in the ancient world, as well as in Jewish sources. Algorithms have a natural place in mathematics. For example, the sieve of Eratosthenes is a method for automatically finding prime numbers.
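The sieve of Eratosthenes can be sketched in a few lines (the bound 30 is an arbitrary example):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes up to n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting from p*p
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The method "automatically" finds primes in the sense the text describes: it applies a fixed series of rules (cross out multiples) with no case-by-case ingenuity.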
Such algorithms abound in the Judaic literature. Most of these algorithms deal with the methods of arriving at the halakhah. The baraita of Rabbi Ishmael (Sifra 1:1) gives the 13 rules by which the Torah is interpreted. Even after the codification of the Mishnah, the problems of deciding halakhah were not solved, since the Mishnah leaves many issues in a state of dispute (maḥaloket). The Talmud, although far from settling the disputes in the Mishnah, does offer numerous rules for settling mishnaic disputes. Examples are "yaḥid ve-rabbim – halakhah ke-rabbim" (in a dispute between one and many, the halakhah follows the many), "halakhah ke-veit Hillel" (the halakhah is according to the House of Hillel), and "halakhah ke-Rabbi Akiva me-ḥaverav" (the halakhah follows Rabbi Akiva's view when he is opposed by his colleagues). Nevertheless, in numerous places the Talmud and its commentators have declared that halakhah is not to be deduced from the Mishnah (TJ, Ḥag. 1:8; Rashi, Sanh. 100:2, "Rava Amar Ipkha"). The Rif goes further and says that halakhah cannot be deduced from the Talmud either (Er. 11:2).
Rabbi Ovadiah *Yosef (Yabi'a Omer, introduction) states that it is not in our power to derive the law from the Talmud without consulting the posekim. These opinions discourage the posek from applying the rules as an algorithm.
A research project at Machon Lev gathered the given rules and meta-rules for deciding halakhah in the Mishnah and constructed a rule-based system in order to compare decisions deduced strictly by the algorithm with the halakhot as decided by the Shulḥan Arukh (or by the Rambam, when the halakhah does not appear in the Shulḥan Arukh). The system was run and tested on mishnayot in the tractates Yoma, Ta'anit, and Ḥagigah, and achieved a 90.3% success rate.
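To illustrate what a rule-based system of this kind looks like, here is a toy decider applying the talmudic meta-rules quoted above in a fixed priority order. The data model, the rule precedence, and the examples are entirely hypothetical illustrations, not the Machon Lev system:

```python
# Toy rule-based decider for mishnaic disputes (illustration only).
# A dispute is a dict: side name -> "individual" or "many".
def decide(dispute):
    sides = dispute["sides"]
    # Rule: halakhah ke-veit Hillel
    if "Beit Hillel" in sides:
        return "Beit Hillel"
    # Rule: halakhah ke-Rabbi Akiva me-haverav
    if "Rabbi Akiva" in sides:
        return "Rabbi Akiva"
    # Rule: yahid ve-rabbim -- halakhah ke-rabbim
    for name, kind in sides.items():
        if kind == "many":
            return name
    return "undecided"

print(decide({"sides": {"Beit Shammai": "many", "Beit Hillel": "many"}}))
# Beit Hillel
```

A real system must also encode the many exceptions to these rules, which is precisely where a purely algorithmic approach falls short of the posekim, as the sources cited above warn.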
Jewish Contribution to Computer Science
The study of computability began in the early 20th century, before the advent of computers. Among the leading scientists who studied models of computation was the Jewish mathematician Emil Leon Post (1897–1954), who was born in Poland and educated in New York. He invented the model of computation named after him, the Post machine, and proved results similar to those of Gödel, Church, and Turing. Post was among the founders of recursive function theory – the formal theory dealing with computability.
Most undecidability results (showing functions that are inherently not computable) are proved by a technique called diagonalization. In this technique, values are placed in an infinite two-dimensional matrix, and a perturbation of its diagonal is then proven not to be a row of this infinite matrix, leading to a contradiction. This method was first studied by Georg Ferdinand Ludwig Philipp Cantor (1845–1918), born to a Jewish Danish father, who converted to Protestantism, and a Danish Catholic mother. Cantor was the first to introduce Hebrew into modern mathematics. He used the letter א to denote the cardinality of continuous infinite sets, such as the totality of numerical functions, and א₀ to denote that of countably infinite sets, such as the set of computable functions. He also proved that א₀ is strictly smaller than א.
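The diagonal perturbation can be demonstrated on a finite truncation of the matrix: flip the i-th entry of the i-th row, and the resulting row provably differs from every row in the list.

```python
# Cantor's diagonal argument, finitely truncated: given a list of
# 0/1 rows, build a row that differs from the i-th row in position i,
# so it cannot equal any row in the list.
def diagonal(rows):
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(rows)
print(d)  # [1, 0, 1, 0]
assert all(d != row for row in rows)
```

In the infinite setting the same construction shows that no list (enumeration) of rows can contain every 0/1 sequence, which is the heart of the undecidability proofs mentioned above.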
For a rigorous study of an algorithm's complexity, one needs a carefully defined model. The model in which most algorithmic analysis is carried out is the sequential von Neumann model. Johann von *Neumann (1903–1957) was born into a Jewish Hungarian family. He spent most of his adult life in the U.S. and was one of the original six mathematics professors at the Institute for Advanced Study in Princeton. He was one of the leading mathematicians of the 20th century. His ideas on logic design were used in the development of the first computers, and he pioneered game theory, fault tolerance in systems, and cellular automata.
Recently, newer models of computation have been sought. These models do not enhance the power of computation, but it is hoped that they can achieve greater efficiency. For example, one of the most famous open problems in computer science and mathematics is the P = NP? problem. The question is whether non-determinism adds computational power. Computation in the von Neumann model is deterministic, i.e., a unique instruction follows every program instruction. In non-deterministic computation the next instruction is "guessed," following certain rules. One of the scientists who introduced non-determinism is Michael Rabin (1931– ) of the Hebrew University of Jerusalem. Intuitively, non-determinism should allow us to solve problems faster, using the power of the "guesses." However, it is still an open question whether there exist problems that can be computed efficiently non-deterministically yet cannot be computed efficiently deterministically. Certain efficiently solvable non-deterministic problems have the special trait that if any one of them can be computed efficiently deterministically, then all efficient non-deterministic problems can be efficiently computed deterministically. These are called NP-complete problems. The major theorem in the study of NP-completeness shows that deciding whether a logical formula can be satisfied is NP-complete. This theorem was proven independently by Stephen Cook and Leonid Levin (1948– ), a Jewish Russian computer scientist who emigrated to the U.S. in 1978. The theory of NP-completeness took off when Richard Karp (1935– ), an American computer scientist, published the first set of NP-complete problems.
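The satisfiability problem of the Cook-Levin theorem is easy to state in code. The obvious deterministic method tries all 2ⁿ assignments; no essentially faster deterministic method is known, which is the content of the P = NP? question. A minimal brute-force checker (the clause encoding is a common convention, chosen here for illustration):

```python
from itertools import product

# Satisfiability by brute force. A formula is a list of clauses in
# conjunctive normal form; each clause is a list of literals, where
# +i means variable i and -i means its negation.
def satisfiable(clauses, n):
    for bits in product([False, True], repeat=n):   # all 2^n assignments
        def holds(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(satisfiable([[1, 2], [-1, 2], [-2, 3]], 3))  # True
print(satisfiable([[1], [-1]], 1))                 # False
```

A non-deterministic machine could simply "guess" a satisfying assignment and verify it in linear time, which illustrates why SAT sits in NP.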
New models of computation have been suggested which may compute efficiently problems that are inefficient in the von Neumann model. A notable example is quantum computation, pioneered in the 1980s by Paul Benioff, Richard Feynman, and David Deutsch. The quantum model assumes that bits behave in a quantum fashion. An alternative model, basing computation on DNA, was introduced by the American scientist Leonard Adleman (1945– ).
One may mention another fundamental concept in complexity, that of Kolmogorov complexity: the length of the shortest program that produces a given object, i.e., the minimum size necessary to encode it. It is named after Andrei Kolmogorov, who wrote a paper on it in 1965. Nevertheless, a year earlier the Jewish mathematician Ray Solomonoff (1926– ) published two papers on what are now termed Solomonoff induction and algorithmic probability, which independently tackle many of the same concepts.
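Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper bound on it, and this makes the idea concrete: a highly regular string has a short description, while a random-looking one does not. A sketch using the standard-library `zlib` compressor as the (imperfect) proxy:

```python
import random
import zlib

# A very regular 1000-byte string: describable as "'ab' repeated 500 times".
regular = b"ab" * 500

# A pseudo-random 1000-byte string: no obvious short description.
random.seed(0)
messy = bytes(random.randrange(256) for _ in range(1000))

# Compressed size approximates (an upper bound on) description length.
print(len(zlib.compress(regular)))  # far smaller than 1000
print(len(zlib.compress(messy)))    # close to 1000 (incompressible)
```

The regular string compresses to a few bytes while the random one barely compresses at all, mirroring the distinction Kolmogorov complexity formalizes.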
The Jewish contribution in the area of algorithms is also quite prominent. Some fundamental algorithmic methods were invented by Jewish scientists; examples are linear programming and dynamic programming. Linear programming problems are optimization problems in which one needs to optimize a linear function, i.e., a weighted sum of the variables, subject to constraints that are also linear. This field of optimization is important because many problems in operations research, such as multi-commodity flow problems and scheduling problems, can be formulated as linear programs. Linear programming was discovered by the Soviet mathematician and Nobel laureate in economics Leonid Vitaliyevich *Kantorovich (1912–1986). One of the most widely used algorithms for solving linear programs is the simplex method, developed by the American mathematician George B. Dantzig (1914–2005).
Dynamic programming is a method of solving a large problem incrementally, by first solving it for small instances and subsequently constructing solutions for larger and larger instances based on previously computed solutions to the smaller cases. It is the core of many important algorithms in all areas of Computer Science. Dynamic Programming was invented by the American mathematician Richard Bellman (1920–1984).
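A minimal illustration of dynamic programming is the coin-change problem: the fewest coins for each amount is built up from the previously computed answers for smaller amounts (the coin values and target below are arbitrary examples):

```python
# Dynamic programming: solve small instances first, then construct
# each larger answer from previously computed smaller ones.
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount     # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(min_coins([1, 5, 7], 24))  # 4  (e.g., 7 + 7 + 5 + 5)
```

The table `best` embodies Bellman's principle: an optimal solution for amount a is one coin plus an optimal solution for a smaller amount.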
The Jewish contribution abounds in the methodology part of computer science as well. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs, where the term "intelligent" is left as an intuitive notion. The field tries to make programs behave more like "intelligent" entities than like programmed functions. Among its most notable founders are the American scientist Marvin Minsky (1927– ), the Nobel laureate in economics Herbert Simon (1916–2001), whose father was Jewish, and the Boston-born scientist John McCarthy (1927– ), who had a Jewish mother.
The field of cryptography deals with the ability to encrypt information. This is especially critical for information that gets transmitted publicly, as over the Internet, and is what makes electronic commerce possible. Public-key cryptography was co-invented by American computer scientist Martin Hellman (1945– ). He was one of the co-authors of the Diffie-Hellman algorithm for secure key distribution over non-secure channels. The most widely used public-key algorithm today is the RSA algorithm, named after MIT scientist Ronald Rivest, Adi Shamir (1952– ) of the Weizmann Institute, and Leonard Adleman (1945– ).
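The Diffie-Hellman idea can be shown in a few lines of arithmetic. The sketch below uses a deliberately tiny prime and made-up secrets purely for illustration; real deployments use primes of thousands of bits:

```python
# Toy Diffie-Hellman key exchange (illustrative parameters only).
p, g = 2087, 5      # public: a small prime modulus and a generator
a = 123             # Alice's private secret
b = 456             # Bob's private secret

A = pow(g, a, p)    # Alice publishes g^a mod p
B = pow(g, b, p)    # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value:
shared_alice = pow(B, a, p)   # (g^b)^a = g^(ab) mod p
shared_bob = pow(A, b, p)     # (g^a)^b = g^(ab) mod p
assert shared_alice == shared_bob
print(shared_alice)
```

Both parties arrive at the same value g^(ab) mod p over a public channel, while an eavesdropper who sees only g, p, A, and B faces the (believed hard) discrete-logarithm problem.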
The A.M. Turing Award is given annually by the Association for Computing Machinery to a person selected for lasting and major contributions made to the computing community. The Turing Award is often recognized as the "Nobel Prize of computing." It is sponsored by Intel Corporation and currently is accompanied by a prize of $100,000. Almost a third of the Turing Award winners to date are of Jewish descent. These are Alan Perlis (1966), Marvin Minsky (1969), John McCarthy (1971), Herbert Simon (1975), Michael Rabin (1976), Richard Karp (1985), William Kahan (1989), Edward Feigenbaum (1994), Manuel Blum (1995), Amir Pnueli (1996), Adi Shamir (2002), Leonard Adleman (2002), and Robert Kahn (2004).
It should be noted that three of the above names are Israelis at Israeli universities. Indeed, Israel is an international power in computer science, and Israeli research is at the cutting edge of scientific research. Five of the top 100 most cited computer scientists in the world are Israelis, and Israeli universities are ranked at the top of international lists of leading computer science departments.
Computer applications are not in the scope of this article, but we will mention in passing that many ubiquitous applications, such as the BASIC programming language, spreadsheets, the automated electronically switched telephone network, spread spectrum communications, the Internet, Google, and more, were co-invented by Jews.
Computer Science as an Aid to Judaism
The proliferation of electronic databases has not skipped the Jewish world. There are currently over a dozen different Jewish databases on the market, both as text and as scanned images. In addition there are numerous Internet sites on Jewish topics ranging from providing candle lighting times all over the globe to hospitality information in different communities. The first Jewish database was the Bar-Ilan Responsa Project.
The project was conceived in 1963 by the Weizmann Institute scientist Aviezri Fraenkel and later migrated to Bar-Ilan University. Fraenkel was the project's director from 1963 to 1974, succeeded by Ya'akov Choueka, who headed the project from 1974 to 1986. The idea was to create an electronic library of the responsa with a search engine to enable easy access to the information. The project required research and solutions in areas such as information retrieval, data compression, Hebrew computational linguistics, and human-computer interaction. It led to many graduate theses and publications in computer science, and for many years it was at the cutting edge of technology. In its beginnings the database resided on an IBM mainframe. From 1979, it could also be used in a time-sharing mode from terminals on the Bar-Ilan campus, elsewhere in Israel, and abroad. During Uri Schild's tenure as project director, it was decided to compress the database onto a single compact disc, which made the system accessible to every home, scholar, rabbi, and dayyan. Because of the care the project takes in seeking error-free text, it is unique in being a genuine tool for pesikat halakhah, and it is used by many posekim today.
An emotionally charged and controversial current phenomenon is that of the Torah codes, popularly known as the Bible codes. This issue has involved Jews and Christians, scholars, scientists, and laymen, and has even produced best-selling books such as The Bible Code by Michael Drosnin.
Underlying the codes is the traditional Jewish idea that there are several layers to the Torah, and that remez (allusion) is a valid form of learning Torah. Rabbi Jacob ben Asher's Ba'al ha-Turim commentary on the Torah is perhaps the most famous early concerted use of this form of learning. The modern code methods involve Equidistant Letter Sequences (ELS), and the idea is to find names, dates, and "prophecies" encoded as ELSs in the Torah. The first scientific claim of statistical validity for the codes appeared in a 1988 paper by the mathematician Eliyahu Rips. It was followed by the 1994 paper of Doron Witztum, Eliyahu Rips, and Yoav Rosenberg, which generated a very emotional response. Without taking a stand in the controversy, it is important to note that this entire line of research and school of thought would be almost impossible without computers.
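The ELS search itself is computationally simple, which is why the codes are inseparable from computers: only a machine can scan a long text over every starting position and skip distance. A minimal sketch (the text and search word below are made-up placeholders, not claims about any actual text):

```python
# Search for a word as an Equidistant Letter Sequence (ELS):
# its letters at positions start, start + skip, start + 2*skip, ...
def find_els(text, word, max_skip):
    for skip in range(1, max_skip + 1):
        for start in range(len(text)):
            candidate = text[start::skip][:len(word)]
            if candidate == word:
                return start, skip
    return None  # not found within the given skip bound

# Placeholder text: "torah" hidden at every second letter from index 1.
print(find_els("xtxoxrxaxh", "torah", 10))  # (1, 2)
```

The controversy is not over finding such sequences, which is trivial, but over whether their frequency and placement are statistically surprising.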