
9.1. Turing Machines

9.1.1. A General Model of Computation

So far we have seen a few simple machine types, such as the DFA, NFA, and PDA, along with their corresponding grammars: regular grammars, context-free grammars, and so on. We have also investigated the concept of non-determinism. Nondeterminism does not affect the power of a DFA (the languages accepted by NFAs are the same as those accepted by DFAs), but it does affect the power of PDAs (nondeterministic PDAs recognize more languages than deterministic PDAs do).

These machines all have some similarities. They all take an input string. They all march across the string from left to right, one character at each step. They stop when they reach the end of the string and then make a simple decision: if the machine is in a final state, it accepts the string; otherwise it rejects it. In other words, they are acceptors for some language. All that they can do is determine whether a string is a member of the language, and the “more powerful” machines can accept or reject strings from more “complicated” languages.

But there is a lot more that we typically expect when we talk about “computation”. Like, well, the ability to “compute” rather than simply accept. What does “compute” actually mean? For example, we would like to be able to take as input the arithmetic expression “\(2 + 3\)” and compute the value 5. This means that we need to be able to output that computed value 5. Or put another way, we can “compute” a value by taking the input string “\(2 + 3\)” and replacing it with the output string “5”.

In its most general form, we can think of everything that a computer does as taking some string as input, and then providing some string as output. Of course, modern peripheral devices like keyboards, mice, and computer monitors give us rich ways to express input strings (perhaps as button presses), and rich ways to interpret output strings (say, as pictures). But it’s not a huge stretch of the imagination to consider computation as converting an input string to an output string.

This concept of converting is far more powerful than simple accepting. A machine that takes an input and provides an output is called a transducer.
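For example, here is a toy sketch in Python (the function and its scope are my own, chosen just for illustration) of a transducer for the arithmetic example above:

    # A toy transducer: it converts an input string into an output string,
    # rather than merely accepting or rejecting. (Hypothetical helper; it
    # only handles expressions of the form "x + y" with integer operands.)
    def transduce(expression: str) -> str:
        left, right = expression.split('+')
        return str(int(left) + int(right))

    print(transduce('2 + 3'))   # prints: 5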

In the next section, we will introduce a simple machine, the Turing Machine, that is a transducer. It is only slightly more complicated than the machines that we have seen so far, and only slightly different in its operation. But these differences are significant. Ultimately, we will see that a Turing Machine can do any computation (in its fundamental ability to convert one string into another) that even the most sophisticated modern computer can do.

Another issue is memory. DFAs have no memory beyond whatever process took them to their current state. PDAs have a stack. This made a huge difference in the languages that can be accepted: DFAs only accept regular languages, while (nondeterministic) PDAs accept any CFL.

Consider these three languages:

  • \(L_1 = \{wcw^R\}\) for \(w\) in \(\Sigma^*, \Sigma = \{a, b\}\).

  • \(L_2 = \{ww^R\}\) for \(w\) in \(\Sigma^*, \Sigma = \{a, b\}\).

  • \(L_3 = \{ww\}\) for \(w\) in \(\Sigma^*, \Sigma = \{a, b\}\).

The differences seem pretty minor, but the outcomes are quite different. \(L_1\) is a deterministic CFL. \(L_2\) is a non-deterministic CFL. \(L_3\) is not a CFL at all.

The difference between \(L_3\) and the others seems to relate to the limitations of the stack. As we read the first \(w\) and load it into the stack, we don’t have a good way to get to the bottom of the stack to see the first letter to compare against the first letter of the second \(w\). But what if we considered other memory models? For example, a queue would solve our problem nicely. But then, a queue would not be able to handle \(L_2\). Two stacks could simulate a queue (do you see how? a sketch follows below), and so a machine with two stacks can handle all three of these languages. Perhaps we should investigate what class of languages a two-stack machine can handle? Perhaps we should consider other memory models?
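To make the two-stack idea concrete, here is a small Python sketch of a queue simulated by two stacks (the names are illustrative, not from the text): pushes go onto one stack, and when the other stack is empty, we pour the first into it so that pops come out in first-in, first-out order:

    # A queue simulated by two stacks, using Python lists as the stacks.
    class TwoStackQueue:
        def __init__(self):
            self.inbox = []    # receives newly enqueued items
            self.outbox = []   # serves items in first-in, first-out order

        def enqueue(self, item):
            self.inbox.append(item)

        def dequeue(self):
            if not self.outbox:
                # Reverse the inbox onto the outbox: the oldest item,
                # buried at the bottom of inbox, ends up on top of outbox.
                while self.inbox:
                    self.outbox.append(self.inbox.pop())
            return self.outbox.pop()

    q = TwoStackQueue()
    for ch in 'abc':
        q.enqueue(ch)
    print(q.dequeue(), q.dequeue(), q.dequeue())   # a b c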

As it turns out, none of these ideas are as effective as the simple memory model that the Turing Machine uses.

9.1.2. Turing Machines


9.1.3. Interpreting Turing Machines


9.1.4. Turing-Decidable vs. Turing-Acceptable Languages


9.1.5. Making More Complicated Machines

Obviously, Turing Machines can take an input and modify it. We will see examples of how this leads to powerful computational capability, even if they do not yet seem so powerful. To get a quick idea of their power, consider the following relatively simple machine to accept \(L(a^nb^nc^n)\). This is significant because this language is in fact not context-free, which means that this simple Turing Machine is doing something that no DFA, NFA, or PDA can do!
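Here is one way such a machine can work, sketched as a Python simulation (the states, the marker symbols X, Y, and Z, and the transition table are one plausible construction, not necessarily the exact machine from this book): on each pass the machine marks one a, one matching b, and one matching c, and it accepts once only markers remain:

    # A one-tape Turing Machine for a^n b^n c^n, simulated in Python.
    # A missing transition means the machine halts and rejects.
    # Transitions: (state, symbol) -> (new state, symbol to write, head move)
    TRANSITIONS = {
        ('q0', 'a'): ('q1', 'X', +1),    # mark an 'a', go find a 'b'
        ('q0', 'Y'): ('q4', 'Y', +1),    # no unmarked 'a' left: verify rest
        ('q0', '_'): ('accept', '_', 0), # empty input: n = 0 is allowed here
        ('q1', 'a'): ('q1', 'a', +1),
        ('q1', 'Y'): ('q1', 'Y', +1),
        ('q1', 'b'): ('q2', 'Y', +1),    # mark the matching 'b'
        ('q2', 'b'): ('q2', 'b', +1),
        ('q2', 'Z'): ('q2', 'Z', +1),
        ('q2', 'c'): ('q3', 'Z', -1),    # mark the matching 'c', sweep back
        ('q3', 'a'): ('q3', 'a', -1),
        ('q3', 'b'): ('q3', 'b', -1),
        ('q3', 'Y'): ('q3', 'Y', -1),
        ('q3', 'Z'): ('q3', 'Z', -1),
        ('q3', 'X'): ('q0', 'X', +1),    # back at the last X: start next pass
        ('q4', 'Y'): ('q4', 'Y', +1),
        ('q4', 'Z'): ('q4', 'Z', +1),
        ('q4', '_'): ('accept', '_', 0), # only markers remained: accept
    }

    def accepts_anbncn(s: str) -> bool:
        tape = dict(enumerate(s))        # sparse tape; '_' is the blank
        state, head = 'q0', 0
        while state != 'accept':
            key = (state, tape.get(head, '_'))
            if key not in TRANSITIONS:
                return False             # no applicable transition: reject
            state, tape[head], move = TRANSITIONS[key]
            head += move
        return True

    for s in ['', 'abc', 'aabbcc', 'aabbc', 'abcabc']:
        print(repr(s), accepts_anbncn(s))   # True, True, True, False, False

Notice that the machine's memory is nothing but the tape itself: by overwriting input symbols with markers and sweeping back and forth, it matches up the a's, b's, and c's without any stack or queue at all.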


9.1.6. Turing Machine Extensions

