
This work is licensed under a Creative Commons License.

Searching the Problem Space

Even though most of the procedures that will be discussed in this section are from the fields of Discrete Mathematics and Artificial Intelligence, it shouldn’t be too hard to conceive of the human brain as using such search procedures in solving its own problems. This is not to say that the individual is always consciously aware of using the specific algorithms presented, but that instead the brain may also use such search routines on the unconscious neuronal level in communications between neural networks.
To support the contention that it is possible to associate mathematical search routines with basic neurological functions, we first turn to the work of Warren McCulloch (1965), who pioneered mathematical modeling of the brain. He was able to show that, because of the "all-or-none" character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. He further showed that nets of neurons with common inputs embody hierarchies of function, each of which can be reduced to input-output functions pictured in their truth tables.
Wilke (1981), in referring to neural models of the brain, proposed that the more primitive functions sustain the higher ones. For example, analytical thinking does not proceed apart from or supersede primary process events. Instead, primary processes are prerequisite to the calling up of higher functions, and also to the execution and termination of their involvement in the problem-solving process.
The work of Sommerhoff (1974) is along the same lines as that of Wilke. Sommerhoff begins qualifying his model by saying that it deals with logically distinct sets of input-output relations, and not necessarily with the anatomical relations of the brain. He simplifies the model further by breaking it up into discrete stages, behavioral patterns, and sequences of events. In his words (p. 100):
Consider some agent-plus-environment system and assume that the activities of the agent are directively correlated to the environment in a manner which has the occurrence of an event A as ultimate goal. It will then be generally true that A is the node of sets of behaviors which converge and fuse at various points of the preceding time interval. Assume, for example, the occurrence of the goal A presupposes the occurrence of one of two alternative conditions, B and C, at some time t1 preceding the occurrence of A. Assume also that each of these presupposes the occurrence of one of two alternative conditions at a time t2, and each of these in turn presupposes one of two alternative conditions at a time t3. If t3 is taken as the initial time, we thus visualize eight hypothetical alternatives (D1-D8) for the initial state of the system as preconditions for the occurrence of A.
What Sommerhoff has done is to reduce behavior or, as concerns us here, problem solving to a tree diagram. Since neurologists in general seem to agree that information is represented in the brain hierarchically via tree structures, with the nodes of the tree representing goals and subgoals, tree-search algorithms may logically be a natural component of human problem solving. A hypothesis proposed here is that search algorithms, which differ in the time needed to generate a solution path, may help to explain the difference in response times between impulsive thinkers and reflective thinkers, who may utilize significantly different search techniques. With this in mind, it is appropriate to study some of the search algorithms that may be utilized by the problem solver.
Basically, problem solving consists of three main components. The first is a database, which describes the current task-domain situation and the goal. The second is the set of operators that are used to manipulate the database. The third is a control strategy for deciding what to do next; in particular, which operator to apply and where to apply it. Applying operators to those structures in the database that describe the task-domain situation, in order to produce a modified situation, is referred to as reasoning forward. The object is to bring the situation forward from the initial condition to one that satisfies the goal. Another strategy is reasoning backward, whereby operators are applied to the goal rather than to the task-domain description. Ideally, the goal statement can be converted into one or more subgoals that are easier to solve and whose solutions are sufficient to solve the original problem.
Backward and forward reasoning can be combined to produce means-ends analysis. This involves comparing the current goal with the current task-domain situation in order to extract the difference between them. Once this has been done, the difference is used to index the operator most relevant to reducing it. If a relevant operator cannot be immediately applied to the present problem state, then subgoals are established that will change the problem state so that the relevant operator can be applied. Once the subgoals are solved, the relevant operator is applied and the modified situation is used as a starting point from which to solve the original goal.
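As a concrete illustration, this subgoaling loop can be sketched in a few lines of Python. Everything in the sketch is hypothetical rather than from the text: states and goals are modeled as sets of facts, and each operator is a triple of preconditions, facts it adds, and facts it deletes (a much-simplified STRIPS-style scheme).

```python
def means_ends(state, goal, operators, depth=10):
    """Compare the goal with the current state, pick an operator relevant
    to reducing the difference, and if it cannot yet be applied, first
    solve the subgoal of establishing its preconditions."""
    if goal <= state:
        return state, []                 # no difference left to reduce
    if depth == 0:
        return None                      # give up on runaway subgoaling
    for name, (pre, add, delete) in operators.items():
        if add & (goal - state):         # operator relevant to the difference
            solved = means_ends(state, pre, operators, depth - 1)
            if solved is None:
                continue                 # preconditions cannot be established
            mid_state, plan = solved
            mid_state = (mid_state - delete) | add      # apply the operator
            rest = means_ends(mid_state, goal, operators, depth - 1)
            if rest is not None:
                final_state, tail = rest
                return final_state, plan + [name] + tail
    return None

# Hypothetical household example: end up with a fixed lamp that is on.
ops = {
    'fetch bulb':   (set(),          {'have bulb'},  set()),
    'replace bulb': ({'have bulb'},  {'lamp fixed'}, {'have bulb'}),
    'switch on':    ({'lamp fixed'}, {'lamp on'},    set()),
}
result = means_ends(set(), {'lamp fixed', 'lamp on'}, ops)
print(result[1])                         # ['fetch bulb', 'replace bulb', 'switch on']
```

Note how "switch on" is relevant to the goal but inapplicable at first, so establishing its precondition ("lamp fixed") becomes a subgoal, exactly as described above.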
The term "search," used here to refer to both human and machine, can be understood as the search for an appropriate operator sequence. In implementing control strategies for the search, trees are commonly used, with the tree representing the set of problem states produced by the operators. The root node of the tree may be used to represent the initial problem situation or state. Each new problem state that can be produced from the initial state by a single operator is represented by a successor node of the root node. Likewise, subsequent operator applications produce successors of these nodes, and so on. Each operator application is represented by a directed arc of the tree. More generally, these states are represented by graphs instead of trees, since there may be different paths from the root to a given node. The problem of producing a state that satisfies a goal condition can now be formulated as the problem of searching a graph to find a node whose associated description satisfies the goal. Similarly, search based on a problem-reduction representation can be represented by the search of an AND/OR graph.
The crucial component of searching for a problem solution is the amount of time and space necessary to find it. What is sought is a method for reducing the number of nodes that must be examined in the search. Many different approaches exist for doing this. One important method is to recast the problem so that the size of the search is reduced. Another aspect concerns search efficiency within a given search space. Here a heuristic search can be used to prevent a combinatorial explosion of possible solutions, the result being more efficient problem solving. Before the actual problem solution can be searched for, it must be represented in either a biological or mechanical substrate.
A state-space representation of a problem employs two kinds of entities: 1) states, which are data structures giving "snapshots" of the condition of the problem at each stage of its solution, and 2) operators, which are the means of transforming the problem from one state to another. It should be noted that an operator may be inapplicable in certain states. The set of all attainable states of a problem is called a state space. If, for a particular state and operator, the operation can be represented in more than one way, then each representation yields a new state (Barr and Feigenbaum, 1981). The state-space problem consists of three components: the set S of initial states, the set O of operators, and the set G of goal states. Hence a state-space problem can be represented by the triple (S, O, G). A solution to the problem is a finite sequence of operator applications that transforms an initial state into a goal state (Barr and Feigenbaum, 1981). A state space can be treated as a directed graph whose nodes are states and whose arcs are the operations transforming one state into another.
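The triple (S, O, G) can be made concrete with a toy example. The two-jug puzzle below is a hypothetical illustration, not one from the text: a state is a pair (a, b) of litres in a 4-litre and a 3-litre jug, the operator function returns every state reachable in one step, and the goal set G is given as a predicate.

```python
def operators(state):
    """Return every state reachable from `state` by one operator:
    filling a jug, emptying a jug, or pouring one into the other."""
    a, b = state
    succ = {(4, b), (a, 3), (0, b), (a, 0)}            # fill / empty
    pour = min(a, 3 - b)
    succ.add((a - pour, b + pour))                     # pour jug a into jug b
    pour = min(b, 4 - a)
    succ.add((a + pour, b - pour))                     # pour jug b into jug a
    succ.discard(state)                                # drop identity "moves"
    return succ

initial = (0, 0)                 # an element of the set S of initial states
goal = lambda s: s[0] == 2       # the set G of goal states, as a predicate
print(sorted(operators(initial)))                      # [(0, 3), (4, 0)]
```

A solution is then any finite chain of such operator applications leading from `initial` to a state satisfying `goal`, and the reachable states with their arcs form exactly the directed graph described above.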
Distinguishable from the state-space representation of problems is a technique called problem-reduction representation. With this approach the principal data structures are goals. The procedure begins by first describing the initial problem. It is then solved by a sequence of transformations (defined as operators) that ultimately change it into a set of subproblems whose solutions are immediate. It should be noted that an operator may change a single problem into several subproblems. Furthermore, several operators may be applied to the same problem, or a single operator may be used in several different ways. According to Barr and Feigenbaum (1981), a problem-reduction representation consists of:

1. An initial problem description. 
2. A set of operators for transforming problems into subproblems. 
3. A set of primitive problem descriptions--a primitive problem being a problem whose solution is immediate.

When problem-reduction procedures are used, tree notation must be generalized if it is to represent the full variety of situations that may occur. This generalized notation is called an AND/OR graph.
According to Nilsson (1971), an AND/OR graph is constructed according to the following rules:

1. Each node represents either a single problem or a set of problems to be solved. The graph contains a start node corresponding to the original problem. 
2. A node representing a primitive problem is called a terminal node and has no descendants. 
3.  For each possible application of an operator to problem P, there is a directed arc from P to a node representing the resulting subproblem set. 
4.  For each node representing a set of two or more subproblems, there are directed arcs from the node for the set to individual nodes for each subproblem. Since all of the subproblems in the set must be solved in order to solve the set, the subproblem nodes are called AND nodes. 
5.  A simplification of the graph produced by rules 3 and 4 may be made in the special case where only one application of an operator is possible for a problem and this operator produces a set of more than one subproblem: the intermediate node representing the subproblem set may then be omitted.

Furthermore, in order to find a solution to an initial problem, one only has to build enough of the graph to show that the start node can be solved.
Barr and Feigenbaum (1981) give the following rules for determining if a node is solvable:

1.  it is a terminal node (a primitive problem), 
2.  it is a nonterminal node whose successors are AND nodes and all of them are solvable, or 
3.  it is a nonterminal node whose successors are OR nodes, and at least one of them is solvable.

Similarly, a node is unsolvable if:

1.  it is a nonterminal node that has no successor (a nonprimitive problem in which no operator applies), 
2.  it is a nonterminal node whose successors are AND nodes and at least one of them is not solvable, or 
3.  it is a nonterminal node whose successors are OR nodes and all of them are unsolvable.
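These rules are naturally recursive, and a minimal Python sketch makes that explicit. It rests on a simplifying assumption not in the text: each nonterminal node is labeled by how its successors combine ('AND' or 'OR'), and the graph, labels, and primitive set below are hypothetical.

```python
def solvable(node, children, kind, terminal):
    """A terminal node (primitive problem) is solvable; an AND node needs
    every successor solvable; an OR node needs at least one; a nonterminal
    node with no successors is unsolvable."""
    if terminal(node):
        return True
    succs = children(node)
    if not succs:
        return False                     # no operator applies
    if kind(node) == 'AND':
        return all(solvable(s, children, kind, terminal) for s in succs)
    return any(solvable(s, children, kind, terminal) for s in succs)

# Hypothetical AND/OR graph: P splits (OR) into Q and R; Q requires (AND)
# both primitives a and b; R requires the dead-end problem x.
graph = {'P': ['Q', 'R'], 'Q': ['a', 'b'], 'R': ['x'], 'x': []}
kinds = {'P': 'OR', 'Q': 'AND', 'R': 'AND', 'x': 'OR'}
primitive = {'a', 'b'}
print(solvable('P', lambda n: graph.get(n, []), kinds.get,
               primitive.__contains__))                # True, via branch Q
```

Node R is unsolvable (its only successor x is a nonterminal with no successors), but P remains solvable because its OR branch Q reduces entirely to primitives.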

Several searching methods are available. Two will be covered here: the blind state-space search and the bidirectional search. Since the algorithms for these types of searches are readily available (see Winston (1984) and Roberts (1984)), only a description of each type and how it attempts to generate problem solutions will be given.
With the blind state-space search, the search for a solution is conducted by making explicit just enough of the state-space graph to contain a solution path. The order in which the potential solution paths are considered is arbitrary, and no domain-specific information is used to judge where the solution is likely to lie. Although blind search is impractical for nontrivial problems because of the large volume of state space that may be explored, it does provide a useful tool for understanding heuristic search techniques. The blind search can be broken down further into breadth-first, uniform-cost, depth-first, and bidirectional searches. These differ from one another mainly in the order in which the nodes are searched. It is assumed that there is a procedure for finding all of the successors of a given node.
The breadth-first method expands nodes in order of their proximity to the start node, measured by the number of arcs between them. In other words, it considers every possible operator sequence of length n before any sequence of length n + 1. What actually takes place is that the goal node is searched for among all nodes at a given level before the children of those nodes are used to push on. Although the search may be an extremely long one, it guarantees that the shortest possible solution sequence, if one exists, will be found.
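A breadth-first search over a state-space graph might be sketched as follows. The adjacency map is hypothetical, and the `successors` argument stands in for the procedure, assumed above, that finds all successors of a node.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand nodes in order of their distance in arcs from the start
    node, so the first solution found is a shortest one."""
    frontier = deque([[start]])          # FIFO queue of paths, shortest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shortest operator sequence
        for succ in successors(node):
            if succ not in visited:      # graph, not tree: skip repeated states
                visited.add(succ)
                frontier.append(path + [succ])
    return None                          # goal unreachable

# Hypothetical state-space graph as an adjacency map.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
print(breadth_first_search('A', 'F', lambda n: graph.get(n, [])))
# ['A', 'B', 'D', 'F']
```

The FIFO queue is what makes the search level-by-level: every path of length n is dequeued before any path of length n + 1 is considered.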
The breadth-first algorithm can be generalized slightly to solve the problem of finding the cheapest path from the start node to a goal state. A nonnegative cost is associated with every arc joining two nodes; the cost of a solution path is then the sum of the arc costs along the path. The generalized algorithm is called a uniform-cost search. If all arcs have equal cost, the algorithm reduces to breadth-first search. The need for assigning costs to the arcs is illustrated by the traveling salesman problem, in which the distances between cities correspond to the arc costs and the problem is to minimize the total distance traveled.
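Generalizing the breadth-first skeleton to cheapest paths gives a uniform-cost search; replacing the FIFO queue with a priority queue keyed on path cost is the only essential change. The arc costs below are hypothetical stand-ins for distances between cities.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the frontier node with the cheapest known path cost first.
    `successors(n)` yields (neighbor, arc_cost) pairs, costs nonnegative."""
    frontier = [(0, start, [start])]     # (path cost so far, node, path)
    best = {start: 0}                    # cheapest cost found per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # cheapest, not necessarily shortest
        if cost > best.get(node, float('inf')):
            continue                     # a cheaper route superseded this entry
        for succ, arc in successors(node):
            new_cost = cost + arc
            if new_cost < best.get(succ, float('inf')):
                best[succ] = new_cost
                heapq.heappush(frontier, (new_cost, succ, path + [succ]))
    return None

# Hypothetical arc costs standing in for distances between cities.
roads = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}
print(uniform_cost_search('A', 'D', lambda n: roads.get(n, [])))
# (3, ['A', 'B', 'C', 'D'])
```

Note that the cheapest route A-B-C-D (cost 3) beats the shorter-in-arcs route A-B-D (cost 6); with all arc costs equal, the procedure behaves exactly like breadth-first search.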
Depth-first search is characterized by the expansion of the most recently generated, or deepest, node first. The depth of a node in a tree is defined as follows: (1) the depth of the start node is 0; (2) the depth of any other node is one more than the depth of its predecessor. As a consequence of expanding the deepest node first, the search follows a single path through the state space downward from the start node; only when it reaches a state having no successors does it consider an alternate path. Alternate paths systematically vary those previously tried, changing only the last n steps while keeping n as small as possible.
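This dive-then-back-up behavior can be sketched recursively; the graph and the depth bound are hypothetical.

```python
def depth_first_search(start, goal, successors, limit=20):
    """Follow a single path downward, backing up only when a node has no
    usable successors; `limit` bounds the depth of the dive."""
    def dive(path):
        node = path[-1]
        if node == goal:
            return path
        if len(path) > limit:
            return None                  # cut off a runaway descent
        for succ in successors(node):
            if succ not in path:         # avoid cycling on the current path
                found = dive(path + [succ])
                if found:
                    return found         # deepest-first: keep diving
        return None                      # dead end: back up one step
    return dive([start])

# Hypothetical graph: the branch through B dead-ends at D, forcing backup.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F']}
print(depth_first_search('A', 'F', lambda n: graph.get(n, [])))
# ['A', 'C', 'E', 'F']
```

The recursion stack holds exactly the current path, which is why depth-first search needs far less memory than breadth-first search, at the price of no shortest-path guarantee.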
Each of the algorithms above uses forward reasoning, working from the start node of a state-space tree toward a goal node and using operators that map each node i to a successor node j. In some cases, the search could equally well have used backward reasoning, moving from the goal state to the start state. Forward and backward reasoning can be combined in a technique called bidirectional search. The idea is to replace a single search graph, which is likely to grow exponentially, with two smaller graphs--one starting from the initial state and one starting from the goal--and to pursue both lines of reasoning in parallel. The search terminates when the two graphs intersect.
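A bidirectional search might be sketched as below, growing one breadth-first layer from each end per iteration and stopping as soon as the frontiers meet. The toy graph is hypothetical and undirected, so the successor and predecessor functions happen to coincide.

```python
from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Grow one breadth-first frontier forward from the start and another
    backward from the goal; stop when the two graphs intersect."""
    if start == goal:
        return [start]
    fwd = {start: [start]}               # node -> known path from start
    bwd = {goal: [goal]}                 # node -> known path to goal
    fq, bq = deque([start]), deque([goal])
    while fq and bq:
        for _ in range(len(fq)):         # expand one layer forward
            node = fq.popleft()
            for succ in successors(node):
                if succ not in fwd:
                    fwd[succ] = fwd[node] + [succ]
                    if succ in bwd:      # the two graphs intersect
                        return fwd[succ] + bwd[succ][1:]
                    fq.append(succ)
        for _ in range(len(bq)):         # expand one layer backward
            node = bq.popleft()
            for pred in predecessors(node):
                if pred not in bwd:
                    bwd[pred] = [pred] + bwd[node]
                    if pred in fwd:
                        return fwd[pred] + bwd[pred][1:]
                    bq.append(pred)
    return None

# Hypothetical undirected chain, so one step function serves both directions.
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
step = lambda n: graph.get(n, [])
print(bidirectional_search('A', 'E', step, step))
# ['A', 'B', 'C', 'D', 'E']
```

Each frontier only has to reach roughly half the solution depth, which is the source of the savings over a single exponentially growing search graph.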
