Abstract

Establishing frameworks for managing uncertainty in decision-making systems has posed many fundamental challenges to system design engineers. The quantum paradigm has been introduced to the decision and control communities as a possible supporting platform for such uncertainty management. This paper presents an overview of how a quantum framework and, in particular, the notion of probability amplitude have been proposed and utilized in the literature to complement two classical probabilistic decision-making approaches. The first framework is based on the Bayesian network, and the second on an element of Dempster–Shafer (DS) theory, namely the definition of the mass function. The paper first presents a summary of these classical approaches, followed by a review of their preliminary enhancements using the quantum model framework. Particular attention is given to how the notion of probability amplitude is utilized in such extensions to the quantum-like framework. Numerical walk-through examples accompany the presentation of each method in order to better demonstrate the extensions of the proposed frameworks. The main objective is to better define and develop a common platform in order to further explore and experiment with this alternative framework as a part of a decision support system.

1. Introduction

The design of autonomous systems, or of systems which can assist or collaborate with people in a semiautonomous framework, has been gaining considerable attention over the past decades [1, 2]. For the majority of these systems, a typical architecture involves a layer of sequenced sensing and perception, followed by a layer of decision-making or actuation. Sensing, and sequences of sensing, in general gives noisy measurements of information from the environment, which can then be used and interpreted by the subsequent decision-making stages [3, 4].

In order to construct a basis for such decision-making frameworks in the presence of uncertainties, two different sets of axioms were proposed in the area of probability theory. One formulation is based on the Kolmogorov axioms (Kolmogorov, 1933/1950) and the other on the von Neumann axioms (von Neumann, 1932/1955). The former organized the principles underlying classical probabilistic applications, while the latter was based on the probabilistic interpretation of the laws underlying quantum mechanics. At the conceptual level, a key difference is that classical probability theory relies on a set-theoretic representation, whereas quantum theory uses a vector space representation. There have been a number of interpretations of quantum mechanics in the context of classical decision-making. In this paper, we present an overview of two classical frameworks which have been used in the context of decision-making and show how the quantum model and, in particular, quantum probability amplitudes have been used within these two classical models.

1.1. Background Review

There exist a number of methodologies proposed in the literature for decision-making under uncertainties. In this section, we highlight several of these methods.

Expected Utility Theory: Expected utility theory (EUT) is a traditional method of decision-making that takes uncertainty into account by tying probabilities to potential outcomes. It entails computing each alternative's expected utility from its likelihoods and potential rewards. The choice is then made based on which option has the highest expected utility.

Prospect Theory: By taking into account how people perceive and assess uncertainties, prospect theory expands on conventional expected utility theory. It incorporates the concepts of risk aversion and loss aversion, where losses loom larger than equivalent gains. To account for these biases, prospect theory includes value functions and probability weighting.

Fuzzy Logic: Fuzzy logic allows for degrees of truth or membership rather than binary values in order to deal with uncertainty. By assigning linguistic terms to express uncertainty, it enables decision-makers to handle vague or ambiguous information. Fuzzy logic makes it easier to reason and make decisions in challenging, unpredictable situations.

Robust Decision-Making: Robust decision-making techniques look for strategies that perform well across a variety of conceivable future scenarios. Rather than optimizing for a particular outcome, these techniques consider the robustness or resilience of decisions under diverse uncertainties. Techniques such as robust optimization and sensitivity analysis are frequently applied in this setting.

Decision Trees: Decision trees are visual representations of decisions and their outcomes in the form of a tree-like structure. By assigning probabilities and values to the various outcomes, decision trees make it easier to evaluate options under uncertainty. Decision trees can be used in conjunction with methods such as expected value analysis and Monte Carlo simulation.
Bayesian Decision Analysis: Bayesian decision analysis uses decision theory and Bayesian inference to make decisions in the face of uncertainty. It entails updating prior beliefs in light of new information to derive posterior probabilities, which are then used to determine the best course of action based on criteria such as expected utility or expected regret.

Stochastic Programming: Stochastic programming is a mathematical optimization technique that accounts for uncertainties modelled by probability distributions. It formulates decision models with specified probabilistic objectives and constraints and then solves them to obtain robust solutions that hold up under a variety of circumstances.

In the still-emerging discipline of quantum decision-making, theories from quantum mechanics are applied to the problem of decision-making. It offers alternative models of decision-making processes by utilizing ideas and mathematical frameworks from quantum physics. Traditional approaches to decision-making frequently rely on conventional probabilities and logic. Quantum decision-making, however, contends that the use of quantum phenomena such as superposition, entanglement, and interference can be advantageous for decision-making processes. There has been a body of work establishing the fundamentals of quantum mechanics within the notion of decision-making under uncertainties. For example, [5] recently applied the quantum computational model to modelling the human affective system and applied the lessons learned to human-robot interaction, handling ambiguous emotional states and probabilistic decisions using quantum logic on fuzzy-set hardware. Reference [6] presents some results in exploring the notion of the interference effect within the context of the Bayesian network and fuzzy set theory. In addition, they utilized Dempster–Shafer evidence theory to transform fuzzy numbers into probabilities. Reference [7] argues that human decision-making follows Bayesian probabilistic reasoning and that there are relationships between Bayesian probability, fuzzy sets, and quantum probability.

This paper presents a tutorial overview of how the notion of quantum probability has been integrated within two classical approaches to decision-making, namely, the Bayesian network and Dempster–Shafer theory (in particular, the notion of the mass function). Through a step-by-step breakdown of the basic approaches, it is hoped that the reader can gain a better understanding of such quantum extensions, enabling further study of future extensions.

The paper is organized as follows: Section 2 presents the main results of this paper, starting with the Bayesian network. It contains a definition of the naive Bayesian network and the associated marginalization, followed by an example of how the quantum model can be integrated within the classical definitions. This section also presents an overview of the Dempster–Shafer theory of combining information, and in particular, how the notion of the mass function, which is the cornerstone of this formulation, is used in the literature as a basis for its extension to the quantum framework. Section 3 presents some discussion and concluding remarks. Throughout the presentation, simple numerical examples are used in order to better demonstrate both of these classical methods and their extensions to the quantum framework and quantum probability amplitudes. It is hoped that this paper can provide a better understanding of some of these example extensions in order to further investigate their properties.

2. Proposed Approach: Overview of Probability Amplitude Applications

This section presents the main results of this paper. The section is divided into two parts: the Bayesian network and Dempster–Shafer theory. The contribution of this section is a step-by-step description, with walk-through examples, of the general methods which have been proposed in the literature for defining how the notion of quantum probability amplitude can be used to enhance these two classical methods.

2.1. Bayesian Network

This section presents an application of the quantum paradigm and probability amplitude which has been proposed in the literature in the construction of the Bayesian network. First, we present a background review of general Bayesian networks in the context of decision-making, followed by an extension to the quantum-enhanced Bayesian network.

The Bayesian method offers an approach for combining evidence according to probability theory [8, 9]. Uncertainties are represented using conditional probabilities and Bayes' rule. The framework can also be defined in a sequential scheme where the degree of belief in a hypothesis is updated based on new information [10-12].

The naive Bayes model is proposed to represent the conditional independence of parameters. This model states that the random variables are conditionally independent given the instance of the parent in the context of a graph model representation. For example, the instance of the parent node C, given the propagated instances of its children's random variables x_1, ..., x_n, can be represented by the following conditional probability:

P(C | x_1, ..., x_n),    (1)

which can further be expanded as follows:

P(C | x_1, ..., x_n) = P(C) P(x_1, ..., x_n | C) / P(x_1, ..., x_n),    (2)

where the probability in the denominator can be treated as a constant of proportionality, since this joint probability does not depend on the instance of the parent node C. As a result, equation (2) can further be expressed in terms of the joint distribution, or

P(C | x_1, ..., x_n) ∝ P(C, x_1, ..., x_n).    (3)

Given the joint probability distribution of equation (3), it can be further expanded using the chain rule to obtain

P(C, x_1, ..., x_n) = P(C) P(x_1 | C) P(x_2 | C, x_1) ⋯ P(x_n | C, x_1, ..., x_{n-1}).    (4)

The naive Bayesian assumption states that each feature x_i defined above is conditionally independent of every other feature x_j (j ≠ i), given the instance of their parent class C. This in turn implies that

P(x_i | C, x_1, ..., x_{i-1}) = P(x_i | C).    (5)

With the above assumption, the joint probability distribution defined in equation (3) can be written as follows:

P(C | x_1, ..., x_n) ∝ P(C) ∏_{i=1}^{n} P(x_i | C),    (6)

where, by using the naive independence assumption again in the normalization factor, equation (6) can be written as follows:

P(C | x_1, ..., x_n) = (1/Z) P(C) ∏_{i=1}^{n} P(x_i | C),  with  Z = Σ_C P(C) ∏_{i=1}^{n} P(x_i | C).    (7)

The above equation can be extended to the case of the Bayesian network, which uses conditional independence and the Markov assumption [9]. Let X = {X_1, ..., X_n} be the set of n random variables of a Bayesian network graph structure. Let Pa(X_i) be the parents of the random variable X_i and NonDesc(X_i) be the variables in the graph that are nondescendants of X_i. The Markov assumption states that each variable X_i is independent of its nondescendants given its parents Pa(X_i). Combining the naive Bayes formula given in equation (7) with the definition of local independences, we can write the following relationship:

P(X_1, ..., X_n) = ∏_{i=1}^{n} P(X_i | Pa(X_i)).    (8)
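As a concrete sketch of the posterior computation in equation (7), the following Python snippet (not from the paper; the prior and conditional probability values are hypothetical assumptions chosen only for illustration) evaluates a naive Bayes posterior for a binary class with two observed features:

```python
from math import prod

# Hypothetical conditional probability tables (illustrative assumptions only).
priors = {"c0": 0.6, "c1": 0.4}                   # P(C)
likelihoods = {                                    # P(x_i | C)
    "c0": {"x1": 0.7, "x2": 0.2},
    "c1": {"x1": 0.3, "x2": 0.9},
}

def naive_bayes_posterior(features):
    """P(C | features) ∝ P(C) * prod_i P(x_i | C), normalized as in eq. (7)."""
    unnorm = {c: priors[c] * prod(likelihoods[c][f] for f in features)
              for c in priors}
    z = sum(unnorm.values())                       # normalization factor Z
    return {c: v / z for c, v in unnorm.items()}

posterior = naive_bayes_posterior(["x1", "x2"])
# unnormalized: c0 = 0.6*0.7*0.2 = 0.084, c1 = 0.4*0.3*0.9 = 0.108, Z = 0.192
```

Note that the normalization by Z is exactly the role played by the denominator dropped in equation (3): it is recovered at the end rather than carried through the computation.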

2.1.1. Quantum-Enhanced Bayesian Network

Within the past decades, there have been many interpretations of how some of the basic definitions of quantum mechanics can be extended to the framework of Bayesian reasoning and decision-making; see [13-17]. Using naive Bayesian network reasoning, interpretations of quantum mechanics have allowed various exploratory studies in generalizing Bayesian decision-making. Such studies can allow the formalization of general tools which can further be enhanced and complemented within the autonomous and/or human-in-the-loop decision-making process.

In this section, through a descriptive example, we explore how some of the basic reasoning in the classical Bayesian network can be complemented with a quantum-type representation. A classical Bayesian network is represented by a directed acyclic graph structure where each node represents a random variable and each edge represents a direct influence of the parent on the child node. Another interpretation of the graph is as a representation of the independence relationships, in terms of the conditional probability over the values of a node given each possible joint assignment of values to its parents.

As an example, let us consider a two-node Bayesian network (X is the parent node and Y the child node), where the probability distribution of node Y is directly influenced by node X. For example, node X can be associated with the state of a robot detecting an overhead light and observing its color change as it passes a certain location along its discrete trajectory (e.g., the light turning blue or orange). Node Y would be the decision that the robot faces in turning left or right (Table 1). Here, we are assuming that the light has an equal probability of changing its color to either blue or orange.

The right column of Table 2 shows the joint distributions over the conditions of this two-node example. As can be seen from the table, the sum of this joint distribution over all values is equal to 1.

In general, given two random variables X and Y, the marginal probability of Y is simply the probability of Y averaged over the information about X [9]. For discrete random variables, such marginalization can be defined as follows:

P(Y = y) = Σ_x P(Y = y, X = x) = Σ_x P(Y = y | X = x) P(X = x).    (9)

The full joint distribution of the Bayesian network is defined based on equation (8), where X_1, ..., X_n is the list of random variables and Pa(X_i) corresponds to the parent nodes of X_i. For example, after computing the joint distributions of the two-node Bayesian network, we need to sum out all the variables that are unknown, say the outcome of X (i.e., the light turning blue or orange). This can be accomplished by applying the probability expansion shown in equation (9). For example, the probability of the robot taking either of the actions given that the observed information was the light turning blue, e.g., P(Y = left | X = blue), can be computed from the entries of Table 2, with the negation probability

P(Y = right | X = blue) = 1 − P(Y = left | X = blue),

which satisfies the requirement P(Y = left | X = blue) + P(Y = right | X = blue) = 1.

The marginal probability, e.g., of the robot turning left or right without detecting or knowing the outcome of the observation (the color change of the light), can be computed from equation (9) as follows:

P(Y = left) = P(Y = left | X = blue) P(X = blue) + P(Y = left | X = orange) P(X = orange).

Similarly, we can compute P(Y = right), which also satisfies the condition P(Y = left) + P(Y = right) = 1.
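The classical marginalization of equation (9) for this two-node example can be sketched as follows. Since the Table 1 entries are not reproduced here, the conditional probabilities below are hypothetical stand-ins; only the equal color prior comes from the text:

```python
# X = light color, Y = turn decision.
p_x = {"blue": 0.5, "orange": 0.5}                # P(X): equal color probability
p_y_given_x = {                                    # P(Y | X): assumed values
    "blue":   {"left": 0.8, "right": 0.2},
    "orange": {"left": 0.3, "right": 0.7},
}

def marginal(y):
    """P(Y=y) = sum_x P(Y=y | X=x) P(X=x), as in equation (9)."""
    return sum(p_y_given_x[x][y] * p_x[x] for x in p_x)

p_left, p_right = marginal("left"), marginal("right")
# p_left = 0.8*0.5 + 0.3*0.5 = 0.55, and p_left + p_right = 1
```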

In the quantum representation of probability, the finite choices in the example which are available at each node of the graph can be represented by a state vector, as a superposition of the basis vectors representing each of the choices (Appendix A–E, [18]). For example, for each of the nodes of our simple Bayesian network, the corresponding state vectors in the respective Hilbert spaces H_X and H_Y can be defined as follows:

|ψ_X⟩ = ψ(blue)|blue⟩ + ψ(orange)|orange⟩,
|ψ_Y⟩ = ψ(left)|left⟩ + ψ(right)|right⟩,

which can be represented in the combined Hilbert space of the two subspaces, H = H_X ⊗ H_Y. The basis of this combined space can be defined using the following tensor product (outer product):

{|blue⟩ ⊗ |left⟩, |blue⟩ ⊗ |right⟩, |orange⟩ ⊗ |left⟩, |orange⟩ ⊗ |right⟩}.

The new combined state vector can then be defined as follows:

|ψ⟩ = Σ_{x,y} ψ(x, y) |x⟩ ⊗ |y⟩,

where ψ(x, y) represents the corresponding probability amplitude along each of the combined basis subspaces in the new space. Referring to the example defined in Table 2 and the Bayesian model, the following probability amplitudes can be defined:

ψ(x, y) = ψ(x) ψ(y | x),  with  P(x, y) = |ψ(x, y)|²,

and the superposition along the corresponding basis space represents a quantum superposition over all possible states of the Bayesian network. Such a superposition can be viewed as a quantum-like full joint distribution. Table 3 shows this distribution for the working example of this section.

In the quantum-enhanced Bayesian network, marginalization can be carried out by following a procedure similar to that highlighted above. However, in the quantum framework, the procedure for computing the marginal probability follows Feynman's second rule for combining probabilities in the associated probability path diagram (Appendix A–E). The second rule states that the probability amplitude of a transition from an initial state node to a final state node, taking multiple indistinguishable paths, is given by the sum of the amplitudes of each path. For example, for the two-node example in Table 3, the quantum-enhanced marginal probability of the robot turning left can be computed as follows:

P(Y = left) = |ψ(blue) ψ(left | blue) + ψ(orange) ψ(left | orange)|²
            = |ψ(blue) ψ(left | blue)|² + |ψ(orange) ψ(left | orange)|²
              + 2 |ψ(blue) ψ(left | blue)| |ψ(orange) ψ(left | orange)| cos θ,

where θ is the relative phase between the two path amplitudes.

Comparing the above computation with the classical example of computing the marginal probability solved previously, it can be seen that when cos θ = 0 (i.e., θ = π/2), the quantum-enhanced marginal probability is the same as the classical computation. In the above expansion, the term 2 |ψ(blue) ψ(left | blue)| |ψ(orange) ψ(left | orange)| cos θ is referred to as the interference term.
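The role of the interference term can be illustrated numerically. In this sketch (hypothetical numbers, consistent with the classical example above), the path amplitude moduli are square roots of assumed classical probabilities, and θ is the relative phase between the two paths:

```python
import cmath
import math

# Path amplitudes psi(x) * psi(left | x); moduli from assumed classical values
# P(blue)=0.5, P(left|blue)=0.8, P(orange)=0.5, P(left|orange)=0.3.
amp_blue_left = math.sqrt(0.5) * math.sqrt(0.8)
amp_orange_left = math.sqrt(0.5) * math.sqrt(0.3)

def quantum_marginal_left(theta):
    """|psi(blue)psi(left|blue) + e^{i*theta} psi(orange)psi(left|orange)|^2."""
    return abs(amp_blue_left + cmath.exp(1j * theta) * amp_orange_left) ** 2

classical = quantum_marginal_left(math.pi / 2)  # cos(theta)=0: no interference
boosted = quantum_marginal_left(0.0)            # constructive interference
```

With θ = π/2 the interference term vanishes and the classical marginal 0.55 is recovered; with other phases the quantity shifts above or below it (and, without further conditions on the amplitudes, need not even remain bounded by one, which is one reason these quantum-like marginals are treated with care).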

2.2. Dempster–Shafer (DS) Formulation of Mass Function

Another broad approach which has been proposed in the literature for combining conditional information obtained through various sources of sensing modalities is based on Dempster–Shafer evidence theory, which is also referred to as a generalization of Bayesian theory [19, 20]. It has been used in various applications of data fusion in sensor networks, e.g., [21]. It provides a formalism that can be used to represent incomplete knowledge, update beliefs, and combine evidence while explicitly representing uncertainty. In the following, we first present an overview of DS theory and its interpretation in the context of multiple-sensor fusion. This can offer the reader a better sense of how the enhanced model using the quantum formulation is utilized within a component of the DS formulation, namely the mass function.

In Dempster–Shafer (DS) theory, each inquiry into a fact has a degree of support between 0 and 1 (i.e., 0 for no support for the fact and 1 for full support). Let a mutually exclusive and exhaustive set of all possible outcomes (conclusions) be given as:

Θ = {θ_1, θ_2, ..., θ_n},    (19)

(for example, these can be measurements from a collection of sensors monitoring a common event), where Θ is referred to as the universe or frame of discernment, and at least one of the θ_i must be true. DS theory is concerned with pieces of evidence which support subsets of outcomes in Θ, represented as elements of the power set 2^Θ over the frame of discernment. For example, for the three-element set Θ = {θ_1, θ_2, θ_3} defined in relation to equation (19), the associated power set can be written as follows:

2^Θ = {∅, {θ_1}, {θ_2}, {θ_3}, {θ_1, θ_2}, {θ_1, θ_3}, {θ_2, θ_3}, Θ},    (20)

where ∅ represents the empty set and has probability 0, and each of the other elements of the power set has a probability between 0 and 1.

The mass function is defined as a mapping m: 2^Θ → [0, 1]. The mass function of a member A of the power set, m(A), is equal to the portion of all evidence that supports the existence of that element of the power set. The value of each m(A) is between 0 and 1, with m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1. If m(A) = 1 for a single subset A, the evidence tells us that the truth is in A for sure, and we have a logical or categorical mass function. Each element A with m(A) > 0 is called a focal element. m is said to be Bayesian if all focal sets of m are singletons. The value m(A) represents the allocation of belief to the possibility that the true value belongs to A. As an example, for our three-sensor model, i.e., Θ = {θ_1, θ_2, θ_3}, masses can be assigned to the members of the power set defined in equation (20), say to {θ_1}, {θ_1, θ_2}, and Θ, summing to 1.

The belief in a member A of the power set, Bel(A), is defined as the sum of the masses of the subsets of A (including A itself), representing the existing evidence supporting A:

Bel(A) = Σ_{B⊆A} m(B).

In the above example, Bel({θ_1, θ_2}) = m({θ_1}) + m({θ_1, θ_2}).

The plausibility of A, Pl(A), is defined as the sum of all the masses in the power set that intersect with the set A:

Pl(A) = Σ_{B∩A≠∅} m(B).

For example, Pl({θ_1, θ_2}) = m({θ_1}) + m({θ_1, θ_2}) + m(Θ). The certainty which we can assign to a given subset A is characterized by the interval [Bel(A), Pl(A)]. This implies that the probability of A falls somewhere between Bel(A) and Pl(A).
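Belief and plausibility follow directly from these definitions. The snippet below computes Bel, Pl, and the certainty interval for A = {θ_1, θ_2}; the mass values are hypothetical, since the numeric example did not survive in the text:

```python
frame = frozenset({"t1", "t2", "t3"})
mass = {                                 # hypothetical mass assignment, sums to 1
    frozenset({"t1"}): 0.5,
    frozenset({"t1", "t2"}): 0.3,
    frame: 0.2,
}

def bel(A):
    """Bel(A): sum of m(B) over focal elements B that are subsets of A."""
    return sum(m for B, m in mass.items() if B <= A)

def pl(A):
    """Pl(A): sum of m(B) over focal elements B that intersect A."""
    return sum(m for B, m in mass.items() if B & A)

A = frozenset({"t1", "t2"})
interval = (bel(A), pl(A))               # the probability of A lies within this
```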

2.2.1. Combining Mass Functions

When different decision-making agents are responsible for gathering information from shared sensed or observed data, various methodologies have been proposed in the literature for combining this information within the DS framework. For example, if two mass functions m_1 and m_2 from two reliable and distinct sources of information are available, the conjunctive rule of combination can be followed. If the sources are distinct but only one of them is reliable (and we do not know which one), the disjunctive rule of combination can be followed to construct a combined mass function [22, 23].

For the case where m_1 and m_2 are two reliable and distinct mass functions over Θ, Dempster's conjunctive rule of combination offers an approach for defining such a combination as follows [24]:

(m_1 ⊕ m_2)(A) = (1 / (1 − K)) Σ_{B∩C=A} m_1(B) m_2(C),  for A ≠ ∅,    (21)

where K is referred to as the degree of conflict, defined as follows:

K = Σ_{B∩C=∅} m_1(B) m_2(C).    (22)

As an example of the utilization of the above rule, let us assume that we again have three sensors, i.e., Θ = {θ_1, θ_2, θ_3}, and that there exist two decision-making modules whose outputs need to be fused in order to obtain their combined information for determining the presence of an object in the common monitored area. Let us define two mass functions m_1 and m_2 over the power set of the sensor set, each assigning mass to a small number of focal elements.

For this example, the measure (or degree) of conflict defined in equation (22) can be computed by summing the products m_1(B) m_2(C) over all pairs of focal elements B and C with empty intersection.

A measure of the common shared belief of the two decision-makers, given the measured or observed value of sensor one, i.e., (m_1 ⊕ m_2)({θ_1}), is computed from equation (21) by summing the products of masses of all pairs of focal elements whose intersection is {θ_1}, and normalizing by 1 − K; similarly, the shared belief for the second sensor, (m_1 ⊕ m_2)({θ_2}), is computed from the pairs whose intersection is {θ_2}.

Table 4 summarizes the complete combined shared beliefs using the definitions of the two mass functions.
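Dempster's rule of equations (21) and (22) can be sketched as follows; the two mass functions here are hypothetical stand-ins for the example values:

```python
# Hypothetical mass functions over a three-element frame {a, b, c}.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3,
      frozenset({"a", "b", "c"}): 0.2}

def dempster_combine(m1, m2):
    """Conjunctive combination (eq. (21)) with degree of conflict K (eq. (22))."""
    conflict, combined = 0.0, {}
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2            # mass committed to the empty set
    return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

fused, k = dempster_combine(m1, m2)
# only {a} and {b} intersect emptily here: k = 0.6 * 0.3 = 0.18,
# and the fused masses renormalize to sum to 1
```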

2.2.2. Quantum-Enhanced Mass Function

There have been a number of approaches proposed in the literature to extend the framework of DS evidence theory and, in particular, the notion of the mass function within the context of the quantum probability construction [25-28].

Here we present an overview of one of the approaches, in which the state description captures a quantized representation of the mass function over the basis defined by the Dempster rule of combining mass functions [29]. The representation is defined as a quantum state with a well-defined quantum superposition.

Referring to the previous section, given a set of observations/measurements defined over the frame of discernment Θ = {θ_1, ..., θ_n}, its quantum representation is written over the basis states |θ_1⟩, ..., |θ_n⟩ (Appendix A–E). This quantum state can further be written as |ψ⟩ = Σ_i α_i |θ_i⟩. The probability of each basis state is |α_i|², where Σ_i |α_i|² = 1.

The quantum mass function is defined on 2^Θ as follows:

|M⟩ = Σ_{A⊆Θ} m(A) |A⟩,  with  m(A) = |m(A)| e^{iθ(A)},    (27)

where 0 ≤ |m(A)| ≤ 1, m(∅) = 0, Σ_{A⊆Θ} |m(A)|² = 1, and Bel(A) ≤ |m(A)|² ≤ Pl(A) (where, from the previous section, we have Bel(A) as the belief and Pl(A) as the plausibility of an event), and m(A) is the corresponding probability amplitude.

Equation (27) represents the mass function as a superposition of states |A⟩, in which m(A) is the probability amplitude associated with basis state |A⟩, and |m(A)| and θ(A) are the modulus and phase angle of this probability amplitude, respectively (Appendix A–E). The condition Σ_{A⊆Θ} |m(A)|² = 1 represents the normalization required by the definition of a standard quantum state. In addition, the modulus of each probability amplitude, i.e., |m(A)|, has to meet the DS constraints defined in the previous section over the associated power set, so that the squared moduli |m(A)|² form a valid classical mass assignment.

As an example, let us again define a frame of discernment consisting of the outcomes of three sensors, Θ = {θ_1, θ_2, θ_3}, with the associated power set defined in the previous section. Let us define two mass functions over 2^Θ, m_1 and m_2, each assigning mass to a subset of the focal elements.

From the definition in equation (27), the first mass function in its quantum representation over 2^Θ is defined as the superposition |M_1⟩ = Σ_{A⊆Θ} m_1(A)|A⟩, with constraints on the modulus of each probability amplitude (i.e., using the belief and plausibility constraints) of the form Bel_1(A) ≤ |m_1(A)|² ≤ Pl_1(A), together with the normalization Σ_{A⊆Θ} |m_1(A)|² = 1.

The second example mass function over 2^Θ is defined analogously as |M_2⟩ = Σ_{A⊆Θ} m_2(A)|A⟩, with the corresponding constraints on the moduli of its probability amplitudes and the normalization Σ_{A⊆Θ} |m_2(A)|² = 1.

2.2.3. Combining Quantum Mass Functions

There exist a number of approaches for obtaining a combined (averaged) description of quantum mass functions which are extensions of classical methods [30]. This section presents an overview example of a method used to obtain such a combined (averaged) description of the quantum mass function, proposed by [29]. Let the frame of discernment be defined by Θ and, as in the previous section, let us define the quantum mass functions |M_1⟩ and |M_2⟩ of the two pieces of evidence, with the associated constraints on the moduli of their probability amplitudes. Let us also define w_1 and w_2, which correspond to the weight assignments of these information evidences (Table 5).

A weighted combination of the two quantum mass functions can be defined as follows:

|M⟩ = √w_1 |M_1⟩ + √w_2 |M_2⟩,    (30)

subject to the constraints defined in Table 5 and the following additional constraints: w_1 + w_2 = 1, w_1, w_2 ≥ 0, and, for each A ⊆ Θ,

m(A) = √w_1 m_1(A) + √w_2 m_2(A),    (31)

where

|m(A)|² = w_1 |m_1(A)|² + w_2 |m_2(A)|² + 2 √(w_1 w_2) |m_1(A)| |m_2(A)| cos(θ_1(A) − θ_2(A)).    (32)

The probability amplitudes of the basis state |A⟩ in |M_1⟩ and |M_2⟩ are m_1(A) and m_2(A), respectively. The quantum averaging takes the weighted average of m_1(A) and m_2(A) in order to generate the combined probability amplitude of |A⟩, i.e., m(A) = √w_1 m_1(A) + √w_2 m_2(A). The combined mass is in turn defined as the square of the modulus of this probability amplitude, |m(A)|².

The last term in equation (32) is referred to as the interference term between m_1 and m_2 on |A⟩ (Appendix A–E). The phase difference θ_1(A) − θ_2(A) represents the interaction of the mass functions on the same sensed event, which cannot be determined through the individual mass functions. For the case when θ_1(A) − θ_2(A) = π/2, the above averaging reduces to the classical averaging technique. As this angle decreases, the effect of interference between the two mass functions increases. When θ_1(A) − θ_2(A) = 0, there exists maximum information for mutual support between the two mass functions. For the purpose of demonstration, in the following example we assume that there exists maximum information and mutual support between the mass functions. The quantum averaging of equation (30) can then be written as follows:

|m(A)|² = w_1 |m_1(A)|² + w_2 |m_2(A)|² + 2 √(w_1 w_2) |m_1(A)| |m_2(A)|,    (33)

subject to the constraints associated with the probability amplitudes of each of the mass functions and the simplified constraint relationships defined in equations (31) and (32), or

m(A) = √w_1 |m_1(A)| + √w_2 |m_2(A)|,    (34)

Σ_{A⊆Θ} |m(A)|² = 1.    (35)

The following example demonstrates the above algorithm for combining two quantum representations of mass functions. Let the frame of discernment be Θ = {θ_1, θ_2, θ_3}, and define two mass functions m_1 and m_2 over its power set. Following the description of Table 5, the quantum representations of these mass functions can be written as superpositions |M_1⟩ and |M_2⟩ whose amplitude moduli satisfy the belief and plausibility constraints of each mass function.

Using the definition of the combined quantum mass functions (equations (34) and (35)) with a chosen weight assignment w_1 and w_2 = 1 − w_1, we can define an instance of the combined quantum mass function by satisfying the constraints associated with the individual and combined probability amplitudes.

The quantum combination of the two mass functions can then be written as the superposition |M⟩ = Σ_{A⊆Θ} m(A)|A⟩, with the coefficients m(A) given in Table 6.

In the expansion of the above equation (and as highlighted in equations (31) and (35)), the term 2 √(w_1 w_2) |m_1(A)| |m_2(A)| represents the interference term in the quantum combination of the mass functions.
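Under the maximum-mutual-support assumption (zero phase difference), the quantum averaging of equations (33)-(35) can be sketched as below. The masses and weights are hypothetical, and the final renormalization step reflects the normalization constraint (35) on the combined amplitudes:

```python
import math

# Hypothetical squared-modulus assignments (classical masses) for two evidences.
m1 = {"A": 0.7, "B": 0.3}
m2 = {"A": 0.4, "B": 0.6}
w1, w2 = 0.5, 0.5                        # weights, w1 + w2 = 1

def quantum_average(m1, m2, w1, w2):
    """Combined amplitude sqrt(w1)|m1| + sqrt(w2)|m2| (phase difference 0),
    squared and then renormalized so the combined masses sum to one."""
    out = {}
    for A in m1:
        amp = math.sqrt(w1 * m1[A]) + math.sqrt(w2 * m2[A])
        out[A] = amp ** 2                # includes the 2*sqrt(w1*w2*m1*m2) term
    total = sum(out.values())
    return {A: v / total for A, v in out.items()}

combined = quantum_average(m1, m2, w1, w2)
```

Without the interference term this would reduce to the classical weighted average w1*m1(A) + w2*m2(A); the constructive interference redistributes mass toward elements on which the two evidences agree.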

3. Discussion and Conclusions

Managing uncertainties in a decision-making process which needs to incorporate many sources of information/sensing is a challenging task. The difficulty of this challenge is further compounded when, for example, dealing with the design of an autonomous decision-making system involving multiple sensing modalities mounted on robots responsible for monitoring the movements and activities of people. Historically, a number of decision-making frameworks have been proposed in the literature based on deterministic reasoning and various extensions of Bayesian/fuzzy reasoning. This paper presents an overview of examples proposed in the literature for extending the basis of the classical frameworks of decision-making with various interpretations of quantum mechanics. These extensions, which may be viewed as parsimonious models, can offer frameworks to further study the similarities and limitations of the classical decision-making methods. In this paper, we focus only on a review of two of the classical methods, namely, the Bayesian network and Dempster–Shafer theory. For both approaches, we first present a review of the method, followed by its proposed quantum counterpart. The review of the quantum extensions presented here also focuses only on the probability amplitude part defined in the quantum framework. The paper highlights, through various walk-through examples, where the quantum framework differs from the classical one. There still remain many questions regarding the feasibility of the quantum framework within decision-making paradigms. Recently, there have been many initiatives which try to explore relationships between human ways of reasoning, fuzzy concepts, and quantum mechanics.

Appendix

A. Quantum State

Outcomes or events which are associated with a random phenomenon or experiment are defined within the sample space, usually denoted by Ω. As a descriptive example, let us consider an object constrained to move in the horizontal plane, which we are interested in tracking. The sample space can be defined as the instantiation of the movements that the object can take in either of the independent directions of movement, namely lateral (left/right) (La) or longitudinal (Lo) incremental (discrete) movements. We refer to these instances of example movements as mutually exclusive events. This means that the choices that the tracked object has in selecting either direction are mutually exclusive and do not depend on each other. In other words, at any instant, the decision that the object makes is a superposition of these basic primitives (states).

In the quantum framework, outcomes and events are defined within a finite-dimensional Hilbert space H. In general, this is a vector space over the complex numbers equipped with an inner product structure, which allows a distance function to be defined. Similar to Euclidean space, this space is also defined within the span of basis vectors which are orthonormal to each other. For the working example of this paper, these can be the choices of movement of an object in a room, where the space is spanned by the two bases |La⟩ and |Lo⟩ that define the instantaneous choices of movement (|·⟩ denotes a column vector and is referred to as a ket).

Outcomes and events are given geometrical meaning. Uncertainty in the selection of the possible direction of movement by the object is represented by a state vector |ψ⟩, which can capture the occurrence of all events [31]. This is in contrast to classical probability theory, where at any instance, the occurrence of each event is represented in a sequential order as belonging to a set. In the quantum framework, we can represent all possible occurrences of events at the same time through the vector representation. Similar to the classical theory, the mutual exclusiveness of events is represented by orthonormal vectors. In our example, an instance of the state can be defined as a superposition of the choices for the probability of movements:

|ψ⟩ = a_La |La⟩ + a_Lo |Lo⟩,

where a_La and a_Lo are determined by the relative position angles that the state makes with respect to each of the bases in the unit sphere (ball) description. a_La and a_Lo are generally referred to as the probability amplitudes along each of the basis subspaces. These amplitudes can now be used to define the probability of the subject selecting either of the directions. The magnitude of these probabilities is obtained by squaring the associated projected magnitude along a given basis subspace through Born's rule (see the following). To compute such a probability, each probability amplitude is multiplied by its complex conjugate. For example, the probability (Pr) of the tracked subject moving in the lateral direction, i.e., Pr(La), at the instant of measurement can be obtained as follows [18, 32]:

Pr(La) = |⟨La|ψ⟩|² = a_La a_La* = |a_La|².

A quantum state such as the one described above lies on the unit circle (as seen from the quantum theory framework), and one of the axioms of the theory requires the sum of the squared magnitudes of the amplitudes (projections) to equal one, or

$$|a_{La}|^2 + |a_{Lo}|^2 = 1.$$

This axiom is called the normalization axiom (unitarity) [33] and corresponds to the constraint in classical probability theory that the probabilities of all events in a sample space sum to one. Given the quantum state $|S\rangle$, the probability of an event along any of the independent subspaces can also be obtained by squaring the magnitude of the projection of the state onto the corresponding subspace. As an example, for the quantum state defined above, the projection operators $P_{La}$ and $P_{Lo}$ onto each of the subspaces are defined by the exterior product, or

$$P_{La} = |La\rangle\langle La|, \qquad P_{Lo} = |Lo\rangle\langle Lo|,$$

which are the outer products (tensor products) of the basis states (subspaces) $|La\rangle$ and $|Lo\rangle$, respectively. The squared measure of the length of the vector (i.e., the probability measure), which is the square of the resultant projection of the superposition state vector onto a desired subspace, can now be computed. For example, the projection of the state onto the subspace $|Lo\rangle$ is [32]

$$P_{Lo}|S\rangle = |Lo\rangle\langle Lo \mid S\rangle = a_{Lo}\,|Lo\rangle.$$

Using the complex angle description and expansion, the probability along the longitudinal direction is

$$\Pr(Lo) = \big\| P_{Lo}|S\rangle \big\|^2 = a_{Lo}\,a_{Lo}^{*} = |a_{Lo}|^2.$$

The collapse of the state (or the wave function) onto a subspace gives the probability of associating a measurement (or observation) with that subspace. An interpretation of the above is that, until the autonomous object commits to a direction of movement, it is in a state of superposition of all the choices (subspaces); for example, the probability of choosing a movement along the Lo direction could be 0.75.
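The projection and Born's rule computations above can be sketched numerically as follows. This is a minimal illustration: the basis labels, the use of NumPy, and the choice of a 60° angle (so that $\Pr(Lo) = \sin^2 60^\circ = 0.75$, matching the example value) are our own assumptions.

```python
import numpy as np

# Two orthonormal basis kets for the movement choices (labels assumed):
# |La> = lateral, |Lo> = longitudinal.
ket_La = np.array([1.0, 0.0], dtype=complex)
ket_Lo = np.array([0.0, 1.0], dtype=complex)

# Superposition state |S> = cos(theta)|La> + sin(theta)|Lo>.
# theta = 60 degrees is an assumed value giving Pr(Lo) = 0.75.
theta = np.deg2rad(60.0)
S = np.cos(theta) * ket_La + np.sin(theta) * ket_Lo

# Projector onto the Lo subspace, built as the outer product |Lo><Lo|.
P_Lo = np.outer(ket_Lo, ket_Lo.conj())

# Born's rule: probability = squared magnitude of the projection.
proj = P_Lo @ S
pr_Lo = float(np.vdot(proj, proj).real)

print(round(pr_Lo, 3))                  # 0.75
print(round(np.vdot(S, S).real, 3))     # normalization axiom: 1.0
```

The last line checks the normalization (unitarity) axiom: the squared amplitudes along the two subspaces sum to one.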

B. Rule

Probabilities are assigned to histories, or sequences of events, based on an interpretation rule [34]. This rule states that, in order to compute the probability of ordered events, say events $A$ and $B$ spanned by two different bases in two spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, one should first compute the probability of observing/measuring the event $A$ and then calculate the probability of the occurrence of $B$ following $A$. The following expressions present the associated sequence of such a computation, based on the results presented in [35, 36].

Let the first sensor have basis states $|A\rangle$ and $|\bar{A}\rangle$ spanning $\mathcal{H}_A$, and let the second sensor have basis states $|B\rangle$ and $|\bar{B}\rangle$ spanning $\mathcal{H}_B$. After observing/measuring the movement of the subject corresponding to the event $A$, we can compute the following probability, which also defines the revised state of the subject, i.e., $|S_A\rangle$:

$$\Pr(A) = \big\| P_A |S\rangle \big\|^2,$$

with the revised state

$$|S_A\rangle = \frac{P_A |S\rangle}{\big\| P_A |S\rangle \big\|}.$$

The probability of an event $B$, given that we have observed/measured event $A$, is given by the square of the projection of the revised state vector corresponding to event $A$ onto the subspace related to event $B$. This is equivalent to $\Pr(B \mid A) = \| P_B |S_A\rangle \|^2$. According to the rule, the probability of $A$ followed by $B$ is given by $\Pr(A)\,\Pr(B \mid A)$, or

$$\Pr(A \text{ then } B) = \big\| P_B P_A |S\rangle \big\|^2.$$

The above equation is the projection of the initial state onto the subspace of the first sensor measurement through the projection operator $P_A$. This results in the new state, which is then projected onto the subspace of the second sensor through the projection operator $P_B$.
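The sequential-measurement computation above can be sketched numerically. All numeric values here (the state angle and the 30° rotation between the two sensor bases) are our own assumptions for illustration; the point is that the projector composition $\|P_B P_A |S\rangle\|^2$ reproduces the product $\Pr(A)\,\Pr(B \mid A)$.

```python
import numpy as np

# Assumed 2-D example: event A in the first sensor's basis, event B in a
# second sensor's basis rotated by an assumed angle phi.
ket_A = np.array([1.0, 0.0], dtype=complex)
phi = np.deg2rad(30.0)
ket_B = np.array([np.cos(phi), np.sin(phi)], dtype=complex)

P_A = np.outer(ket_A, ket_A.conj())   # projector onto event A
P_B = np.outer(ket_B, ket_B.conj())   # projector onto event B

# Initial state |S> (assumed angle).
theta = np.deg2rad(60.0)
S = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

# Pr(A) and the revised (collapsed, renormalized) state |S_A>.
proj_A = P_A @ S
pr_A = float(np.vdot(proj_A, proj_A).real)
S_A = proj_A / np.linalg.norm(proj_A)

# Pr(B|A) from the revised state, and Pr(A then B) via composed projectors.
pr_B_given_A = float(np.vdot(P_B @ S_A, P_B @ S_A).real)
pr_A_then_B = float(np.linalg.norm(P_B @ P_A @ S) ** 2)

# The sequence rule: Pr(A then B) = Pr(A) * Pr(B|A).
print(np.isclose(pr_A_then_B, pr_A * pr_B_given_A))   # True
```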

C. Probability Path Diagrams

Path diagrams are an important visualization approach for describing the interconnections and dependencies between a set of uncertain event variables. The description was originally inspired by the interference effect observed in the famous double-slit experiment and its associated probability distributions [36, 37].

Using the example set-up of this paper, an interpretation of the interference effect can be stated as follows. Given the state $|S\rangle$ of the tracked subject as described above, suppose that the subject is at the instance of selecting between the directions of movement and has available to it information/measurement (with a probability distribution) that can be used to assist in deciding between the choices of incremental movement (here, by default, we assume that the outcome of a measurement from any sensing modality has some uncertainty, i.e., a probability distribution, associated with it). Given this information, the subject will then decide whether or not to use it in deciding on the direction of movement. This interpretation can also be extended to the case where the subject decides on the direction of movement without knowing (or having access to) such available information, or simply ignores the availability and presence of such information [31, 36].

In order to capture and compute the probability model of the subject under the various path configurations that can arise from the above scenarios (see Figure 1), Feynman [37] devised a number of rules for asserting the probability of transition between the various nodes. For example, the first rule, for asserting the probability of a sequence of events along a single path, states that the probability amplitude of the path is the product of the amplitudes of each of the transition paths. This is equivalent to the classical product of the conditional probabilities $P$ of transitions between each of the nodes along a single path, or

$$\psi(\text{path}) = \prod_{k} \psi_{k},$$

where $\psi_k$ is the amplitude of the $k$th transition along the path; equivalently, using the associated projection operators, the probability of the path can be written as

$$\Pr(\text{path}) = \big\| P_{n} \cdots P_{2} P_{1} |S\rangle \big\|^2,$$

where the symbol $\psi$ is used to represent a complex probability amplitude. For such single paths, the probability of the path is similar to that of a Markov model, with the squared magnitudes of the amplitudes replacing the conditional probabilities.
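Feynman's first rule can be sketched numerically as follows. The two transition amplitudes (magnitudes and phases) are arbitrary assumed values; the sketch checks that, for a single path, squaring the product of amplitudes agrees with the classical Markov product of the transition probabilities.

```python
import numpy as np

# Assumed complex transition amplitudes along a single path with two hops.
psi_hop1 = 0.8 * np.exp(1j * np.pi / 6)
psi_hop2 = 0.5 * np.exp(1j * np.pi / 3)

# First rule: the path amplitude is the product of the transition amplitudes.
psi_path = psi_hop1 * psi_hop2

# Born's rule: path probability is the squared magnitude of the amplitude.
pr_path = abs(psi_path) ** 2

# Classical Markov product of the per-transition probabilities |psi|^2.
pr_markov = abs(psi_hop1) ** 2 * abs(psi_hop2) ** 2

print(np.isclose(pr_path, pr_markov))   # True: phases cancel for one path
```

For a single path the phases play no role, which is why the quantum and Markov computations coincide; they only diverge once multiple indistinguishable paths are superposed, as discussed next.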

An indistinguishable path refers to the case when the subject arrives at a decision on moving in either the lateral or the longitudinal direction while having some measure of information available to it regarding the selection of the direction, but does not consider using this information (or is not aware of its presence). Referring to Figure 1, an example corresponds to the case where the subject starts at the initial state and arrives at the final state by transiting through multiple possible paths, without knowing for certain which path was taken to reach the goal state.

Quantum probability theory states that when the paths are indistinguishable, the goal state can be reached through a superposition of path trajectories. This is known as Feynman's second rule. It states, for example, that the amplitude of a transition from an initial state to the final state, taking multiple indistinguishable paths, is given by the sum of the amplitudes of the individual paths. This rule is in accordance with the law of total amplitude, and the probability is computed by taking the squared magnitude of this sum, or

$$\Pr = \Big| \sum_i \psi_i \Big|^2,$$

where $\psi_i$ is the amplitude of the $i$th path.

Contrary to the above decomposition, if the above-mentioned paths are observed/measured, the definition in quantum probability theory is the same as in a Markov model (this is also known as Feynman's third rule). The rule states that the probability of observed/measured multiple path trajectories is the sum of the probabilities of the individual paths, where, following Born's rule, the probability of each path is the squared magnitude of its amplitude. For the case example mentioned above, this can be written as follows:

$$\Pr = \sum_i |\psi_i|^2.$$
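The difference between the second rule (indistinguishable paths) and the third rule (observed paths) can be sketched as follows. The path amplitudes and the relative phase are assumed values, chosen so that the interference between the two paths is visible.

```python
import numpy as np

# Two paths from the initial to the final state, with assumed amplitudes.
# The relative phase of 60 degrees is chosen to make interference nonzero.
psi_path1 = 0.5 * np.exp(1j * 0.0)
psi_path2 = 0.5 * np.exp(1j * np.pi / 3)

# Second rule (indistinguishable paths): amplitudes superpose, then square.
pr_indistinguishable = abs(psi_path1 + psi_path2) ** 2

# Third rule (observed/measured paths): squared magnitudes add, as in a
# Markov model.
pr_distinguishable = abs(psi_path1) ** 2 + abs(psi_path2) ** 2

print(round(pr_indistinguishable, 3))   # 0.75 (includes interference term)
print(round(pr_distinguishable, 3))     # 0.5
```

The gap between the two results, $2|\psi_1||\psi_2|\cos(\pi/3) = 0.25$ here, is exactly the interference term discussed in Section E below.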

D. Born’s Rule

The following interpretation of the Born rule is partly based on the results presented in [38]. Let $a$ be an observable of a quantum-mechanical model of a system, represented by a self-adjoint operator on a Hilbert space $\mathcal{H}$ with inner product $\langle \cdot, \cdot \rangle$. Assume that $a$ has a nondegenerate discrete spectrum, which in our case of a self-adjoint operator implies that $a$ has an orthonormal basis of eigenvectors $e_i$ with corresponding real eigenvalues $\lambda_i$, i.e., $a\,e_i = \lambda_i e_i$.

A fundamental assumption underlying the Born rule is that a measurement of the observable $a$ will produce one of its eigenvalues $\lambda_i$. Let $\psi \in \mathcal{H}$ be a unit vector; then the Born rule states: if the system is in the state $\psi$, the probability that the eigenvalue $\lambda_i$ of $a$ is found when $a$ is measured is

$$\Pr(\lambda_i) = |\langle e_i, \psi \rangle|^2.$$

In other words, if $\psi = \sum_i c_i e_i$ (with $c_i = \langle e_i, \psi \rangle$), then

$$\Pr(\lambda_i) = |c_i|^2.$$
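The eigenbasis form of the Born rule can be sketched numerically. The particular operator entries and state are illustrative values only; the sketch verifies that the resulting probabilities $|c_i|^2$ are nonnegative and sum to one, as the rule requires for a unit state vector.

```python
import numpy as np

# An illustrative self-adjoint operator (observable) on a 2-D Hilbert space.
a = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(a, a.conj().T)   # self-adjoint: a equals its adjoint

# Orthonormal eigenvectors e_i (columns) and real eigenvalues lambda_i.
eigvals, eigvecs = np.linalg.eigh(a)

# An illustrative unit state vector psi.
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Born rule: c_i = <e_i, psi>, Pr(lambda_i) = |c_i|^2.
c = eigvecs.conj().T @ psi
probs = np.abs(c) ** 2

print(round(float(probs.sum()), 3))   # 1.0: probabilities sum to one
```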

E. Interference Effects

As stated previously, the relationship between a classical probability density function and a quantum probability amplitude is given by Born's rule: the probability $p$ is the squared magnitude of a complex amplitude $\psi$, obtained as $p = |\psi|^2 = \psi\,\psi^{*}$.

Given a set of disjoint events $A_1, \ldots, A_n$, the law of total probability can be formulated as follows [39]:

$$\Pr(B) = \sum_{i=1}^{n} \Pr(A_i)\,\Pr(B \mid A_i).$$

The quantum probability equivalent of the above total probability can be written as follows:

$$\psi_B = \sum_{i=1}^{n} \psi_{A_i}\,\psi_{B \mid A_i}.$$

Multiplying the expression by its complex conjugate and simplifying, the expression for $\Pr(B)$ can be written as follows:

$$\Pr(B) = |\psi_B|^2 = \sum_{i=1}^{n} |\psi_{A_i}|^2 |\psi_{B \mid A_i}|^2 + 2 \sum_{i<j} |\psi_{A_i}||\psi_{B \mid A_i}||\psi_{A_j}||\psi_{B \mid A_j}| \cos\theta_{ij},$$

where $\theta_{ij}$ is the phase angle between the $i$th and $j$th composite amplitudes; the second sum is the interference term.

As can be seen from the above, when $\cos\theta_{ij} = 0$, quantum probability theory converges to its classical counterpart because the interference term is zero. For nonzero values, the interference term can affect the classical probability either constructively or destructively.
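The interference effect can be sketched numerically as follows. The amplitude magnitudes and the 120° relative phase are assumed values, chosen so that the quantum total probability falls below the classical one (destructive interference); a phase of 90° would instead make the two coincide.

```python
import numpy as np

# Assumed amplitudes for two disjoint intermediate events A1, A2,
# normalized so that |psi_A1|^2 + |psi_A2|^2 = 1.
psi_A1 = np.sqrt(0.5)
psi_A2 = np.sqrt(0.5)

# Assumed conditional amplitudes for reaching B, with a relative phase.
delta = np.deg2rad(120.0)
psi_B_A1 = np.sqrt(0.4)
psi_B_A2 = np.sqrt(0.6) * np.exp(1j * delta)

# Classical law of total probability (squared magnitudes throughout).
pr_classical = (abs(psi_A1) ** 2 * abs(psi_B_A1) ** 2
                + abs(psi_A2) ** 2 * abs(psi_B_A2) ** 2)

# Quantum version: amplitudes superpose before squaring, producing an
# extra interference (cross) term.
pr_quantum = abs(psi_A1 * psi_B_A1 + psi_A2 * psi_B_A2) ** 2
interference = pr_quantum - pr_classical

print(round(pr_classical, 3))   # 0.5
print(interference < 0)         # True: destructive interference here
```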

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was partially supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada and by Simon Fraser University.