Classical and Quantum Information Acquisition: Measurement and POVM

This paper shows that classical and quantum measurement can be treated on the same footing, provided that we make use of effects instead of projectors, of POVMs instead of ordinary projective measurements, and of amplitude operators instead of amplitudes.


The Classical Case
To measure is to acquire some piece of information about something; it can therefore also be considered a kind of information exchange. An unjustified generalization of the classical theory of communication has produced two misunderstandings (or two unwarranted generalizations) [2, Sec. 2.3]:
• There is already information selection at the start of the information exchange. This is true for controlled information exchange under certain conditions, but it is not true for quantum-mechanical systems (which can be in an initial superposition state), nor is it true for living systems, which in general try to extrapolate (or to guess) the vital meaning of an uncertain signal (Who is the sender? To which purpose?) once they receive a certain piece of information. The case of mimicry is sufficiently clear: an innocent signal may obscure a real danger.
• Given the previous assumption, the model that has been imposed is the match or mismatch between input and output. However, this does not correspond to the real situation of most information exchanges. This is the reason why in AI and PDP the so-called hidden units (bridging between inputs and outputs) were introduced. In the following I shall introduce an interface of this kind between input and output, represented classically by data and quantum-mechanically by the coupling between object system and apparatus. Seen in this perspective, classical and quantum measurement are less far apart than it is customary to assume.
Classically, we have an unknown parameter k whose value we wish to know and some data d pertaining to a set D at our disposal. We never have direct access to systems or events (whose properties are described by k) but always to things or systems through data. These data can be represented by the position of the pointer of our measuring apparatus, or simply by the impulse our sensory system has received, or even by the way we receive information about the position of the pointer through our sensory system. It does not matter how long this chain may be. The important point is a matter of principle: we can receive information about objects and events only conditionally from the data at our disposal.
Obviously, once we have observed or acquired data, we must perform an information extrapolation that allows us to guess the value of the parameter k. This is information selection. The probability that we select a response j given an event represented by an unknown parameter k (i.e. the probability that both event k and event j occur) is given by p(j, k). Now, we may expand this probability by taking into account the data d that are somehow the interface between the source event k and our final selection event j:

p(j|k) = Σ_{d∈D} p(j|d) p(d|k),    (1)

where I have made use of a discrete case for the sake of simplicity and we are summing over all the data d pertaining to the set D. By inserting the last equation into the known classical equation for the total probability we obtain

p(j, k) = Σ_{d∈D} p(j|d) p(d|k) p(k) = Σ_{d∈D} p(j|d) p(d, k).    (2)

Eq. (2) can be considered as a generalization of the well-known formula

p(j) = Σ_{d∈D} p(j|d) p(d),    (3)

and it reduces to the latter when p(k) = 1, i.e. when the event k occurs with certainty.
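As a purely illustrative numerical check (mine, not part of the original argument), the following Python sketch fills the conditional tables p(d|k) and p(j|d) with arbitrary stochastic entries and verifies Eqs. (1)-(3):

```python
import numpy as np

rng = np.random.default_rng(0)
n_k, n_d, n_j = 3, 4, 2                      # sizes of the parameter, data, and response sets

# Arbitrary column-stochastic conditional tables: each column sums to 1.
p_d_given_k = rng.random((n_d, n_k)); p_d_given_k /= p_d_given_k.sum(axis=0)
p_j_given_d = rng.random((n_j, n_d)); p_j_given_d /= p_j_given_d.sum(axis=0)
p_k = rng.random(n_k); p_k /= p_k.sum()      # prior over the unknown parameter k

# Eq. (1): p(j|k) = sum_d p(j|d) p(d|k) is just a matrix product over d.
p_j_given_k = p_j_given_d @ p_d_given_k

# Eq. (2): p(j,k) = sum_d p(j|d) p(d|k) p(k); Eq. (3) is the marginal over k.
p_jk = p_j_given_k * p_k
p_d = p_d_given_k @ p_k
print(np.allclose(p_jk.sum(axis=1), p_j_given_d @ p_d))   # p(j) both ways: True
print(np.isclose(p_jk.sum(), 1.0))                        # joint is normalized: True
```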
It is important to stress that the two conditional probabilities p(j|d) and p(d|k) are quite different. This can be seen formally by the fact that in Eq. (1) we sum over the data d, which represent the conditioned result relative to k on the one hand and the condition for information extrapolation on the other. This means that the probability p(d|k) represents how faithful our data are relative to k, that is, how reliable our apparatus (or our sensory system) is. Instead, the probability p(j|d) represents our ability to select a single j able to interpret the parameter in the best way.
Probability (2) is Bayesian since, by making use of the result

p(k|j) = p(j, k) / p(j),    (4)

it is also true that

p(k|j) = p(k) [Σ_{d∈D} p(j|d) p(d|k)] / p(j) = p(k) p(j|k) / p(j).    (5)

In other words, we can invert the kind of question we pose and try to guess the unknown parameter k conditionally on having detected j. Now I shall show that the quantum case is not different.
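Continuing the illustrative sketch above (same arrays), the inversion (4)-(5) is a one-liner:

```python
# Continuing the previous sketch: Bayes inversion, Eqs. (4)-(5).
p_j = p_jk.sum(axis=1)                               # marginal p(j)
p_k_given_j = (p_j_given_k * p_k) / p_j[:, None]     # p(k|j) = p(k) p(j|k) / p(j)

print(np.allclose(p_k_given_j.sum(axis=1), 1.0))     # each p(.|j) is normalized: True
```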

The Quantum Case
I assume that the initial state of the apparatus is some ready-state |A_0⟩, while the state of the object system is the superposition

|ψ⟩ = Σ_k c_k |k⟩,    (6)

where the |k⟩ are eigenstates of the measured observable.

It is also in agreement with quantum mechanics to assume that the entanglement between apparatus and object system occurring during the pre-measurement step (at time t) is the result of a unitary transformation (I remind the reader that only the final step of selection or detection is not unitary):

|Ψ(t)⟩_SA = Û_t (|ψ⟩ ⊗ |A_0⟩) = Σ_k c_k |k⟩|a_k⟩,    (7)

where the |a_k⟩ are the pointer states of the apparatus. We may describe the initial state of the object system and apparatus in terms of the (initially factorized) density matrices

ρ̂_S = |ψ⟩⟨ψ|,  ρ̂_A = |A_0⟩⟨A_0|,    (8)

so that in matrix terms the pre-measurement stage can be described as

ρ̂_SA(t) = Û_t (ρ̂_S ⊗ ρ̂_A) Û_t†.    (9)

For simplicity, in the case of a bidimensional system (with components m and n) we can write for the result of the pre-measurement step [4, Sec. 12.1]:

ρ̂_SA(t) = |c_m|² |m, a_m⟩⟨m, a_m| + c_m c_n* |m, a_m⟩⟨n, a_n| + c_n c_m* |n, a_n⟩⟨m, a_m| + |c_n|² |n, a_n⟩⟨n, a_n|.    (10)

It is understood that |m⟩ and |n⟩ are the eigenstates of the observable that is selected in the pre-measurement set-up. Just before the detection, the probability distribution to read the value a_m of the apparatus observable will be simply given by

℘(a_m) = ⟨a_m| Tr_S [ρ̂_SA(t)] |a_m⟩ = |c_m|².    (11)

We can prove this quite easily. Indeed, computation of the partial trace kills all of the system's states in (10):

Tr_S [ρ̂_SA(t)] = |c_m|² |a_m⟩⟨a_m| + |c_n|² |a_n⟩⟨a_n|.    (12)

The expression

Ê(a_m) = ⟨ψ| Û_t† |a_m⟩⟨a_m| Û_t |ψ⟩    (13)

is a projection-like operator that selects the outcome |a_m⟩ but does not satisfy the requirement of orthogonality that is typical of projectors. Indeed, for any couple of projectors pertaining to the same set (expressing kets of the same orthonormal set) we have

P̂_j P̂_k = δ_jk P̂_j.    (14)

The operator Ê(a_m) is called an effect and is the quintessence of the positive operator-valued measure (POVM). Effects allow us to write the above probability (11) as

℘(a_m) = Tr_A [ρ̂_A Ê(a_m)],    (15)

which is formally similar to the traditional expression

℘(a_m) = Tr (ρ̂ P̂_m).    (16)

Note that the probability ℘(a_m) is a conditional one. Starting from the definition (13), we compute now the effect explicitly:

Ê(a_m) = θ̂†(a_m) θ̂(a_m),  with  θ̂(a_m) = ⟨a_m| Û_t |ψ⟩  and  θ̂†(a_m) = ⟨ψ| Û_t† |a_m⟩.    (17)

The expressions ⟨a_m|Û_t|ψ⟩ and ⟨ψ|Û_t†|a_m⟩ are not probability amplitudes, because the involved unitary operator represents the coupling of the apparatus and the system, whereas the above ket and bra belong to the system's and the apparatus' Hilbert spaces only, respectively. Therefore they are operators. In particular, the amplitude operator θ̂(a_m) describes all steps of the measurement of a given observable:
• preparation of the initial state of the system (input |ψ⟩),
• unitary evolution (coupling or premeasurement) of the apparatus together with the object system (the bridge provided by Û_t), and
• detection by the apparatus (output ⟨a_m|).
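To make the formalism concrete, here is a small numerical sketch of mine (the CNOT-like coupling unitary and all state choices are my own illustration, not the paper's): it computes ℘(a_m) from the partial trace as in (11) and (12), and checks that the effect of (17) reproduces it via (15) while failing the projector orthogonality (14).

```python
import numpy as np

# Toy premeasurement: system basis {|m>, |n>}, apparatus pointer basis {|a_m>, |a_n>},
# apparatus ready state |A_0> = |a_n>.
c_m, c_n = 0.6, 0.8                          # |c_m|^2 + |c_n|^2 = 1
ket_m, ket_n = np.eye(2)                     # |m>, |n>
a_m, a_n = np.eye(2)                         # |a_m>, |a_n>
A0 = a_n                                     # ready state

# Coupling unitary: |m, A_0> -> |m, a_m>, |n, A_0> -> |n, a_n> (a CNOT-like choice).
X = np.array([[0., 1.], [1., 0.]])
U = np.kron(np.outer(ket_m, ket_m), X) + np.kron(np.outer(ket_n, ket_n), np.eye(2))

psi = c_m * ket_m + c_n * ket_n              # Eq. (6)
Psi_t = U @ np.kron(psi, A0)                 # Eq. (7)
rho_SA = np.outer(Psi_t, Psi_t.conj())       # Eqs. (9)-(10)

rho_A = rho_SA.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)   # Tr_S, Eq. (12)
print(np.isclose(a_m @ rho_A @ a_m, c_m**2))                 # Eq. (11): True

# Amplitude operators theta(a_x) = <a_x|U|psi> and effects E(a_x), Eqs. (13) and (17).
U4 = U.reshape(2, 2, 2, 2)                   # indices: s_out, a_out, s_in, a_in
theta = lambda a: np.einsum('b,abcd,c->ad', a.conj(), U4, psi)
E_m = theta(a_m).conj().T @ theta(a_m)
E_n = theta(a_n).conj().T @ theta(a_n)

print(np.isclose(A0 @ E_m @ A0, c_m**2))     # Eq. (15) reproduces Eq. (11): True
print(np.allclose(E_m @ E_n, 0))             # False: effects are not orthogonal, cf. Eq. (14)
print(np.allclose(E_m + E_n, np.eye(2)))     # True: the effects resolve the identity
```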
This fully corresponds to the classical case previously examined. In both cases, we pass information from the past to the future through some current connection.

Experimental Set-up
I assume that both the system S and the apparatus A are represented by two-level systems. In particular, I choose the operator σ̂_z^S for the observable of S and σ̂_x^A for the observable of the apparatus [1, Sec. 9.1]. The system S is initially prepared in a superposition of the two eigenstates of σ̂_z^S, which are the spin-up and the spin-down states, respectively, given by

|↑⟩_S = (1, 0)^T,  |↓⟩_S = (0, 1)^T.    (18)

The apparatus is initially in the z spin-down state, so that if, after the interaction, the system is in |↓⟩_S, the state of A remains unchanged. Otherwise, it will become |↑⟩_A. The interaction Hamiltonian can then be explicitly written as

Ĥ_SA = ε (1 + σ̂_z^S) σ̂_x^A,    (19)

where ε is some coupling function.
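As a quick numerical check of (19) (a sketch of mine, with ε set to an arbitrary value), the Hamiltonian indeed annihilates any product state whose system part is |↓⟩_S, so the apparatus pointer is driven only by the |↑⟩_S component:

```python
import numpy as np

sz = np.diag([1., -1.])                      # sigma_z in the {|up>, |down>} basis
sx = np.array([[0., 1.], [1., 0.]])          # sigma_x
eps = 0.7                                    # arbitrary stand-in for the coupling function

H_SA = eps * np.kron(np.eye(2) + sz, sx)     # Eq. (19)

down = np.array([0., 1.])
any_A = np.array([1., 1.]) / np.sqrt(2)      # arbitrary apparatus state
print(np.allclose(H_SA @ np.kron(down, any_A), 0))   # True: |down>_S is left alone
```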
The first step we need to make in order to calculate the action of the unitary operator Û_t on the initial state |Ψ(0)⟩_SA is to diagonalize the matrix Ĥ_SA. The observable σ̂_z^S is already diagonal with respect to the basis states (18). In fact, we have

σ̂_z^S |↑⟩_S = |↑⟩_S,  σ̂_z^S |↓⟩_S = −|↓⟩_S.    (20)

On the other hand, the two eigenvalues of σ̂_x^A are ±1 and the eigenkets are given by

σ̂_x^A |±⟩_A = ±|±⟩_A,    (21)

respectively. In terms of the z spin-up and spin-down states, these eigenkets are

|±⟩_A = (1/√2) (|↑⟩_A ± |↓⟩_A).    (22)

Now, in order to find the time evolution of the quantum state of the compound system, we only need to write its initial state in terms of the eigenkets (18) and (22), where the initial state is factorized and the unitary operator is written according to Stone's theorem:

|Ψ(0)⟩_SA = |ψ(0)⟩_S |φ(0)⟩_A  and  Û_t = e^{−(ı/ℏ) t Ĥ_SA} = e^{−(ı/ℏ) t ε (1+σ̂_z^S) σ̂_x^A}.    (23)

The action of the Hamiltonian onto its eigenkets simply returns the corresponding eigenvalues. Therefore, since we have expressed the initial state in terms of the eigenkets of Ĥ_SA, we obtain

|Ψ(t)⟩_SA = Û_t |Ψ(0)⟩_SA = e^{−(ı/ℏ) t ε (1+σ̂_z^S) σ̂_x^A} (c_↑ |↑⟩_S + c_↓ |↓⟩_S) |↓⟩_A.    (24)
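Finally, a sketch of the time evolution itself (again my illustration; it sets ℏ = 1 and picks t so that 2εt = π/2, the duration that completes the pointer flip) evolves the initial state with the unitary of (23) and reproduces the entangled form of (24):

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1., -1.]); sx = np.array([[0., 1.], [1., 0.]])
eps, hbar = 0.7, 1.0
t = np.pi * hbar / (4 * eps)                 # 2*eps*t/hbar = pi/2: full pointer flip

H_SA = eps * np.kron(np.eye(2) + sz, sx)     # Eq. (19)
U_t = expm(-1j * t * H_SA / hbar)            # Eq. (23), Stone's theorem

c_up, c_dn = 0.6, 0.8
up, dn = np.eye(2)
Psi_0 = np.kron(c_up * up + c_dn * dn, dn)   # (c_up|up>_S + c_dn|down>_S)|down>_A
Psi_t = U_t @ Psi_0                          # Eq. (24)

# The |down>_S component is untouched; the |up>_S component flips the pointer
# (acquiring a phase -i from the sigma_x rotation).
expected = -1j * c_up * np.kron(up, up) + c_dn * np.kron(dn, dn)
print(np.allclose(Psi_t, expected))          # True
```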