Re: Is State Vector Reduction a 'Process'?

Arnold Neumaier wrote:
> Seratend wrote:
>
> > Arnold Neumaier wrote:
> >
> >>Seratend wrote:
> >>
> > The formal measurement rules (collapse and Born rule) are just the
> > statistical description of experiments:
> >
> > a) The collapse postulate describes two things: a property of the
> > system (outcome a of a given observable A is true) and the state of
> > the system when this property is true (stability of the property:
> > the "true remains true").
> > b) The Born rule: a state and an observable define the probability
> > law of the outcomes of that observable.
> >
> > Therefore, the probability law of the outcomes (the statistics) of an
> > experiment is completely defined by the couple (|psi>, A). However, to
> > connect the statistics to actual experiments (the mapping), we need
> > the collapse: the formal mapping between the experimental outcomes
> > and the statistics is expressed by the collapse property: the outcome
> > "a" for this experiment trial is true.
> > Note that in order to recover the probability law in the frequencies
> > of the outcomes we must have independent, identically prepared
> > systems (hence we need a "preparation" to select the systems).
>
> But this is not satisfied in many experiments analyzed by quantum
> mechanics. For example, in an ion trap, one has the continuous
> measurement of a single system, in which the observations at different
> times can by no means be considered to be observations of independent
> systems.
>
Where is the problem (logically)? (I may be missing something.) Do you
mean that the collapse postulate is not satisfied in continuous
measurements?

Please note that when you speak of a continuous measurement of a
single system over time, you mean a single measurement that extends
over time on one instance of this system (the measurement has to be
specified over time and space, or the equivalent observables, in order
to have a meaning). The "continuous measurement" in the QM formalism
helps one understand the logical meaning of "before" and "after" in
the collapse postulate: it is an "acknowledgement" of a property of a
given system, and the property may be related to time or independent
of time.
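
To fix the notation, here is a compact restatement of a) and b) above
(the standard textbook form, with P_a the projector onto the
eigenspace of outcome a of the observable A):

  b) Born rule:  p(a) = <psi|P_a|psi>
  a) Collapse (when "outcome a" is true):
       |psi>  -->  P_a|psi> / ||P_a|psi>||
     Stability ("true remains true"): an immediate repetition of the
     measurement of A on the collapsed state gives a with p(a) = 1.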

For an example of what I mean, please see "When the unitary evolution
is derived from the collapse postulate in QM" in
http://www.physicsforums.com/showthread.php?t=72181

>
> Thus your conceptual framework for interpreting quantum mechanical
> experiments is too restrictive.
>
I do not understand how it is too restrictive.
I am using the "shut up and calculate" view.
When we have just a single experiment trial in the whole universe, we
have only one interesting result: the unique set of results of this
experiment. There is no way to make predictions (this is also true for
CM). However, if this experiment belongs to a set of experiments where
we know, with P~100%, that the ion is trapped, we have a direct
prediction of what will occur in such a single experiment (the results
with P~100%).

In addition, an interesting question arises when we consider the
continuous measurement (we suppose all the interactions may be
modelled by interaction Hamiltonians, including the interactions of
the measurement apparatus):
Is it the state of the system that changes upon measurement due to the
collapse ("action" of the collapse), or is it the measured values
that change continuously in time ("acknowledgement" of the
collapse)?

>
> > A physical theory is mainly a choice of description (formally, we are
> > free to choose what we want)
>
> ..but only if we don't care about the quality of our predictions.
> If we want to have good predictions, we must choose what quantum
> mechanics tells us to choose.
>
Every physical theory is a description choice. Good prediction is
somewhat a matter of taste (i.e. a practical choice).
For example, take my "god determinist theory": the collection of all
the "measurable" properties of the universe we may ever know. I call
it determinist because, labelling all the properties (and assuming the
collection of labels and properties is a ZF set: the only
restriction), it defines an implicit function. Does it make good
predictions?
>
> > Statistical description is a very pragmatic description choice (and not
> > a mysterious physical process). With QM or with a basic coin flipping
> > experiment we always do the same thing: we label the experimental
> > trials and compute the frequencies of the outcomes:
>
> What is a true outcome in a world described by quantum mechanics?
>
I have outcomes, hence they are true; otherwise I cannot say (in the
logical sense) that I have outcomes, which would be circular reasoning
(I assume you understand that QM does not explain the outcomes, just
their probability, i.e. their occurrence). In QM theory, the outcomes
are externally given by the experimental realisation (when "we"
associate/map the experimental realisations to the formal outcomes of
the collapse postulate). In thought experiments, the outcomes are
externally given (by saying "outcome a of A" is true): they are the
properties of the considered system.

Note we have an analogous problem in classical mechanics. How can we
say the proposition "a particle is at position q at time t" is true?
The theory does not explain that; it uses it, as QM does.

However, the main difference between classical mechanics and QM comes
from the preferred basis of the outcomes. In CM we assume it is the
position, while in QM it may be any basis (position, energy, momentum).
>
> > 1) We have an implicitly defined random variable, an abstract function
> > f, which expresses the logical results of the experiment: to the trial
> > labelled e, we associate the result a (logically true): "if e then
> > result a". The function/random variable is defined by the set
> > {(e, a), for all e}, which is equivalent to a = f(e) (i.e. the
> > proposition "the result of the experiment trial labelled e is a" is
> > true).
>
> Even classical statistics is surrounded by a foundational mystery,
> causing as heated debates as in QM.
>
> Your description is by no means universally accepted. The frequentist
> approach you favor here has severe problems in that the predicted
> probabilities and the observed frequencies only match approximately.

However, we only need this to describe what we ever see. This is the
restriction chosen by the frequentist approach.
It is not a question of acceptance; it is a question of the
mathematical objects we choose (not an obligation) to describe the
"reality". The theorems are always true; we choose whether to apply
them or not.

In other words, I may choose to view the "reality" through different
mathematical concepts; this does not change the reality, just the
description of the reality. The only required property is the
conservation of logic (as I do not know what could replace the usual
logic).

> One can encounter long strings of heads although the probability
> of a head is 1/2.

And one can choose the set of these trials and define the associated
probability law, assuming the independence of the trials. He/she will
obtain a different probability law (i.e. p(heads) =/= 1/2).
Where is the problem, once we select this set of trials with its new
probability law (or, if you prefer, with this new context of the
experiment)? This is a matter of logic and of description choices.

Question: why is the computed frequency of heads/tails in a coin
flipping experiment 50/50?
Is it due to a mysterious ontic property, or is it due to the coin
flipping experiment and its description choice (by saying p = n(a)/N)?
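
To illustrate this description choice, here is a minimal sketch (the
ideal pseudo-random coin and the numbers are of course just my
illustrative assumptions, not a real coin):

  # A sketch of the frequentist description choice p = n(a)/N:
  # the computed frequency matches 1/2 only approximately, and long
  # runs of heads still occur in the selected sequence of trials.
  import random

  random.seed(0)                    # one reproducible set of labelled trials
  N = 10000                         # trials e = 1..N
  flips = "".join(random.choice("HT") for _ in range(N))

  p_heads = flips.count("H") / N    # p = n(a)/N, close to but not exactly 1/2
  longest_heads_run = max(len(run) for run in flips.split("T"))
  print(p_heads, longest_heads_run)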

I am just trying to promote the epistemic view and the relativity of
descriptions.
>
> You can read about my view of probability theory in my theoretical
> physics FAQ at
> http://www.mat.univie.ac.at/~neum/physics-faq.txt
>
I am surprised, because we seem to say almost the same thing (as far
as the mathematical results go), except maybe for your comments on
Bayesian probability.
(Note: I have implicitly selected my preferred interpretation of your
words :)).

However, I have a suggestion: you should explicitly say that the
sample space of the sequences of trials of a random variable with more
than one value (i.e. P =/= 100%) is uncountable. This property
explains most of the problems with probabilities and the
"uniqueness" (reproducibility) of each infinite sequence.
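
A quick way to see this (the standard Cantor argument, in my
notation): for a two-valued variable, the infinite sequences of trials
are the maps f: N -> {H, T}, i.e. the set {H,T}^N, whose cardinality
is 2^aleph_0 > aleph_0. Each particular infinite sequence therefore
has probability zero, which is why no individual sequence is singled
out by the probability law.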

> What I want as a basis of physics is a mathematically defined
> model of the world in which one can give unambiguous descriptions
> of all that matters in physics - physical systems, detectors, observers,
> individual observations, statistics about these observations,
> error analysis, etc. in such a way that it mirrors reality.

I understand; however, there is more than one model. And I think you
should also accept, conceptually, that the only unambiguous model
would be what I call the "god determinist theory", which is not very
interesting (absolute knowledge) and not practical.
> Just as in mathematical logic, one models the whole logical process
> in a concise mathematical framework.
>
This is a sort of compression of the "god determinist theory".
Therefore, I hope you accept a loss of information (of description) in
this process.

I think QM is a concise theory (maybe too concise). The main problem
seems to be the prediction of the preferred basis. However, if we look
at general relativity we encounter an almost analogous problem: the
preferred frame in which to describe the events.
>
> >>Although not very clearly separated in many discussions,
> >>these two processes never happen simultaneously but are context
> >>dependent, and are of course only approximations to more
> >>realistic measurement situations.
> >>
> >>For example, in a Stern-Gerlach experiment, the system (silver atom)
> >>moves from the source along the magnet towards the screen with very
> >>good accuracy in a unitary (and indeed reversible) way. But a few
> >>split moments before it hits the screen it feels its interactions,
> >>and describing it as a closed system becomes hopelessly inaccurate.
> >>Instead, since the interaction time is very short, it can be
> >>described very accurately by an instantaneous collapse.
> >>
> > Why do you say it becomes hopelessly inaccurate?
>
> Because the closed system in this setting contains >10^20 degrees of
> freedom, and we cannot model such systems accurately. We need the
> thermodynamic approximation, and with it an unavoidable inaccuracy
> in the response to the microscopic particle state.
>
You can describe the Stern-Gerlach experiment to an excellent
approximation as a closed quantum system. The thermodynamic
approximation will just define macroscopic variables compatible with
such a closed description (it will show what a simple quantum toy
model shows). The value of the macroscopic variable is a property, and
hence a collapse, for the considered system.
One remaining question is whether the thermodynamic approximation is
able to predict the preferred basis.
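
Here is the kind of simple quantum toy model I mean; a minimal sketch
(the ideal von Neumann coupling and the three orthogonal pointer
states are my illustrative assumptions, not a realistic >10^20
degrees-of-freedom screen):

  # Toy premeasurement of a spin by a pointer, as one closed system:
  # after the unitary coupling, the reduced state of the spin is
  # diagonal in the spin basis with the Born weights |a|^2 and |b|^2.
  import numpy as np

  a, b = 0.6, 0.8                    # spin state a|up> + b|down>
  spin = np.array([a, b])
  ready = np.array([1.0, 0.0, 0.0])  # pointer states |P0>, |P_up>, |P_down>
  psi0 = np.kron(spin, ready)        # initial product state, dimension 6

  # Coupling: |up>|P0> <-> |up>|P_up>, |down>|P0> <-> |down>|P_down>
  U = np.zeros((6, 6))
  for src, dst in [(0, 1), (1, 0), (2, 2), (3, 5), (4, 4), (5, 3)]:
      U[dst, src] = 1.0              # a permutation matrix, hence unitary

  psi = U @ psi0                     # unitary evolution of the closed system
  rho = np.outer(psi, psi)           # still a pure state, no collapse used

  # Partial trace over the pointer: indices (s, p, s', p'), sum over p = p'
  rho_spin = np.trace(rho.reshape(2, 3, 2, 3), axis1=1, axis2=3)
  print(np.round(rho_spin, 3))       # [[0.36 0.], [0. 0.64]]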

>
> > And how can you really
> > apply a collapse to a non-closed system? In this case, don't you
> > think the collapse result (the outcome) should be independent of the
> > partial system description versus the whole system (including the
> > universe if necessary)?
>
> Look at the corresponding classical situation. A classical particle
> encounters a classical screen (say, a thin foil through which
> the particle will most likely escape) involving a huge number
> of classical particles bound by (and interacting with the
> incident particle through) empirical forces. It ends up in some state
> that is determined only probabilistically, once you ignore the
> detailed structure of the screen. But it ends up in a _definite_
> state.

If you say it ends up in a _definite_ state you are implicitly
_defining_ a true property for this system instance, and hence a whole
collapse! Do you see what I mean? You have no choice; this is simple
logic. You cannot avoid it, otherwise you cannot assume the particle
ends up in a _definite_ state. This is the formalism of QM.

If you prefer, we can use the Hilbert space formulation of classical
statistical mechanics in order to see that the collapse is an
acknowledgement of the property and not a physical evolution, the
latter being described by the unitary evolution (of the whole set of
interactions).

The QM formalism just says that there exist many other properties that
may be true (the different bases), in contrast to classical mechanics
(where |q,p> seems to be the preferred basis: the superselection rule
of CM).
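
For instance, a sketch of what I mean (the standard Koopman-type
encoding; the notation is mine): encode a classical probability
density rho(q,p) as the Hilbert space vector psi(q,p) =
sqrt(rho(q,p)); the Liouville flow then acts unitarily on psi.
Acknowledging the true property "the system is in the phase space
region S" is the projection

  psi  -->  chi_S psi / ||chi_S psi||   (chi_S the indicator of S),

which on rho is exactly Bayesian conditioning (rho restricted to S and
renormalised). So in CM the "collapse" is an update of the
description, while the physical evolution remains the unitary one.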

> To describe it, however, without reference to the state of
> the screen, necessitates a probabilistic description and a collapse.
>
But you have a property for the screen, otherwise you cannot apply the
collapse (the "don't care" property or, if you prefer, the identity
projector).

Note: for me, P=100% is a probabilistic description (it means 100% of
the systems have the considered property).

> The quantum system is - in the consistent experiment interpretation -
> completely analogous, except that the dynamics differs in detail
> significantly from the classical dynamics.
>
I am still working on this (I need to understand the logic).

Seratend.
