Markov chain
Markov chains are named after [[Andrei Andreevich Markov|A.A. Markov]], who first produced these processes in 1906. The theory was extended to countably infinite state spaces by [[Andrey Nikolaevich Kolmogorov|Kolmogorov]] in 1936.
Markov chains are related to [[Brownian motion]] and the [[ergodic hypothesis]], two important topics in physics in the early [[twentieth century]],
but Markov appears to have been motivated chiefly by a mathematical question: extending the law of large numbers to dependent variables.
== Properties of Markov chains ==
which is called the process's ''transition probability''.
It is sometimes called the "one-step" transition probability.
The transition probability for two, three, or more steps is derived from the one-step ''transition probability'' and the Markov property:
This formula generalizes to any future time ''n''+''k'' by multiplying the transition probabilities and integrating over the ''k'' intermediate times.
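For a chain with finitely many states, the multi-step transition probabilities above reduce to matrix powers of the one-step transition probabilities. A minimal sketch, using a made-up two-state column-stochastic matrix (the values are purely illustrative):

```python
import numpy as np

# Hypothetical two-state chain: T[i, j] = P(X_{n+1} = i | X_n = j),
# so each column sums to 1 (column-stochastic convention).
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Two-step transition probabilities: sum over the intermediate state,
# P(X_{n+2} = i | X_n = j) = sum_m T[i, m] * T[m, j] -- a matrix product.
T2 = T @ T

# k-step transition probabilities are the k'th matrix power.
T3 = np.linalg.matrix_power(T, 3)
print(T2)
```

The matrix product carries out exactly the "multiply and sum over intermediate states" step of the formula above; each power remains column-stochastic.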
The [[marginal distribution]] ''P''(''X''<sub>''n''</sub>) is the distribution over states at time ''n''.
The initial distribution is ''P''(''X''<sub>0</sub>).
The evolution of the process through one time step is described by
: <math> P(X_{n+1}) = \int P(X_{n+1}|X_n)\,P(X_n)\,dX_n </math>
There may exist one or more distributions π such that
:<math> \pi(X) = \int P(X|Y)\,\pi(Y)\,dY</math>
where ''Y'' is simply a dummy variable of integration.
Such a distribution π is called a ''stationary distribution'' or ''steady-state distribution''.
A stationary distribution is an eigenfunction of the conditional distribution function, associated with the [[eigenvalue]] 1.
Whether or not there is a stationary distribution,
and whether or not it is unique if it does exist,
are determined by certain properties of the process.
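In the finite-state case the evolution equation becomes a matrix-vector product, and a stationary distribution is a fixed point of it. A sketch with the same kind of hypothetical column-stochastic matrix as above:

```python
import numpy as np

# Hypothetical column-stochastic matrix: T[i, j] = P(X_{n+1} = i | X_n = j).
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# Discrete analogue of pi(X) = integral of P(X|Y) pi(Y) dY:
# repeatedly apply pi <- T @ pi from any initial distribution.
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = T @ pi

print(pi)  # a fixed point: T @ pi == pi
```

For this particular matrix the iteration converges to (5/6, 1/6); iterating the one-step evolution is the simplest way to find a stationary distribution numerically, though it presumes the chain actually converges.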
''Irreducible'' means that every state is reachable from every other state.
''Aperiodic'' means that the chain does not cycle among states with a fixed period (the possible return times to a state have greatest common divisor 1).
''Recurrent'' means that the chain returns to each state with probability 1.
Sometimes the terms ''indecomposable'', ''acyclic'', and ''persistent'' are used as synonyms for "irreducible", "aperiodic", and "recurrent", respectively.
For an ergodic chain, the time average of a function ''f'' of the samples converges to the expectation of ''f'' under the stationary distribution π. In particular,
this holds for ''f'' equal to the identity function.
Then the average of the sample values over time equals the expected value under the stationary distribution.
Furthermore, the time average of the occupancy of a set ''A'' converges to its stationary probability:
:<math> \lim_{n\rightarrow\infty}\; \frac{1}{n} \sum_{k=0}^{n-1} \chi_A(X_k) = \int_A \pi(X)\,dX = \mu_{\pi}(A) </math>
where μ<sub>π</sub> is the measure induced by π.
This makes it possible to approximate the stationary distribution by a [[histogram]] or other density estimate of a sequence of samples.
If the state space is finite, the transition probability distribution can be represented as a matrix,
called the ''transition matrix'',
with the (''i'', ''j'')'th element equal to
:<math>P(X_{n+1}=i\mid X_n=j)</math>
A stationary distribution is then an eigenvector of the transition matrix, associated with the [[eigenvalue]] 1.
If the transition matrix ''P'' is positive recurrent, irreducible, and aperiodic,
then ''P''<sup>''k''</sup> converges elementwise to a matrix in which every column equals the unique stationary distribution.
A transition matrix which is positive (that is, every element of the matrix is positive) is irreducible, aperiodic, and positive recurrent, so this convergence holds for every positive transition matrix.
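Both facts can be checked numerically. A sketch, again with a hypothetical positive (hence irreducible and aperiodic) column-stochastic matrix:

```python
import numpy as np

# Hypothetical positive column-stochastic matrix:
# T[i, j] = P(X_{n+1} = i | X_n = j).
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# The stationary distribution is the eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(T)
k = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, k])
pi = pi / pi.sum()           # normalize to a probability vector

# T^k converges elementwise to a matrix whose columns all equal pi.
Tk = np.linalg.matrix_power(T, 50)
print(pi)
print(Tk)
```

With the column-stochastic convention used here, π is a right eigenvector (''T''π = π); under the row-stochastic convention it would instead be a left eigenvector.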
== Scientific applications ==
Markov chains are used to model various processes in [[queueing theory]] and [[statistics]], and can also be used as a signal model in [[entropy coding]] techniques such as [[arithmetic coding]]. Markov chains also have many biological applications, particularly [[population process]]es, which are useful in modelling processes that are (at least) analogous to biological populations.
Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in various pieces of recreational "parody generator" software.
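A minimal first-order, word-level sketch of such a generator (the function names and the sample text here are invented for illustration): the "state" is the current word, and the empirical transition distribution is just the list of words observed to follow it.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the sample text.

    Repeated followers appear repeatedly, so random.choice over the list
    samples from the empirical transition distribution."""
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, n, seed=0):
    """Walk the chain for up to n words, starting from a given word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

sample = "the cat sat on the mat and the cat ran"
chain = build_chain(sample.split())
print(generate(chain, "the", 8))
```

Every adjacent pair in the output occurs somewhere in the sample, which is why the text looks locally plausible while carrying no global meaning; real generators typically use pairs or triples of words as states for more convincing output.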
== See also ==
== References ==
* A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". ''Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete'', 2-ya seriya, tom 15, pp
* A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. ''Dynamic Probabilistic Systems, volume 1: Markov Chains''. John Wiley and Sons, 1971.
* Leo Breiman. ''Probability''. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. ''(See Chapter 7.)''
* J.L. Doob. ''Stochastic Processes''. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.
== External links ==
* [http://crypto.mat.sbg.ac.at/~ste/diss/node6.html Markov Chains]
* [http://www.cs.bell-labs.com/cm/cs/pearls/sec153.html Generating Text] ''(About generating random text using a Markov chain.)''
* [http://www.mathworks.com/company/newsletters/news_notes/clevescorner/oct02_cleve.html The World's Largest Matrix Computation] ''(Google's PageRank as the stationary distribution of a random walk through the Web.)''
* [http://www.gnu.org/software/emacs/manual/html_node/emacs_473.html Disassociated Press] in [[Emacs]] approximates a Markov process